Note: all of the following steps are based on CentOS 7.

Install Ansible

Ansible can be installed via yum or pip. Because kubernetes-ansible relies on password-based SSH, sshpass must also be installed:

pip install ansible
wget http://sourceforge.net/projects/sshpass/files/latest/download
tar zxvf download
cd sshpass-1.05
./configure && make && make install
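Before moving on, it may be worth a quick sanity check that both tools are on the PATH (not part of the original walkthrough; version output will vary with what was installed):

# ansible --version
# sshpass -V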

Configure kubernetes-ansible

# git clone https://github.com/eparis/kubernetes-ansible.git
# cd kubernetes-ansible
# # set the user to root in group_vars/all.yml
# cat group_vars/all.yml | grep ssh
ansible_ssh_user: root

# # Each kubernetes service gets its own IP address. These are not real IPs.
# # You need only select a range of IPs which are not in use elsewhere in your
# # environment. This must be done even if you do not use the network setup
# # provided by the ansible scripts.
# cat group_vars/all.yml | grep kube_service_addresses
kube_service_addresses: 10.254.0.0/16

# # set the root password
# echo "password" > ~/rootpassword
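Since ~/rootpassword holds the root password in plain text, restricting its permissions is a sensible extra step (not in the original walkthrough):

# chmod 600 ~/rootpassword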

Configure the IP addresses of the master, etcd, and minions:

# cat inventory
[masters]
192.168.0.7

[etcd]
192.168.0.7

[minions]
# kube_ip_addr is the Pod address pool on each minion; the netmask defaults to /24
192.168.0.3 kube_ip_addr=10.0.1.1
192.168.0.6 kube_ip_addr=10.0.2.1

Test connectivity to each machine and set up the SSH keys:

# ansible-playbook -i inventory ping.yml # this command prints some error messages, which can be ignored
# ansible-playbook -i inventory keys.yml
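If keys.yml gives you trouble, the keys can also be distributed by hand; a minimal sketch that pushes the local key to every host, reading the root password from the ~/rootpassword file created earlier:

# ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
# for host in 192.168.0.7 192.168.0.3 192.168.0.6; do sshpass -f ~/rootpassword ssh-copy-id root@$host; done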

At the moment kubernetes-ansible does not handle all of its dependencies, so a few things need to be configured manually first:

# # install iptables
# ansible all -i inventory --vault-password-file=~/rootpassword -a 'yum -y install iptables-services'
# # add the kubernetes repo for CentOS 7
# ansible all -i inventory --vault-password-file=~/rootpassword -a 'curl https://copr.fedoraproject.org/coprs/eparis/kubernetes-epel-7/repo/epel-7/eparis-kubernetes-epel-7-epel-7.repo -o /etc/yum.repos.d/eparis-kubernetes-epel-7-epel-7.repo'
# # configure ssh to prevent connection timeouts
# sed -i "s/GSSAPIAuthentication yes/GSSAPIAuthentication no/g" /etc/ssh/ssh_config
# ansible all -i inventory --vault-password-file=~/rootpassword -a 'sed -i "s/GSSAPIAuthentication yes/GSSAPIAuthentication no/g" /etc/ssh/ssh_config'
# ansible all -i inventory --vault-password-file=~/rootpassword -a 'sed -i "s/GSSAPIAuthentication yes/GSSAPIAuthentication no/g" /etc/ssh/sshd_config'
# ansible all -i inventory --vault-password-file=~/rootpassword -a 'systemctl restart sshd'
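To confirm the manual fixes landed on every host, a quick spot check in the same ad-hoc style:

# ansible all -i inventory --vault-password-file=~/rootpassword -a 'rpm -q iptables-services'
# ansible all -i inventory --vault-password-file=~/rootpassword -a 'grep GSSAPIAuthentication /etc/ssh/sshd_config'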

Configure the Docker network; in practice this creates the kbr0 bridge, assigns it an IP, and sets up routes:

# ansible-playbook -i inventory hack-network.yml  

PLAY [minions] ****************************************************************

GATHERING FACTS ***************************************************************
ok: [192.168.0.6]
ok: [192.168.0.3]

TASK: [network-hack-bridge | Create kubernetes bridge interface] **************
changed: [192.168.0.3]
changed: [192.168.0.6]

TASK: [network-hack-bridge | Configure docker to use the bridge inferface] ****
changed: [192.168.0.6]
changed: [192.168.0.3]

PLAY [minions] ****************************************************************

GATHERING FACTS ***************************************************************
ok: [192.168.0.6]
ok: [192.168.0.3]

TASK: [network-hack-routes | stat path=/etc/sysconfig/network-scripts/ifcfg-{{ ansible_default_ipv4.interface }}] ***
ok: [192.168.0.6]
ok: [192.168.0.3]

TASK: [network-hack-routes | Set up a network config file] ********************
skipping: [192.168.0.3]
skipping: [192.168.0.6]

TASK: [network-hack-routes | Set up a static routing table] *******************
changed: [192.168.0.3]
changed: [192.168.0.6]

NOTIFIED: [network-hack-routes | apply changes] *******************************
changed: [192.168.0.6]
changed: [192.168.0.3]

NOTIFIED: [network-hack-routes | upload script] *******************************
changed: [192.168.0.6]
changed: [192.168.0.3]

NOTIFIED: [network-hack-routes | run script] **********************************
changed: [192.168.0.3]
changed: [192.168.0.6]

NOTIFIED: [network-hack-routes | remove script] *******************************
changed: [192.168.0.3]
changed: [192.168.0.6]

PLAY RECAP ********************************************************************
192.168.0.3                : ok=10   changed=7    unreachable=0    failed=0
192.168.0.6                : ok=10   changed=7    unreachable=0    failed=0
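Given the inventory above, kbr0 on 192.168.0.3 should now carry 10.0.1.1/24 and on 192.168.0.6 10.0.2.1/24, with a static route to the other minion's Pod subnet. A sanity check, not part of the original walkthrough:

# ansible minions -i inventory --vault-password-file=~/rootpassword -a 'ip addr show kbr0'
# ansible minions -i inventory --vault-password-file=~/rootpassword -a 'ip route'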

Finally, install and configure Kubernetes on all nodes:

ansible-playbook -i inventory setup.yml

Once it finishes, you can see that all the kube-related services are running:

# # service status
# ansible all -i inventory -k -a 'bash -c "systemctl | grep -i kube"'
SSH password:
192.168.0.3 | success | rc=0 >>
kube-proxy.service    loaded active running   Kubernetes Kube-Proxy Server
kubelet.service       loaded active running   Kubernetes Kubelet Server

192.168.0.7 | success | rc=0 >>
kube-apiserver.service            loaded active running   Kubernetes API Server
kube-controller-manager.service   loaded active running   Kubernetes Controller Manager
kube-scheduler.service            loaded active running   Kubernetes Scheduler Plugin

192.168.0.6 | success | rc=0 >>
kube-proxy.service    loaded active running   Kubernetes Kube-Proxy Server
kubelet.service       loaded active running   Kubernetes Kubelet Server

# # listening ports
# ansible all -i inventory -k -a 'bash -c "netstat -tulnp | grep -E \"(kube)|(etcd)\""'
SSH password:
192.168.0.7 | success | rc=0 >>
tcp   0  0 192.168.0.7:7080   0.0.0.0:*   LISTEN   14486/kube-apiserve
tcp   0  0 127.0.0.1:10251    0.0.0.0:*   LISTEN   14544/kube-schedule
tcp   0  0 127.0.0.1:10252    0.0.0.0:*   LISTEN   14515/kube-controll
tcp6  0  0 :::7001            :::*        LISTEN   13986/etcd
tcp6  0  0 :::4001            :::*        LISTEN   13986/etcd
tcp6  0  0 :::8080            :::*        LISTEN   14486/kube-apiserve

192.168.0.3 | success | rc=0 >>
tcp   0  0 192.168.0.3:10250  0.0.0.0:*   LISTEN   9500/kubelet
tcp6  0  0 :::46309           :::*        LISTEN   9524/kube-proxy
tcp6  0  0 :::48500           :::*        LISTEN   9524/kube-proxy
tcp6  0  0 :::38712           :::*        LISTEN   9524/kube-proxy

192.168.0.6 | success | rc=0 >>
tcp   0  0 192.168.0.6:10250  0.0.0.0:*   LISTEN   9474/kubelet
tcp6  0  0 :::52870           :::*        LISTEN   9498/kube-proxy
tcp6  0  0 :::57961           :::*        LISTEN   9498/kube-proxy
tcp6  0  0 :::40720           :::*        LISTEN   9498/kube-proxy

Run the following commands to check that the services are working properly:

# curl -s -L http://192.168.0.7:4001/version # check etcd
etcd 0.4.6
# curl -s -L http://192.168.0.7:8080/api/v1beta1/pods | python -m json.tool # check apiserver
{
    "apiVersion": "v1beta1",
    "creationTimestamp": null,
    "items": [],
    "kind": "PodList",
    "resourceVersion": 8,
    "selfLink": "/api/v1beta1/pods"
}
# curl -s -L http://192.168.0.7:8080/api/v1beta1/minions | python -m json.tool # check apiserver
# curl -s -L http://192.168.0.7:8080/api/v1beta1/services | python -m json.tool # check apiserver
# kubectl get minions
NAME
192.168.0.3
192.168.0.6
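The apiserver also exposes a simple health endpoint; assuming this version already serves /healthz, it should return a plain ok:

# curl -s http://192.168.0.7:8080/healthz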

Deploy an Apache service

First, create a Pod:

# cat ~/apache.json
{
    "id": "fedoraapache",
    "kind": "Pod",
    "apiVersion": "v1beta1",
    "desiredState": {
        "manifest": {
            "version": "v1beta1",
            "id": "fedoraapache",
            "containers": [{
                "name": "fedoraapache",
                "image": "fedora/apache",
                "ports": [{
                    "containerPort": 80,
                    "hostPort": 80
                }]
            }]
        }
    },
    "labels": {
        "name": "fedoraapache"
    }
}
# kubectl create -f apache.json
# kubectl get pod fedoraapache
NAME           IMAGE(S)        HOST           LABELS              STATUS
fedoraapache   fedora/apache   192.168.0.6/   name=fedoraapache   Waiting
# # the image download is slow, so the Pod stays in Waiting for quite a while; once the image is pulled it starts quickly
# kubectl get pod fedoraapache
NAME           IMAGE(S)        HOST           LABELS              STATUS
fedoraapache   fedora/apache   192.168.0.6/   name=fedoraapache   Running
# # check the container status on 192.168.0.6
# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
77dd7fe1b24f fedora/apache:latest "/run-apache.sh" 31 minutes ago Up 31 minutes k8s_fedoraapache.f14c9521_fedoraapache.default.etcd_1416396375_4114a4d0
1455249f2c7d kubernetes/pause:latest "/pause" About an hour ago Up About an hour 0.0.0.0:80->80/tcp k8s_net.e9a68336_fedoraapache.default.etcd_1416396375_11274cd2
# docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
fedora/apache latest 2e11d8fd18b3 7 weeks ago 554.1 MB
kubernetes/pause latest 6c4579af347b 4 months ago 239.8 kB
# iptables-save | grep 2.2
-A DOCKER ! -i kbr0 -p tcp -m tcp --dport 80 -j DNAT --to-destination 10.0.2.2:80
-A FORWARD -d 10.0.2.2/32 ! -i kbr0 -o kbr0 -p tcp -m tcp --dport 80 -j ACCEPT
# curl localhost # the Pod started OK, and the port mapping works as well
Apache
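Because the Pod maps hostPort 80, the DNAT rule above means the same "Apache" response should also be reachable from any machine that can route to the minion, not only via localhost:

# curl http://192.168.0.6/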

Replication Controllers

Replication controllers keep the desired number of containers running, to balance load and keep the service highly available:

A replication controller combines a template for pod creation (a “cookie-cutter” if you will) and a number of desired replicas, into a single API object. The replica controller also contains a label selector that identifies the set of objects managed by the replica controller. The replica controller constantly measures the size of this set relative to the desired size, and takes action by creating or deleting pods.

# cat replica.json
{
    "id": "apacheController",
    "kind": "ReplicationController",
    "apiVersion": "v1beta1",
    "labels": {"name": "fedoraapache"},
    "desiredState": {
        "replicas": 3,
        "replicaSelector": {"name": "fedoraapache"},
        "podTemplate": {
            "desiredState": {
                "manifest": {
                    "version": "v1beta1",
                    "id": "fedoraapache",
                    "containers": [{
                        "name": "fedoraapache",
                        "image": "fedora/apache",
                        "ports": [{
                            "containerPort": 80
                        }]
                    }]
                }
            },
            "labels": {"name": "fedoraapache"}
        }
    }
}
# kubectl create -f replica.json
apacheController
# kubectl get replicationController
NAME               IMAGE(S)        SELECTOR            REPLICAS
apacheController   fedora/apache   name=fedoraapache   3
# kubectl get pod
NAME                                   IMAGE(S)        HOST           LABELS              STATUS
fedoraapache                           fedora/apache   192.168.0.6/   name=fedoraapache   Running
cf6726ae-6fed-11e4-8a06-fa163e3873e1   fedora/apache   192.168.0.3/   name=fedoraapache   Running
cf679152-6fed-11e4-8a06-fa163e3873e1   fedora/apache   192.168.0.3/   name=fedoraapache   Running

As you can see, three containers are now running.
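A quick way to watch the controller at work is to delete one of the pods and list them again; the controller should immediately create a replacement to get back to three replicas (a sketch, reusing one of the generated pod IDs from the listing above):

# kubectl delete pod cf6726ae-6fed-11e4-8a06-fa163e3873e1
# kubectl get pod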

Services

With the replication controller, several Pods are now running. But each Pod is assigned a different IP, and those IPs may change as the system runs. So how do we access the service from outside? That is what services are for.

A Kubernetes service is an abstraction which defines a logical set of pods and a policy by which to access them - sometimes called a micro-service. The goal of services is to provide a bridge for non-Kubernetes-native applications to access backends without the need to write code that is specific to Kubernetes. A service offers clients an IP and port pair which, when accessed, redirects to the appropriate backends. The set of pods targetted is determined by a label selector.

As an example, consider an image-process backend which is running with 3 live replicas. Those replicas are fungible - frontends do not care which backend they use. While the actual pods that comprise the set may change, the frontend client(s) do not need to know that. The service abstraction enables this decoupling.

Unlike pod IP addresses, which actually route to a fixed destination, service IPs are not actually answered by a single host. Instead, we use iptables (packet processing logic in Linux) to define “virtual” IP addresses which are transparently redirected as needed. We call the tuple of the service IP and the service port the portal. When clients connect to the portal, their traffic is automatically transported to an appropriate endpoint. The environment variables for services are actually populated in terms of the portal IP and port. We will be adding DNS support for services, too.

# cat service.json
{
    "id": "fedoraapache",
    "kind": "Service",
    "apiVersion": "v1beta1",
    "selector": {
        "name": "fedoraapache"
    },
    "protocol": "TCP",
    "containerPort": 80,
    "port": 8987
}
# kubectl create -f service.json
fedoraapache
# kubectl get service
NAME            LABELS                                    SELECTOR            IP           PORT
kubernetes-ro   component=apiserver,provider=kubernetes                       10.254.0.2   80
kubernetes      component=apiserver,provider=kubernetes                       10.254.0.1   443
fedoraapache                                              name=fedoraapache   10.254.0.3   8987
# # switch to a minion
# curl 10.254.0.3:8987
Apache
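The service IP is purely virtual: as the quote above explains, kube-proxy on each minion answers it through iptables NAT rules. You can inspect the rules installed for the portal (run on a minion; the exact chain names depend on the kube-proxy version):

# iptables-save -t nat | grep 10.254.0.3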

You can also give a service a public IP, provided a cloud provider has been configured.

The currently supported cloud providers include GCE, AWS, OpenStack, oVirt, and Vagrant.

For some parts of your application (e.g. your frontend) you want to expose a service on an external (publically visible) IP address. To achieve this, you can set the createExternalLoadBalancer flag on the service. This sets up a cloud provider specific load balancer (assuming that it is supported by your cloud provider) and also sets up IPTables rules on each host that map packets from the specified External IP address to the service proxy in the same manner as internal service IP addresses.

Note: OpenStack support is implemented with Rackspace's open-source github.com/rackspace/gophercloud library.
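A sketch of what such a service definition might look like, reusing the v1beta1 format of service.json above; the createExternalLoadBalancer flag comes from the quoted docs, while the id is just an illustrative name:

{
    "id": "fedoraapache-external",
    "kind": "Service",
    "apiVersion": "v1beta1",
    "selector": {
        "name": "fedoraapache"
    },
    "protocol": "TCP",
    "containerPort": 80,
    "port": 8987,
    "createExternalLoadBalancer": true
}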

Health Check

Currently, there are three types of application health checks that you can choose from:

* HTTP Health Checks - The Kubelet will call a web hook. If it returns between 200 and 399, it is considered success, failure otherwise.
* Container Exec - The Kubelet will execute a command inside your container. If it returns "ok" it will be considered a success.
* TCP Socket - The Kubelet will attempt to open a socket to your container. If it can establish a connection, the container is considered healthy, if it can't it is considered a failure.

In all cases, if the Kubelet discovers a failure, the container is restarted.

The container health checks are configured in the “LivenessProbe” section of your container config. There you can also specify an “initialDelaySeconds” that is a grace period from when the container is started to when health checks are performed, to enable your container to perform any necessary initialization.

Here is an example config for a pod with an HTTP health check:

kind: Pod
apiVersion: v1beta1
desiredState:
  manifest:
    version: v1beta1
    id: php
    containers:
      - name: nginx
        image: dockerfile/nginx
        ports:
          - containerPort: 80
        # defines the health checking
        livenessProbe:
          # turn on application health checking
          enabled: true
          type: http
          # length of time to wait for a pod to initialize
          # after pod startup, before applying health checking
          initialDelaySeconds: 30
          # an http probe
          httpGet:
            path: /_status/healthz
            port: 8080
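For comparison, a TCP-socket check in the same pod format might look like the sketch below. The type value and the tcpSocket field are extrapolated from the HTTP example above and the probe list earlier, so treat the exact field names as assumptions rather than confirmed API:

kind: Pod
apiVersion: v1beta1
desiredState:
  manifest:
    version: v1beta1
    id: redis
    containers:
      - name: redis
        image: dockerfile/redis
        ports:
          - containerPort: 6379
        livenessProbe:
          # turn on application health checking
          enabled: true
          # assumed probe type, mirroring "type: http" above
          type: tcp
          initialDelaySeconds: 30
          # assumed field name, by analogy with httpGet
          tcpSocket:
            port: 6379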
