Note: all of the following steps were performed on CentOS 7.

Installing Ansible

Ansible can be installed via yum or pip. Since kubernetes-ansible relies on password-based SSH, sshpass also needs to be installed:

    pip install ansible
    wget http://sourceforge.net/projects/sshpass/files/latest/download
    tar zxvf download
    cd sshpass-1.05
    ./configure && make && make install
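
A quick sanity check that both tools ended up on the PATH (an extra step, not part of the original instructions):

    ansible --version
    which sshpass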

Configuring kubernetes-ansible

    # git clone https://github.com/eparis/kubernetes-ansible.git
    # cd kubernetes-ansible
    # # set the SSH user to root in group_vars/all.yml
    # cat group_vars/all.yml | grep ssh
    ansible_ssh_user: root
    # # Each kubernetes service gets its own IP address. These are not real IPs.
    # # You need only select a range of IPs which are not in use elsewhere in your
    # # environment. This must be done even if you do not use the network setup
    # # provided by the ansible scripts.
    # cat group_vars/all.yml | grep kube_service_addresses
    kube_service_addresses: 10.254.0.0/16
    # # store the root password for the playbooks
    # echo "password" > ~/rootpassword

Configure the IP addresses of the master, etcd, and the minions:

    # cat inventory
    [masters]
    192.168.0.7
    [etcd]
    192.168.0.7
    [minions]
    # kube_ip_addr is the Pod address pool for each minion; the netmask defaults to /24
    192.168.0.3 kube_ip_addr=10.0.1.1
    192.168.0.6 kube_ip_addr=10.0.2.1

Test connectivity to each machine and set up the SSH keys:

    # ansible-playbook -i inventory ping.yml  # this prints some error messages that can be ignored
    # ansible-playbook -i inventory keys.yml
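
After keys.yml finishes, password-less access can be double-checked with Ansible's built-in ping module (an extra check, not one of the repository's playbooks):

    # ansible all -i inventory -m ping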

kubernetes-ansible does not yet handle every dependency, so a few things must be configured by hand first:

    # # install iptables
    # ansible all -i inventory --vault-password-file=~/rootpassword -a 'yum -y install iptables-services'
    # # add the kubernetes repository for CentOS 7
    # ansible all -i inventory --vault-password-file=~/rootpassword -a 'curl https://copr.fedoraproject.org/coprs/eparis/kubernetes-epel-7/repo/epel-7/eparis-kubernetes-epel-7-epel-7.repo -o /etc/yum.repos.d/eparis-kubernetes-epel-7-epel-7.repo'
    # # configure ssh to prevent connection timeouts
    # sed -i "s/GSSAPIAuthentication yes/GSSAPIAuthentication no/g" /etc/ssh/ssh_config
    # ansible all -i inventory --vault-password-file=~/rootpassword -a 'sed -i "s/GSSAPIAuthentication yes/GSSAPIAuthentication no/g" /etc/ssh/ssh_config'
    # ansible all -i inventory --vault-password-file=~/rootpassword -a 'sed -i "s/GSSAPIAuthentication yes/GSSAPIAuthentication no/g" /etc/ssh/sshd_config'
    # ansible all -i inventory --vault-password-file=~/rootpassword -a 'systemctl restart sshd'

Configure the Docker network, which boils down to creating the kbr0 bridge, assigning it an IP, and setting up routes (a manual sketch of these steps follows the playbook output below):

    # ansible-playbook -i inventory hack-network.yml
    PLAY [minions] ****************************************************************
    GATHERING FACTS ***************************************************************
    ok: [192.168.0.6]
    ok: [192.168.0.3]
    TASK: [network-hack-bridge | Create kubernetes bridge interface] **************
    changed: [192.168.0.3]
    changed: [192.168.0.6]
    TASK: [network-hack-bridge | Configure docker to use the bridge inferface] ****
    changed: [192.168.0.6]
    changed: [192.168.0.3]
    PLAY [minions] ****************************************************************
    GATHERING FACTS ***************************************************************
    ok: [192.168.0.6]
    ok: [192.168.0.3]
    TASK: [network-hack-routes | stat path=/etc/sysconfig/network-scripts/ifcfg-{{ ansible_default_ipv4.interface }}] ***
    ok: [192.168.0.6]
    ok: [192.168.0.3]
    TASK: [network-hack-routes | Set up a network config file] ********************
    skipping: [192.168.0.3]
    skipping: [192.168.0.6]
    TASK: [network-hack-routes | Set up a static routing table] *******************
    changed: [192.168.0.3]
    changed: [192.168.0.6]
    NOTIFIED: [network-hack-routes | apply changes] *******************************
    changed: [192.168.0.6]
    changed: [192.168.0.3]
    NOTIFIED: [network-hack-routes | upload script] *******************************
    changed: [192.168.0.6]
    changed: [192.168.0.3]
    NOTIFIED: [network-hack-routes | run script] **********************************
    changed: [192.168.0.3]
    changed: [192.168.0.6]
    NOTIFIED: [network-hack-routes | remove script] *******************************
    changed: [192.168.0.3]
    changed: [192.168.0.6]
    PLAY RECAP ********************************************************************
    192.168.0.3              : ok=10   changed=7    unreachable=0    failed=0
    192.168.0.6              : ok=10   changed=7    unreachable=0    failed=0
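
For reference, what the playbook automates can be sketched as manual commands. This is illustrative only; the addresses follow the inventory above, and the real roles template these values per host:

    # on minion 192.168.0.3, whose pod subnet is 10.0.1.0/24
    brctl addbr kbr0                              # create the bridge
    ip addr add 10.0.1.1/24 dev kbr0              # kube_ip_addr from the inventory
    ip link set dev kbr0 up
    # point docker at the bridge instead of docker0
    echo 'OPTIONS="--bridge=kbr0"' >> /etc/sysconfig/docker
    systemctl restart docker
    # static route: reach the other minion's pod subnet via its host IP
    ip route add 10.0.2.0/24 via 192.168.0.6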

Finally, install and configure Kubernetes on all nodes:

    ansible-playbook -i inventory setup.yml

Once this completes, all of the kube-related services should be running:

    # # service status
    # ansible all -i inventory -k -a 'bash -c "systemctl | grep -i kube"'
    SSH password:
    192.168.0.3 | success | rc=0 >>
    kube-proxy.service    loaded active running   Kubernetes Kube-Proxy Server
    kubelet.service       loaded active running   Kubernetes Kubelet Server
    192.168.0.7 | success | rc=0 >>
    kube-apiserver.service            loaded active running   Kubernetes API Server
    kube-controller-manager.service   loaded active running   Kubernetes Controller Manager
    kube-scheduler.service            loaded active running   Kubernetes Scheduler Plugin
    192.168.0.6 | success | rc=0 >>
    kube-proxy.service    loaded active running   Kubernetes Kube-Proxy Server
    kubelet.service       loaded active running   Kubernetes Kubelet Server
    # # listening ports
    # ansible all -i inventory -k -a 'bash -c "netstat -tulnp | grep -E \"(kube)|(etcd)\""'
    SSH password:
    192.168.0.7 | success | rc=0 >>
    tcp    0   0 192.168.0.7:7080    0.0.0.0:*   LISTEN   14486/kube-apiserve
    tcp    0   0 127.0.0.1:10251     0.0.0.0:*   LISTEN   14544/kube-schedule
    tcp    0   0 127.0.0.1:10252     0.0.0.0:*   LISTEN   14515/kube-controll
    tcp6   0   0 :::7001             :::*        LISTEN   13986/etcd
    tcp6   0   0 :::4001             :::*        LISTEN   13986/etcd
    tcp6   0   0 :::8080             :::*        LISTEN   14486/kube-apiserve
    192.168.0.3 | success | rc=0 >>
    tcp    0   0 192.168.0.3:10250   0.0.0.0:*   LISTEN   9500/kubelet
    tcp6   0   0 :::46309            :::*        LISTEN   9524/kube-proxy
    tcp6   0   0 :::48500            :::*        LISTEN   9524/kube-proxy
    tcp6   0   0 :::38712            :::*        LISTEN   9524/kube-proxy
    192.168.0.6 | success | rc=0 >>
    tcp    0   0 192.168.0.6:10250   0.0.0.0:*   LISTEN   9474/kubelet
    tcp6   0   0 :::52870            :::*        LISTEN   9498/kube-proxy
    tcp6   0   0 :::57961            :::*        LISTEN   9498/kube-proxy
    tcp6   0   0 :::40720            :::*        LISTEN   9498/kube-proxy

Run the following commands to check that the services are healthy:

    # curl -s -L http://192.168.0.7:4001/version  # check etcd
    etcd 0.4.6
    # curl -s -L http://192.168.0.7:8080/api/v1beta1/pods | python -m json.tool  # check the apiserver
    {
        "apiVersion": "v1beta1",
        "creationTimestamp": null,
        "items": [],
        "kind": "PodList",
        "resourceVersion": 8,
        "selfLink": "/api/v1beta1/pods"
    }
    # curl -s -L http://192.168.0.7:8080/api/v1beta1/minions | python -m json.tool   # check the apiserver
    # curl -s -L http://192.168.0.7:8080/api/v1beta1/services | python -m json.tool  # check the apiserver
    # kubectl get minions
    NAME
    192.168.0.3
    192.168.0.6

Deploying an Apache Service

First, create a Pod:

    # cat ~/apache.json
    {
      "id": "fedoraapache",
      "kind": "Pod",
      "apiVersion": "v1beta1",
      "desiredState": {
        "manifest": {
          "version": "v1beta1",
          "id": "fedoraapache",
          "containers": [{
            "name": "fedoraapache",
            "image": "fedora/apache",
            "ports": [{
              "containerPort": 80,
              "hostPort": 80
            }]
          }]
        }
      },
      "labels": {
        "name": "fedoraapache"
      }
    }
    # kubectl create -f apache.json
    # kubectl get pod fedoraapache
    NAME           IMAGE(S)        HOST           LABELS              STATUS
    fedoraapache   fedora/apache   192.168.0.6/   name=fedoraapache   Waiting
    # # the image download is slow, so Waiting lasts quite a while; once the image is pulled, the pod starts quickly
    # kubectl get pod fedoraapache
    NAME           IMAGE(S)        HOST           LABELS              STATUS
    fedoraapache   fedora/apache   192.168.0.6/   name=fedoraapache   Running
    # # check the container status on 192.168.0.6
    # docker ps
    CONTAINER ID   IMAGE                     COMMAND            CREATED             STATUS             PORTS                NAMES
    77dd7fe1b24f   fedora/apache:latest      "/run-apache.sh"   31 minutes ago      Up 31 minutes                           k8s_fedoraapache.f14c9521_fedoraapache.default.etcd_1416396375_4114a4d0
    1455249f2c7d   kubernetes/pause:latest   "/pause"           About an hour ago   Up About an hour   0.0.0.0:80->80/tcp   k8s_net.e9a68336_fedoraapache.default.etcd_1416396375_11274cd2
    # docker images
    REPOSITORY         TAG      IMAGE ID       CREATED        VIRTUAL SIZE
    fedora/apache      latest   2e11d8fd18b3   7 weeks ago    554.1 MB
    kubernetes/pause   latest   6c4579af347b   4 months ago   239.8 kB
    # iptables-save | grep 2.2
    -A DOCKER ! -i kbr0 -p tcp -m tcp --dport 80 -j DNAT --to-destination 10.0.2.2:80
    -A FORWARD -d 10.0.2.2/32 ! -i kbr0 -o kbr0 -p tcp -m tcp --dport 80 -j ACCEPT
    # curl localhost  # the Pod is up and the host port mapping works
    Apache
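
Since the pod publishes hostPort 80, it should also answer on the minion's own address from another machine, assuming nothing in between filters the traffic:

    # # from the master; expect the same "Apache" page
    # curl http://192.168.0.6/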

Replication Controllers

Replication controllers ensure that a given number of containers keep running, to balance load and keep the service highly available:

A replication controller combines a template for pod creation (a “cookie-cutter” if you will) and a number of desired replicas, into a single API object. The replica controller also contains a label selector that identifies the set of objects managed by the replica controller. The replica controller constantly measures the size of this set relative to the desired size, and takes action by creating or deleting pods.

    # cat replica.json
    {
      "id": "apacheController",
      "kind": "ReplicationController",
      "apiVersion": "v1beta1",
      "labels": {"name": "fedoraapache"},
      "desiredState": {
        "replicas": 3,
        "replicaSelector": {"name": "fedoraapache"},
        "podTemplate": {
          "desiredState": {
            "manifest": {
              "version": "v1beta1",
              "id": "fedoraapache",
              "containers": [{
                "name": "fedoraapache",
                "image": "fedora/apache",
                "ports": [{
                  "containerPort": 80
                }]
              }]
            }
          },
          "labels": {"name": "fedoraapache"}
        }
      }
    }
    # kubectl create -f replica.json
    apacheController
    # kubectl get replicationController
    NAME               IMAGE(S)        SELECTOR            REPLICAS
    apacheController   fedora/apache   name=fedoraapache   3
    # kubectl get pod
    NAME                                   IMAGE(S)        HOST           LABELS              STATUS
    fedoraapache                           fedora/apache   192.168.0.6/   name=fedoraapache   Running
    cf6726ae-6fed-11e4-8a06-fa163e3873e1   fedora/apache   192.168.0.3/   name=fedoraapache   Running
    cf679152-6fed-11e4-8a06-fa163e3873e1   fedora/apache   192.168.0.3/   name=fedoraapache   Running

As you can see, three containers are now running.
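
To watch the controller repair the set, delete one replica and list the pods again (a sketch; the pod ID is taken from the listing above):

    # kubectl delete pod cf6726ae-6fed-11e4-8a06-fa163e3873e1
    # kubectl get pod
    # # a moment later a fresh pod appears, restoring replicas=3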

Services

With replication controllers we now have multiple Pods running, but each Pod is assigned a different IP, and those IPs may change as the system runs. So how do you reach this service from outside? That is exactly what a service is for.

A Kubernetes service is an abstraction which defines a logical set of pods and a policy by which to access them - sometimes called a micro-service. The goal of services is to provide a bridge for non-Kubernetes-native applications to access backends without the need to write code that is specific to Kubernetes. A service offers clients an IP and port pair which, when accessed, redirects to the appropriate backends. The set of pods targeted is determined by a label selector.

As an example, consider an image-process backend which is running with 3 live replicas. Those replicas are fungible - frontends do not care which backend they use. While the actual pods that comprise the set may change, the frontend client(s) do not need to know that. The service abstraction enables this decoupling.

Unlike pod IP addresses, which actually route to a fixed destination, service IPs are not actually answered by a single host. Instead, we use iptables (packet processing logic in Linux) to define “virtual” IP addresses which are transparently redirected as needed. We call the tuple of the service IP and the service port the portal. When clients connect to the portal, their traffic is automatically transported to an appropriate endpoint. The environment variables for services are actually populated in terms of the portal IP and port. We will be adding DNS support for services, too.

    # cat service.json
    {
      "id": "fedoraapache",
      "kind": "Service",
      "apiVersion": "v1beta1",
      "selector": {
        "name": "fedoraapache"
      },
      "protocol": "TCP",
      "containerPort": 80,
      "port": 8987
    }
    # kubectl create -f service.json
    fedoraapache
    # kubectl get service
    NAME            LABELS                                    SELECTOR            IP           PORT
    kubernetes-ro   component=apiserver,provider=kubernetes                       10.254.0.2   80
    kubernetes      component=apiserver,provider=kubernetes                       10.254.0.1   443
    fedoraapache                                              name=fedoraapache   10.254.0.3   8987
    # # on one of the minions
    # curl 10.254.0.3:8987
    Apache
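
Containers started after the service exists see it through environment variables derived from the portal IP and port, as the quote above notes. The variable names below illustrate the convention and are not captured output:

    # # inside a container created after the service
    # env | grep FEDORAAPACHE_SERVICE
    FEDORAAPACHE_SERVICE_HOST=10.254.0.3
    FEDORAAPACHE_SERVICE_PORT=8987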

A service can also be given a public IP, provided a cloud provider is configured. Currently supported cloud providers include GCE, AWS, OpenStack, oVirt, and Vagrant.

For some parts of your application (e.g. your frontend) you want to expose a service on an external (publicly visible) IP address. To achieve this, you can set the createExternalLoadBalancer flag on the service. This sets up a cloud provider specific load balancer (assuming that it is supported by your cloud provider) and also sets up IPTables rules on each host that map packets from the specified External IP address to the service proxy in the same manner as internal service IP addresses.

Note: OpenStack support is implemented using Rackspace's open-source github.com/rackspace/gophercloud library.
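
With a cloud provider in place, exposing the service on a public IP would just add the createExternalLoadBalancer flag quoted above; a minimal sketch based on the service.json used earlier:

    {
      "id": "fedoraapache",
      "kind": "Service",
      "apiVersion": "v1beta1",
      "selector": {"name": "fedoraapache"},
      "protocol": "TCP",
      "containerPort": 80,
      "port": 8987,
      "createExternalLoadBalancer": true
    }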

Health Check

Currently, there are three types of application health checks that you can choose from:

* HTTP Health Checks - The Kubelet will call a web hook. If it returns between 200 and 399, it is considered success, failure otherwise.
* Container Exec - The Kubelet will execute a command inside your container. If it returns "ok" it will be considered a success.
* TCP Socket - The Kubelet will attempt to open a socket to your container. If it can establish a connection, the container is considered healthy, if it can't it is considered a failure.

In all cases, if the Kubelet discovers a failure, the container is restarted.

The container health checks are configured in the “LivenessProbe” section of your container config. There you can also specify an “initialDelaySeconds” that is a grace period from when the container is started to when health checks are performed, to enable your container to perform any necessary initialization.

Here is an example config for a pod with an HTTP health check:

kind: Pod
apiVersion: v1beta1
desiredState:
manifest:
version: v1beta1
id: php
containers:
- name: nginx
image: dockerfile/nginx
ports:
- containerPort: 80
# defines the health checking
livenessProbe:
# turn on application health checking
enabled: true
type: http
# length of time to wait for a pod to initialize
# after pod startup, before applying health checking
initialDelaySeconds: 30
# an http probe
httpGet:
path: /_status/healthz
port: 8080
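
For comparison, a container-exec probe would swap the httpGet stanza for an exec command. This is a sketch that assumes the v1beta1 schema mirrors the HTTP example above; the exact field names are an assumption:

    # replaces the livenessProbe stanza in the container config above
    livenessProbe:
      enabled: true
      type: exec
      initialDelaySeconds: 30
      exec:
        command:
          - cat
          - /tmp/healthy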
