Note: all of the steps below were performed on CentOS 7.

Installing Ansible

Ansible can be installed with yum or pip. Because kubernetes-ansible logs into the nodes with passwords, sshpass also needs to be installed:

    pip install ansible
    wget http://sourceforge.net/projects/sshpass/files/latest/download
    tar zxvf download
    cd sshpass-1.05
    ./configure && make && make install
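
A quick sanity check that both tools ended up on the PATH (a minimal sketch; the version strings printed will differ on your machine):

    # Print the installed versions; any output at all means the binaries are reachable
    ansible --version
    sshpass -V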

Configuring kubernetes-ansible

    # git clone https://github.com/eparis/kubernetes-ansible.git
    # cd kubernetes-ansible
    # # Set the ssh user to root in group_vars/all.yml
    # cat group_vars/all.yml | grep ssh
    ansible_ssh_user: root
    # # Each kubernetes service gets its own IP address. These are not real IPs.
    # # You need only select a range of IPs which are not in use elsewhere in your
    # # environment. This must be done even if you do not use the network setup
    # # provided by the ansible scripts.
    # cat group_vars/all.yml | grep kube_service_addresses
    kube_service_addresses: 10.254.0.0/16
    # # Store the root password of the nodes
    # echo "password" > ~/rootpassword

Configure the IP addresses of the master, etcd, and minions:

    # cat inventory
    [masters]
    192.168.0.7
    [etcd]
    192.168.0.7
    [minions]
    # kube_ip_addr is the Pod address pool on each minion; the netmask defaults to /24
    192.168.0.3 kube_ip_addr=10.0.1.1
    192.168.0.6 kube_ip_addr=10.0.2.1

Test the connection to each machine and set up the ssh keys:

    # ansible-playbook -i inventory ping.yml   # this prints some error messages that can be ignored
    # ansible-playbook -i inventory keys.yml

kubernetes-ansible does not yet handle every dependency, so a few things need to be configured by hand first:

    # # Install iptables
    # ansible all -i inventory --vault-password-file=~/rootpassword -a 'yum -y install iptables-services'
    # # Add the kubernetes repository for CentOS 7
    # ansible all -i inventory --vault-password-file=~/rootpassword -a 'curl https://copr.fedoraproject.org/coprs/eparis/kubernetes-epel-7/repo/epel-7/eparis-kubernetes-epel-7-epel-7.repo -o /etc/yum.repos.d/eparis-kubernetes-epel-7-epel-7.repo'
    # # Tweak ssh to avoid connection timeouts
    # sed -i "s/GSSAPIAuthentication yes/GSSAPIAuthentication no/g" /etc/ssh/ssh_config
    # ansible all -i inventory --vault-password-file=~/rootpassword -a 'sed -i "s/GSSAPIAuthentication yes/GSSAPIAuthentication no/g" /etc/ssh/ssh_config'
    # ansible all -i inventory --vault-password-file=~/rootpassword -a 'sed -i "s/GSSAPIAuthentication yes/GSSAPIAuthentication no/g" /etc/ssh/sshd_config'
    # ansible all -i inventory --vault-password-file=~/rootpassword -a 'systemctl restart sshd'

Configure the Docker network. In practice this just creates the kbr0 bridge, assigns it an IP, and sets up the routes (a sketch of the equivalent manual steps follows the playbook output below):

    # ansible-playbook -i inventory hack-network.yml
    PLAY [minions] ****************************************************************
    GATHERING FACTS ***************************************************************
    ok: [192.168.0.6]
    ok: [192.168.0.3]
    TASK: [network-hack-bridge | Create kubernetes bridge interface] **************
    changed: [192.168.0.3]
    changed: [192.168.0.6]
    TASK: [network-hack-bridge | Configure docker to use the bridge inferface] ****
    changed: [192.168.0.6]
    changed: [192.168.0.3]
    PLAY [minions] ****************************************************************
    GATHERING FACTS ***************************************************************
    ok: [192.168.0.6]
    ok: [192.168.0.3]
    TASK: [network-hack-routes | stat path=/etc/sysconfig/network-scripts/ifcfg-{{ ansible_default_ipv4.interface }}] ***
    ok: [192.168.0.6]
    ok: [192.168.0.3]
    TASK: [network-hack-routes | Set up a network config file] ********************
    skipping: [192.168.0.3]
    skipping: [192.168.0.6]
    TASK: [network-hack-routes | Set up a static routing table] *******************
    changed: [192.168.0.3]
    changed: [192.168.0.6]
    NOTIFIED: [network-hack-routes | apply changes] *******************************
    changed: [192.168.0.6]
    changed: [192.168.0.3]
    NOTIFIED: [network-hack-routes | upload script] *******************************
    changed: [192.168.0.6]
    changed: [192.168.0.3]
    NOTIFIED: [network-hack-routes | run script] **********************************
    changed: [192.168.0.3]
    changed: [192.168.0.6]
    NOTIFIED: [network-hack-routes | remove script] *******************************
    changed: [192.168.0.3]
    changed: [192.168.0.6]
    PLAY RECAP ********************************************************************
    192.168.0.3 : ok=10 changed=7 unreachable=0 failed=0
    192.168.0.6 : ok=10 changed=7 unreachable=0 failed=0
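
For reference, here is a minimal sketch of what the playbook effectively does on one minion (192.168.0.3 in this inventory), assuming the bridge-utils package is installed; the exact files and task order in hack-network.yml may differ, so treat this as illustrative only:

    # Create the kbr0 bridge and give it the first address of this minion's Pod subnet
    brctl addbr kbr0
    ip addr add 10.0.1.1/24 dev kbr0
    ip link set kbr0 up
    # Point Docker at kbr0 instead of docker0 (CentOS 7 sysconfig style)
    echo 'OPTIONS="-b kbr0"' >> /etc/sysconfig/docker
    systemctl restart docker
    # Route the other minion's Pod subnet via that minion's host IP
    ip route add 10.0.2.0/24 via 192.168.0.6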

Finally, install and configure Kubernetes on all nodes:

    ansible-playbook -i inventory setup.yml

Once that finishes, you can see that all of the kube-related services are running:

    # # Service status
    # ansible all -i inventory -k -a 'bash -c "systemctl | grep -i kube"'
    SSH password:
    192.168.0.3 | success | rc=0 >>
    kube-proxy.service loaded active running Kubernetes Kube-Proxy Server
    kubelet.service loaded active running Kubernetes Kubelet Server
    192.168.0.7 | success | rc=0 >>
    kube-apiserver.service loaded active running Kubernetes API Server
    kube-controller-manager.service loaded active running Kubernetes Controller Manager
    kube-scheduler.service loaded active running Kubernetes Scheduler Plugin
    192.168.0.6 | success | rc=0 >>
    kube-proxy.service loaded active running Kubernetes Kube-Proxy Server
    kubelet.service loaded active running Kubernetes Kubelet Server
    # # Listening ports
    # ansible all -i inventory -k -a 'bash -c "netstat -tulnp | grep -E \"(kube)|(etcd)\""'
    SSH password:
    192.168.0.7 | success | rc=0 >>
    tcp 0 0 192.168.0.7:7080 0.0.0.0:* LISTEN 14486/kube-apiserve
    tcp 0 0 127.0.0.1:10251 0.0.0.0:* LISTEN 14544/kube-schedule
    tcp 0 0 127.0.0.1:10252 0.0.0.0:* LISTEN 14515/kube-controll
    tcp6 0 0 :::7001 :::* LISTEN 13986/etcd
    tcp6 0 0 :::4001 :::* LISTEN 13986/etcd
    tcp6 0 0 :::8080 :::* LISTEN 14486/kube-apiserve
    192.168.0.3 | success | rc=0 >>
    tcp 0 0 192.168.0.3:10250 0.0.0.0:* LISTEN 9500/kubelet
    tcp6 0 0 :::46309 :::* LISTEN 9524/kube-proxy
    tcp6 0 0 :::48500 :::* LISTEN 9524/kube-proxy
    tcp6 0 0 :::38712 :::* LISTEN 9524/kube-proxy
    192.168.0.6 | success | rc=0 >>
    tcp 0 0 192.168.0.6:10250 0.0.0.0:* LISTEN 9474/kubelet
    tcp6 0 0 :::52870 :::* LISTEN 9498/kube-proxy
    tcp6 0 0 :::57961 :::* LISTEN 9498/kube-proxy
    tcp6 0 0 :::40720 :::* LISTEN 9498/kube-proxy

Run the following commands to verify that the services are healthy:

    # curl -s -L http://192.168.0.7:4001/version   # check etcd
    etcd 0.4.6
    # curl -s -L http://192.168.0.7:8080/api/v1beta1/pods | python -m json.tool   # check apiserver
    {
        "apiVersion": "v1beta1",
        "creationTimestamp": null,
        "items": [],
        "kind": "PodList",
        "resourceVersion": 8,
        "selfLink": "/api/v1beta1/pods"
    }
    # curl -s -L http://192.168.0.7:8080/api/v1beta1/minions | python -m json.tool   # check apiserver
    # curl -s -L http://192.168.0.7:8080/api/v1beta1/services | python -m json.tool   # check apiserver
    # kubectl get minions
    NAME
    192.168.0.3
    192.168.0.6

Deploying an Apache service

First, create a Pod:

    # cat ~/apache.json
    {
      "id": "fedoraapache",
      "kind": "Pod",
      "apiVersion": "v1beta1",
      "desiredState": {
        "manifest": {
          "version": "v1beta1",
          "id": "fedoraapache",
          "containers": [{
            "name": "fedoraapache",
            "image": "fedora/apache",
            "ports": [{
              "containerPort": 80,
              "hostPort": 80
            }]
          }]
        }
      },
      "labels": {
        "name": "fedoraapache"
      }
    }
    # kubectl create -f apache.json
    # kubectl get pod fedoraapache
    NAME           IMAGE(S)        HOST           LABELS              STATUS
    fedoraapache   fedora/apache   192.168.0.6/   name=fedoraapache   Waiting
    # # The image download is slow, so the pod stays in Waiting for quite a while; once the image is pulled it starts quickly
    # kubectl get pod fedoraapache
    NAME           IMAGE(S)        HOST           LABELS              STATUS
    fedoraapache   fedora/apache   192.168.0.6/   name=fedoraapache   Running
    # # On 192.168.0.6, check the container status
    # docker ps
    CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
    77dd7fe1b24f fedora/apache:latest "/run-apache.sh" 31 minutes ago Up 31 minutes k8s_fedoraapache.f14c9521_fedoraapache.default.etcd_1416396375_4114a4d0
    1455249f2c7d kubernetes/pause:latest "/pause" About an hour ago Up About an hour 0.0.0.0:80->80/tcp k8s_net.e9a68336_fedoraapache.default.etcd_1416396375_11274cd2
    # docker images
    REPOSITORY         TAG      IMAGE ID       CREATED        VIRTUAL SIZE
    fedora/apache      latest   2e11d8fd18b3   7 weeks ago    554.1 MB
    kubernetes/pause   latest   6c4579af347b   4 months ago   239.8 kB
    # iptables-save | grep 2.2
    -A DOCKER ! -i kbr0 -p tcp -m tcp --dport 80 -j DNAT --to-destination 10.0.2.2:80
    -A FORWARD -d 10.0.2.2/32 ! -i kbr0 -o kbr0 -p tcp -m tcp --dport 80 -j ACCEPT
    # curl localhost   # the Pod is up and the port mapping works
    Apache
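
Because hostPort 80 publishes the container on the minion itself (that is what the DNAT rule above does), the same check should also work from any machine that can reach the node the pod was scheduled to; a quick sketch using the node IP from the inventory above:

    # curl http://192.168.0.6/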

Replication Controllers

A replication controller keeps the desired number of containers running, which balances load and keeps the service highly available:

A replication controller combines a template for pod creation (a “cookie-cutter” if you will) and a number of desired replicas, into a single API object. The replica controller also contains a label selector that identifies the set of objects managed by the replica controller. The replica controller constantly measures the size of this set relative to the desired size, and takes action by creating or deleting pods.

    # cat replica.json
    {
      "id": "apacheController",
      "kind": "ReplicationController",
      "apiVersion": "v1beta1",
      "labels": {"name": "fedoraapache"},
      "desiredState": {
        "replicas": 3,
        "replicaSelector": {"name": "fedoraapache"},
        "podTemplate": {
          "desiredState": {
            "manifest": {
              "version": "v1beta1",
              "id": "fedoraapache",
              "containers": [{
                "name": "fedoraapache",
                "image": "fedora/apache",
                "ports": [{
                  "containerPort": 80
                }]
              }]
            }
          },
          "labels": {"name": "fedoraapache"}
        }
      }
    }
    # kubectl create -f replica.json
    apacheController
    # kubectl get replicationController
    NAME               IMAGE(S)        SELECTOR            REPLICAS
    apacheController   fedora/apache   name=fedoraapache   3
    # kubectl get pod
    NAME                                   IMAGE(S)        HOST           LABELS              STATUS
    fedoraapache                           fedora/apache   192.168.0.6/   name=fedoraapache   Running
    cf6726ae-6fed-11e4-8a06-fa163e3873e1   fedora/apache   192.168.0.3/   name=fedoraapache   Running
    cf679152-6fed-11e4-8a06-fa163e3873e1   fedora/apache   192.168.0.3/   name=fedoraapache   Running

You can see that three containers are now running.

Services

With the replication controller there are now several Pods running, but each Pod is assigned a different IP, and those IPs can change as the system runs. So how do you reach this service from the outside? That is what a service is for.

A Kubernetes service is an abstraction which defines a logical set of pods and a policy by which to access them - sometimes called a micro-service. The goal of services is to provide a bridge for non-Kubernetes-native applications to access backends without the need to write code that is specific to Kubernetes. A service offers clients an IP and port pair which, when accessed, redirects to the appropriate backends. The set of pods targetted is determined by a label selector.

As an example, consider an image-process backend which is running with 3 live replicas. Those replicas are fungible - frontends do not care which backend they use. While the actual pods that comprise the set may change, the frontend client(s) do not need to know that. The service abstraction enables this decoupling.

Unlike pod IP addresses, which actually route to a fixed destination, service IPs are not actually answered by a single host. Instead, we use iptables (packet processing logic in Linux) to define “virtual” IP addresses which are transparently redirected as needed. We call the tuple of the service IP and the service port the portal. When clients connect to the portal, their traffic is automatically transported to an appropriate endpoint. The environment variables for services are actually populated in terms of the portal IP and port. We will be adding DNS support for services, too.

    # cat service.json
    {
      "id": "fedoraapache",
      "kind": "Service",
      "apiVersion": "v1beta1",
      "selector": {
        "name": "fedoraapache"
      },
      "protocol": "TCP",
      "containerPort": 80,
      "port": 8987
    }
    # kubectl create -f service.json
    fedoraapache
    # kubectl get service
    NAME            LABELS                                    SELECTOR            IP           PORT
    kubernetes-ro   component=apiserver,provider=kubernetes                       10.254.0.2   80
    kubernetes      component=apiserver,provider=kubernetes                       10.254.0.1   443
    fedoraapache                                              name=fedoraapache   10.254.0.3   8987
    # # Switch to a minion
    # curl 10.254.0.3:8987
    Apache

A service can also be given a public IP, provided a cloud provider has been configured.

The cloud providers supported so far include GCE, AWS, OpenStack, oVirt, Vagrant, and others.

For some parts of your application (e.g. your frontend) you want to expose a service on an external (publically visible) IP address. To achieve this, you can set the createExternalLoadBalancer flag on the service. This sets up a cloud provider specific load balancer (assuming that it is supported by your cloud provider) and also sets up IPTables rules on each host that map packets from the specified External IP address to the service proxy in the same manner as internal service IP addresses.
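
As a minimal sketch (not taken from the original walkthrough), the fedoraapache service above could request such an external IP by setting that flag; the service id used here is made up, and the exact v1beta1 field spelling should be checked against your Kubernetes version and cloud provider:

    {
      "id": "fedoraapache-external",
      "kind": "Service",
      "apiVersion": "v1beta1",
      "selector": {
        "name": "fedoraapache"
      },
      "protocol": "TCP",
      "containerPort": 80,
      "port": 8987,
      "createExternalLoadBalancer": true
    }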

Note: OpenStack support is implemented with Rackspace's open-source github.com/rackspace/gophercloud library.

Health Check

Currently, there are three types of application health checks that you can choose from:

* HTTP Health Checks - The Kubelet will call a web hook. If it returns between 200 and 399, it is considered success, failure otherwise.
* Container Exec - The Kubelet will execute a command inside your container. If it returns "ok" it will be considered a success.
* TCP Socket - The Kubelet will attempt to open a socket to your container. If it can establish a connection, the container is considered healthy, if it can't it is considered a failure.

In all cases, if the Kubelet discovers a failure, the container is restarted.

The container health checks are configured in the “LivenessProbe” section of your container config. There you can also specify an “initialDelaySeconds” that is a grace period from when the container is started to when health checks are performed, to enable your container to perform any necessary initialization.

Here is an example config for a pod with an HTTP health check:

    kind: Pod
    apiVersion: v1beta1
    desiredState:
      manifest:
        version: v1beta1
        id: php
        containers:
          - name: nginx
            image: dockerfile/nginx
            ports:
              - containerPort: 80
            # defines the health checking
            livenessProbe:
              # turn on application health checking
              enabled: true
              type: http
              # length of time to wait for a pod to initialize
              # after pod startup, before applying health checking
              initialDelaySeconds: 30
              # an http probe
              httpGet:
                path: /_status/healthz
                port: 8080
