Reference: https://www.cnblogs.com/zhenyuyaodidiao/p/6500830.html

1. Environment Introduction and Preparation

1.1 Host Operating System

  The physical machines run CentOS 7.3 64-bit:

  [root@localhost ~]# uname -a
  Linux localhost.localdomain 3.10.0-514.6.1.el7.x86_64 #1 SMP Wed Jan 18 13:06:36 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
  [root@localhost ~]# cat /etc/redhat-release
  CentOS Linux release 7.3.1611 (Core)

1.2 Host Information

  Three machines are used to run the k8s environment:

  Role                     Hostname     IP
  Master, etcd, registry   k8s-master   10.0.251.148
  Node1                    k8s-node-1   10.0.251.153
  Node2                    k8s-node-2   10.0.251.155

  Set the hostname on each of the three machines.

  On the master:

  [root@localhost ~]# hostnamectl --static set-hostname k8s-master

  On node1:

  [root@localhost ~]# hostnamectl --static set-hostname k8s-node-1

  On node2:

  [root@localhost ~]# hostnamectl --static set-hostname k8s-node-2

  Add the host entries on all three machines by running:

  echo '10.0.251.148 k8s-master
  10.0.251.148 etcd
  10.0.251.148 registry
  10.0.251.153 k8s-node-1
  10.0.251.155 k8s-node-2' >> /etc/hosts
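The echo above appends unconditionally, so running the setup twice duplicates the entries. A minimal idempotent sketch (the helper name `add_cluster_hosts` and its file argument are illustrative, not part of the original steps):

```shell
#!/bin/sh
# Append the cluster host entries only if they are not already present,
# so the setup script can be re-run safely.
add_cluster_hosts() {
    hosts_file="$1"
    while IFS= read -r entry; do
        grep -qxF "$entry" "$hosts_file" || printf '%s\n' "$entry" >> "$hosts_file"
    done <<'EOF'
10.0.251.148 k8s-master
10.0.251.148 etcd
10.0.251.148 registry
10.0.251.153 k8s-node-1
10.0.251.155 k8s-node-2
EOF
}
```

Usage would be `add_cluster_hosts /etc/hosts`; a second run adds nothing.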

1.3 Disable the Firewall on All Three Machines

  systemctl disable firewalld.service
  systemctl stop firewalld.service

2. Deploying etcd

  Kubernetes depends on etcd, so etcd must be deployed first. Here it is installed via yum:

  [root@localhost ~]# yum install etcd -y

The default configuration file installed by yum is /etc/etcd/etcd.conf. Edit it and change the uncommented ETCD_* settings shown below:

  [root@localhost ~]# vi /etc/etcd/etcd.conf

  # [member]
  ETCD_NAME=master
  ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
  #ETCD_WAL_DIR=""
  #ETCD_SNAPSHOT_COUNT="10000"
  #ETCD_HEARTBEAT_INTERVAL="100"
  #ETCD_ELECTION_TIMEOUT="1000"
  #ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"
  ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
  #ETCD_MAX_SNAPSHOTS="5"
  #ETCD_MAX_WALS="5"
  #ETCD_CORS=""
  #
  #[cluster]
  #ETCD_INITIAL_ADVERTISE_PEER_URLS="http://localhost:2380"
  # if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
  #ETCD_INITIAL_CLUSTER="default=http://localhost:2380"
  #ETCD_INITIAL_CLUSTER_STATE="new"
  #ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
  ETCD_ADVERTISE_CLIENT_URLS="http://etcd:2379,http://etcd:4001"
  #ETCD_DISCOVERY=""
  #ETCD_DISCOVERY_SRV=""
  #ETCD_DISCOVERY_FALLBACK="proxy"
  #ETCD_DISCOVERY_PROXY=""

Start etcd and verify its status:

  [root@localhost ~]# systemctl start etcd
  [root@localhost ~]# etcdctl set testdir/testkey0 0
  0
  [root@localhost ~]# etcdctl get testdir/testkey0
  0
  [root@localhost ~]# etcdctl -C http://etcd:4001 cluster-health
  member 8e9e05c52164694d is healthy: got healthy result from http://0.0.0.0:2379
  cluster is healthy
  [root@localhost ~]# etcdctl -C http://etcd:2379 cluster-health
  member 8e9e05c52164694d is healthy: got healthy result from http://0.0.0.0:2379
  cluster is healthy
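Immediately after `systemctl start etcd` the daemon may not yet answer requests, so a scripted health check can fail transiently. A small retry wrapper helps (a sketch; the function name `wait_healthy` is our own):

```shell
#!/bin/sh
# Retry a command until it succeeds or the retry budget runs out.
# Usage: wait_healthy <retries> <command...>
wait_healthy() {
    retries="$1"; shift
    i=0
    while [ "$i" -lt "$retries" ]; do
        if "$@" >/dev/null 2>&1; then
            return 0
        fi
        i=$((i + 1))
        sleep 1
    done
    return 1
}

# e.g. wait_healthy 30 etcdctl -C http://etcd:2379 cluster-health
```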

Further reading: for an etcd cluster deployment, see http://www.cnblogs.com/zhenyuyaodidiao/p/6237019.html

3. Deploying the Master

3.1 Install Docker

  [root@k8s-master ~]# yum install docker

Edit the Docker configuration file so that it is allowed to pull images from the local registry.

  [root@k8s-master ~]# vim /etc/sysconfig/docker

  # /etc/sysconfig/docker

  # Modify these options if you want to change the way the docker daemon runs.
  # Append --insecure-registry to the existing OPTIONS line; adding a second
  # OPTIONS= assignment would silently override the first, since this file is
  # sourced as a shell script.
  OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false --insecure-registry registry:5000'
  if [ -z "${DOCKER_CERT_PATH}" ]; then
      DOCKER_CERT_PATH=/etc/docker
  fi
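Hand-editing /etc/sysconfig/docker works, but the change can also be scripted for all machines. A sketch that appends the flag to the existing OPTIONS line rather than adding a second assignment (the helper name and its arguments are illustrative):

```shell
#!/bin/sh
# Append --insecure-registry <registry> to the OPTIONS= line of a
# sysconfig-style file, skipping the edit if the flag is already present.
set_insecure_registry() {
    file="$1"; reg="$2"
    grep -q -- "--insecure-registry $reg" "$file" && return 0
    sed -i "s|^OPTIONS='\(.*\)'|OPTIONS='\1 --insecure-registry $reg'|" "$file"
}

# e.g. set_insecure_registry /etc/sysconfig/docker registry:5000
```

Because of the leading grep, the helper is idempotent: re-running it leaves the file unchanged.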

Enable the service at boot and start it:

  [root@k8s-master ~]# chkconfig docker on
  [root@k8s-master ~]# service docker start

3.2 Install Kubernetes

  [root@k8s-master ~]# yum install kubernetes

3.3 Configure and Start Kubernetes

The following components run on the Kubernetes master:

    Kubernetes API Server

    Kubernetes Controller Manager

    Kubernetes Scheduler

Accordingly, change the settings shown below in the following configuration files:

3.3.1 /etc/kubernetes/apiserver

  [root@k8s-master ~]# vim /etc/kubernetes/apiserver

  ###
  # kubernetes system config
  #
  # The following values are used to configure the kube-apiserver
  #

  # The address on the local server to listen to.
  KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"

  # The port on the local server to listen on.
  KUBE_API_PORT="--port=8080"

  # Port minions listen on
  # KUBELET_PORT="--kubelet-port=10250"

  # Comma separated list of nodes in the etcd cluster
  KUBE_ETCD_SERVERS="--etcd-servers=http://etcd:2379"

  # Address range to use for services
  KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

  # default admission control policies
  #KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
  KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"

  # Add your own!
  KUBE_API_ARGS=""

3.3.2 /etc/kubernetes/config

  [root@k8s-master ~]# vim /etc/kubernetes/config

  ###
  # kubernetes system config
  #
  # The following values are used to configure various aspects of all
  # kubernetes services, including
  #
  # kube-apiserver.service
  # kube-controller-manager.service
  # kube-scheduler.service
  # kubelet.service
  # kube-proxy.service
  # logging to stderr means we get it in the systemd journal
  KUBE_LOGTOSTDERR="--logtostderr=true"

  # journal message level, 0 is debug
  KUBE_LOG_LEVEL="--v=0"

  # Should this cluster be allowed to run privileged docker containers
  KUBE_ALLOW_PRIV="--allow-privileged=false"

  # How the controller-manager, scheduler, and proxy find the apiserver
  KUBE_MASTER="--master=http://k8s-master:8080"

Start the services and enable them at boot:

  [root@k8s-master ~]# systemctl enable kube-apiserver.service
  [root@k8s-master ~]# systemctl start kube-apiserver.service
  [root@k8s-master ~]# systemctl enable kube-controller-manager.service
  [root@k8s-master ~]# systemctl start kube-controller-manager.service
  [root@k8s-master ~]# systemctl enable kube-scheduler.service
  [root@k8s-master ~]# systemctl start kube-scheduler.service
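The six commands above repeat one enable/start pattern per service, so they can be collapsed into a loop. A sketch (the `ctl` parameter exists only so the loop can be dry-run with `echo` instead of `systemctl`):

```shell
#!/bin/sh
# Enable and start each named service via the given control command.
# Usage: manage_services <ctl> <service...>
manage_services() {
    ctl="$1"; shift
    for svc in "$@"; do
        "$ctl" enable "$svc.service" || return 1
        "$ctl" start "$svc.service" || return 1
    done
}

# On the master:
#   manage_services systemctl kube-apiserver kube-controller-manager kube-scheduler
```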

4. Deploying the Nodes

4.1 Install Docker

  See section 3.1.

4.2 Install Kubernetes

  See section 3.2.

4.3 Configure and Start Kubernetes

  The following components run on each Kubernetes node:

    Kubelet

    Kubernetes Proxy

Accordingly, change the settings shown below in the following configuration files:

4.3.1 /etc/kubernetes/config

  [root@k8s-node-1 ~]# vim /etc/kubernetes/config

  ###
  # kubernetes system config
  #
  # The following values are used to configure various aspects of all
  # kubernetes services, including
  #
  # kube-apiserver.service
  # kube-controller-manager.service
  # kube-scheduler.service
  # kubelet.service
  # kube-proxy.service
  # logging to stderr means we get it in the systemd journal
  KUBE_LOGTOSTDERR="--logtostderr=true"

  # journal message level, 0 is debug
  KUBE_LOG_LEVEL="--v=0"

  # Should this cluster be allowed to run privileged docker containers
  KUBE_ALLOW_PRIV="--allow-privileged=false"

  # How the controller-manager, scheduler, and proxy find the apiserver
  KUBE_MASTER="--master=http://k8s-master:8080"

4.3.2 /etc/kubernetes/kubelet

  [root@k8s-node-1 ~]# vim /etc/kubernetes/kubelet

  ###
  # kubernetes kubelet (minion) config

  # The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
  KUBELET_ADDRESS="--address=0.0.0.0"

  # The port for the info server to serve on
  # KUBELET_PORT="--port=10250"

  # You may leave this blank to use the actual hostname
  KUBELET_HOSTNAME="--hostname-override=k8s-node-1"

  # location of the api-server
  KUBELET_API_SERVER="--api-servers=http://k8s-master:8080"

  # pod infrastructure container
  KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"

  # Add your own!
  KUBELET_ARGS=""

Start the services and enable them at boot (run on each node, not on the master):

  [root@k8s-node-1 ~]# systemctl enable kubelet.service
  [root@k8s-node-1 ~]# systemctl start kubelet.service
  [root@k8s-node-1 ~]# systemctl enable kube-proxy.service
  [root@k8s-node-1 ~]# systemctl start kube-proxy.service

4.4 Check the Status

  On the master, list the cluster's nodes and their status:

  [root@k8s-master ~]# kubectl -s http://k8s-master:8080 get node
  NAME         STATUS    AGE
  k8s-node-1   Ready     3m
  k8s-node-2   Ready     16s
  [root@k8s-master ~]# kubectl get nodes
  NAME         STATUS    AGE
  k8s-node-1   Ready     3m
  k8s-node-2   Ready     43s
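When scripting this check, the tabular kubectl output can be reduced to a count of Ready nodes with awk. A sketch (the function name is our own; it assumes the default NAME/STATUS/AGE column layout shown above):

```shell
#!/bin/sh
# Count the nodes whose STATUS column reads exactly "Ready",
# skipping the header line of `kubectl get nodes` output.
count_ready() {
    awk 'NR > 1 && $2 == "Ready" { n++ } END { print n + 0 }'
}

# e.g. kubectl get nodes | count_ready
```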

At this point a Kubernetes cluster is running, but it cannot yet work properly; continue with the remaining steps.

5. Creating the Overlay Network with Flannel

5.1 Install Flannel

  Run the following on the master and on every node:

  [root@k8s-master ~]# yum install flannel

The version installed here was 0.0.5.

5.2 Configure Flannel

  On the master and every node, edit /etc/sysconfig/flanneld and change the settings shown below:

  [root@k8s-master ~]# vi /etc/sysconfig/flanneld

  # Flanneld configuration options

  # etcd url location. Point this to the server where etcd runs
  FLANNEL_ETCD_ENDPOINTS="http://etcd:2379"

  # etcd config key. This is the configuration key that flannel queries
  # For address range assignment
  FLANNEL_ETCD_PREFIX="/atomic.io/network"

  # Any additional options that you want to pass
  #FLANNEL_OPTIONS=""

5.3 Configure the Flannel Key in etcd

  Flannel stores its configuration in etcd to keep multiple Flannel instances consistent, so the following key must be set in etcd. (The key '/atomic.io/network/config' must match the FLANNEL_ETCD_PREFIX setting in /etc/sysconfig/flanneld above; if it does not, flanneld will fail to start.)

  [root@k8s-master ~]# etcdctl mk /atomic.io/network/config '{ "Network": "10.0.0.0/16" }'
  { "Network": "10.0.0.0/16" }
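flanneld accepts whatever JSON is stored under this key, so it is worth sanity-checking the Network value before writing it. A minimal CIDR format check (a sketch; it validates the a.b.c.d/len shape only, not that each octet is ≤ 255):

```shell
#!/bin/sh
# Return success if the argument looks like an IPv4 CIDR (a.b.c.d/0-32).
valid_cidr() {
    printf '%s\n' "$1" | grep -Eq '^([0-9]{1,3}\.){3}[0-9]{1,3}/([0-9]|[12][0-9]|3[0-2])$'
}

# e.g. valid_cidr 10.0.0.0/16 && etcdctl mk /atomic.io/network/config '{ "Network": "10.0.0.0/16" }'
```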

5.4 Start the Services

  After starting Flannel, docker and the Kubernetes services must be restarted in order.

  On the master:

  systemctl enable flanneld.service
  systemctl start flanneld.service
  service docker restart
  systemctl restart kube-apiserver.service
  systemctl restart kube-controller-manager.service
  systemctl restart kube-scheduler.service

  On each node:

  systemctl enable flanneld.service
  systemctl start flanneld.service
  service docker restart
  systemctl restart kubelet.service
  systemctl restart kube-proxy.service
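The two restart sequences differ only in which Kubernetes services are involved, so they can be folded into one parameterized sketch (the `ctl` argument exists purely so the sequence can be dry-run with `echo`; pass `systemctl` on a real host):

```shell
#!/bin/sh
# Restart flannel, then docker, then the role's Kubernetes services, in order.
# Usage: restart_stack <ctl> <master|node>
restart_stack() {
    ctl="$1"; role="$2"
    "$ctl" enable flanneld.service
    "$ctl" start flanneld.service
    "$ctl" restart docker.service
    if [ "$role" = "master" ]; then
        set -- kube-apiserver kube-controller-manager kube-scheduler
    else
        set -- kubelet kube-proxy
    fi
    for svc in "$@"; do
        "$ctl" restart "$svc.service"
    done
}
```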
