I. Introduction to Kubernetes

       Kubernetes, usually abbreviated as K8s, is a leading distributed-architecture solution built on container technology. It is Google's open-source container cluster management system, derived from Google's internal Borg system. On top of Docker, it provides containerized applications with a complete set of features such as deployment, resource scheduling, service discovery, and dynamic scaling, making large-scale container clusters far easier to manage.

       Kubernetes is a complete distributed system platform with full cluster management capabilities: multi-level security protection and admission control, multi-tenant application support, transparent service registration and discovery, a built-in intelligent load balancer, strong fault detection and self-healing, rolling upgrades and online scaling of services, an extensible automatic resource scheduling mechanism, and fine-grained resource quota management. Kubernetes also provides a complete set of management tools covering development, deployment, testing, and operations monitoring.

II. Kubernetes Architecture and Components

III. K8s Features

       Kubernetes is a container scheduling and management system designed for production environments, with native support for the core requirements of a container cloud platform: load balancing, service discovery, high availability, rolling upgrades, and auto-scaling.
      A K8s cluster consists of distributed storage (etcd), worker nodes (Minion, now called Node), and a control node (Master). All cluster state is stored in etcd, while the Master runs the cluster's management and control components. Nodes are the hosts that actually run application containers; each Node runs a kubelet agent that manages the containers, images, and storage volumes on that node.

Kubernetes capabilities:
1. Automated container deployment and replication
2. Scale the number of containers up or down at any time
3. Organize containers into groups and provide load balancing between them
4. Easily roll out new versions of application containers (see the sketch after the next list)
5. Provide container resilience: if a container fails, replace it, and so on

Problems Kubernetes solves:
1. Scheduling: which machine should a container run on
2. Lifecycle and health: keeping containers running without errors
3. Service discovery: where a container is and how to talk to it
4. Monitoring: whether a container is running properly
5. Authentication: who is allowed to access a container
6. Container aggregation: how to combine multiple containers into one application
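
A minimal sketch of what automated deployment, scaling, and rolling upgrades look like in practice, assuming a working cluster with kubectl already pointed at it (the image names, tags, and replica counts below are purely illustrative):

# Create a deployment with two replicas (illustrative image/tag)
kubectl run nginx --image=nginx:1.10 --replicas=2 --port=80
# Scale it up or back down on demand
kubectl scale deployment nginx --replicas=4
# Roll out a new image version without downtime
kubectl set image deployment/nginx nginx=nginx:1.11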

IV. Kubernetes Environment Planning

1. Environment

[root@k8s-master ~]# cat /etc/redhat-release
CentOS Linux release 7.2. (Core)
[root@k8s-master ~]# uname -r
3.10.-.el7.x86_64
[root@k8s-master ~]# systemctl status firewalld.service
● firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
Active: inactive (dead)
[root@k8s-master ~]# getenforce
Disabled

2. Server Planning

Node and role                 Hostname       IP
Master, etcd, registry        k8s-master     10.0.0.148
Node1                         k8s-node-1     10.0.0.149
Node2                         k8s-node-2     10.0.0.150

3. Unify /etc/hosts on all hosts

echo '
10.0.0.148 k8s-master
10.0.0.148 etcd
10.0.0.148 registry
10.0.0.149 k8s-node-1
10.0.0.150 k8s-node-2' >> /etc/hosts
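
A quick way to confirm that the names resolve as planned is a simple sketch using getent, run on each machine:

for h in k8s-master etcd registry k8s-node-1 k8s-node-2; do
    getent hosts $h    # each name should map to the IP planned above
done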

4. Kubernetes Cluster Deployment Architecture Diagram

V. Kubernetes Cluster Deployment

1. Master Installation

① Deploy etcd, which K8s depends on

yum install etcd -y

Edit the configuration file (only ETCD_DATA_DIR, ETCD_LISTEN_CLIENT_URLS, ETCD_NAME, and ETCD_ADVERTISE_CLIENT_URLS are changed; everything else stays commented out): vim /etc/etcd/etcd.conf

#[Member]
#ETCD_CORS=""
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_WAL_DIR=""
#ETCD_LISTEN_PEER_URLS="http://localhost:2380"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
#ETCD_MAX_SNAPSHOTS=""
#ETCD_MAX_WALS=""
ETCD_NAME=master
#ETCD_SNAPSHOT_COUNT=""
#ETCD_HEARTBEAT_INTERVAL=""
#ETCD_ELECTION_TIMEOUT=""
#ETCD_QUOTA_BACKEND_BYTES=""
#ETCD_MAX_REQUEST_BYTES=""
#ETCD_GRPC_KEEPALIVE_MIN_TIME="5s"
#ETCD_GRPC_KEEPALIVE_INTERVAL="2h0m0s"
#ETCD_GRPC_KEEPALIVE_TIMEOUT="20s"
#
#[Clustering]
#ETCD_INITIAL_ADVERTISE_PEER_URLS="http://localhost:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://etcd:2379,http://etcd:4001"
#ETCD_ADVERTISE_CLIENT_URLS="http://localhost:2379"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#ETCD_DISCOVERY_SRV=""
#ETCD_INITIAL_CLUSTER="default=http://localhost:2380"
#ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
#ETCD_INITIAL_CLUSTER_STATE="new"
#ETCD_STRICT_RECONFIG_CHECK="true"
#ETCD_ENABLE_V2="true"
#
#[Proxy]
#ETCD_PROXY="off"
#ETCD_PROXY_FAILURE_WAIT=""
#ETCD_PROXY_REFRESH_INTERVAL=""
#ETCD_PROXY_DIAL_TIMEOUT=""
#ETCD_PROXY_WRITE_TIMEOUT=""
#ETCD_PROXY_READ_TIMEOUT=""
#
#[Security]
#ETCD_CERT_FILE=""
#ETCD_KEY_FILE=""
#ETCD_CLIENT_CERT_AUTH="false"
#ETCD_TRUSTED_CA_FILE=""
#ETCD_AUTO_TLS="false"
#ETCD_PEER_CERT_FILE=""
#ETCD_PEER_KEY_FILE=""
#ETCD_PEER_CLIENT_CERT_AUTH="false"
#ETCD_PEER_TRUSTED_CA_FILE=""
#ETCD_PEER_AUTO_TLS="false"
#
#[Logging]
#ETCD_DEBUG="false"
#ETCD_LOG_PACKAGE_LEVELS=""
#ETCD_LOG_OUTPUT="default"
#
#[Unsafe]
#ETCD_FORCE_NEW_CLUSTER="false"
#
#[Version]
#ETCD_VERSION="false"
#ETCD_AUTO_COMPACTION_RETENTION=""
#
#[Profiling]
#ETCD_ENABLE_PPROF="false"
#ETCD_METRICS="basic"
#
#[Auth]
#ETCD_AUTH_TOKEN="simple"

Start etcd and verify the cluster status

[root@k8s-master ~]# systemctl start etcd.service
[root@k8s-master ~]# systemctl enable etcd.service
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.
[root@k8s-master ~]# etcdctl -C http://etcd:4001 cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://etcd:2379
cluster is healthy
[root@k8s-master ~]# etcdctl -C http://etcd:2379 cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://etcd:2379
cluster is healthy
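
Optionally, writing and reading back a test key confirms that etcd is actually serving requests (the key name here is arbitrary):

etcdctl set /test/hello world
etcdctl get /test/hello
etcdctl rm /test/hello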

② Install Docker

Add the yum repository

wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.ustc.edu.cn/docker-ce/linux/centos/docker-ce.repo
sed -i 's#download.docker.com#mirrors.ustc.edu.cn/docker-ce#g' /etc/yum.repos.d/docker-ce.repo

Install Docker

yum install docker-ce -y

Configure Docker so that it can pull images from the local registry. Append the --insecure-registry flag to the existing OPTIONS line (a second OPTIONS= assignment would silently override the first one):

[root@k8s-master ~]# vim /etc/sysconfig/docker
# /etc/sysconfig/docker
# Modify these options if you want to change the way the docker daemon runs
OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false --insecure-registry registry:5000'
if [ -z "${DOCKER_CERT_PATH}" ]; then
    DOCKER_CERT_PATH=/etc/docker
fi

Start Docker and enable it at boot

[root@k8s-master ~]# systemctl start docker.service
[root@k8s-master ~]# systemctl enable docker.service
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
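
As a quick sanity check that the daemon is running and picked up the registry setting, something like the following can be used (the exact docker info output varies by Docker version):

systemctl is-active docker.service
docker info | grep -i -A1 insecure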

③ Install Kubernetes

yum install kubernetes -y
# Installing kubernetes pulls in the distribution's docker package as a dependency.
# If docker-ce was installed first, the versions conflict; remove docker-ce and reinstall:
yum list installed | grep docker
yum remove -y docker-ce.x86_64
yum install kubernetes -y

④ Configure and start Kubernetes

Edit the /etc/kubernetes/apiserver configuration file

###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#
# The address on the local server to listen to.
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"
# Port minions listen on
# KUBELET_PORT="--kubelet-port=10250"
# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://etcd:2379"
# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
# default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
# Add your own!
KUBE_API_ARGS=""

Edit the /etc/kubernetes/config configuration file

###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
# kube-apiserver.service
# kube-controller-manager.service
# kube-scheduler.service
# kubelet.service
# kube-proxy.service
#
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"
# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://k8s-master:8080"

Start the K8s services on the master and enable them at boot

[root@k8s-master ~]# systemctl start kube-controller-manager.service
[root@k8s-master ~]# systemctl enable kube-controller-manager.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
[root@k8s-master ~]# systemctl start kube-scheduler.service
[root@k8s-master ~]# systemctl enable kube-scheduler.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
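
The kube-apiserver service also has to be running on the master (it is restarted again below when Flannel is set up). A minimal sketch of starting it and checking that the API answers on the insecure port configured earlier:

systemctl start kube-apiserver.service
systemctl enable kube-apiserver.service
curl http://k8s-master:8080/version
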
2. Node Deployment

① Install Kubernetes and Docker

yum install kubernetes -y

Start Docker and enable it at boot

[root@k8s-node-1 ~]# systemctl start docker.service
[root@k8s-node-1 ~]# systemctl enable docker.service
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.

② Configure and start Kubernetes

Edit the /etc/kubernetes/config configuration file

###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
# kube-apiserver.service
# kube-controller-manager.service
# kube-scheduler.service
# kubelet.service
# kube-proxy.service
#
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"
# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://k8s-master:8080"

Edit /etc/kubernetes/kubelet (shown for k8s-node-1; on k8s-node-2, set KUBELET_HOSTNAME to --hostname-override=k8s-node-2)

###
# kubernetes kubelet (minion) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"
# The port for the info server to serve on
# KUBELET_PORT="--port=10250"
# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=k8s-node-1"
# location of the api-server
KUBELET_API_SERVER="--api-servers=http://k8s-master:8080"
# pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
# Add your own!
KUBELET_ARGS=""

Start the node services and enable them at boot

systemctl start kubelet.service
systemctl enable kubelet.service
systemctl start kube-proxy.service
systemctl enable kube-proxy.service

3. Check the cluster nodes and their status from the master

[root@k8s-master ~]# kubectl -s http://k8s-master:8080 get node
NAME         STATUS    AGE
k8s-node-1   Ready     1m
k8s-node-2   Ready     1m
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS    AGE
k8s-node-1   Ready     3m
k8s-node-2   Ready     3m
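
At this point a small smoke test can confirm that the cluster actually schedules workloads. A minimal sketch (image name and replica count are illustrative; the nodes must be able to pull both this image and the pod-infrastructure image configured in the kubelet, and cross-node pod networking still requires the Flannel setup in the next step):

kubectl run nginx --image=nginx --replicas=2 --port=80
kubectl get pods -o wide    # pods should be scheduled onto k8s-node-1 / k8s-node-2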

4. Create the overlay network: Flannel

① Install Flannel on both the master and the nodes

yum install flannel -y

Configure Flannel on the master and the nodes:

vi /etc/sysconfig/flanneld

# Flanneld configuration options

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://etcd:2379"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/atomic.io/network"

# Any additional options that you want to pass
#FLANNEL_OPTIONS=""
② Configure the Flannel network key in etcd

etcdctl mk /atomic.io/network/config '{ "Network": "172.16.0.0/16" }'
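
The key can be read back to confirm it was stored (run on the master, where the etcd hostname resolves via /etc/hosts):

etcdctl get /atomic.io/network/config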

Start Flannel, then restart the Docker and Kubernetes services

Master:
systemctl start flanneld.service
systemctl enable flanneld.service
service docker restart
systemctl restart kube-apiserver.service
systemctl restart kube-controller-manager.service
systemctl restart kube-scheduler.service

Node:
systemctl start flanneld.service
systemctl enable flanneld.service
service docker restart
systemctl restart kubelet.service
systemctl restart kube-proxy.service
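
After the restarts, each host should have a flannel interface, and docker0 should have been moved into a subnet of the 172.16.0.0/16 range configured above. A quick check (the interface name flannel0 assumes the default UDP backend; it may differ, e.g. flannel.1 with VXLAN):

ip addr show flannel0
ip addr show docker0
cat /run/flannel/subnet.env    # the FLANNEL_SUBNET assigned to this host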
 
