Preface

There are many ways to deploy a Kubernetes HA cluster (HA here meaning high availability of the master apiserver), for example keepalived VIP failover, or load balancing with haproxy/nginx. This series uses haproxy and nginx load balancing throughout. Since Kubernetes now has a large user base, many similar solutions exist online; if anything in this article appears to plagiarize other work, please contact me.

1. Environment Preparation

1.1 Hosts

IP Address        Hostname        Role                          Notes
192.168.15.131    k8s-master01    k8s-master / etcd_cluster01
192.168.15.132    k8s-master02    k8s-master / etcd_cluster01
192.168.15.133    k8s-master03    k8s-master / etcd_cluster01
192.168.15.134    k8s-node01      k8s-node
192.168.15.135    k8s-node02      k8s-node

Note: the hosts are named this way because the etcd cluster is deployed purely to serve this Kubernetes cluster.

1.2 Software Versions

docker-ce 17.x
kubernetes 1.7.x (the server binaries downloaded below are v1.7.8)

Install docker-ce

# Remove old docker versions
yum remove docker docker-common docker-selinux docker-engine

# Install docker-ce
yum makecache
yum install -y yum-utils

# Add the docker-ce yum repo
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install docker-ce -y

# Configure a docker registry mirror
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://jek8a03u.mirror.aliyuncs.com"]
}
EOF

# Start docker
systemctl start docker
systemctl enable docker

1.3 System Configuration

# Set the hostnames (omitted)

# Update the hosts file

127.0.0.1 k8s-master01
::1 k8s-master01
192.168.15.131  k8s-master01
192.168.15.132  k8s-master02
192.168.15.133  k8s-master03
192.168.15.134  k8s-node01
192.168.15.135  k8s-node02

# Disable SELinux and the firewall

setenforce 0
systemctl stop firewalld
systemctl disable firewalld

# Install required packages

yum -y install ntpdate gcc git vim wget lrzsz

# Configure periodic time synchronization (root crontab)

*/5 * * * * /usr/sbin/ntpdate time.windows.com >/dev/null 2>&1
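
This cron entry must be present on every host. A minimal sketch of installing it into root's crontab without clobbering existing jobs (assumes the standard crontab(1) tool):

# Append the ntpdate job to root's crontab
(crontab -l 2>/dev/null | grep -v ntpdate; echo '*/5 * * * * /usr/sbin/ntpdate time.windows.com >/dev/null 2>&1') | crontab -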

1.4 Certificate Tooling

# Install the cfssl certificate tools

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo 

2. Deploy the etcd Cluster

2.1 Generate etcd Certificates

# Create a working directory

mkdir /root/tls/etcd -p
cd /root/tls/etcd

# Create the CSR and CA config files

cat <<EOF > etcd-root-ca-csr.json
{
"key": {
"algo": "rsa",
"size": 4096
},
"names": [
{
"O": "etcd",
"OU": "etcd Security",
"L": "Beijing",
"ST": "Beijing",
"C": "CN"
}
],
"CN": "etcd-root-ca"
}
EOF

cat <<EOF > etcd-gencert.json
{
"signing": {
"default": {
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
],
"expiry": "87600h"
}
}
}
EOF

cat <<EOF > etcd-csr.json
{
"key": {
"algo": "rsa",
"size": 4096
},
"names": [
{
"O": "etcd",
"OU": "etcd Security",
"L": "Beijing",
"ST": "Beijing",
"C": "CN"
}
],
"CN": "etcd",
"hosts": [
"127.0.0.1",
"localhost",
"192.168.15.131",
"192.168.15.132",
"192.168.15.133",
"192.168.15.134",
"192.168.15.135"
]
}
EOF

# Generate the certificates

cfssl gencert --initca=true etcd-root-ca-csr.json | cfssljson --bare etcd-root-ca
cfssl gencert --ca etcd-root-ca.pem --ca-key etcd-root-ca-key.pem --config etcd-gencert.json etcd-csr.json | cfssljson --bare etcd

2.2 Install etcd

yum -y install etcd
mkdir /etc/etcd/ssl
cp /root/tls/etcd/{etcd.pem,etcd-key.pem,etcd-root-ca.pem} /etc/etcd/ssl/
chmod 755 -R /etc/etcd/ssl

2.3 Create the etcd Configuration

cat <<EOF > /etc/etcd/etcd.conf
# [member]
ETCD_NAME=etcd01
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_WAL_DIR="/var/lib/etcd/wal"
ETCD_SNAPSHOT_COUNT="10000"
ETCD_HEARTBEAT_INTERVAL="100"
ETCD_ELECTION_TIMEOUT="1000"
ETCD_LISTEN_PEER_URLS="https://192.168.15.131:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.15.131:2379"
ETCD_MAX_SNAPSHOTS="5"
ETCD_MAX_WALS="5"
#ETCD_CORS=""
#
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.15.131:2380"
# if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.15.131:2380,etcd02=https://192.168.15.132:2380,etcd03=https://192.168.15.133:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.15.131:2379"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_SRV=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#ETCD_STRICT_RECONFIG_CHECK="false"
#ETCD_AUTO_COMPACTION_RETENTION="0"
#
#[proxy]
#ETCD_PROXY="off"
#ETCD_PROXY_FAILURE_WAIT="5000"
#ETCD_PROXY_REFRESH_INTERVAL="30000"
#ETCD_PROXY_DIAL_TIMEOUT="1000"
#ETCD_PROXY_WRITE_TIMEOUT="5000"
#ETCD_PROXY_READ_TIMEOUT="0"
#
[security]
ETCD_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_TRUSTED_CA_FILE="/etc/etcd/ssl/etcd-root-ca.pem"
ETCD_AUTO_TLS="true"
ETCD_PEER_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_PEER_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
ETCD_PEER_CLIENT_CERT_AUTH="true"
ETCD_PEER_TRUSTED_CA_FILE="/etc/etcd/ssl/etcd-root-ca.pem"
ETCD_PEER_AUTO_TLS="true"
#
#[logging]
#ETCD_DEBUG="false"
# examples for -log-package-levels etcdserver=WARNING,security=DEBUG
#ETCD_LOG_PACKAGE_LEVELS=""
#
#[profiling]
#ETCD_ENABLE_PPROF="false"
#ETCD_METRICS="basic"
EOF

2.4 Distribute Files to the Other Hosts and Start the Service

scp -r /etc/etcd 192.168.15.132:/etc/
scp -r /etc/etcd 192.168.15.133:/etc/

Note: on each host, ETCD_NAME and the IP addresses in this configuration file must be changed to match that node.
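
For example, on k8s-master02 the per-node fields could be rewritten like this (a sketch; only ETCD_NAME and the listen/advertise URLs change, ETCD_INITIAL_CLUSTER stays identical on every member):

# Run on k8s-master02; adjust NODE_NAME/NODE_IP for each member
NODE_NAME=etcd02
NODE_IP=192.168.15.132
sed -i \
  -e "s/^ETCD_NAME=.*/ETCD_NAME=${NODE_NAME}/" \
  -e "/^ETCD_LISTEN_/s/192.168.15.131/${NODE_IP}/" \
  -e "/^ETCD_INITIAL_ADVERTISE_PEER_URLS/s/192.168.15.131/${NODE_IP}/" \
  -e "/^ETCD_ADVERTISE_CLIENT_URLS/s/192.168.15.131/${NODE_IP}/" \
  /etc/etcd/etcd.conf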

# Start the service

systemctl daemon-reload
systemctl enable etcd
systemctl start etcd

Note: in a cluster, at least two etcd members must be started at roughly the same time, otherwise etcd reports errors while waiting for its peers. For the additional members, simply copy over the certificates, configuration file, and unit file.

# Check cluster health

export ETCDCTL_API=3
etcdctl --cacert=/etc/etcd/ssl/etcd-root-ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --endpoints=https://192.168.15.131:2379,https://192.168.15.132:2379,https://192.168.15.133:2379 endpoint health
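
Optionally list the members as well (same TLS flags; assumes the v3 API exported above):

etcdctl --cacert=/etc/etcd/ssl/etcd-root-ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --endpoints=https://192.168.15.131:2379 member list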

3. Deploy the Kubernetes Master Services

3.1 Generate Kubernetes Certificates

# Create a working directory

mkdir /root/tls/k8s
cd /root/tls/k8s/

# Create the CSR and CA config files

cat <<EOF > k8s-root-ca-csr.json
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 4096
},
"names": [
{
"C": "CN",
"ST": "BeiJing",
"L": "BeiJing",
"O": "k8s",
"OU": "System"
}
]
}
EOF

cat <<EOF > k8s-gencert.json
{
"signing": {
"default": {
"expiry": "87600h"
},
"profiles": {
"kubernetes": {
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
],
"expiry": "87600h"
}
}
}
}
EOF

cat <<EOF > kubernetes-csr.json
{
"CN": "kubernetes",
"hosts": [
"127.0.0.1",
"10.254.0.1",
"192.168.15.131",
"192.168.15.132",
"192.168.15.133",
"192.168.15.134",
"192.168.15.135",
"localhost",
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "BeiJing",
"L": "BeiJing",
"O": "k8s",
"OU": "System"
}
]
}
EOF

cat <<EOF > kube-proxy-csr.json
{
"CN": "system:kube-proxy",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "BeiJing",
"L": "BeiJing",
"O": "k8s",
"OU": "System"
}
]
}
EOF

cat <<EOF > admin-csr.json
{
"CN": "admin",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "BeiJing",
"L": "BeiJing",
"O": "system:masters",
"OU": "System"
}
]
}
EOF

# Generate the certificates

cfssl gencert --initca=true k8s-root-ca-csr.json | cfssljson --bare k8s-root-ca
for targetName in kubernetes admin kube-proxy; do
cfssl gencert --ca k8s-root-ca.pem --ca-key k8s-root-ca-key.pem --config k8s-gencert.json --profile kubernetes $targetName-csr.json | cfssljson --bare $targetName
done
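
A quick optional sanity check of any generated certificate, using the cfssl-certinfo tool installed in section 1.4:

cfssl-certinfo -cert kubernetes.pem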

3.2 Install the Kubernetes Binaries

wget https://storage.googleapis.com/kubernetes-release/release/v1.7.8/kubernetes-server-linux-amd64.tar.gz
tar zxf kubernetes-server-linux-amd64.tar.gz
cp kubernetes/server/bin/{kube-apiserver,kube-controller-manager,kube-scheduler,kubectl} /usr/local/bin/

3.3 Generate the Token and kubeconfig Files

# Generate the bootstrap token

export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF

# Generate bootstrap.kubeconfig

export KUBE_APISERVER="https://127.0.0.1:6443"

### Set cluster parameters
kubectl config set-cluster kubernetes \
--certificate-authority=k8s-root-ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=bootstrap.kubeconfig

### Set client credentials
kubectl config set-credentials kubelet-bootstrap \
--token=${BOOTSTRAP_TOKEN} \
--kubeconfig=bootstrap.kubeconfig

### Set the context
kubectl config set-context default \
--cluster=kubernetes \
--user=kubelet-bootstrap \
--kubeconfig=bootstrap.kubeconfig

### Use the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

# Generate kube-proxy.kubeconfig

### Set cluster parameters
kubectl config set-cluster kubernetes \
--certificate-authority=k8s-root-ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=kube-proxy.kubeconfig
### Set client credentials
kubectl config set-credentials kube-proxy \
--client-certificate=kube-proxy.pem \
--client-key=kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
### Set the context
kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
### Use the default context
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
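
The master configurations below reference the certificates and token.csv under /etc/kubernetes/ssl, so copy everything generated above into place and distribute it to the other masters. A minimal sketch (the target paths are taken from the configs that follow):

# Copy certificates and the bootstrap token into place on this master
mkdir -p /etc/kubernetes/ssl
cp k8s-root-ca.pem k8s-root-ca-key.pem kubernetes.pem kubernetes-key.pem \
   kube-proxy.pem kube-proxy-key.pem admin.pem admin-key.pem token.csv /etc/kubernetes/ssl/

# Distribute to the other masters
for ip in 192.168.15.132 192.168.15.133; do
  ssh $ip "mkdir -p /etc/kubernetes"
  scp -r /etc/kubernetes/ssl $ip:/etc/kubernetes/
done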

3.4 Deploy the Master Services

# Create the shared config file

The master needs four files: config, apiserver, controller-manager, and scheduler.

cat <<EOF > /etc/kubernetes/config
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
# kube-apiserver.service
# kube-controller-manager.service
# kube-scheduler.service
# kubelet.service
# kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=2"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=true"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://127.0.0.1:8080"
EOF

# Create the apiserver configuration

cat <<EOF > /etc/kubernetes/apiserver
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#

# The address on the local server to listen to.
KUBE_API_ADDRESS="--advertise-address=192.168.15.131 --insecure-bind-address=127.0.0.1 --bind-address=192.168.15.131"

# The port on the local server to listen on.
KUBE_API_PORT="--insecure-port=8080 --secure-port=6443"

# Port minions listen on
# KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=https://192.168.15.131:2379,https://192.168.15.132:2379,https://192.168.15.133:2379"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"

# Add your own!
KUBE_API_ARGS="--authorization-mode=RBAC \\
--runtime-config=rbac.authorization.k8s.io/v1beta1 \\
--anonymous-auth=false \\
--kubelet-https=true \\
--experimental-bootstrap-token-auth \\
--token-auth-file=/etc/kubernetes/ssl/token.csv \\
--service-node-port-range=30000-50000 \\
--tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem \\
--tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \\
--client-ca-file=/etc/kubernetes/ssl/k8s-root-ca.pem \\
--service-account-key-file=/etc/kubernetes/ssl/k8s-root-ca.pem \\
--etcd-quorum-read=true \\
--storage-backend=etcd3 \\
--etcd-cafile=/etc/etcd/ssl/etcd-root-ca.pem \\
--etcd-certfile=/etc/etcd/ssl/etcd.pem \\
--etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \\
--enable-swagger-ui=true \\
--apiserver-count=3 \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-path=/var/log/kube-audit/audit.log \\
--event-ttl=1h"
EOF
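
The audit log path above points into /var/log/kube-audit, which is not created by any earlier step; creating it beforehand avoids audit-log write failures (a small precaution, not part of the original steps):

mkdir -p /var/log/kube-audit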

# Create the controller-manager configuration

cat <<EOF > /etc/kubernetes/controller-manager
# The following values are used to configure the kubernetes controller-manager
# defaults from config and apiserver should be adequate
# Add your own!
KUBE_CONTROLLER_MANAGER_ARGS="--address=0.0.0.0 \\
--service-cluster-ip-range=10.254.0.0/16 \\
--cluster-name=kubernetes \\
--cluster-signing-cert-file=/etc/kubernetes/ssl/k8s-root-ca.pem \\
--cluster-signing-key-file=/etc/kubernetes/ssl/k8s-root-ca-key.pem \\
--service-account-private-key-file=/etc/kubernetes/ssl/k8s-root-ca-key.pem \\
--root-ca-file=/etc/kubernetes/ssl/k8s-root-ca.pem \\
--leader-elect=true \\
--node-monitor-grace-period=40s \\
--node-monitor-period=5s \\
--pod-eviction-timeout=5m0s"
EOF

# Create the scheduler configuration

cat <<EOF > /etc/kubernetes/scheduler
###
# kubernetes scheduler config
# default config should be adequate
# Add your own!
KUBE_SCHEDULER_ARGS="--leader-elect=true --address=0.0.0.0"
EOF

# Create the kube-apiserver systemd unit

cat <<EOF > /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
After=etcd.service

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/apiserver
User=root
ExecStart=/usr/local/bin/kube-apiserver \\
\$KUBE_LOGTOSTDERR \\
\$KUBE_LOG_LEVEL \\
\$KUBE_ETCD_SERVERS \\
\$KUBE_API_ADDRESS \\
\$KUBE_API_PORT \\
\$KUBELET_PORT \\
\$KUBE_ALLOW_PRIV \\
\$KUBE_SERVICE_ADDRESSES \\
\$KUBE_ADMISSION_CONTROL \\
\$KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

# Create the kube-controller-manager systemd unit

cat <<EOF > /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/controller-manager
User=root
ExecStart=/usr/local/bin/kube-controller-manager \\
\$KUBE_LOGTOSTDERR \\
\$KUBE_LOG_LEVEL \\
\$KUBE_MASTER \\
\$KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

# Create the kube-scheduler systemd unit

cat <<EOF > /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler Plugin
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/scheduler
User=root
ExecStart=/usr/local/bin/kube-scheduler \\
\$KUBE_LOGTOSTDERR \\
\$KUBE_LOG_LEVEL \\
\$KUBE_MASTER \\
\$KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

# Start the services

systemctl daemon-reload
systemctl restart kube-apiserver
systemctl restart kube-controller-manager
systemctl restart kube-scheduler

systemctl status kube-apiserver
systemctl status kube-controller-manager
systemctl status kube-scheduler

systemctl enable kube-apiserver
systemctl enable kube-controller-manager
systemctl enable kube-scheduler
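
Once all three services are running, a quick sanity check (kubectl defaults to the local insecure port 8080, which the apiserver config above enables):

kubectl get componentstatuses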

3.5 Deploy the Node Services

# Create the required directories

mkdir -p /etc/kubernetes/ssl
mkdir -p /var/lib/kubernetes
cp kubernetes/server/bin/{kubelet,kubectl,kube-proxy} /usr/local/bin/

# Create the shared config file

cat <<EOF > /etc/kubernetes/config
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
# kube-apiserver.service
# kube-controller-manager.service
# kube-scheduler.service
# kubelet.service
# kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=true"

# How the controller-manager, scheduler, and proxy find the apiserver
# KUBE_MASTER="--master=http://127.0.0.1:8080"
EOF

# Create the kubelet configuration

cat <<EOF > /etc/kubernetes/kubelet
###
# kubernetes kubelet (minion) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=192.168.15.134"

# The port for the info server to serve on
# KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=192.168.15.134"

# location of the api-server
# KUBELET_API_SERVER="--api-servers=http://127.0.0.1:8080"

# Add your own!
# KUBELET_ARGS="--cgroup-driver=systemd"
KUBELET_ARGS="--cgroup-driver=cgroupfs \\
--cluster-dns=10.254.0.2 \\
--resolv-conf=/etc/resolv.conf \\
--experimental-bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig \\
--kubeconfig=/etc/kubernetes/kubelet.kubeconfig \\
--require-kubeconfig \\
--cert-dir=/etc/kubernetes/ssl \\
--cluster-domain=cluster.local. \\
--hairpin-mode promiscuous-bridge \\
--serialize-image-pulls=false \\
--pod-infra-container-image=gcr.io/google_containers/pause-amd64:3.0"
EOF

# Create the kube-proxy configuration

cat <<EOF > /etc/kubernetes/proxy
# kubernetes proxy config
# default config should be adequate
# Add your own!
KUBE_PROXY_ARGS="--bind-address=192.168.15.134 \\
--hostname-override=k8s-node01 \\
--kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig \\
--cluster-cidr=10.254.0.0/16"
EOF

Note: none of the config files above (including kubelet and proxy) set the API Server address. Because kubelet and kube-proxy are started with the --require-kubeconfig option, they read the API Server address from their *.kubeconfig files and ignore any address set in these config files. The bootstrap and kube-proxy kubeconfigs generated in section 3.3 therefore have to be present on every node, as sketched below.
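
A minimal sketch of distributing the kubeconfigs and root CA to the nodes, run from the directory where they were generated in section 3.3 (paths match the configs above):

for ip in 192.168.15.134 192.168.15.135; do
  ssh $ip "mkdir -p /etc/kubernetes/ssl"
  scp bootstrap.kubeconfig kube-proxy.kubeconfig $ip:/etc/kubernetes/
  scp k8s-root-ca.pem $ip:/etc/kubernetes/ssl/
done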

# Create the ClusterRoleBinding

Because the kubelet uses TLS bootstrapping, under the RBAC policy the kubelet-bootstrap user has no API access at all by default. A ClusterRoleBinding granting it the system:node-bootstrapper role must be created in the cluster beforehand; run the following on any master:

kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap
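
Verify that the binding exists:

kubectl get clusterrolebinding kubelet-bootstrap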

# Create the kubelet systemd unit

cat << EOF > /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/local/bin/kubelet \\
\$KUBE_LOGTOSTDERR \\
\$KUBE_LOG_LEVEL \\
\$KUBELET_API_SERVER \\
\$KUBELET_ADDRESS \\
\$KUBELET_PORT \\
\$KUBELET_HOSTNAME \\
\$KUBE_ALLOW_PRIV \\
\$KUBELET_ARGS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target
EOF
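
The WorkingDirectory in the unit above must exist before kubelet starts; create it on each node (not covered by the earlier steps):

mkdir -p /var/lib/kubelet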

# Create the kube-proxy systemd unit

cat << EOF > /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/proxy
ExecStart=/usr/local/bin/kube-proxy \\
\$KUBE_LOGTOSTDERR \\
\$KUBE_LOG_LEVEL \\
\$KUBE_MASTER \\
\$KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

# Create the nginx proxy

The nginx proxy provides apiserver high availability directly on each node: kubelet and kube-proxy talk to 127.0.0.1:6443, which is load-balanced across the three masters, so no separate front-end load balancer has to be maintained.

mkdir -p /etc/nginx
cat << EOF > /etc/nginx/nginx.conf
error_log stderr notice;

worker_processes auto;
events {
multi_accept on;
use epoll;
worker_connections 1024;
}

stream {
upstream kube_apiserver {
least_conn;
server 192.168.15.131:6443;
server 192.168.15.132:6443;
server 192.168.15.133:6443;
}

server {
listen 0.0.0.0:6443;
proxy_pass kube_apiserver;
proxy_timeout 10m;
proxy_connect_timeout 1s;
}
}
EOF
chmod +r /etc/nginx/nginx.conf

Start the nginx-proxy container

docker run -it -d -p 127.0.0.1:6443:6443 -v /etc/localtime:/etc/localtime -v /etc/nginx:/etc/nginx --name nginx-proxy --net=host --restart=always --memory=512M nginx:1.13.3-alpine
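
A quick check that the local proxy answers (a 401/403 response from the apiserver is expected here and simply proves the path through nginx works):

curl -k https://127.0.0.1:6443/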

# Start kubelet

systemctl daemon-reload
systemctl start kubelet
systemctl status kubelet
systemctl enable kubelet

# Add the node to the Kubernetes cluster

Because TLS bootstrapping is used, the kubelet does not join the cluster immediately after starting; it first submits a certificate request. The log shows output like the following:

Jul 19 14:15:31 docker4.node kubelet[18213]: I0719 14:15:31.810914   18213 feature_gate.go:144] feature gates: map[]
Jul 19 14:15:31 docker4.node kubelet[18213]: I0719 14:15:31.811025 18213 bootstrap.go:58] Using bootstrap kubeconfig to generate TLS client cert, key and kubeconfig file

At this point, simply approve the certificate request on a master:

[root@localhost ~]# kubectl get csr
NAME                                                   AGE       REQUESTOR           CONDITION
node-csr-_xILhfT4Z5FLQsz8csi3tJKLwz0q02U3aTI8MmoHgQg   24s       kubelet-bootstrap   Pending

[root@localhost ~]# kubectl certificate approve node-csr-_xILhfT4Z5FLQsz8csi3tJKLwz0q02U3aTI8MmoHgQg

[root@localhost ~]# kubectl get node
NAME             STATUS    AGE       VERSION
192.168.15.131   Ready     27s       v1.7.3
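
With several nodes it can be handy to approve all pending CSRs in one go (a sketch):

kubectl get csr -o name | xargs -r kubectl certificate approve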

# Finally, start the kube-proxy component

systemctl daemon-reload
systemctl start kube-proxy
systemctl enable kube-proxy
systemctl status kube-proxy  

4. Deploy the Calico Network

4.1 Overview

Calico is used as the network component. Deploying Calico is fairly straightforward; it only requires creating a few YAML manifests, see https://docs.projectcalico.org/v2.3/getting-started/kubernetes/ for details. Calico has the following requirements:

  • The kubelet must be configured to use a CNI network plugin (e.g. --network-plugin=cni);
  • kube-proxy must run in iptables proxy mode (the default since Kubernetes v1.2.0);
  • kube-proxy must not be started with the --masquerade-all flag, which conflicts with Calico policy;
  • The Kubernetes network policy plugin requires Kubernetes v1.3 or later;
  • When RBAC is enabled, the Calico components need the appropriate accounts, roles, and bindings defined.

Calico offers several installation methods; which one to use depends on how Kubernetes was installed:

  • Standard Hosted Install: installs Calico against an existing etcd cluster. This is the recommended hosted approach for production;
  • Kubeadm Hosted Install: installs Calico together with a single-node etcd cluster. Recommended for getting started quickly with tools such as kubeadm;
  • Kubernetes Datastore: installs Calico without requiring a dedicated etcd cluster.

4.2 Install Calico

Wait -- before you start, check whether your kubelet configuration enables --network-plugin=cni. If not, add it now; otherwise your pods will keep getting addresses from the docker0 bridge network.

wget http://docs.projectcalico.org/v2.6/getting-started/kubernetes/installation/hosted/calico.yaml

sed -i 's@.*etcd_endpoints:.*@\ \ etcd_endpoints:\ \"https://192.168.15.131:2379,https://192.168.15.132:2379,https://192.168.15.133:2379\"@gi' calico.yaml

export ETCD_CERT=`cat /etc/etcd/ssl/etcd.pem | base64 | tr -d '\n'`
export ETCD_KEY=`cat /etc/etcd/ssl/etcd-key.pem | base64 | tr -d '\n'`
export ETCD_CA=`cat /etc/etcd/ssl/etcd-root-ca.pem | base64 | tr -d '\n'`

sed -i "s@.*etcd-cert:.*@\ \ etcd-cert:\ ${ETCD_CERT}@gi" calico.yaml
sed -i "s@.*etcd-key:.*@\ \ etcd-key:\ ${ETCD_KEY}@gi" calico.yaml
sed -i "s@.*etcd-ca:.*@\ \ etcd-ca:\ ${ETCD_CA}@gi" calico.yaml

sed -i 's@.*etcd_ca:.*@\ \ etcd_ca:\ "/calico-secrets/etcd-ca"@gi' calico.yaml
sed -i 's@.*etcd_cert:.*@\ \ etcd_cert:\ "/calico-secrets/etcd-cert"@gi' calico.yaml
sed -i 's@.*etcd_key:.*@\ \ etcd_key:\ "/calico-secrets/etcd-key"@gi' calico.yaml

sed -i 's@192.168.0.0/16@10.254.64.0/18@gi' calico.yaml

mkdir /data/kubernetes/calico -p
mv calico.yaml /data/kubernetes/calico/

Note: gcr.io is not reachable from mainland China; one workaround is to add hosts entries such as:

61.91.161.217 gcr.io
61.91.161.217 www.gcr.io
61.91.161.217 packages.cloud.google.com
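
If gcr.io stays unreachable, another option is to pull the required images from a mirror and retag them locally; the mirror repository below is an assumption, substitute any registry you can reach:

# Pull the pause image from a mirror and retag it as the gcr.io name the kubelet expects
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.0
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.0 gcr.io/google_containers/pause-amd64:3.0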

# Start the pods

kubectl create -f /data/kubernetes/calico/calico.yaml
kubectl apply -f http://docs.projectcalico.org/v2.6/getting-started/kubernetes/installation/rbac.yaml
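
Watch the Calico pods come up (they run in the kube-system namespace):

kubectl get pods -n kube-system -o wide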

Note: image pulls can be slow due to network conditions and may keep the pods from starting; consider pulling the images locally first and then creating the resources.

# Verify the network

cat << EOF > demo.deploy.yml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: demo-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: demo
        image: mritd/demo
        ports:
        - containerPort: 80
EOF
kubectl create -f demo.deploy.yml
kubectl get pods -o wide --all-namespaces

Note: kubectl exec into one pod and ping a pod on a different node; at this point every node should have routes to the other nodes' pod IPs. The detailed steps are not shown here.

5. Deploy DNS

DNS can currently be deployed in two ways: fully by hand, or via the addon-manager. The addon-manager feels a bit cumbersome to me, so below the DNS components are deployed manually.

5.1 Deploy kube-dns

The DNS manifests live in the kubernetes addons directory; download them and make a few small modifications.

# Create the working directory

mkdir /data/kubernetes/dns
cd /data/kubernetes/dns

# Download the manifests

wget https://raw.githubusercontent.com/kubernetes/kubernetes/release-1.7/cluster/addons/dns/kubedns-cm.yaml
wget https://raw.githubusercontent.com/kubernetes/kubernetes/release-1.7/cluster/addons/dns/kubedns-sa.yaml
wget https://raw.githubusercontent.com/kubernetes/kubernetes/release-1.7/cluster/addons/dns/kubedns-svc.yaml.sed
wget https://raw.githubusercontent.com/kubernetes/kubernetes/release-1.7/cluster/addons/dns/kubedns-controller.yaml.sed
mv kubedns-controller.yaml.sed kubedns-controller.yaml
mv kubedns-svc.yaml.sed kubedns-svc.yaml

# Adjust the configuration

sed -i 's/$DNS_DOMAIN/cluster.local/gi' kubedns-controller.yaml
sed -i 's/$DNS_SERVER_IP/10.254.0.2/gi' kubedns-svc.yaml

Note: this DNS_SERVER_IP must be the --cluster-dns address you configured for the kubelet; it is not an arbitrary value.

# Create the resources (all the yml files are in the dns directory)

kubectl create -f /data/kubernetes/dns
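
Confirm that the kube-dns pods and service are up:

kubectl get pods,svc -n kube-system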

# Verify

## Start an nginx pod

cat > my-nginx.yaml << EOF
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 2
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
EOF

kubectl create -f my-nginx.yaml

## Expose the Deployment to create the my-nginx service

kubectl expose deploy my-nginx

[root@k8s-master01 ~]# kubectl get services --all-namespaces |grep my-nginx
default my-nginx 10.254.127.14 <none> 80/TCP 2s

## Start another pod and check that its /etc/resolv.conf contains the --cluster-dns and --cluster-domain values configured for the kubelet, and that the my-nginx service resolves to the Cluster IP 10.254.127.14 shown above;

[root@k8s-master01 ~]# kubectl exec  nginx -i -t -- /bin/bash
root@nginx:/# cat /etc/resolv.conf
nameserver 10.254.0.2
search default.svc.cluster.local. svc.cluster.local. cluster.local. localhost
options ndots:5
root@nginx:/# ping my-nginx
PING my-nginx.default.svc.cluster.local (10.254.127.14): 48 data bytes
^C--- my-nginx.default.svc.cluster.local ping statistics ---
3 packets transmitted, 0 packets received, 100% packet loss
root@nginx:/# ping kubernetes
PING kubernetes.default.svc.cluster.local (10.254.0.1): 48 data bytes
^C--- kubernetes.default.svc.cluster.local ping statistics ---
5 packets transmitted, 0 packets received, 100% packet loss
root@nginx:/# ping kube-dns.kube-system.svc.cluster.local
PING kube-dns.kube-system.svc.cluster.local (10.254.0.2): 48 data bytes
^C--- kube-dns.kube-system.svc.cluster.local ping statistics ---
3 packets transmitted, 0 packets received, 100% packet loss

All of the names above resolve to the correct IP addresses, which shows the DNS service is working. (The 100% packet loss is expected: service ClusterIPs are virtual and normally do not answer ICMP, so what matters here is the name resolution, not the ping replies.)
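
A cleaner check than ping is to resolve the service names directly, for example from a throwaway busybox pod (a sketch):

kubectl run -ti dns-test --image=busybox --restart=Never --rm -- nslookup my-nginx
kubectl run -ti dns-test --image=busybox --restart=Never --rm -- nslookup kubernetes.default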

5.2 Autoscale the DNS Service

# Create the project directory

mkdir /data/kubernetes/dns-autoscaler
cd /data/kubernetes/dns-autoscaler/

# Download the manifests

wget https://raw.githubusercontent.com/kubernetes/kubernetes/release-1.7/cluster/addons/dns-horizontal-autoscaler/dns-horizontal-autoscaler-rbac.yaml
wget https://raw.githubusercontent.com/kubernetes/kubernetes/release-1.7/cluster/addons/dns-horizontal-autoscaler/dns-horizontal-autoscaler.yaml

Then simply run kubectl create -f on them. The DNS autoscaler computes replicas = max( ceil( cores * 1/coresPerReplica ), ceil( nodes * 1/nodesPerReplica ) ); for example, with 12 cores, 5 nodes, coresPerReplica=256 and nodesPerReplica=16 this gives max(ceil(12/256), ceil(5/16)) = 1 replica. To change the number of DNS replicas (the load factors), adjust the corresponding parameters in the ConfigMap; see the official documentation for the calculation details.
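
The scaling parameters live under the "linear" key of the kube-dns-autoscaler ConfigMap; instead of editing it interactively, it can also be patched. A sketch using commonly used default values:

kubectl -n kube-system patch configmap kube-dns-autoscaler -p '{"data":{"linear":"{\"coresPerReplica\":256,\"nodesPerReplica\":16,\"min\":1}"}}'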

# Edit the ConfigMap

kubectl edit cm kube-dns-autoscaler --namespace=kube-system

Note: at any point during the cluster build you may run into images that cannot be downloaded. In that case, change the image address to a mirror or pull the images to the nodes in advance. In mainland China, the Alibaba Cloud container registry and its accelerator for Docker Hub images are recommended.

  

  
