I. Create the TLS certificates and keys:

1. Install CFSSL:

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
chmod +x cfssl_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl

wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
chmod +x cfssljson_linux-amd64
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson

wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl-certinfo_linux-amd64
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo

export PATH=/usr/local/bin:$PATH
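As a quick optional sanity check, confirm the three binaries are on the PATH; cfssl version prints the tool's version and revision:

# Confirm the CFSSL toolchain is installed and reachable on the PATH
which cfssl cfssljson cfssl-certinfo
cfssl version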

2. Create the CA configuration file:

mkdir /root/ssl
cd /root/ssl
cfssl print-defaults config > config.json
cfssl print-defaults csr > csr.json

# Using config.json as a template, create the following ca-config.json

# The certificate expiry is set to 87600h (10 years)

cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}
EOF

3. Create the CA certificate signing request (CSR):

cat > ca-csr.json <<EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

4. Generate the CA certificate and private key:

$ cfssl gencert -initca ca-csr.json | cfssljson -bare ca
$ ls ca*
ca-config.json ca.csr ca-csr.json ca-key.pem ca.pem
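If you want to inspect the freshly generated CA certificate, the cfssl-certinfo tool installed earlier can decode it and print the subject, issuer and validity period:

# Decode the CA certificate and check its subject and expiry
cfssl-certinfo -cert ca.pem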

5. Create the kubernetes server certificate CSR:

cat > kubernetes-csr.json <<EOF
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "192.168.73.61",
    "192.168.73.62",
    "192.168.73.63",
    "10.254.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

6. Generate the kubernetes certificate and private key:

$ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes
$ ls kubernetes*
kubernetes.csr kubernetes-csr.json kubernetes-key.pem kubernetes.pem

7. Create the admin certificate CSR:

cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF

8. Generate the admin certificate and private key:

$ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
$ ls admin*
admin.csr admin-csr.json admin-key.pem admin.pem

9. Create the kube-proxy certificate CSR:

cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

10. Generate the kube-proxy client certificate and private key:

$ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes  kube-proxy-csr.json | cfssljson -bare kube-proxy
$ ls kube-proxy*
kube-proxy.csr kube-proxy-csr.json kube-proxy-key.pem kube-proxy.pem

Verify the certificates:

$ openssl x509 -noout -text -in kubernetes.pem
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            6a:10:a0:d1:dc:43:c5:0a:a3:4f:d7:7e:d5:b8:3b:40:36:dc:71:40
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: C=CN, ST=BeiJing, L=BeiJing, O=k8s, OU=System, CN=kubernetes
        Validity
            Not Before: Feb  6 07:59:00 2018 GMT
            Not After : Feb  4 07:59:00 2028 GMT
        Subject: C=CN, ST=BeiJing, L=BeiJing, O=k8s, OU=System, CN=kubernetes
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Modulus:
                    00:b3:91:7d:a9:24:4d:5b:18:5a:ba:ad:e5:1f:e6:
                    7f:8b:3e:38:d6:b9:21:0e:d6:32:83:b5:1d:16:9f:
                    ...
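Besides inspecting the fields, each issued certificate can be validated against the CA to confirm the signing chain is intact:

# Verify that the issued certificates chain back to the CA
openssl verify -CAfile ca.pem kubernetes.pem admin.pem kube-proxy.pem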

Distribute the certificates:

Copy the generated certificate and key files (the .pem files) to /etc/kubernetes/ssl on every machine for later use:

mkdir -p /etc/kubernetes/ssl
cp *.pem /etc/kubernetes/ssl
scp *.pem root@192.168.73.62:/etc/kubernetes/ssl
scp *.pem root@192.168.73.63:/etc/kubernetes/ssl
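If more nodes are added later, a small loop avoids repeating the commands; this sketch assumes passwordless root SSH to each node and creates the target directory first:

# Hypothetical node list - adjust to your own addresses
for node in 192.168.73.62 192.168.73.63; do
  ssh root@${node} "mkdir -p /etc/kubernetes/ssl"
  scp *.pem root@${node}:/etc/kubernetes/ssl/
done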

II. Install and configure a highly available etcd cluster:

1. Prepare the hosts and install etcd:

1) Edit /etc/hosts and set the hostnames:

192.168.73.61 k8s-master
192.168.73.62 k8s-node1
192.168.73.63 k8s-node2

hostnamectl set-hostname k8s-master   # on 192.168.73.61
hostnamectl set-hostname k8s-node1    # on 192.168.73.62
hostnamectl set-hostname k8s-node2    # on 192.168.73.63

2) Download and install etcd:

wget https://github.com/coreos/etcd/releases/download/v3.3.0/etcd-v3.3.0-linux-amd64.tar.gz
tar zxvf etcd-v3.3.0-linux-amd64.tar.gz
cp etcd-v3.3.0-linux-amd64/etcd* /usr/local/bin/
mkdir /var/lib/etcd   # etcd data directory
mkdir /etc/etcd       # etcd configuration directory

2. Write the etcd configuration file /etc/etcd/etcd.conf:
k8s-master:

# [member]
ETCD_NAME=infra1
ETCD_DATA_DIR="/var/lib/etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.73.61:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.73.61:2379"

# [cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.73.61:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.73.61:2379"

k8s-node1:

# [member]
ETCD_NAME=infra2
ETCD_DATA_DIR="/var/lib/etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.73.62:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.73.62:2379"

# [cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.73.62:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.73.62:2379"

k8s-node2:

# [member]
ETCD_NAME=infra3
ETCD_DATA_DIR="/var/lib/etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.73.63:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.73.63:2379"

# [cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.73.63:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.73.63:2379"

3. Write the etcd systemd unit file /usr/lib/systemd/system/etcd.service:

[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
ExecStart=/usr/local/bin/etcd \
  --name ${ETCD_NAME} \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  --peer-cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --peer-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  --trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
  --peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
  --initial-advertise-peer-urls ${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
  --listen-peer-urls ${ETCD_LISTEN_PEER_URLS} \
  --listen-client-urls ${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
  --advertise-client-urls ${ETCD_ADVERTISE_CLIENT_URLS} \
  --initial-cluster-token ${ETCD_INITIAL_CLUSTER_TOKEN} \
  --initial-cluster infra1=https://192.168.73.61:2380,infra2=https://192.168.73.62:2380,infra3=https://192.168.73.63:2380 \
  --initial-cluster-state new \
  --data-dir=${ETCD_DATA_DIR}
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

4. Start etcd and check the cluster:

1) Start etcd (on every node):

systemctl daemon-reload
systemctl enable etcd
systemctl start etcd
systemctl status etcd

2) Check cluster health:

$ etcdctl --ca-file=/etc/kubernetes/ssl/ca.pem \
    --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
    --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
    cluster-health
member 52908acb5b27845e is healthy: got healthy result from https://192.168.73.63:2379
member 5d00203b8ec1e6c4 is healthy: got healthy result from https://192.168.73.62:2379
member d47f2b47509db50a is healthy: got healthy result from https://192.168.73.61:2379
cluster is healthy
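The etcdctl v2 member list subcommand, with the same certificate flags, also shows which member currently holds leadership:

# List cluster members and the current leader (etcdctl v2 API)
etcdctl --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  member list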

III. Install kubectl on the master and generate the kubeconfig files

1. Download and install kubectl:

wget https://dl.k8s.io/v1.9.2/kubernetes-client-linux-amd64.tar.gz
tar -xzvf kubernetes-client-linux-amd64.tar.gz
cp kubernetes/client/bin/kube* /usr/bin/
chmod a+x /usr/bin/kube*

2. Create the kubectl kubeconfig file:

export KUBE_APISERVER="https://192.168.73.61:6443"

# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER}

# Set client credentials
kubectl config set-credentials admin \
  --client-certificate=/etc/kubernetes/ssl/admin.pem \
  --embed-certs=true \
  --client-key=/etc/kubernetes/ssl/admin-key.pem

# Set the context
kubectl config set-context kubernetes \
  --cluster=kubernetes \
  --user=admin

# Use the context as the default
kubectl config use-context kubernetes

3. Create the TLS bootstrapping token:

export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
cp token.csv /etc/kubernetes/
scp token.csv root@192.168.73.62:/etc/kubernetes/
scp token.csv root@192.168.73.63:/etc/kubernetes/

4. Create the kubelet bootstrapping kubeconfig file:

cd /etc/kubernetes
export KUBE_APISERVER="https://192.168.73.61:6443"

# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

# Set client credentials (the bootstrap token created above)
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig

# Set the context
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

# Use the context as the default
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

5. Create the kube-proxy kubeconfig file:

export KUBE_APISERVER="https://192.168.73.61:6443"

# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

# Set client credentials
kubectl config set-credentials kube-proxy \
  --client-certificate=/etc/kubernetes/ssl/kube-proxy.pem \
  --client-key=/etc/kubernetes/ssl/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

# Set the context
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

# Use the context as the default
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

6. Distribute the kubeconfig files:

scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.73.62:/etc/kubernetes/
scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.73.63:/etc/kubernetes/
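To double-check the files that were just generated and copied, kubectl can print them back; the embedded certificate data is elided in the output:

# Inspect the generated kubeconfig files
kubectl config view --kubeconfig=bootstrap.kubeconfig
kubectl config view --kubeconfig=kube-proxy.kubeconfig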

IV. Install kube-apiserver, kube-controller-manager and kube-scheduler on the master

1. Download the server binaries:

wget https://dl.k8s.io/v1.9.2/kubernetes-server-linux-amd64.tar.gz
tar -xzvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes
tar -xzvf kubernetes-src.tar.gz
cp -r server/bin/{kube-apiserver,kube-controller-manager,kube-scheduler,kubectl,kube-proxy,kubelet} /usr/local/bin/

2. Configure and start kube-apiserver:

1) Unit file: /usr/lib/systemd/system/kube-apiserver.service

[Unit]
Description=Kubernetes API Service
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
After=etcd.service

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/apiserver
ExecStart=/usr/local/bin/kube-apiserver \
  $KUBE_LOGTOSTDERR \
  $KUBE_LOG_LEVEL \
  $KUBE_ETCD_SERVERS \
  $KUBE_API_ADDRESS \
  $KUBE_API_PORT \
  $KUBELET_PORT \
  $KUBE_ALLOW_PRIV \
  $KUBE_SERVICE_ADDRESSES \
  $KUBE_ADMISSION_CONTROL \
  $KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

2) Configuration file: /etc/kubernetes/config

###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service

# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=true"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://192.168.73.71:8080"

3) Configuration file: /etc/kubernetes/apiserver

###
## kubernetes system config
##
## The following values are used to configure the kube-apiserver
##
#
## The address on the local server to listen to.
KUBE_API_ADDRESS="--advertise-address=192.168.73.71 --bind-address=192.168.73.71 --insecure-bind-address=192.168.73.71"
#
## The port on the local server to listen on.
#KUBE_API_PORT="--port=8080"
#
## Port minions listen on
#KUBELET_PORT="--kubelet-port=10250"
#
## Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=https://192.168.73.71:2379,https://192.168.73.72:2379,https://192.168.73.73:2379"
#
## Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
#
## default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=ServiceAccount,NamespaceLifecycle,NamespaceExists,LimitRanger,ResourceQuota"
#
## Add your own!
KUBE_API_ARGS="--authorization-mode=Node,RBAC --runtime-config=rbac.authorization.k8s.io/v1beta1 --kubelet-https=true --enable-bootstrap-token-auth --token-auth-file=/etc/kubernetes/token.csv --service-node-port-range=30000-32767 --tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem --tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem --client-ca-file=/etc/kubernetes/ssl/ca.pem --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem --etcd-cafile=/etc/kubernetes/ssl/ca.pem --etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem --etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem --enable-swagger-ui=true --apiserver-count=3 --audit-log-maxage=30 --audit-log-maxbackup=3 --audit-log-maxsize=100 --audit-log-path=/var/lib/audit.log --event-ttl=1h"

4) Start kube-apiserver:

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver
systemctl status kube-apiserver
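A quick liveness check is to query the health endpoint on the insecure port configured above (adjust the address to your own master); it should return `ok`:

# Query the API server health endpoint on the insecure port
curl http://192.168.73.71:8080/healthz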

3. Configure and start kube-controller-manager:

1) Unit file: /usr/lib/systemd/system/kube-controller-manager.service

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/controller-manager
ExecStart=/usr/local/bin/kube-controller-manager \
  $KUBE_LOGTOSTDERR \
  $KUBE_LOG_LEVEL \
  $KUBE_MASTER \
  $KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

2) Configuration file: /etc/kubernetes/controller-manager

###
# The following values are used to configure the kubernetes controller-manager
# defaults from config and apiserver should be adequate
# Add your own!
KUBE_CONTROLLER_MANAGER_ARGS="--address=127.0.0.1 --service-cluster-ip-range=10.254.0.0/16 --cluster-name=kubernetes --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem --root-ca-file=/etc/kubernetes/ssl/ca.pem --leader-elect=true"

3) Start kube-controller-manager:

systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager
systemctl status kube-controller-manager

4. Configure and start kube-scheduler:

1) Unit file: /usr/lib/systemd/system/kube-scheduler.service

[Unit]
Description=Kubernetes Scheduler Plugin
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/scheduler
ExecStart=/usr/local/bin/kube-scheduler \
  $KUBE_LOGTOSTDERR \
  $KUBE_LOG_LEVEL \
  $KUBE_MASTER \
  $KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

2) Configuration file: /etc/kubernetes/scheduler

###
# kubernetes scheduler config
# default config should be adequate
# Add your own!
KUBE_SCHEDULER_ARGS="--leader-elect=true --address=127.0.0.1"

3) Start kube-scheduler:

systemctl daemon-reload
systemctl enable kube-scheduler
systemctl start kube-scheduler
systemctl status kube-scheduler

5. Verify the control plane:

$ kubectl get componentstatuses
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-0               Healthy   {"health":"true"}   
etcd-1               Healthy   {"health":"true"}   
etcd-2               Healthy   {"health":"true"}   

V. Install the flannel network plugin on all nodes

1. Install flannel (no specific version is required):

1) Install from yum:

yum install -y flannel

2) Or install from the binary release:

wget https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz
tar zxvf flannel-v0.10.0-linux-amd64.tar.gz
mkdir /usr/libexec/flannel/
cp mk-docker-opts.sh /usr/libexec/flannel/
cp flanneld /usr/local/bin

2. Configure and start flanneld:

1) Unit file: /usr/lib/systemd/system/flanneld.service

[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/etc/sysconfig/flanneld
EnvironmentFile=-/etc/sysconfig/docker-network
ExecStart=/usr/local/bin/flanneld \
  -etcd-endpoints=${FLANNEL_ETCD_ENDPOINTS} \
  -etcd-prefix=${FLANNEL_ETCD_PREFIX} \
  $FLANNEL_OPTIONS
ExecStartPost=/usr/libexec/flannel/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service

2) Configuration file: /etc/sysconfig/flanneld

# Flanneld configuration options

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="https://192.168.73.71:2379,https://192.168.73.72:2379,https://192.168.73.73:2379"

# etcd config key.  This is the configuration key that flannel queries
# for address range assignment
FLANNEL_ETCD_PREFIX="/kube-centos/network"

# Any additional options that you want to pass
FLANNEL_OPTIONS="-etcd-cafile=/etc/kubernetes/ssl/ca.pem -etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem -etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem"

3) Create the network configuration in etcd (running this once on the master is enough):

etcdctl --endpoints=https://192.168.73.71:2379,https://192.168.73.72:2379,https://192.168.73.73:2379 \
--ca-file=/etc/kubernetes/ssl/ca.pem \
--cert-file=/etc/kubernetes/ssl/kubernetes.pem \
--key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
mkdir /kube-centos/network
etcdctl --endpoints=https://192.168.73.71:2379,https://192.168.73.72:2379,https://192.168.73.73:2379 \
--ca-file=/etc/kubernetes/ssl/ca.pem \
--cert-file=/etc/kubernetes/ssl/kubernetes.pem \
--key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
mk /kube-centos/network/config '{"Network":"172.30.0.0/16","SubnetLen":24,"Backend":{"Type":"vxlan"}}'

4) Start flanneld:

systemctl daemon-reload
systemctl enable flanneld
systemctl start flanneld
systemctl status flanneld

3. Verify (ETCD_ENDPOINTS below is the same endpoint list configured in /etc/sysconfig/flanneld):

[root@k8s-master ~]# etcdctl --endpoints=${ETCD_ENDPOINTS} \
--ca-file=/etc/kubernetes/ssl/ca.pem \
--cert-file=/etc/kubernetes/ssl/kubernetes.pem \
--key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
ls /kube-centos/network/subnets
/kube-centos/network/subnets/172.30.79.0-24
/kube-centos/network/subnets/172.30.47.0-24
/kube-centos/network/subnets/172.30.65.0-24
[root@k8s-master ~]# etcdctl --endpoints=${ETCD_ENDPOINTS} \
--ca-file=/etc/kubernetes/ssl/ca.pem \
--cert-file=/etc/kubernetes/ssl/kubernetes.pem \
--key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
get /kube-centos/network/config
{"Network":"172.30.0.0/16","SubnetLen":24,"Backend":{"Type":"vxlan"}}

VI. Install and configure docker, kubelet and kube-proxy on all nodes

1. Adjust the system configuration and install docker:

1) Disable swap and create the kubelet working directory:

For Kubernetes 1.8 and later, comment out the swap entry in /etc/fstab (the change takes effect after a reboot; run swapoff -a to disable swap immediately), otherwise kubelet will refuse to start. Then create the kubelet working directory:

swapoff -a
mkdir /var/lib/kubelet

2) Install docker:

Installing docker 17.03:

yum remove docker docker-common container-selinux docker-selinux docker-engine   # remove any old docker packages

yum install -y yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum makecache
yum install -y policycoreutils-python
wget https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch.rpm
rpm -ivh docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch.rpm
yum list docker-ce --showduplicates | sort -r   # list all available docker-ce versions
yum -y install docker-ce-17.03.2.ce

3) Unit file: /usr/lib/systemd/system/docker.service

[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=-/run/flannel/docker
ExecStart=/usr/bin/dockerd --insecure-registry=192.168.78.4 \
  --exec-opt native.cgroupdriver=systemd \
  $OPTIONS \
  $DOCKER_STORAGE_OPTIONS \
  $DOCKER_NETWORK_OPTIONS \
  $ADD_REGISTRY \
  $BLOCK_REGISTRY \
  $REGISTRIES
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target

4) Start docker:

systemctl daemon-reload
systemctl enable docker
systemctl start docker
systemctl status docker
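Two things are worth confirming after docker starts: the docker0 bridge should sit inside the flannel subnet assigned to this node, and the cgroup driver must match the kubelet's --cgroup-driver=systemd setting used below:

# docker0 should fall inside the node's flannel subnet
ip addr show docker0

# Cgroup driver should report "systemd" to match the kubelet flag
docker info 2>/dev/null | grep -i cgroup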

2. Grant the kubelet-bootstrap user permission to request certificates (run on the master):

cd /etc/kubernetes
kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap

3. Distribute the kubelet and kube-proxy binaries to the nodes:

scp /root/kubernetes/server/bin/{kube-proxy,kubelet} root@192.168.73.72:/usr/local/bin/
scp /root/kubernetes/server/bin/{kube-proxy,kubelet} root@192.168.73.73:/usr/local/bin/
mkdir /var/lib/kubelet

4. Configure and start kubelet:

1) Unit file (change the addresses to the local node's address): /usr/lib/systemd/system/kubelet.service

[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/local/bin/kubelet \
  --address=192.168.73.71 \
  --hostname-override=192.168.73.71 \
  --pod-infra-container-image=docker.io/kubernetes/pause \
  --experimental-bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig \
  --kubeconfig=/etc/kubernetes/ssl/kubelet.kubeconfig \
  --cert-dir=/etc/kubernetes/ssl \
  --hairpin-mode promiscuous-bridge \
  --allow-privileged=true \
  --serialize-image-pulls=false \
  --logtostderr=true \
  --cgroup-driver=systemd \
  --cluster_dns=10.254.0.2 \
  --cluster_domain=cluster.local \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

2) Start kubelet:

systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet
systemctl status kubelet

3) Approve the kubelet TLS certificate requests:

When kubelet starts for the first time it sends a certificate signing request to kube-apiserver; the node is added to the cluster only after the request is approved.

1) List the pending requests:

kubectl get csr

2) Approve the pending requests:

kubectl get csr | awk '/Pending/ {print $1}' | xargs kubectl certificate approve

$ kubectl get nodes
NAME            STATUS    ROLES     AGE       VERSION
192.168.73.71   Ready     <none>    11m       v1.9.2
192.168.73.72   Ready     <none>    8m        v1.9.2
192.168.73.73   Ready     <none>    6m        v1.9.2

5. Configure and start kube-proxy:

1) Install conntrack:

yum install -y conntrack-tools

2) Unit file (change the addresses to the local node's address): /usr/lib/systemd/system/kube-proxy.service

[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/proxy
ExecStart=/usr/local/bin/kube-proxy \
  --bind-address=192.168.73.71 \
  --hostname-override=192.168.73.71 \
  --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig \
  --cluster-cidr=10.254.0.0/16
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

3) Start kube-proxy:

systemctl daemon-reload
systemctl enable kube-proxy
systemctl start kube-proxy
systemctl status kube-proxy
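As a final end-to-end smoke test (an optional sketch; the image, name and replica count are arbitrary), run a deployment and reach it through a NodePort service from any node:

# Create a test deployment and expose it as a NodePort service
kubectl run nginx --image=nginx --replicas=2 --port=80
kubectl expose deployment nginx --type=NodePort --port=80
kubectl get pods -o wide
kubectl get svc nginx   # note the allocated NodePort (30000-32767)

# The service should answer on any node address at that NodePort, e.g.
# curl http://192.168.73.72:<nodeport>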

Reference: https://jimmysong.io/kubernetes-handbook/
