Deploying the apiserver

The apiserver deployment script:
[root@mast-1 k8s]# cat apiserver.sh
#!/bin/bash

MASTER_ADDRESS=$1      # master node IP
ETCD_SERVERS=$2        # etcd endpoints

cat <<EOF >/opt/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \\
--v=4 \\
--etcd-servers=${ETCD_SERVERS} \\
--bind-address=${MASTER_ADDRESS} \\
--secure-port=6443 \\
--advertise-address=${MASTER_ADDRESS} \\
--allow-privileged=true \\
--service-cluster-ip-range=10.0.0.0/24 \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--kubelet-https=true \\
--enable-bootstrap-token-auth \\
--token-auth-file=/opt/kubernetes/cfg/token.csv \\
--service-node-port-range=30000-50000 \\
--tls-cert-file=/opt/kubernetes/ssl/server.pem \\
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\
--client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--etcd-cafile=/opt/etcd/ssl/ca.pem \\
--etcd-certfile=/opt/etcd/ssl/server.pem \\
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"
EOF

cat <<EOF >/usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl restart kube-apiserver
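
The script takes the master IP as its first argument and the comma-separated etcd endpoint list as its second. A usage sketch, with the addresses used later in this walkthrough:

# usage: ./apiserver.sh <MASTER_ADDRESS> <ETCD_SERVERS>
chmod +x apiserver.sh
./apiserver.sh 192.168.10.11 https://192.168.10.11:2379,https://192.168.10.12:2379,https://192.168.10.13:2379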

  Download the binary package

[root@mast-1 k8s]# wget https://dl.k8s.io/v1.10.13/kubernetes-server-linux-amd64.tar.gz

  Extract and install

[root@mast-1 k8s]# tar xf kubernetes-server-linux-amd64.tar.gz
[root@mast-1 k8s]# cd kubernetes/server/bin/
[root@mast-1 bin]# ls
apiextensions-apiserver cloud-controller-manager.tar kube-apiserver kube-controller-manager kubectl kube-proxy.docker_tag kube-scheduler.docker_tag
cloud-controller-manager hyperkube kube-apiserver.docker_tag kube-controller-manager.docker_tag kubelet kube-proxy.tar kube-scheduler.tar
cloud-controller-manager.docker_tag kubeadm kube-apiserver.tar kube-controller-manager.tar kube-proxy kube-scheduler mounter
[root@mast-1 ~]# mkdir /opt/kubernetes/{cfg,ssl,bin} -pv
mkdir: created directory '/opt/kubernetes'
mkdir: created directory '/opt/kubernetes/cfg'
mkdir: created directory '/opt/kubernetes/ssl'
mkdir: created directory '/opt/kubernetes/bin'
[root@mast-1 bin]# cp kube-apiserver kube-controller-manager kube-scheduler /opt/kubernetes/bin/
[root@mast-1 k8s]# ./apiserver.sh 192.168.10.11 https://192.168.10.11:2379,https://192.168.10.12:2379,https://192.168.10.13:2379
[root@mast-1 k8s]# cd /opt/kubernetes/cfg/
[root@mast-1 cfg]# vi kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=false \
--log-dir=/opt/kubernetes/logs \ 定义日志目录;注意创建此目录
--v=4 \
--etcd-servers=https://192.168.10.11:2379,https://192.168.10.12:2379,https://192.168.10.13:2379 \
--bind-address=192.168.10.11 \ 绑定的IP地址
--secure-port=6443 \ 端口基于https通信的
--advertise-address=192.168.10.11 \ 集群通告地址;其他节点访问通告这个IP
--allow-privileged=true \ 容器层的授权
--service-cluster-ip-range=10.0.0.0/24 \ 负责均衡的虚拟IP
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \ 启用准入插件;决定是否要启用一些高级功能
--authorization-mode=RBAC,Node \ 认证模式
--kubelet-https=true \ api-server主动访问kubelet是使用https协议
--enable-bootstrap-token-auth \ 认证客户端并实现自动颁发证书
--token-auth-file=/opt/kubernetes/cfg/token.csv \ 指定token文件
--service-node-port-range=30000-50000 \ node认证端口范围
--tls-cert-file=/opt/kubernetes/ssl/server.pem \ apiserver 证书文件
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \ ca证书
--etcd-cafile=/opt/etcd/ssl/ca.pem \ etcd 证书
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"
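
The --log-dir directory is not created by the script above; a minimal sketch of preparing it and reloading the edited options:

mkdir -p /opt/kubernetes/logs        # directory referenced by --log-dir
systemctl restart kube-apiserver     # pick up the edited /opt/kubernetes/cfg/kube-apiserver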

  Generate certificates and the token file

[root@mast-1 k8s]# cat k8s-cert.sh
cat > ca-config.json <<EOF
{
"signing": {
"default": {
"expiry": "87600h"
},
"profiles": {
"kubernetes": {
"expiry": "87600h",
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
]
}
}
}
}
EOF

cat > ca-csr.json <<EOF
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Beijing",
"ST": "Beijing",
"O": "k8s",
"OU": "System"
}
]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

#-----------------------

cat > server-csr.json <<EOF
{
"CN": "kubernetes",
"hosts": [
"10.0.0.1",
"127.0.0.1",
"10.206.176.19", master IP
"10.206.240.188", LB;node节点不用写,写上也不错
"10.206.240.189", LB:
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing",
"O": "k8s",
"OU": "System"
}
]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

#-----------------------

cat > admin-csr.json <<EOF
{
"CN": "admin",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing",
"O": "system:masters",
"OU": "System"
}
]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

#-----------------------

cat > kube-proxy-csr.json <<EOF
{
"CN": "system:kube-proxy",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing",
"O": "k8s",
"OU": "System"
}
]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
[root@mast-1 k8s]# bash k8s-cert.sh
2019/04/22 18:05:08 [INFO] generating a new CA key and certificate from CSR
2019/04/22 18:05:08 [INFO] generate received request
2019/04/22 18:05:08 [INFO] received CSR
2019/04/22 18:05:08 [INFO] generating key: rsa-2048
2019/04/22 18:05:09 [INFO] encoded CSR
2019/04/22 18:05:09 [INFO] signed certificate with serial number 631400127737303589248201910249856863284562827982
2019/04/22 18:05:09 [INFO] generate received request
2019/04/22 18:05:09 [INFO] received CSR
2019/04/22 18:05:09 [INFO] generating key: rsa-2048
2019/04/22 18:05:10 [INFO] encoded CSR
2019/04/22 18:05:10 [INFO] signed certificate with serial number 99345466047844052770348056449571016254842578399
2019/04/22 18:05:10 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
2019/04/22 18:05:10 [INFO] generate received request
2019/04/22 18:05:10 [INFO] received CSR
2019/04/22 18:05:10 [INFO] generating key: rsa-2048
2019/04/22 18:05:11 [INFO] encoded CSR
2019/04/22 18:05:11 [INFO] signed certificate with serial number 309283889504556884051139822527420141544215396891
2019/04/22 18:05:11 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
2019/04/22 18:05:11 [INFO] generate received request
2019/04/22 18:05:11 [INFO] received CSR
2019/04/22 18:05:11 [INFO] generating key: rsa-2048
2019/04/22 18:05:11 [INFO] encoded CSR
2019/04/22 18:05:11 [INFO] signed certificate with serial number 286610519064253595846587034459149175950956557113
2019/04/22 18:05:11 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@mast-1 k8s]# ls
admin.csr apiserver.sh ca-key.pem etcd-cert.sh kube-proxy.csr kubernetes scheduler.sh server.pem
admin-csr.json ca-config.json ca.pem etcd.sh kube-proxy-csr.json kubernetes-server-linux-amd64.tar.gz server.csr
admin-key.pem ca.csr controller-manager.sh k8s-cert kube-proxy-key.pem kubernetes.tar.gz server-csr.json
admin.pem ca-csr.json etcd-cert k8s-cert.sh kube-proxy.pem master.zip
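
Before copying the certificates into place, it is worth confirming that server.pem really contains the addresses the apiserver will be reached on; a hedged check with openssl (any recent openssl can read the cfssl-generated PEM files):

# list the subject alternative names baked into the apiserver certificate
openssl x509 -in server.pem -noout -text | grep -A1 "Subject Alternative Name"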

 Generate the token file

[root@mast-1 k8s]# cp ca-key.pem ca.pem server-key.pem server.pem /opt/kubernetes/ssl/
[root@mast-1 k8s]# BOOTSTRAP_TOKEN=0fb61c46f8991b718eb38d27b605b008
[root@mast-1 k8s]# cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
[root@mast-1 k8s]# cat token.csv
0fb61c46f8991b718eb38d27b605b008,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
[root@mast-1 k8s]# mv token.csv /opt/kubernetes/cfg/
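
The token value itself is arbitrary; any 32-character hex string works. A sketch of generating a fresh one rather than reusing the value above (assumes the usual coreutils head/od/tr are available):

# produce a random 16-byte (32 hex character) bootstrap token and rebuild token.csv
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
echo "${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,\"system:kubelet-bootstrap\"" > /opt/kubernetes/cfg/token.csv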

  

 Start the apiserver

[root@mast-1 k8s]# systemctl start kube-apiserver
[root@mast-1 k8s]# ps -ef | grep apiserver
root      3264     1 99 20:35 ?     00:00:01 /opt/kubernetes/bin/kube-apiserver --logtostderr=false --log-dir=/opt/kubernetes/logs --v=4 --etcd-servers=https://192.168.10.11:2379,https://192.168.10.12:2379,https://192.168.10.13:2379 --bind-address=192.168.10.11 --secure-port=6443 --advertise-address=192.168.10.11 --allow-privileged=true --service-cluster-ip-range=10.0.0.0/24 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction --authorization-mode=RBAC,Node --kubelet-https=true --enable-bootstrap-token-auth --token-auth-file=/opt/kubernetes/cfg/token.csv --service-node-port-range=30000-50000 --tls-cert-file=/opt/kubernetes/ssl/server.pem --tls-private-key-file=/opt/kubernetes/ssl/server-key.pem --client-ca-file=/opt/kubernetes/ssl/ca.pem --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem --etcd-cafile=/opt/etcd/ssl/ca.pem --etcd-certfile=/opt/etcd/ssl/server.pem --etcd-keyfile=/opt/etcd/ssl/server-key.pem
root      3274  1397  0 20:35 pts/0 00:00:00 grep --color=auto apiserver
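
A quick sanity check, assuming the local insecure port 8080 is enabled (it shows up in the ss output further down), is to hit the apiserver's health endpoint:

curl http://127.0.0.1:8080/healthz    # prints "ok" when the apiserver is healthy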

  Generate the config file and start the controller-manager

[root@mast-1 k8s]# cat controller-manager.sh
#!/bin/bash

MASTER_ADDRESS=$1

cat <<EOF >/opt/kubernetes/cfg/kube-controller-manager
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \\    log settings
--v=4 \\
--master=${MASTER_ADDRESS}:8080 \\    apiserver insecure address and port
--leader-elect=true \\
--address=127.0.0.1 \\
--service-cluster-ip-range=10.0.0.0/24 \\
--cluster-name=kubernetes \\
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--root-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--experimental-cluster-signing-duration=87600h0m0s"
EOF

cat <<EOF >/usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl restart kube-controller-manager
[root@mast-1 k8s]# bash controller-manager.sh 127.0.0.1      pass the master (apiserver) address; 127.0.0.1 works because the apiserver's insecure port listens locally
[root@mast-1 k8s]# ss -lntp
State      Recv-Q Send-Q Local Address:Port     Peer Address:Port
LISTEN     0      128    192.168.10.11:6443     *:*       users:(("kube-apiserver",pid=7604,fd=6))
LISTEN     0      128    192.168.10.11:2379     *:*       users:(("etcd",pid=1428,fd=7))
LISTEN     0      128    127.0.0.1:2379         *:*       users:(("etcd",pid=1428,fd=6))
LISTEN     0      128    127.0.0.1:10252        *:*       users:(("kube-controller",pid=7593,fd=3))
LISTEN     0      128    192.168.10.11:2380     *:*       users:(("etcd",pid=1428,fd=5))
LISTEN     0      128    127.0.0.1:8080         *:*       users:(("kube-apiserver",pid=7604,fd=5))
LISTEN     0      128    *:22                   *:*       users:(("sshd",pid=902,fd=3))
LISTEN     0      100    127.0.0.1:25           *:*       users:(("master",pid=1102,fd=13))
LISTEN     0      128    :::10257               :::*      users:(("kube-controller",pid=7593,fd=5))
LISTEN     0      128    :::22                  :::*      users:(("sshd",pid=902,fd=4))
LISTEN     0      100    ::1:25                 :::*      users:(("master",pid=1102,fd=14))

  Generate the config file and start the scheduler

[root@mast-1 k8s]# cat scheduler.sh
#!/bin/bash

MASTER_ADDRESS=$1

cat <<EOF >/opt/kubernetes/cfg/kube-scheduler
KUBE_SCHEDULER_OPTS="--logtostderr=true \\
--v=4 \\
--master=${MASTER_ADDRESS}:8080 \\
--leader-elect" EOF cat <<EOF >/usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-scheduler
systemctl restart kube-scheduler
[root@mast-1 k8s]# bash scheduler.sh 127.0.0.1
[root@mast-1 k8s]# ss -lntp
State      Recv-Q Send-Q Local Address:Port     Peer Address:Port
LISTEN     0      128    192.168.10.11:2379     *:*       users:(("etcd",pid=1428,fd=7))
LISTEN     0      128    127.0.0.1:2379         *:*       users:(("etcd",pid=1428,fd=6))
LISTEN     0      128    127.0.0.1:10252        *:*       users:(("kube-controller",pid=7809,fd=3))
LISTEN     0      128    192.168.10.11:2380     *:*       users:(("etcd",pid=1428,fd=5))
LISTEN     0      128    *:22                   *:*       users:(("sshd",pid=902,fd=3))
LISTEN     0      100    127.0.0.1:25           *:*       users:(("master",pid=1102,fd=13))
LISTEN     0      128    :::10251               :::*      users:(("kube-scheduler",pid=8073,fd=3))
LISTEN     0      128    :::10257               :::*      users:(("kube-controller",pid=7809,fd=5))
LISTEN     0      128    :::22                  :::*      users:(("sshd",pid=902,fd=4))
LISTEN     0      100    ::1:25                 :::*      users:(("master",pid=1102,fd=14))

  The kube-controller-manager configuration file

[root@mast-1 k8s]# cat /opt/kubernetes/cfg/kube-controller-manager 

KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \                                      apiserver connection address
--leader-elect=true \                                          automatic leader election for HA
--address=127.0.0.1 \                                          bind address; not exposed externally
--service-cluster-ip-range=10.0.0.0/24 \                       must match the apiserver's service-cluster-ip-range
--cluster-name=kubernetes \                                    cluster name
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \       signing certificate
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \    signing key
--root-ca-file=/opt/kubernetes/ssl/ca.pem \                    root CA certificate
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \
--experimental-cluster-signing-duration=87600h0m0s"            validity period for signed certificates

  The kube-scheduler configuration file

[root@mast-1 k8s]# cat /opt/kubernetes/cfg/kube-scheduler 

KUBE_SCHEDULER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect"

  Copy the kubectl client to /usr/bin

[root@mast-1 k8s]# cp kubernetes/server/bin/kubectl /usr/bin/

  Check the cluster status

[root@mast-1 k8s]# kubectl get cs
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
etcd-2 Healthy {"health":"true"}
etcd-1 Healthy {"health":"true"}
etcd-0 Healthy {"health":"true"}
controller-manager Healthy ok
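
With all component statuses reporting Healthy, the master is up. As an extra hedged check, kubectl can report which endpoint it is talking to (with no kubeconfig it defaults to the local insecure port):

kubectl cluster-info    # should report the master running at http://localhost:8080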

  
