> Kubernetes 1.5.0 configuration guide

# 1.0 Initialize the environment

## 1.1 Environment

| Node   |      IP      |
|--------|-------------|
|node-1|10.6.0.140|
|node-2|10.6.0.187|
|node-3|10.6.0.188|

## 1.2 Set the hostname

Run on each node, substituting that node's name:

```
hostnamectl --static set-hostname <hostname>
```

| IP          | hostname |
|-------------|-------------|
|10.6.0.140|k8s-node-1|
|10.6.0.187|k8s-node-2|
|10.6.0.188|k8s-node-3|

## 1.3 Configure /etc/hosts

```
vi /etc/hosts
```

| IP          | hostname |
|-------------|-------------|
|10.6.0.140|k8s-node-1|
|10.6.0.187|k8s-node-2|
|10.6.0.188|k8s-node-3|
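The entries to add on every node mirror the table above:

```
10.6.0.140 k8s-node-1
10.6.0.187 k8s-node-2
10.6.0.188 k8s-node-3
```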

# 2.0 Deploy the Kubernetes master

## 2.1 Add the yum repository

```
# Use a friend's yum mirror
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[mritdrepo]
name=Mritd Repository
baseurl=https://yum.mritd.me/centos/7/x86_64
enabled=1
gpgcheck=1
gpgkey=https://cdn.mritd.me/keys/rpm.public.key
EOF

yum makecache
yum install -y socat kubelet kubeadm kubectl kubernetes-cni
```
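A quick way to confirm the tools landed (the exact version strings depend on the repo's packages):

```
kubeadm version
kubectl version --client
kubelet --version
```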

## 2.2 Install Docker

```
wget -qO- https://get.docker.com/ | sh

systemctl enable docker
systemctl start docker
```

## 2.3 Install the etcd cluster

```
yum -y install etcd

# Create the etcd data directory
mkdir -p /opt/etcd/data
chown -R etcd:etcd /opt/etcd/
```

Edit the configuration file `/etc/etcd/etcd.conf`; on the first node (etcd1, 10.6.0.140) the following parameters need to change:

```
ETCD_NAME=etcd1
ETCD_DATA_DIR="/opt/etcd/data/etcd1.etcd"
ETCD_LISTEN_PEER_URLS="http://10.6.0.140:2380"
ETCD_LISTEN_CLIENT_URLS="http://10.6.0.140:2379,http://127.0.0.1:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://10.6.0.140:2380"
ETCD_INITIAL_CLUSTER="etcd1=http://10.6.0.140:2380,etcd2=http://10.6.0.187:2380,etcd3=http://10.6.0.188:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="k8s-etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://10.6.0.140:2379"
```
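The other two members get the same file with their own name and IP swapped in; for example, on 10.6.0.187 (etcd2) the per-node values would look like this, derived from the ETCD_INITIAL_CLUSTER line above:

```
ETCD_NAME=etcd2
ETCD_DATA_DIR="/opt/etcd/data/etcd2.etcd"
ETCD_LISTEN_PEER_URLS="http://10.6.0.187:2380"
ETCD_LISTEN_CLIENT_URLS="http://10.6.0.187:2379,http://127.0.0.1:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://10.6.0.187:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://10.6.0.187:2379"
# ETCD_INITIAL_CLUSTER, _STATE and _TOKEN stay identical on all three nodes
```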
Patch the etcd systemd unit so it passes the cluster flags through:

```
sed -i 's/\\\"${ETCD_LISTEN_CLIENT_URLS}\\\"/\\\"${ETCD_LISTEN_CLIENT_URLS}\\\" --listen-client-urls=\\\"${ETCD_LISTEN_CLIENT_URLS}\\\" --advertise-client-urls=\\\"${ETCD_ADVERTISE_CLIENT_URLS}\\\" --initial-cluster-token=\\\"${ETCD_INITIAL_CLUSTER_TOKEN}\\\" --initial-cluster=\\\"${ETCD_INITIAL_CLUSTER}\\\" --initial-cluster-state=\\\"${ETCD_INITIAL_CLUSTER_STATE}\\\"/g' /usr/lib/systemd/system/etcd.service
```
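For reference, the stock unit's ExecStart passes only the name, data dir, and listen-client-urls; after the sed it should end up roughly like this (a sketch, verify against your package's actual unit file; the duplicated --listen-client-urls is harmless since the last occurrence wins):

```
ExecStart=/bin/bash -c "GOMAXPROCS=$(nproc) /usr/bin/etcd --name=\"${ETCD_NAME}\" --data-dir=\"${ETCD_DATA_DIR}\" --listen-client-urls=\"${ETCD_LISTEN_CLIENT_URLS}\" --listen-client-urls=\"${ETCD_LISTEN_CLIENT_URLS}\" --advertise-client-urls=\"${ETCD_ADVERTISE_CLIENT_URLS}\" --initial-cluster-token=\"${ETCD_INITIAL_CLUSTER_TOKEN}\" --initial-cluster=\"${ETCD_INITIAL_CLUSTER}\" --initial-cluster-state=\"${ETCD_INITIAL_CLUSTER_STATE}\""
```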
```
# Reload systemd after editing the unit file, then start etcd
systemctl daemon-reload
systemctl enable etcd
systemctl start etcd
systemctl status etcd
```

```
# Check cluster health
etcdctl cluster-health
```
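`etcdctl member list` is another quick sanity check; all three peers from ETCD_INITIAL_CLUSTER should appear:

```
etcdctl member list
```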

## 2.4 Pull the images

```
images=(kube-proxy-amd64:v1.5.1 kube-discovery-amd64:1.0 kubedns-amd64:1.9 kube-scheduler-amd64:v1.5.1 kube-controller-manager-amd64:v1.5.1 kube-apiserver-amd64:v1.5.1 etcd-amd64:3.0.-kubeadm kube-dnsmasq-amd64:1.4 exechealthz-amd64:1.2 pause-amd64:3.0 kubernetes-dashboard-amd64:v1.5.0 dnsmasq-metrics-amd64:1.0)
for imageName in ${images[@]} ; do
  docker pull jicki/$imageName
  docker tag jicki/$imageName gcr.io/google_containers/$imageName
  docker rmi jicki/$imageName
done
```

If pulls are slow, add a registry mirror to the Docker startup options: `--registry-mirror="http://b438f72b.m.daocloud.io"`
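To confirm the re-tagging worked, list what landed locally:

```
docker images | grep gcr.io/google_containers
```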

## 2.5 Start kubelet

```
systemctl enable kubelet
systemctl start kubelet
```

## 2.6 Create the cluster

```
kubeadm init --api-advertise-addresses=10.6.0.140 \
--external-etcd-endpoints=http://10.6.0.140:2379,http://10.6.0.187:2379,http://10.6.0.188:2379 \
--use-kubernetes-version v1.5.1 \
--pod-network-cidr 10.244.0.0/16
```

```
Flag --external-etcd-endpoints has been deprecated, this flag will be removed when componentconfig exists
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Running pre-flight checks
[preflight] Starting the kubelet service
[init] Using Kubernetes version: v1.5.1
[tokens] Generated token: "c53ef2.d257d49589d634f0"
[certificates] Generated Certificate Authority key and certificate.
[certificates] Generated API Server key and certificate
[certificates] Generated Service Account signing keys
[certificates] Created keys and certificates in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[apiclient] Created API client, waiting for the control plane to become ready
[apiclient] All control plane components are healthy after 15.299235 seconds
[apiclient] Waiting for at least one node to register and become ready
[apiclient] First node is ready after 1.002937 seconds
[apiclient] Creating a test deployment
[apiclient] Test deployment succeeded
[token-discovery] Created the kube-discovery deployment, waiting for it to become ready
[token-discovery] kube-discovery is ready after 2.502881 seconds
[addons] Created essential addon: kube-proxy
[addons] Created essential addon: kube-dns

Your Kubernetes master has initialized successfully!

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
    http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node:

kubeadm join --token=c53ef2.d257d49589d634f0 10.6.0.140
```

## 2.7 Record the token

You can now join any number of machines by running the following on each node:

```
kubeadm join --token=c53ef2.d257d49589d634f0 10.6.0.140
```

## 2.8 Configure the pod network

```
# Pull the image first, otherwise the pods may fail to download it
docker pull quay.io/coreos/flannel-git:v0.6.1--g5dde68d-amd64

# Or pull from the mirror and re-tag
docker pull jicki/flannel-git:v0.6.1--g5dde68d-amd64
docker tag jicki/flannel-git:v0.6.1--g5dde68d-amd64 quay.io/coreos/flannel-git:v0.6.1--g5dde68d-amd64
docker rmi jicki/flannel-git:v0.6.1--g5dde68d-amd64
```

```
# http://kubernetes.io/docs/admin/addons/ lists several network add-ons; pick one.
# Flannel is used here. Note: choosing Flannel requires --pod-network-cidr at init time.
kubectl create -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
```
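Once applied, the flannel DaemonSet should spawn one pod per node; a quick check:

```
kubectl get pods --namespace=kube-system | grep flannel
```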

## 2.9 Check kubelet status

```
systemctl status kubelet
```

# 3.0 Deploy the Kubernetes nodes

## 3.1 Install Docker

```
wget -qO- https://get.docker.com/ | sh

systemctl enable docker
systemctl start docker
```

## 3.2 Pull the images

```
images=(kube-proxy-amd64:v1.5.1 kube-discovery-amd64:1.0 kubedns-amd64:1.9 kube-scheduler-amd64:v1.5.1 kube-controller-manager-amd64:v1.5.1 kube-apiserver-amd64:v1.5.1 etcd-amd64:3.0.-kubeadm kube-dnsmasq-amd64:1.4 exechealthz-amd64:1.2 pause-amd64:3.0 kubernetes-dashboard-amd64:v1.5.0 dnsmasq-metrics-amd64:1.0)
for imageName in ${images[@]} ; do
  docker pull jicki/$imageName
  docker tag jicki/$imageName gcr.io/google_containers/$imageName
  docker rmi jicki/$imageName
done
```

## 3.3 Start kubelet

```
systemctl enable kubelet
systemctl start kubelet
```

## 3.4 Join the cluster

```
kubeadm join --token=c53ef2.d257d49589d634f0 10.6.0.140
```

```
Node join complete:
* Certificate signing request sent to master and response
  received.
* Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on the master to see this machine join.
```

## 3.5 Check cluster status

```
[root@k8s-node-1 ~]# kubectl get node
NAME         STATUS         AGE
k8s-node-1   Ready,master   27m
k8s-node-2   Ready          6s
k8s-node-3   Ready          9s
```

## 3.6 Check service status

```
[root@k8s-node-1 ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE
kube-system   dummy--qrp68                         1/1     Running   0          1h
kube-system   kube-apiserver-k8s-node-1            1/1     Running   0          1h
kube-system   kube-controller-manager-k8s-node-1   1/1     Running   0          1h
kube-system   kube-discovery--g2lpc                1/1     Running   0          1h
kube-system   kube-dns--xbhv4                      4/4     Running   0          1h
kube-system   kube-flannel-ds-39g5n                2/2     Running   0          1h
kube-system   kube-flannel-ds-dwc82                2/2     Running   0          1h
kube-system   kube-flannel-ds-qpkm0                2/2     Running   0          1h
kube-system   kube-proxy-16c50                     1/1     Running   0          1h
kube-system   kube-proxy-5rkc8                     1/1     Running   0          1h
kube-system   kube-proxy-xwrq0                     1/1     Running   0          1h
kube-system   kube-scheduler-k8s-node-1            1/1     Running   0          1h
```

# 4.0 Configure Kubernetes

## 4.1 Controlling the cluster from other hosts

Back up `/etc/kubernetes/admin.conf` from the master node and save it to another machine; kubectl can then manage the cluster using that kubeconfig:

```
kubectl --kubeconfig ./admin.conf get nodes
```
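For example, copying it down with scp (a sketch; adjust the user and destination path to your environment):

```
# Fetch the admin kubeconfig from the master
scp root@10.6.0.140:/etc/kubernetes/admin.conf .

# Use it from the remote machine
kubectl --kubeconfig ./admin.conf get nodes
```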

## 4.2 Deploy the dashboard

```
# Download the yaml; applying it as-is would pull the image from the official registry
curl -O https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml

# Edit the yaml:
vi kubernetes-dashboard.yaml
# change
#     image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.4.0
# to
#     image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.5.0
# and change
#     imagePullPolicy: Always
# to
#     imagePullPolicy: IfNotPresent
```

```
kubectl create -f ./kubernetes-dashboard.yaml
deployment "kubernetes-dashboard" created
service "kubernetes-dashboard" created
```

```
# Look up the NodePort, i.e. the externally reachable port
kubectl describe svc kubernetes-dashboard --namespace=kube-system
NodePort:               <unset> 31736/TCP
```

```
# Access the dashboard at
http://10.6.0.140:31736
```
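The NodePort can also be read non-interactively with kubectl's jsonpath output:

```
kubectl --namespace=kube-system get svc kubernetes-dashboard \
  -o jsonpath='{.spec.ports[0].nodePort}'
```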

# 5.0 Deploy applications on Kubernetes

## 5.1 Deploy an nginx ReplicationController

> Write an nginx yaml

```
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-rc
spec:
  replicas: 2
  selector:
    name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
```

```
[root@k8s-node-1 ~]# kubectl get rc
NAME       DESIRED   CURRENT   READY   AGE
nginx-rc   2         2         2       2m

[root@k8s-node-1 ~]# kubectl get pod -o wide
NAME             READY   STATUS    RESTARTS   AGE   IP          NODE
nginx-rc-2s8k9   1/1     Running   0          10m   10.32.0.3   k8s-node-
nginx-rc-s16cm   1/1     Running   0          10m   10.40.0.1   k8s-node-
```
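The RC can be resized at any time; for example:

```
kubectl scale rc nginx-rc --replicas=3
```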

> Write an nginx Service so containers inside the cluster can access it (ClusterIP)

```
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  selector:
    name: nginx
```

```
[root@k8s-node-1 ~]# kubectl create -f nginx-svc.yaml
service "nginx-svc" created

[root@k8s-node-1 ~]# kubectl get svc -o wide
NAME         CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE   SELECTOR
kubernetes   10.0.0.1      <none>        443/TCP   2d    <none>
nginx-svc    10.6.164.79   <none>        80/TCP    29s   name=nginx
```

> Write a curl pod

```
apiVersion: v1
kind: Pod
metadata:
  name: curl
spec:
  containers:
  - name: curl
    image: radial/busyboxplus:curl
    command:
    - sh
    - -c
    - while true; do sleep 3600; done
```

```
# Test communication between pods
[root@k8s-node-1 ~]# kubectl exec curl curl nginx-svc
```

```
# From any node, the service can be reached via its cluster IP
[root@k8s-node-2 ~]# curl 10.6.164.79
[root@k8s-node-3 ~]# curl 10.6.164.79
```
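If the service name does not resolve, verify kube-dns from inside the pod (the busyboxplus image includes nslookup):

```
kubectl exec curl -- nslookup nginx-svc
```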

> Write an nginx Service that external clients can access (NodePort)

```
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc-node
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  type: NodePort
  selector:
    name: nginx
```

```
[root@k8s-node-1 ~]# kubectl get svc -o wide
NAME             CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE   SELECTOR
kubernetes       10.0.0.1       <none>        443/TCP   2d    <none>
nginx-svc        10.6.164.79    <none>        80/TCP    29m   name=nginx
nginx-svc-node   10.12.95.227   <nodes>       80/TCP    17s   name=nginx

[root@k8s-node-1 ~]# kubectl describe svc nginx-svc-node | grep NodePort
Type:                   NodePort
NodePort:               <unset> 32669/TCP
```

```
# Access with any node's physical IP plus the NodePort
http://10.6.0.140:32669
http://10.6.0.187:32669
http://10.6.0.188:32669
```
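A quick loop to confirm every node answers on the NodePort (a sketch using the node IPs above):

```
for ip in 10.6.0.140 10.6.0.187 10.6.0.188; do
  curl -s -o /dev/null -w "$ip -> %{http_code}\n" http://$ip:32669
done
```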

## 5.2 Deploy a ZooKeeper cluster

> Write a zookeeper-cluster.yaml

```
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: zookeeper-1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: zookeeper-1
    spec:
      containers:
      - name: zookeeper-1
        image: zk:alpine
        imagePullPolicy: IfNotPresent
        env:
        - name: NODE_ID
          value: "1"
        - name: NODES
          value: "0.0.0.0,zookeeper-2,zookeeper-3"
        ports:
        - containerPort: 2181
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: zookeeper-2
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: zookeeper-2
    spec:
      containers:
      - name: zookeeper-2
        image: zk:alpine
        imagePullPolicy: IfNotPresent
        env:
        - name: NODE_ID
          value: "2"
        - name: NODES
          value: "zookeeper-1,0.0.0.0,zookeeper-3"
        ports:
        - containerPort: 2181
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: zookeeper-3
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: zookeeper-3
    spec:
      containers:
      - name: zookeeper-3
        image: zk:alpine
        imagePullPolicy: IfNotPresent
        env:
        - name: NODE_ID
          value: "3"
        - name: NODES
          value: "zookeeper-1,zookeeper-2,0.0.0.0"
        ports:
        - containerPort: 2181
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper-1
  labels:
    name: zookeeper-1
spec:
  ports:
  - name: client
    port: 2181
    protocol: TCP
  - name: followers
    port: 2888
    protocol: TCP
  - name: election
    port: 3888
    protocol: TCP
  selector:
    name: zookeeper-1
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper-2
  labels:
    name: zookeeper-2
spec:
  ports:
  - name: client
    port: 2181
    protocol: TCP
  - name: followers
    port: 2888
    protocol: TCP
  - name: election
    port: 3888
    protocol: TCP
  selector:
    name: zookeeper-2
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper-3
  labels:
    name: zookeeper-3
spec:
  ports:
  - name: client
    port: 2181
    protocol: TCP
  - name: followers
    port: 2888
    protocol: TCP
  - name: election
    port: 3888
    protocol: TCP
  selector:
    name: zookeeper-3
```
```
[root@k8s-node-1 ~]# kubectl create -f zookeeper-cluster.yaml --record

[root@k8s-node-1 ~]# kubectl get pods -o wide
NAME                READY   STATUS    RESTARTS   AGE   IP          NODE
zookeeper---cfyt4   1/1     Running   0          51m   10.32.0.3   k8s-node-
zookeeper---0bxee   1/1     Running   0          51m   10.40.0.1   k8s-node-
zookeeper---5csqy   1/1     Running   0          51m   10.40.0.2   k8s-node-

[root@k8s-node-1 ~]# kubectl get deployment -o wide
NAME          DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
zookeeper-1   1         1         1            1           51m
zookeeper-2   1         1         1            1           51m
zookeeper-3   1         1         1            1           51m

[root@k8s-node-1 ~]# kubectl get svc -o wide
NAME          CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE   SELECTOR
zookeeper-1   10.8.111.19    <none>        2181/TCP,2888/TCP,3888/TCP   51m   name=zookeeper-1
zookeeper-2   10.6.10.124    <none>        2181/TCP,2888/TCP,3888/TCP   51m   name=zookeeper-2
zookeeper-3   10.0.146.143   <none>        2181/TCP,2888/TCP,3888/TCP   51m   name=zookeeper-3
```
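A minimal liveness check from the curl pod created in section 5.1, assuming the standard ZooKeeper client port 2181 (the busyboxplus image ships nc):

```
# a healthy server replies "imok"
kubectl exec curl -- sh -c 'echo ruok | nc zookeeper-1 2181'
```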

## 5.3 Deploy a Kafka cluster

> Write a kafka-cluster.yaml

```
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kafka-deployment-1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: kafka-1
    spec:
      containers:
      - name: kafka-1
        image: kafka:alpine
        imagePullPolicy: IfNotPresent
        env:
        - name: NODE_ID
          value: "1"
        - name: ZK_NODES
          value: "zookeeper-1,zookeeper-2,zookeeper-3"
        ports:
        - containerPort: 9092
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kafka-deployment-2
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: kafka-2
    spec:
      containers:
      - name: kafka-2
        image: kafka:alpine
        imagePullPolicy: IfNotPresent
        env:
        - name: NODE_ID
          value: "2"
        - name: ZK_NODES
          value: "zookeeper-1,zookeeper-2,zookeeper-3"
        ports:
        - containerPort: 9092
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kafka-deployment-3
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: kafka-3
    spec:
      containers:
      - name: kafka-3
        image: kafka:alpine
        imagePullPolicy: IfNotPresent
        env:
        - name: NODE_ID
          value: "3"
        - name: ZK_NODES
          value: "zookeeper-1,zookeeper-2,zookeeper-3"
        ports:
        - containerPort: 9092
---
apiVersion: v1
kind: Service
metadata:
  name: kafka-1
  labels:
    name: kafka-1
spec:
  ports:
  - name: client
    port: 9092
    protocol: TCP
  selector:
    name: kafka-1
---
apiVersion: v1
kind: Service
metadata:
  name: kafka-2
  labels:
    name: kafka-2
spec:
  ports:
  - name: client
    port: 9092
    protocol: TCP
  selector:
    name: kafka-2
---
apiVersion: v1
kind: Service
metadata:
  name: kafka-3
  labels:
    name: kafka-3
spec:
  ports:
  - name: client
    port: 9092
    protocol: TCP
  selector:
    name: kafka-3
```
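Verifying the brokers depends on what the custom kafka:alpine image contains; assuming the stock Kafka CLI scripts are on the container's PATH (an assumption about this image), a topic-creation smoke test could look like:

```
# Hypothetical: kafka-topics.sh must exist inside the kafka:alpine image
kubectl exec <kafka-pod-name> -- kafka-topics.sh --create \
  --zookeeper zookeeper-1:2181 --replication-factor 3 --partitions 1 --topic smoke-test
```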

# FAQ:

## kube-discovery error

    failed to create "kube-discovery" deployment [deployments.extensions "kube-discovery" already exists]

Reset kubeadm and initialize again:

```
kubeadm reset

kubeadm init
```
