Understanding Kubernetes series:

  1. Setting up a test environment by hand
  2. Basic concepts and operations

1. Preparing the Base Environment

Prepare three Ubuntu nodes running 16.04 and complete the following configuration:

  • Upgrade the system
  • Set up /etc/hosts identically on all nodes
  • Enable passwordless SSH from node 0 to the other two nodes (a sketch follows the table below)
Node name    IP address      etcd   flanneld   docker   K8S
kub-node-0   172.23.100.4    Y      Y          Y        master: kubectl, kube-apiserver, kube-controller-manager, kube-scheduler
kub-node-1   172.23.100.5    Y      Y          Y        node: kube-proxy, kubelet
kub-node-2   172.23.100.6    Y      Y          Y        node: kube-proxy, kubelet
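The /etc/hosts and SSH items above can be scripted. A minimal sketch, assuming the node names and IPs from the table and the ubuntu login user that appears elsewhere in this walkthrough:

# run as root on every node: make name resolution identical everywhere
cat >> /etc/hosts <<EOF
172.23.100.4 kub-node-0
172.23.100.5 kub-node-1
172.23.100.6 kub-node-2
EOF
# on kub-node-0 only: generate a key pair and push it to the other two nodes
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
ssh-copy-id ubuntu@kub-node-1
ssh-copy-id ubuntu@kub-node-2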

2. Installation and Deployment

2.1 Installing etcd

2.1.1 Installation

Run the following commands on all three nodes to install etcd 3.2.5:
ETCD_VERSION=${ETCD_VERSION:-"3.2.5"}
ETCD="etcd-v${ETCD_VERSION}-linux-amd64"
curl -L https://github.com/coreos/etcd/releases/download/v${ETCD_VERSION}/${ETCD}.tar.gz -o etcd.tar.gz
tar xzf etcd.tar.gz -C /tmp
mkdir -p /opt/bin
mv /tmp/${ETCD}/etcd /tmp/${ETCD}/etcdctl /opt/bin/

2.1.2 Configuration

On all three nodes:
  • Create the directories:
sudo mkdir -p /var/lib/etcd/
sudo mkdir -p /opt/config/
  • Create the /opt/config/etcd.conf file:
ETCD_DATA_DIR=/var/lib/etcd
ETCD_NAME="kub-node-0"
ETCD_INITIAL_CLUSTER="kub-node-0=http://172.23.100.4:2380,kub-node-1=http://172.23.100.5:2380,kub-node-2=http://172.23.100.6:2380"
ETCD_INITIAL_CLUSTER_STATE=new
ETCD_LISTEN_PEER_URLS=http://172.23.100.4:2380
ETCD_INITIAL_ADVERTISE_PEER_URLS=http://172.23.100.4:2380
ETCD_ADVERTISE_CLIENT_URLS=http://172.23.100.4:2379
ETCD_LISTEN_CLIENT_URLS=http://172.23.100.4:2379,http://127.0.0.1:2379

Notes:

(1) Once the etcd cluster is up on node 0, the ETCD_INITIAL_CLUSTER_STATE value on nodes 1 and 2 must be changed to existing, meaning they join the existing cluster. Otherwise each of them would create its own cluster instead of joining this one.
(2) On each node, the IP addresses must be changed to that node's own address (a sketch follows below; the cluster-state change in note (1) still has to be made by hand).
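A minimal sketch for note (2), run on each node after copying the node-0 file over. It assumes each machine's hostname matches its node name and that hostname -I returns the 172.23.100.x address first:

LOCAL_IP=$(hostname -I | awk '{print $1}')
# rewrite the listen/advertise URLs to the local IP, but leave the ETCD_INITIAL_CLUSTER line alone
sed -i '/^ETCD_INITIAL_CLUSTER=/!s/172\.23\.100\.4/'"${LOCAL_IP}"'/g' /opt/config/etcd.conf
# set the member name to the local hostname (kub-node-0/1/2)
sed -i 's/^ETCD_NAME=.*/ETCD_NAME="'"$(hostname)"'"/' /opt/config/etcd.conf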
  • Create the /lib/systemd/system/etcd.service file:
[Unit]
Description=Etcd Server
Documentation=https://github.com/coreos/etcd
After=network.target
[Service]
User=root
Type=simple
EnvironmentFile=-/opt/config/etcd.conf
ExecStart=/opt/bin/etcd
Restart=on-failure
RestartSec=10s
LimitNOFILE=
[Install]
WantedBy=multi-user.target

This file is identical on every node.

  • Start the service on all three nodes:
systemctl daemon-reload
systemctl enable etcd
systemctl start etcd

2.1.3 Testing the Service

  • Check the etcd cluster health:
root@kub-node-:/home/ubuntu# /opt/bin/etcdctl cluster-health
member 664b85ff39242fbc is healthy: got healthy result from http://172.23.100.6:2379
member 9dd263662a4b6f73 is healthy: got healthy result from http://172.23.100.4:2379
member b17535572fd6a37b is healthy: got healthy result from http://172.23.100.5:2379
cluster is healthy
  • List the etcd cluster members:
root@kub-node-:/home/ubuntu# /opt/bin/etcdctl member list
9dd263662a4b6f73: name=kub-node- peerURLs=http://172.23.100.4:2380 clientURLs=http://172.23.100.4:2379 isLeader=false
b17535572fd6a37b: name=kub-node- peerURLs=http://172.23.100.5:2380 clientURLs=http://172.23.100.5:2379 isLeader=true
e6db3cac1db23670: name=kub-node- peerURLs=http://172.23.100.6:2380 clientURLs=http://172.23.100.6:2379 isLeader=false
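Beyond the health and member checks, a simple write/read round trip confirms that the cluster actually serves requests. A quick sketch using the v2 etcdctl subcommands that this etcdctl defaults to (the key name is arbitrary):

/opt/bin/etcdctl set /test/hello world
/opt/bin/etcdctl get /test/hello
/opt/bin/etcdctl rm /test/hello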

2.2 Deploying flanneld

2.2.1 Installing Version 0.8.0

On every node:

curl -L https://github.com/coreos/flannel/releases/download/v0.8.0/flannel-v0.8.0-linux-amd64.tar.gz -o flannel.tar.gz
tar xzf flannel.tar.gz -C /tmp
mv /tmp/flanneld /opt/bin/

2.2.2 Configuration

On every node:
  • Create the /lib/systemd/system/flanneld.service file:
[Unit]
Description=Flanneld
Documentation=https://github.com/coreos/flannel
After=network.target
Before=docker.service
[Service]
User=root
ExecStart=/opt/bin/flanneld \
--etcd-endpoints="http://172.23.100.4:2379,http://172.23.100.5:2379,http://172.23.100.6:2379" \
--iface=172.23.100.4 \
--ip-masq
Restart=on-failure
Type=notify
LimitNOFILE=

Note: on each node, set iface to that node's own IP address.

  • On node 0, run:
/opt/bin/etcdctl --endpoints="http://172.23.100.4:2379,http://172.23.100.5:2379,http://172.23.100.6:2379" mk /coreos.com/network/config '{"Network":"10.1.0.0/16", "Backend": {"Type": "vxlan"}}'

Confirm:

root@kub-node-:/home/ubuntu# /opt/bin/etcdctl --endpoints="http://172.23.100.4:2379,http://172.23.100.5:2379,http://172.23.100.6:2379" get /coreos.com/network/config
{"Network":"10.1.0.0/16", "Backend": {"Type": "vxlan"}}
  • Start flanneld on all three nodes:
systemctl daemon-reload
systemctl enable flanneld
systemctl start flanneld

Note: the flannel service must start before Docker. When flanneld starts, it mainly does the following:

  • Fetches the network configuration from etcd.
  • Allocates a subnet for the node and registers it in etcd.
  • Writes the subnet information to /run/flannel/subnet.env (an example follows this list).
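For reference, /run/flannel/subnet.env on one node looks roughly like the following. The exact subnet and MTU differ per node and per NIC, so treat these values as an illustration only; they are merely consistent with the 10.1.0.0/16 network configured above and the subnets listed below:

FLANNEL_NETWORK=10.1.0.0/16
FLANNEL_SUBNET=10.1.35.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true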

At this point the subnets can be seen in etcd:

root@kub-node-:/home/ubuntu/kub# /opt/bin/etcdctl --endpoints="http://172.23.100.4:2379,http://172.23.100.5:2379,http://172.23.100.6:2379" ls /coreos.com/network/subnets
/coreos.com/network/subnets/10.1.35.0-
/coreos.com/network/subnets/10.1.1.0-
/coreos.com/network/subnets/10.1.79.0-

2.2.3 Verification

  • Check its status by running service flanneld status.
  • Check the flannel virtual NICs. Their configuration must match what is stored in etcd.
root@kub-node-:/home/ubuntu/kub# ifconfig flannel.
flannel. Link encap:Ethernet HWaddr :fc::::
inet addr:10.1.35.0 Bcast:0.0.0.0 Mask:255.255.255.255

root@kub-node-:/home/ubuntu# ifconfig flannel.
flannel. Link encap:Ethernet HWaddr 0a:6e:a6:6f::
inet addr:10.1.1.0 Bcast:0.0.0.0 Mask:255.255.255.255

root@kub-node-:/home/ubuntu# ifconfig flannel.
flannel. Link encap:Ethernet HWaddr 6e::b3::1e:f4
inet addr:10.1.79.0 Bcast:0.0.0.0 Mask:255.255.255.255

2.3 Deploying Docker

2.3.1 Installation

Following https://docs.docker.com/engine/installation/linux/docker-ce/ubuntu/#install-docker-ce-1, run the following commands on every node to install Docker:
sudo apt-get update
sudo apt-get install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo apt-key fingerprint 0EBFCD88
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable"
sudo apt-get update
sudo apt-get install docker-ce

2.3.2 Verification

Create and run the hello-world container:

root@kub-node-:/home/ubuntu/kub# docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
ca4f61b1923c: Pull complete
Digest: sha256:445b2fe9afea8b4aa0b2f27fe49dd6ad130dfe7a8fd0832be5de99625dad47cd
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

2.3.3 Configuration

On every node:
  • Go to the /tmp directory and run cp mk-docker-opts.sh /usr/bin/ to copy the script.
  • Run the following commands:
root@kub-node-:/home/ubuntu/kub# mk-docker-opts.sh -i
root@kub-node-:/home/ubuntu/kub# source /run/flannel/subnet.env
root@kub-node-:/home/ubuntu/kub# ifconfig docker0
docker0 Link encap:Ethernet HWaddr ::bc::d0:
inet addr:172.17.0.1 Bcast:172.17.255.255 Mask:255.255.0.0
inet6 addr: fe80:::bcff:fe71:d022/ Scope:Link
UP BROADCAST MULTICAST MTU: Metric:
RX packets: errors: dropped: overruns: frame:
TX packets: errors: dropped: overruns: carrier:
collisions: txqueuelen:
RX bytes: (0.0 B) TX bytes: (258.0 B)

root@kub-node-:/home/ubuntu/kub# ifconfig docker0 ${FLANNEL_SUBNET}
root@kub-node-:/home/ubuntu/kub# ifconfig docker0
docker0 Link encap:Ethernet HWaddr ::bc::d0:
inet addr:10.1.35.1 Bcast:10.1.35.255 Mask:255.255.255.0
inet6 addr: fe80:::bcff:fe71:d022/ Scope:Link
UP BROADCAST MULTICAST MTU: Metric:
RX packets: errors: dropped: overruns: frame:
TX packets: errors: dropped: overruns: carrier:
collisions: txqueuelen:
RX bytes: (0.0 B) TX bytes: (258.0 B)
  • Modify /lib/systemd/system/docker.service as follows:
[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
EnvironmentFile=/var/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd -g /data/docker --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}
ExecReload=/bin/kill -s HUP $MAINPID
#ExecStart=/usr/bin/dockerd -H fd://
#ExecReload=/bin/kill -s HUP $MAINPID
  • Open up the iptables rules:
iptables -F
iptables -X
iptables -Z
iptables -P INPUT ACCEPT
iptables -P OUTPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables-save
  • Restart the Docker service:
systemctl daemon-reload
systemctl enable docker
systemctl restart docker
  • Verify
On all three nodes, run docker run -it ubuntu bash to start an ubuntu container; their IPs are 10.1.35.2, 10.1.79.2 and 10.1.1.2 respectively, and they can ping each other. (A scripted version of this check is sketched below.)
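A scripted version of the cross-node connectivity check, in case you prefer not to do it interactively. The image, container name, and target IP are assumptions; substitute the container IPs reported on your own nodes:

# on each node, start a long-running test container and print its IP
docker run -d --name net-test busybox sleep 3600
docker inspect -f '{{.NetworkSettings.IPAddress}}' net-test
# from any node, ping the IP reported on another node
docker exec net-test ping -c 3 10.1.79.2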

2.4 Creating and Configuring Certificates

2.4.1 Configuration on Node 0

  • On node 0, create the master_ssl.cnf file:
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster.local
DNS.5 = master
IP.1 = 192.1.0.1
IP.2 = 172.23.100.4
  • Generate the master certificates:
openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -subj "/CN=company.com" -days 10000 -out ca.crt
openssl genrsa -out server.key 2048
openssl req -new -key server.key -subj "/CN=master" -config master_ssl.cnf -out server.csr
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 10000 -extensions v3_req -extfile master_ssl.cnf -out server.crt
openssl genrsa -out client.key 2048
openssl req -new -key client.key -subj "/CN=node" -out client.csr
openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out client.crt -days 10000
  • Copy the generated files to the /root/key directory:
root@kub-node-0:/home/ubuntu/kub# ls /root/key
ca.crt ca.key client.crt client.key server.crt server.key
  • Copy ca.crt and ca.key to the /home/ubuntu/kub directory on each node.
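Before distributing the certificates it is worth checking that they chain to the CA and that the server certificate carries the expected SANs. A small sketch using standard openssl commands, run in the directory that holds the generated files:

openssl verify -CAfile ca.crt server.crt
openssl verify -CAfile ca.crt client.crt
openssl x509 -in server.crt -noout -text | grep -A1 'Subject Alternative Name'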

2.4.2 Configuration on Nodes 1 and 2

Perform the following steps on nodes 1 and 2. The example below is for node 2; on node 1, change the IP address accordingly.

  • Run:
CLIENT_IP=172.23.100.6
openssl genrsa -out client.key 2048
openssl req -new -key client.key -subj "/CN=${CLIENT_IP}" -out client.csr
openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out client.crt -days 10000
  • Result:
root@kub-node-2:/home/ubuntu/kub# ls -lt
total 8908
-rw-r--r-- 1 root root 985 Dec 31 20:57 client.crt
-rw-r--r-- 1 root root 17 Dec 31 20:57 ca.srl
-rw-r--r-- 1 root root 895 Dec 31 20:57 client.csr
-rw-r--r-- 1 root root 1675 Dec 31 20:57 client.key
-rw-r--r-- 1 root root 1099 Dec 31 20:54 ca.crt
-rw-r--r-- 1 root root 1675 Dec 31 20:54 ca.key
  • Copy the client and CA .crt and .key files to the /root/key directory. It now contains four files:
root@kub-node-2:/home/ubuntu# ls /root/key
ca.crt ca.key client.crt client.key
  • Create the /etc/kubernetes/kubeconfig file:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /root/key/ca.crt
    server: https://172.23.100.4:6443
  name: ubuntu
contexts:
- context:
    cluster: ubuntu
    user: ubuntu
  name: ubuntu
current-context: ubuntu
kind: Config
preferences: {}
users:
- name: ubuntu
  user:
    client-certificate: /root/key/client.crt
    client-key: /root/key/client.key
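Once the apiserver from section 2.5 is running, the node-side certificates can be smoke-tested with a plain curl against the secure port. This is just a TLS/connectivity check, not part of the original walkthrough:

curl --cacert /root/key/ca.crt --cert /root/key/client.crt --key /root/key/client.key https://172.23.100.4:6443/version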

2.5 Kubernetes Master Node Configuration

Perform the following steps on node 0.

2.5.1 Installing Kubernetes 1.8.5

curl -L https://dl.k8s.io/v1.8.5/kubernetes-server-linux-amd64.tar.gz -o kuber.tar.gz
mkdir -p /tmp3 /opt/bin
tar xzf kuber.tar.gz -C /tmp3
mv /tmp3/kubernetes/server/bin/* /opt/bin

2.5.2 Configuring the Services

  • Create the /lib/systemd/system/kube-apiserver.service file:
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
User=root
ExecStart=/opt/bin/kube-apiserver \
--secure-port=6443 \
--etcd-servers=http://172.23.100.4:2379,http://172.23.100.5:2379,http://172.23.100.6:2379 \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--allow-privileged=false \
--service-cluster-ip-range=192.1.0.0/16 \
--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,SecurityContextDeny,ResourceQuota \
--service-node-port-range=30000-32767 \
--advertise-address=172.23.100.4 \
--client-ca-file=/root/key/ca.crt \
--tls-cert-file=/root/key/server.crt \
--tls-private-key-file=/root/key/server.key
Restart=on-failure
Type=notify
LimitNOFILE=
[Install]
WantedBy=multi-user.target
  • Create the /lib/systemd/system/kube-controller-manager.service file:
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
User=root
ExecStart=/opt/bin/kube-controller-manager \
--master=https://172.23.100.4:6443 \
--root-ca-file=/root/key/ca.crt \
--service-account-private-key-file=/root/key/server.key \
--kubeconfig=/etc/kubernetes/kubeconfig \
--logtostderr=false \
--log-dir=/var/log/kubernetes
Restart=on-failure
LimitNOFILE=
[Install]
WantedBy=multi-user.target
  • Create the /lib/systemd/system/kube-scheduler.service file:
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
User=root
ExecStart=/opt/bin/kube-scheduler \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--master=https://172.23.100.4:6443 \
--kubeconfig=/etc/kubernetes/kubeconfig
Restart=on-failure
LimitNOFILE=
[Install]
WantedBy=multi-user.target
  • Start the services:
systemctl daemon-reload
systemctl enable kube-apiserver
systemctl enable kube-controller-manager
systemctl enable kube-scheduler
systemctl enable flanneld
systemctl start kube-apiserver
systemctl start kube-controller-manager
systemctl start kube-scheduler
  • Check the status of each service:
systemctl status kube-apiserver
systemctl status kube-controller-manager
systemctl status kube-scheduler 
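In addition to systemctl status, the apiserver can be probed directly. The sketch below uses the insecure local port, which defaults to 127.0.0.1:8080 in this release since the unit file above does not override it (the same endpoint kubectl reports in section 3.1):

curl http://127.0.0.1:8080/healthz
curl http://127.0.0.1:8080/version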

2.6 Configuring kubectl

On node 0, create the /root/.kube/config file:

apiVersion: v1
clusters:
- cluster:
    certificate-authority: /root/key/ca.crt
  name: ubuntu
contexts:
- context:
    cluster: ubuntu
    user: ubuntu
  name: ubuntu
current-context: ubuntu
kind: Config
preferences: {}
users:
- name: ubuntu
  user:
    client-certificate: /root/key/client.crt
    client-key: /root/key/client.key
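A quick sanity check with this config in place; componentstatuses shows the scheduler, controller manager, and the etcd members in one view:

kubectl cluster-info
kubectl get componentstatuses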

2.7 Kubernetes Node Configuration

Nodes 1 and 2 are the K8S worker nodes. Perform the following steps on both of them.

2.7.1 Installation

Same as section 2.5.1 (install the Kubernetes 1.8.5 binaries).

2.7.2 Configuration

  • Perform the steps on nodes 1 and 2 separately. The content below is for node 1; on node 2, change 172.23.100.5 to 172.23.100.6.
  • Create the /lib/systemd/system/kubelet.service file:
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service
[Service]
ExecStart=/opt/bin/kubelet \
--hostname-override=172.23.100.5 \
--pod-infra-container-image="docker.io/kubernetes/pause" \
--cluster-domain=cluster.local \
--log-dir=/var/log/kubernetes \
--cluster-dns=192.1.0.100 \
--kubeconfig=/etc/kubernetes/kubeconfig \
--logtostderr=false
Restart=on-failure
KillMode=process
[Install]
WantedBy=multi-user.target
  • Create the /lib/systemd/system/kube-proxy.service file:
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
ExecStart=/opt/bin/kube-proxy \
--hostname-override=172.23.100.5 \
--master=https://172.23.100.4:6443 \
--log-dir=/var/log/kubernetes \
--kubeconfig=/etc/kubernetes/kubeconfig \
--logtostderr=false
Restart=on-failure
[Install]
WantedBy=multi-user.target
  • Start the services:
systemctl daemon-reload
systemctl enable kubelet
systemctl enable kube-proxy
systemctl start kubelet
systemctl start kube-proxy
  • Check the status of each component:
systemctl status kubelet
systemctl status kube-proxy
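If either component fails to start, its journal is the quickest place to look; a short sketch using the unit names configured above:

journalctl -u kubelet --no-pager -n 50
journalctl -u kube-proxy --no-pager -n 50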

3. Verification

3.1 Getting Cluster Information

Run the following commands on node 0.

  • Get the master:
root@kub-node-:/home/ubuntu/kub# kubectl cluster-info
Kubernetes master is running at http://localhost:8080
  • List the nodes:
root@kub-node-:/home/ubuntu/kub# kubectl get nodes
NAME STATUS ROLES AGE VERSION
172.23.100.5 Ready <none> 2d v1.8.5
172.23.100.6 Ready <none> 2d v1.8.5

3.2 Deploying the First Application

  • Create the nginx4.yml file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-nginx4
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: my-nginx4
    spec:
      containers:
      - name: my-nginx4
        image: nginx
        ports:
        - containerPort: 80
  • Create a Deployment:
root@kub-node-:/home/ubuntu/kub# kubectl create -f nginx4.yml
deployment "my-nginx4" created
  • Check the status:
root@kub-node-:/home/ubuntu/kub# kubectl get all
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/my-nginx4                                                3m

NAME                      DESIRED   CURRENT   READY   AGE
rs/my-nginx4-75bbfccc7c                               3m

NAME                            READY   STATUS    RESTARTS   AGE
po/my-nginx4-75bbfccc7c-5frpl   /       Running              3m
po/my-nginx4-75bbfccc7c-5kr4j   /       Running              3m

NAME             TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
svc/kubernetes   ClusterIP   192.1.0.1    <none>        /TCP      2d
  • View the details of the deployment:
root@kub-node-:/home/ubuntu/kub# kubectl describe deployments my-nginx4
Name: my-nginx4
Namespace: default
CreationTimestamp: Wed, Jan :: +
Labels: app=my-nginx4
Annotations: deployment.kubernetes.io/revision=
Selector: app=my-nginx4
Replicas: desired | updated | total | available | unavailable
StrategyType: RollingUpdate
MinReadySeconds:
RollingUpdateStrategy: max unavailable, max surge
Pod Template:
Labels: app=my-nginx4
Containers:
my-nginx4:
Image: nginx
Port: /TCP
Environment: <none>
Mounts: <none>
Volumes: <none>
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
OldReplicaSets: <none>
NewReplicaSet: my-nginx4-75bbfccc7c (/ replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 1m deployment-controller Scaled up replica set my-nginx4-75bbfccc7c to
  • Viewing the pod's details shows its containers, IP address, and the node it runs on:
root@kub-node-:/home/ubuntu/kub# kubectl describe pod my-nginx4-75bbfccc7c-5frpl
Name: my-nginx4-75bbfccc7c-5frpl
Namespace: default
Node: 172.23.100.5/172.23.100.5
Start Time: Wed, Jan :: +
Labels: app=my-nginx4
pod-template-hash=
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"my-nginx4-75bbfccc7c","uid":"c2d83729-f023-11e7-a605-fa163e9a22a...
Status: Running
IP: 10.1.1.3
Created By: ReplicaSet/my-nginx4-75bbfccc7c
Controlled By: ReplicaSet/my-nginx4-75bbfccc7c
Containers:
my-nginx4:
Container ID: docker://4a994121e309fb81181e22589982bf8c053287616ba7c92dcddc5e7fb49927b1
Image: nginx
Image ID: docker-pullable://nginx@sha256:cf8d5726fc897486a4f628d3b93483e3f391a76ea4897de0500ef1f9abcd69a1
Port: /TCP
State: Running
Started: Wed, Jan :: +
Ready: True
Restart Count:
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-b2p4z (ro)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
default-token-b2p4z:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-b2p4z
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 5m default-scheduler Successfully assigned my-nginx4-75bbfccc7c-5frpl to 172.23.100.5
Normal SuccessfulMountVolume 5m kubelet, 172.23.100.5 MountVolume.SetUp succeeded for volume "default-token-b2p4z"
Normal Pulling 5m kubelet, 172.23.100.5 pulling image "nginx"
Normal Pulled 5m kubelet, 172.23.100.5 Successfully pulled image "nginx"
Normal Created 5m kubelet, 172.23.100.5 Created container
Normal Started 5m kubelet, 172.23.100.5 Started container
  • On node 1 you can see the containers that belong to this pod. The pause container is special: it is a K8S infrastructure container.
root@kub-node-:/home/ubuntu# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4a994121e309 nginx "nginx -g 'daemon of…" minutes ago Up minutes k8s_my-nginx4_my-nginx4-75bbfccc7c-5frpl_default_c35b9521-f023-11e7-a605-fa163e9a22a6_0
e3f39d708800 kubernetes/pause "/pause" minutes ago Up minutes k8s_POD_my-nginx4-75bbfccc7c-5frpl_default_c35b9521-f023-11e7-a605-fa163e9a22a6_0
  • Create a NodePort service to access the application:
root@kub-node-:/home/ubuntu/kub# kubectl expose deployment my-nginx4 --type=NodePort --name=nginx-nodeport
service "nginx-nodeport" exposed
  • The port exposed on the node IPs is 31362:
root@kub-node-:/home/ubuntu/kub# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 192.1.0.1 <none> /TCP 2d
nginx-nodeport NodePort 192.1.216.223 <none> :/TCP 31s
  • Access nginx via <node-ip>:<node-port>:
root@kub-node-:/home/ubuntu/kub# curl http://172.23.100.5:31362
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p> <p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p> <p><em>Thank you for using nginx.</em></p>
</body>
</html>
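As a follow-up to the NodePort step above, the assigned port can also be read directly instead of eyeballing the table; the service name comes from the expose command, and the node IP and port from the output shown earlier:

kubectl get svc nginx-nodeport -o jsonpath='{.spec.ports[0].nodePort}'
curl http://172.23.100.5:31362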

4. Pitfalls Encountered

  • K8S 1.7.2 could not create pods; kubelet kept logging the error below. This is caused by a bug in that version; switching to 1.8.5 fixed it.
W0101 ::25.636397    helpers.go:] eviction manager: no observation found for eviction signal allocatableNodeFs.available
W0101 ::35.680877 helpers.go:] eviction manager: no observation found for eviction signal allocatableNodeFs.available
W0101 ::45.728875 helpers.go:] eviction manager: no observation found for eviction signal allocatableNodeFs.available
W0101 ::55.756455 helpers.go:] eviction manager: no observation found for eviction signal allocatableNodeFs.available
  • No logs were visible for the k8s services. The fix is to set logtostderr=false in each service's configuration, add log-dir, and create that directory by hand.
  • Deploying an application from the hello-world container left the pods in CrashLoopBackOff. The reason is that this container exits as soon as it starts, so K8S keeps restarting the pod.
root@kub-node-:/home/ubuntu# kubectl get pods
NAME READY STATUS RESTARTS AGE
hello-world-5c9bd8867-76jjg / CrashLoopBackOff 12m
hello-world-5c9bd8867- / CrashLoopBackOff 12m
hello-world-5c9bd8867-cn75n / CrashLoopBackOff 12m
  • The first deployment failed, with the pod stuck in ContainerCreating; the kubelet log is below. The cause is that kubelet tries to pull the pause image from gcr.io, which is blocked. The fix is to set --pod-infra-container-image="docker.io/kubernetes/pause" in the kubelet service configuration (see also the pre-pull sketch at the end of this list).
E0101 ::51.908652    kuberuntime_manager.go:] createPodSandbox for pod "my-nginx3-596b5c5f58-vgvlb_default(aedfbe1b-eefc-11e7-b10d-fa163e9a22a6)" failed: rpc error: code = Unknown desc = failed pulling image "gcr.io/google_containers/pause-amd64:3.0": Error response from daemon: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
E0101 ::51.908755 pod_workers.go:] Error syncing pod aedfbe1b-eefc-11e7-b10d-fa163e9a22a6 ("my-nginx3-596b5c5f58-vgvlb_default(aedfbe1b-eefc-11e7-b10d-fa163e9a22a6)"), skipping: failed to "CreatePodSandbox" for "my-nginx3-596b5c5f58-vgvlb_default(aedfbe1b-eefc-11e7-b10d-fa163e9a22a6)" with CreatePodSandboxError: "CreatePodSandbox for pod \"my-nginx3-596b5c5f58-vgvlb_default(aedfbe1b-eefc-11e7-b10d-fa163e9a22a6)\" failed: rpc error: code = Unknown desc = failed pulling image \"gcr.io/google_containers/pause-amd64:3.0\": Error response from daemon: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
  • After creating a LoadBalancer-type service, its EXTERNAL-IP stayed in Pending. This feature requires cloud-provider support; switching the service type to NodePort works.
  • After deploying the nginx application and a NodePort service, the application could not be reached through the service's nodePort. The cause was a wrong containerPort in the yml file; nginx listens on port 80. Redeploying after the fix resolved it.
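For the pause-image pitfall above, an alternative (or complementary) workaround is to pre-pull a pause image on every node and tag it with the name kubelet expects by default; the image names below are assumptions, adjust them to a registry you can actually reach:

docker pull docker.io/kubernetes/pause
docker tag docker.io/kubernetes/pause gcr.io/google_containers/pause-amd64:3.0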
 
 
 