Kubernetes Cluster Deployment and Basic Command-Line Operations
Deploy Docker on all three nodes first (see: https://www.cnblogs.com/rdchenxi/p/10381631.html)
Environment preparation
[root@master ~]# hostnamectl set-hostname master && exec bash
[root@node01 ~]# hostnamectl set-hostname node01 && exec bash
[root@node02 ~]# hostnamectl set-hostname node02 && exec bash
Hostname resolution
[root@master ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.183.11 master
192.168.183.12 node01
192.168.183.13 node02
[root@master ~]# scp /etc/hosts node01:/etc/
The authenticity of host 'node01 (192.168.183.12)' can't be established.
ECDSA key fingerprint is SHA256:e66/gR4gS9VD4XMHWRVVglIHmU6I4/dgBiaB/swFLVM.
ECDSA key fingerprint is MD5:fd:2a:6c:8d:f0:c9:c4:b2:8d:2d:05:cb:ac:c0:41:50.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node01,192.168.183.12' (ECDSA) to the list of known hosts.
root@node01's password:
hosts 100% 227 98.2KB/s 00:00
[root@master ~]# scp /etc/hosts node02:/etc/
The authenticity of host 'node02 (192.168.183.13)' can't be established.
ECDSA key fingerprint is SHA256:e66/gR4gS9VD4XMHWRVVglIHmU6I4/dgBiaB/swFLVM.
ECDSA key fingerprint is MD5:fd:2a:6c:8d:f0:c9:c4:b2:8d:2d:05:cb:ac:c0:41:50.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node02,192.168.183.13' (ECDSA) to the list of known hosts.
root@node02's password:
hosts
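Optional: to avoid typing the root password for every scp, an SSH key can be distributed from the master first. A minimal sketch (run on the master; the node names come from /etc/hosts above):
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa                 # generate a key pair without a passphrase
for n in node01 node02; do ssh-copy-id root@$n; done     # copy the public key to each node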
Configure the Kubernetes yum repository on all three nodes
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
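After writing the repo file, the yum metadata can be refreshed to confirm the repository is reachable; a quick check (standard yum commands on CentOS 7, not part of the original steps):
yum clean all                      # drop any stale metadata
yum makecache fast                 # rebuild the cache from the new repo
yum repolist | grep kubernetes     # the kubernetes repo should show up in the list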
Note: the firewall and SELinux must be disabled. Install docker-ce on all three nodes:
yum -y install docker-ce
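The note above assumes the firewall and SELinux are already off. A minimal sketch of how that is usually done on CentOS 7 (run on all three nodes; turning swap off is optional here because the kubeadm commands below pass --ignore-preflight-errors=Swap):
systemctl disable --now firewalld                                       # stop and disable firewalld
setenforce 0                                                            # SELinux permissive for the current boot
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config     # keep SELinux off after reboot
swapoff -a                                                              # optional: disable swap instead of ignoring the preflight error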
[root@master ~]# vim /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
# users in mainland China: add these two proxy variables
Environment="HTTPS_PROXY=http://www.ik8s.io:10080"
Environment="NO_PROXY=127.0.0.0/8,172.20.0.0/16"
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always

# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
# Both the old, and new location are accepted by systemd 229 and up, so using the old location
# to make them work for either version of systemd.
StartLimitBurst=3

# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
# this option work for either version of systemd.
StartLimitInterval=60s

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Comment TasksMax if your systemd version does not support it.
# Only systemd 226 and above support this option.
TasksMax=infinity

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
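After editing the unit file, systemd has to be reloaded and Docker started. The kubeadm join output later also warns that Docker is using the cgroupfs driver; switching Docker to the systemd cgroup driver through /etc/docker/daemon.json is one common way to address that. A sketch of both steps (not part of the original post):
cat <<EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl daemon-reload                 # pick up the edited unit file
systemctl enable --now docker           # start Docker and enable it at boot
docker info | grep -i cgroup            # should now report: Cgroup Driver: systemd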
Install (master)
[root@master ~]# yum -y install kubelet kubeadm kubectl      # installed on the master
[root@master ~]# systemctl enable kubelet.service
Initialization
[root@master ~]# vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"      # extra kubelet argument so swap is tolerated
[root@master ~]# echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
[root@master ~]# kubeadm init --kubernetes-version=v1.15.1 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap
At the end of the init output, note the join command:
kubeadm join 192.168.183.11:6443 --token lotfu3.ag7oqtqaewlxg9xy \
--discovery-token-ca-cert-hash sha256:401c4f4770ef5acb209ec3d2da1c0d0204c2ea790c05ceb32b53f287ccc280ca
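The bootstrap token in this join command expires (24 hours by default). If a node is added later, a fresh join command can be printed on the master; a minimal sketch:
kubeadm token create --print-join-command     # prints a new 'kubeadm join ...' line with a valid token and CA hash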
Set up kubectl access
[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master ~]# chown $(id -u):$(id -g) $HOME/.kube/config
Check component status
[root@master ~]# kubectl get cs
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health":"true"}
Deploy the flannel network plugin
[root@master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
podsecuritypolicy.extensions/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master   Ready    master   19m   v1.15.1      # the node is OK once it reaches the Ready state
Check the flannel deployment status
[root@master ~]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-5c98db65d4-dc2hj 1/1 Running 0 21m
coredns-5c98db65d4-j4zc5 1/1 Running 0 21m
etcd-master 1/1 Running 0 20m
kube-apiserver-master 1/1 Running 0 20m
kube-controller-manager-master 1/1 Running 0 20m
kube-flannel-ds-amd64-czvzm      1/1   Running   0   4m21s      # flannel is running
kube-proxy-d5qcj 1/1 Running 0 21m
kube-scheduler-master 1/1 Running 0 20m
List the cluster namespaces
[root@master ~]# kubectl get ns
NAME STATUS AGE
default Active 23m
kube-node-lease Active 23m
kube-public Active 23m
kube-system Active 23m
Install on the two worker nodes
yum -y install kubelet kubeadm
Configure the nodes, enable kubelet, and join them to the cluster
[root@master ~]# scp /etc/sysconfig/kubelet node01:/etc/sysconfig/
[root@master ~]# scp /etc/sysconfig/kubelet node02:/etc/sysconfig/
[root@node01 ~]# systemctl enable kubelet.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@node02 ~]# systemctl enable kubelet.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@node01 ~]# echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
[root@node02 ~]# echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
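Writing to /proc only lasts until the next reboot. To make the bridge netfilter setting persistent, the usual approach is a sysctl drop-in; a sketch (same on every node):
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system        # reload all sysctl configuration files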
[root@node01 ~]# kubeadm join 192.168.183.11:6443 --token lotfu3.ag7oqtqaewlxg9xy --discovery-token-ca-cert-hash sha256:401c4f4770ef5acb209ec3d2da1c0d0204c2ea790c05ceb32b53f287ccc280ca --ignore-preflight-errors=Swap
[preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    [WARNING Swap]: running with swap on is not supported. Please disable swap
    [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.0. Latest validated version: 18.09
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@node02 ~]# kubeadm join 192.168.183.11:6443 --token lotfu3.ag7oqtqaewlxg9xy --discovery-token-ca-cert-hash sha256:401c4f4770ef5acb209ec3d2da1c0d0204c2ea790c05ceb32b53f287ccc280ca --ignore-preflight-errors=Swap
[preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    [WARNING Swap]: running with swap on is not supported. Please disable swap
    [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.0. Latest validated version: 18.09
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
View the nodes from the master
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 33m v1.15.1
node01 Ready <none> 3m28s v1.15.1
node02 Ready <none> 3m3s v1.15.1
View detailed information for a node
[root@master ~]# kubectl describe node node01
View version information
[root@master ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.1", GitCommit:"4485c6f18cee9a5d3c3b4e523bd27972b1b53892", GitTreeState:"clean", BuildDate:"2019-07-18T09:18:22Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.1", GitCommit:"4485c6f18cee9a5d3c3b4e523bd27972b1b53892", GitTreeState:"clean", BuildDate:"2019-07-18T09:09:21Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
View cluster information
[root@master ~]# kubectl cluster-info
Kubernetes master is running at https://192.168.183.11:6443
KubeDNS is running at https://192.168.183.11:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Dry-run a pod with --dry-run=true (nothing is actually created)
[root@master ~]# kubectl run nginx --image=nginx:1.14-alpine --port=80 --replicas=1 --dry-run=true
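The dry run only validates the request. Combined with -o yaml it is also a convenient way to generate a manifest that can be saved and applied later; a sketch (the file name nginx-deploy.yaml is arbitrary):
kubectl run nginx --image=nginx:1.14-alpine --port=80 --replicas=1 --dry-run=true -o yaml > nginx-deploy.yaml
kubectl apply -f nginx-deploy.yaml      # create the same Deployment from the saved manifest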
Create a pod through a Deployment controller
[root@master ~]# kubectl run nginx --image=nginx:1.14-alpine --port=80 --replicas=1
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/nginx created
View the Deployment and the pods it manages
[root@master ~]# kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
nginx 1/1 1 1 2m8s
View pod information
[root@master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-5896f46c8-72wm4 1/1 Running 0 5m39s
View detailed pod information
[root@master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-5896f46c8-72wm4 1/1 Running 0 8m1s 10.244.1.2 node01 <none> <none>
Check the network interfaces on node01
[root@node01 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:2b:3b:45 brd ff:ff:ff:ff:ff:ff
inet 192.168.183.12/24 brd 192.168.183.255 scope global noprefixroute ens33
valid_lft forever preferred_lft forever
inet6 fe80::8dc3:2482:a2b9:c57e/64 scope link tentative noprefixroute dadfailed
valid_lft forever preferred_lft forever
inet6 fe80::10dc:280:ec28:2db4/64 scope link noprefixroute
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:62:cd:a6:be brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
link/ether 02:ee:3c:55:af:8f brd ff:ff:ff:ff:ff:ff
inet 10.244.1.0/32 scope global flannel.1
valid_lft forever preferred_lft forever
inet6 fe80::ee:3cff:fe55:af8f/64 scope link
valid_lft forever preferred_lft forever
5: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default qlen 1000
link/ether 26:ac:1e:b1:29:a4 brd ff:ff:ff:ff:ff:ff
inet 10.244.1.1/24 scope global cni0
valid_lft forever preferred_lft forever
inet6 fe80::24ac:1eff:feb1:29a4/64 scope link
valid_lft forever preferred_lft forever
6: veth64e1c1fd@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP group default
link/ether 8a:a3:86:62:9f:5d brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::88a3:86ff:fe62:9f5d/64 scope link
valid_lft forever preferred_lft forever
Access the pod IP; it is reachable from any node in the cluster
[root@master ~]# curl 10.244.1.2
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p> <p><em>Thank you for using nginx.</em></p>
</body>
</html>
The controller detects a dead or deleted pod and automatically creates a replacement
[root@master ~]# kubectl get pods      # current pod
NAME READY STATUS RESTARTS AGE
nginx-5896f46c8-72wm4 1/1 Running 0 15m
[root@master ~]# kubectl delete pods nginx-5896f46c8-72wm4      # delete the pod
pod "nginx-5896f46c8-72wm4" deleted
[root@master ~]# kubectl get pods      # a replacement pod is being created
NAME READY STATUS RESTARTS AGE
nginx-5896f46c8-zblcs 0/1 ContainerCreating 0 15s
[root@master ~]# kubectl get pods      # the replacement is back in a usable state
NAME READY STATUS RESTARTS AGE
nginx-5896f46c8-zblcs 1/1 Running 0 98s
[root@master ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-5896f46c8-zblcs 1/1 Running 0 117s 10.244.2.2 node02 <none> <none>
Options for exposing a pod, i.e. creating a Service
--type='': Type for this service: ClusterIP (reachable only from inside the cluster), NodePort, LoadBalancer, or ExternalName. Default is 'ClusterIP'.
kubectl expose (-f FILENAME | TYPE NAME) [--port=port the Service exposes] [--protocol=TCP|UDP|SCTP] [--target-port=port on the pod]
[--name=Service name] [--external-ip=external-ip-of-service] [--type=type] [options]
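For reference, these options also allow exposing a Deployment to the outside world in one step by choosing --type=NodePort, instead of editing the Service afterwards as done later in this post. A sketch (the Service name nginx-ext is just an example):
kubectl expose deployment nginx --name=nginx-ext --port=80 --target-port=80 --type=NodePort
kubectl get svc nginx-ext      # the PORT(S) column shows the automatically assigned node port, e.g. 80:3xxxx/TCP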
Expose the Deployment for access by pods inside the cluster (deployment = controller type, nginx = controller name)
[root@master ~]# kubectl expose deployment nginx --name=nginx --port=80 --target-port=80
service/nginx exposed
View the created Service
[root@master ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 101m
nginx        ClusterIP   10.110.130.60   <none>        80/TCP    2m44s      # this ClusterIP is what pod clients access
View Service details
[root@master ~]# kubectl describe svc nginx
Name: nginx
Namespace: default
Labels: run=nginx
Annotations: <none>
Selector: run=nginx
Type: ClusterIP
IP: 10.110.130.60
Port: <unset> 80/TCP
TargetPort: 80/TCP
Endpoints: 10.244.2.2:80
Session Affinity: None
Events: <none>
View the pod labels
[root@master ~]# kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
nginx-5896f46c8-zblcs 1/1 Running 0 52m pod-template-hash=5896f46c8,run=nginx
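The labels shown above are what the Service selector matches on. They can also be used directly to filter pods; a quick example:
kubectl get pods -l run=nginx            # only pods carrying the label run=nginx
kubectl get pods -l run=nginx -o wide    # same filter, with pod IP and node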
Delete the Service
[root@master ~]# kubectl delete svc nginx
service "nginx" deleted
View Deployment details
[root@master ~]# kubectl describe deployment nginx
Name: nginx
Namespace: default
CreationTimestamp: Thu, 25 Jul 2019 13:15:04 +0800
Labels: run=nginx
Annotations: deployment.kubernetes.io/revision: 1
Selector: run=nginx
Replicas: 1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: run=nginx
Containers:
nginx:
Image: nginx:1.14-alpine
Port: 80/TCP
Host Port: 0/TCP
Environment: <none>
Mounts: <none>
Volumes: <none>
Conditions:
Type Status Reason
---- ------ ------
Progressing True NewReplicaSetAvailable
Available True MinimumReplicasAvailable
OldReplicaSets: <none>
NewReplicaSet: nginx-5896f46c8 (1/1 replicas created)
Events: <none>
Dynamically adjust the number of pod replicas managed by a Deployment
[root@master ~]# kubectl run myapp --image=ikubernetes/myapp:v1 --replicas=2      # start a Deployment with 2 replicas
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/myapp created
[root@master ~]# kubectl get deployment      # list the Deployments
NAME READY UP-TO-DATE AVAILABLE AGE
myapp 2/2 2 2 110s
nginx 1/1 1 1 84m
[root@master ~]# kubectl get pods -o wide      # list all pods in the cluster
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
myapp-84cd4b7f95-px2kb 1/1 Running 0 3m27s 10.244.2.4 node02 <none> <none>
myapp-84cd4b7f95-xfcnk 1/1 Running 0 3m27s 10.244.1.6 node01 <none> <none>
nginx-5896f46c8-zblcs 1/1 Running 0 69m 10.244.2.2 node02 <none> <none>
[root@master ~]# kubectl expose deployment myapp --name=myapp --port=80
service/myapp exposed
[root@master ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 153m
myapp ClusterIP 10.103.191.244 <none> 80/TCP 35s
nginx ClusterIP 10.108.177.175 <none> 80/TCP 17m
[root@master ~]# kubectl scale --replicas=4 deployment myapp      # scale up to 4 replicas
deployment.extensions/myapp scaled
[root@master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
myapp-84cd4b7f95-px2kb 1/1 Running 0 30m 10.244.2.4 node02 <none> <none>
myapp-84cd4b7f95-tjgqz 1/1 Running 0 3s 10.244.2.5 node02 <none> <none>
myapp-84cd4b7f95-vphlz 0/1 ContainerCreating 0 3s <none> node01 <none> <none>
myapp-84cd4b7f95-xfcnk 1/1 Running 0 30m 10.244.1.6 node01 <none> <none>
nginx-5896f46c8-zblcs 1/1 Running 0 96m 10.244.2.2 node02 <none> <none>
[root@master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
myapp-84cd4b7f95-px2kb 1/1 Running 0 30m 10.244.2.4 node02 <none> <none>
myapp-84cd4b7f95-tjgqz 1/1 Running 0 5s 10.244.2.5 node02 <none> <none>
myapp-84cd4b7f95-vphlz 1/1 Running 0 5s 10.244.1.7 node01 <none> <none>
myapp-84cd4b7f95-xfcnk 1/1 Running 0 30m 10.244.1.6 node01 <none> <none>
nginx-5896f46c8-zblcs 1/1 Running 0 96m 10.244.2.2 node02 <none> <none>
[root@master ~]# kubectl scale --replicas=1 deployment myapp      # scale down to 1 replica
deployment.extensions/myapp scaled
[root@master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
myapp-84cd4b7f95-xfcnk 1/1 Running 0 31m 10.244.1.6 node01 <none> <none>
nginx-5896f46c8-zblcs 1/1 Running 0 97m 10.244.2.2 node02 <none> <none>
Update/upgrade the pod image
[root@master ~]# kubectl describe pods myapp-84cd4b7f95-xfcnk
Name: myapp-84cd4b7f95-xfcnk
Namespace: default
Priority: 0
Node: node01/192.168.183.12
Start Time: Thu, 25 Jul 2019 14:37:33 +0800
Labels: pod-template-hash=84cd4b7f95
run=myapp
Annotations: <none>
Status: Running
IP: 10.244.1.6
Controlled By: ReplicaSet/myapp-84cd4b7f95
Containers:
myapp:
Container ID: docker://c13e99d23870a37627bc6b207a6b71f8d306f0a73f58515e57f4d964070b0df9
Image:          ikubernetes/myapp:v1      # current image version
Image ID: docker-pullable://ikubernetes/myapp@sha256:9c3dc30b5219788b2b8a4b065f548b922a34479577befb54b03330999d30d513
Port: <none>
Host Port: <none>
State: Running
Started: Thu, 25 Jul 2019 14:37:51 +0800
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-2m2ts (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-2m2ts:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-2m2ts
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 35m default-scheduler Successfully assigned default/myapp-84cd4b7f95-xfcnk to node01
Normal Pulling 35m kubelet, node01 Pulling image "ikubernetes/myapp:v1"
Normal Pulled 34m kubelet, node01 Successfully pulled image "ikubernetes/myapp:v1"
Normal Created 34m kubelet, node01 Created container myapp
Normal Started 34m kubelet, node01 Started container myapp
[root@master ~]# kubectl set image deployment myapp myapp=ikubernetes/myapp:v2
deployment.extensions/myapp image updated
Explanation of kubectl set image deployment myapp myapp=ikubernetes/myapp:v2: set image replaces a container image, deployment is the controller type, the first myapp is the controller name, and myapp=ikubernetes/myapp:v2 sets the new image for the container named myapp.
[root@master ~]# kubectl describe pods myapp-746644f8d6-d7m7x
Name: myapp-746644f8d6-d7m7x
Namespace: default
Priority: 0
Node: node02/192.168.183.13
Start Time: Thu, 25 Jul 2019 15:18:39 +0800
Labels: pod-template-hash=746644f8d6
run=myapp
Annotations: <none>
Status: Running
IP: 10.244.2.6
Controlled By: ReplicaSet/myapp-746644f8d6
Containers:
myapp:
Container ID: docker://78184c2d58c04372e866da2f3e406a48257b0f97c831f54499b92b8d1dc40676
Image:          ikubernetes/myapp:v2      # image after the update
Image ID: docker-pullable://ikubernetes/myapp@sha256:85a2b81a62f09a414ea33b74fb8aa686ed9b168294b26b4c819df0be0712d358
Port: <none>
Host Port: <none>
State: Running
Started: Thu, 25 Jul 2019 15:18:50 +0800
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-2m2ts (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-2m2ts:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-2m2ts
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 4m9s default-scheduler Successfully assigned default/myapp-746644f8d6-d7m7x to node02
Normal Pulling 4m8s kubelet, node02 Pulling image "ikubernetes/myapp:v2"
Normal Pulled 3m58s kubelet, node02 Successfully pulled image "ikubernetes/myapp:v2"
Normal Created 3m58s kubelet, node02 Created container myapp
Normal Started 3m58s kubelet, node02 Started container myapp
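While an image update is rolling out, the progress can be watched from the master; a minimal sketch:
kubectl rollout status deployment myapp     # blocks until the new ReplicaSet is fully rolled out
kubectl get rs                              # shows the old and the new ReplicaSet of the myapp Deployment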
Roll back the Deployment
[root@master ~]# kubectl rollout undo deployment myapp      # roll back to the previous revision
deployment.extensions/myapp rolled back
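kubectl rollout undo goes back one revision by default. The revision history can also be inspected and a specific revision chosen; a sketch (the revision number below is just an example):
kubectl rollout history deployment myapp                  # list recorded revisions
kubectl rollout undo deployment myapp --to-revision=1     # roll back to a specific revision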
[root@master ~]# kubectl describe pods myapp-84cd4b7f95-g6ldp
Name: myapp-84cd4b7f95-g6ldp
Namespace: default
Priority: 0
Node: node01/192.168.183.12
Start Time: Thu, 25 Jul 2019 15:27:48 +0800
Labels: pod-template-hash=84cd4b7f95
run=myapp
Annotations: <none>
Status: Running
IP: 10.244.1.8
Controlled By: ReplicaSet/myapp-84cd4b7f95
Containers:
myapp:
Container ID: docker://7711bfc3da100aa6f25ebbde6b5a2500947501fe2fc1706dec75662f98fe86c0
Image:          ikubernetes/myapp:v1      # image after the rollback
Image ID: docker-pullable://ikubernetes/myapp@sha256:9c3dc30b5219788b2b8a4b065f548b922a34479577befb54b03330999d30d513
Port: <none>
Host Port: <none>
State: Running
Started: Thu, 25 Jul 2019 15:27:49 +0800
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-2m2ts (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-2m2ts:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-2m2ts
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 66s default-scheduler Successfully assigned default/myapp-84cd4b7f95-g6ldp to node01
Normal Pulled 65s kubelet, node01 Container image "ikubernetes/myapp:v1" already present on machine
Normal Created 65s kubelet, node01 Created container myapp
Normal Started 65s kubelet, node01 Started container myapp
Change the Service type so the pod can be reached from outside the cluster
[root@master ~]# kubectl edit svc myapp
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
#
apiVersion: v1
kind: Service
metadata:
creationTimestamp: "2019-07-25T06:45:50Z"
labels:
run: myapp
name: myapp
namespace: default
resourceVersion: "18434"
selfLink: /api/v1/namespaces/default/services/myapp
uid: acaab49a-e372-427f-b6a3-d712eb2b11d1
spec:
clusterIP: 10.103.191.244
externalTrafficPolicy: Cluster
ports:
- nodePort: 31339
port: 80
protocol: TCP
targetPort: 80
selector:
run: myapp
sessionAffinity: None
type: NodePort      # change the type to NodePort
status:
loadBalancer: {}
[root@master ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3h24m
myapp NodePort 10.103.191.244 <none> 80:31339/TCP 51m
nginx ClusterIP 10.108.177.175 <none> 80/TCP 68m
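Instead of editing the Service interactively, the type can also be switched non-interactively with a patch; a sketch that should be equivalent to the edit above:
kubectl patch svc myapp -p '{"spec":{"type":"NodePort"}}'   # switch the Service to NodePort; a node port is assigned automatically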
Test external access: port 31339 on any node of the cluster
[root@master ~]# curl 192.168.183.11:31339
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@master ~]# curl 192.168.183.12:31339
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@master ~]# curl 192.168.183.13:31339
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>