1.7: Kubernetes application environment:
1.7.1: Dashboard (1.10.1)
Deploy dashboard, the Kubernetes web management UI.
                   https://www.kubernetes.org.cn/5462.html
1.7.1.1: Detailed steps:
1. Import the dashboard image and push it to the local Harbor server
# tar xvf dashboard-yaml_image-1.10.1.tar.gz
# docker load -i kubernetes-dashboard-amd64-v1.10.1.tar.gz
# docker tag gcr.io/google-containers/kubernetes-dashboard-amd64:v1.10.1 harbor1.dexter.com/baseimages/kubernetes-dashboard-amd64:v1.10.1
# docker push harbor1.dexter.com/baseimages/kubernetes-dashboard-amd64:v1.10.1
 
 
2. Modify the dashboard image address in kubernetes-dashboard.yaml to point to the local Harbor
root@ansible-vm1:~# cd /etc/ansible/manifests/dashboard/
root@ansible-vm1:/etc/ansible/manifests/dashboard# mkdir -pv 1.10.1
root@ansible-vm1:/etc/ansible/manifests/dashboard# cp ./*.yaml 1.10.1/
root@ansible-vm1:/etc/ansible/manifests/dashboard/1.10.1# vim kubernetes-dashboard.yaml
 
image: harbor1.dexter.com/baseimages/kubernetes-dashboard-amd64:v1.10.1
# ------------------- Dashboard Service ------------------- #
 
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - port: 443
      nodePort: 30001
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
 
Modify this yaml file as follows:
1). Comment out the Dashboard Secret section; otherwise the browser will later complain that the page is insecure because of an expired certificate. We generate our own certificate instead (see the sketch below).
2). Because I access the dashboard through a NodePort, set the Service type field to NodePort and set nodePort to 30001, as in the Service definition above.
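For reference, a sketch of the Secret block to comment out — assuming the stock kubernetes-dashboard.yaml that ships with v1.10.1; your copy may differ slightly:

# ------------------- Dashboard Secret ------------------- #
#apiVersion: v1
#kind: Secret
#metadata:
#  labels:
#    k8s-app: kubernetes-dashboard
#  name: kubernetes-dashboard-certs
#  namespace: kube-system
#type: Opaque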
 
 
 
 
3. Create the resources
 
# kubectl create -f .
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
secret/kubernetes-dashboard-certs created
serviceaccount/kubernetes-dashboard created
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
deployment.apps/kubernetes-dashboard created
service/kubernetes-dashboard created
serviceaccount/dashboard-read-user created
clusterrolebinding.rbac.authorization.k8s.io/dashboard-read-binding created
clusterrole.rbac.authorization.k8s.io/dashboard-read-clusterrole created
clusterrole.rbac.authorization.k8s.io/ui-admin created
rolebinding.rbac.authorization.k8s.io/ui-admin-binding created
clusterrole.rbac.authorization.k8s.io/ui-read created
rolebinding.rbac.authorization.k8s.io/ui-read-binding created
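The admin-user ServiceAccount and ClusterRoleBinding created above typically come from a manifest along these lines — a sketch; the actual file in the dashboard directory may differ:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system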
 
4. Verify that the dashboard has started:
# kubectl get pods -n kube-system
NAME                                      READY   STATUS    RESTARTS   AGE
calico-kube-controllers-d59797cd7-dqkcd   1/1     Running   0          170m
calico-node-7nwn8                         2/2     Running   2          2d
calico-node-9sdfq                         2/2     Running   4          2d
calico-node-m9zkv                         2/2     Running   6          2d
calico-node-tdzv6                         2/2     Running   6          2d
kubernetes-dashboard-665997f648-zfqrk     1/1     Running   0          14m
root@k8s-m1:~# kubectl get service -n kube-system
NAME                   TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
kubernetes-dashboard   NodePort   10.20.112.241   <none>        443:30001/TCP   42m
# kubectl cluster-info    # view cluster information
Kubernetes master is running at https://172.16.99.148:6443
kubernetes-dashboard is running at https://172.16.99.148:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy
 
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
 
 
Access the dashboard directly with a node IP plus the NodePort; the address has the form https://<node-ip>:30001/
 
 
 
Related errors:
The browser reports that the certificate is invalid/expired when opening the dashboard (screenshot omitted).
Solution:
Generate a certificate
mkdir key && cd key
# generate the certificate
openssl genrsa -out dashboard.key 2048
# I use my node1 IP here because I access through the NodePort; if you access through the apiserver, use your master node IP instead
openssl req -new -out dashboard.csr -key dashboard.key -subj '/CN=172.16.99.123'
openssl x509 -req -in dashboard.csr -signkey dashboard.key -out dashboard.crt
# delete the existing certificate secret
kubectl delete secret kubernetes-dashboard-certs -n kube-system
# create a new certificate secret
kubectl create secret generic kubernetes-dashboard-certs --from-file=dashboard.key --from-file=dashboard.crt -n kube-system
# check the pods
kubectl get pod -n kube-system
NAME                                      READY   STATUS    RESTARTS   AGE
calico-kube-controllers-d59797cd7-dqkcd   1/1     Running   0          155m
calico-node-7nwn8                         2/2     Running   2          47h
calico-node-9sdfq                         2/2     Running   4          47h
calico-node-m9zkv                         2/2     Running   6          47h
calico-node-tdzv6                         2/2     Running   6          2d
kubernetes-dashboard-665997f648-cb9jm     1/1     Running   0          27m
# restart the dashboard pod by deleting it
kubectl delete pod kubernetes-dashboard-665997f648-cb9jm  -n kube-system
Re-apply the manifest to recreate the dashboard pod:
kubectl apply -f kubernetes-dashboard.yaml
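To confirm the new certificate is being served, a quick check can be made from any host that can reach the NodePort — a sketch, assuming the node IP 172.16.99.123 and nodePort 30001 used above:

openssl s_client -connect 172.16.99.123:30001 </dev/null 2>/dev/null | openssl x509 -noout -subject -dates
# the subject CN and validity dates should match the certificate generated above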
 
 
 
 
1.7.1.2: Token login to the dashboard:
 
# kubectl -n kube-system get secret | grep admin-user
admin-user-token-gpqv8                kubernetes.io/service-account-token   3      61m
# kubectl -n kube-system describe secret admin-user-token-gpqv8
Name:         admin-user-token-gpqv8
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: 06b222ae-59ee-11ea-bc2b-fa163e62a670
 
 
Type:  kubernetes.io/service-account-token
 
 
Data
====
ca.crt:     1346 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLWdwcXY4Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIwNmIyMjJhZS01OWVlLTExZWEtYmMyYi1mYTE2M2U2MmE2NzAiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.PC9rHsfA9KiCSGAUnw0HkhIEQYw9RCZltC09uxTzaEGpzG3zzLj82dqhyIrKbXRrOeQimzmgHSDPlXbNvZDO3KudFPXAVx7ZYpTAnGb76HSMIB9QWQFycog5Zne4dzNByt5PqwzlyAKlul-_yljP3ZX6zZyQW7ZDeB99OHx_8b_yCRkBfqAzJrm9ssCcYaUYIK870oI8a-6ozySUIn7jsFgFU7iAVM4B9-btQ0O37YlscJa6vPE7slB7AN3UfCaqnKUGdlnrQisJynIFhNawDEYe-LgCc1CQZICABYzMsuEB9X0IClSHjivg5tFPw6nDmIjT531WkUre_LP1lyDiVw
 
Log in with the token
 
Note: Using the NodePort is not mandatory; you can also open the dashboard address printed by kubectl cluster-info directly: https://172.16.99.148:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy. Keep in mind, though, that if your k8s cluster runs on OpenStack, the firewall tends to get in the way; we could only reach the login page after crudely running iptables -F on the relevant OpenStack compute nodes.
 
1.7.1.3: Kubeconfig login
Create the kubeconfig file
# kubectl -n kube-system get secret | grep admin
admin-user-token-zncmg                kubernetes.io/service-account-token   3      19h
# kubectl -n kube-system describe secret admin-user-token-zncmg
Name:         admin-user-token-zncmg
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: 8bd97c05-5a00-11ea-bc2b-fa163e62a670
 
 
Type:  kubernetes.io/service-account-token
 
 
Data
====
ca.crt:     1346 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXpuY21nIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI4YmQ5N2MwNS01YTAwLTExZWEtYmMyYi1mYTE2M2U2MmE2NzAiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.oVxI3GlCzRcMQoqnGqfyMBXWPO7oD0u9gOqsz8qiue2qIp-SISvyF9LyIznB_g0i3aXQHv_b-1Jr07NvH042aG02T_zJVC-_xC_WzvJ9xf_jyJkimFyjF6ZRwMsT6QJ0KaIcAxbhDCUD5MmcihQYg6EMtnYxkOFUn77eFJiaogslB-gVmeEz4EVsWPHX8NggXp8DA0gnLnQ2L6jq_zSoKNXe9synvj9LITo-6Zf2YrnmKhERVU2wqJxloI_VIzpQDQtYq9tdBUEiZ1ELdUCXw_2pYQ3qkphZiTXz8XoqUorwiB8xdjPHgI97e6tPLupyRljRkgbHwbKiHOWBiZD-4A
 
Copy the admin kubeconfig and append the admin-user token to its user section:
# cp /root/.kube/config  /opt/kubeconfig
root@ansible-vm1:~/.kube# cat /opt/kubeconfig
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUR0akNDQXA2Z0F3SUJBZ0lVVDMwT2p1MVhJaE9HSVZrMVNIZklQdktWVHprd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1lURUxNQWtHQTFVRUJoTUNRMDR4RVRBUEJnTlZCQWdUQ0VoaGJtZGFhRzkxTVFzd0NRWURWUVFIRXdKWQpVekVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WlRlWE4wWlcweEV6QVJCZ05WQkFNVENtdDFZbVZ5CmJtVjBaWE13SGhjTk1qQXdNakkyTURNME5EQXdXaGNOTXpVd01qSXlNRE0wTkRBd1dqQmhNUXN3Q1FZRFZRUUcKRXdKRFRqRVJNQThHQTFVRUNCTUlTR0Z1WjFwb2IzVXhDekFKQmdOVkJBY1RBbGhUTVF3d0NnWURWUVFLRXdOcgpPSE14RHpBTkJnTlZCQXNUQmxONWMzUmxiVEVUTUJFR0ExVUVBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKCktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQUxwbGwyZkRzdklZcmZ5bC95RENkZCszdlRURytob00KejJ5S1ZHUnhUaXhuengrd1ZwRXhEWVg1dTcwVmFjNmVkVFZEaTQ4RjNRaDNnUUhtcFRQaGczT3poOEROdnJqVgpVUm91UTdBMU9MMG9KY2hDZVh1TGNrY3pkckpYVjhTSHA5TmlCZURPUGxIbnpxMFU3T0pHUThRY1hkcHNaUW12CmluN0M0bTMybVpqUVdlYTYydlJGWHowN1UwRzg1QmVSUXdBTS92eWdYMWdRWmNuRE5VbldRZjJkOTFhMkRCT2IKT2RLdTBFV0tRSlFVcEJSUWl2S2gxTkRwQ2xTamFvYjhENjROcG1LNmhzcDR5WjRrNEFpb0NZVnF2alVWWWVIbgpyOStPc1JmZmRNZ00xc3NSTXoyM1RwQytCVUlKQnFGa0J6aGkzc3h0Q1JjZUJQeVdEV3RXaTlrQ0F3RUFBYU5tCk1HUXdEZ1lEVlIwUEFRSC9CQVFEQWdFR01CSUdBMVVkRXdFQi93UUlNQVlCQWY4Q0FRSXdIUVlEVlIwT0JCWUUKRkpQZi9MTnZ0YzRTRFNQRDQ0Sm5UYXlSSXorY01COEdBMVVkSXdRWU1CYUFGSlBmL0xOdnRjNFNEU1BENDRKbgpUYXlSSXorY01BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQlVQVjdzcGszR24yNEZwYlhlZGJnQXRBeU9nUFdHClBONGtYbFdLZjRmQ2I0WVVOei9YVjlvLzdsL3NwR0NjcjR2U1ZsbVRQMWdVcGZPcVgyajJFSXBXSlAzMUttV1IKek1HL0hXQ0RNNlFLaUFkUDYxTWNtNThzTGtuelFaY25jQWNaNjRMdEREVU5DeWZlS21wMUI1U2pKaEovWXk1QwpEcktGbjhHWUJRR2NNRklFZXY1UExoYUIyR1k3cVBnb0pjVFo0Y0g1WmRIOGIyckR2WmlqMTF0RFZqNVErR1NHClM3NnU1UVJYYVc0WnAyd2J3WWVFTFg3QkpDVkNGL1ZOVENNWTRkbnk5eGhSYnRtQjFhSDlCajBCcUF0V2dsT08KeXA4b1V0MngveVZ3WDJOVUhZYmk5SE4zMnBMMzdWS2VEZlVwQkZnZmlKZ3phRFRTU0docy93engKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://172.16.99.148:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: admin
  name: kubernetes
current-context: kubernetes
kind: Config
preferences: {}
users:
- name: admin
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUQxVENDQXIyZ0F3SUJBZ0lVQnlkTVpQcDhKOVkwZDlOUE11M3lnUFIyb2JBd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1lURUxNQWtHQTFVRUJoTUNRMDR4RVRBUEJnTlZCQWdUQ0VoaGJtZGFhRzkxTVFzd0NRWURWUVFIRXdKWQpVekVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WlRlWE4wWlcweEV6QVJCZ05WQkFNVENtdDFZbVZ5CmJtVjBaWE13SGhjTk1qQXdNakkyTURVd01EQXdXaGNOTXpBd01qSXpNRFV3TURBd1dqQm5NUXN3Q1FZRFZRUUcKRXdKRFRqRVJNQThHQTFVRUNCTUlTR0Z1WjFwb2IzVXhDekFKQmdOVkJBY1RBbGhUTVJjd0ZRWURWUVFLRXc1egplWE4wWlcwNmJXRnpkR1Z5Y3pFUE1BMEdBMVVFQ3hNR1UzbHpkR1Z0TVE0d0RBWURWUVFERXdWaFpHMXBiakNDCkFTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTGhlOXQvNG91WTRBNys3VHNBVzYza3gKQ3JINzRyZnppWStnbEt3YWVCOXZKU3RBOWhVR2lOTGQzY040VS82VUF3eXVRSFhucWVOYVVKb0Ntb0dTR1MxbwphN1VOeHVWVEo2NXMrQnArSVNOQUFHMW1kRnhzbkk5MG9iWk5mem1XNWxnRDA5N3VvRnBXVFovZXVucG00NFQyCm50S053Wm8zVjJCZXVHYU9TRnV2WkdzWUpjbDNSYW5XK1QzSWJsSm9RdG1JNWZJZm1aZG93emVLM2l0YzVJbXEKYjJRRk5NQjJveHdWalgySkJXZk1WNmpYemk3SFIwT0UxWkJQWGo3Vm5oeGl3V1RYbEhOMW5rVFZZNVpBZUwwUQprTWFqN1pGcDRnUHVESkJxVkhBZ3hRWWJtVzhDYzZmYXdyOThzK2dONzNBckxnMXF5blAxUnBnMzU4Q2dNYVVDCkF3RUFBYU4vTUgwd0RnWURWUjBQQVFIL0JBUURBZ1dnTUIwR0ExVWRKUVFXTUJRR0NDc0dBUVVGQndNQkJnZ3IKQmdFRkJRY0RBakFNQmdOVkhSTUJBZjhFQWpBQU1CMEdBMVVkRGdRV0JCUmxVYmp1OGhxQmJVeFVjNkFJZThWbwptOHZSdmpBZkJnTlZIU01FR0RBV2dCU1QzL3l6YjdYT0VnMGp3K09DWjAyc2tTTS9uREFOQmdrcWhraUc5dzBCCkFRc0ZBQU9DQVFFQUFiRHViNHZYR2s5UzFYS1UyZVhDd2FsQmRrbDBkQTd6eDBkTzBBc1hIL1NTMUQ5OTZKcFgKSy9vWU1pejRLWVRNMC9wRUFBQTFHWWV1RkRFdjJvTEhNZ3MvMHBMSXZqdm1uTUcxMG1mSHkrVWtoNHlrdDNvVgpEQlNuMXR1ZGhzeU1LS3JiRktHblJSNHlSSDVpcUJaQ3JmY0RmNUl3VUp5cnRhamN2TGJqVlJRSFh4N0JuMTI2ClkwRE1LOXJyV1JuT2J1ZTRnYy9PWVVPNkJqdERnWjkrQXVCN2NKWVhtb3liTUwwZFRWRUpVYk5uc1Q2YWFLNTEKMnRHd0N4M1pzMDlSSzY5K01VTnZSTEZDdytGMTNSTVI3TmFFWXlKMkpqelVsemQzZFFaQ3lpVng0cEdFalY1TgpheHRGL1I1UFBnc0VacGM3ZWpPS3ZvekNHVXlpSVVXdGF3PT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcFFJQkFBS0NBUUVBdUY3MjMvaWk1amdEdjd0T3dCYnJlVEVLc2Z2aXQvT0pqNkNVckJwNEgyOGxLMEQyCkZRYUkwdDNkdzNoVC9wUURESzVBZGVlcDQxcFFtZ0thZ1pJWkxXaHJ0UTNHNVZNbnJtejRHbjRoSTBBQWJXWjAKWEd5Y2ozU2h0azEvT1pibVdBUFQzdTZnV2xaTm45NjZlbWJqaFBhZTBvM0JtamRYWUY2NFpvNUlXNjlrYXhnbAp5WGRGcWRiNVBjaHVVbWhDMllqbDhoK1psMmpETjRyZUsxemtpYXB2WkFVMHdIYWpIQldOZllrRlo4eFhxTmZPCkxzZEhRNFRWa0U5ZVB0V2VIR0xCWk5lVWMzV2VSTlZqbGtCNHZSQ1F4cVB0a1duaUErNE1rR3BVY0NERkJodVoKYndKenA5ckN2M3l6NkEzdmNDc3VEV3JLYy9WR21EZm53S0F4cFFJREFRQUJBb0lCQVFDeGZXSm12U0o5Uk1GLwpLNStsVnFzN2tWVzlnRUtEV2lVOHFwZFZrMm0rd1Mza0ZQYVJ5U2VnOEV2VUtKdWJ3ZnVwa25xbHh1NksyMkFxCjA0VFFaY2h0S1ZBL0RWTkRZNmtZeHZpVjhJU1FQY1hyaTYxTGFKZlRsckV6SWludlUvRE9IR2t6L1Q5TG1EZkUKUnhQNFQrS0tGeTFRZjMwNHJEd21ueWtnT2FzNDd0MFpUWHdGQlFWemxLTU9SU25GdWpDTmxvN0YvNUtsRjcwbAp0OStlQjNpQlJMVzRDeHc1WW9VTi9LcFYyY2ZUVnZGTmZOdis5NnI0WGw1UzQ0cGZaWmlwdzlweXBESXgvSWt1Ck5qRzBGeEZ1OGJmckpLVjVzVWpicG9ZKzFyVTFvV1M3eXFGemQ3UlNtV0hiTllrWHc0RVQ5TEpSRDlXWktpOEUKV3FsdlFuNkJBb0dCQU85UHZKK3hsS2luY1A0MEJpODI3ZnJIYUw1OUx2VXo2bUJiK2dHbW9kUVUxWHd2U0dodApUYU5NVmhvVFlzbE8rRUZmM0ZTRE56U0FLQStaUGZybDcxOFE4MFdOMW9lbmpqcXQvVUlTekVLREczd3U3bnd4CkpXVTBKWlJCOW90c0VHN003VktsV2tnQURqNnlZY2lJVDNxaTRBOE40aGl0TUhsdFFKekF2VXZoQW9HQkFNVTYKYVltYy8rdHYxeVpBbEo1SGprejlVMk5zR2VDU29qbnJHMFFOcXRQWlBiamYzM0gycEtWS3RSY2tXNDJucG1BUApKdjNNcktiRTVodnY2U3J6VkRNZ1BhenJRTXpTdWRBaXpYZkIzWVIwRXYxak9KUTFuVndQQ0NtNm5Oa20xZFFPCjBFVzdlcHFyeDlidkhBdlZkaWRxdnlYZmJ2VlEwb295MFoxYWduNUZBb0dCQUl0M211UXVxQWFLWHUybkFCdXcKRlkxYmZZM1dndkZnS2kyeUxNZWRoeDZFYmM2TDk5VDBMcFVHdmY5QVlRZ1ZQOVZKdXF4K05FUWlsRFpUQnE0YwpKeDd1VC9pdkt1R3dJdEhMNkpjRFFZdFp3VURrVVJTTHg5RnRUS0ZVdUF5VkZCYWUwNGlnMlRhdzRaeGtkVnhiCkpJYkNPWFpNandIMm5ST0hPbXFnWVRIQkFvR0FDSDFVTDZVL2F0MzhqOXYxeWI1Z3hMV2Uwa2ZEOFdPK2NlbkoKMmFzUThGK0loWjIxVzQxM1Z0b1pZMjZnTmovQ0xKNWFXbEJtR2lPZG1CUkNvQ09yT3l3bkczdGc1YkFvYVdvbQpHQUtUUzNGSG8vcVNZK2JPNkRpSmJHcG85L3Z3OWxqUTVEK0dyb080YldzTGRRTHlQQTRmUGowWTVKeGZBNjNlClVmeWtZMVVDZ1lFQXR4MUpYOEhZOUtvMFB0S2N5TktTZHU1dkdGMW4wdFNvTU8vVjNTdVBQcWRQYU8vSHhxMFQKMUlnUndWWDF0RGhwZFBPaWlRUzhrRisyWTYvRDhCU0hCMWs5dTlsbzhwTzE2L2R5RnE5Yk1yK1MzMzJ3bi90MApmM2RiQ1hONm1nbFRTU1p3eHlFUERqMFlpdWZmWk1BSEZKQlk3cTNDSGIrVFJHemVzR1poZzBVPQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
    token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXpuY21nIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI4YmQ5N2MwNS01YTAwLTExZWEtYmMyYi1mYTE2M2U2MmE2NzAiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.oVxI3GlCzRcMQoqnGqfyMBXWPO7oD0u9gOqsz8qiue2qIp-SISvyF9LyIznB_g0i3aXQHv_b-1Jr07NvH042aG02T_zJVC-_xC_WzvJ9xf_jyJkimFyjF6ZRwMsT6QJ0KaIcAxbhDCUD5MmcihQYg6EMtnYxkOFUn77eFJiaogslB-gVmeEz4EVsWPHX8NggXp8DA0gnLnQ2L6jq_zSoKNXe9synvj9LITo-6Zf2YrnmKhERVU2wqJxloI_VIzpQDQtYq9tdBUEiZ1ELdUCXw_2pYQ3qkphZiTXz8XoqUorwiB8xdjPHgI97e6tPLupyRljRkgbHwbKiHOWBiZD-4A
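Instead of pasting the token by hand, the same result can be scripted — a sketch, assuming the secret name admin-user-token-zncmg shown above:

TOKEN=$(kubectl -n kube-system get secret admin-user-token-zncmg -o jsonpath='{.data.token}' | base64 -d)
# write the token into the admin user entry of the copied kubeconfig
kubectl --kubeconfig=/opt/kubeconfig config set-credentials admin --token="$TOKEN"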
 
# sz /opt/kubeconfig    # download the kubeconfig to the local workstation (lrzsz/ZMODEM)
 
1.7.1.4: Switch kube-proxy from iptables mode to ipvs and set the scheduling algorithm:
root@s6:~# vim /etc/systemd/system/kube-proxy.service
# add the following options to the kube-proxy ExecStart:
  --proxy-mode=ipvs \
  --ipvs-scheduler=sh
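After editing the unit file, the change has to be activated on every node — a sketch; ipvsadm is assumed to be installed for the verification step:

systemctl daemon-reload && systemctl restart kube-proxy
ipvsadm -Ln    # list the ipvs virtual servers; the scheduler column should show sh (source hashing)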
 
1.7.1.5: Set the token login session timeout
# vim dashboard/kubernetes-dashboard.yaml
        image: harbor1.dexter.com/baseimages/kubernetes-dashboard-amd64:v1.10.1
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
          - --auto-generate-certificates
          - --token-ttl=43200    # 43200 seconds = 12 hours
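Changing the container args only takes effect after the Deployment is re-applied — a sketch, run from the same directory as the vim command above:

kubectl apply -f dashboard/kubernetes-dashboard.yaml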
 
1.7.1.6: Session affinity:
Add the following under the kubernetes-dashboard Service spec:
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800
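In context, the Service section of kubernetes-dashboard.yaml then looks roughly like this — a sketch based on the Service definition shown in 1.7.1.1:

spec:
  type: NodePort
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800
  ports:
    - port: 443
      nodePort: 30001
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard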
 
 
1.8: DNS service:
The two DNS components in common use are kube-dns and CoreDNS.
1.8.1: Deploy CoreDNS:
root@ansible-vm1:/etc/ansible/manifests#mkdir -pv dns/{kube-dns,coredns}
root@ansible-vm1:/etc/ansible/manifests# cd dns/kube-dns/
Upload the required files to the kube-dns directory; the specific files are listed in the figure below (omitted).
 
Note: these files can be found on the Kubernetes GitHub; they are usually included in the binary release packages. Kubernetes binary release download: https://github.com/kubernetes/kubernetes/releases
# docker load -i busybox-online.tar.gz
# docker tag quay.io/prometheus/busybox:latest harbor1.dexter.com/baseimages/busybox:latest
# docker push harbor1.dexter.com/baseimages/busybox:latest
Modify busybox.yaml
root@ansible-vm1:/etc/ansible/manifests/dns/kube-dns# cat busybox.yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default  # DNS in the default namespace
spec:
  containers:
  - image: harbor1.dexter.com/baseimages/busybox:latest
    command:
      - sleep
      - "3600"
    imagePullPolicy: Always
    name: busybox
  restartPolicy: Always
Start busybox
 
root@ansible-vm1:/etc/ansible/manifests/dns/kube-dns# kubectl create -f busybox.yaml
root@ansible-vm1:/etc/ansible/manifests/dns/kube-dns# kubectl get pods
NAME      READY   STATUS    RESTARTS   AGE
busybox   1/1     Running   0          14m
 
 
 
# docker tag gcr.io/google-containers/coredns:1.2.6  harbor1.dexter.com/baseimages/coredns:1.2.6
# docker push harbor1.dexter.com/baseimages/coredns:1.2.6
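The coredns.yaml from the Kubernetes release still points at the upstream registry. A sketch of rewriting it to the local Harbor image and applying it — assuming the template references the image as k8s.gcr.io/coredns:1.2.6 and that its Service clusterIP is set to the kubelet --cluster-dns address (10.20.254.254, see 1.8.2):

cd /etc/ansible/manifests/dns/coredns
sed -i 's#k8s.gcr.io/coredns:1.2.6#harbor1.dexter.com/baseimages/coredns:1.2.6#g' coredns.yaml
grep -E 'image:|clusterIP:' coredns.yaml    # sanity check before applying
kubectl apply -f coredns.yaml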
 
 
 
1.8.2: Deploy kube-dns:
1. skyDNS / kube-dns / CoreDNS
kube-dns: resolves service-name domains
dns-dnsmasq: provides DNS caching, lowering the load on kube-dns and improving performance
dns-sidecar: periodically checks the health of kube-dns and dnsmasq
2. Import the images and push them to the local Harbor
# docker load -i k8s-dns-kube-dns-amd64_1.14.13.tar.gz
# docker tag gcr.io/google-containers/k8s-dns-kube-dns-amd64:1.14.13 harbor1.dexter.com/baseimages/k8s-dns-kube-dns-amd64:1.14.13
# docker push harbor1.dexter.com/baseimages/k8s-dns-kube-dns-amd64:1.14.13
# docker load -i k8s-dns-sidecar-amd64_1.14.13.tar.gz
# docker tag gcr.io/google-containers/k8s-dns-sidecar-amd64:1.14.13 harbor1.dexter.com/baseimages/k8s-dns-sidecar-amd64:1.14.13
# docker push harbor1.dexter.com/baseimages/k8s-dns-sidecar-amd64:1.14.13
# docker load -i k8s-dns-dnsmasq-nanny-amd64_1.14.13.tar.gz
# docker tag gcr.io/google-containers/k8s-dns-dnsmasq-nanny-amd64:1.14.13 harbor1.dexter.com/baseimages/k8s-dns-dnsmasq-nanny-amd64:1.14.13
# docker push harbor1.dexter.com/baseimages/k8s-dns-dnsmasq-nanny-amd64:1.14.13
 
3. Modify the image addresses in the yaml file to the local Harbor addresses
The kube-dns Service clusterIP must match the --cluster-dns address configured on the kubelet; check it on a node:
root@k8s-n1:~# ps -ef | grep dns | grep -v grep
root      1007     1  5 11:42 ?        00:16:08 /usr/bin/kubelet --address=172.16.99.123 --allow-privileged=true --anonymous-auth=false --authentication-token-webhook --authorization-mode=Webhook --client-ca-file=/etc/kubernetes/ssl/ca.pem --cluster-dns=10.20.254.254 --cluster-domain=cluster.local. --cni-bin-dir=/usr/bin --cni-conf-dir=/etc/cni/net.d --fail-swap-on=false --hairpin-mode hairpin-veth --hostname-override=172.16.99.123 --kubeconfig=/etc/kubernetes/kubelet.kubeconfig --max-pods=110 --network-plugin=cni --pod-infra-container-image=harbor1.dexter.com/baseimages/pause-amd64:3.1 --register-node=true --root-dir=/var/lib/kubelet --tls-cert-file=/etc/kubernetes/ssl/kubelet.pem --tls-private-key-file=/etc/kubernetes/ssl/kubelet-key.pem --v=2
 
root@ansible-vm1:/etc/ansible/manifests/dns/kube-dns# vim kube-dns.yaml
.
.
  clusterIP: 10.20.254.254
.
.
      - name: kubedns
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        livenessProbe:
          httpGet:
            path: /healthcheck/kubedns
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /readiness
            port: 8081
            scheme: HTTP
          initialDelaySeconds: 3
          timeoutSeconds: 5
        args:
        - --domain=cluster.local.
        - --dns-port=10053
        - --config-dir=/kube-dns-config
        - --v=2
.
      - name: dnsmasq
        livenessProbe:
          httpGet:
            path: /healthcheck/dnsmasq
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - -v=2
        - -logtostderr
        - -configDir=/etc/k8s/dns/dnsmasq-nanny
        - -restartDnsmasq=true
        - --
        - -k
        - --cache-size=1000
        - --no-negcache
        - --dns-loop-detect
        - --log-facility=-
        - --server=/dexter.com/172.20.100.23#53
        - --server=/cluster.local/127.0.0.1#10053
        - --server=/in-addr.arpa/127.0.0.1#10053
        - --server=/ip6.arpa/127.0.0.1#10053
        ports:
        - containerPort: 53
.
      - name: sidecar
        livenessProbe:
          httpGet:
            path: /metrics
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - --v=2
        - --logtostderr
        - --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,SRV
        - --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,SRV
        ports:
        - containerPort: 10054
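A sketch of rewriting the three image references to the local Harbor in one pass — assuming the template references the gcr.io/google-containers registry (adjust the pattern if your copy uses k8s.gcr.io):

sed -i 's#gcr.io/google-containers#harbor1.dexter.com/baseimages#g' kube-dns.yaml
grep image: kube-dns.yaml    # verify all three images now point at harbor1.dexter.com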
 
4. Create the service
# kubectl apply -f kube-dns.yaml
 
5. Check that the pods are running normally
root@ansible-vm1:/etc/ansible/manifests/dns/kube-dns# kubectl get pods -n kube-system
NAME                                      READY   STATUS    RESTARTS   AGE
calico-kube-controllers-d59797cd7-dqkcd   1/1     Running   0          6h5m
calico-node-7nwn8                         2/2     Running   2          2d3h
calico-node-9sdfq                         2/2     Running   4          2d3h
calico-node-m9zkv                         2/2     Running   6          2d3h
calico-node-tdzv6                         2/2     Running   6          2d3h
heapster-7f4864f77-jzk78                  1/1     Running   0          115m
kube-dns-569c979454-j579m                 3/3     Running   0          67s
kubernetes-dashboard-5d6c5449c8-lr7p7     1/1     Running   0          97m
monitoring-grafana-685557648b-9qr74       1/1     Running   0          115m
monitoring-influxdb-5cc945bc5c-kt8qn      1/1     Running   0          115m
 
1.8.3: DNS test:
# vim coredns.yaml
# kubectl apply -f coredns.yaml
# kubectl exec busybox nslookup kubernetes
Server:    10.20.254.254
Address 1: 10.20.254.254 kube-dns.kube-system.svc.cluster.local
 
Name:      kubernetes
Address 1: 10.20.0.1 kubernetes.default.svc.cluster.local
# kubectl exec busybox nslookup kubernetes.default.svc.cluster.local
Server:    10.20.254.254
Address 1: 10.20.254.254 kube-dns.kube-system.svc.cluster.local
Name:      kubernetes.default.svc.cluster.local
Address 1: 10.20.0.1 kubernetes.default.svc.cluster.local
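As an additional check, names under the dexter.com zone should resolve through the upstream server configured for dnsmasq above (--server=/dexter.com/172.20.100.23#53) — a sketch, assuming harbor1.dexter.com is registered on that upstream DNS:

# kubectl exec busybox nslookup harbor1.dexter.com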
 
1.8.4: Monitoring component Heapster:
heapster: data collection;  influxdb: data storage;  grafana: web presentation
1. Import the corresponding images
docker pull mirrorgooglecontainers/heapster-grafana-amd64:v5.0.4
docker pull mirrorgooglecontainers/heapster-amd64:v1.5.4
docker pull mirrorgooglecontainers/heapster-influxdb-amd64:v1.5.2

docker tag mirrorgooglecontainers/heapster-grafana-amd64:v5.0.4 harbor1.dexter.com/baseimages/heapster-grafana-amd64:v5.0.4
docker tag mirrorgooglecontainers/heapster-amd64:v1.5.4 harbor1.dexter.com/baseimages/heapster-amd64:v1.5.4
docker tag mirrorgooglecontainers/heapster-influxdb-amd64:v1.5.2 harbor1.dexter.com/baseimages/heapster-influxdb-amd64:v1.5.2

docker push harbor1.dexter.com/baseimages/heapster-grafana-amd64:v5.0.4
docker push harbor1.dexter.com/baseimages/heapster-amd64:v1.5.4
docker push harbor1.dexter.com/baseimages/heapster-influxdb-amd64:v1.5.2
 
 
2. Change the image addresses in the yaml files
mkdir -pv heapster
cd heapster
 
Modify:
# cat *.yaml | grep image
        image: k8s.gcr.io/heapster-grafana-amd64:v5.0.4
        image: k8s.gcr.io/heapster-amd64:v1.5.4
        imagePullPolicy: IfNotPresent
        image: k8s.gcr.io/heapster-influxdb-amd64:v1.5.2
 
# sed -i 's#k8s.gcr.io#harbor1.dexter.com/baseimages#g' *.yaml
 
# cat *.yaml | grep image
        image: harbor1.dexter.com/baseimages/heapster-grafana-amd64:v5.0.4
        image: harbor1.dexter.com/baseimages/heapster-amd64:v1.5.4
        imagePullPolicy: IfNotPresent
        image: harbor1.dexter.com/baseimages/heapster-influxdb-amd64:v1.5.2
 
 
 
3. Create the services
kubectl create -f .
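A couple of hedged checks to confirm the Heapster stack came up; the pod names below follow the output shown in 1.8.2, and the service URLs only appear in cluster-info if the manifests carry the kubernetes.io/cluster-service label:

kubectl get pods -n kube-system | grep -E 'heapster|influxdb|grafana'
kubectl cluster-info    # monitoring-grafana / monitoring-influxdb service proxy URLs typically show up here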
 
 
Note 1: heapster-grafana-amd64:v4.4.3 is easier to work with and recommended — you can view the data in Grafana directly; heapster-grafana-amd64:v5.0.4 does not create the dashboards for you.
 
Note 2: The images in the Harbor repository are shown in the figure below (omitted).
 
