A bit of k8s
Get the dashboard admin token:
1. kubectl get secret -n kube-system | grep admin-token
kubernetes-dashboard-admin-token-9q757
2. kubectl get secret kubernetes-dashboard-admin-token-9q757 -o jsonpath={.data.token} -n kube-system | base64 -d
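The two steps can usually be collapsed into one pipeline; this is a convenience sketch that assumes exactly one secret name matches the grep:
#kubectl -n kube-system get secret | grep admin-token | awk '{print $1}' | xargs -I{} kubectl -n kube-system get secret {} -o jsonpath={.data.token} | base64 -d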
The tail of the kubeadm init output explains how to configure kubectl for the current user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join --token c04f89.b781cdb55d83c1ef 10.10.9.4:6443 --discovery-token-ca-cert-hash sha256:986e83a9cb948368ad0552b95232e31d3b76e2476b595bd1d905d5242ace29af
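A quick sanity check that the kubeconfig copy worked (not part of the kubeadm output, just a suggested verification):
#kubectl cluster-info
#kubectl get nodes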
Common commands:
1. Initialize the cluster
#kubeadm init --kubernetes-version=v1.9.0 --pod-network-cidr=172.10.0.0/16 --ignore-preflight-errors=Swap
The 172.10.0.0/16 CIDR passed above matters: it must match the Network value in flannel's net-conf.json below, otherwise the pod network will not come up and pods will not be able to reach the outside world.
  net-conf.json: |
    {
      "Network": "172.10.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
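If you start from the upstream kube-flannel.yml, whose Network defaults to 10.244.0.0/16 as far as I recall, one way to keep it in sync with --pod-network-cidr is a simple substitution before applying it (a sketch, adjust the path and CIDRs to your environment):
#sed -i 's#10.244.0.0/16#172.10.0.0/16#' kube-flannel.yml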
Create the network add-on (flannel):
# cat kube-flannel.yml
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "172.10.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: amd64
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.10.0-amd64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.10.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        #- --iface=docker0
        - --iface=eth0
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
# kubectl apply -f kube-flannel.yml
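To confirm the network add-on came up (a suggested check, not in the original notes):
#kubectl get pods -n kube-system -l app=flannel -o wide
Once the flannel pods are Running, the nodes should flip to Ready.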
2. Reset the cluster
#kubeadm reset
ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
rm -rf /var/lib/cni/
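After clearing the CNI interfaces and state, restarting the container runtime and the kubelet usually helps release any leftover network state; this is environment-dependent and offered only as a suggestion:
#systemctl restart docker
#systemctl restart kubelet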
3. List the cluster nodes
#kubectl get nodes
If a node shows NotReady, first make sure the flannel add-on above has been applied; to allow pods to be scheduled on the master (for example in a single-node setup), remove the master taint:
#kubectl taint nodes --all node-role.kubernetes.io/master-
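To find out why a node is NotReady in the first place (most often a missing CNI config or a kubelet problem), inspect its conditions, events and kubelet logs; <node-name> below is a placeholder:
#kubectl describe node <node-name>
#journalctl -u kubelet -f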
4. List the cluster pods (add --all-namespaces or -n kube-system to see the system pods)
#kubectl get pods
5. List the cluster join tokens
# kubeadm token list | grep authentication,signing | awk '{print $1}'
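Bootstrap tokens created by kubeadm expire (24 hours by default), so the list may be empty on an older master; a fresh token can be generated with:
#kubeadm token create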
6. Get the discovery-token-ca-cert-hash
#openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
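Newer kubeadm releases (1.9+, if memory serves) can print the complete join command, token and hash included, in one step:
#kubeadm token create --print-join-command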
7. Join a node to the cluster
Kubernetes defaults the cgroup driver to cgroupfs, but the yum-installed kubelet drop-in sets it to systemd, while docker info on these hosts reports cgroupfs; the kubelet must match Docker, so change it back to cgroupfs:
#vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"
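After editing the drop-in, reload systemd and restart the kubelet so the new cgroup-driver flag takes effect:
#systemctl daemon-reload
#systemctl restart kubelet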
#kubeadm join --token c04f89.b781cdb55d83c1ef 10.10.3.4:6443 --discovery-token-ca-cert-hash sha256:986e83a9cb948368ad0552b95232e31d3b76e2476b595bd1d905d5242ace29af
Optionally give the new node a role label from the master (km1 is the node name used in these notes):
#kubectl label node km1 node-role.kubernetes.io/node=
8. Remove a node from the cluster
#kubectl drain master.k8s.samwong.im --delete-local-data --force --ignore-daemonsets
#kubectl delete node master.k8s.samwong.im
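On the removed node itself it is usually worth wiping the local kubeadm state before the machine is ever re-joined (a suggestion, not from the original notes):
#kubeadm reset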
9. Deploy the dashboard
Two files are needed:
kubernetes-dashboard-admin.rbac.yaml and kubernetes-dashboard.yaml
# cat kubernetes-dashboard.yaml
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Configuration to deploy release version of the Dashboard UI compatible with
# Kubernetes 1.8.
#
# Example usage: kubectl create -f <this_file>

# ------------------- Dashboard Secret ------------------- #
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque
---
# ------------------- Dashboard Service Account ------------------- #
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
---
# ------------------- Dashboard Role & Role Binding ------------------- #
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
  # Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
  # Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
  # Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
---
# ------------------- Dashboard Deployment ------------------- #
kind: Deployment
apiVersion: apps/v1beta2
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.8.1
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
          - --auto-generate-certificates
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          # - --apiserver-host=http://my-address:port
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
          # Create on-disk volume to store exec logs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
---
# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30443
  selector:
    k8s-app: kubernetes-dashboard
# cat kubernetes-dashboard-admin.rbac.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard-admin
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard-admin
  namespace: kube-system
# kubectl apply -f kubernetes-dashboard.yaml
# kubectl apply -f kubernetes-dashboard-admin.rbac.yaml
Check the dashboard service:
#kubectl get services kubernetes-dashboard -n kube-system
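Since the Service is exposed as a NodePort on 30443, the UI should be reachable at https://<any-node-ip>:30443; log in with the token of the kubernetes-dashboard-admin service account retrieved at the top of these notes, for example (<secret-name> is the name returned by the first command):
#kubectl get secret -n kube-system | grep kubernetes-dashboard-admin-token
#kubectl get secret <secret-name> -o jsonpath={.data.token} -n kube-system | base64 -d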