Installing Kubernetes with kubeadm on the master

Add the domestic Aliyun mirror to apt, then install kubeadm:

```
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
```

```shell
apt-get update && apt-get install kubeadm
```
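The `deb` line above has to live in an apt sources file before `apt-get update` will see it. A minimal sketch, with the target path assumed to be the conventional one (shown against `/tmp` here so it can run anywhere):

```shell
# Assumed real path: /etc/apt/sources.list.d/kubernetes.list
LIST=/tmp/kubernetes.list
echo 'deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main' > "$LIST"
cat "$LIST"
```

On a real host you would also import the mirror's signing key first (Aliyun's instructions use `curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -`), otherwise `apt-get update` will complain about unverified packages.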

Create a kubeadm.yaml file, then run the install:

```yaml
apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
controllerManagerExtraArgs:
  horizontal-pod-autoscaler-use-rest-clients: "true"
  horizontal-pod-autoscaler-sync-period: "10s"
  node-monitor-grace-period: "10s"
apiServerExtraArgs:
  runtime-config: "api/all=true"
kubernetesVersion: "stable-1.12.2"
```

```shell
kubeadm init --config kubeadm.yaml
```

Problems encountered during the install:

```
[ERROR Swap]: running with swap on is not supported. Please disable swap
[ERROR SystemVerification]: missing cgroups: memory
[ERROR ImagePull]: failed to pull image [k8s.gcr.io/kube-apiserver-amd64:v1.12.2]
```

Solutions:

1. The first error is self-explanatory: disable the swap partition. Just running `swapoff -a` is not the recommended fix, though. From the session record, after `swapoff -a` the `kubeadm init` command would start but kept failing with:

   ```
   [kubelet-check] It seems like the kubelet isn't running or healthy.
   [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
   ```

   Checking the kubelet logs showed it was in fact still the swap problem:

   ```shell
   journalctl -xefu kubelet
   ```

   ```
   debian kubelet[]: F1105 ::28.609272 server.go:] failed to run Kubelet: Running with swap on is not supported, please disable swap! or set --fail-swap-on flag to false. /proc/swaps contained: [Filename Type Size Used Priority /dev/sda9 partition -]
   ```

   ```shell
   cat /proc/swaps
   Filename   Type        Size  Used  Priority
   /dev/sda9  partition   -
   ```

   After commenting out the swap mount in /etc/fstab, the install succeeded.
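Commenting out the swap entry can also be scripted. A minimal sketch, demonstrated against a sample file so it is safe to run anywhere (on a real node the target would be /etc/fstab, and this uses GNU sed's `-i`):

```shell
# Build a sample fstab so the edit can be demonstrated safely.
cat > /tmp/fstab.sample <<'EOF'
/dev/sda1 /    ext4 errors=remount-ro 0 1
/dev/sda9 none swap sw                0 0
EOF

# Prefix any line containing a swap mount with '#'.
sed -i '/\bswap\b/ s/^/#/' /tmp/fstab.sample
grep swap /tmp/fstab.sample
```

Combined with a one-time `swapoff -a` for the running system, this keeps swap disabled across reboots.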

2. For the missing memory cgroup, enable it via a kernel boot parameter and reboot:

   ```shell
   echo GRUB_CMDLINE_LINUX=\"cgroup_enable=memory\" >> /etc/default/grub && update-grub && reboot
   ```
  16.  
3. On a typical network in China, the images cannot be pulled from k8s.gcr.io, so pull them from docker.io instead and re-tag them with the names k8s expects:

   ```shell
   docker pull mirrorgooglecontainers/kube-apiserver:v1.12.2
   docker pull mirrorgooglecontainers/kube-controller-manager:v1.12.2
   docker pull mirrorgooglecontainers/kube-scheduler:v1.12.2
   docker pull mirrorgooglecontainers/kube-proxy:v1.12.2
   docker pull mirrorgooglecontainers/pause:3.1
   docker pull mirrorgooglecontainers/etcd:3.2.
   docker pull coredns/coredns:1.2.

   docker tag docker.io/mirrorgooglecontainers/kube-apiserver:v1.12.2 k8s.gcr.io/kube-apiserver:v1.12.2
   docker tag docker.io/mirrorgooglecontainers/kube-controller-manager:v1.12.2 k8s.gcr.io/kube-controller-manager:v1.12.2
   docker tag docker.io/mirrorgooglecontainers/kube-scheduler:v1.12.2 k8s.gcr.io/kube-scheduler:v1.12.2
   docker tag docker.io/mirrorgooglecontainers/kube-proxy:v1.12.2 k8s.gcr.io/kube-proxy:v1.12.2
   docker tag docker.io/mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
   docker tag docker.io/mirrorgooglecontainers/etcd:3.2. k8s.gcr.io/etcd:3.2.
   docker tag docker.io/coredns/coredns:1.2. k8s.gcr.io/coredns:1.2.

   docker rmi mirrorgooglecontainers/kube-apiserver:v1.12.2
   docker rmi mirrorgooglecontainers/kube-controller-manager:v1.12.2
   docker rmi mirrorgooglecontainers/kube-scheduler:v1.12.2
   docker rmi mirrorgooglecontainers/kube-proxy:v1.12.2
   docker rmi mirrorgooglecontainers/pause:3.1
   docker rmi mirrorgooglecontainers/etcd:3.2.
   docker rmi coredns/coredns:1.2.
   ```
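The pull/tag/rmi sequence above can be generated with a loop instead of typed out. This sketch only prints the commands (a dry run) so they can be reviewed before being piped to `sh`; the etcd and coredns tags are omitted because their exact versions should be confirmed first (for instance with `kubeadm config images list`, which exists in this kubeadm release):

```shell
# Dry-run sketch: emit the pull/tag/rmi commands for review.
# Versions are taken from the list above; adjust as needed.
images="kube-apiserver:v1.12.2 kube-controller-manager:v1.12.2 kube-scheduler:v1.12.2 kube-proxy:v1.12.2 pause:3.1"
for img in $images; do
  echo "docker pull mirrorgooglecontainers/$img"
  echo "docker tag mirrorgooglecontainers/$img k8s.gcr.io/$img"
  echo "docker rmi mirrorgooglecontainers/$img"
done > /tmp/k8s-image-cmds.sh
cat /tmp/k8s-image-cmds.sh
```

After reviewing the output, run it on the node with `sh /tmp/k8s-image-cmds.sh`.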
   Alternatively, a registry mirror can be configured (in my test, the 163 mirror was actually slower than direct access). Add it as follows, then restart the docker service:

   ```shell
   cat /etc/docker/daemon.json
   {
     "registry-mirrors": ["http://hub-mirror.c.163.com"]
   }
   ```
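The mirror config can be staged and checked before touching the live file. A sketch using a temp path (on a real host the file is /etc/docker/daemon.json, and docker must be restarted afterwards):

```shell
# Stage the mirror config; real path is /etc/docker/daemon.json.
cat > /tmp/daemon.json <<'EOF'
{
  "registry-mirrors": ["http://hub-mirror.c.163.com"]
}
EOF
cat /tmp/daemon.json
# Then on the real host: systemctl restart docker
```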

Record of a successful install:

```shell
# kubeadm init --config kubeadm.yaml
I1205 23:08:15.852917 5188 version.go:93] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.12.2.txt": Get https://dl.k8s.io/release/stable-1.12.2.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I1205 23:08:15.853144 5188 version.go:94] falling back to the local client version: v1.12.2
[init] using Kubernetes version: v1.12.2
[preflight] running pre-flight checks
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [debian localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [debian localhost] and IPs [192.168.2.118 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [debian kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.2.118]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[certificates] Generated sa key and public key.
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 48.078220 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.12" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node debian as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node debian as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "debian" as an annotation
[bootstraptoken] using token: x4p0vz.tdp1xxxx7uyerrrs
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.2.118:6443 --token x4p0vz.tdp1xxxx7uyerrrs --discovery-token-ca-cert-hash sha256:64cb13f7f004fe8dd3e6d0e246950f4cbdfa65e2a84f8988c3070abf8183b3e9
```

Deploying a network plugin

After the install succeeds, check the node with `kubectl get nodes`. Note that kubectl must run as kubernetes-admin: you need to copy the admin config file and set the environment variable before `kubectl get nodes` will work:

```shell
# kubectl get nodes
The connection to the server localhost: was refused - did you specify the right host or port?
# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bashrc
# source ~/.bashrc
# kubectl get nodes
NAME     STATUS     ROLES    AGE   VERSION
debian   NotReady   master   21m   v1.12.2
```

The node shows NotReady because no network plugin has been deployed yet:

```shell
# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-576cbf47c7-4vjhf / Pending 24m
coredns-576cbf47c7-xzjk7 / Pending 24m
etcd-debian / Running 23m
kube-apiserver-debian / Running 23m
kube-controller-manager-debian / Running 23m
kube-proxy-5wb6k / Running 24m
kube-scheduler-debian / Running 23m

# kubectl describe node debian
Name:               debian
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=debian
                    node-role.kubernetes.io/master=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl:
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Wed, Dec :: +
Taints:             node-role.kubernetes.io/master:NoSchedule
                    node.kubernetes.io/not-ready:NoSchedule
Unschedulable:      false
Conditions:
  Type            Status  LastHeartbeatTime  LastTransitionTime  Reason                      Message
  ----            ------  -----------------  ------------------  ------                      -------
  OutOfDisk       False   Wed, Dec :: +      Wed, Dec :: +       KubeletHasSufficientDisk    kubelet has sufficient disk space available
  MemoryPressure  False   Wed, Dec :: +      Wed, Dec :: +       KubeletHasSufficientMemory  kubelet has sufficient memory available
  DiskPressure    False   Wed, Dec :: +      Wed, Dec :: +       KubeletHasNoDiskPressure    kubelet has no disk pressure
  PIDPressure     False   Wed, Dec :: +      Wed, Dec :: +       KubeletHasSufficientPID     kubelet has sufficient PID available
  Ready           False   Wed, Dec :: +      Wed, Dec :: +       KubeletNotReady             runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized. WARNING: CPU hardcapping unsupported
Addresses:
  InternalIP:  192.168.2.118
  Hostname:    debian
Capacity:
  cpu:
  ephemeral-storage:  4673664Ki
  hugepages-2Mi:
  memory:             5716924Ki
  pods:
Allocatable:
  cpu:
  ephemeral-storage:
  hugepages-2Mi:
  memory:             5614524Ki
  pods:
System Info:
  Machine ID:                 4341bb45c5c84ad2827c173480039b5c
  System UUID:                05F887C4-A455-122E-8B14-8C736EA3DBDB
  Boot ID:                    ff68f27b-fba0--a1cf-796dd013e025
  Kernel Version:             3.16.--amd64
  OS Image:                   Debian GNU/Linux (jessie)
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://18.6.1
  Kubelet Version:            v1.12.2
  Kube-Proxy Version:         v1.12.2
Non-terminated Pods:  ( in total)
  Namespace    Name                            CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------    ----                            ------------  ----------  ---------------  -------------
  kube-system  etcd-debian                     (%)           (%)         (%)              (%)
  kube-system  kube-apiserver-debian           250m (%)      (%)         (%)              (%)
  kube-system  kube-controller-manager-debian  200m (%)      (%)         (%)              (%)
  kube-system  kube-proxy-5wb6k                (%)           (%)         (%)              (%)
  kube-system  kube-scheduler-debian           100m (%)      (%)         (%)              (%)
Allocated resources:
  (Total limits may be over percent, i.e., overcommitted.)
  Resource  Requests  Limits
  --------  --------  ------
  cpu       550m (%)  (%)
  memory    (%)       (%)
Events:
  Type    Reason                   Age                From                Message
  ----    ------                   ----               ----                -------
  Normal  Starting                 22m                kubelet, debian     Starting kubelet.
  Normal  NodeAllocatableEnforced  22m                kubelet, debian     Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientDisk    22m (x6 over 22m)  kubelet, debian     Node debian status is now: NodeHasSufficientDisk
  Normal  NodeHasSufficientMemory  22m (x6 over 22m)  kubelet, debian     Node debian status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    22m (x6 over 22m)  kubelet, debian     Node debian status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     22m (x5 over 22m)  kubelet, debian     Node debian status is now: NodeHasSufficientPID
  Normal  Starting                 21m                kube-proxy, debian  Starting kube-proxy.
```
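The plugin deployment command itself does not appear in this record; the pods that show up in the next listing belong to Weave Net, whose documented install is a single `kubectl apply` against a version-parameterized manifest URL. A sketch of constructing that URL, with the version hardcoded as a placeholder so it runs without a cluster (per the Weave docs, the `k8s-version` parameter on a live cluster is the base64-encoded output of `kubectl version`):

```shell
# Hypothetical sketch: build the Weave Net manifest URL.
# On a live cluster: K8S_VERSION="$(kubectl version | base64 | tr -d '\n')"
K8S_VERSION="v1.12.2"   # placeholder
URL="https://cloud.weave.works/k8s/net?k8s-version=${K8S_VERSION}"
echo "$URL" > /tmp/weave-url.txt
echo "kubectl apply -f \"$URL\""
```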

After deploying the plugin, all pods eventually reach Running. The plugin takes a few minutes to come up, and intermediate states such as ContainerCreating and CrashLoopBackOff are normal:

```shell
# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-576cbf47c7-4vjhf / Pending 25m
coredns-576cbf47c7-xzjk7 / Pending 25m
etcd-debian / Running 25m
kube-apiserver-debian / Running 25m
kube-controller-manager-debian / Running 25m
kube-proxy-5wb6k / Running 25m
kube-scheduler-debian / Running 25m
weave-net-nj7bk / ContainerCreating 21s
# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-576cbf47c7-4vjhf / CrashLoopBackOff 27m
coredns-576cbf47c7-xzjk7 / CrashLoopBackOff 27m
etcd-debian / Running 27m
kube-apiserver-debian / Running 27m
kube-controller-manager-debian / Running 27m
kube-proxy-5wb6k / Running 27m
kube-scheduler-debian / Running 27m
weave-net-nj7bk / Running 2m32s
# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-576cbf47c7-4vjhf / Running 27m
coredns-576cbf47c7-xzjk7 / Running 27m
etcd-debian / Running 27m
kube-apiserver-debian / Running 27m
kube-controller-manager-debian / Running 27m
kube-proxy-5wb6k / Running 27m
kube-scheduler-debian / Running 27m
weave-net-nj7bk / Running 2m42s
```

```shell
# kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
debian   Ready    master   38m   v1.12.2
```

Allowing Pods to run on the master

By default, Kubernetes uses the Taint/Toleration mechanism to put a "taint" on the master node:

```shell
# kubectl describe node debian | grep Taints
Taints: node-role.kubernetes.io/master:NoSchedule
```

No Pod will run on the tainted node by default, unless:

1. The Pod explicitly declares that it tolerates the taint, by adding a `tolerations` field to the `spec` section of the Pod's YAML.
2. For a test cluster of just a few machines, the best option is simply to delete the taint:

   ```shell
   # kubectl taint nodes --all node-role.kubernetes.io/master-
   node/debian untainted
   # kubectl describe node debian | grep Taints
   Taints: <none>
   ```
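For option 1, a hypothetical Pod spec fragment tolerating the master taint could look like the following (the pod name and image are illustrative, not from this cluster):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  tolerations:
  - key: "node-role.kubernetes.io/master"
    operator: "Exists"
    effect: "NoSchedule"
  containers:
  - name: demo
    image: nginx
```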

Adding nodes

Because the master's kubeadm/kubelet are v1.12.2 while a plain `apt-get install` on the worker node pulled in v1.13 by default, joining the cluster failed. The mismatched packages had to be removed and reinstalled at the matching version:

```shell
root@debian-vm:~# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.0", GitCommit:"ddf47ac13c1a9483ea035a79cd7c10005ff21a6d", GitTreeState:"clean", BuildDate:"2018-12-03T21:02:01Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
root@debian-vm:~# kubelet --version
Kubernetes v1.13.0
root@debian-vm:~# apt-get --purge remove kubeadm kubelet
root@debian-vm:~# apt-cache policy kubeadm
kubeadm:
  Installed: (none)
  Candidate: 1.13.0-00
  Version table:
     1.13.0-00 0
        500 https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial/main amd64 Packages
     1.12.3-00 0
        500 https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial/main amd64 Packages
     1.12.2-00 0
        500 https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial/main amd64 Packages
root@debian-vm:~# apt-get install kubeadm=1.12.2-00 kubelet=1.12.2-00
```
```shell
root@debian-vm:~# kubeadm join 192.168.2.118:6443 --token x4p0vz.tdp1xxxx7uyerrrs --discovery-token-ca-cert-hash sha256:64cb13f7f004fe8dd3e6d0e246950f4cbdfa65e2a84f8988c3070abf8183b3e9
[preflight] running pre-flight checks
[discovery] Trying to connect to API Server "192.168.2.118:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.2.118:6443"
[discovery] Requesting info from "https://192.168.2.118:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.2.118:6443"
[discovery] Successfully established connection with API Server "192.168.2.118:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "debian-vm" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

root@debian-vm:~#
```

The node joined the cluster successfully.

References:

- https://github.com/kubernetes/kubernetes/issues/54914
- https://github.com/kubernetes/kubeadm/issues/610
- https://blog.csdn.net/acxlm/article/details/79069468
