Setting up a k8s environment
Mirantis runs a training course that provides installation scripts:
- git clone https://bitbucket.org/mirantis-training/kd100-scripts
- the network plugin used is Calico
Training material (user: student, pass: sublime)
Test site (user: pass: happy knuth)
All of the k8s projects:
come with source code and examples.
Installation environment:
http://los-vmm.sc.intel.com/wiki/Start_a_devstack_in_20_minutes
wget -O- http://otcloud-gateway.bj.intel.com/runstack | bash
For cloud-init configuration, see http://www.cnblogs.com/shaohef/p/8137073.html
User setup
yanglin wrote an installation script:
https://github.com/shaohef/transcoder-daemon/blob/master/k8s/installk8s.sh
kubectl can also be installed via snap; first bring the system up to date:
$ sudo apt update
$ sudo apt upgrade
One-shot script
# https://kubernetes.io/docs/tasks/tools/install-kubectl/
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
source <(kubectl completion bash)

# install docker
sudo docker version
if [ $? != 0 ]; then
    wget -O- https://get.docker.com/ | bash
    sudo usermod -aG docker $USER
fi

# https://kubernetes.io/docs/tasks/tools/install-minikube/
# Use a VM to install kubernetes
# https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#14-installing-kubeadm-on-your-hosts
# https://kubernetes.io/docs/setup/independent/install-kubeadm/
sudo apt-get update && sudo apt-get install -y apt-transport-https
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
# should use tee for sudo user
cat <<EOF | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
sudo apt-get update
sudo apt-get install -y kubelet kubeadm # kubectl

# https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/
# https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#24-initializing-your-master
sudo kubeadm init --pod-network-cidr=10.244.0.0/16 # flannel
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#pod-network
# https://github.com/coreos/flannel
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel-rbac.yml

# Wait for kube-dns to become ready
sleep 60

# https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#master-isolation
kubectl taint nodes --all node-role.kubernetes.io/master-
# https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#44-joining-your-nodes
curl https://glide.sh/get | sh
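After the script finishes, it is worth waiting until every node reports Ready before scheduling workloads. A minimal sketch of such a check, assuming the output format of kubectl get nodes --no-headers; the sample below is illustrative, not captured from a real cluster:

```shell
#!/usr/bin/env bash
# Count nodes that are NOT in the Ready state from `kubectl get nodes --no-headers` output.
count_not_ready() {
  awk '$2 != "Ready" {n++} END {print n+0}'
}

# Against a live cluster (assumes kubectl is configured):
#   kubectl get nodes --no-headers | count_not_ready

# Illustrative sample output for a two-node cluster:
sample="master   Ready     master   5m   v1.9.2
worker1  NotReady  <none>   1m   v1.9.2"

echo "$sample" | count_not_ready
```

Looping until the count reaches 0 replaces the blind sleep in the script above.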
For further configuration details, see: https://kubernetes.io/docs/getting-started-guides/scratch/
yanglin also has a cluster deployment setup:
https://github.com/LinEricYang/kubernetes-vagrant-ansible
Developer setup
1. Official reference docs
https://github.com/kubernetes/community/tree/master/contributors/devel
Clone the community guide:
git clone https://github.com/kubernetes/community.git
Clone the source:
git clone https://github.com/kubernetes/kubernetes.git
cd kubernetes
https://github.com/kubernetes/community/blob/master/contributors/guide/github-workflow.md
Install Go
sudo add-apt-repository ppa:gophers/archive
sudo apt update
sudo apt-get install golang-1.9-go
echo "export PATH=\$PATH:/usr/lib/go-1.9/bin" >> ~/.profile
source ~/.profile
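After sourcing the profile, it is easy to confirm the installed version is new enough for this Kubernetes tree. A small sketch that parses the output of go version; the sample string is what Go 1.9 prints on amd64 Ubuntu:

```shell
#!/usr/bin/env bash
# Extract the Go minor version from `go version` output, e.g. "go version go1.9.2 linux/amd64" -> 9.
go_minor() {
  sed -n 's/.*go1\.\([0-9][0-9]*\).*/\1/p'
}

# Against a real installation:  go version | go_minor
sample="go version go1.9.2 linux/amd64"
minor=$(echo "$sample" | go_minor)
if [ "$minor" -ge 9 ]; then
  echo "Go 1.$minor is new enough"
fi
```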
Installing from the official tarball:
http://jdstaerk.de/installing-go-1-9-on-ubuntu/
https://askubuntu.com/questions/959932/installation-instructions-for-golang-1-9-into-ubuntu-16-04
Download the golang 1.9 tarball from the official site, then extract it into /usr/local, creating a Go tree in /usr/local/go:
tar -C /usr/local -xzf go$VERSION.$OS-$ARCH.tar.gz
After extracting, add the following lines to your $HOME/.profile:
# Set GOROOT
export GOROOT=/usr/local/go
export PATH=$GOROOT/bin:$PATH
Install CFSSL
The cfssl PKI toolkit is used to generate the Certificate Authority (CA) certificate and key files:
go get -u github.com/cloudflare/cfssl/cmd/...
PATH=$PATH:$GOPATH/bin
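cfssl builds a CA from a JSON CSR spec. A minimal sketch of such a spec; the CN, country, and organization values here are placeholders to adapt, and the file is written to a temp dir just for illustration:

```shell
#!/usr/bin/env bash
# Write a minimal CA CSR spec for cfssl. All field values below are placeholders.
workdir=$(mktemp -d)
cat > "$workdir/ca-csr.json" <<'EOF'
{
  "CN": "kubernetes",
  "key": { "algo": "rsa", "size": 2048 },
  "names": [ { "C": "CN", "O": "k8s" } ]
}
EOF

# With cfssl and cfssljson on PATH, the CA cert and key would then be generated by:
#   cfssl gencert -initca "$workdir/ca-csr.json" | cfssljson -bare ca
echo "$workdir/ca-csr.json"
```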
Install etcd
hack/install-etcd.sh  # Installs in ./third_party/etcd
echo export PATH="\$PATH:$(pwd)/third_party/etcd" >> ~/.profile  # Add to PATH
Downloading https://github.com/coreos/etcd/releases/download/v3.1.10/etcd-v3.1.10-linux-amd64.tar.gz succeeded
etcd v3.1.10 installed. To use:
export PATH=/home/ubuntu/kubernetes/third_party/etcd:${PATH}
Testing etcd:
http://cizixs.com/2016/08/02/intro-to-etcd
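A quick smoke test against the local etcd is to set a key via the v2 HTTP API and check the response. This sketch only parses such a response; the sample body is an assumed example of what etcd v2 returns, and the sed one-liner is naive JSON handling that works for this fixed shape only:

```shell
#!/usr/bin/env bash
# Against a running etcd, the set would be done with:
#   curl -s http://127.0.0.1:2379/v2/keys/_test -XPUT -d value=hello
# An assumed example response body:
resp='{"action":"set","node":{"key":"/_test","value":"hello","modifiedIndex":4,"createdIndex":4}}'

# Extract the "action" field with sed (illustrative, not a general JSON parser).
action=$(echo "$resp" | sed -n 's/.*"action":"\([^"]*\)".*/\1/p')
echo "$action"
```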
Build (optional)
You can use either bazel or plain make.
Using bazel requires installing it first:
https://docs.bazel.build/versions/master/install.html
http://blog.csdn.net/u010510350/article/details/52247972
The update step may report an error:
locale: Cannot set LC_ALL to default locale: No such file or directory
See https://askubuntu.com/questions/162391/how-do-i-fix-my-locale-issue
Run the k8s cluster
cd kubernetes
hack/local-up-cluster.sh
After modifying code, rebuild and rerun:
cd kubernetes
make
hack/local-up-cluster.sh
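A full make is slow; the Kubernetes tree also supports building a single component with make WHAT=<target>. A sketch of a tiny rebuild helper, assuming it is run from the kubernetes checkout (the actual build line is left commented so the sketch is runnable anywhere):

```shell
#!/usr/bin/env bash
# Build only one component of the tree, e.g.:  ./rebuild.sh cmd/kubelet
# `make WHAT=<dir>` limits the build to that target directory.
component=${1:-cmd/kubectl}
build_cmd="make WHAT=${component}"
echo "Would run: $build_cmd"
# In the kubernetes checkout, uncomment to build and restart the local cluster:
#   $build_cmd && hack/local-up-cluster.sh
```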
Output:
~/kubernetes$ ./hack/local-up-cluster.sh
WARNING : The kubelet is configured to not fail if swap is enabled; production deployments should disable swap.
WARNING : This script MAY be run as root for docker socket / iptables functionality; if failures occur, retry as root.
make: Entering directory '/home/ubuntu/kubernetes'
make[1]: Entering directory '/home/ubuntu/kubernetes'
make[1]: Leaving directory '/home/ubuntu/kubernetes'
+++ Building the toolchain targets:
    k8s.io/kubernetes/hack/cmd/teststale
    k8s.io/kubernetes/vendor/github.com/jteeuwen/go-bindata/go-bindata
+++ Generating bindata:
    test/e2e/generated/gobindata_util.go
~/kubernetes ~/kubernetes/test/e2e/generated
~/kubernetes/test/e2e/generated
+++ Building go targets for linux/amd64:
    cmd/kubectl
    cmd/hyperkube
+++ Warning: stdlib pkg with cgo flag not found.
+++ Warning: stdlib pkg cannot be rebuilt since /usr/lib/go-1.9/pkg is not writable by ubuntu
+++ Warning: Make /usr/lib/go-1.9/pkg writable for ubuntu for a one-time stdlib install, Or
+++ Warning: Rebuild stdlib using the command 'CGO_ENABLED=0 go install -a -installsuffix cgo std'
+++ Falling back to go build, which is slower
make: Leaving directory '/home/ubuntu/kubernetes'
WARNING: No swap limit support
Kubelet cgroup driver defaulted to use: cgroupfs
API SERVER insecure port is free, proceeding...
API SERVER secure port is free, proceeding...
Detected host and ready to start services. Doing some housekeeping first...
Using GO_OUT /home/ubuntu/kubernetes/_output/local/bin/linux/amd64
Starting services now!
Starting etcd
etcd --advertise-client-urls http://127.0.0.1:2379 --data-dir /tmp/tmp.fc4lxZmyBY --listen-client-urls http://127.0.0.1:2379 --debug >/dev/null
Waiting for etcd to come up.
+++ On try 1, etcd: : http://127.0.0.1:2379
{"action":"set","node":{"key":"/_test","value":"", ...}}
Generating a 2048 bit RSA private key
.................+++
...................................................................+++
writing new private key to '/var/run/kubernetes/server-ca.key'
-----
Generating a 2048 bit RSA private key
..................................+++
...............+++
writing new private key to '/var/run/kubernetes/client-ca.key'
-----
Generating a 2048 bit RSA private key
....+++
.........+++
writing new private key to '/var/run/kubernetes/request-header-ca.key'
-----
[INFO] generate received request
[INFO] received CSR
[INFO] generating key: rsa-2048
[INFO] encoded CSR
[INFO] signed certificate with serial number
(the five lines above repeat once for each certificate being signed)
Waiting for apiserver to come up
+++ On try 1, apiserver: : ok
Cluster "local-up-cluster" set.
use 'kubectl --kubeconfig=/var/run/kubernetes/admin-kube-aggregator.kubeconfig' to use the aggregated API server
service "kube-dns" created
serviceaccount "kube-dns" created
configmap "kube-dns" created
deployment "kube-dns" created
Kube-dns addon successfully deployed.
kubelet is running.
Create default storage class for
storageclass "standard" created
Local Kubernetes cluster is running. Press Ctrl-C to shut it down.

Logs:
/tmp/kube-apiserver.log
/tmp/kube-controller-manager.log
/tmp/kube-proxy.log
/tmp/kube-scheduler.log
/tmp/kubelet.log

To start using your cluster, you can open up another terminal/tab and run:

export KUBECONFIG=/var/run/kubernetes/admin.kubeconfig
cluster/kubectl.sh

Alternatively, you can write to the default kubeconfig:

export KUBERNETES_PROVIDER=local
cluster/kubectl.sh config set-cluster local --server=https://localhost:6443 --certificate-authority=/var/run/kubernetes/server-ca.crt
cluster/kubectl.sh config set-credentials myself --client-key=/var/run/kubernetes/client-admin.key --client-certificate=/var/run/kubernetes/client-admin.crt
cluster/kubectl.sh config set-context local --cluster=local --user=myself
cluster/kubectl.sh config use-context local
cluster/kubectl.sh
./hack/local-up-cluster.sh: line : Killed  ${CONTROLPLANE_SUDO} "${GO_OUT}/hyperkube" controller-manager --v=${LOG_LEVEL} --vmodule="${LOG_SPEC}" --service-account-private-key-file="${SERVICE_ACCOUNT_KEY}" --root-ca-file="${ROOT_CA_FILE}" --cluster-signing-cert-file="${CLUSTER_SIGNING_CERT_FILE}" --cluster-signing-key-file="${CLUSTER_SIGNING_KEY_FILE}" --enable-hostpath-provisioner="${ENABLE_HOSTPATH_PROVISIONER}" ${node_cidr_args} --pvclaimbinder-sync-period="${CLAIM_BINDER_SYNC_PERIOD}" --feature-gates="${FEATURE_GATES}" ${cloud_config_arg} --kubeconfig "$CERT_DIR"/controller.kubeconfig --use-service-account-credentials --controllers="${KUBE_CONTROLLERS}" --master="https://${API_HOST}:${API_SECURE_PORT}" > "${CTLRMGR_LOG}" 2>&1
Debugging
$ go get github.com/derekparker/delve/cmd/dlv
$ ps -ef |grep "hyperkube apiserver"
$ sudo sysctl -w kernel.yama.ptrace_scope=0
$ cat >> ~/.bashrc <<<'
GOROOT=`go env |grep "GOROOT" |cut -d "=" -f2`
GOROOT=${GOROOT#\"}
GOROOT=${GOROOT%\"}
GOPATH=`go env |grep GOPATH |cut -d "=" -f 2`
GOPATH=${GOPATH%\"}
GOPATH=${GOPATH#\"}
export PATH="$PATH:$GOROOT/bin:$GOPATH/bin"'
$ source ~/.bashrc
$ sudo su
# echo 0 > /proc/sys/kernel/yama/ptrace_scope
# exit
$ sudo $GOPATH/bin/dlv attach $PID
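The ps -ef | grep step above can be wrapped so dlv always attaches to the right PID. A sketch that extracts the PID column from ps output; the sample line below is an assumed example of what ps prints for the apiserver process:

```shell
#!/usr/bin/env bash
# Extract the PID (second column of `ps -ef`) of the hyperkube apiserver process.
apiserver_pid() {
  grep "hyperkube apiserver" | grep -v grep | awk '{print $2}' | head -n1
}

# Live usage:
#   PID=$(ps -ef | apiserver_pid)
#   sudo "$GOPATH/bin/dlv" attach "$PID"

# Assumed sample `ps -ef` line:
sample="ubuntu   12345     1  2 10:00 ?  00:00:30 /home/ubuntu/kubernetes/_output/local/bin/linux/amd64/hyperkube apiserver --etcd-servers=http://127.0.0.1:2379"
echo "$sample" | apiserver_pid
```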
Accessing the API (see access-cluster-api in the k8s docs)
kubernetes-from-the-ground-up-the-api-server
$ curl http://localhost:8080/api/v1/pods
$ CERTDIR=/var/run/kubernetes
$ curl -i https://127.0.0.1:6443/api/v1/pods --cert $CERTDIR/client-admin.crt --key $CERTDIR/client-admin.key --cacert $CERTDIR/server-ca.crt
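The client-certificate curl call is easier to reuse if the cert paths live in one place. A sketch that composes the arguments for the secure port; the paths are the ones local-up-cluster.sh writes under /var/run/kubernetes, treated here as assumptions:

```shell
#!/usr/bin/env bash
# Compose the mutual-TLS curl arguments for the secure API port.
CERTDIR=${CERTDIR:-/var/run/kubernetes}
api_curl_args() {
  local path=$1
  echo "--cert $CERTDIR/client-admin.crt --key $CERTDIR/client-admin.key --cacert $CERTDIR/server-ca.crt https://127.0.0.1:6443${path}"
}

# Live usage:  curl -i $(api_curl_args /api/v1/pods)
api_curl_args /api/v1/pods
```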