Reference, from the Calico official docs: http://docs.projectcalico.org/v2.0/getting-started/kubernetes/installation/hosted/kubeadm/

The link above describes the setup in which Calico uses its own dedicated etcd for discovery and storage of pod/network state. The YAML manifest deploys a single-node etcd cluster on the node hosting the Kubernetes master; that etcd instance is used only by Calico itself.

Here is my configuration:

# This ConfigMap is used to configure a self-hosted Calico installation.
kind: ConfigMap
apiVersion: v1
metadata:
  name: calico-config
  namespace: kube-system
data:
  # The location of your etcd cluster. This uses the Service clusterIP
  # defined below.
  # 192.168.182.128 is my VM's address, i.e. the machine the Kubernetes
  # master node runs on; the same applies wherever this address is used below.
  etcd_endpoints: "http://192.168.182.128:6666"

  # True enables BGP networking, false tells Calico to enforce
  # policy only, using native networking.
  enable_bgp: "true"

  # The CNI network configuration to install on each node.
  cni_network_config: |-
    {
        "name": "k8s-pod-network",
        "type": "calico",
        "etcd_endpoints": "http://192.168.182.128:6666",
        "log_level": "info",
        "ipam": {
            "type": "calico-ipam"
        },
        "policy": {
            "type": "k8s",
            "k8s_api_root": "https://192.168.182.128:6443",
            "k8s_auth_token": ""
        },
        "kubernetes": {
            "kubeconfig": "/etc/kubernetes/kubelet.conf"
        }
    }

  # The default IP Pool to be created for the cluster.
  # Pod IP addresses will be assigned from this pool.
  ippool.yaml: |
      apiVersion: v1
      kind: ipPool
      metadata:
        cidr: 10.1.0.0/16
      spec:
        ipip:
          enabled: true
        nat-outgoing: true

---

# This manifest installs the Calico etcd on the kubeadm master. This uses a DaemonSet
# to force it to run on the master even when the master isn't schedulable, and uses
# nodeSelector to ensure it only runs on the master.
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: calico-etcd
  namespace: kube-system
  labels:
    k8s-app: calico-etcd
spec:
  template:
    metadata:
      labels:
        k8s-app: calico-etcd
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
        scheduler.alpha.kubernetes.io/tolerations: |
          [{"key": "dedicated", "value": "master", "effect": "NoSchedule" },
           {"key":"CriticalAddonsOnly", "operator":"Exists"}]
    spec:
      # Only run this pod on the master.
      nodeSelector:
        kubeadm.alpha.kubernetes.io/role: master
      hostNetwork: true
      containers:
        - name: calico-etcd
          image: k8s/etcd:v3.0.15
          env:
            - name: CALICO_ETCD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          command: ["/bin/sh","-c"]
          args: ["/usr/local/bin/etcd --name=calico --data-dir=/var/etcd/calico-data --advertise-client-urls=http://$CALICO_ETCD_IP:6666 --listen-client-urls=http://0.0.0.0:6666 --listen-peer-urls=http://0.0.0.0:6667"]
          volumeMounts:
            - name: var-etcd
              mountPath: /var/etcd
      volumes:
        - name: var-etcd
          hostPath:
            path: /var/etcd

---

# This manifest installs the Service which gets traffic to the Calico
# etcd.
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: calico-etcd
  name: calico-etcd
  namespace: kube-system
spec:
  # Select the calico-etcd pod running on the master.
  selector:
    k8s-app: calico-etcd
  # This ClusterIP needs to be known in advance, since we cannot rely
  # on DNS to get access to etcd.
  clusterIP: None
  ports:
    - port: 6666

---

# This manifest installs the calico/node container, as well
# as the Calico CNI plugins and network config on
# each master and worker node in a Kubernetes cluster.
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
  name: calico-node
  namespace: kube-system
  labels:
    k8s-app: calico-node
spec:
  selector:
    matchLabels:
      k8s-app: calico-node
  template:
    metadata:
      labels:
        k8s-app: calico-node
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
        scheduler.alpha.kubernetes.io/tolerations: |
          [{"key": "dedicated", "value": "master", "effect": "NoSchedule" },
           {"key":"CriticalAddonsOnly", "operator":"Exists"}]
    spec:
      hostNetwork: true
      containers:
        # Runs calico/node container on each Kubernetes node. This
        # container programs network policy and routes on each
        # host.
        - name: calico-node
          image: calico/node:v1.0.2
          env:
            # The location of the Calico etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # Enable BGP. Disable to enforce policy only.
            - name: CALICO_NETWORKING
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: enable_bgp
            # Disable file logging so `kubectl logs` works.
            - name: CALICO_DISABLE_FILE_LOGGING
              value: "true"
            # Set Felix endpoint to host default action to ACCEPT.
            - name: FELIX_DEFAULTENDPOINTTOHOSTACTION
              value: "ACCEPT"
            # Don't configure a default pool. This is done by the Job
            # below.
            - name: NO_DEFAULT_POOLS
              value: "true"
            # Auto-detect the BGP IP address.
            - name: IP
              value: ""
          securityContext:
            privileged: true
          volumeMounts:
            - mountPath: /lib/modules
              name: lib-modules
              readOnly: true
            - mountPath: /var/run/calico
              name: var-run-calico
              readOnly: false
        # This container installs the Calico CNI binaries
        # and CNI network config file on each node.
        - name: install-cni
          image: quay.io/calico/cni:v1.5.5
          command: ["/install-cni.sh"]
          env:
            # The location of the Calico etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # The CNI network config to install on each node.
            - name: CNI_NETWORK_CONFIG
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: cni_network_config
          volumeMounts:
            - mountPath: /host/opt/cni/bin
              name: cni-bin-dir
            - mountPath: /host/etc/cni/net.d
              name: cni-net-dir
      volumes:
        # Used by calico/node.
        - name: lib-modules
          hostPath:
            path: /lib/modules
        - name: var-run-calico
          hostPath:
            path: /var/run/calico
        # Used to install CNI.
        - name: cni-bin-dir
          hostPath:
            path: /opt/cni/bin
        - name: cni-net-dir
          hostPath:
            path: /etc/cni/net.d

---

# This manifest deploys the Calico policy controller on Kubernetes.
# See https://github.com/projectcalico/k8s-policy
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: calico-policy-controller
  namespace: kube-system
  labels:
    k8s-app: calico-policy
spec:
  # The policy controller can only have a single active instance.
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      name: calico-policy-controller
      namespace: kube-system
      labels:
        k8s-app: calico-policy-controller
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
        scheduler.alpha.kubernetes.io/tolerations: |
          [{"key": "dedicated", "value": "master", "effect": "NoSchedule" },
           {"key":"CriticalAddonsOnly", "operator":"Exists"}]
    spec:
      # The policy controller must run in the host network namespace so that
      # it isn't governed by policy that would prevent it from working.
      hostNetwork: true
      containers:
        - name: calico-policy-controller
          image: calico/kube-policy-controller:v0.5.2
          env:
            # The location of the Calico etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # The location of the Kubernetes API. Use the default Kubernetes
            # service for API access.
            - name: K8S_API
              value: "https://kubernetes.default:443"
            # Since we're running in the host namespace and might not have KubeDNS
            # access, configure the container's /etc/hosts to resolve
            # kubernetes.default to the correct service clusterIP.
            - name: CONFIGURE_ETC_HOSTS
              value: "true"

---

# This manifest deploys a Job which performs one time
# configuration of Calico
apiVersion: batch/v1
kind: Job
metadata:
  name: configure-calico
  namespace: kube-system
  labels:
    k8s-app: calico
spec:
  template:
    metadata:
      name: configure-calico
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
        scheduler.alpha.kubernetes.io/tolerations: |
          [{"key": "dedicated", "value": "master", "effect": "NoSchedule" },
           {"key":"CriticalAddonsOnly", "operator":"Exists"}]
    spec:
      hostNetwork: true
      restartPolicy: OnFailure
      containers:
        # Writes basic configuration to datastore.
        - name: configure-calico
          image: calico/ctl:v1.0.2
          args:
            - apply
            - -f
            - /etc/config/calico/ippool.yaml
          volumeMounts:
            - name: config-volume
              mountPath: /etc/config
          env:
            # The location of the etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
      volumes:
        - name: config-volume
          configMap:
            name: calico-config
            items:
              - key: ippool.yaml
                path: calico/ippool.yaml
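Note that the calico-etcd DaemonSet above is pinned to the master through the nodeSelector kubeadm.alpha.kubernetes.io/role: master, so that label must exist on the master node. A minimal check before applying the manifest (the node name k8s-master below is just a placeholder, not something from my setup):

# Check which labels kubeadm put on the nodes.
kubectl get nodes --show-labels

# If the role label is missing on the master, add it manually.
kubectl label node k8s-master kubeadm.alpha.kubernetes.io/role=master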

Install Calico with kubectl:

kubectl create -f calico.yaml

Based on the command output and the usual kubectl commands, you can check the running state of the corresponding Deployments and each Pod, as well as any error messages.
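For example (a minimal set of checks; the xxxxx suffix is a placeholder for the hash Kubernetes appends to pod names):

# All Calico components run in the kube-system namespace.
kubectl get pods --namespace=kube-system -o wide

# Scheduling events and error messages of a specific pod.
kubectl describe pod calico-node-xxxxx --namespace=kube-system

# Container logs, e.g. of the calico-node container.
kubectl logs calico-node-xxxxx -c calico-node --namespace=kube-system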

If everything is healthy, you can create test Pods and verify connectivity with ping; the test procedure from the official docs also works:

kubectl create ns policy-demo

# Run the Pods.
kubectl run --namespace=policy-demo nginx --replicas=2 --image=nginx

# Create the Service.
kubectl expose --namespace=policy-demo deployment nginx --port=80

# Run a Pod and try to access the `nginx` Service.
$ kubectl run --namespace=policy-demo access --rm -ti --image busybox /bin/sh
Waiting for pod policy-demo/access--y0m47 to be running, status is Pending, pod ready: false
If you don't see a command prompt, try pressing enter.
/ # wget -q nginx -O -

If you see a response from nginx, the network is working. Note that this assumes a DNS service is already running in the Kubernetes cluster.
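A quick way to confirm DNS is available before running the wget test (this assumes the kube-dns addon that kubeadm deploys; the label selector below matches that addon):

kubectl get pods --namespace=kube-system -l k8s-app=kube-dns

# From inside the busybox test pod, the service name should also resolve:
/ # nslookup nginx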

You can also send a ping directly from the command line:

kubectl exec -ti nginx--f2cdm --namespace=policy-demo ping 10.1.155.75

The Docker images involved download very slowly from within China; I can post them here if anyone needs them.
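If you want to work around the slow downloads yourself, one option is to pull the images on a machine with good connectivity, export them with docker save, and load them on every cluster node. This is only a sketch; the file name calico-images.tar is arbitrary:

# On a machine that can reach Docker Hub and quay.io:
docker pull calico/node:v1.0.2
docker pull quay.io/calico/cni:v1.5.5
docker pull calico/kube-policy-controller:v0.5.2
docker pull calico/ctl:v1.0.2
docker save -o calico-images.tar calico/node:v1.0.2 quay.io/calico/cni:v1.5.5 calico/kube-policy-controller:v0.5.2 calico/ctl:v1.0.2

# Copy calico-images.tar to each cluster node, then:
docker load -i calico-images.tar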
