Docker v1.12 brings integrated orchestration into the Docker Engine.

Starting with Docker 1.12, we have added features to the core Docker Engine to make multi-host and multi-container orchestration easy. We’ve added new API objects, like Service and Node, that will let you use the Docker API to deploy and manage apps on a group of Docker Engines called a swarm. With Docker 1.12, the best way to orchestrate Docker is Docker!

Playing on GCE

Create swarm-manager:

gcloud init
docker-machine create swarm-manager \
--engine-install-url experimental.docker.com \
-d google \
--google-machine-type n1-standard-1 \
--google-zone us-central1-f \
--google-disk-size "500" \
--google-tags swarm-cluster \
--google-project k8s-dev-prj

Check what version has been installed:

$ eval $(docker-machine env swarm-manager)
$ docker version
Client:
Version: 1.12.0-rc2
API version: 1.24
Go version: go1.6.2
Git commit: 906eacd
Built: Fri Jun 17 20:35:33 2016
OS/Arch: darwin/amd64
Experimental: true

Server:
Version: 1.12.0-rc2
API version: 1.24
Go version: go1.6.2
Git commit: 906eacd
Built: Fri Jun 17 21:07:35 2016
OS/Arch: linux/amd64
Experimental: true

Create worker node:

docker-machine create swarm-worker-1 \
--engine-install-url experimental.docker.com \
-d google \
--google-machine-type n1-standard-1 \
--google-zone us-central1-f \
--google-disk-size "500" \
--google-tags swarm-cluster \
--google-project k8s-dev-prj

Initialize swarm

# init manager
eval $(docker-machine env swarm-manager)
docker swarm init

Under the hood this creates a Raft consensus group of one node. This first node has the role of manager, meaning it accepts commands and schedules tasks. As you join more nodes to the swarm, they will by default be workers, which simply execute containers dispatched by the manager. You can optionally add additional manager nodes; manager nodes become part of the Raft consensus group. Docker uses an optimized Raft store in which reads are serviced directly from memory, which keeps scheduling fast.
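For example, once another node has joined, you can promote it to a manager from the current manager (a quick sketch using this walkthrough's node names; docker node promote is the 1.12 CLI for this):

docker node promote swarm-worker-1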

# join worker
eval $(docker-machine env swarm-worker-1)
manager_ip=$(gcloud compute instances list | awk '/swarm-manager/{print $4}')
docker swarm join ${manager_ip}:2377
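Note: this token-less join matches the 1.12 release candidates. The final 1.12 release secures joins with tokens; assuming the final CLI, the flow becomes:

# on the manager: print the join command, including the secret token
docker swarm join-token worker
# on the worker: join using the printed token
docker swarm join --token <token> ${manager_ip}:2377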

List all nodes:

$ eval $(docker-machine env swarm-manager)
$ docker node ls
ID NAME MEMBERSHIP STATUS AVAILABILITY MANAGER STATUS
0m2qy40ch1nqfpmhnsvj8jzch * swarm-manager Accepted Ready Active Leader
4v1oo055unqiz9fy14u8wg3fn swarm-worker-1 Accepted Ready Active
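To drill into a single node from the manager, which prints its role, availability, and resources:

docker node inspect --pretty self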

Playing with services

eval $(docker-machine env swarm-manager)
docker service create --replicas 2 -p 80:80/tcp --name nginx nginx

This command declares a desired state on your swarm of 2 nginx containers, reachable as a single, internally load balanced service on port 80 of any node in your swarm. Internally, we make this work using Linux IPVS, an in-kernel Layer 4 multi-protocol load balancer that’s been in the Linux kernel for more than 15 years. With IPVS routing packets inside the kernel, swarm’s routing mesh delivers high performance container-aware load-balancing.
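You can check the service's configuration and where its tasks landed from the manager (in the 1.12 RCs the task listing command is docker service tasks; the final release renames it to docker service ps):

docker service inspect --pretty nginx
docker service tasks nginx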

When you create services, you can optionally make them replicated or global. A replicated service runs the number of replica containers you define, spread across the available hosts. A global service, by contrast, schedules one instance of the same container on every host in the swarm.
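For example, a global service would be created with --mode global (the service name here is illustrative):

docker service create --mode global --name probe nginx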

Let’s turn to how Docker provides resiliency. Swarm mode enabled engines are self-healing, meaning that they are aware of the application you defined and will continuously check and reconcile the environment when things go awry. For example, if you unplug one of the machines running an nginx instance, a new container will come up on another node. Unplug the network switch for half the machines in your swarm, and the other half will take over, redistributing the containers amongst themselves. For updates, you now have flexibility in how you re-deploy services once you make a change. You can set a rolling or parallel update of the containers on your swarm.
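For instance, a rolling update that replaces one container at a time with a 10-second delay between batches could look like this (flags per the 1.12 CLI; the target image tag is illustrative):

docker service update \
--update-parallelism 1 \
--update-delay 10s \
--image nginx:1.11 \
nginx

Scaling the service up or down is a one-liner as well: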

docker service scale nginx=3
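The REPLICAS column of docker service ls (e.g. 3/3) confirms when all tasks are running:

docker service ls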

$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b51a902db8bc nginx:latest "nginx -g 'daemon off" 2 minutes ago Up 2 minutes 80/tcp, 443/tcp nginx.1.8yvwxbquvz1ptuqsc8hewwbau
# switch to worker
$ eval $(docker-machine env swarm-worker-1)
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
da6a8250bef4 nginx:latest "nginx -g 'daemon off" About a minute ago Up About a minute 80/tcp, 443/tcp nginx.2.bqko7fyj1nowwj1flxva3ur0g
54d9ffd07894 nginx:latest "nginx -g 'daemon off" About a minute ago Up About a minute 80/tcp, 443/tcp nginx.3.02k4d34gjooa9f8m6yhfi5hyu

As seen above, one container runs on swarm-manager, and the others run on swarm-worker-1.

Expose services

Visit via node IP

gcloud compute firewall-rules create nginx-swarm \
--allow tcp:80 \
--description "nginx swarm service" \
--target-tags swarm-cluster

Then use an external IP (obtained via gcloud compute instances list) to visit the nginx service.
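For example (assuming the external IP is in the fifth column of gcloud's output, which can vary across gcloud versions):

worker_ip=$(gcloud compute instances list | awk '/swarm-worker-1/{print $5}')
curl http://${worker_ip}

Thanks to the routing mesh, this works even if the node you hit is not running an nginx task locally.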

GCP Load Balancer (tcp)

gcloud compute addresses create network-lb-ip-1 --region us-central1
gcloud compute http-health-checks create basic-check
gcloud compute target-pools create www-pool --region us-central1 --health-check basic-check
gcloud compute target-pools add-instances www-pool --instances swarm-manager,swarm-worker-1 --zone us-central1-f
# Get lb address
STATIC_EXTERNAL_IP=$(gcloud compute addresses list | awk '/network-lb-ip-1/{print $3}')
# create forwarding rules
gcloud compute forwarding-rules create www-rule --region us-central1 --port-range 80 --address ${STATIC_EXTERNAL_IP} --target-pool www-pool

Now you can visit http://${STATIC_EXTERNAL_IP} to reach the nginx service.
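A quick sanity check that the forwarding rule answers (each request should return HTTP 200, with IPVS spreading them across nodes):

for i in 1 2 3 4 5; do curl -s -o /dev/null -w "%{http_code}\n" http://${STATIC_EXTERNAL_IP}; done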

By the way, Docker for AWS and Azure will make this easier, since they integrate with the underlying IaaS to:

  • Use an SSH key already associated with your IaaS account for access control
  • Provision infrastructure load balancers and update them dynamically as apps are created and updated
  • Configure security groups and virtual networks to create secure Docker setups that are easy for operations to understand and manage

By default, apps deployed with bundles do not have ports publicly exposed. Update port mappings for services, and Docker will automatically wire up the underlying platform load balancers:

docker service update -p 80:80 <example-service>
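Note that in the final 1.12 CLI the port flags on docker service update are --publish-add and --publish-rm, so the equivalent would be:

docker service update --publish-add 80:80 <example-service>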

Networking

Local networking

Create a local-scope network and place containers on existing VLANs:

docker network create -d macvlan \
--subnet=192.168.0.0/16 \
--ip-range=192.168.41.0/24 \
--aux-address="favorite_ip_ever=192.168.41.2" \
--gateway=192.168.41.1 \
-o parent=eth0.41 macnet41
docker run --net=macnet41 -it --rm alpine /bin/sh
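docker network inspect shows the resulting subnet, gateway, and attached containers:

docker network inspect macnet41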

Multi-host networking

A typical two-tier (web + db) application running on a swarm-scope network would be created like this:

docker network create -d overlay mynet
docker service create --name frontend --replicas 5 -p 80:80/tcp --network mynet mywebapp
docker service create --name redis --network mynet redis:latest
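Services attached to the same overlay network discover each other by service name through the swarm's embedded DNS, so the web tier can reach the database simply as redis. A quick check (assuming the mywebapp image ships a shell and ping):

docker exec -it $(docker ps -q -f name=frontend | head -n 1) ping -c 1 redis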



Conclusion

Docker v1.12 indeed introduces an easy-to-use interface for orchestrating containers, but I'm concerned about whether this approach can scale to large clusters. Maybe we will find out in Docker's future iterations.
