• Make sure you have published the friendlyhello image you created by pushing it to a registry. We’ll use that shared image here.

  • Be sure your image works as a deployed container. Run this command, slotting in your info for username, repo, and tag: docker run -p 80:80 username/repo:tag, then visit http://localhost/.

  • Have a copy of your docker-compose.yml from Part 3 handy.

  • Make sure that the machines you set up in part 4 are running and ready. Run docker-machine ls to verify this. If the machines are stopped, run docker-machine start myvm1 to boot the manager, followed by docker-machine start myvm2 to boot the worker.

  • Have the swarm you created in part 4 running and ready. Run docker-machine ssh myvm1 "docker node ls" to verify this. If the swarm is up, both nodes will report a ready status. If not, reinitialize the swarm and join the worker as described in Set up your swarm (a sketch of those commands follows).
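
  If you do need to recreate the swarm, a minimal sketch of the commands from part 4 looks like this. The manager IP and join token shown here are placeholders; use the address reported by docker-machine ls and the token printed by docker swarm init.

# initialize the swarm on the manager, advertising its own IP (placeholder value)
docker-machine ssh myvm1 "docker swarm init --advertise-addr <myvm1-ip>"
# join the worker using the token printed by the init command (placeholder values)
docker-machine ssh myvm2 "docker swarm join --token <token> <myvm1-ip>:2377"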

In part 4, you learned how to set up a swarm, which is a cluster of machines running Docker, and deployed an application to it, with containers running in concert on multiple machines.

Here, you’ll reach the top of the hierarchy of distributed applications: the stack.

A stack is a group of interrelated services that share dependencies, and can be orchestrated and scaled together.

A single stack is capable of defining and coordinating the functionality of an entire application (though very complex applications may want to use multiple stacks).

The good news is that you have technically been working with stacks since part 3, when you created a Compose file and used docker stack deploy.

But that was a single service stack running on a single host, which is not usually what takes place in production.

Here, you will take what you’ve learned, make multiple services relate to each other, and run them on multiple machines.

Add a new service and redeploy

It’s easy to add services to our docker-compose.yml file.

First, let’s add a free visualizer service that lets us look at how our swarm is scheduling containers.

  1. Open up docker-compose.yml in an editor and replace its contents with the following. Be sure to replace username/repo:tag with your image details. 

version: "3"
services:
web:
# replace username/repo:tag with your name and image details
image: username/repo:tag
deploy:
replicas: 5
restart_policy:
condition: on-failure
resources:
limits:
cpus: "0.1"
memory: 50M
ports:
- "80:80"
networks:
- webnet
visualizer:
image: dockersamples/visualizer:stable
ports:
- "8080:8080"
volumes:
- "/var/run/docker.sock:/var/run/docker.sock"
deploy:
placement:
constraints: [node.role == manager]
networks:
- webnet
networks:
webnet:

  The only thing new here is the peer service to web, named visualizer.

  You’ll see two new things here: a volumes key, giving the visualizer access to the host’s socket file for Docker, and a placement key, ensuring that this service only ever runs on a swarm manager – never a worker. That’s because this container, built from an open source project created by Docker, displays Docker services running on a swarm in a diagram.  

  2. Make sure your shell is configured to talk to myvm1 (full examples are here).

  • Run docker-machine ls to list machines and make sure you are connected to myvm1, as indicated by an asterisk next to it.

  • If needed, re-run docker-machine env myvm1, then run the given command to configure the shell.

    On Mac or Linux the command is:

eval $(docker-machine env myvm1)

     On Windows the command is:

& "C:\Program Files\Docker\Docker\Resources\bin\docker-machine.exe" env myvm1 | Invoke-Expression

  3. Re-run the docker stack deploy command on the manager, and whatever services need updating will be updated:

$ docker stack deploy -c docker-compose.yml getstartedlab
Updating service getstartedlab_web (id: angi1bf5e4to03qu9f93trnxm)
Creating service getstartedlab_visualizer (id: l9mnwkeq2jiononb5ihz9u7a4)
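
  If you want a quick look at what the stack now contains, listing its services from the same shell (still pointed at myvm1) is one option; this is an optional check, not a required step:

# list the services that belong to the getstartedlab stack
docker stack services getstartedlab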

  4. Take a look at the visualizer.

  You saw in the Compose file that visualizer runs on port 8080. Get the IP address of one of your nodes by running docker-machine ls. Go to either IP address at port 8080 and you will see the visualizer running.
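
  If you prefer not to read the IP out of the docker-machine ls table, docker-machine can also print a node’s IP directly:

# print the IP address of the manager node
docker-machine ip myvm1
# print the IP address of the worker node
docker-machine ip myvm2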

  The single copy of visualizer is running on the manager as you expect, and the 5 instances of web are spread out across the swarm. You can corroborate this visualization by running docker stack ps <stack>: 

docker stack ps getstartedlab

  The visualizer is a standalone service that can run in any app that includes it in the stack. It doesn’t depend on anything else.

   Now let’s create a service that does have a dependency: the Redis service that will provide a visitor counter.

Persist the data

Let’s go through the same workflow once more to add a Redis database for storing app data.

1. Save this new docker-compose.yml file, which finally adds a Redis service. Be sure to replace username/repo:tag with your image details.

version: "3"
services:
web:
# replace username/repo:tag with your name and image details
image: username/repo:tag
deploy:
replicas: 5
restart_policy:
condition: on-failure
resources:
limits:
cpus: "0.1"
memory: 50M
ports:
- "80:80"
networks:
- webnet
visualizer:
image: dockersamples/visualizer:stable
ports:
- "8080:8080"
volumes:
- "/var/run/docker.sock:/var/run/docker.sock"
deploy:
placement:
constraints: [node.role == manager]
networks:
- webnet
redis:
image: redis
ports:
- "6379:6379"
volumes:
- /home/docker/data:/data
deploy:
placement:
constraints: [node.role == manager]
command: redis-server --appendonly yes
networks:
- webnet
networks:
webnet:

  Redis has an official image in the Docker library and has been granted the short image name of just redis, so no username/repo notation here.

  The Redis port, 6379, is pre-configured by the Redis image to be exposed from the container to the host, and here in our Compose file we publish it from the host to the world. That means you can enter the IP of any of your nodes into Redis Desktop Manager and manage this Redis instance, if you so choose.
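
  For example, if you happen to have the redis-cli client installed locally, something like the following should answer PONG once the updated stack is deployed (the address below is just an example node IP; substitute one of yours):

# ping the published Redis port on one of the swarm nodes (example IP)
redis-cli -h 192.168.99.101 -p 6379 ping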

  Most importantly, there are a couple of things in the redis specification that make data persist between deployments of this stack:

    •  redis always runs on the manager, so it’s always using the same filesystem.
    •  redis accesses an arbitrary directory in the host’s file system as /data inside the container, which is where Redis stores data.

  Together, this is creating a “source of truth” in your host’s physical filesystem for the Redis data.

  Without this, Redis would store its data in /data inside the container’s filesystem, which would get wiped out if that container were ever redeployed.

  This source of truth has two components:

    • The placement constraint you put on the Redis service, ensuring that it always uses the same host.
    • The volume you created that lets the container access ./data (on the host) as /data (inside the Redis container). While containers come and go, the files stored on ./data on the specified host will persist, enabling continuity.
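
  Once the updated stack is deployed (in the steps below), one way to confirm the mount was configured as intended is to inspect the service from the manager. This is a sketch of an optional check, not part of the required steps:

# show the mounts configured for the redis service; expect the /home/docker/data -> /data bind mount
docker-machine ssh myvm1 "docker service inspect --format '{{json .Spec.TaskTemplate.ContainerSpec.Mounts}}' getstartedlab_redis"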

  You are ready to deploy your new Redis-using stack.

2. Create a ./data directory on the manager:

docker-machine ssh myvm1 "mkdir ./data"

3. Make sure your shell is configured to talk to myvm1 (full examples are here).

  • Run docker-machine ls to list machines and make sure you are connected to myvm1, as indicated by an asterisk next to it.

  • If needed, re-run docker-machine env myvm1, then run the given command to configure the shell.

    On Mac or Linux the command is:

eval $(docker-machine env myvm1)

  On Windows the command is:

& "C:\Program Files\Docker\Docker\Resources\bin\docker-machine.exe" env myvm1 | Invoke-Expression
4. Run docker stack deploy one more time:

$ docker stack deploy -c docker-compose.yml getstartedlab

5. Run docker service ls to verify that the three services are running as expected:

$ docker service ls
ID              NAME                       MODE          REPLICAS    IMAGE                             PORTS
x7uij6xb4foj    getstartedlab_redis        replicated    1/1         redis:latest                      *:6379->6379/tcp
n5rvhm52ykq7    getstartedlab_visualizer   replicated    1/1         dockersamples/visualizer:stable   *:8080->8080/tcp
mifd433bti1d    getstartedlab_web          replicated    5/5         orangesnap/getstarted:latest      *:80->80/tcp

6. Check the web page at one of your nodes (e.g. http://192.168.99.101) and you’ll see the results of the visitor counter, which is now live and storing information on Redis.

    Also, check the visualizer at port 8080 on either node’s IP address, and you’ll see the redis service running along with the web and visualizer services.
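
  As an optional check, once the counter has recorded a visit you can look at the host directory backing the volume; Redis’s append-only file should appear there (the exact filename can vary by Redis version, so treat this as a sketch):

# list the Redis data that now lives on the manager's filesystem
docker-machine ssh myvm1 "ls ./data"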

In this part, you learned that stacks are inter-related services all running in concert, and that – surprise! – you’ve been using stacks since part 3 of this tutorial. You learned that to add more services to your stack, you insert them in your Compose file. Finally, you learned that by using a combination of placement constraints and volumes you can create a permanent home for persisting data, so that your app’s data survives when the container is torn down and redeployed.

  
