Prerequisites

  • Install Docker version 1.13 or higher; docker stack deploy and the version 3 Compose file format used below require it.
  • Complete part 2 and push the image you built there to a registry. The service defined in this part pulls that image.

Introduction

In part 3, we scale our application and enable load-balancing. To do this, we must go one level up in the hierarchy of a distributed application: the service.

  • Stack
  • Services (you are here)
  • Container (covered in part 2)

About services

In a distributed application, different pieces of the app are called “services.”

For example, if you imagine a video sharing site, it probably includes a service for storing application data in a database, a service for video transcoding in the background after a user uploads something, a service for the front-end, and so on.

Services are really just “containers in production.”

A service only runs one image, but it codifies the way that image runs—what ports it should use, how many replicas of the container should run so the service has the capacity it needs, and so on.

Scaling a service changes the number of container instances running that piece of software, assigning more computing resources to the service in the process.

Luckily it’s very easy to define, run, and scale services with the Docker platform: just write a docker-compose.yml file.

Your first docker-compose.yml file

A docker-compose.yml file is a YAML file that defines how Docker containers should behave in production.

docker-compose.yml

Save this file as docker-compose.yml wherever you want. Be sure you have pushed the image you created in Part 2 to a registry, and update this .yml by replacing username/repo:tag with your image details.

version: "3"
services:
  web:
    # replace username/repo:tag with your name and image details
    image: username/repo:tag
    deploy:
      replicas: 5
      resources:
        limits:
          cpus: "0.1"
          memory: 50M
      restart_policy:
        condition: on-failure
    ports:
      - "4000:80"
    networks:
      - webnet
networks:
  webnet:
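
If you want to sanity-check the file before deploying, one option (assuming you have the docker-compose CLI installed; it is not required for docker stack deploy itself) is to let Compose parse it and print the resolved configuration. Note that plain docker-compose ignores the deploy section; those settings only take effect with docker stack deploy.

docker-compose -f docker-compose.yml config   # validates the YAML and prints it back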

 

This docker-compose.yml file tells Docker to do the following:

  • Pull the image we uploaded in part 2 from the registry.

  • Run 5 instances of that image as a service called web, limiting each one to use, at most, 10% of a single core of CPU time and 50MB of RAM.

  • Immediately restart containers if one fails.

  • Map port 4000 on the host to web’s port 80.

  • Instruct web’s containers to share port 80 via a load-balanced network called webnet. (Internally, the containers themselves publish to web’s port 80 at an ephemeral port.)

  • Define the webnet network with the default settings (which is a load-balanced overlay network).
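
Nothing in this file is magic: the same settings could be passed by hand to docker service create. A rough equivalent of the web service above might look like the sketch below (for comparison only; the Compose file is what the rest of this tutorial uses, and docker stack deploy creates the network for you).

docker network create -d overlay webnet   # assumes swarm mode is already active (see docker swarm init below)
docker service create \
  --name web \
  --replicas 5 \
  --limit-cpu 0.1 \
  --limit-memory 50M \
  --restart-condition on-failure \
  --publish 4000:80 \
  --network webnet \
  username/repo:tag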

Run your new load-balanced app

Before we can use the docker stack deploy command, we first run:

docker swarm init

Note: We get into the meaning of that command in part 4. If you don’t run docker swarm init, you get an error that “this node is not a swarm manager.”
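
If you are not sure whether the node is already part of a swarm, one quick way to check (a convenience, not something the tutorial requires) is:

docker info --format '{{.Swarm.LocalNodeState}}'   # prints "active" once the node is in a swarm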

Now let’s run it. You need to give your app a name. Here, it is set to getstartedlab:

docker stack deploy -c docker-compose.yml getstartedlab

Our single service stack is running 5 container instances of our deployed image on one host. Let’s investigate.

Get the service ID for the one service in our application:

docker service ls

Look for output for the web service, prepended with your app name. If you named it the same as shown in this example, the name is getstartedlab_web.

The service ID is listed as well, along with the number of replicas, image name, and exposed ports.
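
For a more detailed, human-readable view of how the service is configured (replicas, resource limits, published ports), you can also try:

docker service inspect --pretty getstartedlab_web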

A single container running in a service is called a task. Tasks are given unique IDs that numerically increment, up to the number of replicas you defined in docker-compose.yml. List the tasks for your service:

docker service ps getstartedlab_web
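
If tasks have been restarted or rescheduled, this list can also include entries that are already shut down. To see only the tasks that are supposed to be running right now, a filter such as the following works:

docker service ps --filter "desired-state=running" getstartedlab_web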

  

Tasks also show up if you just list all the containers on your system, though that is not filtered by service:

docker container ls -q
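
If you do want the container listing scoped to just this service, one way is to filter on the label swarm attaches to each task’s container (the label name below is the standard one Docker applies; treat it as an assumption if your version differs):

docker container ls -q --filter "label=com.docker.swarm.service.name=getstartedlab_web"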

 

You can run curl -4 http://localhost:4000 several times in a row, or go to that URL in your browser and hit refresh a few times. Either way, the container ID changes, demonstrating the load-balancing; with each request, one of the 5 tasks is chosen, in a round-robin fashion, to respond. The container IDs match your output from the previous command (docker container ls -q).
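
A small shell loop makes the round-robin behavior easy to see; each response should report a different container ID:

for i in 1 2 3 4 5; do curl -4 http://localhost:4000; done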

Running Windows 10?

Windows 10 PowerShell should already have curl available; if not, you can grab a Linux terminal emulator like Git BASH, or download wget for Windows, which is very similar.

Slow response times?

Depending on your environment’s networking configuration, it may take up to 30 seconds for the containers to respond to HTTP requests. This is not indicative of Docker or swarm performance, but rather an unmet Redis dependency that we address later in the tutorial. For now, the visitor counter isn’t working for the same reason; we haven’t yet added a service to persist data.

Scale the app

You can scale the app by changing the replicas value in docker-compose.yml, saving the change, and re-running the docker stack deploy command:

docker stack deploy -c docker-compose.yml getstartedlab

  

Docker performs an in-place update; there is no need to tear the stack down first or kill any containers.

Now, re-run docker container ls -q to see the deployed instances reconfigured.

If you scaled up the replicas, more tasks, and hence more containers, are started.
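
Editing the Compose file keeps it as the single source of truth for the service, which is why this tutorial scales that way. For a quick experiment, swarm can also scale a running service directly; just keep in mind that the next docker stack deploy with the old replicas value puts it back:

docker service scale getstartedlab_web=7   # 7 is only an example value
docker service ls                          # confirm the new replica count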

 

Take down the app and the swarm

  • Take the app down with docker stack rm:

    docker stack rm getstartedlab
  • Take down the swarm.

    docker swarm leave --force
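
If you want to confirm that everything is gone before moving on, the following should print nothing (assuming no other containers are running on this host) and “inactive” respectively; it can take a few seconds after docker stack rm for the task containers to be removed:

docker container ls -q
docker info --format '{{.Swarm.LocalNodeState}}'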

It’s as easy as that to stand up and scale your app with Docker.

You’ve taken a huge step towards learning how to run containers in production.

Up next, you learn how to run this app as a bona fide swarm on a cluster of Docker machines.

Note: Compose files like this are used to define applications with Docker, and can be uploaded to cloud providers using Docker Cloud, or on any hardware or cloud provider you choose with Docker Enterprise Edition.

Recap and cheat sheet (optional)


To recap, while typing docker run is simple enough, the true implementation of a container in production is running it as a service.

Services codify a container’s behavior in a Compose file, and this file can be used to scale, limit, and redeploy our app.

Changes to the service can be applied in place, as it runs, using the same command that launched the service: docker stack deploy.

Some commands to explore at this stage:

docker stack ls                                 # List stacks or apps
docker stack deploy -c <composefile> <appname>  # Run the specified Compose file
docker service ls                               # List running services associated with an app
docker service ps <service>                     # List tasks associated with an app
docker inspect <task or container>              # Inspect task or container
docker container ls -q                          # List container IDs
docker stack rm <appname>                       # Tear down an application
docker swarm leave --force                      # Take down a single node swarm from the manager

  

 
