Transcript

In the last few articles we saw how to make our Phoenix chat app distributed: first with Redis, and then with distributed Elixir, connecting the nodes together.

We had just one problem: we had to manually connect the nodes in the IEx console, which is an issue in production. In this video we will see how to automatically cluster the Phoenix chat nodes using the libcluster library, both locally and on a Kubernetes cluster with a dynamic number of nodes.

Let’s download the code of the Phoenix chat example from my GitHub account, poeticoding, using the pubsub_pg2 branch. Let’s clone the repo and check out the pubsub_pg2 branch.

$ git clone https://github.com/poeticoding/phoenix_chat_example.git
...
$ cd phoenix_chat_example
$ git checkout pubsub_pg2
$ mix deps.get

Let’s download the dependencies and try to run it locally. We pass the port as an environment variable, so the first node, node A, runs on port 4000, and then we start another Phoenix server, node B, on port 4001.

# Node A
$ PORT=4000 iex --sname a -S mix phx.server
# Node B
$ PORT=4001 iex --sname b -S mix phx.server

Okay, great. Let’s now connect node A to node B and check that the nodes are connected correctly. Then let’s try the chat app in two browsers.

iex(a@mbp)> Node.connect :b@mbp
true
iex(b@mbp)> Node.list
[:a@mbp]

So, let’s connect one browser tab to port 4000 (node A) and the other tab to port 4001 (node B). We see that the messages are propagated correctly.

libcluster

We had to manually connect the nodes using the connect/1 function in the Node module. Let’s see how to use libcluster to automatically connect the nodes.

First, we need to add the libcluster library as a dependency.

# mix.exs
defp deps do
  [
    ...
    {:libcluster, "~> 3.0"}
  ]
end

# lib/chat.ex
defmodule Chat do
  use Application

  def start(_type, _args) do
    import Supervisor.Spec, warn: false

    topologies = [
      chat: [
        strategy: Cluster.Strategy.Gossip
      ]
    ]

    children = [
      {Cluster.Supervisor, [topologies, [name: Chat.ClusterSupervisor]]},
      supervisor(Chat.Endpoint, [])
    ]

    opts = [strategy: :one_for_one, name: Chat.Supervisor]
    Supervisor.start_link(children, opts)
  end
end

We then need to start a Cluster.Supervisor, which is part of the libcluster library, passing it our topologies. We use the Gossip strategy, which uses multicast UDP to gossip node names to the other nodes in the network.
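The Gossip strategy also accepts an optional config with the multicast address, port and a shared secret. Here is just a sketch with what I believe are the library defaults; check the libcluster documentation for your version before relying on them.

# optional Gossip config (values assumed to be the libcluster defaults)
topologies = [
  chat: [
    strategy: Cluster.Strategy.Gossip,
    config: [
      port: 45_892,
      if_addr: "0.0.0.0",
      multicast_addr: "230.1.1.251",
      multicast_ttl: 1,
      secret: "chat-cluster-secret" # example value, not a default
    ]
  ]
]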

# Node A
$ PORT=4000 iex --sname a -S mix phx.server
# Node B
$ PORT=4001 iex --sname b -S mix phx.server
# Node C
$ PORT=4002 iex --sname c -S mix phx.server

Three Phoenix nodes connected

Great, and it should work straight away. As before, we start node A on port 4000 and node B on port 4001, and node A is now connected to node B and vice versa: running Node.list in each console shows the other node, without us having to connect them manually. The same happens if we add another node, named C, on port 4002: they all connect automatically.
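For example, with the three nodes running on my machine (host mbp; your host name will differ), node A’s console shows something like this:

iex(a@mbp)> Node.list
[:b@mbp, :c@mbp]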

Kubernetes

Let’s now see how to deploy this distributed application on Kubernetes, making the clustering of the Elixir nodes automatic with libcluster.

We are going to deploy multiple chat nodes on my local Kubernetes setup, but what I’m going to show you should work without any radical change on any cloud provider. We’re going to deploy our chat nodes with a Kubernetes Deployment and connect them automatically thanks to libcluster and something called a Kubernetes headless service, which we’ll see in a moment. We will then create a load balancer, which will spread the connections from different browsers across the chat nodes.

So, what is a headless service? Let’s find out with a simple Nginx deployment, defined in the nginx_kube_test.yaml file. You can find all this code under the libcluster branch.

# nginx_kube_test.yaml
kind: Service
apiVersion: v1
metadata:
  name: nginx-nodes
  namespace: default
spec:
  clusterIP: None
  selector:
    app: nginx
  ports:
  - name: http
    port: 80

It’s a service, but we specify clusterIP: None. Its DNS name will be nginx-nodes, under the default namespace, and the port, in this case, is 80.

The rest of the file is just a plain Nginx deployment with 4 replicas. Let’s create the service and the deployment.
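The deployment part isn’t shown above; here is a minimal sketch of what it could look like, assuming the standard nginx image and the app: nginx label the service selects on:

---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nginx
  namespace: default
spec:
  replicas: 4
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80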

$ kubectl apply -f nginx_kube_test.yaml

We then start an ubuntu container installing dnsutils and curl.

$ kubectl run bash --rm -it --image=ubuntu -- bash
# apt-get update && apt-get install dnsutils curl -y
# nslookup nginx-nodes

We see that using this DNS name we are able to list the IPs of all the nginx pods. If we scale out, adding more replicas, and launch nslookup nginx-nodes again, we see that the new pods are all present in the list.
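For example, assuming the deployment is named nginx, scaling it to 6 replicas looks like this:

$ kubectl scale deployment nginx --replicas=6

and then, inside the ubuntu container, we repeat the lookup:

# nslookup nginx-nodes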

Let’s start by changing the topology. We now use the Cluster.Strategy.Kubernetes.DNS strategy, which will use the headless service we’re going to create.

# lib/chat.ex
topologies = [
  k8s_chat: [
    strategy: Cluster.Strategy.Kubernetes.DNS,
    config: [
      service: "chat-nodes",
      application_name: "chat"
    ]
  ]
]

# web/controllers/page_controller.ex
defmodule Chat.PageController do
  use Chat.Web, :controller

  def index(conn, _params) do
    self_node = inspect(node())
    nodes = inspect(Node.list())
    render(conn, "index.html", %{node: self_node, nodes: nodes})
  end
end

# web/templates/page/index.html.eex
<div>
  <p>nodes: <%= @nodes %></p>
  <p>self: <%= @node %></p>
</div>
<div id="messages" class="container">
</div>
...

So the application is now ready, and we need to build a Docker image. But before building it, let’s first look at the headless service Kubernetes file.

kind: Service
apiVersion: v1
metadata:
  name: chat-nodes
  namespace: default
spec:
  clusterIP: None
  selector:
    app: chat
  ports:
  - name: epmd
    port: 4369

We expose the EPMD port, 4369, and the DNS name is chat-nodes. We also create a chat load balancer.

kind: Service
apiVersion: v1
metadata:
  name: chat
  namespace: default
spec:
  type: LoadBalancer
  selector:
    app: chat
  ports:
  - name: http
    port: 8000
    targetPort: 4000

Let’s see the deployment.

---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: chat
  namespace: default
spec:
  replicas: 4
  selector:
    matchLabels:
      app: chat
  template:
    metadata:
      labels:
        app: chat
    spec:
      containers:
      - name: phoenix-chat
        image: chat:libcluster # alvises/phoenix-chat-example:libcluster-kube
        ports:
        - containerPort: 4000
        env:
        - name: PORT
          value: "4000"
        - name: PHOENIX_CHAT_HOST
          value: "localhost"
        - name: ERLANG_COOKIE
          value: "secret"
        - name: MY_POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        command: ["elixir"]
        args: [
          "--name", "chat@$(MY_POD_IP)",
          "--cookie", "$(ERLANG_COOKIE)",
          "--no-halt",
          "-S", "mix",
          "phx.server"
        ]

At first, we create 4 replicas. We are going to build our own image, but you can use the one I’ve published on Docker Hub: alvises/phoenix-chat-example:libcluster-kube.
The exposed container port is 4000, and we also need to set the same Erlang cookie on each node (in production it’s better to use Kubernetes Secrets).
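For example, the cookie could come from a Secret instead of a literal value; here is a sketch of the env entry, assuming a Secret named chat-secrets with an erlang-cookie key (both names are hypothetical):

# pull ERLANG_COOKIE from a Secret instead of a plain value
- name: ERLANG_COOKIE
  valueFrom:
    secretKeyRef:
      name: chat-secrets
      key: erlang-cookie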

The important part is the MY_POD_IP environment variable, which Kubernetes fills with the pod’s own IP (from status.podIP). We then use this variable when we start the server, specifying the node name and the cookie:

elixir --name chat@$(MY_POD_IP) --cookie $(ERLANG_COOKIE) --no-halt -S mix phx.server

Building the Docker image is pretty simple.

$ docker image build -t chat:libcluster .

Let’s create the chat deployment and services in Kubernetes:

$ kubectl create -f kube_chat_deploy_and_svc.yaml

We then connect through the load balancer to our local port 8000. The page shows the node list, and we see that the nodes connect automatically. If we add new replicas, we will see the new nodes appear in the node list almost immediately.
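For example, scaling the deployment from 4 to 6 replicas adds two new chat nodes, which libcluster picks up through the chat-nodes headless service:

$ kubectl scale deployment chat --replicas=6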

Wrap up

We saw how easy it is with libcluster to connect the nodes together, and to deploy a distributed Phoenix chat application, also on Kubernetes.

If you have a question or something wasn’t clear, please post a comment in the comment section below, and subscribe to be updated with new articles and screencasts. See you next week!

 
 
 
 
