Connecting Elixir Nodes with libcluster, locally and on Kubernetes
Transcript
In the last few articles we saw how to make our Phoenix chat app distributed: first with Redis, and then with distributed Elixir, connecting the nodes together.
We had just one problem: we had to connect the nodes manually in the IEx console, which is an issue in production. In this video we will see how to automatically cluster the Phoenix chat nodes using the libcluster library, locally and on a Kubernetes cluster with a dynamic number of nodes.
Let’s download the code of the Phoenix chat example from my GitHub account, poeticoding: we clone the repository and check out the pubsub_pg2 branch.
$ git clone https://github.com/poeticoding/phoenix_chat_example.git
...
$ cd phoenix_chat_example
$ git checkout pubsub_pg2
$ mix deps.get
Let’s download the dependencies and run the app locally, passing the port as an environment variable. We give the first node the name a and start it on port 4000, and we start another Phoenix server, node B, on port 4001.
# Node A
$ PORT=4000 iex --sname a -S mix phx.server
# Node B
$ PORT=4001 iex --sname b -S mix phx.server
Okay, great. Let’s now connect node A to node B and check that they are connected correctly. Then let’s try the chat app with two browsers.
iex(a@mbp)> Node.connect :b@mbp
true
iex(b@mbp)> Node.list
[:a@mbp]
So, let’s connect one tab to port 4000 (node A) and the other tab to port 4001 (node B). We see that the messages are propagated correctly.
libcluster
We had to manually connect the nodes using the connect/1 function of the Node module. Let’s see how to use libcluster to automatically connect the nodes.
First, we need to add the libcluster library as a dependency.
# mix.exs
defp deps do
  [
    ...
    {:libcluster, "~> 3.0"}
  ]
end
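Then we fetch the new dependency:

$ mix deps.get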
# lib/chat.ex
defmodule Chat do
  use Application

  def start(_type, _args) do
    import Supervisor.Spec, warn: false

    # clustering topologies: locally we use the Gossip strategy
    topologies = [
      chat: [
        strategy: Cluster.Strategy.Gossip
      ]
    ]

    children = [
      # Cluster.Supervisor connects the nodes based on the topologies above
      {Cluster.Supervisor, [topologies, [name: Chat.ClusterSupervisor]]},
      supervisor(Chat.Endpoint, [])
    ]

    opts = [strategy: :one_for_one, name: Chat.Supervisor]
    Supervisor.start_link(children, opts)
  end
end
We then need to start a Cluster.Supervisor, which is part of the libcluster library, with some topologies. We use the Gossip strategy, which uses multicast UDP to gossip node names to other nodes in the network.
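The Gossip strategy works out of the box, but the multicast settings can be tuned in the topology config. A minimal sketch, assuming the option names documented by libcluster (the values shown are its defaults at the time of writing):

# lib/chat.ex - Gossip strategy with explicit multicast settings
topologies = [
  chat: [
    strategy: Cluster.Strategy.Gossip,
    config: [
      port: 45892,
      if_addr: "0.0.0.0",
      multicast_addr: "230.1.1.251",
      multicast_ttl: 1
    ]
  ]
]

Let’s now start three nodes: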
# Node A
$ PORT=4000 iex --sname a -S mix phx.server
# Node B
$ PORT=4001 iex --sname b -S mix phx.server
# Node C
$ PORT=4002 iex --sname c -S mix phx.server
Three Phoenix nodes connected
Great, and it should work straight away. As before, we start node A on port 4000 and node B on port 4001. Running Node.list, we see that node A is now connected to node B and vice versa, without having to connect them manually. The same happens if we add another node, named C, on port 4002: they all connect automatically.
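For example, checking from node a (the host part of the node names will differ on your machine):

iex(a@mbp)> Node.list
[:b@mbp, :c@mbp]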
Kubernetes
Let’s now see how to deploy this distributed application on Kubernetes, making the clustering of the Elixir nodes automatic with libcluster.
We are going to deploy multiple chat nodes on my local Kubernetes setup, but what I’m going to show you should work without any radical change on any cloud provider. We’ll deploy our chat nodes with a Kubernetes Deployment and connect them together automatically thanks to libcluster and something called a Kubernetes headless service, which we’ll see in a moment. We will then create a load balancer, which will spread the connections from different browsers across the chat nodes.
So, what is a headless service? Let’s see with a simple Nginx deployment, defined in nginx_kube_test.yaml. You can find all this code under the libcluster branch.
# nginx_kube_test.yaml
kind: Service
apiVersion: v1
metadata:
  name: nginx-nodes
  namespace: default
spec:
  clusterIP: None
  selector:
    app: nginx
  ports:
  - name: http
    port: 80
It’s a service, but we specify clusterIP: None, and the DNS name will be nginx-nodes, under the default namespace. The target port, in this case, is 80.
The rest of the file is just an Nginx deployment with 4 replicas.
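Since the deployment part of the file isn’t listed here, this is a minimal sketch of what it could look like (the image and names are assumptions):

# sketch of the deployment part of nginx_kube_test.yaml (image and names are assumptions)
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nginx
  namespace: default
spec:
  replicas: 4
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80

Let’s create the service and the deployment.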
$ kubectl apply -f nginx_kube_test.yaml
We then start an ubuntu container, installing dnsutils and curl.
$ kubectl run bash --rm -it --image ubuntu -- bash
# apt-get update && apt-get install dnsutils curl -y
# nslookup nginx-nodes
We see that using this DNS name we are able to list the IPs of all the nginx pods. If we scale out, adding more replicas, and launch nslookup nginx-nodes again, we see that the new pods are all present in the list.
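For example, to scale out to 6 replicas (assuming the Nginx deployment is named nginx):

$ kubectl scale deployment nginx --replicas=6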
Let’s now change the topology. We use the Cluster.Strategy.Kubernetes.DNS strategy, which polls the DNS of the headless service we’re going to create and connects to the nodes it finds there.
# lib/chat.ex
topologies = [
  k8s_chat: [
    strategy: Cluster.Strategy.Kubernetes.DNS,
    config: [
      service: "chat-nodes",
      application_name: "chat"
    ]
  ]
]
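With this strategy each node is expected to be named application_name@pod-ip, which is why we will start the Erlang nodes with --name chat@$(MY_POD_IP) later on. The strategy polls DNS periodically; a sketch of tuning the interval, assuming the polling_interval option from the libcluster documentation:

# lib/chat.ex - poll the headless service DNS every 5 seconds
config: [
  service: "chat-nodes",
  application_name: "chat",
  polling_interval: 5_000
]

To see the clustering at work, we also show the current node and the list of connected nodes on the chat page.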
# web/controllers/page_controller.ex
defmodule Chat.PageController do
  use Chat.Web, :controller

  def index(conn, _params) do
    # render the current node and the list of connected nodes
    self_node = inspect(node())
    nodes = inspect(Node.list())
    render(conn, "index.html", %{node: self_node, nodes: nodes})
  end
end
# web/templates/page/index.html.eex
<div>
  <p>nodes: <%= @nodes %></p>
  <p>self: <%= @node %></p>
</div>

<div id="messages" class="container">
</div>
...
The application is now ready and we need to build a Docker image. But before building it, let’s look at the headless service in the Kubernetes file.
# kube_chat_deploy_and_svc.yaml
kind: Service
apiVersion: v1
metadata:
  name: chat-nodes
  namespace: default
spec:
  clusterIP: None
  selector:
    app: chat
  ports:
  - name: epmd
    port: 4369
We expose the EPMD port, 4369, and the DNS name is chat-nodes. We also create a chat load balancer.
kind: Service
apiVersion: v1
metadata:
  name: chat
  namespace: default
spec:
  type: LoadBalancer
  selector:
    app: chat
  ports:
  - name: http
    port: 8000
    targetPort: 4000
Let’s see the deployment.
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: chat
  namespace: default
spec:
  replicas: 4
  selector:
    matchLabels:
      app: chat
  template:
    metadata:
      labels:
        app: chat
    spec:
      containers:
      - name: phoenix-chat
        image: chat:libcluster # alvises/phoenix-chat-example:libcluster-kube
        ports:
        - containerPort: 4000
        env:
        - name: PORT
          value: "4000"
        - name: PHOENIX_CHAT_HOST
          value: "localhost"
        - name: ERLANG_COOKIE
          value: "secret"
        - name: MY_POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        command: ["elixir"]
        args: [
          "--name", "chat@$(MY_POD_IP)",
          "--cookie", "$(ERLANG_COOKIE)",
          "--no-halt",
          "-S", "mix",
          "phx.server"
        ]
At first we are going to create 4 replicas. We are going to build our own image, but you can use the one I’ve published on DockerHub: alvises/phoenix-chat-example:libcluster-kube.
The exposed container port is 4000. We also need to set the same Erlang cookie on each node (in production it’s better to use Kubernetes Secrets).
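A sketch of how the cookie could be read from a Secret instead, assuming a Secret named chat-secrets with an erlang-cookie key (both names are hypothetical):

# ERLANG_COOKIE from a Kubernetes Secret (names are assumptions)
- name: ERLANG_COOKIE
  valueFrom:
    secretKeyRef:
      name: chat-secrets
      key: erlang-cookie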
The important part is the MY_POD_IP environment variable, which we set to the IP of each pod. We then use this variable when we start the server, specifying the node name and the cookie:
elixir --name chat@$(MY_POD_IP) --cookie $(ERLANG_COOKIE) --no-halt -S mix phx.server
Building the Docker image is pretty simple.
$ docker image build -t chat:libcluster .
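The Dockerfile itself isn’t listed in this article; a minimal sketch of what it might contain, assuming a plain mix-based image rather than a release (the actual file in the repo may differ):

# Dockerfile - minimal sketch; base image and steps are assumptions
FROM elixir:1.8
WORKDIR /app
# install Hex and Rebar non-interactively
RUN mix local.hex --force && mix local.rebar --force
COPY . .
RUN mix deps.get
EXPOSE 4000
# the node name, cookie and start command are set by the Kubernetes deployment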
Let’s create the chat deployment and services in Kubernetes:
$ kubectl create -f kube_chat_deploy_and_svc.yaml
We then connect through the load balancer to our local port 8000. We see the node list on the page and that the nodes connect automatically. If we add new replicas, we will see the new nodes appear under the node list almost immediately.
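How you reach port 8000 depends on your local setup: if your environment doesn’t provision a real external load balancer, port-forwarding the service is one option (a sketch):

$ kubectl port-forward service/chat 8000:8000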
Wrap up
We saw how easy it is with libcluster to connect the nodes together and to deploy a distributed Phoenix chat application, locally and on Kubernetes.
If you have a question or something wasn’t clear, please post a comment in the comment section below, and subscribe to be updated with new articles and screencasts. See you next week!