Flink Standalone Cluster
Requirements
Software Requirements
Flink runs on all UNIX-like environments, e.g. Linux, Mac OS X, and Cygwin (for Windows), and expects the cluster to consist of one master node and one or more worker nodes. Before you start to set up the system, make sure you have the following software installed on each node:
- Java 1.8.x or higher,
- ssh (sshd must be running to use the Flink scripts that manage remote components)
If your cluster does not fulfill these software requirements you will need to install/upgrade it.
Having passwordless SSH and the same directory structure on all your cluster nodes will allow you to use our scripts to control everything.
Passwordless SSH login:
Run on the source machine:
ssh-keygen    # press Enter through every prompt; the public key is generated at ~/.ssh/id_rsa.pub
ssh-copy-id -i ~/.ssh/id_rsa.pub root@<target-machine>    # appends the source machine's public key to ~/.ssh/authorized_keys on the target machine
Alternatively, copy the source machine's public key into ~/.ssh/authorized_keys on the target machine by hand. Note: the key must stay on a single line, with no line breaks.
Done!
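To verify the key was installed correctly, an SSH login from the source machine should now succeed with no password prompt (<target-machine> is a placeholder for your worker's hostname or IP):
ssh root@<target-machine>    # should log in directly, without asking for a password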
JAVA_HOME Configuration
If JAVA_HOME is already set in the system environment, the steps below are unnecessary.
Flink requires the JAVA_HOME environment variable to be set on the master and all worker nodes and point to the directory of your Java installation. You can set this variable in conf/flink-conf.yaml via the env.java.home key.
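For example, a minimal sketch of that entry in conf/flink-conf.yaml (the JDK path below is a placeholder; substitute your own installation directory):
env.java.home: /usr/lib/jvm/java-8-openjdk-amd64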
Flink Setup
Go to the downloads page and get the ready-to-run package. Make sure to pick the Flink package matching your Hadoop version. If you don’t plan to use Hadoop, pick any version.
After downloading the latest release, copy the archive to your master node and extract it:
tar xzf flink-*.tgz
cd flink-*
Configuring Flink (with no configuration at all, running bin/start-cluster.sh directly on a single node starts a single JobManager and a single TaskManager on that machine, which is handy for debugging code)
After having extracted the system files, you need to configure Flink for the cluster by editing conf/flink-conf.yaml.
Set the jobmanager.rpc.address key to point to your master node. You should also define the maximum amount of main memory the JVM is allowed to allocate on each node by setting the jobmanager.heap.mb and taskmanager.heap.mb keys.
These values are given in MB. If some worker nodes have more main memory which you want to allocate to the Flink system, you can overwrite the default value by setting the environment variable FLINK_TM_HEAP on those specific nodes.
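As a sketch, overriding the heap on one better-provisioned worker could look like this (4096 is a placeholder value in MB; this assumes your Flink release's startup scripts still read FLINK_TM_HEAP):
export FLINK_TM_HEAP=4096    # set on that specific worker before starting its TaskManager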
Finally, you must provide a list of all nodes in your cluster which shall be used as worker nodes. Therefore, similar to the HDFS configuration, edit the file conf/slaves and enter the IP/host name of each worker node. Each worker node will later run a TaskManager.
The following example illustrates the setup with three nodes (with IP addresses from 10.0.0.1 to 10.0.0.3 and hostnames master, worker1, worker2) and shows the contents of the configuration files (which need to be accessible at the same path on all machines):

/path/to/flink/conf/flink-conf.yaml
jobmanager.rpc.address: 10.0.0.1
Note: the configuration above must be applied on every machine that will run a TaskManager.
/path/to/flink/conf/slaves
10.0.0.2
10.0.0.3
The Flink directory must be available on every worker under the same path. You can use a shared NFS directory, or copy the entire Flink directory to every worker node.
Please see the configuration page for details and additional configuration options.
In particular,
- the amount of available memory per JobManager (jobmanager.heap.mb),
- the amount of available memory per TaskManager (taskmanager.heap.mb),
- the number of available CPUs per machine (taskmanager.numberOfTaskSlots),
- the total number of CPUs in the cluster (parallelism.default) and
- the temporary directories (taskmanager.tmp.dirs)
are very important configuration values.
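Putting these together, a conf/flink-conf.yaml for the three-node example above might look like the following sketch (all numbers are illustrative assumptions, not recommendations; tune them to your hardware):
jobmanager.rpc.address: 10.0.0.1
jobmanager.heap.mb: 1024          # JVM heap for the JobManager, in MB
taskmanager.heap.mb: 2048         # JVM heap for each TaskManager, in MB
taskmanager.numberOfTaskSlots: 4  # typically the number of CPU cores per machine
parallelism.default: 8            # e.g. 2 workers x 4 slots each
taskmanager.tmp.dirs: /tmp/flink  # temporary/spill directories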
Starting Flink
The following script starts a JobManager on the local node and connects via SSH to all worker nodes listed in the slaves file to start the TaskManager on each node. Now your Flink system is up and running. The JobManager running on the local node will now accept jobs at the configured RPC port.
Assuming that you are on the master node and inside the Flink directory:
bin/start-cluster.sh
To stop Flink, there is also a stop-cluster.sh script.
Adding JobManager/TaskManager Instances to a Cluster
You can add both JobManager and TaskManager instances to your running cluster with the bin/jobmanager.sh and bin/taskmanager.sh scripts.
Adding a JobManager
bin/jobmanager.sh ((start|start-foreground) cluster)|stop|stop-all
Adding a TaskManager
bin/taskmanager.sh start|start-foreground|stop|stop-all
Make sure to call these scripts on the hosts on which you want to start/stop the respective instance.
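For example, a plausible sequence for bringing a new worker into the running cluster (assuming the Flink directory already exists at the same path on the new host, and <new-worker> is a placeholder hostname):
ssh root@<new-worker>
cd /path/to/flink
bin/taskmanager.sh start    # starts a TaskManager that registers with the JobManager configured in flink-conf.yaml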