Reposted with permission. Original article:

https://blog.csdn.net/MiaoSO/article/details/104770720

Table of Contents

4. Software Deployment

  • 4.1 Create the MySQL database for dolphinscheduler

  • 4.2 Unpack the dolphinscheduler packages

    • 4.2.1 dolphinscheduler-backend

    • 4.2.2 dolphinscheduler-ui

  • 4.3 Deploy dolphinscheduler-backend

    • 4.3.1 Database configuration

    • 4.3.2 Initialize the database

    • 4.3.3 Edit the environment variable configuration

    • 4.3.4 Edit the cluster deployment configuration

    • 4.3.5 Add the Hadoop configuration files

    • 4.3.6 One-click deployment

    • 4.3.7 Commands

    • 4.3.8 Database upgrade (omitted)

  • 4.4 Deploy dolphinscheduler-ui

    • 4.4.1 dolphinscheduler-ui deployment notes

    • 4.4.2 Automatic deployment

    • 4.4.3 Manual deployment

    • 4.4.4 Raise the upload size limit

    • 4.4.5 First login to dolphinscheduler

    • 4.4.6 Nginx notes

      • 4.4.6.1 Installing Nginx on CentOS 7

      • 4.4.6.2 Nginx commands

4. Software Deployment

4.1 Create the MySQL database for dolphinscheduler

  CREATE DATABASE dolphinscheduler DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
  GRANT ALL PRIVILEGES ON dolphinscheduler.* TO 'dscheduler'@'10.10.7.%' IDENTIFIED BY 'Ds@12345';
  #GRANT ALL PRIVILEGES ON dolphinscheduler.* TO 'dscheduler'@'10.158.1.%' IDENTIFIED BY 'Ds@12345';
  #drop user dscheduler@'%';
  flush privileges;
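The host pattern in the GRANT must match the subnet the scheduler nodes connect from. A small sketch (the subnet value below is illustrative, not necessarily yours) that prints the statement for review before feeding it to the mysql client:

```shell
# Build the GRANT statement for a given app subnet; review it, then pipe to `mysql`.
# The subnet below is an example value.
subnet="10.10.7.%"
grant_sql="GRANT ALL PRIVILEGES ON dolphinscheduler.* TO 'dscheduler'@'${subnet}' IDENTIFIED BY 'Ds@12345';"
echo "$grant_sql"
```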

4.2 Unpack the dolphinscheduler packages

4.2.1 dolphinscheduler-backend

  cd /opt/dolphinscheduler && tar -zxf apache-dolphinscheduler-incubating-1.2.0-dolphinscheduler-backend-bin.tar.gz
  ln -s apache-dolphinscheduler-incubating-1.2.0-dolphinscheduler-backend-bin dolphinscheduler-backend
  # Directory layout
  cd dolphinscheduler-backend && tree -L 1
  .
  ├── bin            # service start scripts
  ├── conf           # project configuration files
  ├── DISCLAIMER-WIP # DISCLAIMER file
  ├── install.sh     # one-click deployment script
  ├── lib            # dependency jars (module jars and third-party jars)
  ├── LICENSE        # LICENSE file
  ├── licenses       # runtime licenses
  ├── NOTICE         # NOTICE file
  ├── script         # cluster start/stop and service monitoring scripts
  └── sql            # SQL files the project depends on

4.2.2 dolphinscheduler-ui

  cd /opt/dolphinscheduler && tar -zxf apache-dolphinscheduler-incubating-1.2.0-dolphinscheduler-front-bin.tar.gz
  ln -s apache-dolphinscheduler-incubating-1.2.0-dolphinscheduler-front-bin dolphinscheduler-front


4.3 Deploy dolphinscheduler-backend

4.3.1 Database configuration

1. Edit the configuration file

  vim /opt/dolphinscheduler/dolphinscheduler-backend/conf/application-dao.properties
  # postgres
  #spring.datasource.driver-class-name=org.postgresql.Driver
  #spring.datasource.url=jdbc:postgresql://192.168.xx.xx:5432/dolphinscheduler
  # mysql
  spring.datasource.driver-class-name=com.mysql.jdbc.Driver
  spring.datasource.url=jdbc:mysql://10.10.7.209:3306/dolphinscheduler?useUnicode=true&characterEncoding=UTF-8
  spring.datasource.username=dscheduler
  spring.datasource.password=Ds@12345

2. Add the MySQL driver (copy a locally installed connector, or download one)

  cp /usr/share/java/mysql-connector-java.jar /opt/dolphinscheduler/dolphinscheduler-backend/lib

  # or download the connector:
  cd /opt/dolphinscheduler && wget https://dev.mysql.com/get/Downloads/Connector-J/mysql-connector-java-5.1.46.tar.gz
  tar zxvf mysql-connector-java-5.1.46.tar.gz
  cp mysql-connector-java-5.1.46/mysql-connector-java-5.1.46-bin.jar /opt/dolphinscheduler/dolphinscheduler-backend/lib
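The two approaches above are alternatives; doing both would leave two driver versions on the classpath. A quick sketch to check for duplicates (the directory is simulated here; in practice point `LIBDIR` at the real `dolphinscheduler-backend/lib`):

```shell
# Count MySQL connector jars; more than one risks classpath conflicts.
LIBDIR=$(mktemp -d)                                  # stand-in for dolphinscheduler-backend/lib
touch "$LIBDIR/mysql-connector-java-5.1.46-bin.jar"  # simulate one installed driver
count=$(ls "$LIBDIR" | grep -c 'mysql-connector-java.*\.jar')
echo "connector jars found: $count"
```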

4.3.2 Initialize the database

  sh /opt/dolphinscheduler/dolphinscheduler-backend/script/create-dolphinscheduler.sh
  # "create dolphinscheduler success" indicates the database was initialized successfully

4.3.3 Edit the environment variable configuration

vim /opt/dolphinscheduler/dolphinscheduler-backend/conf/env/.dolphinscheduler_env.sh

  # ==========
  # CDH edition
  # ==========
  export HADOOP_HOME=/opt/cloudera/parcels/CDH/lib/hadoop
  export HADOOP_CONF_DIR=/opt/cloudera/parcels/CDH/lib/hadoop/etc/hadoop
  export SPARK_HOME1=/opt/cloudera/parcels/CDH/lib/spark
  export SPARK_HOME2=/opt/cloudera/parcels/CDH/lib/spark
  export PYTHON_HOME=/usr/bin/python
  export JAVA_HOME=/usr/java/jdk1.8.0_181-cloudera
  export HIVE_HOME=/opt/cloudera/parcels/CDH/lib/hive
  export FLINK_HOME=/opt/soft/flink
  export PATH=$HADOOP_HOME/bin:$SPARK_HOME1/bin:$SPARK_HOME2/bin:$PYTHON_HOME:$JAVA_HOME/bin:$HIVE_HOME/bin:$FLINK_HOME/bin:$PATH
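install.sh distributes this env file to every node, and a wrong path usually only surfaces when a task runs. A pre-deployment sketch that flags missing directories (the paths are the CDH examples above; trim the list to what your cluster actually has):

```shell
# Report whether each configured home directory exists on this node.
check_home() {
  if [ -d "$1" ]; then echo "ok: $1"; else echo "missing: $1"; fi
}
check_home /tmp                                   # known-good path, sanity check
check_home /opt/cloudera/parcels/CDH/lib/hadoop   # HADOOP_HOME from the file above
check_home /opt/cloudera/parcels/CDH/lib/hive     # HIVE_HOME from the file above
```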

4.3.4 Edit the cluster deployment configuration

  cp /opt/dolphinscheduler/dolphinscheduler-backend/install.sh /opt/dolphinscheduler/dolphinscheduler-backend/install.sh_b
  vim /opt/dolphinscheduler/dolphinscheduler-backend/install.sh
  # Note: only the core parameters are shown below; this is not the full install.sh
  ......................................................
  source ${workDir}/conf/config/run_config.conf
  source ${workDir}/conf/config/install_config.conf
  # 1. Database configuration
  # ${installPath}/conf/quartz.properties
  #dbtype="postgresql"
  dbtype="mysql"
  dbhost="10.10.7.209"
  dbname="dolphinscheduler"
  username="dscheduler"
  # Note: if there are special characters, use \ to escape them
  # (keep the spelling "passowrd" -- it is the variable name used by this version's install.sh)
  passowrd="Ds@12345"
  # 2. Cluster deployment environment configuration
  # ${installPath}/conf/config/install_config.conf
  installPath="/opt/dolphinscheduler/dolphinscheduler-agent"
  # deployment user
  # Note: the deployment user needs sudo privileges and permission to operate on hdfs. If hdfs is enabled, the root directory must be created manually
  deployUser="dscheduler"
  # zk cluster
  zkQuorum="test01:2181,test02:2181,test03:2181"
  # install hosts
  ips="test01,test02,test03"
  # 3. Per-node service configuration
  # ${installPath}/conf/config/run_config.conf
  # run master machine
  masters="test02,test03"
  # run worker machine
  workers="test01,test02,test03"
  # run alert machine
  alertServer="test03"
  # run api machine
  apiServers="test03"
  # 4. Alert configuration
  # ${installPath}/conf/alert.properties
  # If your mail server has no SSL, set: mailServerPort="25" ; starttlsEnable="false" ; sslEnable="false"
  # mail protocol
  mailProtocol="SMTP"
  # mail server host
  mailServerHost="smtp.sohh.cn"
  # mail server port
  mailServerPort="465"
  # sender
  mailSender="dashuju@sohh.cn"
  # user
  mailUser="dashuju@sohh.cn"
  # sender password
  mailPassword="dashuju@123"
  # TLS mail protocol support
  starttlsEnable="false"
  sslTrust="*"
  # SSL mail protocol support
  # note: SSL is enabled by default;
  # only one of TLS and SSL may be true at a time.
  sslEnable="true"
  # download excel path
  xlsFilePath="/tmp/xls"
  # Enterprise WeChat corp ID
  enterpriseWechatCorpId="xxxxxxxxxx"
  # Enterprise WeChat application secret
  enterpriseWechatSecret="xxxxxxxxxx"
  # Enterprise WeChat application AgentId
  enterpriseWechatAgentId="xxxxxxxxxx"
  # Enterprise WeChat users, multiple users separated by ","
  enterpriseWechatUsers="xxxxx,xxxxx"
  # alert port
  alertPort=7789
  # 5. Monitoring self-start script
  # Controls whether the monitoring script runs (watches master/worker state and restarts them if they go down)
  # whether to start monitoring self-starting scripts
  monitorServerState="true"
  # 6. Resource center configuration
  # under ${installPath}/conf/common/
  # resource center upload storage method: HDFS, S3, NONE
  resUploadStartupType="HDFS"
  # if resUploadStartupType is HDFS, set defaultFS to the namenode address; with HA, put core-site.xml and hdfs-site.xml in the conf directory.
  # if S3, write the S3 address, e.g. s3a://dolphinscheduler
  # Note: for S3, be sure to create the root directory /dolphinscheduler
  defaultFS="hdfs://stcluster:8020"
  # if S3 is configured, the following settings are required.
  s3Endpoint="http://192.168.xx.xx:9010"
  s3AccessKey="xxxxxxxxxx"
  s3SecretKey="xxxxxxxxxx"
  # resourcemanager HA configuration; for a single resourcemanager set yarnHaIps=""
  yarnHaIps="test03,test02"
  # for a single resourcemanager, configure just that host name; with resourcemanager HA the default is fine.
  singleYarnIp="ark1"
  # hdfs root path; its owner must be the deployment user.
  # versions prior to 1.1.0 do not create the hdfs root directory automatically; create it yourself.
  hdfsPath="/dolphinscheduler"
  # user with permission to create directories under the hdfs root path /
  # Note: if kerberos is enabled, hdfsRootUser="" can be used directly.
  hdfsRootUser="hdfs"
  # 7. common configuration
  # in ${installPath}/conf/common/common.properties
  # common config
  # program root path
  programPath="/tmp/dolphinscheduler"
  # download path
  downloadPath="/tmp/dolphinscheduler/download"
  # task execute path
  execPath="/tmp/dolphinscheduler/exec"
  # SHELL environment variable path
  shellEnvPath="$installPath/conf/env/.dolphinscheduler_env.sh"
  # resource file suffixes
  resSuffixs="txt,log,sh,conf,cfg,py,java,sql,hql,xml"
  # development status: if true, the wrapped SHELL script for a task can be inspected in the execPath directory;
  # if false, it is deleted right after execution
  devState="true"
  # kerberos config
  # whether kerberos is enabled
  kerberosStartUp="false"
  # kdc krb5 config file path
  krb5ConfPath="$installPath/conf/krb5.conf"
  # keytab username
  keytabUserName="hdfs-mycluster@ESZ.COM"
  # username keytab path
  keytabPath="$installPath/conf/hdfs.headless.keytab"
  # 8. zk configuration
  # ${installPath}/conf/zookeeper.properties
  # zk config
  # zk root directory
  zkRoot="/dolphinscheduler"
  # zk directory used to record dead servers
  zkDeadServers="$zkRoot/dead-servers"
  # masters directory
  zkMasters="$zkRoot/masters"
  # workers directory
  zkWorkers="$zkRoot/workers"
  # zk master distributed lock
  mastersLock="$zkRoot/lock/masters"
  # zk worker distributed lock
  workersLock="$zkRoot/lock/workers"
  # zk master failover distributed lock
  mastersFailover="$zkRoot/lock/failover/masters"
  # zk worker failover distributed lock
  workersFailover="$zkRoot/lock/failover/workers"
  # zk master startup failover distributed lock
  mastersStartupFailover="$zkRoot/lock/failover/startup-masters"
  # zk session timeout
  zkSessionTimeout="300"
  # zk connection timeout
  zkConnectionTimeout="300"
  # zk retry interval
  zkRetrySleep="100"
  # zk maximum number of retries
  zkRetryMaxtime="5"
  # 9. master config
  # ${installPath}/conf/master.properties
  # maximum number of master execution threads, i.e. maximum parallelism of process instances
  masterExecThreads="100"
  # maximum number of master task execution threads, i.e. maximum parallelism within each process instance
  masterExecTaskNum="20"
  # master heartbeat interval
  masterHeartbeatInterval="10"
  # master task submission retries
  masterTaskCommitRetryTimes="5"
  # master task submission retry interval
  masterTaskCommitInterval="100"
  # maximum cpu load average; used to decide whether the master still has execution capacity
  masterMaxCpuLoadAvg="10"
  # master reserved memory; used to decide whether the master still has execution capacity
  masterReservedMemory="1"
  # master port
  masterPort=5566
  # 10. worker config
  # ${installPath}/conf/worker.properties
  # worker execution threads
  workerExecThreads="100"
  # worker heartbeat interval
  workerHeartbeatInterval="10"
  # number of tasks a worker fetches at a time
  workerFetchTaskNum="3"
  # worker reserved memory; used to decide whether the worker still has execution capacity
  workerReservedMemory="1"
  # worker port
  workerPort=7788
  # 11. api config
  # ${installPath}/conf/application.properties
  # api server port
  apiServerPort="12345"
  # api session timeout
  apiServerSessionTimeout="7200"
  # api server context path
  apiServerContextPath="/dolphinscheduler/"
  # spring max file size
  springMaxFileSize="1024MB"
  # spring max request size
  springMaxRequestSize="1024MB"
  # api max http post size
  apiMaxHttpPostSize="5000000"
  # 1,replace file
  echo "1,replace file"
  ......................................................
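The "replace file" step at the end is where the variables above take effect: install.sh sed-substitutes them into the templates under conf/ before copying everything to installPath. A minimal sketch of that mechanism (a one-line sample template is generated here; the real script runs many such substitutions):

```shell
# Simulate install.sh's sed-based substitution on a sample properties line.
conf=$(mktemp)
echo 'spring.datasource.url=jdbc:mysql://127.0.0.1:3306/dolphinscheduler' > "$conf"
dbhost="10.10.7.209"    # value from the configuration above
sed -i "s#127.0.0.1:3306#${dbhost}:3306#g" "$conf"
cat "$conf"
```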

4.3.5 Add the Hadoop configuration files

  # If resUploadStartupType in install.sh is HDFS and the cluster uses HA, copy the hadoop config files into the conf directory
  cp /etc/hadoop/conf.cloudera.yarn/hdfs-site.xml /opt/dolphinscheduler/dolphinscheduler-backend/conf/
  cp /etc/hadoop/conf.cloudera.yarn/core-site.xml /opt/dolphinscheduler/dolphinscheduler-backend/conf/
  # If the hadoop config files change later, copy them into $installPath/conf and restart the api-server service
  #cp /etc/hadoop/conf.cloudera.yarn/hdfs-site.xml /opt/dolphinscheduler/dolphinscheduler-agent/conf/
  #cp /etc/hadoop/conf.cloudera.yarn/core-site.xml /opt/dolphinscheduler/dolphinscheduler-agent/conf/
  #sh /opt/dolphinscheduler/dolphinscheduler-agent/bin/dolphinscheduler-daemon.sh stop api-server
  #sh /opt/dolphinscheduler/dolphinscheduler-agent/bin/dolphinscheduler-daemon.sh start api-server
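When HA is enabled, defaultFS in install.sh must agree with fs.defaultFS in the copied core-site.xml. A sketch that extracts the value for cross-checking (a sample file is generated here for illustration; point it at the real core-site.xml in practice):

```shell
# Pull fs.defaultFS out of core-site.xml with grep/sed (no XML tooling assumed).
core_site=$(mktemp)
cat > "$core_site" <<'EOF'
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://stcluster:8020</value>
  </property>
</configuration>
EOF
fs=$(grep -A1 '<name>fs.defaultFS</name>' "$core_site" \
     | sed -n 's:.*<value>\(.*\)</value>.*:\1:p')
echo "fs.defaultFS = $fs"   # compare against defaultFS in install.sh
```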

4.3.6 One-click deployment

Run the script to deploy and start:

  sh /opt/dolphinscheduler/dolphinscheduler-backend/install.sh

Check the logs:

  tree /opt/dolphinscheduler/dolphinscheduler-agent/logs
  -------------------------------------------------
  /opt/dolphinscheduler/dolphinscheduler-agent/logs
  ├── dolphinscheduler-alert.log
  ├── dolphinscheduler-alert-server-node-b.test.com.out
  ├── dolphinscheduler-alert-server.pid
  ├── dolphinscheduler-api-server-node-b.test.com.out
  ├── dolphinscheduler-api-server.log
  ├── dolphinscheduler-api-server.pid
  ├── dolphinscheduler-logger-server-node-b.test.com.out
  ├── dolphinscheduler-logger-server.pid
  ├── dolphinscheduler-master.log
  ├── dolphinscheduler-master-server-node-b.test.com.out
  ├── dolphinscheduler-master-server.pid
  ├── dolphinscheduler-worker.log
  ├── dolphinscheduler-worker-server-node-b.test.com.out
  ├── dolphinscheduler-worker-server.pid
  └── {processDefinitionId}
      └── {processInstanceId}
          └── {taskInstanceId}.log

Check the Java processes:

  jps
  8138 MasterServer # master service
  8165 WorkerServer # worker service
  8206 LoggerServer # logger service
  8240 AlertServer # alert service
  8274 ApiApplicationServer # api service
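The same check can be scripted for monitoring. A sketch that reports each expected service from the jps listing (it prints `down:` for anything missing, which is also what you will see on a machine where the services are not running):

```shell
# Report up/down for the five DolphinScheduler server processes.
expected="MasterServer WorkerServer LoggerServer AlertServer ApiApplicationServer"
running=$(jps 2>/dev/null || true)   # tolerate hosts where jps is absent
report=""
for svc in $expected; do
  if echo "$running" | grep -qw "$svc"; then
    report="$report up:$svc"
  else
    report="$report down:$svc"
  fi
done
echo "$report"
```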

If the Worker fails to start:

  less /opt/dolphinscheduler/dolphinscheduler-agent/logs/dolphinscheduler-worker-server-test01.out
  nohup: failed to run command "/bin/java": No such file or directory
  # Fix: create a java symlink
  sudo ln -s /usr/java/jdk1.8.0_181-cloudera/bin/java /usr/bin/java

4.3.7 Commands

  # One-click deployment (stops services, redistributes the package, and restarts everything)
  sh /opt/dolphinscheduler/dolphinscheduler-backend/install.sh
  # Start/stop every service in the cluster (from the unpacked package directory)
  sh /opt/dolphinscheduler/dolphinscheduler-backend/bin/start-all.sh
  sh /opt/dolphinscheduler/dolphinscheduler-backend/bin/stop-all.sh
  # ... or from the installed directory ($installPath)
  sh /opt/dolphinscheduler/dolphinscheduler-agent/bin/start-all.sh
  sh /opt/dolphinscheduler/dolphinscheduler-agent/bin/stop-all.sh
  # Start/stop the Master
  sh /opt/dolphinscheduler/dolphinscheduler-agent/bin/dolphinscheduler-daemon.sh start master-server
  sh /opt/dolphinscheduler/dolphinscheduler-agent/bin/dolphinscheduler-daemon.sh stop master-server
  # Start/stop a Worker
  sh /opt/dolphinscheduler/dolphinscheduler-agent/bin/dolphinscheduler-daemon.sh start worker-server
  sh /opt/dolphinscheduler/dolphinscheduler-agent/bin/dolphinscheduler-daemon.sh stop worker-server
  # Start/stop the Api server
  sh /opt/dolphinscheduler/dolphinscheduler-agent/bin/dolphinscheduler-daemon.sh start api-server
  sh /opt/dolphinscheduler/dolphinscheduler-agent/bin/dolphinscheduler-daemon.sh stop api-server
  # Start/stop the Logger server
  sh /opt/dolphinscheduler/dolphinscheduler-agent/bin/dolphinscheduler-daemon.sh start logger-server
  sh /opt/dolphinscheduler/dolphinscheduler-agent/bin/dolphinscheduler-daemon.sh stop logger-server
  # Start/stop the Alert server
  sh /opt/dolphinscheduler/dolphinscheduler-agent/bin/dolphinscheduler-daemon.sh start alert-server
  sh /opt/dolphinscheduler/dolphinscheduler-agent/bin/dolphinscheduler-daemon.sh stop alert-server
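Since every per-service call goes through the same daemon script, a small wrapper function (the name `ds` is my own shorthand, not part of the project) saves typing:

```shell
# Tiny wrapper around dolphinscheduler-daemon.sh; usage: ds <start|stop> <service>
DS_BIN=/opt/dolphinscheduler/dolphinscheduler-agent/bin/dolphinscheduler-daemon.sh
ds() { sh "$DS_BIN" "$1" "$2"; }
# Example: restart the worker on this node
# ds stop worker-server && ds start worker-server
```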

4.3.8 Database upgrade (omitted)

  # Database upgrade was added in version 1.0.2; run the following command to upgrade the schema automatically
  sh /opt/dolphinscheduler/dolphinscheduler-agent/script/upgrade_dolphinscheduler.sh

4.4 Deploy dolphinscheduler-ui

4.4.1 dolphinscheduler-ui deployment notes

Deploy the UI on the server that runs the ApiApplicationServer.
The front end can be deployed automatically or manually:

  • The automatic deployment script installs Nginx via yum; after the guided setup, the Nginx config file is /etc/nginx/conf.d/dolphinscheduler.conf

  • If Nginx is already installed locally, deploy manually by creating the Nginx config file /etc/nginx/conf.d/dolphinscheduler.conf

4.4.2 Automatic deployment

sudo sh /opt/dolphinscheduler/dolphinscheduler-front/install-dolphinscheduler-ui.sh

  ············
  Enter the nginx proxy port (default 8888): 8886
  Enter the api server proxy ip (required, e.g. 192.168.xx.xx): 10.10.7.209
  Enter the api server proxy port (default 12345): 12345
  =================================================
  1. Install on CentOS 6
  2. Install on CentOS 7
  3. Install on Ubuntu
  4. Exit
  =================================================
  Enter the install option (1|2|3|4): 2
  ············
  Complete!
  port option is needed for add
  FirewallD is not running
  setenforce: SELinux is disabled
  Open in a browser: http://10.10.7.209:8886

4.4.3 Manual deployment

vim /etc/nginx/conf.d/dolphinscheduler.conf

  server {
      listen 8886;            # access port
      server_name localhost;
      #charset koi8-r;
      #access_log /var/log/nginx/host.access.log main;
      location / {
          root /opt/dolphinscheduler/dolphinscheduler-front/dist;  # static file directory
          index index.html index.htm;
      }
      location /dolphinscheduler {
          proxy_pass http://10.10.7.209:12345;  # api server address
          proxy_set_header Host $host;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header x_real_ipP $remote_addr;
          proxy_set_header remote_addr $remote_addr;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_http_version 1.1;
          proxy_connect_timeout 300s;
          proxy_read_timeout 300s;
          proxy_send_timeout 300s;
          proxy_set_header Upgrade $http_upgrade;
          proxy_set_header Connection upgrade;
      }
      #error_page 404 /404.html;
      # redirect server error pages to the static page /50x.html
      #
      error_page 500 502 503 504 /50x.html;
      location = /50x.html {
          root /usr/share/nginx/html;
      }
  }

4.4.4 Raise the upload size limit

sudo vim /etc/nginx/nginx.conf

  # add inside the http block
  client_max_body_size 1024m;

Restart the nginx service:

  systemctl restart nginx

4.4.5 First login to dolphinscheduler

  Visit http://10.10.7.209:8886
  Initial user:     admin
  Initial password: dolphinscheduler123
  Note: if the page returns 404, delete /etc/nginx/conf.d/default.conf and restart Nginx

4.4.6 Nginx notes

4.4.6.1 Installing Nginx on CentOS 7

  rpm -Uvh http://nginx.org/packages/centos/7/noarch/RPMS/nginx-release-centos-7-0.el7.ngx.noarch.rpm
  yum install nginx
  systemctl start nginx.service

4.4.6.2 Nginx commands

  # start
  systemctl start nginx
  # restart
  systemctl restart nginx
  # status
  systemctl status nginx
  # stop
  systemctl stop nginx

  Series contents:
  DS 1.2.0 documentation (1/8): architecture and glossary
  DS 1.2.0 documentation (2-3/8): cluster planning and environment preparation
  DS 1.2.0 documentation (4/8): software deployment
  DS 1.2.0 documentation (5/8): usage and testing
  DS 1.2.0 documentation (6/8): task node types and task parameters
  DS 1.2.0 documentation (7/8): system parameters and custom parameters
  DS 1.2.0 documentation (8/8): appendix
