Downloading, installing, and configuring a 1- or 3-node kafka_2.11-0.8.2.2.tgz cluster (detailed illustrated tutorial) — solid, practical material

一、Pre-installation preparation
  1.1 Example machines

二、Installing JDK 7

2.1 Download
  Download address: http://www.oracle.com/technetwork/java/javase/downloads/jdk7-downloads-1880260.html

  [hadoop@hadoop1 ~]$ cd
  [hadoop@hadoop1 ~]$ rz
  -bash: rz: command not found
  [hadoop@hadoop1 ~]$ su root
  Password:
  [root@hadoop1 hadoop]# yum -y install lrzsz

  Do the same on hadoop2 and hadoop3; not repeated here.

  Before installing the JDK, uninstall the bundled OpenJDK first.

See: Uninstalling OpenJDK and installing the Oracle JDK (with environment variable setup) on CentOS 6.5

  [hadoop@hadoop1 ~]$ pwd
  /home/hadoop
  [hadoop@hadoop1 ~]$ ll
  total 0
  [hadoop@hadoop1 ~]$ rz

  [hadoop@hadoop1 ~]$ ll
  total 149920
  -rw-r--r--. 1 hadoop hadoop 153512879 Oct 23 2015 jdk-7u79-linux-x64.tar.gz
  [hadoop@hadoop1 ~]$

  Do the same on hadoop2 and hadoop3; not repeated here.

2.2 Installation
  Extract the archive

  cd /home/hadoop
  tar zxvf jdk-7u79-linux-x64.tar.gz

  [hadoop@hadoop1 ~]$ pwd
  /home/hadoop
  [hadoop@hadoop1 ~]$ ll
  total 149920
  -rw-r--r--. 1 hadoop hadoop 153512879 Oct 23 2015 jdk-7u79-linux-x64.tar.gz
  [hadoop@hadoop1 ~]$ tar -zxvf jdk-7u79-linux-x64.tar.gz

  Do the same on hadoop2 and hadoop3; not repeated here.

  Create a symlink

  ln -s jdk1.7.0_79 jdk

  [hadoop@hadoop1 ~]$ ll
  total 4
  drwxr-xr-x. 8 hadoop hadoop 4096 Apr 11 2015 jdk1.7.0_79
  [hadoop@hadoop1 ~]$ ln -s jdk1.7.0_79/ jdk
  [hadoop@hadoop1 ~]$ ll
  total 4
  lrwxrwxrwx. 1 hadoop hadoop 12 Apr 22 22:27 jdk -> jdk1.7.0_79/
  drwxr-xr-x. 8 hadoop hadoop 4096 Apr 11 2015 jdk1.7.0_79
  [hadoop@hadoop1 ~]$

  Do the same on hadoop2 and hadoop3; not repeated here.

  Set the environment variables

  vim /etc/profile

  Do the same on hadoop2 and hadoop3; not repeated here.

  Append the following:

  #jdk
  export JAVA_HOME=/home/hadoop/jdk
  export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
  export PATH=$JAVA_HOME/bin:$PATH

  Do the same on hadoop2 and hadoop3; not repeated here.

  Apply the changes

  source /etc/profile

  Do the same on hadoop2 and hadoop3; not repeated here.

Run java -version; if a version is printed, the installation succeeded.
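  Beyond java -version, it is worth confirming that the profile edit actually put $JAVA_HOME/bin on the PATH. A minimal sketch, assuming the /home/hadoop/jdk symlink created above:

```shell
#!/bin/sh
# Sketch: confirm that JAVA_HOME/bin ended up on PATH after sourcing /etc/profile.
JAVA_HOME="${JAVA_HOME:-/home/hadoop/jdk}"
PATH="$JAVA_HOME/bin:$PATH"

# Surround PATH with ':' so the pattern matches a whole entry, not a substring.
case ":$PATH:" in
  *":$JAVA_HOME/bin:"*) echo "ok: $JAVA_HOME/bin is on PATH" ;;
  *) echo "missing: $JAVA_HOME/bin is not on PATH" >&2; exit 1 ;;
esac
```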

三、Installing ZooKeeper

 3.1 Upload the package

  [hadoop@hadoop1 ~]$ pwd
  /home/hadoop
  [hadoop@hadoop1 ~]$ ll
  total 4
  lrwxrwxrwx. 1 hadoop hadoop 12 Apr 22 22:27 jdk -> jdk1.7.0_79/
  drwxr-xr-x. 8 hadoop hadoop 4096 Apr 11 2015 jdk1.7.0_79
  [hadoop@hadoop1 ~]$ rz

  [hadoop@hadoop1 ~]$ ll
  total 27408
  lrwxrwxrwx. 1 hadoop hadoop 12 Apr 22 22:27 jdk -> jdk1.7.0_79/
  drwxr-xr-x. 8 hadoop hadoop 4096 Apr 11 2015 jdk1.7.0_79
  -rw-r--r--. 1 hadoop hadoop 28060242 Mar 20 10:24 zookeeper-3.4.5-cdh5.5.4.gz
  [hadoop@hadoop1 ~]$

  Do the same on hadoop2 and hadoop3; not repeated here.

  3.2 Extract the archive

  cd /home/hadoop
  tar zxvf zookeeper-3.4.5-cdh5.5.4.gz

  [hadoop@hadoop1 ~]$ ll
  total 27408
  lrwxrwxrwx. 1 hadoop hadoop 12 Apr 22 22:34 jdk -> jdk1.7.0_79/
  drwxr-xr-x. 8 hadoop hadoop 4096 Apr 11 2015 jdk1.7.0_79
  -rw-r--r--. 1 hadoop hadoop 28060242 Mar 20 10:24 zookeeper-3.4.5-cdh5.5.4.gz
  [hadoop@hadoop1 ~]$ tar -zxvf zookeeper-3.4.5-cdh5.5.4.gz

   Do the same on hadoop2 and hadoop3; not repeated here.

  3.3 Create a symlink

  [hadoop@hadoop1 ~]$ pwd
  /home/hadoop
  [hadoop@hadoop1 ~]$ ll
  total 8
  lrwxrwxrwx. 1 hadoop hadoop 12 Apr 22 22:27 jdk -> jdk1.7.0_79/
  drwxr-xr-x. 8 hadoop hadoop 4096 Apr 11 2015 jdk1.7.0_79
  drwxr-xr-x. 14 hadoop hadoop 4096 Apr 26 2016 zookeeper-3.4.5-cdh5.5.4
  [hadoop@hadoop1 ~]$ ln -s zookeeper-3.4.5-cdh5.5.4/ zookeeper
  [hadoop@hadoop1 ~]$ ll
  total 8
  lrwxrwxrwx. 1 hadoop hadoop 12 Apr 22 22:27 jdk -> jdk1.7.0_79/
  drwxr-xr-x. 8 hadoop hadoop 4096 Apr 11 2015 jdk1.7.0_79
  lrwxrwxrwx. 1 hadoop hadoop 25 Apr 22 22:49 zookeeper -> zookeeper-3.4.5-cdh5.5.4/
  drwxr-xr-x. 14 hadoop hadoop 4096 Apr 26 2016 zookeeper-3.4.5-cdh5.5.4
  [hadoop@hadoop1 ~]$

  Do the same on hadoop2 and hadoop3; not repeated here.

  Append the following to /etc/profile:

  #zookeeper
  export ZOOKEEPER_HOME=/home/hadoop/zookeeper
  export PATH=$PATH:$ZOOKEEPER_HOME/bin

  Do the same on hadoop2 and hadoop3; not repeated here.

  [root@hadoop1 hadoop]# vim /etc/profile
  [root@hadoop1 hadoop]# source /etc/profile

  Do the same on hadoop2 and hadoop3; not repeated here.

  3.4 Edit the configuration file

  Copy the sample configuration file

  cd /home/hadoop/zookeeper/conf
  cp zoo_sample.cfg zoo.cfg

  [hadoop@hadoop1 ~]$ ll
  total 8
  lrwxrwxrwx. 1 hadoop hadoop 12 Apr 22 22:27 jdk -> jdk1.7.0_79/
  drwxr-xr-x. 8 hadoop hadoop 4096 Apr 11 2015 jdk1.7.0_79
  lrwxrwxrwx. 1 hadoop hadoop 25 Apr 22 22:49 zookeeper -> zookeeper-3.4.5-cdh5.5.4/
  drwxr-xr-x. 14 hadoop hadoop 4096 Apr 26 2016 zookeeper-3.4.5-cdh5.5.4
  [hadoop@hadoop1 ~]$ cd zookeeper
  [hadoop@hadoop1 zookeeper]$ cd conf/
  [hadoop@hadoop1 conf]$ pwd
  /home/hadoop/zookeeper/conf
  [hadoop@hadoop1 conf]$ ll
  total 12
  -rw-rw-r--. 1 hadoop hadoop 535 Apr 26 2016 configuration.xsl
  -rw-rw-r--. 1 hadoop hadoop 2693 Apr 26 2016 log4j.properties
  -rw-rw-r--. 1 hadoop hadoop 808 Apr 26 2016 zoo_sample.cfg
  [hadoop@hadoop1 conf]$ cp zoo_sample.cfg zoo.cfg
  [hadoop@hadoop1 conf]$ ll
  total 16
  -rw-rw-r--. 1 hadoop hadoop 535 Apr 26 2016 configuration.xsl
  -rw-rw-r--. 1 hadoop hadoop 2693 Apr 26 2016 log4j.properties
  -rw-rw-r-- 1 hadoop hadoop 808 Apr 23 08:45 zoo.cfg
  -rw-rw-r--. 1 hadoop hadoop 808 Apr 26 2016 zoo_sample.cfg
  [hadoop@hadoop1 conf]$

  Do the same on hadoop2 and hadoop3; not repeated here.

  [hadoop@hadoop1 conf]$ pwd
  /home/hadoop/zookeeper/conf
  [hadoop@hadoop1 conf]$ ll
  total 16
  -rw-rw-r--. 1 hadoop hadoop 535 Apr 26 2016 configuration.xsl
  -rw-rw-r--. 1 hadoop hadoop 2693 Apr 26 2016 log4j.properties
  -rw-rw-r-- 1 hadoop hadoop 808 Apr 23 08:45 zoo.cfg
  -rw-rw-r--. 1 hadoop hadoop 808 Apr 26 2016 zoo_sample.cfg
  [hadoop@hadoop1 conf]$ vim zoo.cfg

  Do the same on hadoop2 and hadoop3; not repeated here.

  Set the following parameters

  tickTime=2000
  initLimit=10
  syncLimit=5
  dataDir=/home/hadoop/zookeeper/data
  dataLogDir=/home/hadoop/zookeeper/logs
  clientPort=2181
  server.1=hadoop1:2888:3888
  server.2=hadoop2:2888:3888
  server.3=hadoop3:2888:3888

 tickTime is ZooKeeper's basic time unit, in milliseconds. For example, 1 * tickTime is the heartbeat interval between clients and the server, and 2 * tickTime is the minimum client-session timeout. The default is 2000 ms; a lower tickTime detects timeouts faster, but at the cost of more network traffic (heartbeat messages) and higher CPU usage (session tracking).
  clientPort is the TCP port the ZooKeeper server process listens on; by default the server listens on port 2181. dataDir has no default and must be set; it is the directory that holds the snapshot files. If dataLogDir is not set, the transaction logs are also stored in dataDir.
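  As a worked example of the tickTime relationships, the session-timeout bounds can be computed directly. A small sketch, assuming ZooKeeper's documented defaults of minSessionTimeout = 2 * tickTime and maxSessionTimeout = 20 * tickTime (override them in zoo.cfg if needed):

```shell
#!/bin/sh
# Sketch: derive ZooKeeper's default session-timeout bounds from tickTime.
# By default the server clamps negotiated session timeouts to
# [2 * tickTime, 20 * tickTime] (minSessionTimeout / maxSessionTimeout).
TICK_TIME=2000                     # ms, as set in zoo.cfg above

MIN_SESSION=$((2 * TICK_TIME))     # smallest timeout the server will grant
MAX_SESSION=$((20 * TICK_TIME))    # largest timeout the server will grant

echo "min session timeout: ${MIN_SESSION} ms"
echo "max session timeout: ${MAX_SESSION} ms"
```

  With tickTime=2000 this gives a 4000-40000 ms window, which is why the Kafka logs later in this tutorial show "negotiated timeout = 6000" for a requested sessionTimeout of 6000.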

  Do the same on hadoop2 and hadoop3; not repeated here.

Create the directories

  mkdir -p /home/hadoop/zookeeper/data
  mkdir -p /home/hadoop/zookeeper/logs

  Do the same on hadoop2 and hadoop3; not repeated here.

3.5 Create the myid file

  In the dataDir directory (i.e. under /home/hadoop/zookeeper/data), create a myid file containing the x from the corresponding server.x line.

  [hadoop@hadoop1 data]$ pwd
  /home/hadoop/zookeeper/data
  [hadoop@hadoop1 data]$ ll
  total 0
  [hadoop@hadoop1 data]$ vim myid

  1

   Do the same on hadoop2 and hadoop3 (writing 2 and 3 respectively); not repeated here.

3.6 Start ZooKeeper

  cd /home/hadoop/zookeeper/bin
  ./zkServer.sh start

  Start the ZK service:   ./zkServer.sh start
  Check ZK service status: ./zkServer.sh status
  Stop the ZK service:    ./zkServer.sh stop
  Restart the ZK service: ./zkServer.sh restart

  Do the same on hadoop2 and hadoop3; not repeated here.

  3.7 Test
  Create a znode in the ZooKeeper ensemble from hadoop1

  cd /home/hadoop/zookeeper/bin
  ./zkCli.sh
  create /hello hehe

  (Output not shown here.)

  Read the znode from hadoop2

  cd /home/hadoop/zookeeper/bin
  ./zkCli.sh

  (Output not shown here.)

  get /hello   # if the value comes back, ZooKeeper is running correctly

3.8 The processes

  Each node should now show a QuorumPeerMain process in the output of jps, and ./zkServer.sh status reports Mode: leader on one node and Mode: follower on the others. Not covered further here.

四、Installing Kafka
  4.1 Installation
  Install on hadoop1, hadoop2, and hadoop3

  cd /home/hadoop
  tar zxvf kafka_2.11-0.8.2.2.tgz
  ln -s kafka_2.11-0.8.2.2 kafka

  [hadoop@hadoop1 ~]$ ll
  total 28
  drwxr-xr-x 8 hadoop hadoop 4096 Apr 26 2016 apache-flume-1.6.0-cdh5.5.4-bin
  drwxrwxr-x 5 hadoop hadoop 4096 Apr 23 18:34 data
  lrwxrwxrwx 1 hadoop hadoop 32 Apr 23 18:14 flume -> apache-flume-1.6.0-cdh5.5.4-bin/
  lrwxrwxrwx 1 hadoop hadoop 22 Apr 23 09:08 hadoop -> hadoop-2.6.0-cdh5.5.4/
  drwxr-xr-x 18 hadoop hadoop 4096 Apr 23 09:50 hadoop-2.6.0-cdh5.5.4
  lrwxrwxrwx 1 hadoop hadoop 21 Apr 23 16:55 hbase -> hbase-1.0.0-cdh5.5.4/
  drwxr-xr-x 27 hadoop hadoop 4096 Apr 23 17:27 hbase-1.0.0-cdh5.5.4
  lrwxrwxrwx 1 hadoop hadoop 20 Apr 23 15:27 hive -> hive-1.1.0-cdh5.5.4/
  drwxr-xr-x 10 hadoop hadoop 4096 Apr 26 2016 hive-1.1.0-cdh5.5.4
  lrwxrwxrwx. 1 hadoop hadoop 12 Apr 22 22:27 jdk -> jdk1.7.0_79/
  drwxr-xr-x. 8 hadoop hadoop 4096 Apr 11 2015 jdk1.7.0_79
  lrwxrwxrwx. 1 hadoop hadoop 25 Apr 22 22:49 zookeeper -> zookeeper-3.4.5-cdh5.5.4/
  drwxr-xr-x. 16 hadoop hadoop 4096 Apr 23 08:57 zookeeper-3.4.5-cdh5.5.4
  [hadoop@hadoop1 ~]$ rz

  [hadoop@hadoop1 ~]$ ll
  total 15436
  drwxr-xr-x 8 hadoop hadoop 4096 Apr 26 2016 apache-flume-1.6.0-cdh5.5.4-bin
  drwxrwxr-x 5 hadoop hadoop 4096 Apr 23 18:34 data
  lrwxrwxrwx 1 hadoop hadoop 32 Apr 23 18:14 flume -> apache-flume-1.6.0-cdh5.5.4-bin/
  lrwxrwxrwx 1 hadoop hadoop 22 Apr 23 09:08 hadoop -> hadoop-2.6.0-cdh5.5.4/
  drwxr-xr-x 18 hadoop hadoop 4096 Apr 23 09:50 hadoop-2.6.0-cdh5.5.4
  lrwxrwxrwx 1 hadoop hadoop 21 Apr 23 16:55 hbase -> hbase-1.0.0-cdh5.5.4/
  drwxr-xr-x 27 hadoop hadoop 4096 Apr 23 17:27 hbase-1.0.0-cdh5.5.4
  lrwxrwxrwx 1 hadoop hadoop 20 Apr 23 15:27 hive -> hive-1.1.0-cdh5.5.4/
  drwxr-xr-x 10 hadoop hadoop 4096 Apr 26 2016 hive-1.1.0-cdh5.5.4
  lrwxrwxrwx. 1 hadoop hadoop 12 Apr 22 22:27 jdk -> jdk1.7.0_79/
  drwxr-xr-x. 8 hadoop hadoop 4096 Apr 11 2015 jdk1.7.0_79
  -rw-r--r-- 1 hadoop hadoop 15773865 Mar 20 10:24 kafka_2.11-0.8.2.2.tgz
  lrwxrwxrwx. 1 hadoop hadoop 25 Apr 22 22:49 zookeeper -> zookeeper-3.4.5-cdh5.5.4/
  drwxr-xr-x. 16 hadoop hadoop 4096 Apr 23 08:57 zookeeper-3.4.5-cdh5.5.4
  [hadoop@hadoop1 ~]$

  Do the same on hadoop2 and hadoop3; not repeated here.

  [hadoop@hadoop1 ~]$ ll
  total 15436
  drwxr-xr-x 8 hadoop hadoop 4096 Apr 26 2016 apache-flume-1.6.0-cdh5.5.4-bin
  drwxrwxr-x 5 hadoop hadoop 4096 Apr 23 18:34 data
  lrwxrwxrwx 1 hadoop hadoop 32 Apr 23 18:14 flume -> apache-flume-1.6.0-cdh5.5.4-bin/
  lrwxrwxrwx 1 hadoop hadoop 22 Apr 23 09:08 hadoop -> hadoop-2.6.0-cdh5.5.4/
  drwxr-xr-x 18 hadoop hadoop 4096 Apr 23 09:50 hadoop-2.6.0-cdh5.5.4
  lrwxrwxrwx 1 hadoop hadoop 21 Apr 23 16:55 hbase -> hbase-1.0.0-cdh5.5.4/
  drwxr-xr-x 27 hadoop hadoop 4096 Apr 23 17:27 hbase-1.0.0-cdh5.5.4
  lrwxrwxrwx 1 hadoop hadoop 20 Apr 23 15:27 hive -> hive-1.1.0-cdh5.5.4/
  drwxr-xr-x 10 hadoop hadoop 4096 Apr 26 2016 hive-1.1.0-cdh5.5.4
  lrwxrwxrwx. 1 hadoop hadoop 12 Apr 22 22:27 jdk -> jdk1.7.0_79/
  drwxr-xr-x. 8 hadoop hadoop 4096 Apr 11 2015 jdk1.7.0_79
  -rw-r--r-- 1 hadoop hadoop 15773865 Mar 20 10:24 kafka_2.11-0.8.2.2.tgz
  lrwxrwxrwx. 1 hadoop hadoop 25 Apr 22 22:49 zookeeper -> zookeeper-3.4.5-cdh5.5.4/
  drwxr-xr-x. 16 hadoop hadoop 4096 Apr 23 08:57 zookeeper-3.4.5-cdh5.5.4
  [hadoop@hadoop1 ~]$ tar -zxvf kafka_2.11-0.8.2.2.tgz

  Do the same on hadoop2 and hadoop3; not repeated here.

  [hadoop@hadoop1 ~]$ ll
  total 32
  drwxr-xr-x 8 hadoop hadoop 4096 Apr 26 2016 apache-flume-1.6.0-cdh5.5.4-bin
  drwxrwxr-x 5 hadoop hadoop 4096 Apr 23 18:34 data
  lrwxrwxrwx 1 hadoop hadoop 32 Apr 23 18:14 flume -> apache-flume-1.6.0-cdh5.5.4-bin/
  lrwxrwxrwx 1 hadoop hadoop 22 Apr 23 09:08 hadoop -> hadoop-2.6.0-cdh5.5.4/
  drwxr-xr-x 18 hadoop hadoop 4096 Apr 23 09:50 hadoop-2.6.0-cdh5.5.4
  lrwxrwxrwx 1 hadoop hadoop 21 Apr 23 16:55 hbase -> hbase-1.0.0-cdh5.5.4/
  drwxr-xr-x 27 hadoop hadoop 4096 Apr 23 17:27 hbase-1.0.0-cdh5.5.4
  lrwxrwxrwx 1 hadoop hadoop 20 Apr 23 15:27 hive -> hive-1.1.0-cdh5.5.4/
  drwxr-xr-x 10 hadoop hadoop 4096 Apr 26 2016 hive-1.1.0-cdh5.5.4
  lrwxrwxrwx. 1 hadoop hadoop 12 Apr 22 22:27 jdk -> jdk1.7.0_79/
  drwxr-xr-x. 8 hadoop hadoop 4096 Apr 11 2015 jdk1.7.0_79
  drwxr-xr-x 5 hadoop hadoop 4096 Sep 3 2015 kafka_2.11-0.8.2.2
  lrwxrwxrwx. 1 hadoop hadoop 25 Apr 22 22:49 zookeeper -> zookeeper-3.4.5-cdh5.5.4/
  drwxr-xr-x. 16 hadoop hadoop 4096 Apr 23 08:57 zookeeper-3.4.5-cdh5.5.4
  [hadoop@hadoop1 ~]$ ln -s kafka_2.11-0.8.2.2/ kafka
  [hadoop@hadoop1 ~]$ ll
  total 32
  drwxr-xr-x 8 hadoop hadoop 4096 Apr 26 2016 apache-flume-1.6.0-cdh5.5.4-bin
  drwxrwxr-x 5 hadoop hadoop 4096 Apr 23 18:34 data
  lrwxrwxrwx 1 hadoop hadoop 32 Apr 23 18:14 flume -> apache-flume-1.6.0-cdh5.5.4-bin/
  lrwxrwxrwx 1 hadoop hadoop 22 Apr 23 09:08 hadoop -> hadoop-2.6.0-cdh5.5.4/
  drwxr-xr-x 18 hadoop hadoop 4096 Apr 23 09:50 hadoop-2.6.0-cdh5.5.4
  lrwxrwxrwx 1 hadoop hadoop 21 Apr 23 16:55 hbase -> hbase-1.0.0-cdh5.5.4/
  drwxr-xr-x 27 hadoop hadoop 4096 Apr 23 17:27 hbase-1.0.0-cdh5.5.4
  lrwxrwxrwx 1 hadoop hadoop 20 Apr 23 15:27 hive -> hive-1.1.0-cdh5.5.4/
  drwxr-xr-x 10 hadoop hadoop 4096 Apr 26 2016 hive-1.1.0-cdh5.5.4
  lrwxrwxrwx. 1 hadoop hadoop 12 Apr 22 22:27 jdk -> jdk1.7.0_79/
  drwxr-xr-x. 8 hadoop hadoop 4096 Apr 11 2015 jdk1.7.0_79
  lrwxrwxrwx 1 hadoop hadoop 19 Apr 23 18:41 kafka -> kafka_2.11-0.8.2.2/
  drwxr-xr-x 5 hadoop hadoop 4096 Sep 3 2015 kafka_2.11-0.8.2.2
  lrwxrwxrwx. 1 hadoop hadoop 25 Apr 22 22:49 zookeeper -> zookeeper-3.4.5-cdh5.5.4/
  drwxr-xr-x. 16 hadoop hadoop 4096 Apr 23 08:57 zookeeper-3.4.5-cdh5.5.4
  [hadoop@hadoop1 ~]$

  Do the same on hadoop2 and hadoop3; not repeated here.

  Environment variables

  [hadoop@hadoop1 ~]$ su root
  Password:
  [root@hadoop1 hadoop]# vim /etc/profile

  Do the same on hadoop2 and hadoop3; not repeated here.

  Append the following to /etc/profile:

  #kafka
  export KAFKA_HOME=/home/hadoop/kafka
  export PATH=$PATH:$KAFKA_HOME/bin

  Do the same on hadoop2 and hadoop3; not repeated here.

  [hadoop@hadoop1 ~]$ su root
  Password:
  [root@hadoop1 hadoop]# vim /etc/profile
  [root@hadoop1 hadoop]# source /etc/profile
  [root@hadoop1 hadoop]#

  Do the same on hadoop2 and hadoop3; not repeated here.

  4.2 Edit the configuration files
  1. On hadoop1, edit config/server.properties

  broker.id=1
  port=9092
  host.name=hadoop1
  log.dirs=/home/kafka-logs
  num.partitions=5
  log.cleaner.enable=false
  offsets.storage=kafka
  dual.commit.enabled=true
  delete.topic.enable=true
  zookeeper.connect=hadoop1:2181,hadoop2:2181,hadoop3:2181

2. On hadoop2, edit config/server.properties

  broker.id=2
  port=9092
  host.name=hadoop2
  log.dirs=/home/kafka-logs
  num.partitions=5
  log.cleaner.enable=false
  offsets.storage=kafka
  dual.commit.enabled=true
  delete.topic.enable=true
  zookeeper.connect=hadoop1:2181,hadoop2:2181,hadoop3:2181

3. On hadoop3, edit config/server.properties

  broker.id=3
  port=9092
  host.name=hadoop3
  log.dirs=/home/kafka-logs
  num.partitions=5
  log.cleaner.enable=false
  offsets.storage=kafka
  dual.commit.enabled=true
  delete.topic.enable=true
  zookeeper.connect=hadoop1:2181,hadoop2:2181,hadoop3:2181
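  The three files differ only in broker.id and host.name, so they can also be rendered from a single template instead of being edited by hand on each host. A minimal sketch, assuming a hypothetical server.properties.template in which those two values are written as @ID@ and @HOST@ placeholders (names introduced here for illustration, not a Kafka convention):

```shell
#!/bin/sh
# Sketch: render per-host server.properties files from one shared template.
# @ID@ / @HOST@ are hypothetical placeholder tokens used in the template.
TEMPLATE="${TEMPLATE:-server.properties.template}"
OUT_DIR="${OUT_DIR:-.}"

for id in 1 2 3; do
  host="hadoop${id}"
  # Substitute both placeholders and write one file per broker.
  sed -e "s/@ID@/${id}/g" -e "s/@HOST@/${host}/g" \
      "$TEMPLATE" > "${OUT_DIR}/server.properties.${host}"
done
```

  Each rendered file can then be copied to its host as config/server.properties, which keeps the shared settings (log.dirs, zookeeper.connect, and so on) guaranteed identical across the cluster.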

Create the log directory

  mkdir -p /home/kafka-logs

  Run this on hadoop1, hadoop2, and hadoop3.

  [hadoop@hadoop1 kafka]$ mkdir -p /home/kafka-logs
  mkdir: cannot create directory `/home/kafka-logs': Permission denied
  [hadoop@hadoop1 kafka]$ su root
  Password:
  [root@hadoop1 kafka]# mkdir -p /home/kafka-logs
  [root@hadoop1 kafka]# chown -R hadoop:hadoop /home/kafka-logs
  [root@hadoop1 kafka]# cd /home/
  [root@hadoop1 home]# ll
  total 8
  drwx------. 26 hadoop hadoop 4096 Apr 23 21:18 hadoop
  drwxr-xr-x 2 hadoop hadoop 4096 Apr 23 21:18 kafka-logs
  [root@hadoop1 home]#

  Note: in CDH builds the distributor has already fixed the following problem for us; with the Apache build you may need to fix it yourself.

  First, fix kafka's "Unrecognized VM option 'UseCompressedOops'" error:

  vi /home/hadoop/kafka/bin/kafka-run-class.sh

  if [ -z "$KAFKA_JVM_PERFORMANCE_OPTS" ]; then
    KAFKA_JVM_PERFORMANCE_OPTS="-server -XX:+UseCompressedOops -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled -XX:+CMSScavengeBeforeRemark -XX:+DisableExplicitGC -Djava.awt.headless=true"
  fi

Remove -XX:+UseCompressedOops from that line and the error goes away.
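  Instead of editing the file by hand, the flag can be stripped with sed. A minimal sketch, assuming the kafka-run-class.sh path used above; the original is kept as a .bak backup:

```shell
#!/bin/sh
# Sketch: remove the -XX:+UseCompressedOops flag from kafka-run-class.sh
# in place, keeping a ".bak" copy of the original file.
SCRIPT="${SCRIPT:-/home/hadoop/kafka/bin/kafka-run-class.sh}"

# Delete the flag wherever it appears; the first pattern also eats the
# trailing space so the option string stays cleanly single-spaced.
sed -i.bak 's/-XX:+UseCompressedOops //g; s/-XX:+UseCompressedOops//g' "$SCRIPT"

grep -q 'UseCompressedOops' "$SCRIPT" || echo "flag removed from $SCRIPT"
```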

  [hadoop@hadoop1 bin]$ pwd
  /home/hadoop/kafka/bin
  [hadoop@hadoop1 bin]$ ll
  total 80
  -rwxr-xr-x 1 hadoop hadoop 943 Sep 3 2015 kafka-console-consumer.sh
  -rwxr-xr-x 1 hadoop hadoop 942 Sep 3 2015 kafka-console-producer.sh
  -rwxr-xr-x 1 hadoop hadoop 870 Sep 3 2015 kafka-consumer-offset-checker.sh
  -rwxr-xr-x 1 hadoop hadoop 946 Sep 3 2015 kafka-consumer-perf-test.sh
  -rwxr-xr-x 1 hadoop hadoop 860 Sep 3 2015 kafka-mirror-maker.sh
  -rwxr-xr-x 1 hadoop hadoop 884 Sep 3 2015 kafka-preferred-replica-election.sh
  -rwxr-xr-x 1 hadoop hadoop 946 Sep 3 2015 kafka-producer-perf-test.sh
  -rwxr-xr-x 1 hadoop hadoop 872 Sep 3 2015 kafka-reassign-partitions.sh
  -rwxr-xr-x 1 hadoop hadoop 866 Sep 3 2015 kafka-replay-log-producer.sh
  -rwxr-xr-x 1 hadoop hadoop 872 Sep 3 2015 kafka-replica-verification.sh
  -rwxr-xr-x 1 hadoop hadoop 4185 Sep 3 2015 kafka-run-class.sh
  -rwxr-xr-x 1 hadoop hadoop 1333 Sep 3 2015 kafka-server-start.sh
  -rwxr-xr-x 1 hadoop hadoop 891 Sep 3 2015 kafka-server-stop.sh
  -rwxr-xr-x 1 hadoop hadoop 868 Sep 3 2015 kafka-simple-consumer-shell.sh
  -rwxr-xr-x 1 hadoop hadoop 861 Sep 3 2015 kafka-topics.sh
  drwxr-xr-x 2 hadoop hadoop 4096 Sep 3 2015 windows
  -rwxr-xr-x 1 hadoop hadoop 1370 Sep 3 2015 zookeeper-server-start.sh
  -rwxr-xr-x 1 hadoop hadoop 875 Sep 3 2015 zookeeper-server-stop.sh
  -rwxr-xr-x 1 hadoop hadoop 968 Sep 3 2015 zookeeper-shell.sh
  [hadoop@hadoop1 bin]$ vim kafka-run-class.sh

  Here, with the CDH setup described above, no manual fix was needed.

  4.3 Start Kafka

  In the kafka directory on each of the three machines, run:

  nohup bin/kafka-server-start.sh config/server.properties &

  [hadoop@hadoop1 kafka]$ pwd
  /home/hadoop/kafka
  [hadoop@hadoop1 kafka]$ nohup bin/kafka-server-start.sh config/server.properties &
  [1] 15091
  [hadoop@hadoop1 kafka]$ nohup: ignoring input and appending output to `nohup.out'

  [1]+ Exit 1 nohup bin/kafka-server-start.sh config/server.properties
  [hadoop@hadoop1 kafka]$

  When the nohup notice appears, just press Enter to get the shell prompt back.

Check the status

  cat nohup.out

  [2017-04-23 21:23:35,304] INFO Client environment:java.vendor=Oracle Corporation (org.apache.zookeeper.ZooKeeper)
  [2017-04-23 21:23:35,304] INFO Client environment:java.home=/home/hadoop/jdk1.7.0_79/jre (org.apache.zookeeper.ZooKeeper)
  [2017-04-23 21:23:35,304] INFO Client environment:java.class.path=.:/home/hadoop/jdk/lib/dt.jar:/home/hadoop/jdk/lib/tools.jar:/home/hadoop/kafka/bin/../core/build/dependant-libs-2.10.4*/*.jar:/home/hadoop/kafka/bin/../examples/build/libs//kafka-examples*.jar:/home/hadoop/kafka/bin/../contrib/hadoop-consumer/build/libs//kafka-hadoop-consumer*.jar:/home/hadoop/kafka/bin/../contrib/hadoop-producer/build/libs//kafka-hadoop-producer*.jar:/home/hadoop/kafka/bin/../clients/build/libs/kafka-clients*.jar:/home/hadoop/kafka/bin/../libs/jopt-simple-3.2.jar:/home/hadoop/kafka/bin/../libs/kafka_2.11-0.8.2.2.jar:/home/hadoop/kafka/bin/../libs/kafka_2.11-0.8.2.2-javadoc.jar:/home/hadoop/kafka/bin/../libs/kafka_2.11-0.8.2.2-scaladoc.jar:/home/hadoop/kafka/bin/../libs/kafka_2.11-0.8.2.2-sources.jar:/home/hadoop/kafka/bin/../libs/kafka_2.11-0.8.2.2-test.jar:/home/hadoop/kafka/bin/../libs/kafka-clients-0.8.2.2.jar:/home/hadoop/kafka/bin/../libs/log4j-1.2.16.jar:/home/hadoop/kafka/bin/../libs/lz4-1.2.0.jar:/home/hadoop/kafka/bin/../libs/metrics-core-2.2.0.jar:/home/hadoop/kafka/bin/../libs/scala-library-2.11.5.jar:/home/hadoop/kafka/bin/../libs/scala-parser-combinators_2.11-1.0.2.jar:/home/hadoop/kafka/bin/../libs/scala-xml_2.11-1.0.2.jar:/home/hadoop/kafka/bin/../libs/slf4j-api-1.7.6.jar:/home/hadoop/kafka/bin/../libs/slf4j-log4j12-1.6.1.jar:/home/hadoop/kafka/bin/../libs/snappy-java-1.1.1.7.jar:/home/hadoop/kafka/bin/../libs/zkclient-0.3.jar:/home/hadoop/kafka/bin/../libs/zookeeper-3.4.6.jar:/home/hadoop/kafka/bin/../core/build/libs/kafka_2.10*.jar (org.apache.zookeeper.ZooKeeper)
  [2017-04-23 21:23:35,304] INFO Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
  [2017-04-23 21:23:35,304] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
  [2017-04-23 21:23:35,304] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)
  [2017-04-23 21:23:35,305] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
  [2017-04-23 21:23:35,305] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
  [2017-04-23 21:23:35,305] INFO Client environment:os.version=2.6.32-431.el6.x86_64 (org.apache.zookeeper.ZooKeeper)
  [2017-04-23 21:23:35,305] INFO Client environment:user.name=hadoop (org.apache.zookeeper.ZooKeeper)
  [2017-04-23 21:23:35,305] INFO Client environment:user.home=/home/hadoop (org.apache.zookeeper.ZooKeeper)
  [2017-04-23 21:23:35,305] INFO Client environment:user.dir=/home/hadoop/kafka_2.11-0.8.2.2 (org.apache.zookeeper.ZooKeeper)
  [2017-04-23 21:23:35,308] INFO Initiating client connection, connectString=hadoop1:2181,hadoop2:2181,hadoop3:2181 sessionTimeout=6000 watcher=org.I0Itec.zkclient.ZkClient@31dbfb7a (org.apache.zookeeper.ZooKeeper)
  [2017-04-23 21:23:35,391] INFO Opening socket connection to server hadoop1/192.168.80.121:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
  [2017-04-23 21:23:35,410] INFO Socket connection established to hadoop1/192.168.80.121:2181, initiating session (org.apache.zookeeper.ClientCnxn)
  [2017-04-23 21:23:35,530] INFO Session establishment complete on server hadoop1/192.168.80.121:2181, sessionid = 0x15b99fea1e10012, negotiated timeout = 6000 (org.apache.zookeeper.ClientCnxn)
  [2017-04-23 21:23:35,542] INFO zookeeper state changed (SyncConnected) (org.I0Itec.zkclient.ZkClient)
  [2017-04-23 21:23:36,009] INFO Loading logs. (kafka.log.LogManager)
  [2017-04-23 21:23:36,069] INFO Logs loading complete. (kafka.log.LogManager)
  [2017-04-23 21:23:36,072] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
  [2017-04-23 21:23:36,110] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
  [2017-04-23 21:23:36,333] INFO Awaiting socket connections on hadoop1:9092. (kafka.network.Acceptor)
  [2017-04-23 21:23:36,340] INFO [Socket Server on Broker 1], Started (kafka.network.SocketServer)
  [hadoop@hadoop1 kafka]$

  On hadoop2, nohup.out ends the same way (the environment and classpath lines, identical in form to the hadoop1 output above, are omitted here):

  [2017-04-23 21:25:35,884] INFO Session establishment complete on server hadoop3/192.168.80.123:2181, sessionid = 0x35b99ff49bf0012, negotiated timeout = 6000 (org.apache.zookeeper.ClientCnxn)
  [2017-04-23 21:25:37,798] INFO Awaiting socket connections on hadoop2:9092. (kafka.network.Acceptor)
  [2017-04-23 21:25:37,803] INFO [Socket Server on Broker 2], Started (kafka.network.SocketServer)
  [2017-04-23 21:25:42,084] INFO Registered broker 2 at path /brokers/ids/2 with address hadoop2:9092. (kafka.utils.ZkUtils$)
  [2017-04-23 21:25:42,520] INFO [Kafka Server 2], started (kafka.server.KafkaServer)
  [hadoop@hadoop2 kafka]$

  Likewise on hadoop3 (environment and classpath lines omitted):

  [2017-04-23 21:26:11,370] INFO Session establishment complete on server hadoop3/192.168.80.123:2181, sessionid = 0x35b99ff49bf0013, negotiated timeout = 6000 (org.apache.zookeeper.ClientCnxn)
  [2017-04-23 21:26:14,277] INFO Awaiting socket connections on hadoop3:9092. (kafka.network.Acceptor)
  [2017-04-23 21:26:14,623] INFO [Socket Server on Broker 3], Started (kafka.network.SocketServer)
  [2017-04-23 21:26:19,961] INFO Registered broker 3 at path /brokers/ids/3 with address hadoop3:9092. (kafka.utils.ZkUtils$)
  [2017-04-23 21:26:20,172] INFO [Kafka Server 3], started (kafka.server.KafkaServer)
  [hadoop@hadoop3 kafka]$

  At this point, the 3-node kafka_2.11-0.8.2.2.tgz cluster is up and running!

Further reading

Reference examples for kafka's server.properties configuration file (illustrated, several variants)
