For this project we need a message queue. I chose ActiveMQ, which is relatively simple to use. This post focuses on deploying the environment.

0. Server environment

RedHat7
10.90.7.2
10.90.7.10
10.90.2.102

1. Download and install ZooKeeper

Download: https://mirrors.tuna.tsinghua.edu.cn/apache/zookeeper/zookeeper-3.3.6/zookeeper-3.3.6.tar.gz

For ZooKeeper we run three instances on a single machine, i.e. a pseudo-cluster. A real cluster spread over three machines would work just as well.
The three instances are installed on the 7.10 server (10.90.7.10).

Unpack ZooKeeper:

[root@localhost zookeeper-3.3.6]# pwd
/opt/amq/zookeeper-3.3.6

Then make three copies of zookeeper-3.3.6, named zk1, zk2 and zk3:

[root@localhost amq]# ll
total
drwxr-xr-x root root zk1
drwxr-xr-x root root zk2
drwxr-xr-x root root zk3
drwxr-xr-x www  www  zookeeper-3.3.6
-rw-r--r-- root root zookeeper-3.3.6.tar.gz

Create a data directory under each of zk1, zk2 and zk3.

Then edit the configuration file of each instance.

[root@localhost conf]# pwd
/opt/amq/zk1/conf
[root@localhost conf]# mv zoo_sample.cfg zoo.cfg
[root@localhost conf]# ll
total
-rw-r--r-- root root configuration.xsl
-rw-r--r-- root root log4j.properties
-rw-r--r-- root root zoo.cfg

Edit zk1's zoo.cfg to contain the following:

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
dataDir=/opt/amq/zk1/data
# the port at which the clients will connect
clientPort=2181
# peer port and election port; the three instances share one host,
# so every server.N line needs its own pair
server.1=10.90.7.10:2888:3888
server.2=10.90.7.10:2889:3889
server.3=10.90.7.10:2890:3890

Edit zk2's zoo.cfg to contain the following:

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
dataDir=/opt/amq/zk2/data
# the port at which the clients will connect
clientPort=2182
server.1=10.90.7.10:2888:3888
server.2=10.90.7.10:2889:3889
server.3=10.90.7.10:2890:3890

Edit zk3's zoo.cfg to contain the following:

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
dataDir=/opt/amq/zk3/data
# the port at which the clients will connect
clientPort=2183
server.1=10.90.7.10:2888:3888
server.2=10.90.7.10:2889:3889
server.3=10.90.7.10:2890:3890

One more step: in the data directory of zk1, zk2 and zk3 (the directory we created above), create a file named myid whose content is the digit from the matching server.N line in zoo.cfg: 1, 2 or 3.
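The copy, data-directory, and myid steps above can be scripted. A minimal sketch, assuming the layout used in this walkthrough (the function name and the base-directory argument are illustrative):

```shell
# Sketch of the per-instance setup described above.
setup_zk_instances() {
  local base="$1"                                # e.g. /opt/amq
  for i in 1 2 3; do
    cp -r "$base/zookeeper-3.3.6" "$base/zk$i"   # one copy per instance
    mkdir -p "$base/zk$i/data"                   # snapshot/txn-log directory
    echo "$i" > "$base/zk$i/data/myid"           # must match server.N in zoo.cfg
  done
}

# On the server: setup_zk_instances /opt/amq
```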

[root@localhost data]# pwd
/opt/amq/zk1/data
[root@localhost data]# ll
total
-rw-r--r-- root root myid
drwxr-xr-x root root version-2
-rw-r--r-- root root zookeeper_server.pid

Finally, go to the bin directory of zk1, zk2 and zk3 and start each ZooKeeper instance. For example, starting zk1:

[root@localhost bin]# ./zkServer.sh
JMX enabled by default
Using config: /opt/amq/zk1/bin/../conf/zoo.cfg
Usage: ./zkServer.sh {start|start-foreground|stop|restart|status|upgrade|print-cmd}
[root@localhost bin]# ./zkServer.sh start

At this point the three-node ZooKeeper cluster is up.

If the three instances run on three different machines instead, the only difference is one part of zoo.cfg, namely these lines:
server.A=B:C:D
where
A is a number identifying the server;
B is the server's IP address;
C is the port this server uses to exchange data with the cluster's Leader;
D is the port used for leader election: if the Leader dies, the servers talk to each other over this port to elect a new one.
In a pseudo-cluster, B is identical for every instance, so each ZooKeeper instance must be given its own distinct C and D port numbers.
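As a concrete example, a hypothetical zoo.cfg for a true three-machine ensemble built from the three servers listed at the top (paths and ports are illustrative; with distinct hosts, C and D may be the same on every machine):

```
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/opt/amq/zookeeper/data
clientPort=2181
server.1=10.90.7.2:2888:3888
server.2=10.90.7.10:2888:3888
server.3=10.90.2.102:2888:3888
```

The same file is deployed on all three machines; only each machine's data/myid content (1, 2 or 3) differs.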

The log of a ZooKeeper instance that started normally looks like this:

-- ::, - INFO  [QuorumPeer:/:::::::::Learner@] - Getting a snapshot from leader
-- ::, - INFO [QuorumPeer:/:::::::::Learner@] - Setting leader epoch
-- ::, - INFO [QuorumPeer:/:::::::::FileTxnSnapLog@] - Snapshotting:
-- ::, - INFO [WorkerReceiver Thread:FastLeaderElection@] - Notification: (n.leader), (n.zxid), (n.round), LOOKING (n.state), (n.sid), FOLLOWING (my state)
-- ::, - INFO [WorkerReceiver Thread:FastLeaderElection@] - Notification: (n.leader), (n.zxid), (n.round), LOOKING (n.state), (n.sid), FOLLOWING (my state)
-- ::, - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0::NIOServerCnxn$Factory@] - Accepted socket connection from /10.90.7.10:
-- ::, - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0::NIOServerCnxn@] - Client attempting to establish new session at /10.90.7.10:
-- ::, - WARN [QuorumPeer:/:::::::::Follower@] - Got zxid 0x100000001 expected 0x1
-- ::, - INFO [SyncThread::FileTxnLog@] - Creating new log file: log.
-- ::, - INFO [CommitProcessor::NIOServerCnxn@] - Established session 0x15d68b1dbf90000 with negotiated timeout for client /10.90.7.10:
-- ::, - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0::NIOServerCnxn@] - Closed socket connection for client /10.90.7.10: which had sessionid 0x15d68b1dbf90000
-- ::, - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0::NIOServerCnxn$Factory@] - Accepted socket connection from /10.90.7.10:
-- ::, - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0::NIOServerCnxn@] - Client attempting to establish new session at /10.90.7.10:
-- ::, - INFO [CommitProcessor::NIOServerCnxn@] - Established session 0x15d68b1dbf90001 with negotiated timeout for client /10.90.7.10:
-- ::, - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0::NIOServerCnxn@] - Closed socket connection for client /10.90.7.10: which had sessionid 0x15d68b1dbf90001
-- ::, - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0::NIOServerCnxn$Factory@] - Accepted socket connection from /10.90.7.10:
-- ::, - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0::NIOServerCnxn@] - Client attempting to establish new session at /10.90.7.10:
-- ::, - INFO [CommitProcessor::NIOServerCnxn@] - Established session 0x15d68b1dbf90002 with negotiated timeout for client /10.90.7.10:
-- ::, - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0::NIOServerCnxn$Factory@] - Accepted socket connection from /10.90.2.102:
-- ::, - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0::NIOServerCnxn@] - Client attempting to establish new session at /10.90.2.102:
-- ::, - INFO [CommitProcessor::NIOServerCnxn@] - Established session 0x15d68b1dbf90003 with negotiated timeout for client /10.90.2.102:
-- ::, - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0::NIOServerCnxn@] - Closed socket connection for client /10.90.2.102: which had sessionid 0x15d68b1dbf90003
-- ::, - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0::NIOServerCnxn$Factory@] - Accepted socket connection from /10.90.2.102:
-- ::, - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0::NIOServerCnxn@] - Client attempting to establish new session at /10.90.2.102:
-- ::, - INFO [CommitProcessor::NIOServerCnxn@] - Established session 0x15d68b1dbf90004 with negotiated timeout for client /10.90.2.102:
-- ::, - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0::NIOServerCnxn@] - EndOfStreamException: Unable to read additional data from client sessionid 0x15d68b1dbf90002, likely client has closed socket
-- ::, - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0::NIOServerCnxn@] - Closed socket connection for client /10.90.7.10: which had sessionid 0x15d68b1dbf90002
-- ::, - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0::NIOServerCnxn@] - EndOfStreamException: Unable to read additional data from client sessionid 0x15d68b1dbf90004, likely client has closed socket
-- ::, - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0::NIOServerCnxn@] - Closed socket connection for client /10.90.2.102: which had sessionid 0x15d68b1dbf90004
-- ::, - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0::NIOServerCnxn$Factory@] - Accepted socket connection from /10.90.7.2:
-- ::, - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0::NIOServerCnxn@] - Client attempting to establish new session at /10.90.7.2:
-- ::, - INFO [CommitProcessor::NIOServerCnxn@] - Established session 0x15d68b1dbf90005 with negotiated timeout for client /10.90.7.2:

Now run ./zkCli.sh:

[root@localhost bin]# ./zkCli.sh
Connecting to localhost:
-- ::, - INFO [main:Environment@] - Client environment:zookeeper.version=3.3.-, built on // : GMT
-- ::, - INFO [main:Environment@] - Client environment:host.name=localhost
-- ::, - INFO [main:Environment@] - Client environment:java.version=1.8.0_121
-- ::, - INFO [main:Environment@] - Client environment:java.vendor=Oracle Corporation
-- ::, - INFO [main:Environment@] - Client environment:java.home=/usr/java/jdk1..0_121/jre
-- ::, - INFO [main:Environment@] - Client environment:java.class.path=/opt/amq/zk1/bin/../build/classes:/opt/amq/zk1/bin/../build/lib/*.jar:/opt/amq/zk1/bin/../zookeeper-3.3.6.jar:/opt/amq/zk1/bin/../lib/log4j-1.2.15.jar:/opt/amq/zk1/bin/../lib/jline-0.9.94.jar:/opt/amq/zk1/bin/../src/java/lib/*.jar:/opt/amq/zk1/bin/../conf:.:/usr/java/jdk1.8.0_121/lib/dt.jar:/usr/java/jdk1.8.0_121/lib/tools.jar
2017-07-22 16:46:49,281 - INFO [main:Environment@97] - Client environment:java.library.path=/home/torch/install/lib:/usr/local/cudnn:/usr/local/cuda-8.0/lib64::/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2017-07-22 16:46:49,281 - INFO [main:Environment@97] - Client environment:java.io.tmpdir=/tmp
2017-07-22 16:46:49,281 - INFO [main:Environment@97] - Client environment:java.compiler=<NA>
2017-07-22 16:46:49,281 - INFO [main:Environment@97] - Client environment:os.name=Linux
2017-07-22 16:46:49,281 - INFO [main:Environment@97] - Client environment:os.arch=amd64
2017-07-22 16:46:49,281 - INFO [main:Environment@97] - Client environment:os.version=3.10.0-229.el7.x86_64
2017-07-22 16:46:49,282 - INFO [main:Environment@97] - Client environment:user.name=root
2017-07-22 16:46:49,282 - INFO [main:Environment@97] - Client environment:user.home=/root
2017-07-22 16:46:49,282 - INFO [main:Environment@97] - Client environment:user.dir=/opt/amq/zk1/bin
2017-07-22 16:46:49,283 - INFO [main:ZooKeeper@379] - Initiating client connection, connectString=localhost:2181 sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher@49c2faae
Welcome to ZooKeeper!
2017-07-22 16:46:49,295 - INFO [main-SendThread():ClientCnxn$SendThread@1058] - Opening socket connection to server localhost/0:0:0:0:0:0:0:1:2181
JLine support is enabled
2017-07-22 16:46:49,359 - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@947] - Socket connection established to localhost/0:0:0:0:0:0:0:1:2181, initiating session
2017-07-22 16:46:49,370 - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@736] - Session establishment complete on server localhost/0:0:0:0:0:0:0:1:2181, sessionid = 0x15d68b1dbf90006, negotiated timeout = 30000 WATCHER:: WatchedEvent state:SyncConnected type:None path:null
[zk: localhost:2181(CONNECTED) 0] info
ZooKeeper -server host:port cmd args
stat path [watch]
set path data [version]
ls path [watch]
delquota [-n|-b] path
ls2 path [watch]
setAcl path acl
setquota -n|-b val path
history
redo cmdno
printwatches on|off
delete path [version]
sync path
listquota path
get path [watch]
create [-s] [-e] path data acl
addauth scheme auth
quit
getAcl path
close
connect host:port
[zk: localhost:2181(CONNECTED) 2] ls /
[activemq, zookeeper]
[zk: localhost:2181(CONNECTED) 3] ls /activemq
[leveldb-stores]
[zk: localhost:2181(CONNECTED) 4] ls /zookeeper
[quota]
[zk: localhost:2181(CONNECTED) 5] ls /zookeeper/quota
[]
[zk: localhost:2181(CONNECTED) 6]

2. Download and install ActiveMQ

Download: http://archive.apache.org/dist/activemq/5.14.3/apache-activemq-5.14.3-bin.tar.gz

This part is simpler. I install ActiveMQ on three machines:
10.90.7.2
10.90.7.10
10.90.2.102
It is similar to a Tomcat application; the admin console is in fact a Jetty web application.
The main file to edit is activemq.xml.

Unpack apache-activemq-5.14.3-bin.tar.gz and rename the directory to mq1 (on 10.90.7.10), mq2 (on 10.90.7.2) and mq3 (on 10.90.2.102).
The configuration below uses mq1 as the example:

[root@localhost amq]# pwd
/opt/amq
[root@localhost amq]# ll
total
drwxr-xr-x root root apache-activemq-5.14.3
-rw-r--r-- root root apache-activemq-5.14.3-bin.tar.gz
drwxr-xr-x root root mq1
drwxr-xr-x root root zk1
drwxr-xr-x root root zk2
drwxr-xr-x root root zk3
drwxr-xr-x www  www  zookeeper-3.3.6
-rw-r--r-- root root zookeeper-3.3.6.tar.gz

Enter the mq1 directory and edit activemq.xml with vim. Our ActiveMQ cluster is based on ZooKeeper, so we do not want the default persistence scheme. That is, comment out the original

<persistenceAdapter>
    <kahaDB directory="${activemq.data}/kahadb"/>
</persistenceAdapter>

and replace it with the following:

<persistenceAdapter>
    <replicatedLevelDB
        directory="${activemq.data}/leveldb"
        replicas="3"
        bind="tcp://0.0.0.0:0"
        zkAddress="10.90.7.10:2181,10.90.7.10:2182,10.90.7.10:2183"
        hostname="10.90.7.10"
        sync="local_disk"
        zkPath="/activemq/leveldb-stores"/>
</persistenceAdapter>

Here, hostname must be set to the IP address of the machine the broker runs on (or a resolvable domain name). zkAddress is the address of the ZooKeeper cluster: the IP:port pair of each instance, separated by commas. zkPath is the znode we designate for the stores, which is why it shows up in the ls output of zkCli.sh above.
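On the other two brokers the same block is reused with only hostname changed to the local machine's address; for example, on mq2 (10.90.7.2) it would look like this (same zkAddress and zkPath, since all three brokers share one ZooKeeper cluster and znode):

```xml
<persistenceAdapter>
    <replicatedLevelDB
        directory="${activemq.data}/leveldb"
        replicas="3"
        bind="tcp://0.0.0.0:0"
        zkAddress="10.90.7.10:2181,10.90.7.10:2182,10.90.7.10:2183"
        hostname="10.90.7.2"
        sync="local_disk"
        zkPath="/activemq/leveldb-stores"/>
</persistenceAdapter>
```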

Also, brokerName on the broker element must be identical across all three instances. Here I set it to tkcss.
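A small helper for checking this (an illustrative sketch, not an ActiveMQ tool): extract brokerName from an activemq.xml so the three instances can be compared quickly.

```shell
# Print the brokerName attribute of the given activemq.xml (sed-based sketch).
broker_name() {
  sed -n 's/.*brokerName="\([^"]*\)".*/\1/p' "$1"
}

# e.g. on each host: broker_name /opt/amq/mq1/conf/activemq.xml
# All three must print the same value (tkcss in this setup).
```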

...
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="tkcss" dataDirectory="${activemq.data}">

    <destinationPolicy>
        <policyMap>
            <policyEntries>
                <policyEntry topic=">" >
                    <!-- The constantPendingMessageLimitStrategy is used to prevent
                         slow topic consumers to block producers and affect other consumers
                         by limiting the number of messages that are retained
                         For more information, see: http://activemq.apache.org/slow-consumer-handling.html -->
                    <pendingMessageLimitStrategy>
                        <constantPendingMessageLimitStrategy limit="1000"/>
                    </pendingMessageLimitStrategy>
                </policyEntry>
            </policyEntries>
        </policyMap>
    </destinationPolicy>

    <!--
        The managementContext is used to configure how ActiveMQ is exposed in
        JMX. By default, ActiveMQ uses the MBean server that is started by
        the JVM. For more information, see: http://activemq.apache.org/jmx.html
    -->
    <managementContext>
        <managementContext createConnector="false"/>
    </managementContext>

    <!--
        Configure message persistence for the broker. The default persistence
        mechanism is the KahaDB store (identified by the kahaDB tag).
        For more information, see: http://activemq.apache.org/persistence.html
    -->
    <!--
    <persistenceAdapter>
        <kahaDB directory="${activemq.data}/kahadb"/>
    </persistenceAdapter>
    -->
    <persistenceAdapter>
        <replicatedLevelDB
            directory="${activemq.data}/leveldb"
            replicas="3"
            bind="tcp://0.0.0.0:0"
            zkAddress="10.90.7.10:2181,10.90.7.10:2182,10.90.7.10:2183"
            hostname="10.90.7.10"
            sync="local_disk"
            zkPath="/activemq/leveldb-stores"/>
    </persistenceAdapter>

...

Once configured, start ActiveMQ:

[root@localhost bin]# ./activemq start

Here is a normal startup log, taken from amq3:

-- ::, | INFO  | Refreshing org.apache.activemq.xbean.XBeanBrokerFactory$@3e993445: startup date [Sat Jul  :: CST ]; root of context hierarchy | org.apache.activemq.xbean.XBeanBrokerFactory$ | main
-- ::, | INFO | Using Persistence Adapter: Replicated LevelDB[/opt/amq/mq3/data/leveldb, 10.90.7.10:,10.90.7.10:,10.90.7.10://activemq/leveldb-stores] | org.apache.activemq.broker.BrokerService | main
-- ::, | INFO | Starting StateChangeDispatcher | org.apache.activemq.leveldb.replicated.groups.ZKClient | ZooKeeper state change dispatcher thread
-- ::, | INFO | Client environment:zookeeper.version=3.4.-, built on // : GMT | org.apache.zookeeper.ZooKeeper | main
-- ::, | INFO | Client environment:host.name=localhost | org.apache.zookeeper.ZooKeeper | main
-- ::, | INFO | Client environment:java.version=1.7.0_75 | org.apache.zookeeper.ZooKeeper | main
-- ::, | INFO | Client environment:java.vendor=Oracle Corporation | org.apache.zookeeper.ZooKeeper | main
-- ::, | INFO | Client environment:java.home=/usr/lib/jvm/java-1.7.-openjdk-1.7.0.75-2.5.4.2.el7_0.x86_64/jre | org.apache.zookeeper.ZooKeeper | main
-- ::, | INFO | Client environment:java.class.path=/opt/amq/mq3//bin/activemq.jar | org.apache.zookeeper.ZooKeeper | main
-- ::, | INFO | Client environment:java.library.path=/usr/local/cuda-7.5/lib64::/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib | org.apache.zookeeper.ZooKeeper | main
-- ::, | INFO | Client environment:java.io.tmpdir=/opt/amq/mq3//tmp | org.apache.zookeeper.ZooKeeper | main
-- ::, | INFO | Client environment:java.compiler=<NA> | org.apache.zookeeper.ZooKeeper | main
-- ::, | INFO | Client environment:os.name=Linux | org.apache.zookeeper.ZooKeeper | main
-- ::, | INFO | Client environment:os.arch=amd64 | org.apache.zookeeper.ZooKeeper | main
-- ::, | INFO | Client environment:os.version=3.10.-.el7.x86_64 | org.apache.zookeeper.ZooKeeper | main
-- ::, | INFO | Client environment:user.name=root | org.apache.zookeeper.ZooKeeper | main
-- ::, | INFO | Client environment:user.home=/root | org.apache.zookeeper.ZooKeeper | main
-- ::, | INFO | Client environment:user.dir=/opt/amq/mq3/bin | org.apache.zookeeper.ZooKeeper | main
-- ::, | INFO | Initiating client connection, connectString=10.90.7.10:,10.90.7.10:,10.90.7.10: sessionTimeout= watcher=org.apache.activemq.leveldb.replicated.groups.ZKClient@5bb2dc75 | org.apache.zooke
eper.ZooKeeper | main
-- ::, | WARN | SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/opt/amq/mq3//conf/login.config'. Will
continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it. | org.apache.zookeeper.ClientCnxn | main-SendThread(10.90.7.10:)
-- ::, | INFO | Opening socket connection to server 10.90.7.10/10.90.7.10: | org.apache.zookeeper.ClientCnxn | main-SendThread(10.90.7.10:)
-- ::, | WARN | unprocessed event state: AuthFailed | org.apache.activemq.leveldb.replicated.groups.ZKClient | main-EventThread
-- ::, | INFO | Socket connection established to 10.90.7.10/10.90.7.10:, initiating session | org.apache.zookeeper.ClientCnxn | main-SendThread(10.90.7.10:)
-- ::, | WARN | Connected to an old server; r-o mode will be unavailable | org.apache.zookeeper.ClientCnxnSocket | main-SendThread(10.90.7.10:)
-- ::, | INFO | Session establishment complete on server 10.90.7.10/10.90.7.10:, sessionid = 0x35d68b2a0f10004, negotiated timeout = | org.apache.zookeeper.ClientCnxn | main-SendThread(10.90.7.10:)
-- ::, | INFO | Using the pure java LevelDB implementation. | org.apache.activemq.leveldb.LevelDBClient | ActiveMQ BrokerService[tkcss] Task-
-- ::, | INFO | Attaching to master: tcp://10.90.7.10:50942 | org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | ActiveMQ BrokerService[tkcss] Task-1
-- ::, | INFO | Slave started | org.apache.activemq.leveldb.replicated.MasterElector | ActiveMQ BrokerService[tkcss] Task-
-- ::, | INFO | Slave skipping download of: log/.log | org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | hawtdispatch-DEFAULT-
-- ::, | INFO | Slave requested: .index/CURRENT | org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | hawtdispatch-DEFAULT-
-- ::, | INFO | Slave requested: .index/.log | org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | hawtdispatch-DEFAULT-
-- ::, | INFO | Slave requested: .index/MANIFEST- | org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | hawtdispatch-DEFAULT-
-- ::, | INFO | Attaching... Downloaded 0.02/1.66 kb and / files | org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | hawtdispatch-DEFAULT-
-- ::, | INFO | Attaching... Downloaded 1.61/1.66 kb and / files | org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | hawtdispatch-DEFAULT-
-- ::, | INFO | Attaching... Downloaded 1.66/1.66 kb and / files | org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | hawtdispatch-DEFAULT-
-- ::, | INFO | Attached | org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | hawtdispatch-DEFAULT-

Finally, try killing the current master broker; amq3's log then shows a new master election:

-- ::, | WARN  | Unexpected session error: java.io.IOException: Connection reset by peer | org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | hawtdispatch-DEFAULT-
-- ::, | INFO | Using the pure java LevelDB implementation. | org.apache.activemq.leveldb.LevelDBClient | hawtdispatch-DEFAULT-
-- ::, | INFO | Attaching to master: tcp://10.90.7.10:50942 | org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | hawtdispatch-DEFAULT-1
-- ::, | WARN | Unexpected session error: java.net.ConnectException: Connection refused | org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | hawtdispatch-DEFAULT-
-- ::, | INFO | Using the pure java LevelDB implementation. | org.apache.activemq.leveldb.LevelDBClient | hawtdispatch-DEFAULT-
-- ::, | INFO | Attaching to master: tcp://10.90.7.10:50942 | org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | hawtdispatch-DEFAULT-1
-- ::, | WARN | Unexpected session error: java.net.ConnectException: Connection refused | org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | hawtdispatch-DEFAULT-
-- ::, | INFO | Using the pure java LevelDB implementation. | org.apache.activemq.leveldb.LevelDBClient | hawtdispatch-DEFAULT-
-- ::, | INFO | Attaching to master: tcp://10.90.7.10:50942 | org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | hawtdispatch-DEFAULT-1
-- ::, | WARN | Unexpected session error: java.net.ConnectException: Connection refused | org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | hawtdispatch-DEFAULT-
-- ::, | INFO | Using the pure java LevelDB implementation. | org.apache.activemq.leveldb.LevelDBClient | hawtdispatch-DEFAULT-
-- ::, | INFO | Attaching to master: tcp://10.90.7.10:50942 | org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | hawtdispatch-DEFAULT-1
-- ::, | WARN | Unexpected session error: java.net.ConnectException: Connection refused | org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | hawtdispatch-DEFAULT-
-- ::, | INFO | Using the pure java LevelDB implementation. | org.apache.activemq.leveldb.LevelDBClient | hawtdispatch-DEFAULT-
-- ::, | INFO | Attaching to master: tcp://10.90.7.10:50942 | org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | hawtdispatch-DEFAULT-1
-- ::, | WARN | Unexpected session error: java.net.ConnectException: Connection refused | org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | hawtdispatch-DEFAULT-
-- ::, | INFO | Slave stopped | org.apache.activemq.leveldb.replicated.MasterElector | ActiveMQ BrokerService[tkcss] Task-
-- ::, | INFO | Using the pure java LevelDB implementation. | org.apache.activemq.leveldb.LevelDBClient | ActiveMQ BrokerService[tkcss] Task-
-- ::, | INFO | Attaching to master: tcp://10.90.7.2:2896 | org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | ActiveMQ BrokerService[tkcss] Task-2
-- ::, | INFO | Slave started | org.apache.activemq.leveldb.replicated.MasterElector | ActiveMQ BrokerService[tkcss] Task-
-- ::, | INFO | Slave requested: .index/.log | org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | hawtdispatch-DEFAULT-
-- ::, | INFO | Slave requested: .index/.sst | org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | hawtdispatch-DEFAULT-
-- ::, | INFO | Slave requested: .index/CURRENT | org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | hawtdispatch-DEFAULT-
-- ::, | INFO | Slave requested: .index/MANIFEST- | org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | hawtdispatch-DEFAULT-
-- ::, | INFO | Attaching... Downloaded 5.17/10.18 kb and / files | org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | hawtdispatch-DEFAULT-
-- ::, | INFO | Attaching... Downloaded 9.01/10.18 kb and / files | org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | hawtdispatch-DEFAULT-
-- ::, | INFO | Attaching... Downloaded 10.06/10.18 kb and / files | org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | hawtdispatch-DEFAULT-
-- ::, | INFO | Attaching... Downloaded 10.08/10.18 kb and / files | org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | hawtdispatch-DEFAULT-
-- ::, | INFO | Attaching... Downloaded 10.18/10.18 kb and / files | org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | hawtdispatch-DEFAULT-
-- ::, | INFO | Attached | org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | hawtdispatch-DEFAULT-

At this point, our three-broker ActiveMQ cluster on top of ZooKeeper is fully configured.
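With replicated LevelDB, only the elected master binds its transport connector; the slaves stay dark until they are promoted. So a simple TCP probe of port 61616 on each host (a sketch using the bash /dev/tcp feature; hosts and port follow this setup) reveals which node is currently master:

```shell
# Return 0 if a TCP connection to host $1, port $2 succeeds within 2 seconds.
probe() {
  timeout 2 bash -c "echo > /dev/tcp/$1/$2" 2>/dev/null
}

for host in 10.90.7.2 10.90.7.10 10.90.2.102; do
  if probe "$host" 61616; then
    echo "$host: port 61616 open (current master)"
  else
    echo "$host: port 61616 closed (slave or down)"
  fi
done
```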

In an application, the brokerURL can then be configured like this:

brokerURL=failover:(tcp://10.90.7.2:61616,tcp://10.90.7.10:61616,tcp://10.90.2.102:61616)?initialReconnectDelay=1000
userName=admin
password=admin

We keep the system's default user and permission management; those settings live in jetty.xml.

Integrating ActiveMQ with Spring is not covered here; I will discuss it in a follow-up post.
