1. Install ZooKeeper

# Extract the archive
[root@localhost zookeeper]# tar -zxvf zookeeper-3.4..tar.gz
[root@localhost zookeeper]# mv zookeeper-3.4. zk_simple
# Copy zoo_sample.cfg to zoo.cfg
[root@localhost zookeeper]# cd zk_simple/
[root@localhost zk_simple]# cp conf/zoo_sample.cfg conf/zoo.cfg
# Start the server
[root@localhost zk_simple]# bin/zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /mirana/software/zookeeper/zk_simple/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
# Check the ZooKeeper server's status
[root@localhost zk_simple]# bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /mirana/software/zookeeper/zk_simple/bin/../conf/zoo.cfg
Mode: standalone
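Standalone mode works without any edits because zoo_sample.cfg ships with usable defaults. For reference, the key entries of that sample file are shown below (values are the stock 3.4.x defaults; note that dataDir points at /tmp, which is fine for a throwaway test but is wiped on reboot):

```properties
# conf/zoo_sample.cfg — stock defaults (ZooKeeper 3.4.x)
tickTime=2000            # base time unit (ms) for heartbeats and timeouts
initLimit=10             # ticks a follower may take to connect and sync
syncLimit=5              # ticks a follower may lag behind the leader
dataDir=/tmp/zookeeper   # snapshot/myid directory; move it for real use
clientPort=2181          # port clients (including Kafka) connect to
```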

2. Install Kafka

  2.1 Download and upload kafka_2.12-1.1.0.tgz

  2.2 Extract

[root@localhost software]# tar -zxvf kafka_2.12-1.1.0.tgz
[root@localhost software]# mv kafka_2.12-1.1.0 kafka2./
[root@localhost software]# cd kafka2./
[root@localhost kafka2.]# ll
total
drwxr-xr-x. root root Mar bin
drwxr-xr-x. root root Mar config
drwxr-xr-x. root root May libs
-rw-r--r--. root root Mar LICENSE
drwxr-xr-x. root root May logs
-rw-r--r--. root root Mar NOTICE
drwxr-xr-x. root root Mar site-docs

  2.3 [Single-node, single-broker mode] Start the Kafka server; keep a separate window open to watch the server's log output [Window 1]

To start in the background, discarding all log output to /dev/null:

[root@localhost kafka2.]# bin/kafka-server-start.sh config/server.properties >/dev/null 2>&1 &
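The order of those redirections matters: `>/dev/null 2>&1` first sends stdout to /dev/null and then points stderr at the same place, whereas writing `2>&1` before the file redirect would leave error messages on the terminal. A harmless stand-in command (instead of kafka-server-start.sh) demonstrates the pattern:

```shell
# Both streams are silenced; only the final echo reaches the terminal.
( echo "info: broker starting"; echo "error: something failed" >&2 ) >/dev/null 2>&1
echo "exit status: $?"   # prints: exit status: 0
```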

Alternatively, start in the foreground; the current window then becomes the server's log output window:

[root@localhost kafka2.]# bin/kafka-server-start.sh config/server.properties
INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
INFO starting (kafka.server.KafkaServer)
INFO Connecting to zookeeper on localhost: (kafka.server.KafkaServer)
INFO [ZooKeeperClient] Initializing a new session to localhost:. (kafka.zookeeper.ZooKeeperClient)
INFO Client environment:zookeeper.version=3.4.-39d3a4f269333c922ed3db283be479f9deacaa0f (org.apache.zookeeper.ZooKeeper)
INFO Client environment:host.name=localhost (org.apache.zookeeper.ZooKeeper)
INFO Client environment:java.version=1.8.0_172 (org.apache.zookeeper.ZooKeeper)
INFO Client environment:java.vendor=Oracle Corporation (org.apache.zookeeper.ZooKeeper)
... (remaining client environment entries and the full classpath omitted) ...
INFO Initiating client connection, connectString=localhost: watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$ (org.apache.zookeeper.ZooKeeper)
INFO Opening socket connection to server localhost. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
INFO [ZooKeeperClient] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
INFO Socket connection established to localhost, initiating session (org.apache.zookeeper.ClientCnxn)
INFO Session establishment complete on server localhost, sessionid = 0x100002832db0001 (org.apache.zookeeper.ClientCnxn)
INFO [ZooKeeperClient] Connected. (kafka.zookeeper.ZooKeeperClient)
INFO Cluster ID = MHj0qP-5T-OFVNpn87zSXg (kafka.server.KafkaServer)
INFO KafkaConfig values:
    advertised.host.name = null
    advertised.listeners = null
    auto.create.topics.enable = true
    auto.leader.rebalance.enable = true
    compression.type = producer
    delete.topic.enable = true
    inter.broker.protocol.version = 1.1-IV0
    listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
    log.cleanup.policy = [delete]
    log.dir = /tmp/kafka-logs
    log.dirs = /tmp/kafka-logs
    log.message.format.version = 1.1-IV0
    log.message.timestamp.type = CreateTime
    security.inter.broker.protocol = PLAINTEXT
    unclean.leader.election.enable = false
    zookeeper.connect = localhost:
    zookeeper.set.acl = false
    ... (remaining broker configuration values omitted) ...
 (kafka.server.KafkaConfig)
INFO [ThrottledRequestReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledRequestReaper)
INFO [ThrottledRequestReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledRequestReaper)
INFO [ThrottledRequestReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledRequestReaper)
INFO Loading logs. (kafka.log.LogManager)
INFO [Log partition=Hello-Kafka-, dir=/tmp/kafka-logs] Loading producer state from offset with message format version (kafka.log.Log)
INFO [ProducerStateManager partition=Hello-Kafka-] Loading producer state from snapshot file '/tmp/kafka-logs/Hello-Kafka-0/00000000000000000007.snapshot' (kafka.log.ProducerStateManager)
INFO [Log partition=Hello-Kafka-, dir=/tmp/kafka-logs] Completed load of log with segments, log start offset and log end offset in ms (kafka.log.Log)
INFO [Log partition=kafka_topic02-, dir=/tmp/kafka-logs] Loading producer state from offset with message format version (kafka.log.Log)
INFO [Log partition=kafka_topic02-, dir=/tmp/kafka-logs] Completed load of log with segments, log start offset and log end offset in ms (kafka.log.Log)
INFO Logs loading complete in ms. (kafka.log.LogManager)
INFO Starting log cleanup with a period of ms. (kafka.log.LogManager)
INFO Starting log flusher with a default period of ms. (kafka.log.LogManager)
INFO Awaiting socket connections on 0.0.0.0:. (kafka.network.Acceptor)
INFO [SocketServer brokerId=] Started acceptor threads (kafka.network.SocketServer)
INFO [ExpirationReaper--Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
INFO [ExpirationReaper--Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
INFO [ExpirationReaper--DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
INFO Creating /brokers/ids/ (is it secure? false) (kafka.zk.KafkaZkClient)
INFO Result of znode creation at /brokers/ids/ is: OK (kafka.zk.KafkaZkClient)
INFO Registered broker at path /brokers/ids/ with addresses: ArrayBuffer(EndPoint(localhost,,ListenerName(PLAINTEXT),PLAINTEXT)) (kafka.zk.KafkaZkClient)
INFO [ExpirationReaper--topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
INFO [ExpirationReaper--Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
INFO [ExpirationReaper--Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
INFO Creating /controller (is it secure? false) (kafka.zk.KafkaZkClient)
INFO Result of znode creation at /controller is: OK (kafka.zk.KafkaZkClient)
INFO [GroupCoordinator ]: Starting up. (kafka.coordinator.group.GroupCoordinator)
INFO [GroupCoordinator ]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
INFO [GroupMetadataManager brokerId=] Removed expired offsets in milliseconds. (kafka.coordinator.group.GroupMetadataManager)
INFO [ProducerId Manager ]: Acquired new producerId block by writing to Zk (kafka.coordinator.transaction.ProducerIdManager)
INFO [TransactionCoordinator id=] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
INFO [TransactionCoordinator id=] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
INFO [Transaction Marker Channel Manager ]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
INFO Kafka version : 1.1.0 (org.apache.kafka.common.utils.AppInfoParser)
INFO Kafka commitId : fdcf75ea326b8e07 (org.apache.kafka.common.utils.AppInfoParser)
INFO [KafkaServer id=] started (kafka.server.KafkaServer)
INFO [ReplicaFetcherManager on broker ] Removed fetcher for partitions Hello-Kafka-,kafka_topic02- (kafka.server.ReplicaFetcherManager)
INFO Replica loaded for partition Hello-Kafka- with initial high watermark (kafka.cluster.Replica)
INFO [Partition Hello-Kafka- broker=] Hello-Kafka- starts at Leader Epoch from offset . Previous Leader Epoch was: - (kafka.cluster.Partition)
INFO Replica loaded for partition kafka_topic02- with initial high watermark (kafka.cluster.Replica)
INFO [Partition kafka_topic02- broker=] kafka_topic02- starts at Leader Epoch from offset . Previous Leader Epoch was: - (kafka.cluster.Partition)
INFO [ReplicaAlterLogDirsManager on broker ] Added fetcher for partitions List() (kafka.server.ReplicaAlterLogDirsManager)

  2.4 Check the running ZooKeeper and Kafka services

[root@localhost kafka2.]# jps
Kafka
QuorumPeerMain
Jps

  2.5 Create and list topics

Create two topics and list them after each creation:

[root@localhost kafka2.]# bin/kafka-topics.sh --zookeeper localhost:2181 --create --replication-factor 1 --partitions 1 --topic mytopic01
Created topic "mytopic01".
[root@localhost kafka2.]# bin/kafka-topics.sh --zookeeper localhost:2181 --list
mytopic01
[root@localhost kafka2.]# bin/kafka-topics.sh --zookeeper localhost:2181 --create --replication-factor 1 --partitions 1 --topic mytopic02
Created topic "mytopic02".
[root@localhost kafka2.]# bin/kafka-topics.sh --zookeeper localhost:2181 --list
mytopic01
mytopic02

The Kafka server [Window 1] prints log output like the following:

INFO [ReplicaFetcherManager on broker ] Removed fetcher for partitions mytopic01- (kafka.server.ReplicaFetcherManager)
INFO [Log partition=mytopic01-, dir=/tmp/kafka-logs] Loading producer state from offset with message format version (kafka.log.Log)
INFO [Log partition=mytopic01-, dir=/tmp/kafka-logs] Completed load of log with segments, log start offset and log end offset in ms (kafka.log.Log)
INFO Created log for partition mytopic01- in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 1.1-IV0, cleanup.policy -> [delete], ...}. (kafka.log.LogManager)
INFO [Partition mytopic01- broker=] No checkpointed highwatermark is found for partition mytopic01- (kafka.cluster.Partition)
INFO Replica loaded for partition mytopic01- with initial high watermark (kafka.cluster.Replica)
INFO [Partition mytopic01- broker=] mytopic01- starts at Leader Epoch from offset . Previous Leader Epoch was: - (kafka.cluster.Partition)
INFO [ReplicaAlterLogDirsManager on broker ] Added fetcher for partitions List() (kafka.server.ReplicaAlterLogDirsManager)
INFO [ReplicaFetcherManager on broker ] Removed fetcher for partitions mytopic02- (kafka.server.ReplicaFetcherManager)
INFO [Log partition=mytopic02-, dir=/tmp/kafka-logs] Loading producer state from offset with message format version (kafka.log.Log)
INFO [Log partition=mytopic02-, dir=/tmp/kafka-logs] Completed load of log with segments, log start offset and log end offset in ms (kafka.log.Log)
INFO Created log for partition mytopic02- in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 1.1-IV0, cleanup.policy -> [delete], ...}. (kafka.log.LogManager)
INFO [Partition mytopic02- broker=] No checkpointed highwatermark is found for partition mytopic02- (kafka.cluster.Partition)
INFO Replica loaded for partition mytopic02- with initial high watermark (kafka.cluster.Replica)
INFO [Partition mytopic02- broker=] mytopic02- starts at Leader Epoch from offset . Previous Leader Epoch was: - (kafka.cluster.Partition)
INFO [ReplicaAlterLogDirsManager on broker ] Added fetcher for partitions List() (kafka.server.ReplicaAlterLogDirsManager)
INFO [GroupMetadataManager brokerId=] Removed expired offsets in milliseconds. (kafka.coordinator.group.GroupMetadataManager)

  2.6 Open a new window to watch the messages the consumer receives [Window 2]

[root@localhost kafka2.]# bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic mytopic01 --from-beginning
Using the ConsoleConsumer with old consumer is deprecated and will be removed in a future major release. Consider using the new consumer by passing [bootstrap-server] instead of [zookeeper].

  In the earlier window, run a console producer to simulate a producer; here we send three messages:

[root@localhost kafka2.]# bin/kafka-console-producer.sh --broker-list localhost:9092 --topic mytopic01
>my first msg
>my second msg
>my third msg
>

  The consumer window [Window 2] prints the following:

[root@localhost kafka2.]# bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic mytopic01 --from-beginning
Using the ConsoleConsumer with old consumer is deprecated and will be removed in a future major release. Consider using the new consumer by passing [bootstrap-server] instead of [zookeeper].
my first msg
my second msg
my third msg
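As the deprecation warning above notes, the ZooKeeper-based console consumer is scheduled for removal. The equivalent new-consumer invocation connects to the broker directly; `localhost:9092` here assumes the stock server.properties listener port:

```shell
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic mytopic01 --from-beginning
```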

3. Single-node, multi-broker mode

  3.1 Create multiple Kafka brokers

  Copy server.properties to server1.properties, server2.properties, and server3.properties, then change the following settings:

# server1.properties
broker.id=1
port=9091
log.dirs=/tmp/kafka1-logs

# server2.properties
broker.id=2
port=9092
log.dirs=/tmp/kafka2-logs

# server3.properties
broker.id=3
port=9093
log.dirs=/tmp/kafka3-logs
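Since the three files differ only in those three keys, they can also be generated from the shipped config with sed. This is just a sketch: it assumes you run it from the Kafka root directory, and the stand-in template it writes is only there to make the snippet self-contained; in a real install you would already have the full config/server.properties.

```shell
mkdir -p config
# Stand-in for the shipped config/server.properties (assumption: the real file
# contains these keys among many others); skipped if the real file is present.
[ -f config/server.properties ] || printf '%s\n' \
    'broker.id=0' 'port=9092' 'log.dirs=/tmp/kafka-logs' > config/server.properties

# Clone the base config three times, overriding broker.id, port, and log.dirs.
for i in 1 2 3; do
    sed -e "s/^broker\.id=.*/broker.id=$i/" \
        -e "s/^port=.*/port=909$i/" \
        -e "s|^log\.dirs=.*|log.dirs=/tmp/kafka$i-logs|" \
        config/server.properties > "config/server$i.properties"
done
grep '^broker.id' config/server[123].properties
```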

  3.2 Start the three Kafka brokers (do not start server.properties; start server1.properties, server2.properties, and server3.properties)

[root@localhost kafka2.]# bin/kafka-server-start.sh config/server1.properties
[root@localhost kafka2.]# bin/kafka-server-start.sh config/server2.properties
[root@localhost kafka2.]# bin/kafka-server-start.sh config/server3.properties
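Each of the commands above runs a broker in the foreground and ties up its terminal. If keeping three windows open is inconvenient, `kafka-server-start.sh` also accepts a `-daemon` flag that detaches the process (its output then goes to the logs/ directory instead of the terminal):

```shell
bin/kafka-server-start.sh -daemon config/server1.properties
bin/kafka-server-start.sh -daemon config/server2.properties
bin/kafka-server-start.sh -daemon config/server3.properties
```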

  Check the ZooKeeper and Kafka processes: QuorumPeerMain is the ZooKeeper daemon, and the three Kafka entries are the broker processes

[root@localhost kafka2.]# jps
Kafka
QuorumPeerMain
Kafka
Kafka
Jps

  Create some topics, so that we can later check a topic's state after a broker goes down

# Create three topics with replication factor 3 and a single partition
[root@localhost kafka2.]# bin/kafka-topics.sh --zookeeper localhost:2181 --create --replication-factor 3 --partitions 1 --topic Multibrokerapplication01
Created topic "Multibrokerapplication01".
[root@localhost kafka2.]# bin/kafka-topics.sh --zookeeper localhost:2181 --create --replication-factor 3 --partitions 1 --topic Multibrokerapplication02
Created topic "Multibrokerapplication02".
[root@localhost kafka2.]# bin/kafka-topics.sh --zookeeper localhost:2181 --create --replication-factor 3 --partitions 1 --topic Multibrokerapplication03
Created topic "Multibrokerapplication03".
# Create three topics with replication factor 2 and a single partition
[root@localhost kafka2.]# bin/kafka-topics.sh --zookeeper localhost:2181 --create --replication-factor 2 --partitions 1 --topic Multibrokerapplication04
Created topic "Multibrokerapplication04".
[root@localhost kafka2.]# bin/kafka-topics.sh --zookeeper localhost:2181 --create --replication-factor 2 --partitions 1 --topic Multibrokerapplication05
Created topic "Multibrokerapplication05".
[root@localhost kafka2.]# bin/kafka-topics.sh --zookeeper localhost:2181 --create --replication-factor 2 --partitions 1 --topic Multibrokerapplication06
Created topic "Multibrokerapplication06".
# Create three topics with replication factor 1 and a single partition
[root@localhost kafka2.]# bin/kafka-topics.sh --zookeeper localhost:2181 --create --replication-factor 1 --partitions 1 --topic Multibrokerapplication07
Created topic "Multibrokerapplication07".
[root@localhost kafka2.]# bin/kafka-topics.sh --zookeeper localhost:2181 --create --replication-factor 1 --partitions 1 --topic Multibrokerapplication08
Created topic "Multibrokerapplication08".
[root@localhost kafka2.]# bin/kafka-topics.sh --zookeeper localhost:2181 --create --replication-factor 1 --partitions 1 --topic Multibrokerapplication09
Created topic "Multibrokerapplication09".
# Describe each topic
[root@localhost kafka2.12]# bin/kafka-topics.sh --zookeeper localhost:2181 --describe --topic Multibrokerapplication01
Topic:Multibrokerapplication01  PartitionCount:1  ReplicationFactor:3  Configs:
  Topic: Multibrokerapplication01  Partition: 0  Leader: 1  Replicas: 1,2,3  Isr: 1,2,3
[root@localhost kafka2.12]# bin/kafka-topics.sh --zookeeper localhost:2181 --describe --topic Multibrokerapplication02
Topic:Multibrokerapplication02  PartitionCount:1  ReplicationFactor:3  Configs:
  Topic: Multibrokerapplication02  Partition: 0  Leader: 2  Replicas: 2,3,1  Isr: 2,3,1
[root@localhost kafka2.12]# bin/kafka-topics.sh --zookeeper localhost:2181 --describe --topic Multibrokerapplication03
Topic:Multibrokerapplication03  PartitionCount:1  ReplicationFactor:3  Configs:
  Topic: Multibrokerapplication03  Partition: 0  Leader: 3  Replicas: 3,1,2  Isr: 3,1,2
[root@localhost kafka2.12]# bin/kafka-topics.sh --zookeeper localhost:2181 --describe --topic Multibrokerapplication04
Topic:Multibrokerapplication04  PartitionCount:1  ReplicationFactor:2  Configs:
  Topic: Multibrokerapplication04  Partition: 0  Leader: 1  Replicas: 1,2  Isr: 1,2
[root@localhost kafka2.12]# bin/kafka-topics.sh --zookeeper localhost:2181 --describe --topic Multibrokerapplication05
Topic:Multibrokerapplication05  PartitionCount:1  ReplicationFactor:2  Configs:
  Topic: Multibrokerapplication05  Partition: 0  Leader: 2  Replicas: 2,3  Isr: 2,3
[root@localhost kafka2.12]# bin/kafka-topics.sh --zookeeper localhost:2181 --describe --topic Multibrokerapplication06
Topic:Multibrokerapplication06  PartitionCount:1  ReplicationFactor:2  Configs:
  Topic: Multibrokerapplication06  Partition: 0  Leader: 3  Replicas: 3,1  Isr: 3,1
[root@localhost kafka2.12]# bin/kafka-topics.sh --zookeeper localhost:2181 --describe --topic Multibrokerapplication07
Topic:Multibrokerapplication07  PartitionCount:1  ReplicationFactor:1  Configs:
  Topic: Multibrokerapplication07  Partition: 0  Leader: 1  Replicas: 1  Isr: 1
[root@localhost kafka2.12]# bin/kafka-topics.sh --zookeeper localhost:2181 --describe --topic Multibrokerapplication08
Topic:Multibrokerapplication08  PartitionCount:1  ReplicationFactor:1  Configs:
  Topic: Multibrokerapplication08  Partition: 0  Leader: 2  Replicas: 2  Isr: 2
[root@localhost kafka2.12]# bin/kafka-topics.sh --zookeeper localhost:2181 --describe --topic Multibrokerapplication09
Topic:Multibrokerapplication09  PartitionCount:1  ReplicationFactor:1  Configs:
  Topic: Multibrokerapplication09  Partition: 0  Leader: 3  Replicas: 3  Isr: 3
# Kill the Kafka broker whose broker.id is 3
[root@localhost kafka2.12]# kill -9 4505
# Check the state of each topic again
[root@localhost kafka2.12]# bin/kafka-topics.sh --zookeeper localhost:2181 --describe --topic Multibrokerapplication01
Topic:Multibrokerapplication01  PartitionCount:1  ReplicationFactor:3  Configs:
  Topic: Multibrokerapplication01  Partition: 0  Leader: 1  Replicas: 1,2,3  Isr: 1,2
[root@localhost kafka2.12]# bin/kafka-topics.sh --zookeeper localhost:2181 --describe --topic Multibrokerapplication02
Topic:Multibrokerapplication02  PartitionCount:1  ReplicationFactor:3  Configs:
  Topic: Multibrokerapplication02  Partition: 0  Leader: 2  Replicas: 2,3,1  Isr: 2,1
[root@localhost kafka2.12]# bin/kafka-topics.sh --zookeeper localhost:2181 --describe --topic Multibrokerapplication03
Topic:Multibrokerapplication03  PartitionCount:1  ReplicationFactor:3  Configs:
  Topic: Multibrokerapplication03  Partition: 0  Leader: 1  Replicas: 3,1,2  Isr: 1,2
[root@localhost kafka2.12]# bin/kafka-topics.sh --zookeeper localhost:2181 --describe --topic Multibrokerapplication04
Topic:Multibrokerapplication04  PartitionCount:1  ReplicationFactor:2  Configs:
  Topic: Multibrokerapplication04  Partition: 0  Leader: 1  Replicas: 1,2  Isr: 1,2
[root@localhost kafka2.12]# bin/kafka-topics.sh --zookeeper localhost:2181 --describe --topic Multibrokerapplication05
Topic:Multibrokerapplication05  PartitionCount:1  ReplicationFactor:2  Configs:
  Topic: Multibrokerapplication05  Partition: 0  Leader: 2  Replicas: 2,3  Isr: 2
[root@localhost kafka2.12]# bin/kafka-topics.sh --zookeeper localhost:2181 --describe --topic Multibrokerapplication06
Topic:Multibrokerapplication06  PartitionCount:1  ReplicationFactor:2  Configs:
  Topic: Multibrokerapplication06  Partition: 0  Leader: 1  Replicas: 3,1  Isr: 1
[root@localhost kafka2.12]# bin/kafka-topics.sh --zookeeper localhost:2181 --describe --topic Multibrokerapplication07
Topic:Multibrokerapplication07  PartitionCount:1  ReplicationFactor:1  Configs:
  Topic: Multibrokerapplication07  Partition: 0  Leader: 1  Replicas: 1  Isr: 1
[root@localhost kafka2.12]# bin/kafka-topics.sh --zookeeper localhost:2181 --describe --topic Multibrokerapplication08
Topic:Multibrokerapplication08  PartitionCount:1  ReplicationFactor:1  Configs:
  Topic: Multibrokerapplication08  Partition: 0  Leader: 2  Replicas: 2  Isr: 2
[root@localhost kafka2.12]# bin/kafka-topics.sh --zookeeper localhost:2181 --describe --topic Multibrokerapplication09
Topic:Multibrokerapplication09  PartitionCount:1  ReplicationFactor:1  Configs:
  Topic: Multibrokerapplication09  Partition: 0  Leader: -1  Replicas: 3  Isr: 3
[root@localhost kafka2.12]#
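The describe output above can be checked mechanically: a partition is under-replicated when its Isr list is shorter than its Replicas list. A small awk sketch; the two sample lines below are illustrative, and on a live cluster you would pipe in `bin/kafka-topics.sh --zookeeper localhost:2181 --describe` instead:

```shell
# Flag partitions whose in-sync replica set is smaller than the replica set.
# Sample input mimics the partition detail lines of kafka-topics.sh --describe.
describe_output='Topic: Multibrokerapplication01 Partition: 0 Leader: 1 Replicas: 1,2,3 Isr: 1,2
Topic: Multibrokerapplication07 Partition: 0 Leader: 1 Replicas: 1 Isr: 1'

printf '%s\n' "$describe_output" | awk '
  /Partition:/ {
    # split() returns the number of elements, i.e. the list length
    for (i = 1; i <= NF; i++) {
      if ($i == "Replicas:") r = split($(i + 1), a, ",")
      if ($i == "Isr:")      s = split($(i + 1), b, ",")
    }
    if (s < r) print $2, "partition", $4, "is under-replicated (" s "/" r " in sync)"
  }'
# prints: Multibrokerapplication01 partition 0 is under-replicated (2/3 in sync)
```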

  3.3 Start producers and send messages

  【Multibrokerapplication01】Producing messages via the broker with broker.id 1

[root@localhost kafka2.12]# bin/kafka-console-producer.sh --broker-list localhost:9091 --topic Multibrokerapplication01
>hello
>this is first msg
>this is 9091
>

  Messages consumed from 【Multibrokerapplication01】:

[root@localhost kafka2.12]# bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic Multibrokerapplication01 --from-beginning
Using the ConsoleConsumer with old consumer is deprecated and will be removed in a future major release. Consider using the new consumer by passing [bootstrap-server] instead of [zookeeper].
hello
this is first msg
this is 9091

  【Multibrokerapplication01】Producing messages via the broker with broker.id 2

[root@localhost kafka2.12]# bin/kafka-console-producer.sh --broker-list localhost:9092 --topic Multibrokerapplication01
>hello
>this is second msg
>this is 9092
>

  Messages consumed from 【Multibrokerapplication01】:

[root@localhost kafka2.12]# bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic Multibrokerapplication01 --from-beginning
Using the ConsoleConsumer with old consumer is deprecated and will be removed in a future major release. Consider using the new consumer by passing [bootstrap-server] instead of [zookeeper].
hello
this is first msg
this is 9091
hello
this is second msg
this is 9092

  【Multibrokerapplication01】Producing messages via the broker with broker.id 3 (the broker killed earlier)

[root@localhost kafka2.12]# bin/kafka-console-producer.sh --broker-list localhost:9093 --topic Multibrokerapplication01
>hello
WARN [Producer clientId=console-producer] Connection to node -1 could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
WARN [Producer clientId=console-producer] Connection to node -1 could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
WARN [Producer clientId=console-producer] Connection to node -1 could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
WARN [Producer clientId=console-producer] Connection to node -1 could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)

  No new messages show up on the 【Multibrokerapplication01】consumer side.

  3.4 Alter a topic

# Alter the topic 'Multibrokerapplication02' to 2 partitions; this fails because its
# replication factor (3) is larger than the number of live brokers (broker 3 is still down)
[root@localhost kafka2.12]# bin/kafka-topics.sh --zookeeper localhost:2181 --alter --topic Multibrokerapplication02 --partitions 2
WARNING: If partitions are increased for a topic that has a key, the partition logic or ordering of the messages will be affected
Error while executing topic command : Replication factor: 3 larger than available brokers: 2.
ERROR org.apache.kafka.common.errors.InvalidReplicationFactorException: Replication factor: 3 larger than available brokers: 2.
(kafka.admin.TopicCommand$)
# Restart the Kafka broker with broker.id 3
[root@localhost kafka2.12]# bin/kafka-server-start.sh config/server3.properties >/dev/null 2>&1 &
# Alter the topic 'Multibrokerapplication02' to 2 partitions
[root@localhost kafka2.12]# bin/kafka-topics.sh --zookeeper localhost:2181 --alter --topic Multibrokerapplication02 --partitions 2
WARNING: If partitions are increased for a topic that has a key, the partition logic or ordering of the messages will be affected
Adding partitions succeeded!

# Describe the topic 'Multibrokerapplication02'
[root@localhost kafka2.12]# bin/kafka-topics.sh --zookeeper localhost:2181 --describe --topic Multibrokerapplication02
Topic:Multibrokerapplication02  PartitionCount:2  ReplicationFactor:3  Configs:
  Topic: Multibrokerapplication02  Partition: 0  Leader: 2  Replicas: 2,3,1  Isr: 2,1,3
  Topic: Multibrokerapplication02  Partition: 1  Leader: 3  Replicas: 3,1,2  Isr: 3,1,2
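Whether the `--alter` actually took effect can be verified by counting the partition detail lines in the describe output. A sketch over illustrative sample output (the leader/replica values are made up; on a live cluster pipe in the real `--describe` invocation):

```shell
# Count partition detail lines to confirm the topic now has 2 partitions.
# Sample output stands in for: bin/kafka-topics.sh --zookeeper localhost:2181 --describe --topic Multibrokerapplication02
alter_describe='Topic:Multibrokerapplication02 PartitionCount:2 ReplicationFactor:3 Configs:
Topic: Multibrokerapplication02 Partition: 0 Leader: 2 Replicas: 2,3,1 Isr: 2,1,3
Topic: Multibrokerapplication02 Partition: 1 Leader: 3 Replicas: 3,1,2 Isr: 3,1,2'

# "Partition: " (with the trailing space) matches only the detail lines,
# not the "PartitionCount:" field in the header line.
partitions=$(printf '%s\n' "$alter_describe" | grep -c 'Partition: ')
echo "partitions: $partitions"
# prints: partitions: 2
```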

  3.5 Delete a topic

[root@localhost kafka2.12]# bin/kafka-topics.sh --zookeeper localhost:2181 --delete --topic Multibrokerapplication02
Topic Multibrokerapplication02 is marked for deletion.
Note: This will have no impact if delete.topic.enable is not set to true.
[root@localhost kafka2.12]# bin/kafka-topics.sh --zookeeper localhost:2181 --list
Multibrokerapplication01
Multibrokerapplication03
Multibrokerapplication04
Multibrokerapplication05
Multibrokerapplication06
Multibrokerapplication07
Multibrokerapplication08
Multibrokerapplication09
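Since deletion only takes effect when `delete.topic.enable` is true, it is worth asserting in a script that the topic really disappeared from `--list`. A sketch over a shortened sample list (on a live cluster substitute the real `--list` invocation):

```shell
# Check that a deleted topic no longer appears in the topic list.
# Sample output stands in for: bin/kafka-topics.sh --zookeeper localhost:2181 --list
topic_list='Multibrokerapplication01
Multibrokerapplication03
Multibrokerapplication04'

deleted_topic='Multibrokerapplication02'
# grep -x matches whole lines only, so similar topic names cannot cause a false hit
if printf '%s\n' "$topic_list" | grep -qx "$deleted_topic"; then
  echo "$deleted_topic still exists"
else
  echo "$deleted_topic has been deleted"
fi
# prints: Multibrokerapplication02 has been deleted
```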
