I. Single Node

  1. Upload the Kafka installation package to the Linux host (CentOS 7 is used here).

  2. Extract the package and edit config/server.properties; a minimal example follows the sub-steps below.

    2.1 Set broker.id

      

    2.2 Set log.dirs

      

    2.3 Set zookeeper.connect

      
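    These three properties go into config/server.properties. A minimal single-node sketch is shown below; the broker ID, the log directory path, and the ZooKeeper port are example values to adjust for your own environment:

      # config/server.properties -- minimal single-node example
      # unique integer ID for this broker
      broker.id=0
      # directory where Kafka stores partition data (example path)
      log.dirs=/usr/local/soft/kafka/logs
      # ZooKeeper connection string, host:port
      zookeeper.connect=master:2181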

  3. Start the ZooKeeper ensemble

    

    

    

    Note: while the ZooKeeper ensemble is starting up, the nodes that come up first may report "not running" simply because too few nodes have joined yet; this is normal and goes away once all nodes have been started.
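    On each ZooKeeper node, the ensemble can be started and checked with zkServer.sh (the installation path below is an example; use your own):

      /usr/local/soft/zookeeper/bin/zkServer.sh start      # start this node
      /usr/local/soft/zookeeper/bin/zkServer.sh status     # reports Mode: leader or follower once a quorum is up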

  4. Start the Kafka service

    Run (from Kafka's bin directory): ./kafka-server-start.sh ../config/server.properties &
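    The trailing & runs the broker as a background job of the current shell, so its startup log still prints to the terminal, as shown below. To run it fully detached instead, kafka-server-start.sh also accepts a -daemon flag (output then goes to the logs directory rather than the terminal):

      ./kafka-server-start.sh -daemon ../config/server.properties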

    Startup log:

[root@master bin]# ./kafka-server-start.sh ../config/server.properties &
[]
[root@master bin]# [-- ::,] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
[-- ::,] INFO starting (kafka.server.KafkaServer)
[-- ::,] INFO Connecting to zookeeper on master: (kafka.server.KafkaServer)
[-- ::,] INFO [ZooKeeperClient] Initializing a new session to master:. (kafka.zookeeper.ZooKeeperClient)
[-- ::,] INFO Client environment:zookeeper.version=3.4.-2d71af4dbe22557fda74f9a9b4309b15a7487f03, built on // : GMT (org.apache.zookeeper.ZooKeeper)
[-- ::,] INFO Client environment:host.name=master (org.apache.zookeeper.ZooKeeper)
[-- ::,] INFO Client environment:java.version=1.8.0_172 (org.apache.zookeeper.ZooKeeper)
[-- ::,] INFO Client environment:java.vendor=Oracle Corporation (org.apache.zookeeper.ZooKeeper)
[-- ::,] INFO Client environment:java.home=/usr/local/soft/jdk1..0_172/jre (org.apache.zookeeper.ZooKeeper)
[-- ::,] INFO Client environment:java.class.path=.:/usr/local/soft/jdk1..0_172/jre/lib/rt.jar:/usr/local/soft/jdk1..0_172/lib/dt.jar:/usr/local/soft/jdk1..0_172/lib/tools.jar:/usr/local/soft/kafka_2.-2.1./bin/../libs/activation-1.1..jar:/usr/local/soft/kafka_2.-2.1./bin/../libs/aopalliance-repackaged-2.5.-b42.jar:/usr/local/soft/kafka_2.-2.1./bin/../libs/argparse4j-0.7..jar:/usr/local/soft/kafka_2.-2.1./bin/../libs/audience-annotations-0.5..jar:/usr/local/soft/kafka_2.-2.1./bin/../libs/commons-lang3-3.8..jar:/usr/local/soft/kafka_2.-2.1./bin/../libs/connect-api-2.1..jar:/usr/local/soft/kafka_2.-2.1./bin/../libs/connect-basic-auth-extension-2.1..jar:/usr/local/soft/kafka_2.-2.1./bin/../libs/connect-file-2.1..jar:/usr/local/soft/kafka_2.-2.1./bin/../libs/connect-json-2.1..jar:/usr/local/soft/kafka_2.-2.1./bin/../libs/connect-runtime-2.1..jar:/usr/local/soft/kafka_2.-2.1./bin/../libs/connect-transforms-2.1..jar:/usr/local/soft/kafka_2.-2.1./bin/../libs/guava-20.0.jar:/usr/local/soft/kafka_2.-2.1./bin/../libs/hk2-api-2.5.-b42.jar:/usr/local/soft/kafka_2.-2.1./bin/../libs/hk2-locator-2.5.-b42.jar:/usr/local/soft/kafka_2.-2.1./bin/../libs/hk2-utils-2.5.-b42.jar:/usr/local/soft/kafka_2.-2.1./bin/../libs/jackson-annotations-2.9..jar:/usr/local/soft/kafka_2.-2.1./bin/../libs/jackson-core-2.9..jar:/usr/local/soft/kafka_2.-2.1./bin/../libs/jackson-databind-2.9..jar:/usr/local/soft/kafka_2.-2.1./bin/../libs/jackson-jaxrs-base-2.9..jar:/usr/local/soft/kafka_2.-2.1./bin/../libs/jackson-jaxrs-json-provider-2.9..jar:/usr/local/soft/kafka_2.-2.1./bin/../libs/jackson-module-jaxb-annotations-2.9..jar:/usr/local/soft/kafka_2.-2.1./bin/../libs/javassist-3.22.-CR2.jar:/usr/local/soft/kafka_2.-2.1./bin/../libs/javax.annotation-api-1.2.jar:/usr/local/soft/kafka_2.-2.1./bin/../libs/javax.inject-.jar:/usr/local/soft/kafka_2.-2.1./bin/../libs/javax.inject-2.5.-b42.jar:/usr/local/soft/kafka_2.-2.1./bin/../libs/javax.servlet-api-3.1..jar:/usr/local/soft/kafka_2.-2.1./bin/../libs/javax.ws.rs-api-2.1..jar:/usr/local/soft/kafka_2.-2.1./bin/../libs/javax.ws.rs-api-2.1.jar:/usr/local/soft/kafka_2.-2.1./bin/../libs/jaxb-api-2.3..jar:/usr/local/soft/kafka_2.-2.1./bin/../libs/jersey-client-2.27.jar:/usr/local/soft/kafka_2.-2.1./bin/../libs/jersey-common-2.27.jar:/usr/local/soft/kafka_2.-2.1./bin/../libs/jersey-container-servlet-2.27.jar:/usr/local/soft/kafka_2.-2.1./bin/../libs/jersey-container-servlet-core-2.27.jar:/usr/local/soft/kafka_2.-2.1./bin/../libs/jersey-hk2-2.27.jar:/usr/local/soft/kafka_2.-2.1./bin/../libs/jersey-media-jaxb-2.27.jar:/usr/local/soft/kafka_2.-2.1./bin/../libs/jersey-server-2.27.jar:/usr/local/soft/kafka_2.-2.1./bin/../libs/jetty-client-9.4..v20180830.jar:/usr/local/soft/kafka_2.-2.1./bin/../libs/jetty-continuation-9.4..v20180830.jar:/usr/local/soft/kafka_2.-2.1./bin/../libs/jetty-http-9.4..v20180830.jar:/usr/local/soft/kafka_2.-2.1./bin/../libs/jetty-io-9.4..v20180830.jar:/usr/local/soft/kafka_2.-2.1./bin/../libs/jetty-security-9.4..v20180830.jar:/usr/local/soft/kafka_2.-2.1./bin/../libs/jetty-server-9.4..v20180830.jar:/usr/local/soft/kafka_2.-2.1./bin/../libs/jetty-servlet-9.4..v20180830.jar:/usr/local/soft/kafka_2.-2.1./bin/../libs/jetty-servlets-9.4..v20180830.jar:/usr/local/soft/kafka_2.-2.1./bin/../libs/jetty-util-9.4..v20180830.jar:/usr/local/soft/kafka_2.-2.1./bin/../libs/jopt-simple-5.0..jar:/usr/local/soft/kafka_2.-2.1./bin/../libs/kafka_2.-2.1..jar:/usr/local/soft/kafka_2.-2.1./bin/../libs/kafka_2.-2.1.-sources.jar:/usr/local/soft/kafka_2.-2.1./bin/../libs/kafka-clients-2.1..jar:/usr/local/
soft/kafka_2.-2.1./bin/../libs/kafka-log4j-appender-2.1..jar:/usr/local/soft/kafka_2.-2.1./bin/../libs/kafka-streams-2.1..jar:/usr/local/soft/kafka_2.-2.1./bin/../libs/kafka-streams-examples-2.1..jar:/usr/local/soft/kafka_2.-2.1./bin/../libs/kafka-streams-scala_2.-2.1..jar:/usr/local/soft/kafka_2.-2.1./bin/../libs/kafka-streams-test-utils-2.1..jar:/usr/local/soft/kafka_2.-2.1./bin/../libs/kafka-tools-2.1..jar:/usr/local/soft/kafka_2.-2.1./bin/../libs/log4j-1.2..jar:/usr/local/soft/kafka_2.-2.1./bin/../libs/lz4-java-1.5..jar:/usr/local/soft/kafka_2.-2.1./bin/../libs/maven-artifact-3.6..jar:/usr/local/soft/kafka_2.-2.1./bin/../libs/metrics-core-2.2..jar:/usr/local/soft/kafka_2.-2.1./bin/../libs/osgi-resource-locator-1.0..jar:/usr/local/soft/kafka_2.-2.1./bin/../libs/plexus-utils-3.1..jar:/usr/local/soft/kafka_2.-2.1./bin/../libs/reflections-0.9..jar:/usr/local/soft/kafka_2.-2.1./bin/../libs/rocksdbjni-5.14..jar:/usr/local/soft/kafka_2.-2.1./bin/../libs/scala-library-2.11..jar:/usr/local/soft/kafka_2.-2.1./bin/../libs/scala-logging_2.-3.9..jar:/usr/local/soft/kafka_2.-2.1./bin/../libs/scala-reflect-2.11..jar:/usr/local/soft/kafka_2.-2.1./bin/../libs/slf4j-api-1.7..jar:/usr/local/soft/kafka_2.-2.1./bin/../libs/slf4j-log4j12-1.7..jar:/usr/local/soft/kafka_2.-2.1./bin/../libs/snappy-java-1.1.7.2.jar:/usr/local/soft/kafka_2.-2.1./bin/../libs/validation-api-1.1..Final.jar:/usr/local/soft/kafka_2.-2.1./bin/../libs/zkclient-0.11.jar:/usr/local/soft/kafka_2.-2.1./bin/../libs/zookeeper-3.4..jar:/usr/local/soft/kafka_2.-2.1./bin/../libs/zstd-jni-1.3.-.jar (org.apache.zookeeper.ZooKeeper)
[-- ::,] INFO Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
[-- ::,] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
[-- ::,] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)
[-- ::,] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
[-- ::,] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
[-- ::,] INFO Client environment:os.version=3.10.-.el7.x86_64 (org.apache.zookeeper.ZooKeeper)
[-- ::,] INFO Client environment:user.name=root (org.apache.zookeeper.ZooKeeper)
[-- ::,] INFO Client environment:user.home=/root (org.apache.zookeeper.ZooKeeper)
[-- ::,] INFO Client environment:user.dir=/usr/local/soft/kafka_2.-2.1./bin (org.apache.zookeeper.ZooKeeper)
[-- ::,] INFO Initiating client connection, connectString=master: sessionTimeout= watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@351d00c0 (org.apache.zookeeper.ZooKeeper)
[-- ::,] INFO Opening socket connection to server master/192.168.245.136:. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[-- ::,] INFO [ZooKeeperClient] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
[-- ::,] INFO Socket connection established to master/192.168.245.136:, initiating session (org.apache.zookeeper.ClientCnxn)
[-- ::,] INFO Session establishment complete on server master/192.168.245.136:, sessionid = 0x1000023437d0000, negotiated timeout = (org.apache.zookeeper.ClientCnxn)
[-- ::,] INFO [ZooKeeperClient] Connected. (kafka.zookeeper.ZooKeeperClient)
[-- ::,] INFO Cluster ID = 1AkrnNRhRiW9PWHA77R9lA (kafka.server.KafkaServer)
[-- ::,] WARN No meta.properties file under dir /usr/local/soft/kafka_2.-2.1./logs/meta.properties (kafka.server.BrokerMetadataCheckpoint)
[-- ::,] INFO KafkaConfig values:
advertised.host.name = null
advertised.listeners = null
advertised.port = null
alter.config.policy.class.name = null
alter.log.dirs.replication.quota.window.num =
alter.log.dirs.replication.quota.window.size.seconds =
authorizer.class.name =
auto.create.topics.enable = true
auto.leader.rebalance.enable = true
background.threads =
broker.id =
broker.id.generation.enable = true
broker.rack = null
client.quota.callback.class = null
compression.type = producer
connection.failed.authentication.delay.ms =
connections.max.idle.ms =
controlled.shutdown.enable = true
controlled.shutdown.max.retries =
controlled.shutdown.retry.backoff.ms =
controller.socket.timeout.ms =
create.topic.policy.class.name = null
default.replication.factor =
delegation.token.expiry.check.interval.ms =
delegation.token.expiry.time.ms =
delegation.token.master.key = null
delegation.token.max.lifetime.ms =
delete.records.purgatory.purge.interval.requests =
delete.topic.enable = true
fetch.purgatory.purge.interval.requests =
group.initial.rebalance.delay.ms =
group.max.session.timeout.ms =
group.min.session.timeout.ms =
host.name =
inter.broker.listener.name = null
inter.broker.protocol.version = 2.1-IV2
kafka.metrics.polling.interval.secs =
kafka.metrics.reporters = []
leader.imbalance.check.interval.seconds =
leader.imbalance.per.broker.percentage =
listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
listeners = null
log.cleaner.backoff.ms =
log.cleaner.dedupe.buffer.size =
log.cleaner.delete.retention.ms =
log.cleaner.enable = true
log.cleaner.io.buffer.load.factor = 0.9
log.cleaner.io.buffer.size =
log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
log.cleaner.min.cleanable.ratio = 0.5
log.cleaner.min.compaction.lag.ms =
log.cleaner.threads =
log.cleanup.policy = [delete]
log.dir = /tmp/kafka-logs
log.dirs = /usr/local/soft/kafka_2.-2.1./logs
log.flush.interval.messages =
log.flush.interval.ms = null
log.flush.offset.checkpoint.interval.ms =
log.flush.scheduler.interval.ms =
log.flush.start.offset.checkpoint.interval.ms =
log.index.interval.bytes =
log.index.size.max.bytes =
log.message.downconversion.enable = true
log.message.format.version = 2.1-IV2
log.message.timestamp.difference.max.ms =
log.message.timestamp.type = CreateTime
log.preallocate = false
log.retention.bytes = -
log.retention.check.interval.ms =
log.retention.hours =
log.retention.minutes = null
log.retention.ms = null
log.roll.hours =
log.roll.jitter.hours =
log.roll.jitter.ms = null
log.roll.ms = null
log.segment.bytes =
log.segment.delete.delay.ms =
max.connections.per.ip =
max.connections.per.ip.overrides =
max.incremental.fetch.session.cache.slots =
message.max.bytes =
metric.reporters = []
metrics.num.samples =
metrics.recording.level = INFO
metrics.sample.window.ms =
min.insync.replicas =
num.io.threads =
num.network.threads =
num.partitions =
num.recovery.threads.per.data.dir =
num.replica.alter.log.dirs.threads = null
num.replica.fetchers =
offset.metadata.max.bytes =
offsets.commit.required.acks = -
offsets.commit.timeout.ms =
offsets.load.buffer.size =
offsets.retention.check.interval.ms =
offsets.retention.minutes =
offsets.topic.compression.codec =
offsets.topic.num.partitions =
offsets.topic.replication.factor =
offsets.topic.segment.bytes =
password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
password.encoder.iterations =
password.encoder.key.length =
password.encoder.keyfactory.algorithm = null
password.encoder.old.secret = null
password.encoder.secret = null
port =
principal.builder.class = null
producer.purgatory.purge.interval.requests =
queued.max.request.bytes = -
queued.max.requests =
quota.consumer.default =
quota.producer.default =
quota.window.num =
quota.window.size.seconds =
replica.fetch.backoff.ms =
replica.fetch.max.bytes =
replica.fetch.min.bytes =
replica.fetch.response.max.bytes =
replica.fetch.wait.max.ms =
replica.high.watermark.checkpoint.interval.ms =
replica.lag.time.max.ms =
replica.socket.receive.buffer.bytes =
replica.socket.timeout.ms =
replication.quota.window.num =
replication.quota.window.size.seconds =
request.timeout.ms =
reserved.broker.max.id =
sasl.client.callback.handler.class = null
sasl.enabled.mechanisms = [GSSAPI]
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin =
sasl.kerberos.principal.to.local.rules = [DEFAULT]
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds =
sasl.login.refresh.min.period.seconds =
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism.inter.broker.protocol = GSSAPI
sasl.server.callback.handler.class = null
security.inter.broker.protocol = PLAINTEXT
socket.receive.buffer.bytes =
socket.request.max.bytes =
socket.send.buffer.bytes =
ssl.cipher.suites = []
ssl.client.auth = none
ssl.enabled.protocols = [TLSv1., TLSv1., TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.abort.timed.out.transaction.cleanup.interval.ms =
transaction.max.timeout.ms =
transaction.remove.expired.transaction.cleanup.interval.ms =
transaction.state.log.load.buffer.size =
transaction.state.log.min.isr =
transaction.state.log.num.partitions =
transaction.state.log.replication.factor =
transaction.state.log.segment.bytes =
transactional.id.expiration.ms =
unclean.leader.election.enable = false
zookeeper.connect = master:
zookeeper.connection.timeout.ms =
zookeeper.max.in.flight.requests =
zookeeper.session.timeout.ms =
zookeeper.set.acl = false
zookeeper.sync.time.ms =
(kafka.server.KafkaConfig)
[-- ::,] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[-- ::,] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[-- ::,] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[-- ::,] INFO Loading logs. (kafka.log.LogManager)
[-- ::,] INFO Logs loading complete in ms. (kafka.log.LogManager)
[-- ::,] INFO Starting log cleanup with a period of ms. (kafka.log.LogManager)
[-- ::,] INFO Starting log flusher with a default period of ms. (kafka.log.LogManager)
[-- ::,] INFO Awaiting socket connections on 0.0.0.0:. (kafka.network.Acceptor)
[-- ::,] INFO [SocketServer brokerId=] Started acceptor threads (kafka.network.SocketServer)
[-- ::,] INFO [ExpirationReaper--Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[-- ::,] INFO [ExpirationReaper--Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[-- ::,] INFO [ExpirationReaper--DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[-- ::,] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
[-- ::,] INFO Creating /brokers/ids/ (is it secure? false) (kafka.zk.KafkaZkClient)
[-- ::,] INFO Result of znode creation at /brokers/ids/ is: OK (kafka.zk.KafkaZkClient)
[-- ::,] INFO Registered broker at path /brokers/ids/ with addresses: ArrayBuffer(EndPoint(master,,ListenerName(PLAINTEXT),PLAINTEXT)) (kafka.zk.KafkaZkClient)
[-- ::,] WARN No meta.properties file under dir /usr/local/soft/kafka_2.-2.1./logs/meta.properties (kafka.server.BrokerMetadataCheckpoint)
[-- ::,] INFO [ExpirationReaper--topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[-- ::,] INFO [ExpirationReaper--Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[-- ::,] INFO [ExpirationReaper--Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[-- ::,] INFO Successfully created /controller_epoch with initial epoch (kafka.zk.KafkaZkClient)
[-- ::,] INFO [GroupCoordinator ]: Starting up. (kafka.coordinator.group.GroupCoordinator)
[-- ::,] INFO [GroupCoordinator ]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
[-- ::,] INFO [GroupMetadataManager brokerId=] Removed expired offsets in milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[-- ::,] INFO [ProducerId Manager ]: Acquired new producerId block (brokerId:,blockStartProducerId:,blockEndProducerId:) by writing to Zk with path version (kafka.coordinator.transaction.ProducerIdManager)
[-- ::,] INFO [TransactionCoordinator id=] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
[-- ::,] INFO [TransactionCoordinator id=] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
[-- ::,] INFO [Transaction Marker Channel Manager ]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
[-- ::,] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
[-- ::,] INFO [SocketServer brokerId=] Started processors for acceptors (kafka.network.SocketServer)
[-- ::,] INFO Kafka version : 2.1. (org.apache.kafka.common.utils.AppInfoParser)
[-- ::,] INFO Kafka commitId : 21234bee31165527 (org.apache.kafka.common.utils.AppInfoParser)
[-- ::,] INFO [KafkaServer id=] started (kafka.server.KafkaServer)

    Port check:

    

    Here 9092 is Kafka's listening port and 2181 is the port used by ZooKeeper.
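    A quick way to confirm both ports are listening (ss -tunlp works the same way on newer systems):

      netstat -tunlp | grep -E '2181|9092'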

  5. Single-node connectivity test

    5.1 Start a producer

      
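      A console producer can be started from Kafka's bin directory. The broker address master:9092 and the topic name test are example values; with auto.create.topics.enable=true (as in the config above), producing to a new topic creates it automatically:

        ./kafka-console-producer.sh --broker-list master:9092 --topic test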

    5.2 Start a consumer

      
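      In a second terminal, a console consumer can read the same topic (same assumed broker address and topic name):

        ./kafka-console-consumer.sh --bootstrap-server master:9092 --topic test --from-beginning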

    5.3 Test

      The producer sends data:

      

      The consumer receives the data:

      
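      For example, typing a line into the producer terminal should make the same line appear in the consumer terminal almost immediately, which confirms single-node connectivity:

        > hello kafka        <-- typed in the producer terminal
        hello kafka          <-- printed by the consumer terminal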

II. Kafka Cluster Setup

  To be continued...
