Later, while reviewing this material, I found this article quite good: https://www.cnblogs.com/z-sm/p/5691760.html

I: Prerequisites

1. Installation requirements

  Java, Scala

  ZooKeeper

  Kafka

2. Version used

  The version used here is 0.8.2.1.

  

  ------------------

  

II: Pseudo-distributed installation

1. Extract the archive

  kafka_2.10-0.8.2.1
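  A minimal sketch of the extraction; the source location /opt/softwares is an assumed, illustrative path (the target /opt/modules matches the log.dirs path used below):

    # extract the Kafka distribution into /opt/modules
    $ tar -zxvf /opt/softwares/kafka_2.10-0.8.2.1.tgz -C /opt/modules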

2. Copy server.properties
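  Since four brokers will run on one machine, make four copies of config/server.properties, one per broker. The file names below are illustrative:

    $ cd /opt/modules/kafka_2.10-0.8.2.1
    $ cp config/server.properties config/server0.properties
    $ cp config/server.properties config/server1.properties
    $ cp config/server.properties config/server2.properties
    $ cp config/server.properties config/server3.properties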

  

3. Modify the four files one by one

  The official documentation notes that three of these configuration items are required: broker.id, log.dirs, and zookeeper.connect.

  

  The main items to configure are:

    broker.id=0 : unique integer ID of this broker

    port=9092   : port the broker listens on

    host.name=linux-hadoop01.ibeifeng.com  : hostname the broker binds to

    log.dirs=/opt/modules/kafka_2.10-0.8.2.1/data/0  : path where Kafka stores its data

    zookeeper.connect=linux-hadoop01.ibeifeng.com:2181/kafka   : connection info for the ZooKeeper ensemble that manages Kafka's metadata

  Four files need to be modified; only the first is shown here.

 # Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0

############################# Socket Server Settings #############################

# The port the socket server listens on
port=9092

# Hostname the broker will bind to. If not set, the server will bind to all interfaces
host.name=linux-hadoop01.ibeifeng.com

# Hostname the broker will advertise to producers and consumers. If not set, it uses the
# value for "host.name" if configured. Otherwise, it will use the value returned from
# java.net.InetAddress.getCanonicalHostName().
#advertised.host.name=<hostname routable by clients>

# The port to publish to ZooKeeper for clients to use. If this is not set,
# it will publish the same port that the broker binds to.
#advertised.port=<port accessible by clients>

# The number of threads handling network requests
num.network.threads=3

# The number of threads doing disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600

############################# Log Basics #############################

# A comma separated list of directories under which to store log files
log.dirs=/opt/modules/kafka_2.10-0.8.2.1/data/0

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log as long as the remaining
# segments don't drop below log.retention.bytes.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

# By default the log cleaner is disabled and the log retention policy will default to just delete segments after their retention expires.
# If log.cleaner.enable=true is set the cleaner will be enabled and individual logs can then be marked for log compaction.
log.cleaner.enable=false

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=linux-hadoop01.ibeifeng.com:2181/kafka

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000
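
  The other three files are identical except for the per-broker values. A sketch of the changed lines, assuming the file naming from step 2 (server2 and server3 follow the same pattern):

    # config/server1.properties -- only these three lines differ
    broker.id=1
    port=9093
    log.dirs=/opt/modules/kafka_2.10-0.8.2.1/data/1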

4. Start ZooKeeper
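  A minimal sketch, assuming a standalone ZooKeeper installation (run from its install directory):

    $ bin/zkServer.sh start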

  

5. Enter zkCli
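  Connect with the client shipped with ZooKeeper, pointing it at the instance configured above:

    $ bin/zkCli.sh -server linux-hadoop01.ibeifeng.com:2181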

  

6. Start Kafka
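  Start one broker per configuration file; a sketch, assuming the four copies from step 2:

    $ cd /opt/modules/kafka_2.10-0.8.2.1
    $ nohup bin/kafka-server-start.sh config/server0.properties > /dev/null 2>&1 &
    # repeat for config/server1.properties, server2.properties and server3.properties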

  

  Check the result with jps:
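  Roughly, jps should show one Kafka process per broker plus the ZooKeeper process (the PIDs below are illustrative):

    $ jps
    3124 QuorumPeerMain
    3456 Kafka
    3478 Kafka
    3501 Kafka
    3523 Kafka
    3600 Jps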

  

7. Now look at zkCli again

  

  Look at the broker ids:
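  Because zookeeper.connect uses the /kafka chroot, registered brokers appear under /kafka/brokers/ids:

    ls /kafka/brokers/ids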

  

  Look at the value for id 0:
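  Still inside zkCli, this prints the JSON registration data for broker 0 (host, port, etc.):

    get /kafka/brokers/ids/0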

  

  

8. Shutdown command
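  The distribution ships a stop script; note that it stops every Kafka broker process running on the machine:

    $ cd /opt/modules/kafka_2.10-0.8.2.1
    $ bin/kafka-server-stop.sh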

  
