Enough preamble; let's start with an example to build some intuition before getting into the details.

This example comes from the examples bundled with Spark. The basic steps are as follows:

(1) Start a simple TCP server to feed in stream messages (-l listens on the given port, -k keeps the server alive across connections):

$ nc -lk 9999

(2) In a new terminal, run NetworkWordCount, which counts the words typed above and prints the totals:

$ bin/run-example streaming.NetworkWordCount localhost 9999
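(run-example is a helper script shipped with Spark that submits the named example class via spark-submit, using a local master by default.)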

(3) Type some text into the input terminal from step (1); the word counts appear in the terminal from step (2). For example:

Text typed into the first terminal:

hello world again

Output in the second terminal:

-------------------------------------------
Time: 1436758706000 ms
-------------------------------------------
(again,1)
(hello,1)
(world,1)

In short, the example takes text typed by hand, feeds it to Spark Streaming for word counting, and prints the result.

The code:

package org.apache.spark.examples.streaming

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.storage.StorageLevel

/**
 * Counts words in UTF8 encoded, '\n' delimited text received from the network every second.
 *
 * Usage: NetworkWordCount <hostname> <port>
 * <hostname> and <port> describe the TCP server that Spark Streaming would connect to receive data.
 *
 * To run this on your local machine, you need to first run a Netcat server
 *    `$ nc -lk 9999`
 * and then run the example
 *    `$ bin/run-example org.apache.spark.examples.streaming.NetworkWordCount localhost 9999`
 */
object NetworkWordCount {
  def main(args: Array[String]) {
    if (args.length < 2) {
      System.err.println("Usage: NetworkWordCount <hostname> <port>")
      System.exit(1)
    }

    StreamingExamples.setStreamingLogLevels()

    // Create the context with a 1 second batch size
    val sparkConf = new SparkConf().setAppName("NetworkWordCount")
    val ssc = new StreamingContext(sparkConf, Seconds(1))

    // Create a socket stream on target ip:port and count the
    // words in input stream of \n delimited text (eg. generated by 'nc')
    // Note that no duplication in storage level only for running locally.
    // Replication necessary in distributed scenario for fault tolerance.
    val lines = ssc.socketTextStream(args(0), args(1).toInt, StorageLevel.MEMORY_AND_DISK_SER)
    val words = lines.flatMap(_.split(" "))
    val wordCounts = words.map(x => (x, 1)).reduceByKey(_ + _)
    wordCounts.print()
    ssc.start()
    ssc.awaitTermination()
  }
}
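
Two caveats the example itself does not spell out, but which are standard Spark Streaming behavior: socketTextStream creates a receiver that permanently occupies one core, so a local master needs at least two worker threads (e.g. local[2]), otherwise the receiver starves the processing and nothing is ever printed; and print() shows only the first ten elements of each batch.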
 
 
Part 1: Building your own project
This example uses Java and Maven to build a word count application.
1. Create the project, and add the following dependencies to pom.xml:

<dependency>
  <groupId>org.slf4j</groupId>
  <artifactId>slf4j-api</artifactId>
  <version>1.7.0</version>
</dependency>
<dependency>
  <groupId>org.slf4j</groupId>
  <artifactId>slf4j-log4j12</artifactId>
  <version>1.7.0</version>
</dependency>
<dependency>
  <groupId>log4j</groupId>
  <artifactId>log4j</artifactId>
  <version>1.2.17</version>
</dependency>
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-core_2.10</artifactId>
  <version>1.4.0</version>
</dependency>
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-streaming_2.10</artifactId>
  <version>1.4.0</version>
</dependency>
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-streaming-kafka_2.10</artifactId>
  <version>1.4.0</version>
</dependency>
<dependency>
  <groupId>org.apache.kafka</groupId>
  <artifactId>kafka_2.10</artifactId>
  <version>0.8.2.1</version>
</dependency>
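
For reference, these dependencies go inside the <dependencies> block of a full pom.xml. A minimal skeleton might look like the sketch below; the groupId, artifactId and version are assumptions inferred from the Java package name and from the jar path used in step 4, not values given in the original post:

<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <!-- assumption: groupId taken from the Java package com.netease.gdc.kafkaStreaming -->
  <groupId>com.netease.gdc</groupId>
  <!-- assumption: artifactId/version match target/kafkaStreaming-0.0.1-SNAPSHOT.jar in step 4 -->
  <artifactId>kafkaStreaming</artifactId>
  <version>0.0.1-SNAPSHOT</version>
  <dependencies>
    <!-- the <dependency> entries listed above go here -->
  </dependencies>
</project>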

 
2. Write the code. This part reuses the official example code:
package com.netease.gdc.kafkaStreaming;

import java.util.Map;
import java.util.HashMap;
import java.util.regex.Pattern;

import scala.Tuple2;
import com.google.common.collect.Lists;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.function.FlatMapFunction;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.api.java.function.Function2;
import org.apache.spark.api.java.function.PairFunction;
import org.apache.spark.streaming.Duration;
import org.apache.spark.streaming.api.java.JavaDStream;
import org.apache.spark.streaming.api.java.JavaPairDStream;
import org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka.KafkaUtils;

/**
 * Consumes messages from one or more topics in Kafka and does wordcount.
 *
 * Usage: JavaKafkaWordCount <zkQuorum> <group> <topics> <numThreads>
 *   <zkQuorum> is a list of one or more zookeeper servers that make quorum
 *   <group> is the name of kafka consumer group
 *   <topics> is a list of one or more kafka topics to consume from
 *   <numThreads> is the number of threads the kafka consumer should use
 *
 * To run this example:
 *   `$ bin/run-example org.apache.spark.examples.streaming.JavaKafkaWordCount zoo01,zoo02, \
 *    zoo03 my-consumer-group topic1,topic2 1`
 */
public final class JavaKafkaWordCount {
  private static final Pattern SPACE = Pattern.compile(" ");

  private JavaKafkaWordCount() {
  }

  public static void main(String[] args) {
    if (args.length < 4) {
      System.err.println("Usage: JavaKafkaWordCount <zkQuorum> <group> <topics> <numThreads>");
      System.exit(1);
    }

    SparkConf sparkConf = new SparkConf().setAppName("JavaKafkaWordCount");
    // Create the context with a 2 second batch size
    JavaStreamingContext jssc = new JavaStreamingContext(sparkConf, new Duration(2000));

    int numThreads = Integer.parseInt(args[3]);
    Map<String, Integer> topicMap = new HashMap<String, Integer>();
    String[] topics = args[2].split(",");
    for (String topic: topics) {
      topicMap.put(topic, numThreads);
    }

    // Each Kafka message arrives as a (key, value) pair; we only need the value.
    JavaPairReceiverInputDStream<String, String> messages =
        KafkaUtils.createStream(jssc, args[0], args[1], topicMap);

    JavaDStream<String> lines = messages.map(new Function<Tuple2<String, String>, String>() {
      @Override
      public String call(Tuple2<String, String> tuple2) {
        return tuple2._2();
      }
    });

    JavaDStream<String> words = lines.flatMap(new FlatMapFunction<String, String>() {
      @Override
      public Iterable<String> call(String x) {
        return Lists.newArrayList(SPACE.split(x));
      }
    });

    JavaPairDStream<String, Integer> wordCounts = words.mapToPair(
      new PairFunction<String, String, Integer>() {
        @Override
        public Tuple2<String, Integer> call(String s) {
          return new Tuple2<String, Integer>(s, 1);
        }
      }).reduceByKey(new Function2<Integer, Integer, Integer>() {
        @Override
        public Integer call(Integer i1, Integer i2) {
          return i1 + i2;
        }
      });

    wordCounts.print();
    jssc.start();
    jssc.awaitTermination();
  }
}
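
Since the context is created with Duration(2000), wordCounts.print() emits a batch of counts (in the same Time: ... ms format shown earlier) every two seconds.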
 
3. Upload the project to the server and build it:
mvn clean package
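The build produces target/kafkaStreaming-0.0.1-SNAPSHOT.jar, the jar submitted in step 4. A plain mvn package does not bundle dependencies into that jar, which is why the Kafka-related jars are passed explicitly via --jars below.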
4. Submit the job to Spark:
/home/hadoop/spark/bin/spark-submit --jars ../mylib/metrics-core-2.2.0.jar,../mylib/zkclient-0.3.jar,../mylib/spark-streaming-kafka_2.10-1.4.0.jar,../mylib/kafka-clients-0.8.2.1.jar,../mylib/kafka_2.10-0.8.2.1.jar  --class com.netease.gdc.kafkaStreaming.JavaKafkaWordCount --master spark://192.168.165.102:7077  target/kafkaStreaming-0.0.1-SNAPSHOT.jar 192.168.172.111:2181/kafka my-consumer-group test 3
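Matching the program's usage string: 192.168.172.111:2181/kafka is the ZooKeeper quorum (here with a chroot path), my-consumer-group is the Kafka consumer group, test is the topic to consume from, and 3 is the number of consumer threads.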
Of course, this assumes that the Kafka cluster is already up and running and that the topic test exists.
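If the topic does not exist yet, it can be created with Kafka's own tooling, along these lines (run from the Kafka installation directory; the partition and replication-factor values are placeholders to adjust for your cluster):

$ bin/kafka-topics.sh --create --zookeeper 192.168.172.111:2181/kafka --topic test --partitions 3 --replication-factor 1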
 
5. Verify
Open a console producer, type some text, and watch the word count output.
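For example, with the console producer that ships with Kafka (the broker address is a placeholder; point --broker-list at one of your own brokers):

$ bin/kafka-console-producer.sh --broker-list 192.168.172.111:9092 --topic test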
The output looks like this:
(hi,1)

  
