After running ant runtime against the Nutch source code, a runtime folder is created, containing two subfolders: deploy and local.

[jediael@jediael runtime]$ ls

deploy  local

These two folders correspond to Nutch's two execution modes: deploy (distributed) mode and local mode.
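For orientation, the two trees typically contain the following (an illustrative listing from a 2.2.1 build, not captured from the session above; exact contents may vary):

deploy/
  apache-nutch-2.2.1.job   # the job jar that is handed to "hadoop jar"
  bin/                     # nutch and crawl scripts
local/
  bin/  conf/  lib/  plugins/   # scripts, config, jars and plugins for a single-JVM run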

1. How nutch.sh handles the two execution modes

if $local; then
  # fix for the external Xerces lib issue with SAXParserFactory
  NUTCH_OPTS="-Djavax.xml.parsers.DocumentBuilderFactory=com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderFactoryImpl $NUTCH_OPTS"
  EXEC_CALL="$JAVA $JAVA_HEAP_MAX $NUTCH_OPTS -classpath $CLASSPATH"
else
  # check that hadoop can be found on the path
  if [ $(which hadoop | wc -l ) -eq 0 ]; then
    echo "Can't find Hadoop executable. Add HADOOP_HOME/bin to the path or run in local mode."
    exit -1;
  fi
  # distributed mode
  EXEC_CALL="hadoop jar $NUTCH_JOB"
fi

# run it
exec $EXEC_CALL $CLASS "$@"
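The $local variable used above is not set by a command-line flag; the script infers the mode from where it is run. A simplified sketch of that detection logic (variable names and ordering in the real bin/nutch may differ slightly):

# default to local mode
local=true
# if a *nutch*.job file sits next to the script (as in runtime/deploy),
# switch to deploy mode and remember the job jar for "hadoop jar"
for f in "$NUTCH_HOME"/*nutch*.job; do
  if [ -f "$f" ]; then
    local=false
    NUTCH_JOB="$f"
  fi
done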

2. Commands run from the deploy folder execute in deploy mode; commands run from the local folder execute in local mode.

Below, inject is used as an example to demonstrate the two modes.

I. Local mode

1. Basic usage:

$ bin/nutch inject
Usage: InjectorJob <url_dir> [-crawlId <id>]

Usage 1: without a crawlId

liaoliuqingdeMacBook-Air:local liaoliuqing$ bin/nutch inject urls
InjectorJob: starting at 2014-12-20 22:32:01
InjectorJob: Injecting urlDir: urls
InjectorJob: Using class org.apache.gora.hbase.store.HBaseStore as the Gora storage class.
InjectorJob: total number of urls rejected by filters: 0
InjectorJob: total number of urls injected after normalization and filtering: 1
Injector: finished at 2014-12-20 22:32:15, elapsed: 00:00:14

Usage 2: with a crawlId

$ bin/nutch inject urls -crawlId 2
InjectorJob: starting at 2014-12-20 22:34:01
InjectorJob: Injecting urlDir: urls
InjectorJob: Using class org.apache.gora.hbase.store.HBaseStore as the Gora storage class.
InjectorJob: total number of urls rejected by filters: 0
InjectorJob: total number of urls injected after normalization and filtering: 1
Injector: finished at 2014-12-20 22:34:15, elapsed: 00:00:14

2. Data changes in the database

The command above creates a table in HBase named ${id}_webpage; if no crawlId is given, the table is named webpage.

The contents of the files in the urls folder are then written into the table as crawl seeds.
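For reference, a seed file inside the urls folder is just a plain-text list of URLs, one per line (lines starting with # are skipped by the injector). A hypothetical example:

# hypothetical contents of urls/seed.txt -- one URL per line
http://www.163.com/
http://www.example.com/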

hbase(main):003:0> scan 'webpage'
ROW                COLUMN+CELL
 com.163.www:http/ column=f:fi, timestamp=1419085934952, value=\x00'\x8D\x00
 com.163.www:http/ column=f:ts, timestamp=1419085934952, value=\x00\x00\x01Jh\x1C\xBC7
 com.163.www:http/ column=mk:_injmrk_, timestamp=1419085934952, value=y
 com.163.www:http/ column=mk:dist, timestamp=1419085934952, value=0
 com.163.www:http/ column=mtdt:_csh_, timestamp=1419085934952, value=?\x80\x00\x00
 com.163.www:http/ column=s:s, timestamp=1419085934952, value=?\x80\x00\x00
1 row(s) in 0.6140 seconds

When the inject command is run again, new URLs are added to the table.
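To check which tables exist and what was written for a given crawlId, the HBase shell can be used again (the id 2 here matches the example above):

hbase(main):001:0> list
hbase(main):002:0> scan '2_webpage'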

3. Other commands

The bin/nutch usage message lists the available commands:

where COMMAND is one of:
inject inject new urls into the database
hostinject creates or updates an existing host table from a text file
generate generate new batches to fetch from crawl db
fetch fetch URLs marked during generate
parse parse URLs marked during fetch
updatedb update web table after parsing
updatehostdb update host table after parsing
readdb read/dump records from page database
readhostdb display entries from the hostDB
elasticindex run the elasticsearch indexer
solrindex run the solr indexer on parsed batches
solrdedup remove duplicates from solr
parsechecker check the parser for a given url
indexchecker check the indexing filters for a given url
plugin load a plugin and run one of its classes main()
nutchserver run a (local) Nutch server on a user defined port
junit runs the given JUnit test
or
CLASSNAME run the class named CLASSNAME
Most commands print help when invoked w/o parameters.

These commands can be executed one by one to walk through each step of a complete crawl, forming the overall workflow.

When the crawl command is used for a crawl task, the basic workflow is as follows:

(1) InjectorJob

First iteration:
(2) GeneratorJob
(3) FetcherJob
(4) ParserJob
(5) DbUpdaterJob
(6) SolrIndexerJob

Second iteration:
(2) GeneratorJob
(3) FetcherJob
(4) ParserJob
(5) DbUpdaterJob
(6) SolrIndexerJob

Third iteration begins, and so on.

For details on how each step runs, see http://blog.csdn.net/jediael_lu/article/details/38591067.
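As a rough sketch, one iteration of this flow can also be driven by hand from runtime/local along the lines below. The crawlId and Solr URL are illustrative, and the exact flags differ slightly between 2.x releases, so confirm them by running each command without arguments first:

bin/nutch inject urls -crawlId TestCrawl
bin/nutch generate -topN 100 -crawlId TestCrawl
bin/nutch fetch -all -crawlId TestCrawl
bin/nutch parse -all -crawlId TestCrawl
bin/nutch updatedb -crawlId TestCrawl            # some 2.x releases also take a batch id or -all here
bin/nutch solrindex http://localhost:8983/solr -all -crawlId TestCrawl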

4. Nutch also ships a crawl script that wraps these key steps, so there is no need to run each stage of the crawl by hand.

[jediael@jediael local]$ bin/crawl
Missing seedDir : crawl <seedDir> <crawlID> <solrURL> <numberOfRounds>

For example:

[root@jediael44 bin]# ./crawl seed.txt TestCrawl http://localhost:8983/solr 2

II. Deploy mode

1. Running with the hadoop command

Note: Hadoop and HBase must be started first.
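With a Hadoop 1.x plus standalone HBase setup like the one shown in the log below, that typically means something like the following (script locations assume the default tarball layouts):

$HADOOP_HOME/bin/start-all.sh      # HDFS and MapReduce (Hadoop 1.x)
$HBASE_HOME/bin/start-hbase.sh     # HBase (which manages ZooKeeper in this setup)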

[jediael@jediael deploy]$ hadoop jar apache-nutch-2.2.1.job org.apache.nutch.crawl.InjectorJob file:///opt/jediael/apache-nutch-2.2.1/runtime/deploy/urls/
14/12/20 23:26:50 INFO crawl.InjectorJob: InjectorJob: starting at 2014-12-20 23:26:50
14/12/20 23:26:50 INFO crawl.InjectorJob: InjectorJob: Injecting urlDir: file:/opt/jediael/apache-nutch-2.2.1/runtime/deploy/urls
14/12/20 23:26:52 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.3.2-1031432, built on 11/05/2010 05:32 GMT
14/12/20 23:26:52 INFO zookeeper.ZooKeeper: Client environment:host.name=jediael
14/12/20 23:26:52 INFO zookeeper.ZooKeeper: Client environment:java.version=1.7.0_51
14/12/20 23:26:52 INFO zookeeper.ZooKeeper: Client environment:java.vendor=Oracle Corporation
14/12/20 23:26:52 INFO zookeeper.ZooKeeper: Client environment:java.home=/usr/java/jdk1.7.0_51/jre
14/12/20 23:26:52 INFO zookeeper.ZooKeeper: Client environment:java.class.path=/opt/jediael/hadoop-1.2.1/libexec/../conf:/usr/java/jdk1.7.0_51/lib/tools.jar:/opt/jediael/hadoop-1.2.1/libexec/..:/opt/jediael/hadoop-1.2.1/libexec/../hadoop-core-1.2.1.jar:/opt/jediael/hadoop-1.2.1/libexec/../lib/asm-3.2.jar:/opt/jediael/hadoop-1.2.1/libexec/../lib/aspectjrt-1.6.11.jar:/opt/jediael/hadoop-1.2.1/libexec/../lib/aspectjtools-1.6.11.jar:/opt/jediael/hadoop-1.2.1/libexec/../lib/commons-beanutils-1.7.0.jar:/opt/jediael/hadoop-1.2.1/libexec/../lib/commons-beanutils-core-1.8.0.jar:/opt/jediael/hadoop-1.2.1/libexec/../lib/commons-cli-1.2.jar:/opt/jediael/hadoop-1.2.1/libexec/../lib/commons-codec-1.4.jar:/opt/jediael/hadoop-1.2.1/libexec/../lib/commons-collections-3.2.1.jar:/opt/jediael/hadoop-1.2.1/libexec/../lib/commons-configuration-1.6.jar:/opt/jediael/hadoop-1.2.1/libexec/../lib/commons-daemon-1.0.1.jar:/opt/jediael/hadoop-1.2.1/libexec/../lib/commons-digester-1.8.jar:/opt/jediael/hadoop-1.2.1/libexec/../lib/commons-el-1.0.jar:/opt/jediael/hadoop-1.2.1/libexec/../lib/commons-httpclient-3.0.1.jar:/opt/jediael/hadoop-1.2.1/libexec/../lib/commons-io-2.1.jar:/opt/jediael/hadoop-1.2.1/libexec/../lib/commons-lang-2.4.jar:/opt/jediael/hadoop-1.2.1/libexec/../lib/commons-logging-1.1.1.jar:/opt/jediael/hadoop-1.2.1/libexec/../lib/commons-logging-api-1.0.4.jar:/opt/jediael/hadoop-1.2.1/libexec/../lib/commons-math-2.1.jar:/opt/jediael/hadoop-1.2.1/libexec/../lib/commons-net-3.1.jar:/opt/jediael/hadoop-1.2.1/libexec/../lib/core-3.1.1.jar:/opt/jediael/hadoop-1.2.1/libexec/../lib/hadoop-capacity-scheduler-1.2.1.jar:/opt/jediael/hadoop-1.2.1/libexec/../lib/hadoop-fairscheduler-1.2.1.jar:/opt/jediael/hadoop-1.2.1/libexec/../lib/hadoop-thriftfs-1.2.1.jar:/opt/jediael/hadoop-1.2.1/libexec/../lib/hsqldb-1.8.0.10.jar:/opt/jediael/hadoop-1.2.1/libexec/../lib/jackson-core-asl-1.8.8.jar:/opt/jediael/hadoop-1.2.1/libexec/../lib/jackson-mapper-asl-1.8.8.jar:/opt/jediael/hadoop-1.2.1/libexec/../lib/jasper-compiler-5.5.12.jar:/opt/jediael/hadoop-1.2.1/libexec/../lib/jasper-runtime-5.5.12.jar:/opt/jediael/hadoop-1.2.1/libexec/../lib/jdeb-0.8.jar:/opt/jediael/hadoop-1.2.1/libexec/../lib/jersey-core-1.8.jar:/opt/jediael/hadoop-1.2.1/libexec/../lib/jersey-json-1.8.jar:/opt/jediael/hadoop-1.2.1/libexec/../lib/jersey-server-1.8.jar:/opt/jediael/hadoop-1.2.1/libexec/../lib/jets3t-0.6.1.jar:/opt/jediael/hadoop-1.2.1/libexec/../lib/jetty-6.1.26.jar:/opt/jediael/hadoop-1.2.1/libexec/../lib/jetty-util-6.1.26.jar:/opt/jediael/hadoop-1.2.1/libexec/../lib/jsch-0.1.42.jar:/opt/jediael/hadoop-1.2.1/libexec/../lib/junit-4.5.jar:/opt/jediael/hadoop-1.2.1/libexec/../lib/kfs-0.2.2.jar:/opt/jediael/hadoop-1.2.1/libexec/../lib/log4j-1.2.15.jar:/opt/jediael/hadoop-1.2.1/libexec/../lib/mockito-all-1.8.5.jar:/opt/jediael/hadoop-1.2.1/libexec/../lib/oro-2.0.8.jar:/opt/jediael/hadoop-1.2.1/libexec/../lib/servlet-api-2.5-20081211.jar:/opt/jediael/hadoop-1.2.1/libexec/../lib/slf4j-api-1.4.3.jar:/opt/jediael/hadoop-1.2.1/libexec/../lib/slf4j-log4j12-1.4.3.jar:/opt/jediael/hadoop-1.2.1/libexec/../lib/xmlenc-0.52.jar:/opt/jediael/hadoop-1.2.1/libexec/../lib/jsp-2.1/jsp-2.1.jar:/opt/jediael/hadoop-1.2.1/libexec/../lib/jsp-2.1/jsp-api-2.1.jar
14/12/20 23:26:52 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/opt/jediael/hadoop-1.2.1/libexec/../lib/native/Linux-amd64-64
14/12/20 23:26:52 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
14/12/20 23:26:52 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
14/12/20 23:26:52 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
14/12/20 23:26:52 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
14/12/20 23:26:52 INFO zookeeper.ZooKeeper: Client environment:os.version=2.6.32-431.17.1.el6.x86_64
14/12/20 23:26:52 INFO zookeeper.ZooKeeper: Client environment:user.name=jediael
14/12/20 23:26:52 INFO zookeeper.ZooKeeper: Client environment:user.home=/home/jediael
14/12/20 23:26:52 INFO zookeeper.ZooKeeper: Client environment:user.dir=/opt/jediael/apache-nutch-2.2.1/runtime/deploy
14/12/20 23:26:52 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=localhost:2181 sessionTimeout=180000 watcher=hconnection
14/12/20 23:26:52 INFO zookeeper.ClientCnxn: Opening socket connection to server localhost/127.0.0.1:2181
14/12/20 23:26:52 INFO zookeeper.ClientCnxn: Socket connection established to localhost/127.0.0.1:2181, initiating session
14/12/20 23:26:52 INFO zookeeper.ClientCnxn: Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x14a5c24c9cf0657, negotiated timeout = 40000
14/12/20 23:26:52 INFO crawl.InjectorJob: InjectorJob: Using class org.apache.gora.hbase.store.HBaseStore as the Gora storage class.
14/12/20 23:26:55 INFO input.FileInputFormat: Total input paths to process : 1
14/12/20 23:26:55 INFO util.NativeCodeLoader: Loaded the native-hadoop library
14/12/20 23:26:55 WARN snappy.LoadSnappy: Snappy native library not loaded
14/12/20 23:26:56 INFO mapred.JobClient: Running job: job_201412202325_0002
14/12/20 23:26:57 INFO mapred.JobClient: map 0% reduce 0%
14/12/20 23:27:15 INFO mapred.JobClient: map 100% reduce 0%
14/12/20 23:27:17 INFO mapred.JobClient: Job complete: job_201412202325_0002
14/12/20 23:27:18 INFO mapred.JobClient: Counters: 20
14/12/20 23:27:18 INFO mapred.JobClient: Job Counters
14/12/20 23:27:18 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=14058
14/12/20 23:27:18 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
14/12/20 23:27:18 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
14/12/20 23:27:18 INFO mapred.JobClient: Rack-local map tasks=1
14/12/20 23:27:18 INFO mapred.JobClient: Launched map tasks=1
14/12/20 23:27:18 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=0
14/12/20 23:27:18 INFO mapred.JobClient: File Output Format Counters
14/12/20 23:27:18 INFO mapred.JobClient: Bytes Written=0
14/12/20 23:27:18 INFO mapred.JobClient: injector
14/12/20 23:27:18 INFO mapred.JobClient: urls_injected=3
14/12/20 23:27:18 INFO mapred.JobClient: FileSystemCounters
14/12/20 23:27:18 INFO mapred.JobClient: FILE_BYTES_READ=149
14/12/20 23:27:18 INFO mapred.JobClient: HDFS_BYTES_READ=130
14/12/20 23:27:18 INFO mapred.JobClient: FILE_BYTES_WRITTEN=78488
14/12/20 23:27:18 INFO mapred.JobClient: File Input Format Counters
14/12/20 23:27:18 INFO mapred.JobClient: Bytes Read=149
14/12/20 23:27:18 INFO mapred.JobClient: Map-Reduce Framework
14/12/20 23:27:18 INFO mapred.JobClient: Map input records=6
14/12/20 23:27:18 INFO mapred.JobClient: Physical memory (bytes) snapshot=106311680
14/12/20 23:27:18 INFO mapred.JobClient: Spilled Records=0
14/12/20 23:27:18 INFO mapred.JobClient: CPU time spent (ms)=2420
14/12/20 23:27:18 INFO mapred.JobClient: Total committed heap usage (bytes)=29753344
14/12/20 23:27:18 INFO mapred.JobClient: Virtual memory (bytes) snapshot=736796672
14/12/20 23:27:18 INFO mapred.JobClient: Map output records=3
14/12/20 23:27:18 INFO mapred.JobClient: SPLIT_RAW_BYTES=130
14/12/20 23:27:18 INFO crawl.InjectorJob: InjectorJob: total number of urls rejected by filters: 0
14/12/20 23:27:18 INFO crawl.InjectorJob: InjectorJob: total number of urls injected after normalization and filtering: 3
14/12/20 23:27:18 INFO crawl.InjectorJob: Injector: finished at 2014-12-20 23:27:18, elapsed: 00:00:27
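The command above reads the seed directory from the local filesystem through a file:// URI. On a real cluster the seeds are more commonly staged on HDFS first; a sketch of that variant (paths are illustrative):

hadoop fs -mkdir /user/jediael/urls
hadoop fs -put urls/seed.txt /user/jediael/urls/
hadoop jar apache-nutch-2.2.1.job org.apache.nutch.crawl.InjectorJob /user/jediael/urls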

III. Extra: running Nutch from Eclipse

This approach is essentially the same as deploy mode.

Run InjectorJob from Eclipse.
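The Eclipse run configuration only needs the job's main class and the seed directory as its single program argument; conceptually it amounts to something like the line below, where <project classpath> is a placeholder for the Eclipse project classpath (which has to include Nutch's conf directory so that nutch-site.xml and gora.properties are picked up):

java -cp <project classpath> org.apache.nutch.crawl.InjectorJob /Users/liaoliuqing/99_Project/2.x/urls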

Eclipse console output:

InjectorJob: starting at 2014-12-20 23:13:24
InjectorJob: Injecting urlDir: /Users/liaoliuqing/99_Project/2.x/urls
InjectorJob: Using class org.apache.gora.hbase.store.HBaseStore as the Gora storage class.
InjectorJob: total number of urls rejected by filters: 0
InjectorJob: total number of urls injected after normalization and filtering: 1
Injector: finished at 2014-12-20 23:13:27, elapsed: 00:00:02
