Scrapy distributed Spider: source code analysis and implementation
The distributed framework scrapy_redis provides a complete set of components, including its own spider class. RedisSpider is a light modification of Scrapy's original Spider: instead of reading its initial URLs from the start_urls list, it reads them from a Redis start queue.
The source lives in scrapy_redis.spiders, which implements not only RedisSpider (the distributed spider) but also RedisCrawlSpider (the distributed deep-crawling spider based on CrawlSpider); the two share most of their methods.
The source code is as follows:
from scrapy import signals
from scrapy.exceptions import DontCloseSpider
from scrapy.spiders import Spider, CrawlSpider

from . import connection


# Default batch size matches default concurrent requests setting.
DEFAULT_START_URLS_BATCH_SIZE = 16
DEFAULT_START_URLS_KEY = '%(name)s:start_urls'


class RedisMixin(object):
    """Mixin class to implement reading urls from a redis queue."""
    # Per spider redis key, default to DEFAULT_START_URLS_KEY.
    redis_key = None
    # Fetch this amount of start urls when idle. Default to DEFAULT_START_URLS_BATCH_SIZE.
    redis_batch_size = None
    # Redis client instance.
    server = None

    def start_requests(self):
        """Returns a batch of start requests from redis."""
        return self.next_requests()

    def setup_redis(self, crawler=None):
        """Setup redis connection and idle signal.

        This should be called after the spider has set its crawler object.
        """
        if self.server is not None:
            return

        if crawler is None:
            # We allow optional crawler argument to keep backwards
            # compatibility.
            # XXX: Raise a deprecation warning.
            crawler = getattr(self, 'crawler', None)

        if crawler is None:
            raise ValueError("crawler is required")

        settings = crawler.settings

        if self.redis_key is None:
            self.redis_key = settings.get(
                'REDIS_START_URLS_KEY', DEFAULT_START_URLS_KEY,
            )

        self.redis_key = self.redis_key % {'name': self.name}

        if not self.redis_key.strip():
            raise ValueError("redis_key must not be empty")

        if self.redis_batch_size is None:
            self.redis_batch_size = settings.getint(
                'REDIS_START_URLS_BATCH_SIZE', DEFAULT_START_URLS_BATCH_SIZE,
            )

        try:
            self.redis_batch_size = int(self.redis_batch_size)
        except (TypeError, ValueError):
            raise ValueError("redis_batch_size must be an integer")

        self.logger.info("Reading start URLs from redis key '%(redis_key)s' "
                         "(batch size: %(redis_batch_size)s)", self.__dict__)

        self.server = connection.from_settings(crawler.settings)
        # The idle signal is called when the spider has no requests left,
        # that's when we will schedule new requests from redis queue
        crawler.signals.connect(self.spider_idle, signal=signals.spider_idle)

    def next_requests(self):
        """Returns a request to be scheduled or none."""
        use_set = self.settings.getbool('REDIS_START_URLS_AS_SET')
        fetch_one = self.server.spop if use_set else self.server.lpop
        # XXX: Do we need to use a timeout here?
        found = 0
        while found < self.redis_batch_size:
            data = fetch_one(self.redis_key)
            if not data:
                # Queue empty.
                break
            req = self.make_request_from_data(data)
            if req:
                yield req
                found += 1
            else:
                self.logger.debug("Request not made from data: %r", data)

        if found:
            self.logger.debug("Read %s requests from '%s'", found, self.redis_key)

    def make_request_from_data(self, data):
        # By default, data is an URL.
        if '://' in data:
            return self.make_requests_from_url(data)
        else:
            self.logger.error("Unexpected URL from '%s': %r", self.redis_key, data)

    def schedule_next_requests(self):
        """Schedules a request if available"""
        for req in self.next_requests():
            self.crawler.engine.crawl(req, spider=self)

    def spider_idle(self):
        """Schedules a request if available, otherwise waits."""
        # XXX: Handle a sentinel to close the spider.
        self.schedule_next_requests()
        raise DontCloseSpider


class RedisSpider(RedisMixin, Spider):
    """Spider that reads urls from redis queue when idle."""

    @classmethod
    def from_crawler(self, crawler, *args, **kwargs):
        obj = super(RedisSpider, self).from_crawler(crawler, *args, **kwargs)
        obj.setup_redis(crawler)
        return obj


class RedisCrawlSpider(RedisMixin, CrawlSpider):
    """Spider that reads urls from redis queue when idle."""

    @classmethod
    def from_crawler(self, crawler, *args, **kwargs):
        obj = super(RedisCrawlSpider, self).from_crawler(crawler, *args, **kwargs)
        obj.setup_redis(crawler)
        return obj
The entry point is start_requests, which simply delegates to next_requests(): it pops up to redis_batch_size URLs from the Redis start queue and turns each one into a Request via make_request_from_data / make_requests_from_url(url). These Requests are handed to the engine and then to the scheduler, where they are checked against the URL dedup set; from that point on the flow is the same as in an ordinary spider. A concrete usage sketch follows below.
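To make this entry point concrete, here is a minimal sketch of a RedisSpider subclass together with the command that feeds its start queue. The spider name 'myspider', the key 'myspider:start_urls', and the Redis address are assumptions for illustration only, not part of the source above.

# Minimal sketch of using RedisSpider; names and addresses are assumed.
import scrapy
from scrapy_redis.spiders import RedisSpider


class MySpider(RedisSpider):
    name = 'myspider'
    # If omitted, DEFAULT_START_URLS_KEY ('%(name)s:start_urls') is used.
    redis_key = 'myspider:start_urls'

    def parse(self, response):
        # Yield scraped items; any further Requests go back through the scheduler.
        yield {'url': response.url, 'title': response.css('title::text').get()}


# Feed the start queue from any machine, for example with redis-cli:
#   redis-cli lpush myspider:start_urls http://example.com/
# start_requests()/next_requests() will lpop this URL and build a Request from it.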
When the engine finds that the request queue is empty, the spider_idle signal fires and the spider_idle method is called. It in turn calls schedule_next_requests, which iterates over the next batch of start URLs fetched from Redis and submits them to the engine, so crawling starts again; once that batch of work is finished, the start queue is read once more.
def schedule_next_requests(self):
    """Schedules a request if available"""
    for req in self.next_requests():
        self.crawler.engine.crawl(req, spider=self)

def spider_idle(self):
    """Schedules a request if available, otherwise waits."""
    # XXX: Handle a sentinel to close the spider.
    self.schedule_next_requests()
    raise DontCloseSpider
RedisSpider and RedisCrawlSpider themselves are simple: they inherit from RedisMixin together with Spider or CrawlSpider, and in from_crawler they call setup_redis (defined in the mixin and repeated below for reference). Depending on the crawler passed in, setup_redis reads the settings to initialize redis_key and redis_batch_size, creates the Redis connection, and binds the spider_idle handler to the spider_idle signal through crawler.signals.connect.
class RedisSpider(RedisMixin, Spider):
    """Spider that reads urls from redis queue when idle."""

    @classmethod
    def from_crawler(self, crawler, *args, **kwargs):
        obj = super(RedisSpider, self).from_crawler(crawler, *args, **kwargs)
        obj.setup_redis(crawler)
        return obj


class RedisCrawlSpider(RedisMixin, CrawlSpider):
    """Spider that reads urls from redis queue when idle."""

    @classmethod
    def from_crawler(self, crawler, *args, **kwargs):
        obj = super(RedisCrawlSpider, self).from_crawler(crawler, *args, **kwargs)
        obj.setup_redis(crawler)
        return obj


def setup_redis(self, crawler=None):
    """Setup redis connection and idle signal.

    This should be called after the spider has set its crawler object.
    """
    if self.server is not None:
        return

    if crawler is None:
        # We allow optional crawler argument to keep backwards
        # compatibility.
        # XXX: Raise a deprecation warning.
        crawler = getattr(self, 'crawler', None)

    if crawler is None:
        raise ValueError("crawler is required")

    settings = crawler.settings

    if self.redis_key is None:
        self.redis_key = settings.get(
            'REDIS_START_URLS_KEY', DEFAULT_START_URLS_KEY,
        )

    self.redis_key = self.redis_key % {'name': self.name}

    if not self.redis_key.strip():
        raise ValueError("redis_key must not be empty")

    if self.redis_batch_size is None:
        self.redis_batch_size = settings.getint(
            'REDIS_START_URLS_BATCH_SIZE', DEFAULT_START_URLS_BATCH_SIZE,
        )

    try:
        self.redis_batch_size = int(self.redis_batch_size)
    except (TypeError, ValueError):
        raise ValueError("redis_batch_size must be an integer")

    self.logger.info("Reading start URLs from redis key '%(redis_key)s' "
                     "(batch size: %(redis_batch_size)s)", self.__dict__)

    self.server = connection.from_settings(crawler.settings)
    # The idle signal is called when the spider has no requests left,
    # that's when we will schedule new requests from redis queue
    crawler.signals.connect(self.spider_idle, signal=signals.spider_idle)
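For completeness, here is a hedged sketch of the project settings that setup_redis and next_requests read (REDIS_START_URLS_KEY, REDIS_START_URLS_BATCH_SIZE, REDIS_START_URLS_AS_SET), together with the usual scrapy_redis scheduler and dupefilter switches, which are not part of the code above. The Redis URL is an assumed local instance; adjust it to your deployment.

# settings.py sketch (assumed local Redis instance).
# Use the scrapy_redis scheduler and dupefilter so that requests are queued
# and deduplicated in Redis, shared by all spider processes.
SCHEDULER = "scrapy_redis.scheduler.Scheduler"
DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"

# Connection used by connection.from_settings() in setup_redis().
REDIS_URL = 'redis://127.0.0.1:6379'

# Settings read in setup_redis()/next_requests() above.
REDIS_START_URLS_KEY = '%(name)s:start_urls'   # default: DEFAULT_START_URLS_KEY
REDIS_START_URLS_BATCH_SIZE = 16               # default: DEFAULT_START_URLS_BATCH_SIZE
REDIS_START_URLS_AS_SET = False                # True: use spop instead of lpop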
Summary: RedisSpider fetches start URLs from the initial Redis queue, generates Requests, and hands them to the Scrapy engine, which passes them to the Redis-backed scheduler for deduplication and enqueueing. When the scheduling conditions are met, Requests from the queue are sent by the engine to the downloader; the resulting Response objects go back to the spider for parsing, producing new Requests, and the crawl continues. Once no new Requests are produced, the spider again pulls URLs from the start queue and repeats the cycle. If the start queue is also empty, the spider does not close on its own (spider_idle raises DontCloseSpider); it stays idle and waits for new URLs to be pushed into the queue.
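Because of that idle behavior, a running spider can be driven entirely by pushing new URLs into the start queue from another process. Below is a small sketch using the redis-py client; the key name 'myspider:start_urls' and the Redis address are assumptions for illustration.

# Sketch: feed an idle RedisSpider from another process with redis-py.
import redis

r = redis.StrictRedis(host='127.0.0.1', port=6379)
urls = [
    'http://example.com/page/1',
    'http://example.com/page/2',
]
for url in urls:
    # lpush here pairs with the lpop in next_requests(); rpush also works,
    # it only changes the order in which URLs are consumed.
    r.lpush('myspider:start_urls', url)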