Original article: https://doc.scrapy.org/en/latest/topics/architecture.html

This document describes the architecture of Scrapy and how its components interact.

Overview

The following diagram shows an overview of the Scrapy architecture with its components and an outline of the data flow that takes place inside the system (shown by the red arrows). A brief description of the components is included below with links for more detailed information about them. The data flow is also described below.

Data flow

The data flow in Scrapy is controlled by the execution engine, and goes like this (a minimal spider that exercises this cycle is sketched after the list):

  1. The Engine gets the initial Requests to crawl from the Spider.
  2. The Engine schedules the Requests in the Scheduler and asks for the next Requests to crawl.
  3. The Scheduler returns the next Requests to the Engine.
  4. The Engine sends the Requests to the Downloader, passing through the Downloader Middlewares (see process_request()).
  5. Once the page finishes downloading the Downloader generates a Response (with that page) and sends it to the Engine, passing through the Downloader Middlewares (see process_response()).
  6. The Engine receives the Response from the Downloader and sends it to the Spider for processing, passing through the Spider Middleware (see process_spider_input()).
  7. The Spider processes the Response and returns scraped items and new Requests (to follow) to the Engine, passing through the Spider Middleware (see process_spider_output()).
  8. The Engine sends processed items to Item Pipelines, then sends processed Requests to the Scheduler and asks for possible next Requests to crawl.
  9. The process repeats (from step 1) until there are no more requests from the Scheduler.
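
To make the cycle concrete, the following minimal spider drives it from user code: its start URL becomes the initial Request of step 1, its parse() callback receives the Response at step 6, and the items and follow-up Requests it yields re-enter the flow at steps 7 and 8. The target site, selectors and field names are illustrative assumptions, not part of the original document.

    import scrapy

    class QuotesSpider(scrapy.Spider):
        # Hypothetical example spider; the site and selectors are assumptions
        # chosen only to illustrate the data flow described above.
        name = "quotes"
        start_urls = ["https://quotes.toscrape.com/"]  # initial Requests (step 1)

        def parse(self, response):
            # The Engine delivers the downloaded Response here (step 6).
            for quote in response.css("div.quote"):
                # Scraped items travel back through the Engine to the
                # Item Pipelines (steps 7-8).
                yield {
                    "text": quote.css("span.text::text").get(),
                    "author": quote.css("small.author::text").get(),
                }
            # New Requests to follow go back to the Scheduler (steps 7-8).
            next_page = response.css("li.next a::attr(href)").get()
            if next_page:
                yield response.follow(next_page, callback=self.parse)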

Components

Scrapy Engine

The engine is responsible for controlling the data flow between all components of the system, and triggering events when certain actions occur. See the Data Flow section above for more details.

Scheduler

The Scheduler receives requests from the engine and enqueues them, so it can feed them back to the engine later when the engine asks for them.

Downloader

The Downloader is responsible for fetching web pages and feeding them to the engine which, in turn, feeds them to the spiders.

Spiders

Spiders are custom classes written by Scrapy users to parse responses and extract items (aka scraped items) from them or additional requests to follow. For more information see Spiders.

Item Pipeline

The Item Pipeline is responsible for processing the items once they have been extracted (or scraped) by the spiders. Typical tasks include cleansing, validation and persistence (like storing the item in a database). For more information see Item Pipeline.
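
As a sketch of what such processing can look like, assuming a "price" field and a simple validation rule (both are illustrative assumptions, not from the original document):

    from scrapy.exceptions import DropItem

    class PriceValidationPipeline:
        # Hypothetical pipeline; the "price" field and its rules are
        # assumptions used only to illustrate process_item().
        def process_item(self, item, spider):
            price = item.get("price")
            if price is None:
                raise DropItem("missing price")     # discard invalid items
            item["price"] = round(float(price), 2)  # cleansing / normalisation
            return item                             # hand the item to the next pipeline

Pipelines are enabled through the ITEM_PIPELINES setting, which maps each pipeline class to a number that determines the order in which they run.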

Downloader middlewares

Downloader middlewares are specific hooks that sit between the Engine and the Downloader and process requests as they pass from the Engine to the Downloader, and responses as they pass from the Downloader to the Engine.

Use a Downloader middleware if you need to do one of the following:

  • process a request just before it is sent to the Downloader (i.e. right before Scrapy sends the request to the website);
  • change received response before passing it to a spider;
  • send a new Request instead of passing received response to a spider;
  • pass response to a spider without fetching a web page;
  • silently drop some requests.

For more information see Downloader Middleware.
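
A minimal sketch covering two of these cases, dropping requests to an assumed blocklist and tweaking outgoing requests (the blocked domain and the custom header are illustrative assumptions):

    from scrapy.exceptions import IgnoreRequest

    class BlocklistDownloaderMiddleware:
        # Hypothetical middleware; the blocked domain and extra header are
        # assumptions used only to illustrate the hook points.
        BLOCKED = ("ads.example.invalid",)

        def process_request(self, request, spider):
            if any(domain in request.url for domain in self.BLOCKED):
                raise IgnoreRequest(f"blocked URL: {request.url}")  # silently drop the request
            request.headers.setdefault("X-Crawled-By", spider.name)  # adjust it just before download
            return None  # continue handling this request normally

        def process_response(self, request, response, spider):
            # Change a received response here before it reaches the spider.
            return response

Like pipelines, downloader middlewares are enabled via a setting (DOWNLOADER_MIDDLEWARES) that assigns each class an order number.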

Spider middlewares

Spider middlewares are specific hooks that sit between the Engine and the Spiders and are able to process spider input (responses) and output (items and requests).

Use a Spider middleware if you need to

  • post-process output of spider callbacks - change/add/remove requests or items;
  • post-process start_requests;
  • handle spider exceptions;
  • call errback instead of callback for some of the requests based on response content.

For more information see Spider Middleware.
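
A sketch of the first case, a spider middleware that post-processes spider output and drops items missing a required field (the "title" field is an illustrative assumption):

    class RequireTitleSpiderMiddleware:
        # Hypothetical middleware; requiring a "title" field is an assumption
        # used only to illustrate process_spider_output().
        def process_spider_output(self, response, result, spider):
            # result mixes scraped items and new Requests; Requests are passed
            # through untouched, dict items without a title are dropped.
            for element in result:
                if isinstance(element, dict) and not element.get("title"):
                    spider.logger.debug("dropping item without title from %s", response.url)
                    continue
                yield element

Spider middlewares are enabled through the SPIDER_MIDDLEWARES setting in the same ordered fashion.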

Event-driven networking

Scrapy is written with Twisted, a popular event-driven networking framework for Python. Thus, it is implemented using non-blocking (aka asynchronous) code for concurrency.
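
As a rough illustration of that non-blocking style (plain Twisted, not Scrapy code; the simulated download and its one-second delay are assumptions made for the example):

    from twisted.internet import defer, reactor

    def fake_download(url):
        # Stand-in for a non-blocking download: it returns a Deferred that
        # fires later, instead of blocking the calling thread while waiting.
        d = defer.Deferred()
        reactor.callLater(1.0, d.callback, f"<html>response for {url}</html>")
        return d

    def on_body(body):
        print("received", len(body), "bytes")
        reactor.stop()

    fake_download("http://example.com").addCallback(on_body)
    reactor.run()  # one event loop drives many such downloads concurrently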
