I. Introduction to Scrapy

Scrapy is an application framework written for crawling websites and extracting structured data. It can be used in a wide range of programs, such as data mining, information processing, and archiving historical data.
It was originally designed for page scraping (more precisely, web scraping), but it can also be used to fetch data returned by APIs (such as Amazon Associates Web Services) or to build general-purpose web crawlers. Scrapy has broad uses, including data mining, monitoring, and automated testing.

Scrapy uses the Twisted asynchronous networking library to handle network communication.

The overall architecture consists mainly of the following components:

  • Engine (Scrapy Engine)

  Handles the data flow across the whole system and triggers events (the core of the framework).

  • Scheduler

  Accepts requests sent over by the engine, pushes them onto a queue, and hands them back when the engine asks again. Think of it as a priority queue of URLs (the addresses, or links, to be crawled): it decides which URL is fetched next and drops duplicate URLs.

  • Downloader

  Downloads page content and passes it back to the spiders (the downloader is built on top of Twisted, an efficient asynchronous model).

  • Spiders

  Spiders do the real work: they extract the information you need, the so-called items, from specific pages. You can also extract links from a page so that Scrapy goes on to crawl the next one.

  • Item Pipeline

  Processes the items the spiders extract from pages. Its main jobs are persisting items, validating them, and stripping out unwanted data. Once a page has been parsed by a spider, its items are sent to the pipeline and pass through a few processing steps in a fixed order.

  • Downloader Middlewares

  A hook layer between the Scrapy engine and the downloader; it mainly processes the requests and responses passing between the two.

  • Spider Middlewares

  A hook layer between the Scrapy engine and the spiders; it mainly processes the spiders' response input and request output.

  • Scheduler Middlewares

  Middleware between the Scrapy engine and the scheduler; it processes the requests and responses sent from the engine to the scheduler.

The Scrapy run flow is roughly as follows (a minimal sketch of the flow comes right after the list):

  1. The engine takes a URL from the scheduler for the next page to crawl.
  2. The engine wraps the URL in a Request and hands it to the downloader.
  3. The downloader fetches the resource and wraps it in a Response.
  4. The spider parses the Response.
  5. If items are parsed out, they are handed to the item pipeline for further processing.
  6. If links (URLs) are parsed out, they are handed back to the scheduler to await crawling.
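
A minimal sketch of that flow; the spider name, start URL, and XPath below are placeholders rather than code from any project in this article:

 import scrapy

 class DemoSpider(scrapy.Spider):
     # placeholder spider illustrating the run flow above
     name = "demo"
     start_urls = ["http://example.com/"]   # steps 1-3: scheduled, wrapped in a Request, downloaded

     def parse(self, response):
         # step 4: the spider parses the Response
         # step 5: extracted data is yielded as an item and handed to the pipelines
         yield {"url": response.url, "title": response.xpath('//title/text()').extract_first()}
         # step 6: extracted links go back to the scheduler as new Requests
         for href in response.xpath('//a/@href').extract():
             yield scrapy.Request(response.urljoin(href), callback=self.parse)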

II. Installation

Linux:
pip3 install scrapy

Windows:
a. pip3 install wheel
b. Download the Twisted wheel from http://www.lfd.uci.edu/~gohlke/pythonlibs/#twisted
c. Change into the download directory and run: pip3 install Twisted-17.1.0-cp35-cp35m-win_amd64.whl
d. pip3 install scrapy
e. Download and install pywin32: https://sourceforge.net/projects/pywin32/files/
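
Either way, it is worth confirming the install from the command line; scrapy version -v also prints the versions of Twisted, Python, and the other libraries Scrapy runs on:

 scrapy version
 scrapy version -v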

III. Basic Usage

1. Basic commands

. scrapy startproject <project_name>
  - Create a project in the current directory (similar to Django)
. scrapy genspider [-t template] <name> <domain>
  - Create a spider inside the project, e.g.:
        scrapy genspider -t basic oldboy oldboy.com
        scrapy genspider -t xmlfeed autohome autohome.com.cn
    PS:
        list the available templates: scrapy genspider -l
        show a template:              scrapy genspider -d <template_name>
. scrapy list
  - List the spiders in the project
. scrapy crawl <spider_name>
  - Run a single spider

2. Project structure and spider basics

2.1. Directory layout
project_name/
   scrapy.cfg
   project_name/
       __init__.py
       items.py
       pipelines.py
       settings.py
       spiders/
           __init__.py
           spider1.py
           spider2.py
           spider3.py
2.2. What the files do
  • scrapy.cfg    The project's master configuration file (the settings that actually drive the crawler live in settings.py).
  • items.py      Templates for the data you want to store, used to structure the scraped data, similar to Django models.
  • pipelines.py  Data-processing behaviour, e.g. persisting the structured data.
  • settings.py   Configuration, e.g. recursion depth, concurrency, download delay, and so on.
  • spiders/      The spider directory; create files here and write the crawling rules in them.

Note: spider files are usually named after the domain of the site they crawl.

2.3. Example
import scrapy


class XiaoHuarSpider(scrapy.spiders.Spider):
    name = "xiaohuar"                        # spider name *****
    allowed_domains = ["xiaohuar.com"]       # allowed domains
    start_urls = [
        "http://www.xiaohuar.com/hua/",      # start URL
    ]

    def parse(self, response):
        # callback invoked once the start URL has been fetched
        pass

spider1.py

If you hit encoding errors on Windows, the fix is as follows:

import sys, io

sys.stdout = io.TextIOWrapper(sys.stdout.buffer, encoding='gb18030')

3. A quick first crawl

import scrapy
from scrapy.selector import HtmlXPathSelector
from scrapy.http.request import Request


class DigSpider(scrapy.Spider):
    # The spider's name; the crawl command is started with this name.
    name = "dig"

    # Allowed domains
    allowed_domains = ["chouti.com"]

    # Start URLs
    start_urls = [
        'http://dig.chouti.com/',
    ]

    has_request_set = {}

    def parse(self, response):
        print(response.url)

        hxs = HtmlXPathSelector(response)
        page_list = hxs.select('//div[@id="dig_lcpage"]//a[re:test(@href, "/all/hot/recent/\d+")]/@href').extract()
        for page in page_list:
            page_url = 'http://dig.chouti.com%s' % page
            key = self.md5(page_url)
            if key in self.has_request_set:
                pass
            else:
                self.has_request_set[key] = page_url
                obj = Request(url=page_url, method='GET', callback=self.parse)
                yield obj

    @staticmethod
    def md5(val):
        import hashlib
        ha = hashlib.md5()
        ha.update(bytes(val, encoding='utf-8'))
        key = ha.hexdigest()
        return key

Crawling pages on chouti.com

To run this spider, open a terminal, change into the project directory, and execute:

 scrapy crawl dig --nolog

The important points in the code above are:

  • Request is a class that wraps a user request; yielding a Request from the callback tells Scrapy to keep crawling that URL.
  • HtmlXPathSelector structures the HTML and provides selector functionality (see the note below on newer Scrapy versions).
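
Note: HtmlXPathSelector belongs to the older Scrapy API. On newer releases (roughly Scrapy 1.0 and later) the same selection is usually written with response.xpath() directly; a rough equivalent of the parse method above, under that assumption:

    def parse(self, response):
        # same XPath as above, using the newer response.xpath() shortcut
        page_list = response.xpath('//div[@id="dig_lcpage"]//a[re:test(@href, "/all/hot/recent/\d+")]/@href').extract()
        for page in page_list:
            yield Request(url='http://dig.chouti.com%s' % page, method='GET', callback=self.parse)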

4. Selectors

 #!/usr/bin/env python
# -*- coding:utf-8 -*-
from scrapy.selector import Selector, HtmlXPathSelector
from scrapy.http import HtmlResponse
html = """<!DOCTYPE html>
<html>
<head lang="en">
<meta charset="UTF-8">
<title></title>
</head>
<body>
<ul>
<li class="item-"><a id='i1' href="link.html">first item</a></li>
<li class="item-0"><a id='i2' href="llink.html">first item</a></li>
<li class="item-1"><a href="llink2.html">second item<span>vv</span></a></li>
</ul>
<div><a href="llink2.html">second item</a></div>
</body>
</html>
"""
response = HtmlResponse(url='http://example.com', body=html,encoding='utf-8')
# hxs = HtmlXPathSelector(response)
# print(hxs)
# hxs = Selector(response=response).xpath('//a')
# print(hxs)
# hxs = Selector(response=response).xpath('//a[2]')
# print(hxs)
# hxs = Selector(response=response).xpath('//a[@id]')
# print(hxs)
# hxs = Selector(response=response).xpath('//a[@id="i1"]')
# print(hxs)
# hxs = Selector(response=response).xpath('//a[@href="link.html"][@id="i1"]')
# print(hxs)
# hxs = Selector(response=response).xpath('//a[contains(@href, "link")]')
# print(hxs)
# hxs = Selector(response=response).xpath('//a[starts-with(@href, "link")]')
# print(hxs)
# hxs = Selector(response=response).xpath('//a[re:test(@id, "i\d+")]')
# print(hxs)
# hxs = Selector(response=response).xpath('//a[re:test(@id, "i\d+")]/text()').extract()
# print(hxs)
# hxs = Selector(response=response).xpath('//a[re:test(@id, "i\d+")]/@href').extract()
# print(hxs)
# hxs = Selector(response=response).xpath('/html/body/ul/li/a/@href').extract()
# print(hxs)
# hxs = Selector(response=response).xpath('//body/ul/li/a/@href').extract_first()
# print(hxs)

# ul_list = Selector(response=response).xpath('//body/ul/li')
# for item in ul_list:
#     v = item.xpath('./a/span')
#     # or
#     # v = item.xpath('a/span')
#     # or
#     # v = item.xpath('*/a/span')
#     print(v)

5. Logging in to chouti.com to automatically upvote or cancel votes

# -*- coding: utf-8 -*-
import scrapy
import json
import urllib.parse
from scrapy.selector import Selector, HtmlXPathSelector
from scrapy.http import Request


class ChoutiSpider(scrapy.Spider):
    name = 'chouti'
    allowed_domains = ['dig.chouti.com']
    start_urls = ['http://dig.chouti.com/']
    cookie_dict = {}
    news_dict = {}
    type = 'undo'  # 'undo' cancels a vote, 'do' casts one

    def start_requests(self):
        for url in self.start_urls:
            yield Request(url, dont_filter=True, callback=self.login)

    def login(self, response):
        print('login...\n')
        from scrapy.http.cookies import CookieJar
        ck_jar = CookieJar()
        ck_jar.extract_cookies(response, response.request)  # pull the cookies out of the response
        for k, v in ck_jar._cookies.items():
            for i, j in v.items():
                for m, n in j.items():
                    self.cookie_dict[m] = n.value
        print(self.cookie_dict, '\n')

        post_dict = {
            'phone': 123456789,
            'password': '**********',
            'oneMonth': 1,
        }
        yield Request(
            url='http://dig.chouti.com/login',
            method='POST',
            cookies=self.cookie_dict,
            body=urllib.parse.urlencode(post_dict),
            headers={'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8'},
            callback=self.get_news
        )

    def get_news(self, response):
        yield Request(url='http://dig.chouti.com/', cookies=self.cookie_dict, callback=self.do)

    def do(self, response):
        hxs = Selector(response)
        link_id_list = hxs.xpath('//div[@class="part2"]/@share-linkid').extract()
        print(link_id_list)
        undo_url = 'http://dig.chouti.com/vote/cancel/vote.do'
        for link_id in link_id_list:
            if self.type == 'undo':
                print('undo.........\n')
                post_dict = {'linksId': link_id}
                yield Request(
                    undo_url, method='POST',
                    cookies=self.cookie_dict,
                    body=urllib.parse.urlencode(post_dict),
                    headers={'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8'},
                    callback=self.showret)
            else:
                print('dododod.......\n')
                thumb_url = "http://dig.chouti.com/link/vote?linksId=%s" % (link_id,)
                yield Request(thumb_url, method='POST', cookies=self.cookie_dict, callback=self.showret)

        page_list = hxs.xpath('//a[@class="ct_pagepa"]/@href').extract()
        for page in page_list:
            # e.g. http://dig.chouti.com/all/hot/recent/2
            page_url = 'http://dig.chouti.com%s' % (page,)
            yield Request(page_url, method='GET', callback=self.do)

    def showret(self, response):
        print('show...\n', response.text)

Automatically upvoting or cancelling votes on chouti.com

Note: set DEPTH_LIMIT = 1 in settings.py to cap the "recursion" depth, i.e. operate only on the current page.
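
In settings.py that is a single line:

 DEPTH_LIMIT = 1   # follow links at most one level deep, i.e. only operate on the current page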

6. Structured data handling

The examples above do only simple processing, so everything is handled directly in the parse method. If you want to do more with the scraped data, you can use Scrapy's items to structure it and then hand everything over to the pipelines for uniform processing.

import scrapy
from scrapy.selector import HtmlXPathSelector
from scrapy.http.request import Request
from scrapy.http.cookies import CookieJar
from scrapy import FormRequest


class XiaoHuarSpider(scrapy.Spider):
    # The spider's name; the crawl command is started with this name.
    name = "xiaohuar"
    # Allowed domains
    allowed_domains = ["xiaohuar.com"]

    start_urls = [
        "http://www.xiaohuar.com/list-1-1.html",
    ]
    # custom_settings = {
    #     'ITEM_PIPELINES': {
    #         'spider1.pipelines.JsonPipeline': 100
    #     }
    # }
    has_request_set = {}

    def parse(self, response):
        # Analyse the page:
        # find the content matching the rules (the photos) and save it,
        # then find all the <a> tags and keep following them level by level.
        hxs = HtmlXPathSelector(response)

        items = hxs.select('//div[@class="item_list infinite_scroll"]/div')
        for item in items:
            src = item.select('.//div[@class="img"]/a/img/@src').extract_first()
            name = item.select('.//div[@class="img"]/span/text()').extract_first()
            school = item.select('.//div[@class="img"]/div[@class="btns"]/a/text()').extract_first()
            url = "http://www.xiaohuar.com%s" % src
            from ..items import XiaoHuarItem
            obj = XiaoHuarItem(name=name, school=school, url=url)
            yield obj

        urls = hxs.select('//a[re:test(@href, "http://www.xiaohuar.com/list-1-\d+.html")]/@href').extract()
        for url in urls:
            key = self.md5(url)
            if key in self.has_request_set:
                pass
            else:
                self.has_request_set[key] = url
                req = Request(url=url, method='GET', callback=self.parse)
                yield req

    @staticmethod
    def md5(val):
        import hashlib
        ha = hashlib.md5()
        ha.update(bytes(val, encoding='utf-8'))
        key = ha.hexdigest()
        return key

spiders/xiaohuar.py

import scrapy


class XiaoHuarItem(scrapy.Item):
    name = scrapy.Field()
    school = scrapy.Field()
    url = scrapy.Field()

items.py

import json
import os
import requests


class JsonPipeline(object):
    def __init__(self):
        self.file = open('xiaohua.txt', 'w')

    def process_item(self, item, spider):
        v = json.dumps(dict(item), ensure_ascii=False)
        self.file.write(v)
        self.file.write('\n')
        self.file.flush()
        return item


class FilePipeline(object):
    def __init__(self):
        if not os.path.exists('imgs'):
            os.makedirs('imgs')

    def process_item(self, item, spider):
        response = requests.get(item['url'], stream=True)
        file_name = '%s_%s.jpg' % (item['name'], item['school'])
        with open(os.path.join('imgs', file_name), mode='wb') as f:
            f.write(response.content)
        return item

pipelines.py

ITEM_PIPELINES = {
    'spider1.pipelines.JsonPipeline': 100,
    'spider1.pipelines.FilePipeline': 300,
}
# The integer assigned to each pipeline determines the order items pass through them:
# lower numbers run first. By convention the values are kept in the 0-1000 range.

settings.py

A pipeline can do quite a bit more, as shown below:

from scrapy.exceptions import DropItem


class CustomPipeline(object):
    def __init__(self, v):
        self.value = v

    def process_item(self, item, spider):
        # Do the work / persist the item here.
        # Returning the item passes it on to the next pipeline:
        return item
        # To drop the item so that no later pipeline sees it:
        # raise DropItem()

    @classmethod
    def from_crawler(cls, crawler):
        """
        Called when the pipeline object is created during initialisation.
        :param crawler:
        :return:
        """
        val = crawler.settings.getint('MMMM')
        return cls(val)

    def open_spider(self, spider):
        """
        Called when the spider starts running.
        :param spider:
        :return:
        """
        print('open spider')

    def close_spider(self, spider):
        """
        Called when the spider closes.
        :param spider:
        :return:
        """
        print('close spider')

Custom pipeline

7. Middleware

class SpiderMiddleware(object):

    def process_spider_input(self, response, spider):
        """
        Called after the download finishes, before the response is handed to parse().
        :param response:
        :param spider:
        :return:
        """
        pass

    def process_spider_output(self, response, result, spider):
        """
        Called with the results the spider returns once it has processed the response.
        :param response:
        :param result:
        :param spider:
        :return: must return an iterable of Request or Item objects
        """
        return result

    def process_spider_exception(self, response, exception, spider):
        """
        Called when an exception is raised.
        :param response:
        :param exception:
        :param spider:
        :return: None to let the remaining middlewares handle the exception, or an
                 iterable of Response or Item objects to hand to the scheduler or pipelines
        """
        return None

    def process_start_requests(self, start_requests, spider):
        """
        Called when the spider starts.
        :param start_requests:
        :param spider:
        :return: an iterable of Request objects
        """
        return start_requests

Spider middleware

class DownMiddleware1(object):
    def process_request(self, request, spider):
        """
        Called for every request that needs to be downloaded, passing through each
        downloader middleware's process_request.
        :param request:
        :param spider:
        :return:
            None: continue with the remaining middlewares and download the request
            Response object: stop calling process_request and start calling process_response
            Request object: stop the middleware chain and hand the Request back to the scheduler
            raise IgnoreRequest: stop calling process_request and start calling process_exception
        """
        pass

    def process_response(self, request, response, spider):
        """
        Called on the way back, once the response has been downloaded.
        :param request:
        :param response:
        :param spider:
        :return:
            Response object: handed on to the other middlewares' process_response
            Request object: stop the middleware chain; the request is rescheduled for download
            raise IgnoreRequest: Request.errback is called
        """
        print('response1')
        return response

    def process_exception(self, request, exception, spider):
        """
        Called when the download handler or a process_request() (downloader middleware)
        raises an exception.
        :param request:
        :param exception:
        :param spider:
        :return:
            None: let the remaining middlewares handle the exception
            Response object: stop calling the remaining process_exception methods
            Request object: stop the middleware chain; the request is rescheduled for download
        """
        return None

Downloader middleware
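
A downloader middleware only takes effect once it is registered in settings.py. A minimal sketch, assuming the class above lives in your project's middlewares module (the module path is an example; adjust it to your project):

 DOWNLOADER_MIDDLEWARES = {
     'step8_king.middlewares.DownMiddleware1': 100,
 }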

8. Custom commands

  • Create a directory at the same level as spiders, with any name you like, e.g. commands
  • Inside it, create a crawlall.py file (the file name becomes the command name)
from scrapy.commands import ScrapyCommand
from scrapy.utils.project import get_project_settings


class Command(ScrapyCommand):

    requires_project = True

    def syntax(self):
        return '[options]'

    def short_desc(self):
        return 'Runs all of the spiders'

    def run(self, args, opts):
        spider_list = self.crawler_process.spiders.list()
        for name in spider_list:
            self.crawler_process.crawl(name, **opts.__dict__)
        self.crawler_process.start()

crawlall.py

  • Add COMMANDS_MODULE = '<project_name>.<directory_name>' to settings.py
  • From the project directory, run the command: scrapy crawlall

9. Custom extensions

Custom extensions use signals to register a chosen operation at a chosen point in the crawl.

from scrapy import signals


class MyExtension(object):
    def __init__(self, value):
        self.value = value

    @classmethod
    def from_crawler(cls, crawler):
        val = crawler.settings.getint('MMMM')
        ext = cls(val)

        crawler.signals.connect(ext.spider_opened, signal=signals.spider_opened)
        crawler.signals.connect(ext.spider_closed, signal=signals.spider_closed)

        return ext

    def spider_opened(self, spider):
        print('open')

    def spider_closed(self, spider):
        print('close')

signals
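
The extension likewise has to be registered in settings.py, and the MMMM setting it reads must exist. A minimal sketch (the module path matches the settings example in section 11; the MMMM value is an arbitrary example):

 EXTENSIONS = {
     'step8_king.extensions.MyExtension': 500,
 }
 MMMM = 1234   # read by from_crawler() above via crawler.settings.getint('MMMM')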

10. Avoiding duplicate requests

By default Scrapy deduplicates requests with scrapy.dupefilter.RFPDupeFilter. The relevant settings are:

DUPEFILTER_CLASS = 'scrapy.dupefilter.RFPDupeFilter'
DUPEFILTER_DEBUG = False
JOBDIR = "directory for the seen-requests log, e.g. /root/"  # the final path is /root/requests.seen
class RepeatUrl:
    def __init__(self):
        self.visited_url = set()

    @classmethod
    def from_settings(cls, settings):
        """
        Called when the filter is initialised.
        :param settings:
        :return:
        """
        return cls()

    def request_seen(self, request):
        """
        Check whether the current request has already been seen.
        :param request:
        :return: True if it has been visited before, False otherwise
        """
        if request.url in self.visited_url:
            return True
        self.visited_url.add(request.url)
        return False

    def open(self):
        """
        Called when crawling starts.
        :return:
        """
        print('open replication')

    def close(self, reason):
        """
        Called when crawling finishes.
        :param reason:
        :return:
        """
        print('close replication')

    def log(self, request, spider):
        """
        Log a duplicate request.
        :param request:
        :param spider:
        :return:
        """
        print('repeat', request.url)

Custom URL deduplication
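
To make Scrapy use this filter instead of the default RFPDupeFilter, point DUPEFILTER_CLASS at it in settings.py (the module path is an example; it must match where you put the class):

 DUPEFILTER_CLASS = 'step8_king.duplication.RepeatUrl'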

11. Other settings

# -*- coding: utf-8 -*-

# Scrapy settings for step8_king project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     http://doc.scrapy.org/en/latest/topics/settings.html
#     http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#     http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html

# 1. Bot name
BOT_NAME = 'step8_king'

# 2. Spider module paths
SPIDER_MODULES = ['step8_king.spiders']
NEWSPIDER_MODULE = 'step8_king.spiders'

# Crawl responsibly by identifying yourself (and your website) on the user-agent
# 3. Client User-Agent request header
# USER_AGENT = 'step8_king (+http://www.yourdomain.com)'

# Obey robots.txt rules
# 4. robots.txt compliance (set to False to ignore robots.txt)
# ROBOTSTXT_OBEY = False

# Configure maximum concurrent requests performed by Scrapy (default: 16)
# 5. Number of concurrent requests
# CONCURRENT_REQUESTS = 4

# Configure a delay for requests for the same website (default: 0)
# See http://scrapy.readthedocs.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
# 6. Download delay in seconds
# DOWNLOAD_DELAY = 2

# The download delay setting will honor only one of:
# 7. Concurrency per domain; the download delay is also applied per domain
# CONCURRENT_REQUESTS_PER_DOMAIN = 2
# Concurrency per IP; if set, CONCURRENT_REQUESTS_PER_DOMAIN is ignored and the
# download delay is applied per IP instead
# CONCURRENT_REQUESTS_PER_IP = 3

# Disable cookies (enabled by default)
# 8. Whether cookies are supported (handled through a cookiejar)
# COOKIES_ENABLED = True
# COOKIES_DEBUG = True

# Disable Telnet Console (enabled by default)
# 9. The Telnet console lets you inspect and control the running crawler:
#    connect with `telnet <ip> <port>` and issue commands
# TELNETCONSOLE_ENABLED = True
# TELNETCONSOLE_HOST = '127.0.0.1'
# TELNETCONSOLE_PORT = [6023,]

# 10. Default request headers
# Override the default request headers:
# DEFAULT_REQUEST_HEADERS = {
#     'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#     'Accept-Language': 'en',
# }

# Configure item pipelines
# See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
# 11. Item pipelines that process the scraped items
# ITEM_PIPELINES = {
#     'step8_king.pipelines.JsonPipeline': 700,
#     'step8_king.pipelines.FilePipeline': 500,
# }

# 12. Custom extensions, invoked via signals
# Enable or disable extensions
# See http://scrapy.readthedocs.org/en/latest/topics/extensions.html
# EXTENSIONS = {
#     # 'step8_king.extensions.MyExtension': 500,
# }

# 13. Maximum crawl depth; the current depth is available via request.meta; 0 means no limit
# DEPTH_LIMIT = 3

# 14. Crawl order: 0 means depth-first, LIFO (the default); 1 means breadth-first, FIFO
# Last in, first out: depth-first
# DEPTH_PRIORITY = 0
# SCHEDULER_DISK_QUEUE = 'scrapy.squeue.PickleLifoDiskQueue'
# SCHEDULER_MEMORY_QUEUE = 'scrapy.squeue.LifoMemoryQueue'
# First in, first out: breadth-first
# DEPTH_PRIORITY = 1
# SCHEDULER_DISK_QUEUE = 'scrapy.squeue.PickleFifoDiskQueue'
# SCHEDULER_MEMORY_QUEUE = 'scrapy.squeue.FifoMemoryQueue'

# 15. Scheduler queue
# SCHEDULER = 'scrapy.core.scheduler.Scheduler'
# from scrapy.core.scheduler import Scheduler

# 16. URL deduplication
# DUPEFILTER_CLASS = 'step8_king.duplication.RepeatUrl'

# Enable and configure the AutoThrottle extension (disabled by default)
# See http://doc.scrapy.org/en/latest/topics/autothrottle.html
"""
17. AutoThrottle algorithm
    from scrapy.contrib.throttle import AutoThrottle
    How the throttle works:
    1. minimum delay:           DOWNLOAD_DELAY
    2. maximum delay:           AUTOTHROTTLE_MAX_DELAY
    3. initial download delay:  AUTOTHROTTLE_START_DELAY
    4. when a request finishes downloading, its "latency" is measured, i.e. the time
       between establishing the connection and receiving the response headers
    5. target concurrency used for the calculation: AUTOTHROTTLE_TARGET_CONCURRENCY
        target_delay = latency / self.target_concurrency
        new_delay = (slot.delay + target_delay) / 2.0   # slot.delay is the previous delay
        new_delay = max(target_delay, new_delay)
        new_delay = min(max(self.mindelay, new_delay), self.maxdelay)
        slot.delay = new_delay
"""

# Enable AutoThrottle
# AUTOTHROTTLE_ENABLED = True
# The initial download delay
# AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
# AUTOTHROTTLE_MAX_DELAY = 10
# The average number of requests Scrapy should be sending in parallel to each remote server
# AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
# AUTOTHROTTLE_DEBUG = True

# Enable and configure HTTP caching (disabled by default)
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
"""
18. Enabling the HTTP cache
    Caches requests/responses that have already been sent so they can be reused later.
    from scrapy.downloadermiddlewares.httpcache import HttpCacheMiddleware
    from scrapy.extensions.httpcache import DummyPolicy
    from scrapy.extensions.httpcache import FilesystemCacheStorage
"""
# Whether caching is enabled
# HTTPCACHE_ENABLED = True

# Cache policy: cache every request; later requests are served straight from the cache
# HTTPCACHE_POLICY = "scrapy.extensions.httpcache.DummyPolicy"
# Cache policy: cache according to HTTP response headers such as Cache-Control and Last-Modified
# HTTPCACHE_POLICY = "scrapy.extensions.httpcache.RFC2616Policy"

# Cache expiration time
# HTTPCACHE_EXPIRATION_SECS = 0
# Cache storage directory
# HTTPCACHE_DIR = 'httpcache'
# HTTP status codes to skip caching
# HTTPCACHE_IGNORE_HTTP_CODES = []
# Cache storage backend
# HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
"""
19. Proxies; configured either via environment variables or a custom middleware

    Option 1: use the default HttpProxyMiddleware, driven by environment variables
        from scrapy.contrib.downloadermiddleware.httpproxy import HttpProxyMiddleware
        os.environ
        {
            http_proxy: http://root:woshiniba@192.168.11.11:9999/
            https_proxy: http://192.168.11.11:9999/
        }

    Option 2: use a custom downloader middleware

        import base64
        import random
        import six

        def to_bytes(text, encoding=None, errors='strict'):
            if isinstance(text, bytes):
                return text
            if not isinstance(text, six.string_types):
                raise TypeError('to_bytes must receive a unicode, str or bytes '
                                'object, got %s' % type(text).__name__)
            if encoding is None:
                encoding = 'utf-8'
            return text.encode(encoding, errors)

        class ProxyMiddleware(object):
            def process_request(self, request, spider):
                PROXIES = [
                    {'ip_port': '111.11.228.75:80', 'user_pass': ''},
                    {'ip_port': '120.198.243.22:80', 'user_pass': ''},
                    {'ip_port': '111.8.60.9:8123', 'user_pass': ''},
                    {'ip_port': '101.71.27.120:80', 'user_pass': ''},
                    {'ip_port': '122.96.59.104:80', 'user_pass': ''},
                    {'ip_port': '122.224.249.122:8088', 'user_pass': ''},
                ]
                proxy = random.choice(PROXIES)
                if proxy['user_pass'] is not None:
                    request.meta['proxy'] = to_bytes("http://%s" % proxy['ip_port'])
                    encoded_user_pass = base64.encodestring(to_bytes(proxy['user_pass']))
                    request.headers['Proxy-Authorization'] = to_bytes('Basic ' + encoded_user_pass)
                    print("**************ProxyMiddleware have pass************" + proxy['ip_port'])
                else:
                    print("**************ProxyMiddleware no pass************" + proxy['ip_port'])
                    request.meta['proxy'] = to_bytes("http://%s" % proxy['ip_port'])

        DOWNLOADER_MIDDLEWARES = {
            'step8_king.middlewares.ProxyMiddleware': 500,
        }
"""
"""
20. HTTPS access
    There are two cases when crawling over HTTPS:
    1. The target site uses a trusted certificate (supported by default)
        DOWNLOADER_HTTPCLIENTFACTORY = "scrapy.core.downloader.webclient.ScrapyHTTPClientFactory"
        DOWNLOADER_CLIENTCONTEXTFACTORY = "scrapy.core.downloader.contextfactory.ScrapyClientContextFactory"

    2. The target site uses a custom certificate
        DOWNLOADER_HTTPCLIENTFACTORY = "scrapy.core.downloader.webclient.ScrapyHTTPClientFactory"
        DOWNLOADER_CLIENTCONTEXTFACTORY = "step8_king.https.MySSLFactory"

        # https.py
        from scrapy.core.downloader.contextfactory import ScrapyClientContextFactory
        from twisted.internet.ssl import (optionsForClientTLS, CertificateOptions, PrivateCertificate)

        class MySSLFactory(ScrapyClientContextFactory):
            def getCertificateOptions(self):
                from OpenSSL import crypto
                v1 = crypto.load_privatekey(crypto.FILETYPE_PEM, open('/Users/wupeiqi/client.key.unsecure', mode='r').read())
                v2 = crypto.load_certificate(crypto.FILETYPE_PEM, open('/Users/wupeiqi/client.pem', mode='r').read())
                return CertificateOptions(
                    privateKey=v1,  # a PKey object
                    certificate=v2,  # an X509 object
                    verify=False,
                    method=getattr(self, 'method', getattr(self, '_ssl_method', None))
                )

    Other related classes:
        scrapy.core.downloader.handlers.http.HttpDownloadHandler
        scrapy.core.downloader.webclient.ScrapyHTTPClientFactory
        scrapy.core.downloader.contextfactory.ScrapyClientContextFactory
    Related settings:
        DOWNLOADER_HTTPCLIENTFACTORY
        DOWNLOADER_CLIENTCONTEXTFACTORY
"""
"""
21. Spider middleware

    class SpiderMiddleware(object):

        def process_spider_input(self, response, spider):
            '''
            Called after the download finishes, before the response is handed to parse().
            :param response:
            :param spider:
            :return:
            '''
            pass

        def process_spider_output(self, response, result, spider):
            '''
            Called with the results the spider returns once it has processed the response.
            :param response:
            :param result:
            :param spider:
            :return: must return an iterable of Request or Item objects
            '''
            return result

        def process_spider_exception(self, response, exception, spider):
            '''
            Called when an exception is raised.
            :param response:
            :param exception:
            :param spider:
            :return: None to let the remaining middlewares handle the exception, or an
                     iterable of Response or Item objects to hand to the scheduler or pipelines
            '''
            return None

        def process_start_requests(self, start_requests, spider):
            '''
            Called when the spider starts.
            :param start_requests:
            :param spider:
            :return: an iterable of Request objects
            '''
            return start_requests

    Built-in spider middlewares:
        'scrapy.contrib.spidermiddleware.httperror.HttpErrorMiddleware': 50,
        'scrapy.contrib.spidermiddleware.offsite.OffsiteMiddleware': 500,
        'scrapy.contrib.spidermiddleware.referer.RefererMiddleware': 700,
        'scrapy.contrib.spidermiddleware.urllength.UrlLengthMiddleware': 800,
        'scrapy.contrib.spidermiddleware.depth.DepthMiddleware': 900,
"""
# from scrapy.contrib.spidermiddleware.referer import RefererMiddleware
# Enable or disable spider middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
SPIDER_MIDDLEWARES = {
    # 'step8_king.middlewares.SpiderMiddleware': 543,
}
"""
22. Downloader middleware

    class DownMiddleware1(object):
        def process_request(self, request, spider):
            '''
            Called for every request that needs to be downloaded, passing through each
            downloader middleware's process_request.
            :param request:
            :param spider:
            :return:
                None: continue with the remaining middlewares and download the request
                Response object: stop calling process_request and start calling process_response
                Request object: stop the middleware chain and hand the Request back to the scheduler
                raise IgnoreRequest: stop calling process_request and start calling process_exception
            '''
            pass

        def process_response(self, request, response, spider):
            '''
            Called on the way back, once the response has been downloaded.
            :param request:
            :param response:
            :param spider:
            :return:
                Response object: handed on to the other middlewares' process_response
                Request object: stop the middleware chain; the request is rescheduled for download
                raise IgnoreRequest: Request.errback is called
            '''
            print('response1')
            return response

        def process_exception(self, request, exception, spider):
            '''
            Called when the download handler or a process_request() (downloader middleware)
            raises an exception.
            :param request:
            :param exception:
            :param spider:
            :return:
                None: let the remaining middlewares handle the exception
                Response object: stop calling the remaining process_exception methods
                Request object: stop the middleware chain; the request is rescheduled for download
            '''
            return None

    Default downloader middlewares:
    {
        'scrapy.contrib.downloadermiddleware.robotstxt.RobotsTxtMiddleware': 100,
        'scrapy.contrib.downloadermiddleware.httpauth.HttpAuthMiddleware': 300,
        'scrapy.contrib.downloadermiddleware.downloadtimeout.DownloadTimeoutMiddleware': 350,
        'scrapy.contrib.downloadermiddleware.useragent.UserAgentMiddleware': 400,
        'scrapy.contrib.downloadermiddleware.retry.RetryMiddleware': 500,
        'scrapy.contrib.downloadermiddleware.defaultheaders.DefaultHeadersMiddleware': 550,
        'scrapy.contrib.downloadermiddleware.redirect.MetaRefreshMiddleware': 580,
        'scrapy.contrib.downloadermiddleware.httpcompression.HttpCompressionMiddleware': 590,
        'scrapy.contrib.downloadermiddleware.redirect.RedirectMiddleware': 600,
        'scrapy.contrib.downloadermiddleware.cookies.CookiesMiddleware': 700,
        'scrapy.contrib.downloadermiddleware.httpproxy.HttpProxyMiddleware': 750,
        'scrapy.contrib.downloadermiddleware.chunked.ChunkedTransferMiddleware': 830,
        'scrapy.contrib.downloadermiddleware.stats.DownloaderStats': 850,
        'scrapy.contrib.downloadermiddleware.httpcache.HttpCacheMiddleware': 900,
    }
"""
# from scrapy.contrib.downloadermiddleware.httpauth import HttpAuthMiddleware
# Enable or disable downloader middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
# DOWNLOADER_MIDDLEWARES = {
#     'step8_king.middlewares.DownMiddleware1': 100,
#     'step8_king.middlewares.DownMiddleware2': 500,
# }

settings.py configuration

12. TinyScrapy

# -*- coding: utf-8 -*-

from twisted.web.client import getPage, defer
from twisted.internet import reactor
import queue


class Response(object):
    def __init__(self, body, request):
        self.body = body
        self.request = request
        self.url = request.url

    @property
    def text(self):
        return self.body.decode('utf-8')


class Request(object):
    def __init__(self, url, callback=None):
        self.url = url
        self.callback = callback


class Scheduler(object):
    def __init__(self, engine):
        self.q = queue.Queue()
        self.engine = engine

    def enqueue_request(self, request):
        self.q.put(request)

    def next_request(self):
        try:
            req = self.q.get(block=False)
        except Exception as e:
            req = None
        return req

    def size(self):
        return self.q.qsize()


class ExecutionEngine(object):
    def __init__(self):
        self._closewait = None
        self.running = True
        self.start_requests = None
        self.scheduler = Scheduler(self)
        self.inprogress = set()

    def check_empty(self, response):
        if not self.running:
            self._closewait.callback()

    def _next_request(self):
        while self.start_requests:
            try:
                request = next(self.start_requests)
            except StopIteration:
                self.start_requests = None
            else:
                self.scheduler.enqueue_request(request)

        while len(self.inprogress) < 5 and self.scheduler.size() > 0:  # maximum concurrency of 5
            request = self.scheduler.next_request()
            if not request:
                break
            self.inprogress.add(request)
            d = getPage(bytes(request.url, encoding='utf-8'))
            d.addBoth(self._handle_downloader_output, request)
            d.addBoth(lambda x, req: self.inprogress.remove(req), request)
            d.addBoth(lambda x: self._next_request())

        if not len(self.inprogress) and not self.scheduler.size():
            self._closewait.callback(None)

    def _handle_downloader_output(self, body, request):
        """
        Take the downloaded body, run the callback, and enqueue any requests it yields.
        :param body:
        :param request:
        :return:
        """
        import types
        response = Response(body, request)
        func = request.callback or self.spider.parse
        gen = func(response)
        if isinstance(gen, types.GeneratorType):
            for req in gen:
                self.scheduler.enqueue_request(req)

    @defer.inlineCallbacks
    def start(self):
        self._closewait = defer.Deferred()
        yield self._closewait

    @defer.inlineCallbacks
    def open_spider(self, spider, start_requests):
        self.start_requests = start_requests
        self.spider = spider
        yield None
        reactor.callLater(0, self._next_request)


class Crawler(object):
    def __init__(self, spidercls):
        self.spidercls = spidercls
        self.spider = None
        self.engine = None

    @defer.inlineCallbacks
    def crawl(self):
        self.engine = ExecutionEngine()
        self.spider = self.spidercls()
        start_requests = iter(self.spider.start_requests())
        yield self.engine.open_spider(self.spider, start_requests)
        yield self.engine.start()


class CrawlerProcess(object):
    def __init__(self):
        self._active = set()
        self.crawlers = set()

    def crawl(self, spidercls, *args, **kwargs):
        crawler = Crawler(spidercls)
        self.crawlers.add(crawler)
        d = crawler.crawl(*args, **kwargs)
        self._active.add(d)
        return d

    def start(self):
        d1 = defer.DeferredList(self._active)
        d1.addBoth(self._stop_reactor)
        reactor.run()

    def _stop_reactor(self, _=None):
        reactor.stop()


class Spider(object):
    def start_requests(self):
        for url in self.start_urls:
            yield Request(url)


class ChoutiSpider(Spider):
    name = "chouti"
    start_urls = [
        'http://dig.chouti.com/',
    ]

    def parse(self, response):
        print(response.text)


class CnblogsSpider(Spider):
    name = "cnblogs"
    start_urls = [
        'http://www.cnblogs.com/',
    ]

    def parse(self, response):
        print(response.text)


if __name__ == '__main__':
    spider_cls_list = [ChoutiSpider, CnblogsSpider]
    crawler_process = CrawlerProcess()
    for spider_cls in spider_cls_list:
        crawler_process.crawl(spider_cls)
    crawler_process.start()

For more documentation, see: http://scrapy-chs.readthedocs.io/zh_CN/latest/index.html

Reposted from:

http://www.cnblogs.com/wupeiqi/articles/6229292.html
