Review of Yesterday's Content

Open the DianShang project written yesterday and look at items.py:

    class AmazonItem(scrapy.Item):

        name = scrapy.Field()      # product name
        price = scrapy.Field()     # price
        delivery = scrapy.Field()  # delivery method

The class name AmazonItem can be anything you like. The 3 fields defined here correspond one-to-one with the 3 keys set in spiders\amazon.py:

    # build the standardized data object
    item = AmazonItem()  # instantiation; by default it behaves like an empty dict
    # add the key-value pairs
    item["name"] = name
    item["price"] = price
    item["delivery"] = delivery

Now look at pipelines.py:

    class MongodbPipeline(object):

        def __init__(self, host, port, db, table):
            self.host = host
            self.port = port
            self.db = db
            self.table = table

        @classmethod
        def from_crawler(cls, crawler):
            """
            Scrapy first checks via getattr whether we defined a custom
            from_crawler; if so, it calls it to create the instance
            """
            HOST = crawler.settings.get('HOST')
            PORT = crawler.settings.get('PORT')
            DB = crawler.settings.get('DB')
            TABLE = crawler.settings.get('TABLE')
            return cls(HOST, PORT, DB, TABLE)

If a from_crawler method exists, it runs first! Only then does __init__ run, triggered by it.

from_crawler must return an instance. The cls(HOST, PORT, DB, TABLE) call is what actually invokes __init__, and the 4 values it passes map one-to-one onto __init__'s parameters!
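
In other words, the loading logic behaves roughly like this simplified sketch (not the actual Scrapy source, which lives in its pipeline/middleware managers and varies by version):

    # cls is the pipeline class, crawler the running Crawler object
    from_crawler = getattr(cls, 'from_crawler', None)
    if callable(from_crawler):
        # our classmethod runs, which in turn calls cls(HOST, PORT, DB, TABLE),
        # and that call is what triggers __init__
        instance = from_crawler(crawler)
    else:
        # no from_crawler defined: plain construction
        instance = cls()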

pipelines.py can hold more than one pipeline, for example one for file handling.

Modify pipelines.py and add a FilePipeline, which writes the scraped data to a file:

    # -*- coding: utf-8 -*-

    # Define your item pipelines here
    #
    # Don't forget to add your pipeline to the ITEM_PIPELINES setting
    # See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html

    from pymongo import MongoClient

    class MongodbPipeline(object):

        def __init__(self, host, port, db, table):
            self.host = host
            self.port = port
            self.db = db
            self.table = table

        @classmethod
        def from_crawler(cls, crawler):
            """
            Scrapy first checks via getattr whether we defined a custom
            from_crawler; if so, it calls it to create the instance
            """
            HOST = crawler.settings.get('HOST')
            PORT = crawler.settings.get('PORT')
            DB = crawler.settings.get('DB')
            TABLE = crawler.settings.get('TABLE')
            return cls(HOST, PORT, DB, TABLE)

        def open_spider(self, spider):
            """
            Runs once, when the spider starts
            """
            # self.client = MongoClient('mongodb://%s:%s@%s:%s' % (self.user, self.pwd, self.host, self.port))
            self.client = MongoClient(host=self.host, port=self.port)

        def close_spider(self, spider):
            """
            Runs once, when the spider closes
            """
            self.client.close()

        def process_item(self, item, spider):
            # persist the item
            d = dict(item)
            if all(d.values()):
                # insert() is deprecated in pymongo 3.x; insert_one() is the current API
                self.client[self.db][self.table].insert_one(d)
                print("one record added")

    class FilePipeline(object):

        def __init__(self, file_path):
            self.file_path = file_path

        @classmethod
        def from_crawler(cls, crawler):
            """
            Scrapy first checks via getattr whether we defined a custom
            from_crawler; if so, it calls it to create the instance
            """
            file_path = crawler.settings.get('FILE_PATH')
            return cls(file_path)

        def open_spider(self, spider):
            """
            Runs once, when the spider starts
            """
            print('==============>the spider has just started')
            self.fileobj = open(self.file_path, 'w', encoding='utf-8')

        def close_spider(self, spider):
            """
            Runs once, when the spider closes
            """
            print('==============>the spider has finished')
            self.fileobj.close()

        def process_item(self, item, spider):
            # persist the item
            print("items----->", item)
            d = dict(item)
            if all(d.values()):
                self.fileobj.write("%s\n" % str(d))
            # return hands the item on, so later pipelines keep processing it
            return item
            # raising an exception instead drops the item, so later pipelines never see it

If you raise an exception instead, the item is dropped and will not be processed by any later pipeline.
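
Scrapy ships a dedicated exception for exactly this. A sketch of how FilePipeline.process_item could actively drop incomplete items instead of silently skipping them:

    from scrapy.exceptions import DropItem

    def process_item(self, item, spider):
        d = dict(item)
        if not all(d.values()):
            # the item is discarded; pipelines after this one never see it
            raise DropItem("missing fields in %r" % d)
        self.fileobj.write("%s\n" % str(d))
        return item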

The file path held in file_path needs to be fetched from settings.

Modify settings.py and add a FILE_PATH entry on the last line:

    FILE_PATH = 'pipe.txt'

This is a relative path; the file actually ends up in the project root directory (the directory the crawl is launched from).
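
If you want the location to be independent of the working directory, a common alternative (a sketch, not part of the original post) is to build an absolute path in settings.py:

    import os

    # anchor pipe.txt next to settings.py instead of the working directory
    BASE_DIR = os.path.dirname(os.path.abspath(__file__))
    FILE_PATH = os.path.join(BASE_DIR, 'pipe.txt')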

Modify settings.py and register the pipelines:

    ITEM_PIPELINES = {
        'DianShang.pipelines.MongodbPipeline': 300,
        'DianShang.pipelines.FilePipeline': 500,
    }

Modify pipelines.py: the process_item method of MongodbPipeline must now return the item. Pipelines run in ascending order of their number, so MongodbPipeline (300) executes before FilePipeline (500); without the return, FilePipeline would never receive the item.

    def process_item(self, item, spider):
        # persist the item
        d = dict(item)
        if all(d.values()):
            self.client[self.db][self.table].insert_one(d)
            print("one record added")

        return item

Run bin.py, then check the contents of pipe.txt: it should hold one line per scraped item.
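
bin.py is the runner file created in an earlier lesson. If you do not have it at hand, a minimal version looks roughly like this (assuming the spider name 'amazon' from this project):

    # bin.py -- run the spider without typing the scrapy CLI command
    from scrapy.cmdline import execute

    execute(['scrapy', 'crawl', 'amazon', '--nolog'])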

Modify spiders-->amazon.py and add a close method. The method name must not be changed!

    # -*- coding: utf-8 -*-
    import scrapy
    from scrapy import Request  # import the Request class
    from DianShang.items import AmazonItem  # import the item

    class AmazonSpider(scrapy.Spider):
        name = 'amazon'
        allowed_domains = ['amazon.cn']
        # start_urls = ['http://amazon.cn/']
        # per-spider settings; note: the attribute name must be custom_settings
        custom_settings = {
            'REQUEST_HEADERS': {
                'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/62.0.3202.75 Safari/537.36',
            }
        }

        def start_requests(self):
            r1 = Request(url="https://www.amazon.cn/s/ref=nb_sb_ss_i_3_6?field-keywords=iphone+x",
                         headers=self.settings.get('REQUEST_HEADERS'),)
            yield r1

        def parse(self, response):
            # product detail links
            detail_urls = response.xpath('//li[contains(@id,"result_")]/div/div[3]/div[1]/a/@href').extract()
            # print(detail_urls)
            for url in detail_urls:
                yield Request(url=url,
                              headers=self.settings.get('REQUEST_HEADERS'),  # request headers
                              callback=self.parse_detail,  # callback function
                              dont_filter=True  # skip dedup filtering
                              )

        def parse_detail(self, response):  # extract the product details
            # product name; take the first match
            name = response.xpath('//*[@id="productTitle"]/text()').extract_first()
            if name:
                name = name.strip()

            # product price
            price = response.xpath('//*[@id="priceblock_ourprice"]/text()').extract_first()
            # delivery method; *[1] takes the first child tag, i.e. the b tag
            delivery = response.xpath('//*[@id="ddmMerchantMessage"]/*[1]/text()').extract_first()
            print(name, price, delivery)

            # build the standardized data object
            item = AmazonItem()  # instantiation; by default it behaves like an empty dict
            # add the key-value pairs
            item["name"] = name
            item["price"] = price
            item["delivery"] = delivery

            return item  # must be returned

        def close(self, reason):
            print("spider is closed")

This method is called once, when the spider finishes running. It can print some log information or take care of cleanup work!

I. Download Middleware

    class MyDownMiddleware(object):
        def process_request(self, request, spider):
            """
            Called, through every downloader middleware's process_request, when a request is about to be downloaded
            :param request:
            :param spider:
            :return:
                None: continue with the later middlewares and the download;
                Response object: stop running process_request and start running process_response
                Request object: stop the middleware chain and hand the Request back to the scheduler
                raise IgnoreRequest: stop running process_request and start running process_exception
            """
            pass

        def process_response(self, request, response, spider):
            """
            Called on the way back, once the download has completed
            :param request:
            :param response:
            :param spider:
            :return:
                Response object: handed on to the other middlewares' process_response
                Request object: stop the middleware chain; the request is rescheduled for download
                raise IgnoreRequest: Request.errback is invoked
            """
            print('response1')
            return response

        def process_exception(self, request, exception, spider):
            """
            Called when the download handler or a process_request() (downloader middleware) raises an exception
            :param request:
            :param exception:
            :param spider:
            :return:
                None: keep passing the exception to the later middlewares;
                Response object: stop running the remaining process_exception methods
                Request object: stop the middleware chain; the request is rescheduled for download
            """
            return None
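
The post does not show how such a middleware gets enabled; for reference, registering it in settings.py would look roughly like this (the module path assumes the class sits in DianShang/middlewares.py, and the number 543 is arbitrary; lower numbers sit closer to the engine):

    DOWNLOADER_MIDDLEWARES = {
        'DianShang.middlewares.MyDownMiddleware': 543,
    }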

Download middleware is mainly used for rotating proxy IPs! Some sites, for example, allow only 3 downloads per minute; exceed that and your IP gets blocked.

Once that happens, further requests from that IP are pointless; you have to switch to a different IP address!

The Scrapy architecture involves 8 main steps. Things can go wrong between step 4 and step 5, in which case the request has to be made again!

The blue blocks in the middle of the architecture diagram are the middlewares! If the IP swap is done inside a middleware, every request is guaranteed to go out from a different IP address.

For that you need an IP proxy pool: each time a request passes through the middleware, one IP is taken from the pool and wrapped onto the request.

As long as the IP differs on every request, such sites cannot block you!

Why is the middleware the recommended place for the IP swap? At the moment, spiders contains only one crawler, the Amazon one.

Suppose there were also a Taobao crawler that needs to swap IPs as well; what then? Implement the IP swap in the code of every single spider?

That would duplicate code. Do it once in a middleware, and no matter how many spiders there are, they all rotate IPs automatically.

In short: when the same bulk operation applies to all requests, use a middleware!

That goes not only for IP rotation, but also for cookie pools and account pools (a pile of real accounts, bought with money).

In Django's middleware, a return HttpResponse or an exception goes straight back out the way it came.

In the Scrapy framework, however, the Response travels back from the innermost layer outwards: every middleware's process_response gets executed!

Look at the Request object in the blue blocks above: it does the wrapping for you, and the IP swap is done inside that wrapping.

If an error occurs, it is handed back to the SCHEDULER, i.e. the scheduler.

Example:

Modify middlewares.py and add 2 download middlewares.

Due to time constraints, the steps are omitted... (a rough sketch of what such a middleware might look like follows below)
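As a rough idea only, a minimal random-proxy download middleware might look like the following sketch. The PROXY_LIST setting and the RandomProxyMiddleware name are assumptions made for illustration; they are not part of the original project:

    # middlewares.py -- a minimal sketch, assuming a settings entry such as
    # PROXY_LIST = ['http://111.11.228.75:80', 'http://120.198.243.22:80']
    import random

    class RandomProxyMiddleware(object):

        def __init__(self, proxies):
            self.proxies = proxies

        @classmethod
        def from_crawler(cls, crawler):
            # read the (assumed) PROXY_LIST setting, just like FILE_PATH earlier
            return cls(crawler.settings.getlist('PROXY_LIST'))

        def process_request(self, request, spider):
            # attach a different proxy to every outgoing request
            request.meta['proxy'] = random.choice(self.proxies)

Like any download middleware, it would then be registered in settings.py under DOWNLOADER_MIDDLEWARES, just like the pipelines earlier.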

A ready-made proxy pool project is linked here:

https://github.com/jhao104/proxy_pool

For usage instructions, first read its README.md.

Due to time constraints, the steps are omitted...

II. settings Configuration

    # -*- coding: utf-8 -*-

    # Scrapy settings for step8_king project
    #
    # For simplicity, this file contains only settings considered important or
    # commonly used. You can find more settings consulting the documentation:
    #
    # http://doc.scrapy.org/en/latest/topics/settings.html
    # http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
    # http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html

    # 1. Crawler name
    BOT_NAME = 'step8_king'

    # 2. Path of the spider modules
    SPIDER_MODULES = ['step8_king.spiders']
    NEWSPIDER_MODULE = 'step8_king.spiders'

    # Crawl responsibly by identifying yourself (and your website) on the user-agent
    # 3. Client User-Agent request header
    # USER_AGENT = 'step8_king (+http://www.yourdomain.com)'

    # Obey robots.txt rules
    # 4. Whether to obey robots.txt
    # ROBOTSTXT_OBEY = False

    # Configure maximum concurrent requests performed by Scrapy (default: 16)
    # 5. Number of concurrent requests
    # CONCURRENT_REQUESTS = 4

    # Configure a delay for requests for the same website (default: 0)
    # See http://scrapy.readthedocs.org/en/latest/topics/settings.html#download-delay
    # See also autothrottle settings and docs
    # 6. Download delay, in seconds
    # DOWNLOAD_DELAY = 2

    # The download delay setting will honor only one of:
    # 7. Concurrency per domain; the download delay is then also applied per domain
    # CONCURRENT_REQUESTS_PER_DOMAIN = 2
    # Concurrency per IP; if set, CONCURRENT_REQUESTS_PER_DOMAIN is ignored,
    # and the download delay is applied per IP instead
    # CONCURRENT_REQUESTS_PER_IP = 3

    # Disable cookies (enabled by default)
    # 8. Whether cookies are supported; they are handled through a cookiejar
    # COOKIES_ENABLED = True
    # COOKIES_DEBUG = True

    # Disable Telnet Console (enabled by default)
    # 9. The Telnet console lets you inspect and operate the running crawler, etc.
    # connect with: telnet ip port, then drive the crawler via commands
    # TELNETCONSOLE_ENABLED = True
    # TELNETCONSOLE_HOST = '127.0.0.1'
    # TELNETCONSOLE_PORT = [6023,]

    # 10. Default request headers
    # Override the default request headers:
    # DEFAULT_REQUEST_HEADERS = {
    #     'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    #     'Accept-Language': 'en',
    # }

    # Configure item pipelines
    # See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
    # 11. Pipelines that process the scraped items
    # ITEM_PIPELINES = {
    #     'step8_king.pipelines.JsonPipeline': 700,
    #     'step8_king.pipelines.FilePipeline': 500,
    # }

    # 12. Custom extensions, invoked via signals
    # Enable or disable extensions
    # See http://scrapy.readthedocs.org/en/latest/topics/extensions.html
    # EXTENSIONS = {
    #     # 'step8_king.extensions.MyExtension': 500,
    # }

    # 13. Maximum crawl depth; the current depth can be read from meta; 0 means no depth limit
    # DEPTH_LIMIT = 3

    # 14. Crawl order: 0 means depth-first, LIFO (default); 1 means breadth-first, FIFO

    # last in, first out: depth-first
    # DEPTH_PRIORITY = 0
    # SCHEDULER_DISK_QUEUE = 'scrapy.squeues.PickleLifoDiskQueue'
    # SCHEDULER_MEMORY_QUEUE = 'scrapy.squeues.LifoMemoryQueue'

    # first in, first out: breadth-first
    # DEPTH_PRIORITY = 1
    # SCHEDULER_DISK_QUEUE = 'scrapy.squeues.PickleFifoDiskQueue'
    # SCHEDULER_MEMORY_QUEUE = 'scrapy.squeues.FifoMemoryQueue'

    # 15. Scheduler queue
    # SCHEDULER = 'scrapy.core.scheduler.Scheduler'
    # from scrapy.core.scheduler import Scheduler

    # 16. Deduplication of visited URLs
    # DUPEFILTER_CLASS = 'step8_king.duplication.RepeatUrl'

    # Enable and configure the AutoThrottle extension (disabled by default)
    # See http://doc.scrapy.org/en/latest/topics/autothrottle.html

    """
    17. The auto-throttling algorithm
    from scrapy.contrib.throttle import AutoThrottle
    Auto-throttle settings:
    1. take the minimum delay from DOWNLOAD_DELAY
    2. take the maximum delay from AUTOTHROTTLE_MAX_DELAY
    3. set the initial download delay from AUTOTHROTTLE_START_DELAY
    4. when a request finishes downloading, take its "connection" latency, i.e. the time
       between sending the request and receiving the response headers
    5. AUTOTHROTTLE_TARGET_CONCURRENCY is then used in the computation:
        target_delay = latency / self.target_concurrency
        new_delay = (slot.delay + target_delay) / 2.0  # slot.delay is the previous delay
        new_delay = max(target_delay, new_delay)
        new_delay = min(max(self.mindelay, new_delay), self.maxdelay)
        slot.delay = new_delay
    """

    # enable auto-throttling
    # AUTOTHROTTLE_ENABLED = True
    # The initial download delay
    # AUTOTHROTTLE_START_DELAY = 5
    # The maximum download delay to be set in case of high latencies
    # AUTOTHROTTLE_MAX_DELAY = 10
    # The average number of requests Scrapy should be sending in parallel to each remote server
    # AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0

    # Enable showing throttling stats for every response received:
    # AUTOTHROTTLE_DEBUG = True

    # Enable and configure HTTP caching (disabled by default)
    # See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings

    """
    18. Enable caching
    The purpose is to cache requests and responses that were already sent, for later reuse

    from scrapy.downloadermiddlewares.httpcache import HttpCacheMiddleware
    from scrapy.extensions.httpcache import DummyPolicy
    from scrapy.extensions.httpcache import FilesystemCacheStorage
    """
    # whether to enable the cache
    # HTTPCACHE_ENABLED = True

    # cache policy: every request is cached; the next identical request is served from the cache
    # HTTPCACHE_POLICY = "scrapy.extensions.httpcache.DummyPolicy"
    # cache policy: cache according to HTTP response headers such as Cache-Control and Last-Modified
    # HTTPCACHE_POLICY = "scrapy.extensions.httpcache.RFC2616Policy"

    # cache expiry, in seconds
    # HTTPCACHE_EXPIRATION_SECS = 0

    # directory the cache is stored in
    # HTTPCACHE_DIR = 'httpcache'

    # HTTP status codes that are never cached
    # HTTPCACHE_IGNORE_HTTP_CODES = []

    # cache storage backend
    # HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

    """
    19. Proxies; method one requires setting environment variables
    from scrapy.contrib.downloadermiddleware.httpproxy import HttpProxyMiddleware

    Method one: use the built-in middleware
    os.environ
    {
        http_proxy: http://root:woshiniba@192.168.11.11:9999/
        https_proxy: http://192.168.11.11:9999/
    }
    Method two: use a custom downloader middleware

    import base64
    import random

    def to_bytes(text, encoding=None, errors='strict'):
        if isinstance(text, bytes):
            return text
        if not isinstance(text, str):
            raise TypeError('to_bytes must receive a unicode, str or bytes '
                            'object, got %s' % type(text).__name__)
        if encoding is None:
            encoding = 'utf-8'
        return text.encode(encoding, errors)

    class ProxyMiddleware(object):
        def process_request(self, request, spider):
            PROXIES = [
                {'ip_port': '111.11.228.75:80', 'user_pass': ''},
                {'ip_port': '120.198.243.22:80', 'user_pass': ''},
                {'ip_port': '111.8.60.9:8123', 'user_pass': ''},
                {'ip_port': '101.71.27.120:80', 'user_pass': ''},
                {'ip_port': '122.96.59.104:80', 'user_pass': ''},
                {'ip_port': '122.224.249.122:8088', 'user_pass': ''},
            ]
            proxy = random.choice(PROXIES)
            if proxy['user_pass'] is not None:
                request.meta['proxy'] = to_bytes("http://%s" % proxy['ip_port'])
                encoded_user_pass = base64.b64encode(to_bytes(proxy['user_pass']))
                request.headers['Proxy-Authorization'] = to_bytes('Basic ') + encoded_user_pass
                print("**************ProxyMiddleware have pass************" + proxy['ip_port'])
            else:
                print("**************ProxyMiddleware no pass************" + proxy['ip_port'])
                request.meta['proxy'] = to_bytes("http://%s" % proxy['ip_port'])

    DOWNLOADER_MIDDLEWARES = {
        'step8_king.middlewares.ProxyMiddleware': 500,
    }
    """

    """
    20. HTTPS access
    There are two situations when crawling over HTTPS:
    1. the target site uses a trusted certificate (supported by default)
        DOWNLOADER_HTTPCLIENTFACTORY = "scrapy.core.downloader.webclient.ScrapyHTTPClientFactory"
        DOWNLOADER_CLIENTCONTEXTFACTORY = "scrapy.core.downloader.contextfactory.ScrapyClientContextFactory"

    2. the target site uses a custom certificate
        DOWNLOADER_HTTPCLIENTFACTORY = "scrapy.core.downloader.webclient.ScrapyHTTPClientFactory"
        DOWNLOADER_CLIENTCONTEXTFACTORY = "step8_king.https.MySSLFactory"

        # https.py
        from scrapy.core.downloader.contextfactory import ScrapyClientContextFactory
        from twisted.internet.ssl import (optionsForClientTLS, CertificateOptions, PrivateCertificate)

        class MySSLFactory(ScrapyClientContextFactory):
            def getCertificateOptions(self):
                from OpenSSL import crypto
                v1 = crypto.load_privatekey(crypto.FILETYPE_PEM, open('/Users/wupeiqi/client.key.unsecure', mode='r').read())
                v2 = crypto.load_certificate(crypto.FILETYPE_PEM, open('/Users/wupeiqi/client.pem', mode='r').read())
                return CertificateOptions(
                    privateKey=v1,  # a PKey object
                    certificate=v2,  # an X509 object
                    verify=False,
                    method=getattr(self, 'method', getattr(self, '_ssl_method', None))
                )
    Other:
        related classes
            scrapy.core.downloader.handlers.http.HttpDownloadHandler
            scrapy.core.downloader.webclient.ScrapyHTTPClientFactory
            scrapy.core.downloader.contextfactory.ScrapyClientContextFactory
        related settings
            DOWNLOADER_HTTPCLIENTFACTORY
            DOWNLOADER_CLIENTCONTEXTFACTORY
    """

    """
    21. Spider middleware
    class SpiderMiddleware(object):

        def process_spider_input(self, response, spider):
            '''
            Called once the download has finished, before the response is handed to parse
            :param response:
            :param spider:
            :return:
            '''
            pass

        def process_spider_output(self, response, result, spider):
            '''
            Called on the way back, once the spider has finished processing
            :param response:
            :param result:
            :param spider:
            :return: must return an iterable containing Request and/or Item objects
            '''
            return result

        def process_spider_exception(self, response, exception, spider):
            '''
            Called on an exception
            :param response:
            :param exception:
            :param spider:
            :return: None to keep passing the exception to later middlewares; or an iterable
                     of Response/Item objects, handed to the scheduler or the pipelines
            '''
            return None

        def process_start_requests(self, start_requests, spider):
            '''
            Called when the spider starts
            :param start_requests:
            :param spider:
            :return: an iterable containing Request objects
            '''
            return start_requests

    Built-in spider middlewares:
        'scrapy.contrib.spidermiddleware.httperror.HttpErrorMiddleware': 50,
        'scrapy.contrib.spidermiddleware.offsite.OffsiteMiddleware': 500,
        'scrapy.contrib.spidermiddleware.referer.RefererMiddleware': 700,
        'scrapy.contrib.spidermiddleware.urllength.UrlLengthMiddleware': 800,
        'scrapy.contrib.spidermiddleware.depth.DepthMiddleware': 900,
    """
    # from scrapy.contrib.spidermiddleware.referer import RefererMiddleware
    # Enable or disable spider middlewares
    # See http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
    SPIDER_MIDDLEWARES = {
        # 'step8_king.middlewares.SpiderMiddleware': 543,
    }

    """
    22. Download middleware
    class DownMiddleware1(object):
        def process_request(self, request, spider):
            '''
            Called, through every downloader middleware's process_request, when a request is about to be downloaded
            :param request:
            :param spider:
            :return:
                None: continue with the later middlewares and the download;
                Response object: stop running process_request and start running process_response
                Request object: stop the middleware chain and hand the Request back to the scheduler
                raise IgnoreRequest: stop running process_request and start running process_exception
            '''
            pass

        def process_response(self, request, response, spider):
            '''
            Called on the way back, once the download has completed
            :param request:
            :param response:
            :param spider:
            :return:
                Response object: handed on to the other middlewares' process_response
                Request object: stop the middleware chain; the request is rescheduled for download
                raise IgnoreRequest: Request.errback is invoked
            '''
            print('response1')
            return response

        def process_exception(self, request, exception, spider):
            '''
            Called when the download handler or a process_request() (downloader middleware) raises an exception
            :param request:
            :param exception:
            :param spider:
            :return:
                None: keep passing the exception to the later middlewares;
                Response object: stop running the remaining process_exception methods
                Request object: stop the middleware chain; the request is rescheduled for download
            '''
            return None

    Default download middlewares:
    {
        'scrapy.contrib.downloadermiddleware.robotstxt.RobotsTxtMiddleware': 100,
        'scrapy.contrib.downloadermiddleware.httpauth.HttpAuthMiddleware': 300,
        'scrapy.contrib.downloadermiddleware.downloadtimeout.DownloadTimeoutMiddleware': 350,
        'scrapy.contrib.downloadermiddleware.useragent.UserAgentMiddleware': 400,
        'scrapy.contrib.downloadermiddleware.retry.RetryMiddleware': 500,
        'scrapy.contrib.downloadermiddleware.defaultheaders.DefaultHeadersMiddleware': 550,
        'scrapy.contrib.downloadermiddleware.redirect.MetaRefreshMiddleware': 580,
        'scrapy.contrib.downloadermiddleware.httpcompression.HttpCompressionMiddleware': 590,
        'scrapy.contrib.downloadermiddleware.redirect.RedirectMiddleware': 600,
        'scrapy.contrib.downloadermiddleware.cookies.CookiesMiddleware': 700,
        'scrapy.contrib.downloadermiddleware.httpproxy.HttpProxyMiddleware': 750,
        'scrapy.contrib.downloadermiddleware.chunked.ChunkedTransferMiddleware': 830,
        'scrapy.contrib.downloadermiddleware.stats.DownloaderStats': 850,
        'scrapy.contrib.downloadermiddleware.httpcache.HttpCacheMiddleware': 900,
    }
    """
    # from scrapy.contrib.downloadermiddleware.httpauth import HttpAuthMiddleware
    # Enable or disable downloader middlewares
    # See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
    # DOWNLOADER_MIDDLEWARES = {
    #     'step8_king.middlewares.DownMiddleware1': 100,
    #     'step8_king.middlewares.DownMiddleware2': 500,
    # }
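
Item 16 above references a custom dedup class, step8_king.duplication.RepeatUrl, without showing it. A minimal sketch of what such a class might look like, based on the interface of Scrapy's built-in dupefilters (the class body itself is an assumption):

    class RepeatUrl:
        def __init__(self):
            self.visited_url = set()  # could also live in redis, a database, a file...

        @classmethod
        def from_settings(cls, settings):
            # Scrapy calls this to build the dupefilter; custom settings can be read here
            return cls()

        def request_seen(self, request):
            # return True if the URL was already crawled, so the request is skipped
            if request.url in self.visited_url:
                return True
            self.visited_url.add(request.url)
            return False

        def open(self):
            # called once when crawling starts
            pass

        def close(self, reason):
            # called once when crawling ends
            pass

        def log(self, request, spider):
            # log a filtered duplicate request
            pass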

III. Amazon Project

For the complete code, see:

Download the project code

To be continued...
