I. Passing Data Between Requests (meta)

  • Implements deep crawling: scraping data that spans multiple levels of pages
  • Use case: the data you want is not all contained in a single page
  • Pass the item when sending a manual request: yield scrapy.Request(url,callback,meta={'item':item})
  • The meta dict is handed along to the callback
  • Receive meta inside the callback: item = response.meta['item'] (see the minimal sketch after this list)
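A minimal, generic sketch of this pattern; the urls and field names here are placeholders and are not tied to the example project below:

import scrapy


class DemoSpider(scrapy.Spider):
    name = 'demo'
    start_urls = ['http://www.example.com/list']

    def parse(self, response):
        item = {'title': 'title parsed from the list page'}
        # hand the half-filled item to the detail callback through meta
        yield scrapy.Request('http://www.example.com/detail/1',
                             callback=self.parse_detail, meta={'item': item})

    def parse_detail(self, response):
        # receive the item in the callback and finish filling it
        item = response.meta['item']
        item['desc'] = 'description parsed from the detail page'
        yield item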

Example: scraping the 4567 movie site (www.4567tv.tv)

# -*- coding: utf-8 -*-
import scrapy

from moviePro.items import MovieproItem


class MovieSpider(scrapy.Spider):
    name = 'movie'
    # allowed_domains = ['www.xxx.com']
    start_urls = ['https://www.4567tv.tv/index.php/vod/show/class/动作/id/1.html']
    url = 'https://www.4567tv.tv/index.php/vod/show/class/动作/id/1/page/%d.html'
    pageNum = 1

    def parse(self, response):
        li_list = response.xpath('/html/body/div[1]/div/div/div/div[2]/ul/li')
        for li in li_list:
            title = li.xpath('./div[1]/a/@title').extract_first()
            detail_url = 'https://www.4567tv.tv' + li.xpath('./div[1]/a/@href').extract_first()
            item = MovieproItem()
            item['title'] = title
            # meta is a dict; it is passed along to the callback specified here
            yield scrapy.Request(detail_url, callback=self.parse_detail, meta={'item': item})

        if self.pageNum < 5:
            self.pageNum += 1
            new_url = format(self.url % self.pageNum)
            yield scrapy.Request(new_url, callback=self.parse)

    def parse_detail(self, response):
        # receive meta: response.meta
        item = response.meta['item']
        desc = response.xpath('/html/body/div[1]/div/div/div/div[2]/p[5]/span[2]/text()').extract_first()
        item['desc'] = desc
        yield item

movie.py

# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html
import scrapy


class MovieproItem(scrapy.Item):
    # define the fields for your item here like:
    title = scrapy.Field()
    desc = scrapy.Field()

items.py

# -*- coding: utf-8 -*-

# Scrapy settings for moviePro project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
# https://docs.scrapy.org/en/latest/topics/settings.html
# https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
# https://docs.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'moviePro'

SPIDER_MODULES = ['moviePro.spiders']
NEWSPIDER_MODULE = 'moviePro.spiders'

USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.132 Safari/537.36'

# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'moviePro (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = False
LOG_LEVEL = 'ERROR'

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32 # Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16 # Disable cookies (enabled by default)
#COOKIES_ENABLED = False # Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False # Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
# 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
# 'Accept-Language': 'en',
#} # Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
# 'moviePro.middlewares.MovieproSpiderMiddleware': 543,
#} # Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
DOWNLOADER_MIDDLEWARES = {
'moviePro.middlewares.MovieproDownloaderMiddleware': 543,
} # Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
# 'scrapy.extensions.telnet.TelnetConsole': None,
#} # Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
'moviePro.pipelines.MovieproPipeline': 300,
} # Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False # Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

settings.py

# -*- coding: utf-8 -*-

# Define here the models for your spider middleware
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/spider-middleware.html

from scrapy import signals
import random
user_agent_list = [
"Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 "
"(KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1",
"Mozilla/5.0 (X11; CrOS i686 2268.111.0) AppleWebKit/536.11 "
"(KHTML, like Gecko) Chrome/20.0.1132.57 Safari/536.11",
"Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.6 "
"(KHTML, like Gecko) Chrome/20.0.1092.0 Safari/536.6",
"Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.6 "
"(KHTML, like Gecko) Chrome/20.0.1090.0 Safari/536.6",
"Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.1 "
"(KHTML, like Gecko) Chrome/19.77.34.5 Safari/537.1",
"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 "
"(KHTML, like Gecko) Chrome/19.0.1084.9 Safari/536.5",
"Mozilla/5.0 (Windows NT 6.0) AppleWebKit/536.5 "
"(KHTML, like Gecko) Chrome/19.0.1084.36 Safari/536.5",
"Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 "
"(KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
"Mozilla/5.0 (Windows NT 5.1) AppleWebKit/536.3 "
"(KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_0) AppleWebKit/536.3 "
"(KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
"Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 "
"(KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
"Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 "
"(KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
"Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 "
"(KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
"Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 "
"(KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
"Mozilla/5.0 (Windows NT 6.1) AppleWebKit/536.3 "
"(KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
"Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 "
"(KHTML, like Gecko) Chrome/19.0.1061.0 Safari/536.3",
"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.24 "
"(KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24",
"Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/535.24 "
"(KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24"
]

PROXY_http = [
'153.180.102.104:80',
'195.208.131.189:56055',
]
PROXY_https = [
'120.83.49.90:9000',
'95.189.112.214:35508',
]
class MovieproDownloaderMiddleware(object):
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the downloader middleware does not modify the
    # passed objects.

    # Intercepts normal requests; the request parameter is the intercepted request object
    def process_request(self, request, spider):
        print('i am process_request()')
        # Goal: give the intercepted requests as many different User-Agent identities as possible
        request.headers['User-Agent'] = random.choice(user_agent_list)

        # proxy handling
        if request.url.split(':')[0] == 'http':
            request.meta['proxy'] = 'http://' + random.choice(PROXY_http)    # http://ip:port
        else:
            request.meta['proxy'] = 'https://' + random.choice(PROXY_https)  # https://ip:port
        return None

    # Intercepts responses; the response parameter is the intercepted response
    def process_response(self, request, response, spider):
        print('i am process_response()')
        return response

    # Intercepts requests that raised an exception
    def process_exception(self, request, exception, spider):
        print('i am process_exception()')
        # Fix the failed request (here: switch to another proxy) and resend it
        if request.url.split(':')[0] == 'http':
            request.meta['proxy'] = 'http://' + random.choice(PROXY_http)    # http://ip:port
        else:
            request.meta['proxy'] = 'https://' + random.choice(PROXY_https)  # https://ip:port
        return request  # resend the corrected request

middlewares.py

II. Middleware in Scrapy

  • Two kinds: spider middleware and downloader middleware
  • Downloader middleware:
    • Purpose: intercept requests and responses in bulk
    • Intercepting requests:
    • UA spoofing: give the outgoing requests as many different User-Agent identities as possible
    • request.headers['User-Agent'] = 'xxx'
    • Proxying: request.meta['proxy'] = 'http://ip:port'
    • Intercepting responses: tamper with the response data or replace the response object outright (see the minimal sketch after this list)
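A minimal, generic downloader-middleware sketch of these three operations; the UA list, the proxy address and the needs_fixing condition are placeholders, not part of the projects in this post:

import random

from scrapy.http import HtmlResponse

USER_AGENTS = [
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.132 Safari/537.36',
    'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.100 Safari/537.36',
]


class SketchDownloaderMiddleware(object):
    def process_request(self, request, spider):
        # UA spoofing: pick a random identity for every outgoing request
        request.headers['User-Agent'] = random.choice(USER_AGENTS)
        # proxy example (placeholder address): request.meta['proxy'] = 'http://ip:port'
        return None

    def process_response(self, request, response, spider):
        # Replace the response object when the downloaded body is not usable as-is
        # (e.g. the real content is rendered by JavaScript); needs_fixing stands in
        # for whatever condition identifies those responses.
        needs_fixing = False
        if needs_fixing:
            return HtmlResponse(url=request.url, body='<html></html>',
                                encoding='utf-8', request=request)
        return response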

Example: scraping five sections of NetEase News (domestic, international, military, aviation, drones), combined with Baidu AI to classify the news

# -*- coding: utf-8 -*-
import scrapy
from selenium import webdriver

from wangyiPro.items import WangyiproItem


class WangyiSpider(scrapy.Spider):
    name = 'wangyi'
    # allowed_domains = ['www.xxx.com']
    start_urls = ['https://news.163.com']
    five_model_urls = []
    bro = webdriver.Chrome(executable_path=r'C:\Users\oldboy-python\Desktop\爬虫+数据\tools\chromedriver.exe')

    # Parse the urls of the five sections, then request each of them manually
    def parse(self, response):
        model_index = [3, 4, 6, 7, 8]
        li_list = response.xpath('//*[@id="index2016_wrap"]/div[1]/div[2]/div[2]/div[2]/div[2]/div/ul/li')
        for index in model_index:
            li = li_list[index]
            # url of one of the five sections
            model_url = li.xpath('./a/@href').extract_first()
            self.five_model_urls.append(model_url)
            # send a manual request for each section url
            yield scrapy.Request(model_url, callback=self.parse_model)

    # Parse the news titles and detail-page urls inside each section page.
    # Problem: this response does not contain the dynamically loaded news data of the section.
    def parse_model(self, response):
        div_list = response.xpath('/html/body/div[1]/div[3]/div[4]/div[1]/div/div/ul/li/div/div')
        for div in div_list:
            title = div.xpath('./div/div[1]/h3/a/text()').extract_first()
            detail_url = div.xpath('./div/div[1]/h3/a/@href').extract_first()
            if detail_url:
                item = WangyiproItem()
                item['title'] = title
                # request the detail page and parse the news content there
                yield scrapy.Request(detail_url, callback=self.parse_new_content, meta={'item': item})

    # Parse the news content
    def parse_new_content(self, response):
        item = response.meta['item']
        content = response.xpath('//*[@id="endText"]//text()').extract()
        content = ''.join(content)
        item['content'] = content
        yield item

    # Executed last, when the spider closes
    def closed(self, spider):
        self.bro.quit()

wangyi.py

# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html
import scrapy


class WangyiproItem(scrapy.Item):
    # define the fields for your item here like:
    title = scrapy.Field()
    content = scrapy.Field()

items.py

# -*- coding: utf-8 -*-

# Scrapy settings for wangyiPro project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
# https://docs.scrapy.org/en/latest/topics/settings.html
# https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
# https://docs.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'wangyiPro'

SPIDER_MODULES = ['wangyiPro.spiders']
NEWSPIDER_MODULE = 'wangyiPro.spiders'

USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.132 Safari/537.36'

# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'wangyiPro (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = False
LOG_LEVEL = 'ERROR'

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32 # Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16 # Disable cookies (enabled by default)
#COOKIES_ENABLED = False # Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False # Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
# 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
# 'Accept-Language': 'en',
#} # Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
# 'wangyiPro.middlewares.WangyiproSpiderMiddleware': 543,
#} # Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
DOWNLOADER_MIDDLEWARES = {
'wangyiPro.middlewares.WangyiproDownloaderMiddleware': 543,
} # Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
# 'scrapy.extensions.telnet.TelnetConsole': None,
#} # Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
'wangyiPro.pipelines.WangyiproPipeline': 300,
} # Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False # Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

settings.py

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html
from aip import AipNlp

""" Your APPID, API Key, Secret Key """
APP_ID = '17164366'
API_KEY = 'iwypmYNvMzwPgG3BKlV093an'
SECRET_KEY = 'btKA8A0ODRHGdfTUCZuZZARBjUPvqMia'


class WangyiproPipeline(object):
    client = AipNlp(APP_ID, API_KEY, SECRET_KEY)

    def process_item(self, item, spider):
        title = item['title']
        title = title.replace(u'\xa0', u' ')
        content = item['content']
        content = content.replace(u'\xa0', u' ')
        wd_dic = self.client.keyword(title, content)
        tp_dic = self.client.topic(title, content)
        print(wd_dic, tp_dic)
        return item

pipelines.py

# -*- coding: utf-8 -*-

# Define here the models for your spider middleware
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/spider-middleware.html
from time import sleep
from scrapy import signals
from scrapy.http import HtmlResponse


class WangyiproDownloaderMiddleware(object):

    def process_request(self, request, spider):
        # Called for each request that goes through the downloader middleware.

        # Must either:
        # - return None: continue processing this request
        # - or return a Response object
        # - or return a Request object
        # - or raise IgnoreRequest: process_exception() methods of
        #   installed downloader middleware will be called
        return None

    # spider is the instance of the spider class defined in the spider file
    def process_response(self, request, response, spider):
        # Intercept every response object.
        # 1. Find the five responses (the five section pages) that do not meet our needs:
        #    - each response corresponds to exactly one request;
        #    - once we can locate the requests of those five responses, we can locate the responses through them;
        #    - the requests can be located by the five section urls.
        #    Summary: url ==> request ==> response
        # 2. Fix (replace) the five unsatisfactory response objects.
        # spider.five_model_urls: the urls of the five sections
        bro = spider.bro
        if request.url in spider.five_model_urls:
            # if this condition holds, the response belongs to one of the five sections
            bro.get(request.url)
            sleep(1)
            page_text = bro.page_source  # contains the dynamically loaded news data
            new_response = HtmlResponse(url=request.url, body=page_text, encoding='utf-8', request=request)
            return new_response
        return response

    def process_exception(self, request, exception, spider):
        # Called when a download handler or a process_request()
        # (from other downloader middleware) raises an exception.

        # Must either:
        # - return None: continue processing this exception
        # - return a Response object: stops process_exception() chain
        # - return a Request object: stops process_exception() chain
        pass

middlewares.py

from aip import AipNlp

""" 你的 APPID AK SK """
APP_ID = '17164366'
API_KEY = 'iwypmYNvMzwPgG3BKlV093an'
SECRET_KEY = 'btKA8A0ODRHGdfTUCZuZZARBjUPvqMia' client = AipNlp(APP_ID, API_KEY, SECRET_KEY) title = "iphone手机出现“白苹果”原因及解决办法,用苹果手机的可以看下" content = "如果下面的方法还是没有解决你的问题建议来我们门店看下成都市锦江区红星路三段99号银石广场24层01室。" """ 调用文章标签 """
wd_dic = client.keyword(title, content)
print(wd_dic) tp_dic = client.topic(title, content)
print(tp_dic)

baiduAI.py

III. Site-wide Crawling with CrawlSpider

  • CrawlSpider is a subclass of Spider
  • Usage:
  1. Create a CrawlSpider-based spider file: scrapy genspider -t crawl spiderName www.xxxx.com
  2. Build the link extractor and the rule parser
  • Link extractor: extraction rule: allow='regular expression'
    • Purpose: extracts the links that match the given rule
  • Rule parser:
    • Purpose: takes the links pulled out by the link extractor, sends requests for them, and parses the resulting page source according to the specified rule
  • follow=True: keeps applying the link extractor to the pages reached through the page links it has already extracted
  • Note: link extractors and rule parsers are paired one-to-one (see the minimal sketch after this list)
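A minimal sketch of the CrawlSpider pattern; the domain, start url and allow regex are placeholders, not part of the example project below:

import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule


class SketchSpider(CrawlSpider):
    name = 'sketch'
    start_urls = ['http://www.example.com/list?page=1']

    rules = (
        # extract every page link matching the regex and keep following new pages
        Rule(LinkExtractor(allow=r'page=\d+'), callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        # every page reached through an extracted link lands here
        yield {'url': response.url}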

Example: scraping the Sunshine Hotline complaint platform (wz.sun0769.com)

# -*- coding: utf-8 -*-
import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule

from sunPro.items import SunproItem_second, SunproItem

# No deep crawl: only scrapes the data shown on each listing page
# class SunSpider(CrawlSpider):
#     name = 'sun'
#     # allowed_domains = ['www.xxx.com']
#     start_urls = ['http://wz.sun0769.com/index.php/question/questionType?type=4&page=']
#     # link extractor
#     link = LinkExtractor(allow=r'type=4&page=\d+')
#     rules = (
#         # instantiate a Rule (rule parser) object
#         Rule(link, callback='parse_item', follow=True),
#     )
#
#     def parse_item(self, response):
#         tr_list = response.xpath('//*[@id="morelist"]/div/table[2]//tr/td/table//tr')
#         for tr in tr_list:
#             title = tr.xpath('./td[2]/a[2]/@title').extract_first()
#             status = tr.xpath('./td[3]/span/text()').extract_first()
#             print(title, status)


# Deep crawl: also scrapes each complaint's detail page
class SunSpider(CrawlSpider):
    name = 'sun'
    # allowed_domains = ['www.xxx.com']
    start_urls = ['http://wz.sun0769.com/index.php/question/questionType?type=4&page=']
    # link extractor for the page links
    link = LinkExtractor(allow=r'type=4&page=\d+')
    # link extractor for the detail pages,
    # e.g. http://wz.sun0769.com/html/question/201908/426393.shtml
    link_detail = LinkExtractor(allow=r'question/\d+/\d+\.shtml')
    rules = (
        # instantiate the Rule (rule parser) objects
        Rule(link, callback='parse_item', follow=True),
        Rule(link_detail, callback='parse_detail'),
    )

    def parse_item(self, response):
        tr_list = response.xpath('//*[@id="morelist"]/div/table[2]//tr/td/table//tr')
        for tr in tr_list:
            title = tr.xpath('./td[2]/a[2]/@title').extract_first()
            status = tr.xpath('./td[3]/span/text()').extract_first()
            num = tr.xpath('./td[1]/text()').extract_first()
            item = SunproItem_second()
            item['title'] = title
            item['status'] = status
            item['num'] = num
            yield item

    def parse_detail(self, response):
        content = response.xpath('/html/body/div[9]/table[2]/tbody/tr[1]//text()').extract()
        content = ''.join(content)
        num = response.xpath('/html/body/div[9]/table[1]/tbody/tr/td[2]/span[2]/text()').extract_first()
        if num:
            num = num.split(':')[-1]
            item = SunproItem()
            item['content'] = content
            item['num'] = num
            yield item

sunSpider.py

# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html
import scrapy


class SunproItem(scrapy.Item):
    content = scrapy.Field()
    num = scrapy.Field()


class SunproItem_second(scrapy.Item):
    title = scrapy.Field()
    status = scrapy.Field()
    num = scrapy.Field()

items.py

# -*- coding: utf-8 -*-

# Scrapy settings for sunPro project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
# https://docs.scrapy.org/en/latest/topics/settings.html
# https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
# https://docs.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'sunPro'

SPIDER_MODULES = ['sunPro.spiders']
NEWSPIDER_MODULE = 'sunPro.spiders'

USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.132 Safari/537.36'

# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'sunPro (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = False
LOG_LEVEL = 'ERROR'

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32 # Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16 # Disable cookies (enabled by default)
#COOKIES_ENABLED = False # Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False # Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
# 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
# 'Accept-Language': 'en',
#} # Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
# 'sunPro.middlewares.SunproSpiderMiddleware': 543,
#} # Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
# 'sunPro.middlewares.SunproDownloaderMiddleware': 543,
#} # Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
# 'scrapy.extensions.telnet.TelnetConsole': None,
#} # Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
'sunPro.pipelines.SunproPipeline': 300,
} # Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False # Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

settings.py

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html


class SunproPipeline(object):
    def process_item(self, item, spider):
        if item.__class__.__name__ == 'SunproItem':
            content = item['content']
            num = item['num']
            print(content, num)
        else:
            title = item['title']
            status = item['status']
            num = item['num']
            print(title, status, num)
        return item

pipelines.py

IV. Distributed Crawling

1. What is a distributed crawler?

  • Build a cluster out of several machines, run the same program on every machine in the cluster, and have them crawl the data of the same website cooperatively

2. Why use a distributed crawler?

  • To increase the efficiency of data crawling

3. How is a distributed crawler implemented?

  • Based on scrapy + redis: Scrapy combined with the scrapy-redis component
  • Why native Scrapy cannot be distributed:
  • the scheduler cannot be shared across the cluster
  • the pipeline cannot be shared
  • What the scrapy-redis component provides:
    • a scheduler and a pipeline that can be shared

4. Environment setup:

  • redis
  • pip install scrapy-redis

5. Coding workflow:

  • Create a project
  • Create a spider file based on CrawlSpider
  • Modify the spider file:
  • Import: from scrapy_redis.spiders import RedisCrawlSpider
  • Change the spider class's parent to RedisCrawlSpider
  • Replace start_urls with redis_key = 'xxx'  # the name of the queue in the shared scheduler

6. Write the spider's data-parsing logic, then configure and run the project

  • Configure settings.py:
  • Specify the pipeline:
  • # enable the pipeline that can be shared
    ITEM_PIPELINES = {
    'scrapy_redis.pipelines.RedisPipeline': 400
    }
  • Specify the scheduler:
  • # add a dedupe container class that uses a Redis set to store request fingerprints, making request deduplication persistent
    DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"
    # use the scheduler shipped with the scrapy-redis component
    SCHEDULER = "scrapy_redis.scheduler.Scheduler"
    # whether the scheduler persists: if True, the Redis request queue and the dedupe fingerprint set are not cleared when the crawl ends; if False, they are cleared
    SCHEDULER_PERSIST = True
  • Specify the redis server:
    • REDIS_HOST = '<ip address of the redis server>'
    • REDIS_PORT = 6379
  • Edit the redis config file (redis.windows.conf) and start the redis server with it:
    • line 56: #bind 127.0.0.1
    • line 75: protected-mode no
  • ./redis-server redis.windows.conf
  • Start the redis client:
  • redis-cli
  • Run the project:
  • cd into the directory containing the spider file, then: scrapy runspider xxx.py
  • Push a start url into the scheduler's queue:
  • Where is the queue? It lives in redis.
  • lpush fbsQueue www.xxx.com (or push it from Python, as in the sketch after this list)
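Equivalently, the start url can be pushed from Python with redis-py instead of redis-cli; a minimal sketch, where the host and queue name match the fbsPro example below but are otherwise placeholders:

from redis import Redis

# connect to the shared redis instance used by the scrapy-redis scheduler
conn = Redis(host='192.168.11.175', port=6379)
# push a start url into the shared queue named by the spider's redis_key
conn.lpush('fbsQueue', 'http://wz.sun0769.com/index.php/question/questionType?type=4&page=')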

Example:

# -*- coding: utf-8 -*-
import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
from scrapy_redis.spiders import RedisCrawlSpider, RedisSpider

from fbsPro.items import FbsproItem


class FbsSpider(RedisCrawlSpider):
    name = 'fbs'
    # allowed_domains = ['www.xxx.com']
    # start_urls = ['http://www.xxx.com/']
    # the name of the queue in the shared scheduler
    redis_key = 'fbsQueue'
    rules = (
        Rule(LinkExtractor(allow=r'type=4&page=\d+'), callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        tr_list = response.xpath('//*[@id="morelist"]/div/table[2]//tr/td/table//tr')
        for tr in tr_list:
            title = tr.xpath('./td[2]/a[2]/@title').extract_first()
            status = tr.xpath('./td[3]/span/text()').extract_first()
            item = FbsproItem()
            item['title'] = title
            item['status'] = status
            yield item

fbsSpider.py

# -*- coding: utf-8 -*-

# Scrapy settings for fbsPro project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
# https://docs.scrapy.org/en/latest/topics/settings.html
# https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
# https://docs.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'fbsPro'

SPIDER_MODULES = ['fbsPro.spiders']
NEWSPIDER_MODULE = 'fbsPro.spiders'

USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.132 Safari/537.36'

# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'fbsPro (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = False

# Configure maximum concurrent requests performed by Scrapy (default: 16)
CONCURRENT_REQUESTS = 2

# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16 # Disable cookies (enabled by default)
#COOKIES_ENABLED = False # Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False # Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
# 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
# 'Accept-Language': 'en',
#} # Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
# 'fbsPro.middlewares.FbsproSpiderMiddleware': 543,
#} # Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
# 'fbsPro.middlewares.FbsproDownloaderMiddleware': 543,
#} # Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
# 'scrapy.extensions.telnet.TelnetConsole': None,
#} # Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
# ITEM_PIPELINES = {
# 'fbsPro.pipelines.FbsproPipeline': 300,
# } # Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False # Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

# Enable the pipeline that can be shared
ITEM_PIPELINES = {
    'scrapy_redis.pipelines.RedisPipeline': 400
}

# Use the shared scheduler
# Dedupe container class: uses a Redis set to store request fingerprints, making request deduplication persistent
DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"
# Use the scheduler shipped with the scrapy-redis component
SCHEDULER = "scrapy_redis.scheduler.Scheduler"
# Whether the scheduler persists: if True, the Redis request queue and the dedupe fingerprint set
# are not cleared when the crawl ends; if False, they are cleared
SCHEDULER_PERSIST = True

# Specify the redis server
REDIS_HOST = '192.168.11.175'
REDIS_PORT = 6379

settings.py

# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html
import scrapy


class FbsproItem(scrapy.Item):
    # define the fields for your item here like:
    title = scrapy.Field()
    status = scrapy.Field()

items.py

V. Incremental Crawling

  1. Concept: monitor a website for newly added data.
  2. Core: deduplication!
  • Deep-crawl sites: record and check the detail-page urls
  1. Record: save the url of every detail page that has been crawled
  2. Store the urls in a Redis set
  3. Check: before requesting a detail page's url, look it up in the record; if it is already there, that page has been crawled before
  • Non-deep-crawl sites:
  1. Term: data fingerprint
  2. A unique identifier derived from one record of data (see the minimal sketch after this list)
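The post has no code for the fingerprint case, so here is a minimal sketch of the idea, assuming a local redis instance and using an MD5 hash of the record as its fingerprint; the set name 'data_fingerprints' and the sample record are placeholders:

import hashlib

from redis import Redis

conn = Redis(host='127.0.0.1', port=6379)


def is_new_record(record: str) -> bool:
    # data fingerprint: a hash that uniquely identifies one record of data
    fingerprint = hashlib.md5(record.encode('utf-8')).hexdigest()
    # sadd returns 1 if the fingerprint was not in the set yet, 0 if it already was
    return conn.sadd('data_fingerprints', fingerprint) == 1


if is_new_record('title: xxx, status: yyy'):
    print('new data, keep it')
else:
    print('already crawled, skip it')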

Example: scraping the 4567 movie site (www.4567tv.tv)

# -*- coding: utf-8 -*-
import scrapy
from redis import Redis
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule

from zjs_moviePro.items import ZjsMovieproItem


class MovieSpider(CrawlSpider):
    name = 'movie'
    conn = Redis(host='127.0.0.1', port=6379)
    # allowed_domains = ['www.xxx.com']
    start_urls = ['https://www.4567tv.tv/index.php/vod/show/id/6.html']
    rules = (
        # page links look like /index.php/vod/show/id/6/page/2.html
        Rule(LinkExtractor(allow=r'id/6/page/\d+\.html'), callback='parse_item', follow=False),
    )

    def parse_item(self, response):
        li_list = response.xpath('/html/body/div[1]/div/div/div/div[2]/ul/li')
        for li in li_list:
            name = li.xpath('./div/div/h4/a/text()').extract_first()
            detail_url = 'https://www.4567tv.tv' + li.xpath('./div/div/h4/a/@href').extract_first()
            ex = self.conn.sadd('movie_detail_urls', detail_url)
            if ex == 1:  # detail_url was added to the redis set, i.e. it has not been crawled before
                print('New data to crawl......')
                item = ZjsMovieproItem()
                item['name'] = name
                yield scrapy.Request(url=detail_url, callback=self.parse_detail, meta={'item': item})
            else:
                print('This record has already been crawled!')

    def parse_detail(self, response):
        item = response.meta['item']
        desc = response.xpath('/html/body/div[1]/div/div/div/div[2]/p[5]/span[2]/text()').extract_first()
        item['desc'] = desc
        yield item

movie.py

# -*- coding: utf-8 -*-

# Scrapy settings for zjs_moviePro project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
# https://docs.scrapy.org/en/latest/topics/settings.html
# https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
# https://docs.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'zjs_moviePro'

SPIDER_MODULES = ['zjs_moviePro.spiders']
NEWSPIDER_MODULE = 'zjs_moviePro.spiders'

USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.132 Safari/537.36'

# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'zjs_moviePro (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = False
LOG_LEVEL = 'ERROR'

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32 # Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16 # Disable cookies (enabled by default)
#COOKIES_ENABLED = False # Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False # Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
# 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
# 'Accept-Language': 'en',
#} # Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
# 'zjs_moviePro.middlewares.ZjsMovieproSpiderMiddleware': 543,
#} # Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
# 'zjs_moviePro.middlewares.ZjsMovieproDownloaderMiddleware': 543,
#} # Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
# 'scrapy.extensions.telnet.TelnetConsole': None,
#} # Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
'zjs_moviePro.pipelines.ZjsMovieproPipeline': 300,
} # Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False # Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

settings.py

# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html
import scrapy


class ZjsMovieproItem(scrapy.Item):
    # define the fields for your item here like:
    name = scrapy.Field()
    desc = scrapy.Field()

items.py

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html


class ZjsMovieproPipeline(object):
    def process_item(self, item, spider):
        conn = spider.conn
        conn.lpush('movie_data', item)
        return item

pipelines.py
