Implementation of Scenario 2:

Data fingerprint

  • The URL of each detail page serves as the data fingerprint; as the sketch below shows, Redis set membership tells us whether a URL has already been crawled.
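The whole incremental check rests on one Redis primitive: sadd adds a value to a set and returns 1 if the value was new, 0 if it was already there. A minimal sketch of that behavior, assuming a local Redis server on the default port (the detail URL below is made up for illustration):

import redis

conn = redis.Redis(host='127.0.0.1', port=6379)

url = 'https://sc.chinaz.com/jianli/free_detail.html'  # hypothetical detail-page URL
print(conn.sadd('data_id', url))  # 1 -> first sighting, crawl it
print(conn.sadd('data_id', url))  # 0 -> fingerprint already stored, skip it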

Create the spider file:

  • cd project_name (enter the project directory)
  • scrapy genspider <spider name> <start url> (any custom name will do)
    • (e.g. scrapy genspider first www.xxx.com)
  • On success, a .py spider file is generated under the spiders folder

Open the spider file:

  • Edit the generated spiders/<spider name>.py (named after the custom name you chose)

Spider file (jianli.py)

import scrapy
import redis
import random
from ..items import Zlsdemo2ProItem


class JianliSpider(scrapy.Spider):
    name = "jianli"
    # allowed_domains = ["www.xxx.com"]
    start_urls = ["https://sc.chinaz.com/jianli/free.html"]
    # Connect to Redis
    conn = redis.Redis(
        host='127.0.0.1',
        port=6379,
    )

    def parse(self, response):
        div_list = response.xpath('//*[@id="container"]/div')
        for div in div_list:
            title = div.xpath('./p/a/text()').extract_first()
            detail_url = div.xpath('./p/a/@href').extract_first()
            # print(title, detail_url)
            # sadd returns 1 if the fingerprint is new, 0 if already stored
            ex = self.conn.sadd('data_id', detail_url)
            # Build the item object
            item = Zlsdemo2ProItem()
            item['title'] = title  # store the title on the item
            if ex == 1:  # the fingerprint was not yet in Redis: new data
                print('New data detected, collecting......')
                # Request passing: forward the item (with its title) to parse_detail via meta
                yield scrapy.Request(url=detail_url, callback=self.parse_detail, meta={'item': item})
            else:  # the fingerprint is already stored in Redis
                print('No new data to update!')

    def parse_detail(self, response):
        item = response.meta['item']
        download_li = response.xpath('//*[@id="down"]/div[2]/ul/li')
        download_urls = []
        for li in download_li:
            download_url_e = li.xpath('./a/@href').extract_first()
            download_urls.append(download_url_e)
        # Pick one download mirror at random
        download_url = random.choice(download_urls)
        item['download_url'] = download_url
        yield item
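Run the spider with scrapy crawl jianli from the project directory. On the first run every detail URL is new, so each sadd returns 1 and all pages are collected; on later runs, URLs already recorded in the data_id set make sadd return 0, so only newly published pages trigger detail requests.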

items.py

# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html

import scrapy


class Zlsdemo2ProItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    title = scrapy.Field()
    download_url = scrapy.Field()

pipelines.py

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html

import json

# useful for handling different item types with a single interface
from itemadapter import ItemAdapter


class Zlsdemo2ProPipeline:
    def process_item(self, item, spider):
        # Reuse the Redis connection created on the spider class
        conn = spider.conn
        # Redis can only store primitive values, so serialize the item to JSON
        conn.lpush('data1', json.dumps(ItemAdapter(item).asdict(), ensure_ascii=False))
        return item
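To verify what the pipeline stored, you can read the data1 list back out of Redis. A quick inspection sketch, under the same local-Redis assumption and the JSON serialization used in the pipeline above:

import json
import redis

conn = redis.Redis(host='127.0.0.1', port=6379, decode_responses=True)

# lpush prepends, so the most recently stored item sits at index 0
for raw in conn.lrange('data1', 0, -1):
    item = json.loads(raw)
    print(item['title'], item['download_url'])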

settings.py

# Scrapy settings for zlsDemo2Pro project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://docs.scrapy.org/en/latest/topics/settings.html
#     https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://docs.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = "zlsDemo2Pro"

SPIDER_MODULES = ["zlsDemo2Pro.spiders"]
NEWSPIDER_MODULE = "zlsDemo2Pro.spiders"

# Crawl responsibly by identifying yourself (and your website) on the user-agent
USER_AGENT = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.163 Safari/537.36"

# Obey robots.txt rules
ROBOTSTXT_OBEY = False
# Only log warnings and errors
LOG_LEVEL = 'WARNING'

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
#    "Accept-Language": "en",
#}

# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    "zlsDemo2Pro.middlewares.Zlsdemo2ProSpiderMiddleware": 543,
#}

# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    "zlsDemo2Pro.middlewares.Zlsdemo2ProDownloaderMiddleware": 543,
#}

# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    "scrapy.extensions.telnet.TelnetConsole": None,
#}

# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    "zlsDemo2Pro.pipelines.Zlsdemo2ProPipeline": 300,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = "httpcache"
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = "scrapy.extensions.httpcache.FilesystemCacheStorage"

# Set settings whose default value is deprecated to a future-proof value
REQUEST_FINGERPRINTER_IMPLEMENTATION = "2.7"
TWISTED_REACTOR = "twisted.internet.asyncioreactor.AsyncioSelectorReactor"
FEED_EXPORT_ENCODING = "utf-8"
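A side effect worth knowing: the data_id fingerprint set persists in Redis across runs, which is exactly what makes the crawl incremental. If you ever need to re-crawl the site from scratch, a small sketch under the same local-Redis assumption (this permanently deletes the stored fingerprints and items, so use with care):

import redis

conn = redis.Redis(host='127.0.0.1', port=6379)
conn.delete('data_id')  # drop all URL fingerprints; the next run treats every page as new
conn.delete('data1')    # optionally also drop the items stored by the pipeline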
