Python: the Scrapy module's pipelines
1. Key points
""""
pipelines使用:
1、在spiders里面使用yield生成器
list_li = response.xpath("//div[@class='swiper-wrapper']//li")
#print(list_li)
for li in list_li:
#print(li.extract_first())
item = { }
item["name"] = li.xpath(".//h3/text()").extract_first()
item["content"] = li.xpath(".//p[@class='teacherBrief']/text()").extract_first()
#item["content"] = li.xpath(".//p[@class='teacherIntroduction']/text()").extract_first()
#print(item)
yield item #将数据传递道pipelines 2、在pipelines中打印item
class MyspiderPipeline(object):
"""
#第一个管道,这个process_item方法名是不能改
"""
def process_item(self, item, spider):
item["hello"] = "world"
print(item)
return item class MyspiderPipeline1(object):
"""
#第二个管道
"""
def process_item(self, item, spider):
print(item)
return item 3、在settings文件添加pipelines的支持
ITEM_PIPELINES = {
#执行顺序为从小到大,即先执行300,然后在301
'myspider.pipelines.MyspiderPipeline': 300,
'myspider.pipelines.MyspiderPipeline1': 301,
}
"""
2. In spider.py, hand the data to pipelines.py with yield item.
Code for JulyeduSpider.py:
# -*- coding: utf-8 -*-
import scrapy
import logging

logger = logging.getLogger(__name__)

class JulyeduSpider(scrapy.Spider):
    name = 'julyedu'
    allowed_domains = ['julyedu.com']
    start_urls = ['http://julyedu.com/']

    # the parse method name must not be changed
    def parse(self, response):
        """
        Crawl the instructor list from julyedu.com
        :param response:
        :return:
        """
        list_li = response.xpath("//div[@class='swiper-wrapper']//li")
        for li in list_li:
            # create a fresh dict per iteration so each yielded item is independent
            item = {}
            item["name"] = li.xpath(".//h3/text()").extract_first()
            item["content"] = li.xpath(".//p[@class='teacherBrief']/text()").extract_first()
            #item["content"] = li.xpath(".//p[@class='teacherIntroduction']/text()").extract_first()
            # pass the data on to the pipelines; a spider callback may only
            # yield one of four types: Request, BaseItem, dict, or None
            logger.warning(item)  # log the item
            yield item
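Since a callback may also yield Request objects, the same spider could follow links. A minimal sketch, assuming a hypothetical per-instructor detail link (the href extraction and the parse_detail callback are illustrative, not taken from the real page):

# -*- coding: utf-8 -*-
import scrapy

class JulyeduDetailSpider(scrapy.Spider):
    name = 'julyedu_detail'
    start_urls = ['http://julyedu.com/']

    def parse(self, response):
        for li in response.xpath("//div[@class='swiper-wrapper']//li"):
            detail_url = li.xpath(".//a/@href").extract_first()  # hypothetical link
            if detail_url:
                # urljoin turns a relative href into an absolute URL
                yield scrapy.Request(response.urljoin(detail_url),
                                     callback=self.parse_detail)

    def parse_detail(self, response):
        # hypothetical callback: extract something from the detail page
        yield {"title": response.xpath("//title/text()").extract_first()}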
3. Modify pipelines.py, where the item can be processed:
# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html

class MyspiderPipeline(object):
    """
    First pipeline; the process_item method name must not be changed.
    """
    def process_item(self, item, spider):
        """
        Handle the data differently depending on the spider.
        :param item: the value passed in from the spider
        :param spider: the spider instance that produced the item
        :return:
        """
        if spider.name == "julyedu":
            #print(item)
            return item
        else:
            return item

class MyspiderPipeline1(object):
    """
    Second pipeline.
    """
    def process_item(self, item, spider):
        #print(item)
        return item
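If a pipeline should discard invalid records instead of passing them along, raising scrapy.exceptions.DropItem is the idiomatic way. A minimal sketch (the empty-name check is an assumption, not part of the original code):

# -*- coding: utf-8 -*-
from scrapy.exceptions import DropItem

class ValidateItemPipeline(object):
    # hypothetical pipeline: drop items whose name could not be extracted

    def process_item(self, item, spider):
        if not item.get("name"):
            # a dropped item never reaches the pipelines that run after this one
            raise DropItem("missing name in %s" % item)
        return item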
4. Add the pipelines configuration to settings.py:
# -*- coding: utf-8 -*-

# Scrapy settings for myspider project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://doc.scrapy.org/en/latest/topics/settings.html
#     https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://doc.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'myspider'

SPIDER_MODULES = ['myspider.spiders']
NEWSPIDER_MODULE = 'myspider.spiders'

LOG_LEVEL = 'WARNING'   # added: only log WARNING and above
LOG_FILE = './log.log'  # added: write the log to a file

# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'myspider (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = True

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://doc.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See https://doc.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'myspider.middlewares.MyspiderSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'myspider.middlewares.MyspiderDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See https://doc.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://doc.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    # executed in ascending order of priority: 300 runs first, then 301
    'myspider.pipelines.MyspiderPipeline': 300,
    'myspider.pipelines.MyspiderPipeline1': 301,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
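Beyond printing, a pipeline is the natural place to persist items. Below is a minimal sketch of a JSON-lines writer (assuming Python 3; JsonWriterPipeline and the items.jl filename are illustrative, not part of the project above), using Scrapy's open_spider/close_spider hooks:

# -*- coding: utf-8 -*-
import json

class JsonWriterPipeline(object):
    # hypothetical pipeline: append each item to items.jl, one JSON object per line

    def open_spider(self, spider):
        # called once when the spider is opened
        self.file = open('items.jl', 'w', encoding='utf-8')

    def close_spider(self, spider):
        # called once when the spider is closed
        self.file.close()

    def process_item(self, item, spider):
        self.file.write(json.dumps(dict(item), ensure_ascii=False) + '\n')
        return item

To enable it, register it in ITEM_PIPELINES with its own priority, e.g. 'myspider.pipelines.JsonWriterPipeline': 302, so it runs after the two pipelines above.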