This is an example of crawling and downloading images with Scrapy's ImagesPipeline; the downloaded images are saved under the spider's full folder.

scrapy startproject DoubanImgs

cd DoubanImgs

scrapy genspider download_douban douban.com

vim spiders/download_douban.py

# coding=utf-8
from scrapy.spiders import Spider
from scrapy import Request

from ..items import DoubanImgsItem


class download_douban(Spider):
    name = 'download_douban'
    default_headers = {
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
        'Accept-Encoding': 'gzip, deflate, sdch, br',
        'Accept-Language': 'zh-CN,zh;q=0.8,en;q=0.6',
        'Cache-Control': 'max-age=0',
        'Connection': 'keep-alive',
        'Host': 'www.douban.com',
        'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36',
    }

    def __init__(self, url='1638835355', *args, **kwargs):
        # call the parent constructor
        super(download_douban, self).__init__(*args, **kwargs)
        self.allowed_domains = ['douban.com']
        self.start_urls = []
        # build one start URL per album page (18 photos per page)
        for i in range(23):
            if i == 0:
                page_url = 'http://www.douban.com/photos/album/' + url
            else:
                page_url = 'http://www.douban.com/photos/album/' + url + '/?start=' + str(i * 18)
            self.start_urls.append(page_url)
        self.url = url

    def start_requests(self):
        for url in self.start_urls:
            yield Request(url=url, headers=self.default_headers, callback=self.parse)

    def parse(self, response):
        list_imgs = response.xpath('//div[@class="photolst clearfix"]//img/@src').extract()
        if list_imgs:
            item = DoubanImgsItem()
            item['image_urls'] = list_imgs
            yield item
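
The url argument of __init__ is the Douban album id, and Scrapy's -a flag passes keyword arguments straight to the spider's constructor, so the crawl can target any album. Once the settings and pipeline below are in place, a run looks like this:

scrapy crawl download_douban -a url=1638835355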

vim settings.py

# -*- coding: utf-8 -*-

# Scrapy settings for DoubanImgs project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
# https://doc.scrapy.org/en/latest/topics/settings.html
# https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
# https://doc.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'DoubanImgs'

SPIDER_MODULES = ['DoubanImgs.spiders']
NEWSPIDER_MODULE = 'DoubanImgs.spiders'

ITEM_PIPELINES = {
    'DoubanImgs.pipelines.DoubanImgDownloadPipeline': 300,
}
IMAGES_STORE = '.'
IMAGES_EXPIRES = 90

# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'DoubanImgs (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = False

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://doc.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
DOWNLOAD_DELAY = 0.5
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See https://doc.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'DoubanImgs.middlewares.DoubanimgsSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'DoubanImgs.middlewares.DoubanimgsDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See https://doc.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://doc.scrapy.org/en/latest/topics/item-pipeline.html
#ITEM_PIPELINES = {
#    'DoubanImgs.pipelines.DoubanimgsPipeline': 300,
#}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
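
One thing worth noting: ImagesPipeline relies on Pillow for image processing (thumbnail generation and JPEG conversion), so install it in the same environment before running the crawl:

pip install Pillow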

vim items.py

# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/items.html
import scrapy
from scrapy import Field


class DoubanImgsItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    image_urls = Field()
    images = Field()
    image_paths = Field()
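
The field names matter here: image_urls is where ImagesPipeline looks for the list of URLs to download, and images is where it writes the download results (url, path, checksum). image_paths is a custom field that the pipeline below fills with the relative paths of the saved files.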

vim pipelines.py

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: http://doc.scrapy.org/en/latest/topics/item-pipeline.html
from scrapy.pipelines.images import ImagesPipeline
from scrapy.exceptions import DropItem
from scrapy import Request


class DoubanImgsPipeline(object):
    def process_item(self, item, spider):
        return item


class DoubanImgDownloadPipeline(ImagesPipeline):
    default_headers = {
        'accept': 'image/webp,image/*,*/*;q=0.8',
        'accept-encoding': 'gzip, deflate, sdch, br',
        'accept-language': 'zh-CN,zh;q=0.8,en;q=0.6',
        'cookie': 'bid=yQdC/AzTaCw',
        'referer': 'https://www.douban.com/photos/photo/2370443040/',
        'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36',
    }

    def get_media_requests(self, item, info):
        # queue one download request per image URL, reusing the browser-like
        # headers; the referer is overwritten with the image URL itself
        for image_url in item['image_urls']:
            self.default_headers['referer'] = image_url
            yield Request(image_url, headers=self.default_headers)

    def item_completed(self, results, item, info):
        # collect the storage paths of the successfully downloaded images
        image_paths = [x['path'] for ok, x in results if ok]
        if not image_paths:
            raise DropItem("Item contains no images")
        item['image_paths'] = image_paths
        return item
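
By default ImagesPipeline stores each file under the full/ subdirectory of IMAGES_STORE, named after the SHA1 hash of its URL, which is why the downloads end up in ./full/. The file_path method is the hook that computes this path, so overriding it gives human-readable names. The following is a minimal sketch (RenamedImgDownloadPipeline is a made-up name, and the keyword-only item argument assumes Scrapy 2.4 or later):

class RenamedImgDownloadPipeline(DoubanImgDownloadPipeline):
    # Hypothetical variant: save as full/<original filename> instead of
    # full/<sha1-of-url>.jpg. Watch out for filename collisions across albums.
    def file_path(self, request, response=None, info=None, *, item=None):
        # request.url is the image URL queued by get_media_requests
        return 'full/' + request.url.split('/')[-1]

To activate it, point the ITEM_PIPELINES entry in settings.py at this class instead of DoubanImgDownloadPipeline.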
