Target site to crawl: http://www.weather.com.cn/
Specific regional weather page: http://www.weather.com.cn/weather1d/101280601.shtm (Shenzhen)

Start with: scrapy startproject weather

Then write items.py:

import scrapy

class WeatherItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
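The excerpt cuts off inside the generated items.py template. A minimal sketch of what the item could hold for a daily-forecast page, assuming we want the date, weather description, and temperature (these field names are illustrative, not the original article's exact fields):

import scrapy

class WeatherItem(scrapy.Item):
    # illustrative fields for one day of forecast data (assumed, not from the article)
    date = scrapy.Field()         # e.g. "29日（星期三）"
    weather = scrapy.Field()      # e.g. "多云"
    temperature = scrapy.Field()  # e.g. "19/26℃"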
2017-03-29 When scrapy downloads images to the local disk it should automatically name them with a SHA-1 digest of the URL. This was my first time using scrapy and I didn't know it well, so I wrote a piece of code in the program myself to implement this; it needs import hashlib.

# collect all image links into image_urls
item["image_urls"] = ['http://www.nosta.gov.cn/upload/2017slgb' + i.replace('..', '') for i in response.xpath('//img[@width="…
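The article describes re-implementing the SHA-1 naming by hand with hashlib. A minimal sketch of that idea, assuming a plain requests download into a local folder (the helper name, output directory, and .jpg extension are assumptions; scrapy's built-in images pipeline applies the same hashing when you fill item["image_urls"] and set IMAGES_STORE):

import hashlib
import os

import requests


def save_image(url, out_dir="images"):
    # name the file after the SHA-1 digest of its URL, like scrapy's images pipeline does
    digest = hashlib.sha1(url.encode("utf-8")).hexdigest()
    os.makedirs(out_dir, exist_ok=True)
    path = os.path.join(out_dir, digest + ".jpg")
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    with open(path, "wb") as f:
        f.write(resp.content)
    return path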
Crawling Sogou Lofter images with scrapy:

# -*- coding: utf-8 -*-
import json
import scrapy
from scrapy.http import Request
from urllib import parse
from scrapy.loader import ItemLoader
from tutorial.items import LofterSpiderItem

class LofterSpider(scrapy.Spider):
    name = …
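The spider excerpt stops at the name attribute. A minimal sketch of how such a spider could continue, assuming the Sogou picture search exposes a JSON endpoint and that LofterSpiderItem defines an image_urls field (the spider name, start URL, query parameters, and JSON keys below are all assumptions for illustration):

import json

import scrapy

from tutorial.items import LofterSpiderItem


class LofterSpider(scrapy.Spider):
    name = "lofter"  # hypothetical; the original excerpt is truncated at this point
    # assumed JSON search endpoint; the original article may build this URL differently
    start_urls = ["https://pic.sogou.com/pics/json?query=lofter&start=0&len=48"]

    def parse(self, response):
        # assume the endpoint returns JSON with an "items" list of image records
        data = json.loads(response.text)
        for entry in data.get("items", []):
            item = LofterSpiderItem()
            item["image_urls"] = [entry.get("pic_url")]  # assumed key name
            yield item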