Crawlers with the Scrapy Framework, Case 2: The Sunshine Government Inquiry Platform
Target site: the Sunshine Hotline Government Inquiry Platform (阳光热线问政平台).
URL: http://wz.sun0769.com/index.php/question/questionType?type=4&page=
Fields to scrape: post ID, complaint type, post title, post URL, department, status, netizen (poster), and post time.
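The code below assumes a Scrapy project named sunwzSpider already exists. If you are recreating it, the standard scaffolding commands produce the module layout that the imports in the following files rely on; these commands are not shown in the original post and are an assumption about how the project was set up:

scrapy startproject sunwzSpider
cd sunwzSpider
scrapy genspider sunwz wz.sun0769.com

genspider only generates a stub; the spider file in step 2 replaces its contents.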
1.items.py
# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# http://doc.scrapy.org/en/latest/topics/items.html

import scrapy


class SunwzspiderItem(scrapy.Item):
    # Fields of one complaint post: ID, complaint type, title, URL,
    # department, status, netizen (poster), and post time.
    # Post ID
    post_id = scrapy.Field()
    # Complaint type
    post_type = scrapy.Field()
    # Post title
    post_title = scrapy.Field()
    # Post URL
    post_url = scrapy.Field()
    # Department
    sector = scrapy.Field()
    # Status
    post_state = scrapy.Field()
    # Netizen (poster)
    net_friend = scrapy.Field()
    # Post time
    post_time = scrapy.Field()
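Scrapy items behave like dictionaries restricted to the declared fields, which is how the spider in the next step fills them. A minimal illustrative sketch (the values are made up):

from sunwzSpider.items import SunwzspiderItem

item = SunwzspiderItem()
item["post_id"] = "191166"         # hypothetical value, for illustration only
item["post_title"] = "example title"
print(dict(item))                  # items convert cleanly to plain dicts
# Assigning a key that was not declared as a Field raises KeyError,
# which catches typos early:
# item["post_idd"] = "x"           # -> KeyError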
2.spiders/sunwz.py
# -*- coding: utf-8 -*-
import scrapy
from sunwzSpider.items import SunwzspiderItem


class SunwzSpider(scrapy.Spider):
    name = 'sunwz'
    allowed_domains = ['wz.sun0769.com']
    url = "http://wz.sun0769.com/index.php/question/questionType?type=4&page="
    offset = 0
    start_urls = [url + str(offset)]

    def parse(self, response):
        table = response.xpath("//table[@width='98%']")[0]
        trs = table.xpath("./tr")
        # Flag: did this page contain rows, i.e. is there a next page to crawl?
        next_flag = False
        for tr in trs:
            next_flag = True
            try:
                item = SunwzspiderItem()
                # Post ID
                post_id = tr.xpath("./td/text()").extract()[0]
                td2 = tr.xpath("./td")[1]
                # Complaint type
                post_type = td2.xpath("./a/text()").extract()[0]
                # Post title
                post_title = td2.xpath("./a/text()").extract()[1]
                # Post URL
                post_url = td2.xpath("./a/@href").extract()[1]
                # Department
                sector = td2.xpath("./a/text()").extract()[2]
                td3 = tr.xpath("./td")[2]
                # Status
                post_state = td3.xpath("./span/text()").extract()[0]
                # Netizen (poster)
                net_friend = tr.xpath("./td/text()").extract()[3]
                # Post time
                post_time = tr.xpath("./td/text()").extract()[4]

                item["post_id"] = post_id
                item["post_type"] = post_type
                item["post_title"] = post_title
                item["post_url"] = post_url
                item["sector"] = sector
                item["post_state"] = post_state
                item["net_friend"] = net_friend
                item["post_time"] = post_time
                yield item
            except Exception:
                # Skip rows that do not match the expected layout (e.g. the header row).
                pass

        # If this page had rows, request the next one; the list shows 30 posts
        # per page, hence the offset step.
        if next_flag:
            self.offset += 30
            yield scrapy.Request(self.url + str(self.offset), callback=self.parse)
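Indexing directly into extract() results and catching every exception with a broad except makes failures invisible. As a suggestion that goes beyond the original code, the same row extraction can be written defensively with extract_first(), so malformed rows are skipped explicitly. A sketch of a helper method that could live on SunwzSpider:

    def parse_row(self, tr):
        # Sketch only, not part of the original spider; same XPaths as above.
        tds = tr.xpath("./td")
        if len(tds) < 3:
            return None  # header or separator row
        links = tds[1].xpath("./a/text()").extract()
        hrefs = tds[1].xpath("./a/@href").extract()
        if len(links) < 3 or len(hrefs) < 2:
            return None  # row lacks the expected [type, title, department] shape
        item = SunwzspiderItem()
        item["post_id"] = tds[0].xpath("./text()").extract_first(default="").strip()
        item["post_type"], item["post_title"], item["sector"] = links[0], links[1], links[2]
        item["post_url"] = hrefs[1]  # like the original, take the second link's href
        item["post_state"] = tds[2].xpath("./span/text()").extract_first(default="")
        return item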
3.pipelines.py
# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: http://doc.scrapy.org/en/latest/topics/item-pipeline.html

import json


class SunwzspiderPipeline(object):
    def __init__(self):
        self.file = open("阳光问政平台.json", "w", encoding="utf-8")
        # True until the first item is written, so commas are placed between
        # items and the output stays a valid JSON array.
        self.first_flag = True

    def process_item(self, item, spider):
        if self.first_flag:
            self.first_flag = False
            content = "[\n" + json.dumps(dict(item), ensure_ascii=False)
        else:
            content = ",\n" + json.dumps(dict(item), ensure_ascii=False)
        self.file.write(content)
        return item

    def close_spider(self, spider):
        self.file.write("\n]")
        self.file.close()
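Because each item is written as it arrives, a crash mid-crawl leaves the hand-built JSON array unterminated. A JSON Lines variant (one object per line) avoids the bracket and comma bookkeeping entirely; this is an optional alternative, not part of the original post:

# -*- coding: utf-8 -*-
import json


class JsonLinesPipeline(object):
    # Sketch: writes one JSON object per line ("JSON Lines"), so the file
    # stays readable even if the crawl stops partway through.
    def open_spider(self, spider):
        self.file = open("阳光问政平台.jl", "w", encoding="utf-8")

    def process_item(self, item, spider):
        self.file.write(json.dumps(dict(item), ensure_ascii=False) + "\n")
        return item

    def close_spider(self, spider):
        self.file.close()

Scrapy's built-in feed exports can produce the same file with no pipeline code at all: scrapy crawl sunwz -o posts.jl.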
4.settings.py
# -*- coding: utf-8 -*-

# Scrapy settings for sunwzSpider project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     http://doc.scrapy.org/en/latest/topics/settings.html
#     http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#     http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'sunwzSpider'

SPIDER_MODULES = ['sunwzSpider.spiders']
NEWSPIDER_MODULE = 'sunwzSpider.spiders'

# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'sunwzSpider (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = True

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See http://scrapy.readthedocs.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
DOWNLOAD_DELAY = 2
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
DEFAULT_REQUEST_HEADERS = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.71 Safari/537.36',
    # 'Accept-Language': 'en',
}

# Enable or disable spider middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'sunwzSpider.middlewares.SunwzspiderSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'sunwzSpider.middlewares.MyCustomDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See http://scrapy.readthedocs.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'sunwzSpider.pipelines.SunwzspiderPipeline': 300,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See http://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
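With the four files in place, start the crawl from the project root with Scrapy's standard command (sunwz is the name defined on the spider):

scrapy crawl sunwz

Note that ROBOTSTXT_OBEY = True makes Scrapy fetch the site's robots.txt first and skip any disallowed URLs, and DOWNLOAD_DELAY = 2 spaces requests two seconds apart to keep the load on the site low. The pipeline writes the results to 阳光问政平台.json as a single JSON array.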