[Repost] Notes on Installing Scrapy and Testing a Demo
I. Environment Setup
1. Install Scrapy: pip install scrapy
2. Install PyWin32. A pre-built installer can be downloaded from http://www.lfd.uci.edu/%7Egohlke/pythonlibs/#pywin32
After installation, an import error may still be reported when Scrapy starts.
Fix: copy the two pywin32 DLL files into the C:\Windows\System32 directory.
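(The two DLLs are typically found in Python's Lib\site-packages\pywin32_system32 folder after installing pywin32; the exact file names depend on your Python version.)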
II. Creating a Scrapy Project (using an example provided online)
1. Open cmd, change into a working directory (here d:/tmp/), and run: scrapy startproject myscrapy. When the command finishes, it generates the directory structure shown below.
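For reference, scrapy startproject produces the standard project template, roughly the following layout (exact files vary slightly across Scrapy versions):

myscrapy/
    scrapy.cfg            # deploy configuration file
    myscrapy/             # the project's Python module
        __init__.py
        items.py          # item definitions (step 2 below)
        pipelines.py      # item pipelines (step 4 below)
        settings.py       # project settings (step 5 below)
        spiders/          # directory for spider code (step 3 below)
            __init__.py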
2. Define the items
# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# http://doc.scrapy.org/en/latest/topics/items.html

import scrapy

class MyscrapyItem(scrapy.Item):
    news_title = scrapy.Field()  # NJUPT news title
    news_date = scrapy.Field()   # NJUPT news date
    news_url = scrapy.Field()    # link to the NJUPT news detail page
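Scrapy Item objects behave like dicts restricted to their declared fields; a quick illustration (a hypothetical snippet, e.g. typed into a scrapy shell session):

from myscrapy.items import MyscrapyItem

item = MyscrapyItem()
item['news_title'] = u'sample title'   # declared fields work like dict keys
print(item['news_title'])
# item['author'] = 'x'  # would raise KeyError: field not declared on the Item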
3. Write the spider
# -*- coding: utf-8 -*-
import scrapy
from myscrapy.items import MyscrapyItem

class myscrapySpider(scrapy.Spider):
    name = "myscrapy"
    allowed_domains = ["njupt.edu.cn"]
    start_urls = [
        "http://news.njupt.edu.cn/s/222/t/1100/p/1/c/6866/i/1/list.htm",
    ]

    def parse(self, response):
        news_page_num = 14  # news entries per list page
        page_num = 386      # total number of list pages
        if response.status == 200:
            # extract the 14 entries on this first page
            for j in range(1, news_page_num + 1):
                item = MyscrapyItem()
                # the union XPath yields three nodes for row j,
                # unpacked here as url, title, date
                item['news_url'], item['news_title'], item['news_date'] = response.xpath(
                    "//div[@id='newslist']/table[1]/tr[" + str(j) + "]//a/font/text()"
                    "|//div[@id='newslist']/table[1]/tr[" + str(j) + "]//td[@class='postTime']/text()"
                    "|//div[@id='newslist']/table[1]/tr[" + str(j) + "]//a/@href").extract()
                yield item
            # then queue the remaining list pages (2..386)
            for i in range(2, page_num + 1):
                next_page_url = "http://news.njupt.edu.cn/s/222/t/1100/p/1/c/6866/i/" + str(i) + "/list.htm"
                yield scrapy.Request(next_page_url, callback=self.parse_news)

    def parse_news(self, response):
        news_page_num = 14
        if response.status == 200:
            for j in range(1, news_page_num + 1):
                item = MyscrapyItem()
                item['news_url'], item['news_title'], item['news_date'] = response.xpath(
                    "//div[@id='newslist']/table[1]/tr[" + str(j) + "]//a/font/text()"
                    "|//div[@id='newslist']/table[1]/tr[" + str(j) + "]//td[@class='postTime']/text()"
                    "|//div[@id='newslist']/table[1]/tr[" + str(j) + "]//a/@href").extract()
                yield item
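One caveat: the three-way unpack assumes each row's union XPath returns exactly three nodes, and a row with a missing field raises ValueError and kills the callback. A more defensive variant of the callback (a sketch, not the original code) selects each row once and queries relative to it:

def parse_news(self, response):
    # defensive version: one relative query per field, so a malformed
    # row produces an incomplete item instead of an unpacking error
    for row in response.xpath("//div[@id='newslist']/table[1]/tr"):
        item = MyscrapyItem()
        item['news_title'] = row.xpath(".//a/font/text()").extract_first()
        item['news_date'] = row.xpath(".//td[@class='postTime']/text()").extract_first()
        item['news_url'] = row.xpath(".//a/@href").extract_first()
        if item['news_title']:  # skip header/separator rows
            yield item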
4. Write the pipeline
# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: http://doc.scrapy.org/en/latest/topics/item-pipeline.html

class MyscrapyPipeline(object):
    def __init__(self):
        self.file = open('myscrapy.txt', mode='wb')

    def process_item(self, item, spider):
        # the file is opened in binary mode, so encode every write
        self.file.write(item['news_title'].encode("GBK"))
        self.file.write("\n".encode("GBK"))
        self.file.write(item['news_date'].encode("GBK"))
        self.file.write("\n".encode("GBK"))
        self.file.write(item['news_url'].encode("GBK"))
        self.file.write("\n".encode("GBK"))
        return item

    def close_spider(self, spider):
        self.file.close()
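As an alternative to hand-encoded GBK text, a JSON Lines pipeline keeps the output machine-readable. This is a sketch (not part of the original notes), assuming the same three item fields:

import codecs
import json

class JsonLinesPipeline(object):
    """Alternative pipeline: one UTF-8 JSON object per line."""

    def open_spider(self, spider):
        self.file = codecs.open('myscrapy.jl', 'w', encoding='utf-8')

    def process_item(self, item, spider):
        self.file.write(json.dumps(dict(item), ensure_ascii=False) + '\n')
        return item

    def close_spider(self, spider):
        self.file.close()

To switch to it, point ITEM_PIPELINES at 'myscrapy.pipelines.JsonLinesPipeline' instead of MyscrapyPipeline.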
5. Edit settings.py
# -*- coding: utf-8 -*-

# Scrapy settings for myscrapy project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
# http://doc.scrapy.org/en/latest/topics/settings.html
# http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
# http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'myscrapy'

SPIDER_MODULES = ['myscrapy.spiders']
NEWSPIDER_MODULE = 'myscrapy.spiders'

# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'myscrapy (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = True

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See http://scrapy.readthedocs.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'myscrapy.middlewares.MyCustomSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'myscrapy.middlewares.MyCustomDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See http://scrapy.readthedocs.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'myscrapy.pipelines.MyscrapyPipeline': 1,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See http://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
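Since the demo walks 386 list pages of a single site, it may be worth enabling the throttling settings left commented above; a conservative suggestion (not in the original notes):

DOWNLOAD_DELAY = 1                  # pause between requests to the same site
CONCURRENT_REQUESTS_PER_DOMAIN = 8  # cap parallel requests per domain
AUTOTHROTTLE_ENABLED = True         # adapt the delay to observed latency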
6. Change into D:\tmp\myscrapy\myscrapy\spiders, start the crawler, and check the results: scrapy crawl myscrapy
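Note that scrapy crawl can in fact be run from any directory inside the project (Scrapy locates scrapy.cfg by searching upward), and the built-in feed exporter offers a pipeline-free alternative: scrapy crawl myscrapy -o myscrapy.json dumps every yielded item to a JSON file.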