Scrapy Basics Notes
Preface
reference: https://www.tutorialspoint.com/scrapy/scrapy_quick_guide.htm
official doc: http://doc.scrapy.org/en/1.0/intro/tutorial.html
Installation
reference: http://doc.scrapy.org/en/1.0/intro/install.html#intro-install
- Start a container and install Scrapy inside it (this takes quite a while):
root@ubuntu:/home/vickey# docker run -itd --name test-scrapy ubuntu
root@ubuntu:/home/vickey# docker exec -it test-scrapy /bin/bash
root@8b825656f58b:/# apt-get update
...
root@8b825656f58b:/# apt-get install python-dev python-pip libxml2-dev libxslt1-dev zlib1g-dev libffi-dev libssl-dev
...
root@8b825656f58b:/# pip install scrapy
...
root@8b825656f58b:/# scrapy -v
Scrapy 1.6.0 - no active project
...
- Alternatively, use the image I have already built: vickeywu/scrapy-python3
root@ubuntu:/home/vickey# docker pull vickeywu/scrapy-python3
Using default tag: latest
latest: Pulling from vickeywu/scrapy-python3
Digest: sha256:e1bdf37f93ac7ced9168a7a697576ce905e73fb4775f7cb80de196fa2df5a549
Status: Downloaded newer image for vickeywu/scrapy-python3:latest
root@ubuntu:/home/vickey# docker run -itd --name test-scrapy vickeywu/scrapy-python3
Common Commands
- Create a project: scrapy startproject scrapy_project_name
- Create a spider for the project (run inside the scrapy_project_name directory): scrapy genspider spider_name domain_name.com
- Run a project spider (run inside the scrapy_project_name directory): scrapy crawl spider_name
- Use scrapy -h to see more commands, as shown below:
root@2fb0da64a933:/home/test_scrapy# scrapy -h
Scrapy 1.5.0 - project: test_scrapy
Usage:
scrapy <command> [options] [args]
Available commands:
bench Run quick benchmark test
check Check spider contracts
crawl Run a spider
edit Edit spider
fetch Fetch a URL using the Scrapy downloader
genspider Generate new spider using pre-defined templates
list List available spiders
parse Parse URL (using its spider) and print the results
runspider Run a self-contained spider (without creating a project)
settings Get settings values
shell Interactive scraping console
startproject Create new project
version Print Scrapy version
view Open URL in browser, as seen by Scrapy
Use "scrapy <command> -h" to see more info about a command
Creating a Project
reference: http://doc.scrapy.org/en/1.0/intro/tutorial.html#creating-a-project
root@ubuntu:/home/vickey# docker exec -it test-scrapy /bin/bash
root@2fb0da64a933:/# cd /home
root@2fb0da64a933:/home# scrapy startproject test_scrapy
New Scrapy project 'test_scrapy', using template directory '/usr/local/lib/python2.7/dist-packages/scrapy/templates/project', created in:
/home/test_scrapy
You can start your first spider with:
cd test_scrapy
scrapy genspider example example.com
Creating a Project Spider
root@2fb0da64a933:/home/test_scrapy# cd test_scrapy/
root@2fb0da64a933:/home/test_scrapy/test_scrapy# scrapy genspider test_spider baidu.com
Created spider 'test_spider' using template 'basic' in module:
test_scrapy.spiders.test_spider
Project and Spider Files
- Overview
root@8b825656f58b:/home# tree -L 2 test_scrapy/
test_scrapy/
|-- scrapy.cfg            # deploy configuration file
`-- test_scrapy           # the project's Python module
    |-- __init__.py
    |-- items.py          # project items file
    |-- middlewares.py    # project middlewares file
    |-- pipelines.py      # project pipelines file
    |-- settings.py       # project settings file
    `-- spiders           # directory holding the project's spiders
        |-- __init__.py
        `-- test_spider.py    # the spider created above
2 directories, 6 files
- scrapy.cfg
root@2fb0da64a933:/home# cd test_scrapy/ # cd into the project just created
root@2fb0da64a933:/home/test_scrapy# ls
scrapy.cfg test_scrapy
root@2fb0da64a933:/home/test_scrapy# cat scrapy.cfg
# Automatically created by: scrapy startproject
#
# For more information about the [deploy] section see:
# https://scrapyd.readthedocs.io/en/latest/deploy.html
[settings]
default = test_scrapy.settings # default = <project_name>.settings
[deploy]
#url = http://localhost:6800/
project = test_scrapy # project = <project_name>
root@2fb0da64a933:/home/test_scrapy# cd test_scrapy/
root@2fb0da64a933:/home/test_scrapy/test_scrapy# ls # files created by default when the project was created
__init__.py __init__.pyc items.py middlewares.py pipelines.py settings.py settings.pyc spiders
- items.py
The fields of the scraped data (and, later, the database columns) are defined here.
root@2fb0da64a933:/home/test_scrapy/test_scrapy# cat items.py
# -*- coding: utf-8 -*-
# Define here the models for your scraped items
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/items.html
import scrapy
class TestScrapyItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    pass
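A minimal sketch of how fields are declared (the field names title and url are illustrative assumptions, chosen to match the examples further down, and are not part of the generated template):

import scrapy

class TestScrapyItem(scrapy.Item):
    # each class attribute of type scrapy.Field() becomes a key the spider can fill in
    title = scrapy.Field()
    url = scrapy.Field()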
- middlewares.py (skipped for now)
root@2fb0da64a933:/home/test_scrapy/test_scrapy# cat middlewares.py
# -*- coding: utf-8 -*-
# Define here the models for your spider middleware
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/spider-middleware.html
from scrapy import signals
class TestScrapySpiderMiddleware(object):
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the spider middleware does not modify the
    # passed objects.
    ...
class TestScrapyDownloaderMiddleware(object):
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the downloader middleware does not modify the
    # passed objects.
    ...
- pipelines.py
Database connections, writes to files or databases, and similar output handling go here (the default template is shown first; a small sketch follows it, and a fuller example will come in a later note).
root@2fb0da64a933:/home/test_scrapy/test_scrapy# cat pipelines.py
# -*- coding: utf-8 -*-
# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html
class TestScrapyPipeline(object):
    def process_item(self, item, spider):
        return item
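A hedged sketch of what a real pipeline might look like, writing each item to a JSON-lines file (the items.jl filename is an assumption; for the pipeline to run it must also be enabled in settings.py, as shown in the next section):

import json

class TestScrapyPipeline(object):
    def open_spider(self, spider):
        # called once when the spider is opened: open the output file
        self.file = open('items.jl', 'w')

    def close_spider(self, spider):
        # called once when the spider is closed: close the file
        self.file.close()

    def process_item(self, item, spider):
        # write each scraped item as one JSON line, then pass it on
        self.file.write(json.dumps(dict(item)) + '\n')
        return item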
- settings.py
root@2fb0da64a933:/home/test_scrapy/test_scrapy# cat settings.py|grep -v ^# |grep -v ^$
BOT_NAME = 'test_scrapy'
SPIDER_MODULES = ['test_scrapy.spiders']
NEWSPIDER_MODULE = 'test_scrapy.spiders'
ROBOTSTXT_OBEY = True
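Note that a pipeline only runs if it is registered in ITEM_PIPELINES. A minimal sketch for the template pipeline above (300 is the conventional priority value; lower numbers run earlier):

ITEM_PIPELINES = {
    'test_scrapy.pipelines.TestScrapyPipeline': 300,
}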
- The project spider file
reference: https://docs.scrapy.org/en/latest/topics/spiders.html?highlight=filter#scrapy-spider
root@2fb0da64a933:/home/test_scrapy/test_scrapy# cd spiders/
root@2fb0da64a933:/home/test_scrapy/test_scrapy/spiders# ls
__init__.py test_spider.py # test_spider.py is the spider file just created; all spiders of this project live in this directory
root@2fb0da64a933:/home/test_scrapy/test_scrapy/spiders# cat test_spider.py
# -*- coding: utf-8 -*-
import scrapy
class TestSpiderSpider(scrapy.Spider): # class name = spider name + "Spider"
    name = 'test_spider' # the spider name given when the spider was created
    allowed_domains = ['baidu.com'] # the domain the spider is allowed to crawl, given at creation time
    start_urls = ['http://baidu.com/'] # list of root URLs the spider starts crawling from

    def parse(self, response):
        pass
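A minimal sketch of what parse() typically does: extract data with selectors and yield items and follow-up requests. The selectors and the title/url fields below are illustrative assumptions matching the items.py sketch above, not part of the generated template:

# -*- coding: utf-8 -*-
import scrapy
from test_scrapy.items import TestScrapyItem

class TestSpiderSpider(scrapy.Spider):
    name = 'test_spider'
    allowed_domains = ['baidu.com']
    start_urls = ['http://baidu.com/']

    def parse(self, response):
        # fill one item from the current page
        item = TestScrapyItem()
        item['title'] = response.css('title::text').extract_first()
        item['url'] = response.url
        yield item
        # follow links on the page and parse them with this same method
        for href in response.css('a::attr(href)').extract():
            yield response.follow(href, callback=self.parse)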
Running a Project Spider
- Run the spider without arguments
The official docs say the spider should be run from the project's top-level directory, but in practice it seems that any directory inside the project works.
root@2fb0da64a933:/home/test_scrapy/test_scrapy/spiders# scrapy crawl test_spider
2019-06-26 07:02:52 [scrapy.utils.log] INFO: Scrapy 1.5.0 started (bot: test_scrapy)
......
2019-06-26 07:02:53 [scrapy.core.engine] INFO: Spider closed (finished)
- Run the spider with arguments
This requires first accepting the passed-in argument in the spider's __init__ method.
root@2fb0da64a933:/home/test_scrapy/test_scrapy/spiders# cat test_spider.py
# -*- coding: utf-8 -*-
import scrapy
class TestSpiderSpider(scrapy.Spider):
    name = 'test_spider'
    allowed_domains = ['baidu.com']
    start_urls = ['http://baidu.com/']

    def __init__(self, group, *args, **kwargs):
        super(TestSpiderSpider, self).__init__(*args, **kwargs)
        # build the start URL from the value passed on the command line with -a group=...
        self.start_urls = ['http://www.example.com/group/%s' % group]

    def parse(self, response):
        pass
root@2fb0da64a933:/home/test_scrapy/test_scrapy/spiders# scrapy crawl test_spider -a group=aa
2019-06-27 03:11:35 [scrapy.utils.log] INFO: Scrapy 1.5.0 started (bot: test_scrapy)
......
2019-06-27 03:11:35 [scrapy.core.engine] INFO: Spider closed (finished)
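Every -a name=value pair is handed to the spider's __init__ as a string keyword argument. A small sketch of making the argument optional by giving it a default (the default value 'aa' is an assumption for illustration):

import scrapy

class TestSpiderSpider(scrapy.Spider):
    name = 'test_spider'

    def __init__(self, group='aa', *args, **kwargs):
        # 'group' falls back to 'aa' when no -a group=... is given on the command line
        super(TestSpiderSpider, self).__init__(*args, **kwargs)
        self.start_urls = ['http://www.example.com/group/%s' % group]

    def parse(self, response):
        pass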
Hands-on: a 电影天堂 (Movie Heaven) Spider
Too much material for this post; it will go into the next note.