1. Create the project (from the command line)

scrapy startproject xxxx
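
This generates the standard Scrapy project skeleton (shown here for a project named youyuan, which the rest of this post assumes; the exact file list can vary slightly with the Scrapy version):

youyuan/
    scrapy.cfg
    youyuan/
        __init__.py
        items.py
        middlewares.py
        pipelines.py
        settings.py
        spiders/
            __init__.py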

2. Write the items file

# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html

from scrapy import Field, Item


class YouyuanItem(Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    # Username
    username = Field()
    # Age
    age = Field()
    # Avatar image URL
    herder_url = Field()
    # Album image URLs
    image_url = Field()
    # Self-introduction
    content = Field()
    # Place of origin
    place_from = Field()
    # Education
    education = Field()
    # Hobbies
    hobby = Field()
    # Profile page URL
    source_url = Field()
    # Data source
    sourcec = Field()

3. Write the settings file

# -*- coding: utf-8 -*-

# Scrapy settings for youyuan project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://docs.scrapy.org/en/latest/topics/settings.html
#     https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://docs.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'youyuan'

SPIDER_MODULES = ['youyuan.spiders']
NEWSPIDER_MODULE = 'youyuan.spiders'

# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'youyuan (+http://www.yourdomain.com)'

# Obey robots.txt rules
#ROBOTSTXT_OBEY = True

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'youyuan.middlewares.YouyuanSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'youyuan.middlewares.YouyuanDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'youyuan.pipelines.YouyuanPipeline': 300,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
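
Note that ITEM_PIPELINES enables youyuan.pipelines.YouyuanPipeline at priority 300, but this post does not show pipelines.py. A minimal sketch that writes each item as one JSON line might look like the following (the output file name youyuan.jsonl and the JSON Lines format are assumptions; any persistence logic works here):

# -*- coding: utf-8 -*-
import json


class YouyuanPipeline(object):
    # Open the output file once when the spider starts
    # (youyuan.jsonl is an assumed file name)
    def open_spider(self, spider):
        self.file = open('youyuan.jsonl', 'w', encoding='utf-8')

    # Serialize each scraped item as one JSON line;
    # ensure_ascii=False keeps Chinese text readable
    def process_item(self, item, spider):
        line = json.dumps(dict(item), ensure_ascii=False) + '\n'
        self.file.write(line)
        return item

    # Close the file when the spider finishes
    def close_spider(self, spider):
        self.file.close()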

4. Create a custom spider in the spiders directory

scrapy genspider -t crawl yy www.xxxx.com

Then edit the generated file (the -t crawl option gives a CrawlSpider skeleton, matching the code below):

# -*- coding: utf-8 -*-
import re

import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule

from youyuan.items import YouyuanItem


class YySpider(CrawlSpider):
    name = 'yy'
    allowed_domains = ['xxxx.com']
    start_urls = ['http://www.xxxx.com/find/beijing/mm18-25/advance-0-0-0-0-0-0-0/p1/']

    # Matches the link of every listing page
    page_links = LinkExtractor(allow=r"xxxx.com/find/beijing/mm18-25/advance-0-0-0-0-0-0-0/p\d+/")
    # Matches the link of each profile page
    profile_links = LinkExtractor(allow=r"xxxx.com/\d+-profile/")

    rules = (
        # No callback: extracted listing-page links are followed by default
        Rule(page_links),
        # With a callback: profile pages are parsed (and not followed further by default)
        Rule(profile_links, callback='parse_item'),
    )

    def parse_item(self, response):
        item = YouyuanItem()
        # Username
        item['username'] = self.get_username(response)
        # Age
        item['age'] = self.get_age(response)
        # Avatar image URL
        item['herder_url'] = self.get_herder_url(response)
        # Album images
        item['image_url'] = self.get_image_url(response)
        # Self-introduction
        item['content'] = self.get_content(response)
        # Place of origin
        item['place_from'] = self.get_place_from(response)
        # Education
        item['education'] = self.get_education(response)
        # Hobbies
        item['hobby'] = self.get_hobby(response)
        # Profile page URL
        item['source_url'] = response.url
        # Data source
        item['sourcec'] = "youyuan"
        yield item

    def get_username(self, response):
        username = response.xpath("//dl[@class='personal_cen']//div[@class='main']/strong/text()").extract()
        if len(username):
            username = username[0]
        else:
            username = "NULL"
        return username.strip()

    def get_age(self, response):
        age = response.xpath("//dl[@class='personal_cen']//dd/p/text()").extract()
        if len(age):
            # e.g. "25岁" -> keep the "<digits>岁" part
            age = re.findall(r"\d+岁", age[0])[0]
        else:
            age = "NULL"
        return age.strip()

    def get_herder_url(self, response):
        herder_url = response.xpath("//dl[@class='personal_cen']//dt/img/@src").extract()
        if len(herder_url):
            herder_url = herder_url[0]
        else:
            herder_url = "NULL"
        return herder_url.strip()

    def get_image_url(self, response):
        # Returns the full list of album image URLs, or "NULL" if none
        image_url = response.xpath("//div[@class='ph_show']/ul/li/a/img/@src").extract()
        if not len(image_url):
            image_url = "NULL"
        return image_url

    def get_content(self, response):
        content = response.xpath("//div[@class='pre_data']/ul/li/p/text()").extract()
        if len(content):
            content = content[0]
        else:
            content = "NULL"
        return content.strip()

    def get_place_from(self, response):
        place_from = response.xpath("//div[@class='pre_data']/ul/li[2]//ol[1]/li[1]/span/text()").extract()
        if len(place_from):
            place_from = place_from[0]
        else:
            place_from = "NULL"
        return place_from.strip()

    def get_education(self, response):
        education = response.xpath("//div[@class='pre_data']/ul/li[3]//ol[2]/li[2]/span/text()").extract()
        if len(education):
            education = education[0]
        else:
            education = "NULL"
        return education.strip()

    def get_hobby(self, response):
        hobby = response.xpath("//dl[@class='personal_cen']//ol/li/text()").extract()
        if len(hobby):
            hobby = ",".join(hobby).replace(" ", "")
        else:
            hobby = "NULL"
        return hobby.strip()

5. Run the spider

scrapy crawl yy
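
If you just want the data in a file without writing a pipeline, Scrapy's built-in feed exports can serialize the items directly, for example:

scrapy crawl yy -o youyuan.json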

OVER!
