Hi, everyone. It has been a while since I last posted a Scrapy crawling example. A couple of days ago a colleague mentioned that sites like Lagou and BOSS直聘 are hard to scrape. Yesterday afternoon I started crawling the Python-crawler job postings on BOSS直聘, and it turned out to be much simpler than expected.

Problems to solve:

  Most of the content on BOSS直聘 (www.zhipin.com) is loaded statically; only a small part is loaded dynamically.

  1. Statically loaded: the company details and job-description pages (Figure 1_1)

  2. Dynamically loaded: the search box on the home page, where we search for "python爬虫" (Figure 1_2)

Approach:

  1. Static content: crawl it the usual way (easy)

  2. Dynamic content: handle it with selenium (easy) — a minimal standalone sketch follows the figures below

                  Figure (1_1)

                  Figure (1_2)
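The core idea of the dynamic part is to let selenium perform the search and then hand the resulting URL over to Scrapy. A minimal standalone sketch of just that step, assuming chromedriver is installed and using the same (older) selenium API as the full spider in 3_5:

import time
from selenium import webdriver

driver = webdriver.Chrome()
driver.get('http://www.zhipin.com/')
time.sleep(3)
driver.find_element_by_name('query').send_keys(u'python爬虫')   # type the keyword into the search box
driver.find_element_by_class_name('btn-search').click()        # click the search button
time.sleep(3)
print(driver.current_url)   # the result-list URL that Scrapy will then crawl
driver.quit()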

As usual, here is a screenshot of the crawl results; feel free to try it yourself:

(3) Now for the main part

3_1. Fields to extract: items.py

import scrapy


class BossItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    job_title = scrapy.Field()      # job title
    salary = scrapy.Field()         # salary range
    address = scrapy.Field()        # work location
    job_time = scrapy.Field()       # required work experience
    education = scrapy.Field()      # required education
    company = scrapy.Field()        # company name
    company_info = scrapy.Field()   # short company description
    detail_text = scrapy.Field()    # full job-description text
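Since scrapy.Item behaves like a dict, the spider in 3_5 simply assigns each field by key. A quick illustrative sketch (the values here are made up) of what a populated item looks like before the pipeline stores it:

from Boss.items import BossItem

item = BossItem()
item['job_title'] = u'python爬虫工程师'   # illustrative value
item['salary'] = u'15k-25k'              # illustrative value
print(dict(item))   # what the MongoDB pipeline in 3_3 will store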

3_2. Setting up a proxy: middlewares.py

from scrapy import signals


class BossSpiderMiddleware(object):
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the spider middleware does not modify the
    # passed objects.
    # Note: although named "SpiderMiddleware", this class is registered under
    # DOWNLOADER_MIDDLEWARES in settings.py, which is why process_request()
    # below is actually called and can set the proxy.

    def __init__(self, ip=''):
        self.ip = ip

    def process_request(self, request, spider):
        # Send every request through the proxy server.
        print('http://10.240.252.16:911')
        request.meta['proxy'] = 'http://10.240.252.16:911'

    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create your spiders.
        s = cls()
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
        return s

    def process_spider_input(self, response, spider):
        # Called for each response that goes through the spider
        # middleware and into the spider.
        # Should return None or raise an exception.
        return None

    def process_spider_output(self, response, result, spider):
        # Called with the results returned from the Spider, after
        # it has processed the response.
        # Must return an iterable of Request, dict or Item objects.
        for i in result:
            yield i

    def process_spider_exception(self, response, exception, spider):
        # Called when a spider or process_spider_input() method
        # (from other spider middleware) raises an exception.
        # Should return either None or an iterable of Response, dict
        # or Item objects.
        pass

    def process_start_requests(self, start_requests, spider):
        # Called with the start requests of the spider, and works
        # similarly to the process_spider_output() method, except
        # that it doesn't have a response associated.
        # Must return only requests (not items).
        for r in start_requests:
            yield r

    def spider_opened(self, spider):
        spider.logger.info('Spider opened: %s' % spider.name)


class BossDownloaderMiddleware(object):
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the downloader middleware does not modify the
    # passed objects.

    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create your spiders.
        s = cls()
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
        return s

    def process_request(self, request, spider):
        # Called for each request that goes through the downloader middleware.
        # Must either:
        # - return None: continue processing this request
        # - or return a Response object
        # - or return a Request object
        # - or raise IgnoreRequest: process_exception() methods of
        #   installed downloader middleware will be called
        return None

    def process_response(self, request, response, spider):
        # Called with the response returned from the downloader.
        # Must either:
        # - return a Response object
        # - return a Request object
        # - or raise IgnoreRequest
        return response

    def process_exception(self, request, exception, spider):
        # Called when a download handler or a process_request()
        # (from other downloader middleware) raises an exception.
        # Must either:
        # - return None: continue processing this exception
        # - return a Response object: stops process_exception() chain
        # - return a Request object: stops process_exception() chain
        pass

    def spider_opened(self, spider):
        spider.logger.info('Spider opened: %s' % spider.name)
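The proxy above is hard-coded to a single address. If you have several proxies, rotating them randomly is a small extension — a minimal sketch, assuming a hypothetical PROXY_LIST of your own addresses (it would be registered in DOWNLOADER_MIDDLEWARES just like the class above):

import random

# Hypothetical proxy pool; replace with addresses you actually control.
PROXY_LIST = [
    'http://10.240.252.16:911',
    'http://10.240.252.17:911',
]


class RandomProxyMiddleware(object):
    # Downloader middleware that assigns a random proxy to each outgoing request.
    def process_request(self, request, spider):
        request.meta['proxy'] = random.choice(PROXY_LIST)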

3_3. Saving the data (to MongoDB): pipelines.py

import pymongo
from scrapy.item import Item


class BossPipeline(object):
    def process_item(self, item, spider):
        return item


class MongoDBPipeline(object):
    # Store scraped items in MongoDB
    @classmethod
    def from_crawler(cls, crawler):
        cls.DB_URL = crawler.settings.get("MONGO_DB_URL", 'mongodb://localhost:27017/')
        cls.DB_NAME = crawler.settings.get("MONGO_DB_NAME", 'scrapy_data')
        return cls()

    def open_spider(self, spider):
        self.client = pymongo.MongoClient(self.DB_URL)
        self.db = self.client[self.DB_NAME]

    def close_spider(self, spider):
        self.client.close()

    def process_item(self, item, spider):
        collection = self.db[spider.name]
        post = dict(item) if isinstance(item, Item) else item
        collection.insert_one(post)  # insert() is deprecated in recent pymongo
        return item
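After a crawl finishes you can sanity-check what landed in MongoDB. A minimal sketch, assuming the settings below (database boss_detail, and a collection named after the spider, i.e. boss):

import pymongo

client = pymongo.MongoClient('mongodb://localhost:27017/')
db = client['boss_detail']                 # MONGO_DB_NAME from settings.py
for doc in db['boss'].find().limit(5):     # collection is named after the spider
    print('%s  %s' % (doc['job_title'], doc['salary']))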

3_4.settings.py

MONGO_DB_URL = 'mongodb://localhost:27017/'
MONGO_DB_NAME = 'boss_detail'

# Pool of browser User-Agent strings (note: Scrapy's own USER_AGENT setting
# expects a single string; see the note after this file)
USER_AGENT = {
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1",
    "Mozilla/5.0 (X11; CrOS i686 2268.111.0) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.57 Safari/536.11",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1092.0 Safari/536.6",
    "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1090.0 Safari/536.6",
    "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/19.77.34.5 Safari/537.1",
    "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.9 Safari/536.5",
    "Mozilla/5.0 (Windows NT 6.0) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.36 Safari/536.5",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
    "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_0) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.0 Safari/536.3",
    "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24",
    "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24",
}

FEED_EXPORT_FIELDS = ['job_title', 'salary', 'address', 'job_time', 'education', 'company', 'company_info']

# Obey robots.txt rules
ROBOTSTXT_OBEY = False

# Configure maximum concurrent requests performed by Scrapy (default: 16)
CONCURRENT_REQUESTS = 10

# See also autothrottle settings and docs
DOWNLOAD_DELAY = 0.5

# Disable cookies (enabled by default)
COOKIES_ENABLED = False

DOWNLOADER_MIDDLEWARES = {
    #'Boss.middlewares.BossDownloaderMiddleware': 543,
    # On newer Scrapy versions the path is 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware'
    'scrapy.contrib.downloadermiddleware.httpproxy.HttpProxyMiddleware': 543,
    # BossSpiderMiddleware.process_request() is what sets the proxy, so it is registered here
    'Boss.middlewares.BossSpiderMiddleware': 123,
}

ITEM_PIPELINES = {
    # Note: HttpProxyMiddleware is a downloader middleware, not an item pipeline,
    # so it does not belong in ITEM_PIPELINES.
    'Boss.pipelines.MongoDBPipeline': 300,
}
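One thing worth noting: Scrapy's built-in USER_AGENT setting expects a single string, while the value above is a pool. If you want each request to pick a random User-Agent from that pool, one option is a small downloader middleware — a minimal sketch (not part of the original project; it would live in middlewares.py and be registered in DOWNLOADER_MIDDLEWARES):

import random
from Boss.settings import USER_AGENT


class RandomUserAgentMiddleware(object):
    # Pick a random User-Agent from the pool for every outgoing request.
    def process_request(self, request, spider):
        request.headers['User-Agent'] = random.choice(list(USER_AGENT))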

3_5.spider/boss.py

# -*- coding:utf-8 -*-
import random
import re
import time

from selenium import webdriver
from selenium.webdriver.chrome.options import Options

import scrapy
from scrapy.linkextractors import LinkExtractor

from Boss.items import BossItem
from Boss.settings import USER_AGENT

chrome_options = Options()    # Chrome options could be configured here
driver = webdriver.Chrome()   # shared browser, used only to bootstrap the start URL


class BossSpider(scrapy.Spider):
    name = 'boss'
    allowed_domains = ['www.zhipin.com']
    start_urls = ['http://www.zhipin.com/']

    headers = {
        'Accept': 'application/json, text/javascript, */*; q=0.01',
        'Accept-Encoding': 'gzip, deflate',
        'Accept-Language': 'zh-CN,zh;q=0.9',
        'Connection': 'keep-alive',
        'Content-Length': '',
        'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8',
        'Host': 'www.zhipin.com',
        'Origin': 'www.zhipin.com',
        'Referer': 'http://www.zhipin.com/',
        'User-Agent': random.choice(list(USER_AGENT)),  # pick one UA from the pool in settings.py
        'X-Requested-With': 'XMLHttpRequest',
    }

    def start_requests(self):
        # Let selenium open the home page and run the search,
        # then hand the resulting URL over to Scrapy.
        driver.get(self.start_urls[0])
        time.sleep(3)
        # Search for "python爬虫"
        driver.find_element_by_name('query').send_keys(u'python爬虫')
        time.sleep(3)
        driver.find_element_by_class_name('btn-search').click()
        time.sleep(3)
        new_url = driver.current_url.encode('utf8')  # URL of the result list after the redirect
        yield scrapy.Request(new_url)

    def parse(self, response):
        # Extract the link to each job-detail page
        links = LinkExtractor(restrict_css="div.info-primary>h3>a")
        link = links.extract_links(response)
        for each_link in link:
            yield scrapy.Request(each_link.url, callback=self.job_detail)
        #sels = LinkExtractor(restrict_css='div.page')
        #yield scrapy.Request(sels.extract_links(response)[0].url, callback=self.parse)

    def job_detail(self, response):
        spiderItem = BossItem()
        # The fields we want to extract
        spiderItem['job_title'] = response.css('div.job-primary.detail-box div.name h1::text').extract()[0]
        salar = response.css('div.job-primary.detail-box span.badge ::text').extract()[0]
        spiderItem['salary'] = re.findall(r'(\d.*?)\n', salar)[0]  # pull the salary figure out with a regex
        spiderItem['address'] = response.css('div.job-primary.detail-box p::text').extract()[0]
        spiderItem['job_time'] = response.css('div.job-primary.detail-box p::text').extract()[1]
        spiderItem['education'] = response.css('div.job-primary.detail-box p::text').extract()[2]
        spiderItem['company'] = response.css('div.job-primary.detail-box div.info-company h3.name a::text').extract()[0]
        spiderItem['company_info'] = response.css('div.job-primary.detail-box div.info-company>p::text').extract()[0]

        detail = response.css('div.job-sec div.text ::text').extract()
        details = ''.join(detail).replace(' ', '')  # join the text fragments into one string and strip spaces
        spiderItem['detail_text'] = details
        print(spiderItem)
        yield spiderItem
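One loose end: the module-level driver is never shut down when the crawl finishes. A minimal sketch of one way to close it, hooked on Scrapy's spider_closed signal (this is an addition, not part of the original spider):

    # Add inside BossSpider (requires `from scrapy import signals` at the top of boss.py):
    # quit the shared selenium browser when the spider closes.
    @classmethod
    def from_crawler(cls, crawler, *args, **kwargs):
        spider = super(BossSpider, cls).from_crawler(crawler, *args, **kwargs)
        crawler.signals.connect(spider.spider_closed, signal=signals.spider_closed)
        return spider

    def spider_closed(self, spider):
        driver.quit()

With everything in place, running scrapy crawl boss from the project root starts the crawl; adding -o boss.csv would also export the columns listed in FEED_EXPORT_FIELDS.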
