Hi everyone. It has been a while since my last Scrapy scraping example. A couple of days ago a colleague mentioned that scraping sites like Lagou and BOSS直聘 is fairly difficult, so yesterday afternoon I started scraping the Python-crawler job listings on BOSS直聘. It turned out to be much simpler than expected.

Problems to solve:

  Most of the information on the BOSS直聘 site is loaded statically, with a small amount loaded dynamically.

  1. Statically loaded: the company details and job description pages (Figure 1_1)

  2. Dynamically loaded: the search box on the home page, used to search for "python爬虫" (Figure 1_2)

Approach:

  1. Static content: scrape it the usual way (easy)

  2. Dynamic content: drive the page with Selenium (easy); see the sketch after the figures below

                  Figure (1_1)

                  Figure (1_2)
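
Before getting into the Scrapy project itself, here is a minimal standalone sketch of the Selenium idea for the dynamically loaded search step: open the home page, type the query into the search box, click the search button, and read the URL the browser lands on. It assumes chromedriver is installed locally; the element locators (query, btn-search) are the same ones used later in the spider:

import time
from selenium import webdriver

driver = webdriver.Chrome()               # assumes chromedriver is on PATH
driver.get('http://www.zhipin.com/')
time.sleep(3)                             # crude wait for the page to render

# fill in the search box and submit
driver.find_element_by_name('query').send_keys(u'python爬虫')
driver.find_element_by_class_name('btn-search').click()
time.sleep(3)

result_url = driver.current_url           # the result page itself loads statically
print(result_url)
driver.quit()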

As usual, here is a look at the scraped results; feel free to give it a try yourselves:

(3) Now for the main part

3_1. The fields to extract: items.py

import scrapy


class BossItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    job_title = scrapy.Field()
    salary = scrapy.Field()
    address = scrapy.Field()
    job_time = scrapy.Field()
    education = scrapy.Field()
    company = scrapy.Field()
    company_info = scrapy.Field()
    detail_text = scrapy.Field()
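
A Scrapy Item behaves much like a dict, which is what the MongoDB pipeline below relies on when it calls dict(item). A quick illustrative sketch (the values are made up):

from Boss.items import BossItem

item = BossItem()
item['job_title'] = u'python爬虫工程师'   # made-up sample values
item['salary'] = '15k-25k'

print(item['job_title'])   # dict-style access
print(dict(item))          # plain dict, ready to be inserted into MongoDB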

3_2. Setting a proxy: middlewares.py

from scrapy import signals


class BossSpiderMiddleware(object):
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the spider middleware does not modify the
    # passed objects.

    def __init__(self, ip=''):
        self.ip = ip

    def process_request(self, request, spider):
        # attach the proxy to every outgoing request
        print('http://10.240.252.16:911')
        request.meta['proxy'] = 'http://10.240.252.16:911'

    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create your spiders.
        s = cls()
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
        return s

    def process_spider_input(self, response, spider):
        # Called for each response that goes through the spider
        # middleware and into the spider.
        # Should return None or raise an exception.
        return None

    def process_spider_output(self, response, result, spider):
        # Called with the results returned from the Spider, after
        # it has processed the response.
        # Must return an iterable of Request, dict or Item objects.
        for i in result:
            yield i

    def process_spider_exception(self, response, exception, spider):
        # Called when a spider or process_spider_input() method
        # (from other spider middleware) raises an exception.
        # Should return either None or an iterable of Response, dict
        # or Item objects.
        pass

    def process_start_requests(self, start_requests, spider):
        # Called with the start requests of the spider, and works
        # similarly to the process_spider_output() method, except
        # that it doesn't have a response associated.
        # Must return only requests (not items).
        for r in start_requests:
            yield r

    def spider_opened(self, spider):
        spider.logger.info('Spider opened: %s' % spider.name)


class BossDownloaderMiddleware(object):
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the downloader middleware does not modify the
    # passed objects.

    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create your spiders.
        s = cls()
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
        return s

    def process_request(self, request, spider):
        # Called for each request that goes through the downloader
        # middleware.
        # Must either:
        # - return None: continue processing this request
        # - or return a Response object
        # - or return a Request object
        # - or raise IgnoreRequest: process_exception() methods of
        #   installed downloader middleware will be called
        return None

    def process_response(self, request, response, spider):
        # Called with the response returned from the downloader.
        # Must either:
        # - return a Response object
        # - return a Request object
        # - or raise IgnoreRequest
        return response

    def process_exception(self, request, exception, spider):
        # Called when a download handler or a process_request()
        # (from other downloader middleware) raises an exception.
        # Must either:
        # - return None: continue processing this exception
        # - return a Response object: stops process_exception() chain
        # - return a Request object: stops process_exception() chain
        pass

    def spider_opened(self, spider):
        spider.logger.info('Spider opened: %s' % spider.name)
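
Two things worth noting about the middleware above. First, although the class is named BossSpiderMiddleware, it is registered under DOWNLOADER_MIDDLEWARES in settings.py (see 3_4), which is why its process_request() hook runs and the proxy is applied to every request. Second, it hard-codes a single proxy, so if that proxy dies every request fails. A common variation is to pick a proxy at random per request; here is a hedged sketch that assumes you have a pool of working proxies (the addresses below are placeholders):

import random


class RandomProxyMiddleware(object):
    # hypothetical proxy pool; replace with proxies you actually control
    PROXIES = [
        'http://10.240.252.16:911',
        'http://10.240.252.17:911',
    ]

    def process_request(self, request, spider):
        # attach a randomly chosen proxy to each outgoing request
        request.meta['proxy'] = random.choice(self.PROXIES)

To enable it, register 'Boss.middlewares.RandomProxyMiddleware' in DOWNLOADER_MIDDLEWARES instead of (or alongside) BossSpiderMiddleware.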

3_3. Saving the data (to MongoDB): pipelines.py

import pymongo
from scrapy.item import Item


class BossPipeline(object):
    def process_item(self, item, spider):
        return item


class MongoDBPipeline(object):
    # store items in MongoDB

    @classmethod
    def from_crawler(cls, crawler):
        cls.DB_URL = crawler.settings.get("MONGO_DB_URL", 'mongodb://localhost:27017/')
        cls.DB_NAME = crawler.settings.get("MONGO_DB_NAME", 'scrapy_data')
        return cls()

    def open_spider(self, spider):
        self.client = pymongo.MongoClient(self.DB_URL)
        self.db = self.client[self.DB_NAME]

    def close_spider(self, spider):
        self.client.close()

    def process_item(self, item, spider):
        collection = self.db[spider.name]
        post = dict(item) if isinstance(item, Item) else item
        collection.insert(post)  # note: insert() was removed in PyMongo 4; use insert_one() there
        return item
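
After a crawl has finished, a few lines of PyMongo are enough to sanity-check what landed in the database. This is only a verification sketch and assumes the settings from 3_4 (database boss_detail, collection named after the spider, i.e. boss) and a reasonably recent PyMongo:

import pymongo

client = pymongo.MongoClient('mongodb://localhost:27017/')
collection = client['boss_detail']['boss']    # DB name from settings, collection = spider name

print(collection.count_documents({}))         # number of job postings stored
for doc in collection.find().limit(3):        # peek at a few documents
    print(doc.get('job_title'), doc.get('salary'))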

3_4.settings.py

MONGO_DB_URL = 'mongodb://localhost:27017/'
MONGO_DB_NAME = 'boss_detail'

# pool of browser User-Agent strings
USER_AGENT = {
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1",
    "Mozilla/5.0 (X11; CrOS i686 2268.111.0) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.57 Safari/536.11",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1092.0 Safari/536.6",
    "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1090.0 Safari/536.6",
    "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/19.77.34.5 Safari/537.1",
    "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.9 Safari/536.5",
    "Mozilla/5.0 (Windows NT 6.0) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.36 Safari/536.5",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
    "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_0) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.0 Safari/536.3",
    "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24",
    "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24",
}

FEED_EXPORT_FIELDS = ['job_title', 'salary', 'address', 'job_time', 'education', 'company', 'company_info']

# Obey robots.txt rules
ROBOTSTXT_OBEY = False

# Configure maximum concurrent requests performed by Scrapy (default: 16)
CONCURRENT_REQUESTS = 10

# See also autothrottle settings and docs
DOWNLOAD_DELAY = 0.5

# Disable cookies (enabled by default)
COOKIES_ENABLED = False

DOWNLOADER_MIDDLEWARES = {
    # 'Boss.middlewares.BossDownloaderMiddleware': 543,
    'scrapy.contrib.downloadermiddleware.httpproxy.HttpProxyMiddleware': 543,
    'Boss.middlewares.BossSpiderMiddleware': 123,
}

ITEM_PIPELINES = {
    'Boss.pipelines.MongoDBPipeline': 300,
}
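
One caveat: USER_AGENT is also the name of a built-in Scrapy setting that expects a single string, so pointing it at a set may confuse Scrapy's own UserAgentMiddleware, and nothing in this project actually rotates the pool per request. If you want rotation, a tiny downloader middleware can pick a random entry for every request. A hedged sketch (RandomUserAgentMiddleware is my own name, not part of the project; register it in DOWNLOADER_MIDDLEWARES like the proxy middleware):

import random

from Boss.settings import USER_AGENT


class RandomUserAgentMiddleware(object):
    def process_request(self, request, spider):
        # overwrite the User-Agent header with a random entry from the pool
        request.headers['User-Agent'] = random.choice(list(USER_AGENT))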

3_5.spider/boss.py

# -*- coding: utf-8 -*-
import time
import re
import pdb

from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.keys import Keys
from lxml import etree
from bs4 import BeautifulSoup

import scrapy
from scrapy.linkextractors import LinkExtractor

from Boss.items import BossItem
from Boss.settings import USER_AGENT

chrome_options = Options()
driver = webdriver.Chrome()


class BossSpider(scrapy.Spider):
    name = 'boss'
    allowed_domains = ['www.zhipin.com']
    start_urls = ['http://www.zhipin.com/']

    headers = {
        'Accept': 'application/json, text/javascript, */*; q=0.01',
        'Accept-Encoding': 'gzip, deflate',
        'Accept-Language': 'zh-CN,zh;q=0.9',
        'Connection': 'keep-alive',
        'Content-Length': '',
        'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8',
        'Host': 'www.zhipin.com',
        'Origin': 'www.zhipin.com',
        'Referer': 'http://www.zhipin.com/',
        'User-Agent': USER_AGENT,
        'X-Requested-With': 'XMLHttpRequest',
    }

    def start_requests(self):
        driver.get(self.start_urls[0])
        time.sleep(3)
        # search for "python爬虫" via the home-page search box
        driver.find_element_by_name('query').send_keys(u'python爬虫')
        time.sleep(3)
        driver.find_element_by_class_name('btn-search').click()
        time.sleep(3)
        new_url = driver.current_url.encode('utf8')  # URL of the result page after the redirect
        yield scrapy.Request(new_url)

    def parse(self, response):
        # extract the job-detail links from the result page
        links = LinkExtractor(restrict_css="div.info-primary>h3>a")
        link = links.extract_links(response)
        for each_link in link:
            yield scrapy.Request(each_link.url, callback=self.job_detail)
        # sels = LinkExtractor(restrict_css='div.page')
        # yield scrapy.Request(sels.extract_links(response)[0].url, callback=self.parse)

    def job_detail(self, response):
        spiderItem = BossItem()
        # the fields we want to extract
        spiderItem['job_title'] = response.css('div.job-primary.detail-box div.name h1::text').extract()[0]
        # pdb.set_trace()
        salar = response.css('div.job-primary.detail-box span.badge ::text').extract()[0]
        spiderItem['salary'] = re.findall(r'(\d.*?)\n', salar)[0]  # pull the salary figure out with a regex
        spiderItem['address'] = response.css('div.job-primary.detail-box p::text').extract()[0]
        spiderItem['job_time'] = response.css('div.job-primary.detail-box p::text').extract()[1]
        spiderItem['education'] = response.css('div.job-primary.detail-box p::text').extract()[2]
        spiderItem['company'] = response.css('div.job-primary.detail-box div.info-company h3.name a::text').extract()[0]
        spiderItem['company_info'] = response.css('div.job-primary.detail-box div.info-company>p::text').extract()[0]

        detail = response.css('div.job-sec div.text ::text').extract()
        details = ''.join(detail).replace(' ', '')  # join the list into one string and strip spaces
        spiderItem['detail_text'] = details
        print(spiderItem)
        yield spiderItem
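
Finally, run the spider from the project directory with scrapy crawl boss, or start it programmatically. A minimal sketch using Scrapy's standard CrawlerProcess API, assuming the spider module sits at Boss/spiders/boss.py in the usual project layout:

from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

from Boss.spiders.boss import BossSpider

process = CrawlerProcess(get_project_settings())   # picks up settings.py, including the MongoDB pipeline
process.crawl(BossSpider)
process.start()                                     # blocks until the crawl finishes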
