1. Create the project

scrapy startproject gosuncn
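
For reference, startproject generates a skeleton roughly like this (layout per the Scrapy 1.x template used throughout this post):

gosuncn/
├── scrapy.cfg            # deploy configuration
└── gosuncn/              # the project's Python module
    ├── __init__.py
    ├── items.py          # item definitions
    ├── middlewares.py    # spider/downloader middlewares
    ├── pipelines.py      # item pipelines (step 6)
    ├── settings.py       # project settings (step 5)
    └── spiders/          # spiders go here (step 4)
        └── __init__.py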

2. Create the spider

cd gosuncn
scrapy genspider gaoxinxing gosuncn.zhiye.com

3. Run the spider

scrapy crawl gaoxinxing
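
If you prefer launching the crawl from Python (handy for IDE debugging) rather than from the shell, a minimal runner script placed next to scrapy.cfg also works; the file name start.py is just an example:

# start.py -- equivalent to running `scrapy crawl gaoxinxing`
from scrapy import cmdline

cmdline.execute("scrapy crawl gaoxinxing".split())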

4. gaoxinxing.py code

# -*- coding: utf-8 -*-
import scrapy
import logging

# Module-level logger; with the LOG_* settings below, its output goes to the log file
logger = logging.getLogger(__name__)


class GaoxinxingSpider(scrapy.Spider):
    name = 'gaoxinxing'
    allowed_domains = ['gosuncn.zhiye.com']
    start_urls = ['http://gosuncn.zhiye.com/Social']
    next_page_num = 1

    def parse(self, response):
        # All rows of the jobs table except the header row
        tr_list = response.xpath("//table[@class='jobsTable']/tr")[1:]
        for tr in tr_list:
            item = {}
            item["position"] = tr.xpath(".//td[1]/a/text()").extract_first()
            item["platform"] = tr.xpath(".//td[3]/text()").extract_first()
            item["num"] = tr.xpath(".//td[4]/text()").extract_first()
            item["time"] = tr.xpath(".//td[6]/text()").extract_first()
            logger.warning(item)  # log the scraped item
            yield item

        # Alternative: extract the next-page link from the pager instead of counting
        # next_page_url = response.xpath("//div[@class='pager2']//a[@class='next']/@href").extract_first()

        # Paginate by counter; the listing has 4 pages
        self.next_page_num = self.next_page_num + 1
        if self.next_page_num <= 4:
            next_url = "http://gosuncn.zhiye.com/social/?PageIndex=" + str(self.next_page_num)
            print(next_url)
            yield scrapy.Request(
                next_url,
                callback=self.parse
            )
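
Before wiring XPaths like these into the spider, it is easiest to verify them interactively. A quick sanity check with scrapy shell, using the same selectors as parse():

scrapy shell http://gosuncn.zhiye.com/Social

# then, inside the shell:
tr_list = response.xpath("//table[@class='jobsTable']/tr")[1:]
tr_list[0].xpath(".//td[1]/a/text()").extract_first()   # should print the first job title

Independently of the pipeline, you can also dump the yielded items to a file with scrapy crawl gaoxinxing -o jobs.json.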

5. settings.py file

# -*- coding: utf-8 -*-

# Scrapy settings for gosuncn project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://doc.scrapy.org/en/latest/topics/settings.html
#     https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://doc.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'gosuncn'

SPIDER_MODULES = ['gosuncn.spiders']
NEWSPIDER_MODULE = 'gosuncn.spiders'

# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'gosuncn (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = True

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://doc.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See https://doc.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'gosuncn.middlewares.GosuncnSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'gosuncn.middlewares.GosuncnDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See https://doc.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://doc.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'gosuncn.pipelines.GosuncnPipeline': 300,
}

# Only show WARNING and above, and write log output to a file
LOG_LEVEL = "WARNING"
LOG_FILE = "./log.log"

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
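
Note how LOG_LEVEL and LOG_FILE interact with the spider code: WARNING suppresses Scrapy's own INFO output, and the log file collects whatever we emit at WARNING level, which is why the spider calls logger.warning(item) instead of print(). The same pattern works in any project module, for example:

# Any module in the project can log through these settings:
import logging

logger = logging.getLogger(__name__)
logger.warning("written to ./log.log at WARNING level")

# Inside a spider, self.logger is a ready-made equivalent:
# self.logger.warning(item)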

6. pipelines.py file

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html


class GosuncnPipeline(object):
    def process_item(self, item, spider):
        print(item)
        return item
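
The pipeline above only prints each item. As a sketch of one way to actually persist the results, the variant below writes items as JSON lines; the output file name gaoxinxing.jsonl is an assumption, and this class would replace the printing version (or run alongside it under its own key in ITEM_PIPELINES):

# -*- coding: utf-8 -*-
import json


class GosuncnPipeline(object):
    def open_spider(self, spider):
        # Called once when the spider opens: prepare the output file
        self.f = open("gaoxinxing.jsonl", "w", encoding="utf-8")

    def process_item(self, item, spider):
        # One JSON object per line; ensure_ascii=False keeps Chinese text readable
        self.f.write(json.dumps(dict(item), ensure_ascii=False) + "\n")
        return item

    def close_spider(self, spider):
        # Called once when the spider closes: release the file handle
        self.f.close()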
