1. Create the project

scrapy startproject jd
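
This generates the standard Scrapy project skeleton, roughly:

jd/
    scrapy.cfg
    jd/
        __init__.py
        items.py
        middlewares.py
        pipelines.py
        settings.py
        spiders/
            __init__.py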

2. Create the spider

scrapy genspider jingdong zhaopin.jd.com

3. Install pymysql

pip install pymysql
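
To confirm the install worked, a quick check:

python -c "import pymysql; print(pymysql.VERSION)"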

4. settings.py: the global configuration, including the database connection settings

# -*- coding: utf-8 -*-

# Scrapy settings for jd project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://doc.scrapy.org/en/latest/topics/settings.html
#     https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://doc.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'jd'

SPIDER_MODULES = ['jd.spiders']
NEWSPIDER_MODULE = 'jd.spiders'

LOG_LEVEL = "WARNING"
LOG_FILE = "./jingdong1.log"

# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'jd (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = True

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://doc.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See https://doc.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'jd.middlewares.JdSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'jd.middlewares.JdDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See https://doc.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://doc.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'jd.pipelines.JdPipeline': 300,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

# MySQL connection settings
# Database host
MYSQL_HOST = 'localhost'
# Database user
MYSQL_USER = 'root'
# Database password
MYSQL_PASSWORD = 'yang156122'
# Database port
MYSQL_PORT = 3306
# Database name
MYSQL_DBNAME = 'test'
# Database charset
MYSQL_CHARSET = 'utf8'
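
Before relying on these values in the pipeline, it can help to confirm they actually connect. A minimal standalone sketch (run outside the Scrapy project, assuming the test database already exists):

# quick connectivity check for the MySQL settings above
import pymysql

conn = pymysql.connect(
    host='localhost',
    user='root',
    password='yang156122',
    port=3306,
    database='test',
    charset='utf8'
)
with conn.cursor() as cursor:
    # ask the server for its version string to prove the credentials work
    cursor.execute("SELECT VERSION()")
    print(cursor.fetchone())
conn.close()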

5. items.py: define the item fields (one per database column)

# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/items.html

import scrapy


class JdItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    appTime = scrapy.Field()
    applicantErp = scrapy.Field()
    formatPublishTime = scrapy.Field()
    jobType = scrapy.Field()
    positionName = scrapy.Field()
    positionNameOpen = scrapy.Field()
    publishTime = scrapy.Field()
    qualification = scrapy.Field()
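
For reference, a JdItem would be filled from one parsed job record roughly as below. (The spider in the next step actually yields the raw dicts directly, so using the item class is optional; build_item here is just an illustrative helper.)

# hypothetical helper showing how JdItem maps to one job record
from jd.items import JdItem

def build_item(record):
    """record: one job dict parsed from the JSON response."""
    item = JdItem()
    item["positionName"] = record.get("positionName")
    item["publishTime"] = record.get("publishTime")
    # ... the remaining fields are assigned the same way
    return item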

6. jingdong.py: the spider that crawls the job data

# -*- coding: utf-8 -*-
import scrapy
import logging
import json

logger = logging.getLogger(__name__)


class JingdongSpider(scrapy.Spider):
    name = 'jingdong'
    allowed_domains = ['zhaopin.jd.com']
    start_urls = ['http://zhaopin.jd.com/web/job/job_list?page=1']
    pageNum = 1

    def parse(self, response):
        # the endpoint returns JSON: a list of job dicts
        content = response.body.decode()
        content = json.loads(content)

        ########## strip empty values from each dict in the list (optional) ##########
        for i in range(len(content)):
            # list(content[i].keys()) gets the keys of the current dict
            # for key in list(content[i].keys()):   # content[i] is a dict
            #     if not content[i].get(key):       # look up the value for this key
            #         del content[i][key]           # drop empty entries
            yield content[i]

        # for i in range(len(content)):
        #     logging.warning(content[i])

        self.pageNum = self.pageNum + 1
        if self.pageNum <= 355:
            next_url = "http://zhaopin.jd.com/web/job/job_list?page=" + str(self.pageNum)
            yield scrapy.Request(
                next_url,
                callback=self.parse
            )
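
With the spider written, start the crawl from the project root (log output goes to ./jingdong1.log as configured in settings.py):

scrapy crawl jingdong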

7. pipelines.py: clean and process the scraped data and write it into the database

  Compared with the earlier Tencent example, the main addition here is the publishTime timestamp conversion.

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html

import logging
import time
import copy

from pymysql import cursors
from twisted.enterprise import adbapi


class JdPipeline(object):
    def __init__(self, db_pool):
        self.db_pool = db_pool

    @classmethod
    def from_settings(cls, settings):
        """Class method, runs once: initialize the database connection pool."""
        db_params = dict(
            host=settings['MYSQL_HOST'],
            user=settings['MYSQL_USER'],
            password=settings['MYSQL_PASSWORD'],
            port=settings['MYSQL_PORT'],
            database=settings['MYSQL_DBNAME'],
            charset=settings['MYSQL_CHARSET'],
            use_unicode=True,
            # cursor type: return rows as dicts
            cursorclass=cursors.DictCursor
        )
        # create the connection pool
        db_pool = adbapi.ConnectionPool('pymysql', **db_params)
        # return a pipeline instance
        return cls(db_pool)

    def process_item(self, item, spider):
        myItem = {}
        myItem["appTime"] = item["appTime"]
        myItem["applicantErp"] = item["applicantErp"]
        myItem["formatPublishTime"] = item["formatPublishTime"]
        myItem["jobType"] = item["jobType"]
        myItem["positionName"] = item["positionName"]
        # convert the millisecond epoch timestamp to a readable datetime string
        publishTime = item["publishTime"]
        publishTime = time.localtime(int(str(publishTime)[:10]))
        myItem["publishTime"] = time.strftime("%Y-%m-%d %H:%M:%S", publishTime)
        myItem["positionNameOpen"] = item["positionNameOpen"]
        myItem["qualification"] = item["qualification"]
        logging.warning(item)
        # deep-copy the item --- this is what prevents duplicated rows when the
        # asynchronous insert runs after the dict has already been reused
        asynItem = copy.deepcopy(myItem)
        # hand the insert over to the connection pool
        query = self.db_pool.runInteraction(self.insert_into, asynItem)
        # if the SQL fails, handle_error() is called back automatically
        query.addErrback(self.handle_error, myItem, spider)
        return myItem

    def insert_into(self, cursor, item):
        # parameterized INSERT (avoids quoting problems in text fields)
        sql = ("INSERT INTO jingdong (appTime, applicantErp, formatPublishTime, jobType, "
               "positionName, publishTime, positionNameOpen, qualification) "
               "VALUES (%s, %s, %s, %s, %s, %s, %s, %s)")
        cursor.execute(sql, (
            item['appTime'], item['applicantErp'], item['formatPublishTime'], item['jobType'],
            item['positionName'], item['publishTime'], item['positionNameOpen'], item['qualification']
        ))

    def handle_error(self, failure, item, spider):
        # report the failure
        print("failure", failure)

And that's a wrap!
