1. Create the project

scrapy startproject tencent

2. Create the spider (run inside the project directory)

cd tencent
scrapy genspider mahuateng careers.tencent.com
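
After these two commands the project layout should look roughly like this (mahuateng.py being the freshly generated spider):

tencent/
    scrapy.cfg
    tencent/
        __init__.py
        items.py
        middlewares.py
        pipelines.py
        settings.py
        spiders/
            __init__.py
            mahuateng.py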

3. Since the results go into MySQL, install pymysql

pip install pymysql
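
Before wiring up the pipeline, it can be worth a one-off check that the credentials actually work. A minimal sketch, using the same connection values configured in settings.py below:

# check_db.py -- one-off connectivity check (values match settings.py below)
import pymysql

conn = pymysql.connect(host='localhost', user='root', password='yang156122',
                       port=3306, database='test', charset='utf8')
try:
    with conn.cursor() as cursor:
        cursor.execute("SELECT VERSION()")
        print(cursor.fetchone())   # e.g. ('5.5.19',)
finally:
    conn.close()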

4. The settings file holds the configuration, including the database connection

# -*- coding: utf-8 -*-

# Scrapy settings for tencent project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://doc.scrapy.org/en/latest/topics/settings.html
#     https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://doc.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'tencent'

SPIDER_MODULES = ['tencent.spiders']
NEWSPIDER_MODULE = 'tencent.spiders'

LOG_LEVEL = "WARNING"
LOG_FILE = "./qq.log"

# Crawl responsibly by identifying yourself (and your website) on the user-agent
USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.131 Safari/537.36'

# Obey robots.txt rules
#ROBOTSTXT_OBEY = True

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://doc.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See https://doc.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'tencent.middlewares.TencentSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'tencent.middlewares.TencentDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See https://doc.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://doc.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'tencent.pipelines.TencentPipeline': 300,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

# MySQL connection settings
# Database host
MYSQL_HOST = 'localhost'
# Database user
MYSQL_USER = 'root'
# Database password
MYSQL_PASSWORD = 'yang156122'
# Database port
MYSQL_PORT = 3306
# Database name
MYSQL_DBNAME = 'test'
# Database charset
MYSQL_CHARSET = 'utf8'
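
A quick way to confirm Scrapy picks these custom values up is the built-in settings command, run from the project directory:

scrapy settings --get MYSQL_HOST

This should print localhost.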

5. items.py defines the data fields

# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/items.html
import scrapy


class TencentItem(scrapy.Item):
    """
    Field definitions for the scraped data
    """
    postId = scrapy.Field()
    recruitPostId = scrapy.Field()
    recruitPostName = scrapy.Field()
    countryName = scrapy.Field()
    locationName = scrapy.Field()
    categoryName = scrapy.Field()
    lastUpdateTime = scrapy.Field()
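
Note that the spider in the next step yields plain dicts straight from the JSON, so TencentItem is defined but never instantiated. If you want the stricter field checking that items give you, a small helper along these lines (a sketch, not part of the original code) could build one per posting:

# Sketch: build a TencentItem from one raw API posting (dict `con`).
# .get() with a default guards against fields the spider may have dropped.
from tencent.items import TencentItem

def build_item(con):
    item = TencentItem()
    item['postId'] = con.get('PostId', '')
    item['recruitPostId'] = con.get('RecruitPostId', '')
    item['recruitPostName'] = con.get('RecruitPostName', '')
    item['countryName'] = con.get('CountryName', '')
    item['locationName'] = con.get('LocationName', '')
    item['categoryName'] = con.get('CategoryName', '')
    item['lastUpdateTime'] = con.get('LastUpdateTime', '')
    return item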

6. mahuateng.py, the spider that fetches the data

# -*- coding: utf-8 -*-
import scrapy
import json


class MahuatengSpider(scrapy.Spider):
    name = 'mahuateng'
    allowed_domains = ['careers.tencent.com']
    start_urls = ['https://careers.tencent.com/tencentcareer/api/post/Query?timestamp=1561688387174&countryId=&cityId=&bgIds=&productId=&categoryId=&parentCategoryId=40003&attrId=&keyword=&pageIndex=1&pageSize=10&language=zh-cn&area=cn']

    pageNum = 1

    def parse(self, response):
        """
        Parse one page of the JSON API and yield each job posting.
        :param response:
        :return:
        """
        content = response.body.decode()
        content = json.loads(content)
        content = content['Data']['Posts']
        for con in content:
            # Drop keys whose values are empty
            for key in list(con.keys()):
                if not con.get(key):
                    del con[key]
            # Hand each posting (a plain dict) to the pipeline
            yield con

        ##### pagination #####
        self.pageNum = self.pageNum + 1
        if self.pageNum <= 118:
            next_url = "https://careers.tencent.com/tencentcareer/api/post/Query?timestamp=1561688387174&countryId=&cityId=&bgIds=&productId=&categoryId=&parentCategoryId=40003&attrId=&keyword=&pageIndex=" + str(self.pageNum) + "&pageSize=10&language=zh-cn&area=cn"
            yield scrapy.Request(
                next_url,
                callback=self.parse
            )
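
One fragile spot: the 118 is hard-coded and was simply the number of result pages at the time of writing. If the API's Data object also carries a total posting count (treat the field name Data['Count'] as an assumption here), the pagination tail of parse() could compute the last page instead. A rough sketch:

# Sketch: derive the last page from the response instead of hard-coding 118.
# Assumes the JSON carries the total number of postings in Data['Count']
# (hypothetical field name) and 10 postings per page (pageSize=10 in the URL).
# Requires `import math` at the top of the file.
total = json.loads(response.body.decode())['Data']['Count']
last_page = math.ceil(total / 10)
self.pageNum += 1
if self.pageNum <= last_page:
    next_url = "https://careers.tencent.com/tencentcareer/api/post/Query?timestamp=1561688387174&countryId=&cityId=&bgIds=&productId=&categoryId=&parentCategoryId=40003&attrId=&keyword=&pageIndex=" + str(self.pageNum) + "&pageSize=10&language=zh-cn&area=cn"
    yield scrapy.Request(next_url, callback=self.parse)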

7. pipelines.py processes the items, including saving them to MySQL

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html
import copy
import logging

from pymysql import cursors
from twisted.enterprise import adbapi


class TencentPipeline(object):

    def __init__(self, db_pool):
        self.db_pool = db_pool

    @classmethod
    def from_settings(cls, settings):
        """Class method, called once: set up the database connection pool."""
        db_params = dict(
            host=settings['MYSQL_HOST'],
            user=settings['MYSQL_USER'],
            password=settings['MYSQL_PASSWORD'],
            port=settings['MYSQL_PORT'],
            database=settings['MYSQL_DBNAME'],
            charset=settings['MYSQL_CHARSET'],
            use_unicode=True,
            # Cursor type: return rows as dicts
            cursorclass=cursors.DictCursor
        )
        # Create the connection pool
        db_pool = adbapi.ConnectionPool('pymysql', **db_params)
        # Return a pipeline instance
        return cls(db_pool)

    def process_item(self, item, spider):
        """
        Map the raw API fields onto our column names and queue the insert.
        :param item:
        :param spider:
        :return:
        """
        # .get() with a default, because the spider deleted empty fields
        myItem = {}
        myItem["postId"] = item.get("PostId", "")
        myItem["recruitPostId"] = item.get("RecruitPostId", "")
        myItem["recruitPostName"] = item.get("RecruitPostName", "")
        myItem["countryName"] = item.get("CountryName", "")
        myItem["locationName"] = item.get("LocationName", "")
        myItem["categoryName"] = item.get("CategoryName", "")
        myItem["lastUpdateTime"] = item.get("LastUpdateTime", "")
        logging.warning(myItem)
        # Deep-copy the item before handing it to the asynchronous insert --
        # this is what fixes the duplicated-row problem
        asynItem = copy.deepcopy(myItem)
        # Hand the SQL work to the connection pool
        query = self.db_pool.runInteraction(self.insert_into, asynItem)
        # If the SQL fails, the errback handle_error() is called
        query.addErrback(self.handle_error, myItem, spider)
        return myItem

    def insert_into(self, cursor, item):
        # Build the SQL statement
        sql = "INSERT INTO tencent (postId,recruitPostId,recruitPostName,countryName,locationName,categoryName,lastUpdateTime) VALUES ('{}','{}','{}','{}','{}','{}','{}')".format(
            item['postId'], item['recruitPostId'], item['recruitPostName'], item['countryName'], item['locationName'],
            item['categoryName'], item['lastUpdateTime'])
        # Execute it
        cursor.execute(sql)

    def handle_error(self, failure, item, spider):
        # Print the error details
        print("failure", failure)

8. Create the database table

/*
Navicat MySQL Data Transfer

Source Server         : localhost
Source Server Version : 50519
Source Host           : localhost:3306
Source Database       : test

Target Server Type    : MYSQL
Target Server Version : 50519
File Encoding         : 65001

Date: 2019-06-28 12:47:06
*/
SET FOREIGN_KEY_CHECKS=0;

-- ----------------------------
-- Table structure for tencent
-- ----------------------------
DROP TABLE IF EXISTS `tencent`;
CREATE TABLE `tencent` (
  `id` int(10) NOT NULL AUTO_INCREMENT,
  `postId` varchar(100) DEFAULT NULL,
  `recruitPostId` varchar(100) DEFAULT NULL,
  `recruitPostName` varchar(100) DEFAULT NULL,
  `countryName` varchar(100) DEFAULT NULL,
  `locationName` varchar(100) DEFAULT NULL,
  `categoryName` varchar(100) DEFAULT NULL,
  `lastUpdateTime` varchar(100) DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1181 DEFAULT CHARSET=utf8;
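
With the table in place, start the crawl from the project root. Per the LOG_LEVEL/LOG_FILE settings, warnings (including the logging.warning() calls in the pipeline) go to ./qq.log:

scrapy crawl mahuateng

A quick SELECT COUNT(*) FROM tencent; in the MySQL client afterwards confirms that rows are arriving.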

And that's a wrap!
