1. Create the project

scrapy startproject tencent

2. Generate the spider

scrapy genspider mahuateng careers.tencent.com
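After these two commands the project layout should look roughly like this (the standard Scrapy skeleton; exact files can vary slightly by Scrapy version):

tencent/
├── scrapy.cfg
└── tencent/
    ├── __init__.py
    ├── items.py
    ├── middlewares.py
    ├── pipelines.py
    ├── settings.py
    └── spiders/
        ├── __init__.py
        └── mahuateng.py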

3. Since the data will be saved to a database, install pymysql

pip install pymysql
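Before wiring the driver into Scrapy, it is worth confirming it can reach MySQL at all. A minimal check, using the same connection parameters that appear in settings.py below:

import pymysql

# Throwaway connection with the credentials used later in settings.py
conn = pymysql.connect(host='localhost', user='root', password='yang156122',
                       port=3306, database='test', charset='utf8')
with conn.cursor() as cursor:
    cursor.execute("SELECT VERSION()")
    print(cursor.fetchone())  # e.g. ('5.5.19',)
conn.close()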

4. Configure settings.py, including the database connection info

# -*- coding: utf-8 -*-

# Scrapy settings for tencent project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://doc.scrapy.org/en/latest/topics/settings.html
#     https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://doc.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'tencent'

SPIDER_MODULES = ['tencent.spiders']
NEWSPIDER_MODULE = 'tencent.spiders'

LOG_LEVEL = "WARNING"
LOG_FILE = "./qq.log"

# Crawl responsibly by identifying yourself (and your website) on the user-agent
USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.131 Safari/537.36'

# Obey robots.txt rules
#ROBOTSTXT_OBEY = True

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://doc.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See https://doc.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'tencent.middlewares.TencentSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'tencent.middlewares.TencentDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See https://doc.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://doc.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'tencent.pipelines.TencentPipeline': 300,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

# MySQL connection settings
MYSQL_HOST = 'localhost'       # database host
MYSQL_USER = 'root'            # database user
MYSQL_PASSWORD = 'yang156122'  # database password
MYSQL_PORT = 3306              # database port
MYSQL_DBNAME = 'test'          # database name
MYSQL_CHARSET = 'utf8'         # database charset
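Custom keys like MYSQL_HOST are exposed through the regular Scrapy settings API, which is what the pipeline in step 7 relies on. A quick sketch to verify they are picked up, using Scrapy's get_project_settings helper from a shell inside the project directory:

from scrapy.utils.project import get_project_settings

settings = get_project_settings()
# Custom keys defined in settings.py behave like any built-in setting
print(settings['MYSQL_HOST'], settings.getint('MYSQL_PORT'))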

5. Define the data fields in items.py

# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/items.html

import scrapy


class TencentItem(scrapy.Item):
    """Field definitions for the scraped job postings."""
    postId = scrapy.Field()
    recruitPostId = scrapy.Field()
    recruitPostName = scrapy.Field()
    countryName = scrapy.Field()
    locationName = scrapy.Field()
    categoryName = scrapy.Field()
    lastUpdateTime = scrapy.Field()
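For reference, a scrapy.Item is populated like a dict. Note that the spider in step 6 actually yields the raw API dicts rather than TencentItem instances; this sketch (with hypothetical values) just shows how the item class would be used:

from tencent.items import TencentItem

item = TencentItem()
item['postId'] = '1'                          # hypothetical value
item['recruitPostName'] = 'Backend Engineer'  # hypothetical value
print(dict(item))  # {'postId': '1', 'recruitPostName': 'Backend Engineer'}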

6. mahuateng.py does the actual crawling

# -*- coding: utf-8 -*-
import json

import scrapy


class MahuatengSpider(scrapy.Spider):
    name = 'mahuateng'
    allowed_domains = ['careers.tencent.com']
    start_urls = ['https://careers.tencent.com/tencentcareer/api/post/Query?timestamp=1561688387174&countryId=&cityId=&bgIds=&productId=&categoryId=&parentCategoryId=40003&attrId=&keyword=&pageIndex=1&pageSize=10&language=zh-cn&area=cn']

    pageNum = 1

    def parse(self, response):
        """
        Parse one page of the job-posting API.
        :param response:
        :return:
        """
        content = json.loads(response.body.decode())
        posts = content['Data']['Posts']
        for con in posts:
            # Drop keys whose values are empty
            for key in list(con.keys()):
                if not con.get(key):
                    del con[key]
            # Hand each posting (a plain dict) to the pipeline
            yield con

        # Pagination: keep requesting the next page up to page 118
        self.pageNum += 1
        if self.pageNum <= 118:
            next_url = "https://careers.tencent.com/tencentcareer/api/post/Query?timestamp=1561688387174&countryId=&cityId=&bgIds=&productId=&categoryId=&parentCategoryId=40003&attrId=&keyword=&pageIndex=" + str(self.pageNum) + "&pageSize=10&language=zh-cn&area=cn"
            yield scrapy.Request(
                next_url,
                callback=self.parse
            )
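Before running the spider, it can help to hit the API once and inspect the JSON shape the parser relies on. A sketch using requests (assuming the endpoint still returns the Data/Posts structure it did when this post was written):

import requests  # pip install requests

url = ("https://careers.tencent.com/tencentcareer/api/post/Query"
       "?timestamp=1561688387174&parentCategoryId=40003"
       "&pageIndex=1&pageSize=10&language=zh-cn&area=cn")
data = requests.get(url).json()
posts = data['Data']['Posts']
print(len(posts))              # 10 postings per page
print(sorted(posts[0].keys())) # e.g. ['CategoryName', 'CountryName', 'LastUpdateTime', ...]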

7. pipelines.py processes the items, including writing them to MySQL

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html

import copy
import logging

from pymysql import cursors
from twisted.enterprise import adbapi


class TencentPipeline(object):

    def __init__(self, db_pool):
        self.db_pool = db_pool

    @classmethod
    def from_settings(cls, settings):
        """Class method, called only once: set up the database connection pool."""
        db_params = dict(
            host=settings['MYSQL_HOST'],
            user=settings['MYSQL_USER'],
            password=settings['MYSQL_PASSWORD'],
            port=settings['MYSQL_PORT'],
            database=settings['MYSQL_DBNAME'],
            charset=settings['MYSQL_CHARSET'],
            use_unicode=True,
            # Return rows as dicts instead of tuples
            cursorclass=cursors.DictCursor
        )
        # Create the connection pool
        db_pool = adbapi.ConnectionPool('pymysql', **db_params)
        # Return a pipeline instance
        return cls(db_pool)

    def process_item(self, item, spider):
        """
        Process one item.
        :param item:
        :param spider:
        :return:
        """
        myItem = {}
        # .get() is used because the spider deletes keys with empty values
        myItem["postId"] = item.get("PostId")
        myItem["recruitPostId"] = item.get("RecruitPostId")
        myItem["recruitPostName"] = item.get("RecruitPostName")
        myItem["countryName"] = item.get("CountryName")
        myItem["locationName"] = item.get("LocationName")
        myItem["categoryName"] = item.get("CategoryName")
        myItem["lastUpdateTime"] = item.get("LastUpdateTime")
        logging.warning(myItem)
        # Deep-copy the item: the insert runs asynchronously, and without a
        # copy later items could overwrite this one before it is written
        # (this is what fixes the duplicated-row problem)
        asynItem = copy.deepcopy(myItem)
        # Queue the SQL on the connection pool
        query = self.db_pool.runInteraction(self.insert_into, asynItem)
        # If the SQL fails, handle_error() is called back automatically
        query.addErrback(self.handle_error, myItem, spider)
        return myItem

    def insert_into(self, cursor, item):
        # Parameterized INSERT (avoids breaking on quotes in the data)
        sql = ("INSERT INTO tencent (postId, recruitPostId, recruitPostName, "
               "countryName, locationName, categoryName, lastUpdateTime) "
               "VALUES (%s, %s, %s, %s, %s, %s, %s)")
        cursor.execute(sql, (item['postId'], item['recruitPostId'],
                             item['recruitPostName'], item['countryName'],
                             item['locationName'], item['categoryName'],
                             item['lastUpdateTime']))

    def handle_error(self, failure, item, spider):
        # Print the error information
        print("failure", failure)

8. Create the database table

/*
Navicat MySQL Data Transfer

Source Server         : localhost
Source Server Version : 50519
Source Host           : localhost:3306
Source Database       : test

Target Server Type    : MYSQL
Target Server Version : 50519
File Encoding         : 65001

Date: 2019-06-28 12:47:06
*/

SET FOREIGN_KEY_CHECKS=0;

-- ----------------------------
-- Table structure for tencent
-- ----------------------------
DROP TABLE IF EXISTS `tencent`;
CREATE TABLE `tencent` (
`id` int(10) NOT NULL AUTO_INCREMENT,
`postId` varchar(100) DEFAULT NULL,
`recruitPostId` varchar(100) DEFAULT NULL,
`recruitPostName` varchar(100) DEFAULT NULL,
`countryName` varchar(100) DEFAULT NULL,
`locationName` varchar(100) DEFAULT NULL,
`categoryName` varchar(100) DEFAULT NULL,
`lastUpdateTime` varchar(100) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1181 DEFAULT CHARSET=utf8;
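Assuming the DDL above is saved as tencent.sql, load it and then start the spider (the crawl command uses the spider name defined in step 2):

mysql -u root -p test < tencent.sql
scrapy crawl mahuateng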

And that's a wrap!
