1. Create the project

  1. scrapy startproject tencent

2. Create the spider

  1. scrapy genspider mahuateng careers.tencent.com
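After these two commands, the generated project should look roughly like this (the exact files vary slightly between Scrapy versions):

    tencent/
        scrapy.cfg            # deploy configuration
        tencent/
            __init__.py
            items.py          # item field definitions (step 5)
            middlewares.py
            pipelines.py      # MySQL pipeline (step 7)
            settings.py       # project settings (step 4)
            spiders/
                __init__.py
                mahuateng.py  # the spider (step 6)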

3. Since the data will be saved to a database, install pymysql

  1. pip install pymysql
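A minimal sketch to confirm that the driver and the credentials work before wiring them into Scrapy; the host, user, password, and database below are the values configured in step 4, so adjust them to your own environment:

    import pymysql

    # Quick connectivity check using the same credentials as settings.py
    conn = pymysql.connect(
        host='localhost',
        user='root',
        password='yang156122',
        port=3306,
        database='test',
        charset='utf8',
    )
    with conn.cursor() as cursor:
        cursor.execute("SELECT VERSION()")
        print(cursor.fetchone())  # e.g. ('5.5.19',)
    conn.close()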

4. settings.py: project configuration, including the MySQL connection info

    # -*- coding: utf-8 -*-

    # Scrapy settings for tencent project
    #
    # For simplicity, this file contains only settings considered important or
    # commonly used. You can find more settings consulting the documentation:
    #
    #     https://doc.scrapy.org/en/latest/topics/settings.html
    #     https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
    #     https://doc.scrapy.org/en/latest/topics/spider-middleware.html

    BOT_NAME = 'tencent'

    SPIDER_MODULES = ['tencent.spiders']
    NEWSPIDER_MODULE = 'tencent.spiders'

    LOG_LEVEL = "WARNING"
    LOG_FILE = "./qq.log"

    # Crawl responsibly by identifying yourself (and your website) on the user-agent
    USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.131 Safari/537.36'

    # Obey robots.txt rules
    #ROBOTSTXT_OBEY = True

    # Configure maximum concurrent requests performed by Scrapy (default: 16)
    #CONCURRENT_REQUESTS = 32

    # Configure a delay for requests for the same website (default: 0)
    # See https://doc.scrapy.org/en/latest/topics/settings.html#download-delay
    # See also autothrottle settings and docs
    #DOWNLOAD_DELAY = 3
    # The download delay setting will honor only one of:
    #CONCURRENT_REQUESTS_PER_DOMAIN = 16
    #CONCURRENT_REQUESTS_PER_IP = 16

    # Disable cookies (enabled by default)
    #COOKIES_ENABLED = False

    # Disable Telnet Console (enabled by default)
    #TELNETCONSOLE_ENABLED = False

    # Override the default request headers:
    #DEFAULT_REQUEST_HEADERS = {
    #   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    #   'Accept-Language': 'en',
    #}

    # Enable or disable spider middlewares
    # See https://doc.scrapy.org/en/latest/topics/spider-middleware.html
    #SPIDER_MIDDLEWARES = {
    #    'tencent.middlewares.TencentSpiderMiddleware': 543,
    #}

    # Enable or disable downloader middlewares
    # See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
    #DOWNLOADER_MIDDLEWARES = {
    #    'tencent.middlewares.TencentDownloaderMiddleware': 543,
    #}

    # Enable or disable extensions
    # See https://doc.scrapy.org/en/latest/topics/extensions.html
    #EXTENSIONS = {
    #    'scrapy.extensions.telnet.TelnetConsole': None,
    #}

    # Configure item pipelines
    # See https://doc.scrapy.org/en/latest/topics/item-pipeline.html
    ITEM_PIPELINES = {
        'tencent.pipelines.TencentPipeline': 300,
    }

    # Enable and configure the AutoThrottle extension (disabled by default)
    # See https://doc.scrapy.org/en/latest/topics/autothrottle.html
    #AUTOTHROTTLE_ENABLED = True
    # The initial download delay
    #AUTOTHROTTLE_START_DELAY = 5
    # The maximum download delay to be set in case of high latencies
    #AUTOTHROTTLE_MAX_DELAY = 60
    # The average number of requests Scrapy should be sending in parallel to
    # each remote server
    #AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
    # Enable showing throttling stats for every response received:
    #AUTOTHROTTLE_DEBUG = False

    # Enable and configure HTTP caching (disabled by default)
    # See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
    #HTTPCACHE_ENABLED = True
    #HTTPCACHE_EXPIRATION_SECS = 0
    #HTTPCACHE_DIR = 'httpcache'
    #HTTPCACHE_IGNORE_HTTP_CODES = []
    #HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

    # MySQL connection settings (custom keys, read by the pipeline in step 7)
    # Database host
    MYSQL_HOST = 'localhost'
    # Database user
    MYSQL_USER = 'root'
    # Database password
    MYSQL_PASSWORD = 'yang156122'
    # Database port
    MYSQL_PORT = 3306
    # Database name
    MYSQL_DBNAME = 'test'
    # Database charset
    MYSQL_CHARSET = 'utf8'
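The MYSQL_* keys at the bottom are not built-in Scrapy settings; they are custom entries that the pipeline in step 7 reads through its from_settings() hook. As a quick sanity check, a small sketch run from the project root will show whether they are being picked up:

    # Run from the project root (the directory containing scrapy.cfg)
    from scrapy.utils.project import get_project_settings

    settings = get_project_settings()
    print(settings.get('MYSQL_HOST'))     # localhost
    print(settings.getint('MYSQL_PORT'))  # 3306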

5. items.py: define the item fields

    # -*- coding: utf-8 -*-

    # Define here the models for your scraped items
    #
    # See documentation in:
    # https://doc.scrapy.org/en/latest/topics/items.html

    import scrapy


    class TencentItem(scrapy.Item):
        """
        Field definitions for a job posting
        """
        postId = scrapy.Field()
        recruitPostId = scrapy.Field()
        recruitPostName = scrapy.Field()
        countryName = scrapy.Field()
        locationName = scrapy.Field()
        categoryName = scrapy.Field()
        lastUpdateTime = scrapy.Field()
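Note that the spider in step 6 yields plain dicts taken straight from the JSON response, so TencentItem is declared but never instantiated. If you prefer typed items, a sketch of what the yield inside parse() could look like instead (the capitalized keys are the API field names that the pipeline in step 7 also relies on; the pipeline would then read the lowercase keys directly instead of remapping them):

    # at the top of mahuateng.py
    from tencent.items import TencentItem

    # inside parse(), in place of `yield con`
    item = TencentItem()
    item['postId'] = con.get('PostId')
    item['recruitPostId'] = con.get('RecruitPostId')
    item['recruitPostName'] = con.get('RecruitPostName')
    item['countryName'] = con.get('CountryName')
    item['locationName'] = con.get('LocationName')
    item['categoryName'] = con.get('CategoryName')
    item['lastUpdateTime'] = con.get('LastUpdateTime')
    yield item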

6. mahuateng.py: the spider that does the actual crawling

    # -*- coding: utf-8 -*-
    import json
    import logging

    import scrapy


    class MahuatengSpider(scrapy.Spider):
        name = 'mahuateng'
        allowed_domains = ['careers.tencent.com']
        start_urls = ['https://careers.tencent.com/tencentcareer/api/post/Query?timestamp=1561688387174&countryId=&cityId=&bgIds=&productId=&categoryId=&parentCategoryId=40003&attrId=&keyword=&pageIndex=1&pageSize=10&language=zh-cn&area=cn']

        pageNum = 1

        def parse(self, response):
            """
            Parse one page of the JSON job-listing API.
            :param response:
            :return:
            """
            content = response.body.decode()
            content = json.loads(content)
            content = content['Data']['Posts']
            for con in content:
                # Drop keys whose values are empty
                for key in list(con.keys()):
                    if not con.get(key):
                        del con[key]
                # Yield each job posting as a plain dict
                yield con

            # Pagination: request the next page until page 118
            self.pageNum = self.pageNum + 1
            if self.pageNum <= 118:
                next_url = "https://careers.tencent.com/tencentcareer/api/post/Query?timestamp=1561688387174&countryId=&cityId=&bgIds=&productId=&categoryId=&parentCategoryId=40003&attrId=&keyword=&pageIndex=" + str(self.pageNum) + "&pageSize=10&language=zh-cn&area=cn"
                yield scrapy.Request(
                    next_url,
                    callback=self.parse
                )
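One limitation: the stop condition pageNum <= 118 is hard-coded, so it goes stale as postings are added or removed. Assuming the API's Data object also exposes a total Count field (an assumption about this endpoint, worth checking against a live response), the last page could be derived inside parse() roughly like this:

    import math

    # Sketch only: replaces the hard-coded 118 inside parse().
    # 'Count' is an assumed field on the API's Data object.
    data = json.loads(response.body.decode())['Data']
    total_pages = math.ceil(data['Count'] / 10)  # pageSize=10 in the query string

    self.pageNum = self.pageNum + 1
    if self.pageNum <= total_pages:
        next_url = "https://careers.tencent.com/tencentcareer/api/post/Query?timestamp=1561688387174&countryId=&cityId=&bgIds=&productId=&categoryId=&parentCategoryId=40003&attrId=&keyword=&pageIndex=" + str(self.pageNum) + "&pageSize=10&language=zh-cn&area=cn"
        yield scrapy.Request(next_url, callback=self.parse)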

7. pipelines.py: process the scraped items and store them in MySQL

    # -*- coding: utf-8 -*-

    # Define your item pipelines here
    #
    # Don't forget to add your pipeline to the ITEM_PIPELINES setting
    # See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html

    import copy
    import logging

    from pymysql import cursors
    from twisted.enterprise import adbapi


    class TencentPipeline(object):

        def __init__(self, db_pool):
            self.db_pool = db_pool

        @classmethod
        def from_settings(cls, settings):
            """Class method, called once: set up the database connection pool."""
            db_params = dict(
                host=settings['MYSQL_HOST'],
                user=settings['MYSQL_USER'],
                password=settings['MYSQL_PASSWORD'],
                port=settings['MYSQL_PORT'],
                database=settings['MYSQL_DBNAME'],
                charset=settings['MYSQL_CHARSET'],
                use_unicode=True,
                # Return rows as dictionaries
                cursorclass=cursors.DictCursor
            )
            # Create the connection pool
            db_pool = adbapi.ConnectionPool('pymysql', **db_params)
            # Return a pipeline instance
            return cls(db_pool)

        def process_item(self, item, spider):
            """
            Map the raw posting fields to our column names and queue the insert.
            :param item:
            :param spider:
            :return:
            """
            myItem = {}
            myItem["postId"] = item["PostId"]
            myItem["recruitPostId"] = item["RecruitPostId"]
            myItem["recruitPostName"] = item["RecruitPostName"]
            myItem["countryName"] = item["CountryName"]
            myItem["locationName"] = item["LocationName"]
            myItem["categoryName"] = item["CategoryName"]
            myItem["lastUpdateTime"] = item["LastUpdateTime"]
            logging.warning(myItem)
            # Deep-copy the item -- this is what fixes the duplicated-row problem!
            asynItem = copy.deepcopy(myItem)

            # Queue the SQL insert on the connection pool
            query = self.db_pool.runInteraction(self.insert_into, asynItem)

            # If the SQL execution fails, handle_error() is called back
            query.addErrback(self.handle_error, myItem, spider)
            return myItem

        # Run the INSERT inside a pool transaction
        def insert_into(self, cursor, item):
            # Build the SQL statement
            sql = "INSERT INTO tencent (postId,recruitPostId,recruitPostName,countryName,locationName,categoryName,lastUpdateTime) VALUES ('{}','{}','{}','{}','{}','{}','{}')".format(
                item['postId'], item['recruitPostId'], item['recruitPostName'], item['countryName'], item['locationName'],
                item['categoryName'], item['lastUpdateTime'])
            # Execute the SQL statement
            cursor.execute(sql)

        # Error handler
        def handle_error(self, failure, item, spider):
            # Print the error information
            print("failure", failure)

8. Create the database table

    /*
    Navicat MySQL Data Transfer

    Source Server         : local
    Source Server Version : 50519
    Source Host           : localhost:3306
    Source Database       : test

    Target Server Type    : MYSQL
    Target Server Version : 50519
    File Encoding         : 65001

    Date: 2019-06-28 12:47:06
    */

    SET FOREIGN_KEY_CHECKS=0;

    -- ----------------------------
    -- Table structure for tencent
    -- ----------------------------
    DROP TABLE IF EXISTS `tencent`;
    CREATE TABLE `tencent` (
      `id` int(10) NOT NULL AUTO_INCREMENT,
      `postId` varchar(100) DEFAULT NULL,
      `recruitPostId` varchar(100) DEFAULT NULL,
      `recruitPostName` varchar(100) DEFAULT NULL,
      `countryName` varchar(100) DEFAULT NULL,
      `locationName` varchar(100) DEFAULT NULL,
      `categoryName` varchar(100) DEFAULT NULL,
      `lastUpdateTime` varchar(100) DEFAULT NULL,
      PRIMARY KEY (`id`)
    ) ENGINE=InnoDB AUTO_INCREMENT=1181 DEFAULT CHARSET=utf8;
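With the table in place, start the crawl from the project root (the spider name comes from mahuateng.py):

    scrapy crawl mahuateng

Once it finishes, a quick query confirms the rows landed:

    SELECT COUNT(*) FROM tencent;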

And that's a wrap!
