一、Dependencies

virtualenv -p python3.6 xx
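
Activate the environment (named xx above) before installing the packages below; assuming a POSIX shell:

source xx/bin/activate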

pip install scrapy

pip install pymysql

二、Implementation

1、Create the project and spider1

scrapy startproject scraw_swagger

scrapy genspider spider1 xxx.com  (after this runs, a spider1.py file is generated in the project's spiders directory)
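
For orientation, the two commands above leave roughly the following layout (the standard Scrapy project template; the exact file set can vary slightly between Scrapy versions):

scraw_swagger/
    scrapy.cfg
    scraw_swagger/
        __init__.py
        items.py
        middlewares.py
        pipelines.py
        settings.py
        spiders/
            __init__.py
            spider1.py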

The following code crawls the first level of the swagger documentation (the resource listing) and saves the resulting addresses to a file named interfaces_path:

# -*- coding: utf-8 -*-
import scrapy
import json
from scraw_swagger import settings


class Spider1Spider(scrapy.Spider):
    name = 'spider1'
    allowed_domains = ['xxx.com']
    scrawl_domain = settings.interface_domain + '/api-docs'
    start_urls = [scrawl_domain]

    def parse(self, response):
        # debug code
        # filename = 'mid_link'
        # open(filename, 'wb').write(response.body)
        response = response.body
        response_dict = json.loads(response)
        apis = response_dict['apis']
        n = len(apis)
        temppath = []
        domain = settings.interface_domain + '/api-docs'
        filename = 'interfaces_path'
        file = open(filename, 'w')
        for i in range(0, n):
            subapi = apis[i]
            path = subapi['path']
            # prefix each address with ',' so that spider2 can split on commas
            path = ',' + domain + path
            temppath.append(path)
            file.write(path)
        file.close()
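
To make the file format concrete, here is a minimal sketch of the transformation parse() performs, assuming the service exposes a Swagger 1.x style resource listing at /api-docs (the host and paths below are made up):

import json

# hypothetical body returned by http://xxx.com/api-docs
body = '{"apis": [{"path": "/user"}, {"path": "/order"}]}'

domain = 'http://xxx.com/api-docs'  # stands in for settings.interface_domain + '/api-docs'
line = ''.join(',' + domain + api['path'] for api in json.loads(body)['apis'])
print(line)
# -> ,http://xxx.com/api-docs/user,http://xxx.com/api-docs/order
# interfaces_path ends up holding this single comma-prefixed string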

2、Create spider2

scrapy genspider spider2 xxx.com  (after this runs, a spider2.py file is generated in the project's spiders directory)

The following code requests each address stored in the interfaces_path file and extracts the interface details from the responses:

# -*- coding: utf-8 -*-
import scrapy
import json
from scraw_swagger.items import ScrawSwaggerItem
from scraw_swagger import settings


class Spider2Spider(scrapy.Spider):
    name = 'spider2'
    allowed_domains = ['xxx.com']

    # build start_urls from the addresses spider1 wrote to interfaces_path
    file = open('interfaces_path', 'r')
    file = file.read()
    list_files = []
    files = file.split(',')
    n = len(files)
    for i in range(1, n):  # index 0 is the empty string before the first ','
        file = files[i]
        list_files.append(file)
    start_urls = list_files

    def parse(self, response):
        outitem = ScrawSwaggerItem()
        out_interface = []
        out_domain = []
        out_method = []
        out_param_name = []
        out_data_type = []
        out_param_required = []
        # debug code
        # filename = response.url.split("/")[-1]
        # open('temp/' + filename, 'wb').write(response.body)
        response = response.body
        response_dict = json.loads(response)
        items = response_dict['apis']
        items_len = len(items)
        for j in range(0, items_len):
            path = items[j]['path']
            operations = items[j]['operations'][0]
            method = operations['method']
            parameters = operations['parameters']
            parameters_len = len(parameters)
            param_name = []
            param_required = []
            data_type = []
            for i in range(0, parameters_len):
                name = parameters[i]['name']
                param_name.append(name)
                required = parameters[i]['required']
                param_required.append(required)
                param_type = parameters[i]['type']
                data_type.append(param_type)
            # one entry per interface; the output lists stay index-aligned
            out_interface.append(path)
            interface_domain = settings.interface_domain
            out_domain.append(interface_domain)
            out_method.append(method)
            out_data_type.append(data_type)
            out_param_name.append(param_name)
            out_param_required.append(param_required)
        outitem['interface'] = out_interface
        outitem['domain'] = out_domain
        outitem['method'] = out_method
        outitem['param_name'] = out_param_name
        outitem['param_required'] = out_param_required
        outitem['data_type'] = out_data_type
        yield outitem
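
The ScrawSwaggerItem imported above lives in the project's items.py, which the post does not show. A minimal sketch matching the fields accessed by spider2 and by the pipeline below (field names taken from the code, nothing else assumed):

# -*- coding: utf-8 -*-
# scraw_swagger/items.py
import scrapy


class ScrawSwaggerItem(scrapy.Item):
    domain = scrapy.Field()
    interface = scrapy.Field()
    method = scrapy.Field()
    param_name = scrapy.Field()
    param_required = scrapy.Field()
    data_type = scrapy.Field()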

3、The settings.py file

# -*- coding: utf-8 -*-

# Scrapy settings for scraw_swagger project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://doc.scrapy.org/en/latest/topics/settings.html
#     https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://doc.scrapy.org/en/latest/topics/spider-middleware.html

# custom settings read by the spiders; replace the placeholders with real values
interface_domain = 'test'
token = 'test'
# debug code
# interface_domain = 'http://xxxx..net'
# token = 'xxxxxx'

BOT_NAME = 'scraw_swagger'

SPIDER_MODULES = ['scraw_swagger.spiders']
NEWSPIDER_MODULE = 'scraw_swagger.spiders'

FEED_EXPORT_ENCODING = 'utf-8'

# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'scraw_swagger (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = True

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://doc.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See https://doc.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'scraw_swagger.middlewares.ScrawSwaggerSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'scraw_swagger.middlewares.ScrawSwaggerDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See https://doc.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://doc.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'scraw_swagger.pipelines.ScrawSwaggerPipeline': 300,
    # 'scraw_swagger.pipelines.MysqlTwistedPipline': 200,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

# MySQL connection settings used by pipelines.py
MYSQL_HOST = ''
MYSQL_DBNAME = ''
MYSQL_USER = ''
MYSQL_PASSWD = ''
MYSQL_PORT = 3306

4、Store the data in the database. Write the pipelines.py file: take the returned item and insert the corresponding fields into the database.

# -*- coding: utf-8 -*-
import pymysql
import pymysql.cursors
from scraw_swagger import settings
from twisted.enterprise import adbapi

# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html


class ScrawSwaggerPipeline(object):
    def __init__(self):
        # connect to the database
        self.connect = pymysql.connect(
            host=settings.MYSQL_HOST,
            db=settings.MYSQL_DBNAME,
            user=settings.MYSQL_USER,
            passwd=settings.MYSQL_PASSWD,
            port=settings.MYSQL_PORT,
            charset='utf8',
            use_unicode=True)
        # the cursor executes the inserts/updates/queries
        self.cursor = self.connect.cursor()

    def process_item(self, item, spider):
        try:
            # insert statement
            sql = """
                insert into interfaces (domain, interface, method, param_name, data_type, param_required)
                VALUES (%s, %s, %s, %s, %s, %s)
            """
            domain = item['domain']
            n = len(domain)
            for i in range(0, n):
                domain = str(item['domain'][i])
                interface = str(item["interface"][i])
                method = str(item["method"][i])
                param_name = str(item["param_name"][i])
                data_type = str(item["data_type"][i])
                param_required = str(item["param_required"][i])
                a = (domain, interface, method, param_name, data_type, param_required)
                self.cursor.execute(sql, a)
                self.connect.commit()
        except Exception as error:
            # log the error if anything goes wrong
            print(error)
        # self.connect.close()
        return item
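
The INSERT above assumes an interfaces table already exists in the target database. The post does not include its schema; one possible layout matching the six inserted columns (a sketch, the column types are my assumption since the pipeline stores everything as strings) can be created once with pymysql:

# create_table.py -- one-off helper; schema is an assumption, adjust as needed
import pymysql
from scraw_swagger import settings

connect = pymysql.connect(
    host=settings.MYSQL_HOST,
    db=settings.MYSQL_DBNAME,
    user=settings.MYSQL_USER,
    passwd=settings.MYSQL_PASSWD,
    port=settings.MYSQL_PORT,
    charset='utf8')
cursor = connect.cursor()
cursor.execute("""
    CREATE TABLE IF NOT EXISTS interfaces (
        id INT AUTO_INCREMENT PRIMARY KEY,
        domain VARCHAR(255),
        interface VARCHAR(255),
        method VARCHAR(16),
        param_name TEXT,
        data_type TEXT,
        param_required TEXT
    ) DEFAULT CHARSET=utf8
""")
connect.commit()
connect.close()

With the table in place and interface_domain, token and the MYSQL_* values filled in, run the two spiders in order; spider2 reads interfaces_path when its module is imported, so spider1 has to run first:

scrapy crawl spider1
scrapy crawl spider2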
