I had been studying how to crawl sites protected by slider captchas. The most reliable approach is to simulate human mouse movement to pass verification, then scrape after logging in. Since I could never obtain the original (unobstructed) captcha image, I couldn't use OpenCV to compute the distance the slider needs to travel.

1. Cleverly, I tried capturing a screenshot of the slider image, cleaning it up in Photoshop, and then comparing the images (only the final verification step was left unfinished).

2. Selenium + Scrapy: for the login step, move the mouse by hand to pass the verification; for the post-login pages, crawl them as ordinary static pages.

Having outlined the approach, let's validate it on a real example, using my own blog as the guinea pig: "https://i.cnblogs.com"

2. The results first, so you can see I'm not leading you on

1. Configuring the code

1.1_items.py

import scrapy

class BokeItem(scrapy.Item):
    name = scrapy.Field()
    summary = scrapy.Field()
    read_num = scrapy.Field()
    comment = scrapy.Field()

1.2_middlewares.py (setting a proxy)

class BokeSpiderMiddleware(object):
    # Despite the name, this is registered as a downloader middleware:
    # process_request attaches a proxy to every outgoing request.
    def __init__(self, ip=''):
        self.ip = ip

    def process_request(self, request, spider):
        print('http://10.240.252.16:911')
        request.meta['proxy'] = 'http://10.240.252.16:911'
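A single hard-coded proxy is fragile: if it dies, every request fails. A common extension is to pick a proxy at random per request. A minimal sketch, assuming you maintain your own pool (the addresses and the class name below are placeholders, not part of the project above):

```python
import random

# Placeholder proxy addresses -- replace with your own pool
PROXY_LIST = [
    'http://10.240.252.16:911',
    'http://10.240.252.17:911',
]

class RandomProxyMiddleware(object):
    def process_request(self, request, spider):
        # Attach a randomly chosen proxy to each outgoing request
        request.meta['proxy'] = random.choice(PROXY_LIST)
```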

1.3_pipelines.py (storing to MongoDB)

import pymongo
from scrapy.item import Item

class BokePipeline(object):
    def process_item(self, item, spider):
        return item

class MongoDBPipeline(object):
    # Store items in MongoDB
    @classmethod
    def from_crawler(cls, crawler):
        cls.DB_URL = crawler.settings.get("MONGO_DB_URL", 'mongodb://localhost:27017/')
        cls.DB_NAME = crawler.settings.get("MONGO_DB_NAME", 'scrapy_data')
        return cls()

    def open_spider(self, spider):
        self.client = pymongo.MongoClient(self.DB_URL)
        self.db = self.client[self.DB_NAME]

    def close_spider(self, spider):
        self.client.close()

    def process_item(self, item, spider):
        collection = self.db[spider.name]
        post = dict(item) if isinstance(item, Item) else item
        collection.insert_one(post)  # insert() was removed in newer pymongo
        return item
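The `from_crawler` hook above is how Scrapy hands the pipeline its settings. The wiring can be illustrated without a running crawler or a MongoDB instance by faking the `settings.get` lookup (the stub classes below are purely illustrative):

```python
class _FakeSettings(object):
    """Stands in for scrapy's Settings object: only .get(key, default)."""
    def __init__(self, values):
        self._values = values

    def get(self, key, default=None):
        return self._values.get(key, default)

class _FakeCrawler(object):
    def __init__(self, values):
        self.settings = _FakeSettings(values)

class MongoDBPipeline(object):
    @classmethod
    def from_crawler(cls, crawler):
        # Read connection details from settings.py, falling back to defaults
        cls.DB_URL = crawler.settings.get("MONGO_DB_URL", 'mongodb://localhost:27017/')
        cls.DB_NAME = crawler.settings.get("MONGO_DB_NAME", 'scrapy_data')
        return cls()

pipeline = MongoDBPipeline.from_crawler(_FakeCrawler({'MONGO_DB_NAME': 'myboke'}))
print(pipeline.DB_NAME)  # -> myboke
```

This is why `MONGO_DB_URL` and `MONGO_DB_NAME` must be declared in settings.py (section 1.4): Scrapy passes them through the crawler object, not via imports.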

1.4_settings.py (key settings)

import random

BOT_NAME = 'Boke'

SPIDER_MODULES = ['Boke.spiders']
NEWSPIDER_MODULE = 'Boke.spiders'

MONGO_DB_URL = 'mongodb://localhost:27017/'
MONGO_DB_NAME = 'myboke'

USER_AGENT = [  # Pool of browser User-Agent strings
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1",
    "Mozilla/5.0 (X11; CrOS i686 2268.111.0) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.57 Safari/536.11",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1092.0 Safari/536.6",
    "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1090.0 Safari/536.6",
    "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/19.77.34.5 Safari/537.1",
    "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.9 Safari/536.5",
    "Mozilla/5.0 (Windows NT 6.0) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.36 Safari/536.5",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
    "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_0) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.0 Safari/536.3",
    "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24",
    "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24",
]

FEED_EXPORT_FIELDS = ['name', 'summary', 'read_num', 'comment']

ROBOTSTXT_OBEY = False
CONCURRENT_REQUESTS = 10
DOWNLOAD_DELAY = 0.5
COOKIES_ENABLED = False

# Without these headers the site rejects us:
# Crawled (400) <GET https://www.cnblogs.com/eilinge/> (referer: None)
DEFAULT_REQUEST_HEADERS = {
    'User-Agent': random.choice(USER_AGENT),
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Language': 'en',
}

DOWNLOADER_MIDDLEWARES = {
    # 'Boss.middlewares.BossDownloaderMiddleware': 543,
    'scrapy.contrib.downloadermiddleware.httpproxy.HttpProxyMiddleware': 543,
    'Boke.middlewares.BokeSpiderMiddleware': 123,
}

ITEM_PIPELINES = {
    # Note: HttpProxyMiddleware is a downloader middleware, not an item
    # pipeline, so it does not belong in this dict.
    'Boke.pipelines.MongoDBPipeline': 300,
}
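One caveat about the settings above: `random.choice(USER_AGENT)` in `DEFAULT_REQUEST_HEADERS` is evaluated once, when settings load, so every request ends up sharing the same User-Agent. A common fix is a small downloader middleware that re-rolls per request; a sketch (the class name is my own, and the pool is shortened for illustration):

```python
import random

USER_AGENT = [  # shortened pool for illustration
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1",
    "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.9 Safari/536.5",
]

class RandomUserAgentMiddleware(object):
    def process_request(self, request, spider):
        # Pick a fresh User-Agent for every outgoing request
        request.headers['User-Agent'] = random.choice(USER_AGENT)
```

Registered in `DOWNLOADER_MIDDLEWARES` like the proxy middleware, this gives each request its own browser fingerprint instead of one fixed at startup.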

1.5_spiders/boke.py

#-*- coding:utf-8 -*-
import re
import time

import scrapy
from selenium import webdriver

from Boke.items import BokeItem

driver = webdriver.Chrome()

class BokeSpider(scrapy.Spider):
    name = 'boke'
    allowed_domains = ['www.cnblogs.com', 'passport.cnblogs.com']
    start_urls = ['https://passport.cnblogs.com/user/signin']

    def start_requests(self):
        driver.get(self.start_urls[0])
        time.sleep(3)
        driver.find_element_by_id('input1').send_keys(u'xxx')  # username
        time.sleep(3)
        driver.find_element_by_id('input2').send_keys(u'xxx')  # password
        time.sleep(3)
        driver.find_element_by_id('signin').click()
        # 20 seconds to drag the slider and pass the captcha by hand
        time.sleep(20)
        new_url = driver.current_url
        print(new_url)
        yield scrapy.Request(new_url)

    def parse(self, response):
        sels = response.css('div.day')
        for sel in sels:
            bokeitem = BokeItem()  # fresh item per post, not one shared item
            bokeitem['name'] = sel.css('div.postTitle>a ::text').extract()[0]
            bokeitem['summary'] = sel.css('div.c_b_p_desc ::text').extract()[0]
            summary = sel.css('div.postDesc ::text').extract()[0]
            bokeitem['read_num'] = re.findall(r'\((\d{0,5})', summary)[0]
            bokeitem['comment'] = re.findall(r'\((\d{0,5})', summary)[1]
            print(bokeitem)
            yield bokeitem
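The two `re.findall` calls in `parse` assume the `postDesc` text puts the read count and the comment count in the first and second parenthesized numbers, as the cnblogs post footer does. A quick check of that assumption (the sample string is made up):

```python
import re

# Hypothetical postDesc text in the cnblogs footer format
summary = 'posted @ 2018-09-10 12:00 eilinge 阅读(123) 评论(4)'

# r'\((\d{0,5})' captures up to five digits right after each '('
nums = re.findall(r'\((\d{0,5})', summary)
read_num, comment = nums[0], nums[1]
print(read_num, comment)  # -> 123 4
```

If cnblogs ever changes that footer layout, these index-based lookups (`[0]`, `[1]`) will silently grab the wrong numbers, so this is the first place to check when the scraped counts look off.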
