Disclaimer: the code below runs fine on Python 3.6, but websites change constantly, so some of it may no longer work on certain sites. Readers should focus on understanding the approach.

I. Approach

  Different image sites use different anti-scraping mechanisms, so the method has to fit the specific site. The general flow is (a minimal skeleton is sketched after this list):

  1. Browse the site in a browser and work out how the URL changes with category and page number.

  2. Write a Python test script that fetches the page content and extracts the image URLs from it.

  3. Write a Python test script that downloads one image; if it saves correctly, the full crawler is feasible.
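
  To make the three steps concrete, here is a minimal, site-agnostic sketch of the flow; the page URL, save directory and the .jpg regex are placeholder assumptions rather than values from any of the sites below.

from urllib import request
import re


# Minimal skeleton of the flow used throughout this article:
# fetch a page, regex out the image URLs, then download each one.
def fetch(url):
    return request.urlopen(request.Request(url)).read()


def crawl(page_url, save_dir):
    html = fetch(page_url)
    for n, img_url in enumerate(re.findall(r'https?://\S+\.jpg', str(html))):
        with open('%s/%03d.jpg' % (save_dir, n), 'wb') as fp:
            fp.write(fetch(img_url))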

II. Douban beauty gallery (difficulty: ❤)

  1. URL: https://www.dbmeinv.com/dbgroup/show.htm

  Clicking through in the browser, category and page number map to URLs of the form: "https://www.dbmeinv.com/dbgroup/show.htm?cid=%s&pager_offset=%s" % (cid, index)

  (where cid is the category: 2-chest, 3-legs, 4-face, 5-misc, 6-hips, 7-stockings; index is the page number)

  2. Call it from Python and inspect the returned page content; Test_Url.py is shown below.

from urllib import request
import re
from bs4 import BeautifulSoup


def get_html(url):
    req = request.Request(url)
    return request.urlopen(req).read()


if __name__ == '__main__':
    url = "https://www.dbmeinv.com/dbgroup/show.htm?cid=2&pager_offset=2"
    html = get_html(url)
    data = BeautifulSoup(html, "lxml")
    print(data)
    r = r'(https://\S+\.jpg)'
    p = re.compile(r)
    get_list = re.findall(p, str(data))
    print(get_list)

  The page is requested with urllib.request.Request(url), the returned bytes are parsed with BeautifulSoup, and re.findall() matches the image URLs.

  The final print(get_list) prints a list of image URLs.

  3. Download an image from Python; Test_Down.py is shown below.

from urllib import request


def get_image(url):
    req = request.Request(url)
    get_img = request.urlopen(req).read()
    with open('E:/Python_Doc/Images/DownTest/001.jpg', 'wb') as fp:
        fp.write(get_img)
    print("Download success!")
    return


if __name__ == '__main__':
    url = "https://ww2.sinaimg.cn/bmiddle/0060lm7Tgy1fn1cmtxkrcj30dw09a0u3.jpg"
    get_image(url)

  The image is fetched with urllib.request.Request(image_url) and written to disk; a new image appears under the target path, which shows the whole crawler is workable.

  4. Putting the above together, the complete crawler douban_spider.py:

from urllib import request
from urllib.request import urlopen
from bs4 import BeautifulSoup
import os
import time
import re
import threading

# The globals below could live in a config file; they are kept in one file here for readability
# Image root directory
picpath = r'E:\Python_Doc\Images'
# Douban URL template
douban_url = "https://www.dbmeinv.com/dbgroup/show.htm?cid=%s&pager_offset=%s"


# Create the save folder if it does not exist (only the last level is created, not parent folders)
def setpath(name):
    path = os.path.join(picpath, name)
    if not os.path.isdir(path):
        os.mkdir(path)
    return path


# Fetch the HTML content
def get_html(url):
    req = request.Request(url)
    return request.urlopen(req).read()


# Extract the image URLs
def get_ImageUrl(html):
    data = BeautifulSoup(html, "lxml")
    r = r'(https://\S+\.jpg)'
    p = re.compile(r)
    return re.findall(p, str(data))


# Save one image
def save_image(savepath, url):
    content = urlopen(url).read()
    # url[-11:] keeps the last 11 characters of the image URL as the file name
    with open(savepath + '/' + url[-11:], 'wb') as code:
        code.write(content)


def do_task(savepath, cid, index):
    url = douban_url % (cid, index)
    html = get_html(url)
    image_list = get_ImageUrl(html)
    # This check rarely matters: in practice the program is stopped by hand,
    # because there are always more images than you can download
    if not image_list:
        print(u'Nothing left to crawl')
        return
    # Progress output - this part is worth keeping
    print("=============================================================================")
    print(u'Crawling cid=%s, page %s' % (cid, index))
    for image in image_list:
        save_image(savepath, image)
    # Crawl the next page
    do_task(savepath, cid, index + 1)


if __name__ == '__main__':
    # Folder name
    filename = "DouBan"
    filepath = setpath(filename)

    # cid: 2-chest 3-legs 4-face 5-misc 6-hips 7-stockings
    for i in range(2, 8):
        do_task(filepath, i, 1)

    # threads = []
    # for i in range(2, 4):
    #     ti = threading.Thread(target=do_task, args=(filepath, i, 1, ))
    #     threads.append(ti)
    # for t in threads:
    #     t.setDaemon(True)
    #     t.start()
    # t.join()

  Run the program and open the folder: images keep being written to disk!

  5. Analysis: Douban images can be downloaded with a fairly simple crawler; the only control the site seems to apply is a limit on how frequently it can be called, so Douban is not a good fit for multithreaded crawling. A short delay between pages is the safer option, as sketched below.
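
  As an illustration of that point, the variant below reuses the helpers from douban_spider.py above but throttles itself with a fixed pause between pages instead of adding threads; the 2-second interval is an assumption, not a value published by the site.

import time

# Hypothetical polite variant of do_task: iterate pages in a loop and sleep
# between requests so the site is not hit too frequently.
def do_task_politely(savepath, cid, index=1):
    while True:
        url = douban_url % (cid, index)
        image_list = get_ImageUrl(get_html(url))
        if not image_list:
            break
        for image in image_list:
            save_image(savepath, image)
        time.sleep(2)   # throttle instead of parallelising
        index += 1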

  Douban also has another URL, https://www.dbmeinv.com/dbgroup/current.htm, which interested readers can explore on their own.

III. MM131 (difficulty: ❤❤)

  1. URL: http://www.mm131.com

  Clicking through in the browser, category and page number give URLs of the form: "http://www.mm131.com/xinggan/list_6_%s.html" % index

  (for the qingchun category: "http://www.mm131.com/qingchun/list_1_%s.html" % index, where index is the page number)

  2. Test_Url.py: a double loop first collects the gallery URLs, then walks through every page of each gallery.

from urllib import request
import re
from bs4 import BeautifulSoup


def get_html(url):
    req = request.Request(url)
    return request.urlopen(req).read()


if __name__ == '__main__':
    url = "http://www.mm131.com/xinggan/list_6_2.html"
    html = get_html(url)
    data = BeautifulSoup(html, "lxml")
    p = r"(http://www\S*/\d{4}\.html)"
    get_list = re.findall(p, str(data))
    # Loop over the gallery URLs
    for i in range(20):
        # print(get_list[i])
        # Loop over the N pages of each gallery
        for j in range(200):
            url2 = get_list[i][:-5] + "_" + str(j + 2) + ".html"
            try:
                html2 = get_html(url2)
            except:
                break
            p = r"(http://\S*/\d{4}\S*\.jpg)"
            get_list2 = re.findall(p, str(html2))
            print(get_list2[0])
        break

  

  3. Download the images with Test_Down.py, using the same method as for Douban, but no matter how many images are downloaded, every saved file turns out to be the same image.

  

  This is a bit awkward: all the image URLs are there, yet they cannot be downloaded, and opening them directly in a browser only works intermittently. No explanation turned up online, but the cause was eventually found; the working code comes first, then the explanation.

from urllib import request
import requests


def get_image(url):
    req = request.Request(url)
    get_img = request.urlopen(req).read()
    with open('E:/Python_Doc/Images/DownTest/123.jpg', 'wb') as fp:
        fp.write(get_img)
    print("Download success!")
    return


def get_image2(url_ref, url):
    headers = {"Referer": url_ref,
               'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 '
                             '(KHTML, like Gecko)Chrome/62.0.3202.94 Safari/537.36'}
    content = requests.get(url, headers=headers)
    if content.status_code == 200:
        with open('E:/Python_Doc/Images/DownTest/124.jpg', 'wb') as f:
            for chunk in content:
                f.write(chunk)
        print("Download success!")


if __name__ == '__main__':
    url_ref = "http://www.mm131.com/xinggan/2343_3.html"
    url = "http://img1.mm131.me/pic/2343/3.jpg"
    get_image2(url_ref, url)

  The download now succeeds. Switching to requests.get for the image content makes it easy to set request headers (the author had not dug into how to set headers with urllib.request, though it is possible, as sketched below). The headers must include a Referer parameter set to the page from which the image is reached, as can be seen in the browser's F12 developer tools.
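
  For completeness, a minimal sketch of the same Referer trick using only urllib.request; the function name get_image3 and the save path argument are illustrative, and the header values are the same ones used above.

from urllib import request


# Sketch: urllib.request.Request also accepts a headers dict, so the Referer
# check can be satisfied without switching to requests.
def get_image3(url_ref, url, savefile):
    headers = {"Referer": url_ref,
               "User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 "
                             "(KHTML, like Gecko)Chrome/62.0.3202.94 Safari/537.36"}
    req = request.Request(url, headers=headers)
    with open(savefile, 'wb') as fp:
        fp.write(request.urlopen(req).read())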

  

  

  4. With the tests passing, the combined full source:

from urllib import request
from urllib.request import urlopen
from bs4 import BeautifulSoup
import os
import time
import re
import requests

# The globals below could live in a config file; they are kept in one file here for readability
# Image root directory
picpath = r'E:\Python_Doc\Images'
# MM131 URL template
mm_url = "http://www.mm131.com/xinggan/list_6_%s.html"


# Create the save folder if it does not exist (only the last level is created, not parent folders)
def setpath(name):
    path = os.path.join(picpath, name)
    if not os.path.isdir(path):
        os.mkdir(path)
    return path


# Fetch the HTML content
def get_html(url):
    req = request.Request(url)
    return request.urlopen(req).read()


# Save one image, sending the Referer header
def save_image2(path, url_ref, url):
    headers = {"Referer": url_ref,
               'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 '
                             '(KHTML, like Gecko)Chrome/62.0.3202.94 Safari/537.36'}
    content = requests.get(url, headers=headers)
    if content.status_code == 200:
        with open(path + '/' + str(time.time()) + '.jpg', 'wb') as f:
            for chunk in content:
                f.write(chunk)


def do_task(path, url):
    html = get_html(url)
    data = BeautifulSoup(html, "lxml")
    p = r"(http://www\S*/\d{1,5}\.html)"
    get_list = re.findall(p, str(data))
    # print(data)
    # Loop over the gallery URLs, 20 per list page
    for i in range(20):
        try:
            print(get_list[i])
        except:
            break
        # Loop over the N pages of each gallery
        for j in range(200):
            url2 = get_list[i][:-5] + "_" + str(3*j + 2) + ".html"
            try:
                html2 = get_html(url2)
            except:
                break
            p = r"(http://\S*/\d{1,4}\S*\.jpg)"
            get_list2 = re.findall(p, str(html2))
            save_image2(path, get_list[i], get_list2[0])


if __name__ == '__main__':
    # Folder name
    filename = "MM131_XG"
    filepath = setpath(filename)

    for i in range(2, 100):
        print("Crawling list_6_%s" % i)
        url = mm_url % i
        do_task(filepath, url)

  After running the program, images stream to disk without interruption.

  

  5. Analysis: the main issue with MM131 is that saving an image requires setting the Referer header.

  

IV. Jandan (difficulty: ❤❤❤)

  1. URL: http://jandan.net/ooxx

  Clicking through in the browser, pages follow the pattern: "http://jandan.net/ooxx/page-%s#comments" % index

  (index: page number)

  2. Test_Url.py: plain urllib.request.Request requests get rejected,

  so the page is fetched with requests plus headers instead; the returned HTML looks like this:

......
<div class="text">
    <span class="righttext">
        <a href="//jandan.net/ooxx/page-2#comment-3535967">3535967</a>
    </span>
    <p>
        <img src="//img.jandan.net/img/blank.gif" onload="jandan_load_img(this)" />
        <span class="img-hash">d0c4TroufRLv8KcPl0CZWwEuhv3ZTfJrTVr02gQHSmnFyf0tWbjze3F+DoWRsMEJFYpWSXTd5YfOrmma+1CKquxniG2C19Gzh81OF3wz84m8TSGr1pXRIA</span>
    </p>
</div>
......

  The key part of this result is that the long hash string inside <span class="img-hash"> is what encodes the image address; the site converts it into a real image URL dynamically when the page is displayed. Every measure has a countermeasure, though, so the task becomes converting these hash strings into image URLs. Two approaches are given below.

  (1) Use Python's execjs module to call the site's own JS function and convert the hash into an image URL. Concretely: save the current page, locate the JS conversion function inside it, and copy it out into a standalone JS file.

  The JS file and the calling script look like this: OOXX1.js and CallJS.py

function OOXX1(a, b) {
    return md5(a)
}
function md5(a) {
    return "Success" + a
}
import execjs


# Read the local JS file
def get_js():
    f = open("OOXX1.js", 'r', encoding='UTF-8')
    line = f.readline()
    htmlstr = ''
    while line:
        htmlstr = htmlstr + line
        line = f.readline()
    return htmlstr


js = get_js()
ctx = execjs.compile(js)
ss = "SS"
cc = "CC"
print(ctx.call("OOXX1", ss, cc))

  This approach is offered as an idea only: the JS the author extracted is OOXX.js below, and calling it actually raised errors. It should be much faster than approach (2), so the unfinished code is included for readers to study.

 function time() {
var a = new Date().getTime();
return parseInt(a / 1000)
}
function microtime(b) {
var a = new Date().getTime();
var c = parseInt(a / 1000);
return b ? (a / 1000) : (a - (c * 1000)) / 1000 + " " + c
}
function chr(a) {
return String.fromCharCode(a)
}
function ord(a) {
return a.charCodeAt()
}
function md5(a) {
return hex_md5(a)
}
function base64_encode(a) {
return btoa(a)
}
function base64_decode(a) {
return atob(a)
}

(function(g) {
function o(u, z) {
var w = (u & 65535) + (z & 65535),
v = (u >> 16) + (z >> 16) + (w >> 16);
return (v << 16) | (w & 65535)
}
function s(u, v) {
return (u << v) | (u >>> (32 - v))
}
function c(A, w, v, u, z, y) {
return o(s(o(o(w, A), o(u, y)), z), v)
}
function b(w, v, B, A, u, z, y) {
return c((v & B) | ((~v) & A), w, v, u, z, y)
}
function i(w, v, B, A, u, z, y) {
return c((v & A) | (B & (~A)), w, v, u, z, y)
}
function n(w, v, B, A, u, z, y) {
return c(v ^ B ^ A, w, v, u, z, y)
}
function a(w, v, B, A, u, z, y) {
return c(B ^ (v | (~A)), w, v, u, z, y)
}
function d(F, A) {
F[A >> 5] |= 128 << (A % 32);
F[(((A + 64) >>> 9) << 4) + 14] = A;
var w, z, y, v, u, E = 1732584193,
D = -271733879,
C = -1732584194,
B = 271733878;
for (w = 0; w < F.length; w += 16) {
z = E;
y = D;
v = C;
u = B;
E = b(E, D, C, B, F[w], 7, -680876936);
B = b(B, E, D, C, F[w + 1], 12, -389564586);
C = b(C, B, E, D, F[w + 2], 17, 606105819);
D = b(D, C, B, E, F[w + 3], 22, -1044525330);
E = b(E, D, C, B, F[w + 4], 7, -176418897);
B = b(B, E, D, C, F[w + 5], 12, 1200080426);
C = b(C, B, E, D, F[w + 6], 17, -1473231341);
D = b(D, C, B, E, F[w + 7], 22, -45705983);
E = b(E, D, C, B, F[w + 8], 7, 1770035416);
B = b(B, E, D, C, F[w + 9], 12, -1958414417);
C = b(C, B, E, D, F[w + 10], 17, -42063);
D = b(D, C, B, E, F[w + 11], 22, -1990404162);
E = b(E, D, C, B, F[w + 12], 7, 1804603682);
B = b(B, E, D, C, F[w + 13], 12, -40341101);
C = b(C, B, E, D, F[w + 14], 17, -1502002290);
D = b(D, C, B, E, F[w + 15], 22, 1236535329);
E = i(E, D, C, B, F[w + 1], 5, -165796510);
B = i(B, E, D, C, F[w + 6], 9, -1069501632);
C = i(C, B, E, D, F[w + 11], 14, 643717713);
D = i(D, C, B, E, F[w], 20, -373897302);
E = i(E, D, C, B, F[w + 5], 5, -701558691);
B = i(B, E, D, C, F[w + 10], 9, 38016083);
C = i(C, B, E, D, F[w + 15], 14, -660478335);
D = i(D, C, B, E, F[w + 4], 20, -405537848);
E = i(E, D, C, B, F[w + 9], 5, 568446438);
B = i(B, E, D, C, F[w + 14], 9, -1019803690);
C = i(C, B, E, D, F[w + 3], 14, -187363961);
D = i(D, C, B, E, F[w + 8], 20, 1163531501);
E = i(E, D, C, B, F[w + 13], 5, -1444681467);
B = i(B, E, D, C, F[w + 2], 9, -51403784);
C = i(C, B, E, D, F[w + 7], 14, 1735328473);
D = i(D, C, B, E, F[w + 12], 20, -1926607734);
E = n(E, D, C, B, F[w + 5], 4, -378558);
B = n(B, E, D, C, F[w + 8], 11, -2022574463);
C = n(C, B, E, D, F[w + 11], 16, 1839030562);
D = n(D, C, B, E, F[w + 14], 23, -35309556);
E = n(E, D, C, B, F[w + 1], 4, -1530992060);
B = n(B, E, D, C, F[w + 4], 11, 1272893353);
C = n(C, B, E, D, F[w + 7], 16, -155497632);
D = n(D, C, B, E, F[w + 10], 23, -1094730640);
E = n(E, D, C, B, F[w + 13], 4, 681279174);
B = n(B, E, D, C, F[w], 11, -358537222);
C = n(C, B, E, D, F[w + 3], 16, -722521979);
D = n(D, C, B, E, F[w + 6], 23, 76029189);
E = n(E, D, C, B, F[w + 9], 4, -640364487);
B = n(B, E, D, C, F[w + 12], 11, -421815835);
C = n(C, B, E, D, F[w + 15], 16, 530742520);
D = n(D, C, B, E, F[w + 2], 23, -995338651);
E = a(E, D, C, B, F[w], 6, -198630844);
B = a(B, E, D, C, F[w + 7], 10, 1126891415);
C = a(C, B, E, D, F[w + 14], 15, -1416354905);
D = a(D, C, B, E, F[w + 5], 21, -57434055);
E = a(E, D, C, B, F[w + 12], 6, 1700485571);
B = a(B, E, D, C, F[w + 3], 10, -1894986606);
C = a(C, B, E, D, F[w + 10], 15, -1051523);
D = a(D, C, B, E, F[w + 1], 21, -2054922799);
E = a(E, D, C, B, F[w + 8], 6, 1873313359);
B = a(B, E, D, C, F[w + 15], 10, -30611744);
C = a(C, B, E, D, F[w + 6], 15, -1560198380);
D = a(D, C, B, E, F[w + 13], 21, 1309151649);
E = a(E, D, C, B, F[w + 4], 6, -145523070);
B = a(B, E, D, C, F[w + 11], 10, -1120210379);
C = a(C, B, E, D, F[w + 2], 15, 718787259);
D = a(D, C, B, E, F[w + 9], 21, -343485551);
E = o(E, z);
D = o(D, y);
C = o(C, v);
B = o(B, u)
}
return [E, D, C, B]
}
function p(v) {
var w, u = "";
for (w = 0; w < v.length * 32; w += 8) {
u += String.fromCharCode((v[w >> 5] >>> (w % 32)) & 255)
}
return u
}
function j(v) {
var w, u = [];
u[(v.length >> 2) - 1] = undefined;
for (w = 0; w < u.length; w += 1) {
u[w] = 0
}
for (w = 0; w < v.length * 8; w += 8) {
u[w >> 5] |= (v.charCodeAt(w / 8) & 255) << (w % 32)
}
return u
}
function k(u) {
return p(d(j(u), u.length * 8))
}
function e(w, z) {
var v, y = j(w),
u = [],
x = [],
A;
u[15] = x[15] = undefined;
if (y.length > 16) {
y = d(y, w.length * 8)
}
for (v = 0; v < 16; v += 1) {
u[v] = y[v] ^ 909522486;
x[v] = y[v] ^ 1549556828
}
A = d(u.concat(j(z)), 512 + z.length * 8);
return p(d(x.concat(A), 512 + 128))
}
function t(w) {
var z = "0123456789abcdef",
v = "",
u, y;
for (y = 0; y < w.length; y += 1) {
u = w.charCodeAt(y);
v += z.charAt((u >>> 4) & 15) + z.charAt(u & 15)
}
return v
}
function m(u) {
return unescape(encodeURIComponent(u))
}
function q(u) {
return k(m(u))
}
function l(u) {
return t(q(u))
}
function h(u, v) {
return e(m(u), m(v))
}
function r(u, v) {
return t(h(u, v))
}
function f(v, w, u) {
if (!w) {
if (!u) {
return l(v)
}
return q(v)
}
if (!u) {
return r(w, v)
}
return h(w, v)
}
if (typeof define === "function" && define.amd) {
define(function() {
return f
})
} else {
g.md5 = f
}
} (this));

function md5(source) {
function safe_add(x, y) {
var lsw = (x & 65535) + (y & 65535),
msw = (x >> 16) + (y >> 16) + (lsw >> 16);
return msw << 16 | lsw & 65535
}
function bit_rol(num, cnt) {
return num << cnt | num >>> 32 - cnt
}
function md5_cmn(q, a, b, x, s, t) {
return safe_add(bit_rol(safe_add(safe_add(a, q), safe_add(x, t)), s), b)
}
function md5_ff(a, b, c, d, x, s, t) {
return md5_cmn(b & c | ~b & d, a, b, x, s, t)
}
function md5_gg(a, b, c, d, x, s, t) {
return md5_cmn(b & d | c & ~d, a, b, x, s, t)
}
function md5_hh(a, b, c, d, x, s, t) {
return md5_cmn(b ^ c ^ d, a, b, x, s, t)
}
function md5_ii(a, b, c, d, x, s, t) {
return md5_cmn(c ^ (b | ~d), a, b, x, s, t)
}
function binl_md5(x, len) {
x[len >> 5] |= 128 << len % 32;
x[(len + 64 >>> 9 << 4) + 14] = len;
var i, olda, oldb, oldc, oldd, a = 1732584193,
b = -271733879,
c = -1732584194,
d = 271733878;
for (i = 0; i < x.length; i += 16) {
olda = a;
oldb = b;
oldc = c;
oldd = d;
a = md5_ff(a, b, c, d, x[i], 7, -680876936);
d = md5_ff(d, a, b, c, x[i + 1], 12, -389564586);
c = md5_ff(c, d, a, b, x[i + 2], 17, 606105819);
b = md5_ff(b, c, d, a, x[i + 3], 22, -1044525330);
a = md5_ff(a, b, c, d, x[i + 4], 7, -176418897);
d = md5_ff(d, a, b, c, x[i + 5], 12, 1200080426);
c = md5_ff(c, d, a, b, x[i + 6], 17, -1473231341);
b = md5_ff(b, c, d, a, x[i + 7], 22, -45705983);
a = md5_ff(a, b, c, d, x[i + 8], 7, 1770035416);
d = md5_ff(d, a, b, c, x[i + 9], 12, -1958414417);
c = md5_ff(c, d, a, b, x[i + 10], 17, -42063);
b = md5_ff(b, c, d, a, x[i + 11], 22, -1990404162);
a = md5_ff(a, b, c, d, x[i + 12], 7, 1804603682);
d = md5_ff(d, a, b, c, x[i + 13], 12, -40341101);
c = md5_ff(c, d, a, b, x[i + 14], 17, -1502002290);
b = md5_ff(b, c, d, a, x[i + 15], 22, 1236535329);
a = md5_gg(a, b, c, d, x[i + 1], 5, -165796510);
d = md5_gg(d, a, b, c, x[i + 6], 9, -1069501632);
c = md5_gg(c, d, a, b, x[i + 11], 14, 643717713);
b = md5_gg(b, c, d, a, x[i], 20, -373897302);
a = md5_gg(a, b, c, d, x[i + 5], 5, -701558691);
d = md5_gg(d, a, b, c, x[i + 10], 9, 38016083);
c = md5_gg(c, d, a, b, x[i + 15], 14, -660478335);
b = md5_gg(b, c, d, a, x[i + 4], 20, -405537848);
a = md5_gg(a, b, c, d, x[i + 9], 5, 568446438);
d = md5_gg(d, a, b, c, x[i + 14], 9, -1019803690);
c = md5_gg(c, d, a, b, x[i + 3], 14, -187363961);
b = md5_gg(b, c, d, a, x[i + 8], 20, 1163531501);
a = md5_gg(a, b, c, d, x[i + 13], 5, -1444681467);
d = md5_gg(d, a, b, c, x[i + 2], 9, -51403784);
c = md5_gg(c, d, a, b, x[i + 7], 14, 1735328473);
b = md5_gg(b, c, d, a, x[i + 12], 20, -1926607734);
a = md5_hh(a, b, c, d, x[i + 5], 4, -378558);
d = md5_hh(d, a, b, c, x[i + 8], 11, -2022574463);
c = md5_hh(c, d, a, b, x[i + 11], 16, 1839030562);
b = md5_hh(b, c, d, a, x[i + 14], 23, -35309556);
a = md5_hh(a, b, c, d, x[i + 1], 4, -1530992060);
d = md5_hh(d, a, b, c, x[i + 4], 11, 1272893353);
c = md5_hh(c, d, a, b, x[i + 7], 16, -155497632);
b = md5_hh(b, c, d, a, x[i + 10], 23, -1094730640);
a = md5_hh(a, b, c, d, x[i + 13], 4, 681279174);
d = md5_hh(d, a, b, c, x[i], 11, -358537222);
c = md5_hh(c, d, a, b, x[i + 3], 16, -722521979);
b = md5_hh(b, c, d, a, x[i + 6], 23, 76029189);
a = md5_hh(a, b, c, d, x[i + 9], 4, -640364487);
d = md5_hh(d, a, b, c, x[i + 12], 11, -421815835);
c = md5_hh(c, d, a, b, x[i + 15], 16, 530742520);
b = md5_hh(b, c, d, a, x[i + 2], 23, -995338651);
a = md5_ii(a, b, c, d, x[i], 6, -198630844);
d = md5_ii(d, a, b, c, x[i + 7], 10, 1126891415);
c = md5_ii(c, d, a, b, x[i + 14], 15, -1416354905);
b = md5_ii(b, c, d, a, x[i + 5], 21, -57434055);
a = md5_ii(a, b, c, d, x[i + 12], 6, 1700485571);
d = md5_ii(d, a, b, c, x[i + 3], 10, -1894986606);
c = md5_ii(c, d, a, b, x[i + 10], 15, -1051523);
b = md5_ii(b, c, d, a, x[i + 1], 21, -2054922799);
a = md5_ii(a, b, c, d, x[i + 8], 6, 1873313359);
d = md5_ii(d, a, b, c, x[i + 15], 10, -30611744);
c = md5_ii(c, d, a, b, x[i + 6], 15, -1560198380);
b = md5_ii(b, c, d, a, x[i + 13], 21, 1309151649);
a = md5_ii(a, b, c, d, x[i + 4], 6, -145523070);
d = md5_ii(d, a, b, c, x[i + 11], 10, -1120210379);
c = md5_ii(c, d, a, b, x[i + 2], 15, 718787259);
b = md5_ii(b, c, d, a, x[i + 9], 21, -343485551);
a = safe_add(a, olda);
b = safe_add(b, oldb);
c = safe_add(c, oldc);
d = safe_add(d, oldd)
}
return [a, b, c, d]
}
function binl2rstr(input) {
var i, output = "";
for (i = 0; i < input.length * 32; i += 8) output += String.fromCharCode(input[i >> 5] >>> i % 32 & 255);
return output
}
function rstr2binl(input) {
var i, output = [];
output[(input.length >> 2) - 1] = undefined;
for (i = 0; i < output.length; i += 1) output[i] = 0;
for (i = 0; i < input.length * 8; i += 8) output[i >> 5] |= (input.charCodeAt(i / 8) & 255) << i % 32;
return output
}
function rstr_md5(s) {
return binl2rstr(binl_md5(rstr2binl(s), s.length * 8))
}
function rstr_hmac_md5(key, data) {
var i, bkey = rstr2binl(key),
ipad = [],
opad = [],
hash;
ipad[15] = opad[15] = undefined;
if (bkey.length > 16) bkey = binl_md5(bkey, key.length * 8);
for (i = 0; i < 16; i += 1) {
ipad[i] = bkey[i] ^ 909522486;
opad[i] = bkey[i] ^ 1549556828
}
hash = binl_md5(ipad.concat(rstr2binl(data)), 512 + data.length * 8);
return binl2rstr(binl_md5(opad.concat(hash), 512 + 128))
}
function rstr2hex(input) {
var hex_tab = "0123456789abcdef",
output = "",
x, i;
for (i = 0; i < input.length; i += 1) {
x = input.charCodeAt(i);
output += hex_tab.charAt(x >>> 4 & 15) + hex_tab.charAt(x & 15)
}
return output
}
function str2rstr_utf8(input) {
return unescape(encodeURIComponent(input))
}
function raw_md5(s) {
return rstr_md5(str2rstr_utf8(s))
}
function hex_md5(s) {
return rstr2hex(raw_md5(s))
}
function raw_hmac_md5(k, d) {
return rstr_hmac_md5(str2rstr_utf8(k), str2rstr_utf8(d))
}
function hex_hmac_md5(k, d) {
return rstr2hex(raw_hmac_md5(k, d))
}
return hex_md5(source)
}

function OOXX(m, r, d) {
var e = "DECODE";
var r = r ? r: "";
var d = d ? d: 0;
var q = 4;
r = md5(r);
var o = md5(r.substr(0, 16));
var n = md5(r.substr(16, 16));
if (q) {
if (e == "DECODE") {
var l = m.substr(0, q)
}
} else {
var l = ""
}
var c = o + md5(o + l);
var k;
if (e == "DECODE") {
m = m.substr(q);
k = base64_decode(m)
}
var h = new Array(256);
for (var g = 0; g < 256; g++) {
h[g] = g
}
var b = new Array();
for (var g = 0; g < 256; g++) {
b[g] = c.charCodeAt(g % c.length)
}
for (var f = g = 0; g < 256; g++) {
f = (f + h[g] + b[g]) % 256;
tmp = h[g];
h[g] = h[f];
h[f] = tmp
}
var t = "";
k = k.split("");
for (var p = f = g = 0; g < k.length; g++) {
p = (p + 1) % 256;
f = (f + h[p]) % 256;
tmp = h[p];
h[p] = h[f];
h[f] = tmp;
t += chr(ord(k[g]) ^ (h[(h[p] + h[f]) % 256]))
}
if (e == "DECODE") {
if ((t.substr(0, 10) == 0 || t.substr(0, 10) - time() > 0) && t.substr(10, 16) == md5(t.substr(26) + n).substr(0, 16)) {
t = t.substr(26)
} else {
t = ""
}
}
return t
};

  (2) Use Python's selenium to drive a headless Chrome browser. (Note: the PhantomJS headless browser used to be an option, but it is deprecated in newer selenium releases, so for this Python 3.6 setup Chrome is the choice.)

  Headless Chrome requires Chrome 60 or later, and a chromedriver.exe matching the installed Chrome version has to be downloaded. (Chrome 60+ supposedly has the headless capability built in, but the author could not get that to work on its own and downloaded chromedriver instead, from http://chromedriver.storage.googleapis.com/index.html)

  

from urllib import request
import re
from selenium import webdriver
import requests


def get_html(url):
    req = request.Request(url)
    return request.urlopen(req).read()


def get_html2(url):
    headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 '
                             '(KHTML, like Gecko)Chrome/62.0.3202.94 Safari/537.36'}
    return requests.get(url, headers=headers).text


# Open the page with headless Chrome
def get_html3(url):
    chromedriver = "C:\Program Files (x86)\Google\Chrome\Application\chromedriver.exe"
    options = webdriver.ChromeOptions()
    options.add_argument('--headless')
    browser = webdriver.Chrome(chromedriver, chrome_options=options)
    browser.get(url)
    html = browser.page_source
    browser.quit()
    return html


# Extract image URLs from the page HTML
def open_webpage(html):
    reg = r"(http:\S+(\.jpg|\.png|\.gif))"
    imgre = re.compile(reg)
    imglist = re.findall(imgre, html)
    return imglist


if __name__ == '__main__':
    url = "http://jandan.net/ooxx/page-2#comments"
    html = get_html3(url)
    reg = r"(http:\S+(\.jpg|\.png))"
    imglist = re.findall(reg, html)
    for img in imglist:
        print(img[0])

  

  3. Downloading the images can be done with the urllib.request.Request / urlopen method shown earlier, for example:
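
  A minimal sketch reusing that earlier approach; the function name save_girl is illustrative, and the file-naming scheme (last 11 characters of the URL) mirrors the Douban script above.

from urllib.request import urlopen


# Sketch: jandan image URLs do not require a Referer header, so a plain
# urlopen is enough to fetch and save the bytes.
def save_girl(path, url):
    content = urlopen(url).read()
    with open(path + '/' + url[-11:], 'wb') as fp:
        fp.write(content)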

  4. With the tests passing, the combined full source OOXX_spider.py:

from selenium import webdriver
import os
import requests
import re
from urllib.request import urlopen

# The globals below could live in a config file; they are kept in one file here for readability
# Image root directory
picpath = r'E:\Python_Doc\Images'
# Site URL template
ooxx_url = "http://jandan.net/ooxx/page-{}#comments"
# Path to chromedriver
chromedriver = "C:\Program Files (x86)\Google\Chrome\Application\chromedriver.exe"


# Create the save folder if it does not exist (only the last level is created)
def setpath(name):
    path = os.path.join(picpath, name)
    if not os.path.isdir(path):
        os.mkdir(path)
    return path


# Open the page with headless Chrome
def gethtml(url):
    options = webdriver.ChromeOptions()
    options.add_argument('--headless')
    browser = webdriver.Chrome(chromedriver, chrome_options=options)
    browser.get(url)
    html = browser.page_source
    browser.quit()
    return html


# Extract image URLs from the page HTML
def open_webpage(html):
    reg = r"(http:\S+(\.jpg|\.png))"
    imgre = re.compile(reg)
    imglist = re.findall(imgre, html)
    return imglist


# Save one image
def savegirl(path, url):
    content = urlopen(url).read()
    with open(path + '/' + url[-11:], 'wb') as code:
        code.write(content)


def do_task(path, index):
    print("Crawling page %s" % index)
    web_url = ooxx_url.format(index)
    htmltext = gethtml(web_url)
    imglists = open_webpage(htmltext)
    print("Page %s fetched, saving its images" % index)
    for j in range(len(imglists)):
        savegirl(path, imglists[j][0])


if __name__ == '__main__':
    filename = "OOXX"
    filepath = setpath(filename)
    for i in range(1, 310):
        do_task(filepath, i)

  

  5. Analysis: driving Chrome is noticeably slower, but as anti-scraping techniques become more sophisticated it is the inevitable choice; the real question is how to speed it up. One obvious saving is to reuse a single browser instance instead of launching and quitting Chrome for every page, as sketched below.
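
  The sketch below is a hypothetical speed-up, not part of the original script: it starts headless Chrome once and reuses it for every page. It assumes the chromedriver, ooxx_url, open_webpage, savegirl and filepath names defined in OOXX_spider.py above.

from selenium import webdriver


# Start one headless Chrome and yield the rendered HTML of each page,
# instead of paying the browser start-up cost on every page.
def crawl_with_one_browser(pages):
    options = webdriver.ChromeOptions()
    options.add_argument('--headless')
    browser = webdriver.Chrome(chromedriver, chrome_options=options)
    try:
        for index in pages:
            browser.get(ooxx_url.format(index))
            yield index, browser.page_source
    finally:
        browser.quit()


# Usage sketch:
# for index, html in crawl_with_one_browser(range(1, 310)):
#     for img in open_webpage(html):
#         savegirl(filepath, img[0])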

V. Other sites, complete code only (difficulty: ❤❤)

  1. URL: http://pic.yesky.com   yesky_spider.py

from urllib import request
import re
import os
from bs4 import BeautifulSoup
from urllib.error import HTTPError

# The globals below could live in a config file; they are kept in one file here for readability
# Image root directory
picpath = r'E:\Python_Doc\Images'
# Site URL template
mm_url = "http://pic.yesky.com/c/6_20771_%s.shtml"


# Create the save folder if it does not exist (only the last level is created)
def setpath(name):
    path = os.path.join(picpath, name)
    if not os.path.isdir(path):
        os.mkdir(path)
    return path


# Fetch the HTML content
def get_html(url):
    req = request.Request(url)
    return request.urlopen(req).read()


# Save one image
def save_image(path, url):
    req = request.Request(url)
    get_img = request.urlopen(req).read()
    with open(path + '/' + url[-14:] + '.jpg', 'wb') as fp:
        fp.write(get_img)
    return


def do_task(path, url):
    html = get_html(url)
    p = r'(http://pic.yesky.com/\d+/\d+.shtml)'
    urllist = re.findall(p, str(html))
    print(urllist)
    for ur in urllist:
        for i in range(2, 100):
            url1 = ur[:-6] + "_" + str(i) + ".shtml"
            print(url1)
            try:
                html1 = get_html(url1)
                data = BeautifulSoup(html1, "lxml")
                p = r"http://dynamic-image\.yesky\.com/740x-/uploadImages/\S+\.jpg"
                image_list = re.findall(p, str(data))
                print(image_list[0])
                save_image(path, image_list[0])
            except:
                break


if __name__ == '__main__':
    # Folder name
    filename = "YeSky"
    filepath = setpath(filename)

    for i in range(2, 100):
        print("Crawling 6_20771_%s" % i)
        url = mm_url % i
        do_task(filepath, url)

  

  2. URL: http://www.7160.com   7160_spider.py

from urllib import request
import re
import os

# The globals below could live in a config file; they are kept in one file here for readability
# Image root directory
picpath = r'E:\Python_Doc\Images'
# 7160 URL templates
mm_url = "http://www.7160.com/xingganmeinv/list_3_%s.html"
mm_url2 = "http://www.7160.com/meinv/%s/index_%s.html"


# Create the save folder if it does not exist (only the last level is created)
def setpath(name):
    path = os.path.join(picpath, name)
    if not os.path.isdir(path):
        os.mkdir(path)
    return path


def get_html(url):
    req = request.Request(url)
    return request.urlopen(req).read()


def get_image(path, url):
    req = request.Request(url)
    get_img = request.urlopen(req).read()
    with open(path + '/' + url[-14:] + '.jpg', 'wb') as fp:
        fp.write(get_img)
    return


def do_task(path, url):
    html = get_html(url)
    p = r"<a href=\"/meinv/(\d+)/\""
    get_list = re.findall(p, str(html))
    for list in get_list:
        for i in range(2, 200):
            try:
                url2 = mm_url2 % (list, i)
                html2 = get_html(url2)
                p = r"http://img\.7160\.com/uploads/allimg/\d+/\S+\.jpg"
                image_list = re.findall(p, str(html2))
                # print(image_list[0])
                get_image(path, image_list[0])
            except:
                break
        # stop after the first gallery on each list page
        break


if __name__ == '__main__':
    # Folder name (left empty here, so images are saved under the image root)
    filename = ""
    filepath = setpath(filename)

    for i in range(2, 288):
        print("Downloading list page list_3_%s" % i)
        url = mm_url % i
        do_task(filepath, url)

  3. URL: http://www.263dm.com   263dm_spider.py

from urllib.request import urlopen
import urllib.request
import os
import sys
import time
import re
import requests

# The globals below could live in a config file; they are kept in one file here for readability
# Image root directory
picpath = r'E:\Python_Doc\Images'
# Site URL template
mm_url = "http://www.263dm.com/html/ai/%s.html"


# Create the save folder if it does not exist (only the last level is created)
def setpath(name):
    path = os.path.join(picpath, name)
    if not os.path.isdir(path):
        os.mkdir(path)
    return path


def getUrl(url):
    aa = urllib.request.Request(url)
    html = urllib.request.urlopen(aa).read()
    p = r"(http://www\S*/\d{4}\.html)"
    return re.findall(p, str(html))


def get_image(savepath, url):
    aa = urllib.request.Request(url)
    html = urllib.request.urlopen(aa).read()
    p = r"(http:\S+\.jpg)"
    url_list = re.findall(p, str(html))
    for ur in url_list:
        save_image(ur, ur, savepath)


def save_image(url_ref, url, path):
    headers = {"Referer": url_ref,
               'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 '
                             '(KHTML, like Gecko)Chrome/62.0.3202.94 Safari/537.36'}
    content = requests.get(url, headers=headers)
    if content.status_code == 200:
        with open(path + "/" + str(time.time()) + '.jpg', 'wb') as f:
            for chunk in content:
                f.write(chunk)


def do_task(savepath, index):
    print("Saving page %s" % index)
    url = mm_url % index
    get_image(savepath, url)


if __name__ == '__main__':
    # Folder name
    filename = "Adult"
    filepath = setpath(filename)

    for i in range(10705, 9424, -1):
        do_task(filepath, i)

VI. Summary and additional notes

  1. There are three ways to fetch page content:

    urllib.request.Request and urllib.request.urlopen

    - fast, easily detected as a bot, cannot see content produced by JS execution

    requests with a headers dict

    - fast, can disguise itself as a browser, still cannot see content produced by JS execution

    headless Chrome

    - slow, equivalent to a real browser visit, returns the page after JS has executed
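
  For quick reference, a compact side-by-side sketch of the three methods; http://example.com is a placeholder URL, the User-Agent string is shortened, and chromedriver is assumed to be on the PATH.

from urllib import request
import requests
from selenium import webdriver

url = "http://example.com"          # placeholder URL
headers = {"User-Agent": "Mozilla/5.0"}

# 1) urllib: fast, no disguise, no JS execution
html1 = request.urlopen(request.Request(url, headers=headers)).read()

# 2) requests with headers: fast, disguised, no JS execution
html2 = requests.get(url, headers=headers).text

# 3) headless Chrome: slow, but returns the DOM after JS has run
options = webdriver.ChromeOptions()
options.add_argument('--headless')
browser = webdriver.Chrome(chrome_options=options)   # chromedriver assumed on PATH
browser.get(url)
html3 = browser.page_source
browser.quit()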
