Python Web Scraping Series: The Urllib Library in Detail
The Urllib Library in Detail
Python's built-in HTTP request library, urllib, consists of four modules:
* urllib.request — the request module
* urllib.error — the exception-handling module
* urllib.parse — the URL-parsing module
* urllib.robotparser — the robots.txt-parsing module (not demonstrated again later in this post; see the sketch right after this list)
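Since urllib.robotparser does not come up again below, here is a minimal sketch of how it might be used; the target site is just an illustration:

import urllib.robotparser

rp = urllib.robotparser.RobotFileParser()
rp.set_url('http://www.baidu.com/robots.txt')
rp.read()
# can_fetch() reports whether the given user agent may crawl the URL
print(rp.can_fetch('*','http://www.baidu.com/s?wd=python'))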
Changes from Python 2
Python2
import urllib2
response = urllib2.urlopen('http://www.baidu.com')
Python3
import urllib.request
response = urllib.request.urlopen('http://www.baidu.com')
GET requests with urlopen
import urllib.request
response = urllib.request.urlopen('http://www.baidu.com')
print(response.read().decode('utf-8'))
POST requests with urlopen
import urllib.parse
import urllib.request
data = bytes(urllib.parse.urlencode({'word':'hello'}),encoding='utf-8')
response = urllib.request.urlopen('http://httpbin.org/post',data=data)
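The snippet above doesn't print anything. To see what httpbin echoes back (the posted data shows up under the "form" key of the JSON response), you can decode and print the body:

print(response.read().decode('utf-8'))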
Setting a timeout with urlopen
import urllib.request
response = urllib.request.urlopen('http://httpbin.org/get',timeout=1)
print(response.read())
Shorten the timeout to see what happens:
import socket
import urllib.request
import urllib.error
try:
    response = urllib.request.urlopen('http://httpbin.org/get',timeout=0.1)
except urllib.error.URLError as e:
    if isinstance(e.reason,socket.timeout):
        print('TIME OUT')
Response type
import urllib.request
response = urllib.request.urlopen('https://www.python.org')
print(type(response))
<class 'http.client.HTTPResponse'>
Status code and response headers
import urllib.request
response = urllib.request.urlopen('https://www.python.org')
print(response.status)
print(response.getheaders())
print(response.getheader('Server'))
200
[('Server', 'nginx'), ('Content-Type', 'text/html; charset=utf-8'), ('X-Frame-Options', 'SAMEORIGIN'), ('x-xss-protection', '1; mode=block'), ('X-Clacks-Overhead', 'GNU Terry Pratchett'), ('Via', '1.1 varnish'), ('Content-Length', '50069'), ('Accept-Ranges', 'bytes'), ('Date', 'Mon, 26 Nov 2018 10:16:51 GMT'), ('Via', '1.1 varnish'), ('Age', '1872'), ('Connection', 'close'), ('X-Served-By', 'cache-iad2144-IAD, cache-tyo19943-TYO'), ('X-Cache', 'HIT, HIT'), ('X-Cache-Hits', '2, 4331'), ('X-Timer', 'S1543227412.955266,VS0,VE0'), ('Vary', 'Cookie'),
('Strict-Transport-Security', 'max-age=63072000; includeSubDomains')]
nginx
Request
from urllib import request,parse
url = 'http://httpbin.org/post'
headers = {
    'User-Agent':'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.77 Safari/537.36',
    'Host':'httpbin.org'
}
dict = { 'name':'Germey' }
data = bytes(parse.urlencode(dict),encoding='utf-8')
req = request.Request(url=url,data=data,headers=headers,method='POST')
response = request.urlopen(req)
print(response.read().decode('utf-8'))
{
"args": {},
"data": "",
"files": {},
"form": {
"name": "Germey"
},
"headers": {
"Accept-Encoding": "identity",
"Connection": "close",
"Content-Length": "11",
"Content-Type": "application/x-www-form-urlencoded",
"Host": "httpbin.org",
"User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.77 Safari/537.36"
},
"json": null,
"origin": "58.34.235.37",
"url": "http://httpbin.org/post"
}
from urllib import request,parse
url = 'http://httpbin.org/post'
dict = {'name':'Germey'}
data = bytes(parse.urlencode(dict),encoding='utf8')
req = request.Request(url=url,data=data,method='POST')
req.add_header('User-Agent','Mozilla/4.0 (compatible;MSIE 5.5;Windows NT)')
response = request.urlopen(req)
print(response.read().decode('utf-8'))
{
"args": {},
"data": "",
"files": {},
"form": {
"name": "Germey"
},
"headers": {
"Accept-Encoding": "identity",
"Connection": "close",
"Content-Length": "11",
"Content-Type": "application/x-www-form-urlencoded",
"Host": "httpbin.org",
"User-Agent": "Mozilla/4.0 (compatible;MSIE 5.5;Windows NT)"
},
"json": null,
"origin": "58.34.235.37",
"url": "http://httpbin.org/post"
}
Handler
Proxy
import urllib.request
proxy_handler = urllib.request.ProxyHandler({
    'http':'http://127.0.0.1:9319',
    'https':'https://127.0.0.1:9319'
})
opener = urllib.request.build_opener(proxy_handler)
response = opener.open('http://www.baidu.com')
print(response.read())
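Note that 127.0.0.1:9319 is only a placeholder; a proxy actually has to be listening on that address, otherwise the request fails. Other handlers plug into build_opener the same way. As an additional sketch (the URL and credentials below are hypothetical), HTTP basic authentication can be handled with HTTPBasicAuthHandler:

from urllib.request import HTTPPasswordMgrWithDefaultRealm,HTTPBasicAuthHandler,build_opener
from urllib.error import URLError

url = 'http://localhost:5000/'  # hypothetical page protected by HTTP basic auth
p = HTTPPasswordMgrWithDefaultRealm()
p.add_password(None,url,'username','password')  # realm=None matches any realm for this URL
auth_handler = HTTPBasicAuthHandler(p)
opener = build_opener(auth_handler)
try:
    result = opener.open(url)
    print(result.read().decode('utf-8'))
except URLError as e:
    print(e.reason)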
Cookie
Getting cookies
import http.cookiejar,urllib.request
cookie = http.cookiejar.CookieJar()
handler = urllib.request.HTTPCookieProcessor(cookie)
opener = urllib.request.build_opener(handler)
response = opener.open('http://www.baidu.com')
for item in cookie:
    print(item.name+'='+item.value)
BAIDUID=DF51E1D71641196283719D090EEA14DA:FG=1
BIDUPSID=DF51E1D71641196283719D090EEA14DA
H_PS_PSSID=1433_21086_27508
PSTM=1543232201
delPer=0
BDSVRTM=0
BD_HOME=0
Saving cookies to a file
import http.cookiejar,urllib.request
filename = 'cookie.txt'
cookie = http.cookiejar.MozillaCookieJar(filename)
handler = urllib.request.HTTPCookieProcessor(cookie)
opener = urllib.request.build_opener(handler)
response = opener.open('http://www.baidu.com')
cookie.save(ignore_discard=True,ignore_expires=True)
# Netscape HTTP Cookie File
# http://curl.haxx.se/rfc/cookie_spec.html
# This is a generated file! Do not edit.
.baidu.com TRUE / FALSE 3690716228 BAIDUID 3131EAE6351C0F474BF6E477B848A52B:FG=1
.baidu.com TRUE / FALSE 3690716228 BIDUPSID 3131EAE6351C0F474BF6E477B848A52B
.baidu.com TRUE / FALSE H_PS_PSSID 27775_1454_21088_20719
.baidu.com TRUE / FALSE 3690716228 PSTM 1543232581
.baidu.com TRUE / FALSE delPer 0
www.baidu.com FALSE / FALSE BDSVRTM 0
www.baidu.com FALSE / FALSE BD_HOME 0
import http.cookiejar,urllib.request
filename = 'cookie2.txt'
cookie = http.cookiejar.LWPCookieJar(filename)
handler = urllib.request.HTTPCookieProcessor(cookie)
opener = urllib.request.build_opener(handler)
response = opener.open('http://www.baidu.com')
cookie.save(ignore_discard=True,ignore_expires=True)
#LWP-Cookies-2.0
Set-Cookie3: BAIDUID="38C33C024449D6412F80B85996FAA2F8:FG=1"; path="/"; domain=".baidu.com"; path_spec; domain_dot; expires="2086-12-14 15:01:56Z"; version=0
Set-Cookie3: BIDUPSID=38C33C024449D6412F80B85996FAA2F8; path="/"; domain=".baidu.com"; path_spec; domain_dot; expires="2086-12-14 15:01:56Z"; version=0
Set-Cookie3: H_PS_PSSID=1462_21088_22157; path="/"; domain=".baidu.com"; path_spec; domain_dot; discard; version=0
Set-Cookie3: PSTM=1543232869; path="/"; domain=".baidu.com"; path_spec; domain_dot; expires="2086-12-14 15:01:56Z"; version=0
Set-Cookie3: delPer=0; path="/"; domain=".baidu.com"; path_spec; domain_dot; discard; version=0
Set-Cookie3: BDSVRTM=0; path="/"; domain="www.baidu.com"; path_spec; discard; version=0
Set-Cookie3: BD_HOME=0; path="/"; domain="www.baidu.com"; path_spec; discard; version=0
Loading cookies from a file
import http.cookiejar,urllib.request
cookie = http.cookiejar.LWPCookieJar()
cookie.load('cookie2.txt',ignore_discard=True,ignore_expires=True)
handler = urllib.request.HTTPCookieProcessor(cookie)
opener = urllib.request.build_opener(handler)
response = opener.open('http://www.baidu.com')
print(response.read().decode('utf-8'))
Exception handling
from urllib import request,error
try:
    response = request.urlopen('http://cuiiqdkfsj.com/insdfi.htm')
except error.URLError as e:
    print(e.reason)
[Errno -2] Name or service not known
from urllib import request,error
try:
    response = request.urlopen('http://www.baidu.com/dsjfi.htm')
except error.HTTPError as e:
    print(e.reason,e.code,e.headers,sep='\n')
except error.URLError as e:
    print(e.reason)
else:
    print('Request Successfully')
[Errno -2] Name or service not known
import socket
import urllib.request
import urllib.error
try:
    response = urllib.request.urlopen('https://www.baidu.com',timeout=0.01)
except urllib.error.URLError as e:
    print(type(e.reason))
    if isinstance(e.reason,socket.timeout):
        print('TIME OUT')
<class 'socket.timeout'>
TIME OUT
URL parsing
from urllib.parse import urlparse
result = urlparse('http://www.baidu.com/index.html;user?id=5#comment')
print(type(result),result)
<class 'urllib.parse.ParseResult'> ParseResult(scheme='http', netloc='www.baidu.com', path='/index.html', params='user', query='id=5', fragment='comment')
Specifying a default scheme
from urllib.parse import urlparse
result = urlparse('www.baidu.com/index.html;user?id=5#comment',scheme='https')
print(result)
ParseResult(scheme='https', netloc='', path='www.baidu.com/index.html', params='user', query='id=5', fragment='comment')
Disabling fragment parsing: the fragment is merged into the preceding component (the query here)
from urllib.parse import urlparse
result = urlparse('http://www.baidu.com/index.html;user?id=5#comment',allow_fragments=False)
print(result)
ParseResult(scheme='http', netloc='www.baidu.com', path='/index.html', params='user', query='id=5#comment', fragment='')
urlunparse
from urllib.parse import urlunparse
data = ['http','www.baidu.com','index.html','user','a=6','comment']
print(urlunparse(data))
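Given the six components above, this should print:
http://www.baidu.com/index.html;user?a=6#comment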
urljoin
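A quick sketch of typical urljoin usage (the URLs are just examples): the second argument is resolved against the first, and a complete absolute URL as the second argument simply wins:

from urllib.parse import urljoin

print(urljoin('http://www.baidu.com','FAQ.html'))
# http://www.baidu.com/FAQ.html
print(urljoin('http://www.baidu.com','https://cuiqingcai.com/FAQ.html'))
# https://cuiqingcai.com/FAQ.html
print(urljoin('http://www.baidu.com/about.html','?category=2#comment'))
# http://www.baidu.com/about.html?category=2#comment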
urlencode
from urllib.parse import urlencode
params = {'name':'germey','age':22}
base_url = 'http://www.baidu.com?'
url = base_url+urlencode(params)
print(url)
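With these params the printed URL should be http://www.baidu.com?name=germey&age=22 (the parameter order follows the dict, so it may differ on Python versions before 3.7).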