A good article worth bookmarking. I can follow the docs, but working through them is too slow.

Clearing cookies

import urllib2
import cookielib
from time import sleep

cookie = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cookie))
for n in range(6):
    response = opener.open('http://www.docin.com/p-976549025.html?utm_source=keda&utm_medium=20150406&utm_campaign=a20401')
    sleep(6)
    cookie.clear()

Proxy settings

By default, urllib2 uses the http_proxy environment variable to set the HTTP proxy. If you want to control the proxy explicitly in your program, independent of the environment variable, you can do it like this:

import urllib2

enable_proxy = True
proxy_handler = urllib2.ProxyHandler({"http" : 'http://some-proxy.com:8080'})
null_proxy_handler = urllib2.ProxyHandler({})

if enable_proxy:
    opener = urllib2.build_opener(proxy_handler)
else:
    opener = urllib2.build_opener(null_proxy_handler)

urllib2.install_opener(opener)

One detail to note: urllib2.install_opener() sets urllib2's global opener. That makes later calls convenient, but it gives up finer-grained control, for example using two different proxy configurations in the same program. A better practice is not to change the global settings with install_opener, but simply to call the opener's open() method in place of the global urlopen().
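
For example, a minimal sketch of using two independent openers side by side instead of installing one globally (the proxy addresses here are made up for illustration):

import urllib2

# Two openers with different (made-up) proxy settings; neither is installed globally.
opener_a = urllib2.build_opener(urllib2.ProxyHandler({"http" : 'http://proxy-a.example.com:8080'}))
opener_b = urllib2.build_opener(urllib2.ProxyHandler({"http" : 'http://proxy-b.example.com:8080'}))

# Call each opener's open() directly instead of the global urlopen().
response_a = opener_a.open('http://www.google.com')
response_b = opener_b.open('http://www.google.com')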

Timeout settings

In older versions of Python, the urllib2 API did not expose a timeout setting; to set a timeout you could only change the socket module's global timeout value.

import urllib2
import socket

socket.setdefaulttimeout(10)          # time out after 10 seconds
urllib2.socket.setdefaulttimeout(10)  # another way to do the same thing
Since Python 2.6, the timeout can be set directly via the timeout parameter of urllib2.urlopen().

import urllib2
response = urllib2.urlopen('http://www.google.com', timeout=10)

Adding specific headers to an HTTP request

To add headers, you need to use a Request object:

import urllib2

request = urllib2.Request(uri)
request.add_header('User-Agent', 'fake-client')
response = urllib2.urlopen(request)
Some headers deserve special attention, because servers check them:

User-Agent : some servers or proxies use this value to decide whether the request was sent by a browser

Content-Type : when calling a REST interface, the server checks this value to decide how to parse the HTTP body. Common values include:

application/xml : used for XML-RPC and XML-based RESTful/SOAP calls
application/json : used for JSON-RPC calls
application/x-www-form-urlencoded : used when a browser submits a web form
When calling a server's RESTful or SOAP service, a wrong Content-Type will cause the server to reject the request.
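
A small sketch of setting Content-Type (and User-Agent) on a JSON POST; the URL and payload are placeholders:

import urllib2
import json

data = json.dumps({'name': 'value'})   # placeholder payload
request = urllib2.Request('http://example.com/api/items', data)   # placeholder URL
request.add_header('Content-Type', 'application/json')
request.add_header('User-Agent', 'my-client/1.0')
response = urllib2.urlopen(request)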

Redirect

By default, urllib2 automatically follows redirects for HTTP 3XX status codes; no manual configuration is needed. To detect whether a redirect occurred, just check whether the Response URL and the Request URL are the same.

import urllib2
response = urllib2.urlopen('http://www.google.cn')
redirected = response.geturl() != 'http://www.google.cn'  # True if a redirect happened
If you do not want automatic redirects, besides using the lower-level httplib library, you can also define a custom HTTPRedirectHandler class.

import urllib2

class RedirectHandler(urllib2.HTTPRedirectHandler):
    def http_error_301(self, req, fp, code, msg, headers):
        pass
    def http_error_302(self, req, fp, code, msg, headers):
        pass

opener = urllib2.build_opener(RedirectHandler)
opener.open('http://www.google.cn')

Cookie

urllib2 also handles cookies automatically. If you need the value of a particular cookie item, you can do this:

import urllib2
import cookielib

cookie = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cookie))
response = opener.open('http://www.google.com')
for item in cookie:
    if item.name == 'some_cookie_item_name':
        print item.value

Using the HTTP PUT and DELETE methods

urllib2 only supports the HTTP GET and POST methods; to use HTTP PUT and DELETE you normally have to use the lower-level httplib library. Even so, we can make urllib2 issue PUT or DELETE requests in the following way:

import urllib2

request = urllib2.Request(uri, data=data)
request.get_method = lambda: 'PUT' # or 'DELETE'
response = urllib2.urlopen(request)
Although this approach is a bit of a hack, it works fine in practice.
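
For comparison, a minimal sketch of sending a PUT with the lower-level httplib library mentioned above (host, path and body are placeholders):

import httplib

conn = httplib.HTTPConnection('example.com')   # placeholder host
conn.request('PUT', '/resource/1', body='payload', headers={'Content-Type': 'text/plain'})
resp = conn.getresponse()
print resp.status, resp.reason
conn.close()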

Getting the HTTP status code

For a 200 OK response, calling the getcode() method on the response object returned by urlopen gives the HTTP status code. For other status codes, however, urlopen raises an exception, and you then need to check the exception object's code attribute:

import urllib2
try:
    response = urllib2.urlopen('http://restrict.web.com')
except urllib2.HTTPError, e:
    print e.code

Debug Log

When using urllib2, you can turn on the debug log as shown below. The content of sent and received packets is then printed to the screen, which helps with debugging and can sometimes save you from having to capture packets.

import urllib2

httpHandler = urllib2.HTTPHandler(debuglevel=1)
httpsHandler = urllib2.HTTPSHandler(debuglevel=1)
opener = urllib2.build_opener(httpHandler, httpsHandler)

urllib2.install_opener(opener)
response = urllib2.urlopen('http://www.google.com')

————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————

Fetching URLs (Python 3 urllib)

response = urlopen(url) -> works for http:// and ftp:// URLs

response is a file-like object; call response.read() to get the body

import urllib.request
response = urllib.request.urlopen('http://python.org/')
html = response.read()

OR

req = urllib.request.Request('http://www.voidspace.org.uk')
response = urllib.request.urlopen(req)
the_page = response.read()

POST (send data to the server) | GET

import urllib.parse   # for building the POST body
import urllib.request

url = 'http://www.someserver.com/cgi-bin/register.cgi'
values = {'name' : 'Michael Foord',
          'location' : 'Northampton',
          'language' : 'Python' }

data = urllib.parse.urlencode(values)  # convert to HTML form encoding
data = data.encode('ascii')            # POST data must be bytes in Python 3
req = urllib.request.Request(url, data)
response = urllib.request.urlopen(req)
the_page = response.read()
GET

url_values = urllib.parse.urlencode(values)  # reuse the values dict from the POST example
url = 'http://www.example.com/example.cgi'
full_url = url + '?' + url_values
response = urllib.request.urlopen(full_url)

Add headers

values = {'name' : 'Michael Foord',
          'location' : 'Northampton',
          'language' : 'Python' }

headers = { 'User-Agent' : user_agent }

data = urllib.parse.urlencode(values)
data = data.encode('ascii')  # POST data must be bytes
req = urllib.request.Request(url, data, headers)

response = urllib.request.urlopen(req)

Catch URLError

try:
    urllib.request.urlopen(req)
except urllib.error.URLError as e:
    print(e.reason)

Catch HTTPError

req = urllib.request.Request('www...')
try:
    urllib.request.urlopen(req)
except urllib.error.HTTPError as e:
    print(e.code)    # e.g. 404
    print(e.read())  # the error page body, e.g. <!D...html>

Wrapping up

from urllib.request import Request, urlopen
from urllib.error import URLError

req = Request(someurl)
try:
    response = urlopen(req)
except URLError as e:
    if hasattr(e, 'reason'):
        print('We failed to reach a server.')
        print('Reason: ', e.reason)
    elif hasattr(e, 'code'):
        print('The server couldn\'t fulfill the request.')
        print('Error code: ', e.code)
else:
    # everything is fine
    pass

info and geturl

The response returned by urlopen (or the HTTPError instance) has two useful methods, info() and geturl().

geturl - returns the real URL of the page fetched; because urlopen may have followed a redirect, this can differ from the URL you requested, so in scripts it is worth checking geturl().

info - returns a dictionary-like object that describes the page fetched, particularly the headers sent by the server. It is currently an http.client.HTTPMessage instance. Typical headers include 'Content-length', 'Content-type', and so on.
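
A minimal sketch of using both methods (the URL is just an example):

import urllib.request

response = urllib.request.urlopen('http://python.org/')
print(response.geturl())                     # final URL after any redirects
print(response.info().get('Content-Type'))  # one header from the HTTPMessage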

URL authentication needed (Basic Auth)

import urllib.request

# create a password manager
password_mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()

# Add the username and password.
# If we knew the realm, we could use it instead of None.
top_level_url = "http://example.com/foo/"   # e.g. 82.0.0.35:9090 (H3C BIMS)
password_mgr.add_password(None, top_level_url, username, password)

handler = urllib.request.HTTPBasicAuthHandler(password_mgr)

# create "opener" (OpenerDirector instance)
opener = urllib.request.build_opener(handler)

# use the opener to fetch a URL
opener.open(a_url)

# Install the opener.
# Now all calls to urllib.request.urlopen use our opener.
urllib.request.install_opener(opener)

Proxy

To stop urllib from auto-detecting proxy settings, set up your own ProxyHandler with no proxies defined:

import urllib.request

proxy_support = urllib.request.ProxyHandler({})

opener = urllib.request.build_opener(proxy_support)
urllib.request.install_opener(opener)
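
Conversely, a sketch of pointing urllib at a specific proxy (the proxy address is made up for illustration):

import urllib.request

# route plain HTTP traffic through a made-up proxy
proxy_support = urllib.request.ProxyHandler({'http': 'http://some-proxy.example.com:8080'})
opener = urllib.request.build_opener(proxy_support)
urllib.request.install_opener(opener)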

Set timeout

import socket
import urllib.request

# timeout in seconds
timeout = 10
socket.setdefaulttimeout(timeout)

# this call to urllib.request.urlopen now uses the default timeout
# we have set in the socket module
req = urllib.request.Request('http://www.voidspace.org.uk')
response = urllib.request.urlopen(req)

To read

Quick Reference to HTTP Headers

HTTP error codes:

# Table mapping response codes to messages; entries have the
# form {code: (shortmessage, longmessage)}.
responses = {
    100: ('Continue', 'Request received, please continue'),
    101: ('Switching Protocols',
          'Switching to new protocol; obey Upgrade header'),

    200: ('OK', 'Request fulfilled, document follows'),
    201: ('Created', 'Document created, URL follows'),
    202: ('Accepted',
          'Request accepted, processing continues off-line'),
    203: ('Non-Authoritative Information', 'Request fulfilled from cache'),
    204: ('No Content', 'Request fulfilled, nothing follows'),
    205: ('Reset Content', 'Clear input form for further input.'),
    206: ('Partial Content', 'Partial content follows.'),

    300: ('Multiple Choices',
          'Object has several resources -- see URI list'),
    301: ('Moved Permanently', 'Object moved permanently -- see URI list'),
    302: ('Found', 'Object moved temporarily -- see URI list'),
    303: ('See Other', 'Object moved -- see Method and URL list'),
    304: ('Not Modified',
          'Document has not changed since given time'),
    305: ('Use Proxy',
          'You must use proxy specified in Location to access this '
          'resource.'),
    307: ('Temporary Redirect',
          'Object moved temporarily -- see URI list'),

    400: ('Bad Request',
          'Bad request syntax or unsupported method'),
    401: ('Unauthorized',
          'No permission -- see authorization schemes'),
    402: ('Payment Required',
          'No payment -- see charging schemes'),
    403: ('Forbidden',
          'Request forbidden -- authorization will not help'),
    404: ('Not Found', 'Nothing matches the given URI'),
    405: ('Method Not Allowed',
          'Specified method is invalid for this server.'),
    406: ('Not Acceptable', 'URI not available in preferred format.'),
    407: ('Proxy Authentication Required', 'You must authenticate with '
          'this proxy before proceeding.'),
    408: ('Request Timeout', 'Request timed out; try again later.'),
    409: ('Conflict', 'Request conflict.'),
    410: ('Gone',
          'URI no longer exists and has been permanently removed.'),
    411: ('Length Required', 'Client must specify Content-Length.'),
    412: ('Precondition Failed', 'Precondition in headers is false.'),
    413: ('Request Entity Too Large', 'Entity is too large.'),
    414: ('Request-URI Too Long', 'URI is too long.'),
    415: ('Unsupported Media Type', 'Entity body in unsupported format.'),
    416: ('Requested Range Not Satisfiable',
          'Cannot satisfy request range.'),
    417: ('Expectation Failed',
          'Expect condition could not be satisfied.'),

    500: ('Internal Server Error', 'Server got itself in trouble'),
    501: ('Not Implemented',
          'Server does not support this operation'),
    502: ('Bad Gateway', 'Invalid responses from another server/proxy.'),
    503: ('Service Unavailable',
          'The server cannot process the request due to a high load'),
    504: ('Gateway Timeout',
          'The gateway server did not receive a timely response'),
    505: ('HTTP Version Not Supported', 'Cannot fulfill request.'),
}
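
A quick usage sketch, assuming the responses table above has been pasted into a script:

code = 404
short_msg, long_msg = responses[code]
print(code, short_msg, '-', long_msg)   # 404 Not Found - Nothing matches the given URI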

example @ cyou
