Python Web Scraping: Modules
I. The requests module
1. Introduction
Python's standard library offers urllib, urllib2, httplib, and similar modules for making HTTP requests, but their APIs are clunky. They were built for another era and another internet, and even the simplest tasks demand a great deal of work, including overriding various methods.
Requests is an Apache2-licensed HTTP library written in Python. It wraps the built-in modules at a much higher level, which makes issuing network requests far more pleasant for Python developers; with Requests you can easily perform any operation a browser can.
2. Request methods
(1) GET requests
- # 1. GET with no parameters
- import requests
- ret = requests.get('https://github.com/timeline.json')
- ret.encoding = "gbk"
- print(ret.url)
- print(ret.text)     # str
- print(ret.content)  # bytes
- # 2. GET with parameters
- import requests
- payload = {'key1': 'value1', 'key2': 'value2'}
- ret = requests.get("http://httpbin.org/get", params=payload)
- print(ret.url)
- print(ret.text)
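As an aside, the example above hard-codes ret.encoding = "gbk"; when the charset is unknown, requests can guess it from the body. A small sketch against httpbin.org:
- import requests
- ret = requests.get('http://httpbin.org/get')
- ret.encoding = ret.apparent_encoding  # guess the charset instead of hard-coding it
- print(ret.encoding)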
(2) POST requests
- # 1. Basic POST
- import requests
- payload = {'key1': 'value1', 'key2': 'value2'}
- ret = requests.post("http://httpbin.org/post", data=payload)
- print(ret.text)
- # 2. Sending request headers along with the data
- import requests
- import json
- url = 'https://api.github.com/some/endpoint'
- payload = {'some': 'data'}
- headers = {'content-type': 'application/json'}
- ret = requests.post(url, data=json.dumps(payload), headers=headers)
- print(ret.text)
- print(ret.cookies)
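Note that requests also accepts a json= keyword (see the signature list in the next subsection), which makes the headers-plus-json.dumps pair above unnecessary. A sketch, reusing the placeholder endpoint from the example:
- import requests
- # json= serializes the body and sets Content-Type: application/json automatically
- ret = requests.post('https://api.github.com/some/endpoint', json={'some': 'data'})
- print(ret.status_code)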
(3) Other request methods
- requests.get(url, params=None, **kwargs)
- requests.post(url, data=None, json=None, **kwargs)
- requests.put(url, data=None, **kwargs)
- requests.head(url, **kwargs)
- requests.delete(url, **kwargs)
- requests.patch(url, data=None, **kwargs)
- requests.options(url, **kwargs)
- # All of the above are built on top of this method:
- requests.request(method, url, **kwargs)
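Every shortcut above just delegates to requests.request, so the two calls below are interchangeable; a minimal sketch against the httpbin.org echo service:
- import requests
- r1 = requests.get('http://httpbin.org/get', params={'k1': 'v1'})
- r2 = requests.request('get', 'http://httpbin.org/get', params={'k1': 'v1'})
- print(r1.url == r2.url)  # True: both produce the same request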
(4) Proxies
- import requests
- url = "http://www.baidu.com/"
- headers = {
-     "User-Agent": 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:62.0) Gecko/20100101 Firefox/62.0'
- }
- proxies = {
-     'http': '113.200.56.13:8010',  # a random public proxy found online; unstable
- }
- res = requests.get(url=url, headers=headers, proxies=proxies)
- res.encoding = "utf8"
- print(res.text)  # worked in a quick test
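Because public proxies like this one go stale quickly, a more defensive variant adds a timeout and catches the failure (a sketch, reusing the same unstable placeholder address):
- import requests
- proxies = {'http': 'http://113.200.56.13:8010'}  # same placeholder as above
- try:
-     res = requests.get("http://www.baidu.com/", proxies=proxies, timeout=5)
-     print(res.status_code)
- except requests.exceptions.RequestException as e:
-     print("proxy request failed:", e)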
3. Parameter reference
- def request(method, url, **kwargs):
- """Constructs and sends a :class:`Request <Request>`.
- :param method: method for the new :class:`Request` object.
- :param url: URL for the new :class:`Request` object.
- :param params: (optional) Dictionary or bytes to be sent in the query string for the :class:`Request`.
- :param data: (optional) Dictionary, bytes, or file-like object to send in the body of the :class:`Request`.
- :param json: (optional) json data to send in the body of the :class:`Request`.
- :param headers: (optional) Dictionary of HTTP Headers to send with the :class:`Request`.
- :param cookies: (optional) Dict or CookieJar object to send with the :class:`Request`.
- :param files: (optional) Dictionary of ``'name': file-like-objects`` (or ``{'name': file-tuple}``) for multipart encoding upload.
- ``file-tuple`` can be a 2-tuple ``('filename', fileobj)``, 3-tuple ``('filename', fileobj, 'content_type')``
- or a 4-tuple ``('filename', fileobj, 'content_type', custom_headers)``, where ``'content-type'`` is a string
- defining the content type of the given file and ``custom_headers`` a dict-like object containing additional headers
- to add for the file.
- :param auth: (optional) Auth tuple to enable Basic/Digest/Custom HTTP Auth.
- :param timeout: (optional) How long to wait for the server to send data
- before giving up, as a float, or a :ref:`(connect timeout, read
- timeout) <timeouts>` tuple.
- :type timeout: float or tuple
- :param allow_redirects: (optional) Boolean. Set to True if POST/PUT/DELETE redirect following is allowed.
- :type allow_redirects: bool
- :param proxies: (optional) Dictionary mapping protocol to the URL of the proxy.
- :param verify: (optional) whether the SSL cert will be verified. A CA_BUNDLE path can also be provided. Defaults to ``True``.
- :param stream: (optional) if ``False``, the response content will be immediately downloaded.
- :param cert: (optional) if String, path to ssl client cert file (.pem). If Tuple, ('cert', 'key') pair.
- :return: :class:`Response <Response>` object
- :rtype: requests.Response
- Usage::
- >>> import requests
- >>> req = requests.request('GET', 'http://httpbin.org/get')
- >>> req
- <Response [200]>
- """
Examples for each parameter:
- def param_method_url():
- # requests.request(method='get', url='http://127.0.0.1:8000/test/')
- # requests.request(method='post', url='http://127.0.0.1:8000/test/')
- pass
- def param_param():
- # - can be a dict
- # - can be a string
- # - can be bytes (ASCII range only)
- # requests.request(method='get',
- # url='http://127.0.0.1:8000/test/',
- # params={'k1': 'v1', 'k2': '水电费'})
- # requests.request(method='get',
- # url='http://127.0.0.1:8000/test/',
- # params="k1=v1&k2=水电费&k3=v3&k3=vv3")
- # requests.request(method='get',
- # url='http://127.0.0.1:8000/test/',
- # params=bytes("k1=v1&k2=k2&k3=v3&k3=vv3", encoding='utf8'))
- # Incorrect: bytes containing non-ASCII characters raise an error
- # requests.request(method='get',
- # url='http://127.0.0.1:8000/test/',
- # params=bytes("k1=v1&k2=水电费&k3=v3&k3=vv3", encoding='utf8'))
- pass
- def param_data():
- # can be a dict
- # can be a string
- # can be bytes
- # can be a file-like object
- # requests.request(method='POST',
- # url='http://127.0.0.1:8000/test/',
- # data={'k1': 'v1', 'k2': '水电费'})
- # requests.request(method='POST',
- # url='http://127.0.0.1:8000/test/',
- # data="k1=v1; k2=v2; k3=v3; k3=v4"
- # )
- # requests.request(method='POST',
- # url='http://127.0.0.1:8000/test/',
- # data="k1=v1;k2=v2;k3=v3;k3=v4",
- # headers={'Content-Type': 'application/x-www-form-urlencoded'}
- # )
- # requests.request(method='POST',
- # url='http://127.0.0.1:8000/test/',
- # data=open('data_file.py', mode='r', encoding='utf-8'), # file contents: k1=v1;k2=v2;k3=v3;k3=v4
- # headers={'Content-Type': 'application/x-www-form-urlencoded'}
- # )
- pass
- def param_json():
- # Serializes the dict with json.dumps(...) into a string, sends it as the
- # request body, and sets the Content-Type header to application/json
- requests.request(method='POST',
- url='http://127.0.0.1:8000/test/',
- json={'k1': 'v1', 'k2': '水电费'})
- def param_headers():
- # Send custom request headers
- requests.request(method='POST',
- url='http://127.0.0.1:8000/test/',
- json={'k1': 'v1', 'k2': '水电费'},
- headers={'Content-Type': 'application/x-www-form-urlencoded'}
- )
- def param_cookies():
- # Send cookies to the server
- requests.request(method='POST',
- url='http://127.0.0.1:8000/test/',
- data={'k1': 'v1', 'k2': 'v2'},
- cookies={'cook1': 'value1'},
- )
- # A CookieJar also works (the dict form is a wrapper around it)
- from http.cookiejar import CookieJar
- from http.cookiejar import Cookie
- obj = CookieJar()
- obj.set_cookie(Cookie(version=0, name='c1', value='v1', port=None, domain='', path='/', secure=False, expires=None,
- discard=True, comment=None, comment_url=None, rest={'HttpOnly': None}, rfc2109=False,
- port_specified=False, domain_specified=False, domain_initial_dot=False, path_specified=False)
- )
- requests.request(method='POST',
- url='http://127.0.0.1:8000/test/',
- data={'k1': 'v1', 'k2': 'v2'},
- cookies=obj)
- def param_files():
- # Upload a file
- # file_dict = {
- # 'f1': open('readme', 'rb')
- # }
- # requests.request(method='POST',
- # url='http://127.0.0.1:8000/test/',
- # files=file_dict)
- # Upload a file with a custom filename
- # file_dict = {
- # 'f1': ('test.txt', open('readme', 'rb'))
- # }
- # requests.request(method='POST',
- # url='http://127.0.0.1:8000/test/',
- # files=file_dict)
- # Upload in-memory content under a custom filename
- # file_dict = {
- # 'f1': ('test.txt', "hahsfaksfa9kasdjflaksdjf")
- # }
- # requests.request(method='POST',
- # url='http://127.0.0.1:8000/test/',
- # files=file_dict)
- # Upload with a custom filename, content type, and per-file headers
- # file_dict = {
- # 'f1': ('test.txt', "hahsfaksfa9kasdjflaksdjf", 'application/text', {'k1': '0'})
- # }
- # requests.request(method='POST',
- # url='http://127.0.0.1:8000/test/',
- # files=file_dict)
- pass
- def param_auth():
- from requests.auth import HTTPBasicAuth, HTTPDigestAuth
- ret = requests.get('https://api.github.com/user', auth=HTTPBasicAuth('wupeiqi', 'sdfasdfasdf'))
- print(ret.text)
- # ret = requests.get('http://192.168.1.1',
- # auth=HTTPBasicAuth('admin', 'admin'))
- # ret.encoding = 'gbk'
- # print(ret.text)
- # ret = requests.get('http://httpbin.org/digest-auth/auth/user/pass', auth=HTTPDigestAuth('user', 'pass'))
- # print(ret)
- #
- def param_timeout():
- # ret = requests.get('http://google.com/', timeout=1)
- # print(ret)
- # ret = requests.get('http://google.com/', timeout=(5, 1))
- # print(ret)
- pass
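- # A tuple timeout is (connect timeout, read timeout), as described in the
- # parameter reference above; a minimal sketch of catching the failure:
- # try:
- #     requests.get('http://google.com/', timeout=1)
- # except requests.exceptions.Timeout as e:
- #     print('timed out:', e)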
- def param_allow_redirects():
- ret = requests.get('http://127.0.0.1:8000/test/', allow_redirects=False)
- print(ret.text)
- def param_proxies():
- # proxies = {
- # "http": "61.172.249.96:80",
- # "https": "http://61.185.219.126:3128",
- # }
- # proxies = {'http://10.20.1.128': 'http://10.10.1.10:5323'}
- # ret = requests.get("http://www.proxy360.cn/Proxy", proxies=proxies)
- # print(ret.headers)
- # from requests.auth import HTTPProxyAuth
- #
- # proxyDict = {
- # 'http': '77.75.105.165',
- # 'https': '77.75.105.165'
- # }
- # auth = HTTPProxyAuth('username', 'mypassword')
- #
- # r = requests.get("http://www.google.com", proxies=proxyDict, auth=auth)
- # print(r.text)
- pass
- def param_stream():
- ret = requests.get('http://127.0.0.1:8000/test/', stream=True)
- print(ret.content)
- ret.close()
- # from contextlib import closing
- # with closing(requests.get('http://httpbin.org/get', stream=True)) as r:
- # # Process the response here.
- # for i in r.iter_content():
- # print(i)
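- # A chunked-download sketch (stream=True plus iter_content with a chunk_size)
- # that avoids loading a large body into memory; URL and filename are placeholders:
- # with requests.get('http://httpbin.org/bytes/1024', stream=True) as r:
- #     with open('out.bin', 'wb') as f:
- #         for chunk in r.iter_content(chunk_size=128):
- #             f.write(chunk)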
- def requests_session():
- import requests
- session = requests.Session()
- ### 1. Request any page first to obtain the initial cookies
- i1 = session.get(url="http://dig.chouti.com/help/service")
- ### 2. Log in carrying the previous cookies; the backend authorizes the gpsd value in the cookie
- i2 = session.post(
- url="http://dig.chouti.com/login",
- data={
- 'phone': "",
- 'password': "xxxxxx",
- 'oneMonth': ""
- }
- )
- i3 = session.post(
- url="http://dig.chouti.com/link/vote?linksId=8589623",
- )
- print(i3.text)
Official documentation: http://cn.python-requests.org/zh_CN/latest/user/quickstart.html#id4
II. The BeautifulSoup module
See also: https://blog.csdn.net/xxf813/article/details/81605197
1. Introduction
BeautifulSoup is a module that takes an HTML or XML string, parses it into a document tree, and then lets you quickly locate specific elements with the methods it provides, which makes searching for elements in HTML or XML straightforward.
2. Basic usage
pip3 install beautifulsoup4
(the example below uses features="lxml", which additionally requires: pip3 install lxml)
- from bs4 import BeautifulSoup
- html_doc = """
- <html><head><title>The Dormouse's story</title></head>
- <body>
- asdf
- <div class="title">
- <b>The Dormouse's story总共</b>
- <h1>f</h1>
- </div>
- <div class="story">Once upon a time there were three little sisters; and their names were
- <a class="sister0" id="link1">Els<span>f</span>ie</a>,
- <a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
- <a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
- and they lived at the bottom of a well.</div>
- ad<br/>sf
- <p class="story">...</p>
- </body>
- </html>
- """
- soup = BeautifulSoup(html_doc, features="lxml")
- # Find the first <a> tag
- tag1 = soup.find(name='a')
- # Find all <a> tags
- tag2 = soup.find_all(name='a')
- # Find the tag with id=link2
- tag3 = soup.select('#link2')
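For reference, running the snippet above against the embedded html_doc should print roughly the following (a sketch; whitespace in the echoed tags may differ):
- print(tag1.attrs)  # {'class': ['sister0'], 'id': 'link1'}
- print(len(tag2))   # 3: the document contains three <a> tags
- print(tag3)        # [<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>]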
3. Methods
1. name: the tag's name
- # tag = soup.find('a')
- # name = tag.name  # get
- # print(name)
- # tag.name = 'span'  # set
- # print(soup)
2. attrs: the tag's attributes
- # tag = soup.find('a')
- # attrs = tag.attrs  # get
- # print(attrs)
- # tag.attrs = {'ik': 123}  # set
- # tag.attrs['id'] = 'iiiii'  # set
- # print(soup)
3. children: all direct children
- # body = soup.find('body')
- # v = body.children
4. descendants: all descendants, recursively
- # body = soup.find('body')
- # v = body.descendants
5. clear: empty the tag of all children (the tag itself is kept)
- # tag = soup.find('body')
- # tag.clear()
- # print(soup)
6. decompose: recursively delete the tag and all its children
- # body = soup.find('body')
- # body.decompose()
- # print(soup)
7. extract: remove the tag and its subtree from the document and return it (it is detached from its parent)
- # body = soup.find('body')
- # v = body.extract()
- # print(soup)
8. decode: serialize to a string, including the current tag; decode_contents: excluding it
- # body = soup.find('body')
- # v = body.decode()
- # v = body.decode_contents()
- # print(v)
9. encode: serialize to bytes, including the current tag; encode_contents: excluding it
- # body = soup.find('body')
- # v = body.encode()
- # v = body.encode_contents()
- # print(v)
10. find: get the first matching tag
- # tag = soup.find('a')
- # print(tag)
- # tag = soup.find(name='a', attrs={'class': 'sister'}, recursive=True, text='Lacie')
- # tag = soup.find(name='a', class_='sister', recursive=True, text='Lacie')
- # print(tag)
11. find_all: get all matching tags
- # tags = soup.find_all('a')
- # print(tags)
- # tags = soup.find_all('a',limit=1)
- # print(tags)
- # tags = soup.find_all(name='a', attrs={'class': 'sister'}, recursive=True, text='Lacie')
- # # tags = soup.find(name='a', class_='sister', recursive=True, text='Lacie')
- # print(tags)
- # ####### lists #######
- # v = soup.find_all(name=['a','div'])
- # print(v)
- # v = soup.find_all(class_=['sister0', 'sister'])
- # print(v)
- # v = soup.find_all(text=['Tillie'])
- # print(v, type(v[0]))
- # v = soup.find_all(id=['link1','link2'])
- # print(v)
- # v = soup.find_all(href=['link1','link2'])
- # print(v)
- # ####### regular expressions #######
- import re
- # rep = re.compile('p')
- # rep = re.compile('^p')
- # v = soup.find_all(name=rep)
- # print(v)
- # rep = re.compile('sister.*')
- # v = soup.find_all(class_=rep)
- # print(v)
- # rep = re.compile('http://www.oldboy.com/static/.*')
- # v = soup.find_all(href=rep)
- # print(v)
- # ####### filtering with a function #######
- # def func(tag):
- # return tag.has_attr('class') and tag.has_attr('id')
- # v = soup.find_all(name=func)
- # print(v)
- # ## get: read a tag attribute
- # tag = soup.find('a')
- # v = tag.get('id')
- # print(v)
12. has_attr: check whether the tag has a given attribute
- # tag = soup.find('a')
- # v = tag.has_attr('id')
- # print(v)
13. get_text: get the text inside the tag
- # tag = soup.find('a')
- # v = tag.get_text()  # the optional first argument is a separator string, not an attribute name
- # print(v)
14. index: find a child's index position within a tag
- # tag = soup.find('body')
- # v = tag.index(tag.find('div'))
- # print(v)
- # tag = soup.find('body')
- # for i,v in enumerate(tag):
- # print(i,v)
15. is_empty_element: whether the tag is an empty (self-closing) element,
i.e. one of: 'br', 'hr', 'input', 'img', 'meta', 'spacer', 'link', 'frame', 'base'
- # tag = soup.find('br')
- # v = tag.is_empty_element
- # print(v)
16. Related nodes of the current tag
- # soup.next
- # soup.next_element
- # soup.next_elements
- # soup.next_sibling
- # soup.next_siblings
- #
- # tag.previous
- # tag.previous_element
- # tag.previous_elements
- # tag.previous_sibling
- # tag.previous_siblings
- #
- # tag.parent
- # tag.parents
17. Searching a tag's related nodes
- # tag.find_next(...)
- # tag.find_all_next(...)
- # tag.find_next_sibling(...)
- # tag.find_next_siblings(...)
- # tag.find_previous(...)
- # tag.find_all_previous(...)
- # tag.find_previous_sibling(...)
- # tag.find_previous_siblings(...)
- # tag.find_parent(...)
- # tag.find_parents(...)
- # same parameters as find_all
18. select, select_one: CSS selectors
- soup.select("title")
- soup.select("p nth-of-type(3)")
- soup.select("body a")
- soup.select("html head title")
- tag = soup.select("span,a")
- soup.select("head > title")
- soup.select("p > a")
- soup.select("p > a:nth-of-type(2)")
- soup.select("p > #link1")
- soup.select("body > a")
- soup.select("#link1 ~ .sister")
- soup.select("#link1 + .sister")
- soup.select(".sister")
- soup.select("[class~=sister]")
- soup.select("#link1")
- soup.select("a#link2")
- soup.select('a[href]')
- soup.select('a[href="http://example.com/elsie"]')
- soup.select('a[href^="http://example.com/"]')
- soup.select('a[href$="tillie"]')
- soup.select('a[href*=".com/el"]')
- from bs4.element import Tag
- def default_candidate_generator(tag):
- for child in tag.descendants:
- if not isinstance(child, Tag):
- continue
- if not child.has_attr('href'):
- continue
- yield child
- tags = soup.find('body').select("a", _candidate_generator=default_candidate_generator)
- print(type(tags), tags)
- from bs4.element import Tag
- def default_candidate_generator(tag):
- for child in tag.descendants:
- if not isinstance(child, Tag):
- continue
- if not child.has_attr('href'):
- continue
- yield child
- tags = soup.find('body').select("a", _candidate_generator=default_candidate_generator, limit=1)
- print(type(tags), tags)
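The heading also mentions select_one, which the snippets above never call; it takes the same CSS selector but returns only the first match, or None. A quick sketch:
- tag = soup.select_one("#link2")
- print(tag)  # the single matching tag, or None when nothing matches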
19. Tag contents
- # tag = soup.find('span')
- # print(tag.string)  # get
- # tag.string = 'new content'  # set
- # print(soup)
- # tag = soup.find('body')
- # print(tag.string)
- # tag.string = 'xxx'
- # print(soup)
- # tag = soup.find('body')
- # v = tag.stripped_strings  # recursively collect the text of all inner tags
- # print(v)
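- # Note: stripped_strings is a generator, so print(v) only shows a generator
- # object; materialize it to inspect the text, e.g.:
- # print(list(tag.stripped_strings))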
20. append: append a node inside the current tag
- # tag = soup.find('body')
- # tag.append(soup.find('a'))
- # print(soup)
- #
- # from bs4.element import Tag
- # obj = Tag(name='i',attrs={'id': 'it'})
- # obj.string = '我是一个新来的'
- # tag = soup.find('body')
- # tag.append(obj)
- # print(soup)
21. insert: insert a node at a specific position inside the current tag
- # from bs4.element import Tag
- # obj = Tag(name='i', attrs={'id': 'it'})
- # obj.string = '我是一个新来的'
- # tag = soup.find('body')
- # tag.insert(2, obj)
- # print(soup)
22. insert_after, insert_before: insert after or before the current tag
- # from bs4.element import Tag
- # obj = Tag(name='i', attrs={'id': 'it'})
- # obj.string = '我是一个新来的'
- # tag = soup.find('body')
- # # tag.insert_before(obj)
- # tag.insert_after(obj)
- # print(soup)
23. replace_with: replace the current tag with the given node
- # from bs4.element import Tag
- # obj = Tag(name='i', attrs={'id': 'it'})
- # obj.string = '我是一个新来的'
- # tag = soup.find('div')
- # tag.replace_with(obj)
- # print(soup)
24. setup: create navigational relationships between tags
- # tag = soup.find('div')
- # a = soup.find('a')
- # tag.setup(previous_sibling=a)
- # print(tag.previous_sibling)
25. wrap: wrap the current tag in the specified tag
- # from bs4.element import Tag
- # obj1 = Tag(name='div', attrs={'id': 'it'})
- # obj1.string = '我是一个新来的'
- #
- # tag = soup.find('a')
- # v = tag.wrap(obj1)
- # print(soup)
- # tag = soup.find('a')
- # v = tag.wrap(soup.find('p'))
- # print(soup)
26. unwrap: remove the current tag but keep its children (the inverse of wrap)
- # tag = soup.find('a')
- # v = tag.unwrap()
- # print(soup)
Full parameter reference in the official docs: http://beautifulsoup.readthedocs.io/zh_CN/v4.4.0/
4. Examples
- # # -*- coding:utf-8 -*-
- # import requests
- # from bs4 import BeautifulSoup
- #
- #
- # response = requests.get("http://www.autohome.com.cn/news/")
- # response.encoding = "gbk"
- #
- # # print(type(response.text)) # str
- #
- #
- # soup = BeautifulSoup(response.text,"html.parser")
- #
- # tag = soup.find(id='auto0channel0lazyload-article')
- # print(tag)
- # tag = soup.find(name="h3")
- # print(tag)
- # tag = soup.find(name="h3",attr={"class":"xxx"})
- # All news items
- # title, summary, url, image
- import requests
- from bs4 import BeautifulSoup
- #
- html = requests.get("http://www.autohome.com.cn/news/")
- html.encoding = "gbk"
- print(type(html.text)) # <class 'str'>
- print(type(html.content)) # <class 'bytes'>
- soup = BeautifulSoup(html.text, "html.parser")  # parse the response text, not the Response object
- li_list = soup.find(id="auto-channel-lazyload-article").find_all(name="li")
- for li in li_list:
- title = li.find(name="h3")
- if not title:
- continue
- print('\033[1;32m[标题]\033[0m',title.text)
- summary = li.find(name="p")
- print('\033[1;33m[简介]\033[0m',summary.text)
- # attrs = li.find("a").attrs
- # print(attrs)
- url = li.find("a").get("href")
- print('\033[1;34m[url]\033[0m',url)
- img = li.find("img").get("src")
- print('\033[1;35m[img]\033[0m',img)
- res = requests.get("http:%s" % (img,))
- # file_name = img.rsplit("/",1)[1]
- # with open(file_name,"wb") as f:
- # f.write(res.content)
- print("\033[1;31m==================================================================\033[0m")
- # -*- coding:utf-8 -*-
- import requests
- from bs4 import BeautifulSoup
- # Fetch the CSRF token
- r1 = requests.get("https://github.com/login")
- r1.encoding = "gbk"
- s1 = BeautifulSoup(r1.text,"html.parser")
- token = s1.find(name="input",attrs={"name":"authenticity_token"}).get("value")
- r1_cookie_dict = r1.cookies.get_dict()
- print(token)
- # POST the username, password, and token to the login endpoint
- '''
- commit:Sign in
- utf8:✓
- authenticity_token:r31RX8eQeShWRxUnEyYXtQHVmIlrw6sZmwdyy/IYP0dCzV1m4covQQZz+d8qUuc9mT8qIxjjx0U3YjKN9ZvLHA==
- login:asdf
- password:asdf
- '''
- r2 = requests.post(
- url="https://github.com/session",
- data={
- "utf8":"✓",
- "authenticity_token":token,
- "login":"fat39@163.com",
- "password":"123!@#qwe",
- "commit":"Sign in"
- },
- cookies=r1_cookie_dict,
- )
- r2_cookie_dict = r2.cookies.get_dict()
- print(r2_cookie_dict)
- # Access a logged-in page with the combined cookies
- cookie_dict = {}
- cookie_dict.update(r1_cookie_dict)
- cookie_dict.update(r2_cookie_dict)
- r3 = requests.get(
- url="https://github.com/settings/emails",
- cookies=cookie_dict,
- )
- print(r3.text)
- ############################################
- #!/usr/bin/env python
- # -*- coding:utf-8 -*-
- import requests
- from bs4 import BeautifulSoup
- # ############## Approach 1 ##############
- #
- # # 1. Load the login page and extract authenticity_token
- # i1 = requests.get('https://github.com/login')
- # soup1 = BeautifulSoup(i1.text, features='lxml')
- # tag = soup1.find(name='input', attrs={'name': 'authenticity_token'})
- # authenticity_token = tag.get('value')
- # c1 = i1.cookies.get_dict()
- # i1.close()
- #
- # # 2. Send the credentials along with authenticity_token
- # form_data = {
- # "authenticity_token": authenticity_token,
- # "utf8": "",
- # "commit": "Sign in",
- # "login": "wupeiqi@live.com",
- # 'password': 'xxoo'
- # }
- #
- # i2 = requests.post('https://github.com/session', data=form_data, cookies=c1)
- # c2 = i2.cookies.get_dict()
- # c1.update(c2)
- # i3 = requests.get('https://github.com/settings/repositories', cookies=c1)
- #
- # soup3 = BeautifulSoup(i3.text, features='lxml')
- # list_group = soup3.find(name='div', class_='listgroup')
- #
- # from bs4.element import Tag
- #
- # for child in list_group.children:
- # if isinstance(child, Tag):
- # project_tag = child.find(name='a', class_='mr-1')
- # size_tag = child.find(name='small')
- # temp = "项目:%s(%s); 项目路径:%s" % (project_tag.get('href'), size_tag.string, project_tag.string, )
- # print(temp)
- # ############## Approach 2 ##############
- # session = requests.Session()
- # # 1. Load the login page and extract authenticity_token
- # i1 = session.get('https://github.com/login')
- # soup1 = BeautifulSoup(i1.text, features='lxml')
- # tag = soup1.find(name='input', attrs={'name': 'authenticity_token'})
- # authenticity_token = tag.get('value')
- # c1 = i1.cookies.get_dict()
- # i1.close()
- #
- # # 2. Send the credentials along with authenticity_token
- # form_data = {
- # "authenticity_token": authenticity_token,
- # "utf8": "",
- # "commit": "Sign in",
- # "login": "wupeiqi@live.com",
- # 'password': 'xxoo'
- # }
- #
- # i2 = session.post('https://github.com/session', data=form_data)
- # c2 = i2.cookies.get_dict()
- # c1.update(c2)
- # i3 = session.get('https://github.com/settings/repositories')
- #
- # soup3 = BeautifulSoup(i3.text, features='lxml')
- # list_group = soup3.find(name='div', class_='listgroup')
- #
- # from bs4.element import Tag
- #
- # for child in list_group.children:
- # if isinstance(child, Tag):
- # project_tag = child.find(name='a', class_='mr-1')
- # size_tag = child.find(name='small')
- # temp = "项目:%s(%s); 项目路径:%s" % (project_tag.get('href'), size_tag.string, project_tag.string, )
- # print(temp)
- #!/usr/bin/env python
- # -*- coding:utf-8 -*-
- import time
- import requests
- from bs4 import BeautifulSoup
- session = requests.Session()
- i1 = session.get(
- url='https://www.zhihu.com/#signin',
- headers={
- 'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.98 Safari/537.36',
- }
- )
- soup1 = BeautifulSoup(i1.text, 'lxml')
- xsrf_tag = soup1.find(name='input', attrs={'name': '_xsrf'})
- xsrf = xsrf_tag.get('value')
- current_time = time.time()
- i2 = session.get(
- url='https://www.zhihu.com/captcha.gif',
- params={'r': current_time, 'type': 'login'},
- headers={
- 'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.98 Safari/537.36',
- })
- with open('zhihu.gif', 'wb') as f:
- f.write(i2.content)
- captcha = input('请打开zhihu.gif文件,查看并输入验证码:')
- form_data = {
- "_xsrf": xsrf,
- 'password': 'xxooxxoo',
- "captcha": 'captcha',
- 'email': '424662508@qq.com'
- }
- i3 = session.post(
- url='https://www.zhihu.com/login/email',
- data=form_data,
- headers={
- 'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.98 Safari/537.36',
- }
- )
- i4 = session.get(
- url='https://www.zhihu.com/settings/profile',
- headers={
- 'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.98 Safari/537.36',
- }
- )
- soup4 = BeautifulSoup(i4.text, 'lxml')
- tag = soup4.find(id='rename-section')
- nick_name = tag.find('span',class_='name').string
- print(nick_name)
- #!/usr/bin/env python
- # -*- coding:utf-8 -*-
- import re
- import json
- import base64
- import rsa
- import requests
- def js_encrypt(text):
- b64der = 'MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCp0wHYbg/NOPO3nzMD3dndwS0MccuMeXCHgVlGOoYyFwLdS24Im2e7YyhB0wrUsyYf0/nhzCzBK8ZC9eCWqd0aHbdgOQT6CuFQBMjbyGYvlVYU2ZP7kG9Ft6YV6oc9ambuO7nPZh+bvXH0zDKfi02prknrScAKC0XhadTHT3Al0QIDAQAB'
- der = base64.standard_b64decode(b64der)
- pk = rsa.PublicKey.load_pkcs1_openssl_der(der)
- v1 = rsa.encrypt(bytes(text, 'utf8'), pk)
- value = base64.encodebytes(v1).replace(b'\n', b'')
- value = value.decode('utf8')
- return value
- session = requests.Session()
- i1 = session.get('https://passport.cnblogs.com/user/signin')
- rep = re.compile("'VerificationToken': '(.*)'")
- v = re.search(rep, i1.text)
- verification_token = v.group(1)
- form_data = {
- 'input1': js_encrypt('wptawy'),
- 'input2': js_encrypt('asdfasdf'),
- 'remember': False
- }
- i2 = session.post(url='https://passport.cnblogs.com/user/signin',
- data=json.dumps(form_data),
- headers={
- 'Content-Type': 'application/json; charset=UTF-8',
- 'X-Requested-With': 'XMLHttpRequest',
- 'VerificationToken': verification_token}
- )
- i3 = session.get(url='https://i.cnblogs.com/EditDiary.aspx')
- print(i3.text)
References / original source
http://www.cnblogs.com/wupeiqi/articles/6283017.html