Official documentation: https://www.crummy.com/software/BeautifulSoup/bs4/doc.zh/

I. What is BeautifulSoup?

A: A flexible and convenient web-page parsing library; it is efficient and supports multiple parsers.

  With it you can conveniently extract information from web pages without writing regular expressions.

II. Installation

pip3 install beautifulsoup4
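As a quick sanity check (my addition, not part of the original post), bs4 exposes a __version__ attribute, so importing it and printing the version confirms the install worked:

import bs4
print(bs4.__version__)  # prints something like 4.x.y if beautifulsoup4 is importable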

III. Usage

Parser | Usage | Advantages | Disadvantages
Python standard library | BeautifulSoup(markup, "html.parser") | Built into Python; moderate speed; lenient with bad markup | Poor leniency in versions before Python 2.7.3 or 3.2.2
lxml HTML parser | BeautifulSoup(markup, "lxml") | Fast; lenient with bad markup | Requires installing the C-based lxml library
lxml XML parser | BeautifulSoup(markup, "xml") | Fast; the only XML parser among these | Requires installing the C-based lxml library
html5lib | BeautifulSoup(markup, "html5lib") | Most lenient; parses pages the way a browser does; produces valid HTML5 | Slow; depends on an external Python package
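As a rough sketch of how the parser argument is chosen in practice (assuming html.parser needs no extra install, while lxml must be installed separately), the same markup can be handed to different parsers, with a fallback when one is missing:

from bs4 import BeautifulSoup

markup = "<p>Hello<p>World"  # deliberately sloppy HTML to show leniency

# html.parser ships with the standard library, so it always works
print(BeautifulSoup(markup, "html.parser").prettify())

# lxml is faster but needs `pip3 install lxml`
try:
    print(BeautifulSoup(markup, "lxml").prettify())
except Exception:
    # bs4 raises FeatureNotFound when the requested parser is not installed
    print("lxml is not installed; falling back to html.parser")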

IV. Basic usage

html = '''
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title" name="dormouse"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters;and their names were
<a href="http://example.com/elsle" class="sister" id="link1"><!--Elsle--></a>,
<a href="http://example.com/elsle" class="sister" id="link1">Lacle</a>and
<a href="http://example.com/elsle" class="sister" id="link1">Title</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
'''
from bs4 import BeautifulSoup
soup = BeautifulSoup(html,'lxml')
print(soup.prettify())
print(soup.title.string)

  

V. Tag selectors

The examples in this section use the lxml parser.

1. Selecting elements

html = '''
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title" name="dormouse"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters;and their names were
<a href="http://example.com/elsle" class="sister" id="link1"><!--Elsle--></a>,
<a href="http://example.com/elsle" class="sister" id="link1">Lacle</a>and
<a href="http://example.com/elsle" class="sister" id="link1">Title</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
'''
from bs4 import BeautifulSoup
soup = BeautifulSoup(html,'lxml')
print(soup.title)
print(soup.title.string)
print(type(soup.title))
print(soup.head)
print(soup.p)

2. Getting the tag name

html = '''
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title" name="dormouse"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters;and their names were
<a href="http://example.com/elsle" class="sister" id="link1"><!--Elsle--></a>,
<a href="http://example.com/elsle" class="sister" id="link1">Lacle</a>and
<a href="http://example.com/elsle" class="sister" id="link1">Title</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
'''
from bs4 import BeautifulSoup
soup = BeautifulSoup(html,'lxml')
print(soup.title.name)

3. Getting attributes

html = '''
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title" name="dormouse"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters;and their names were
<a href="http://example.com/elsle" class="sister" id="link1"><!--Elsle--></a>,
<a href="http://example.com/elsle" class="sister" id="link1">Lacle</a>and
<a href="http://example.com/elsle" class="sister" id="link1">Title</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
'''
from bs4 import BeautifulSoup
soup = BeautifulSoup(html,'lxml')
print(soup.p.attrs['name'])
print(soup.p['name'])
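Both forms above raise a KeyError when the attribute does not exist. As a small supplementary note (continuing with the same soup object), Tag.get() returns None instead:

print(soup.p.get('name'))  # 'dormouse'
print(soup.p.get('href'))  # None instead of a KeyError, since <p> has no href attribute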

  

4. Getting text content

html = '''
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title" name="dormouse"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters;and their names were
<a href="http://example.com/elsle" class="sister" id="link1"><!--Elsle--></a>,
<a href="http://example.com/elsle" class="sister" id="link1">Lacle</a>and
<a href="http://example.com/elsle" class="sister" id="link1">Title</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
'''
from bs4 import BeautifulSoup
soup = BeautifulSoup(html,'lxml')
print(soup.p.string)
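A caveat worth adding here (continuing with the same soup): .string only resolves when the tag has a single child. The second <p class="story"> mixes text and <a> tags, so its .string is None, and .get_text() is the safer catch-all:

story = soup.find(class_='story')
print(story.string)      # None -- this <p> has several mixed children
print(story.get_text())  # the concatenated text of everything inside the tag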

  

5. Nested selection

html = '''
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title" name="dormouse"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters;and their names were
<a href="http://example.com/elsle" class="sister" id="link1"><!--Elsle--></a>,
<a href="http://example.com/elsle" class="sister" id="link1">Lacle</a>and
<a href="http://example.com/elsle" class="sister" id="link1">Title</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
'''
from bs4 import BeautifulSoup
soup = BeautifulSoup(html,'lxml')
print(soup.head.title.string)

  

6. Child nodes and descendant nodes

.contents returns a tag's direct children as a list
html = '''
<html><head><title>The Dormouse's story</title></head>
<body> <p class="story">
Once upon a time there were three little sisters;and their names were
<a href="http://example.com/elsle" class="sister" id="link1"><!--Elsle--></a>,
<a href="http://example.com/elsle" class="sister" id="link1">Lacle</a>
and
<a href="http://example.com/elsle" class="sister" id="link1">Title</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
'''
from bs4 import BeautifulSoup
soup = BeautifulSoup(html,'lxml')
print(soup.p.contents)  # .contents returns the direct children as a list

  

.children is a generator over the direct children (the newline text nodes between tags are included), so it has to be iterated
html = '''
<html><head><title>The Dormouse's story</title></head>
<body> <p class="story">Once upon a time there were three little sisters;and their names were
<a href="http://example.com/elsle" class="sister" id="link1"><!--Elsle--></a>,
<a href="http://example.com/elsle" class="sister" id="link1">Lacle</a>and
<a href="http://example.com/elsle" class="sister" id="link1">Title</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
'''
from bs4 import BeautifulSoup
soup = BeautifulSoup(html,'lxml')
print(soup.p.children)  # .children is a generator over the direct children, so it must be iterated
for i, child in enumerate(soup.p.children):
    print(i, child)

  

.descendants is a generator over all descendant nodes, i.e. children, grandchildren and so on
html = '''
<html><head><title>The Dormouse's story</title></head>
<body> <p class="story">Once upon a time there were three little sisters;and their names were
<a href="http://example.com/elsle" class="sister" id="link1"><!--Elsle--></a>,
<a href="http://example.com/elsle" class="sister" id="link1">Lacle</a>and
<a href="http://example.com/elsle" class="sister" id="link1">Title</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
'''
from bs4 import BeautifulSoup
soup = BeautifulSoup(html,'lxml')
print(soup.p.descendants)  # .descendants is a generator over all descendant nodes
for i, child in enumerate(soup.p.descendants):
    print(i, child)

  

7. Parent and ancestor nodes

.parent returns the direct parent node
html = '''
<html><head><title>The Dormouse's story</title></head>
<body> <p class="story">Once upon a time there were three little sisters;and their names were
<a href="http://example.com/elsle" class="sister" id="link1"><!--Elsle--></a>,
<a href="http://example.com/elsle" class="sister" id="link1">Lacle</a>and
<a href="http://example.com/elsle" class="sister" id="link1">Title</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
'''
from bs4 import BeautifulSoup
soup = BeautifulSoup(html,'lxml')
print(soup.a.parent)  # .parent returns the direct parent node

  

.parents is a generator over all ancestor nodes
html = '''
<html><head><title>The Dormouse's story</title></head>
<body> <p class="story">Once upon a time there were three little sisters;and their names were
<a href="http://example.com/elsle" class="sister" id="link1"><!--Elsle--></a>,
<a href="http://example.com/elsle" class="sister" id="link1">Lacle</a>and
<a href="http://example.com/elsle" class="sister" id="link1">Title</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
'''
from bs4 import BeautifulSoup
soup = BeautifulSoup(html,'lxml')
print(list(enumerate(soup.a.parents)))  # .parents yields every ancestor up to the document root

 
8. Sibling nodes

.next_siblings yields the siblings that come after the node
.previous_siblings yields the siblings that come before the node
html = '''
<html><head><title>The Dormouse's story</title></head>
<body> <p class="story">Once upon a time there were three little sisters;and their names were
<a href="http://example.com/elsle" class="sister" id="link1"><!--Elsle--></a>,
<a href="http://example.com/elsle" class="sister" id="link1">Lacle</a>and
<a href="http://example.com/elsle" class="sister" id="link1">Title</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
'''
from bs4 import BeautifulSoup
soup = BeautifulSoup(html,'lxml')
print(list(enumerate(soup.a.next_siblings)))  # .next_siblings: the siblings after the node
print(list(enumerate(soup.a.previous_siblings)))  # .previous_siblings: the siblings before the node

  

Standard selectors

1. find_all(name, attrs, recursive, text, **kwargs)

Searches the document by tag name, attributes, or text content.

html = '''
<div class="panel">
<div class="panel-heading">
<h4>Hello</h4>
</div>
<div class="panel-body">
<ul class="list" id="list-1">
<li class="element">Foo</li>
<li class="element">Bar</li>
<li class="element">Jay</li>
</ul>
<ul class="list" id="list-2">
<li class="element">Foo</li>
<li class="element">Bar</li>
<li class="element">Jay</li>
</ul>
</div>
</div>
'''
from bs4 import BeautifulSoup
soup = BeautifulSoup(html,'lxml')
print(soup.find_all('ul'))
print(type(soup.find_all('ul')[0]))

  

html = '''
<div class="panel">
<div class="panel-heading">
<h4>Hello</h4>
</div>
<div class="panel-body">
<ul class="list" id="list-1">
<li class="element">Foo</li>
<li class="element">Bar</li>
<li class="element">Jay</li>
</ul>
<ul class="list" id="list-2">
<li class="element">Foo</li>
<li class="element">Bar</li>
<li class="element">Jay</li>
</ul>
</div>
</div>
'''
from bs4 import BeautifulSoup
soup = BeautifulSoup(html,'lxml')
for ul in soup.find_all('ul'):
    print(ul.find_all('li'))

  

attrs

html = '''
<div class="panel">
<div class="panel-heading">
<h4>Hello</h4>
</div>
<div class="panel-body">
<ul class="list" id="list-1">
<li class="element">Foo</li>
<li class="element">Bar</li>
<li class="element">Jay</li>
</ul>
<ul class="list" id="list-2">
<li class="element">Foo</li>
<li class="element">Bar</li>
<li class="element">Jay</li>
</ul>
</div>
</div>
'''
from bs4 import BeautifulSoup
soup = BeautifulSoup(html,'lxml')
print(soup.find_all(attrs={'id': 'list-1'}))     # match tags by an attribute dictionary
print(soup.find_all(attrs={'class': 'element'}))
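find_all() also accepts attributes directly as keyword arguments, which is what the **kwargs part of the signature covers; because class is a reserved word in Python, bs4 spells it class_. Continuing with the same soup:

print(soup.find_all(id='list-1'))       # same attribute query written as a keyword argument
print(soup.find_all(class_='element'))  # class_ because class is a Python keyword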

  

text

html = '''
<div class="panel">
<div class="panel-heading">
<h4>Hello</h4>
</div>
<div class="panel-body">
<ul class="list" id="list-1" name="elements">
<li class="element">Foo</li>
<li class="element">Bar</li>
<li class="element">Jay</li>
</ul>
<ul class="list" id="list-2">
<li class="element">Foo</li>
<li class="element">Bar</li>
<li class="element">Jay</li>
</ul>
</div>
</div>
'''
from bs4 import BeautifulSoup
soup = BeautifulSoup(html,'lxml')
print(soup.find_all(text='Foo'))  # the text argument matches text content and returns the strings themselves, not tags
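In bs4 4.4 and later the same filter is also spelled string (text still works as an alias); continuing with the same soup:

print(soup.find_all(string='Foo'))  # ['Foo', 'Foo'] -- the matching strings, not the tags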

  

2. find(name, attrs, recursive, text, **kwargs)

find returns the first matching element (or None), while find_all returns all matching elements.

html = '''
<div class="panel">
<div class="panel-heading">
<h4>Hello</h4>
</div>
<div class="panel-body">
<ul class="list" id="list-1" name="elements">
<li class="element">Foo</li>
<li class="element">Bar</li>
<li class="element">Jay</li>
</ul>
<ul class="list" id="list-2">
<li class="element">Foo</li>
<li class="element">Bar</li>
<li class="element">Jay</li>
</ul>
</div>
</div>
'''
from bs4 import BeautifulSoup
soup = BeautifulSoup(html,'lxml')
print(soup.find('ul'))
print(type(soup.find('ul')))
print(soup.find('page'))

 

3. Other find_* methods

find_parents() and find_parent()

find_parents() returns all ancestor nodes; find_parent() returns the direct parent node.

find_next_siblings() and find_next_sibling()

find_next_siblings() returns all following sibling nodes; find_next_sibling() returns the first following sibling node.

find_previous_siblings() and find_previous_sibling()

find_previous_siblings() returns all preceding sibling nodes; find_previous_sibling() returns the first preceding sibling node.

find_all_next() and find_next()

find_all_next() returns all nodes after the current node that match the criteria; find_next() returns the first such node.

find_all_previous() and find_previous()

find_all_previous() returns all nodes before the current node that match the criteria; find_previous() returns the first such node.
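The original post does not include code for these methods, so the following is a minimal sketch of how they behave, reusing the list markup from the earlier examples:

from bs4 import BeautifulSoup

html = '''
<div class="panel">
<div class="panel-body">
<ul class="list" id="list-1">
<li class="element">Foo</li>
<li class="element">Bar</li>
</ul>
<ul class="list" id="list-2">
<li class="element">Jay</li>
</ul>
</div>
</div>
'''
soup = BeautifulSoup(html, 'lxml')
li = soup.find('li')  # the "Foo" item

print(li.find_parent('ul')['id'])                     # nearest enclosing <ul>: list-1
print([p['class'] for p in li.find_parents('div')])   # all <div> ancestors: [['panel-body'], ['panel']]
print(li.find_next_sibling('li'))                     # <li class="element">Bar</li>
print(li.find_next_siblings('li'))                    # every following sibling <li> in the same <ul>
print(li.find_all_next('li'))                         # every <li> after this node, sibling or not
print(li.find_previous('ul'))                         # the <ul> that precedes this node in document order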

 

CSS selectors

Pass a CSS selector string directly to select() to pick out elements.

html = '''
<div class="panel">
<div class="panel-heading">
<h4>Hello</h4>
</div>
<div class="panel-body">
<ul class="list" id="list-1" name="elements">
<li class="element">Foo</li>
<li class="element">Bar</li>
<li class="element">Jay</li>
</ul>
<ul class="list" id="list-2">
<li class="element">Foo</li>
<li class="element">Bar</li>
<li class="element">Jay</li>
</ul>
</div>
</div>
'''
from bs4 import BeautifulSoup
soup = BeautifulSoup(html,'lxml')
print(soup.select('.panel .panel-heading'))  # the leading dot selects by class
print(soup.select('ul li'))  # 'ul li' selects li tags nested inside ul tags
print(soup.select('#list-2 .element'))
print(type(soup.select('ul')[0]))
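select() always returns a list; when only the first match is wanted, select_one() (available since bs4 4.4) is a handy shorthand. Continuing with the same soup:

print(soup.select_one('#list-2 .element'))  # the first matching element, or None if nothing matches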

  

1. Getting attributes

html = '''
<div class="panel">
<div class="panel-heading">
<h4>Hello</h4>
</div>
<div class="panel-body">
<ul class="list" id="list-1" name="elements">
<li class="element">Foo</li>
<li class="element">Bar</li>
<li class="element">Jay</li>
</ul>
<ul class="list" id="list-2">
<li class="element">Foo</li>
<li class="element">Bar</li>
<li class="element">Jay</li>
</ul>
</div>
</div>
'''
from bs4 import BeautifulSoup
soup = BeautifulSoup(html,'lxml')
for ul in soup.select('ul'):
    print(ul['id'])
    print(ul.attrs['id'])

  

2. Getting text content

html = '''
<div class="panel">
<div class="panel-heading">
<h4>Hello</h4>
</div>
<div class="panel-body">
<ul class="list" id="list-1" name="elements">
<li class="element">Foo</li>
<li class="element">Bar</li>
<li class="element">Jay</li>
</ul>
<ul class="list" id="list-2">
<li class="element">Foo</li>
<li class="element">Bar</li>
<li class="element">Jay</li>
</ul>
</div>
</div>
'''
from bs4 import BeautifulSoup
soup = BeautifulSoup(html,'lxml')
for li in soup.select('li'):
    print(li.get_text())

  

Summary

  • The lxml parser is recommended; fall back to html.parser when lxml is unavailable
  • Tag-attribute selection (soup.tag) offers weak filtering but is fast
  • Use find() / find_all() to query for a single result or for multiple results
  • If you are comfortable with CSS selectors, use select()
