Python Web Scraping: The BeautifulSoup Library
1. BeautifulSoup
1.1 Parsers
1) Python standard library
- # Usage
- BeautifulSoup(markup, "html.parser")
- # Advantages
- Built into Python's standard library; moderate speed; decent tolerance of malformed documents
- # Disadvantages
- Poor error tolerance in versions before Python 2.7.3 and Python 3.2.2
2) lxml HTML parser
- The lxml parser is the right choice in the vast majority of cases
- # Usage
- BeautifulSoup(markup, "lxml")
- # Advantages
- Fast; strong tolerance of malformed documents
- # Disadvantages
- Requires installing a C library
3) lxml XML parser
- # Usage
- BeautifulSoup(markup, "xml")
- # Advantages
- Fast; the only parser that supports XML
- # Disadvantages
- Requires installing a C library
4) html5lib
- # Usage
- BeautifulSoup(markup, "html5lib")
- # Advantages
- Best error tolerance; parses documents the way a browser does and produces valid HTML5
- # Disadvantages
- Very slow; depends on an external Python package
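Since the recommended lxml parser needs an extra package, a small sketch of a helper that prefers lxml but falls back to the built-in parser when lxml is unavailable (the helper name is made up for illustration):

```python
from bs4 import BeautifulSoup, FeatureNotFound

def make_soup(markup):
    # Prefer the fast lxml parser; fall back to the stdlib
    # html.parser if the lxml package is not installed.
    try:
        return BeautifulSoup(markup, 'lxml')
    except FeatureNotFound:
        return BeautifulSoup(markup, 'html.parser')

soup = make_soup('<p>hello')
print(soup.p.string)  # hello; both parsers repair the unclosed tag
```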
1.2 Basic usage
- html = """
- <html><head><title>The Dormouse's story</title></head>
- <body>
- <p class="title" name="dromouse"><b>The Dormouse's story</b></p>
- <p class="story">Once upon a time there were three little sisters; and their names were
- <a href="http://example.com/elsie" class="sister" id="link1"><!-- Elsie --></a>,
- <a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
- <a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
- and they lived at the bottom of a well.</p>
- <p class="story">...</p>
- """
- from bs4 import BeautifulSoup
- soup = BeautifulSoup(html, 'lxml') # use the lxml parser
- print(soup.prettify()) # pretty-print the markup; missing tags are filled in as part of error recovery
- print(soup.title.string) # grab the title tag and print its text content
2. Tag selectors
2.1 Selecting elements
A tag can be selected directly via the .tagname attribute syntax
- html = """
- <html><head><title>The Dormouse's story</title></head>
- <body>
- <p class="title" name="dromouse"><b>The Dormouse's story</b></p>
- <p class="story">Once upon a time there were three little sisters; and their names were
- <a href="http://example.com/elsie" class="sister" id="link1"><!-- Elsie --></a>,
- <a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
- <a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
- and they lived at the bottom of a well.</p>
- <p class="story">...</p>
- """
- from bs4 import BeautifulSoup
- soup = BeautifulSoup(html, 'lxml')
- print(soup.title) # select the title tag; prints: <title>The Dormouse's story</title>
- print(type(soup.title)) # type: <class 'bs4.element.Tag'>
- print(soup.head)
- print(soup.p) # when several tags match, only the first is returned
2.2 Getting the tag name
Get a tag's name, e.g. whether it is a p tag or an a tag
- html = """
- <html><head><title>The Dormouse's story</title></head>
- <body>
- <p class="title" name="dromouse"><b>The Dormouse's story</b></p>
- <p class="story">Once upon a time there were three little sisters; and their names were
- <a href="http://example.com/elsie" class="sister" id="link1"><!-- Elsie --></a>,
- <a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
- <a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
- and they lived at the bottom of a well.</p>
- <p class="story">...</p>
- """
- from bs4 import BeautifulSoup
- soup = BeautifulSoup(html, 'lxml')
- print(soup.title.name) # get the tag's name
2.3 Getting attributes
The value of a tag's name attribute can be read via attrs["name"] or tag["name"]
- html = """
- <html><head><title>The Dormouse's story</title></head>
- <body>
- <p class="title" name="dromouse"><b>The Dormouse's story</b></p>
- <p class="story">Once upon a time there were three little sisters; and their names were
- <a href="http://example.com/elsie" class="sister" id="link1"><!-- Elsie --></a>,
- <a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
- <a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
- and they lived at the bottom of a well.</p>
- <p class="story">...</p>
- """
- from bs4 import BeautifulSoup
- soup = BeautifulSoup(html, 'lxml')
- print(soup.p.attrs['name']) # get the value of the p tag's name attribute
- print(soup.p['name']) # equivalent shorthand
2.4 Getting content
A tag's text content can be read via tag.string
- html = """
- <html><head><title>The Dormouse's story</title></head>
- <body>
- <p class="title" name="dromouse"><b>The Dormouse's story</b></p>
- <p class="story">Once upon a time there were three little sisters; and their names were
- <a href="http://example.com/elsie" class="sister" id="link1"><!-- Elsie --></a>,
- <a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
- <a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
- and they lived at the bottom of a well.</p>
- <p class="story">...</p>
- """
- from bs4 import BeautifulSoup
- soup = BeautifulSoup(html, 'lxml')
- print(soup.p.string) # get the p tag's text content (the string only): The Dormouse's story
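One caveat worth knowing: .string only returns a value when the tag has exactly one string child; for mixed content it returns None, and get_text() is the alternative. A small sketch (the markup below is invented for illustration):

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup("<p>one <b>two</b></p>", 'html.parser')
p = soup.p
print(p.string)      # None, because <p> has more than one child node
print(p.get_text())  # one two, concatenating all text descendants
```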
2.5 Nested selection
Selections can be chained with the dot (.) operator
- html = """
- <html><head><title>The Dormouse's story</title></head>
- <body>
- <p class="title" name="dromouse"><b>The Dormouse's story</b></p>
- <p class="story">Once upon a time there were three little sisters; and their names were
- <a href="http://example.com/elsie" class="sister" id="link1"><!-- Elsie --></a>,
- <a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
- <a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
- and they lived at the bottom of a well.</p>
- <p class="story">...</p>
- """
- from bs4 import BeautifulSoup
- soup = BeautifulSoup(html, 'lxml')
- print(soup.head.title.string) # get the text of the title tag inside head
2.6 Child and descendant nodes
1) Child nodes
- tag.contents returns all of a tag's child nodes, stored in a list
- html = """
- <html>
- <head>
- <title>The Dormouse's story</title>
- </head>
- <body>
- <p class="story">
- Once upon a time there were three little sisters; and their names were
- <a href="http://example.com/elsie" class="sister" id="link1">
- <span>Elsie</span>
- </a>
- <a href="http://example.com/lacie" class="sister" id="link2">Lacie</a>
- and
- <a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>
- and they lived at the bottom of a well.
- </p>
- <p class="story">...</p>
- """
- from bs4 import BeautifulSoup
- soup = BeautifulSoup(html, 'lxml')
- print(soup.p.contents) # all child nodes of the p tag, as a list
- tag.children returns all of a tag's child nodes as an iterator
- html = """
- <html>
- <head>
- <title>The Dormouse's story</title>
- </head>
- <body>
- <p class="story">
- Once upon a time there were three little sisters; and their names were
- <a href="http://example.com/elsie" class="sister" id="link1">
- <span>Elsie</span>
- </a>
- <a href="http://example.com/lacie" class="sister" id="link2">Lacie</a>
- and
- <a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>
- and they lived at the bottom of a well.
- </p>
- <p class="story">...</p>
- """
- from bs4 import BeautifulSoup
- soup = BeautifulSoup(html, 'lxml')
- print(soup.p.children) # all child nodes of the p tag, as an iterator
- for i, child in enumerate(soup.p.children):
- print(i, child)
2) Descendant nodes
- tag.descendants returns all of a tag's descendant nodes as an iterator
- html = """
- <html>
- <head>
- <title>The Dormouse's story</title>
- </head>
- <body>
- <p class="story">
- Once upon a time there were three little sisters; and their names were
- <a href="http://example.com/elsie" class="sister" id="link1">
- <span>Elsie</span>
- </a>
- <a href="http://example.com/lacie" class="sister" id="link2">Lacie</a>
- and
- <a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>
- and they lived at the bottom of a well.
- </p>
- <p class="story">...</p>
- """
- from bs4 import BeautifulSoup
- soup = BeautifulSoup(html, 'lxml')
- print(soup.p.descendants) # all descendant nodes of the p tag, as an iterator
- for i, child in enumerate(soup.p.descendants):
- print(i, child)
2.7 Parent and ancestor nodes
1) Parent node
- tag.parent returns a tag's parent node
- html = """
- <html>
- <head>
- <title>The Dormouse's story</title>
- </head>
- <body>
- <p class="story">
- Once upon a time there were three little sisters; and their names were
- <a href="http://example.com/elsie" class="sister" id="link1">
- <span>Elsie</span>
- </a>
- <a href="http://example.com/lacie" class="sister" id="link2">Lacie</a>
- and
- <a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>
- and they lived at the bottom of a well.
- </p>
- <p class="story">...</p>
- """
- from bs4 import BeautifulSoup
- soup = BeautifulSoup(html, 'lxml')
- print(soup.a.parent) # get the a tag's parent node
2) Ancestor nodes
- tag.parents returns all of a tag's ancestor nodes
- html = """
- <html>
- <head>
- <title>The Dormouse's story</title>
- </head>
- <body>
- <p class="story">
- Once upon a time there were three little sisters; and their names were
- <a href="http://example.com/elsie" class="sister" id="link1">
- <span>Elsie</span>
- </a>
- <a href="http://example.com/lacie" class="sister" id="link2">Lacie</a>
- and
- <a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>
- and they lived at the bottom of a well.
- </p>
- <p class="story">...</p>
- """
- from bs4 import BeautifulSoup
- soup = BeautifulSoup(html, 'lxml')
- print(list(enumerate(soup.a.parents))) # all ancestor nodes of the a tag
2.8 Sibling nodes
- tag.next_siblings returns all siblings that come after the tag
- tag.previous_siblings returns all siblings that come before the tag
- html = """
- <html>
- <head>
- <title>The Dormouse's story</title>
- </head>
- <body>
- <p class="story">
- Once upon a time there were three little sisters; and their names were
- <a href="http://example.com/elsie" class="sister" id="link1">
- <span>Elsie</span>
- </a>
- <a href="http://example.com/lacie" class="sister" id="link2">Lacie</a>
- and
- <a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>
- and they lived at the bottom of a well.
- </p>
- <p class="story">...</p>
- """
- from bs4 import BeautifulSoup
- soup = BeautifulSoup(html, 'lxml')
- print(list(enumerate(soup.a.next_siblings))) # all siblings after the a tag
- print(list(enumerate(soup.a.previous_siblings))) # all siblings before the a tag
3. Standard selectors
3.1 find_all()
- Signature: find_all(name, attrs, recursive, text, **kwargs) (newer bs4 versions also accept string as an alias for text)
1) name
- Select tags by tag name
- html='''
- <div class="panel">
- <div class="panel-heading">
- <h4>Hello</h4>
- </div>
- <div class="panel-body">
- <ul class="list" id="list-1">
- <li class="element">Foo</li>
- <li class="element">Bar</li>
- <li class="element">Jay</li>
- </ul>
- <ul class="list list-small" id="list-2">
- <li class="element">Foo</li>
- <li class="element">Bar</li>
- </ul>
- </div>
- </div>
- '''
- from bs4 import BeautifulSoup
- soup1 = BeautifulSoup(html, 'lxml')
- print(soup1.find_all('ul')) # find every match and return them as a list
- print(type(soup1.find_all('ul')[0]))
- soup2 = BeautifulSoup(html, 'lxml')
- for ul in soup2.find_all('ul'):
- print(ul.find_all('li'))
2) attrs
- Select tags by their attributes
- html='''
- <div class="panel">
- <div class="panel-heading">
- <h4>Hello</h4>
- </div>
- <div class="panel-body">
- <ul class="list" id="list-1" name="elements">
- <li class="element">Foo</li>
- <li class="element">Bar</li>
- <li class="element">Jay</li>
- </ul>
- <ul class="list list-small" id="list-2">
- <li class="element">Foo</li>
- <li class="element">Bar</li>
- </ul>
- </div>
- </div>
- '''
- from bs4 import BeautifulSoup
- soup = BeautifulSoup(html, 'lxml')
- print(soup.find_all(attrs={'id': 'list-1'})) # all tags whose id attribute equals list-1
- print(soup.find_all(attrs={'name': 'elements'}))
- soup2 = BeautifulSoup(html, 'lxml')
- print(soup2.find_all(id='list-1')) # the same query as a keyword argument, no dict needed
- print(soup2.find_all(class_='element')) # attributes that clash with Python keywords get a trailing underscore, e.g. class_
3) text
- Select by text content
- html='''
- <div class="panel">
- <div class="panel-heading">
- <h4>Hello</h4>
- </div>
- <div class="panel-body">
- <ul class="list" id="list-1">
- <li class="element">Foo</li>
- <li class="element">Bar</li>
- <li class="element">Jay</li>
- </ul>
- <ul class="list list-small" id="list-2">
- <li class="element">Foo</li>
- <li class="element">Bar</li>
- </ul>
- </div>
- </div>
- '''
- from bs4 import BeautifulSoup
- soup = BeautifulSoup(html, 'lxml')
- print(soup.find_all(text='Foo')) # returns the matching text strings themselves, not the tags that contain them
3.2 find()
- find() returns the first matching element; find_all() returns all of them
- html='''
- <div class="panel">
- <div class="panel-heading">
- <h4>Hello</h4>
- </div>
- <div class="panel-body">
- <ul class="list" id="list-1">
- <li class="element">Foo</li>
- <li class="element">Bar</li>
- <li class="element">Jay</li>
- </ul>
- <ul class="list list-small" id="list-2">
- <li class="element">Foo</li>
- <li class="element">Bar</li>
- </ul>
- </div>
- </div>
- '''
- from bs4 import BeautifulSoup
- soup = BeautifulSoup(html, 'lxml')
- print(soup.find('ul')) # the first ul tag
- print(type(soup.find('ul')))
- print(soup.find('page')) # no match, so find() returns None
3.3 find_parents() and find_parent()
find_parents() returns all ancestor nodes; find_parent() returns the direct parent node.
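A minimal sketch of both methods (the markup below is invented for illustration):

```python
from bs4 import BeautifulSoup

html = '<div id="outer"><p id="inner"><b>text</b></p></div>'
soup = BeautifulSoup(html, 'html.parser')
b = soup.b
# find_parent() stops at the nearest matching ancestor
print(b.find_parent('p')['id'])            # inner
# find_parents() keeps walking up toward the document root
print([t.name for t in b.find_parents()])  # ['p', 'div', '[document]']
```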
3.4 find_next_siblings() and find_next_sibling()
find_next_siblings() returns all following siblings; find_next_sibling() returns the first following sibling.
3.5 find_previous_siblings() and find_previous_sibling()
find_previous_siblings() returns all preceding siblings; find_previous_sibling() returns the first preceding sibling.
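The sibling methods can be sketched the same way (markup invented for illustration; there is deliberately no whitespace between the li tags, so the only siblings are elements):

```python
from bs4 import BeautifulSoup

html = '<ul><li id="a">A</li><li id="b">B</li><li id="c">C</li></ul>'
soup = BeautifulSoup(html, 'html.parser')
b = soup.find(id='b')
print(b.find_next_sibling('li')['id'])      # c
print(b.find_previous_sibling('li')['id'])  # a
# the plural forms return every matching sibling in that direction
print([li['id'] for li in b.find_next_siblings('li')])      # ['c']
print([li['id'] for li in b.find_previous_siblings('li')])  # ['a']
```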
3.6 find_all_next() and find_next()
find_all_next() returns all matching nodes after the current node; find_next() returns the first one.
3.7 find_all_previous() and find_previous()
find_all_previous() returns all matching nodes before the current node; find_previous() returns the first one.
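Unlike the sibling methods, these ignore tree structure and simply follow document order; a small sketch (markup invented for illustration):

```python
from bs4 import BeautifulSoup

html = '<h1>Title</h1><p>one</p><p>two</p>'
soup = BeautifulSoup(html, 'html.parser')
# every <p> parsed after the <h1>, in document order
print([p.get_text() for p in soup.h1.find_all_next('p')])  # ['one', 'two']
# the nearest <p> parsed before the second paragraph
second = soup.find_all('p')[1]
print(second.find_previous('p').get_text())  # one
```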
4. CSS selectors
4.1 Basic usage of CSS selectors
Pass a CSS selector string straight to select() to make a selection
- html='''
- <div class="panel">
- <div class="panel-heading">
- <h4>Hello</h4>
- </div>
- <div class="panel-body">
- <ul class="list" id="list-1">
- <li class="element">Foo</li>
- <li class="element">Bar</li>
- <li class="element">Jay</li>
- </ul>
- <ul class="list list-small" id="list-2">
- <li class="element">Foo</li>
- <li class="element">Bar</li>
- </ul>
- </div>
- </div>
- '''
- from bs4 import BeautifulSoup
- soup = BeautifulSoup(html, 'lxml')
- print(soup.select('.panel .panel-heading')) # class selectors (class=xxx); the space in between makes this a descendant selector
- print(soup.select('ul li')) # type selectors: li tags inside ul tags
- print(soup.select('#list-2 .element')) # an id selector (id=xxx) combined with a class selector
- print(type(soup.select('ul')[0]))
- soup2 = BeautifulSoup(html, 'lxml')
- for ul in soup2.select('ul'):
- print(ul.select('li'))
4.2 Getting attributes
- TAG['id']
- TAG.attrs['id']
- html='''
- <div class="panel">
- <div class="panel-heading">
- <h4>Hello</h4>
- </div>
- <div class="panel-body">
- <ul class="list" id="list-1">
- <li class="element">Foo</li>
- <li class="element">Bar</li>
- <li class="element">Jay</li>
- </ul>
- <ul class="list list-small" id="list-2">
- <li class="element">Foo</li>
- <li class="element">Bar</li>
- </ul>
- </div>
- </div>
- '''
- from bs4 import BeautifulSoup
- soup = BeautifulSoup(html, 'lxml')
- for ul in soup.select('ul'):
- print(ul['id']) # get the value of the ul tag's id attribute
- print(ul.attrs['id']) # the two spellings are equivalent
4.3 Getting content
- TAG.get_text()
- html='''
- <div class="panel">
- <div class="panel-heading">
- <h4>Hello</h4>
- </div>
- <div class="panel-body">
- <ul class="list" id="list-1">
- <li class="element">Foo</li>
- <li class="element">Bar</li>
- <li class="element">Jay</li>
- </ul>
- <ul class="list list-small" id="list-2">
- <li class="element">Foo</li>
- <li class="element">Bar</li>
- </ul>
- </div>
- </div>
- '''
- from bs4 import BeautifulSoup
- soup = BeautifulSoup(html, 'lxml')
- for li in soup.select('li'):
- print(li.get_text()) # get the tag's text content
5. Summary
- Prefer the lxml parser; fall back to html.parser when necessary
- Tag-attribute selection is fast, but its filtering capability is weak
- Use find() and find_all() to match a single result or multiple results
- If you are comfortable with CSS selectors, select() is recommended
- Memorize the common methods for getting attribute values and text
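To tie these recommendations together, a short sketch (markup invented for illustration) reaching the same elements via find_all() and via select():

```python
from bs4 import BeautifulSoup

html = '<ul id="menu"><li class="item">Home</li><li class="item">About</li></ul>'
soup = BeautifulSoup(html, 'html.parser')

via_find = soup.find_all('li', class_='item')  # standard selector
via_css = soup.select('#menu .item')           # CSS selector
print([li.get_text() for li in via_find])  # ['Home', 'About']
print([li.get_text() for li in via_css])   # ['Home', 'About']
```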