Scraping data from web pages in R with the XML package
In recent years a lot of data has been released publicly in different formats, but sometimes the data we're interested in is still buried inside the HTML of a web page: let's see how to get it out.
One of the existing packages for doing this job is the XML package. This package allows us to read and create XML and HTML documents; among its many features there's a function called readHTMLTable() that analyzes the parsed HTML and returns the tables present in the page. The details are available in the official documentation of the package.
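To follow along, the package has to be installed and loaded first; a minimal setup, assuming the XML package is still available from your CRAN mirror:
# install the XML package once (from CRAN), then load it for the session
install.packages("XML")
library(XML)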
Let's start.
Suppose we're interested in the Italian demographic data present in this page http://sdw.ecb.europa.eu/browse.do?node=2120803 from the ECB website. We start by loading and parsing the page:
page <- "http://sdw.ecb.europa.eu/browse.do?node=2120803"
parsed <- htmlParse(page)
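Depending on the build of the XML package, htmlParse() may not be able to fetch the URL by itself (for example over HTTPS). A common workaround, sketched here as an assumption rather than something the original walkthrough needs, is to download the raw HTML first and hand it to the parser as text:
# if htmlParse() can't fetch the page directly, read the raw HTML ourselves
raw_html <- paste(readLines(page, warn=FALSE), collapse="\n")
# and parse it as text rather than as a URL
parsed <- htmlParse(raw_html, asText=TRUE)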
Now that we have the parsed HTML, we can use the readHTMLTable() function to get a list with all the tables present in the page; we'll call the function with these parameters:
- parsed: the parsed HTML
- skip.rows: the rows we want to skip (at the beginning of this table there are a couple of rows that don't contain data but just formatting elements)
- colClasses: the data type of each column of the table (in our case all the columns hold integer values); the rep() function is used to repeat the "integer" value 31 times
table <- readHTMLTable(parsed, skip.rows=c(1,3,4,5), colClasses = c(rep("integer", 31)))
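Before picking a table it can be useful to take a quick look at what readHTMLTable() returned; this is just an optional sanity check, not part of the original walkthrough:
# how many tables were found on the page
length(table)
# dimensions of each extracted table (tables that couldn't be parsed show up as NULL)
lapply(table, dim)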
As we can see from the page source code, this web page contains several HTML tables; the one that contains the data we're interested in is the fifth, so we extract it from the list of tables as a data frame:
values <- as.data.frame(table[5])
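A side note on the indexing, not from the original post: table[5] returns a one-element list that as.data.frame() then converts, while the double-bracket form picks the data frame out directly; either works here, since the columns are renamed by position anyway.
# equivalent, slightly more direct extraction of the fifth table
values <- table[[5]]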
Just for convenience, we rename the columns that hold the period and the Italian data:
# renames the columns for the period and Italy
colnames(values)[1] <- 'Period'
colnames(values)[19] <- 'Italy'
The Italian data runs from 1990 to 2014, so we have to keep only those rows and, of course, only the two columns holding the period and the Italian data:
# subsets the data: we are interested only in the first and
# the 19th column (Period and Italy)
ids <- values[c(1,19)]
# Italy has only 25 years of data, so we cut away the other rows
ids <- as.data.frame(ids[1:25,])
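Hard-coding 25 rows works for this particular page; a slightly more defensive variant (an assumption on my part — it relies on the missing years coming through as NA, which is what colClasses="integer" produces for empty cells) would be:
# keep only the rows where the Italy column actually has a value
ids <- ids[!is.na(ids$Italy), ]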
Now we can plot these data calling the plot function with these parameters:
- ids: the data to plot
- xlab: the label of the X axis
- ylab: the label of the Y axis
- main: the title of the plot
- pch: the symbol to draw for every point (19 is a solid circle; see ?points for an overview of the available symbols)
- cex: the size of the symbol
plot(ids, xlab="Year", ylab="Population in thousands", main="Population 1990-2014", pch=19, cex=0.5)
The result is a scatter plot of the Italian population, in thousands, from 1990 to 2014.
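If you want to keep the chart as an image instead of drawing it on the screen device, base R's png() device does the job; the file name below is just an example, not anything from the original post:
# hypothetical file name: write the same plot to a PNG file
png("italy_population_1990_2014.png", width=800, height=600)
plot(ids, xlab="Year", ylab="Population in thousands", main="Population 1990-2014", pch=19, cex=0.5)
dev.off()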
Here's the full code, also available on my GitHub:
library(XML)

# sets the URL
url <- "http://sdw.ecb.europa.eu/browse.do?node=2120803"

# lets the XML library parse the HTML of the page
parsed <- htmlParse(url)

# reads the HTML tables present inside the page, paying attention
# to the data types contained in the HTML table
table <- readHTMLTable(parsed, skip.rows=c(1,3,4,5), colClasses = c(rep("integer", 31)))

# this web page contains several HTML tables, but the one that contains the data
# is the fifth
values <- as.data.frame(table[5])

# renames the columns for the period and Italy
colnames(values)[1] <- 'Period'
colnames(values)[19] <- 'Italy'

# now subsets the data: we are interested only in the first and
# the 19th column (Period and Italy)
ids <- values[c(1,19)]

# Italy has only 25 years of data, so we cut away the other rows
ids <- as.data.frame(ids[1:25,])

# plots the data
plot(ids, xlab="Year", ylab="Population in thousands", main="Population 1990-2014", pch=19, cex=0.5)

From: http://andreaiacono.blogspot.com/2014/01/scraping-data-from-web-pages-in-r-with.html