Accessing data in Hadoop using dplyr and SQL
If your primary objective is to query your data in Hadoop to browse, manipulate, and extract it into R, then you probably want to use SQL. You can write SQL code explicitly to interact with Hadoop, or you can write SQL code implicitly with dplyr. The dplyr package has a generalized backend for data sources that translates your R code into SQL. You can use RStudio and dplyr to work with several of the most popular software packages in the Hadoop ecosystem, including Hive, Impala, HBase, and Spark.
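For example, dbplyr (the database backend behind dplyr) can show you the SQL it would generate for a particular backend without touching a cluster. The following is a minimal sketch, assuming a recent dbplyr that provides translate_sql() and the simulate_hive() stand-in connection:
library(dbplyr)

# Preview the Hive SQL that dplyr would generate for an R expression;
# simulate_hive() is a stand-in connection, so no cluster is required
translate_sql(x > 10 & !is.na(y), con = simulate_hive())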
There are two methods for accessing data in Hadoop using dplyr and SQL.
ODBC
You can connect R and RStudio to Hadoop with an ODBC connection. This effectively treats Hadoop like any other data source (i.e., as if Hadoop were a relational database). You will need a data-source-specific driver (e.g., Hive, Impala, HBase) installed on your desktop or your server. You will also need a few R packages. We recommend using these R packages: DBI, dplyr, and odbc. Note that the dplyr package may also reference the dbplyr package to help translate R into specific variants of SQL. You can use the odbc package to create a connection with Hadoop and run queries:
library(DBI)     # dbConnect(), dbGetQuery(), dbDisconnect()
library(dplyr)   # tbl()
library(odbc)

# Replace the <...> placeholders with the values for your driver and cluster
con <- dbConnect(odbc::odbc(),
                 driver = <driver>,
                 host = <host>,
                 dbname = <dbname>,
                 user = <user>,
                 password = <password>,
                 port = 10000)
tbl(con, "mytable")                          # dplyr
dbGetQuery(con, "SELECT * FROM mytable")     # SQL
dbDisconnect(con)
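With a connection in place, dplyr builds queries lazily: verbs such as filter() and count() are translated to SQL and only execute in Hadoop when you request the result. The following is a minimal sketch, assuming an open con connection created as above; the columns year and category are hypothetical placeholders for columns in your own table:
library(dplyr)

mytable <- tbl(con, "mytable")   # a lazy reference; no data is pulled into R yet

result <- mytable %>%
  filter(year == 2017) %>%       # hypothetical column
  count(category)                # hypothetical column

show_query(result)               # inspect the SQL that dplyr generated
collect(result)                  # run the query in Hadoop and return a data frame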
Spark
If you are running Spark on Hadoop, you may also elect to use the sparklyr package to access your data in HDFS. Spark is a general engine for large-scale data processing, and it supports SQL. The sparklyr package communicates with the Spark API to run SQL queries, and it also has a dplyr backend. You can use sparklyr to create a connection with Spark and run queries:
library(sparklyr)
library(dplyr)   # tbl()
library(DBI)     # dbGetQuery()

con <- spark_connect(master = "yarn-client")   # connect to Spark on YARN
tbl(con, "mytable")                            # dplyr
dbGetQuery(con, "SELECT * FROM mytable")       # SQL
spark_disconnect(con)
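sparklyr can also move a small local data frame into Spark with copy_to(), after which the same table is visible to both the dplyr and SQL interfaces. The following is a minimal sketch, assuming a working yarn-client connection; iris is just a built-in example data set and iris_spark is a hypothetical table name:
library(sparklyr)
library(DBI)

sc <- spark_connect(master = "yarn-client")

# Copy a local data frame into Spark; it is registered as a temporary table
iris_tbl <- copy_to(sc, iris, "iris_spark", overwrite = TRUE)

# Query the same table through the DBI/SQL interface
dbGetQuery(sc, "SELECT Species, COUNT(*) AS n FROM iris_spark GROUP BY Species")

spark_disconnect(sc)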
Reposted from: https://support.rstudio.com/hc/en-us/articles/115008241668-Accessing-data-in-Hadoop-using-dplyr-and-SQL