Preface

This post walks through some of the differences between the HBase 1.2.6 API and the API after the upgrade to version 2.1.0.
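Before diving into the utility class, here is a minimal side-by-side sketch of the change you will run into most often: table creation. The 1.2.6-era HTableDescriptor and HColumnDescriptor classes are deprecated in the 2.x client in favor of immutable descriptors produced by TableDescriptorBuilder and ColumnFamilyDescriptorBuilder. The table name "demo" and family "cf" below are placeholders, and the snippet assumes an Admin obtained from an already open Connection.

import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

import java.io.IOException;

public class ApiComparison {
    // 1.2.6 style: mutable descriptor classes
    static void createTableOldApi(Admin admin) throws IOException {
        HTableDescriptor tableDesc = new HTableDescriptor(TableName.valueOf("demo"));
        tableDesc.addFamily(new HColumnDescriptor("cf"));
        admin.createTable(tableDesc);
    }

    // 2.1.0 style: immutable descriptors assembled with builders
    static void createTableNewApi(Admin admin) throws IOException {
        admin.createTable(
                TableDescriptorBuilder.newBuilder(TableName.valueOf("demo"))
                        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("cf"))
                        .build());
    }
}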

The HBase utility class

Before going into the HBase Java API itself, let me first introduce an HBase utility class. I plan to write this demo as a utility class; the individual methods are covered below, but not exhaustively, so for the full details please refer to the official Apache documentation.

Step 1

Create a Maven project and add the following Maven dependencies:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">

    <modelVersion>4.0.0</modelVersion>

    <groupId>com.linewell</groupId>
    <artifactId>hbase-test</artifactId>
    <packaging>jar</packaging>
    <version>1.0-SNAPSHOT</version>

    <name>A Camel Route</name>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
    </properties>

    <dependencyManagement>
        <dependencies>
            <!-- Camel BOM -->
            <dependency>
                <groupId>org.apache.camel</groupId>
                <artifactId>camel-parent</artifactId>
                <version>2.22.1</version>
                <scope>import</scope>
                <type>pom</type>
            </dependency>
        </dependencies>
    </dependencyManagement>

    <dependencies>
        <dependency>
            <groupId>org.apache.camel</groupId>
            <artifactId>camel-core</artifactId>
        </dependency>

        <!-- logging -->
        <dependency>
            <groupId>org.apache.logging.log4j</groupId>
            <artifactId>log4j-api</artifactId>
            <scope>runtime</scope>
        </dependency>
        <dependency>
            <groupId>org.apache.logging.log4j</groupId>
            <artifactId>log4j-core</artifactId>
            <scope>runtime</scope>
        </dependency>
        <dependency>
            <groupId>org.apache.logging.log4j</groupId>
            <artifactId>log4j-slf4j-impl</artifactId>
            <scope>runtime</scope>
        </dependency>

        <!-- testing -->
        <dependency>
            <groupId>org.apache.camel</groupId>
            <artifactId>camel-test</artifactId>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-common</artifactId>
            <version>2.7.3</version>
            <exclusions>
                <exclusion>
                    <artifactId>guava</artifactId>
                    <groupId>com.google.guava</groupId>
                </exclusion>
                <exclusion>
                    <artifactId>xz</artifactId>
                    <groupId>org.tukaani</groupId>
                </exclusion>
                <exclusion>
                    <artifactId>commons-compress</artifactId>
                    <groupId>org.apache.commons</groupId>
                </exclusion>
                <exclusion>
                    <artifactId>jackson-core-asl</artifactId>
                    <groupId>org.codehaus.jackson</groupId>
                </exclusion>
                <exclusion>
                    <artifactId>commons-lang</artifactId>
                    <groupId>commons-lang</groupId>
                </exclusion>
                <exclusion>
                    <artifactId>jackson-jaxrs</artifactId>
                    <groupId>org.codehaus.jackson</groupId>
                </exclusion>
                <exclusion>
                    <artifactId>jackson-mapper-asl</artifactId>
                    <groupId>org.codehaus.jackson</groupId>
                </exclusion>
            </exclusions>
        </dependency>
        <!-- https://mvnrepository.com/artifact/org.apache.hbase/hbase-server -->
        <dependency>
            <groupId>org.apache.hbase</groupId>
            <artifactId>hbase-server</artifactId>
            <version>1.2.6</version>
            <exclusions>
                <exclusion>
                    <artifactId>jackson-xc</artifactId>
                    <groupId>org.codehaus.jackson</groupId>
                </exclusion>
                <exclusion>
                    <artifactId>hadoop-annotations</artifactId>
                    <groupId>org.apache.hadoop</groupId>
                </exclusion>
                <exclusion>
                    <artifactId>hadoop-auth</artifactId>
                    <groupId>org.apache.hadoop</groupId>
                </exclusion>
                <exclusion>
                    <artifactId>guava</artifactId>
                    <groupId>com.google.guava</groupId>
                </exclusion>
                <exclusion>
                    <artifactId>hadoop-common</artifactId>
                    <groupId>org.apache.hadoop</groupId>
                </exclusion>
            </exclusions>
        </dependency>
        <!-- https://mvnrepository.com/artifact/org.apache.hbase/hbase-client -->
        <dependency>
            <groupId>org.apache.hbase</groupId>
            <artifactId>hbase-client</artifactId>
            <version>1.2.6</version>
            <exclusions>
                <exclusion>
                    <artifactId>hadoop-common</artifactId>
                    <groupId>org.apache.hadoop</groupId>
                </exclusion>
                <exclusion>
                    <artifactId>guava</artifactId>
                    <groupId>com.google.guava</groupId>
                </exclusion>
                <exclusion>
                    <artifactId>hadoop-auth</artifactId>
                    <groupId>org.apache.hadoop</groupId>
                </exclusion>
            </exclusions>
        </dependency>
        <!-- https://mvnrepository.com/artifact/com.google.guava/guava -->
        <dependency>
            <groupId>com.google.guava</groupId>
            <artifactId>guava</artifactId>
            <version>14.0.1</version>
            <exclusions>
                <exclusion>
                    <artifactId>jsr305</artifactId>
                    <groupId>com.google.code.findbugs</groupId>
                </exclusion>
            </exclusions>
        </dependency>
    </dependencies>

    <build>
        <defaultGoal>install</defaultGoal>

        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.7.0</version>
                <configuration>
                    <source>1.8</source>
                    <target>1.8</target>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-resources-plugin</artifactId>
                <version>3.0.2</version>
                <configuration>
                    <encoding>UTF-8</encoding>
                </configuration>
            </plugin>

            <!-- Allows the example to be run via 'mvn compile exec:java' -->
            <plugin>
                <groupId>org.codehaus.mojo</groupId>
                <artifactId>exec-maven-plugin</artifactId>
                <version>1.6.0</version>
                <configuration>
                    <mainClass>com.linewell.MainApp</mainClass>
                    <includePluginDependencies>false</includePluginDependencies>
                </configuration>
            </plugin>

        </plugins>
    </build>

</project>
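One caveat about this pom: it pins hbase-server and hbase-client at 1.2.6, while the 2.1.0 code paths in the utility class below (TableDescriptorBuilder, ColumnFamilyDescriptorBuilder, Admin.addColumnFamily, and so on) only exist in the 2.x client. To build against 2.1.0, the two HBase dependencies presumably need to be bumped along the following lines (this is only a sketch; the exclusions shown above for the 1.2.6 entries would stay in place):

<dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase-client</artifactId>
    <version>2.1.0</version>
</dependency>
<dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase-server</artifactId>
    <version>2.1.0</version>
</dependency>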

Step 2

Write the utility class. The class below mainly wraps table creation, column family addition, and similar operations; if you need more, you can extend it from this base:

/**
 * Copyright (C), 2015-2018, hzhiping@linewell.com
 * Title: HbaseUtil
 * Author: hzhiping
 * Date: 2018/9/19 10:01
 * Description: HbaseUtil utility class wrapping the usual create/read/update/delete operations
 */
package com.linewell.util;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.*;
import org.apache.hadoop.hbase.client.*;
import org.apache.hadoop.hbase.util.Bytes;

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class HbaseUtil {
    private static Configuration configuration = null;
    private static Connection connection = null;
    // Cluster connection settings
    private static final String HBASE_ZOOKEEPER_QUORUM = "master.org.cn,slave01.org.cn,slave03.org.cn";
    // These values can be checked on the HBase web UI served on port 16010
    private static final String ZOOKEEPER_ZNODE_PARENT = "/hbase";
    private static final String HBASE_ZOOKEEPER_PROPERTY_CLIENTPORT = "2181";

    // Create the HBase connection
    static {
        configuration = HBaseConfiguration.create();
        // Set the HBase connection configuration
        configuration.set("hbase.zookeeper.quorum", HBASE_ZOOKEEPER_QUORUM);
        configuration.set("zookeeper.znode.parent", ZOOKEEPER_ZNODE_PARENT);
        configuration.set("hbase.zookeeper.property.clientPort", HBASE_ZOOKEEPER_PROPERTY_CLIENTPORT);
        try {
            connection = ConnectionFactory.createConnection(configuration);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    /**
     * @title:createTable
     * @description: create a table; the table name is passed in as a parameter
     * @param:tableName table name
     * @param:column a column family to create, since HBase requires at least one column family when creating a table
     */
    public static void createTable(String column, String tableName) throws IOException {
        /* 1.2.6
        TableName table = TableName.valueOf(tableName);
        Admin admin = connection.getAdmin();
        // Check whether the table exists
        if (admin.tableExists(table)) {
            admin.disableTable(table);// disable
            admin.deleteTable(table);// delete
        }
        HTableDescriptor hTableDescriptor = new HTableDescriptor(table);
        // Create a new column family
        HColumnDescriptor hColumnDescriptor = new HColumnDescriptor(column);
        // Set the column family properties
        hColumnDescriptor.setMaxVersions(5);
        hColumnDescriptor.setBlockCacheEnabled(true);
        hColumnDescriptor.setBlocksize(1800000);
        // Add the column family
        hTableDescriptor.addFamily(hColumnDescriptor);
        // Create the table
        admin.createTable(hTableDescriptor);
        System.out.println("--------- table created ---------");
        admin.close();
        */
        TableName table = TableName.valueOf(tableName);
        Admin admin = connection.getAdmin();
        TableDescriptorBuilder tableDescriptor = TableDescriptorBuilder.newBuilder(table);
        // Check whether the table exists
        if (admin.tableExists(table)) {
            admin.disableTable(table);// disable
            admin.deleteTable(table);// delete
        }
        // Create a new column family
        ColumnFamilyDescriptorBuilder.ModifyableColumnFamilyDescriptor columnFamilyDescriptor = new ColumnFamilyDescriptorBuilder.ModifyableColumnFamilyDescriptor(Bytes.toBytes(column));
        columnFamilyDescriptor.setMaxVersions(5);
        columnFamilyDescriptor.setBlockCacheEnabled(true);
        columnFamilyDescriptor.setBlocksize(1800000);
        // Add the column family to the table descriptor
        TableDescriptorBuilder tableDescriptorBuilder =
                tableDescriptor.setColumnFamily(columnFamilyDescriptor);
        // Create the table
        admin.createTable(tableDescriptorBuilder.build());
    }

    /**
     * @title:deleteTable
     * @description: delete the table whose name is passed in as a parameter
     * @param:tablename table name
     */
    public static void deleteTable(String tablename) throws IOException {
        TableName tableName = TableName.valueOf(tablename);
        Admin admin = connection.getAdmin();
        // Check whether the table exists
        if (admin.tableExists(tableName)) {
            admin.disableTable(tableName);
            admin.deleteTable(tableName);
            System.out.println("--------- table deleted ---------");
        }
    }

    /**
     * @title:getTable
     * @description: list the table names that exist in the cluster
     * @return:TableName table names
     */
    public static TableName[] getTable() throws IOException {
        /* identical in 1.2.6 and 2.1.0 */
        Admin admin = connection.getAdmin();
        TableName[] listTableName = admin.listTableNames();
        System.out.println("--------- listing tables ---------");
        for (TableName tableName : listTableName) {
            System.out.println(tableName.toString());
        }
        return listTableName;
    }

    /**
     * @title:addColumn
     * @description: add column families to the given table
     * @param:columns names of the column families to add
     * @param:tableName table name
     */
    public static void addColume(String[] columns, String tableName) throws IOException {
        /* 1.2.6
        Admin admin = connection.getAdmin();
        // Number of column families
        int size = columns.length;
        // Iterate over the array and add each column family
        for (int i = 0; i < size; i++) {
            HColumnDescriptor hColumnDescriptor = new HColumnDescriptor(columns[i]);
            admin.addColumn(TableName.valueOf(tableName), hColumnDescriptor);
            System.out.println("inserted " + columns[i] + "...");
        }
        */
        Admin admin = connection.getAdmin();
        // Number of column families
        int size = columns.length;
        for (int i = 0; i < size; i++) {
            ColumnFamilyDescriptorBuilder.ModifyableColumnFamilyDescriptor columnFamilyDescriptor = new ColumnFamilyDescriptorBuilder.ModifyableColumnFamilyDescriptor(Bytes.toBytes(columns[i]));
            admin.addColumnFamily(TableName.valueOf(tableName), columnFamilyDescriptor);
            System.out.println("inserted " + columns[i] + "...");
        }
    }

    /**
     * @title:insertData
     * @description: insert data into the table whose name is passed in (the data inserted here is only mock data)
     * @param:tableName table name
     */
    public static void insertData(String tablename) throws IOException {
        /* 1.2.6
        Admin admin = connection.getAdmin();
        // Get the table
        HTable hTable = new HTable(configuration, TableName.valueOf(tablename));
        // Get the current timestamp
        Long timeStamp = System.currentTimeMillis();
        // Use the timestamp to make the row key unique
        Put put = new Put(Bytes.toBytes("row " + timeStamp.toString()));
        put.add(Bytes.toBytes("personal data"), Bytes.toBytes("name"), Bytes.toBytes("raju"));
        put.add(Bytes.toBytes("personal data"), Bytes.toBytes("city"), Bytes.toBytes("quanzhou"));
        put.add(Bytes.toBytes("professional data"), Bytes.toBytes("designation"), Bytes.toBytes("manager"));
        put.add(Bytes.toBytes("professional data"), Bytes.toBytes("salary"), Bytes.toBytes("50000"));
        hTable.put(put);
        System.out.println("inserted data...");
        hTable.close();
        */
        // Get the table
        TableName tableName = TableName.valueOf(tablename);
        Table table = connection.getTable(tableName);
        // Simple test data
        Long timeStamp = System.currentTimeMillis();
        Put put = new Put(Bytes.toBytes("row " + timeStamp.toString()));
        put.addColumn(Bytes.toBytes("personal data"), Bytes.toBytes("name"), Bytes.toBytes("raju"));
        put.addColumn(Bytes.toBytes("personal data"), Bytes.toBytes("city"), Bytes.toBytes("quanzhou"));
        put.addColumn(Bytes.toBytes("professional data"), Bytes.toBytes("designation"), Bytes.toBytes("manager"));
        put.addColumn(Bytes.toBytes("professional data"), Bytes.toBytes("salary"), Bytes.toBytes("50000"));
        table.put(put);
        System.out.println("inserted data...");
    }

    /**
     * @title:uptData
     * @description: update data
     * @param:tableName table name
     */
    public static void uptData(String tableName) throws IOException {
        TableName tablename = TableName.valueOf(tableName);
        Table table = connection.getTable(tablename);
        Put put = new Put(Bytes.toBytes("row 1539179928919"));
        put.addColumn(Bytes.toBytes("personal data"), Bytes.toBytes("name"), Bytes.toBytes("hzhihui"));
        table.put(put);
        System.out.println("updated data...");
    }

    /**
     * @title:delData
     * @description: delete one or more rows
     * @param:tableName table name
     * @param:rowKeys row keys
     */
    public static void delData(String tableName, String[] rowKeys) throws IOException {
        TableName tablename = TableName.valueOf(tableName);
        Table table = connection.getTable(tablename);
        List<Delete> deleteList = new ArrayList<Delete>(rowKeys.length);
        Delete delete;
        for (String rowKey : rowKeys) {
            delete = new Delete(Bytes.toBytes(rowKey));
            deleteList.add(delete);
        }
        table.delete(deleteList);
        System.out.println("deleted data...");
    }

    /**
     * @title:scanTable
     * @description: scan the data in an HBase table
     * @param:tableName
     */
    public static void scanTable(String tableName) throws IOException {
        /* 1.2.6
        Admin admin = connection.getAdmin();
        HTable table = new HTable(configuration, TableName.valueOf(tableName));
        // Scan across multiple rows
        Scan scan = new Scan();
        ResultScanner resultScanner = table.getScanner(scan);
        // Iterate over the result set and print every result
        for (Result result : resultScanner) {
            for (Cell cell : result.rawCells()) {
                System.out.println(Bytes.toString(result.getRow()) + "\t"
                        + Bytes.toString(CellUtil.cloneQualifier(cell)) + "\t"
                        + Bytes.toString(CellUtil.cloneValue(cell)) + "\t"
                        + cell.getTimestamp()
                );
                System.out.println("-------------------- separator --------------------");
            }
        }
        */
        // Get the table
        TableName tablename = TableName.valueOf(tableName);
        Table table = connection.getTable(tablename);
        Scan scan = new Scan();
        ResultScanner resultScanner = table.getScanner(scan);
        for (Result result : resultScanner) {
            for (Cell cell : result.rawCells()) {
                System.out.println(Bytes.toString(result.getRow()) + "\t"
                        + Bytes.toString(CellUtil.cloneQualifier(cell)) + "\t"
                        + Bytes.toString(CellUtil.cloneValue(cell)) + "\t"
                        + cell.getTimestamp()
                );
                System.out.println("-------------------- separator --------------------");
            }
        }
    }
}
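A note on the 2.1.0 branch of createTable above: ModifyableColumnFamilyDescriptor is intended as an internal class of the 2.x client, so the same method can also be written against the public builder chain. The following is only a sketch of that variant (same behavior; the Connection is passed in explicitly here just to keep the example self-contained):

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

import java.io.IOException;

public class CreateTableBuilderStyle {
    public static void createTable(Connection connection, String column, String tableName) throws IOException {
        TableName table = TableName.valueOf(tableName);
        try (Admin admin = connection.getAdmin()) {
            // Drop the table first if it already exists
            if (admin.tableExists(table)) {
                admin.disableTable(table);
                admin.deleteTable(table);
            }
            // Build an immutable column family descriptor with the same settings as above
            ColumnFamilyDescriptor family = ColumnFamilyDescriptorBuilder
                    .newBuilder(Bytes.toBytes(column))
                    .setMaxVersions(5)
                    .setBlockCacheEnabled(true)
                    .setBlocksize(1800000)
                    .build();
            // Attach the family to the table descriptor and create the table
            admin.createTable(TableDescriptorBuilder.newBuilder(table)
                    .setColumnFamily(family)
                    .build());
        }
    }
}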

Of course, I will link the demo code later.
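In the meantime, here is a minimal driver (a sketch only) showing how the utility class might be exercised end to end. The class name matches the com.linewell.MainApp main class configured for exec-maven-plugin in the pom; the table name "employee" and the row key passed to delData are placeholders.

package com.linewell;

import com.linewell.util.HbaseUtil;

public class MainApp {
    public static void main(String[] args) throws Exception {
        // Create a table with one column family, then add the second family used by insertData
        HbaseUtil.createTable("personal data", "employee");
        HbaseUtil.addColume(new String[]{"professional data"}, "employee");
        // Write a mock row and read everything back
        HbaseUtil.insertData("employee");
        HbaseUtil.getTable();
        HbaseUtil.scanTable("employee");
        // Clean up: delete a (placeholder) row, then drop the table
        HbaseUtil.delData("employee", new String[]{"row 1539179928919"});
        HbaseUtil.deleteTable("employee");
    }
}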

Source code

Code for this example: hbase01 in the repository.

