Two fields: one column holds the KFC data, the other column simply holds the string "same".
Every record is flushed to the server individually (one flush per Put).
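A minimal sketch of what such a per-record-flush writer might look like with the old HTable client API (HBase 0.94/0.96 era, matching the 2014 logs below). The table name "kfc_test", column family "cf", qualifiers and generated payload are placeholders, not taken from the original test code:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class PerRecordFlushTest {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "kfc_test");   // hypothetical table name
        long start = System.currentTimeMillis();
        for (int i = 0; i < 460000; i++) {
            Put put = new Put(Bytes.toBytes("row-" + i));
            put.add(Bytes.toBytes("cf"), Bytes.toBytes("kfc"), Bytes.toBytes("kfc-data-" + i));
            put.add(Bytes.toBytes("cf"), Bytes.toBytes("flag"), Bytes.toBytes("same"));
            table.put(put);        // autoFlush is true by default,
            table.flushCommits();  // so every record costs one RPC to the RegionServer
            if ((i + 1) % 10000 == 0) {
                long now = System.currentTimeMillis();
                System.out.println("has been write " + (i + 1) + " record "
                        + (now - start) + " total milliseconds");
                start = now;
            }
        }
        table.close();
    }
}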
 
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
2014-08-08 17:07:46,898 WARN [main] util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2014-08-08 17:07:47,049 INFO [main] zookeeper.RecoverableZooKeeper (RecoverableZooKeeper.java:<init>(120)) - Process identifier=hconnection-0xd4159f connecting to ZooKeeper ensemble=h139:2181,h135:2181,openstack:2181
2014-08-08 17:07:47,412 INFO [main] zookeeper.RecoverableZooKeeper (RecoverableZooKeeper.java:<init>(120)) - Process identifier=catalogtracker-on-hconnection-0xd4159f connecting to ZooKeeper ensemble=h139:2181,h135:2181,openstack:2181
2014-08-08 17:07:47,481 INFO [main] Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(840)) - hadoop.native.lib is deprecated. Instead, use io.native.lib.available
2014-08-08 17:07:48,743 INFO [main] zookeeper.RecoverableZooKeeper (RecoverableZooKeeper.java:<init>(120)) - Process identifier=catalogtracker-on-hconnection-0xd4159f connecting to ZooKeeper ensemble=h139:2181,h135:2181,openstack:2181
create table success!
has been write 10000 record 20414 total milliseconds
has been write 20000 record 18707 total milliseconds
has been write 30000 record 18629 total milliseconds
has been write 40000 record 18413 total milliseconds
has been write 50000 record 18332 total milliseconds
has been write 60000 record 18233 total milliseconds
has been write 70000 record 18290 total milliseconds
has been write 80000 record 18422 total milliseconds
has been write 90000 record 18439 total milliseconds
has been write 100000 record 19525 total milliseconds
has been write 110000 record 18534 total milliseconds
has been write 120000 record 18421 total milliseconds
has been write 130000 record 18413 total milliseconds
has been write 140000 record 18017 total milliseconds
has been write 150000 record 18618 total milliseconds
has been write 160000 record 19550 total milliseconds
has been write 170000 record 18546 total milliseconds
has been write 180000 record 18636 total milliseconds
has been write 190000 record 18201 total milliseconds
has been write 200000 record 18178 total milliseconds
has been write 210000 record 18044 total milliseconds
has been write 220000 record 17923 total milliseconds
has been write 230000 record 18356 total milliseconds
has been write 240000 record 18626 total milliseconds
has been write 250000 record 18766 total milliseconds
has been write 260000 record 18783 total milliseconds
has been write 270000 record 18354 total milliseconds
has been write 280000 record 18632 total milliseconds
has been write 290000 record 18365 total milliseconds
has been write 300000 record 18347 total milliseconds
has been write 310000 record 18467 total milliseconds
has been write 320000 record 18390 total milliseconds
has been write 330000 record 22061 total milliseconds
has been write 340000 record 18059 total milliseconds
has been write 350000 record 18703 total milliseconds
has been write 360000 record 18620 total milliseconds
has been write 370000 record 18527 total milliseconds
has been write 380000 record 18596 total milliseconds
has been write 390000 record 18534 total milliseconds
has been write 400000 record 18756 total milliseconds
has been write 410000 record 18690 total milliseconds
has been write 420000 record 18712 total milliseconds
has been write 430000 record 18782 total milliseconds
has been write 440000 record 18725 total milliseconds
has been write 450000 record 18458 total milliseconds
has been write 460000 record 18478 total milliseconds
873298 total milliseconds
==================================================
Commit once every 10,000 records.
(To batch multiple records per commit, besides calling table.setAutoFlush(false); you also have to enlarge the client write buffer: table.setWriteBufferSize(1024 * 1024 * 50); // 50 MB)
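A similarly hedged sketch of the batched variant (same placeholder names as above): disable auto-flush, enlarge the client-side write buffer, and flush once every 10,000 Puts:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class BatchedFlushTest {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "kfc_test");   // hypothetical table name
        table.setAutoFlush(false);                     // buffer Puts on the client side
        table.setWriteBufferSize(1024 * 1024 * 50);    // 50 MB client write buffer
        long start = System.currentTimeMillis();
        for (int i = 0; i < 460000; i++) {
            Put put = new Put(Bytes.toBytes("row-" + i));
            put.add(Bytes.toBytes("cf"), Bytes.toBytes("kfc"), Bytes.toBytes("kfc-data-" + i));
            put.add(Bytes.toBytes("cf"), Bytes.toBytes("flag"), Bytes.toBytes("same"));
            table.put(put);
            if ((i + 1) % 10000 == 0) {
                table.flushCommits();   // one batched RPC round per 10,000 records
                long now = System.currentTimeMillis();
                System.out.println("has been write " + (i + 1) + " record "
                        + (now - start) + " total milliseconds");
                start = now;
            }
        }
        table.flushCommits();           // push any remaining buffered Puts
        table.close();
    }
}

With auto-flush off the client accumulates Puts in memory and ships them to the RegionServers in large batches, which is where the speedup in the second run comes from.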
 
Space usage under /hbase (in bytes):
 
0            /hbase/.tmp
7595732      /hbase/WALs
0            /hbase/archive
0            /hbase/corrupt
49270766     /hbase/data
42           /hbase/hbase.id
7            /hbase/hbase.version
208169150    /hbase/oldWALs
 
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
2014-08-08 17:51:58,199 WARN [main] util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2014-08-08 17:51:58,497 INFO [main] zookeeper.RecoverableZooKeeper (RecoverableZooKeeper.java:<init>(120)) - Process identifier=hconnection-0x1af0db6 connecting to ZooKeeper ensemble=h139:2181,h135:2181,openstack:2181
2014-08-08 17:51:58,977 INFO [main] zookeeper.RecoverableZooKeeper (RecoverableZooKeeper.java:<init>(120)) - Process identifier=catalogtracker-on-hconnection-0x1af0db6 connecting to ZooKeeper ensemble=h139:2181,h135:2181,openstack:2181
2014-08-08 17:51:59,066 INFO [main] Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(840)) - hadoop.native.lib is deprecated. Instead, use io.native.lib.available
table Exists!
has been write 10000 record 148 total milliseconds
has been write 20000 record 1465 total milliseconds
has been write 30000 record 699 total milliseconds
has been write 40000 record 999 total milliseconds
has been write 50000 record 882 total milliseconds
has been write 60000 record 644 total milliseconds
has been write 70000 record 808 total milliseconds
has been write 80000 record 725 total milliseconds
has been write 90000 record 612 total milliseconds
has been write 100000 record 709 total milliseconds
has been write 110000 record 588 total milliseconds
has been write 120000 record 600 total milliseconds
has been write 130000 record 813 total milliseconds
has been write 140000 record 545 total milliseconds
has been write 150000 record 750 total milliseconds
has been write 160000 record 769 total milliseconds
has been write 170000 record 771 total milliseconds
has been write 180000 record 761 total milliseconds
has been write 190000 record 622 total milliseconds
has been write 200000 record 723 total milliseconds
has been write 210000 record 625 total milliseconds
has been write 220000 record 777 total milliseconds
has been write 230000 record 635 total milliseconds
has been write 240000 record 707 total milliseconds
has been write 250000 record 604 total milliseconds
has been write 260000 record 804 total milliseconds
has been write 270000 record 735 total milliseconds
has been write 280000 record 624 total milliseconds
has been write 290000 record 615 total milliseconds
has been write 300000 record 727 total milliseconds
has been write 310000 record 613 total milliseconds
has been write 320000 record 665 total milliseconds
has been write 330000 record 703 total milliseconds
has been write 340000 record 622 total milliseconds
has been write 350000 record 620 total milliseconds
has been write 360000 record 933 total milliseconds
has been write 370000 record 885 total milliseconds
has been write 380000 record 861 total milliseconds
has been write 390000 record 989 total milliseconds
has been write 400000 record 833 total milliseconds
has been write 410000 record 991 total milliseconds
has been write 420000 record 736 total milliseconds
has been write 430000 record 586 total milliseconds
has been write 440000 record 590 total milliseconds
has been write 450000 record 690 total milliseconds
has been write 460000 record 617 total milliseconds
34145 total milliseconds
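For reference, both runs wrote the same 460,000 records: roughly 873,298 ms (about 530 records/s) with per-record flush versus 34,145 ms (about 13,500 records/s) with 10,000-record batches, i.e. on the order of a 25x speedup from client-side buffering.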
