HBase Study Notes 1 - How to Write High-Performance Client-Side Java Code
Please credit the original link when reposting: http://www.cnblogs.com/xczyd/p/5577124.html
Users of HBase often complain that writes are too slow, that concurrency won't scale, and so on. In the past, whenever I ran into this I would head straight for the HBase cluster itself, checking its load and looking for performance bottlenecks.
Then a veteran colleague said: hold on, first look at how the user's client code accesses the HBase cluster.
So I spent some time reading that code.
It was a rude awakening. Client code written by customers (and by our own field engineers, for that matter) is frequently riddled with basic mistakes, for example:
(1) overusing synchronized;
(2) creating connections and never releasing them;
(3) calling an API many times when a single call would do; if it happens to be an expensive API, you can imagine the performance;
(4) assorted other oddities...
This post therefore starts from the HBase Java API and, through source-code analysis and a simple experiment, works out the most suitable way to call it (mainly for high-concurrency scenarios).
If you are not familiar with the HBase Java API, start with the documentation on the official site.
Now for the main content:
To interact with an HBase cluster through the Java API, you first create an HTable instance, then use its methods to insert, delete, and query.
To create an HTable object, you first need a Configuration instance conf that carries the HBase cluster information. It is typically created like this:
Configuration conf = HBaseConfiguration.create();
// Set the ZooKeeper quorum address and client port of the HBase cluster
conf.set("hbase.zookeeper.quorum", "XX.XXX.X.XX");
conf.set("hbase.zookeeper.property.clientPort", "2181");
With conf in hand, an HTable instance can be created through either of the two constructors below:
Method one: create the HTable instance directly from conf
The corresponding constructor is:
public HTable(Configuration conf, final TableName tableName)
    throws IOException {
  this.tableName = tableName;
  this.cleanupPoolOnClose = this.cleanupConnectionOnClose = true;
  if (conf == null) {
    this.connection = null;
    return;
  }
  this.connection = HConnectionManager.getConnection(conf);
  this.configuration = conf;
  this.pool = getDefaultExecutor(conf);
  this.finishSetup();
}
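As a quick usage sketch (the table name test0 is just a placeholder), method one needs nothing beyond the conf; the HConnection is looked up internally:

// Minimal usage sketch for method one: the constructor obtains a shared
// HConnection via HConnectionManager.getConnection(conf) behind the scenes.
HTable table = new HTable(conf, TableName.valueOf("test0"));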
Note the call to HConnectionManager.getConnection. In this constructor, getConnection is invoked with conf as its input to obtain an HConnection instance, connection. If you know ODBC or JDBC, you will recognize that Java database APIs generally maintain a similar connection (or connection pool) between client and database; for HBase, HConnection plays that role. One difference from Oracle and the like is that HConnection does not actually connect to the HBase cluster itself, but to the ZooKeeper (ZK) ensemble that maintains its key metadata. ZK is out of scope for this post; if you are unfamiliar with it, simply think of it as a standalone metadata-management service. Back to getConnection, whose implementation is as follows:
public static HConnection getConnection(final Configuration conf)
    throws IOException {
  HConnectionKey connectionKey = new HConnectionKey(conf);
  synchronized (CONNECTION_INSTANCES) {
    HConnectionImplementation connection = CONNECTION_INSTANCES.get(connectionKey);
    if (connection == null) {
      connection = (HConnectionImplementation) createConnection(conf, true);
      CONNECTION_INSTANCES.put(connectionKey, connection);
    } else if (connection.isClosed()) {
      HConnectionManager.deleteConnection(connectionKey, true);
      connection = (HConnectionImplementation) createConnection(conf, true);
      CONNECTION_INSTANCES.put(connectionKey, connection);
    }
    connection.incCount();
    return connection;
  }
}
Here CONNECTION_INSTANCES is of type LinkedHashMap<HConnectionKey, HConnectionImplementation>, where HConnectionImplementation is simply the concrete implementation of HConnection. Three lines deserve attention. First, an HConnectionKey instance connectionKey is built from conf. Second, CONNECTION_INSTANCES is probed for an existing HConnection instance under that connectionKey. Third, if none exists (or the cached one has been closed), createConnection is called to create a fresh HConnection instance; otherwise the HConnection found in the map is returned directly.
At the risk of belaboring the point, let's also look at HConnectionKey's constructor and its overridden hashCode function:
HConnectionKey(Configuration conf) {
  Map<String, String> m = new HashMap<String, String>();
  if (conf != null) {
    for (String property : CONNECTION_PROPERTIES) {
      String value = conf.get(property);
      if (value != null) {
        m.put(property, value);
      }
    }
  }
  this.properties = Collections.unmodifiableMap(m);
  try {
    UserProvider provider = UserProvider.instantiate(conf);
    User currentUser = provider.getCurrent();
    if (currentUser != null) {
      username = currentUser.getName();
    }
  } catch (IOException ioe) {
    HConnectionManager.LOG.warn("Error obtaining current user, skipping username in HConnectionKey", ioe);
  }
}

public int hashCode() {
  final int prime = 31;
  int result = 1;
  if (username != null) {
    result = username.hashCode();
  }
  for (String property : CONNECTION_PROPERTIES) {
    String value = properties.get(property);
    if (value != null) {
      result = prime * result + value.hashCode();
    }
  }
  return result;
}
As you can see, the overridden hashCode seeds its result with the hashCode of username and then mixes in the hash of each connection-related property value taken from conf. username comes from currentUser, currentUser comes from provider, and provider is built from conf, so identical confs produce the same username and the same property values, and therefore the same hash. CONNECTION_INSTANCES is a LinkedHashMap whose get relies on HConnectionKey's hashCode (and equals) to decide whether an entry already exists. In essence, getConnection returns a connection object keyed on the contents of conf: for any set of confs with identical content, one and only one connection is ever handed out.
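A quick way to see this sharing in action is the following minimal sketch (written against the same 0.96/0.98-era API analyzed in this post):

// Two lookups with an identical conf should hand back the very same cached
// HConnection instance from CONNECTION_INSTANCES.
Configuration conf = HBaseConfiguration.create();
conf.set("hbase.zookeeper.quorum", "XX.XXX.X.XX");
conf.set("hbase.zookeeper.property.clientPort", "2181");

HConnection c1 = HConnectionManager.getConnection(conf);
HConnection c2 = HConnectionManager.getConnection(conf);
System.out.println(c1 == c2); // expected: true, one shared, reference-counted instance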
Method two: call createConnection to create an HConnection instance explicitly, then pass it in as a parameter when constructing the HTable instance
createConnection and the corresponding HTable constructor are as follows:
public static HConnection createConnection(Configuration conf) throws IOException {
  UserProvider provider = UserProvider.instantiate(conf);
  return createConnection(conf, false, null, provider.getCurrent());
}

static HConnection createConnection(final Configuration conf, final boolean managed,
    final ExecutorService pool, final User user)
    throws IOException {
  String className = conf.get("hbase.client.connection.impl",
      HConnectionManager.HConnectionImplementation.class.getName());
  Class<?> clazz = null;
  try {
    clazz = Class.forName(className);
  } catch (ClassNotFoundException e) {
    throw new IOException(e);
  }
  try {
    // Default HCM#HCI is not accessible; make it so before invoking.
    Constructor<?> constructor =
        clazz.getDeclaredConstructor(Configuration.class,
            boolean.class, ExecutorService.class, User.class);
    constructor.setAccessible(true);
    return (HConnection) constructor.newInstance(conf, managed, pool, user);
  } catch (Exception e) {
    throw new IOException(e);
  }
}

public HTable(TableName tableName, HConnection connection) throws IOException {
  this.tableName = tableName;
  this.cleanupPoolOnClose = true;
  this.cleanupConnectionOnClose = false;
  this.connection = connection;
  this.configuration = connection.getConfiguration();
  this.pool = getDefaultExecutor(this.configuration);
  this.finishSetup();
}
As you can see, constructing an HTable this way creates a brand-new HConnection instance through reflection every time, instead of sharing a single HConnection as method one does.
Note that an HConnection created this way must be released with an explicit close() call once it is no longer needed; otherwise you can easily run into problems such as port exhaustion.
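A minimal sketch of the cleanup that method two obliges you to do (the table name is again a placeholder, and the fragment assumes the enclosing method declares throws IOException):

// With method two the caller owns the HConnection and must close it explicitly.
HConnection conn = HConnectionManager.createConnection(conf);
try {
  HTable table = new HTable(TableName.valueOf("test0"), conn);
  try {
    // ... puts / gets / deletes ...
  } finally {
    table.close(); // frees the table's thread pool (cleanupPoolOnClose == true)
  }
} finally {
  conn.close(); // cleanupConnectionOnClose == false, so HTable.close() will NOT do this for us
}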
So how do the two approaches perform when actually inserting, deleting, and querying? Let's analyze the code first. For simplicity, we'll look at what HTable does during a put (insert).
HTable's put function is as follows:
public void put(final Put put)
    throws InterruptedIOException, RetriesExhaustedWithDetailsException {
  doPut(put);
  if (autoFlush) {
    flushCommits();
  }
}

private void doPut(Put put)
    throws InterruptedIOException, RetriesExhaustedWithDetailsException {
  if (ap.hasError()) {
    writeAsyncBuffer.add(put);
    backgroundFlushCommits(true);
  }
  validatePut(put);
  currentWriteBufferSize += put.heapSize();
  writeAsyncBuffer.add(put);
  while (currentWriteBufferSize > writeBufferSize) {
    backgroundFlushCommits(false);
  }
}

private void backgroundFlushCommits(boolean synchronous)
    throws InterruptedIOException, RetriesExhaustedWithDetailsException {
  try {
    do {
      ap.submit(writeAsyncBuffer, true);
    } while (synchronous && !writeAsyncBuffer.isEmpty());
    if (synchronous) {
      ap.waitUntilDone();
    }
    if (ap.hasError()) {
      LOG.debug(tableName + ": One or more of the operations have failed -" +
          " waiting for all operation in progress to finish (successfully or not)");
      while (!writeAsyncBuffer.isEmpty()) {
        ap.submit(writeAsyncBuffer, true);
      }
      ap.waitUntilDone();
      if (!clearBufferOnFail) {
        // if clearBufferOnFailed is not set, we're supposed to keep the failed operation in the
        // write buffer. This is a questionable feature kept here for backward compatibility
        writeAsyncBuffer.addAll(ap.getFailedOperations());
      }
      RetriesExhaustedWithDetailsException e = ap.getErrors();
      ap.clearErrors();
      throw e;
    }
  } finally {
    currentWriteBufferSize = 0;
    for (Row mut : writeAsyncBuffer) {
      if (mut instanceof Mutation) {
        currentWriteBufferSize += ((Mutation) mut).heapSize();
      }
    }
  }
}
As the code shows, the call chain is put -> doPut -> backgroundFlushCommits -> ap.submit, where ap is an instance of the AsyncProcess class. Following the trail into AsyncProcess, the code is:
public void submit(List<? extends Row> rows, boolean atLeastOne)
    throws InterruptedIOException {
  submitLowPriority(rows, atLeastOne, false);
}

public void submitLowPriority(List<? extends Row> rows, boolean atLeastOne,
    boolean isLowPripority) throws InterruptedIOException {
  if (rows.isEmpty()) {
    return;
  }
  // This looks like we are keying by region but HRegionLocation has a comparator that compares
  // on the server portion only (hostname + port) so this Map collects regions by server.
  Map<HRegionLocation, MultiAction<Row>> actionsByServer =
      new HashMap<HRegionLocation, MultiAction<Row>>();
  List<Action<Row>> retainedActions = new ArrayList<Action<Row>>(rows.size());
  long currentTaskCnt = tasksDone.get();
  boolean alreadyLooped = false;
  NonceGenerator ng = this.hConnection.getNonceGenerator();
  do {
    if (alreadyLooped) {
      // if, for whatever reason, we looped, we want to be sure that something has changed.
      waitForNextTaskDone(currentTaskCnt);
      currentTaskCnt = tasksDone.get();
    } else {
      alreadyLooped = true;
    }
    // Wait until there is at least one slot for a new task.
    waitForMaximumCurrentTasks(maxTotalConcurrentTasks - 1);
    // Remember the previous decisions about regions or region servers we put in the
    // final multi.
    Map<Long, Boolean> regionIncluded = new HashMap<Long, Boolean>();
    Map<ServerName, Boolean> serverIncluded = new HashMap<ServerName, Boolean>();
    int posInList = -1;
    Iterator<? extends Row> it = rows.iterator();
    while (it.hasNext()) {
      Row r = it.next();
      HRegionLocation loc = findDestLocation(r, posInList);
      if (loc == null) { // loc is null if there is an error such as meta not available.
        it.remove();
      } else if (canTakeOperation(loc, regionIncluded, serverIncluded)) {
        Action<Row> action = new Action<Row>(r, ++posInList);
        setNonce(ng, r, action);
        retainedActions.add(action);
        addAction(loc, action, actionsByServer, ng);
        it.remove();
      }
    }
  } while (retainedActions.isEmpty() && atLeastOne && !hasError());
  HConnectionManager.ServerErrorTracker errorsByServer = createServerErrorTracker();
  sendMultiAction(retainedActions, actionsByServer, 1, errorsByServer, isLowPripority);
}

private HRegionLocation findDestLocation(Row row, int posInList) {
  if (row == null) throw new IllegalArgumentException("#" + id + ", row cannot be null");
  HRegionLocation loc = null;
  IOException locationException = null;
  try {
    loc = hConnection.locateRegion(this.tableName, row.getRow());
    if (loc == null) {
      locationException = new IOException("#" + id + ", no location found, aborting submit for" +
          " tableName=" + tableName +
          " rowkey=" + Arrays.toString(row.getRow()));
    }
  } catch (IOException e) {
    locationException = e;
  }
  if (locationException != null) {
    // There are multiple retries in locateRegion already. No need to add new.
    // We can't continue with this row, hence it's the last retry.
    manageError(posInList, row, false, locationException, null);
    return null;
  }
  return loc;
}
The main mechanism here is asynchronous submission: not every put writes directly into HBase; instead, puts accumulate in a client-side buffer until it grows past a threshold (2MB by default), at which point a write is actually issued. (Strictly speaking, this buffering applies when autoFlush is off; as the put code above shows, with the default autoFlush == true every put triggers flushCommits.) On the server side these writes presumably still queue up for execution, but that mechanism is out of scope here. What we can establish is that in the insert/query/delete Java APIs, HConnection only serves to locate the right RegionServer; once located, the operation itself is carried out by the client through an RPC call, regardless of how many connections the client has created. Moreover, locateRegion only performs RPC when the cache misses; otherwise it reads the RegionServer information straight from the cache. See the locateRegion source for details, which we also won't expand on here.
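For reference, here is a minimal sketch of the client-side buffering knobs involved above (method names are from this same API generation; test0 and somePuts are placeholders):

// Turn off per-put flushing so that puts accumulate in writeAsyncBuffer.
HTable table = new HTable(conf, TableName.valueOf("test0"));
table.setAutoFlush(false);                 // otherwise put() flushes on every call
table.setWriteBufferSize(2 * 1024 * 1024); // flush threshold; 2MB is the default

for (Put p : somePuts) {
  table.put(p); // buffered; flushed once currentWriteBufferSize exceeds the threshold
}

table.flushCommits(); // push out whatever is still sitting in the buffer
table.close();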
So much for the code analysis. From it we can conclude that creating large numbers of HConnections via createConnection does nothing to improve write performance; on the contrary, by wasting resources it should be slower than getConnection. How much slower, though, cannot be judged from the code alone.
Let's run a simple experiment to test that claim:
Server environment: an HBase cluster of four Linux servers, 64GB of RAM each, average ping around 5ms (to be rigorous, one should also list CPU core counts and clock speeds, disk RPM, and so on)
Client environment: an Ubuntu VM on a Mac with 10GB of RAM allocated; its CPU, network, and disk are all noticeably slower than a physical machine's, but this does not affect the conclusions
The experiment code is as follows:
public class HbaseConectionTest {

  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    conf.set("hbase.zookeeper.quorum", "XX.XXX.X.XX");
    conf.set("hbase.zookeeper.property.clientPort", "2181");

    ThreadInfo info = new ThreadInfo();
    info.setTableNamePrefix("test");
    info.setColNames("col1,col2");
    info.setTableCount(1);
    info.setConnStrategy("CREATEWITHCONF"); // CREATEWITHCONF or CREATEWITHCONN
    info.setWriteStrategy("SEPERATE");      // OVERLAP or SEPERATE
    info.setLifeCycle(60000L);

    int threadCount = 100;
    for (int i = 0; i < threadCount; i++) {
      //createTable(tableNamePrefix + i, colNames, conf);
    }

    for (int i = 0; i < threadCount; i++) {
      new Thread(new WriteThread(conf, info, i)).start();
    }

    //HBaseAdmin admin = new HBaseAdmin(conf);
    //System.out.println(admin.tableExists("test"));
  }

  public static void createTable(String tableName, String[] colNames, Configuration conf) {
    System.out.println("start create table " + tableName);
    try {
      HBaseAdmin hBaseAdmin = new HBaseAdmin(conf);
      if (hBaseAdmin.tableExists(tableName)) {
        System.out.println(tableName + " is exist");
        //hBaseAdmin.disableTable(tableName);
        //hBaseAdmin.deleteTable(tableName);
        return;
      }
      HTableDescriptor tableDescriptor = new HTableDescriptor(tableName);
      for (int i = 0; i < colNames.length; i++) {
        tableDescriptor.addFamily(new HColumnDescriptor(colNames[i]));
      }
      hBaseAdmin.createTable(tableDescriptor);
    } catch (Exception ex) {
      ex.printStackTrace();
    }
    System.out.println("end create table " + tableName);
  }
}

// Per-thread configuration for the experiment
class ThreadInfo {
  private int tableCount;
  String tableNamePrefix;
  String[] colNames;
  // CREATEWITHCONF or CREATEWITHCONN
  String connStrategy;
  // OVERLAP or SEPERATE
  String writeStrategy;
  long lifeCycle;

  public ThreadInfo() {
  }

  public int getTableCount() {
    return tableCount;
  }

  public void setTableCount(int tableCount) {
    this.tableCount = tableCount;
  }

  public String getTableNamePrefix() {
    return tableNamePrefix;
  }

  public void setTableNamePrefix(String tableNamePrefix) {
    this.tableNamePrefix = tableNamePrefix;
  }

  public String[] getColNames() {
    return colNames;
  }

  public void setColNames(String[] colNames) {
    this.colNames = colNames;
  }

  public void setColNames(String colNames) {
    if (colNames == null) {
      this.colNames = null;
    } else {
      this.colNames = colNames.split(",");
    }
  }

  public String getWriteStrategy() {
    return writeStrategy;
  }

  public void setWriteStrategy(String writeStrategy) {
    this.writeStrategy = writeStrategy;
  }

  public String getConnStrategy() {
    return connStrategy;
  }

  public void setConnStrategy(String connStrategy) {
    this.connStrategy = connStrategy;
  }

  public long getLifeCycle() {
    return lifeCycle;
  }

  public void setLifeCycle(long lifeCycle) {
    this.lifeCycle = lifeCycle;
  }
}

class WriteThread implements Runnable {
  private Configuration conf;
  private ThreadInfo info;
  private int index;

  public WriteThread(Configuration conf, ThreadInfo info, int index) {
    this.conf = conf;
    this.info = info;
    this.index = index;
  }

  @Override
  public void run() {
    String threadName = Thread.currentThread().getName();
    int operationCount = 0;
    HTable[] htables = null;
    HConnection conn = null;
    int tableCount = info.getTableCount();
    String tableNamePrefix = info.getTableNamePrefix();
    String[] colNames = info.getColNames();
    String connStrategy = info.getConnStrategy();
    String writeStrategy = info.getWriteStrategy();
    long lifeCycle = info.getLifeCycle();
    System.out.println(threadName + ": started with index " + index);
    try {
      if (connStrategy.equals("CREATEWITHCONN")) {
        // Method two: one private HConnection per thread
        conn = HConnectionManager.createConnection(conf);
        if (writeStrategy.equals("SEPERATE")) {
          htables = new HTable[1];
          htables[0] = new HTable(TableName.valueOf(tableNamePrefix + (index % tableCount)), conn);
        } else if (writeStrategy.equals("OVERLAP")) {
          htables = new HTable[tableCount];
          for (int i = 0; i < tableCount; i++) {
            htables[i] = new HTable(TableName.valueOf(tableNamePrefix + i), conn);
          }
        } else {
          return;
        }
      } else if (connStrategy.equals("CREATEWITHCONF")) {
        // Method one: share the conf-keyed HConnection
        conn = null;
        if (writeStrategy.equals("SEPERATE")) {
          htables = new HTable[1];
          htables[0] = new HTable(conf, TableName.valueOf(tableNamePrefix + (index % tableCount)));
        } else if (writeStrategy.equals("OVERLAP")) {
          htables = new HTable[tableCount];
          for (int i = 0; i < tableCount; i++) {
            htables[i] = new HTable(conf, TableName.valueOf(tableNamePrefix + i));
          }
        } else {
          return;
        }
      } else {
        return;
      }

      long start = System.currentTimeMillis();
      long end = System.currentTimeMillis();

      // Fetch each table's column families once, up front:
      // getTableDescriptor() goes over the network and is expensive.
      Map<HTable, HColumnDescriptor[]> table_columnFamilies =
          new HashMap<HTable, HColumnDescriptor[]>();
      for (int i = 0; i < htables.length; i++) {
        table_columnFamilies.put(htables[i], htables[i].getTableDescriptor().getColumnFamilies());
      }

      while (end - start <= lifeCycle) {
        // pick one table at random when there are several
        HTable table = htables.length == 1 ? htables[0] : htables[(int) (Math.random() * htables.length)];
        long s1 = System.currentTimeMillis();
        double r = Math.random();
        HColumnDescriptor[] columnFamilies = table_columnFamilies.get(table);
        Put put = generatePut(threadName, columnFamilies, colNames, operationCount);
        table.put(put);
        if (r > 0.999) {
          // sample the latency of roughly one put in a thousand
          System.out.println(System.currentTimeMillis() - s1);
        }
        operationCount++;
        end = System.currentTimeMillis();
      }

      if (conn != null) {
        conn.close();
      }
    } catch (Exception ex) {
      ex.printStackTrace();
    }
    System.out.println(threadName + ": ended with operation count:" + operationCount);
  }

  private Put generatePut(String threadName, HColumnDescriptor[] columnFamilies,
      String[] colNames, int operationCount) {
    Put put = new Put(Bytes.toBytes(threadName + "_" + operationCount));
    for (int i = 0; i < columnFamilies.length; i++) {
      String familyName = columnFamilies[i].getNameAsString();
      //System.out.println("familyName:" + familyName);
      for (int j = 0; j < colNames.length; j++) {
        if (familyName.equals(colNames[j])) {
          String columnName = familyName + (int) (Math.floor(Math.random() * 5 + 10 * j));
          String val = "" + columnName.hashCode() % 100;
          put.add(Bytes.toBytes(familyName), Bytes.toBytes(columnName), Bytes.toBytes(val));
        }
      }
    }
    //System.out.println(put.toString());
    return put;
  }
}
In short, we first create some HBase tables with two column families each, then spawn a number of threads that write data for one minute using either the getConnection strategy (CREATEWITHCONF) or the createConnection strategy (CREATEWITHCONN). How many tables to write, for how long, what to write, and how are all adjustable; here, for instance, each thread either always writes one fixed table (SEPERATE) or picks a table at random (OVERLAP). Pay attention to the part of the code that fetches the column-family information of the target tables up front: getTableDescriptor incurs network traffic, so call it as rarely as possible to avoid needless overhead (in practice this overhead is enormous), as illustrated in the sketch below.
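To make that last point concrete, a hedged before/after sketch (writing and buildPut are placeholders for the loop condition and the row-assembly logic):

// Anti-pattern: one metadata round-trip per iteration of the write loop.
while (writing) {
  HColumnDescriptor[] cfs = table.getTableDescriptor().getColumnFamilies();
  table.put(buildPut(cfs));
}

// Better: pay the metadata round-trip once, outside the loop.
HColumnDescriptor[] cachedCfs = table.getTableDescriptor().getColumnFamilies();
while (writing) {
  table.put(buildPut(cachedCfs));
}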
The detailed results are in the table below; the exact values vary with network jitter and the like. In summary: at high concurrency (more than 30 threads), getConnection is clearly faster than createConnection; at low concurrency (10 threads or fewer), createConnection has a slight edge. Furthermore, with getConnection, writing a single table is clearly faster than writing many tables under high concurrency, while at low concurrency the difference is not obvious; with createConnection, writing one table is roughly as fast as writing many regardless of concurrency, and sometimes even slower.
These observations do not entirely match the code analysis. The discrepancies are two:
(1) Why does createConnection win at low thread counts? In theory they should be on par. I have no convincing explanation; the question stays open.
(2) Why is createConnection so much slower at high thread counts? My guess is that the server-side ZK ensemble is overloaded by having to maintain so many connections, and even with multiple RegionServers handling the actual writes, performance still drops.
Both points await further investigation. Even so, the practical advice stands: when using the Java API with HBase, and especially under high concurrency, prefer the getConnection approach for creating HTable objects, and avoid keeping unnecessary connections around wasting resources.
thread_count | table_count | conn_strategy | write_strategy | interval | result (ops per thread * threads = total ops) |
1 | 1 | CONF | OVERLAP | 60s | 10000*1=10000 |
5 | 1 | CONF | OVERLAP | 60s | 11000*5=55000 |
10 | 1 | CONF | OVERLAP | 60s | 12000*10=120000 |
30 | 1 | CONF | OVERLAP | 60s | 8300*30=249000 |
60 | 1 | CONF | OVERLAP | 60s | 6000*60=360000 |
100 | 1 | CONF | OVERLAP | 60s | 4700*100=470000 |
1 | 1 | CONN | OVERLAP | 60s | 12000*1=12000 |
5 | 1 | CONN | OVERLAP | 60s | 16000*5=80000 |
10 | 1 | CONN | OVERLAP | 60s | 10000*10=100000 |
30 | 1 | CONN | OVERLAP | 60s | 2500*30=75000 |
60 | 1 | CONN | OVERLAP | 60s | 1200*60=72000 |
100 | 1 | CONN | OVERLAP | 60s | 1000*100=100000 |
5 | 5 | CONF | SEPERATE | 60s | 10600*5=53000 |
10 | 10 | CONF | SEPERATE | 60s | 11900*10=119000 |
30 | 30 | CONF | SEPERATE | 60s | 6900*30=207000 |
60 | 60 | CONF | SEPERATE | 60s | 3650*60=219000 |
100 | 100 | CONF | SEPERATE | 60s | 2500*100=250000 |
5 | 5 | CONN | SEPERATE | 60s | 14000*5=70000 |
10 | 10 | CONN | SEPERATE | 60s | 10500*10=105000 |
30 | 30 | CONN | SEPERATE | 60s | 3250*30=97500 |
60 | 60 | CONN | SEPERATE | 60s | 1450*60=87000 |
100 | 100 | CONN | SEPERATE | 60s | 930*100=93000 |
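To close, a minimal sketch of the recommended shape for a high-concurrency writer under this post's assumptions (HTable itself is not thread-safe, so each thread builds its own HTable from a shared final conf, while the single conf-keyed HConnection is reused behind the scenes; test0 is a placeholder):

for (int i = 0; i < threadCount; i++) {
  new Thread(new Runnable() {
    @Override
    public void run() {
      try {
        // All threads share the one HConnection cached in CONNECTION_INSTANCES.
        HTable table = new HTable(conf, TableName.valueOf("test0"));
        try {
          // ... table.put(...) in a loop ...
        } finally {
          table.close(); // close the HTable, but not the shared connection
        }
      } catch (IOException e) {
        e.printStackTrace();
      }
    }
  }).start();
}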