Using HBase Shell Commands
The HBase shell client offers many commands. This post reviews and summarizes some of them; I hope to make progress together with readers and welcome your feedback. The HBase version used here is HBase 1.2.0-cdh5.10.0.
The HBase shell commands are summarized as follows:
hbase(main):001:0> help
HBase Shell, version 1.2.0-cdh5.10.0, rUnknown, Fri Jan 20 12:13:18 PST 2017
Type 'help "COMMAND"', (e.g. 'help "get"' -- the quotes are necessary) for help on a specific command.
Commands are grouped. Type 'help "COMMAND_GROUP"', (e.g. 'help "general"') for help on a command group.

COMMAND GROUPS:

Group name: general
Commands: status, table_help, version, whoami

Group name: ddl
Commands: alter, alter_async, alter_status, create, describe, disable, disable_all, drop, drop_all, enable, enable_all, exists, get_table, is_disabled, is_enabled, list, locate_region, show_filters

Group name: namespace
Commands: alter_namespace, create_namespace, describe_namespace, drop_namespace, list_namespace, list_namespace_tables

Group name: dml
Commands: append, count, delete, deleteall, get, get_counter, get_splits, incr, put, scan, truncate, truncate_preserve

Group name: tools
Commands: assign, balance_switch, balancer, balancer_enabled, catalogjanitor_enabled, catalogjanitor_run, catalogjanitor_switch, close_region, compact, compact_mob, compact_rs, flush, major_compact, major_compact_mob, merge_region, move, normalize, normalizer_enabled, normalizer_switch, split, trace, unassign, wal_roll, zk_dump

Group name: replication
Commands: add_peer, append_peer_tableCFs, disable_peer, disable_table_replication, enable_peer, enable_table_replication, get_peer_config, list_peer_configs, list_peers, list_replicated_tables, remove_peer, remove_peer_tableCFs, set_peer_tableCFs, show_peer_tableCFs, update_peer_config

Group name: snapshots
Commands: clone_snapshot, delete_all_snapshot, delete_snapshot, list_snapshots, restore_snapshot, snapshot

Group name: configuration
Commands: update_all_config, update_config

Group name: quotas
Commands: list_quotas, set_quota

Group name: security
Commands: grant, list_security_capabilities, revoke, user_permission

Group name: procedures
Commands: abort_procedure, list_procedures

Group name: visibility labels
Commands: add_labels, clear_auths, get_auths, list_labels, set_auths, set_visibility

SHELL USAGE:
Quote all names in HBase Shell such as table and column names. Commas delimit command parameters. Type <RETURN> after entering a command to run it.

Dictionaries of configuration used in the creation and alteration of tables are Ruby Hashes. They look like this: {'key1' => 'value1', 'key2' => 'value2', ...} and are opened and closed with curley-braces. Key/values are delimited by the '=>' character combination. Usually keys are predefined constants such as NAME, VERSIONS, COMPRESSION, etc. Constants do not need to be quoted. Type 'Object.constants' to see a (messy) list of all constants in the environment.

If you are using binary keys or values and need to enter them in the shell, use double-quote'd hexadecimal representation. For example:

hbase> get 't1', "key\x03\x3f\xcd"
hbase> get 't1', "key\003\023\011"
hbase> put 't1', "test\xef\xff", 'f1:', "\x01\x33\x40"

The HBase shell is the (J)Ruby IRB with the above HBase-specific commands added.
For more on the HBase Shell, see http://hbase.apache.org/book.html
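The double-quoted hexadecimal and octal escapes above map directly to raw bytes. A quick Python sketch (outside the HBase shell, purely to illustrate which byte values those escape sequences denote):

```python
# The HBase shell passes double-quoted strings through as raw bytes, so
# "key\x03\x3f\xcd" names a 6-byte row key. Python byte literals use the
# same escape syntax, which makes it easy to inspect the actual bytes.
hex_key = b"key\x03\x3f\xcd"   # hexadecimal escapes, as in the first example
oct_key = b"key\003\023\011"   # octal escapes, as in the second example

print(list(hex_key))           # byte values of the first key
print(len(hex_key), len(oct_key))
```

Each escape contributes exactly one byte, so both keys above are six bytes long even though they are written with many more characters.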
1. Group name: general
The status command shows basic information about the cluster's state. Usage:
hbase(main):004:0> help "status"
Show cluster status. Can be 'summary', 'simple', 'detailed', or 'replication'. The
default is 'summary'. Examples:
hbase> status
hbase> status 'simple'
hbase> status 'summary'
hbase> status 'detailed'
hbase> status 'replication'
hbase> status 'replication', 'source'
hbase> status 'replication', 'sink'
Running status is the same as running status 'summary'; both show the most basic information:
hbase(main):016:0> status
1 active master, 0 backup masters, 3 servers, 0 dead, 13.0000 average load
hbase(main):017:0> status 'summary'
1 active master, 0 backup masters, 3 servers, 0 dead, 13.0000 average load
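When the summary line is captured from a script (for example via `echo "status" | hbase shell`), it is easy to post-process. A small Python sketch, where the parsing logic is my own and not part of HBase, that pulls the counts and average load out of a captured summary line:

```python
import re

# Parse a captured `status` summary line into named fields.
# The line format here is taken from the transcript above.
summary = "1 active master, 0 backup masters, 3 servers, 0 dead, 13.0000 average load"

pattern = (r"(?P<active>\d+) active master, (?P<backup>\d+) backup masters, "
           r"(?P<servers>\d+) servers, (?P<dead>\d+) dead, "
           r"(?P<load>[\d.]+) average load")
m = re.match(pattern, summary)
fields = {k: float(v) if k == "load" else int(v) for k, v in m.groupdict().items()}
print(fields)  # {'active': 1, 'backup': 0, 'servers': 3, 'dead': 0, 'load': 13.0}
```

A monitoring script could alert when `fields['dead'] > 0` or when the average load drifts far from its usual value.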
Running status 'simple' shows more detail than the plain status command; among other things, you can see the load on each region server.
hbase(main):001:0> status 'simple'
active master: cdh-27:60000 1551245927380
1 backup masters
cdh-25:60000 1551245936877
3 live servers
cdh-26:60020 1551245928059
requestsPerSecond=0.0, numberOfOnlineRegions=84, usedHeapMB=12693, maxHeapMB=31219, numberOfStores=84, numberOfStorefiles=188,
storefileUncompressedSizeMB=475307, storefileSizeMB=475451, compressionRatio=1.0003, memstoreSizeMB=0, storefileIndexSizeMB=0,
readRequestsCount=471023658, writeRequestsCount=337460431, rootIndexSizeKB=1891, totalStaticIndexSizeKB=612080, totalStaticBloomSizeKB=296867,
totalCompactingKVs=2224183692, currentCompactedKVs=2224183692, compactionProgressPct=1.0, coprocessors=[SecureBulkLoadEndpoint]
cdh-27:60020 1551245927337
requestsPerSecond=0.0, numberOfOnlineRegions=82, usedHeapMB=18486, maxHeapMB=31219, numberOfStores=82, numberOfStorefiles=153,
storefileUncompressedSizeMB=468291, storefileSizeMB=468430, compressionRatio=1.0003, memstoreSizeMB=0, storefileIndexSizeMB=0, readRequestsCount=637863799,
writeRequestsCount=205654269, rootIndexSizeKB=1777, totalStaticIndexSizeKB=610759, totalStaticBloomSizeKB=286281, totalCompactingKVs=2986633048,
currentCompactedKVs=2986633048, compactionProgressPct=1.0, coprocessors=[SecureBulkLoadEndpoint]
cdh-25:60020 1551245936859
requestsPerSecond=0.0, numberOfOnlineRegions=83, usedHeapMB=21088, maxHeapMB=29815, numberOfStores=83, numberOfStorefiles=174,
storefileUncompressedSizeMB=468736, storefileSizeMB=468878, compressionRatio=1.0003, memstoreSizeMB=0, storefileIndexSizeMB=0,
readRequestsCount=577718232, writeRequestsCount=294271083, rootIndexSizeKB=1687, totalStaticIndexSizeKB=607484, totalStaticBloomSizeKB=293495,
totalCompactingKVs=2598647849, currentCompactedKVs=2598647849, compactionProgressPct=1.0, coprocessors=[MultiRowMutationEndpoint, SecureBulkLoadEndpoint]
0 dead servers
Aggregate load: 0, regions: 249
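The compressionRatio in the output above appears to be simply storefileSizeMB divided by storefileUncompressedSizeMB (this is my reading of the numbers, not something the help text states). A quick arithmetic check against the values reported for cdh-26:

```python
# compressionRatio as reported by `status 'simple'` looks like the on-disk
# storefile size over the uncompressed size (values from cdh-26 above).
storefile_size_mb = 475451
uncompressed_mb = 475307

ratio = storefile_size_mb / uncompressed_mb
print(round(ratio, 4))  # 1.0003, matching the reported compressionRatio
```

A ratio slightly above 1.0 means the storefiles are marginally larger on disk than uncompressed, which is expected when compression is disabled and only block metadata overhead remains.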
Running status 'detailed' shows even more detailed information, down to the level of each region:
hbase(main):019:0> status 'detailed'
version 1.2.0-cdh5.10.0
0 regionsInTransition
active master: rhel1009161:60000 1551862927288
0 backup masters
master coprocessors: []
3 live servers
rhel1009167:60020 1551862927303
requestsPerSecond=0.0, numberOfOnlineRegions=13, usedHeapMB=1725, maxHeapMB=4096, numberOfStores=13, numberOfStorefiles=9, storefileUncompressedSizeMB=9,
storefileSizeMB=2, compressionRatio=0.2222, memstoreSizeMB=0, storefileIndexSizeMB=0, readRequestsCount=92468, writeRequestsCount=4256, rootIndexSizeKB=9,
totalStaticIndexSizeKB=4, totalStaticBloomSizeKB=264, totalCompactingKVs=48, currentCompactedKVs=48, compactionProgressPct=1.0, coprocessors=[AggregateImplementation,
GroupedAggregateRegionObserver, Indexer, MetaDataEndpointImpl, MultiRowMutationEndpoint, ScanRegionObserver, SecureBulkLoadEndpoint, ServerCachingEndpointImpl,
UngroupedAggregateRegionObserver]
"logs,01,1550475903153.90a78af1f727bc227fae4eb110bf9f81."
numberOfStores=1, numberOfStorefiles=0, storefileUncompressedSizeMB=0, lastMajorCompactionTimestamp=0, storefileSizeMB=0, memstoreSizeMB=0, storefileIndexSizeMB=0,
readRequestsCount=0, writeRequestsCount=0, rootIndexSizeKB=0, totalStaticIndexSizeKB=0, totalStaticBloomSizeKB=0, totalCompactingKVs=0, currentCompactedKVs=0,
compactionProgressPct=NaN, completeSequenceId=-1, dataLocality=0.0
"logs,03,1550475903153.ed0a1db1482cdc32c5c81db7c13d0640."
numberOfStores=1, numberOfStorefiles=0, storefileUncompressedSizeMB=0, lastMajorCompactionTimestamp=0, storefileSizeMB=0, memstoreSizeMB=0, storefileIndexSizeMB=0,
readRequestsCount=0, writeRequestsCount=0, rootIndexSizeKB=0, totalStaticIndexSizeKB=0, totalStaticBloomSizeKB=0, totalCompactingKVs=0, currentCompactedKVs=0,
compactionProgressPct=NaN, completeSequenceId=-1, dataLocality=0.0
rhel1009179:60020 1551862926917
requestsPerSecond=0.0, numberOfOnlineRegions=12, usedHeapMB=789, maxHeapMB=4096, numberOfStores=14, numberOfStorefiles=11, storefileUncompressedSizeMB=0,
storefileSizeMB=0, memstoreSizeMB=0, storefileIndexSizeMB=0, readRequestsCount=4148804, writeRequestsCount=40, rootIndexSizeKB=2, totalStaticIndexSizeKB=1,
totalStaticBloomSizeKB=10, totalCompactingKVs=12, currentCompactedKVs=12, compactionProgressPct=1.0, coprocessors=[AggregateImplementation, GroupedAggregateRegionObserver,
Indexer, MultiRowMutationEndpoint, ScanRegionObserver, SecureBulkLoadEndpoint, SequenceRegionObserver, ServerCachingEndpointImpl, UngroupedAggregateRegionObserver]
"logs,17,1550475903153.049ef09d700b5ccfe1f9dc81eb67b622."
numberOfStores=1, numberOfStorefiles=1, storefileUncompressedSizeMB=0, lastMajorCompactionTimestamp=0, storefileSizeMB=0, memstoreSizeMB=0, storefileIndexSizeMB=0,
readRequestsCount=15951, writeRequestsCount=0, rootIndexSizeKB=0, totalStaticIndexSizeKB=0, totalStaticBloomSizeKB=2, totalCompactingKVs=0, currentCompactedKVs=0,
compactionProgressPct=NaN, completeSequenceId=-1, dataLocality=1.0
"logs,19,1550475903153.abea5829dfde5fb41ffeb0850d9bf887."
numberOfStores=1, numberOfStorefiles=1, storefileUncompressedSizeMB=0, lastMajorCompactionTimestamp=0, storefileSizeMB=0, memstoreSizeMB=0, storefileIndexSizeMB=0,
readRequestsCount=15951, writeRequestsCount=0, rootIndexSizeKB=0, totalStaticIndexSizeKB=0, totalStaticBloomSizeKB=2, totalCompactingKVs=0, currentCompactedKVs=0,
compactionProgressPct=NaN, completeSequenceId=-1, dataLocality=1.0
rhel1009173:60020 1551862927060
requestsPerSecond=0.0, numberOfOnlineRegions=14, usedHeapMB=416, maxHeapMB=4096, numberOfStores=14, numberOfStorefiles=9, storefileUncompressedSizeMB=0, storefileSizeMB=0,
memstoreSizeMB=0, storefileIndexSizeMB=0, readRequestsCount=138051, writeRequestsCount=4636, rootIndexSizeKB=0, totalStaticIndexSizeKB=0, totalStaticBloomSizeKB=10,
totalCompactingKVs=62, currentCompactedKVs=62, compactionProgressPct=1.0, coprocessors=[AggregateImplementation, GroupedAggregateRegionObserver, Indexer, MetaDataEndpointImpl,
MetaDataRegionObserver, ScanRegionObserver, SecureBulkLoadEndpoint, SequenceRegionObserver, ServerCachingEndpointImpl, UngroupedAggregateRegionObserver]
"logs,07,1550475903153.41822d6c6b7cd327899a702042891843."
numberOfStores=1, numberOfStorefiles=0, storefileUncompressedSizeMB=0, lastMajorCompactionTimestamp=0, storefileSizeMB=0, memstoreSizeMB=0, storefileIndexSizeMB=0,
readRequestsCount=0, writeRequestsCount=0, rootIndexSizeKB=0, totalStaticIndexSizeKB=0, totalStaticBloomSizeKB=0, totalCompactingKVs=0, currentCompactedKVs=0,
compactionProgressPct=NaN, completeSequenceId=-1, dataLocality=0.0
0 dead servers
Running status 'replication', status 'replication', 'source', and status 'replication', 'sink' all show replication status; they are not covered in depth here:
hbase(main):022:0> status 'replication'
version 1.2.0-cdh5.10.0
3 live servers
rhel1009167:
SOURCE:
SINK : AgeOfLastAppliedOp=0, TimeStampsOfLastAppliedOp=Wed Mar 06 17:02:10 CST 2019
rhel1009179:
SOURCE:
SINK : AgeOfLastAppliedOp=0, TimeStampsOfLastAppliedOp=Wed Mar 06 17:02:10 CST 2019
rhel1009173:
SOURCE:
SINK : AgeOfLastAppliedOp=0, TimeStampsOfLastAppliedOp=Wed Mar 06 17:02:10 CST 2019
hbase(main):023:0> status 'replication', 'source'
version 1.2.0-cdh5.10.0
3 live servers
rhel1009167:
SOURCE:
rhel1009179:
SOURCE:
rhel1009173:
SOURCE:
hbase(main):024:0> status 'replication', 'sink'
version 1.2.0-cdh5.10.0
3 live servers
rhel1009167:
SINK : AgeOfLastAppliedOp=0, TimeStampsOfLastAppliedOp=Wed Mar 06 17:02:10 CST 2019
rhel1009179:
SINK : AgeOfLastAppliedOp=0, TimeStampsOfLastAppliedOp=Wed Mar 06 17:02:10 CST 2019
rhel1009173:
SINK : AgeOfLastAppliedOp=0, TimeStampsOfLastAppliedOp=Wed Mar 06 17:02:10 CST 2019
The table_help command explains how to use the table-related commands. Usage:
hbase(main):026:0> table_help
Help for table-reference commands.

You can either create a table via 'create' and then manipulate the table via commands like 'put', 'get', etc. See the standard help information for how to use each of these commands.

However, as of 0.96, you can also get a reference to a table, on which you can invoke commands. For instance, you can get create a table and keep around a reference to it via:

hbase> t = create 't', 'cf'

Or, if you have already created the table, you can get a reference to it:

hbase> t = get_table 't'

You can do things like call 'put' on the table:

hbase> t.put 'r', 'cf:q', 'v'

which puts a row 'r' with column family 'cf', qualifier 'q' and value 'v' into table t. To read the data out, you can scan the table:

hbase> t.scan

which will read all the rows in table 't'. Essentially, any command that takes a table name can also be done via table reference. Other commands include things like: get, delete, deleteall, get_all_columns, get_counter, count, incr. These functions, along with the standard JRuby object methods are also available via tab completion.

For more information on how to use each of these commands, you can also just type:

hbase> t.help 'scan'

which will output more information on how to use that command.

You can also do general admin actions directly on a table; things like enable, disable, flush and drop just by typing:

hbase> t.enable
hbase> t.flush
hbase> t.disable
hbase> t.drop

Note that after dropping a table, your reference to it becomes useless and further usage is undefined (and not recommended).
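The table-reference pattern described above (a handle object whose methods forward to commands that would otherwise take a table name) can be sketched in plain Python; the class name ShellTable and its in-memory store are hypothetical and only illustrate the idea:

```python
# Toy illustration of the table-reference idea: commands that take a
# table name (put/get/scan) become methods bound to one table handle.
class ShellTable:
    def __init__(self, name, store):
        self.name = name
        self.store = store          # shared {(row, col): value} mapping

    def put(self, row, col, value):
        self.store[(row, col)] = value

    def get(self, row, col):
        return self.store.get((row, col))

    def scan(self):
        return dict(self.store)    # all cells, like an unfiltered scan

t = ShellTable('t', {})            # like: t = get_table 't'
t.put('r', 'cf:q', 'v')            # like: t.put 'r', 'cf:q', 'v'
print(t.get('r', 'cf:q'))          # v
print(t.scan())                    # {('r', 'cf:q'): 'v'}
```

The convenience is the same as in the shell: once the handle exists, the table name no longer has to be repeated on every command.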
Using the version command:
hbase(main):028:0> version
1.2.0-cdh5.10.0, rUnknown, Fri Jan 20 12:13:18 PST 2017
Using the whoami command:
hbase(main):029:0> whoami
root (auth:SIMPLE)
groups: root
2. Group name: ddl
As the name suggests, this group contains HBase's DDL (data definition language) commands:
hbase(main):030:0> help 'ddl'
Command: alter
Alter a table. If the "hbase.online.schema.update.enable" property is set to
false, then the table must be disabled (see help 'disable'). If the
"hbase.online.schema.update.enable" property is set to true, tables can be
altered without disabling them first. Altering enabled tables has caused problems
in the past, so use caution and test it before using in production. You can use the alter command to add,
modify or delete column families or change table configuration options.
Column families work in a similar way as the 'create' command. The column family
specification can either be a name string, or a dictionary with the NAME attribute.
Dictionaries are described in the output of the 'help' command, with no arguments.

For example, to change or add the 'f1' column family in table 't1' from current value to keep a maximum of 5 cell VERSIONS, do:

hbase> alter 't1', NAME => 'f1', VERSIONS => 5

You can operate on several column families:

hbase> alter 't1', 'f1', {NAME => 'f2', IN_MEMORY => true}, {NAME => 'f3', VERSIONS => 5}

To delete the 'f1' column family in table 'ns1:t1', use one of:

hbase> alter 'ns1:t1', NAME => 'f1', METHOD => 'delete'
hbase> alter 'ns1:t1', 'delete' => 'f1'

You can also change table-scope attributes like MAX_FILESIZE, READONLY,
MEMSTORE_FLUSHSIZE, DURABILITY, etc. These can be put at the end;
for example, to change the max size of a region to 128MB, do:

hbase> alter 't1', MAX_FILESIZE => '134217728'

You can add a table coprocessor by setting a table coprocessor attribute:

hbase> alter 't1', 'coprocessor'=>'hdfs:///foo.jar|com.foo.FooRegionObserver|1001|arg1=1,arg2=2'

Since you can have multiple coprocessors configured for a table, a sequence number will be automatically appended to the attribute name to uniquely identify it. The coprocessor attribute must match the pattern below in order for the framework to understand how to load the coprocessor classes:

[coprocessor jar file location] | class name | [priority] | [arguments]

You can also set configuration settings specific to this table or column family:

hbase> alter 't1', CONFIGURATION => {'hbase.hregion.scan.loadColumnFamiliesOnDemand' => 'true'}
hbase> alter 't1', {NAME => 'f2', CONFIGURATION => {'hbase.hstore.blockingStoreFiles' => '10'}}

You can also remove a table-scope attribute:

hbase> alter 't1', METHOD => 'table_att_unset', NAME => 'MAX_FILESIZE'
hbase> alter 't1', METHOD => 'table_att_unset', NAME => 'coprocessor$1'

You can also set REGION_REPLICATION:

hbase> alter 't1', {REGION_REPLICATION => 2}

There could be more than one alteration in one command:

hbase> alter 't1', { NAME => 'f1', VERSIONS => 3 },
  { MAX_FILESIZE => '134217728' }, { METHOD => 'delete', NAME => 'f2' },
  OWNER => 'johndoe', METADATA => { 'mykey' => 'myvalue' }

Command: alter_async
Alter column family schema, does not wait for all regions to receive the
schema changes. Pass table name and a dictionary specifying new column
family schema. Dictionaries are described on the main help command output.
Dictionary must include name of column family to alter.

For example, to change or add the 'f1' column family in table 't1' from defaults to instead keep a maximum of 5 cell VERSIONS, do:

hbase> alter_async 't1', NAME => 'f1', VERSIONS => 5

To delete the 'f1' column family in table 'ns1:t1', do:

hbase> alter_async 'ns1:t1', NAME => 'f1', METHOD => 'delete'

or a shorter version:

hbase> alter_async 'ns1:t1', 'delete' => 'f1'

You can also change table-scope attributes like MAX_FILESIZE, MEMSTORE_FLUSHSIZE, READONLY, and DEFERRED_LOG_FLUSH.

For example, to change the max size of a family to 128MB, do:

hbase> alter 't1', METHOD => 'table_att', MAX_FILESIZE => '134217728'

There could be more than one alteration in one command:

hbase> alter 't1', {NAME => 'f1'}, {NAME => 'f2', METHOD => 'delete'}

To check if all the regions have been updated, use alter_status <table_name>

Command: alter_status
Get the status of the alter command. Indicates the number of regions of the table that have received the updated schema. Pass table name.

hbase> alter_status 't1'
hbase> alter_status 'ns1:t1'

Command: create
Creates a table. Pass a table name, and a set of column family
specifications (at least one), and, optionally, table configuration.
Column specification can be a simple string (name), or a dictionary
(dictionaries are described below in main help output), necessarily
including NAME attribute.
Examples:

Create a table with namespace=ns1 and table qualifier=t1
hbase> create 'ns1:t1', {NAME => 'f1', VERSIONS => 5}

Create a table with namespace=default and table qualifier=t1
hbase> create 't1', {NAME => 'f1'}, {NAME => 'f2'}, {NAME => 'f3'}
hbase> # The above in shorthand would be the following:
hbase> create 't1', 'f1', 'f2', 'f3'
hbase> create 't1', {NAME => 'f1', VERSIONS => 1, TTL => 2592000, BLOCKCACHE => true}
hbase> create 't1', {NAME => 'f1', CONFIGURATION => {'hbase.hstore.blockingStoreFiles' => '10'}}

Table configuration options can be put at the end. Examples:

hbase> create 'ns1:t1', 'f1', SPLITS => ['10', '20', '30', '40']
hbase> create 't1', 'f1', SPLITS => ['10', '20', '30', '40']
hbase> create 't1', 'f1', SPLITS_FILE => 'splits.txt', OWNER => 'johndoe'
hbase> create 't1', {NAME => 'f1', VERSIONS => 5}, METADATA => { 'mykey' => 'myvalue' }
hbase> # Optionally pre-split the table into NUMREGIONS, using
hbase> # SPLITALGO ("HexStringSplit", "UniformSplit" or classname)
hbase> create 't1', 'f1', {NUMREGIONS => 15, SPLITALGO => 'HexStringSplit'}
hbase> create 't1', 'f1', {NUMREGIONS => 15, SPLITALGO => 'HexStringSplit', REGION_REPLICATION => 2, CONFIGURATION => {'hbase.hregion.scan.loadColumnFamiliesOnDemand' => 'true'}}
hbase> create 't1', {NAME => 'f1', DFS_REPLICATION => 1}

You can also keep around a reference to the created table:

hbase> t1 = create 't1', 'f1'

Which gives you a reference to the table named 't1', on which you can then call methods.

Command: describe
Describe the named table. For example:
hbase> describe 't1'
hbase> describe 'ns1:t1'

Alternatively, you can use the abbreviated 'desc' for the same thing.

hbase> desc 't1'
hbase> desc 'ns1:t1'

Command: disable
Start disable of named table:
hbase> disable 't1'
hbase> disable 'ns1:t1'

Command: disable_all
Disable all of tables matching the given regex:

hbase> disable_all 't.*'
hbase> disable_all 'ns:t.*'
hbase> disable_all 'ns:.*'

Command: drop
Drop the named table. Table must first be disabled:
hbase> drop 't1'
hbase> drop 'ns1:t1'

Command: drop_all
Drop all of the tables matching the given regex:

hbase> drop_all 't.*'
hbase> drop_all 'ns:t.*'
hbase> drop_all 'ns:.*'

Command: enable
Start enable of named table:
hbase> enable 't1'
hbase> enable 'ns1:t1'

Command: enable_all
Enable all of the tables matching the given regex:

hbase> enable_all 't.*'
hbase> enable_all 'ns:t.*'
hbase> enable_all 'ns:.*'

Command: exists
Does the named table exist?
hbase> exists 't1'
hbase> exists 'ns1:t1'

Command: get_table
Get the given table name and return it as an actual object to
be manipulated by the user. See table.help for more information
on how to use the table.
Eg.

hbase> t1 = get_table 't1'
hbase> t1 = get_table 'ns1:t1'

returns the table named 't1' as a table object. You can then do

hbase> t1.help

which will then print the help for that table.

Command: is_disabled
Is named table disabled? For example:
hbase> is_disabled 't1'
hbase> is_disabled 'ns1:t1'

Command: is_enabled
Is named table enabled? For example:

hbase> is_enabled 't1'
hbase> is_enabled 'ns1:t1'

Command: list
List all tables in hbase. Optional regular expression parameter could be used to filter the output. Examples:

hbase> list
hbase> list 'abc.*'
hbase> list 'ns:abc.*'
hbase> list 'ns:.*'

Command: locate_region
Locate the region given a table name and a row-key

hbase> locate_region 'tableName', 'key0'

Command: show_filters
Show all the filters in hbase. Example:

hbase> show_filters

ColumnPrefixFilter
TimestampsFilter
PageFilter
.....
KeyOnlyFilter
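The alter command merges the attributes you pass into the existing column-family descriptor rather than replacing the whole family, and METHOD => 'delete' removes a family. A rough Python sketch of that merge semantics (my own simplification, not HBase code):

```python
# Simplified model: a table schema is a dict of column family -> attributes.
# `alter` with NAME merges attributes; METHOD => 'delete' removes a family.
def alter(schema, spec):
    if spec.get('METHOD') == 'delete':
        schema.pop(spec['NAME'], None)
    else:
        schema.setdefault(spec['NAME'], {}).update(
            {k: v for k, v in spec.items() if k != 'NAME'})
    return schema

schema = {'f1': {'VERSIONS': 1}, 'f2': {}}
alter(schema, {'NAME': 'f1', 'VERSIONS': 5})       # like: alter 't1', NAME => 'f1', VERSIONS => 5
alter(schema, {'NAME': 'f2', 'METHOD': 'delete'})  # like: alter 'ns1:t1', NAME => 'f2', METHOD => 'delete'
print(schema)  # {'f1': {'VERSIONS': 5}}
```

The same merge-versus-delete distinction explains why several alterations can be chained in one alter command: each spec is applied to the schema in turn.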
3. Group name: namespace
In HBase, a namespace is a logical grouping of tables, similar to a database in an RDBMS; it makes it convenient to partition tables by business domain.
hbase(main):045:0> help 'namespace'
Command: alter_namespace
Alter namespace properties.

To add/modify a property:

hbase> alter_namespace 'ns1', {METHOD => 'set', 'PROPERTY_NAME' => 'PROPERTY_VALUE'}

To delete a property:

hbase> alter_namespace 'ns1', {METHOD => 'unset', NAME=>'PROPERTY_NAME'}

Command: create_namespace
Create namespace; pass namespace name, and optionally a dictionary of namespace configuration. Examples:

hbase> create_namespace 'ns1'
hbase> create_namespace 'ns1', {'PROPERTY_NAME'=>'PROPERTY_VALUE'}

Command: describe_namespace
Describe the named namespace. For example:

hbase> describe_namespace 'ns1'

Command: drop_namespace
Drop the named namespace. The namespace must be empty.

Command: list_namespace
List all namespaces in hbase. Optional regular expression parameter could be used to filter the output. Examples:

hbase> list_namespace
hbase> list_namespace 'abc.*'

Command: list_namespace_tables
List all tables that are members of the namespace. Examples:

hbase> list_namespace_tables 'ns1'
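As the examples above show, namespace-qualified table names follow the 'ns:table' convention, with unqualified names falling into the default namespace. A small Python helper (hypothetical, for illustration only) that splits a name the way these examples read:

```python
# Split a table name like 'ns1:t1' into (namespace, qualifier);
# a bare name such as 't1' belongs to the 'default' namespace.
def split_table_name(name):
    ns, sep, qualifier = name.partition(':')
    return (ns, qualifier) if sep else ('default', name)

print(split_table_name('ns1:t1'))  # ('ns1', 't1')
print(split_table_name('t1'))      # ('default', 't1')
```

This is why the same command often appears twice in the help text, once as `'t1'` and once as `'ns1:t1'`: they are the same operation on tables in different namespaces.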
4. Group name: dml
DML is data manipulation language; here is the usage for HBase's DML commands:
hbase(main):053:0> help 'dml'
Command: append
Appends a cell 'value' at specified table/row/column coordinates.

hbase> append 't1', 'r1', 'c1', 'value', ATTRIBUTES=>{'mykey'=>'myvalue'}
hbase> append 't1', 'r1', 'c1', 'value', {VISIBILITY=>'PRIVATE|SECRET'}

The same commands also can be run on a table reference. Suppose you had a reference t to table 't1', the corresponding command would be:

hbase> t.append 'r1', 'c1', 'value', ATTRIBUTES=>{'mykey'=>'myvalue'}
hbase> t.append 'r1', 'c1', 'value', {VISIBILITY=>'PRIVATE|SECRET'}

Command: count
Count the number of rows in a table. Return value is the number of rows.
This operation may take a LONG time (Run '$HADOOP_HOME/bin/hadoop jar
hbase.jar rowcount' to run a counting mapreduce job). Current count is shown
every 1000 rows by default. Count interval may be optionally specified. Scan
caching is enabled on count scans by default. Default cache size is 10 rows.
If your rows are small in size, you may want to increase this
parameter. Examples:

hbase> count 'ns1:t1'
hbase> count 't1'
hbase> count 't1', INTERVAL => 100000
hbase> count 't1', CACHE => 1000
hbase> count 't1', INTERVAL => 10, CACHE => 1000

The same commands also can be run on a table reference. Suppose you had a reference t to table 't1', the corresponding commands would be:

hbase> t.count
hbase> t.count INTERVAL => 100000
hbase> t.count CACHE => 1000
hbase> t.count INTERVAL => 10, CACHE => 1000

Command: delete
Put a delete cell value at specified table/row/column and optionally
timestamp coordinates. Deletes must match the deleted cell's
coordinates exactly. When scanning, a delete cell suppresses older
versions. To delete a cell from 't1' at row 'r1' under column 'c1'
marked with the time 'ts1', do:

hbase> delete 'ns1:t1', 'r1', 'c1', ts1
hbase> delete 't1', 'r1', 'c1', ts1
hbase> delete 't1', 'r1', 'c1', ts1, {VISIBILITY=>'PRIVATE|SECRET'}

The same command can also be run on a table reference. Suppose you had a reference t to table 't1', the corresponding command would be:

hbase> t.delete 'r1', 'c1', ts1
hbase> t.delete 'r1', 'c1', ts1, {VISIBILITY=>'PRIVATE|SECRET'}

Command: deleteall
Delete all cells in a given row; pass a table name, row, and optionally
a column and timestamp. Examples:

hbase> deleteall 'ns1:t1', 'r1'
hbase> deleteall 't1', 'r1'
hbase> deleteall 't1', 'r1', 'c1'
hbase> deleteall 't1', 'r1', 'c1', ts1
hbase> deleteall 't1', 'r1', 'c1', ts1, {VISIBILITY=>'PRIVATE|SECRET'}

The same commands also can be run on a table reference. Suppose you had a reference t to table 't1', the corresponding command would be:

hbase> t.deleteall 'r1'
hbase> t.deleteall 'r1', 'c1'
hbase> t.deleteall 'r1', 'c1', ts1
hbase> t.deleteall 'r1', 'c1', ts1, {VISIBILITY=>'PRIVATE|SECRET'}

Command: get
Get row or cell contents; pass table name, row, and optionally
a dictionary of column(s), timestamp, timerange and versions. Examples:

hbase> get 'ns1:t1', 'r1'
hbase> get 't1', 'r1'
hbase> get 't1', 'r1', {TIMERANGE => [ts1, ts2]}
hbase> get 't1', 'r1', {COLUMN => 'c1'}
hbase> get 't1', 'r1', {COLUMN => ['c1', 'c2', 'c3']}
hbase> get 't1', 'r1', {COLUMN => 'c1', TIMESTAMP => ts1}
hbase> get 't1', 'r1', {COLUMN => 'c1', TIMERANGE => [ts1, ts2], VERSIONS => 4}
hbase> get 't1', 'r1', {COLUMN => 'c1', TIMESTAMP => ts1, VERSIONS => 4}
hbase> get 't1', 'r1', {FILTER => "ValueFilter(=, 'binary:abc')"}
hbase> get 't1', 'r1', 'c1'
hbase> get 't1', 'r1', 'c1', 'c2'
hbase> get 't1', 'r1', ['c1', 'c2']
hbase> get 't1', 'r1', {COLUMN => 'c1', ATTRIBUTES => {'mykey'=>'myvalue'}}
hbase> get 't1', 'r1', {COLUMN => 'c1', AUTHORIZATIONS => ['PRIVATE','SECRET']}
hbase> get 't1', 'r1', {CONSISTENCY => 'TIMELINE'}
hbase> get 't1', 'r1', {CONSISTENCY => 'TIMELINE', REGION_REPLICA_ID => 1}

Besides the default 'toStringBinary' format, 'get' also supports custom formatting by column. A user can define a FORMATTER by adding it to the column name in the get specification. The FORMATTER can be stipulated:

1. either as a org.apache.hadoop.hbase.util.Bytes method name (e.g, toInt, toString)
2. or as a custom class followed by method name: e.g. 'c(MyFormatterClass).format'.

Example formatting cf:qualifier1 and cf:qualifier2 both as Integers:

hbase> get 't1', 'r1' {COLUMN => ['cf:qualifier1:toInt',
  'cf:qualifier2:c(org.apache.hadoop.hbase.util.Bytes).toInt'] }

Note that you can specify a FORMATTER by column only (cf:qualifier). You cannot specify a FORMATTER for all columns of a column family.

The same commands also can be run on a reference to a table (obtained via get_table or create_table). Suppose you had a reference t to table 't1', the corresponding commands would be:

hbase> t.get 'r1'
hbase> t.get 'r1', {TIMERANGE => [ts1, ts2]}
hbase> t.get 'r1', {COLUMN => 'c1'}
hbase> t.get 'r1', {COLUMN => ['c1', 'c2', 'c3']}
hbase> t.get 'r1', {COLUMN => 'c1', TIMESTAMP => ts1}
hbase> t.get 'r1', {COLUMN => 'c1', TIMERANGE => [ts1, ts2], VERSIONS => 4}
hbase> t.get 'r1', {COLUMN => 'c1', TIMESTAMP => ts1, VERSIONS => 4}
hbase> t.get 'r1', {FILTER => "ValueFilter(=, 'binary:abc')"}
hbase> t.get 'r1', 'c1'
hbase> t.get 'r1', 'c1', 'c2'
hbase> t.get 'r1', ['c1', 'c2']
hbase> t.get 'r1', {CONSISTENCY => 'TIMELINE'}
hbase> t.get 'r1', {CONSISTENCY => 'TIMELINE', REGION_REPLICA_ID => 1}

Command: get_counter
Return a counter cell value at specified table/row/column coordinates. A counter cell should be managed with atomic increment functions on HBase and the data should be binary encoded (as long value). Example:

hbase> get_counter 'ns1:t1', 'r1', 'c1'
hbase> get_counter 't1', 'r1', 'c1'

The same commands also can be run on a table reference. Suppose you had a reference t to table 't1', the corresponding command would be:

hbase> t.get_counter 'r1', 'c1'

Command: get_splits
Get the splits of the named table:

hbase> get_splits 't1'
hbase> get_splits 'ns1:t1'

The same commands also can be run on a table reference. Suppose you had a reference t to table 't1', the corresponding command would be:

hbase> t.get_splits

Command: incr
Increments a cell 'value' at specified table/row/column coordinates.
To increment a cell value in table 'ns1:t1' or 't1' at row 'r1' under column
'c1' by 1 (can be omitted) or 10 do:

hbase> incr 'ns1:t1', 'r1', 'c1'
hbase> incr 't1', 'r1', 'c1'
hbase> incr 't1', 'r1', 'c1', 1
hbase> incr 't1', 'r1', 'c1', 10
hbase> incr 't1', 'r1', 'c1', 10, {ATTRIBUTES=>{'mykey'=>'myvalue'}}
hbase> incr 't1', 'r1', 'c1', {ATTRIBUTES=>{'mykey'=>'myvalue'}}
hbase> incr 't1', 'r1', 'c1', 10, {VISIBILITY=>'PRIVATE|SECRET'}

The same commands also can be run on a table reference. Suppose you had a reference t to table 't1', the corresponding command would be:

hbase> t.incr 'r1', 'c1'
hbase> t.incr 'r1', 'c1', 1
hbase> t.incr 'r1', 'c1', 10, {ATTRIBUTES=>{'mykey'=>'myvalue'}}
hbase> t.incr 'r1', 'c1', 10, {VISIBILITY=>'PRIVATE|SECRET'}

Command: put
Put a cell 'value' at specified table/row/column and optionally
timestamp coordinates. To put a cell value into table 'ns1:t1' or 't1'
at row 'r1' under column 'c1' marked with the time 'ts1', do:

hbase> put 'ns1:t1', 'r1', 'c1', 'value'
hbase> put 't1', 'r1', 'c1', 'value'
hbase> put 't1', 'r1', 'c1', 'value', ts1
hbase> put 't1', 'r1', 'c1', 'value', {ATTRIBUTES=>{'mykey'=>'myvalue'}}
hbase> put 't1', 'r1', 'c1', 'value', ts1, {ATTRIBUTES=>{'mykey'=>'myvalue'}}
hbase> put 't1', 'r1', 'c1', 'value', ts1, {VISIBILITY=>'PRIVATE|SECRET'}

The same commands also can be run on a table reference. Suppose you had a reference t to table 't1', the corresponding command would be:

hbase> t.put 'r1', 'c1', 'value', ts1, {ATTRIBUTES=>{'mykey'=>'myvalue'}}

Command: scan
Scan a table; pass table name and optionally a dictionary of scanner
specifications. Scanner specifications may include one or more of:
TIMERANGE, FILTER, LIMIT, STARTROW, STOPROW, ROWPREFIXFILTER, TIMESTAMP,
MAXLENGTH or COLUMNS, CACHE or RAW, VERSIONS, ALL_METRICS or METRICS

If no columns are specified, all columns will be scanned.
To scan all members of a column family, leave the qualifier empty as in
'col_family'. The filter can be specified in two ways:
1. Using a filterString - more information on this is available in the
Filter Language document attached to the HBASE-4176 JIRA
2. Using the entire package name of the filter. If you wish to see metrics regarding the execution of the scan, the
ALL_METRICS boolean should be set to true. Alternatively, if you would
prefer to see only a subset of the metrics, the METRICS array can be
defined to include the names of only the metrics you care about. Some examples:

hbase> scan 'hbase:meta'
hbase> scan 'hbase:meta', {COLUMNS => 'info:regioninfo'}
hbase> scan 'ns1:t1', {COLUMNS => ['c1', 'c2'], LIMIT => 10, STARTROW => 'xyz'}
hbase> scan 't1', {COLUMNS => ['c1', 'c2'], LIMIT => 10, STARTROW => 'xyz'}
hbase> scan 't1', {COLUMNS => 'c1', TIMERANGE => [1303668804, 1303668904]}
hbase> scan 't1', {REVERSED => true}
hbase> scan 't1', {ALL_METRICS => true}
hbase> scan 't1', {METRICS => ['RPC_RETRIES', 'ROWS_FILTERED']}
hbase> scan 't1', {ROWPREFIXFILTER => 'row2', FILTER => "
(QualifierFilter (>=, 'binary:xyz')) AND (TimestampsFilter ( 123, 456))"}
hbase> scan 't1', {FILTER =>
org.apache.hadoop.hbase.filter.ColumnPaginationFilter.new(1, 0)}
hbase> scan 't1', {CONSISTENCY => 'TIMELINE'}
For setting the Operation Attributes
hbase> scan 't1', { COLUMNS => ['c1', 'c2'], ATTRIBUTES => {'mykey' => 'myvalue'}}
hbase> scan 't1', { COLUMNS => ['c1', 'c2'], AUTHORIZATIONS => ['PRIVATE','SECRET']}
For experts, there is an additional option -- CACHE_BLOCKS -- which
switches block caching for the scanner on (true) or off (false). By
default it is enabled.
Examples:
hbase> scan 't1', {COLUMNS => ['c1', 'c2'], CACHE_BLOCKS => false}

Also for experts, there is an advanced option -- RAW -- which instructs the
scanner to return all cells (including delete markers and uncollected deleted
cells). This option cannot be combined with requesting specific COLUMNS.
Disabled by default.
Example:
hbase> scan 't1', {RAW => true, VERSIONS => 10}

Besides the default 'toStringBinary' format, 'scan' supports custom formatting
by column. A user can define a FORMATTER by adding it to the column name in
the scan specification. The FORMATTER can be stipulated:
1. either as a org.apache.hadoop.hbase.util.Bytes method name (e.g. toInt, toString)
2. or as a custom class followed by method name: e.g. 'c(MyFormatterClass).format'.

Example formatting cf:qualifier1 and cf:qualifier2 both as Integers:
hbase> scan 't1', {COLUMNS => ['cf:qualifier1:toInt',
  'cf:qualifier2:c(org.apache.hadoop.hbase.util.Bytes).toInt'] }

Note that you can specify a FORMATTER by column only (cf:qualifier). You cannot
specify a FORMATTER for all columns of a column family.

Scan can also be used directly from a table, by first getting a reference to a
table, like such:
hbase> t = get_table 't'
hbase> t.scan

Note that in the above situation, you can still provide all the filtering, columns,
options, etc as described above.

Command: truncate
Disables, drops and recreates the specified table.

Command: truncate_preserve
Disables, drops and recreates the specified table while still maintaining the previous region boundaries.
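As a rough illustration of what the toInt FORMATTER in the scan help above does, the following Python sketch decodes HBase's 4-byte big-endian integer cell encoding. It mirrors the behavior of org.apache.hadoop.hbase.util.Bytes.toInt; the helper name is my own, not part of any HBase API.

```python
import struct

def bytes_to_int(cell: bytes) -> int:
    # Bytes.toInt reads a 4-byte big-endian signed integer, which is
    # what a cell written with Bytes.toBytes(int) contains.
    if len(cell) != 4:
        raise ValueError("expected exactly 4 bytes, got %d" % len(cell))
    return struct.unpack(">i", cell)[0]

# A cell holding the value 42, as HBase would store it:
print(bytes_to_int(b"\x00\x00\x00\x2a"))  # 42
```

Without such a formatter, scan would render the same cell with toStringBinary, i.e. as escaped bytes like `\x00\x00\x00*`.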
5. Group name: tools
hbase(main):097:0> help 'tools'
Command: assign
Assign a region. Use with caution. If region already assigned,
this command will do a force reassign. For experts only.
Examples:
hbase> assign 'REGIONNAME'
hbase> assign 'ENCODED_REGIONNAME'

Command: balance_switch
Enable/Disable balancer. Returns previous balancer state.
Examples:
hbase> balance_switch true
hbase> balance_switch false

Command: balancer
Trigger the cluster balancer. Returns true if balancer ran and was able to
tell the region servers to unassign all the regions to balance (the re-assignment itself is async).
Otherwise false (will not run if regions are in transition).

Command: balancer_enabled
Query the balancer's state.
Examples:
hbase> balancer_enabled

Command: catalogjanitor_enabled
Query for the CatalogJanitor state (enabled/disabled?)
Examples:
hbase> catalogjanitor_enabled

Command: catalogjanitor_run
Catalog janitor command to run the (garbage collection) scan from the command line.
hbase> catalogjanitor_run

Command: catalogjanitor_switch
Enable/Disable CatalogJanitor. Returns previous CatalogJanitor state.
Examples:
hbase> catalogjanitor_switch true
hbase> catalogjanitor_switch false

Command: close_region
Close a single region. Ask the master to close a region out on the cluster
or, if 'SERVER_NAME' is supplied, ask the designated hosting regionserver to
close the region directly. When closing a region, the master expects 'REGIONNAME'
to be a fully qualified region name. When asking the hosting regionserver to
directly close a region, you pass the region's encoded name only. A region
name looks like this:

TestTable,0094429456,1289497600452.527db22f95c8a9e0116f0cc13c680396.
or
Namespace:TestTable,0094429456,1289497600452.527db22f95c8a9e0116f0cc13c680396.

The trailing period is part of the region name. A region's encoded name
is the hash at the end of a region name; e.g. 527db22f95c8a9e0116f0cc13c680396
(without the period). A 'SERVER_NAME' is its host, port plus startcode. For
example: host187.example.com,60020,1289493121758 (find the servername in the master ui
or when you do detailed status in the shell). This command will end up running
close on the region's hosting regionserver. The close is done without the
master's involvement (it will not know of the close). Once closed, a region will
stay closed. Use assign to reopen/reassign. Use unassign or move to assign
the region elsewhere on the cluster. Use with caution. For experts only.
Examples:
hbase> close_region 'REGIONNAME'
hbase> close_region 'REGIONNAME', 'SERVER_NAME'
hbase> close_region 'ENCODED_REGIONNAME'
hbase> close_region 'ENCODED_REGIONNAME', 'SERVER_NAME'

Command: compact
Compact all regions in passed table or pass a region row
to compact an individual region. You can also compact a single column
family within a region.
Examples:
Compact all regions in a table:
hbase> compact 'ns1:t1'
hbase> compact 't1'
Compact an entire region:
hbase> compact 'r1'
Compact only a column family within a region:
hbase> compact 'r1', 'c1'
Compact a column family within a table:
hbase> compact 't1', 'c1'

Command: compact_mob
Run compaction on a mob enabled column family
or all mob enabled column families within a table
Examples:
Compact a column family within a table:
hbase> compact_mob 't1', 'c1'
Compact all mob enabled column families:
hbase> compact_mob 't1'

Command: compact_rs
Compact all regions on passed regionserver.
Examples:
Compact all regions on a regionserver:
hbase> compact_rs 'host187.example.com,60020'
or
hbase> compact_rs 'host187.example.com,60020,1289493121758'
Major compact all regions on a regionserver:
hbase> compact_rs 'host187.example.com,60020,1289493121758', true

Command: flush
Flush all regions in passed table or pass a region row to
flush an individual region. For example:
hbase> flush 'TABLENAME'
hbase> flush 'REGIONNAME'
hbase> flush 'ENCODED_REGIONNAME'

Command: major_compact
Run major compaction on passed table or pass a region row
to major compact an individual region. To compact a single
column family within a region specify the region name
followed by the column family name.
Examples:
Compact all regions in a table:
hbase> major_compact 't1'
hbase> major_compact 'ns1:t1'
Compact an entire region:
hbase> major_compact 'r1'
Compact a single column family within a region:
hbase> major_compact 'r1', 'c1'
Compact a single column family within a table:
hbase> major_compact 't1', 'c1'

Command: major_compact_mob
Run major compaction on a mob enabled column family
or all mob enabled column families within a table
Examples:
Compact a column family within a table:
hbase> major_compact_mob 't1', 'c1'
Compact all mob enabled column families within a table:
hbase> major_compact_mob 't1'

Command: merge_region
Merge two regions. Passing 'true' as the optional third parameter forces the
merge; otherwise the merge will fail unless the regions are adjacent ('force'
is for expert use only).

NOTE: You must pass the encoded region name, not the full region name, so
this command is a little different from other region operations. The encoded
region name is the hash suffix on region names: e.g. if the region name were
TestTable,0094429456,1289497600452.527db22f95c8a9e0116f0cc13c680396. then
the encoded region name portion is 527db22f95c8a9e0116f0cc13c680396

Examples:
hbase> merge_region 'ENCODED_REGIONNAME', 'ENCODED_REGIONNAME'
hbase> merge_region 'ENCODED_REGIONNAME', 'ENCODED_REGIONNAME', true

Command: move
Move a region. Optionally specify target regionserver else we choose one
at random. NOTE: You pass the encoded region name, not the region name so
this command is a little different to the others. The encoded region name
is the hash suffix on region names: e.g. if the region name were
TestTable,0094429456,1289497600452.527db22f95c8a9e0116f0cc13c680396. then
the encoded region name portion is 527db22f95c8a9e0116f0cc13c680396
A server name is its host, port plus startcode. For example:
host187.example.com,60020,1289493121758
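To make the two naming conventions above concrete, here is a small Python sketch (the helper names are my own, not part of HBase) that pulls the encoded name out of a full region name and splits a server name into its host, port, and startcode:

```python
def encoded_region_name(region_name: str) -> str:
    # The encoded name is the hash suffix after the last interior period,
    # e.g. TestTable,0094429456,1289497600452.527db...396.
    # Strip the trailing period first, then take the last dotted segment.
    return region_name.rstrip(".").rsplit(".", 1)[-1]

def parse_server_name(server_name: str):
    # A server name is host,port,startcode.
    host, port, startcode = server_name.split(",")
    return host, int(port), int(startcode)

region = "TestTable,0094429456,1289497600452.527db22f95c8a9e0116f0cc13c680396."
print(encoded_region_name(region))   # 527db22f95c8a9e0116f0cc13c680396
print(parse_server_name("host187.example.com,60020,1289493121758"))
```

This is the transformation you apply by hand when a command like move or merge_region asks for 'ENCODED_REGIONNAME' but the master UI shows you the full region name.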
Examples:
hbase> move 'ENCODED_REGIONNAME'
hbase> move 'ENCODED_REGIONNAME', 'SERVER_NAME'

Command: normalize
Trigger region normalizer for all tables which have NORMALIZATION_ENABLED flag set. Returns true
if normalizer ran successfully, false otherwise. Note that this command has no effect
if region normalizer is disabled (make sure it's turned on using the 'normalizer_switch' command).
Examples:
hbase> normalize

Command: normalizer_enabled
Query the state of region normalizer.
Examples:
hbase> normalizer_enabled

Command: normalizer_switch
Enable/Disable region normalizer. Returns previous normalizer state.
When normalizer is enabled, it handles all tables with 'NORMALIZATION_ENABLED' => true.
Examples:
hbase> normalizer_switch true
hbase> normalizer_switch false

Command: split
Split entire table or pass a region to split individual region. With the
second parameter, you can specify an explicit split key for the region.
Examples:
split 'tableName'
split 'namespace:tableName'
split 'regionName' # format: 'tableName,startKey,id'
split 'tableName', 'splitKey'
split 'regionName', 'splitKey'

Command: trace
Start or stop tracing using HTrace.
Always returns true if tracing is running, otherwise false.
If the first argument is 'start', a new span is started.
If the first argument is 'stop', the currently running span is stopped.
('stop' returns false on success.)
If the first argument is 'status', just returns whether or not tracing is running.
On 'start'-ing, you can optionally pass the name of the span as the second argument.
The default span name is 'HBaseShell'.
Repeating 'start' does not start a nested span.
Examples:
hbase> trace 'start'
hbase> trace 'status'
hbase> trace 'stop'
hbase> trace 'start', 'MySpanName'
hbase> trace 'stop'

Command: unassign
Unassign a region. Unassign will close region in current location and then
reopen it again. Pass 'true' to force the unassignment ('force' will clear
all in-memory state in the master before the reassign; if this results in a
double assignment, use hbck -fix to resolve. To be used by experts).
Use with caution. For expert use only.
Examples:
hbase> unassign 'REGIONNAME'
hbase> unassign 'REGIONNAME', true
hbase> unassign 'ENCODED_REGIONNAME'
hbase> unassign 'ENCODED_REGIONNAME', true

Command: wal_roll
Roll the log writer. That is, start writing log messages to a new file.
The name of the regionserver should be given as the parameter. A
'server_name' is the host, port plus startcode of a regionserver. For
example: host187.example.com,60020,1289493121758 (find the servername in the
master ui or when you do detailed status in the shell)

Command: zk_dump
Dump status of HBase cluster as seen by ZooKeeper.

--------------------------------------------------------------------------------
WARNING: Above commands are for 'experts'-only as misuse can damage an install
6. Group name: replication
hbase(main):002:0> help 'replication'
Command: add_peer
A peer can either be another HBase cluster or a custom replication endpoint. In either case an id
must be specified to identify the peer. For a HBase cluster peer, a cluster key must be provided and is composed like this:
hbase.zookeeper.quorum:hbase.zookeeper.property.clientPort:zookeeper.znode.parent
This gives a full path for HBase to connect to another HBase cluster. An optional parameter for
table column families identifies which column families will be replicated to the peer cluster.
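As an illustration of the three-part cluster key described above, here is a hypothetical Python parser (the function name is mine; it assumes the quorum lists bare hostnames, as in the examples below, with the three fields colon-separated):

```python
def parse_cluster_key(cluster_key: str):
    # Format: hbase.zookeeper.quorum:clientPort:zookeeper.znode.parent,
    # e.g. "zk1,zk2,zk3:2182:/hbase-prod". The quorum may list several
    # comma-separated hosts; split on the first two colons only so the
    # znode path (which starts with '/') stays intact.
    quorum, client_port, znode_parent = cluster_key.split(":", 2)
    return quorum.split(","), int(client_port), znode_parent

print(parse_cluster_key("zk1,zk2,zk3:2182:/hbase-prod"))
# (['zk1', 'zk2', 'zk3'], 2182, '/hbase-prod')
```

In other words, the key packs exactly the three ZooKeeper settings a remote HBase client would need to find the peer cluster.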
Examples:
hbase> add_peer '1', "server1.cie.com:2181:/hbase"
hbase> add_peer '2', "zk1,zk2,zk3:2182:/hbase-prod"
hbase> add_peer '3', "zk4,zk5,zk6:11000:/hbase-test", "table1; table2:cf1; table3:cf1,cf2"
hbase> add_peer '4', CLUSTER_KEY => "server1.cie.com:2181:/hbase"
hbase> add_peer '5', CLUSTER_KEY => "zk1,zk2,zk3:2182:/hbase-prod",
  TABLE_CFS => { "table1" => [], "ns2:table2" => ["cf1"], "ns3:table3" => ["cf1", "cf2"] }

For a custom replication endpoint, the ENDPOINT_CLASSNAME can be provided. Two optional arguments
are DATA and CONFIG, which can be specified to set either the peer_data or the configuration
for the custom replication endpoint. Table column families are optional and can be specified with
the key TABLE_CFS.

hbase> add_peer '6', ENDPOINT_CLASSNAME => 'org.apache.hadoop.hbase.MyReplicationEndpoint'
hbase> add_peer '7', ENDPOINT_CLASSNAME => 'org.apache.hadoop.hbase.MyReplicationEndpoint',
DATA => { "key1" => 1 }
hbase> add_peer '8', ENDPOINT_CLASSNAME => 'org.apache.hadoop.hbase.MyReplicationEndpoint',
CONFIG => { "config1" => "value1", "config2" => "value2" }
hbase> add_peer '9', ENDPOINT_CLASSNAME => 'org.apache.hadoop.hbase.MyReplicationEndpoint',
DATA => { "key1" => 1 }, CONFIG => { "config1" => "value1", "config2" => "value2" },
hbase> add_peer '10', ENDPOINT_CLASSNAME => 'org.apache.hadoop.hbase.MyReplicationEndpoint',
TABLE_CFS => { "table1" => [], "ns2:table2" => ["cf1"], "ns3:table3" => ["cf1", "cf2"] }
hbase> add_peer '11', ENDPOINT_CLASSNAME => 'org.apache.hadoop.hbase.MyReplicationEndpoint',
DATA => { "key1" => 1 }, CONFIG => { "config1" => "value1", "config2" => "value2" },
TABLE_CFS => { "table1" => [], "table2" => ["cf1"], "table3" => ["cf1", "cf2"] }

Note: Either CLUSTER_KEY or ENDPOINT_CLASSNAME must be specified, but not both.

Command: append_peer_tableCFs
Append a replicable table-cf config for the specified peer
Examples:
# append a table / table-cf to be replicable for a peer
hbase> append_peer_tableCFs '2', { "ns1:table4" => ["cfA", "cfB"] }

Command: disable_peer
Stops the replication stream to the specified cluster, but still
keeps track of new edits to replicate.
Examples:
hbase> disable_peer '1'

Command: disable_table_replication
Disable a table's replication switch.
Examples:
hbase> disable_table_replication 'table_name'

Command: enable_peer
Restarts the replication to the specified peer cluster,
continuing from where it was disabled.
Examples:
hbase> enable_peer '1'

Command: enable_table_replication
Enable a table's replication switch.
Examples:
hbase> enable_table_replication 'table_name'

Command: get_peer_config
Outputs the cluster key, replication endpoint class (if present), and any replication configuration parameters.

Command: list_peer_configs
No-argument method that outputs the replication peer configuration for each peer defined on this cluster.

Command: list_peers
List all replication peer clusters.
hbase> list_peers

Command: list_replicated_tables
List all the tables and column families replicated from this cluster.
hbase> list_replicated_tables
hbase> list_replicated_tables 'abc.*'

Command: remove_peer
Stops the specified replication stream and deletes all the meta
information kept about it.
Examples:
hbase> remove_peer '1'

Command: remove_peer_tableCFs
Remove a table / table-cf from the table-cfs config for the specified peer
Examples:
# Remove a table / table-cf from the replicable table-cfs for a peer
hbase> remove_peer_tableCFs '2', { "ns1:table1" => [] }
hbase> remove_peer_tableCFs '2', { "ns1:table1" => ["cf1"] }

Command: set_peer_tableCFs
Set the replicable table-cf config for the specified peer
Examples:
# set all tables to be replicable for a peer
hbase> set_peer_tableCFs '1', ""
hbase> set_peer_tableCFs '1'
# set table / table-cf to be replicable for a peer; for a table without
# an explicit column-family list, all replicable column-families (with
# replication_scope == 1) will be replicated
hbase> set_peer_tableCFs '2', { "ns1:table1" => [],
  "ns2:table2" => ["cf1", "cf2"],
  "ns3:table3" => ["cfA", "cfB"] }

Command: show_peer_tableCFs
Show replicable table-cf config for the specified peer.
hbase> show_peer_tableCFs

Command: update_peer_config
A peer can either be another HBase cluster or a custom replication endpoint. In either case an id
must be specified to identify the peer. This command does not interrupt processing on an enabled replication peer.

Two optional arguments are DATA and CONFIG, which can be specified to set different values for either
the peer_data or configuration for a custom replication endpoint. Any existing values not updated by this command
are left unchanged. CLUSTER_KEY, REPLICATION_ENDPOINT, and TABLE_CFs cannot be updated with this command.
To update TABLE_CFs, see the append_peer_tableCFs and remove_peer_tableCFs commands.

hbase> update_peer_config '1', DATA => { "key1" => 1 }
hbase> update_peer_config '2', CONFIG => { "config1" => "value1", "config2" => "value2" }
hbase> update_peer_config '3', DATA => { "key1" => 1 }, CONFIG => { "config1" => "value1", "config2" => "value2" }

--------------------------------------------------------------------------------
In order to use these tools, hbase.replication must be true.
7. Group name: snapshots
hbase(main):005:0> help 'snapshots'
Command: clone_snapshot
Create a new table by cloning the snapshot content.
There are no copies of data involved,
and writing to the newly created table will not influence the snapshot data.
Examples:
hbase> clone_snapshot 'snapshotName', 'tableName'
hbase> clone_snapshot 'snapshotName', 'namespace:tableName'

Command: delete_all_snapshot
Delete all of the snapshots matching the given regex.
Examples:
hbase> delete_all_snapshot 's.*'

Command: delete_snapshot
Delete a specified snapshot.
Examples:
hbase> delete_snapshot 'snapshotName'

Command: list_snapshots
List all snapshots taken (by printing the names and relative information).
An optional regular expression parameter can be used to filter the output
by snapshot name.
Examples:
hbase> list_snapshots
hbase> list_snapshots 'abc.*'

Command: restore_snapshot
Restore a specified snapshot.
The restore will replace the content of the original table,
bringing the content back to the snapshot state.
The table must be disabled.
Examples:
hbase> restore_snapshot 'snapshotName'

Command: snapshot
Take a snapshot of the specified table.
Examples:
hbase> snapshot 'sourceTable', 'snapshotName'
hbase> snapshot 'namespace:sourceTable', 'snapshotName', {SKIP_FLUSH => true}
8. Group name: configuration
hbase(main):006:0> help 'configuration'
Command: update_all_config
Reload a subset of configuration on all servers in the cluster. See
http://hbase.apache.org/book.html?dyn_config for more details. Here is how
you would run the command in the hbase shell:
hbase> update_all_config

Command: update_config
Reload a subset of configuration on server 'servername' where servername is
host, port plus startcode. For example: host187.example.com,60020,1289493121758
See http://hbase.apache.org/book.html?dyn_config for more details. Here is how
you would run the command in the hbase shell:
hbase> update_config 'servername'
9. Group name: quotas (resource limits)
hbase(main):008:0> help 'quotas'
Command: list_quotas
List the quota settings added to the system.
You can filter the result based on USER, TABLE, or NAMESPACE. For example:
hbase> list_quotas
hbase> list_quotas USER => 'bob.*'
hbase> list_quotas USER => 'bob.*', TABLE => 't1'
hbase> list_quotas USER => 'bob.*', NAMESPACE => 'ns.*'
hbase> list_quotas TABLE => 'myTable'
hbase> list_quotas NAMESPACE => 'ns.*'

Command: set_quota
Set a quota for a user, table, or namespace.
Syntax: set_quota TYPE => <type>, <args>

TYPE => THROTTLE
A user can set a quota on read requests, write requests, or both together
(read+write is the default throttle type). A request limit can be expressed
using the form 100req/sec or 100req/min, and a size limit can be expressed
using the form 100K/sec or 100M/min, with (B, K, M, G, T, P) as valid size units
and (sec, min, hour, day) as valid time units.
Currently the throttle limit is per machine - a limit of 100req/min
means that each machine can execute 100req/min. For example:
hbase> set_quota TYPE => THROTTLE, USER => 'u1', LIMIT => '10req/sec'
hbase> set_quota TYPE => THROTTLE, THROTTLE_TYPE => READ, USER => 'u1', LIMIT => '10req/sec'
hbase> set_quota TYPE => THROTTLE, USER => 'u1', LIMIT => '10M/sec'
hbase> set_quota TYPE => THROTTLE, THROTTLE_TYPE => WRITE, USER => 'u1', LIMIT => '10M/sec'
hbase> set_quota TYPE => THROTTLE, USER => 'u1', TABLE => 't2', LIMIT => '5K/min'
hbase> set_quota TYPE => THROTTLE, USER => 'u1', NAMESPACE => 'ns2', LIMIT => NONE
hbase> set_quota TYPE => THROTTLE, NAMESPACE => 'ns1', LIMIT => '10req/sec'
hbase> set_quota TYPE => THROTTLE, TABLE => 't1', LIMIT => '10M/sec'
hbase> set_quota TYPE => THROTTLE, THROTTLE_TYPE => WRITE, TABLE => 't1', LIMIT => '10M/sec'
hbase> set_quota TYPE => THROTTLE, USER => 'u1', LIMIT => NONE
hbase> set_quota TYPE => THROTTLE, THROTTLE_TYPE => WRITE, USER => 'u1', LIMIT => NONE
hbase> set_quota USER => 'u1', GLOBAL_BYPASS => true
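The LIMIT grammar above (a number, a unit, and a time window) can be sketched as a small Python parser. This is my own illustration, not part of HBase; it accepts the uppercase size units listed in the help text:

```python
import re

# Matches request limits like '10req/sec' and size limits like '100M/min'.
_LIMIT_RE = re.compile(r"^(\d+)(req|[BKMGTP])/(sec|min|hour|day)$")

def parse_limit(limit: str):
    # Split a quota LIMIT string into (amount, unit, time window, kind).
    m = _LIMIT_RE.match(limit)
    if not m:
        raise ValueError("bad limit: %r" % limit)
    amount, unit, per = m.groups()
    kind = "requests" if unit == "req" else "size"
    return int(amount), unit, per, kind

print(parse_limit("10req/sec"))  # (10, 'req', 'sec', 'requests')
print(parse_limit("100M/min"))   # (100, 'M', 'min', 'size')
```

The 'requests' vs 'size' distinction corresponds to the two throttle families set_quota accepts, and the time window is one of sec, min, hour, or day.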
10. Group name: security
hbase(main):009:0> help 'security'
Command: grant
Grant users specific rights.
Syntax: grant <user>, <permissions> [, <@namespace> [, <table> [, <column family> [, <column qualifier>]]]]

permissions is either zero or more letters from the set "RWXCA":
READ('R'), WRITE('W'), EXEC('X'), CREATE('C'), ADMIN('A')

Note: Groups and users are granted access in the same way, but groups are prefixed with an '@'
character. Tables and namespaces are specified in the same way, but namespaces are
prefixed with an '@' character.

For example:
hbase> grant 'bobsmith', 'RWXCA'
hbase> grant '@admins', 'RWXCA'
hbase> grant 'bobsmith', 'RWXCA', '@ns1'
hbase> grant 'bobsmith', 'RW', 't1', 'f1', 'col1'
hbase> grant 'bobsmith', 'RW', 'ns1:t1', 'f1', 'col1'

Command: list_security_capabilities
List supported security capabilities.
Example:
hbase> list_security_capabilities

Command: revoke
Revoke a user's access rights.
Syntax: revoke <user> [, <@namespace> [, <table> [, <column family> [, <column qualifier>]]]]

Note: Groups' and users' access is revoked in the same way, but groups are prefixed with an '@'
character. Tables and namespaces are specified in the same way, but namespaces are
prefixed with an '@' character.

For example:
hbase> revoke 'bobsmith'
hbase> revoke '@admins'
hbase> revoke 'bobsmith', '@ns1'
hbase> revoke 'bobsmith', 't1', 'f1', 'col1'
hbase> revoke 'bobsmith', 'ns1:t1', 'f1', 'col1'

Command: user_permission
Show all permissions for the particular user.
Syntax: user_permission <table>

Note: A namespace must always be preceded by the '@' character.

For example:
hbase> user_permission
hbase> user_permission '@ns1'
hbase> user_permission '@.*'
hbase> user_permission '@^[a-c].*'
hbase> user_permission 'table1'
hbase> user_permission 'namespace1:table1'
hbase> user_permission '.*'
hbase> user_permission '^[A-C].*'

--------------------------------------------------------------------------------
NOTE: Above commands are only applicable if running with the AccessController coprocessor
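The RWXCA permission strings that grant and revoke accept above can be checked with a small validator. This is an illustrative sketch, not an HBase API; it rejects unknown letters and collapses duplicates:

```python
VALID_PERMS = set("RWXCA")  # READ, WRITE, EXEC, CREATE, ADMIN

def validate_permissions(perms: str) -> str:
    # grant accepts zero or more letters from RWXCA; reject anything
    # else and drop repeated letters while preserving order.
    seen = []
    for p in perms:
        if p not in VALID_PERMS:
            raise ValueError("invalid permission letter: %r" % p)
        if p not in seen:
            seen.append(p)
    return "".join(seen)

print(validate_permissions("RWXCA"))  # RWXCA
print(validate_permissions("RW"))     # RW
```

An empty string is a legal permission set ("zero or more letters"), which effectively grants nothing.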
11. Group name: procedures
hbase(main):010:0> help 'procedures'
Command: abort_procedure
Given a procedure Id (and an optional boolean may_interrupt_if_running parameter,
default is true), abort a procedure in hbase. Use with caution. Some procedures
might not be abortable. For experts only.

If this command is accepted and the procedure is in the process of aborting,
it will return true; if the procedure could not be aborted (e.g. the procedure
does not exist, has already completed, or abort would cause corruption),
this command will return false.
Examples:
hbase> abort_procedure proc_id
hbase> abort_procedure proc_id, true
hbase> abort_procedure proc_id, false

Command: list_procedures
List all procedures in hbase.
Examples:
hbase> list_procedures
12. Group name: visibility labels
hbase(main):012:0> help 'visibility labels'
Command: add_labels
Add a set of visibility labels.
Syntax: add_labels [label1, label2]
For example:
hbase> add_labels ['SECRET','PRIVATE']

Command: clear_auths
Clear visibility labels from a user or group
Syntax: clear_auths 'user', [label1, label2]
For example:
hbase> clear_auths 'user1', ['SECRET','PRIVATE']
hbase> clear_auths '@group1', ['SECRET','PRIVATE']

Command: get_auths
Get the visibility labels set for a particular user or group
Syntax: get_auths 'user'
For example:
hbase> get_auths 'user1'
hbase> get_auths '@group1'

Command: list_labels
List the visibility labels defined in the system.
An optional regular expression parameter can be used to filter the labels being returned.
Syntax: list_labels
For example:
hbase> list_labels 'secret.*'
hbase> list_labels

Command: set_auths
Add a set of visibility labels for a user or group
Syntax: set_auths 'user', [label1, label2]
For example:
hbase> set_auths 'user1', ['SECRET','PRIVATE']
hbase> set_auths '@group1', ['SECRET','PRIVATE']

Command: set_visibility
Set the visibility expression on one or more existing cells.

Pass table name, visibility expression, and a dictionary containing
scanner specifications. Scanner specifications may include one or more
of: TIMERANGE, FILTER, STARTROW, STOPROW, ROWPREFIXFILTER, TIMESTAMP, or COLUMNS

If no columns are specified, all columns will be included.
To include all members of a column family, leave the qualifier empty as in
'col_family:'.

The filter can be specified in two ways:
1. Using a filterString - more information on this is available in the
Filter Language document attached to the HBASE-4176 JIRA
2. Using the entire package name of the filter.

Examples:
hbase> set_visibility 't1', 'A|B', {COLUMNS => ['c1', 'c2']}
hbase> set_visibility 't1', '(A&B)|C', {COLUMNS => 'c1',
  TIMERANGE => [1303668804, 1303668904]}
hbase> set_visibility 't1', 'A&B&C', {ROWPREFIXFILTER => 'row2',
  FILTER => "(QualifierFilter (>=, 'binary:xyz')) AND
  (TimestampsFilter ( 123, 456))"}

This command will only affect existing cells and is expected to be mainly
useful for feature testing and functional verification.

--------------------------------------------------------------------------------
NOTE: Above commands are only applicable if running with the VisibilityController coprocessor
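To show how expressions like 'A|B' and '(A&B)|C' combine with a user's authorizations, here is a toy Python evaluator. It is purely illustrative (the real evaluation happens server-side in the VisibilityController) and supports only the '&', '|' and parentheses seen in the examples above:

```python
import re

def cell_visible(expression: str, auths: set) -> bool:
    # Replace each label with True/False depending on whether the user
    # holds it, then evaluate the remaining &,| operators (Python's
    # bitwise operators work on booleans) with builtins disabled.
    translated = re.sub(r"[A-Za-z_][A-Za-z0-9_]*",
                        lambda m: str(m.group(0) in auths),
                        expression)
    return bool(eval(translated, {"__builtins__": {}}, {}))

print(cell_visible("A|B", {"A"}))         # True
print(cell_visible("(A&B)|C", {"C"}))     # True
print(cell_visible("A&B&C", {"A", "B"}))  # False
```

So a cell tagged 'A&B&C' is returned only to users whose set_auths list covers all three labels, while 'A|B' needs any one of them.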