Start HDFS

[hadoop@alamps sbin]$ ./start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [alamps]
alamps: starting namenode, logging to /home/hadoop/app/hadoop-2.4./logs/hadoop-hadoop-namenode-alamps.out
alamps: starting datanode, logging to /home/hadoop/app/hadoop-2.4./logs/hadoop-hadoop-datanode-alamps.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /home/hadoop/app/hadoop-2.4./logs/hadoop-hadoop-secondarynamenode-alamps.out
starting yarn daemons
starting resourcemanager, logging to /home/hadoop/app/hadoop-2.4./logs/yarn-hadoop-resourcemanager-alamps.out
alamps: starting nodemanager, logging to /home/hadoop/app/hadoop-2.4./logs/yarn-hadoop-nodemanager-alamps.out
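
The deprecation warning above is worth heeding: on Hadoop 2.x the two layers are normally started separately. A minimal sketch of the equivalent startup, assuming the same $HADOOP_HOME/sbin layout used here:

[hadoop@alamps sbin]$ ./start-dfs.sh    # NameNode, DataNode, SecondaryNameNode
[hadoop@alamps sbin]$ ./start-yarn.sh   # ResourceManager, NodeManager
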
[hadoop@alamps sbin]$ jps
Jps
DataNode
NodeManager
NameNode
SecondaryNameNode
ResourceManager

Start HBase
[hadoop@alamps bin]$ ./start-hbase.sh
alamps: starting zookeeper, logging to /home/hadoop/hbase-0.96./bin/../logs/hbase-hadoop-zookeeper-alamps.out
starting master, logging to /home/hadoop/hbase-0.96./bin/../logs/hbase-hadoop-master-alamps.out
localhost: starting regionserver, logging to /home/hadoop/hbase-0.96./bin/../logs/hbase-hadoop-regionserver-alamps.out
[hadoop@alamps bin]$ jps
DataNode
HQuorumPeer
NodeManager
NameNode
Jps
HMaster
SecondaryNameNode
ResourceManager
HRegionServer
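
In this pseudo-distributed setup HBase manages its own ZooKeeper, which is the HQuorumPeer process shown by jps. With HMaster and HRegionServer up, a quick sanity check can be run from the shell before creating any tables; a minimal sketch (status and version are built-in shell commands):

[hadoop@alamps bin]$ ./hbase shell
hbase> status      # live/dead region server counts
hbase> version
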
HBase shell
=========================
Enter the HBase command line:
./hbase shell

Show the tables in HBase:
list

hbase(main)::> list
TABLE
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hadoop/hbase-0.96./lib/slf4j-log4j12-1.6..jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hadoop/app/hadoop-2.4./share/hadoop/common/lib/slf4j-log4j12-1.7..jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
row(s) in 1.8360 seconds => []
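
The SLF4J "multiple bindings" messages above are only warnings: both Hadoop and HBase ship their own slf4j-log4j12 binding and one of them is picked at random. If desired, they can be silenced by removing (or moving aside) one of the two jars named in the warning, for example the copy under the HBase lib directory; this is just a cleanup suggestion, nothing later in the transcript depends on it.

[hadoop@alamps ~]$ ls /home/hadoop/hbase-0.96./lib/ | grep slf4j-log4j12
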
hbase(main)::> create 'user', 'info1', 'data1'
row(s) in 0.6790 seconds => Hbase::Table - user
hbase(main)::> list
TABLE
user
row(s) in 0.0670 seconds => ["user"]
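
At this point the table 'user' exists with two column families, info1 and data1. The schema can be confirmed with describe before trying to create it again (the create calls below are rejected precisely because the table is already there); a quick sketch:

hbase(main)::> describe 'user'
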
hbase(main)::> create 'user', {NAME => 'info', VERSIONS => ''}

ERROR: Table already exists: user!

Here is some help for this command:
Creates a table. Pass a table name, and a set of column family
specifications (at least one), and, optionally, table configuration.
Column specification can be a simple string (name), or a dictionary
(dictionaries are described below in main help output), necessarily
including NAME attribute.
Examples:

Create a table with namespace=ns1 and table qualifier=t1
hbase> create 'ns1:t1', {NAME => 'f1', VERSIONS => }

Create a table with namespace=default and table qualifier=t1
hbase> create 't1', {NAME => 'f1'}, {NAME => 'f2'}, {NAME => 'f3'}
hbase> # The above in shorthand would be the following:
hbase> create 't1', 'f1', 'f2', 'f3'
hbase> create 't1', {NAME => 'f1', VERSIONS => , TTL => , BLOCKCACHE => true}
hbase> create 't1', {NAME => 'f1', CONFIGURATION => {'hbase.hstore.blockingStoreFiles' => ''}}

Table configuration options can be put at the end.
Examples:

hbase> create 'ns1:t1', 'f1', SPLITS => ['', '', '', '']
hbase> create 't1', 'f1', SPLITS => ['', '', '', '']
hbase> create 't1', 'f1', SPLITS_FILE => 'splits.txt', OWNER => 'johndoe'
hbase> create 't1', {NAME => 'f1', VERSIONS => }, METADATA => { 'mykey' => 'myvalue' }
hbase> # Optionally pre-split the table into NUMREGIONS, using
hbase> # SPLITALGO ("HexStringSplit", "UniformSplit" or classname)
hbase> create 't1', 'f1', {NUMREGIONS => , SPLITALGO => 'HexStringSplit'}
hbase> create 't1', 'f1', {NUMREGIONS => , SPLITALGO => 'HexStringSplit', CONFIGURATION => {'hbase.hregion.scan.loadColumnFamiliesOnDemand' => 'true'}}

You can also keep around a reference to the created table:
hbase> t1 = create 't1', 'f1'
Which gives you a reference to the table named 't1', on which you can then
call methods.

hbase(main)::> create 'user', {NAME => 'info1', VERSIONS => ''}

ERROR: Table already exists: user!

hbase(main)::> san 'user'
NoMethodError: undefined method `san' for #<Object:0x689973>

hbase(main)::> scan 'user'
ROW COLUMN+CELL
row(s) in 0.0330 seconds

hbase(main)::> create 'user', {NAME => 'info1', VERSIONS => ''}

ERROR: Table already exists: user!

hbase(main)::> put 'user', 'rk0001', 'info:name', 'zhangsan'

ERROR: Failed action: org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: Column family info does not exist in region user,,.7dafe6d1353a5be73a69aa03ffdbe8d3. in table 'user', {NAME => 'data1', BLOOMFILTER => 'ROW', VERSIONS => '', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => '', COMPRESSION => 'NONE', MIN_VERSIONS => '', BLOCKCACHE => 'true', BLOCKSIZE => '', REPLICATION_SCOPE => ''}, {NAME => 'info1', BLOOMFILTER => 'ROW', VERSIONS => '', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => '', COMPRESSION => 'NONE', MIN_VERSIONS => '', BLOCKCACHE => 'true', BLOCKSIZE => '', REPLICATION_SCOPE => ''}
at org.apache.hadoop.hbase.regionserver.HRegionServer.doBatchOp(HRegionServer.java:)
at org.apache.hadoop.hbase.regionserver.HRegionServer.doNonAtomicRegionMutation(HRegionServer.java:)
at org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$.callBlockingMethod(ClientProtos.java:)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:)
at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:)
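
The put above fails because the table only has the column families info1 and data1, and 'info:name' names a family that does not exist. The transcript simply retries against info1 further down; alternatively, a missing family could be added to the existing table with alter — a sketch only (on some 0.96 configurations the table must be disabled before altering):

hbase(main)::> alter 'user', NAME => 'info'
hbase(main)::> put 'user', 'rk0001', 'info:name', 'zhangsan'
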
Here is some help for this command:
Put a cell 'value' at specified table/row/column and optionally
timestamp coordinates. To put a cell value into table 'ns1:t1' or 't1'
at row 'r1' under column 'c1' marked with the time 'ts1', do:

hbase> put 'ns1:t1', 'r1', 'c1', 'value', ts1

The same commands also can be run on a table reference. Suppose you had a reference
t to table 't1', the corresponding command would be:

hbase> t.put 'r1', 'c1', 'value', ts1

hbase(main)::> put 'user', 'rk0001', 'info1:name', 'zhangsan'
row(s) in 0.0050 seconds

hbase(main)::> scan 'user'
ROW COLUMN+CELL
rk0001 column=info1:name, timestamp=, value=zhangsan
row(s) in 0.0330 seconds

hbase(main)::> delete 'user'

ERROR: wrong number of arguments ( for )

Here is some help for this command:
Put a delete cell value at specified table/row/column and optionally
timestamp coordinates. Deletes must match the deleted cell's
coordinates exactly. When scanning, a delete cell suppresses older
versions. To delete a cell from 't1' at row 'r1' under column 'c1'
marked with the time 'ts1', do:

hbase> delete 'ns1:t1', 'r1', 'c1', ts1
hbase> delete 't1', 'r1', 'c1', ts1

The same command can also be run on a table reference. Suppose you had a reference
t to table 't1', the corresponding command would be:

hbase> t.delete 'r1', 'c1', ts1

hbase(main)::> list
TABLE
user
row(s) in 0.0580 seconds => ["user"]
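
The "wrong number of arguments" error above (and in the retries below) comes from a mismatch in intent: delete removes a single cell (table, row, column, and optionally timestamp), not a table. To remove a whole row there is deleteall, and removing a table takes disable followed by drop — a destructive sketch for illustration only:

hbase(main)::> deleteall 'user', 'rk0001'    # delete every cell in row rk0001
hbase(main)::> disable 'user'
hbase(main)::> drop 'user'                   # removes the table and its data
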
hbase(main)::> delete 'user'

ERROR: wrong number of arguments ( for )

hbase(main)::> delete user
NameError: undefined local variable or method `user' for #<Object:0x689973>

hbase(main)::> disable 'user'
row(s) in 2.7590 seconds

hbase(main)::> delete 'user'

ERROR: wrong number of arguments ( for )

hbase(main)::> scan 'user'
ROW COLUMN+CELL

ERROR: user is disabled.

Here is some help for this command:
Scan a table; pass table name and optionally a dictionary of scanner
specifications. Scanner specifications may include one or more of:
TIMERANGE, FILTER, LIMIT, STARTROW, STOPROW, TIMESTAMP, MAXLENGTH,
or COLUMNS, CACHE

If no columns are specified, all columns will be scanned.
To scan all members of a column family, leave the qualifier empty as in
'col_family:'.

The filter can be specified in two ways:
. Using a filterString - more information on this is available in the
  Filter Language document attached to the HBASE- JIRA
. Using the entire package name of the filter.

Some examples:

hbase> scan 'hbase:meta'
hbase> scan 'hbase:meta', {COLUMNS => 'info:regioninfo'}
hbase> scan 'ns1:t1', {COLUMNS => ['c1', 'c2'], LIMIT => , STARTROW => 'xyz'}
hbase> scan 't1', {COLUMNS => ['c1', 'c2'], LIMIT => , STARTROW => 'xyz'}
hbase> scan 't1', {COLUMNS => 'c1', TIMERANGE => [, ]}
hbase> scan 't1', {FILTER => "(PrefixFilter ('row2') AND
  (QualifierFilter (>=, 'binary:xyz'))) AND (TimestampsFilter ( , ))"}
hbase> scan 't1', {FILTER =>
  org.apache.hadoop.hbase.filter.ColumnPaginationFilter.new(, )}

For experts, there is an additional option -- CACHE_BLOCKS -- which
switches block caching for the scanner on (true) or off (false). By
default it is enabled. Examples:

hbase> scan 't1', {COLUMNS => ['c1', 'c2'], CACHE_BLOCKS => false}

Also for experts, there is an advanced option -- RAW -- which instructs the
scanner to return all cells (including delete markers and uncollected deleted
cells). This option cannot be combined with requesting specific COLUMNS.
Disabled by default. Example:

hbase> scan 't1', {RAW => true, VERSIONS => }

Besides the default 'toStringBinary' format, 'scan' supports custom formatting
by column. A user can define a FORMATTER by adding it to the column name in
the scan specification. The FORMATTER can be stipulated:
. either as a org.apache.hadoop.hbase.util.Bytes method name (e.g, toInt, toString)
. or as a custom class followed by method name: e.g. 'c(MyFormatterClass).format'.

Example formatting cf:qualifier1 and cf:qualifier2 both as Integers:

hbase> scan 't1', {COLUMNS => ['cf:qualifier1:toInt',
  'cf:qualifier2:c(org.apache.hadoop.hbase.util.Bytes).toInt'] }

Note that you can specify a FORMATTER by column only (cf:qualifer). You cannot
specify a FORMATTER for all columns of a column family.

Scan can also be used directly from a table, by first getting a reference to a
table, like such:

hbase> t = get_table 't'
hbase> t.scan

Note in the above situation, you can still provide all the filtering, columns,
options, etc as described above.

hbase(main)::> scan 'user'
ROW COLUMN+CELL

ERROR: user is disabled.

hbase(main)::> enable 'user'
row(s) in 0.6160 seconds

hbase(main)::> scan 'user'
ROW COLUMN+CELL
rk0001 column=info1:name, timestamp=, value=zhangsan
row(s) in 0.0670 seconds

hbase(main)::> put 'user', 'rk0001', 'info1:gender', 'female'
row(s) in 0.0130 seconds

hbase(main)::> scan 'user'
ROW COLUMN+CELL
rk0001 column=info1:gender, timestamp=, value=female
rk0001 column=info1:name, timestamp=, value=zhangsan
row(s) in 0.0360 seconds

hbase(main)::> put 'user', 'rk0001', 'info1:age',
row(s) in 0.0250 seconds

hbase(main)::> scan 'user'
ROW COLUMN+CELL
rk0001 column=info1:age, timestamp=, value=
rk0001 column=info1:gender, timestamp=, value=female
rk0001 column=info1:name, timestamp=, value=zhangsan
row(s) in 0.0290 seconds

hbase(main)::> put 'user', 'rk0001', 'data1:pic', 'picture'
row(s) in 0.0080 seconds

hbase(main)::> scan 'user'
ROW COLUMN+CELL
rk0001 column=data1:pic, timestamp=, value=picture
rk0001 column=info1:age, timestamp=, value=
rk0001 column=info1:gender, timestamp=, value=female
rk0001 column=info1:name, timestamp=, value=zhangsan
row(s) in 0.0270 seconds

hbase(main)::> get 'user' rk001
SyntaxError: (hbase):: syntax error, unexpected tIDENTIFIER
get 'user' rk001
           ^

hbase(main)::> get 'user', 'rk0001'
COLUMN CELL
data1:pic timestamp=, value=picture
info1:age timestamp=, value=
info1:gender timestamp=, value=female
info1:name timestamp=, value=zhangsan
row(s) in 0.0650 seconds

hbase(main)::> get 'user', 'rk0001', 'info1'
COLUMN CELL
info1:age timestamp=, value=
info1:gender timestamp=, value=female
info1:name timestamp=, value=zhangsan
row(s) in 0.0390 seconds

hbase(main)::> get 'user', 'rk0001', 'info1:name', 'info1:age'
COLUMN CELL
info1:age timestamp=, value=
info1:name timestamp=, value=zhangsan
row(s) in 0.0260 seconds

hbase(main)::> get 'user', 'rk0001', 'info1', 'data1'
COLUMN CELL
data1:pic timestamp=, value=picture
info1:age timestamp=, value=
info1:gender timestamp=, value=female
info1:name timestamp=, value=zhangsan
row(s) in 0.0220 seconds

hbase(main)::> get 'user', 'rk0001', {COLUMN => ['info1', 'data1']}
COLUMN CELL
data1:pic timestamp=, value=picture
info1:age timestamp=, value=
info1:gender timestamp=, value=female
info1:name timestamp=, value=zhangsan
row(s) in 0.0200 seconds

hbase(main)::> get 'user', 'rk0001', {COLUMN => ['info1:name', 'data1:pic']}
COLUMN CELL
data1:pic timestamp=, value=picture
info1:name timestamp=, value=zhangsan
row(s) in 0.0400 seconds

hbase(main)::> get 'user', 'rk0001', {COLUMN => 'info1:name', VERSIONS => }
COLUMN CELL
info1:name timestamp=, value=zhangsan
row(s) in 0.0130 seconds

hbase(main)::> get 'user', 'rk0001', {FILTER => "ValueFilter(=, 'binary:图片')"}
COLUMN CELL
row(s) in 0.0750 seconds

hbase(main)::> get 'user', 'rk0001', {FILTER => "(QualifierFilter(=,'substring:a'))"}
COLUMN CELL
info1:age timestamp=, value=
info1:name timestamp=, value=zhangsan
row(s) in 0.0290 seconds

hbase(main)::> put 'user', 'rk0002', 'info1:name', 'fanbingbing'
row(s) in 0.0110 seconds

hbase(main)::> put 'user', 'rk0002', 'info1:gender', 'female'
row(s) in 0.0040 seconds

hbase(main)::> put 'user', 'rk0002', 'info1:nationality', '中国'
row(s) in 0.0130 seconds

hbase(main)::> get surscan 'user'
NoMethodError: undefined method `surscan' for #<Object:0x689973>

hbase(main)::> scan 'user'
ROW COLUMN+CELL
rk0001 column=data1:pic, timestamp=, value=picture
rk0001 column=info1:age, timestamp=, value=
rk0001 column=info1:gender, timestamp=, value=female
rk0001 column=info1:name, timestamp=, value=zhangsan
rk0002 column=info1:gender, timestamp=, value=female
rk0002 column=info1:name, timestamp=, value=fanbingbing
rk0002 column=info1:nationality, timestamp=, value=\xE4\xB8\xAD\xE5\x9B\xBD
row(s) in 0.0280 seconds

hbase(main)::> scan 'user'
ROW COLUMN+CELL
rk0001 column=data1:pic, timestamp=, value=picture
rk0001 column=info1:age, timestamp=, value=
rk0001 column=info1:gender, timestamp=, value=female
rk0001 column=info1:name, timestamp=, value=zhangsan
rk0002 column=info1:gender, timestamp=, value=female
rk0002 column=info1:name, timestamp=, value=fanbingbing
rk0002 column=info1:nationality, timestamp=, value=\xE4\xB8\xAD\xE5\x9B\xBD
row(s) in 0.0330 seconds

hbase(main)::> get 'user', 'rk0002', {FILTER => "ValueFilter(=, 'binary:中国')"}
COLUMN CELL
info1:nationality timestamp=, value=\xE4\xB8\xAD\xE5\x9B\xBD
row(s) in 0.0130 seconds

hbase(main)::> scan 'user', {COLUMNS => 'info'}
ROW COLUMN+CELL

ERROR: Unknown column family! Valid column names: data1:*, info1:*

hbase(main)::> scan 'user', {COLUMNS => 'info1'}
ROW COLUMN+CELL
rk0001 column=info1:age, timestamp=, value=
rk0001 column=info1:gender, timestamp=, value=female
rk0001 column=info1:name, timestamp=, value=zhangsan
rk0002 column=info1:gender, timestamp=, value=female
rk0002 column=info1:name, timestamp=, value=fanbingbing
rk0002 column=info1:nationality, timestamp=, value=\xE4\xB8\xAD\xE5\x9B\xBD
row(s) in 0.0570 seconds

hbase(main)::> scan 'user', {COLUMNS => 'info1', RAW => true, VERSIONS => }
ROW COLUMN+CELL
rk0001 column=info1:age, timestamp=, value=
rk0001 column=info1:gender, timestamp=, value=female
rk0001 column=info1:name, timestamp=, value=zhangsan
rk0002 column=info1:gender, timestamp=, value=female
rk0002 column=info1:name, timestamp=, value=fanbingbing
rk0002 column=info1:nationality, timestamp=, value=\xE4\xB8\xAD\xE5\x9B\xBD
row(s) in 0.0280 seconds

hbase(main)::> scan 'user', {COLUMNS => 'info1', RAW => true, VERSIONS => }
ROW COLUMN+CELL
rk0001 column=info1:age, timestamp=, value=
rk0001 column=info1:gender, timestamp=, value=female
rk0001 column=info1:name, timestamp=, value=zhangsan
rk0002 column=info1:gender, timestamp=, value=female
rk0002 column=info1:name, timestamp=, value=fanbingbing
rk0002 column=info1:nationality, timestamp=, value=\xE4\xB8\xAD\xE5\x9B\xBD
row(s) in 0.0360 seconds

hbase(main)::> scan 'user', {COLUMNS => ['info1', 'data1']}
ROW COLUMN+CELL
rk0001 column=data1:pic, timestamp=, value=picture
rk0001 column=info1:age, timestamp=, value=
rk0001 column=info1:gender, timestamp=, value=female
rk0001 column=info1:name, timestamp=, value=zhangsan
rk0002 column=info1:gender, timestamp=, value=female
rk0002 column=info1:name, timestamp=, value=fanbingbing
rk0002 column=info1:nationality, timestamp=, value=\xE4\xB8\xAD\xE5\x9B\xBD
row(s) in 0.0270 seconds

hbase(main)::> scan 'user', {COLUMNS => ['info1:name', 'data1:pic']}
ROW COLUMN+CELL
rk0001 column=data1:pic, timestamp=, value=picture
rk0001 column=info1:name, timestamp=, value=zhangsan
rk0002 column=info1:name, timestamp=, value=fanbingbing
row(s) in 0.0380 seconds

hbase(main)::> scan 'user', {COLUMNS => 'info1:name'}
ROW COLUMN+CELL
rk0001 column=info1:name, timestamp=, value=zhangsan
rk0002 column=info1:name, timestamp=, value=fanbingbing
row(s) in 0.0200 seconds

hbase(main)::> scan 'user', {COLUMNS => 'info1:name', VERSIONS => }
ROW COLUMN+CELL
rk0001 column=info1:name, timestamp=, value=zhangsan
rk0002 column=info1:name, timestamp=, value=fanbingbing
row(s) in 0.0250 seconds

hbase(main)::> scan 'user', {COLUMNS => 'info1:name', VERSIONS => }
ROW COLUMN+CELL
rk0001 column=info1:name, timestamp=, value=zhangsan
rk0002 column=info1:name, timestamp=, value=fanbingbing
row(s) in 0.0230 seconds

hbase(main)::> scan 'user', {COLUMNS => 'info1:name', VERSIONS => }
ROW COLUMN+CELL
rk0001 column=info1:name, timestamp=, value=zhangsan
rk0002 column=info1:name, timestamp=, value=fanbingbing
row(s) in 0.0230 seconds

hbase(main)::> scan 'people', {COLUMNS => ['info1', 'data1'], FILTER => "(QualifierFilter(=,'substring:a'))"}

ERROR: Unknown table people!

hbase(main)::> scan 'user', {COLUMNS => ['info1', 'data1'], FILTER => "(QualifierFilter(=,'substring:a'))"}
ROW COLUMN+CELL
rk0001 column=info1:age, timestamp=, value=
rk0001 column=info1:name, timestamp=, value=zhangsan
rk0002 column=info1:name, timestamp=, value=fanbingbing
rk0002 column=info1:nationality, timestamp=, value=\xE4\xB8\xAD\xE5\x9B\xBD
row(s) in 0.0220 seconds

hbase(main)::> scan 'user', {COLUMNS => 'info1', STARTROW => 'rk0001', ENDROW => 'rk0003'}
ROW COLUMN+CELL
rk0001 column=info1:age, timestamp=, value=
rk0001 column=info1:gender, timestamp=, value=female
rk0001 column=info1:name, timestamp=, value=zhangsan
rk0002 column=info1:gender, timestamp=, value=female
rk0002 column=info1:name, timestamp=, value=fanbingbing
rk0002 column=info1:nationality, timestamp=, value=\xE4\xB8\xAD\xE5\x9B\xBD
row(s) in 0.0250 seconds

hbase(main)::> scan 'user',{FILTER=>"PrefixFilter('rk')"}
ROW COLUMN+CELL
rk0001 column=data1:pic, timestamp=, value=picture
rk0001 column=info1:age, timestamp=, value=
rk0001 column=info1:gender, timestamp=, value=female
rk0001 column=info1:name, timestamp=, value=zhangsan
rk0002 column=info1:gender, timestamp=, value=female
rk0002 column=info1:name, timestamp=, value=fanbingbing
rk0002 column=info1:nationality, timestamp=, value=\xE4\xB8\xAD\xE5\x9B\xBD
row(s) in 0.0430 seconds

hbase(main)::>
[hadoop@alamps sbin]$ scan 'user', {TIMERANGE => [, ]}
-bash: scan: command not found
[hadoop@alamps sbin]$ scan 'user', {TIMERANGE => []}
-bash: scan: command not found
[hadoop@alamps sbin]$ delete 'user', 'rk0001', 'info:name'
-bash: delete: command not found
[hadoop@alamps sbin]$ delete 'user', 'rk0001', 'info1:name'
-bash: delete: command not found
[hadoop@alamps sbin]$ delete 'user', 'rk0001', 'info1:name'
-bash: delete: command not found
[hadoop@alamps sbin]$ truncate 'user'
truncate: you must specify one of `--size' or `--reference'
Try `truncate --help' for more information.
[hadoop@alamps sbin]$ truncate 'user'
truncate: you must specify one of `--size' or `--reference'
Try `truncate --help' for more information.
[hadoop@alamps sbin]$ alter 'user', NAME => 'f2'
-bash: alter: command not found
[hadoop@alamps sbin]$ alter 'user', NAME => 'f1', METHOD => 'delete'
-bash: alter: command not found
[hadoop@alamps sbin]$ drop 'user'
-bash: drop: command not found
[hadoop@alamps sbin]$ get 'user', 'rk0002', {COLUMN => ['info1:name', 'data1:pic']}
-bash: get: command not found
[hadoop@alamps sbin]$ get 'user', 'rk0001', {COLUMN => ['info1:name', 'data1:pic']}
-bash: get: command not found
[hadoop@alamps sbin]$ get 'user'
-bash: get: command not found
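
All of the "command not found" errors above have the same cause: scan, delete, alter, drop and get are HBase shell commands, but they are being typed at the bash prompt (truncate even resolves to the coreutils truncate binary, hence the --size/--reference complaint). They need to be run inside the shell, as the transcript does next; a sketch of the intended commands (truncate and drop are destructive):

[hadoop@alamps sbin]$ hbase shell
hbase(main)::> delete 'user', 'rk0001', 'info1:name'            # delete one cell
hbase(main)::> truncate 'user'                                  # disable, drop and recreate the table
hbase(main)::> alter 'user', NAME => 'f2'                       # add a column family
hbase(main)::> alter 'user', NAME => 'f2', METHOD => 'delete'   # remove it again
hbase(main)::> disable 'user'
hbase(main)::> drop 'user'
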
[hadoop@alamps sbin]$ hbase shell
-- ::, INFO [main] Configuration.deprecation: hadoop.native.lib is deprecated. Instead, use io.native.lib.available
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 0.96.-hadoop2, r1581096, Mon Mar :: PDT

hbase(main)::> get 'user', 'rk0002', {COLUMN => ['info1:name', 'data1:pic']}
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hadoop/hbase-0.96./lib/slf4j-log4j12-1.6..jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hadoop/app/hadoop-2.4./share/hadoop/common/lib/slf4j-log4j12-1.7..jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
COLUMN CELL
info1:name timestamp=, value=fanbingbing
row(s) in 0.0750 seconds

hbase(main)::> list
TABLE
user
row(s) in 0.1340 seconds => ["user"]
hbase(main)::> get 'user'
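
The transcript ends with a bare get 'user'; get needs at least a table name and a row key, so a call like this is rejected by the shell. The working forms are the ones used earlier, for example:

hbase(main)::> get 'user', 'rk0002'
hbase(main)::> get 'user', 'rk0002', {COLUMN => ['info1:name', 'data1:pic']}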
