This post covers the commonly used HBase shell commands: creating and dropping tables, and adding, deleting, and querying table data (HBase has no in-place update; a new put simply overwrites the old value). It also covers exporting data from and importing data into HBase.

Reference tutorial: HBase Tutorial

Reference blog: HBase shell basics and common commands explained

Reference blog: HBase shell common commands and filters

Reference blog: Importing and exporting HBase data

1. HBase shell commands

1.1. Entering and exiting the shell

[yun@mini03 ~]$ hbase shell # start the HBase shell
SLF4J: Class path contains multiple SLF4J bindings.
………………
Version 2.0., r7483b111e4da77adbfc8062b3b22cbe7c2cb91c1, Sun Apr :: PDT
Took 0.0126 seconds
hbase(main)::> quit # exit the HBase shell
[yun@mini03 ~]$

1.2. Common commands

Create a table: create '<table>', '<cf1>', '<cf2>', ..., '<cfN>'
List all tables: list
Describe a table: describe '<table>'
Check whether a table exists: exists '<table>'
Check whether a table is enabled/disabled: is_enabled '<table>' / is_disabled '<table>'
Add a record: put '<table>', '<rowkey>', '<cf>:<column>', '<value>'
Get all data under a rowkey: get '<table>', '<rowkey>'
Count the records in a table: count '<table>'
Get one column family: get '<table>', '<rowkey>', '<cf>'
Get one column of a column family: get '<table>', '<rowkey>', '<cf>:<column>'
Delete a cell: delete '<table>', '<rowkey>', '<cf>:<column>'
Delete an entire row: deleteall '<table>', '<rowkey>'
Drop a table: the table must be disabled before it can be dropped; step 1: disable '<table>', step 2: drop '<table>'
Truncate a table: truncate '<table>'
Scan all records: scan '<table>'
Scan all data in one column of a table: scan '<table>', {COLUMNS => '<cf>:<column>'}
Update a record: just write it again with put and the new value overwrites the old one (HBase has no in-place update; every write is an append)

Notes:

  1. Columns (qualifiers) can be added freely under a column family, which is very convenient. If the column family has no qualifier, the trailing colon is optional.

hbase> put 't1', 'r1', 'c1', 'value', ts1

  2. Here t1 is the table name, r1 the row key, c1 the column, and value the cell value. ts1 is the timestamp and is usually omitted.
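
A minimal sketch of these two notes (the table t1, row key r1, family info, and values here are illustrative, not taken from the examples below): writing to a family with an empty qualifier works with or without the trailing colon, and an explicit timestamp can be passed as the last argument of put.

hbase> put 't1', 'r1', 'info:', 'v1'                      # empty qualifier, colon included
hbase> put 't1', 'r1', 'info', 'v1'                       # same cell, colon omitted
hbase> put 't1', 'r1', 'info:name', 'v2', 1533740907054   # explicit timestamp as the last argument
hbase> get 't1', 'r1'                                     # read the row back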

1.3. Example 1: create a table, view its structure, drop it

hbase(main):006:0* create 'user','info1','info2','info3'  # create the table
Created table user
Took 2.9927 seconds
=> Hbase::Table - user
hbase(main):007:0> list # list all tables
TABLE
user
1 row(s)
Took 0.0468 seconds
=> ["user"]
hbase(main):008:0> describe 'user' # describe the table
Table user is ENABLED ##### the table is currently enabled
user
COLUMN FAMILIES DESCRIPTION
{NAME => 'info1', VERSIONS => '1', EVICT_BLOCKS_ON_CLOSE => 'false', NEW_VERSION_BEHAVIOR => 'false', KEEP_DELETED_CELLS => 'FALSE', CACHE_DATA_ON_WRITE => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 'ROW', CACHE_INDEX_ON_WRITE => 'false', IN_MEMORY => 'false', CACHE_BLOOMS_ON_WRITE => 'false', PREFETCH_BLOCKS_ON_OPEN => 'false', COMPRESSION => 'NONE', BLOCKCACHE => 'true', BLOCKSIZE => '65536'}
{NAME => 'info2', VERSIONS => '1', EVICT_BLOCKS_ON_CLOSE => 'false', NEW_VERSION_BEHAVIOR => 'false', KEEP_DELETED_CELLS => 'FALSE', CACHE_DATA_ON_WRITE => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 'ROW', CACHE_INDEX_ON_WRITE => 'false', IN_MEMORY => 'false', CACHE_BLOOMS_ON_WRITE => 'false', PREFETCH_BLOCKS_ON_OPEN => 'false', COMPRESSION => 'NONE', BLOCKCACHE => 'true', BLOCKSIZE => '65536'}
{NAME => 'info3', VERSIONS => '1', EVICT_BLOCKS_ON_CLOSE => 'false', NEW_VERSION_BEHAVIOR => 'false', KEEP_DELETED_CELLS => 'FALSE', CACHE_DATA_ON_WRITE => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 'ROW', CACHE_INDEX_ON_WRITE => 'false', IN_MEMORY => 'false', CACHE_BLOOMS_ON_WRITE => 'false', PREFETCH_BLOCKS_ON_OPEN => 'false', COMPRESSION => 'NONE', BLOCKCACHE => 'true', BLOCKSIZE => '65536'}
3 row(s)
Took 0.1933 seconds
hbase(main):011:0* exists 'user' # check whether the table exists
Table user does exist
Took 0.0398 seconds
=> true
hbase(main):026:0> is_enabled 'user' # check whether the table is enabled
true
Took 0.0149 seconds
=> true
hbase(main):027:0> is_disabled 'user' # check whether the table is disabled
false
Took 0.0148 seconds
=> 1
hbase(main):015:0> drop 'user' # drop the table (it must be disabled first)

ERROR: Table user is enabled. Disable it first.

Drop the named table. Table must first be disabled:
  hbase> drop 't1'
  hbase> drop 'ns1:t1'

Took 0.0342 seconds
hbase(main):018:0* disable 'user' # disable the table
Took 0.5544 seconds
hbase(main):019:0> drop 'user' # then drop it
Took 0.5143 seconds
hbase(main):020:0> list
TABLE
0 row(s)
Took 0.0687 seconds
=> []

  

1.4. Example 2: add and query table data

hbase(main):024:0> create 'user','info1','info2','info3' # create the table
Created table user
Took 1.2717 seconds
=> Hbase::Table - user
hbase(main):025:0> list # list all tables
TABLE
user
1 row(s)
Took 0.0466 seconds
=> ["user"]
hbase(main):042:0* put 'user','1234','info1:name','zhang' # add a record
Took 0.0634 seconds
hbase(main):043:0> scan 'user' # scan all records
ROW COLUMN+CELL
1234 column=info1:name, timestamp=1533740907054, value=zhang
1 row(s)
Took 0.0776 seconds
hbase(main):044:0> put 'user','1234','info1:name','zhangsan' # put again (in effect this works like an update)
Took 0.0055 seconds
hbase(main):045:0> scan 'user'
ROW COLUMN+CELL
1234 column=info1:name, timestamp=1533740935956, value=zhangsan
1 row(s)
Took 0.0160 seconds
hbase(main):046:0> put 'user','1234','info2:name','zhang' # add data in another column family
Took 0.0160 seconds
hbase(main):047:0> put 'user','1234','info2:age','23' # add data in another column of that column family
Took 0.0133 seconds
hbase(main):048:0> scan 'user'
ROW COLUMN+CELL
1234 column=info1:name, timestamp=1533740935956, value=zhangsan
1234 column=info2:age, timestamp=1533741066465, value=23
1234 column=info2:name, timestamp=1533741052169, value=zhang
1 row(s)
Took 0.0171 seconds
hbase(main):050:0* put 'user','12345','info1:name','lisi'
Took 0.0125 seconds
hbase(main):051:0> put 'user','12345','info2:age','25'
Took 0.0143 seconds
hbase(main):052:0> scan 'user'
ROW COLUMN+CELL
1234 column=info1:name, timestamp=1533740935956, value=zhangsan
1234 column=info2:age, timestamp=1533741066465, value=23
1234 column=info2:name, timestamp=1533741052169, value=zhang
12345 column=info1:name, timestamp=1533741585906, value=lisi
12345 column=info2:age, timestamp=1533741595725, value=25
2 row(s)
Took 0.0179 seconds
hbase(main):058:0* count 'user' # count the records in the table (counted by row key)
2 row(s)
Took 0.1065 seconds
=> 2
hbase(main):053:0> get 'user','1234' # get all data under the rowkey
COLUMN CELL
info1:name timestamp=1533740935956, value=zhangsan
info2:age timestamp=1533741066465, value=23
info2:name timestamp=1533741052169, value=zhang
1 row(s)
Took 0.0371 seconds
hbase(main):067:0* get 'user','1234','info2' # get one column family
COLUMN CELL
info2:age timestamp=1533741066465, value=23
info2:name timestamp=1533741052169, value=zhang
1 row(s)
Took 0.0305 seconds
hbase(main):068:0> get 'user','1234','info2:name' # get one column of a column family
COLUMN CELL
info2:name timestamp=1533741052169, value=zhang
1 row(s)
Took 0.0182 seconds

  

1.5. Example 3: delete row data

hbase(main):072:0> get 'user','1234'
COLUMN CELL
info1:name timestamp=1533740935956, value=zhangsan
info2:address timestamp=1533742368985, value=China
info2:age timestamp=1533741066465, value=23
info2:name timestamp=1533741052169, value=zhang
1 row(s)
Took 0.0146 seconds
hbase(main):073:0> delete 'user','1234','info2:age' # delete the cell at the given row and column
#### Note: delete 'user','1234','info2' (column family only, no qualifier) has no effect here ★★★
Took 0.0288 seconds
hbase(main):074:0> get 'user','1234'
COLUMN CELL
info1:name timestamp=1533740935956, value=zhangsan
info2:address timestamp=1533742368985, value=China
info2:name timestamp=1533741052169, value=zhang
1 row(s)
Took 0.0140 seconds
hbase(main):100:0* deleteall 'user','1234' # delete the entire row
Took 0.0119 seconds
hbase(main):101:0> get 'user','1234'
COLUMN CELL
0 row(s)
Took 0.0145 seconds

  

1.6. Example 4: conditional scan and truncate

hbase(main):122:0* scan 'user'
ROW COLUMN+CELL
1234 column=info2:address, timestamp=1533743416815, value=CN
1234 column=info2:age, timestamp=1533743407616, value=20
1234 column=info2:name, timestamp=1533743396872, value=wangwu
12345 column=info1:name, timestamp=1533741585906, value=lisi
12345 column=info2:age, timestamp=1533741595725, value=25
2 row(s)
Took 0.0241 seconds
hbase(main):123:0> scan 'user',{COLUMNS => 'info2'}
ROW COLUMN+CELL
1234 column=info2:address, timestamp=1533743416815, value=CN
1234 column=info2:age, timestamp=1533743407616, value=20
1234 column=info2:name, timestamp=1533743396872, value=wangwu
12345 column=info2:age, timestamp=1533741595725, value=25
2 row(s)
Took 0.0161 seconds
hbase(main):124:0> scan 'user',{COLUMNS => 'info2:age'}
ROW COLUMN+CELL
1234 column=info2:age, timestamp=1533743407616, value=20
12345 column=info2:age, timestamp=1533741595725, value=25
2 row(s)
Took 0.0158 seconds
hbase(main):128:0* truncate
truncate truncate_preserve
hbase(main):128:0* truncate 'user' # truncate the table
Truncating 'user' table (it may take a while):
Disabling table...
Truncating table...
Took 2.6156 seconds
hbase(main):129:0> scan 'user'
ROW COLUMN+CELL
0 row(s)
Took 0.1305 seconds
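
Besides COLUMNS, scan also accepts row-range and limit options. A hedged sketch of a few common ones against the same user table (these option names come from standard shell scan usage, not from the transcript above):

hbase> scan 'user', {STARTROW => '1234', STOPROW => '12345'}         # rows in [STARTROW, STOPROW)
hbase> scan 'user', {COLUMNS => ['info1', 'info2:age'], LIMIT => 10} # at most 10 rows
hbase> scan 'user', {COLUMNS => 'info2:age', VERSIONS => 2}          # up to 2 versions per cell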

  

2. Importing and exporting HBase data

When using HBase in practice you often need to back up production data, or load production data into a development environment (so that the dev data better matches reality), so being able to export and import the data stored in HBase is essential.
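
The sections below use the MapReduce-based Export and Import tools that ship with HBase. Their general form is roughly the following (arguments in angle brackets are placeholders; the optional version/time arguments of Export may vary slightly between HBase versions):

# export a table to a directory (an HDFS path, or a file:// path on the node that runs the map task)
hbase org.apache.hadoop.hbase.mapreduce.Export <tablename> <outputdir> [<versions> [<starttime> [<endtime>]]]

# import previously exported data into an existing table that has the same column families
hbase org.apache.hadoop.hbase.mapreduce.Import <tablename> <inputdir>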

2.1. Prepare a table

hbase(main):002:0> create 'zhang','userinfo','baseinfo','eduinfo','workinfo'
Created table zhang
Took 1.4311 seconds
=> Hbase::Table - zhang
hbase(main):005:0* put 'zhang','12345','userinfo:username','zhangsan'
Took 0.3751 seconds
hbase(main):006:0> put 'zhang','12345','userinfo:password','111111'
Took 0.0220 seconds
hbase(main):007:0> put 'zhang','12345','baseinfo:name','zhangsan'
Took 0.0136 seconds
hbase(main):008:0> put 'zhang','12345','baseinfo:age','22'
Took 0.0136 seconds
hbase(main):009:0> put 'zhang','12345','baseinfo:name','zhangnew'
Took 0.0106 seconds
hbase(main):010:0> put 'zhang','12345','baseinfo:age','25'
Took 0.0138 seconds
hbase(main):013:0> put 'zhang','12345','eduinfo:pri_school','star school'
Took 0.0106 seconds
hbase(main):014:0> scan 'zhang'
ROW COLUMN+CELL
12345 column=baseinfo:age, timestamp=1533884261796, value=25
12345 column=baseinfo:name, timestamp=1533884258020, value=zhangnew
12345 column=eduinfo:pri_school, timestamp=1533884297216, value=star school
12345 column=userinfo:password, timestamp=1533884246132, value=111111
12345 column=userinfo:username, timestamp=1533884241334, value=zhangsan
1 row(s)
Took 0.0179 seconds

  

2.2. Export an HBase table to HDFS

[yun@mini02 ~]$ hbase org.apache.hadoop.hbase.mapreduce.Export zhang /zhang/hbase/zhang_tab
……………………
2018-08-10 15:01:27,354 INFO [main] mapreduce.Job: The url to track the job: http://mini02:8088/proxy/application_1533865678790_0001/
2018-08-10 15:01:27,355 INFO [main] mapreduce.Job: Running job: job_1533865678790_0001
2018-08-10 15:01:39,564 INFO [main] mapreduce.Job: Job job_1533865678790_0001 running in uber mode : false
2018-08-10 15:01:39,566 INFO [main] mapreduce.Job: map 0% reduce 0%
2018-08-10 15:01:52,384 INFO [main] mapreduce.Job: map 100% reduce 0%
2018-08-10 15:01:53,416 INFO [main] mapreduce.Job: Job job_1533865678790_0001 completed successfully
2018-08-10 15:01:53,554 INFO [main] mapreduce.Job: Counters:
Total time spent by all maps in occupied slots (ms)=9661
Total time spent by all reduces in occupied slots (ms)=0
Total time spent by all map tasks (ms)=9661
Total vcore-milliseconds taken by all map tasks=9661
Total megabyte-milliseconds taken by all map tasks=9892864
Map-Reduce Framework
Map input records=1
Map output records=1
Input split bytes=124
Spilled Records=0
Failed Shuffles=0
Merged Map outputs=0
GC time elapsed (ms)=493
CPU time spent (ms)=7670
Physical memory (bytes) snapshot=259244032
Virtual memory (bytes) snapshot=2178433024
Total committed heap usage (bytes)=89653248
HBase Counters
BYTES_IN_REMOTE_RESULTS=252
BYTES_IN_RESULTS=252
MILLIS_BETWEEN_NEXTS=1258
NOT_SERVING_REGION_EXCEPTION=0
NUM_SCANNER_RESTARTS=0
NUM_SCAN_RESULTS_STALE=0
REGIONS_SCANNED=1
REMOTE_RPC_CALLS=1
REMOTE_RPC_RETRIES=0
ROWS_FILTERED=0
ROWS_SCANNED=1
RPC_CALLS=1
RPC_RETRIES=0
File Input Format Counters
Bytes Read=0
File Output Format Counters
Bytes Written=364

  

View the result in HDFS

 [yun@mini03 ~]$ hadoop fs -ls /zhang/hbase/zhang_tab
Found items
-rw-r--r-- yun supergroup -- : /zhang/hbase/zhang_tab/_SUCCESS
-rw-r--r-- yun supergroup -- : /zhang/hbase/zhang_tab/part-m-
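
This HDFS directory can later be re-imported in the same way as the local-filesystem export below, by pointing the Import tool at it (a hedged example; the target table must already exist with the same column families):

[yun@mini02 ~]$ hbase org.apache.hadoop.hbase.mapreduce.Import zhang /zhang/hbase/zhang_tab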

2.3. Export an HBase table to the local filesystem of the cluster

Because the output directory is a file:// path, the files land on the local filesystem of whichever node runs the map task, so you have to find out yourself which machine the data ended up on.

[yun@mini02 hbase_data]$ hbase org.apache.hadoop.hbase.mapreduce.Export zhang file:///app/software/hbase_data/zhang_tab
……………………
2018-08-10 16:14:17,619 INFO [main] mapreduce.JobSubmitter: Submitting tokens for job: job_1533865678790_0003
2018-08-10 16:14:18,249 INFO [main] impl.YarnClientImpl: Submitted application application_1533865678790_0003
2018-08-10 16:14:18,290 INFO [main] mapreduce.Job: The url to track the job: http://mini02:8088/proxy/application_1533865678790_0003/
2018-08-10 16:14:18,290 INFO [main] mapreduce.Job: Running job: job_1533865678790_0003
2018-08-10 16:14:29,932 INFO [main] mapreduce.Job: Job job_1533865678790_0003 running in uber mode : false
2018-08-10 16:14:29,935 INFO [main] mapreduce.Job: map 0% reduce 0%
2018-08-10 16:14:39,958 INFO [main] mapreduce.Job: map 100% reduce 0%
2018-08-10 16:14:40,984 INFO [main] mapreduce.Job: Job job_1533865678790_0003 completed successfully
2018-08-10 16:14:41,131 INFO [main] mapreduce.Job: Counters:
Total time spent by all maps in occupied slots (ms)=7246
Total time spent by all reduces in occupied slots (ms)=0
Total time spent by all map tasks (ms)=7246
Total vcore-milliseconds taken by all map tasks=7246
Total megabyte-milliseconds taken by all map tasks=7419904
Map-Reduce Framework
Map input records=1
Map output records=1
Input split bytes=124
Spilled Records=0
Failed Shuffles=0
Merged Map outputs=0
GC time elapsed (ms)=242
CPU time spent (ms)=5580
Physical memory (bytes) snapshot=263946240
Virtual memory (bytes) snapshot=2176024576
Total committed heap usage (bytes)=94896128
HBase Counters
BYTES_IN_REMOTE_RESULTS=252
BYTES_IN_RESULTS=252
MILLIS_BETWEEN_NEXTS=959
NOT_SERVING_REGION_EXCEPTION=0
NUM_SCANNER_RESTARTS=0
NUM_SCAN_RESULTS_STALE=0
REGIONS_SCANNED=1
REMOTE_RPC_CALLS=1
REMOTE_RPC_RETRIES=0
ROWS_FILTERED=0
ROWS_SCANNED=1
RPC_CALLS=1
RPC_RETRIES=0
File Input Format Counters
Bytes Read=0
File Output Format Counters
Bytes Written=376

  

The exported files on the local filesystem

 [yun@mini04 task_1533865678790_0003_m_000000]$ pwd
/app/software/hbase_data/zhang_tb/_temporary//task_1533865678790_0003_m_000000
[yun@mini04 task_1533865678790_0003_m_000000]$ ll
total
-rw-r--r-- yun yun Aug : part-m-

2.4. Import data into HBase

Now import the data we just exported back into HBase.

Approach 1 (this import fails)

# delete the data of table zhang in HBase, then import
hbase(main)::* deleteall 'zhang',''
Took 0.0541 seconds
hbase(main)::> scan 'zhang'
ROW COLUMN+CELL
row(s)
Took 0.0247 seconds

The result: no matter what, the import showed no data. (Most likely because the deleteall left a delete marker with a newer timestamp than the exported cells, which are re-imported with their original timestamps, so they stay hidden until a major compaction removes the marker.)

Approach 2 (import works)

# truncate table zhang and then import the data (or drop table zhang and recreate it)
hbase(main)::* truncate 'zhang'
Truncating 'zhang' table (it may take a while):
Disabling table...
Truncating table...
Took 1.8954 seconds
hbase(main)::> scan 'zhang'
ROW COLUMN+CELL
row(s)
Took 1.4118 seconds

Import command

[yun@mini04 task_1533865678790_0003_m_000000]$ hbase org.apache.hadoop.hbase.mapreduce.Import zhang file:///app/software/hbase_data/zhang_tb/_temporary/1/task_1533865678790_0003_m_000000/part-m-00000
……………………

  

Check the data after the import

hbase(main):023:0* scan 'zhang'
ROW COLUMN+CELL
12345 column=baseinfo:age, timestamp=1533884261796, value=25
12345 column=baseinfo:name, timestamp=1533884258020, value=zhangnew
12345 column=eduinfo:pri_school, timestamp=1533884297216, value=star school
12345 column=userinfo:password, timestamp=1533884246132, value=111111
12345 column=userinfo:username, timestamp=1533884241334, value=zhangsan
1 row(s)
Took 0.0439 seconds

  

Approach 3 (import works)

# import into a newly created table; requirement: the table must have the same structure (column families)
hbase(main)::* create 'zhang_test','userinfo','baseinfo','eduinfo','workinfo'
Created table zhang_test
Took 0.7815 seconds
=> Hbase::Table - zhang_test
hbase(main)::> list # list all tables
TABLE
scores
user
zhang
zhang_test
row(s)
Took 0.0280 seconds
=> ["scores", "user", "zhang", "zhang_test"]

Import command

[yun@mini04 task_1533865678790_0003_m_000000]$ hbase org.apache.hadoop.hbase.mapreduce.Import zhang_test file:///app/software/hbase_data/zhang_tb/_temporary/1/task_1533865678790_0003_m_000000/part-m-00000
………………

  

Check the data after the import

 hbase(main)::* scan 'zhang_test'
ROW COLUMN+CELL
column=baseinfo:age, timestamp=, value=
column=baseinfo:name, timestamp=, value=zhangnew
column=eduinfo:pri_school, timestamp=, value=star school
column=userinfo:password, timestamp=, value=
column=userinfo:username, timestamp=, value=zhangsan
row(s)
Took 0.0544 seconds
