Source: https://blog.csdn.net/mchdba/article/details/108896766

Environment: CentOS 7, TiDB 4.0.4, TiUP v1.0.8

Goal: add two TiKV nodes, 172.21.210.37 and 172.21.210.38.

Approach: initialize the two new servers and set up SSH trust -> write the scale-out topology file -> run the scale-out command -> restart Grafana.

1. Initialize the servers and configure SSH trust

1) Synchronize time across the new machines (NTP/chrony; see the sketch below).
2) Configure SSH trust from the control machine:
ssh-copy-id root@172.21.210.37
ssh-copy-id root@172.21.210.38
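
A minimal sketch of the time-sync step, run on each new node. It assumes chrony (the CentOS 7 default); the original post does not show the exact commands:

yum install -y chrony              # install the chrony NTP client
systemctl enable --now chronyd     # start it now and enable it at boot
chronyc tracking                   # confirm the clock is synchronized before joining the cluster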

2. Write the scale-out topology file

tiup cluster list                        # list the existing cluster names
tiup cluster edit-config <cluster-name>  # view the cluster configuration and copy the matching settings
 
vi scale-out.yaml
tikv_servers:
- host: 172.21.210.37
  ssh_port: 22
  port: 20160
  status_port: 20180
  deploy_dir: /data1/tidb-deploy/tikv-20160
  data_dir: /data1/tidb-data/tikv-20160
  arch: amd64
  os: linux
- host: 172.21.210.38
  ssh_port: 22
  port: 20160
  status_port: 20180
  deploy_dir: /data1/tidb-deploy/tikv-20160
  data_dir: /data1/tidb-data/tikv-20160
  arch: amd64
  os: linux
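
The ports, deploy_dir, and data_dir above mirror the existing TiKV instances, so there are no conflicts on the new hosts. As a hedged aside, later TiUP releases (not necessarily the v1.0.8 used here) can pre-check a scale-out topology against the running cluster before applying it:

tiup cluster check tidb scale-out.yaml --cluster --user root   # verify SSH, ports, and directories before scaling out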

3. Run the scale-out command

This assumes the user running the command already has SSH trust with the new machines. If not, pass -p to enter the new machines' password interactively, or -i to point at a private key file.
tiup cluster scale-out <cluster-name> scale-out.yaml
The expected output ends with "Scaled cluster <cluster-name> out successfully", which indicates the scale-out succeeded.
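
For reference, the credential variants mentioned above would look like this (cluster name tidb as used in this post; the key path matches the one shown in the log below):

tiup cluster scale-out tidb scale-out.yaml -p                     # prompt for the root password of the new hosts
tiup cluster scale-out tidb scale-out.yaml -i /root/.ssh/id_rsa   # or authenticate with a specific private key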
 
[root@host-172-21-210-32 tidb_config]# tiup cluster scale-out tidb scale-out.yaml
Starting component `cluster`:  scale-out tidb scale-out.yaml
Please confirm your topology:
TiDB Cluster: tidb
TiDB Version: v4.0.4
Type  Host           Ports        OS/Arch       Directories
----  ----           -----        -------       -----------
tikv  172.21.210.37  20160/20180  linux/x86_64  /data1/tidb-deploy/tikv-20160,/data1/tidb-data/tikv-20160
tikv  172.21.210.38  20160/20180  linux/x86_64  /data1/tidb-deploy/tikv-20160,/data1/tidb-data/tikv-20160
Attention:
    1. If the topology is not what you expected, check your yaml file.
    2. Please confirm there is no port/directory conflicts in same host.
Do you want to continue? [y/N]:  y
+ [ Serial ] - SSHKeySet: privateKey=/root/.tiup/storage/cluster/clusters/tidb/ssh/id_rsa, publicKey=/root/.tiup/storage/cluster/clusters/tidb/ssh/id_rsa.pub
 
 
  - Download tikv:v4.0.4 (linux/amd64) ... Done
+ [ Serial ] - RootSSH: user=root, host=172.21.210.38, port=22, key=/root/.ssh/id_rsa
+ [ Serial ] - EnvInit: user=tidb, host=172.21.210.38
+ [ Serial ] - RootSSH: user=root, host=172.21.210.37, port=22, key=/root/.ssh/id_rsa
+ [ Serial ] - EnvInit: user=tidb, host=172.21.210.37
+ [ Serial ] - Mkdir: host=172.21.210.37, directories='/data1/tidb-deploy','/data1/tidb-data'
+ [ Serial ] - Mkdir: host=172.21.210.38, directories='/data1/tidb-deploy','/data1/tidb-data'
+ [Parallel] - UserSSH: user=tidb, host=172.21.210.32
+ [Parallel] - UserSSH: user=tidb, host=172.21.210.39
+ [Parallel] - UserSSH: user=tidb, host=172.21.210.33
+ [Parallel] - UserSSH: user=tidb, host=172.21.210.34
+ [Parallel] - UserSSH: user=tidb, host=172.21.210.32
+ [Parallel] - UserSSH: user=tidb, host=172.21.210.33
+ [Parallel] - UserSSH: user=tidb, host=172.21.210.35
+ [Parallel] - UserSSH: user=tidb, host=172.21.210.32
+ [Parallel] - UserSSH: user=tidb, host=172.21.210.36
+ [Parallel] - UserSSH: user=tidb, host=172.21.210.32
+ [Parallel] - UserSSH: user=tidb, host=172.21.210.32
+ [ Serial ] - UserSSH: user=tidb, host=172.21.210.38
 
+ [ Serial ] - UserSSH: user=tidb, host=172.21.210.37
+ [ Serial ] - Mkdir: host=172.21.210.38, directories='/data1/tidb-deploy/tikv-20160','/data1/tidb-deploy/tikv-20160/log','/data1/tidb-deploy/tikv-20160/bin','/data1/tidb-deploy/tikv-20160/conf','/data1/tidb-deploy/tikv-20160/scripts'
+ [ Serial ] - Mkdir: host=172.21.210.37, directories='/data1/tidb-deploy/tikv-20160','/data1/tidb-deploy/tikv-20160/log','/data1/tidb-deploy/tikv-20160/bin','/data1/tidb-deploy/tikv-20160/conf','/data1/tidb-deploy/tikv-20160/scripts'
 
 
  - Copy blackbox_exporter -> 172.21.210.37 ... Mkdir: host=172.21.210.37, directories='/data1/tidb-deploy/monitor-9100','/data1/t...
  - Copy node_exporter -> 172.21.210.37 ... CopyComponent: component=node_exporter, version=v0.17.0, remote=172.21.210.37:/data1/t...
  - Copy blackbox_exporter -> 172.21.210.37 ... MonitoredConfig: cluster=tidb, user=tidb, node_exporter_port=9100, blackbox_export...
  - Copy node_exporter -> 172.21.210.38 ... Done
+ [ Serial ] - ScaleConfig: cluster=tidb, user=tidb, host=172.21.210.37, service=tikv-20160.service, deploy_dir=/data1/tidb-deploy/tikv-20160, data_dir=[/data1/tidb-data/tikv-20160], log_dir=/data1/tidb-deploy/tikv-20160/log, cache_dir=
+ [ Serial ] - ScaleConfig: cluster=tidb, user=tidb, host=172.21.210.38, service=tikv-20160.service, deploy_dir=/data1/tidb-deploy/tikv-20160, data_dir=[/data1/tidb-data/tikv-20160], log_dir=/data1/tidb-deploy/tikv-20160/log, cache_dir=
+ [ Serial ] - ClusterOperate: operation=StartOperation, options={Roles:[] Nodes:[] Force:false SSHTimeout:0 OptTimeout:120 APITimeout:0 IgnoreConfigCheck:false RetainDataRoles:[] RetainDataNodes:[]}
Starting component pd
        Starting instance pd 172.21.210.33:2379
        Starting instance pd 172.21.210.32:2379
        Start pd 172.21.210.33:2379 success
        Start pd 172.21.210.32:2379 success
Starting component node_exporter
        Starting instance 172.21.210.32
        Start 172.21.210.32 success
Starting component blackbox_exporter
        Starting instance 172.21.210.32
        Start 172.21.210.32 success
Starting component node_exporter
        Starting instance 172.21.210.33
        Start 172.21.210.33 success
Starting component blackbox_exporter
        Starting instance 172.21.210.33
        Start 172.21.210.33 success
Starting component tikv
        Starting instance tikv 172.21.210.35:20160
        Starting instance tikv 172.21.210.34:20160
        Starting instance tikv 172.21.210.39:20160
        Starting instance tikv 172.21.210.36:20160
        Start tikv 172.21.210.39:20160 success
        Start tikv 172.21.210.34:20160 success
        Start tikv 172.21.210.35:20160 success
        Start tikv 172.21.210.36:20160 success
Starting component node_exporter
        Starting instance 172.21.210.35
        Start 172.21.210.35 success
Starting component blackbox_exporter
        Starting instance 172.21.210.35
        Start 172.21.210.35 success
Starting component node_exporter
        Starting instance 172.21.210.34
        Start 172.21.210.34 success
Starting component blackbox_exporter
        Starting instance 172.21.210.34
        Start 172.21.210.34 success
Starting component node_exporter
        Starting instance 172.21.210.39
        Start 172.21.210.39 success
Starting component blackbox_exporter
        Starting instance 172.21.210.39
        Start 172.21.210.39 success
Starting component node_exporter
        Starting instance 172.21.210.36
        Start 172.21.210.36 success
Starting component blackbox_exporter
        Starting instance 172.21.210.36
        Start 172.21.210.36 success
Starting component tidb
        Starting instance tidb 172.21.210.33:4000
        Starting instance tidb 172.21.210.32:4000
        Start tidb 172.21.210.32:4000 success
        Start tidb 172.21.210.33:4000 success
Starting component prometheus
        Starting instance prometheus 172.21.210.32:9090
        Start prometheus 172.21.210.32:9090 success
Starting component grafana
        Starting instance grafana 172.21.210.32:3000
        Start grafana 172.21.210.32:3000 success
Starting component alertmanager
        Starting instance alertmanager 172.21.210.32:9093
        Start alertmanager 172.21.210.32:9093 success
Checking service state of pd
        172.21.210.32      Active: active (running) since Fri 2020-10-16 22:50:31 CST; 2 weeks 5 days ago
        172.21.210.33      Active: active (running) since Fri 2020-10-16 22:50:22 CST; 2 weeks 5 days ago
Checking service state of tikv
        172.21.210.34      Active: active (running) since Fri 2020-10-16 22:50:19 CST; 2 weeks 5 days ago
        172.21.210.35      Active: active (running) since Fri 2020-10-16 22:50:19 CST; 2 weeks 5 days ago
        172.21.210.36      Active: active (running) since Sat 2020-10-17 02:25:23 CST; 2 weeks 5 days ago
        172.21.210.39      Active: active (running) since Fri 2020-10-16 23:34:13 CST; 2 weeks 5 days ago
Checking service state of tidb
        172.21.210.32      Active: active (running) since Fri 2020-10-16 22:50:49 CST; 2 weeks 5 days ago
        172.21.210.33      Active: active (running) since Fri 2020-10-16 22:50:40 CST; 2 weeks 5 days ago
Checking service state of prometheus
        172.21.210.32      Active: active (running) since Sat 2020-10-17 02:25:27 CST; 2 weeks 5 days ago
Checking service state of grafana
        172.21.210.32      Active: active (running) since Fri 2020-10-16 23:55:07 CST; 2 weeks 5 days ago
Checking service state of alertmanager
        172.21.210.32      Active: active (running) since Fri 2020-10-16 22:51:06 CST; 2 weeks 5 days ago
+ [Parallel] - UserSSH: user=tidb, host=172.21.210.38
+ [Parallel] - UserSSH: user=tidb, host=172.21.210.37
+ [ Serial ] - save meta
+ [ Serial ] - ClusterOperate: operation=StartOperation, options={Roles:[] Nodes:[] Force:false SSHTimeout:0 OptTimeout:120 APITimeout:0 IgnoreConfigCheck:false RetainDataRoles:[] RetainDataNodes:[]}
Starting component tikv
        Starting instance tikv 172.21.210.38:20160
        Starting instance tikv 172.21.210.37:20160
        Start tikv 172.21.210.37:20160 success
        Start tikv 172.21.210.38:20160 success
Starting component node_exporter
        Starting instance 172.21.210.37
        Start 172.21.210.37 success
Starting component blackbox_exporter
        Starting instance 172.21.210.37
        Start 172.21.210.37 success
Starting component node_exporter
        Starting instance 172.21.210.38
        Start 172.21.210.38 success
Starting component blackbox_exporter
        Starting instance 172.21.210.38
        Start 172.21.210.38 success
Checking service state of tikv
        172.21.210.37      Active: active (running) since Thu 2020-11-05 11:33:46 CST; 3s ago
        172.21.210.38      Active: active (running) since Thu 2020-11-05 11:33:46 CST; 2s ago
+ [ Serial ] - InitConfig: cluster=tidb, user=tidb, host=172.21.210.32, path=/root/.tiup/storage/cluster/clusters/tidb/config-cache/alertmanager-9093.service, deploy_dir=/data1/tidb-deploy/alertmanager-9093, data_dir=[/data1/tidb-data/alertmanager-9093], log_dir=/data1/tidb-deploy/alertmanager-9093/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidb/config-cache
+ [ Serial ] - InitConfig: cluster=tidb, user=tidb, host=172.21.210.36, path=/root/.tiup/storage/cluster/clusters/tidb/config-cache/tikv-20160.service, deploy_dir=/data1/tidb-deploy/tikv-20160, data_dir=[/data1/tidb-data/tikv-20160], log_dir=/data1/tidb-deploy/tikv-20160/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidb/config-cache
+ [ Serial ] - InitConfig: cluster=tidb, user=tidb, host=172.21.210.32, path=/root/.tiup/storage/cluster/clusters/tidb/config-cache/tidb-4000.service, deploy_dir=/data1/tidb-deploy/tidb-4000, data_dir=[], log_dir=/data1/tidb-deploy/tidb-4000/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidb/config-cache
+ [ Serial ] - InitConfig: cluster=tidb, user=tidb, host=172.21.210.32, path=/root/.tiup/storage/cluster/clusters/tidb/config-cache/pd-2379.service, deploy_dir=/data1/tidb-deploy/pd-2379, data_dir=[/data1/tidb-data/pd-2379], log_dir=/data1/tidb-deploy/pd-2379/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidb/config-cache
+ [ Serial ] - InitConfig: cluster=tidb, user=tidb, host=172.21.210.37, path=/root/.tiup/storage/cluster/clusters/tidb/config-cache/tikv-20160.service, deploy_dir=/data1/tidb-deploy/tikv-20160, data_dir=[/data1/tidb-data/tikv-20160], log_dir=/data1/tidb-deploy/tikv-20160/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidb/config-cache
+ [ Serial ] - InitConfig: cluster=tidb, user=tidb, host=172.21.210.33, path=/root/.tiup/storage/cluster/clusters/tidb/config-cache/tidb-4000.service, deploy_dir=/data1/tidb-deploy/tidb-4000, data_dir=[], log_dir=/data1/tidb-deploy/tidb-4000/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidb/config-cache
+ [ Serial ] - InitConfig: cluster=tidb, user=tidb, host=172.21.210.35, path=/root/.tiup/storage/cluster/clusters/tidb/config-cache/tikv-20160.service, deploy_dir=/data1/tidb-deploy/tikv-20160, data_dir=[/data1/tidb-data/tikv-20160], log_dir=/data1/tidb-deploy/tikv-20160/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidb/config-cache
+ [ Serial ] - InitConfig: cluster=tidb, user=tidb, host=172.21.210.32, path=/root/.tiup/storage/cluster/clusters/tidb/config-cache/prometheus-9090.service, deploy_dir=/data1/tidb-deploy/prometheus-9090, data_dir=[/data1/tidb-data/prometheus-9090], log_dir=/data1/tidb-deploy/prometheus-9090/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidb/config-cache
+ [ Serial ] - InitConfig: cluster=tidb, user=tidb, host=172.21.210.34, path=/root/.tiup/storage/cluster/clusters/tidb/config-cache/tikv-20160.service, deploy_dir=/data1/tidb-deploy/tikv-20160, data_dir=[/data1/tidb-data/tikv-20160], log_dir=/data1/tidb-deploy/tikv-20160/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidb/config-cache
+ [ Serial ] - InitConfig: cluster=tidb, user=tidb, host=172.21.210.32, path=/root/.tiup/storage/cluster/clusters/tidb/config-cache/grafana-3000.service, deploy_dir=/data1/tidb-deploy/grafana-3000, data_dir=[], log_dir=/data1/tidb-deploy/grafana-3000/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidb/config-cache
+ [ Serial ] - InitConfig: cluster=tidb, user=tidb, host=172.21.210.38, path=/root/.tiup/storage/cluster/clusters/tidb/config-cache/tikv-20160.service, deploy_dir=/data1/tidb-deploy/tikv-20160, data_dir=[/data1/tidb-data/tikv-20160], log_dir=/data1/tidb-deploy/tikv-20160/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidb/config-cache
+ [ Serial ] - InitConfig: cluster=tidb, user=tidb, host=172.21.210.33, path=/root/.tiup/storage/cluster/clusters/tidb/config-cache/pd-2379.service, deploy_dir=/data1/tidb-deploy/pd-2379, data_dir=[/data1/tidb-data/pd-2379], log_dir=/data1/tidb-deploy/pd-2379/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidb/config-cache
+ [ Serial ] - InitConfig: cluster=tidb, user=tidb, host=172.21.210.39, path=/root/.tiup/storage/cluster/clusters/tidb/config-cache/tikv-20160.service, deploy_dir=/data1/tidb-deploy/tikv-20160, data_dir=[/data1/tidb-data/tikv-20160], log_dir=/data1/tidb-deploy/tikv-20160/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidb/config-cache
+ [ Serial ] - ClusterOperate: operation=RestartOperation, options={Roles:[prometheus] Nodes:[] Force:false SSHTimeout:0 OptTimeout:120 APITimeout:0 IgnoreConfigCheck:false RetainDataRoles:[] RetainDataNodes:[]}
Stopping component prometheus
        Stopping instance 172.21.210.32
        Stop prometheus 172.21.210.32:9090 success
Starting component prometheus
        Starting instance prometheus 172.21.210.32:9090
        Start prometheus 172.21.210.32:9090 success
Starting component node_exporter
        Starting instance 172.21.210.32
        Start 172.21.210.32 success
Starting component blackbox_exporter
        Starting instance 172.21.210.32
        Start 172.21.210.32 success
Checking service state of pd
        172.21.210.33      Active: active (running) since Fri 2020-10-16 22:50:22 CST; 2 weeks 5 days ago
        172.21.210.32      Active: active (running) since Fri 2020-10-16 22:50:31 CST; 2 weeks 5 days ago
Checking service state of tikv
        172.21.210.35      Active: active (running) since Fri 2020-10-16 22:50:19 CST; 2 weeks 5 days ago
        172.21.210.39      Active: active (running) since Fri 2020-10-16 23:34:13 CST; 2 weeks 5 days ago
        172.21.210.34      Active: active (running) since Fri 2020-10-16 22:50:19 CST; 2 weeks 5 days ago
        172.21.210.36      Active: active (running) since Sat 2020-10-17 02:25:23 CST; 2 weeks 5 days ago
Checking service state of tidb
        172.21.210.32      Active: active (running) since Fri 2020-10-16 22:50:49 CST; 2 weeks 5 days ago
        172.21.210.33      Active: active (running) since Fri 2020-10-16 22:50:40 CST; 2 weeks 5 days ago
Checking service state of prometheus
        172.21.210.32      Active: active (running) since Thu 2020-11-05 11:33:53 CST; 2s ago
Checking service state of grafana
        172.21.210.32      Active: active (running) since Fri 2020-10-16 23:55:07 CST; 2 weeks 5 days ago
Checking service state of alertmanager
        172.21.210.32      Active: active (running) since Fri 2020-10-16 22:51:06 CST; 2 weeks 5 days ago
+ [ Serial ] - UpdateTopology: cluster=tidb
Scaled cluster `tidb` out successfully

4. Check the cluster status and restart Grafana

Check the cluster status:
    tiup cluster display <cluster-name>
Restart Grafana so the dashboards pick up the new nodes:
    tiup cluster restart tidb -R grafana
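
Optionally, confirm from PD that the two new stores registered and regions are rebalancing onto them. A sketch assuming pd-ctl invoked through TiUP, against the PD endpoint seen in the log above:

tiup ctl pd -u http://172.21.210.32:2379 store   # the new stores should appear as "Up" with a growing region_count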
