TiDB Database Cluster Deployment
1. TiDB Introduction
1.1 TiDB Overview
TiDB is an open-source distributed HTAP (Hybrid Transactional and Analytical Processing) database designed by PingCAP that combines the best features of traditional RDBMSs and NoSQL systems. TiDB is MySQL-compatible, supports horizontal scaling with no practical limit, and provides strong consistency and high availability. Its goal is to be a one-stop solution for both OLTP (Online Transactional Processing) and OLAP (Online Analytical Processing) workloads.
TiDB has the following characteristics:
Highly compatible with MySQL
In most cases applications can be migrated from MySQL to TiDB without any code changes, and sharded MySQL clusters can also be migrated in real time with the TiDB toolset.
Horizontal elastic scalability
TiDB scales out simply by adding new nodes, so throughput and storage can grow on demand to handle high-concurrency, high-volume workloads.
Distributed transactions
TiDB fully supports standard ACID transactions.
True financial-grade high availability
Compared with traditional master-slave (M-S) replication, the Raft-based majority-election protocol provides financial-grade strong data consistency: as long as the majority of replicas is not lost, failures are recovered automatically (auto-failover) without manual intervention.
One-stop HTAP solution
TiDB is a typical row-store OLTP database that also delivers strong OLAP performance. Combined with TiSpark, it provides a one-stop HTAP solution that serves OLTP and OLAP from a single copy of the data, without the traditional, cumbersome ETL process.
Cloud-native SQL database
TiDB is designed for the cloud and supports public, private, and hybrid clouds, making deployment, configuration, and maintenance straightforward.
TiDB Server
TiDB Server receives SQL requests, handles the SQL logic, locates the TiKV addresses that hold the data needed for the computation through PD, exchanges data with TiKV, and returns the final result. TiDB Server is stateless: it stores no data itself and only performs computation, so it can be scaled out without limit, and a load-balancing component (such as LVS, HAProxy, or F5) can expose a single access address for the whole cluster.
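As an illustration, a minimal HAProxy fragment along the following lines could front two TiDB servers; this is a sketch only: the listen port 3390 is arbitrary, and 4000 is assumed to be the default TiDB service port.
# haproxy.cfg fragment (illustrative sketch)
listen tidb-cluster
    bind 0.0.0.0:3390            # single address that applications connect to
    mode tcp                     # TiDB speaks the MySQL wire protocol, so plain TCP
    balance roundrobin           # TiDB servers are stateless, so round-robin is fine
    server tidb1 172.16.5.50:4000 check
    server tidb2 172.16.5.51:4000 check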
PD Server
Placement Driver (PD) is the management module of the whole cluster. It has three main jobs: storing the cluster metadata (which TiKV node holds a given key), scheduling and load balancing the TiKV cluster (data migration, Raft group leader transfer, and so on), and allocating globally unique, monotonically increasing transaction IDs.
PD itself runs as a cluster and must be deployed with an odd number of nodes; at least three nodes are generally recommended in production.
TiKV Server
TiKV Server stores the data. Externally, TiKV is a distributed, transactional key-value storage engine. The basic unit of storage is the Region: each Region holds the data of one key range (a left-closed, right-open interval from StartKey to EndKey), and each TiKV node serves multiple Regions. TiKV uses the Raft protocol for replication to keep the data consistent and fault tolerant. Replicas are managed per Region: the replicas of one Region on different nodes form a Raft group. Load balancing of data across TiKV nodes is scheduled by PD, also at Region granularity.
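Once the cluster is running, Region placement can be inspected through pd-ctl. A sketch using the pd-ctl binary shipped with tidb-ansible and one of this cluster's PD addresses:
# list the Regions (key ranges, peers, leaders) known to PD
/home/tidb/tidb-ansible/resources/bin/pd-ctl -u "http://172.16.5.50:2379" -d region
# list the TiKV stores and how many Regions/leaders each one holds
/home/tidb/tidb-ansible/resources/bin/pd-ctl -u "http://172.16.5.50:2379" -d store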
TiSpark
TiSpark is the main component for serving complex OLAP workloads in TiDB. It runs Spark SQL directly on top of the TiDB storage layer, combines the advantages of the distributed TiKV cluster, and integrates with the big-data ecosystem. With TiSpark, a single TiDB system can serve both OLTP and OLAP, removing the need for a separate data synchronization pipeline.
1.2 Basic TiDB Operations
Creating, viewing, and dropping databases, tables, indexes, and users:
CREATE DATABASE db_name [options];
CREATE DATABASE IF NOT EXISTS samp_db;
DROP DATABASE samp_db;
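The table, index, and privilege statements below operate on a person table; a minimal hypothetical definition (the column names are purely illustrative) could look like this:
CREATE TABLE IF NOT EXISTS person (
    id     INT NOT NULL PRIMARY KEY,
    name   VARCHAR(64),
    number INT
);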
DROP TABLE IF EXISTS person;
CREATE INDEX person_num ON person (number);
ALTER TABLE person ADD INDEX person_num (number);
CREATE UNIQUE INDEX person_num ON person (number);
CREATE USER 'tiuser'@'localhost' IDENTIFIED BY '';
GRANT SELECT ON samp_db.* TO 'tiuser'@'localhost';
SHOW GRANTS for tiuser@localhost;
DROP USER 'tiuser'@'localhost';
GRANT ALL PRIVILEGES ON test.* TO 'xxxx'@'%' IDENTIFIED BY 'yyyyy';
REVOKE ALL PRIVILEGES ON `test`.* FROM 'genius'@'localhost';
SHOW GRANTS for 'root'@'%';
SELECT Insert_priv FROM mysql.user WHERE user='test' AND host='%';
FLUSH PRIVILEGES;
2. TiDB Ansible Deployment
2.1 Preparing the Cluster Environment
The TiDB cluster is built from three physical machines with the IP addresses 172.16.5.50, 172.16.5.51, and 172.16.5.52; 172.16.5.51 also serves as the control machine.
The software is installed as follows:
172.16.5.51  TiDB, PD, TiKV
172.16.5.50  TiKV
172.16.5.52  TiKV
Install the control machine software
yum -y install epel-release git curl sshpass atop vim htop net-tools
yum -y install python-pip
Create the tidb user on the control machine and generate an SSH key
# Create the tidb user
useradd -m -d /home/tidb tidb && passwd tidb
# Grant the tidb user passwordless sudo
visudo
tidb ALL=(ALL) NOPASSWD: ALL
# Generate an SSH key as the tidb user
su - tidb
ssh-keygen -t rsa -C mikel@tidb
Download TiDB-Ansible on the control machine
# Download the release-2.0 branch of TiDB-Ansible
cd /home/tidb && git clone -b release-2.0 https://github.com/pingcap/tidb-ansible.git
# Install Ansible and its dependencies
cd /home/tidb/tidb-ansible/ && pip install -r ./requirements.txt
Configure SSH mutual trust and sudo rules for the target machines from the control machine
# Edit hosts.ini
su - tidb
cd /home/tidb/tidb-ansible
vim hosts.ini
[servers]
172.16.5.50
172.16.5.51
172.16.5.52
[all:vars]
username = tidb
ntp_server = pool.ntp.org
# Set up SSH mutual trust
ansible-playbook -i hosts.ini create_users.yml -u root -k
Install the NTP service on the target machines
# From the control machine, install NTP on the target hosts
cd /home/tidb/tidb-ansible
ansible-playbook -i hosts.ini deploy_ntp.yml -u tidb -b
Adjust the cpufreq governor on the target machines
# Check the available cpupower governors; the virtual machines here do not support this, so adjust it only on the physical server
cpupower frequency-info --governors
analyzing CPU :
available cpufreq governors: Not Available
# Set the cpufreq governor to performance
cpupower frequency-set --governor performance
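The same setting can be pushed to every target host at once from the control machine. A sketch that reuses the hosts.ini inventory created earlier (only meaningful on machines whose hardware actually exposes cpufreq governors):
# run cpupower on all hosts in hosts.ini, escalating to root with sudo
ansible -i hosts.ini all -m shell -a "cpupower frequency-set --governor performance" -u tidb -b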
Mount the data disk with an ext4 filesystem on the target machines
# Create the partition table
parted -s -a optimal /dev/nvme0n1 mklabel gpt -- mkpart primary ext4 1 -1
# Or create the partition manually
parted /dev/sdb
mklabel gpt
mkpart primary 0KB 210GB
# Format the partition
mkfs.ext4 /dev/sdb
# Check the data disk partition UUID
[root@tidb-tikv1 ~]# lsblk -f
NAME FSTYPE LABEL UUID MOUNTPOINT
sda
├─sda1 xfs f41c3b1b-125f-407c-81fa-5197367feb39 /boot
├─sda2 xfs 8119193b-c774-467f-a057-98329c66b3b3 /
├─sda3
└─sda5 xfs 42356bb3-911a-4dc4-b56e-815bafd08db2 /home
sdb ext4 532697e9-970e-49d4-bdba-df386cac34d2
# On each of the three machines, edit /etc/fstab and add the nodelalloc mount option
vim /etc/fstab
UUID=8119193b-c774-467f-a057-98329c66b3b3 / xfs defaults
UUID=f41c3b1b-125f-407c-81fa-5197367feb39 /boot xfs defaults
UUID=42356bb3-911a-4dc4-b56e-815bafd08db2 /home xfs defaults
UUID=532697e9-970e-49d4-bdba-df386cac34d2 /data ext4 defaults,nodelalloc,noatime
# Mount the data disk
mkdir /data
mount -a
mount -t ext4
/dev/sdb on /data type ext4 (rw,noatime,seclabel,nodelalloc,data=ordered)
Allocate machine resources and edit the inventory.ini file
# Topology: a single TiKV instance per host
Name HostIP Services
tidb-tikv1 172.16.5.50 PD1, TiDB1, TiKV1
tidb-tikv2 172.16.5.51 PD2, TiKV2
tidb-tikv3 172.16.5.52 PD3, TiKV3
# Edit the inventory.ini file
cd /home/tidb/tidb-ansible
vim inventory.ini
## TiDB Cluster Part
[tidb_servers]
172.16.5.50
172.16.5.51

[tikv_servers]
172.16.5.50
172.16.5.51
172.16.5.52

[pd_servers]
172.16.5.50
172.16.5.51
172.16.5.52

## Monitoring Part
# prometheus and pushgateway servers
[monitoring_servers]
172.16.5.50

# node_exporter and blackbox_exporter servers
[monitored_servers]
172.16.5.50
172.16.5.51
172.16.5.52

[all:vars]
#deploy_dir = /home/tidb/deploy
deploy_dir = /data/deploy
# Verify SSH mutual trust
[tidb@tidb-tikv1 tidb-ansible]$ ansible -i inventory.ini all -m shell -a 'whoami'
172.16.5.51 | SUCCESS | rc= >>
tidb
172.16.5.52 | SUCCESS | rc= >>
tidb
172.16.5.50 | SUCCESS | rc= >>
tidb
# Verify passwordless sudo for the tidb user
[tidb@tidb-tikv1 tidb-ansible]$ ansible -i inventory.ini all -m shell -a 'whoami' -b
172.16.5.52 | SUCCESS | rc= >>
root
172.16.5.51 | SUCCESS | rc= >>
root
172.16.5.50 | SUCCESS | rc= >>
root
# Run the local_prepare.yml playbook to download the TiDB binaries to the control machine (requires Internet access)
ansible-playbook local_prepare.yml
# Initialize the system environment and tune kernel parameters
ansible-playbook bootstrap.yml
2.2 Deploying the TiDB Cluster
ansible-playbook deploy.yml
2.3 Starting the TiDB Cluster
ansible-playbook start.yml
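The tidb-ansible repository also ships a matching stop playbook, which can be used later to shut the whole cluster down:
ansible-playbook stop.yml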
2.4 Testing the Cluster
# Connect with a MySQL client; the TCP port is the default TiDB service port (4000)
mysql -u root -h 172.16.5.50 -P 4000
mysql> show databases;
+--------------------+
| Database |
+--------------------+
| INFORMATION_SCHEMA |
| PERFORMANCE_SCHEMA |
| mysql |
| test |
+--------------------+
rows in set (0.00 sec)
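Beyond listing databases, a quick write/read smoke test can be run in the pre-created test database; this is an illustrative sketch, and the table name t1 is arbitrary:
CREATE TABLE test.t1 (id INT PRIMARY KEY, name VARCHAR(16));
INSERT INTO test.t1 VALUES (1, 'tidb');
SELECT * FROM test.t1;
DROP TABLE test.t1;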
# Access the monitoring platform in a browser
URL: http://172.16.5.51:3000 (default username/password: admin/admin)
3. Scaling Out the TiDB Cluster
3.1 Adding TiDB/TiKV Nodes
# Current topology: a single TiKV instance per host
Name HostIP Services
tidb-tikv1 172.16.5.50 PD1, TiDB1, TiKV1
tidb-tikv2 172.16.5.51 PD2, TiKV2
tidb-tikv3 172.16.5.52 PD3, TiKV3
# Add a new TiDB node
Add a TiDB node (tidb-tikv4) with the IP address 172.16.5.53
# Edit the inventory.ini file
cd /home/tidb/tidb-ansible
vim inventory.ini
------------------start---------------------------
## TiDB Cluster Part
[tidb_servers]
172.16.5.50
172.16.5.51
172.16.5.53

[tikv_servers]
172.16.5.50
172.16.5.51
172.16.5.52

[pd_servers]
172.16.5.50
172.16.5.51
172.16.5.52

## Monitoring Part
# prometheus and pushgateway servers
[monitoring_servers]
172.16.5.50

# node_exporter and blackbox_exporter servers
[monitored_servers]
172.16.5.50
172.16.5.51
172.16.5.52
172.16.5.53
----------------------end-------------------
# The resulting topology
Name HostIP Services
tidb-tikv1 172.16.5.50 PD1, TiDB1, TiKV1
tidb-tikv2 172.16.5.51 PD2, TiKV2
tidb-tikv3 172.16.5.52 PD3, TiKV3
tidb-tikv4 172.16.5.53 TiDB2
# Initialize the new node
ansible-playbook bootstrap.yml -l 172.16.5.53
# Deploy the new node
ansible-playbook deploy.yml -l 172.16.5.53
# Start the services on the new node
ansible-playbook start.yml -l 172.16.5.53
# Update the Prometheus configuration and restart it
ansible-playbook rolling_update_monitor.yml --tags=prometheus
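To confirm that the new TiDB instance is actually serving SQL, it can be queried directly. A sketch, assuming the default TiDB port 4000 and the built-in tidb_version() function:
mysql -u root -h 172.16.5.53 -P 4000 -e "SELECT tidb_version()\G"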
3.2 Adding a PD Node
# Current topology: a single TiKV instance per host
Name HostIP Services
tidb-tikv1 172.16.5.50 PD1, TiDB1, TiKV1
tidb-tikv2 172.16.5.51 PD2, TiKV2
tidb-tikv3 172.16.5.52 PD3, TiKV3
# Add a new PD node
Add a PD node (tidb-pd1) with the IP address 172.16.5.54
# Edit the inventory.ini file
cd /home/tidb/tidb-ansible
vim inventory.ini
## TiDB Cluster Part
[tidb_servers]
172.16.5.50
172.16.5.51

[tikv_servers]
172.16.5.50
172.16.5.51
172.16.5.52

[pd_servers]
172.16.5.50
172.16.5.51
172.16.5.52
172.16.5.54

## Monitoring Part
# prometheus and pushgateway servers
[monitoring_servers]
172.16.5.50

# node_exporter and blackbox_exporter servers
[monitored_servers]
172.16.5.50
172.16.5.51
172.16.5.52
172.16.5.54
# The resulting topology
Name HostIP Services
tidb-tikv1 172.16.5.50 PD1, TiDB1, TiKV1
tidb-tikv2 172.16.5.51 PD2, TiKV2
tidb-tikv3 172.16.5.52 PD3, TiKV3
tidb-pd1 172.16.5.54 PD4
# Initialize the new node
ansible-playbook bootstrap.yml -l 172.16.5.54
# Deploy the new node
ansible-playbook deploy.yml -l 172.16.5.54
# Log in to the new PD node and edit its startup script: {deploy_dir}/scripts/run_pd.sh
1. Remove the --initial-cluster="xxxx" \ line.
2. Add --join="http://172.16.5.50:2379" \ ; the IP address can be that of any existing PD node in the cluster.
3. Manually start the PD service on the new node:
{deploy_dir}/scripts/start_pd.sh
4. Use pd-ctl to check whether the new node was added successfully:
/home/tidb/tidb-ansible/resources/bin/pd-ctl -u "http://172.16.5.50:2379" -d member
# Perform a rolling update of the entire cluster
ansible-playbook rolling_update.yml
# Update the Prometheus configuration and restart it
ansible-playbook rolling_update_monitor.yml --tags=prometheus
4. TiDB Cluster Testing
4.1 sysbench Benchmark Testing
Installing sysbench
# Install from the binary package repository
curl -s https://packagecloud.io/install/repositories/akopytov/sysbench/script.rpm.sh | sudo bash
sudo yum -y install sysbench
Performance tests
# CPU performance test
sysbench --test=cpu --cpu-max-prime= run
----------------------------------start----------------------------------------
Number of threads:
Initializing random number generator from current time
Prime numbers limit:
Initializing worker threads...
Threads started!
CPU speed:
events per second: 286.71
General statistics:
total time: .0004s
total number of events:
Latency (ms):
min: 3.46
avg: 3.49
max: 4.49
95th percentile: 3.55
sum: 9997.23
Threads fairness:
events (avg/stddev): 2868.0000/0.00
execution time (avg/stddev): 9.9972/0.00
-----------------------------------end-------------------------------------------
# Thread test
sysbench --test=threads --num-threads= --thread-yields= --thread-locks= run
------------------------------------start-----------------------------------------
Number of threads:
Initializing random number generator from current time
Initializing worker threads...
Threads started!
General statistics:
total time: .0048s
total number of events:
Latency (ms):
min: 0.05
avg: 5.88
max: 49.15
95th percentile: 17.32
sum: 640073.32
Threads fairness:
events (avg/stddev): 1701.2969/36.36
execution time (avg/stddev): 10.0011/0.00
-----------------------------------end-----------------------------------------
# Disk I/O test
sysbench --test=fileio --num-threads= --file-total-size=3G --file-test-mode=rndrw prepare
----------------------------------start-----------------------------------------
files, 24576Kb each, 3072Mb total
Creating files for the test...
Extra file open flags: (none)
Creating file test_file.0
(... repeated for each of the 128 test files ...)
bytes written in 339.76 seconds (9.04 MiB/sec)
----------------------------------end------------------------------------------
sysbench --test=fileio --num-threads= --file-total-size=3G --file-test-mode=rndrw run
----------------------------------start-----------------------------------------
Number of threads:
Initializing random number generator from current time
Extra file open flags: (none)
files, 24MiB each
3GiB total file size
Block size 16KiB
Number of IO requests:
Read/Write ratio for combined random IO test: 1.50
Periodic FSYNC enabled, calling fsync() each requests.
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random r/w test
Initializing worker threads...
Threads started!
File operations:
reads/s: 299.19
writes/s: 199.46
fsyncs/s: 816.03
Throughput:
read, MiB/s: 4.67
written, MiB/s: 3.12
General statistics:
total time: .8270s
total number of events:
Latency (ms):
min: 0.00
avg: 13.14
max: 340.58
95th percentile: 92.42
sum: 160186.15
Threads fairness:
events (avg/stddev): 761.8125/216.01
execution time (avg/stddev): 10.0116/0.01
--------------------------------------end---------------------------------------
sysbench --test=fileio --num-threads= --file-total-size=3G --file-test-mode=rndrw cleanup
# Memory test
sysbench --test=memory --memory-block-size=8k --memory-total-size=4G run
------------------------------------start-----------------------------------------
Number of threads:
Initializing random number generator from current time
Running memory speed test with the following options:
block size: 8KiB
total size: 4096MiB
operation: write
scope: global
Initializing worker threads...
Threads started!
Total operations: (1111310.93 per second)
4096.00 MiB transferred (8682.12 MiB/sec)
General statistics:
total time: .4692s
total number of events:
Latency (ms):
min: 0.00
avg: 0.00
max: 0.03
95th percentile: 0.00
sum: 381.39
Threads fairness:
events (avg/stddev): 524288.0000/0.00
execution time (avg/stddev): 0.3814/0.00
-------------------------------------end---------------------------------------
4.2 OLTP Testing
# Log in to TiDB and create the test database
mysql -u root -P 4000 -h 172.16.5.50
create database sbtest;
# Prepare the test data
sysbench /usr/share/sysbench/oltp_common.lua --mysql-host=172.16.5.50 --mysql-port= --mysql-user=root --tables=20 --table_size=20000000 --threads=100 --max-requests= prepare
--tables=20             # create 20 tables
--table_size=20000000   # 20,000,000 rows per table
--threads=100           # use 100 threads
---------------------------------error output------------------------------------------
FATAL: mysql_drv_query() returned error (PD server timeout[try again later])
log.go: [warning] etcdserver: timed out waiting for read index response
heartbeat_streams.go: [error] [store] send keepalive message fail: EOF
leader.go: [info] leader is deleted
leader.go: [info] pd2 is not etcd leader, skip campaign leader and check later
coordinator.go: [info] [region] send schedule command: transfer leader from store to store
FATAL: mysql_drv_query() returned error (Information schema is out of date)
------------------------------------end-----------------------------------------------
# Retry with 10 threads, 10 tables, and 2,000,000 rows per table
sysbench /usr/share/sysbench/oltp_common.lua --mysql-host=172.16.5.50 --mysql-port= --mysql-user=root --tables=10 --table_size=2000000 --threads=10 --max-requests= prepare
--------------------------------------start--------------------------------------------
FATAL: mysql_drv_query() returned error (Information schema is out of date)  # the same timeout error
Only 2 tables were fully populated; the remaining 8 tables were not completely written, although their indexes were created.
------------------------------------end-----------------------------------------------
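One hedged way to see how far the prepare step got is to count rows in the sysbench tables directly; sysbench's default schema names them sbtest1 ... sbtestN inside the sbtest database, and the default TiDB port 4000 is assumed here:
mysql -u root -h 172.16.5.50 -P 4000 -e "SHOW TABLES FROM sbtest"
mysql -u root -h 172.16.5.50 -P 4000 -e "SELECT COUNT(*) FROM sbtest.sbtest1"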
# Read/write test against the TiDB cluster
sysbench /usr/share/sysbench/oltp_read_write.lua --mysql-host=172.16.5.50 --mysql-port= --mysql-user=root --tables= --table_size= --threads= --max-requests= run
----------------------------------------start--------------------------------------
Number of threads:
Initializing random number generator from current time
Initializing worker threads...
Threads started!
SQL statistics:
queries performed:
read:
write:
other:
total:
transactions: (5.60 per sec.)
queries: (112.10 per sec.)
ignored errors: (0.00 per sec.)
reconnects: (0.00 per sec.)
General statistics:
total time: .0594s
total number of events:
Latency (ms):
min: 944.55
avg: 1757.78
max: 2535.05
95th percentile: 2320.55
sum: 108982.56
Threads fairness:
events (avg/stddev): 6.2000/0.40
execution time (avg/stddev): 10.8983/0.31
------------------------------------end----------------------------------------
# Comparison test against MySQL
mysql -u root -P -h 172.16.5.154
create database sbtest;
sysbench /usr/share/sysbench/oltp_common.lua --mysql-host=172.16.5.154 --mysql-port= --mysql-user=root --mysql-password=root --tables= --table_size= --threads= --max-requests= prepare
No errors occurred when running the same test against MySQL.
4.3 Business Data Testing
sysbench /usr/share/sysbench/oltp_read_write.lua --mysql-host=172.16.5.50 --mysql-port= --mysql-user=root --tables= --table_size= --threads= --max-requests= run
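After the run, the generated test data can be dropped again. A sketch of the cleanup call; the connection parameters mirror the run command above, the default TiDB port 4000 is assumed, and --tables should match the number of tables that were actually prepared (10 in the retry above):
sysbench /usr/share/sysbench/oltp_read_write.lua --mysql-host=172.16.5.50 --mysql-port=4000 --mysql-user=root --tables=10 cleanup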