Build a Highly Available KubeSphere Cluster and Enable All Plugins
Introduction
In most cases, a single-master cluster is enough for development and test environments. For production, however, you need to consider the high availability of the cluster: if critical components such as kube-apiserver, kube-scheduler, and kube-controller-manager all run on the same master node, Kubernetes and KubeSphere become unavailable as soon as that node goes down. You therefore need to set up a load balancer in front of multiple master nodes to create a highly available cluster. You can use any cloud load balancer or hardware load balancer (for example, F5), or build one yourself with Keepalived and HAProxy, or with Nginx.
Architecture
Before you begin, make sure you have prepared enough Linux machines. This article uses 14 machines: 3 act as master (control-plane) nodes and 11 act as worker nodes. Their private IP addresses and roles are listed in the config-sample.yaml file shown below.
Configure the load balancer
You must create a load balancer in your environment to listen on the key ports (on some cloud platforms this is called a listener). It is recommended to listen on the ports in the table below.
Service      Protocol   Port
apiserver    TCP        6443
ks-console   TCP        30880
http         TCP        80
https        TCP        443
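Once the cluster and load balancer are up, you can quickly verify that these ports are actually being forwarded. A minimal check (assuming the VIP 192.168.1.20 that is used later in config-sample.yaml; substitute your own load balancer address):

nc -vz 192.168.1.20 6443
nc -vz 192.168.1.20 30880

Both commands should report the port as open once the apiservers and the KubeSphere console are running behind the load balancer.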
Configure passwordless SSH
root@hello:~# ssh-keygen
root@hello:~# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.10
root@hello:~# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.11
root@hello:~# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.12
root@hello:~# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.13
root@hello:~# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.14
root@hello:~# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.15
root@hello:~# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.16
root@hello:~# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.51
root@hello:~# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.52
root@hello:~# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.53
root@hello:~# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.54
root@hello:~# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.55
root@hello:~# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.56
root@hello:~# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.57
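The fourteen ssh-copy-id calls above can also be driven by a loop; a small sketch that assumes the same host list and that you enter the root password interactively for each host:

for ip in 192.168.1.1{0..6} 192.168.1.5{1..7}; do
  ssh-copy-id -i ~/.ssh/id_rsa.pub root@"$ip"
done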
Download KubeKey
KubeKey is a next-generation installer that makes it simple, fast, and flexible to install Kubernetes and KubeSphere.
root@cby:~# export KKZONE=cn
root@cby:~# curl -sfL https://get-kk.kubesphere.io | VERSION=v1.2.0 sh -
Downloading kubekey v1.2.0 from https://kubernetes.pek3b.qingstor.com/kubekey/releases/download/v1.2.0/kubekey-v1.2.0-linux-amd64.tar.gz ...
Kubekey v1.2.0 Download Complete!
Make kk executable and generate a default configuration file
root@cby:~# chmod +x kk
root@cby:~# ./kk create config --with-kubesphere v3.2.0 --with-kubernetes v1.22.1
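kk create config writes a sample configuration file, config-sample.yaml by default, into the current directory. A quick sanity check before editing it (assuming the v1.2.0 binary downloaded above):

./kk version
ls -l config-sample.yaml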
Deploy KubeSphere and Kubernetes
root@cby:~# vim config-sample.yaml
root@cby:~#
root@cby:~#
root@cby:~# cat config-sample.yaml
apiVersion: kubekey.kubesphere.io/v1alpha1
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: master1, address: 192.168.1.10, internalAddress: 192.168.1.10, user: root, password: Cby123..}
  - {name: master2, address: 192.168.1.11, internalAddress: 192.168.1.11, user: root, password: Cby123..}
  - {name: master3, address: 192.168.1.12, internalAddress: 192.168.1.12, user: root, password: Cby123..}
  - {name: node1, address: 192.168.1.13, internalAddress: 192.168.1.13, user: root, password: Cby123..}
  - {name: node2, address: 192.168.1.14, internalAddress: 192.168.1.14, user: root, password: Cby123..}
  - {name: node3, address: 192.168.1.15, internalAddress: 192.168.1.15, user: root, password: Cby123..}
  - {name: node4, address: 192.168.1.16, internalAddress: 192.168.1.16, user: root, password: Cby123..}
  - {name: node5, address: 192.168.1.51, internalAddress: 192.168.1.51, user: root, password: Cby123..}
  - {name: node6, address: 192.168.1.52, internalAddress: 192.168.1.52, user: root, password: Cby123..}
  - {name: node7, address: 192.168.1.53, internalAddress: 192.168.1.53, user: root, password: Cby123..}
  - {name: node8, address: 192.168.1.54, internalAddress: 192.168.1.54, user: root, password: Cby123..}
  - {name: node9, address: 192.168.1.55, internalAddress: 192.168.1.55, user: root, password: Cby123..}
  - {name: node10, address: 192.168.1.56, internalAddress: 192.168.1.56, user: root, password: Cby123..}
  - {name: node11, address: 192.168.1.57, internalAddress: 192.168.1.57, user: root, password: Cby123..}
  roleGroups:
    etcd:
    - master1
    - master2
    - master3
    master:
    - master1
    - master2
    - master3
    worker:
    - node1
    - node2
    - node3
    - node4
    - node5
    - node6
    - node7
    - node8
    - node9
    - node10
    - node11
  controlPlaneEndpoint:
    ##Internal loadbalancer for apiservers
    #internalLoadbalancer: haproxy
    domain: lb.kubesphere.local
    address: "192.168.1.20"
    port: 6443
  kubernetes:
    version: v1.22.1
    clusterName: cluster.local
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
  registry:
    registryMirrors: []
    insecureRegistries: []
  addons: []
---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.2.0
spec:
  persistence:
    storageClass: ""
  authentication:
    jwtSecret: ""
  local_registry: ""
  # dev_tag: ""
  etcd:
    monitoring: false
    endpointIps: localhost
    port: 2379
    tlsEnable: true
  common:
    core:
      console:
        enableMultiLogin: true
        port: 30880
        type: NodePort
    # apiserver:
    #   resources: {}
    # controllerManager:
    #   resources: {}
    redis:
      enabled: false
      volumeSize: 2Gi
    openldap:
      enabled: false
      volumeSize: 2Gi
    minio:
      volumeSize: 20Gi
    monitoring:
      # type: external
      endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090
      GPUMonitoring:
        enabled: false
    gpu:
      kinds:
      - resourceName: "nvidia.com/gpu"
        resourceType: "GPU"
        default: true
    es:
      # master:
      #   volumeSize: 4Gi
      #   replicas: 1
      #   resources: {}
      # data:
      #   volumeSize: 20Gi
      #   replicas: 1
      #   resources: {}
      logMaxAge: 7
      elkPrefix: logstash
      basicAuth:
        enabled: false
        username: ""
        password: ""
      externalElasticsearchUrl: ""
      externalElasticsearchPort: ""
  alerting:
    enabled: false
    # thanosruler:
    #   replicas: 1
    #   resources: {}
  auditing:
    enabled: false
    # operator:
    #   resources: {}
    # webhook:
    #   resources: {}
  devops:
    enabled: false
    jenkinsMemoryLim: 2Gi
    jenkinsMemoryReq: 1500Mi
    jenkinsVolumeSize: 8Gi
    jenkinsJavaOpts_Xms: 512m
    jenkinsJavaOpts_Xmx: 512m
    jenkinsJavaOpts_MaxRAM: 2g
  events:
    enabled: false
    # operator:
    #   resources: {}
    # exporter:
    #   resources: {}
    # ruler:
    #   enabled: false
    #   replicas: 2
    #   resources: {}
  logging:
    enabled: false
    containerruntime: docker
    logsidecar:
      enabled: false
      replicas: 2
      # resources: {}
  metrics_server:
    enabled: false
  monitoring:
    storageClass: ""
    # kube_rbac_proxy:
    #   resources: {}
    # kube_state_metrics:
    #   resources: {}
    # prometheus:
    #   replicas: 1
    #   volumeSize: 20Gi
    #   resources: {}
    #   operator:
    #     resources: {}
    #   adapter:
    #     resources: {}
    # node_exporter:
    #   resources: {}
    # alertmanager:
    #   replicas: 1
    #   resources: {}
    # notification_manager:
    #   resources: {}
    #   operator:
    #     resources: {}
    #   proxy:
    #     resources: {}
    gpu:
      nvidia_dcgm_exporter:
        enabled: false
        # resources: {}
  multicluster:
    clusterRole: none
  network:
    networkpolicy:
      enabled: false
    ippool:
      type: none
    topology:
      type: none
  openpitrix:
    store:
      enabled: false
  servicemesh:
    enabled: false
  kubeedge:
    enabled: false
    cloudCore:
      nodeSelector: {"node-role.kubernetes.io/worker": ""}
      tolerations: []
      cloudhubPort: "10000"
      cloudhubQuicPort: "10001"
      cloudhubHttpsPort: "10002"
      cloudstreamPort: "10003"
      tunnelPort: "10004"
      cloudHub:
        advertiseAddress:
        - ""
        nodeLimit: "100"
      service:
        cloudhubNodePort: "30000"
        cloudhubQuicNodePort: "30001"
        cloudhubHttpsNodePort: "30002"
        cloudstreamNodePort: "30003"
        tunnelNodePort: "30004"
    edgeWatcher:
      nodeSelector: {"node-role.kubernetes.io/worker": ""}
      tolerations: []
      edgeWatcherAgent:
        nodeSelector: {"node-role.kubernetes.io/worker": ""}
        tolerations: []
root@cby:~#
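If you cannot place an external load balancer or a floating VIP in front of the master nodes, KubeKey can instead deploy a local HAProxy on every worker node to proxy the apiservers. A sketch of the relevant part of config-sample.yaml under that assumption (uncomment internalLoadbalancer and leave address empty; check the KubeKey documentation for your version before relying on this):

  controlPlaneEndpoint:
    internalLoadbalancer: haproxy
    domain: lb.kubesphere.local
    address: ""
    port: 6443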
If you use an external HAProxy load balancer, its configuration looks like this:
frontend kube-apiserver
    bind *:6443
    mode tcp
    option tcplog
    default_backend kube-apiserver

backend kube-apiserver
    mode tcp
    option tcplog
    option tcp-check
    balance roundrobin
    default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
    server kube-apiserver-1 192.168.1.10:6443 check
    server kube-apiserver-2 192.168.1.11:6443 check
    server kube-apiserver-3 192.168.1.12:6443 check
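The apiserver address 192.168.1.20 above is a virtual IP. If HAProxy runs on two load-balancer machines, Keepalived can float that VIP between them. A minimal sketch of /etc/keepalived/keepalived.conf on the primary node (the interface name eth0, the router ID, the priority, and the password are assumptions; on the standby, use state BACKUP and a lower priority):

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 101
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass K8SHA_KP
    }
    virtual_ipaddress {
        192.168.1.20
    }
}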
Install the required dependencies
root@hello:~# bash -x 1.sh
root@hello:~# cat 1.sh
ssh root@192.168.1.10 "apt update && apt install sudo curl openssl ebtables socat ipset conntrack nfs-common -y"
ssh root@192.168.1.11 "apt update && apt install sudo curl openssl ebtables socat ipset conntrack nfs-common -y"
ssh root@192.168.1.12 "apt update && apt install sudo curl openssl ebtables socat ipset conntrack nfs-common -y"
ssh root@192.168.1.13 "apt update && apt install sudo curl openssl ebtables socat ipset conntrack nfs-common -y"
ssh root@192.168.1.14 "apt update && apt install sudo curl openssl ebtables socat ipset conntrack nfs-common -y"
ssh root@192.168.1.15 "apt update && apt install sudo curl openssl ebtables socat ipset conntrack nfs-common -y"
ssh root@192.168.1.16 "apt update && apt install sudo curl openssl ebtables socat ipset conntrack nfs-common -y"
ssh root@192.168.1.51 "apt update && apt install sudo curl openssl ebtables socat ipset conntrack nfs-common -y"
ssh root@192.168.1.52 "apt update && apt install sudo curl openssl ebtables socat ipset conntrack nfs-common -y"
ssh root@192.168.1.53 "apt update && apt install sudo curl openssl ebtables socat ipset conntrack nfs-common -y"
ssh root@192.168.1.54 "apt update && apt install sudo curl openssl ebtables socat ipset conntrack nfs-common -y"
ssh root@192.168.1.55 "apt update && apt install sudo curl openssl ebtables socat ipset conntrack nfs-common -y"
ssh root@192.168.1.56 "apt update && apt install sudo curl openssl ebtables socat ipset conntrack nfs-common -y"
ssh root@192.168.1.57 "apt update && apt install sudo curl openssl ebtables socat ipset conntrack nfs-common -y"
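The per-host commands in 1.sh can be expressed as a loop as well; an equivalent sketch assuming the host list from config-sample.yaml:

for ip in 192.168.1.1{0..6} 192.168.1.5{1..7}; do
  ssh root@"$ip" "apt update && apt install -y sudo curl openssl ebtables socat ipset conntrack nfs-common"
done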
Start the installation
After the configuration is complete, run the following command to start the installation:
root@hello:~# ./kk create cluster -f config-sample.yaml
+---------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
| name    | sudo | curl | openssl | ebtables | socat | ipset | conntrack | docker | nfs client | ceph client | glusterfs client | time         |
+---------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
| node3   | y    | y    | y       | y        | y     | y     | y         |        | y          |             |                  | UTC 13:26:01 |
| node1   | y    | y    | y       | y        | y     | y     | y         |        | y          |             |                  | UTC 13:26:01 |
| node4   | y    | y    | y       | y        | y     | y     | y         |        | y          |             |                  | UTC 13:26:01 |
| node8   | y    | y    | y       | y        | y     | y     | y         |        | y          |             |                  | UTC 13:26:01 |
| node11  | y    | y    | y       | y        | y     | y     | y         |        | y          |             |                  | UTC 13:26:01 |
| master1 | y    | y    | y       | y        | y     | y     | y         |        | y          |             |                  | UTC 13:26:00 |
| node5   | y    | y    | y       | y        | y     | y     | y         |        | y          |             |                  | UTC 13:26:00 |
| master2 | y    | y    | y       | y        | y     | y     | y         |        | y          |             |                  | UTC 13:26:00 |
| node2   | y    | y    | y       | y        | y     | y     | y         |        | y          |             |                  | UTC 13:26:01 |
| node7   | y    | y    | y       | y        | y     | y     | y         |        | y          |             |                  | UTC 13:26:00 |
| master3 | y    | y    | y       | y        | y     | y     | y         |        | y          |             |                  | UTC 13:26:01 |
| node6   | y    | y    | y       | y        | y     | y     | y         |        | y          |             |                  | UTC 13:26:01 |
| node9   | y    | y    | y       | y        | y     | y     | y         |        | y          |             |                  | UTC 13:26:01 |
| node10  | y    | y    | y       | y        | y     | y     | y         |        | y          |             |                  | UTC 13:26:01 |
+---------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
This is a simple check of your environment.
Before installation, you should ensure that your machines meet all requirements specified at
https://github.com/kubesphere/kubekey#requirements-and-recommendations
Continue this installation? [yes/no]: yes
INFO[13:26:06 UTC] Downloading Installation Files
INFO[13:26:06 UTC] Downloading kubeadm ...
--- output omitted ---
Verify the installation
Run the following command to view the installation logs.
root@cby:~# kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
**************************************************
Collecting installation results ...
#####################################################
### Welcome to KubeSphere! ###
#####################################################
Console: http://192.168.1.10:30880
Account: admin
Password: P@88w0rd
NOTES:
1. After you log into the console, please check the
monitoring status of service components in
"Cluster Management". If any service is not
ready, please wait patiently until all components
are up and running.
2. Please change the default password after login.
#####################################################
https://kubesphere.io 2021-11-10 10:24:00
#####################################################
root@hello:~# kubectl get node
NAME      STATUS   ROLES                  AGE   VERSION
master1   Ready    control-plane,master   30m   v1.22.1
master2   Ready    control-plane,master   29m   v1.22.1
master3   Ready    control-plane,master   29m   v1.22.1
node1     Ready    worker                 29m   v1.22.1
node10    Ready    worker                 29m   v1.22.1
node11    Ready    worker                 29m   v1.22.1
node2     Ready    worker                 29m   v1.22.1
node3     Ready    worker                 29m   v1.22.1
node4     Ready    worker                 29m   v1.22.1
node5     Ready    worker                 29m   v1.22.1
node6     Ready    worker                 29m   v1.22.1
node7     Ready    worker                 30m   v1.22.1
node8     Ready    worker                 30m   v1.22.1
node9     Ready    worker                 29m   v1.22.1
Enable plugins after installation
Log in to the console as admin. Click Platform in the upper-left corner and select Cluster Management.
Click CRDs and enter clusterconfiguration in the search bar. Click the result to open its detail page.
In Custom Resources, click the options icon (three dots) on the right of ks-installer and select Edit YAML.
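If you prefer the command line to the console, the same custom resource can be edited directly with kubectl (ClusterConfiguration is the CRD that ks-installer watches):

kubectl -n kubesphere-system edit clusterconfiguration ks-installer

Either way, set the components you want to enable to true. The fully edited resource used in this article is shown below.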
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: >
{"apiVersion":"installer.kubesphere.io/v1alpha1","kind":"ClusterConfiguration","metadata":{"annotations":{},"labels":{"version":"v3.2.0"},"name":"ks-installer","namespace":"kubesphere-system"},"spec":{"alerting":{"enabled":false},"auditing":{"enabled":false},"authentication":{"jwtSecret":""},"common":{"core":{"console":{"enableMultiLogin":true,"port":30880,"type":"NodePort"}},"es":{"basicAuth":{"enabled":false,"password":"","username":""},"elkPrefix":"logstash","externalElasticsearchPort":"","externalElasticsearchUrl":"","logMaxAge":7},"gpu":{"kinds":[{"default":true,"resourceName":"nvidia.com/gpu","resourceType":"GPU"}]},"minio":{"volumeSize":"20Gi"},"monitoring":{"GPUMonitoring":{"enabled":false},"endpoint":"http://prometheus-operated.kubesphere-monitoring-system.svc:9090"},"openldap":{"enabled":false,"volumeSize":"2Gi"},"redis":{"enabled":false,"volumeSize":"2Gi"}},"devops":{"enabled":false,"jenkinsJavaOpts_MaxRAM":"2g","jenkinsJavaOpts_Xms":"512m","jenkinsJavaOpts_Xmx":"512m","jenkinsMemoryLim":"2Gi","jenkinsMemoryReq":"1500Mi","jenkinsVolumeSize":"8Gi"},"etcd":{"endpointIps":"192.168.1.10,192.168.1.11,192.168.1.12","monitoring":false,"port":2379,"tlsEnable":true},"events":{"enabled":false},"kubeedge":{"cloudCore":{"cloudHub":{"advertiseAddress":[""],"nodeLimit":"100"},"cloudhubHttpsPort":"10002","cloudhubPort":"10000","cloudhubQuicPort":"10001","cloudstreamPort":"10003","nodeSelector":{"node-role.kubernetes.io/worker":""},"service":{"cloudhubHttpsNodePort":"30002","cloudhubNodePort":"30000","cloudhubQuicNodePort":"30001","cloudstreamNodePort":"30003","tunnelNodePort":"30004"},"tolerations":[],"tunnelPort":"10004"},"edgeWatcher":{"edgeWatcherAgent":{"nodeSelector":{"node-role.kubernetes.io/worker":""},"tolerations":[]},"nodeSelector":{"node-role.kubernetes.io/worker":""},"tolerations":[]},"enabled":false},"logging":{"containerruntime":"docker","enabled":false,"logsidecar":{"enabled":false,"replicas":2}},"metrics_server":{"enabled":false},"monitoring":{"gpu":{"nvidia_dcgm_exporter":{"enabled":false}},"storageClass":""},"multicluster":{"clusterRole":"none"},"network":{"ippool":{"type":"none"},"networkpolicy":{"enabled":false},"topology":{"type":"none"}},"openpitrix":{"store":{"enabled":false}},"persistence":{"storageClass":""},"servicemesh":{"enabled":false}}}
  labels:
    version: v3.2.0
  name: ks-installer
  namespace: kubesphere-system
spec:
  alerting:
    enabled: true
  auditing:
    enabled: true
  authentication:
    jwtSecret: ''
  common:
    core:
      console:
        enableMultiLogin: true
        port: 30880
        type: NodePort
    es:
      basicAuth:
        enabled: true
        password: ''
        username: ''
      elkPrefix: logstash
      externalElasticsearchPort: ''
      externalElasticsearchUrl: ''
      logMaxAge: 7
    gpu:
      kinds:
        - default: true
          resourceName: nvidia.com/gpu
          resourceType: GPU
    minio:
      volumeSize: 20Gi
    monitoring:
      GPUMonitoring:
        enabled: true
      endpoint: 'http://prometheus-operated.kubesphere-monitoring-system.svc:9090'
    openldap:
      enabled: true
      volumeSize: 2Gi
    redis:
      enabled: true
      volumeSize: 2Gi
  devops:
    enabled: true
    jenkinsJavaOpts_MaxRAM: 2g
    jenkinsJavaOpts_Xms: 512m
    jenkinsJavaOpts_Xmx: 512m
    jenkinsMemoryLim: 2Gi
    jenkinsMemoryReq: 1500Mi
    jenkinsVolumeSize: 8Gi
  etcd:
    endpointIps: '192.168.1.10,192.168.1.11,192.168.1.12'
    monitoring: false
    port: 2379
    tlsEnable: true
  events:
    enabled: true
  kubeedge:
    cloudCore:
      cloudHub:
        advertiseAddress:
          - ''
        nodeLimit: '100'
      cloudhubHttpsPort: '10002'
      cloudhubPort: '10000'
      cloudhubQuicPort: '10001'
      cloudstreamPort: '10003'
      nodeSelector:
        node-role.kubernetes.io/worker: ''
      service:
        cloudhubHttpsNodePort: '30002'
        cloudhubNodePort: '30000'
        cloudhubQuicNodePort: '30001'
        cloudstreamNodePort: '30003'
        tunnelNodePort: '30004'
      tolerations: []
      tunnelPort: '10004'
    edgeWatcher:
      edgeWatcherAgent:
        nodeSelector:
          node-role.kubernetes.io/worker: ''
        tolerations: []
      nodeSelector:
        node-role.kubernetes.io/worker: ''
      tolerations: []
    enabled: true
  logging:
    containerruntime: docker
    enabled: true
    logsidecar:
      enabled: true
      replicas: 2
  metrics_server:
    enabled: true
  monitoring:
    gpu:
      nvidia_dcgm_exporter:
        enabled: true
    storageClass: ''
  multicluster:
    clusterRole: none
  network:
    ippool:
      type: calico
    networkpolicy:
      enabled: true
    topology:
      type: weave-scope
  openpitrix:
    store:
      enabled: true
  persistence:
    storageClass: ''
  servicemesh:
    enabled: true
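After you save the edited resource, ks-installer picks up the change and starts deploying the newly enabled components. You can follow its progress with the same log command used during installation:

kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f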
Configure the Alibaba Cloud registry mirror on all servers in batch
root@hello:~# vim 8
root@hello:~# cat 8
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": ["https://ted9wxpi.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
root@hello:~# vim 7
root@hello:~# cat 7
scp 8 root@192.168.1.11:
scp 8 root@192.168.1.12:
scp 8 root@192.168.1.13:
scp 8 root@192.168.1.14:
scp 8 root@192.168.1.15:
scp 8 root@192.168.1.16:
scp 8 root@192.168.1.51:
scp 8 root@192.168.1.52:
scp 8 root@192.168.1.53:
scp 8 root@192.168.1.54:
scp 8 root@192.168.1.55:
scp 8 root@192.168.1.56:
scp 8 root@192.168.1.57:
root@hello:~# bash -x 7
root@hello:~# vim 6
root@hello:~# cat 6
ssh root@192.168.1.10 "bash -x 8"
ssh root@192.168.1.11 "bash -x 8"
ssh root@192.168.1.12 "bash -x 8"
ssh root@192.168.1.13 "bash -x 8"
ssh root@192.168.1.14 "bash -x 8"
ssh root@192.168.1.15 "bash -x 8"
ssh root@192.168.1.16 "bash -x 8"
ssh root@192.168.1.51 "bash -x 8"
ssh root@192.168.1.52 "bash -x 8"
ssh root@192.168.1.53 "bash -x 8"
ssh root@192.168.1.54 "bash -x 8"
ssh root@192.168.1.55 "bash -x 8"
ssh root@192.168.1.56 "bash -x 8"
ssh root@192.168.1.57 "bash -x 8"
root@hello:~# bash -x 6
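After script 6 has run, every node should have the mirror configured. A quick way to confirm the change took effect is to check docker info on any node, for example:

ssh root@192.168.1.13 "docker info | grep -A 1 'Registry Mirrors'"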
Check the nodes
root@hello:~# kubectl get node
NAME      STATUS   ROLES                  AGE   VERSION
master1   Ready    control-plane,master   11h   v1.22.1
master2   Ready    control-plane,master   11h   v1.22.1
master3   Ready    control-plane,master   11h   v1.22.1
node1     Ready    worker                 11h   v1.22.1
node10    Ready    worker                 11h   v1.22.1
node11    Ready    worker                 11h   v1.22.1
node2     Ready    worker                 11h   v1.22.1
node3     Ready    worker                 11h   v1.22.1
node4     Ready    worker                 11h   v1.22.1
node5     Ready    worker                 11h   v1.22.1
node6     Ready    worker                 11h   v1.22.1
node7     Ready    worker                 11h   v1.22.1
node8     Ready    worker                 11h   v1.22.1
node9     Ready    worker                 11h   v1.22.1