Quickly deploying an ES cluster and a Spark cluster with Docker
1) Pull down the two quick-deploy environments (ES cluster and Spark cluster), get them running using only Docker, and save the images to the private registry.
2) Work out how to build (package) Linux images (see the sketch after this list).
3) Try modifying the environments so they run inside the cluster.
4) Understand:
   - how a Dockerfile builds an image;
   - how the startup entries in docker-compose map onto the corresponding settings in Mesos.
5) Write a Spark program, and seed a small amount of data into the ES environment to query.
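As a starting point for items 2 and 4, here is a minimal sketch of building a custom image and pushing it to the private registry at 192.168.1.153:31809. The Dockerfile content and the my-es name are illustrative assumptions, not the project's actual build:

# hypothetical Dockerfile: bake a fixed config file into the stock ES image
cat > Dockerfile <<'EOF'
FROM docker.elastic.co/elasticsearch/elasticsearch:5.6.8
COPY elasticsearch.yml /usr/share/elasticsearch/config/elasticsearch.yml
EOF
docker build -t my-es:custom .                              # build from the Dockerfile in the current directory
docker tag my-es:custom 192.168.1.153:31809/my-es:custom    # re-tag for the private registry
docker push 192.168.1.153:31809/my-es:custom                # push so every node can pull it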
The ES Docker environment was set up by following this article: https://cloud.tencent.com/developer/article/1098820
Main steps:
1) Push the images to the private registry.
2) Prepare the intranet VMs: disable unrelated services (skip any that are not installed), adjust IPs, set up passwordless SSH (see the sketch after the service commands below), and so on.
- systemctl disable mesos-slave
- systemctl stop mesos-slave
- systemctl disable zookeeper
- systemctl stop zookeeper
- systemctl disable mesos-master
- systemctl stop mesos-master
- systemctl disable marathon
- systemctl stop marathon
- systemctl enable docker
- systemctl start docker
- systemctl daemon-reload
- systemctl restart docker
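A minimal sketch of the passwordless-SSH setup mentioned in step 2, assuming root logins between all nodes and default key paths:

ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa                # one key pair per node, no passphrase
for h in 192.168.1.{100..102} 192.168.1.{110..112} 192.168.1.{120..123}; do
    ssh-copy-id root@$h                                 # append our public key to each node's authorized_keys
done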
3) Pull the images from the private registry; each node only pulls what it needs, not everything. (If the registry is plain HTTP, see the note after this list.)
- docker pull 192.168.1.153:31809/kafka.new.es
- docker pull 192.168.1.153:31809/zookeeper.new.es
- docker pull 192.168.1.153:31809/elastic/elasticsearch:5.6.8.new.es
- docker pull 192.168.1.153:31809/elastic/kibana:5.6.8.new.es
- docker pull 192.168.1.153:31809/elastic/logstash:5.6.8.new.es
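If the private registry at 192.168.1.153:31809 is served over plain HTTP (an assumption; the article does not say), each node's Docker daemon must whitelist it before these pulls will work:

cat > /etc/docker/daemon.json <<'EOF'
{ "insecure-registries": ["192.168.1.153:31809"] }
EOF
systemctl daemon-reload
systemctl restart docker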
4) The main scripts are cp.test.sh and docker.run.sh.
cp.test.sh tests the script flow and deals with the "yes" host-key prompt that appears on the first passwordless login to each node.
4.1
Put cp.test.sh on any one node; it copies itself to every node. Repeat this on each node, answering yes to the prompts.
Send it one more time to make sure no node was missed.
4.2
Run docker.run.sh to start the containers.
docker.run.sh does three things:
1) copies itself to the other nodes;
2) starts the Docker containers that belong on the local node;
3) sends an SSH command to each of the other nodes to execute docker.run.sh there (the flag argument keeps those remote runs from SSHing onward again; see the invocation examples below).
In short, docker.run.sh stops and removes every existing container on every node, then starts the containers that should be running.
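Based on how both scripts read flag=$1, the intended invocations are presumably:

sh /docker.run.sh y     # on the seed node: broadcast the script and drive the other nodes over ssh
sh /docker.run.sh       # what the seed runs remotely: no flag, so no further ssh fan-out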
The related scripts are attached below.
(IPs have been changed.)
- ------------------- common images ---------------------------------------
- docker pull jenkins
- docker tag jenkins 192.168.1.153:31809/jenkins:latest
- docker push 192.168.1.153:31809/jenkins:latest
- docker rmi jenkins
- docker rmi 192.168.1.153:31809/jenkins
- docker pull mysql
- docker tag mysql 192.168.1.153:31809/mysql:latest
- docker push 192.168.1.153:31809/mysql:latest
- docker rmi mysql
- docker rmi 192.168.1.153:31809/mysql
- docker pull tomcat
- docker tag tomcat 192.168.1.153:31809/tomcat:latest
- docker push 192.168.1.153:31809/tomcat:latest
- docker rmi tomcat
- docker rmi 192.168.1.153:31809/tomcat
- docker pull maven
- docker tag maven 192.168.1.153:31809/maven:latest
- docker push 192.168.1.153:31809/maven:latest
- docker rmi maven
- docker rmi 192.168.1.153:31809/maven
- ------------------- common images ---------------------------------------
- ----------------- 快速部署ES集群.txt (ES cluster quick-deploy notes) --------------------------
- https://cloud.tencent.com/developer/article/1098820
- docker pull zookeeper
- docker tag zookeeper 192.168.1.153:31809/zookeeper.new.es
- docker push 192.168.1.153:31809/zookeeper.new.es
- docker rmi zookeeper
- docker rmi 192.168.1.153:31809/zookeeper.new.es
- docker pull wurstmeister/kafka
- docker tag wurstmeister/kafka 192.168.1.153:31809/kafka.new.es
- docker push 192.168.1.153:31809/kafka.new.es
- docker rmi wurstmeister/kafka
- docker rmi 192.168.1.153:31809/kafka.new.es
- docker pull docker.elastic.co/elasticsearch/elasticsearch:5.6.8
- docker tag docker.elastic.co/elasticsearch/elasticsearch:5.6.8 192.168.1.153:31809/elastic/elasticsearch:5.6.8.new.es
- docker push 192.168.1.153:31809/elastic/elasticsearch:5.6.8.new.es
- docker rmi docker.elastic.co/elasticsearch/elasticsearch:5.6.8
- docker rmi 192.168.1.153:31809/elastic/elasticsearch:5.6.8.new.es
- docker pull docker.elastic.co/kibana/kibana:5.6.8
- docker tag docker.elastic.co/kibana/kibana:5.6.8 192.168.1.153:31809/elastic/kibana:5.6.8.new.es
- docker push 192.168.1.153:31809/elastic/kibana:5.6.8.new.es
- docker rmi docker.elastic.co/kibana/kibana:5.6.8
- docker rmi 192.168.1.153:31809/elastic/kibana:5.6.8.new.es
- docker pull docker.elastic.co/logstash/logstash:5.6.8
- docker tag docker.elastic.co/logstash/logstash:5.6.8 192.168.1.153:31809/elastic/logstash:5.6.8.new.es
- docker push 192.168.1.153:31809/elastic/logstash:5.6.8.new.es
- docker rmi docker.elastic.co/logstash/logstash:5.6.8
- docker rmi 192.168.1.153:31809/elastic/logstash:5.6.8.new.es
- ----------------- 快速部署ES集群.txt (ES cluster quick-deploy notes) ------------------------------
- ----------------- spark 2.2 image --------------------------
- https://www.cnblogs.com/hongdada/p/9475406.html
- docker pull singularities/spark
- docker tag singularities/spark 192.168.1.153:31809/singularities/spark
- docker push 192.168.1.153:31809/singularities/spark
- docker rmi singularities/spark
- docker rmi 192.168.1.153:31809/singularities/spark
- ----------------- spark 2.2 image ------------------------------
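A sketch of bringing this image up as a standalone master plus one worker. The start-spark entrypoint and its arguments come from the singularities/spark image documentation and should be verified against the image actually pushed:

docker run -d --name spark-master --net=host 192.168.1.153:31809/singularities/spark start-spark master
docker run -d --name spark-worker --net=host 192.168.1.153:31809/singularities/spark start-spark worker 192.168.1.110   # argument = master host, adjust to the real layout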
----------------- centos7 image --------------------------
https://www.jianshu.com/p/4801bb7ab9e0
docker tag centos:7 192.168.1.153:31809/centos7
docker push 192.168.1.153:31809/centos7
docker rmi centos:7
docker rmi 192.168.1.153:31809/centos7
----------------- centos7 image ------------------------------
-----------------singularities/hadoop 2.8 image--------------------------
https://www.jianshu.com/p/4801bb7ab9e0
docker tag singularities/hadoop:2.8 192.168.1.153:31809/singularities/hadoop.2.8
docker push 192.168.1.153:31809/singularities/hadoop.2.8
docker rmi singularities/hadoop:2.8
docker rmi 192.168.1.153:31809/singularities/hadoop.2.8
----------------- singularities/hadoop 2.8 image ------------------------------
- # docker images  (list local images)
- # list the repositories in the registry
- curl -X GET http://192.168.1.153:31809/v2/_catalog            # -> {"repositories":["nginx"]}
- # list the tags of one repository
- curl -X GET http://192.168.1.153:31809/v2/nginx/tags/list     # -> {"name":"nginx","tags":["latest"]}
Passwordless-login test script
cp.test.sh
- flag=$1
- getip()
- {
- ifconfig|grep 192|awk '{print $2}'
- }
- ip=`getip`
- echo "salf IP:" $ip
- cpToOtherVM()
- {
- if [[ "${flag}" == "y" ]]; then
- if [[ "${ip}" != "$1" ]]; then
- scp -r /z-hl-c53cc450-62bf-4b65-b7f2-432e2aae9c62-v5.json $1:/
- fi
- fi
- }
- execOtherVmShell()
- {
- if [[ "${flag}" == "y" ]]; then
- if [[ "${ip}" != "$1" ]]; then
- ssh root@$1 "sh /cp.test.sh"
- fi
- fi
- }
- echo "copy to"
- cpToOtherVM "192.168.1.100"
- cpToOtherVM "192.168.1.101"
- cpToOtherVM "192.168.1.102"
- sleep 1
- cpToOtherVM "192.168.1.110"
- cpToOtherVM "192.168.1.111"
- cpToOtherVM "192.168.1.112"
- sleep 1
- cpToOtherVM "192.168.1.120"
- cpToOtherVM "192.168.1.121"
- cpToOtherVM "192.168.1.122"
- cpToOtherVM "192.168.1.123"
- sleep 3
- echo "exec other"
- execOtherVmShell "192.168.1.100"
- execOtherVmShell "192.168.1.101"
- execOtherVmShell "192.168.1.102"
- execOtherVmShell "192.168.1.110"
- execOtherVmShell "192.168.1.111"
- execOtherVmShell "192.168.1.112"
- execOtherVmShell "192.168.1.120"
- execOtherVmShell "192.168.1.121"
- execOtherVmShell "192.168.1.122"
- execOtherVmShell "192.168.1.123"
- echo "exec salf action"
Container startup script
docker.run.sh
- flag=$1
- getip()
- {
- ifconfig|grep 192|awk '{print $2}'
- }
- ip=`getip`
- echo "salf IP:" $ip
- cpToOtherVM()
- {
- if [[ "${flag}" == "y" ]]; then
- if [[ "${ip}" != "$1" ]]; then
- scp -r /etc/sysctl.conf $1:/etc/sysctl.conf
- scp -r /docker.run.sh $1:/docker.run.sh
- fi
- fi
- }
- execOtherVmShell()
- {
- if [[ "${flag}" == "y" ]]; then
- if [[ "${ip}" != "$1" ]]; then
- ssh root@$1 "docker ps -a |grep 192.168.1. |awk -F ' ' '{print $1}'| xargs -i docker kill {}"
- echo "stop all docker"
- sleep 2
- ssh root@$1 "docker ps -a |grep 192.168.1. |awk -F ' ' '{print $1}'| xargs -i docker rm {}"
- echo "rm all docker"
- sleep 5
- ssh root@$1 "sh /docker.run.sh"
- fi
- fi
- }
- echo "copy to"
- cpToOtherVM "192.168.1.100"
- cpToOtherVM "192.168.1.101"
- cpToOtherVM "192.168.1.102"
- sleep 1
- cpToOtherVM "192.168.1.110"
- cpToOtherVM "192.168.1.111"
- cpToOtherVM "192.168.1.112"
- sleep 1
- cpToOtherVM "192.168.1.120"
- cpToOtherVM "192.168.1.121"
- cpToOtherVM "192.168.1.122"
- cpToOtherVM "192.168.1.123"
- sleep 1
- echo "exec salf action"
- docker ps -a |grep 192.168.1. |awk -F ' ' '{print $1}'| xargs -i docker kill {}
- sleep 2
- docker ps -a |grep 192.168.1. |awk -F ' ' '{print $1}'| xargs -i docker rm {}
- sleep 3
- function runZookeeper()
- {
- echo "exec runZookeeper" $1 $2
- # start the container
- docker run --name zookeeper \
- --net=host \
- --restart always \
- -v /data/zookeeper:/data/zookeeper \
- -e ZOO_PORT=2181 \
- -e ZOO_DATA_DIR=/data/zookeeper/data \
- -e ZOO_DATA_LOG_DIR=/data/zookeeper/logs \
- -e ZOO_MY_ID=$2 \
- -e ZOO_SERVERS="server.1=192.168.1.100:2888:3888 server.2=192.168.1.101:2888:3888 server.3=192.168.1.102:2888:3888" \
- -d 192.168.1.153:31809/zookeeper.new.es
- sleep 2
- }
- function runKafka()
- {
- echo "exec runKafka" $1 $2
- # the machine has 11 disks; use them all
- mkdir -p /data{1..11}/kafka
- # start the container
- docker run --name kafka \
- --net=host \
- --volume /data1:/data1 \
- --volume /data2:/data2 \
- --volume /data3:/data3 \
- --volume /data4:/data4 \
- --volume /data5:/data5 \
- --volume /data6:/data6 \
- --volume /data7:/data7 \
- --volume /data8:/data8 \
- --volume /data9:/data9 \
- --volume /data10:/data10 \
- --volume /data11:/data11 \
- -e KAFKA_BROKER_ID=$2 \
- -e KAFKA_PORT=9092 \
- -e KAFKA_HEAP_OPTS="-Xms8g -Xmx8g" \
- -e KAFKA_HOST_NAME=$1 \
- -e KAFKA_ADVERTISED_HOST_NAME=$1 \
- -e KAFKA_LOG_DIRS=/data1/kafka,/data2/kafka,/data3/kafka,/data4/kafka,/data5/kafka,/data6/kafka,/data7/kafka,/data8/kafka,/data9/kafka,/data10/kafka,/data11/kafka \
- -e KAFKA_ZOOKEEPER_CONNECT="192.168.1.100:2181,192.168.1.101:2181,192.168.1.102:2181" \
- -d 192.168.1.153:31809/kafka.new.es
- sleep 2
- }
- function runMaster()
- {
- echo "exec runMaster" $1 $2
- # remove any exited container with the same name (disabled)
- #docker ps -a | grep es_master |egrep "Exited|Created" | awk '{print $1}'|xargs -i% docker rm -f % 2>/dev/null
- # start the container
- docker run --name es_master \
- -d --net=host \
- --restart=always \
- --privileged=true \
- --ulimit nofile=655350 \
- --ulimit memlock=-1 \
- --memory=1G \
- --memory-swap=-1 \
- --cpus=0.5 \
- --volume /data:/data \
- --volume /etc/localtime:/etc/localtime \
- -e TERM=dumb \
- -e ES_JAVA_OPTS="-Xms8g -Xmx8g" \
- -e cluster.name="iyunwei" \
- -e node.name="MASTER-"$2 \
- -e node.master=true \
- -e node.data=false \
- -e node.ingest=false \
- -e node.attr.rack="0402-K03" \
- -e discovery.zen.ping.unicast.hosts="192.168.1.110:9301,192.168.1.111:9301,192.168.1.112:9301,192.168.1.110:9300,192.168.1.111:9300,192.168.1.112:9300,192.168.1.120:9300,192.168.1.121:9300,192.168.1.122:9300,192.168.1.123:9300" \
- -e discovery.zen.minimum_master_nodes=2 \
- -e gateway.recover_after_nodes=5 \
- -e network.host=0.0.0.0 \
- -e transport.tcp.port=9301 \
- -e http.port=9201 \
- -e path.data="/data/iyunwei/master" \
- -e path.logs=/data/elastic/logs \
- -e bootstrap.memory_lock=true \
- -e bootstrap.system_call_filter=false \
- -e indices.fielddata.cache.size="25%" \
- 192.168.1.153:31809/elastic/elasticsearch:5.6.8.new.es
- sleep 2
- }
- function runClient()
- {
- echo "exec runClient" $1 $2
- # remove any exited container with the same name (disabled)
- #docker ps -a | grep es_client |egrep "Exited|Created" | awk '{print $1}'|xargs -i% docker rm -f % 2>/dev/null
- docker run --name es_client \
- -d --net=host \
- --restart=always \
- --privileged=true \
- --ulimit nofile=655350 \
- --ulimit memlock=-1 \
- --memory=1G \
- --memory-swap=-1 \
- --cpus=0.5 \
- --volume /data:/data \
- --volume /etc/localtime:/etc/localtime \
- -e TERM=dumb \
- -e ES_JAVA_OPTS="-Xms31g -Xmx31g" \
- -e cluster.name="iyunwei" \
- -e node.name="CLIENT-"$2 \
- -e node.master=false \
- -e node.data=false \
- -e node.attr.rack="0402-K03" \
- -e discovery.zen.ping.unicast.hosts="192.168.1.110:9301,192.168.1.111:9301,192.168.1.112:9301,192.168.1.110:9300,192.168.1.111:9300,192.168.1.112:9300,192.168.1.120:9300,192.168.1.121:9300,192.168.1.122:9300,192.168.1.123:9300" \
- -e discovery.zen.minimum_master_nodes=2 \
- -e gateway.recover_after_nodes=2 \
- -e network.host=0.0.0.0 \
- -e transport.tcp.port=9300 \
- -e http.port=9200 \
- -e path.data="/data/iyunwei/client" \
- -e path.logs=/data/elastic/logs \
- -e bootstrap.memory_lock=true \
- -e bootstrap.system_call_filter=false \
- -e indices.fielddata.cache.size="25%" \
- 192.168.1.153:31809/elastic/elasticsearch:5.6.8.new.es
- sleep 2
- }
- function runDATA ()
- {
- echo "exec runDATA" $1 $2
- # remove any exited container with the same name (disabled)
- #docker ps -a | grep es_data |egrep "Exited|Created" | awk '{print $1}'|xargs -i% docker rm -f % 2>/dev/null
- docker run --name es_data \
- -d --net=host \
- --restart=always \
- --privileged \
- --ulimit nofile=655350 \
- --ulimit memlock=-1 \
- --volume /data:/data \
- --volume /data1:/data1 \
- --volume /data2:/data2 \
- --volume /data3:/data3 \
- --volume /data4:/data4 \
- --volume /data5:/data5 \
- --volume /data6:/data6 \
- --volume /data7:/data7 \
- --volume /data8:/data8 \
- --volume /data9:/data9 \
- --volume /data10:/data10 \
- --volume /data11:/data11 \
- --volume /data12:/data12 \
- --volume /etc/localtime:/etc/localtime \
- -e TERM=dumb \
- -e ES_JAVA_OPTS="-Xms31g -Xmx31g" \
- -e cluster.name="iyunwei" \
- -e node.name="DATA-"$2 \
- -e node.master=false \
- -e node.data=true \
- -e node.ingest=false \
- -e node.attr.rack="0402-Q06" \
- -e discovery.zen.ping.unicast.hosts="192.168.1.110:9301,192.168.1.111:9301,192.168.1.112:9301,192.168.1.110:9300,192.168.1.111:9300,192.168.1.112:9300,192.168.1.120:9300,192.168.1.121:9300,192.168.1.122:9300,192.168.1.123:9300" \
- -e discovery.zen.minimum_master_nodes=2 \
- -e gateway.recover_after_nodes=2 \
- -e network.host=0.0.0.0 \
- -e http.port=9200 \
- -e path.data="/data1/iyunwei/data,/data2/iyunwei/data,/data3/iyunwei/data,/data4/iyunwei/data,/data5/iyunwei/data,/data6/iyunwei/data,/data7/iyunwei/data,/data8/iyunwei/data,/data9/iyunwei/data,/data10/iyunwei/data,/data11/iyunwei/data,/data12/iyunwei/data" \
- -e path.logs=/data/elastic/logs \
- -e bootstrap.memory_lock=true \
- -e bootstrap.system_call_filter=false \
- -e indices.fielddata.cache.size="25%" \
- 192.168.1.153:31809/elastic/elasticsearch:5.6.8.new.es
- sleep 2
- }
- function runKibana()
- {
- echo "exec runKibana" $1 $2
- # remove any exited container with the same name (disabled)
- #docker ps -a | grep kibana | egrep "Exited|Created" | awk '{print $1}'|xargs -i% docker rm -f % 2>/dev/null
- docker run --name kibana \
- --restart=always \
- -d --net=host \
- -v /data:/data \
- -v /etc/localtime:/etc/localtime \
- --privileged \
- -e TERM=dumb \
- -e SERVER_HOST=0.0.0.0 \
- -e SERVER_PORT=5601 \
- -e SERVER_NAME=Kibana-$2 \
- -e ELASTICSEARCH_URL=http://localhost:9200 \
- -e ELASTICSEARCH_USERNAME=elastic \
- -e ELASTICSEARCH_PASSWORD=changeme \
- -e XPACK_MONITORING_UI_CONTAINER_ELASTICSEARCH_ENABLED=false \
- -e LOG_FILE=/data/elastic/logs/kibana.log \
- 192.168.1.153:31809/elastic/kibana:5.6.8.new.es
- sleep 2
- }
- function runLogstash()
- {
- echo "exec runLogstash" $1 $2
- # remove any exited container with the same name (disabled)
- #docker ps -a | grep logstash |egrep "Exited|Created" | awk '{print $1}'|xargs -i% docker rm -f % 2>/dev/null
- docker run --name logstash \
- -d --net=host \
- --restart=always \
- --privileged \
- --ulimit nofile=655350 \
- --ulimit memlock=-1 \
- -e ES_JAVA_OPTS="-Xms16g -Xmx16g" \
- -e TERM=dumb \
- --volume /etc/localtime:/etc/localtime \
- --volume /data/elastic/config:/usr/share/logstash/config \
- --volume /data/elastic/config/pipeline:/usr/share/logstash/pipeline \
- --volume /data/elastic/logs:/usr/share/logstash/logs \
- 192.168.1.153:31809/elastic/logstash:5.6.8.new.es
- sleep 2
- }
- function cfgkafka()
- {
- if [[ "${ip}" = "$1" ]]; then
- echo "exec cfgkafka" $1 $2
- mkdir -p /data/zookeeper
- runZookeeper $1 $2
- runKafka $1 $2
- fi
- }
- function cfgMaster()
- {
- if [[ "${ip}" = "$1" ]]; then
- echo "exec cfgMaster" $1 $2
- mkdir -p /data/iyunwei/master
- mkdir -p /data/iyunwei/client
- chown -R 1000:1000 /data/iyunwei
- runMaster $1 $2
- runClient $1 $2
- runKibana $1 $2
- runLogstash $1 $2
- fi
- }
- function cfgDATA()
- {
- if [[ "${ip}" = "$1" ]]; then
- echo "exec cfgDATA" $1 $2
- mkdir -p /data{1..12}/iyunwei/data
- chown -R 1000:1000 /data{1..12}/iyunwei
- runDATA $1 $2
- fi
- }
- index=0
- for kafkaClusterIP in "192.168.1.100" "192.168.1.101" "192.168.1.102"
- do
- index=$(($index+1))
- echo "cfgkafka" $kafkaClusterIP $index
- cfgkafka $kafkaClusterIP $index
- done
- #Master
- MasterIndex=0
- for MasterIP in "192.168.1.110" "192.168.1.111" "192.168.1.112"
- do
- MasterIndex=$(($MasterIndex+1))
- echo "cfgMaster" $MasterIP $MasterIndex
- cfgMaster $MasterIP $MasterIndex
- done
- #DATA
- DATAIndex=0
- for DATAIP in "192.168.1.120" "192.168.1.121" "192.168.1.122" "192.168.1.123"
- do
- DATAIndex=$(($DATAIndex+1))
- echo "cfgDATA" $DATAIP $DATAIndex
- cfgDATA $DATAIP $DATAIndex
- done
- sleep 3
- echo "exec other vm action"
- execOtherVmShell "192.168.1.100"
- execOtherVmShell "192.168.1.101"
- execOtherVmShell "192.168.1.102"
- execOtherVmShell "192.168.1.110"
- execOtherVmShell "192.168.1.111"
- execOtherVmShell "192.168.1.112"
- execOtherVmShell "192.168.1.120"
- execOtherVmShell "192.168.1.121"
- execOtherVmShell "192.168.1.122"
- execOtherVmShell "192.168.1.123"
- curl -XPUT http://192.168.1.111:9200/_license?acknowledge=true -d @/z-hl-c53cc450-62bf-4b65-b7f2-432e2aae9c62-v5.json -uelastic:changeme
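Once everything is up (and the license registered), a quick sanity check against a client node's HTTP port, using the default credentials from the script, might look like:

curl -u elastic:changeme http://192.168.1.110:9200/_cat/health?v    # cluster status should reach yellow/green
curl -u elastic:changeme http://192.168.1.110:9200/_cat/nodes?v     # every master/client/data node should be listed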
Kernel parameters (cat /etc/sysctl.conf):
vm.max_map_count = 655360
vm.swappiness = 1
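These settings are required on every ES host (ES 5.x refuses to start when vm.max_map_count is below 262144); after editing the file, apply them with:

sysctl -p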
IP plan (differs from the original article):
To keep communication simple and avoid building a complex network, everything sits on the same subnet.
192.168.1.100-102  Kafka (same as the original)
192.168.1.110-112  correspond to the original 192.168.2.100-102
192.168.1.120-123  correspond to the original 192.168.3.100-103
Current status: the containers are deployed, with some open issues:
- The Kafka image has a permission problem with its config files.
- Kibana on the es-master host is reachable at http://192.168.1.110:5601/login?next=%2F#?_g=() but login fails, most likely because the license has not been registered.
- The curl license-registration command cannot reach its target port; this needs investigation.
- docker ps -a shows all containers present, but docker ps shows Kafka is not running.
- Because our IPs differ from the ones baked into the original images, we may need to enter the containers, pull out the config files, fix the IPs, and rebuild the images (see the sketch below); it is not yet clear which settings have to change.
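A minimal sketch of that inspect-fix-rebake loop, using the Kafka container as the example (the in-container config path is an assumption to verify):

docker exec -it kafka bash                                   # look around inside the container first
docker cp kafka:/opt/kafka/config/server.properties .        # copy the config out (path is a guess)
# fix the IPs in server.properties, copy it back, then snapshot a new image
docker cp server.properties kafka:/opt/kafka/config/server.properties
docker commit kafka 192.168.1.153:31809/kafka.new.es.fixed
docker push 192.168.1.153:31809/kafka.new.es.fixed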
/etc/hosts used on all nodes:
- 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
- ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
- 192.168.1.151 docker-slave1 docker-slave1.com
- 192.168.1.152 docker-slave2 docker-slave2.com
- 192.168.1.153 docker-slave3 docker-slave3.com
- 192.168.1.161 docker-master1 docker-master1.com
- 192.168.1.162 docker-master2 docker-master2.com
- 192.168.1.163 docker-master3 docker-master3.com
- 192.168.1.110 es-master1 es-master1.com
- 192.168.1.111 es-master2 es-master2.com
- 192.168.1.112 es-master3 es-master3.com
- 192.168.1.100 es-kafka1 es-kafka1.com
- 192.168.1.101 es-kafka2 es-kafka2.com
- 192.168.1.102 es-kafka3 es-kafka3.com
- 192.168.1.120 es-data1 es-data1.com
- 192.168.1.121 es-data2 es-data2.com
- 192.168.1.122 es-data3 es-data3.com
- 192.168.1.123 es-data4 es-data4.com
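The same /etc/hosts has to exist on every node; assuming the passwordless SSH above is in place, one way to broadcast it:

for h in 192.168.1.{100..102} 192.168.1.{110..112} 192.168.1.{120..123}; do
    scp /etc/hosts root@$h:/etc/hosts      # overwrite each node's hosts file with the local copy
done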