1.基础架构

1.1.架构图

  • Zookeeper是Dubbo微服务集群的注册中心
  • 它的高可用机制和k8s的etcd集群类似,都是基于过半存活(quorum)的多数派选举
  • java编写,需要jdk环境

1.2.节点规划

主机名 角色 ip
hdss7-11.host.com k8s代理节点1,zk1 10.4.7.11
hdss7-12.host.com k8s代理节点2,zk2 10.4.7.12
hdss7-21.host.com k8s运算节点1,zk3 10.4.7.21
hdss7-22.host.com k8s运算节点2,jenkins 10.4.7.22
hdss7-200.host.com k8s运维节点(docker仓库) 10.4.7.200

2.部署zookeeper

2.1.安装jdk 1.8(3台zk节点都要安装)

  1. //解压、创建软链接
  2. [root@hdss7-11 src]# mkdir /usr/java
  3. [root@hdss7-11 src]# tar xf jdk-8u221-linux-x64.tar.gz -C /usr/java/
  4. [root@hdss7-11 src]# ln -s /usr/java/jdk1.8.0_221/ /usr/java/jdk
  5. [root@hdss7-11 src]# cd /usr/java/
  6. [root@hdss7-11 java]# ll
  7. total 0
  8. lrwxrwxrwx 1 root root 23 Nov 30 17:38 jdk -> /usr/java/jdk1.8.0_221/
  9. drwxr-xr-x 7 10 143 245 Jul 4 19:37 jdk1.8.0_221
  10. //创建环境变量
  11. [root@hdss7-11 java]# vi /etc/profile
  12. export JAVA_HOME=/usr/java/jdk
  13. export PATH=$JAVA_HOME/bin:$PATH
  14. export CLASSPATH=$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/lib/tools.jar
  15. //source并检查
  16. [root@hdss7-11 java]# source /etc/profile
  17. [root@hdss7-11 java]# java -version
  18. java version "1.8.0_221"
  19. Java(TM) SE Runtime Environment (build 1.8.0_221-b11)
  20. Java HotSpot(TM) 64-Bit Server VM (build 25.221-b11, mixed mode)

2.2.安装zk(3台节点都要安装)

zookeeper官方地址

2.2.1.解压,创建软链接

  1. [root@hdss7-11 src]# tar xf zookeeper-3.4.14.tar.gz -C /opt/
  2. [root@hdss7-11 src]# ln -s /opt/zookeeper-3.4.14/ /opt/zookeeper

2.2.2.创建数据目录和日志目录

  1. [root@hdss7-11 opt]# mkdir -pv /data/zookeeper/data /data/zookeeper/logs
  2. mkdir: created directory ‘/data’
  3. mkdir: created directory ‘/data/zookeeper’
  4. mkdir: created directory ‘/data/zookeeper/data’
  5. mkdir: created directory ‘/data/zookeeper/logs’

2.2.3.配置

  1. //各节点相同
  2. [root@hdss7-11 opt]# vi /opt/zookeeper/conf/zoo.cfg
  3. tickTime=2000
  4. initLimit=10
  5. syncLimit=5
  6. dataDir=/data/zookeeper/data
  7. dataLogDir=/data/zookeeper/logs
  8. clientPort=2181
  9. server.1=zk1.od.com:2888:3888
  10. server.2=zk2.od.com:2888:3888
  11. server.3=zk3.od.com:2888:3888

myid

  1. //各节点不同
  2. [root@hdss7-11 opt]# vi /data/zookeeper/data/myid
  3. 1
  4. [root@hdss7-12 opt]# vi /data/zookeeper/data/myid
  5. 2
  6. [root@hdss7-21 opt]# vi /data/zookeeper/data/myid
  7. 3

2.2.4.做dns解析

  1. [root@hdss7-11 opt]# vi /var/named/od.com.zone
  2. $ORIGIN od.com.
  3. $TTL 600 ; 10 minutes
  4. @ IN SOA dns.od.com. dnsadmin.od.com. (
  5. 2019111006 ; serial //序列号前滚1
  6. 10800 ; refresh (3 hours)
  7. 900 ; retry (15 minutes)
  8. 604800 ; expire (1 week)
  9. 86400 ; minimum (1 day)
  10. )
  11. NS dns.od.com.
  12. $TTL 60 ; 1 minute
  13. dns A 10.4.7.11
  14. harbor A 10.4.7.200
  15. k8s-yaml A 10.4.7.200
  16. traefik A 10.4.7.10
  17. dashboard A 10.4.7.10
  18. zk1 A 10.4.7.11
  19. zk2 A 10.4.7.12
  20. zk3 A 10.4.7.21
  21. [root@hdss7-11 opt]# systemctl restart named
  22. [root@hdss7-11 opt]# dig -t A zk1.od.com @10.4.7.11 +short
  23. 10.4.7.11

2.2.5.依次启动并检查

启动

  1. [root@hdss7-11 opt]# /opt/zookeeper/bin/zkServer.sh start
  2. ZooKeeper JMX enabled by default
  3. Using config: /opt/zookeeper/bin/../conf/zoo.cfg
  4. Starting zookeeper ... STARTED
  5. [root@hdss7-12 opt]# /opt/zookeeper/bin/zkServer.sh start
  6. [root@hdss7-21 opt]# /opt/zookeeper/bin/zkServer.sh start

检查

  1. [root@hdss7-11 opt]# netstat -ntlup|grep 2181
  2. tcp6 0 0 :::2181 :::* LISTEN 69157/java
  3. [root@hdss7-11 opt]# zookeeper/bin/zkServer.sh status
  4. ZooKeeper JMX enabled by default
  5. Using config: /opt/zookeeper/bin/../conf/zoo.cfg
  6. Mode: follower
  7. [root@hdss7-12 opt]# zookeeper/bin/zkServer.sh status
  8. ZooKeeper JMX enabled by default
  9. Using config: /opt/zookeeper/bin/../conf/zoo.cfg
  10. Mode: leader
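
除了 zkServer.sh status,也可以用 zookeeper 的四字命令快速确认各节点存活和角色(3.4.x 默认开放四字命令;下面的示例假设节点上装有 nc,仅作参考):

  1. //ruok 返回 imok 表示节点存活
  2. [root@hdss7-11 opt]# echo ruok | nc zk1.od.com 2181
  3. //stat 输出里的 Mode 字段可以看到 leader/follower 角色
  4. [root@hdss7-11 opt]# echo stat | nc zk1.od.com 2181 | grep Mode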

3.部署jenkins

jenkins官网

jenkins 镜像

3.1.准备镜像

hdss7-200上

  1. [root@hdss7-200 ~]# docker pull jenkins/jenkins:2.190.3
  2. [root@hdss7-200 ~]# docker images |grep jenkins
  3. [root@hdss7-200 ~]# docker tag 22b8b9a84dbe harbor.od.com/public/jenkins:v2.190.3
  4. [root@hdss7-200 ~]# docker push harbor.od.com/public/jenkins:v2.190.3

3.2.制作自定义镜像

3.2.1.生成ssh秘钥对

  1. [root@hdss7-200 ~]# ssh-keygen -t rsa -b 2048 -C "8614610@qq.com" -N "" -f /root/.ssh/id_rsa
  • 此处用自己的邮箱

3.2.2.准备get-docker.sh文件

  1. [root@hdss7-200 ~]# curl -fsSL get.docker.com -o get-docker.sh
  2. [root@hdss7-200 ~]# chmod +x get-docker.sh

3.2.3.准备config.json文件

  1. cp /root/.docker/config.json .
  2. cat /root/.docker/config.json
  3. {
  4. "auths": {
  5. "harbor.od.com": {
  6. "auth": "YWRtaW46SGFyYm9yMTIzNDU="
  7. }
  8. },
  9. "HttpHeaders": {
  10. "User-Agent": "Docker-Client/19.03.4 (linux)"
  11. }
  12. }

3.2.4.创建目录并准备Dockerfile

  1. [root@hdss7-200 ~]# mkdir /data/dockerfile/jenkins -p
  2. [root@hdss7-200 ~]# cd /data/dockerfile/jenkins/
  3. [root@hdss7-200 jenkins]# vi Dockerfile
  4. FROM harbor.od.com/public/jenkins:v2.190.3
  5. USER root
  6. RUN /bin/cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime &&\
  7. echo 'Asia/Shanghai' >/etc/timezone
  8. ADD id_rsa /root/.ssh/id_rsa
  9. ADD config.json /root/.docker/config.json
  10. ADD get-docker.sh /get-docker.sh
  11. RUN echo " StrictHostKeyChecking no" >> /etc/ssh/ssh_config &&\
  12. /get-docker.sh
  • 设置容器用户为root
  • 设置容器内的时区
  • 将创建的ssh私钥加入(使用git拉代码是要用,配对的公钥配置在gitlab中)
  • 加入了登陆自建harbor仓库的config文件
  • 修改了ssh客户端的配置,不做指纹验证
  • 安装一个docker的客户端 //build如果失败,在get-docker.sh 后加--mirror=Aliyun

3.3.构建并测试自定义镜像

  1. //准备所需文件,拷贝至/data/dockerfile/jenkins
  2. [root@hdss7-200 jenkins]# pwd
  3. /data/dockerfile/jenkins
  4. [root@hdss7-200 jenkins]# ll
  5. total 32
  6. -rw------- 1 root root 151 Nov 30 18:35 config.json
  7. -rw-r--r-- 1 root root 349 Nov 30 18:31 Dockerfile
  8. -rwxr-xr-x 1 root root 13216 Nov 30 18:31 get-docker.sh
  9. -rw------- 1 root root 1675 Nov 30 18:35 id_rsa
  10. //执行build
  11. docker build . -t harbor.od.com/infra/jenkins:v2.190.3
  12. //公钥上传到gitee测试此镜像是否可以成功连接
  13. [root@hdss7-200 harbor]# docker run --rm harbor.od.com/infra/jenkins:v2.190.3 ssh -i /root/.ssh/id_rsa -T git@gitee.com
  14. Warning: Permanently added 'gitee.com,212.64.62.174' (ECDSA) to the list of known hosts.
  15. Hi StanleyWang (DeployKey)! You've successfully authenticated, but GITEE.COM does not provide shell access.
  16. Note: Perhaps the current use is DeployKey.
  17. Note: DeployKey only supports pull/fetch operations

3.4.创建infra仓库
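
infra 仓库在 harbor 的 web 页面上创建即可(私有)。如果想脚本化,也可以走 harbor 的 API,下面是基于 Harbor 1.x 的一个示意(API 路径、账号密码以自己环境为准):

  1. //Harbor 1.x 创建私有项目 infra;Harbor 2.x 的路径是 /api/v2.0/projects
  2. [root@hdss7-200 ~]# curl -u admin:Harbor12345 -H "Content-Type: application/json" \
  3.     -X POST http://harbor.od.com/api/projects \
  4.     -d '{"project_name": "infra", "metadata": {"public": "false"}}'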

3.5.创建kubernetes名称空间并在此创建secret

  1. [root@hdss7-21 ~]# kubectl create namespace infra
  2. [root@hdss7-21 ~]# kubectl create secret docker-registry harbor --docker-server=harbor.od.com --docker-username=admin --docker-password=Harbor12345 -n infra

3.6.推送镜像

  1. [root@hdss7-200 jenkins]# docker push harbor.od.com/infra/jenkins:v2.190.3

3.7.准备共享存储

运维主机hdss7-200和所有运算节点上,这里指hdss7-21、22

3.7.1.安装nfs-utils

  1. [root@hdss7-200 jenkins]# yum install nfs-utils -y

3.7.2.配置NFS服务

运维主机hdss7-200上

  1. [root@hdss7-200 jenkins]# vi /etc/exports
  2. /data/nfs-volume 10.4.7.0/24(rw,no_root_squash)

3.7.3.启动NFS服务

运维主机hdss7-200上

  1. [root@hdss7-200 ~]# mkdir -p /data/nfs-volume
  2. [root@hdss7-200 ~]# systemctl start nfs
  3. [root@hdss7-200 ~]# systemctl enable nfs
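
配置生效后,可以在运算节点上用 showmount 验证共享目录(前提是运算节点也已安装 nfs-utils):

  1. [root@hdss7-21 ~]# showmount -e hdss7-200
  2. Export list for hdss7-200:
  3. /data/nfs-volume 10.4.7.0/24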

3.8.准备资源配置清单

运维主机hdss7-200上

  1. [root@hdss7-200 ~]# cd /data/k8s-yaml/
  2. [root@hdss7-200 k8s-yaml]# mkdir /data/k8s-yaml/jenkins && mkdir /data/nfs-volume/jenkins_home && cd jenkins

dp.yaml

  1. [root@hdss7-200 jenkins]# vi dp.yaml
  2. kind: Deployment
  3. apiVersion: extensions/v1beta1
  4. metadata:
  5.   name: jenkins
  6.   namespace: infra
  7.   labels:
  8.     name: jenkins
  9. spec:
  10.   replicas: 1
  11.   selector:
  12.     matchLabels:
  13.       name: jenkins
  14.   template:
  15.     metadata:
  16.       labels:
  17.         app: jenkins
  18.         name: jenkins
  19.     spec:
  20.       volumes:
  21.       - name: data
  22.         nfs:
  23.           server: hdss7-200
  24.           path: /data/nfs-volume/jenkins_home
  25.       - name: docker
  26.         hostPath:
  27.           path: /run/docker.sock
  28.           type: ''
  29.       containers:
  30.       - name: jenkins
  31.         image: harbor.od.com/infra/jenkins:v2.190.3
  32.         imagePullPolicy: IfNotPresent
  33.         ports:
  34.         - containerPort: 8080
  35.           protocol: TCP
  36.         env:
  37.         - name: JAVA_OPTS
  38.           value: -Xmx512m -Xms512m
  39.         volumeMounts:
  40.         - name: data
  41.           mountPath: /var/jenkins_home
  42.         - name: docker
  43.           mountPath: /run/docker.sock
  44.       imagePullSecrets:
  45.       - name: harbor
  46.       securityContext:
  47.         runAsUser: 0
  48.   strategy:
  49.     type: RollingUpdate
  50.     rollingUpdate:
  51.       maxUnavailable: 1
  52.       maxSurge: 1
  53.   revisionHistoryLimit: 7
  54.   progressDeadlineSeconds: 600

svc.yaml

  1. [root@hdss7-200 jenkins]# vi svc.yaml
  2. kind: Service
  3. apiVersion: v1
  4. metadata:
  5.   name: jenkins
  6.   namespace: infra
  7. spec:
  8.   ports:
  9.   - protocol: TCP
  10.     port: 80
  11.     targetPort: 8080
  12.   selector:
  13.     app: jenkins

ingress.yaml

  1. kind: Ingress
  2. apiVersion: extensions/v1beta1
  3. metadata:
  4.   name: jenkins
  5.   namespace: infra
  6. spec:
  7.   rules:
  8.   - host: jenkins.od.com
  9.     http:
  10.       paths:
  11.       - path: /
  12.         backend:
  13.           serviceName: jenkins
  14.           servicePort: 80

3.9.应用资源配置清单

任意运算节点上

  1. [root@hdss7-21 etcd]# kubectl apply -f http://k8s-yaml.od.com/jenkins/dp.yaml
  2. [root@hdss7-21 etcd]# kubectl apply -f http://k8s-yaml.od.com/jenkins/svc.yaml
  3. [root@hdss7-21 etcd]# kubectl apply -f http://k8s-yaml.od.com/jenkins/ingress.yaml
  4. [root@hdss7-21 etcd]# kubectl get pods -n infra
  5. NAME READY STATUS RESTARTS AGE
  6. jenkins-54b8469cf9-v46cc 1/1 Running 0 168m
  7. [root@hdss7-21 etcd]# kubectl get all -n infra
  8. NAME READY STATUS RESTARTS AGE
  9. pod/jenkins-54b8469cf9-v46cc 1/1 Running 0 169m
  10. NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
  11. service/jenkins ClusterIP 192.168.183.210 <none> 80/TCP 2d21h
  12. NAME READY UP-TO-DATE AVAILABLE AGE
  13. deployment.apps/jenkins 1/1 1 1 2d21h
  14. NAME DESIRED CURRENT READY AGE
  15. replicaset.apps/jenkins-54b8469cf9 1 1 1 2d18h
  16. replicaset.apps/jenkins-6b6d76f456 0 0 0 2d21h

3.10.解析域名

hdss7-11上

  1. [root@hdss7-11 ~]# vi /var/named/od.com.zone
  2. $ORIGIN od.com.
  3. $TTL 600 ; 10 minutes
  4. @ IN SOA dns.od.com. dnsadmin.od.com. (
  5. 2019111007 ; serial
  6. 10800 ; refresh (3 hours)
  7. 900 ; retry (15 minutes)
  8. 604800 ; expire (1 week)
  9. 86400 ; minimum (1 day)
  10. )
  11. NS dns.od.com.
  12. $TTL 60 ; 1 minute
  13. dns A 10.4.7.11
  14. ...
  15. ...
  16. jenkins A 10.4.7.10
  17. [root@hdss7-11 ~]# systemctl restart named
  18. [root@hdss7-11 ~]# dig -t A jenkins.od.com @10.4.7.11 +short
  19. 10.4.7.10

3.11. 浏览器访问

访问:http://jenkins.od.com 需要输入初始密码:

初始密码查看(也可在log里查看):

  1. [root@hdss7-200 jenkins]# cat /data/nfs-volume/jenkins_home/secrets/initialAdminPassword

3.12.页面配置jenkins

3.12.1.配置用户名密码

用户名:admin 密码:admin123 //后续依赖此密码,请务必设置此密码

3.12.2.设置configure global security

允许匿名用户访问

“阻止跨站请求伪造(CSRF)”的勾去掉

3.12.3.安装好流水线插件Blue-Ocean

注意:安装插件慢的话,可以设置清华大学镜像源加速

hdss7-200上

  1. cd /data/nfs-volume/jenkins_home/updates
  2. sed -i 's/http:\/\/updates.jenkins-ci.org\/download/https:\/\/mirrors.tuna.tsinghua.edu.cn\/jenkins/g' default.json && sed -i 's/http:\/\/www.google.com/https:\/\/www.baidu.com/g' default.json

4.最后的准备工作

4.1.检查jenkins容器里的docker客户端

验证当前用户、时区,sock文件是否可用

验证kubernetes名称空间创建的secret是否可登陆到harbor仓库
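
可以进入 jenkins 的 pod 里逐项验证,下面是一个操作示意(pod 名以 kubectl get pods -n infra 的实际输出为准;pod 能从私有的 infra 仓库拉起并 Running,本身就说明上面创建的 secret 可用):

  1. [root@hdss7-21 ~]# kubectl -n infra exec -it jenkins-54b8469cf9-v46cc -- /bin/bash
  2. //验证当前用户应为root
  3. root@jenkins-54b8469cf9-v46cc:/# whoami
  4. //验证时区应为CST
  5. root@jenkins-54b8469cf9-v46cc:/# date
  6. //能列出宿主机上的容器,说明挂载进来的docker.sock可用
  7. root@jenkins-54b8469cf9-v46cc:/# docker ps
  8. //镜像里自带config.json,应能直接登录成功
  9. root@jenkins-54b8469cf9-v46cc:/# docker login harbor.od.com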

4.2.检查jenkins容器里的SSH key

验证私钥,是否能登陆到gitee拉代码
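
同样可以在 pod 里直接用私钥对 gitee 做一次认证测试(和 3.3 中 docker run 的验证等价,pod 名以实际为准):

  1. [root@hdss7-21 ~]# kubectl -n infra exec -it jenkins-54b8469cf9-v46cc -- \
  2.     ssh -i /root/.ssh/id_rsa -T git@gitee.com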

4.3.部署maven软件

编译java,早些年用javac --> ant --> maven --> Gradle

在运维主机hdss7-200上二进制部署,这里部署maven-3.6.1版本

mvn命令是一个脚本,如果用jdk7,可以在脚本里修改
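
比如以后想让 maven 使用 jdk7,可以在解压后的 mvn 脚本开头写死 JAVA_HOME,示意如下(jdk7 的安装路径为假设,按实际修改):

  1. [root@hdss7-200 ~]# vi /data/nfs-volume/jenkins_home/maven-3.6.1-8u232/bin/mvn
  2. //在脚本开头追加两行,路径为示意
  3. JAVA_HOME=/usr/java/jdk1.7.0_80
  4. export JAVA_HOME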

4.3.1.下载安装包

maven官方下载地址

4.3.2.创建目录并解压

目录8u232是根据docker容器里的jenkins的jdk版本命名,请严格按照此命名

  1. [root@hdss7-200 src]# mkdir /data/nfs-volume/jenkins_home/maven-3.6.1-8u232
  2. [root@hdss7-200 src]# tar xf apache-maven-3.6.1-bin.tar.gz -C /data/nfs-volume/jenkins_home/maven-3.6.1-8u232
  3. [root@hdss7-200 src]# cd /data/nfs-volume/jenkins_home/maven-3.6.1-8u232
  4. [root@hdss7-200 maven-3.6.1-8u232]# ls
  5. apache-maven-3.6.1
  6. [root@hdss7-200 maven-3.6.1-8u232]# mv apache-maven-3.6.1/ ../ && mv ../apache-maven-3.6.1/* .
  7. [root@hdss7-200 maven-3.6.1-8u232]# ll
  8. total 28
  9. drwxr-xr-x 2 root root 97 Dec 3 19:04 bin
  10. drwxr-xr-x 2 root root 42 Dec 3 19:04 boot
  11. drwxr-xr-x 3 501 games 63 Apr 5 2019 conf
  12. drwxr-xr-x 4 501 games 4096 Dec 3 19:04 lib
  13. -rw-r--r-- 1 501 games 13437 Apr 5 2019 LICENSE
  14. -rw-r--r-- 1 501 games 182 Apr 5 2019 NOTICE
  15. -rw-r--r-- 1 501 games 2533 Apr 5 2019 README.txt

4.3.3.设置settings.xml国内镜像源

  1. //搜索<mirrors>配置段,在其中添加以下镜像:
  2. [root@hdss7-200 maven-3.6.1-8u232]# vi /data/nfs-volume/jenkins_home/maven-3.6.1-8u232/conf/settings.xml
  3. <mirror>
  4.   <id>nexus-aliyun</id>
  5.   <mirrorOf>*</mirrorOf>
  6.   <name>Nexus aliyun</name>
  7.   <url>http://maven.aliyun.com/nexus/content/groups/public</url>
  8. </mirror>
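
由于 maven 放在 jenkins_home 这个 NFS 共享目录里,可以直接在 jenkins 的 pod 里确认 mvn 可用(pod 名以实际为准):

  1. [root@hdss7-21 ~]# kubectl -n infra exec jenkins-54b8469cf9-v46cc -- \
  2.     /var/jenkins_home/maven-3.6.1-8u232/bin/mvn -version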

4.4.制作dubbo微服务的底包镜像

运维主机hdss7-200上

4.4.1.自定义Dockerfile

  1. [root@hdss7-200 dockerfile]# mkdir jre8
  2. [root@hdss7-200 dockerfile]# cd jre8/
  3. [root@hdss7-200 jre8]# pwd
  4. /data/dockerfile/jre8
  5. [root@hdss7-200 jre8]# vi Dockerfile
  6. FROM harbor.od.com/public/jre:8u112
  7. RUN /bin/cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime &&\
  8. echo 'Asia/Shanghai' >/etc/timezone
  9. ADD config.yml /opt/prom/config.yml
  10. ADD jmx_javaagent-0.3.1.jar /opt/prom/
  11. WORKDIR /opt/project_dir
  12. ADD entrypoint.sh /entrypoint.sh
  13. CMD ["/entrypoint.sh"]
  • 普罗米修斯的监控匹配规则
  • java agent 收集jvm的信息,采集jvm的jar包
  • docker运行的默认启动脚本entrypoint.sh

4.4.2.准备jre底包(7版本有一个7u80)

  1. [root@hdss7-200 jre8]# docker pull docker.io/stanleyws/jre8:8u112
  2. [root@hdss7-200 jre8]# docker images |grep jre
  3. stanleyws/jre8 8u112 fa3a085d6ef1 2 years ago 363MB
  4. [root@hdss7-200 jre8]# docker tag fa3a085d6ef1 harbor.od.com/public/jre:8u112
  5. [root@hdss7-200 jre8]# docker push harbor.od.com/public/jre:8u112

4.4.3.准备java-agent的jar包

  1. [root@hdss7-200 jre8]# wget https://repo1.maven.org/maven2/io/prometheus/jmx/jmx_prometheus_javaagent/0.3.1/jmx_prometheus_javaagent-0.3.1.jar -O jmx_javaagent-0.3.1.jar

4.4.4.准备config.yml和entrypoint.sh

  1. [root@hdss7-200 jre8]# vi config.yml
  2. ---
  3. rules:
  4. - pattern: '.*'
  5. [root@hdss7-200 jre8]# vi entrypoint.sh
  6. #!/bin/sh
  7. M_OPTS="-Duser.timezone=Asia/Shanghai -javaagent:/opt/prom/jmx_javaagent-0.3.1.jar=$(hostname -i):${M_PORT:-"12346"}:/opt/prom/config.yml"
  8. C_OPTS=${C_OPTS}
  9. JAR_BALL=${JAR_BALL}
  10. exec java -jar ${M_OPTS} ${C_OPTS} ${JAR_BALL}
  11. [root@hdss7-200 jre8]# chmod +x entrypoint.sh

4.4.5.harbor创建base项目

4.4.6.构建dubbo微服务的底包并推到harbor仓库

  1. [root@hdss7-200 jre8]# docker build . -t harbor.od.com/base/jre8:8u112
  2. [root@hdss7-200 jre8]# docker push harbor.od.com/base/jre8:8u112
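
推送后可以简单验证一下底包里的 jre(用 --entrypoint 跳过默认的 entrypoint.sh):

  1. [root@hdss7-200 jre8]# docker run --rm --entrypoint java harbor.od.com/base/jre8:8u112 -version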

5.交付dubbo微服务至kubernetes集群

5.1.配置New job流水线

添加构建参数:

//以下配置项是王导根据多年生产经验总结出来的--运维甩锅大法(姿势要帅,动作要快)

登陆jenkins----->选择NEW-ITEM----->item name :dubbo-demo----->Pipeline------>ok

需要保留多少次老的构建:这里设置保留3天、30个

勾选“This project is parameterized”,使用jenkins的参数化构建

添加String Parameter参数8个(Trim the string都勾选):

app_name

image_name

git_repo

git_ver

add_tag

mvn_dir

target_dir

mvn_cmd

添加Choice Parameter:2个

base_image

maven

5.2.Pipeline Script

  1. pipeline {
  2.   agent any
  3.   stages {
  4.     stage('pull') { //get project code from repo
  5.       steps {
  6.         sh "git clone ${params.git_repo} ${params.app_name}/${env.BUILD_NUMBER} && cd ${params.app_name}/${env.BUILD_NUMBER} && git checkout ${params.git_ver}"
  7.       }
  8.     }
  9.     stage('build') { //exec mvn cmd
  10.       steps {
  11.         sh "cd ${params.app_name}/${env.BUILD_NUMBER} && /var/jenkins_home/maven-${params.maven}/bin/${params.mvn_cmd}"
  12.       }
  13.     }
  14.     stage('package') { //move jar file into project_dir
  15.       steps {
  16.         sh "cd ${params.app_name}/${env.BUILD_NUMBER} && cd ${params.target_dir} && mkdir project_dir && mv *.jar ./project_dir"
  17.       }
  18.     }
  19.     stage('image') { //build image and push to registry
  20.       steps {
  21.         writeFile file: "${params.app_name}/${env.BUILD_NUMBER}/Dockerfile", text: """FROM harbor.od.com/${params.base_image}
  22. ADD ${params.target_dir}/project_dir /opt/project_dir"""
  23.         sh "cd ${params.app_name}/${env.BUILD_NUMBER} && docker build -t harbor.od.com/${params.image_name}:${params.git_ver}_${params.add_tag} . && docker push harbor.od.com/${params.image_name}:${params.git_ver}_${params.add_tag}"
  24.       }
  25.     }
  26.   }
  27. }

5.3.harbor创建app项目,把dubbo服务镜像管理起来

5.4.创建app名称空间,并添加secret资源

任意运算节点上

因为要去拉app私有仓库的镜像,所以添加secret资源

  1. [root@hdss7-21 bin]# kubectl create ns app
  2. namespace/app created
  3. [root@hdss7-21 bin]# kubectl create secret docker-registry harbor --docker-server=harbor.od.com --docker-username=admin --docker-password=Harbor12345 -n app
  4. secret/harbor created

5.5.交付dubbo-demo-service

5.5.1.jenkins传参,构建dubbo-demo-service镜像,传到harbor
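
下面是一组和 5.5.2 的 dp.yaml 对应的参数示例,git_repo、target_dir 等取决于自己 dubbo-demo-service 工程的实际结构,这里仅作示意:

  1. //dubbo-demo-service 构建参数示例
  2. app_name:   dubbo-demo-service
  3. image_name: app/dubbo-demo-service
  4. git_repo:   git@gitee.com:xxx/dubbo-demo-service.git   //换成自己的仓库
  5. git_ver:    master
  6. add_tag:    191201_1200
  7. mvn_dir:    ./
  8. target_dir: ./dubbo-server/target
  9. mvn_cmd:    mvn clean package -Dmaven.test.skip=true
  10. base_image: base/jre8:8u112
  11. maven:      3.6.1-8u232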





5.5.2.创建dubbo-demo-service的资源配置清单

特别注意:dp.yaml的image替换成自己打包的镜像名称

hdss7-200上

  1. [root@hdss7-200 dubbo-demo-service]# vi dp.yaml
  2. kind: Deployment
  3. apiVersion: extensions/v1beta1
  4. metadata:
  5.   name: dubbo-demo-service
  6.   namespace: app
  7.   labels:
  8.     name: dubbo-demo-service
  9. spec:
  10.   replicas: 1
  11.   selector:
  12.     matchLabels:
  13.       name: dubbo-demo-service
  14.   template:
  15.     metadata:
  16.       labels:
  17.         app: dubbo-demo-service
  18.         name: dubbo-demo-service
  19.     spec:
  20.       containers:
  21.       - name: dubbo-demo-service
  22.         image: harbor.od.com/app/dubbo-demo-service:master_191201_1200
  23.         ports:
  24.         - containerPort: 20880
  25.           protocol: TCP
  26.         env:
  27.         - name: JAR_BALL
  28.           value: dubbo-server.jar
  29.         imagePullPolicy: IfNotPresent
  30.       imagePullSecrets:
  31.       - name: harbor
  32.       restartPolicy: Always
  33.       terminationGracePeriodSeconds: 30
  34.       securityContext:
  35.         runAsUser: 0
  36.       schedulerName: default-scheduler
  37.   strategy:
  38.     type: RollingUpdate
  39.     rollingUpdate:
  40.       maxUnavailable: 1
  41.       maxSurge: 1
  42.   revisionHistoryLimit: 7
  43.   progressDeadlineSeconds: 600

5.5.3.应用dubbo-demo-service资源配置清单

任意运算节点上

  1. [root@hdss7-21 bin]# kubectl apply -f http://k8s-yaml.od.com/dubbo-demo-service/dp.yaml
  2. deployment.extensions/dubbo-demo-service created

5.5.4.检查启动状态

dashboard查看日志

zk注册中心查看
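
也可以在任一 zk 节点上用 zkCli 确认 provider 已经注册到 /dubbo 下:

  1. [root@hdss7-11 ~]# /opt/zookeeper/bin/zkCli.sh -server localhost:2181
  2. [zk: localhost:2181(CONNECTED) 0] ls /dubbo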

5.6.交付dubbo-Monitor

dubbo-monitor实际上就是从注册中心registry取数据出来然后展示的工具

两个开源软件:1、dubbo-admin 2、dubbo-monitor。此处我们用dubbo-monitor

5.6.1.准备docker镜像

5.6.1.1.下载源码包、解压
  1. [root@hdss7-200 src]# ll|grep dubbo
  2. -rw-r--r-- 1 root root 23468109 Dec 4 11:50 dubbo-monitor-master.zip
  3. [root@hdss7-200 src]# unzip dubbo-monitor-master.zip
  4. [root@hdss7-200 src]# ll|grep dubbo
  5. drwxr-xr-x 3 root root 69 Jul 27 2016 dubbo-monitor-master
  6. -rw-r--r-- 1 root root 23468109 Dec 4 11:50 dubbo-monitor-master.zip
5.6.1.2.修改以下项配置
  1. [root@hdss7-200 conf]# pwd
  2. /opt/src/dubbo-monitor-master/dubbo-monitor-simple/conf
  3. [root@hdss7-200 conf]# vi dubbo_origin.properties
  4. dubbo.registry.address=zookeeper://zk1.od.com:2181?backup=zk2.od.com:2181,zk3.od.com:2181
  5. dubbo.protocol.port=20880
  6. dubbo.jetty.port=8080
  7. dubbo.jetty.directory=/dubbo-monitor-simple/monitor
  8. dubbo.statistics.directory=/dubbo-monitor-simple/statistics
  9. dubbo.charts.directory=/dubbo-monitor-simple/charts
  10. dubbo.log4j.file=logs/dubbo-monitor.log
5.6.1.3.制作镜像
5.6.1.3.1.准备环境
  • 由于是虚拟机环境,这里java给的内存太大,需要调小一些;把 nohup 替换成 exec,让进程在前台运行,去掉行尾的 & 符,并删除 nohup 行之后的所有行
  1. [root@hdss7-200 bin]# vi /opt/src/dubbo-monitor-master/dubbo-monitor-simple/bin/start.sh
  2. JAVA_MEM_OPTS=" -server -Xmx2g -Xms2g -Xmn256m -XX:PermSize=128m -Xss256k -XX:+DisableExplicitGC -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled -XX:+UseCMSCompactAtFullCollection -XX:LargePageSizeInBytes=128m -XX:+UseFastAccessorMethods -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=70 "
  3. else
  4. JAVA_MEM_OPTS=" -server -Xms1g -Xmx1g -XX:PermSize=128m -XX:SurvivorRatio=2 -XX:+UseParallelGC "
  5. fi
  6. echo -e "Starting the $SERVER_NAME ...\c"
  7. nohup java $JAVA_OPTS $JAVA_MEM_OPTS $JAVA_DEBUG_OPTS $JAVA_JMX_OPTS -classpath $CONF_DIR:$LIB_JARS com.alibaba.dubbo.container.Main > $STDOUT_FILE 2>&1 &
  • sed命令替换,用到了sed模式空间
  1. sed -r -i -e '/^nohup/{p;:a;N;$!ba;d}' ./dubbo-monitor-simple/bin/start.sh && sed -r -i -e "s%^nohup(.*)%exec \1%" /opt/src/dubbo-monitor-master/dubbo-monitor-simple/bin/start.sh

//调小内存,然后nohup行结尾的&去掉!!!

  • 为了规范,复制到data下
  1. [root@hdss7-200 src]# mv dubbo-monitor-master dubbo-monitor
  2. [root@hdss7-200 src]# cp -a dubbo-monitor /data/dockerfile/
  3. [root@hdss7-200 src]# cd /data/dockerfile/dubbo-monitor/
5.6.1.3.2.准备Dockerfile
  1. [root@hdss7-200 dubbo-monitor]# pwd
  2. /data/dockerfile/dubbo-monitor
  3. [root@hdss7-200 dubbo-monitor]# cat Dockerfile
  4. FROM jeromefromcn/docker-alpine-java-bash
  5. MAINTAINER Jerome Jiang
  6. COPY dubbo-monitor-simple/ /dubbo-monitor-simple/
  7. CMD /dubbo-monitor-simple/bin/start.sh
5.6.1.3.3.build镜像并push到harbor仓库
  1. [root@hdss7-200 dubbo-monitor]# docker build . -t harbor.od.com/infra/dubbo-monitor:latest
  2. [root@hdss7-200 dubbo-monitor]# docker push harbor.od.com/infra/dubbo-monitor:latest

5.6.2.解析域名

  1. [root@hdss7-11 ~]# vi /var/named/od.com.zone
  2. $ORIGIN od.com.
  3. $TTL 600 ; 10 minutes
  4. @ IN SOA dns.od.com. dnsadmin.od.com. (
  5. 2019111008 ; serial
  6. 10800 ; refresh (3 hours)
  7. 900 ; retry (15 minutes)
  8. 604800 ; expire (1 week)
  9. 86400 ; minimum (1 day)
  10. )
  11. NS dns.od.com.
  12. 。。。略
  13. dubbo-monitor A 10.4.7.10
  14. [root@hdss7-11 ~]# systemctl restart named
  15. [root@hdss7-11 ~]# dig -t A dubbo-monitor.od.com @10.4.7.11 +short
  16. 10.4.7.10

5.6.3.准备k8s资源配置清单

  • dp.yaml
  1. [root@hdss7-200 k8s-yaml]# pwd
  2. /data/k8s-yaml
  3. [root@hdss7-200 k8s-yaml]# mkdir dubbo-monitor
  4. [root@hdss7-200 k8s-yaml]# cd dubbo-monitor/
  5. [root@hdss7-200 dubbo-monitor]# vi dp.yaml
  6. //dp.yaml内容如下
  7. kind: Deployment
  8. apiVersion: extensions/v1beta1
  9. metadata:
  10.   name: dubbo-monitor
  11.   namespace: infra
  12.   labels:
  13.     name: dubbo-monitor
  14. spec:
  15.   replicas: 1
  16.   selector:
  17.     matchLabels:
  18.       name: dubbo-monitor
  19.   template:
  20.     metadata:
  21.       labels:
  22.         app: dubbo-monitor
  23.         name: dubbo-monitor
  24.     spec:
  25.       containers:
  26.       - name: dubbo-monitor
  27.         image: harbor.od.com/infra/dubbo-monitor:latest
  28.         ports:
  29.         - containerPort: 8080
  30.           protocol: TCP
  31.         - containerPort: 20880
  32.           protocol: TCP
  33.         imagePullPolicy: IfNotPresent
  34.       imagePullSecrets:
  35.       - name: harbor
  36.       restartPolicy: Always
  37.       terminationGracePeriodSeconds: 30
  38.       securityContext:
  39.         runAsUser: 0
  40.       schedulerName: default-scheduler
  41.   strategy:
  42.     type: RollingUpdate
  43.     rollingUpdate:
  44.       maxUnavailable: 1
  45.       maxSurge: 1
  46.   revisionHistoryLimit: 7
  47.   progressDeadlineSeconds: 600
  • svc.yaml
  1. kind: Service
  2. apiVersion: v1
  3. metadata:
  4.   name: dubbo-monitor
  5.   namespace: infra
  6. spec:
  7.   ports:
  8.   - protocol: TCP
  9.     port: 8080
  10.     targetPort: 8080
  11.   selector:
  12.     app: dubbo-monitor
  • ingress.yaml
  1. kind: Ingress
  2. apiVersion: extensions/v1beta1
  3. metadata:
  4.   name: dubbo-monitor
  5.   namespace: infra
  6. spec:
  7.   rules:
  8.   - host: dubbo-monitor.od.com
  9.     http:
  10.       paths:
  11.       - path: /
  12.         backend:
  13.           serviceName: dubbo-monitor
  14.           servicePort: 8080

5.6.4.应用资源配置清单

  1. [root@hdss7-21 bin]# kubectl apply -f http://k8s-yaml.od.com/dubbo-monitor/dp.yaml
  2. deployment.extensions/dubbo-monitor created
  3. [root@hdss7-21 bin]# kubectl apply -f http://k8s-yaml.od.com/dubbo-monitor/svc.yaml
  4. service/dubbo-monitor created
  5. [root@hdss7-21 bin]# kubectl apply -f http://k8s-yaml.od.com/dubbo-monitor/ingress.yaml
  6. ingress.extensions/dubbo-monitor created

5.6.5.浏览器访问

5.7.交付dubbo-demo-consumer

5.7.1.jenkins传参,构建dubbo-demo-consumer镜像,传到harbor

jenkins的jar包本地缓存
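
consumer 的构建参数和 service 基本一致,差异主要在仓库地址、target_dir 和镜像名,同样只是示意:

  1. //dubbo-demo-consumer 构建参数示例
  2. app_name:   dubbo-demo-consumer
  3. image_name: app/dubbo-demo-consumer
  4. git_repo:   git@gitee.com:xxx/dubbo-demo-web.git   //换成自己的consumer工程仓库
  5. git_ver:    master
  6. add_tag:    191204_1307
  7. mvn_dir:    ./
  8. target_dir: ./dubbo-client/target
  9. mvn_cmd:    mvn clean package -Dmaven.test.skip=true
  10. base_image: base/jre8:8u112
  11. maven:      3.6.1-8u232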

5.7.2.创建dubbo-demo-consumer的资源配置清单

运维主机hdss7-200上

特别注意:dp.yaml的image替换成自己打包的镜像名称

dp.yaml

  1. [root@hdss7-200 k8s-yaml]# pwd
  2. /data/k8s-yaml
  3. [root@hdss7-200 k8s-yaml]# mkdir dubbo-demo-consumer
  4. [root@hdss7-200 k8s-yaml]# cd dubbo-demo-consumer/
  5. [root@hdss7-200 dubbo-demo-consumer]# vi dp.yaml
  6. kind: Deployment
  7. apiVersion: extensions/v1beta1
  8. metadata:
  9.   name: dubbo-demo-consumer
  10.   namespace: app
  11.   labels:
  12.     name: dubbo-demo-consumer
  13. spec:
  14.   replicas: 1
  15.   selector:
  16.     matchLabels:
  17.       name: dubbo-demo-consumer
  18.   template:
  19.     metadata:
  20.       labels:
  21.         app: dubbo-demo-consumer
  22.         name: dubbo-demo-consumer
  23.     spec:
  24.       containers:
  25.       - name: dubbo-demo-consumer
  26.         image: harbor.od.com/app/dubbo-demo-consumer:master_191204_1307
  27.         ports:
  28.         - containerPort: 8080
  29.           protocol: TCP
  30.         - containerPort: 20880
  31.           protocol: TCP
  32.         env:
  33.         - name: JAR_BALL
  34.           value: dubbo-client.jar
  35.         imagePullPolicy: IfNotPresent
  36.       imagePullSecrets:
  37.       - name: harbor
  38.       restartPolicy: Always
  39.       terminationGracePeriodSeconds: 30
  40.       securityContext:
  41.         runAsUser: 0
  42.       schedulerName: default-scheduler
  43.   strategy:
  44.     type: RollingUpdate
  45.     rollingUpdate:
  46.       maxUnavailable: 1
  47.       maxSurge: 1
  48.   revisionHistoryLimit: 7
  49.   progressDeadlineSeconds: 600

svc.yaml

  1. kind: Service
  2. apiVersion: v1
  3. metadata:
  4.   name: dubbo-demo-consumer
  5.   namespace: app
  6. spec:
  7.   ports:
  8.   - protocol: TCP
  9.     port: 8080
  10.     targetPort: 8080
  11.   selector:
  12.     app: dubbo-demo-consumer

ingress.yaml

  1. kind: Ingress
  2. apiVersion: extensions/v1beta1
  3. metadata:
  4.   name: dubbo-demo-consumer
  5.   namespace: app
  6. spec:
  7.   rules:
  8.   - host: demo.od.com
  9.     http:
  10.       paths:
  11.       - path: /
  12.         backend:
  13.           serviceName: dubbo-demo-consumer
  14.           servicePort: 8080

5.7.3.应用dubbo-demo-consumer资源配置清单

任意运算节点上

  1. [root@hdss7-21 bin]# kubectl apply -f http://k8s-yaml.od.com/dubbo-demo-consumer/dp.yaml
  2. deployment.extensions/dubbo-demo-consumer created
  3. [root@hdss7-21 bin]# kubectl apply -f http://k8s-yaml.od.com/dubbo-demo-consumer/svc.yaml
  4. service/dubbo-demo-consumer created
  5. [root@hdss7-21 bin]# kubectl apply -f http://k8s-yaml.od.com/dubbo-demo-consumer/ingress.yaml
  6. ingress.extensions/dubbo-demo-consumer created

5.7.4.解析域名

hdss7-11上

  1. [root@hdss7-11 ~]# vi /var/named/od.com.zone
  2. $ORIGIN od.com.
  3. $TTL 600 ; 10 minutes
  4. @ IN SOA dns.od.com. dnsadmin.od.com. (
  5. 2019111009 ; serial
  6. 10800 ; refresh (3 hours)
  7. 900 ; retry (15 minutes)
  8. 604800 ; expire (1 week)
  9. 86400 ; minimum (1 day)
  10. )
  11. NS dns.od.com.
  12. $TTL 60 ; 1 minute
  13. dns A 10.4.7.11
  14. ...
  15. ...
  16. demo A 10.4.7.10
  17. [root@hdss7-11 ~]# systemctl restart named
  18. [root@hdss7-11 ~]# dig -t A demo.od.com @10.4.7.11 +short
  19. 10.4.7.10

5.7.5.检查启动状态
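
consumer 起来之后,除了在 dashboard 和 dubbo-monitor 上看,也可以直接访问 demo 接口验证(具体 URI 取决于 demo 工程暴露的接口,这里以 /hello 为例):

  1. [root@hdss7-21 ~]# curl http://demo.od.com/hello?name=dubbo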

6.实战维护dubbo微服务集群

6.1.更新(rolling update)

  • 修改代码提交GIT(发版)
  • 使用jenkins进行CI(持续构建)
  • 修改并应用k8s资源配置清单
  • 或者在k8s上直接修改deployment里的harbor镜像地址(示例见下)
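
如果只是替换镜像版本,可以直接改 deployment 的镜像地址触发滚动更新,示例如下(镜像 tag 为示意):

  1. [root@hdss7-21 ~]# kubectl -n app set image deployment/dubbo-demo-service \
  2.     dubbo-demo-service=harbor.od.com/app/dubbo-demo-service:master_191204_1400
  3. //或者直接编辑资源配置
  4. [root@hdss7-21 ~]# kubectl -n app edit deployment dubbo-demo-service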

6.2.扩容(scaling)

  • 在k8s的dashboard上直接操作:登陆dashboard页面-->部署-->伸缩-->修改数量-->确定
  • 命令行扩容,如下示例:
  1. * Examples:
  2. # Scale a replicaset named 'foo' to 3.
  3. kubectl scale --replicas=3 rs/foo
  4. # Scale a resource identified by type and name specified in "foo.yaml" to 3.
  5. kubectl scale --replicas=3 -f foo.yaml
  6. # If the deployment named mysql's current size is 2, scale mysql to 3.
  7. kubectl scale --current-replicas=2 --replicas=3 deployment/mysql
  8. # Scale multiple replication controllers.
  9. kubectl scale --replicas=5 rc/foo rc/bar rc/baz
  10. # Scale statefulset named 'web' to 3.
  11. kubectl scale --replicas=3 statefulset/web

6.3.宿主机突发故障处理

假如hdss7-21突发故障,离线

  1. 其他运算节点上操作:先删除故障节点使k8s触发自愈机制,pod在健康节点重新拉起
  1. [root@hdss7-22 ~]# kubectl delete node hdss7-21.host.com
  2. node "hdss7-21.host.com" deleted
  2. 前端代理修改配置文件,把节点注释掉,使其不再调度到故障节点(hdss7-21)
  1. [root@hdss7-11 ~]# vi /etc/nginx/nginx.conf
  2. [root@hdss7-11 ~]# vi /etc/nginx/conf.d/od.com.conf
  3. [root@hdss7-11 ~]# nginx -t
  4. [root@hdss7-11 ~]# nginx -s reload
  3. 节点修好,直接启动,会自行加到集群,修改label,并把节点加回前端负载
  1. [root@hdss7-21 bin]# kubectl label node hdss7-21.host.com node-role.kubernetes.io/master=
  2. node/hdss7-21.host.com labeled
  3. [root@hdss7-21 bin]# kubectl label node hdss7-21.host.com node-role.kubernetes.io/node=
  4. node/hdss7-21.host.com labeled
  5. [root@hdss7-21 bin]# kubectl get nodes
  6. NAME STATUS ROLES AGE VERSION
  7. hdss7-21.host.com Ready master,node 8d v1.15.4
  8. hdss7-22.host.com Ready master,node 10d v1.15.4

6.4.FAQ

6.4.1.supervisor restart 不成功?

/etc/supervisord.d/xxx.ini 追加:

  1. killasgroup=true
  2. stopasgroup=true
