OpenShift 3.11 Installation and Deployment

1 Environment preparation (all nodes)

OpenShift version: v3.11

1.1 Machine environment
ip              cpu  mem(GB)  hostname  OS
192.168.1.130   4    16       master    CentOS 7.6
192.168.1.132   2    4        node01    CentOS 7.6
192.168.1.135   2    4        node02    CentOS 7.6
1.2 Passwordless SSH login
ssh-keygen
ssh-copy-id 192.168.1.130
ssh-copy-id 192.168.1.132
ssh-copy-id 192.168.1.135
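A quick optional check that key-based login now works without a prompt (a minimal sketch using the IPs from 1.1):

# each command should print the remote hostname without asking for a password
for h in 192.168.1.130 192.168.1.132 192.168.1.135; do ssh -o BatchMode=yes root@$h hostname; done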
1.3 Hosts resolution
vim /etc/hosts
192.168.1.130 master
192.168.1.132 node01
192.168.1.135 node02
---------------------
scp -rp /etc/hosts 192.168.1.132:/etc/hosts
scp -rp /etc/hosts 192.168.1.135:/etc/hosts
1.4 SELinux and firewall

OpenShift 3.11 expects SELinux to be enforcing with the targeted policy; the (commented) lines below make /etc/selinux/config match that:

#sed -i 's/SELINUX=.*/SELINUX=enforcing/' /etc/selinux/config

#sed -i 's/SELINUXTYPE=.*/SELINUXTYPE=targeted/' /etc/selinux/config

Open port 8443 for the OpenShift API:

/sbin/iptables -I INPUT -p tcp --dport 8443 -j ACCEPT && \
service iptables save

1.5 Install required packages

yum install -y wget git ntp net-tools bind-utils iptables-services bridge-utils bash-completion kexec-tools sos psacct nfs-utils yum-utils docker NetworkManager

1.6 Miscellaneous
sysctl net.ipv4.ip_forward=1
yum install pyOpenSSL httpd-tools -y
systemctl start NetworkManager
systemctl enable NetworkManager
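The sysctl call above only lasts until the next reboot; a minimal sketch to make it persistent (the file name under /etc/sysctl.d is my own choice):

echo 'net.ipv4.ip_forward = 1' > /etc/sysctl.d/99-ip-forward.conf
sysctl --system    # reload all sysctl configuration files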

Configure a Docker registry mirror (daemon.json):
echo '{
"insecure-registries": ["172.30.0.0/16"],
"registry-mirrors": ["https://xxxxx.mirror.aliyuncs.com"]
}' >/etc/docker/daemon.json && \
systemctl daemon-reload && \
systemctl enable docker && \
systemctl restart docker
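Optionally confirm the daemon picked up the settings:

# should list 172.30.0.0/16 and the mirror URL
docker info 2>/dev/null | grep -A 2 -iE 'registry mirrors|insecure registries'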
1.7 Pre-pull images
# master image list (master node)
echo 'docker.io/cockpit/kubernetes
docker.io/openshift/origin-haproxy-router
docker.io/openshift/origin-service-catalog
docker.io/openshift/origin-node
docker.io/openshift/origin-deployer
docker.io/openshift/origin-control-plane
docker.io/openshift/origin-template-service-broker
docker.io/openshift/origin-pod
docker.io/openshift/origin-web-console
quay.io/coreos/etcd' >image.txt && \
while read line; do docker pull $line ; done<image.txt

# node image list (both compute nodes)
echo 'docker.io/openshift/origin-haproxy-router
docker.io/openshift/origin-node
docker.io/openshift/origin-deployer
docker.io/openshift/origin-pod
docker.io/ansibleplaybookbundle/origin-ansible-service-broker
docker.io/openshift/origin-docker-registry' >image.txt && \
while read line; do docker pull $line ; done<image.txt
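The loops above pull whatever :latest points at. A quick look at what landed locally, plus an optional hedged variant that pins the openshift/* images to the release tag (assuming the v3.11 tag is published on Docker Hub for these images, which it is for the Origin 3.11 release):

# see what is now cached locally
docker images | grep -E 'openshift|cockpit|etcd'

# optional: also pull release-tagged copies of the openshift/* images
grep '^docker.io/openshift/' image.txt | while read line; do docker pull "$line:v3.11"; done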

2 Configure Ansible (master node)

2.1 Download the openshift-ansible code

Ansible 2.6.5 is required.

git clone -b release-3.11 https://github.com/openshift/openshift-ansible.git

wget https://buildlogs.centos.org/centos/7/paas/x86_64/openshift-origin311/ansible-2.6.5-1.el7.noarch.rpm &&\
yum localinstall ansible-2.6.5-1.el7.noarch.rpm -y &&\
yum install -y etcd &&\
systemctl enable etcd &&\
systemctl start etcd
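A quick optional check that the right Ansible landed and the local etcd is healthy:

ansible --version | head -1       # should report 2.6.5
etcdctl cluster-health            # the single local member should report healthy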
2.2 Inventory file
[root@master ~]# cat /etc/ansible/hosts
[all]
# every machine in the cluster is listed under [all]
master
node01
node02

[OSEv3:children]
# the OpenShift roles; there are three here: masters, nodes, etcd
masters
nodes
etcd

[OSEv3:vars]
# OpenShift installation parameters

# user Ansible uses for SSH
ansible_ssh_user=root

# deployment type: origin
openshift_deployment_type=origin

# release: 3.11
openshift_release=3.11

openshift_enable_service_catalog=false
openshift_clock_enabled=true
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]
openshift_disable_check=disk_availability,docker_storage,memory_availability,docker_image_availability

[masters]
# machines that take the master role
master

[etcd]
# machines that take the etcd role
master

[nodes]
# machines that take the node role
master openshift_node_group_name='node-config-all-in-one'
node01 openshift_node_group_name='node-config-compute'
node02 openshift_node_group_name='node-config-compute'

# optional variables kept for reference; if enabled they belong under [OSEv3:vars]
#openshift_enable_service_catalog=false
#openshift_hosted_registry_storage_kind=nfs
#openshift_hosted_registry_storage_access_modes=['ReadWriteMany']
#openshift_hosted_registry_storage_nfs_directory=/data/docker
#openshift_hosted_registry_storage_nfs_options='*(rw,root_squash)'
#openshift_hosted_registry_storage_volume_name=registry
#openshift_hosted_registry_storage_volume_size=20Gi
#openshift_clock_enabled=true
#ansible_service_broker_install=false
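Before running the playbooks, a quick sanity check that the inventory parses and every host is reachable (Ansible's ping module is an SSH + Python check, not ICMP):

ansible all --list-hosts
ansible all -m ping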

3 Install with Ansible

# pre-installation checks

ansible-playbook ~/openshift-ansible/playbooks/prerequisites.yml

# install

ansible-playbook ~/openshift-ansible/playbooks/deploy_cluster.yml
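Once deploy_cluster.yml finishes, a quick sanity check on the master (node names come from the inventory above):

oc get nodes             # all three nodes should be Ready
oc get pods -n default   # router and registry pods should be Running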

# to reinstall, uninstall first

ansible-playbook ~/openshift-ansible/playbooks/adhoc/uninstall.yml

4 Post-install configuration (master node)

4.1 Configure NFS persistent volumes
yum install nfs-utils rpcbind -y
mkdir -p /data/v0{01..20} /data/{docker,volume,registry}
chmod -R 777 /data
vim /etc/exports
/data 192.168.1.0/24(rw,sync,no_all_squash,no_root_squash)
/data/v001 192.168.1.0/24(rw,sync,no_all_squash,no_root_squash)
/data/v002 192.168.1.0/24(rw,sync,no_all_squash,no_root_squash)
/data/v003 192.168.1.0/24(rw,sync,no_all_squash,no_root_squash)
/data/v004 192.168.1.0/24(rw,sync,no_all_squash,no_root_squash)
/data/v005 192.168.1.0/24(rw,sync,no_all_squash,no_root_squash)
/data/v006 192.168.1.0/24(rw,sync,no_all_squash,no_root_squash)
/data/v007 192.168.1.0/24(rw,sync,no_all_squash,no_root_squash)
/data/v008 192.168.1.0/24(rw,sync,no_all_squash,no_root_squash)
/data/v009 192.168.1.0/24(rw,sync,no_all_squash,no_root_squash)
/data/v010 192.168.1.0/24(rw,sync,no_all_squash,no_root_squash)
/data/docker *(rw,sync,no_all_squash,no_root_squash)
---------------------
systemctl restart rpcbind && \
systemctl restart nfs && \
systemctl enable rpcbind &&\
systemctl enable nfs
exportfs -r
kubectl apply -f pv-01-10.yaml
The file contents are listed in section 6.1 (pv-01-10.yaml) at the end of this article.
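For reference, a minimal PVC sketch that should bind against one of the PVs above (the claim name is my own choice; run it from whichever project you are testing in):

cat <<'EOF' | oc apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
EOF
oc get pvc test-claim    # should show STATUS Bound shortly after creation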
4.2 Create OpenShift users
oc login -u system:admin                                ## log in as the system administrator
htpasswd -b /etc/origin/master/htpasswd admin 123456    ## create a user
htpasswd -b /etc/origin/master/htpasswd dev dev         ## create a user
oc login -u admin                                       ## log in as the new user
oc logout                                               ## log out of the current user
4.3 Grant cluster-admin to a created user
oc login -u system:admin && \
oc adm policy add-cluster-role-to-user cluster-admin xxxxx    ## replace xxxxx with the user name, e.g. admin
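To confirm the grant took effect (assuming the user is admin from 4.2):

oc login -u admin
oc get nodes              # succeeds only if the user now has cluster-level access
oc login -u system:admin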
4.4 Access test

Add hosts entries on your local machine first:

192.168.1.130 master
192.168.1.132 node01
192.168.1.135 node02

The credentials are the ones created above:

https://master:8443    admin / 123456

5 Other configuration

5.1 Deploy Cockpit for cluster node management
yum install -y cockpit cockpit-docker cockpit-kubernetes &&\
systemctl start cockpit &&\
systemctl enable cockpit.socket &&\
iptables -A INPUT -p tcp -m state --state NEW -m tcp --dport 9090 -j ACCEPT
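The iptables rule above is not persistent; since iptables-services was installed in 1.5, a small optional follow-up saves it across reboots:

service iptables save    # persist the port 9090 rule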

https://192.168.1.130:9090 — log in with the machine's SSH account and password.

5.2 Command completion
# kubectl completion
mkdir -p /usr/share/bash-completion/kubernetes
kubectl completion bash >/usr/share/bash-completion/kubernetes/bash_completion
echo 'source /usr/share/bash-completion/kubernetes/bash_completion' >>~/.bash_profile

# oc completion
mkdir -p /usr/share/bash-completion/openshift
oc completion bash >/usr/share/bash-completion/openshift/bash_completion
echo "source /usr/share/bash-completion/openshift/bash_completion" >> ~/.bash_profile
source ~/.bash_profile
5.3 Log in to OpenShift and the internal registry
# log in to OpenShift (for example with the dev user created above, password dev)
oc login -n openshift
# find the docker-registry service IP
oc get svc -n default | grep docker-registry | awk '{print $3}'
# show the current user's token
oc whoami -t
# log in to the internal Docker registry
docker login -u admin -p `oc whoami -t` docker-registry.default.svc:5000
Take the docker-registry service IP from the command above and add a matching hosts entry for docker-registry.default.svc on every host so the name resolves.
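A one-line sketch of that hosts entry (run it on the master and copy the resulting line to the other hosts):

echo "$(oc get svc docker-registry -n default -o jsonpath='{.spec.clusterIP}') docker-registry.default.svc" >> /etc/hosts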
5.4 Common CLI operations
#master-restart api                                    ### restart the master API (run on the master)
#master-restart controllers                            ### restart the master controllers
oc whoami -t                                           ### show the current user's token
oc login https://master:8443 --token=`oc whoami -t`    ### log in with that token
oc get nodes                                           ### check node status

6 Appendix

6.1 The pv-01-10.yaml file
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv001
  labels:
    name: pv001
    type: nfs
spec:
  nfs:
    path: /data/v001
    server: 192.168.1.130
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteMany
    - ReadWriteOnce
    - ReadOnlyMany
  persistentVolumeReclaimPolicy: Retain
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv002
  labels:
    name: nfs-pv002
    type: nfs
spec:
  nfs:
    path: /data/v002
    server: 192.168.1.130
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteMany
    - ReadWriteOnce
    - ReadOnlyMany
  persistentVolumeReclaimPolicy: Retain
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv003
  labels:
    name: nfs-pv003
    type: nfs
spec:
  nfs:
    path: /data/v003
    server: 192.168.1.130
  capacity:
    storage: 30Gi
  accessModes:
    - ReadWriteMany
    - ReadWriteOnce
    - ReadOnlyMany
  persistentVolumeReclaimPolicy: Retain
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv004
  labels:
    name: nfs-pv004
    type: nfs
spec:
  nfs:
    path: /data/v004
    server: 192.168.1.130
  capacity:
    storage: 30Gi
  accessModes:
    - ReadWriteMany
    - ReadWriteOnce
    - ReadOnlyMany
  persistentVolumeReclaimPolicy: Retain
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv005
  labels:
    name: nfs-pv005
    type: nfs
spec:
  nfs:
    path: /data/v005
    server: 192.168.1.130
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
    - ReadWriteOnce
    - ReadOnlyMany
  persistentVolumeReclaimPolicy: Retain
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv006
  labels:
    name: nfs-pv006
    type: nfs
spec:
  nfs:
    path: /data/v006
    server: 192.168.1.130
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
    - ReadWriteOnce
    - ReadOnlyMany
  persistentVolumeReclaimPolicy: Retain
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv007
  labels:
    name: nfs-pv007
    type: nfs
spec:
  nfs:
    path: /data/v007
    server: 192.168.1.130
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
    - ReadWriteOnce
    - ReadOnlyMany
  persistentVolumeReclaimPolicy: Retain
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv008
  labels:
    name: nfs-pv008
    type: nfs
spec:
  nfs:
    path: /data/v008
    server: 192.168.1.130
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
    - ReadWriteOnce
    - ReadOnlyMany
  persistentVolumeReclaimPolicy: Retain
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv009
  labels:
    name: nfs-pv009
    type: nfs
spec:
  nfs:
    path: /data/v009
    server: 192.168.1.130
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteMany
    - ReadWriteOnce
    - ReadOnlyMany
  persistentVolumeReclaimPolicy: Retain
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv010
  labels:
    name: nfs-pv010
    type: nfs
spec:
  nfs:
    path: /data/v010
    server: 192.168.1.130
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteMany
    - ReadWriteOnce
    - ReadOnlyMany
  persistentVolumeReclaimPolicy: Retain
