Installing Helm 3.1 and deploying Harbor with Ceph RBD
- [root@bs-k8s-ceph ~]# ceph -s
- cluster:
- id: 11880418-1a9a-4b55-a353-4b141e2199d8
- health: HEALTH_WARN
- Long heartbeat ping times on back interface seen, longest is 3884.944 msec
- Long heartbeat ping times on front interface seen, longest is 3888.368 msec
- application not enabled on 1 pool(s)
- clock skew detected on mon.bs-hk-hk02, mon.bs-k8s-ceph
- services:
- mon: 3 daemons, quorum bs-hk-hk01,bs-hk-hk02,bs-k8s-ceph
- mgr: bs-hk-hk01(active), standbys: bs-hk-hk02, bs-k8s-ceph
- osd: 6 osds: 6 up, 6 in
- data:
- pools: 3 pools, 320 pgs
- objects: 416 objects, 978 MiB
- usage: 8.6 GiB used, 105 GiB / 114 GiB avail
- pgs: 320 active+clean
- [root@bs-k8s-ceph ~]# ceph osd pool application enable harbor rbd
- enabled application 'rbd' on pool 'harbor'
- [root@bs-k8s-ceph ~]# ceph -s
- cluster:
- id: 11880418-1a9a-4b55-a353-4b141e2199d8
- health: HEALTH_WARN
- Long heartbeat ping times on back interface seen, longest is 3870.142 msec
- Long heartbeat ping times on front interface seen, longest is 3873.410 msec
- clock skew detected on mon.bs-hk-hk02
- services:
- mon: 3 daemons, quorum bs-hk-hk01,bs-hk-hk02,bs-k8s-ceph
- mgr: bs-hk-hk01(active), standbys: bs-hk-hk02, bs-k8s-ceph
- osd: 6 osds: 6 up, 6 in
- data:
- pools: 3 pools, 320 pgs
- objects: 416 objects, 978 MiB
- usage: 8.6 GiB used, 105 GiB / 114 GiB avail
- pgs: 320 active+clean
- # systemctl restart ceph.target // restart the Ceph daemons so the mons pick up the synced clocks (clears the clock-skew warning)
- [root@bs-k8s-ceph ~]# ceph -s
- cluster:
- id: 11880418-1a9a-4b55-a353-4b141e2199d8
- health: HEALTH_OK
- services:
- mon: 3 daemons, quorum bs-hk-hk01,bs-hk-hk02,bs-k8s-ceph
- mgr: bs-hk-hk01(active), standbys: bs-hk-hk02, bs-k8s-ceph
- osd: 6 osds: 6 up, 6 in
- data:
- pools: 3 pools, 320 pgs
- objects: 416 objects, 978 MiB
- usage: 8.6 GiB used, 105 GiB / 114 GiB avail
- pgs: 320 active+clean
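The restart happened to clear the warning here, but mon clock skew usually comes back unless every node syncs against the same NTP source. A minimal chrony sketch — the NTP server below is a placeholder, not taken from this cluster:

```
# /etc/chrony.conf sketch — use the same config on every mon node.
# The server address is a placeholder; point it at your own NTP source.
server ntp1.aliyun.com iburst
makestep 1.0 3                 # step the clock if offset > 1s, first 3 updates
driftfile /var/lib/chrony/drift
```

Apply it on each mon node, run `systemctl restart chronyd`, then re-check `ceph -s`.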
- [root@bs-k8s-master01 ~]# kubectl get nodes
- The connection to the server 20.0.0.250:8443 was refused - did you specify the right host or port?
- [root@bs-hk-hk01 ~]# systemctl start haproxy
- [root@bs-k8s-master01 k8s]# kubectl get nodes
- NAME STATUS ROLES AGE VERSION
- bs-k8s-master01 Ready master 7d10h v1.17.2
- bs-k8s-master02 Ready master 7d10h v1.17.2
- bs-k8s-master03 Ready master 7d10h v1.17.2
- bs-k8s-node01 Ready <none> 7d10h v1.17.2
- bs-k8s-node02 Ready <none> 7d10h v1.17.2
- bs-k8s-node03 NotReady <none> 7d9h v1.17.2 // shut down to save CPU
- https://github.com/helm/helm/releases
- [root@bs-k8s-master01 helm3]# pwd
- /data/k8s/helm3
- [root@bs-k8s-master01 helm3]# ll
- total 11980
- -rw-r--r-- 1 root root 12267464 Feb 17 2020 helm-v3.1.0-linux-amd64.tar.gz
- [root@bs-k8s-master01 helm3]# cp linux-amd64/helm /usr/local/bin/helm
- [root@bs-k8s-master01 helm3]# helm version
- version.BuildInfo{Version:"v3.1.0", GitCommit:"b29d20baf09943e134c2fa5e1e1cab3bf93315fa", GitTreeState:"clean", GoVersion:"go1.13.7"}
- [root@bs-k8s-master01 helm3]# helm --help
- The Kubernetes package manager
- Common actions for Helm:
- - helm search: search for charts
- - helm pull: download a chart to your local directory to view
- - helm install: upload the chart to Kubernetes
- - helm list: list releases of charts
- Environment variables:
- +------------------+-----------------------------------------------------------------------------+
- | Name | Description |
- +------------------+-----------------------------------------------------------------------------+
- | $XDG_CACHE_HOME | set an alternative location for storing cached files. |
- | $XDG_CONFIG_HOME | set an alternative location for storing Helm configuration. |
- | $XDG_DATA_HOME | set an alternative location for storing Helm data. |
- | $HELM_DRIVER | set the backend storage driver. Values are: configmap, secret, memory |
- | $HELM_NO_PLUGINS | disable plugins. Set HELM_NO_PLUGINS=1 to disable plugins. |
- | $KUBECONFIG | set an alternative Kubernetes configuration file (default "~/.kube/config") |
- +------------------+-----------------------------------------------------------------------------+
- Helm stores configuration based on the XDG base directory specification, so
- - cached files are stored in $XDG_CACHE_HOME/helm
- - configuration is stored in $XDG_CONFIG_HOME/helm
- - data is stored in $XDG_DATA_HOME/helm
- By default, the default directories depend on the Operating System. The defaults are listed below:
- +------------------+---------------------------+--------------------------------+-------------------------+
- | Operating System | Cache Path | Configuration Path | Data Path |
- +------------------+---------------------------+--------------------------------+-------------------------+
- | Linux | $HOME/.cache/helm | $HOME/.config/helm | $HOME/.local/share/helm |
- | macOS | $HOME/Library/Caches/helm | $HOME/Library/Preferences/helm | $HOME/Library/helm |
- | Windows | %TEMP%\helm | %APPDATA%\helm | %APPDATA%\helm |
- +------------------+---------------------------+--------------------------------+-------------------------+
- Usage:
- helm [command]
- Available Commands:
- completion Generate autocompletions script for the specified shell (bash or zsh)
- create create a new chart with the given name
- dependency manage a chart's dependencies
- env Helm client environment information
- get download extended information of a named release
- help Help about any command
- history fetch release history
- install install a chart
- lint examines a chart for possible issues
- list list releases
- package package a chart directory into a chart archive
- plugin install, list, or uninstall Helm plugins
- pull download a chart from a repository and (optionally) unpack it in local directory
- repo add, list, remove, update, and index chart repositories
- rollback roll back a release to a previous revision
- search search for a keyword in charts
- show show information of a chart
- status displays the status of the named release
- template locally render templates
- test run tests for a release
- uninstall uninstall a release
- upgrade upgrade a release
- verify verify that a chart at the given path has been signed and is valid
- version print the client version information
- Flags:
- --add-dir-header If true, adds the file directory to the header
- --alsologtostderr log to standard error as well as files
- --debug enable verbose output
- -h, --help help for helm
- --kube-context string name of the kubeconfig context to use
- --kubeconfig string path to the kubeconfig file
- --log-backtrace-at traceLocation when logging hits line file:N, emit a stack trace (default :0)
- --log-dir string If non-empty, write log files in this directory
- --log-file string If non-empty, use this log file
- --log-file-max-size uint Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
- --logtostderr log to standard error instead of files (default true)
- -n, --namespace string namespace scope for this request
- --registry-config string path to the registry config file (default "/root/.config/helm/registry.json")
- --repository-cache string path to the file containing cached repository indexes (default "/root/.cache/helm/repository")
- --repository-config string path to the file containing repository names and URLs (default "/root/.config/helm/repositories.yaml")
- --skip-headers If true, avoid header prefixes in the log messages
- --skip-log-headers If true, avoid headers when opening log files
- --stderrthreshold severity logs at or above this threshold go to stderr (default 2)
- -v, --v Level number for the log level verbosity
- --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
- Use "helm [command] --help" for more information about a command.
- [root@bs-k8s-master01 helm3]# source <(helm completion bash)
- [root@bs-k8s-master01 helm3]# echo "source <(helm completion bash)" >> ~/.bashrc
- [root@bs-k8s-master01 rbd]# helm repo add aliyun https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
- "aliyun" has been added to your repositories
- [root@bs-k8s-master01 helm3]# helm repo add stable https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
- "stable" has been added to your repositories
- [root@bs-k8s-master01 helm3]# helm repo add google https://kubernetes-charts.storage.googleapis.com
- "google" has been added to your repositories
- [root@bs-k8s-master01 helm3]# helm repo add jetstack https://charts.jetstack.io
- "jetstack" has been added to your repositories
- [root@bs-k8s-master01 helm3]# helm repo list
- NAME URL
- aliyun https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
- stable https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
- google https://kubernetes-charts.storage.googleapis.com
- jetstack https://charts.jetstack.io
- [root@bs-k8s-master01 helm3]# curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 > get_helm.sh
- % Total % Received % Xferd Average Speed Time Time Time Current
- Dload Upload Total Spent Left Speed
- 100 6794 100 6794 0 0 434 0 0:00:15 0:00:15 --:--:-- 761
- [root@bs-k8s-master01 helm3]# ./get_helm.sh
- Downloading https://get.helm.sh/helm-v3.1.0-linux-amd64.tar.gz
- Preparing to install helm into /usr/local/bin
- helm installed into /usr/local/bin/helm
- [root@bs-k8s-master01 helm3]# helm version
- version.BuildInfo{Version:"v3.1.0", GitCommit:"b29d20baf09943e134c2fa5e1e1cab3bf93315fa", GitTreeState:"clean", GoVersion:"go1.13.7"}
- [root@bs-k8s-master01 helm3]# helm repo update
- Hang tight while we grab the latest from your chart repositories...
- ...Successfully got an update from the "aliyun" chart repository
- Update Complete. ⎈ Happy Helming!⎈
- [root@bs-k8s-master01 helm3]# helm search repo nginx
- NAME CHART VERSION APP VERSION DESCRIPTION
- aliyun/nginx-ingress 0.9.5 0.10.2 An nginx Ingress controller that uses ConfigMap...
- aliyun/nginx-lego 0.3.1 Chart for nginx-ingress-controller and kube-lego
- google/nginx-ingress 1.30.3 0.28.0 An nginx Ingress controller that uses ConfigMap...
- google/nginx-ldapauth-proxy 0.1.3 1.13.5 nginx proxy with ldapauth
- google/nginx-lego 0.3.1 Chart for nginx-ingress-controller and kube-lego
- stable/nginx-ingress 0.9.5 0.10.2 An nginx Ingress controller that uses ConfigMap...
- stable/nginx-lego 0.3.1 Chart for nginx-ingress-controller and kube-lego
- aliyun/gcloud-endpoints 0.1.0 Develop, deploy, protect and monitor your APIs ...
- google/gcloud-endpoints 0.1.2 1 DEPRECATED Develop, deploy, protect and monitor...
- stable/gcloud-endpoints 0.1.0 Develop, deploy, protect and monitor your APIs ...
- [root@bs-k8s-master01 helm3]# helm repo remove stable
- "stable" has been removed from your repositories
- [root@bs-k8s-master01 helm3]# helm repo remove google
- "google" has been removed from your repositories
- [root@bs-k8s-master01 helm3]# helm repo remove jetstack
- "jetstack" has been removed from your repositories
- [root@bs-k8s-master01 helm3]# helm repo list
- NAME URL
- aliyun https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
- [root@bs-k8s-master01 helm3]# helm repo add harbor https://helm.goharbor.io
- "harbor" has been added to your repositories
- [root@bs-k8s-master01 harbor]# pwd
- /data/k8s/harbor
- [root@bs-k8s-master01 harbor]# ll
- total 48
- -rw-r--r-- 1 root root 701 Feb 16 19:26 ceph-harbor-pvc.yaml
- -rw-r--r-- 1 root root 863 Feb 16 19:18 ceph-harbor-secret.yaml
- -rw-r--r-- 1 root root 994 Feb 16 19:21 ceph-harbor-storageclass.yaml
- -rw-r--r-- 1 root root 35504 Feb 17 13:07 harbor-1.3.0.tgz
- drwxr-xr-x 2 root root 134 Feb 16 19:13 rbd
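The contents of the three ceph-harbor-*.yaml files are not shown in this post. As a rough orientation, a StorageClass for the external rbd-provisioner seen later in the pod list typically looks like the sketch below — every field value (mon address, pool, secret names, user IDs) is an assumption, not the author's actual file:

```yaml
# Hypothetical sketch of ceph-harbor-storageclass.yaml.
# All parameter values are placeholders — adapt to your cluster.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-harbor
provisioner: ceph.com/rbd            # served by the external rbd-provisioner pod
parameters:
  monitors: 20.0.0.206:6789          # placeholder mon address
  adminId: admin
  adminSecretName: ceph-harbor-admin-secret
  adminSecretNamespace: kube-system
  pool: harbor                       # the pool enabled with 'rbd' above
  userId: harbor
  userSecretName: ceph-harbor-secret
  imageFormat: "2"
  imageFeatures: layering
```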
- [root@bs-k8s-master01 harbor]# tar xf harbor-1.3.0.tgz
- [root@bs-k8s-master01 harbor]# cd harbor/
- [root@bs-k8s-master01 harbor]# ls
- cert Chart.yaml conf LICENSE README.md templates values.yaml
- [root@bs-k8s-master01 harbor]# cp values.yaml{,.bak}
- [root@bs-k8s-master01 harbor]# diff values.yaml{,.bak}
- 26c26
- < commonName: "zisefeizhu.harbor.org"
- ---
- > commonName: ""
- 29c29
- < core: zisefeizhu.harbor.org
- ---
- > core: core.harbor.domain
- 101c101
- < externalURL: https://zisefeizhu.harbor.org
- ---
- > externalURL: https://core.harbor.domain
- 123c123
- < storageClass: "ceph-harbor"
- ---
- > storageClass: ""
- 129c129
- < storageClass: "ceph-harbor"
- ---
- > storageClass: ""
- 135c135
- < storageClass: "ceph-harbor"
- ---
- > storageClass: ""
- 143c143
- < storageClass: "ceph-harbor"
- ---
- > storageClass: ""
- 151c151
- < storageClass: "ceph-harbor"
- ---
- > storageClass: ""
- 253c253
- < harborAdminPassword: "zisefeizhu"
- ---
- > harborAdminPassword: "Harbor12345"
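The diff above can also be produced non-interactively with sed instead of hand-editing. The sketch below demonstrates the same substitutions on a mock excerpt of the chart defaults (so it is self-contained); on the real chart you would run the sed line against values.yaml after taking a backup:

```shell
# Demonstrate the values.yaml overrides from the diff on a mock excerpt.
# The default values mirror the stock harbor-1.3.0 chart shown above.
cat > values-demo.yaml <<'EOF'
commonName: ""
core: core.harbor.domain
externalURL: https://core.harbor.domain
storageClass: ""
harborAdminPassword: "Harbor12345"
EOF

sed -i \
  -e 's#commonName: ""#commonName: "zisefeizhu.harbor.org"#' \
  -e 's#core.harbor.domain#zisefeizhu.harbor.org#g' \
  -e 's#storageClass: ""#storageClass: "ceph-harbor"#g' \
  -e 's#harborAdminPassword: "Harbor12345"#harborAdminPassword: "zisefeizhu"#g' \
  values-demo.yaml

grep 'zisefeizhu' values-demo.yaml   # every override now references the custom values
```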
- [root@bs-k8s-master01 nginx-ingress]# pwd
- /data/k8s/nginx-ingress
- [root@bs-k8s-master01 k8s]# cd nginx-ingress/
- [root@bs-k8s-master01 nginx-ingress]# helm pull aliyun/nginx-ingress
- [root@bs-k8s-master01 nginx-ingress]# tar xf nginx-ingress-0.9.5.tgz
- [root@bs-k8s-master01 nginx-ingress]# pwd
- /data/k8s/nginx-ingress/nginx-ingress
- [root@bs-k8s-master01 nginx-ingress]# pwd
- /data/k8s/nginx-ingress
- [root@bs-k8s-master01 nginx-ingress]# helm install nginx-ingress nginx-ingress
- Error: unable to build kubernetes objects from release manifest: unable to recognize "": no matches for kind "Deployment" in version "extensions/v1beta1"
- [root@bs-k8s-master01 nginx-ingress]# grep -irl "extensions/v1beta1" nginx-ingress | grep deploy
- nginx-ingress/templates/controller-deployment.yaml
- nginx-ingress/templates/default-backend-deployment.yaml
- [root@bs-k8s-master01 nginx-ingress]# grep -irl "extensions/v1beta1" nginx-ingress | grep deploy | xargs sed -i 's#extensions/v1beta1#apps/v1#g'
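The grep | xargs sed pipeline above is easy to rehearse on a throwaway file before touching the real chart; a self-contained sketch:

```shell
# Sanity-check the apiVersion migration on a mock template
# before running it against the actual chart directory.
mkdir -p demo/templates
cat > demo/templates/controller-deployment.yaml <<'EOF'
apiVersion: extensions/v1beta1
kind: Deployment
EOF

grep -irl "extensions/v1beta1" demo | grep deploy | xargs sed -i 's#extensions/v1beta1#apps/v1#g'
grep apiVersion demo/templates/controller-deployment.yaml   # → apiVersion: apps/v1
```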
- [root@bs-k8s-master01 nginx-ingress]# helm install nginx-ingress nginx-ingress
- Error: unable to build kubernetes objects from release manifest: error validating "": error validating data: ValidationError(Deployment.spec): missing required field "selector" in io.k8s.api.apps.v1.DeploymentSpec
- Since Kubernetes 1.16 moved Deployments to apps/v1, spec.selector is a required field, so just add a selector that matches the pod template labels.
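A sketch of the required addition — the label names here are illustrative, so match them to whatever labels the chart's pod template actually uses:

```yaml
# apps/v1 Deployments must declare spec.selector, and its matchLabels
# must match the pod template labels. Label values are illustrative.
apiVersion: apps/v1
kind: Deployment
spec:
  selector:
    matchLabels:
      app: nginx-ingress
      component: controller
  template:
    metadata:
      labels:
        app: nginx-ingress
        component: controller
```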
- [root@bs-k8s-master01 nginx]# helm install nginx-ingress nginx-ingress
- NAME: nginx-ingress
- LAST DEPLOYED: Mon Feb 17 14:12:27 2020
- NAMESPACE: default
- STATUS: deployed
- REVISION: 1
- TEST SUITE: None
- NOTES:
- The nginx-ingress controller has been installed.
- Get the application URL by running these commands:
- export HTTP_NODE_PORT=$(kubectl --namespace default get services -o jsonpath="{.spec.ports[0].nodePort}" nginx-ingress-controller)
- export HTTPS_NODE_PORT=$(kubectl --namespace default get services -o jsonpath="{.spec.ports[1].nodePort}" nginx-ingress-controller)
- export NODE_IP=$(kubectl --namespace default get nodes -o jsonpath="{.items[0].status.addresses[1].address}")
- echo "Visit http://$NODE_IP:$HTTP_NODE_PORT to access your application via HTTP."
- echo "Visit https://$NODE_IP:$HTTPS_NODE_PORT to access your application via HTTPS."
- An example Ingress that makes use of the controller:
- apiVersion: extensions/v1beta1
- kind: Ingress
- metadata:
- annotations:
- kubernetes.io/ingress.class: nginx
- name: example
- namespace: foo
- spec:
- rules:
- - host: www.example.com
- http:
- paths:
- - backend:
- serviceName: exampleService
- servicePort: 80
- path: /
- # This section is only required if TLS is to be enabled for the Ingress
- tls:
- - hosts:
- - www.example.com
- secretName: example-tls
- If TLS is enabled for the Ingress, a Secret containing the certificate and key must also be provided:
- apiVersion: v1
- kind: Secret
- metadata:
- name: example-tls
- namespace: foo
- data:
- tls.crt: <base64 encoded cert>
- tls.key: <base64 encoded key>
- type: kubernetes.io/tls
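The base64 values in that Secret come from the PEM files. In practice `kubectl create secret tls example-tls --cert=tls.crt --key=tls.key -n foo` does this in one step; the manual encoding can be sketched as below (the cert/key contents here are dummies for illustration):

```shell
# Build the TLS Secret manifest by base64-encoding the PEM files.
# Dummy file contents — substitute real certificate and key files.
printf 'dummy-cert' > tls.crt
printf 'dummy-key'  > tls.key
cat > example-tls.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: example-tls
  namespace: foo
data:
  tls.crt: $(base64 -w0 tls.crt)
  tls.key: $(base64 -w0 tls.key)
type: kubernetes.io/tls
EOF
grep 'tls.crt' example-tls.yaml
```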
- [root@bs-k8s-master01 nginx]# kubectl get pods
- NAME READY STATUS RESTARTS AGE
- nginx-ingress-controller-8fbb5974-l7dsx 1/1 Running 0 115s
- nginx-ingress-default-backend-744fdc79c4-xcvqp 1/1 Running 0 115s
- [root@bs-k8s-master01 nginx]# pwd
- /data/k8s/nginx
- [root@bs-k8s-master01 nginx]# ll
- total 12
- drwxr-xr-x 3 root root 119 Feb 17 13:32 nginx-ingress
- -rw-r--r-- 1 root root 10830 Feb 17 13:25 nginx-ingress-0.9.5.tgz
- [root@bs-k8s-master01 harbor]# helm install harbor -n harbor harbor
- NAME: harbor
- LAST DEPLOYED: Mon Feb 17 14:16:05 2020
- NAMESPACE: harbor
- STATUS: deployed
- REVISION: 1
- TEST SUITE: None
- NOTES:
- Please wait for several minutes for Harbor deployment to complete.
- Then you should be able to visit the Harbor portal at https://zisefeizhu.harbor.org.
- For more details, please visit https://github.com/goharbor/harbor.
- [root@bs-k8s-master01 harbor]# kubectl get pvc -n harbor
- NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
- data-harbor-harbor-redis-0 Bound pvc-4b2c0362-aca9-4fc2-b3e8-5fed5bf46b43 1Gi RWO ceph-harbor 66s
- database-data-harbor-harbor-database-0 Bound pvc-ce201f8c-0909-4f69-8eb9-aeaeb542de98 1Gi RWO ceph-harbor 66s
- harbor-harbor-chartmuseum Bound pvc-1ec866fa-413a-463d-bb04-a0376577ae69 5Gi RWO ceph-harbor 6m38s
- harbor-harbor-jobservice Bound pvc-03dd5393-fad1-471b-8384-b0a5f5403d90 1Gi RWO ceph-harbor 6m38s
- harbor-harbor-registry Bound pvc-b7268d13-e92a-4ab3-846a-26d14672e56c 5Gi RWO ceph-harbor 6m38s
- [root@bs-k8s-master01 harbor]# kubectl get pv
- NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
- pvc-03dd5393-fad1-471b-8384-b0a5f5403d90 1Gi RWO Retain Bound harbor/harbor-harbor-jobservice ceph-harbor <invalid>
- pvc-1ec866fa-413a-463d-bb04-a0376577ae69 5Gi RWO Retain Bound harbor/harbor-harbor-chartmuseum ceph-harbor <invalid>
- pvc-494a130d-018c-4be3-9b31-e951cc4367a5 20Gi RWO Retain Bound default/wp-pv-claim ceph-rbd 27h
- pvc-4b2c0362-aca9-4fc2-b3e8-5fed5bf46b43 1Gi RWO Retain Bound harbor/data-harbor-harbor-redis-0 ceph-harbor <invalid>
- pvc-8ffa3182-a2f6-47d9-a71d-ff8e8b379a16 1Gi RWO Retain Bound default/ceph-pvc ceph-rbd 29h
- pvc-ac7d3a09-123e-4614-886c-cded8822a078 20Gi RWO Retain Bound default/mysql-pv-claim ceph-rbd 27h
- pvc-b7268d13-e92a-4ab3-846a-26d14672e56c 5Gi RWO Retain Bound harbor/harbor-harbor-registry ceph-harbor <invalid>
- pvc-ce201f8c-0909-4f69-8eb9-aeaeb542de98 1Gi RWO Retain Bound harbor/database-data-harbor-harbor-database-0 ceph-harbor <invalid>
- [root@bs-k8s-master01 harbor]# kubectl get pods -n harbor -o wide
- NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
- harbor-harbor-chartmuseum-dcc6f779f-68tvn 1/1 Running 0 32m 10.209.208.21 bs-k8s-node03 <none> <none>
- harbor-harbor-clair-69789f6695-5zrf8 1/2 CrashLoopBackOff 9 32m 10.209.145.26 bs-k8s-node02 <none> <none>
- harbor-harbor-core-5675f84d5f-ddhj2 0/1 CrashLoopBackOff 8 32m 10.209.145.27 bs-k8s-node02 <none> <none>
- harbor-harbor-database-0 1/1 Running 1 32m 10.209.46.93 bs-k8s-node01 <none> <none>
- harbor-harbor-jobservice-74f469588d-m6w64 0/1 Running 3 32m 10.209.46.91 bs-k8s-node01 <none> <none>
- harbor-harbor-notary-server-fcbcfdf9c-zgjk8 0/1 CrashLoopBackOff 9 32m 10.209.208.19 bs-k8s-node03 <none> <none>
- harbor-harbor-notary-signer-9789894bd-8p67d 0/1 CrashLoopBackOff 9 32m 10.209.208.20 bs-k8s-node03 <none> <none>
- harbor-harbor-portal-56456988bb-6cb9j 1/1 Running 0 32m 10.209.208.18 bs-k8s-node03 <none> <none>
- harbor-harbor-redis-0 1/1 Running 0 32m 10.209.46.92 bs-k8s-node01 <none> <none>
- harbor-harbor-registry-6946847b6f-qdgfp 2/2 Running 0 32m 10.209.145.28 bs-k8s-node02 <none> <none>
- rbd-provisioner-75b85f85bd-d4b8d 1/1 Running 0 136m 10.209.145.25 bs-k8s-node02 <none> <none>
- The steps that follow need no further notes here.
- Caveats:
- 1. Do not run the install again once the PVCs have been created. Remember this.
- 2. The IP in your local hosts file must be a node IP where the nginx-ingress controller runs.
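Caveat 2 can be scripted. The sketch below writes to a demo file and uses a placeholder node IP; on a real client machine you would append to /etc/hosts (as root) with the actual ingress-controller node IP:

```shell
# Point the Harbor hostname at an ingress-controller node IP.
# NODE_IP is a placeholder; HOSTS_FILE is a demo stand-in for /etc/hosts.
NODE_IP=20.0.0.204
HOSTS_FILE=./hosts-demo
grep -q 'zisefeizhu.harbor.org' "$HOSTS_FILE" 2>/dev/null || \
  echo "$NODE_IP zisefeizhu.harbor.org" >> "$HOSTS_FILE"
cat "$HOSTS_FILE"
```

The grep guard keeps the snippet idempotent, so re-running it does not duplicate the entry.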