Installing Helm 3.1 and Deploying Harbor on Ceph RBD
[root@bs-k8s-ceph ~]# ceph -s
  cluster:
    id:     11880418-1a9a-4b55-a353-4b141e2199d8
    health: HEALTH_WARN
            Long heartbeat ping times on back interface seen, longest is 3884.944 msec
            Long heartbeat ping times on front interface seen, longest is 3888.368 msec
            application not enabled on 1 pool(s)
            clock skew detected on mon.bs-hk-hk02, mon.bs-k8s-ceph

  services:
    mon: 3 daemons, quorum bs-hk-hk01,bs-hk-hk02,bs-k8s-ceph
    mgr: bs-hk-hk01(active), standbys: bs-hk-hk02, bs-k8s-ceph
    osd: 6 osds: 6 up, 6 in

  data:
    pools:   3 pools, 320 pgs
    objects: 416 objects, 978 MiB
    usage:   8.6 GiB used, 105 GiB / 114 GiB avail
    pgs:     320 active+clean
[root@bs-k8s-ceph ~]# ceph osd pool application enable harbor rbd
enabled application 'rbd' on pool 'harbor'
[root@bs-k8s-ceph ~]# ceph -s
  cluster:
    id:     11880418-1a9a-4b55-a353-4b141e2199d8
    health: HEALTH_WARN
            Long heartbeat ping times on back interface seen, longest is 3870.142 msec
            Long heartbeat ping times on front interface seen, longest is 3873.410 msec
            clock skew detected on mon.bs-hk-hk02

  services:
    mon: 3 daemons, quorum bs-hk-hk01,bs-hk-hk02,bs-k8s-ceph
    mgr: bs-hk-hk01(active), standbys: bs-hk-hk02, bs-k8s-ceph
    osd: 6 osds: 6 up, 6 in

  data:
    pools:   3 pools, 320 pgs
    objects: 416 objects, 978 MiB
    usage:   8.6 GiB used, 105 GiB / 114 GiB avail
    pgs:     320 active+clean

# systemctl restart ceph.target    // give the clocks a moment to settle
[root@bs-k8s-ceph ~]# ceph -s
  cluster:
    id:     11880418-1a9a-4b55-a353-4b141e2199d8
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum bs-hk-hk01,bs-hk-hk02,bs-k8s-ceph
    mgr: bs-hk-hk01(active), standbys: bs-hk-hk02, bs-k8s-ceph
    osd: 6 osds: 6 up, 6 in

  data:
    pools:   3 pools, 320 pgs
    objects: 416 objects, 978 MiB
    usage:   8.6 GiB used, 105 GiB / 114 GiB avail
    pgs:     320 active+clean
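Restarting ceph.target only masks the clock skew; keeping all mon nodes synced over NTP fixes it for good. A minimal /etc/chrony.conf sketch — the NTP server below is an assumption, point it at whatever time source your network uses:

```
# /etc/chrony.conf on each mon node (sketch -- the server is an assumption)
server ntp.aliyun.com iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3     # step the clock on large offsets during the first 3 updates
rtcsync            # keep the hardware clock in sync as well
```

After `systemctl restart chronyd` on all three mon nodes, `ceph time-sync-status` should show the skew gone.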
[root@bs-k8s-master01 ~]# kubectl get nodes
The connection to the server 20.0.0.250:8443 was refused - did you specify the right host or port?
[root@bs-hk-hk01 ~]# systemctl start haproxy
[root@bs-k8s-master01 k8s]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
bs-k8s-master01 Ready master 7d10h v1.17.2
bs-k8s-master02 Ready master 7d10h v1.17.2
bs-k8s-master03 Ready master 7d10h v1.17.2
bs-k8s-node01 Ready <none> 7d10h v1.17.2
bs-k8s-node02 Ready <none> 7d10h v1.17.2
bs-k8s-node03 NotReady <none> 7d9h v1.17.2 // powered off to save CPU
Download the Helm binary from https://github.com/helm/helm/releases
[root@bs-k8s-master01 helm3]# pwd
/data/k8s/helm3
[root@bs-k8s-master01 helm3]# ll
total 11980
-rw-r--r-- 1 root root 12267464 Feb 17 2020 helm-v3.1.0-linux-amd64.tar.gz
[root@bs-k8s-master01 helm3]# tar xf helm-v3.1.0-linux-amd64.tar.gz
[root@bs-k8s-master01 helm3]# cp linux-amd64/helm /usr/local/bin/helm
[root@bs-k8s-master01 helm3]# helm version
version.BuildInfo{Version:"v3.1.0", GitCommit:"b29d20baf09943e134c2fa5e1e1cab3bf93315fa", GitTreeState:"clean", GoVersion:"go1.13.7"}
[root@bs-k8s-master01 helm3]# helm --help
The Kubernetes package manager

Common actions for Helm:

- helm search:    search for charts
- helm pull:      download a chart to your local directory to view
- helm install:   upload the chart to Kubernetes
- helm list:      list releases of charts

Environment variables:

+------------------+-----------------------------------------------------------------------------+
| Name | Description |
+------------------+-----------------------------------------------------------------------------+
| $XDG_CACHE_HOME | set an alternative location for storing cached files. |
| $XDG_CONFIG_HOME | set an alternative location for storing Helm configuration. |
| $XDG_DATA_HOME | set an alternative location for storing Helm data. |
| $HELM_DRIVER | set the backend storage driver. Values are: configmap, secret, memory |
| $HELM_NO_PLUGINS | disable plugins. Set HELM_NO_PLUGINS=1 to disable plugins. |
| $KUBECONFIG | set an alternative Kubernetes configuration file (default "~/.kube/config") |
+------------------+-----------------------------------------------------------------------------+

Helm stores configuration based on the XDG base directory specification, so

- cached files are stored in $XDG_CACHE_HOME/helm
- configuration is stored in $XDG_CONFIG_HOME/helm
- data is stored in $XDG_DATA_HOME/helm

By default, the default directories depend on the Operating System. The defaults are listed below:

+------------------+---------------------------+--------------------------------+-------------------------+
| Operating System | Cache Path | Configuration Path | Data Path |
+------------------+---------------------------+--------------------------------+-------------------------+
| Linux | $HOME/.cache/helm | $HOME/.config/helm | $HOME/.local/share/helm |
| macOS | $HOME/Library/Caches/helm | $HOME/Library/Preferences/helm | $HOME/Library/helm |
| Windows | %TEMP%\helm | %APPDATA%\helm | %APPDATA%\helm |
+------------------+---------------------------+--------------------------------+-------------------------+

Usage:
  helm [command]

Available Commands:
completion Generate autocompletions script for the specified shell (bash or zsh)
create create a new chart with the given name
dependency manage a chart's dependencies
env Helm client environment information
get download extended information of a named release
help Help about any command
history fetch release history
install install a chart
lint examines a chart for possible issues
list list releases
package package a chart directory into a chart archive
plugin install, list, or uninstall Helm plugins
pull download a chart from a repository and (optionally) unpack it in local directory
repo add, list, remove, update, and index chart repositories
rollback roll back a release to a previous revision
search search for a keyword in charts
show show information of a chart
status displays the status of the named release
template locally render templates
test run tests for a release
uninstall uninstall a release
upgrade upgrade a release
verify verify that a chart at the given path has been signed and is valid
version     print the client version information

Flags:
--add-dir-header If true, adds the file directory to the header
--alsologtostderr log to standard error as well as files
--debug enable verbose output
-h, --help help for helm
--kube-context string name of the kubeconfig context to use
--kubeconfig string path to the kubeconfig file
--log-backtrace-at traceLocation when logging hits line file:N, emit a stack trace (default :0)
--log-dir string If non-empty, write log files in this directory
--log-file string If non-empty, use this log file
--log-file-max-size uint Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
--logtostderr log to standard error instead of files (default true)
-n, --namespace string namespace scope for this request
--registry-config string path to the registry config file (default "/root/.config/helm/registry.json")
--repository-cache string path to the file containing cached repository indexes (default "/root/.cache/helm/repository")
--repository-config string path to the file containing repository names and URLs (default "/root/.config/helm/repositories.yaml")
--skip-headers If true, avoid header prefixes in the log messages
--skip-log-headers If true, avoid headers when opening log files
--stderrthreshold severity logs at or above this threshold go to stderr (default 2)
-v, --v Level number for the log level verbosity
--vmodule moduleSpec             comma-separated list of pattern=N settings for file-filtered logging

Use "helm [command] --help" for more information about a command.
[root@bs-k8s-master01 helm3]# source <(helm completion bash)
[root@bs-k8s-master01 helm3]# echo "source <(helm completion bash)" >> ~/.bashrc
[root@bs-k8s-master01 rbd]# helm repo add aliyun https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
"aliyun" has been added to your repositories
[root@bs-k8s-master01 helm3]# helm repo add stable https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
"stable" has been added to your repositories
[root@bs-k8s-master01 helm3]# helm repo add google https://kubernetes-charts.storage.googleapis.com
"google" has been added to your repositories
[root@bs-k8s-master01 helm3]# helm repo add jetstack https://charts.jetstack.io
"jetstack" has been added to your repositories
[root@bs-k8s-master01 helm3]# helm repo list
NAME URL
aliyun https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
stable https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
google https://kubernetes-charts.storage.googleapis.com
jetstack https://charts.jetstack.io
[root@bs-k8s-master01 helm3]# curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 > get_helm.sh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  6794  100  6794    0     0    434      0  0:00:15  0:00:15 --:--:--   761
[root@bs-k8s-master01 helm3]# ./get_helm.sh
Downloading https://get.helm.sh/helm-v3.1.0-linux-amd64.tar.gz
Preparing to install helm into /usr/local/bin
helm installed into /usr/local/bin/helm
[root@bs-k8s-master01 helm3]# helm version
version.BuildInfo{Version:"v3.1.0", GitCommit:"b29d20baf09943e134c2fa5e1e1cab3bf93315fa", GitTreeState:"clean", GoVersion:"go1.13.7"}
[root@bs-k8s-master01 helm3]# helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "aliyun" chart repository
Update Complete. ⎈ Happy Helming!⎈
[root@bs-k8s-master01 helm3]# helm search repo nginx
NAME CHART VERSION APP VERSION DESCRIPTION
aliyun/nginx-ingress 0.9.5 0.10.2 An nginx Ingress controller that uses ConfigMap...
aliyun/nginx-lego 0.3.1 Chart for nginx-ingress-controller and kube-lego
google/nginx-ingress 1.30.3 0.28.0 An nginx Ingress controller that uses ConfigMap...
google/nginx-ldapauth-proxy 0.1.3 1.13.5 nginx proxy with ldapauth
google/nginx-lego 0.3.1 Chart for nginx-ingress-controller and kube-lego
stable/nginx-ingress 0.9.5 0.10.2 An nginx Ingress controller that uses ConfigMap...
stable/nginx-lego 0.3.1 Chart for nginx-ingress-controller and kube-lego
aliyun/gcloud-endpoints 0.1.0 Develop, deploy, protect and monitor your APIs ...
google/gcloud-endpoints 0.1.2 1 DEPRECATED Develop, deploy, protect and monitor...
stable/gcloud-endpoints 0.1.0 Develop, deploy, protect and monitor your APIs ...
[root@bs-k8s-master01 helm3]# helm repo remove stable
"stable" has been removed from your repositories
[root@bs-k8s-master01 helm3]# helm repo remove google
"google" has been removed from your repositories
[root@bs-k8s-master01 helm3]# helm repo remove jetstack
"jetstack" has been removed from your repositories
[root@bs-k8s-master01 helm3]# helm repo list
NAME URL
aliyun https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
[root@bs-k8s-master01 helm3]# helm repo add harbor https://helm.goharbor.io
"harbor" has been added to your repositories
[root@bs-k8s-master01 harbor]# pwd
/data/k8s/harbor
[root@bs-k8s-master01 harbor]# ll
total 48
-rw-r--r-- 1 root root   701 Feb 16 19:26 ceph-harbor-pvc.yaml
-rw-r--r-- 1 root root   863 Feb 16 19:18 ceph-harbor-secret.yaml
-rw-r--r-- 1 root root   994 Feb 16 19:21 ceph-harbor-storageclass.yaml
-rw-r--r-- 1 root root 35504 Feb 17 13:07 harbor-1.3.0.tgz
drwxr-xr-x 2 root root   134 Feb 16 19:13 rbd
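The contents of ceph-harbor-secret.yaml and ceph-harbor-storageclass.yaml are not shown here. For orientation, a StorageClass for the external rbd-provisioner (the rbd-provisioner pod appears in the pod listing later) typically looks like the sketch below — the monitor address, user IDs and secret names are assumptions and must match your own cluster and secret files:

```yaml
# ceph-harbor-storageclass.yaml (sketch -- monitors, IDs and secret names
# are assumptions)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-harbor
provisioner: ceph.com/rbd              # external rbd-provisioner
parameters:
  monitors: <mon-ip>:6789              # comma-separated ceph mon addresses
  adminId: admin
  adminSecretName: ceph-harbor-admin-secret
  adminSecretNamespace: kube-system
  pool: harbor                         # the pool given the rbd tag earlier
  userId: harbor
  userSecretName: ceph-harbor-user-secret
  imageFormat: "2"
  imageFeatures: layering
reclaimPolicy: Retain                  # matches the Retain policy in the PV list below
```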
[root@bs-k8s-master01 harbor]# tar xf harbor-1.3.0.tgz
[root@bs-k8s-master01 harbor]# cd harbor/
[root@bs-k8s-master01 harbor]# ls
cert Chart.yaml conf LICENSE README.md templates values.yaml
[root@bs-k8s-master01 harbor]# cp values.yaml{,.bak}
[root@bs-k8s-master01 harbor]# diff values.yaml{,.bak}
26c26
< commonName: "zisefeizhu.harbor.org"
---
> commonName: ""
29c29
< core: zisefeizhu.harbor.org
---
> core: core.harbor.domain
101c101
< externalURL: https://zisefeizhu.harbor.org
---
> externalURL: https://core.harbor.domain
123c123
< storageClass: "ceph-harbor"
---
> storageClass: ""
129c129
< storageClass: "ceph-harbor"
---
> storageClass: ""
135c135
< storageClass: "ceph-harbor"
---
> storageClass: ""
143c143
< storageClass: "ceph-harbor"
---
> storageClass: ""
151c151
< storageClass: "ceph-harbor"
---
> storageClass: ""
253c253
< harborAdminPassword: "zisefeizhu"
---
> harborAdminPassword: "Harbor12345"
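The diff above is scattered across line numbers; for clarity, the same overrides shown in the context of the chart's values.yaml keys (paths as in the harbor 1.3.0 chart — verify against your own copy):

```yaml
# values.yaml overrides in context (harbor chart 1.3.0; check key paths
# against your chart version)
expose:
  tls:
    commonName: "zisefeizhu.harbor.org"
  ingress:
    hosts:
      core: zisefeizhu.harbor.org
externalURL: https://zisefeizhu.harbor.org
persistence:
  persistentVolumeClaim:
    registry:
      storageClass: "ceph-harbor"
    chartmuseum:
      storageClass: "ceph-harbor"
    jobservice:
      storageClass: "ceph-harbor"
    database:
      storageClass: "ceph-harbor"
    redis:
      storageClass: "ceph-harbor"
harborAdminPassword: "zisefeizhu"
```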
[root@bs-k8s-master01 nginx-ingress]# pwd
/data/k8s/nginx-ingress
[root@bs-k8s-master01 k8s]# cd nginx-ingress/
[root@bs-k8s-master01 nginx-ingress]# helm pull aliyun/nginx-ingress
[root@bs-k8s-master01 nginx-ingress]# tar xf nginx-ingress-0.9.5.tgz
[root@bs-k8s-master01 nginx-ingress]# pwd
/data/k8s/nginx-ingress/nginx-ingress
[root@bs-k8s-master01 nginx-ingress]# pwd
/data/k8s/nginx-ingress
[root@bs-k8s-master01 nginx-ingress]# helm install nginx-ingress nginx-ingress
Error: unable to build kubernetes objects from release manifest: unable to recognize "": no matches for kind "Deployment" in version "extensions/v1beta1"
[root@bs-k8s-master01 nginx-ingress]# grep -irl "extensions/v1beta1" nginx-ingress | grep deploy
nginx-ingress/templates/controller-deployment.yaml
nginx-ingress/templates/default-backend-deployment.yaml
[root@bs-k8s-master01 nginx-ingress]# grep -irl "extensions/v1beta1" nginx-ingress | grep deploy | xargs sed -i 's#extensions/v1beta1#apps/v1#g'
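The sed one-liner rewrites every matching template in place. A self-contained sketch of the same substitution on a throwaway file (the file name and content are made up for the demo):

```shell
# demo of the apiVersion rewrite on a scratch file
printf 'apiVersion: extensions/v1beta1\nkind: Deployment\n' > /tmp/demo.yaml
sed -i 's#extensions/v1beta1#apps/v1#g' /tmp/demo.yaml
grep '^apiVersion' /tmp/demo.yaml    # apiVersion: apps/v1
```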
[root@bs-k8s-master01 nginx-ingress]# helm install nginx-ingress nginx-ingress
Error: unable to build kubernetes objects from release manifest: error validating "": error validating data: ValidationError(Deployment.spec): missing required field "selector" in io.k8s.api.apps.v1.DeploymentSpec
Since the Kubernetes 1.16 API removals, apps/v1 requires an explicit selector in Deployment.spec, so add one to each deployment template and retry.

[root@bs-k8s-master01 nginx]# helm install nginx-ingress nginx-ingress
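The shape of that fix: in apps/v1 the selector is mandatory and must match the pod template labels exactly. A sketch against the chart's controller deployment — the label keys and template helpers below are assumptions, copy whatever labels your chart's pod template actually uses:

```yaml
# templates/controller-deployment.yaml (sketch -- labels are assumptions)
apiVersion: apps/v1
kind: Deployment
spec:
  selector:
    matchLabels:             # required by apps/v1
      app: {{ template "nginx-ingress.name" . }}
      component: controller
  template:
    metadata:
      labels:                # must be matched exactly by the selector above
        app: {{ template "nginx-ingress.name" . }}
        component: controller
```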
NAME: nginx-ingress
LAST DEPLOYED: Mon Feb 17 14:12:27 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The nginx-ingress controller has been installed.
Get the application URL by running these commands:
export HTTP_NODE_PORT=$(kubectl --namespace default get services -o jsonpath="{.spec.ports[0].nodePort}" nginx-ingress-controller)
export HTTPS_NODE_PORT=$(kubectl --namespace default get services -o jsonpath="{.spec.ports[1].nodePort}" nginx-ingress-controller)
export NODE_IP=$(kubectl --namespace default get nodes -o jsonpath="{.items[0].status.addresses[1].address}")

echo "Visit http://$NODE_IP:$HTTP_NODE_PORT to access your application via HTTP."
echo "Visit https://$NODE_IP:$HTTPS_NODE_PORT to access your application via HTTPS."

An example Ingress that makes use of the controller:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: nginx
name: example
namespace: foo
spec:
rules:
- host: www.example.com
http:
paths:
- backend:
serviceName: exampleService
servicePort: 80
path: /
# This section is only required if TLS is to be enabled for the Ingress
tls:
- hosts:
- www.example.com
secretName: example-tls

If TLS is enabled for the Ingress, a Secret containing the certificate and key must also be provided:

apiVersion: v1
kind: Secret
metadata:
name: example-tls
namespace: foo
data:
tls.crt: <base64 encoded cert>
tls.key: <base64 encoded key>
type: kubernetes.io/tls
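Those data fields hold single-line base64 of the PEM files. `kubectl create secret tls example-tls --cert=tls.crt --key=tls.key -n foo` does the encoding for you; doing it by hand looks like this (the PEM content here is a stand-in, not a real certificate):

```shell
# encode a (stand-in) PEM file the way Secret data expects: one base64 line
printf 'dummy-pem-data' > /tmp/tls.crt
base64 -w0 < /tmp/tls.crt > /tmp/tls.crt.b64   # goes into the tls.crt field
base64 -d < /tmp/tls.crt.b64                   # round-trips to the original bytes
```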
[root@bs-k8s-master01 nginx]# kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-ingress-controller-8fbb5974-l7dsx 1/1 Running 0 115s
nginx-ingress-default-backend-744fdc79c4-xcvqp 1/1 Running 0 115s
[root@bs-k8s-master01 nginx]# pwd
/data/k8s/nginx
[root@bs-k8s-master01 nginx]# ll
total 12
drwxr-xr-x 3 root root   119 Feb 17 13:32 nginx-ingress
-rw-r--r-- 1 root root 10830 Feb 17 13:25 nginx-ingress-0.9.5.tgz
[root@bs-k8s-master01 harbor]# helm install harbor -n harbor harbor
NAME: harbor
LAST DEPLOYED: Mon Feb 17 14:16:05 2020
NAMESPACE: harbor
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Please wait for several minutes for Harbor deployment to complete.
Then you should be able to visit the Harbor portal at https://zisefeizhu.harbor.org.
For more details, please visit https://github.com/goharbor/harbor.
[root@bs-k8s-master01 harbor]# kubectl get pvc -n harbor
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
data-harbor-harbor-redis-0 Bound pvc-4b2c0362-aca9-4fc2-b3e8-5fed5bf46b43 1Gi RWO ceph-harbor 66s
database-data-harbor-harbor-database-0 Bound pvc-ce201f8c-0909-4f69-8eb9-aeaeb542de98 1Gi RWO ceph-harbor 66s
harbor-harbor-chartmuseum Bound pvc-1ec866fa-413a-463d-bb04-a0376577ae69 5Gi RWO ceph-harbor 6m38s
harbor-harbor-jobservice Bound pvc-03dd5393-fad1-471b-8384-b0a5f5403d90 1Gi RWO ceph-harbor 6m38s
harbor-harbor-registry Bound pvc-b7268d13-e92a-4ab3-846a-26d14672e56c 5Gi RWO ceph-harbor 6m38s
[root@bs-k8s-master01 harbor]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-03dd5393-fad1-471b-8384-b0a5f5403d90 1Gi RWO Retain Bound harbor/harbor-harbor-jobservice ceph-harbor <invalid>
pvc-1ec866fa-413a-463d-bb04-a0376577ae69 5Gi RWO Retain Bound harbor/harbor-harbor-chartmuseum ceph-harbor <invalid>
pvc-494a130d-018c-4be3-9b31-e951cc4367a5 20Gi RWO Retain Bound default/wp-pv-claim ceph-rbd 27h
pvc-4b2c0362-aca9-4fc2-b3e8-5fed5bf46b43 1Gi RWO Retain Bound harbor/data-harbor-harbor-redis-0 ceph-harbor <invalid>
pvc-8ffa3182-a2f6-47d9-a71d-ff8e8b379a16 1Gi RWO Retain Bound default/ceph-pvc ceph-rbd 29h
pvc-ac7d3a09-123e-4614-886c-cded8822a078 20Gi RWO Retain Bound default/mysql-pv-claim ceph-rbd 27h
pvc-b7268d13-e92a-4ab3-846a-26d14672e56c 5Gi RWO Retain Bound harbor/harbor-harbor-registry ceph-harbor <invalid>
pvc-ce201f8c-0909-4f69-8eb9-aeaeb542de98 1Gi RWO Retain Bound harbor/database-data-harbor-harbor-database-0 ceph-harbor <invalid>
[root@bs-k8s-master01 harbor]# kubectl get pods -n harbor -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
harbor-harbor-chartmuseum-dcc6f779f-68tvn 1/1 Running 0 32m 10.209.208.21 bs-k8s-node03 <none> <none>
harbor-harbor-clair-69789f6695-5zrf8 1/2 CrashLoopBackOff 9 32m 10.209.145.26 bs-k8s-node02 <none> <none>
harbor-harbor-core-5675f84d5f-ddhj2 0/1 CrashLoopBackOff 8 32m 10.209.145.27 bs-k8s-node02 <none> <none>
harbor-harbor-database-0 1/1 Running 1 32m 10.209.46.93 bs-k8s-node01 <none> <none>
harbor-harbor-jobservice-74f469588d-m6w64 0/1 Running 3 32m 10.209.46.91 bs-k8s-node01 <none> <none>
harbor-harbor-notary-server-fcbcfdf9c-zgjk8 0/1 CrashLoopBackOff 9 32m 10.209.208.19 bs-k8s-node03 <none> <none>
harbor-harbor-notary-signer-9789894bd-8p67d 0/1 CrashLoopBackOff 9 32m 10.209.208.20 bs-k8s-node03 <none> <none>
harbor-harbor-portal-56456988bb-6cb9j 1/1 Running 0 32m 10.209.208.18 bs-k8s-node03 <none> <none>
harbor-harbor-redis-0 1/1 Running 0 32m 10.209.46.92 bs-k8s-node01 <none> <none>
harbor-harbor-registry-6946847b6f-qdgfp 2/2 Running 0 32m 10.209.145.28 bs-k8s-node02 <none> <none>
rbd-provisioner-75b85f85bd-d4b8d 1/1 Running 0 136m 10.209.145.25 bs-k8s-node02 <none> <none>

The remaining steps are routine and not worth recording here.
Notes:
1. Do not run the install again once the PVCs have been created. Keep this in mind.
2. The IP in the local hosts file must be the node IP where the nginx-ingress controller runs.
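Concretely, the hosts note means the client machine must resolve the Harbor hostname to a node that runs the ingress controller — per the pod listing above, bs-k8s-node02 or bs-k8s-node03. The placeholder below is an assumption; substitute the real node address:

```
# /etc/hosts on the client machine
<node-ip>  zisefeizhu.harbor.org
```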