[root@bs-k8s-ceph ~]# ceph -s
  cluster:
    id:     11880418-1a9a-4b55-a353-4b141e2199d8
    health: HEALTH_WARN
            Long heartbeat ping times on back interface seen, longest is 3884.944 msec
            Long heartbeat ping times on front interface seen, longest is 3888.368 msec
            application not enabled on 1 pool(s)
            clock skew detected on mon.bs-hk-hk02, mon.bs-k8s-ceph

  services:
    mon: 3 daemons, quorum bs-hk-hk01,bs-hk-hk02,bs-k8s-ceph
    mgr: bs-hk-hk01(active), standbys: bs-hk-hk02, bs-k8s-ceph
    osd: 6 osds: 6 up, 6 in

  data:
    pools:   3 pools, 320 pgs
    objects: 416 objects, 978 MiB
    usage:   8.6 GiB used, 105 GiB / 114 GiB avail
    pgs:     320 active+clean

[root@bs-k8s-ceph ~]# ceph osd pool application enable harbor rbd
enabled application 'rbd' on pool 'harbor'
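A pool's application tag can be verified afterwards; a quick check (output shape assumed from the Luminous-era command):

  [root@bs-k8s-ceph ~]# ceph osd pool application get harbor
  {
      "rbd": {}
  }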
[root@bs-k8s-ceph ~]# ceph -s
  cluster:
    id:     11880418-1a9a-4b55-a353-4b141e2199d8
    health: HEALTH_WARN
            Long heartbeat ping times on back interface seen, longest is 3870.142 msec
            Long heartbeat ping times on front interface seen, longest is 3873.410 msec
            clock skew detected on mon.bs-hk-hk02

  services:
    mon: 3 daemons, quorum bs-hk-hk01,bs-hk-hk02,bs-k8s-ceph
    mgr: bs-hk-hk01(active), standbys: bs-hk-hk02, bs-k8s-ceph
    osd: 6 osds: 6 up, 6 in

  data:
    pools:   3 pools, 320 pgs
    objects: 416 objects, 978 MiB
    usage:   8.6 GiB used, 105 GiB / 114 GiB avail
    pgs:     320 active+clean

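The clock-skew warning only clears once the monitor clocks actually agree; restarting ceph.target just forces a fresh health check. A minimal sketch of syncing the clocks first, assuming chrony on CentOS 7, run on every MON node:

  yum install -y chrony
  systemctl enable chronyd && systemctl start chronyd
  chronyc makestep    # step the clock immediately instead of slewing slowly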
# systemctl restart ceph.target    # give the clocks a moment to settle, then restart the Ceph daemons
[root@bs-k8s-ceph ~]# ceph -s
  cluster:
    id:     11880418-1a9a-4b55-a353-4b141e2199d8
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum bs-hk-hk01,bs-hk-hk02,bs-k8s-ceph
    mgr: bs-hk-hk01(active), standbys: bs-hk-hk02, bs-k8s-ceph
    osd: 6 osds: 6 up, 6 in

  data:
    pools:   3 pools, 320 pgs
    objects: 416 objects, 978 MiB
    usage:   8.6 GiB used, 105 GiB / 114 GiB avail
    pgs:     320 active+clean
[root@bs-k8s-master01 ~]# kubectl get nodes
The connection to the server 20.0.0.250:8443 was refused - did you specify the right host or port?
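Here 20.0.0.250:8443 is the haproxy VIP in front of the kube-apiservers, so a refused connection points at the load balancer rather than the cluster itself. A quick hedged check on the proxy node:

  [root@bs-hk-hk01 ~]# systemctl is-active haproxy    # expected to report inactive at this point
  [root@bs-hk-hk01 ~]# ss -lntp | grep 8443           # nothing listens on 8443 until haproxy is up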
[root@bs-hk-hk01 ~]# systemctl start haproxy
[root@bs-k8s-master01 k8s]# kubectl get nodes
NAME              STATUS     ROLES    AGE     VERSION
bs-k8s-master01   Ready      master   7d10h   v1.17.2
bs-k8s-master02   Ready      master   7d10h   v1.17.2
bs-k8s-master03   Ready      master   7d10h   v1.17.2
bs-k8s-node01     Ready      <none>   7d10h   v1.17.2
bs-k8s-node02     Ready      <none>   7d10h   v1.17.2
bs-k8s-node03     NotReady   <none>   7d9h    v1.17.2    # powered off to save CPU
https://github.com/helm/helm/releases
[root@bs-k8s-master01 helm3]# pwd
/data/k8s/helm3
[root@bs-k8s-master01 helm3]# ll
total 11980
-rw-r--r-- 1 root root 12267464 Feb 17 2020 helm-v3.1.0-linux-amd64.tar.gz
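The archive has to be unpacked before the binary can be copied; the extraction step is implied in the transcript:

  [root@bs-k8s-master01 helm3]# tar -zxf helm-v3.1.0-linux-amd64.tar.gz    # creates ./linux-amd64/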
[root@bs-k8s-master01 helm3]# cp linux-amd64/helm /usr/local/bin/helm
[root@bs-k8s-master01 helm3]# helm version
version.BuildInfo{Version:"v3.1.0", GitCommit:"b29d20baf09943e134c2fa5e1e1cab3bf93315fa", GitTreeState:"clean", GoVersion:"go1.13.7"}
[root@bs-k8s-master01 helm3]# helm --help
The Kubernetes package manager

Common actions for Helm:

- helm search:  search for charts
- helm pull:    download a chart to your local directory to view
- helm install: upload the chart to Kubernetes
- helm list:    list releases of charts

Environment variables:

+------------------+-----------------------------------------------------------------------------+
| Name             | Description                                                                 |
+------------------+-----------------------------------------------------------------------------+
| $XDG_CACHE_HOME  | set an alternative location for storing cached files.                       |
| $XDG_CONFIG_HOME | set an alternative location for storing Helm configuration.                 |
| $XDG_DATA_HOME   | set an alternative location for storing Helm data.                          |
| $HELM_DRIVER     | set the backend storage driver. Values are: configmap, secret, memory       |
| $HELM_NO_PLUGINS | disable plugins. Set HELM_NO_PLUGINS=1 to disable plugins.                  |
| $KUBECONFIG      | set an alternative Kubernetes configuration file (default "~/.kube/config") |
+------------------+-----------------------------------------------------------------------------+

Helm stores configuration based on the XDG base directory specification, so

- cached files are stored in $XDG_CACHE_HOME/helm
- configuration is stored in $XDG_CONFIG_HOME/helm
- data is stored in $XDG_DATA_HOME/helm

By default, the default directories depend on the Operating System. The defaults are listed below:

+------------------+---------------------------+--------------------------------+-------------------------+
| Operating System | Cache Path                | Configuration Path             | Data Path               |
+------------------+---------------------------+--------------------------------+-------------------------+
| Linux            | $HOME/.cache/helm         | $HOME/.config/helm             | $HOME/.local/share/helm |
| macOS            | $HOME/Library/Caches/helm | $HOME/Library/Preferences/helm | $HOME/Library/helm      |
| Windows          | %TEMP%\helm               | %APPDATA%\helm                 | %APPDATA%\helm          |
+------------------+---------------------------+--------------------------------+-------------------------+

Usage:
  helm [command]

Available Commands:
  completion  Generate autocompletions script for the specified shell (bash or zsh)
  create      create a new chart with the given name
  dependency  manage a chart's dependencies
  env         Helm client environment information
  get         download extended information of a named release
  help        Help about any command
  history     fetch release history
  install     install a chart
  lint        examines a chart for possible issues
  list        list releases
  package     package a chart directory into a chart archive
  plugin      install, list, or uninstall Helm plugins
  pull        download a chart from a repository and (optionally) unpack it in local directory
  repo        add, list, remove, update, and index chart repositories
  rollback    roll back a release to a previous revision
  search      search for a keyword in charts
  show        show information of a chart
  status      displays the status of the named release
  template    locally render templates
  test        run tests for a release
  uninstall   uninstall a release
  upgrade     upgrade a release
  verify      verify that a chart at the given path has been signed and is valid
  version     print the client version information

Flags:
      --add-dir-header                   If true, adds the file directory to the header
      --alsologtostderr                  log to standard error as well as files
      --debug                            enable verbose output
  -h, --help                             help for helm
      --kube-context string              name of the kubeconfig context to use
      --kubeconfig string                path to the kubeconfig file
      --log-backtrace-at traceLocation   when logging hits line file:N, emit a stack trace (default :0)
      --log-dir string                   If non-empty, write log files in this directory
      --log-file string                  If non-empty, use this log file
      --log-file-max-size uint           Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
      --logtostderr                      log to standard error instead of files (default true)
  -n, --namespace string                 namespace scope for this request
      --registry-config string           path to the registry config file (default "/root/.config/helm/registry.json")
      --repository-cache string          path to the file containing cached repository indexes (default "/root/.cache/helm/repository")
      --repository-config string         path to the file containing repository names and URLs (default "/root/.config/helm/repositories.yaml")
      --skip-headers                     If true, avoid header prefixes in the log messages
      --skip-log-headers                 If true, avoid headers when opening log files
      --stderrthreshold severity         logs at or above this threshold go to stderr (default 2)
  -v, --v Level                          number for the log level verbosity
      --vmodule moduleSpec               comma-separated list of pattern=N settings for file-filtered logging

Use "helm [command] --help" for more information about a command.
[root@bs-k8s-master01 helm3]# source <(helm completion bash)
[root@bs-k8s-master01 helm3]# echo "source <(helm completion bash)" >> ~/.bashrc
[root@bs-k8s-master01 rbd]# helm repo add aliyun https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
"aliyun" has been added to your repositories
[root@bs-k8s-master01 helm3]# helm repo add stable https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
"stable" has been added to your repositories
[root@bs-k8s-master01 helm3]# helm repo add google https://kubernetes-charts.storage.googleapis.com
"google" has been added to your repositories
[root@bs-k8s-master01 helm3]# helm repo add jetstack https://charts.jetstack.io
"jetstack" has been added to your repositories
[root@bs-k8s-master01 helm3]# helm repo list
NAME      URL
aliyun    https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
stable    https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
google    https://kubernetes-charts.storage.googleapis.com
jetstack  https://charts.jetstack.io

[root@bs-k8s-master01 helm3]# curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 > get_helm.sh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  6794  100  6794    0     0    434      0  0:00:15  0:00:15 --:--:--   761

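The downloaded script is not executable as-is; the chmod step is implied here:

  [root@bs-k8s-master01 helm3]# chmod +x get_helm.sh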
[root@bs-k8s-master01 helm3]# ./get_helm.sh
Downloading https://get.helm.sh/helm-v3.1.0-linux-amd64.tar.gz
Preparing to install helm into /usr/local/bin
helm installed into /usr/local/bin/helm
[root@bs-k8s-master01 helm3]# helm version
version.BuildInfo{Version:"v3.1.0", GitCommit:"b29d20baf09943e134c2fa5e1e1cab3bf93315fa", GitTreeState:"clean", GoVersion:"go1.13.7"}

[root@bs-k8s-master01 helm3]# helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "aliyun" chart repository
Update Complete. ⎈ Happy Helming!⎈
[root@bs-k8s-master01 helm3]# helm search repo nginx
NAME                         CHART VERSION   APP VERSION   DESCRIPTION
aliyun/nginx-ingress         0.9.5           0.10.2        An nginx Ingress controller that uses ConfigMap...
aliyun/nginx-lego            0.3.1                         Chart for nginx-ingress-controller and kube-lego
google/nginx-ingress         1.30.3          0.28.0        An nginx Ingress controller that uses ConfigMap...
google/nginx-ldapauth-proxy  0.1.3           1.13.5        nginx proxy with ldapauth
google/nginx-lego            0.3.1                         Chart for nginx-ingress-controller and kube-lego
stable/nginx-ingress         0.9.5           0.10.2        An nginx Ingress controller that uses ConfigMap...
stable/nginx-lego            0.3.1                         Chart for nginx-ingress-controller and kube-lego
aliyun/gcloud-endpoints      0.1.0                         Develop, deploy, protect and monitor your APIs ...
google/gcloud-endpoints      0.1.2           1             DEPRECATED Develop, deploy, protect and monitor...
stable/gcloud-endpoints      0.1.0                         Develop, deploy, protect and monitor your APIs ...
[root@bs-k8s-master01 helm3]# helm repo remove stable
"stable" has been removed from your repositories
[root@bs-k8s-master01 helm3]# helm repo remove google
"google" has been removed from your repositories
[root@bs-k8s-master01 helm3]# helm repo remove jetstack
"jetstack" has been removed from your repositories
[root@bs-k8s-master01 helm3]# helm repo list
NAME    URL
aliyun  https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
[root@bs-k8s-master01 helm3]# helm repo add harbor https://helm.goharbor.io
"harbor" has been added to your repositories
[root@bs-k8s-master01 harbor]# pwd
/data/k8s/harbor
[root@bs-k8s-master01 harbor]# ll
total 48
-rw-r--r-- 1 root root   701 Feb 16 19:26 ceph-harbor-pvc.yaml
-rw-r--r-- 1 root root   863 Feb 16 19:18 ceph-harbor-secret.yaml
-rw-r--r-- 1 root root   994 Feb 16 19:21 ceph-harbor-storageclass.yaml
-rw-r--r-- 1 root root 35504 Feb 17 13:07 harbor-1.3.0.tgz
drwxr-xr-x 2 root root   134 Feb 16 19:13 rbd
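The ceph-harbor-*.yaml files above supply the secret, the StorageClass, and a test PVC behind the ceph-harbor class the chart will consume; their contents are not reproduced in this post. For orientation only, an illustrative StorageClass for the external rbd-provisioner — the monitor address, pool, and secret names below are assumptions, not the author's actual files:

  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: ceph-harbor
  provisioner: ceph.com/rbd                      # external rbd-provisioner
  parameters:
    monitors: 20.0.0.206:6789                    # hypothetical MON address
    pool: harbor                                 # the pool tagged 'rbd' earlier
    adminId: admin
    adminSecretName: ceph-harbor-admin-secret    # hypothetical secret names
    adminSecretNamespace: kube-system
    userId: harbor
    userSecretName: ceph-harbor-secret
    imageFormat: "2"
    imageFeatures: layering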
[root@bs-k8s-master01 harbor]# tar xf harbor-1.3.0.tgz
[root@bs-k8s-master01 harbor]# cd harbor/
[root@bs-k8s-master01 harbor]# ls
cert  Chart.yaml  conf  LICENSE  README.md  templates  values.yaml
[root@bs-k8s-master01 harbor]# cp values.yaml{,.bak}
[root@bs-k8s-master01 harbor]# diff values.yaml{,.bak}
26c26
< commonName: "zisefeizhu.harbor.org"
---
> commonName: ""
29c29
< core: zisefeizhu.harbor.org
---
> core: core.harbor.domain
101c101
< externalURL: https://zisefeizhu.harbor.org
---
> externalURL: https://core.harbor.domain
123c123
< storageClass: "ceph-harbor"
---
> storageClass: ""
129c129
< storageClass: "ceph-harbor"
---
> storageClass: ""
135c135
< storageClass: "ceph-harbor"
---
> storageClass: ""
143c143
< storageClass: "ceph-harbor"
---
> storageClass: ""
151c151
< storageClass: "ceph-harbor"
---
> storageClass: ""
253c253
< harborAdminPassword: "zisefeizhu"
---
> harborAdminPassword: "Harbor12345"
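The same overrides could instead be passed at install time without editing values.yaml; a sketch against the value paths of the harbor 1.3.0 chart (paths assumed from its values.yaml):

  helm install harbor harbor/harbor -n harbor \
    --set expose.tls.commonName=zisefeizhu.harbor.org \
    --set expose.ingress.hosts.core=zisefeizhu.harbor.org \
    --set externalURL=https://zisefeizhu.harbor.org \
    --set persistence.persistentVolumeClaim.registry.storageClass=ceph-harbor \
    --set harborAdminPassword=zisefeizhu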
[root@bs-k8s-master01 k8s]# cd nginx-ingress/
[root@bs-k8s-master01 nginx-ingress]# pwd
/data/k8s/nginx-ingress
[root@bs-k8s-master01 nginx-ingress]# helm pull aliyun/nginx-ingress
[root@bs-k8s-master01 nginx-ingress]# tar xf nginx-ingress-0.9.5.tgz
[root@bs-k8s-master01 nginx-ingress]# helm install nginx-ingress nginx-ingress
Error: unable to build kubernetes objects from release manifest: unable to recognize "": no matches for kind "Deployment" in version "extensions/v1beta1"
[root@bs-k8s-master01 nginx-ingress]# grep -irl "extensions/v1beta1" nginx-ingress | grep deploy
nginx-ingress/templates/controller-deployment.yaml
nginx-ingress/templates/default-backend-deployment.yaml
[root@bs-k8s-master01 nginx-ingress]# grep -irl "extensions/v1beta1" nginx-ingress | grep deploy | xargs sed -i 's#extensions/v1beta1#apps/v1#g'
[root@bs-k8s-master01 nginx-ingress]# helm install nginx-ingress nginx-ingress
Error: unable to build kubernetes objects from release manifest: error validating "": error validating data: ValidationError(Deployment.spec): missing required field "selector" in io.k8s.api.apps.v1.DeploymentSpec
Kubernetes 1.16 moved Deployment to apps/v1, where Deployment.spec requires a selector field, so simply add one to each of the two deployment templates, as sketched below.
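A minimal sketch of the addition; the matchLabels values must be copied from each template's existing spec.template.metadata.labels, and the label names below are placeholders, not the chart's exact helpers:

  spec:
    selector:                  # new in apps/v1: required, must match the pod template's labels
      matchLabels:
        app: nginx-ingress
        release: nginx-ingress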

[root@bs-k8s-master01 nginx]# helm install nginx-ingress nginx-ingress
NAME: nginx-ingress
LAST DEPLOYED: Mon Feb 17 14:12:27 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The nginx-ingress controller has been installed.
Get the application URL by running these commands:
  export HTTP_NODE_PORT=$(kubectl --namespace default get services -o jsonpath="{.spec.ports[0].nodePort}" nginx-ingress-controller)
  export HTTPS_NODE_PORT=$(kubectl --namespace default get services -o jsonpath="{.spec.ports[1].nodePort}" nginx-ingress-controller)
  export NODE_IP=$(kubectl --namespace default get nodes -o jsonpath="{.items[0].status.addresses[1].address}")

  echo "Visit http://$NODE_IP:$HTTP_NODE_PORT to access your application via HTTP."
  echo "Visit https://$NODE_IP:$HTTPS_NODE_PORT to access your application via HTTPS."

An example Ingress that makes use of the controller:

  apiVersion: extensions/v1beta1
  kind: Ingress
  metadata:
    annotations:
      kubernetes.io/ingress.class: nginx
    name: example
    namespace: foo
  spec:
    rules:
      - host: www.example.com
        http:
          paths:
            - backend:
                serviceName: exampleService
                servicePort: 80
              path: /
    # This section is only required if TLS is to be enabled for the Ingress
    tls:
      - hosts:
          - www.example.com
        secretName: example-tls

If TLS is enabled for the Ingress, a Secret containing the certificate and key must also be provided:

  apiVersion: v1
  kind: Secret
  metadata:
    name: example-tls
    namespace: foo
  data:
    tls.crt: <base64 encoded cert>
    tls.key: <base64 encoded key>
  type: kubernetes.io/tls
[root@bs-k8s-master01 nginx]# kubectl get pods
NAME                                             READY   STATUS    RESTARTS   AGE
nginx-ingress-controller-8fbb5974-l7dsx          1/1     Running   0          115s
nginx-ingress-default-backend-744fdc79c4-xcvqp   1/1     Running   0          115s
[root@bs-k8s-master01 nginx]# pwd
/data/k8s/nginx
[root@bs-k8s-master01 nginx]# ll
total 12
drwxr-xr-x 3 root root   119 Feb 17 13:32 nginx-ingress
-rw-r--r-- 1 root root 10830 Feb 17 13:25 nginx-ingress-0.9.5.tgz
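Helm 3.1 does not create the release namespace on its own (--create-namespace only arrived in Helm 3.2), so assuming the harbor namespace was not created earlier, it must exist before the install:

  [root@bs-k8s-master01 harbor]# kubectl create namespace harbor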
[root@bs-k8s-master01 harbor]# helm install harbor -n harbor harbor
NAME: harbor
LAST DEPLOYED: Mon Feb 17 14:16:05 2020
NAMESPACE: harbor
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Please wait for several minutes for Harbor deployment to complete.
Then you should be able to visit the Harbor portal at https://zisefeizhu.harbor.org.
For more details, please visit https://github.com/goharbor/harbor.
[root@bs-k8s-master01 harbor]# kubectl get pvc -n harbor
NAME                                     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data-harbor-harbor-redis-0               Bound    pvc-4b2c0362-aca9-4fc2-b3e8-5fed5bf46b43   1Gi        RWO            ceph-harbor    66s
database-data-harbor-harbor-database-0   Bound    pvc-ce201f8c-0909-4f69-8eb9-aeaeb542de98   1Gi        RWO            ceph-harbor    66s
harbor-harbor-chartmuseum                Bound    pvc-1ec866fa-413a-463d-bb04-a0376577ae69   5Gi        RWO            ceph-harbor    6m38s
harbor-harbor-jobservice                 Bound    pvc-03dd5393-fad1-471b-8384-b0a5f5403d90   1Gi        RWO            ceph-harbor    6m38s
harbor-harbor-registry                   Bound    pvc-b7268d13-e92a-4ab3-846a-26d14672e56c   5Gi        RWO            ceph-harbor    6m38s
[root@bs-k8s-master01 harbor]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                           STORAGECLASS   REASON   AGE
pvc-03dd5393-fad1-471b-8384-b0a5f5403d90   1Gi        RWO            Retain           Bound    harbor/harbor-harbor-jobservice                 ceph-harbor             <invalid>
pvc-1ec866fa-413a-463d-bb04-a0376577ae69   5Gi        RWO            Retain           Bound    harbor/harbor-harbor-chartmuseum                ceph-harbor             <invalid>
pvc-494a130d-018c-4be3-9b31-e951cc4367a5   20Gi       RWO            Retain           Bound    default/wp-pv-claim                             ceph-rbd                27h
pvc-4b2c0362-aca9-4fc2-b3e8-5fed5bf46b43   1Gi        RWO            Retain           Bound    harbor/data-harbor-harbor-redis-0               ceph-harbor             <invalid>
pvc-8ffa3182-a2f6-47d9-a71d-ff8e8b379a16   1Gi        RWO            Retain           Bound    default/ceph-pvc                                ceph-rbd                29h
pvc-ac7d3a09-123e-4614-886c-cded8822a078   20Gi       RWO            Retain           Bound    default/mysql-pv-claim                          ceph-rbd                27h
pvc-b7268d13-e92a-4ab3-846a-26d14672e56c   5Gi        RWO            Retain           Bound    harbor/harbor-harbor-registry                   ceph-harbor             <invalid>
pvc-ce201f8c-0909-4f69-8eb9-aeaeb542de98   1Gi        RWO            Retain           Bound    harbor/database-data-harbor-harbor-database-0   ceph-harbor             <invalid>
[root@bs-k8s-master01 harbor]# kubectl get pods -n harbor -o wide
NAME                                          READY   STATUS             RESTARTS   AGE    IP              NODE            NOMINATED NODE   READINESS GATES
harbor-harbor-chartmuseum-dcc6f779f-68tvn     1/1     Running            0          32m    10.209.208.21   bs-k8s-node03   <none>           <none>
harbor-harbor-clair-69789f6695-5zrf8          1/2     CrashLoopBackOff   9          32m    10.209.145.26   bs-k8s-node02   <none>           <none>
harbor-harbor-core-5675f84d5f-ddhj2           0/1     CrashLoopBackOff   8          32m    10.209.145.27   bs-k8s-node02   <none>           <none>
harbor-harbor-database-0                      1/1     Running            1          32m    10.209.46.93    bs-k8s-node01   <none>           <none>
harbor-harbor-jobservice-74f469588d-m6w64     0/1     Running            3          32m    10.209.46.91    bs-k8s-node01   <none>           <none>
harbor-harbor-notary-server-fcbcfdf9c-zgjk8   0/1     CrashLoopBackOff   9          32m    10.209.208.19   bs-k8s-node03   <none>           <none>
harbor-harbor-notary-signer-9789894bd-8p67d   0/1     CrashLoopBackOff   9          32m    10.209.208.20   bs-k8s-node03   <none>           <none>
harbor-harbor-portal-56456988bb-6cb9j         1/1     Running            0          32m    10.209.208.18   bs-k8s-node03   <none>           <none>
harbor-harbor-redis-0                         1/1     Running            0          32m    10.209.46.92    bs-k8s-node01   <none>           <none>
harbor-harbor-registry-6946847b6f-qdgfp       2/2     Running            0          32m    10.209.145.28   bs-k8s-node02   <none>           <none>
rbd-provisioner-75b85f85bd-d4b8d              1/1     Running            0          136m   10.209.145.25   bs-k8s-node02   <none>           <none>
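The core, clair, and notary pods commonly crash-loop for a while until the database and core service become reachable, so a few restarts shortly after install are normal. If they never settle, the usual first look (pod name taken from the listing above):

  [root@bs-k8s-master01 harbor]# kubectl -n harbor logs harbor-harbor-core-5675f84d5f-ddhj2 --previous
  [root@bs-k8s-master01 harbor]# kubectl -n harbor describe pod harbor-harbor-core-5675f84d5f-ddhj2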

The remaining steps need no transcript. Two things to note:
1. Once the PVCs have been created, do not re-run the install. Keep that in mind.
2. In your local hosts file, map the Harbor hostname to the IP of the node running the nginx-ingress controller, as sketched below.
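A sketch of that hosts entry, where <node-ip> stands for the address of the node hosting the nginx-ingress-controller pod:

  <node-ip>    zisefeizhu.harbor.org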
