1. Access Interfaces of the Ceph Storage Cluster
 
1.1 Ceph Block Device Interface (RBD)
A Ceph block device, also known as a RADOS Block Device (RBD), is a thin-provisioned, scalable, striped data store built on top of the RADOS storage system; it interacts with the OSDs through the librbd library. RBD provides a high-performance, virtually unlimited storage backend for virtualization technologies such as KVM and for cloud operating systems such as OpenStack and CloudStack, which integrate with RBD through libvirt and the QEMU utilities.
 
A client can use the RADOS storage cluster as block storage through the librbd library alone, but the pool to be used for rbd must first have the rbd application enabled and be initialized. For example, the following commands create a pool named volumes, enable the rbd application on it, and then initialize it:
# ceph osd pool create volumes 128
# ceph osd pool application enable volumes rbd
# rbd pool init -p volumes
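To verify which application a pool has been tagged for, the application metadata can be queried; a quick check against the pool created above (it should report rbd):
# ceph osd pool application get volumes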
 
An rbd pool cannot be used directly as a block device, however; images must first be created in it as needed, and those images are then used as the block devices. The rbd command can be used to create, list, and delete images, as well as to clone images, create snapshots, roll an image back to a snapshot, list snapshots, and perform other management operations. For example, the following creates an image named img1:
rbd create img1 --size 1024 --pool volumes
 
Information about the image can then be retrieved with the "rbd info" command:
rbd --image img1 --pool volumes info
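To use the image as an actual block device, a client typically maps it through the kernel rbd driver, creates a file system on the mapped device, and mounts it. Below is a minimal sketch; the device name /dev/rbd0 and the mount point /mnt/rbd are illustrative, and on older kernels some image features may first need to be disabled with "rbd feature disable" before the map succeeds:
# rbd map img1 --pool volumes
# mkfs.xfs /dev/rbd0
# mkdir -p /mnt/rbd
# mount /dev/rbd0 /mnt/rbd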
 
 
1.2 File System (CephFS) Interface
CephFS requires at least one running Metadata Server (MDS) daemon (ceph-mds); this daemon manages the metadata of the files stored on CephFS and coordinates access to the Ceph storage cluster. Therefore, to use the CephFS interface, at least one MDS instance must be deployed in the storage cluster. The command "ceph-deploy mds create {ceph-node}" does exactly this; for example, to enable an MDS on ceph-host-02:
[root@ceph-host-01 ceph-cluster]# ceph-deploy mds create ceph-host-02
 
Checking the MDS status shows that the newly added MDS is in standby mode:
[root@ceph-host-01 ceph-cluster]# ceph mds stat
1 up:standby
 
Before CephFS can be used, a file system must first be created in the cluster, with separate pools designated for its metadata and its data. The following creates a file system named cephfs for testing, using cephfs-metadata as the metadata pool and cephfs-data as the data pool:
 
[root@ceph-host-01 ceph-cluster]# ceph osd pool create cephfs-metadata 64
[root@ceph-host-01 ceph-cluster]# ceph osd pool create cephfs-data 64
[root@ceph-host-01 ceph-cluster]# ceph fs new cephfs cephfs-metadata cephfs-data
 
The status of the file system can then be viewed with "ceph fs status <fsname>", for example:
# ceph fs status cephfs
 
At this point, the MDS state has changed:
# ceph mds stat
cephfs:1 {0=ceph-host-02=up:active}
 
A client can then mount and use CephFS through the kernel cephfs filesystem interface, or interact with the file system through the FUSE interface:
 
[root@node5 ~]# mkdir /data/ceph-storage/ -p
[root@node5 ~]# chown -R ceph.ceph /data/ceph-storage
[root@node5 ~]# mount -t ceph 10.30.1.221:6789:/ /data/ceph-storage/ -o name=admin,secret=AQA8HzdeFQuPHxAAUfjHnOMSfFu7hHIoGv/x1A==
[root@node5 ~]# mount | grep ceph
10.30.1.221:6789:/ on /data/ceph-storage type ceph (rw,relatime,name=admin,secret=<hidden>,acl,wsize=16777216)
 
Note: how to look up the secret value:
# cat /etc/ceph/ceph.client.admin.keyring
[client.admin]
    key = AQA8HzdeFQuPHxAAUfjHnOMSfFu7hHIoGv/x1A==
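As an alternative to the kernel mount shown above, the FUSE interface mentioned earlier can be used through the ceph-fuse client. A minimal sketch, assuming the ceph-fuse package is installed and that /etc/ceph on the client holds ceph.conf together with the admin keyring (the mount point is reused from the example above):
[root@node5 ~]# ceph-fuse -n client.admin -m 10.30.1.221:6789 /data/ceph-storage/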
 
 
Removing the fs and its pools:
ceph fs fail cephfs
ceph fs rm cephfs --yes-i-really-mean-it
ceph osd pool rm cephfs-metadata cephfs-metadata --yes-i-really-really-mean-it
ceph osd pool rm cephfs-data cephfs-data --yes-i-really-really-mean-it
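Note that the file system has to be failed (as above) before "ceph fs rm" succeeds, and the monitors refuse to delete pools unless pool deletion is explicitly allowed. A hedged sketch of enabling it temporarily through the centralized configuration:
# ceph config set mon mon_allow_pool_delete true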
 
 
---------------------------------------------------------------------------------------------------------------------------
 
2. Storage Space Usage
Command: ceph df
The output contains two sections: RAW STORAGE and POOLS
   RAW STORAGE: an overview of the raw storage
   POOLS: the storage pools
RAW STORAGE section:
    SIZE: the overall storage capacity of the cluster
    AVAIL: the amount of free space available in the cluster
    RAW USED: the amount of raw storage consumed
    % RAW USED: the percentage of raw storage consumed. Use this figure together with the full ratio and near full ratio to make sure you do not run out of cluster capacity.
[root@ceph-host-01]# ceph df
RAW STORAGE:
    CLASS     SIZE        AVAIL       USED        RAW USED     %RAW USED
    hdd       1.2 TiB     1.1 TiB     6.1 GiB       21 GiB          1.78
    TOTAL     1.2 TiB     1.1 TiB     6.1 GiB       21 GiB          1.78
POOLS:
    POOL              ID     STORED      OBJECTS     USED        %USED     MAX AVAIL
    nova-metadata      6     4.5 MiB          24      20 MiB         0       276 GiB
    nova-data          7     1.3 GiB         391     5.4 GiB      0.48       276 GiB
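A more verbose per-pool report, including additional columns such as quota limits and dirty objects, can be requested with the detail form of the same command; the exact columns vary slightly between releases:
[root@ceph-host-01]# ceph df detail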
 
 
3. Checking OSD and MON Status
You can check the OSDs to make sure they are up and running by executing the following commands:
[root@ceph-host-03 ~]# ceph osd stat
15 osds: 15 up (since 22m), 15 in (since 116m); epoch: e417
 
[root@ceph-host-03 ~]# ceph osd dump
epoch 417
fsid 272905d2-fd66-4ef6-a772-9cd73a274683
created 2020-02-03 03:13:00.528959
modified 2020-02-04 19:29:43.906336
flags sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit
crush_version 33
full_ratio 0.95
backfillfull_ratio 0.9
nearfull_ratio 0.85
require_min_compat_client jewel
min_compat_client jewel
require_osd_release nautilus
pool 6 'nova-metadata' replicated size 4 min_size 2 crush_rule 0 object_hash rjenkins pg_num 64 pgp_num 64 autoscale_mode warn last_change 152 flags hashpspool stripe_width 0 pg_autoscale_bias 4 pg_num_min 16 recovery_priority 5 application cephfs
pool 7 'nova-data' replicated size 4 min_size 2 crush_rule 0 object_hash rjenkins pg_num 64 pgp_num 64 autoscale_mode warn last_change 152 flags hashpspool stripe_width 0 application cephfs
max_osd 15
osd.0 up   in  weight 1 up_from 379 up_thru 413 down_at 360 last_clean_interval [328,378) [v2:10.30.1.221:6802/4544,v1:10.30.1.221:6803/4544] [v2:192.168.9.211:6808/2004544,v1:192.168.9.211:6809/2004544] exists,up 5903a2c7-ca1f-4eb8-baff-2583e0db38c8
osd.1 up   in  weight 1 up_from 278 up_thru 413 down_at 277 last_clean_interval [247,268) [v2:10.30.1.222:6802/3716,v1:10.30.1.222:6803/3716] [v2:192.168.9.212:6800/3716,v1:192.168.9.212:6801/3716] exists,up bd1f8700-c318-4a35-a0ac-16b16e9c1179
osd.2 up   in  weight 1 up_from 413 up_thru 413 down_at 406 last_clean_interval [272,411) [v2:10.30.1.223:6802/3882,v1:10.30.1.223:6803/3882] [v2:192.168.9.213:6806/1003882,v1:192.168.9.213:6807/1003882] exists,up 1d4e71da-1956-48bb-bf93-af6c4eae0799
osd.3 up   in  weight 1 up_from 355 up_thru 413 down_at 351 last_clean_interval [275,352) [v2:10.30.1.224:6802/3856,v1:10.30.1.224:6803/3856] [v2:192.168.9.214:6802/3856,v1:192.168.9.214:6803/3856] exists,up ecd3b813-c1d7-4612-8448-a9834af18d8f
osd.4 up   in  weight 1 up_from 400 up_thru 413 down_at 392 last_clean_interval [273,389) [v2:10.30.1.221:6800/6694,v1:10.30.1.221:6801/6694] [v2:192.168.9.211:6800/6694,v1:192.168.9.211:6801/6694] exists,up 28488ddd-240a-4a21-a245-351472a7deaa
osd.5 up   in  weight 1 up_from 398 up_thru 413 down_at 390 last_clean_interval [279,389) [v2:10.30.1.222:6805/4521,v1:10.30.1.222:6807/4521] [v2:192.168.9.212:6803/4521,v1:192.168.9.212:6804/4521] exists,up cc8742ff-9d93-46b7-9fdb-60405ac09b6f
osd.6 up   in  weight 1 up_from 412 up_thru 412 down_at 410 last_clean_interval [273,411) [v2:10.30.1.223:6800/3884,v1:10.30.1.223:6801/3884] [v2:192.168.9.213:6808/2003884,v1:192.168.9.213:6810/2003884] exists,up 27910039-7ee6-4bf9-8d6b-06a0b8c3491a
osd.7 up   in  weight 1 up_from 353 up_thru 413 down_at 351 last_clean_interval [271,352) [v2:10.30.1.224:6800/3858,v1:10.30.1.224:6801/3858] [v2:192.168.9.214:6800/3858,v1:192.168.9.214:6801/3858] exists,up ef7c51dd-b9ee-44ef-872a-2861c3ad2f5a
osd.8 up   in  weight 1 up_from 380 up_thru 415 down_at 366 last_clean_interval [346,379) [v2:10.30.1.221:6814/4681,v1:10.30.1.221:6815/4681] [v2:192.168.9.211:6806/1004681,v1:192.168.9.211:6807/1004681] exists,up 4e8582b0-e06e-497d-8058-43e6d882ba6b
osd.9 up   in  weight 1 up_from 382 up_thru 413 down_at 377 last_clean_interval [280,375) [v2:10.30.1.222:6810/4374,v1:10.30.1.222:6811/4374] [v2:192.168.9.212:6808/4374,v1:192.168.9.212:6809/4374] exists,up baef9f86-2d3d-4f1a-8d1b-777034371968
osd.10 up   in  weight 1 up_from 412 up_thru 416 down_at 403 last_clean_interval [272,407) [v2:10.30.1.223:6808/3880,v1:10.30.1.223:6810/3880] [v2:192.168.9.213:6800/1003880,v1:192.168.9.213:6805/1003880] exists,up b6cd0b80-9ef1-42ad-b0c8-2f5b8d07da98
osd.11 up   in  weight 1 up_from 354 up_thru 413 down_at 351 last_clean_interval [278,352) [v2:10.30.1.224:6808/3859,v1:10.30.1.224:6809/3859] [v2:192.168.9.214:6808/3859,v1:192.168.9.214:6809/3859] exists,up 788897e9-1b8b-456d-b379-1c1c376e5bf0
osd.12 up   in  weight 1 up_from 395 up_thru 413 down_at 390 last_clean_interval [383,393) [v2:10.30.1.221:6810/6453,v1:10.30.1.221:6811/6453] [v2:192.168.9.211:6804/1006453,v1:192.168.9.211:6805/1006453] exists,up bf5765f0-cb28-4ef8-a92d-f7fe1b5f2a09
osd.13 up   in  weight 1 up_from 413 up_thru 413 down_at 403 last_clean_interval [274,411) [v2:10.30.1.223:6806/3878,v1:10.30.1.223:6807/3878] [v2:192.168.9.213:6801/1003878,v1:192.168.9.213:6802/1003878] exists,up 54a3b38f-e772-4e6f-bb6a-afadaf766a4e
osd.14 up   in  weight 1 up_from 353 up_thru 413 down_at 351 last_clean_interval [273,352) [v2:10.30.1.224:6812/3860,v1:10.30.1.224:6813/3860] [v2:192.168.9.214:6812/3860,v1:192.168.9.214:6813/3860] exists,up 2652556d-b2a9-4bce-a4a2-3039a80f3c29
blacklist 10.30.1.222:6826/1493024757 expires 2020-02-04 20:47:36.530951
blacklist 10.30.1.222:6827/1493024757 expires 2020-02-04 20:47:36.530951
blacklist 10.30.1.221:6829/1662704623 expires 2020-02-05 05:33:02.570724
blacklist 10.30.1.221:6828/1662704623 expires 2020-02-05 05:33:02.570724
blacklist 10.30.1.222:6800/1416583747 expires 2020-02-05 17:54:46.478629
blacklist 10.30.1.222:6801/1416583747 expires 2020-02-05 17:54:46.478629
blacklist 10.30.1.222:6800/3620735873 expires 2020-02-05 19:03:42.652746
blacklist 10.30.1.222:6801/3620735873 expires 2020-02-05 19:03:42.652746
 
The OSDs can also be viewed according to their position in the CRUSH map:
[root@ceph-host-01]# ceph osd tree
ID CLASS WEIGHT  TYPE NAME             STATUS REWEIGHT PRI-AFF
-1       1.15631 root default                                  
-3       0.30835     host ceph-host-01                         
0   hdd 0.07709         osd.0             up  1.00000 1.00000
4   hdd 0.07709         osd.4             up  1.00000 1.00000
8   hdd 0.07709         osd.8             up  1.00000 1.00000
12   hdd 0.07709         osd.12            up  1.00000 1.00000
-5       0.23126     host ceph-host-02                         
1   hdd 0.07709         osd.1             up  1.00000 1.00000
5   hdd 0.07709         osd.5             up  1.00000 1.00000
9   hdd 0.07709         osd.9             up  1.00000 1.00000
-7       0.30835     host ceph-host-03                         
2   hdd 0.07709         osd.2             up  1.00000 1.00000
6   hdd 0.07709         osd.6             up  1.00000 1.00000
10   hdd 0.07709         osd.10            up  1.00000 1.00000
13   hdd 0.07709         osd.13            up  1.00000 1.00000
-9       0.30835     host ceph-host-04                         
3   hdd 0.07709         osd.3             up  1.00000 1.00000
7   hdd 0.07709         osd.7             up  1.00000 1.00000
11   hdd 0.07709         osd.11            up  1.00000 1.00000
14   hdd 0.07709         osd.14            up  1.00000 1.00000
Note: Ceph prints the CRUSH tree along with the hosts, their OSDs, whether each OSD is up, and its weight.
 
When the cluster contains multiple Mon hosts, the monitor quorum should be checked after the cluster starts and before any data is read or written; in fact, administrators should also check the quorum status periodically.
Display the monitor map: the ceph mon stat command or ceph mon dump
[root@ceph-host-03 ~]# ceph mon dump
dumped monmap epoch 3
epoch 3
fsid 272905d2-fd66-4ef6-a772-9cd73a274683
last_changed 2020-02-04 19:01:35.801920
created 2020-02-03 03:12:45.424079
min_mon_release 14 (nautilus)
0: [v2:10.30.1.221:3300/0,v1:10.30.1.221:6789/0] mon.ceph-host-01
1: [v2:10.30.1.222:3300/0,v1:10.30.1.222:6789/0] mon.ceph-host-02
2: [v2:10.30.1.223:3300/0,v1:10.30.1.223:6789/0] mon.ceph-host-03
Display the quorum status: ceph quorum_status
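The JSON returned by quorum_status is easier to read when pretty-printed, and the monitor quorum is also summarized in the overall cluster status output; for example:
[root@ceph-host-03 ~]# ceph quorum_status --format json-pretty
[root@ceph-host-03 ~]# ceph -s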
 
 
 
4. Using the Admin Socket
Ceph's admin socket interface is commonly used to query daemons:
    The sockets are stored under /var/run/ceph by default
    This interface cannot be used remotely
Command format:
    ceph --admin-daemon /var/run/ceph/socket-name
    To get help:
        ceph --admin-daemon /var/run/ceph/socket-name help
Example usage:
[root@ceph-host-04 ~]# ceph --admin-daemon /var/run/ceph/ceph-osd.7.asok version
{"version":"14.2.7","release":"nautilus","release_type":"stable"}
[root@ceph-host-04 ~]# ceph --admin-daemon /var/run/ceph/ceph-osd.7.asok status
{
    "cluster_fsid": "272905d2-fd66-4ef6-a772-9cd73a274683",
    "osd_fsid": "ef7c51dd-b9ee-44ef-872a-2861c3ad2f5a",
    "whoami": 7,
    "state": "active",
    "oldest_map": 1,
    "newest_map": 417,
    "num_pgs": 28
}
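On the host where a daemon runs, the same queries can also be issued with the shorter "ceph daemon" form, which locates the socket automatically; a small sketch reusing osd.7 from the example above:
[root@ceph-host-04 ~]# ceph daemon osd.7 version
[root@ceph-host-04 ~]# ceph daemon osd.7 config get osd_max_backfills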
[root@ceph-host-02 ceph-cluster]# ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-host-02.asok help
{
    "add_bootstrap_peer_hint": "add peer address as potential bootstrap peer for cluster bringup",
    "add_bootstrap_peer_hintv": "add peer address vector as potential bootstrap peer for cluster bringup",
    "config diff": "dump diff of current config and default config",
    "config diff get": "dump diff get <field>: dump diff of current and default config setting <field>",
    "config get": "config get <field>: get the config value",
    "config help": "get config setting schema and descriptions",
    "config set": "config set <field> <val> [<val> ...]: set a config variable",
    "config show": "dump current config settings",
    "config unset": "config unset <field>: unset a config variable",
    "dump_historic_ops": "show recent ops",
    "dump_historic_ops_by_duration": "show recent ops, sorted by duration",
    "dump_historic_slow_ops": "show recent slow ops",
    "dump_mempools": "get mempool stats",
    "get_command_descriptions": "list available commands",
    "git_version": "get git sha1",
    "help": "list available commands",
    "log dump": "dump recent log entries to log file",
    "log flush": "flush log entries to log file",
    "log reopen": "reopen log file",
    "mon_status": "show current monitor status",
    "ops": "show the ops currently in flight",
    "perf dump": "dump perfcounters value",
    "perf histogram dump": "dump perf histogram values",
    "perf histogram schema": "dump perf histogram schema",
    "perf reset": "perf reset <name>: perf reset all or one perfcounter name",
    "perf schema": "dump perfcounters schema",
    "quorum enter": "force monitor back into quorum",
    "quorum exit": "force monitor out of the quorum",
    "quorum_status": "show current quorum status",
    "sessions": "list existing sessions",
    "sync_force": "force sync of and clear monitor store",
    "version": "get ceph version"
}
 
[root@ceph-host-04 ~]# ceph --admin-daemon /var/run/ceph/ceph-osd.7.asok help
{
    "bluestore allocator dump block": "dump allocator free regions",
    "bluestore allocator dump bluefs-db": "dump allocator free regions",
    "bluestore allocator score block": "give score on allocator fragmentation (0-no fragmentation, 1-absolute fragmentation)",
    "bluestore allocator score bluefs-db": "give score on allocator fragmentation (0-no fragmentation, 1-absolute fragmentation)",
    "bluestore bluefs available": "Report available space for bluefs. If alloc_size set, make simulation.",
    "calc_objectstore_db_histogram": "Generate key value histogram of kvdb(rocksdb) which used by bluestore",
    "compact": "Commpact object store's omap. WARNING: Compaction probably slows your requests",
    "config diff": "dump diff of current config and default config",
    "config diff get": "dump diff get <field>: dump diff of current and default config setting <field>",
    "config get": "config get <field>: get the config value",
    "config help": "get config setting schema and descriptions",
    "config set": "config set <field> <val> [<val> ...]: set a config variable",
    "config show": "dump current config settings",
    "config unset": "config unset <field>: unset a config variable",
    "dump_blacklist": "dump blacklisted clients and times",
    "dump_blocked_ops": "show the blocked ops currently in flight",
    "dump_historic_ops": "show recent ops",
    "dump_historic_ops_by_duration": "show slowest recent ops, sorted by duration",
    "dump_historic_slow_ops": "show slowest recent ops",
    "dump_mempools": "get mempool stats",
    "dump_objectstore_kv_stats": "print statistics of kvdb which used by bluestore",
    "dump_op_pq_state": "dump op priority queue state",
    "dump_ops_in_flight": "show the ops currently in flight",
    "dump_osd_network": "Dump osd heartbeat network ping times",
    "dump_pgstate_history": "show recent state history",
    "dump_recovery_reservations": "show recovery reservations",
    "dump_scrub_reservations": "show scrub reservations",
    "dump_scrubs": "print scheduled scrubs",
    "dump_watchers": "show clients which have active watches, and on which objects",
    "flush_journal": "flush the journal to permanent store",
    "flush_store_cache": "Flush bluestore internal cache",
    "get_command_descriptions": "list available commands",
    "get_heap_property": "get malloc extension heap property",
    "get_latest_osdmap": "force osd to update the latest map from the mon",
    "get_mapped_pools": "dump pools whose PG(s) are mapped to this OSD.",
    "getomap": "output entire object map",
    "git_version": "get git sha1",
    "heap": "show heap usage info (available only if compiled with tcmalloc)",
    "help": "list available commands",
    "injectdataerr": "inject data error to an object",
    "injectfull": "Inject a full disk (optional count times)",
    "injectmdataerr": "inject metadata error to an object",
    "list_devices": "list OSD devices.",
    "log dump": "dump recent log entries to log file",
    "log flush": "flush log entries to log file",
    "log reopen": "reopen log file",
    "objecter_requests": "show in-progress osd requests",
    "ops": "show the ops currently in flight",
    "perf dump": "dump perfcounters value",
    "perf histogram dump": "dump perf histogram values",
    "perf histogram schema": "dump perf histogram schema",
    "perf reset": "perf reset <name>: perf reset all or one perfcounter name",
    "perf schema": "dump perfcounters schema",
    "rmomapkey": "remove omap key",
    "send_beacon": "send OSD beacon to mon immediately",
    "set_heap_property": "update malloc extension heap property",
    "set_recovery_delay": "Delay osd recovery by specified seconds",
    "setomapheader": "set omap header",
    "setomapval": "set omap key",
    "smart": "probe OSD devices for SMART data.",
    "status": "high-level status of OSD",
    "trigger_deep_scrub": "Trigger a scheduled deep scrub ",
    "trigger_scrub": "Trigger a scheduled scrub ",
    "truncobj": "truncate object to length",
    "version": "get ceph version"
}
 
 
 
5. Stopping or Restarting the Ceph Cluster
Stopping:
 1. Tell the Ceph cluster not to mark OSDs as out
   Command: ceph osd set noout
 2. Stop the daemons and nodes in the following order:
   storage clients
   gateways, such as NFS Ganesha or the Object Gateway
   metadata servers
   Ceph OSDs
   Ceph Managers
   Ceph Monitors

Starting:
 1. Start the nodes in the reverse order of the stopping procedure:
   Ceph Monitors
   Ceph Managers
   Ceph OSDs
   metadata servers
   gateways, such as NFS Ganesha or the Object Gateway
   storage clients
 2. Remove the noout flag
   Command: ceph osd unset noout
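On a systemd-managed deployment, the per-daemon-type steps above map onto systemd targets; a minimal sketch of stopping the daemons on one node in the documented order and then starting them all again (unit names assume the stock ceph packages):
# systemctl stop ceph-osd.target
# systemctl stop ceph-mgr.target
# systemctl stop ceph-mon.target
# systemctl start ceph.target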
