The Ceph storage cluster ran into problems because the nodes' clocks were out of sync. The log excerpts below trace the fallout: clock-skew warnings, then an MDS/mgr failover, then OSDs being marked down for missing beacons.

clock skew
the offset between two clocks; the time difference between nodes

overall
adj. total; entire; all-inclusive

stamped
adj. imprinted; postmarked (here: the timestamp placed on a message)

beacon
vt. to light up, to guide (here: the periodic liveness message an OSD sends to the monitors)

2019-04-29 17:00:00.000223 mon.cu-pve04 mon.0 192.168.7.204:6789/0 1959 : cluster [WRN] overall HEALTH_WARN clock skew detected on mon.cu-pve05, mon.cu-pve06

2019-04-29 17:00:11.495180 mon.cu-pve04 mon.0 192.168.7.204:6789/0 1960 : cluster [WRN] mon.1 192.168.7.205:6789/0 clock skew 1.30379s > max 0.05s
2019-04-29 17:00:11.495343 mon.cu-pve04 mon.0 192.168.7.204:6789/0 1961 : cluster [WRN] mon.2 192.168.7.206:6789/0 clock skew 0.681995s > max 0.05s
2019-04-29 17:14:41.500133 mon.cu-pve04 mon.0 192.168.7.204:6789/0 2106 : cluster [WRN] mon.1 192.168.7.205:6789/0 clock skew 1.73357s > max 0.05s
2019-04-29 17:14:41.500307 mon.cu-pve04 mon.0 192.168.7.204:6789/0 2107 : cluster [WRN] mon.2 192.168.7.206:6789/0 clock skew 0.671272s > max 0.05s
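
The monitors refuse to trust peers whose clocks drift more than mon_clock_drift_allowed (0.05 s by default, which is the "max 0.05s" in the warnings above). A quick way to confirm the skew and check time synchronization on each node -- a minimal sketch, assuming the nodes run chrony or ntpd:

    # Ask the monitors for their view of peer clock offsets
    ceph time-sync-status
    ceph health detail

    # On each node (cu-pve04/05/06), verify that time sync is actually working
    timedatectl status
    chronyc sources -v     # or: ntpq -p, depending on which daemon is in use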

2019-04-29 17:35:33.320667 mon.cu-pve04 mon.0 192.168.7.204:6789/0 2342 : cluster [WRN] message from mon.1 was stamped 2.355514s in the future, clocks not synchronized
2019-04-29 17:39:59.322154 mon.cu-pve04 mon.0 192.168.7.204:6789/0 2397 : cluster [DBG] osdmap e191: 24 total, 24 up, 24 in
2019-04-29 18:32:24.854130 mon.cu-pve04 mon.0 192.168.7.204:6789/0 3026 : cluster [DBG] osdmap e194: 24 total, 24 up, 24 in
2019-04-29 19:00:00.000221 mon.cu-pve04 mon.0 192.168.7.204:6789/0 3324 : cluster [WRN] overall HEALTH_WARN clock skew detected on mon.cu-pve05, mon.cu-pve06
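
If the clocks cannot be fixed immediately, the monitor's tolerance can be loosened as a stopgap; this only silences the warning and does nothing about the underlying drift. A hedged example (whether injectargs takes effect without a monitor restart varies by release):

    # Temporary workaround only -- raise the allowed drift from the 0.05 s default
    ceph tell mon.* injectargs '--mon_clock_drift_allowed=0.5'

The real fix is to point all three nodes at the same NTP source and let them converge.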

2019-04-29 17:01:31.898307 mgr.cu-pve05 client.64099 192.168.7.205:0/2045992877 649 : cluster [DBG] pgmap v676: 1152 pgs: 1152 active+clean; 32.1GiB data, 121GiB used, 52.3TiB / 52.4TiB avail
2019-04-29 17:01:33.927961 mgr.cu-pve05 client.64099 192.168.7.205:0/2045992877 650 : cluster [DBG] pgmap v677: 1152 pgs: 1152 active+clean; 32.1GiB data, 121GiB used, 52.3TiB / 52.4TiB avail; 1.97KiB/s wr, 0op/s
2019-04-29 17:01:35.956276 mgr.cu-pve05 client.64099 192.168.7.205:0/2045992877 651 : cluster [DBG] pgmap v678: 1152 pgs: 1152 active+clean; 32.1GiB data, 121GiB used, 52.3TiB / 52.4TiB avail; 588B/s rd, 2.71KiB/s wr, 1op/s
2019-04-29 17:01:37.981052 mgr.cu-pve05 client.64099 192.168.7.205:0/2045992877 652 : cluster [DBG] pgmap v679: 1152 pgs: 1152 active+clean; 32.1GiB data, 121GiB used, 52.3TiB / 52.4TiB avail; 588B/s rd, 2.71KiB/s wr, 1op/s
2019-04-29 17:01:40.014386 mgr.cu-pve05 client.64099 192.168.7.205:0/2045992877 653 : cluster [DBG] pgmap v680: 1152 pgs: 1152 active+clean; 32.1GiB data, 121GiB used, 52.3TiB / 52.4TiB avail; 589B/s rd, 4.03KiB/s wr, 1op/s
2019-04-29 17:01:42.042173 mgr.cu-pve05 client.64099 192.168.7.205:0/2045992877 654 : cluster [DBG] pgmap v681: 1152 pgs: 1152 active+clean; 32.1GiB data, 121GiB used, 52.3TiB / 52.4TiB avail; 588B/s rd, 4.02KiB/s wr, 1op/s
2019-04-29 17:01:44.072142 mgr.cu-pve05 client.64099 192.168.7.205:0/2045992877 655 : cluster [DBG] pgmap v682: 1152 pgs: 1152 active+clean; 32.1GiB data, 121GiB used, 52.3TiB / 52.4TiB avail; 588B/s rd, 5.01KiB/s wr, 1op/s
2019-04-29 17:01:46.100477 mgr.cu-pve05 client.64099 192.168.7.205:0/2045992877 656 : cluster [DBG] pgmap v683: 1152 pgs: 1152 active+clean; 32.1GiB data, 121GiB used, 52.3TiB / 52.4TiB avail; 1.89KiB/s rd, 3.20KiB/s wr, 1op/s
2019-04-29 17:01:48.129701 mgr.cu-pve05 client.64099 192.168.7.205:0/2045992877 657 : cluster [DBG] pgmap v684: 1152 pgs: 1152 active+clean; 32.1GiB data, 121GiB used, 52.3TiB / 52.4TiB avail; 1.31KiB/s rd, 2.46KiB/s wr, 0op/s
2019-04-29 17:01:50.161716 mgr.cu-pve05 client.64099 192.168.7.205:0/2045992877 658 : cluster [DBG] pgmap v685: 1152 pgs: 1152 active+clean; 32.1GiB data, 121GiB used, 52.3TiB / 52.4TiB avail; 1.31KiB/s rd, 2.46KiB/s wr, 0op/s
2019-04-29 17:01:52.190373 mgr.cu-pve05 client.64099 192.168.7.205:0/2045992877 659 : cluster [DBG] pgmap v686: 1152 pgs: 1152 active+clean; 32.1GiB data, 121GiB used, 52.3TiB / 52.4TiB avail; 1.31KiB/s rd, 1.15KiB/s wr, 0op/s
2019-04-29 17:01:54.220284 mgr.cu-pve05 client.64099 192.168.7.205:0/2045992877 660 : cluster [DBG] pgmap v687: 1152 pgs: 1152 active+clean; 32.1GiB data, 121GiB used, 52.3TiB / 52.4TiB avail; 1.31KiB/s rd, 1.15KiB/s wr, 0op/s
2019-04-29 17:01:56.248956 mgr.cu-pve05 client.64099 192.168.7.205:0/2045992877 661 : cluster [DBG] pgmap v688: 1152 pgs: 1152 active+clean; 32.1GiB data, 121GiB used, 52.3TiB / 52.4TiB avail; 1.31KiB/s rd, 168B/s wr, 0op/s
2019-04-29 17:01:58.273446 mgr.cu-pve05 client.64099 192.168.7.205:0/2045992877 662 : cluster [DBG] pgmap v689: 1152 pgs: 1152 active+clean; 32.1GiB data, 121GiB used, 52.3TiB / 52.4TiB avail
2019-04-29 17:02:00.305394 mgr.cu-pve05 client.64099 192.168.7.205:0/2045992877 663 : cluster [DBG] pgmap v690: 1152 pgs: 1152 active+clean; 32.1GiB data, 121GiB used, 52.3TiB / 52.4TiB avail
2019-04-29 17:02:02.334375 mgr.cu-pve05 client.64099 192.168.7.205:0/2045992877 664 : cluster [DBG] pgmap v691: 1152 pgs: 1152 active+clean; 32.1GiB data, 121GiB used, 52.3TiB / 52.4TiB avail

2019-04-30 00:22:14.177176 mgr.cu-pve05 client.64099 192.168.7.205:0/2045992877 13697 : cluster [DBG] pgmap v13716: 1152 pgs: 1152 active+clean; 50.1GiB data, 175GiB used, 52.2TiB / 52.4TiB avail
2019-04-30 00:22:16.203475 mgr.cu-pve05 client.64099 192.168.7.205:0/2045992877 13698 : cluster [DBG] pgmap v13717: 1152 pgs: 1152 active+clean; 50.1GiB data, 175GiB used, 52.2TiB / 52.4TiB avail
2019-04-30 00:22:28.348815 mon.cu-pve04 mon.0 192.168.7.204:6789/0 6578 : cluster [WRN] daemon mds.cu-pve04 is not responding, replacing it as rank 0 with standby daemon mds.cu-pve06
2019-04-30 00:22:28.349010 mon.cu-pve04 mon.0 192.168.7.204:6789/0 6579 : cluster [INF] Standby daemon mds.cu-pve05 is not responding, dropping it
2019-04-30 00:22:28.353359 mon.cu-pve04 mon.0 192.168.7.204:6789/0 6580 : cluster [WRN] Health check failed: 1 filesystem is degraded (FS_DEGRADED)
2019-04-30 00:22:28.353476 mon.cu-pve04 mon.0 192.168.7.204:6789/0 6581 : cluster [WRN] Health check failed: insufficient standby MDS daemons available (MDS_INSUFFICIENT_STANDBY)
2019-04-30 00:22:28.364180 mon.cu-pve04 mon.0 192.168.7.204:6789/0 6582 : cluster [DBG] osdmap e195: 24 total, 24 up, 24 in
2019-04-30 00:22:28.374585 mon.cu-pve04 mon.0 192.168.7.204:6789/0 6583 : cluster [DBG] fsmap cephfs-1/1/1 up {0=cu-pve06=up:replay}
2019-04-30 00:22:29.413750 mon.cu-pve04 mon.0 192.168.7.204:6789/0 6584 : cluster [INF] Health check cleared: MDS_INSUFFICIENT_STANDBY (was: insufficient standby MDS daemons available)
2019-04-30 00:22:29.425556 mon.cu-pve04 mon.0 192.168.7.204:6789/0 6585 : cluster [DBG] mds.0 192.168.7.206:6800/3970858648 up:reconnect
2019-04-30 00:22:29.425710 mon.cu-pve04 mon.0 192.168.7.204:6789/0 6586 : cluster [DBG] mds.? 192.168.7.204:6800/2960873692 up:boot
2019-04-30 00:22:29.425883 mon.cu-pve04 mon.0 192.168.7.204:6789/0 6587 : cluster [DBG] fsmap cephfs-1/1/1 up {0=cu-pve06=up:reconnect}, 1 up:standby
2019-04-30 00:22:30.435723 mon.cu-pve04 mon.0 192.168.7.204:6789/0 6588 : cluster [DBG] mds.0 192.168.7.206:6800/3970858648 up:rejoin
2019-04-30 00:22:30.435868 mon.cu-pve04 mon.0 192.168.7.204:6789/0 6589 : cluster [DBG] fsmap cephfs-1/1/1 up {0=cu-pve06=up:rejoin}, 1 up:standby
2019-04-30 00:22:30.449165 mon.cu-pve04 mon.0 192.168.7.204:6789/0 6590 : cluster [INF] daemon mds.cu-pve06 is now active in filesystem cephfs as rank 0
2019-04-30 00:22:30.015869 mds.cu-pve06 mds.0 192.168.7.206:6800/3970858648 1 : cluster [DBG] reconnect by client.54450 192.168.7.205:0/1578906464 after 0
2019-04-30 00:22:30.019932 mds.cu-pve06 mds.0 192.168.7.206:6800/3970858648 2 : cluster [DBG] reconnect by client.64366 192.168.7.206:0/2722278656 after 0.00400001
2019-04-30 00:22:30.054313 mds.cu-pve06 mds.0 192.168.7.206:6800/3970858648 3 : cluster [DBG] reconnect by client.54120 192.168.7.204:0/254060409 after 0.0400001
2019-04-30 00:22:31.434592 mon.cu-pve04 mon.0 192.168.7.204:6789/0 6591 : cluster [INF] Health check cleared: FS_DEGRADED (was: 1 filesystem is degraded)
2019-04-30 00:22:31.446526 mon.cu-pve04 mon.0 192.168.7.204:6789/0 6592 : cluster [DBG] mds.0 192.168.7.206:6800/3970858648 up:active
2019-04-30 00:22:31.446675 mon.cu-pve04 mon.0 192.168.7.204:6789/0 6593 : cluster [DBG] fsmap cephfs-1/1/1 up {0=cu-pve06=up:active}, 1 up:standby
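
Around 00:22 the active MDS on cu-pve04 stopped answering (its beacons were presumably judged stale because of the skew), the standby on cu-pve06 was promoted, and the filesystem walked through replay -> reconnect -> rejoin -> active. The fsmap state seen in these lines can be checked directly with:

    # Filesystem and MDS state at a glance
    ceph fs status
    ceph mds stat
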
2019-04-30 00:22:43.355044 mon.cu-pve04 mon.0 192.168.7.204:6789/0 6595 : cluster [INF] Manager daemon cu-pve05 is unresponsive. No standby daemons available.
2019-04-30 00:22:43.355235 mon.cu-pve04 mon.0 192.168.7.204:6789/0 6596 : cluster [WRN] Health check failed: no active mgr (MGR_DOWN)
2019-04-30 00:22:43.367182 mon.cu-pve04 mon.0 192.168.7.204:6789/0 6597 : cluster [DBG] mgrmap e18: no daemons active
2019-04-30 00:22:53.658070 mon.cu-pve04 mon.0 192.168.7.204:6789/0 6601 : cluster [INF] Activating manager daemon cu-pve05
2019-04-30 00:22:53.898363 mon.cu-pve04 mon.0 192.168.7.204:6789/0 6602 : cluster [INF] Health check cleared: MGR_DOWN (was: no active mgr)
2019-04-30 00:22:53.917204 mon.cu-pve04 mon.0 192.168.7.204:6789/0 6603 : cluster [DBG] mgrmap e19: cu-pve05(active, starting)
2019-04-30 00:22:53.979682 mon.cu-pve04 mon.0 192.168.7.204:6789/0 6608 : cluster [INF] Manager daemon cu-pve05 is now available
2019-04-30 00:22:54.928868 mon.cu-pve04 mon.0 192.168.7.204:6789/0 6609 : cluster [DBG] mgrmap e20: cu-pve05(active)
2019-04-30 00:22:59.965578 mgr.cu-pve05 client.84813 192.168.7.205:0/2429672320 1 : cluster [DBG] pgmap v2: 1152 pgs: 1152 active+clean; 50.1GiB data, 175GiB used, 52.2TiB / 52.4TiB avail
2019-04-30 00:23:00.677664 mgr.cu-pve05 client.84813 192.168.7.205:0/2429672320 2 : cluster [DBG] pgmap v3: 1152 pgs: 1152 active+clean; 50.1GiB data, 175GiB used, 52.2TiB / 52.4TiB avail
2019-04-30 00:23:02.700917 mgr.cu-pve05 client.84813 192.168.7.205:0/2429672320 3 : cluster [DBG] pgmap v4: 1152 pgs: 1152 active+clean; 50.1GiB data, 175GiB used, 52.2TiB / 52.4TiB avail
2019-04-30 00:23:04.707492 mgr.cu-pve05 client.84813 192.168.7.205:0/2429672320 4 : cluster [DBG] pgmap v5: 1152 pgs: 1152 active+clean; 50.1GiB data, 175GiB used, 52.2TiB / 52.4TiB avail
2019-04-30 00:23:06.740218 mgr.cu-pve05 client.84813 192.168.7.205:0/2429672320 5 : cluster [DBG] pgmap v6: 1152 pgs: 1152 active+clean; 50.1GiB data, 175GiB used, 52.2TiB / 52.4TiB avail
2019-04-30 00:23:08.746633 mgr.cu-pve05 client.84813 192.168.7.205:0/2429672320 6 : cluster [DBG] pgmap v7: 1152 pgs: 1152 active+clean; 50.1GiB data, 175GiB used, 52.2TiB / 52.4TiB avail
2019-04-30 00:23:10.780395 mgr.cu-pve05 client.84813 192.168.7.205:0/2429672320 7 : cluster [DBG] pgmap v8: 1152 pgs: 1152 active+clean; 50.1GiB data, 175GiB used, 52.2TiB / 52.4TiB avail

2019-04-30 00:32:18.562962 mgr.cu-pve05 client.84813 192.168.7.205:0/2429672320 278 : cluster [DBG] pgmap v279: 1152 pgs: 1152 active+clean; 50.1GiB data, 175GiB used, 52.2TiB / 52.4TiB avail
2019-04-30 00:32:18.465670 mon.cu-pve04 mon.0 192.168.7.204:6789/0 7327 : cluster [INF] osd.16 marked down after no beacon for 901.455814 seconds
2019-04-30 00:32:18.468437 mon.cu-pve04 mon.0 192.168.7.204:6789/0 7328 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)
2019-04-30 00:32:18.483797 mon.cu-pve04 mon.0 192.168.7.204:6789/0 7329 : cluster [DBG] osdmap e196: 24 total, 23 up, 24 in
2019-04-30 00:32:19.495106 mon.cu-pve04 mon.0 192.168.7.204:6789/0 7331 : cluster [DBG] osdmap e197: 24 total, 23 up, 24 in
2019-04-30 00:32:21.501683 mon.cu-pve04 mon.0 192.168.7.204:6789/0 7334 : cluster [WRN] Health check failed: Reduced data availability: 3 pgs inactive, 47 pgs peering (PG_AVAILABILITY)
2019-04-30 00:32:21.501774 mon.cu-pve04 mon.0 192.168.7.204:6789/0 7335 : cluster [WRN] Health check failed: Degraded data redundancy: 794/38643 objects degraded (2.055%), 50 pgs degraded (PG_DEGRADED)
2019-04-30 00:32:20.596358 mgr.cu-pve05 client.84813 192.168.7.205:0/2429672320 279 : cluster [DBG] pgmap v280: 1152 pgs: 1152 active+clean; 50.1GiB data, 175GiB used, 52.2TiB / 52.4TiB avail
2019-04-30 00:32:22.603039 mgr.cu-pve05 client.84813 192.168.7.205:0/2429672320 280 : cluster [DBG] pgmap v281: 1152 pgs: 1152 active+clean; 50.1GiB data, 175GiB used, 52.2TiB / 52.4TiB avail
2019-04-30 00:32:24.628896 mgr.cu-pve05 client.84813 192.168.7.205:0/2429672320 281 : cluster [DBG] pgmap v283: 1152 pgs: 41 stale+active+clean, 1111 active+clean; 50.1GiB data, 175GiB used, 52.2TiB / 52.4TiB avail
2019-04-30 00:32:26.642893 mgr.cu-pve05 client.84813 192.168.7.205:0/2429672320 282 : cluster [DBG] pgmap v285: 1152 pgs: 25 active+undersized, 1030 active+clean, 47 peering, 50 active+undersized+degraded; 50.1GiB data, 175GiB used, 52.2TiB / 52.4TiB avail; 794/38643 objects degraded (2.055%)
2019-04-30 00:32:28.669528 mgr.cu-pve05 client.84813 192.168.7.205:0/2429672320 283 : cluster [DBG] pgmap v286: 1152 pgs: 25 active+undersized, 1030 active+clean, 47 peering, 50 active+undersized+degraded; 50.1GiB data, 175GiB used, 52.2TiB / 52.4TiB avail; 794/38643 objects degraded (2.055%)
2019-04-30 00:32:30.683129 mgr.cu-pve05 client.84813 192.168.7.205:0/2429672320 284 : cluster [DBG] pgmap v287: 1152 pgs: 25 active+undersized, 1030 active+clean, 47 peering, 50 active+undersized+degraded; 50.1GiB data, 175GiB used, 52.2TiB / 52.4TiB avail; 794/38643 objects degraded (2.055%)
2019-04-30 00:32:32.709629 mgr.cu-pve05 client.84813 192.168.7.205:0/2429672320 285 : cluster [DBG] pgmap v288: 1152 pgs: 25 active+undersized, 1030 active+clean, 47 peering, 50 active+undersized+degraded; 50.1GiB data, 175GiB used, 52.2TiB / 52.4TiB avail; 794/38643 objects degraded (2.055%)
2019-04-30 00:32:34.717180 mgr.cu-pve05 client.84813 192.168.7.205:0/2429672320 286 : cluster [DBG] pgmap v289: 1152 pgs: 25 active+undersized, 1030 active+clean, 47 peering, 50 active+undersized+degraded; 50.1GiB data, 175GiB used, 52.2TiB / 52.4TiB avail; 794/38643 objects degraded (2.055%)
2019-04-30 00:32:36.748749 mgr.cu-pve05 client.84813 192.168.7.205:0/2429672320 287 : cluster [DBG] pgmap v290: 1152 pgs: 25 active+undersized, 1030 active+clean, 47 peering, 50 active+undersized+degraded; 50.1GiB data, 175GiB used, 52.2TiB / 52.4TiB avail; 794/38643 objects degraded (2.055%)
2019-04-30 00:32:38.756345 mgr.cu-pve05 client.84813 192.168.7.205:0/2429672320 288 : cluster [DBG] pgmap v291: 1152 pgs: 25 active+undersized, 1030 active+clean, 47 peering, 50 active+undersized+degraded; 50.1GiB data, 175GiB used, 52.2TiB / 52.4TiB avail; 794/38643 objects degraded (2.055%)
2019-04-30 00:32:40.789378 mgr.cu-pve05 client.84813 192.168.7.205:0/2429672320 289 : cluster [DBG] pgmap v292: 1152 pgs: 25 active+undersized, 1030 active+clean, 47 peering, 50 active+undersized+degraded; 50.1GiB data, 175GiB used, 52.2TiB / 52.4TiB avail; 794/38643 objects degraded (2.055%)
2019-04-30 00:32:42.796488 mgr.cu-pve05 client.84813 192.168.7.205:0/2429672320 290 : cluster [DBG] pgmap v293: 1152 pgs: 25 active+undersized, 1030 active+clean, 47 peering, 50 active+undersized+degraded; 50.1GiB data, 175GiB used, 52.2TiB / 52.4TiB avail; 794/38643 objects degraded (2.055%)
2019-04-30 00:32:44.821576 mgr.cu-pve05 client.84813 192.168.7.205:0/2429672320 291 : cluster [DBG] pgmap v294: 1152 pgs: 25 active+undersized, 1030 active+clean, 47 peering, 50 active+undersized+degraded; 50.1GiB data, 175GiB used, 52.2TiB / 52.4TiB avail; 794/38643 objects degraded (2.055%)
2019-04-30 00:32:46.835641 mgr.cu-pve05 client.84813 192.168.7.205:0/2429672320 292 : cluster [DBG] pgmap v295: 1152 pgs: 25 active+undersized, 1030 active+clean, 47 peering, 50 active+undersized+degraded; 50.1GiB data, 175GiB used, 52.2TiB / 52.4TiB avail; 794/38643 objects degraded (2.055%)
2019-04-30 00:32:48.475079 mon.cu-pve04 mon.0 192.168.7.204:6789/0 7371 : cluster [INF] osd.17 marked down after no beacon for 903.631937 seconds
2019-04-30 00:32:48.475189 mon.cu-pve04 mon.0 192.168.7.204:6789/0 7372 : cluster [INF] osd.20 marked down after no beacon for 901.611316 seconds
2019-04-30 00:32:48.483726 mon.cu-pve04 mon.0 192.168.7.204:6789/0 7373 : cluster [WRN] Health check update: 3 osds down (OSD_DOWN)
2019-04-30 00:32:48.500282 mon.cu-pve04 mon.0 192.168.7.204:6789/0 7374 : cluster [DBG] osdmap e198: 24 total, 21 up, 24 in
2019-04-30 00:32:49.510909 mon.cu-pve04 mon.0 192.168.7.204:6789/0 7376 : cluster [DBG] osdmap e199: 24 total, 21 up, 24 in
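
The "no beacon for ~900 seconds" messages line up with the default mon_osd_report_timeout of 900 s: the OSD processes were running, but the monitor had not accepted a beacon from them for 15 minutes, so it marked them down. A sketch for confirming this, assuming the monitor's admin socket is at its default path on cu-pve04:

    # Show the effective timeout on the monitor (run on the monitor host)
    ceph daemon mon.cu-pve04 config get mon_osd_report_timeout

    # List which OSDs are currently down
    ceph osd tree | grep -w down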

2019-04-30 00:35:58.536182 mon.cu-pve04 mon.0 192.168.7.204:6789/0 7645 : cluster [INF] osd.7 marked down after no beacon for 902.595536 seconds
2019-04-30 00:35:58.538784 mon.cu-pve04 mon.0 192.168.7.204:6789/0 7646 : cluster [WRN] Health check update: 5 osds down (OSD_DOWN)
2019-04-30 00:35:58.554495 mon.cu-pve04 mon.0 192.168.7.204:6789/0 7647 : cluster [DBG] osdmap e202: 24 total, 19 up, 24 in
2019-04-30 00:35:59.565253 mon.cu-pve04 mon.0 192.168.7.204:6789/0 7649 : cluster [DBG] osdmap e203: 24 total, 19 up, 24 in
2019-04-30 00:36:01.657260 mon.cu-pve04 mon.0 192.168.7.204:6789/0 7652 : cluster [WRN] Health check update: Reduced data availability: 202 pgs inactive, 206 pgs peering (PG_AVAILABILITY)
2019-04-30 00:36:01.657353 mon.cu-pve04 mon.0 192.168.7.204:6789/0 7653 : cluster [WRN] Health check update: Degraded data redundancy: 4903/38643 objects degraded (12.688%), 247 pgs degraded, 285 pgs undersized (PG_DEGRADED)
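
While the OSDs were down, PGs fell into inactive/peering/degraded states, as the health updates above show. Standard commands for watching the situation and the subsequent recovery:

    # Live cluster status and the specific unhealthy PGs
    ceph -s
    ceph health detail
    ceph pg dump_stuck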

--------------------------------------

2019-04-30 05:31:46.580027 mon.cu-pve04 mon.0 192.168.7.204:6789/0 11871 : cluster [INF] Standby daemon mds.cu-pve05 is not responding, dropping it
2019-04-30 05:31:46.591494 mon.cu-pve04 mon.0 192.168.7.204:6789/0 11872 : cluster [DBG] fsmap cephfs-1/1/1 up {0=cu-pve06=up:active}, 1 up:standby
2019-04-30 05:31:50.842218 mgr.cu-pve05 client.84813 192.168.7.205:0/2429672320 9143 : cluster [DBG] pgmap v9201: 1152 pgs: 1152 active+clean; 50.1GiB data, 175GiB used, 52.2TiB / 52.4TiB avail
2019-04-30 05:31:52.872419 mgr.cu-pve05 client.84813 192.168.7.205:0/2429672320 9144 : cluster [DBG] pgmap v9202: 1152 pgs: 1152 active+clean; 50.1GiB data, 175GiB used, 52.2TiB / 52.4TiB avail
2019-04-30 05:31:54.899490 mgr.cu-pve05 client.84813 192.168.7.205:0/2429672320 9145 : cluster [DBG] pgmap v9203: 1152 pgs: 1152 active+clean; 50.1GiB data, 175GiB used, 52.2TiB / 52.4TiB avail
2019-04-30 05:31:56.925830 mgr.cu-pve05 client.84813 192.168.7.205:0/2429672320 9146 : cluster [DBG] pgmap v9204: 1152 pgs: 1152 active+clean; 50.1GiB data, 175GiB used, 52.2TiB / 52.4TiB avail
2019-04-30 05:31:58.957234 mgr.cu-pve05 client.84813 192.168.7.205:0/2429672320 9147 : cluster [DBG] pgmap v9205: 1152 pgs: 1152 active+clean; 50.1GiB data, 175GiB used, 52.2TiB / 52.4TiB avail
2019-04-30 05:32:01.596600 mon.cu-pve04 mon.0 192.168.7.204:6789/0 11890 : cluster [DBG] mgrmap e22: cu-pve05(active)

2019-04-30 05:43:16.717729 mon.cu-pve04 mon.0 192.168.7.204:6789/0 12763 : cluster [INF] osd.18 marked down after no beacon for 902.818940 seconds
2019-04-30 05:43:16.717846 mon.cu-pve04 mon.0 192.168.7.204:6789/0 12764 : cluster [INF] osd.19 marked down after no beacon for 902.818731 seconds
2019-04-30 05:43:16.717914 mon.cu-pve04 mon.0 192.168.7.204:6789/0 12765 : cluster [INF] osd.23 marked down after no beacon for 900.786850 seconds
2019-04-30 05:43:16.726253 mon.cu-pve04 mon.0 192.168.7.204:6789/0 12766 : cluster [WRN] Health check failed: 3 osds down (OSD_DOWN)
2019-04-30 05:43:16.742278 mon.cu-pve04 mon.0 192.168.7.204:6789/0 12767 : cluster [DBG] osdmap e253: 24 total, 21 up, 24 in
2019-04-30 05:43:17.753181 mon.cu-pve04 mon.0 192.168.7.204:6789/0 12771 : cluster [DBG] osdmap e254: 24 total, 21 up, 24 in
2019-04-30 05:43:19.209031 mon.cu-pve04 mon.0 192.168.7.204:6789/0 12774 : cluster [WRN] Health check failed: Reduced data availability: 51 pgs inactive, 293 pgs peering (PG_AVAILABILITY)

-----------------------------------------
2019-04-30 08:56:22.240506 mon.cu-pve04 mon.0 192.168.7.204:6789/0 19905 : cluster [DBG] Standby manager daemon cu-pve04 started
2019-04-30 05:43:17.030698 osd.18 osd.18 192.168.7.204:6811/5641 3 : cluster [WRN] Monitor daemon marked osd.18 down, but it is still running
2019-04-30 05:43:17.030714 osd.18 osd.18 192.168.7.204:6811/5641 4 : cluster [DBG] map e253 wrongly marked me down at e253
2019-04-30 05:43:18.450669 osd.19 osd.19 192.168.7.204:6807/5309 3 : cluster [WRN] Monitor daemon marked osd.19 down, but it is still running
2019-04-30 05:43:18.450689 osd.19 osd.19 192.168.7.204:6807/5309 4 : cluster [DBG] map e254 wrongly marked me down at e253
2019-04-30 05:43:18.652645 osd.23 osd.23 192.168.7.204:6801/4516 3 : cluster [WRN] Monitor daemon marked osd.23 down, but it is still running

2019-04-30 05:44:07.065692 osd.20 osd.20 192.168.7.204:6809/5441 4 : cluster [DBG] map e263 wrongly marked me down at e263
2019-04-30 08:56:22.458718 mon.cu-pve04 mon.0 192.168.7.204:6789/0 19906 : cluster [INF] daemon mds.cu-pve05 restarted
2019-04-30 08:56:26.088398 mon.cu-pve04 mon.0 192.168.7.204:6789/0 19910 : cluster [DBG] Standby manager daemon cu-pve06 started
2019-04-30 08:56:26.495852 mon.cu-pve04 mon.0 192.168.7.204:6789/0 19911 : cluster [DBG] mgrmap e23: cu-pve05(active), standbys: cu-pve04
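
The paired "Monitor daemon marked osd.X down, but it is still running" / "map eNNN wrongly marked me down" messages confirm the daemons themselves were healthy; only their beacons were being discarded. The 08:56 entries (mds restarted, standby mgrs started) suggest the daemons were brought back once the clocks were fixed. If any OSD stays down after the clocks converge, restarting it is usually enough -- a sketch using the standard systemd unit names, with OSD id 18 as an example:

    # On the node that owns the OSD
    systemctl restart chronyd        # or ntpd, whichever is in use
    systemctl restart ceph-osd@18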
