Two clusters with the same name were created on the same network; corosync then logged a continuous stream of TOTEM digest errors:

Jun 24 11:56:08 cu-pve05 kyc_zabbix_ceph[2419970]: ]}
Jun 24 11:56:08 cu-pve05 corosync[3954]: error [TOTEM ] Digest does not match
Jun 24 11:56:08 cu-pve05 corosync[3954]: alert [TOTEM ] Received message has invalid digest... ignoring.
Jun 24 11:56:08 cu-pve05 corosync[3954]: alert [TOTEM ] Invalid packet data
Jun 24 11:56:08 cu-pve05 corosync[3954]: [TOTEM ] Digest does not match
Jun 24 11:56:08 cu-pve05 corosync[3954]: [TOTEM ] Received message has invalid digest... ignoring.
Jun 24 11:56:08 cu-pve05 corosync[3954]: [TOTEM ] Invalid packet data
Jun 24 11:56:08 cu-pve05 kyc_zabbix_ceph[2419970]: Response from "192.168.7.114:10051": "processed: 3; failed: 48; total: 51; seconds spent: 0.001189"
Jun 24 11:56:08 cu-pve05 kyc_zabbix_ceph[2419970]: sent: 51; skipped: 0; total: 51
Jun 24 11:56:08 cu-pve05 corosync[3954]: error [TOTEM ] Digest does not match
Jun 24 11:56:08 cu-pve05 corosync[3954]: alert [TOTEM ] Received message has invalid digest... ignoring.
Jun 24 11:56:08 cu-pve05 corosync[3954]: alert [TOTEM ] Invalid packet data
Jun 24 11:56:08 cu-pve05 corosync[3954]: [TOTEM ] Digest does not match
Jun 24 11:56:08 cu-pve05 corosync[3954]: [TOTEM ] Received message has invalid digest... ignoring.
Jun 24 11:56:08 cu-pve05 corosync[3954]: [TOTEM ] Invalid packet data
Jun 24 11:56:08 cu-pve05 corosync[3954]: error [TOTEM ] Digest does not match
Jun 24 11:56:08 cu-pve05 corosync[3954]: alert [TOTEM ] Received message has invalid digest... ignoring.
Jun 24 11:56:08 cu-pve05 corosync[3954]: alert [TOTEM ] Invalid packet data
Jun 24 11:56:08 cu-pve05 corosync[3954]: [TOTEM ] Digest does not match
Jun 24 11:56:08 cu-pve05 corosync[3954]: [TOTEM ] Received message has invalid digest... ignoring.
Jun 24 11:56:08 cu-pve05 corosync[3954]: [TOTEM ] Invalid packet data
Jun 24 11:56:09 cu-pve05 corosync[3954]: error [TOTEM ] Digest does not match
Jun 24 11:56:09 cu-pve05 corosync[3954]: alert [TOTEM ] Received message has invalid digest... ignoring.
Jun 24 11:56:09 cu-pve05 corosync[3954]: alert [TOTEM ] Invalid packet data
Jun 24 11:56:09 cu-pve05 corosync[3954]: [TOTEM ] Digest does not match
Jun 24 11:56:09 cu-pve05 corosync[3954]: [TOTEM ] Received message has invalid digest... ignoring.
Jun 24 11:56:09 cu-pve05 corosync[3954]: [TOTEM ] Invalid packet data
Jun 24 11:56:09 cu-pve05 corosync[3954]: error [TOTEM ] Digest does not match
Jun 24 11:56:09 cu-pve05 corosync[3954]: alert [TOTEM ] Received message has invalid digest... ignoring.
Jun 24 11:56:09 cu-pve05 corosync[3954]: alert [TOTEM ] Invalid packet data
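
A plausible explanation (an assumption from the log, not something the log proves by itself): both clusters used the same multicast group, but each had its own corosync authkey, so every node kept receiving TOTEM packets it could not authenticate. One way to confirm this, assuming the standard PVE file locations:

# Compare the totem settings on nodes from both clusters; identical
# cluster_name/mcastaddr/mcastport means their traffic collides
grep -E 'cluster_name|bindnetaddr|mcastaddr|mcastport' /etc/pve/corosync.conf

# Compare the authkey fingerprint across nodes; different keys on
# the same multicast group produce exactly these digest errors
md5sum /etc/corosync/authkey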

After one of the clusters was deleted, the errors below appeared instead, visible in the syslog under the node view:

Jul 11 18:48:01 cu-pve03 kernel: libceph: mon0 192.168.7.4:6789 socket error on write
Jul 11 18:48:04 cu-pve03 kernel: libceph: mon0 192.168.7.4:6789 socket closed (con state CONNECTING)
Jul 11 18:48:07 cu-pve03 kernel: libceph: mon0 192.168.7.4:6789 socket error on read
Jul 11 18:48:35 cu-pve03 kernel: libceph: mon2 192.168.7.6:6789 socket error on read
Jul 11 18:48:39 cu-pve03 kernel: libceph: mon2 192.168.7.6:6789 socket error on read
Jul 11 18:48:42 cu-pve03 kernel: libceph: mon2 192.168.7.6:6789 socket closed (con state CONNECTING)
Jul 11 18:48:45 cu-pve03 kernel: libceph: mon2 192.168.7.6:6789 socket closed (con state CONNECTING)
Jul 11 18:48:51 cu-pve03 pvedaemon[4111390]: worker exit
Jul 11 18:48:51 cu-pve03 pvedaemon[4692]: worker 4111390 finished
Jul 11 18:48:51 cu-pve03 pvedaemon[4692]: starting 1 worker(s)
Jul 11 18:48:51 cu-pve03 pvedaemon[4692]: worker 4148787 started
Jul 11 18:49:00 cu-pve03 systemd[1]: Starting Proxmox VE replication runner...
Jul 11 18:49:00 cu-pve03 systemd[1]: Started Proxmox VE replication runner.
Jul 11 18:49:06 cu-pve03 kernel: libceph: mon0 192.168.7.4:6789 socket error on read
Jul 11 18:49:10 cu-pve03 kernel: libceph: mon0 192.168.7.4:6789 socket closed (con state CONNECTING)
Jul 11 18:49:13 cu-pve03 kernel: libceph: mon0 192.168.7.4:6789 socket error on read
Jul 11 18:49:16 cu-pve03 kernel: libceph: mon0 192.168.7.4:6789 socket closed (con state CONNECTING)
Jul 11 18:49:23 cu-pve03 kernel: libceph: mon0 192.168.7.4:6789 socket closed (con state CONNECTING)
Jul 11 18:49:30 cu-pve03 kernel: libceph: mon1 192.168.7.5:6789 socket closed (con state CONNECTING)
Jul 11 18:49:36 cu-pve03 kernel: libceph: mon1 192.168.7.5:6789 socket error on write
Jul 11 18:49:39 cu-pve03 kernel: libceph: mon1 192.168.7.5:6789 socket closed (con state CONNECTING)
Jul 11 18:49:42 cu-pve03 kernel: libceph: mon1 192.168.7.5:6789 socket error on read
Jul 11 18:49:50 cu-pve03 kernel: libceph: mon1 192.168.7.5:6789 socket error on write
Jul 11 18:50:00 cu-pve03 systemd[1]: Starting Proxmox VE replication runner...
Jul 11 18:50:00 cu-pve03 systemd[1]: Started Proxmox VE replication runner.
Jul 11 18:50:00 cu-pve03 kernel: libceph: mon0 192.168.7.4:6789 socket error on read
Jul 11 18:50:07 cu-pve03 kernel: libceph: mon0 192.168.7.4:6789 socket error on write
Jul 11 18:50:14 cu-pve03 kernel: libceph: mon0 192.168.7.4:6789 socket closed (con state CONNECTING)
Jul 11 18:50:26 cu-pve03 kernel: libceph: mon0 192.168.7.4:6789 socket closed (con state CONNECTING)
Jul 11 18:50:31 cu-pve03 kernel: libceph: mon2 192.168.7.6:6789 socket error on write
Jul 11 18:50:34 cu-pve03 kernel: libceph: mon2 192.168.7.6:6789 socket closed (con state CONNECTING)
Jul 11 18:50:37 cu-pve03 kernel: libceph: mon2 192.168.7.6:6789 socket error on write
Jul 11 18:50:40 cu-pve03 kernel: libceph: mon2 192.168.7.6:6789 socket error on write
Jul 11 18:50:55 cu-pve03 kernel: libceph: mon2 192.168.7.6:6789 socket closed (con state CONNECTING)
Jul 11 18:51:00 cu-pve03 systemd[1]: Starting Proxmox VE replication runner...
Jul 11 18:51:00 cu-pve03 systemd[1]: Started Proxmox VE replication runner.
Jul 11 18:51:02 cu-pve03 kernel: libceph: mon1 192.168.7.5:6789 socket closed (con state CONNECTING)
Jul 11 18:51:09 cu-pve03 kernel: libceph: mon1 192.168.7.5:6789 socket error on read
Jul 11 18:51:16 cu-pve03 kernel: libceph: mon1 192.168.7.5:6789 socket closed (con state CONNECTING)
Jul 11 18:51:19 cu-pve03 kernel: libceph: mon1 192.168.7.5:6789 socket error on read
Jul 11 18:51:33 cu-pve03 kernel: libceph: mon2 192.168.7.6:6789 socket closed (con state CONNECTING)
Jul 11 18:51:43 cu-pve03 kernel: libceph: mon2 192.168.7.6:6789 socket error on read
Jul 11 18:51:48 cu-pve03 kernel: libceph: mon2 192.168.7.6:6789 socket error on read
Jul 11 18:51:51 cu-pve03 kernel: libceph: mon2 192.168.7.6:6789 socket error on write
Jul 11 18:52:00 cu-pve03 systemd[1]: Starting Proxmox VE replication runner...
Jul 11 18:52:00 cu-pve03 systemd[1]: Started Proxmox VE replication runner.
Jul 11 18:52:03 cu-pve03 kernel: libceph: mon0 192.168.7.4:6789 socket closed (con state CONNECTING)
Jul 11 18:52:14 cu-pve03 kernel: libceph: mon0 192.168.7.4:6789 socket error on write
Jul 11 18:52:26 cu-pve03 kernel: libceph: mon0 192.168.7.4:6789 socket error on write
Jul 11 18:52:34 cu-pve03 kernel: libceph: mon2 192.168.7.6:6789 socket closed (con state CONNECTING)
Jul 11 18:52:37 cu-pve03 kernel: libceph: mon2 192.168.7.6:6789 socket error on write
Jul 11 18:52:40 cu-pve03 kernel: libceph: mon2 192.168.7.6:6789 socket closed (con state CONNECTING)
Jul 11 18:52:43 cu-pve03 kernel: libceph: mon2 192.168.7.6:6789 socket closed (con state CONNECTING)
Jul 11 18:52:50 cu-pve03 kernel: libceph: mon2 192.168.7.6:6789 socket closed (con state CONNECTING)
Jul 11 18:53:00 cu-pve03 systemd[1]: Starting Proxmox VE replication runner...
Jul 11 18:53:00 cu-pve03 systemd[1]: Started Proxmox VE replication runner.
Jul 11 18:53:01 cu-pve03 kernel: libceph: mon2 192.168.7.6:6789 socket closed (con state CONNECTING)
Jul 11 18:53:05 cu-pve03 kernel: libceph: mon1 192.168.7.5:6789 socket error on write
Jul 11 18:53:08 cu-pve03 kernel: libceph: mon1 192.168.7.5:6789 socket error on write
Jul 11 18:53:11 cu-pve03 kernel: libceph: mon1 192.168.7.5:6789 socket error on write
Jul 11 18:53:14 cu-pve03 kernel: libceph: mon1 192.168.7.5:6789 socket closed (con state CONNECTING)
Jul 11 18:53:28 cu-pve03 kernel: libceph: mon1 192.168.7.5:6789 socket closed (con state CONNECTING)
Jul 11 18:53:51 cu-pve03 kernel: libceph: mon0 192.168.7.4:6789 socket error on read
Jul 11 18:53:55 cu-pve03 kernel: libceph: mon0 192.168.7.4:6789 socket error on read
Jul 11 18:53:58 cu-pve03 kernel: libceph: mon0 192.168.7.4:6789 socket closed (con state CONNECTING)
Jul 11 18:54:00 cu-pve03 systemd[1]: Starting Proxmox VE replication runner...
Jul 11 18:54:00 cu-pve03 systemd[1]: Started Proxmox VE replication runner.
Jul 11 18:54:01 cu-pve03 kernel: libceph: mon0 192.168.7.4:6789 socket closed (con state CONNECTING)
Jul 11 18:54:06 cu-pve03 kernel: libceph: mon2 192.168.7.6:6789 socket error on read
Jul 11 18:54:09 cu-pve03 kernel: libceph: mon2 192.168.7.6:6789 socket closed (con state CONNECTING)
Jul 11 18:54:17 cu-pve03 kernel: libceph: mon2 192.168.7.6:6789 socket closed (con state CONNECTING)
Jul 11 18:54:18 cu-pve03 kernel: libceph: mds0 192.168.7.5:6800 socket closed (con state CONNECTING)
Jul 11 18:54:32 cu-pve03 pveproxy[4118327]: worker exit
Jul 11 18:54:32 cu-pve03 pveproxy[7729]: worker 4118327 finished
Jul 11 18:54:32 cu-pve03 pveproxy[7729]: starting 1 worker(s)
Jul 11 18:54:32 cu-pve03 pveproxy[7729]: worker 4150738 started
Jul 11 18:54:37 cu-pve03 kernel: libceph: mon1 192.168.7.5:6789 socket error on read
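
These libceph errors come from the kernel RBD/CephFS client still retrying monitors that are no longer serving this cluster. A rough checklist for narrowing it down (standard PVE/Ceph commands; the storage IDs kycfs and kycrbd are taken from the logs further down):

# Is any monitor reachable at all?
ceph -s
nc -zv 192.168.7.4 6789

# Which storages does the node still try to activate? A stale
# CephFS/RBD entry keeps the kernel client reconnecting forever.
pvesm status
grep -B1 -A5 'kycfs\|kycrbd' /etc/pve/storage.cfg

# If the old cluster is gone for good, disabling the storage
# stops the retries (re-enable once the entry is corrected)
pvesm set kycfs --disable 1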
Open issues:
1. Backups in the te environment run at only about 40 MB/s.
2. Database VM backups: the mount point needs to be considered, /ceph/fileserver/...
3. Copying files on kycfs.
vzdump 202 --compress lzo  --storage kycfs --mode snapshot --node cu-pve05 --remove 0
vzdump 151 --mode stop --remove 0 --storage kycfs --compress lzo --node cu-pve02 --bwlimit 200000
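
For reference, an annotated version of the second command (flag meanings per the vzdump man page; note that --bwlimit is given in KiB/s, so 200000 is roughly 195 MiB/s):

# --mode stop   : stop the VM for a consistent image (restarted afterwards)
# --remove 0    : keep existing backups, do not prune old ones
# --storage     : target storage ID from /etc/pve/storage.cfg
# --compress lzo: fast compression, larger archives than gzip/zstd
# --bwlimit     : I/O limit in KiB/s (~195 MiB/s here)
vzdump 151 --mode stop --remove 0 --storage kycfs --compress lzo --node cu-pve02 --bwlimit 200000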
--------------------------------------------------------
INFO: starting new backup job: vzdump 151 --mode stop --remove 0 --storage kycfs --compress lzo --node cu-pve02
INFO: Starting Backup of VM 151 (qemu)
INFO: Backup started at 2019-07-10 16:26:44
INFO: status = stopped
INFO: update VM 151: -lock backup
INFO: backup mode: stop
INFO: ionice priority: 7
INFO: VM Name: cu-dbs-151
INFO: include disk 'scsi0' 'kycrbd:vm-151-disk-0' 100G
INFO: include disk 'scsi1' 'kycrbd:vm-151-disk-1' 300G
INFO: snapshots found (not included into backup)
INFO: creating archive '/mnt/pve/kycfs/dump/vzdump-qemu-151-2019_07_10-16_26_44.vma.lzo'
INFO: starting kvm to execute backup task
INFO: started backup task 'a29c0ebc-52ee-4823-a5e6-56e7443c2cae'
INFO: status: 0% (499122176/429496729600), sparse 0% (423092224), duration 3, read/write 166/25 MB/s
INFO: status: 1% (4353687552/429496729600), sparse 0% (4277657600), duration 22, read/write 202/0 MB/s
---------------------------------------------------------
INFO: starting new backup job: vzdump 192 --compress lzo --bwlimit --storage kycfs --mode snapshot --node cu-pve06 --remove 0
INFO: Starting Backup of VM 192 (qemu)
INFO: Backup started at 2019-07-10 16:28:53
INFO: status = stopped
INFO: update VM 192: -lock backup
INFO: backup mode: stop
INFO: ionice priority: 7
INFO: VM Name: cu-tpl-192
INFO: include disk 'ide0' 'kycrbd:vm-192-disk-0' 100G
INFO: creating archive '/mnt/pve/kycfs/dump/vzdump-qemu-192-2019_07_10-16_28_53.vma.lzo'
INFO: starting kvm to execute backup task
INFO: started backup task '58adf55a-971c-49aa-b42d-595f8e3a0cf3'
INFO: status: 0% (197656576/107374182400), sparse 0% (114630656), duration 3, read/write 65/27 MB/s
INFO: status: 1% (1090519040/107374182400), sparse 0% (556826624), duration 15, read/write 74/37 MB/s
INFO: status: 2% (2181038080/107374182400), sparse 0% (563113984), duration 42, read/write 40/40 MB/s
INFO: status: 3% (3257532416/107374182400), sparse 0% (581787648), duration 69, read/write 39/39 MB/s
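
These status lines also put issue 1 above in perspective: at a sustained write rate of about 40 MB/s, a full pass over VM 151's 400 GiB of allocated disk would take roughly three hours (a back-of-the-envelope estimate that ignores the sparse blocks vma skips):

# 429496729600 bytes (from the status line) at 40 MB/s, in minutes
echo $(( 429496729600 / 40000000 / 60 ))   # ≈ 178 minutes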
