pve_ceph Issue Summary
Two clusters with the same name were created on the same network, which produced the following corosync errors:
Jun 24 11:56:08 cu-pve05 kyc_zabbix_ceph[2419970]: ]}
Jun 24 11:56:08 cu-pve05 corosync[3954]: error [TOTEM ] Digest does not match
Jun 24 11:56:08 cu-pve05 corosync[3954]: alert [TOTEM ] Received message has invalid digest... ignoring.
Jun 24 11:56:08 cu-pve05 corosync[3954]: alert [TOTEM ] Invalid packet data
Jun 24 11:56:08 cu-pve05 corosync[3954]: [TOTEM ] Digest does not match
Jun 24 11:56:08 cu-pve05 corosync[3954]: [TOTEM ] Received message has invalid digest... ignoring.
Jun 24 11:56:08 cu-pve05 corosync[3954]: [TOTEM ] Invalid packet data
Jun 24 11:56:08 cu-pve05 kyc_zabbix_ceph[2419970]: Response from "192.168.7.114:10051": "processed: 3; failed: 48; total: 51; seconds spent: 0.001189"
Jun 24 11:56:08 cu-pve05 kyc_zabbix_ceph[2419970]: sent: 51; skipped: 0; total: 51
Jun 24 11:56:08 cu-pve05 corosync[3954]: error [TOTEM ] Digest does not match
Jun 24 11:56:08 cu-pve05 corosync[3954]: alert [TOTEM ] Received message has invalid digest... ignoring.
Jun 24 11:56:08 cu-pve05 corosync[3954]: alert [TOTEM ] Invalid packet data
Jun 24 11:56:08 cu-pve05 corosync[3954]: [TOTEM ] Digest does not match
Jun 24 11:56:08 cu-pve05 corosync[3954]: [TOTEM ] Received message has invalid digest... ignoring.
Jun 24 11:56:08 cu-pve05 corosync[3954]: [TOTEM ] Invalid packet data
Jun 24 11:56:08 cu-pve05 corosync[3954]: error [TOTEM ] Digest does not match
Jun 24 11:56:08 cu-pve05 corosync[3954]: alert [TOTEM ] Received message has invalid digest... ignoring.
Jun 24 11:56:08 cu-pve05 corosync[3954]: alert [TOTEM ] Invalid packet data
Jun 24 11:56:08 cu-pve05 corosync[3954]: [TOTEM ] Digest does not match
Jun 24 11:56:08 cu-pve05 corosync[3954]: [TOTEM ] Received message has invalid digest... ignoring.
Jun 24 11:56:08 cu-pve05 corosync[3954]: [TOTEM ] Invalid packet data
Jun 24 11:56:09 cu-pve05 corosync[3954]: error [TOTEM ] Digest does not match
Jun 24 11:56:09 cu-pve05 corosync[3954]: alert [TOTEM ] Received message has invalid digest... ignoring.
Jun 24 11:56:09 cu-pve05 corosync[3954]: alert [TOTEM ] Invalid packet data
Jun 24 11:56:09 cu-pve05 corosync[3954]: [TOTEM ] Digest does not match
Jun 24 11:56:09 cu-pve05 corosync[3954]: [TOTEM ] Received message has invalid digest... ignoring.
Jun 24 11:56:09 cu-pve05 corosync[3954]: [TOTEM ] Invalid packet data
Jun 24 11:56:09 cu-pve05 corosync[3954]: error [TOTEM ] Digest does not match
Jun 24 11:56:09 cu-pve05 corosync[3954]: alert [TOTEM ] Received message has invalid digest... ignoring.
Jun 24 11:56:09 cu-pve05 corosync[3954]: alert [TOTEM ] Invalid packet data
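A quick way to confirm this kind of conflict is to compare the corosync cluster name and authkey on the nodes involved: two independent clusters that share a cluster name and network but use different authkeys will drop each other's TOTEM packets with exactly these digest errors. A minimal check sketch, assuming the default PVE/corosync file locations:
# Cluster name as written to the PVE-managed corosync config
grep cluster_name /etc/pve/corosync.conf
# Cluster name as seen by the running corosync instance
corosync-cmapctl | grep totem.cluster_name
# Fingerprint of the authkey; packets signed with a different key fail the digest check
md5sum /etc/corosync/authkey
If both clusters report the same cluster_name but different authkey fingerprints, one of them has to be renamed or moved onto a separate network/multicast address.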
After deleting one of the clusters, the errors below were reported instead, seen in the syslog under the node view:
Jul 11 18:48:01 cu-pve03 kernel: libceph: mon0 192.168.7.4:6789 socket error on write
Jul 11 18:48:04 cu-pve03 kernel: libceph: mon0 192.168.7.4:6789 socket closed (con state CONNECTING)
Jul 11 18:48:07 cu-pve03 kernel: libceph: mon0 192.168.7.4:6789 socket error on read
Jul 11 18:48:35 cu-pve03 kernel: libceph: mon2 192.168.7.6:6789 socket error on read
Jul 11 18:48:39 cu-pve03 kernel: libceph: mon2 192.168.7.6:6789 socket error on read
Jul 11 18:48:42 cu-pve03 kernel: libceph: mon2 192.168.7.6:6789 socket closed (con state CONNECTING)
Jul 11 18:48:45 cu-pve03 kernel: libceph: mon2 192.168.7.6:6789 socket closed (con state CONNECTING)
Jul 11 18:48:51 cu-pve03 pvedaemon[4111390]: worker exit
Jul 11 18:48:51 cu-pve03 pvedaemon[4692]: worker 4111390 finished
Jul 11 18:48:51 cu-pve03 pvedaemon[4692]: starting 1 worker(s)
Jul 11 18:48:51 cu-pve03 pvedaemon[4692]: worker 4148787 started
Jul 11 18:49:00 cu-pve03 systemd[1]: Starting Proxmox VE replication runner...
Jul 11 18:49:00 cu-pve03 systemd[1]: Started Proxmox VE replication runner.
Jul 11 18:49:06 cu-pve03 kernel: libceph: mon0 192.168.7.4:6789 socket error on read
Jul 11 18:49:10 cu-pve03 kernel: libceph: mon0 192.168.7.4:6789 socket closed (con state CONNECTING)
Jul 11 18:49:13 cu-pve03 kernel: libceph: mon0 192.168.7.4:6789 socket error on read
Jul 11 18:49:16 cu-pve03 kernel: libceph: mon0 192.168.7.4:6789 socket closed (con state CONNECTING)
Jul 11 18:49:23 cu-pve03 kernel: libceph: mon0 192.168.7.4:6789 socket closed (con state CONNECTING)
Jul 11 18:49:30 cu-pve03 kernel: libceph: mon1 192.168.7.5:6789 socket closed (con state CONNECTING)
Jul 11 18:49:36 cu-pve03 kernel: libceph: mon1 192.168.7.5:6789 socket error on write
Jul 11 18:49:39 cu-pve03 kernel: libceph: mon1 192.168.7.5:6789 socket closed (con state CONNECTING)
Jul 11 18:49:42 cu-pve03 kernel: libceph: mon1 192.168.7.5:6789 socket error on read
Jul 11 18:49:50 cu-pve03 kernel: libceph: mon1 192.168.7.5:6789 socket error on write
Jul 11 18:50:00 cu-pve03 systemd[1]: Starting Proxmox VE replication runner...
Jul 11 18:50:00 cu-pve03 systemd[1]: Started Proxmox VE replication runner.
Jul 11 18:50:00 cu-pve03 kernel: libceph: mon0 192.168.7.4:6789 socket error on read
Jul 11 18:50:07 cu-pve03 kernel: libceph: mon0 192.168.7.4:6789 socket error on write
Jul 11 18:50:14 cu-pve03 kernel: libceph: mon0 192.168.7.4:6789 socket closed (con state CONNECTING)
Jul 11 18:50:26 cu-pve03 kernel: libceph: mon0 192.168.7.4:6789 socket closed (con state CONNECTING)
Jul 11 18:50:31 cu-pve03 kernel: libceph: mon2 192.168.7.6:6789 socket error on write
Jul 11 18:50:34 cu-pve03 kernel: libceph: mon2 192.168.7.6:6789 socket closed (con state CONNECTING)
Jul 11 18:50:37 cu-pve03 kernel: libceph: mon2 192.168.7.6:6789 socket error on write
Jul 11 18:50:40 cu-pve03 kernel: libceph: mon2 192.168.7.6:6789 socket error on write
Jul 11 18:50:55 cu-pve03 kernel: libceph: mon2 192.168.7.6:6789 socket closed (con state CONNECTING)
Jul 11 18:51:00 cu-pve03 systemd[1]: Starting Proxmox VE replication runner...
Jul 11 18:51:00 cu-pve03 systemd[1]: Started Proxmox VE replication runner.
Jul 11 18:51:02 cu-pve03 kernel: libceph: mon1 192.168.7.5:6789 socket closed (con state CONNECTING)
Jul 11 18:51:09 cu-pve03 kernel: libceph: mon1 192.168.7.5:6789 socket error on read
Jul 11 18:51:16 cu-pve03 kernel: libceph: mon1 192.168.7.5:6789 socket closed (con state CONNECTING)
Jul 11 18:51:19 cu-pve03 kernel: libceph: mon1 192.168.7.5:6789 socket error on read
Jul 11 18:51:33 cu-pve03 kernel: libceph: mon2 192.168.7.6:6789 socket closed (con state CONNECTING)
Jul 11 18:51:43 cu-pve03 kernel: libceph: mon2 192.168.7.6:6789 socket error on read
Jul 11 18:51:48 cu-pve03 kernel: libceph: mon2 192.168.7.6:6789 socket error on read
Jul 11 18:51:51 cu-pve03 kernel: libceph: mon2 192.168.7.6:6789 socket error on write
Jul 11 18:52:00 cu-pve03 systemd[1]: Starting Proxmox VE replication runner...
Jul 11 18:52:00 cu-pve03 systemd[1]: Started Proxmox VE replication runner.
Jul 11 18:52:03 cu-pve03 kernel: libceph: mon0 192.168.7.4:6789 socket closed (con state CONNECTING)
Jul 11 18:52:14 cu-pve03 kernel: libceph: mon0 192.168.7.4:6789 socket error on write
Jul 11 18:52:26 cu-pve03 kernel: libceph: mon0 192.168.7.4:6789 socket error on write
Jul 11 18:52:34 cu-pve03 kernel: libceph: mon2 192.168.7.6:6789 socket closed (con state CONNECTING)
Jul 11 18:52:37 cu-pve03 kernel: libceph: mon2 192.168.7.6:6789 socket error on write
Jul 11 18:52:40 cu-pve03 kernel: libceph: mon2 192.168.7.6:6789 socket closed (con state CONNECTING)
Jul 11 18:52:43 cu-pve03 kernel: libceph: mon2 192.168.7.6:6789 socket closed (con state CONNECTING)
Jul 11 18:52:50 cu-pve03 kernel: libceph: mon2 192.168.7.6:6789 socket closed (con state CONNECTING)
Jul 11 18:53:00 cu-pve03 systemd[1]: Starting Proxmox VE replication runner...
Jul 11 18:53:00 cu-pve03 systemd[1]: Started Proxmox VE replication runner.
Jul 11 18:53:01 cu-pve03 kernel: libceph: mon2 192.168.7.6:6789 socket closed (con state CONNECTING)
Jul 11 18:53:05 cu-pve03 kernel: libceph: mon1 192.168.7.5:6789 socket error on write
Jul 11 18:53:08 cu-pve03 kernel: libceph: mon1 192.168.7.5:6789 socket error on write
Jul 11 18:53:11 cu-pve03 kernel: libceph: mon1 192.168.7.5:6789 socket error on write
Jul 11 18:53:14 cu-pve03 kernel: libceph: mon1 192.168.7.5:6789 socket closed (con state CONNECTING)
Jul 11 18:53:28 cu-pve03 kernel: libceph: mon1 192.168.7.5:6789 socket closed (con state CONNECTING)
Jul 11 18:53:51 cu-pve03 kernel: libceph: mon0 192.168.7.4:6789 socket error on read
Jul 11 18:53:55 cu-pve03 kernel: libceph: mon0 192.168.7.4:6789 socket error on read
Jul 11 18:53:58 cu-pve03 kernel: libceph: mon0 192.168.7.4:6789 socket closed (con state CONNECTING)
Jul 11 18:54:00 cu-pve03 systemd[1]: Starting Proxmox VE replication runner...
Jul 11 18:54:00 cu-pve03 systemd[1]: Started Proxmox VE replication runner.
Jul 11 18:54:01 cu-pve03 kernel: libceph: mon0 192.168.7.4:6789 socket closed (con state CONNECTING)
Jul 11 18:54:06 cu-pve03 kernel: libceph: mon2 192.168.7.6:6789 socket error on read
Jul 11 18:54:09 cu-pve03 kernel: libceph: mon2 192.168.7.6:6789 socket closed (con state CONNECTING)
Jul 11 18:54:17 cu-pve03 kernel: libceph: mon2 192.168.7.6:6789 socket closed (con state CONNECTING)
Jul 11 18:54:18 cu-pve03 kernel: libceph: mds0 192.168.7.5:6800 socket closed (con state CONNECTING)
Jul 11 18:54:32 cu-pve03 pveproxy[4118327]: worker exit
Jul 11 18:54:32 cu-pve03 pveproxy[7729]: worker 4118327 finished
Jul 11 18:54:32 cu-pve03 pveproxy[7729]: starting 1 worker(s)
Jul 11 18:54:32 cu-pve03 pveproxy[7729]: worker 4150738 started
Jul 11 18:54:37 cu-pve03 kernel: libceph: mon1 192.168.7.5:6789 socket error on read
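These libceph messages mean the kernel client on cu-pve03 can no longer reach any of the three monitors (192.168.7.4-6), so RBD and CephFS mounts stall. After removing one of the duplicate clusters, it is worth checking that the monitors are actually running and that the PVE storage definitions still point at live monitor addresses; a diagnostic sketch, assuming standard pveceph service names (the hostname expansion is only an example and should be run on a mon node):
# Overall cluster health and monitor quorum
ceph -s
ceph mon stat
# Monitor service on a mon node
systemctl status ceph-mon@"$(hostname -s)"
# PVE storage layer: confirm kycrbd/kycfs are active and their monhost entries are current
pvesm status
grep -A 5 -E 'kycrbd|kycfs' /etc/pve/storage.cfg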
Open issues:
1. Backups in the te environment only reach about 40 MB/s.
2. Database VM backups: the mount point needs to be taken into account, /ceph/fileserver/...
3. Copy the files on kycfs.
vzdump 202 --compress lzo --storage kycfs --mode snapshot --node cu-pve05 --remove 0
vzdump 151 --mode stop --remove 0 --storage kycfs --compress lzo --node cu-pve02 --bwlimit 200000
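For reference, vzdump's --bwlimit value is in KiB/s, so 200000 corresponds to roughly 195 MiB/s; it caps the backup read rate rather than guaranteeing it. If the same defaults should apply to every job on a node, they can be set once instead of being repeated on the command line; a sketch, assuming the stock /etc/vzdump.conf:
# /etc/vzdump.conf -- node-wide vzdump defaults (values are examples)
bwlimit: 200000
compress: lzo
storage: kycfs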
--------------------------------------------------------
INFO: starting new backup job: vzdump 151 --mode stop --remove 0 --storage kycfs --compress lzo --node cu-pve02
INFO: Starting Backup of VM 151 (qemu)
INFO: Backup started at 2019-07-10 16:26:44
INFO: status = stopped
INFO: update VM 151: -lock backup
INFO: backup mode: stop
INFO: ionice priority: 7
INFO: VM Name: cu-dbs-151
INFO: include disk 'scsi0' 'kycrbd:vm-151-disk-0' 100G
INFO: include disk 'scsi1' 'kycrbd:vm-151-disk-1' 300G
INFO: snapshots found (not included into backup)
INFO: creating archive '/mnt/pve/kycfs/dump/vzdump-qemu-151-2019_07_10-16_26_44.vma.lzo'
INFO: starting kvm to execute backup task
INFO: started backup task 'a29c0ebc-52ee-4823-a5e6-56e7443c2cae'
INFO: status: 0% (499122176/429496729600), sparse 0% (423092224), duration 3, read/write 166/25 MB/s
INFO: status: 1% (4353687552/429496729600), sparse 0% (4277657600), duration 22, read/write 202/0 MB/s
---------------------------------------------------------
INFO: starting new backup job: vzdump 192 --compress lzo --bwlimit --storage kycfs --mode snapshot --node cu-pve06 --remove 0
INFO: Starting Backup of VM 192 (qemu)
INFO: Backup started at 2019-07-10 16:28:53
INFO: status = stopped
INFO: update VM 192: -lock backup
INFO: backup mode: stop
INFO: ionice priority: 7
INFO: VM Name: cu-tpl-192
INFO: include disk 'ide0' 'kycrbd:vm-192-disk-0' 100G
INFO: creating archive '/mnt/pve/kycfs/dump/vzdump-qemu-192-2019_07_10-16_28_53.vma.lzo'
INFO: starting kvm to execute backup task
INFO: started backup task '58adf55a-971c-49aa-b42d-595f8e3a0cf3'
INFO: status: 0% (197656576/107374182400), sparse 0% (114630656), duration 3, read/write 65/27 MB/s
INFO: status: 1% (1090519040/107374182400), sparse 0% (556826624), duration 15, read/write 74/37 MB/s
INFO: status: 2% (2181038080/107374182400), sparse 0% (563113984), duration 42, read/write 40/40 MB/s
INFO: status: 3% (3257532416/107374182400), sparse 0% (581787648), duration 69, read/write 39/39 MB/s
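The read/write rates above settle around 39-40 MB/s once the sparse region is past, which matches open issue 1. To see whether the bottleneck is the CephFS backup target rather than vzdump itself, the target can be benchmarked directly; a rough sketch (the pool name cephfs_data and the test sizes are assumptions, adjust to the actual data pool):
# Raw write throughput of the backing pool (default 4 MiB objects, 30 s run)
rados bench -p cephfs_data 30 write --no-cleanup
rados -p cephfs_data cleanup
# Sequential write into the mounted backup directory, bypassing the page cache
dd if=/dev/zero of=/mnt/pve/kycfs/dump/throughput_test bs=1M count=2048 oflag=direct
rm /mnt/pve/kycfs/dump/throughput_test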