1. Problem check

[gpadmin@greenplum01 conf]$ psql -c "select * from gp_segment_configuration where status='d'"
 dbid | content | role | preferred_role | mode | status | port  |  hostname   |   address   | replication_port
------+---------+------+----------------+------+--------+-------+-------------+-------------+------------------
   12 |       2 | m    | m              | s    | d      | 43002 | greenplum03 | greenplum03 |            44002
    7 |       5 | m    | p              | s    | d      |  6001 | greenplum03 | greenplum03 |            34001
(2 rows)
Two segments are reported with status 'd' (down).
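To see when and why these segments were marked down, the configuration change history can also be checked; a minimal sketch, assuming the standard gp_configuration_history catalog table in Greenplum 5:

psql -c "select * from gp_configuration_history order by time desc limit 10;"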
[gpadmin@greenplum01 conf]$ gpstate -m
20190711:17:06:51:025238 gpstate:greenplum01:gpadmin-[INFO]:-Starting gpstate with args: -m
20190711:17:06:51:025238 gpstate:greenplum01:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 5.16.0 build commit:23cec7df0406d69d6552a4bbb77035dba4d7dd44'
20190711:17:06:51:025238 gpstate:greenplum01:gpadmin-[INFO]:-master Greenplum Version: 'PostgreSQL 8.3.23 (Greenplum Database 5.16.0 build commit:23cec7df0406d69d6552a4bbb77035dba4d7dd44) on x86_64-pc-linux-gnu, compiled by GCC gcc (GCC) 6.2.0, 64-bit compiled on Jan 16 2019 02:32:15'
20190711:17:06:51:025238 gpstate:greenplum01:gpadmin-[INFO]:-Obtaining Segment details from master...
20190711:17:06:51:025238 gpstate:greenplum01:gpadmin-[INFO]:--------------------------------------------------------------
20190711:17:06:51:025238 gpstate:greenplum01:gpadmin-[INFO]:--Current GPDB mirror list and status
20190711:17:06:51:025238 gpstate:greenplum01:gpadmin-[INFO]:--Type = Group
20190711:17:06:51:025238 gpstate:greenplum01:gpadmin-[INFO]:--------------------------------------------------------------
20190711:17:06:51:025238 gpstate:greenplum01:gpadmin-[INFO]:- Mirror Datadir Port Status Data Status
20190711:17:06:51:025238 gpstate:greenplum01:gpadmin-[INFO]:- greenplum03 /greenplum/data/mirror/gpseg0 43000 Passive Synchronized
20190711:17:06:51:025238 gpstate:greenplum01:gpadmin-[INFO]:- greenplum03 /greenplum/data/mirror/gpseg1 43001 Passive Synchronized
20190711:17:06:51:025238 gpstate:greenplum01:gpadmin-[WARNING]:-greenplum03 /greenplum/data2/mirror/gpseg2 43002 Failed <<<<<<<< this is the failed mirror
20190711:17:06:51:025238 gpstate:greenplum01:gpadmin-[INFO]:- greenplum03 /greenplum/data2/mirror/gpseg3 43003 Passive Synchronized
20190711:17:06:51:025238 gpstate:greenplum01:gpadmin-[INFO]:- greenplum02 /greenplum/data/mirror/gpseg4 43000 Passive Synchronized
20190711:17:06:51:025238 gpstate:greenplum01:gpadmin-[INFO]:- greenplum02 /greenplum/data/mirror/gpseg5 43001 Acting as Primary Change Tracking
20190711:17:06:51:025238 gpstate:greenplum01:gpadmin-[INFO]:- greenplum02 /greenplum/data2/mirror/gpseg6 43002 Passive Synchronized
20190711:17:06:51:025238 gpstate:greenplum01:gpadmin-[INFO]:- greenplum02 /greenplum/data2/mirror/gpseg7 43003 Passive Synchronized
20190711:17:06:51:025238 gpstate:greenplum01:gpadmin-[INFO]:--------------------------------------------------------------
20190711:17:06:51:025238 gpstate:greenplum01:gpadmin-[WARNING]:-1 segment(s) configured as mirror(s) are acting as primaries
20190711:17:06:51:025238 gpstate:greenplum01:gpadmin-[WARNING]:-1 segment(s) configured as mirror(s) have failed ------------ note this warning
20190711:17:06:51:025238 gpstate:greenplum01:gpadmin-[WARNING]:-1 mirror segment(s) acting as primaries are in change tracking

01. Check connectivity

First verify connectivity: ping the affected segment host and make sure it responds.

ping greenplum03
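Since gprecoverseg relies on passwordless SSH from the master to the segment hosts, it is also worth confirming that SSH still works and that the data directory is mounted. A minimal sketch using the gpssh utility (host and path as in this cluster):

gpssh -h greenplum03 -e 'uptime'
gpssh -h greenplum03 -e 'df -h /greenplum/data2'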

02. Recover the failed segments

gprecoverseg

The recovery process starts the failed segments and determines which changed files need to be synchronized.
After gprecoverseg completes, the system enters Resynchronizing mode and begins copying the changed files. This runs in the background, so the system stays online and keeps accepting database requests.
When resynchronization finishes, the system status is Synchronized.
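If you want to review the recovery plan first, or an incremental recovery keeps failing, gprecoverseg has a few commonly used variants; a sketch (the file path below is only illustrative):

gprecoverseg -o /tmp/recover_config     # write the recovery plan to a file without executing it
gprecoverseg -i /tmp/recover_config     # run recovery from a reviewed plan file
gprecoverseg -F                         # force a full copy from the current primary instead of an incremental recovery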
Two segments need to be recovered. Recovery log:
 [gpadmin@greenplum01 conf]$ gprecoverseg
20190711:17:10:44:025375 gprecoverseg:greenplum01:gpadmin-[INFO]:-Starting gprecoverseg with args:
20190711:17:10:44:025375 gprecoverseg:greenplum01:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 5.16.0 build commit:23cec7df0406d69d6552a4bbb77035dba4d7dd44'
20190711:17:10:44:025375 gprecoverseg:greenplum01:gpadmin-[INFO]:-master Greenplum Version: 'PostgreSQL 8.3.23 (Greenplum Database 5.16.0 build commit:23cec7df0406d69d6552a4bbb77035dba4d7dd44) on x86_64-pc-linux-gnu, compiled by GCC gcc (GCC) 6.2.0, 64-bit compiled on Jan 16 2019 02:32:15'
20190711:17:10:44:025375 gprecoverseg:greenplum01:gpadmin-[INFO]:-Checking if segments are ready to connect
20190711:17:10:44:025375 gprecoverseg:greenplum01:gpadmin-[INFO]:-Obtaining Segment details from master...
20190711:17:10:44:025375 gprecoverseg:greenplum01:gpadmin-[INFO]:-Obtaining Segment details from master...
20190711:17:10:45:025375 gprecoverseg:greenplum01:gpadmin-[INFO]:-Heap checksum setting is consistent between master and the segments that are candidates for recoverseg
20190711:17:10:45:025375 gprecoverseg:greenplum01:gpadmin-[INFO]:-Greenplum instance recovery parameters
20190711:17:10:45:025375 gprecoverseg:greenplum01:gpadmin-[INFO]:----------------------------------------------------------
20190711:17:10:45:025375 gprecoverseg:greenplum01:gpadmin-[INFO]:-Recovery type = Standard
20190711:17:10:45:025375 gprecoverseg:greenplum01:gpadmin-[INFO]:----------------------------------------------------------
20190711:17:10:45:025375 gprecoverseg:greenplum01:gpadmin-[INFO]:-Recovery 1 of 2
20190711:17:10:45:025375 gprecoverseg:greenplum01:gpadmin-[INFO]:----------------------------------------------------------
20190711:17:10:45:025375 gprecoverseg:greenplum01:gpadmin-[INFO]:- Synchronization mode = Incremental
20190711:17:10:45:025375 gprecoverseg:greenplum01:gpadmin-[INFO]:- Failed instance host = greenplum03
20190711:17:10:45:025375 gprecoverseg:greenplum01:gpadmin-[INFO]:- Failed instance address = greenplum03
20190711:17:10:45:025375 gprecoverseg:greenplum01:gpadmin-[INFO]:- Failed instance directory = /greenplum/data2/mirror/gpseg2
20190711:17:10:45:025375 gprecoverseg:greenplum01:gpadmin-[INFO]:- Failed instance port = 43002
20190711:17:10:45:025375 gprecoverseg:greenplum01:gpadmin-[INFO]:- Failed instance replication port = 44002
20190711:17:10:45:025375 gprecoverseg:greenplum01:gpadmin-[INFO]:- Recovery Source instance host = greenplum02
20190711:17:10:45:025375 gprecoverseg:greenplum01:gpadmin-[INFO]:- Recovery Source instance address = greenplum02
20190711:17:10:45:025375 gprecoverseg:greenplum01:gpadmin-[INFO]:- Recovery Source instance directory = /greenplum/data2/primary/gpseg2
20190711:17:10:45:025375 gprecoverseg:greenplum01:gpadmin-[INFO]:- Recovery Source instance port = 6002
20190711:17:10:45:025375 gprecoverseg:greenplum01:gpadmin-[INFO]:- Recovery Source instance replication port = 34002
20190711:17:10:45:025375 gprecoverseg:greenplum01:gpadmin-[INFO]:- Recovery Target = in-place
20190711:17:10:45:025375 gprecoverseg:greenplum01:gpadmin-[INFO]:----------------------------------------------------------
20190711:17:10:45:025375 gprecoverseg:greenplum01:gpadmin-[INFO]:-Recovery 2 of 2
20190711:17:10:45:025375 gprecoverseg:greenplum01:gpadmin-[INFO]:----------------------------------------------------------
20190711:17:10:45:025375 gprecoverseg:greenplum01:gpadmin-[INFO]:- Synchronization mode = Incremental
20190711:17:10:45:025375 gprecoverseg:greenplum01:gpadmin-[INFO]:- Failed instance host = greenplum03
20190711:17:10:45:025375 gprecoverseg:greenplum01:gpadmin-[INFO]:- Failed instance address = greenplum03
20190711:17:10:45:025375 gprecoverseg:greenplum01:gpadmin-[INFO]:- Failed instance directory = /greenplum/data/primary/gpseg5
20190711:17:10:45:025375 gprecoverseg:greenplum01:gpadmin-[INFO]:- Failed instance port = 6001
20190711:17:10:45:025375 gprecoverseg:greenplum01:gpadmin-[INFO]:- Failed instance replication port = 34001
20190711:17:10:45:025375 gprecoverseg:greenplum01:gpadmin-[INFO]:- Recovery Source instance host = greenplum02
20190711:17:10:45:025375 gprecoverseg:greenplum01:gpadmin-[INFO]:- Recovery Source instance address = greenplum02
20190711:17:10:45:025375 gprecoverseg:greenplum01:gpadmin-[INFO]:- Recovery Source instance directory = /greenplum/data/mirror/gpseg5
20190711:17:10:45:025375 gprecoverseg:greenplum01:gpadmin-[INFO]:- Recovery Source instance port = 43001
20190711:17:10:45:025375 gprecoverseg:greenplum01:gpadmin-[INFO]:- Recovery Source instance replication port = 44001
20190711:17:10:45:025375 gprecoverseg:greenplum01:gpadmin-[INFO]:- Recovery Target = in-place
20190711:17:10:45:025375 gprecoverseg:greenplum01:gpadmin-[INFO]:----------------------------------------------------------
Continue with segment recovery procedure Yy|Nn (default=N):
> Y
20190711:17:11:31:025375 gprecoverseg:greenplum01:gpadmin-[INFO]:-2 segment(s) to recover
20190711:17:11:31:025375 gprecoverseg:greenplum01:gpadmin-[INFO]:-Ensuring 2 failed segment(s) are stopped
20190711:17:11:32:025375 gprecoverseg:greenplum01:gpadmin-[INFO]:-Ensuring that shared memory is cleaned up for stopped segments
updating flat files
20190711:17:11:32:025375 gprecoverseg:greenplum01:gpadmin-[INFO]:-Updating configuration with new mirrors
20190711:17:11:33:025375 gprecoverseg:greenplum01:gpadmin-[INFO]:-Updating mirrors
.
20190711:17:11:34:025375 gprecoverseg:greenplum01:gpadmin-[INFO]:-Starting mirrors
20190711:17:11:34:025375 gprecoverseg:greenplum01:gpadmin-[INFO]:-era is 24a58010f9c5a05a_190711113124
20190711:17:11:34:025375 gprecoverseg:greenplum01:gpadmin-[INFO]:-Commencing parallel primary and mirror segment instance startup, please wait...
..
20190711:17:11:36:025375 gprecoverseg:greenplum01:gpadmin-[INFO]:-Process results...
20190711:17:11:36:025375 gprecoverseg:greenplum01:gpadmin-[INFO]:-Updating configuration to mark mirrors up
20190711:17:11:36:025375 gprecoverseg:greenplum01:gpadmin-[INFO]:-Updating primaries
20190711:17:11:36:025375 gprecoverseg:greenplum01:gpadmin-[INFO]:-Commencing parallel primary conversion of 2 segments, please wait...
.
20190711:17:11:37:025375 gprecoverseg:greenplum01:gpadmin-[INFO]:-Process results...
20190711:17:11:37:025375 gprecoverseg:greenplum01:gpadmin-[INFO]:-Done updating primaries
20190711:17:11:37:025375 gprecoverseg:greenplum01:gpadmin-[INFO]:-******************************************************************
20190711:17:11:37:025375 gprecoverseg:greenplum01:gpadmin-[INFO]:-Updating segments for resynchronization is completed.
20190711:17:11:37:025375 gprecoverseg:greenplum01:gpadmin-[INFO]:-For segments updated successfully, resynchronization will continue in the background.
20190711:17:11:37:025375 gprecoverseg:greenplum01:gpadmin-[INFO]:-
20190711:17:11:37:025375 gprecoverseg:greenplum01:gpadmin-[INFO]:-Use gpstate -s to check the resynchronization progress.
20190711:17:11:37:025375 gprecoverseg:greenplum01:gpadmin-[INFO]:-******************************************************************
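As the log says, resynchronization continues in the background. Its progress can be watched with gpstate -s, or by querying the catalog for segments whose mode is not yet 's' (synchronized); a minimal sketch:

gpstate -s
psql -c "select content, role, preferred_role, mode, status from gp_segment_configuration where mode <> 's';"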

03. Check synchronization

gpstate -m
[gpadmin@greenplum01 conf]$ gpstate -m
20190711:17:12:10:025484 gpstate:greenplum01:gpadmin-[INFO]:-Starting gpstate with args: -m
20190711:17:12:10:025484 gpstate:greenplum01:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 5.16.0 build commit:23cec7df0406d69d6552a4bbb77035dba4d7dd44'
20190711:17:12:10:025484 gpstate:greenplum01:gpadmin-[INFO]:-master Greenplum Version: 'PostgreSQL 8.3.23 (Greenplum Database 5.16.0 build commit:23cec7df0406d69d6552a4bbb77035dba4d7dd44) on x86_64-pc-linux-gnu, compiled by GCC gcc (GCC) 6.2.0, 64-bit compiled on Jan 16 2019 02:32:15'
20190711:17:12:10:025484 gpstate:greenplum01:gpadmin-[INFO]:-Obtaining Segment details from master...
20190711:17:12:10:025484 gpstate:greenplum01:gpadmin-[INFO]:--------------------------------------------------------------
20190711:17:12:10:025484 gpstate:greenplum01:gpadmin-[INFO]:--Current GPDB mirror list and status
20190711:17:12:10:025484 gpstate:greenplum01:gpadmin-[INFO]:--Type = Group
20190711:17:12:10:025484 gpstate:greenplum01:gpadmin-[INFO]:--------------------------------------------------------------
20190711:17:12:10:025484 gpstate:greenplum01:gpadmin-[INFO]:- Mirror Datadir Port Status Data Status
20190711:17:12:10:025484 gpstate:greenplum01:gpadmin-[INFO]:- greenplum03 /greenplum/data/mirror/gpseg0 43000 Passive Synchronized
20190711:17:12:10:025484 gpstate:greenplum01:gpadmin-[INFO]:- greenplum03 /greenplum/data/mirror/gpseg1 43001 Passive Synchronized
20190711:17:12:10:025484 gpstate:greenplum01:gpadmin-[INFO]:- greenplum03 /greenplum/data2/mirror/gpseg2 43002 Passive Synchronized
20190711:17:12:10:025484 gpstate:greenplum01:gpadmin-[INFO]:- greenplum03 /greenplum/data2/mirror/gpseg3 43003 Passive Synchronized
20190711:17:12:10:025484 gpstate:greenplum01:gpadmin-[INFO]:- greenplum02 /greenplum/data/mirror/gpseg4 43000 Passive Synchronized
20190711:17:12:10:025484 gpstate:greenplum01:gpadmin-[INFO]:- greenplum02 /greenplum/data/mirror/gpseg5 43001 Acting as Primary Synchronized
20190711:17:12:10:025484 gpstate:greenplum01:gpadmin-[INFO]:- greenplum02 /greenplum/data2/mirror/gpseg6 43002 Passive Synchronized
20190711:17:12:10:025484 gpstate:greenplum01:gpadmin-[INFO]:- greenplum02 /greenplum/data2/mirror/gpseg7 43003 Passive Synchronized
20190711:17:12:10:025484 gpstate:greenplum01:gpadmin-[INFO]:--------------------------------------------------------------
20190711:17:12:10:025484 gpstate:greenplum01:gpadmin-[WARNING]:-1 segment(s) configured as mirror(s) are acting as primaries

The failed mirror (gpseg2) has been recovered. Note that the gpseg5 pair is still running with swapped roles: the mirror on greenplum02 is acting as primary.

04. Restore segments to their preferred roles

  When a primary segment goes down, its mirror is activated and takes over as the acting primary. After gprecoverseg finishes, the roles remain swapped: the recovered segment rejoins only as a mirror rather than resuming its original primary role. To rebalance the system, the segments need to be returned to the roles they had at initialization (their preferred roles).
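A quick way to see which segment pairs are currently running outside their preferred roles is to compare role with preferred_role in the catalog; a minimal sketch:

psql -c "select dbid, content, role, preferred_role, hostname from gp_segment_configuration where role <> preferred_role;"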

Check the segment status:
gpstate -e

  

Run gpstate -m to make sure every mirror is Synchronized.

gpstate -m

If any mirror still shows Resynchronizing, the background copy is still running; be patient and let it finish (one way to keep an eye on it is sketched below).
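One convenient way to wait is to re-run gpstate periodically, for example with the standard watch utility (the 60-second interval is arbitrary):

watch -n 60 "gpstate -m | grep -Ei 'resynchron|change tracking'"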

Run gprecoverseg with the -r option to return the segments to their preferred roles:
gprecoverseg -r
After rebalancing, run gpstate -e to confirm that all segments are back in their preferred roles:
gpstate -e

At this point the cluster is back to normal.
