Steps to Remove a Node from Deepgreen/Greenplum

Neither Greenplum nor Deepgreen officially provides a method or recommendation for removing a node, but in practice it can be done. Because of the uncertainty involved, removing a node may well cause other problems, so be sure to take a backup first and proceed with caution. The specific steps are as follows:

1. Check the current database status (12 instances)

[gpadmin@sdw1 ~]$ gpstate
20170816:12:53:25:097578 gpstate:sdw1:gpadmin-[INFO]:-Starting gpstate with args:
20170816:12:53:25:097578 gpstate:sdw1:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 4.3.99.00 build Deepgreen DB'
20170816:12:53:25:097578 gpstate:sdw1:gpadmin-[INFO]:-master Greenplum Version: 'PostgreSQL 8.2.15 (Greenplum Database 4.3.99.00 build Deepgreen DB) on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.9.2 20150212 (Red Hat 4.9.2-6) compiled on Jul 6 2017 03:04:10'
20170816:12:53:25:097578 gpstate:sdw1:gpadmin-[INFO]:-Obtaining Segment details from master...
20170816:12:53:25:097578 gpstate:sdw1:gpadmin-[INFO]:-Gathering data from segments...
..
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:-Greenplum instance status summary
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:-----------------------------------------------------
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:- Master instance = Active
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:- Master standby = No master standby configured
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:- Total segment instance count from metadata = 12
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:-----------------------------------------------------
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:- Primary Segment Status
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:-----------------------------------------------------
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:- Total primary segments = 12
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:- Total primary segment valid (at master) = 12
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:- Total primary segment failures (at master) = 0
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:- Total number of postmaster.pid files missing = 0
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:- Total number of postmaster.pid files found = 12
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:- Total number of postmaster.pid PIDs missing = 0
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:- Total number of postmaster.pid PIDs found = 12
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:- Total number of /tmp lock files missing = 0
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:- Total number of /tmp lock files found = 12
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:- Total number postmaster processes missing = 0
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:- Total number postmaster processes found = 12
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:-----------------------------------------------------
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:- Mirror Segment Status
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:-----------------------------------------------------
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:- Mirrors not configured on this array
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:-----------------------------------------------------

2. Back up the database in parallel

Use the gpcrondump command to back up the database. The procedure is not repeated here; consult the documentation if anything is unclear. A minimal example is sketched below.
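
For reference only, a full backup of a single database might look like the following; the database name testdb and the backup directory /backup/dumps are assumptions, adjust them to your environment:

gpcrondump -x testdb -u /backup/dumps -a   # -x database to dump, -u backup target directory, -a run without prompting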

3. Shut down the current database

[gpadmin@sdw1 ~]$ gpstop -M fast
20170816:12:54:10:097793 gpstop:sdw1:gpadmin-[INFO]:-Starting gpstop with args: -M fast
20170816:12:54:10:097793 gpstop:sdw1:gpadmin-[INFO]:-Gathering information and validating the environment...
20170816:12:54:10:097793 gpstop:sdw1:gpadmin-[INFO]:-Obtaining Greenplum Master catalog information
20170816:12:54:10:097793 gpstop:sdw1:gpadmin-[INFO]:-Obtaining Segment details from master...
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:-Greenplum Version: 'postgres (Greenplum Database) 4.3.99.00 build Deepgreen DB'
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:---------------------------------------------
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:-Master instance parameters
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:---------------------------------------------
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:- Master Greenplum instance process active PID = 31250
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:- Database = template1
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:- Master port = 5432
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:- Master directory = /hgdata/master/hgdwseg-1
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:- Shutdown mode = fast
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:- Timeout = 120
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:- Shutdown Master standby host = Off
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:---------------------------------------------
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:-Segment instances that will be shutdown:
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:---------------------------------------------
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:- Host Datadir Port Status
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:- sdw1 /hgdata/primary/hgdwseg0 25432 u
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:- sdw1 /hgdata/primary/hgdwseg1 25433 u
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:- sdw1 /hgdata/primary/hgdwseg2 25434 u
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:- sdw1 /hgdata/primary/hgdwseg3 25435 u
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:- sdw1 /hgdata/primary/hgdwseg4 25436 u
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:- sdw1 /hgdata/primary/hgdwseg5 25437 u
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:- sdw1 /hgdata/primary/hgdwseg6 25438 u
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:- sdw1 /hgdata/primary/hgdwseg7 25439 u
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:- sdw1 /hgdata/primary/hgdwseg8 25440 u
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:- sdw1 /hgdata/primary/hgdwseg9 25441 u
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:- sdw1 /hgdata/primary/hgdwseg10 25442 u
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:- sdw1 /hgdata/primary/hgdwseg11 25443 u
Continue with Greenplum instance shutdown Yy|Nn (default=N):
> y
20170816:12:54:12:097793 gpstop:sdw1:gpadmin-[INFO]:-There are 0 connections to the database
20170816:12:54:12:097793 gpstop:sdw1:gpadmin-[INFO]:-Commencing Master instance shutdown with mode='fast'
20170816:12:54:12:097793 gpstop:sdw1:gpadmin-[INFO]:-Master host=sdw1
20170816:12:54:12:097793 gpstop:sdw1:gpadmin-[INFO]:-Detected 0 connections to database
20170816:12:54:12:097793 gpstop:sdw1:gpadmin-[INFO]:-Using standard WAIT mode of 120 seconds
20170816:12:54:12:097793 gpstop:sdw1:gpadmin-[INFO]:-Commencing Master instance shutdown with mode=fast
20170816:12:54:12:097793 gpstop:sdw1:gpadmin-[INFO]:-Master segment instance directory=/hgdata/master/hgdwseg-1
20170816:12:54:13:097793 gpstop:sdw1:gpadmin-[INFO]:-Attempting forceful termination of any leftover master process
20170816:12:54:13:097793 gpstop:sdw1:gpadmin-[INFO]:-Terminating processes for segment /hgdata/master/hgdwseg-1
20170816:12:54:13:097793 gpstop:sdw1:gpadmin-[INFO]:-No standby master host configured
20170816:12:54:13:097793 gpstop:sdw1:gpadmin-[INFO]:-Commencing parallel segment instance shutdown, please wait...
20170816:12:54:13:097793 gpstop:sdw1:gpadmin-[INFO]:-0.00% of jobs completed
20170816:12:54:23:097793 gpstop:sdw1:gpadmin-[INFO]:-100.00% of jobs completed
20170816:12:54:23:097793 gpstop:sdw1:gpadmin-[INFO]:-----------------------------------------------------
20170816:12:54:23:097793 gpstop:sdw1:gpadmin-[INFO]:- Segments stopped successfully = 12
20170816:12:54:23:097793 gpstop:sdw1:gpadmin-[INFO]:- Segments with errors during stop = 0
20170816:12:54:23:097793 gpstop:sdw1:gpadmin-[INFO]:-----------------------------------------------------
20170816:12:54:23:097793 gpstop:sdw1:gpadmin-[INFO]:-Successfully shutdown 12 of 12 segment instances
20170816:12:54:23:097793 gpstop:sdw1:gpadmin-[INFO]:-Database successfully shutdown with no errors reported
20170816:12:54:23:097793 gpstop:sdw1:gpadmin-[INFO]:-Cleaning up leftover gpmmon process
20170816:12:54:23:097793 gpstop:sdw1:gpadmin-[INFO]:-No leftover gpmmon process found
20170816:12:54:23:097793 gpstop:sdw1:gpadmin-[INFO]:-Cleaning up leftover gpsmon processes
20170816:12:54:23:097793 gpstop:sdw1:gpadmin-[INFO]:-No leftover gpsmon processes on some hosts. not attempting forceful termination on these hosts
20170816:12:54:23:097793 gpstop:sdw1:gpadmin-[INFO]:-Cleaning up leftover shared memory

4. Start the database in admin (master-only) mode

[gpadmin@sdw1 ~]$ gpstart -m
20170816:12:54:40:098061 gpstart:sdw1:gpadmin-[INFO]:-Starting gpstart with args: -m
20170816:12:54:40:098061 gpstart:sdw1:gpadmin-[INFO]:-Gathering information and validating the environment...
20170816:12:54:40:098061 gpstart:sdw1:gpadmin-[INFO]:-Greenplum Binary Version: 'postgres (Greenplum Database) 4.3.99.00 build Deepgreen DB'
20170816:12:54:40:098061 gpstart:sdw1:gpadmin-[INFO]:-Greenplum Catalog Version: '201310150'
20170816:12:54:40:098061 gpstart:sdw1:gpadmin-[INFO]:-Master-only start requested in configuration without a standby master.
Continue with master-only startup Yy|Nn (default=N):
> y
20170816:12:54:41:098061 gpstart:sdw1:gpadmin-[INFO]:-Starting Master instance in admin mode
20170816:12:54:42:098061 gpstart:sdw1:gpadmin-[INFO]:-Obtaining Greenplum Master catalog information
20170816:12:54:42:098061 gpstart:sdw1:gpadmin-[INFO]:-Obtaining Segment details from master...
20170816:12:54:42:098061 gpstart:sdw1:gpadmin-[INFO]:-Setting new master era
20170816:12:54:42:098061 gpstart:sdw1:gpadmin-[INFO]:-Master Started...

5. Connect to the database in utility mode

[gpadmin@sdw1 ~]$ PGOPTIONS="-c gp_session_role=utility" psql -d postgres
psql (8.2.15)
Type "help" for help.

6. Delete the segment

postgres=# select * from gp_segment_configuration;
dbid | content | role | preferred_role | mode | status | port | hostname | address | replication_port | san_mounts
------+---------+------+----------------+------+--------+-------+----------+---------+------------------+------------
1 | -1 | p | p | s | u | 5432 | sdw1 | sdw1 | |
2 | 0 | p | p | s | u | 25432 | sdw1 | sdw1 | |
3 | 1 | p | p | s | u | 25433 | sdw1 | sdw1 | |
4 | 2 | p | p | s | u | 25434 | sdw1 | sdw1 | |
5 | 3 | p | p | s | u | 25435 | sdw1 | sdw1 | |
6 | 4 | p | p | s | u | 25436 | sdw1 | sdw1 | |
7 | 5 | p | p | s | u | 25437 | sdw1 | sdw1 | |
8 | 6 | p | p | s | u | 25438 | sdw1 | sdw1 | |
9 | 7 | p | p | s | u | 25439 | sdw1 | sdw1 | |
10 | 8 | p | p | s | u | 25440 | sdw1 | sdw1 | |
11 | 9 | p | p | s | u | 25441 | sdw1 | sdw1 | |
12 | 10 | p | p | s | u | 25442 | sdw1 | sdw1 | |
13 | 11 | p | p | s | u | 25443 | sdw1 | sdw1 | |
(13 rows)
postgres=# set allow_system_table_mods='dml';
SET
postgres=# delete from gp_segment_configuration where dbid=13;
DELETE 1
postgres=# select * from gp_segment_configuration;
dbid | content | role | preferred_role | mode | status | port | hostname | address | replication_port | san_mounts
------+---------+------+----------------+------+--------+-------+----------+---------+------------------+------------
1 | -1 | p | p | s | u | 5432 | sdw1 | sdw1 | |
2 | 0 | p | p | s | u | 25432 | sdw1 | sdw1 | |
3 | 1 | p | p | s | u | 25433 | sdw1 | sdw1 | |
4 | 2 | p | p | s | u | 25434 | sdw1 | sdw1 | |
5 | 3 | p | p | s | u | 25435 | sdw1 | sdw1 | |
6 | 4 | p | p | s | u | 25436 | sdw1 | sdw1 | |
7 | 5 | p | p | s | u | 25437 | sdw1 | sdw1 | |
8 | 6 | p | p | s | u | 25438 | sdw1 | sdw1 | |
9 | 7 | p | p | s | u | 25439 | sdw1 | sdw1 | |
10 | 8 | p | p | s | u | 25440 | sdw1 | sdw1 | |
11 | 9 | p | p | s | u | 25441 | sdw1 | sdw1 | |
12 | 10 | p | p | s | u | 25442 | sdw1 | sdw1 | |
(12 rows)

7. Delete the filespace entry

postgres=# select * from pg_filespace_entry;
fsefsoid | fsedbid | fselocation
----------+---------+---------------------------
3052 | 1 | /hgdata/master/hgdwseg-1
3052 | 2 | /hgdata/primary/hgdwseg0
3052 | 3 | /hgdata/primary/hgdwseg1
3052 | 4 | /hgdata/primary/hgdwseg2
3052 | 5 | /hgdata/primary/hgdwseg3
3052 | 6 | /hgdata/primary/hgdwseg4
3052 | 7 | /hgdata/primary/hgdwseg5
3052 | 8 | /hgdata/primary/hgdwseg6
3052 | 9 | /hgdata/primary/hgdwseg7
3052 | 10 | /hgdata/primary/hgdwseg8
3052 | 11 | /hgdata/primary/hgdwseg9
3052 | 12 | /hgdata/primary/hgdwseg10
3052 | 13 | /hgdata/primary/hgdwseg11
(13 rows)
postgres=#  delete from pg_filespace_entry where fsedbid=13;
DELETE 1
postgres=# select * from pg_filespace_entry;
fsefsoid | fsedbid | fselocation
----------+---------+---------------------------
3052 | 1 | /hgdata/master/hgdwseg-1
3052 | 2 | /hgdata/primary/hgdwseg0
3052 | 3 | /hgdata/primary/hgdwseg1
3052 | 4 | /hgdata/primary/hgdwseg2
3052 | 5 | /hgdata/primary/hgdwseg3
3052 | 6 | /hgdata/primary/hgdwseg4
3052 | 7 | /hgdata/primary/hgdwseg5
3052 | 8 | /hgdata/primary/hgdwseg6
3052 | 9 | /hgdata/primary/hgdwseg7
3052 | 10 | /hgdata/primary/hgdwseg8
3052 | 11 | /hgdata/primary/hgdwseg9
3052 | 12 | /hgdata/primary/hgdwseg10
(12 rows)

8. Exit admin mode and start the database normally

[gpadmin@sdw1 ~]$ gpstop -m
20170816:12:56:52:098095 gpstop:sdw1:gpadmin-[INFO]:-Starting gpstop with args: -m
20170816:12:56:52:098095 gpstop:sdw1:gpadmin-[INFO]:-Gathering information and validating the environment...
20170816:12:56:52:098095 gpstop:sdw1:gpadmin-[INFO]:-Obtaining Greenplum Master catalog information
20170816:12:56:52:098095 gpstop:sdw1:gpadmin-[INFO]:-Obtaining Segment details from master...
20170816:12:56:52:098095 gpstop:sdw1:gpadmin-[INFO]:-Greenplum Version: 'postgres (Greenplum Database) 4.3.99.00 build Deepgreen DB'
20170816:12:56:52:098095 gpstop:sdw1:gpadmin-[INFO]:-There are 0 connections to the database
20170816:12:56:52:098095 gpstop:sdw1:gpadmin-[INFO]:-Commencing Master instance shutdown with mode='smart'
20170816:12:56:52:098095 gpstop:sdw1:gpadmin-[INFO]:-Master host=sdw1
20170816:12:56:52:098095 gpstop:sdw1:gpadmin-[INFO]:-Commencing Master instance shutdown with mode=smart
20170816:12:56:52:098095 gpstop:sdw1:gpadmin-[INFO]:-Master segment instance directory=/hgdata/master/hgdwseg-1
20170816:12:56:53:098095 gpstop:sdw1:gpadmin-[INFO]:-Attempting forceful termination of any leftover master process
20170816:12:56:53:098095 gpstop:sdw1:gpadmin-[INFO]:-Terminating processes for segment /hgdata/master/hgdwseg-1
[gpadmin@sdw1 ~]$ gpstart
20170816:12:57:02:098112 gpstart:sdw1:gpadmin-[INFO]:-Starting gpstart with args:
20170816:12:57:02:098112 gpstart:sdw1:gpadmin-[INFO]:-Gathering information and validating the environment...
20170816:12:57:02:098112 gpstart:sdw1:gpadmin-[INFO]:-Greenplum Binary Version: 'postgres (Greenplum Database) 4.3.99.00 build Deepgreen DB'
20170816:12:57:02:098112 gpstart:sdw1:gpadmin-[INFO]:-Greenplum Catalog Version: '201310150'
20170816:12:57:02:098112 gpstart:sdw1:gpadmin-[INFO]:-Starting Master instance in admin mode
20170816:12:57:03:098112 gpstart:sdw1:gpadmin-[INFO]:-Obtaining Greenplum Master catalog information
20170816:12:57:03:098112 gpstart:sdw1:gpadmin-[INFO]:-Obtaining Segment details from master...
20170816:12:57:03:098112 gpstart:sdw1:gpadmin-[INFO]:-Setting new master era
20170816:12:57:03:098112 gpstart:sdw1:gpadmin-[INFO]:-Master Started...
20170816:12:57:03:098112 gpstart:sdw1:gpadmin-[INFO]:-Shutting down master
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:---------------------------
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:-Master instance parameters
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:---------------------------
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:-Database = template1
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:-Master Port = 5432
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:-Master directory = /hgdata/master/hgdwseg-1
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:-Timeout = 600 seconds
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:-Master standby = Off
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:---------------------------------------
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:-Segment instances that will be started
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:---------------------------------------
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:- Host Datadir Port
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:- sdw1 /hgdata/primary/hgdwseg0 25432
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:- sdw1 /hgdata/primary/hgdwseg1 25433
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:- sdw1 /hgdata/primary/hgdwseg2 25434
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:- sdw1 /hgdata/primary/hgdwseg3 25435
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:- sdw1 /hgdata/primary/hgdwseg4 25436
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:- sdw1 /hgdata/primary/hgdwseg5 25437
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:- sdw1 /hgdata/primary/hgdwseg6 25438
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:- sdw1 /hgdata/primary/hgdwseg7 25439
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:- sdw1 /hgdata/primary/hgdwseg8 25440
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:- sdw1 /hgdata/primary/hgdwseg9 25441
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:- sdw1 /hgdata/primary/hgdwseg10 25442
Continue with Greenplum instance startup Yy|Nn (default=N):
> y
20170816:12:57:07:098112 gpstart:sdw1:gpadmin-[INFO]:-Commencing parallel segment instance startup, please wait...
.......
20170816:12:57:14:098112 gpstart:sdw1:gpadmin-[INFO]:-Process results...
20170816:12:57:14:098112 gpstart:sdw1:gpadmin-[INFO]:-----------------------------------------------------
20170816:12:57:14:098112 gpstart:sdw1:gpadmin-[INFO]:- Successful segment starts = 11
20170816:12:57:14:098112 gpstart:sdw1:gpadmin-[INFO]:- Failed segment starts = 0
20170816:12:57:14:098112 gpstart:sdw1:gpadmin-[INFO]:- Skipped segment starts (segments are marked down in configuration) = 0
20170816:12:57:14:098112 gpstart:sdw1:gpadmin-[INFO]:-----------------------------------------------------
20170816:12:57:14:098112 gpstart:sdw1:gpadmin-[INFO]:-
20170816:12:57:14:098112 gpstart:sdw1:gpadmin-[INFO]:-Successfully started 11 of 11 segment instances
20170816:12:57:14:098112 gpstart:sdw1:gpadmin-[INFO]:-----------------------------------------------------
20170816:12:57:14:098112 gpstart:sdw1:gpadmin-[INFO]:-Starting Master instance sdw1 directory /hgdata/master/hgdwseg-1
20170816:12:57:15:098112 gpstart:sdw1:gpadmin-[INFO]:-Command pg_ctl reports Master sdw1 instance active
20170816:12:57:15:098112 gpstart:sdw1:gpadmin-[INFO]:-No standby master configured. skipping...
20170816:12:57:15:098112 gpstart:sdw1:gpadmin-[INFO]:-Database successfully started
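
Optionally, rerun gpstate at this point to confirm that the cluster now reports 11 primary segment instances instead of 12:

[gpadmin@sdw1 ~]$ gpstate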

9. Restore the removed node's backup files into the current database with psql

psql -d postgres -f xxxx.sql  # the restore process is not detailed here; a rough sketch follows
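
As a rough sketch of what note 1 below refers to: gpcrondump writes a per-segment dump file under db_dumps/<date>/ inside each segment's data directory, so the file for the removed segment (data directory /hgdata/primary/hgdwseg11, dbid 13 here) can be decompressed and replayed through the master. The exact path and file name are assumptions; locate the real file in your own backup set:

# hypothetical path/file name; check your db_dumps directory for the dump belonging to dbid 13
gunzip -c /hgdata/primary/hgdwseg11/db_dumps/20170816/gp_dump_0_13_<timestamp>.gz | psql -d postgres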

Notes:

1) This procedure restores only the data of the removed node.

2) Running this procedure in reverse can add the removed node back, but restoring the data is time-consuming, roughly on par with rebuilding the database and restoring from a backup.
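
As a sketch of the catalog half of that reverse procedure (assuming the removed segment's data directory /hgdata/primary/hgdwseg11 is still intact), the rows deleted in steps 6 and 7 would be re-inserted in utility mode before restarting:

postgres=# set allow_system_table_mods='dml';
postgres=# insert into gp_segment_configuration (dbid, content, role, preferred_role, mode, status, port, hostname, address) values (13, 11, 'p', 'p', 's', 'u', 25443, 'sdw1', 'sdw1');
postgres=# insert into pg_filespace_entry (fsefsoid, fsedbid, fselocation) values (3052, 13, '/hgdata/primary/hgdwseg11');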

Reposted from: https://www.sypopo.com/post/M95Rm39Or7/
