Greenplum Database Installation and Deployment (Production Environment)

Hardware configuration:

16 IBM x3650 servers.

Per-node configuration: 2 CPUs x 8 cores, 128 GB RAM, 16 x 900 GB disks, 10GbE NIC.

10GbE switch.

Deployment requirements:

1 Master, 1 Standby Master, and 14 Segment (compute) nodes.

Installation steps:

  1. Install on the Master node
  2. Create the GP installation config files and set up SSH trust
  3. Disable the firewall and its autostart
  4. Disable SELinux
  5. Disk I/O scheduler
  6. Disk read-ahead configuration
  7. Language and character set
  8. Additions to sysctl.conf
  9. User resource limits
  10. Time synchronization
  11. Enable the NIC at boot
  12. Create the OS user (optional)
  13. Create directories and set ownership
  14. Install the GP software on all nodes
  15. Initialize the GP database

1. Install on the Master node

First confirm that the directory layout is identical on every node:

[root@XXXGPM01 db]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda6 20G 440M 18G 3% /
tmpfs 63G 76K 63G 1% /dev/shm
/dev/sda2 485M 39M 421M 9% /boot
/dev/sda1 200M 260K 200M 1% /boot/efi
/dev/sda4 5.5T 3.1G 5.5T 1% /data1
/dev/sdb1 5.8T 34M 5.8T 1% /data2
/dev/sda8 9.7G 150M 9.0G 2% /home
/dev/sda5 49G 3.1G 43G 7% /opt
/dev/sda9 9.7G 151M 9.0G 2% /tmp
/dev/sda7 20G 5.8G 13G 32% /usr
/dev/sda10 9.7G 338M 8.8G 4% /var

Append the following to /etc/hosts on the master node (10GbE IP addresses and hostnames):

172.16.99.18 XXXGPM01
172.16.99.19 XXXGPM02
172.16.99.20 XXXGPD01
172.16.99.21 XXXGPD02
172.16.99.22 XXXGPD03
172.16.99.23 XXXGPD04
172.16.99.24 XXXGPD05
172.16.99.25 XXXGPD06
172.16.99.26 XXXGPD07
172.16.99.27 XXXGPD08
172.16.99.28 XXXGPD09
172.16.99.29 XXXGPD10
172.16.99.30 XXXGPD11
172.16.99.31 XXXGPD12
172.16.99.32 XXXGPD13
172.16.99.33 XXXGPD14
  1. Unpack the installation media

    GP installation package: /opt/db.zip

    cd /data1; unzip /opt/db.zip
  2. Installation directory

    mkdir -p /data1/gpinstall

On node XXXGPM01, unzip greenplum-db-4.x.x.x-build-5-RHEL5-x86_64.zip and run the resulting .bin file as root, following the installer prompts.

# /bin/bash greenplum-db-4.3.2.0-build-1-RHEL5-x86_64.bin

Choose a custom installation directory: /data1/gpinstall/greenplum-db-4.3.2.0

After installation completes, the directory looks like this:

[root@XXXGPM01 gpinstall]# pwd
/data1/gpinstall
[root@XXXGPM01 gpinstall]# ls -lh
total 4.0K
lrwxrwxrwx. 1 root root 22 Jun 4 18:51 greenplum-db -> ./greenplum-db-4.3.2.0
drwxr-xr-x. 11 root root 4.0K Jun 4 18:51 greenplum-db-4.3.2.0

2. Create the GP installation config files and set up SSH trust

`mkdir -p /data1/gpinstall/config`
Create two configuration files:

1. allnodes.txt

XXXGPM01
XXXGPM02
XXXGPD01
XXXGPD02
XXXGPD03
XXXGPD04
XXXGPD05
XXXGPD06
XXXGPD07
XXXGPD08
XXXGPD09
XXXGPD10
XXXGPD11
XXXGPD12
XXXGPD13
XXXGPD14

2. nodes.txt

XXXGPD01
XXXGPD02
XXXGPD03
XXXGPD04
XXXGPD05
XXXGPD06
XXXGPD07
XXXGPD08
XXXGPD09
XXXGPD10
XXXGPD11
XXXGPD12
XXXGPD13
XXXGPD14

3. Configure SSH trust for the root user across all GP nodes:

source /data1/gpinstall/greenplum-db/greenplum_path.sh
gpssh-exkeys -f /data1/gpinstall/config/allnodes.txt

Verify that passwordless SSH works between the nodes. With trust in place, it becomes easy to push identical configuration to all nodes.

First, a small script to sync /etc/hosts to every node:

#!/bin/bash
# Usage: copy a file to every other host in the cluster.
# Example: sh bulkcp.sh /etc/hosts /etc/hosts
# Author: AlfredZhao
# Version: 1.0.0
for ((i=18; i<=33; i++))
do
    scp "$1" 172.16.99.$i:"$2"
    echo "scp $1 172.16.99.$i:$2"
done

Running `gpssh -f /data1/gpinstall/config/allnodes.txt -e ''` and pressing Enter drops you into an interactive session that executes commands on all nodes at once.

3. Disable the firewall and its autostart

Check the current status, then stop the services and remove them from the boot sequence:

service iptables status
service ip6tables status
service libvirtd status

service iptables stop
service ip6tables stop
service libvirtd stop

chkconfig iptables off
chkconfig ip6tables off
chkconfig libvirtd off
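The stop/chkconfig pairs above can be collapsed into one loop. A minimal sketch; it only prints the commands so they can be reviewed first (pipe the output to sh, inside the gpssh session, to execute):

```shell
# Print a "stop and disable" command for each service (RHEL 6 SysV tools).
disable_svcs() {
    for svc in "$@"; do
        echo "service $svc stop; chkconfig $svc off"
    done
}
disable_svcs iptables ip6tables libvirtd
```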

4. Disable SELinux

Check the current SELinux status and configuration:

getenforce
more /etc/selinux/config | grep SELINUX=

Disable SELinux immediately, and permanently via the config file:

setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

5. Disk I/O scheduler

Check the current scheduler:
`cat /sys/block/sd*/queue/scheduler`

Vendor-recommended method:

Edit /boot/grub/menu.lst, find the `kernel /vmlinuz-xxx` line, and append `elevator=deadline` at the end.

Verification:

After a normal boot, `cat /sys/block/*/queue/scheduler` should show: noop anticipatory [deadline] cfq

Problem encountered in practice: on RHEL 6.5, /boot/grub/menu.lst does not exist:

nl /boot/grub/menu.lst

As an alternative, set the scheduler through rc.local with these two lines:

echo deadline > /sys/block/sda/queue/scheduler
echo deadline > /sys/block/sdb/queue/scheduler

To keep the cluster configuration uniform, write the two lines to a temporary file /tmp/gpconfig/1deadline.conf and sync it to every node:

mkdir -p /tmp/gpconfig

Finally, in the interactive gpssh session, run:

cat /tmp/gpconfig/1deadline.conf >> /etc/rc.d/rc.local
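Hard-coding sda and sdb matches this hardware, but a glob over /sys is more robust if the disk set ever changes. A hedged sketch: the function takes the sysfs root as a parameter, so it is demonstrated here against a scratch directory; in rc.local you would call it with /sys (as root).

```shell
# set_deadline ROOT writes "deadline" into every ROOT/block/sd*/queue/scheduler.
set_deadline() {
    for f in "$1"/block/sd*/queue/scheduler; do
        if [ -e "$f" ]; then
            echo deadline > "$f"
        fi
    done
}

# Demo against a scratch tree so the sketch is safe to run anywhere.
demo=$(mktemp -d)
mkdir -p "$demo/block/sda/queue" "$demo/block/sdb/queue"
touch "$demo/block/sda/queue/scheduler" "$demo/block/sdb/queue/scheduler"
set_deadline "$demo"
cat "$demo/block/sda/queue/scheduler"   # deadline
```

In rc.local the call would simply be `set_deadline /sys`.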

6. Disk read-ahead configuration

Check the read-ahead setting; every device should report 16384:

blockdev --getra /dev/sd*
blockdev --getra /dev/dm-*

Setting method:

Add two lines to /etc/rc.d/rc.local:

blockdev --setra 16384 /dev/sd*
blockdev --setra 16384 /dev/dm-*

Verification:

After a reboot, run the two `blockdev --getra` commands above; all values should be 16384.

To sync this rc.local addition across the cluster, write the lines to /tmp/gpconfig/2rclocal.conf, distribute it, and append it on every node from the gpssh session:

sh bulkcp.sh /tmp/gpconfig/2rclocal.conf /tmp/gpconfig/2rclocal.conf

gpssh -f /data1/gpinstall/config/allnodes.txt -e ''

cat /tmp/gpconfig/2rclocal.conf >> /etc/rc.d/rc.local

7. Language and character set

Check the language and character set: echo $LANG
`en_US.UTF-8`
Setting method: specify it during OS installation.
Verification: log in and run `locale`; the result should be en_US.UTF-8.

To change it after installation:

sed -i 's/zh_CN.UTF-8/en_US.UTF-8/g' /etc/sysconfig/i18n

8. Additions to sysctl.conf

After editing sysctl.conf, apply the changes with `sysctl -p`.
Note: `modprobe bridge` is sometimes needed first, or `sysctl -p` may complain about bridge-related keys.

Setting method: edit /etc/sysctl.conf

and append the following:

net.ipv4.ip_forward = 0
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.sem = 250 64000 100 2048
kernel.shmmax = 5000000000
kernel.shmmni = 40960
kernel.shmall = 40000000000
net.ipv4.tcp_tw_recycle=1
net.ipv4.tcp_max_syn_backlog=4096
net.core.netdev_max_backlog=10000
vm.overcommit_memory=2
net.ipv4.conf.all.arp_filter = 1
net.ipv4.ip_local_port_range = 1025 65535

Then run `sysctl -p`.

To sync across the cluster:

vi /tmp/gpconfig/4sysctl.conf
sh bulkcp.sh /tmp/gpconfig/4sysctl.conf /tmp/gpconfig/4sysctl.conf
gpssh -f /data1/gpinstall/config/allnodes.txt -e ''
cat /tmp/gpconfig/4sysctl.conf >> /etc/sysctl.conf

Verification: use `sysctl` to confirm each parameter above matches.
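Checking a dozen-plus parameters by hand on 16 nodes is error-prone; a small checker makes it mechanical. A sketch, assuming the desired settings live in a "key = value" file like /tmp/gpconfig/4sysctl.conf above (the sysctl binary is overridable via $SYSCTL purely so the sketch can be exercised without root):

```shell
# check_sysctl FILE prints every key whose live kernel value differs from the
# value in FILE; silence means all parameters match. Whitespace is normalized
# so multi-value keys like kernel.sem compare cleanly.
SYSCTL=${SYSCTL:-sysctl}
check_sysctl() {
    while IFS='=' read -r key want; do
        key=$(echo "$key" | tr -d '[:space:]')
        [ -n "$key" ] || continue
        want=$(echo "$want" | tr -s '[:space:]' ' ' | sed 's/^ //; s/ $//')
        have=$($SYSCTL -n "$key" 2>/dev/null | tr -s '[:space:]' ' ' | sed 's/^ //; s/ $//')
        if [ "$have" != "$want" ]; then
            echo "$key: want='$want' have='$have'"
        fi
    done < "$1"
}
```

Usage: `check_sysctl /tmp/gpconfig/4sysctl.conf`, run on each node via the gpssh session.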

9. User resource limits

View the current limits:
ulimit -a

Setting method:

Edit /etc/security/limits.d/90-nproc.conf

and replace its contents with:

* soft nofile 1048576
* hard nofile 1048576
* soft nproc 1048576
* hard nproc 1048576

Sync the change:

vi /tmp/gpconfig/5limits.conf
sh bulkcp.sh /tmp/gpconfig/5limits.conf /tmp/gpconfig/5limits.conf
nl /etc/security/limits.d/90-nproc.conf
cat /tmp/gpconfig/5limits.conf >> /etc/security/limits.d/90-nproc.conf

Verification: log in as any non-root user and run `ulimit -a`; the values should match the settings above.

10. Time synchronization

Check the time:
date

Set the time, e.g. to 2015-06-05 09:17:

date 060509172015

hwclock -w
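Hypothetically, the same date command can be generated from the master's clock and pushed to every node in one gpssh call, so all nodes are set in a single pass. This is a crude one-off fix; running ntpd against a common time source is the proper long-term solution.

```shell
# Build the "date MMDDhhmmCCYY" argument from the master's current clock,
# then print the gpssh command that would apply it cluster-wide.
stamp=$(date +%m%d%H%M%Y)
echo "gpssh -f /data1/gpinstall/config/allnodes.txt -e 'date $stamp; hwclock -w'"
```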

11. Enable the NIC at boot

Identify the NIC to enable: eth2
`ifconfig eth2 | grep 172`
`more /etc/sysconfig/network-scripts/ifcfg-eth2 | grep ONBOOT`

Modify the eth2 configuration so it comes up at boot:

sed -i 's/ONBOOT=no/ONBOOT=yes/g' /etc/sysconfig/network-scripts/ifcfg-eth2

After completing all the steps above, reboot every machine and verify that each change took effect.

12. Create the OS user (optional)

The user can also be created later by gpseginstall, but creating the group and user in advance standardizes the uid and gid across nodes.
Write a small script, distribute it to all nodes, then run it from the interactive gpssh session to create the group, user, and password everywhere:

#!/bin/bash
# Usage: create the gpadmin user
# Author: AlfredZhao
# Version: 1.0.0
groupadd gpadmin -g 3030
useradd gpadmin -u 3030 -g 3030
passwd gpadmin <<EOF
gpadminpwd
gpadminpwd
EOF
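After running the script everywhere, it is worth confirming that the uid/gid really are uniform on every node. A minimal sketch:

```shell
# check_id USER UID GID prints nothing when the account matches the expected
# ids, and a mismatch message otherwise.
check_id() {
    if [ "$(id -u "$1" 2>/dev/null)" != "$2" ]; then
        echo "$1: uid mismatch"
    fi
    if [ "$(id -g "$1" 2>/dev/null)" != "$3" ]; then
        echo "$1: gid mismatch"
    fi
}
```

Usage: run `check_id gpadmin 3030 3030` on each node from the gpssh session; silence means everything matches.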

13. Create directories and set ownership

Master directories:

gpssh -h XXXGPM01 -e 'mkdir -p /data1/master'
gpssh -h XXXGPM02 -e 'mkdir -p /data1/master'
gpssh -h XXXGPM01 -e 'chown gpadmin:gpadmin /data1/master'
gpssh -h XXXGPM02 -e 'chown gpadmin:gpadmin /data1/master'

Database data file directories:

Note that the host file here switches to nodes.txt; the master nodes do not need these directories.

gpssh -f /data1/gpinstall/config/nodes.txt -e ''

mkdir -p /data1/primary
mkdir -p /data1/mirror
mkdir -p /data2/primary
mkdir -p /data2/mirror
chown gpadmin:gpadmin /data1/primary
chown gpadmin:gpadmin /data1/mirror
chown gpadmin:gpadmin /data2/primary
chown gpadmin:gpadmin /data2/mirror
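The mkdir/chown pairs can also be generated with a loop and the output pasted into the gpssh session; a minimal sketch:

```shell
# Print one "mkdir && chown" command per segment data directory; review the
# output, then run it inside the gpssh -f nodes.txt session.
seg_dir_cmds() {
    for d in /data1/primary /data1/mirror /data2/primary /data2/mirror; do
        echo "mkdir -p $d && chown gpadmin:gpadmin $d"
    done
}
seg_dir_cmds
```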

14. Install the GP software on all nodes

`gpseginstall -f allnodes.txt -c csv`
If the gpadmin user was not created manually beforehand, gpseginstall can create the user and set its password on every node via the -u and -p options.

[root@XXXGPM01 data1]# gpseginstall -f /data1/gpinstall/config/allnodes.txt -c csv
20150605:00:01:07:005656 gpseginstall:XXXGPM01:root-[INFO]:-Installation Info:
link_name greenplum-db
binary_path /data1/gpinstall/greenplum-db-4.3.2.0
binary_dir_location /data1/gpinstall
binary_dir_name greenplum-db-4.3.2.0
20150605:00:01:07:005656 gpseginstall:XXXGPM01:root-[INFO]:-check cluster password access
20150605:00:01:11:005656 gpseginstall:XXXGPM01:root-[INFO]:-de-duplicate hostnames
20150605:00:01:11:005656 gpseginstall:XXXGPM01:root-[INFO]:-master hostname: XXXGPM01
20150605:00:01:13:005656 gpseginstall:XXXGPM01:root-[INFO]:-chown -R gpadmin:gpadmin /data1/gpinstall/greenplum-db
20150605:00:01:13:005656 gpseginstall:XXXGPM01:root-[INFO]:-chown -R gpadmin:gpadmin /data1/gpinstall/greenplum-db-4.3.2.0
20150605:00:01:14:005656 gpseginstall:XXXGPM01:root-[INFO]:-rm -f /data1/gpinstall/greenplum-db-4.3.2.0.tar; rm -f /data1/gpinstall/greenplum-db-4.3.2.0.tar.gz
20150605:00:01:14:005656 gpseginstall:XXXGPM01:root-[INFO]:-cd /data1/gpinstall; tar cf greenplum-db-4.3.2.0.tar greenplum-db-4.3.2.0
20150605:00:01:17:005656 gpseginstall:XXXGPM01:root-[INFO]:-gzip /data1/gpinstall/greenplum-db-4.3.2.0.tar
20150605:00:01:36:005656 gpseginstall:XXXGPM01:root-[INFO]:-remote command: mkdir -p /data1/gpinstall
20150605:00:01:37:005656 gpseginstall:XXXGPM01:root-[INFO]:-remote command: rm -rf /data1/gpinstall/greenplum-db-4.3.2.0
20150605:00:01:39:005656 gpseginstall:XXXGPM01:root-[INFO]:-scp software to remote location
20150605:00:01:41:005656 gpseginstall:XXXGPM01:root-[INFO]:-remote command: gzip -f -d /data1/gpinstall/greenplum-db-4.3.2.0.tar.gz
20150605:00:01:46:005656 gpseginstall:XXXGPM01:root-[INFO]:-md5 check on remote location
20150605:00:01:48:005656 gpseginstall:XXXGPM01:root-[INFO]:-remote command: cd /data1/gpinstall; tar xf greenplum-db-4.3.2.0.tar
20150605:00:01:51:005656 gpseginstall:XXXGPM01:root-[INFO]:-remote command: rm -f /data1/gpinstall/greenplum-db-4.3.2.0.tar
20150605:00:01:52:005656 gpseginstall:XXXGPM01:root-[INFO]:-remote command: cd /data1/gpinstall; rm -f greenplum-db; ln -fs greenplum-db-4.3.2.0 greenplum-db
20150605:00:01:54:005656 gpseginstall:XXXGPM01:root-[INFO]:-remote command: chown -R gpadmin:gpadmin /data1/gpinstall/greenplum-db
20150605:00:01:55:005656 gpseginstall:XXXGPM01:root-[INFO]:-remote command: chown -R gpadmin:gpadmin /data1/gpinstall/greenplum-db-4.3.2.0
20150605:00:01:57:005656 gpseginstall:XXXGPM01:root-[INFO]:-rm -f /data1/gpinstall/greenplum-db-4.3.2.0.tar.gz
20150605:00:01:57:005656 gpseginstall:XXXGPM01:root-[INFO]:-version string on master: gpssh version 4.3.2.0 build 1
20150605:00:01:57:005656 gpseginstall:XXXGPM01:root-[INFO]:-remote command: . /data1/gpinstall/greenplum-db/./greenplum_path.sh; /data1/gpinstall/greenplum-db/./bin/gpssh --version
20150605:00:01:59:005656 gpseginstall:XXXGPM01:root-[INFO]:-remote command: . /data1/gpinstall/greenplum-db-4.3.2.0/greenplum_path.sh; /data1/gpinstall/greenplum-db-4.3.2.0/bin/gpssh --version
20150605:00:02:06:005656 gpseginstall:XXXGPM01:root-[INFO]:-SUCCESS -- Requested commands completed

15. Initialize the GP database

Perform the following on the Master node, logged in as gpadmin.
cd /data1/gpinstall/config
Create a gpinitsystem_config file with the following contents:

ARRAY_NAME="XXXGPDB"
SEG_PREFIX=gpseg
PORT_BASE=40000
declare -a DATA_DIRECTORY=(/data1/primary /data1/primary /data1/primary /data1/primary /data1/primary /data1/primary /data2/primary /data2/primary /data2/primary /data2/primary /data2/primary /data2/primary)
MASTER_HOSTNAME=XXXGPM01
MASTER_DIRECTORY=/data1/master
MASTER_PORT=5432
TRUSTED_SHELL=ssh
CHECK_POINT_SEGMENTS=256
ENCODING=UNICODE
MIRROR_PORT_BASE=50000
REPLICATION_PORT_BASE=41000
MIRROR_REPLICATION_PORT_BASE=51000
declare -a MIRROR_DATA_DIRECTORY=(/data1/mirror /data1/mirror /data1/mirror /data1/mirror /data1/mirror /data1/mirror /data2/mirror /data2/mirror /data2/mirror /data2/mirror /data2/mirror /data2/mirror)
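A quick sanity check on the layout above: DATA_DIRECTORY lists 12 primary directories per host (6 on /data1, 6 on /data2) and nodes.txt contains 14 segment hosts, so initialization should produce 14 x 12 = 168 primaries plus 168 mirrors, i.e. 336 segment instances, which is the count the gpstop/gpstate output later reports.

```shell
# Expected segment counts implied by the config above.
hosts=14          # lines in nodes.txt
per_host=12       # entries in the DATA_DIRECTORY array
primaries=$((hosts * per_host))
total=$((primaries * 2))   # mirroring doubles the instance count
echo "$primaries primaries, $total segment instances in total"
```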

15.1 Set up SSH trust between all nodes for the gpadmin user:

gpssh-exkeys -f /data1/gpinstall/config/allnodes.txt

15.2 Initialize the database:

gpinitsystem -c gpinitsystem_config -h nodes.txt -B 8

15.3 Configure environment variables:

source /data1/gpinstall/greenplum-db/greenplum_path.sh
export MASTER_DATA_DIRECTORY=/data1/master/gpseg-1

15.4 Create the redundant (standby) master:

gpinitstandby -s XXXGPM02

15.5 Adjust database parameters:

The database must be restarted for the following parameter changes to take effect.

How to change: gpconfig -c <parameter> -v <value> -m <master value>

How to verify: after restarting the database, run gpconfig -s <parameter>

gpconfig -c shared_buffers -v 128MB -m 128MB
gpconfig -c gp_vmem_protect_limit -v 15360 -m 15360
gpconfig -c max_connections -v 1000 -m 200
gpconfig --skipvalidation -c wal_send_client_timeout -v 60s -m 60s

[gpadmin@XXXGPM01 greenplum-db]$ gpconfig -c shared_buffers -v 128MB -m 128MB
20150605:14:50:53:017038 gpconfig:XXXGPM01:gpadmin-[INFO]:-completed successfully
[gpadmin@XXXGPM01 greenplum-db]$ gpconfig -c gp_vmem_protect_limit -v 15360 -m 15360
20150605:14:52:51:017179 gpconfig:XXXGPM01:gpadmin-[INFO]:-completed successfully
[gpadmin@XXXGPM01 greenplum-db]$ gpconfig -c max_connections -v 1000 -m 200
20150605:14:53:08:017271 gpconfig:XXXGPM01:gpadmin-[INFO]:-completed successfully
[gpadmin@XXXGPM01 greenplum-db]$ gpconfig --skipvalidation -c wal_send_client_timeout -v 60s -m 60s
20150605:14:53:23:017363 gpconfig:XXXGPM01:gpadmin-[INFO]:-completed successfully

15.6 Stop the database

[gpadmin@XXXGPM01 greenplum-db]$ gpstop -a
20150605:14:54:40:017533 gpstop:XXXGPM01:gpadmin-[INFO]:-Starting gpstop with args: -a
20150605:14:54:40:017533 gpstop:XXXGPM01:gpadmin-[INFO]:-Gathering information and validating the environment...
20150605:14:54:40:017533 gpstop:XXXGPM01:gpadmin-[INFO]:-Obtaining Greenplum Master catalog information
20150605:14:54:40:017533 gpstop:XXXGPM01:gpadmin-[INFO]:-Obtaining Segment details from master...
20150605:14:54:42:017533 gpstop:XXXGPM01:gpadmin-[INFO]:-Greenplum Version: 'postgres (Greenplum Database) 4.3.2.0 build 1'
20150605:14:54:42:017533 gpstop:XXXGPM01:gpadmin-[INFO]:-There are 0 connections to the database
20150605:14:54:42:017533 gpstop:XXXGPM01:gpadmin-[INFO]:-Commencing Master instance shutdown with mode='smart'
20150605:14:54:42:017533 gpstop:XXXGPM01:gpadmin-[INFO]:-Master host=XXXGPM01
20150605:14:54:42:017533 gpstop:XXXGPM01:gpadmin-[INFO]:-Commencing Master instance shutdown with mode=smart
20150605:14:54:42:017533 gpstop:XXXGPM01:gpadmin-[INFO]:-Master segment instance directory=/data1/master/gpseg-1
20150605:14:54:43:017533 gpstop:XXXGPM01:gpadmin-[INFO]:-Stopping master standby host XXXGPM02 mode=fast
20150605:14:54:45:017533 gpstop:XXXGPM01:gpadmin-[INFO]:-Successfully shutdown standby process on XXXGPM02
20150605:14:54:45:017533 gpstop:XXXGPM01:gpadmin-[INFO]:-Commencing parallel primary segment instance shutdown, please wait...
...........................
20150605:14:55:12:017533 gpstop:XXXGPM01:gpadmin-[INFO]:-Commencing parallel mirror segment instance shutdown, please wait...
................
20150605:14:55:28:017533 gpstop:XXXGPM01:gpadmin-[INFO]:-----------------------------------------------------
20150605:14:55:28:017533 gpstop:XXXGPM01:gpadmin-[INFO]:- Segments stopped successfully = 336
20150605:14:55:28:017533 gpstop:XXXGPM01:gpadmin-[INFO]:- Segments with errors during stop = 0
20150605:14:55:28:017533 gpstop:XXXGPM01:gpadmin-[INFO]:-----------------------------------------------------
20150605:14:55:28:017533 gpstop:XXXGPM01:gpadmin-[INFO]:-Successfully shutdown 336 of 336 segment instances
20150605:14:55:28:017533 gpstop:XXXGPM01:gpadmin-[INFO]:-Database successfully shutdown with no errors reported

15.7 Start the database:

[gpadmin@XXXGPM01 greenplum-db]$ gpstart -a
20150605:14:56:39:017669 gpstart:XXXGPM01:gpadmin-[INFO]:-Starting gpstart with args: -a
20150605:14:56:39:017669 gpstart:XXXGPM01:gpadmin-[INFO]:-Gathering information and validating the environment...
20150605:14:56:39:017669 gpstart:XXXGPM01:gpadmin-[INFO]:-Greenplum Binary Version: 'postgres (Greenplum Database) 4.3.2.0 build 1'
20150605:14:56:39:017669 gpstart:XXXGPM01:gpadmin-[INFO]:-Greenplum Catalog Version: '201310150'
20150605:14:56:39:017669 gpstart:XXXGPM01:gpadmin-[INFO]:-Starting Master instance in admin mode
20150605:14:56:40:017669 gpstart:XXXGPM01:gpadmin-[INFO]:-Obtaining Greenplum Master catalog information
20150605:14:56:40:017669 gpstart:XXXGPM01:gpadmin-[INFO]:-Obtaining Segment details from master...
20150605:14:56:42:017669 gpstart:XXXGPM01:gpadmin-[INFO]:-Setting new master era
20150605:14:56:42:017669 gpstart:XXXGPM01:gpadmin-[INFO]:-Master Started...
20150605:14:56:42:017669 gpstart:XXXGPM01:gpadmin-[INFO]:-Shutting down master
20150605:14:56:46:017669 gpstart:XXXGPM01:gpadmin-[INFO]:-Commencing parallel primary and mirror segment instance startup, please wait...
..................
20150605:14:57:04:017669 gpstart:XXXGPM01:gpadmin-[INFO]:-Process results...
20150605:14:57:04:017669 gpstart:XXXGPM01:gpadmin-[INFO]:-----------------------------------------------------
20150605:14:57:04:017669 gpstart:XXXGPM01:gpadmin-[INFO]:- Successful segment starts = 336
20150605:14:57:04:017669 gpstart:XXXGPM01:gpadmin-[INFO]:- Failed segment starts = 0
20150605:14:57:04:017669 gpstart:XXXGPM01:gpadmin-[INFO]:- Skipped segment starts (segments are marked down in configuration) = 0
20150605:14:57:04:017669 gpstart:XXXGPM01:gpadmin-[INFO]:-----------------------------------------------------
20150605:14:57:04:017669 gpstart:XXXGPM01:gpadmin-[INFO]:-
20150605:14:57:04:017669 gpstart:XXXGPM01:gpadmin-[INFO]:-Successfully started 336 of 336 segment instances
20150605:14:57:04:017669 gpstart:XXXGPM01:gpadmin-[INFO]:-----------------------------------------------------
20150605:14:57:04:017669 gpstart:XXXGPM01:gpadmin-[INFO]:-Starting Master instance XXXGPM01 directory /data1/master/gpseg-1
20150605:14:57:05:017669 gpstart:XXXGPM01:gpadmin-[INFO]:-Command pg_ctl reports Master XXXGPM01 instance active
20150605:14:57:06:017669 gpstart:XXXGPM01:gpadmin-[INFO]:-Starting standby master
20150605:14:57:06:017669 gpstart:XXXGPM01:gpadmin-[INFO]:-Checking if standby master is running on host: XXXGPM02 in directory: /data1/master/gpseg-1
20150605:14:57:09:017669 gpstart:XXXGPM01:gpadmin-[INFO]:-Database successfully started
[gpadmin@XXXGPM01 greenplum-db]$

15.8 Check the database parameters

gpconfig -s shared_buffers

gpconfig -s gp_vmem_protect_limit

gpconfig -s max_connections

gpconfig -s wal_send_client_timeout

[gpadmin@XXXGPM01 greenplum-db]$ gpconfig -s shared_buffers
20150605:11:37:40:011591 gpconfig:XXXGPM01:gpadmin-[ERROR]:-Failed to retrieve GUC information: error 'ERROR: function gp_toolkit.gp_param_setting(unknown) does not exist
LINE 1: select * from gp_toolkit.gp_param_setting('shared_buffers')
^
HINT: No function matches the given name and argument types. You may need to add explicit type casts.
' in 'select * from gp_toolkit.gp_param_setting('shared_buffers')'

`gpconfig -s` fails here because the gp_toolkit.gp_param_setting helper function is missing in this build; as a workaround, check the values directly from psql:

psql postgres

show shared_buffers;
show gp_vmem_protect_limit;
show max_connections;
show wal_send_client_timeout;

15.9 Create the business database XXX

Project convention: every database created from now on must be created from a psql session connected to the postgres database!

psql postgres
postgres=# create database XXX;
CREATE DATABASE

psql XXX

15.10 Adjust connection control parameters

Edit $MASTER_DATA_DIRECTORY/pg_hba.conf

and add one line:

host all all 0/0 md5

Do the same in $MASTER_DATA_DIRECTORY/pg_hba.conf on the standby master, adding:

host all all 0/0 md5
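The edit can be made idempotent so re-running it never duplicates the rule; a minimal sketch:

```shell
# append_hba FILE appends the md5 rule only if it is not already present.
append_hba() {
    rule='host all all 0/0 md5'
    if ! grep -qxF "$rule" "$1" 2>/dev/null; then
        echo "$rule" >> "$1"
    fi
}
```

Usage: as gpadmin, run `append_hba $MASTER_DATA_DIRECTORY/pg_hba.conf` on both masters, then `gpstop -u` to reload pg_hba.conf without restarting the cluster.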

15.11 Check the database status (gpstate)

[gpadmin@XXXGPM01 ~]$ gpstate
20150605:13:48:23:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-Starting gpstate with args:
20150605:13:48:24:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 4.3.2.0 build 1'
20150605:13:48:24:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-master Greenplum Version: 'PostgreSQL 8.2.15 (Greenplum Database 4.3.2.0 build 1) on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.4.2 compiled on Jul 12 2014 17:02:40'
20150605:13:48:24:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-Obtaining Segment details from master...
20150605:13:48:25:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-Gathering data from segments...
................
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-Greenplum instance status summary
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-----------------------------------------------------
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:- Master instance = Active
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:- Master standby = XXXGPM02
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:- Standby master state = Standby host passive
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:- Total segment instance count from metadata = 336
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-----------------------------------------------------
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:- Primary Segment Status
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-----------------------------------------------------
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:- Total primary segments = 168
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:- Total primary segment valid (at master) = 168
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:- Total primary segment failures (at master) = 0
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:- Total number of postmaster.pid files missing = 0
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:- Total number of postmaster.pid files found = 168
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:- Total number of postmaster.pid PIDs missing = 0
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:- Total number of postmaster.pid PIDs found = 168
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:- Total number of /tmp lock files missing = 0
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:- Total number of /tmp lock files found = 168
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:- Total number postmaster processes missing = 0
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:- Total number postmaster processes found = 168
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-----------------------------------------------------
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:- Mirror Segment Status
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-----------------------------------------------------
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:- Total mirror segments = 168
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:- Total mirror segment valid (at master) = 168
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:- Total mirror segment failures (at master) = 0
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:- Total number of postmaster.pid files missing = 0
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:- Total number of postmaster.pid files found = 168
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:- Total number of postmaster.pid PIDs missing = 0
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:- Total number of postmaster.pid PIDs found = 168
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:- Total number of /tmp lock files missing = 0
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:- Total number of /tmp lock files found = 168
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:- Total number postmaster processes missing = 0
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:- Total number postmaster processes found = 168
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:- Total number mirror segments acting as primary segments = 0
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:- Total number mirror segments acting as mirror segments = 168
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-----------------------------------------------------
[gpadmin@XXXGPM01 ~]$
