Deploying a Greenplum Test Environment

1. Prepare three hosts

This walkthrough builds a lab environment on Citrix virtualization, using three RHEL 6.4 hosts.
| | Additional requirements |
|---|---|
| Master | From the template, add one extra 20 GB disk (/dev/xvdb) and two extra NICs (eth1, eth2) |
| Standby | From the template, add one extra 20 GB disk (/dev/xvdb) and two extra NICs (eth1, eth2) |
| Segment01 | From the template, add one extra 50 GB disk (/dev/xvdb) and two extra NICs (eth1, eth2) |
Network plan

| | eth0 (external IP) | eth1 | eth2 |
|---|---|---|---|
| Master | 192.168.9.123 | 172.16.10.101 | 172.16.11.101 |
| Standby | 192.168.9.124 | 172.16.10.102 | 172.16.11.102 |
| Segment01 | 192.168.9.125 (optional) | 172.16.10.1 | 172.16.11.1 |
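Adding the NICs in Citrix only attaches the hardware; each interface still needs an ifcfg file on RHEL 6. A minimal sketch for eth1 on the Master, using the addresses from the plan above (the /24 netmask and the other settings are assumptions to adapt):

```
# /etc/sysconfig/network-scripts/ifcfg-eth1  (sketch; eth2 is analogous)
DEVICE=eth1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none
IPADDR=172.16.10.101
NETMASK=255.255.255.0
```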
Lab resources are limited, so only three nodes are configured for now; Segment02, Segment03, ... may be added later as needed.

Change the hostnames

Set the hostnames of the Master, Standby, and Segment01 hosts to mdw, smdw, and sdw1 respectively.

To change a hostname:

hostname <new-hostname>
vi /etc/sysconfig/network   # update the HOSTNAME= line
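The two steps above (live hostname plus the persistent sysconfig entry) can be sketched as a small helper; set_hostname_cfg is a hypothetical name, not a standard command, and it must be run as root on each node with that node's own name:

```shell
#!/bin/bash
# Sketch: persist a hostname change on RHEL 6.
# set_hostname_cfg is a hypothetical helper, not a standard command.
set_hostname_cfg() {
    local name="$1" file="${2:-/etc/sysconfig/network}"
    # rewrite the HOSTNAME= line in the sysconfig file
    sed -i "s/^HOSTNAME=.*/HOSTNAME=${name}/" "$file"
}
# usage on the Master: hostname mdw && set_hostname_cfg mdw
```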
Optional: helper scripts. The configuration-sync scripts used later in this post are driven by a node list; exporting it now is optional but convenient.

export NODE_LIST='mdw smdw sdw1'

vi /etc/hosts   # temporary entries for now
192.168.9.123 mdw
192.168.9.124 smdw
192.168.9.125 sdw1
Set up passwordless SSH from the first node to itself and to the other machines:
ssh-keygen -t rsa
ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.9.123
ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.9.124
ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.9.125
cluster_run_all_nodes "hostname ; date"
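cluster_run_all_nodes and cluster_copy_all_nodes are the author's own helper scripts, not Greenplum utilities, and their source is not shown in the post. A minimal sketch of what they are assumed to do — loop over NODE_LIST and run or copy over ssh/scp as root:

```shell
#!/bin/bash
# Sketch of the helper scripts used throughout this post (assumed behavior,
# not the author's actual implementation).
export NODE_LIST='mdw smdw sdw1'

cluster_run_all_nodes() {
    local cmd="$1" node
    for node in $NODE_LIST; do
        ssh "root@${node}" "$cmd"
    done
}

cluster_copy_all_nodes() {
    local src="$1" dest="$2" node
    for node in $NODE_LIST; do
        scp -r "$src" "root@${node}:${dest}"
    done
}
```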
Disk layout

Greenplum recommends the XFS filesystem, so install the required package on all nodes:

# rpm -ivh xfsprogs-3.1.1-10.el6.x86_64.rpm

Create a /data directory on every node as the mount point for the XFS filesystem:

mkdir /data
mkfs.xfs /dev/xvdb
[root@smdb Packages]# mkfs.xfs /dev/xvdb
meta-data=/dev/xvdb              isize=256    agcount=4, agsize=1310720 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=5242880, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
vi /etc/fstab and add the following line:

/dev/xvdb /data xfs rw,noatime,inode64,allocsize=16m 1 1
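A quick way to double-check the entry before rebooting is to split it into the six fields mount(8) and fsck expect (a sketch; the line is the one added above):

```shell
#!/bin/bash
# Sketch: split the fstab entry into its six whitespace-separated fields.
fstab_line='/dev/xvdb /data xfs rw,noatime,inode64,allocsize=16m 1 1'
set -- $fstab_line
echo "device=$1 mountpoint=$2 fstype=$3 options=$4 dump=$5 pass=$6"
# prints: device=/dev/xvdb mountpoint=/data fstype=xfs options=rw,noatime,inode64,allocsize=16m dump=1 pass=1
```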
2. Disable iptables and SELinux
cluster_run_all_nodes "hostname; service iptables stop"
cluster_run_all_nodes "hostname; chkconfig iptables off"
cluster_run_all_nodes "hostname; chkconfig ip6tables off"
cluster_run_all_nodes "hostname; chkconfig libvirtd off"
cluster_run_all_nodes "hostname; setenforce 0"
cluster_run_all_nodes "hostname; sestatus"
vi /etc/selinux/config   # set SELINUX=disabled so the change persists across reboots
cluster_copy_all_nodes /etc/selinux/config /etc/selinux/
Note: every node must end up with the same settings. SSH trust was already in place here, so the sync was done with the helper script; without it, each host has to be configured by hand.
3. Set the recommended system parameters
vi /etc/sysctl.conf
kernel.shmmax = 500000000
kernel.shmmni = 4096
kernel.shmall = 4000000000
kernel.sem = 250 512000 100 2048
kernel.sysrq = 1
kernel.core_uses_pid = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
net.ipv4.tcp_syncookies = 1
net.ipv4.ip_forward = 0
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_max_syn_backlog = 4096
net.ipv4.conf.all.arp_filter = 1
net.ipv4.conf.default.arp_filter = 1
net.core.netdev_max_backlog = 10000
vm.overcommit_memory = 2
kernel.msgmni = 2048
net.ipv4.ip_local_port_range = 1025 65535
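The kernel.shmmax value above (500000000) is the stock figure from the config file; a common sizing rule is half of physical RAM in bytes. A sketch of that arithmetic, assuming a hypothetical 4 GB lab VM:

```shell
#!/bin/bash
# Sketch: compute half of physical RAM in bytes as a kernel.shmmax candidate.
# The 4 GB figure is an assumption for these lab VMs, not a measured value.
ram_bytes=$((4 * 1024 * 1024 * 1024))
shmmax_candidate=$((ram_bytes / 2))
echo "kernel.shmmax = ${shmmax_candidate}"   # prints kernel.shmmax = 2147483648
```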
vi /etc/security/limits.conf
* soft nofile 65536
* hard nofile 65536
* soft nproc 131072
* hard nproc 131072
Sync to every node:
cluster_copy_all_nodes /etc/sysctl.conf /etc/sysctl.conf
cluster_copy_all_nodes /etc/security/limits.conf /etc/security/limits.conf
Disk read-ahead and the deadline I/O scheduler

Add the following to /etc/rc.d/rc.local:

blockdev --setra 16385 /dev/xvdb
echo deadline > /sys/block/xvdb/queue/scheduler

cluster_copy_all_nodes /etc/rc.d/rc.local /etc/rc.d/rc.local

Note: after a reboot, verify with blockdev --getra /dev/xvdb.
Verify the locale on all nodes (note the escaped \$LANG, so it expands on each remote node rather than on the local shell):

cluster_run_all_nodes "hostname; echo \$LANG"
Reboot all nodes and verify the changes took effect:
blockdev --getra /dev/xvdb
more /sys/block/xvdb/queue/scheduler
cluster_run_all_nodes "hostname; service iptables status"
4. Install on the Master

mkdir -p /data/soft

Upload greenplum-db-4.3.4.2-build-1-RHEL5-x86_64.zip to the Master.

**Unzip**

unzip greenplum-db-4.3.4.2-build-1-RHEL5-x86_64.zip

**Install**

/bin/bash greenplum-db-4.3.4.2-build-1-RHEL5-x86_64.bin
5. Install and configure Greenplum on all nodes

Configure /etc/hosts:
192.168.9.123 mdw
172.16.10.101 mdw-1
172.16.11.101 mdw-2
192.168.9.124 smdw
172.16.10.102 smdw-1
172.16.11.102 smdw-2
192.168.9.125 sdw1
172.16.10.1 sdw1-1
172.16.11.1 sdw1-2
Sync the /etc/hosts configuration:
cluster_copy_all_nodes /etc/hosts /etc/hosts
Set up the SSH trust Greenplum needs

vi hostfile_exkeys and create the file with contents like:
mdw
mdw-1
mdw-2
smdw
smdw-1
smdw-2
sdw1
sdw1-1
sdw1-2
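Since every name in hostfile_exkeys also appears in /etc/hosts, the file can be generated rather than typed. A sketch (gen_hostfile is a hypothetical helper; the awk pattern assumes the mdw/smdw/sdw1 naming used above):

```shell
#!/bin/bash
# Sketch: emit the hostname column of /etc/hosts for the Greenplum nodes.
gen_hostfile() {
    awk '$2 ~ /^(mdw|smdw|sdw1)/ {print $2}' "$1"
}
# usage: gen_hostfile /etc/hosts > hostfile_exkeys
```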
Optional: any partial SSH trust configured earlier for installation convenience can be cleared at this point:
rm -rf /root/.ssh/
# gpseginstall -f hostfile_exkeys -u gpadmin -p 123456
# su - gpadmin
$ source /usr/local/greenplum-db/greenplum_path.sh
$ cd /usr/local/greenplum-db
$ gpssh -f hostfile_exkeys -e ls -l $GPHOME
SSH trust should now be working; if it is not, run the following again:
gpssh -f hostfile_exkeys
Create the Data Storage Areas (as root)
# mkdir /data/master
# chown gpadmin /data/master/
Use gpssh to create the data directory on the standby master as well:
# source /usr/local/greenplum-db/greenplum_path.sh
# gpssh -h smdw -e 'mkdir /data/master'
# gpssh -h smdw -e 'chown gpadmin /data/master'
Create the data directories on all segment nodes

First create a file, hostfile_gpssh_segonly, containing the hostnames of all segment nodes:
sdw1
Create the directories:
# source /usr/local/greenplum-db/greenplum_path.sh
# gpssh -f hostfile_gpssh_segonly -e 'mkdir /data/primary'
# gpssh -f hostfile_gpssh_segonly -e 'mkdir /data/mirror'
# gpssh -f hostfile_gpssh_segonly -e 'chown gpadmin /data/primary'
# gpssh -f hostfile_gpssh_segonly -e 'chown gpadmin /data/mirror'
Configure NTP

NTP is not configured here; for production it is strongly recommended.

Validate the OS settings

First create a hostfile_gpcheck file:
mdw
smdw
sdw1
Run the validation:
$ source /usr/local/greenplum-db/greenplum_path.sh
$ gpcheck -f hostfile_gpcheck -m mdw -s smdw
20150402:17:56:10:009650 gpcheck:mdw:gpadmin-[INFO]:-dedupe hostnames
20150402:17:56:10:009650 gpcheck:mdw:gpadmin-[INFO]:-Detected platform: Generic Linux Cluster
20150402:17:56:10:009650 gpcheck:mdw:gpadmin-[INFO]:-generate data on servers
20150402:17:56:11:009650 gpcheck:mdw:gpadmin-[INFO]:-copy data files from servers
20150402:17:56:12:009650 gpcheck:mdw:gpadmin-[INFO]:-delete remote tmp files
20150402:17:56:12:009650 gpcheck:mdw:gpadmin-[INFO]:-Using gpcheck config file: /usr/local/greenplum-db/./etc/gpcheck.cnf
20150402:17:56:12:009650 gpcheck:mdw:gpadmin-[ERROR]:-GPCHECK_ERROR host(None): utility will not check all settings when run as non-root user
20150402:17:56:12:009650 gpcheck:mdw:gpadmin-[ERROR]:-GPCHECK_ERROR host(smdw): on device (xvdd) IO scheduler 'cfq' does not match expected value 'deadline'
20150402:17:56:12:009650 gpcheck:mdw:gpadmin-[ERROR]:-GPCHECK_ERROR host(smdw): on device (xvda) IO scheduler 'cfq' does not match expected value 'deadline'
20150402:17:56:12:009650 gpcheck:mdw:gpadmin-[ERROR]:-GPCHECK_ERROR host(smdw): ntpd not detected on machine
20150402:17:56:12:009650 gpcheck:mdw:gpadmin-[ERROR]:-GPCHECK_ERROR host(sdw1): on device (xvda) IO scheduler 'cfq' does not match expected value 'deadline'
20150402:17:56:12:009650 gpcheck:mdw:gpadmin-[ERROR]:-GPCHECK_ERROR host(sdw1): ntpd not detected on machine
20150402:17:56:12:009650 gpcheck:mdw:gpadmin-[ERROR]:-GPCHECK_ERROR host(mdw): on device (xvda) IO scheduler 'cfq' does not match expected value 'deadline'
20150402:17:56:12:009650 gpcheck:mdw:gpadmin-[ERROR]:-GPCHECK_ERROR host(mdw): ntpd not detected on machine
20150402:17:56:12:009650 gpcheck:mdw:gpadmin-[INFO]:-gpcheck completing...
Verify network performance

hostfile_gpchecknet_sc1:

sdw1-1

hostfile_gpchecknet_sc2:

sdw1-2

Verify disk I/O and memory

hostfile_gpcheckperf:

sdw1
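With those host files in place, the checks can be run roughly as follows. This is a sketch wrapping the gpcheckperf utility (-r N is its network test mode, -r ds its disk-write plus stream memory test); the scratch directories are assumptions:

```shell
#!/bin/bash
# Sketch: network and disk/memory checks wrapped in functions (not invoked
# here; they need a live cluster). gpcheckperf is the Greenplum utility.
check_net() {
    # network test across the hosts in the given file
    gpcheckperf -f "$1" -r N -d /tmp
}
check_disk_mem() {
    # disk write + stream memory test on the segment hosts
    gpcheckperf -f hostfile_gpcheckperf -r ds -d /data/primary
}
# usage:
# check_net hostfile_gpchecknet_sc1
# check_net hostfile_gpchecknet_sc2
# check_disk_mem
```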
Configure localization settings

Set the locale/character set here if the defaults are not appropriate.

Create the initialization files
$ mkdir -p /home/gpadmin/gpconfigs
$ cd /home/gpadmin/gpconfigs
$ vi hostfile_gpinitsystem
sdw1-1
sdw1-2
Copy the gpinitsystem_config template:
$ cp /usr/local/greenplum-db/docs/cli_help/gpconfigs/gpinitsystem_config /home/gpadmin/gpconfigs/gpinitsystem_config
$ cd /home/gpadmin/gpconfigs
Edit the following settings:

declare -a DATA_DIRECTORY=(/data/primary /data/primary)
#declare -a MIRROR_DATA_DIRECTORY=(/data/mirror /data/mirror)   # mirrors will be configured later; commented out by default
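The number of entries in the DATA_DIRECTORY array controls how many primary segments gpinitsystem creates on each segment host; two entries of the same path mean two primaries per host. A sketch of that relationship:

```shell
#!/bin/bash
# Sketch: gpinitsystem creates one primary segment per DATA_DIRECTORY entry
# on each host listed in hostfile_gpinitsystem.
declare -a DATA_DIRECTORY=(/data/primary /data/primary)
echo "${#DATA_DIRECTORY[@]} primary segments per segment host"
# prints: 2 primary segments per segment host
```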
Run the initialization utility
$ gpinitsystem -c gpconfigs/gpinitsystem_config -h gpconfigs/hostfile_gpinitsystem -s smdw
Initialization failed with the following errors:
20150403:10:58:51:032589 gpcreateseg.sh:mdw:gpadmin-[INFO]:-Start Function ED_PG_CONF
20150403:10:58:52:032672 gpcreateseg.sh:mdw:gpadmin-[WARN]:-Failed to insert port=40001 in /data/primary/gpseg1/postgresql.conf on sdw1-2
20150403:10:58:52:032672 gpcreateseg.sh:mdw:gpadmin-[INFO]:-End Function ED_PG_CONF
20150403:10:58:52:032672 gpcreateseg.sh:mdw:gpadmin-[FATAL][1]:-Failed Update port number to 40001
20150403:10:58:52:032589 gpcreateseg.sh:mdw:gpadmin-[WARN]:-Failed to insert port=40000 in /data/primary/gpseg0/postgresql.conf on sdw1-1
20150403:10:58:53:032589 gpcreateseg.sh:mdw:gpadmin-[INFO]:-End Function ED_PG_CONF
20150403:10:58:53:032589 gpcreateseg.sh:mdw:gpadmin-[FATAL][0]:-Failed Update port number to 40000
A relevant thread: https://support.pivotal.io/hc/communities/public/questions/200372738-HAWQ-Initialization

Fix:

1. Install ed on all nodes
# rpm -ivh /tmp/ed-1.1-3.3.el6.x86_64.rpm
warning: /tmp/ed-1.1-3.3.el6.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID fd431d51: NOKEY
Preparing...                ########################################### [100%]
   1:ed                     ########################################### [100%]
2. Roll back the partially initialized system
/bin/bash /home/gpadmin/gpAdminLogs/backout_gpinitsystem_gpadmin_20150403_105721
3. Re-run the initialization
gpinitsystem -c gpconfigs/gpinitsystem_config -h gpconfigs/hostfile_gpinitsystem -s smdw
A gripe: gpinitsystem clearly depends on ed, yet the official installation guide never mentions it.
On success, the output ends with something like:
20150403:11:13:00:002886 gpinitsystem:mdw:gpadmin-[INFO]:-Greenplum Database instance successfully created
20150403:11:13:00:002886 gpinitsystem:mdw:gpadmin-[INFO]:-------------------------------------------------------
20150403:11:13:00:002886 gpinitsystem:mdw:gpadmin-[INFO]:-To complete the environment configuration, please
20150403:11:13:00:002886 gpinitsystem:mdw:gpadmin-[INFO]:-update gpadmin .bashrc file with the following
20150403:11:13:00:002886 gpinitsystem:mdw:gpadmin-[INFO]:-1. Ensure that the greenplum_path.sh file is sourced
20150403:11:13:00:002886 gpinitsystem:mdw:gpadmin-[INFO]:-2. Add "export MASTER_DATA_DIRECTORY=/data/master/gpseg-1"
20150403:11:13:00:002886 gpinitsystem:mdw:gpadmin-[INFO]:- to access the Greenplum scripts for this instance:
20150403:11:13:00:002886 gpinitsystem:mdw:gpadmin-[INFO]:- or, use -d /data/master/gpseg-1 option for the Greenplum scripts
20150403:11:13:01:002886 gpinitsystem:mdw:gpadmin-[INFO]:- Example gpstate -d /data/master/gpseg-1
20150403:11:13:01:002886 gpinitsystem:mdw:gpadmin-[INFO]:-Script log file = /home/gpadmin/gpAdminLogs/gpinitsystem_20150403.log
20150403:11:13:01:002886 gpinitsystem:mdw:gpadmin-[INFO]:-To remove instance, run gpdeletesystem utility
20150403:11:13:01:002886 gpinitsystem:mdw:gpadmin-[INFO]:-Standby Master smdw has been configured
20150403:11:13:01:002886 gpinitsystem:mdw:gpadmin-[INFO]:-To activate the Standby Master Segment in the event of Master
20150403:11:13:01:002886 gpinitsystem:mdw:gpadmin-[INFO]:-failure review options for gpactivatestandby
20150403:11:13:01:002886 gpinitsystem:mdw:gpadmin-[INFO]:-------------------------------------------------------
20150403:11:13:01:002886 gpinitsystem:mdw:gpadmin-[INFO]:-The Master /data/master/gpseg-1/pg_hba.conf post gpinitsystem
20150403:11:13:01:002886 gpinitsystem:mdw:gpadmin-[INFO]:-has been configured to allow all hosts within this new
20150403:11:13:01:002886 gpinitsystem:mdw:gpadmin-[INFO]:-array to intercommunicate. Any hosts external to this
20150403:11:13:01:002886 gpinitsystem:mdw:gpadmin-[INFO]:-new array must be explicitly added to this file
20150403:11:13:02:002886 gpinitsystem:mdw:gpadmin-[INFO]:-Refer to the Greenplum Admin support guide which is
20150403:11:13:02:002886 gpinitsystem:mdw:gpadmin-[INFO]:-located in the /usr/local/greenplum-db/./docs directory
20150403:11:13:02:002886 gpinitsystem:mdw:gpadmin-[INFO]:-------------------------------------------------------
Configure the gpadmin environment variables

Add the following to /home/gpadmin/.bashrc:
source /usr/local/greenplum-db/greenplum_path.sh
export MASTER_DATA_DIRECTORY=/data/master/gpseg-1
Optional: client session environment variables
export PGPORT=5432
export PGUSER=gpadmin
export PGDATABASE=gptest
Apply the changes and copy them to the standby master:
$ source ~/.bashrc
$ scp ~/.bashrc smdw:~/.bashrc
6. Create the gptest database

Connect to the template1 database first:

$ psql template1
psql (8.2.15)
Type "help" for help.
template1=# help
You are using psql, the command-line interface to PostgreSQL.
Type: \copyright for distribution terms
\h for help with SQL commands
\? for help with psql commands
\g or terminate with semicolon to execute query
\q to quit
template1=# \h
Available help:
ABORT                       BEGIN                        CREATE SEQUENCE            DROP OPERATOR CLASS  PREPARE
ALTER AGGREGATE             CHECKPOINT                   CREATE SERVER              DROP OWNED           PREPARE TRANSACTION
ALTER CONVERSION            CLOSE                        CREATE TABLE               DROP RESOURCE QUEUE  REASSIGN OWNED
ALTER DATABASE              CLUSTER                      CREATE TABLE AS            DROP ROLE            REINDEX
ALTER DOMAIN                COMMENT                      CREATE TABLESPACE          DROP RULE            RELEASE SAVEPOINT
ALTER EXTERNAL TABLE        COMMIT                       CREATE TRIGGER             DROP SCHEMA          RESET
ALTER FILESPACE             COMMIT PREPARED              CREATE TYPE                DROP SEQUENCE        REVOKE
ALTER FOREIGN DATA WRAPPER  COPY                         CREATE USER                DROP SERVER          ROLLBACK
ALTER FUNCTION              CREATE AGGREGATE             CREATE USER MAPPING        DROP TABLE           ROLLBACK PREPARED
ALTER GROUP                 CREATE CAST                  CREATE VIEW                DROP TABLESPACE      ROLLBACK TO SAVEPOINT
ALTER INDEX                 CREATE CONSTRAINT TRIGGER    DEALLOCATE                 DROP TRIGGER         SAVEPOINT
ALTER LANGUAGE              CREATE CONVERSION            DECLARE                    DROP TYPE            SELECT
ALTER OPERATOR              CREATE DATABASE              DELETE                     DROP USER            SELECT INTO
ALTER OPERATOR CLASS        CREATE DOMAIN                DROP AGGREGATE             DROP USER MAPPING    SET
ALTER RESOURCE QUEUE        CREATE EXTERNAL TABLE        DROP CAST                  DROP VIEW            SET CONSTRAINTS
ALTER ROLE                  CREATE FOREIGN DATA WRAPPER  DROP CONVERSION            END                  SET ROLE
ALTER SCHEMA                CREATE FUNCTION              DROP DATABASE              EXECUTE              SET SESSION AUTHORIZATION
ALTER SEQUENCE              CREATE GROUP                 DROP DOMAIN                EXPLAIN              SET TRANSACTION
ALTER SERVER                CREATE INDEX                 DROP EXTERNAL TABLE        FETCH                SHOW
ALTER TABLE                 CREATE LANGUAGE              DROP FILESPACE             GRANT                START TRANSACTION
ALTER TABLESPACE            CREATE OPERATOR              DROP FOREIGN DATA WRAPPER  INSERT               TRUNCATE
ALTER TRIGGER               CREATE OPERATOR CLASS        DROP FUNCTION              LISTEN               UNLISTEN
ALTER TYPE                  CREATE RESOURCE QUEUE        DROP GROUP                 LOAD                 UPDATE
ALTER USER                  CREATE ROLE                  DROP INDEX                 LOCK                 VACUUM
ALTER USER MAPPING          CREATE RULE                  DROP LANGUAGE              MOVE                 VALUES
ANALYZE                     CREATE SCHEMA                DROP OPERATOR              NOTIFY
template1=#
template1=# CREATE DATABASE gptest;
CREATE DATABASE
Log in to gptest (with PGDATABASE=gptest exported above, psql connects to it by default):
$ psql
psql (8.2.15)
Type "help" for help.
gptest=#
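As a quick smoke test of the new cluster, data can be spread across the segments and counted per segment. This is a sketch; the table and column names are made up, but DISTRIBUTED BY is the Greenplum clause that picks the distribution key:

```shell
#!/bin/bash
# Sketch: create a distributed table and count rows per segment.
# Wrapped in a function, not invoked here; requires a running cluster.
smoke_test() {
    psql -d gptest <<'SQL'
CREATE TABLE smoke_t (id int, val text) DISTRIBUTED BY (id);
INSERT INTO smoke_t SELECT g, 'row ' || g FROM generate_series(1, 100) g;
SELECT gp_segment_id, count(*) FROM smoke_t GROUP BY 1 ORDER BY 1;
DROP TABLE smoke_t;
SQL
}
# usage: smoke_test
```

With two primaries on sdw1, the per-segment counts should come back in two roughly equal groups.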