PostgreSQL High Availability, Part 1: Managing Three PostgreSQL Nodes with Patroni
Environment: CentOS Linux release 7.6.1810 (Core), kernel 3.10.0-957.10.1.el7.x86_64
node1:192.168.216.130
node2:192.168.216.132
node3:192.168.216.134
PostgreSQL kernel tuning guide: https://github.com/digoal/blog/blob/master/201611/20161121_01.md?spm=a2c4e.10696291.0.0.660a19a4sIk1Ok&file=20161121_01.md
1. Install PostgreSQL
yum install https://download.postgresql.org/pub/repos/yum/11/redhat/rhel-7-x86_64/pgdg-centos11-11-2.noarch.rpm
yum install postgresql11
yum install postgresql11-server
yum install postgresql11-libs
yum install postgresql11-contrib
yum install postgresql11-devel
Reference: https://www.jianshu.com/p/b4a759c2208f
After installation, run rpm -qa | grep postgres to see which packages were installed:
postgresql11-libs-11.5-1PGDG.rhel7.x86_64
postgresql10-libs-10.10-1PGDG.rhel7.x86_64
postgresql11-11.5-1PGDG.rhel7.x86_64
postgresql11-contrib-11.5-1PGDG.rhel7.x86_64
postgresql11-server-11.5-1PGDG.rhel7.x86_64
postgresql11-devel-11.5-1PGDG.rhel7.x86_64
There is no need to run initdb manually after installation; Patroni will initialize the cluster for you. If the data directory has already been initialized and you do not want Patroni to initialize it, point the following parameters in the Patroni configuration file at the existing data and installation directories:
data_dir: /var/lib/pgsql/11/data
bin_dir: /usr/pgsql-11/bin
config_dir: /var/lib/pgsql/11/data
stats_temp_directory: /var/lib/pgsql_stats_tmp
chown -Rf postgres:postgres /var/lib/pgsql/11/data
chmod -Rf 700 /var/lib/pgsql/11/data
chown -Rf postgres:postgres /var/lib/pgsql_stats_tmp
chmod -Rf 700 /var/lib/pgsql_stats_tmp
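The chown/chmod commands above assume both directories already exist; /var/lib/pgsql_stats_tmp in particular is not created by the RPMs. A minimal sketch to pre-create and verify them (assuming the default paths used above):
# create the data directory and the custom stats_temp directory if they are missing
mkdir -p /var/lib/pgsql/11/data /var/lib/pgsql_stats_tmp
# verify ownership and permissions after running the chown/chmod commands
ls -ld /var/lib/pgsql/11/data /var/lib/pgsql_stats_tmp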
2. Install Patroni. It is recommended to switch pip to a domestic (China) mirror first, otherwise the installation may hit a lot of timeouts.
Reference: https://www.cnblogs.com/caidingyu/p/11566690.html
yum install gcc
yum install python-devel.x86_64
curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
python get-pip.py
pip install psycopg2-binary
pip install patroni[etcd,consul]
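A quick sanity check after the pip install (a minimal sketch, assuming pip placed the entry points on PATH; the exact versions printed will differ):
# confirm the patroni and patronictl entry points are available
patroni --version
patronictl version
# show where pip installed the package
pip show patroni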
3. Install the etcd service
Reference: https://www.cnblogs.com/caidingyu/p/11408389.html
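Before configuring Patroni it is worth confirming that the three-node etcd cluster is healthy. A minimal sketch, assuming etcdctl is on PATH and etcd serves clients on port 2379 as used below (the health subcommand differs between the etcd v2 and v3 CLI):
# etcd v2 CLI (ETCDCTL_API=2):
etcdctl --endpoints=http://192.168.216.130:2379,http://192.168.216.132:2379,http://192.168.216.134:2379 cluster-health
# etcd v3 CLI (ETCDCTL_API=3):
etcdctl --endpoints=192.168.216.130:2379,192.168.216.132:2379,192.168.216.134:2379 endpoint health
etcdctl --endpoints=192.168.216.130:2379 member list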
4. Create the Patroni configuration files
node1: the Patroni configuration file is as follows
[root@localhost tmp]# cat /etc/patroni/patroni.yml
scope: postgres-cluster
name: pgnode01
namespace: /service/

restapi:
  listen: 192.168.216.130:8008
  connect_address: 192.168.216.130:8008
  # certfile: /etc/ssl/certs/ssl-cert-snakeoil.pem
  # keyfile: /etc/ssl/private/ssl-cert-snakeoil.key
  # authentication:
  #   username: username
  #   password: password

etcd:
  hosts: 192.168.216.130:2379,192.168.216.132:2379,192.168.216.134:2379

bootstrap:
  # this section will be written into Etcd:/<namespace>/<scope>/config after initializing new cluster
  # and all other cluster members will use it as a `global configuration`
  dcs:
    ttl: 30
    loop_wait: 10
    retry_timeout: 10
    maximum_lag_on_failover: 1048576
    master_start_timeout: 300
    synchronous_mode: false
    synchronous_mode_strict: false
    # standby_cluster:
    #   host: 127.0.0.1
    #   port: 1111
    #   primary_slot_name: patroni
    postgresql:
      use_pg_rewind: true
      use_slots: true
      parameters:
        max_connections: 100
        superuser_reserved_connections: 5
        max_locks_per_transaction: 64
        max_prepared_transactions: 0
        huge_pages: try
        shared_buffers: 512MB
        work_mem: 128MB
        maintenance_work_mem: 256MB
        effective_cache_size: 4GB
        checkpoint_timeout: 15min
        checkpoint_completion_target: 0.9
        min_wal_size: 2GB
        max_wal_size: 4GB
        wal_buffers: 32MB
        default_statistics_target: 1000
        seq_page_cost: 1
        random_page_cost: 4
        effective_io_concurrency: 2
        synchronous_commit: on
        autovacuum: on
        autovacuum_max_workers: 5
        autovacuum_vacuum_scale_factor: 0.01
        autovacuum_analyze_scale_factor: 0.02
        autovacuum_vacuum_cost_limit: 200
        autovacuum_vacuum_cost_delay: 20
        autovacuum_naptime: 1s
        max_files_per_process: 4096
        archive_mode: on
        archive_timeout: 1800s
        archive_command: cd .
        wal_level: replica
        wal_keep_segments: 130
        max_wal_senders: 10
        max_replication_slots: 10
        hot_standby: on
        wal_log_hints: on
        shared_preload_libraries: pg_stat_statements,auto_explain
        pg_stat_statements.max: 10000
        pg_stat_statements.track: all
        pg_stat_statements.save: off
        auto_explain.log_min_duration: 10s
        auto_explain.log_analyze: true
        auto_explain.log_buffers: true
        auto_explain.log_timing: false
        auto_explain.log_triggers: true
        auto_explain.log_verbose: true
        auto_explain.log_nested_statements: true
        track_io_timing: on
        log_lock_waits: on
        log_temp_files: 0
        track_activities: on
        track_counts: on
        track_functions: all
        log_checkpoints: on
        logging_collector: on
        log_truncate_on_rotation: on
        log_rotation_age: 1d
        log_rotation_size: 0
        log_line_prefix: '%t [%p-%l] %r %q%u@%d '
        log_filename: 'postgresql-%a.log'
        log_directory: /var/log/postgresql
    # recovery_conf:
    #   restore_command: cp ../wal_archive/%f %p

  # some desired options for 'initdb'
  initdb:  # Note: It needs to be a list (some options need values, others are switches)
    - encoding: UTF8
    - locale: en_US.UTF-8
    - data-checksums

  pg_hba:  # Add following lines to pg_hba.conf after running 'initdb'
    - host replication replicator 0.0.0.0/0 md5
    - host all all 0.0.0.0/0 md5

  # Additional script to be launched after initial cluster creation (will be passed the connection URL as parameter)
  # post_init: /usr/local/bin/setup_cluster.sh

  # Some additional users which needs to be created after initializing new cluster
  # users:
  #   admin:
  #     password: admin-pass
  #     options:
  #       - createrole
  #       - createdb

postgresql:
  listen: 192.168.216.130,127.0.0.1:5432
  connect_address: 192.168.216.130:5432
  use_unix_socket: true
  data_dir: /var/lib/pgsql/11/data
  bin_dir: /usr/pgsql-11/bin
  config_dir: /var/lib/pgsql/11/data
  pgpass: /var/lib/pgsql/.pgpass
  authentication:
    replication:
      username: replicator
      password: replicator-pass
    superuser:
      username: postgres
      password: postgres-pass
    # rewind:  # Has no effect on postgres 10 and lower
    #   username: rewind_user
    #   password: rewind_password
  parameters:
    unix_socket_directories: /var/run/postgresql
    stats_temp_directory: /var/lib/pgsql_stats_tmp
  # callbacks:
  #   on_start:
  #   on_stop:
  #   on_restart:
  #   on_reload:
  #   on_role_change:
  create_replica_methods:
    # - pgbackrest
    # - wal_e
    - basebackup
  # pgbackrest:
  #   command: /usr/bin/pgbackrest --stanza=<Stanza_Name> --delta restore
  #   keep_data: True
  #   no_params: True
  # wal_e:
  #   command: patroni_wale_restore
  #   no_master: 1
  #   envdir: /etc/wal_e/envdir
  #   use_iam: 1
  basebackup:
    max-rate: '100M'

# watchdog:
#   mode: automatic  # Allowed values: off, automatic, required
#   device: /dev/watchdog
#   safety_margin: 5

tags:
  nofailover: false
  noloadbalance: false
  clonefrom: false
  nosync: false
  # specify a node to replicate from. This can be used to implement a cascading replication.
  # replicatefrom: (node name)
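Before wiring Patroni into systemd (section 5 below), the configuration can be tested by running Patroni in the foreground on node1 once: it will run initdb, start PostgreSQL, and take the leader lock. A minimal sketch, assuming patroni is already on PATH (stop it with Ctrl+C when done). Note also that archive_command: cd . in the parameters above is effectively a no-op placeholder and should be replaced with a real archiving command in production.
# run Patroni in the foreground as the postgres user to verify the config parses and the node bootstraps
sudo -u postgres patroni /etc/patroni/patroni.yml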
node2: the Patroni configuration file is as follows
[root@localhost postgresql]# cat /etc/patroni/patroni.yml
scope: postgres-cluster
name: pgnode02
namespace: /service/

restapi:
  listen: 192.168.216.132:8008
  connect_address: 192.168.216.132:8008
  # certfile: /etc/ssl/certs/ssl-cert-snakeoil.pem
  # keyfile: /etc/ssl/private/ssl-cert-snakeoil.key
  # authentication:
  #   username: username
  #   password: password

etcd:
  hosts: 192.168.216.130:2379,192.168.216.132:2379,192.168.216.134:2379

bootstrap:
  # this section will be written into Etcd:/<namespace>/<scope>/config after initializing new cluster
  # and all other cluster members will use it as a `global configuration`
  dcs:
    ttl: 30
    loop_wait: 10
    retry_timeout: 10
    maximum_lag_on_failover: 1048576
    master_start_timeout: 300
    synchronous_mode: false
    synchronous_mode_strict: false
    # standby_cluster:
    #   host: 127.0.0.1
    #   port: 1111
    #   primary_slot_name: patroni
    postgresql:
      use_pg_rewind: true
      use_slots: true
      parameters:
        max_connections: 100
        superuser_reserved_connections: 5
        max_locks_per_transaction: 64
        max_prepared_transactions: 0
        huge_pages: try
        shared_buffers: 512MB
        work_mem: 128MB
        maintenance_work_mem: 256MB
        effective_cache_size: 4GB
        checkpoint_timeout: 15min
        checkpoint_completion_target: 0.9
        min_wal_size: 2GB
        max_wal_size: 4GB
        wal_buffers: 32MB
        default_statistics_target: 1000
        seq_page_cost: 1
        random_page_cost: 4
        effective_io_concurrency: 2
        synchronous_commit: on
        autovacuum: on
        autovacuum_max_workers: 5
        autovacuum_vacuum_scale_factor: 0.01
        autovacuum_analyze_scale_factor: 0.02
        autovacuum_vacuum_cost_limit: 200
        autovacuum_vacuum_cost_delay: 20
        autovacuum_naptime: 1s
        max_files_per_process: 4096
        archive_mode: on
        archive_timeout: 1800s
        archive_command: cd .
        wal_level: replica
        wal_keep_segments: 130
        max_wal_senders: 10
        max_replication_slots: 10
        hot_standby: on
        wal_log_hints: on
        shared_preload_libraries: pg_stat_statements,auto_explain
        pg_stat_statements.max: 10000
        pg_stat_statements.track: all
        pg_stat_statements.save: off
        auto_explain.log_min_duration: 10s
        auto_explain.log_analyze: true
        auto_explain.log_buffers: true
        auto_explain.log_timing: false
        auto_explain.log_triggers: true
        auto_explain.log_verbose: true
        auto_explain.log_nested_statements: true
        track_io_timing: on
        log_lock_waits: on
        log_temp_files: 0
        track_activities: on
        track_counts: on
        track_functions: all
        log_checkpoints: on
        logging_collector: on
        log_truncate_on_rotation: on
        log_rotation_age: 1d
        log_rotation_size: 0
        log_line_prefix: '%t [%p-%l] %r %q%u@%d '
        log_filename: 'postgresql-%a.log'
        log_directory: /var/log/postgresql
    # recovery_conf:
    #   restore_command: cp ../wal_archive/%f %p

  # some desired options for 'initdb'
  initdb:  # Note: It needs to be a list (some options need values, others are switches)
    - encoding: UTF8
    - locale: en_US.UTF-8
    - data-checksums

  pg_hba:  # Add following lines to pg_hba.conf after running 'initdb'
    - host replication replicator 0.0.0.0/0 md5
    - host all all 0.0.0.0/0 md5

  # Additional script to be launched after initial cluster creation (will be passed the connection URL as parameter)
  # post_init: /usr/local/bin/setup_cluster.sh

  # Some additional users which needs to be created after initializing new cluster
  # users:
  #   admin:
  #     password: admin-pass
  #     options:
  #       - createrole
  #       - createdb

postgresql:
  listen: 192.168.216.132,127.0.0.1:5432
  connect_address: 192.168.216.132:5432
  use_unix_socket: true
  data_dir: /var/lib/pgsql/11/data
  bin_dir: /usr/pgsql-11/bin
  config_dir: /var/lib/pgsql/11/data
  pgpass: /var/lib/pgsql/.pgpass
  authentication:
    replication:
      username: replicator
      password: replicator-pass
    superuser:
      username: postgres
      password: postgres-pass
    # rewind:  # Has no effect on postgres 10 and lower
    #   username: rewind_user
    #   password: rewind_password
  parameters:
    unix_socket_directories: /var/run/postgresql
    stats_temp_directory: /var/lib/pgsql_stats_tmp
  # callbacks:
  #   on_start:
  #   on_stop:
  #   on_restart:
  #   on_reload:
  #   on_role_change:
  create_replica_methods:
    # - pgbackrest
    # - wal_e
    - basebackup
  # pgbackrest:
  #   command: /usr/bin/pgbackrest --stanza=<Stanza_Name> --delta restore
  #   keep_data: True
  #   no_params: True
  # wal_e:
  #   command: patroni_wale_restore
  #   no_master: 1
  #   envdir: /etc/wal_e/envdir
  #   use_iam: 1
  basebackup:
    max-rate: '100M'

# watchdog:
#   mode: automatic  # Allowed values: off, automatic, required
#   device: /dev/watchdog
#   safety_margin: 5

tags:
  nofailover: false
  noloadbalance: false
  clonefrom: false
  nosync: false
  # specify a node to replicate from. This can be used to implement a cascading replication.
  # replicatefrom: (node name)
node3: the Patroni configuration file is as follows
[root@localhost tmp]# cat /etc/patroni/patroni.yml
scope: postgres-cluster
name: pgnode03
namespace: /service/

restapi:
  listen: 192.168.216.134:8008
  connect_address: 192.168.216.134:8008
  # certfile: /etc/ssl/certs/ssl-cert-snakeoil.pem
  # keyfile: /etc/ssl/private/ssl-cert-snakeoil.key
  # authentication:
  #   username: username
  #   password: password

etcd:
  hosts: 192.168.216.130:2379,192.168.216.132:2379,192.168.216.134:2379

bootstrap:
  # this section will be written into Etcd:/<namespace>/<scope>/config after initializing new cluster
  # and all other cluster members will use it as a `global configuration`
  dcs:
    ttl: 30
    loop_wait: 10
    retry_timeout: 10
    maximum_lag_on_failover: 1048576
    master_start_timeout: 300
    synchronous_mode: false
    synchronous_mode_strict: false
    # standby_cluster:
    #   host: 127.0.0.1
    #   port: 1111
    #   primary_slot_name: patroni
    postgresql:
      use_pg_rewind: true
      use_slots: true
      parameters:
        max_connections: 100
        superuser_reserved_connections: 5
        max_locks_per_transaction: 64
        max_prepared_transactions: 0
        huge_pages: try
        shared_buffers: 512MB
        work_mem: 128MB
        maintenance_work_mem: 256MB
        effective_cache_size: 4GB
        checkpoint_timeout: 15min
        checkpoint_completion_target: 0.9
        min_wal_size: 2GB
        max_wal_size: 4GB
        wal_buffers: 32MB
        default_statistics_target: 1000
        seq_page_cost: 1
        random_page_cost: 4
        effective_io_concurrency: 2
        synchronous_commit: on
        autovacuum: on
        autovacuum_max_workers: 5
        autovacuum_vacuum_scale_factor: 0.01
        autovacuum_analyze_scale_factor: 0.02
        autovacuum_vacuum_cost_limit: 200
        autovacuum_vacuum_cost_delay: 20
        autovacuum_naptime: 1s
        max_files_per_process: 4096
        archive_mode: on
        archive_timeout: 1800s
        archive_command: cd .
        wal_level: replica
        wal_keep_segments: 130
        max_wal_senders: 10
        max_replication_slots: 10
        hot_standby: on
        wal_log_hints: on
        shared_preload_libraries: pg_stat_statements,auto_explain
        pg_stat_statements.max: 10000
        pg_stat_statements.track: all
        pg_stat_statements.save: off
        auto_explain.log_min_duration: 10s
        auto_explain.log_analyze: true
        auto_explain.log_buffers: true
        auto_explain.log_timing: false
        auto_explain.log_triggers: true
        auto_explain.log_verbose: true
        auto_explain.log_nested_statements: true
        track_io_timing: on
        log_lock_waits: on
        log_temp_files: 0
        track_activities: on
        track_counts: on
        track_functions: all
        log_checkpoints: on
        logging_collector: on
        log_truncate_on_rotation: on
        log_rotation_age: 1d
        log_rotation_size: 0
        log_line_prefix: '%t [%p-%l] %r %q%u@%d '
        log_filename: 'postgresql-%a.log'
        log_directory: /var/log/postgresql
    # recovery_conf:
    #   restore_command: cp ../wal_archive/%f %p

  # some desired options for 'initdb'
  initdb:  # Note: It needs to be a list (some options need values, others are switches)
    - encoding: UTF8
    - locale: en_US.UTF-8
    - data-checksums

  pg_hba:  # Add following lines to pg_hba.conf after running 'initdb'
    - host replication replicator 0.0.0.0/0 md5
    - host all all 0.0.0.0/0 md5

  # Additional script to be launched after initial cluster creation (will be passed the connection URL as parameter)
  # post_init: /usr/local/bin/setup_cluster.sh

  # Some additional users which needs to be created after initializing new cluster
  # users:
  #   admin:
  #     password: admin-pass
  #     options:
  #       - createrole
  #       - createdb

postgresql:
  listen: 192.168.216.134,127.0.0.1:5432
  connect_address: 192.168.216.134:5432
  use_unix_socket: true
  data_dir: /var/lib/pgsql/11/data
  bin_dir: /usr/pgsql-11/bin
  config_dir: /var/lib/pgsql/11/data
  pgpass: /var/lib/pgsql/.pgpass
  authentication:
    replication:
      username: replicator
      password: replicator-pass
    superuser:
      username: postgres
      password: postgres-pass
    # rewind:  # Has no effect on postgres 10 and lower
    #   username: rewind_user
    #   password: rewind_password
  parameters:
    unix_socket_directories: /var/run/postgresql
    stats_temp_directory: /var/lib/pgsql_stats_tmp
  # callbacks:
  #   on_start:
  #   on_stop:
  #   on_restart:
  #   on_reload:
  #   on_role_change:
  create_replica_methods:
    # - pgbackrest
    # - wal_e
    - basebackup
  # pgbackrest:
  #   command: /usr/bin/pgbackrest --stanza=<Stanza_Name> --delta restore
  #   keep_data: True
  #   no_params: True
  # wal_e:
  #   command: patroni_wale_restore
  #   no_master: 1
  #   envdir: /etc/wal_e/envdir
  #   use_iam: 1
  basebackup:
    max-rate: '100M'

# watchdog:
#   mode: automatic  # Allowed values: off, automatic, required
#   device: /dev/watchdog
#   safety_margin: 5

tags:
  nofailover: false
  noloadbalance: false
  clonefrom: false
  nosync: false
  # specify a node to replicate from. This can be used to implement a cascading replication.
  # replicatefrom: (node name)
5. Create /etc/systemd/system/patroni.service on each of the three nodes so that the Patroni service can be managed with systemctl
First, run the following to confirm where patroni is installed:
which patroni
If the install location does not match the path used in patroni.service, you can create symlinks.
1. Create the symlinks (or manually edit the patroni path in patroni.service to the actual path, i.e. ExecStart=<actual patroni path>):
ln -s /usr/bin/patronictl /usr/local/bin/patronictl
ln -s /usr/bin/patroni /usr/local/bin/patroni
2. On node1, create /etc/systemd/system/patroni.service:
cat /etc/systemd/system/patroni.service
[Unit]
Description=Runners to orchestrate a high-availability PostgreSQL - patroni
After=syslog.target network.target

[Service]
Type=simple

User=postgres
Group=postgres

# Read in configuration file if it exists, otherwise proceed
EnvironmentFile=-/etc/patroni_env.conf

WorkingDirectory=~

# Where to send early-startup messages from the server
# This is normally controlled by the global default set by systemd
# StandardOutput=syslog

# Pre-commands to start watchdog device
# Uncomment if watchdog is part of your patroni setup
#ExecStartPre=-/usr/bin/sudo /sbin/modprobe softdog
#ExecStartPre=-/usr/bin/sudo /bin/chown postgres /dev/watchdog

# Start the patroni process
ExecStart=/usr/local/bin/patroni /etc/patroni/patroni.yml

# Send HUP to reload from patroni.yml
ExecReload=/bin/kill -s HUP $MAINPID

# only kill the patroni process, not its children, so it will gracefully stop postgres
KillMode=process

# Give a reasonable amount of time for the server to start up/shut down
TimeoutSec=60

# Do not restart the service if it crashes, we want to manually inspect database on failure
Restart=no

[Install]
WantedBy=multi-user.target
3. On node2, create /etc/systemd/system/patroni.service:
cat /etc/systemd/system/patroni.service
[Unit]
Description=Runners to orchestrate a high-availability PostgreSQL - patroni
After=syslog.target network.target

[Service]
Type=simple

User=postgres
Group=postgres

# Read in configuration file if it exists, otherwise proceed
EnvironmentFile=-/etc/patroni_env.conf

WorkingDirectory=~

# Where to send early-startup messages from the server
# This is normally controlled by the global default set by systemd
# StandardOutput=syslog

# Pre-commands to start watchdog device
# Uncomment if watchdog is part of your patroni setup
#ExecStartPre=-/usr/bin/sudo /sbin/modprobe softdog
#ExecStartPre=-/usr/bin/sudo /bin/chown postgres /dev/watchdog

# Start the patroni process
ExecStart=/usr/local/bin/patroni /etc/patroni/patroni.yml

# Send HUP to reload from patroni.yml
ExecReload=/bin/kill -s HUP $MAINPID

# only kill the patroni process, not its children, so it will gracefully stop postgres
KillMode=process

# Give a reasonable amount of time for the server to start up/shut down
TimeoutSec=60

# Do not restart the service if it crashes, we want to manually inspect database on failure
Restart=no

[Install]
WantedBy=multi-user.target
4. On node3, repeat the same steps; the unit file is identical and can simply be copied over.
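With the unit file in place on all three nodes, Patroni can be started through systemd and the cluster state checked with patronictl. A minimal sketch (run on each node; the leader/replica roles shown will depend on which node bootstrapped first):
# reload systemd so it picks up the new unit, then enable and start Patroni
systemctl daemon-reload
systemctl enable patroni
systemctl start patroni
systemctl status patroni
# list cluster members, their roles, and replication lag
patronictl -c /etc/patroni/patroni.yml list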