PostgreSQL High Availability, Part 1: Managing Three PostgreSQL Nodes with Patroni
Environment: CentOS Linux release 7.6.1810 (Core), kernel 3.10.0-957.10.1.el7.x86_64
node1:192.168.216.130
node2:192.168.216.132
node3:192.168.216.134
PostgreSQL kernel parameter tuning guide: https://github.com/digoal/blog/blob/master/201611/20161121_01.md?spm=a2c4e.10696291.0.0.660a19a4sIk1Ok&file=20161121_01.md
1. Install PostgreSQL
yum install https://download.postgresql.org/pub/repos/yum/11/redhat/rhel-7-x86_64/pgdg-centos11-11-2.noarch.rpm
yum install postgresql11
yum install postgresql11-server
yum install postgresql11-libs
yum install postgresql11-contrib
yum install postgresql11-devel
Reference: https://www.jianshu.com/p/b4a759c2208f
After installation, check which packages were installed with rpm -qa | grep postgres:
postgresql11-libs-11.5-1PGDG.rhel7.x86_64
postgresql10-libs-10.10-1PGDG.rhel7.x86_64
postgresql11-11.5-1PGDG.rhel7.x86_64
postgresql11-contrib-11.5-1PGDG.rhel7.x86_64
postgresql11-server-11.5-1PGDG.rhel7.x86_64
postgresql11-devel-11.5-1PGDG.rhel7.x86_64
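A quick way to confirm that the server binaries landed in the expected location (the exact minor version will depend on the repository snapshot at install time):
/usr/pgsql-11/bin/postgres --version
/usr/pgsql-11/bin/initdb --version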
No manual initdb is needed after installation; Patroni can perform the initialization itself. If the data directory has already been initialized and you do not want Patroni to initialize it, set the following parameters in the Patroni configuration file to point at the existing data directory and installation directory:
data_dir: /var/lib/pgsql/11/data
bin_dir: /usr/pgsql-11/bin
config_dir: /var/lib/pgsql/11/data
stats_temp_directory: /var/lib/pgsql_stats_tmp
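These directories must exist and belong to the postgres user. The stats temp directory used here is a custom path, so it will likely not exist yet (a minimal sketch under that assumption); create it first, then set ownership and permissions as below:
mkdir -p /var/lib/pgsql_stats_tmp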
chown -Rf postgres:postgres /var/lib/pgsql/11/data
chmod -Rf 700 /var/lib/pgsql/11/data
chown -Rf postgres:postgres /var/lib/pgsql_stats_tmp
chmod -Rf 700 /var/lib/pgsql_stats_tmp
2. Install Patroni. It is recommended to switch pip to a nearby mirror first; otherwise the installation may run into many timeouts.
Reference: https://www.cnblogs.com/caidingyu/p/11566690.html
yum install gcc
yum install python-devel.x86_64
curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
python get-pip.py
pip install psycopg2-binary
pip install patroni[etcd,consul]
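A quick sanity check that Patroni and its patronictl tool were installed and are on the PATH (a minimal sketch; the reported version depends on the release pip pulled):
patroni --version
patronictl version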
3. Install the etcd service
Reference: https://www.cnblogs.com/caidingyu/p/11408389.html
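Before continuing, it is worth confirming that the three-member etcd cluster is healthy, since Patroni stores its state there. A minimal check, assuming the v2 etcdctl API shipped with the CentOS 7 etcd package:
etcdctl --endpoints http://192.168.216.130:2379,http://192.168.216.132:2379,http://192.168.216.134:2379 cluster-health
etcdctl --endpoints http://192.168.216.130:2379 member list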
4. Create the Patroni configuration file on each node
node1: the Patroni configuration file is as follows
[root@localhost tmp]# cat /etc/patroni/patroni.yml
scope: postgres-cluster
name: pgnode01
namespace: /service/

restapi:
  listen: 192.168.216.130:8008
  connect_address: 192.168.216.130:8008
#  certfile: /etc/ssl/certs/ssl-cert-snakeoil.pem
#  keyfile: /etc/ssl/private/ssl-cert-snakeoil.key
#  authentication:
#    username: username
#    password: password

etcd:
  hosts: 192.168.216.130:2379,192.168.216.132:2379,192.168.216.134:2379

bootstrap:
  # this section will be written into Etcd:/<namespace>/<scope>/config after initializing new cluster
  # and all other cluster members will use it as a `global configuration`
  dcs:
    ttl: 30
    loop_wait: 10
    retry_timeout: 10
    maximum_lag_on_failover: 1048576
    master_start_timeout: 300
    synchronous_mode: false
    synchronous_mode_strict: false
#    standby_cluster:
#      host: 127.0.0.1
#      port: 1111
#      primary_slot_name: patroni
    postgresql:
      use_pg_rewind: true
      use_slots: true
      parameters:
        max_connections: 100
        superuser_reserved_connections: 5
        max_locks_per_transaction: 64
        max_prepared_transactions: 0
        huge_pages: try
        shared_buffers: 512MB
        work_mem: 128MB
        maintenance_work_mem: 256MB
        effective_cache_size: 4GB
        checkpoint_timeout: 15min
        checkpoint_completion_target: 0.9
        min_wal_size: 2GB
        max_wal_size: 4GB
        wal_buffers: 32MB
        default_statistics_target: 1000
        seq_page_cost: 1
        random_page_cost: 4
        effective_io_concurrency: 2
        synchronous_commit: on
        autovacuum: on
        autovacuum_max_workers: 5
        autovacuum_vacuum_scale_factor: 0.01
        autovacuum_analyze_scale_factor: 0.02
        autovacuum_vacuum_cost_limit: 200
        autovacuum_vacuum_cost_delay: 20
        autovacuum_naptime: 1s
        max_files_per_process: 4096
        archive_mode: on
        archive_timeout: 1800s
        archive_command: cd .
        wal_level: replica
        wal_keep_segments: 130
        max_wal_senders: 10
        max_replication_slots: 10
        hot_standby: on
        wal_log_hints: on
        shared_preload_libraries: pg_stat_statements,auto_explain
        pg_stat_statements.max: 10000
        pg_stat_statements.track: all
        pg_stat_statements.save: off
        auto_explain.log_min_duration: 10s
        auto_explain.log_analyze: true
        auto_explain.log_buffers: true
        auto_explain.log_timing: false
        auto_explain.log_triggers: true
        auto_explain.log_verbose: true
        auto_explain.log_nested_statements: true
        track_io_timing: on
        log_lock_waits: on
        log_temp_files: 0
        track_activities: on
        track_counts: on
        track_functions: all
        log_checkpoints: on
        logging_collector: on
        log_truncate_on_rotation: on
        log_rotation_age: 1d
        log_rotation_size: 0
        log_line_prefix: '%t [%p-%l] %r %q%u@%d '
        log_filename: 'postgresql-%a.log'
        log_directory: /var/log/postgresql

#  recovery_conf:
#    restore_command: cp ../wal_archive/%f %p

  # some desired options for 'initdb'
  initdb:  # Note: It needs to be a list (some options need values, others are switches)
    - encoding: UTF8
    - locale: en_US.UTF-8
    - data-checksums

  pg_hba:  # Add following lines to pg_hba.conf after running 'initdb'
    - host replication replicator 0.0.0.0/0 md5
    - host all all 0.0.0.0/0 md5

  # Additional script to be launched after initial cluster creation (will be passed the connection URL as parameter)
#  post_init: /usr/local/bin/setup_cluster.sh

  # Some additional users which needs to be created after initializing new cluster
#  users:
#    admin:
#      password: admin-pass
#      options:
#        - createrole
#        - createdb

postgresql:
  listen: 192.168.216.130,127.0.0.1:5432
  connect_address: 192.168.216.130:5432
  use_unix_socket: true
  data_dir: /var/lib/pgsql/11/data
  bin_dir: /usr/pgsql-11/bin
  config_dir: /var/lib/pgsql/11/data
  pgpass: /var/lib/pgsql/.pgpass
  authentication:
    replication:
      username: replicator
      password: replicator-pass
    superuser:
      username: postgres
      password: postgres-pass
#    rewind:  # Has no effect on postgres 10 and lower
#      username: rewind_user
#      password: rewind_password
  parameters:
    unix_socket_directories: /var/run/postgresql
    stats_temp_directory: /var/lib/pgsql_stats_tmp

#  callbacks:
#    on_start:
#    on_stop:
#    on_restart:
#    on_reload:
#    on_role_change:

  create_replica_methods:
#    - pgbackrest
#    - wal_e
    - basebackup
#  pgbackrest:
#    command: /usr/bin/pgbackrest --stanza=<Stanza_Name> --delta restore
#    keep_data: True
#    no_params: True
#  wal_e
#    command: patroni_wale_restore
#    no_master: 1
#    envdir: /etc/wal_e/envdir
#    use_iam: 1
  basebackup:
    max-rate: '100M'

#watchdog:
#  mode: automatic  # Allowed values: off, automatic, required
#  device: /dev/watchdog
#  safety_margin: 5

tags:
  nofailover: false
  noloadbalance: false
  clonefrom: false
  nosync: false
  # specify a node to replicate from. This can be used to implement a cascading replication.
#  replicatefrom: (node name)
node2: the Patroni configuration file is as follows
[root@localhost postgresql]# cat /etc/patroni/patroni.yml
scope: postgres-cluster
name: pgnode02
namespace: /service/

restapi:
  listen: 192.168.216.132:8008
  connect_address: 192.168.216.132:8008
#  certfile: /etc/ssl/certs/ssl-cert-snakeoil.pem
#  keyfile: /etc/ssl/private/ssl-cert-snakeoil.key
#  authentication:
#    username: username
#    password: password

etcd:
  hosts: 192.168.216.130:2379,192.168.216.132:2379,192.168.216.134:2379

bootstrap:
  # this section will be written into Etcd:/<namespace>/<scope>/config after initializing new cluster
  # and all other cluster members will use it as a `global configuration`
  dcs:
    ttl: 30
    loop_wait: 10
    retry_timeout: 10
    maximum_lag_on_failover: 1048576
    master_start_timeout: 300
    synchronous_mode: false
    synchronous_mode_strict: false
#    standby_cluster:
#      host: 127.0.0.1
#      port: 1111
#      primary_slot_name: patroni
    postgresql:
      use_pg_rewind: true
      use_slots: true
      parameters:
        max_connections: 100
        superuser_reserved_connections: 5
        max_locks_per_transaction: 64
        max_prepared_transactions: 0
        huge_pages: try
        shared_buffers: 512MB
        work_mem: 128MB
        maintenance_work_mem: 256MB
        effective_cache_size: 4GB
        checkpoint_timeout: 15min
        checkpoint_completion_target: 0.9
        min_wal_size: 2GB
        max_wal_size: 4GB
        wal_buffers: 32MB
        default_statistics_target: 1000
        seq_page_cost: 1
        random_page_cost: 4
        effective_io_concurrency: 2
        synchronous_commit: on
        autovacuum: on
        autovacuum_max_workers: 5
        autovacuum_vacuum_scale_factor: 0.01
        autovacuum_analyze_scale_factor: 0.02
        autovacuum_vacuum_cost_limit: 200
        autovacuum_vacuum_cost_delay: 20
        autovacuum_naptime: 1s
        max_files_per_process: 4096
        archive_mode: on
        archive_timeout: 1800s
        archive_command: cd .
        wal_level: replica
        wal_keep_segments: 130
        max_wal_senders: 10
        max_replication_slots: 10
        hot_standby: on
        wal_log_hints: on
        shared_preload_libraries: pg_stat_statements,auto_explain
        pg_stat_statements.max: 10000
        pg_stat_statements.track: all
        pg_stat_statements.save: off
        auto_explain.log_min_duration: 10s
        auto_explain.log_analyze: true
        auto_explain.log_buffers: true
        auto_explain.log_timing: false
        auto_explain.log_triggers: true
        auto_explain.log_verbose: true
        auto_explain.log_nested_statements: true
        track_io_timing: on
        log_lock_waits: on
        log_temp_files: 0
        track_activities: on
        track_counts: on
        track_functions: all
        log_checkpoints: on
        logging_collector: on
        log_truncate_on_rotation: on
        log_rotation_age: 1d
        log_rotation_size: 0
        log_line_prefix: '%t [%p-%l] %r %q%u@%d '
        log_filename: 'postgresql-%a.log'
        log_directory: /var/log/postgresql

#  recovery_conf:
#    restore_command: cp ../wal_archive/%f %p

  # some desired options for 'initdb'
  initdb:  # Note: It needs to be a list (some options need values, others are switches)
    - encoding: UTF8
    - locale: en_US.UTF-8
    - data-checksums

  pg_hba:  # Add following lines to pg_hba.conf after running 'initdb'
    - host replication replicator 0.0.0.0/0 md5
    - host all all 0.0.0.0/0 md5

  # Additional script to be launched after initial cluster creation (will be passed the connection URL as parameter)
#  post_init: /usr/local/bin/setup_cluster.sh

  # Some additional users which needs to be created after initializing new cluster
#  users:
#    admin:
#      password: admin-pass
#      options:
#        - createrole
#        - createdb

postgresql:
  listen: 192.168.216.132,127.0.0.1:5432
  connect_address: 192.168.216.132:5432
  use_unix_socket: true
  data_dir: /var/lib/pgsql/11/data
  bin_dir: /usr/pgsql-11/bin
  config_dir: /var/lib/pgsql/11/data
  pgpass: /var/lib/pgsql/.pgpass
  authentication:
    replication:
      username: replicator
      password: replicator-pass
    superuser:
      username: postgres
      password: postgres-pass
#    rewind:  # Has no effect on postgres 10 and lower
#      username: rewind_user
#      password: rewind_password
  parameters:
    unix_socket_directories: /var/run/postgresql
    stats_temp_directory: /var/lib/pgsql_stats_tmp

#  callbacks:
#    on_start:
#    on_stop:
#    on_restart:
#    on_reload:
#    on_role_change:

  create_replica_methods:
#    - pgbackrest
#    - wal_e
    - basebackup
#  pgbackrest:
#    command: /usr/bin/pgbackrest --stanza=<Stanza_Name> --delta restore
#    keep_data: True
#    no_params: True
#  wal_e
#    command: patroni_wale_restore
#    no_master: 1
#    envdir: /etc/wal_e/envdir
#    use_iam: 1
  basebackup:
    max-rate: '100M'

#watchdog:
#  mode: automatic  # Allowed values: off, automatic, required
#  device: /dev/watchdog
#  safety_margin: 5

tags:
  nofailover: false
  noloadbalance: false
  clonefrom: false
  nosync: false
  # specify a node to replicate from. This can be used to implement a cascading replication.
#  replicatefrom: (node name)
node3: the Patroni configuration file is as follows
[root@localhost tmp]# cat /etc/patroni/patroni.yml
scope: postgres-cluster
name: pgnode03
namespace: /service/

restapi:
  listen: 192.168.216.134:8008
  connect_address: 192.168.216.134:8008
#  certfile: /etc/ssl/certs/ssl-cert-snakeoil.pem
#  keyfile: /etc/ssl/private/ssl-cert-snakeoil.key
#  authentication:
#    username: username
#    password: password

etcd:
  hosts: 192.168.216.130:2379,192.168.216.132:2379,192.168.216.134:2379

bootstrap:
  # this section will be written into Etcd:/<namespace>/<scope>/config after initializing new cluster
  # and all other cluster members will use it as a `global configuration`
  dcs:
    ttl: 30
    loop_wait: 10
    retry_timeout: 10
    maximum_lag_on_failover: 1048576
    master_start_timeout: 300
    synchronous_mode: false
    synchronous_mode_strict: false
#    standby_cluster:
#      host: 127.0.0.1
#      port: 1111
#      primary_slot_name: patroni
    postgresql:
      use_pg_rewind: true
      use_slots: true
      parameters:
        max_connections: 100
        superuser_reserved_connections: 5
        max_locks_per_transaction: 64
        max_prepared_transactions: 0
        huge_pages: try
        shared_buffers: 512MB
        work_mem: 128MB
        maintenance_work_mem: 256MB
        effective_cache_size: 4GB
        checkpoint_timeout: 15min
        checkpoint_completion_target: 0.9
        min_wal_size: 2GB
        max_wal_size: 4GB
        wal_buffers: 32MB
        default_statistics_target: 1000
        seq_page_cost: 1
        random_page_cost: 4
        effective_io_concurrency: 2
        synchronous_commit: on
        autovacuum: on
        autovacuum_max_workers: 5
        autovacuum_vacuum_scale_factor: 0.01
        autovacuum_analyze_scale_factor: 0.02
        autovacuum_vacuum_cost_limit: 200
        autovacuum_vacuum_cost_delay: 20
        autovacuum_naptime: 1s
        max_files_per_process: 4096
        archive_mode: on
        archive_timeout: 1800s
        archive_command: cd .
        wal_level: replica
        wal_keep_segments: 130
        max_wal_senders: 10
        max_replication_slots: 10
        hot_standby: on
        wal_log_hints: on
        shared_preload_libraries: pg_stat_statements,auto_explain
        pg_stat_statements.max: 10000
        pg_stat_statements.track: all
        pg_stat_statements.save: off
        auto_explain.log_min_duration: 10s
        auto_explain.log_analyze: true
        auto_explain.log_buffers: true
        auto_explain.log_timing: false
        auto_explain.log_triggers: true
        auto_explain.log_verbose: true
        auto_explain.log_nested_statements: true
        track_io_timing: on
        log_lock_waits: on
        log_temp_files: 0
        track_activities: on
        track_counts: on
        track_functions: all
        log_checkpoints: on
        logging_collector: on
        log_truncate_on_rotation: on
        log_rotation_age: 1d
        log_rotation_size: 0
        log_line_prefix: '%t [%p-%l] %r %q%u@%d '
        log_filename: 'postgresql-%a.log'
        log_directory: /var/log/postgresql

#  recovery_conf:
#    restore_command: cp ../wal_archive/%f %p

  # some desired options for 'initdb'
  initdb:  # Note: It needs to be a list (some options need values, others are switches)
    - encoding: UTF8
    - locale: en_US.UTF-8
    - data-checksums

  pg_hba:  # Add following lines to pg_hba.conf after running 'initdb'
    - host replication replicator 0.0.0.0/0 md5
    - host all all 0.0.0.0/0 md5

  # Additional script to be launched after initial cluster creation (will be passed the connection URL as parameter)
#  post_init: /usr/local/bin/setup_cluster.sh

  # Some additional users which needs to be created after initializing new cluster
#  users:
#    admin:
#      password: admin-pass
#      options:
#        - createrole
#        - createdb

postgresql:
  listen: 192.168.216.134,127.0.0.1:5432
  connect_address: 192.168.216.134:5432
  use_unix_socket: true
  data_dir: /var/lib/pgsql/11/data
  bin_dir: /usr/pgsql-11/bin
  config_dir: /var/lib/pgsql/11/data
  pgpass: /var/lib/pgsql/.pgpass
  authentication:
    replication:
      username: replicator
      password: replicator-pass
    superuser:
      username: postgres
      password: postgres-pass
#    rewind:  # Has no effect on postgres 10 and lower
#      username: rewind_user
#      password: rewind_password
  parameters:
    unix_socket_directories: /var/run/postgresql
    stats_temp_directory: /var/lib/pgsql_stats_tmp

#  callbacks:
#    on_start:
#    on_stop:
#    on_restart:
#    on_reload:
#    on_role_change:

  create_replica_methods:
#    - pgbackrest
#    - wal_e
    - basebackup
#  pgbackrest:
#    command: /usr/bin/pgbackrest --stanza=<Stanza_Name> --delta restore
#    keep_data: True
#    no_params: True
#  wal_e
#    command: patroni_wale_restore
#    no_master: 1
#    envdir: /etc/wal_e/envdir
#    use_iam: 1
  basebackup:
    max-rate: '100M'

#watchdog:
#  mode: automatic  # Allowed values: off, automatic, required
#  device: /dev/watchdog
#  safety_margin: 5

tags:
  nofailover: false
  noloadbalance: false
  clonefrom: false
  nosync: false
  # specify a node to replicate from. This can be used to implement a cascading replication.
#  replicatefrom: (node name)
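On each node, after writing /etc/patroni/patroni.yml, a quick YAML syntax check helps catch indentation mistakes before the service is ever started. A minimal sketch that reuses the PyYAML library Patroni itself depends on (it prints nothing on success and a traceback on error):
python -c "import yaml; yaml.safe_load(open('/etc/patroni/patroni.yml'))"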
5. On each of the three nodes, create /etc/systemd/system/patroni.service so that the Patroni service can be managed with systemctl.
First confirm where patroni was installed:
which patroni
If the installed location does not match the path used in patroni.service, you can create symlinks as shown below.
1) Create the symlinks (or manually edit the patroni path in patroni.service, i.e. set ExecStart to the actual patroni location):
ln -s /usr/bin/patronictl /usr/local/bin/patronictl
ln -s /usr/bin/patroni /usr/local/bin/patroni
2) Create /etc/systemd/system/patroni.service on node1:
cat /etc/systemd/system/patroni.service
[Unit]
Description=Runners to orchestrate a high-availability PostgreSQL - patroni
After=syslog.target network.target

[Service]
Type=simple
User=postgres
Group=postgres

# Read in configuration file if it exists, otherwise proceed
EnvironmentFile=-/etc/patroni_env.conf

WorkingDirectory=~

# Where to send early-startup messages from the server
# This is normally controlled by the global default set by systemd
# StandardOutput=syslog

# Pre-commands to start watchdog device
# Uncomment if watchdog is part of your patroni setup
#ExecStartPre=-/usr/bin/sudo /sbin/modprobe softdog
#ExecStartPre=-/usr/bin/sudo /bin/chown postgres /dev/watchdog

# Start the patroni process
ExecStart=/usr/local/bin/patroni /etc/patroni/patroni.yml

# Send HUP to reload from patroni.yml
ExecReload=/bin/kill -s HUP $MAINPID

# only kill the patroni process, not its children, so it will gracefully stop postgres
KillMode=process

# Give a reasonable amount of time for the server to start up/shut down
TimeoutSec=60

# Do not restart the service if it crashes, we want to manually inspect database on failure
Restart=no

[Install]
WantedBy=multi-user.target
3) Create /etc/systemd/system/patroni.service on node2:
cat /etc/systemd/system/patroni.service
[Unit]
Description=Runners to orchestrate a high-availability PostgreSQL - patroni
After=syslog.target network.target

[Service]
Type=simple
User=postgres
Group=postgres

# Read in configuration file if it exists, otherwise proceed
EnvironmentFile=-/etc/patroni_env.conf

WorkingDirectory=~

# Where to send early-startup messages from the server
# This is normally controlled by the global default set by systemd
# StandardOutput=syslog

# Pre-commands to start watchdog device
# Uncomment if watchdog is part of your patroni setup
#ExecStartPre=-/usr/bin/sudo /sbin/modprobe softdog
#ExecStartPre=-/usr/bin/sudo /bin/chown postgres /dev/watchdog

# Start the patroni process
ExecStart=/usr/local/bin/patroni /etc/patroni/patroni.yml

# Send HUP to reload from patroni.yml
ExecReload=/bin/kill -s HUP $MAINPID

# only kill the patroni process, not its children, so it will gracefully stop postgres
KillMode=process

# Give a reasonable amount of time for the server to start up/shut down
TimeoutSec=60

# Do not restart the service if it crashes, we want to manually inspect database on failure
Restart=no

[Install]
WantedBy=multi-user.target
4) On node3 the steps are the same; the unit file can simply be copied over.
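With the configuration and unit files in place on all three nodes, reload systemd, enable and start Patroni on each node, and then check the cluster state from any node. A minimal sketch (the -c flag assumes the configuration path used above):
systemctl daemon-reload
systemctl enable patroni
systemctl start patroni
systemctl status patroni
# list cluster members and their roles; run on any node
patronictl -c /etc/patroni/patroni.yml list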