1. Set max_connections to three times the number of processor cores on the server. Include virtual (hyperthreading) cores.

2. Set shared_buffers to 4GB for servers with up to 64 GB of RAM. Use 8GB for systems with more than 64 GB of RAM.

3. Set work_mem to 8MB for servers with up to 32 GB of RAM, 16MB for servers with up to 64 GB of RAM, and 32MB for systems with more than 64 GB of RAM. If max_connections is greater than 400, divide this by two.

4. Set maintenance_work_mem to 1GB.

5. Set wal_level to hot_standby.

6. Set checkpoint_segments to (system memory in MB / 20 / 16).

7. Set checkpoint_completion_target to 0.8.

8. Set archive_mode to on.

9. Set archive_command to /bin/true.

10. Set max_wal_senders to 5.

11. Set wal_keep_segments to (3 * checkpoint_segments).

12. Set random_page_cost to 2.0 if you are using RAID or SAN; 1.0 for SSD-based storage.

13. Set effective_cache_size to half of the available system RAM.

14. Set log_min_duration_statement to 1000.

15. Set log_checkpoints to on.
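
Taken together, these steps amount to a short block of postgresql.conf changes. The following is a minimal sketch for a hypothetical dedicated server with 8 processor cores (counting hyperthreaded cores), 32 GB of RAM, and SSD storage; the hardware profile is an assumption for illustration only, so recompute the derived values for your own system.

    # Hypothetical server: 8 cores, 32 GB RAM, SSD storage
    max_connections = 24                  # 3 * 8 cores
    shared_buffers = 4GB                  # 32 GB RAM is under the 64 GB threshold
    work_mem = 8MB                        # 32 GB tier; max_connections is well under 400
    maintenance_work_mem = 1GB
    wal_level = hot_standby
    checkpoint_segments = 102             # 32768 MB / 20 / 16
    checkpoint_completion_target = 0.8
    archive_mode = on
    archive_command = '/bin/true'         # placeholder until an archival method is chosen
    max_wal_senders = 5
    wal_keep_segments = 306               # 3 * checkpoint_segments
    random_page_cost = 1.0                # SSD-based storage
    effective_cache_size = 16GB           # half of 32 GB RAM
    log_min_duration_statement = 1000     # milliseconds
    log_checkpoints = on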

The commonly accepted formula for estimating max_connections is to take the number of processor cores, multiply it by two, and add the number of disk spindles. With the advent of virtual cores, SSDs, and other high-performance storage, we have a bit more freedom than we once did. In addition, even if we were to follow that estimation method, allowing a few extra connections can prevent highly visible connection rejections. Slightly lower performance is a small price to pay for availability.
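
To make the difference concrete, consider a hypothetical machine with 8 cores and 4 disk spindles (both figures are assumptions for illustration):

    # Classic formula: (cores * 2) + spindles = (8 * 2) + 4 = 20
    # This recipe:      cores * 3             =  8 * 3     = 24
    max_connections = 24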

The advice for shared_buffers is very different from the accepted practice of simply setting it to a quarter of the available RAM. We must consider buffer flushing and synchronization time: in the case of a forced checkpoint, an amount of RAM equal to shared_buffers could be flushed to disk, and this kind of write storm can easily cripple even high-end hardware. Highly available servers often carry far more RAM than could safely be flushed to disk in an emergency. As such, we don't recommend that you use more than 8 GB until this situation improves substantially.
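
For a sense of scale, compare the two approaches on a hypothetical server with 128 GB of RAM (an assumed size):

    # Quarter-of-RAM rule: 128 GB / 4 = 32 GB that could be flushed at a forced checkpoint
    # This recipe:         capped at 8 GB, bounding the worst-case write burst
    shared_buffers = 8GB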

The work_mem setting is the amount of memory used by several temporary operations, including data sorts, so a single query can consume multiple instances of this amount simultaneously. A good estimate is to assume that each connection will use up to four instances at a time. Setting this too high can lead to over-committed memory and cause the kernel to start killing processes until RAM is available, which can mean a PostgreSQL shutdown or a server crash, depending on which processes are stopped. Systems with very high connection counts (over 400) are at increased risk of such a cascade, so we reduce work_mem in those cases.
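
A rough worst-case estimate, using the recipe's assumption of four instances per connection (the connection counts here are hypothetical):

    #  24 connections * 4 instances *  8 MB =  768 MB of potential sort memory
    # 500 connections * 4 instances * 16 MB = ~32 GB, hence halving work_mem above 400 connections
    work_mem = 8MB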

The maintenance_work_mem setting is similar to work_mem in that there can be multiple instances. However, it is reserved for background workers and maintenance activities such as VACUUM, ANALYZE, or CREATE INDEX. Starving these kinds of memory operations can drastically increase disk I/O, which can detrimentally affect query performance. For the cost of a few GB of RAM, we get a more stable server.

The only reason we set wal_level to hot_standby is that, in a highly available environment, we should have at least one online streaming standby. Other recipes will detail how to set these up, but this is the starting point.

The number of checkpoint_segments is not a simple thing to set. The calculation we used assumes that up to 5 percent of system memory could be in transit as checkpoint data, and that each segment is 16 MB in size. The goal is to avoid forced checkpoints, which occur when the available segments are exhausted during heavy data loads.
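
As a worked example on a hypothetical 32 GB server (the size is an assumption):

    # checkpoint_segments = system memory in MB / 20 / 16
    #                     = 32768 / 20 / 16
    #                     = ~102 segments, or roughly 1.6 GB of WAL between checkpoints
    checkpoint_segments = 102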

We also want to reduce disk contention when possible, so we increase checkpoint_completion_target to 0.8. We don't want to overwhelm the disk subsystem, and this setting will cause PostgreSQL to spread writes over 80 percent of the time specified by checkpoint_timeout. By default, checkpoint_timeout is set to 5 minutes, which should suffice until we start working with larger batches of data or a busy OLTP system.
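
In terms of the default timeout, the effect is straightforward:

    # 0.8 * checkpoint_timeout = 0.8 * 300 seconds = 240 seconds
    # Each checkpoint's writes are spread over roughly four of the five minutes.
    checkpoint_completion_target = 0.8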

Next, we enable archive_mode by setting it to on. This setting can only be changed by restarting PostgreSQL, which we want to avoid later. It's very likely that we will use WAL archival in some respect, even if we don't yet know which method. This means we also need to set archive_command to a command that always succeeds, or PostgreSQL will fill our logs with complaints that it couldn't archive old WAL files. We use /bin/true as a placeholder and can change it once we choose an archival method.
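
A minimal sketch of the placeholder, along with the kind of command it might later become once an archival method is chosen; the archive path shown is purely illustrative:

    archive_mode = on
    archive_command = '/bin/true'                  # always succeeds; archives nothing
    # Later, for example, a simple copy-based archive (path is hypothetical):
    # archive_command = 'cp %p /db/wal_archive/%f'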

We increase max_wal_senders because it's needed for certain synchronization and backup methods. Five is a good starting point, and we can always decrease it later; we definitely need more than zero. Additionally, wal_keep_segments is set to a relatively high number. In this case, we keep up to three multiples of checkpoint_segments' worth of WAL, in case a streaming standby falls behind.

If this count of segments is exhausted while the standby is behind, it can never catch up until the remaining WAL segments are provided some other way or the standby is re-imaged. We'll discuss this more when it's time to talk about WAL archival. This uses more disk space, so multiply the total number of these segments by 16 MB to estimate total disk usage.
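
Continuing the hypothetical 32 GB example, the extra disk commitment is easy to estimate:

    # wal_keep_segments = 3 * checkpoint_segments = 3 * 102 = 306
    # 306 segments * 16 MB per segment = roughly 4.8 GB of retained WAL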

The cost of reading a random disk block, as opposed to reading it sequentially, directly affects how the query planner decides to execute a query. By decreasing random_page_cost, we tell PostgreSQL that our storage's random read performance is very fast. A highly available server should have equally capable storage, so we lower this to something more reasonable. With SSD or PCIe-based storage, there is effectively no difference between random and sequential reads, so the setting should reflect this.

The last setting that modifies server behavior is effective_cache_size, which tells the query planner how much RAM is probably being used by the OS to cache data. Generally, this makes PostgreSQL prefer indexes, because it's likely that the indexed data is in memory. As most Unix systems are fairly aggressive when caching, at least half of the available RAM on a dedicated database server will be full of cached data.
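
For the hypothetical 32 GB server used in the earlier calculations:

    # effective_cache_size = 32 GB / 2 = 16 GB
    effective_cache_size = 16GB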

Finally, we want better logging. We increase the logging of slow queries by setting log_min_duration_statement to 1000. This is in milliseconds, so any query that runs for over one second will be logged. This helps us find slow queries without flooding the logs with thousands or even millions of entries by logging everything. Similarly, we want log_checkpoints enabled, because it provides extremely beneficial information on checkpoints. We can see how long they took, how frequently they ran, and how much disk-sync time they required. We need to know if checkpoints start taking too long or occurring too frequently so that some values can be adjusted. This setting really should be enabled on all PostgreSQL servers.

PostgreSQL Server Configuration: http://www.postgresql.org/docs/9.3/static/runtime-config.html

pgtune: https://github.com/gregs1104/pgtune
