15.4 WAL Configuration
There are several WAL-related configuration parameters that affect database performance. This section explains their use. Consult Section 4, Server Configuration, for general information about setting server configuration parameters.
Checkpoints are points in the sequence of transactions at which it is guaranteed that the heap and index data files have been updated with all information written before the checkpoint. At checkpoint time, all dirty data pages are flushed to disk and a special checkpoint record is written to the log file. (The changes were previously flushed to the WAL files.) In the event of a crash, the crash recovery procedure looks at the latest checkpoint record to determine the point in the log (known as the redo record) from which it should start the REDO operation. Any changes made to data files before that point are guaranteed to be already on disk. Hence, after a checkpoint, log segments preceding the one containing the redo record are no longer needed and can be recycled or removed. (When WAL archiving is being done, the log segments must be archived before being recycled or removed.)
The checkpoint requirement of flushing all dirty data pages to disk can cause a significant I/O load. For this reason, checkpoint activity is throttled so I/O begins at checkpoint start and completes before the next checkpoint starts; this minimizes performance degradation during checkpoints.
The server's background writer process automatically performs
a checkpoint every so often. A checkpoint is created every
checkpoint_segments log segments, or every
checkpoint_timeout seconds, whichever comes first.
The default settings are 3 segments and 300 seconds (5 minutes), respectively.
It is also possible to force a checkpoint by using the SQL command CHECKPOINT.
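For example, checkpoint spacing can be adjusted in postgresql.conf (the values below are illustrative, not tuning recommendations):

    # postgresql.conf -- illustrative checkpoint spacing
    checkpoint_segments = 16       # default is 3 log segments
    checkpoint_timeout = 15min     # default is 5min

and a checkpoint can be forced at any time from a SQL session:

    CHECKPOINT;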
Reducing checkpoint_segments and/or checkpoint_timeout causes checkpoints to occur
more often. This allows faster after-crash recovery (since less work
will need to be redone). However, one must balance this against the
increased cost of flushing dirty data pages more often. If
full_page_writes is set (as is the default), there is
another factor to consider. To ensure data page consistency,
the first modification of a data page after each checkpoint results in
logging the entire page content. In that case,
a smaller checkpoint interval increases the volume of output to the WAL log,
partially negating the goal of using a smaller interval,
and in any case causing more disk I/O.
Checkpoints are fairly expensive, first because they require writing
out all currently dirty buffers, and second because they result in
extra subsequent WAL traffic as discussed above. It is therefore
wise to set the checkpointing parameters high enough that checkpoints
don't happen too often. As a simple sanity check on your checkpointing
parameters, you can set the checkpoint_warning
parameter. If checkpoints happen closer together than
checkpoint_warning seconds,
a message will be output to the server log recommending increasing
checkpoint_segments. Occasional appearance of such
a message is not cause for alarm, but if it appears often then the
checkpoint control parameters should be increased. Bulk operations such
as large COPY transfers might cause a number of such warnings
to appear if you have not set checkpoint_segments high enough.
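For example, with the default warning threshold (shown here explicitly for illustration):

    # postgresql.conf -- illustrative setting
    checkpoint_warning = 30s      # default; 0 disables the warning

checkpoints spaced more closely than 30 seconds produce a server log message along these lines (the seconds figure will vary):

    LOG:  checkpoints are occurring too frequently (14 seconds apart)
    HINT:  Consider increasing the configuration parameter "checkpoint_segments".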
To avoid flooding the I/O system with a burst of page writes,
writing dirty buffers during a checkpoint is spread over a period of time.
That period is controlled by
checkpoint_completion_target, which is
given as a fraction of the checkpoint interval.
The I/O rate is adjusted so that the checkpoint finishes when the
given fraction of
checkpoint_segments WAL segments
have been consumed since checkpoint start, or the given fraction of
checkpoint_timeout seconds have elapsed,
whichever is sooner. With the default value of 0.5,
PostgreSQL can be expected to complete each checkpoint
in about half the time before the next checkpoint starts. On a system
that's very close to maximum I/O throughput during normal operation,
you might want to increase checkpoint_completion_target
to reduce the I/O load from checkpoints. The disadvantage of this is that
prolonging checkpoints affects recovery time, because more WAL segments
will need to be kept around for possible use in recovery. Although
checkpoint_completion_target can be set as high as 1.0,
it is best to keep it less than that (perhaps 0.9 at most) since
checkpoints include some other activities besides writing dirty buffers.
A setting of 1.0 is quite likely to result in checkpoints not being
completed on time, which would result in performance loss due to
unexpected variation in the number of WAL segments needed.
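As a worked example, with checkpoint_timeout at its default of 300 seconds, the default target of 0.5 paces checkpoint I/O to finish after about 300 * 0.5 = 150 seconds, while a setting such as the following (illustrative) spreads it over about 270 seconds:

    # postgresql.conf -- illustrative setting
    checkpoint_completion_target = 0.9    # default is 0.5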
There will always be at least one WAL segment file, and will normally
not be more than (2 + checkpoint_completion_target) * checkpoint_segments + 1
or checkpoint_segments + wal_keep_segments + 1
files. Each segment file is normally 16 MB (though this size can be
altered when building the server). You can use this to estimate space
requirements for WAL.
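For instance, under the default settings (checkpoint_segments = 3, checkpoint_completion_target = 0.5, wal_keep_segments = 0), the first formula gives (2 + 0.5) * 3 + 1 = 8.5, so on the order of 9 segment files, or roughly 9 * 16 MB = 144 MB of disk space under pg_xlog.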
Ordinarily, when old log segment files are no longer needed, they
are recycled (renamed to become the next segments in the numbered
sequence). If, due to a short-term peak of log output rate, there
are more than 3 *
checkpoint_segments + 1
segment files, the unneeded segment files will be deleted instead
of recycled until the system gets back under this limit.
In archive recovery or standby mode, the server periodically performs restartpoints,
which are similar to checkpoints in normal operation: the server forces
all its state to disk, updates the ‘pg_control’ file to
indicate that the already-processed WAL data need not be scanned again,
and then recycles any old log segment files in the ‘pg_xlog’
directory. A restartpoint is triggered if at least one checkpoint record
has been replayed and
checkpoint_timeout seconds have passed
since last restartpoint. In standby mode, a restartpoint is also triggered if
checkpoint_segments log segments have been replayed since
last restartpoint and at least one checkpoint record has been replayed.
Restartpoints can't be performed more frequently than checkpoints in the
master because restartpoints can only be performed at checkpoint records.
There are two commonly used internal WAL functions: LogInsert and LogFlush.
LogInsert is used to place a new record into
the WAL buffers in shared memory. If there is no
space for the new record,
LogInsert will have
to write (move to kernel cache) a few filled WAL
buffers. This is undesirable because LogInsert
is used on every low-level database modification (for example, row
insertion) at a time when an exclusive lock is held on affected
data pages, so the operation needs to be as fast as possible. What
is worse, writing WAL buffers might also force the
creation of a new log segment, which takes even more
time. Normally, WAL buffers should be written
and flushed by a
LogFlush request, which is
made, for the most part, at transaction commit time to ensure that
transaction records are flushed to permanent storage. On systems
with high log output,
LogFlush requests might
not occur often enough to prevent LogInsert
from having to do writes. On such systems
one should increase the number of WAL buffers by
modifying the configuration parameter
wal_buffers. The default number of WAL
buffers is 8. Increasing this value will
correspondingly increase shared memory usage. When
full_page_writes is set and the system is very busy,
setting this value higher will help smooth response times during the
period immediately following each checkpoint.
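On such a system, one might raise the buffer count along these lines (the value is illustrative; changing wal_buffers requires a server restart):

    # postgresql.conf -- illustrative setting
    wal_buffers = 64      # default is 8 buffers of 8 kB each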
The commit_delay parameter defines for how many
microseconds the server process will sleep after writing a commit
record to the log with
LogInsert but before
performing a LogFlush. This delay allows other
server processes to add their commit records to the log so as to have all
of them flushed with a single log sync. No sleep will occur if
fsync is not enabled, or if fewer than commit_siblings
other sessions are currently in active transactions; this avoids
sleeping when it's unlikely that any other session will commit soon.
Note that on most platforms, the resolution of a sleep request is
ten milliseconds, so that any nonzero commit_delay
setting between 1 and 10000 microseconds would have the same effect.
Good values for these parameters are not yet clear; experimentation is encouraged.
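An illustrative starting point for such experimentation (these values are examples, not recommendations):

    # postgresql.conf -- illustrative settings
    commit_delay = 1000     # sleep 1000 microseconds before the flush; default is 0
    commit_siblings = 5     # default; require 5 other active transactions before sleeping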
The wal_sync_method parameter determines how
PostgreSQL will ask the kernel to force
WAL updates out to disk.
All the options should be the same in terms of reliability,
but it's quite platform-specific which one will be the fastest.
Note that this parameter is irrelevant if
fsync has been turned off.
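For example (which methods are available, and which is fastest, is platform-specific; fdatasync here is purely illustrative):

    # postgresql.conf -- illustrative setting
    wal_sync_method = fdatasync   # alternatives include fsync, fsync_writethrough,
                                  # open_sync, and open_datasync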
Enabling the wal_debug configuration parameter
(provided that PostgreSQL has been
compiled with support for it) will result in each
LogInsert and LogFlush WAL call being logged to the server log. This
option might be replaced by a more general mechanism in the future.
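Assuming the server was built with this support (the WAL_DEBUG compile-time option), the parameter is switched on like any other boolean setting:

    # postgresql.conf -- illustrative setting
    wal_debug = on     # default is off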