
10.1.3 Handling large databases

PostgreSQL allows tables larger than the maximum file size on your system, so dumping such a table to a single file can be problematic: the resulting file will likely exceed the size your file system allows. Since pg_dump can write to the standard output, you can use standard Unix tools to work around this problem.

Use compressed dumps. You can use your favorite compression program, for example gzip.

pg_dump dbname | gzip > filename.gz

Reload with

createdb dbname
gunzip -c filename.gz | psql dbname

or

cat filename.gz | gunzip | psql dbname
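
Any stream compressor works the same way. As a sketch, assuming bzip2 is installed on your system:

pg_dump dbname | bzip2 > filename.bz2

Reload with

createdb dbname
bunzip2 -c filename.bz2 | psql dbname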

Use split. The split command allows you to split the output into pieces whose size is acceptable to the underlying file system. For example, to make chunks of 1 megabyte:

pg_dump dbname | split -b 1m - filename

Reload with

createdb dbname
cat filename* | psql dbname
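
The two techniques can be combined. As a sketch, compressing the dump before splitting it keeps both the chunk count and the total size down:

pg_dump dbname | gzip | split -b 1m - filename.gz.

Reload with

createdb dbname
cat filename.gz.* | gunzip | psql dbname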

Use the custom dump format. If PostgreSQL was built on a system with the zlib compression library installed, the custom dump format will compress data as it writes it to the output file. This will produce dump file sizes similar to using gzip, but it has the added advantage that tables can be restored selectively. The following command dumps a database using the custom dump format:

pg_dump -Fc dbname > filename

A custom-format dump is not a script for psql, but instead must be restored with pg_restore. See the pg_dump and pg_restore reference pages for details.
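For example, to recreate the whole database from a custom-format dump:

createdb dbname
pg_restore -d dbname filename

To restore only a single table selectively (a sketch; tablename is a placeholder for a table present in the dump):

pg_restore -d dbname -t tablename filename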
