I've started looking into database backups in PostgreSQL. The backups will be made for 1C:Enterprise databases. I've read about two backup utilities:

pg_dump 

As I understand it, this one is used to dump a single database.

 pg_dumpall 

For a copy of the entire database cluster. If I understand correctly, pg_dumpall creates a backup of all existing databases, so it's probably the better choice when we move everything to another server.

The question is whether it's possible to restore only the databases of interest from a backup made with pg_dumpall. For example: there are 5 databases and I want to restore only 1 or 2 of them. Is that even possible?

Or is it better to use pg_dump for each database?

2 answers

pg_dumpall generates a text file with SQL commands, so in principle you can extract a dump of a single database from it. It's just inconvenient: you have to locate the boundaries of each embedded pg_dump output by some external means. pg_dump, in turn, cannot save the cluster-wide global data of the DBMS, such as the list of databases or users.
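
A hedged illustration of that inconvenience: pg_dumpall output switches between databases with \connect lines, so in principle one database can be sliced out between those markers, but the trick is fragile (mydb and all.sql are assumed, illustrative names):

  # print from the \connect line for 'mydb' up to the next \connect;
  # the exact form of the \connect line varies between versions, and
  # the pattern breaks if a matching line appears inside table data
  sed -n '/^\\connect mydb$/,/^\\connect /p' all.sql > mydb.sql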

Therefore both are commonly used together: pg_dumpall -g to save the global data, and pg_dump to save snapshots of the databases of interest.
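
A minimal sketch of that combination (the database names, the backup path, and the custom -Fc format are assumptions for the example):

  # save cluster-wide globals (roles, tablespaces) once...
  pg_dumpall -g > /backup/globals.sql
  # ...then dump each database of interest separately
  for db in base1 base2 base3; do
      pg_dump -Fc "$db" > /backup/"$db".dump    # restore later with pg_restore
  done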


A separate question is whether a dump counts as a backup at all. It should be understood that a dump is a snapshot of the database as of the moment the dump started: you can restore only to that exact moment, and the restore can take quite a long time.

What does count as a backup is pg_basebackup: a copy of the entire cluster, plus an archive of WAL segments covering everything from the moment of the pg_basebackup up to the desired recovery point.
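
To make "the desired recovery point" concrete, here is a hedged sketch of the recovery settings (on PostgreSQL 11 and older they go into recovery.conf in the data directory; from version 12 into postgresql.conf plus an empty recovery.signal file; the path and timestamp are illustrative):

  # fetch archived WAL segments and replay up to a chosen moment
  restore_command = 'gunzip -c /backup/%f.gz > %p'
  recovery_target_time = '2019-03-01 12:00:00'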

  • If I understand everything correctly, then from the SQL file created by pg_dumpall you could delete the lines that create the databases we are not interested in; but indeed there is no simple way to pull out just the database we need. For my purposes it will probably be enough to use pg_dump for the databases I need, since there are only 3 of them. - Aleksey Muratov
  • True, there is no easy way to separate them. That's why it is usually done with a separate pg_dump for each needed database. - Мелкий

A dump is not a backup.

I'll share another way to make a copy here. In the postgres user's crontab I have:

  0 6 * * * pg_basebackup -x --write-recovery-conf --format=t -z --pgdata=$BACKPATH$($DATE)/pg_basebackup/ 

This format is convenient because it is almost ready to deploy in case of a failure: just unpack the archive and start the server to begin recovery.
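
A hedged sketch of that deployment, assuming the tar-format backup produced by the cron job above (paths and the service name are illustrative):

  # stop the server, unpack into an *empty* data directory and start:
  # thanks to --write-recovery-conf, recovery begins automatically
  systemctl stop postgresql
  tar -xzf /backup/2019-03-01/pg_basebackup/base.tar.gz -C /var/lib/pgsql/data/
  systemctl start postgresql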

I also recommend adding an archive of the transaction log (WAL). In the config I enabled WAL archiving and added the line below. It looks like a kludge, and you could just use cp instead, but this saves space. In the config:

 archive_command = 'test ! -f /backup/$(date +%F)/%f.gz && mkdir -p /backup/$(date +%F)/ && cat %p | gzip > /backup/$(date +%F)/%f.gz' 
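
For completeness, archive_command only takes effect together with the settings alluded to above when WAL was turned on (a sketch; the value replica is the PostgreSQL 9.6+ name, older versions use archive or hot_standby):

  # postgresql.conf: prerequisites for the archive_command above
  wal_level = replica      # 'archive' or 'hot_standby' before 9.6
  archive_mode = on        # changing this requires a server restart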

If there are suggestions on how to improve it, I'd be glad to hear them. Better still would be to move the compression to another server and keep just a plain cp here.
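
The plain-cp variant mentioned here would look roughly like the classic example from the PostgreSQL documentation (the destination path is an assumption); compression can then be done on the receiving backup server:

  archive_command = 'test ! -f /backup/%f && cp %p /backup/%f'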

The downside of a dump: on a large database you need a day to restore it. With a single pg_basebackup archive you can be back up in a couple of hours, with practice even in half an hour. And with the two archives listed above you can restore the database to its state 15 minutes before the crash, not a day before it.

A backup is not considered a backup until you try to restore it!

  • Many thanks for another approach. But for the moment we are dealing with dumps. The database takes about 5 GB, and 1.2 GB in the archive. Time to create and restore the dump is about 1 h 30 min, without additional tuning. I have nothing to add regarding your method, since I've only been studying this topic for two days. - Aleksey Muratov
  • It's worth paying special attention that with pg_basebackup -x you must either closely monitor whether enough WAL segments are kept, or have a WAL archive. The most commonly used option is -X stream, which needs 2 replication connections but copies the WAL by itself (a sketch of this variant follows below). I have big complaints about that archive_command: you cannot rely on the current date. Besides, how do you then get the WAL back out of such an archive? Rather than assembling this construction kit by hand, there is barman. - Мелкий
  • @Мелкий The date is needed so that you don't end up with one directory holding a million files. It is also convenient to copy the directory for a specific date: a pg_basebackup plus that day's WAL. Restoring back is something like find /backup/ -name %f.gz -exec gunzip -c {} \; > %p. I'll try barman. - eri
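
A hedged sketch of the -X stream variant suggested in the comments, adapted from the cron job above (streaming WAL into tar format needs PostgreSQL 9.5 or newer; the destination path is illustrative):

  # daily base backup that streams its own WAL over a second
  # replication connection instead of relying on -x or the WAL archive
  0 6 * * * pg_basebackup -X stream --write-recovery-conf --format=t -z --pgdata=/backup/$(date +%F)/pg_basebackup/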