Date: 26-02-02 17:38
I recently upgraded to PostgreSQL 7.2 and hit similar problems. Because I hadn't used pg_dumpall, I had to recreate my users and groups by hand.
While pg_dumpall will dump all the relevant data, it does not support dumping large objects; that is fine for most people, who don't keep such objects in their databases.
Personally, I would use `pg_dumpall -g` to dump the global information (users and groups), and then use pg_dump to dump my individual databases to separate files. I do not have many databases, so this is a viable option, and it keeps each file to only the data relevant to a given database. One database in particular, for a school project, needs more care taken with it than my personal databases.
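Roughly what I run, sketched below; the database names and file names are placeholders:

```sh
# Dump only the global objects (users and groups) to one file
pg_dumpall -g > globals.sql

# Then dump each database to its own file
pg_dump mydb > mydb.dump
pg_dump school_project > school_project.dump
```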
Also, it is preferable to use one of the non-script output formats. The -F c option to pg_dump produces PostgreSQL's custom format, which is compressed by default and allows reordering of data/schema loading. -F t produces a tar archive whose benefits fall somewhere between a plain script and the -F c format. Both the tar and custom formats can handle large objects.
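For example (`mydb` is a placeholder name):

```sh
# Custom format: compressed by default, reorderable at restore time
pg_dump -F c mydb > mydb.psql

# Tar format: a middle ground; also handles large objects
pg_dump -F t mydb > mydb.tar
```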
If you need to inspect the data with grep, run pg_restore without a database argument; it will emit a plain SQL script that you can read, edit, or use to restore the data.
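Something like this (file names are placeholders):

```sh
# With no -d option, pg_restore writes a SQL script to stdout
# instead of restoring into a database, so you can grep it
pg_restore mydb.psql | grep some_table

# Or save the full script to edit or replay later
pg_restore mydb.psql > mydb.sql
```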
As you can see below, the choice of format makes quite a difference in size.
1187696 Feb 26 08:35 test1.dump (Normal dump, no options)
1213952 Feb 26 08:32 test1.tar (-F t dump)
314581 Feb 26 08:32 test1.psql (-F c dump)
297169 Feb 26 08:34 test1.gz (Gzipped normal dump)
298300 Feb 26 08:32 test1.psql.gz (gz'd -F c dump)
300667 Feb 26 08:32 test1.tar.gz (gz'd -F t)
Gzipping the plain dump actually came out smallest here, and gzipping the -F c dump gained little over the uncompressed -F c size, since the custom format is already compressed.
While the alternate output formats aren't as portable, they are ideal for backups. You do not need to pass the same -F parameter to pg_restore that you gave pg_dump; it can determine the format automatically.
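So restoring either archive looks the same; a sketch, assuming the target database `mydb` already exists:

```sh
# pg_restore detects the archive format itself; no -F needed
pg_restore -d mydb mydb.tar
pg_restore -d mydb mydb.psql
```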