132 votes

My computer broke down, but fortunately I had backed up the folder C:\Program Files\PostgreSQL.

Now I'm working on a new computer and I would like to import the previous Postgres databases, which are stored on the external disk.

I would like to export the schema of a specific database that is located in the backup folder.

The file PostgreSQL\8.3\data\global\pg_database contains information about databases and their OIDs; for example:

"db1" 20012
"db2" 23456

I would like to export the schema of "db1".

There is a folder named "20012" under "PostgreSQL\8.3\data\base" that contains a lot of files [around 500].

Is there any way to export the schema of that database?

Note that all of the PostgreSQL database files are located on an external hard disk. I would like to export the schema of that database to an SQL file, take that file, run it, and create the exact same database locally.


9 Answers

157 votes

You should take a look at pg_dump:

pg_dump -s databasename

This will dump only the schema to stdout as SQL.

For Windows, you'll probably want to call pg_dump.exe. I don't have access to a Windows machine, but from memory I'm fairly sure that's the command. See if the help works for you too.
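
A minimal sketch for the asker's situation, assuming a default Windows install path and the stock postgres superuser (both are assumptions, adjust to your setup), once the database is available on a running server:

"C:\Program Files\PostgreSQL\8.3\bin\pg_dump.exe" -U postgres -s db1 > db1_schema.sql

This writes the DDL of db1 to db1_schema.sql, which you can later replay with psql.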

46 votes

On Linux you can do it like this:

pg_dump -U postgres -s postgres > exportFile.dmp

It may work on Windows too; if not, try the same command with pg_dump.exe:

pg_dump.exe -U postgres -s postgres > exportFile.dmp
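
For the second half of the question (recreating the database locally from that file), a hedged sketch using the standard createdb and psql client tools, with db1 as a placeholder name:

createdb -U postgres db1
psql -U postgres -d db1 -f exportFile.dmp

psql simply replays the SQL statements in the dump file against the freshly created database.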

43 votes

I am running Postgres 9.6, where I had to export a particular schema along with its data.

I used the following command:

pg_dump.exe -U username -d databasename -n schemaname > C:\mylocation\mydumpfilename.dmp

If you want only the schema definitions without data, add the -s switch (keep -n if you still want to limit the dump to that one schema).
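
For example, combining the two switches dumps only the DDL of that one schema (the names and path here are placeholders):

pg_dump.exe -U username -d databasename -n schemaname -s > C:\mylocation\schema_only.dmp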

Below is the pg_dump switch list:

C:\Program Files\PostgreSQL\9.6\bin>pg_dump --help
pg_dump dumps a database as a text file or to other formats.

Usage:
  pg_dump [OPTION]... [DBNAME]

General options:
  -f, --file=FILENAME          output file or directory name
  -F, --format=c|d|t|p         output file format (custom, directory, tar,
                               plain text (default))
  -j, --jobs=NUM               use this many parallel jobs to dump
  -v, --verbose                verbose mode
  -V, --version                output version information, then exit
  -Z, --compress=0-9           compression level for compressed formats
  --lock-wait-timeout=TIMEOUT  fail after waiting TIMEOUT for a table lock
  -?, --help                   show this help, then exit

Options controlling the output content:
  -a, --data-only              dump only the data, not the schema
  -b, --blobs                  include large objects in dump
  -c, --clean                  clean (drop) database objects before recreating
  -C, --create                 include commands to create database in dump
  -E, --encoding=ENCODING      dump the data in encoding ENCODING
  -n, --schema=SCHEMA          dump the named schema(s) only
  -N, --exclude-schema=SCHEMA  do NOT dump the named schema(s)
  -o, --oids                   include OIDs in dump
  -O, --no-owner               skip restoration of object ownership in
                               plain-text format
  -s, --schema-only            dump only the schema, no data
  -S, --superuser=NAME         superuser user name to use in plain-text format
  -t, --table=TABLE            dump the named table(s) only
  -T, --exclude-table=TABLE    do NOT dump the named table(s)
  -x, --no-privileges          do not dump privileges (grant/revoke)
  --binary-upgrade             for use by upgrade utilities only
  --column-inserts             dump data as INSERT commands with column names
  --disable-dollar-quoting     disable dollar quoting, use SQL standard quoting
  --disable-triggers           disable triggers during data-only restore
  --enable-row-security        enable row security (dump only content user has
                               access to)
  --exclude-table-data=TABLE   do NOT dump data for the named table(s)
  --if-exists                  use IF EXISTS when dropping objects
  --inserts                    dump data as INSERT commands, rather than COPY
  --no-security-labels         do not dump security label assignments
  --no-synchronized-snapshots  do not use synchronized snapshots in parallel jobs
  --no-tablespaces             do not dump tablespace assignments
  --no-unlogged-table-data     do not dump unlogged table data
  --quote-all-identifiers      quote all identifiers, even if not key words
  --section=SECTION            dump named section (pre-data, data, or post-data)
  --serializable-deferrable    wait until the dump can run without anomalies
  --snapshot=SNAPSHOT          use given snapshot for the dump
  --strict-names               require table and/or schema include patterns to
                               match at least one entity each
  --use-set-session-authorization
                               use SET SESSION AUTHORIZATION commands instead of
                               ALTER OWNER commands to set ownership

Connection options:
  -d, --dbname=DBNAME      database to dump
  -h, --host=HOSTNAME      database server host or socket directory
  -p, --port=PORT          database server port number
  -U, --username=NAME      connect as specified database user
  -w, --no-password        never prompt for password
  -W, --password           force password prompt (should happen automatically)
  --role=ROLENAME          do SET ROLE before dump

If no database name is supplied, then the PGDATABASE environment
variable value is used.

Report bugs to <pgsql-bugs@postgresql.org>.

19 votes

pg_dump -d <databasename> -h <hostname> -p <port> -n <schemaname> -f <location of the dump file>

Note that you must have sufficient privileges to access that schema. If you want to take the backup as a specific user, add the user name to the command, preceded by -U.
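
A filled-in sketch with hypothetical values (a local server on the default port 5432, dumping the public schema of db1 as user postgres):

pg_dump -d db1 -h localhost -p 5432 -n public -U postgres -f db1_public_schema.sql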

7 votes

If you only want the CREATE TABLE statements, you can do:

pg_dump -s databasename | awk 'RS="";/CREATE TABLE[^;]*;/'

7 votes

For Linux (data excluded):

  • pg_dump -s -t tablename databasename > dump.sql (for a specific table in the database)

  • pg_dump -s databasename > dump.sql (for the entire database)
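
Note that the -t switch can be repeated, so a variant for several tables at once looks like this (table names are placeholders):

pg_dump -s -t table1 -t table2 databasename > dump.sql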

2 votes

Set up a new PostgreSQL server and replace its data folder with the files from your external disk.

You will then be able to start that PostgreSQL server and retrieve the data using pg_dump (pg_dump -s for the schema only, as mentioned above).
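
A rough sketch of those steps on Linux; the paths and service name are assumptions, and the server binaries must be the same major version (8.3) as the backed-up data files:

sudo service postgresql stop
sudo cp -R /mnt/external/PostgreSQL/8.3/data /var/lib/postgresql/8.3/restored_data
sudo chown -R postgres:postgres /var/lib/postgresql/8.3/restored_data
sudo chmod 700 /var/lib/postgresql/8.3/restored_data
sudo -u postgres pg_ctl -D /var/lib/postgresql/8.3/restored_data start
pg_dump -U postgres -s db1 > db1_schema.sql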

2 votes

You should use something like this:

pg_dump --schema=your_schema_name db1

For details, take a look at the pg_dump documentation.

2 votes

pg_dump -s databasename -t tablename -U user -h host -p port > tablename.sql

This will limit the schema dump to the table "tablename" of "databasename".