228 votes

I have a database schema named nyummy and a table named cimory:

create table nyummy.cimory (
  id numeric(10,0) not null,
  name character varying(60) not null,
  city character varying(50) not null,
  CONSTRAINT cimory_pkey PRIMARY KEY (id)
);

I want to export the cimory table's data as an SQL INSERT script file. However, I only want to export the records/data where the city is equal to 'tokyo' (assume the city data are all lowercase).

How can I do it?

It doesn't matter whether the solution uses freeware GUI tools or the command line (although a GUI tool solution would be better). I tried pgAdmin III, but I can't find an option to do this.

PostgreSQL cannot select across databases. At least, older versions cannot, and neither can Greenplum; I don't know about 9.x. – PhilHibbs

I realize this is old, but I just wanted to mention that it is possible to select across databases using dblink, which has been available since at least v8.3. It makes use of foreign servers and foreign data wrappers to connect to "remote" databases. This works whether those databases exist on the same instance or on entirely different hosts. I've used it fairly extensively to create materialized views into other databases to facilitate certain reporting and such, and it works great. – G_Hosa_Phat
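
For reference, a minimal sketch of the dblink approach mentioned above (the connection string and table names are hypothetical):

CREATE EXTENSION IF NOT EXISTS dblink;

SELECT *
FROM dblink('dbname=other_db', 'SELECT id, name FROM some_table')
  AS t(id numeric, name text);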

10 Answers

330 votes

Create a table with the set of rows you want to export, and then use the command-line utility pg_dump to export it to a file:

create table export_table as
select id, name, city
from nyummy.cimory
where city = 'tokyo';

$ pg_dump --table=export_table --data-only --column-inserts my_database > data.sql

--column-inserts dumps the data as INSERT commands with explicit column names.

--data-only means do not dump the schema.
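
With those options, data.sql will contain statements like this (a sketch; the actual output depends on your data, and newer Postgres versions may schema-qualify the table name):

INSERT INTO export_table (id, name, city) VALUES (1, 'Some Name', 'tokyo');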

As commented below, creating a view instead of a table will obviate the table creation whenever a new export is necessary.
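
Putting it together, the whole export can be scripted from the shell (a minimal sketch, assuming a database named my_database and cleaning up the helper table afterwards):

$ psql my_database -c "create table export_table as select id, name, city from nyummy.cimory where city = 'tokyo'"
$ pg_dump --table=export_table --data-only --column-inserts my_database > data.sql
$ psql my_database -c "drop table export_table"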

195 votes

To export data only, use COPY:

COPY (SELECT * FROM nyummy.cimory WHERE city = 'tokyo') TO '/path/to/file.csv';

You can export a whole table, only selected columns, or the result of a query as demonstrated. No need to create a table explicitly.

You get a file with one table row per line as plain text (not INSERT commands). Smaller and faster than INSERT commands.
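
Note that despite the .csv file name, COPY's default output is Postgres' tab-separated text format. If you want an actual CSV file, you can pass format options (a sketch; available in Postgres 9.0 and later):

COPY (SELECT * FROM nyummy.cimory WHERE city = 'tokyo')
TO '/path/to/file.csv' WITH (FORMAT csv, HEADER);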

To import the same to another Postgres table of matching structure anywhere (columns in same order, data types compatible!):

COPY other_tbl FROM '/path/to/file.csv';

COPY writes and reads files local to the server, unlike client programs like pg_dump or psql which read and write files local to the client. If both run on the same machine, it doesn't matter much, but it does for remote connections.

There is also the \copy command of psql:

Performs a frontend (client) copy. This is an operation that runs an SQL COPY command, but instead of the server reading or writing the specified file, psql reads or writes the file and routes the data between the server and the local file system. This means that file accessibility and privileges are those of the local user, not the server, and no SQL superuser privileges are required.

Same syntax as above. Just replace COPY with \copy.
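
For example (a sketch; the file path now resolves on the client machine):

\copy (SELECT * FROM nyummy.cimory WHERE city = 'tokyo') TO '/local/path/file.csv'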

39 votes

This is an easy and fast way to manually export a table to a script with pgAdmin, without extra installations:

  1. Right click on target table and select "Backup".
  2. Select a file path to store the backup. As Format choose "Plain".
  3. Open the tab "Dump Options #2" at the bottom and check "Use Column Inserts".
  4. Click the Backup-button.
  5. If you open the resulting file with a text editor (e.g. Notepad++), you get a script to create the whole table. From there you can simply copy the generated INSERT statements.

This method also works with the technique of making an export_table as demonstrated in @Clodoaldo Neto's answer.


10 votes

For my use case, I was able to simply pipe the dump through grep:

pg_dump -U user_name --data-only --column-inserts -t nyummy.cimory | grep "tokyo" > tokyo.sql
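
Note that grep matches the string "tokyo" anywhere in a dumped line, not just in the city column, so unrelated rows can slip through. Since city happens to be the last column of cimory, a tighter pattern is possible (a sketch; my_database is an assumed database name):

pg_dump -U user_name --data-only --column-inserts -t nyummy.cimory my_database | grep ", 'tokyo');" > tokyo.sql
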
9 votes

SQL Workbench has such a feature.

After running a query, right click on the query results and choose "Copy Data As SQL > SQL Insert"

6 votes

I tried to write a procedure that does this, based on @PhilHibbs's code, in a different way. Please have a look and test it.

 CREATE OR REPLACE FUNCTION dump(IN p_schema text, IN p_table text, IN p_where text)
   RETURNS setof text AS
 $BODY$
 DECLARE
     dumpquery_0 text;
     dumpquery_1 text;
     selquery text;
     selvalue text;
     valrec record;
     colrec record;
 BEGIN

     -- ------ --
     -- GLOBAL --
     --   build base INSERT
     --   build SELECT array[ ... ]
     dumpquery_0 := 'INSERT INTO ' ||  quote_ident(p_schema) || '.' || quote_ident(p_table) || '(';
     selquery    := 'SELECT array[';

     <<label0>>
     FOR colrec IN SELECT table_schema, table_name, column_name, data_type
                   FROM information_schema.columns
                   WHERE table_name = p_table and table_schema = p_schema
                   ORDER BY ordinal_position
     LOOP
         dumpquery_0 := dumpquery_0 || quote_ident(colrec.column_name) || ',';
         selquery    := selquery    || 'CAST(' || quote_ident(colrec.column_name) || ' AS TEXT),';
     END LOOP label0;

     dumpquery_0 := substring(dumpquery_0 ,1,length(dumpquery_0)-1) || ')';
     dumpquery_0 := dumpquery_0 || ' VALUES (';
     selquery    := substring(selquery    ,1,length(selquery)-1)    || '] AS MYARRAY';
     selquery    := selquery    || ' FROM ' ||quote_ident(p_schema)||'.'||quote_ident(p_table);
     selquery    := selquery    || ' WHERE '||p_where;
     -- GLOBAL --
     -- ------ --

     -- ----------- --
     -- SELECT LOOP --
     --   execute SELECT built and loop on each row
     <<label1>>
     FOR valrec IN  EXECUTE  selquery
     LOOP
         dumpquery_1 := '';
         IF not found THEN
             EXIT ;
         END IF;

         -- ----------- --
         -- LOOP ARRAY (EACH FIELD) --
         <<label2>>
         FOREACH selvalue in ARRAY valrec.MYARRAY
         LOOP
             IF selvalue IS NULL
             THEN selvalue := 'NULL';
             ELSE selvalue := quote_literal(selvalue);
             END IF;
             dumpquery_1 := dumpquery_1 || selvalue || ',';
         END LOOP label2;
         dumpquery_1 := substring(dumpquery_1 ,1,length(dumpquery_1)-1) || ');';
         -- LOOP ARRAY (EACH FIELD) --
         -- ----------- --

         -- debug: RETURN NEXT dumpquery_0 || dumpquery_1 || ' --' || selquery;
         -- debug: RETURN NEXT selquery;
         RETURN NEXT dumpquery_0 || dumpquery_1;

     END LOOP label1 ;
     -- SELECT LOOP --
     -- ----------- --

 RETURN ;
 END
 $BODY$
   LANGUAGE plpgsql VOLATILE;

And then:

-- for a range
SELECT dump('public', 'my_table','my_id between 123456 and 123459'); 
-- for the entire table
SELECT dump('public', 'my_table','true');
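
To write the generated statements straight to a file, the function can be called through psql (a sketch; -t suppresses headers and -A turns off alignment, so only the INSERT statements land in the file):

$ psql my_database -t -A -c "SELECT dump('nyummy', 'cimory', 'city = ''tokyo''')" > tokyo_inserts.sql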

Tested on my Postgres 9.1, with a table with mixed field data types (text, double precision, integer, timestamp without time zone, etc.).

That's why the cast to TEXT is needed. My test ran correctly for about 9M rows; it looks like it fails just before the 18-minute mark.

PS: I found an equivalent for MySQL on the web.

3 votes

You can create a view of the table with the specific records and then dump the SQL file:

CREATE VIEW foo AS
SELECT id, name, city FROM nyummy.cimory WHERE city = 'tokyo';
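
Note that pg_dump only dumps a view's definition, not its rows (a view stores no data of its own), so for the data itself you can combine the view with the COPY approach shown above (a sketch):

COPY (SELECT * FROM foo) TO '/path/to/file.csv';
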
1 vote

I just knocked up a quick procedure to do this. It only works for a single row, so I create a temporary view that selects just the row I want, and then replace pg_temp.temp_view with the actual table that I want to insert into.

CREATE OR REPLACE FUNCTION dv_util.gen_insert_statement(IN p_schema text, IN p_table text)
  RETURNS text AS
$BODY$
DECLARE
    selquery text; 
    valquery text; 
    selvalue text; 
    colvalue text; 
    colrec record;
BEGIN

    selquery := 'INSERT INTO ' ||  quote_ident(p_schema) || '.' || quote_ident(p_table);

    selquery := selquery || '(';

    valquery := ' VALUES (';
    FOR colrec IN SELECT table_schema, table_name, column_name, data_type
                  FROM information_schema.columns 
                  WHERE table_name = p_table and table_schema = p_schema 
                  ORDER BY ordinal_position 
    LOOP
      selquery := selquery || quote_ident(colrec.column_name) || ',';

      selvalue := 
        'SELECT CASE WHEN ' || quote_ident(colrec.column_name) || ' IS NULL' || 
                   ' THEN ''NULL''' || 
                   ' ELSE '''' || quote_literal('|| quote_ident(colrec.column_name) || ')::text || ''''' || 
                   ' END' || 
        ' FROM '||quote_ident(p_schema)||'.'||quote_ident(p_table);
      EXECUTE selvalue INTO colvalue;
      valquery := valquery || colvalue || ',';
    END LOOP;
    -- Replace the last , with a )
    selquery := substring(selquery,1,length(selquery)-1) || ')';
    valquery := substring(valquery,1,length(valquery)-1) || ')';

    selquery := selquery || valquery;

RETURN selquery;
END
$BODY$
  LANGUAGE plpgsql VOLATILE;

Invoked thus:

SELECT DISTINCT dv_util.gen_insert_statement('pg_temp_' || sess_id::text, 'my_data')
FROM pg_stat_activity
WHERE procpid = pg_backend_pid();

I haven't tested this against injection attacks; please let me know if the quote_literal call isn't sufficient for that.

Also, it only works for columns that can simply be cast to ::text and back again.

Also, this is for Greenplum, but I can't think of a reason why it wouldn't work on Postgres, CMIIW.

0 votes

I needed a way to generate insert statements without creating a temp table (in production). I did see some useful arguments to pg_dump above, but I still ended up devising a way to

generate the insert statements and dump them into a file

The statement below did the trick, and I feel it will be useful for people who end up here looking for a similar answer.

$ /usr/local/bin/pg_dump -h the_db_host_address -d the_db_name --table=schema_name.table_name --data-only --column-inserts -U postgres -p the_port_number -v -f /Users/the_user/folder_name/insrt_stmts_file_name.sql

The next thing is a prompt asking for the password; in the case above I provided the postgres password, but any other user with read capability should work fine.

I ran the insert statements from the file named insrt_stmts_file_name.sql in the DBeaver client.

But the same can be run from the command prompt as well using the following script:

/usr/local/bin/psql -h the_db_host_address -d the_db_name -v -U postgres -f /Users/the_user/folder_name/insrt_stmts_file_name.sql

Reference for the pg_dump / psql flags:

-h = host
-d = database name
-v = verbose (it'll produce output as it progresses)
-U = database user name
-f = file / path

-2 votes

Have you tried executing the query in pgAdmin with the "Execute query, write result to file" option?

It only exports the data. Otherwise, try something like:

pg_dump -t view_name DB_name > db.sql

The -t option is used to dump only tables (or views or sequences) matching the given name; see the pg_dump documentation.