This writer sends data to a PostgreSQL database.
Find the PostgreSQL writer in the list of writers, create a new configuration, and give it a name.
The first step is to Set Up Credentials:
You need to provide a host name, user name, password, database name, and schema.
We highly recommend that you create dedicated credentials for the writer in your database. You can use the following SQL code to get started:
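The snippet below is a minimal sketch of what such dedicated credentials might look like; the user name `writer_user`, the password, and the schema name `writer_schema` are placeholders, not values the writer requires:

```sql
-- Hypothetical example: a dedicated user whose access is limited to one schema
CREATE USER writer_user WITH PASSWORD 'choose-a-strong-password';
CREATE SCHEMA writer_schema AUTHORIZATION writer_user;
GRANT ALL PRIVILEGES ON SCHEMA writer_schema TO writer_user;
```

Keeping the writer in its own schema with its own user makes it easy to revoke or rotate its access without affecting other database clients.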
It is also possible to secure the connection using an SSH Tunnel.
The next step is to configure the tables you want to write. Click Add New Table:
Select an existing table from Storage:
The next step is to specify table configuration. Click the Edit Columns button to configure the table columns:
Use the preview icon to peek at the column contents.
For each column, you can specify its data type. Setting the type to IGNORE means that the column will not be present in the destination table. Marking a column as nullable means that empty values ('') in that column will be converted to NULL. Use this for non-string columns with missing data.
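Conceptually, this conversion behaves like PostgreSQL's NULLIF function applied to each incoming value (a sketch of the effect, not the writer's actual implementation):

```sql
-- NULLIF(value, '') yields NULL when the value is an empty string
SELECT NULLIF('', '');    -- NULL
SELECT NULLIF('42', '');  -- '42'
```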
When done configuring the columns, don’t forget to save the settings.
At the top of the page, you can specify the target table name and additional load options. There are two main options for how the writer can write data to tables: Full Load and Incremental Load.
In the Incremental Load mode, the data is bulk inserted into the destination table, and the table structure must match (including the data types). This means the structure of the target table will not be modified. If the target table doesn't exist, it will be created. If a primary key is defined on the table, the data is upserted; if no primary key is defined, the data is inserted.
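The upsert behavior can be sketched with PostgreSQL's `INSERT ... ON CONFLICT` clause; the table and column names here are hypothetical:

```sql
-- Rows matching the primary key ("id") are updated; new rows are inserted
INSERT INTO "account" ("id", "name", "score")
VALUES (1, 'Alice', 42)
ON CONFLICT ("id")
DO UPDATE SET "name" = EXCLUDED."name", "score" = EXCLUDED."score";
```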
In the Full Load mode, the table is completely overwritten, including the table structure. The table is removed using a DROP command and recreated. The DROP command needs to acquire a table-level lock.
This means that if the database is used by other applications which acquire table-level locks, the writer may
freeze waiting for the locks to be released. This will be recorded in the writer logs with a message similar to this:
Table "account" is locked by 1 transactions, waiting for them to finish
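If the writer appears stuck, you can inspect the blocking sessions directly in PostgreSQL. This diagnostic query is a general-purpose sketch using the standard `pg_stat_activity` view and `pg_blocking_pids()` function (PostgreSQL 9.6+); it is not part of the writer itself:

```sql
-- List sessions that are blocking others, with the query each blocker is running
SELECT blocked.pid    AS blocked_pid,
       blocking.pid   AS blocking_pid,
       blocking.query AS blocking_query
FROM pg_stat_activity AS blocked
JOIN pg_stat_activity AS blocking
  ON blocking.pid = ANY (pg_blocking_pids(blocked.pid));
```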
Additionally, you can specify a Primary key of the table, a simple column Data filter, and a filter for incremental processing.