Each SQL database extractor allows you to extract data from selected tables, or from the results of arbitrary SQL queries.
The extractors for supported SQL databases (Cloudera Impala, Firebird, IBM DB2, Microsoft SQL Server, MySQL, Oracle, PostgreSQL, Teradata) are configured in the same way and have an advanced mode. All notable differences are listed in the section Server Specific Notes.
Before you start configuring your SQL extractor, consider securing your connection to your internal database to avoid exposing your database server to the internet by setting up an SSH Tunnel.
Note: Quick introduction to extracting data from the Snowflake Database Server is also part of our tutorial.
After you create a configuration, the first step is to configure database credentials using the Set up credentials first button:
Fill in the database credentials. See the section Server Specific Notes for a description of non-standard fields. After testing the credentials, Save them:
After you save the credentials, the extractor will automatically fetch the list of all database tables accessible by the provided credentials. Select the tables you want to extract and click the Create button:
You can modify the configured tables by clicking on the appropriate row, or add new tables via the New Table button. Each table may also be extracted individually, or it may be disabled so that it is not extracted when the entire configuration is run. Existing credentials can be changed using the Database Credentials link.
If you want to modify the table extraction setup, click on the corresponding row. You’ll get to the table detail view:
Here you can modify the source table, limit the extraction to specific columns, or change the destination table name in Storage. The table detail also allows you to define Primary Key and Incremental Loading. We highly recommend you define a primary key where possible. Primary keys substantially speed up both the data loads and further processing of the table. Also, use incremental loading when possible — again, that speeds up the data loads considerably. Both options require knowledge of the source table, so don’t turn them on blindly.
The table detail also allows you to switch to the Advanced mode:
In the advanced mode, you can write an arbitrary SELECT query whose result will be imported to a Storage table. The SQL query is executed on the source server without any processing, which means you have to follow the SQL dialect of the particular server you're working with.
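For illustration, an advanced mode query might look like the following sketch. The table and column names are hypothetical, and the exact syntax must match your server's dialect:

```sql
-- Hypothetical example of an advanced mode query:
-- pick only the needed columns and rows on the source server.
SELECT
    customer_id,
    created_at,
    status
FROM orders
WHERE created_at >= '2020-01-01'
```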
Please keep the following in mind when using the advanced mode:
Avoid doing complex joins and aggregations in SQL queries. Remember that these queries are executed on the database server you are extracting from. This database system might not be designed or optimized for complex SELECT queries. Complex queries may result in timeouts, or they might produce unnecessary loads on your internal systems. Instead, import raw data, and then use Keboola Connection tools to give it the shape you want.
The MySQL database server also supports encrypting the whole database communication using SSL Certificates. See the official guide for instructions on setting it up.
The MySQL extractor also allows you to set the transaction isolation level used during extraction.
The PostgreSQL database server also supports encrypting the whole database communication using SSL Certificates. See the official guide for instructions on setting it up.
The MS SQL database server also supports encrypting the whole database communication using SSL Certificates. See the official guide for instructions on setting it up.
The SQL Server extractor uses the BCP utility to export data.
For this reason, if you are writing advanced mode queries, you have to quote the values of non-numeric columns (text, datetime, etc.), so that the selected value is "some text" instead of some text. This can be done by, e.g., SELECT char(34) + [my_text_column] + char(34) (the CHAR function with argument 34 produces the double quote character).
When the extracted text itself may contain double quotes, you need to escape them by replacing each " with "". A full example:
The extractor will still work if you don't quote the values; however, the BCP export will fail and a fallback, much slower export method will be used instead. In that case, the message BCP command failed: ... Attempting export using pdo_sqlsrv will be logged in the extraction job events.
You can remove null characters (\u0000) from text using REPLACE([column_name] COLLATE Latin1_General_BIN, char(0), '').
In the context of the previous example, the query will look like this:
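A sketch combining the quoting, quote escaping, and null-character removal, again using the hypothetical [my_table] and [my_text_column] names:

```sql
-- Escape embedded quotes, strip null characters (char(0)),
-- then wrap the result in double quotes for BCP.
SELECT
    char(34) + REPLACE(
        REPLACE([my_text_column], char(34), char(34) + char(34))
            COLLATE Latin1_General_BIN,
        char(0), ''
    ) + char(34)
FROM [my_table]
```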
An SQL Server instance hosted on Azure will normally have a hostname in the form [srvName].database.windows.net. If the host is provided as an IP address instead, for example, 220.127.116.11, the username needs to have the suffix @[srvName] (for example, myuser@myserver).
When extracting data from a Snowflake database, permissions must be set to allow the specified user to use the specified warehouse.
The following SQL code creates the role and the user KEBOOLA_SNOWFLAKE_EXTRACTOR and grants them access to the warehouse MY_WAREHOUSE, the database MY_DATA, and its schema:
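A sketch of such a setup; the schema name MY_SCHEMA and the password are hypothetical, so substitute your own values:

```sql
-- Create a dedicated role and user for the extractor.
CREATE ROLE KEBOOLA_SNOWFLAKE_EXTRACTOR;
CREATE USER KEBOOLA_SNOWFLAKE_EXTRACTOR
    PASSWORD = 'my_secret_password'  -- hypothetical; use a strong password
    DEFAULT_ROLE = KEBOOLA_SNOWFLAKE_EXTRACTOR;
GRANT ROLE KEBOOLA_SNOWFLAKE_EXTRACTOR TO USER KEBOOLA_SNOWFLAKE_EXTRACTOR;

-- Grant read-only access to the warehouse, database, and schema.
GRANT USAGE ON WAREHOUSE MY_WAREHOUSE TO ROLE KEBOOLA_SNOWFLAKE_EXTRACTOR;
GRANT USAGE ON DATABASE MY_DATA TO ROLE KEBOOLA_SNOWFLAKE_EXTRACTOR;
GRANT USAGE ON SCHEMA MY_DATA.MY_SCHEMA TO ROLE KEBOOLA_SNOWFLAKE_EXTRACTOR;
GRANT SELECT ON ALL TABLES IN SCHEMA MY_DATA.MY_SCHEMA TO ROLE KEBOOLA_SNOWFLAKE_EXTRACTOR;
```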
Note that with the above setup, the user will not have access to newly created tables. You will either have to use a more permissive role or reset the permissions by calling:
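Assuming the same hypothetical names as above, re-granting SELECT on all tables in the schema would look like this:

```sql
-- Re-run after new tables are created so the extractor can read them.
GRANT SELECT ON ALL TABLES IN SCHEMA MY_DATA.MY_SCHEMA TO ROLE KEBOOLA_SNOWFLAKE_EXTRACTOR;
```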
The Oracle database server also supports using a tnsnames.ora configuration file instead of a host name and port number. See the official guide for instructions on setting it up.