Python transformations complement R and SQL transformations (MySQL or Redshift) where computations or other operations would be too difficult to express in SQL. Common data operations like joining, sorting, or grouping are still easier and faster to do in SQL transformations.
The Python script runs in an isolated Docker environment. The current Python version is 3.6.2. The Docker container running the Python transformation has 8 GB of memory allocated, and the maximum running time is 6 hours.
The Python script itself will be compiled to /data/script.py. To access your input and output tables, use relative (in/tables/file.csv, out/tables/file.csv) or absolute (/data/in/tables/file.csv, /data/out/tables/file.csv) paths.
To access downloaded files, use the /data/in/user/tag path. If you want to dig really deep, have a look at the full Common Interface specification.
Temporary files can be written to the /tmp/ folder. Do not use the /data/ folder for files you do not wish to exchange with KBC.
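For example, a scratch file can be created with the standard tempfile module (a minimal sketch):

```python
import tempfile

# Write scratch data to /tmp/ so that it does not end up in /data/
# and get picked up by the output mapping
with tempfile.NamedTemporaryFile(mode='wt', dir='/tmp', suffix='.csv', delete=False) as tmp:
    tmp.write('intermediate,data\n')
    scratch_path = tmp.name  # reuse the file later in the script
```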
Python is sensitive to indentation. Make sure not to mix tabs and spaces. All files are assumed to be in UTF-8;
# coding=utf-8 at the beginning of the script is not needed. If you define a main function, do not wrap it within the
if __name__ == '__main__': block, as it will not be run. Simply calling it from within the script is enough:
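```python
def main():
    print('Hello from script.py')

# Call the function directly; a __main__ guard would prevent it from running here.
main()
```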
You can list extra packages in the UI. These packages are installed using pip.
Generally, any package available on PyPI can be installed. However, some packages have external dependencies, which might not be available.
Feel free to contact us if you run into problems. When the package is installed, you still need to import from it.
The latest versions of packages are always installed.
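For instance, if you listed the requests package (used here only as an example) in the UI, the import is still up to you:

```python
import requests  # available only after being listed as an extra package in the UI

response = requests.get('https://example.com')
print(response.status_code)
```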
Tables from Storage are imported to the Python script from CSV files. CSV files can be read by standard Python functions from the csv package. It is recommended to explicitly specify the formatting options.
You can read CSV files either to vectors (numbered columns), or to dictionaries (named columns).
Your input tables are stored as CSV files in in/tables/, and your output tables in out/tables/.
If you can process the file line by line, the most efficient way is to read each line, process it, and write it out immediately. The two examples below show two ways of reading and manipulating a CSV file.
To develop and debug Python transformations, you can replicate the execution environment on your local machine. To do so, you need to have Python installed, preferably the same version as in the transformation environment.
To simulate the input and output mapping, all you need to do is create the right directories with the right files. The following shows the directory structure:
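```
working-directory/
├── script.py          (name is arbitrary)
├── in/
│   ├── tables/
│   │   └── source.csv
│   └── user/
└── out/
    └── tables/
```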
The script itself is expected to be in the data directory; its name is arbitrary. It is possible to use relative paths, so that you can move the script to a KBC transformation with no changes. To develop a Python transformation which processes a sample CSV file locally, take the following steps:
- Put your sample input tables in the in/tables subdirectory of the working directory.
- Put any downloaded files in the in/user subdirectory of the working directory, and make sure that their names are without any extension.
- Use this sample script:
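A minimal script along these lines, which copies source.csv to destination.csv unchanged, might look like this:

```python
import csv

# Read in/tables/source.csv and copy its rows to out/tables/destination.csv
with open('in/tables/source.csv', mode='rt', encoding='utf-8') as in_file, \
     open('out/tables/destination.csv', mode='wt', encoding='utf-8') as out_file:
    # Strip null characters, which the csv module cannot process
    lazy_lines = (line.replace('\0', '') for line in in_file)
    reader = csv.DictReader(lazy_lines, delimiter=',', quotechar='"')
    writer = csv.DictWriter(out_file, fieldnames=reader.fieldnames,
                            lineterminator='\n', delimiter=',', quotechar='"')
    writer.writeheader()
    for row in reader:
        writer.writerow(row)
```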
A finished example of the above is attached below in data.zip.
Download it and test the script in your local Python installation. The destination.csv output file will be created.
This script can be used in your transformations without any modifications. All you need to do is:

- set the input mapping to source.csv (expected by the Python script), and
- set the output mapping from destination.csv (produced by the Python script) to a new table in your Storage.
The above steps are usually sufficient for daily development and debugging of moderately complex Python transformations, although they do not reproduce the transformation execution environment exactly. To create a development environment with the exact same configuration as the transformation environment, use our Docker image.
The following piece of code reads a table with two columns, named first and second, from the source.csv input mapping file into the row dictionary using csv.DictReader. It then appends ping to the first column and multiplies the second column by 42. After that, it saves the row to the destination.csv output mapping file.
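In a minimal form, it might look like this:

```python
import csv

with open('in/tables/source.csv', mode='rt', encoding='utf-8') as in_file, \
     open('out/tables/destination.csv', mode='wt', encoding='utf-8') as out_file:
    # Strip null characters before handing lines to the csv module
    lazy_lines = (line.replace('\0', '') for line in in_file)
    reader = csv.DictReader(lazy_lines, delimiter=',', quotechar='"')
    writer = csv.DictWriter(out_file, fieldnames=['first', 'second'],
                            lineterminator='\n', delimiter=',', quotechar='"')
    writer.writeheader()
    for row in reader:
        # Append 'ping' to the first column, multiply the second by 42
        writer.writerow({'first': row['first'] + 'ping',
                         'second': int(row['second']) * 42})
```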
The above example shows how to process the file line by line; this is the most memory-efficient way and allows you to process data files of any size.
The line lazy_lines = (line.replace('\0', '') for line in in_file) is a generator which makes sure that null characters are properly handled; the csv module cannot process them.
It is also important to use encoding='utf-8' when reading and writing files.
The following piece of code reads a table with some of its columns from the source.csv input mapping file into the row list of strings. It then appends ping to the first column and multiplies the second column by 42. After that, it saves the row to the destination.csv output mapping file.
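In a minimal form, assuming the same two columns as above:

```python
import csv

with open('in/tables/source.csv', mode='rt', encoding='utf-8') as in_file, \
     open('out/tables/destination.csv', mode='wt', encoding='utf-8') as out_file:
    lazy_lines = (line.replace('\0', '') for line in in_file)
    reader = csv.reader(lazy_lines, delimiter=',', quotechar='"')
    writer = csv.writer(out_file, lineterminator='\n', delimiter=',', quotechar='"')
    # The first row is the header; copy it through unchanged
    writer.writerow(next(reader))
    for row in reader:
        # Columns are addressed by index instead of by name
        writer.writerow([row[0] + 'ping', int(row[1]) * 42])
```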
You can simplify the above code using our pre-installed KBC dialect.
The kbc dialect is automatically available in the transformation environment. If you want it in your local environment, it is defined as csv.register_dialect('kbc', lineterminator='\n', delimiter=',', quotechar='"').
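With the dialect available, the dictionary example above shortens to, for instance:

```python
import csv

# Needed locally only; the dialect is pre-registered in the transformation environment
csv.register_dialect('kbc', lineterminator='\n', delimiter=',', quotechar='"')

with open('in/tables/source.csv', mode='rt', encoding='utf-8') as in_file, \
     open('out/tables/destination.csv', mode='wt', encoding='utf-8') as out_file:
    lazy_lines = (line.replace('\0', '') for line in in_file)
    reader = csv.DictReader(lazy_lines, dialect='kbc')
    writer = csv.DictWriter(out_file, fieldnames=['first', 'second'], dialect='kbc')
    writer.writeheader()
    for row in reader:
        writer.writerow({'first': row['first'] + 'ping',
                         'second': int(row['second']) * 42})
```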