To create your first sandbox, and to see how it forms an integral part of the KBC workflow, follow our Getting Started tutorial.
A sandbox is a safe environment for you to experiment with your data and develop your transformation code without affecting the data in Storage.
You can fill a sandbox with any data from Storage. However, to simplify transformation development, KBC provides a specific loader for your transformations. It automatically fills the sandbox with relevant tables and takes the Input Mapping of the transformation into account.
Each user has one sandbox per project and backend (MySQL, Snowflake and Redshift) at their disposal. Sandboxes with different backends are very similar, but there are a few specifics, mostly related to access management — see below.
Important: The backend of the sandbox does not have to match the backend of the original data. For example, you can load data from Snowflake into a Redshift sandbox.
Also, your sandbox may be deleted after 7 days of inactivity; make sure not to use it as permanent data storage!
Data can be loaded into your sandbox in two different ways:
If, while developing your transformation, you need to add data to the tables already specified in the input mapping, the two ways can be combined. In that case, perform the plain load after the transformation load, because the transformation load deletes all data in the sandbox.
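The effect of the two load types and why their order matters can be sketched as a toy model (the helper names here are hypothetical and for illustration only; the real loads are performed by KBC):

```python
# Toy model of the two sandbox load types (hypothetical helper names).

def transformation_load(sandbox, input_mapping_tables):
    """Transformation load first deletes ALL data in the sandbox,
    then loads the tables from the input mapping."""
    sandbox.clear()
    sandbox.update(input_mapping_tables)

def plain_load(sandbox, tables, preserve_existing=True):
    """Plain load can keep the existing content
    (the Preserve existing data option)."""
    if not preserve_existing:
        sandbox.clear()
    sandbox.update(tables)

# Correct order: transformation load first, then the plain load.
sandbox = {}
transformation_load(sandbox, {"orders": "..."})
plain_load(sandbox, {"extra": "..."})
assert set(sandbox) == {"orders", "extra"}

# Reversed order: the transformation load wipes the extra table.
sandbox = {}
plain_load(sandbox, {"extra": "..."})
transformation_load(sandbox, {"orders": "..."})
assert set(sandbox) == {"orders"}
```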
Go to the Transformations section and click the Sandbox button in the upper right corner.
If no sandbox exists, create it first by clicking Create Sandbox for the desired backend:
Then click the Load data button to load tables or whole buckets into the sandbox. You can limit the number of rows that are loaded; this is useful for sampling a large table.
The imported tables will have full table names — including both the bucket and the table name.
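The bucket and table parts of such a full name can be told apart programmatically; since bucket names themselves contain dots (e.g. `in.c-main`), the table name is the part after the last dot. A small illustrative helper (hypothetical, not part of KBC):

```python
# Split a full sandbox table name into its bucket and table parts.
# Bucket names contain dots (e.g. "in.c-main"), so we split on the LAST dot.

def split_full_name(full_name: str) -> tuple:
    """Return (bucket, table) for a full name such as 'in.c-main.mytable'."""
    bucket, _, table = full_name.rpartition(".")
    return bucket, table

print(split_full_name("in.c-main.mytable"))  # ('in.c-main', 'mytable')
```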
By default, the sandbox content is deleted before loading new data. Use the Preserve existing data option to keep its content and add new data to it.
This type of loading is intended for gradual development and debugging of your transformation code. Data loaded this way is tied to a specific transformation bucket and input mapping; only the data specified in the input mapping is loaded into your transformation sandbox. You can choose whether or not the transformation itself is performed during the load.
To create a sandbox for a specific transformation, go to the Transformations section and select the respective transformation:
The Sandbox backend is defined by the transformation backend. In the transformation detail, click the Create Sandbox button:
Clicking the Create button will get you the connection credentials:
Choose how the data will be loaded and processed:
Once the sandbox is ready, you will get a notification; you can also watch the progress on the Jobs page.
Important: Transformation loading always deletes the contents of the sandbox first.
Apart from loading data, a sandbox supports several other basic actions. To access them, go to the Transformations section and click the Sandbox button at the top.
In the same place you can also see the sandbox connection credentials. To copy & paste individual values, use the copy icon:
Even though sandboxes with different backends are very similar, let’s take a look at a few specifics that are mostly related to access management.
For a single user, MySQL credentials are shared across all projects. Each project sandbox is represented as a database assigned to the user. The user name and password remain the same until you delete all MySQL sandboxes in all your projects.
For example, user_4 can have a database assigned for the sandbox in each of their projects, such as project 1067. You can easily switch between these databases in your favorite MySQL client.
To connect to a MySQL sandbox, use any MySQL client; for instance, Sequel Pro or DBeaver. We recommend using an SSL secure connection. To use a secure connection, download the SSL certificate and use it in your MySQL client.
Sequel Pro configuration:
MySQL sandboxes use MariaDB 5.5.44.
Credentials for Redshift sandboxes are not shared between projects. Each project, including the sandbox, sits on a different cluster. Therefore, all your projects have their own set of credentials.
For a Redshift sandbox, there is an additional action available: Refresh privileges. It REVOKEs and re-GRANTs privileges on tables and schemas to the current Redshift user. Because Redshift privileges are granted on a per-table basis, tables created after your sandbox user was granted its privileges are not accessible until you refresh them.
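Conceptually, a privilege refresh boils down to re-issuing REVOKE/GRANT statements per schema. The sketch below only generates such statements to illustrate the idea; the exact statements KBC runs are internal, and the helper name is hypothetical:

```python
# Sketch: generate the kind of statements a privilege refresh implies
# (illustrative only; the actual statements run by KBC are internal).

def refresh_privileges_sql(schemas, user):
    """Build REVOKE/GRANT statement pairs for each schema (= bucket)."""
    statements = []
    for schema in schemas:
        statements.append(
            f'REVOKE ALL ON ALL TABLES IN SCHEMA "{schema}" FROM "{user}";')
        statements.append(
            f'GRANT SELECT ON ALL TABLES IN SCHEMA "{schema}" TO "{user}";')
    return statements

for stmt in refresh_privileges_sql(["in.c-main"], "user_4"):
    print(stmt)
```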
Almost any PostgreSQL client can connect to an AWS Redshift cluster. We have tested Navicat and DBeaver (free), and they work fine. You can use both a Redshift driver and a PostgreSQL driver.
If using the PostgreSQL driver, do not forget to change the connection port to 5439. For an SSL secure connection, follow this guide. To establish a secure connection to a Redshift sandbox, follow the official instructions from Amazon.
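The port change is easy to miss, so it can help to build the connection string programmatically. A minimal sketch with a hypothetical helper (the host, database and user values are placeholders, not real credentials):

```python
# Build a libpq-style connection string for a Redshift sandbox
# reached through a PostgreSQL driver (hypothetical helper; all
# values below are placeholders).

def redshift_dsn(host, dbname, user, password, port=5439):
    """Redshift listens on port 5439, not the PostgreSQL default 5432;
    sslmode=require enforces an encrypted connection."""
    return (f"host={host} port={port} dbname={dbname} "
            f"user={user} password={password} sslmode=require")

dsn = redshift_dsn("my-cluster.example.com", "sandbox", "user_4", "secret")
print(dsn)
```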
In a Redshift sandbox, you have native access to all Redshift buckets in the project.
You can easily access a table in Storage using schema/bucket namespacing.
`SELECT * FROM "in.c-main"."mytable"`
Use double quotes, as the schema (= bucket) name always contains a dot.
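When composing such queries programmatically, the quoting can be handled by a small helper (hypothetical, shown only to illustrate the quoting rule; embedded double quotes are escaped by doubling them, per standard SQL):

```python
# Double-quote identifiers so that bucket names containing dots,
# such as "in.c-main", are treated as a single schema name.
# (Hypothetical helpers for illustration.)

def quote_ident(name: str) -> str:
    """Escape embedded double quotes by doubling them, then wrap."""
    return '"' + name.replace('"', '""') + '"'

def storage_table_ref(bucket: str, table: str) -> str:
    """Build a quoted schema.table reference."""
    return f"{quote_ident(bucket)}.{quote_ident(table)}"

print(storage_table_ref("in.c-main", "mytable"))  # "in.c-main"."mytable"
```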
We do not recommend working with Storage tables directly in your SQL code. Always use the input mapping to bring tables to your schema. This adds another level of security and features to your transformation.
A Redshift sandbox always uses the latest Redshift version available on the cluster.
Important: Currently, this feature is in beta and is available only on request. To enable it, contact our support team.
The RStudio sandbox is available only as a plain sandbox:
Click on the Create sandbox button,
and select the tables you want to load into the sandbox.
Depending on the size of the input tables, the sandbox creation may take some time. You can review its progress in Jobs. When the sandbox is created and you connect to it, you will be taken to a web version of RStudio. The loaded tables will be available at the same locations as in R Transformations. The R version is also the same as in R Transformations.
The RStudio Sandbox has the following limitations:
The Python environment for Jupyter Notebook is the same as in Python Transformations. Jupyter Notebook does not have a built-in file browser, but the loaded tables are kept at the same locations as in Python Transformations. To list the available table data files, use scripts such as this one:
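A minimal sketch of such a listing script, assuming the tables are staged as CSV files under `/data/in/tables/` as in Python Transformations (the directory path and helper name are assumptions for illustration):

```python
# List the data files staged for the sandbox (sketch; assumes the same
# layout as Python Transformations, with input tables in /data/in/tables/).
import os

def list_table_files(directory="/data/in/tables/"):
    """Return the names of files (not subdirectories) in the directory."""
    return sorted(
        name for name in os.listdir(directory)
        if os.path.isfile(os.path.join(directory, name))
    )

# Guarded so the script also runs outside the sandbox environment.
if os.path.isdir("/data/in/tables/"):
    print(list_table_files())
```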