See our Getting Started tutorial for instructions on how to use Storage.
As the central KBC subsystem, Storage manages everything related to storing and accessing data. It is implemented as a layer on top of the various database engines we use as backends (Snowflake, Redshift, and MySQL/MariaDB).
As with all other KBC components, everything that can be done through the UI can also be done programmatically via the Storage API. See our developers’ guide to learn more. Every operation done in Storage must be authorized via a token.
The Storage component manages all data stored in each KBC project: Table Storage for structured, tabular data and File Storage for raw, unstructured files.
Different storage technologies are used for the above data: Amazon S3 for File Storage, and Amazon Redshift or Snowflake for Table Storage. The database system behind Table Storage is referred to as the backend.
Data in Table Storage are internally stored in a database backend (project backend). Specific properties of each backend are compared in the following table:
| | MySQL | Redshift | Snowflake |
|---|---|---|---|
| Partial Import (Deprecated) | ✓ | x | x |
| Maximum number of columns in a single table | limited by max. row size of 65,535 bytes | 1,200 | 1,200 |
| Maximum table cell size | 64 kB | 64 kB | 1 MB |
| Sync export (Data Preview) columns limit | x | x | 110 |
RAW formats are different for each backend.
It is also possible to use your own Redshift or Snowflake database with KBC.
The table size reported on the Redshift backend is often inaccurate; this mostly affects tables with many small incremental loads. Since this is an issue of Redshift itself, we recalculate the table size ourselves. A recalculation job runs automatically when data is loaded into a table whose actual size is greater than 500 MB. A recalculation can also be triggered manually by calling the Table Optimize method in the Storage API.