All tables created in a Tabular warehouse are Apache Iceberg tables using the v2 table format. By default, they are stored at `s3://<warehouse-bucket>/<warehouse-id>/<table-id>`.
Table Properties
In addition to the standard Apache Iceberg table properties, Tabular supports properties that enable and configure its automated services.
Snapshot management properties
| Property | Default | Description |
|---|---|---|
| `history.expire.max-snapshot-age-ms` | 432000000 (5 days) | Default max age of snapshots to keep while expiring snapshots |
| `history.expire.min-snapshots-to-keep` | 1 | Default min number of snapshots to keep while expiring snapshots |
| `history.expire.max-ref-age-ms` | `Long.MAX_VALUE` (forever) | For snapshot references except the main branch, default max age of snapshot references to keep while expiring snapshots. The main branch never expires. |
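For example, snapshot retention can be tightened on a single table by setting these properties. The sketch below uses Spark SQL from PySpark and assumes a Spark session already configured with the warehouse catalog; the `examples.events` table name is a placeholder.

```python
from pyspark.sql import SparkSession

# Assumes an existing Spark session wired to the Iceberg/Tabular catalog;
# `examples.events` is a placeholder table name.
spark = SparkSession.builder.getOrCreate()

spark.sql("""
    ALTER TABLE examples.events SET TBLPROPERTIES (
        'history.expire.max-snapshot-age-ms' = '86400000',  -- keep 1 day of snapshots
        'history.expire.min-snapshots-to-keep' = '10'       -- but always retain at least 10
    )
""")
```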
Data lifecycle properties
Delete rows or mask column values once the data passes a specified age threshold. Age is based on a user-specified timestamp column.
| Property | Description |
|---|---|
| `lifecycle.enabled` | Whether data lifecycle functionality is enabled (default: false) |
| `lifecycle.data-age-column` | (Required) Data will be deleted or masked from the table based on values in this column. Must be a TIMESTAMP, TIMESTAMPTZ, DATE, or LONG |
| `lifecycle.data-age-column-units` | (Optional) If the data-age column is a numeric type, the unit it is stored in. Options: `s`, `ms`, or `us` |
| `lifecycle.table.max-data-age-ms` | Row-level TTL: data must be at least this old to be deleted |
| `lifecycle.column.<col_name>.max-data-age-ms` | Column-level masking: age at which column masking will apply |
| `lifecycle.column.<col_name>.transform` | Column-level masking: transform function to apply to the column. Currently only `nullify` is supported |
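As an illustration, the sketch below enables a 90-day row TTL keyed off a hypothetical `event_ts` column and nullifies a hypothetical `user_email` column after 30 days, reusing the Spark session from the previous example.

```python
# Hypothetical columns: `event_ts` (TIMESTAMP) drives data age,
# `user_email` is masked once its values are 30 days old.
spark.sql("""
    ALTER TABLE examples.events SET TBLPROPERTIES (
        'lifecycle.enabled' = 'true',
        'lifecycle.data-age-column' = 'event_ts',
        'lifecycle.table.max-data-age-ms' = '7776000000',             -- 90 days
        'lifecycle.column.user_email.max-data-age-ms' = '2592000000', -- 30 days
        'lifecycle.column.user_email.transform' = 'nullify'
    )
""")
```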
Compaction properties
| Property | Default | Description |
|---|---|---|
| `compaction.enabled` | true | Whether compaction is enabled on this table |
| `compaction.strategy` | binpack | Strategies: `binpack` and `sort` (requires a `sort_order`) |
| `compaction.options.partial-progress.enabled` | false | Enable committing groups of files (see `max-file-group-size-bytes`) before the entire rewrite completes |
| `compaction.options.partial-progress.max-commits` | 10 | If partial progress is enabled, the max number of commits per compaction |
| `compaction.options.max-file-group-size-bytes` | 100 GB | Max size of a commit group |
| `compaction.options.max-concurrent-file-group-rewrites` | 1 | Number of file groups to rewrite in parallel |
| `compaction.options.rewrite-job-order` | NONE | Order in which file groups are rewritten: `bytes-asc`, `bytes-desc`, `files-asc`, or `files-desc` |
| `compaction.options.delete-file-threshold` | 3 | If a data file has this number of deletes or more, it will be rewritten regardless of its file size |
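For instance, a table with a defined sort order could be switched to sort-based compaction with partial progress enabled; this uses the same placeholder table and session as the earlier sketches.

```python
# Requires that the table already has a sort_order defined.
spark.sql("""
    ALTER TABLE examples.events SET TBLPROPERTIES (
        'compaction.strategy' = 'sort',
        'compaction.options.partial-progress.enabled' = 'true',
        'compaction.options.partial-progress.max-commits' = '20'
    )
""")
```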
Rewrite manifests properties
| Property | Default | Description |
|---|---|---|
| `manifest-rewrite.enabled` | true | Whether rewriting of manifests is enabled on this table |
| `manifest-rewrite.wait-time-min` | 60 | Minimum number of minutes to wait before triggering another manifest rewrite run |
Optimizer properties
| Property | Default | Description |
|---|---|---|
| `optimizer.enabled` | true | Whether recommended optimizations are automatically applied to this table |
| `write.parquet.compression-codec` | zstd | Target compression codec to be used when writing files for this table |
| `write.object-storage.enabled` | true | File paths for this table will be prepended with a hash component optimized for object storage |
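These defaults can be overridden per table in the same way; for example, switching the write codec (the choice of `snappy` here is purely illustrative).

```python
# Illustrative only: trade zstd's compression ratio for snappy's speed.
spark.sql("""
    ALTER TABLE examples.events SET TBLPROPERTIES (
        'write.parquet.compression-codec' = 'snappy'
    )
""")
```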
File Loader properties
Automatically load files dropped into a given path.
| Property | Description |
|---|---|
| `fileloader.enabled` | true/false: whether to match and load newly created files for this table |
| `fileloader.path` | Path used to match newly created files to the table (for example, `s3://bucket-name/path`) |
| `fileloader.file-format` | `json`, `csv`, or `parquet` |
| `fileloader.write-mode` | `append` or `replace` |
| `fileloader.csv.column-delimiter` | Set a custom field/value delimiter; only applicable when file-format is `csv` (default: `,`) |
| `fileloader.parse-path-values` | true/false: if true, any `name=value` path parts will be parsed and added as columns/values to the input data |
| `fileloader.file-exclude-glob-filter` | Any files that match this glob pattern will not be loaded into the table |
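A sketch of a typical file loader setup, appending pipe-delimited CSV files dropped under an S3 prefix while skipping temp files; the bucket and path are placeholders.

```python
# Placeholder bucket/path; the glob filter skips files ending in .tmp.
spark.sql("""
    ALTER TABLE examples.events SET TBLPROPERTIES (
        'fileloader.enabled' = 'true',
        'fileloader.path' = 's3://bucket-name/incoming/events',
        'fileloader.file-format' = 'csv',
        'fileloader.write-mode' = 'append',
        'fileloader.csv.column-delimiter' = '|',
        'fileloader.file-exclude-glob-filter' = '*.tmp'
    )
""")
```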
Commit properties
| Property | Default | Description |
|---|---|---|
| `commit.allow-replace-rollback.enabled` | false | Whether to allow commits to roll back a previous replace during contention resolution |