Tabular is designed to be a “Bring Your Own Compute” data platform. This is a powerful architecture that enables write-once, query-everywhere capabilities. You can use the combination of data processing tools and technologies that works best for your use case today, knowing you can easily reuse that same data in new ways in the future.
By and large, integrating compute engines with Tabular is a matter of configuring them to use an Iceberg REST catalog.
You just need to set the catalog URI to Tabular’s REST catalog endpoint, https://api.tabular.io/ws, and provide authentication details.
The following example configures Apache Spark to use a Tabular warehouse named sandbox:
spark.sql.catalog.sandbox               org.apache.iceberg.spark.SparkCatalog
spark.sql.catalog.sandbox.catalog-impl  org.apache.iceberg.rest.RESTCatalog
spark.sql.catalog.sandbox.uri           https://api.tabular.io/ws
spark.sql.catalog.sandbox.warehouse     sandbox
spark.sql.catalog.sandbox.credential    <your-tabular-credential>
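If you prefer to configure Spark programmatically rather than through a properties file, the same settings can be assembled in code. The sketch below, in Python, is an illustration only: the helper function `tabular_catalog_conf` is hypothetical, and the catalog name, warehouse name, and credential placeholder simply mirror the example above.

```python
# Sketch: build the Spark properties for a Tabular-backed Iceberg REST
# catalog. The helper name and its arguments are illustrative, not part
# of any Tabular or Spark API.

def tabular_catalog_conf(catalog_name: str, warehouse: str, credential: str) -> dict:
    """Return the Spark config keys/values for one Iceberg REST catalog."""
    prefix = f"spark.sql.catalog.{catalog_name}"
    return {
        prefix: "org.apache.iceberg.spark.SparkCatalog",
        f"{prefix}.catalog-impl": "org.apache.iceberg.rest.RESTCatalog",
        f"{prefix}.uri": "https://api.tabular.io/ws",
        f"{prefix}.warehouse": warehouse,
        f"{prefix}.credential": credential,
    }

conf = tabular_catalog_conf("sandbox", "sandbox", "<your-tabular-credential>")

# With PySpark installed, these could then be applied when building a session:
#   builder = SparkSession.builder
#   for key, value in conf.items():
#       builder = builder.config(key, value)
#   spark = builder.getOrCreate()
```

This keeps the catalog wiring in one place, which is convenient when the same credential and warehouse must be shared across several jobs.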
The Apache Iceberg docs are a helpful resource and may provide you with additional context.