- Using the Spark SQL context in Python (ipynb) and Scala notebooks:
sql("SET spark.databricks.delta.preview.enabled=true")
sql("SET spark.databricks.delta.merge.joinBasedMerge.enabled=true")
- In SQL dbc notebooks:
SET spark.databricks.delta.preview.enabled=true
SET spark.databricks.delta.merge.joinBasedMerge.enabled=true
- When you want the cluster to default to supporting Delta: while spinning up the cluster in the UI, add just this line under Environment Variables (the last field in the configuration parameters):
spark.databricks.delta.preview.enabled=true
- Or, the last and most fun option: when you spin up your cluster, select runtime 5.0 or above, where Delta should be enabled by default.
And finally welcome to Databricks Delta :)
Also, just to help you out with your code there: the USING, PARTITIONED BY, and LOCATION clauses have to come before AS SELECT, so it should look like this
%sql CREATE TABLE t
USING DELTA
PARTITIONED BY (YourPartitionColumnHere)
LOCATION "/mnt/data/path/to/the/location/where/you/want/these/parquetFiles/to/be/present"
AS SELECT * FROM test_db.src_data
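And once the joinBasedMerge flag from earlier is set, MERGE INTO is where you'd see it pay off. A minimal sketch of an upsert into a Delta table like the one above, assuming a hypothetical staging table updates_db.new_data with id and value columns (those names are illustrative, not from your setup):

```sql
-- Hypothetical upsert: updates_db.new_data, id, and value are
-- placeholder names for a staging source feeding the Delta table t.
MERGE INTO t
USING updates_db.new_data AS updates
ON t.id = updates.id
WHEN MATCHED THEN
  UPDATE SET t.value = updates.value   -- overwrite existing rows
WHEN NOT MATCHED THEN
  INSERT (id, value) VALUES (updates.id, updates.value)  -- append new rows
```

With the join-based merge flag on, Delta executes this as a join against the matching files rather than rewriting the whole table.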