I'm working with ephemeral GCP Dataproc clusters (Apache Spark 2.2.1, Apache Hadoop 2.8.4, and Apache Hive 2.1.1). These clusters all point to the same Hive Metastore, hosted on a Google Cloud SQL instance.
I created a database on one such cluster and set its location to 'hdfs:///db_name' like so:
$ gcloud dataproc jobs submit hive \
-e "create database db_name LOCATION 'hdfs:///db_name'" \
--cluster=my-first-ephemeral-cluster --region=europe-west1
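(As far as I can tell, the scheme-only URI hdfs:///db_name is expanded against the cluster's fs.defaultFS at creation time, so the cluster-specific hostname gets baked into the Metastore entry. This is my assumption; it can be sanity-checked on the cluster itself:)

$ hdfs getconf -confKey fs.defaultFS
# prints something like hdfs://my-first-ephemeral-cluster-m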
my-first-ephemeral-cluster was then deleted, and with it the associated HDFS.
On every cluster created since, the following error has been popping up:
u'java.net.UnknownHostException: my-first-ephemeral-cluster-m'
This is presumably because the Hive Metastore still holds an entry for a location that no longer exists. Trying to drop the corrupted database fails as well:
$ gcloud dataproc jobs submit hive \
-e 'drop database db_name' \
--cluster=my-second-ephemeral-cluster --region=europe-west1
Job [4462cb1d-88f2-4e2b-8a86-c342c0ce46ee] submitted.
Waiting for job output...
Connecting to jdbc:hive2://my-second-ephemeral-cluster-m:10000
Connected to: Apache Hive (version 2.1.1)
Driver: Hive JDBC (version 2.1.1)
18/11/03 13:40:04 [main]: WARN jdbc.HiveConnection: Request to set autoCommit to false; Hive does not support autoCommit=false.
Transaction isolation: TRANSACTION_REPEATABLE_READ
Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:java.lang.IllegalArgumentException: java.net.UnknownHostException: my-first-ephemeral-cluster-m) (state=08S01,code=1)
Closing: 0: jdbc:hive2://my-second-ephemeral-cluster-m:10000
The reason is that the host my-first-ephemeral-cluster-m is no longer resolvable. Since changing a database's location (ALTER DATABASE ... SET LOCATION, which only landed in later Hive releases) is not an option in the version of Hive I'm running, I need a different workaround to drop this database.
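If it helps, my understanding is that the stale entry lives in the DBS table of the standard Hive 2.x Metastore schema, in the DB_LOCATION_URI column. Connecting straight to the Cloud SQL instance shows it (the instance name hive-metastore below is a placeholder for mine):

$ gcloud sql connect hive-metastore --user=root
mysql> SELECT DB_ID, NAME, DB_LOCATION_URI FROM DBS WHERE NAME = 'db_name';
mysql> -- presumably rewriting the URI to a resolvable path would let DROP DATABASE succeed, e.g.:
mysql> -- UPDATE DBS SET DB_LOCATION_URI = 'gs://some-bucket/db_name' WHERE NAME = 'db_name';

I'm wary of editing the Metastore by hand, though, so I'd prefer a safer way to get rid of this database.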