24
votes

I am getting the below error when creating a Hive database:

FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. com/facebook/fb303/FacebookService$Iface

Hadoop version: **hadoop-1.2.1**

Hive version: **hive-0.12.0**

Hadoop path: /home/hadoop_test/data/hadoop-1.2.1
Hive path: /home/hadoop_test/data/hive-0.12.0

I have copied hive*.jar, jline-*.jar, and antlr-runtime.jar from hive-0.12.0/lib to hadoop-1.2.1/lib.

9
Does the user under which you run Hive have write access to the metastore? – Ștefan

9 Answers

37
votes

set hive.msck.path.validation=ignore;
MSCK REPAIR TABLE table_name;

Make sure the location is specified correctly.
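
For context, here is a minimal HiveQL sketch of how this is typically run, assuming a hypothetical partitioned external table named sales whose partition directories were added directly on HDFS:

```sql
-- Hypothetical table name; adjust to your own partitioned table.
-- Relax path validation so unexpected directories do not abort the repair.
SET hive.msck.path.validation=ignore;

-- Scan the table's HDFS location and register any partitions missing from the metastore.
MSCK REPAIR TABLE sales;

-- Verify that the partitions were picked up.
SHOW PARTITIONS sales;
```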

5
votes

I solved the problem in the following way:

set hive.msck.repair.batch.size=1;
set hive.msck.path.validation=ignore;

If you cannot set the value and get the error:

Error: Error while processing statement: Cannot modify hive.msck.path.validation at runtime. It is not in list of params that are allowed to be modified at runtime (state=42000,code=1)

then add the following entry to hive-site.xml:

key:
hive.security.authorization.sqlstd.confwhitelist.append
value:
hive\.msck\.path\.validation|hive\.msck\.repair\.batch\.size
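
If you edit hive-site.xml directly, the corresponding property entry would presumably look like this (the escaped value is taken verbatim from above; the server typically needs a restart for it to take effect):

```xml
<!-- Allow these msck parameters to be modified at runtime via SET -->
<property>
  <name>hive.security.authorization.sqlstd.confwhitelist.append</name>
  <value>hive\.msck\.path\.validation|hive\.msck\.repair\.batch\.size</value>
</property>
```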


1
votes

Set the hive.metastore.schema.verification property in hive-site.xml to true; by default it is false.

For further details check this link.
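
For reference, the hive-site.xml entry would presumably look like this:

```xml
<property>
  <name>hive.metastore.schema.verification</name>
  <value>true</value>
</property>
```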

0
votes

I faced the same error. The reason in my case was a directory in the HDFS warehouse with the same name; deleting that directory resolved my issue.
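
A rough sketch of how one might find and remove such a leftover directory, assuming the default warehouse location (the database name and path are hypothetical):

```sh
# List what already exists in the warehouse directory
hdfs dfs -ls /user/hive/warehouse/

# Remove the conflicting directory (here a hypothetical mydb.db) before re-running CREATE DATABASE
hdfs dfs -rm -r /user/hive/warehouse/mydb.db
```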

0
votes

It's probably because your metastore_db is corrupted. Delete the .lck files from metastore_db.
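
A minimal sketch, assuming an embedded Derby metastore whose metastore_db directory sits in the folder you start Hive from:

```sh
# Stop any running Hive sessions first, then remove the Derby lock files
rm -f metastore_db/*.lck
```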

0
votes

hive -e "msck repair table database.tablename" it will repair table metastore schema of table;

0
votes

The reason we got this error was that we had added a new column to the external Hive table. Setting hive.msck.path.validation=ignore; fixed the Hive queries, but Impala had additional issues, which were solved with the steps below:

After doing an INVALIDATE METADATA, Impala queries started failing with: Error: incompatible Parquet schema for column

Solution for the Impala error: set PARQUET_FALLBACK_SCHEMA_RESOLUTION=name;
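
A minimal impala-shell sketch of this workaround, with a hypothetical table name; note that SET only lasts for the current session:

```sql
-- Resolve Parquet columns by name instead of ordinal position
SET PARQUET_FALLBACK_SCHEMA_RESOLUTION=name;

-- Refresh Impala's view of the table metadata (hypothetical table name)
INVALIDATE METADATA my_db.my_table;

-- The query that previously failed with the schema error should now work
SELECT * FROM my_db.my_table LIMIT 5;
```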

If you're using the Cloudera distribution, the steps below will make the change permanent so you don't have to set the option per session.

Cloudera Manager -> Clusters -> Impala -> Configuration -> Impala Daemon Query Options Advanced Configuration Snippet (Safety Valve)

Add the value: PARQUET_FALLBACK_SCHEMA_RESOLUTION=name

NOTE: do not use SET or a semicolon when setting the parameter in Cloudera Manager.

0
votes

Open the Hive CLI using "hive --hiveconf hive.root.logger=DEBUG,console" to enable logs and debug from there. In my case, a camelCase partition name had been written on HDFS, while I had created the Hive table with the name fully in lowercase.
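
For illustration, a hedged sketch of the kind of mismatch described above (all names are hypothetical): the partition directory on HDFS uses camelCase while the table's partition column is lowercase, so the repair does not match them up:

```sql
-- Directory that exists on HDFS (camelCase key):
--   /user/hive/warehouse/events/eventDate=2020-01-01
-- Table declared with a lowercase partition column:
CREATE EXTERNAL TABLE events (id INT)
PARTITIONED BY (eventdate STRING)
LOCATION '/user/hive/warehouse/events';

-- The eventDate= directory will not be matched against the eventdate column here.
MSCK REPAIR TABLE events;
```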

-1
votes

I faced a similar issue when the underlying HDFS directory was updated with new partitions and the Hive metastore went out of sync.

Solved using the following two steps:

  1. MSCK TABLE table_name showed which partitions were out of sync.
  2. MSCK REPAIR TABLE table_name added the missing partitions (a minimal sketch follows below).
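
A minimal HiveQL sketch of those two steps, with a hypothetical table name:

```sql
-- Step 1: check only; reports partitions present on HDFS but missing from the metastore
MSCK TABLE sales;

-- Step 2: actually add the missing partitions to the metastore
MSCK REPAIR TABLE sales;
```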