
I am migrating my HDP2.1 hadoop cluster to HDP2.2.4. The first step is to migrate ambari from 1.6.0 to 2.0.0.
After completing this step, I restarted my services.

Starting "HiveServer2" through Ambari 2.0 fails, whereas sudo service hive-server2 start, subsequent Hive requests, and the Ambari Hive service check all work.

It fails because, in the Python configuration step, Ambari tries to migrate my non-default database locations to /apps/hive/warehouse using a command like:

hive --config /etc/hive/conf.server --service metatool -updateLocation hdfs://cluster/apps/hive/warehouse hdfs://cluster/user/foo/DATABASE

This command fails for obscure reasons (see below), but the point is that I don't want it to run at all: the HDFS files did not move, so I see no point in relocating the tables!

Why is Ambari doing this, and how can I prevent it from happening (besides editing the Ambari Python files)?

The update-location step fails, logging lines such as:

-bash: line 1: hdfs://cluster/apps/hive/warehouse : No such file or directory

but the listed directories do exist.
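A plausible explanation for that message (an assumption based on the script shown below, not something I've verified against your cluster): -listFSRoot can return several hdfs:// lines, and when that multi-line string is spliced into the generated -updateLocation command, the shell executes the second line as a command of its own, which then "does not exist". A minimal reproduction with illustrative paths:

```shell
# Assumption: simulate a multi-line metatool result being spliced
# into the generated command string (paths are illustrative).
out=$'hdfs://cluster/user/foo/DATABASE\nhdfs://cluster/apps/hive/warehouse'

# The first line becomes part of the echo; the second line of $out is
# run as a separate command, reproducing the
# "hdfs://... : No such file or directory" error.
bash -c "echo -updateLocation hdfs://cluster/apps/hive/warehouse ${out}" 2>&1
```

Note that the directory named in the error is never looked up on HDFS at all; bash is trying to execute it as a program, which is why it "does not exist" even though the HDFS path does.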

This update is done by Ambari's /var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_service.py (there is no comment explaining its purpose):

def check_fs_root():
  import params
  # Expected warehouse root, e.g. hdfs://cluster/apps/hive/warehouse
  fs_root_url = format("{fs_root}{hive_apps_whs_dir}")
  metatool_cmd = format("hive --config {hive_server_conf_dir} --service metatool")
  # List the metastore's FS roots, keeping every hdfs:// line
  # that does not end in .db
  cmd = as_user(format("{metatool_cmd} -listFSRoot 2>/dev/null", env={'PATH' : params.execute_path }), params.hive_user) + " | grep hdfs:// | grep -v '.db$'"
  code, out = shell.call(cmd)
  # If the filtered output differs from the expected warehouse root,
  # rewrite the metastore locations to point at that root
  if code == 0 and fs_root_url.strip() != out.strip():
    cmd = format("{metatool_cmd} -updateLocation {fs_root}{hive_apps_whs_dir} {out}")
    Execute(cmd,
            user=params.hive_user,
            environment= {'PATH' : params.execute_path }
    )
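The guard compares the expected warehouse root against everything the grep pipeline returned. If the metastore reports any additional FS root (for example a database in a non-default location), the comparison can never match, so the update is attempted on every start. A sketch of that comparison (the variable values are assumptions for illustration):

```shell
fs_root_url="hdfs://cluster/apps/hive/warehouse"
# Assumed -listFSRoot output after filtering: the warehouse root
# plus one non-default database location.
out=$'hdfs://cluster/user/foo/DATABASE\nhdfs://cluster/apps/hive/warehouse'

if [ "$fs_root_url" != "$out" ]; then
  # Even though the warehouse root is already correct, the extra
  # line forces a mismatch, so Ambari would run -updateLocation.
  echo "mismatch: -updateLocation would run"
fi
```

This suggests the check is only reliable when every database lives under the default warehouse root, which matches the behavior described above.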

1 Answer


See if this JIRA helps with the issue you are hitting:

https://issues.apache.org/jira/browse/AMBARI-10360