
I am trying to use Hive with Oozie via a Hive action. The Oozie workflow is supposed to load data from one Hive table into another: I have a table foo in Hive, and the script should load its data into a table "test".

I am using Cloudera VM with Hadoop 2.0.0-cdh4.4.0.

I run the workflow with the following command:

    [cloudera@localhost oozie-3.3.2+92]$ oozie job -oozie http://localhost:11000/oozie -config examples/apps/hive/job.properties -run

When I look at the JobTracker log file, it says: Table not found 'foo'. Any help?
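For what it's worth, the table seems to exist when I query from the Hive CLI directly on the VM (a quick check, assuming the hive CLI is on the PATH):

```shell
# Check whether 'foo' is visible to the metastore this CLI session connects to.
# This only proves the table exists in *this* session's metastore, which may
# not be the same metastore the Oozie launcher sees.
hive -e "SHOW TABLES LIKE 'foo';"
```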

--

    cat script.q:

    CREATE EXTERNAL TABLE test (
    id int,
    name string
    )
    ROW FORMAT DELIMITED
    FIELDS TERMINATED BY '\t'
    STORED AS TEXTFILE
    LOCATION
    '/user/cloudera/test';

    INSERT OVERWRITE table test SELECT * FROM foo;

--

    cat job.properties:

    nameNode=hdfs://localhost.localdomain:8020
    jobTracker=localhost.localdomain:8021
    queueName=default
    examplesRoot=examples
    oozie.use.system.libpath=true
    oozie.wf.application.path=${nameNode}/user/${user.name}/${examplesRoot}/apps/hive

--

    cat workflow.xml:

    <?xml version="1.0" encoding="UTF-8"?>
    <workflow-app xmlns="uri:oozie:workflow:0.2" name="hive-wf">
    <start to="hive-node"/>
    <action name="hive-node">
    <hive xmlns="uri:oozie:hive-action:0.2">
    <job-tracker>${jobTracker}</job-tracker>
    <name-node>${nameNode}</name-node>
    <script>script.q</script>
    </hive>
    <ok to="end"/>
    <error to="fail"/>
    </action>
    <kill name="fail">
    <message>Hive failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <end name="end"/>
    </workflow-app>
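As an aside, I notice the action has no job-xml element. If a hive-site.xml carrying the metastore settings needs to be shipped with the action, I assume the hive element would look roughly like this sketch (the HDFS path to hive-site.xml is a guess, not from my actual setup):

```xml
<!-- Hypothetical variant: ship a hive-site.xml to the launcher so it uses
     the same metastore configuration as the CLI. The HDFS path below is
     an assumption for illustration only. -->
<hive xmlns="uri:oozie:hive-action:0.2">
    <job-tracker>${jobTracker}</job-tracker>
    <name-node>${nameNode}</name-node>
    <job-xml>/user/cloudera/examples/apps/hive/hive-site.xml</job-xml>
    <script>script.q</script>
</hive>
```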

==

    [cloudera@localhost hive]$ pwd
    /usr/share/doc/oozie-3.3.2+92/examples/apps/hive

==

    Current (local) dir = /mapred/local/taskTracker/cloudera/jobcache/job_201405081447_0019/attempt_201405081447_0019_m_000000_0/work
    ------------------------
    hive-exec-log4j.properties
    .action.xml.crc
    tmp
    hive-log4j.properties
    hive-site.xml
    action.xml
    script.q
    ------------------------

    Script [script.q] content: 
    ------------------------
    CREATE EXTERNAL TABLE test (
    id int,
    name string
    )
    ROW FORMAT DELIMITED
    FIELDS TERMINATED BY '\t'
    STORED AS TEXTFILE
    LOCATION
    '/user/cloudera/test';
    INSERT OVERWRITE table test SELECT * FROM foo;
        ------------------------
    Hive command arguments :
    fhive-node--hive
    script.q
    =================================================================
    >>> Invoking Hive command line now >>>
    Hadoop Job IDs executed by Hive:
    Intercepting System.exit(10001)
    <<< Invocation of Main class completed <<<
    Failing Oozie Launcher, Main class [org.apache.oozie.action.hadoop.HiveMain], exit code [10001]
    Oozie Launcher failed, finishing Hadoop job gracefully

    oozie Launcher ends

    stderr logs
    Logging initialized using configuration in jar:file:/mapred/local/taskTracker/distcache/9141962611866023942_1400842701_327187723/localhost.localdomain/user/oozie/share/lib/hive/hive-common-0.10.0-cdh4.4.0.jar!/hive-log4j.properties
    Hive history file=/tmp/mapred/hive_job_log_eecd5d6b-69d3-4dbd-94ed-9c86ef42443d_1563998739.txt
    OK
    Time taken: 9.816 seconds
    FAILED: SemanticException [Error 10001]: Line 3:42 Table not found 'foo'
    Log file: /mapred/local/taskTracker/cloudera/jobcache/job_201405081447_0019/attempt_201405081447_0019_m_000000_0/work/hive-oozie-job_201405081447_0019.log not present. Therefore no Hadoop jobids found
    Intercepting System.exit(10001)
    Failing Oozie Launcher, Main class [org.apache.oozie.action.hadoop.HiveMain], exit code [10001]

    syslog logs

    2014-05-12 10:12:10,156 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
    2014-05-12 10:12:11,099 INFO org.apache.hadoop.mapred.TaskRunner: Creating symlink: /mapred/local/taskTracker/distcache/-2339055663322524001_1176285901_1902801582/localhost.localdomain/user/cloudera/examples/apps/hive/script.q <- /mapred/local/taskTracker/cloudera/jobcache/job_201405081447_0019/attempt_201405081447_0019_m_000000_0/work/script.q
    2014-05-12 10:12:11,231 WARN org.apache.hadoop.conf.Configuration: session.id is deprecated. Instead, use dfs.metrics.session-id
    2014-05-12 10:12:11,231 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=MAP, sessionId=
    2014-05-12 10:12:11,544 INFO org.apache.hadoop.util.ProcessTree: setsid exited with exit code 0
    2014-05-12 10:12:11,549 INFO org.apache.hadoop.mapred.Task: Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@375e293a
    2014-05-12 10:12:11,755 INFO org.apache.hadoop.mapred.MapTask: Processing split: hdfs://localhost.localdomain:8020/user/cloudera/oozie-oozi/0000014-140508144817449-oozie-oozi-W/hive-node--hive/input/dummy.txt:0+5
    2014-05-12 10:12:11,773 WARN mapreduce.Counters: Counter name MAP_INPUT_BYTES is deprecated. Use FileInputFormatCounters as group name and BYTES_READ as counter name instead
    2014-05-12 10:12:11,775 INFO org.apache.hadoop.mapred.MapTask: numReduceTasks: 0

==

Thanks,

Rio


1 Answer


Which metastore are you using for Hive?

If you are using Derby (the default), then it is a local metastore, visible only from the node where you ran Hive and created the table. The Oozie action may run on a different machine, connect to its own local metastore, and therefore never see the table you defined in the earlier step.
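Once a shared metastore service is running, all clients (including the Oozie launcher, if it is given a hive-site.xml) point at it through hive.metastore.uris. A minimal sketch, assuming the metastore service listens on the conventional port 9083 on a host named metastore-host (both are placeholders):

```xml
<!-- hive-site.xml fragment (sketch): point Hive clients at a shared remote
     metastore service instead of a per-process Derby instance.
     Host name and port below are assumptions for illustration. -->
<property>
  <name>hive.metastore.uris</name>
  <value>thrift://metastore-host:9083</value>
</property>
```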

You need to install and configure a remote metastore backed by a database such as MySQL or PostgreSQL.
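On the metastore service side, the configuration points at the shared database. A rough hive-site.xml sketch for a MySQL backend (the host, database name, and credentials are placeholders, not values from this setup):

```xml
<!-- hive-site.xml fragment (sketch): back the metastore with MySQL.
     db-host, the 'metastore' database, and the credentials are placeholders. -->
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://db-host/metastore</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hiveuser</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>hivepassword</value>
</property>
```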

See instructions here: http://www.cloudera.com/content/cloudera-content/cloudera-docs/CDH5/latest/CDH5-Installation-Guide/cdh5ig_hive_metastore_configure.html