0
votes
grunt> table_load = load 'test_table_one' USING org.apache.hive.hcatalog.pig.HCatLoader();
grunt> dump table_load;

2016-10-05 17:25:43,798 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.defaultFS
2016-10-05 17:25:43,930 [main] INFO hive.metastore - Trying to connect to metastore with URI thrift://localhost:9084
2016-10-05 17:25:43,931 [main] INFO hive.metastore - Opened a connection to metastore, current connections: 1
2016-10-05 17:25:43,934 [main] INFO hive.metastore - Connected to metastore.
…
2016-10-05 17:25:58,707 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - HadoopJobId: job_1475669003352_0017
2016-10-05 17:25:58,707 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Processing aliases table_load
2016-10-05 17:25:58,707 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - detailed locations: M: table_load[7,13] C: R:
2016-10-05 17:25:58,716 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 0% complete
2016-10-05 17:25:58,716 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Running jobs are [job_1475669003352_0017]
2016-10-05 17:26:13,753 [main] WARN org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Ooops! Some job has failed! Specify -stop_on_failure if you want Pig to stop immediately on failure.
2016-10-05 17:26:13,753 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - job job_1475669003352_0017 has failed! Stop running all dependent jobs
2016-10-05 17:26:13,753 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 100% complete
2016-10-05 17:26:13,882 [main] ERROR org.apache.pig.tools.pigstats.mapreduce.MRPigStatsUtil - 1 map reduce job(s) failed!
2016-10-05 17:26:13,883 [main] INFO org.apache.pig.tools.pigstats.mapreduce.SimplePigStats - Script Statistics:

HadoopVersion  PigVersion  UserId  StartedAt            FinishedAt           Features
2.6.0          0.15.0      hadoop  2016-10-05 17:25:57  2016-10-05 17:26:13  UNKNOWN

Failed!

Failed Jobs:
JobId                   Alias       Feature   Message               Outputs
job_1475669003352_0017  table_load  MAP_ONLY  Message: Job failed!  hdfs://mycluster/tmp/temp81690062/tmp2002161033,

Input(s): Failed to read data from "test_table_one"

Output(s): Failed to produce result in "hdfs://mycluster/tmp/temp81690062/tmp2002161033"

Counters:
Total records written : 0
Total bytes written : 0
Spillable Memory Manager spill count : 0
Total bags proactively spilled: 0
Total records proactively spilled: 0

Job DAG: job_1475669003352_0017

2016-10-05 17:26:13,883 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Failed!
2016-10-05 17:26:13,889 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1066: Unable to open iterator for alias table_load
Details at logfile: /home/hadoop/pig_1475674706670.log

Can you help me find out why this is happening?

try this: pig -useHCatalog – Arunakiran Nulu
Looks like an issue with access. Can you check the logs of the failed task on the YARN ResourceManager? – vgunnu
Grunt was started with pig -useHCatalog @ArunakiranNulu – onlyvinish
Thanks, your answer gave me a lead to fixing this issue @vgunnu – onlyvinish
@onlyvinish If you found the answer and think it is relevant for future readers, please summarize your solution and post it as an answer to help other visitors. If you think that it is not relevant, please delete the question to keep the site clean. – Dennis Jaheruddin
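Following vgunnu's suggestion above, a minimal sketch of pulling the failed task's logs; the application ID is derived from the HadoopJobId in the output above (job_… becomes application_…), so adjust it to your own job:

yarn logs -applicationId application_1475669003352_0017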

2 Answers

0
votes

Either start Pig with pig -useHCatalog, or start plain pig and REGISTER the supporting JARs so HCatalog works from grunt.

You can find the required JARs among those shipped into HDFS when you run pig -useHCatalog.
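For example, a minimal sketch of the REGISTER approach; the JAR paths and version numbers below are illustrative assumptions and must be adjusted to match your Hive/HCatalog installation:

grunt> REGISTER /usr/lib/hive/lib/hive-metastore-1.2.1.jar;
grunt> REGISTER /usr/lib/hive/lib/hive-exec-1.2.1.jar;
grunt> REGISTER /usr/lib/hive/lib/libthrift-0.9.2.jar;
grunt> REGISTER /usr/lib/hive/lib/libfb303-0.9.2.jar;
grunt> REGISTER /usr/lib/hive-hcatalog/share/hcatalog/hive-hcatalog-core-1.2.1.jar;
grunt> REGISTER /usr/lib/hive-hcatalog/share/hcatalog/hive-hcatalog-pig-adapter-1.2.1.jar;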

0
votes
grunt> table_load = load 'test_table_one' USING org.apache.hive.hcatalog.pig.HCatLoader();
grunt> dump table_load;

This may happen because you haven't created the Hive table with that exact name. Check the Hive table and its schema. Before using HCatalog, we have to create the table schema in Hive on top of the location from which we are loading the data. Use a queue name if required. Before executing, please check that the table exists in Hive.
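As a hedged sketch (the columns, delimiter, and location below are hypothetical; match them to your actual data), create and verify the table in Hive first:

hive> CREATE EXTERNAL TABLE test_table_one (id INT, name STRING)
    > ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
    > LOCATION '/user/hadoop/test_table_one';
hive> SHOW TABLES LIKE 'test_table_one';
hive> DESCRIBE test_table_one;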

Hope it helps.