
I have a question about running a Hadoop MapReduce job. I have a table `staff`, partitioned by join date, created like this:

    create table staff (id int, age int) partitioned by (join_date string) row format delimited fields terminated by '\;';

I put some data into partition '20130921'; when I execute the statement below, the result is OK:

    select count(*) from staff where join_date='20130921';
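For context, data typically lands in a Hive partition with a statement along these lines (this is a hypothetical sketch; the local file path and the use of `LOAD DATA` are assumptions, not taken from the original post):

```sql
-- Hypothetical load into the populated partition; the input path is made up.
LOAD DATA LOCAL INPATH '/tmp/staff_20130921.txt'
OVERWRITE INTO TABLE staff PARTITION (join_date='20130921');
```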

But when I execute it on partition '20130922' (a partition without data), the MapReduce job stays pending far too long and seems to run forever:

    hive> select count(*) from staff where join_date='20130922';
    Total MapReduce jobs = 1
    Launching Job 1 out of 1
    Number of reduce tasks determined at compile time: 1
    In order to change the average load for a reducer (in bytes):
      set hive.exec.reducers.bytes.per.reducer=<number>
    In order to limit the maximum number of reducers:
      set hive.exec.reducers.max=<number>
    In order to set a constant number of reducers:
      set mapred.reduce.tasks=<number>
    Starting Job = job_201309231116_0131, Tracking URL = ....jobid=job_201309231116_0131
    Kill Command = /u01/hadoop-0.20.203.0/bin/../bin/hadoop job  -kill job_201309231116_0131
    Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 1
    2013-09-23 17:19:07,182 Stage-1 map = 0%,  reduce = 0%
    2013-09-23 17:19:07,182 Stage-1 map = 0%,  reduce = 0%
    2013-09-23 17:19:07,182 Stage-1 map = 0%,  reduce = 0%

The JobTracker shows the reduce task as pending, and the job never seems to finish.

I'm using hadoop-0.20.203.0 and hive-0.10.0. I googled all day but didn't find any topic with the same problem. Please help me.
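One way to narrow this down is to confirm what the metastore and HDFS actually know about the empty partition; `SHOW PARTITIONS` is a standard Hive command, while the warehouse path below is an assumption (the default location, which may differ in your setup):

```sql
-- List the partitions registered in the metastore for the table.
SHOW PARTITIONS staff;

-- Separately, inspect the partition's directory on HDFS (assumed default
-- warehouse location):
--   hadoop fs -ls /user/hive/warehouse/staff/join_date=20130922
```

If the partition is registered but its directory is empty, Hive still plans a job with 0 mappers, which matches the `number of mappers: 0` line in the log above.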

Best regards.

Did you find anything interesting in the TaskTracker logs? – Tariq
I traced the JobTracker log, TaskTracker log, and job log but didn't find any warning or error. I tested a `select count(*)` statement on a table without partitions and the result was the same: the MapReduce job can't finish. I tried the `mapreduce.task.timeout` property, but Hadoop doesn't kill the job. – user2806318

1 Answer


This seems to be a problem with your Hive installation. I came across a similar problem, and restarting the Hive Server and the Hive Metastore fixed it for me. You can try the same.
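For Hive 0.10, restarting the services from the command line looks roughly like this. This is a sketch, not a definitive procedure: the `hive --service` entry points are the stock ones for that release, but the `nohup` usage and log file names are assumptions, and how you stop the old processes depends on how they were launched:

```shell
# Stop any running metastore / Hive server processes first (e.g. via their
# recorded pids), then restart both services in the background:
nohup hive --service metastore  > metastore.log  2>&1 &
nohup hive --service hiveserver > hiveserver.log 2>&1 &
```

After both services are back up, re-run the `count(*)` query on the empty partition to see whether the job still hangs.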