I am new to Hadoop and have run into the following problem. I am trying to change the reducer's default output types from Text, IntWritable to Text, Text. The mapper emits Text, IntWritable pairs; in the reducer I keep two counters that are incremented depending on the incoming value, and I then write both counters out as a single Text through the output collector.
import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

public class WordCountMapper extends MapReduceBase
        implements Mapper<LongWritable, Text, Text, IntWritable> {

    private final IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(LongWritable key, Text value,
            OutputCollector<Text, IntWritable> output, Reporter reporter)
            throws IOException {
        String line = value.toString();
        String[] words = line.split(",");
        // the third comma-separated field holds a date; its first three tokens become the key
        String[] date = words[2].split(" ");
        word.set(date[0] + " " + date[1] + " " + date[2]);
        // emit 0 or 4 depending on whether the first field contains a "0"
        if (words[0].contains("0"))
            one.set(0);
        else
            one.set(4);
        output.collect(word, one);
    }
}
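To illustrate what the mapper expects: the input lines are comma-separated and the third field begins with a date. A made-up line such as 1,foo,10 Dec 2014 18:00 (not my real data) would produce the key 10 Dec 2014 and, because the first field does not contain a 0, the value 4.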
-----------------------------------------------------------------------------------
import java.io.IOException;
import java.util.Iterator;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

public class WordCountReducer extends MapReduceBase
        implements Reducer<Text, IntWritable, Text, Text> {

    public void reduce(Text key, Iterator<IntWritable> values,
            OutputCollector<Text, Text> output, Reporter reporter)
            throws IOException {
        int sad = 0;
        int happy = 0;
        // count how many 0 ("sad") and non-zero ("happy") values arrived for this key
        while (values.hasNext()) {
            IntWritable value = values.next();
            if (value.get() == 0)
                sad++;
            else
                happy++;
        }
        output.collect(key, new Text("sad:" + sad + ", happy:" + happy));
    }
}
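The output I am hoping for is one line per date with both counters, for example (the numbers are made up):

10 Dec 2014    sad:3, happy:7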
---------------------------------------------------------------------------------
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

public class WordCount {

    public static void main(String[] args) {
        JobClient client = new JobClient();
        JobConf conf = new JobConf(WordCount.class);

        // specify output types
        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);

        // specify input and output dirs
        FileInputFormat.addInputPath(conf, new Path("input"));
        FileOutputFormat.setOutputPath(conf, new Path("output"));

        // specify a mapper
        conf.setMapperClass(WordCountMapper.class);

        // specify a reducer (also used as the combiner)
        conf.setReducerClass(WordCountReducer.class);
        conf.setCombinerClass(WordCountReducer.class);

        client.setConf(conf);
        try {
            JobClient.runJob(conf);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
I get this error:
14/12/10 18:11:01 INFO mapred.JobClient: Task Id : attempt_201412100143_0008_m_000000_0, Status : FAILED
java.io.IOException: Spill failed
    at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.collect(MapTask.java:425)
    at WordCountMapper.map(WordCountMapper.java:31)
    at WordCountMapper.map(WordCountMapper.java:1)
    at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:47)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:227)
    at org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:2209)
Caused by: java.io.IOException: wrong value class: class org.apache.hadoop.io.Text is not class org.apache.hadoop.io.IntWritable
    at org.apache.hadoop.mapred.IFile$Writer.append(IFile.java:143)
    at org.apache.hadoop.mapred.Task$CombineOutputCollector.collect(Task.java:626)
    at WordCountReducer.reduce(WordCountReducer.java:29)
    at WordCountReducer.reduce(WordCountReducer.java:1)
    at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.combineAndSpill(MapTask.java:904)
    at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.sortAndSpill(MapTask.java:785)
    at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.access$1600(MapTask.java:286)
    at org.apache.hadoop.mapred.MapTask$MapOutputBuffer$SpillThread.run(MapTask.java:712)
The error then repeats several times. Could someone explain why it occurs? I searched for similar errors, but everything I found came down to mismatched key-value types between the mapper and the reducer, and as far as I can tell my mapper and reducer types do match. Thank you in advance.
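For what it's worth, I am aware that the old JobConf API also has setMapOutputKeyClass and setMapOutputValueClass for declaring the map output types separately from the final output types. Below is a rough sketch of how I understand the driver configuration would look in that case (WordCountSketch is just a placeholder name, and I have not confirmed whether this is related to the error above):

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

public class WordCountSketch {

    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(WordCountSketch.class);

        // types emitted by WordCountMapper
        conf.setMapOutputKeyClass(Text.class);
        conf.setMapOutputValueClass(IntWritable.class);

        // types written by WordCountReducer
        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(Text.class);

        FileInputFormat.addInputPath(conf, new Path("input"));
        FileOutputFormat.setOutputPath(conf, new Path("output"));

        conf.setMapperClass(WordCountMapper.class);
        conf.setReducerClass(WordCountReducer.class);
        // no combiner here, since the reducer emits Text values
        // while the map output values are IntWritable

        JobClient.runJob(conf);
    }
}

I am not sure whether the problem is in these type declarations or in reusing the reducer as a combiner, so any explanation would help.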