I am writing a MapReduce job with Hadoop. In the reduce method, I want to use context.write(), but my output value is an int. How can I do this? When I call context.write() with an int, I get a compile error:
The second argument cannot be int.
This is my code:
public void reduce(Text key, Iterable<NullWritable> values, Context context)
        throws IOException, InterruptedException {
    int count = 0;
    for (NullWritable nullWritable : values) {
        count++;
    }
    //context.write(key, count); // does not compile: count is an int, not a Writable
}
This reducer counts the values for each key. It should then write the key together with the count variable. How can I do that?
Answer
I found my answer. I should create a new IntWritable (Hadoop's Writable wrapper for int) and set its value with its set(intValue) method, like in the following code:
private final IntWritable c = new IntWritable(); // reused across reduce() calls

public void reduce(Text key, Iterable<NullWritable> values, Context context)
        throws IOException, InterruptedException {
    int count = 0;
    for (NullWritable nullWritable : values) {
        count++;
    }
    c.set(count);          // wrap the int in the Writable
    context.write(key, c); // now both arguments are Writable types
}