1
votes

I have written a Mapper and Reducer in Python and have executed them successfully on Amazon's Elastic MapReduce (EMR) using Hadoop Streaming.

The final result folder contains the output split across three files: part-00000, part-00001, and part-00002. But I need the output as one single file. Is there a way I can do that?

Here is my code for the Mapper:

#!/usr/bin/env python

import sys

# read lines from STDIN and emit a (word, 1) pair for every word
for line in sys.stdin:
    line = line.strip()
    words = line.split()
    for word in words:
        print '%s\t%s' % (word, 1)

And here is my code for the Reducer:

#!/usr/bin/env python

from operator import itemgetter
import sys

current_word = None
current_count = 0
word = None
max_count = 0

# input arrives from Hadoop sorted by key, so all counts for a word are adjacent
for line in sys.stdin:
    line = line.strip()
    word, count = line.split('\t', 1)

    try:
        count = int(count)
    except ValueError:
        # count was not a number, so silently ignore this line
        continue

    if current_word == word:
        current_count += count
    else:
        if current_word:
            # write result to STDOUT, skipping words that start with '@'
            if current_word[0] != '@':
                print '%s\t%d' % (current_word, current_count)
                if count > max_count:
                    max_count = count
        current_count = count
        current_word = word

# do not forget to emit the last word
if current_word == word:
    print '%s\t%s' % (current_word, current_count)

I need the output of this as one single file.

4
Can't you just open the three files and concatenate them into a single output file? – James Mills
That is what I have been doing. But I would like it if I could get a single output file after the Reduce phase. – Arun Kumar
Can't you just do (Linux/UNIX): cat part-00000 part-00001 part-00002 > output? – James Mills
Thanks, James. That's one way. But can't I get EMR itself to spit it out as one single part file? – Arun Kumar

4 Answers

1
votes

A really simple way of doing this (assuming a Linux/UNIX system):

$ cat part-00000 part-00001 part-00002 > output
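
If the part files still live on HDFS (or S3) rather than on the local disk, a similar one-liner can be run through the Hadoop CLI; the path below is just a placeholder for wherever your job wrote its results:

$ hadoop fs -cat /path/to/job/output/part-* > output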
0
votes

Use a single reducer for small datasets/processing, or use the getmerge option of hadoop fs on the output files of the job.
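
For the single-reducer route with Hadoop Streaming, the number of reducers can be forced to 1 with the mapred.reduce.tasks property, so the job itself emits exactly one part file. This is only a sketch: the jar location, bucket names, and script names below are placeholders, not values from the question.

hadoop jar /path/to/hadoop-streaming.jar \
    -D mapred.reduce.tasks=1 \
    -input s3://your-bucket/input/ \
    -output s3://your-bucket/output/ \
    -mapper mapper.py \
    -reducer reducer.py \
    -file mapper.py \
    -file reducer.py

Keep in mind that a single reducer funnels all intermediate data through one machine, which is why this is only sensible for small datasets.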

0
votes

My solution to the above problem was to execute the following HDFS command:

hadoop fs -getmerge /hdfs/path local_file

where /hdfs/path is a path containing all the parts (part-*****) of the job output. The -getmerge option of hadoop fs will merge all of the job output into a single file on the local file system.
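
For example, assuming the job wrote its part files to /user/hadoop/job_output (an illustrative path, not one from the question), the merged copy can be produced locally and, if needed, pushed back to HDFS:

hadoop fs -getmerge /user/hadoop/job_output merged_output.txt
hadoop fs -put merged_output.txt /user/hadoop/merged_output.txt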

0
votes

I had the same problem lately. Actually, a combiner should do this task, but I couldn't implement it somehow. What I did is:

  1. step1: mapper1.py reducer1.py

    input: s3://../data/

    output: s3://..../small_output/

  2. step2: mapper2.py reducer2.py

    input: s3://../data/

    output: s3://..../output2/

  3. step3: mapper3.py reducer3.py

    input: s3://../output2/

    output: s3://..../final_output/

I assume that we need the output of step1 as a single file at step3.

At the top of mapper2.py, there is this code:

import os

if not os.path.isfile('/tmp/s3_sync_flag'):
    os.system('touch /tmp/s3_sync_flag')
    # [download files to /tmp/output/]
    os.system('cat /tmp/output/part* > /tmp/output/all')

The if block guards against the download/merge code being executed by more than one mapper instance.
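
A slightly more complete version of that guard might look like the sketch below. The hadoop fs -copyToLocal call is only an assumption standing in for however the files are actually downloaded, and the S3 path is left elided exactly as in the step list above.

import os

FLAG = '/tmp/s3_sync_flag'   # marker file: only the first mapper on a node does the merge
LOCAL_DIR = '/tmp/output'

if not os.path.isfile(FLAG):
    os.system('touch %s' % FLAG)
    if not os.path.isdir(LOCAL_DIR):
        os.makedirs(LOCAL_DIR)
    # download step1's part files to the local staging directory
    # (S3 path elided here just as in the step list above)
    os.system('hadoop fs -copyToLocal s3://..../small_output/part-* %s/' % LOCAL_DIR)
    # concatenate them into the single file that the rest of mapper2.py reads
    os.system('cat %s/part-* > %s/all' % (LOCAL_DIR, LOCAL_DIR))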