13
votes

I've recently started getting into data analysis and have learned quite a bit over the last year (at the moment, pretty much exclusively using Python). I feel the next step is to begin training myself in MapReduce/Hadoop. I have no formal computer science training, however, so I often don't quite understand the jargon used when people write about Hadoop, hence my question here.

What I am hoping for is a top level overview of Hadoop (unless there is something else I should be using?) and perhaps a recommendation for some sort of tutorial/text book.

If, for example, I want to parallelise a neural network which I have written in Python, where would I start? Is there a relatively standard method for implementing Hadoop with an algorithm or is each solution very problem specific?

The Apache wiki page describes Hadoop as "a framework for running applications on large cluster built of commodity hardware". But what does that mean? I've heard the term "Hadoop cluster", and I know that Hadoop is Java-based. So, for the above example, would I need to learn Java, set up a Hadoop cluster on, say, a few Amazon servers, and then Jython-ify my algorithm before finally getting it to work on the cluster using Hadoop?

Thanks a lot for any help!

6

6 Answers

15
votes

First, to use Hadoop with Python (whether you run it on your own cluster, on Amazon EMR, or anywhere else) you will need a feature called "Hadoop Streaming".

Read the original chapter (updated link) of the Hadoop manual to get an idea of how it works.
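To make that concrete, here is a minimal sketch of the two scripts a Streaming word-count job would use (the function names and the inline demo are mine, not from any tutorial; in a real job, Hadoop pipes raw text lines to the mapper's stdin and sorted, tab-separated key/value lines to the reducer's stdin):

```python
import sys
from itertools import groupby

def mapper(lines):
    # Emit "word\t1" for every word; Streaming reads these from stdout.
    for line in lines:
        for word in line.split():
            yield f"{word}\t1"

def reducer(lines):
    # Streaming delivers mapper output sorted by key, so identical
    # words arrive on adjacent lines and groupby can collect them.
    pairs = (line.split("\t") for line in lines)
    for word, group in groupby(pairs, key=lambda kv: kv[0]):
        yield f"{word}\t{sum(int(count) for _, count in group)}"

if __name__ == "__main__":
    # In a real job each script would instead loop over sys.stdin.
    mapped = sorted(mapper(["the cat sat", "the cat"]))
    for out in reducer(mapped):
        print(out)
```

You can test such scripts locally with an ordinary shell pipeline (`cat input.txt | python mapper.py | sort | python reducer.py`) before submitting anything to a cluster.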

There is also a great library, "mrjob", that simplifies running Python jobs on Hadoop.

You could set up your own cluster or try to play with Amazon Elastic MapReduce. The latter can cost you something, but it is sometimes easier to get running at the beginning. There is a great tutorial on how to run Python with Hadoop Streaming on Amazon EMR. It immediately shows a simple but practical application.

To learn Hadoop itself, I would recommend reading one of the books out there. They say that "Hadoop in Action" is better at covering things for those who are interested in Python/Hadoop Streaming.

Also note that for testing/learning things you can run Hadoop on your local machine without having an actual cluster.

UPDATE:

As for understanding MapReduce (that is, how to identify and express different kinds of problems in the MapReduce language), read the great article "MapReduce Patterns, Algorithms, and Use Cases" with examples in Python.
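To get a feel for expressing a problem in MapReduce terms without any cluster at all, you can simulate the three phases (map, shuffle/group, reduce) in plain Python. This is a toy sketch of my own, not Hadoop's API:

```python
from collections import defaultdict

def run_mapreduce(records, map_fn, reduce_fn):
    # Map phase: each record becomes zero or more (key, value) pairs.
    mapped = [pair for record in records for pair in map_fn(record)]
    # Shuffle phase: group all values by key (Hadoop does this
    # between the map and reduce phases, across machines).
    groups = defaultdict(list)
    for key, value in mapped:
        groups[key].append(value)
    # Reduce phase: combine each key's values into one result.
    return {key: reduce_fn(key, values) for key, values in groups.items()}

# Word count expressed as a map function plus a reduce function.
counts = run_mapreduce(
    ["the cat sat", "the cat"],
    map_fn=lambda line: [(word, 1) for word in line.split()],
    reduce_fn=lambda word, ones: sum(ones),
)
```

Once a problem fits this shape (a per-record map function and a per-key reduce function), porting it to Hadoop Streaming is mostly mechanical.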

4
votes

I would recommend you start by downloading the Cloudera VM for Hadoop, which is pretty much a standard across many industries these days and simplifies the Hadoop setup process. Then follow this tutorial for the word-count example, which is the standard "hello world" equivalent for learning MapReduce.

Before that, a simple way to understand map/reduce is to try Python's built-in map and reduce functions (in Python 3, reduce has moved to the functools module):

from functools import reduce
x = [1, 2, 3, 4]
y = list(map(lambda z: z * z, x))
print(y)
[1, 4, 9, 16]
q = reduce(lambda m, n: m + n, y)
print(q)
30

Here the mapper transforms the data by squaring every element, and the reducer sums up the squares. Hadoop uses the same idea to scale computations across a cluster, but you need to write your own map and reduce functions.

3
votes

For those who like MOOCs, there is Intro to Hadoop and MapReduce on Udacity, made in collaboration with Cloudera. During the course you have a chance to install the Cloudera Hadoop Distribution virtual machine locally and perform some map/reduce jobs on sample datasets. Hadoop Streaming is used for interaction with the Hadoop cluster, and the programming is done in Python.

0
votes

Why not start from the original MapReduce paper by Google? After all, that is where everyone else started. For parallelism, there are many different options to choose from here.

0
votes

http://blog.doughellmann.com/2009/04/implementing-mapreduce-with.html

Doug's solution isn't suitable for Google-scale production, as it's just a thin wrapper around Python's multiprocessing pool (it only uses one machine, though it can use many cores on that machine). But it's enough to get you started, and it's easy to see what it's doing.
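In the same spirit, here is a minimal single-machine sketch of my own (the function names are hypothetical) that parallelizes the map phase with multiprocessing.Pool and runs the reduce phase in the parent process:

```python
from functools import reduce
from multiprocessing import Pool

def square(z):
    # The "map" step; must be a top-level function so the Pool's
    # worker processes can pickle and import it.
    return z * z

def parallel_sum_of_squares(xs):
    with Pool(processes=4) as pool:
        mapped = pool.map(square, xs)          # parallel map phase
    return reduce(lambda m, n: m + n, mapped)  # reduce phase, one process

if __name__ == "__main__":
    print(parallel_sum_of_squares([1, 2, 3, 4]))
```

This is the same map/square, reduce/sum example as above, just spread over worker processes; swapping in a real workload only changes the map function.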

I want to parallelise a neural network

Not easy. Communication between the nodes would be more trouble than it's worth. You could, however, run multiple instances of the network to increase throughput.

ML problems are often very easy to parallelize: you run a different neural network on each node. Gradient descent can be a bit tricky, as it is inherently sequential, but you can use some other optimization method (try several different step sizes in parallel, and pick whichever works best).
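A toy sketch of that last idea, on a made-up objective of my choosing (minimize f(w) = (w - 3)^2): each candidate step size is an independent run, so each one could be farmed out to its own node; here they just run in a loop.

```python
def gradient_descent(step_size, steps=50):
    # Minimize f(w) = (w - 3)**2; its gradient is 2 * (w - 3).
    w = 0.0
    for _ in range(steps):
        w -= step_size * 2 * (w - 3)
    return w

# Independent runs: embarrassingly parallel across step sizes.
candidates = [0.01, 0.1, 0.5]
results = {s: gradient_descent(s) for s in candidates}

# Keep whichever run ended closest to the minimum at w = 3.
best_step = min(results, key=lambda s: (results[s] - 3) ** 2)
```

Because the runs never communicate, this pattern maps cleanly onto one-task-per-node execution, unlike a single tightly coupled gradient-descent run.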

0
votes

Well, I have been working on this for 4 days in a row, and I think I have finally got the hang of it. Please check this repo; I think it will help.