I've recently started getting into data analysis and have learned quite a bit over the last year (at the moment, pretty much exclusively using Python). I feel the next step is to begin training myself in MapReduce/Hadoop. However, I have no formal computer science training, so I often don't quite understand the jargon used when people write about Hadoop, hence my question here.
What I am hoping for is a top-level overview of Hadoop (unless there is something else I should be using?) and perhaps a recommendation for some sort of tutorial/textbook.
If, for example, I want to parallelise a neural network I have written in Python, where would I start? Is there a relatively standard method for implementing an algorithm with Hadoop, or is each solution very problem-specific?
The Apache wiki page describes Hadoop as "a framework for running applications on large clusters built of commodity hardware". But what does that mean? I've heard the term "Hadoop cluster" and I know that Hadoop is Java-based. So does that mean that for the above example I would need to learn Java, set up a Hadoop cluster on, say, a few Amazon servers, and then Jython-ify my algorithm before finally getting it to work on the cluster using Hadoop?
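The closest thing to a pure-Python route I've come across is Hadoop Streaming, where (as far as I understand it) the mapper and reducer are just scripts that read lines from stdin and write tab-separated key/value pairs to stdout. Below is a minimal word-count-style sketch of what I think that looks like; the file names and the launch command are just my guesses, and I have no idea whether this is how people actually plug a real algorithm like a neural network into Hadoop:

```python
#!/usr/bin/env python
# mapper.py -- Hadoop Streaming (I believe) feeds input lines on stdin;
# we emit "key<TAB>value" pairs on stdout.
import sys

for line in sys.stdin:
    for word in line.strip().split():
        print("%s\t%d" % (word, 1))
```

```python
#!/usr/bin/env python
# reducer.py -- the framework sorts mapper output by key before it reaches
# us, so identical keys arrive as consecutive lines and can be summed.
import sys

current_word, current_count = None, 0
for line in sys.stdin:
    word, count = line.rstrip("\n").split("\t", 1)
    if word == current_word:
        current_count += int(count)
    else:
        if current_word is not None:
            print("%s\t%d" % (current_word, current_count))
        current_word, current_count = word, int(count)
if current_word is not None:
    print("%s\t%d" % (current_word, current_count))
```

My (possibly wrong) understanding is that you then launch this with something like `hadoop jar hadoop-streaming.jar -input in_dir -output out_dir -mapper mapper.py -reducer reducer.py -file mapper.py -file reducer.py` (jar path depending on the installation). Is this the standard way to use Python with Hadoop, or am I off track?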
Thanks a lot for any help!