I have a question regarding the MapReduce example explained here:
It is the most common Hadoop MapReduce example, WordCount.
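For reference, the job I am running is essentially the standard WordCount from the Hadoop MapReduce tutorial, sketched below (nothing Cosmos-specific; class and job names follow the tutorial):

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Mapper: emits (word, 1) for every token in each input line.
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {

    private static final IntWritable one = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  // Reducer: sums the per-word counts produced by the mappers.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {

    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class); // combiner reduces shuffle traffic
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));   // input directory
    FileOutputFormat.setOutputPath(job, new Path(args[1])); // output directory
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```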
I am able to execute it with no problems on the global instance of Cosmos, but even when I give it a small input (a file with two or three lines) it takes a long time to run (roughly half a minute). I assume this is its normal behavior, but my question is: why does it take so long even for such a small input?
I guess this approach pays off with bigger datasets, where this fixed startup overhead becomes negligible.