After Hadoop is started, two types of daemon processes are running. One is the daemon process called namenode on the namenode, the other is the daemon process called datanode on each datanode. I am sure that both are used when a big file is loaded from the local file system into HDFS by means of the "hdfs dfs" command.
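For reference, this is the kind of copy I mean (the paths here are just examples):

```
# copy a big local file into HDFS; /data/big.log and /user/me/ are example paths
hdfs dfs -put /data/big.log /user/me/
```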
But are these daemons also used when a Hadoop MapReduce job is running? My understanding is no, but maybe the datanode daemon is also used during the Shuffle phase, when the output of the map functions might be transferred from one datanode to another datanode.