The number of mappers that run does not depend on the number of nodes, blocks, or anything else; it depends only on the total number of input splits.
In a database context, a split might correspond to a range of rows.
For example, it is possible that a block in HDFS is 128 MB while the input split size is 256 MB; in that case only one mapper will run over that input split, which covers two blocks.
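If you want to influence this, the standard FileInputFormat exposes helpers for the minimum and maximum split size. Here is a minimal sketch (the class name SplitSizeDemo is just for illustration), assuming the new org.apache.hadoop.mapreduce API:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

public class SplitSizeDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "split-size-demo");

        // Force each split to be at least 256 MB, so with a 128 MB
        // block size one mapper covers two blocks.
        FileInputFormat.setMinInputSplitSize(job, 256L * 1024 * 1024);

        // FileInputFormat computes the effective split size as
        // max(minSize, min(maxSize, blockSize)), so this job would get
        // one mapper per 256 MB of input rather than one per block.
    }
}
```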
Now the question arises: how are input splits created?
These splits are created by the InputFormat class, whose getSplits() method creates the splits and whose createRecordReader() method reads records from them. You can override these methods if you want to change the way the splits are created, as in the sketch below.
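For instance, here is a minimal sketch of a custom InputFormat in the new API (the class name WholeFileTextInputFormat is hypothetical): it keeps the standard line reader but tells getSplits() not to split files, so each file is handled by exactly one mapper:

```java
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.LineRecordReader;

// Hypothetical example: a text input format that refuses to split
// files, so each file is processed by exactly one mapper.
public class WholeFileTextInputFormat extends FileInputFormat<LongWritable, Text> {

    @Override
    protected boolean isSplitable(JobContext context, Path file) {
        // Returning false makes getSplits() emit one split per file.
        return false;
    }

    @Override
    public RecordReader<LongWritable, Text> createRecordReader(
            InputSplit split, TaskAttemptContext context) {
        // Reuse the standard line reader; only the splitting policy changes.
        return new LineRecordReader();
    }
}
```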
These mapper tasks are started on different nodes of the cluster, but there is no guarantee that they will be evenly distributed. MapReduce always tries to assign a mapper task to a node that holds the data to be processed (data locality). If that is not possible, it assigns the task to the node with the best available resources.
Notice that an input split does not contain the actual data; it holds a reference to the data. These stored locations help MapReduce assign tasks to nodes.
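As a rough illustration (SplitLocationsDemo and the command-line input path are made up for this example), you can ask an InputFormat for its splits from a driver and print the metadata each one carries:

```java
import java.util.List;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

public class SplitLocationsDemo {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance();
        TextInputFormat.addInputPath(job, new Path(args[0]));

        // getSplits() returns metadata only: file, offset, length, and
        // the hosts holding the underlying blocks, not the data itself.
        List<InputSplit> splits = new TextInputFormat().getSplits(job);
        for (InputSplit split : splits) {
            FileSplit fs = (FileSplit) split;
            System.out.printf("%s offset=%d length=%d hosts=%s%n",
                fs.getPath(), fs.getStart(), fs.getLength(),
                String.join(",", fs.getLocations()));
        }
    }
}
```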
I suggest you visit this link http://javacrunch.in/Yarn.jsp; it will give you an idea of how YARN handles job allocation. You can also visit http://javacrunch.in/MR.jsp for the internal working of MapReduce.
Hope this solves your query.