I am using Pig to run my Hadoop job. When I run the Pig script and then navigate to the YARN ResourceManager UI, I can see multiple MapReduce jobs created for the same Pig job. I believe the same would happen for Hive jobs as well.
Can anyone explain the reasoning behind this? On what basis does one Pig job get split into multiple MapReduce jobs? One of them happens to be TempletonControllerJob.
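For example, a script along these lines (the relation names and paths are made up, not my actual job) shows up as more than one MapReduce job in the ResourceManager UI:

```pig
-- hypothetical script: a single Pig script containing both a
-- GROUP BY aggregation and an ORDER BY
logs   = LOAD '/data/logs' USING PigStorage('\t')
         AS (user:chararray, bytes:long);
by_usr = GROUP logs BY user;
totals = FOREACH by_usr GENERATE group AS user,
         SUM(logs.bytes) AS total;
ranked = ORDER totals BY total DESC;
STORE ranked INTO '/out/totals';
```

Even for a small script like this, the UI shows several MapReduce applications rather than one.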
Thanks
