When Spark (or any other application) has its resources managed via YARN, it becomes YARN's responsibility to guarantee those resources. Now,
how does YARN make sure that containers aren't affected by other containers on the same node?
Check here for a detailed explanation of YARN memory allocation.

YARN containers are JVM processes. While launching a container, the NodeManager (NM) specifies jvm-opts to restrict the JVM's memory, and a component called ContainersMonitor inside the NodeManager monitors the total memory usage of the process and sends a kill signal if the process tries to consume more than its allocated resources.
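To illustrate, here is a rough, simplified sketch of the kind of check the ContainersMonitor performs (not the actual Hadoop code; method and variable names are made up, and the 2.1 ratio mirrors the default of yarn.nodemanager.vmem-pmem-ratio):

```java
// Simplified illustration of the NodeManager's per-container memory check.
// The real logic lives in ContainersMonitorImpl; names below are hypothetical.
public class ContainerMemoryCheck {

    // Default value of yarn.nodemanager.vmem-pmem-ratio
    static final double VMEM_PMEM_RATIO = 2.1;

    /**
     * Returns true if the container's process tree should be killed:
     * either physical memory exceeds the allocated container size, or
     * virtual memory exceeds allocation * vmem-pmem-ratio.
     */
    static boolean shouldKill(long usedPhysMb, long usedVirtMb, long allocatedMb) {
        boolean overPhysical = usedPhysMb > allocatedMb;
        boolean overVirtual  = usedVirtMb > (long) (allocatedMb * VMEM_PMEM_RATIO);
        return overPhysical || overVirtual;
    }

    public static void main(String[] args) {
        // A 4 GB container using 4.5 GB of physical memory gets killed.
        System.out.println(shouldKill(4608, 6000, 4096)); // true
        // The same container staying under both limits is left alone.
        System.out.println(shouldKill(3500, 7000, 4096)); // false
    }
}
```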
Is the NM's ContainersMonitor using CGroups for monitoring CPU and memory?
As per the official documentation: Using CGroups with YARN
CGroups is a mechanism for aggregating/partitioning sets of tasks, and all their future children, into hierarchical groups with specialized behaviour. CGroups is a Linux kernel feature and was merged into kernel version 2.6.24. From a YARN perspective, this allows containers to be limited in their resource usage. A good example of this is CPU usage. Without CGroups, it becomes hard to limit container CPU usage. Currently, CGroups is only used for limiting CPU usage.
For memory, it's coming in Hadoop 3. Refer to the JIRA here.
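For reference, CGroups-based CPU isolation is switched on through a handful of yarn-site.xml properties. A minimal sketch of those entries follows, written as a plain Java map so the property names are easy to copy; the values are illustrative, so check the "Using CGroups with YARN" page for your distribution before applying them:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the yarn-site.xml properties that enable CGroups-based CPU
// isolation for YARN containers. Values shown are illustrative only.
public class CgroupsYarnSiteSketch {
    public static void main(String[] args) {
        Map<String, String> yarnSite = new LinkedHashMap<>();
        // Containers must be launched by the LinuxContainerExecutor for CGroups to apply.
        yarnSite.put("yarn.nodemanager.container-executor.class",
                "org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor");
        // Resource handler that places each container into its own CGroup.
        yarnSite.put("yarn.nodemanager.linux-container-executor.resources-handler.class",
                "org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler");
        // Where the NodeManager creates/expects the YARN CGroup hierarchy.
        yarnSite.put("yarn.nodemanager.linux-container-executor.cgroups.hierarchy",
                "/hadoop-yarn");
        // Whether the NodeManager should mount the CGroup hierarchy itself.
        yarnSite.put("yarn.nodemanager.linux-container-executor.cgroups.mount", "false");

        yarnSite.forEach((k, v) -> System.out.println(k + " = " + v));
    }
}
```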
How is it ensured that the memory is used only by this application?
For the memory allocated to the JVM process, the JVM itself throws an out-of-memory error when the heap is exhausted, and for the total process memory the NM's ContainersMonitor does the monitoring and killing.
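As a rough illustration of how the heap fits inside the container allocation (the 0.8 ratio here is only an assumed example, similar in spirit to heap-to-container ratio settings, not a universal default):

```java
// Sketch: how an -Xmx heap setting is typically derived from the container
// size so that the JVM's heap OOM fires before the NodeManager's kill does.
public class HeapVsContainer {
    public static void main(String[] args) {
        int containerMb = 4096;   // what YARN allocated to the container
        double heapRatio = 0.8;   // assumed heap fraction of the container

        int heapMb = (int) (containerMb * heapRatio);
        String jvmOpts = "-Xmx" + heapMb + "m";

        // Heap overflow -> java.lang.OutOfMemoryError inside the JVM.
        // Total process memory (heap + metaspace + native) above 4096 MB
        // -> the NM's ContainersMonitor kills the container.
        System.out.println("Container: " + containerMb + " MB, jvm-opts: " + jvmOpts);
    }
}
```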
can’t be used by another application?
The admin ensures it. Ha ha ha, NO ONE is allowed to log in to the worker nodes apart from a few admins in our case.
Now coming to the planning: suppose you have 64 GB of RAM on each worker/datanode machine. No one is allowed to log in and run custom code, so only the required services (Linux and YARN/Hadoop services) are running, which take at most 10 GB. So you decide to give YARN 48 GB of what's left.
Now, while launching containers, YARN will tell the NM to allocate at most 4 GB per container (out of which a percentage is allotted as the JVM's actual heap, per the settings), which ensures at least 12 happy containers.
And then, if all jobs request 1 GB per container, YARN will be able to fit 48 containers on that node. (Thanks @Samson Scharfrichter)
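To make the arithmetic explicit, here is a small back-of-the-envelope sketch; the numbers mirror the example above, and the helper method is made up purely for illustration:

```java
// Back-of-the-envelope container planning for the example node above.
public class ContainerPlanning {

    /** How many containers of a given size fit into the memory YARN manages. */
    static int containersPerNode(int yarnMemoryGb, int containerSizeGb) {
        return yarnMemoryGb / containerSizeGb; // integer division = whole containers
    }

    public static void main(String[] args) {
        int nodeRamGb = 64;       // total RAM on the worker node
        int osAndServicesGb = 10; // Linux + Hadoop daemons, as estimated above
        int yarnMemoryGb = 48;    // what the admin hands over to YARN

        System.out.println("Headroom left unassigned: "
                + (nodeRamGb - osAndServicesGb - yarnMemoryGb) + " GB");
        System.out.println("4 GB containers per node: " + containersPerNode(yarnMemoryGb, 4)); // 12
        System.out.println("1 GB containers per node: " + containersPerNode(yarnMemoryGb, 1)); // 48
    }
}
```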