Firstly, yes, the max sizes map_size_per_jvm, cluster_wide_map_size and partitions_wide_map_size are counted in entries, not in terms of storage space.
Secondly, these max sizes are hard limits; although related, they are distinct from the eviction policies (LRU, LFU or NONE), which only decide which entries get removed once a limit is reached.
Here's how they work:
cluster_wide_map_size - the maximum total number of map entries across all Hazelcast nodes.
map_size_per_jvm - the maximum number of map entries per Hazelcast node (JVM).
So if you are running 2 nodes using this policy with max size = 10 (and backupCount = 0, see below), you can hold at most 20 map entries across all nodes; adding another Hazelcast node raises the cluster-wide ceiling by another 10. A config sketch for this scenario follows below.
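For illustration, here is a minimal programmatic config sketch for that two-node scenario. It assumes the Hazelcast 2.x-era API, where the max-size policy and eviction policy are set as strings; double-check the setter names against your Hazelcast version:

```java
import com.hazelcast.config.Config;
import com.hazelcast.config.MapConfig;
import com.hazelcast.config.MaxSizeConfig;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class MaxSizeExample {
    public static void main(String[] args) {
        Config config = new Config();

        MapConfig mapConfig = config.getMapConfig("default");
        mapConfig.setBackupCount(0);        // no backups, so the limit counts primaries only
        mapConfig.setEvictionPolicy("LRU"); // eviction kicks in once the max size is hit

        // map_size_per_jvm: at most 10 entries on each node,
        // so two nodes give at most 20 entries cluster-wide.
        MaxSizeConfig maxSizeConfig = new MaxSizeConfig();
        maxSizeConfig.setMaxSizePolicy("map_size_per_jvm");
        maxSizeConfig.setSize(10);
        mapConfig.setMaxSizeConfig(maxSizeConfig);

        HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);
        hz.getMap("default").put("key", "value");
    }
}
```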
partitions_wide_map_size - this one is a little unpredictable, since it depends on how the partitions are distributed across your nodes.
A node reaches its maximum when it holds its proportion of the max size, namely (owned partitions / total partitions) × max size. Code: MaxSizePartitionsWidePolicy. A worked example follows below.
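To make the proportion concrete, here is a back-of-the-envelope sketch. The helper below is purely illustrative, not part of the Hazelcast API; the 271 is Hazelcast's default partition count:

```java
public class PartitionsWideCap {
    // Illustrative only -- not a Hazelcast API. Approximates the per-node cap
    // under partitions_wide_map_size: each node may hold the share of the
    // max size matching the share of partitions it owns.
    static int perNodeCap(int ownedPartitions, int totalPartitions, int maxSize) {
        return (int) (maxSize * ((double) ownedPartitions / totalPartitions));
    }

    public static void main(String[] args) {
        // With the default 271 partitions, max size 1000, and a node owning
        // 136 partitions, that node caps out at roughly 501 entries.
        System.out.println(perNodeCap(136, 271, 1000)); // prints 501
    }
}
```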
Please note that all of these max sizes include backups, so backupCount = 1 effectively cuts the real max map size in half (see the sketch below).
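As a hypothetical illustration of the backup effect (again, not a Hazelcast API, just the arithmetic behind the statement above):

```java
public class BackupEffect {
    // Illustrative only. Every backup copy counts toward the configured
    // max size, so the number of distinct keys you can actually hold
    // shrinks as backupCount grows.
    static int effectivePrimaryEntries(int configuredMaxSize, int backupCount) {
        return configuredMaxSize / (backupCount + 1);
    }

    public static void main(String[] args) {
        System.out.println(effectivePrimaryEntries(20, 0)); // 20 distinct keys
        System.out.println(effectivePrimaryEntries(20, 1)); // 10 distinct keys
    }
}
```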
The other max size settings, used_heap_size and used_heap_percentage, seem clear in their usage; a configuration example is sketched below for completeness.
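Configuring the heap-based policies follows the same pattern as the earlier sketch (same caveat: the string-based setters are an assumption based on the 2.x-era API):

```java
MaxSizeConfig heapLimit = new MaxSizeConfig();
heapLimit.setMaxSizePolicy("used_heap_percentage");
heapLimit.setSize(75); // start evicting once the map uses ~75% of the heap
config.getMapConfig("default").setMaxSizeConfig(heapLimit);
```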
I hope this helps, good luck!