When using KVM, the separation between guest userspace processes is handled by the guest OS and by the hardware. A host CPU with hardware support for virtualization provides (among other things) not just the usual virtual-to-physical address translation, but a two-stage guest-virtual-address -> guest-physical-address -> host-physical-address translation. When the guest OS runs multiple userspace processes, it sets up the CPU's MMU as it would normally, and this controls the guest-VA to guest-PA translation. This keeps one guest OS process from seeing memory that another owns, exactly as it would if the guest OS were running on real hardware.
The second stage of translation (guest-physical to host-physical) is the one that the hypervisor controls; in this case that's QEMU and KVM. This mapping is shared between all the vCPUs, in the same way that every CPU in a real physical machine sees and shares the same physical memory layout.
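To make the guest-physical to host-physical side concrete, here is a minimal sketch of how a userspace hypervisor registers guest RAM with KVM using the KVM_SET_USER_MEMORY_REGION ioctl. This is not what QEMU itself does (its memory handling is far more elaborate), just an illustration of the second-stage mapping being established; error handling is omitted.

```c
/* Sketch: back guest-physical addresses 0..2MiB with ordinary host memory.
 * The hardware second-stage page tables (EPT/NPT) that implement this
 * mapping are managed by the host kernel, not by userspace. */
#include <fcntl.h>
#include <linux/kvm.h>
#include <sys/ioctl.h>
#include <sys/mman.h>

int main(void)
{
    int kvm = open("/dev/kvm", O_RDWR);
    int vmfd = ioctl(kvm, KVM_CREATE_VM, 0);

    /* Allocate 2 MiB of ordinary host process memory to act as guest RAM. */
    size_t ram_size = 2 * 1024 * 1024;
    void *ram = mmap(NULL, ram_size, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

    /* Tell KVM that this guest-physical range is backed by that allocation. */
    struct kvm_userspace_memory_region region = {
        .slot = 0,
        .guest_phys_addr = 0,
        .memory_size = ram_size,
        .userspace_addr = (unsigned long)ram,
    };
    ioctl(vmfd, KVM_SET_USER_MEMORY_REGION, &region);

    return 0;
}
```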
Note also that although each vCPU is a "thread", the behaviour and environment that thread sees while executing guest code are completely different from what it sees while running in userspace as part of QEMU. As part of QEMU the thread is like any other, but when it executes the KVM_RUN ioctl, control passes into the host kernel, which then uses that thread purely as a vehicle for scheduling and controlling the vCPU. While the vCPU is running guest code it sees only the illusion provided by the VM and has no direct access to the QEMU process. Eventually, when control comes back from the guest code, the host kernel causes the KVM_RUN ioctl to return and normal userspace thread behaviour resumes.
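A stripped-down vCPU run loop looks roughly like the sketch below (assuming the kvm and vmfd descriptors from the previous example; a real hypervisor would also have to load code into guest RAM and set the vCPU's initial register state). The point is that while the thread sits inside ioctl(KVM_RUN), the kernel is using it to execute guest code; it only behaves as an ordinary userspace thread again when that ioctl returns with an exit reason to handle.

```c
#include <linux/kvm.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>

void run_vcpu(int kvm, int vmfd)
{
    int vcpufd = ioctl(vmfd, KVM_CREATE_VCPU, 0);

    /* KVM reports exit information to userspace through an mmap'd
     * kvm_run structure associated with the vCPU fd. */
    int run_size = ioctl(kvm, KVM_GET_VCPU_MMAP_SIZE, NULL);
    struct kvm_run *run = mmap(NULL, run_size, PROT_READ | PROT_WRITE,
                               MAP_SHARED, vcpufd, 0);

    for (;;) {
        /* The thread "disappears" into the guest for the duration of this call. */
        ioctl(vcpufd, KVM_RUN, NULL);

        switch (run->exit_reason) {
        case KVM_EXIT_HLT:
            /* Guest executed HLT: stop the loop in this toy example. */
            return;
        case KVM_EXIT_IO:
            /* Guest touched an emulated I/O port; a real hypervisor
             * would dispatch to its device emulation here. */
            break;
        default:
            printf("unhandled exit reason %u\n", run->exit_reason);
            return;
        }
    }
}
```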