One thing to keep in mind when talking about the cache is that it is a redundant data structure, whose only goal is to speed up data fetches.
So, when a piece of data is evicted from the cache, it has no consequence for the program that uses it (other than execution speed), because the data will simply be fetched from main memory instead. In any case, your trie will have exactly the same behavior regardless of which pieces of it happen to be in the cache.
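You can see this for yourself with a minimal sketch (the class name `CacheDemo` is just a placeholder; the exact timings depend entirely on your machine, and JIT warm-up adds noise, but the two sums always come out equal):

```java
import java.util.Random;

// A minimal sketch: visiting the same data in two different orders gives
// the same result, but very different timings, because of CPU caching.
public class CacheDemo {
    public static void main(String[] args) {
        int n = 1 << 24;                // ~64 MB of ints, bigger than any cache
        int[] data = new int[n];
        java.util.Arrays.fill(data, 1);

        // Precompute a random permutation of the indices (Fisher-Yates).
        int[] order = new int[n];
        for (int i = 0; i < n; i++) order[i] = i;
        Random rnd = new Random(42);
        for (int i = n - 1; i > 0; i--) {
            int j = rnd.nextInt(i + 1);
            int tmp = order[i]; order[i] = order[j]; order[j] = tmp;
        }

        long t0 = System.nanoTime();
        long seq = 0;
        for (int i = 0; i < n; i++) seq += data[i];          // cache-friendly
        long t1 = System.nanoTime();
        long rand = 0;
        for (int i = 0; i < n; i++) rand += data[order[i]];  // cache-hostile
        long t2 = System.nanoTime();

        // Both sums are identical; only the elapsed time differs.
        System.out.println("sequential: " + seq + " in " + (t1 - t0) / 1_000_000 + " ms");
        System.out.println("random:     " + rand + " in " + (t2 - t1) / 1_000_000 + " ms");
    }
}
```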
This is very important, because it allows us to code in high-level languages, such as Java, without worrying about the replacement policy the processor's cache implements. If that were not the case, it would be a nightmare: we would have to take into account every existing (and future?) replacement policy implemented in processors. Not to mention that these policies are not as simple as LRU: caches are divided into sets of 'lines', whose behavior is closely tied to their physical structure, and where a piece of data lands in the cache depends on its address in main memory, which will not necessarily be the same from one execution to the next.
In short, the two things you mention (trie nodes in Java, and LRU cache policies) are too far apart: one is very, very low-level, the other is high-level programming. That is why we rarely, if ever, consider their interactions.
If you implement a trie in Java, your job is to make sure it works correctly in all situations, that it is well designed so that maintenance will be easier (or even possible), and that it is readable so other programmers can work on it some day. Then, if it still runs too slowly, you can try to optimize it, after determining where the bottlenecks are, never before.
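As a rough illustration of "measure first", here is a naive timing sketch. For serious measurements you should prefer a profiler or a harness like JMH, since loops like this are distorted by JIT compilation and GC pauses; `trie` and `words` below are hypothetical, standing in for your data structure and your test inputs:

```java
// Naive micro-timing sketch; 'trie.contains' and 'words' are hypothetical,
// standing in for your trie's lookup method and your test data.
long start = System.nanoTime();
for (String w : words) {
    trie.contains(w);
}
long perLookup = (System.nanoTime() - start) / words.length;
System.out.println(perLookup + " ns per lookup");
```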
But if you really want to relate your trie to cache hits/misses and replacement policies, you will have to look at what your implementation is translated into: the bytecode produced by the Java compiler, and ultimately the machine code the JVM's JIT compiler generates from it, since that is what actually touches memory.
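For what it's worth, the JDK ships with a disassembler for the bytecode part, and HotSpot has diagnostic flags for dumping JIT-compiled machine code (the class name `MyTrie` is just a placeholder):

```
# Show the bytecode of a compiled class:
javap -c MyTrie

# HotSpot can print the JIT-generated machine code, but this
# requires the hsdis disassembler plugin to be installed:
java -XX:+UnlockDiagnosticVMOptions -XX:+PrintAssembly MyTrie
```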
PS: in your post, you talk about simulating memory being exceeded. There is no such thing for a program. When the cache is full, we fall back on main memory. When main memory is full, operating systems usually reserve a part of the hard drive to play the role of main memory (we call it swapping, and when it happens, the computer is as good as frozen). When the swap is full, programs crash. All of them.
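In Java specifically, you usually won't even get that far: the JVM enforces its own heap limit (set with `-Xmx`) and throws an `OutOfMemoryError` once it is reached. A minimal sketch (`OomDemo` is just a placeholder name):

```java
import java.util.ArrayList;
import java.util.List;

// A minimal sketch: deliberately exhaust the JVM heap.
// Run with a small limit to see it quickly, e.g.  java -Xmx64m OomDemo
public class OomDemo {
    public static void main(String[] args) {
        List<byte[]> hog = new ArrayList<>();
        try {
            while (true) {
                hog.add(new byte[1024 * 1024]); // grab 1 MB at a time
            }
        } catch (OutOfMemoryError e) {
            int allocated = hog.size();
            hog.clear(); // release the memory so printing can still work
            System.out.println("Heap exhausted after ~" + allocated + " MB");
        }
    }
}
```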
In the 'mind' of a program, the operating system gives it an absolutely gigantic amount of memory (which is virtual, but as far as the program is concerned it is as good as real) that will never fill up. The program itself is not 'conscious' of how memory is managed, or of how much memory is left, for a lot of good reasons (security, the guarantee that all programs get a fair share of the resources, ...).
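You can see a hint of this from inside a Java program: the `Runtime` API reports only the JVM's own heap, which says nothing about how the OS actually maps physical memory (`MemoryView` is just a placeholder name):

```java
// A minimal sketch: the JVM's own (virtualized) view of memory.
// These numbers describe the Java heap, not the machine's physical RAM.
public class MemoryView {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        System.out.println("max heap:  " + rt.maxMemory() / (1024 * 1024) + " MB");
        System.out.println("committed: " + rt.totalMemory() / (1024 * 1024) + " MB");
        System.out.println("free:      " + rt.freeMemory() / (1024 * 1024) + " MB");
    }
}
```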