Actually, V8 is written in C++. However, it does basically the same thing as the JVM, which (at least in its HotSpot implementation) is also written in C++. V8 JIT-compiles JavaScript code and executes the JIT'd code. Likewise, the JVM JIT compiles (or hotspot compiles) bytecode (NOT Java source) and executes that generated code.
Bytecode is not static the way Java is; in fact, it can be quite dynamic. Java, on the other hand, is mostly static, and it is not correct to conflate Java with bytecode. The Java compiler transforms Java source code into bytecode, and the JVM executes the bytecode. For more information, I recommend you look at John Rose's blog (example). There's a lot of good information there. Also, try to look for talks by Cliff Click (like this one).
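To make the source-vs-bytecode distinction concrete, here is a minimal sketch (the class name Add is just illustrative; javac and javap -c are the standard JDK tools):

```java
// Compile with `javac Add.java`, then inspect the result with `javap -c Add`.
public class Add {
    static int add(int a, int b) {
        return a + b;
    }
}
// javap -c Add prints, roughly, this for add():
//   0: iload_0
//   1: iload_1
//   2: iadd
//   3: ireturn
// That stack-machine bytecode, not the Java source above, is what the JVM runs.
```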
Likewise, Clojure code is compiled directly to bytecode, and the JVM then goes through the same process with that bytecode. Compiling Clojure usually happens at runtime, which is not the speediest process; likewise, the translation of ClojureScript into JavaScript is not fast either. V8's translation of JavaScript to executable form is obviously quite fast. Clojure can be ahead-of-time (AOT) compiled to bytecode, though, and that can eliminate a lot of startup overhead.
As you said, it's also not really correct to say that the JVM interprets bytecode; that was true of the 1.0 release, more than 17 years ago!
Traditionally, there were two compilation modes. The first mode is a JIT (Just In Time) compiler, where bytecode is translated directly to machine code. The JIT compiler itself runs fast, but it doesn't generate highly optimized code; the result runs OK.
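You can feel the difference yourself with a rough sketch like the one below (JitDemo is an illustrative name, but -Xint, which forces interpreter-only execution, is a standard HotSpot flag):

```java
// Run once with the default mode and once with -Xint (interpreter only)
// and compare the timings; the compiled version should be dramatically faster.
public class JitDemo {
    static long sum(int n) {
        long total = 0;
        for (int i = 0; i < n; i++) {
            total += i;
        }
        return total;
    }

    public static void main(String[] args) {
        long start = System.nanoTime();
        long result = sum(200_000_000);
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("sum = " + result + " in " + elapsedMs + " ms");
    }
}
// java JitDemo        -> compilation enabled (default)
// java -Xint JitDemo  -> bytecode interpreted only, much slower
```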
The second mode is called the hotspot compiler. The hotspot compiler is very sophisticated. It starts the program very quickly in interpreted mode and analyzes it as the program runs. As it detects hotspots (spots in the code that execute frequently), it compiles those. Whereas the JIT compiler has to be fast because nothing executes until it's been JIT'ed, the hotspot compiler can afford to spend extra time to optimize the snot out of the code that it's compiling.
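You can watch this happen with a standard diagnostic flag; the class below is just a sketch with made-up names:

```java
// Run with `java -XX:+PrintCompilation HotDemo` (a standard HotSpot flag) and
// hotLoop() should show up in the compilation log once it has run often enough
// to be considered hot.
public class HotDemo {
    static int hotLoop(int n) {
        int acc = 0;
        for (int i = 0; i < n; i++) {
            acc += i % 7;
        }
        return acc;
    }

    public static void main(String[] args) {
        long total = 0;
        for (int i = 0; i < 50_000; i++) {  // enough calls to cross the hotness threshold
            total += hotLoop(1_000);
        }
        System.out.println(total);
    }
}
```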
Additionally, it can go back and revisit that code later on and apply yet more optimizations to it if necessary and possible. This is the point where the hotspot compiler can start to beat compiled C/C++: because it has runtime knowledge of the code, it can afford to apply optimizations that a static C/C++ compiler cannot. For example, it can inline virtual functions.
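Here's a hedged sketch of the virtual-call case (the Shape/Square names are made up for illustration): when only one implementation has ever been loaded, the call site is monomorphic, and HotSpot can devirtualize and inline it, which a static compiler generally cannot prove is safe.

```java
// area() is a virtual (interface) call, but at runtime only Square exists, so
// the JIT can inline it straight into the loop. A static compiler must assume
// any implementation could show up and usually leaves the indirect call in place.
interface Shape {
    double area();
}

class Square implements Shape {
    final double side;
    Square(double side) { this.side = side; }
    public double area() { return side * side; }
}

public class InlineDemo {
    static double totalArea(Shape s, int n) {
        double sum = 0;
        for (int i = 0; i < n; i++) {
            sum += s.area();   // monomorphic call site: a prime inlining candidate
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(totalArea(new Square(2.0), 1_000_000));
    }
}
```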
Hotspot has one other feature which, to the best of my knowledge, no other environment has: it can also deoptimize code if necessary. For example, suppose the code had been continually taking a single branch and was optimized accordingly; then runtime conditions change, forcing the code down the other (unoptimized) branch, and performance suddenly becomes terrible. Hotspot can deoptimize that function and begin the analysis again to figure out how to make it run better.
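A rough sketch of the kind of pattern that can trigger this (the class and method names are illustrative):

```java
// During warm-up the rare branch is never taken, so the compiled code can be
// specialized on that assumption. When the workload changes and the branch
// starts firing, HotSpot can deoptimize process(), fall back to the
// interpreter, and recompile with the new profile.
public class DeoptDemo {
    static long process(int value, boolean rare) {
        if (rare) {
            return (long) value * value;   // cold path during warm-up
        }
        return value * 2L;                 // hot path during warm-up
    }

    public static void main(String[] args) {
        long total = 0;
        for (int i = 0; i < 2_000_000; i++) {
            total += process(i, false);    // profile says: rare branch never taken
        }
        for (int i = 0; i < 2_000_000; i++) {
            total += process(i, true);     // conditions change; may trigger deopt + recompile
        }
        System.out.println(total);
    }
}
// Running with -XX:+PrintCompilation shows the recompilation activity as the
// behavior shifts.
```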
A downside of hotspot is that it starts a bit slow. One change in the Java 7 JVM has been to combine the JIT compiler and the hotspot compiler (tiered compilation). This mode is new, though, and not yet the default; once it is, initial startup should be good, and then the JVM can begin the advanced optimizations that it is so good at.
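For what it's worth, you can already opt into the combined mode explicitly with the standard HotSpot flag -XX:+TieredCompilation (for example, `java -XX:+TieredCompilation JitDemo` with the sketch above): methods are first compiled quickly with light optimization for fast startup, and the hot ones are later recompiled with the full set of hotspot optimizations as profiling data accumulates.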
Cheers!