Since I find myself talking about such things a good amount recently, I want to take a moment and define some deoptimization terminology as I use it. There doesn’t seem to be a common lexicon for this stuff; every compiler community has developed its own vocabulary.
deoptimization – The event or process by which speculatively optimized code is corrected when a speculation it relies on turns out to be wrong.
* Yes, there do exist bad programs which use range check violations to perform normal control flow. They are evil, their developers should be ashamed, and you shouldn’t do that!
To my knowledge, there are two broad categories of deoptimization mechanisms. One of those can arguably be subdivided further, but the exact distinction is debatable.
The simplest mechanism is the side exit – also known as an uncommon trap, OSR exit, etc… For this, you just emit a guard which checks the condition in question and branches to a bit of code which knows how to materialize the abstract state of the virtual machine and transition into running another piece of code that knows how to deal with the unusual condition. In most cases, this ‘other piece of code’ is your interpreter. There’s a lot of detail here, but at least in concept it’s pretty straightforward.
As an extension of the previous, we can also play tricks to represent the branch of control without actually using a branch instruction. The most famous example here is the implicit null check, but nothing says this technique can’t be applied to other varieties of checks. The general approach is to construct the speculatively optimized code in such a way as to generate a signal or other hardware fault if the condition turns out not to be true. The virtual machine can then catch the signal, look up the faulting address in a side table, and branch to the handler. Other than the indirection through the signal handler, and a lot of complexity in implementation, this is pretty much just another form of side exit.
Alternatively, we can generate code without any form of checks, but have a way to “shoot down” the generated code if our assumption turns out to be wrong. In Java, we like to do what is called Class Hierarchy Analysis – essentially, this is whole-program optimization based on the current inheritance structure, subject to the assumption that the program isn’t going to change. Since Java allows class loading, this is of course not an entirely safe assumption.
To make such a mechanism work, you need two things: a) the ability to stall an event such as class loading until the system can respond by invalidating code, and b) a mechanism to “quickly” invalidate any code which was compiled under now-broken assumptions. As long as the class loading event isn’t observable (i.e. doesn’t complete) until the problematic code is invalidated, we’re good to go.
I refer to such a process as an invalidation deoptimization. I’m going to skip the implementation details for now to keep this post relatively short; I might return to them another time. As you can probably guess, there’s a lot of complexity in invalidating running code in a timely manner.