Inverted schedctl usage in the HotSpot JVM

(This content originally appeared in blogs.sun.com/dave and then blogs.oracle.com/dave after Oracle acquired Sun Microsystems. I’ve re-posted it here in the hope that github URLs are more stable over the long term).

The schedctl facility in Solaris allows a thread to request that the kernel defer involuntary preemption for a brief period. The mechanism is strictly advisory – the kernel can opt to ignore the request. Schedctl is typically used to bracket lock critical sections. That, in turn, can avoid convoying – threads piling up on a critical section behind a preempted lock-holder – and other lock-related performance pathologies. If you’re interested, see the man pages for schedctl_start() and schedctl_stop() and the schedctl.h include file.

The implementation is very efficient. schedctl_start(), which asks that preemption be deferred, simply stores into a thread-specific structure – the schedctl block – that the kernel maps into user-space. Similarly, schedctl_stop() clears the flag set by schedctl_start() and then checks a “preemption pending” flag in the block. Normally this will be false, but if it’s set, schedctl_stop() will yield to politely grant the CPU to other threads. Note that you can’t abuse this facility for long-term preemption avoidance, as the deferral is brief. If your thread exceeds the grace period the kernel will preempt it and transiently degrade its effective scheduling priority. For further reading, see the various papers by Andy Tucker.
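The classic (non-inverted) protocol can be sketched as follows. This is a minimal illustration only: the mock struct and stub yield below stand in for the real kernel-mapped schedctl block (obtained via schedctl_init() on Solaris), and the field names here are my own labels for the “defer preemption” and “preemption pending” flags, not the actual schedctl.h layout:

```c
#include <assert.h>

/* Mock of the kernel-mapped schedctl block. On Solaris the real block
 * is shared with the kernel; here we just model the two flags. */
typedef struct {
    volatile short sc_nopreempt;  /* set by thread: please defer preemption */
    volatile short sc_yield;      /* set by kernel: a preemption is pending */
} mock_schedctl_t;

static int yields;                          /* stub: counts polite yields */
static void mock_yield(void) { yields++; }

/* Analogue of schedctl_start(): just a store, no system call. */
static void sc_start(mock_schedctl_t *sc) { sc->sc_nopreempt = 1; }

/* Analogue of schedctl_stop(): clear the deferral flag, then yield if the
 * kernel wanted to preempt us while we held preemption off. */
static void sc_stop(mock_schedctl_t *sc) {
    sc->sc_nopreempt = 0;
    if (sc->sc_yield) {
        sc->sc_yield = 0;
        mock_yield();
    }
}
```

In real code the start/stop pair brackets a short lock critical section: acquire, sc_start, do the work, sc_stop, release. The cheapness of the mechanism – plain stores and loads on a shared page – is what makes it viable on every lock acquisition.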

We’ll now switch topics to the implementation of the “synchronized” locking construct in the HotSpot JVM. (I should note that everything I’m describing resides only in my local workspaces and isn’t yet integrated into HotSpot). If a lock is contended then on multiprocessor systems we’ll spin briefly to try to avoid context switching. Context switching is wasted work and inflicts various cache and TLB penalties on the threads involved. If context switching were “free” then we’d never spin to avoid switching, but that’s not the case. We use an adaptive spin-then-park strategy. One potentially undesirable outcome is that we can be preempted while spinning. When our spinning thread is finally rescheduled the lock may or may not be available. If not, we’ll spin and then potentially park (block) again, thus suffering a second context switch. Recall that the reason we spin is to avoid context switching. To avoid this scenario I’ve found it useful to use schedctl to request preemption deferral while spinning. While spinning, I’ve arranged for the code to periodically poll the “preemption pending” flag; if that’s found set, we simply abandon the spin attempt and park immediately. This avoids the double context-switch scenario above. This particular usage of schedctl is inverted in the sense that we cover the spin loop instead of the critical section. (I’ve experimented with extending the schedctl preemption deferral period over the critical section – more about that in a subsequent blog entry).

One annoyance is that the schedctl blocks for the threads in a given process are tightly packed on special pages mapped from kernel space into user-land. As such, writes to the schedctl blocks can cause false sharing on other adjacent blocks. Hopefully the kernel folks will make changes to avoid this by padding and aligning the blocks to ensure that one cache line underlies at most one schedctl block at any one time. It’s vaguely ironic that a facility designed to improve cooperation between threads suffers from false sharing.
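A fix along those lines would pad and align each block so it owns a full cache line. A sketch of the idea, assuming a 64-byte line size (the real layout is of course the kernel’s to choose, and these field names are my own):

```c
#include <assert.h>
#include <stdalign.h>

#define CACHE_LINE 64   /* assumed line size; varies by processor */

/* Hypothetical padded layout: aligning the first member to the line
 * size forces both the alignment and the size of the struct up to a
 * full cache line, so a store to one thread's flags can never
 * invalidate a line shared with a neighboring thread's block. */
typedef struct {
    alignas(CACHE_LINE) volatile short sc_nopreempt;
    volatile short sc_yield;
} padded_schedctl_t;
```

With this layout an array of blocks – which is effectively what the kernel packs onto those special pages – places each block on its own line.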

Schedctl also exposes a thread’s scheduling state. So if thread T2 holds a lock L, and T1 is contending for L, T1 can check T2’s state to see whether it’s running (ONPROC in Solaris terminology), ready, or blocked. If T2 is not running then it’s usually prudent for T1 to park instead of continuing to spin, as the spin attempt is much more likely to be futile.
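That policy reduces to a simple predicate. In the sketch below the state values and the lookup are mocked – in real code T1 would read T2’s scheduling state out of T2’s schedctl block, and ONPROC is the Solaris term for “running on a processor”:

```c
#include <assert.h>

/* Mock scheduling states; ONPROC mirrors Solaris's "on a processor". */
typedef enum { STATE_ONPROC, STATE_READY, STATE_BLOCKED } sched_state_t;

/* Stub: in real code this would read the lock owner's schedctl block. */
static sched_state_t owner_state;
static sched_state_t get_owner_state(void) { return owner_state; }

/* Spin only while the lock holder is actually running. A holder that
 * is ready or blocked cannot make progress toward releasing the lock,
 * so continued spinning is very likely futile and we should park. */
static int should_spin(void) {
    return get_owner_state() == STATE_ONPROC;
}
```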

