The epoch(9) subsystem is supposed to guarantee that deferred functions scheduled with epoch_call(9) will not run until all threads currently in a corresponding epoch section have exited that section. Likewise, epoch_wait(9) is not supposed to return until all threads currently in a corresponding epoch section have exited that section. To enforce these guarantees, the epoch subsystem tracks both an increasing global epoch value and also a per-CPU epoch value which indicates the global epoch in which the "oldest" active thread first entered the epoch section. (Here, "oldest" simply indicates how long the thread has been in the epoch section.)
Due to a logic error in ck_epoch_poll_deferred(), these guarantees can be violated. When ck_epoch_scan() indicates that a thread is still in a previous epoch section, ck_epoch_poll_deferred() resets the epoch value for the current CPU to the global epoch value. This causes it to appear that the "oldest" active thread first entered the epoch section more recently than it did. That allows epoch_call() functions to run sooner than they should, and allows epoch_wait() to return before all threads currently in a corresponding epoch section have exited that section.
The same function also has an inverse error. Once it has determined that an epoch can be advanced, it should run the epoch calls for the immediately preceding epoch; instead, it runs the calls for the succeeding epoch. Because epoch calls are hashed into a 4-bucket table by epoch, this delays epoch calls by approximately 2 ticks beyond when they should have run.
Correct these errors by not advancing the per-CPU epoch value in ck_epoch_poll_deferred(), and by running the epoch calls from the correct bucket.