Ensure memory consistency on COW
ClosedPublic

Authored by kib on Tue, Feb 13, 1:20 PM.

Details

Summary

From the submitter description:
The process is forked transitioning a map entry to COW
Thread A writes to a page on the map entry, faults, updates the pmap to writable at a new phys addr, and starts TLB invalidations...
Thread B acquires a lock, writes to a location on the new phys addr, and releases the lock
Thread C acquires the lock, reads from the location on the old phys addr...
Thread A ...continues the TLB invalidations which are completed
Thread C ...reads from the location on the new phys addr, and releases the lock

In this example, Threads B and C lock, use, and unlock properly, and neither owns the lock at the same time as the other. Thread A was writing somewhere else on the page, so it never had, or needed, the lock. Thread C sees a location, which is only ever read or modified under the lock, change beneath it while it is the lock owner.

To fix this, perform a two-stage update of the copied PTE. First, the PTE is updated with the address of the new physical page carrying the copied content, but in read-only mode. The pmap locking and the page busy state, held across the PTE update and the TLB invalidation IPIs, ensure that a writer to the page cannot upgrade the PTE to the writable state until all CPUs have updated their TLBs to cache the new mapping. Then, after the busy state of the page is lifted, write faults can proceed without violating the consistency of the reads.

The change is done in vm_fault because most architectures need IPIs to invalidate remote TLBs. Moreover, I think that hardware guarantees of atomicity for remote TLB invalidation are not enough to prevent inconsistent non-atomic reads, like multi-word accesses protected by a lock. So instead of modifying the invalidation code of each pmap, I did it there.
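The two-stage update can be sketched roughly as follows. This is illustrative pseudocode only; the helper names (vm_fault_cow, vm_page_copy, vaddr) are placeholders, not the exact FreeBSD functions or the committed diff.

```c
/*
 * Illustrative pseudocode -- not the committed vm_fault changes.
 */
static void
vm_fault_cow(vm_map_entry_t entry, vm_page_t old_page, vm_prot_t prot)
{
	vm_page_t new_page;

	/* Copy the content to a new physical page; keep it busy. */
	new_page = vm_page_copy(old_page);

	/*
	 * Stage 1: install the new physical address READ-ONLY.
	 * pmap_enter() performs the TLB shootdown IPIs; because the page
	 * is still busy, a concurrent writer faulting on it cannot
	 * upgrade the PTE to writable until every CPU has dropped its
	 * stale translation.
	 */
	pmap_enter(entry_pmap(entry), vaddr, new_page,
	    prot & ~VM_PROT_WRITE, 0, 0);

	vm_page_xunbusy(new_page);

	/*
	 * Stage 2: the original write fault is retried and, now that all
	 * TLBs agree on the physical address, maps the page writable.
	 */
}
```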

Discovered and analyzed by: Elliott.Rabe@dell.com

While checking the change, it appeared to me that the code fragment which downgrades the PTE permissions after a map relookup (performed due to mapping changes) needs to ensure that some access permission bit is still requested. Otherwise, pmaps cannot handle the pmap_enter() call. This change is formally unrelated.
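In rough terms, the unrelated fix amounts to a guard like the following (illustrative pseudocode; the variable names and the restart label are placeholders, not the actual diff):

```c
/*
 * Illustrative sketch: after a map relookup the granted protections may
 * have shrunk.  Make sure some access bit remains before calling
 * pmap_enter(), which cannot handle an empty access request.
 */
fault_type &= retry_prot;
if (fault_type == 0)
	goto RetryFault;	/* no permitted access left; restart */
```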

Test Plan

Apparently, there are reports that the issue is reliably and easily reproducible on Ryzens; see PR 225584.

Diff Detail

Repository
rS FreeBSD src repository
Lint
Automatic diff as part of commit; lint not applicable.
Unit
Automatic diff as part of commit; unit tests not applicable.
kib created this revision.Tue, Feb 13, 1:20 PM

I tested net/samba47 builds as suggested in PR 225584. Previous tests without this D14347 patch (though some had various diagnostic or workaround patches) hung somewhere between the first and roughly the twentieth iteration. With this patch applied, my Threadripper completed 132 cycles of the net/samba47 build last night.

cy added a subscriber: cy.Tue, Feb 13, 5:06 PM
markj accepted this revision.Tue, Feb 13, 10:14 PM
This revision is now accepted and ready to land.Tue, Feb 13, 10:14 PM

As noted in my email comment, this patch appears to have resolved a number of randomish-looking ports build failures on my Ryzen machine, in particular lang/go and anything related to lang/guile.

This revision was automatically updated to reflect the committed changes.
kib reopened this revision.Wed, Feb 14, 12:27 AM
This revision was not accepted when it landed; it landed in state Needs Review.Wed, Feb 14, 12:32 AM
Closed by commit rS329254: Ensure memory consistency on COW. (authored by kib, committed by ). · Explain Why
This revision was automatically updated to reflect the committed changes.
smh added a subscriber: smh.Thu, Feb 15, 5:20 PM

As noted in my email comment, this patch appears to have resolved a number of randomish-looking ports build failures on my Ryzen machine, in particular lang/go and anything related to lang/guile.

Interesting that you mention a lang/go failure; do you have any more information on this specific failure, such as a stack trace from the failing go binary?
I'm wondering if this fix is related to the golang issue being tracked here: https://github.com/golang/go/issues/15658

In D14347#301456, @smh wrote:

I'm wondering if this fix is related to the golang issue being tracked here: https://github.com/golang/go/issues/15658

I don't know much about that issue, but it's quite possible. The bug fixed here could cause apparent memory corruption in any multi-threaded process that forks.

I don't remember seeing any finalizer related crashes. The failures were mostly malloc-related and looked like they could be caused by arena corruption. I can try to dig up the logs later. I don't recall seeing any build failures on my FX CPU, but lang/go would almost always fail to build on Ryzen.

smh added a comment.Thu, Feb 15, 11:44 PM

I don't remember seeing any finalizer related crashes. The failures were mostly malloc-related and looked like they could be caused by arena corruption. I can try to dig up the logs later. I don't recall seeing any build failures on my FX CPU, but lang/go would almost always fail to build on Ryzen.

If you could dig up the logs that would be great.

My best repro case for the linked issue takes many hours, if not days, to trigger, so an easier way to trigger it would be great.