
Store the bottom of the shadow chain in OBJ_ANON object->handle member.
ClosedPublic

Authored by kib on Nov 24 2019, 10:00 PM.
Details

Reviewers
alc
markj
jeff
kib
Summary

The handle value is stable for all shadow objects in the inheritance chain. This makes it possible to avoid descending the shadow chain to get to its bottom in vm_map_entry_set_vnode_text() [I have not examined other places yet].

Change vm_object_allocate_anon() and vm_object_shadow() to handle more of the cred/charge initialization for the new shadow object, in addition to setting up the handle.

Diff Detail

Repository
rS FreeBSD src repository - subversion
Lint
Lint Skipped
Unit
Tests Skipped
Build Status
Buildable 27751

Event Timeline

sys/vm/vm_map.c
526

Indentation is wrong here.

555

Do we need to do this when add is true?

sys/vm/vm_map.c
552

Isn't TMPFS set on creation and not cleared? Do we need the rlock for the swp_tmpfs field? Can the tmpfs vnode go away while it is mapped?

sys/vm/vm_object.c
464–465

Do these need to be coherent with any user of the global object list? If so they should be set before object type to avoid acquiring a lock.

1294

Can we either fix the existing style or switch to modern style?

kib marked 5 inline comments as done.Nov 25 2019, 3:39 PM
kib added inline comments.
sys/vm/vm_map.c
552

OBJ_TMPFS_NODE is stable during the object lifetime, but OBJ_TMPFS is not. _NODE means that the object is the v_object for a tmpfs node, while OBJ_TMPFS indicates that the vnode was instantiated.

After a bugfix, I started adding a use reference to the tmpfs vnode that is text-mapped. Otherwise, just having a mapping of a tmpfs vnode is not enough to prevent recycling, since we do not ref the vnode.

It might be possible to remove VM_OBJECT_RLOCK() there, but that would require some thought about visibility ordering.

555

Yes, we do, and we should unhold properly.

sys/vm/vm_object.c
464–465

No, they are never accessed by any of the global list iterations we do over the whole set of objects. It is fine to set them unlocked before the object is made visible to other threads in the regular ways.

kib marked 3 inline comments as done.

Style.
Unhold vnode for ADD.

This revision is now accepted and ready to land.Nov 26 2019, 1:55 AM

FWIW, I reviewed backing_object users and I think only vm_object_sync() wants these semantics, but it would also be trivial to convert.

The rest need to look for pages in the chain. It would be nice to rewrite some of these consumers so that they do not iterate down the backing object for every page, but instead do so only for the pages that are missing. You could do this with something like a bitset or an array of bools plus a starting offset. vm_fault_prefault() in particular shows up in profiles. For each bit that is not yet set, scan the current object for pages and set bits when found. If there are still unset bits, traverse down the chain.

Right now we will lock and unlock 8 times * the backing object chain depth there.

sys/vm/vm_map.c
3970–3975

Isn't the newly created shadow object going to have OBJ_ONEMAPPING set? With the introduction of an "else" clause below, I don't see how that OBJ_ONEMAPPING flag is being cleared before this new shadow object is shared with the child process.

kib marked an inline comment as done.

Fix OBJ_ONEMAPPING for shared mapping after shadowing.

This revision now requires review to proceed.Nov 26 2019, 9:16 AM
alc added inline comments.
sys/vm/vm_fault.c
1745

As an aside, isn't this unsafe without locking because of a possible interaction with the object list scan?

1879–1880

As another aside, this pmap_enter() was the root cause of the last reported "invalid ASID" assertion failure on arm64 before I (for the time being) disabled deferred ASID allocation. The reason is that this pmap_enter() presets the accessed bit on the newly created mapping in the child's page table before the child has ever run. And, on a memory-starved machine, the page daemon might try to clear this accessed bit and invalidate the non-existent TLB entry before the child has ever run and been assigned an ASID.

sys/vm/vm_map.c
529

This would read better as "Mostly, we do not ..."

530

I would drop the word "implicitly".

This revision is now accepted and ready to land.Nov 30 2019, 7:43 PM
kib marked 2 inline comments as done.Nov 30 2019, 9:45 PM
kib added inline comments.
sys/vm/vm_fault.c
1745

Do you mean the removed scan in vm_meter.c, or something else?

1879–1880

So do you want to 'remotely activate' (so to speak) the target pmap there, at least allocating an ASID during vmspace_fork()?

Update the language in comments.
Fix bugs found by Peter, mostly unneeded checks of object->handle for OBJ_ANON.

This revision now requires review to proceed.Nov 30 2019, 9:46 PM
sys/vm/vm_fault.c
1745

Yes. I completely forgot that this was removed, so nevermind.

1879–1880

No, I think that the "more correct" option would be to not set the accessed bit. However, adding another flag and conditional to pmap_enter() just to say "don't preset the accessed bit" for this extremely rare case pains me.

I completed a full stress2 test of r355146+1d4f3fce228-c245581(cpuctl). No problems seen.

This revision is now accepted and ready to land.Dec 1 2019, 8:43 PM