
vmm: Consolidate code which manages guest memory regions
Accepted · Public

Authored by markj on Dec 31 2024, 3:37 PM.

Details

Reviewers
andrew
manu
jhb
Group Reviewers
bhyve
Summary

On all three platforms supported by vmm, we have mostly duplicated code
to manage guest physical memory regions. Deduplicate much of this code
and move it into sys/dev/vmm/vmm_mem.c.

To avoid exporting struct vm outside of machdep vmm.c, add a new
struct vm_mem to contain the memory segment descriptors, and add a
vm_mem() accessor, akin to vm_vmspace(). This way vmm_mem.c can
implement its routines without needing to see the layout of struct vm.

The handling of the per-VM vmspace is also duplicated but will be moved
to vmm_mem.c in a follow-up patch.

On amd64, move the ppt_is_mmio() check out of vm_mem_allocated() to keep
the code MI, as PPT is only implemented on amd64. There are only a
couple of callers, so this is not unreasonable.

No functional change intended.

Diff Detail

Repository
rG FreeBSD src repository
Lint
Lint Skipped
Unit
Tests Skipped
Build Status
Buildable 62337
Build 59221: arc lint + arc unit

Event Timeline

Where did vm_gla2gpa_nofault go?

Note that there is nothing really about the ppt(4) driver that is inherently x86-specific. I believe kib@ has patches to make it switch to the iommu_* API and once that is in place it should be architecture-neutral.

Restore vm_gla2gpa_nofault() on arm64 and riscv. Fix an apparent bug in the
former where the return value was not being checked.

sys/arm64/vmm/vmm.c
550

Why is this locked here?

markj added inline comments.
sys/arm64/vmm/vmm.c
550

I was trying to be consistent with amd64. I'm not sure that it's really necessary to lock memsegs here, but I see no reason to be inconsistent. Note also that in the !destroy case, this lock is already held. I'll rework things to be a bit clearer.

markj marked an inline comment as done.

Clarify locking in vm_cleanup(), add assertions for the !destroy case.

Not sure how I missed the xlock in the amd64 code. I do find this change hard to review in this UI, though, since I have to jump around between the three arches.

sys/amd64/vmm/vmm.c
659

Given how much of vm->mem is hidden, perhaps this warrants a vm_assert_memsegs_xlocked wrapper?

markj marked an inline comment as done.

Add wrappers for sx_assert() on the memseg lock.

This revision is now accepted and ready to land. (Tue, Feb 11, 2:00 PM)