
bhyve: avoid allocating BARs above the end of supported physical addresses

Authored by kib on Nov 5 2020, 12:49 AM.



Read CPUID leaf 0x80000008 to determine the max supported physical address and create the 64-bit BAR region right below it, sized according to the total 64-bit BAR limit.

Also temporarily bump the qualification size for BAR allocation in the lower (< 4G) I/O window to 128M (it will become 256M in a separate final commit).

PR: 250802
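A minimal sketch of the CPUID query described in the summary, assuming a GCC/clang toolchain on x86 (the `cpu_maxphysaddr` helper name and the fallback value are illustrative, not the committed bhyve code). CPUID leaf 0x80000008 reports the number of physical address bits in EAX bits 7:0:

```c
#include <cpuid.h>
#include <stdint.h>

/*
 * Hypothetical helper: query CPUID leaf 0x80000008 (Extended Address
 * Sizes).  Bits 7:0 of EAX give the number of physical address bits;
 * fall back to a conservative 36 bits if the leaf is unavailable.
 */
static uint64_t
cpu_maxphysaddr(void)
{
	unsigned int eax, ebx, ecx, edx;
	unsigned int physbits = 36;	/* conservative fallback */

	if (__get_cpuid(0x80000008, &eax, &ebx, &ecx, &edx))
		physbits = eax & 0xff;
	return (1ULL << physbits);
}
```

The returned value is one past the last addressable byte, so the 64-bit BAR window can be carved out immediately below it.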

Diff Detail

rS FreeBSD src repository - subversion

Event Timeline

kib requested review of this revision. Nov 5 2020, 12:49 AM

btw many thanks for doing this.


Comment should be 128


This should be dropped down to the next lowest and largest power-of-2 address, since BAR allocations are always on pow2 boundaries.


Maybe an easier way would be to start at the next 1G boundary past the end of configured guest RAM, and use from there until the end of phys address space for the region. That should cover just about any alignment/size.

Change to 128 in comment.
Set mem64bar start address to max power of 2 less than calculated address.

Dynamically calculate the space we reserve for 64-bit BARs, setting it to 1/4 of the max phys address.
The previous model, where a fixed space was hardcoded, did not work because the value was too big for some CPUs.
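The sizing rule above can be sketched as follows (an illustrative function under assumed names, not the committed code): reserve the top quarter of the physical address space for 64-bit BARs, then start the window at the largest power of two at or below that point, since BAR allocations are naturally aligned to power-of-two boundaries:

```c
#include <stdint.h>

/*
 * Hypothetical sketch: compute a 64-bit BAR window base from the max
 * physical address.  Reserve 1/4 of the address space for BARs, then
 * round the base down to the largest power of two not greater than
 * it, so any power-of-two-sized BAR can be naturally aligned there.
 */
static uint64_t
membase64_for(uint64_t maxphys)
{
	uint64_t base;

	base = maxphys - maxphys / 4;	/* leave 1/4 for 64-bit BARs */
	/* largest power of two <= base */
	return (1ULL << (63 - __builtin_clzll(base)));
}
```

For example, with 40 physical address bits (1TB), 1/4 is reserved and the base rounds down from 768G to the 512G boundary.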

kib marked 2 inline comments as done. Nov 5 2020, 9:45 PM
kib added inline comments.

I experimented some with the suggestion, and I prefer the approach from the patch. I still need to calculate maxphysaddr to see how much memresv64 can be. And with the suggestion, I need to find the TOM as well.

This should be fine.

There is an additional issue that I can do in a follow-up commit, and that's the base address of 0x000000D000000000 which is hard-coded in the existing bhyve/UEFI ACPI DSDT. A simple fix is to use that address if it fits within the existing address lines; the proper fix will be to use the f/w i/f to extract the base addr/limit at runtime and patch the DSDT.


This bit of code can be removed: it was a relic leftover from supporting a Netapp 8G BAR NVRAM adapter that required a 1:1 GPA:HPA mapping for peer-peer DMA.

This revision is now accepted and ready to land. Nov 5 2020, 9:54 PM

I can do it as a followup.

Correct the limit for membar64. It is an absolute address, not the size of the space to allocate from.

This revision now requires review to proceed. Nov 6 2020, 12:01 AM

Clamp cpu maxaddr at VM_MAXUSER_ADDRESS (LA47 variant, since only 4-level page tables are supported).
Also fix vmspace creation to use the correct TOM for the guest.
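The clamp above can be sketched roughly as below. The constant and the helper name are assumptions mirroring the amd64 4-level-paging user limit, not a verbatim copy of the committed code:

```c
#include <stdint.h>

/*
 * Hypothetical sketch: with 4-level page tables the guest vmspace
 * cannot map beyond the 48-bit user address limit, so cap the
 * CPUID-reported max physical address there.  The constant mirrors
 * the amd64 LA48 VM_MAXUSER_ADDRESS and is an assumption here.
 */
#define GUEST_MAXUSER_ADDRESS	0x0000800000000000ULL	/* LA48, assumed */

static uint64_t
clamp_maxaddr(uint64_t cpu_maxphys)
{
	return (cpu_maxphys > GUEST_MAXUSER_ADDRESS ?
	    GUEST_MAXUSER_ADDRESS : cpu_maxphys);
}
```

A CPU that reports, say, 52 physical address bits would otherwise yield a BAR window the 4-level guest page tables cannot map.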

Add headers. Fix typo LA47->LA48.

grehan added inline comments.

It could be even simpler to just set pci_emul_membase64 to 3/4 * cpu_maxphysaddr

I should have been a bit clearer: the boundary doesn't have to be a power of two, just a multiple of one, and that multiple becomes the largest BAR size that could be placed there. However, 3/4 of the max phys addr is still an enormous number.

However, I'm fine with what you have.

This revision is now accepted and ready to land. Nov 11 2020, 11:03 AM
kib marked an inline comment as done. Nov 12 2020, 12:31 AM
kib added inline comments.

I am working on proper resource tracking for BAR allocation; in particular, I have already ported subr_vmem.c to a userspace library. I believe having the starting address be a power of two is useful as the start of the resource region.

This revision was automatically updated to reflect the committed changes.
kib marked an inline comment as done.
kib marked an inline comment as done. Nov 12 2020, 1:01 AM