r31386 changed how the size of the VM page array was calculated to be
less wasteful. For most systems, the amount of memory is divided by the
overhead required by each page (a page of data plus a struct vm_page) to
determine the maximum number of available pages. However, if the memory
left over after those pages was at least a full page of data (i.e. only
the space for its struct vm_page was missing), that last page was left
in phys_avail[] but was never allocated an entry in the VM page array.
Handle this case by explicitly excluding the page from phys_avail[].
Specifically, with a MALTA64 kernel under qemu I had the following.
Initial phys_avail:
(gdb) p/x phys_avail
$2 = {0x833000, 0x10000000, 0x90000000, 0x100000000, 0x0, 0x0, 0x0, 0x0, 0x0,
0x0, 0x0, 0x0}
Total pages in system:
(gdb) p size / 4096
$19 = 522093
Calculated page_range (existing code):
(gdb) p size / (4096 + sizeof(struct vm_page))
$25 = 509164
Number of pages needed to hold VM page array:
(gdb) p (509164 * sizeof(struct vm_page) + 4095) / 4096
$102 = 12928
Subtracting this from 'end' to compute 'new_end' left the following
number of pages described in the new phys_avail:
(gdb) p 522093 - 12928
$103 = 509165
That is one page more than the VM page array describes.
I have not thought through whether the DENSE case has the same issue in
its calculation.
One odd thing here is that the size of the VM page array must be
rounded up (rather than rounded down / truncated) to ensure enough room
is taken from 'end'.
The MALTA kernel with mips32 under qemu exhibited the same panic, but I
haven't yet tested the fix there.