r31386 changed how the size of the VM page array was calculated to be
less wasteful. For most systems, the amount of memory is divided by the
memory required by each page (a page of data plus a struct vm_page) to
determine the maximum number of available pages. However, if the amount
of memory is not evenly divisible by this per-page cost, the partial
page was not accounted for. Since 'new_end' is determined only by
subtracting the size of the allocated VM page array from the old 'end',
this could leave 'new_end' addressing a page beyond the end of the
allocated space in the VM page array, triggering a panic when that last
page was added to the system. In particular, if the remainder for the
first non-available page was at least a page of data (so that the only
memory missing was its struct vm_page), that page was left in
phys_avail[] but was not allocated an entry in the VM page array.
Handle this case by explicitly excluding the page from phys_avail[].
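For reference, the SPARSE sizing logic and the case being handled look
roughly like this (a sketch with illustrative variable names, not the
literal diff against vm_page_startup()):

	vm_paddr_t size;
	long page_range;
	int i;

	size = 0;
	for (i = 0; phys_avail[i + 1] != 0; i += 2)
		size += phys_avail[i + 1] - phys_avail[i];

	/* Each available page costs a page of data plus its struct vm_page. */
	page_range = size / (PAGE_SIZE + sizeof(struct vm_page));

	/*
	 * If the remainder holds a full page of data but not its struct
	 * vm_page, that page gets no entry in the VM page array and must
	 * also be trimmed from the end of its phys_avail[] segment.
	 */
	if (size % (PAGE_SIZE + sizeof(struct vm_page)) >= PAGE_SIZE) {
		/* exclude that trailing page from phys_avail[] */
	}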
Specifically, with a MALTA64 kernel under qemu I had the following.
Initial phys_avail:
(gdb) p/x phys_avail
$2 = {0x833000, 0x10000000, 0x90000000, 0x100000000, 0x0, 0x0, 0x0, 0x0, 0x0,
0x0, 0x0, 0x0}
Total pages in system:
(gdb) p size / 4096
$19 = 522093
Calculated page_range (existing code):
(gdb) p size / (4096 + sizeof(struct vm_page))
$25 = 509164
Number of pages needed to hold VM page array:
(gdb) p (509164 * sizeof(struct vm_page) + 4095) / 4096
$102 = 12928
Which, when subtracted from 'end' to compute 'new_end', left this many
pages described by the new phys_avail:
(gdb) p 522093 - 12928
$103 = 509165
Which is one more than the number of entries in the VM page array, so
the last page left in phys_avail[] has no corresponding vm_page.
I have not thought through whether the DENSE case has the same issue in
its calculation.
One odd thing here is that you have to round up the size of the VM page
array (rather than round down / truncate) to ensure enough room is taken
from 'end'.
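Roughly, that means reserving a whole number of pages for the array
before subtracting it from 'end', e.g. (a sketch, not the exact change;
round_page() is the stock macro):

	/*
	 * Take enough whole pages from 'end' to hold page_range struct
	 * vm_page entries; truncating instead would let the tail of the
	 * array overlap the first page left in phys_avail[].
	 */
	new_end = end - round_page(page_range * sizeof(struct vm_page));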
The mips32 MALTA kernel under qemu exhibited the same panic, but I
haven't yet tested the fix there.