As with vm_page_grab_pages(), we can avoid a radix tree lookup on each
page in the range by using the object memq.
Details
- Reviewers: kib, alc
- Commits: rS322391: Micro-optimize kmem_unback().
Diff Detail
- Repository: rS FreeBSD src repository - subversion
- Lint: Not Applicable
- Unit Tests: Not Applicable
Event Timeline
sys/vm/vm_kern.c
- 400 ↗ (On Diff #31842): `for (; pindex++ < end; m = next) {`
To be clear, I'm happy with the concept, and in general eliminating vm_page_lookup() calls inside of loops.
sys/vm/vm_kern.c
- 390 ↗ (On Diff #31842): Switching from vm_offset_t to vm_pindex_t is a pessimization for 32-bit architectures, because it forces them to perform 64-bit arithmetic where they previously used 32-bit arithmetic.
- 401 ↗ (On Diff #31842): Just to be clear, the reason to use vm_page_next() rather than TAILQ_NEXT() here is that it preserves the current behavior of panicking on a NULL pointer dereference if a page is missing.
sys/vm/vm_kern.c
- 401 ↗ (On Diff #31842): Right. I had originally written `next = TAILQ_NEXT(m, listq); MPASS(next->pindex == pindex);` and then I remembered that vm_page_next() exists.