As with vm_page_grab_pages(), we can avoid a radix tree lookup on each
page in the range by using the object memq.
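As a rough sketch of the idea (variable names and bounds below are illustrative, not the committed diff): instead of performing a radix trie lookup for every page, look the first page up once and then follow the object's memq, the list of resident pages kept sorted by pindex, with vm_page_next(). The object write lock, which kmem_unback() takes around this loop, is what keeps the memq stable during the walk.

```c
/*
 * Illustrative sketch only, not the committed diff.
 */
vm_object_t object;             /* kernel object, write-locked by the caller */
vm_pindex_t pindex, start, end; /* page indices covering the unmapped range */
vm_page_t m, next;

/* Before: one vm_page_lookup() (a radix trie lookup) per page. */
for (pindex = start; pindex < end; pindex++) {
	m = vm_page_lookup(object, pindex);
	vm_page_unwire(m, PQ_NONE);
	vm_page_free(m);
}

/*
 * After: one lookup for the first page; later pages come from the
 * object's memq via vm_page_next(), a constant-time list step.
 */
m = vm_page_lookup(object, start);
for (pindex = start; pindex < end; pindex++, m = next) {
	next = vm_page_next(m);	/* must be fetched before freeing m */
	vm_page_unwire(m, PQ_NONE);
	vm_page_free(m);
}
```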
Details
- Reviewers: kib, alc
- Commits: rS322391: Micro-optimize kmem_unback().
Diff Detail
- Lint: Passed
- Unit: No Test Coverage
- Build Status: Buildable 10983, Build 11370: arc lint + arc unit
Event Timeline
| sys/vm/vm_kern.c | |
|---|---|
| 400 | `for (; pindex++ < end; m = next) {` |
To be clear, I'm happy with the concept, and, in general, with eliminating vm_page_lookup() calls inside of loops.
| sys/vm/vm_kern.c | |
|---|---|
| 390 | Switching from vm_offset_t to vm_pindex_t is a pessimization for 32-bit architectures because it forces them to perform 64-bit arithmetic where before they used 32-bit arithmetic. |
| 401 | Just to be clear, the reason to use vm_page_next() rather than TAILQ_NEXT() here is that it will preserve the current behavior of panic'ing on a NULL pointer dereference if a page is missing. |
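For context on the line 401 comment above, vm_page_next() behaves roughly as below (a paraphrase, not a verbatim copy; consult sys/vm/vm_page.c for the authoritative version): it steps along the object's memq and returns NULL when the neighbouring page is not at the immediately following pindex. A hole in the resident-page run therefore still ends in a NULL pointer dereference, and hence a panic, on the loop's next iteration, just as a failed vm_page_lookup() would.

```c
/*
 * Paraphrased sketch of vm_page_next().
 */
vm_page_t
vm_page_next(vm_page_t m)
{
	vm_page_t next;

	VM_OBJECT_ASSERT_LOCKED(m->object);
	if ((next = TAILQ_NEXT(m, listq)) != NULL &&
	    next->pindex != m->pindex + 1)
		next = NULL;	/* a page is missing from the run */
	return (next);
}
```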
| sys/vm/vm_kern.c | |
|---|---|
| 401 | Right. I had originally written `next = TAILQ_NEXT(m, listq); MPASS(next->pindex == pindex);` and then I remembered that vm_page_next() exists. |
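One hedged observation on the two forms mentioned above: MPASS() only takes effect on kernels built with INVARIANTS, so the open-coded variant would detect a missing page only in that configuration, whereas vm_page_next() returns NULL on a gap and the loop then dereferences the NULL page pointer on its next pass, panicking under any kernel options.

```c
/* The two loop-body variants discussed above, side by side (sketch). */

/* As originally written: checks the pindex only under INVARIANTS. */
next = TAILQ_NEXT(m, listq);
MPASS(next->pindex == pindex);

/*
 * As in this revision: a gap yields next == NULL, which faults on the
 * next iteration regardless of kernel options.
 */
next = vm_page_next(m);
```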