- User Since
- Dec 14 2014, 5:52 AM
Tue, Sep 19
As long as the page daemon will still launder the pages when we are short of memory, I don't see a problem with msync() ignoring them.
Sun, Sep 17
Sat, Sep 16
The next-to-last change to this patch, which zeroed the bitmap in terminator nodes, addressed my last concern. I'm going to commit this patch shortly.
Wed, Sep 13
I've performed some testing with the new sysctl that calls blist_stats(). Specifically, I've configured a test machine with 2GB of RAM and 64GB of swap space, and run a "make -j7 buildworld" in a loop. After 6 consecutive builds, the before and after results are as follows.
Sun, Sep 10
Have you evaluated the effects of isolating the free count? The fields surrounding it are read-only.
Sat, Sep 9
Thu, Sep 7
Mon, Sep 4
Fri, Sep 1
swap_pager.c needs #include <sys/sbuf.h>
Mon, Aug 28
Sun, Aug 27
This change reveals a problem: vm_fault_soft_fast() is (indirectly) calling vm_pager_page_unswapped() with only a read lock.
Sat, Aug 26
Fri, Aug 25
I've now completed 8+ hours of testing under a "make -j7 buildworld" workload on 1.5GB of RAM swapping to a Samsung 850 PRO, both with and without the patch. I think that there is too much variability in the execution time to conclude anything from it. However, here is the memory utilization story:
Thu, Aug 24
I've tried to create a test where the vm object has discontiguous swap space usage and the radix tree should be advantageous. In particular, I've tried to create a situation where vm object destruction should be faster, but I'm getting inconsistent results.
Wed, Aug 23
I have run numerous tests since r322459 was committed that have wrapped around the swap area without crashing, so there must be another prerequisite to a crash: Your swap area is a fully allocated tree, i.e., the number of blocks equals the radix. Can we also solve this problem by placing a sentinel entry at the end of a fully allocated tree?
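To make the sentinel idea concrete, here is a minimal sketch, not the actual subr_blist.c code: leaves carry an explicit terminator flag (the real blist marks terminator nodes differently), and an allocation scan that reaches the terminator fails instead of running past the end of a fully allocated tree. All names here are illustrative.

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative leaf: 1 bit per block, plus an end-of-tree marker. */
struct leaf {
	uint64_t bits;		/* 1 = allocated */
	int	 terminator;	/* nonzero: sentinel, scan must stop */
};

/*
 * Allocate one block by scanning leaves in order.  Reaching the
 * sentinel returns failure rather than wrapping or overrunning.
 */
static int64_t
sketch_alloc_one(struct leaf *leaves, size_t nleaves)
{
	for (size_t i = 0; i < nleaves; i++) {
		if (leaves[i].terminator)
			return (-1);	/* nothing valid beyond here */
		if (leaves[i].bits != UINT64_MAX) {
			/* Claim the lowest clear bit in this leaf. */
			for (int b = 0; b < 64; b++) {
				if ((leaves[i].bits &
				    ((uint64_t)1 << b)) == 0) {
					leaves[i].bits |= (uint64_t)1 << b;
					return ((int64_t)(i * 64 + b));
				}
			}
		}
	}
	return (-1);
}
```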
Aug 22 2017
Just an FYI, I'm doing some performance testing while sipping my morning coffee.
Aug 17 2017
Aug 16 2017
By the way, once upon a time, you asked me about whether we should continue using the blist allocator for swap space or switch to vmem. Perhaps the most compelling reason, which I failed to give at the time, is that every allocated range from an arena consumes a boundary tag. In other words, while vmem coalesces free ranges, it maintains a boundary tag for each allocated range.
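A back-of-the-envelope sketch of that metadata-cost argument, with assumed figures: the 64-byte boundary tag and the ~2 bits per block for the bitmap plus amortized radix overhead are illustrative numbers, not the real vmem or blist layouts.

```c
#include <stddef.h>

#define TAG_SIZE	64	/* assumed bytes per boundary tag */

/* vmem-style: one boundary tag per allocated range, however small. */
static size_t
vmem_metadata_bytes(size_t allocated_ranges)
{
	return (allocated_ranges * TAG_SIZE);
}

/* blist-style: fixed cost, ~2 bits per block, allocated or not. */
static size_t
blist_metadata_bytes(size_t total_blocks)
{
	return (total_blocks / 4);
}
```

With these assumptions, a million single-page allocations cost 64MB of boundary tags, while a bitmap over 16M blocks is a fixed 4MB; the bitmap's cost does not grow with fragmentation.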
Aug 15 2017
I have no objections to this change. I'm just interested in hearing a little more about why it is needed. In your SGX driver are you essentially maintaining a private free page list for managing the pool of physical memory that backs enclaves?
Aug 13 2017
I've asked Doug to create some new statistics code (see D11906) so that we can quantify the effects of this patch.
Aug 12 2017
This change reduces the text size by 7% on amd64.
I suggest changing the "radix" field of struct blist to be unsigned as well.
This change doesn't compile:
../../../kern/subr_blist.c:457:46: error: too few arguments to function call, expected 4, have 3
        return (blst_leaf_alloc(scan, cursor, count));
                ~~~~~~~~~~~~~~~ ^
../../../kern/subr_blist.c:358:1: note: 'blst_leaf_alloc' declared here
static daddr_t
^
1 error generated.
I think that the callers should be rewritten to use vm_radix_lookup_le(), e.g.,
Aug 11 2017
Aug 10 2017
Brett, I added you to this change, because it will decrease the time spent in vm_radix_lookup() by your shm_open()/sendfile() test case.
To be clear, I'm happy with the concept, and in general eliminating vm_page_lookup() calls inside of loops.
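The pattern being endorsed can be sketched outside the kernel with a toy structure: do one lookup for the first index, then follow the object's ordered page listing, rather than paying a lookup on every iteration. The `toy_page` type and function names are hypothetical stand-ins, not the kernel's.

```c
#include <stddef.h>

/* Toy stand-in for a vm_page: an object's pages on a list sorted
 * by pindex. */
struct toy_page {
	size_t		 pindex;
	struct toy_page	*next;
};

/* Per-index lookup, standing in for a radix-tree walk. */
static struct toy_page *
toy_lookup(struct toy_page *head, size_t pindex)
{
	for (; head != NULL; head = head->next)
		if (head->pindex == pindex)
			return (head);
	return (NULL);
}

/*
 * Count the resident pages in [start, start + len) with a single
 * lookup followed by a list walk, instead of len lookups.
 */
static size_t
count_run(struct toy_page *head, size_t start, size_t len)
{
	struct toy_page *m = toy_lookup(head, start);
	size_t n = 0;

	while (m != NULL && n < len && m->pindex == start + n) {
		n++;
		m = m->next;
	}
	return (n);
}
```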
Aug 9 2017
Aug 8 2017
Update the description of VM_ALLOC_NOBUSY.
Should vm_page_grab_pages() also include this assertion from vm_page_grab():
KASSERT((allocflags & VM_ALLOC_SBUSY) == 0 ||
    (allocflags & VM_ALLOC_IGN_SBUSY) != 0,
    ("vm_page_grab: VM_ALLOC_SBUSY/VM_ALLOC_IGN_SBUSY mismatch"));
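The condition the assertion enforces can be isolated as a small predicate: requesting a shared-busied page (VM_ALLOC_SBUSY) is only consistent when the caller also passes VM_ALLOC_IGN_SBUSY. The flag values below are placeholders for illustration; the real definitions live in the kernel headers.

```c
#include <stdbool.h>

/* Placeholder bit values, not the kernel's actual definitions. */
#define VM_ALLOC_SBUSY		0x0001
#define VM_ALLOC_IGN_SBUSY	0x0002

/*
 * True unless VM_ALLOC_SBUSY is requested without
 * VM_ALLOC_IGN_SBUSY -- the combination the KASSERT rejects.
 */
static bool
sbusy_flags_consistent(int allocflags)
{
	return ((allocflags & VM_ALLOC_SBUSY) == 0 ||
	    (allocflags & VM_ALLOC_IGN_SBUSY) != 0);
}
```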