
Segregate _NOFREE allocations
Needs Review · Public

Authored by alc on Aug 8 2018, 5:52 AM.

Details

Reviewers
markj
Summary

This revision is a quick-and-dirty approach to segregating the physical memory that underlies UMA _NOFREE objects, such as the vm object zone, so that permanent fragmentation is avoided.


Event Timeline

alc created this revision. Aug 8 2018, 5:52 AM
alc added a comment. Aug 8 2018, 5:56 AM

Mark, I believe that you had a dtrace script for evaluating fragmentation. If this patch is effective, I'll try to come up with a committable patch.

markj added a comment. Aug 8 2018, 3:52 PM
In D16620#353193, @alc wrote:

Mark, I believe that you had a dtrace script for evaluating fragmentation. If this patch is effective, I'll try to come up with a committable patch.

It's just a quick C program which can dump the slabs from the kegs of a specified zone. In some cases (VTOSLAB == 0 and ppera == 1), the slab addresses are in the direct map and thus can be used to measure fragmentation of physical memory. For instance, with less than 18 hours uptime on my desktop, I get the following output for the VM object zone: https://reviews.freebsd.org/P200

Each line corresponds to a 2 MB page, and the first column is the number of 4 KB slabs allocated to the object keg from that 2 MB page. In particular, object slabs are spread across more than 5000 2 MB pages.

The program is in here: https://people.freebsd.org/~markj/umaslabs/

I'll test the patch on my desktop. At one point I wrote a virtually identical patch and found that it indeed helped a fair bit.

markj added a comment. Aug 8 2018, 4:33 PM
In D16620#353193, @alc wrote:

Mark, I believe that you had a dtrace script for evaluating fragmentation. If this patch is effective, I'll try to come up with a committable patch.

For instance, with less than 18 hours uptime on my desktop, I get the following output for the VM object zone: https://reviews.freebsd.org/P200

Obviously this isn't an apples-to-apples comparison, but things look much better with the patch so far: https://reviews.freebsd.org/P201

markj added a comment. Aug 8 2018, 4:37 PM

Obviously this isn't an apples-to-apples comparison, but things look much better with the patch so far: https://reviews.freebsd.org/P201

As one would expect, I'm also seeing some sharing among different NOFREE zones. For instance, the large page with 97 VM object slabs also contains nearly 200 thread slabs.

markj added a comment. Aug 9 2018, 2:44 PM

Fragmentation got substantially worse overnight, though it's still much better than before: https://reviews.freebsd.org/P203

I have no hard evidence, but my suspicion is that the cron job which updates the locate(1) database triggered many VM object allocations. By that point, my system was low on free pages, so large contiguous allocations weren't possible.

alc added inline comments. Aug 9 2018, 5:13 PM
sys/malloc.h
63

The "0" was to avoid conflict with the mbuf flag. Currently, I'm thinking of M_NEVERFREED as an alternative. Thoughts?

alc added a comment. Aug 9 2018, 5:31 PM

For 32-bit machines, where we don't have a direct map, this pool-based implementation won't work without explicit allocation of KVA and mapping from that KVA to the allocated physical pages. Alternatively, we could try to achieve physical segregation through reservations rather than pooling. In other words, just use the kernel object, but steer the M_NEVERFREED allocations into dedicated ranges of page indices within the kernel object (and hence of kernel virtual addresses). The disadvantage of this approach is that we lose all segregation whenever an entire reservation can't be allocated, whereas the pooling-based approach merely degrades to weaker segregation.

markj added inline comments. Aug 9 2018, 5:31 PM
sys/malloc.h
63

M_PERMANENT is the only other alternative I can think of. M_NEVERFREED seems preferable to me.

markj added a comment. Aug 9 2018, 5:37 PM
In D16620#353755, @alc wrote:

For 32-bit machines, where we don't have a direct map, this pool-based implementation won't work without explicit allocation of KVA and mapping from that KVA to the allocated physical pages.

Sorry, I don't quite follow. For such machines, don't we want to segregate KVA too?

alc added a comment. Aug 9 2018, 6:36 PM
In D16620#353755, @alc wrote:

For 32-bit machines, where we don't have a direct map, this pool-based implementation won't work without explicit allocation of KVA and mapping from that KVA to the allocated physical pages.

Sorry, I don't quite follow. For such machines, don't we want to segregate KVA too?

Yes, we would, particularly if we are going to use superpage mappings.

alc added a comment. Aug 11 2018, 9:07 PM

As I think about an implementation based on reservations rather than a pool, the arena parameter to kmem_malloc_domain() makes less and less sense to me. We already have the newish M_EXEC flag, and together with the flag proposed here, the flags can drive the selection of the arena, rather than callers pointlessly passing in an arena pointer that we don't really use except as, essentially, a flag.

I would also point out that kmem_alloc_attr_domain() doesn't take an arena pointer as a parameter.

alc added a comment. Aug 11 2018, 9:25 PM

Here is just a portion of what a patch eliminating the arena parameter would look like. It eliminates pointless work from kmem_malloc_domain()'s callers.

Index: vm/uma_core.c
===================================================================
--- vm/uma_core.c       (revision 337447)
+++ vm/uma_core.c       (working copy)
@@ -3680,32 +3680,22 @@ uma_zone_exhausted_nolock(uma_zone_t zone)
 void *
 uma_large_malloc_domain(vm_size_t size, int domain, int wait)
 {
-       struct vmem *arena;
        vm_offset_t addr;
        uma_slab_t slab;
 
-#if VM_NRESERVLEVEL > 0
-       if (__predict_true((wait & M_EXEC) == 0))
-               arena = kernel_arena;
-       else
-               arena = kernel_rwx_arena;
-#else
-       arena = kernel_arena;
-#endif
-
        slab = zone_alloc_item(slabzone, NULL, domain, wait);
        if (slab == NULL)
                return (NULL);
        if (domain == UMA_ANYDOMAIN)
-               addr = kmem_malloc(arena, size, wait);
+               addr = kmem_malloc(NULL, size, wait);
        else
-               addr = kmem_malloc_domain(arena, domain, size, wait);
+               addr = kmem_malloc_domain(domain, size, wait);
        if (addr != 0) {
                vsetslab(addr, slab);
                slab->us_data = (void *)addr;
                slab->us_flags = UMA_SLAB_KERNEL | UMA_SLAB_MALLOC;
 #if VM_NRESERVLEVEL > 0
-               if (__predict_false(arena == kernel_rwx_arena))
+               if (__predict_false(wait & M_EXEC))
                        slab->us_flags |= UMA_SLAB_KRWX;
 #endif
                slab->us_size = size;
Index: vm/vm_kern.c
===================================================================
--- vm/vm_kern.c        (revision 337447)
+++ vm/vm_kern.c        (working copy)
@@ -372,7 +372,7 @@ kmem_suballoc(vm_map_t parent, vm_offset_t *min, v
  *     Allocate wired-down pages in the kernel's address space.
  */
 vm_offset_t
-kmem_malloc_domain(struct vmem *vmem, int domain, vm_size_t size, int flags)
+kmem_malloc_domain(int domain, vm_size_t size, int flags)
 {
        vmem_t *arena;
        vm_offset_t addr;
@@ -379,16 +379,11 @@ vm_offset_t
        int rv;
 
 #if VM_NRESERVLEVEL > 0
-       KASSERT(vmem == kernel_arena || vmem == kernel_rwx_arena,
-           ("kmem_malloc_domain: Only kernel_arena or kernel_rwx_arena "
-           "are supported."));
-       if (__predict_true(vmem == kernel_arena))
+       if (__predict_true((flags & M_EXEC) == 0))
                arena = vm_dom[domain].vmd_kernel_arena;
        else
                arena = vm_dom[domain].vmd_kernel_rwx_arena;
 #else
-       KASSERT(vmem == kernel_arena,
-           ("kmem_malloc_domain: Only kernel_arena is supported."));
        arena = vm_dom[domain].vmd_kernel_arena;
 #endif
        size = round_page(size);
@@ -404,7 +399,7 @@ vm_offset_t
 }
 
 vm_offset_t
-kmem_malloc(struct vmem *vmem, vm_size_t size, int flags)
+kmem_malloc(struct vmem *vmem __unused, vm_size_t size, int flags)
 {
        struct vm_domainset_iter di;
        vm_offset_t addr;
@@ -412,7 +407,7 @@ vm_offset_t
 
        vm_domainset_iter_malloc_init(&di, kernel_object, &domain, &flags);
        do {
-               addr = kmem_malloc_domain(vmem, domain, size, flags);
+               addr = kmem_malloc_domain(domain, size, flags);
                if (addr != 0)
                        break;
        } while (vm_domainset_iter_malloc(&di, &domain, &flags) == 0);