
Avoid duplicate set_max_free after unlink
Abandoned · Public

Authored by dougm on Oct 21 2018, 1:05 AM.
Details

Summary

Calls to vm_map_entry_resize_free after merging adjacent nodes can be avoided if vm_map_entry_unlink is told that the range of the unlinked node is being merged into the range of a neighboring node; the adj_free value of the unlinked node's prev can then be set correctly without any later adjustment.
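The idea in the summary can be sketched in plain C. This is a hypothetical, simplified model (a doubly linked list with an adj_free field), not the real vm_map code; the names entry, adj_free, and unlink_merge_type merely mirror the patch:

```c
#include <assert.h>
#include <stddef.h>

enum unlink_merge_type { UNLINK_MERGE_NONE, UNLINK_MERGE_PREV, UNLINK_MERGE_NEXT };

struct entry {
	unsigned long start, end;   /* [start, end) mapped range */
	unsigned long adj_free;     /* free space between this entry and the next */
	struct entry *prev, *next;
};

static void
entry_unlink(struct entry *e, enum unlink_merge_type op)
{
	struct entry *p = e->prev, *n = e->next;

	switch (op) {
	case UNLINK_MERGE_PREV:
		/* prev grows to cover e's range; the gap after e becomes
		 * the gap after prev, so no later resize_free is needed. */
		p->end = e->end;
		p->adj_free = e->adj_free;
		break;
	case UNLINK_MERGE_NEXT:
		/* next grows downward over e's range; the gap between
		 * prev and the surviving entry is unchanged. */
		n->start = e->start;
		break;
	case UNLINK_MERGE_NONE:
		/* e's range and its trailing gap become free space
		 * after prev. */
		p->adj_free += (e->end - e->start) + e->adj_free;
		break;
	}
	p->next = n;
	n->prev = p;
}
```

In the merge cases the caller no longer has to follow the unlink with a separate resize of the neighbor's free-space bookkeeping, which is the duplicate work this change removes.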

Diff Detail

Lint
Lint Skipped
Unit
Tests Skipped

Event Timeline

This looks like a hack, and deserves a comment at least, if applied.

P.S. Please upload diffs with context.

dougm retitled this revision from Avoid duplicate set_max_free in merging with next neighbor to Avoid duplicate set_max_free after unlink.
dougm edited the summary of this revision. (Show Details)

Update the diff to match the updated title of this change.

This creates an edge case that we've never had to think about before: overlapping entries in both the list and the binary search tree (BST) of entries. Specifically, what I really don't like is that you are performing a splay on such a BST. So, now, I have to think about how splay will handle that edge case.

I suspect that the better way to handle this is not to use _unlink() when merging. Instead, create custom code for merging, and then possibly do some refactoring of _unlink() and this custom code to extract out common pieces into smaller helper functions.

Rewrite to avoid changing start/end values before splaying, by adding an argument to unlink telling it which "neighbor", if any, the unlinking item is being merged with.

sys/vm/vm_map.c
1055

There should be a space after switch.

1726

This should be included in _unlink().

Address reviewer comments.

Rename pesky 'neighbor' functions. Pass the unlink_merge_type to the function that disposes of things.

Fix cut-and-paste error. Drop prevsize. Reformat comment. Add merge parameter to vm_entry_map_delete, so that deleting a gap entry can happen without a call to vm_map_entry_resize_free.

sys/vm/vm_map.c
1036

Style(9) wants a space after the '{' and before the '}'.

1039–1041

Since you are editing this, please convert it to the more common style, and not have each parameter on its own line.

1041

This is not C++. You need to write enum unlink_merge_type here.
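The point of this comment is that in C, unlike C++, an enumerated type must be spelled with the enum keyword unless a typedef exists. A minimal illustration (the names are hypothetical stand-ins for the ones in the patch):

```c
enum unlink_merge_type { UNLINK_MERGE_NONE, UNLINK_MERGE_PREV, UNLINK_MERGE_NEXT };

/*
 * In C++ the parameter could be declared as just "unlink_merge_type op";
 * in C (with no typedef) the type name must include the enum keyword.
 */
static int
is_merge(enum unlink_merge_type op)
{
	return (op != UNLINK_MERGE_NONE);
}
```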

3922

You can simply use vm_map_entry_merge(map, gap_entry, UNLINK_MERGE_PREV). You don't need to modify vm_map_entry_delete().

3926

There is nothing to update here. stack_entry->adj_free must necessarily be zero in this case. We really only needed to call vm_map_entry_resize_free() when the entire gap entry was being removed, but that would now be handled internally by vm_map_entry_merge().

sys/vm/vm_map.c
1660

Leave the name as it was, but let's add a KASSERT on entry->eflags that will prevent misuse.

Apply reviewer suggestions.

dougm edited the summary of this revision. (Show Details)

Add a KASSERT to vm_map_mergeable_neighbors. Exploit the fact that some adj_free field is 0 in vm_map_entry_unlink neighbor-merging to avoid some calculation.

Add missing max_free update to MERGE_NEXT case.

Avoid function call in MERGE_NONE case.

I would like to see D14005 completed and committed first.

Update after patch affected surrounding code.

sys/vm/vm_map.c
1670–1672

I'm afraid that this does not work because the second call to this function passes "next" as "entry", and "next" one of these flags set.

1676

Avoiding the early and potentially unnecessary computation of "end - start" also reduces the machine code size by 16 bytes on amd64.

Later today, I'm going to commit the bits here that are orthogonal to the main point of this patch. In other words, the bits that migrated here from an earlier, abandoned review that Kostik had asked about last week (if I recall correctly).

sys/vm/vm_map.c
1670–1672

"... and 'next' might have one ..."

sys/vm/vm_map.c
1676

On 32-bit i386 machines, the machine code is smaller if there are parentheses around "end - start" so that that arithmetic operation occurs before the extension to 64 bits. Recall that "offset" is a 64-bit value whereas "end" and "start" are 32-bit values.
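The width observation can be shown with stand-in typedefs. On i386, vm_offset_t is 32 bits while vm_ooffset_t is 64; this sketch uses fixed-width types to model that, and is not the kernel's actual code:

```c
#include <stdint.h>

/* Stand-ins for the kernel types under discussion on i386. */
typedef uint32_t vm_offset_t;
typedef uint64_t vm_ooffset_t;

/*
 * With the parentheses, "end - start" is computed in 32-bit arithmetic
 * and only the difference is widened to 64 bits for the addition.
 * Without them, the usual arithmetic conversions widen each operand to
 * 64 bits first, which costs extra instructions on a 32-bit machine.
 * For valid ranges (end >= start) the result is the same either way.
 */
static vm_ooffset_t
next_offset(vm_ooffset_t offset, vm_offset_t start, vm_offset_t end)
{
	return (offset + (end - start));
}
```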

dougm marked 10 inline comments as done.

Add optimizing parens. Drop invalid KASSERT.

Restore a KASSERT, with both arguments included. #define a name for the set of flags that prevent merging.

I like the introduction of the #define, but I'd like to rename it to have the suffix _MASK. The point being that a reader who sees a use, as opposed to the definition, doesn't mistake it for yet another flag.

After suggesting the use of a bitwise and to fix the KASSERT, I've had second thoughts.

Here is what I'm proposing to commit.

Index: vm/vm_map.c
===================================================================
--- vm/vm_map.c (revision 340474)
+++ vm/vm_map.c (working copy)
@@ -1644,16 +1644,25 @@ vm_map_find_min(vm_map_t map, vm_object_t object,
        }
 }
 
+/*
+ * A map entry with any of the following flags set must not be merged with
+ * another entry.
+ */
+#define        MAP_ENTRY_NOMERGE_MASK  (MAP_ENTRY_GROWS_DOWN | MAP_ENTRY_GROWS_UP | \
+           MAP_ENTRY_IN_TRANSITION | MAP_ENTRY_IS_SUB_MAP)
+
 static bool
 vm_map_mergeable_neighbors(vm_map_entry_t prev, vm_map_entry_t entry)
 {
-       vm_size_t prevsize;
 
-       prevsize = prev->end - prev->start;
+       KASSERT((prev->eflags & MAP_ENTRY_NOMERGE_MASK) == 0 ||
+           (entry->eflags & MAP_ENTRY_NOMERGE_MASK) == 0,
+           ("vm_map_mergeable_neighbors: neither %p nor %p are mergeable",
+           prev, entry));
        return (prev->end == entry->start &&
            prev->object.vm_object == entry->object.vm_object &&
            (prev->object.vm_object == NULL ||
-           prev->offset + prevsize == entry->offset) &&
+           prev->offset + (prev->end - prev->start) == entry->offset) &&
            prev->eflags == entry->eflags &&
            prev->protection == entry->protection &&
            prev->max_protection == entry->max_protection &&
@@ -1667,18 +1676,14 @@ vm_map_merged_neighbor_dispose(vm_map_t map, vm_ma
 {
 
        /*
-        * If the backing object is a vnode object,
-        * vm_object_deallocate() calls vrele().
-        * However, vrele() does not lock the vnode
-        * because the vnode has additional
-        * references.  Thus, the map lock can be kept
-        * without causing a lock-order reversal with
-        * the vnode lock.
+        * If the backing object is a vnode object, vm_object_deallocate()
+        * calls vrele().  However, vrele() does not lock the vnode because
+        * the vnode has additional references.  Thus, the map lock can be
+        * kept without causing a lock-order reversal with the vnode lock.
         *
-        * Since we count the number of virtual page
-        * mappings in object->un_pager.vnp.writemappings,
-        * the writemappings value should not be adjusted
-        * when the entry is disposed of.
+        * Since we count the number of virtual page mappings in
+        * object->un_pager.vnp.writemappings, the writemappings value
+        * should not be adjusted when the entry is disposed of.
         */
        if (entry->object.vm_object != NULL)
                vm_object_deallocate(entry->object.vm_object);
@@ -1704,10 +1709,8 @@ vm_map_simplify_entry(vm_map_t map, vm_map_entry_t
 {
        vm_map_entry_t next, prev;
 
-       if ((entry->eflags & (MAP_ENTRY_GROWS_DOWN | MAP_ENTRY_GROWS_UP |
-           MAP_ENTRY_IN_TRANSITION | MAP_ENTRY_IS_SUB_MAP)) != 0)
+       if ((entry->eflags & MAP_ENTRY_NOMERGE_MASK) != 0)
                return;
-
        prev = entry->prev;
        if (vm_map_mergeable_neighbors(prev, entry)) {
                vm_map_entry_unlink(map, prev);
@@ -1717,7 +1720,6 @@ vm_map_simplify_entry(vm_map_t map, vm_map_entry_t
                        vm_map_entry_resize_free(map, entry->prev);
                vm_map_merged_neighbor_dispose(map, prev);
        }
-
        next = entry->next;
        if (vm_map_mergeable_neighbors(entry, next)) {
                vm_map_entry_unlink(map, next);
@@ -1726,6 +1728,7 @@ vm_map_simplify_entry(vm_map_t map, vm_map_entry_t
                vm_map_merged_neighbor_dispose(map, next);
        }
 }
+
 /*
  *     vm_map_clip_start:      [ internal use only ]
  *
This revision was not accepted when it landed; it landed in state Needs Review. Nov 18 2018, 1:27 AM
This revision was automatically updated to reflect the committed changes.

Restore what remains of this patch.

Merged the changes into D17794.