User Details
- User Since: Jun 4 2014
Sep 14 2024
keeping some % of reserve for root would be good, equivalent to _falloc_noinstall
this can __assert_unreachable, which in production kernels will tell the compiler the 2 options are the only valid ones
Sep 10 2024
This would not expose any detail apart from the fact that pipes have to allocate space to hold data, which is an integral part of the mechanism.
shouldn't this instead limit how much memory is used to back pipe buffers?
Jul 17 2024
Jul 14 2024
Sorry, I forgot to link the benchmark results from a previous iteration of this patch.
The metrics I've gathered show that this approach does reduce NOFREE fragmentation.
Jul 13 2024
one could add a flag to mountroot indicating the system knows some stuff may be missing and to fail the mount if that's the case; should be very easy to handle. then a wait round and another mount call, but without the flag
can this only wait for usb if no root device is found? then whack the tunable
Jul 11 2024
Jul 10 2024
I'm taking this over from Kristof as a "favor".
Jul 9 2024
refs need to be converted to atomics, then you can grab one without taking the lock
Jul 8 2024
what does this look like in terms of valid lock ordering?
Jul 3 2024
did fragmentation drop though?
Jun 20 2024
Given that we're unlikely to see new consumers of lockmgr going forward, I wonder whether sleepgen might someday be useful for avoiding the vnode interlock in some cases? If not, then this seems like a lot of machinery to deal with one lock, though the diff isn't too big.
May 16 2024
This needs to retain proper authorship, so the Author field still needs to be hps. At the end of the commit message you should mention this was extracted by you from a bigger patch of his. So add that, and drop "in memoriam".
May 12 2024
why even support something like this?
Mar 18 2024
Mar 12 2024
Further testing by other people confirmed my worry that this is too simplistic to fix the problem without running into other side effects, so I'm dropping it.
Mar 11 2024
i386 kernel is being retired
Mar 5 2024
First and foremost my apologies this fell through the cracks.
Jan 20 2024
Jan 18 2024
this macro should be eliminated, not exposed
What's the motivation here? If you are running into scalability problems, it has to be allproc and proctree locks (amongst others).
Jan 5 2024
Linux folk explicitly designed openat2 to be extensible, so I expect it is going to pick up explicit "official" usage down the road.
Jan 4 2024
sounds like the thing to do is to add openat2 so that this automagically works, instead of a freebsd-specific flag
Dec 29 2023
Dec 9 2023
why is this port still a thing
Dec 6 2023
Nov 29 2023
Nov 28 2023
First of all, my apologies; this somehow fell through the cracks after I pinged.
Nov 27 2023
So again, what's the benefit of bubbling up ENOSYS? I assumed it would at least get handled in a post-vop hook instead of going all the way up to the caller.
Nov 24 2023
Nov 23 2023
if it does work with the patch, you should paste what dmesg looks like with it
according to your own copy from dmesg this failed to attach, so it does not work?
Nov 20 2023
Nov 19 2023
Nov 17 2023
The patch looks correct, but the commit message needs some work.
Nov 16 2023
I massaged what I mean into a patch, with your nullfs change as basis:
Nov 15 2023
The VFS layer trying to babysit all filesystems is a long-standing design flaw, which adds overhead for everyone and only makes optimisations clunky. For example, for almost all filesystems VOP_CLOSE has next to nothing to do and most definitely does not need write suspension nor the vnode lock (and for zfs the routine is a nop) -- if there was no attempt to decide for the filesystem what it needs, there would be no problem.
Nov 14 2023
vn_copy_file_range passes down vnodes from different mount points, and all filesystems implementing the vop (apart from zfs) have an explicit check that the mount points match. iow the check that this is an instance of the same filesystem type is redundant for their case.
Nov 6 2023
If you can rebase both changes and show me how to collect fragmentation stats I can test this against a full ports tree build.
I don't know if the kernel is in a state where this can be properly evaluated.