I'd planned on merging bnxt* stuff today.
Mon, Jun 3
I think this is ready now, so I'll land it.
I'm satisfied that the old-school construct was there for our old-school gcc 4.2, which has aged out of support everywhere.
So where does this stand? My arm assembler fu is weak, but I know that at least the pcb.h bits are good. I don't know how to read Andrew's comments on this, though.
The update looks good to my eyes
I can land this too, it looks perfect now.
Sun, Jun 2
This is definitely better as it is. And I'm not 100% sure about the rescan stuff... that was my takeaway before I had to remove the one NVMe card I had that supported namespace management from my active system. I played around with several different combos, and I think I saw the repeat in a loop of delete ns; add ns... but that's been a couple of years.
I think this looks good... Agree again on the locking, unless we're serialized somehow... but maybe the locking needs to be more reference-count-like rather than pure locking, because what does one do with the namespace pointers? And what uses of them might need protection against the lurking "dying namespace we can't quite free yet" issue that I see? Thankfully, namespace changes are super rare, so we have time to work it out.
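To sketch the reference-count idea being discussed: instead of holding a lock across every use of a namespace pointer, each user takes a reference, and a removed namespace lingers until the last reference drops. This is a hedged illustration only; the names (`ns_hold`, `ns_rele`, `dying`) are mine, not the driver's actual API, and real kernel code would use a lock or atomics around the count.

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Illustrative sketch (not the driver's real API): a dying namespace is
 * unlinked immediately but only freed once the last reference drops.
 * In real kernel code, refs would be manipulated under a lock or with
 * atomics; this single-threaded model just shows the lifetime rule.
 */
struct ns {
	int  refs;	/* outstanding users of this namespace pointer */
	bool dying;	/* set when the namespace is removed */
	bool freed;	/* stands in for the real free */
};

static bool
ns_hold(struct ns *n)
{
	if (n->dying)		/* refuse new users of a dying namespace */
		return (false);
	n->refs++;
	return (true);
}

static void
ns_rele(struct ns *n)
{
	if (--n->refs == 0 && n->dying)
		n->freed = true;	/* last user gone: safe to free */
}
```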
Is this a good place to add a brief note about why this is a good idea, either in the commit message or in the code? (If so, I'd lean towards the former).
Sat, Jun 1
This looks good, I'll test it when I return from BSDCan
Fri, May 31
d28bbfa2715a45c841e0eeec38d7f7b73513c66e landed this change. I forgot to tag it as reviewed.
I'll bump as a separate commit.
FreeBSD version bump?
Simplify the #ifdefs, even though this is a bit longer than the prior expressions.
May 30 2024
This breaks wpa
diff --git a/contrib/wpa/src/utils/os_unix.c b/contrib/wpa/src/utils/os_unix.c
index 315c973f3228..1a0cefbbb188 100644
--- a/contrib/wpa/src/utils/os_unix.c
+++ b/contrib/wpa/src/utils/os_unix.c
@@ -97,12 +97,12 @@ int os_get_reltime(struct os_reltime *t)
 		return 0;
 	}
 	switch (clock_id) {
-#ifdef CLOCK_BOOTTIME
+#if (defined(CLOCK_BOOTTIME) && defined(CLOCK_MONOTONIC)) && (CLOCK_MONOTONIC != CLOCK_BOOTTIME)
 	case CLOCK_BOOTTIME:
 		clock_id = CLOCK_MONOTONIC;
 		break;
 #endif
-#ifdef CLOCK_MONOTONIC
+#if defined(CLOCK_MONOTONIC) && (!defined(CLOCK_BOOTTIME) || CLOCK_MONOTONIC != CLOCK_BOOTTIME)
 	case CLOCK_MONOTONIC:
 		clock_id = CLOCK_REALTIME;
 		break;
I'm convinced this is good. I'll push it in.
This looks good.
It conflicts with what I did here, but you did it better, so I'll rebase.
rebased and lightly reworked
https://reviews.freebsd.org/D45404
May 29 2024
May 28 2024
Is this ready?
If so, please add achad and bz as reviewers.
In D36259#1035544, @franco_opnsense.org wrote:
> This won't merge, I'm talking to @imp at https://github.com/freebsd/freebsd-src/pull/1258
May 27 2024
Hopefully this is actually multiple commits. On its surface, this change is too large to review, with a dozen different things going on.
No further comments.
In D45379#1035342, @seigo.tanimura_gmail.com wrote:
> In D45379#1035237, @imp wrote:
>> In D45379#1035197, @seigo.tanimura_gmail.com wrote:
>>> root@pkgfactory2:~ # swapinfo
>>> Device          1K-blocks     Used     Avail Capacity
>>> /dev/nda1p1      67108820  7964000  59144820    12%
>>> /dev/nda2p1      67108820  7987416  59121404    12%
>>> Total           134217640 15951416 118266224    12%
>>> root@pkgfactory2:~ # gpart show /dev/nda1
>>> =>       40  134217648  nda1  GPT  (64G)
>>>          40          8        - free -  (4.0K)
>>>          48  134217640     1  freebsd-swap  (64G)
>>> root@pkgfactory2:~ # gpart show /dev/nda2
>>> =>       40  134217648  nda2  GPT  (64G)
>>>          40          8        - free -  (4.0K)
>>>          48  134217640     1  freebsd-swap  (64G)
>>
>> Both of these drives are mispartitioned. For best performance they should have at least 1MB if not more alignment for the first partition. Do we really need to add code to handle mis-aligned partition performance problems that are pilot error?
Getting off-topic, yet I have to ask you back. Are you talking about the AFT (4KB-sector) problem? If so, I believe it is sufficient to align to an 8-512B-sector boundary. I understand that the recommendation for the 1MB boundary alignment comes from Windows and Linux. (https://superuser.com/questions/1483928/why-do-windows-and-linux-leave-1mib-unused-before-first-partition) Mac OS X has been found to just align to an 8-512B-sector boundary. (https://forums.macrumors.com/threads/aligning-disk-partitions-to-prevent-ssd-wear.952904/)
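The two boundaries under discussion can be sanity-checked with a tiny sketch (hedged; `aligned_to()` is an illustrative name, not a real API): with 512-byte LBAs, an 8-sector boundary is 4KiB, the Advanced Format sector size, and a 2048-sector boundary is 1MiB, so a partition starting at LBA 48 like the gpart output above is 4KiB-aligned but not 1MiB-aligned.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Illustrative helper: does a partition starting at start_lba (in
 * 512-byte sectors) begin on a boundary_bytes boundary?  8 sectors
 * = 4KiB, 2048 sectors = 1MiB.
 */
static bool
aligned_to(uint64_t start_lba, uint64_t boundary_bytes)
{
	const uint64_t sector_bytes = 512;

	return ((start_lba * sector_bytes) % boundary_bytes == 0);
}
```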
I think this is a good idea.
The other question I have: why is the swap pager going so nuts? Normally, back pressure keeps the source of I/Os from overwhelming the lower layers of the system (which is why allocations are failing and you've moved to a preallocation). Why isn't that the case here? We're flooding it with more traffic than it can process. It would be, imho, much better for it to schedule less I/O at a time than to have these mechanisms to cope with flooding. Are there other drivers that have other issues? Or is nvme somehow special inherently (or because it advertises too much I/O space up the stack?)
In D45379#1035197, @seigo.tanimura_gmail.com wrote:
> In D45379#1035175, @olce wrote:
>> Prior to any analysis, I assume you're doing this to fix some memory allocation deadlocks in the swap path. Could you describe a concrete scenario where you experienced a problem that this patch solves? Were you swapping to some regular partition, an encrypted one, or some vnode or zvol?
There are two swaps on the GPT partitions of two NVMe drives.
root@pkgfactory2:~ # swapinfo
Device          1K-blocks     Used     Avail Capacity
/dev/nda1p1      67108820  7964000  59144820    12%
/dev/nda2p1      67108820  7987416  59121404    12%
Total           134217640 15951416 118266224    12%
root@pkgfactory2:~ # gpart show /dev/nda1
=>       40  134217648  nda1  GPT  (64G)
         40          8        - free -  (4.0K)
         48  134217640     1  freebsd-swap  (64G)
root@pkgfactory2:~ # gpart show /dev/nda2
=>       40  134217648  nda2  GPT  (64G)
         40          8        - free -  (4.0K)
         48  134217640     1  freebsd-swap  (64G)
I'd like to see data that shows this is the hot path. I recently added counters to count the splits.
Ideally, it would use whatever cloning zones already exist and not invent a new one for the nvme drive, imho.
Do you get the same, better or worse performance if you just disable this feature of the nvme drive entirely?
May 26 2024
Fixed in https://reviews.freebsd.org/D45374