In D51219#1169594, @glebius wrote: I tried zpool sync, and after this in zdb I observe the same pool txg that has been there for the last few hours.
Wed, Jul 16
Wed, Jul 9
In D51219#1169581, @glebius wrote: In D51219#1169554, @mav wrote: The labels are updated on each TXG commit, which may happen many times per second. So races are quite probable there, and unlike a resurrected old disk they are completely unpredictable and also harder to recover from by just removing that disk.
Just for my education, at what events is the txg of a pool updated? I see that with normal filesystem I/O it is not updated. Is it only on pool property changes?
The situation of deleting a disappeared disk that later reappears is orders of magnitude more probable than a crash during a label update.
I wonder what happens if the system crashes after updating the label on only one disk out of several. In such a case vdev_label_read_config() will return the config of that vdev for that TXG, but no other vdev has a config for that TXG yet. So if we have already probed some other vdevs, the added vdev_free(spa->spa_root_vdev); will wipe the root vdev and all the children configs for the pool and restart it from scratch, populating only the vdev where the currently probed disk belongs. But the already probed vdevs, I guess, will not be re-probed, leaving the pool-wide configuration incomplete. It does not seem to be implemented here, looking at the UINT64_MAX always passed to vdev_label_read_config(), but IIRC the normal pool import code tries to find the maximum common TXG where it has a sufficient quorum. It might not be the maximum TXG, but the previous one, or theoretically even the one before that, if something went very wrong.
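For illustration only, here is a minimal userland sketch of that "maximum common TXG with sufficient quorum" selection; the function and variable names are hypothetical and this is not the actual ZFS import code around vdev_label_read_config():

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Sort TXGs in descending order. */
static int
cmp_txg_desc(const void *a, const void *b)
{
	uint64_t x = *(const uint64_t *)a, y = *(const uint64_t *)b;

	return (x < y ? 1 : (x > y ? -1 : 0));
}

/*
 * Return the highest TXG that at least "quorum" labels have reached.
 * A crash after updating a single label leaves that label one TXG
 * ahead, so its TXG fails the quorum and the previous TXG is chosen.
 */
static uint64_t
best_common_txg(uint64_t *txgs, int nlabels, int quorum)
{
	if (quorum <= 0 || quorum > nlabels)
		return (0);
	qsort(txgs, nlabels, sizeof(uint64_t), cmp_txg_desc);
	/* After a descending sort, the first "quorum" entries are all
	 * greater than or equal to txgs[quorum - 1]. */
	return (txgs[quorum - 1]);
}

int
main(void)
{
	/* Three of four labels are at TXG 1000; one was updated to 1001
	 * just before a crash. */
	uint64_t txgs[] = { 1001, 1000, 1000, 1000 };

	printf("import at txg %ju\n", (uintmax_t)best_common_txg(txgs, 4, 3));
	return (0);
}

With the example input above the sketch picks TXG 1000, matching the recovery behavior described for the normal import path.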
Jul 1 2025
I haven't tracked the new things there; I only hope that the IDs are right. But I do wonder about naming uniformity: previously we've had "_H_" in constant names, while we've never had "_U_".
May 28 2025
systat/top: Update ZFS sysctl names
systat/top: Update ZFS sysctl names
systat/top: Update ZFS sysctl names
May 26 2025
May 25 2025
systat/top: Update ZFS sysctl names
May 20 2025
May 12 2025
May 8 2025
May 5 2025
Apr 9 2025
Mar 18 2025
As with any other patch of this kind, you may either fix the "issue" or break sound completely by producing an invalid configuration. Unless the configuration is already broken and requires fixing, this would better be handled with /dev/dsp redirection, as we discussed before.
Mar 3 2025
Feb 19 2025
Feb 18 2025
Feb 14 2025
RGB mode looks like some magic with the number of constants. I wonder whether it is possible or reasonable to read the defaults from the BIOS, where the changed values are stored?
Seems to make sense, but I don't know anything about it.
I have no objections, but I wonder whether maximum of 3 vs 7 might be model-specific?
I have no objections if it works, but I haven't had that ASUS for quite a while now.
Feb 4 2025
Jan 27 2025
Just to be sure, DA_FLAG_PACK_INVALID does not mean there is no media. There might be media, just not the one we opened originally.
This change requires DA_FLAG_PACK_INVALID to be reliable to avoid false periph invalidations. I am not sure the current asc == 0x3a check covers all the cases; I have a feeling that 0x28/0x00 "Not ready to ready change, medium may have changed" is an even more widespread case of media change, and so of pack invalidation.
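For clarity, a hedged sketch (not the actual da(4) patch) of treating both sense codes mentioned above as media-change indications; the helper name da_media_changed() is hypothetical:

#include <cam/cam.h>
#include <cam/cam_ccb.h>
#include <cam/scsi/scsi_all.h>

/*
 * Return non-zero if the completed CCB carries sense data suggesting the
 * medium is gone or may have been changed.
 */
static int
da_media_changed(union ccb *ccb)
{
	int error_code, sense_key, asc, ascq;

	if (scsi_extract_sense_ccb(ccb, &error_code, &sense_key,
	    &asc, &ascq) == 0)
		return (0);
	/* 0x3a/xx: Medium not present. */
	if (asc == 0x3a)
		return (1);
	/* 0x28/0x00: Not ready to ready change, medium may have changed. */
	if (asc == 0x28 && ascq == 0x00)
		return (1);
	return (0);
}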
This change is unneeded. ascq <= table_entry->ascq is already checked a few lines above.
I've noticed this issue too while looking at the previous patch.
Jan 23 2025
In D48595#1109250, @jhb wrote: I'm not quite sure where to put the nvlist_error calls TBH.
I don't remember nvlists much, but shouldn't errors be checked via nvlist_error() sometimes?
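As a small illustration of that pattern: libnv records errors inside the nvlist itself, so a single nvlist_error() check after a batch of nvlist_add_*() calls is usually enough. The field names below are placeholders, not taken from the patch under review (build with -lnv):

#include <sys/nv.h>

#include <err.h>
#include <errno.h>
#include <stdlib.h>

int
main(void)
{
	nvlist_t *nvl;

	nvl = nvlist_create(0);
	if (nvl == NULL)
		err(1, "nvlist_create");

	nvlist_add_string(nvl, "command", "status");
	nvlist_add_number(nvl, "id", 42);

	/* One error check covers all of the adds above. */
	if (nvlist_error(nvl) != 0) {
		errno = nvlist_error(nvl);
		err(1, "nvlist construction failed");
	}

	nvlist_destroy(nvl);
	return (0);
}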
I guess instead of 1 some error codes could be expected; they just never happened.
Jan 17 2025
This seems consistent with other places where we use "SS_FATAL | ENXIO".
While I guess u3g.4 might indeed include the full list of supported devices, simply because IIRC many of them require some quirks to attach and operate, snd_hda was written with the goal of working on unknown hardware. The list of devices in the code is in most cases just for cosmetics and user convenience. That is why I would not like them to be listed as more than examples. We actually had a list in the man page before, but I axed it out at some point.
Jan 15 2025
I understand the desire to avoid extra work, but if the server load pattern really changes so that KTLS is no longer required, the allocations will stay cached, occupying memory for no reason if there is no explicit pressure. For example, considering that ZFS might shrink the ARC or at least limit its growth if there is no free memory, it might be a waste.
isp: Fix abort issue introduced by previous commit
isp: Fix abort issue introduced by previous commit
Jan 8 2025
isp: Fix abort issue introduced by previous commit
mav committed rGe6c96c7af717: Revert "isp: Fix abort issue introduced by previous commit" (authored by mav).
Revert "isp: Fix abort issue introduced by previous commit"
isp: Fix abort issue introduced by previous commit
Jan 7 2025
Jan 3 2025
I don't remember whether ASPM has anything to do with enabling ports, so I wonder whether it is a curious case of opposite bugs, but if it helps, I see no problem.
Dec 23 2024
isp: Improve task aborts handling
isp: Fix use after free in aborts handling
isp: Improve task aborts handling
isp: Fix use after free in aborts handling
Dec 10 2024
Dec 9 2024
isp: Improve task aborts handling
isp: Fix use after free in aborts handling
Dec 4 2024
mav added inline comments to D47745: intr/x86: merge pic_{dis,en}able_source() call into pic_{dis,en}able_intr().
Dec 1 2024
hwpmc: Restore line lost in previous commit
Nov 28 2024
hwpmc: Restore line lost in previous commit
Oct 26 2024
I've never used it myself, so I don't have a strong opinion, but as a next step somebody will want to block reservation conflicts, then something else, and so on again and again...
Oct 23 2024
Looks good to me. Thanks.
Oct 21 2024
Oct 14 2024
The xpt_done_td queues are used only for non-MP-safe things and, I think, for the error path, both of which are relatively rare.
Oct 2 2024
Sep 26 2024
mav committed rGd89090334a32: ure(4): Add ID for LAN port in Thinkpad OneLink+ dock (authored by mav).
ure(4): Add ID for LAN port in Thinkpad OneLink+ dock
mav committed rG8748daf670ee: ure(4): Add ID for LAN port in Thinkpad OneLink+ dock (authored by mav).
ure(4): Add ID for LAN port in Thinkpad OneLink+ dock
Sep 19 2024
mav committed rGa1bb5bdb0ab6: ure(4): Add ID for LAN port in Thinkpad OneLink+ dock (authored by mav).
ure(4): Add ID for LAN port in Thinkpad OneLink+ dock
Sep 11 2024
Aug 30 2024
In D46469#1059445, @jrtc27 wrote: What do you both think of https://reviews.freebsd.org/P646 as a more systematic way of addressing this?
Aug 29 2024
I suppose three printf()s may cause garbled output if other printf()s are used by something else.
It seems BIO_GETATTR allows overriding GEOM::attachment, though I am not saying it is very useful.
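For reference, a sketch of how a GEOM class could answer such a BIO_GETATTR request from its start method; g_example_start() and the returned string are made up for illustration, not from an actual patch:

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/errno.h>
#include <sys/bio.h>
#include <geom/geom.h>

static void
g_example_start(struct bio *bp)
{
	/* g_handleattr_str() completes the bio itself when the
	 * attribute name matches. */
	if (bp->bio_cmd == BIO_GETATTR &&
	    g_handleattr_str(bp, "GEOM::attachment", "example") != 0)
		return;

	/* Everything else is rejected in this toy example. */
	g_io_deliver(bp, EOPNOTSUPP);
}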
Jul 21 2024
nvmecontrol: Fix "Workloadd" typo
nvmecontrol: Fix "Workloadd" typo
Jul 12 2024
nvmecontrol: Fix "Workloadd" typo
Jul 3 2024
Looks fine to me, just a couple of nits.
Jul 2 2024
Have you looked at the similar Linux code? It would be good to be consistent, or at least similar. I haven't looked deeply, but foreach_nfs_host_cb() seems to support multiple hosts.
Jun 27 2024
Fix SATA NCQ error recovery after 25375b1415
Jun 24 2024
Looks odd to me, but OK.
In D45660#1042742, @ken wrote: So here is what the debugging log message in isp_getpdb() shows. isp0 and isp1 are connected to LTO-6 tape drives via an 8Gb switch. isp2 is directly connected to an LTO-6 in loop mode:
isp0: Chan 0 handle 0x0 Port 0xfffc01 flags 0x0 curstate 77 laststate 77
isp0: Chan 0 handle 0x1 Port 0x011b26 flags 0x40a0 curstate 46 laststate 46
isp0: Chan 0 handle 0x7fe Port 0xfffffe flags 0x0 curstate 44 laststate 44
isp0: Chan 0 handle 0x7fe Port 0xfffffe flags 0x0 curstate 44 laststate 44
isp1: Chan 0 handle 0x0 Port 0xfffc01 flags 0x0 curstate 77 laststate 77
isp1: Chan 0 handle 0x1 Port 0x011a26 flags 0x40a0 curstate 46 laststate 46
isp1: Chan 0 handle 0x7fe Port 0xfffffe flags 0x0 curstate 44 laststate 44
isp1: Chan 0 handle 0x7fe Port 0xfffffe flags 0x0 curstate 44 laststate 44
isp2: Chan 0 handle 0x0 Port 0x000026 flags 0x40a0 curstate 46 laststate 46
It seems like a good tunable, except I am not getting the meaning of "only" there. Why not "always", "force", or something like that?
None of the QLogic documents I have says anything about NVMe, and this state field is declared as a byte there. I have no objections to this patch, but I am a bit curious what NVMe status we see there for non-NVMe devices.
Jun 14 2024
nvme: Fix panic on detach after ce75bfcac9cfe
Jun 7 2024
Add some AMD device IDs.
Jun 6 2024
May 29 2024
Differences of less than 4 (RQ_PPQ) are insignificant and are simply removed. No functional change (intended).
I suspect that the first thread was skipped to avoid stealing a thread that was just scheduled to a CPU but was unable to run yet.
I am not fully sure about the motivation of this change, but it feels wrong to me to have per-namespace zones. On a big system under heavy load UMA does a lot of work for per-CPU and per-domain caching, and doing it also per-namespace would multiply the resource waste. Also, the last time I touched it, I remember it was difficult for UMA to operate in severely constrained environments, since eviction of per-CPU caches is quite expensive. I don't remember how reservation works in that context, but I suppose that having dozens of small zones with small reservations but huge per-CPU caches is not a very viable configuration.
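To make the alternative concrete, here is a hedged sketch of a single module-wide UMA zone shared by all namespaces, so the per-CPU and per-domain caches exist only once; all names here are hypothetical:

#include <sys/param.h>
#include <sys/kernel.h>
#include <vm/uma.h>

struct ns_request {
	uint64_t	 nr_id;
	void		*nr_data;
};

static uma_zone_t ns_request_zone;

static void
ns_request_zone_init(void *arg __unused)
{
	/* One zone for the whole driver instead of one per namespace. */
	ns_request_zone = uma_zcreate("ns_request",
	    sizeof(struct ns_request), NULL, NULL, NULL, NULL,
	    UMA_ALIGN_PTR, 0);
}
SYSINIT(ns_request_zone, SI_SUB_DRIVERS, SI_ORDER_ANY,
    ns_request_zone_init, NULL);

Per-namespace code would then just call uma_zalloc(ns_request_zone, M_WAITOK) and uma_zfree(ns_request_zone, nr) against the shared zone.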
May 23 2024
Fix scn_queue races on very old pools