User Details
- User Since: Jun 3 2014, 6:27 PM (560 w, 7 h)
Wed, Feb 19
Tue, Feb 18
Fri, Feb 14
RGB mode looks like some magic with the number of constants. I wonder whether it would be possible or reasonable to read the defaults from the BIOS, where the changed values are stored?
Seems to make sense, but I don't know anything about it.
I have no objections, but I wonder whether the maximum of 3 vs 7 might be model-specific?
I have no objections if it works, but I haven't had that ASUS for quite a while now.
Tue, Feb 4
Mon, Jan 27
Just to be sure, DA_FLAG_PACK_INVALID does not mean there is no media. There might be media, just not the one we opened originally.
This change requires DA_FLAG_PACK_INVALID to be reliable to avoid false periph invalidations. I am not sure the current asc == 0x3a check covers all the cases; I have a feeling that 0x28/0x00 "Not ready to ready change, medium may have changed" is an even more widespread case of media change and so of pack invalidation.
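For reference, a minimal sketch of the two additional sense codes under discussion (the helper name is hypothetical, not taken from the da(4) code):

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative only: the standard SCSI ASC/ASCQ values involved. */
static bool
sense_indicates_media_change(uint8_t asc, uint8_t ascq)
{
	if (asc == 0x3a)			/* MEDIUM NOT PRESENT */
		return (true);
	if (asc == 0x28 && ascq == 0x00)	/* NOT READY TO READY CHANGE,
						   MEDIUM MAY HAVE CHANGED */
		return (true);
	return (false);
}
```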
This change is unneeded. ascq <= table_entry->ascq is already checked a few lines above.
I've noticed this issue too while looking at the previous patch.
Jan 23 2025
I don't remember the nvlist API well, but shouldn't errors sometimes be checked via nvlist_error()?
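For context, a minimal sketch of the error-checking pattern I mean, assuming the FreeBSD libnv API from sys/nv.h, where nvlist_add_*() failures are sticky and surfaced by nvlist_error() (function and key names here are hypothetical):

```c
#include <sys/nv.h>
#include <errno.h>

static int
build_nvl(nvlist_t **nvlp)
{
	nvlist_t *nvl;
	int error;

	nvl = nvlist_create(0);
	if (nvl == NULL)
		return (ENOMEM);
	nvlist_add_string(nvl, "key", "value");	/* errors are sticky */
	nvlist_add_number(nvl, "count", 42);
	error = nvlist_error(nvl);		/* check after the adds */
	if (error != 0) {
		nvlist_destroy(nvl);
		return (error);
	}
	*nvlp = nvl;
	return (0);
}
```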
I guess some error codes could be expected there instead of 1; they just never happened.
Jan 17 2025
This seems consistent with other places where we use "SS_FATAL | ENXIO".
While I guess u3g.4 might indeed include the full list of supported devices, simply because IIRC many of them require some quirks to attach and operate, snd_hda was written with the goal of working on unknown hardware. The list of devices in the code is in most cases just for cosmetics and user convenience. That is why I would not like it to be presented as more than examples. We actually had a list in the man page before, but I axed it at some point.
Jan 15 2025
I understand the desire to avoid extra work, but if the server load pattern really changes to no longer require KTLS, the allocations will stay cached, occupying memory for no reason unless there is explicit pressure. For example, considering that ZFS may shrink the ARC or at least limit its growth when there is no free memory, it could be a waste.
Jan 8 2025
Jan 7 2025
Jan 3 2025
I don't remember whether ASPM has anything to do with enabling ports, so I wonder if it is a curious case of opposite bugs, but if it helps, I see no problem.
Dec 23 2024
Dec 10 2024
Dec 9 2024
Dec 4 2024
Dec 1 2024
Nov 28 2024
Oct 26 2024
I've never used it myself, so I don't have a strong opinion, but as a next step somebody will want to block reservation conflicts, then something else, and so on...
Oct 23 2024
Looks good to me. Thanks.
Oct 21 2024
Oct 14 2024
The xpt_done_td queues are used only for non-MPSAFE things and, I think, for the error path, both of which are relatively rare.
Oct 2 2024
Sep 26 2024
Sep 19 2024
Sep 11 2024
Aug 30 2024
Aug 29 2024
I suppose three separate printf()s may cause garbled output if something else printf()s at the same time.
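One possible way around that, as a sketch only and assuming the sbuf(9) API is acceptable here, is to build the whole message in an sbuf and emit it with a single printf():

```c
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/sbuf.h>

static void
print_report(void)
{
	struct sbuf sb;
	char buf[128];

	/* Accumulate all the pieces, then print them atomically. */
	sbuf_new(&sb, buf, sizeof(buf), SBUF_FIXEDLEN);
	sbuf_printf(&sb, "part one");
	sbuf_printf(&sb, ", part two");
	sbuf_printf(&sb, ", part three\n");
	if (sbuf_finish(&sb) == 0)
		printf("%s", sbuf_data(&sb));
	sbuf_delete(&sb);
}
```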
It seems BIO_GETATTR allows a GEOM::attachment override, though I am not saying it is very useful.
Jul 21 2024
Jul 12 2024
Jul 3 2024
Looks fine to me, just a couple of nits.
Jul 2 2024
Have you looked at the similar Linux code? It would be good to be consistent, or at least similar. I haven't looked deeply, but foreach_nfs_host_cb() seems to support multiple hosts.
Jun 27 2024
Jun 24 2024
Looks odd to me, but OK.
It seems like a good tunable, except I don't get the meaning of "only" there. Why not "always", "force", or something like that?
None of the QLogic documents I have know anything about NVMe, and this state field is declared as a byte there. I have no objections to this patch, but I am a bit curious what NVMe status we see there for non-NVMe devices.
Jun 14 2024
Jun 7 2024
Jun 6 2024
May 29 2024
Differences of less than 4 (RQ_PPQ) are insignificant and are simply removed. No functional change (intended).
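For context, a minimal sketch of why sub-RQ_PPQ differences tend not to matter: run queues group RQ_PPQ (4) adjacent priorities into one bucket, so priorities within the same group land in the same queue (the helper below is illustrative, not the actual runq code):

```c
#define	RQ_PPQ	4		/* priorities per run queue */

/* Illustrative only: e.g. priorities 100..103 all map to bucket 25. */
static int
runq_bucket(int priority)
{
	return (priority / RQ_PPQ);
}
```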
I suspect the first thread was skipped to avoid stealing a thread that was just scheduled to a CPU but has not been able to run yet.
I am not fully sure about the motivation of this change, but it feels wrong to me to have per-namespace zones. On a big system under heavy load UMA does a lot of work for per-CPU and per-domain caching, and doing it also per-namespace would multiply the resource waste. Also, last time I touched it, I remember it was difficult for UMA to operate in severely constrained environments, since eviction of per-CPU caches is quite expensive. I don't remember how reservation works in that context, but I suppose that having dozens of small zones with small reservations but huge per-CPU caches is not a very viable configuration.
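For illustration only, a sketch of the single shared zone I would expect instead (all names are hypothetical); each additional uma_zcreate() per namespace would bring its own per-CPU and per-domain caches on top of this:

```c
#include <sys/param.h>
#include <vm/uma.h>

struct example_io {		/* hypothetical per-I/O structure */
	uint64_t	lba;
	uint32_t	len;
};

/* One zone shared by all namespaces, rather than one zone per namespace. */
static uma_zone_t io_zone;

static void
io_zone_init(void)
{
	io_zone = uma_zcreate("example_io", sizeof(struct example_io),
	    NULL, NULL, NULL, NULL, UMA_ALIGN_PTR, 0);
}
```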
May 23 2024
May 14 2024
I see no problems, but I have difficulty believing that timeout handlers running 1-2 times per second per queue pair can have any visible effect. Also, I am not happy to see a second place where timeouts are calculated. And 99/100 looks quite arbitrary too.
Mechanically it seems to make sense. I missed when the original transition happened, but if you say it is right, so be it.
May 7 2024
I wonder if there is any real architecture where a pointer load/store is non-atomic. For things that are going to be executed between once and never, it feels like you are over-engineering it. :)
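For reference, a minimal sketch of the atomic(9) pointer accessors in question, with hypothetical names; the plain-assignment alternative would just be hook = p and return (hook):

```c
#include <sys/types.h>
#include <machine/atomic.h>

static void *hook;		/* hypothetical once-set pointer */

/* Publisher: runs at most once. */
static void
hook_set(void *p)
{
	atomic_store_rel_ptr((volatile uintptr_t *)&hook, (uintptr_t)p);
}

/* Consumer. */
static void *
hook_get(void)
{
	return ((void *)atomic_load_acq_ptr((volatile uintptr_t *)&hook));
}
```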
I have no objections, if it is useful.
May 3 2024
Apr 27 2024
Apr 26 2024
I wonder what your queue depth is, such that one message per request per 90 seconds would cause a noticeable storm. Also, per-system limiting makes the output not very useful: by selecting the first message out of many, it does not say much about LUNs, ports, commands, etc., only that something is wrong. Thinking even wider, I find these messages printed on actual completion not very useful, since if these are not mere delays but something is really wrong, the commands may never complete and so the messages may never get printed. I wonder whether removing all this, checking the OOA queues once per second for stuck requests, and printing some digest would be more useful instead.
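As a rough illustration of what I mean, a hypothetical sketch only (none of these names are actual CTL structures, locking is omitted, and callout_init() plus an initial callout_reset() are assumed at setup time):

```c
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/callout.h>
#include <sys/kernel.h>
#include <sys/queue.h>
#include <sys/time.h>

struct stuck_io {			/* placeholder for an outstanding I/O */
	TAILQ_ENTRY(stuck_io)	links;
	time_t			io_start;	/* time_uptime at submission */
};
static TAILQ_HEAD(, stuck_io) ooa_queue = TAILQ_HEAD_INITIALIZER(ooa_queue);
static struct callout stuck_callout;

/* Once per second: count old requests and print a single digest line. */
static void
stuck_scan(void *arg)
{
	struct stuck_io *io;
	int stuck = 0;

	TAILQ_FOREACH(io, &ooa_queue, links) {
		if (time_uptime - io->io_start > 90)
			stuck++;
	}
	if (stuck != 0)
		printf("%d command(s) outstanding for more than 90 seconds\n",
		    stuck);
	callout_reset(&stuck_callout, hz, stuck_scan, arg);
}
```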
Apr 20 2024
Looks good to me, but if you wish, a couple of cosmetic thoughts.
Looks good to me, though it seems only cosmetic.