User Details
- User Since
- Jun 3 2014, 6:27 PM
Tue, May 7
I wonder if there is any real architecture where a pointer load/store is non-atomic. For something that is going to be executed somewhere between once and never, it feels like you are over-engineering it. :)
I have no objections, if it is useful.
Fri, May 3
Sat, Apr 27
Fri, Apr 26
I wonder what your queue depth is, such that one message per request per 90 seconds causes a noticeable storm. Also, per-system rate limiting makes the output not very useful: by selecting the first message out of many, it says little about LUNs, ports, commands, etc., only that something is wrong. Thinking even wider, I find these messages not very useful when printed on actual completion, since if it is not just a delay but something really wrong, the commands may never complete and the messages may never get printed. I wonder whether removing all this, checking the OOA queues for stuck requests once per second, and printing some digest would be more useful.
Sat, Apr 20
Looks good to me, but if you wish, a couple of cosmetic thoughts.
Looks good to me, though it seems only cosmetic.
Wed, Apr 17
Wed, Apr 10
Mar 25 2024
Mar 21 2024
I don't have any chip documentation to know what is right here, so I just wonder whether unconditionally printing a bunch of raw hex numbers is expected. It feels like mpi3mr_print_fault_info() is another candidate for mpi3mr_dprint().
I am not a big fan of the kernel printing something in response to arbitrary user requests; it makes the logs messy. Is the error reporting to the user not enough here?
Mar 18 2024
Why not backport 506fe78c48 instead?
Mar 15 2024
My only complaint is that it puts the queue into the same cache line as the main queue, which may be modified by writers. But if you really need it for debugging, it is understandable.
Mar 6 2024
Mar 5 2024
On failure we've already notified consumers that the controller has failed. What will report that it is back? And is there even a device to send the request IOCTL to?
If you say it helps, I have no objections, but I see nvme_sim_controller_fail() destroying the SIM, so I am not sure you actually get here.
I wonder if there are any namespace-specific events? I remember the NVMe specs allow per-namespace SMART, but I don't remember many details now.
Feb 27 2024
Feb 5 2024
Jan 27 2024
Jan 19 2024
Jan 10 2024
There is already a panic in apei_ge_handler(), based on the total status severity. Do you consider it insufficient?
Jan 3 2024
Dec 30 2023
Not really my area, but it seems to make sense.
Dec 28 2023
Dec 27 2023
Dec 26 2023
This is already merged.