In D35784#812281, @mav wrote:
Without this it does not explicitly hold boot, but it blocks the GEOM event thread on a taste attempt, which also blocks boot, and blocks most GEOM commands as well.
Jul 14 2022
Jul 12 2022
At this point, I'm only worried about da's state machine holding up boot now where it didn't used to do so... And I'm not sure, exactly, how to measure that worry ...
Remove incorrect comment.
Move softc->state = NDA_STATE_NORMAL; before disk_create().
Update after Warner's comments.
I generally like this, even though I had a bunch of nit-picky comments and questions.
I only see one real issue for sure, but maybe a second race (but maybe not, I'm asking to make sure it can't happen).
Remove unneeded now disk_attr_changed() call.
Mar 25 2022
Oct 5 2021
Restore commas, fix compat shims.
mav added a reviewer for D32305: cam(4): Limit search for disks in SES enclosure by single bus: imp.
Jun 3 2021
Dec 7 2020
Nov 14 2020
If there is some use for high priority, then it works for SATA and it is simple; but since it is an absolute priority, the difference between normal and high is too big to use without a very good reason. For low priority, though, which would be useful for background operations even with absolute priorities, I haven't found a working implementation so far, except potentially NVMe. I am hoping to get some comments from ${HDD vendor} about it.
Have we reached any conclusions about whether to do any of the ideas suggested in this phabricator thread?
Nov 2 2020
Just for information, I've also experimented with isochronous NCQ priority (AKA NCQ streaming). I hoped that setting a large timeout would reduce the request priority. But at least on a WD Red I see no priority effects until the timeout is reached, and I see a priority increase (again with the IOPS problem) when it is. It is good to see that the feature really works, but unfortunately I see no use for it in this shape. I see plenty of use cases for low priority (which SATA/SAS drives don't provide), but not really for high priority (which they do, but not very efficiently). NVMe seems to have a usable priority concept and some devices support it; I'm just not sure how important priority is for already-fast NVMe devices.
Oct 29 2020
I've tried the opposite approach of adding a LOWPRIO flag instead, using it only for background operations in a few places, and marking BIOs without it high-priority in ATA/SCSI. But while testing it I noticed that disk random IOPS drop to almost non-NCQ levels on a mix of different priorities. And I am measuring the same on both WD and HGST. I don't understand what is going on there; maybe I am missing something, but that is an unacceptable trade-off to me. I've uploaded my current patch in case somebody wishes to play with it, but I probably won't commit it in this state.
Oct 28 2020
Priority works on top of the tag, affecting only commands tagged as SIMPLE. The ORDERED and HEAD tags still have their function, as their fencing semantics are mandatory, while priority is a softer hint for a scheduler.
I think that the intention of the feature from the manufacturers is for background sync and scrub workloads, not filesystem consistency operations. Regarding SAS, tags have always been the mechanism for setting priority and creating barriers. Ordered tags, head-of-queue tags, etc. mpr/mps support these, but they're largely unused because BIO_ORDERED was removed from FreeBSD. I'm not aware of SAS adopting the same SATA priority scheme.
More experiments with SATA WD Reds show that priorities there are more like absolute with a deadline. On a WD20EFRX-68E under a heavy random workload, I see low-priority requests, in the presence of high-priority ones, all delayed for about a second, while on a WD80EFZX-68U they are all delayed for about 5 seconds. Such a big difference makes me think it is unusable for differentiating sync vs async requests, but it should still be good for differentiating read/write vs scrub/initialization/etc. Unfortunately I still haven't found a capable SAS drive to check there, but considering SATL directly maps one onto the other, I suppose they should have the same (absolute) semantics.
Oct 24 2020
In D26912#600288, @mckusick wrote:
I could see a use for at least three levels of priority: low priority (default) for asynchronous I/O, mid-level priority for synchronous reads, high priority for synchronous writes.
Oct 23 2020
It would be trivial to request high priority for synchronous writes in bwrite() and if desired synchronous reads in bread(). That would have effects for several filesystems.
Interesting. I have patches to iosched that mark metadata requests and topqueue them, but don't try to prioritize in the drive. They don't handle writes, though (we don't need them, but it's one of the reasons I've not committed)... It gives a modest boost to open latency, but not as much as the async open Chuck is working on.
Oct 22 2020
Oct 17 2020
imp added inline comments to D25476: Fix use after free panic and state transitions in mps(4) and mpr(4).
I've had good luck running this for 2 or 3 firmware cycles now at Netflix... We're down 3/4 in panics, and at least part of the remaining 1/4 appears to be due to a small mismerge of two files, which left them out of sync with the rest of the state machine...
Jul 21 2020
I'm deploying this to one or two of the machines that we see panics from every few days.
Jun 30 2020
I only looked at mpr, but this looks good to me. It's good you have a way to recreate it. I have random machines failing with this (about ~0.001%/day), but can't find one to recreate it on. I like this approach, and it is similar to the one I took with other commands and target reset. Thanks for fixing.
Jun 26 2020
ken requested review of D25476: Fix use after free panic and state transitions in mps(4) and mpr(4).
Apr 22 2020