
Fix gmirror BIO_FLUSH and sync collision queuing.
Closed, Public

Authored by markj on Dec 20 2017, 7:50 PM.

Details

Summary

This revision aims to address a few BIO ordering bugs:

BIO_FLUSH requests are dispatched directly to the mirrors from
g_mirror_start() rather than being queued for processing by the gmirror
worker thread. This means that a BIO_FLUSH may "jump ahead" of a
preceding BIO_WRITE, which is undesirable. Address this by handling
BIO_FLUSH requests the same as BIO_{READ,WRITE,DELETE}. We take care to
avoid breaking mirrors if a BIO_FLUSH comes back with EOPNOTSUPP.
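
A minimal sketch of the dispatch change, using simplified, standalone types rather than the real gmirror structures (the actual change is in g_mirror_start() and the worker loop; everything below is illustrative only):

/*
 * Simplified stand-ins for struct bio and the gmirror softc.  The point
 * is that BIO_FLUSH now takes the same queued path as READ/WRITE/DELETE
 * instead of being dispatched directly to the mirrors.
 */
#include <stddef.h>

enum bio_cmd { BIO_READ, BIO_WRITE, BIO_DELETE, BIO_FLUSH, BIO_GETATTR };

struct bio {
    enum bio_cmd     bio_cmd;
    struct bio      *bio_next;
};

struct mirror_softc {
    struct bio      *queue_head;    /* work queue drained by the worker */
    struct bio     **queue_tailp;   /* points at the final bio_next slot */
};

/* Append a request for the worker thread, preserving arrival order. */
static void
mirror_enqueue(struct mirror_softc *sc, struct bio *bp)
{
    bp->bio_next = NULL;
    *sc->queue_tailp = bp;
    sc->queue_tailp = &bp->bio_next;
    /* ... wake up the worker thread ... */
}

static void
mirror_start(struct mirror_softc *sc, struct bio *bp)
{
    switch (bp->bio_cmd) {
    case BIO_READ:
    case BIO_WRITE:
    case BIO_DELETE:
    case BIO_FLUSH:     /* previously sent straight to the mirrors */
        mirror_enqueue(sc, bp);
        break;
    default:
        /* BIO_GETATTR and friends are still handled inline. */
        break;
    }
}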

When a BIO_WRITE or BIO_DELETE request overlaps with an offset that's
actively being synchronized (i.e., we're currently reading at that
offset in order to copy its contents to a synchronizing mirror, or we're
currently writing to that offset as part of a block copy), the request
is placed in the regular_delayed queue. Once the colliding
synchronization BIO completes, BIOs from the delayed queue are put back
on the head of the work queue. However, this allows writes to be
reordered when synchronizing. Fix this by freezing all I/O requests
(including reads) when a collision occurs.
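
To illustrate the freeze, again as a sketch and not the actual code: assume the softc from the sketch above additionally carries a frozen flag and a regular_delayed list, and that struct bio carries bio_offset/bio_length. The helper names here are hypothetical.

static int  sync_collision(struct mirror_softc *, off_t, off_t);
static void delayed_enqueue(struct mirror_softc *, struct bio *);
static void dispatch_to_mirrors(struct mirror_softc *, struct bio *);

static void
handle_regular_request(struct mirror_softc *sc, struct bio *bp)
{
    /*
     * Once any request has collided with an in-flight sync BIO, *all*
     * subsequent regular I/O (reads included) is delayed, so requests
     * cannot be reordered around the collision.
     */
    if (sc->frozen || sync_collision(sc, bp->bio_offset, bp->bio_length)) {
        sc->frozen = 1;
        delayed_enqueue(sc, bp);
        return;
    }
    dispatch_to_mirrors(sc, bp);
}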

The regular_delayed queue may also reorder reads and writes to the same
offset, since reads were never placed in the regular_delayed queue. Fix
this the same way as above: once a BIO_WRITE or BIO_DELETE collides with
a synchronization BIO, all subsequent regular I/O requests are placed in
the regular_delayed queue until that initial collision is over, at which
point all BIOs in regular_delayed are moved back to the main I/O queue.
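
And the release side, continuing the same sketch: when the last colliding synchronization BIO completes, the delayed queue (delayed_head/delayed_tailp, maintained like the main queue) is spliced back, in order, onto the head of the work queue.

static void
sync_collision_over(struct mirror_softc *sc)
{
    /*
     * Splice regular_delayed onto the head of the main work queue so
     * the original arrival order is preserved, then unfreeze.
     */
    sc->frozen = 0;
    if (sc->delayed_head == NULL)
        return;
    *sc->delayed_tailp = sc->queue_head;
    if (sc->queue_head == NULL)         /* main queue was empty */
        sc->queue_tailp = sc->delayed_tailp;
    sc->queue_head = sc->delayed_head;
    sc->delayed_head = NULL;
    sc->delayed_tailp = &sc->delayed_head;
}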

Test Plan

Peter Holm ran stress2 scenarios with this patch applied.

In-tree tests pass.


Event Timeline

markj edited the test plan for this revision.

On first reading, I like it. I don't have the time to give it a very close reading, though.

sys/geom/mirror/g_mirror.c
1980

I love these comments. It always takes me time to recall what each of these cases was for...

I've only looked at the first half of this patch. Most of my comments aren't requests for any change, but clarifying questions for me, as someone not very familiar with gmirror. Gotta run, will look more later.

sys/geom/mirror/g_mirror.c
920–921

Should we flag disks broken for read errors? Or at least, we could track this with two different flags, if it's useful. Edit: Oh, I see, this is a straightforward extraction of some code below into a subroutine. Well, feel free to ignore this functional change question then.

930–936

What do these different bumps do, and why do we want different ones for each of these cases? Might be nice to comment for folks unfamiliar with gmirror (me :-)). (Edit: I see this is just a straightforward extraction of earlier code into a subroutine, but it'd still be nice to document while you're here :-).)

977

Nice! More failpoints is a good thing.

988–996

(Trivial style nit: this could be extracted as a subroutine.)

1018–1020

Why do we have a different completion process for the very special case of a gmirror with a single disk? It seems like we shouldn't special-case it.

1040–1041

That's an interesting interpretation of "some requests succeeded." Are we starting to kick out the broken mirrors for WRITE/DELETE at this point? I suppose for disks that don't handle FLUSH it may be OK to keep them?

sys/geom/mirror/g_mirror.c
920–921

I'm not sure yet. In OneFS 7 or earlier we actually added a state, "unreadable," for this case. We attempt to re-synchronize unreadable mirrors rather than kick them out, in the hope that overwriting the bad sector(s) will allow reads to succeed again. I'm undecided as to whether that change should be upstreamed; it's untested AFAIK and has probably rotted somewhat in the BSD10/11 merges. There's an internal bug assigned to me to investigate further and sync with upstream.

930–936

The BUGS section of the gmirror man page says, "There should be a section with an implementation description." :) I started writing one but it's not ready yet.

The bump flags just tell the gmirror worker thread to increment a pair of generation counters associated with the gmirror: the genid (bumped when a mirror is kicked out as a result of errors) and the syncid (bumped when the gmirror is known to have stale mirrors). ENXIO is handled with a syncid bump: see r323612.
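
Roughly, in sketch form (names simplified, and write_metadata_all() is a hypothetical stand-in for updating the on-disk metadata on each active mirror):

#define MIRROR_BUMP_GENID   0x01    /* a mirror was kicked out on error */
#define MIRROR_BUMP_SYNCID  0x02    /* mirrors are known to be stale */

static void write_metadata_all(struct mirror_softc *);

/* Called from the worker loop; bump_flags is set by completion paths. */
static void
worker_check_bumps(struct mirror_softc *sc)
{
    if (sc->bump_flags & MIRROR_BUMP_GENID) {
        sc->bump_flags &= ~MIRROR_BUMP_GENID;
        sc->genid++;
        write_metadata_all(sc);     /* persist the new genid */
    }
    if (sc->bump_flags & MIRROR_BUMP_SYNCID) {
        sc->bump_flags &= ~MIRROR_BUMP_SYNCID;
        sc->syncid++;
        write_metadata_all(sc);     /* persist the new syncid */
    }
}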

977

Yup, they've been pretty handy for writing tests.

1018–1020

What's happening here is that a read from one of the mirrors failed. If there's only one active mirror left, it means that we can't read at all and so we return the error to upper layers. If there's more than one, we effect a retry from a different mirror by requeuing the read BIO.
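
In sketch form, with error accounting and locking omitted and hypothetical helper names (deliver_to_upper_layers(), fail_to_upper_layers(), active_disk_count()):

static void
regular_read_done(struct mirror_softc *sc, struct bio *bp, int error)
{
    if (error == 0) {
        deliver_to_upper_layers(bp);
        return;
    }
    /* The failing mirror has already been flagged or kicked out. */
    if (active_disk_count(sc) <= 1) {
        /* Nothing left to read from: fail the request upward. */
        fail_to_upper_layers(bp, error);
        return;
    }
    /*
     * Requeue the read; disk selection considers only healthy
     * mirrors, so the retry is served by a different disk.
     */
    mirror_enqueue(sc, bp);
}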

1040–1041

Yup, g_mirror_regular_request_error() kicked out the mirror that returned an error. Note that I specifically short-circuit the case where BIO_FLUSH came back with EOPNOTSUPP - we don't want to kick the mirrors out in that case at least.
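
The short-circuit is roughly this shape (a fragment of the completion path, reusing the hypothetical helpers above):

    if (bp->bio_cmd == BIO_FLUSH && error == EOPNOTSUPP) {
        /*
         * The disk simply doesn't implement BIO_FLUSH.  Report the
         * error upward, but don't treat the mirror as broken.
         */
        fail_to_upper_layers(bp, error);
        return;
    }
    /* Otherwise an errored WRITE/DELETE/FLUSH kicks the mirror out. */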

sys/geom/mirror/g_mirror.c
1018–1020

Oh, ok.

Does something enforce that the bio actually goes to a different disk the 2nd time around?

sys/geom/mirror/g_mirror.c
1018–1020

Yes. g_mirror_regular_request_error() will kick out the disk that returned the error (or at least flag it in some way), and we'll only select a healthy mirror when the retry request comes around.

A question: Does it make sense to 'resilver' those sectors that return EIO rather than simply kicking the disk to the curb, and only kick it to the curb if the resilver fails to read back correctly?

In D13559#285371, @imp wrote:

A question: Does it make sense to 'resilver' those sectors that return EIO rather than simply kicking the disk to the curb, and only kick it to the curb if the resilver fails to read back correctly?

Yes, I think so. The gmirror mods in OneFS already do this in a kind of half-baked way (I didn't write them :)); it's related to the "unreadable" state I mentioned in another comment in this review. I'm planning to rework that for upstream, along with a mechanism to proactively scan for such sectors. In particular, I'd like to add a read-balancing algorithm which attempts to read the requested sector(s) from all mirrors rather than just one, and be able to temporarily switch to that for the purpose of reading back the whole gmirror with dd(1) or so.

I plan to commit this in the next day or two if there are no objections.

I'm OK with this. It's an improvement and clearly better, without any new issues I can spot.

I'm a little uneasy still about gmirror, but to erase that discomfort is too high a bar for me to set for this review :)

This revision is now accepted and ready to land. (Jan 2 2018, 4:42 PM)
In D13559#287445, @imp wrote:

I'm a little uneasy still about gmirror, but to erase that discomfort is too high a bar for me to set for this review :)

Hm, are there any specific issues you can describe here? I plan to continue addressing problems in gmirror.

This revision was automatically updated to reflect the committed changes.