
Remove nested epochs from lagg(4).
ClosedPublic

Authored by jhb on Mar 27 2019, 12:52 AM.

Summary

lagg_bcast_start appeared to have a bug in that it was using the last
lagg port structure after exiting the epoch that was keeping that
structure alive. However, upon further inspection, the epoch was
already entered by the caller (lagg_transmit), so the epoch enter/exit
in lagg_bcast_start was actually unnecessary.
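
For context, a minimal sketch of the resulting shape (not the committed diff; LAGG_PORTACTIVE, lagg_enqueue, and the sc_ports/lp_entries names are assumptions about the surrounding code): the function does no epoch enter/exit of its own, because lagg_transmit has already entered the net epoch, and that is what keeps the port list and the saved last port stable.

/*
 * Sketch only, not the committed code; helper and field names
 * (LAGG_PORTACTIVE, lagg_enqueue, sc_ports, lp_entries) are assumptions.
 */
static int
lagg_bcast_start(struct lagg_softc *sc, struct mbuf *m)
{
	struct lagg_port *lp, *last = NULL;
	struct mbuf *m0;
	int err = 0;

	/*
	 * No LAGG_RLOCK()/LAGG_RUNLOCK() here: the caller,
	 * lagg_transmit(), already entered the net epoch, and that is
	 * what keeps the port list and 'last' alive until we return.
	 */
	CK_SLIST_FOREACH(lp, &sc->sc_ports, lp_entries) {
		if (!LAGG_PORTACTIVE(lp))
			continue;
		if (last != NULL) {
			m0 = m_copym(m, 0, M_COPYALL, M_NOWAIT);
			if (m0 == NULL) {
				m_freem(m);
				return (ENOBUFS);
			}
			if (lagg_enqueue(last->lp_ifp, m0) != 0)
				err = ENOBUFS;
		}
		last = lp;
	}
	if (last == NULL) {
		m_freem(m);
		return (ENOENT);
	}
	if (lagg_enqueue(last->lp_ifp, m) != 0)
		err = ENOBUFS;
	return (err);
}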

More generally, this commit removes uses of the net epoch (via
LAGG_RLOCK) to protect the list of ports in places where that list is
already protected, either by an LAGG_RLOCK held by a caller or by the
LAGG_XLOCK.

It also adds a missing epoch enter/exit in lagg_snd_tag_alloc while
accessing the lagg port structures. An ifp is still accessed via an
unsafe reference after the epoch is exited, but that is true in the
current code and will be fixed in a future change.
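
As a rough illustration of the enter/exit being added, here is a sketch only, assuming a lagg_link_active()-style port lookup and statement-style LAGG_RLOCK()/LAGG_RUNLOCK() macros; the committed diff may differ:

/*
 * Sketch only; lagg_link_active() and the exact LAGG_RLOCK()/
 * LAGG_RUNLOCK() spellings are assumptions about the surrounding code.
 */
static int
lagg_snd_tag_alloc(struct ifnet *ifp,
    union if_snd_tag_alloc_params *params, struct m_snd_tag **ppmt)
{
	struct lagg_softc *sc = ifp->if_softc;
	struct lagg_port *lp;
	struct ifnet *lp_ifp;

	LAGG_RLOCK();		/* enter the net epoch around the lookup */
	lp = lagg_link_active(sc, NULL);
	if (lp == NULL || lp->lp_ifp == NULL ||
	    (lp->lp_ifp->if_capenable & IFCAP_TXRTLMT) == 0) {
		LAGG_RUNLOCK();
		return (EOPNOTSUPP);
	}
	lp_ifp = lp->lp_ifp;
	LAGG_RUNLOCK();		/* exit before forwarding the request */

	/*
	 * As noted above, lp_ifp is still an unsafe reference once the
	 * epoch is exited; that matches the existing code and is left
	 * for a future change.
	 */
	return (lp_ifp->if_snd_tag_alloc(lp_ifp, params, ppmt));
}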

Test Plan
  • This has only been compile-tested, not run-tested. Both Drew and I ran kernels with the original change limited to lagg_bcast_start, but we weren't using the broadcast protocol, so those tests don't really count.

Diff Detail

Repository
rS FreeBSD src repository - subversion

Event Timeline

gallatin added inline comments.
sys/net/if_lagg.c
1546 (On Diff #55477)

It's just my opinion, but I generally prefer an "abort_with_rlock" kind of label and gotos to repeating the unlock in every error case. I'm always worried a later modification may miss the unlock.

Especially in this case, since the error is always EOPNOTSUPP.
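
For illustration, the body of the lagg_snd_tag_alloc sketch in the summary above could be funneled through a single label along these lines (a sketch; the label and surrounding names are not taken from the diff):

	/* Sketch of the single-exit error path being suggested. */
	LAGG_RLOCK();
	lp = lagg_link_active(sc, NULL);
	if (lp == NULL || lp->lp_ifp == NULL)
		goto abort_with_rlock;
	if ((lp->lp_ifp->if_capenable & IFCAP_TXRTLMT) == 0)
		goto abort_with_rlock;
	lp_ifp = lp->lp_ifp;
	LAGG_RUNLOCK();
	return (lp_ifp->if_snd_tag_alloc(lp_ifp, params, ppmt));

abort_with_rlock:
	LAGG_RUNLOCK();		/* the error is always EOPNOTSUPP here */
	return (EOPNOTSUPP);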

This revision is now accepted and ready to land. (Mar 27 2019, 1:14 PM)
sys/net/if_lagg.c
1546 (On Diff #55477)

In the send_tags branch that this came from, I've moved a chunk of this logic (choosing the 'lp') into a separate function and enter/exit the epoch in the caller, which collapses several of these unlock-return patterns, since the unlock and return are done once in the caller after the helper function returns.
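
Roughly, the described refactoring has this shape (a sketch under the same assumptions as the earlier one; the lagg_snd_tag_port helper name is hypothetical):

/*
 * Hypothetical helper: runs under the epoch entered by its caller and
 * returns NULL when no suitable port exists.
 */
static struct lagg_port *
lagg_snd_tag_port(struct lagg_softc *sc)
{
	struct lagg_port *lp;

	lp = lagg_link_active(sc, NULL);
	if (lp == NULL || lp->lp_ifp == NULL ||
	    (lp->lp_ifp->if_capenable & IFCAP_TXRTLMT) == 0)
		return (NULL);
	return (lp);
}

static int
lagg_snd_tag_alloc(struct ifnet *ifp,
    union if_snd_tag_alloc_params *params, struct m_snd_tag **ppmt)
{
	struct lagg_softc *sc = ifp->if_softc;
	struct lagg_port *lp;
	struct ifnet *lp_ifp;

	LAGG_RLOCK();			/* epoch entered once, in the caller */
	lp = lagg_snd_tag_port(sc);	/* all selection checks live in the helper */
	if (lp == NULL) {
		LAGG_RUNLOCK();
		return (EOPNOTSUPP);
	}
	lp_ifp = lp->lp_ifp;
	LAGG_RUNLOCK();			/* exited once, after the helper returns */
	return (lp_ifp->if_snd_tag_alloc(lp_ifp, params, ppmt));
}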

I've now tested this via netperf with RATELIMIT, using the various lagg protocols, under an INVARIANTS kernel.

This revision was automatically updated to reflect the committed changes.