- User Since
- Sep 28 2014, 7:22 PM (229 w, 3 d)
Thanks. The original reporter of the bug discovered that too.
Tue, Feb 19
Mon, Feb 11
This one boots, but panics when I kldload pfsync:
Sun, Feb 10
Sat, Feb 9
I have no objections to the patch, but I don't know enough about HFSC to meaningfully review this, I'm afraid.
Fri, Feb 8
I'm pretty sure that ipv6_activate_all_interfaces="YES" in /etc/rc.conf is the trigger for the previous panic.
That also matches nicely with the backtrace. Presumably nd6_dad_timer() doesn't enter the NET_EPOCH.
This panics my test vm:
Sat, Feb 2
Tue, Jan 29
Did you happen to do any benchmarking on this? I'd have expected "Sync pfil hooks epoch(9)" to give us a (small) performance improvement, but my initial test shows a small reduction in forwarding performance (with pf loaded).
Mon, Jan 28
Fri, Jan 25
This panics during the netipsec and pf regression tests:
Thu, Jan 24
To be clear: I don't object to your change. It's just that ifconfig is a bit of a mess and any opportunity for cleanup is tempting. Especially if I don't have to do it.
Jan 21 2019
Jan 18 2019
Yeah, looking at my original commit I did manage to lose that flag. I’ve not tested yet, but this patch is almost certainly correct.
Jan 15 2019
Also, I’d really, really like some basic tests for altq. It should be reasonably straightforward to do something based on the existing pf tests. At least, if altq can work on top of epairs.
Jan 12 2019
Jan 8 2019
Jan 7 2019
I’ll try to review asap, but I’m on holiday with limited internet access right now.
Dec 29 2018
Dec 8 2018
Dec 5 2018
Dec 3 2018
Fix build with ! VIMAGE
Address man page remark.
Dec 2 2018
Minor improvements, style and such.
Small man page addition.
Dec 1 2018
Nov 29 2018
Single pfsyncintr swi, rather than one per bucket.
Well, the remarks you've had have been pretty helpful so far.
Nov 28 2018
And here's my flame graph: https://people.freebsd.org/~kp/pfsync/buckets.svg
This is a slightly rough first version. I'm sure there are some remaining style issues (and there might be cleanup issues too).
I have an alternative version in https://reviews.freebsd.org/D18373.
It splits pfsync up into buckets, with their own lock. Performance is slightly better than this, and it's in many ways a simpler change.
Nov 26 2018
Nov 22 2018
Nov 21 2018
Nov 17 2018
Here are flame graphs for my test setup:
- plain forwarding
- pf only
- unpatched pfsync
- pfsync with this (and the next) patch
Nov 16 2018
I've been thinking about what we could do to split up the lock, rather than take this approach, and I'm not sure how.
Nov 15 2018
For what it's worth, OpenBSD doesn't drop here, but henning did add a /* XXX why clear instead of drop? */ comment a few years ago.
My performance tests show an improvement of 80-90% with this change. The swi_sched is costly, yes, but it's a pretty significant improvement in throughput.
Nov 14 2018
Nov 1 2018
I'm not currently aware of any remaining issues here. It's certainly possible that some still exist, but I've seen no reports and the tests don't provoke them.
I think of this change mostly as an 'I will support this now' statement.
Oct 28 2018
Committed as r339836
Assorted pf VIMAGE fixes have been done, and pf is now usable inside VIMAGE jails.
An alternative approach (VIMAGE based) was committed instead.
Oct 21 2018
Make :0 work without ()
Oct 20 2018
Oct 13 2018
We also don't want to miss the notification and have an interface hanging around forever because pf never releases its reference, do we?
Oct 12 2018
Yes. Especially because pfsync currently does a lot of work (including taking a single lock, which really kills throughput) even if it won't actually end up sending the sync packets out.
It should be possible to change pfsync so we don't virtualise the callback pointers and instead immediately check if it's enabled and return if not. We would have to be careful about locking around the up/down (and defer) flags, so it'd be a larger change than this.
Fix comment style
The following also resolves the use-after-free issue.
Oct 11 2018
I’ll try to review in the next day or two, but it needs tests ;) (in a separate commit)
I think there are currently no functional tests for if_bridge, but it should be pretty straightforward to spin up a vnet jail, create a bridge in it and then use libifconfig to add/remove interfaces.
It’d have the very nice side-effect of getting us some basic bridge tests too.
Oct 10 2018
Each vnet may choose to set pfsync0 down, which (see pfsyncioctl() / SIOCSIFFLAGS) sets or clears these callbacks.
And while it's okay for a single vnet to decide it doesn't want pfsync to be enabled, it's not okay for that decision to affect all vnets.