entropy: add quick check before taking lock
Needs Review (Public)

Authored by wma on Oct 29 2021, 9:59 AM.

Details

Reviewers
kevans
jhb
kib
mw
kd
Group Reviewers
secteam
Summary

Add a cheap (racy) check at the beginning: if there is already no
room for new events, there is no need to take the lock.
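
For illustration, a minimal sketch of the idea, assuming the ring layout of sys/dev/random/random_harvestq.c (this is not the exact diff):

static void
random_harvest_internal(const void *entropy, u_int size,
    enum random_entropy_source origin)
{
	u_int ring_in;

	/*
	 * Racy, lock-free full check.  The unlocked reads may be stale,
	 * but the worst case is one dropped event or one wasted lock
	 * acquisition, both harmless for entropy harvesting.
	 */
	ring_in = (harvest_context.hc_entropy_ring.in + 1) % RANDOM_RING_MAX;
	if (ring_in == harvest_context.hc_entropy_ring.out)
		return;

	RANDOM_HARVEST_LOCK();
	/* ... existing enqueue path, which re-checks fullness under the lock ... */
	RANDOM_HARVEST_UNLOCK();
}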

Diff Detail

Lint: Skipped
Unit: Tests Skipped

Event Timeline

wma requested review of this revision. Oct 29 2021, 9:59 AM
wma created this revision.

This change should come with a motivational ministat graph.

I think it might be better to move to (uncontended) PCPU rings instead, though.

Sorry for the late response.
I don't have physical access to a device that exhibits the problem, but I got some data from someone who does.

In order for this lock to become an issue, we need a rather beefy system with a lot of network traffic.
The benchmark below was done on a server with 2x24-core Xeon Gold CPUs and 4x50G ixl(4) NICs.
Note that some custom packet processing was applied (ASQ.ko), which might make the pmcstat output look a bit uncommon.
Without this patch a throughput of ~96 Gb/s is achieved, about half of line rate.

PMC: [CPU_CLK_UNHALTED.THREAD_P] Samples: 167924 (100.0%) , 0 unresolved
%SAMP IMAGE FUNCTION CALLERS
10.9 kernel random_harvest_queue ether_input_internal
9.7 kernel _mtx_lock_spin_cooki random_harvest_queue
3.1 if_ixl.ko ixl_mq_start_locked ixl_poll
1.4 kernel ether_input_internal ether_input
1.4 kernel __rw_rlock_int ip_input
1.1 ASQ.ko sf_flowtable_ipv4_lo ip_forward
1.0 if_ixl.ko ixl_rxeof ixl_poll
1.0 kernel bzero
0.9 kernel ifa_ref ip_forward
0.8 kernel netisr_dispatch_src ether_input_internal
0.8 kernel ip_output_ex ip_forward
0.7 kernel in_broadcast ip_output_ex
0.7 kernel ip_input netisr_dispatch_src
0.7 ASQ.ko sf_hash_get_entry sf_flowtable_ipv4_lookup

Now, with the patch applied, a throughput of 192 Gb/s is achieved.

PMC: [CPU_CLK_UNHALTED.THREAD_P] Samples: 118930 (100.0%) , 0 unresolved
%SAMP IMAGE FUNCTION CALLERS
12.7 if_ixl.ko ixl_mq_start_locked ixl_poll
8.8 kernel __rw_rlock_int ip_input
6.7 kernel in_broadcast ip_output_ex
4.9 kernel ifa_ref ip_forward
4.9 ASQ.ko sf_flowtable_ipv4_lo ip_forward
4.8 kernel _rw_runlock_cookie_i ip_input
4.7 kernel ether_input_internal ether_input
4.5 kernel ifa_free ip_forward
4.2 if_ixl.ko ixl_rxeof ixl_poll
4.2 kernel ip_input netisr_dispatch_src
3.7 kernel bzero m_pkthdr_init:1.9 ip_forward:1.8
3.1 kernel ip_output_ex ip_forward
2.5 ASQ.ko sf_hash_get_entry sf_flowtable_ipv4_lookup
1.6 kernel ether_output ip_output_ex
1.6 if_ixl.ko ixl_mq_start sf_qos_handoff
1.5 kernel atomic_cmpset_int drbr_enqueue
1.4 kernel atomic_fcmpset_long ixl_rxeof
1.3 if_ixl.ko ixl_txeof ixl_poll
1.3 kernel bounce_bus_dmamap_lo bus_dmamap_load_mbuf_sg
1.2 kernel ip_forward ip_input
1.2 kernel memcpy ether_output
1.0 kernel uma_zalloc_arg m_getjcl
0.8 kernel bus_dmamap_load_mbuf
0.8 if_ixl.ko ixl_refresh_mbufs ixl_rxeof
0.8 ASQ.ko sf_flowtable_ipv4_bu ip_forward
0.7 kernel uma_zfree_arg mb_free_ext
0.6 kernel cpu_search_highest cpu_search_highest
0.6 kernel acpi_cpu_idle_mwait acpi_cpu_idle
0.6 kernel in_cksumdata
0.5 kernel in_cksum_skip ip_output_ex
0.5 kernel critical_exit
0.5 kernel mb_ctor_pack uma_zalloc_arg
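
For reference, the profiles above look like pmcstat(8) top-mode output; a hedged guess at the invocation, with the event name taken from the header:

pmcstat -T -S CPU_CLK_UNHALTED.THREAD_P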

Consider just disabling Ethernet entropy collection instead. In fact, I thought it was off by default in approximately the FreeBSD 13 timeframe. Maybe even 12.

We should improve our entropy collection subsystem for high-rate drivers, such as with per-CPU buffers, sampling, and cheap mixing (XOR). But this doesn’t do that.

In D32725#746021, @cem wrote:

Consider just disabling Ethernet entropy collection instead. In fact, I thought it was off by default in approximately the FreeBSD 13 timeframe. Maybe even 12.

The problem is that in some use cases we might not have a lot of good entropy sources, with Ethernet being the only good candidate.

We should improve our entropy collection subsystem for high-rate drivers, such as with per-CPU buffers, sampling, and cheap mixing (XOR). But this doesn’t do that.

I'll look into using PCPU queues, but this seems like quite an intrusive change.
I'd appreciate it if you could share some hints w.r.t. the design/implementation.
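
For illustration, a rough sketch of what per-CPU collection with cheap XOR mixing could look like; every name below (fast_pool, fast_harvest, FAST_POOL_WORDS) is hypothetical, not an existing kernel API:

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/pcpu.h>

/*
 * Hypothetical per-CPU fast path: each CPU XOR-folds samples into a
 * small private pool with no locking; a periodic task (not shown)
 * would drain the pools into the real harvest queue under the
 * existing lock.
 */
#define	FAST_POOL_WORDS	4

struct fast_pool {
	uint64_t	pool[FAST_POOL_WORDS];
	u_int		pos;
};

static struct fast_pool fast_pools[MAXCPU];	/* DPCPU(9) would be nicer */

static void
fast_harvest(uint64_t sample)
{
	struct fast_pool *fp;

	critical_enter();		/* stay on the current CPU */
	fp = &fast_pools[curcpu];
	fp->pool[fp->pos] ^= sample;	/* cheap mixing: just XOR */
	fp->pos = (fp->pos + 1) % FAST_POOL_WORDS;
	critical_exit();
}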

The problem is that in some use cases we might not have a lot of good entropy sources, with Ethernet being the only good candidate.

Your benchmark platform was "a server with 2x24-core Xeon Gold CPUs and 4x50G ixl(4) NICs," which definitely has other good entropy sources.

In D32725#746813, @cem wrote:

Your benchmark platform was "a server with 2x24-core Xeon Gold CPUs and 4x50G ixl(4) NICs," which definitely has other good entropy sources.

@cem What are the other good entropy sources you can think of?

At a minimum, you have rdseed (rdrand). But I expect there are other non-Ethernet sources present as well.
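
For context, a hedged userland illustration of pulling from that source via compiler intrinsics (the kernel wires RDRAND/RDSEED up through its own entropy source, not code like this):

#include <immintrin.h>
#include <stdint.h>

/*
 * Fills *out and returns 1 on success; returns 0 if the on-chip
 * entropy conditioner is temporarily exhausted, in which case the
 * caller can retry or fall back to RDRAND.  Build with -mrdseed.
 */
static int
hw_seed64(uint64_t *out)
{
	unsigned long long v;

	for (int i = 0; i < 10; i++) {
		if (_rdseed64_step(&v) != 0) {
			*out = (uint64_t)v;
			return (1);
		}
		_mm_pause();	/* brief backoff before retrying */
	}
	return (0);
}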