A future commit will add a timeout when polling virtio_random, in case the host provider is slow.
I don't think a timeout is a great mitigation for random - the pending request's buffer will still be written in guest memory, and we need the queue completion to know when we can free that memory.
I think we should really try to scale down our "fast source" entropy collection instead. 2kB/s is pretty ridiculous. I really think we could just collect 256 bits x2 into the zero pool per reseed (64B), and maybe change the Fortuna minimum reseed interval to 1s (from 100ms).
Also - I believe the fX random design reseeds much less frequently. I haven't looked at its "fast source" consumption recently, but it may be a lot lower.
It's in the softc; there's only one outstanding request at a time (as was true before). The only thing that's missing (in the vtrnd patch) is that its detach needs to virtqueue_drain to ensure there isn't an outstanding request that just timed out and hasn't yet completed.
If that's an option, then sure. =-) I didn't/don't understand Fortuna's design well enough to make claims about what a reasonable rate is. If we can securely drop it down to <= 1kB/s on average then yeah, we can probably just not care about this at all.
Yeah, we can. Fortuna doesn't call for a random source that provides 512x32 bits of entropy every 100ms. I think (but am not totally sure) our random_kthread() -> random_sources_feed() dates to the earlier Yarrow CSPRNG, which only had two pools -- i.e., 16x less entropy requested per call.
We should (1) eliminate its use of read_rate (which becomes a dead variable we can remove, along with read_rate_increment()), and (2) reduce the frequency at which random_kthread invokes random_sources_feed() to below 10 Hz. We may still want to process the other data in random_kthread at 10 Hz (hc_entropy_ring, hc_entropy_fast_accumulator), but we can use a counter to invoke random_sources_feed only once in ten or a hundred iterations (or something like that). Alternatively, restructure random_sources_feed to fill smaller chunks at a time, spreading the request volume out over time. (This is probably good for RDSEED, in addition to virtio-rng.) Or, like, use an additional thread, but that seems unnecessary.
OK, I can throw together a patch to this effect. Based on your description here, we're tentatively punting on the idea of just collecting into the zero pool for now, right? My understanding is that 128 bits into the zero pool is the bare minimum, and that 256 bits per pool per second is a good compromise between that and where we're at now.
Great! Yeah, collecting just into P0 is probably more controversial / against the Fortuna design in general. It makes sense in a VM environment, but is difficult to generalize. We can get into a more reasonable range just by slowing down our read rate. There's no real minimum here -- this is just the background process that feeds sync entropy sources into the pools. The front end of Fortuna is supposed to block reseeds until the zero pool has sufficient input ("MinPoolSize", something like 32-64 bytes), although I'm not sure we actually do that.