There is a mismatch between how required bounce pages are counted in
_bus_dmamap_count_pages() and bounce_bus_dmamap_load_buffer().
This problem has been observed on the RISC-V VisionFive v2 SoC, which has
memory physically addressed above 4GB. This requires some bouncing for
the dwmmc driver. This driver has a maximum segment size of 2048 bytes.
When attempting to load a page-aligned 4-page buffer that requires
bouncing, we can end up counting 4 bounce pages for an 8-segment
transfer. These pages will be incorrectly configured to cover only the
first half of the transfer (4 x 2048 bytes). With this change, 8 bounce
pages are allocated and set up.
Note that _bus_dmamap_count_phys() does not appear to have this problem,
as it clamps the segment size to dmat->common.maxsegsz.
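
As a rough sketch (not the exact busdma code; the bounce-check helper
below is a stand-in for the real test), the counting loop with the same
clamp applied to the virtual-address walk would look like:

    while (vaddr < vendaddr) {
        sg_len = PAGE_SIZE - (vaddr & PAGE_MASK);
        /* Clamp each step to the tag's maximum segment size,
         * matching what _bus_dmamap_count_phys() does. */
        sg_len = MIN(sg_len, dmat->common.maxsegsz);
        paddr = pmap_kextract(vaddr);   /* assumes a kernel buffer */
        if (addr_needs_bounce(dmat, paddr))     /* stand-in check */
            map->pagesneeded++;
        vaddr += sg_len;
    }

With the clamp, the 4-page / 2048-byte example above advances at most
2048 bytes per iteration and so counts 8 bounce pages instead of 4.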
Transactions must meet the following conditions in order for the
miscalculation to manifest:
1. Maximum segment size smaller than 1 page
2. Transfer size exceeding 1 segment
3. Buffer requires bouncing
4. Driver uses _bus_dmamap_load_buffer(), not _bus_dmamap_load_phys()
or other variations
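
For illustration, a DMA tag along these lines would satisfy conditions
1 and 3 on a machine with memory above 4GB (names and values here are
made up for the example, not taken from the dwmmc driver):

    error = bus_dma_tag_create(
        bus_get_dma_tag(dev),       /* parent */
        4096, 0,                    /* alignment, boundary */
        BUS_SPACE_MAXADDR_32BIT,    /* lowaddr: bounce anything above 4GB */
        BUS_SPACE_MAXADDR,          /* highaddr */
        NULL, NULL,                 /* filter, filterarg */
        8 * 2048, 8,                /* maxsize, nsegments */
        2048,                       /* maxsegsz: condition 1 */
        0, NULL, NULL,              /* flags, lockfunc, lockfuncarg */
        &sc->dma_tag);

Loading a multi-page virtual buffer through such a tag with
bus_dmamap_load() then meets conditions 2 and 4 as well, once the
buffer itself lands above the 4GB boundary (condition 3).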
It seems unusual but not inconceivable that this exact combination has
not been encountered or has gone unnoticed on other architectures, which
also lack this check for max segment size. For example, the rockpro64
uses the dwmmc driver, but fails to meet condition 3, as its memory is
physically addressed below 4GB. Some other mmc drivers appear to fail
condition 1, etc.