
Update armv6 busdma implementation to support unmapped/out-of-context userspace buffers
Closed, Public

Authored by jah on Oct 11 2015, 9:27 PM.
Details

Summary

Use pmap_quick* functions in armv6 busdma, for bounce buffers and cache maintenance. This makes it safe to sync buffers that have no VA mapping associated with the busdma map, but may have other mappings, possibly on different CPUs. This also makes it safe to sync unmapped bounce buffers in non-sleepable thread contexts.

Similar to r286787 for x86, this treats userspace buffers the same as unmapped buffers and no longer borrows the UVA for sync operations.
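For illustration, here is a minimal sketch of the pmap_quick* pattern the summary describes. This is not the committed code: sync_one_page() is a hypothetical helper, and the real code selects the cache operation from the sync type rather than always writing back.

static void
sync_one_page(vm_page_t m, vm_offset_t off, vm_size_t len)
{
	vm_offset_t va;

	/* Short-lived per-CPU mapping; usable from non-sleepable contexts. */
	va = pmap_quick_enter_page(m);
	cpu_dcache_wb_range(va + off, len);	/* example: writeback for PREWRITE */
	pmap_quick_remove_page(va);
}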

Submitted by: Svatopluk Kraus <onwahe@gmail.com> (earlier revision)

Diff Detail

Repository
rS FreeBSD src repository - subversion

Event Timeline

jah retitled this revision to "Update armv6 busdma implementation to support unmapped/out-of-context userspace buffers".
jah updated this object.
jah edited the test plan for this revision. (Show Details)
jah added a reviewer: skra.

Port unmapped bounce buffer alignment fix from x86

Rename dcache_discard_poc to dcache_dma_preread, as in Svata's original patch. This makes it clear that the operation is specific to the DMA preread case.
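For context, a rough sketch of why preread is a special case; this is hypothetical code, not the patch itself, and assumes arm's cpuinfo cache-line fields:

static void
dma_preread_sketch(vm_offset_t va, vm_size_t size)
{
	/*
	 * Invalidate the buffer's cache lines, but first write back any
	 * partially covered line at either end so adjacent unrelated
	 * data sharing those lines is not destroyed.
	 */
	if (va & cpuinfo.dcache_line_mask)
		cpu_dcache_wb_range(va, 1);
	if ((va + size) & cpuinfo.dcache_line_mask)
		cpu_dcache_wb_range(va + size, 1);
	cpu_dcache_inv_range(va, size);
}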

sys/arm/arm/busdma_machdep-v6.c
1093 ↗(On Diff #9346)

Is it safe to call PHYS_TO_VM_PAGE() with an unaligned address?

1101 ↗(On Diff #9346)

What is tested here? The argument passed to this function is already a physical address, so the test is useless, unless something is wrong right here in this function.

1230 ↗(On Diff #9346)

What is tested here? Even in the VM_PHYSSEG_SPARSE case, vm_page_array is contiguous. Thus, when the physical addresses are contiguous, the associated vm_page_t structs should be too. Note that sgsize cannot be bigger than PAGE_SIZE here.

1329 ↗(On Diff #9346)

I don't like calling this function for every item in the sync_list just to do nothing in the BUS_DMASYNC_POSTWRITE case.

1456 ↗(On Diff #9346)

Copy the comment above to the place where dma_preread_safe() is called.

1498 ↗(On Diff #9346)

I propose keeping the switch (op) with while (sl != end) loops for all the valid cases here, together with the panic(), and removing the panic() from dma_dcache_sync(). That way, truly nothing is done in the BUS_DMASYNC_POSTWRITE case.
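A sketch of the shape being proposed, with the loop bodies elided (the exact op combinations handled are an assumption):

switch (op) {
case BUS_DMASYNC_PREWRITE:
case BUS_DMASYNC_PREWRITE | BUS_DMASYNC_PREREAD:
	while (sl != end) {
		/* writeback (and invalidate if PREREAD is also set) */
		sl++;
	}
	break;
case BUS_DMASYNC_PREREAD:
	while (sl != end) {
		/* preread-safe invalidate */
		sl++;
	}
	break;
case BUS_DMASYNC_POSTREAD:
case BUS_DMASYNC_POSTREAD | BUS_DMASYNC_POSTWRITE:
	while (sl != end) {
		/* invalidate */
		sl++;
	}
	break;
case BUS_DMASYNC_POSTWRITE:
	/* nothing to do */
	break;
default:
	panic("unsupported combination of sync operations: 0x%08x\n", op);
}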

sys/arm/arm/busdma_machdep-v6.c
1093 ↗(On Diff #9346)

PHYS_TO_VM_PAGE() always uses atop() internally to index the page array, so it's safe to use a non-aligned address. But I like to be explicit about that kind of thing, since AFAIK that isn't documented anywhere.
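As an illustration (a hypothetical assertion, not part of the patch):

/*
 * atop() discards the within-page offset, so an unaligned physical
 * address resolves to the same vm_page_t as its page-aligned base.
 */
KASSERT(PHYS_TO_VM_PAGE(pa) == PHYS_TO_VM_PAGE(trunc_page(pa)),
    ("PHYS_TO_VM_PAGE() disagrees within a page"));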

1101 ↗(On Diff #9346)

Since this code can extend the sl to cover more than one page, I want to be sure the vm_page_t structs are contiguous in the page array as well. As you mention below, vm_page_array is contiguous even for VM_PHYSSEG_SPARSE, so that's maybe a little paranoid. But it is possible for PHYS_TO_VM_PAGE() to return fictitious pages that aren't part of the page array. AFAIK that shouldn't happen for DMA, but I'd rather have an assert to catch violations of that assumption than weird random data corruption.
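Roughly the shape of the assertion I mean (sl->pages and sl->paddr are assumed field names, for illustration only):

/*
 * When a sync-list entry is extended across a page boundary, the new
 * page's vm_page_t must immediately follow the previous one in
 * vm_page_array.
 */
KASSERT(PHYS_TO_VM_PAGE(curaddr) ==
    sl->pages + atop(curaddr - trunc_page(sl->paddr)),
    ("contiguous physical addresses, non-contiguous vm_page_ts"));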

1329 ↗(On Diff #9346)

You're right, that's a lot of wasted cycles. The POSTWRITE check needs to be moved outside the loop.

Re-adding comment, moving POSTWRITE check to beginning of _bus_dmamap_sync() as is done for armv5 and mips
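Sketch of the change, with everything except the early return elided:

static void
_bus_dmamap_sync(bus_dma_tag_t dmat, bus_dmamap_t map, bus_dmasync_op_t op)
{
	/*
	 * POSTWRITE requires no cache maintenance here, so bail out
	 * before walking the sync list, as armv5 and mips already do.
	 */
	if (op == BUS_DMASYNC_POSTWRITE)
		return;

	/* ... bounce-page handling and sync-list walk ... */
}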

sys/arm/arm/busdma_machdep-v6.c
1093 ↗(On Diff #9457)

Maybe; one extra AND makes no difference. However, I always think of PHYS_TO_VM_PAGE() as something that returns the vm_page_t associated with any physical address, so it's a matter of principle: don't set a bad precedent. I searched the tree, and both aligned and unaligned addresses are used. Notably, unaligned addresses are used in the vm subdirectory, which should be significant for how PHYS_TO_VM_PAGE() is meant to be used. Further, it's PHYS_TO... and not ALIGNED_PHYS_TO...

1101 ↗(On Diff #9457)

I understand your paranoia. However, it shouldn't end with testing kernel consistency in every place, even if such temptation is hard to resist, and even if it's only a KASSERT(). Further: (1) as you said, an sl can cover more than one page, but only the border pages are tested here; note that sgsize can be bigger than one page here. (2) More important than the consistency of the vm_page_t structs is the consistency of the physical addresses kept in them, since those are used for the temporary mappings in dma_dcache_sync().

Thus, if you want to test something, test in dma_dcache_sync() that the physical addresses returned by VM_PAGE_TO_PHYS(m) are contiguous for every vm_page_t involved.
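A hypothetical form of that check (sl->paddr, sl->pages, and sl->datacount are assumed field names), placed where the temporary mappings are made:

vm_page_t m;
vm_paddr_t pa;
int i, npages;

npages = atop(round_page(sl->paddr + sl->datacount) -
    trunc_page(sl->paddr));
for (i = 0; i < npages; i++) {
	m = sl->pages + i;
	pa = trunc_page(sl->paddr) + ptoa(i);
	KASSERT(VM_PAGE_TO_PHYS(m) == pa,
	    ("sync_list pages not physically contiguous"));
	/* pmap_quick_enter_page(m), cache op, pmap_quick_remove_page() */
}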

sys/arm/arm/busdma_machdep-v6.c
1093 ↗(On Diff #9457)

Well, like I said, I like to be explicit about alignment-related things. But there's more than one way to be explicit: instead of the AND, I can put a short comment here that makes it clear I'm not forgetting about alignment.

1101 ↗(On Diff #9457)

Hmm...well, the original patch I sent out did just that: it had the test in dma_dcache_sync, using VM_PAGE_TO_PHYS. The idea in moving the check to load_phys() and load_buffer() was to catch problems sooner and avoid dereferencing a completely bogus vm_page_t. Of course, that works better in load_buffer() where every page gets checked.

But, the way I originally had it is certainly cleaner, and you seem to like it better, so I'll go with that.

Comment on unaligned pages, move page array assertion back to dma_dcache_sync()

I'm going to re-test the patch (just to be sure) and then I'll let you know. ;) However, my testing of unmapped buffers is limited.

sys/arm/arm/busdma_machdep-v6.c
1307 ↗(On Diff #9470)

Trailing whitespace.

1355 ↗(On Diff #9470)

Trailing whitespace.

1366 ↗(On Diff #9470)

Trailing whitespace.

1496 ↗(On Diff #9470)

Trailing whitespace.

sys/arm/arm/busdma_machdep-v6.c
1412 ↗(On Diff #9470)

Cache maintenance must be done on the bounce page, so use bpage->busaddr instead of busaddr; the busaddr evaluation can then be removed.
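An abbreviated illustration of the requested change (dcache_wb_poc() and the field names follow the patch context):

/* before: writes back the client buffer's physical address */
dcache_wb_poc(bpage->vaddr, busaddr, bpage->datacount);
/* after: writes back the bounce page itself */
dcache_wb_poc(bpage->vaddr, bpage->busaddr, bpage->datacount);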

1435 ↗(On Diff #9470)

Same as above.

1455 ↗(On Diff #9470)

Same as above.

sys/arm/arm/busdma_machdep-v6.c
1412 ↗(On Diff #9470)

Ugh, of course. I almost did the same thing in the armv5 map.

It's probably better to use kmem_alloc_contig() to allocate the bounce pages as uncacheable, but that belongs in a separate change.
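A hypothetical sketch of that idea, using the kmem_alloc_contig() signature of this era (kernel_arena argument); the size, limits, and flags are placeholders:

vm_offset_t va;

/* Uncacheable bounce page: no cache maintenance needed when bouncing. */
va = kmem_alloc_contig(kernel_arena, PAGE_SIZE, M_NOWAIT | M_ZERO,
    0, dmat->lowaddr, PAGE_SIZE, 0, VM_MEMATTR_UNCACHEABLE);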

jah edited edge metadata.

Fixed confusion of client page addr vs. bounce page addr, trimmed trailing whitespace

sys/arm/arm/busdma_machdep-v6.c
1409 ↗(On Diff #9477)

Hmm... not relevant to this review, but thinking about it a little, uncacheable bounce buffers might not be such a good idea. For writes, the cache operation is a write-back without invalidate, so an uncached bounce buffer could slow down the word-sized accesses in the bcopy more than the cacheline-sized writeback slows us down. mips busdma tries to use uncached bounce buffers, but now I wonder whether that is a good idea.

skra edited edge metadata.

Mapped buffers were tested on a Pandaboard (external L2 cache) and an RPi2 (integrated L2 cache). Unmapped buffers were tested on the RPi2 (Quake 3).

sys/arm/arm/busdma_machdep-v6.c
1409 ↗(On Diff #9477)

It's always hard to say in complex cases.

This revision is now accepted and ready to land. Oct 20 2015, 12:32 PM
This revision was automatically updated to reflect the committed changes.