Whitespace fixes
Jan 29 2017
Jan 26 2017
Jan 25 2017
Replace pcpu_find with get_pcpu()
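For context, a hypothetical before/after of that substitution; get_pcpu() and pcpu_find() are the real KPIs, but the surrounding function and pin/unpin scaffolding are invented for illustration:

#include <sys/param.h>
#include <sys/pcpu.h>
#include <sys/sched.h>

static void
example_percpu_access(void)
{
	struct pcpu *pc;

	sched_pin();		/* keep the thread on this CPU */
	pc = get_pcpu();	/* was: pc = pcpu_find(curcpu); skips the id lookup */
	/* ... use pc ... */
	sched_unpin();
}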
Jan 24 2017
Jan 22 2017
Jan 21 2017
Jan 14 2017
Return sched_unpin() to original order, rename fields
Jan 7 2017
Dec 23 2016
Use constant for pcpu padding. Collapse sysmaps struct.
Dec 20 2016
Dec 19 2016
Dec 18 2016
Aug 7 2016
Jul 24 2016
In D3989#151738, @jhibbits wrote: Did some light testing, but not sure if I triggered anything.
May 29 2016
May 20 2016
May 19 2016
Add comment noting size limit of each message
Limit number of messages to 42 to prevent exhaustion/short allocation on 32-bit systems
May 16 2016
In D3989#135429, @jhibbits wrote: I think qemu can boot powerpc64 (see https://wiki.freebsd.org/QemuRecipes). What kinds of testing should be done? I have a Book-E board I could boot-test with and do basic SATA I/O; would that be enough? I also have some PowerMac hardware, though I'd need a week or two before I can test with that.
May 15 2016
Garrett, were you planning to add the limit on nmsgs and commit this? If not, I can do it.
This one's been sitting for a while. Anything I can do to test this out on my own? qemu maybe?
Apr 14 2016
I think it's better to revert r292255 than to add this. Like Hans mentioned, it adds complexity, and the fact that it forces entire buffers to be bounced seems wrong to me. The added memcpy probably isn't that big of a deal, since performance goes out the window if you're bouncing anyway. But the memory those bounce pages come from is likely to be somewhat precious; this could make large transfers use a lot more of it, which makes me nervous.
Feb 1 2016
In D5155#109705, @kib wrote: Copy/paste from my mail to the OP:
> $ svn diff iic.c
> Index: iic.c
> ===================================================================
> --- iic.c	(revision 295081)
> +++ iic.c	(working copy)
> @@ -303,6 +303,10 @@
> 	buf = malloc(sizeof(*d->msgs) * d->nmsgs, M_IIC, M_WAITOK);
>
> 	error = copyin(d->msgs, buf, sizeof(*d->msgs) * d->nmsgs);
> +	if (error != 0) {
> +		free(buf, M_IIC);
> +		return (error);
> +	}
>
> 	/* Alloc kernel buffers for userland data, copyin write data */
> 	usrbufs = malloc(sizeof(void *) * d->nmsgs, M_IIC, M_WAITOK | M_ZERO);

This just continues the original bug. If you look at the line above the changed line, you will see the multiplication by the user-supplied value, which could, e.g., overflow and result in a short buffer being malloced. Then the copy trashes the kernel heap.

That said, even if the overflow does not occur, the user-controlled malloc(9) either causes a DoS by over-allocating kernel memory, or a panic by exhausting the kmem address space on 32-bit arches.
I might've missed something, but it looks like we don't actually use the contents of buf for anything if error is set. We might store some of that bogus data in usrbufs at line 311, but we'll avoid using it.
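To make kib's point concrete, here is a minimal sketch of the validation being asked for: bound the user-supplied nmsgs before it feeds the malloc(9) size computation. The helper iic_copyin_msgs() and the IIC_MAX_MSGS spelling are invented for illustration; this is a sketch of the idea, not the committed fix.

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/malloc.h>
#include <dev/iicbus/iic.h>	/* struct iic_msg, struct iic_rdwr_data */

static MALLOC_DEFINE(M_IIC, "iic", "iic bulk buffer");	/* mirrors iic.c */

#define	IIC_MAX_MSGS	42	/* assumed cap; see the limit entry above */

static int
iic_copyin_msgs(struct iic_rdwr_data *d, struct iic_msg **bufp)
{
	struct iic_msg *buf;
	int error;

	/*
	 * Reject bogus counts before they reach the size computation: an
	 * unchecked nmsgs can overflow the multiplication into a short
	 * allocation (so the copyin trashes the heap), and even without
	 * overflow it lets userland demand arbitrary kernel memory.
	 */
	if (d->nmsgs == 0 || d->nmsgs > IIC_MAX_MSGS)
		return (EINVAL);

	buf = malloc(sizeof(*buf) * d->nmsgs, M_IIC, M_WAITOK);
	error = copyin(d->msgs, buf, sizeof(*buf) * d->nmsgs);
	if (error != 0) {
		free(buf, M_IIC);
		return (error);
	}
	*bufp = buf;
	return (0);
}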
Nov 18 2015
Nov 14 2015
Oct 28 2015
In D3986#83956, @adrian wrote: Just build some software or something that'll churn the VM and do disk IO. :)
In D3986#83554, @adrian wrote: Can you test this out via a qemu-system-mips (from qemu-devel) package?
https://github.com/freebsd/freebsd-wifi-build/wiki/MipsQemuEmulatorImages
Oct 23 2015
Remove unclear comment on PHYS_TO_VM_PAGE().
Noted by: avg
Remove unclear comments on PHYS_TO_VM_PAGE().
Noted by: avg
Oct 22 2015
Oct 21 2015
In D888#82509, @royger wrote: In D888#80533, @jah wrote: Thinking about this some more, I wonder if it would be better to have add_bounce_page() do coalescing in much the same way as _bus_dmamap_addseg() already does. You'd still have the 2-page array in struct bounce_page, but add_bounce_page() would go back to just taking one address. It would look at the tail of map->bpages (if that exists). If that last bounce page can still fit the new segment with the right alignment, then its datacount will be increased and datapage[1] will be set if necessary. Otherwise, a new bounce page will be pulled from the bz queue.
With that scheme, you wouldn't need the dedicated load_ma or count_ma functions; the existing load_phys and load_buffer would "do the right thing". We'd also make more efficient use of bounce pages when maxsegsz is much smaller than a page. Right now, if maxsegsz is, say, 512 and those 512-byte segments get bounced, each one will waste a whole bounce page. Of course, I doubt that's a common use case.
A downside would be that you'd probably need to duplicate some of the coalescing logic in count_phys and count_pages to avoid over-requesting bounce pages.
Just a thought I had; there are probably holes in that scheme that I haven't thought of.
The solution you are proposing seems fine, but I don't think I will have time to look into it for a couple of weeks. Do you mind if I commit this now so we can get unmapped I/O for blkfront?
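For illustration, a minimal sketch of the coalescing scheme jah describes. The struct bounce_page layout and the add_bounce_page() fallback follow the thread's vocabulary, but the fields and the helper are assumptions, not the committed code:

#include <sys/param.h>
#include <sys/queue.h>

/* Assumed shape of the private busdma structures discussed above. */
struct bounce_page {
	STAILQ_ENTRY(bounce_page) links;
	bus_addr_t	busaddr;	/* bus address of the bounce page */
	vm_offset_t	vaddr;		/* kernel VA of the bounce page */
	bus_size_t	datacount;	/* bytes already bounced into it */
	bus_size_t	dataoffs;	/* offset of client data in its page */
	vm_page_t	datapage[2];	/* client page(s) backing the data */
};

static bus_addr_t
add_bounce_page_coalesced(bus_dma_tag_t dmat, bus_dmamap_t map,
    vm_paddr_t addr, bus_size_t size)
{
	struct bounce_page *bpage;

	bpage = STAILQ_LAST(&map->bpages, bounce_page, links);
	if (bpage != NULL && bpage->datacount + size <= PAGE_SIZE &&
	    (bpage->busaddr + bpage->datacount) % dmat->alignment == 0) {
		/*
		 * The new segment fits behind the previous one with the
		 * required alignment: extend the tail page instead of
		 * consuming a fresh one.  (A complete version would set
		 * datapage[1] only when the segment starts in a second
		 * client page, per the comment above.)
		 */
		bpage->datapage[1] = PHYS_TO_VM_PAGE(addr);
		bpage->datacount += size;
		return (bpage->busaddr + bpage->datacount - size);
	}
	/* No room in the tail page: take a new one from the bz queue. */
	return (add_bounce_page(dmat, map, addr, size));
}

The duplicated accounting jah mentions would live in count_phys and count_pages, which would need the same tail check so the load path never reserves bounce pages it does not consume.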
Oct 20 2015
Narrow the scope of the temporary bounce buffer mappings
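In practice that narrowing means taking the pmap_quick_enter_page() mapping only for the duration of each copy, instead of holding it across the whole sync pass. A minimal sketch of the POSTREAD copy-out, reusing the assumed bounce_page fields from the sketch above (the quick-enter/remove KPIs are real):

static void
bounce_page_copyout(struct bounce_page *bpage)
{
	vm_offset_t va;

	/* Short-lived per-CPU mapping of the unmapped client page. */
	va = pmap_quick_enter_page(bpage->datapage[0]);
	bcopy((void *)bpage->vaddr, (void *)(va + bpage->dataoffs),
	    bpage->datacount);
	/* Drop the temporary mapping as soon as the copy is done. */
	pmap_quick_remove_page(va);
}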
Oct 18 2015
Oct 17 2015
Fixed confusion of client page addr vs. bounce page addr, trimmed trailing whitespace
Comment on unaligned pages, move page array assertion back to dma_dcache_sync()
Re-adding comment, moving POSTWRITE check to beginning of _bus_dmamap_sync() as is done for armv5 and mips
Oct 16 2015
Oct 13 2015
Thinking about this some more, I wonder if it would be better to have add_bounce_page() do coalescing in much the same way as _bus_dmamap_addseg() already does. You'd still have the 2-page array in struct bounce_page, but add_bounce_page() would go back to just taking one address. It would look at the tail of map->bpages (if that exists). If that last bounce page can still fit the new segment with the right alignment, then its datacount will be increased and datapage[1] will be set if necessary. Otherwise, a new bounce page will be pulled from the bz queue.
Port unmapped bounce buffer alignment fix from x86
Port unmapped bounce buffer alignment fix from x86
Port unmapped bounce buffer alignment fix from x86
Oct 12 2015
I like the memdesc idea, or at least something like it. I also agree that it's too tall an order to do everywhere right now, especially if Xen needs this right away.
Oct 11 2015
Adding assert on non-contiguous pages
Oct 1 2015
- Simplify loop in sync_sl
- Fix logic errors around call to sync_buf and bounce buffer cache maintenance
- Remove sync_buf call for unmapped case: cache operations in pmap_quick_enter_page()->pmap_kenter()->pmap_fix_cache() make sync_buf irrelevant
Sep 16 2015
Style fixes. Initialize sl outside the mapping loop, since it is incremented inside the loop.
Sep 1 2015
Making sl coalescing logic a little clearer, preventing coalescing in the case where the sl has a non-contiguous KVA but the physical pages happen to be adjacent.
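Sketched as a predicate, with an assumed struct sync_list modeled on the entry above, the stricter test reads:

#include <sys/param.h>

/* Assumed shape of a sync-list entry. */
struct sync_list {
	vm_offset_t	vaddr;		/* KVA of the range, 0 if unmapped */
	bus_addr_t	paddr;		/* physical start of the range */
	bus_size_t	datacount;	/* length of the range */
};

static bool
sl_can_coalesce(const struct sync_list *sl, vm_offset_t vaddr,
    bus_addr_t paddr)
{
	/* The physical range must continue the previous entry... */
	if (sl->paddr + sl->datacount != paddr)
		return (false);
	/*
	 * ...and for a mapped buffer the KVA must continue it too:
	 * physically adjacent pages with discontiguous KVA must not be
	 * merged, or cache maintenance would run past the end of the
	 * first mapping.
	 */
	if (sl->vaddr != 0 && sl->vaddr + sl->datacount != vaddr)
		return (false);
	return (true);
}

Keeping the KVA check conditional on sl->vaddr preserves coalescing for unmapped loads, where only physical contiguity matters.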