Sat, Jan 11
In D48337#1104688, @kib wrote: As I understand it, the patch causes the inactive scan to stop even if there is still a page shortage (page_shortage > 0), hoping that the laundry thread can keep up and do the necessary cleaning. Suppose that we have a mix of anonymous and dirty file pages and, for instance, no swap (or files backed by a slow device). Then it is possible that for a long time, despite the pages being queued for laundering, they cannot be cleaned, so the page_shortage is not going to go away.
Wouldn't such a patch need to ensure that either the laundry thread makes progress or the inactive scan continues? I understand that the scan would be kicked again, but I mean that the laundry thread should kick it as well if it cannot get rid of the page_shortage.
This looks okay to me, but I'm not very familiar with this part of the kernel.
In D48361#1104681, @ehem_freebsd_m5p.com wrote: I would have liked to see a comment left in the file; I suspect someone down the line will wonder why anyone bothered to optimize this case. This might help uniprocessor VMs, but multiprocessor VMs are also very common. While a single branch is cheap, this still means a tiny performance loss if you have more than one processor.
Fri, Jan 10
Remove the .depend cleanup hack, fix the TICKS_OFFSET definition, move to .bss.
I have no real idea how to test this code either.
Thu, Jan 9
Adjust comments.
In D48283#1103786, @jhb wrote: Why not use a taskqueue? That is what every other driver that needs this functionality does.
Apply reviewer suggestions, move symbol definitions to sys/kern/subr_ticks.s.
Wed, Jan 8
Restore the initialization of ticksl on 64-bit platforms. It's still useful for
testing the case where some upper bits are non-zero.
Add a header to define ticks. I am happy to put it in an existing
header but I'm not sure which one would be suitable.
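As a point of reference, a minimal sketch of what such a header might declare follows. The header guard, comments, and exact contents here are assumptions for illustration, not the actual patch:

```c
/*
 * Hypothetical sketch of a minimal ticks header; the real header name,
 * location, and contents in the patch may differ.
 */
#ifndef _SYS_TICKS_H_
#define	_SYS_TICKS_H_

extern int ticks;	/* legacy 32-bit tick counter */
extern long ticksl;	/* wider counter; low bits match "ticks" */

#endif /* !_SYS_TICKS_H_ */
```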
Reorder asm directives a bit.
Update tc_ticktock() as well. It's not necessary, but this way
we avoid some silent truncation that might confuse readers.
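The silent truncation mentioned above can be illustrated with a small, self-contained example; the function name here is hypothetical and not part of the patch:

```c
#include <stdint.h>

/*
 * Hypothetical illustration of silent narrowing: assigning a 64-bit
 * tick count to a 32-bit variable keeps only the low 32 bits. An
 * explicit cast makes the truncation visible to readers.
 */
static int32_t
ticks_low32(int64_t ticksl_value)
{
	return ((int32_t)ticksl_value);
}
```

With any upper bits set (e.g. an input of 0x100000001), only the low word survives; making the cast explicit documents that this is intentional rather than an oversight.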
Tue, Jan 7
Permit the inactive weight to have a value of 0, which effectively
restores the old behaviour.
In D48361#1102810, @kib wrote: The real fix is to remove i386 kernel, of course.
Doug, how would you like to proceed with the patch? Since quite a few pieces of it are independent, I imagine they can be reviewed and peeled off one by one, especially in areas that aren't performance-critical.
Mon, Jan 6
This scan might process many gigabytes worth of pages in one go,
triggering VM object lock contention (on the DB cache file's VM object)
and consuming CPU, which can cause application latency spikes.
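One common way to bound that kind of latency is to process the pages in fixed-size batches, dropping the lock between batches so waiters can interleave. A generic sketch follows; all names are hypothetical and do not reflect the FreeBSD VM API:

```c
#include <stddef.h>

/*
 * Hypothetical sketch: instead of scanning a multi-gigabyte object's
 * page list in one go (holding its lock throughout), process at most
 * "batch" entries per lock acquisition. Returns the number of passes.
 */
static size_t
scan_in_batches(size_t npages, size_t batch,
    void (*process)(size_t start, size_t n))
{
	size_t done = 0, passes = 0;

	while (done < npages) {
		size_t n = npages - done < batch ? npages - done : batch;
		/* The object lock would be acquired here. */
		process(done, n);
		/* The lock is dropped here, giving waiters a chance to run. */
		done += n;
		passes++;
	}
	return (passes);
}

/* No-op callback, useful for exercising the batching logic. */
static void
noop_process(size_t start, size_t n)
{
	(void)start;
	(void)n;
}
```

The trade-off is extra lock acquire/release overhead per batch in exchange for bounded hold times, which is usually the right choice when the scan competes with latency-sensitive application threads.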
@jamie, I wonder if you had any further comments on the patch?
This broadly looks good to me. I'd suggest adding a comment somewhere that explains at a higher level what the tests are testing.
In D48331#1102183, @imp wrote: Why 5? But regardless of why, this is correct.
At least tests/sys/file/path_test.c:path_io will need to be updated.
Remove the todo comment from riscv as well.
Sat, Jan 4
Fri, Jan 3
After staring at this for a while, I think it's right.
In D48222#1101530, @jfree wrote: I do not like how complicated this is getting, but the code all looks good.
Thu, Jan 2
In D48241#1101029, @aokblast wrote: I discovered that the original wait operation for gdb is not working now. I configured one VM with rfb and gdb. When gdb is started with the wait operation, the rfb has no output; but if gdb is not waiting, the rfb has output and gdb can attach to the VM after boot. I checked out the main branch and it behaves the same way.