
Increase the pageout cluster size to 32 pages
ClosedPublic

Authored by alc on Jun 22 2017, 5:09 PM.

Details

Summary

Increase the pageout cluster size to 32 pages.

Decouple the pageout cluster size from the size of the hash table entry used by the swap pager to map (object, pindex) pairs to blocks on the swap device(s), and keep the hash table entry at its current size.

Eliminate a pointless macro.
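
To make the decoupling concrete, here is a sketch of the constants involved and of a swblock hash entry. The identifier names follow vm/swap_pager.c (the types vm_object_t, vm_pindex_t, and daddr_t come from the sys/vm and sys/types headers), but the values and field comments below are illustrative, from memory, rather than quoted from the diff:

/*
 * Before this change the hash entry span was derived from the pageout
 * cluster size, so growing the cluster would also grow every entry:
 *
 *	#define MAX_PAGEOUT_CLUSTER	16
 *	#define SWB_NPAGES		MAX_PAGEOUT_CLUSTER
 *
 * After, the two are independent: the cluster becomes 32 pages while
 * the hash entry keeps its current size.
 */
#define	MAX_PAGEOUT_CLUSTER	32	/* pages per pageout I/O */
#define	SWB_NPAGES		16	/* hash entry span, unchanged */
#define	SWAP_META_PAGES		(SWB_NPAGES * 2)

/*
 * One hash table entry, mapping an (object, pindex) range to blocks
 * on the swap device(s).
 */
struct swblock {
	struct swblock	*swb_hnext;	/* hash collision chain */
	vm_object_t	 swb_object;	/* key: the VM object... */
	vm_pindex_t	 swb_index;	/* ...and base page index */
	int		 swb_count;	/* valid slots in swb_pages[] */
	daddr_t		 swb_pages[SWAP_META_PAGES]; /* swap blocks */
};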

Test Plan

I've tested this change on three different devices with a test program that cycles through an mmap(MAP_ANON) region of size hw.physmem + 4 GB.
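
The test program itself is not attached to the review. A minimal reconstruction, consistent with the "len:" and "run N, Xs.Yns" output quoted later in this thread, might look like the following; the exact region size, per-page stride, and run count are assumptions:

#include <sys/types.h>
#include <sys/mman.h>
#include <sys/sysctl.h>
#include <err.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

int
main(void)
{
    struct timespec t0, t1;
    unsigned long physmem;
    size_t len, off, sz;
    long pagesz, nsec;
    char *p;
    int run;

    /* Size the region as hw.physmem + 4 GB, per the test plan. */
    sz = sizeof(physmem);
    if (sysctlbyname("hw.physmem", &physmem, &sz, NULL, 0) != 0)
        err(1, "sysctlbyname");
    pagesz = sysconf(_SC_PAGESIZE);
    len = (size_t)physmem + 4UL * 1024 * 1024 * 1024;
    printf("len: %zu\n", len);

    p = mmap(NULL, len, PROT_READ | PROT_WRITE,
        MAP_ANON | MAP_PRIVATE, -1, 0);
    if (p == MAP_FAILED)
        err(1, "mmap");

    for (run = 0; run < 10; run++) {
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (off = 0; off < len; off += pagesz)
            p[off]++;       /* dirty every page */
        clock_gettime(CLOCK_MONOTONIC, &t1);
        nsec = t1.tv_nsec - t0.tv_nsec;
        printf("run %d, %lds.%ldns\n", run,
            (long)(t1.tv_sec - t0.tv_sec - (nsec < 0)),
            nsec < 0 ? nsec + 1000000000L : nsec);
    }
    return (0);
}

Because the region exceeds physical memory by 4 GB, each pass forces the laundry thread to page out and the fault path to page back in, which is what exercises the larger pageout cluster.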

The devices are:

ada0 at ahcich0 bus 0 scbus1 target 0 lun 0
ada0: <WDC WD3000HLFS-01G6U1 04.04V02> ATA8-ACS SATA 2.x device
ada0: Serial Number WD-WXD0CB981984
ada0: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)
ada0: Command Queueing enabled
ada0: 286168MB (586072368 512 byte sectors)

ada2 at ahcich2 bus 0 scbus3 target 0 lun 0
ada2: <WDC WD5000HHTZ-04N21V0 04.06A00> ATA8-ACS SATA 3.x device
ada2: Serial Number WD-WXB1E62JMM50
ada2: 600.000MB/s transfers (SATA 3.x, UDMA6, PIO 8192bytes)
ada2: Command Queueing enabled
ada2: 476940MB (976773168 512 byte sectors)

ada3 at ahcich4 bus 0 scbus5 target 0 lun 0
ada3: <Samsung SSD 850 PRO 512GB EXM02B6Q> ACS-2 ATA SATA 3.x device
ada3: Serial Number S250NX0H626395F
ada3: 600.000MB/s transfers (SATA 3.x, UDMA6, PIO 512bytes)
ada3: Command Queueing enabled
ada3: 488386MB (1000215216 512 byte sectors)
ada3: quirks=0x3<4K,NCQ_TRIM_BROKEN>

On the 300 MB/s device, there is no clear difference, but on the two 600 MB/s devices, the test program runs a bit faster.

Diff Detail

Repository
rS FreeBSD src repository - subversion

Event Timeline

Seems reasonable to me. I noticed a while ago that we allocate 2*vm_pageout_page_count vm_page pointers on the stack in vm_pageout_cluster(), which seems like a lot, but I guess it's not going to be a problem for now since the laundry thread is the only caller.
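
For scale, that allocation works out to about half a kilobyte of kernel stack on LP64. A userland back-of-the-envelope check (in the kernel, vm_pageout_page_count is a global tunable; the #define and the array declaration below are paraphrased from the comment above, not quoted from vm_pageout.c):

#include <stdio.h>

typedef struct vm_page *vm_page_t;      /* stand-in for the kernel type */

#define VM_PAGEOUT_PAGE_COUNT   32      /* the new cluster size */

int
main(void)
{
    /* The array vm_pageout_cluster() keeps on its stack. */
    vm_page_t mc[2 * VM_PAGEOUT_PAGE_COUNT];

    /* 2 * 32 pointers * 8 bytes = 512 bytes on LP64. */
    printf("%zu bytes\n", sizeof(mc));
    return (0);
}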

This revision is now accepted and ready to land. (Jun 22 2017, 5:50 PM)

The rewrite of the swap pager to use a radix trie for swblock tracking is still somewhere in my repo.

In D11305#234137, @kib wrote:

The rewrite of the swap pager to use a radix trie for swblock tracking is still somewhere in my repo.

I'd like to find a way to reduce the amount of physical memory that will be consumed by the leaves. Otherwise, I like it.

In D11305#234511, @alc wrote:
In D11305#234137, @kib wrote:

The rewrite of the swap pager to use a radix trie for swblock tracking is still somewhere in my repo.

I'd like to find a way to reduce the amount of physical memory that will be consumed by the leaves. Otherwise, I like it.

The most obvious thing, which I initially marked as a TODO, is to change a single leaf to track SWAP_META_PAGES pages, the same as the current hashed swblock. Or do you mean something more involved?
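
A sketch of the leaf layout described here, with one trie leaf covering SWAP_META_PAGES consecutive page indices rather than a single page; this is an illustration, not code from kib's repository:

struct swblk {
	vm_pindex_t	p;			/* base pindex of the run */
	daddr_t		d[SWAP_META_PAGES];	/* SWAPBLK_NONE if unused */
};

Packing SWAP_META_PAGES swap block addresses per leaf amortizes the trie's per-node overhead the same way the hash currently amortizes it per swblock, which addresses the physical-memory concern raised above.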

Here are representative results from the faster of my two mechanical drives. Setting aside the initial warm-up pass (run 0), the per-run time drops from roughly 540 s before to roughly 515 s after, an improvement of about 5%:

Before:

len: 38619095040
run 0, 49s.131725210ns
run 1, 499s.841835227ns
run 2, 544s.875683258ns
run 3, 539s.369814074ns
run 4, 545s.721396685ns
run 5, 540s.798867815ns
run 6, 543s.24431160ns
run 7, 543s.677969637ns
run 8, 544s.857138181ns
run 9, 547s.576903085ns

After:

len: 38619095040
run 0, 49s.874228145ns
run 1, 502s.973700124ns
run 2, 519s.524882015ns
run 3, 508s.519812284ns
run 4, 518s.892564204ns
run 5, 513s.63885846ns
run 6, 519s.226032372ns
run 7, 514s.322495736ns
run 8, 514s.729125927ns
run 9, 519s.840386992ns

In D11305#234526, @kib wrote:
In D11305#234511, @alc wrote:
In D11305#234137, @kib wrote:

The rewrite of the swap pager to use a radix trie for swblock tracking is still somewhere in my repo.

I'd like to find a way to reduce the amount of physical memory that will be consumed by the leaves. Otherwise, I like it.

The most obvious thing, which I initially marked as a TODO, is to change a single leaf to track SWAP_META_PAGES pages, the same as the current hashed swblock. Or do you mean something more involved?

No, that would suffice.

vm/swap_pager.c
Line 137 (On Diff #29952)

If I'm actually "decoupling" the cluster size and hash table entry size, then this should be 32 and not SWB_NPAGES.
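
That is, presumably something like the following at that line (the surrounding context of the diff is not shown in the review, so the constant name is an assumption):

/*
 * With the cluster size and the hash entry size decoupled, the
 * pageout cluster constant is a literal, not an alias for the
 * hash entry span:
 */
#define	MAX_PAGEOUT_CLUSTER	32	/* not SWB_NPAGES */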

This revision was automatically updated to reflect the committed changes.