
arm: Bump KSTACK_PAGES default to match i386/amd64
ClosedPublic

Authored by kbowling on Jul 20 2021, 6:46 PM.
Tags
None

Diff Detail

Repository
rG FreeBSD src repository

Event Timeline

ZFS is useful on these small memory beasts?

In D31244#703721, @imp wrote:

ZFS is useful on these small memory beasts?

Unsure, @cem laid out some other real problems with the small stack size in the referenced commit https://reviews.freebsd.org/R10:3f6867ef6386435a52ec564780b91a47dd948b0c. Is there a significant drawback here versus the safety of a larger stack?

In D31244#703721, @imp wrote:

ZFS is useful on these small memory beasts?

The change is likely fine now that we've put the armv5 gear to bed.

ian added a subscriber: ian.

This has been on my to-do list for ages. Several people who use zfs on 32-bit arm have requested it over the years.
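For context, this is a sketch of the two usual ways the setting is overridden; paths and the tunable name are the standard FreeBSD ones, but check the branch you are on before relying on them:

```
# In a custom kernel configuration file (e.g. under sys/arm/conf/):
options KSTACK_PAGES=4

# Or, on branches where the loader tunable is honored, in /boot/loader.conf:
kern.kstack_pages=4
```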

This revision is now accepted and ready to land. Jul 21 2021, 1:01 AM
In D31244#703721, @imp wrote:

ZFS is useful on these small memory beasts?

Unsure, @cem laid out some other real problems with the small stack size in the referenced commit https://reviews.freebsd.org/R10:3f6867ef6386435a52ec564780b91a47dd948b0c. Is there a significant drawback here versus the safety of a larger stack?

I don't believe there is a huge drawback; I just hit a big wall trying to set ZFS up years ago on older v7 gear and was wondering if things had changed with OpenZFS. This change would be a problem on the older armv5 gear due to its limitations on RAM, but the v6/v7 machines start at 512MB and go up from there: they aren't memory constrained. The only detail I hesitate on is whether this increases overhead on context switches or something like that. I don't believe it will, so the change is fine, but I'd love to see what others say before it lands.

With Ian on board, any lingering doubts I have are gone.

Are there any ABI or KBI concerns for stable/13?

In D31244#703740, @imp wrote:
In D31244#703721, @imp wrote:

ZFS is useful on these small memory beasts?

Unsure, @cem laid out some other real problems with the small stack size in the referenced commit https://reviews.freebsd.org/R10:3f6867ef6386435a52ec564780b91a47dd948b0c. Is there a significant drawback here versus the safety of a larger stack?

I don't believe there is a huge drawback; I just hit a big wall trying to set ZFS up years ago on older v7 gear and was wondering if things had changed with OpenZFS. This change would be a problem on the older armv5 gear due to its limitations on RAM, but the v6/v7 machines start at 512MB and go up from there: they aren't memory constrained. The only detail I hesitate on is whether this increases overhead on context switches or something like that. I don't believe it will, so the change is fine, but I'd love to see what others say before it lands.

I should have mentioned that I tested this with a couple of armv7 products at $work over a year ago; our products typically run 40-80 threads (most of them sleeping most of the time). I didn't encounter any problems; while it surely used a bit more memory, it wasn't enough to cause any kind of problem (our products all have 2 GB of RAM) or require any other tweaks to kernel config or tunables.
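A back-of-the-envelope estimate of the memory cost for a workload like the one above. This assumes the old arm default was 2 pages and a 4 KiB page size; neither number is stated in this review, so treat both as illustrative assumptions:

```python
# Illustrative arithmetic only (assumed values, not from the review):
# cost of going from an assumed old default of 2 KSTACK_PAGES to 4,
# for the ~80-thread workload described above.
PAGE_SIZE = 4096          # bytes; typical page size on 32-bit arm
OLD_KSTACK_PAGES = 2      # assumption: previous arm default
NEW_KSTACK_PAGES = 4      # the i386/amd64 value this change adopts
threads = 80              # upper end of the workload described above

extra_bytes = threads * (NEW_KSTACK_PAGES - OLD_KSTACK_PAGES) * PAGE_SIZE
print(f"{extra_bytes // 1024} KiB of extra kernel stack memory")  # 640 KiB
```

On a 2 GB machine that is well under 0.1% of RAM, which matches the report that no tunable adjustments were needed.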

We have been using KSTACK_PAGES=4 in our Cortex-A9-based switches (with 2GiB RAM) for a number of years now.
During the platform bringup days, before bumping up KSTACK_PAGES, we were hitting cases where we ran out of stack space and got all sorts of wonderfully odd crashes.

@stevek @ian -- @mjg is reporting deadlocks with a heavy userspace load with this change on stable/12 on a 2GB arm platform.

I am going to revert it on that branch at the conclusion of his test and am looking for advice on what to do with stable/13 and main since I do not have hardware to perform my own tests.

We are wondering if you use the default KVA split, and I am wondering if there are MM differences in later branches that are material to stability with this setting.

It's not deadlocks; it is the kernel getting into a state where stack allocation always fails. The workload pushes a lot of traffic through pf and some through ipsec, then forks and execs tons of processes (I don't know the exact count) -- rinse and repeat. The issue has shown up 3 times so far in less than 2 hours of doing this.

The key point is that 4 pages for use + 1 guard page bumps the requirement for contiguous space from 3 to 5 pages, all while the kernel has no means of defragmenting memory.
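A toy model of that point, not FreeBSD code: each kernel stack needs a run of physically contiguous free pages, and in a fragmented arena runs of 5 are far rarer than runs of 3. The arena size and occupancy rate below are invented purely for illustration:

```python
# Toy fragmentation model (illustrative only, not FreeBSD code):
# compare how often a run of 3 vs. 5 contiguous free pages exists
# in a randomly fragmented arena that cannot be defragmented.
import random

def longest_free_run(page_map):
    """Longest run of free (False) entries in a page-occupancy map."""
    best = run = 0
    for pinned in page_map:
        run = 0 if pinned else run + 1
        best = max(best, run)
    return best

random.seed(1)
trials = 1000
ok3 = ok5 = 0
for _ in range(trials):
    # 1024-page arena with ~90% of pages pinned at random positions.
    arena = [random.random() < 0.9 for _ in range(1024)]
    ok3 += longest_free_run(arena) >= 3   # 2 pages + 1 guard
    ok5 += longest_free_run(arena) >= 5   # 4 pages + 1 guard

print(f"3 contiguous free pages found: {ok3}/{trials} arenas")
print(f"5 contiguous free pages found: {ok5}/{trials} arenas")
```

Under heavy fragmentation the 5-page requirement fails in far more arenas than the 3-page one, which is consistent with the allocation failures reported under the fork/exec-heavy workload.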