In D17059#368781, @crest_bultmann.eu wrote: I removed enough DIMMs to balance all four NUMA domains on my 32-core EPYC system. Now each of the four domains contains a single 32GB DIMM for a total of 128GB. Under load (again multiple dd processes writing to ZFS) the system still swaps out complete processes (e.g. login shells running zpool iostat or top). If those processes exit and their parent shell was swapped out, it can take over a minute until the shell is swapped back in, although there are at least 3GB of free memory spread over all domains according to top.
While systems with unbalanced NUMA domains behave far worse, the same problem exists on systems with balanced NUMA domains, even while top reported 3.8GB of free memory.
Update: In one case it took over 7 minutes for zsh to get paged back in and execute date.
Sep 24 2018
I removed enough DIMMs to balance all four NUMA domains on my 32-core EPYC system. Now each of the four domains contains a single 32GB DIMM for a total of 128GB. Under load (again multiple dd processes writing to ZFS) the system still swaps out complete processes (e.g. login shells running zpool iostat or top). If those processes exit and their parent shell was swapped out, it can take over a minute until the shell is swapped back in, although there are at least 3GB of free memory spread over all domains according to top.
Sep 20 2018
In D17059#367587, @kib wrote: In D17059#367418, @markj wrote: Yes, and that assumption is not very ZFS-friendly, especially if the domain sizes are not roughly equal: the round-robin allocations performed in keg_fetch_slab() can cause the smaller domain(s) to become depleted, and we end up in a situation where one domain is permanently below the min_free_count threshold. Aside from causing hangs, this will also result in an overactive page daemon.
I think all of the vm_page_count_min() calls are problematic. For the swapper, at least, I think we need to follow r338507 and consult the kstack obj's domain allocation policy (as well as curthread's) before deciding whether to proceed. In other cases, such as uma_reclaim_locked(), the solution is not so clear to me. If we permit situations where one or more domains is permanently depleted, then uma_reclaim_locked() should only drain per-CPU caches when all domains are below the free_min threshold. However, this can probably lead to easy foot-shooting since it is possible to create domain allocation policies which only attempt allocations from depleted domains.
Swapper then should collect all policies for kstack objects for all threads of the process which is swapped in. This is somewhat insane.
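To make the cost kib is describing a bit more concrete, here is a tiny userland C sketch (not FreeBSD kernel code; struct thread_model, domains_have_room(), process_swapin_ok() and all of the numbers are invented for illustration) of a swap-in check that walks every thread of a process and verifies that each thread's kstack domain policy can still be satisfied:

```c
/*
 * Toy model: a process is considered safe to swap back in only if, for
 * every thread, at least one domain permitted by that thread's
 * (hypothetical) kstack domain policy still has pages above the minimum.
 */
#include <stdbool.h>
#include <stdio.h>

#define NDOMAINS 4

/* Free and minimum page counts per NUMA domain (made-up numbers). */
static const unsigned free_pages[NDOMAINS] = { 120, 5000, 4800, 5100 };
static const unsigned min_pages[NDOMAINS]  = { 256, 256, 256, 256 };

struct thread_model {
	unsigned kstack_domains;	/* bitmask of domains its kstack may use */
};

/* True if at least one domain allowed by 'mask' is above its minimum. */
static bool
domains_have_room(unsigned mask)
{
	for (int d = 0; d < NDOMAINS; d++)
		if ((mask & (1u << d)) != 0 && free_pages[d] > min_pages[d])
			return true;
	return false;
}

/*
 * The check kib describes: walk every thread of the process and make
 * sure each one's kstack policy can still be satisfied.
 */
static bool
process_swapin_ok(const struct thread_model *threads, int nthreads)
{
	for (int i = 0; i < nthreads; i++)
		if (!domains_have_room(threads[i].kstack_domains))
			return false;
	return true;
}

int
main(void)
{
	/* One thread pinned to the depleted domain 0, two unrestricted. */
	struct thread_model threads[] = {
		{ .kstack_domains = 1u << 0 },
		{ .kstack_domains = 0xf },
		{ .kstack_domains = 0xf },
	};

	printf("swap-in ok: %s\n",
	    process_swapin_ok(threads, 3) ? "yes" : "no");
	return 0;
}
```

With one thread restricted to the depleted domain 0, the whole process stays swapped out even though the other domains have plenty of free pages, which is roughly the trade-off (and the bookkeeping burden) being discussed.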
Sep 19 2018
In D17059#367418, @markj wrote: Yes, and that assumption is not very ZFS-friendly, especially if the domain sizes are not roughly equal: the round-robin allocations performed in keg_fetch_slab() can cause the smaller domain(s) to become depleted, and we end up in a situation where one domain is permanently below the min_free_count threshold. Aside from causing hangs, this will also result in an overactive page daemon.
I think all of the vm_page_count_min() calls are problematic. For the swapper, at least, I think we need to follow r338507 and consult the kstack obj's domain allocation policy (as well as curthread's) before deciding whether to proceed. In other cases, such as uma_reclaim_locked(), the solution is not so clear to me. If we permit situations where one or more domains is permanently depleted, then uma_reclaim_locked() should only drain per-CPU caches when all domains are below the free_min threshold. However, this can probably lead to easy foot-shooting since it is possible to create domain allocation policies which only attempt allocations from depleted domains.
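As a rough illustration of the distinction markj is drawing, the following standalone C model (again not kernel code; the per-domain counters and the global_min_ok() / policy_min_ok() helpers are made up) contrasts a gate that refuses whenever any single domain is below its minimum with one that refuses only when every domain the allocation is actually allowed to use is below its minimum:

```c
/*
 * Toy model contrasting two swap-in gates over per-domain free pages:
 *  - "global": refuse whenever any domain is below its minimum; and
 *  - "policy-aware": refuse only when every domain the allocation is
 *    permitted to use is below its minimum.
 */
#include <stdbool.h>
#include <stdio.h>

#define NDOMAINS 4

/* Domain 0 has been driven below its minimum; the rest have plenty. */
static const unsigned free_pages[NDOMAINS] = { 100, 6000, 5500, 5800 };
static const unsigned min_pages[NDOMAINS]  = { 256, 256, 256, 256 };

/* Global gate: refuse as soon as any single domain is below minimum. */
static bool
global_min_ok(void)
{
	for (int d = 0; d < NDOMAINS; d++)
		if (free_pages[d] < min_pages[d])
			return false;
	return true;
}

/*
 * Policy-aware gate: proceed as long as at least one domain the
 * allocation may use (bit set in 'mask') is still above its minimum.
 */
static bool
policy_min_ok(unsigned mask)
{
	for (int d = 0; d < NDOMAINS; d++)
		if ((mask & (1u << d)) != 0 && free_pages[d] >= min_pages[d])
			return true;
	return false;
}

int
main(void)
{
	unsigned policy = 0x0e;	/* this allocation may use domains 1-3 */

	printf("global check:       %s\n",
	    global_min_ok() ? "proceed" : "blocked");
	printf("policy-aware check: %s\n",
	    policy_min_ok(policy) ? "proceed" : "blocked");
	return 0;
}
```

With domain 0 depleted but excluded from the allocation's policy, the global gate stays blocked indefinitely while the policy-aware gate can proceed; that is the behavioural difference at issue for the swapper and, less clearly, for uma_reclaim_locked().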
In D17059#367399, @kib wrote: In D17059#367103, @markj wrote: Peter reproduced this issue as well. I think the problem is with the vm_page_count_min() predicate in swapper_wkilled_only(). If one domain is depleted, we won't swap processes back in.
I am not quite sure what could be an alternative there. If the policy for the kstack object is strict and the corresponding domain is in a severe low-memory condition, then we must not start the swapin.
I think that the current design does assume that all domains must return from the low conditions.
In D17059#367103, @markj wrote: In D17059#367094, @crest_bultmann.eu wrote: I did see zsh processes wrapped with "<>" in top's output, so they were getting swapped out. I need a few minutes to reproduce the problem and run "ps auxwwwH".
Peter reproduced this issue as well. I think the problem is with the vm_page_count_min() predicate in swapper_wkilled_only(). If one domain is depleted, we won't swap processes back in.
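How a single small domain ends up permanently depleted can be sketched with a short userland C simulation (purely illustrative; the page counts and the round-robin loop are invented, not the actual keg_fetch_slab() logic): allocations are handed out round-robin across unequal domains, the smallest domain empties first, and an "any domain below its minimum" style check then blocks swap-in for good:

```c
/*
 * Toy simulation of round-robin allocation over NUMA domains of unequal
 * size. The smallest domain ends up pinned at/below its minimum, so a
 * check of the form "is any domain below its minimum?" never clears.
 */
#include <stdbool.h>
#include <stdio.h>

#define NDOMAINS 4

int
main(void)
{
	/* Domain 0 holds far fewer pages than the others (unbalanced DIMMs). */
	unsigned free_pages[NDOMAINS] = { 2000, 16000, 16000, 16000 };
	const unsigned min_pages = 256;

	unsigned next = 0;		/* round-robin cursor */
	unsigned long allocated = 0;

	/* Allocate one page at a time, round-robin, skipping empty domains. */
	while (allocated < 40000) {
		unsigned d = next++ % NDOMAINS;
		if (free_pages[d] == 0)
			continue;
		free_pages[d]--;
		allocated++;
	}

	bool any_below = false;
	for (int d = 0; d < NDOMAINS; d++) {
		printf("domain %d: %u pages free\n", d, free_pages[d]);
		if (free_pages[d] < min_pages)
			any_below = true;
	}
	printf("\"any domain below min\" check blocks swap-in: %s\n",
	    any_below ? "yes" : "no");
	return 0;
}
```

The simulation leaves domain 0 empty while the other domains still hold thousands of free pages, so the global check stays stuck and nothing is swapped back in, matching the behaviour reported above.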
Sep 18 2018
In D17059#367094, @crest_bultmann.eu wrote: I did see zsh processes wrapped with "<>" in top's output, so they were getting swapped out. I need a few minutes to reproduce the problem and run "ps auxwwwH".
In D17059#367066, @markj wrote: In D17059#367064, @crest_bultmann.eu wrote: In D17059#367059, @markj wrote: In D17059#366975, @crest_bultmann.eu wrote: I have to revise my statement. I tried another torture test (six dd if=/dev/zero bs=1m of=/kkdata/benchmark/$RANDOM writing to an uncompressed dataset). The system is still writing at about 1GB/s with the patch, but trying to exit some tools (e.g. zpool, top) hangs. Here is the procstat -kka output:
procstat.hang2.txt (451 KB)
I don't see any such processes in the procstat output. Did you try this test without "options NUMA"?
Did you look at the zsh processes as well? I observed no hangs without "options NUMA".
Yes, seems they're just waiting for children to report an exit status. I am wondering if the processes got swapped out. Could you provide "ps auxwwwH" output?
In D17059#367064, @crest_bultmann.eu wrote: In D17059#367059, @markj wrote: In D17059#366975, @crest_bultmann.eu wrote: I have to revise my statement. I tried another torture test (six dd if=/dev/zero bs=1m of=/kkdata/benchmark/$RANDOM writing to an uncompressed dataset). The system is still writing at about 1GB/s with the patch, but trying to exit some tools (e.g. zpool, top) hangs. Here is the procstat -kka output:
procstat.hang2.txt (451 KB)
I don't see any such processes in the procstat output. Did you try this test without "options NUMA"?
Did you look at the zsh processes as well? I observed no hangs without "options NUMA".
In D17059#367059, @markj wrote: In D17059#366975, @crest_bultmann.eu wrote: I have to revise my statement. I tried another torture test (six dd if=/dev/zero bs=1m of=/kkdata/benchmark/$RANDOM writing to an uncompressed dataset). The system is still writing at about 1GB/s with the patch, but trying to exit some tools (e.g. zpool, top) hangs. Here is the procstat -kka output:
procstat.hang2.txt (451 KB)
I don't see any such processes in the procstat output. Did you try this test without "options NUMA"?
In D17059#366975, @crest_bultmann.eu wrote: I have to revise my statement. I tried another torture test (six dd if=/dev/zero bs=1m of=/kkdata/benchmark/$RANDOM writing to an uncompressed dataset). The system is still writing at about 1GB/s with the patch, but trying to exit some tools (e.g. zpool, top) hangs. Here is the procstat -kka output:
procstat.hang2.txt (451 KB)
I have to revise my statement. I tried another torture test (six dd if=/dev/zero bs=1m of=/kkdata/benchmark/$RANDOM writing to an uncompressed dataset). The system is still writing at about 1GB/s with the patch, but trying to exit some tools (e.g. zpool, top) hangs. Here is the procstat -kka output:
In D17059#366834, @markj wrote: I think I see the problem. Could you test with the diff at D17209 applied?
Sep 17 2018
I think I see the problem. Could you test with the diff at D17209 applied?
In D17059#366776, @crest_bultmann.eu wrote: If you prefer, we can switch to IRC.
In D17059#366775, @markj wrote: In D17059#366774, @crest_bultmann.eu wrote: In D17059#366732, @markj wrote: In D17059#366566, @crest_bultmann.eu wrote: This is the output from top -HSazo res when writes to ZFS stopped being processed on the system with a NUMA-enabled kernel:
Thanks. Could you also grab "procstat -kka" output from the system in this state?
Here is the requested output from procstat -kka of a hanging system.
procstat.hang.txt (454 KB)
Great, this helps. Finally, could I ask for output from "sysctl vm", again from the system in this state?
In D17059#366774, @crest_bultmann.eu wrote: In D17059#366732, @markj wrote: In D17059#366566, @crest_bultmann.eu wrote: This is the output from top -HSazo res when writes to ZFS stopped being processed on the system with a NUMA-enabled kernel:
Thanks. Could you also grab "procstat -kka" output from the system in this state?
Here is the requested output from procstat -kka of a hanging system.
procstat.hang.txt (454 KB)
In D17059#366732, @markj wrote: In D17059#366566, @crest_bultmann.eu wrote: This is the output from top -HSazo res when writes to ZFS stopped being processed on the system with a NUMA-enabled kernel:
Thanks. Could you also grab "procstat -kka" output from the system in this state?
In D17059#366566, @crest_bultmann.eu wrote: This is the output from top -HSazo res when writes to ZFS stopped being processed on the system with a NUMA-enabled kernel:
This is the output from top -HSazo res when writes to ZFS stopped being processed on the system with a NUMA-enabled kernel:
This time I triggered a panic via sysctl a few minutes after ZFS writes hung but before the kernel panic()ed on its own.
I gave up after >500 screenshots of the IPMI KVM output. I haven't yet found a working configuration for the Serial over LAN. I'm trying again with a dump device large enough to hold >200GB RAM.
Never mind. The ghosts in the machine read my post. The kernel just panic()ed again. I'm at the kernel debugger prompt in the IPMI KVM web interface.
After copying 110TB between two pools with zfs send | mbuffer -m1g -s128k | zfs recv on a kernel without "options NUMA", I booted a kernel with "options NUMA" built from revision 338698. ZFS writes still hang, but the system doesn't panic. The mbuffer output shows that the buffer remains 100% full when writes hang.
Sep 14 2018
In D17059#365996, @crest_bultmann.eu wrote:
I attached a screenshot of the system console taken via IPMI. Ignore the NVMe-related lines. I reproduced the same panic with them unplugged. I used the ALPHA5 memstick (r338518) to install and encountered the panic with the GENERIC kernel from that installation. I checked out r338638, which includes NUMA in GENERIC, compiled a GENERIC-NODEBUG kernel, and disabled malloc debugging to get a realistic impression of the hardware's potential. The EPYC system compiled the kernel and world just fine, so I attached and imported the old ZFS pool from its predecessor (a FreeBSD 11.2 system) and tried to send | recv the relevant datasets from the old pool to a new pool. This repeatedly hung after about 70-80GB. Until writes stopped, the system transferred 1.0 to 1.1GB/s. I remembered reading about starvation in the NUMA code and disabled it on a hunch. With NUMA disabled the system is stable (so far) and currently halfway through copying 107TB from the old pool to the new pool.
In D17059#365698, @crest_bultmann.eu wrote: With the NUMA option enabled, ZFS hangs after a few minutes of heavy write load, causing the deadman switch to panic the kernel on a 32-core AMD EPYC 7551P. I can still write to the swap partitions on the same disks while writes to ZFS on another partition hang.
In D17059#365698, @crest_bultmann.eu wrote: With the NUMA option enabled, ZFS hangs after a few minutes of heavy write load, causing the deadman switch to panic the kernel on a 32-core AMD EPYC 7551P. I can still write to the swap partitions on the same disks while writes to ZFS on another partition hang.
Sep 13 2018
With the NUMA option enabled, ZFS hangs after a few minutes of heavy write load, causing the deadman switch to panic the kernel on a 32-core AMD EPYC 7551P. I can still write to the swap partitions on the same disks while writes to ZFS on another partition hang.
Sep 6 2018
We've had it in our config at Netflix for so long that I forgot that it was not in GENERIC.