I get a panic with D37478.id113832.diff: https://people.freebsd.org/~pho/stress/log/log0394.txt
Nov 23 2022
D37458.113418.patch LGTM
Nov 21 2022
D37452.id113378.diff fixed the issue for me.
Oct 21 2022
D35054.112068.patch LGTM
Oct 13 2022
In D36491#839750, @pho wrote: I have not observed any new problems while testing D36491.id111589.diff
Aug 6 2022
D35054.108900.patch looks good to me.
Jul 25 2022
I have not observed any problems with D35054.108486.patch
Jun 10 2022
No problems seen with a short test on amd64.
May 25 2022
The disk image fuzzer test no longer triggers a panic with this patch.
I have not detected any side effects with D35219.id106252.diff.
LGTM.
May 18 2022
Kernel page fault with the following non-sleepable locks held:
exclusive rw vm object (vm object) r = 0 (0xfffff80004837b58) locked @ x86/iommu/intel_idpgtbl.c:550
exclusive sleep mutex AHCI channel lock (AHCI channel lock) r = 0 (0xfffffe003ce28400) locked @ kern/kern_mutex.c:211
stack backtrace:
#0 0xffffffff80c85445 at witness_debugger+0x65
#1 0xffffffff80c8659a at witness_warn+0x3ea
#2 0xffffffff810fcce6 at trap_pfault+0x86
#3 0xffffffff810cdc18 at calltrap+0x8
#4 0xffffffff81078e7e at iommu_gas_map+0x15e
#5 0xffffffff81077339 at iommu_bus_dmamap_load_something+0x119
#6 0xffffffff81076995 at iommu_bus_dmamap_load_buffer+0x1c5
#7 0xffffffff80c55a3e at _bus_dmamap_load_ccb+0x20e
#8 0xffffffff80c557cc at bus_dmamap_load_ccb+0x8c
#9 0xffffffff803929d9 at xpt_run_devq+0x2f9
#10 0xffffffff80395de7 at xpt_release_simq+0x67
#11 0xffffffff80c3027a at softclock_call_cc+0x15a
#12 0xffffffff80c31b96 at softclock_thread+0xc6
#13 0xffffffff80bc9850 at fork_exit+0x80
#14 0xffffffff810cec8e at fork_trampoline+0xe
panic: segment too large: ctx 0xfffff800041ca180 start 0xfef01000 end 0xfef24000 buflen1 0x23000 maxsegsz 0x22400
May 16 2022
I have uploaded a minimal test scenario, where the second mount fails with your patch:
Apr 15 2022
D34815.104986.patch also looks good to me.
OTOH it would be useful if you provided Peter with your test program and instructions on how to reproduce the problem, so that this case does not regress again.
Apr 11 2022
D34815.104865.patch LGTM
Apr 10 2022
../../../vm/vm_phys.c:1360:6: error: unused variable 'order' [-Werror,-Wunused-variable]
        int order;
            ^
1 error generated.
I got a panic with this: https://people.freebsd.org/~pho/stress/log/log0278.txt
Apr 9 2022
I ran tests for 16 hours without seeing any problems with D34729.104761.patch
Apr 1 2022
I ran a four-hour test with the contigmalloc() tests in a loop, followed by random stress2 tests for another four hours. No problems seen.
Mar 31 2022
I'm not sure how much testing is required for this patch.
I ran the three contigmalloc(9) tests I have in a loop for four hours. I followed up with a few hours of random stress2 tests.
No problems seen.
The stress2 test suite completed without any issues.
Mar 30 2022
I'm almost done with testing the D33947.104303.patch. No problems seen so far. A full stress2 test takes two days to complete.
Mar 28 2022
In D33947#786068, @alc wrote: Peter, can you please test this?
Mar 24 2022
I got this panic from a contigmalloc() test:
Mar 16 2022
I have been running tests with this patch for 18 hours, without seeing any problems.
Feb 19 2022
I ran tests with D34282.102922.patch for 5 hours. No problems seen.
Feb 18 2022
The patch fixed the issue I had observed.