Yesterday
Tue, May 21
Mon, May 20
FYI, at $WORK the only packages we include in the DVD image are:
- ports-mgmt/pkg
- sysutils/py-salt, because Salt can install everything else
- net-mgmt/lldpd, which helps us debug network problems that would prevent Salt from working.
Sun, May 19
Sat, May 18
Thu, May 16
Wow. I'm surprised this bug went unnoticed for so long. Can you get the fix into 14.1?
Thu, May 9
Mon, May 6
Fri, Apr 26
In D44961#1025278, @mav wrote:
I wonder what your queue depth is, such that one message per request per 90 seconds would cause a noticeable storm. Also, per-system limiting makes the output not very useful, since it does not say much about LUNs, ports, commands, etc.; selecting the first message out of many only tells you that something is wrong. Thinking even wider, I find these messages, printed on actual completion, not very useful: if there are no delays but something is really wrong, the commands may never complete, and so the messages may never get printed. I wonder whether it would be more useful to remove all this and instead check the OOA queues once per second for stuck requests and print some digest.
Thu, Apr 25
- bsdinstall: all ESPs created by zfsboot should have a bootloader
Wed, Apr 24
Apr 23 2024
- Respond to markj's feedback.
Apr 22 2024
In D44904#1023723, @markj wrote:
In D44904#1023722, @asomers wrote:
In D44904#1023720, @markj wrote:
Would it be worthwhile to document some of your performance findings in geli.8 or so? So that the next user to hit this problem doesn't have to redo your analysis and discover the relationship with vfs.zfs.vdev.aggregation_limit.
I was planning to follow up with a new tip in fortune(6) and at https://wiki.freebsd.org/ZFSTuningGuide . Do you think that would be sufficient or would you like geli(8) too?
I'm skeptical that fortune(6) is a good place to document anything performance-related. The ZFSTuningGuide seems like a good place, but I think geli.8 is the right document for this kind of knowledge. There are other performance considerations relating to GELI that have nothing to do with ZFS (GELI's thread-per-CPU-per-volume behaviour, for instance) that would also belong in a "PERFORMANCE CONSIDERATIONS" section in geli.8.
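As a hedged illustration of the tuning knob discussed above (the specific value below is purely an example, not a recommendation from this review; the right value is workload-dependent):

```shell
# Inspect the current ZFS vdev I/O aggregation limit (bytes).
sysctl vfs.zfs.vdev.aggregation_limit

# Example only: lower the limit so ZFS does not aggregate I/Os into
# requests larger than GELI handles efficiently.  131072 is illustrative.
sysctl vfs.zfs.vdev.aggregation_limit=131072
```

On FreeBSD, a runtime sysctl like this can be made persistent via /etc/sysctl.conf.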
Apr 21 2024
Apr 13 2024
Apr 10 2024
Apr 4 2024
- Skip the fuse_vnop_do_lseek and fuse_filehandle_close if
- Two more fixes:
Apr 3 2024
Actually, another problem is that EACCES, EINTEGRITY, and EIO errors won't be correctly reported. That problem predates this review, but I'll fix it now.
Looks ok to me, but it's been a long time since I've been active in this code. And it's nice that the patch is so much simpler now!
Mar 26 2024
Mar 24 2024
In D44320#1014666, @imp wrote:So digging into this a bit more... I bumped the POSIX_VERSION in FreeBSD. This code seemed to compile great when I did that after I moved the BSDs to the #if I highlighted.
However, OpenSSL needs a small tweak. POSIX_VERSION 200809L (Issue 7, the 2008 revision) dropped makecontext() et al. from the standard, and OpenSSL has no fallback for that situation, except to make an exception for GLIBC... So I added an exception for FreeBSD as well and submitted that pull request. Now I have to fix at least two ports (openssl3 and openssl111) and fight that battle.... At least I have someone lined up to help me there.
So there's a complication to my suggestion; I'll let you know when vvd@ and I have looked at the fallout. I hate kludging things like this pull request, though, and I'm spending way more time on this than I thought I'd need to.
Mar 22 2024
Mar 12 2024
An exp-run is what you suggested when I first raised the issue on freebsd-hackers last September. However, I don't know how to do an exp-run. Are you volunteering?
Mar 7 2024
Mar 5 2024
Feb 23 2024
Feb 12 2024
Feb 9 2024
Feb 8 2024
This looks good. BTW, you don't have to wait until the cleanup phase to print the seed. It's OK to print it at the beginning of the test phase. In the event of a failure, Kyua will report everything that the test printed.
Feb 7 2024
In D43775#998578, @glebius wrote:
I'll make a resource-reclaim patch, thanks!
But I don't agree with going for deterministic randomness. That would reduce test coverage. For this kind of test, to get coverage tending to 100% you either need deterministic randomness with the volume of pumped data tending to infinity, or true randomness with the number of runs tending to infinity. CI itself gives us the latter for free.
Of course, failures reported by CI (if any) won't be immediately reproducible. But they will be a red flag. Once I get a failure, I will run the test in a loop until it reproduces.
This looks good. But I have a few thoughts:
Feb 6 2024
So in the future it will be possible to send a single record with multiple send syscalls? In that case we can certainly remove these tests. However, in the future it will be important to ensure that we can send messages larger than the socket buffer size, right? In that case, I think we should leave these tests here for now, and update them atomically when the new behavior is committed. That way we won't forget.
Feb 4 2024
@imp here's a test case that depends on Rust. Not exactly what you meant, perhaps, but I think it would be a valuable addition. If you like this, we can add versions for other builtin file systems, too.
Feb 3 2024
@emaste what about something like this?
#define VA_NOTIME(ts) {		\
	ts->tv_sec = -1;	\
	ts->tv_nsec = 0;	\
}
Feb 2 2024
Jan 25 2024
In D43590#994149, @emaste wrote:I think this is good but I wonder if we should have a trivial macro/inline for recording an unset/invalid va_birthtime?
Jan 20 2024
Jan 19 2024
Jan 17 2024
- Replace memsets with default initializers. Delete allocations using