Event Timeline
This does not seem wrong, but it is a bit strange to handle this undefined-behaviour problem while leaving the integer overflow when parsing the number in __find_arguments() and __find_warguments(), and the integer overflows in __grow_type_table(), unaddressed.
One way to do that would be to start enforcing NL_ARGMAX (possibly increasing it if necessary to make applications work). This is not as bad as it seems because any n$ in printf(3) implies at least n + 1 arguments in some C function call that cannot be dynamically constructed. (printf(1) is different since one can dynamically construct very long argument lists to a builtin printf.)
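For illustration, here is a minimal sketch of what enforcing NL_ARGMAX at parse time could look like. parse_argnum() is a hypothetical helper for this discussion, not the code in printf-pos.c; it rejects overflow by bounding the accumulator on every digit:

```c
#include <limits.h>	/* NL_ARGMAX */

/*
 * Hypothetical helper, for discussion only: parse the decimal argument
 * number of an "n$" conversion, rejecting anything above NL_ARGMAX
 * instead of letting the accumulator overflow.  Returns the parsed
 * number, or -1 if the value would exceed NL_ARGMAX.
 */
static int
parse_argnum(const char **fmt)
{
	const char *cp = *fmt;
	int n = 0;

	while (*cp >= '0' && *cp <= '9') {
		int d = *cp - '0';

		/* Bound n before multiplying so it can never wrap. */
		if (n > (NL_ARGMAX - d) / 10)
			return (-1);
		n = n * 10 + d;
		cp++;
	}
	*fmt = cp;
	return (n);
}
```

Because n is bounded by NL_ARGMAX between iterations, the multiplication can never overflow, which also covers the inline comment below about checking inside the loop.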
You mean something like this?
We have NL_ARGMAX defined in limits.h; it is set to 99, which should be enough for applications.
Otherwise, many thanks!
I wonder whether we should increase NL_ARGMAX if we're actually enforcing it, including for strings that do not have n$.
| lib/libc/stdio/printf-pos.c | |
|---|---|
| 314–317 | This check should be moved inside the loop, since you may have an overflowing calculation that overflows back into the allowed range. |
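To illustrate why (a sketch of the broken pattern, not the code under review; is_digit()/to_digit() stand in for the usual digit helpers): with the range check placed after the loop and the usual two's-complement wraparound, a long digit string can overflow the accumulator and land back inside the allowed range.

```c
/*
 * Broken pattern: the range check runs only once, after the loop.
 * With a 32-bit int and two's-complement wrapping, a format such as
 * "%4294967297$d" makes n wrap around to 1 (4294967297 mod 2^32 == 1),
 * so the late check sees an apparently valid argument number.
 */
int n = 0;
while (is_digit(*cp))
	n = n * 10 + to_digit(*cp++);	/* may overflow and wrap */
if (n > NL_ARGMAX)			/* too late: the wrapped value passes */
	goto error;
```

Performing the bound check on every iteration, as in the parse_argnum() sketch above, avoids this.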
This commit bumped NL_ARGMAX from 99 to 65536.
Turns out there is real-world software out there which uses this to size data structures. In particular, postgres has a table of NL_ARGMAX * 4 bytes which it then memsets, i.e. it is 256 KB in size. This significantly slows down single-threaded performance for this case. Chances are solid there are other victims. That said, fixing postgres aside, I would like to hear the reasoning for bumping the limit so high.
My grepping around suggests Linux sticks to 4096, which still sounds kind of high. I would be leaning towards a mere 1024.
As a data point, reducing the macro from 65536 to 4096 sped up single-threaded select queries from 34000 tps to 40000 tps on a Skylake box. It is a big deal. A naive memset loop zeroing such a big buffer (256 KB!) manages only about 70k iterations per second.
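For a rough feel for the numbers, here is an illustrative micro-benchmark (the names and structure are assumptions for this thread, not the PostgreSQL code): zeroing an NL_ARGMAX * 4-byte table on every call is pure memory-bandwidth cost.

```c
#include <stdio.h>
#include <string.h>
#include <time.h>

#define OLD_NL_ARGMAX	65536		/* the bumped value */
#define ITERATIONS	100000

int
main(void)
{
	static char table[OLD_NL_ARGMAX * 4];	/* 256 KB, as in the postgres case */
	struct timespec start, end;
	double secs;

	clock_gettime(CLOCK_MONOTONIC, &start);
	for (int i = 0; i < ITERATIONS; i++) {
		memset(table, 0, sizeof(table));
		/* Keep the compiler from optimizing the memset away. */
		__asm__ volatile("" : : "r"(table) : "memory");
	}
	clock_gettime(CLOCK_MONOTONIC, &end);

	secs = (end.tv_sec - start.tv_sec) +
	    (end.tv_nsec - start.tv_nsec) / 1e9;
	printf("%.0f memsets of %zu bytes per second\n",
	    ITERATIONS / secs, sizeof(table));
	return (0);
}
```

At the 70k memsets per second quoted above, that is roughly 18 GB/s of writes just to clear the table, so it is easy to see how it dominates a fast query path.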
It seems like cutting NL_ARGMAX back to 1024 or even reverting to 99 makes sense. You're doing some pretty weird macro meta-programming to get to a 100-argument printf.
That being said, if I'm reading the grok results (https://grok.dragonflybsd.org/search?project=dports&q=NL_ARGMAX&defs=&refs=&path=&hist=&type=&si=q&n=25) correctly, this is kind of an own-goal on PostgreSQL's part, due to their insistence on rolling their own versions of all sorts of things. We need to work with them, but part of that should probably be trying to convince them to pick up modern versions of things like snprintf.
They already took care of the problem on their side by setting their own macro to 31.
However, older postgres ports are still affected.
I suggested 4096 because that's what Linux sets it to; it is probably quite arbitrary anyway. For all I care it can go back to 9. :->