Turn the limit into a sysctl (kern.dtrace.buffer_maxsize)
I like that this is tunable, but isn't this allocated per-CPU? As I understand it, the original heuristic was trying to keep us from allocating too much on machines with large CPU counts.
Maybe that's irrelevant, because for us "large CPU count" implies "huge amounts of memory".
I don't know of any systems with 40 cores that don't also have 128G or more of RAM. Also, note that the tunable is 16M, so even with 40 cores that's "only" 640M, and no one needs more than 640...M
I don't really understand what this comment means. We don't sleep here if there's a memory shortage, so an allocation failure should just cause us to release any already-allocated buffers and bail. Am I missing something?
SYSCTL_QUAD is wrong for this: dtrace_strsize_default is a size_t, which is 32 bits on ILP32. SYSCTL_ULONG would be a better fit.
Hmm. libdtrace also needs to allocate buffers of this size when it copies out the kernel buffers, so a large enough buffer size can, for instance, trigger the OOM killer. In that case, we should probably set buffer_maxsize based on physmem/ncpu. See r261122.
In r261122 all the limits were removed in libdtrace. Are you suggesting that, given that change, I should at least add a protective check like:
if (size > physmem * PAGE_SIZE)
at this point?