Index: head/contrib/jemalloc/ChangeLog =================================================================== --- head/contrib/jemalloc/ChangeLog (revision 320622) +++ head/contrib/jemalloc/ChangeLog (revision 320623) @@ -1,1230 +1,1265 @@ Following are change highlights associated with official releases. Important bug fixes are all mentioned, but some internal enhancements are omitted here for brevity. Much more detail can be found in the git revision history: https://github.com/jemalloc/jemalloc +* 5.0.1 (July 1, 2017) + + This bugfix release fixes several issues, most of which are obscure enough + that typical applications are not impacted. + + Bug fixes: + - Update decay->nunpurged before purging, in order to avoid potential update + races and subsequent incorrect purging volume. (@interwq) + - Only abort on dlsym(3) error if the failure impacts an enabled feature (lazy + locking and/or background threads). This mitigates an initialization + failure bug for which we still do not have a clear reproduction test case. + (@interwq) + - Modify tsd management so that it neither crashes nor leaks if a thread's + only allocation activity is to call free() after TLS destructors have been + executed. This behavior was observed when operating with GNU libc, and is + unlikely to be an issue with other libc implementations. (@interwq) + - Mask signals during background thread creation. This prevents signals from + being inadvertently delivered to background threads. (@jasone, + @davidtgoldblatt, @interwq) + - Avoid inactivity checks within background threads, in order to prevent + recursive mutex acquisition. (@interwq) + - Fix extent_grow_retained() to use the specified hooks when the + arena..extent_hooks mallctl is used to override the default hooks. + (@interwq) + - Add missing reentrancy support for custom extent hooks which allocate. + (@interwq) + - Post-fork(2), re-initialize the list of tcaches associated with each arena + to contain no tcaches except the forking thread's. (@interwq) + - Add missing post-fork(2) mutex reinitialization for extent_grow_mtx. This + fixes potential deadlocks after fork(2). (@interwq) + - Enforce minimum autoconf version (currently 2.68), since 2.63 is known to + generate corrupt configure scripts. (@jasone) + - Ensure that the configured page size (--with-lg-page) is no larger than the + configured huge page size (--with-lg-hugepage). (@jasone) + * 5.0.0 (June 13, 2017) Unlike all previous jemalloc releases, this release does not use naturally aligned "chunks" for virtual memory management, and instead uses page-aligned "extents". This change has few externally visible effects, but the internal impacts are... extensive. Many other internal changes combine to make this the most cohesively designed version of jemalloc so far, with ample opportunity for further enhancements. Continuous integration is now an integral aspect of development thanks to the efforts of @davidtgoldblatt, and the dev branch tends to remain reasonably stable on the tested platforms (Linux, FreeBSD, macOS, and Windows). As a side effect the official release frequency may decrease over time. New features: - Implement optional per-CPU arena support; threads choose which arena to use based on current CPU rather than on fixed thread-->arena associations. (@interwq) - Implement two-phase decay of unused dirty pages. Pages transition from dirty-->muzzy-->clean, where the first phase transition relies on madvise(... 
MADV_FREE) semantics, and the second phase transition discards pages such that they are replaced with demand-zeroed pages on next access. (@jasone) - Increase decay time resolution from seconds to milliseconds. (@jasone) - Implement opt-in per CPU background threads, and use them for asynchronous decay-driven unused dirty page purging. (@interwq) - Add mutex profiling, which collects a variety of statistics useful for diagnosing overhead/contention issues. (@interwq) - Add C++ new/delete operator bindings. (@djwatson) - Support manually created arena destruction, such that all data and metadata are discarded. Add MALLCTL_ARENAS_DESTROYED for accessing merged stats associated with destroyed arenas. (@jasone) - Add MALLCTL_ARENAS_ALL as a fixed index for use in accessing merged/destroyed arena statistics via mallctl. (@jasone) - Add opt.abort_conf to optionally abort if invalid configuration options are detected during initialization. (@interwq) - Add opt.stats_print_opts, so that e.g. JSON output can be selected for the stats dumped during exit if opt.stats_print is true. (@jasone) - Add --with-version=VERSION for use when embedding jemalloc into another project's git repository. (@jasone) - Add --disable-thp to support cross compiling. (@jasone) - Add --with-lg-hugepage to support cross compiling. (@jasone) - Add mallctl interfaces (various authors): + background_thread + opt.abort_conf + opt.retain + opt.percpu_arena + opt.background_thread + opt.{dirty,muzzy}_decay_ms + opt.stats_print_opts + arena..initialized + arena..destroy + arena..{dirty,muzzy}_decay_ms + arena..extent_hooks + arenas.{dirty,muzzy}_decay_ms + arenas.bin..slab_size + arenas.nlextents + arenas.lextent..size + arenas.create + stats.background_thread.{num_threads,num_runs,run_interval} + stats.mutexes.{ctl,background_thread,prof,reset}. {num_ops,num_spin_acq,num_wait,max_wait_time,total_wait_time,max_num_thds, num_owner_switch} + stats.arenas..{dirty,muzzy}_decay_ms + stats.arenas..uptime + stats.arenas..{pmuzzy,base,internal,resident} + stats.arenas..{dirty,muzzy}_{npurge,nmadvise,purged} + stats.arenas..bins..{nslabs,reslabs,curslabs} + stats.arenas..bins..mutex. {num_ops,num_spin_acq,num_wait,max_wait_time,total_wait_time,max_num_thds, num_owner_switch} + stats.arenas..lextents..{nmalloc,ndalloc,nrequests,curlextents} + stats.arenas.i.mutexes.{large,extent_avail,extents_dirty,extents_muzzy, extents_retained,decay_dirty,decay_muzzy,base,tcache_list}. {num_ops,num_spin_acq,num_wait,max_wait_time,total_wait_time,max_num_thds, num_owner_switch} Portability improvements: - Improve reentrant allocation support, such that deadlock is less likely if e.g. a system library call in turn allocates memory. (@davidtgoldblatt, @interwq) - Support static linking of jemalloc with glibc. (@djwatson) Optimizations and refactors: - Organize virtual memory as "extents" of virtual memory pages, rather than as naturally aligned "chunks", and store all metadata in arbitrarily distant locations. This reduces virtual memory external fragmentation, and will interact better with huge pages (not yet explicitly supported). (@jasone) - Fold large and huge size classes together; only small and large size classes remain. (@jasone) - Unify the allocation paths, and merge most fast-path branching decisions. (@davidtgoldblatt, @interwq) - Embed per thread automatic tcache into thread-specific data, which reduces conditional branches and dereferences. Also reorganize tcache to increase fast-path data locality. 
(@interwq) - Rewrite atomics to closely model the C11 API, convert various synchronization from mutex-based to atomic, and use the explicit memory ordering control to resolve various hypothetical races without increasing synchronization overhead. (@davidtgoldblatt) - Extensively optimize rtree via various methods: + Add multiple layers of rtree lookup caching, since rtree lookups are now part of fast-path deallocation. (@interwq) + Determine rtree layout at compile time. (@jasone) + Make the tree shallower for common configurations. (@jasone) + Embed the root node in the top-level rtree data structure, thus avoiding one level of indirection. (@jasone) + Further specialize leaf elements as compared to internal node elements, and directly embed extent metadata needed for fast-path deallocation. (@jasone) + Ignore leading always-zero address bits (architecture-specific). (@jasone) - Reorganize headers (ongoing work) to make them hermetic, and disentangle various module dependencies. (@davidtgoldblatt) - Convert various internal data structures such as size class metadata from boot-time-initialized to compile-time-initialized. Propagate resulting data structure simplifications, such as making arena metadata fixed-size. (@jasone) - Simplify size class lookups when constrained to size classes that are multiples of the page size. This speeds lookups, but the primary benefit is complexity reduction in code that was the source of numerous regressions. (@jasone) - Lock individual extents when possible for localized extent operations, rather than relying on a top-level arena lock. (@davidtgoldblatt, @jasone) - Use first fit layout policy instead of best fit, in order to improve packing. (@jasone) - If munmap(2) is not in use, use an exponential series to grow each arena's virtual memory, so that the number of disjoint virtual memory mappings remains low. (@jasone) - Implement per arena base allocators, so that arenas never share any virtual memory pages. (@jasone) - Automatically generate private symbol name mangling macros. (@jasone) Incompatible changes: - Replace chunk hooks with an expanded/normalized set of extent hooks. (@jasone) - Remove ratio-based purging. (@jasone) - Remove --disable-tcache. (@jasone) - Remove --disable-tls. (@jasone) - Remove --enable-ivsalloc. (@jasone) - Remove --with-lg-size-class-group. (@jasone) - Remove --with-lg-tiny-min. (@jasone) - Remove --disable-cc-silence. (@jasone) - Remove --enable-code-coverage. (@jasone) - Remove --disable-munmap (replaced by opt.retain). (@jasone) - Remove Valgrind support. (@jasone) - Remove quarantine support. (@jasone) - Remove redzone support. 
(@jasone) - Remove mallctl interfaces (various authors): + config.munmap + config.tcache + config.tls + config.valgrind + opt.lg_chunk + opt.purge + opt.lg_dirty_mult + opt.decay_time + opt.quarantine + opt.redzone + opt.thp + arena..lg_dirty_mult + arena..decay_time + arena..chunk_hooks + arenas.initialized + arenas.lg_dirty_mult + arenas.decay_time + arenas.bin..run_size + arenas.nlruns + arenas.lrun..size + arenas.nhchunks + arenas.hchunk..size + arenas.extend + stats.cactive + stats.arenas..lg_dirty_mult + stats.arenas..decay_time + stats.arenas..metadata.{mapped,allocated} + stats.arenas..{npurge,nmadvise,purged} + stats.arenas..huge.{allocated,nmalloc,ndalloc,nrequests} + stats.arenas..bins..{nruns,reruns,curruns} + stats.arenas..lruns..{nmalloc,ndalloc,nrequests,curruns} + stats.arenas..hchunks..{nmalloc,ndalloc,nrequests,curhchunks} Bug fixes: - Improve interval-based profile dump triggering to dump only one profile when a single allocation's size exceeds the interval. (@jasone) - Use prefixed function names (as controlled by --with-jemalloc-prefix) when pruning backtrace frames in jeprof. (@jasone) * 4.5.0 (February 28, 2017) This is the first release to benefit from much broader continuous integration testing, thanks to @davidtgoldblatt. Had we had this testing infrastructure in place for prior releases, it would have caught all of the most serious regressions fixed by this release. New features: - Add --disable-thp and the opt.thp mallctl to provide opt-out mechanisms for transparent huge page integration. (@jasone) - Update zone allocator integration to work with macOS 10.12. (@glandium) - Restructure *CFLAGS configuration, so that CFLAGS behaves typically, and EXTRA_CFLAGS provides a way to specify e.g. -Werror during building, but not during configuration. (@jasone, @ronawho) Bug fixes: - Fix DSS (sbrk(2)-based) allocation. This regression was first released in 4.3.0. (@jasone) - Handle race in per size class utilization computation. This functionality was first released in 4.0.0. (@interwq) - Fix lock order reversal during gdump. (@jasone) - Fix/refactor tcache synchronization. This regression was first released in 4.0.0. (@jasone) - Fix various JSON-formatted malloc_stats_print() bugs. This functionality was first released in 4.3.0. (@jasone) - Fix huge-aligned allocation. This regression was first released in 4.4.0. (@jasone) - When transparent huge page integration is enabled, detect what state pages start in according to the kernel's current operating mode, and only convert arena chunks to non-huge during purging if that is not their initial state. This functionality was first released in 4.4.0. (@jasone) - Fix lg_chunk clamping for the --enable-cache-oblivious --disable-fill case. This regression was first released in 4.0.0. (@jasone, @428desmo) - Properly detect sparc64 when building for Linux. (@glaubitz) * 4.4.0 (December 3, 2016) New features: - Add configure support for *-*-linux-android. (@cferris1000, @jasone) - Add the --disable-syscall configure option, for use on systems that place security-motivated limitations on syscall(2). (@jasone) - Add support for Debian GNU/kFreeBSD. (@thesam) Optimizations: - Add extent serial numbers and use them where appropriate as a sort key that is higher priority than address, so that the allocation policy prefers older extents. This tends to improve locality (decrease fragmentation) when memory grows downward. (@jasone) - Refactor madvise(2) configuration so that MADV_FREE is detected and utilized on Linux 4.5 and newer. 
(@jasone) - Mark partially purged arena chunks as non-huge-page. This improves interaction with Linux's transparent huge page functionality. (@jasone) Bug fixes: - Fix size class computations for edge conditions involving extremely large allocations. This regression was first released in 4.0.0. (@jasone, @ingvarha) - Remove overly restrictive assertions related to the cactive statistic. This regression was first released in 4.1.0. (@jasone) - Implement a more reliable detection scheme for os_unfair_lock on macOS. (@jszakmeister) * 4.3.1 (November 7, 2016) Bug fixes: - Fix a severe virtual memory leak. This regression was first released in 4.3.0. (@interwq, @jasone) - Refactor atomic and prng APIs to restore support for 32-bit platforms that use pre-C11 toolchains, e.g. FreeBSD's mips. (@jasone) * 4.3.0 (November 4, 2016) This is the first release that passes the test suite for multiple Windows configurations, thanks in large part to @glandium setting up continuous integration via AppVeyor (and Travis CI for Linux and OS X). New features: - Add "J" (JSON) support to malloc_stats_print(). (@jasone) - Add Cray compiler support. (@ronawho) Optimizations: - Add/use adaptive spinning for bootstrapping and radix tree node initialization. (@jasone) Bug fixes: - Fix large allocation to search starting in the optimal size class heap, which can substantially reduce virtual memory churn and fragmentation. This regression was first released in 4.0.0. (@mjp41, @jasone) - Fix stats.arenas..nthreads accounting. (@interwq) - Fix and simplify decay-based purging. (@jasone) - Make DSS (sbrk(2)-related) operations lockless, which resolves potential deadlocks during thread exit. (@jasone) - Fix over-sized allocation of radix tree leaf nodes. (@mjp41, @ogaun, @jasone) - Fix over-sized allocation of arena_t (plus associated stats) data structures. (@jasone, @interwq) - Fix EXTRA_CFLAGS to not affect configuration. (@jasone) - Fix a Valgrind integration bug. (@ronawho) - Disallow 0x5a junk filling when running in Valgrind. (@jasone) - Fix a file descriptor leak on Linux. This regression was first released in 4.2.0. (@vsarunas, @jasone) - Fix static linking of jemalloc with glibc. (@djwatson) - Use syscall(2) rather than {open,read,close}(2) during boot on Linux. This works around other libraries' system call wrappers performing reentrant allocation. (@kspinka, @Whissi, @jasone) - Fix OS X default zone replacement to work with OS X 10.12. (@glandium, @jasone) - Fix cached memory management to avoid needless commit/decommit operations during purging, which resolves permanent virtual memory map fragmentation issues on Windows. (@mjp41, @jasone) - Fix TSD fetches to avoid (recursive) allocation. This is relevant to non-TLS and Windows configurations. (@jasone) - Fix malloc_conf overriding to work on Windows. (@jasone) - Forcibly disable lazy-lock on Windows (was forcibly *enabled*). (@jasone) * 4.2.1 (June 8, 2016) Bug fixes: - Fix bootstrapping issues for configurations that require allocation during tsd initialization (e.g. --disable-tls). (@cferris1000, @jasone) - Fix gettimeofday() version of nstime_update(). (@ronawho) - Fix Valgrind regressions in calloc() and chunk_alloc_wrapper(). (@ronawho) - Fix potential VM map fragmentation regression. (@jasone) - Fix opt_zero-triggered in-place huge reallocation zeroing. (@jasone) - Fix heap profiling context leaks in reallocation edge cases. 
(@jasone) * 4.2.0 (May 12, 2016) New features: - Add the arena..reset mallctl, which makes it possible to discard all of an arena's allocations in a single operation. (@jasone) - Add the stats.retained and stats.arenas..retained statistics. (@jasone) - Add the --with-version configure option. (@jasone) - Support --with-lg-page values larger than actual page size. (@jasone) Optimizations: - Use pairing heaps rather than red-black trees for various hot data structures. (@djwatson, @jasone) - Streamline fast paths of rtree operations. (@jasone) - Optimize the fast paths of calloc() and [m,d,sd]allocx(). (@jasone) - Decommit unused virtual memory if the OS does not overcommit. (@jasone) - Specify MAP_NORESERVE on Linux if [heuristic] overcommit is active, in order to avoid unfortunate interactions during fork(2). (@jasone) Bug fixes: - Fix chunk accounting related to triggering gdump profiles. (@jasone) - Link against librt for clock_gettime(2) if glibc < 2.17. (@jasone) - Scale leak report summary according to sampling probability. (@jasone) * 4.1.1 (May 3, 2016) This bugfix release resolves a variety of mostly minor issues, though the bitmap fix is critical for 64-bit Windows. Bug fixes: - Fix the linear scan version of bitmap_sfu() to shift by the proper amount even when sizeof(long) is not the same as sizeof(void *), as on 64-bit Windows. (@jasone) - Fix hashing functions to avoid unaligned memory accesses (and resulting crashes). This is relevant at least to some ARM-based platforms. (@rkmisra) - Fix fork()-related lock rank ordering reversals. These reversals were unlikely to cause deadlocks in practice except when heap profiling was enabled and active. (@jasone) - Fix various chunk leaks in OOM code paths. (@jasone) - Fix malloc_stats_print() to print opt.narenas correctly. (@jasone) - Fix MSVC-specific build/test issues. (@rustyx, @yuslepukhin) - Fix a variety of test failures that were due to test fragility rather than core bugs. (@jasone) * 4.1.0 (February 28, 2016) This release is primarily about optimizations, but it also incorporates a lot of portability-motivated refactoring and enhancements. Many people worked on this release, to an extent that even with the omission here of minor changes (see git revision history), and of the people who reported and diagnosed issues, so much of the work was contributed that starting with this release, changes are annotated with author credits to help reflect the collaborative effort involved. New features: - Implement decay-based unused dirty page purging, a major optimization with mallctl API impact. This is an alternative to the existing ratio-based unused dirty page purging, and is intended to eventually become the sole purging mechanism. New mallctls: + opt.purge + opt.decay_time + arena..decay + arena..decay_time + arenas.decay_time + stats.arenas..decay_time (@jasone, @cevans87) - Add --with-malloc-conf, which makes it possible to embed a default options string during configuration. This was motivated by the desire to specify --with-malloc-conf=purge:decay , since the default must remain purge:ratio until the 5.0.0 release. (@jasone) - Add MS Visual Studio 2015 support. (@rustyx, @yuslepukhin) - Make *allocx() size class overflow behavior defined. The maximum size class is now less than PTRDIFF_MAX to protect applications against numerical overflow, and all allocation functions are guaranteed to indicate errors rather than potentially crashing if the request size exceeds the maximum size class. 
(@jasone) - jeprof: + Add raw heap profile support. (@jasone) + Add --retain and --exclude for backtrace symbol filtering. (@jasone) Optimizations: - Optimize the fast path to combine various bootstrapping and configuration checks and execute more streamlined code in the common case. (@interwq) - Use linear scan for small bitmaps (used for small object tracking). In addition to speeding up bitmap operations on 64-bit systems, this reduces allocator metadata overhead by approximately 0.2%. (@djwatson) - Separate arena_avail trees, which substantially speeds up run tree operations. (@djwatson) - Use memoization (boot-time-computed table) for run quantization. Separate arena_avail trees reduced the importance of this optimization. (@jasone) - Attempt mmap-based in-place huge reallocation. This can dramatically speed up incremental huge reallocation. (@jasone) Incompatible changes: - Make opt.narenas unsigned rather than size_t. (@jasone) Bug fixes: - Fix stats.cactive accounting regression. (@rustyx, @jasone) - Handle unaligned keys in hash(). This caused problems for some ARM systems. (@jasone, @cferris1000) - Refactor arenas array. In addition to fixing a fork-related deadlock, this makes arena lookups faster and simpler. (@jasone) - Move retained memory allocation out of the default chunk allocation function, to a location that gets executed even if the application installs a custom chunk allocation function. This resolves a virtual memory leak. (@buchgr) - Fix a potential tsd cleanup leak. (@cferris1000, @jasone) - Fix run quantization. In practice this bug had no impact unless applications requested memory with alignment exceeding one page. (@jasone, @djwatson) - Fix LinuxThreads-specific bootstrapping deadlock. (Cosmin Paraschiv) - jeprof: + Don't discard curl options if timeout is not defined. (@djwatson) + Detect failed profile fetches. (@djwatson) - Fix stats.arenas..{dss,lg_dirty_mult,decay_time,pactive,pdirty} for --disable-stats case. (@jasone) * 4.0.4 (October 24, 2015) This bugfix release fixes another xallocx() regression. No other regressions have come to light in over a month, so this is likely a good starting point for people who prefer to wait for "dot one" releases with all the major issues shaken out. Bug fixes: - Fix xallocx(..., MALLOCX_ZERO to zero the last full trailing page of large allocations that have been randomly assigned an offset of 0 when --enable-cache-oblivious configure option is enabled. * 4.0.3 (September 24, 2015) This bugfix release continues the trend of xallocx() and heap profiling fixes. Bug fixes: - Fix xallocx(..., MALLOCX_ZERO) to zero all trailing bytes of large allocations when --enable-cache-oblivious configure option is enabled. - Fix xallocx(..., MALLOCX_ZERO) to zero trailing bytes of huge allocations when resizing from/to a size class that is not a multiple of the chunk size. - Fix prof_tctx_dump_iter() to filter out nodes that were created after heap profile dumping started. - Work around a potentially bad thread-specific data initialization interaction with NPTL (glibc's pthreads implementation). * 4.0.2 (September 21, 2015) This bugfix release addresses a few bugs specific to heap profiling. Bug fixes: - Fix ixallocx_prof_sample() to never modify nor create sampled small allocations. xallocx() is in general incapable of moving small allocations, so this fix removes buggy code without loss of generality. - Fix irallocx_prof_sample() to always allocate large regions, even when alignment is non-zero. 
- Fix prof_alloc_rollback() to read tdata from thread-specific data rather than dereferencing a potentially invalid tctx. * 4.0.1 (September 15, 2015) This is a bugfix release that is somewhat high risk due to the amount of refactoring required to address deep xallocx() problems. As a side effect of these fixes, xallocx() now tries harder to partially fulfill requests for optional extra space. Note that a couple of minor heap profiling optimizations are included, but these are better thought of as performance fixes that were integral to disovering most of the other bugs. Optimizations: - Avoid a chunk metadata read in arena_prof_tctx_set(), since it is in the fast path when heap profiling is enabled. Additionally, split a special case out into arena_prof_tctx_reset(), which also avoids chunk metadata reads. - Optimize irallocx_prof() to optimistically update the sampler state. The prior implementation appears to have been a holdover from when rallocx()/xallocx() functionality was combined as rallocm(). Bug fixes: - Fix TLS configuration such that it is enabled by default for platforms on which it works correctly. - Fix arenas_cache_cleanup() and arena_get_hard() to handle allocation/deallocation within the application's thread-specific data cleanup functions even after arenas_cache is torn down. - Fix xallocx() bugs related to size+extra exceeding HUGE_MAXCLASS. - Fix chunk purge hook calls for in-place huge shrinking reallocation to specify the old chunk size rather than the new chunk size. This bug caused no correctness issues for the default chunk purge function, but was visible to custom functions set via the "arena..chunk_hooks" mallctl. - Fix heap profiling bugs: + Fix heap profiling to distinguish among otherwise identical sample sites with interposed resets (triggered via the "prof.reset" mallctl). This bug could cause data structure corruption that would most likely result in a segfault. + Fix irealloc_prof() to prof_alloc_rollback() on OOM. + Make one call to prof_active_get_unlocked() per allocation event, and use the result throughout the relevant functions that handle an allocation event. Also add a missing check in prof_realloc(). These fixes protect allocation events against concurrent prof_active changes. + Fix ixallocx_prof() to pass usize_max and zero to ixallocx_prof_sample() in the correct order. + Fix prof_realloc() to call prof_free_sampled_object() after calling prof_malloc_sample_object(). Prior to this fix, if tctx and old_tctx were the same, the tctx could have been prematurely destroyed. - Fix portability bugs: + Don't bitshift by negative amounts when encoding/decoding run sizes in chunk header maps. This affected systems with page sizes greater than 8 KiB. + Rename index_t to szind_t to avoid an existing type on Solaris. + Add JEMALLOC_CXX_THROW to the memalign() function prototype, in order to match glibc and avoid compilation errors when including both jemalloc/jemalloc.h and malloc.h in C++ code. + Don't assume that /bin/sh is appropriate when running size_classes.sh during configuration. + Consider __sparcv9 a synonym for __sparc64__ when defining LG_QUANTUM. + Link tests to librt if it contains clock_gettime(2). * 4.0.0 (August 17, 2015) This version contains many speed and space optimizations, both minor and major. The major themes are generalization, unification, and simplification. Although many of these optimizations cause no visible behavior change, their cumulative effect is substantial. 
New features: - Normalize size class spacing to be consistent across the complete size range. By default there are four size classes per size doubling, but this is now configurable via the --with-lg-size-class-group option. Also add the --with-lg-page, --with-lg-page-sizes, --with-lg-quantum, and --with-lg-tiny-min options, which can be used to tweak page and size class settings. Impacts: + Worst case performance for incrementally growing/shrinking reallocation is improved because there are far fewer size classes, and therefore copying happens less often. + Internal fragmentation is limited to 20% for all but the smallest size classes (those less than four times the quantum). (1B + 4 KiB) and (1B + 4 MiB) previously suffered nearly 50% internal fragmentation. + Chunk fragmentation tends to be lower because there are fewer distinct run sizes to pack. - Add support for explicit tcaches. The "tcache.create", "tcache.flush", and "tcache.destroy" mallctls control tcache lifetime and flushing, and the MALLOCX_TCACHE(tc) and MALLOCX_TCACHE_NONE flags to the *allocx() API control which tcache is used for each operation. - Implement per thread heap profiling, as well as the ability to enable/disable heap profiling on a per thread basis. Add the "prof.reset", "prof.lg_sample", "thread.prof.name", "thread.prof.active", "opt.prof_thread_active_init", "prof.thread_active_init", and "thread.prof.active" mallctls. - Add support for per arena application-specified chunk allocators, configured via the "arena..chunk_hooks" mallctl. - Refactor huge allocation to be managed by arenas, so that arenas now function as general purpose independent allocators. This is important in the context of user-specified chunk allocators, aside from the scalability benefits. Related new statistics: + The "stats.arenas..huge.allocated", "stats.arenas..huge.nmalloc", "stats.arenas..huge.ndalloc", and "stats.arenas..huge.nrequests" mallctls provide high level per arena huge allocation statistics. + The "arenas.nhchunks", "arenas.hchunk..size", "stats.arenas..hchunks..nmalloc", "stats.arenas..hchunks..ndalloc", "stats.arenas..hchunks..nrequests", and "stats.arenas..hchunks..curhchunks" mallctls provide per size class statistics. - Add the 'util' column to malloc_stats_print() output, which reports the proportion of available regions that are currently in use for each small size class. - Add "alloc" and "free" modes for for junk filling (see the "opt.junk" mallctl), so that it is possible to separately enable junk filling for allocation versus deallocation. - Add the jemalloc-config script, which provides information about how jemalloc was configured, and how to integrate it into application builds. - Add metadata statistics, which are accessible via the "stats.metadata", "stats.arenas..metadata.mapped", and "stats.arenas..metadata.allocated" mallctls. - Add the "stats.resident" mallctl, which reports the upper limit of physically resident memory mapped by the allocator. - Add per arena control over unused dirty page purging, via the "arenas.lg_dirty_mult", "arena..lg_dirty_mult", and "stats.arenas..lg_dirty_mult" mallctls. - Add the "prof.gdump" mallctl, which makes it possible to toggle the gdump feature on/off during program execution. - Add sdallocx(), which implements sized deallocation. The primary optimization over dallocx() is the removal of a metadata read, which often suffers an L1 cache miss. - Add missing header includes in jemalloc/jemalloc.h, so that applications only have to #include . 
- Add support for additional platforms: + Bitrig + Cygwin + DragonFlyBSD + iOS + OpenBSD + OpenRISC/or1k Optimizations: - Maintain dirty runs in per arena LRUs rather than in per arena trees of dirty-run-containing chunks. In practice this change significantly reduces dirty page purging volume. - Integrate whole chunks into the unused dirty page purging machinery. This reduces the cost of repeated huge allocation/deallocation, because it effectively introduces a cache of chunks. - Split the arena chunk map into two separate arrays, in order to increase cache locality for the frequently accessed bits. - Move small run metadata out of runs, into arena chunk headers. This reduces run fragmentation, smaller runs reduce external fragmentation for small size classes, and packed (less uniformly aligned) metadata layout improves CPU cache set distribution. - Randomly distribute large allocation base pointer alignment relative to page boundaries in order to more uniformly utilize CPU cache sets. This can be disabled via the --disable-cache-oblivious configure option, and queried via the "config.cache_oblivious" mallctl. - Micro-optimize the fast paths for the public API functions. - Refactor thread-specific data to reside in a single structure. This assures that only a single TLS read is necessary per call into the public API. - Implement in-place huge allocation growing and shrinking. - Refactor rtree (radix tree for chunk lookups) to be lock-free, and make additional optimizations that reduce maximum lookup depth to one or two levels. This resolves what was a concurrency bottleneck for per arena huge allocation, because a global data structure is critical for determining which arenas own which huge allocations. Incompatible changes: - Replace --enable-cc-silence with --disable-cc-silence to suppress spurious warnings by default. - Assure that the constness of malloc_usable_size()'s return type matches that of the system implementation. - Change the heap profile dump format to support per thread heap profiling, rename pprof to jeprof, and enhance it with the --thread= option. As a result, the bundled jeprof must now be used rather than the upstream (gperftools) pprof. - Disable "opt.prof_final" by default, in order to avoid atexit(3), which can internally deadlock on some platforms. - Change the "arenas.nlruns" mallctl type from size_t to unsigned. - Replace the "stats.arenas..bins..allocated" mallctl with "stats.arenas..bins..curregs". - Ignore MALLOC_CONF in set{uid,gid,cap} binaries. - Ignore MALLOCX_ARENA(a) in dallocx(), in favor of using the MALLOCX_TCACHE(tc) and MALLOCX_TCACHE_NONE flags to control tcache usage. Removed features: - Remove the *allocm() API, which is superseded by the *allocx() API. - Remove the --enable-dss options, and make dss non-optional on all platforms which support sbrk(2). - Remove the "arenas.purge" mallctl, which was obsoleted by the "arena..purge" mallctl in 3.1.0. - Remove the unnecessary "opt.valgrind" mallctl; jemalloc automatically detects whether it is running inside Valgrind. - Remove the "stats.huge.allocated", "stats.huge.nmalloc", and "stats.huge.ndalloc" mallctls. - Remove the --enable-mremap option. - Remove the "stats.chunks.current", "stats.chunks.total", and "stats.chunks.high" mallctls. Bug fixes: - Fix the cactive statistic to decrease (rather than increase) when active memory decreases. This regression was first released in 3.5.0. - Fix OOM handling in memalign() and valloc(). 
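[Editor's sketch, not part of the upstream notes: the explicit-tcache mallctls and sized deallocation listed under 4.0.0's new features combine roughly as follows. The <jemalloc/jemalloc.h> header is the upstream name; FreeBSD exposes the same non-standard API via <malloc_np.h>.]

    #include <stdio.h>
    #include <jemalloc/jemalloc.h>

    int main(void) {
        unsigned tc;
        size_t sz = sizeof(tc);

        /* Create an explicit tcache; its id is returned through oldp. */
        if (mallctl("tcache.create", &tc, &sz, NULL, 0) != 0) {
            fprintf(stderr, "tcache.create failed\n");
            return 1;
        }

        /* Route allocation and deallocation through that tcache. */
        void *p = mallocx(4096, MALLOCX_TCACHE(tc));
        if (p != NULL) {
            /* sdallocx() supplies the known size, avoiding a metadata read. */
            sdallocx(p, 4096, MALLOCX_TCACHE(tc));
        }

        /* Destroy the tcache once it is no longer needed. */
        mallctl("tcache.destroy", NULL, NULL, &tc, sizeof(tc));
        return 0;
    }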
A variant of this bug existed in all releases since 2.0.0, which introduced these functions. - Fix an OOM-related regression in arena_tcache_fill_small(), which could cause cache corruption on OOM. This regression was present in all releases from 2.2.0 through 3.6.0. - Fix size class overflow handling for malloc(), posix_memalign(), memalign(), calloc(), and realloc() when profiling is enabled. - Fix the "arena..dss" mallctl to return an error if "primary" or "secondary" precedence is specified, but sbrk(2) is not supported. - Fix fallback lg_floor() implementations to handle extremely large inputs. - Ensure the default purgeable zone is after the default zone on OS X. - Fix latent bugs in atomic_*(). - Fix the "arena..dss" mallctl to handle read-only calls. - Fix tls_model configuration to enable the initial-exec model when possible. - Mark malloc_conf as a weak symbol so that the application can override it. - Correctly detect glibc's adaptive pthread mutexes. - Fix the --without-export configure option. * 3.6.0 (March 31, 2014) This version contains a critical bug fix for a regression present in 3.5.0 and 3.5.1. Bug fixes: - Fix a regression in arena_chunk_alloc() that caused crashes during small/large allocation if chunk allocation failed. In the absence of this bug, chunk allocation failure would result in allocation failure, e.g. NULL return from malloc(). This regression was introduced in 3.5.0. - Fix backtracing for gcc intrinsics-based backtracing by specifying -fno-omit-frame-pointer to gcc. Note that the application (and all the libraries it links to) must also be compiled with this option for backtracing to be reliable. - Use dss allocation precedence for huge allocations as well as small/large allocations. - Fix test assertion failure message formatting. This bug did not manifest on x86_64 systems because of implementation subtleties in va_list. - Fix inconsequential test failures for hash and SFMT code. New features: - Support heap profiling on FreeBSD. This feature depends on the proc filesystem being mounted during heap profile dumping. * 3.5.1 (February 25, 2014) This version primarily addresses minor bugs in test code. Bug fixes: - Configure Solaris/Illumos to use MADV_FREE. - Fix junk filling for mremap(2)-based huge reallocation. This is only relevant if configuring with the --enable-mremap option specified. - Avoid compilation failure if 'restrict' C99 keyword is not supported by the compiler. - Add a configure test for SSE2 rather than assuming it is usable on i686 systems. This fixes test compilation errors, especially on 32-bit Linux systems. - Fix mallctl argument size mismatches (size_t vs. uint64_t) in the stats unit test. - Fix/remove flawed alignment-related overflow tests. - Prevent compiler optimizations that could change backtraces in the prof_accum unit test. * 3.5.0 (January 22, 2014) This version focuses on refactoring and automated testing, though it also includes some non-trivial heap profiling optimizations not mentioned below. New features: - Add the *allocx() API, which is a successor to the experimental *allocm() API. The *allocx() functions are slightly simpler to use because they have fewer parameters, they directly return the results of primary interest, and mallocx()/rallocx() avoid the strict aliasing pitfall that allocm()/rallocm() share with posix_memalign(). Note that *allocm() is slated for removal in the next non-bugfix release. - Add support for LinuxThreads. 
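[Editor's sketch of the *allocx() calls introduced above; illustrative only, assuming the upstream <jemalloc/jemalloc.h> header.]

    #include <jemalloc/jemalloc.h>

    int main(void) {
        /* 1 KiB, zeroed, 64-byte aligned. */
        void *p = mallocx(1024, MALLOCX_ALIGN(64) | MALLOCX_ZERO);
        if (p == NULL)
            return 1;

        /* rallocx() may move the object; xallocx() would instead resize in
         * place only and report the resulting usable size. */
        void *q = rallocx(p, 8192, MALLOCX_ALIGN(64));
        if (q == NULL) {
            dallocx(p, 0);
            return 1;
        }
        p = q;

        size_t usable = sallocx(p, 0);  /* usable size, >= 8192 */
        dallocx(p, 0);
        return usable >= 8192 ? 0 : 1;
    }

nallocx(size, flags) reports the size class a request would map to without allocating, which pairs naturally with the calls above when sizing buffers.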
Bug fixes: - Unless heap profiling is enabled, disable floating point code and don't link with libm. This, in combination with e.g. EXTRA_CFLAGS=-mno-sse on x64 systems, makes it possible to completely disable floating point register use. Some versions of glibc neglect to save/restore caller-saved floating point registers during dynamic lazy symbol loading, and the symbol loading code uses whatever malloc the application happens to have linked/loaded with, the result being potential floating point register corruption. - Report ENOMEM rather than EINVAL if an OOM occurs during heap profiling backtrace creation in imemalign(). This bug impacted posix_memalign() and aligned_alloc(). - Fix a file descriptor leak in a prof_dump_maps() error path. - Fix prof_dump() to close the dump file descriptor for all relevant error paths. - Fix rallocm() to use the arena specified by the ALLOCM_ARENA(s) flag for allocation, not just deallocation. - Fix a data race for large allocation stats counters. - Fix a potential infinite loop during thread exit. This bug occurred on Solaris, and could affect other platforms with similar pthreads TSD implementations. - Don't junk-fill reallocations unless usable size changes. This fixes a violation of the *allocx()/*allocm() semantics. - Fix growing large reallocation to junk fill new space. - Fix huge deallocation to junk fill when munmap is disabled. - Change the default private namespace prefix from empty to je_, and change --with-private-namespace-prefix so that it prepends an additional prefix rather than replacing je_. This reduces the likelihood of applications which statically link jemalloc experiencing symbol name collisions. - Add missing private namespace mangling (relevant when --with-private-namespace is specified). - Add and use JEMALLOC_INLINE_C so that static inline functions are marked as static even for debug builds. - Add a missing mutex unlock in a malloc_init_hard() error path. In practice this error path is never executed. - Fix numerous bugs in malloc_strotumax() error handling/reporting. These bugs had no impact except for malformed inputs. - Fix numerous bugs in malloc_snprintf(). These bugs were not exercised by existing calls, so they had no impact. * 3.4.1 (October 20, 2013) Bug fixes: - Fix a race in the "arenas.extend" mallctl that could cause memory corruption of internal data structures and subsequent crashes. - Fix Valgrind integration flaws that caused Valgrind warnings about reads of uninitialized memory in: + arena chunk headers + internal zero-initialized data structures (relevant to tcache and prof code) - Preserve errno during the first allocation. A readlink(2) call during initialization fails unless /etc/malloc.conf exists, so errno was typically set during the first allocation prior to this fix. - Fix compilation warnings reported by gcc 4.8.1. * 3.4.0 (June 2, 2013) This version is essentially a small bugfix release, but the addition of aarch64 support requires that the minor version be incremented. Bug fixes: - Fix race-triggered deadlocks in chunk_record(). These deadlocks were typically triggered by multiple threads concurrently deallocating huge objects. New features: - Add support for the aarch64 architecture. * 3.3.1 (March 6, 2013) This version fixes bugs that are typically encountered only when utilizing custom run-time options. Bug fixes: - Fix a locking order bug that could cause deadlock during fork if heap profiling were enabled. 
- Fix a chunk recycling bug that could cause the allocator to lose track of whether a chunk was zeroed. On FreeBSD, NetBSD, and OS X, it could cause corruption if allocating via sbrk(2) (unlikely unless running with the "dss:primary" option specified). This was completely harmless on Linux unless using mlockall(2) (and unlikely even then, unless the --disable-munmap configure option or the "dss:primary" option was specified). This regression was introduced in 3.1.0 by the mlockall(2)/madvise(2) interaction fix. - Fix TLS-related memory corruption that could occur during thread exit if the thread never allocated memory. Only the quarantine and prof facilities were susceptible. - Fix two quarantine bugs: + Internal reallocation of the quarantined object array leaked the old array. + Reallocation failure for internal reallocation of the quarantined object array (very unlikely) resulted in memory corruption. - Fix Valgrind integration to annotate all internally allocated memory in a way that keeps Valgrind happy about internal data structure access. - Fix building for s390 systems. * 3.3.0 (January 23, 2013) This version includes a few minor performance improvements in addition to the listed new features and bug fixes. New features: - Add clipping support to lg_chunk option processing. - Add the --enable-ivsalloc option. - Add the --without-export option. - Add the --disable-zone-allocator option. Bug fixes: - Fix "arenas.extend" mallctl to output the number of arenas. - Fix chunk_recycle() to unconditionally inform Valgrind that returned memory is undefined. - Fix build break on FreeBSD related to alloca.h. * 3.2.0 (November 9, 2012) In addition to a couple of bug fixes, this version modifies page run allocation and dirty page purging algorithms in order to better control page-level virtual memory fragmentation. Incompatible changes: - Change the "opt.lg_dirty_mult" default from 5 to 3 (32:1 to 8:1). Bug fixes: - Fix dss/mmap allocation precedence code to use recyclable mmap memory only after primary dss allocation fails. - Fix deadlock in the "arenas.purge" mallctl. This regression was introduced in 3.1.0 by the addition of the "arena..purge" mallctl. * 3.1.0 (October 16, 2012) New features: - Auto-detect whether running inside Valgrind, thus removing the need to manually specify MALLOC_CONF=valgrind:true. - Add the "arenas.extend" mallctl, which allows applications to create manually managed arenas. - Add the ALLOCM_ARENA() flag for {,r,d}allocm(). - Add the "opt.dss", "arena..dss", and "stats.arenas..dss" mallctls, which provide control over dss/mmap precedence. - Add the "arena..purge" mallctl, which obsoletes "arenas.purge". - Define LG_QUANTUM for hppa. Incompatible changes: - Disable tcache by default if running inside Valgrind, in order to avoid making unallocated objects appear reachable to Valgrind. - Drop const from malloc_usable_size() argument on Linux. Bug fixes: - Fix heap profiling crash if sampled object is freed via realloc(p, 0). - Remove const from __*_hook variable declarations, so that glibc can modify them during process forking. - Fix mlockall(2)/madvise(2) interaction. - Fix fork(2)-related deadlocks. - Fix error return value for "thread.tcache.enabled" mallctl. * 3.0.0 (May 11, 2012) Although this version adds some major new features, the primary focus is on internal code cleanup that facilitates maintainability and portability, most of which is not reflected in the ChangeLog. 
This is the first release to incorporate substantial contributions from numerous other developers, and the result is a more broadly useful allocator (see the git revision history for contribution details). Note that the license has been unified, thanks to Facebook granting a license under the same terms as the other copyright holders (see COPYING). New features: - Implement Valgrind support, redzones, and quarantine. - Add support for additional platforms: + FreeBSD + Mac OS X Lion + MinGW + Windows (no support yet for replacing the system malloc) - Add support for additional architectures: + MIPS + SH4 + Tilera - Add support for cross compiling. - Add nallocm(), which rounds a request size up to the nearest size class without actually allocating. - Implement aligned_alloc() (blame C11). - Add the "thread.tcache.enabled" mallctl. - Add the "opt.prof_final" mallctl. - Update pprof (from gperftools 2.0). - Add the --with-mangling option. - Add the --disable-experimental option. - Add the --disable-munmap option, and make it the default on Linux. - Add the --enable-mremap option, which disables use of mremap(2) by default. Incompatible changes: - Enable stats by default. - Enable fill by default. - Disable lazy locking by default. - Rename the "tcache.flush" mallctl to "thread.tcache.flush". - Rename the "arenas.pagesize" mallctl to "arenas.page". - Change the "opt.lg_prof_sample" default from 0 to 19 (1 B to 512 KiB). - Change the "opt.prof_accum" default from true to false. Removed features: - Remove the swap feature, including the "config.swap", "swap.avail", "swap.prezeroed", "swap.nfds", and "swap.fds" mallctls. - Remove highruns statistics, including the "stats.arenas..bins..highruns" and "stats.arenas..lruns..highruns" mallctls. - As part of small size class refactoring, remove the "opt.lg_[qc]space_max", "arenas.cacheline", "arenas.subpage", "arenas.[tqcs]space_{min,max}", and "arenas.[tqcs]bins" mallctls. - Remove the "arenas.chunksize" mallctl. - Remove the "opt.lg_prof_tcmax" option. - Remove the "opt.lg_prof_bt_max" option. - Remove the "opt.lg_tcache_gc_sweep" option. - Remove the --disable-tiny option, including the "config.tiny" mallctl. - Remove the --enable-dynamic-page-shift configure option. - Remove the --enable-sysv configure option. Bug fixes: - Fix a statistics-related bug in the "thread.arena" mallctl that could cause invalid statistics and crashes. - Work around TLS deallocation via free() on Linux. This bug could cause write-after-free memory corruption. - Fix a potential deadlock that could occur during interval- and growth-triggered heap profile dumps. - Fix large calloc() zeroing bugs due to dropping chunk map unzeroed flags. - Fix chunk_alloc_dss() to stop claiming memory is zeroed. This bug could cause memory corruption and crashes with --enable-dss specified. - Fix fork-related bugs that could cause deadlock in children between fork and exec. - Fix malloc_stats_print() to honor 'b' and 'l' in the opts parameter. - Fix realloc(p, 0) to act like free(p). - Do not enforce minimum alignment in memalign(). - Check for NULL pointer in malloc_usable_size(). - Fix an off-by-one heap profile statistics bug that could be observed in interval- and growth-triggered heap profiles. - Fix the "epoch" mallctl to update cached stats even if the passed in epoch is 0. - Fix bin->runcur management to fix a layout policy bug. This bug did not affect correctness. - Fix a bug in choose_arena_hard() that potentially caused more arenas to be initialized than necessary. 
- Add missing "opt.lg_tcache_max" mallctl implementation. - Use glibc allocator hooks to make mixed allocator usage less likely. - Fix build issues for --disable-tcache. - Don't mangle pthread_create() when --with-private-namespace is specified. * 2.2.5 (November 14, 2011) Bug fixes: - Fix huge_ralloc() race when using mremap(2). This is a serious bug that could cause memory corruption and/or crashes. - Fix huge_ralloc() to maintain chunk statistics. - Fix malloc_stats_print(..., "a") output. * 2.2.4 (November 5, 2011) Bug fixes: - Initialize arenas_tsd before using it. This bug existed for 2.2.[0-3], as well as for --disable-tls builds in earlier releases. - Do not assume a 4 KiB page size in test/rallocm.c. * 2.2.3 (August 31, 2011) This version fixes numerous bugs related to heap profiling. Bug fixes: - Fix a prof-related race condition. This bug could cause memory corruption, but only occurred in non-default configurations (prof_accum:false). - Fix off-by-one backtracing issues (make sure that prof_alloc_prep() is excluded from backtraces). - Fix a prof-related bug in realloc() (only triggered by OOM errors). - Fix prof-related bugs in allocm() and rallocm(). - Fix prof_tdata_cleanup() for --disable-tls builds. - Fix a relative include path, to fix objdir builds. * 2.2.2 (July 30, 2011) Bug fixes: - Fix a build error for --disable-tcache. - Fix assertions in arena_purge() (for real this time). - Add the --with-private-namespace option. This is a workaround for symbol conflicts that can inadvertently arise when using static libraries. * 2.2.1 (March 30, 2011) Bug fixes: - Implement atomic operations for x86/x64. This fixes compilation failures for versions of gcc that are still in wide use. - Fix an assertion in arena_purge(). * 2.2.0 (March 22, 2011) This version incorporates several improvements to algorithms and data structures that tend to reduce fragmentation and increase speed. New features: - Add the "stats.cactive" mallctl. - Update pprof (from google-perftools 1.7). - Improve backtracing-related configuration logic, and add the --disable-prof-libgcc option. Bug fixes: - Change default symbol visibility from "internal", to "hidden", which decreases the overhead of library-internal function calls. - Fix symbol visibility so that it is also set on OS X. - Fix a build dependency regression caused by the introduction of the .pic.o suffix for PIC object files. - Add missing checks for mutex initialization failures. - Don't use libgcc-based backtracing except on x64, where it is known to work. - Fix deadlocks on OS X that were due to memory allocation in pthread_mutex_lock(). - Heap profiling-specific fixes: + Fix memory corruption due to integer overflow in small region index computation, when using a small enough sample interval that profiling context pointers are stored in small run headers. + Fix a bootstrap ordering bug that only occurred with TLS disabled. + Fix a rallocm() rsize bug. + Fix error detection bugs for aligned memory allocation. * 2.1.3 (March 14, 2011) Bug fixes: - Fix a cpp logic regression (due to the "thread.{de,}allocatedp" mallctl fix for OS X in 2.1.2). - Fix a "thread.arena" mallctl bug. - Fix a thread cache stats merging bug. * 2.1.2 (March 2, 2011) Bug fixes: - Fix "thread.{de,}allocatedp" mallctl for OS X. - Add missing jemalloc.a to build system. * 2.1.1 (January 31, 2011) Bug fixes: - Fix aligned huge reallocation (affected allocm()). - Fix the ALLOCM_LG_ALIGN macro definition. - Fix a heap dumping deadlock. - Fix a "thread.arena" mallctl bug. 
* 2.1.0 (December 3, 2010) This version incorporates some optimizations that can't quite be considered bug fixes. New features: - Use Linux's mremap(2) for huge object reallocation when possible. - Avoid locking in mallctl*() when possible. - Add the "thread.[de]allocatedp" mallctl's. - Convert the manual page source from roff to DocBook, and generate both roff and HTML manuals. Bug fixes: - Fix a crash due to incorrect bootstrap ordering. This only impacted --enable-debug --enable-dss configurations. - Fix a minor statistics bug for mallctl("swap.avail", ...). * 2.0.1 (October 29, 2010) Bug fixes: - Fix a race condition in heap profiling that could cause undefined behavior if "opt.prof_accum" were disabled. - Add missing mutex unlocks for some OOM error paths in the heap profiling code. - Fix a compilation error for non-C99 builds. * 2.0.0 (October 24, 2010) This version focuses on the experimental *allocm() API, and on improved run-time configuration/introspection. Nonetheless, numerous performance improvements are also included. New features: - Implement the experimental {,r,s,d}allocm() API, which provides a superset of the functionality available via malloc(), calloc(), posix_memalign(), realloc(), malloc_usable_size(), and free(). These functions can be used to allocate/reallocate aligned zeroed memory, ask for optional extra memory during reallocation, prevent object movement during reallocation, etc. - Replace JEMALLOC_OPTIONS/JEMALLOC_PROF_PREFIX with MALLOC_CONF, which is more human-readable, and more flexible. For example: JEMALLOC_OPTIONS=AJP is now: MALLOC_CONF=abort:true,fill:true,stats_print:true - Port to Apple OS X. Sponsored by Mozilla. - Make it possible for the application to control thread-->arena mappings via the "thread.arena" mallctl. - Add compile-time support for all TLS-related functionality via pthreads TSD. This is mainly of interest for OS X, which does not support TLS, but has a TSD implementation with similar performance. - Override memalign() and valloc() if they are provided by the system. - Add the "arenas.purge" mallctl, which can be used to synchronously purge all dirty unused pages. - Make cumulative heap profiling data optional, so that it is possible to limit the amount of memory consumed by heap profiling data structures. - Add per thread allocation counters that can be accessed via the "thread.allocated" and "thread.deallocated" mallctls. Incompatible changes: - Remove JEMALLOC_OPTIONS and malloc_options (see MALLOC_CONF above). - Increase default backtrace depth from 4 to 128 for heap profiling. - Disable interval-based profile dumps by default. Bug fixes: - Remove bad assertions in fork handler functions. These assertions could cause aborts for some combinations of configure settings. - Fix strerror_r() usage to deal with non-standard semantics in GNU libc. - Fix leak context reporting. This bug tended to cause the number of contexts to be underreported (though the reported number of objects and bytes were correct). - Fix a realloc() bug for large in-place growing reallocation. This bug could cause memory corruption, but it was hard to trigger. - Fix an allocation bug for small allocations that could be triggered if multiple threads raced to create a new run of backing pages. - Enhance the heap profiler to trigger samples based on usable size, rather than request size. - Fix a heap profiling bug due to sometimes losing track of requested object size for sampled objects. 
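[Editor's sketch, not from the release notes: the per thread counters added in 2.0.0 can be read via mallctl() as below, and run-time options go through the environment in the MALLOC_CONF form shown above (e.g. MALLOC_CONF=abort:true,stats_print:true). Header name assumes an upstream build; FreeBSD uses <malloc_np.h>.]

    #include <inttypes.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <jemalloc/jemalloc.h>

    int main(void) {
        void *p = malloc(1000);

        /* Read this thread's cumulative allocation counter (bytes). */
        uint64_t allocated;
        size_t sz = sizeof(allocated);
        if (mallctl("thread.allocated", &allocated, &sz, NULL, 0) == 0)
            printf("thread.allocated = %" PRIu64 "\n", allocated);

        free(p);
        return 0;
    }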
* 1.0.3 (August 12, 2010) Bug fixes: - Fix the libunwind-based implementation of stack backtracing (used for heap profiling). This bug could cause zero-length backtraces to be reported. - Add a missing mutex unlock in library initialization code. If multiple threads raced to initialize malloc, some of them could end up permanently blocked. * 1.0.2 (May 11, 2010) Bug fixes: - Fix junk filling of large objects, which could cause memory corruption. - Add MAP_NORESERVE support for chunk mapping, because otherwise virtual memory limits could cause swap file configuration to fail. Contributed by Jordan DeLong. * 1.0.1 (April 14, 2010) Bug fixes: - Fix compilation when --enable-fill is specified. - Fix threads-related profiling bugs that affected accuracy and caused memory to be leaked during thread exit. - Fix dirty page purging race conditions that could cause crashes. - Fix crash in tcache flushing code during thread destruction. * 1.0.0 (April 11, 2010) This release focuses on speed and run-time introspection. Numerous algorithmic improvements make this release substantially faster than its predecessors. New features: - Implement autoconf-based configuration system. - Add mallctl*(), for the purposes of introspection and run-time configuration. - Make it possible for the application to manually flush a thread's cache, via the "tcache.flush" mallctl. - Base maximum dirty page count on proportion of active memory. - Compute various additional run-time statistics, including per size class statistics for large objects. - Expose malloc_stats_print(), which can be called repeatedly by the application. - Simplify the malloc_message() signature to only take one string argument, and incorporate an opaque data pointer argument for use by the application in combination with malloc_stats_print(). - Add support for allocation backed by one or more swap files, and allow the application to disable over-commit if swap files are in use. - Implement allocation profiling and leak checking. Removed features: - Remove the dynamic arena rebalancing code, since thread-specific caching reduces its utility. Bug fixes: - Modify chunk allocation to work when address space layout randomization (ASLR) is in use. - Fix thread cleanup bugs related to TLS destruction. - Handle 0-size allocation requests in posix_memalign(). - Fix a chunk leak. The leaked chunks were never touched, so this impacted virtual memory usage, but not physical memory usage. * linux_2008082[78]a (August 27/28, 2008) These snapshot releases are the simple result of incorporating Linux-specific support into the FreeBSD malloc sources. 
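[Editor's sketch of the malloc_stats_print()/write callback pairing introduced in 1.0.0; the callback name and sink are arbitrary.]

    #include <stdio.h>
    #include <jemalloc/jemalloc.h>

    /* Forward each chunk of statistics text to the FILE * passed as cbopaque. */
    static void
    stats_write_cb(void *cbopaque, const char *s) {
        fputs(s, (FILE *)cbopaque);
    }

    int main(void) {
        /* With a NULL callback the output goes through malloc_message()
         * (stderr by default); the opts string can be used to trim sections. */
        malloc_stats_print(stats_write_cb, stderr, NULL);
        return 0;
    }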
--------------------------------------------------------------------------------
vim:filetype=text:textwidth=80
Index: head/contrib/jemalloc/VERSION
===================================================================
--- head/contrib/jemalloc/VERSION (revision 320622)
+++ head/contrib/jemalloc/VERSION (revision 320623)
@@ -1 +1 @@
-5.0.0-4-g84f6c2cae0fb1399377ef6aea9368444c4987cc6
+5.0.1-0-g896ed3a8b3f41998d4fb4d625d30ac63ef2d51fb
Index: head/contrib/jemalloc/doc/jemalloc.3
===================================================================
--- head/contrib/jemalloc/doc/jemalloc.3 (revision 320622)
+++ head/contrib/jemalloc/doc/jemalloc.3 (revision 320623)
@@ -1,2435 +1,2435 @@
'\" t
.\" Title: JEMALLOC
.\" Author: Jason Evans
.\" Generator: DocBook XSL Stylesheets v1.76.1
-.\" Date: 06/29/2017
+.\" Date: 07/01/2017
.\" Manual: User Manual
-.\" Source: jemalloc 5.0.0-4-g84f6c2cae0fb1399377ef6aea9368444c4987cc6
+.\" Source: jemalloc 5.0.1-0-g896ed3a8b3f41998d4fb4d625d30ac63ef2d51fb
.\" Language: English
.\"
-.TH "JEMALLOC" "3" "06/29/2017" "jemalloc 5.0.0-4-g84f6c2cae0fb" "User Manual"
+.TH "JEMALLOC" "3" "07/01/2017" "jemalloc 5.0.1-0-g896ed3a8b3f4" "User Manual"
.\" -----------------------------------------------------------------
.\" * Define some portability stuff
.\" -----------------------------------------------------------------
.\" ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.\" http://bugs.debian.org/507673
.\" http://lists.gnu.org/archive/html/groff/2009-02/msg00013.html
.\" ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.ie \n(.g .ds Aq \(aq
.el .ds Aq '
.\" -----------------------------------------------------------------
.\" * set default formatting
.\" -----------------------------------------------------------------
.\" disable hyphenation
.nh
.\" disable justification (adjust text to left margin only)
.ad l
.\" -----------------------------------------------------------------
.\" * MAIN CONTENT STARTS HERE *
.\" -----------------------------------------------------------------
.SH "NAME"
jemalloc \- general purpose memory allocation functions
.SH "LIBRARY"
.PP
This manual describes jemalloc 5\&.0\&.1\-0\-g896ed3a8b3f41998d4fb4d625d30ac63ef2d51fb\&. More information can be found at the \m[blue]\fBjemalloc website\fR\m[]\&\s-2\u[1]\d\s+2\&.
.PP
The following configuration options are enabled in libc\*(Aqs built\-in jemalloc: \fB\-\-enable\-fill\fR, \fB\-\-enable\-lazy\-lock\fR, \fB\-\-enable\-stats\fR, \fB\-\-enable\-utrace\fR, \fB\-\-enable\-xmalloc\fR, and \fB\-\-with\-malloc\-conf=abort_conf:false\fR\&. Additionally, \fB\-\-enable\-debug\fR is enabled in development versions of FreeBSD (controlled by the \fBMALLOC_PRODUCTION\fR make variable)\&.
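.PP
Because the set of compiled\-in features differs between libc\*(Aqs built\-in jemalloc and standalone builds, a program can verify at run time which features are present by reading the config\&.* nodes described in the MALLCTL NAMESPACE section below\&. A minimal sketch; the header choice is an assumption (<malloc_np\&.h> on FreeBSD, <jemalloc/jemalloc\&.h> for a standalone jemalloc):
.nf
#include <stdbool.h>
#include <stdio.h>
#include <malloc_np.h>

/* Read a boolean config.* node; treat a lookup failure as "disabled". */
static bool
config_enabled(const char *name)
{
	bool v = false;
	size_t sz = sizeof(v);

	if (mallctl(name, &v, &sz, NULL, 0) != 0)
		return (false);
	return (v);
}

int
main(void)
{
	printf("stats: %d\n", config_enabled("config.stats"));
	printf("fill:  %d\n", config_enabled("config.fill"));
	printf("debug: %d\n", config_enabled("config.debug"));
	return (0);
}
.fi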
.SH "SYNOPSIS" .sp .ft B .nf #include #include .fi .ft .SS "Standard API" .HP \w'void\ *malloc('u .BI "void *malloc(size_t\ " "size" ");" .HP \w'void\ *calloc('u .BI "void *calloc(size_t\ " "number" ", size_t\ " "size" ");" .HP \w'int\ posix_memalign('u .BI "int posix_memalign(void\ **" "ptr" ", size_t\ " "alignment" ", size_t\ " "size" ");" .HP \w'void\ *aligned_alloc('u .BI "void *aligned_alloc(size_t\ " "alignment" ", size_t\ " "size" ");" .HP \w'void\ *realloc('u .BI "void *realloc(void\ *" "ptr" ", size_t\ " "size" ");" .HP \w'void\ free('u .BI "void free(void\ *" "ptr" ");" .SS "Non\-standard API" .HP \w'void\ *mallocx('u .BI "void *mallocx(size_t\ " "size" ", int\ " "flags" ");" .HP \w'void\ *rallocx('u .BI "void *rallocx(void\ *" "ptr" ", size_t\ " "size" ", int\ " "flags" ");" .HP \w'size_t\ xallocx('u .BI "size_t xallocx(void\ *" "ptr" ", size_t\ " "size" ", size_t\ " "extra" ", int\ " "flags" ");" .HP \w'size_t\ sallocx('u .BI "size_t sallocx(void\ *" "ptr" ", int\ " "flags" ");" .HP \w'void\ dallocx('u .BI "void dallocx(void\ *" "ptr" ", int\ " "flags" ");" .HP \w'void\ sdallocx('u .BI "void sdallocx(void\ *" "ptr" ", size_t\ " "size" ", int\ " "flags" ");" .HP \w'size_t\ nallocx('u .BI "size_t nallocx(size_t\ " "size" ", int\ " "flags" ");" .HP \w'int\ mallctl('u .BI "int mallctl(const\ char\ *" "name" ", void\ *" "oldp" ", size_t\ *" "oldlenp" ", void\ *" "newp" ", size_t\ " "newlen" ");" .HP \w'int\ mallctlnametomib('u .BI "int mallctlnametomib(const\ char\ *" "name" ", size_t\ *" "mibp" ", size_t\ *" "miblenp" ");" .HP \w'int\ mallctlbymib('u .BI "int mallctlbymib(const\ size_t\ *" "mib" ", size_t\ " "miblen" ", void\ *" "oldp" ", size_t\ *" "oldlenp" ", void\ *" "newp" ", size_t\ " "newlen" ");" .HP \w'void\ malloc_stats_print('u .BI "void malloc_stats_print(void\ " "(*write_cb)" "\ (void\ *,\ const\ char\ *), void\ *" "cbopaque" ", const\ char\ *" "opts" ");" .HP \w'size_t\ malloc_usable_size('u .BI "size_t malloc_usable_size(const\ void\ *" "ptr" ");" .HP \w'void\ (*malloc_message)('u .BI "void (*malloc_message)(void\ *" "cbopaque" ", const\ char\ *" "s" ");" .PP const char *\fImalloc_conf\fR; .SH "DESCRIPTION" .SS "Standard API" .PP The malloc() function allocates \fIsize\fR bytes of uninitialized memory\&. The allocated space is suitably aligned (after possible pointer coercion) for storage of any type of object\&. .PP The calloc() function allocates space for \fInumber\fR objects, each \fIsize\fR bytes in length\&. The result is identical to calling malloc() with an argument of \fInumber\fR * \fIsize\fR, with the exception that the allocated memory is explicitly initialized to zero bytes\&. .PP The posix_memalign() function allocates \fIsize\fR bytes of memory such that the allocation\*(Aqs base address is a multiple of \fIalignment\fR, and returns the allocation in the value pointed to by \fIptr\fR\&. The requested \fIalignment\fR must be a power of 2 at least as large as sizeof(\fBvoid *\fR)\&. .PP The aligned_alloc() function allocates \fIsize\fR bytes of memory such that the allocation\*(Aqs base address is a multiple of \fIalignment\fR\&. The requested \fIalignment\fR must be a power of 2\&. Behavior is undefined if \fIsize\fR is not an integral multiple of \fIalignment\fR\&. .PP The realloc() function changes the size of the previously allocated memory referenced by \fIptr\fR to \fIsize\fR bytes\&. The contents of the memory are unchanged up to the lesser of the new and old sizes\&. 
If the new size is larger, the contents of the newly allocated portion of the memory are undefined\&. Upon success, the memory referenced by \fIptr\fR is freed and a pointer to the newly allocated memory is returned\&. Note that realloc() may move the memory allocation, resulting in a different return value than \fIptr\fR\&. If \fIptr\fR is \fBNULL\fR, the realloc() function behaves identically to malloc() for the specified size\&. .PP The free() function causes the allocated memory referenced by \fIptr\fR to be made available for future allocations\&. If \fIptr\fR is \fBNULL\fR, no action occurs\&. .SS "Non\-standard API" .PP The mallocx(), rallocx(), xallocx(), sallocx(), dallocx(), sdallocx(), and nallocx() functions all have a \fIflags\fR argument that can be used to specify options\&. The functions only check the options that are contextually relevant\&. Use bitwise or (|) operations to specify one or more of the following: .PP \fBMALLOCX_LG_ALIGN(\fR\fB\fIla\fR\fR\fB) \fR .RS 4 Align the memory allocation to start at an address that is a multiple of (1 << \fIla\fR)\&. This macro does not validate that \fIla\fR is within the valid range\&. .RE .PP \fBMALLOCX_ALIGN(\fR\fB\fIa\fR\fR\fB) \fR .RS 4 Align the memory allocation to start at an address that is a multiple of \fIa\fR, where \fIa\fR is a power of two\&. This macro does not validate that \fIa\fR is a power of 2\&. .RE .PP \fBMALLOCX_ZERO\fR .RS 4 Initialize newly allocated memory to contain zero bytes\&. In the growing reallocation case, the real size prior to reallocation defines the boundary between untouched bytes and those that are initialized to contain zero bytes\&. If this macro is absent, newly allocated memory is uninitialized\&. .RE .PP \fBMALLOCX_TCACHE(\fR\fB\fItc\fR\fR\fB) \fR .RS 4 Use the thread\-specific cache (tcache) specified by the identifier \fItc\fR, which must have been acquired via the tcache\&.create mallctl\&. This macro does not validate that \fItc\fR specifies a valid identifier\&. .RE .PP \fBMALLOCX_TCACHE_NONE\fR .RS 4 Do not use a thread\-specific cache (tcache)\&. Unless \fBMALLOCX_TCACHE(\fR\fB\fItc\fR\fR\fB)\fR or \fBMALLOCX_TCACHE_NONE\fR is specified, an automatically managed tcache will be used under many circumstances\&. This macro cannot be used in the same \fIflags\fR argument as \fBMALLOCX_TCACHE(\fR\fB\fItc\fR\fR\fB)\fR\&. .RE .PP \fBMALLOCX_ARENA(\fR\fB\fIa\fR\fR\fB) \fR .RS 4 Use the arena specified by the index \fIa\fR\&. This macro has no effect for regions that were allocated via an arena other than the one specified\&. This macro does not validate that \fIa\fR specifies an arena index in the valid range\&. .RE .PP The mallocx() function allocates at least \fIsize\fR bytes of memory, and returns a pointer to the base address of the allocation\&. Behavior is undefined if \fIsize\fR is \fB0\fR\&. .PP The rallocx() function resizes the allocation at \fIptr\fR to be at least \fIsize\fR bytes, and returns a pointer to the base address of the resulting allocation, which may or may not have moved from its original location\&. Behavior is undefined if \fIsize\fR is \fB0\fR\&. .PP The xallocx() function resizes the allocation at \fIptr\fR in place to be at least \fIsize\fR bytes, and returns the real size of the allocation\&. If \fIextra\fR is non\-zero, an attempt is made to resize the allocation to be at least (\fIsize\fR + \fIextra\fR) bytes, though inability to allocate the extra byte(s) will not by itself result in failure to resize\&. 
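.PP
As an illustration of the flags described above, the following sketch allocates zeroed, 64\-byte\-aligned memory with mallocx(), grows it with rallocx(), queries the real size with sallocx(), and releases it with dallocx() (both described just below)\&. The sizes, the alignment, and the header choice (<malloc_np\&.h> on FreeBSD, <jemalloc/jemalloc\&.h> otherwise) are illustrative assumptions\&.
.nf
#include <stdio.h>
#include <malloc_np.h>

int
main(void)
{
	/* 4 KiB, zero-filled, aligned to a 64-byte boundary. */
	void *p = mallocx(4096, MALLOCX_ALIGN(64) | MALLOCX_ZERO);
	if (p == NULL)
		return (1);

	/* Grow to at least 8 KiB; the allocation may move. */
	void *q = rallocx(p, 8192, MALLOCX_ALIGN(64));
	if (q == NULL) {
		dallocx(p, 0);
		return (1);
	}

	/* sallocx() reports the real (size-class-rounded) size. */
	printf("usable size: %zu\n", sallocx(q, 0));

	dallocx(q, 0);
	return (0);
}
.fi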
Behavior is undefined if \fIsize\fR is \fB0\fR, or if (\fIsize\fR + \fIextra\fR > \fBSIZE_T_MAX\fR)\&. .PP The sallocx() function returns the real size of the allocation at \fIptr\fR\&. .PP The dallocx() function causes the memory referenced by \fIptr\fR to be made available for future allocations\&. .PP The sdallocx() function is an extension of dallocx() with a \fIsize\fR parameter to allow the caller to pass in the allocation size as an optimization\&. The minimum valid input size is the original requested size of the allocation, and the maximum valid input size is the corresponding value returned by nallocx() or sallocx()\&. .PP The nallocx() function allocates no memory, but it performs the same size computation as the mallocx() function, and returns the real size of the allocation that would result from the equivalent mallocx() function call, or \fB0\fR if the inputs exceed the maximum supported size class and/or alignment\&. Behavior is undefined if \fIsize\fR is \fB0\fR\&. .PP The mallctl() function provides a general interface for introspecting the memory allocator, as well as setting modifiable parameters and triggering actions\&. The period\-separated \fIname\fR argument specifies a location in a tree\-structured namespace; see the MALLCTL NAMESPACE section for documentation on the tree contents\&. To read a value, pass a pointer via \fIoldp\fR to adequate space to contain the value, and a pointer to its length via \fIoldlenp\fR; otherwise pass \fBNULL\fR and \fBNULL\fR\&. Similarly, to write a value, pass a pointer to the value via \fInewp\fR, and its length via \fInewlen\fR; otherwise pass \fBNULL\fR and \fB0\fR\&. .PP The mallctlnametomib() function provides a way to avoid repeated name lookups for applications that repeatedly query the same portion of the namespace, by translating a name to a \(lqManagement Information Base\(rq (MIB) that can be passed repeatedly to mallctlbymib()\&. Upon successful return from mallctlnametomib(), \fImibp\fR contains an array of \fI*miblenp\fR integers, where \fI*miblenp\fR is the lesser of the number of components in \fIname\fR and the input value of \fI*miblenp\fR\&. Thus it is possible to pass a \fI*miblenp\fR that is smaller than the number of period\-separated name components, which results in a partial MIB that can be used as the basis for constructing a complete MIB\&. For name components that are integers (e\&.g\&. the 2 in arenas\&.bin\&.2\&.size), the corresponding MIB component will always be that integer\&. Therefore, it is legitimate to construct code like the following: .sp .if n \{\ .RS 4 .\} .nf unsigned nbins, i; size_t mib[4]; size_t len, miblen; len = sizeof(nbins); mallctl("arenas\&.nbins", &nbins, &len, NULL, 0); miblen = 4; mallctlnametomib("arenas\&.bin\&.0\&.size", mib, &miblen); for (i = 0; i < nbins; i++) { size_t bin_size; mib[2] = i; len = sizeof(bin_size); mallctlbymib(mib, miblen, (void *)&bin_size, &len, NULL, 0); /* Do something with bin_size\&.\&.\&. */ } .fi .if n \{\ .RE .\} .PP .RS 4 .RE .PP The malloc_stats_print() function writes summary statistics via the \fIwrite_cb\fR callback function pointer and \fIcbopaque\fR data passed to \fIwrite_cb\fR, or malloc_message() if \fIwrite_cb\fR is \fBNULL\fR\&. The statistics are presented in human\-readable form unless \(lqJ\(rq is specified as a character within the \fIopts\fR string, in which case the statistics are presented in \m[blue]\fBJSON format\fR\m[]\&\s-2\u[2]\d\s+2\&. This function can be called repeatedly\&. 
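.PP
The write_cb/cbopaque pair makes it straightforward to send the statistics somewhere other than the default destination; the sketch below appends the output to a caller\-supplied FILE * and requests JSON via the \(lqJ\(rq option described above\&. The callback and file names are illustrative, not part of the API, and the header choice is the same assumption as in the earlier example\&.
.nf
#include <stdio.h>
#include <malloc_np.h>

/* Illustrative callback: cbopaque is the FILE * passed to malloc_stats_print(). */
static void
write_to_file(void *cbopaque, const char *s)
{
	fputs(s, (FILE *)cbopaque);
}

int
main(void)
{
	FILE *f = fopen("jemalloc_stats.json", "w");

	if (f == NULL)
		return (1);
	/* A NULL write_cb would route the output through malloc_message(). */
	malloc_stats_print(write_to_file, f, "J");
	fclose(f);
	return (0);
}
.fi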
General information that never changes during execution can be omitted by specifying \(lqg\(rq as a character within the \fIopts\fR string\&. Note that malloc_message() uses the mallctl*() functions internally, so inconsistent statistics can be reported if multiple threads use these functions simultaneously\&. If \fB\-\-enable\-stats\fR is specified during configuration, \(lqm\(rq, \(lqd\(rq, and \(lqa\(rq can be specified to omit merged arena, destroyed merged arena, and per arena statistics, respectively; \(lqb\(rq and \(lql\(rq can be specified to omit per size class statistics for bins and large objects, respectively; \(lqx\(rq can be specified to omit all mutex statistics\&. Unrecognized characters are silently ignored\&. Note that thread caching may prevent some statistics from being completely up to date, since extra locking would be required to merge counters that track thread cache operations\&. .PP The malloc_usable_size() function returns the usable size of the allocation pointed to by \fIptr\fR\&. The return value may be larger than the size that was requested during allocation\&. The malloc_usable_size() function is not a mechanism for in\-place realloc(); rather it is provided solely as a tool for introspection purposes\&. Any discrepancy between the requested allocation size and the size reported by malloc_usable_size() should not be depended on, since such behavior is entirely implementation\-dependent\&. .SH "TUNING" .PP Once, when the first call is made to one of the memory allocation routines, the allocator initializes its internals based in part on various options that can be specified at compile\- or run\-time\&. .PP The string specified via \fB\-\-with\-malloc\-conf\fR, the string pointed to by the global variable \fImalloc_conf\fR, the \(lqname\(rq of the file referenced by the symbolic link named /etc/malloc\&.conf, and the value of the environment variable \fBMALLOC_CONF\fR, will be interpreted, in that order, from left to right as options\&. Note that \fImalloc_conf\fR may be read before main() is entered, so the declaration of \fImalloc_conf\fR should specify an initializer that contains the final value to be read by jemalloc\&. \fB\-\-with\-malloc\-conf\fR and \fImalloc_conf\fR are compile\-time mechanisms, whereas /etc/malloc\&.conf and \fBMALLOC_CONF\fR can be safely set any time prior to program invocation\&. .PP An options string is a comma\-separated list of option:value pairs\&. There is one key corresponding to each opt\&.* mallctl (see the MALLCTL NAMESPACE section for options documentation)\&. For example, abort:true,narenas:1 sets the opt\&.abort and opt\&.narenas options\&. Some options have boolean values (true/false), others have integer values (base 8, 10, or 16, depending on prefix), and yet others have raw string values\&. .SH "IMPLEMENTATION NOTES" .PP Traditionally, allocators have used \fBsbrk\fR(2) to obtain memory, which is suboptimal for several reasons, including race conditions, increased fragmentation, and artificial limitations on maximum usable memory\&. If \fBsbrk\fR(2) is supported by the operating system, this allocator uses both \fBmmap\fR(2) and \fBsbrk\fR(2), in that order of preference; otherwise only \fBmmap\fR(2) is used\&. .PP This allocator uses multiple arenas in order to reduce lock contention for threaded programs on multi\-processor systems\&. This works well with regard to threading scalability, but incurs some costs\&. 
There is a small fixed per\-arena overhead, and additionally, arenas manage memory completely independently of each other, which means a small fixed increase in overall memory fragmentation\&. These overheads are not generally an issue, given the number of arenas normally used\&. Note that using substantially more arenas than the default is not likely to improve performance, mainly due to reduced cache performance\&. However, it may make sense to reduce the number of arenas if an application does not make much use of the allocation functions\&. .PP In addition to multiple arenas, this allocator supports thread\-specific caching, in order to make it possible to completely avoid synchronization for most allocation requests\&. Such caching allows very fast allocation in the common case, but it increases memory usage and fragmentation, since a bounded number of objects can remain allocated in each thread cache\&. .PP Memory is conceptually broken into extents\&. Extents are always aligned to multiples of the page size\&. This alignment makes it possible to find metadata for user objects quickly\&. User objects are broken into two categories according to size: small and large\&. Contiguous small objects comprise a slab, which resides within a single extent, whereas large objects each have their own extents backing them\&. .PP Small objects are managed in groups by slabs\&. Each slab maintains a bitmap to track which regions are in use\&. Allocation requests that are no more than half the quantum (8 or 16, depending on architecture) are rounded up to the nearest power of two that is at least sizeof(\fBdouble\fR)\&. All other object size classes are multiples of the quantum, spaced such that there are four size classes for each doubling in size, which limits internal fragmentation to approximately 20% for all but the smallest size classes\&. Small size classes are smaller than four times the page size, and large size classes extend from four times the page size up to the largest size class that does not exceed \fBPTRDIFF_MAX\fR\&. .PP Allocations are packed tightly together, which can be an issue for multi\-threaded applications\&. If you need to assure that allocations do not suffer from cacheline sharing, round your allocation requests up to the nearest multiple of the cacheline size, or specify cacheline alignment when allocating\&. .PP The realloc(), rallocx(), and xallocx() functions may resize allocations without moving them under limited circumstances\&. Unlike the *allocx() API, the standard API does not officially round up the usable size of an allocation to the nearest size class, so technically it is necessary to call realloc() to grow e\&.g\&. a 9\-byte allocation to 16 bytes, or shrink a 16\-byte allocation to 9 bytes\&. Growth and shrinkage trivially succeeds in place as long as the pre\-size and post\-size both round up to the same size class\&. No other API guarantees are made regarding in\-place resizing, but the current implementation also tries to resize large allocations in place, as long as the pre\-size and post\-size are both large\&. For shrinkage to succeed, the extent allocator must support splitting (see arena\&.\&.extent_hooks)\&. Growth only succeeds if the trailing memory is currently available, and the extent allocator supports merging\&. .PP Assuming 4 KiB pages and a 16\-byte quantum on a 64\-bit system, the size classes in each category are as shown in Table 1\&. 
.sp .it 1 an-trap .nr an-no-space-flag 1 .nr an-break-flag 1 .br .B Table\ \&1.\ \&Size classes .TS allbox tab(:); lB rB lB. T{ Category T}:T{ Spacing T}:T{ Size T} .T& l r l ^ r l ^ r l ^ r l ^ r l ^ r l ^ r l ^ r l ^ r l l r l ^ r l ^ r l ^ r l ^ r l ^ r l ^ r l ^ r l ^ r l ^ r l ^ r l ^ r l ^ r l ^ r l ^ r l ^ r l. T{ Small T}:T{ lg T}:T{ [8] T} :T{ 16 T}:T{ [16, 32, 48, 64, 80, 96, 112, 128] T} :T{ 32 T}:T{ [160, 192, 224, 256] T} :T{ 64 T}:T{ [320, 384, 448, 512] T} :T{ 128 T}:T{ [640, 768, 896, 1024] T} :T{ 256 T}:T{ [1280, 1536, 1792, 2048] T} :T{ 512 T}:T{ [2560, 3072, 3584, 4096] T} :T{ 1 KiB T}:T{ [5 KiB, 6 KiB, 7 KiB, 8 KiB] T} :T{ 2 KiB T}:T{ [10 KiB, 12 KiB, 14 KiB] T} T{ Large T}:T{ 2 KiB T}:T{ [16 KiB] T} :T{ 4 KiB T}:T{ [20 KiB, 24 KiB, 28 KiB, 32 KiB] T} :T{ 8 KiB T}:T{ [40 KiB, 48 KiB, 54 KiB, 64 KiB] T} :T{ 16 KiB T}:T{ [80 KiB, 96 KiB, 112 KiB, 128 KiB] T} :T{ 32 KiB T}:T{ [160 KiB, 192 KiB, 224 KiB, 256 KiB] T} :T{ 64 KiB T}:T{ [320 KiB, 384 KiB, 448 KiB, 512 KiB] T} :T{ 128 KiB T}:T{ [640 KiB, 768 KiB, 896 KiB, 1 MiB] T} :T{ 256 KiB T}:T{ [1280 KiB, 1536 KiB, 1792 KiB, 2 MiB] T} :T{ 512 KiB T}:T{ [2560 KiB, 3 MiB, 3584 KiB, 4 MiB] T} :T{ 1 MiB T}:T{ [5 MiB, 6 MiB, 7 MiB, 8 MiB] T} :T{ 2 MiB T}:T{ [10 MiB, 12 MiB, 14 MiB, 16 MiB] T} :T{ 4 MiB T}:T{ [20 MiB, 24 MiB, 28 MiB, 32 MiB] T} :T{ 8 MiB T}:T{ [40 MiB, 48 MiB, 56 MiB, 64 MiB] T} :T{ \&.\&.\&. T}:T{ \&.\&.\&. T} :T{ 512 PiB T}:T{ [2560 PiB, 3 EiB, 3584 PiB, 4 EiB] T} :T{ 1 EiB T}:T{ [5 EiB, 6 EiB, 7 EiB] T} .TE .sp 1 .SH "MALLCTL NAMESPACE" .PP The following names are defined in the namespace accessible via the mallctl*() functions\&. Value types are specified in parentheses, their readable/writable statuses are encoded as rw, r\-, \-w, or \-\-, and required build configuration flags follow, if any\&. A name element encoded as or indicates an integer component, where the integer varies from 0 to some upper value that must be determined via introspection\&. In the case of stats\&.arenas\&.\&.* and arena\&.\&.{initialized,purge,decay,dss}, equal to \fBMALLCTL_ARENAS_ALL\fR can be used to operate on all arenas or access the summation of statistics from all arenas; similarly equal to \fBMALLCTL_ARENAS_DESTROYED\fR can be used to access the summation of statistics from all destroyed arenas\&. These constants can be utilized either via mallctlnametomib() followed by mallctlbymib(), or via code such as the following: .sp .if n \{\ .RS 4 .\} .nf #define STRINGIFY_HELPER(x) #x #define STRINGIFY(x) STRINGIFY_HELPER(x) mallctl("arena\&." STRINGIFY(MALLCTL_ARENAS_ALL) "\&.decay", NULL, NULL, NULL, 0); .fi .if n \{\ .RE .\} .sp Take special note of the epoch mallctl, which controls refreshing of cached dynamic statistics\&. .PP version (\fBconst char *\fR) r\- .RS 4 Return the jemalloc version string\&. .RE .PP epoch (\fBuint64_t\fR) rw .RS 4 If a value is passed in, refresh the data from which the mallctl*() functions report values, and increment the epoch\&. Return the current epoch\&. This is useful for detecting whether another thread caused a refresh\&. .RE .PP background_thread (\fBbool\fR) rw .RS 4 Enable/disable internal background worker threads\&. When set to true, background threads are created on demand (the number of background threads will be no more than the number of CPUs or active arenas)\&. Threads run periodically, and handle purging asynchronously\&. When switching off, background threads are terminated synchronously\&. 
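.PP
A short sketch of mallctl() usage against the nodes just described: read the version string, bump the epoch so that subsequently reported values are refreshed, and enable background threads (which, as noted below, are only available on selected platforms)\&. Error handling is reduced to a message, and the header choice is an assumption as in the earlier examples\&.
.nf
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <malloc_np.h>

int
main(void)
{
	const char *version;
	size_t sz = sizeof(version);

	if (mallctl("version", &version, &sz, NULL, 0) == 0)
		printf("jemalloc %s\n", version);

	/* Refresh the data reported by subsequent mallctl*() reads. */
	uint64_t epoch = 1;
	sz = sizeof(epoch);
	mallctl("epoch", &epoch, &sz, &epoch, sz);

	/* Ask for asynchronous purging by background threads. */
	bool bg = true;
	if (mallctl("background_thread", NULL, NULL, &bg, sizeof(bg)) != 0)
		fprintf(stderr, "background_thread could not be enabled\n");
	return (0);
}
.fi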
Note that after \fBfork\fR(2) function, the state in the child process will be disabled regardless the state in parent process\&. See stats\&.background_thread for related stats\&. opt\&.background_thread can be used to set the default option\&. This option is only available on selected pthread\-based platforms\&. .RE .PP config\&.cache_oblivious (\fBbool\fR) r\- .RS 4 \fB\-\-enable\-cache\-oblivious\fR was specified during build configuration\&. .RE .PP config\&.debug (\fBbool\fR) r\- .RS 4 \fB\-\-enable\-debug\fR was specified during build configuration\&. .RE .PP config\&.fill (\fBbool\fR) r\- .RS 4 \fB\-\-enable\-fill\fR was specified during build configuration\&. .RE .PP config\&.lazy_lock (\fBbool\fR) r\- .RS 4 \fB\-\-enable\-lazy\-lock\fR was specified during build configuration\&. .RE .PP config\&.malloc_conf (\fBconst char *\fR) r\- .RS 4 Embedded configure\-time\-specified run\-time options string, empty unless \fB\-\-with\-malloc\-conf\fR was specified during build configuration\&. .RE .PP config\&.prof (\fBbool\fR) r\- .RS 4 \fB\-\-enable\-prof\fR was specified during build configuration\&. .RE .PP config\&.prof_libgcc (\fBbool\fR) r\- .RS 4 \fB\-\-disable\-prof\-libgcc\fR was not specified during build configuration\&. .RE .PP config\&.prof_libunwind (\fBbool\fR) r\- .RS 4 \fB\-\-enable\-prof\-libunwind\fR was specified during build configuration\&. .RE .PP config\&.stats (\fBbool\fR) r\- .RS 4 \fB\-\-enable\-stats\fR was specified during build configuration\&. .RE .PP config\&.thp (\fBbool\fR) r\- .RS 4 \fB\-\-disable\-thp\fR was not specified during build configuration, and the system supports transparent huge page manipulation\&. .RE .PP config\&.utrace (\fBbool\fR) r\- .RS 4 \fB\-\-enable\-utrace\fR was specified during build configuration\&. .RE .PP config\&.xmalloc (\fBbool\fR) r\- .RS 4 \fB\-\-enable\-xmalloc\fR was specified during build configuration\&. .RE .PP opt\&.abort (\fBbool\fR) r\- .RS 4 Abort\-on\-warning enabled/disabled\&. If true, most warnings are fatal\&. Note that runtime option warnings are not included (see opt\&.abort_conf for that)\&. The process will call \fBabort\fR(3) in these cases\&. This option is disabled by default unless \fB\-\-enable\-debug\fR is specified during configuration, in which case it is enabled by default\&. .RE .PP opt\&.abort_conf (\fBbool\fR) r\- .RS 4 Abort\-on\-invalid\-configuration enabled/disabled\&. If true, invalid runtime options are fatal\&. The process will call \fBabort\fR(3) in these cases\&. This option is disabled by default unless \fB\-\-enable\-debug\fR is specified during configuration, in which case it is enabled by default\&. .RE .PP opt\&.retain (\fBbool\fR) r\- .RS 4 If true, retain unused virtual memory for later reuse rather than discarding it by calling \fBmunmap\fR(2) or equivalent (see stats\&.retained for related details)\&. This option is disabled by default unless discarding virtual memory is known to trigger platform\-specific performance problems, e\&.g\&. for [64\-bit] Linux, which has a quirk in its virtual memory allocation algorithm that causes semi\-permanent VM map holes under normal jemalloc operation\&. Although \fBmunmap\fR(2) causes issues on 32\-bit Linux as well, retaining virtual memory for 32\-bit Linux is disabled by default due to the practical possibility of address space exhaustion\&. .RE .PP opt\&.dss (\fBconst char *\fR) r\- .RS 4 dss (\fBsbrk\fR(2)) allocation precedence as related to \fBmmap\fR(2) allocation\&. 
The following settings are supported if \fBsbrk\fR(2) is supported by the operating system: \(lqdisabled\(rq, \(lqprimary\(rq, and \(lqsecondary\(rq; otherwise only \(lqdisabled\(rq is supported\&. The default is \(lqsecondary\(rq if \fBsbrk\fR(2) is supported by the operating system; \(lqdisabled\(rq otherwise\&. .RE .PP opt\&.narenas (\fBunsigned\fR) r\- .RS 4 Maximum number of arenas to use for automatic multiplexing of threads and arenas\&. The default is four times the number of CPUs, or one if there is a single CPU\&. .RE .PP opt\&.percpu_arena (\fBconst char *\fR) r\- .RS 4 Per CPU arena mode\&. Use the \(lqpercpu\(rq setting to enable this feature, which uses number of CPUs to determine number of arenas, and bind threads to arenas dynamically based on the CPU the thread runs on currently\&. \(lqphycpu\(rq setting uses one arena per physical CPU, which means the two hyper threads on the same CPU share one arena\&. Note that no runtime checking regarding the availability of hyper threading is done at the moment\&. When set to \(lqdisabled\(rq, narenas and thread to arena association will not be impacted by this option\&. The default is \(lqdisabled\(rq\&. .RE .PP opt\&.background_thread (\fBconst bool\fR) r\- .RS 4 Internal background worker threads enabled/disabled\&. See background_thread for dynamic control options and details\&. This option is disabled by default\&. .RE .PP opt\&.dirty_decay_ms (\fBssize_t\fR) r\- .RS 4 Approximate time in milliseconds from the creation of a set of unused dirty pages until an equivalent set of unused dirty pages is purged (i\&.e\&. converted to muzzy via e\&.g\&. madvise(\fI\&.\&.\&.\fR\fI\fBMADV_FREE\fR\fR) if supported by the operating system, or converted to clean otherwise) and/or reused\&. Dirty pages are defined as previously having been potentially written to by the application, and therefore consuming physical memory, yet having no current use\&. The pages are incrementally purged according to a sigmoidal decay curve that starts and ends with zero purge rate\&. A decay time of 0 causes all unused dirty pages to be purged immediately upon creation\&. A decay time of \-1 disables purging\&. The default decay time is 10 seconds\&. See arenas\&.dirty_decay_ms and arena\&.\&.muzzy_decay_ms for related dynamic control options\&. See opt\&.muzzy_decay_ms for a description of muzzy pages\&. .RE .PP opt\&.muzzy_decay_ms (\fBssize_t\fR) r\- .RS 4 Approximate time in milliseconds from the creation of a set of unused muzzy pages until an equivalent set of unused muzzy pages is purged (i\&.e\&. converted to clean) and/or reused\&. Muzzy pages are defined as previously having been unused dirty pages that were subsequently purged in a manner that left them subject to the reclamation whims of the operating system (e\&.g\&. madvise(\fI\&.\&.\&.\fR\fI\fBMADV_FREE\fR\fR)), and therefore in an indeterminate state\&. The pages are incrementally purged according to a sigmoidal decay curve that starts and ends with zero purge rate\&. A decay time of 0 causes all unused muzzy pages to be purged immediately upon creation\&. A decay time of \-1 disables purging\&. The default decay time is 10 seconds\&. See arenas\&.muzzy_decay_ms and arena\&.\&.muzzy_decay_ms for related dynamic control options\&. .RE .PP opt\&.stats_print (\fBbool\fR) r\- .RS 4 Enable/disable statistics printing at exit\&. If enabled, the malloc_stats_print() function is called at program exit via an \fBatexit\fR(3) function\&. opt\&.stats_print_opts can be combined to specify output options\&. 
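.PP
Several of the options above are most conveniently set through the options string mechanisms described in the TUNING section\&. A minimal sketch, assuming the application wants these particular (purely illustrative) values baked in at compile time; the same string could instead be supplied via \fBMALLOC_CONF\fR:
.nf
#include <stdlib.h>

/* Read during allocator initialization, possibly before main() is entered. */
const char *malloc_conf = "background_thread:true,narenas:4,"
    "dirty_decay_ms:5000,muzzy_decay_ms:0";

int
main(void)
{
	void *p = malloc(64);	/* the first allocation triggers option parsing */

	free(p);
	return (0);
}
.fi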
If \fB\-\-enable\-stats\fR is specified during configuration, this has the potential to cause deadlock for a multi\-threaded process that exits while one or more threads are executing in the memory allocation functions\&. Furthermore, atexit() may allocate memory during application initialization and then deadlock internally when jemalloc in turn calls atexit(), so this option is not universally usable (though the application can register its own atexit() function with equivalent functionality)\&. Therefore, this option should only be used with care; it is primarily intended as a performance tuning aid during application development\&. This option is disabled by default\&. .RE .PP opt\&.stats_print_opts (\fBconst char *\fR) r\- .RS 4 Options (the \fIopts\fR string) to pass to the malloc_stats_print() at exit (enabled through opt\&.stats_print)\&. See available options in malloc_stats_print()\&. Has no effect unless opt\&.stats_print is enabled\&. The default is \(lq\(rq\&. .RE .PP opt\&.junk (\fBconst char *\fR) r\- [\fB\-\-enable\-fill\fR] .RS 4 Junk filling\&. If set to \(lqalloc\(rq, each byte of uninitialized allocated memory will be initialized to 0xa5\&. If set to \(lqfree\(rq, all deallocated memory will be initialized to 0x5a\&. If set to \(lqtrue\(rq, both allocated and deallocated memory will be initialized, and if set to \(lqfalse\(rq, junk filling be disabled entirely\&. This is intended for debugging and will impact performance negatively\&. This option is \(lqfalse\(rq by default unless \fB\-\-enable\-debug\fR is specified during configuration, in which case it is \(lqtrue\(rq by default\&. .RE .PP opt\&.zero (\fBbool\fR) r\- [\fB\-\-enable\-fill\fR] .RS 4 Zero filling enabled/disabled\&. If enabled, each byte of uninitialized allocated memory will be initialized to 0\&. Note that this initialization only happens once for each byte, so realloc() and rallocx() calls do not zero memory that was previously allocated\&. This is intended for debugging and will impact performance negatively\&. This option is disabled by default\&. .RE .PP opt\&.utrace (\fBbool\fR) r\- [\fB\-\-enable\-utrace\fR] .RS 4 Allocation tracing based on \fButrace\fR(2) enabled/disabled\&. This option is disabled by default\&. .RE .PP opt\&.xmalloc (\fBbool\fR) r\- [\fB\-\-enable\-xmalloc\fR] .RS 4 Abort\-on\-out\-of\-memory enabled/disabled\&. If enabled, rather than returning failure for any allocation function, display a diagnostic message on \fBSTDERR_FILENO\fR and cause the program to drop core (using \fBabort\fR(3))\&. If an application is designed to depend on this behavior, set the option at compile time by including the following in the source code: .sp .if n \{\ .RS 4 .\} .nf malloc_conf = "xmalloc:true"; .fi .if n \{\ .RE .\} .sp This option is disabled by default\&. .RE .PP opt\&.tcache (\fBbool\fR) r\- .RS 4 Thread\-specific caching (tcache) enabled/disabled\&. When there are multiple threads, each thread uses a tcache for objects up to a certain size\&. Thread\-specific caching allows many allocations to be satisfied without performing any thread synchronization, at the cost of increased memory use\&. See the opt\&.lg_tcache_max option for related tuning information\&. This option is enabled by default\&. .RE .PP opt\&.lg_tcache_max (\fBsize_t\fR) r\- .RS 4 Maximum size class (log base 2) to cache in the thread\-specific cache (tcache)\&. At a minimum, all small size classes are cached, and at a maximum all large size classes are cached\&. The default maximum is 32 KiB (2^15)\&. 
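.PP
The automatically managed tcache governed by opt\&.tcache and opt\&.lg_tcache_max is normally sufficient, but an explicit cache can also be created and targeted with the \fBMALLOCX_TCACHE(\fR\fItc\fR\fB)\fR flag; the tcache\&.create, tcache\&.flush, and tcache\&.destroy mallctls are described later in this section\&. A sketch, with the usual header assumption:
.nf
#include <stdlib.h>
#include <malloc_np.h>

int
main(void)
{
	unsigned tc;
	size_t sz = sizeof(tc);

	/* Create an explicit tcache; it may be used by one thread at a time. */
	if (mallctl("tcache.create", &tc, &sz, NULL, 0) != 0)
		return (1);

	void *p = mallocx(128, MALLOCX_TCACHE(tc));
	if (p != NULL)
		dallocx(p, MALLOCX_TCACHE(tc));

	/* Release the cache and recycle its identifier. */
	mallctl("tcache.destroy", NULL, NULL, &tc, sizeof(tc));
	return (0);
}
.fi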
.RE .PP opt\&.prof (\fBbool\fR) r\- [\fB\-\-enable\-prof\fR] .RS 4 Memory profiling enabled/disabled\&. If enabled, profile memory allocation activity\&. See the opt\&.prof_active option for on\-the\-fly activation/deactivation\&. See the opt\&.lg_prof_sample option for probabilistic sampling control\&. See the opt\&.prof_accum option for control of cumulative sample reporting\&. See the opt\&.lg_prof_interval option for information on interval\-triggered profile dumping, the opt\&.prof_gdump option for information on high\-water\-triggered profile dumping, and the opt\&.prof_final option for final profile dumping\&. Profile output is compatible with the \fBjeprof\fR command, which is based on the \fBpprof\fR that is developed as part of the \m[blue]\fBgperftools package\fR\m[]\&\s-2\u[3]\d\s+2\&. See HEAP PROFILE FORMAT for heap profile format documentation\&. .RE .PP opt\&.prof_prefix (\fBconst char *\fR) r\- [\fB\-\-enable\-prof\fR] .RS 4 Filename prefix for profile dumps\&. If the prefix is set to the empty string, no automatic dumps will occur; this is primarily useful for disabling the automatic final heap dump (which also disables leak reporting, if enabled)\&. The default prefix is jeprof\&. .RE .PP opt\&.prof_active (\fBbool\fR) r\- [\fB\-\-enable\-prof\fR] .RS 4 Profiling activated/deactivated\&. This is a secondary control mechanism that makes it possible to start the application with profiling enabled (see the opt\&.prof option) but inactive, then toggle profiling at any time during program execution with the prof\&.active mallctl\&. This option is enabled by default\&. .RE .PP opt\&.prof_thread_active_init (\fBbool\fR) r\- [\fB\-\-enable\-prof\fR] .RS 4 Initial setting for thread\&.prof\&.active in newly created threads\&. The initial setting for newly created threads can also be changed during execution via the prof\&.thread_active_init mallctl\&. This option is enabled by default\&. .RE .PP opt\&.lg_prof_sample (\fBsize_t\fR) r\- [\fB\-\-enable\-prof\fR] .RS 4 Average interval (log base 2) between allocation samples, as measured in bytes of allocation activity\&. Increasing the sampling interval decreases profile fidelity, but also decreases the computational overhead\&. The default sample interval is 512 KiB (2^19 B)\&. .RE .PP opt\&.prof_accum (\fBbool\fR) r\- [\fB\-\-enable\-prof\fR] .RS 4 Reporting of cumulative object/byte counts in profile dumps enabled/disabled\&. If this option is enabled, every unique backtrace must be stored for the duration of execution\&. Depending on the application, this can impose a large memory overhead, and the cumulative counts are not always of interest\&. This option is disabled by default\&. .RE .PP opt\&.lg_prof_interval (\fBssize_t\fR) r\- [\fB\-\-enable\-prof\fR] .RS 4 Average interval (log base 2) between memory profile dumps, as measured in bytes of allocation activity\&. The actual interval between dumps may be sporadic because decentralized allocation counters are used to avoid synchronization bottlenecks\&. Profiles are dumped to files named according to the pattern \&.\&.\&.i\&.heap, where is controlled by the opt\&.prof_prefix option\&. By default, interval\-triggered profile dumping is disabled (encoded as \-1)\&. .RE .PP opt\&.prof_gdump (\fBbool\fR) r\- [\fB\-\-enable\-prof\fR] .RS 4 Set the initial state of prof\&.gdump, which when enabled triggers a memory profile dump every time the total virtual memory exceeds the previous maximum\&. This option is disabled by default\&. 
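.PP
Putting the profiling options together: a sketch that assumes a jemalloc built with \fB\-\-enable\-prof\fR, starts with sampling compiled in but inactive, activates it around a phase of interest via the prof\&.active mallctl mentioned above, and then requests a dump through prof\&.dump (described later in this section)\&. The option values and the helper name are illustrative\&.
.nf
#include <stdbool.h>
#include <stdlib.h>
#include <malloc_np.h>

/* Only honored when jemalloc is built with --enable-prof. */
const char *malloc_conf = "prof:true,prof_active:false,lg_prof_sample:19";

static void
set_prof_active(bool active)
{
	(void)mallctl("prof.active", NULL, NULL, &active, sizeof(active));
}

int
main(void)
{
	set_prof_active(true);
	/* ... allocation-heavy phase to be profiled ... */
	void *p = malloc(1 << 20);
	free(p);
	set_prof_active(false);

	/* Dump now, to a pattern-named file, rather than waiting for exit. */
	(void)mallctl("prof.dump", NULL, NULL, NULL, 0);
	return (0);
}
.fi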
.RE .PP opt\&.prof_final (\fBbool\fR) r\- [\fB\-\-enable\-prof\fR] .RS 4 Use an \fBatexit\fR(3) function to dump final memory usage to a file named according to the pattern \&.\&.\&.f\&.heap, where is controlled by the opt\&.prof_prefix option\&. Note that atexit() may allocate memory during application initialization and then deadlock internally when jemalloc in turn calls atexit(), so this option is not universally usable (though the application can register its own atexit() function with equivalent functionality)\&. This option is disabled by default\&. .RE .PP opt\&.prof_leak (\fBbool\fR) r\- [\fB\-\-enable\-prof\fR] .RS 4 Leak reporting enabled/disabled\&. If enabled, use an \fBatexit\fR(3) function to report memory leaks detected by allocation sampling\&. See the opt\&.prof option for information on analyzing heap profile output\&. This option is disabled by default\&. .RE .PP thread\&.arena (\fBunsigned\fR) rw .RS 4 Get or set the arena associated with the calling thread\&. If the specified arena was not initialized beforehand (see the arena\&.i\&.initialized mallctl), it will be automatically initialized as a side effect of calling this interface\&. .RE .PP thread\&.allocated (\fBuint64_t\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Get the total number of bytes ever allocated by the calling thread\&. This counter has the potential to wrap around; it is up to the application to appropriately interpret the counter in such cases\&. .RE .PP thread\&.allocatedp (\fBuint64_t *\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Get a pointer to the the value that is returned by the thread\&.allocated mallctl\&. This is useful for avoiding the overhead of repeated mallctl*() calls\&. .RE .PP thread\&.deallocated (\fBuint64_t\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Get the total number of bytes ever deallocated by the calling thread\&. This counter has the potential to wrap around; it is up to the application to appropriately interpret the counter in such cases\&. .RE .PP thread\&.deallocatedp (\fBuint64_t *\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Get a pointer to the the value that is returned by the thread\&.deallocated mallctl\&. This is useful for avoiding the overhead of repeated mallctl*() calls\&. .RE .PP thread\&.tcache\&.enabled (\fBbool\fR) rw .RS 4 Enable/disable calling thread\*(Aqs tcache\&. The tcache is implicitly flushed as a side effect of becoming disabled (see thread\&.tcache\&.flush)\&. .RE .PP thread\&.tcache\&.flush (\fBvoid\fR) \-\- .RS 4 Flush calling thread\*(Aqs thread\-specific cache (tcache)\&. This interface releases all cached objects and internal data structures associated with the calling thread\*(Aqs tcache\&. Ordinarily, this interface need not be called, since automatic periodic incremental garbage collection occurs, and the thread cache is automatically discarded when a thread exits\&. However, garbage collection is triggered by allocation activity, so it is possible for a thread that stops allocating/deallocating to retain its cache indefinitely, in which case the developer may find manual flushing useful\&. .RE .PP thread\&.prof\&.name (\fBconst char *\fR) r\- or \-w [\fB\-\-enable\-prof\fR] .RS 4 Get/set the descriptive name associated with the calling thread in memory profile dumps\&. An internal copy of the name string is created, so the input string need not be maintained after this interface completes execution\&. 
The output string of this interface should be copied for non\-ephemeral uses, because multiple implementation details can cause asynchronous string deallocation\&. Furthermore, each invocation of this interface can only read or write; simultaneous read/write is not supported due to string lifetime limitations\&. The name string must be nil\-terminated and comprised only of characters in the sets recognized by \fBisgraph\fR(3) and \fBisblank\fR(3)\&. .RE .PP thread\&.prof\&.active (\fBbool\fR) rw [\fB\-\-enable\-prof\fR] .RS 4 Control whether sampling is currently active for the calling thread\&. This is an activation mechanism in addition to prof\&.active; both must be active for the calling thread to sample\&. This flag is enabled by default\&. .RE .PP tcache\&.create (\fBunsigned\fR) r\- .RS 4 Create an explicit thread\-specific cache (tcache) and return an identifier that can be passed to the \fBMALLOCX_TCACHE(\fR\fB\fItc\fR\fR\fB)\fR macro to explicitly use the specified cache rather than the automatically managed one that is used by default\&. Each explicit cache can be used by only one thread at a time; the application must assure that this constraint holds\&. .RE .PP tcache\&.flush (\fBunsigned\fR) \-w .RS 4 Flush the specified thread\-specific cache (tcache)\&. The same considerations apply to this interface as to thread\&.tcache\&.flush, except that the tcache will never be automatically discarded\&. .RE .PP tcache\&.destroy (\fBunsigned\fR) \-w .RS 4 Flush the specified thread\-specific cache (tcache) and make the identifier available for use during a future tcache creation\&. .RE .PP arena\&.<i>\&.initialized (\fBbool\fR) r\- .RS 4 Get whether the specified arena\*(Aqs statistics are initialized (i\&.e\&. the arena was initialized prior to the current epoch)\&. This interface can also be nominally used to query whether the merged statistics corresponding to \fBMALLCTL_ARENAS_ALL\fR are initialized (always true)\&. .RE .PP arena\&.<i>\&.decay (\fBvoid\fR) \-\- .RS 4 Trigger decay\-based purging of unused dirty/muzzy pages for arena <i>, or for all arenas if <i> equals \fBMALLCTL_ARENAS_ALL\fR\&. The proportion of unused dirty/muzzy pages to be purged depends on the current time; see opt\&.dirty_decay_ms and opt\&.muzzy_decay_ms for details\&. .RE .PP arena\&.<i>\&.purge (\fBvoid\fR) \-\- .RS 4 Purge all unused dirty pages for arena <i>, or for all arenas if <i> equals \fBMALLCTL_ARENAS_ALL\fR\&. .RE .PP arena\&.<i>\&.reset (\fBvoid\fR) \-\- .RS 4 Discard all of the arena\*(Aqs extant allocations\&. This interface can only be used with arenas explicitly created via arenas\&.create\&. None of the arena\*(Aqs discarded/cached allocations may be accessed afterward\&. As part of this requirement, all thread caches which were used to allocate/deallocate in conjunction with the arena must be flushed beforehand\&. .RE .PP arena\&.<i>\&.destroy (\fBvoid\fR) \-\- .RS 4 Destroy the arena\&. Discard all of the arena\*(Aqs extant allocations using the same mechanism as for arena\&.<i>\&.reset (with all the same constraints and side effects), merge the arena stats into those accessible at arena index \fBMALLCTL_ARENAS_DESTROYED\fR, and then completely discard all metadata associated with the arena\&. Future calls to arenas\&.create may recycle the arena index\&. Destruction will fail if any threads are currently associated with the arena as a result of calls to thread\&.arena\&.
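.PP
Tying the manually managed arena interfaces together (arenas\&.create itself is documented later in this section): a sketch that creates a throwaway arena, allocates from it while bypassing the tcache so the constraints above are trivially satisfied, and then destroys it by constructing the arena\&.<i>\&.destroy name\&. The buffer size and header choice are assumptions\&.
.nf
#include <stdio.h>
#include <stdlib.h>
#include <malloc_np.h>

int
main(void)
{
	unsigned arena;
	size_t sz = sizeof(arena);
	char name[64];

	/* Create an explicit arena with the default extent hooks. */
	if (mallctl("arenas.create", &arena, &sz, NULL, 0) != 0)
		return (1);

	/* Allocate from it, skipping the tcache. */
	void *p = mallocx(1024, MALLOCX_ARENA(arena) | MALLOCX_TCACHE_NONE);
	if (p != NULL)
		dallocx(p, MALLOCX_TCACHE_NONE);

	/* Discard everything, merge stats into MALLCTL_ARENAS_DESTROYED, free the index. */
	snprintf(name, sizeof(name), "arena.%u.destroy", arena);
	if (mallctl(name, NULL, NULL, NULL, 0) != 0)
		return (1);
	return (0);
}
.fi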
.RE .PP arena\&.\&.dss (\fBconst char *\fR) rw .RS 4 Set the precedence of dss allocation as related to mmap allocation for arena , or for all arenas if equals \fBMALLCTL_ARENAS_ALL\fR\&. See opt\&.dss for supported settings\&. .RE .PP arena\&.\&.dirty_decay_ms (\fBssize_t\fR) rw .RS 4 Current per\-arena approximate time in milliseconds from the creation of a set of unused dirty pages until an equivalent set of unused dirty pages is purged and/or reused\&. Each time this interface is set, all currently unused dirty pages are considered to have fully decayed, which causes immediate purging of all unused dirty pages unless the decay time is set to \-1 (i\&.e\&. purging disabled)\&. See opt\&.dirty_decay_ms for additional information\&. .RE .PP arena\&.\&.muzzy_decay_ms (\fBssize_t\fR) rw .RS 4 Current per\-arena approximate time in milliseconds from the creation of a set of unused muzzy pages until an equivalent set of unused muzzy pages is purged and/or reused\&. Each time this interface is set, all currently unused muzzy pages are considered to have fully decayed, which causes immediate purging of all unused muzzy pages unless the decay time is set to \-1 (i\&.e\&. purging disabled)\&. See opt\&.muzzy_decay_ms for additional information\&. .RE .PP arena\&.\&.extent_hooks (\fBextent_hooks_t *\fR) rw .RS 4 Get or set the extent management hook functions for arena \&. The functions must be capable of operating on all extant extents associated with arena , usually by passing unknown extents to the replaced functions\&. In practice, it is feasible to control allocation for arenas explicitly created via arenas\&.create such that all extents originate from an application\-supplied extent allocator (by specifying the custom extent hook functions during arena creation), but the automatically created arenas will have already created extents prior to the application having an opportunity to take over extent allocation\&. .sp .if n \{\ .RS 4 .\} .nf typedef extent_hooks_s extent_hooks_t; struct extent_hooks_s { extent_alloc_t *alloc; extent_dalloc_t *dalloc; extent_destroy_t *destroy; extent_commit_t *commit; extent_decommit_t *decommit; extent_purge_t *purge_lazy; extent_purge_t *purge_forced; extent_split_t *split; extent_merge_t *merge; }; .fi .if n \{\ .RE .\} .sp The \fBextent_hooks_t\fR structure comprises function pointers which are described individually below\&. jemalloc uses these functions to manage extent lifetime, which starts off with allocation of mapped committed memory, in the simplest case followed by deallocation\&. However, there are performance and platform reasons to retain extents for later reuse\&. Cleanup attempts cascade from deallocation to decommit to forced purging to lazy purging, which gives the extent management functions opportunities to reject the most permanent cleanup operations in favor of less permanent (and often less costly) operations\&. All operations except allocation can be universally opted out of by setting the hook pointers to \fBNULL\fR, or selectively opted out of by returning failure\&. 
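.PP
As a concrete illustration of the structure above, the sketch below creates an arena via arenas\&.create, reads its current hook table through arena\&.<i>\&.extent_hooks, and installs a copy that forwards everything to the defaults except allocation, which it logs first; the individual hook types are specified immediately below\&. This is a sketch only: the hook and variable names are illustrative, the replacement table must remain reachable for the arena\*(Aqs lifetime, and hooks that themselves allocate require care\&.
.nf
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <malloc_np.h>

static extent_hooks_t *default_hooks;	/* captured from the new arena */
static extent_hooks_t logging_hooks;	/* must outlive the arena */

/* extent_alloc_t wrapper: log the request, then defer to the default hook. */
static void *
log_alloc(extent_hooks_t *hooks, void *new_addr, size_t size, size_t alignment,
    bool *zero, bool *commit, unsigned arena_ind)
{
	fprintf(stderr, "extent alloc: %zu bytes for arena %u\n", size, arena_ind);
	return (default_hooks->alloc(default_hooks, new_addr, size, alignment,
	    zero, commit, arena_ind));
}

int
main(void)
{
	unsigned arena;
	size_t sz = sizeof(arena);
	char name[64];

	if (mallctl("arenas.create", &arena, &sz, NULL, 0) != 0)
		return (1);

	/* Fetch the arena's current hooks, then override only the alloc hook. */
	snprintf(name, sizeof(name), "arena.%u.extent_hooks", arena);
	sz = sizeof(default_hooks);
	if (mallctl(name, &default_hooks, &sz, NULL, 0) != 0)
		return (1);
	logging_hooks = *default_hooks;
	logging_hooks.alloc = log_alloc;

	extent_hooks_t *new_hooks = &logging_hooks;
	if (mallctl(name, NULL, NULL, &new_hooks, sizeof(new_hooks)) != 0)
		return (1);

	/* Allocations backed by this arena now pass through log_alloc(). */
	void *p = mallocx(1 << 20, MALLOCX_ARENA(arena) | MALLOCX_TCACHE_NONE);
	if (p != NULL)
		dallocx(p, MALLOCX_TCACHE_NONE);
	return (0);
}
.fi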
.HP \w'typedef\ void\ *(extent_alloc_t)('u .BI "typedef void *(extent_alloc_t)(extent_hooks_t\ *" "extent_hooks" ", void\ *" "new_addr" ", size_t\ " "size" ", size_t\ " "alignment" ", bool\ *" "zero" ", bool\ *" "commit" ", unsigned\ " "arena_ind" ");" .sp .if n \{\ .RS 4 .\} .nf .fi .if n \{\ .RE .\} .sp An extent allocation function conforms to the \fBextent_alloc_t\fR type and upon success returns a pointer to \fIsize\fR bytes of mapped memory on behalf of arena \fIarena_ind\fR such that the extent\*(Aqs base address is a multiple of \fIalignment\fR, as well as setting \fI*zero\fR to indicate whether the extent is zeroed and \fI*commit\fR to indicate whether the extent is committed\&. Upon error the function returns \fBNULL\fR and leaves \fI*zero\fR and \fI*commit\fR unmodified\&. The \fIsize\fR parameter is always a multiple of the page size\&. The \fIalignment\fR parameter is always a power of two at least as large as the page size\&. Zeroing is mandatory if \fI*zero\fR is true upon function entry\&. Committing is mandatory if \fI*commit\fR is true upon function entry\&. If \fInew_addr\fR is not \fBNULL\fR, the returned pointer must be \fInew_addr\fR on success or \fBNULL\fR on error\&. Committed memory may be committed in absolute terms as on a system that does not overcommit, or in implicit terms as on a system that overcommits and satisfies physical memory needs on demand via soft page faults\&. Note that replacing the default extent allocation function makes the arena\*(Aqs arena\&.\&.dss setting irrelevant\&. .HP \w'typedef\ bool\ (extent_dalloc_t)('u .BI "typedef bool (extent_dalloc_t)(extent_hooks_t\ *" "extent_hooks" ", void\ *" "addr" ", size_t\ " "size" ", bool\ " "committed" ", unsigned\ " "arena_ind" ");" .sp .if n \{\ .RS 4 .\} .nf .fi .if n \{\ .RE .\} .sp An extent deallocation function conforms to the \fBextent_dalloc_t\fR type and deallocates an extent at given \fIaddr\fR and \fIsize\fR with \fIcommitted\fR/decommited memory as indicated, on behalf of arena \fIarena_ind\fR, returning false upon success\&. If the function returns true, this indicates opt\-out from deallocation; the virtual memory mapping associated with the extent remains mapped, in the same commit state, and available for future use, in which case it will be automatically retained for later reuse\&. .HP \w'typedef\ void\ (extent_destroy_t)('u .BI "typedef void (extent_destroy_t)(extent_hooks_t\ *" "extent_hooks" ", void\ *" "addr" ", size_t\ " "size" ", bool\ " "committed" ", unsigned\ " "arena_ind" ");" .sp .if n \{\ .RS 4 .\} .nf .fi .if n \{\ .RE .\} .sp An extent destruction function conforms to the \fBextent_destroy_t\fR type and unconditionally destroys an extent at given \fIaddr\fR and \fIsize\fR with \fIcommitted\fR/decommited memory as indicated, on behalf of arena \fIarena_ind\fR\&. This function may be called to destroy retained extents during arena destruction (see arena\&.\&.destroy)\&. .HP \w'typedef\ bool\ (extent_commit_t)('u .BI "typedef bool (extent_commit_t)(extent_hooks_t\ *" "extent_hooks" ", void\ *" "addr" ", size_t\ " "size" ", size_t\ " "offset" ", size_t\ " "length" ", unsigned\ " "arena_ind" ");" .sp .if n \{\ .RS 4 .\} .nf .fi .if n \{\ .RE .\} .sp An extent commit function conforms to the \fBextent_commit_t\fR type and commits zeroed physical memory to back pages within an extent at given \fIaddr\fR and \fIsize\fR at \fIoffset\fR bytes, extending for \fIlength\fR on behalf of arena \fIarena_ind\fR, returning false upon success\&. 
Committed memory may be committed in absolute terms as on a system that does not overcommit, or in implicit terms as on a system that overcommits and satisfies physical memory needs on demand via soft page faults\&. If the function returns true, this indicates insufficient physical memory to satisfy the request\&. .HP \w'typedef\ bool\ (extent_decommit_t)('u .BI "typedef bool (extent_decommit_t)(extent_hooks_t\ *" "extent_hooks" ", void\ *" "addr" ", size_t\ " "size" ", size_t\ " "offset" ", size_t\ " "length" ", unsigned\ " "arena_ind" ");" .sp .if n \{\ .RS 4 .\} .nf .fi .if n \{\ .RE .\} .sp An extent decommit function conforms to the \fBextent_decommit_t\fR type and decommits any physical memory that is backing pages within an extent at given \fIaddr\fR and \fIsize\fR at \fIoffset\fR bytes, extending for \fIlength\fR on behalf of arena \fIarena_ind\fR, returning false upon success, in which case the pages will be committed via the extent commit function before being reused\&. If the function returns true, this indicates opt\-out from decommit; the memory remains committed and available for future use, in which case it will be automatically retained for later reuse\&. .HP \w'typedef\ bool\ (extent_purge_t)('u .BI "typedef bool (extent_purge_t)(extent_hooks_t\ *" "extent_hooks" ", void\ *" "addr" ", size_t\ " "size" ", size_t\ " "offset" ", size_t\ " "length" ", unsigned\ " "arena_ind" ");" .sp .if n \{\ .RS 4 .\} .nf .fi .if n \{\ .RE .\} .sp An extent purge function conforms to the \fBextent_purge_t\fR type and discards physical pages within the virtual memory mapping associated with an extent at given \fIaddr\fR and \fIsize\fR at \fIoffset\fR bytes, extending for \fIlength\fR on behalf of arena \fIarena_ind\fR\&. A lazy extent purge function (e\&.g\&. implemented via madvise(\fI\&.\&.\&.\fR\fI\fBMADV_FREE\fR\fR)) can delay purging indefinitely and leave the pages within the purged virtual memory range in an indeterminite state, whereas a forced extent purge function immediately purges, and the pages within the virtual memory range will be zero\-filled the next time they are accessed\&. If the function returns true, this indicates failure to purge\&. .HP \w'typedef\ bool\ (extent_split_t)('u .BI "typedef bool (extent_split_t)(extent_hooks_t\ *" "extent_hooks" ", void\ *" "addr" ", size_t\ " "size" ", size_t\ " "size_a" ", size_t\ " "size_b" ", bool\ " "committed" ", unsigned\ " "arena_ind" ");" .sp .if n \{\ .RS 4 .\} .nf .fi .if n \{\ .RE .\} .sp An extent split function conforms to the \fBextent_split_t\fR type and optionally splits an extent at given \fIaddr\fR and \fIsize\fR into two adjacent extents, the first of \fIsize_a\fR bytes, and the second of \fIsize_b\fR bytes, operating on \fIcommitted\fR/decommitted memory as indicated, on behalf of arena \fIarena_ind\fR, returning false upon success\&. If the function returns true, this indicates that the extent remains unsplit and therefore should continue to be operated on as a whole\&. 
.HP \w'typedef\ bool\ (extent_merge_t)('u .BI "typedef bool (extent_merge_t)(extent_hooks_t\ *" "extent_hooks" ", void\ *" "addr_a" ", size_t\ " "size_a" ", void\ *" "addr_b" ", size_t\ " "size_b" ", bool\ " "committed" ", unsigned\ " "arena_ind" ");" .sp .if n \{\ .RS 4 .\} .nf .fi .if n \{\ .RE .\} .sp An extent merge function conforms to the \fBextent_merge_t\fR type and optionally merges adjacent extents, at given \fIaddr_a\fR and \fIsize_a\fR with given \fIaddr_b\fR and \fIsize_b\fR into one contiguous extent, operating on \fIcommitted\fR/decommitted memory as indicated, on behalf of arena \fIarena_ind\fR, returning false upon success\&. If the function returns true, this indicates that the extents remain distinct mappings and therefore should continue to be operated on independently\&. .RE .PP arenas\&.narenas (\fBunsigned\fR) r\- .RS 4 Current limit on number of arenas\&. .RE .PP arenas\&.dirty_decay_ms (\fBssize_t\fR) rw .RS 4 Current default per\-arena approximate time in milliseconds from the creation of a set of unused dirty pages until an equivalent set of unused dirty pages is purged and/or reused, used to initialize arena\&.\&.dirty_decay_ms during arena creation\&. See opt\&.dirty_decay_ms for additional information\&. .RE .PP arenas\&.muzzy_decay_ms (\fBssize_t\fR) rw .RS 4 Current default per\-arena approximate time in milliseconds from the creation of a set of unused muzzy pages until an equivalent set of unused muzzy pages is purged and/or reused, used to initialize arena\&.\&.muzzy_decay_ms during arena creation\&. See opt\&.muzzy_decay_ms for additional information\&. .RE .PP arenas\&.quantum (\fBsize_t\fR) r\- .RS 4 Quantum size\&. .RE .PP arenas\&.page (\fBsize_t\fR) r\- .RS 4 Page size\&. .RE .PP arenas\&.tcache_max (\fBsize_t\fR) r\- .RS 4 Maximum thread\-cached size class\&. .RE .PP arenas\&.nbins (\fBunsigned\fR) r\- .RS 4 Number of bin size classes\&. .RE .PP arenas\&.nhbins (\fBunsigned\fR) r\- .RS 4 Total number of thread cache bin size classes\&. .RE .PP arenas\&.bin\&.\&.size (\fBsize_t\fR) r\- .RS 4 Maximum size supported by size class\&. .RE .PP arenas\&.bin\&.\&.nregs (\fBuint32_t\fR) r\- .RS 4 Number of regions per slab\&. .RE .PP arenas\&.bin\&.\&.slab_size (\fBsize_t\fR) r\- .RS 4 Number of bytes per slab\&. .RE .PP arenas\&.nlextents (\fBunsigned\fR) r\- .RS 4 Total number of large size classes\&. .RE .PP arenas\&.lextent\&.\&.size (\fBsize_t\fR) r\- .RS 4 Maximum size supported by this large size class\&. .RE .PP arenas\&.create (\fBunsigned\fR, \fBextent_hooks_t *\fR) rw .RS 4 Explicitly create a new arena outside the range of automatically managed arenas, with optionally specified extent hooks, and return the new arena index\&. .RE .PP prof\&.thread_active_init (\fBbool\fR) rw [\fB\-\-enable\-prof\fR] .RS 4 Control the initial setting for thread\&.prof\&.active in newly created threads\&. See the opt\&.prof_thread_active_init option for additional information\&. .RE .PP prof\&.active (\fBbool\fR) rw [\fB\-\-enable\-prof\fR] .RS 4 Control whether sampling is currently active\&. See the opt\&.prof_active option for additional information, as well as the interrelated thread\&.prof\&.active mallctl\&. .RE .PP prof\&.dump (\fBconst char *\fR) \-w [\fB\-\-enable\-prof\fR] .RS 4 Dump a memory profile to the specified file, or if NULL is specified, to a file according to the pattern \&.\&.\&.m\&.heap, where is controlled by the opt\&.prof_prefix option\&. 
.RE .PP prof\&.gdump (\fBbool\fR) rw [\fB\-\-enable\-prof\fR] .RS 4 When enabled, trigger a memory profile dump every time the total virtual memory exceeds the previous maximum\&. Profiles are dumped to files named according to the pattern \&.\&.\&.u\&.heap, where is controlled by the opt\&.prof_prefix option\&. .RE .PP prof\&.reset (\fBsize_t\fR) \-w [\fB\-\-enable\-prof\fR] .RS 4 Reset all memory profile statistics, and optionally update the sample rate (see opt\&.lg_prof_sample and prof\&.lg_sample)\&. .RE .PP prof\&.lg_sample (\fBsize_t\fR) r\- [\fB\-\-enable\-prof\fR] .RS 4 Get the current sample rate (see opt\&.lg_prof_sample)\&. .RE .PP prof\&.interval (\fBuint64_t\fR) r\- [\fB\-\-enable\-prof\fR] .RS 4 Average number of bytes allocated between interval\-based profile dumps\&. See the opt\&.lg_prof_interval option for additional information\&. .RE .PP stats\&.allocated (\fBsize_t\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Total number of bytes allocated by the application\&. .RE .PP stats\&.active (\fBsize_t\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Total number of bytes in active pages allocated by the application\&. This is a multiple of the page size, and greater than or equal to stats\&.allocated\&. This does not include stats\&.arenas\&.\&.pdirty, stats\&.arenas\&.\&.pmuzzy, nor pages entirely devoted to allocator metadata\&. .RE .PP stats\&.metadata (\fBsize_t\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Total number of bytes dedicated to metadata, which comprise base allocations used for bootstrap\-sensitive allocator metadata structures (see stats\&.arenas\&.\&.base) and internal allocations (see stats\&.arenas\&.\&.internal)\&. .RE .PP stats\&.resident (\fBsize_t\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Maximum number of bytes in physically resident data pages mapped by the allocator, comprising all pages dedicated to allocator metadata, pages backing active allocations, and unused dirty pages\&. This is a maximum rather than precise because pages may not actually be physically resident if they correspond to demand\-zeroed virtual memory that has not yet been touched\&. This is a multiple of the page size, and is larger than stats\&.active\&. .RE .PP stats\&.mapped (\fBsize_t\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Total number of bytes in active extents mapped by the allocator\&. This is larger than stats\&.active\&. This does not include inactive extents, even those that contain unused dirty pages, which means that there is no strict ordering between this and stats\&.resident\&. .RE .PP stats\&.retained (\fBsize_t\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Total number of bytes in virtual memory mappings that were retained rather than being returned to the operating system via e\&.g\&. \fBmunmap\fR(2) or similar\&. Retained virtual memory is typically untouched, decommitted, or purged, so it has no strongly associated physical memory (see extent hooks for details)\&. Retained memory is excluded from mapped memory statistics, e\&.g\&. stats\&.mapped\&. .RE .PP stats\&.background_thread\&.num_threads (\fBsize_t\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Number of background threads running currently\&. .RE .PP stats\&.background_thread\&.num_runs (\fBuint64_t\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Total number of runs from all background threads\&. .RE .PP stats\&.background_thread\&.run_interval (\fBuint64_t\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Average run interval in nanoseconds of background threads\&. 
.RE .PP stats\&.mutexes\&.ctl\&.{counter}; (\fBcounter specific type\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Statistics on \fIctl\fR mutex (global scope; mallctl related)\&. {counter} is one of the counters below: .PP .RS 4 \fInum_ops\fR (\fBuint64_t\fR): Total number of lock acquisition operations on this mutex\&. .sp \fInum_spin_acq\fR (\fBuint64_t\fR): Number of times the mutex was spin\-acquired\&. When the mutex is currently locked and cannot be acquired immediately, a short period of spin\-retry within jemalloc will be performed\&. Acquired through spin generally means the contention was lightweight and not causing context switches\&. .sp \fInum_wait\fR (\fBuint64_t\fR): Number of times the mutex was wait\-acquired, which means the mutex contention was not solved by spin\-retry, and blocking operation was likely involved in order to acquire the mutex\&. This event generally implies higher cost / longer delay, and should be investigated if it happens often\&. .sp \fImax_wait_time\fR (\fBuint64_t\fR): Maximum length of time in nanoseconds spent on a single wait\-acquired lock operation\&. Note that to avoid profiling overhead on the common path, this does not consider spin\-acquired cases\&. .sp \fItotal_wait_time\fR (\fBuint64_t\fR): Cumulative time in nanoseconds spent on wait\-acquired lock operations\&. Similarly, spin\-acquired cases are not considered\&. .sp \fImax_num_thds\fR (\fBuint32_t\fR): Maximum number of threads waiting on this mutex simultaneously\&. Similarly, spin\-acquired cases are not considered\&. .sp \fInum_owner_switch\fR (\fBuint64_t\fR): Number of times the current mutex owner is different from the previous one\&. This event does not generally imply an issue; rather it is an indicator of how often the protected data are accessed by different threads\&. .RE .RE .PP stats\&.mutexes\&.background_thread\&.{counter} (\fBcounter specific type\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Statistics on \fIbackground_thread\fR mutex (global scope; background_thread related)\&. {counter} is one of the counters in mutex profiling counters\&. .RE .PP stats\&.mutexes\&.prof\&.{counter} (\fBcounter specific type\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Statistics on \fIprof\fR mutex (global scope; profiling related)\&. {counter} is one of the counters in mutex profiling counters\&. .RE .PP stats\&.mutexes\&.reset (\fBvoid\fR) \-\- [\fB\-\-enable\-stats\fR] .RS 4 Reset all mutex profile statistics, including global mutexes, arena mutexes and bin mutexes\&. .RE .PP stats\&.arenas\&.\&.dss (\fBconst char *\fR) r\- .RS 4 dss (\fBsbrk\fR(2)) allocation precedence as related to \fBmmap\fR(2) allocation\&. See opt\&.dss for details\&. .RE .PP stats\&.arenas\&.\&.dirty_decay_ms (\fBssize_t\fR) r\- .RS 4 Approximate time in milliseconds from the creation of a set of unused dirty pages until an equivalent set of unused dirty pages is purged and/or reused\&. See opt\&.dirty_decay_ms for details\&. .RE .PP stats\&.arenas\&.\&.muzzy_decay_ms (\fBssize_t\fR) r\- .RS 4 Approximate time in milliseconds from the creation of a set of unused muzzy pages until an equivalent set of unused muzzy pages is purged and/or reused\&. See opt\&.muzzy_decay_ms for details\&. .RE .PP stats\&.arenas\&.\&.nthreads (\fBunsigned\fR) r\- .RS 4 Number of threads currently assigned to arena\&. .RE .PP stats\&.arenas\&.\&.uptime (\fBuint64_t\fR) r\- .RS 4 Time elapsed (in nanoseconds) since the arena was created\&. If equals \fB0\fR or \fBMALLCTL_ARENAS_ALL\fR, this is the uptime since malloc initialization\&. 
.RE .PP stats\&.arenas\&.\&.pactive (\fBsize_t\fR) r\- .RS 4 Number of pages in active extents\&. .RE .PP stats\&.arenas\&.\&.pdirty (\fBsize_t\fR) r\- .RS 4 Number of pages within unused extents that are potentially dirty, and for which madvise() or similar has not been called\&. See opt\&.dirty_decay_ms for a description of dirty pages\&. .RE .PP stats\&.arenas\&.\&.pmuzzy (\fBsize_t\fR) r\- .RS 4 Number of pages within unused extents that are muzzy\&. See opt\&.muzzy_decay_ms for a description of muzzy pages\&. .RE .PP stats\&.arenas\&.\&.mapped (\fBsize_t\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Number of mapped bytes\&. .RE .PP stats\&.arenas\&.\&.retained (\fBsize_t\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Number of retained bytes\&. See stats\&.retained for details\&. .RE .PP stats\&.arenas\&.\&.base (\fBsize_t\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Number of bytes dedicated to bootstrap\-sensitive allocator metadata structures\&. .RE .PP stats\&.arenas\&.\&.internal (\fBsize_t\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Number of bytes dedicated to internal allocations\&. Internal allocations differ from application\-originated allocations in that they are for internal use, and that they are omitted from heap profiles\&. .RE .PP stats\&.arenas\&.\&.resident (\fBsize_t\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Maximum number of bytes in physically resident data pages mapped by the arena, comprising all pages dedicated to allocator metadata, pages backing active allocations, and unused dirty pages\&. This is a maximum rather than precise because pages may not actually be physically resident if they correspond to demand\-zeroed virtual memory that has not yet been touched\&. This is a multiple of the page size\&. .RE .PP stats\&.arenas\&.\&.dirty_npurge (\fBuint64_t\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Number of dirty page purge sweeps performed\&. .RE .PP stats\&.arenas\&.\&.dirty_nmadvise (\fBuint64_t\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Number of madvise() or similar calls made to purge dirty pages\&. .RE .PP stats\&.arenas\&.\&.dirty_purged (\fBuint64_t\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Number of dirty pages purged\&. .RE .PP stats\&.arenas\&.\&.muzzy_npurge (\fBuint64_t\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Number of muzzy page purge sweeps performed\&. .RE .PP stats\&.arenas\&.\&.muzzy_nmadvise (\fBuint64_t\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Number of madvise() or similar calls made to purge muzzy pages\&. .RE .PP stats\&.arenas\&.\&.muzzy_purged (\fBuint64_t\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Number of muzzy pages purged\&. .RE .PP stats\&.arenas\&.\&.small\&.allocated (\fBsize_t\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Number of bytes currently allocated by small objects\&. .RE .PP stats\&.arenas\&.\&.small\&.nmalloc (\fBuint64_t\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Cumulative number of times a small allocation was requested from the arena\*(Aqs bins, whether to fill the relevant tcache if opt\&.tcache is enabled, or to directly satisfy an allocation request otherwise\&. .RE .PP stats\&.arenas\&.\&.small\&.ndalloc (\fBuint64_t\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Cumulative number of times a small allocation was returned to the arena\*(Aqs bins, whether to flush the relevant tcache if opt\&.tcache is enabled, or to directly deallocate an allocation otherwise\&. .RE .PP stats\&.arenas\&.\&.small\&.nrequests (\fBuint64_t\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Cumulative number of allocation requests satisfied by all bin size classes\&. 
.RE .PP stats\&.arenas\&.\&.large\&.allocated (\fBsize_t\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Number of bytes currently allocated by large objects\&. .RE .PP stats\&.arenas\&.\&.large\&.nmalloc (\fBuint64_t\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Cumulative number of times a large extent was allocated from the arena, whether to fill the relevant tcache if opt\&.tcache is enabled and the size class is within the range being cached, or to directly satisfy an allocation request otherwise\&. .RE .PP stats\&.arenas\&.\&.large\&.ndalloc (\fBuint64_t\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Cumulative number of times a large extent was returned to the arena, whether to flush the relevant tcache if opt\&.tcache is enabled and the size class is within the range being cached, or to directly deallocate an allocation otherwise\&. .RE .PP stats\&.arenas\&.\&.large\&.nrequests (\fBuint64_t\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Cumulative number of allocation requests satisfied by all large size classes\&. .RE .PP stats\&.arenas\&.\&.bins\&.\&.nmalloc (\fBuint64_t\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Cumulative number of times a bin region of the corresponding size class was allocated from the arena, whether to fill the relevant tcache if opt\&.tcache is enabled, or to directly satisfy an allocation request otherwise\&. .RE .PP stats\&.arenas\&.\&.bins\&.\&.ndalloc (\fBuint64_t\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Cumulative number of times a bin region of the corresponding size class was returned to the arena, whether to flush the relevant tcache if opt\&.tcache is enabled, or to directly deallocate an allocation otherwise\&. .RE .PP stats\&.arenas\&.\&.bins\&.\&.nrequests (\fBuint64_t\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Cumulative number of allocation requests satisfied by bin regions of the corresponding size class\&. .RE .PP stats\&.arenas\&.\&.bins\&.\&.curregs (\fBsize_t\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Current number of regions for this size class\&. .RE .PP stats\&.arenas\&.\&.bins\&.\&.nfills (\fBuint64_t\fR) r\- .RS 4 Cumulative number of tcache fills\&. .RE .PP stats\&.arenas\&.\&.bins\&.\&.nflushes (\fBuint64_t\fR) r\- .RS 4 Cumulative number of tcache flushes\&. .RE .PP stats\&.arenas\&.\&.bins\&.\&.nslabs (\fBuint64_t\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Cumulative number of slabs created\&. .RE .PP stats\&.arenas\&.\&.bins\&.\&.nreslabs (\fBuint64_t\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Cumulative number of times the current slab from which to allocate changed\&. .RE .PP stats\&.arenas\&.\&.bins\&.\&.curslabs (\fBsize_t\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Current number of slabs\&. .RE .PP stats\&.arenas\&.\&.bins\&.\&.mutex\&.{counter} (\fBcounter specific type\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Statistics on \fIarena\&.\&.bins\&.\fR mutex (arena bin scope; bin operation related)\&. {counter} is one of the counters in mutex profiling counters\&. .RE .PP stats\&.arenas\&.\&.lextents\&.\&.nmalloc (\fBuint64_t\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Cumulative number of times a large extent of the corresponding size class was allocated from the arena, whether to fill the relevant tcache if opt\&.tcache is enabled and the size class is within the range being cached, or to directly satisfy an allocation request otherwise\&. 
.RE .PP stats\&.arenas\&.\&.lextents\&.\&.ndalloc (\fBuint64_t\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Cumulative number of times a large extent of the corresponding size class was returned to the arena, whether to flush the relevant tcache if opt\&.tcache is enabled and the size class is within the range being cached, or to directly deallocate an allocation otherwise\&. .RE .PP stats\&.arenas\&.\&.lextents\&.\&.nrequests (\fBuint64_t\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Cumulative number of allocation requests satisfied by large extents of the corresponding size class\&. .RE .PP stats\&.arenas\&.\&.lextents\&.\&.curlextents (\fBsize_t\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Current number of large allocations for this size class\&. .RE .PP stats\&.arenas\&.\&.mutexes\&.large\&.{counter} (\fBcounter specific type\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Statistics on \fIarena\&.\&.large\fR mutex (arena scope; large allocation related)\&. {counter} is one of the counters in mutex profiling counters\&. .RE .PP stats\&.arenas\&.\&.mutexes\&.extent_avail\&.{counter} (\fBcounter specific type\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Statistics on \fIarena\&.\&.extent_avail \fR mutex (arena scope; extent avail related)\&. {counter} is one of the counters in mutex profiling counters\&. .RE .PP stats\&.arenas\&.\&.mutexes\&.extents_dirty\&.{counter} (\fBcounter specific type\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Statistics on \fIarena\&.\&.extents_dirty \fR mutex (arena scope; dirty extents related)\&. {counter} is one of the counters in mutex profiling counters\&. .RE .PP stats\&.arenas\&.\&.mutexes\&.extents_muzzy\&.{counter} (\fBcounter specific type\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Statistics on \fIarena\&.\&.extents_muzzy \fR mutex (arena scope; muzzy extents related)\&. {counter} is one of the counters in mutex profiling counters\&. .RE .PP stats\&.arenas\&.\&.mutexes\&.extents_retained\&.{counter} (\fBcounter specific type\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Statistics on \fIarena\&.\&.extents_retained \fR mutex (arena scope; retained extents related)\&. {counter} is one of the counters in mutex profiling counters\&. .RE .PP stats\&.arenas\&.\&.mutexes\&.decay_dirty\&.{counter} (\fBcounter specific type\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Statistics on \fIarena\&.\&.decay_dirty \fR mutex (arena scope; decay for dirty pages related)\&. {counter} is one of the counters in mutex profiling counters\&. .RE .PP stats\&.arenas\&.\&.mutexes\&.decay_muzzy\&.{counter} (\fBcounter specific type\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Statistics on \fIarena\&.\&.decay_muzzy \fR mutex (arena scope; decay for muzzy pages related)\&. {counter} is one of the counters in mutex profiling counters\&. .RE .PP stats\&.arenas\&.\&.mutexes\&.base\&.{counter} (\fBcounter specific type\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Statistics on \fIarena\&.\&.base\fR mutex (arena scope; base allocator related)\&. {counter} is one of the counters in mutex profiling counters\&. .RE .PP stats\&.arenas\&.\&.mutexes\&.tcache_list\&.{counter} (\fBcounter specific type\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Statistics on \fIarena\&.\&.tcache_list\fR mutex (arena scope; tcache to arena association related)\&. This mutex is expected to be accessed less often\&. {counter} is one of the counters in mutex profiling counters\&. 
.RE .SH "HEAP PROFILE FORMAT" .PP Although the heap profiling functionality was originally designed to be compatible with the \fBpprof\fR command that is developed as part of the \m[blue]\fBgperftools package\fR\m[]\&\s-2\u[3]\d\s+2, the addition of per thread heap profiling functionality required a different heap profile format\&. The \fBjeprof\fR command is derived from \fBpprof\fR, with enhancements to support the heap profile format described here\&. .PP In the following hypothetical heap profile, \fB[\&.\&.\&.]\fR indicates elision for the sake of compactness\&. .sp .if n \{\ .RS 4 .\} .nf heap_v2/524288 t*: 28106: 56637512 [0: 0] [\&.\&.\&.] t3: 352: 16777344 [0: 0] [\&.\&.\&.] t99: 17754: 29341640 [0: 0] [\&.\&.\&.] @ 0x5f86da8 0x5f5a1dc [\&.\&.\&.] 0x29e4d4e 0xa200316 0xabb2988 [\&.\&.\&.] t*: 13: 6688 [0: 0] t3: 12: 6496 [0: ] t99: 1: 192 [0: 0] [\&.\&.\&.] MAPPED_LIBRARIES: [\&.\&.\&.] .fi .if n \{\ .RE .\} .sp The following matches the above heap profile, but most tokens are replaced with \fB\fR to indicate descriptions of the corresponding fields\&. .sp .if n \{\ .RS 4 .\} .nf / : : [: ] [\&.\&.\&.] : : [: ] [\&.\&.\&.] : : [: ] [\&.\&.\&.] @ [\&.\&.\&.] [\&.\&.\&.] : : [: ] : : [: ] : : [: ] [\&.\&.\&.] MAPPED_LIBRARIES: /maps> .fi .if n \{\ .RE .\} .SH "DEBUGGING MALLOC PROBLEMS" .PP When debugging, it is a good idea to configure/build jemalloc with the \fB\-\-enable\-debug\fR and \fB\-\-enable\-fill\fR options, and recompile the program with suitable options and symbols for debugger support\&. When so configured, jemalloc incorporates a wide variety of run\-time assertions that catch application errors such as double\-free, write\-after\-free, etc\&. .PP Programs often accidentally depend on \(lquninitialized\(rq memory actually being filled with zero bytes\&. Junk filling (see the opt\&.junk option) tends to expose such bugs in the form of obviously incorrect results and/or coredumps\&. Conversely, zero filling (see the opt\&.zero option) eliminates the symptoms of such bugs\&. Between these two options, it is usually possible to quickly detect, diagnose, and eliminate such bugs\&. .PP This implementation does not provide much detail about the problems it detects, because the performance impact for storing such information would be prohibitive\&. .SH "DIAGNOSTIC MESSAGES" .PP If any of the memory allocation/deallocation functions detect an error or warning condition, a message will be printed to file descriptor \fBSTDERR_FILENO\fR\&. Errors will result in the process dumping core\&. If the opt\&.abort option is set, most warnings are treated as errors\&. .PP The \fImalloc_message\fR variable allows the programmer to override the function which emits the text strings forming the errors and warnings if for some reason the \fBSTDERR_FILENO\fR file descriptor is not suitable for this\&. malloc_message() takes the \fIcbopaque\fR pointer argument that is \fBNULL\fR unless overridden by the arguments in a call to malloc_stats_print(), followed by a string pointer\&. Please note that doing anything which tries to allocate memory in this function is likely to result in a crash or deadlock\&. .PP All messages are prefixed by \(lq: \(rq\&. .SH "RETURN VALUES" .SS "Standard API" .PP The malloc() and calloc() functions return a pointer to the allocated memory if successful; otherwise a \fBNULL\fR pointer is returned and \fIerrno\fR is set to ENOMEM\&. .PP The posix_memalign() function returns the value 0 if successful; otherwise it returns an error value\&. 
The posix_memalign() function will fail if: .PP EINVAL .RS 4 The \fIalignment\fR parameter is not a power of 2 at least as large as sizeof(\fBvoid *\fR)\&. .RE .PP ENOMEM .RS 4 Memory allocation error\&. .RE .PP The aligned_alloc() function returns a pointer to the allocated memory if successful; otherwise a \fBNULL\fR pointer is returned and \fIerrno\fR is set\&. The aligned_alloc() function will fail if: .PP EINVAL .RS 4 The \fIalignment\fR parameter is not a power of 2\&. .RE .PP ENOMEM .RS 4 Memory allocation error\&. .RE .PP The realloc() function returns a pointer, possibly identical to \fIptr\fR, to the allocated memory if successful; otherwise a \fBNULL\fR pointer is returned, and \fIerrno\fR is set to ENOMEM if the error was the result of an allocation failure\&. The realloc() function always leaves the original buffer intact when an error occurs\&. .PP The free() function returns no value\&. .SS "Non\-standard API" .PP The mallocx() and rallocx() functions return a pointer to the allocated memory if successful; otherwise a \fBNULL\fR pointer is returned to indicate insufficient contiguous memory was available to service the allocation request\&. .PP The xallocx() function returns the real size of the resulting resized allocation pointed to by \fIptr\fR, which is a value less than \fIsize\fR if the allocation could not be adequately grown in place\&. .PP The sallocx() function returns the real size of the allocation pointed to by \fIptr\fR\&. .PP The nallocx() returns the real size that would result from a successful equivalent mallocx() function call, or zero if insufficient memory is available to perform the size computation\&. .PP The mallctl(), mallctlnametomib(), and mallctlbymib() functions return 0 on success; otherwise they return an error value\&. The functions will fail if: .PP EINVAL .RS 4 \fInewp\fR is not \fBNULL\fR, and \fInewlen\fR is too large or too small\&. Alternatively, \fI*oldlenp\fR is too large or too small; in this case as much data as possible are read despite the error\&. .RE .PP ENOENT .RS 4 \fIname\fR or \fImib\fR specifies an unknown/invalid value\&. .RE .PP EPERM .RS 4 Attempt to read or write void value, or attempt to write read\-only value\&. .RE .PP EAGAIN .RS 4 A memory allocation failure occurred\&. .RE .PP EFAULT .RS 4 An interface with side effects failed in some way not directly related to mallctl*() read/write processing\&. .RE .PP The malloc_usable_size() function returns the usable size of the allocation pointed to by \fIptr\fR\&. .SH "ENVIRONMENT" .PP The following environment variable affects the execution of the allocation functions: .PP \fBMALLOC_CONF\fR .RS 4 If the environment variable \fBMALLOC_CONF\fR is set, the characters it contains will be interpreted as options\&. .RE .SH "EXAMPLES" .PP To dump core whenever a problem occurs: .sp .if n \{\ .RS 4 .\} .nf ln \-s \*(Aqabort:true\*(Aq /etc/malloc\&.conf .fi .if n \{\ .RE .\} .PP To specify in the source that only one arena should be automatically created: .sp .if n \{\ .RS 4 .\} .nf malloc_conf = "narenas:1"; .fi .if n \{\ .RE .\} .SH "SEE ALSO" .PP \fBmadvise\fR(2), \fBmmap\fR(2), \fBsbrk\fR(2), \fButrace\fR(2), \fBalloca\fR(3), \fBatexit\fR(3), \fBgetpagesize\fR(3) .SH "STANDARDS" .PP The malloc(), calloc(), realloc(), and free() functions conform to ISO/IEC 9899:1990 (\(lqISO C90\(rq)\&. .PP The posix_memalign() function conforms to IEEE Std 1003\&.1\-2001 (\(lqPOSIX\&.1\(rq)\&. 
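.PP
To further illustrate the mallctl() error handling described in \(lqRETURN VALUES\(rq above, the following hypothetical sketch refreshes the statistics epoch and then reads stats\&.allocated, checking each return value\&. It assumes the declarations from the SYNOPSIS header are in scope and that statistics support was enabled at configure time (\fB\-\-enable\-stats\fR)\&.
.sp
.if n \{\
.RS 4
.\}
.nf
#include <stdio.h>
#include <stdint.h>
#include <jemalloc/jemalloc.h>	/* assumed SYNOPSIS header */

int
main(void)
{
	uint64_t epoch = 1;
	size_t allocated, len;

	/* Writing epoch refreshes the cached statistics. */
	len = sizeof(epoch);
	if (mallctl("epoch", &epoch, &len, &epoch, sizeof(epoch)) != 0)
		return (1);	/* e.g. EAGAIN; see RETURN VALUES above. */

	len = sizeof(allocated);
	if (mallctl("stats.allocated", &allocated, &len, NULL, 0) != 0)
		return (1);	/* e.g. ENOENT when statistics are compiled out. */

	printf("stats.allocated: %zu\en", allocated);
	return (0);
}
.fi
.if n \{\
.RE
.\}
.sp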
.SH "HISTORY" .PP The malloc_usable_size() and posix_memalign() functions first appeared in FreeBSD 7\&.0\&. .PP The aligned_alloc(), malloc_stats_print(), and mallctl*() functions first appeared in FreeBSD 10\&.0\&. .PP The *allocx() functions first appeared in FreeBSD 11\&.0\&. .SH "AUTHOR" .PP \fBJason Evans\fR .RS 4 .RE .SH "NOTES" .IP " 1." 4 jemalloc website .RS 4 \%http://jemalloc.net/ .RE .IP " 2." 4 JSON format .RS 4 \%http://www.json.org/ .RE .IP " 3." 4 gperftools package .RS 4 \%http://code.google.com/p/gperftools/ .RE Index: head/contrib/jemalloc/include/jemalloc/internal/arena_externs.h =================================================================== --- head/contrib/jemalloc/include/jemalloc/internal/arena_externs.h (revision 320622) +++ head/contrib/jemalloc/include/jemalloc/internal/arena_externs.h (revision 320623) @@ -1,96 +1,97 @@ #ifndef JEMALLOC_INTERNAL_ARENA_EXTERNS_H #define JEMALLOC_INTERNAL_ARENA_EXTERNS_H #include "jemalloc/internal/extent_dss.h" #include "jemalloc/internal/pages.h" #include "jemalloc/internal/size_classes.h" #include "jemalloc/internal/stats.h" extern ssize_t opt_dirty_decay_ms; extern ssize_t opt_muzzy_decay_ms; extern const arena_bin_info_t arena_bin_info[NBINS]; extern percpu_arena_mode_t opt_percpu_arena; extern const char *percpu_arena_mode_names[]; extern const uint64_t h_steps[SMOOTHSTEP_NSTEPS]; extern malloc_mutex_t arenas_lock; void arena_stats_large_nrequests_add(tsdn_t *tsdn, arena_stats_t *arena_stats, szind_t szind, uint64_t nrequests); void arena_stats_mapped_add(tsdn_t *tsdn, arena_stats_t *arena_stats, size_t size); void arena_basic_stats_merge(tsdn_t *tsdn, arena_t *arena, unsigned *nthreads, const char **dss, ssize_t *dirty_decay_ms, ssize_t *muzzy_decay_ms, size_t *nactive, size_t *ndirty, size_t *nmuzzy); void arena_stats_merge(tsdn_t *tsdn, arena_t *arena, unsigned *nthreads, const char **dss, ssize_t *dirty_decay_ms, ssize_t *muzzy_decay_ms, size_t *nactive, size_t *ndirty, size_t *nmuzzy, arena_stats_t *astats, malloc_bin_stats_t *bstats, malloc_large_stats_t *lstats); void arena_extents_dirty_dalloc(tsdn_t *tsdn, arena_t *arena, extent_hooks_t **r_extent_hooks, extent_t *extent); #ifdef JEMALLOC_JET size_t arena_slab_regind(extent_t *slab, szind_t binind, const void *ptr); #endif extent_t *arena_extent_alloc_large(tsdn_t *tsdn, arena_t *arena, size_t usize, size_t alignment, bool *zero); void arena_extent_dalloc_large_prep(tsdn_t *tsdn, arena_t *arena, extent_t *extent); void arena_extent_ralloc_large_shrink(tsdn_t *tsdn, arena_t *arena, extent_t *extent, size_t oldsize); void arena_extent_ralloc_large_expand(tsdn_t *tsdn, arena_t *arena, extent_t *extent, size_t oldsize); ssize_t arena_dirty_decay_ms_get(arena_t *arena); bool arena_dirty_decay_ms_set(tsdn_t *tsdn, arena_t *arena, ssize_t decay_ms); ssize_t arena_muzzy_decay_ms_get(arena_t *arena); bool arena_muzzy_decay_ms_set(tsdn_t *tsdn, arena_t *arena, ssize_t decay_ms); void arena_decay(tsdn_t *tsdn, arena_t *arena, bool is_background_thread, bool all); void arena_reset(tsd_t *tsd, arena_t *arena); void arena_destroy(tsd_t *tsd, arena_t *arena); void arena_tcache_fill_small(tsdn_t *tsdn, arena_t *arena, tcache_t *tcache, tcache_bin_t *tbin, szind_t binind, uint64_t prof_accumbytes); void arena_alloc_junk_small(void *ptr, const arena_bin_info_t *bin_info, bool zero); typedef void (arena_dalloc_junk_small_t)(void *, const arena_bin_info_t *); extern arena_dalloc_junk_small_t *JET_MUTABLE arena_dalloc_junk_small; void *arena_malloc_hard(tsdn_t *tsdn, arena_t 
*arena, size_t size, szind_t ind, bool zero); void *arena_palloc(tsdn_t *tsdn, arena_t *arena, size_t usize, size_t alignment, bool zero, tcache_t *tcache); void arena_prof_promote(tsdn_t *tsdn, const void *ptr, size_t usize); void arena_dalloc_promoted(tsdn_t *tsdn, void *ptr, tcache_t *tcache, bool slow_path); void arena_dalloc_bin_junked_locked(tsdn_t *tsdn, arena_t *arena, extent_t *extent, void *ptr); void arena_dalloc_small(tsdn_t *tsdn, void *ptr); bool arena_ralloc_no_move(tsdn_t *tsdn, void *ptr, size_t oldsize, size_t size, size_t extra, bool zero); void *arena_ralloc(tsdn_t *tsdn, arena_t *arena, void *ptr, size_t oldsize, size_t size, size_t alignment, bool zero, tcache_t *tcache); dss_prec_t arena_dss_prec_get(arena_t *arena); bool arena_dss_prec_set(arena_t *arena, dss_prec_t dss_prec); ssize_t arena_dirty_decay_ms_default_get(void); bool arena_dirty_decay_ms_default_set(ssize_t decay_ms); ssize_t arena_muzzy_decay_ms_default_get(void); bool arena_muzzy_decay_ms_default_set(ssize_t decay_ms); unsigned arena_nthreads_get(arena_t *arena, bool internal); void arena_nthreads_inc(arena_t *arena, bool internal); void arena_nthreads_dec(arena_t *arena, bool internal); size_t arena_extent_sn_next(arena_t *arena); arena_t *arena_new(tsdn_t *tsdn, unsigned ind, extent_hooks_t *extent_hooks); void arena_boot(void); void arena_prefork0(tsdn_t *tsdn, arena_t *arena); void arena_prefork1(tsdn_t *tsdn, arena_t *arena); void arena_prefork2(tsdn_t *tsdn, arena_t *arena); void arena_prefork3(tsdn_t *tsdn, arena_t *arena); void arena_prefork4(tsdn_t *tsdn, arena_t *arena); void arena_prefork5(tsdn_t *tsdn, arena_t *arena); void arena_prefork6(tsdn_t *tsdn, arena_t *arena); +void arena_prefork7(tsdn_t *tsdn, arena_t *arena); void arena_postfork_parent(tsdn_t *tsdn, arena_t *arena); void arena_postfork_child(tsdn_t *tsdn, arena_t *arena); #endif /* JEMALLOC_INTERNAL_ARENA_EXTERNS_H */ Index: head/contrib/jemalloc/include/jemalloc/internal/background_thread_inlines.h =================================================================== --- head/contrib/jemalloc/include/jemalloc/internal/background_thread_inlines.h (revision 320622) +++ head/contrib/jemalloc/include/jemalloc/internal/background_thread_inlines.h (revision 320623) @@ -1,56 +1,57 @@ #ifndef JEMALLOC_INTERNAL_BACKGROUND_THREAD_INLINES_H #define JEMALLOC_INTERNAL_BACKGROUND_THREAD_INLINES_H JEMALLOC_ALWAYS_INLINE bool background_thread_enabled(void) { return atomic_load_b(&background_thread_enabled_state, ATOMIC_RELAXED); } JEMALLOC_ALWAYS_INLINE void background_thread_enabled_set(tsdn_t *tsdn, bool state) { malloc_mutex_assert_owner(tsdn, &background_thread_lock); atomic_store_b(&background_thread_enabled_state, state, ATOMIC_RELAXED); } JEMALLOC_ALWAYS_INLINE background_thread_info_t * arena_background_thread_info_get(arena_t *arena) { unsigned arena_ind = arena_ind_get(arena); return &background_thread_info[arena_ind % ncpus]; } JEMALLOC_ALWAYS_INLINE uint64_t background_thread_wakeup_time_get(background_thread_info_t *info) { uint64_t next_wakeup = nstime_ns(&info->next_wakeup); assert(atomic_load_b(&info->indefinite_sleep, ATOMIC_ACQUIRE) == (next_wakeup == BACKGROUND_THREAD_INDEFINITE_SLEEP)); return next_wakeup; } JEMALLOC_ALWAYS_INLINE void background_thread_wakeup_time_set(tsdn_t *tsdn, background_thread_info_t *info, uint64_t wakeup_time) { malloc_mutex_assert_owner(tsdn, &info->mtx); atomic_store_b(&info->indefinite_sleep, wakeup_time == BACKGROUND_THREAD_INDEFINITE_SLEEP, ATOMIC_RELEASE); nstime_init(&info->next_wakeup, 
wakeup_time); } JEMALLOC_ALWAYS_INLINE bool background_thread_indefinite_sleep(background_thread_info_t *info) { return atomic_load_b(&info->indefinite_sleep, ATOMIC_ACQUIRE); } JEMALLOC_ALWAYS_INLINE void -arena_background_thread_inactivity_check(tsdn_t *tsdn, arena_t *arena) { - if (!background_thread_enabled()) { +arena_background_thread_inactivity_check(tsdn_t *tsdn, arena_t *arena, + bool is_background_thread) { + if (!background_thread_enabled() || is_background_thread) { return; } background_thread_info_t *info = arena_background_thread_info_get(arena); if (background_thread_indefinite_sleep(info)) { background_thread_interval_check(tsdn, arena, &arena->decay_dirty, 0); } } #endif /* JEMALLOC_INTERNAL_BACKGROUND_THREAD_INLINES_H */ Index: head/contrib/jemalloc/include/jemalloc/internal/base_externs.h =================================================================== --- head/contrib/jemalloc/include/jemalloc/internal/base_externs.h (revision 320622) +++ head/contrib/jemalloc/include/jemalloc/internal/base_externs.h (revision 320623) @@ -1,19 +1,19 @@ #ifndef JEMALLOC_INTERNAL_BASE_EXTERNS_H #define JEMALLOC_INTERNAL_BASE_EXTERNS_H base_t *b0get(void); base_t *base_new(tsdn_t *tsdn, unsigned ind, extent_hooks_t *extent_hooks); -void base_delete(base_t *base); +void base_delete(tsdn_t *tsdn, base_t *base); extent_hooks_t *base_extent_hooks_get(base_t *base); extent_hooks_t *base_extent_hooks_set(base_t *base, extent_hooks_t *extent_hooks); void *base_alloc(tsdn_t *tsdn, base_t *base, size_t size, size_t alignment); extent_t *base_alloc_extent(tsdn_t *tsdn, base_t *base); void base_stats_get(tsdn_t *tsdn, base_t *base, size_t *allocated, size_t *resident, size_t *mapped); void base_prefork(tsdn_t *tsdn, base_t *base); void base_postfork_parent(tsdn_t *tsdn, base_t *base); void base_postfork_child(tsdn_t *tsdn, base_t *base); bool base_boot(tsdn_t *tsdn); #endif /* JEMALLOC_INTERNAL_BASE_EXTERNS_H */ Index: head/contrib/jemalloc/include/jemalloc/internal/ctl.h =================================================================== --- head/contrib/jemalloc/include/jemalloc/internal/ctl.h (revision 320622) +++ head/contrib/jemalloc/include/jemalloc/internal/ctl.h (revision 320623) @@ -1,131 +1,130 @@ #ifndef JEMALLOC_INTERNAL_CTL_H #define JEMALLOC_INTERNAL_CTL_H #include "jemalloc/internal/jemalloc_internal_types.h" #include "jemalloc/internal/malloc_io.h" #include "jemalloc/internal/mutex_prof.h" #include "jemalloc/internal/ql.h" #include "jemalloc/internal/size_classes.h" #include "jemalloc/internal/stats.h" /* Maximum ctl tree depth. */ #define CTL_MAX_DEPTH 7 typedef struct ctl_node_s { bool named; } ctl_node_t; typedef struct ctl_named_node_s { ctl_node_t node; const char *name; /* If (nchildren == 0), this is a terminal node. */ size_t nchildren; const ctl_node_t *children; int (*ctl)(tsd_t *, const size_t *, size_t, void *, size_t *, void *, size_t); } ctl_named_node_t; typedef struct ctl_indexed_node_s { struct ctl_node_s node; const ctl_named_node_t *(*index)(tsdn_t *, const size_t *, size_t, size_t); } ctl_indexed_node_t; typedef struct ctl_arena_stats_s { arena_stats_t astats; /* Aggregate stats for small size classes, based on bin stats. 
*/ size_t allocated_small; uint64_t nmalloc_small; uint64_t ndalloc_small; uint64_t nrequests_small; malloc_bin_stats_t bstats[NBINS]; malloc_large_stats_t lstats[NSIZES - NBINS]; } ctl_arena_stats_t; typedef struct ctl_stats_s { size_t allocated; size_t active; size_t metadata; size_t resident; size_t mapped; size_t retained; background_thread_stats_t background_thread; mutex_prof_data_t mutex_prof_data[mutex_prof_num_global_mutexes]; } ctl_stats_t; typedef struct ctl_arena_s ctl_arena_t; struct ctl_arena_s { unsigned arena_ind; bool initialized; ql_elm(ctl_arena_t) destroyed_link; /* Basic stats, supported even if !config_stats. */ unsigned nthreads; const char *dss; ssize_t dirty_decay_ms; ssize_t muzzy_decay_ms; size_t pactive; size_t pdirty; size_t pmuzzy; /* NULL if !config_stats. */ ctl_arena_stats_t *astats; }; typedef struct ctl_arenas_s { uint64_t epoch; unsigned narenas; ql_head(ctl_arena_t) destroyed; /* * Element 0 corresponds to merged stats for extant arenas (accessed via * MALLCTL_ARENAS_ALL), element 1 corresponds to merged stats for * destroyed arenas (accessed via MALLCTL_ARENAS_DESTROYED), and the * remaining MALLOCX_ARENA_LIMIT elements correspond to arenas. */ ctl_arena_t *arenas[2 + MALLOCX_ARENA_LIMIT]; } ctl_arenas_t; int ctl_byname(tsd_t *tsd, const char *name, void *oldp, size_t *oldlenp, void *newp, size_t newlen); -int ctl_nametomib(tsdn_t *tsdn, const char *name, size_t *mibp, - size_t *miblenp); +int ctl_nametomib(tsd_t *tsd, const char *name, size_t *mibp, size_t *miblenp); int ctl_bymib(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, void *newp, size_t newlen); bool ctl_boot(void); void ctl_prefork(tsdn_t *tsdn); void ctl_postfork_parent(tsdn_t *tsdn); void ctl_postfork_child(tsdn_t *tsdn); #define xmallctl(name, oldp, oldlenp, newp, newlen) do { \ if (je_mallctl(name, oldp, oldlenp, newp, newlen) \ != 0) { \ malloc_printf( \ ": Failure in xmallctl(\"%s\", ...)\n", \ name); \ abort(); \ } \ } while (0) #define xmallctlnametomib(name, mibp, miblenp) do { \ if (je_mallctlnametomib(name, mibp, miblenp) != 0) { \ malloc_printf(": Failure in " \ "xmallctlnametomib(\"%s\", ...)\n", name); \ abort(); \ } \ } while (0) #define xmallctlbymib(mib, miblen, oldp, oldlenp, newp, newlen) do { \ if (je_mallctlbymib(mib, miblen, oldp, oldlenp, newp, \ newlen) != 0) { \ malloc_write( \ ": Failure in xmallctlbymib()\n"); \ abort(); \ } \ } while (0) #endif /* JEMALLOC_INTERNAL_CTL_H */ Index: head/contrib/jemalloc/include/jemalloc/internal/jemalloc_internal_decls.h =================================================================== --- head/contrib/jemalloc/include/jemalloc/internal/jemalloc_internal_decls.h (revision 320622) +++ head/contrib/jemalloc/include/jemalloc/internal/jemalloc_internal_decls.h (revision 320623) @@ -1,84 +1,85 @@ #ifndef JEMALLOC_INTERNAL_DECLS_H #define JEMALLOC_INTERNAL_DECLS_H #include "libc_private.h" #include "namespace.h" #include #ifdef _WIN32 # include # include "msvc_compat/windows_extra.h" #else # include # include # if !defined(__pnacl__) && !defined(__native_client__) # include # if !defined(SYS_write) && defined(__NR_write) # define SYS_write __NR_write # endif # if defined(SYS_open) && defined(__aarch64__) /* Android headers may define SYS_open to __NR_open even though * __NR_open may not exist on AArch64 (superseded by __NR_openat). 
*/ # undef SYS_open # endif # include # endif # include +# include # ifdef JEMALLOC_OS_UNFAIR_LOCK # include # endif # ifdef JEMALLOC_GLIBC_MALLOC_HOOK # include # endif # include # include # include # ifdef JEMALLOC_HAVE_MACH_ABSOLUTE_TIME # include # endif #endif #include #include #ifndef SIZE_T_MAX # define SIZE_T_MAX SIZE_MAX #endif #ifndef SSIZE_MAX # define SSIZE_MAX ((ssize_t)(SIZE_T_MAX >> 1)) #endif #include #include #include #include #include #include #ifndef offsetof # define offsetof(type, member) ((size_t)&(((type *)NULL)->member)) #endif #include #include #include #ifdef _MSC_VER # include typedef intptr_t ssize_t; # define PATH_MAX 1024 # define STDERR_FILENO 2 # define __func__ __FUNCTION__ # ifdef JEMALLOC_HAS_RESTRICT # define restrict __restrict # endif /* Disable warnings about deprecated system functions. */ # pragma warning(disable: 4996) #if _MSC_VER < 1800 static int isblank(int c) { return (c == '\t' || c == ' '); } #endif #else # include #endif #include #endif /* JEMALLOC_INTERNAL_H */ Index: head/contrib/jemalloc/include/jemalloc/internal/jemalloc_internal_defs.h =================================================================== --- head/contrib/jemalloc/include/jemalloc/internal/jemalloc_internal_defs.h (revision 320622) +++ head/contrib/jemalloc/include/jemalloc/internal/jemalloc_internal_defs.h (revision 320623) @@ -1,337 +1,340 @@ /* include/jemalloc/internal/jemalloc_internal_defs.h. Generated from jemalloc_internal_defs.h.in by configure. */ #ifndef JEMALLOC_INTERNAL_DEFS_H_ #define JEMALLOC_INTERNAL_DEFS_H_ /* * If JEMALLOC_PREFIX is defined via --with-jemalloc-prefix, it will cause all * public APIs to be prefixed. This makes it possible, with some care, to use * multiple allocators simultaneously. */ /* #undef JEMALLOC_PREFIX */ /* #undef JEMALLOC_CPREFIX */ /* * Define overrides for non-standard allocator-related functions if they are * present on the system. */ /* #undef JEMALLOC_OVERRIDE___LIBC_CALLOC */ /* #undef JEMALLOC_OVERRIDE___LIBC_FREE */ /* #undef JEMALLOC_OVERRIDE___LIBC_MALLOC */ /* #undef JEMALLOC_OVERRIDE___LIBC_MEMALIGN */ /* #undef JEMALLOC_OVERRIDE___LIBC_REALLOC */ /* #undef JEMALLOC_OVERRIDE___LIBC_VALLOC */ #define JEMALLOC_OVERRIDE___POSIX_MEMALIGN /* * JEMALLOC_PRIVATE_NAMESPACE is used as a prefix for all library-private APIs. * For shared libraries, symbol visibility mechanisms prevent these symbols * from being exported, but for static libraries, naming collisions are a real * possibility. */ #define JEMALLOC_PRIVATE_NAMESPACE __je_ /* * Hyper-threaded CPUs may need a special instruction inside spin loops in * order to yield to another virtual CPU. */ #define CPU_SPINWAIT __asm__ volatile("pause") /* * Number of significant bits in virtual addresses. This may be less than the * total number of bits in a pointer, e.g. on x64, for which the uppermost 16 * bits are the same as bit 47. */ #define LG_VADDR 48 /* Defined if C11 atomics are available. */ /* #undef JEMALLOC_C11_ATOMICS */ /* Defined if GCC __atomic atomics are available. */ /* #undef JEMALLOC_GCC_ATOMIC_ATOMICS */ /* Defined if GCC __sync atomics are available. */ #define JEMALLOC_GCC_SYNC_ATOMICS 1 /* * Defined if __sync_add_and_fetch(uint32_t *, uint32_t) and * __sync_sub_and_fetch(uint32_t *, uint32_t) are available, despite * __GCC_HAVE_SYNC_COMPARE_AND_SWAP_4 not being defined (which means the * functions are defined in libgcc instead of being inlines). 
*/ #define JE_FORCE_SYNC_COMPARE_AND_SWAP_4 /* * Defined if __sync_add_and_fetch(uint64_t *, uint64_t) and * __sync_sub_and_fetch(uint64_t *, uint64_t) are available, despite * __GCC_HAVE_SYNC_COMPARE_AND_SWAP_8 not being defined (which means the * functions are defined in libgcc instead of being inlines). */ #define JE_FORCE_SYNC_COMPARE_AND_SWAP_8 /* * Defined if __builtin_clz() and __builtin_clzl() are available. */ #define JEMALLOC_HAVE_BUILTIN_CLZ /* * Defined if os_unfair_lock_*() functions are available, as provided by Darwin. */ /* #undef JEMALLOC_OS_UNFAIR_LOCK */ /* * Defined if OSSpin*() functions are available, as provided by Darwin, and * documented in the spinlock(3) manual page. */ /* #undef JEMALLOC_OSSPIN */ /* Defined if syscall(2) is usable. */ #define JEMALLOC_USE_SYSCALL /* * Defined if secure_getenv(3) is available. */ /* #undef JEMALLOC_HAVE_SECURE_GETENV */ /* * Defined if issetugid(2) is available. */ #define JEMALLOC_HAVE_ISSETUGID /* Defined if pthread_atfork(3) is available. */ #define JEMALLOC_HAVE_PTHREAD_ATFORK +/* Defined if pthread_setname_np(3) is available. */ +/* #undef JEMALLOC_HAVE_PTHREAD_SETNAME_NP */ + /* * Defined if clock_gettime(CLOCK_MONOTONIC_COARSE, ...) is available. */ /* #undef JEMALLOC_HAVE_CLOCK_MONOTONIC_COARSE */ /* * Defined if clock_gettime(CLOCK_MONOTONIC, ...) is available. */ #define JEMALLOC_HAVE_CLOCK_MONOTONIC 1 /* * Defined if mach_absolute_time() is available. */ /* #undef JEMALLOC_HAVE_MACH_ABSOLUTE_TIME */ /* * Defined if _malloc_thread_cleanup() exists. At least in the case of * FreeBSD, pthread_key_create() allocates, which if used during malloc * bootstrapping will cause recursion into the pthreads library. Therefore, if * _malloc_thread_cleanup() exists, use it as the basis for thread cleanup in * malloc_tsd. */ #define JEMALLOC_MALLOC_THREAD_CLEANUP /* * Defined if threaded initialization is known to be safe on this platform. * Among other things, it must be possible to initialize a mutex without * triggering allocation in order for threaded allocation to be safe. */ /* #undef JEMALLOC_THREADED_INIT */ /* * Defined if the pthreads implementation defines * _pthread_mutex_init_calloc_cb(), in which case the function is used in order * to avoid recursive allocation during mutex initialization. */ #define JEMALLOC_MUTEX_INIT_CB 1 /* Non-empty if the tls_model attribute is supported. */ #define JEMALLOC_TLS_MODEL __attribute__((tls_model("initial-exec"))) /* * JEMALLOC_DEBUG enables assertions and other sanity checks, and disables * inline functions. */ /* #undef JEMALLOC_DEBUG */ /* JEMALLOC_STATS enables statistics calculation. */ #define JEMALLOC_STATS /* JEMALLOC_PROF enables allocation profiling. */ /* #undef JEMALLOC_PROF */ /* Use libunwind for profile backtracing if defined. */ /* #undef JEMALLOC_PROF_LIBUNWIND */ /* Use libgcc for profile backtracing if defined. */ /* #undef JEMALLOC_PROF_LIBGCC */ /* Use gcc intrinsics for profile backtracing if defined. */ /* #undef JEMALLOC_PROF_GCC */ /* * JEMALLOC_DSS enables use of sbrk(2) to allocate extents from the data storage * segment (DSS). */ #define JEMALLOC_DSS /* Support memory filling (junk/zero). */ #define JEMALLOC_FILL /* Support utrace(2)-based tracing. */ #define JEMALLOC_UTRACE /* Support optional abort() on OOM. */ #define JEMALLOC_XMALLOC /* Support lazy locking (avoid locking unless a second thread is launched). */ #define JEMALLOC_LAZY_LOCK /* * Minimum allocation alignment is 2^LG_QUANTUM bytes (ignoring tiny size * classes). 
*/ /* #undef LG_QUANTUM */ /* One page is 2^LG_PAGE bytes. */ #define LG_PAGE 12 /* * One huge page is 2^LG_HUGEPAGE bytes. Note that this is defined even if the * system does not explicitly support huge pages; system calls that require * explicit huge page support are separately configured. */ #define LG_HUGEPAGE 21 /* * If defined, adjacent virtual memory mappings with identical attributes * automatically coalesce, and they fragment when changes are made to subranges. * This is the normal order of things for mmap()/munmap(), but on Windows * VirtualAlloc()/VirtualFree() operations must be precisely matched, i.e. * mappings do *not* coalesce/fragment. */ #define JEMALLOC_MAPS_COALESCE /* * If defined, retain memory for later reuse by default rather than using e.g. * munmap() to unmap freed extents. This is enabled on 64-bit Linux because * common sequences of mmap()/munmap() calls will cause virtual memory map * holes. */ /* #undef JEMALLOC_RETAIN */ /* TLS is used to map arenas and magazine caches to threads. */ #define JEMALLOC_TLS /* * Used to mark unreachable code to quiet "end of non-void" compiler warnings. * Don't use this directly; instead use unreachable() from util.h */ #define JEMALLOC_INTERNAL_UNREACHABLE abort /* * ffs*() functions to use for bitmapping. Don't use these directly; instead, * use ffs_*() from util.h. */ #define JEMALLOC_INTERNAL_FFSLL __builtin_ffsll #define JEMALLOC_INTERNAL_FFSL __builtin_ffsl #define JEMALLOC_INTERNAL_FFS __builtin_ffs /* * If defined, explicitly attempt to more uniformly distribute large allocation * pointer alignments across all cache indices. */ #define JEMALLOC_CACHE_OBLIVIOUS /* * Darwin (OS X) uses zones to work around Mach-O symbol override shortcomings. */ /* #undef JEMALLOC_ZONE */ /* * Methods for determining whether the OS overcommits. * JEMALLOC_PROC_SYS_VM_OVERCOMMIT_MEMORY: Linux's * /proc/sys/vm.overcommit_memory file. * JEMALLOC_SYSCTL_VM_OVERCOMMIT: FreeBSD's vm.overcommit sysctl. */ #define JEMALLOC_SYSCTL_VM_OVERCOMMIT /* #undef JEMALLOC_PROC_SYS_VM_OVERCOMMIT_MEMORY */ /* Defined if madvise(2) is available. */ #define JEMALLOC_HAVE_MADVISE /* * Methods for purging unused pages differ between operating systems. * * madvise(..., MADV_FREE) : This marks pages as being unused, such that they * will be discarded rather than swapped out. * madvise(..., MADV_DONTNEED) : If JEMALLOC_PURGE_MADVISE_DONTNEED_ZEROS is * defined, this immediately discards pages, * such that new pages will be demand-zeroed if * the address region is later touched; * otherwise this behaves similarly to * MADV_FREE, though typically with higher * system overhead. */ #define JEMALLOC_PURGE_MADVISE_FREE #define JEMALLOC_PURGE_MADVISE_DONTNEED /* #undef JEMALLOC_PURGE_MADVISE_DONTNEED_ZEROS */ /* * Defined if transparent huge pages (THPs) are supported via the * MADV_[NO]HUGEPAGE arguments to madvise(2), and THP support is enabled. */ /* #undef JEMALLOC_THP */ /* Define if operating system has alloca.h header. */ /* #undef JEMALLOC_HAS_ALLOCA_H */ /* C99 restrict keyword supported. */ #define JEMALLOC_HAS_RESTRICT 1 /* For use by hash code. */ /* #undef JEMALLOC_BIG_ENDIAN */ /* sizeof(int) == 2^LG_SIZEOF_INT. */ #define LG_SIZEOF_INT 2 /* sizeof(long) == 2^LG_SIZEOF_LONG. */ #define LG_SIZEOF_LONG 3 /* sizeof(long long) == 2^LG_SIZEOF_LONG_LONG. */ #define LG_SIZEOF_LONG_LONG 3 /* sizeof(intmax_t) == 2^LG_SIZEOF_INTMAX_T. */ #define LG_SIZEOF_INTMAX_T 3 /* glibc malloc hooks (__malloc_hook, __realloc_hook, __free_hook). 
*/ /* #undef JEMALLOC_GLIBC_MALLOC_HOOK */ /* glibc memalign hook. */ /* #undef JEMALLOC_GLIBC_MEMALIGN_HOOK */ /* pthread support */ #define JEMALLOC_HAVE_PTHREAD /* dlsym() support */ #define JEMALLOC_HAVE_DLSYM /* Adaptive mutex support in pthreads. */ #define JEMALLOC_HAVE_PTHREAD_MUTEX_ADAPTIVE_NP /* GNU specific sched_getcpu support */ /* #undef JEMALLOC_HAVE_SCHED_GETCPU */ /* GNU specific sched_setaffinity support */ /* #undef JEMALLOC_HAVE_SCHED_SETAFFINITY */ /* * If defined, all the features necessary for background threads are present. */ #define JEMALLOC_BACKGROUND_THREAD 1 /* * If defined, jemalloc symbols are not exported (doesn't work when * JEMALLOC_PREFIX is not defined). */ /* #undef JEMALLOC_EXPORT */ /* config.malloc_conf options string. */ #define JEMALLOC_CONFIG_MALLOC_CONF "abort_conf:false" /* If defined, jemalloc takes the malloc/free/etc. symbol names. */ #define JEMALLOC_IS_MALLOC 1 #endif /* JEMALLOC_INTERNAL_DEFS_H_ */ Index: head/contrib/jemalloc/include/jemalloc/internal/jemalloc_internal_inlines_a.h =================================================================== --- head/contrib/jemalloc/include/jemalloc/internal/jemalloc_internal_inlines_a.h (revision 320622) +++ head/contrib/jemalloc/include/jemalloc/internal/jemalloc_internal_inlines_a.h (revision 320623) @@ -1,168 +1,171 @@ #ifndef JEMALLOC_INTERNAL_INLINES_A_H #define JEMALLOC_INTERNAL_INLINES_A_H #include "jemalloc/internal/atomic.h" #include "jemalloc/internal/bit_util.h" #include "jemalloc/internal/jemalloc_internal_types.h" #include "jemalloc/internal/size_classes.h" #include "jemalloc/internal/ticker.h" JEMALLOC_ALWAYS_INLINE malloc_cpuid_t malloc_getcpu(void) { assert(have_percpu_arena); #if defined(JEMALLOC_HAVE_SCHED_GETCPU) return (malloc_cpuid_t)sched_getcpu(); #else not_reached(); return -1; #endif } /* Return the chosen arena index based on current cpu. */ JEMALLOC_ALWAYS_INLINE unsigned percpu_arena_choose(void) { assert(have_percpu_arena && PERCPU_ARENA_ENABLED(opt_percpu_arena)); malloc_cpuid_t cpuid = malloc_getcpu(); assert(cpuid >= 0); unsigned arena_ind; if ((opt_percpu_arena == percpu_arena) || ((unsigned)cpuid < ncpus / 2)) { arena_ind = cpuid; } else { assert(opt_percpu_arena == per_phycpu_arena); /* Hyper threads on the same physical CPU share arena. */ arena_ind = cpuid - ncpus / 2; } return arena_ind; } /* Return the limit of percpu auto arena range, i.e. arenas[0...ind_limit). */ JEMALLOC_ALWAYS_INLINE unsigned percpu_arena_ind_limit(percpu_arena_mode_t mode) { assert(have_percpu_arena && PERCPU_ARENA_ENABLED(mode)); if (mode == per_phycpu_arena && ncpus > 1) { if (ncpus % 2) { /* This likely means a misconfig. */ return ncpus / 2 + 1; } return ncpus / 2; } else { return ncpus; } } static inline arena_tdata_t * arena_tdata_get(tsd_t *tsd, unsigned ind, bool refresh_if_missing) { arena_tdata_t *tdata; arena_tdata_t *arenas_tdata = tsd_arenas_tdata_get(tsd); if (unlikely(arenas_tdata == NULL)) { /* arenas_tdata hasn't been initialized yet. */ return arena_tdata_get_hard(tsd, ind); } if (unlikely(ind >= tsd_narenas_tdata_get(tsd))) { /* * ind is invalid, cache is old (too small), or tdata to be * initialized. */ return (refresh_if_missing ? 
arena_tdata_get_hard(tsd, ind) : NULL); } tdata = &arenas_tdata[ind]; if (likely(tdata != NULL) || !refresh_if_missing) { return tdata; } return arena_tdata_get_hard(tsd, ind); } static inline arena_t * arena_get(tsdn_t *tsdn, unsigned ind, bool init_if_missing) { arena_t *ret; assert(ind < MALLOCX_ARENA_LIMIT); ret = (arena_t *)atomic_load_p(&arenas[ind], ATOMIC_ACQUIRE); if (unlikely(ret == NULL)) { if (init_if_missing) { ret = arena_init(tsdn, ind, (extent_hooks_t *)&extent_hooks_default); } } return ret; } static inline ticker_t * decay_ticker_get(tsd_t *tsd, unsigned ind) { arena_tdata_t *tdata; tdata = arena_tdata_get(tsd, ind, true); if (unlikely(tdata == NULL)) { return NULL; } return &tdata->decay_ticker; } JEMALLOC_ALWAYS_INLINE tcache_bin_t * tcache_small_bin_get(tcache_t *tcache, szind_t binind) { assert(binind < NBINS); return &tcache->tbins_small[binind]; } JEMALLOC_ALWAYS_INLINE tcache_bin_t * tcache_large_bin_get(tcache_t *tcache, szind_t binind) { assert(binind >= NBINS &&binind < nhbins); return &tcache->tbins_large[binind - NBINS]; } JEMALLOC_ALWAYS_INLINE bool tcache_available(tsd_t *tsd) { /* * Thread specific auto tcache might be unavailable if: 1) during tcache * initialization, or 2) disabled through thread.tcache.enabled mallctl * or config options. This check covers all cases. */ if (likely(tsd_tcache_enabled_get(tsd))) { /* Associated arena == NULL implies tcache init in progress. */ assert(tsd_tcachep_get(tsd)->arena == NULL || tcache_small_bin_get(tsd_tcachep_get(tsd), 0)->avail != NULL); return true; } return false; } JEMALLOC_ALWAYS_INLINE tcache_t * tcache_get(tsd_t *tsd) { if (!tcache_available(tsd)) { return NULL; } return tsd_tcachep_get(tsd); } static inline void -pre_reentrancy(tsd_t *tsd) { +pre_reentrancy(tsd_t *tsd, arena_t *arena) { + /* arena is the current context. Reentry from a0 is not allowed. */ + assert(arena != arena_get(tsd_tsdn(tsd), 0, false)); + bool fast = tsd_fast(tsd); ++*tsd_reentrancy_levelp_get(tsd); if (fast) { /* Prepare slow path for reentrancy. 
*/ tsd_slow_update(tsd); assert(tsd->state == tsd_state_nominal_slow); } } static inline void post_reentrancy(tsd_t *tsd) { int8_t *reentrancy_level = tsd_reentrancy_levelp_get(tsd); assert(*reentrancy_level > 0); if (--*reentrancy_level == 0) { tsd_slow_update(tsd); } } #endif /* JEMALLOC_INTERNAL_INLINES_A_H */ Index: head/contrib/jemalloc/include/jemalloc/internal/private_namespace.h =================================================================== --- head/contrib/jemalloc/include/jemalloc/internal/private_namespace.h (revision 320622) +++ head/contrib/jemalloc/include/jemalloc/internal/private_namespace.h (revision 320623) @@ -1,369 +1,370 @@ #define a0dalloc JEMALLOC_N(a0dalloc) #define a0malloc JEMALLOC_N(a0malloc) #define arena_choose_hard JEMALLOC_N(arena_choose_hard) #define arena_cleanup JEMALLOC_N(arena_cleanup) #define arena_init JEMALLOC_N(arena_init) #define arena_migrate JEMALLOC_N(arena_migrate) #define arena_set JEMALLOC_N(arena_set) #define arena_tdata_get_hard JEMALLOC_N(arena_tdata_get_hard) #define arenas JEMALLOC_N(arenas) #define arenas_lock JEMALLOC_N(arenas_lock) #define arenas_tdata_cleanup JEMALLOC_N(arenas_tdata_cleanup) #define bootstrap_calloc JEMALLOC_N(bootstrap_calloc) #define bootstrap_free JEMALLOC_N(bootstrap_free) #define bootstrap_malloc JEMALLOC_N(bootstrap_malloc) #define iarena_cleanup JEMALLOC_N(iarena_cleanup) #define jemalloc_postfork_child JEMALLOC_N(jemalloc_postfork_child) #define malloc_initialized JEMALLOC_N(malloc_initialized) #define malloc_slow JEMALLOC_N(malloc_slow) #define narenas_auto JEMALLOC_N(narenas_auto) #define narenas_total_get JEMALLOC_N(narenas_total_get) #define ncpus JEMALLOC_N(ncpus) #define opt_abort JEMALLOC_N(opt_abort) #define opt_abort_conf JEMALLOC_N(opt_abort_conf) #define opt_junk JEMALLOC_N(opt_junk) #define opt_junk_alloc JEMALLOC_N(opt_junk_alloc) #define opt_junk_free JEMALLOC_N(opt_junk_free) #define opt_narenas JEMALLOC_N(opt_narenas) #define opt_utrace JEMALLOC_N(opt_utrace) #define opt_xmalloc JEMALLOC_N(opt_xmalloc) #define opt_zero JEMALLOC_N(opt_zero) #define arena_alloc_junk_small JEMALLOC_N(arena_alloc_junk_small) #define arena_basic_stats_merge JEMALLOC_N(arena_basic_stats_merge) #define arena_bin_info JEMALLOC_N(arena_bin_info) #define arena_boot JEMALLOC_N(arena_boot) #define arena_dalloc_bin_junked_locked JEMALLOC_N(arena_dalloc_bin_junked_locked) #define arena_dalloc_junk_small JEMALLOC_N(arena_dalloc_junk_small) #define arena_dalloc_promoted JEMALLOC_N(arena_dalloc_promoted) #define arena_dalloc_small JEMALLOC_N(arena_dalloc_small) #define arena_decay JEMALLOC_N(arena_decay) #define arena_destroy JEMALLOC_N(arena_destroy) #define arena_dirty_decay_ms_default_get JEMALLOC_N(arena_dirty_decay_ms_default_get) #define arena_dirty_decay_ms_default_set JEMALLOC_N(arena_dirty_decay_ms_default_set) #define arena_dirty_decay_ms_get JEMALLOC_N(arena_dirty_decay_ms_get) #define arena_dirty_decay_ms_set JEMALLOC_N(arena_dirty_decay_ms_set) #define arena_dss_prec_get JEMALLOC_N(arena_dss_prec_get) #define arena_dss_prec_set JEMALLOC_N(arena_dss_prec_set) #define arena_extent_alloc_large JEMALLOC_N(arena_extent_alloc_large) #define arena_extent_dalloc_large_prep JEMALLOC_N(arena_extent_dalloc_large_prep) #define arena_extent_ralloc_large_expand JEMALLOC_N(arena_extent_ralloc_large_expand) #define arena_extent_ralloc_large_shrink JEMALLOC_N(arena_extent_ralloc_large_shrink) #define arena_extent_sn_next JEMALLOC_N(arena_extent_sn_next) #define arena_extents_dirty_dalloc JEMALLOC_N(arena_extents_dirty_dalloc) 
#define arena_malloc_hard JEMALLOC_N(arena_malloc_hard) #define arena_muzzy_decay_ms_default_get JEMALLOC_N(arena_muzzy_decay_ms_default_get) #define arena_muzzy_decay_ms_default_set JEMALLOC_N(arena_muzzy_decay_ms_default_set) #define arena_muzzy_decay_ms_get JEMALLOC_N(arena_muzzy_decay_ms_get) #define arena_muzzy_decay_ms_set JEMALLOC_N(arena_muzzy_decay_ms_set) #define arena_new JEMALLOC_N(arena_new) #define arena_nthreads_dec JEMALLOC_N(arena_nthreads_dec) #define arena_nthreads_get JEMALLOC_N(arena_nthreads_get) #define arena_nthreads_inc JEMALLOC_N(arena_nthreads_inc) #define arena_palloc JEMALLOC_N(arena_palloc) #define arena_postfork_child JEMALLOC_N(arena_postfork_child) #define arena_postfork_parent JEMALLOC_N(arena_postfork_parent) #define arena_prefork0 JEMALLOC_N(arena_prefork0) #define arena_prefork1 JEMALLOC_N(arena_prefork1) #define arena_prefork2 JEMALLOC_N(arena_prefork2) #define arena_prefork3 JEMALLOC_N(arena_prefork3) #define arena_prefork4 JEMALLOC_N(arena_prefork4) #define arena_prefork5 JEMALLOC_N(arena_prefork5) #define arena_prefork6 JEMALLOC_N(arena_prefork6) +#define arena_prefork7 JEMALLOC_N(arena_prefork7) #define arena_prof_promote JEMALLOC_N(arena_prof_promote) #define arena_ralloc JEMALLOC_N(arena_ralloc) #define arena_ralloc_no_move JEMALLOC_N(arena_ralloc_no_move) #define arena_reset JEMALLOC_N(arena_reset) #define arena_stats_large_nrequests_add JEMALLOC_N(arena_stats_large_nrequests_add) #define arena_stats_mapped_add JEMALLOC_N(arena_stats_mapped_add) #define arena_stats_merge JEMALLOC_N(arena_stats_merge) #define arena_tcache_fill_small JEMALLOC_N(arena_tcache_fill_small) #define h_steps JEMALLOC_N(h_steps) #define opt_dirty_decay_ms JEMALLOC_N(opt_dirty_decay_ms) #define opt_muzzy_decay_ms JEMALLOC_N(opt_muzzy_decay_ms) #define opt_percpu_arena JEMALLOC_N(opt_percpu_arena) #define percpu_arena_mode_names JEMALLOC_N(percpu_arena_mode_names) #define background_thread_boot0 JEMALLOC_N(background_thread_boot0) #define background_thread_boot1 JEMALLOC_N(background_thread_boot1) #define background_thread_create JEMALLOC_N(background_thread_create) #define background_thread_ctl_init JEMALLOC_N(background_thread_ctl_init) #define background_thread_enabled_state JEMALLOC_N(background_thread_enabled_state) #define background_thread_info JEMALLOC_N(background_thread_info) #define background_thread_interval_check JEMALLOC_N(background_thread_interval_check) #define background_thread_lock JEMALLOC_N(background_thread_lock) #define background_thread_postfork_child JEMALLOC_N(background_thread_postfork_child) #define background_thread_postfork_parent JEMALLOC_N(background_thread_postfork_parent) #define background_thread_prefork0 JEMALLOC_N(background_thread_prefork0) #define background_thread_prefork1 JEMALLOC_N(background_thread_prefork1) #define background_thread_stats_read JEMALLOC_N(background_thread_stats_read) #define background_threads_disable JEMALLOC_N(background_threads_disable) #define background_threads_enable JEMALLOC_N(background_threads_enable) #define can_enable_background_thread JEMALLOC_N(can_enable_background_thread) #define n_background_threads JEMALLOC_N(n_background_threads) #define opt_background_thread JEMALLOC_N(opt_background_thread) #define pthread_create_wrapper JEMALLOC_N(pthread_create_wrapper) #define b0get JEMALLOC_N(b0get) #define base_alloc JEMALLOC_N(base_alloc) #define base_alloc_extent JEMALLOC_N(base_alloc_extent) #define base_boot JEMALLOC_N(base_boot) #define base_delete JEMALLOC_N(base_delete) #define base_extent_hooks_get 
JEMALLOC_N(base_extent_hooks_get) #define base_extent_hooks_set JEMALLOC_N(base_extent_hooks_set) #define base_new JEMALLOC_N(base_new) #define base_postfork_child JEMALLOC_N(base_postfork_child) #define base_postfork_parent JEMALLOC_N(base_postfork_parent) #define base_prefork JEMALLOC_N(base_prefork) #define base_stats_get JEMALLOC_N(base_stats_get) #define bitmap_info_init JEMALLOC_N(bitmap_info_init) #define bitmap_init JEMALLOC_N(bitmap_init) #define bitmap_size JEMALLOC_N(bitmap_size) #define ckh_count JEMALLOC_N(ckh_count) #define ckh_delete JEMALLOC_N(ckh_delete) #define ckh_insert JEMALLOC_N(ckh_insert) #define ckh_iter JEMALLOC_N(ckh_iter) #define ckh_new JEMALLOC_N(ckh_new) #define ckh_pointer_hash JEMALLOC_N(ckh_pointer_hash) #define ckh_pointer_keycomp JEMALLOC_N(ckh_pointer_keycomp) #define ckh_remove JEMALLOC_N(ckh_remove) #define ckh_search JEMALLOC_N(ckh_search) #define ckh_string_hash JEMALLOC_N(ckh_string_hash) #define ckh_string_keycomp JEMALLOC_N(ckh_string_keycomp) #define ctl_boot JEMALLOC_N(ctl_boot) #define ctl_bymib JEMALLOC_N(ctl_bymib) #define ctl_byname JEMALLOC_N(ctl_byname) #define ctl_nametomib JEMALLOC_N(ctl_nametomib) #define ctl_postfork_child JEMALLOC_N(ctl_postfork_child) #define ctl_postfork_parent JEMALLOC_N(ctl_postfork_parent) #define ctl_prefork JEMALLOC_N(ctl_prefork) #define extent_alloc JEMALLOC_N(extent_alloc) #define extent_alloc_wrapper JEMALLOC_N(extent_alloc_wrapper) #define extent_avail_destroy JEMALLOC_N(extent_avail_destroy) #define extent_avail_destroy_recurse JEMALLOC_N(extent_avail_destroy_recurse) #define extent_avail_empty JEMALLOC_N(extent_avail_empty) #define extent_avail_first JEMALLOC_N(extent_avail_first) #define extent_avail_insert JEMALLOC_N(extent_avail_insert) #define extent_avail_iter JEMALLOC_N(extent_avail_iter) #define extent_avail_iter_recurse JEMALLOC_N(extent_avail_iter_recurse) #define extent_avail_iter_start JEMALLOC_N(extent_avail_iter_start) #define extent_avail_last JEMALLOC_N(extent_avail_last) #define extent_avail_new JEMALLOC_N(extent_avail_new) #define extent_avail_next JEMALLOC_N(extent_avail_next) #define extent_avail_nsearch JEMALLOC_N(extent_avail_nsearch) #define extent_avail_prev JEMALLOC_N(extent_avail_prev) #define extent_avail_psearch JEMALLOC_N(extent_avail_psearch) #define extent_avail_remove JEMALLOC_N(extent_avail_remove) #define extent_avail_reverse_iter JEMALLOC_N(extent_avail_reverse_iter) #define extent_avail_reverse_iter_recurse JEMALLOC_N(extent_avail_reverse_iter_recurse) #define extent_avail_reverse_iter_start JEMALLOC_N(extent_avail_reverse_iter_start) #define extent_avail_search JEMALLOC_N(extent_avail_search) #define extent_boot JEMALLOC_N(extent_boot) #define extent_commit_wrapper JEMALLOC_N(extent_commit_wrapper) #define extent_dalloc JEMALLOC_N(extent_dalloc) #define extent_dalloc_gap JEMALLOC_N(extent_dalloc_gap) #define extent_dalloc_wrapper JEMALLOC_N(extent_dalloc_wrapper) #define extent_decommit_wrapper JEMALLOC_N(extent_decommit_wrapper) #define extent_destroy_wrapper JEMALLOC_N(extent_destroy_wrapper) #define extent_heap_any JEMALLOC_N(extent_heap_any) #define extent_heap_empty JEMALLOC_N(extent_heap_empty) #define extent_heap_first JEMALLOC_N(extent_heap_first) #define extent_heap_insert JEMALLOC_N(extent_heap_insert) #define extent_heap_new JEMALLOC_N(extent_heap_new) #define extent_heap_remove JEMALLOC_N(extent_heap_remove) #define extent_heap_remove_any JEMALLOC_N(extent_heap_remove_any) #define extent_heap_remove_first JEMALLOC_N(extent_heap_remove_first) #define 
extent_hooks_default JEMALLOC_N(extent_hooks_default) #define extent_hooks_get JEMALLOC_N(extent_hooks_get) #define extent_hooks_set JEMALLOC_N(extent_hooks_set) #define extent_merge_wrapper JEMALLOC_N(extent_merge_wrapper) #define extent_mutex_pool JEMALLOC_N(extent_mutex_pool) #define extent_purge_forced_wrapper JEMALLOC_N(extent_purge_forced_wrapper) #define extent_purge_lazy_wrapper JEMALLOC_N(extent_purge_lazy_wrapper) #define extent_split_wrapper JEMALLOC_N(extent_split_wrapper) #define extents_alloc JEMALLOC_N(extents_alloc) #define extents_dalloc JEMALLOC_N(extents_dalloc) #define extents_evict JEMALLOC_N(extents_evict) #define extents_init JEMALLOC_N(extents_init) #define extents_npages_get JEMALLOC_N(extents_npages_get) #define extents_postfork_child JEMALLOC_N(extents_postfork_child) #define extents_postfork_parent JEMALLOC_N(extents_postfork_parent) #define extents_prefork JEMALLOC_N(extents_prefork) #define extents_rtree JEMALLOC_N(extents_rtree) #define extents_state_get JEMALLOC_N(extents_state_get) #define dss_prec_names JEMALLOC_N(dss_prec_names) #define extent_alloc_dss JEMALLOC_N(extent_alloc_dss) #define extent_dss_boot JEMALLOC_N(extent_dss_boot) #define extent_dss_mergeable JEMALLOC_N(extent_dss_mergeable) #define extent_dss_prec_get JEMALLOC_N(extent_dss_prec_get) #define extent_dss_prec_set JEMALLOC_N(extent_dss_prec_set) #define extent_in_dss JEMALLOC_N(extent_in_dss) #define opt_dss JEMALLOC_N(opt_dss) #define extent_alloc_mmap JEMALLOC_N(extent_alloc_mmap) #define extent_dalloc_mmap JEMALLOC_N(extent_dalloc_mmap) #define opt_retain JEMALLOC_N(opt_retain) #define hooks_arena_new_hook JEMALLOC_N(hooks_arena_new_hook) #define hooks_libc_hook JEMALLOC_N(hooks_libc_hook) #define large_dalloc JEMALLOC_N(large_dalloc) #define large_dalloc_finish JEMALLOC_N(large_dalloc_finish) #define large_dalloc_junk JEMALLOC_N(large_dalloc_junk) #define large_dalloc_maybe_junk JEMALLOC_N(large_dalloc_maybe_junk) #define large_dalloc_prep_junked_locked JEMALLOC_N(large_dalloc_prep_junked_locked) #define large_malloc JEMALLOC_N(large_malloc) #define large_palloc JEMALLOC_N(large_palloc) #define large_prof_tctx_get JEMALLOC_N(large_prof_tctx_get) #define large_prof_tctx_reset JEMALLOC_N(large_prof_tctx_reset) #define large_prof_tctx_set JEMALLOC_N(large_prof_tctx_set) #define large_ralloc JEMALLOC_N(large_ralloc) #define large_ralloc_no_move JEMALLOC_N(large_ralloc_no_move) #define large_salloc JEMALLOC_N(large_salloc) #define buferror JEMALLOC_N(buferror) #define malloc_cprintf JEMALLOC_N(malloc_cprintf) #define malloc_printf JEMALLOC_N(malloc_printf) #define malloc_snprintf JEMALLOC_N(malloc_snprintf) #define malloc_strtoumax JEMALLOC_N(malloc_strtoumax) #define malloc_vcprintf JEMALLOC_N(malloc_vcprintf) #define malloc_vsnprintf JEMALLOC_N(malloc_vsnprintf) #define malloc_write JEMALLOC_N(malloc_write) #define malloc_mutex_boot JEMALLOC_N(malloc_mutex_boot) #define malloc_mutex_init JEMALLOC_N(malloc_mutex_init) #define malloc_mutex_lock_slow JEMALLOC_N(malloc_mutex_lock_slow) #define malloc_mutex_postfork_child JEMALLOC_N(malloc_mutex_postfork_child) #define malloc_mutex_postfork_parent JEMALLOC_N(malloc_mutex_postfork_parent) #define malloc_mutex_prefork JEMALLOC_N(malloc_mutex_prefork) #define malloc_mutex_prof_data_reset JEMALLOC_N(malloc_mutex_prof_data_reset) #define mutex_pool_init JEMALLOC_N(mutex_pool_init) #define nstime_add JEMALLOC_N(nstime_add) #define nstime_compare JEMALLOC_N(nstime_compare) #define nstime_copy JEMALLOC_N(nstime_copy) #define nstime_divide 
JEMALLOC_N(nstime_divide) #define nstime_iadd JEMALLOC_N(nstime_iadd) #define nstime_idivide JEMALLOC_N(nstime_idivide) #define nstime_imultiply JEMALLOC_N(nstime_imultiply) #define nstime_init JEMALLOC_N(nstime_init) #define nstime_init2 JEMALLOC_N(nstime_init2) #define nstime_isubtract JEMALLOC_N(nstime_isubtract) #define nstime_monotonic JEMALLOC_N(nstime_monotonic) #define nstime_msec JEMALLOC_N(nstime_msec) #define nstime_ns JEMALLOC_N(nstime_ns) #define nstime_nsec JEMALLOC_N(nstime_nsec) #define nstime_sec JEMALLOC_N(nstime_sec) #define nstime_subtract JEMALLOC_N(nstime_subtract) #define nstime_update JEMALLOC_N(nstime_update) #define pages_boot JEMALLOC_N(pages_boot) #define pages_commit JEMALLOC_N(pages_commit) #define pages_decommit JEMALLOC_N(pages_decommit) #define pages_huge JEMALLOC_N(pages_huge) #define pages_map JEMALLOC_N(pages_map) #define pages_nohuge JEMALLOC_N(pages_nohuge) #define pages_purge_forced JEMALLOC_N(pages_purge_forced) #define pages_purge_lazy JEMALLOC_N(pages_purge_lazy) #define pages_unmap JEMALLOC_N(pages_unmap) #define bt2gctx_mtx JEMALLOC_N(bt2gctx_mtx) #define bt_init JEMALLOC_N(bt_init) #define lg_prof_sample JEMALLOC_N(lg_prof_sample) #define opt_lg_prof_interval JEMALLOC_N(opt_lg_prof_interval) #define opt_lg_prof_sample JEMALLOC_N(opt_lg_prof_sample) #define opt_prof JEMALLOC_N(opt_prof) #define opt_prof_accum JEMALLOC_N(opt_prof_accum) #define opt_prof_active JEMALLOC_N(opt_prof_active) #define opt_prof_final JEMALLOC_N(opt_prof_final) #define opt_prof_gdump JEMALLOC_N(opt_prof_gdump) #define opt_prof_leak JEMALLOC_N(opt_prof_leak) #define opt_prof_prefix JEMALLOC_N(opt_prof_prefix) #define opt_prof_thread_active_init JEMALLOC_N(opt_prof_thread_active_init) #define prof_accum_init JEMALLOC_N(prof_accum_init) #define prof_active JEMALLOC_N(prof_active) #define prof_active_get JEMALLOC_N(prof_active_get) #define prof_active_set JEMALLOC_N(prof_active_set) #define prof_alloc_rollback JEMALLOC_N(prof_alloc_rollback) #define prof_backtrace JEMALLOC_N(prof_backtrace) #define prof_boot0 JEMALLOC_N(prof_boot0) #define prof_boot1 JEMALLOC_N(prof_boot1) #define prof_boot2 JEMALLOC_N(prof_boot2) #define prof_dump_header JEMALLOC_N(prof_dump_header) #define prof_dump_open JEMALLOC_N(prof_dump_open) #define prof_free_sampled_object JEMALLOC_N(prof_free_sampled_object) #define prof_gdump JEMALLOC_N(prof_gdump) #define prof_gdump_get JEMALLOC_N(prof_gdump_get) #define prof_gdump_set JEMALLOC_N(prof_gdump_set) #define prof_gdump_val JEMALLOC_N(prof_gdump_val) #define prof_idump JEMALLOC_N(prof_idump) #define prof_interval JEMALLOC_N(prof_interval) #define prof_lookup JEMALLOC_N(prof_lookup) #define prof_malloc_sample_object JEMALLOC_N(prof_malloc_sample_object) #define prof_mdump JEMALLOC_N(prof_mdump) #define prof_postfork_child JEMALLOC_N(prof_postfork_child) #define prof_postfork_parent JEMALLOC_N(prof_postfork_parent) #define prof_prefork0 JEMALLOC_N(prof_prefork0) #define prof_prefork1 JEMALLOC_N(prof_prefork1) #define prof_reset JEMALLOC_N(prof_reset) #define prof_sample_threshold_update JEMALLOC_N(prof_sample_threshold_update) #define prof_tdata_cleanup JEMALLOC_N(prof_tdata_cleanup) #define prof_tdata_init JEMALLOC_N(prof_tdata_init) #define prof_tdata_reinit JEMALLOC_N(prof_tdata_reinit) #define prof_thread_active_get JEMALLOC_N(prof_thread_active_get) #define prof_thread_active_init_get JEMALLOC_N(prof_thread_active_init_get) #define prof_thread_active_init_set JEMALLOC_N(prof_thread_active_init_set) #define prof_thread_active_set 
JEMALLOC_N(prof_thread_active_set) #define prof_thread_name_get JEMALLOC_N(prof_thread_name_get) #define prof_thread_name_set JEMALLOC_N(prof_thread_name_set) #define rtree_ctx_data_init JEMALLOC_N(rtree_ctx_data_init) #define rtree_leaf_alloc JEMALLOC_N(rtree_leaf_alloc) #define rtree_leaf_dalloc JEMALLOC_N(rtree_leaf_dalloc) #define rtree_leaf_elm_lookup_hard JEMALLOC_N(rtree_leaf_elm_lookup_hard) #define rtree_new JEMALLOC_N(rtree_new) #define rtree_node_alloc JEMALLOC_N(rtree_node_alloc) #define rtree_node_dalloc JEMALLOC_N(rtree_node_dalloc) #define arena_mutex_names JEMALLOC_N(arena_mutex_names) #define global_mutex_names JEMALLOC_N(global_mutex_names) #define opt_stats_print JEMALLOC_N(opt_stats_print) #define opt_stats_print_opts JEMALLOC_N(opt_stats_print_opts) #define stats_print JEMALLOC_N(stats_print) #define spin_adaptive JEMALLOC_N(spin_adaptive) #define sz_index2size_tab JEMALLOC_N(sz_index2size_tab) #define sz_pind2sz_tab JEMALLOC_N(sz_pind2sz_tab) #define sz_size2index_tab JEMALLOC_N(sz_size2index_tab) #define nhbins JEMALLOC_N(nhbins) #define opt_lg_tcache_max JEMALLOC_N(opt_lg_tcache_max) #define opt_tcache JEMALLOC_N(opt_tcache) #define tcache_alloc_small_hard JEMALLOC_N(tcache_alloc_small_hard) #define tcache_arena_associate JEMALLOC_N(tcache_arena_associate) #define tcache_arena_reassociate JEMALLOC_N(tcache_arena_reassociate) #define tcache_bin_flush_large JEMALLOC_N(tcache_bin_flush_large) #define tcache_bin_flush_small JEMALLOC_N(tcache_bin_flush_small) #define tcache_bin_info JEMALLOC_N(tcache_bin_info) #define tcache_boot JEMALLOC_N(tcache_boot) #define tcache_cleanup JEMALLOC_N(tcache_cleanup) #define tcache_create_explicit JEMALLOC_N(tcache_create_explicit) #define tcache_event_hard JEMALLOC_N(tcache_event_hard) #define tcache_flush JEMALLOC_N(tcache_flush) #define tcache_maxclass JEMALLOC_N(tcache_maxclass) #define tcache_postfork_child JEMALLOC_N(tcache_postfork_child) #define tcache_postfork_parent JEMALLOC_N(tcache_postfork_parent) #define tcache_prefork JEMALLOC_N(tcache_prefork) #define tcache_salloc JEMALLOC_N(tcache_salloc) #define tcache_stats_merge JEMALLOC_N(tcache_stats_merge) #define tcaches JEMALLOC_N(tcaches) #define tcaches_create JEMALLOC_N(tcaches_create) #define tcaches_destroy JEMALLOC_N(tcaches_destroy) #define tcaches_flush JEMALLOC_N(tcaches_flush) #define tsd_tcache_data_init JEMALLOC_N(tsd_tcache_data_init) #define tsd_tcache_enabled_data_init JEMALLOC_N(tsd_tcache_enabled_data_init) #define malloc_tsd_boot0 JEMALLOC_N(malloc_tsd_boot0) #define malloc_tsd_boot1 JEMALLOC_N(malloc_tsd_boot1) #define malloc_tsd_cleanup_register JEMALLOC_N(malloc_tsd_cleanup_register) #define malloc_tsd_dalloc JEMALLOC_N(malloc_tsd_dalloc) #define malloc_tsd_malloc JEMALLOC_N(malloc_tsd_malloc) #define tsd_booted JEMALLOC_N(tsd_booted) #define tsd_cleanup JEMALLOC_N(tsd_cleanup) #define tsd_fetch_slow JEMALLOC_N(tsd_fetch_slow) #define tsd_initialized JEMALLOC_N(tsd_initialized) #define tsd_slow_update JEMALLOC_N(tsd_slow_update) #define tsd_tls JEMALLOC_N(tsd_tls) #define witness_depth_error JEMALLOC_N(witness_depth_error) #define witness_init JEMALLOC_N(witness_init) #define witness_lock_error JEMALLOC_N(witness_lock_error) #define witness_not_owner_error JEMALLOC_N(witness_not_owner_error) #define witness_owner_error JEMALLOC_N(witness_owner_error) #define witness_postfork_child JEMALLOC_N(witness_postfork_child) #define witness_postfork_parent JEMALLOC_N(witness_postfork_parent) #define witness_prefork JEMALLOC_N(witness_prefork) #define 
witnesses_cleanup JEMALLOC_N(witnesses_cleanup) Index: head/contrib/jemalloc/include/jemalloc/internal/tcache_externs.h =================================================================== --- head/contrib/jemalloc/include/jemalloc/internal/tcache_externs.h (revision 320622) +++ head/contrib/jemalloc/include/jemalloc/internal/tcache_externs.h (revision 320623) @@ -1,55 +1,55 @@ #ifndef JEMALLOC_INTERNAL_TCACHE_EXTERNS_H #define JEMALLOC_INTERNAL_TCACHE_EXTERNS_H #include "jemalloc/internal/size_classes.h" extern bool opt_tcache; extern ssize_t opt_lg_tcache_max; extern tcache_bin_info_t *tcache_bin_info; /* * Number of tcache bins. There are NBINS small-object bins, plus 0 or more * large-object bins. */ extern unsigned nhbins; /* Maximum cached size class. */ extern size_t tcache_maxclass; /* * Explicit tcaches, managed via the tcache.{create,flush,destroy} mallctls and * usable via the MALLOCX_TCACHE() flag. The automatic per thread tcaches are * completely disjoint from this data structure. tcaches starts off as a sparse * array, so it has no physical memory footprint until individual pages are * touched. This allows the entire array to be allocated the first time an * explicit tcache is created without a disproportionate impact on memory usage. */ extern tcaches_t *tcaches; size_t tcache_salloc(tsdn_t *tsdn, const void *ptr); void tcache_event_hard(tsd_t *tsd, tcache_t *tcache); void *tcache_alloc_small_hard(tsdn_t *tsdn, arena_t *arena, tcache_t *tcache, tcache_bin_t *tbin, szind_t binind, bool *tcache_success); void tcache_bin_flush_small(tsd_t *tsd, tcache_t *tcache, tcache_bin_t *tbin, szind_t binind, unsigned rem); void tcache_bin_flush_large(tsd_t *tsd, tcache_bin_t *tbin, szind_t binind, unsigned rem, tcache_t *tcache); void tcache_arena_reassociate(tsdn_t *tsdn, tcache_t *tcache, arena_t *arena); tcache_t *tcache_create_explicit(tsd_t *tsd); void tcache_cleanup(tsd_t *tsd); void tcache_stats_merge(tsdn_t *tsdn, tcache_t *tcache, arena_t *arena); bool tcaches_create(tsd_t *tsd, unsigned *r_ind); void tcaches_flush(tsd_t *tsd, unsigned ind); void tcaches_destroy(tsd_t *tsd, unsigned ind); bool tcache_boot(tsdn_t *tsdn); void tcache_arena_associate(tsdn_t *tsdn, tcache_t *tcache, arena_t *arena); void tcache_prefork(tsdn_t *tsdn); void tcache_postfork_parent(tsdn_t *tsdn); void tcache_postfork_child(tsdn_t *tsdn); -void tcache_flush(void); +void tcache_flush(tsd_t *tsd); bool tsd_tcache_data_init(tsd_t *tsd); bool tsd_tcache_enabled_data_init(tsd_t *tsd); #endif /* JEMALLOC_INTERNAL_TCACHE_EXTERNS_H */ Index: head/contrib/jemalloc/include/jemalloc/internal/tsd.h =================================================================== --- head/contrib/jemalloc/include/jemalloc/internal/tsd.h (revision 320622) +++ head/contrib/jemalloc/include/jemalloc/internal/tsd.h (revision 320623) @@ -1,310 +1,324 @@ #ifndef JEMALLOC_INTERNAL_TSD_H #define JEMALLOC_INTERNAL_TSD_H #include "jemalloc/internal/arena_types.h" #include "jemalloc/internal/assert.h" #include "jemalloc/internal/jemalloc_internal_externs.h" #include "jemalloc/internal/prof_types.h" #include "jemalloc/internal/ql.h" #include "jemalloc/internal/rtree_tsd.h" #include "jemalloc/internal/tcache_types.h" #include "jemalloc/internal/tcache_structs.h" #include "jemalloc/internal/util.h" #include "jemalloc/internal/witness.h" /* * Thread-Specific-Data layout * --- data accessed on tcache fast path: state, rtree_ctx, stats, prof --- * s: state * e: tcache_enabled * m: thread_allocated (config_stats) * f: thread_deallocated 
(config_stats) * p: prof_tdata (config_prof) * c: rtree_ctx (rtree cache accessed on deallocation) * t: tcache * --- data not accessed on tcache fast path: arena-related fields --- * d: arenas_tdata_bypass * r: reentrancy_level * x: narenas_tdata * i: iarena * a: arena * o: arenas_tdata * Loading TSD data is on the critical path of basically all malloc operations. * In particular, tcache and rtree_ctx rely on hot CPU cache to be effective. * Use a compact layout to reduce cache footprint. * +--- 64-bit and 64B cacheline; 1B each letter; First byte on the left. ---+ * |---------------------------- 1st cacheline ----------------------------| * | sedrxxxx mmmmmmmm ffffffff pppppppp [c * 32 ........ ........ .......] | * |---------------------------- 2nd cacheline ----------------------------| * | [c * 64 ........ ........ ........ ........ ........ ........ .......] | * |---------------------------- 3nd cacheline ----------------------------| * | [c * 32 ........ ........ .......] iiiiiiii aaaaaaaa oooooooo [t...... | * +-------------------------------------------------------------------------+ * Note: the entire tcache is embedded into TSD and spans multiple cachelines. * * The last 3 members (i, a and o) before tcache isn't really needed on tcache * fast path. However we have a number of unused tcache bins and witnesses * (never touched unless config_debug) at the end of tcache, so we place them * there to avoid breaking the cachelines and possibly paging in an extra page. */ #ifdef JEMALLOC_JET typedef void (*test_callback_t)(int *); # define MALLOC_TSD_TEST_DATA_INIT 0x72b65c10 # define MALLOC_TEST_TSD \ O(test_data, int, int) \ O(test_callback, test_callback_t, int) # define MALLOC_TEST_TSD_INITIALIZER , MALLOC_TSD_TEST_DATA_INIT, NULL #else # define MALLOC_TEST_TSD # define MALLOC_TEST_TSD_INITIALIZER #endif /* O(name, type, nullable type */ #define MALLOC_TSD \ O(tcache_enabled, bool, bool) \ O(arenas_tdata_bypass, bool, bool) \ O(reentrancy_level, int8_t, int8_t) \ O(narenas_tdata, uint32_t, uint32_t) \ O(thread_allocated, uint64_t, uint64_t) \ O(thread_deallocated, uint64_t, uint64_t) \ O(prof_tdata, prof_tdata_t *, prof_tdata_t *) \ O(rtree_ctx, rtree_ctx_t, rtree_ctx_t) \ O(iarena, arena_t *, arena_t *) \ O(arena, arena_t *, arena_t *) \ O(arenas_tdata, arena_tdata_t *, arena_tdata_t *)\ O(tcache, tcache_t, tcache_t) \ O(witness_tsd, witness_tsd_t, witness_tsdn_t) \ MALLOC_TEST_TSD #define TSD_INITIALIZER { \ tsd_state_uninitialized, \ TCACHE_ENABLED_ZERO_INITIALIZER, \ false, \ 0, \ 0, \ 0, \ 0, \ NULL, \ RTREE_CTX_ZERO_INITIALIZER, \ NULL, \ NULL, \ NULL, \ TCACHE_ZERO_INITIALIZER, \ WITNESS_TSD_INITIALIZER \ MALLOC_TEST_TSD_INITIALIZER \ } enum { tsd_state_nominal = 0, /* Common case --> jnz. */ tsd_state_nominal_slow = 1, /* Initialized but on slow path. */ /* the above 2 nominal states should be lower values. */ tsd_state_nominal_max = 1, /* used for comparison only. */ - tsd_state_purgatory = 2, - tsd_state_reincarnated = 3, - tsd_state_uninitialized = 4 + tsd_state_minimal_initialized = 2, + tsd_state_purgatory = 3, + tsd_state_reincarnated = 4, + tsd_state_uninitialized = 5 }; /* Manually limit tsd_state_t to a single byte. */ typedef uint8_t tsd_state_t; /* The actual tsd. */ struct tsd_s { /* * The contents should be treated as totally opaque outside the tsd * module. Access any thread-local state through the getters and * setters below. 
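 * As a brief, illustrative example (accessor names are generated from the
 * MALLOC_TSD list above; shown here only for clarity, not part of the
 * original header):
 *
 *   int8_t level = tsd_reentrancy_level_get(tsd);
 *   bool enabled = tsd_tcache_enabled_get(tsd);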
*/ tsd_state_t state; #define O(n, t, nt) \ t use_a_getter_or_setter_instead_##n; MALLOC_TSD #undef O }; /* * Wrapper around tsd_t that makes it possible to avoid implicit conversion * between tsd_t and tsdn_t, where tsdn_t is "nullable" and has to be * explicitly converted to tsd_t, which is non-nullable. */ struct tsdn_s { tsd_t tsd; }; #define TSDN_NULL ((tsdn_t *)0) JEMALLOC_ALWAYS_INLINE tsdn_t * tsd_tsdn(tsd_t *tsd) { return (tsdn_t *)tsd; } JEMALLOC_ALWAYS_INLINE bool tsdn_null(const tsdn_t *tsdn) { return tsdn == NULL; } JEMALLOC_ALWAYS_INLINE tsd_t * tsdn_tsd(tsdn_t *tsdn) { assert(!tsdn_null(tsdn)); return &tsdn->tsd; } void *malloc_tsd_malloc(size_t size); void malloc_tsd_dalloc(void *wrapper); void malloc_tsd_cleanup_register(bool (*f)(void)); tsd_t *malloc_tsd_boot0(void); void malloc_tsd_boot1(void); void tsd_cleanup(void *arg); tsd_t *tsd_fetch_slow(tsd_t *tsd, bool internal); void tsd_slow_update(tsd_t *tsd); /* * We put the platform-specific data declarations and inlines into their own * header files to avoid cluttering this file. They define tsd_boot0, * tsd_boot1, tsd_boot, tsd_booted_get, tsd_get_allocates, tsd_get, and tsd_set. */ #ifdef JEMALLOC_MALLOC_THREAD_CLEANUP #include "jemalloc/internal/tsd_malloc_thread_cleanup.h" #elif (defined(JEMALLOC_TLS)) #include "jemalloc/internal/tsd_tls.h" #elif (defined(_WIN32)) #include "jemalloc/internal/tsd_win.h" #else #include "jemalloc/internal/tsd_generic.h" #endif /* * tsd_foop_get_unsafe(tsd) returns a pointer to the thread-local instance of * foo. This omits some safety checks, and so can be used during tsd * initialization and cleanup. */ #define O(n, t, nt) \ JEMALLOC_ALWAYS_INLINE t * \ tsd_##n##p_get_unsafe(tsd_t *tsd) { \ return &tsd->use_a_getter_or_setter_instead_##n; \ } MALLOC_TSD #undef O /* tsd_foop_get(tsd) returns a pointer to the thread-local instance of foo. */ #define O(n, t, nt) \ JEMALLOC_ALWAYS_INLINE t * \ tsd_##n##p_get(tsd_t *tsd) { \ assert(tsd->state == tsd_state_nominal || \ tsd->state == tsd_state_nominal_slow || \ - tsd->state == tsd_state_reincarnated); \ + tsd->state == tsd_state_reincarnated || \ + tsd->state == tsd_state_minimal_initialized); \ return tsd_##n##p_get_unsafe(tsd); \ } MALLOC_TSD #undef O /* * tsdn_foop_get(tsdn) returns either the thread-local instance of foo (if tsdn * isn't NULL), or NULL (if tsdn is NULL), cast to the nullable pointer type. */ #define O(n, t, nt) \ JEMALLOC_ALWAYS_INLINE nt * \ tsdn_##n##p_get(tsdn_t *tsdn) { \ if (tsdn_null(tsdn)) { \ return NULL; \ } \ tsd_t *tsd = tsdn_tsd(tsdn); \ return (nt *)tsd_##n##p_get(tsd); \ } MALLOC_TSD #undef O /* tsd_foo_get(tsd) returns the value of the thread-local instance of foo. */ #define O(n, t, nt) \ JEMALLOC_ALWAYS_INLINE t \ tsd_##n##_get(tsd_t *tsd) { \ return *tsd_##n##p_get(tsd); \ } MALLOC_TSD #undef O /* tsd_foo_set(tsd, val) updates the thread-local instance of foo to be val. 
*/ #define O(n, t, nt) \ JEMALLOC_ALWAYS_INLINE void \ tsd_##n##_set(tsd_t *tsd, t val) { \ - assert(tsd->state != tsd_state_reincarnated); \ + assert(tsd->state != tsd_state_reincarnated && \ + tsd->state != tsd_state_minimal_initialized); \ *tsd_##n##p_get(tsd) = val; \ } MALLOC_TSD #undef O JEMALLOC_ALWAYS_INLINE void tsd_assert_fast(tsd_t *tsd) { assert(!malloc_slow && tsd_tcache_enabled_get(tsd) && tsd_reentrancy_level_get(tsd) == 0); } JEMALLOC_ALWAYS_INLINE bool tsd_fast(tsd_t *tsd) { bool fast = (tsd->state == tsd_state_nominal); if (fast) { tsd_assert_fast(tsd); } return fast; } JEMALLOC_ALWAYS_INLINE tsd_t * -tsd_fetch_impl(bool init, bool internal) { +tsd_fetch_impl(bool init, bool minimal) { tsd_t *tsd = tsd_get(init); if (!init && tsd_get_allocates() && tsd == NULL) { return NULL; } assert(tsd != NULL); if (unlikely(tsd->state != tsd_state_nominal)) { - return tsd_fetch_slow(tsd, internal); + return tsd_fetch_slow(tsd, minimal); } assert(tsd_fast(tsd)); tsd_assert_fast(tsd); return tsd; } +/* Get a minimal TSD that requires no cleanup. See comments in free(). */ JEMALLOC_ALWAYS_INLINE tsd_t * -tsd_internal_fetch(void) { +tsd_fetch_min(void) { return tsd_fetch_impl(true, true); +} + +/* For internal background threads use only. */ +JEMALLOC_ALWAYS_INLINE tsd_t * +tsd_internal_fetch(void) { + tsd_t *tsd = tsd_fetch_min(); + /* Use reincarnated state to prevent full initialization. */ + tsd->state = tsd_state_reincarnated; + + return tsd; } JEMALLOC_ALWAYS_INLINE tsd_t * tsd_fetch(void) { return tsd_fetch_impl(true, false); } static inline bool tsd_nominal(tsd_t *tsd) { return (tsd->state <= tsd_state_nominal_max); } JEMALLOC_ALWAYS_INLINE tsdn_t * tsdn_fetch(void) { if (!tsd_booted_get()) { return NULL; } return tsd_tsdn(tsd_fetch_impl(false, false)); } JEMALLOC_ALWAYS_INLINE rtree_ctx_t * tsd_rtree_ctx(tsd_t *tsd) { return tsd_rtree_ctxp_get(tsd); } JEMALLOC_ALWAYS_INLINE rtree_ctx_t * tsdn_rtree_ctx(tsdn_t *tsdn, rtree_ctx_t *fallback) { /* * If tsd cannot be accessed, initialize the fallback rtree_ctx and * return a pointer to it. */ if (unlikely(tsdn_null(tsdn))) { rtree_ctx_data_init(fallback); return fallback; } return tsd_rtree_ctx(tsdn_tsd(tsdn)); } #endif /* JEMALLOC_INTERNAL_TSD_H */ Index: head/contrib/jemalloc/include/jemalloc/jemalloc.h =================================================================== --- head/contrib/jemalloc/include/jemalloc/jemalloc.h (revision 320622) +++ head/contrib/jemalloc/include/jemalloc/jemalloc.h (revision 320623) @@ -1,420 +1,420 @@ #ifndef JEMALLOC_H_ #define JEMALLOC_H_ #ifdef __cplusplus extern "C" { #endif /* Defined if __attribute__((...)) syntax is supported. */ #define JEMALLOC_HAVE_ATTR /* Defined if alloc_size attribute is supported. */ /* #undef JEMALLOC_HAVE_ATTR_ALLOC_SIZE */ /* Defined if format(gnu_printf, ...) attribute is supported. */ /* #undef JEMALLOC_HAVE_ATTR_FORMAT_GNU_PRINTF */ /* Defined if format(printf, ...) attribute is supported. */ #define JEMALLOC_HAVE_ATTR_FORMAT_PRINTF /* * Define overrides for non-standard allocator-related functions if they are * present on the system. */ /* #undef JEMALLOC_OVERRIDE_MEMALIGN */ #define JEMALLOC_OVERRIDE_VALLOC /* * At least Linux omits the "const" in: * * size_t malloc_usable_size(const void *ptr); * * Match the operating system's prototype. */ #define JEMALLOC_USABLE_SIZE_CONST const /* * If defined, specify throw() for the public function prototypes when compiling * with C++. The only justification for this is to match the prototypes that * glibc defines. 
*/ /* #undef JEMALLOC_USE_CXX_THROW */ #ifdef _MSC_VER # ifdef _WIN64 # define LG_SIZEOF_PTR_WIN 3 # else # define LG_SIZEOF_PTR_WIN 2 # endif #endif /* sizeof(void *) == 2^LG_SIZEOF_PTR. */ #define LG_SIZEOF_PTR 3 /* * Name mangling for public symbols is controlled by --with-mangling and * --with-jemalloc-prefix. With default settings the je_ prefix is stripped by * these macro definitions. */ #ifndef JEMALLOC_NO_RENAME # define je_aligned_alloc aligned_alloc # define je_calloc calloc # define je_dallocx dallocx # define je_free free # define je_mallctl mallctl # define je_mallctlbymib mallctlbymib # define je_mallctlnametomib mallctlnametomib # define je_malloc malloc # define je_malloc_conf malloc_conf # define je_malloc_message malloc_message # define je_malloc_stats_print malloc_stats_print # define je_malloc_usable_size malloc_usable_size # define je_mallocx mallocx # define je_nallocx nallocx # define je_posix_memalign posix_memalign # define je_rallocx rallocx # define je_realloc realloc # define je_sallocx sallocx # define je_sdallocx sdallocx # define je_xallocx xallocx # define je_valloc valloc #endif #include "jemalloc_FreeBSD.h" #include #include #include #include #include -#define JEMALLOC_VERSION "5.0.0-4-g84f6c2cae0fb1399377ef6aea9368444c4987cc6" +#define JEMALLOC_VERSION "5.0.1-0-g896ed3a8b3f41998d4fb4d625d30ac63ef2d51fb" #define JEMALLOC_VERSION_MAJOR 5 #define JEMALLOC_VERSION_MINOR 0 -#define JEMALLOC_VERSION_BUGFIX 0 -#define JEMALLOC_VERSION_NREV 4 -#define JEMALLOC_VERSION_GID "84f6c2cae0fb1399377ef6aea9368444c4987cc6" +#define JEMALLOC_VERSION_BUGFIX 1 +#define JEMALLOC_VERSION_NREV 0 +#define JEMALLOC_VERSION_GID "896ed3a8b3f41998d4fb4d625d30ac63ef2d51fb" #define MALLOCX_LG_ALIGN(la) ((int)(la)) #if LG_SIZEOF_PTR == 2 # define MALLOCX_ALIGN(a) ((int)(ffs((int)(a))-1)) #else # define MALLOCX_ALIGN(a) \ ((int)(((size_t)(a) < (size_t)INT_MAX) ? ffs((int)(a))-1 : \ ffs((int)(((size_t)(a))>>32))+31)) #endif #define MALLOCX_ZERO ((int)0x40) /* * Bias tcache index bits so that 0 encodes "automatic tcache management", and 1 * encodes MALLOCX_TCACHE_NONE. */ #define MALLOCX_TCACHE(tc) ((int)(((tc)+2) << 8)) #define MALLOCX_TCACHE_NONE MALLOCX_TCACHE(-1) /* * Bias arena index bits so that 0 encodes "use an automatically chosen arena". */ #define MALLOCX_ARENA(a) ((((int)(a))+1) << 20) /* * Use as arena index in "arena..{purge,decay,dss}" and * "stats.arenas..*" mallctl interfaces to select all arenas. This * definition is intentionally specified in raw decimal format to support * cpp-based string concatenation, e.g. * * #define STRINGIFY_HELPER(x) #x * #define STRINGIFY(x) STRINGIFY_HELPER(x) * * mallctl("arena." STRINGIFY(MALLCTL_ARENAS_ALL) ".purge", NULL, NULL, NULL, * 0); */ #define MALLCTL_ARENAS_ALL 4096 /* * Use as arena index in "stats.arenas..*" mallctl interfaces to select * destroyed arenas. 
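 * For example, following the same STRINGIFY pattern as above (and assuming
 * the queried statistic is one that remains meaningful for destroyed
 * arenas; this sketch is illustrative only):
 *
 *   size_t allocated, sz = sizeof(allocated);
 *   mallctl("stats.arenas." STRINGIFY(MALLCTL_ARENAS_DESTROYED)
 *       ".small.allocated", &allocated, &sz, NULL, 0);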
*/ #define MALLCTL_ARENAS_DESTROYED 4097 #if defined(__cplusplus) && defined(JEMALLOC_USE_CXX_THROW) # define JEMALLOC_CXX_THROW throw() #else # define JEMALLOC_CXX_THROW #endif #if defined(_MSC_VER) # define JEMALLOC_ATTR(s) # define JEMALLOC_ALIGNED(s) __declspec(align(s)) # define JEMALLOC_ALLOC_SIZE(s) # define JEMALLOC_ALLOC_SIZE2(s1, s2) # ifndef JEMALLOC_EXPORT # ifdef DLLEXPORT # define JEMALLOC_EXPORT __declspec(dllexport) # else # define JEMALLOC_EXPORT __declspec(dllimport) # endif # endif # define JEMALLOC_FORMAT_PRINTF(s, i) # define JEMALLOC_NOINLINE __declspec(noinline) # ifdef __cplusplus # define JEMALLOC_NOTHROW __declspec(nothrow) # else # define JEMALLOC_NOTHROW # endif # define JEMALLOC_SECTION(s) __declspec(allocate(s)) # define JEMALLOC_RESTRICT_RETURN __declspec(restrict) # if _MSC_VER >= 1900 && !defined(__EDG__) # define JEMALLOC_ALLOCATOR __declspec(allocator) # else # define JEMALLOC_ALLOCATOR # endif #elif defined(JEMALLOC_HAVE_ATTR) # define JEMALLOC_ATTR(s) __attribute__((s)) # define JEMALLOC_ALIGNED(s) JEMALLOC_ATTR(aligned(s)) # ifdef JEMALLOC_HAVE_ATTR_ALLOC_SIZE # define JEMALLOC_ALLOC_SIZE(s) JEMALLOC_ATTR(alloc_size(s)) # define JEMALLOC_ALLOC_SIZE2(s1, s2) JEMALLOC_ATTR(alloc_size(s1, s2)) # else # define JEMALLOC_ALLOC_SIZE(s) # define JEMALLOC_ALLOC_SIZE2(s1, s2) # endif # ifndef JEMALLOC_EXPORT # define JEMALLOC_EXPORT JEMALLOC_ATTR(visibility("default")) # endif # ifdef JEMALLOC_HAVE_ATTR_FORMAT_GNU_PRINTF # define JEMALLOC_FORMAT_PRINTF(s, i) JEMALLOC_ATTR(format(gnu_printf, s, i)) # elif defined(JEMALLOC_HAVE_ATTR_FORMAT_PRINTF) # define JEMALLOC_FORMAT_PRINTF(s, i) JEMALLOC_ATTR(format(printf, s, i)) # else # define JEMALLOC_FORMAT_PRINTF(s, i) # endif # define JEMALLOC_NOINLINE JEMALLOC_ATTR(noinline) # define JEMALLOC_NOTHROW JEMALLOC_ATTR(nothrow) # define JEMALLOC_SECTION(s) JEMALLOC_ATTR(section(s)) # define JEMALLOC_RESTRICT_RETURN # define JEMALLOC_ALLOCATOR #else # define JEMALLOC_ATTR(s) # define JEMALLOC_ALIGNED(s) # define JEMALLOC_ALLOC_SIZE(s) # define JEMALLOC_ALLOC_SIZE2(s1, s2) # define JEMALLOC_EXPORT # define JEMALLOC_FORMAT_PRINTF(s, i) # define JEMALLOC_NOINLINE # define JEMALLOC_NOTHROW # define JEMALLOC_SECTION(s) # define JEMALLOC_RESTRICT_RETURN # define JEMALLOC_ALLOCATOR #endif /* * The je_ prefix on the following public symbol declarations is an artifact * of namespace management, and should be omitted in application code unless * JEMALLOC_NO_DEMANGLE is defined (see jemalloc_mangle.h). 
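 * For example, with the default configuration an application simply calls
 * the demangled names (a minimal illustration; size and flags are
 * arbitrary):
 *
 *   void *p = mallocx(4096, MALLOCX_ALIGN(64) | MALLOCX_ZERO);
 *   dallocx(p, 0);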
*/ extern JEMALLOC_EXPORT const char *je_malloc_conf; extern JEMALLOC_EXPORT void (*je_malloc_message)(void *cbopaque, const char *s); JEMALLOC_EXPORT JEMALLOC_ALLOCATOR JEMALLOC_RESTRICT_RETURN void JEMALLOC_NOTHROW *je_malloc(size_t size) JEMALLOC_CXX_THROW JEMALLOC_ATTR(malloc) JEMALLOC_ALLOC_SIZE(1); JEMALLOC_EXPORT JEMALLOC_ALLOCATOR JEMALLOC_RESTRICT_RETURN void JEMALLOC_NOTHROW *je_calloc(size_t num, size_t size) JEMALLOC_CXX_THROW JEMALLOC_ATTR(malloc) JEMALLOC_ALLOC_SIZE2(1, 2); JEMALLOC_EXPORT int JEMALLOC_NOTHROW je_posix_memalign(void **memptr, size_t alignment, size_t size) JEMALLOC_CXX_THROW JEMALLOC_ATTR(nonnull(1)); JEMALLOC_EXPORT JEMALLOC_ALLOCATOR JEMALLOC_RESTRICT_RETURN void JEMALLOC_NOTHROW *je_aligned_alloc(size_t alignment, size_t size) JEMALLOC_CXX_THROW JEMALLOC_ATTR(malloc) JEMALLOC_ALLOC_SIZE(2); JEMALLOC_EXPORT JEMALLOC_ALLOCATOR JEMALLOC_RESTRICT_RETURN void JEMALLOC_NOTHROW *je_realloc(void *ptr, size_t size) JEMALLOC_CXX_THROW JEMALLOC_ALLOC_SIZE(2); JEMALLOC_EXPORT void JEMALLOC_NOTHROW je_free(void *ptr) JEMALLOC_CXX_THROW; JEMALLOC_EXPORT JEMALLOC_ALLOCATOR JEMALLOC_RESTRICT_RETURN void JEMALLOC_NOTHROW *je_mallocx(size_t size, int flags) JEMALLOC_ATTR(malloc) JEMALLOC_ALLOC_SIZE(1); JEMALLOC_EXPORT JEMALLOC_ALLOCATOR JEMALLOC_RESTRICT_RETURN void JEMALLOC_NOTHROW *je_rallocx(void *ptr, size_t size, int flags) JEMALLOC_ALLOC_SIZE(2); JEMALLOC_EXPORT size_t JEMALLOC_NOTHROW je_xallocx(void *ptr, size_t size, size_t extra, int flags); JEMALLOC_EXPORT size_t JEMALLOC_NOTHROW je_sallocx(const void *ptr, int flags) JEMALLOC_ATTR(pure); JEMALLOC_EXPORT void JEMALLOC_NOTHROW je_dallocx(void *ptr, int flags); JEMALLOC_EXPORT void JEMALLOC_NOTHROW je_sdallocx(void *ptr, size_t size, int flags); JEMALLOC_EXPORT size_t JEMALLOC_NOTHROW je_nallocx(size_t size, int flags) JEMALLOC_ATTR(pure); JEMALLOC_EXPORT int JEMALLOC_NOTHROW je_mallctl(const char *name, void *oldp, size_t *oldlenp, void *newp, size_t newlen); JEMALLOC_EXPORT int JEMALLOC_NOTHROW je_mallctlnametomib(const char *name, size_t *mibp, size_t *miblenp); JEMALLOC_EXPORT int JEMALLOC_NOTHROW je_mallctlbymib(const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, void *newp, size_t newlen); JEMALLOC_EXPORT void JEMALLOC_NOTHROW je_malloc_stats_print( void (*write_cb)(void *, const char *), void *je_cbopaque, const char *opts); JEMALLOC_EXPORT size_t JEMALLOC_NOTHROW je_malloc_usable_size( JEMALLOC_USABLE_SIZE_CONST void *ptr) JEMALLOC_CXX_THROW; #ifdef JEMALLOC_OVERRIDE_MEMALIGN JEMALLOC_EXPORT JEMALLOC_ALLOCATOR JEMALLOC_RESTRICT_RETURN void JEMALLOC_NOTHROW *je_memalign(size_t alignment, size_t size) JEMALLOC_CXX_THROW JEMALLOC_ATTR(malloc); #endif #ifdef JEMALLOC_OVERRIDE_VALLOC JEMALLOC_EXPORT JEMALLOC_ALLOCATOR JEMALLOC_RESTRICT_RETURN void JEMALLOC_NOTHROW *je_valloc(size_t size) JEMALLOC_CXX_THROW JEMALLOC_ATTR(malloc); #endif typedef struct extent_hooks_s extent_hooks_t; /* * void * * extent_alloc(extent_hooks_t *extent_hooks, void *new_addr, size_t size, * size_t alignment, bool *zero, bool *commit, unsigned arena_ind); */ typedef void *(extent_alloc_t)(extent_hooks_t *, void *, size_t, size_t, bool *, bool *, unsigned); /* * bool * extent_dalloc(extent_hooks_t *extent_hooks, void *addr, size_t size, * bool committed, unsigned arena_ind); */ typedef bool (extent_dalloc_t)(extent_hooks_t *, void *, size_t, bool, unsigned); /* * void * extent_destroy(extent_hooks_t *extent_hooks, void *addr, size_t size, * bool committed, unsigned arena_ind); */ typedef void (extent_destroy_t)(extent_hooks_t 
*, void *, size_t, bool, unsigned); /* * bool * extent_commit(extent_hooks_t *extent_hooks, void *addr, size_t size, * size_t offset, size_t length, unsigned arena_ind); */ typedef bool (extent_commit_t)(extent_hooks_t *, void *, size_t, size_t, size_t, unsigned); /* * bool * extent_decommit(extent_hooks_t *extent_hooks, void *addr, size_t size, * size_t offset, size_t length, unsigned arena_ind); */ typedef bool (extent_decommit_t)(extent_hooks_t *, void *, size_t, size_t, size_t, unsigned); /* * bool * extent_purge(extent_hooks_t *extent_hooks, void *addr, size_t size, * size_t offset, size_t length, unsigned arena_ind); */ typedef bool (extent_purge_t)(extent_hooks_t *, void *, size_t, size_t, size_t, unsigned); /* * bool * extent_split(extent_hooks_t *extent_hooks, void *addr, size_t size, * size_t size_a, size_t size_b, bool committed, unsigned arena_ind); */ typedef bool (extent_split_t)(extent_hooks_t *, void *, size_t, size_t, size_t, bool, unsigned); /* * bool * extent_merge(extent_hooks_t *extent_hooks, void *addr_a, size_t size_a, * void *addr_b, size_t size_b, bool committed, unsigned arena_ind); */ typedef bool (extent_merge_t)(extent_hooks_t *, void *, size_t, void *, size_t, bool, unsigned); struct extent_hooks_s { extent_alloc_t *alloc; extent_dalloc_t *dalloc; extent_destroy_t *destroy; extent_commit_t *commit; extent_decommit_t *decommit; extent_purge_t *purge_lazy; extent_purge_t *purge_forced; extent_split_t *split; extent_merge_t *merge; }; /* * By default application code must explicitly refer to mangled symbol names, * so that it is possible to use jemalloc in conjunction with another allocator * in the same application. Define JEMALLOC_MANGLE in order to cause automatic * name mangling that matches the API prefixing that happened as a result of * --with-mangling and/or --with-jemalloc-prefix configuration settings. */ #ifdef JEMALLOC_MANGLE # ifndef JEMALLOC_NO_DEMANGLE # define JEMALLOC_NO_DEMANGLE # endif # define aligned_alloc je_aligned_alloc # define calloc je_calloc # define dallocx je_dallocx # define free je_free # define mallctl je_mallctl # define mallctlbymib je_mallctlbymib # define mallctlnametomib je_mallctlnametomib # define malloc je_malloc # define malloc_conf je_malloc_conf # define malloc_message je_malloc_message # define malloc_stats_print je_malloc_stats_print # define malloc_usable_size je_malloc_usable_size # define mallocx je_mallocx # define nallocx je_nallocx # define posix_memalign je_posix_memalign # define rallocx je_rallocx # define realloc je_realloc # define sallocx je_sallocx # define sdallocx je_sdallocx # define xallocx je_xallocx # define valloc je_valloc #endif /* * The je_* macros can be used as stable alternative names for the * public jemalloc API if JEMALLOC_NO_DEMANGLE is defined. This is primarily * meant for use in jemalloc itself, but it can be used by application code to * provide isolation from the name mangling specified via --with-mangling * and/or --with-jemalloc-prefix. 
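 * For example, a caller that defines JEMALLOC_NO_DEMANGLE before including
 * this header can spell the API explicitly (illustrative only):
 *
 *   void *p = je_mallocx(64, 0);
 *   je_dallocx(p, 0);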
*/ #ifndef JEMALLOC_NO_DEMANGLE # undef je_aligned_alloc # undef je_calloc # undef je_dallocx # undef je_free # undef je_mallctl # undef je_mallctlbymib # undef je_mallctlnametomib # undef je_malloc # undef je_malloc_conf # undef je_malloc_message # undef je_malloc_stats_print # undef je_malloc_usable_size # undef je_mallocx # undef je_nallocx # undef je_posix_memalign # undef je_rallocx # undef je_realloc # undef je_sallocx # undef je_sdallocx # undef je_xallocx # undef je_valloc #endif #ifdef __cplusplus } #endif #endif /* JEMALLOC_H_ */ Index: head/contrib/jemalloc/src/arena.c =================================================================== --- head/contrib/jemalloc/src/arena.c (revision 320622) +++ head/contrib/jemalloc/src/arena.c (revision 320623) @@ -1,2150 +1,2179 @@ #define JEMALLOC_ARENA_C_ #include "jemalloc/internal/jemalloc_preamble.h" #include "jemalloc/internal/jemalloc_internal_includes.h" #include "jemalloc/internal/assert.h" #include "jemalloc/internal/extent_dss.h" #include "jemalloc/internal/extent_mmap.h" #include "jemalloc/internal/mutex.h" #include "jemalloc/internal/rtree.h" #include "jemalloc/internal/size_classes.h" #include "jemalloc/internal/util.h" /******************************************************************************/ /* Data. */ /* * Define names for both unininitialized and initialized phases, so that * options and mallctl processing are straightforward. */ const char *percpu_arena_mode_names[] = { "percpu", "phycpu", "disabled", "percpu", "phycpu" }; percpu_arena_mode_t opt_percpu_arena = PERCPU_ARENA_DEFAULT; ssize_t opt_dirty_decay_ms = DIRTY_DECAY_MS_DEFAULT; ssize_t opt_muzzy_decay_ms = MUZZY_DECAY_MS_DEFAULT; static atomic_zd_t dirty_decay_ms_default; static atomic_zd_t muzzy_decay_ms_default; const arena_bin_info_t arena_bin_info[NBINS] = { #define BIN_INFO_bin_yes(reg_size, slab_size, nregs) \ {reg_size, slab_size, nregs, BITMAP_INFO_INITIALIZER(nregs)}, #define BIN_INFO_bin_no(reg_size, slab_size, nregs) #define SC(index, lg_grp, lg_delta, ndelta, psz, bin, pgs, \ lg_delta_lookup) \ BIN_INFO_bin_##bin((1U<mtx, "arena_stats", WITNESS_RANK_ARENA_STATS, malloc_mutex_rank_exclusive)) { return true; } #endif /* Memory is zeroed, so there is no need to clear stats. */ return false; } static void arena_stats_lock(tsdn_t *tsdn, arena_stats_t *arena_stats) { #ifndef JEMALLOC_ATOMIC_U64 malloc_mutex_lock(tsdn, &arena_stats->mtx); #endif } static void arena_stats_unlock(tsdn_t *tsdn, arena_stats_t *arena_stats) { #ifndef JEMALLOC_ATOMIC_U64 malloc_mutex_unlock(tsdn, &arena_stats->mtx); #endif } static uint64_t arena_stats_read_u64(tsdn_t *tsdn, arena_stats_t *arena_stats, arena_stats_u64_t *p) { #ifdef JEMALLOC_ATOMIC_U64 return atomic_load_u64(p, ATOMIC_RELAXED); #else malloc_mutex_assert_owner(tsdn, &arena_stats->mtx); return *p; #endif } static void arena_stats_add_u64(tsdn_t *tsdn, arena_stats_t *arena_stats, arena_stats_u64_t *p, uint64_t x) { #ifdef JEMALLOC_ATOMIC_U64 atomic_fetch_add_u64(p, x, ATOMIC_RELAXED); #else malloc_mutex_assert_owner(tsdn, &arena_stats->mtx); *p += x; #endif } UNUSED static void arena_stats_sub_u64(tsdn_t *tsdn, arena_stats_t *arena_stats, arena_stats_u64_t *p, uint64_t x) { #ifdef JEMALLOC_ATOMIC_U64 UNUSED uint64_t r = atomic_fetch_sub_u64(p, x, ATOMIC_RELAXED); assert(r - x <= r); #else malloc_mutex_assert_owner(tsdn, &arena_stats->mtx); *p -= x; assert(*p + x >= *p); #endif } /* * Non-atomically sets *dst += src. *dst needs external synchronization. 
* This lets us avoid the cost of a fetch_add when its unnecessary (note that * the types here are atomic). */ static void arena_stats_accum_u64(arena_stats_u64_t *dst, uint64_t src) { #ifdef JEMALLOC_ATOMIC_U64 uint64_t cur_dst = atomic_load_u64(dst, ATOMIC_RELAXED); atomic_store_u64(dst, src + cur_dst, ATOMIC_RELAXED); #else *dst += src; #endif } static size_t arena_stats_read_zu(tsdn_t *tsdn, arena_stats_t *arena_stats, atomic_zu_t *p) { #ifdef JEMALLOC_ATOMIC_U64 return atomic_load_zu(p, ATOMIC_RELAXED); #else malloc_mutex_assert_owner(tsdn, &arena_stats->mtx); return atomic_load_zu(p, ATOMIC_RELAXED); #endif } static void arena_stats_add_zu(tsdn_t *tsdn, arena_stats_t *arena_stats, atomic_zu_t *p, size_t x) { #ifdef JEMALLOC_ATOMIC_U64 atomic_fetch_add_zu(p, x, ATOMIC_RELAXED); #else malloc_mutex_assert_owner(tsdn, &arena_stats->mtx); size_t cur = atomic_load_zu(p, ATOMIC_RELAXED); atomic_store_zu(p, cur + x, ATOMIC_RELAXED); #endif } static void arena_stats_sub_zu(tsdn_t *tsdn, arena_stats_t *arena_stats, atomic_zu_t *p, size_t x) { #ifdef JEMALLOC_ATOMIC_U64 UNUSED size_t r = atomic_fetch_sub_zu(p, x, ATOMIC_RELAXED); assert(r - x <= r); #else malloc_mutex_assert_owner(tsdn, &arena_stats->mtx); size_t cur = atomic_load_zu(p, ATOMIC_RELAXED); atomic_store_zu(p, cur - x, ATOMIC_RELAXED); #endif } /* Like the _u64 variant, needs an externally synchronized *dst. */ static void arena_stats_accum_zu(atomic_zu_t *dst, size_t src) { size_t cur_dst = atomic_load_zu(dst, ATOMIC_RELAXED); atomic_store_zu(dst, src + cur_dst, ATOMIC_RELAXED); } void arena_stats_large_nrequests_add(tsdn_t *tsdn, arena_stats_t *arena_stats, szind_t szind, uint64_t nrequests) { arena_stats_lock(tsdn, arena_stats); arena_stats_add_u64(tsdn, arena_stats, &arena_stats->lstats[szind - NBINS].nrequests, nrequests); arena_stats_unlock(tsdn, arena_stats); } void arena_stats_mapped_add(tsdn_t *tsdn, arena_stats_t *arena_stats, size_t size) { arena_stats_lock(tsdn, arena_stats); arena_stats_add_zu(tsdn, arena_stats, &arena_stats->mapped, size); arena_stats_unlock(tsdn, arena_stats); } void arena_basic_stats_merge(tsdn_t *tsdn, arena_t *arena, unsigned *nthreads, const char **dss, ssize_t *dirty_decay_ms, ssize_t *muzzy_decay_ms, size_t *nactive, size_t *ndirty, size_t *nmuzzy) { *nthreads += arena_nthreads_get(arena, false); *dss = dss_prec_names[arena_dss_prec_get(arena)]; *dirty_decay_ms = arena_dirty_decay_ms_get(arena); *muzzy_decay_ms = arena_muzzy_decay_ms_get(arena); *nactive += atomic_load_zu(&arena->nactive, ATOMIC_RELAXED); *ndirty += extents_npages_get(&arena->extents_dirty); *nmuzzy += extents_npages_get(&arena->extents_muzzy); } void arena_stats_merge(tsdn_t *tsdn, arena_t *arena, unsigned *nthreads, const char **dss, ssize_t *dirty_decay_ms, ssize_t *muzzy_decay_ms, size_t *nactive, size_t *ndirty, size_t *nmuzzy, arena_stats_t *astats, malloc_bin_stats_t *bstats, malloc_large_stats_t *lstats) { cassert(config_stats); arena_basic_stats_merge(tsdn, arena, nthreads, dss, dirty_decay_ms, muzzy_decay_ms, nactive, ndirty, nmuzzy); size_t base_allocated, base_resident, base_mapped; base_stats_get(tsdn, arena->base, &base_allocated, &base_resident, &base_mapped); arena_stats_lock(tsdn, &arena->stats); arena_stats_accum_zu(&astats->mapped, base_mapped + arena_stats_read_zu(tsdn, &arena->stats, &arena->stats.mapped)); arena_stats_accum_zu(&astats->retained, extents_npages_get(&arena->extents_retained) << LG_PAGE); arena_stats_accum_u64(&astats->decay_dirty.npurge, arena_stats_read_u64(tsdn, &arena->stats, 
&arena->stats.decay_dirty.npurge)); arena_stats_accum_u64(&astats->decay_dirty.nmadvise, arena_stats_read_u64(tsdn, &arena->stats, &arena->stats.decay_dirty.nmadvise)); arena_stats_accum_u64(&astats->decay_dirty.purged, arena_stats_read_u64(tsdn, &arena->stats, &arena->stats.decay_dirty.purged)); arena_stats_accum_u64(&astats->decay_muzzy.npurge, arena_stats_read_u64(tsdn, &arena->stats, &arena->stats.decay_muzzy.npurge)); arena_stats_accum_u64(&astats->decay_muzzy.nmadvise, arena_stats_read_u64(tsdn, &arena->stats, &arena->stats.decay_muzzy.nmadvise)); arena_stats_accum_u64(&astats->decay_muzzy.purged, arena_stats_read_u64(tsdn, &arena->stats, &arena->stats.decay_muzzy.purged)); arena_stats_accum_zu(&astats->base, base_allocated); arena_stats_accum_zu(&astats->internal, arena_internal_get(arena)); arena_stats_accum_zu(&astats->resident, base_resident + (((atomic_load_zu(&arena->nactive, ATOMIC_RELAXED) + extents_npages_get(&arena->extents_dirty) + extents_npages_get(&arena->extents_muzzy)) << LG_PAGE))); for (szind_t i = 0; i < NSIZES - NBINS; i++) { uint64_t nmalloc = arena_stats_read_u64(tsdn, &arena->stats, &arena->stats.lstats[i].nmalloc); arena_stats_accum_u64(&lstats[i].nmalloc, nmalloc); arena_stats_accum_u64(&astats->nmalloc_large, nmalloc); uint64_t ndalloc = arena_stats_read_u64(tsdn, &arena->stats, &arena->stats.lstats[i].ndalloc); arena_stats_accum_u64(&lstats[i].ndalloc, ndalloc); arena_stats_accum_u64(&astats->ndalloc_large, ndalloc); uint64_t nrequests = arena_stats_read_u64(tsdn, &arena->stats, &arena->stats.lstats[i].nrequests); arena_stats_accum_u64(&lstats[i].nrequests, nmalloc + nrequests); arena_stats_accum_u64(&astats->nrequests_large, nmalloc + nrequests); assert(nmalloc >= ndalloc); assert(nmalloc - ndalloc <= SIZE_T_MAX); size_t curlextents = (size_t)(nmalloc - ndalloc); lstats[i].curlextents += curlextents; arena_stats_accum_zu(&astats->allocated_large, curlextents * sz_index2size(NBINS + i)); } arena_stats_unlock(tsdn, &arena->stats); /* tcache_bytes counts currently cached bytes. */ atomic_store_zu(&astats->tcache_bytes, 0, ATOMIC_RELAXED); malloc_mutex_lock(tsdn, &arena->tcache_ql_mtx); tcache_t *tcache; ql_foreach(tcache, &arena->tcache_ql, link) { szind_t i = 0; for (; i < NBINS; i++) { tcache_bin_t *tbin = tcache_small_bin_get(tcache, i); arena_stats_accum_zu(&astats->tcache_bytes, tbin->ncached * sz_index2size(i)); } for (; i < nhbins; i++) { tcache_bin_t *tbin = tcache_large_bin_get(tcache, i); arena_stats_accum_zu(&astats->tcache_bytes, tbin->ncached * sz_index2size(i)); } } malloc_mutex_prof_read(tsdn, &astats->mutex_prof_data[arena_prof_mutex_tcache_list], &arena->tcache_ql_mtx); malloc_mutex_unlock(tsdn, &arena->tcache_ql_mtx); #define READ_ARENA_MUTEX_PROF_DATA(mtx, ind) \ malloc_mutex_lock(tsdn, &arena->mtx); \ malloc_mutex_prof_read(tsdn, &astats->mutex_prof_data[ind], \ &arena->mtx); \ malloc_mutex_unlock(tsdn, &arena->mtx); /* Gather per arena mutex profiling data. 
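 * Each mutex is briefly locked via READ_ARENA_MUTEX_PROF_DATA() above so
 * that its profiling counters can be read consistently into astats.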
*/ READ_ARENA_MUTEX_PROF_DATA(large_mtx, arena_prof_mutex_large); READ_ARENA_MUTEX_PROF_DATA(extent_avail_mtx, arena_prof_mutex_extent_avail) READ_ARENA_MUTEX_PROF_DATA(extents_dirty.mtx, arena_prof_mutex_extents_dirty) READ_ARENA_MUTEX_PROF_DATA(extents_muzzy.mtx, arena_prof_mutex_extents_muzzy) READ_ARENA_MUTEX_PROF_DATA(extents_retained.mtx, arena_prof_mutex_extents_retained) READ_ARENA_MUTEX_PROF_DATA(decay_dirty.mtx, arena_prof_mutex_decay_dirty) READ_ARENA_MUTEX_PROF_DATA(decay_muzzy.mtx, arena_prof_mutex_decay_muzzy) READ_ARENA_MUTEX_PROF_DATA(base->mtx, arena_prof_mutex_base) #undef READ_ARENA_MUTEX_PROF_DATA nstime_copy(&astats->uptime, &arena->create_time); nstime_update(&astats->uptime); nstime_subtract(&astats->uptime, &arena->create_time); for (szind_t i = 0; i < NBINS; i++) { arena_bin_t *bin = &arena->bins[i]; malloc_mutex_lock(tsdn, &bin->lock); malloc_mutex_prof_read(tsdn, &bstats[i].mutex_data, &bin->lock); bstats[i].nmalloc += bin->stats.nmalloc; bstats[i].ndalloc += bin->stats.ndalloc; bstats[i].nrequests += bin->stats.nrequests; bstats[i].curregs += bin->stats.curregs; bstats[i].nfills += bin->stats.nfills; bstats[i].nflushes += bin->stats.nflushes; bstats[i].nslabs += bin->stats.nslabs; bstats[i].reslabs += bin->stats.reslabs; bstats[i].curslabs += bin->stats.curslabs; malloc_mutex_unlock(tsdn, &bin->lock); } } void arena_extents_dirty_dalloc(tsdn_t *tsdn, arena_t *arena, extent_hooks_t **r_extent_hooks, extent_t *extent) { witness_assert_depth_to_rank(tsdn_witness_tsdp_get(tsdn), WITNESS_RANK_CORE, 0); extents_dalloc(tsdn, arena, r_extent_hooks, &arena->extents_dirty, extent); if (arena_dirty_decay_ms_get(arena) == 0) { arena_decay_dirty(tsdn, arena, false, true); } else { - arena_background_thread_inactivity_check(tsdn, arena); + arena_background_thread_inactivity_check(tsdn, arena, false); } } static void * arena_slab_reg_alloc(tsdn_t *tsdn, extent_t *slab, const arena_bin_info_t *bin_info) { void *ret; arena_slab_data_t *slab_data = extent_slab_data_get(slab); size_t regind; assert(extent_nfree_get(slab) > 0); assert(!bitmap_full(slab_data->bitmap, &bin_info->bitmap_info)); regind = bitmap_sfu(slab_data->bitmap, &bin_info->bitmap_info); ret = (void *)((uintptr_t)extent_addr_get(slab) + (uintptr_t)(bin_info->reg_size * regind)); extent_nfree_dec(slab); return ret; } #ifndef JEMALLOC_JET static #endif size_t arena_slab_regind(extent_t *slab, szind_t binind, const void *ptr) { size_t diff, regind; /* Freeing a pointer outside the slab can cause assertion failure. */ assert((uintptr_t)ptr >= (uintptr_t)extent_addr_get(slab)); assert((uintptr_t)ptr < (uintptr_t)extent_past_get(slab)); /* Freeing an interior pointer can cause assertion failure. */ assert(((uintptr_t)ptr - (uintptr_t)extent_addr_get(slab)) % (uintptr_t)arena_bin_info[binind].reg_size == 0); /* Avoid doing division with a variable divisor. */ diff = (size_t)((uintptr_t)ptr - (uintptr_t)extent_addr_get(slab)); switch (binind) { #define REGIND_bin_yes(index, reg_size) \ case index: \ regind = diff / (reg_size); \ assert(diff == regind * (reg_size)); \ break; #define REGIND_bin_no(index, reg_size) #define SC(index, lg_grp, lg_delta, ndelta, psz, bin, pgs, \ lg_delta_lookup) \ REGIND_bin_##bin(index, (1U<nregs); /* Freeing an unallocated pointer can cause assertion failure. 
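 * (e.g., a double free, or a pointer into a region of the slab that was
 * never handed out).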
*/ assert(bitmap_get(slab_data->bitmap, &bin_info->bitmap_info, regind)); bitmap_unset(slab_data->bitmap, &bin_info->bitmap_info, regind); extent_nfree_inc(slab); } static void arena_nactive_add(arena_t *arena, size_t add_pages) { atomic_fetch_add_zu(&arena->nactive, add_pages, ATOMIC_RELAXED); } static void arena_nactive_sub(arena_t *arena, size_t sub_pages) { assert(atomic_load_zu(&arena->nactive, ATOMIC_RELAXED) >= sub_pages); atomic_fetch_sub_zu(&arena->nactive, sub_pages, ATOMIC_RELAXED); } static void arena_large_malloc_stats_update(tsdn_t *tsdn, arena_t *arena, size_t usize) { szind_t index, hindex; cassert(config_stats); if (usize < LARGE_MINCLASS) { usize = LARGE_MINCLASS; } index = sz_size2index(usize); hindex = (index >= NBINS) ? index - NBINS : 0; arena_stats_add_u64(tsdn, &arena->stats, &arena->stats.lstats[hindex].nmalloc, 1); } static void arena_large_dalloc_stats_update(tsdn_t *tsdn, arena_t *arena, size_t usize) { szind_t index, hindex; cassert(config_stats); if (usize < LARGE_MINCLASS) { usize = LARGE_MINCLASS; } index = sz_size2index(usize); hindex = (index >= NBINS) ? index - NBINS : 0; arena_stats_add_u64(tsdn, &arena->stats, &arena->stats.lstats[hindex].ndalloc, 1); } static void arena_large_ralloc_stats_update(tsdn_t *tsdn, arena_t *arena, size_t oldusize, size_t usize) { arena_large_dalloc_stats_update(tsdn, arena, oldusize); arena_large_malloc_stats_update(tsdn, arena, usize); } extent_t * arena_extent_alloc_large(tsdn_t *tsdn, arena_t *arena, size_t usize, size_t alignment, bool *zero) { extent_hooks_t *extent_hooks = EXTENT_HOOKS_INITIALIZER; witness_assert_depth_to_rank(tsdn_witness_tsdp_get(tsdn), WITNESS_RANK_CORE, 0); szind_t szind = sz_size2index(usize); size_t mapped_add; bool commit = true; extent_t *extent = extents_alloc(tsdn, arena, &extent_hooks, &arena->extents_dirty, NULL, usize, sz_large_pad, alignment, false, szind, zero, &commit); if (extent == NULL) { extent = extents_alloc(tsdn, arena, &extent_hooks, &arena->extents_muzzy, NULL, usize, sz_large_pad, alignment, false, szind, zero, &commit); } size_t size = usize + sz_large_pad; if (extent == NULL) { extent = extent_alloc_wrapper(tsdn, arena, &extent_hooks, NULL, usize, sz_large_pad, alignment, false, szind, zero, &commit); if (config_stats) { /* * extent may be NULL on OOM, but in that case * mapped_add isn't used below, so there's no need to * conditionlly set it to 0 here. 
*/ mapped_add = size; } } else if (config_stats) { mapped_add = 0; } if (extent != NULL) { if (config_stats) { arena_stats_lock(tsdn, &arena->stats); arena_large_malloc_stats_update(tsdn, arena, usize); if (mapped_add != 0) { arena_stats_add_zu(tsdn, &arena->stats, &arena->stats.mapped, mapped_add); } arena_stats_unlock(tsdn, &arena->stats); } arena_nactive_add(arena, size >> LG_PAGE); } return extent; } void arena_extent_dalloc_large_prep(tsdn_t *tsdn, arena_t *arena, extent_t *extent) { if (config_stats) { arena_stats_lock(tsdn, &arena->stats); arena_large_dalloc_stats_update(tsdn, arena, extent_usize_get(extent)); arena_stats_unlock(tsdn, &arena->stats); } arena_nactive_sub(arena, extent_size_get(extent) >> LG_PAGE); } void arena_extent_ralloc_large_shrink(tsdn_t *tsdn, arena_t *arena, extent_t *extent, size_t oldusize) { size_t usize = extent_usize_get(extent); size_t udiff = oldusize - usize; if (config_stats) { arena_stats_lock(tsdn, &arena->stats); arena_large_ralloc_stats_update(tsdn, arena, oldusize, usize); arena_stats_unlock(tsdn, &arena->stats); } arena_nactive_sub(arena, udiff >> LG_PAGE); } void arena_extent_ralloc_large_expand(tsdn_t *tsdn, arena_t *arena, extent_t *extent, size_t oldusize) { size_t usize = extent_usize_get(extent); size_t udiff = usize - oldusize; if (config_stats) { arena_stats_lock(tsdn, &arena->stats); arena_large_ralloc_stats_update(tsdn, arena, oldusize, usize); arena_stats_unlock(tsdn, &arena->stats); } arena_nactive_add(arena, udiff >> LG_PAGE); } static ssize_t arena_decay_ms_read(arena_decay_t *decay) { return atomic_load_zd(&decay->time_ms, ATOMIC_RELAXED); } static void arena_decay_ms_write(arena_decay_t *decay, ssize_t decay_ms) { atomic_store_zd(&decay->time_ms, decay_ms, ATOMIC_RELAXED); } static void arena_decay_deadline_init(arena_decay_t *decay) { /* * Generate a new deadline that is uniformly random within the next * epoch after the current one. */ nstime_copy(&decay->deadline, &decay->epoch); nstime_add(&decay->deadline, &decay->interval); if (arena_decay_ms_read(decay) > 0) { nstime_t jitter; nstime_init(&jitter, prng_range_u64(&decay->jitter_state, nstime_ns(&decay->interval))); nstime_add(&decay->deadline, &jitter); } } static bool arena_decay_deadline_reached(const arena_decay_t *decay, const nstime_t *time) { return (nstime_compare(&decay->deadline, time) <= 0); } static size_t arena_decay_backlog_npages_limit(const arena_decay_t *decay) { uint64_t sum; size_t npages_limit_backlog; unsigned i; /* * For each element of decay_backlog, multiply by the corresponding * fixed-point smoothstep decay factor. Sum the products, then divide * to round down to the nearest whole number of pages. */ sum = 0; for (i = 0; i < SMOOTHSTEP_NSTEPS; i++) { sum += decay->backlog[i] * h_steps[i]; } npages_limit_backlog = (size_t)(sum >> SMOOTHSTEP_BFP); return npages_limit_backlog; } static void arena_decay_backlog_update_last(arena_decay_t *decay, size_t current_npages) { size_t npages_delta = (current_npages > decay->nunpurged) ? 
current_npages - decay->nunpurged : 0; decay->backlog[SMOOTHSTEP_NSTEPS-1] = npages_delta; if (config_debug) { if (current_npages > decay->ceil_npages) { decay->ceil_npages = current_npages; } size_t npages_limit = arena_decay_backlog_npages_limit(decay); assert(decay->ceil_npages >= npages_limit); if (decay->ceil_npages > npages_limit) { decay->ceil_npages = npages_limit; } } } static void arena_decay_backlog_update(arena_decay_t *decay, uint64_t nadvance_u64, size_t current_npages) { if (nadvance_u64 >= SMOOTHSTEP_NSTEPS) { memset(decay->backlog, 0, (SMOOTHSTEP_NSTEPS-1) * sizeof(size_t)); } else { size_t nadvance_z = (size_t)nadvance_u64; assert((uint64_t)nadvance_z == nadvance_u64); memmove(decay->backlog, &decay->backlog[nadvance_z], (SMOOTHSTEP_NSTEPS - nadvance_z) * sizeof(size_t)); if (nadvance_z > 1) { memset(&decay->backlog[SMOOTHSTEP_NSTEPS - nadvance_z], 0, (nadvance_z-1) * sizeof(size_t)); } } arena_decay_backlog_update_last(decay, current_npages); } static void arena_decay_try_purge(tsdn_t *tsdn, arena_t *arena, arena_decay_t *decay, - extents_t *extents, size_t current_npages, size_t npages_limit) { + extents_t *extents, size_t current_npages, size_t npages_limit, + bool is_background_thread) { if (current_npages > npages_limit) { arena_decay_to_limit(tsdn, arena, decay, extents, false, - npages_limit); + npages_limit, is_background_thread); } } static void arena_decay_epoch_advance_helper(arena_decay_t *decay, const nstime_t *time, size_t current_npages) { assert(arena_decay_deadline_reached(decay, time)); nstime_t delta; nstime_copy(&delta, time); nstime_subtract(&delta, &decay->epoch); uint64_t nadvance_u64 = nstime_divide(&delta, &decay->interval); assert(nadvance_u64 > 0); /* Add nadvance_u64 decay intervals to epoch. */ nstime_copy(&delta, &decay->interval); nstime_imultiply(&delta, nadvance_u64); nstime_add(&decay->epoch, &delta); /* Set a new deadline. */ arena_decay_deadline_init(decay); /* Update the backlog. */ arena_decay_backlog_update(decay, nadvance_u64, current_npages); } static void arena_decay_epoch_advance(tsdn_t *tsdn, arena_t *arena, arena_decay_t *decay, - extents_t *extents, const nstime_t *time, bool purge) { + extents_t *extents, const nstime_t *time, bool is_background_thread) { size_t current_npages = extents_npages_get(extents); arena_decay_epoch_advance_helper(decay, time, current_npages); size_t npages_limit = arena_decay_backlog_npages_limit(decay); /* We may unlock decay->mtx when try_purge(). Finish logging first. */ decay->nunpurged = (npages_limit > current_npages) ? 
npages_limit : current_npages; - if (purge) { + + if (!background_thread_enabled() || is_background_thread) { arena_decay_try_purge(tsdn, arena, decay, extents, - current_npages, npages_limit); + current_npages, npages_limit, is_background_thread); } } static void arena_decay_reinit(arena_decay_t *decay, extents_t *extents, ssize_t decay_ms) { arena_decay_ms_write(decay, decay_ms); if (decay_ms > 0) { nstime_init(&decay->interval, (uint64_t)decay_ms * KQU(1000000)); nstime_idivide(&decay->interval, SMOOTHSTEP_NSTEPS); } nstime_init(&decay->epoch, 0); nstime_update(&decay->epoch); decay->jitter_state = (uint64_t)(uintptr_t)decay; arena_decay_deadline_init(decay); decay->nunpurged = 0; memset(decay->backlog, 0, SMOOTHSTEP_NSTEPS * sizeof(size_t)); } static bool arena_decay_init(arena_decay_t *decay, extents_t *extents, ssize_t decay_ms, decay_stats_t *stats) { if (config_debug) { for (size_t i = 0; i < sizeof(arena_decay_t); i++) { assert(((char *)decay)[i] == 0); } decay->ceil_npages = 0; } if (malloc_mutex_init(&decay->mtx, "decay", WITNESS_RANK_DECAY, malloc_mutex_rank_exclusive)) { return true; } decay->purging = false; arena_decay_reinit(decay, extents, decay_ms); /* Memory is zeroed, so there is no need to clear stats. */ if (config_stats) { decay->stats = stats; } return false; } static bool arena_decay_ms_valid(ssize_t decay_ms) { if (decay_ms < -1) { return false; } if (decay_ms == -1 || (uint64_t)decay_ms <= NSTIME_SEC_MAX * KQU(1000)) { return true; } return false; } static bool arena_maybe_decay(tsdn_t *tsdn, arena_t *arena, arena_decay_t *decay, extents_t *extents, bool is_background_thread) { malloc_mutex_assert_owner(tsdn, &decay->mtx); /* Purge all or nothing if the option is disabled. */ ssize_t decay_ms = arena_decay_ms_read(decay); if (decay_ms <= 0) { if (decay_ms == 0) { arena_decay_to_limit(tsdn, arena, decay, extents, false, - 0); + 0, is_background_thread); } return false; } nstime_t time; nstime_init(&time, 0); nstime_update(&time); if (unlikely(!nstime_monotonic() && nstime_compare(&decay->epoch, &time) > 0)) { /* * Time went backwards. Move the epoch back in time and * generate a new deadline, with the expectation that time * typically flows forward for long enough periods of time that * epochs complete. Unfortunately, this strategy is susceptible * to clock jitter triggering premature epoch advances, but * clock jitter estimation and compensation isn't feasible here * because calls into this code are event-driven. */ nstime_copy(&decay->epoch, &time); arena_decay_deadline_init(decay); } else { /* Verify that time does not go backwards. */ assert(nstime_compare(&decay->epoch, &time) <= 0); } /* * If the deadline has been reached, advance to the current epoch and * purge to the new limit if necessary. Note that dirty pages created * during the current epoch are not subject to purge until a future * epoch, so as a result purging only happens during epoch advances, or * being triggered by background threads (scheduled event). 
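/*
 * Editorial sketch (not part of this patch): the fixed-point weighted sum
 * performed by arena_decay_backlog_npages_limit() above. The step count,
 * the fixed-point shift, and the h_steps[] weights here are stand-ins, not
 * jemalloc's actual SMOOTHSTEP_* values.
 */
#include <stddef.h>
#include <stdint.h>

#define SKETCH_NSTEPS 4
#define SKETCH_BFP 24 /* binary fixed point: 1.0 == (1 << 24) */

static size_t
sketch_backlog_npages_limit(const size_t backlog[SKETCH_NSTEPS],
    const uint64_t h_steps[SKETCH_NSTEPS]) {
	uint64_t sum = 0;
	for (unsigned i = 0; i < SKETCH_NSTEPS; i++) {
		/* Each backlog slot is scaled by its smoothstep weight. */
		sum += (uint64_t)backlog[i] * h_steps[i];
	}
	/* Shift out the fixed-point fraction: whole pages only. */
	return (size_t)(sum >> SKETCH_BFP);
}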
*/ bool advance_epoch = arena_decay_deadline_reached(decay, &time); if (advance_epoch) { - bool should_purge = is_background_thread || - !background_thread_enabled(); arena_decay_epoch_advance(tsdn, arena, decay, extents, &time, - should_purge); + is_background_thread); } else if (is_background_thread) { arena_decay_try_purge(tsdn, arena, decay, extents, extents_npages_get(extents), - arena_decay_backlog_npages_limit(decay)); + arena_decay_backlog_npages_limit(decay), + is_background_thread); } return advance_epoch; } static ssize_t arena_decay_ms_get(arena_decay_t *decay) { return arena_decay_ms_read(decay); } ssize_t arena_dirty_decay_ms_get(arena_t *arena) { return arena_decay_ms_get(&arena->decay_dirty); } ssize_t arena_muzzy_decay_ms_get(arena_t *arena) { return arena_decay_ms_get(&arena->decay_muzzy); } static bool arena_decay_ms_set(tsdn_t *tsdn, arena_t *arena, arena_decay_t *decay, extents_t *extents, ssize_t decay_ms) { if (!arena_decay_ms_valid(decay_ms)) { return true; } malloc_mutex_lock(tsdn, &decay->mtx); /* * Restart decay backlog from scratch, which may cause many dirty pages * to be immediately purged. It would conceptually be possible to map * the old backlog onto the new backlog, but there is no justification * for such complexity since decay_ms changes are intended to be * infrequent, either between the {-1, 0, >0} states, or a one-time * arbitrary change during initial arena configuration. */ arena_decay_reinit(decay, extents, decay_ms); arena_maybe_decay(tsdn, arena, decay, extents, false); malloc_mutex_unlock(tsdn, &decay->mtx); return false; } bool arena_dirty_decay_ms_set(tsdn_t *tsdn, arena_t *arena, ssize_t decay_ms) { return arena_decay_ms_set(tsdn, arena, &arena->decay_dirty, &arena->extents_dirty, decay_ms); } bool arena_muzzy_decay_ms_set(tsdn_t *tsdn, arena_t *arena, ssize_t decay_ms) { return arena_decay_ms_set(tsdn, arena, &arena->decay_muzzy, &arena->extents_muzzy, decay_ms); } static size_t arena_stash_decayed(tsdn_t *tsdn, arena_t *arena, extent_hooks_t **r_extent_hooks, extents_t *extents, size_t npages_limit, extent_list_t *decay_extents) { witness_assert_depth_to_rank(tsdn_witness_tsdp_get(tsdn), WITNESS_RANK_CORE, 0); /* Stash extents according to npages_limit. 
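/*
 * Editorial sketch (not part of this patch): driving the decay_ms setters
 * above from application code through the public mallctl names listed in
 * the 5.0 release notes ("arenas.dirty_decay_ms", "arena.<i>.dirty_decay_ms").
 * Arena index 0 and the 10-second value are arbitrary; on FreeBSD these
 * non-portable APIs are declared in <malloc_np.h>.
 */
#include <jemalloc/jemalloc.h>
#include <sys/types.h>

static int
sketch_set_dirty_decay(void) {
	ssize_t decay_ms = 10 * 1000; /* 10s; 0 purges eagerly, -1 disables. */
	/* Default applied to arenas created from now on. */
	if (mallctl("arenas.dirty_decay_ms", NULL, NULL, &decay_ms,
	    sizeof(decay_ms)) != 0) {
		return -1;
	}
	/* Existing arena 0. */
	return mallctl("arena.0.dirty_decay_ms", NULL, NULL, &decay_ms,
	    sizeof(decay_ms));
}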
*/ size_t nstashed = 0; extent_t *extent; while ((extent = extents_evict(tsdn, arena, r_extent_hooks, extents, npages_limit)) != NULL) { extent_list_append(decay_extents, extent); nstashed += extent_size_get(extent) >> LG_PAGE; } return nstashed; } static size_t arena_decay_stashed(tsdn_t *tsdn, arena_t *arena, extent_hooks_t **r_extent_hooks, arena_decay_t *decay, extents_t *extents, - bool all, extent_list_t *decay_extents) { + bool all, extent_list_t *decay_extents, bool is_background_thread) { UNUSED size_t nmadvise, nunmapped; size_t npurged; if (config_stats) { nmadvise = 0; nunmapped = 0; } npurged = 0; ssize_t muzzy_decay_ms = arena_muzzy_decay_ms_get(arena); for (extent_t *extent = extent_list_first(decay_extents); extent != NULL; extent = extent_list_first(decay_extents)) { if (config_stats) { nmadvise++; } size_t npages = extent_size_get(extent) >> LG_PAGE; npurged += npages; extent_list_remove(decay_extents, extent); switch (extents_state_get(extents)) { case extent_state_active: not_reached(); case extent_state_dirty: if (!all && muzzy_decay_ms != 0 && !extent_purge_lazy_wrapper(tsdn, arena, r_extent_hooks, extent, 0, extent_size_get(extent))) { extents_dalloc(tsdn, arena, r_extent_hooks, &arena->extents_muzzy, extent); arena_background_thread_inactivity_check(tsdn, - arena); + arena, is_background_thread); break; } /* Fall through. */ case extent_state_muzzy: extent_dalloc_wrapper(tsdn, arena, r_extent_hooks, extent); if (config_stats) { nunmapped += npages; } break; case extent_state_retained: default: not_reached(); } } if (config_stats) { arena_stats_lock(tsdn, &arena->stats); arena_stats_add_u64(tsdn, &arena->stats, &decay->stats->npurge, 1); arena_stats_add_u64(tsdn, &arena->stats, &decay->stats->nmadvise, nmadvise); arena_stats_add_u64(tsdn, &arena->stats, &decay->stats->purged, npurged); arena_stats_sub_zu(tsdn, &arena->stats, &arena->stats.mapped, nunmapped << LG_PAGE); arena_stats_unlock(tsdn, &arena->stats); } return npurged; } /* * npages_limit: Decay as many dirty extents as possible without violating the * invariant: (extents_npages_get(extents) >= npages_limit) */ static void arena_decay_to_limit(tsdn_t *tsdn, arena_t *arena, arena_decay_t *decay, - extents_t *extents, bool all, size_t npages_limit) { + extents_t *extents, bool all, size_t npages_limit, + bool is_background_thread) { witness_assert_depth_to_rank(tsdn_witness_tsdp_get(tsdn), WITNESS_RANK_CORE, 1); malloc_mutex_assert_owner(tsdn, &decay->mtx); if (decay->purging) { return; } decay->purging = true; malloc_mutex_unlock(tsdn, &decay->mtx); extent_hooks_t *extent_hooks = extent_hooks_get(arena); extent_list_t decay_extents; extent_list_init(&decay_extents); size_t npurge = arena_stash_decayed(tsdn, arena, &extent_hooks, extents, npages_limit, &decay_extents); if (npurge != 0) { UNUSED size_t npurged = arena_decay_stashed(tsdn, arena, - &extent_hooks, decay, extents, all, &decay_extents); + &extent_hooks, decay, extents, all, &decay_extents, + is_background_thread); assert(npurged == npurge); } malloc_mutex_lock(tsdn, &decay->mtx); decay->purging = false; } static bool arena_decay_impl(tsdn_t *tsdn, arena_t *arena, arena_decay_t *decay, extents_t *extents, bool is_background_thread, bool all) { if (all) { malloc_mutex_lock(tsdn, &decay->mtx); - arena_decay_to_limit(tsdn, arena, decay, extents, all, 0); + arena_decay_to_limit(tsdn, arena, decay, extents, all, 0, + is_background_thread); malloc_mutex_unlock(tsdn, &decay->mtx); return false; } if (malloc_mutex_trylock(tsdn, &decay->mtx)) { /* No need to 
wait if another thread is in progress. */ return true; } bool epoch_advanced = arena_maybe_decay(tsdn, arena, decay, extents, is_background_thread); size_t npages_new; if (epoch_advanced) { /* Backlog is updated on epoch advance. */ npages_new = decay->backlog[SMOOTHSTEP_NSTEPS-1]; } malloc_mutex_unlock(tsdn, &decay->mtx); if (have_background_thread && background_thread_enabled() && epoch_advanced && !is_background_thread) { background_thread_interval_check(tsdn, arena, decay, npages_new); } return false; } static bool arena_decay_dirty(tsdn_t *tsdn, arena_t *arena, bool is_background_thread, bool all) { return arena_decay_impl(tsdn, arena, &arena->decay_dirty, &arena->extents_dirty, is_background_thread, all); } static bool arena_decay_muzzy(tsdn_t *tsdn, arena_t *arena, bool is_background_thread, bool all) { return arena_decay_impl(tsdn, arena, &arena->decay_muzzy, &arena->extents_muzzy, is_background_thread, all); } void arena_decay(tsdn_t *tsdn, arena_t *arena, bool is_background_thread, bool all) { if (arena_decay_dirty(tsdn, arena, is_background_thread, all)) { return; } arena_decay_muzzy(tsdn, arena, is_background_thread, all); } static void arena_slab_dalloc(tsdn_t *tsdn, arena_t *arena, extent_t *slab) { arena_nactive_sub(arena, extent_size_get(slab) >> LG_PAGE); extent_hooks_t *extent_hooks = EXTENT_HOOKS_INITIALIZER; arena_extents_dirty_dalloc(tsdn, arena, &extent_hooks, slab); } static void arena_bin_slabs_nonfull_insert(arena_bin_t *bin, extent_t *slab) { assert(extent_nfree_get(slab) > 0); extent_heap_insert(&bin->slabs_nonfull, slab); } static void arena_bin_slabs_nonfull_remove(arena_bin_t *bin, extent_t *slab) { extent_heap_remove(&bin->slabs_nonfull, slab); } static extent_t * arena_bin_slabs_nonfull_tryget(arena_bin_t *bin) { extent_t *slab = extent_heap_remove_first(&bin->slabs_nonfull); if (slab == NULL) { return NULL; } if (config_stats) { bin->stats.reslabs++; } return slab; } static void arena_bin_slabs_full_insert(arena_t *arena, arena_bin_t *bin, extent_t *slab) { assert(extent_nfree_get(slab) == 0); /* * Tracking extents is required by arena_reset, which is not allowed * for auto arenas. Bypass this step to avoid touching the extent * linkage (often results in cache misses) for auto arenas. */ if (arena_is_auto(arena)) { return; } extent_list_append(&bin->slabs_full, slab); } static void arena_bin_slabs_full_remove(arena_t *arena, arena_bin_t *bin, extent_t *slab) { if (arena_is_auto(arena)) { return; } extent_list_remove(&bin->slabs_full, slab); } void arena_reset(tsd_t *tsd, arena_t *arena) { /* * Locking in this function is unintuitive. The caller guarantees that * no concurrent operations are happening in this arena, but there are * still reasons that some locking is necessary: * * - Some of the functions in the transitive closure of calls assume * appropriate locks are held, and in some cases these locks are * temporarily dropped to avoid lock order reversal or deadlock due to * reentry. * - mallctl("epoch", ...) may concurrently refresh stats. While * strictly speaking this is a "concurrent operation", disallowing * stats refreshes would impose an inconvenient burden. */ /* Large allocations. 
*/ malloc_mutex_lock(tsd_tsdn(tsd), &arena->large_mtx); for (extent_t *extent = extent_list_first(&arena->large); extent != NULL; extent = extent_list_first(&arena->large)) { void *ptr = extent_base_get(extent); size_t usize; malloc_mutex_unlock(tsd_tsdn(tsd), &arena->large_mtx); alloc_ctx_t alloc_ctx; rtree_ctx_t *rtree_ctx = tsd_rtree_ctx(tsd); rtree_szind_slab_read(tsd_tsdn(tsd), &extents_rtree, rtree_ctx, (uintptr_t)ptr, true, &alloc_ctx.szind, &alloc_ctx.slab); assert(alloc_ctx.szind != NSIZES); if (config_stats || (config_prof && opt_prof)) { usize = sz_index2size(alloc_ctx.szind); assert(usize == isalloc(tsd_tsdn(tsd), ptr)); } /* Remove large allocation from prof sample set. */ if (config_prof && opt_prof) { prof_free(tsd, ptr, usize, &alloc_ctx); } large_dalloc(tsd_tsdn(tsd), extent); malloc_mutex_lock(tsd_tsdn(tsd), &arena->large_mtx); } malloc_mutex_unlock(tsd_tsdn(tsd), &arena->large_mtx); /* Bins. */ for (unsigned i = 0; i < NBINS; i++) { extent_t *slab; arena_bin_t *bin = &arena->bins[i]; malloc_mutex_lock(tsd_tsdn(tsd), &bin->lock); if (bin->slabcur != NULL) { slab = bin->slabcur; bin->slabcur = NULL; malloc_mutex_unlock(tsd_tsdn(tsd), &bin->lock); arena_slab_dalloc(tsd_tsdn(tsd), arena, slab); malloc_mutex_lock(tsd_tsdn(tsd), &bin->lock); } while ((slab = extent_heap_remove_first(&bin->slabs_nonfull)) != NULL) { malloc_mutex_unlock(tsd_tsdn(tsd), &bin->lock); arena_slab_dalloc(tsd_tsdn(tsd), arena, slab); malloc_mutex_lock(tsd_tsdn(tsd), &bin->lock); } for (slab = extent_list_first(&bin->slabs_full); slab != NULL; slab = extent_list_first(&bin->slabs_full)) { arena_bin_slabs_full_remove(arena, bin, slab); malloc_mutex_unlock(tsd_tsdn(tsd), &bin->lock); arena_slab_dalloc(tsd_tsdn(tsd), arena, slab); malloc_mutex_lock(tsd_tsdn(tsd), &bin->lock); } if (config_stats) { bin->stats.curregs = 0; bin->stats.curslabs = 0; } malloc_mutex_unlock(tsd_tsdn(tsd), &bin->lock); } atomic_store_zu(&arena->nactive, 0, ATOMIC_RELAXED); } static void arena_destroy_retained(tsdn_t *tsdn, arena_t *arena) { /* * Iterate over the retained extents and destroy them. This gives the * extent allocator underlying the extent hooks an opportunity to unmap * all retained memory without having to keep its own metadata * structures. In practice, virtual memory for dss-allocated extents is * leaked here, so best practice is to avoid dss for arenas to be * destroyed, or provide custom extent hooks that track retained * dss-based extents for later reuse. */ extent_hooks_t *extent_hooks = extent_hooks_get(arena); extent_t *extent; while ((extent = extents_evict(tsdn, arena, &extent_hooks, &arena->extents_retained, 0)) != NULL) { extent_destroy_wrapper(tsdn, arena, &extent_hooks, extent); } } void arena_destroy(tsd_t *tsd, arena_t *arena) { assert(base_ind_get(arena->base) >= narenas_auto); assert(arena_nthreads_get(arena, false) == 0); assert(arena_nthreads_get(arena, true) == 0); /* * No allocations have occurred since arena_reset() was called. * Furthermore, the caller (arena_i_destroy_ctl()) purged all cached * extents, so only retained extents may remain. */ assert(extents_npages_get(&arena->extents_dirty) == 0); assert(extents_npages_get(&arena->extents_muzzy) == 0); /* Deallocate retained memory. */ arena_destroy_retained(tsd_tsdn(tsd), arena); /* * Remove the arena pointer from the arenas array. 
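/*
 * Editorial sketch (not part of this patch): the intended lifecycle for the
 * arena_destroy() path above, using the "arenas.create" and
 * "arena.<i>.destroy" mallctl names from the 5.0 release notes. Error
 * handling is minimal and the allocation size is arbitrary.
 */
#include <jemalloc/jemalloc.h>
#include <stddef.h>
#include <stdio.h>

static void
sketch_arena_lifecycle(void) {
	unsigned arena_ind;
	size_t sz = sizeof(arena_ind);
	char name[64];

	/* Create a manually managed arena with default extent hooks. */
	if (mallctl("arenas.create", &arena_ind, &sz, NULL, 0) != 0) {
		return;
	}
	/* Allocate from it explicitly... */
	void *p = mallocx(4096, MALLOCX_ARENA(arena_ind));
	if (p != NULL) {
		dallocx(p, 0);
	}
	/* ...then discard all of the arena's data and metadata. */
	snprintf(name, sizeof(name), "arena.%u.destroy", arena_ind);
	mallctl(name, NULL, NULL, NULL, 0);
}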
We rely on the fact * that there is no way for the application to get a dirty read from the * arenas array unless there is an inherent race in the application * involving access of an arena being concurrently destroyed. The * application must synchronize knowledge of the arena's validity, so as * long as we use an atomic write to update the arenas array, the * application will get a clean read any time after it synchronizes * knowledge that the arena is no longer valid. */ arena_set(base_ind_get(arena->base), NULL); /* * Destroy the base allocator, which manages all metadata ever mapped by * this arena. */ - base_delete(arena->base); + base_delete(tsd_tsdn(tsd), arena->base); } static extent_t * arena_slab_alloc_hard(tsdn_t *tsdn, arena_t *arena, extent_hooks_t **r_extent_hooks, const arena_bin_info_t *bin_info, szind_t szind) { extent_t *slab; bool zero, commit; witness_assert_depth_to_rank(tsdn_witness_tsdp_get(tsdn), WITNESS_RANK_CORE, 0); zero = false; commit = true; slab = extent_alloc_wrapper(tsdn, arena, r_extent_hooks, NULL, bin_info->slab_size, 0, PAGE, true, szind, &zero, &commit); if (config_stats && slab != NULL) { arena_stats_mapped_add(tsdn, &arena->stats, bin_info->slab_size); } return slab; } static extent_t * arena_slab_alloc(tsdn_t *tsdn, arena_t *arena, szind_t binind, const arena_bin_info_t *bin_info) { witness_assert_depth_to_rank(tsdn_witness_tsdp_get(tsdn), WITNESS_RANK_CORE, 0); extent_hooks_t *extent_hooks = EXTENT_HOOKS_INITIALIZER; szind_t szind = sz_size2index(bin_info->reg_size); bool zero = false; bool commit = true; extent_t *slab = extents_alloc(tsdn, arena, &extent_hooks, &arena->extents_dirty, NULL, bin_info->slab_size, 0, PAGE, true, binind, &zero, &commit); if (slab == NULL) { slab = extents_alloc(tsdn, arena, &extent_hooks, &arena->extents_muzzy, NULL, bin_info->slab_size, 0, PAGE, true, binind, &zero, &commit); } if (slab == NULL) { slab = arena_slab_alloc_hard(tsdn, arena, &extent_hooks, bin_info, szind); if (slab == NULL) { return NULL; } } assert(extent_slab_get(slab)); /* Initialize slab internals. */ arena_slab_data_t *slab_data = extent_slab_data_get(slab); extent_nfree_set(slab, bin_info->nregs); bitmap_init(slab_data->bitmap, &bin_info->bitmap_info, false); arena_nactive_add(arena, extent_size_get(slab) >> LG_PAGE); return slab; } static extent_t * arena_bin_nonfull_slab_get(tsdn_t *tsdn, arena_t *arena, arena_bin_t *bin, szind_t binind) { extent_t *slab; const arena_bin_info_t *bin_info; /* Look for a usable slab. */ slab = arena_bin_slabs_nonfull_tryget(bin); if (slab != NULL) { return slab; } /* No existing slabs have any space available. */ bin_info = &arena_bin_info[binind]; /* Allocate a new slab. */ malloc_mutex_unlock(tsdn, &bin->lock); /******************************/ slab = arena_slab_alloc(tsdn, arena, binind, bin_info); /********************************/ malloc_mutex_lock(tsdn, &bin->lock); if (slab != NULL) { if (config_stats) { bin->stats.nslabs++; bin->stats.curslabs++; } return slab; } /* * arena_slab_alloc() failed, but another thread may have made * sufficient memory available while this one dropped bin->lock above, * so search one more time. */ slab = arena_bin_slabs_nonfull_tryget(bin); if (slab != NULL) { return slab; } return NULL; } /* Re-fill bin->slabcur, then call arena_slab_reg_alloc(). 
*/ static void * arena_bin_malloc_hard(tsdn_t *tsdn, arena_t *arena, arena_bin_t *bin, szind_t binind) { const arena_bin_info_t *bin_info; extent_t *slab; bin_info = &arena_bin_info[binind]; if (!arena_is_auto(arena) && bin->slabcur != NULL) { arena_bin_slabs_full_insert(arena, bin, bin->slabcur); bin->slabcur = NULL; } slab = arena_bin_nonfull_slab_get(tsdn, arena, bin, binind); if (bin->slabcur != NULL) { /* * Another thread updated slabcur while this one ran without the * bin lock in arena_bin_nonfull_slab_get(). */ if (extent_nfree_get(bin->slabcur) > 0) { void *ret = arena_slab_reg_alloc(tsdn, bin->slabcur, bin_info); if (slab != NULL) { /* * arena_slab_alloc() may have allocated slab, * or it may have been pulled from * slabs_nonfull. Therefore it is unsafe to * make any assumptions about how slab has * previously been used, and * arena_bin_lower_slab() must be called, as if * a region were just deallocated from the slab. */ if (extent_nfree_get(slab) == bin_info->nregs) { arena_dalloc_bin_slab(tsdn, arena, slab, bin); } else { arena_bin_lower_slab(tsdn, arena, slab, bin); } } return ret; } arena_bin_slabs_full_insert(arena, bin, bin->slabcur); bin->slabcur = NULL; } if (slab == NULL) { return NULL; } bin->slabcur = slab; assert(extent_nfree_get(bin->slabcur) > 0); return arena_slab_reg_alloc(tsdn, slab, bin_info); } void arena_tcache_fill_small(tsdn_t *tsdn, arena_t *arena, tcache_t *tcache, tcache_bin_t *tbin, szind_t binind, uint64_t prof_accumbytes) { unsigned i, nfill; arena_bin_t *bin; assert(tbin->ncached == 0); if (config_prof && arena_prof_accum(tsdn, arena, prof_accumbytes)) { prof_idump(tsdn); } bin = &arena->bins[binind]; malloc_mutex_lock(tsdn, &bin->lock); for (i = 0, nfill = (tcache_bin_info[binind].ncached_max >> tcache->lg_fill_div[binind]); i < nfill; i++) { extent_t *slab; void *ptr; if ((slab = bin->slabcur) != NULL && extent_nfree_get(slab) > 0) { ptr = arena_slab_reg_alloc(tsdn, slab, &arena_bin_info[binind]); } else { ptr = arena_bin_malloc_hard(tsdn, arena, bin, binind); } if (ptr == NULL) { /* * OOM. tbin->avail isn't yet filled down to its first * element, so the successful allocations (if any) must * be moved just before tbin->avail before bailing out. */ if (i > 0) { memmove(tbin->avail - i, tbin->avail - nfill, i * sizeof(void *)); } break; } if (config_fill && unlikely(opt_junk_alloc)) { arena_alloc_junk_small(ptr, &arena_bin_info[binind], true); } /* Insert such that low regions get used first. 
*/ *(tbin->avail - nfill + i) = ptr; } if (config_stats) { bin->stats.nmalloc += i; bin->stats.nrequests += tbin->tstats.nrequests; bin->stats.curregs += i; bin->stats.nfills++; tbin->tstats.nrequests = 0; } malloc_mutex_unlock(tsdn, &bin->lock); tbin->ncached = i; arena_decay_tick(tsdn, arena); } void arena_alloc_junk_small(void *ptr, const arena_bin_info_t *bin_info, bool zero) { if (!zero) { memset(ptr, JEMALLOC_ALLOC_JUNK, bin_info->reg_size); } } static void arena_dalloc_junk_small_impl(void *ptr, const arena_bin_info_t *bin_info) { memset(ptr, JEMALLOC_FREE_JUNK, bin_info->reg_size); } arena_dalloc_junk_small_t *JET_MUTABLE arena_dalloc_junk_small = arena_dalloc_junk_small_impl; static void * arena_malloc_small(tsdn_t *tsdn, arena_t *arena, szind_t binind, bool zero) { void *ret; arena_bin_t *bin; size_t usize; extent_t *slab; assert(binind < NBINS); bin = &arena->bins[binind]; usize = sz_index2size(binind); malloc_mutex_lock(tsdn, &bin->lock); if ((slab = bin->slabcur) != NULL && extent_nfree_get(slab) > 0) { ret = arena_slab_reg_alloc(tsdn, slab, &arena_bin_info[binind]); } else { ret = arena_bin_malloc_hard(tsdn, arena, bin, binind); } if (ret == NULL) { malloc_mutex_unlock(tsdn, &bin->lock); return NULL; } if (config_stats) { bin->stats.nmalloc++; bin->stats.nrequests++; bin->stats.curregs++; } malloc_mutex_unlock(tsdn, &bin->lock); if (config_prof && arena_prof_accum(tsdn, arena, usize)) { prof_idump(tsdn); } if (!zero) { if (config_fill) { if (unlikely(opt_junk_alloc)) { arena_alloc_junk_small(ret, &arena_bin_info[binind], false); } else if (unlikely(opt_zero)) { memset(ret, 0, usize); } } } else { if (config_fill && unlikely(opt_junk_alloc)) { arena_alloc_junk_small(ret, &arena_bin_info[binind], true); } memset(ret, 0, usize); } arena_decay_tick(tsdn, arena); return ret; } void * arena_malloc_hard(tsdn_t *tsdn, arena_t *arena, size_t size, szind_t ind, bool zero) { assert(!tsdn_null(tsdn) || arena != NULL); if (likely(!tsdn_null(tsdn))) { arena = arena_choose(tsdn_tsd(tsdn), arena); } if (unlikely(arena == NULL)) { return NULL; } if (likely(size <= SMALL_MAXCLASS)) { return arena_malloc_small(tsdn, arena, ind, zero); } return large_malloc(tsdn, arena, sz_index2size(ind), zero); } void * arena_palloc(tsdn_t *tsdn, arena_t *arena, size_t usize, size_t alignment, bool zero, tcache_t *tcache) { void *ret; if (usize <= SMALL_MAXCLASS && (alignment < PAGE || (alignment == PAGE && (usize & PAGE_MASK) == 0))) { /* Small; alignment doesn't require special slab placement. 
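/*
 * Editorial sketch (not part of this patch): how a caller reaches the
 * branches of arena_palloc() nearby. Which path is taken is internal to
 * jemalloc; sizes and alignments here are arbitrary examples.
 */
#include <jemalloc/jemalloc.h>
#include <stddef.h>

static void
sketch_aligned_requests(void) {
	/* Small size, no special alignment: slab-backed small class. */
	void *a = mallocx(64, 0);
	/* Page alignment with a page-multiple size: still the small path. */
	void *b = mallocx(4096, MALLOCX_ALIGN(4096));
	/* Alignment above CACHELINE takes the large_palloc() branch. */
	void *c = mallocx(100, MALLOCX_ALIGN(8192));
	if (a != NULL) dallocx(a, 0);
	if (b != NULL) dallocx(b, 0);
	if (c != NULL) dallocx(c, 0);
}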
*/ ret = arena_malloc(tsdn, arena, usize, sz_size2index(usize), zero, tcache, true); } else { if (likely(alignment <= CACHELINE)) { ret = large_malloc(tsdn, arena, usize, zero); } else { ret = large_palloc(tsdn, arena, usize, alignment, zero); } } return ret; } void arena_prof_promote(tsdn_t *tsdn, const void *ptr, size_t usize) { cassert(config_prof); assert(ptr != NULL); assert(isalloc(tsdn, ptr) == LARGE_MINCLASS); assert(usize <= SMALL_MAXCLASS); rtree_ctx_t rtree_ctx_fallback; rtree_ctx_t *rtree_ctx = tsdn_rtree_ctx(tsdn, &rtree_ctx_fallback); extent_t *extent = rtree_extent_read(tsdn, &extents_rtree, rtree_ctx, (uintptr_t)ptr, true); arena_t *arena = extent_arena_get(extent); szind_t szind = sz_size2index(usize); extent_szind_set(extent, szind); rtree_szind_slab_update(tsdn, &extents_rtree, rtree_ctx, (uintptr_t)ptr, szind, false); prof_accum_cancel(tsdn, &arena->prof_accum, usize); assert(isalloc(tsdn, ptr) == usize); } static size_t arena_prof_demote(tsdn_t *tsdn, extent_t *extent, const void *ptr) { cassert(config_prof); assert(ptr != NULL); extent_szind_set(extent, NBINS); rtree_ctx_t rtree_ctx_fallback; rtree_ctx_t *rtree_ctx = tsdn_rtree_ctx(tsdn, &rtree_ctx_fallback); rtree_szind_slab_update(tsdn, &extents_rtree, rtree_ctx, (uintptr_t)ptr, NBINS, false); assert(isalloc(tsdn, ptr) == LARGE_MINCLASS); return LARGE_MINCLASS; } void arena_dalloc_promoted(tsdn_t *tsdn, void *ptr, tcache_t *tcache, bool slow_path) { cassert(config_prof); assert(opt_prof); extent_t *extent = iealloc(tsdn, ptr); size_t usize = arena_prof_demote(tsdn, extent, ptr); if (usize <= tcache_maxclass) { tcache_dalloc_large(tsdn_tsd(tsdn), tcache, ptr, sz_size2index(usize), slow_path); } else { large_dalloc(tsdn, extent); } } static void arena_dissociate_bin_slab(arena_t *arena, extent_t *slab, arena_bin_t *bin) { /* Dissociate slab from bin. */ if (slab == bin->slabcur) { bin->slabcur = NULL; } else { szind_t binind = extent_szind_get(slab); const arena_bin_info_t *bin_info = &arena_bin_info[binind]; /* * The following block's conditional is necessary because if the * slab only contains one region, then it never gets inserted * into the non-full slabs heap. */ if (bin_info->nregs == 1) { arena_bin_slabs_full_remove(arena, bin, slab); } else { arena_bin_slabs_nonfull_remove(bin, slab); } } } static void arena_dalloc_bin_slab(tsdn_t *tsdn, arena_t *arena, extent_t *slab, arena_bin_t *bin) { assert(slab != bin->slabcur); malloc_mutex_unlock(tsdn, &bin->lock); /******************************/ arena_slab_dalloc(tsdn, arena, slab); /****************************/ malloc_mutex_lock(tsdn, &bin->lock); if (config_stats) { bin->stats.curslabs--; } } static void arena_bin_lower_slab(tsdn_t *tsdn, arena_t *arena, extent_t *slab, arena_bin_t *bin) { assert(extent_nfree_get(slab) > 0); /* * Make sure that if bin->slabcur is non-NULL, it refers to the * oldest/lowest non-full slab. It is okay to NULL slabcur out rather * than proactively keeping it pointing at the oldest/lowest non-full * slab. */ if (bin->slabcur != NULL && extent_snad_comp(bin->slabcur, slab) > 0) { /* Switch slabcur. 
*/ if (extent_nfree_get(bin->slabcur) > 0) { arena_bin_slabs_nonfull_insert(bin, bin->slabcur); } else { arena_bin_slabs_full_insert(arena, bin, bin->slabcur); } bin->slabcur = slab; if (config_stats) { bin->stats.reslabs++; } } else { arena_bin_slabs_nonfull_insert(bin, slab); } } static void arena_dalloc_bin_locked_impl(tsdn_t *tsdn, arena_t *arena, extent_t *slab, void *ptr, bool junked) { arena_slab_data_t *slab_data = extent_slab_data_get(slab); szind_t binind = extent_szind_get(slab); arena_bin_t *bin = &arena->bins[binind]; const arena_bin_info_t *bin_info = &arena_bin_info[binind]; if (!junked && config_fill && unlikely(opt_junk_free)) { arena_dalloc_junk_small(ptr, bin_info); } arena_slab_reg_dalloc(tsdn, slab, slab_data, ptr); unsigned nfree = extent_nfree_get(slab); if (nfree == bin_info->nregs) { arena_dissociate_bin_slab(arena, slab, bin); arena_dalloc_bin_slab(tsdn, arena, slab, bin); } else if (nfree == 1 && slab != bin->slabcur) { arena_bin_slabs_full_remove(arena, bin, slab); arena_bin_lower_slab(tsdn, arena, slab, bin); } if (config_stats) { bin->stats.ndalloc++; bin->stats.curregs--; } } void arena_dalloc_bin_junked_locked(tsdn_t *tsdn, arena_t *arena, extent_t *extent, void *ptr) { arena_dalloc_bin_locked_impl(tsdn, arena, extent, ptr, true); } static void arena_dalloc_bin(tsdn_t *tsdn, arena_t *arena, extent_t *extent, void *ptr) { szind_t binind = extent_szind_get(extent); arena_bin_t *bin = &arena->bins[binind]; malloc_mutex_lock(tsdn, &bin->lock); arena_dalloc_bin_locked_impl(tsdn, arena, extent, ptr, false); malloc_mutex_unlock(tsdn, &bin->lock); } void arena_dalloc_small(tsdn_t *tsdn, void *ptr) { extent_t *extent = iealloc(tsdn, ptr); arena_t *arena = extent_arena_get(extent); arena_dalloc_bin(tsdn, arena, extent, ptr); arena_decay_tick(tsdn, arena); } bool arena_ralloc_no_move(tsdn_t *tsdn, void *ptr, size_t oldsize, size_t size, size_t extra, bool zero) { /* Calls with non-zero extra had to clamp extra. */ assert(extra == 0 || size + extra <= LARGE_MAXCLASS); if (unlikely(size > LARGE_MAXCLASS)) { return true; } extent_t *extent = iealloc(tsdn, ptr); size_t usize_min = sz_s2u(size); size_t usize_max = sz_s2u(size + extra); if (likely(oldsize <= SMALL_MAXCLASS && usize_min <= SMALL_MAXCLASS)) { /* * Avoid moving the allocation if the size class can be left the * same. */ assert(arena_bin_info[sz_size2index(oldsize)].reg_size == oldsize); if ((usize_max > SMALL_MAXCLASS || sz_size2index(usize_max) != sz_size2index(oldsize)) && (size > oldsize || usize_max < oldsize)) { return true; } arena_decay_tick(tsdn, extent_arena_get(extent)); return false; } else if (oldsize >= LARGE_MINCLASS && usize_max >= LARGE_MINCLASS) { return large_ralloc_no_move(tsdn, extent, usize_min, usize_max, zero); } return true; } static void * arena_ralloc_move_helper(tsdn_t *tsdn, arena_t *arena, size_t usize, size_t alignment, bool zero, tcache_t *tcache) { if (alignment == 0) { return arena_malloc(tsdn, arena, usize, sz_size2index(usize), zero, tcache, true); } usize = sz_sa2u(usize, alignment); if (unlikely(usize == 0 || usize > LARGE_MAXCLASS)) { return NULL; } return ipalloct(tsdn, usize, alignment, zero, tcache, arena); } void * arena_ralloc(tsdn_t *tsdn, arena_t *arena, void *ptr, size_t oldsize, size_t size, size_t alignment, bool zero, tcache_t *tcache) { size_t usize = sz_s2u(size); if (unlikely(usize == 0 || size > LARGE_MAXCLASS)) { return NULL; } if (likely(usize <= SMALL_MAXCLASS)) { /* Try to avoid moving the allocation. 
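/*
 * Editorial sketch (not part of this patch): arena_ralloc_no_move() above
 * is what backs in-place resize attempts such as xallocx(). Sizes are
 * arbitrary; whether the resize stays in place depends on the size classes
 * involved (40 and 44 bytes round to the same small class on typical
 * configurations).
 */
#include <jemalloc/jemalloc.h>
#include <stddef.h>

static void
sketch_resize_in_place(void) {
	void *p = mallocx(40, 0);
	if (p == NULL) {
		return;
	}
	/* Returns the usable size actually provided; p is never moved. */
	size_t usize = xallocx(p, 44, 0, 0);
	(void)usize;
	dallocx(p, 0);
}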
*/ if (!arena_ralloc_no_move(tsdn, ptr, oldsize, usize, 0, zero)) { return ptr; } } if (oldsize >= LARGE_MINCLASS && usize >= LARGE_MINCLASS) { return large_ralloc(tsdn, arena, iealloc(tsdn, ptr), usize, alignment, zero, tcache); } /* * size and oldsize are different enough that we need to move the * object. In that case, fall back to allocating new space and copying. */ void *ret = arena_ralloc_move_helper(tsdn, arena, usize, alignment, zero, tcache); if (ret == NULL) { return NULL; } /* * Junk/zero-filling were already done by * ipalloc()/arena_malloc(). */ size_t copysize = (usize < oldsize) ? usize : oldsize; memcpy(ret, ptr, copysize); isdalloct(tsdn, ptr, oldsize, tcache, NULL, true); return ret; } dss_prec_t arena_dss_prec_get(arena_t *arena) { return (dss_prec_t)atomic_load_u(&arena->dss_prec, ATOMIC_ACQUIRE); } bool arena_dss_prec_set(arena_t *arena, dss_prec_t dss_prec) { if (!have_dss) { return (dss_prec != dss_prec_disabled); } atomic_store_u(&arena->dss_prec, (unsigned)dss_prec, ATOMIC_RELEASE); return false; } ssize_t arena_dirty_decay_ms_default_get(void) { return atomic_load_zd(&dirty_decay_ms_default, ATOMIC_RELAXED); } bool arena_dirty_decay_ms_default_set(ssize_t decay_ms) { if (!arena_decay_ms_valid(decay_ms)) { return true; } atomic_store_zd(&dirty_decay_ms_default, decay_ms, ATOMIC_RELAXED); return false; } ssize_t arena_muzzy_decay_ms_default_get(void) { return atomic_load_zd(&muzzy_decay_ms_default, ATOMIC_RELAXED); } bool arena_muzzy_decay_ms_default_set(ssize_t decay_ms) { if (!arena_decay_ms_valid(decay_ms)) { return true; } atomic_store_zd(&muzzy_decay_ms_default, decay_ms, ATOMIC_RELAXED); return false; } unsigned arena_nthreads_get(arena_t *arena, bool internal) { return atomic_load_u(&arena->nthreads[internal], ATOMIC_RELAXED); } void arena_nthreads_inc(arena_t *arena, bool internal) { atomic_fetch_add_u(&arena->nthreads[internal], 1, ATOMIC_RELAXED); } void arena_nthreads_dec(arena_t *arena, bool internal) { atomic_fetch_sub_u(&arena->nthreads[internal], 1, ATOMIC_RELAXED); } size_t arena_extent_sn_next(arena_t *arena) { return atomic_fetch_add_zu(&arena->extent_sn_next, 1, ATOMIC_RELAXED); } arena_t * arena_new(tsdn_t *tsdn, unsigned ind, extent_hooks_t *extent_hooks) { arena_t *arena; base_t *base; unsigned i; if (ind == 0) { base = b0get(); } else { base = base_new(tsdn, ind, extent_hooks); if (base == NULL) { return NULL; } } arena = (arena_t *)base_alloc(tsdn, base, sizeof(arena_t), CACHELINE); if (arena == NULL) { goto label_error; } atomic_store_u(&arena->nthreads[0], 0, ATOMIC_RELAXED); atomic_store_u(&arena->nthreads[1], 0, ATOMIC_RELAXED); arena->last_thd = NULL; if (config_stats) { if (arena_stats_init(tsdn, &arena->stats)) { goto label_error; } ql_new(&arena->tcache_ql); if (malloc_mutex_init(&arena->tcache_ql_mtx, "tcache_ql", WITNESS_RANK_TCACHE_QL, malloc_mutex_rank_exclusive)) { goto label_error; } } if (config_prof) { if (prof_accum_init(tsdn, &arena->prof_accum)) { goto label_error; } } if (config_cache_oblivious) { /* * A nondeterministic seed based on the address of arena reduces * the likelihood of lockstep non-uniform cache index * utilization among identical concurrent processes, but at the * cost of test repeatability. For debug builds, instead use a * deterministic seed. */ atomic_store_zu(&arena->offset_state, config_debug ? 
ind : (size_t)(uintptr_t)arena, ATOMIC_RELAXED); } atomic_store_zu(&arena->extent_sn_next, 0, ATOMIC_RELAXED); atomic_store_u(&arena->dss_prec, (unsigned)extent_dss_prec_get(), ATOMIC_RELAXED); atomic_store_zu(&arena->nactive, 0, ATOMIC_RELAXED); extent_list_init(&arena->large); if (malloc_mutex_init(&arena->large_mtx, "arena_large", WITNESS_RANK_ARENA_LARGE, malloc_mutex_rank_exclusive)) { goto label_error; } /* * Delay coalescing for dirty extents despite the disruptive effect on * memory layout for best-fit extent allocation, since cached extents * are likely to be reused soon after deallocation, and the cost of * merging/splitting extents is non-trivial. */ if (extents_init(tsdn, &arena->extents_dirty, extent_state_dirty, true)) { goto label_error; } /* * Coalesce muzzy extents immediately, because operations on them are in * the critical path much less often than for dirty extents. */ if (extents_init(tsdn, &arena->extents_muzzy, extent_state_muzzy, false)) { goto label_error; } /* * Coalesce retained extents immediately, in part because they will * never be evicted (and therefore there's no opportunity for delayed * coalescing), but also because operations on retained extents are not * in the critical path. */ if (extents_init(tsdn, &arena->extents_retained, extent_state_retained, false)) { goto label_error; } if (arena_decay_init(&arena->decay_dirty, &arena->extents_dirty, arena_dirty_decay_ms_default_get(), &arena->stats.decay_dirty)) { goto label_error; } if (arena_decay_init(&arena->decay_muzzy, &arena->extents_muzzy, arena_muzzy_decay_ms_default_get(), &arena->stats.decay_muzzy)) { goto label_error; } arena->extent_grow_next = sz_psz2ind(HUGEPAGE); if (malloc_mutex_init(&arena->extent_grow_mtx, "extent_grow", WITNESS_RANK_EXTENT_GROW, malloc_mutex_rank_exclusive)) { goto label_error; } extent_avail_new(&arena->extent_avail); if (malloc_mutex_init(&arena->extent_avail_mtx, "extent_avail", WITNESS_RANK_EXTENT_AVAIL, malloc_mutex_rank_exclusive)) { goto label_error; } /* Initialize bins. */ for (i = 0; i < NBINS; i++) { arena_bin_t *bin = &arena->bins[i]; if (malloc_mutex_init(&bin->lock, "arena_bin", WITNESS_RANK_ARENA_BIN, malloc_mutex_rank_exclusive)) { goto label_error; } bin->slabcur = NULL; extent_heap_new(&bin->slabs_nonfull); extent_list_init(&bin->slabs_full); if (config_stats) { memset(&bin->stats, 0, sizeof(malloc_bin_stats_t)); } } arena->base = base; /* Set arena before creating background threads. */ arena_set(ind, arena); nstime_init(&arena->create_time, 0); nstime_update(&arena->create_time); /* We don't support reentrancy for arena 0 bootstrapping. */ if (ind != 0) { /* * If we're here, then arena 0 already exists, so bootstrapping * is done enough that we should have tsd. 
*/ assert(!tsdn_null(tsdn)); - pre_reentrancy(tsdn_tsd(tsdn)); + pre_reentrancy(tsdn_tsd(tsdn), arena); if (hooks_arena_new_hook) { hooks_arena_new_hook(); } post_reentrancy(tsdn_tsd(tsdn)); } return arena; label_error: if (ind != 0) { - base_delete(base); + base_delete(tsdn, base); } return NULL; } void arena_boot(void) { arena_dirty_decay_ms_default_set(opt_dirty_decay_ms); arena_muzzy_decay_ms_default_set(opt_muzzy_decay_ms); } void arena_prefork0(tsdn_t *tsdn, arena_t *arena) { malloc_mutex_prefork(tsdn, &arena->decay_dirty.mtx); malloc_mutex_prefork(tsdn, &arena->decay_muzzy.mtx); } void arena_prefork1(tsdn_t *tsdn, arena_t *arena) { if (config_stats) { malloc_mutex_prefork(tsdn, &arena->tcache_ql_mtx); } } void arena_prefork2(tsdn_t *tsdn, arena_t *arena) { + malloc_mutex_prefork(tsdn, &arena->extent_grow_mtx); +} + +void +arena_prefork3(tsdn_t *tsdn, arena_t *arena) { extents_prefork(tsdn, &arena->extents_dirty); extents_prefork(tsdn, &arena->extents_muzzy); extents_prefork(tsdn, &arena->extents_retained); } void -arena_prefork3(tsdn_t *tsdn, arena_t *arena) { +arena_prefork4(tsdn_t *tsdn, arena_t *arena) { malloc_mutex_prefork(tsdn, &arena->extent_avail_mtx); } void -arena_prefork4(tsdn_t *tsdn, arena_t *arena) { +arena_prefork5(tsdn_t *tsdn, arena_t *arena) { base_prefork(tsdn, arena->base); } void -arena_prefork5(tsdn_t *tsdn, arena_t *arena) { +arena_prefork6(tsdn_t *tsdn, arena_t *arena) { malloc_mutex_prefork(tsdn, &arena->large_mtx); } void -arena_prefork6(tsdn_t *tsdn, arena_t *arena) { +arena_prefork7(tsdn_t *tsdn, arena_t *arena) { for (unsigned i = 0; i < NBINS; i++) { malloc_mutex_prefork(tsdn, &arena->bins[i].lock); } } void arena_postfork_parent(tsdn_t *tsdn, arena_t *arena) { unsigned i; for (i = 0; i < NBINS; i++) { malloc_mutex_postfork_parent(tsdn, &arena->bins[i].lock); } malloc_mutex_postfork_parent(tsdn, &arena->large_mtx); base_postfork_parent(tsdn, arena->base); malloc_mutex_postfork_parent(tsdn, &arena->extent_avail_mtx); extents_postfork_parent(tsdn, &arena->extents_dirty); extents_postfork_parent(tsdn, &arena->extents_muzzy); extents_postfork_parent(tsdn, &arena->extents_retained); + malloc_mutex_postfork_parent(tsdn, &arena->extent_grow_mtx); malloc_mutex_postfork_parent(tsdn, &arena->decay_dirty.mtx); malloc_mutex_postfork_parent(tsdn, &arena->decay_muzzy.mtx); if (config_stats) { malloc_mutex_postfork_parent(tsdn, &arena->tcache_ql_mtx); } } void arena_postfork_child(tsdn_t *tsdn, arena_t *arena) { unsigned i; + atomic_store_u(&arena->nthreads[0], 0, ATOMIC_RELAXED); + atomic_store_u(&arena->nthreads[1], 0, ATOMIC_RELAXED); + if (tsd_arena_get(tsdn_tsd(tsdn)) == arena) { + arena_nthreads_inc(arena, false); + } + if (tsd_iarena_get(tsdn_tsd(tsdn)) == arena) { + arena_nthreads_inc(arena, true); + } + if (config_stats) { + ql_new(&arena->tcache_ql); + tcache_t *tcache = tcache_get(tsdn_tsd(tsdn)); + if (tcache != NULL && tcache->arena == arena) { + ql_elm_new(tcache, link); + ql_tail_insert(&arena->tcache_ql, tcache, link); + } + } + for (i = 0; i < NBINS; i++) { malloc_mutex_postfork_child(tsdn, &arena->bins[i].lock); } malloc_mutex_postfork_child(tsdn, &arena->large_mtx); base_postfork_child(tsdn, arena->base); malloc_mutex_postfork_child(tsdn, &arena->extent_avail_mtx); extents_postfork_child(tsdn, &arena->extents_dirty); extents_postfork_child(tsdn, &arena->extents_muzzy); extents_postfork_child(tsdn, &arena->extents_retained); + malloc_mutex_postfork_child(tsdn, &arena->extent_grow_mtx); malloc_mutex_postfork_child(tsdn, &arena->decay_dirty.mtx); 
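/*
 * Editorial sketch (not part of this patch): the generic pthread_atfork()
 * shape that the arena_prefork*()/arena_postfork_*() handlers above plug
 * into. The single mutex is a stand-in for jemalloc's ranked mutexes,
 * including the extent_grow_mtx added to these handlers in this revision.
 */
#include <pthread.h>

static pthread_mutex_t sketch_mtx = PTHREAD_MUTEX_INITIALIZER;

static void
sketch_prefork(void) {
	/* Hold the lock across fork so the child never sees it mid-update. */
	pthread_mutex_lock(&sketch_mtx);
}

static void
sketch_postfork_parent(void) {
	pthread_mutex_unlock(&sketch_mtx);
}

static void
sketch_postfork_child(void) {
	/*
	 * Common post-fork idiom: re-initialize instead of unlocking, since
	 * only the forking thread exists in the child.
	 */
	pthread_mutex_init(&sketch_mtx, NULL);
}

static void
sketch_install_fork_handlers(void) {
	pthread_atfork(sketch_prefork, sketch_postfork_parent,
	    sketch_postfork_child);
}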
malloc_mutex_postfork_child(tsdn, &arena->decay_muzzy.mtx); if (config_stats) { malloc_mutex_postfork_child(tsdn, &arena->tcache_ql_mtx); } } Index: head/contrib/jemalloc/src/background_thread.c =================================================================== --- head/contrib/jemalloc/src/background_thread.c (revision 320622) +++ head/contrib/jemalloc/src/background_thread.c (revision 320623) @@ -1,846 +1,880 @@ #define JEMALLOC_BACKGROUND_THREAD_C_ #include "jemalloc/internal/jemalloc_preamble.h" #include "jemalloc/internal/jemalloc_internal_includes.h" #include "jemalloc/internal/assert.h" /******************************************************************************/ /* Data. */ /* This option should be opt-in only. */ #define BACKGROUND_THREAD_DEFAULT false /* Read-only after initialization. */ bool opt_background_thread = BACKGROUND_THREAD_DEFAULT; /* Used for thread creation, termination and stats. */ malloc_mutex_t background_thread_lock; /* Indicates global state. Atomic because decay reads this w/o locking. */ atomic_b_t background_thread_enabled_state; size_t n_background_threads; /* Thread info per-index. */ background_thread_info_t *background_thread_info; /* False if no necessary runtime support. */ bool can_enable_background_thread; /******************************************************************************/ #ifdef JEMALLOC_PTHREAD_CREATE_WRAPPER #include static int (*pthread_create_fptr)(pthread_t *__restrict, const pthread_attr_t *, void *(*)(void *), void *__restrict); static pthread_once_t once_control = PTHREAD_ONCE_INIT; static void pthread_create_wrapper_once(void) { #ifdef JEMALLOC_LAZY_LOCK isthreaded = true; #endif } int pthread_create_wrapper(pthread_t *__restrict thread, const pthread_attr_t *attr, void *(*start_routine)(void *), void *__restrict arg) { pthread_once(&once_control, pthread_create_wrapper_once); return pthread_create_fptr(thread, attr, start_routine, arg); } #endif /* JEMALLOC_PTHREAD_CREATE_WRAPPER */ #ifndef JEMALLOC_BACKGROUND_THREAD #define NOT_REACHED { not_reached(); } bool background_thread_create(tsd_t *tsd, unsigned arena_ind) NOT_REACHED bool background_threads_enable(tsd_t *tsd) NOT_REACHED bool background_threads_disable(tsd_t *tsd) NOT_REACHED void background_thread_interval_check(tsdn_t *tsdn, arena_t *arena, arena_decay_t *decay, size_t npages_new) NOT_REACHED void background_thread_prefork0(tsdn_t *tsdn) NOT_REACHED void background_thread_prefork1(tsdn_t *tsdn) NOT_REACHED void background_thread_postfork_parent(tsdn_t *tsdn) NOT_REACHED void background_thread_postfork_child(tsdn_t *tsdn) NOT_REACHED bool background_thread_stats_read(tsdn_t *tsdn, background_thread_stats_t *stats) NOT_REACHED void background_thread_ctl_init(tsdn_t *tsdn) NOT_REACHED #undef NOT_REACHED #else static bool background_thread_enabled_at_fork; static void background_thread_info_init(tsdn_t *tsdn, background_thread_info_t *info) { background_thread_wakeup_time_set(tsdn, info, 0); info->npages_to_purge_new = 0; if (config_stats) { info->tot_n_runs = 0; nstime_init(&info->tot_sleep_time, 0); } } static inline bool set_current_thread_affinity(UNUSED int cpu) { #if defined(JEMALLOC_HAVE_SCHED_SETAFFINITY) cpu_set_t cpuset; CPU_ZERO(&cpuset); CPU_SET(cpu, &cpuset); int ret = sched_setaffinity(0, sizeof(cpu_set_t), &cpuset); return (ret != 0); #else return false; #endif } /* Threshold for determining when to wake up the background thread. 
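/*
 * Editorial sketch (not part of this patch): turning this file's machinery
 * on from application code via the "background_thread" mallctl named in the
 * 5.0 release notes; it can also be enabled at startup with
 * opt.background_thread. A nonzero return indicates failure, e.g. when the
 * runtime support tracked by can_enable_background_thread is missing.
 */
#include <jemalloc/jemalloc.h>
#include <stdbool.h>
#include <stddef.h>

static int
sketch_enable_background_threads(void) {
	bool enable = true;
	return mallctl("background_thread", NULL, NULL, &enable,
	    sizeof(enable));
}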
*/ #define BACKGROUND_THREAD_NPAGES_THRESHOLD UINT64_C(1024) #define BILLION UINT64_C(1000000000) /* Minimal sleep interval 100 ms. */ #define BACKGROUND_THREAD_MIN_INTERVAL_NS (BILLION / 10) static inline size_t decay_npurge_after_interval(arena_decay_t *decay, size_t interval) { size_t i; uint64_t sum = 0; for (i = 0; i < interval; i++) { sum += decay->backlog[i] * h_steps[i]; } for (; i < SMOOTHSTEP_NSTEPS; i++) { sum += decay->backlog[i] * (h_steps[i] - h_steps[i - interval]); } return (size_t)(sum >> SMOOTHSTEP_BFP); } static uint64_t arena_decay_compute_purge_interval_impl(tsdn_t *tsdn, arena_decay_t *decay, extents_t *extents) { if (malloc_mutex_trylock(tsdn, &decay->mtx)) { /* Use minimal interval if decay is contended. */ return BACKGROUND_THREAD_MIN_INTERVAL_NS; } uint64_t interval; ssize_t decay_time = atomic_load_zd(&decay->time_ms, ATOMIC_RELAXED); if (decay_time <= 0) { /* Purging is eagerly done or disabled currently. */ interval = BACKGROUND_THREAD_INDEFINITE_SLEEP; goto label_done; } uint64_t decay_interval_ns = nstime_ns(&decay->interval); assert(decay_interval_ns > 0); size_t npages = extents_npages_get(extents); if (npages == 0) { unsigned i; for (i = 0; i < SMOOTHSTEP_NSTEPS; i++) { if (decay->backlog[i] > 0) { break; } } if (i == SMOOTHSTEP_NSTEPS) { /* No dirty pages recorded. Sleep indefinitely. */ interval = BACKGROUND_THREAD_INDEFINITE_SLEEP; goto label_done; } } if (npages <= BACKGROUND_THREAD_NPAGES_THRESHOLD) { /* Use max interval. */ interval = decay_interval_ns * SMOOTHSTEP_NSTEPS; goto label_done; } size_t lb = BACKGROUND_THREAD_MIN_INTERVAL_NS / decay_interval_ns; size_t ub = SMOOTHSTEP_NSTEPS; /* Minimal 2 intervals to ensure reaching next epoch deadline. */ lb = (lb < 2) ? 2 : lb; if ((decay_interval_ns * ub <= BACKGROUND_THREAD_MIN_INTERVAL_NS) || (lb + 2 > ub)) { interval = BACKGROUND_THREAD_MIN_INTERVAL_NS; goto label_done; } assert(lb + 2 <= ub); size_t npurge_lb, npurge_ub; npurge_lb = decay_npurge_after_interval(decay, lb); if (npurge_lb > BACKGROUND_THREAD_NPAGES_THRESHOLD) { interval = decay_interval_ns * lb; goto label_done; } npurge_ub = decay_npurge_after_interval(decay, ub); if (npurge_ub < BACKGROUND_THREAD_NPAGES_THRESHOLD) { interval = decay_interval_ns * ub; goto label_done; } unsigned n_search = 0; size_t target, npurge; while ((npurge_lb + BACKGROUND_THREAD_NPAGES_THRESHOLD < npurge_ub) && (lb + 2 < ub)) { target = (lb + ub) / 2; npurge = decay_npurge_after_interval(decay, target); if (npurge > BACKGROUND_THREAD_NPAGES_THRESHOLD) { ub = target; npurge_ub = npurge; } else { lb = target; npurge_lb = npurge; } assert(n_search++ < lg_floor(SMOOTHSTEP_NSTEPS) + 1); } interval = decay_interval_ns * (ub + lb) / 2; label_done: interval = (interval < BACKGROUND_THREAD_MIN_INTERVAL_NS) ? BACKGROUND_THREAD_MIN_INTERVAL_NS : interval; malloc_mutex_unlock(tsdn, &decay->mtx); return interval; } /* Compute purge interval for background threads. */ static uint64_t arena_decay_compute_purge_interval(tsdn_t *tsdn, arena_t *arena) { uint64_t i1, i2; i1 = arena_decay_compute_purge_interval_impl(tsdn, &arena->decay_dirty, &arena->extents_dirty); if (i1 == BACKGROUND_THREAD_MIN_INTERVAL_NS) { return i1; } i2 = arena_decay_compute_purge_interval_impl(tsdn, &arena->decay_muzzy, &arena->extents_muzzy); return i1 < i2 ? i1 : i2; } static void background_thread_sleep(tsdn_t *tsdn, background_thread_info_t *info, uint64_t interval) { if (config_stats) { info->tot_n_runs++; } info->npages_to_purge_new = 0; struct timeval tv; /* Specific clock required by timedwait. 
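/*
 * Editorial sketch (not part of this patch): the absolute-deadline pattern
 * background_thread_sleep() follows, i.e. pthread_cond_timedwait() measured
 * against the same clock as gettimeofday(). The caller must hold mtx;
 * interval_ns is a parameter of the sketch, not a jemalloc symbol.
 */
#include <pthread.h>
#include <stdint.h>
#include <sys/time.h>
#include <time.h>

static int
sketch_sleep_with_deadline(pthread_cond_t *cond, pthread_mutex_t *mtx,
    uint64_t interval_ns) {
	struct timeval tv;
	gettimeofday(&tv, NULL);

	/* Convert "now + interval" into the absolute timespec timedwait needs. */
	uint64_t deadline_ns = (uint64_t)tv.tv_sec * 1000000000 +
	    (uint64_t)tv.tv_usec * 1000 + interval_ns;
	struct timespec ts;
	ts.tv_sec = (time_t)(deadline_ns / 1000000000);
	ts.tv_nsec = (long)(deadline_ns % 1000000000);

	/* 0 if signaled before the deadline, ETIMEDOUT once it passes. */
	return pthread_cond_timedwait(cond, mtx, &ts);
}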
*/ gettimeofday(&tv, NULL); nstime_t before_sleep; nstime_init2(&before_sleep, tv.tv_sec, tv.tv_usec * 1000); int ret; if (interval == BACKGROUND_THREAD_INDEFINITE_SLEEP) { assert(background_thread_indefinite_sleep(info)); ret = pthread_cond_wait(&info->cond, &info->mtx.lock); assert(ret == 0); } else { assert(interval >= BACKGROUND_THREAD_MIN_INTERVAL_NS && interval <= BACKGROUND_THREAD_INDEFINITE_SLEEP); /* We need malloc clock (can be different from tv). */ nstime_t next_wakeup; nstime_init(&next_wakeup, 0); nstime_update(&next_wakeup); nstime_iadd(&next_wakeup, interval); assert(nstime_ns(&next_wakeup) < BACKGROUND_THREAD_INDEFINITE_SLEEP); background_thread_wakeup_time_set(tsdn, info, nstime_ns(&next_wakeup)); nstime_t ts_wakeup; nstime_copy(&ts_wakeup, &before_sleep); nstime_iadd(&ts_wakeup, interval); struct timespec ts; ts.tv_sec = (size_t)nstime_sec(&ts_wakeup); ts.tv_nsec = (size_t)nstime_nsec(&ts_wakeup); assert(!background_thread_indefinite_sleep(info)); ret = pthread_cond_timedwait(&info->cond, &info->mtx.lock, &ts); assert(ret == ETIMEDOUT || ret == 0); background_thread_wakeup_time_set(tsdn, info, BACKGROUND_THREAD_INDEFINITE_SLEEP); } if (config_stats) { gettimeofday(&tv, NULL); nstime_t after_sleep; nstime_init2(&after_sleep, tv.tv_sec, tv.tv_usec * 1000); if (nstime_compare(&after_sleep, &before_sleep) > 0) { nstime_subtract(&after_sleep, &before_sleep); nstime_add(&info->tot_sleep_time, &after_sleep); } } } static bool background_thread_pause_check(tsdn_t *tsdn, background_thread_info_t *info) { if (unlikely(info->state == background_thread_paused)) { malloc_mutex_unlock(tsdn, &info->mtx); /* Wait on global lock to update status. */ malloc_mutex_lock(tsdn, &background_thread_lock); malloc_mutex_unlock(tsdn, &background_thread_lock); malloc_mutex_lock(tsdn, &info->mtx); return true; } return false; } static inline void background_work_sleep_once(tsdn_t *tsdn, background_thread_info_t *info, unsigned ind) { uint64_t min_interval = BACKGROUND_THREAD_INDEFINITE_SLEEP; unsigned narenas = narenas_total_get(); for (unsigned i = ind; i < narenas; i += ncpus) { arena_t *arena = arena_get(tsdn, i, false); if (!arena) { continue; } arena_decay(tsdn, arena, true, false); if (min_interval == BACKGROUND_THREAD_MIN_INTERVAL_NS) { /* Min interval will be used. 
*/ continue; } uint64_t interval = arena_decay_compute_purge_interval(tsdn, arena); assert(interval >= BACKGROUND_THREAD_MIN_INTERVAL_NS); if (min_interval > interval) { min_interval = interval; } } background_thread_sleep(tsdn, info, min_interval); } static bool background_threads_disable_single(tsd_t *tsd, background_thread_info_t *info) { if (info == &background_thread_info[0]) { malloc_mutex_assert_owner(tsd_tsdn(tsd), &background_thread_lock); } else { malloc_mutex_assert_not_owner(tsd_tsdn(tsd), &background_thread_lock); } - pre_reentrancy(tsd); + pre_reentrancy(tsd, NULL); malloc_mutex_lock(tsd_tsdn(tsd), &info->mtx); bool has_thread; assert(info->state != background_thread_paused); if (info->state == background_thread_started) { has_thread = true; info->state = background_thread_stopped; pthread_cond_signal(&info->cond); } else { has_thread = false; } malloc_mutex_unlock(tsd_tsdn(tsd), &info->mtx); if (!has_thread) { post_reentrancy(tsd); return false; } void *ret; if (pthread_join(info->thread, &ret)) { post_reentrancy(tsd); return true; } assert(ret == NULL); n_background_threads--; post_reentrancy(tsd); return false; } static void *background_thread_entry(void *ind_arg); +static int +background_thread_create_signals_masked(pthread_t *thread, + const pthread_attr_t *attr, void *(*start_routine)(void *), void *arg) { + /* + * Mask signals during thread creation so that the thread inherits + * an empty signal set. + */ + sigset_t set; + sigfillset(&set); + sigset_t oldset; + int mask_err = pthread_sigmask(SIG_SETMASK, &set, &oldset); + if (mask_err != 0) { + return mask_err; + } + int create_err = pthread_create_wrapper(thread, attr, start_routine, + arg); + /* + * Restore the signal mask. Failure to restore the signal mask here + * changes program behavior. + */ + int restore_err = pthread_sigmask(SIG_SETMASK, &oldset, NULL); + if (restore_err != 0) { + malloc_printf(": background thread creation " + "failed (%d), and signal mask restoration failed " + "(%d)\n", create_err, restore_err); + if (opt_abort) { + abort(); + } + } + return create_err; +} + static void check_background_thread_creation(tsd_t *tsd, unsigned *n_created, bool *created_threads) { if (likely(*n_created == n_background_threads)) { return; } malloc_mutex_unlock(tsd_tsdn(tsd), &background_thread_info[0].mtx); label_restart: malloc_mutex_lock(tsd_tsdn(tsd), &background_thread_lock); for (unsigned i = 1; i < ncpus; i++) { if (created_threads[i]) { continue; } background_thread_info_t *info = &background_thread_info[i]; malloc_mutex_lock(tsd_tsdn(tsd), &info->mtx); assert(info->state != background_thread_paused); bool create = (info->state == background_thread_started); malloc_mutex_unlock(tsd_tsdn(tsd), &info->mtx); if (!create) { continue; } /* * To avoid deadlock with prefork handlers (which waits for the * mutex held here), unlock before calling pthread_create(). */ malloc_mutex_unlock(tsd_tsdn(tsd), &background_thread_lock); - pre_reentrancy(tsd); - int err = pthread_create_wrapper(&info->thread, NULL, - background_thread_entry, (void *)(uintptr_t)i); + pre_reentrancy(tsd, NULL); + int err = background_thread_create_signals_masked(&info->thread, + NULL, background_thread_entry, (void *)(uintptr_t)i); post_reentrancy(tsd); if (err == 0) { (*n_created)++; created_threads[i] = true; } else { malloc_printf(": background thread " "creation failed (%d)\n", err); if (opt_abort) { abort(); } } /* Restart since we unlocked. 
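/*
 * Editorial sketch (not part of this patch): the wrapper pattern used by
 * background_thread_create_signals_masked() above, so the new thread starts
 * with all signals blocked and the creating thread's mask is restored
 * afterwards.
 */
#include <pthread.h>
#include <signal.h>

static int
sketch_create_thread_signals_masked(pthread_t *thd, void *(*fn)(void *),
    void *arg) {
	sigset_t set, oldset;
	sigfillset(&set);
	/* Block everything so the child inherits an all-blocked mask. */
	int err = pthread_sigmask(SIG_SETMASK, &set, &oldset);
	if (err != 0) {
		return err;
	}
	int create_err = pthread_create(thd, NULL, fn, arg);
	/* Always restore the creator's original mask. */
	pthread_sigmask(SIG_SETMASK, &oldset, NULL);
	return create_err;
}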
*/ goto label_restart; } malloc_mutex_lock(tsd_tsdn(tsd), &background_thread_info[0].mtx); malloc_mutex_unlock(tsd_tsdn(tsd), &background_thread_lock); } static void background_thread0_work(tsd_t *tsd) { /* Thread0 is also responsible for launching / terminating threads. */ VARIABLE_ARRAY(bool, created_threads, ncpus); unsigned i; for (i = 1; i < ncpus; i++) { created_threads[i] = false; } /* Start working, and create more threads when asked. */ unsigned n_created = 1; while (background_thread_info[0].state != background_thread_stopped) { if (background_thread_pause_check(tsd_tsdn(tsd), &background_thread_info[0])) { continue; } check_background_thread_creation(tsd, &n_created, (bool *)&created_threads); background_work_sleep_once(tsd_tsdn(tsd), &background_thread_info[0], 0); } /* * Shut down other threads at exit. Note that the ctl thread is holding * the global background_thread mutex (and is waiting) for us. */ assert(!background_thread_enabled()); for (i = 1; i < ncpus; i++) { background_thread_info_t *info = &background_thread_info[i]; assert(info->state != background_thread_paused); if (created_threads[i]) { background_threads_disable_single(tsd, info); } else { malloc_mutex_lock(tsd_tsdn(tsd), &info->mtx); /* Clear in case the thread wasn't created. */ info->state = background_thread_stopped; malloc_mutex_unlock(tsd_tsdn(tsd), &info->mtx); } } background_thread_info[0].state = background_thread_stopped; assert(n_background_threads == 1); } static void background_work(tsd_t *tsd, unsigned ind) { background_thread_info_t *info = &background_thread_info[ind]; malloc_mutex_lock(tsd_tsdn(tsd), &info->mtx); background_thread_wakeup_time_set(tsd_tsdn(tsd), info, BACKGROUND_THREAD_INDEFINITE_SLEEP); if (ind == 0) { background_thread0_work(tsd); } else { while (info->state != background_thread_stopped) { if (background_thread_pause_check(tsd_tsdn(tsd), info)) { continue; } background_work_sleep_once(tsd_tsdn(tsd), info, ind); } } assert(info->state == background_thread_stopped); background_thread_wakeup_time_set(tsd_tsdn(tsd), info, 0); malloc_mutex_unlock(tsd_tsdn(tsd), &info->mtx); } static void * background_thread_entry(void *ind_arg) { unsigned thread_ind = (unsigned)(uintptr_t)ind_arg; assert(thread_ind < ncpus); - +#ifdef JEMALLOC_HAVE_PTHREAD_SETNAME_NP + pthread_setname_np(pthread_self(), "jemalloc_bg_thd"); +#endif if (opt_percpu_arena != percpu_arena_disabled) { set_current_thread_affinity((int)thread_ind); } /* * Start periodic background work. We use internal tsd which avoids * side effects, for example triggering new arena creation (which in * turn triggers another background thread creation). */ background_work(tsd_internal_fetch(), thread_ind); assert(pthread_equal(pthread_self(), background_thread_info[thread_ind].thread)); return NULL; } static void background_thread_init(tsd_t *tsd, background_thread_info_t *info) { malloc_mutex_assert_owner(tsd_tsdn(tsd), &background_thread_lock); info->state = background_thread_started; background_thread_info_init(tsd_tsdn(tsd), info); n_background_threads++; } /* Create a new background thread if needed. */ bool background_thread_create(tsd_t *tsd, unsigned arena_ind) { assert(have_background_thread); malloc_mutex_assert_owner(tsd_tsdn(tsd), &background_thread_lock); /* We create at most NCPUs threads. 
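 * Arenas whose indices are congruent modulo ncpus share a single
 * background thread.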
*/ size_t thread_ind = arena_ind % ncpus; background_thread_info_t *info = &background_thread_info[thread_ind]; bool need_new_thread; malloc_mutex_lock(tsd_tsdn(tsd), &info->mtx); need_new_thread = background_thread_enabled() && (info->state == background_thread_stopped); if (need_new_thread) { background_thread_init(tsd, info); } malloc_mutex_unlock(tsd_tsdn(tsd), &info->mtx); if (!need_new_thread) { return false; } if (arena_ind != 0) { /* Threads are created asynchronously by Thread 0. */ background_thread_info_t *t0 = &background_thread_info[0]; malloc_mutex_lock(tsd_tsdn(tsd), &t0->mtx); assert(t0->state == background_thread_started); pthread_cond_signal(&t0->cond); malloc_mutex_unlock(tsd_tsdn(tsd), &t0->mtx); return false; } - pre_reentrancy(tsd); + pre_reentrancy(tsd, NULL); /* * To avoid complications (besides reentrancy), create internal * background threads with the underlying pthread_create. */ - int err = pthread_create_wrapper(&info->thread, NULL, + int err = background_thread_create_signals_masked(&info->thread, NULL, background_thread_entry, (void *)thread_ind); post_reentrancy(tsd); if (err != 0) { malloc_printf(": arena 0 background thread creation " "failed (%d)\n", err); malloc_mutex_lock(tsd_tsdn(tsd), &info->mtx); info->state = background_thread_stopped; n_background_threads--; malloc_mutex_unlock(tsd_tsdn(tsd), &info->mtx); return true; } return false; } bool background_threads_enable(tsd_t *tsd) { assert(n_background_threads == 0); assert(background_thread_enabled()); malloc_mutex_assert_owner(tsd_tsdn(tsd), &background_thread_lock); VARIABLE_ARRAY(bool, marked, ncpus); unsigned i, nmarked; for (i = 0; i < ncpus; i++) { marked[i] = false; } nmarked = 0; /* Mark the threads we need to create for thread 0. */ unsigned n = narenas_total_get(); for (i = 1; i < n; i++) { if (marked[i % ncpus] || arena_get(tsd_tsdn(tsd), i, false) == NULL) { continue; } background_thread_info_t *info = &background_thread_info[i]; malloc_mutex_lock(tsd_tsdn(tsd), &info->mtx); assert(info->state == background_thread_stopped); background_thread_init(tsd, info); malloc_mutex_unlock(tsd_tsdn(tsd), &info->mtx); marked[i % ncpus] = true; if (++nmarked == ncpus) { break; } } return background_thread_create(tsd, 0); } bool background_threads_disable(tsd_t *tsd) { assert(!background_thread_enabled()); malloc_mutex_assert_owner(tsd_tsdn(tsd), &background_thread_lock); /* Thread 0 will be responsible for terminating other threads. */ if (background_threads_disable_single(tsd, &background_thread_info[0])) { return true; } assert(n_background_threads == 0); return false; } /* Check if we need to signal the background thread early. */ void background_thread_interval_check(tsdn_t *tsdn, arena_t *arena, arena_decay_t *decay, size_t npages_new) { background_thread_info_t *info = arena_background_thread_info_get( arena); if (malloc_mutex_trylock(tsdn, &info->mtx)) { /* * Background thread may hold the mutex for a long period of * time. We'd like to avoid the variance on application * threads. So keep this non-blocking, and leave the work to a * future epoch. */ return; } if (info->state != background_thread_started) { goto label_done; } if (malloc_mutex_trylock(tsdn, &decay->mtx)) { goto label_done; } ssize_t decay_time = atomic_load_zd(&decay->time_ms, ATOMIC_RELAXED); if (decay_time <= 0) { /* Purging is eagerly done or disabled currently. 
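 * (A decay time of zero purges pages eagerly at the decay call sites, and
 * a negative value disables purging entirely, so in either case there is
 * nothing for the background thread to schedule.)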
*/ goto label_done_unlock2; } uint64_t decay_interval_ns = nstime_ns(&decay->interval); assert(decay_interval_ns > 0); nstime_t diff; nstime_init(&diff, background_thread_wakeup_time_get(info)); if (nstime_compare(&diff, &decay->epoch) <= 0) { goto label_done_unlock2; } nstime_subtract(&diff, &decay->epoch); if (nstime_ns(&diff) < BACKGROUND_THREAD_MIN_INTERVAL_NS) { goto label_done_unlock2; } if (npages_new > 0) { size_t n_epoch = (size_t)(nstime_ns(&diff) / decay_interval_ns); /* * Compute how many new pages we would need to purge by the next * wakeup, which is used to determine if we should signal the * background thread. */ uint64_t npurge_new; if (n_epoch >= SMOOTHSTEP_NSTEPS) { npurge_new = npages_new; } else { uint64_t h_steps_max = h_steps[SMOOTHSTEP_NSTEPS - 1]; assert(h_steps_max >= h_steps[SMOOTHSTEP_NSTEPS - 1 - n_epoch]); npurge_new = npages_new * (h_steps_max - h_steps[SMOOTHSTEP_NSTEPS - 1 - n_epoch]); npurge_new >>= SMOOTHSTEP_BFP; } info->npages_to_purge_new += npurge_new; } bool should_signal; if (info->npages_to_purge_new > BACKGROUND_THREAD_NPAGES_THRESHOLD) { should_signal = true; } else if (unlikely(background_thread_indefinite_sleep(info)) && (extents_npages_get(&arena->extents_dirty) > 0 || extents_npages_get(&arena->extents_muzzy) > 0 || info->npages_to_purge_new > 0)) { should_signal = true; } else { should_signal = false; } if (should_signal) { info->npages_to_purge_new = 0; pthread_cond_signal(&info->cond); } label_done_unlock2: malloc_mutex_unlock(tsdn, &decay->mtx); label_done: malloc_mutex_unlock(tsdn, &info->mtx); } void background_thread_prefork0(tsdn_t *tsdn) { malloc_mutex_prefork(tsdn, &background_thread_lock); background_thread_enabled_at_fork = background_thread_enabled(); } void background_thread_prefork1(tsdn_t *tsdn) { for (unsigned i = 0; i < ncpus; i++) { malloc_mutex_prefork(tsdn, &background_thread_info[i].mtx); } } void background_thread_postfork_parent(tsdn_t *tsdn) { for (unsigned i = 0; i < ncpus; i++) { malloc_mutex_postfork_parent(tsdn, &background_thread_info[i].mtx); } malloc_mutex_postfork_parent(tsdn, &background_thread_lock); } void background_thread_postfork_child(tsdn_t *tsdn) { for (unsigned i = 0; i < ncpus; i++) { malloc_mutex_postfork_child(tsdn, &background_thread_info[i].mtx); } malloc_mutex_postfork_child(tsdn, &background_thread_lock); if (!background_thread_enabled_at_fork) { return; } /* Clear background_thread state (reset to disabled for child). 
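 * Only the forking thread survives fork(2), so the child starts with zero
 * background threads; they are re-created only if background_thread is
 * enabled again in the child.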
*/ malloc_mutex_lock(tsdn, &background_thread_lock); n_background_threads = 0; background_thread_enabled_set(tsdn, false); for (unsigned i = 0; i < ncpus; i++) { background_thread_info_t *info = &background_thread_info[i]; malloc_mutex_lock(tsdn, &info->mtx); info->state = background_thread_stopped; int ret = pthread_cond_init(&info->cond, NULL); assert(ret == 0); background_thread_info_init(tsdn, info); malloc_mutex_unlock(tsdn, &info->mtx); } malloc_mutex_unlock(tsdn, &background_thread_lock); } bool background_thread_stats_read(tsdn_t *tsdn, background_thread_stats_t *stats) { assert(config_stats); malloc_mutex_lock(tsdn, &background_thread_lock); if (!background_thread_enabled()) { malloc_mutex_unlock(tsdn, &background_thread_lock); return true; } stats->num_threads = n_background_threads; uint64_t num_runs = 0; nstime_init(&stats->run_interval, 0); for (unsigned i = 0; i < ncpus; i++) { background_thread_info_t *info = &background_thread_info[i]; malloc_mutex_lock(tsdn, &info->mtx); if (info->state != background_thread_stopped) { num_runs += info->tot_n_runs; nstime_add(&stats->run_interval, &info->tot_sleep_time); } malloc_mutex_unlock(tsdn, &info->mtx); } stats->num_runs = num_runs; if (num_runs > 0) { nstime_idivide(&stats->run_interval, num_runs); } malloc_mutex_unlock(tsdn, &background_thread_lock); return false; } #undef BACKGROUND_THREAD_NPAGES_THRESHOLD #undef BILLION #undef BACKGROUND_THREAD_MIN_INTERVAL_NS /* * When lazy lock is enabled, we need to make sure setting isthreaded before * taking any background_thread locks. This is called early in ctl (instead of * wait for the pthread_create calls to trigger) because the mutex is required * before creating background threads. */ void background_thread_ctl_init(tsdn_t *tsdn) { malloc_mutex_assert_not_owner(tsdn, &background_thread_lock); #ifdef JEMALLOC_PTHREAD_CREATE_WRAPPER pthread_once(&once_control, pthread_create_wrapper_once); #endif } #endif /* defined(JEMALLOC_BACKGROUND_THREAD) */ bool background_thread_boot0(void) { if (!have_background_thread && opt_background_thread) { malloc_printf(": option background_thread currently " "supports pthread only\n"); return true; } #ifdef JEMALLOC_PTHREAD_CREATE_WRAPPER pthread_create_fptr = dlsym(RTLD_NEXT, "pthread_create"); if (pthread_create_fptr == NULL) { can_enable_background_thread = false; if (config_lazy_lock || opt_background_thread) { malloc_write(": Error in dlsym(RTLD_NEXT, " "\"pthread_create\")\n"); abort(); } } else { can_enable_background_thread = true; } #endif return false; } bool background_thread_boot1(tsdn_t *tsdn) { #ifdef JEMALLOC_BACKGROUND_THREAD assert(have_background_thread); assert(narenas_total_get() > 0); background_thread_enabled_set(tsdn, opt_background_thread); if (malloc_mutex_init(&background_thread_lock, "background_thread_global", WITNESS_RANK_BACKGROUND_THREAD_GLOBAL, malloc_mutex_rank_exclusive)) { return true; } if (opt_background_thread) { background_thread_ctl_init(tsdn); } background_thread_info = (background_thread_info_t *)base_alloc(tsdn, b0get(), ncpus * sizeof(background_thread_info_t), CACHELINE); if (background_thread_info == NULL) { return true; } for (unsigned i = 0; i < ncpus; i++) { background_thread_info_t *info = &background_thread_info[i]; /* Thread mutex is rank_inclusive because of thread0. 
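 * Thread 0 acquires the other threads' info mutexes while holding its own
 * (e.g. when shutting them down at exit), so the per-thread mutexes share
 * a witness rank and rely on address ordering instead of strict exclusion.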
*/ if (malloc_mutex_init(&info->mtx, "background_thread", WITNESS_RANK_BACKGROUND_THREAD, malloc_mutex_address_ordered)) { return true; } if (pthread_cond_init(&info->cond, NULL)) { return true; } malloc_mutex_lock(tsdn, &info->mtx); info->state = background_thread_stopped; background_thread_info_init(tsdn, info); malloc_mutex_unlock(tsdn, &info->mtx); } #endif return false; } Index: head/contrib/jemalloc/src/base.c =================================================================== --- head/contrib/jemalloc/src/base.c (revision 320622) +++ head/contrib/jemalloc/src/base.c (revision 320623) @@ -1,392 +1,402 @@ #define JEMALLOC_BASE_C_ #include "jemalloc/internal/jemalloc_preamble.h" #include "jemalloc/internal/jemalloc_internal_includes.h" #include "jemalloc/internal/assert.h" #include "jemalloc/internal/extent_mmap.h" #include "jemalloc/internal/mutex.h" #include "jemalloc/internal/sz.h" /******************************************************************************/ /* Data. */ static base_t *b0; /******************************************************************************/ static void * -base_map(extent_hooks_t *extent_hooks, unsigned ind, size_t size) { +base_map(tsdn_t *tsdn, extent_hooks_t *extent_hooks, unsigned ind, size_t size) { void *addr; bool zero = true; bool commit = true; assert(size == HUGEPAGE_CEILING(size)); if (extent_hooks == &extent_hooks_default) { addr = extent_alloc_mmap(NULL, size, PAGE, &zero, &commit); } else { + /* No arena context as we are creating new arenas. */ + tsd_t *tsd = tsdn_null(tsdn) ? tsd_fetch() : tsdn_tsd(tsdn); + pre_reentrancy(tsd, NULL); addr = extent_hooks->alloc(extent_hooks, NULL, size, PAGE, &zero, &commit, ind); + post_reentrancy(tsd); } return addr; } static void -base_unmap(extent_hooks_t *extent_hooks, unsigned ind, void *addr, +base_unmap(tsdn_t *tsdn, extent_hooks_t *extent_hooks, unsigned ind, void *addr, size_t size) { /* * Cascade through dalloc, decommit, purge_forced, and purge_lazy, * stopping at first success. This cascade is performed for consistency * with the cascade in extent_dalloc_wrapper() because an application's * custom hooks may not support e.g. dalloc. This function is only ever * called as a side effect of arena destruction, so although it might * seem pointless to do anything besides dalloc here, the application * may in fact want the end state of all associated virtual memory to be * in some consistent-but-allocated state. */ if (extent_hooks == &extent_hooks_default) { if (!extent_dalloc_mmap(addr, size)) { return; } if (!pages_decommit(addr, size)) { return; } if (!pages_purge_forced(addr, size)) { return; } if (!pages_purge_lazy(addr, size)) { return; } /* Nothing worked. This should never happen. */ not_reached(); } else { + tsd_t *tsd = tsdn_null(tsdn) ? tsd_fetch() : tsdn_tsd(tsdn); + pre_reentrancy(tsd, NULL); if (extent_hooks->dalloc != NULL && !extent_hooks->dalloc(extent_hooks, addr, size, true, ind)) { - return; + goto label_done; } if (extent_hooks->decommit != NULL && !extent_hooks->decommit(extent_hooks, addr, size, 0, size, ind)) { - return; + goto label_done; } if (extent_hooks->purge_forced != NULL && !extent_hooks->purge_forced(extent_hooks, addr, size, 0, size, ind)) { - return; + goto label_done; } if (extent_hooks->purge_lazy != NULL && !extent_hooks->purge_lazy(extent_hooks, addr, size, 0, size, ind)) { - return; + goto label_done; } /* Nothing worked. That's the application's problem. 
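 * The custom hooks declined every step of the cascade; the mapping is left
 * in whatever state the hooks produced, and unlike the default-hooks path
 * this is not treated as unreachable.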
*/ + label_done: + post_reentrancy(tsd); + return; } } static void base_extent_init(size_t *extent_sn_next, extent_t *extent, void *addr, size_t size) { size_t sn; sn = *extent_sn_next; (*extent_sn_next)++; extent_binit(extent, addr, size, sn); } static void * base_extent_bump_alloc_helper(extent_t *extent, size_t *gap_size, size_t size, size_t alignment) { void *ret; assert(alignment == ALIGNMENT_CEILING(alignment, QUANTUM)); assert(size == ALIGNMENT_CEILING(size, alignment)); *gap_size = ALIGNMENT_CEILING((uintptr_t)extent_addr_get(extent), alignment) - (uintptr_t)extent_addr_get(extent); ret = (void *)((uintptr_t)extent_addr_get(extent) + *gap_size); assert(extent_bsize_get(extent) >= *gap_size + size); extent_binit(extent, (void *)((uintptr_t)extent_addr_get(extent) + *gap_size + size), extent_bsize_get(extent) - *gap_size - size, extent_sn_get(extent)); return ret; } static void base_extent_bump_alloc_post(tsdn_t *tsdn, base_t *base, extent_t *extent, size_t gap_size, void *addr, size_t size) { if (extent_bsize_get(extent) > 0) { /* * Compute the index for the largest size class that does not * exceed extent's size. */ szind_t index_floor = sz_size2index(extent_bsize_get(extent) + 1) - 1; extent_heap_insert(&base->avail[index_floor], extent); } if (config_stats) { base->allocated += size; /* * Add one PAGE to base_resident for every page boundary that is * crossed by the new allocation. */ base->resident += PAGE_CEILING((uintptr_t)addr + size) - PAGE_CEILING((uintptr_t)addr - gap_size); assert(base->allocated <= base->resident); assert(base->resident <= base->mapped); } } static void * base_extent_bump_alloc(tsdn_t *tsdn, base_t *base, extent_t *extent, size_t size, size_t alignment) { void *ret; size_t gap_size; ret = base_extent_bump_alloc_helper(extent, &gap_size, size, alignment); base_extent_bump_alloc_post(tsdn, base, extent, gap_size, ret, size); return ret; } /* * Allocate a block of virtual memory that is large enough to start with a * base_block_t header, followed by an object of specified size and alignment. * On success a pointer to the initialized base_block_t header is returned. */ static base_block_t * -base_block_alloc(extent_hooks_t *extent_hooks, unsigned ind, +base_block_alloc(tsdn_t *tsdn, extent_hooks_t *extent_hooks, unsigned ind, pszind_t *pind_last, size_t *extent_sn_next, size_t size, size_t alignment) { alignment = ALIGNMENT_CEILING(alignment, QUANTUM); size_t usize = ALIGNMENT_CEILING(size, alignment); size_t header_size = sizeof(base_block_t); size_t gap_size = ALIGNMENT_CEILING(header_size, alignment) - header_size; /* * Create increasingly larger blocks in order to limit the total number * of disjoint virtual memory ranges. Choose the next size in the page * size class series (skipping size classes that are not a multiple of * HUGEPAGE), or a size large enough to satisfy the requested size and * alignment, whichever is larger. */ size_t min_block_size = HUGEPAGE_CEILING(sz_psz2u(header_size + gap_size + usize)); pszind_t pind_next = (*pind_last + 1 < NPSIZES) ? *pind_last + 1 : *pind_last; size_t next_block_size = HUGEPAGE_CEILING(sz_pind2sz(pind_next)); size_t block_size = (min_block_size > next_block_size) ? 
min_block_size : next_block_size; - base_block_t *block = (base_block_t *)base_map(extent_hooks, ind, + base_block_t *block = (base_block_t *)base_map(tsdn, extent_hooks, ind, block_size); if (block == NULL) { return NULL; } *pind_last = sz_psz2ind(block_size); block->size = block_size; block->next = NULL; assert(block_size >= header_size); base_extent_init(extent_sn_next, &block->extent, (void *)((uintptr_t)block + header_size), block_size - header_size); return block; } /* * Allocate an extent that is at least as large as specified size, with * specified alignment. */ static extent_t * base_extent_alloc(tsdn_t *tsdn, base_t *base, size_t size, size_t alignment) { malloc_mutex_assert_owner(tsdn, &base->mtx); extent_hooks_t *extent_hooks = base_extent_hooks_get(base); /* * Drop mutex during base_block_alloc(), because an extent hook will be * called. */ malloc_mutex_unlock(tsdn, &base->mtx); - base_block_t *block = base_block_alloc(extent_hooks, base_ind_get(base), - &base->pind_last, &base->extent_sn_next, size, alignment); + base_block_t *block = base_block_alloc(tsdn, extent_hooks, + base_ind_get(base), &base->pind_last, &base->extent_sn_next, size, + alignment); malloc_mutex_lock(tsdn, &base->mtx); if (block == NULL) { return NULL; } block->next = base->blocks; base->blocks = block; if (config_stats) { base->allocated += sizeof(base_block_t); base->resident += PAGE_CEILING(sizeof(base_block_t)); base->mapped += block->size; assert(base->allocated <= base->resident); assert(base->resident <= base->mapped); } return &block->extent; } base_t * b0get(void) { return b0; } base_t * base_new(tsdn_t *tsdn, unsigned ind, extent_hooks_t *extent_hooks) { pszind_t pind_last = 0; size_t extent_sn_next = 0; - base_block_t *block = base_block_alloc(extent_hooks, ind, &pind_last, - &extent_sn_next, sizeof(base_t), QUANTUM); + base_block_t *block = base_block_alloc(tsdn, extent_hooks, ind, + &pind_last, &extent_sn_next, sizeof(base_t), QUANTUM); if (block == NULL) { return NULL; } size_t gap_size; size_t base_alignment = CACHELINE; size_t base_size = ALIGNMENT_CEILING(sizeof(base_t), base_alignment); base_t *base = (base_t *)base_extent_bump_alloc_helper(&block->extent, &gap_size, base_size, base_alignment); base->ind = ind; atomic_store_p(&base->extent_hooks, extent_hooks, ATOMIC_RELAXED); if (malloc_mutex_init(&base->mtx, "base", WITNESS_RANK_BASE, malloc_mutex_rank_exclusive)) { - base_unmap(extent_hooks, ind, block, block->size); + base_unmap(tsdn, extent_hooks, ind, block, block->size); return NULL; } base->pind_last = pind_last; base->extent_sn_next = extent_sn_next; base->blocks = block; for (szind_t i = 0; i < NSIZES; i++) { extent_heap_new(&base->avail[i]); } if (config_stats) { base->allocated = sizeof(base_block_t); base->resident = PAGE_CEILING(sizeof(base_block_t)); base->mapped = block->size; assert(base->allocated <= base->resident); assert(base->resident <= base->mapped); } base_extent_bump_alloc_post(tsdn, base, &block->extent, gap_size, base, base_size); return base; } void -base_delete(base_t *base) { +base_delete(tsdn_t *tsdn, base_t *base) { extent_hooks_t *extent_hooks = base_extent_hooks_get(base); base_block_t *next = base->blocks; do { base_block_t *block = next; next = block->next; - base_unmap(extent_hooks, base_ind_get(base), block, + base_unmap(tsdn, extent_hooks, base_ind_get(base), block, block->size); } while (next != NULL); } extent_hooks_t * base_extent_hooks_get(base_t *base) { return (extent_hooks_t *)atomic_load_p(&base->extent_hooks, ATOMIC_ACQUIRE); } 
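/*
 * Illustrative sketch, not part of this change: one way an application can
 * reach base_extent_hooks_get()/base_extent_hooks_set() and the
 * reentrancy-guarded hook calls above is by querying and re-installing an
 * arena's hooks through the arena.<i>.extent_hooks mallctl.  The header
 * name below assumes a stock jemalloc install (on FreeBSD the same
 * interfaces are declared in <malloc_np.h>), and error handling is reduced
 * to asserts for brevity.
 */
#include <assert.h>
#include <stdio.h>
#include <jemalloc/jemalloc.h>

void
extent_hooks_mallctl_example(void) {
	/* Create a fresh arena; its index is returned through oldp. */
	unsigned arena_ind;
	size_t sz = sizeof(arena_ind);
	assert(mallctl("arenas.create", &arena_ind, &sz, NULL, 0) == 0);

	/* Read the hooks currently installed for that arena. */
	char name[64];
	snprintf(name, sizeof(name), "arena.%u.extent_hooks", arena_ind);
	extent_hooks_t *hooks;
	sz = sizeof(hooks);
	assert(mallctl(name, (void *)&hooks, &sz, NULL, 0) == 0);

	/*
	 * Writing a hooks pointer through the same name installs it; here we
	 * simply re-install the table we just read.
	 */
	assert(mallctl(name, NULL, NULL, (void *)&hooks, sizeof(hooks)) == 0);

	printf("arena %u extent hooks at %p\n", arena_ind, (void *)hooks);
}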
extent_hooks_t * base_extent_hooks_set(base_t *base, extent_hooks_t *extent_hooks) { extent_hooks_t *old_extent_hooks = base_extent_hooks_get(base); atomic_store_p(&base->extent_hooks, extent_hooks, ATOMIC_RELEASE); return old_extent_hooks; } static void * base_alloc_impl(tsdn_t *tsdn, base_t *base, size_t size, size_t alignment, size_t *esn) { alignment = QUANTUM_CEILING(alignment); size_t usize = ALIGNMENT_CEILING(size, alignment); size_t asize = usize + alignment - QUANTUM; extent_t *extent = NULL; malloc_mutex_lock(tsdn, &base->mtx); for (szind_t i = sz_size2index(asize); i < NSIZES; i++) { extent = extent_heap_remove_first(&base->avail[i]); if (extent != NULL) { /* Use existing space. */ break; } } if (extent == NULL) { /* Try to allocate more space. */ extent = base_extent_alloc(tsdn, base, usize, alignment); } void *ret; if (extent == NULL) { ret = NULL; goto label_return; } ret = base_extent_bump_alloc(tsdn, base, extent, usize, alignment); if (esn != NULL) { *esn = extent_sn_get(extent); } label_return: malloc_mutex_unlock(tsdn, &base->mtx); return ret; } /* * base_alloc() returns zeroed memory, which is always demand-zeroed for the * auto arenas, in order to make multi-page sparse data structures such as radix * tree nodes efficient with respect to physical memory usage. Upon success a * pointer to at least size bytes with specified alignment is returned. Note * that size is rounded up to the nearest multiple of alignment to avoid false * sharing. */ void * base_alloc(tsdn_t *tsdn, base_t *base, size_t size, size_t alignment) { return base_alloc_impl(tsdn, base, size, alignment, NULL); } extent_t * base_alloc_extent(tsdn_t *tsdn, base_t *base) { size_t esn; extent_t *extent = base_alloc_impl(tsdn, base, sizeof(extent_t), CACHELINE, &esn); if (extent == NULL) { return NULL; } extent_esn_set(extent, esn); return extent; } void base_stats_get(tsdn_t *tsdn, base_t *base, size_t *allocated, size_t *resident, size_t *mapped) { cassert(config_stats); malloc_mutex_lock(tsdn, &base->mtx); assert(base->allocated <= base->resident); assert(base->resident <= base->mapped); *allocated = base->allocated; *resident = base->resident; *mapped = base->mapped; malloc_mutex_unlock(tsdn, &base->mtx); } void base_prefork(tsdn_t *tsdn, base_t *base) { malloc_mutex_prefork(tsdn, &base->mtx); } void base_postfork_parent(tsdn_t *tsdn, base_t *base) { malloc_mutex_postfork_parent(tsdn, &base->mtx); } void base_postfork_child(tsdn_t *tsdn, base_t *base) { malloc_mutex_postfork_child(tsdn, &base->mtx); } bool base_boot(tsdn_t *tsdn) { b0 = base_new(tsdn, 0, (extent_hooks_t *)&extent_hooks_default); return (b0 == NULL); } Index: head/contrib/jemalloc/src/ctl.c =================================================================== --- head/contrib/jemalloc/src/ctl.c (revision 320622) +++ head/contrib/jemalloc/src/ctl.c (revision 320623) @@ -1,2698 +1,2698 @@ #define JEMALLOC_CTL_C_ #include "jemalloc/internal/jemalloc_preamble.h" #include "jemalloc/internal/jemalloc_internal_includes.h" #include "jemalloc/internal/assert.h" #include "jemalloc/internal/ctl.h" #include "jemalloc/internal/extent_dss.h" #include "jemalloc/internal/extent_mmap.h" #include "jemalloc/internal/mutex.h" #include "jemalloc/internal/nstime.h" #include "jemalloc/internal/size_classes.h" #include "jemalloc/internal/util.h" /******************************************************************************/ /* Data. 
*/ /* * ctl_mtx protects the following: * - ctl_stats->* */ static malloc_mutex_t ctl_mtx; static bool ctl_initialized; static ctl_stats_t *ctl_stats; static ctl_arenas_t *ctl_arenas; /******************************************************************************/ /* Helpers for named and indexed nodes. */ static const ctl_named_node_t * ctl_named_node(const ctl_node_t *node) { return ((node->named) ? (const ctl_named_node_t *)node : NULL); } static const ctl_named_node_t * ctl_named_children(const ctl_named_node_t *node, size_t index) { const ctl_named_node_t *children = ctl_named_node(node->children); return (children ? &children[index] : NULL); } static const ctl_indexed_node_t * ctl_indexed_node(const ctl_node_t *node) { return (!node->named ? (const ctl_indexed_node_t *)node : NULL); } /******************************************************************************/ /* Function prototypes for non-inline static functions. */ #define CTL_PROTO(n) \ static int n##_ctl(tsd_t *tsd, const size_t *mib, size_t miblen, \ void *oldp, size_t *oldlenp, void *newp, size_t newlen); #define INDEX_PROTO(n) \ static const ctl_named_node_t *n##_index(tsdn_t *tsdn, \ const size_t *mib, size_t miblen, size_t i); CTL_PROTO(version) CTL_PROTO(epoch) CTL_PROTO(background_thread) CTL_PROTO(thread_tcache_enabled) CTL_PROTO(thread_tcache_flush) CTL_PROTO(thread_prof_name) CTL_PROTO(thread_prof_active) CTL_PROTO(thread_arena) CTL_PROTO(thread_allocated) CTL_PROTO(thread_allocatedp) CTL_PROTO(thread_deallocated) CTL_PROTO(thread_deallocatedp) CTL_PROTO(config_cache_oblivious) CTL_PROTO(config_debug) CTL_PROTO(config_fill) CTL_PROTO(config_lazy_lock) CTL_PROTO(config_malloc_conf) CTL_PROTO(config_prof) CTL_PROTO(config_prof_libgcc) CTL_PROTO(config_prof_libunwind) CTL_PROTO(config_stats) CTL_PROTO(config_thp) CTL_PROTO(config_utrace) CTL_PROTO(config_xmalloc) CTL_PROTO(opt_abort) CTL_PROTO(opt_abort_conf) CTL_PROTO(opt_retain) CTL_PROTO(opt_dss) CTL_PROTO(opt_narenas) CTL_PROTO(opt_percpu_arena) CTL_PROTO(opt_background_thread) CTL_PROTO(opt_dirty_decay_ms) CTL_PROTO(opt_muzzy_decay_ms) CTL_PROTO(opt_stats_print) CTL_PROTO(opt_stats_print_opts) CTL_PROTO(opt_junk) CTL_PROTO(opt_zero) CTL_PROTO(opt_utrace) CTL_PROTO(opt_xmalloc) CTL_PROTO(opt_tcache) CTL_PROTO(opt_lg_tcache_max) CTL_PROTO(opt_prof) CTL_PROTO(opt_prof_prefix) CTL_PROTO(opt_prof_active) CTL_PROTO(opt_prof_thread_active_init) CTL_PROTO(opt_lg_prof_sample) CTL_PROTO(opt_lg_prof_interval) CTL_PROTO(opt_prof_gdump) CTL_PROTO(opt_prof_final) CTL_PROTO(opt_prof_leak) CTL_PROTO(opt_prof_accum) CTL_PROTO(tcache_create) CTL_PROTO(tcache_flush) CTL_PROTO(tcache_destroy) CTL_PROTO(arena_i_initialized) CTL_PROTO(arena_i_decay) CTL_PROTO(arena_i_purge) CTL_PROTO(arena_i_reset) CTL_PROTO(arena_i_destroy) CTL_PROTO(arena_i_dss) CTL_PROTO(arena_i_dirty_decay_ms) CTL_PROTO(arena_i_muzzy_decay_ms) CTL_PROTO(arena_i_extent_hooks) INDEX_PROTO(arena_i) CTL_PROTO(arenas_bin_i_size) CTL_PROTO(arenas_bin_i_nregs) CTL_PROTO(arenas_bin_i_slab_size) INDEX_PROTO(arenas_bin_i) CTL_PROTO(arenas_lextent_i_size) INDEX_PROTO(arenas_lextent_i) CTL_PROTO(arenas_narenas) CTL_PROTO(arenas_dirty_decay_ms) CTL_PROTO(arenas_muzzy_decay_ms) CTL_PROTO(arenas_quantum) CTL_PROTO(arenas_page) CTL_PROTO(arenas_tcache_max) CTL_PROTO(arenas_nbins) CTL_PROTO(arenas_nhbins) CTL_PROTO(arenas_nlextents) CTL_PROTO(arenas_create) CTL_PROTO(prof_thread_active_init) CTL_PROTO(prof_active) CTL_PROTO(prof_dump) CTL_PROTO(prof_gdump) CTL_PROTO(prof_reset) CTL_PROTO(prof_interval) CTL_PROTO(lg_prof_sample) 
CTL_PROTO(stats_arenas_i_small_allocated) CTL_PROTO(stats_arenas_i_small_nmalloc) CTL_PROTO(stats_arenas_i_small_ndalloc) CTL_PROTO(stats_arenas_i_small_nrequests) CTL_PROTO(stats_arenas_i_large_allocated) CTL_PROTO(stats_arenas_i_large_nmalloc) CTL_PROTO(stats_arenas_i_large_ndalloc) CTL_PROTO(stats_arenas_i_large_nrequests) CTL_PROTO(stats_arenas_i_bins_j_nmalloc) CTL_PROTO(stats_arenas_i_bins_j_ndalloc) CTL_PROTO(stats_arenas_i_bins_j_nrequests) CTL_PROTO(stats_arenas_i_bins_j_curregs) CTL_PROTO(stats_arenas_i_bins_j_nfills) CTL_PROTO(stats_arenas_i_bins_j_nflushes) CTL_PROTO(stats_arenas_i_bins_j_nslabs) CTL_PROTO(stats_arenas_i_bins_j_nreslabs) CTL_PROTO(stats_arenas_i_bins_j_curslabs) INDEX_PROTO(stats_arenas_i_bins_j) CTL_PROTO(stats_arenas_i_lextents_j_nmalloc) CTL_PROTO(stats_arenas_i_lextents_j_ndalloc) CTL_PROTO(stats_arenas_i_lextents_j_nrequests) CTL_PROTO(stats_arenas_i_lextents_j_curlextents) INDEX_PROTO(stats_arenas_i_lextents_j) CTL_PROTO(stats_arenas_i_nthreads) CTL_PROTO(stats_arenas_i_uptime) CTL_PROTO(stats_arenas_i_dss) CTL_PROTO(stats_arenas_i_dirty_decay_ms) CTL_PROTO(stats_arenas_i_muzzy_decay_ms) CTL_PROTO(stats_arenas_i_pactive) CTL_PROTO(stats_arenas_i_pdirty) CTL_PROTO(stats_arenas_i_pmuzzy) CTL_PROTO(stats_arenas_i_mapped) CTL_PROTO(stats_arenas_i_retained) CTL_PROTO(stats_arenas_i_dirty_npurge) CTL_PROTO(stats_arenas_i_dirty_nmadvise) CTL_PROTO(stats_arenas_i_dirty_purged) CTL_PROTO(stats_arenas_i_muzzy_npurge) CTL_PROTO(stats_arenas_i_muzzy_nmadvise) CTL_PROTO(stats_arenas_i_muzzy_purged) CTL_PROTO(stats_arenas_i_base) CTL_PROTO(stats_arenas_i_internal) CTL_PROTO(stats_arenas_i_tcache_bytes) CTL_PROTO(stats_arenas_i_resident) INDEX_PROTO(stats_arenas_i) CTL_PROTO(stats_allocated) CTL_PROTO(stats_active) CTL_PROTO(stats_background_thread_num_threads) CTL_PROTO(stats_background_thread_num_runs) CTL_PROTO(stats_background_thread_run_interval) CTL_PROTO(stats_metadata) CTL_PROTO(stats_resident) CTL_PROTO(stats_mapped) CTL_PROTO(stats_retained) #define MUTEX_STATS_CTL_PROTO_GEN(n) \ CTL_PROTO(stats_##n##_num_ops) \ CTL_PROTO(stats_##n##_num_wait) \ CTL_PROTO(stats_##n##_num_spin_acq) \ CTL_PROTO(stats_##n##_num_owner_switch) \ CTL_PROTO(stats_##n##_total_wait_time) \ CTL_PROTO(stats_##n##_max_wait_time) \ CTL_PROTO(stats_##n##_max_num_thds) /* Global mutexes. */ #define OP(mtx) MUTEX_STATS_CTL_PROTO_GEN(mutexes_##mtx) MUTEX_PROF_GLOBAL_MUTEXES #undef OP /* Per arena mutexes. */ #define OP(mtx) MUTEX_STATS_CTL_PROTO_GEN(arenas_i_mutexes_##mtx) MUTEX_PROF_ARENA_MUTEXES #undef OP /* Arena bin mutexes. */ MUTEX_STATS_CTL_PROTO_GEN(arenas_i_bins_j_mutex) #undef MUTEX_STATS_CTL_PROTO_GEN CTL_PROTO(stats_mutexes_reset) /******************************************************************************/ /* mallctl tree. */ #define NAME(n) {true}, n #define CHILD(t, c) \ sizeof(c##_node) / sizeof(ctl_##t##_node_t), \ (ctl_node_t *)c##_node, \ NULL #define CTL(c) 0, NULL, c##_ctl /* * Only handles internal indexed nodes, since there are currently no external * ones. 
*/ #define INDEX(i) {false}, i##_index static const ctl_named_node_t thread_tcache_node[] = { {NAME("enabled"), CTL(thread_tcache_enabled)}, {NAME("flush"), CTL(thread_tcache_flush)} }; static const ctl_named_node_t thread_prof_node[] = { {NAME("name"), CTL(thread_prof_name)}, {NAME("active"), CTL(thread_prof_active)} }; static const ctl_named_node_t thread_node[] = { {NAME("arena"), CTL(thread_arena)}, {NAME("allocated"), CTL(thread_allocated)}, {NAME("allocatedp"), CTL(thread_allocatedp)}, {NAME("deallocated"), CTL(thread_deallocated)}, {NAME("deallocatedp"), CTL(thread_deallocatedp)}, {NAME("tcache"), CHILD(named, thread_tcache)}, {NAME("prof"), CHILD(named, thread_prof)} }; static const ctl_named_node_t config_node[] = { {NAME("cache_oblivious"), CTL(config_cache_oblivious)}, {NAME("debug"), CTL(config_debug)}, {NAME("fill"), CTL(config_fill)}, {NAME("lazy_lock"), CTL(config_lazy_lock)}, {NAME("malloc_conf"), CTL(config_malloc_conf)}, {NAME("prof"), CTL(config_prof)}, {NAME("prof_libgcc"), CTL(config_prof_libgcc)}, {NAME("prof_libunwind"), CTL(config_prof_libunwind)}, {NAME("stats"), CTL(config_stats)}, {NAME("thp"), CTL(config_thp)}, {NAME("utrace"), CTL(config_utrace)}, {NAME("xmalloc"), CTL(config_xmalloc)} }; static const ctl_named_node_t opt_node[] = { {NAME("abort"), CTL(opt_abort)}, {NAME("abort_conf"), CTL(opt_abort_conf)}, {NAME("retain"), CTL(opt_retain)}, {NAME("dss"), CTL(opt_dss)}, {NAME("narenas"), CTL(opt_narenas)}, {NAME("percpu_arena"), CTL(opt_percpu_arena)}, {NAME("background_thread"), CTL(opt_background_thread)}, {NAME("dirty_decay_ms"), CTL(opt_dirty_decay_ms)}, {NAME("muzzy_decay_ms"), CTL(opt_muzzy_decay_ms)}, {NAME("stats_print"), CTL(opt_stats_print)}, {NAME("stats_print_opts"), CTL(opt_stats_print_opts)}, {NAME("junk"), CTL(opt_junk)}, {NAME("zero"), CTL(opt_zero)}, {NAME("utrace"), CTL(opt_utrace)}, {NAME("xmalloc"), CTL(opt_xmalloc)}, {NAME("tcache"), CTL(opt_tcache)}, {NAME("lg_tcache_max"), CTL(opt_lg_tcache_max)}, {NAME("prof"), CTL(opt_prof)}, {NAME("prof_prefix"), CTL(opt_prof_prefix)}, {NAME("prof_active"), CTL(opt_prof_active)}, {NAME("prof_thread_active_init"), CTL(opt_prof_thread_active_init)}, {NAME("lg_prof_sample"), CTL(opt_lg_prof_sample)}, {NAME("lg_prof_interval"), CTL(opt_lg_prof_interval)}, {NAME("prof_gdump"), CTL(opt_prof_gdump)}, {NAME("prof_final"), CTL(opt_prof_final)}, {NAME("prof_leak"), CTL(opt_prof_leak)}, {NAME("prof_accum"), CTL(opt_prof_accum)} }; static const ctl_named_node_t tcache_node[] = { {NAME("create"), CTL(tcache_create)}, {NAME("flush"), CTL(tcache_flush)}, {NAME("destroy"), CTL(tcache_destroy)} }; static const ctl_named_node_t arena_i_node[] = { {NAME("initialized"), CTL(arena_i_initialized)}, {NAME("decay"), CTL(arena_i_decay)}, {NAME("purge"), CTL(arena_i_purge)}, {NAME("reset"), CTL(arena_i_reset)}, {NAME("destroy"), CTL(arena_i_destroy)}, {NAME("dss"), CTL(arena_i_dss)}, {NAME("dirty_decay_ms"), CTL(arena_i_dirty_decay_ms)}, {NAME("muzzy_decay_ms"), CTL(arena_i_muzzy_decay_ms)}, {NAME("extent_hooks"), CTL(arena_i_extent_hooks)} }; static const ctl_named_node_t super_arena_i_node[] = { {NAME(""), CHILD(named, arena_i)} }; static const ctl_indexed_node_t arena_node[] = { {INDEX(arena_i)} }; static const ctl_named_node_t arenas_bin_i_node[] = { {NAME("size"), CTL(arenas_bin_i_size)}, {NAME("nregs"), CTL(arenas_bin_i_nregs)}, {NAME("slab_size"), CTL(arenas_bin_i_slab_size)} }; static const ctl_named_node_t super_arenas_bin_i_node[] = { {NAME(""), CHILD(named, arenas_bin_i)} }; static const ctl_indexed_node_t 
arenas_bin_node[] = { {INDEX(arenas_bin_i)} }; static const ctl_named_node_t arenas_lextent_i_node[] = { {NAME("size"), CTL(arenas_lextent_i_size)} }; static const ctl_named_node_t super_arenas_lextent_i_node[] = { {NAME(""), CHILD(named, arenas_lextent_i)} }; static const ctl_indexed_node_t arenas_lextent_node[] = { {INDEX(arenas_lextent_i)} }; static const ctl_named_node_t arenas_node[] = { {NAME("narenas"), CTL(arenas_narenas)}, {NAME("dirty_decay_ms"), CTL(arenas_dirty_decay_ms)}, {NAME("muzzy_decay_ms"), CTL(arenas_muzzy_decay_ms)}, {NAME("quantum"), CTL(arenas_quantum)}, {NAME("page"), CTL(arenas_page)}, {NAME("tcache_max"), CTL(arenas_tcache_max)}, {NAME("nbins"), CTL(arenas_nbins)}, {NAME("nhbins"), CTL(arenas_nhbins)}, {NAME("bin"), CHILD(indexed, arenas_bin)}, {NAME("nlextents"), CTL(arenas_nlextents)}, {NAME("lextent"), CHILD(indexed, arenas_lextent)}, {NAME("create"), CTL(arenas_create)} }; static const ctl_named_node_t prof_node[] = { {NAME("thread_active_init"), CTL(prof_thread_active_init)}, {NAME("active"), CTL(prof_active)}, {NAME("dump"), CTL(prof_dump)}, {NAME("gdump"), CTL(prof_gdump)}, {NAME("reset"), CTL(prof_reset)}, {NAME("interval"), CTL(prof_interval)}, {NAME("lg_sample"), CTL(lg_prof_sample)} }; static const ctl_named_node_t stats_arenas_i_small_node[] = { {NAME("allocated"), CTL(stats_arenas_i_small_allocated)}, {NAME("nmalloc"), CTL(stats_arenas_i_small_nmalloc)}, {NAME("ndalloc"), CTL(stats_arenas_i_small_ndalloc)}, {NAME("nrequests"), CTL(stats_arenas_i_small_nrequests)} }; static const ctl_named_node_t stats_arenas_i_large_node[] = { {NAME("allocated"), CTL(stats_arenas_i_large_allocated)}, {NAME("nmalloc"), CTL(stats_arenas_i_large_nmalloc)}, {NAME("ndalloc"), CTL(stats_arenas_i_large_ndalloc)}, {NAME("nrequests"), CTL(stats_arenas_i_large_nrequests)} }; #define MUTEX_PROF_DATA_NODE(prefix) \ static const ctl_named_node_t stats_##prefix##_node[] = { \ {NAME("num_ops"), \ CTL(stats_##prefix##_num_ops)}, \ {NAME("num_wait"), \ CTL(stats_##prefix##_num_wait)}, \ {NAME("num_spin_acq"), \ CTL(stats_##prefix##_num_spin_acq)}, \ {NAME("num_owner_switch"), \ CTL(stats_##prefix##_num_owner_switch)}, \ {NAME("total_wait_time"), \ CTL(stats_##prefix##_total_wait_time)}, \ {NAME("max_wait_time"), \ CTL(stats_##prefix##_max_wait_time)}, \ {NAME("max_num_thds"), \ CTL(stats_##prefix##_max_num_thds)} \ /* Note that # of current waiting thread not provided. 
*/ \ }; MUTEX_PROF_DATA_NODE(arenas_i_bins_j_mutex) static const ctl_named_node_t stats_arenas_i_bins_j_node[] = { {NAME("nmalloc"), CTL(stats_arenas_i_bins_j_nmalloc)}, {NAME("ndalloc"), CTL(stats_arenas_i_bins_j_ndalloc)}, {NAME("nrequests"), CTL(stats_arenas_i_bins_j_nrequests)}, {NAME("curregs"), CTL(stats_arenas_i_bins_j_curregs)}, {NAME("nfills"), CTL(stats_arenas_i_bins_j_nfills)}, {NAME("nflushes"), CTL(stats_arenas_i_bins_j_nflushes)}, {NAME("nslabs"), CTL(stats_arenas_i_bins_j_nslabs)}, {NAME("nreslabs"), CTL(stats_arenas_i_bins_j_nreslabs)}, {NAME("curslabs"), CTL(stats_arenas_i_bins_j_curslabs)}, {NAME("mutex"), CHILD(named, stats_arenas_i_bins_j_mutex)} }; static const ctl_named_node_t super_stats_arenas_i_bins_j_node[] = { {NAME(""), CHILD(named, stats_arenas_i_bins_j)} }; static const ctl_indexed_node_t stats_arenas_i_bins_node[] = { {INDEX(stats_arenas_i_bins_j)} }; static const ctl_named_node_t stats_arenas_i_lextents_j_node[] = { {NAME("nmalloc"), CTL(stats_arenas_i_lextents_j_nmalloc)}, {NAME("ndalloc"), CTL(stats_arenas_i_lextents_j_ndalloc)}, {NAME("nrequests"), CTL(stats_arenas_i_lextents_j_nrequests)}, {NAME("curlextents"), CTL(stats_arenas_i_lextents_j_curlextents)} }; static const ctl_named_node_t super_stats_arenas_i_lextents_j_node[] = { {NAME(""), CHILD(named, stats_arenas_i_lextents_j)} }; static const ctl_indexed_node_t stats_arenas_i_lextents_node[] = { {INDEX(stats_arenas_i_lextents_j)} }; #define OP(mtx) MUTEX_PROF_DATA_NODE(arenas_i_mutexes_##mtx) MUTEX_PROF_ARENA_MUTEXES #undef OP static const ctl_named_node_t stats_arenas_i_mutexes_node[] = { #define OP(mtx) {NAME(#mtx), CHILD(named, stats_arenas_i_mutexes_##mtx)}, MUTEX_PROF_ARENA_MUTEXES #undef OP }; static const ctl_named_node_t stats_arenas_i_node[] = { {NAME("nthreads"), CTL(stats_arenas_i_nthreads)}, {NAME("uptime"), CTL(stats_arenas_i_uptime)}, {NAME("dss"), CTL(stats_arenas_i_dss)}, {NAME("dirty_decay_ms"), CTL(stats_arenas_i_dirty_decay_ms)}, {NAME("muzzy_decay_ms"), CTL(stats_arenas_i_muzzy_decay_ms)}, {NAME("pactive"), CTL(stats_arenas_i_pactive)}, {NAME("pdirty"), CTL(stats_arenas_i_pdirty)}, {NAME("pmuzzy"), CTL(stats_arenas_i_pmuzzy)}, {NAME("mapped"), CTL(stats_arenas_i_mapped)}, {NAME("retained"), CTL(stats_arenas_i_retained)}, {NAME("dirty_npurge"), CTL(stats_arenas_i_dirty_npurge)}, {NAME("dirty_nmadvise"), CTL(stats_arenas_i_dirty_nmadvise)}, {NAME("dirty_purged"), CTL(stats_arenas_i_dirty_purged)}, {NAME("muzzy_npurge"), CTL(stats_arenas_i_muzzy_npurge)}, {NAME("muzzy_nmadvise"), CTL(stats_arenas_i_muzzy_nmadvise)}, {NAME("muzzy_purged"), CTL(stats_arenas_i_muzzy_purged)}, {NAME("base"), CTL(stats_arenas_i_base)}, {NAME("internal"), CTL(stats_arenas_i_internal)}, {NAME("tcache_bytes"), CTL(stats_arenas_i_tcache_bytes)}, {NAME("resident"), CTL(stats_arenas_i_resident)}, {NAME("small"), CHILD(named, stats_arenas_i_small)}, {NAME("large"), CHILD(named, stats_arenas_i_large)}, {NAME("bins"), CHILD(indexed, stats_arenas_i_bins)}, {NAME("lextents"), CHILD(indexed, stats_arenas_i_lextents)}, {NAME("mutexes"), CHILD(named, stats_arenas_i_mutexes)} }; static const ctl_named_node_t super_stats_arenas_i_node[] = { {NAME(""), CHILD(named, stats_arenas_i)} }; static const ctl_indexed_node_t stats_arenas_node[] = { {INDEX(stats_arenas_i)} }; static const ctl_named_node_t stats_background_thread_node[] = { {NAME("num_threads"), CTL(stats_background_thread_num_threads)}, {NAME("num_runs"), CTL(stats_background_thread_num_runs)}, {NAME("run_interval"), CTL(stats_background_thread_run_interval)} }; 
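/*
 * Illustrative sketch (assumptions: jemalloc built with statistics support,
 * the process run with background_thread enabled, and the declarations
 * coming from <jemalloc/jemalloc.h>, or <malloc_np.h> on FreeBSD): reading
 * the counters exposed by the stats.background_thread node above.  Writing
 * "epoch" first refreshes the statistics snapshot.
 */
#include <inttypes.h>
#include <stdio.h>
#include <jemalloc/jemalloc.h>

void
background_thread_stats_example(void) {
	uint64_t epoch = 1;
	size_t sz = sizeof(epoch);
	(void)mallctl("epoch", &epoch, &sz, &epoch, sizeof(epoch));

	size_t num_threads;
	sz = sizeof(num_threads);
	if (mallctl("stats.background_thread.num_threads", &num_threads, &sz,
	    NULL, 0) != 0) {
		return;	/* Statistics disabled or option unsupported. */
	}

	uint64_t num_runs, run_interval;
	sz = sizeof(num_runs);
	(void)mallctl("stats.background_thread.num_runs", &num_runs, &sz,
	    NULL, 0);
	sz = sizeof(run_interval);
	(void)mallctl("stats.background_thread.run_interval", &run_interval,
	    &sz, NULL, 0);

	printf("%zu background thread(s), %" PRIu64 " run(s), %" PRIu64
	    " ns average run interval\n", num_threads, num_runs,
	    run_interval);
}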
#define OP(mtx) MUTEX_PROF_DATA_NODE(mutexes_##mtx) MUTEX_PROF_GLOBAL_MUTEXES #undef OP static const ctl_named_node_t stats_mutexes_node[] = { #define OP(mtx) {NAME(#mtx), CHILD(named, stats_mutexes_##mtx)}, MUTEX_PROF_GLOBAL_MUTEXES #undef OP {NAME("reset"), CTL(stats_mutexes_reset)} }; #undef MUTEX_PROF_DATA_NODE static const ctl_named_node_t stats_node[] = { {NAME("allocated"), CTL(stats_allocated)}, {NAME("active"), CTL(stats_active)}, {NAME("metadata"), CTL(stats_metadata)}, {NAME("resident"), CTL(stats_resident)}, {NAME("mapped"), CTL(stats_mapped)}, {NAME("retained"), CTL(stats_retained)}, {NAME("background_thread"), CHILD(named, stats_background_thread)}, {NAME("mutexes"), CHILD(named, stats_mutexes)}, {NAME("arenas"), CHILD(indexed, stats_arenas)} }; static const ctl_named_node_t root_node[] = { {NAME("version"), CTL(version)}, {NAME("epoch"), CTL(epoch)}, {NAME("background_thread"), CTL(background_thread)}, {NAME("thread"), CHILD(named, thread)}, {NAME("config"), CHILD(named, config)}, {NAME("opt"), CHILD(named, opt)}, {NAME("tcache"), CHILD(named, tcache)}, {NAME("arena"), CHILD(indexed, arena)}, {NAME("arenas"), CHILD(named, arenas)}, {NAME("prof"), CHILD(named, prof)}, {NAME("stats"), CHILD(named, stats)} }; static const ctl_named_node_t super_root_node[] = { {NAME(""), CHILD(named, root)} }; #undef NAME #undef CHILD #undef CTL #undef INDEX /******************************************************************************/ /* * Sets *dst + *src non-atomically. This is safe, since everything is * synchronized by the ctl mutex. */ static void accum_arena_stats_u64(arena_stats_u64_t *dst, arena_stats_u64_t *src) { #ifdef JEMALLOC_ATOMIC_U64 uint64_t cur_dst = atomic_load_u64(dst, ATOMIC_RELAXED); uint64_t cur_src = atomic_load_u64(src, ATOMIC_RELAXED); atomic_store_u64(dst, cur_dst + cur_src, ATOMIC_RELAXED); #else *dst += *src; #endif } /* Likewise: with ctl mutex synchronization, reading is simple. */ static uint64_t arena_stats_read_u64(arena_stats_u64_t *p) { #ifdef JEMALLOC_ATOMIC_U64 return atomic_load_u64(p, ATOMIC_RELAXED); #else return *p; #endif } static void accum_atomic_zu(atomic_zu_t *dst, atomic_zu_t *src) { size_t cur_dst = atomic_load_zu(dst, ATOMIC_RELAXED); size_t cur_src = atomic_load_zu(src, ATOMIC_RELAXED); atomic_store_zu(dst, cur_dst + cur_src, ATOMIC_RELAXED); } /******************************************************************************/ static unsigned arenas_i2a_impl(size_t i, bool compat, bool validate) { unsigned a; switch (i) { case MALLCTL_ARENAS_ALL: a = 0; break; case MALLCTL_ARENAS_DESTROYED: a = 1; break; default: if (compat && i == ctl_arenas->narenas) { /* * Provide deprecated backward compatibility for * accessing the merged stats at index narenas rather * than via MALLCTL_ARENAS_ALL. This is scheduled for * removal in 6.0.0. */ a = 0; } else if (validate && i >= ctl_arenas->narenas) { a = UINT_MAX; } else { /* * This function should never be called for an index * more than one past the range of indices that have * initialized ctl data. 
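 * (Slots 0 and 1 are reserved for MALLCTL_ARENAS_ALL and
 * MALLCTL_ARENAS_DESTROYED, which is why a regular arena index i maps to
 * slot i + 2 below.)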
*/ assert(i < ctl_arenas->narenas || (!validate && i == ctl_arenas->narenas)); a = (unsigned)i + 2; } break; } return a; } static unsigned arenas_i2a(size_t i) { return arenas_i2a_impl(i, true, false); } static ctl_arena_t * -arenas_i_impl(tsdn_t *tsdn, size_t i, bool compat, bool init) { +arenas_i_impl(tsd_t *tsd, size_t i, bool compat, bool init) { ctl_arena_t *ret; assert(!compat || !init); ret = ctl_arenas->arenas[arenas_i2a_impl(i, compat, false)]; if (init && ret == NULL) { if (config_stats) { struct container_s { ctl_arena_t ctl_arena; ctl_arena_stats_t astats; }; struct container_s *cont = - (struct container_s *)base_alloc(tsdn, b0get(), - sizeof(struct container_s), QUANTUM); + (struct container_s *)base_alloc(tsd_tsdn(tsd), + b0get(), sizeof(struct container_s), QUANTUM); if (cont == NULL) { return NULL; } ret = &cont->ctl_arena; ret->astats = &cont->astats; } else { - ret = (ctl_arena_t *)base_alloc(tsdn, b0get(), + ret = (ctl_arena_t *)base_alloc(tsd_tsdn(tsd), b0get(), sizeof(ctl_arena_t), QUANTUM); if (ret == NULL) { return NULL; } } ret->arena_ind = (unsigned)i; ctl_arenas->arenas[arenas_i2a_impl(i, compat, false)] = ret; } assert(ret == NULL || arenas_i2a(ret->arena_ind) == arenas_i2a(i)); return ret; } static ctl_arena_t * arenas_i(size_t i) { - ctl_arena_t *ret = arenas_i_impl(TSDN_NULL, i, true, false); + ctl_arena_t *ret = arenas_i_impl(tsd_fetch(), i, true, false); assert(ret != NULL); return ret; } static void ctl_arena_clear(ctl_arena_t *ctl_arena) { ctl_arena->nthreads = 0; ctl_arena->dss = dss_prec_names[dss_prec_limit]; ctl_arena->dirty_decay_ms = -1; ctl_arena->muzzy_decay_ms = -1; ctl_arena->pactive = 0; ctl_arena->pdirty = 0; ctl_arena->pmuzzy = 0; if (config_stats) { memset(&ctl_arena->astats->astats, 0, sizeof(arena_stats_t)); ctl_arena->astats->allocated_small = 0; ctl_arena->astats->nmalloc_small = 0; ctl_arena->astats->ndalloc_small = 0; ctl_arena->astats->nrequests_small = 0; memset(ctl_arena->astats->bstats, 0, NBINS * sizeof(malloc_bin_stats_t)); memset(ctl_arena->astats->lstats, 0, (NSIZES - NBINS) * sizeof(malloc_large_stats_t)); } } static void ctl_arena_stats_amerge(tsdn_t *tsdn, ctl_arena_t *ctl_arena, arena_t *arena) { unsigned i; if (config_stats) { arena_stats_merge(tsdn, arena, &ctl_arena->nthreads, &ctl_arena->dss, &ctl_arena->dirty_decay_ms, &ctl_arena->muzzy_decay_ms, &ctl_arena->pactive, &ctl_arena->pdirty, &ctl_arena->pmuzzy, &ctl_arena->astats->astats, ctl_arena->astats->bstats, ctl_arena->astats->lstats); for (i = 0; i < NBINS; i++) { ctl_arena->astats->allocated_small += ctl_arena->astats->bstats[i].curregs * sz_index2size(i); ctl_arena->astats->nmalloc_small += ctl_arena->astats->bstats[i].nmalloc; ctl_arena->astats->ndalloc_small += ctl_arena->astats->bstats[i].ndalloc; ctl_arena->astats->nrequests_small += ctl_arena->astats->bstats[i].nrequests; } } else { arena_basic_stats_merge(tsdn, arena, &ctl_arena->nthreads, &ctl_arena->dss, &ctl_arena->dirty_decay_ms, &ctl_arena->muzzy_decay_ms, &ctl_arena->pactive, &ctl_arena->pdirty, &ctl_arena->pmuzzy); } } static void ctl_arena_stats_sdmerge(ctl_arena_t *ctl_sdarena, ctl_arena_t *ctl_arena, bool destroyed) { unsigned i; if (!destroyed) { ctl_sdarena->nthreads += ctl_arena->nthreads; ctl_sdarena->pactive += ctl_arena->pactive; ctl_sdarena->pdirty += ctl_arena->pdirty; ctl_sdarena->pmuzzy += ctl_arena->pmuzzy; } else { assert(ctl_arena->nthreads == 0); assert(ctl_arena->pactive == 0); assert(ctl_arena->pdirty == 0); assert(ctl_arena->pmuzzy == 0); } if (config_stats) { ctl_arena_stats_t 
*sdstats = ctl_sdarena->astats; ctl_arena_stats_t *astats = ctl_arena->astats; if (!destroyed) { accum_atomic_zu(&sdstats->astats.mapped, &astats->astats.mapped); accum_atomic_zu(&sdstats->astats.retained, &astats->astats.retained); } accum_arena_stats_u64(&sdstats->astats.decay_dirty.npurge, &astats->astats.decay_dirty.npurge); accum_arena_stats_u64(&sdstats->astats.decay_dirty.nmadvise, &astats->astats.decay_dirty.nmadvise); accum_arena_stats_u64(&sdstats->astats.decay_dirty.purged, &astats->astats.decay_dirty.purged); accum_arena_stats_u64(&sdstats->astats.decay_muzzy.npurge, &astats->astats.decay_muzzy.npurge); accum_arena_stats_u64(&sdstats->astats.decay_muzzy.nmadvise, &astats->astats.decay_muzzy.nmadvise); accum_arena_stats_u64(&sdstats->astats.decay_muzzy.purged, &astats->astats.decay_muzzy.purged); #define OP(mtx) malloc_mutex_prof_merge( \ &(sdstats->astats.mutex_prof_data[ \ arena_prof_mutex_##mtx]), \ &(astats->astats.mutex_prof_data[ \ arena_prof_mutex_##mtx])); MUTEX_PROF_ARENA_MUTEXES #undef OP if (!destroyed) { accum_atomic_zu(&sdstats->astats.base, &astats->astats.base); accum_atomic_zu(&sdstats->astats.internal, &astats->astats.internal); accum_atomic_zu(&sdstats->astats.resident, &astats->astats.resident); } else { assert(atomic_load_zu( &astats->astats.internal, ATOMIC_RELAXED) == 0); } if (!destroyed) { sdstats->allocated_small += astats->allocated_small; } else { assert(astats->allocated_small == 0); } sdstats->nmalloc_small += astats->nmalloc_small; sdstats->ndalloc_small += astats->ndalloc_small; sdstats->nrequests_small += astats->nrequests_small; if (!destroyed) { accum_atomic_zu(&sdstats->astats.allocated_large, &astats->astats.allocated_large); } else { assert(atomic_load_zu(&astats->astats.allocated_large, ATOMIC_RELAXED) == 0); } accum_arena_stats_u64(&sdstats->astats.nmalloc_large, &astats->astats.nmalloc_large); accum_arena_stats_u64(&sdstats->astats.ndalloc_large, &astats->astats.ndalloc_large); accum_arena_stats_u64(&sdstats->astats.nrequests_large, &astats->astats.nrequests_large); accum_atomic_zu(&sdstats->astats.tcache_bytes, &astats->astats.tcache_bytes); if (ctl_arena->arena_ind == 0) { sdstats->astats.uptime = astats->astats.uptime; } for (i = 0; i < NBINS; i++) { sdstats->bstats[i].nmalloc += astats->bstats[i].nmalloc; sdstats->bstats[i].ndalloc += astats->bstats[i].ndalloc; sdstats->bstats[i].nrequests += astats->bstats[i].nrequests; if (!destroyed) { sdstats->bstats[i].curregs += astats->bstats[i].curregs; } else { assert(astats->bstats[i].curregs == 0); } sdstats->bstats[i].nfills += astats->bstats[i].nfills; sdstats->bstats[i].nflushes += astats->bstats[i].nflushes; sdstats->bstats[i].nslabs += astats->bstats[i].nslabs; sdstats->bstats[i].reslabs += astats->bstats[i].reslabs; if (!destroyed) { sdstats->bstats[i].curslabs += astats->bstats[i].curslabs; } else { assert(astats->bstats[i].curslabs == 0); } malloc_mutex_prof_merge(&sdstats->bstats[i].mutex_data, &astats->bstats[i].mutex_data); } for (i = 0; i < NSIZES - NBINS; i++) { accum_arena_stats_u64(&sdstats->lstats[i].nmalloc, &astats->lstats[i].nmalloc); accum_arena_stats_u64(&sdstats->lstats[i].ndalloc, &astats->lstats[i].ndalloc); accum_arena_stats_u64(&sdstats->lstats[i].nrequests, &astats->lstats[i].nrequests); if (!destroyed) { sdstats->lstats[i].curlextents += astats->lstats[i].curlextents; } else { assert(astats->lstats[i].curlextents == 0); } } } } static void ctl_arena_refresh(tsdn_t *tsdn, arena_t *arena, ctl_arena_t *ctl_sdarena, unsigned i, bool destroyed) { ctl_arena_t *ctl_arena 
= arenas_i(i); ctl_arena_clear(ctl_arena); ctl_arena_stats_amerge(tsdn, ctl_arena, arena); /* Merge into sum stats as well. */ ctl_arena_stats_sdmerge(ctl_sdarena, ctl_arena, destroyed); } static unsigned -ctl_arena_init(tsdn_t *tsdn, extent_hooks_t *extent_hooks) { +ctl_arena_init(tsd_t *tsd, extent_hooks_t *extent_hooks) { unsigned arena_ind; ctl_arena_t *ctl_arena; if ((ctl_arena = ql_last(&ctl_arenas->destroyed, destroyed_link)) != NULL) { ql_remove(&ctl_arenas->destroyed, ctl_arena, destroyed_link); arena_ind = ctl_arena->arena_ind; } else { arena_ind = ctl_arenas->narenas; } /* Trigger stats allocation. */ - if (arenas_i_impl(tsdn, arena_ind, false, true) == NULL) { + if (arenas_i_impl(tsd, arena_ind, false, true) == NULL) { return UINT_MAX; } /* Initialize new arena. */ - if (arena_init(tsdn, arena_ind, extent_hooks) == NULL) { + if (arena_init(tsd_tsdn(tsd), arena_ind, extent_hooks) == NULL) { return UINT_MAX; } if (arena_ind == ctl_arenas->narenas) { ctl_arenas->narenas++; } return arena_ind; } static void ctl_background_thread_stats_read(tsdn_t *tsdn) { background_thread_stats_t *stats = &ctl_stats->background_thread; if (!have_background_thread || background_thread_stats_read(tsdn, stats)) { memset(stats, 0, sizeof(background_thread_stats_t)); nstime_init(&stats->run_interval, 0); } } static void ctl_refresh(tsdn_t *tsdn) { unsigned i; ctl_arena_t *ctl_sarena = arenas_i(MALLCTL_ARENAS_ALL); VARIABLE_ARRAY(arena_t *, tarenas, ctl_arenas->narenas); /* * Clear sum stats, since they will be merged into by * ctl_arena_refresh(). */ ctl_arena_clear(ctl_sarena); for (i = 0; i < ctl_arenas->narenas; i++) { tarenas[i] = arena_get(tsdn, i, false); } for (i = 0; i < ctl_arenas->narenas; i++) { ctl_arena_t *ctl_arena = arenas_i(i); bool initialized = (tarenas[i] != NULL); ctl_arena->initialized = initialized; if (initialized) { ctl_arena_refresh(tsdn, tarenas[i], ctl_sarena, i, false); } } if (config_stats) { ctl_stats->allocated = ctl_sarena->astats->allocated_small + atomic_load_zu(&ctl_sarena->astats->astats.allocated_large, ATOMIC_RELAXED); ctl_stats->active = (ctl_sarena->pactive << LG_PAGE); ctl_stats->metadata = atomic_load_zu( &ctl_sarena->astats->astats.base, ATOMIC_RELAXED) + atomic_load_zu(&ctl_sarena->astats->astats.internal, ATOMIC_RELAXED); ctl_stats->resident = atomic_load_zu( &ctl_sarena->astats->astats.resident, ATOMIC_RELAXED); ctl_stats->mapped = atomic_load_zu( &ctl_sarena->astats->astats.mapped, ATOMIC_RELAXED); ctl_stats->retained = atomic_load_zu( &ctl_sarena->astats->astats.retained, ATOMIC_RELAXED); ctl_background_thread_stats_read(tsdn); #define READ_GLOBAL_MUTEX_PROF_DATA(i, mtx) \ malloc_mutex_lock(tsdn, &mtx); \ malloc_mutex_prof_read(tsdn, &ctl_stats->mutex_prof_data[i], &mtx); \ malloc_mutex_unlock(tsdn, &mtx); if (config_prof && opt_prof) { READ_GLOBAL_MUTEX_PROF_DATA(global_prof_mutex_prof, bt2gctx_mtx); } if (have_background_thread) { READ_GLOBAL_MUTEX_PROF_DATA( global_prof_mutex_background_thread, background_thread_lock); } else { memset(&ctl_stats->mutex_prof_data[ global_prof_mutex_background_thread], 0, sizeof(mutex_prof_data_t)); } /* We own ctl mutex already. 
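 * ctl_refresh() is only reached with ctl_mtx held, so its profiling data
 * is read directly here rather than via READ_GLOBAL_MUTEX_PROF_DATA(),
 * which would try to re-acquire the mutex.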
*/ malloc_mutex_prof_read(tsdn, &ctl_stats->mutex_prof_data[global_prof_mutex_ctl], &ctl_mtx); #undef READ_GLOBAL_MUTEX_PROF_DATA } ctl_arenas->epoch++; } static bool -ctl_init(tsdn_t *tsdn) { +ctl_init(tsd_t *tsd) { bool ret; + tsdn_t *tsdn = tsd_tsdn(tsd); malloc_mutex_lock(tsdn, &ctl_mtx); if (!ctl_initialized) { ctl_arena_t *ctl_sarena, *ctl_darena; unsigned i; /* * Allocate demand-zeroed space for pointers to the full * range of supported arena indices. */ if (ctl_arenas == NULL) { ctl_arenas = (ctl_arenas_t *)base_alloc(tsdn, b0get(), sizeof(ctl_arenas_t), QUANTUM); if (ctl_arenas == NULL) { ret = true; goto label_return; } } if (config_stats && ctl_stats == NULL) { ctl_stats = (ctl_stats_t *)base_alloc(tsdn, b0get(), sizeof(ctl_stats_t), QUANTUM); if (ctl_stats == NULL) { ret = true; goto label_return; } } /* * Allocate space for the current full range of arenas * here rather than doing it lazily elsewhere, in order * to limit when OOM-caused errors can occur. */ - if ((ctl_sarena = arenas_i_impl(tsdn, MALLCTL_ARENAS_ALL, false, + if ((ctl_sarena = arenas_i_impl(tsd, MALLCTL_ARENAS_ALL, false, true)) == NULL) { ret = true; goto label_return; } ctl_sarena->initialized = true; - if ((ctl_darena = arenas_i_impl(tsdn, MALLCTL_ARENAS_DESTROYED, + if ((ctl_darena = arenas_i_impl(tsd, MALLCTL_ARENAS_DESTROYED, false, true)) == NULL) { ret = true; goto label_return; } ctl_arena_clear(ctl_darena); /* * Don't toggle ctl_darena to initialized until an arena is * actually destroyed, so that arena..initialized can be used * to query whether the stats are relevant. */ ctl_arenas->narenas = narenas_total_get(); for (i = 0; i < ctl_arenas->narenas; i++) { - if (arenas_i_impl(tsdn, i, false, true) == NULL) { + if (arenas_i_impl(tsd, i, false, true) == NULL) { ret = true; goto label_return; } } ql_new(&ctl_arenas->destroyed); ctl_refresh(tsdn); ctl_initialized = true; } ret = false; label_return: malloc_mutex_unlock(tsdn, &ctl_mtx); return ret; } static int ctl_lookup(tsdn_t *tsdn, const char *name, ctl_node_t const **nodesp, size_t *mibp, size_t *depthp) { int ret; const char *elm, *tdot, *dot; size_t elen, i, j; const ctl_named_node_t *node; elm = name; /* Equivalent to strchrnul(). */ dot = ((tdot = strchr(elm, '.')) != NULL) ? tdot : strchr(elm, '\0'); elen = (size_t)((uintptr_t)dot - (uintptr_t)elm); if (elen == 0) { ret = ENOENT; goto label_return; } node = super_root_node; for (i = 0; i < *depthp; i++) { assert(node); assert(node->nchildren > 0); if (ctl_named_node(node->children) != NULL) { const ctl_named_node_t *pnode = node; /* Children are named. */ for (j = 0; j < node->nchildren; j++) { const ctl_named_node_t *child = ctl_named_children(node, j); if (strlen(child->name) == elen && strncmp(elm, child->name, elen) == 0) { node = child; if (nodesp != NULL) { nodesp[i] = (const ctl_node_t *)node; } mibp[i] = j; break; } } if (node == pnode) { ret = ENOENT; goto label_return; } } else { uintmax_t index; const ctl_indexed_node_t *inode; /* Children are indexed. */ index = malloc_strtoumax(elm, NULL, 10); if (index == UINTMAX_MAX || index > SIZE_T_MAX) { ret = ENOENT; goto label_return; } inode = ctl_indexed_node(node->children); node = inode->index(tsdn, mibp, *depthp, (size_t)index); if (node == NULL) { ret = ENOENT; goto label_return; } if (nodesp != NULL) { nodesp[i] = (const ctl_node_t *)node; } mibp[i] = (size_t)index; } if (node->ctl != NULL) { /* Terminal node. */ if (*dot != '\0') { /* * The name contains more elements than are * in this path through the tree. 
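 * (For example, a lookup of "thread.arena.0" fails here because
 * "thread.arena" is already a terminal node.)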
*/ ret = ENOENT; goto label_return; } /* Complete lookup successful. */ *depthp = i + 1; break; } /* Update elm. */ if (*dot == '\0') { /* No more elements. */ ret = ENOENT; goto label_return; } elm = &dot[1]; dot = ((tdot = strchr(elm, '.')) != NULL) ? tdot : strchr(elm, '\0'); elen = (size_t)((uintptr_t)dot - (uintptr_t)elm); } ret = 0; label_return: return ret; } int ctl_byname(tsd_t *tsd, const char *name, void *oldp, size_t *oldlenp, void *newp, size_t newlen) { int ret; size_t depth; ctl_node_t const *nodes[CTL_MAX_DEPTH]; size_t mib[CTL_MAX_DEPTH]; const ctl_named_node_t *node; - if (!ctl_initialized && ctl_init(tsd_tsdn(tsd))) { + if (!ctl_initialized && ctl_init(tsd)) { ret = EAGAIN; goto label_return; } depth = CTL_MAX_DEPTH; ret = ctl_lookup(tsd_tsdn(tsd), name, nodes, mib, &depth); if (ret != 0) { goto label_return; } node = ctl_named_node(nodes[depth-1]); if (node != NULL && node->ctl) { ret = node->ctl(tsd, mib, depth, oldp, oldlenp, newp, newlen); } else { /* The name refers to a partial path through the ctl tree. */ ret = ENOENT; } label_return: return(ret); } int -ctl_nametomib(tsdn_t *tsdn, const char *name, size_t *mibp, size_t *miblenp) { +ctl_nametomib(tsd_t *tsd, const char *name, size_t *mibp, size_t *miblenp) { int ret; - if (!ctl_initialized && ctl_init(tsdn)) { + if (!ctl_initialized && ctl_init(tsd)) { ret = EAGAIN; goto label_return; } - ret = ctl_lookup(tsdn, name, NULL, mibp, miblenp); + ret = ctl_lookup(tsd_tsdn(tsd), name, NULL, mibp, miblenp); label_return: return(ret); } int ctl_bymib(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, void *newp, size_t newlen) { int ret; const ctl_named_node_t *node; size_t i; - if (!ctl_initialized && ctl_init(tsd_tsdn(tsd))) { + if (!ctl_initialized && ctl_init(tsd)) { ret = EAGAIN; goto label_return; } /* Iterate down the tree. */ node = super_root_node; for (i = 0; i < miblen; i++) { assert(node); assert(node->nchildren > 0); if (ctl_named_node(node->children) != NULL) { /* Children are named. */ if (node->nchildren <= mib[i]) { ret = ENOENT; goto label_return; } node = ctl_named_children(node, mib[i]); } else { const ctl_indexed_node_t *inode; /* Indexed element. */ inode = ctl_indexed_node(node->children); node = inode->index(tsd_tsdn(tsd), mib, miblen, mib[i]); if (node == NULL) { ret = ENOENT; goto label_return; } } } /* Call the ctl function. */ if (node && node->ctl) { ret = node->ctl(tsd, mib, miblen, oldp, oldlenp, newp, newlen); } else { /* Partial MIB. */ ret = ENOENT; } label_return: return(ret); } bool ctl_boot(void) { if (malloc_mutex_init(&ctl_mtx, "ctl", WITNESS_RANK_CTL, malloc_mutex_rank_exclusive)) { return true; } ctl_initialized = false; return false; } void ctl_prefork(tsdn_t *tsdn) { malloc_mutex_prefork(tsdn, &ctl_mtx); } void ctl_postfork_parent(tsdn_t *tsdn) { malloc_mutex_postfork_parent(tsdn, &ctl_mtx); } void ctl_postfork_child(tsdn_t *tsdn) { malloc_mutex_postfork_child(tsdn, &ctl_mtx); } /******************************************************************************/ /* *_ctl() functions. 
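 *
 * The macros below implement the old-value/new-value handling shared by the
 * handlers: READ() copies a value out through oldp/oldlenp, WRITE() copies a
 * new value in from newp, and both bail out with EINVAL on a size mismatch
 * (READ() still copies as many bytes as fit); READONLY()/WRITEONLY() enforce
 * the access direction with EPERM. Callers reach these handlers through
 * mallctl(); an illustrative caller-side sketch, not part of this file:
 *
 *   unsigned n; size_t sz = sizeof(n);
 *   mallctl("opt.narenas", &n, &sz, NULL, 0);           // read-only node
 *   bool enable = true;
 *   mallctl("background_thread", NULL, NULL, &enable,   // writable node
 *       sizeof(enable));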
*/ #define READONLY() do { \ if (newp != NULL || newlen != 0) { \ ret = EPERM; \ goto label_return; \ } \ } while (0) #define WRITEONLY() do { \ if (oldp != NULL || oldlenp != NULL) { \ ret = EPERM; \ goto label_return; \ } \ } while (0) #define READ_XOR_WRITE() do { \ if ((oldp != NULL && oldlenp != NULL) && (newp != NULL || \ newlen != 0)) { \ ret = EPERM; \ goto label_return; \ } \ } while (0) #define READ(v, t) do { \ if (oldp != NULL && oldlenp != NULL) { \ if (*oldlenp != sizeof(t)) { \ size_t copylen = (sizeof(t) <= *oldlenp) \ ? sizeof(t) : *oldlenp; \ memcpy(oldp, (void *)&(v), copylen); \ ret = EINVAL; \ goto label_return; \ } \ *(t *)oldp = (v); \ } \ } while (0) #define WRITE(v, t) do { \ if (newp != NULL) { \ if (newlen != sizeof(t)) { \ ret = EINVAL; \ goto label_return; \ } \ (v) = *(t *)newp; \ } \ } while (0) #define MIB_UNSIGNED(v, i) do { \ if (mib[i] > UINT_MAX) { \ ret = EFAULT; \ goto label_return; \ } \ v = (unsigned)mib[i]; \ } while (0) /* * There's a lot of code duplication in the following macros due to limitations * in how nested cpp macros are expanded. */ #define CTL_RO_CLGEN(c, l, n, v, t) \ static int \ n##_ctl(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp, \ size_t *oldlenp, void *newp, size_t newlen) { \ int ret; \ t oldval; \ \ if (!(c)) { \ return ENOENT; \ } \ if (l) { \ malloc_mutex_lock(tsd_tsdn(tsd), &ctl_mtx); \ } \ READONLY(); \ oldval = (v); \ READ(oldval, t); \ \ ret = 0; \ label_return: \ if (l) { \ malloc_mutex_unlock(tsd_tsdn(tsd), &ctl_mtx); \ } \ return ret; \ } #define CTL_RO_CGEN(c, n, v, t) \ static int \ n##_ctl(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp, \ size_t *oldlenp, void *newp, size_t newlen) { \ int ret; \ t oldval; \ \ if (!(c)) { \ return ENOENT; \ } \ malloc_mutex_lock(tsd_tsdn(tsd), &ctl_mtx); \ READONLY(); \ oldval = (v); \ READ(oldval, t); \ \ ret = 0; \ label_return: \ malloc_mutex_unlock(tsd_tsdn(tsd), &ctl_mtx); \ return ret; \ } #define CTL_RO_GEN(n, v, t) \ static int \ n##_ctl(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp, \ size_t *oldlenp, void *newp, size_t newlen) { \ int ret; \ t oldval; \ \ malloc_mutex_lock(tsd_tsdn(tsd), &ctl_mtx); \ READONLY(); \ oldval = (v); \ READ(oldval, t); \ \ ret = 0; \ label_return: \ malloc_mutex_unlock(tsd_tsdn(tsd), &ctl_mtx); \ return ret; \ } /* * ctl_mtx is not acquired, under the assumption that no pertinent data will * mutate during the call. 
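 * (This is what distinguishes the _NL_ ("no lock") generators below from
 * CTL_RO_GEN/CTL_RO_CGEN above, which take ctl_mtx around the read.)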
*/ #define CTL_RO_NL_CGEN(c, n, v, t) \ static int \ n##_ctl(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp, \ size_t *oldlenp, void *newp, size_t newlen) { \ int ret; \ t oldval; \ \ if (!(c)) { \ return ENOENT; \ } \ READONLY(); \ oldval = (v); \ READ(oldval, t); \ \ ret = 0; \ label_return: \ return ret; \ } #define CTL_RO_NL_GEN(n, v, t) \ static int \ n##_ctl(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp, \ size_t *oldlenp, void *newp, size_t newlen) { \ int ret; \ t oldval; \ \ READONLY(); \ oldval = (v); \ READ(oldval, t); \ \ ret = 0; \ label_return: \ return ret; \ } #define CTL_TSD_RO_NL_CGEN(c, n, m, t) \ static int \ n##_ctl(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp, \ size_t *oldlenp, void *newp, size_t newlen) { \ int ret; \ t oldval; \ \ if (!(c)) { \ return ENOENT; \ } \ READONLY(); \ oldval = (m(tsd)); \ READ(oldval, t); \ \ ret = 0; \ label_return: \ return ret; \ } #define CTL_RO_CONFIG_GEN(n, t) \ static int \ n##_ctl(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp, \ size_t *oldlenp, void *newp, size_t newlen) { \ int ret; \ t oldval; \ \ READONLY(); \ oldval = n; \ READ(oldval, t); \ \ ret = 0; \ label_return: \ return ret; \ } /******************************************************************************/ CTL_RO_NL_GEN(version, JEMALLOC_VERSION, const char *) static int epoch_ctl(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, void *newp, size_t newlen) { int ret; UNUSED uint64_t newval; malloc_mutex_lock(tsd_tsdn(tsd), &ctl_mtx); WRITE(newval, uint64_t); if (newp != NULL) { ctl_refresh(tsd_tsdn(tsd)); } READ(ctl_arenas->epoch, uint64_t); ret = 0; label_return: malloc_mutex_unlock(tsd_tsdn(tsd), &ctl_mtx); return ret; } static int background_thread_ctl(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, void *newp, size_t newlen) { int ret; bool oldval; if (!have_background_thread) { return ENOENT; } background_thread_ctl_init(tsd_tsdn(tsd)); malloc_mutex_lock(tsd_tsdn(tsd), &ctl_mtx); malloc_mutex_lock(tsd_tsdn(tsd), &background_thread_lock); if (newp == NULL) { oldval = background_thread_enabled(); READ(oldval, bool); } else { if (newlen != sizeof(bool)) { ret = EINVAL; goto label_return; } oldval = background_thread_enabled(); READ(oldval, bool); bool newval = *(bool *)newp; if (newval == oldval) { ret = 0; goto label_return; } background_thread_enabled_set(tsd_tsdn(tsd), newval); if (newval) { if (!can_enable_background_thread) { malloc_printf(": Error in dlsym(" "RTLD_NEXT, \"pthread_create\"). 
Cannot " "enable background_thread\n"); ret = EFAULT; goto label_return; } if (background_threads_enable(tsd)) { ret = EFAULT; goto label_return; } } else { if (background_threads_disable(tsd)) { ret = EFAULT; goto label_return; } } } ret = 0; label_return: malloc_mutex_unlock(tsd_tsdn(tsd), &background_thread_lock); malloc_mutex_unlock(tsd_tsdn(tsd), &ctl_mtx); return ret; } /******************************************************************************/ CTL_RO_CONFIG_GEN(config_cache_oblivious, bool) CTL_RO_CONFIG_GEN(config_debug, bool) CTL_RO_CONFIG_GEN(config_fill, bool) CTL_RO_CONFIG_GEN(config_lazy_lock, bool) CTL_RO_CONFIG_GEN(config_malloc_conf, const char *) CTL_RO_CONFIG_GEN(config_prof, bool) CTL_RO_CONFIG_GEN(config_prof_libgcc, bool) CTL_RO_CONFIG_GEN(config_prof_libunwind, bool) CTL_RO_CONFIG_GEN(config_stats, bool) CTL_RO_CONFIG_GEN(config_thp, bool) CTL_RO_CONFIG_GEN(config_utrace, bool) CTL_RO_CONFIG_GEN(config_xmalloc, bool) /******************************************************************************/ CTL_RO_NL_GEN(opt_abort, opt_abort, bool) CTL_RO_NL_GEN(opt_abort_conf, opt_abort_conf, bool) CTL_RO_NL_GEN(opt_retain, opt_retain, bool) CTL_RO_NL_GEN(opt_dss, opt_dss, const char *) CTL_RO_NL_GEN(opt_narenas, opt_narenas, unsigned) CTL_RO_NL_GEN(opt_percpu_arena, percpu_arena_mode_names[opt_percpu_arena], const char *) CTL_RO_NL_GEN(opt_background_thread, opt_background_thread, bool) CTL_RO_NL_GEN(opt_dirty_decay_ms, opt_dirty_decay_ms, ssize_t) CTL_RO_NL_GEN(opt_muzzy_decay_ms, opt_muzzy_decay_ms, ssize_t) CTL_RO_NL_GEN(opt_stats_print, opt_stats_print, bool) CTL_RO_NL_GEN(opt_stats_print_opts, opt_stats_print_opts, const char *) CTL_RO_NL_CGEN(config_fill, opt_junk, opt_junk, const char *) CTL_RO_NL_CGEN(config_fill, opt_zero, opt_zero, bool) CTL_RO_NL_CGEN(config_utrace, opt_utrace, opt_utrace, bool) CTL_RO_NL_CGEN(config_xmalloc, opt_xmalloc, opt_xmalloc, bool) CTL_RO_NL_GEN(opt_tcache, opt_tcache, bool) CTL_RO_NL_GEN(opt_lg_tcache_max, opt_lg_tcache_max, ssize_t) CTL_RO_NL_CGEN(config_prof, opt_prof, opt_prof, bool) CTL_RO_NL_CGEN(config_prof, opt_prof_prefix, opt_prof_prefix, const char *) CTL_RO_NL_CGEN(config_prof, opt_prof_active, opt_prof_active, bool) CTL_RO_NL_CGEN(config_prof, opt_prof_thread_active_init, opt_prof_thread_active_init, bool) CTL_RO_NL_CGEN(config_prof, opt_lg_prof_sample, opt_lg_prof_sample, size_t) CTL_RO_NL_CGEN(config_prof, opt_prof_accum, opt_prof_accum, bool) CTL_RO_NL_CGEN(config_prof, opt_lg_prof_interval, opt_lg_prof_interval, ssize_t) CTL_RO_NL_CGEN(config_prof, opt_prof_gdump, opt_prof_gdump, bool) CTL_RO_NL_CGEN(config_prof, opt_prof_final, opt_prof_final, bool) CTL_RO_NL_CGEN(config_prof, opt_prof_leak, opt_prof_leak, bool) /******************************************************************************/ static int thread_arena_ctl(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, void *newp, size_t newlen) { int ret; arena_t *oldarena; unsigned newind, oldind; oldarena = arena_choose(tsd, NULL); if (oldarena == NULL) { return EAGAIN; } newind = oldind = arena_ind_get(oldarena); WRITE(newind, unsigned); READ(oldind, unsigned); if (newind != oldind) { arena_t *newarena; if (newind >= narenas_total_get()) { /* New arena index is out of range. */ ret = EFAULT; goto label_return; } if (have_percpu_arena && PERCPU_ARENA_ENABLED(opt_percpu_arena)) { if (newind < percpu_arena_ind_limit(opt_percpu_arena)) { /* * If perCPU arena is enabled, thread_arena * control is not allowed for the auto arena * range. 
*/ ret = EPERM; goto label_return; } } /* Initialize arena if necessary. */ newarena = arena_get(tsd_tsdn(tsd), newind, true); if (newarena == NULL) { ret = EAGAIN; goto label_return; } /* Set new arena/tcache associations. */ arena_migrate(tsd, oldind, newind); if (tcache_available(tsd)) { tcache_arena_reassociate(tsd_tsdn(tsd), tsd_tcachep_get(tsd), newarena); } } ret = 0; label_return: return ret; } CTL_TSD_RO_NL_CGEN(config_stats, thread_allocated, tsd_thread_allocated_get, uint64_t) CTL_TSD_RO_NL_CGEN(config_stats, thread_allocatedp, tsd_thread_allocatedp_get, uint64_t *) CTL_TSD_RO_NL_CGEN(config_stats, thread_deallocated, tsd_thread_deallocated_get, uint64_t) CTL_TSD_RO_NL_CGEN(config_stats, thread_deallocatedp, tsd_thread_deallocatedp_get, uint64_t *) static int thread_tcache_enabled_ctl(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, void *newp, size_t newlen) { int ret; bool oldval; oldval = tcache_enabled_get(tsd); if (newp != NULL) { if (newlen != sizeof(bool)) { ret = EINVAL; goto label_return; } tcache_enabled_set(tsd, *(bool *)newp); } READ(oldval, bool); ret = 0; label_return: return ret; } static int thread_tcache_flush_ctl(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, void *newp, size_t newlen) { int ret; if (!tcache_available(tsd)) { ret = EFAULT; goto label_return; } READONLY(); WRITEONLY(); - tcache_flush(); + tcache_flush(tsd); ret = 0; label_return: return ret; } static int thread_prof_name_ctl(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, void *newp, size_t newlen) { int ret; if (!config_prof) { return ENOENT; } READ_XOR_WRITE(); if (newp != NULL) { if (newlen != sizeof(const char *)) { ret = EINVAL; goto label_return; } if ((ret = prof_thread_name_set(tsd, *(const char **)newp)) != 0) { goto label_return; } } else { const char *oldname = prof_thread_name_get(tsd); READ(oldname, const char *); } ret = 0; label_return: return ret; } static int thread_prof_active_ctl(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, void *newp, size_t newlen) { int ret; bool oldval; if (!config_prof) { return ENOENT; } oldval = prof_thread_active_get(tsd); if (newp != NULL) { if (newlen != sizeof(bool)) { ret = EINVAL; goto label_return; } if (prof_thread_active_set(tsd, *(bool *)newp)) { ret = EAGAIN; goto label_return; } } READ(oldval, bool); ret = 0; label_return: return ret; } /******************************************************************************/ static int tcache_create_ctl(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, void *newp, size_t newlen) { int ret; unsigned tcache_ind; READONLY(); if (tcaches_create(tsd, &tcache_ind)) { ret = EFAULT; goto label_return; } READ(tcache_ind, unsigned); ret = 0; label_return: return ret; } static int tcache_flush_ctl(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, void *newp, size_t newlen) { int ret; unsigned tcache_ind; WRITEONLY(); tcache_ind = UINT_MAX; WRITE(tcache_ind, unsigned); if (tcache_ind == UINT_MAX) { ret = EFAULT; goto label_return; } tcaches_flush(tsd, tcache_ind); ret = 0; label_return: return ret; } static int tcache_destroy_ctl(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, void *newp, size_t newlen) { int ret; unsigned tcache_ind; WRITEONLY(); tcache_ind = UINT_MAX; WRITE(tcache_ind, unsigned); if (tcache_ind == UINT_MAX) { ret = EFAULT; goto label_return; } tcaches_destroy(tsd, tcache_ind); ret = 0; label_return: return 
ret; } /******************************************************************************/ static int arena_i_initialized_ctl(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, void *newp, size_t newlen) { int ret; tsdn_t *tsdn = tsd_tsdn(tsd); unsigned arena_ind; bool initialized; READONLY(); MIB_UNSIGNED(arena_ind, 1); malloc_mutex_lock(tsdn, &ctl_mtx); initialized = arenas_i(arena_ind)->initialized; malloc_mutex_unlock(tsdn, &ctl_mtx); READ(initialized, bool); ret = 0; label_return: return ret; } static void arena_i_decay(tsdn_t *tsdn, unsigned arena_ind, bool all) { malloc_mutex_lock(tsdn, &ctl_mtx); { unsigned narenas = ctl_arenas->narenas; /* * Access via index narenas is deprecated, and scheduled for * removal in 6.0.0. */ if (arena_ind == MALLCTL_ARENAS_ALL || arena_ind == narenas) { unsigned i; VARIABLE_ARRAY(arena_t *, tarenas, narenas); for (i = 0; i < narenas; i++) { tarenas[i] = arena_get(tsdn, i, false); } /* * No further need to hold ctl_mtx, since narenas and * tarenas contain everything needed below. */ malloc_mutex_unlock(tsdn, &ctl_mtx); for (i = 0; i < narenas; i++) { if (tarenas[i] != NULL) { arena_decay(tsdn, tarenas[i], false, all); } } } else { arena_t *tarena; assert(arena_ind < narenas); tarena = arena_get(tsdn, arena_ind, false); /* No further need to hold ctl_mtx. */ malloc_mutex_unlock(tsdn, &ctl_mtx); if (tarena != NULL) { arena_decay(tsdn, tarena, false, all); } } } } static int arena_i_decay_ctl(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, void *newp, size_t newlen) { int ret; unsigned arena_ind; READONLY(); WRITEONLY(); MIB_UNSIGNED(arena_ind, 1); arena_i_decay(tsd_tsdn(tsd), arena_ind, false); ret = 0; label_return: return ret; } static int arena_i_purge_ctl(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, void *newp, size_t newlen) { int ret; unsigned arena_ind; READONLY(); WRITEONLY(); MIB_UNSIGNED(arena_ind, 1); arena_i_decay(tsd_tsdn(tsd), arena_ind, true); ret = 0; label_return: return ret; } static int arena_i_reset_destroy_helper(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, void *newp, size_t newlen, unsigned *arena_ind, arena_t **arena) { int ret; READONLY(); WRITEONLY(); MIB_UNSIGNED(*arena_ind, 1); *arena = arena_get(tsd_tsdn(tsd), *arena_ind, false); if (*arena == NULL || arena_is_auto(*arena)) { ret = EFAULT; goto label_return; } ret = 0; label_return: return ret; } static void arena_reset_prepare_background_thread(tsd_t *tsd, unsigned arena_ind) { /* Temporarily disable the background thread during arena reset. 
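 * The background thread responsible for this arena (chosen by
 * arena_ind % ncpus) would otherwise keep purging the arena's extents
 * asynchronously while arena_reset() or arena_destroy() tears them down;
 * arena_reset_finish_background_thread() resumes it afterwards.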
*/ if (have_background_thread) { malloc_mutex_lock(tsd_tsdn(tsd), &background_thread_lock); if (background_thread_enabled()) { unsigned ind = arena_ind % ncpus; background_thread_info_t *info = &background_thread_info[ind]; assert(info->state == background_thread_started); malloc_mutex_lock(tsd_tsdn(tsd), &info->mtx); info->state = background_thread_paused; malloc_mutex_unlock(tsd_tsdn(tsd), &info->mtx); } } } static void arena_reset_finish_background_thread(tsd_t *tsd, unsigned arena_ind) { if (have_background_thread) { if (background_thread_enabled()) { unsigned ind = arena_ind % ncpus; background_thread_info_t *info = &background_thread_info[ind]; - assert(info->state = background_thread_paused); + assert(info->state == background_thread_paused); malloc_mutex_lock(tsd_tsdn(tsd), &info->mtx); info->state = background_thread_started; malloc_mutex_unlock(tsd_tsdn(tsd), &info->mtx); } malloc_mutex_unlock(tsd_tsdn(tsd), &background_thread_lock); } } static int arena_i_reset_ctl(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, void *newp, size_t newlen) { int ret; unsigned arena_ind; arena_t *arena; ret = arena_i_reset_destroy_helper(tsd, mib, miblen, oldp, oldlenp, newp, newlen, &arena_ind, &arena); if (ret != 0) { return ret; } arena_reset_prepare_background_thread(tsd, arena_ind); arena_reset(tsd, arena); arena_reset_finish_background_thread(tsd, arena_ind); return ret; } static int arena_i_destroy_ctl(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, void *newp, size_t newlen) { int ret; unsigned arena_ind; arena_t *arena; ctl_arena_t *ctl_darena, *ctl_arena; ret = arena_i_reset_destroy_helper(tsd, mib, miblen, oldp, oldlenp, newp, newlen, &arena_ind, &arena); if (ret != 0) { goto label_return; } if (arena_nthreads_get(arena, false) != 0 || arena_nthreads_get(arena, true) != 0) { ret = EFAULT; goto label_return; } arena_reset_prepare_background_thread(tsd, arena_ind); /* Merge stats after resetting and purging arena. */ arena_reset(tsd, arena); arena_decay(tsd_tsdn(tsd), arena, false, true); ctl_darena = arenas_i(MALLCTL_ARENAS_DESTROYED); ctl_darena->initialized = true; ctl_arena_refresh(tsd_tsdn(tsd), arena, ctl_darena, arena_ind, true); /* Destroy arena. */ arena_destroy(tsd, arena); ctl_arena = arenas_i(arena_ind); ctl_arena->initialized = false; /* Record arena index for later recycling via arenas.create. */ ql_elm_new(ctl_arena, destroyed_link); ql_tail_insert(&ctl_arenas->destroyed, ctl_arena, destroyed_link); arena_reset_finish_background_thread(tsd, arena_ind); assert(ret == 0); label_return: return ret; } static int arena_i_dss_ctl(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, void *newp, size_t newlen) { int ret; const char *dss = NULL; unsigned arena_ind; dss_prec_t dss_prec_old = dss_prec_limit; dss_prec_t dss_prec = dss_prec_limit; malloc_mutex_lock(tsd_tsdn(tsd), &ctl_mtx); WRITE(dss, const char *); MIB_UNSIGNED(arena_ind, 1); if (dss != NULL) { int i; bool match = false; for (i = 0; i < dss_prec_limit; i++) { if (strcmp(dss_prec_names[i], dss) == 0) { dss_prec = i; match = true; break; } } if (!match) { ret = EINVAL; goto label_return; } } /* * Access via index narenas is deprecated, and scheduled for removal in * 6.0.0. 
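 * Historically, passing the current "arenas.narenas" value as the index
 * meant "all arenas"; MALLCTL_ARENAS_ALL is the supported replacement.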
*/ if (arena_ind == MALLCTL_ARENAS_ALL || arena_ind == ctl_arenas->narenas) { if (dss_prec != dss_prec_limit && extent_dss_prec_set(dss_prec)) { ret = EFAULT; goto label_return; } dss_prec_old = extent_dss_prec_get(); } else { arena_t *arena = arena_get(tsd_tsdn(tsd), arena_ind, false); if (arena == NULL || (dss_prec != dss_prec_limit && arena_dss_prec_set(arena, dss_prec))) { ret = EFAULT; goto label_return; } dss_prec_old = arena_dss_prec_get(arena); } dss = dss_prec_names[dss_prec_old]; READ(dss, const char *); ret = 0; label_return: malloc_mutex_unlock(tsd_tsdn(tsd), &ctl_mtx); return ret; } static int arena_i_decay_ms_ctl_impl(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, void *newp, size_t newlen, bool dirty) { int ret; unsigned arena_ind; arena_t *arena; MIB_UNSIGNED(arena_ind, 1); arena = arena_get(tsd_tsdn(tsd), arena_ind, false); if (arena == NULL) { ret = EFAULT; goto label_return; } if (oldp != NULL && oldlenp != NULL) { size_t oldval = dirty ? arena_dirty_decay_ms_get(arena) : arena_muzzy_decay_ms_get(arena); READ(oldval, ssize_t); } if (newp != NULL) { if (newlen != sizeof(ssize_t)) { ret = EINVAL; goto label_return; } if (dirty ? arena_dirty_decay_ms_set(tsd_tsdn(tsd), arena, *(ssize_t *)newp) : arena_muzzy_decay_ms_set(tsd_tsdn(tsd), arena, *(ssize_t *)newp)) { ret = EFAULT; goto label_return; } } ret = 0; label_return: return ret; } static int arena_i_dirty_decay_ms_ctl(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, void *newp, size_t newlen) { return arena_i_decay_ms_ctl_impl(tsd, mib, miblen, oldp, oldlenp, newp, newlen, true); } static int arena_i_muzzy_decay_ms_ctl(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, void *newp, size_t newlen) { return arena_i_decay_ms_ctl_impl(tsd, mib, miblen, oldp, oldlenp, newp, newlen, false); } static int arena_i_extent_hooks_ctl(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, void *newp, size_t newlen) { int ret; unsigned arena_ind; arena_t *arena; malloc_mutex_lock(tsd_tsdn(tsd), &ctl_mtx); MIB_UNSIGNED(arena_ind, 1); if (arena_ind < narenas_total_get() && (arena = arena_get(tsd_tsdn(tsd), arena_ind, false)) != NULL) { if (newp != NULL) { extent_hooks_t *old_extent_hooks; extent_hooks_t *new_extent_hooks JEMALLOC_CC_SILENCE_INIT(NULL); WRITE(new_extent_hooks, extent_hooks_t *); old_extent_hooks = extent_hooks_set(tsd, arena, new_extent_hooks); READ(old_extent_hooks, extent_hooks_t *); } else { extent_hooks_t *old_extent_hooks = extent_hooks_get(arena); READ(old_extent_hooks, extent_hooks_t *); } } else { ret = EFAULT; goto label_return; } ret = 0; label_return: malloc_mutex_unlock(tsd_tsdn(tsd), &ctl_mtx); return ret; } static const ctl_named_node_t * arena_i_index(tsdn_t *tsdn, const size_t *mib, size_t miblen, size_t i) { const ctl_named_node_t *ret; malloc_mutex_lock(tsdn, &ctl_mtx); switch (i) { case MALLCTL_ARENAS_ALL: case MALLCTL_ARENAS_DESTROYED: break; default: if (i > ctl_arenas->narenas) { ret = NULL; goto label_return; } break; } ret = super_arena_i_node; label_return: malloc_mutex_unlock(tsdn, &ctl_mtx); return ret; } /******************************************************************************/ static int arenas_narenas_ctl(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, void *newp, size_t newlen) { int ret; unsigned narenas; malloc_mutex_lock(tsd_tsdn(tsd), &ctl_mtx); READONLY(); if (*oldlenp != sizeof(unsigned)) { ret = EINVAL; goto label_return; } narenas = ctl_arenas->narenas; 
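/* Read under ctl_mtx so the count is consistent with the current stats epoch. */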
READ(narenas, unsigned); ret = 0; label_return: malloc_mutex_unlock(tsd_tsdn(tsd), &ctl_mtx); return ret; } static int arenas_decay_ms_ctl_impl(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, void *newp, size_t newlen, bool dirty) { int ret; if (oldp != NULL && oldlenp != NULL) { size_t oldval = (dirty ? arena_dirty_decay_ms_default_get() : arena_muzzy_decay_ms_default_get()); READ(oldval, ssize_t); } if (newp != NULL) { if (newlen != sizeof(ssize_t)) { ret = EINVAL; goto label_return; } if (dirty ? arena_dirty_decay_ms_default_set(*(ssize_t *)newp) : arena_muzzy_decay_ms_default_set(*(ssize_t *)newp)) { ret = EFAULT; goto label_return; } } ret = 0; label_return: return ret; } static int arenas_dirty_decay_ms_ctl(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, void *newp, size_t newlen) { return arenas_decay_ms_ctl_impl(tsd, mib, miblen, oldp, oldlenp, newp, newlen, true); } static int arenas_muzzy_decay_ms_ctl(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, void *newp, size_t newlen) { return arenas_decay_ms_ctl_impl(tsd, mib, miblen, oldp, oldlenp, newp, newlen, false); } CTL_RO_NL_GEN(arenas_quantum, QUANTUM, size_t) CTL_RO_NL_GEN(arenas_page, PAGE, size_t) CTL_RO_NL_GEN(arenas_tcache_max, tcache_maxclass, size_t) CTL_RO_NL_GEN(arenas_nbins, NBINS, unsigned) CTL_RO_NL_GEN(arenas_nhbins, nhbins, unsigned) CTL_RO_NL_GEN(arenas_bin_i_size, arena_bin_info[mib[2]].reg_size, size_t) CTL_RO_NL_GEN(arenas_bin_i_nregs, arena_bin_info[mib[2]].nregs, uint32_t) CTL_RO_NL_GEN(arenas_bin_i_slab_size, arena_bin_info[mib[2]].slab_size, size_t) static const ctl_named_node_t * arenas_bin_i_index(tsdn_t *tsdn, const size_t *mib, size_t miblen, size_t i) { if (i > NBINS) { return NULL; } return super_arenas_bin_i_node; } CTL_RO_NL_GEN(arenas_nlextents, NSIZES - NBINS, unsigned) CTL_RO_NL_GEN(arenas_lextent_i_size, sz_index2size(NBINS+(szind_t)mib[2]), size_t) static const ctl_named_node_t * arenas_lextent_i_index(tsdn_t *tsdn, const size_t *mib, size_t miblen, size_t i) { if (i > NSIZES - NBINS) { return NULL; } return super_arenas_lextent_i_node; } static int arenas_create_ctl(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, void *newp, size_t newlen) { int ret; extent_hooks_t *extent_hooks; unsigned arena_ind; malloc_mutex_lock(tsd_tsdn(tsd), &ctl_mtx); extent_hooks = (extent_hooks_t *)&extent_hooks_default; WRITE(extent_hooks, extent_hooks_t *); - if ((arena_ind = ctl_arena_init(tsd_tsdn(tsd), extent_hooks)) == - UINT_MAX) { + if ((arena_ind = ctl_arena_init(tsd, extent_hooks)) == UINT_MAX) { ret = EAGAIN; goto label_return; } READ(arena_ind, unsigned); ret = 0; label_return: malloc_mutex_unlock(tsd_tsdn(tsd), &ctl_mtx); return ret; } /******************************************************************************/ static int prof_thread_active_init_ctl(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, void *newp, size_t newlen) { int ret; bool oldval; if (!config_prof) { return ENOENT; } if (newp != NULL) { if (newlen != sizeof(bool)) { ret = EINVAL; goto label_return; } oldval = prof_thread_active_init_set(tsd_tsdn(tsd), *(bool *)newp); } else { oldval = prof_thread_active_init_get(tsd_tsdn(tsd)); } READ(oldval, bool); ret = 0; label_return: return ret; } static int prof_active_ctl(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, void *newp, size_t newlen) { int ret; bool oldval; if (!config_prof) { return ENOENT; } if (newp != NULL) { if (newlen 
!= sizeof(bool)) { ret = EINVAL; goto label_return; } oldval = prof_active_set(tsd_tsdn(tsd), *(bool *)newp); } else { oldval = prof_active_get(tsd_tsdn(tsd)); } READ(oldval, bool); ret = 0; label_return: return ret; } static int prof_dump_ctl(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, void *newp, size_t newlen) { int ret; const char *filename = NULL; if (!config_prof) { return ENOENT; } WRITEONLY(); WRITE(filename, const char *); if (prof_mdump(tsd, filename)) { ret = EFAULT; goto label_return; } ret = 0; label_return: return ret; } static int prof_gdump_ctl(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, void *newp, size_t newlen) { int ret; bool oldval; if (!config_prof) { return ENOENT; } if (newp != NULL) { if (newlen != sizeof(bool)) { ret = EINVAL; goto label_return; } oldval = prof_gdump_set(tsd_tsdn(tsd), *(bool *)newp); } else { oldval = prof_gdump_get(tsd_tsdn(tsd)); } READ(oldval, bool); ret = 0; label_return: return ret; } static int prof_reset_ctl(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, void *newp, size_t newlen) { int ret; size_t lg_sample = lg_prof_sample; if (!config_prof) { return ENOENT; } WRITEONLY(); WRITE(lg_sample, size_t); if (lg_sample >= (sizeof(uint64_t) << 3)) { lg_sample = (sizeof(uint64_t) << 3) - 1; } prof_reset(tsd, lg_sample); ret = 0; label_return: return ret; } CTL_RO_NL_CGEN(config_prof, prof_interval, prof_interval, uint64_t) CTL_RO_NL_CGEN(config_prof, lg_prof_sample, lg_prof_sample, size_t) /******************************************************************************/ CTL_RO_CGEN(config_stats, stats_allocated, ctl_stats->allocated, size_t) CTL_RO_CGEN(config_stats, stats_active, ctl_stats->active, size_t) CTL_RO_CGEN(config_stats, stats_metadata, ctl_stats->metadata, size_t) CTL_RO_CGEN(config_stats, stats_resident, ctl_stats->resident, size_t) CTL_RO_CGEN(config_stats, stats_mapped, ctl_stats->mapped, size_t) CTL_RO_CGEN(config_stats, stats_retained, ctl_stats->retained, size_t) CTL_RO_CGEN(config_stats, stats_background_thread_num_threads, ctl_stats->background_thread.num_threads, size_t) CTL_RO_CGEN(config_stats, stats_background_thread_num_runs, ctl_stats->background_thread.num_runs, uint64_t) CTL_RO_CGEN(config_stats, stats_background_thread_run_interval, nstime_ns(&ctl_stats->background_thread.run_interval), uint64_t) CTL_RO_GEN(stats_arenas_i_dss, arenas_i(mib[2])->dss, const char *) CTL_RO_GEN(stats_arenas_i_dirty_decay_ms, arenas_i(mib[2])->dirty_decay_ms, ssize_t) CTL_RO_GEN(stats_arenas_i_muzzy_decay_ms, arenas_i(mib[2])->muzzy_decay_ms, ssize_t) CTL_RO_GEN(stats_arenas_i_nthreads, arenas_i(mib[2])->nthreads, unsigned) CTL_RO_GEN(stats_arenas_i_uptime, nstime_ns(&arenas_i(mib[2])->astats->astats.uptime), uint64_t) CTL_RO_GEN(stats_arenas_i_pactive, arenas_i(mib[2])->pactive, size_t) CTL_RO_GEN(stats_arenas_i_pdirty, arenas_i(mib[2])->pdirty, size_t) CTL_RO_GEN(stats_arenas_i_pmuzzy, arenas_i(mib[2])->pmuzzy, size_t) CTL_RO_CGEN(config_stats, stats_arenas_i_mapped, atomic_load_zu(&arenas_i(mib[2])->astats->astats.mapped, ATOMIC_RELAXED), size_t) CTL_RO_CGEN(config_stats, stats_arenas_i_retained, atomic_load_zu(&arenas_i(mib[2])->astats->astats.retained, ATOMIC_RELAXED), size_t) CTL_RO_CGEN(config_stats, stats_arenas_i_dirty_npurge, arena_stats_read_u64(&arenas_i(mib[2])->astats->astats.decay_dirty.npurge), uint64_t) CTL_RO_CGEN(config_stats, stats_arenas_i_dirty_nmadvise, arena_stats_read_u64( 
&arenas_i(mib[2])->astats->astats.decay_dirty.nmadvise), uint64_t) CTL_RO_CGEN(config_stats, stats_arenas_i_dirty_purged, arena_stats_read_u64(&arenas_i(mib[2])->astats->astats.decay_dirty.purged), uint64_t) CTL_RO_CGEN(config_stats, stats_arenas_i_muzzy_npurge, arena_stats_read_u64(&arenas_i(mib[2])->astats->astats.decay_muzzy.npurge), uint64_t) CTL_RO_CGEN(config_stats, stats_arenas_i_muzzy_nmadvise, arena_stats_read_u64( &arenas_i(mib[2])->astats->astats.decay_muzzy.nmadvise), uint64_t) CTL_RO_CGEN(config_stats, stats_arenas_i_muzzy_purged, arena_stats_read_u64(&arenas_i(mib[2])->astats->astats.decay_muzzy.purged), uint64_t) CTL_RO_CGEN(config_stats, stats_arenas_i_base, atomic_load_zu(&arenas_i(mib[2])->astats->astats.base, ATOMIC_RELAXED), size_t) CTL_RO_CGEN(config_stats, stats_arenas_i_internal, atomic_load_zu(&arenas_i(mib[2])->astats->astats.internal, ATOMIC_RELAXED), size_t) CTL_RO_CGEN(config_stats, stats_arenas_i_tcache_bytes, atomic_load_zu(&arenas_i(mib[2])->astats->astats.tcache_bytes, ATOMIC_RELAXED), size_t) CTL_RO_CGEN(config_stats, stats_arenas_i_resident, atomic_load_zu(&arenas_i(mib[2])->astats->astats.resident, ATOMIC_RELAXED), size_t) CTL_RO_CGEN(config_stats, stats_arenas_i_small_allocated, arenas_i(mib[2])->astats->allocated_small, size_t) CTL_RO_CGEN(config_stats, stats_arenas_i_small_nmalloc, arenas_i(mib[2])->astats->nmalloc_small, uint64_t) CTL_RO_CGEN(config_stats, stats_arenas_i_small_ndalloc, arenas_i(mib[2])->astats->ndalloc_small, uint64_t) CTL_RO_CGEN(config_stats, stats_arenas_i_small_nrequests, arenas_i(mib[2])->astats->nrequests_small, uint64_t) CTL_RO_CGEN(config_stats, stats_arenas_i_large_allocated, atomic_load_zu(&arenas_i(mib[2])->astats->astats.allocated_large, ATOMIC_RELAXED), size_t) CTL_RO_CGEN(config_stats, stats_arenas_i_large_nmalloc, arena_stats_read_u64(&arenas_i(mib[2])->astats->astats.nmalloc_large), uint64_t) CTL_RO_CGEN(config_stats, stats_arenas_i_large_ndalloc, arena_stats_read_u64(&arenas_i(mib[2])->astats->astats.ndalloc_large), uint64_t) CTL_RO_CGEN(config_stats, stats_arenas_i_large_nrequests, arena_stats_read_u64(&arenas_i(mib[2])->astats->astats.nmalloc_large), uint64_t) /* Intentional. */ /* Lock profiling related APIs below. */ #define RO_MUTEX_CTL_GEN(n, l) \ CTL_RO_CGEN(config_stats, stats_##n##_num_ops, \ l.n_lock_ops, uint64_t) \ CTL_RO_CGEN(config_stats, stats_##n##_num_wait, \ l.n_wait_times, uint64_t) \ CTL_RO_CGEN(config_stats, stats_##n##_num_spin_acq, \ l.n_spin_acquired, uint64_t) \ CTL_RO_CGEN(config_stats, stats_##n##_num_owner_switch, \ l.n_owner_switches, uint64_t) \ CTL_RO_CGEN(config_stats, stats_##n##_total_wait_time, \ nstime_ns(&l.tot_wait_time), uint64_t) \ CTL_RO_CGEN(config_stats, stats_##n##_max_wait_time, \ nstime_ns(&l.max_wait_time), uint64_t) \ CTL_RO_CGEN(config_stats, stats_##n##_max_num_thds, \ l.max_n_thds, uint32_t) /* Global mutexes. */ #define OP(mtx) \ RO_MUTEX_CTL_GEN(mutexes_##mtx, \ ctl_stats->mutex_prof_data[global_prof_mutex_##mtx]) MUTEX_PROF_GLOBAL_MUTEXES #undef OP /* Per arena mutexes */ #define OP(mtx) RO_MUTEX_CTL_GEN(arenas_i_mutexes_##mtx, \ arenas_i(mib[2])->astats->astats.mutex_prof_data[arena_prof_mutex_##mtx]) MUTEX_PROF_ARENA_MUTEXES #undef OP /* tcache bin mutex */ RO_MUTEX_CTL_GEN(arenas_i_bins_j_mutex, arenas_i(mib[2])->astats->bstats[mib[4]].mutex_data) #undef RO_MUTEX_CTL_GEN /* Resets all mutex stats, including global, arena and bin mutexes. 
*/ static int stats_mutexes_reset_ctl(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, void *newp, size_t newlen) { if (!config_stats) { return ENOENT; } tsdn_t *tsdn = tsd_tsdn(tsd); #define MUTEX_PROF_RESET(mtx) \ malloc_mutex_lock(tsdn, &mtx); \ malloc_mutex_prof_data_reset(tsdn, &mtx); \ malloc_mutex_unlock(tsdn, &mtx); /* Global mutexes: ctl and prof. */ MUTEX_PROF_RESET(ctl_mtx); if (have_background_thread) { MUTEX_PROF_RESET(background_thread_lock); } if (config_prof && opt_prof) { MUTEX_PROF_RESET(bt2gctx_mtx); } /* Per arena mutexes. */ unsigned n = narenas_total_get(); for (unsigned i = 0; i < n; i++) { arena_t *arena = arena_get(tsdn, i, false); if (!arena) { continue; } MUTEX_PROF_RESET(arena->large_mtx); MUTEX_PROF_RESET(arena->extent_avail_mtx); MUTEX_PROF_RESET(arena->extents_dirty.mtx); MUTEX_PROF_RESET(arena->extents_muzzy.mtx); MUTEX_PROF_RESET(arena->extents_retained.mtx); MUTEX_PROF_RESET(arena->decay_dirty.mtx); MUTEX_PROF_RESET(arena->decay_muzzy.mtx); MUTEX_PROF_RESET(arena->tcache_ql_mtx); MUTEX_PROF_RESET(arena->base->mtx); for (szind_t i = 0; i < NBINS; i++) { arena_bin_t *bin = &arena->bins[i]; MUTEX_PROF_RESET(bin->lock); } } #undef MUTEX_PROF_RESET return 0; } CTL_RO_CGEN(config_stats, stats_arenas_i_bins_j_nmalloc, arenas_i(mib[2])->astats->bstats[mib[4]].nmalloc, uint64_t) CTL_RO_CGEN(config_stats, stats_arenas_i_bins_j_ndalloc, arenas_i(mib[2])->astats->bstats[mib[4]].ndalloc, uint64_t) CTL_RO_CGEN(config_stats, stats_arenas_i_bins_j_nrequests, arenas_i(mib[2])->astats->bstats[mib[4]].nrequests, uint64_t) CTL_RO_CGEN(config_stats, stats_arenas_i_bins_j_curregs, arenas_i(mib[2])->astats->bstats[mib[4]].curregs, size_t) CTL_RO_CGEN(config_stats, stats_arenas_i_bins_j_nfills, arenas_i(mib[2])->astats->bstats[mib[4]].nfills, uint64_t) CTL_RO_CGEN(config_stats, stats_arenas_i_bins_j_nflushes, arenas_i(mib[2])->astats->bstats[mib[4]].nflushes, uint64_t) CTL_RO_CGEN(config_stats, stats_arenas_i_bins_j_nslabs, arenas_i(mib[2])->astats->bstats[mib[4]].nslabs, uint64_t) CTL_RO_CGEN(config_stats, stats_arenas_i_bins_j_nreslabs, arenas_i(mib[2])->astats->bstats[mib[4]].reslabs, uint64_t) CTL_RO_CGEN(config_stats, stats_arenas_i_bins_j_curslabs, arenas_i(mib[2])->astats->bstats[mib[4]].curslabs, size_t) static const ctl_named_node_t * stats_arenas_i_bins_j_index(tsdn_t *tsdn, const size_t *mib, size_t miblen, size_t j) { if (j > NBINS) { return NULL; } return super_stats_arenas_i_bins_j_node; } CTL_RO_CGEN(config_stats, stats_arenas_i_lextents_j_nmalloc, arena_stats_read_u64(&arenas_i(mib[2])->astats->lstats[mib[4]].nmalloc), uint64_t) CTL_RO_CGEN(config_stats, stats_arenas_i_lextents_j_ndalloc, arena_stats_read_u64(&arenas_i(mib[2])->astats->lstats[mib[4]].ndalloc), uint64_t) CTL_RO_CGEN(config_stats, stats_arenas_i_lextents_j_nrequests, arena_stats_read_u64(&arenas_i(mib[2])->astats->lstats[mib[4]].nrequests), uint64_t) CTL_RO_CGEN(config_stats, stats_arenas_i_lextents_j_curlextents, arenas_i(mib[2])->astats->lstats[mib[4]].curlextents, size_t) static const ctl_named_node_t * stats_arenas_i_lextents_j_index(tsdn_t *tsdn, const size_t *mib, size_t miblen, size_t j) { if (j > NSIZES - NBINS) { return NULL; } return super_stats_arenas_i_lextents_j_node; } static const ctl_named_node_t * stats_arenas_i_index(tsdn_t *tsdn, const size_t *mib, size_t miblen, size_t i) { const ctl_named_node_t *ret; size_t a; malloc_mutex_lock(tsdn, &ctl_mtx); a = arenas_i2a_impl(i, true, true); if (a == UINT_MAX || !ctl_arenas->arenas[a]->initialized) { ret = NULL; 
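/*
 * Unknown or not-yet-initialized arena indices yield NULL here, which
 * ctl_lookup() reports to the caller as ENOENT.
 */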
goto label_return; } ret = super_stats_arenas_i_node; label_return: malloc_mutex_unlock(tsdn, &ctl_mtx); return ret; } Index: head/contrib/jemalloc/src/extent.c =================================================================== --- head/contrib/jemalloc/src/extent.c (revision 320622) +++ head/contrib/jemalloc/src/extent.c (revision 320623) @@ -1,1919 +1,1987 @@ #define JEMALLOC_EXTENT_C_ #include "jemalloc/internal/jemalloc_preamble.h" #include "jemalloc/internal/jemalloc_internal_includes.h" #include "jemalloc/internal/assert.h" #include "jemalloc/internal/extent_dss.h" #include "jemalloc/internal/extent_mmap.h" #include "jemalloc/internal/ph.h" #include "jemalloc/internal/rtree.h" #include "jemalloc/internal/mutex.h" #include "jemalloc/internal/mutex_pool.h" /******************************************************************************/ /* Data. */ rtree_t extents_rtree; /* Keyed by the address of the extent_t being protected. */ mutex_pool_t extent_mutex_pool; static const bitmap_info_t extents_bitmap_info = BITMAP_INFO_INITIALIZER(NPSIZES+1); static void *extent_alloc_default(extent_hooks_t *extent_hooks, void *new_addr, size_t size, size_t alignment, bool *zero, bool *commit, unsigned arena_ind); static bool extent_dalloc_default(extent_hooks_t *extent_hooks, void *addr, size_t size, bool committed, unsigned arena_ind); static void extent_destroy_default(extent_hooks_t *extent_hooks, void *addr, size_t size, bool committed, unsigned arena_ind); static bool extent_commit_default(extent_hooks_t *extent_hooks, void *addr, size_t size, size_t offset, size_t length, unsigned arena_ind); static bool extent_commit_impl(tsdn_t *tsdn, arena_t *arena, extent_hooks_t **r_extent_hooks, extent_t *extent, size_t offset, size_t length, bool growing_retained); static bool extent_decommit_default(extent_hooks_t *extent_hooks, void *addr, size_t size, size_t offset, size_t length, unsigned arena_ind); #ifdef PAGES_CAN_PURGE_LAZY static bool extent_purge_lazy_default(extent_hooks_t *extent_hooks, void *addr, size_t size, size_t offset, size_t length, unsigned arena_ind); #endif static bool extent_purge_lazy_impl(tsdn_t *tsdn, arena_t *arena, extent_hooks_t **r_extent_hooks, extent_t *extent, size_t offset, size_t length, bool growing_retained); #ifdef PAGES_CAN_PURGE_FORCED static bool extent_purge_forced_default(extent_hooks_t *extent_hooks, void *addr, size_t size, size_t offset, size_t length, unsigned arena_ind); #endif static bool extent_purge_forced_impl(tsdn_t *tsdn, arena_t *arena, extent_hooks_t **r_extent_hooks, extent_t *extent, size_t offset, size_t length, bool growing_retained); #ifdef JEMALLOC_MAPS_COALESCE static bool extent_split_default(extent_hooks_t *extent_hooks, void *addr, size_t size, size_t size_a, size_t size_b, bool committed, unsigned arena_ind); #endif static extent_t *extent_split_impl(tsdn_t *tsdn, arena_t *arena, extent_hooks_t **r_extent_hooks, extent_t *extent, size_t size_a, szind_t szind_a, bool slab_a, size_t size_b, szind_t szind_b, bool slab_b, bool growing_retained); #ifdef JEMALLOC_MAPS_COALESCE static bool extent_merge_default(extent_hooks_t *extent_hooks, void *addr_a, size_t size_a, void *addr_b, size_t size_b, bool committed, unsigned arena_ind); #endif static bool extent_merge_impl(tsdn_t *tsdn, arena_t *arena, extent_hooks_t **r_extent_hooks, extent_t *a, extent_t *b, bool growing_retained); const extent_hooks_t extent_hooks_default = { extent_alloc_default, extent_dalloc_default, extent_destroy_default, extent_commit_default, extent_decommit_default 
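/*
 * The purge, split, and merge hooks are only wired up below when the
 * platform supports them (PAGES_CAN_PURGE_LAZY, PAGES_CAN_PURGE_FORCED,
 * JEMALLOC_MAPS_COALESCE); unsupported slots are left NULL.
 */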
#ifdef PAGES_CAN_PURGE_LAZY , extent_purge_lazy_default #else , NULL #endif #ifdef PAGES_CAN_PURGE_FORCED , extent_purge_forced_default #else , NULL #endif #ifdef JEMALLOC_MAPS_COALESCE , extent_split_default, extent_merge_default #endif }; /* Used exclusively for gdump triggering. */ static atomic_zu_t curpages; static atomic_zu_t highpages; /******************************************************************************/ /* * Function prototypes for static functions that are referenced prior to * definition. */ static void extent_deregister(tsdn_t *tsdn, extent_t *extent); static extent_t *extent_recycle(tsdn_t *tsdn, arena_t *arena, extent_hooks_t **r_extent_hooks, extents_t *extents, void *new_addr, size_t usize, size_t pad, size_t alignment, bool slab, szind_t szind, bool *zero, bool *commit, bool growing_retained); static extent_t *extent_try_coalesce(tsdn_t *tsdn, arena_t *arena, extent_hooks_t **r_extent_hooks, rtree_ctx_t *rtree_ctx, extents_t *extents, extent_t *extent, bool *coalesced, bool growing_retained); static void extent_record(tsdn_t *tsdn, arena_t *arena, extent_hooks_t **r_extent_hooks, extents_t *extents, extent_t *extent, bool growing_retained); /******************************************************************************/ rb_gen(UNUSED, extent_avail_, extent_tree_t, extent_t, rb_link, extent_esnead_comp) typedef enum { lock_result_success, lock_result_failure, lock_result_no_extent } lock_result_t; static lock_result_t extent_rtree_leaf_elm_try_lock(tsdn_t *tsdn, rtree_leaf_elm_t *elm, extent_t **result) { extent_t *extent1 = rtree_leaf_elm_extent_read(tsdn, &extents_rtree, elm, true); if (extent1 == NULL) { return lock_result_no_extent; } /* * It's possible that the extent changed out from under us, and with it * the leaf->extent mapping. We have to recheck while holding the lock. */ extent_lock(tsdn, extent1); extent_t *extent2 = rtree_leaf_elm_extent_read(tsdn, &extents_rtree, elm, true); if (extent1 == extent2) { *result = extent1; return lock_result_success; } else { extent_unlock(tsdn, extent1); return lock_result_failure; } } /* * Returns a pool-locked extent_t * if there's one associated with the given * address, and NULL otherwise. 
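 * The lookup can race with coalescing that changes the leaf's extent
 * mapping, so extent_rtree_leaf_elm_try_lock() rechecks the mapping after
 * taking the pool lock, and the loop below simply retries on failure.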
*/ static extent_t * extent_lock_from_addr(tsdn_t *tsdn, rtree_ctx_t *rtree_ctx, void *addr) { extent_t *ret = NULL; rtree_leaf_elm_t *elm = rtree_leaf_elm_lookup(tsdn, &extents_rtree, rtree_ctx, (uintptr_t)addr, false, false); if (elm == NULL) { return NULL; } lock_result_t lock_result; do { lock_result = extent_rtree_leaf_elm_try_lock(tsdn, elm, &ret); } while (lock_result == lock_result_failure); return ret; } extent_t * extent_alloc(tsdn_t *tsdn, arena_t *arena) { malloc_mutex_lock(tsdn, &arena->extent_avail_mtx); extent_t *extent = extent_avail_first(&arena->extent_avail); if (extent == NULL) { malloc_mutex_unlock(tsdn, &arena->extent_avail_mtx); return base_alloc_extent(tsdn, arena->base); } extent_avail_remove(&arena->extent_avail, extent); malloc_mutex_unlock(tsdn, &arena->extent_avail_mtx); return extent; } void extent_dalloc(tsdn_t *tsdn, arena_t *arena, extent_t *extent) { malloc_mutex_lock(tsdn, &arena->extent_avail_mtx); extent_avail_insert(&arena->extent_avail, extent); malloc_mutex_unlock(tsdn, &arena->extent_avail_mtx); } extent_hooks_t * extent_hooks_get(arena_t *arena) { return base_extent_hooks_get(arena->base); } extent_hooks_t * extent_hooks_set(tsd_t *tsd, arena_t *arena, extent_hooks_t *extent_hooks) { background_thread_info_t *info; if (have_background_thread) { info = arena_background_thread_info_get(arena); malloc_mutex_lock(tsd_tsdn(tsd), &info->mtx); } extent_hooks_t *ret = base_extent_hooks_set(arena->base, extent_hooks); if (have_background_thread) { malloc_mutex_unlock(tsd_tsdn(tsd), &info->mtx); } return ret; } static void extent_hooks_assure_initialized(arena_t *arena, extent_hooks_t **r_extent_hooks) { if (*r_extent_hooks == EXTENT_HOOKS_INITIALIZER) { *r_extent_hooks = extent_hooks_get(arena); } } #ifndef JEMALLOC_JET static #endif size_t extent_size_quantize_floor(size_t size) { size_t ret; pszind_t pind; assert(size > 0); assert((size & PAGE_MASK) == 0); pind = sz_psz2ind(size - sz_large_pad + 1); if (pind == 0) { /* * Avoid underflow. This short-circuit would also do the right * thing for all sizes in the range for which there are * PAGE-spaced size classes, but it's simplest to just handle * the one case that would cause erroneous results. */ return size; } ret = sz_pind2sz(pind - 1) + sz_large_pad; assert(ret <= size); return ret; } #ifndef JEMALLOC_JET static #endif size_t extent_size_quantize_ceil(size_t size) { size_t ret; assert(size > 0); assert(size - sz_large_pad <= LARGE_MAXCLASS); assert((size & PAGE_MASK) == 0); ret = extent_size_quantize_floor(size); if (ret < size) { /* * Skip a quantization that may have an adequately large extent, * because under-sized extents may be mixed in. This only * happens when an unusual size is requested, i.e. for aligned * allocation, and is just one of several places where linear * search would potentially find sufficiently aligned available * memory somewhere lower. */ ret = sz_pind2sz(sz_psz2ind(ret - sz_large_pad + 1)) + sz_large_pad; } return ret; } /* Generate pairing heap functions. 
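 * ph_gen() emits extent_heap_{new,insert,remove,first,any,empty}, ordered
 * by extent_snad_comp() (serial number, then address); the per-size-class
 * heaps in extents_t are built on these.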
*/ ph_gen(, extent_heap_, extent_heap_t, extent_t, ph_link, extent_snad_comp) bool extents_init(tsdn_t *tsdn, extents_t *extents, extent_state_t state, bool delay_coalesce) { if (malloc_mutex_init(&extents->mtx, "extents", WITNESS_RANK_EXTENTS, malloc_mutex_rank_exclusive)) { return true; } for (unsigned i = 0; i < NPSIZES+1; i++) { extent_heap_new(&extents->heaps[i]); } bitmap_init(extents->bitmap, &extents_bitmap_info, true); extent_list_init(&extents->lru); atomic_store_zu(&extents->npages, 0, ATOMIC_RELAXED); extents->state = state; extents->delay_coalesce = delay_coalesce; return false; } extent_state_t extents_state_get(const extents_t *extents) { return extents->state; } size_t extents_npages_get(extents_t *extents) { return atomic_load_zu(&extents->npages, ATOMIC_RELAXED); } static void extents_insert_locked(tsdn_t *tsdn, extents_t *extents, extent_t *extent, bool preserve_lru) { malloc_mutex_assert_owner(tsdn, &extents->mtx); assert(extent_state_get(extent) == extents->state); size_t size = extent_size_get(extent); size_t psz = extent_size_quantize_floor(size); pszind_t pind = sz_psz2ind(psz); if (extent_heap_empty(&extents->heaps[pind])) { bitmap_unset(extents->bitmap, &extents_bitmap_info, (size_t)pind); } extent_heap_insert(&extents->heaps[pind], extent); if (!preserve_lru) { extent_list_append(&extents->lru, extent); } size_t npages = size >> LG_PAGE; /* * All modifications to npages hold the mutex (as asserted above), so we * don't need an atomic fetch-add; we can get by with a load followed by * a store. */ size_t cur_extents_npages = atomic_load_zu(&extents->npages, ATOMIC_RELAXED); atomic_store_zu(&extents->npages, cur_extents_npages + npages, ATOMIC_RELAXED); } static void extents_remove_locked(tsdn_t *tsdn, extents_t *extents, extent_t *extent, bool preserve_lru) { malloc_mutex_assert_owner(tsdn, &extents->mtx); assert(extent_state_get(extent) == extents->state); size_t size = extent_size_get(extent); size_t psz = extent_size_quantize_floor(size); pszind_t pind = sz_psz2ind(psz); extent_heap_remove(&extents->heaps[pind], extent); if (extent_heap_empty(&extents->heaps[pind])) { bitmap_set(extents->bitmap, &extents_bitmap_info, (size_t)pind); } if (!preserve_lru) { extent_list_remove(&extents->lru, extent); } size_t npages = size >> LG_PAGE; /* * As in extents_insert_locked, we hold extents->mtx and so don't need * atomic operations for updating extents->npages. */ size_t cur_extents_npages = atomic_load_zu(&extents->npages, ATOMIC_RELAXED); assert(cur_extents_npages >= npages); atomic_store_zu(&extents->npages, cur_extents_npages - (size >> LG_PAGE), ATOMIC_RELAXED); } /* Do any-best-fit extent selection, i.e. select any extent that best fits. */ static extent_t * extents_best_fit_locked(tsdn_t *tsdn, arena_t *arena, extents_t *extents, size_t size) { pszind_t pind = sz_psz2ind(extent_size_quantize_ceil(size)); pszind_t i = (pszind_t)bitmap_ffu(extents->bitmap, &extents_bitmap_info, (size_t)pind); if (i < NPSIZES+1) { assert(!extent_heap_empty(&extents->heaps[i])); extent_t *extent = extent_heap_any(&extents->heaps[i]); assert(extent_size_get(extent) >= size); return extent; } return NULL; } /* * Do first-fit extent selection, i.e. select the oldest/lowest extent that is * large enough. 
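 * The scan visits every nonempty heap at or above the request's quantized
 * size class and keeps the candidate that sorts lowest under
 * extent_snad_comp(), so older, lower-addressed memory is preferred.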
*/ static extent_t * extents_first_fit_locked(tsdn_t *tsdn, arena_t *arena, extents_t *extents, size_t size) { extent_t *ret = NULL; pszind_t pind = sz_psz2ind(extent_size_quantize_ceil(size)); for (pszind_t i = (pszind_t)bitmap_ffu(extents->bitmap, &extents_bitmap_info, (size_t)pind); i < NPSIZES+1; i = (pszind_t)bitmap_ffu(extents->bitmap, &extents_bitmap_info, (size_t)i+1)) { assert(!extent_heap_empty(&extents->heaps[i])); extent_t *extent = extent_heap_first(&extents->heaps[i]); assert(extent_size_get(extent) >= size); if (ret == NULL || extent_snad_comp(extent, ret) < 0) { ret = extent; } if (i == NPSIZES) { break; } assert(i < NPSIZES); } return ret; } /* * Do {best,first}-fit extent selection, where the selection policy choice is * based on extents->delay_coalesce. Best-fit selection requires less * searching, but its layout policy is less stable and may cause higher virtual * memory fragmentation as a side effect. */ static extent_t * extents_fit_locked(tsdn_t *tsdn, arena_t *arena, extents_t *extents, size_t size) { malloc_mutex_assert_owner(tsdn, &extents->mtx); return extents->delay_coalesce ? extents_best_fit_locked(tsdn, arena, extents, size) : extents_first_fit_locked(tsdn, arena, extents, size); } static bool extent_try_delayed_coalesce(tsdn_t *tsdn, arena_t *arena, extent_hooks_t **r_extent_hooks, rtree_ctx_t *rtree_ctx, extents_t *extents, extent_t *extent) { extent_state_set(extent, extent_state_active); bool coalesced; extent = extent_try_coalesce(tsdn, arena, r_extent_hooks, rtree_ctx, extents, extent, &coalesced, false); extent_state_set(extent, extents_state_get(extents)); if (!coalesced) { return true; } extents_insert_locked(tsdn, extents, extent, true); return false; } extent_t * extents_alloc(tsdn_t *tsdn, arena_t *arena, extent_hooks_t **r_extent_hooks, extents_t *extents, void *new_addr, size_t size, size_t pad, size_t alignment, bool slab, szind_t szind, bool *zero, bool *commit) { assert(size + pad != 0); assert(alignment != 0); witness_assert_depth_to_rank(tsdn_witness_tsdp_get(tsdn), WITNESS_RANK_CORE, 0); return extent_recycle(tsdn, arena, r_extent_hooks, extents, new_addr, size, pad, alignment, slab, szind, zero, commit, false); } void extents_dalloc(tsdn_t *tsdn, arena_t *arena, extent_hooks_t **r_extent_hooks, extents_t *extents, extent_t *extent) { assert(extent_base_get(extent) != NULL); assert(extent_size_get(extent) != 0); witness_assert_depth_to_rank(tsdn_witness_tsdp_get(tsdn), WITNESS_RANK_CORE, 0); extent_addr_set(extent, extent_base_get(extent)); extent_zeroed_set(extent, false); extent_record(tsdn, arena, r_extent_hooks, extents, extent, false); } extent_t * extents_evict(tsdn_t *tsdn, arena_t *arena, extent_hooks_t **r_extent_hooks, extents_t *extents, size_t npages_min) { rtree_ctx_t rtree_ctx_fallback; rtree_ctx_t *rtree_ctx = tsdn_rtree_ctx(tsdn, &rtree_ctx_fallback); malloc_mutex_lock(tsdn, &extents->mtx); /* * Get the LRU coalesced extent, if any. If coalescing was delayed, * the loop will iterate until the LRU extent is fully coalesced. */ extent_t *extent; while (true) { /* Get the LRU extent, if any. */ extent = extent_list_first(&extents->lru); if (extent == NULL) { goto label_return; } /* Check the eviction limit. */ size_t npages = extent_size_get(extent) >> LG_PAGE; size_t extents_npages = atomic_load_zu(&extents->npages, ATOMIC_RELAXED); if (extents_npages - npages < npages_min) { extent = NULL; goto label_return; } extents_remove_locked(tsdn, extents, extent, false); if (!extents->delay_coalesce) { break; } /* Try to coalesce. 
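 * (Delayed coalescing: for extents sets with delay_coalesce, merging is
 * deferred until eviction. The loop restarts whenever a merge succeeds and
 * stops once the LRU extent cannot be merged any further.)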
*/ if (extent_try_delayed_coalesce(tsdn, arena, r_extent_hooks, rtree_ctx, extents, extent)) { break; } /* * The LRU extent was just coalesced and the result placed in * the LRU at its neighbor's position. Start over. */ } /* * Either mark the extent active or deregister it to protect against * concurrent operations. */ switch (extents_state_get(extents)) { case extent_state_active: not_reached(); case extent_state_dirty: case extent_state_muzzy: extent_state_set(extent, extent_state_active); break; case extent_state_retained: extent_deregister(tsdn, extent); break; default: not_reached(); } label_return: malloc_mutex_unlock(tsdn, &extents->mtx); return extent; } static void extents_leak(tsdn_t *tsdn, arena_t *arena, extent_hooks_t **r_extent_hooks, extents_t *extents, extent_t *extent, bool growing_retained) { /* * Leak extent after making sure its pages have already been purged, so * that this is only a virtual memory leak. */ if (extents_state_get(extents) == extent_state_dirty) { if (extent_purge_lazy_impl(tsdn, arena, r_extent_hooks, extent, 0, extent_size_get(extent), growing_retained)) { extent_purge_forced_impl(tsdn, arena, r_extent_hooks, extent, 0, extent_size_get(extent), growing_retained); } } extent_dalloc(tsdn, arena, extent); } void extents_prefork(tsdn_t *tsdn, extents_t *extents) { malloc_mutex_prefork(tsdn, &extents->mtx); } void extents_postfork_parent(tsdn_t *tsdn, extents_t *extents) { malloc_mutex_postfork_parent(tsdn, &extents->mtx); } void extents_postfork_child(tsdn_t *tsdn, extents_t *extents) { malloc_mutex_postfork_child(tsdn, &extents->mtx); } static void extent_deactivate_locked(tsdn_t *tsdn, arena_t *arena, extents_t *extents, extent_t *extent, bool preserve_lru) { assert(extent_arena_get(extent) == arena); assert(extent_state_get(extent) == extent_state_active); extent_state_set(extent, extents_state_get(extents)); extents_insert_locked(tsdn, extents, extent, preserve_lru); } static void extent_deactivate(tsdn_t *tsdn, arena_t *arena, extents_t *extents, extent_t *extent, bool preserve_lru) { malloc_mutex_lock(tsdn, &extents->mtx); extent_deactivate_locked(tsdn, arena, extents, extent, preserve_lru); malloc_mutex_unlock(tsdn, &extents->mtx); } static void extent_activate_locked(tsdn_t *tsdn, arena_t *arena, extents_t *extents, extent_t *extent, bool preserve_lru) { assert(extent_arena_get(extent) == arena); assert(extent_state_get(extent) == extents_state_get(extents)); extents_remove_locked(tsdn, extents, extent, preserve_lru); extent_state_set(extent, extent_state_active); } static bool extent_rtree_leaf_elms_lookup(tsdn_t *tsdn, rtree_ctx_t *rtree_ctx, const extent_t *extent, bool dependent, bool init_missing, rtree_leaf_elm_t **r_elm_a, rtree_leaf_elm_t **r_elm_b) { *r_elm_a = rtree_leaf_elm_lookup(tsdn, &extents_rtree, rtree_ctx, (uintptr_t)extent_base_get(extent), dependent, init_missing); if (!dependent && *r_elm_a == NULL) { return true; } assert(*r_elm_a != NULL); *r_elm_b = rtree_leaf_elm_lookup(tsdn, &extents_rtree, rtree_ctx, (uintptr_t)extent_last_get(extent), dependent, init_missing); if (!dependent && *r_elm_b == NULL) { return true; } assert(*r_elm_b != NULL); return false; } static void extent_rtree_write_acquired(tsdn_t *tsdn, rtree_leaf_elm_t *elm_a, rtree_leaf_elm_t *elm_b, extent_t *extent, szind_t szind, bool slab) { rtree_leaf_elm_write(tsdn, &extents_rtree, elm_a, extent, szind, slab); if (elm_b != NULL) { rtree_leaf_elm_write(tsdn, &extents_rtree, elm_b, extent, szind, slab); } } static void extent_interior_register(tsdn_t *tsdn, 
rtree_ctx_t *rtree_ctx, extent_t *extent, szind_t szind) { assert(extent_slab_get(extent)); /* Register interior. */ for (size_t i = 1; i < (extent_size_get(extent) >> LG_PAGE) - 1; i++) { rtree_write(tsdn, &extents_rtree, rtree_ctx, (uintptr_t)extent_base_get(extent) + (uintptr_t)(i << LG_PAGE), extent, szind, true); } } static void extent_gdump_add(tsdn_t *tsdn, const extent_t *extent) { cassert(config_prof); /* prof_gdump() requirement. */ witness_assert_depth_to_rank(tsdn_witness_tsdp_get(tsdn), WITNESS_RANK_CORE, 0); if (opt_prof && extent_state_get(extent) == extent_state_active) { size_t nadd = extent_size_get(extent) >> LG_PAGE; size_t cur = atomic_fetch_add_zu(&curpages, nadd, ATOMIC_RELAXED) + nadd; size_t high = atomic_load_zu(&highpages, ATOMIC_RELAXED); while (cur > high && !atomic_compare_exchange_weak_zu( &highpages, &high, cur, ATOMIC_RELAXED, ATOMIC_RELAXED)) { /* * Don't refresh cur, because it may have decreased * since this thread lost the highpages update race. * Note that high is updated in case of CAS failure. */ } if (cur > high && prof_gdump_get_unlocked()) { prof_gdump(tsdn); } } } static void extent_gdump_sub(tsdn_t *tsdn, const extent_t *extent) { cassert(config_prof); if (opt_prof && extent_state_get(extent) == extent_state_active) { size_t nsub = extent_size_get(extent) >> LG_PAGE; assert(atomic_load_zu(&curpages, ATOMIC_RELAXED) >= nsub); atomic_fetch_sub_zu(&curpages, nsub, ATOMIC_RELAXED); } } static bool extent_register_impl(tsdn_t *tsdn, extent_t *extent, bool gdump_add) { rtree_ctx_t rtree_ctx_fallback; rtree_ctx_t *rtree_ctx = tsdn_rtree_ctx(tsdn, &rtree_ctx_fallback); rtree_leaf_elm_t *elm_a, *elm_b; /* * We need to hold the lock to protect against a concurrent coalesce * operation that sees us in a partial state. 
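 * Both the base and last-page rtree mappings are written while the
 * per-extent lock is held, so a concurrent coalesce attempt never
 * observes only one of them.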
*/ extent_lock(tsdn, extent); if (extent_rtree_leaf_elms_lookup(tsdn, rtree_ctx, extent, false, true, &elm_a, &elm_b)) { return true; } szind_t szind = extent_szind_get_maybe_invalid(extent); bool slab = extent_slab_get(extent); extent_rtree_write_acquired(tsdn, elm_a, elm_b, extent, szind, slab); if (slab) { extent_interior_register(tsdn, rtree_ctx, extent, szind); } extent_unlock(tsdn, extent); if (config_prof && gdump_add) { extent_gdump_add(tsdn, extent); } return false; } static bool extent_register(tsdn_t *tsdn, extent_t *extent) { return extent_register_impl(tsdn, extent, true); } static bool extent_register_no_gdump_add(tsdn_t *tsdn, extent_t *extent) { return extent_register_impl(tsdn, extent, false); } static void extent_reregister(tsdn_t *tsdn, extent_t *extent) { bool err = extent_register(tsdn, extent); assert(!err); } static void extent_interior_deregister(tsdn_t *tsdn, rtree_ctx_t *rtree_ctx, extent_t *extent) { size_t i; assert(extent_slab_get(extent)); for (i = 1; i < (extent_size_get(extent) >> LG_PAGE) - 1; i++) { rtree_clear(tsdn, &extents_rtree, rtree_ctx, (uintptr_t)extent_base_get(extent) + (uintptr_t)(i << LG_PAGE)); } } static void extent_deregister(tsdn_t *tsdn, extent_t *extent) { rtree_ctx_t rtree_ctx_fallback; rtree_ctx_t *rtree_ctx = tsdn_rtree_ctx(tsdn, &rtree_ctx_fallback); rtree_leaf_elm_t *elm_a, *elm_b; extent_rtree_leaf_elms_lookup(tsdn, rtree_ctx, extent, true, false, &elm_a, &elm_b); extent_lock(tsdn, extent); extent_rtree_write_acquired(tsdn, elm_a, elm_b, NULL, NSIZES, false); if (extent_slab_get(extent)) { extent_interior_deregister(tsdn, rtree_ctx, extent); extent_slab_set(extent, false); } extent_unlock(tsdn, extent); if (config_prof) { extent_gdump_sub(tsdn, extent); } } static extent_t * extent_recycle_extract(tsdn_t *tsdn, arena_t *arena, extent_hooks_t **r_extent_hooks, rtree_ctx_t *rtree_ctx, extents_t *extents, void *new_addr, size_t size, size_t pad, size_t alignment, bool slab, bool *zero, bool *commit, bool growing_retained) { witness_assert_depth_to_rank(tsdn_witness_tsdp_get(tsdn), WITNESS_RANK_CORE, growing_retained ? 1 : 0); assert(alignment > 0); if (config_debug && new_addr != NULL) { /* * Non-NULL new_addr has two use cases: * * 1) Recycle a known-extant extent, e.g. during purging. * 2) Perform in-place expanding reallocation. * * Regardless of use case, new_addr must either refer to a * non-existing extent, or to the base of an extant extent, * since only active slabs support interior lookups (which of * course cannot be recycled). */ assert(PAGE_ADDR2BASE(new_addr) == new_addr); assert(pad == 0); assert(alignment <= PAGE); } size_t esize = size + pad; size_t alloc_size = esize + PAGE_CEILING(alignment) - PAGE; /* Beware size_t wrap-around. */ if (alloc_size < esize) { return NULL; } malloc_mutex_lock(tsdn, &extents->mtx); extent_hooks_assure_initialized(arena, r_extent_hooks); extent_t *extent; if (new_addr != NULL) { extent = extent_lock_from_addr(tsdn, rtree_ctx, new_addr); if (extent != NULL) { /* * We might null-out extent to report an error, but we * still need to unlock the associated mutex after. 
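 * The locked pointer is therefore saved separately so that the lock
 * can be released even when extent is nulled out below.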
*/ extent_t *unlock_extent = extent; assert(extent_base_get(extent) == new_addr); if (extent_arena_get(extent) != arena || extent_size_get(extent) < esize || extent_state_get(extent) != extents_state_get(extents)) { extent = NULL; } extent_unlock(tsdn, unlock_extent); } } else { extent = extents_fit_locked(tsdn, arena, extents, alloc_size); } if (extent == NULL) { malloc_mutex_unlock(tsdn, &extents->mtx); return NULL; } extent_activate_locked(tsdn, arena, extents, extent, false); malloc_mutex_unlock(tsdn, &extents->mtx); if (extent_zeroed_get(extent)) { *zero = true; } if (extent_committed_get(extent)) { *commit = true; } return extent; } static extent_t * extent_recycle_split(tsdn_t *tsdn, arena_t *arena, extent_hooks_t **r_extent_hooks, rtree_ctx_t *rtree_ctx, extents_t *extents, void *new_addr, size_t size, size_t pad, size_t alignment, bool slab, szind_t szind, extent_t *extent, bool growing_retained) { size_t esize = size + pad; size_t leadsize = ALIGNMENT_CEILING((uintptr_t)extent_base_get(extent), PAGE_CEILING(alignment)) - (uintptr_t)extent_base_get(extent); assert(new_addr == NULL || leadsize == 0); assert(extent_size_get(extent) >= leadsize + esize); size_t trailsize = extent_size_get(extent) - leadsize - esize; /* Split the lead. */ if (leadsize != 0) { extent_t *lead = extent; extent = extent_split_impl(tsdn, arena, r_extent_hooks, lead, leadsize, NSIZES, false, esize + trailsize, szind, slab, growing_retained); if (extent == NULL) { extent_deregister(tsdn, lead); extents_leak(tsdn, arena, r_extent_hooks, extents, lead, growing_retained); return NULL; } extent_deactivate(tsdn, arena, extents, lead, false); } /* Split the trail. */ if (trailsize != 0) { extent_t *trail = extent_split_impl(tsdn, arena, r_extent_hooks, extent, esize, szind, slab, trailsize, NSIZES, false, growing_retained); if (trail == NULL) { extent_deregister(tsdn, extent); extents_leak(tsdn, arena, r_extent_hooks, extents, extent, growing_retained); return NULL; } extent_deactivate(tsdn, arena, extents, trail, false); } else if (leadsize == 0) { /* * Splitting causes szind to be set as a side effect, but no * splitting occurred. */ extent_szind_set(extent, szind); if (szind != NSIZES) { rtree_szind_slab_update(tsdn, &extents_rtree, rtree_ctx, (uintptr_t)extent_addr_get(extent), szind, slab); if (slab && extent_size_get(extent) > PAGE) { rtree_szind_slab_update(tsdn, &extents_rtree, rtree_ctx, (uintptr_t)extent_past_get(extent) - (uintptr_t)PAGE, szind, slab); } } } return extent; } static extent_t * extent_recycle(tsdn_t *tsdn, arena_t *arena, extent_hooks_t **r_extent_hooks, extents_t *extents, void *new_addr, size_t size, size_t pad, size_t alignment, bool slab, szind_t szind, bool *zero, bool *commit, bool growing_retained) { witness_assert_depth_to_rank(tsdn_witness_tsdp_get(tsdn), WITNESS_RANK_CORE, growing_retained ? 
1 : 0); assert(new_addr == NULL || !slab); assert(pad == 0 || !slab); assert(!*zero || !slab); rtree_ctx_t rtree_ctx_fallback; rtree_ctx_t *rtree_ctx = tsdn_rtree_ctx(tsdn, &rtree_ctx_fallback); bool committed = false; extent_t *extent = extent_recycle_extract(tsdn, arena, r_extent_hooks, rtree_ctx, extents, new_addr, size, pad, alignment, slab, zero, &committed, growing_retained); if (extent == NULL) { return NULL; } if (committed) { *commit = true; } extent = extent_recycle_split(tsdn, arena, r_extent_hooks, rtree_ctx, extents, new_addr, size, pad, alignment, slab, szind, extent, growing_retained); if (extent == NULL) { return NULL; } if (*commit && !extent_committed_get(extent)) { if (extent_commit_impl(tsdn, arena, r_extent_hooks, extent, 0, extent_size_get(extent), growing_retained)) { extent_record(tsdn, arena, r_extent_hooks, extents, extent, growing_retained); return NULL; } extent_zeroed_set(extent, true); } if (pad != 0) { extent_addr_randomize(tsdn, extent, alignment); } assert(extent_state_get(extent) == extent_state_active); if (slab) { extent_slab_set(extent, slab); extent_interior_register(tsdn, rtree_ctx, extent, szind); } if (*zero) { void *addr = extent_base_get(extent); size_t size = extent_size_get(extent); if (!extent_zeroed_get(extent)) { if (pages_purge_forced(addr, size)) { memset(addr, 0, size); } } else if (config_debug) { size_t *p = (size_t *)(uintptr_t)addr; for (size_t i = 0; i < size / sizeof(size_t); i++) { assert(p[i] == 0); } } } return extent; } /* * If the caller specifies (!*zero), it is still possible to receive zeroed * memory, in which case *zero is toggled to true. arena_extent_alloc() takes * advantage of this to avoid demanding zeroed extents, but taking advantage of * them if they are returned. */ static void * extent_alloc_core(tsdn_t *tsdn, arena_t *arena, void *new_addr, size_t size, size_t alignment, bool *zero, bool *commit, dss_prec_t dss_prec) { void *ret; assert(size != 0); assert(alignment != 0); /* "primary" dss. */ if (have_dss && dss_prec == dss_prec_primary && (ret = extent_alloc_dss(tsdn, arena, new_addr, size, alignment, zero, commit)) != NULL) { return ret; } /* mmap. */ if ((ret = extent_alloc_mmap(new_addr, size, alignment, zero, commit)) != NULL) { return ret; } /* "secondary" dss. */ if (have_dss && dss_prec == dss_prec_secondary && (ret = extent_alloc_dss(tsdn, arena, new_addr, size, alignment, zero, commit)) != NULL) { return ret; } /* All strategies for allocation failed. */ return NULL; } static void * extent_alloc_default_impl(tsdn_t *tsdn, arena_t *arena, void *new_addr, size_t size, size_t alignment, bool *zero, bool *commit) { void *ret; ret = extent_alloc_core(tsdn, arena, new_addr, size, alignment, zero, commit, (dss_prec_t)atomic_load_u(&arena->dss_prec, ATOMIC_RELAXED)); return ret; } static void * extent_alloc_default(extent_hooks_t *extent_hooks, void *new_addr, size_t size, size_t alignment, bool *zero, bool *commit, unsigned arena_ind) { tsdn_t *tsdn; arena_t *arena; tsdn = tsdn_fetch(); arena = arena_get(tsdn, arena_ind, false); /* * The arena we're allocating on behalf of must have been initialized * already. */ assert(arena != NULL); return extent_alloc_default_impl(tsdn, arena, new_addr, size, alignment, zero, commit); } +static void +extent_hook_pre_reentrancy(tsdn_t *tsdn, arena_t *arena) { + tsd_t *tsd = tsdn_null(tsdn) ? tsd_fetch() : tsdn_tsd(tsdn); + pre_reentrancy(tsd, arena); +} + +static void +extent_hook_post_reentrancy(tsdn_t *tsdn) { + tsd_t *tsd = tsdn_null(tsdn) ? 
tsd_fetch() : tsdn_tsd(tsdn); + post_reentrancy(tsd); +} + /* * If virtual memory is retained, create increasingly larger extents from which * to split requested extents in order to limit the total number of disjoint * virtual memory ranges retained by each arena. */ static extent_t * extent_grow_retained(tsdn_t *tsdn, arena_t *arena, extent_hooks_t **r_extent_hooks, size_t size, size_t pad, size_t alignment, bool slab, szind_t szind, bool *zero, bool *commit) { malloc_mutex_assert_owner(tsdn, &arena->extent_grow_mtx); assert(pad == 0 || !slab); assert(!*zero || !slab); size_t esize = size + pad; size_t alloc_size_min = esize + PAGE_CEILING(alignment) - PAGE; /* Beware size_t wrap-around. */ if (alloc_size_min < esize) { goto label_err; } /* * Find the next extent size in the series that would be large enough to * satisfy this request. */ pszind_t egn_skip = 0; size_t alloc_size = sz_pind2sz(arena->extent_grow_next + egn_skip); while (alloc_size < alloc_size_min) { egn_skip++; if (arena->extent_grow_next + egn_skip == NPSIZES) { /* Outside legal range. */ goto label_err; } assert(arena->extent_grow_next + egn_skip < NPSIZES); alloc_size = sz_pind2sz(arena->extent_grow_next + egn_skip); } extent_t *extent = extent_alloc(tsdn, arena); if (extent == NULL) { goto label_err; } bool zeroed = false; bool committed = false; void *ptr; if (*r_extent_hooks == &extent_hooks_default) { ptr = extent_alloc_core(tsdn, arena, NULL, alloc_size, PAGE, &zeroed, &committed, (dss_prec_t)atomic_load_u( &arena->dss_prec, ATOMIC_RELAXED)); } else { + extent_hook_pre_reentrancy(tsdn, arena); ptr = (*r_extent_hooks)->alloc(*r_extent_hooks, NULL, alloc_size, PAGE, &zeroed, &committed, arena_ind_get(arena)); + extent_hook_post_reentrancy(tsdn); } extent_init(extent, arena, ptr, alloc_size, false, NSIZES, arena_extent_sn_next(arena), extent_state_active, zeroed, committed); if (ptr == NULL) { extent_dalloc(tsdn, arena, extent); goto label_err; } if (extent_register_no_gdump_add(tsdn, extent)) { extents_leak(tsdn, arena, r_extent_hooks, &arena->extents_retained, extent, true); goto label_err; } size_t leadsize = ALIGNMENT_CEILING((uintptr_t)ptr, PAGE_CEILING(alignment)) - (uintptr_t)ptr; assert(alloc_size >= leadsize + esize); size_t trailsize = alloc_size - leadsize - esize; if (extent_zeroed_get(extent) && extent_committed_get(extent)) { *zero = true; } if (extent_committed_get(extent)) { *commit = true; } /* Split the lead. */ if (leadsize != 0) { extent_t *lead = extent; extent = extent_split_impl(tsdn, arena, r_extent_hooks, lead, leadsize, NSIZES, false, esize + trailsize, szind, slab, true); if (extent == NULL) { extent_deregister(tsdn, lead); extents_leak(tsdn, arena, r_extent_hooks, &arena->extents_retained, lead, true); goto label_err; } extent_record(tsdn, arena, r_extent_hooks, &arena->extents_retained, lead, true); } /* Split the trail. */ if (trailsize != 0) { extent_t *trail = extent_split_impl(tsdn, arena, r_extent_hooks, extent, esize, szind, slab, trailsize, NSIZES, false, true); if (trail == NULL) { extent_deregister(tsdn, extent); extents_leak(tsdn, arena, r_extent_hooks, &arena->extents_retained, extent, true); goto label_err; } extent_record(tsdn, arena, r_extent_hooks, &arena->extents_retained, trail, true); } else if (leadsize == 0) { /* * Splitting causes szind to be set as a side effect, but no * splitting occurred. 
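 * Set szind (and the corresponding rtree entries) here so that the
 * resulting extent is indistinguishable from one produced by a split.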
*/ rtree_ctx_t rtree_ctx_fallback; rtree_ctx_t *rtree_ctx = tsdn_rtree_ctx(tsdn, &rtree_ctx_fallback); extent_szind_set(extent, szind); if (szind != NSIZES) { rtree_szind_slab_update(tsdn, &extents_rtree, rtree_ctx, (uintptr_t)extent_addr_get(extent), szind, slab); if (slab && extent_size_get(extent) > PAGE) { rtree_szind_slab_update(tsdn, &extents_rtree, rtree_ctx, (uintptr_t)extent_past_get(extent) - (uintptr_t)PAGE, szind, slab); } } } if (*commit && !extent_committed_get(extent)) { if (extent_commit_impl(tsdn, arena, r_extent_hooks, extent, 0, extent_size_get(extent), true)) { extent_record(tsdn, arena, r_extent_hooks, &arena->extents_retained, extent, true); goto label_err; } extent_zeroed_set(extent, true); } /* * Increment extent_grow_next if doing so wouldn't exceed the legal * range. */ if (arena->extent_grow_next + egn_skip + 1 < NPSIZES) { arena->extent_grow_next += egn_skip + 1; } else { arena->extent_grow_next = NPSIZES - 1; } /* All opportunities for failure are past. */ malloc_mutex_unlock(tsdn, &arena->extent_grow_mtx); if (config_prof) { /* Adjust gdump stats now that extent is final size. */ extent_gdump_add(tsdn, extent); } if (pad != 0) { extent_addr_randomize(tsdn, extent, alignment); } if (slab) { rtree_ctx_t rtree_ctx_fallback; rtree_ctx_t *rtree_ctx = tsdn_rtree_ctx(tsdn, &rtree_ctx_fallback); extent_slab_set(extent, true); extent_interior_register(tsdn, rtree_ctx, extent, szind); } if (*zero && !extent_zeroed_get(extent)) { void *addr = extent_base_get(extent); size_t size = extent_size_get(extent); if (pages_purge_forced(addr, size)) { memset(addr, 0, size); } } return extent; label_err: malloc_mutex_unlock(tsdn, &arena->extent_grow_mtx); return NULL; } static extent_t * extent_alloc_retained(tsdn_t *tsdn, arena_t *arena, extent_hooks_t **r_extent_hooks, void *new_addr, size_t size, size_t pad, size_t alignment, bool slab, szind_t szind, bool *zero, bool *commit) { assert(size != 0); assert(alignment != 0); malloc_mutex_lock(tsdn, &arena->extent_grow_mtx); extent_t *extent = extent_recycle(tsdn, arena, r_extent_hooks, &arena->extents_retained, new_addr, size, pad, alignment, slab, szind, zero, commit, true); if (extent != NULL) { malloc_mutex_unlock(tsdn, &arena->extent_grow_mtx); if (config_prof) { extent_gdump_add(tsdn, extent); } } else if (opt_retain && new_addr == NULL) { extent = extent_grow_retained(tsdn, arena, r_extent_hooks, size, pad, alignment, slab, szind, zero, commit); /* extent_grow_retained() always releases extent_grow_mtx. */ } else { malloc_mutex_unlock(tsdn, &arena->extent_grow_mtx); } malloc_mutex_assert_not_owner(tsdn, &arena->extent_grow_mtx); return extent; } static extent_t * extent_alloc_wrapper_hard(tsdn_t *tsdn, arena_t *arena, extent_hooks_t **r_extent_hooks, void *new_addr, size_t size, size_t pad, size_t alignment, bool slab, szind_t szind, bool *zero, bool *commit) { size_t esize = size + pad; extent_t *extent = extent_alloc(tsdn, arena); if (extent == NULL) { return NULL; } void *addr; if (*r_extent_hooks == &extent_hooks_default) { /* Call directly to propagate tsdn. 
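 * Custom hooks (the else branch below) are instead bracketed by
 * extent_hook_pre_reentrancy()/extent_hook_post_reentrancy(), so that
 * any allocation performed inside the hook is handled by the
 * reentrant allocation path.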
*/ addr = extent_alloc_default_impl(tsdn, arena, new_addr, esize, alignment, zero, commit); } else { + extent_hook_pre_reentrancy(tsdn, arena); addr = (*r_extent_hooks)->alloc(*r_extent_hooks, new_addr, esize, alignment, zero, commit, arena_ind_get(arena)); + extent_hook_post_reentrancy(tsdn); } if (addr == NULL) { extent_dalloc(tsdn, arena, extent); return NULL; } extent_init(extent, arena, addr, esize, slab, szind, arena_extent_sn_next(arena), extent_state_active, zero, commit); if (pad != 0) { extent_addr_randomize(tsdn, extent, alignment); } if (extent_register(tsdn, extent)) { extents_leak(tsdn, arena, r_extent_hooks, &arena->extents_retained, extent, false); return NULL; } return extent; } extent_t * extent_alloc_wrapper(tsdn_t *tsdn, arena_t *arena, extent_hooks_t **r_extent_hooks, void *new_addr, size_t size, size_t pad, size_t alignment, bool slab, szind_t szind, bool *zero, bool *commit) { witness_assert_depth_to_rank(tsdn_witness_tsdp_get(tsdn), WITNESS_RANK_CORE, 0); extent_hooks_assure_initialized(arena, r_extent_hooks); extent_t *extent = extent_alloc_retained(tsdn, arena, r_extent_hooks, new_addr, size, pad, alignment, slab, szind, zero, commit); if (extent == NULL) { extent = extent_alloc_wrapper_hard(tsdn, arena, r_extent_hooks, new_addr, size, pad, alignment, slab, szind, zero, commit); } return extent; } static bool extent_can_coalesce(arena_t *arena, extents_t *extents, const extent_t *inner, const extent_t *outer) { assert(extent_arena_get(inner) == arena); if (extent_arena_get(outer) != arena) { return false; } assert(extent_state_get(inner) == extent_state_active); if (extent_state_get(outer) != extents->state) { return false; } if (extent_committed_get(inner) != extent_committed_get(outer)) { return false; } return true; } static bool extent_coalesce(tsdn_t *tsdn, arena_t *arena, extent_hooks_t **r_extent_hooks, extents_t *extents, extent_t *inner, extent_t *outer, bool forward, bool growing_retained) { assert(extent_can_coalesce(arena, extents, inner, outer)); if (forward && extents->delay_coalesce) { /* * The extent that remains after coalescing must occupy the * outer extent's position in the LRU. For forward coalescing, * swap the inner extent into the LRU. */ extent_list_replace(&extents->lru, outer, inner); } extent_activate_locked(tsdn, arena, extents, outer, extents->delay_coalesce); malloc_mutex_unlock(tsdn, &extents->mtx); bool err = extent_merge_impl(tsdn, arena, r_extent_hooks, forward ? inner : outer, forward ? outer : inner, growing_retained); malloc_mutex_lock(tsdn, &extents->mtx); if (err) { if (forward && extents->delay_coalesce) { extent_list_replace(&extents->lru, inner, outer); } extent_deactivate_locked(tsdn, arena, extents, outer, extents->delay_coalesce); } return err; } static extent_t * extent_try_coalesce(tsdn_t *tsdn, arena_t *arena, extent_hooks_t **r_extent_hooks, rtree_ctx_t *rtree_ctx, extents_t *extents, extent_t *extent, bool *coalesced, bool growing_retained) { /* * Continue attempting to coalesce until failure, to protect against * races with other threads that are thwarted by this one. */ bool again; do { again = false; /* Try to coalesce forward. */ extent_t *next = extent_lock_from_addr(tsdn, rtree_ctx, extent_past_get(extent)); if (next != NULL) { /* * extents->mtx only protects against races for * like-state extents, so call extent_can_coalesce() * before releasing next's pool lock. 
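 * The result is therefore captured in can_coalesce before the
 * neighbor's lock is dropped, and the merge is attempted only if it
 * was true at that point.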
*/ bool can_coalesce = extent_can_coalesce(arena, extents, extent, next); extent_unlock(tsdn, next); if (can_coalesce && !extent_coalesce(tsdn, arena, r_extent_hooks, extents, extent, next, true, growing_retained)) { if (extents->delay_coalesce) { /* Do minimal coalescing. */ *coalesced = true; return extent; } again = true; } } /* Try to coalesce backward. */ extent_t *prev = extent_lock_from_addr(tsdn, rtree_ctx, extent_before_get(extent)); if (prev != NULL) { bool can_coalesce = extent_can_coalesce(arena, extents, extent, prev); extent_unlock(tsdn, prev); if (can_coalesce && !extent_coalesce(tsdn, arena, r_extent_hooks, extents, extent, prev, false, growing_retained)) { extent = prev; if (extents->delay_coalesce) { /* Do minimal coalescing. */ *coalesced = true; return extent; } again = true; } } } while (again); if (extents->delay_coalesce) { *coalesced = false; } return extent; } static void extent_record(tsdn_t *tsdn, arena_t *arena, extent_hooks_t **r_extent_hooks, extents_t *extents, extent_t *extent, bool growing_retained) { rtree_ctx_t rtree_ctx_fallback; rtree_ctx_t *rtree_ctx = tsdn_rtree_ctx(tsdn, &rtree_ctx_fallback); assert((extents_state_get(extents) != extent_state_dirty && extents_state_get(extents) != extent_state_muzzy) || !extent_zeroed_get(extent)); malloc_mutex_lock(tsdn, &extents->mtx); extent_hooks_assure_initialized(arena, r_extent_hooks); extent_szind_set(extent, NSIZES); if (extent_slab_get(extent)) { extent_interior_deregister(tsdn, rtree_ctx, extent); extent_slab_set(extent, false); } assert(rtree_extent_read(tsdn, &extents_rtree, rtree_ctx, (uintptr_t)extent_base_get(extent), true) == extent); if (!extents->delay_coalesce) { extent = extent_try_coalesce(tsdn, arena, r_extent_hooks, rtree_ctx, extents, extent, NULL, growing_retained); } extent_deactivate_locked(tsdn, arena, extents, extent, false); malloc_mutex_unlock(tsdn, &extents->mtx); } void extent_dalloc_gap(tsdn_t *tsdn, arena_t *arena, extent_t *extent) { extent_hooks_t *extent_hooks = EXTENT_HOOKS_INITIALIZER; witness_assert_depth_to_rank(tsdn_witness_tsdp_get(tsdn), WITNESS_RANK_CORE, 0); if (extent_register(tsdn, extent)) { extents_leak(tsdn, arena, &extent_hooks, &arena->extents_retained, extent, false); return; } extent_dalloc_wrapper(tsdn, arena, &extent_hooks, extent); } static bool extent_dalloc_default_impl(void *addr, size_t size) { if (!have_dss || !extent_in_dss(addr)) { return extent_dalloc_mmap(addr, size); } return true; } static bool extent_dalloc_default(extent_hooks_t *extent_hooks, void *addr, size_t size, bool committed, unsigned arena_ind) { return extent_dalloc_default_impl(addr, size); } static bool extent_dalloc_wrapper_try(tsdn_t *tsdn, arena_t *arena, extent_hooks_t **r_extent_hooks, extent_t *extent) { bool err; assert(extent_base_get(extent) != NULL); assert(extent_size_get(extent) != 0); witness_assert_depth_to_rank(tsdn_witness_tsdp_get(tsdn), WITNESS_RANK_CORE, 0); extent_addr_set(extent, extent_base_get(extent)); extent_hooks_assure_initialized(arena, r_extent_hooks); /* Try to deallocate. */ if (*r_extent_hooks == &extent_hooks_default) { /* Call directly to propagate tsdn. 
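 * For custom hooks, a missing dalloc callback is treated as a refusal
 * to deallocate, and the invocation is wrapped in the reentrancy
 * guards.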
*/ err = extent_dalloc_default_impl(extent_base_get(extent), extent_size_get(extent)); } else { + extent_hook_pre_reentrancy(tsdn, arena); err = ((*r_extent_hooks)->dalloc == NULL || (*r_extent_hooks)->dalloc(*r_extent_hooks, extent_base_get(extent), extent_size_get(extent), extent_committed_get(extent), arena_ind_get(arena))); + extent_hook_post_reentrancy(tsdn); } if (!err) { extent_dalloc(tsdn, arena, extent); } return err; } void extent_dalloc_wrapper(tsdn_t *tsdn, arena_t *arena, extent_hooks_t **r_extent_hooks, extent_t *extent) { witness_assert_depth_to_rank(tsdn_witness_tsdp_get(tsdn), WITNESS_RANK_CORE, 0); /* * Deregister first to avoid a race with other allocating threads, and * reregister if deallocation fails. */ extent_deregister(tsdn, extent); if (!extent_dalloc_wrapper_try(tsdn, arena, r_extent_hooks, extent)) { return; } extent_reregister(tsdn, extent); + if (*r_extent_hooks != &extent_hooks_default) { + extent_hook_pre_reentrancy(tsdn, arena); + } /* Try to decommit; purge if that fails. */ bool zeroed; if (!extent_committed_get(extent)) { zeroed = true; } else if (!extent_decommit_wrapper(tsdn, arena, r_extent_hooks, extent, 0, extent_size_get(extent))) { zeroed = true; } else if ((*r_extent_hooks)->purge_forced != NULL && !(*r_extent_hooks)->purge_forced(*r_extent_hooks, extent_base_get(extent), extent_size_get(extent), 0, extent_size_get(extent), arena_ind_get(arena))) { zeroed = true; } else if (extent_state_get(extent) == extent_state_muzzy || ((*r_extent_hooks)->purge_lazy != NULL && !(*r_extent_hooks)->purge_lazy(*r_extent_hooks, extent_base_get(extent), extent_size_get(extent), 0, extent_size_get(extent), arena_ind_get(arena)))) { zeroed = false; } else { zeroed = false; } + if (*r_extent_hooks != &extent_hooks_default) { + extent_hook_post_reentrancy(tsdn); + } extent_zeroed_set(extent, zeroed); if (config_prof) { extent_gdump_sub(tsdn, extent); } extent_record(tsdn, arena, r_extent_hooks, &arena->extents_retained, extent, false); } static void extent_destroy_default_impl(void *addr, size_t size) { if (!have_dss || !extent_in_dss(addr)) { pages_unmap(addr, size); } } static void extent_destroy_default(extent_hooks_t *extent_hooks, void *addr, size_t size, bool committed, unsigned arena_ind) { extent_destroy_default_impl(addr, size); } void extent_destroy_wrapper(tsdn_t *tsdn, arena_t *arena, extent_hooks_t **r_extent_hooks, extent_t *extent) { assert(extent_base_get(extent) != NULL); assert(extent_size_get(extent) != 0); witness_assert_depth_to_rank(tsdn_witness_tsdp_get(tsdn), WITNESS_RANK_CORE, 0); /* Deregister first to avoid a race with other allocating threads. */ extent_deregister(tsdn, extent); extent_addr_set(extent, extent_base_get(extent)); extent_hooks_assure_initialized(arena, r_extent_hooks); /* Try to destroy; silently fail otherwise. */ if (*r_extent_hooks == &extent_hooks_default) { /* Call directly to propagate tsdn. 
*/ extent_destroy_default_impl(extent_base_get(extent), extent_size_get(extent)); } else if ((*r_extent_hooks)->destroy != NULL) { + extent_hook_pre_reentrancy(tsdn, arena); (*r_extent_hooks)->destroy(*r_extent_hooks, extent_base_get(extent), extent_size_get(extent), extent_committed_get(extent), arena_ind_get(arena)); + extent_hook_post_reentrancy(tsdn); } extent_dalloc(tsdn, arena, extent); } static bool extent_commit_default(extent_hooks_t *extent_hooks, void *addr, size_t size, size_t offset, size_t length, unsigned arena_ind) { return pages_commit((void *)((uintptr_t)addr + (uintptr_t)offset), length); } static bool extent_commit_impl(tsdn_t *tsdn, arena_t *arena, extent_hooks_t **r_extent_hooks, extent_t *extent, size_t offset, size_t length, bool growing_retained) { witness_assert_depth_to_rank(tsdn_witness_tsdp_get(tsdn), WITNESS_RANK_CORE, growing_retained ? 1 : 0); extent_hooks_assure_initialized(arena, r_extent_hooks); + if (*r_extent_hooks != &extent_hooks_default) { + extent_hook_pre_reentrancy(tsdn, arena); + } bool err = ((*r_extent_hooks)->commit == NULL || (*r_extent_hooks)->commit(*r_extent_hooks, extent_base_get(extent), extent_size_get(extent), offset, length, arena_ind_get(arena))); + if (*r_extent_hooks != &extent_hooks_default) { + extent_hook_post_reentrancy(tsdn); + } extent_committed_set(extent, extent_committed_get(extent) || !err); return err; } bool extent_commit_wrapper(tsdn_t *tsdn, arena_t *arena, extent_hooks_t **r_extent_hooks, extent_t *extent, size_t offset, size_t length) { return extent_commit_impl(tsdn, arena, r_extent_hooks, extent, offset, length, false); } static bool extent_decommit_default(extent_hooks_t *extent_hooks, void *addr, size_t size, size_t offset, size_t length, unsigned arena_ind) { return pages_decommit((void *)((uintptr_t)addr + (uintptr_t)offset), length); } bool extent_decommit_wrapper(tsdn_t *tsdn, arena_t *arena, extent_hooks_t **r_extent_hooks, extent_t *extent, size_t offset, size_t length) { witness_assert_depth_to_rank(tsdn_witness_tsdp_get(tsdn), WITNESS_RANK_CORE, 0); extent_hooks_assure_initialized(arena, r_extent_hooks); + if (*r_extent_hooks != &extent_hooks_default) { + extent_hook_pre_reentrancy(tsdn, arena); + } bool err = ((*r_extent_hooks)->decommit == NULL || (*r_extent_hooks)->decommit(*r_extent_hooks, extent_base_get(extent), extent_size_get(extent), offset, length, arena_ind_get(arena))); + if (*r_extent_hooks != &extent_hooks_default) { + extent_hook_post_reentrancy(tsdn); + } extent_committed_set(extent, extent_committed_get(extent) && err); return err; } #ifdef PAGES_CAN_PURGE_LAZY static bool extent_purge_lazy_default(extent_hooks_t *extent_hooks, void *addr, size_t size, size_t offset, size_t length, unsigned arena_ind) { assert(addr != NULL); assert((offset & PAGE_MASK) == 0); assert(length != 0); assert((length & PAGE_MASK) == 0); return pages_purge_lazy((void *)((uintptr_t)addr + (uintptr_t)offset), length); } #endif static bool extent_purge_lazy_impl(tsdn_t *tsdn, arena_t *arena, extent_hooks_t **r_extent_hooks, extent_t *extent, size_t offset, size_t length, bool growing_retained) { witness_assert_depth_to_rank(tsdn_witness_tsdp_get(tsdn), WITNESS_RANK_CORE, growing_retained ? 
1 : 0); extent_hooks_assure_initialized(arena, r_extent_hooks); - return ((*r_extent_hooks)->purge_lazy == NULL || - (*r_extent_hooks)->purge_lazy(*r_extent_hooks, + + if ((*r_extent_hooks)->purge_lazy == NULL) { + return true; + } + if (*r_extent_hooks != &extent_hooks_default) { + extent_hook_pre_reentrancy(tsdn, arena); + } + bool err = (*r_extent_hooks)->purge_lazy(*r_extent_hooks, extent_base_get(extent), extent_size_get(extent), offset, length, - arena_ind_get(arena))); + arena_ind_get(arena)); + if (*r_extent_hooks != &extent_hooks_default) { + extent_hook_post_reentrancy(tsdn); + } + + return err; } bool extent_purge_lazy_wrapper(tsdn_t *tsdn, arena_t *arena, extent_hooks_t **r_extent_hooks, extent_t *extent, size_t offset, size_t length) { return extent_purge_lazy_impl(tsdn, arena, r_extent_hooks, extent, offset, length, false); } #ifdef PAGES_CAN_PURGE_FORCED static bool extent_purge_forced_default(extent_hooks_t *extent_hooks, void *addr, size_t size, size_t offset, size_t length, unsigned arena_ind) { assert(addr != NULL); assert((offset & PAGE_MASK) == 0); assert(length != 0); assert((length & PAGE_MASK) == 0); return pages_purge_forced((void *)((uintptr_t)addr + (uintptr_t)offset), length); } #endif static bool extent_purge_forced_impl(tsdn_t *tsdn, arena_t *arena, extent_hooks_t **r_extent_hooks, extent_t *extent, size_t offset, size_t length, bool growing_retained) { witness_assert_depth_to_rank(tsdn_witness_tsdp_get(tsdn), WITNESS_RANK_CORE, growing_retained ? 1 : 0); extent_hooks_assure_initialized(arena, r_extent_hooks); - return ((*r_extent_hooks)->purge_forced == NULL || - (*r_extent_hooks)->purge_forced(*r_extent_hooks, + + if ((*r_extent_hooks)->purge_forced == NULL) { + return true; + } + if (*r_extent_hooks != &extent_hooks_default) { + extent_hook_pre_reentrancy(tsdn, arena); + } + bool err = (*r_extent_hooks)->purge_forced(*r_extent_hooks, extent_base_get(extent), extent_size_get(extent), offset, length, - arena_ind_get(arena))); + arena_ind_get(arena)); + if (*r_extent_hooks != &extent_hooks_default) { + extent_hook_post_reentrancy(tsdn); + } + return err; } bool extent_purge_forced_wrapper(tsdn_t *tsdn, arena_t *arena, extent_hooks_t **r_extent_hooks, extent_t *extent, size_t offset, size_t length) { return extent_purge_forced_impl(tsdn, arena, r_extent_hooks, extent, offset, length, false); } #ifdef JEMALLOC_MAPS_COALESCE static bool extent_split_default(extent_hooks_t *extent_hooks, void *addr, size_t size, size_t size_a, size_t size_b, bool committed, unsigned arena_ind) { return !maps_coalesce; } #endif static extent_t * extent_split_impl(tsdn_t *tsdn, arena_t *arena, extent_hooks_t **r_extent_hooks, extent_t *extent, size_t size_a, szind_t szind_a, bool slab_a, size_t size_b, szind_t szind_b, bool slab_b, bool growing_retained) { assert(extent_size_get(extent) == size_a + size_b); witness_assert_depth_to_rank(tsdn_witness_tsdp_get(tsdn), WITNESS_RANK_CORE, growing_retained ? 
1 : 0); extent_hooks_assure_initialized(arena, r_extent_hooks); if ((*r_extent_hooks)->split == NULL) { return NULL; } extent_t *trail = extent_alloc(tsdn, arena); if (trail == NULL) { goto label_error_a; } extent_init(trail, arena, (void *)((uintptr_t)extent_base_get(extent) + size_a), size_b, slab_b, szind_b, extent_sn_get(extent), extent_state_get(extent), extent_zeroed_get(extent), extent_committed_get(extent)); rtree_ctx_t rtree_ctx_fallback; rtree_ctx_t *rtree_ctx = tsdn_rtree_ctx(tsdn, &rtree_ctx_fallback); rtree_leaf_elm_t *lead_elm_a, *lead_elm_b; { extent_t lead; extent_init(&lead, arena, extent_addr_get(extent), size_a, slab_a, szind_a, extent_sn_get(extent), extent_state_get(extent), extent_zeroed_get(extent), extent_committed_get(extent)); extent_rtree_leaf_elms_lookup(tsdn, rtree_ctx, &lead, false, true, &lead_elm_a, &lead_elm_b); } rtree_leaf_elm_t *trail_elm_a, *trail_elm_b; extent_rtree_leaf_elms_lookup(tsdn, rtree_ctx, trail, false, true, &trail_elm_a, &trail_elm_b); if (lead_elm_a == NULL || lead_elm_b == NULL || trail_elm_a == NULL || trail_elm_b == NULL) { goto label_error_b; } extent_lock2(tsdn, extent, trail); - if ((*r_extent_hooks)->split(*r_extent_hooks, extent_base_get(extent), + if (*r_extent_hooks != &extent_hooks_default) { + extent_hook_pre_reentrancy(tsdn, arena); + } + bool err = (*r_extent_hooks)->split(*r_extent_hooks, extent_base_get(extent), size_a + size_b, size_a, size_b, extent_committed_get(extent), - arena_ind_get(arena))) { + arena_ind_get(arena)); + if (*r_extent_hooks != &extent_hooks_default) { + extent_hook_post_reentrancy(tsdn); + } + if (err) { goto label_error_c; } extent_size_set(extent, size_a); extent_szind_set(extent, szind_a); extent_rtree_write_acquired(tsdn, lead_elm_a, lead_elm_b, extent, szind_a, slab_a); extent_rtree_write_acquired(tsdn, trail_elm_a, trail_elm_b, trail, szind_b, slab_b); extent_unlock2(tsdn, extent, trail); return trail; label_error_c: extent_unlock2(tsdn, extent, trail); label_error_b: extent_dalloc(tsdn, arena, trail); label_error_a: return NULL; } extent_t * extent_split_wrapper(tsdn_t *tsdn, arena_t *arena, extent_hooks_t **r_extent_hooks, extent_t *extent, size_t size_a, szind_t szind_a, bool slab_a, size_t size_b, szind_t szind_b, bool slab_b) { return extent_split_impl(tsdn, arena, r_extent_hooks, extent, size_a, szind_a, slab_a, size_b, szind_b, slab_b, false); } static bool extent_merge_default_impl(void *addr_a, void *addr_b) { if (!maps_coalesce) { return true; } if (have_dss && !extent_dss_mergeable(addr_a, addr_b)) { return true; } return false; } #ifdef JEMALLOC_MAPS_COALESCE static bool extent_merge_default(extent_hooks_t *extent_hooks, void *addr_a, size_t size_a, void *addr_b, size_t size_b, bool committed, unsigned arena_ind) { return extent_merge_default_impl(addr_a, addr_b); } #endif static bool extent_merge_impl(tsdn_t *tsdn, arena_t *arena, extent_hooks_t **r_extent_hooks, extent_t *a, extent_t *b, bool growing_retained) { witness_assert_depth_to_rank(tsdn_witness_tsdp_get(tsdn), WITNESS_RANK_CORE, growing_retained ? 1 : 0); extent_hooks_assure_initialized(arena, r_extent_hooks); if ((*r_extent_hooks)->merge == NULL) { return true; } bool err; if (*r_extent_hooks == &extent_hooks_default) { /* Call directly to propagate tsdn. 
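 * Custom merge hooks, like the other hook invocations above, run
 * between the reentrancy guards.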
*/ err = extent_merge_default_impl(extent_base_get(a), extent_base_get(b)); } else { + extent_hook_pre_reentrancy(tsdn, arena); err = (*r_extent_hooks)->merge(*r_extent_hooks, extent_base_get(a), extent_size_get(a), extent_base_get(b), extent_size_get(b), extent_committed_get(a), arena_ind_get(arena)); + extent_hook_post_reentrancy(tsdn); } if (err) { return true; } /* * The rtree writes must happen while all the relevant elements are * owned, so the following code uses decomposed helper functions rather * than extent_{,de}register() to do things in the right order. */ rtree_ctx_t rtree_ctx_fallback; rtree_ctx_t *rtree_ctx = tsdn_rtree_ctx(tsdn, &rtree_ctx_fallback); rtree_leaf_elm_t *a_elm_a, *a_elm_b, *b_elm_a, *b_elm_b; extent_rtree_leaf_elms_lookup(tsdn, rtree_ctx, a, true, false, &a_elm_a, &a_elm_b); extent_rtree_leaf_elms_lookup(tsdn, rtree_ctx, b, true, false, &b_elm_a, &b_elm_b); extent_lock2(tsdn, a, b); if (a_elm_b != NULL) { rtree_leaf_elm_write(tsdn, &extents_rtree, a_elm_b, NULL, NSIZES, false); } if (b_elm_b != NULL) { rtree_leaf_elm_write(tsdn, &extents_rtree, b_elm_a, NULL, NSIZES, false); } else { b_elm_b = b_elm_a; } extent_size_set(a, extent_size_get(a) + extent_size_get(b)); extent_szind_set(a, NSIZES); extent_sn_set(a, (extent_sn_get(a) < extent_sn_get(b)) ? extent_sn_get(a) : extent_sn_get(b)); extent_zeroed_set(a, extent_zeroed_get(a) && extent_zeroed_get(b)); extent_rtree_write_acquired(tsdn, a_elm_a, b_elm_b, a, NSIZES, false); extent_unlock2(tsdn, a, b); extent_dalloc(tsdn, extent_arena_get(b), b); return false; } bool extent_merge_wrapper(tsdn_t *tsdn, arena_t *arena, extent_hooks_t **r_extent_hooks, extent_t *a, extent_t *b) { return extent_merge_impl(tsdn, arena, r_extent_hooks, a, b, false); } bool extent_boot(void) { if (rtree_new(&extents_rtree, true)) { return true; } if (mutex_pool_init(&extent_mutex_pool, "extent_mutex_pool", WITNESS_RANK_EXTENT_POOL)) { return true; } if (have_dss) { extent_dss_boot(); } return false; } Index: head/contrib/jemalloc/src/jemalloc.c =================================================================== --- head/contrib/jemalloc/src/jemalloc.c (revision 320622) +++ head/contrib/jemalloc/src/jemalloc.c (revision 320623) @@ -1,3252 +1,3262 @@ #define JEMALLOC_C_ #include "jemalloc/internal/jemalloc_preamble.h" #include "jemalloc/internal/jemalloc_internal_includes.h" #include "jemalloc/internal/assert.h" #include "jemalloc/internal/atomic.h" #include "jemalloc/internal/ctl.h" #include "jemalloc/internal/extent_dss.h" #include "jemalloc/internal/extent_mmap.h" #include "jemalloc/internal/jemalloc_internal_types.h" #include "jemalloc/internal/malloc_io.h" #include "jemalloc/internal/mutex.h" #include "jemalloc/internal/rtree.h" #include "jemalloc/internal/size_classes.h" #include "jemalloc/internal/spin.h" #include "jemalloc/internal/sz.h" #include "jemalloc/internal/ticker.h" #include "jemalloc/internal/util.h" /******************************************************************************/ /* Data. */ /* Work around : */ const char *__malloc_options_1_0 = NULL; __sym_compat(_malloc_options, __malloc_options_1_0, FBSD_1.0); /* Runtime configuration options. 
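 * je_malloc_conf is declared weak (except on Windows) so that an
 * application can supply its own definition, e.g.
 *   const char *malloc_conf = "abort_conf:true";
 * in order to set options at link time.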
*/ const char *je_malloc_conf #ifndef _WIN32 JEMALLOC_ATTR(weak) #endif ; bool opt_abort = #ifdef JEMALLOC_DEBUG true #else false #endif ; bool opt_abort_conf = #ifdef JEMALLOC_DEBUG true #else false #endif ; const char *opt_junk = #if (defined(JEMALLOC_DEBUG) && defined(JEMALLOC_FILL)) "true" #else "false" #endif ; bool opt_junk_alloc = #if (defined(JEMALLOC_DEBUG) && defined(JEMALLOC_FILL)) true #else false #endif ; bool opt_junk_free = #if (defined(JEMALLOC_DEBUG) && defined(JEMALLOC_FILL)) true #else false #endif ; bool opt_utrace = false; bool opt_xmalloc = false; bool opt_zero = false; unsigned opt_narenas = 0; unsigned ncpus; /* Protects arenas initialization. */ malloc_mutex_t arenas_lock; /* * Arenas that are used to service external requests. Not all elements of the * arenas array are necessarily used; arenas are created lazily as needed. * * arenas[0..narenas_auto) are used for automatic multiplexing of threads and * arenas. arenas[narenas_auto..narenas_total) are only used if the application * takes some action to create them and allocate from them. * * Points to an arena_t. */ JEMALLOC_ALIGNED(CACHELINE) atomic_p_t arenas[MALLOCX_ARENA_LIMIT]; static atomic_u_t narenas_total; /* Use narenas_total_*(). */ static arena_t *a0; /* arenas[0]; read-only after initialization. */ unsigned narenas_auto; /* Read-only after initialization. */ typedef enum { malloc_init_uninitialized = 3, malloc_init_a0_initialized = 2, malloc_init_recursible = 1, malloc_init_initialized = 0 /* Common case --> jnz. */ } malloc_init_t; static malloc_init_t malloc_init_state = malloc_init_uninitialized; /* False should be the common case. Set to true to trigger initialization. */ bool malloc_slow = true; /* When malloc_slow is true, set the corresponding bits for sanity check. */ enum { flag_opt_junk_alloc = (1U), flag_opt_junk_free = (1U << 1), flag_opt_zero = (1U << 2), flag_opt_utrace = (1U << 3), flag_opt_xmalloc = (1U << 4) }; static uint8_t malloc_slow_flags; #ifdef JEMALLOC_THREADED_INIT /* Used to let the initializing thread recursively allocate. */ # define NO_INITIALIZER ((unsigned long)0) # define INITIALIZER pthread_self() # define IS_INITIALIZER (malloc_initializer == pthread_self()) static pthread_t malloc_initializer = NO_INITIALIZER; #else # define NO_INITIALIZER false # define INITIALIZER true # define IS_INITIALIZER malloc_initializer static bool malloc_initializer = NO_INITIALIZER; #endif /* Used to avoid initialization races. */ #ifdef _WIN32 #if _WIN32_WINNT >= 0x0600 static malloc_mutex_t init_lock = SRWLOCK_INIT; #else static malloc_mutex_t init_lock; static bool init_lock_initialized = false; JEMALLOC_ATTR(constructor) static void WINAPI _init_init_lock(void) { /* * If another constructor in the same binary is using mallctl to e.g. * set up extent hooks, it may end up running before this one, and * malloc_init_hard will crash trying to lock the uninitialized lock. So * we force an initialization of the lock in malloc_init_hard as well. * We don't try to care about atomicity of the accessed to the * init_lock_initialized boolean, since it really only matters early in * the process creation, before any separate thread normally starts * doing anything. 
*/ if (!init_lock_initialized) { malloc_mutex_init(&init_lock, "init", WITNESS_RANK_INIT, malloc_mutex_rank_exclusive); } init_lock_initialized = true; } #ifdef _MSC_VER # pragma section(".CRT$XCU", read) JEMALLOC_SECTION(".CRT$XCU") JEMALLOC_ATTR(used) static const void (WINAPI *init_init_lock)(void) = _init_init_lock; #endif #endif #else static malloc_mutex_t init_lock = MALLOC_MUTEX_INITIALIZER; #endif typedef struct { void *p; /* Input pointer (as in realloc(p, s)). */ size_t s; /* Request size. */ void *r; /* Result pointer. */ } malloc_utrace_t; #ifdef JEMALLOC_UTRACE # define UTRACE(a, b, c) do { \ if (unlikely(opt_utrace)) { \ int utrace_serrno = errno; \ malloc_utrace_t ut; \ ut.p = (a); \ ut.s = (b); \ ut.r = (c); \ utrace(&ut, sizeof(ut)); \ errno = utrace_serrno; \ } \ } while (0) #else # define UTRACE(a, b, c) #endif /* Whether encountered any invalid config options. */ static bool had_conf_error = false; /******************************************************************************/ /* * Function prototypes for static functions that are referenced prior to * definition. */ static bool malloc_init_hard_a0(void); static bool malloc_init_hard(void); /******************************************************************************/ /* * Begin miscellaneous support functions. */ bool malloc_initialized(void) { return (malloc_init_state == malloc_init_initialized); } JEMALLOC_ALWAYS_INLINE bool malloc_init_a0(void) { if (unlikely(malloc_init_state == malloc_init_uninitialized)) { return malloc_init_hard_a0(); } return false; } JEMALLOC_ALWAYS_INLINE bool malloc_init(void) { if (unlikely(!malloc_initialized()) && malloc_init_hard()) { return true; } return false; } /* * The a0*() functions are used instead of i{d,}alloc() in situations that * cannot tolerate TLS variable access. */ static void * a0ialloc(size_t size, bool zero, bool is_internal) { if (unlikely(malloc_init_a0())) { return NULL; } return iallocztm(TSDN_NULL, size, sz_size2index(size), zero, NULL, is_internal, arena_get(TSDN_NULL, 0, true), true); } static void a0idalloc(void *ptr, bool is_internal) { idalloctm(TSDN_NULL, ptr, NULL, NULL, is_internal, true); } void * a0malloc(size_t size) { return a0ialloc(size, false, true); } void a0dalloc(void *ptr) { a0idalloc(ptr, true); } /* * FreeBSD's libc uses the bootstrap_*() functions in bootstrap-senstive * situations that cannot tolerate TLS variable access (TLS allocation and very * early internal data structure initialization). */ void * bootstrap_malloc(size_t size) { if (unlikely(size == 0)) { size = 1; } return a0ialloc(size, false, false); } void * bootstrap_calloc(size_t num, size_t size) { size_t num_size; num_size = num * size; if (unlikely(num_size == 0)) { assert(num == 0 || size == 0); num_size = 1; } return a0ialloc(num_size, true, false); } void bootstrap_free(void *ptr) { if (unlikely(ptr == NULL)) { return; } a0idalloc(ptr, false); } void arena_set(unsigned ind, arena_t *arena) { atomic_store_p(&arenas[ind], arena, ATOMIC_RELEASE); } static void narenas_total_set(unsigned narenas) { atomic_store_u(&narenas_total, narenas, ATOMIC_RELEASE); } static void narenas_total_inc(void) { atomic_fetch_add_u(&narenas_total, 1, ATOMIC_RELEASE); } unsigned narenas_total_get(void) { return atomic_load_u(&narenas_total, ATOMIC_ACQUIRE); } /* Create a new arena and insert it into the arenas array at index ind. 
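 * The caller must hold arenas_lock; both arena_init() and
 * arena_choose_hard() acquire it before calling in.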
*/ static arena_t * arena_init_locked(tsdn_t *tsdn, unsigned ind, extent_hooks_t *extent_hooks) { arena_t *arena; assert(ind <= narenas_total_get()); if (ind >= MALLOCX_ARENA_LIMIT) { return NULL; } if (ind == narenas_total_get()) { narenas_total_inc(); } /* * Another thread may have already initialized arenas[ind] if it's an * auto arena. */ arena = arena_get(tsdn, ind, false); if (arena != NULL) { assert(ind < narenas_auto); return arena; } /* Actually initialize the arena. */ arena = arena_new(tsdn, ind, extent_hooks); return arena; } static void arena_new_create_background_thread(tsdn_t *tsdn, unsigned ind) { if (ind == 0) { return; } if (have_background_thread) { bool err; malloc_mutex_lock(tsdn, &background_thread_lock); err = background_thread_create(tsdn_tsd(tsdn), ind); malloc_mutex_unlock(tsdn, &background_thread_lock); if (err) { malloc_printf(": error in background thread " "creation for arena %u. Abort.\n", ind); abort(); } } } arena_t * arena_init(tsdn_t *tsdn, unsigned ind, extent_hooks_t *extent_hooks) { arena_t *arena; malloc_mutex_lock(tsdn, &arenas_lock); arena = arena_init_locked(tsdn, ind, extent_hooks); malloc_mutex_unlock(tsdn, &arenas_lock); arena_new_create_background_thread(tsdn, ind); return arena; } static void arena_bind(tsd_t *tsd, unsigned ind, bool internal) { arena_t *arena = arena_get(tsd_tsdn(tsd), ind, false); arena_nthreads_inc(arena, internal); if (internal) { tsd_iarena_set(tsd, arena); } else { tsd_arena_set(tsd, arena); } } void arena_migrate(tsd_t *tsd, unsigned oldind, unsigned newind) { arena_t *oldarena, *newarena; oldarena = arena_get(tsd_tsdn(tsd), oldind, false); newarena = arena_get(tsd_tsdn(tsd), newind, false); arena_nthreads_dec(oldarena, false); arena_nthreads_inc(newarena, false); tsd_arena_set(tsd, newarena); } static void arena_unbind(tsd_t *tsd, unsigned ind, bool internal) { arena_t *arena; arena = arena_get(tsd_tsdn(tsd), ind, false); arena_nthreads_dec(arena, internal); if (internal) { tsd_iarena_set(tsd, NULL); } else { tsd_arena_set(tsd, NULL); } } arena_tdata_t * arena_tdata_get_hard(tsd_t *tsd, unsigned ind) { arena_tdata_t *tdata, *arenas_tdata_old; arena_tdata_t *arenas_tdata = tsd_arenas_tdata_get(tsd); unsigned narenas_tdata_old, i; unsigned narenas_tdata = tsd_narenas_tdata_get(tsd); unsigned narenas_actual = narenas_total_get(); /* * Dissociate old tdata array (and set up for deallocation upon return) * if it's too small. */ if (arenas_tdata != NULL && narenas_tdata < narenas_actual) { arenas_tdata_old = arenas_tdata; narenas_tdata_old = narenas_tdata; arenas_tdata = NULL; narenas_tdata = 0; tsd_arenas_tdata_set(tsd, arenas_tdata); tsd_narenas_tdata_set(tsd, narenas_tdata); } else { arenas_tdata_old = NULL; narenas_tdata_old = 0; } /* Allocate tdata array if it's missing. */ if (arenas_tdata == NULL) { bool *arenas_tdata_bypassp = tsd_arenas_tdata_bypassp_get(tsd); narenas_tdata = (ind < narenas_actual) ? narenas_actual : ind+1; if (tsd_nominal(tsd) && !*arenas_tdata_bypassp) { *arenas_tdata_bypassp = true; arenas_tdata = (arena_tdata_t *)a0malloc( sizeof(arena_tdata_t) * narenas_tdata); *arenas_tdata_bypassp = false; } if (arenas_tdata == NULL) { tdata = NULL; goto label_return; } assert(tsd_nominal(tsd) && !*arenas_tdata_bypassp); tsd_arenas_tdata_set(tsd, arenas_tdata); tsd_narenas_tdata_set(tsd, narenas_tdata); } /* * Copy to tdata array. 
It's possible that the actual number of arenas * has increased since narenas_total_get() was called above, but that * causes no correctness issues unless two threads concurrently execute * the arenas.create mallctl, which we trust mallctl synchronization to * prevent. */ /* Copy/initialize tickers. */ for (i = 0; i < narenas_actual; i++) { if (i < narenas_tdata_old) { ticker_copy(&arenas_tdata[i].decay_ticker, &arenas_tdata_old[i].decay_ticker); } else { ticker_init(&arenas_tdata[i].decay_ticker, DECAY_NTICKS_PER_UPDATE); } } if (narenas_tdata > narenas_actual) { memset(&arenas_tdata[narenas_actual], 0, sizeof(arena_tdata_t) * (narenas_tdata - narenas_actual)); } /* Read the refreshed tdata array. */ tdata = &arenas_tdata[ind]; label_return: if (arenas_tdata_old != NULL) { a0dalloc(arenas_tdata_old); } return tdata; } /* Slow path, called only by arena_choose(). */ arena_t * arena_choose_hard(tsd_t *tsd, bool internal) { arena_t *ret JEMALLOC_CC_SILENCE_INIT(NULL); if (have_percpu_arena && PERCPU_ARENA_ENABLED(opt_percpu_arena)) { unsigned choose = percpu_arena_choose(); ret = arena_get(tsd_tsdn(tsd), choose, true); assert(ret != NULL); arena_bind(tsd, arena_ind_get(ret), false); arena_bind(tsd, arena_ind_get(ret), true); return ret; } if (narenas_auto > 1) { unsigned i, j, choose[2], first_null; bool is_new_arena[2]; /* * Determine binding for both non-internal and internal * allocation. * * choose[0]: For application allocation. * choose[1]: For internal metadata allocation. */ for (j = 0; j < 2; j++) { choose[j] = 0; is_new_arena[j] = false; } first_null = narenas_auto; malloc_mutex_lock(tsd_tsdn(tsd), &arenas_lock); assert(arena_get(tsd_tsdn(tsd), 0, false) != NULL); for (i = 1; i < narenas_auto; i++) { if (arena_get(tsd_tsdn(tsd), i, false) != NULL) { /* * Choose the first arena that has the lowest * number of threads assigned to it. */ for (j = 0; j < 2; j++) { if (arena_nthreads_get(arena_get( tsd_tsdn(tsd), i, false), !!j) < arena_nthreads_get(arena_get( tsd_tsdn(tsd), choose[j], false), !!j)) { choose[j] = i; } } } else if (first_null == narenas_auto) { /* * Record the index of the first uninitialized * arena, in case all extant arenas are in use. * * NB: It is possible for there to be * discontinuities in terms of initialized * versus uninitialized arenas, due to the * "thread.arena" mallctl. */ first_null = i; } } for (j = 0; j < 2; j++) { if (arena_nthreads_get(arena_get(tsd_tsdn(tsd), choose[j], false), !!j) == 0 || first_null == narenas_auto) { /* * Use an unloaded arena, or the least loaded * arena if all arenas are already initialized. */ if (!!j == internal) { ret = arena_get(tsd_tsdn(tsd), choose[j], false); } } else { arena_t *arena; /* Initialize a new arena. 
*/ choose[j] = first_null; arena = arena_init_locked(tsd_tsdn(tsd), choose[j], (extent_hooks_t *)&extent_hooks_default); if (arena == NULL) { malloc_mutex_unlock(tsd_tsdn(tsd), &arenas_lock); return NULL; } is_new_arena[j] = true; if (!!j == internal) { ret = arena; } } arena_bind(tsd, choose[j], !!j); } malloc_mutex_unlock(tsd_tsdn(tsd), &arenas_lock); for (j = 0; j < 2; j++) { if (is_new_arena[j]) { assert(choose[j] > 0); arena_new_create_background_thread( tsd_tsdn(tsd), choose[j]); } } } else { ret = arena_get(tsd_tsdn(tsd), 0, false); arena_bind(tsd, 0, false); arena_bind(tsd, 0, true); } return ret; } void iarena_cleanup(tsd_t *tsd) { arena_t *iarena; iarena = tsd_iarena_get(tsd); if (iarena != NULL) { arena_unbind(tsd, arena_ind_get(iarena), true); } } void arena_cleanup(tsd_t *tsd) { arena_t *arena; arena = tsd_arena_get(tsd); if (arena != NULL) { arena_unbind(tsd, arena_ind_get(arena), false); } } void arenas_tdata_cleanup(tsd_t *tsd) { arena_tdata_t *arenas_tdata; /* Prevent tsd->arenas_tdata from being (re)created. */ *tsd_arenas_tdata_bypassp_get(tsd) = true; arenas_tdata = tsd_arenas_tdata_get(tsd); if (arenas_tdata != NULL) { tsd_arenas_tdata_set(tsd, NULL); a0dalloc(arenas_tdata); } } static void stats_print_atexit(void) { if (config_stats) { tsdn_t *tsdn; unsigned narenas, i; tsdn = tsdn_fetch(); /* * Merge stats from extant threads. This is racy, since * individual threads do not lock when recording tcache stats * events. As a consequence, the final stats may be slightly * out of date by the time they are reported, if other threads * continue to allocate. */ for (i = 0, narenas = narenas_total_get(); i < narenas; i++) { arena_t *arena = arena_get(tsdn, i, false); if (arena != NULL) { tcache_t *tcache; malloc_mutex_lock(tsdn, &arena->tcache_ql_mtx); ql_foreach(tcache, &arena->tcache_ql, link) { tcache_stats_merge(tsdn, tcache, arena); } malloc_mutex_unlock(tsdn, &arena->tcache_ql_mtx); } } } je_malloc_stats_print(NULL, NULL, opt_stats_print_opts); } /* * Ensure that we don't hold any locks upon entry to or exit from allocator * code (in a "broad" sense that doesn't count a reentrant allocation as an * entrance or exit). */ JEMALLOC_ALWAYS_INLINE void check_entry_exit_locking(tsdn_t *tsdn) { if (!config_debug) { return; } if (tsdn_null(tsdn)) { return; } tsd_t *tsd = tsdn_tsd(tsdn); /* * It's possible we hold locks at entry/exit if we're in a nested * allocation. */ int8_t reentrancy_level = tsd_reentrancy_level_get(tsd); if (reentrancy_level != 0) { return; } witness_assert_lockless(tsdn_witness_tsdp_get(tsdn)); } /* * End miscellaneous support functions. */ /******************************************************************************/ /* * Begin initialization functions. */ static char * jemalloc_secure_getenv(const char *name) { #ifdef JEMALLOC_HAVE_SECURE_GETENV return secure_getenv(name); #else # ifdef JEMALLOC_HAVE_ISSETUGID if (issetugid() != 0) { return NULL; } # endif return getenv(name); #endif } static unsigned malloc_ncpus(void) { long result; #ifdef _WIN32 SYSTEM_INFO si; GetSystemInfo(&si); result = si.dwNumberOfProcessors; #elif defined(JEMALLOC_GLIBC_MALLOC_HOOK) && defined(CPU_COUNT) /* * glibc >= 2.6 has the CPU_COUNT macro. * * glibc's sysconf() uses isspace(). glibc allocates for the first time * *before* setting up the isspace tables. Therefore we need a * different method to get the number of CPUs. 
*/ { cpu_set_t set; pthread_getaffinity_np(pthread_self(), sizeof(set), &set); result = CPU_COUNT(&set); } #else result = sysconf(_SC_NPROCESSORS_ONLN); #endif return ((result == -1) ? 1 : (unsigned)result); } static void init_opt_stats_print_opts(const char *v, size_t vlen) { size_t opts_len = strlen(opt_stats_print_opts); assert(opts_len <= stats_print_tot_num_options); for (size_t i = 0; i < vlen; i++) { switch (v[i]) { #define OPTION(o, v, d, s) case o: break; STATS_PRINT_OPTIONS #undef OPTION default: continue; } if (strchr(opt_stats_print_opts, v[i]) != NULL) { /* Ignore repeated. */ continue; } opt_stats_print_opts[opts_len++] = v[i]; opt_stats_print_opts[opts_len] = '\0'; assert(opts_len <= stats_print_tot_num_options); } assert(opts_len == strlen(opt_stats_print_opts)); } static bool malloc_conf_next(char const **opts_p, char const **k_p, size_t *klen_p, char const **v_p, size_t *vlen_p) { bool accept; const char *opts = *opts_p; *k_p = opts; for (accept = false; !accept;) { switch (*opts) { case 'A': case 'B': case 'C': case 'D': case 'E': case 'F': case 'G': case 'H': case 'I': case 'J': case 'K': case 'L': case 'M': case 'N': case 'O': case 'P': case 'Q': case 'R': case 'S': case 'T': case 'U': case 'V': case 'W': case 'X': case 'Y': case 'Z': case 'a': case 'b': case 'c': case 'd': case 'e': case 'f': case 'g': case 'h': case 'i': case 'j': case 'k': case 'l': case 'm': case 'n': case 'o': case 'p': case 'q': case 'r': case 's': case 't': case 'u': case 'v': case 'w': case 'x': case 'y': case 'z': case '0': case '1': case '2': case '3': case '4': case '5': case '6': case '7': case '8': case '9': case '_': opts++; break; case ':': opts++; *klen_p = (uintptr_t)opts - 1 - (uintptr_t)*k_p; *v_p = opts; accept = true; break; case '\0': if (opts != *opts_p) { malloc_write(": Conf string ends " "with key\n"); } return true; default: malloc_write(": Malformed conf string\n"); return true; } } for (accept = false; !accept;) { switch (*opts) { case ',': opts++; /* * Look ahead one character here, because the next time * this function is called, it will assume that end of * input has been cleanly reached if no input remains, * but we have optimistically already consumed the * comma if one exists. */ if (*opts == '\0') { malloc_write(": Conf string ends " "with comma\n"); } *vlen_p = (uintptr_t)opts - 1 - (uintptr_t)*v_p; accept = true; break; case '\0': *vlen_p = (uintptr_t)opts - (uintptr_t)*v_p; accept = true; break; default: opts++; break; } } *opts_p = opts; return false; } static void malloc_abort_invalid_conf(void) { assert(opt_abort_conf); malloc_printf(": Abort (abort_conf:true) on invalid conf " "value (see above).\n"); abort(); } static void malloc_conf_error(const char *msg, const char *k, size_t klen, const char *v, size_t vlen) { malloc_printf(": %s: %.*s:%.*s\n", msg, (int)klen, k, (int)vlen, v); had_conf_error = true; if (opt_abort_conf) { malloc_abort_invalid_conf(); } } static void malloc_slow_flag_init(void) { /* * Combine the runtime options into malloc_slow for fast path. Called * after processing all the options. */ malloc_slow_flags |= (opt_junk_alloc ? flag_opt_junk_alloc : 0) | (opt_junk_free ? flag_opt_junk_free : 0) | (opt_zero ? flag_opt_zero : 0) | (opt_utrace ? flag_opt_utrace : 0) | (opt_xmalloc ? flag_opt_xmalloc : 0); malloc_slow = (malloc_slow_flags != 0); } static void malloc_conf_init(void) { unsigned i; char buf[PATH_MAX + 1]; const char *opts, *k, *v; size_t klen, vlen; for (i = 0; i < 4; i++) { /* Get runtime configuration. 
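 * Sources are processed in order of increasing precedence: the string
 * configured at build time, the je_malloc_conf global, the name of the
 * /etc/malloc.conf symbolic link, and finally the MALLOC_CONF
 * environment variable, e.g.
 *   MALLOC_CONF="dirty_decay_ms:5000,muzzy_decay_ms:5000"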
*/ switch (i) { case 0: opts = config_malloc_conf; break; case 1: if (je_malloc_conf != NULL) { /* * Use options that were compiled into the * program. */ opts = je_malloc_conf; } else { /* No configuration specified. */ buf[0] = '\0'; opts = buf; } break; case 2: { ssize_t linklen = 0; #ifndef _WIN32 int saved_errno = errno; const char *linkname = # ifdef JEMALLOC_PREFIX "/etc/"JEMALLOC_PREFIX"malloc.conf" # else "/etc/malloc.conf" # endif ; /* * Try to use the contents of the "/etc/malloc.conf" * symbolic link's name. */ linklen = readlink(linkname, buf, sizeof(buf) - 1); if (linklen == -1) { /* No configuration specified. */ linklen = 0; /* Restore errno. */ set_errno(saved_errno); } #endif buf[linklen] = '\0'; opts = buf; break; } case 3: { const char *envname = #ifdef JEMALLOC_PREFIX JEMALLOC_CPREFIX"MALLOC_CONF" #else "MALLOC_CONF" #endif ; if ((opts = jemalloc_secure_getenv(envname)) != NULL) { /* * Do nothing; opts is already initialized to * the value of the MALLOC_CONF environment * variable. */ } else { /* No configuration specified. */ buf[0] = '\0'; opts = buf; } break; } default: not_reached(); buf[0] = '\0'; opts = buf; } while (*opts != '\0' && !malloc_conf_next(&opts, &k, &klen, &v, &vlen)) { #define CONF_MATCH(n) \ (sizeof(n)-1 == klen && strncmp(n, k, klen) == 0) #define CONF_MATCH_VALUE(n) \ (sizeof(n)-1 == vlen && strncmp(n, v, vlen) == 0) #define CONF_HANDLE_BOOL(o, n) \ if (CONF_MATCH(n)) { \ if (CONF_MATCH_VALUE("true")) { \ o = true; \ } else if (CONF_MATCH_VALUE("false")) { \ o = false; \ } else { \ malloc_conf_error( \ "Invalid conf value", \ k, klen, v, vlen); \ } \ continue; \ } #define CONF_MIN_no(um, min) false #define CONF_MIN_yes(um, min) ((um) < (min)) #define CONF_MAX_no(um, max) false #define CONF_MAX_yes(um, max) ((um) > (max)) #define CONF_HANDLE_T_U(t, o, n, min, max, check_min, check_max, clip) \ if (CONF_MATCH(n)) { \ uintmax_t um; \ char *end; \ \ set_errno(0); \ um = malloc_strtoumax(v, &end, 0); \ if (get_errno() != 0 || (uintptr_t)end -\ (uintptr_t)v != vlen) { \ malloc_conf_error( \ "Invalid conf value", \ k, klen, v, vlen); \ } else if (clip) { \ if (CONF_MIN_##check_min(um, \ (t)(min))) { \ o = (t)(min); \ } else if ( \ CONF_MAX_##check_max(um, \ (t)(max))) { \ o = (t)(max); \ } else { \ o = (t)um; \ } \ } else { \ if (CONF_MIN_##check_min(um, \ (t)(min)) || \ CONF_MAX_##check_max(um, \ (t)(max))) { \ malloc_conf_error( \ "Out-of-range " \ "conf value", \ k, klen, v, vlen); \ } else { \ o = (t)um; \ } \ } \ continue; \ } #define CONF_HANDLE_UNSIGNED(o, n, min, max, check_min, check_max, \ clip) \ CONF_HANDLE_T_U(unsigned, o, n, min, max, \ check_min, check_max, clip) #define CONF_HANDLE_SIZE_T(o, n, min, max, check_min, check_max, clip) \ CONF_HANDLE_T_U(size_t, o, n, min, max, \ check_min, check_max, clip) #define CONF_HANDLE_SSIZE_T(o, n, min, max) \ if (CONF_MATCH(n)) { \ long l; \ char *end; \ \ set_errno(0); \ l = strtol(v, &end, 0); \ if (get_errno() != 0 || (uintptr_t)end -\ (uintptr_t)v != vlen) { \ malloc_conf_error( \ "Invalid conf value", \ k, klen, v, vlen); \ } else if (l < (ssize_t)(min) || l > \ (ssize_t)(max)) { \ malloc_conf_error( \ "Out-of-range conf value", \ k, klen, v, vlen); \ } else { \ o = l; \ } \ continue; \ } #define CONF_HANDLE_CHAR_P(o, n, d) \ if (CONF_MATCH(n)) { \ size_t cpylen = (vlen <= \ sizeof(o)-1) ? 
vlen : \ sizeof(o)-1; \ strncpy(o, v, cpylen); \ o[cpylen] = '\0'; \ continue; \ } CONF_HANDLE_BOOL(opt_abort, "abort") CONF_HANDLE_BOOL(opt_abort_conf, "abort_conf") if (opt_abort_conf && had_conf_error) { malloc_abort_invalid_conf(); } CONF_HANDLE_BOOL(opt_retain, "retain") if (strncmp("dss", k, klen) == 0) { int i; bool match = false; for (i = 0; i < dss_prec_limit; i++) { if (strncmp(dss_prec_names[i], v, vlen) == 0) { if (extent_dss_prec_set(i)) { malloc_conf_error( "Error setting dss", k, klen, v, vlen); } else { opt_dss = dss_prec_names[i]; match = true; break; } } } if (!match) { malloc_conf_error("Invalid conf value", k, klen, v, vlen); } continue; } CONF_HANDLE_UNSIGNED(opt_narenas, "narenas", 1, UINT_MAX, yes, no, false) CONF_HANDLE_SSIZE_T(opt_dirty_decay_ms, "dirty_decay_ms", -1, NSTIME_SEC_MAX * KQU(1000) < QU(SSIZE_MAX) ? NSTIME_SEC_MAX * KQU(1000) : SSIZE_MAX); CONF_HANDLE_SSIZE_T(opt_muzzy_decay_ms, "muzzy_decay_ms", -1, NSTIME_SEC_MAX * KQU(1000) < QU(SSIZE_MAX) ? NSTIME_SEC_MAX * KQU(1000) : SSIZE_MAX); CONF_HANDLE_BOOL(opt_stats_print, "stats_print") if (CONF_MATCH("stats_print_opts")) { init_opt_stats_print_opts(v, vlen); continue; } if (config_fill) { if (CONF_MATCH("junk")) { if (CONF_MATCH_VALUE("true")) { opt_junk = "true"; opt_junk_alloc = opt_junk_free = true; } else if (CONF_MATCH_VALUE("false")) { opt_junk = "false"; opt_junk_alloc = opt_junk_free = false; } else if (CONF_MATCH_VALUE("alloc")) { opt_junk = "alloc"; opt_junk_alloc = true; opt_junk_free = false; } else if (CONF_MATCH_VALUE("free")) { opt_junk = "free"; opt_junk_alloc = false; opt_junk_free = true; } else { malloc_conf_error( "Invalid conf value", k, klen, v, vlen); } continue; } CONF_HANDLE_BOOL(opt_zero, "zero") } if (config_utrace) { CONF_HANDLE_BOOL(opt_utrace, "utrace") } if (config_xmalloc) { CONF_HANDLE_BOOL(opt_xmalloc, "xmalloc") } CONF_HANDLE_BOOL(opt_tcache, "tcache") CONF_HANDLE_SSIZE_T(opt_lg_tcache_max, "lg_tcache_max", -1, (sizeof(size_t) << 3) - 1) if (strncmp("percpu_arena", k, klen) == 0) { int i; bool match = false; for (i = percpu_arena_mode_names_base; i < percpu_arena_mode_names_limit; i++) { if (strncmp(percpu_arena_mode_names[i], v, vlen) == 0) { if (!have_percpu_arena) { malloc_conf_error( "No getcpu support", k, klen, v, vlen); } opt_percpu_arena = i; match = true; break; } } if (!match) { malloc_conf_error("Invalid conf value", k, klen, v, vlen); } continue; } CONF_HANDLE_BOOL(opt_background_thread, "background_thread"); if (config_prof) { CONF_HANDLE_BOOL(opt_prof, "prof") CONF_HANDLE_CHAR_P(opt_prof_prefix, "prof_prefix", "jeprof") CONF_HANDLE_BOOL(opt_prof_active, "prof_active") CONF_HANDLE_BOOL(opt_prof_thread_active_init, "prof_thread_active_init") CONF_HANDLE_SIZE_T(opt_lg_prof_sample, "lg_prof_sample", 0, (sizeof(uint64_t) << 3) - 1, no, yes, true) CONF_HANDLE_BOOL(opt_prof_accum, "prof_accum") CONF_HANDLE_SSIZE_T(opt_lg_prof_interval, "lg_prof_interval", -1, (sizeof(uint64_t) << 3) - 1) CONF_HANDLE_BOOL(opt_prof_gdump, "prof_gdump") CONF_HANDLE_BOOL(opt_prof_final, "prof_final") CONF_HANDLE_BOOL(opt_prof_leak, "prof_leak") } malloc_conf_error("Invalid conf pair", k, klen, v, vlen); #undef CONF_MATCH #undef CONF_MATCH_VALUE #undef CONF_HANDLE_BOOL #undef CONF_MIN_no #undef CONF_MIN_yes #undef CONF_MAX_no #undef CONF_MAX_yes #undef CONF_HANDLE_T_U #undef CONF_HANDLE_UNSIGNED #undef CONF_HANDLE_SIZE_T #undef CONF_HANDLE_SSIZE_T #undef CONF_HANDLE_CHAR_P } } } static bool malloc_init_hard_needed(void) { if (malloc_initialized() || (IS_INITIALIZER && malloc_init_state 
== malloc_init_recursible)) { /* * Another thread initialized the allocator before this one * acquired init_lock, or this thread is the initializing * thread, and it is recursively allocating. */ return false; } #ifdef JEMALLOC_THREADED_INIT if (malloc_initializer != NO_INITIALIZER && !IS_INITIALIZER) { /* Busy-wait until the initializing thread completes. */ spin_t spinner = SPIN_INITIALIZER; do { malloc_mutex_unlock(TSDN_NULL, &init_lock); spin_adaptive(&spinner); malloc_mutex_lock(TSDN_NULL, &init_lock); } while (!malloc_initialized()); return false; } #endif return true; } static bool malloc_init_hard_a0_locked() { malloc_initializer = INITIALIZER; if (config_prof) { prof_boot0(); } malloc_conf_init(); if (opt_stats_print) { /* Print statistics at exit. */ if (atexit(stats_print_atexit) != 0) { malloc_write(": Error in atexit()\n"); if (opt_abort) { abort(); } } } if (pages_boot()) { return true; } if (base_boot(TSDN_NULL)) { return true; } if (extent_boot()) { return true; } if (ctl_boot()) { return true; } if (config_prof) { prof_boot1(); } arena_boot(); if (tcache_boot(TSDN_NULL)) { return true; } if (malloc_mutex_init(&arenas_lock, "arenas", WITNESS_RANK_ARENAS, malloc_mutex_rank_exclusive)) { return true; } /* * Create enough scaffolding to allow recursive allocation in * malloc_ncpus(). */ narenas_auto = 1; memset(arenas, 0, sizeof(arena_t *) * narenas_auto); /* * Initialize one arena here. The rest are lazily created in * arena_choose_hard(). */ if (arena_init(TSDN_NULL, 0, (extent_hooks_t *)&extent_hooks_default) == NULL) { return true; } a0 = arena_get(TSDN_NULL, 0, false); malloc_init_state = malloc_init_a0_initialized; return false; } static bool malloc_init_hard_a0(void) { bool ret; malloc_mutex_lock(TSDN_NULL, &init_lock); ret = malloc_init_hard_a0_locked(); malloc_mutex_unlock(TSDN_NULL, &init_lock); return ret; } /* Initialize data structures which may trigger recursive allocation. */ static bool malloc_init_hard_recursible(void) { malloc_init_state = malloc_init_recursible; ncpus = malloc_ncpus(); #if (defined(JEMALLOC_HAVE_PTHREAD_ATFORK) && !defined(JEMALLOC_MUTEX_INIT_CB) \ && !defined(JEMALLOC_ZONE) && !defined(_WIN32) && \ !defined(__native_client__)) /* LinuxThreads' pthread_atfork() allocates. */ if (pthread_atfork(jemalloc_prefork, jemalloc_postfork_parent, jemalloc_postfork_child) != 0) { malloc_write(": Error in pthread_atfork()\n"); if (opt_abort) { abort(); } return true; } #endif if (background_thread_boot0()) { return true; } return false; } static unsigned malloc_narenas_default(void) { assert(ncpus > 0); /* * For SMP systems, create more than one arena per CPU by * default. */ if (ncpus > 1) { return ncpus << 2; } else { return 1; } } static percpu_arena_mode_t percpu_arena_as_initialized(percpu_arena_mode_t mode) { assert(!malloc_initialized()); assert(mode <= percpu_arena_disabled); if (mode != percpu_arena_disabled) { mode += percpu_arena_mode_enabled_base; } return mode; } static bool malloc_init_narenas(void) { assert(ncpus > 0); if (opt_percpu_arena != percpu_arena_disabled) { if (!have_percpu_arena || malloc_getcpu() < 0) { opt_percpu_arena = percpu_arena_disabled; malloc_printf(": perCPU arena getcpu() not " "available. Setting narenas to %u.\n", opt_narenas ? opt_narenas : malloc_narenas_default()); if (opt_abort) { abort(); } } else { if (ncpus >= MALLOCX_ARENA_LIMIT) { malloc_printf(": narenas w/ percpu" "arena beyond limit (%d)\n", ncpus); if (opt_abort) { abort(); } return true; } /* NB: opt_percpu_arena isn't fully initialized yet. 
*/ if (percpu_arena_as_initialized(opt_percpu_arena) == per_phycpu_arena && ncpus % 2 != 0) { malloc_printf(": invalid " "configuration -- per physical CPU arena " "with odd number (%u) of CPUs (no hyper " "threading?).\n", ncpus); if (opt_abort) abort(); } unsigned n = percpu_arena_ind_limit( percpu_arena_as_initialized(opt_percpu_arena)); if (opt_narenas < n) { /* * If narenas is specified with percpu_arena * enabled, actual narenas is set as the greater * of the two. percpu_arena_choose will be free * to use any of the arenas based on CPU * id. This is conservative (at a small cost) * but ensures correctness. * * If for some reason the ncpus determined at * boot is not the actual number (e.g. because * of affinity setting from numactl), reserving * narenas this way provides a workaround for * percpu_arena. */ opt_narenas = n; } } } if (opt_narenas == 0) { opt_narenas = malloc_narenas_default(); } assert(opt_narenas > 0); narenas_auto = opt_narenas; /* * Limit the number of arenas to the indexing range of MALLOCX_ARENA(). */ if (narenas_auto >= MALLOCX_ARENA_LIMIT) { narenas_auto = MALLOCX_ARENA_LIMIT - 1; malloc_printf(": Reducing narenas to limit (%d)\n", narenas_auto); } narenas_total_set(narenas_auto); return false; } static void malloc_init_percpu(void) { opt_percpu_arena = percpu_arena_as_initialized(opt_percpu_arena); } static bool malloc_init_hard_finish(void) { if (malloc_mutex_boot()) { return true; } malloc_init_state = malloc_init_initialized; malloc_slow_flag_init(); return false; } static void malloc_init_hard_cleanup(tsdn_t *tsdn, bool reentrancy_set) { malloc_mutex_assert_owner(tsdn, &init_lock); malloc_mutex_unlock(tsdn, &init_lock); if (reentrancy_set) { assert(!tsdn_null(tsdn)); tsd_t *tsd = tsdn_tsd(tsdn); assert(tsd_reentrancy_level_get(tsd) > 0); post_reentrancy(tsd); } } static bool malloc_init_hard(void) { tsd_t *tsd; #if defined(_WIN32) && _WIN32_WINNT < 0x0600 _init_init_lock(); #endif malloc_mutex_lock(TSDN_NULL, &init_lock); #define UNLOCK_RETURN(tsdn, ret, reentrancy) \ malloc_init_hard_cleanup(tsdn, reentrancy); \ return ret; if (!malloc_init_hard_needed()) { UNLOCK_RETURN(TSDN_NULL, false, false) } if (malloc_init_state != malloc_init_a0_initialized && malloc_init_hard_a0_locked()) { UNLOCK_RETURN(TSDN_NULL, true, false) } malloc_mutex_unlock(TSDN_NULL, &init_lock); /* Recursive allocation relies on functional tsd. */ tsd = malloc_tsd_boot0(); if (tsd == NULL) { return true; } if (malloc_init_hard_recursible()) { return true; } malloc_mutex_lock(tsd_tsdn(tsd), &init_lock); /* Set reentrancy level to 1 during init. */ - pre_reentrancy(tsd); + pre_reentrancy(tsd, NULL); /* Initialize narenas before prof_boot2 (for allocation). */ if (malloc_init_narenas() || background_thread_boot1(tsd_tsdn(tsd))) { UNLOCK_RETURN(tsd_tsdn(tsd), true, true) } if (config_prof && prof_boot2(tsd)) { UNLOCK_RETURN(tsd_tsdn(tsd), true, true) } malloc_init_percpu(); if (malloc_init_hard_finish()) { UNLOCK_RETURN(tsd_tsdn(tsd), true, true) } post_reentrancy(tsd); malloc_mutex_unlock(tsd_tsdn(tsd), &init_lock); malloc_tsd_boot1(); /* Update TSD after tsd_boot1. */ tsd = tsd_fetch(); if (opt_background_thread) { assert(have_background_thread); /* * Need to finish init & unlock first before creating background * threads (pthread_create depends on malloc). 
*/ malloc_mutex_lock(tsd_tsdn(tsd), &background_thread_lock); bool err = background_thread_create(tsd, 0); malloc_mutex_unlock(tsd_tsdn(tsd), &background_thread_lock); if (err) { return true; } } #undef UNLOCK_RETURN return false; } /* * End initialization functions. */ /******************************************************************************/ /* * Begin allocation-path internal functions and data structures. */ /* * Settings determined by the documented behavior of the allocation functions. */ typedef struct static_opts_s static_opts_t; struct static_opts_s { /* Whether or not allocation size may overflow. */ bool may_overflow; /* Whether or not allocations of size 0 should be treated as size 1. */ bool bump_empty_alloc; /* * Whether to assert that allocations are not of size 0 (after any * bumping). */ bool assert_nonempty_alloc; /* * Whether or not to modify the 'result' argument to malloc in case of * error. */ bool null_out_result_on_error; /* Whether to set errno when we encounter an error condition. */ bool set_errno_on_error; /* * The minimum valid alignment for functions requesting aligned storage. */ size_t min_alignment; /* The error string to use if we oom. */ const char *oom_string; /* The error string to use if the passed-in alignment is invalid. */ const char *invalid_alignment_string; /* * False if we're configured to skip some time-consuming operations. * * This isn't really a malloc "behavior", but it acts as a useful * summary of several other static (or at least, static after program * initialization) options. */ bool slow; }; JEMALLOC_ALWAYS_INLINE void static_opts_init(static_opts_t *static_opts) { static_opts->may_overflow = false; static_opts->bump_empty_alloc = false; static_opts->assert_nonempty_alloc = false; static_opts->null_out_result_on_error = false; static_opts->set_errno_on_error = false; static_opts->min_alignment = 0; static_opts->oom_string = ""; static_opts->invalid_alignment_string = ""; static_opts->slow = false; } /* * These correspond to the macros in jemalloc/jemalloc_macros.h. Broadly, we * should have one constant here per magic value there. Note however that the * representations need not be related. */ #define TCACHE_IND_NONE ((unsigned)-1) #define TCACHE_IND_AUTOMATIC ((unsigned)-2) #define ARENA_IND_AUTOMATIC ((unsigned)-1) typedef struct dynamic_opts_s dynamic_opts_t; struct dynamic_opts_s { void **result; size_t num_items; size_t item_size; size_t alignment; bool zero; unsigned tcache_ind; unsigned arena_ind; }; JEMALLOC_ALWAYS_INLINE void dynamic_opts_init(dynamic_opts_t *dynamic_opts) { dynamic_opts->result = NULL; dynamic_opts->num_items = 0; dynamic_opts->item_size = 0; dynamic_opts->alignment = 0; dynamic_opts->zero = false; dynamic_opts->tcache_ind = TCACHE_IND_AUTOMATIC; dynamic_opts->arena_ind = ARENA_IND_AUTOMATIC; } /* ind is ignored if dopts->alignment > 0. */ JEMALLOC_ALWAYS_INLINE void * imalloc_no_sample(static_opts_t *sopts, dynamic_opts_t *dopts, tsd_t *tsd, size_t size, size_t usize, szind_t ind) { tcache_t *tcache; arena_t *arena; /* Fill in the tcache. */ if (dopts->tcache_ind == TCACHE_IND_AUTOMATIC) { if (likely(!sopts->slow)) { /* Getting tcache ptr unconditionally. */ tcache = tsd_tcachep_get(tsd); assert(tcache == tcache_get(tsd)); } else { tcache = tcache_get(tsd); } } else if (dopts->tcache_ind == TCACHE_IND_NONE) { tcache = NULL; } else { tcache = tcaches_get(tsd, dopts->tcache_ind); } /* Fill in the arena. 
*/ if (dopts->arena_ind == ARENA_IND_AUTOMATIC) { /* * In case of automatic arena management, we defer arena * computation until as late as we can, hoping to fill the * allocation out of the tcache. */ arena = NULL; } else { arena = arena_get(tsd_tsdn(tsd), dopts->arena_ind, true); } if (unlikely(dopts->alignment != 0)) { return ipalloct(tsd_tsdn(tsd), usize, dopts->alignment, dopts->zero, tcache, arena); } return iallocztm(tsd_tsdn(tsd), size, ind, dopts->zero, tcache, false, arena, sopts->slow); } JEMALLOC_ALWAYS_INLINE void * imalloc_sample(static_opts_t *sopts, dynamic_opts_t *dopts, tsd_t *tsd, size_t usize, szind_t ind) { void *ret; /* * For small allocations, sampling bumps the usize. If so, we allocate * from the ind_large bucket. */ szind_t ind_large; size_t bumped_usize = usize; if (usize <= SMALL_MAXCLASS) { assert(((dopts->alignment == 0) ? sz_s2u(LARGE_MINCLASS) : sz_sa2u(LARGE_MINCLASS, dopts->alignment)) == LARGE_MINCLASS); ind_large = sz_size2index(LARGE_MINCLASS); bumped_usize = sz_s2u(LARGE_MINCLASS); ret = imalloc_no_sample(sopts, dopts, tsd, bumped_usize, bumped_usize, ind_large); if (unlikely(ret == NULL)) { return NULL; } arena_prof_promote(tsd_tsdn(tsd), ret, usize); } else { ret = imalloc_no_sample(sopts, dopts, tsd, usize, usize, ind); } return ret; } /* * Returns true if the allocation will overflow, and false otherwise. Sets * *size to the product either way. */ JEMALLOC_ALWAYS_INLINE bool compute_size_with_overflow(bool may_overflow, dynamic_opts_t *dopts, size_t *size) { /* * This function is just num_items * item_size, except that we may have * to check for overflow. */ if (!may_overflow) { assert(dopts->num_items == 1); *size = dopts->item_size; return false; } /* A size_t with its high-half bits all set to 1. */ const static size_t high_bits = SIZE_T_MAX << (sizeof(size_t) * 8 / 2); *size = dopts->item_size * dopts->num_items; if (unlikely(*size == 0)) { return (dopts->num_items != 0 && dopts->item_size != 0); } /* * We got a non-zero size, but we don't know if we overflowed to get * there. To avoid having to do a divide, we'll be clever and note that * if both A and B can be represented in N/2 bits, then their product * can be represented in N bits (without the possibility of overflow). */ if (likely((high_bits & (dopts->num_items | dopts->item_size)) == 0)) { return false; } if (likely(*size / dopts->item_size == dopts->num_items)) { return false; } return true; } JEMALLOC_ALWAYS_INLINE int imalloc_body(static_opts_t *sopts, dynamic_opts_t *dopts, tsd_t *tsd) { /* Where the actual allocated memory will live. */ void *allocation = NULL; /* Filled in by compute_size_with_overflow below. */ size_t size = 0; /* * For unaligned allocations, we need only ind. For aligned * allocations, or in case of stats or profiling we need usize. * * These are actually dead stores, in that their values are reset before * any branch on their value is taken. Sometimes though, it's * convenient to pass them as arguments before this point. To avoid * undefined behavior then, we initialize them with dummy stores. */ szind_t ind = 0; size_t usize = 0; /* Reentrancy is only checked on slow path. */ int8_t reentrancy_level; /* Compute the amount of memory the user wants. */ if (unlikely(compute_size_with_overflow(sopts->may_overflow, dopts, &size))) { goto label_oom; } /* Validate the user input. 
*/ if (sopts->bump_empty_alloc) { if (unlikely(size == 0)) { size = 1; } } if (sopts->assert_nonempty_alloc) { assert (size != 0); } if (unlikely(dopts->alignment < sopts->min_alignment || (dopts->alignment & (dopts->alignment - 1)) != 0)) { goto label_invalid_alignment; } /* This is the beginning of the "core" algorithm. */ if (dopts->alignment == 0) { ind = sz_size2index(size); if (unlikely(ind >= NSIZES)) { goto label_oom; } if (config_stats || (config_prof && opt_prof)) { usize = sz_index2size(ind); assert(usize > 0 && usize <= LARGE_MAXCLASS); } } else { usize = sz_sa2u(size, dopts->alignment); if (unlikely(usize == 0 || usize > LARGE_MAXCLASS)) { goto label_oom; } } check_entry_exit_locking(tsd_tsdn(tsd)); /* * If we need to handle reentrancy, we can do it out of a * known-initialized arena (i.e. arena 0). */ reentrancy_level = tsd_reentrancy_level_get(tsd); if (sopts->slow && unlikely(reentrancy_level > 0)) { /* * We should never specify particular arenas or tcaches from * within our internal allocations. */ assert(dopts->tcache_ind == TCACHE_IND_AUTOMATIC || dopts->tcache_ind == TCACHE_IND_NONE); - assert(dopts->arena_ind = ARENA_IND_AUTOMATIC); + assert(dopts->arena_ind == ARENA_IND_AUTOMATIC); dopts->tcache_ind = TCACHE_IND_NONE; /* We know that arena 0 has already been initialized. */ dopts->arena_ind = 0; } /* If profiling is on, get our profiling context. */ if (config_prof && opt_prof) { /* * Note that if we're going down this path, usize must have been * initialized in the previous if statement. */ prof_tctx_t *tctx = prof_alloc_prep( tsd, usize, prof_active_get_unlocked(), true); alloc_ctx_t alloc_ctx; if (likely((uintptr_t)tctx == (uintptr_t)1U)) { alloc_ctx.slab = (usize <= SMALL_MAXCLASS); allocation = imalloc_no_sample( sopts, dopts, tsd, usize, usize, ind); } else if ((uintptr_t)tctx > (uintptr_t)1U) { /* * Note that ind might still be 0 here. This is fine; * imalloc_sample ignores ind if dopts->alignment > 0. */ allocation = imalloc_sample( sopts, dopts, tsd, usize, ind); alloc_ctx.slab = false; } else { allocation = NULL; } if (unlikely(allocation == NULL)) { prof_alloc_rollback(tsd, tctx, true); goto label_oom; } prof_malloc(tsd_tsdn(tsd), allocation, usize, &alloc_ctx, tctx); } else { /* * If dopts->alignment > 0, then ind is still 0, but usize was * computed in the previous if statement. Down the positive * alignment path, imalloc_no_sample ignores ind and size * (relying only on usize). */ allocation = imalloc_no_sample(sopts, dopts, tsd, size, usize, ind); if (unlikely(allocation == NULL)) { goto label_oom; } } /* * Allocation has been done at this point. We still have some * post-allocation work to do though. */ assert(dopts->alignment == 0 || ((uintptr_t)allocation & (dopts->alignment - 1)) == ZU(0)); if (config_stats) { assert(usize == isalloc(tsd_tsdn(tsd), allocation)); *tsd_thread_allocatedp_get(tsd) += usize; } if (sopts->slow) { UTRACE(0, size, allocation); } /* Success! */ check_entry_exit_locking(tsd_tsdn(tsd)); *dopts->result = allocation; return 0; label_oom: if (unlikely(sopts->slow) && config_xmalloc && unlikely(opt_xmalloc)) { malloc_write(sopts->oom_string); abort(); } if (sopts->slow) { UTRACE(NULL, size, NULL); } check_entry_exit_locking(tsd_tsdn(tsd)); if (sopts->set_errno_on_error) { set_errno(ENOMEM); } if (sopts->null_out_result_on_error) { *dopts->result = NULL; } return ENOMEM; /* * This label is only jumped to by one goto; we move it out of line * anyways to avoid obscuring the non-error paths, and for symmetry with * the oom case. 
*/ label_invalid_alignment: if (config_xmalloc && unlikely(opt_xmalloc)) { malloc_write(sopts->invalid_alignment_string); abort(); } if (sopts->set_errno_on_error) { set_errno(EINVAL); } if (sopts->slow) { UTRACE(NULL, size, NULL); } check_entry_exit_locking(tsd_tsdn(tsd)); if (sopts->null_out_result_on_error) { *dopts->result = NULL; } return EINVAL; } /* Returns the errno-style error code of the allocation. */ JEMALLOC_ALWAYS_INLINE int imalloc(static_opts_t *sopts, dynamic_opts_t *dopts) { if (unlikely(!malloc_initialized()) && unlikely(malloc_init())) { if (config_xmalloc && unlikely(opt_xmalloc)) { malloc_write(sopts->oom_string); abort(); } UTRACE(NULL, dopts->num_items * dopts->item_size, NULL); set_errno(ENOMEM); *dopts->result = NULL; return ENOMEM; } /* We always need the tsd. Let's grab it right away. */ tsd_t *tsd = tsd_fetch(); assert(tsd); if (likely(tsd_fast(tsd))) { /* Fast and common path. */ tsd_assert_fast(tsd); sopts->slow = false; return imalloc_body(sopts, dopts, tsd); } else { sopts->slow = true; return imalloc_body(sopts, dopts, tsd); } } /******************************************************************************/ /* * Begin malloc(3)-compatible functions. */ JEMALLOC_EXPORT JEMALLOC_ALLOCATOR JEMALLOC_RESTRICT_RETURN void JEMALLOC_NOTHROW * JEMALLOC_ATTR(malloc) JEMALLOC_ALLOC_SIZE(1) je_malloc(size_t size) { void *ret; static_opts_t sopts; dynamic_opts_t dopts; static_opts_init(&sopts); dynamic_opts_init(&dopts); sopts.bump_empty_alloc = true; sopts.null_out_result_on_error = true; sopts.set_errno_on_error = true; sopts.oom_string = ": Error in malloc(): out of memory\n"; dopts.result = &ret; dopts.num_items = 1; dopts.item_size = size; imalloc(&sopts, &dopts); return ret; } JEMALLOC_EXPORT int JEMALLOC_NOTHROW JEMALLOC_ATTR(nonnull(1)) je_posix_memalign(void **memptr, size_t alignment, size_t size) { int ret; static_opts_t sopts; dynamic_opts_t dopts; static_opts_init(&sopts); dynamic_opts_init(&dopts); sopts.bump_empty_alloc = true; sopts.min_alignment = sizeof(void *); sopts.oom_string = ": Error allocating aligned memory: out of memory\n"; sopts.invalid_alignment_string = ": Error allocating aligned memory: invalid alignment\n"; dopts.result = memptr; dopts.num_items = 1; dopts.item_size = size; dopts.alignment = alignment; ret = imalloc(&sopts, &dopts); return ret; } JEMALLOC_EXPORT JEMALLOC_ALLOCATOR JEMALLOC_RESTRICT_RETURN void JEMALLOC_NOTHROW * JEMALLOC_ATTR(malloc) JEMALLOC_ALLOC_SIZE(2) je_aligned_alloc(size_t alignment, size_t size) { void *ret; static_opts_t sopts; dynamic_opts_t dopts; static_opts_init(&sopts); dynamic_opts_init(&dopts); sopts.bump_empty_alloc = true; sopts.null_out_result_on_error = true; sopts.set_errno_on_error = true; sopts.min_alignment = 1; sopts.oom_string = ": Error allocating aligned memory: out of memory\n"; sopts.invalid_alignment_string = ": Error allocating aligned memory: invalid alignment\n"; dopts.result = &ret; dopts.num_items = 1; dopts.item_size = size; dopts.alignment = alignment; imalloc(&sopts, &dopts); return ret; } JEMALLOC_EXPORT JEMALLOC_ALLOCATOR JEMALLOC_RESTRICT_RETURN void JEMALLOC_NOTHROW * JEMALLOC_ATTR(malloc) JEMALLOC_ALLOC_SIZE2(1, 2) je_calloc(size_t num, size_t size) { void *ret; static_opts_t sopts; dynamic_opts_t dopts; static_opts_init(&sopts); dynamic_opts_init(&dopts); sopts.may_overflow = true; sopts.bump_empty_alloc = true; sopts.null_out_result_on_error = true; sopts.set_errno_on_error = true; sopts.oom_string = ": Error in calloc(): out of memory\n"; dopts.result = &ret; 
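	/*
	 * Editor's note -- illustrative sketch only, not part of the upstream
	 * change.  Every malloc(3)-compatible entry point in this block
	 * differs only in how it fills in static_opts_t/dynamic_opts_t before
	 * handing off to the unified imalloc() path.  As an example of the
	 * pattern, a hypothetical wrapper (example_malloc_in_arena() is an
	 * invented name) that forces allocation from a caller-chosen arena
	 * would only need to override dopts.arena_ind:
	 *
	 *	void *
	 *	example_malloc_in_arena(size_t size, unsigned arena_ind) {
	 *		void *ret;
	 *		static_opts_t sopts;
	 *		dynamic_opts_t dopts;
	 *
	 *		static_opts_init(&sopts);
	 *		dynamic_opts_init(&dopts);
	 *		sopts.bump_empty_alloc = true;
	 *		sopts.null_out_result_on_error = true;
	 *		sopts.set_errno_on_error = true;
	 *		sopts.oom_string = ": Error: out of memory\n";
	 *		dopts.result = &ret;
	 *		dopts.num_items = 1;
	 *		dopts.item_size = size;
	 *		dopts.arena_ind = arena_ind;
	 *		imalloc(&sopts, &dopts);
	 *		return ret;
	 *	}
	 */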
dopts.num_items = num; dopts.item_size = size; dopts.zero = true; imalloc(&sopts, &dopts); return ret; } static void * irealloc_prof_sample(tsd_t *tsd, void *old_ptr, size_t old_usize, size_t usize, prof_tctx_t *tctx) { void *p; if (tctx == NULL) { return NULL; } if (usize <= SMALL_MAXCLASS) { p = iralloc(tsd, old_ptr, old_usize, LARGE_MINCLASS, 0, false); if (p == NULL) { return NULL; } arena_prof_promote(tsd_tsdn(tsd), p, usize); } else { p = iralloc(tsd, old_ptr, old_usize, usize, 0, false); } return p; } JEMALLOC_ALWAYS_INLINE void * irealloc_prof(tsd_t *tsd, void *old_ptr, size_t old_usize, size_t usize, alloc_ctx_t *alloc_ctx) { void *p; bool prof_active; prof_tctx_t *old_tctx, *tctx; prof_active = prof_active_get_unlocked(); old_tctx = prof_tctx_get(tsd_tsdn(tsd), old_ptr, alloc_ctx); tctx = prof_alloc_prep(tsd, usize, prof_active, true); if (unlikely((uintptr_t)tctx != (uintptr_t)1U)) { p = irealloc_prof_sample(tsd, old_ptr, old_usize, usize, tctx); } else { p = iralloc(tsd, old_ptr, old_usize, usize, 0, false); } if (unlikely(p == NULL)) { prof_alloc_rollback(tsd, tctx, true); return NULL; } prof_realloc(tsd, p, usize, tctx, prof_active, true, old_ptr, old_usize, old_tctx); return p; } JEMALLOC_ALWAYS_INLINE void ifree(tsd_t *tsd, void *ptr, tcache_t *tcache, bool slow_path) { if (!slow_path) { tsd_assert_fast(tsd); } check_entry_exit_locking(tsd_tsdn(tsd)); if (tsd_reentrancy_level_get(tsd) != 0) { assert(slow_path); } assert(ptr != NULL); assert(malloc_initialized() || IS_INITIALIZER); alloc_ctx_t alloc_ctx; rtree_ctx_t *rtree_ctx = tsd_rtree_ctx(tsd); rtree_szind_slab_read(tsd_tsdn(tsd), &extents_rtree, rtree_ctx, (uintptr_t)ptr, true, &alloc_ctx.szind, &alloc_ctx.slab); assert(alloc_ctx.szind != NSIZES); size_t usize; if (config_prof && opt_prof) { usize = sz_index2size(alloc_ctx.szind); prof_free(tsd, ptr, usize, &alloc_ctx); } else if (config_stats) { usize = sz_index2size(alloc_ctx.szind); } if (config_stats) { *tsd_thread_deallocatedp_get(tsd) += usize; } if (likely(!slow_path)) { idalloctm(tsd_tsdn(tsd), ptr, tcache, &alloc_ctx, false, false); } else { idalloctm(tsd_tsdn(tsd), ptr, tcache, &alloc_ctx, false, true); } } JEMALLOC_ALWAYS_INLINE void isfree(tsd_t *tsd, void *ptr, size_t usize, tcache_t *tcache, bool slow_path) { if (!slow_path) { tsd_assert_fast(tsd); } check_entry_exit_locking(tsd_tsdn(tsd)); if (tsd_reentrancy_level_get(tsd) != 0) { assert(slow_path); } assert(ptr != NULL); assert(malloc_initialized() || IS_INITIALIZER); alloc_ctx_t alloc_ctx, *ctx; if (config_prof && opt_prof) { rtree_ctx_t *rtree_ctx = tsd_rtree_ctx(tsd); rtree_szind_slab_read(tsd_tsdn(tsd), &extents_rtree, rtree_ctx, (uintptr_t)ptr, true, &alloc_ctx.szind, &alloc_ctx.slab); assert(alloc_ctx.szind == sz_size2index(usize)); ctx = &alloc_ctx; prof_free(tsd, ptr, usize, ctx); } else { ctx = NULL; } if (config_stats) { *tsd_thread_deallocatedp_get(tsd) += usize; } if (likely(!slow_path)) { isdalloct(tsd_tsdn(tsd), ptr, usize, tcache, ctx, false); } else { isdalloct(tsd_tsdn(tsd), ptr, usize, tcache, ctx, true); } } JEMALLOC_EXPORT JEMALLOC_ALLOCATOR JEMALLOC_RESTRICT_RETURN void JEMALLOC_NOTHROW * JEMALLOC_ALLOC_SIZE(2) je_realloc(void *ptr, size_t size) { void *ret; tsdn_t *tsdn JEMALLOC_CC_SILENCE_INIT(NULL); size_t usize JEMALLOC_CC_SILENCE_INIT(0); size_t old_usize = 0; if (unlikely(size == 0)) { if (ptr != NULL) { /* realloc(ptr, 0) is equivalent to free(ptr). 
*/ UTRACE(ptr, 0, 0); tcache_t *tcache; tsd_t *tsd = tsd_fetch(); if (tsd_reentrancy_level_get(tsd) == 0) { tcache = tcache_get(tsd); } else { tcache = NULL; } ifree(tsd, ptr, tcache, true); return NULL; } size = 1; } if (likely(ptr != NULL)) { assert(malloc_initialized() || IS_INITIALIZER); tsd_t *tsd = tsd_fetch(); check_entry_exit_locking(tsd_tsdn(tsd)); alloc_ctx_t alloc_ctx; rtree_ctx_t *rtree_ctx = tsd_rtree_ctx(tsd); rtree_szind_slab_read(tsd_tsdn(tsd), &extents_rtree, rtree_ctx, (uintptr_t)ptr, true, &alloc_ctx.szind, &alloc_ctx.slab); assert(alloc_ctx.szind != NSIZES); old_usize = sz_index2size(alloc_ctx.szind); assert(old_usize == isalloc(tsd_tsdn(tsd), ptr)); if (config_prof && opt_prof) { usize = sz_s2u(size); ret = unlikely(usize == 0 || usize > LARGE_MAXCLASS) ? NULL : irealloc_prof(tsd, ptr, old_usize, usize, &alloc_ctx); } else { if (config_stats) { usize = sz_s2u(size); } ret = iralloc(tsd, ptr, old_usize, size, 0, false); } tsdn = tsd_tsdn(tsd); } else { /* realloc(NULL, size) is equivalent to malloc(size). */ return je_malloc(size); } if (unlikely(ret == NULL)) { if (config_xmalloc && unlikely(opt_xmalloc)) { malloc_write(": Error in realloc(): " "out of memory\n"); abort(); } set_errno(ENOMEM); } if (config_stats && likely(ret != NULL)) { tsd_t *tsd; assert(usize == isalloc(tsdn, ret)); tsd = tsdn_tsd(tsdn); *tsd_thread_allocatedp_get(tsd) += usize; *tsd_thread_deallocatedp_get(tsd) += old_usize; } UTRACE(ptr, size, ret); check_entry_exit_locking(tsdn); return ret; } JEMALLOC_EXPORT void JEMALLOC_NOTHROW je_free(void *ptr) { UTRACE(ptr, 0, 0); if (likely(ptr != NULL)) { - tsd_t *tsd = tsd_fetch(); + /* + * We avoid setting up tsd fully (e.g. tcache, arena binding) + * based on only free() calls -- other activities trigger the + * minimal to full transition. This is because free() may + * happen during thread shutdown after tls deallocation: if a + * thread never had any malloc activities until then, a + * fully-setup tsd won't be destructed properly. + */ + tsd_t *tsd = tsd_fetch_min(); check_entry_exit_locking(tsd_tsdn(tsd)); tcache_t *tcache; if (likely(tsd_fast(tsd))) { tsd_assert_fast(tsd); /* Unconditionally get tcache ptr on fast path. */ tcache = tsd_tcachep_get(tsd); ifree(tsd, ptr, tcache, false); } else { if (likely(tsd_reentrancy_level_get(tsd) == 0)) { tcache = tcache_get(tsd); } else { tcache = NULL; } ifree(tsd, ptr, tcache, true); } check_entry_exit_locking(tsd_tsdn(tsd)); } } /* * End malloc(3)-compatible functions. */ /******************************************************************************/ /* * Begin non-standard override functions. 
*/ #ifdef JEMALLOC_OVERRIDE_MEMALIGN JEMALLOC_EXPORT JEMALLOC_ALLOCATOR JEMALLOC_RESTRICT_RETURN void JEMALLOC_NOTHROW * JEMALLOC_ATTR(malloc) je_memalign(size_t alignment, size_t size) { void *ret; static_opts_t sopts; dynamic_opts_t dopts; static_opts_init(&sopts); dynamic_opts_init(&dopts); sopts.bump_empty_alloc = true; sopts.min_alignment = 1; sopts.oom_string = ": Error allocating aligned memory: out of memory\n"; sopts.invalid_alignment_string = ": Error allocating aligned memory: invalid alignment\n"; sopts.null_out_result_on_error = true; dopts.result = &ret; dopts.num_items = 1; dopts.item_size = size; dopts.alignment = alignment; imalloc(&sopts, &dopts); return ret; } #endif #ifdef JEMALLOC_OVERRIDE_VALLOC JEMALLOC_EXPORT JEMALLOC_ALLOCATOR JEMALLOC_RESTRICT_RETURN void JEMALLOC_NOTHROW * JEMALLOC_ATTR(malloc) je_valloc(size_t size) { void *ret; static_opts_t sopts; dynamic_opts_t dopts; static_opts_init(&sopts); dynamic_opts_init(&dopts); sopts.bump_empty_alloc = true; sopts.null_out_result_on_error = true; sopts.min_alignment = PAGE; sopts.oom_string = ": Error allocating aligned memory: out of memory\n"; sopts.invalid_alignment_string = ": Error allocating aligned memory: invalid alignment\n"; dopts.result = &ret; dopts.num_items = 1; dopts.item_size = size; dopts.alignment = PAGE; imalloc(&sopts, &dopts); return ret; } #endif #if defined(JEMALLOC_IS_MALLOC) && defined(JEMALLOC_GLIBC_MALLOC_HOOK) /* * glibc provides the RTLD_DEEPBIND flag for dlopen which can make it possible * to inconsistently reference libc's malloc(3)-compatible functions * (https://bugzilla.mozilla.org/show_bug.cgi?id=493541). * * These definitions interpose hooks in glibc. The functions are actually * passed an extra argument for the caller return address, which will be * ignored. */ JEMALLOC_EXPORT void (*__free_hook)(void *ptr) = je_free; JEMALLOC_EXPORT void *(*__malloc_hook)(size_t size) = je_malloc; JEMALLOC_EXPORT void *(*__realloc_hook)(void *ptr, size_t size) = je_realloc; # ifdef JEMALLOC_GLIBC_MEMALIGN_HOOK JEMALLOC_EXPORT void *(*__memalign_hook)(size_t alignment, size_t size) = je_memalign; # endif # ifdef CPU_COUNT /* * To enable static linking with glibc, the libc specific malloc interface must * be implemented also, so none of glibc's malloc.o functions are added to the * link. */ # define ALIAS(je_fn) __attribute__((alias (#je_fn), used)) /* To force macro expansion of je_ prefix before stringification. */ # define PREALIAS(je_fn) ALIAS(je_fn) # ifdef JEMALLOC_OVERRIDE___LIBC_CALLOC void *__libc_calloc(size_t n, size_t size) PREALIAS(je_calloc); # endif # ifdef JEMALLOC_OVERRIDE___LIBC_FREE void __libc_free(void* ptr) PREALIAS(je_free); # endif # ifdef JEMALLOC_OVERRIDE___LIBC_MALLOC void *__libc_malloc(size_t size) PREALIAS(je_malloc); # endif # ifdef JEMALLOC_OVERRIDE___LIBC_MEMALIGN void *__libc_memalign(size_t align, size_t s) PREALIAS(je_memalign); # endif # ifdef JEMALLOC_OVERRIDE___LIBC_REALLOC void *__libc_realloc(void* ptr, size_t size) PREALIAS(je_realloc); # endif # ifdef JEMALLOC_OVERRIDE___LIBC_VALLOC void *__libc_valloc(size_t size) PREALIAS(je_valloc); # endif # ifdef JEMALLOC_OVERRIDE___POSIX_MEMALIGN int __posix_memalign(void** r, size_t a, size_t s) PREALIAS(je_posix_memalign); # endif # undef PREALIAS # undef ALIAS # endif #endif /* * End non-standard override functions. */ /******************************************************************************/ /* * Begin non-standard functions. 
*/ JEMALLOC_EXPORT JEMALLOC_ALLOCATOR JEMALLOC_RESTRICT_RETURN void JEMALLOC_NOTHROW * JEMALLOC_ATTR(malloc) JEMALLOC_ALLOC_SIZE(1) je_mallocx(size_t size, int flags) { void *ret; static_opts_t sopts; dynamic_opts_t dopts; static_opts_init(&sopts); dynamic_opts_init(&dopts); sopts.assert_nonempty_alloc = true; sopts.null_out_result_on_error = true; sopts.oom_string = ": Error in mallocx(): out of memory\n"; dopts.result = &ret; dopts.num_items = 1; dopts.item_size = size; if (unlikely(flags != 0)) { if ((flags & MALLOCX_LG_ALIGN_MASK) != 0) { dopts.alignment = MALLOCX_ALIGN_GET_SPECIFIED(flags); } dopts.zero = MALLOCX_ZERO_GET(flags); if ((flags & MALLOCX_TCACHE_MASK) != 0) { if ((flags & MALLOCX_TCACHE_MASK) == MALLOCX_TCACHE_NONE) { dopts.tcache_ind = TCACHE_IND_NONE; } else { dopts.tcache_ind = MALLOCX_TCACHE_GET(flags); } } else { dopts.tcache_ind = TCACHE_IND_AUTOMATIC; } if ((flags & MALLOCX_ARENA_MASK) != 0) dopts.arena_ind = MALLOCX_ARENA_GET(flags); } imalloc(&sopts, &dopts); return ret; } static void * irallocx_prof_sample(tsdn_t *tsdn, void *old_ptr, size_t old_usize, size_t usize, size_t alignment, bool zero, tcache_t *tcache, arena_t *arena, prof_tctx_t *tctx) { void *p; if (tctx == NULL) { return NULL; } if (usize <= SMALL_MAXCLASS) { p = iralloct(tsdn, old_ptr, old_usize, LARGE_MINCLASS, alignment, zero, tcache, arena); if (p == NULL) { return NULL; } arena_prof_promote(tsdn, p, usize); } else { p = iralloct(tsdn, old_ptr, old_usize, usize, alignment, zero, tcache, arena); } return p; } JEMALLOC_ALWAYS_INLINE void * irallocx_prof(tsd_t *tsd, void *old_ptr, size_t old_usize, size_t size, size_t alignment, size_t *usize, bool zero, tcache_t *tcache, arena_t *arena, alloc_ctx_t *alloc_ctx) { void *p; bool prof_active; prof_tctx_t *old_tctx, *tctx; prof_active = prof_active_get_unlocked(); old_tctx = prof_tctx_get(tsd_tsdn(tsd), old_ptr, alloc_ctx); tctx = prof_alloc_prep(tsd, *usize, prof_active, false); if (unlikely((uintptr_t)tctx != (uintptr_t)1U)) { p = irallocx_prof_sample(tsd_tsdn(tsd), old_ptr, old_usize, *usize, alignment, zero, tcache, arena, tctx); } else { p = iralloct(tsd_tsdn(tsd), old_ptr, old_usize, size, alignment, zero, tcache, arena); } if (unlikely(p == NULL)) { prof_alloc_rollback(tsd, tctx, false); return NULL; } if (p == old_ptr && alignment != 0) { /* * The allocation did not move, so it is possible that the size * class is smaller than would guarantee the requested * alignment, and that the alignment constraint was * serendipitously satisfied. Additionally, old_usize may not * be the same as the current usize because of in-place large * reallocation. Therefore, query the actual value of usize. 
*/ *usize = isalloc(tsd_tsdn(tsd), p); } prof_realloc(tsd, p, *usize, tctx, prof_active, false, old_ptr, old_usize, old_tctx); return p; } JEMALLOC_EXPORT JEMALLOC_ALLOCATOR JEMALLOC_RESTRICT_RETURN void JEMALLOC_NOTHROW * JEMALLOC_ALLOC_SIZE(2) je_rallocx(void *ptr, size_t size, int flags) { void *p; tsd_t *tsd; size_t usize; size_t old_usize; size_t alignment = MALLOCX_ALIGN_GET(flags); bool zero = flags & MALLOCX_ZERO; arena_t *arena; tcache_t *tcache; assert(ptr != NULL); assert(size != 0); assert(malloc_initialized() || IS_INITIALIZER); tsd = tsd_fetch(); check_entry_exit_locking(tsd_tsdn(tsd)); if (unlikely((flags & MALLOCX_ARENA_MASK) != 0)) { unsigned arena_ind = MALLOCX_ARENA_GET(flags); arena = arena_get(tsd_tsdn(tsd), arena_ind, true); if (unlikely(arena == NULL)) { goto label_oom; } } else { arena = NULL; } if (unlikely((flags & MALLOCX_TCACHE_MASK) != 0)) { if ((flags & MALLOCX_TCACHE_MASK) == MALLOCX_TCACHE_NONE) { tcache = NULL; } else { tcache = tcaches_get(tsd, MALLOCX_TCACHE_GET(flags)); } } else { tcache = tcache_get(tsd); } alloc_ctx_t alloc_ctx; rtree_ctx_t *rtree_ctx = tsd_rtree_ctx(tsd); rtree_szind_slab_read(tsd_tsdn(tsd), &extents_rtree, rtree_ctx, (uintptr_t)ptr, true, &alloc_ctx.szind, &alloc_ctx.slab); assert(alloc_ctx.szind != NSIZES); old_usize = sz_index2size(alloc_ctx.szind); assert(old_usize == isalloc(tsd_tsdn(tsd), ptr)); if (config_prof && opt_prof) { usize = (alignment == 0) ? sz_s2u(size) : sz_sa2u(size, alignment); if (unlikely(usize == 0 || usize > LARGE_MAXCLASS)) { goto label_oom; } p = irallocx_prof(tsd, ptr, old_usize, size, alignment, &usize, zero, tcache, arena, &alloc_ctx); if (unlikely(p == NULL)) { goto label_oom; } } else { p = iralloct(tsd_tsdn(tsd), ptr, old_usize, size, alignment, zero, tcache, arena); if (unlikely(p == NULL)) { goto label_oom; } if (config_stats) { usize = isalloc(tsd_tsdn(tsd), p); } } assert(alignment == 0 || ((uintptr_t)p & (alignment - 1)) == ZU(0)); if (config_stats) { *tsd_thread_allocatedp_get(tsd) += usize; *tsd_thread_deallocatedp_get(tsd) += old_usize; } UTRACE(ptr, size, p); check_entry_exit_locking(tsd_tsdn(tsd)); return p; label_oom: if (config_xmalloc && unlikely(opt_xmalloc)) { malloc_write(": Error in rallocx(): out of memory\n"); abort(); } UTRACE(ptr, size, 0); check_entry_exit_locking(tsd_tsdn(tsd)); return NULL; } JEMALLOC_ALWAYS_INLINE size_t ixallocx_helper(tsdn_t *tsdn, void *ptr, size_t old_usize, size_t size, size_t extra, size_t alignment, bool zero) { size_t usize; if (ixalloc(tsdn, ptr, old_usize, size, extra, alignment, zero)) { return old_usize; } usize = isalloc(tsdn, ptr); return usize; } static size_t ixallocx_prof_sample(tsdn_t *tsdn, void *ptr, size_t old_usize, size_t size, size_t extra, size_t alignment, bool zero, prof_tctx_t *tctx) { size_t usize; if (tctx == NULL) { return old_usize; } usize = ixallocx_helper(tsdn, ptr, old_usize, size, extra, alignment, zero); return usize; } JEMALLOC_ALWAYS_INLINE size_t ixallocx_prof(tsd_t *tsd, void *ptr, size_t old_usize, size_t size, size_t extra, size_t alignment, bool zero, alloc_ctx_t *alloc_ctx) { size_t usize_max, usize; bool prof_active; prof_tctx_t *old_tctx, *tctx; prof_active = prof_active_get_unlocked(); old_tctx = prof_tctx_get(tsd_tsdn(tsd), ptr, alloc_ctx); /* * usize isn't knowable before ixalloc() returns when extra is non-zero. * Therefore, compute its maximum possible value and use that in * prof_alloc_prep() to decide whether to capture a backtrace. * prof_realloc() will use the actual usize to decide whether to sample. 
*/ if (alignment == 0) { usize_max = sz_s2u(size+extra); assert(usize_max > 0 && usize_max <= LARGE_MAXCLASS); } else { usize_max = sz_sa2u(size+extra, alignment); if (unlikely(usize_max == 0 || usize_max > LARGE_MAXCLASS)) { /* * usize_max is out of range, and chances are that * allocation will fail, but use the maximum possible * value and carry on with prof_alloc_prep(), just in * case allocation succeeds. */ usize_max = LARGE_MAXCLASS; } } tctx = prof_alloc_prep(tsd, usize_max, prof_active, false); if (unlikely((uintptr_t)tctx != (uintptr_t)1U)) { usize = ixallocx_prof_sample(tsd_tsdn(tsd), ptr, old_usize, size, extra, alignment, zero, tctx); } else { usize = ixallocx_helper(tsd_tsdn(tsd), ptr, old_usize, size, extra, alignment, zero); } if (usize == old_usize) { prof_alloc_rollback(tsd, tctx, false); return usize; } prof_realloc(tsd, ptr, usize, tctx, prof_active, false, ptr, old_usize, old_tctx); return usize; } JEMALLOC_EXPORT size_t JEMALLOC_NOTHROW je_xallocx(void *ptr, size_t size, size_t extra, int flags) { tsd_t *tsd; size_t usize, old_usize; size_t alignment = MALLOCX_ALIGN_GET(flags); bool zero = flags & MALLOCX_ZERO; assert(ptr != NULL); assert(size != 0); assert(SIZE_T_MAX - size >= extra); assert(malloc_initialized() || IS_INITIALIZER); tsd = tsd_fetch(); check_entry_exit_locking(tsd_tsdn(tsd)); alloc_ctx_t alloc_ctx; rtree_ctx_t *rtree_ctx = tsd_rtree_ctx(tsd); rtree_szind_slab_read(tsd_tsdn(tsd), &extents_rtree, rtree_ctx, (uintptr_t)ptr, true, &alloc_ctx.szind, &alloc_ctx.slab); assert(alloc_ctx.szind != NSIZES); old_usize = sz_index2size(alloc_ctx.szind); assert(old_usize == isalloc(tsd_tsdn(tsd), ptr)); /* * The API explicitly absolves itself of protecting against (size + * extra) numerical overflow, but we may need to clamp extra to avoid * exceeding LARGE_MAXCLASS. * * Ordinarily, size limit checking is handled deeper down, but here we * have to check as part of (size + extra) clamping, since we need the * clamped value in the above helper functions. */ if (unlikely(size > LARGE_MAXCLASS)) { usize = old_usize; goto label_not_resized; } if (unlikely(LARGE_MAXCLASS - size < extra)) { extra = LARGE_MAXCLASS - size; } if (config_prof && opt_prof) { usize = ixallocx_prof(tsd, ptr, old_usize, size, extra, alignment, zero, &alloc_ctx); } else { usize = ixallocx_helper(tsd_tsdn(tsd), ptr, old_usize, size, extra, alignment, zero); } if (unlikely(usize == old_usize)) { goto label_not_resized; } if (config_stats) { *tsd_thread_allocatedp_get(tsd) += usize; *tsd_thread_deallocatedp_get(tsd) += old_usize; } label_not_resized: UTRACE(ptr, size, ptr); check_entry_exit_locking(tsd_tsdn(tsd)); return usize; } JEMALLOC_EXPORT size_t JEMALLOC_NOTHROW JEMALLOC_ATTR(pure) je_sallocx(const void *ptr, int flags) { size_t usize; tsdn_t *tsdn; assert(malloc_initialized() || IS_INITIALIZER); assert(ptr != NULL); tsdn = tsdn_fetch(); check_entry_exit_locking(tsdn); if (config_debug || force_ivsalloc) { usize = ivsalloc(tsdn, ptr); assert(force_ivsalloc || usize != 0); } else { usize = isalloc(tsdn, ptr); } check_entry_exit_locking(tsdn); return usize; } JEMALLOC_EXPORT void JEMALLOC_NOTHROW je_dallocx(void *ptr, int flags) { assert(ptr != NULL); assert(malloc_initialized() || IS_INITIALIZER); tsd_t *tsd = tsd_fetch(); bool fast = tsd_fast(tsd); check_entry_exit_locking(tsd_tsdn(tsd)); tcache_t *tcache; if (unlikely((flags & MALLOCX_TCACHE_MASK) != 0)) { /* Not allowed to be reentrant and specify a custom tcache. 
*/ assert(tsd_reentrancy_level_get(tsd) == 0); if ((flags & MALLOCX_TCACHE_MASK) == MALLOCX_TCACHE_NONE) { tcache = NULL; } else { tcache = tcaches_get(tsd, MALLOCX_TCACHE_GET(flags)); } } else { if (likely(fast)) { tcache = tsd_tcachep_get(tsd); assert(tcache == tcache_get(tsd)); } else { if (likely(tsd_reentrancy_level_get(tsd) == 0)) { tcache = tcache_get(tsd); } else { tcache = NULL; } } } UTRACE(ptr, 0, 0); if (likely(fast)) { tsd_assert_fast(tsd); ifree(tsd, ptr, tcache, false); } else { ifree(tsd, ptr, tcache, true); } check_entry_exit_locking(tsd_tsdn(tsd)); } JEMALLOC_ALWAYS_INLINE size_t inallocx(tsdn_t *tsdn, size_t size, int flags) { check_entry_exit_locking(tsdn); size_t usize; if (likely((flags & MALLOCX_LG_ALIGN_MASK) == 0)) { usize = sz_s2u(size); } else { usize = sz_sa2u(size, MALLOCX_ALIGN_GET_SPECIFIED(flags)); } check_entry_exit_locking(tsdn); return usize; } JEMALLOC_EXPORT void JEMALLOC_NOTHROW je_sdallocx(void *ptr, size_t size, int flags) { assert(ptr != NULL); assert(malloc_initialized() || IS_INITIALIZER); tsd_t *tsd = tsd_fetch(); bool fast = tsd_fast(tsd); size_t usize = inallocx(tsd_tsdn(tsd), size, flags); assert(usize == isalloc(tsd_tsdn(tsd), ptr)); check_entry_exit_locking(tsd_tsdn(tsd)); tcache_t *tcache; if (unlikely((flags & MALLOCX_TCACHE_MASK) != 0)) { /* Not allowed to be reentrant and specify a custom tcache. */ assert(tsd_reentrancy_level_get(tsd) == 0); if ((flags & MALLOCX_TCACHE_MASK) == MALLOCX_TCACHE_NONE) { tcache = NULL; } else { tcache = tcaches_get(tsd, MALLOCX_TCACHE_GET(flags)); } } else { if (likely(fast)) { tcache = tsd_tcachep_get(tsd); assert(tcache == tcache_get(tsd)); } else { if (likely(tsd_reentrancy_level_get(tsd) == 0)) { tcache = tcache_get(tsd); } else { tcache = NULL; } } } UTRACE(ptr, 0, 0); if (likely(fast)) { tsd_assert_fast(tsd); isfree(tsd, ptr, usize, tcache, false); } else { isfree(tsd, ptr, usize, tcache, true); } check_entry_exit_locking(tsd_tsdn(tsd)); } JEMALLOC_EXPORT size_t JEMALLOC_NOTHROW JEMALLOC_ATTR(pure) je_nallocx(size_t size, int flags) { size_t usize; tsdn_t *tsdn; assert(size != 0); if (unlikely(malloc_init())) { return 0; } tsdn = tsdn_fetch(); check_entry_exit_locking(tsdn); usize = inallocx(tsdn, size, flags); if (unlikely(usize > LARGE_MAXCLASS)) { return 0; } check_entry_exit_locking(tsdn); return usize; } JEMALLOC_EXPORT int JEMALLOC_NOTHROW je_mallctl(const char *name, void *oldp, size_t *oldlenp, void *newp, size_t newlen) { int ret; tsd_t *tsd; if (unlikely(malloc_init())) { return EAGAIN; } tsd = tsd_fetch(); check_entry_exit_locking(tsd_tsdn(tsd)); ret = ctl_byname(tsd, name, oldp, oldlenp, newp, newlen); check_entry_exit_locking(tsd_tsdn(tsd)); return ret; } JEMALLOC_EXPORT int JEMALLOC_NOTHROW je_mallctlnametomib(const char *name, size_t *mibp, size_t *miblenp) { int ret; - tsdn_t *tsdn; if (unlikely(malloc_init())) { return EAGAIN; } - tsdn = tsdn_fetch(); - check_entry_exit_locking(tsdn); - ret = ctl_nametomib(tsdn, name, mibp, miblenp); - check_entry_exit_locking(tsdn); + tsd_t *tsd = tsd_fetch(); + check_entry_exit_locking(tsd_tsdn(tsd)); + ret = ctl_nametomib(tsd, name, mibp, miblenp); + check_entry_exit_locking(tsd_tsdn(tsd)); return ret; } JEMALLOC_EXPORT int JEMALLOC_NOTHROW je_mallctlbymib(const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, void *newp, size_t newlen) { int ret; tsd_t *tsd; if (unlikely(malloc_init())) { return EAGAIN; } tsd = tsd_fetch(); check_entry_exit_locking(tsd_tsdn(tsd)); ret = ctl_bymib(tsd, mib, miblen, oldp, oldlenp, newp, newlen); 
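	/*
	 * Editor's note -- illustrative sketch only, not part of the upstream
	 * change.  je_mallctlnametomib() and je_mallctlbymib() exist so that
	 * callers who query the same control repeatedly can pay the
	 * name-parsing cost once and reuse the resulting MIB.  Assuming the
	 * unprefixed public names, a typical consumer might look like this
	 * (the variable names are invented):
	 *
	 *	size_t mib[8];
	 *	size_t miblen = sizeof(mib) / sizeof(mib[0]);
	 *	size_t allocated, sz = sizeof(allocated);
	 *
	 *	if (mallctlnametomib("stats.allocated", mib, &miblen) == 0) {
	 *		// Reuse the translated MIB for later queries.
	 *		mallctlbymib(mib, miblen, &allocated, &sz, NULL, 0);
	 *	}
	 */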
check_entry_exit_locking(tsd_tsdn(tsd)); return ret; } JEMALLOC_EXPORT void JEMALLOC_NOTHROW je_malloc_stats_print(void (*write_cb)(void *, const char *), void *cbopaque, const char *opts) { tsdn_t *tsdn; tsdn = tsdn_fetch(); check_entry_exit_locking(tsdn); stats_print(write_cb, cbopaque, opts); check_entry_exit_locking(tsdn); } JEMALLOC_EXPORT size_t JEMALLOC_NOTHROW je_malloc_usable_size(JEMALLOC_USABLE_SIZE_CONST void *ptr) { size_t ret; tsdn_t *tsdn; assert(malloc_initialized() || IS_INITIALIZER); tsdn = tsdn_fetch(); check_entry_exit_locking(tsdn); if (unlikely(ptr == NULL)) { ret = 0; } else { if (config_debug || force_ivsalloc) { ret = ivsalloc(tsdn, ptr); assert(force_ivsalloc || ret != 0); } else { ret = isalloc(tsdn, ptr); } } check_entry_exit_locking(tsdn); return ret; } /* * End non-standard functions. */ /******************************************************************************/ /* * Begin compatibility functions. */ #define ALLOCM_LG_ALIGN(la) (la) #define ALLOCM_ALIGN(a) (ffsl(a)-1) #define ALLOCM_ZERO ((int)0x40) #define ALLOCM_NO_MOVE ((int)0x80) #define ALLOCM_SUCCESS 0 #define ALLOCM_ERR_OOM 1 #define ALLOCM_ERR_NOT_MOVED 2 int je_allocm(void **ptr, size_t *rsize, size_t size, int flags) { assert(ptr != NULL); void *p = je_mallocx(size, flags); if (p == NULL) { return (ALLOCM_ERR_OOM); } if (rsize != NULL) { *rsize = isalloc(tsdn_fetch(), p); } *ptr = p; return ALLOCM_SUCCESS; } int je_rallocm(void **ptr, size_t *rsize, size_t size, size_t extra, int flags) { assert(ptr != NULL); assert(*ptr != NULL); assert(size != 0); assert(SIZE_T_MAX - size >= extra); int ret; bool no_move = flags & ALLOCM_NO_MOVE; if (no_move) { size_t usize = je_xallocx(*ptr, size, extra, flags); ret = (usize >= size) ? ALLOCM_SUCCESS : ALLOCM_ERR_NOT_MOVED; if (rsize != NULL) { *rsize = usize; } } else { void *p = je_rallocx(*ptr, size+extra, flags); if (p != NULL) { *ptr = p; ret = ALLOCM_SUCCESS; } else { ret = ALLOCM_ERR_OOM; } if (rsize != NULL) { *rsize = isalloc(tsdn_fetch(), *ptr); } } return ret; } int je_sallocm(const void *ptr, size_t *rsize, int flags) { assert(rsize != NULL); *rsize = je_sallocx(ptr, flags); return ALLOCM_SUCCESS; } int je_dallocm(void *ptr, int flags) { je_dallocx(ptr, flags); return ALLOCM_SUCCESS; } int je_nallocm(size_t *rsize, size_t size, int flags) { size_t usize = je_nallocx(size, flags); if (usize == 0) { return ALLOCM_ERR_OOM; } if (rsize != NULL) { *rsize = usize; } return ALLOCM_SUCCESS; } #undef ALLOCM_LG_ALIGN #undef ALLOCM_ALIGN #undef ALLOCM_ZERO #undef ALLOCM_NO_MOVE #undef ALLOCM_SUCCESS #undef ALLOCM_ERR_OOM #undef ALLOCM_ERR_NOT_MOVED /* * End compatibility functions. */ /******************************************************************************/ /* * The following functions are used by threading libraries for protection of * malloc during fork(). */ /* * If an application creates a thread before doing any allocation in the main * thread, then calls fork(2) in the main thread followed by memory allocation * in the child process, a race can occur that results in deadlock within the * child: the main thread may have forked while the created thread had * partially initialized the allocator. Ordinarily jemalloc prevents * fork/malloc races via the following functions it registers during * initialization using pthread_atfork(), but of course that does no good if * the allocator isn't fully initialized at fork time. The following library * constructor is a partial solution to this problem. 
It may still be possible * to trigger the deadlock described above, but doing so would involve forking * via a library constructor that runs before jemalloc's runs. */ #ifndef JEMALLOC_JET JEMALLOC_ATTR(constructor) static void jemalloc_constructor(void) { malloc_init(); } #endif #ifndef JEMALLOC_MUTEX_INIT_CB void jemalloc_prefork(void) #else JEMALLOC_EXPORT void _malloc_prefork(void) #endif { tsd_t *tsd; unsigned i, j, narenas; arena_t *arena; #ifdef JEMALLOC_MUTEX_INIT_CB if (!malloc_initialized()) { return; } #endif assert(malloc_initialized()); tsd = tsd_fetch(); narenas = narenas_total_get(); witness_prefork(tsd_witness_tsdp_get(tsd)); /* Acquire all mutexes in a safe order. */ ctl_prefork(tsd_tsdn(tsd)); tcache_prefork(tsd_tsdn(tsd)); malloc_mutex_prefork(tsd_tsdn(tsd), &arenas_lock); if (have_background_thread) { background_thread_prefork0(tsd_tsdn(tsd)); } prof_prefork0(tsd_tsdn(tsd)); if (have_background_thread) { background_thread_prefork1(tsd_tsdn(tsd)); } /* Break arena prefork into stages to preserve lock order. */ - for (i = 0; i < 7; i++) { + for (i = 0; i < 8; i++) { for (j = 0; j < narenas; j++) { if ((arena = arena_get(tsd_tsdn(tsd), j, false)) != NULL) { switch (i) { case 0: arena_prefork0(tsd_tsdn(tsd), arena); break; case 1: arena_prefork1(tsd_tsdn(tsd), arena); break; case 2: arena_prefork2(tsd_tsdn(tsd), arena); break; case 3: arena_prefork3(tsd_tsdn(tsd), arena); break; case 4: arena_prefork4(tsd_tsdn(tsd), arena); break; case 5: arena_prefork5(tsd_tsdn(tsd), arena); break; case 6: arena_prefork6(tsd_tsdn(tsd), arena); + break; + case 7: + arena_prefork7(tsd_tsdn(tsd), arena); break; default: not_reached(); } } } } prof_prefork1(tsd_tsdn(tsd)); } #ifndef JEMALLOC_MUTEX_INIT_CB void jemalloc_postfork_parent(void) #else JEMALLOC_EXPORT void _malloc_postfork(void) #endif { tsd_t *tsd; unsigned i, narenas; #ifdef JEMALLOC_MUTEX_INIT_CB if (!malloc_initialized()) { return; } #endif assert(malloc_initialized()); tsd = tsd_fetch(); witness_postfork_parent(tsd_witness_tsdp_get(tsd)); /* Release all mutexes, now that fork() has completed. */ for (i = 0, narenas = narenas_total_get(); i < narenas; i++) { arena_t *arena; if ((arena = arena_get(tsd_tsdn(tsd), i, false)) != NULL) { arena_postfork_parent(tsd_tsdn(tsd), arena); } } prof_postfork_parent(tsd_tsdn(tsd)); if (have_background_thread) { background_thread_postfork_parent(tsd_tsdn(tsd)); } malloc_mutex_postfork_parent(tsd_tsdn(tsd), &arenas_lock); tcache_postfork_parent(tsd_tsdn(tsd)); ctl_postfork_parent(tsd_tsdn(tsd)); } void jemalloc_postfork_child(void) { tsd_t *tsd; unsigned i, narenas; assert(malloc_initialized()); tsd = tsd_fetch(); witness_postfork_child(tsd_witness_tsdp_get(tsd)); /* Release all mutexes, now that fork() has completed. 
*/ for (i = 0, narenas = narenas_total_get(); i < narenas; i++) { arena_t *arena; if ((arena = arena_get(tsd_tsdn(tsd), i, false)) != NULL) { arena_postfork_child(tsd_tsdn(tsd), arena); } } prof_postfork_child(tsd_tsdn(tsd)); if (have_background_thread) { background_thread_postfork_child(tsd_tsdn(tsd)); } malloc_mutex_postfork_child(tsd_tsdn(tsd), &arenas_lock); tcache_postfork_child(tsd_tsdn(tsd)); ctl_postfork_child(tsd_tsdn(tsd)); } void _malloc_first_thread(void) { (void)malloc_mutex_first_thread(); } /******************************************************************************/ Index: head/contrib/jemalloc/src/prof.c =================================================================== --- head/contrib/jemalloc/src/prof.c (revision 320622) +++ head/contrib/jemalloc/src/prof.c (revision 320623) @@ -1,2464 +1,2464 @@ #define JEMALLOC_PROF_C_ #include "jemalloc/internal/jemalloc_preamble.h" #include "jemalloc/internal/jemalloc_internal_includes.h" #include "jemalloc/internal/assert.h" #include "jemalloc/internal/ckh.h" #include "jemalloc/internal/hash.h" #include "jemalloc/internal/malloc_io.h" #include "jemalloc/internal/mutex.h" /******************************************************************************/ #ifdef JEMALLOC_PROF_LIBUNWIND #define UNW_LOCAL_ONLY #include #endif #ifdef JEMALLOC_PROF_LIBGCC /* * We have a circular dependency -- jemalloc_internal.h tells us if we should * use libgcc's unwinding functionality, but after we've included that, we've * already hooked _Unwind_Backtrace. We'll temporarily disable hooking. */ #undef _Unwind_Backtrace #include #define _Unwind_Backtrace JEMALLOC_HOOK(_Unwind_Backtrace, hooks_libc_hook) #endif /******************************************************************************/ /* Data. */ bool opt_prof = false; bool opt_prof_active = true; bool opt_prof_thread_active_init = true; size_t opt_lg_prof_sample = LG_PROF_SAMPLE_DEFAULT; ssize_t opt_lg_prof_interval = LG_PROF_INTERVAL_DEFAULT; bool opt_prof_gdump = false; bool opt_prof_final = false; bool opt_prof_leak = false; bool opt_prof_accum = false; char opt_prof_prefix[ /* Minimize memory bloat for non-prof builds. */ #ifdef JEMALLOC_PROF PATH_MAX + #endif 1]; /* * Initialized as opt_prof_active, and accessed via * prof_active_[gs]et{_unlocked,}(). */ bool prof_active; static malloc_mutex_t prof_active_mtx; /* * Initialized as opt_prof_thread_active_init, and accessed via * prof_thread_active_init_[gs]et(). */ static bool prof_thread_active_init; static malloc_mutex_t prof_thread_active_init_mtx; /* * Initialized as opt_prof_gdump, and accessed via * prof_gdump_[gs]et{_unlocked,}(). */ bool prof_gdump_val; static malloc_mutex_t prof_gdump_mtx; uint64_t prof_interval = 0; size_t lg_prof_sample; /* * Table of mutexes that are shared among gctx's. These are leaf locks, so * there is no problem with using them for more than one gctx at the same time. * The primary motivation for this sharing though is that gctx's are ephemeral, * and destroying mutexes causes complications for systems that allocate when * creating/destroying mutexes. */ static malloc_mutex_t *gctx_locks; static atomic_u_t cum_gctxs; /* Atomic counter. */ /* * Table of mutexes that are shared among tdata's. No operations require * holding multiple tdata locks, so there is no problem with using them for more * than one tdata at the same time, even though a gctx lock may be acquired * while holding a tdata lock. */ static malloc_mutex_t *tdata_locks; /* * Global hash of (prof_bt_t *)-->(prof_gctx_t *). 
This is the master data * structure that knows about all backtraces currently captured. */ static ckh_t bt2gctx; /* Non static to enable profiling. */ malloc_mutex_t bt2gctx_mtx; /* * Tree of all extant prof_tdata_t structures, regardless of state, * {attached,detached,expired}. */ static prof_tdata_tree_t tdatas; static malloc_mutex_t tdatas_mtx; static uint64_t next_thr_uid; static malloc_mutex_t next_thr_uid_mtx; static malloc_mutex_t prof_dump_seq_mtx; static uint64_t prof_dump_seq; static uint64_t prof_dump_iseq; static uint64_t prof_dump_mseq; static uint64_t prof_dump_useq; /* * This buffer is rather large for stack allocation, so use a single buffer for * all profile dumps. */ static malloc_mutex_t prof_dump_mtx; static char prof_dump_buf[ /* Minimize memory bloat for non-prof builds. */ #ifdef JEMALLOC_PROF PROF_DUMP_BUFSIZE #else 1 #endif ]; static size_t prof_dump_buf_end; static int prof_dump_fd; /* Do not dump any profiles until bootstrapping is complete. */ static bool prof_booted = false; /******************************************************************************/ /* * Function prototypes for static functions that are referenced prior to * definition. */ static bool prof_tctx_should_destroy(tsdn_t *tsdn, prof_tctx_t *tctx); static void prof_tctx_destroy(tsd_t *tsd, prof_tctx_t *tctx); static bool prof_tdata_should_destroy(tsdn_t *tsdn, prof_tdata_t *tdata, bool even_if_attached); static void prof_tdata_destroy(tsd_t *tsd, prof_tdata_t *tdata, bool even_if_attached); static char *prof_thread_name_alloc(tsdn_t *tsdn, const char *thread_name); /******************************************************************************/ /* Red-black trees. */ static int prof_tctx_comp(const prof_tctx_t *a, const prof_tctx_t *b) { uint64_t a_thr_uid = a->thr_uid; uint64_t b_thr_uid = b->thr_uid; int ret = (a_thr_uid > b_thr_uid) - (a_thr_uid < b_thr_uid); if (ret == 0) { uint64_t a_thr_discrim = a->thr_discrim; uint64_t b_thr_discrim = b->thr_discrim; ret = (a_thr_discrim > b_thr_discrim) - (a_thr_discrim < b_thr_discrim); if (ret == 0) { uint64_t a_tctx_uid = a->tctx_uid; uint64_t b_tctx_uid = b->tctx_uid; ret = (a_tctx_uid > b_tctx_uid) - (a_tctx_uid < b_tctx_uid); } } return ret; } rb_gen(static UNUSED, tctx_tree_, prof_tctx_tree_t, prof_tctx_t, tctx_link, prof_tctx_comp) static int prof_gctx_comp(const prof_gctx_t *a, const prof_gctx_t *b) { unsigned a_len = a->bt.len; unsigned b_len = b->bt.len; unsigned comp_len = (a_len < b_len) ? a_len : b_len; int ret = memcmp(a->bt.vec, b->bt.vec, comp_len * sizeof(void *)); if (ret == 0) { ret = (a_len > b_len) - (a_len < b_len); } return ret; } rb_gen(static UNUSED, gctx_tree_, prof_gctx_tree_t, prof_gctx_t, dump_link, prof_gctx_comp) static int prof_tdata_comp(const prof_tdata_t *a, const prof_tdata_t *b) { int ret; uint64_t a_uid = a->thr_uid; uint64_t b_uid = b->thr_uid; ret = ((a_uid > b_uid) - (a_uid < b_uid)); if (ret == 0) { uint64_t a_discrim = a->thr_discrim; uint64_t b_discrim = b->thr_discrim; ret = ((a_discrim > b_discrim) - (a_discrim < b_discrim)); } return ret; } rb_gen(static UNUSED, tdata_tree_, prof_tdata_tree_t, prof_tdata_t, tdata_link, prof_tdata_comp) /******************************************************************************/ void prof_alloc_rollback(tsd_t *tsd, prof_tctx_t *tctx, bool updated) { prof_tdata_t *tdata; cassert(config_prof); if (updated) { /* * Compute a new sample threshold. 
This isn't very important in * practice, because this function is rarely executed, so the * potential for sample bias is minimal except in contrived * programs. */ tdata = prof_tdata_get(tsd, true); if (tdata != NULL) { prof_sample_threshold_update(tdata); } } if ((uintptr_t)tctx > (uintptr_t)1U) { malloc_mutex_lock(tsd_tsdn(tsd), tctx->tdata->lock); tctx->prepared = false; if (prof_tctx_should_destroy(tsd_tsdn(tsd), tctx)) { prof_tctx_destroy(tsd, tctx); } else { malloc_mutex_unlock(tsd_tsdn(tsd), tctx->tdata->lock); } } } void prof_malloc_sample_object(tsdn_t *tsdn, const void *ptr, size_t usize, prof_tctx_t *tctx) { prof_tctx_set(tsdn, ptr, usize, NULL, tctx); malloc_mutex_lock(tsdn, tctx->tdata->lock); tctx->cnts.curobjs++; tctx->cnts.curbytes += usize; if (opt_prof_accum) { tctx->cnts.accumobjs++; tctx->cnts.accumbytes += usize; } tctx->prepared = false; malloc_mutex_unlock(tsdn, tctx->tdata->lock); } void prof_free_sampled_object(tsd_t *tsd, size_t usize, prof_tctx_t *tctx) { malloc_mutex_lock(tsd_tsdn(tsd), tctx->tdata->lock); assert(tctx->cnts.curobjs > 0); assert(tctx->cnts.curbytes >= usize); tctx->cnts.curobjs--; tctx->cnts.curbytes -= usize; if (prof_tctx_should_destroy(tsd_tsdn(tsd), tctx)) { prof_tctx_destroy(tsd, tctx); } else { malloc_mutex_unlock(tsd_tsdn(tsd), tctx->tdata->lock); } } void bt_init(prof_bt_t *bt, void **vec) { cassert(config_prof); bt->vec = vec; bt->len = 0; } static void prof_enter(tsd_t *tsd, prof_tdata_t *tdata) { cassert(config_prof); assert(tdata == prof_tdata_get(tsd, false)); if (tdata != NULL) { assert(!tdata->enq); tdata->enq = true; } malloc_mutex_lock(tsd_tsdn(tsd), &bt2gctx_mtx); } static void prof_leave(tsd_t *tsd, prof_tdata_t *tdata) { cassert(config_prof); assert(tdata == prof_tdata_get(tsd, false)); malloc_mutex_unlock(tsd_tsdn(tsd), &bt2gctx_mtx); if (tdata != NULL) { bool idump, gdump; assert(tdata->enq); tdata->enq = false; idump = tdata->enq_idump; tdata->enq_idump = false; gdump = tdata->enq_gdump; tdata->enq_gdump = false; if (idump) { prof_idump(tsd_tsdn(tsd)); } if (gdump) { prof_gdump(tsd_tsdn(tsd)); } } } #ifdef JEMALLOC_PROF_LIBUNWIND void prof_backtrace(prof_bt_t *bt) { int nframes; cassert(config_prof); assert(bt->len == 0); assert(bt->vec != NULL); nframes = unw_backtrace(bt->vec, PROF_BT_MAX); if (nframes <= 0) { return; } bt->len = nframes; } #elif (defined(JEMALLOC_PROF_LIBGCC)) static _Unwind_Reason_Code prof_unwind_init_callback(struct _Unwind_Context *context, void *arg) { cassert(config_prof); return _URC_NO_REASON; } static _Unwind_Reason_Code prof_unwind_callback(struct _Unwind_Context *context, void *arg) { prof_unwind_data_t *data = (prof_unwind_data_t *)arg; void *ip; cassert(config_prof); ip = (void *)_Unwind_GetIP(context); if (ip == NULL) { return _URC_END_OF_STACK; } data->bt->vec[data->bt->len] = ip; data->bt->len++; if (data->bt->len == data->max) { return _URC_END_OF_STACK; } return _URC_NO_REASON; } void prof_backtrace(prof_bt_t *bt) { prof_unwind_data_t data = {bt, PROF_BT_MAX}; cassert(config_prof); _Unwind_Backtrace(prof_unwind_callback, &data); } #elif (defined(JEMALLOC_PROF_GCC)) void prof_backtrace(prof_bt_t *bt) { #define BT_FRAME(i) \ if ((i) < PROF_BT_MAX) { \ void *p; \ if (__builtin_frame_address(i) == 0) { \ return; \ } \ p = __builtin_return_address(i); \ if (p == NULL) { \ return; \ } \ bt->vec[(i)] = p; \ bt->len = (i) + 1; \ } else { \ return; \ } cassert(config_prof); BT_FRAME(0) BT_FRAME(1) BT_FRAME(2) BT_FRAME(3) BT_FRAME(4) BT_FRAME(5) BT_FRAME(6) BT_FRAME(7) BT_FRAME(8) BT_FRAME(9) 
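	/*
	 * BT_FRAME() is expanded by hand because __builtin_frame_address()
	 * and __builtin_return_address() only accept constant arguments, so
	 * the walk cannot be written as a loop; the unrolled sequence
	 * continues through frame 127, with PROF_BT_MAX capping how many
	 * frames are actually recorded.
	 */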
BT_FRAME(10) BT_FRAME(11) BT_FRAME(12) BT_FRAME(13) BT_FRAME(14) BT_FRAME(15) BT_FRAME(16) BT_FRAME(17) BT_FRAME(18) BT_FRAME(19) BT_FRAME(20) BT_FRAME(21) BT_FRAME(22) BT_FRAME(23) BT_FRAME(24) BT_FRAME(25) BT_FRAME(26) BT_FRAME(27) BT_FRAME(28) BT_FRAME(29) BT_FRAME(30) BT_FRAME(31) BT_FRAME(32) BT_FRAME(33) BT_FRAME(34) BT_FRAME(35) BT_FRAME(36) BT_FRAME(37) BT_FRAME(38) BT_FRAME(39) BT_FRAME(40) BT_FRAME(41) BT_FRAME(42) BT_FRAME(43) BT_FRAME(44) BT_FRAME(45) BT_FRAME(46) BT_FRAME(47) BT_FRAME(48) BT_FRAME(49) BT_FRAME(50) BT_FRAME(51) BT_FRAME(52) BT_FRAME(53) BT_FRAME(54) BT_FRAME(55) BT_FRAME(56) BT_FRAME(57) BT_FRAME(58) BT_FRAME(59) BT_FRAME(60) BT_FRAME(61) BT_FRAME(62) BT_FRAME(63) BT_FRAME(64) BT_FRAME(65) BT_FRAME(66) BT_FRAME(67) BT_FRAME(68) BT_FRAME(69) BT_FRAME(70) BT_FRAME(71) BT_FRAME(72) BT_FRAME(73) BT_FRAME(74) BT_FRAME(75) BT_FRAME(76) BT_FRAME(77) BT_FRAME(78) BT_FRAME(79) BT_FRAME(80) BT_FRAME(81) BT_FRAME(82) BT_FRAME(83) BT_FRAME(84) BT_FRAME(85) BT_FRAME(86) BT_FRAME(87) BT_FRAME(88) BT_FRAME(89) BT_FRAME(90) BT_FRAME(91) BT_FRAME(92) BT_FRAME(93) BT_FRAME(94) BT_FRAME(95) BT_FRAME(96) BT_FRAME(97) BT_FRAME(98) BT_FRAME(99) BT_FRAME(100) BT_FRAME(101) BT_FRAME(102) BT_FRAME(103) BT_FRAME(104) BT_FRAME(105) BT_FRAME(106) BT_FRAME(107) BT_FRAME(108) BT_FRAME(109) BT_FRAME(110) BT_FRAME(111) BT_FRAME(112) BT_FRAME(113) BT_FRAME(114) BT_FRAME(115) BT_FRAME(116) BT_FRAME(117) BT_FRAME(118) BT_FRAME(119) BT_FRAME(120) BT_FRAME(121) BT_FRAME(122) BT_FRAME(123) BT_FRAME(124) BT_FRAME(125) BT_FRAME(126) BT_FRAME(127) #undef BT_FRAME } #else void prof_backtrace(prof_bt_t *bt) { cassert(config_prof); not_reached(); } #endif static malloc_mutex_t * prof_gctx_mutex_choose(void) { unsigned ngctxs = atomic_fetch_add_u(&cum_gctxs, 1, ATOMIC_RELAXED); return &gctx_locks[(ngctxs - 1) % PROF_NCTX_LOCKS]; } static malloc_mutex_t * prof_tdata_mutex_choose(uint64_t thr_uid) { return &tdata_locks[thr_uid % PROF_NTDATA_LOCKS]; } static prof_gctx_t * prof_gctx_create(tsdn_t *tsdn, prof_bt_t *bt) { /* * Create a single allocation that has space for vec of length bt->len. */ size_t size = offsetof(prof_gctx_t, vec) + (bt->len * sizeof(void *)); prof_gctx_t *gctx = (prof_gctx_t *)iallocztm(tsdn, size, sz_size2index(size), false, NULL, true, arena_get(TSDN_NULL, 0, true), true); if (gctx == NULL) { return NULL; } gctx->lock = prof_gctx_mutex_choose(); /* * Set nlimbo to 1, in order to avoid a race condition with * prof_tctx_destroy()/prof_gctx_try_destroy(). */ gctx->nlimbo = 1; tctx_tree_new(&gctx->tctxs); /* Duplicate bt. */ memcpy(gctx->vec, bt->vec, bt->len * sizeof(void *)); gctx->bt.vec = gctx->vec; gctx->bt.len = bt->len; return gctx; } static void prof_gctx_try_destroy(tsd_t *tsd, prof_tdata_t *tdata_self, prof_gctx_t *gctx, prof_tdata_t *tdata) { cassert(config_prof); /* * Check that gctx is still unused by any thread cache before destroying * it. prof_lookup() increments gctx->nlimbo in order to avoid a race * condition with this function, as does prof_tctx_destroy() in order to * avoid a race between the main body of prof_tctx_destroy() and entry * into this function. */ prof_enter(tsd, tdata_self); malloc_mutex_lock(tsd_tsdn(tsd), gctx->lock); assert(gctx->nlimbo != 0); if (tctx_tree_empty(&gctx->tctxs) && gctx->nlimbo == 1) { /* Remove gctx from bt2gctx. */ if (ckh_remove(tsd, &bt2gctx, &gctx->bt, NULL, NULL)) { not_reached(); } prof_leave(tsd, tdata_self); /* Destroy gctx. 
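 * This is safe because gctx has just been removed from bt2gctx, so no
 * other thread can look it up, and nlimbo == 1 means this thread holds
 * the only remaining reference.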
*/ malloc_mutex_unlock(tsd_tsdn(tsd), gctx->lock); idalloctm(tsd_tsdn(tsd), gctx, NULL, NULL, true, true); } else { /* * Compensate for increment in prof_tctx_destroy() or * prof_lookup(). */ gctx->nlimbo--; malloc_mutex_unlock(tsd_tsdn(tsd), gctx->lock); prof_leave(tsd, tdata_self); } } static bool prof_tctx_should_destroy(tsdn_t *tsdn, prof_tctx_t *tctx) { malloc_mutex_assert_owner(tsdn, tctx->tdata->lock); if (opt_prof_accum) { return false; } if (tctx->cnts.curobjs != 0) { return false; } if (tctx->prepared) { return false; } return true; } static bool prof_gctx_should_destroy(prof_gctx_t *gctx) { if (opt_prof_accum) { return false; } if (!tctx_tree_empty(&gctx->tctxs)) { return false; } if (gctx->nlimbo != 0) { return false; } return true; } static void prof_tctx_destroy(tsd_t *tsd, prof_tctx_t *tctx) { prof_tdata_t *tdata = tctx->tdata; prof_gctx_t *gctx = tctx->gctx; bool destroy_tdata, destroy_tctx, destroy_gctx; malloc_mutex_assert_owner(tsd_tsdn(tsd), tctx->tdata->lock); assert(tctx->cnts.curobjs == 0); assert(tctx->cnts.curbytes == 0); assert(!opt_prof_accum); assert(tctx->cnts.accumobjs == 0); assert(tctx->cnts.accumbytes == 0); ckh_remove(tsd, &tdata->bt2tctx, &gctx->bt, NULL, NULL); destroy_tdata = prof_tdata_should_destroy(tsd_tsdn(tsd), tdata, false); malloc_mutex_unlock(tsd_tsdn(tsd), tdata->lock); malloc_mutex_lock(tsd_tsdn(tsd), gctx->lock); switch (tctx->state) { case prof_tctx_state_nominal: tctx_tree_remove(&gctx->tctxs, tctx); destroy_tctx = true; if (prof_gctx_should_destroy(gctx)) { /* * Increment gctx->nlimbo in order to keep another * thread from winning the race to destroy gctx while * this one has gctx->lock dropped. Without this, it * would be possible for another thread to: * * 1) Sample an allocation associated with gctx. * 2) Deallocate the sampled object. * 3) Successfully prof_gctx_try_destroy(gctx). * * The result would be that gctx no longer exists by the * time this thread accesses it in * prof_gctx_try_destroy(). */ gctx->nlimbo++; destroy_gctx = true; } else { destroy_gctx = false; } break; case prof_tctx_state_dumping: /* * A dumping thread needs tctx to remain valid until dumping * has finished. Change state such that the dumping thread will * complete destruction during a late dump iteration phase. */ tctx->state = prof_tctx_state_purgatory; destroy_tctx = false; destroy_gctx = false; break; default: not_reached(); destroy_tctx = false; destroy_gctx = false; } malloc_mutex_unlock(tsd_tsdn(tsd), gctx->lock); if (destroy_gctx) { prof_gctx_try_destroy(tsd, prof_tdata_get(tsd, false), gctx, tdata); } malloc_mutex_assert_not_owner(tsd_tsdn(tsd), tctx->tdata->lock); if (destroy_tdata) { prof_tdata_destroy(tsd, tdata, false); } if (destroy_tctx) { idalloctm(tsd_tsdn(tsd), tctx, NULL, NULL, true, true); } } static bool prof_lookup_global(tsd_t *tsd, prof_bt_t *bt, prof_tdata_t *tdata, void **p_btkey, prof_gctx_t **p_gctx, bool *p_new_gctx) { union { prof_gctx_t *p; void *v; } gctx, tgctx; union { prof_bt_t *p; void *v; } btkey; bool new_gctx; prof_enter(tsd, tdata); if (ckh_search(&bt2gctx, bt, &btkey.v, &gctx.v)) { /* bt has never been seen before. Insert it. */ prof_leave(tsd, tdata); tgctx.p = prof_gctx_create(tsd_tsdn(tsd), bt); if (tgctx.v == NULL) { return true; } prof_enter(tsd, tdata); if (ckh_search(&bt2gctx, bt, &btkey.v, &gctx.v)) { gctx.p = tgctx.p; btkey.p = &gctx.p->bt; if (ckh_insert(tsd, &bt2gctx, btkey.v, gctx.v)) { /* OOM. 
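 * ckh_insert() could not allocate table space; discard the
 * speculatively created gctx and report failure to the caller.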
*/ prof_leave(tsd, tdata); idalloctm(tsd_tsdn(tsd), gctx.v, NULL, NULL, true, true); return true; } new_gctx = true; } else { new_gctx = false; } } else { tgctx.v = NULL; new_gctx = false; } if (!new_gctx) { /* * Increment nlimbo, in order to avoid a race condition with * prof_tctx_destroy()/prof_gctx_try_destroy(). */ malloc_mutex_lock(tsd_tsdn(tsd), gctx.p->lock); gctx.p->nlimbo++; malloc_mutex_unlock(tsd_tsdn(tsd), gctx.p->lock); new_gctx = false; if (tgctx.v != NULL) { /* Lost race to insert. */ idalloctm(tsd_tsdn(tsd), tgctx.v, NULL, NULL, true, true); } } prof_leave(tsd, tdata); *p_btkey = btkey.v; *p_gctx = gctx.p; *p_new_gctx = new_gctx; return false; } prof_tctx_t * prof_lookup(tsd_t *tsd, prof_bt_t *bt) { union { prof_tctx_t *p; void *v; } ret; prof_tdata_t *tdata; bool not_found; cassert(config_prof); tdata = prof_tdata_get(tsd, false); if (tdata == NULL) { return NULL; } malloc_mutex_lock(tsd_tsdn(tsd), tdata->lock); not_found = ckh_search(&tdata->bt2tctx, bt, NULL, &ret.v); if (!not_found) { /* Note double negative! */ ret.p->prepared = true; } malloc_mutex_unlock(tsd_tsdn(tsd), tdata->lock); if (not_found) { void *btkey; prof_gctx_t *gctx; bool new_gctx, error; /* * This thread's cache lacks bt. Look for it in the global * cache. */ if (prof_lookup_global(tsd, bt, tdata, &btkey, &gctx, &new_gctx)) { return NULL; } /* Link a prof_tctx_t into gctx for this thread. */ ret.v = iallocztm(tsd_tsdn(tsd), sizeof(prof_tctx_t), sz_size2index(sizeof(prof_tctx_t)), false, NULL, true, arena_ichoose(tsd, NULL), true); if (ret.p == NULL) { if (new_gctx) { prof_gctx_try_destroy(tsd, tdata, gctx, tdata); } return NULL; } ret.p->tdata = tdata; ret.p->thr_uid = tdata->thr_uid; ret.p->thr_discrim = tdata->thr_discrim; memset(&ret.p->cnts, 0, sizeof(prof_cnt_t)); ret.p->gctx = gctx; ret.p->tctx_uid = tdata->tctx_uid_next++; ret.p->prepared = true; ret.p->state = prof_tctx_state_initializing; malloc_mutex_lock(tsd_tsdn(tsd), tdata->lock); error = ckh_insert(tsd, &tdata->bt2tctx, btkey, ret.v); malloc_mutex_unlock(tsd_tsdn(tsd), tdata->lock); if (error) { if (new_gctx) { prof_gctx_try_destroy(tsd, tdata, gctx, tdata); } idalloctm(tsd_tsdn(tsd), ret.v, NULL, NULL, true, true); return NULL; } malloc_mutex_lock(tsd_tsdn(tsd), gctx->lock); ret.p->state = prof_tctx_state_nominal; tctx_tree_insert(&gctx->tctxs, ret.p); gctx->nlimbo--; malloc_mutex_unlock(tsd_tsdn(tsd), gctx->lock); } return ret.p; } /* * The bodies of this function and prof_leakcheck() are compiled out unless heap * profiling is enabled, so that it is possible to compile jemalloc with * floating point support completely disabled. Avoiding floating point code is * important on memory-constrained systems, but it also enables a workaround for * versions of glibc that don't properly save/restore floating point registers * during dynamic lazy symbol loading (which internally calls into whatever * malloc implementation happens to be integrated into the application). Note * that some compilers (e.g. gcc 4.8) may use floating point registers for fast * memory moves, so jemalloc must be compiled with such optimizations disabled * (e.g. * -mno-sse) in order for the workaround to be complete. */ void prof_sample_threshold_update(prof_tdata_t *tdata) { #ifdef JEMALLOC_PROF uint64_t r; double u; if (!config_prof) { return; } if (lg_prof_sample == 0) { tdata->bytes_until_sample = 0; return; } /* * Compute sample interval as a geometrically distributed random * variable with mean (2^lg_prof_sample). 
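 * This is the standard inverse-transform draw for a geometric
 * distribution: take u uniform in (0, 1) and round log(u)/log(1-p) up
 * to the next integer.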
* * __ __ * | log(u) | 1 * tdata->bytes_until_sample = | -------- |, where p = --------------- * | log(1-p) | lg_prof_sample * 2 * * For more information on the math, see: * * Non-Uniform Random Variate Generation * Luc Devroye * Springer-Verlag, New York, 1986 * pp 500 * (http://luc.devroye.org/rnbookindex.html) */ r = prng_lg_range_u64(&tdata->prng_state, 53); u = (double)r * (1.0/9007199254740992.0L); tdata->bytes_until_sample = (uint64_t)(log(u) / log(1.0 - (1.0 / (double)((uint64_t)1U << lg_prof_sample)))) + (uint64_t)1U; #endif } #ifdef JEMALLOC_JET static prof_tdata_t * prof_tdata_count_iter(prof_tdata_tree_t *tdatas, prof_tdata_t *tdata, void *arg) { size_t *tdata_count = (size_t *)arg; (*tdata_count)++; return NULL; } size_t prof_tdata_count(void) { size_t tdata_count = 0; tsdn_t *tsdn; tsdn = tsdn_fetch(); malloc_mutex_lock(tsdn, &tdatas_mtx); tdata_tree_iter(&tdatas, NULL, prof_tdata_count_iter, (void *)&tdata_count); malloc_mutex_unlock(tsdn, &tdatas_mtx); return tdata_count; } size_t prof_bt_count(void) { size_t bt_count; tsd_t *tsd; prof_tdata_t *tdata; tsd = tsd_fetch(); tdata = prof_tdata_get(tsd, false); if (tdata == NULL) { return 0; } malloc_mutex_lock(tsd_tsdn(tsd), &bt2gctx_mtx); bt_count = ckh_count(&bt2gctx); malloc_mutex_unlock(tsd_tsdn(tsd), &bt2gctx_mtx); return bt_count; } #endif static int prof_dump_open_impl(bool propagate_err, const char *filename) { int fd; fd = creat(filename, 0644); if (fd == -1 && !propagate_err) { malloc_printf(": creat(\"%s\"), 0644) failed\n", filename); if (opt_abort) { abort(); } } return fd; } prof_dump_open_t *JET_MUTABLE prof_dump_open = prof_dump_open_impl; static bool prof_dump_flush(bool propagate_err) { bool ret = false; ssize_t err; cassert(config_prof); err = write(prof_dump_fd, prof_dump_buf, prof_dump_buf_end); if (err == -1) { if (!propagate_err) { malloc_write(": write() failed during heap " "profile flush\n"); if (opt_abort) { abort(); } } ret = true; } prof_dump_buf_end = 0; return ret; } static bool prof_dump_close(bool propagate_err) { bool ret; assert(prof_dump_fd != -1); ret = prof_dump_flush(propagate_err); close(prof_dump_fd); prof_dump_fd = -1; return ret; } static bool prof_dump_write(bool propagate_err, const char *s) { size_t i, slen, n; cassert(config_prof); i = 0; slen = strlen(s); while (i < slen) { /* Flush the buffer if it is full. */ if (prof_dump_buf_end == PROF_DUMP_BUFSIZE) { if (prof_dump_flush(propagate_err) && propagate_err) { return true; } } if (prof_dump_buf_end + slen <= PROF_DUMP_BUFSIZE) { /* Finish writing. */ n = slen - i; } else { /* Write as much of s as will fit. */ n = PROF_DUMP_BUFSIZE - prof_dump_buf_end; } memcpy(&prof_dump_buf[prof_dump_buf_end], &s[i], n); prof_dump_buf_end += n; i += n; } return false; } JEMALLOC_FORMAT_PRINTF(2, 3) static bool prof_dump_printf(bool propagate_err, const char *format, ...) 
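/*
 * printf()-style helper for the dump path: format into a bounded stack
 * buffer (PROF_PRINTF_BUFSIZE) and append the result to the shared dump
 * buffer via prof_dump_write().
 */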
{ bool ret; va_list ap; char buf[PROF_PRINTF_BUFSIZE]; va_start(ap, format); malloc_vsnprintf(buf, sizeof(buf), format, ap); va_end(ap); ret = prof_dump_write(propagate_err, buf); return ret; } static void prof_tctx_merge_tdata(tsdn_t *tsdn, prof_tctx_t *tctx, prof_tdata_t *tdata) { malloc_mutex_assert_owner(tsdn, tctx->tdata->lock); malloc_mutex_lock(tsdn, tctx->gctx->lock); switch (tctx->state) { case prof_tctx_state_initializing: malloc_mutex_unlock(tsdn, tctx->gctx->lock); return; case prof_tctx_state_nominal: tctx->state = prof_tctx_state_dumping; malloc_mutex_unlock(tsdn, tctx->gctx->lock); memcpy(&tctx->dump_cnts, &tctx->cnts, sizeof(prof_cnt_t)); tdata->cnt_summed.curobjs += tctx->dump_cnts.curobjs; tdata->cnt_summed.curbytes += tctx->dump_cnts.curbytes; if (opt_prof_accum) { tdata->cnt_summed.accumobjs += tctx->dump_cnts.accumobjs; tdata->cnt_summed.accumbytes += tctx->dump_cnts.accumbytes; } break; case prof_tctx_state_dumping: case prof_tctx_state_purgatory: not_reached(); } } static void prof_tctx_merge_gctx(tsdn_t *tsdn, prof_tctx_t *tctx, prof_gctx_t *gctx) { malloc_mutex_assert_owner(tsdn, gctx->lock); gctx->cnt_summed.curobjs += tctx->dump_cnts.curobjs; gctx->cnt_summed.curbytes += tctx->dump_cnts.curbytes; if (opt_prof_accum) { gctx->cnt_summed.accumobjs += tctx->dump_cnts.accumobjs; gctx->cnt_summed.accumbytes += tctx->dump_cnts.accumbytes; } } static prof_tctx_t * prof_tctx_merge_iter(prof_tctx_tree_t *tctxs, prof_tctx_t *tctx, void *arg) { tsdn_t *tsdn = (tsdn_t *)arg; malloc_mutex_assert_owner(tsdn, tctx->gctx->lock); switch (tctx->state) { case prof_tctx_state_nominal: /* New since dumping started; ignore. */ break; case prof_tctx_state_dumping: case prof_tctx_state_purgatory: prof_tctx_merge_gctx(tsdn, tctx, tctx->gctx); break; default: not_reached(); } return NULL; } struct prof_tctx_dump_iter_arg_s { tsdn_t *tsdn; bool propagate_err; }; static prof_tctx_t * prof_tctx_dump_iter(prof_tctx_tree_t *tctxs, prof_tctx_t *tctx, void *opaque) { struct prof_tctx_dump_iter_arg_s *arg = (struct prof_tctx_dump_iter_arg_s *)opaque; malloc_mutex_assert_owner(arg->tsdn, tctx->gctx->lock); switch (tctx->state) { case prof_tctx_state_initializing: case prof_tctx_state_nominal: /* Not captured by this dump. */ break; case prof_tctx_state_dumping: case prof_tctx_state_purgatory: if (prof_dump_printf(arg->propagate_err, " t%"FMTu64": %"FMTu64": %"FMTu64" [%"FMTu64": " "%"FMTu64"]\n", tctx->thr_uid, tctx->dump_cnts.curobjs, tctx->dump_cnts.curbytes, tctx->dump_cnts.accumobjs, tctx->dump_cnts.accumbytes)) { return tctx; } break; default: not_reached(); } return NULL; } static prof_tctx_t * prof_tctx_finish_iter(prof_tctx_tree_t *tctxs, prof_tctx_t *tctx, void *arg) { tsdn_t *tsdn = (tsdn_t *)arg; prof_tctx_t *ret; malloc_mutex_assert_owner(tsdn, tctx->gctx->lock); switch (tctx->state) { case prof_tctx_state_nominal: /* New since dumping started; ignore. */ break; case prof_tctx_state_dumping: tctx->state = prof_tctx_state_nominal; break; case prof_tctx_state_purgatory: ret = tctx; goto label_return; default: not_reached(); } ret = NULL; label_return: return ret; } static void prof_dump_gctx_prep(tsdn_t *tsdn, prof_gctx_t *gctx, prof_gctx_tree_t *gctxs) { cassert(config_prof); malloc_mutex_lock(tsdn, gctx->lock); /* * Increment nlimbo so that gctx won't go away before dump. * Additionally, link gctx into the dump list so that it is included in * prof_dump()'s second pass. 
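 * The matching nlimbo decrement happens in prof_gctx_finish() once the
 * dump no longer needs this gctx.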
*/ gctx->nlimbo++; gctx_tree_insert(gctxs, gctx); memset(&gctx->cnt_summed, 0, sizeof(prof_cnt_t)); malloc_mutex_unlock(tsdn, gctx->lock); } struct prof_gctx_merge_iter_arg_s { tsdn_t *tsdn; size_t leak_ngctx; }; static prof_gctx_t * prof_gctx_merge_iter(prof_gctx_tree_t *gctxs, prof_gctx_t *gctx, void *opaque) { struct prof_gctx_merge_iter_arg_s *arg = (struct prof_gctx_merge_iter_arg_s *)opaque; malloc_mutex_lock(arg->tsdn, gctx->lock); tctx_tree_iter(&gctx->tctxs, NULL, prof_tctx_merge_iter, (void *)arg->tsdn); if (gctx->cnt_summed.curobjs != 0) { arg->leak_ngctx++; } malloc_mutex_unlock(arg->tsdn, gctx->lock); return NULL; } static void prof_gctx_finish(tsd_t *tsd, prof_gctx_tree_t *gctxs) { prof_tdata_t *tdata = prof_tdata_get(tsd, false); prof_gctx_t *gctx; /* * Standard tree iteration won't work here, because as soon as we * decrement gctx->nlimbo and unlock gctx, another thread can * concurrently destroy it, which will corrupt the tree. Therefore, * tear down the tree one node at a time during iteration. */ while ((gctx = gctx_tree_first(gctxs)) != NULL) { gctx_tree_remove(gctxs, gctx); malloc_mutex_lock(tsd_tsdn(tsd), gctx->lock); { prof_tctx_t *next; next = NULL; do { prof_tctx_t *to_destroy = tctx_tree_iter(&gctx->tctxs, next, prof_tctx_finish_iter, (void *)tsd_tsdn(tsd)); if (to_destroy != NULL) { next = tctx_tree_next(&gctx->tctxs, to_destroy); tctx_tree_remove(&gctx->tctxs, to_destroy); idalloctm(tsd_tsdn(tsd), to_destroy, NULL, NULL, true, true); } else { next = NULL; } } while (next != NULL); } gctx->nlimbo--; if (prof_gctx_should_destroy(gctx)) { gctx->nlimbo++; malloc_mutex_unlock(tsd_tsdn(tsd), gctx->lock); prof_gctx_try_destroy(tsd, tdata, gctx, tdata); } else { malloc_mutex_unlock(tsd_tsdn(tsd), gctx->lock); } } } struct prof_tdata_merge_iter_arg_s { tsdn_t *tsdn; prof_cnt_t cnt_all; }; static prof_tdata_t * prof_tdata_merge_iter(prof_tdata_tree_t *tdatas, prof_tdata_t *tdata, void *opaque) { struct prof_tdata_merge_iter_arg_s *arg = (struct prof_tdata_merge_iter_arg_s *)opaque; malloc_mutex_lock(arg->tsdn, tdata->lock); if (!tdata->expired) { size_t tabind; union { prof_tctx_t *p; void *v; } tctx; tdata->dumping = true; memset(&tdata->cnt_summed, 0, sizeof(prof_cnt_t)); for (tabind = 0; !ckh_iter(&tdata->bt2tctx, &tabind, NULL, &tctx.v);) { prof_tctx_merge_tdata(arg->tsdn, tctx.p, tdata); } arg->cnt_all.curobjs += tdata->cnt_summed.curobjs; arg->cnt_all.curbytes += tdata->cnt_summed.curbytes; if (opt_prof_accum) { arg->cnt_all.accumobjs += tdata->cnt_summed.accumobjs; arg->cnt_all.accumbytes += tdata->cnt_summed.accumbytes; } } else { tdata->dumping = false; } malloc_mutex_unlock(arg->tsdn, tdata->lock); return NULL; } static prof_tdata_t * prof_tdata_dump_iter(prof_tdata_tree_t *tdatas, prof_tdata_t *tdata, void *arg) { bool propagate_err = *(bool *)arg; if (!tdata->dumping) { return NULL; } if (prof_dump_printf(propagate_err, " t%"FMTu64": %"FMTu64": %"FMTu64" [%"FMTu64": %"FMTu64"]%s%s\n", tdata->thr_uid, tdata->cnt_summed.curobjs, tdata->cnt_summed.curbytes, tdata->cnt_summed.accumobjs, tdata->cnt_summed.accumbytes, (tdata->thread_name != NULL) ? " " : "", (tdata->thread_name != NULL) ? 
tdata->thread_name : "")) { return tdata; } return NULL; } static bool prof_dump_header_impl(tsdn_t *tsdn, bool propagate_err, const prof_cnt_t *cnt_all) { bool ret; if (prof_dump_printf(propagate_err, "heap_v2/%"FMTu64"\n" " t*: %"FMTu64": %"FMTu64" [%"FMTu64": %"FMTu64"]\n", ((uint64_t)1U << lg_prof_sample), cnt_all->curobjs, cnt_all->curbytes, cnt_all->accumobjs, cnt_all->accumbytes)) { return true; } malloc_mutex_lock(tsdn, &tdatas_mtx); ret = (tdata_tree_iter(&tdatas, NULL, prof_tdata_dump_iter, (void *)&propagate_err) != NULL); malloc_mutex_unlock(tsdn, &tdatas_mtx); return ret; } prof_dump_header_t *JET_MUTABLE prof_dump_header = prof_dump_header_impl; static bool prof_dump_gctx(tsdn_t *tsdn, bool propagate_err, prof_gctx_t *gctx, const prof_bt_t *bt, prof_gctx_tree_t *gctxs) { bool ret; unsigned i; struct prof_tctx_dump_iter_arg_s prof_tctx_dump_iter_arg; cassert(config_prof); malloc_mutex_assert_owner(tsdn, gctx->lock); /* Avoid dumping such gctx's that have no useful data. */ if ((!opt_prof_accum && gctx->cnt_summed.curobjs == 0) || (opt_prof_accum && gctx->cnt_summed.accumobjs == 0)) { assert(gctx->cnt_summed.curobjs == 0); assert(gctx->cnt_summed.curbytes == 0); assert(gctx->cnt_summed.accumobjs == 0); assert(gctx->cnt_summed.accumbytes == 0); ret = false; goto label_return; } if (prof_dump_printf(propagate_err, "@")) { ret = true; goto label_return; } for (i = 0; i < bt->len; i++) { if (prof_dump_printf(propagate_err, " %#"FMTxPTR, (uintptr_t)bt->vec[i])) { ret = true; goto label_return; } } if (prof_dump_printf(propagate_err, "\n" " t*: %"FMTu64": %"FMTu64" [%"FMTu64": %"FMTu64"]\n", gctx->cnt_summed.curobjs, gctx->cnt_summed.curbytes, gctx->cnt_summed.accumobjs, gctx->cnt_summed.accumbytes)) { ret = true; goto label_return; } prof_tctx_dump_iter_arg.tsdn = tsdn; prof_tctx_dump_iter_arg.propagate_err = propagate_err; if (tctx_tree_iter(&gctx->tctxs, NULL, prof_tctx_dump_iter, (void *)&prof_tctx_dump_iter_arg) != NULL) { ret = true; goto label_return; } ret = false; label_return: return ret; } #ifndef _WIN32 JEMALLOC_FORMAT_PRINTF(1, 2) static int prof_open_maps(const char *format, ...) { int mfd; va_list ap; char filename[PATH_MAX + 1]; va_start(ap, format); malloc_vsnprintf(filename, sizeof(filename), format, ap); va_end(ap); mfd = open(filename, O_RDONLY | O_CLOEXEC); return mfd; } #endif static int prof_getpid(void) { #ifdef _WIN32 return GetCurrentProcessId(); #else return getpid(); #endif } static bool prof_dump_maps(bool propagate_err) { bool ret; int mfd; cassert(config_prof); #ifdef __FreeBSD__ mfd = prof_open_maps("/proc/curproc/map"); #elif defined(_WIN32) mfd = -1; // Not implemented #else { int pid = prof_getpid(); mfd = prof_open_maps("/proc/%d/task/%d/maps", pid, pid); if (mfd == -1) { mfd = prof_open_maps("/proc/%d/maps", pid); } } #endif if (mfd != -1) { ssize_t nread; if (prof_dump_write(propagate_err, "\nMAPPED_LIBRARIES:\n") && propagate_err) { ret = true; goto label_return; } nread = 0; do { prof_dump_buf_end += nread; if (prof_dump_buf_end == PROF_DUMP_BUFSIZE) { /* Make space in prof_dump_buf before read(). */ if (prof_dump_flush(propagate_err) && propagate_err) { ret = true; goto label_return; } } nread = read(mfd, &prof_dump_buf[prof_dump_buf_end], PROF_DUMP_BUFSIZE - prof_dump_buf_end); } while (nread > 0); } else { ret = true; goto label_return; } ret = false; label_return: if (mfd != -1) { close(mfd); } return ret; } /* * See prof_sample_threshold_update() comment for why the body of this function * is conditionally compiled. 
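 * In short, the scaling below uses floating point, and compiling it out
 * makes it possible to build jemalloc with floating point support
 * disabled entirely.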
*/ static void prof_leakcheck(const prof_cnt_t *cnt_all, size_t leak_ngctx, const char *filename) { #ifdef JEMALLOC_PROF /* * Scaling is equivalent AdjustSamples() in jeprof, but the result may * differ slightly from what jeprof reports, because here we scale the * summary values, whereas jeprof scales each context individually and * reports the sums of the scaled values. */ if (cnt_all->curbytes != 0) { double sample_period = (double)((uint64_t)1 << lg_prof_sample); double ratio = (((double)cnt_all->curbytes) / (double)cnt_all->curobjs) / sample_period; double scale_factor = 1.0 / (1.0 - exp(-ratio)); uint64_t curbytes = (uint64_t)round(((double)cnt_all->curbytes) * scale_factor); uint64_t curobjs = (uint64_t)round(((double)cnt_all->curobjs) * scale_factor); malloc_printf(": Leak approximation summary: ~%"FMTu64 " byte%s, ~%"FMTu64" object%s, >= %zu context%s\n", curbytes, (curbytes != 1) ? "s" : "", curobjs, (curobjs != 1) ? "s" : "", leak_ngctx, (leak_ngctx != 1) ? "s" : ""); malloc_printf( ": Run jeprof on \"%s\" for leak detail\n", filename); } #endif } struct prof_gctx_dump_iter_arg_s { tsdn_t *tsdn; bool propagate_err; }; static prof_gctx_t * prof_gctx_dump_iter(prof_gctx_tree_t *gctxs, prof_gctx_t *gctx, void *opaque) { prof_gctx_t *ret; struct prof_gctx_dump_iter_arg_s *arg = (struct prof_gctx_dump_iter_arg_s *)opaque; malloc_mutex_lock(arg->tsdn, gctx->lock); if (prof_dump_gctx(arg->tsdn, arg->propagate_err, gctx, &gctx->bt, gctxs)) { ret = gctx; goto label_return; } ret = NULL; label_return: malloc_mutex_unlock(arg->tsdn, gctx->lock); return ret; } static void prof_dump_prep(tsd_t *tsd, prof_tdata_t *tdata, struct prof_tdata_merge_iter_arg_s *prof_tdata_merge_iter_arg, struct prof_gctx_merge_iter_arg_s *prof_gctx_merge_iter_arg, prof_gctx_tree_t *gctxs) { size_t tabind; union { prof_gctx_t *p; void *v; } gctx; prof_enter(tsd, tdata); /* * Put gctx's in limbo and clear their counters in preparation for * summing. */ gctx_tree_new(gctxs); for (tabind = 0; !ckh_iter(&bt2gctx, &tabind, NULL, &gctx.v);) { prof_dump_gctx_prep(tsd_tsdn(tsd), gctx.p, gctxs); } /* * Iterate over tdatas, and for the non-expired ones snapshot their tctx * stats and merge them into the associated gctx's. */ prof_tdata_merge_iter_arg->tsdn = tsd_tsdn(tsd); memset(&prof_tdata_merge_iter_arg->cnt_all, 0, sizeof(prof_cnt_t)); malloc_mutex_lock(tsd_tsdn(tsd), &tdatas_mtx); tdata_tree_iter(&tdatas, NULL, prof_tdata_merge_iter, (void *)prof_tdata_merge_iter_arg); malloc_mutex_unlock(tsd_tsdn(tsd), &tdatas_mtx); /* Merge tctx stats into gctx's. */ prof_gctx_merge_iter_arg->tsdn = tsd_tsdn(tsd); prof_gctx_merge_iter_arg->leak_ngctx = 0; gctx_tree_iter(gctxs, NULL, prof_gctx_merge_iter, (void *)prof_gctx_merge_iter_arg); prof_leave(tsd, tdata); } static bool prof_dump_file(tsd_t *tsd, bool propagate_err, const char *filename, bool leakcheck, prof_tdata_t *tdata, struct prof_tdata_merge_iter_arg_s *prof_tdata_merge_iter_arg, struct prof_gctx_merge_iter_arg_s *prof_gctx_merge_iter_arg, struct prof_gctx_dump_iter_arg_s *prof_gctx_dump_iter_arg, prof_gctx_tree_t *gctxs) { /* Create dump file. */ if ((prof_dump_fd = prof_dump_open(propagate_err, filename)) == -1) { return true; } /* Dump profile header. */ if (prof_dump_header(tsd_tsdn(tsd), propagate_err, &prof_tdata_merge_iter_arg->cnt_all)) { goto label_write_error; } /* Dump per gctx profile stats. 
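 * This is the second pass over the gctx tree assembled by
 * prof_dump_prep(); each entry emits its backtrace followed by
 * per-thread counts.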
*/ prof_gctx_dump_iter_arg->tsdn = tsd_tsdn(tsd); prof_gctx_dump_iter_arg->propagate_err = propagate_err; if (gctx_tree_iter(gctxs, NULL, prof_gctx_dump_iter, (void *)prof_gctx_dump_iter_arg) != NULL) { goto label_write_error; } /* Dump /proc//maps if possible. */ if (prof_dump_maps(propagate_err)) { goto label_write_error; } if (prof_dump_close(propagate_err)) { return true; } return false; label_write_error: prof_dump_close(propagate_err); return true; } static bool prof_dump(tsd_t *tsd, bool propagate_err, const char *filename, bool leakcheck) { cassert(config_prof); assert(tsd_reentrancy_level_get(tsd) == 0); prof_tdata_t * tdata = prof_tdata_get(tsd, true); if (tdata == NULL) { return true; } - pre_reentrancy(tsd); + pre_reentrancy(tsd, NULL); malloc_mutex_lock(tsd_tsdn(tsd), &prof_dump_mtx); prof_gctx_tree_t gctxs; struct prof_tdata_merge_iter_arg_s prof_tdata_merge_iter_arg; struct prof_gctx_merge_iter_arg_s prof_gctx_merge_iter_arg; struct prof_gctx_dump_iter_arg_s prof_gctx_dump_iter_arg; prof_dump_prep(tsd, tdata, &prof_tdata_merge_iter_arg, &prof_gctx_merge_iter_arg, &gctxs); bool err = prof_dump_file(tsd, propagate_err, filename, leakcheck, tdata, &prof_tdata_merge_iter_arg, &prof_gctx_merge_iter_arg, &prof_gctx_dump_iter_arg, &gctxs); prof_gctx_finish(tsd, &gctxs); malloc_mutex_unlock(tsd_tsdn(tsd), &prof_dump_mtx); post_reentrancy(tsd); if (err) { return true; } if (leakcheck) { prof_leakcheck(&prof_tdata_merge_iter_arg.cnt_all, prof_gctx_merge_iter_arg.leak_ngctx, filename); } return false; } #ifdef JEMALLOC_JET void prof_cnt_all(uint64_t *curobjs, uint64_t *curbytes, uint64_t *accumobjs, uint64_t *accumbytes) { tsd_t *tsd; prof_tdata_t *tdata; struct prof_tdata_merge_iter_arg_s prof_tdata_merge_iter_arg; struct prof_gctx_merge_iter_arg_s prof_gctx_merge_iter_arg; prof_gctx_tree_t gctxs; tsd = tsd_fetch(); tdata = prof_tdata_get(tsd, false); if (tdata == NULL) { if (curobjs != NULL) { *curobjs = 0; } if (curbytes != NULL) { *curbytes = 0; } if (accumobjs != NULL) { *accumobjs = 0; } if (accumbytes != NULL) { *accumbytes = 0; } return; } prof_dump_prep(tsd, tdata, &prof_tdata_merge_iter_arg, &prof_gctx_merge_iter_arg, &gctxs); prof_gctx_finish(tsd, &gctxs); if (curobjs != NULL) { *curobjs = prof_tdata_merge_iter_arg.cnt_all.curobjs; } if (curbytes != NULL) { *curbytes = prof_tdata_merge_iter_arg.cnt_all.curbytes; } if (accumobjs != NULL) { *accumobjs = prof_tdata_merge_iter_arg.cnt_all.accumobjs; } if (accumbytes != NULL) { *accumbytes = prof_tdata_merge_iter_arg.cnt_all.accumbytes; } } #endif #define DUMP_FILENAME_BUFSIZE (PATH_MAX + 1) #define VSEQ_INVALID UINT64_C(0xffffffffffffffff) static void prof_dump_filename(char *filename, char v, uint64_t vseq) { cassert(config_prof); if (vseq != VSEQ_INVALID) { /* "...v.heap" */ malloc_snprintf(filename, DUMP_FILENAME_BUFSIZE, "%s.%d.%"FMTu64".%c%"FMTu64".heap", opt_prof_prefix, prof_getpid(), prof_dump_seq, v, vseq); } else { /* "....heap" */ malloc_snprintf(filename, DUMP_FILENAME_BUFSIZE, "%s.%d.%"FMTu64".%c.heap", opt_prof_prefix, prof_getpid(), prof_dump_seq, v); } prof_dump_seq++; } static void prof_fdump(void) { tsd_t *tsd; char filename[DUMP_FILENAME_BUFSIZE]; cassert(config_prof); assert(opt_prof_final); assert(opt_prof_prefix[0] != '\0'); if (!prof_booted) { return; } tsd = tsd_fetch(); assert(tsd_reentrancy_level_get(tsd) == 0); malloc_mutex_lock(tsd_tsdn(tsd), &prof_dump_seq_mtx); prof_dump_filename(filename, 'f', VSEQ_INVALID); malloc_mutex_unlock(tsd_tsdn(tsd), &prof_dump_seq_mtx); prof_dump(tsd, false, filename, 
opt_prof_leak); } bool prof_accum_init(tsdn_t *tsdn, prof_accum_t *prof_accum) { cassert(config_prof); #ifndef JEMALLOC_ATOMIC_U64 if (malloc_mutex_init(&prof_accum->mtx, "prof_accum", WITNESS_RANK_PROF_ACCUM, malloc_mutex_rank_exclusive)) { return true; } prof_accum->accumbytes = 0; #else atomic_store_u64(&prof_accum->accumbytes, 0, ATOMIC_RELAXED); #endif return false; } void prof_idump(tsdn_t *tsdn) { tsd_t *tsd; prof_tdata_t *tdata; cassert(config_prof); if (!prof_booted || tsdn_null(tsdn)) { return; } tsd = tsdn_tsd(tsdn); if (tsd_reentrancy_level_get(tsd) > 0) { return; } tdata = prof_tdata_get(tsd, false); if (tdata == NULL) { return; } if (tdata->enq) { tdata->enq_idump = true; return; } if (opt_prof_prefix[0] != '\0') { char filename[PATH_MAX + 1]; malloc_mutex_lock(tsd_tsdn(tsd), &prof_dump_seq_mtx); prof_dump_filename(filename, 'i', prof_dump_iseq); prof_dump_iseq++; malloc_mutex_unlock(tsd_tsdn(tsd), &prof_dump_seq_mtx); prof_dump(tsd, false, filename, false); } } bool prof_mdump(tsd_t *tsd, const char *filename) { cassert(config_prof); assert(tsd_reentrancy_level_get(tsd) == 0); if (!opt_prof || !prof_booted) { return true; } char filename_buf[DUMP_FILENAME_BUFSIZE]; if (filename == NULL) { /* No filename specified, so automatically generate one. */ if (opt_prof_prefix[0] == '\0') { return true; } malloc_mutex_lock(tsd_tsdn(tsd), &prof_dump_seq_mtx); prof_dump_filename(filename_buf, 'm', prof_dump_mseq); prof_dump_mseq++; malloc_mutex_unlock(tsd_tsdn(tsd), &prof_dump_seq_mtx); filename = filename_buf; } return prof_dump(tsd, true, filename, false); } void prof_gdump(tsdn_t *tsdn) { tsd_t *tsd; prof_tdata_t *tdata; cassert(config_prof); if (!prof_booted || tsdn_null(tsdn)) { return; } tsd = tsdn_tsd(tsdn); if (tsd_reentrancy_level_get(tsd) > 0) { return; } tdata = prof_tdata_get(tsd, false); if (tdata == NULL) { return; } if (tdata->enq) { tdata->enq_gdump = true; return; } if (opt_prof_prefix[0] != '\0') { char filename[DUMP_FILENAME_BUFSIZE]; malloc_mutex_lock(tsdn, &prof_dump_seq_mtx); prof_dump_filename(filename, 'u', prof_dump_useq); prof_dump_useq++; malloc_mutex_unlock(tsdn, &prof_dump_seq_mtx); prof_dump(tsd, false, filename, false); } } static void prof_bt_hash(const void *key, size_t r_hash[2]) { prof_bt_t *bt = (prof_bt_t *)key; cassert(config_prof); hash(bt->vec, bt->len * sizeof(void *), 0x94122f33U, r_hash); } static bool prof_bt_keycomp(const void *k1, const void *k2) { const prof_bt_t *bt1 = (prof_bt_t *)k1; const prof_bt_t *bt2 = (prof_bt_t *)k2; cassert(config_prof); if (bt1->len != bt2->len) { return false; } return (memcmp(bt1->vec, bt2->vec, bt1->len * sizeof(void *)) == 0); } static uint64_t prof_thr_uid_alloc(tsdn_t *tsdn) { uint64_t thr_uid; malloc_mutex_lock(tsdn, &next_thr_uid_mtx); thr_uid = next_thr_uid; next_thr_uid++; malloc_mutex_unlock(tsdn, &next_thr_uid_mtx); return thr_uid; } static prof_tdata_t * prof_tdata_init_impl(tsd_t *tsd, uint64_t thr_uid, uint64_t thr_discrim, char *thread_name, bool active) { prof_tdata_t *tdata; cassert(config_prof); /* Initialize an empty cache for this thread. 
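 * The tdata's bt2tctx hash starts empty; tctx entries are added lazily
 * by prof_lookup() as this thread samples new backtraces.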
*/ tdata = (prof_tdata_t *)iallocztm(tsd_tsdn(tsd), sizeof(prof_tdata_t), sz_size2index(sizeof(prof_tdata_t)), false, NULL, true, arena_get(TSDN_NULL, 0, true), true); if (tdata == NULL) { return NULL; } tdata->lock = prof_tdata_mutex_choose(thr_uid); tdata->thr_uid = thr_uid; tdata->thr_discrim = thr_discrim; tdata->thread_name = thread_name; tdata->attached = true; tdata->expired = false; tdata->tctx_uid_next = 0; if (ckh_new(tsd, &tdata->bt2tctx, PROF_CKH_MINITEMS, prof_bt_hash, prof_bt_keycomp)) { idalloctm(tsd_tsdn(tsd), tdata, NULL, NULL, true, true); return NULL; } tdata->prng_state = (uint64_t)(uintptr_t)tdata; prof_sample_threshold_update(tdata); tdata->enq = false; tdata->enq_idump = false; tdata->enq_gdump = false; tdata->dumping = false; tdata->active = active; malloc_mutex_lock(tsd_tsdn(tsd), &tdatas_mtx); tdata_tree_insert(&tdatas, tdata); malloc_mutex_unlock(tsd_tsdn(tsd), &tdatas_mtx); return tdata; } prof_tdata_t * prof_tdata_init(tsd_t *tsd) { return prof_tdata_init_impl(tsd, prof_thr_uid_alloc(tsd_tsdn(tsd)), 0, NULL, prof_thread_active_init_get(tsd_tsdn(tsd))); } static bool prof_tdata_should_destroy_unlocked(prof_tdata_t *tdata, bool even_if_attached) { if (tdata->attached && !even_if_attached) { return false; } if (ckh_count(&tdata->bt2tctx) != 0) { return false; } return true; } static bool prof_tdata_should_destroy(tsdn_t *tsdn, prof_tdata_t *tdata, bool even_if_attached) { malloc_mutex_assert_owner(tsdn, tdata->lock); return prof_tdata_should_destroy_unlocked(tdata, even_if_attached); } static void prof_tdata_destroy_locked(tsd_t *tsd, prof_tdata_t *tdata, bool even_if_attached) { malloc_mutex_assert_owner(tsd_tsdn(tsd), &tdatas_mtx); tdata_tree_remove(&tdatas, tdata); assert(prof_tdata_should_destroy_unlocked(tdata, even_if_attached)); if (tdata->thread_name != NULL) { idalloctm(tsd_tsdn(tsd), tdata->thread_name, NULL, NULL, true, true); } ckh_delete(tsd, &tdata->bt2tctx); idalloctm(tsd_tsdn(tsd), tdata, NULL, NULL, true, true); } static void prof_tdata_destroy(tsd_t *tsd, prof_tdata_t *tdata, bool even_if_attached) { malloc_mutex_lock(tsd_tsdn(tsd), &tdatas_mtx); prof_tdata_destroy_locked(tsd, tdata, even_if_attached); malloc_mutex_unlock(tsd_tsdn(tsd), &tdatas_mtx); } static void prof_tdata_detach(tsd_t *tsd, prof_tdata_t *tdata) { bool destroy_tdata; malloc_mutex_lock(tsd_tsdn(tsd), tdata->lock); if (tdata->attached) { destroy_tdata = prof_tdata_should_destroy(tsd_tsdn(tsd), tdata, true); /* * Only detach if !destroy_tdata, because detaching would allow * another thread to win the race to destroy tdata. */ if (!destroy_tdata) { tdata->attached = false; } tsd_prof_tdata_set(tsd, NULL); } else { destroy_tdata = false; } malloc_mutex_unlock(tsd_tsdn(tsd), tdata->lock); if (destroy_tdata) { prof_tdata_destroy(tsd, tdata, true); } } prof_tdata_t * prof_tdata_reinit(tsd_t *tsd, prof_tdata_t *tdata) { uint64_t thr_uid = tdata->thr_uid; uint64_t thr_discrim = tdata->thr_discrim + 1; char *thread_name = (tdata->thread_name != NULL) ? prof_thread_name_alloc(tsd_tsdn(tsd), tdata->thread_name) : NULL; bool active = tdata->active; prof_tdata_detach(tsd, tdata); return prof_tdata_init_impl(tsd, thr_uid, thr_discrim, thread_name, active); } static bool prof_tdata_expire(tsdn_t *tsdn, prof_tdata_t *tdata) { bool destroy_tdata; malloc_mutex_lock(tsdn, tdata->lock); if (!tdata->expired) { tdata->expired = true; destroy_tdata = tdata->attached ? 
false : prof_tdata_should_destroy(tsdn, tdata, false); } else { destroy_tdata = false; } malloc_mutex_unlock(tsdn, tdata->lock); return destroy_tdata; } static prof_tdata_t * prof_tdata_reset_iter(prof_tdata_tree_t *tdatas, prof_tdata_t *tdata, void *arg) { tsdn_t *tsdn = (tsdn_t *)arg; return (prof_tdata_expire(tsdn, tdata) ? tdata : NULL); } void prof_reset(tsd_t *tsd, size_t lg_sample) { prof_tdata_t *next; assert(lg_sample < (sizeof(uint64_t) << 3)); malloc_mutex_lock(tsd_tsdn(tsd), &prof_dump_mtx); malloc_mutex_lock(tsd_tsdn(tsd), &tdatas_mtx); lg_prof_sample = lg_sample; next = NULL; do { prof_tdata_t *to_destroy = tdata_tree_iter(&tdatas, next, prof_tdata_reset_iter, (void *)tsd); if (to_destroy != NULL) { next = tdata_tree_next(&tdatas, to_destroy); prof_tdata_destroy_locked(tsd, to_destroy, false); } else { next = NULL; } } while (next != NULL); malloc_mutex_unlock(tsd_tsdn(tsd), &tdatas_mtx); malloc_mutex_unlock(tsd_tsdn(tsd), &prof_dump_mtx); } void prof_tdata_cleanup(tsd_t *tsd) { prof_tdata_t *tdata; if (!config_prof) { return; } tdata = tsd_prof_tdata_get(tsd); if (tdata != NULL) { prof_tdata_detach(tsd, tdata); } } bool prof_active_get(tsdn_t *tsdn) { bool prof_active_current; malloc_mutex_lock(tsdn, &prof_active_mtx); prof_active_current = prof_active; malloc_mutex_unlock(tsdn, &prof_active_mtx); return prof_active_current; } bool prof_active_set(tsdn_t *tsdn, bool active) { bool prof_active_old; malloc_mutex_lock(tsdn, &prof_active_mtx); prof_active_old = prof_active; prof_active = active; malloc_mutex_unlock(tsdn, &prof_active_mtx); return prof_active_old; } const char * prof_thread_name_get(tsd_t *tsd) { prof_tdata_t *tdata; tdata = prof_tdata_get(tsd, true); if (tdata == NULL) { return ""; } return (tdata->thread_name != NULL ? tdata->thread_name : ""); } static char * prof_thread_name_alloc(tsdn_t *tsdn, const char *thread_name) { char *ret; size_t size; if (thread_name == NULL) { return NULL; } size = strlen(thread_name) + 1; if (size == 1) { return ""; } ret = iallocztm(tsdn, size, sz_size2index(size), false, NULL, true, arena_get(TSDN_NULL, 0, true), true); if (ret == NULL) { return NULL; } memcpy(ret, thread_name, size); return ret; } int prof_thread_name_set(tsd_t *tsd, const char *thread_name) { prof_tdata_t *tdata; unsigned i; char *s; tdata = prof_tdata_get(tsd, true); if (tdata == NULL) { return EAGAIN; } /* Validate input. 
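 * Reject NULL, and reject any name containing characters that are
 * neither printable nor blank; both cases return EFAULT.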
*/ if (thread_name == NULL) { return EFAULT; } for (i = 0; thread_name[i] != '\0'; i++) { char c = thread_name[i]; if (!isgraph(c) && !isblank(c)) { return EFAULT; } } s = prof_thread_name_alloc(tsd_tsdn(tsd), thread_name); if (s == NULL) { return EAGAIN; } if (tdata->thread_name != NULL) { idalloctm(tsd_tsdn(tsd), tdata->thread_name, NULL, NULL, true, true); tdata->thread_name = NULL; } if (strlen(s) > 0) { tdata->thread_name = s; } return 0; } bool prof_thread_active_get(tsd_t *tsd) { prof_tdata_t *tdata; tdata = prof_tdata_get(tsd, true); if (tdata == NULL) { return false; } return tdata->active; } bool prof_thread_active_set(tsd_t *tsd, bool active) { prof_tdata_t *tdata; tdata = prof_tdata_get(tsd, true); if (tdata == NULL) { return true; } tdata->active = active; return false; } bool prof_thread_active_init_get(tsdn_t *tsdn) { bool active_init; malloc_mutex_lock(tsdn, &prof_thread_active_init_mtx); active_init = prof_thread_active_init; malloc_mutex_unlock(tsdn, &prof_thread_active_init_mtx); return active_init; } bool prof_thread_active_init_set(tsdn_t *tsdn, bool active_init) { bool active_init_old; malloc_mutex_lock(tsdn, &prof_thread_active_init_mtx); active_init_old = prof_thread_active_init; prof_thread_active_init = active_init; malloc_mutex_unlock(tsdn, &prof_thread_active_init_mtx); return active_init_old; } bool prof_gdump_get(tsdn_t *tsdn) { bool prof_gdump_current; malloc_mutex_lock(tsdn, &prof_gdump_mtx); prof_gdump_current = prof_gdump_val; malloc_mutex_unlock(tsdn, &prof_gdump_mtx); return prof_gdump_current; } bool prof_gdump_set(tsdn_t *tsdn, bool gdump) { bool prof_gdump_old; malloc_mutex_lock(tsdn, &prof_gdump_mtx); prof_gdump_old = prof_gdump_val; prof_gdump_val = gdump; malloc_mutex_unlock(tsdn, &prof_gdump_mtx); return prof_gdump_old; } void prof_boot0(void) { cassert(config_prof); memcpy(opt_prof_prefix, PROF_PREFIX_DEFAULT, sizeof(PROF_PREFIX_DEFAULT)); } void prof_boot1(void) { cassert(config_prof); /* * opt_prof must be in its final state before any arenas are * initialized, so this function must be executed early. */ if (opt_prof_leak && !opt_prof) { /* * Enable opt_prof, but in such a way that profiles are never * automatically dumped. 
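 * Interval-triggered dumps stay disabled (prof_interval remains 0) and
 * gdump is forced off, so leak data is only reported via the final dump
 * when opt_prof_final is enabled.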
*/ opt_prof = true; opt_prof_gdump = false; } else if (opt_prof) { if (opt_lg_prof_interval >= 0) { prof_interval = (((uint64_t)1U) << opt_lg_prof_interval); } } } bool prof_boot2(tsd_t *tsd) { cassert(config_prof); if (opt_prof) { unsigned i; lg_prof_sample = opt_lg_prof_sample; prof_active = opt_prof_active; if (malloc_mutex_init(&prof_active_mtx, "prof_active", WITNESS_RANK_PROF_ACTIVE, malloc_mutex_rank_exclusive)) { return true; } prof_gdump_val = opt_prof_gdump; if (malloc_mutex_init(&prof_gdump_mtx, "prof_gdump", WITNESS_RANK_PROF_GDUMP, malloc_mutex_rank_exclusive)) { return true; } prof_thread_active_init = opt_prof_thread_active_init; if (malloc_mutex_init(&prof_thread_active_init_mtx, "prof_thread_active_init", WITNESS_RANK_PROF_THREAD_ACTIVE_INIT, malloc_mutex_rank_exclusive)) { return true; } if (ckh_new(tsd, &bt2gctx, PROF_CKH_MINITEMS, prof_bt_hash, prof_bt_keycomp)) { return true; } if (malloc_mutex_init(&bt2gctx_mtx, "prof_bt2gctx", WITNESS_RANK_PROF_BT2GCTX, malloc_mutex_rank_exclusive)) { return true; } tdata_tree_new(&tdatas); if (malloc_mutex_init(&tdatas_mtx, "prof_tdatas", WITNESS_RANK_PROF_TDATAS, malloc_mutex_rank_exclusive)) { return true; } next_thr_uid = 0; if (malloc_mutex_init(&next_thr_uid_mtx, "prof_next_thr_uid", WITNESS_RANK_PROF_NEXT_THR_UID, malloc_mutex_rank_exclusive)) { return true; } if (malloc_mutex_init(&prof_dump_seq_mtx, "prof_dump_seq", WITNESS_RANK_PROF_DUMP_SEQ, malloc_mutex_rank_exclusive)) { return true; } if (malloc_mutex_init(&prof_dump_mtx, "prof_dump", WITNESS_RANK_PROF_DUMP, malloc_mutex_rank_exclusive)) { return true; } if (opt_prof_final && opt_prof_prefix[0] != '\0' && atexit(prof_fdump) != 0) { malloc_write(": Error in atexit()\n"); if (opt_abort) { abort(); } } gctx_locks = (malloc_mutex_t *)base_alloc(tsd_tsdn(tsd), b0get(), PROF_NCTX_LOCKS * sizeof(malloc_mutex_t), CACHELINE); if (gctx_locks == NULL) { return true; } for (i = 0; i < PROF_NCTX_LOCKS; i++) { if (malloc_mutex_init(&gctx_locks[i], "prof_gctx", WITNESS_RANK_PROF_GCTX, malloc_mutex_rank_exclusive)) { return true; } } tdata_locks = (malloc_mutex_t *)base_alloc(tsd_tsdn(tsd), b0get(), PROF_NTDATA_LOCKS * sizeof(malloc_mutex_t), CACHELINE); if (tdata_locks == NULL) { return true; } for (i = 0; i < PROF_NTDATA_LOCKS; i++) { if (malloc_mutex_init(&tdata_locks[i], "prof_tdata", WITNESS_RANK_PROF_TDATA, malloc_mutex_rank_exclusive)) { return true; } } } #ifdef JEMALLOC_PROF_LIBGCC /* * Cause the backtracing machinery to allocate its internal state * before enabling profiling. 
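 * Presumably this keeps a later, real backtrace from allocating inside
 * libgcc while the allocator itself is already on the call stack.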
*/ _Unwind_Backtrace(prof_unwind_init_callback, NULL); #endif prof_booted = true; return false; } void prof_prefork0(tsdn_t *tsdn) { if (config_prof && opt_prof) { unsigned i; malloc_mutex_prefork(tsdn, &prof_dump_mtx); malloc_mutex_prefork(tsdn, &bt2gctx_mtx); malloc_mutex_prefork(tsdn, &tdatas_mtx); for (i = 0; i < PROF_NTDATA_LOCKS; i++) { malloc_mutex_prefork(tsdn, &tdata_locks[i]); } for (i = 0; i < PROF_NCTX_LOCKS; i++) { malloc_mutex_prefork(tsdn, &gctx_locks[i]); } } } void prof_prefork1(tsdn_t *tsdn) { if (config_prof && opt_prof) { malloc_mutex_prefork(tsdn, &prof_active_mtx); malloc_mutex_prefork(tsdn, &prof_dump_seq_mtx); malloc_mutex_prefork(tsdn, &prof_gdump_mtx); malloc_mutex_prefork(tsdn, &next_thr_uid_mtx); malloc_mutex_prefork(tsdn, &prof_thread_active_init_mtx); } } void prof_postfork_parent(tsdn_t *tsdn) { if (config_prof && opt_prof) { unsigned i; malloc_mutex_postfork_parent(tsdn, &prof_thread_active_init_mtx); malloc_mutex_postfork_parent(tsdn, &next_thr_uid_mtx); malloc_mutex_postfork_parent(tsdn, &prof_gdump_mtx); malloc_mutex_postfork_parent(tsdn, &prof_dump_seq_mtx); malloc_mutex_postfork_parent(tsdn, &prof_active_mtx); for (i = 0; i < PROF_NCTX_LOCKS; i++) { malloc_mutex_postfork_parent(tsdn, &gctx_locks[i]); } for (i = 0; i < PROF_NTDATA_LOCKS; i++) { malloc_mutex_postfork_parent(tsdn, &tdata_locks[i]); } malloc_mutex_postfork_parent(tsdn, &tdatas_mtx); malloc_mutex_postfork_parent(tsdn, &bt2gctx_mtx); malloc_mutex_postfork_parent(tsdn, &prof_dump_mtx); } } void prof_postfork_child(tsdn_t *tsdn) { if (config_prof && opt_prof) { unsigned i; malloc_mutex_postfork_child(tsdn, &prof_thread_active_init_mtx); malloc_mutex_postfork_child(tsdn, &next_thr_uid_mtx); malloc_mutex_postfork_child(tsdn, &prof_gdump_mtx); malloc_mutex_postfork_child(tsdn, &prof_dump_seq_mtx); malloc_mutex_postfork_child(tsdn, &prof_active_mtx); for (i = 0; i < PROF_NCTX_LOCKS; i++) { malloc_mutex_postfork_child(tsdn, &gctx_locks[i]); } for (i = 0; i < PROF_NTDATA_LOCKS; i++) { malloc_mutex_postfork_child(tsdn, &tdata_locks[i]); } malloc_mutex_postfork_child(tsdn, &tdatas_mtx); malloc_mutex_postfork_child(tsdn, &bt2gctx_mtx); malloc_mutex_postfork_child(tsdn, &prof_dump_mtx); } } /******************************************************************************/ Index: head/contrib/jemalloc/src/tcache.c =================================================================== --- head/contrib/jemalloc/src/tcache.c (revision 320622) +++ head/contrib/jemalloc/src/tcache.c (revision 320623) @@ -1,709 +1,708 @@ #define JEMALLOC_TCACHE_C_ #include "jemalloc/internal/jemalloc_preamble.h" #include "jemalloc/internal/jemalloc_internal_includes.h" #include "jemalloc/internal/assert.h" #include "jemalloc/internal/mutex.h" #include "jemalloc/internal/size_classes.h" /******************************************************************************/ /* Data. */ bool opt_tcache = true; ssize_t opt_lg_tcache_max = LG_TCACHE_MAXCLASS_DEFAULT; tcache_bin_info_t *tcache_bin_info; static unsigned stack_nelms; /* Total stack elms per tcache. */ unsigned nhbins; size_t tcache_maxclass; tcaches_t *tcaches; /* Index of first element within tcaches that has never been used. */ static unsigned tcaches_past; /* Head of singly linked list tracking available tcaches elements. */ static tcaches_t *tcaches_avail; /* Protects tcaches{,_past,_avail}. 
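 * The tcaches array backs explicitly created caches (the tcache.create,
 * tcache.flush, and tcache.destroy mallctls); automatic per-thread
 * tcaches are embedded in TSD and are not tracked here.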
*/ static malloc_mutex_t tcaches_mtx; /******************************************************************************/ size_t tcache_salloc(tsdn_t *tsdn, const void *ptr) { return arena_salloc(tsdn, ptr); } void tcache_event_hard(tsd_t *tsd, tcache_t *tcache) { szind_t binind = tcache->next_gc_bin; tcache_bin_t *tbin; if (binind < NBINS) { tbin = tcache_small_bin_get(tcache, binind); } else { tbin = tcache_large_bin_get(tcache, binind); } if (tbin->low_water > 0) { /* * Flush (ceiling) 3/4 of the objects below the low water mark. */ if (binind < NBINS) { tcache_bin_flush_small(tsd, tcache, tbin, binind, tbin->ncached - tbin->low_water + (tbin->low_water >> 2)); /* * Reduce fill count by 2X. Limit lg_fill_div such that * the fill count is always at least 1. */ tcache_bin_info_t *tbin_info = &tcache_bin_info[binind]; if ((tbin_info->ncached_max >> (tcache->lg_fill_div[binind] + 1)) >= 1) { tcache->lg_fill_div[binind]++; } } else { tcache_bin_flush_large(tsd, tbin, binind, tbin->ncached - tbin->low_water + (tbin->low_water >> 2), tcache); } } else if (tbin->low_water < 0) { /* * Increase fill count by 2X for small bins. Make sure * lg_fill_div stays greater than 0. */ if (binind < NBINS && tcache->lg_fill_div[binind] > 1) { tcache->lg_fill_div[binind]--; } } tbin->low_water = tbin->ncached; tcache->next_gc_bin++; if (tcache->next_gc_bin == nhbins) { tcache->next_gc_bin = 0; } } void * tcache_alloc_small_hard(tsdn_t *tsdn, arena_t *arena, tcache_t *tcache, tcache_bin_t *tbin, szind_t binind, bool *tcache_success) { void *ret; assert(tcache->arena != NULL); arena_tcache_fill_small(tsdn, arena, tcache, tbin, binind, config_prof ? tcache->prof_accumbytes : 0); if (config_prof) { tcache->prof_accumbytes = 0; } ret = tcache_alloc_easy(tbin, tcache_success); return ret; } void tcache_bin_flush_small(tsd_t *tsd, tcache_t *tcache, tcache_bin_t *tbin, szind_t binind, unsigned rem) { bool merged_stats = false; assert(binind < NBINS); assert(rem <= tbin->ncached); arena_t *arena = tcache->arena; assert(arena != NULL); unsigned nflush = tbin->ncached - rem; VARIABLE_ARRAY(extent_t *, item_extent, nflush); /* Look up extent once per item. */ for (unsigned i = 0 ; i < nflush; i++) { item_extent[i] = iealloc(tsd_tsdn(tsd), *(tbin->avail - 1 - i)); } while (nflush > 0) { /* Lock the arena bin associated with the first object. */ extent_t *extent = item_extent[0]; arena_t *bin_arena = extent_arena_get(extent); arena_bin_t *bin = &bin_arena->bins[binind]; if (config_prof && bin_arena == arena) { if (arena_prof_accum(tsd_tsdn(tsd), arena, tcache->prof_accumbytes)) { prof_idump(tsd_tsdn(tsd)); } tcache->prof_accumbytes = 0; } malloc_mutex_lock(tsd_tsdn(tsd), &bin->lock); if (config_stats && bin_arena == arena) { assert(!merged_stats); merged_stats = true; bin->stats.nflushes++; bin->stats.nrequests += tbin->tstats.nrequests; tbin->tstats.nrequests = 0; } unsigned ndeferred = 0; for (unsigned i = 0; i < nflush; i++) { void *ptr = *(tbin->avail - 1 - i); extent = item_extent[i]; assert(ptr != NULL && extent != NULL); if (extent_arena_get(extent) == bin_arena) { arena_dalloc_bin_junked_locked(tsd_tsdn(tsd), bin_arena, extent, ptr); } else { /* * This object was allocated via a different * arena bin than the one that is currently * locked. Stash the object, so that it can be * handled in a future pass. 
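 * Deferred objects are packed back into the avail slots so that the
 * next iteration of the outer flush loop can lock their owning arena
 * bin and flush them in turn.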
*/ *(tbin->avail - 1 - ndeferred) = ptr; item_extent[ndeferred] = extent; ndeferred++; } } malloc_mutex_unlock(tsd_tsdn(tsd), &bin->lock); arena_decay_ticks(tsd_tsdn(tsd), bin_arena, nflush - ndeferred); nflush = ndeferred; } if (config_stats && !merged_stats) { /* * The flush loop didn't happen to flush to this thread's * arena, so the stats didn't get merged. Manually do so now. */ arena_bin_t *bin = &arena->bins[binind]; malloc_mutex_lock(tsd_tsdn(tsd), &bin->lock); bin->stats.nflushes++; bin->stats.nrequests += tbin->tstats.nrequests; tbin->tstats.nrequests = 0; malloc_mutex_unlock(tsd_tsdn(tsd), &bin->lock); } memmove(tbin->avail - rem, tbin->avail - tbin->ncached, rem * sizeof(void *)); tbin->ncached = rem; if ((low_water_t)tbin->ncached < tbin->low_water) { tbin->low_water = tbin->ncached; } } void tcache_bin_flush_large(tsd_t *tsd, tcache_bin_t *tbin, szind_t binind, unsigned rem, tcache_t *tcache) { bool merged_stats = false; assert(binind < nhbins); assert(rem <= tbin->ncached); arena_t *arena = tcache->arena; assert(arena != NULL); unsigned nflush = tbin->ncached - rem; VARIABLE_ARRAY(extent_t *, item_extent, nflush); /* Look up extent once per item. */ for (unsigned i = 0 ; i < nflush; i++) { item_extent[i] = iealloc(tsd_tsdn(tsd), *(tbin->avail - 1 - i)); } while (nflush > 0) { /* Lock the arena associated with the first object. */ extent_t *extent = item_extent[0]; arena_t *locked_arena = extent_arena_get(extent); UNUSED bool idump; if (config_prof) { idump = false; } malloc_mutex_lock(tsd_tsdn(tsd), &locked_arena->large_mtx); for (unsigned i = 0; i < nflush; i++) { void *ptr = *(tbin->avail - 1 - i); assert(ptr != NULL); extent = item_extent[i]; if (extent_arena_get(extent) == locked_arena) { large_dalloc_prep_junked_locked(tsd_tsdn(tsd), extent); } } if ((config_prof || config_stats) && locked_arena == arena) { if (config_prof) { idump = arena_prof_accum(tsd_tsdn(tsd), arena, tcache->prof_accumbytes); tcache->prof_accumbytes = 0; } if (config_stats) { merged_stats = true; arena_stats_large_nrequests_add(tsd_tsdn(tsd), &arena->stats, binind, tbin->tstats.nrequests); tbin->tstats.nrequests = 0; } } malloc_mutex_unlock(tsd_tsdn(tsd), &locked_arena->large_mtx); unsigned ndeferred = 0; for (unsigned i = 0; i < nflush; i++) { void *ptr = *(tbin->avail - 1 - i); extent = item_extent[i]; assert(ptr != NULL && extent != NULL); if (extent_arena_get(extent) == locked_arena) { large_dalloc_finish(tsd_tsdn(tsd), extent); } else { /* * This object was allocated via a different * arena than the one that is currently locked. * Stash the object, so that it can be handled * in a future pass. */ *(tbin->avail - 1 - ndeferred) = ptr; item_extent[ndeferred] = extent; ndeferred++; } } if (config_prof && idump) { prof_idump(tsd_tsdn(tsd)); } arena_decay_ticks(tsd_tsdn(tsd), locked_arena, nflush - ndeferred); nflush = ndeferred; } if (config_stats && !merged_stats) { /* * The flush loop didn't happen to flush to this thread's * arena, so the stats didn't get merged. Manually do so now. */ arena_stats_large_nrequests_add(tsd_tsdn(tsd), &arena->stats, binind, tbin->tstats.nrequests); tbin->tstats.nrequests = 0; } memmove(tbin->avail - rem, tbin->avail - tbin->ncached, rem * sizeof(void *)); tbin->ncached = rem; if ((low_water_t)tbin->ncached < tbin->low_water) { tbin->low_water = tbin->ncached; } } void tcache_arena_associate(tsdn_t *tsdn, tcache_t *tcache, arena_t *arena) { assert(tcache->arena == NULL); tcache->arena = arena; if (config_stats) { /* Link into list of extant tcaches. 
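
tcache_event_hard() above drives the incremental tcache GC: when a bin's low-water mark shows unused cached objects, roughly three quarters of the objects below that mark are flushed (the ncached - low_water + (low_water >> 2) expression is the retention target passed as rem), and the small-bin refill size, ncached_max >> lg_fill_div in the fill path, is halved by bumping lg_fill_div; a bin that ran dry (negative low water) doubles the refill size instead. The following self-contained sketch shows just that arithmetic; the function and variable names are illustrative.

#include <stdio.h>

/*
 * Retention target used by the GC pass: keep everything above the low-water
 * mark plus one quarter of what sat below it, i.e. flush (ceiling) 3/4 of
 * the objects that went unused since the last pass.
 */
static unsigned gc_retain(unsigned ncached, unsigned low_water) {
	return ncached - low_water + (low_water >> 2);
}

int main(void) {
	unsigned ncached = 20, low_water = 10;
	unsigned rem = gc_retain(ncached, low_water);
	printf("ncached=%u low_water=%u -> keep %u, flush %u\n",
	    ncached, low_water, rem, ncached - rem);
	/* Prints: keep 12, flush 8, which equals ceil(3/4 * 10). */

	/*
	 * Fill-count scaling: incrementing lg_fill_div halves the refill and
	 * decrementing doubles it, clamped so at least one object is filled.
	 */
	unsigned ncached_max = 8, lg_fill_div = 1;
	if ((ncached_max >> (lg_fill_div + 1)) >= 1) {
		lg_fill_div++;
	}
	printf("refill becomes %u objects\n", ncached_max >> lg_fill_div);
	return 0;
}
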
*/ malloc_mutex_lock(tsdn, &arena->tcache_ql_mtx); ql_elm_new(tcache, link); ql_tail_insert(&arena->tcache_ql, tcache, link); malloc_mutex_unlock(tsdn, &arena->tcache_ql_mtx); } } static void tcache_arena_dissociate(tsdn_t *tsdn, tcache_t *tcache) { arena_t *arena = tcache->arena; assert(arena != NULL); if (config_stats) { /* Unlink from list of extant tcaches. */ malloc_mutex_lock(tsdn, &arena->tcache_ql_mtx); if (config_debug) { bool in_ql = false; tcache_t *iter; ql_foreach(iter, &arena->tcache_ql, link) { if (iter == tcache) { in_ql = true; break; } } assert(in_ql); } ql_remove(&arena->tcache_ql, tcache, link); tcache_stats_merge(tsdn, tcache, arena); malloc_mutex_unlock(tsdn, &arena->tcache_ql_mtx); } tcache->arena = NULL; } void tcache_arena_reassociate(tsdn_t *tsdn, tcache_t *tcache, arena_t *arena) { tcache_arena_dissociate(tsdn, tcache); tcache_arena_associate(tsdn, tcache, arena); } bool tsd_tcache_enabled_data_init(tsd_t *tsd) { /* Called upon tsd initialization. */ tsd_tcache_enabled_set(tsd, opt_tcache); tsd_slow_update(tsd); if (opt_tcache) { /* Trigger tcache init. */ tsd_tcache_data_init(tsd); } return false; } /* Initialize auto tcache (embedded in TSD). */ static void tcache_init(tsd_t *tsd, tcache_t *tcache, void *avail_stack) { memset(&tcache->link, 0, sizeof(ql_elm(tcache_t))); tcache->prof_accumbytes = 0; tcache->next_gc_bin = 0; tcache->arena = NULL; ticker_init(&tcache->gc_ticker, TCACHE_GC_INCR); size_t stack_offset = 0; assert((TCACHE_NSLOTS_SMALL_MAX & 1U) == 0); memset(tcache->tbins_small, 0, sizeof(tcache_bin_t) * NBINS); memset(tcache->tbins_large, 0, sizeof(tcache_bin_t) * (nhbins - NBINS)); unsigned i = 0; for (; i < NBINS; i++) { tcache->lg_fill_div[i] = 1; stack_offset += tcache_bin_info[i].ncached_max * sizeof(void *); /* * avail points past the available space. Allocations will * access the slots toward higher addresses (for the benefit of * prefetch). */ tcache_small_bin_get(tcache, i)->avail = (void **)((uintptr_t)avail_stack + (uintptr_t)stack_offset); } for (; i < nhbins; i++) { stack_offset += tcache_bin_info[i].ncached_max * sizeof(void *); tcache_large_bin_get(tcache, i)->avail = (void **)((uintptr_t)avail_stack + (uintptr_t)stack_offset); } assert(stack_offset == stack_nelms * sizeof(void *)); } /* Initialize auto tcache (embedded in TSD). */ bool tsd_tcache_data_init(tsd_t *tsd) { tcache_t *tcache = tsd_tcachep_get_unsafe(tsd); assert(tcache_small_bin_get(tcache, 0)->avail == NULL); size_t size = stack_nelms * sizeof(void *); /* Avoid false cacheline sharing. */ size = sz_sa2u(size, CACHELINE); void *avail_array = ipallocztm(tsd_tsdn(tsd), size, CACHELINE, true, NULL, true, arena_get(TSDN_NULL, 0, true)); if (avail_array == NULL) { return true; } tcache_init(tsd, tcache, avail_array); /* * Initialization is a bit tricky here. After malloc init is done, all * threads can rely on arena_choose and associate tcache accordingly. * However, the thread that does actual malloc bootstrapping relies on * functional tsd, and it can only rely on a0. In that case, we * associate its tcache to a0 temporarily, and later on * arena_choose_hard() will re-associate properly. */ tcache->arena = NULL; arena_t *arena; if (!malloc_initialized()) { /* If in initialization, assign to a0. */ arena = arena_get(tsd_tsdn(tsd), 0, false); tcache_arena_associate(tsd_tsdn(tsd), tcache, arena); } else { arena = arena_choose(tsd, NULL); /* This may happen if thread.tcache.enabled is used. 
*/ if (tcache->arena == NULL) { tcache_arena_associate(tsd_tsdn(tsd), tcache, arena); } } assert(arena == tcache->arena); return false; } /* Created manual tcache for tcache.create mallctl. */ tcache_t * tcache_create_explicit(tsd_t *tsd) { tcache_t *tcache; size_t size, stack_offset; size = sizeof(tcache_t); /* Naturally align the pointer stacks. */ size = PTR_CEILING(size); stack_offset = size; size += stack_nelms * sizeof(void *); /* Avoid false cacheline sharing. */ size = sz_sa2u(size, CACHELINE); tcache = ipallocztm(tsd_tsdn(tsd), size, CACHELINE, true, NULL, true, arena_get(TSDN_NULL, 0, true)); if (tcache == NULL) { return NULL; } tcache_init(tsd, tcache, (void *)((uintptr_t)tcache + (uintptr_t)stack_offset)); tcache_arena_associate(tsd_tsdn(tsd), tcache, arena_ichoose(tsd, NULL)); return tcache; } static void tcache_flush_cache(tsd_t *tsd, tcache_t *tcache) { assert(tcache->arena != NULL); for (unsigned i = 0; i < NBINS; i++) { tcache_bin_t *tbin = tcache_small_bin_get(tcache, i); tcache_bin_flush_small(tsd, tcache, tbin, i, 0); if (config_stats) { assert(tbin->tstats.nrequests == 0); } } for (unsigned i = NBINS; i < nhbins; i++) { tcache_bin_t *tbin = tcache_large_bin_get(tcache, i); tcache_bin_flush_large(tsd, tbin, i, 0, tcache); if (config_stats) { assert(tbin->tstats.nrequests == 0); } } if (config_prof && tcache->prof_accumbytes > 0 && arena_prof_accum(tsd_tsdn(tsd), tcache->arena, tcache->prof_accumbytes)) { prof_idump(tsd_tsdn(tsd)); } } void -tcache_flush(void) { - tsd_t *tsd = tsd_fetch(); +tcache_flush(tsd_t *tsd) { assert(tcache_available(tsd)); tcache_flush_cache(tsd, tsd_tcachep_get(tsd)); } static void tcache_destroy(tsd_t *tsd, tcache_t *tcache, bool tsd_tcache) { tcache_flush_cache(tsd, tcache); tcache_arena_dissociate(tsd_tsdn(tsd), tcache); if (tsd_tcache) { /* Release the avail array for the TSD embedded auto tcache. */ void *avail_array = (void *)((uintptr_t)tcache_small_bin_get(tcache, 0)->avail - (uintptr_t)tcache_bin_info[0].ncached_max * sizeof(void *)); idalloctm(tsd_tsdn(tsd), avail_array, NULL, NULL, true, true); } else { /* Release both the tcache struct and avail array. */ idalloctm(tsd_tsdn(tsd), tcache, NULL, NULL, true, true); } } /* For auto tcache (embedded in TSD) only. */ void tcache_cleanup(tsd_t *tsd) { tcache_t *tcache = tsd_tcachep_get(tsd); if (!tcache_available(tsd)) { assert(tsd_tcache_enabled_get(tsd) == false); if (config_debug) { assert(tcache_small_bin_get(tcache, 0)->avail == NULL); } return; } assert(tsd_tcache_enabled_get(tsd)); assert(tcache_small_bin_get(tcache, 0)->avail != NULL); tcache_destroy(tsd, tcache, true); if (config_debug) { tcache_small_bin_get(tcache, 0)->avail = NULL; } } void tcache_stats_merge(tsdn_t *tsdn, tcache_t *tcache, arena_t *arena) { unsigned i; cassert(config_stats); /* Merge and reset tcache stats. 
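
tcache_init() and tcache_create_explicit() above lay every bin's pointer stack out in one contiguous allocation; each bin's avail pointer is set one element past its own region, because cached pointers are accessed at decreasing addresses (*(avail - 1 - i)). For an explicit tcache the stacks live directly behind the tcache_t struct, with the whole block rounded to a cacheline to avoid false sharing. Below is a compact sketch of that offset computation; the capacity table, the header size, and the constants are illustrative stand-ins, not jemalloc's own.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define NBINS_DEMO 3
#define CACHELINE  64U

/* Illustrative per-bin capacities (an analogue of tcache_bin_info). */
static const unsigned ncached_max[NBINS_DEMO] = {8, 16, 32};

int main(void) {
	/* The pointer stacks follow a pointer-aligned header in one block. */
	size_t header = (sizeof(void *) * 10 + sizeof(void *) - 1)
	    & ~(sizeof(void *) - 1);	/* stand-in for PTR_CEILING(sizeof(tcache_t)) */
	size_t stack_nelms = 0;
	for (unsigned i = 0; i < NBINS_DEMO; i++) {
		stack_nelms += ncached_max[i];
	}
	size_t size = header + stack_nelms * sizeof(void *);
	/* Round the whole allocation up to a cacheline, as sz_sa2u() would. */
	size = (size + CACHELINE - 1) & ~((size_t)CACHELINE - 1);

	void *base = aligned_alloc(CACHELINE, size);
	if (base == NULL) {
		return 1;
	}

	/* Each bin's avail points just past its region; accesses go downward. */
	uintptr_t stack_base = (uintptr_t)base + header;
	size_t offset = 0;
	for (unsigned i = 0; i < NBINS_DEMO; i++) {
		offset += ncached_max[i] * sizeof(void *);
		void **avail = (void **)(stack_base + offset);
		printf("bin %u: avail at +%zu bytes, %u slots below it\n",
		    i, (size_t)((uintptr_t)avail - (uintptr_t)base),
		    ncached_max[i]);
	}
	free(base);
	return 0;
}

This single-block layout is also why tcache_destroy() can recover the avail array of a TSD-embedded tcache by subtracting bin 0's capacity from its avail pointer.
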
*/ for (i = 0; i < NBINS; i++) { arena_bin_t *bin = &arena->bins[i]; tcache_bin_t *tbin = tcache_small_bin_get(tcache, i); malloc_mutex_lock(tsdn, &bin->lock); bin->stats.nrequests += tbin->tstats.nrequests; malloc_mutex_unlock(tsdn, &bin->lock); tbin->tstats.nrequests = 0; } for (; i < nhbins; i++) { tcache_bin_t *tbin = tcache_large_bin_get(tcache, i); arena_stats_large_nrequests_add(tsdn, &arena->stats, i, tbin->tstats.nrequests); tbin->tstats.nrequests = 0; } } static bool tcaches_create_prep(tsd_t *tsd) { bool err; malloc_mutex_lock(tsd_tsdn(tsd), &tcaches_mtx); if (tcaches == NULL) { tcaches = base_alloc(tsd_tsdn(tsd), b0get(), sizeof(tcache_t *) * (MALLOCX_TCACHE_MAX+1), CACHELINE); if (tcaches == NULL) { err = true; goto label_return; } } if (tcaches_avail == NULL && tcaches_past > MALLOCX_TCACHE_MAX) { err = true; goto label_return; } err = false; label_return: malloc_mutex_unlock(tsd_tsdn(tsd), &tcaches_mtx); return err; } bool tcaches_create(tsd_t *tsd, unsigned *r_ind) { witness_assert_depth(tsdn_witness_tsdp_get(tsd_tsdn(tsd)), 0); bool err; if (tcaches_create_prep(tsd)) { err = true; goto label_return; } tcache_t *tcache = tcache_create_explicit(tsd); if (tcache == NULL) { err = true; goto label_return; } tcaches_t *elm; malloc_mutex_lock(tsd_tsdn(tsd), &tcaches_mtx); if (tcaches_avail != NULL) { elm = tcaches_avail; tcaches_avail = tcaches_avail->next; elm->tcache = tcache; *r_ind = (unsigned)(elm - tcaches); } else { elm = &tcaches[tcaches_past]; elm->tcache = tcache; *r_ind = tcaches_past; tcaches_past++; } malloc_mutex_unlock(tsd_tsdn(tsd), &tcaches_mtx); err = false; label_return: witness_assert_depth(tsdn_witness_tsdp_get(tsd_tsdn(tsd)), 0); return err; } static tcache_t * tcaches_elm_remove(tsd_t *tsd, tcaches_t *elm) { malloc_mutex_assert_owner(tsd_tsdn(tsd), &tcaches_mtx); if (elm->tcache == NULL) { return NULL; } tcache_t *tcache = elm->tcache; elm->tcache = NULL; return tcache; } void tcaches_flush(tsd_t *tsd, unsigned ind) { malloc_mutex_lock(tsd_tsdn(tsd), &tcaches_mtx); tcache_t *tcache = tcaches_elm_remove(tsd, &tcaches[ind]); malloc_mutex_unlock(tsd_tsdn(tsd), &tcaches_mtx); if (tcache != NULL) { tcache_destroy(tsd, tcache, false); } } void tcaches_destroy(tsd_t *tsd, unsigned ind) { malloc_mutex_lock(tsd_tsdn(tsd), &tcaches_mtx); tcaches_t *elm = &tcaches[ind]; tcache_t *tcache = tcaches_elm_remove(tsd, elm); elm->next = tcaches_avail; tcaches_avail = elm; malloc_mutex_unlock(tsd_tsdn(tsd), &tcaches_mtx); if (tcache != NULL) { tcache_destroy(tsd, tcache, false); } } bool tcache_boot(tsdn_t *tsdn) { /* If necessary, clamp opt_lg_tcache_max. */ if (opt_lg_tcache_max < 0 || (ZU(1) << opt_lg_tcache_max) < SMALL_MAXCLASS) { tcache_maxclass = SMALL_MAXCLASS; } else { tcache_maxclass = (ZU(1) << opt_lg_tcache_max); } if (malloc_mutex_init(&tcaches_mtx, "tcaches", WITNESS_RANK_TCACHES, malloc_mutex_rank_exclusive)) { return true; } nhbins = sz_size2index(tcache_maxclass) + 1; /* Initialize tcache_bin_info. 
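
tcaches_create() and tcaches_destroy() above manage the mallctl-visible tcache indices with a fixed array plus an intrusive free list: a destroyed slot has its tcache pointer cleared and is pushed onto tcaches_avail, and creation prefers recycling such a slot before extending tcaches_past. The following is a minimal sketch of the same slot-recycling scheme; the types, the limit, and the payload are illustrative, not jemalloc's.

#include <stdbool.h>
#include <stdio.h>

#define NSLOTS 4

/* Each slot either owns a payload or threads into the free list via next. */
typedef struct slot_s slot_t;
struct slot_s {
	slot_t *next;
	int    *payload;
};

static slot_t slots[NSLOTS];
static slot_t *avail;	/* head of recycled slots */
static unsigned past;	/* first never-used index */

static bool slot_create(int *payload, unsigned *r_ind) {
	slot_t *elm;
	if (avail != NULL) {		/* recycle a destroyed slot */
		elm = avail;
		avail = avail->next;
		*r_ind = (unsigned)(elm - slots);
	} else if (past < NSLOTS) {	/* otherwise take a fresh one */
		elm = &slots[past];
		*r_ind = past++;
	} else {
		return true;		/* table exhausted */
	}
	elm->payload = payload;
	return false;
}

static void slot_destroy(unsigned ind) {
	slots[ind].payload = NULL;	/* drop ownership */
	slots[ind].next = avail;	/* push onto the free list */
	avail = &slots[ind];
}

int main(void) {
	int a = 1, b = 2, c = 3;
	unsigned ia, ib, ic;
	(void)slot_create(&a, &ia);
	(void)slot_create(&b, &ib);
	slot_destroy(ia);
	(void)slot_create(&c, &ic);	/* reuses index ia */
	printf("ia=%u ib=%u ic=%u (ic == ia: %s)\n", ia, ib, ic,
	    ic == ia ? "yes" : "no");
	return 0;
}
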
*/ tcache_bin_info = (tcache_bin_info_t *)base_alloc(tsdn, b0get(), nhbins * sizeof(tcache_bin_info_t), CACHELINE); if (tcache_bin_info == NULL) { return true; } stack_nelms = 0; unsigned i; for (i = 0; i < NBINS; i++) { if ((arena_bin_info[i].nregs << 1) <= TCACHE_NSLOTS_SMALL_MIN) { tcache_bin_info[i].ncached_max = TCACHE_NSLOTS_SMALL_MIN; } else if ((arena_bin_info[i].nregs << 1) <= TCACHE_NSLOTS_SMALL_MAX) { tcache_bin_info[i].ncached_max = (arena_bin_info[i].nregs << 1); } else { tcache_bin_info[i].ncached_max = TCACHE_NSLOTS_SMALL_MAX; } stack_nelms += tcache_bin_info[i].ncached_max; } for (; i < nhbins; i++) { tcache_bin_info[i].ncached_max = TCACHE_NSLOTS_LARGE; stack_nelms += tcache_bin_info[i].ncached_max; } return false; } void tcache_prefork(tsdn_t *tsdn) { if (!config_prof && opt_tcache) { malloc_mutex_prefork(tsdn, &tcaches_mtx); } } void tcache_postfork_parent(tsdn_t *tsdn) { if (!config_prof && opt_tcache) { malloc_mutex_postfork_parent(tsdn, &tcaches_mtx); } } void tcache_postfork_child(tsdn_t *tsdn) { if (!config_prof && opt_tcache) { malloc_mutex_postfork_child(tsdn, &tcaches_mtx); } } Index: head/contrib/jemalloc/src/tsd.c =================================================================== --- head/contrib/jemalloc/src/tsd.c (revision 320622) +++ head/contrib/jemalloc/src/tsd.c (revision 320623) @@ -1,327 +1,341 @@ #define JEMALLOC_TSD_C_ #include "jemalloc/internal/jemalloc_preamble.h" #include "jemalloc/internal/jemalloc_internal_includes.h" #include "jemalloc/internal/assert.h" #include "jemalloc/internal/mutex.h" #include "jemalloc/internal/rtree.h" /******************************************************************************/ /* Data. */ static unsigned ncleanups; static malloc_tsd_cleanup_t cleanups[MALLOC_TSD_CLEANUPS_MAX]; #ifdef JEMALLOC_MALLOC_THREAD_CLEANUP __thread tsd_t JEMALLOC_TLS_MODEL tsd_tls = TSD_INITIALIZER; __thread bool JEMALLOC_TLS_MODEL tsd_initialized = false; bool tsd_booted = false; #elif (defined(JEMALLOC_TLS)) __thread tsd_t JEMALLOC_TLS_MODEL tsd_tls = TSD_INITIALIZER; pthread_key_t tsd_tsd; bool tsd_booted = false; #elif (defined(_WIN32)) DWORD tsd_tsd; tsd_wrapper_t tsd_boot_wrapper = {false, TSD_INITIALIZER}; bool tsd_booted = false; #else /* * This contains a mutex, but it's pretty convenient to allow the mutex code to * have a dependency on tsd. So we define the struct here, and only refer to it * by pointer in the header. */ struct tsd_init_head_s { ql_head(tsd_init_block_t) blocks; malloc_mutex_t lock; }; pthread_key_t tsd_tsd; tsd_init_head_t tsd_init_head = { ql_head_initializer(blocks), MALLOC_MUTEX_INITIALIZER }; tsd_wrapper_t tsd_boot_wrapper = { false, TSD_INITIALIZER }; bool tsd_booted = false; #endif /******************************************************************************/ void tsd_slow_update(tsd_t *tsd) { if (tsd_nominal(tsd)) { if (malloc_slow || !tsd_tcache_enabled_get(tsd) || tsd_reentrancy_level_get(tsd) > 0) { tsd->state = tsd_state_nominal_slow; } else { tsd->state = tsd_state_nominal; } } } static bool tsd_data_init(tsd_t *tsd) { /* * We initialize the rtree context first (before the tcache), since the * tcache initialization depends on it. 
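
In tcache_boot() above, each small bin's cache capacity is derived from the bin's region count: twice nregs, clamped into [TCACHE_NSLOTS_SMALL_MIN, TCACHE_NSLOTS_SMALL_MAX], while large bins get a fixed TCACHE_NSLOTS_LARGE. A short sketch of that clamp follows; the MIN/MAX values used here are illustrative placeholders, not necessarily the configured constants.

#include <stdio.h>

/* Illustrative values; the real ones come from jemalloc's configuration. */
#define NSLOTS_SMALL_MIN 20
#define NSLOTS_SMALL_MAX 200

/* Cache capacity for a small bin: 2*nregs clamped into [MIN, MAX]. */
static unsigned small_bin_ncached_max(unsigned nregs) {
	unsigned twice = nregs << 1;
	if (twice <= NSLOTS_SMALL_MIN) {
		return NSLOTS_SMALL_MIN;
	}
	if (twice <= NSLOTS_SMALL_MAX) {
		return twice;
	}
	return NSLOTS_SMALL_MAX;
}

int main(void) {
	unsigned samples[] = {4, 64, 512};
	for (unsigned i = 0; i < 3; i++) {
		printf("nregs=%u -> ncached_max=%u\n", samples[i],
		    small_bin_ncached_max(samples[i]));
	}
	return 0;
}

The clamp keeps bins with very few regions per slab from caching uselessly small batches, and keeps bins with many regions from letting a single thread hoard an entire slab's worth of objects.
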
*/ rtree_ctx_data_init(tsd_rtree_ctxp_get_unsafe(tsd)); return tsd_tcache_enabled_data_init(tsd); } static void assert_tsd_data_cleanup_done(tsd_t *tsd) { assert(!tsd_nominal(tsd)); assert(*tsd_arenap_get_unsafe(tsd) == NULL); assert(*tsd_iarenap_get_unsafe(tsd) == NULL); assert(*tsd_arenas_tdata_bypassp_get_unsafe(tsd) == true); assert(*tsd_arenas_tdatap_get_unsafe(tsd) == NULL); assert(*tsd_tcache_enabledp_get_unsafe(tsd) == false); assert(*tsd_prof_tdatap_get_unsafe(tsd) == NULL); } static bool tsd_data_init_nocleanup(tsd_t *tsd) { - assert(tsd->state == tsd_state_reincarnated); + assert(tsd->state == tsd_state_reincarnated || + tsd->state == tsd_state_minimal_initialized); /* * During reincarnation, there is no guarantee that the cleanup function * will be called (deallocation may happen after all tsd destructors). * We set up tsd in a way that no cleanup is needed. */ rtree_ctx_data_init(tsd_rtree_ctxp_get_unsafe(tsd)); *tsd_arenas_tdata_bypassp_get(tsd) = true; *tsd_tcache_enabledp_get_unsafe(tsd) = false; *tsd_reentrancy_levelp_get(tsd) = 1; assert_tsd_data_cleanup_done(tsd); return false; } tsd_t * -tsd_fetch_slow(tsd_t *tsd, bool internal) { - if (internal) { - /* For internal background threads use only. */ - assert(tsd->state == tsd_state_uninitialized); - tsd->state = tsd_state_reincarnated; - tsd_set(tsd); - tsd_data_init_nocleanup(tsd); - return tsd; - } +tsd_fetch_slow(tsd_t *tsd, bool minimal) { + assert(!tsd_fast(tsd)); if (tsd->state == tsd_state_nominal_slow) { /* On slow path but no work needed. */ assert(malloc_slow || !tsd_tcache_enabled_get(tsd) || tsd_reentrancy_level_get(tsd) > 0 || *tsd_arenas_tdata_bypassp_get(tsd)); } else if (tsd->state == tsd_state_uninitialized) { - tsd->state = tsd_state_nominal; - tsd_slow_update(tsd); - /* Trigger cleanup handler registration. */ - tsd_set(tsd); - tsd_data_init(tsd); + if (!minimal) { + tsd->state = tsd_state_nominal; + tsd_slow_update(tsd); + /* Trigger cleanup handler registration. */ + tsd_set(tsd); + tsd_data_init(tsd); + } else { + tsd->state = tsd_state_minimal_initialized; + tsd_set(tsd); + tsd_data_init_nocleanup(tsd); + } + } else if (tsd->state == tsd_state_minimal_initialized) { + if (!minimal) { + /* Switch to fully initialized. 
*/ + tsd->state = tsd_state_nominal; + assert(*tsd_reentrancy_levelp_get(tsd) >= 1); + (*tsd_reentrancy_levelp_get(tsd))--; + tsd_slow_update(tsd); + tsd_data_init(tsd); + } else { + assert_tsd_data_cleanup_done(tsd); + } } else if (tsd->state == tsd_state_purgatory) { tsd->state = tsd_state_reincarnated; tsd_set(tsd); tsd_data_init_nocleanup(tsd); } else { assert(tsd->state == tsd_state_reincarnated); } return tsd; } void * malloc_tsd_malloc(size_t size) { return a0malloc(CACHELINE_CEILING(size)); } void malloc_tsd_dalloc(void *wrapper) { a0dalloc(wrapper); } #if defined(JEMALLOC_MALLOC_THREAD_CLEANUP) || defined(_WIN32) #ifndef _WIN32 JEMALLOC_EXPORT #endif void _malloc_thread_cleanup(void) { bool pending[MALLOC_TSD_CLEANUPS_MAX], again; unsigned i; for (i = 0; i < ncleanups; i++) { pending[i] = true; } do { again = false; for (i = 0; i < ncleanups; i++) { if (pending[i]) { pending[i] = cleanups[i](); if (pending[i]) { again = true; } } } } while (again); } #endif void malloc_tsd_cleanup_register(bool (*f)(void)) { assert(ncleanups < MALLOC_TSD_CLEANUPS_MAX); cleanups[ncleanups] = f; ncleanups++; } static void tsd_do_data_cleanup(tsd_t *tsd) { prof_tdata_cleanup(tsd); iarena_cleanup(tsd); arena_cleanup(tsd); arenas_tdata_cleanup(tsd); tcache_cleanup(tsd); witnesses_cleanup(tsd_witness_tsdp_get_unsafe(tsd)); } void tsd_cleanup(void *arg) { tsd_t *tsd = (tsd_t *)arg; switch (tsd->state) { case tsd_state_uninitialized: /* Do nothing. */ break; + case tsd_state_minimal_initialized: + /* This implies the thread only did free() in its life time. */ + /* Fall through. */ case tsd_state_reincarnated: /* * Reincarnated means another destructor deallocated memory * after the destructor was called. Cleanup isn't required but * is still called for testing and completeness. */ assert_tsd_data_cleanup_done(tsd); /* Fall through. */ case tsd_state_nominal: case tsd_state_nominal_slow: tsd_do_data_cleanup(tsd); tsd->state = tsd_state_purgatory; tsd_set(tsd); break; case tsd_state_purgatory: /* * The previous time this destructor was called, we set the * state to tsd_state_purgatory so that other destructors * wouldn't cause re-creation of the tsd. This time, do * nothing, and do not request another callback. */ break; default: not_reached(); } #ifdef JEMALLOC_JET test_callback_t test_callback = *tsd_test_callbackp_get_unsafe(tsd); int *data = tsd_test_datap_get_unsafe(tsd); if (test_callback != NULL) { test_callback(data); } #endif } tsd_t * malloc_tsd_boot0(void) { tsd_t *tsd; ncleanups = 0; if (tsd_boot0()) { return NULL; } tsd = tsd_fetch(); *tsd_arenas_tdata_bypassp_get(tsd) = true; return tsd; } void malloc_tsd_boot1(void) { tsd_boot1(); tsd_t *tsd = tsd_fetch(); /* malloc_slow has been set properly. Update tsd_slow. */ tsd_slow_update(tsd); *tsd_arenas_tdata_bypassp_get(tsd) = false; } #ifdef _WIN32 static BOOL WINAPI _tls_callback(HINSTANCE hinstDLL, DWORD fdwReason, LPVOID lpvReserved) { switch (fdwReason) { #ifdef JEMALLOC_LAZY_LOCK case DLL_THREAD_ATTACH: isthreaded = true; break; #endif case DLL_THREAD_DETACH: _malloc_thread_cleanup(); break; default: break; } return true; } /* * We need to be able to say "read" here (in the "pragma section"), but have * hooked "read". We won't read for the rest of the file, so we can get away * with unhooking. 
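
The tsd changes above introduce tsd_state_minimal_initialized: a thread whose only interaction with the allocator is a free() call after its TLS destructors have run gets a cleanup-free, reentrancy-guarded tsd instead of crashing or leaking, and it is upgraded to the fully initialized nominal state only if it later allocates. The following is a compact sketch of just the state transitions handled in tsd_fetch_slow(); the enum and function are simplified stand-ins, not the real tsd API.

#include <stdio.h>

/* Simplified stand-ins for the tsd states involved in the slow fetch path. */
typedef enum {
	STATE_UNINITIALIZED,
	STATE_MINIMAL_INITIALIZED,	/* free()-only thread; no cleanup registered */
	STATE_NOMINAL,			/* fully initialized */
	STATE_PURGATORY,
	STATE_REINCARNATED
} tsd_state_demo_t;

/* minimal != 0 means "do not register TLS cleanup or do full data init". */
static tsd_state_demo_t fetch_slow(tsd_state_demo_t state, int minimal) {
	switch (state) {
	case STATE_UNINITIALIZED:
		return minimal ? STATE_MINIMAL_INITIALIZED : STATE_NOMINAL;
	case STATE_MINIMAL_INITIALIZED:
		/* Upgrade only when full initialization is requested. */
		return minimal ? STATE_MINIMAL_INITIALIZED : STATE_NOMINAL;
	case STATE_PURGATORY:
		return STATE_REINCARNATED;
	default:
		return state;
	}
}

int main(void) {
	/* free()-only access: stays minimal, never registers a destructor. */
	tsd_state_demo_t s = fetch_slow(STATE_UNINITIALIZED, 1);
	printf("after free-only access: %d (minimal)\n", s);
	/* A later allocation upgrades the thread to nominal. */
	s = fetch_slow(s, 0);
	printf("after allocation: %d (nominal)\n", s);
	return 0;
}

This is the mechanism behind the 5.0.1 note about threads whose only allocation activity is free() after TLS destructors have executed; tsd_cleanup() treats the minimal state like the reincarnated one, so no further cleanup is required.
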
*/ #ifdef read # undef read #endif #ifdef _MSC_VER # ifdef _M_IX86 # pragma comment(linker, "/INCLUDE:__tls_used") # pragma comment(linker, "/INCLUDE:_tls_callback") # else # pragma comment(linker, "/INCLUDE:_tls_used") # pragma comment(linker, "/INCLUDE:tls_callback") # endif # pragma section(".CRT$XLY",long,read) #endif JEMALLOC_SECTION(".CRT$XLY") JEMALLOC_ATTR(used) BOOL (WINAPI *const tls_callback)(HINSTANCE hinstDLL, DWORD fdwReason, LPVOID lpvReserved) = _tls_callback; #endif #if (!defined(JEMALLOC_MALLOC_THREAD_CLEANUP) && !defined(JEMALLOC_TLS) && \ !defined(_WIN32)) void * tsd_init_check_recursion(tsd_init_head_t *head, tsd_init_block_t *block) { pthread_t self = pthread_self(); tsd_init_block_t *iter; /* Check whether this thread has already inserted into the list. */ malloc_mutex_lock(TSDN_NULL, &head->lock); ql_foreach(iter, &head->blocks, link) { if (iter->thread == self) { malloc_mutex_unlock(TSDN_NULL, &head->lock); return iter->data; } } /* Insert block into list. */ ql_elm_new(block, link); block->thread = self; ql_tail_insert(&head->blocks, block, link); malloc_mutex_unlock(TSDN_NULL, &head->lock); return NULL; } void tsd_init_finish(tsd_init_head_t *head, tsd_init_block_t *block) { malloc_mutex_lock(TSDN_NULL, &head->lock); ql_remove(&head->blocks, block, link); malloc_mutex_unlock(TSDN_NULL, &head->lock); } #endif