Index: head/contrib/jemalloc/ChangeLog
===================================================================
--- head/contrib/jemalloc/ChangeLog	(revision 299586)
+++ head/contrib/jemalloc/ChangeLog	(revision 299587)
@@ -1,873 +1,917 @@
Following are change highlights associated with official releases. Important
bug fixes are all mentioned, but some internal enhancements are omitted here
for brevity. Much more detail can be found in the git revision history:
https://github.com/jemalloc/jemalloc
+* 4.2.0 (May 12, 2016)
+
+  New features:
+  - Add the arena.<i>.reset mallctl, which makes it possible to discard all of
+    an arena's allocations in a single operation. (@jasone)
+  - Add the stats.retained and stats.arenas.<i>.retained statistics. (@jasone)
+  - Add the --with-version configure option. (@jasone)
+  - Support --with-lg-page values larger than actual page size. (@jasone)
+
+  Optimizations:
+  - Use pairing heaps rather than red-black trees for various hot data
+    structures. (@djwatson, @jasone)
+  - Streamline fast paths of rtree operations. (@jasone)
+  - Optimize the fast paths of calloc() and [m,d,sd]allocx(). (@jasone)
+  - Decommit unused virtual memory if the OS does not overcommit. (@jasone)
+  - Specify MAP_NORESERVE on Linux if [heuristic] overcommit is active, in
+    order to avoid unfortunate interactions during fork(2). (@jasone)
+
+  Bug fixes:
+  - Fix chunk accounting related to triggering gdump profiles. (@jasone)
+  - Link against librt for clock_gettime(2) if glibc < 2.17. (@jasone)
+  - Scale leak report summary according to sampling probability. (@jasone)
+
+* 4.1.1 (May 3, 2016)
+
+  This bugfix release resolves a variety of mostly minor issues, though the
+  bitmap fix is critical for 64-bit Windows.
+
+  Bug fixes:
+  - Fix the linear scan version of bitmap_sfu() to shift by the proper amount
+    even when sizeof(long) is not the same as sizeof(void *), as on 64-bit
+    Windows. (@jasone)
+  - Fix hashing functions to avoid unaligned memory accesses (and resulting
+    crashes). This is relevant at least to some ARM-based platforms.
+    (@rkmisra)
+  - Fix fork()-related lock rank ordering reversals. These reversals were
+    unlikely to cause deadlocks in practice except when heap profiling was
+    enabled and active. (@jasone)
+  - Fix various chunk leaks in OOM code paths. (@jasone)
+  - Fix malloc_stats_print() to print opt.narenas correctly. (@jasone)
+  - Fix MSVC-specific build/test issues. (@rustyx, @yuslepukhin)
+  - Fix a variety of test failures that were due to test fragility rather than
+    core bugs. (@jasone)
+
* 4.1.0 (February 28, 2016)

  This release is primarily about optimizations, but it also incorporates a lot
  of portability-motivated refactoring and enhancements. Many people worked on
  this release, to an extent that even with the omission here of minor changes
  (see git revision history), and of the people who reported and diagnosed
  issues, so much of the work was contributed that starting with this release,
  changes are annotated with author credits to help reflect the collaborative
  effort involved.

  New features:
  - Implement decay-based unused dirty page purging, a major optimization with
    mallctl API impact. This is an alternative to the existing ratio-based
    unused dirty page purging, and is intended to eventually become the sole
    purging mechanism.
    New mallctls:
    + opt.purge
    + opt.decay_time
    + arena.<i>.decay
    + arena.<i>.decay_time
    + arenas.decay_time
    + stats.arenas.<i>.decay_time
    (@jasone, @cevans87)
  - Add --with-malloc-conf, which makes it possible to embed a default options
    string during configuration. This was motivated by the desire to specify
    --with-malloc-conf=purge:decay, since the default must remain purge:ratio
    until the 5.0.0 release. (@jasone)
  - Add MS Visual Studio 2015 support. (@rustyx, @yuslepukhin)
  - Make *allocx() size class overflow behavior defined. The maximum size
    class is now less than PTRDIFF_MAX to protect applications against
    numerical overflow, and all allocation functions are guaranteed to indicate
    errors rather than potentially crashing if the request size exceeds the
    maximum size class. (@jasone)
  - jeprof:
    + Add raw heap profile support. (@jasone)
    + Add --retain and --exclude for backtrace symbol filtering. (@jasone)

  Optimizations:
  - Optimize the fast path to combine various bootstrapping and configuration
    checks and execute more streamlined code in the common case. (@interwq)
  - Use linear scan for small bitmaps (used for small object tracking). In
    addition to speeding up bitmap operations on 64-bit systems, this reduces
    allocator metadata overhead by approximately 0.2%. (@djwatson)
  - Separate arena_avail trees, which substantially speeds up run tree
    operations. (@djwatson)
  - Use memoization (boot-time-computed table) for run quantization. Separate
    arena_avail trees reduced the importance of this optimization. (@jasone)
  - Attempt mmap-based in-place huge reallocation. This can dramatically speed
    up incremental huge reallocation. (@jasone)

  Incompatible changes:
  - Make opt.narenas unsigned rather than size_t. (@jasone)

  Bug fixes:
  - Fix stats.cactive accounting regression. (@rustyx, @jasone)
  - Handle unaligned keys in hash(). This caused problems for some ARM systems.
-    (@jasone, Christopher Ferris)
+    (@jasone, @cferris1000)
  - Refactor arenas array. In addition to fixing a fork-related deadlock, this
    makes arena lookups faster and simpler. (@jasone)
  - Move retained memory allocation out of the default chunk allocation
    function, to a location that gets executed even if the application installs
    a custom chunk allocation function. This resolves a virtual memory leak.
    (@buchgr)
-  - Fix a potential tsd cleanup leak. (Christopher Ferris, @jasone)
+  - Fix a potential tsd cleanup leak. (@cferris1000, @jasone)
  - Fix run quantization. In practice this bug had no impact unless
    applications requested memory with alignment exceeding one page.
    (@jasone, @djwatson)
  - Fix LinuxThreads-specific bootstrapping deadlock. (Cosmin Paraschiv)
  - jeprof:
    + Don't discard curl options if timeout is not defined. (@djwatson)
    + Detect failed profile fetches. (@djwatson)
  - Fix stats.arenas.<i>.{dss,lg_dirty_mult,decay_time,pactive,pdirty} for
    --disable-stats case. (@jasone)

* 4.0.4 (October 24, 2015)

  This bugfix release fixes another xallocx() regression. No other regressions
  have come to light in over a month, so this is likely a good starting point
  for people who prefer to wait for "dot one" releases with all the major
  issues shaken out.

  Bug fixes:
  - Fix xallocx(..., MALLOCX_ZERO) to zero the last full trailing page of large
    allocations that have been randomly assigned an offset of 0 when
    --enable-cache-oblivious configure option is enabled.

* 4.0.3 (September 24, 2015)

  This bugfix release continues the trend of xallocx() and heap profiling
  fixes.
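[Editorial sketch, not part of the upstream ChangeLog: the decay and arena-reset mallctls introduced in the 4.1.0 and 4.2.0 entries above are driven through the ordinary mallctl() interface. The sketch below assumes FreeBSD's <malloc_np.h> (standalone jemalloc would include <jemalloc/jemalloc.h>); the ssize_t type of the decay_time knobs and the use of a freshly created arena follow the documented API but are not code from this commit.]

#include <stdio.h>
#include <malloc_np.h>

int
main(void)
{
	unsigned arena_ind;
	size_t sz = sizeof(arena_ind);
	ssize_t decay_time = 5;	/* Purge unused dirty pages after ~5 seconds. */
	char name[64];
	void *p;

	/* Create a manually managed arena ("arenas.extend", added in 3.1.0). */
	if (mallctl("arenas.extend", &arena_ind, &sz, NULL, 0) != 0)
		return (1);

	/* Opt the arena into decay-based purging (effective when opt.purge is "decay"). */
	snprintf(name, sizeof(name), "arena.%u.decay_time", arena_ind);
	(void)mallctl(name, NULL, NULL, &decay_time, sizeof(decay_time));

	/* Allocate from that arena only. */
	p = mallocx(4096, MALLOCX_ARENA(arena_ind));
	if (p == NULL)
		return (1);

	/*
	 * Discard everything the arena owns in one operation (4.2.0's
	 * arena.<i>.reset).  Every pointer into the arena, including p, is
	 * invalid afterwards, so this is only safe for arenas whose
	 * allocations the application fully controls.
	 */
	snprintf(name, sizeof(name), "arena.%u.reset", arena_ind);
	if (mallctl(name, NULL, NULL, NULL, 0) != 0)
		fprintf(stderr, "arena.%u.reset not supported\n", arena_ind);

	return (0);
}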
Bug fixes: - Fix xallocx(..., MALLOCX_ZERO) to zero all trailing bytes of large allocations when --enable-cache-oblivious configure option is enabled. - Fix xallocx(..., MALLOCX_ZERO) to zero trailing bytes of huge allocations when resizing from/to a size class that is not a multiple of the chunk size. - Fix prof_tctx_dump_iter() to filter out nodes that were created after heap profile dumping started. - Work around a potentially bad thread-specific data initialization interaction with NPTL (glibc's pthreads implementation). * 4.0.2 (September 21, 2015) This bugfix release addresses a few bugs specific to heap profiling. Bug fixes: - Fix ixallocx_prof_sample() to never modify nor create sampled small allocations. xallocx() is in general incapable of moving small allocations, so this fix removes buggy code without loss of generality. - Fix irallocx_prof_sample() to always allocate large regions, even when alignment is non-zero. - Fix prof_alloc_rollback() to read tdata from thread-specific data rather than dereferencing a potentially invalid tctx. * 4.0.1 (September 15, 2015) This is a bugfix release that is somewhat high risk due to the amount of refactoring required to address deep xallocx() problems. As a side effect of these fixes, xallocx() now tries harder to partially fulfill requests for optional extra space. Note that a couple of minor heap profiling optimizations are included, but these are better thought of as performance fixes that were integral to disovering most of the other bugs. Optimizations: - Avoid a chunk metadata read in arena_prof_tctx_set(), since it is in the fast path when heap profiling is enabled. Additionally, split a special case out into arena_prof_tctx_reset(), which also avoids chunk metadata reads. - Optimize irallocx_prof() to optimistically update the sampler state. The prior implementation appears to have been a holdover from when rallocx()/xallocx() functionality was combined as rallocm(). Bug fixes: - Fix TLS configuration such that it is enabled by default for platforms on which it works correctly. - Fix arenas_cache_cleanup() and arena_get_hard() to handle allocation/deallocation within the application's thread-specific data cleanup functions even after arenas_cache is torn down. - Fix xallocx() bugs related to size+extra exceeding HUGE_MAXCLASS. - Fix chunk purge hook calls for in-place huge shrinking reallocation to specify the old chunk size rather than the new chunk size. This bug caused no correctness issues for the default chunk purge function, but was visible to custom functions set via the "arena..chunk_hooks" mallctl. - Fix heap profiling bugs: + Fix heap profiling to distinguish among otherwise identical sample sites with interposed resets (triggered via the "prof.reset" mallctl). This bug could cause data structure corruption that would most likely result in a segfault. + Fix irealloc_prof() to prof_alloc_rollback() on OOM. + Make one call to prof_active_get_unlocked() per allocation event, and use the result throughout the relevant functions that handle an allocation event. Also add a missing check in prof_realloc(). These fixes protect allocation events against concurrent prof_active changes. + Fix ixallocx_prof() to pass usize_max and zero to ixallocx_prof_sample() in the correct order. + Fix prof_realloc() to call prof_free_sampled_object() after calling prof_malloc_sample_object(). Prior to this fix, if tctx and old_tctx were the same, the tctx could have been prematurely destroyed. 
- Fix portability bugs: + Don't bitshift by negative amounts when encoding/decoding run sizes in chunk header maps. This affected systems with page sizes greater than 8 KiB. + Rename index_t to szind_t to avoid an existing type on Solaris. + Add JEMALLOC_CXX_THROW to the memalign() function prototype, in order to match glibc and avoid compilation errors when including both jemalloc/jemalloc.h and malloc.h in C++ code. + Don't assume that /bin/sh is appropriate when running size_classes.sh during configuration. + Consider __sparcv9 a synonym for __sparc64__ when defining LG_QUANTUM. + Link tests to librt if it contains clock_gettime(2). * 4.0.0 (August 17, 2015) This version contains many speed and space optimizations, both minor and major. The major themes are generalization, unification, and simplification. Although many of these optimizations cause no visible behavior change, their cumulative effect is substantial. New features: - Normalize size class spacing to be consistent across the complete size range. By default there are four size classes per size doubling, but this is now configurable via the --with-lg-size-class-group option. Also add the --with-lg-page, --with-lg-page-sizes, --with-lg-quantum, and --with-lg-tiny-min options, which can be used to tweak page and size class settings. Impacts: + Worst case performance for incrementally growing/shrinking reallocation is improved because there are far fewer size classes, and therefore copying happens less often. + Internal fragmentation is limited to 20% for all but the smallest size classes (those less than four times the quantum). (1B + 4 KiB) and (1B + 4 MiB) previously suffered nearly 50% internal fragmentation. + Chunk fragmentation tends to be lower because there are fewer distinct run sizes to pack. - Add support for explicit tcaches. The "tcache.create", "tcache.flush", and "tcache.destroy" mallctls control tcache lifetime and flushing, and the MALLOCX_TCACHE(tc) and MALLOCX_TCACHE_NONE flags to the *allocx() API control which tcache is used for each operation. - Implement per thread heap profiling, as well as the ability to enable/disable heap profiling on a per thread basis. Add the "prof.reset", "prof.lg_sample", "thread.prof.name", "thread.prof.active", "opt.prof_thread_active_init", "prof.thread_active_init", and "thread.prof.active" mallctls. - Add support for per arena application-specified chunk allocators, configured via the "arena..chunk_hooks" mallctl. - Refactor huge allocation to be managed by arenas, so that arenas now function as general purpose independent allocators. This is important in the context of user-specified chunk allocators, aside from the scalability benefits. Related new statistics: + The "stats.arenas..huge.allocated", "stats.arenas..huge.nmalloc", "stats.arenas..huge.ndalloc", and "stats.arenas..huge.nrequests" mallctls provide high level per arena huge allocation statistics. + The "arenas.nhchunks", "arenas.hchunk..size", "stats.arenas..hchunks..nmalloc", "stats.arenas..hchunks..ndalloc", "stats.arenas..hchunks..nrequests", and "stats.arenas..hchunks..curhchunks" mallctls provide per size class statistics. - Add the 'util' column to malloc_stats_print() output, which reports the proportion of available regions that are currently in use for each small size class. - Add "alloc" and "free" modes for for junk filling (see the "opt.junk" mallctl), so that it is possible to separately enable junk filling for allocation versus deallocation. 
- Add the jemalloc-config script, which provides information about how jemalloc was configured, and how to integrate it into application builds. - Add metadata statistics, which are accessible via the "stats.metadata", "stats.arenas..metadata.mapped", and "stats.arenas..metadata.allocated" mallctls. - Add the "stats.resident" mallctl, which reports the upper limit of physically resident memory mapped by the allocator. - Add per arena control over unused dirty page purging, via the "arenas.lg_dirty_mult", "arena..lg_dirty_mult", and "stats.arenas..lg_dirty_mult" mallctls. - Add the "prof.gdump" mallctl, which makes it possible to toggle the gdump feature on/off during program execution. - Add sdallocx(), which implements sized deallocation. The primary optimization over dallocx() is the removal of a metadata read, which often suffers an L1 cache miss. - Add missing header includes in jemalloc/jemalloc.h, so that applications only have to #include . - Add support for additional platforms: + Bitrig + Cygwin + DragonFlyBSD + iOS + OpenBSD + OpenRISC/or1k Optimizations: - Maintain dirty runs in per arena LRUs rather than in per arena trees of dirty-run-containing chunks. In practice this change significantly reduces dirty page purging volume. - Integrate whole chunks into the unused dirty page purging machinery. This reduces the cost of repeated huge allocation/deallocation, because it effectively introduces a cache of chunks. - Split the arena chunk map into two separate arrays, in order to increase cache locality for the frequently accessed bits. - Move small run metadata out of runs, into arena chunk headers. This reduces run fragmentation, smaller runs reduce external fragmentation for small size classes, and packed (less uniformly aligned) metadata layout improves CPU cache set distribution. - Randomly distribute large allocation base pointer alignment relative to page boundaries in order to more uniformly utilize CPU cache sets. This can be disabled via the --disable-cache-oblivious configure option, and queried via the "config.cache_oblivious" mallctl. - Micro-optimize the fast paths for the public API functions. - Refactor thread-specific data to reside in a single structure. This assures that only a single TLS read is necessary per call into the public API. - Implement in-place huge allocation growing and shrinking. - Refactor rtree (radix tree for chunk lookups) to be lock-free, and make additional optimizations that reduce maximum lookup depth to one or two levels. This resolves what was a concurrency bottleneck for per arena huge allocation, because a global data structure is critical for determining which arenas own which huge allocations. Incompatible changes: - Replace --enable-cc-silence with --disable-cc-silence to suppress spurious warnings by default. - Assure that the constness of malloc_usable_size()'s return type matches that of the system implementation. - Change the heap profile dump format to support per thread heap profiling, rename pprof to jeprof, and enhance it with the --thread= option. As a result, the bundled jeprof must now be used rather than the upstream (gperftools) pprof. - Disable "opt.prof_final" by default, in order to avoid atexit(3), which can internally deadlock on some platforms. - Change the "arenas.nlruns" mallctl type from size_t to unsigned. - Replace the "stats.arenas..bins..allocated" mallctl with "stats.arenas..bins..curregs". - Ignore MALLOC_CONF in set{uid,gid,cap} binaries. 
- Ignore MALLOCX_ARENA(a) in dallocx(), in favor of using the MALLOCX_TCACHE(tc) and MALLOCX_TCACHE_NONE flags to control tcache usage. Removed features: - Remove the *allocm() API, which is superseded by the *allocx() API. - Remove the --enable-dss options, and make dss non-optional on all platforms which support sbrk(2). - Remove the "arenas.purge" mallctl, which was obsoleted by the "arena..purge" mallctl in 3.1.0. - Remove the unnecessary "opt.valgrind" mallctl; jemalloc automatically detects whether it is running inside Valgrind. - Remove the "stats.huge.allocated", "stats.huge.nmalloc", and "stats.huge.ndalloc" mallctls. - Remove the --enable-mremap option. - Remove the "stats.chunks.current", "stats.chunks.total", and "stats.chunks.high" mallctls. Bug fixes: - Fix the cactive statistic to decrease (rather than increase) when active memory decreases. This regression was first released in 3.5.0. - Fix OOM handling in memalign() and valloc(). A variant of this bug existed in all releases since 2.0.0, which introduced these functions. - Fix an OOM-related regression in arena_tcache_fill_small(), which could cause cache corruption on OOM. This regression was present in all releases from 2.2.0 through 3.6.0. - Fix size class overflow handling for malloc(), posix_memalign(), memalign(), calloc(), and realloc() when profiling is enabled. - Fix the "arena..dss" mallctl to return an error if "primary" or "secondary" precedence is specified, but sbrk(2) is not supported. - Fix fallback lg_floor() implementations to handle extremely large inputs. - Ensure the default purgeable zone is after the default zone on OS X. - Fix latent bugs in atomic_*(). - Fix the "arena..dss" mallctl to handle read-only calls. - Fix tls_model configuration to enable the initial-exec model when possible. - Mark malloc_conf as a weak symbol so that the application can override it. - Correctly detect glibc's adaptive pthread mutexes. - Fix the --without-export configure option. * 3.6.0 (March 31, 2014) This version contains a critical bug fix for a regression present in 3.5.0 and 3.5.1. Bug fixes: - Fix a regression in arena_chunk_alloc() that caused crashes during small/large allocation if chunk allocation failed. In the absence of this bug, chunk allocation failure would result in allocation failure, e.g. NULL return from malloc(). This regression was introduced in 3.5.0. - Fix backtracing for gcc intrinsics-based backtracing by specifying -fno-omit-frame-pointer to gcc. Note that the application (and all the libraries it links to) must also be compiled with this option for backtracing to be reliable. - Use dss allocation precedence for huge allocations as well as small/large allocations. - Fix test assertion failure message formatting. This bug did not manifest on x86_64 systems because of implementation subtleties in va_list. - Fix inconsequential test failures for hash and SFMT code. New features: - Support heap profiling on FreeBSD. This feature depends on the proc filesystem being mounted during heap profile dumping. * 3.5.1 (February 25, 2014) This version primarily addresses minor bugs in test code. Bug fixes: - Configure Solaris/Illumos to use MADV_FREE. - Fix junk filling for mremap(2)-based huge reallocation. This is only relevant if configuring with the --enable-mremap option specified. - Avoid compilation failure if 'restrict' C99 keyword is not supported by the compiler. - Add a configure test for SSE2 rather than assuming it is usable on i686 systems. 
This fixes test compilation errors, especially on 32-bit Linux systems. - Fix mallctl argument size mismatches (size_t vs. uint64_t) in the stats unit test. - Fix/remove flawed alignment-related overflow tests. - Prevent compiler optimizations that could change backtraces in the prof_accum unit test. * 3.5.0 (January 22, 2014) This version focuses on refactoring and automated testing, though it also includes some non-trivial heap profiling optimizations not mentioned below. New features: - Add the *allocx() API, which is a successor to the experimental *allocm() API. The *allocx() functions are slightly simpler to use because they have fewer parameters, they directly return the results of primary interest, and mallocx()/rallocx() avoid the strict aliasing pitfall that allocm()/rallocm() share with posix_memalign(). Note that *allocm() is slated for removal in the next non-bugfix release. - Add support for LinuxThreads. Bug fixes: - Unless heap profiling is enabled, disable floating point code and don't link with libm. This, in combination with e.g. EXTRA_CFLAGS=-mno-sse on x64 systems, makes it possible to completely disable floating point register use. Some versions of glibc neglect to save/restore caller-saved floating point registers during dynamic lazy symbol loading, and the symbol loading code uses whatever malloc the application happens to have linked/loaded with, the result being potential floating point register corruption. - Report ENOMEM rather than EINVAL if an OOM occurs during heap profiling backtrace creation in imemalign(). This bug impacted posix_memalign() and aligned_alloc(). - Fix a file descriptor leak in a prof_dump_maps() error path. - Fix prof_dump() to close the dump file descriptor for all relevant error paths. - Fix rallocm() to use the arena specified by the ALLOCM_ARENA(s) flag for allocation, not just deallocation. - Fix a data race for large allocation stats counters. - Fix a potential infinite loop during thread exit. This bug occurred on Solaris, and could affect other platforms with similar pthreads TSD implementations. - Don't junk-fill reallocations unless usable size changes. This fixes a violation of the *allocx()/*allocm() semantics. - Fix growing large reallocation to junk fill new space. - Fix huge deallocation to junk fill when munmap is disabled. - Change the default private namespace prefix from empty to je_, and change --with-private-namespace-prefix so that it prepends an additional prefix rather than replacing je_. This reduces the likelihood of applications which statically link jemalloc experiencing symbol name collisions. - Add missing private namespace mangling (relevant when --with-private-namespace is specified). - Add and use JEMALLOC_INLINE_C so that static inline functions are marked as static even for debug builds. - Add a missing mutex unlock in a malloc_init_hard() error path. In practice this error path is never executed. - Fix numerous bugs in malloc_strotumax() error handling/reporting. These bugs had no impact except for malformed inputs. - Fix numerous bugs in malloc_snprintf(). These bugs were not exercised by existing calls, so they had no impact. * 3.4.1 (October 20, 2013) Bug fixes: - Fix a race in the "arenas.extend" mallctl that could cause memory corruption of internal data structures and subsequent crashes. 
- Fix Valgrind integration flaws that caused Valgrind warnings about reads of uninitialized memory in: + arena chunk headers + internal zero-initialized data structures (relevant to tcache and prof code) - Preserve errno during the first allocation. A readlink(2) call during initialization fails unless /etc/malloc.conf exists, so errno was typically set during the first allocation prior to this fix. - Fix compilation warnings reported by gcc 4.8.1. * 3.4.0 (June 2, 2013) This version is essentially a small bugfix release, but the addition of aarch64 support requires that the minor version be incremented. Bug fixes: - Fix race-triggered deadlocks in chunk_record(). These deadlocks were typically triggered by multiple threads concurrently deallocating huge objects. New features: - Add support for the aarch64 architecture. * 3.3.1 (March 6, 2013) This version fixes bugs that are typically encountered only when utilizing custom run-time options. Bug fixes: - Fix a locking order bug that could cause deadlock during fork if heap profiling were enabled. - Fix a chunk recycling bug that could cause the allocator to lose track of whether a chunk was zeroed. On FreeBSD, NetBSD, and OS X, it could cause corruption if allocating via sbrk(2) (unlikely unless running with the "dss:primary" option specified). This was completely harmless on Linux unless using mlockall(2) (and unlikely even then, unless the --disable-munmap configure option or the "dss:primary" option was specified). This regression was introduced in 3.1.0 by the mlockall(2)/madvise(2) interaction fix. - Fix TLS-related memory corruption that could occur during thread exit if the thread never allocated memory. Only the quarantine and prof facilities were susceptible. - Fix two quarantine bugs: + Internal reallocation of the quarantined object array leaked the old array. + Reallocation failure for internal reallocation of the quarantined object array (very unlikely) resulted in memory corruption. - Fix Valgrind integration to annotate all internally allocated memory in a way that keeps Valgrind happy about internal data structure access. - Fix building for s390 systems. * 3.3.0 (January 23, 2013) This version includes a few minor performance improvements in addition to the listed new features and bug fixes. New features: - Add clipping support to lg_chunk option processing. - Add the --enable-ivsalloc option. - Add the --without-export option. - Add the --disable-zone-allocator option. Bug fixes: - Fix "arenas.extend" mallctl to output the number of arenas. - Fix chunk_recycle() to unconditionally inform Valgrind that returned memory is undefined. - Fix build break on FreeBSD related to alloca.h. * 3.2.0 (November 9, 2012) In addition to a couple of bug fixes, this version modifies page run allocation and dirty page purging algorithms in order to better control page-level virtual memory fragmentation. Incompatible changes: - Change the "opt.lg_dirty_mult" default from 5 to 3 (32:1 to 8:1). Bug fixes: - Fix dss/mmap allocation precedence code to use recyclable mmap memory only after primary dss allocation fails. - Fix deadlock in the "arenas.purge" mallctl. This regression was introduced in 3.1.0 by the addition of the "arena..purge" mallctl. * 3.1.0 (October 16, 2012) New features: - Auto-detect whether running inside Valgrind, thus removing the need to manually specify MALLOC_CONF=valgrind:true. - Add the "arenas.extend" mallctl, which allows applications to create manually managed arenas. - Add the ALLOCM_ARENA() flag for {,r,d}allocm(). 
- Add the "opt.dss", "arena..dss", and "stats.arenas..dss" mallctls, which provide control over dss/mmap precedence. - Add the "arena..purge" mallctl, which obsoletes "arenas.purge". - Define LG_QUANTUM for hppa. Incompatible changes: - Disable tcache by default if running inside Valgrind, in order to avoid making unallocated objects appear reachable to Valgrind. - Drop const from malloc_usable_size() argument on Linux. Bug fixes: - Fix heap profiling crash if sampled object is freed via realloc(p, 0). - Remove const from __*_hook variable declarations, so that glibc can modify them during process forking. - Fix mlockall(2)/madvise(2) interaction. - Fix fork(2)-related deadlocks. - Fix error return value for "thread.tcache.enabled" mallctl. * 3.0.0 (May 11, 2012) Although this version adds some major new features, the primary focus is on internal code cleanup that facilitates maintainability and portability, most of which is not reflected in the ChangeLog. This is the first release to incorporate substantial contributions from numerous other developers, and the result is a more broadly useful allocator (see the git revision history for contribution details). Note that the license has been unified, thanks to Facebook granting a license under the same terms as the other copyright holders (see COPYING). New features: - Implement Valgrind support, redzones, and quarantine. - Add support for additional platforms: + FreeBSD + Mac OS X Lion + MinGW + Windows (no support yet for replacing the system malloc) - Add support for additional architectures: + MIPS + SH4 + Tilera - Add support for cross compiling. - Add nallocm(), which rounds a request size up to the nearest size class without actually allocating. - Implement aligned_alloc() (blame C11). - Add the "thread.tcache.enabled" mallctl. - Add the "opt.prof_final" mallctl. - Update pprof (from gperftools 2.0). - Add the --with-mangling option. - Add the --disable-experimental option. - Add the --disable-munmap option, and make it the default on Linux. - Add the --enable-mremap option, which disables use of mremap(2) by default. Incompatible changes: - Enable stats by default. - Enable fill by default. - Disable lazy locking by default. - Rename the "tcache.flush" mallctl to "thread.tcache.flush". - Rename the "arenas.pagesize" mallctl to "arenas.page". - Change the "opt.lg_prof_sample" default from 0 to 19 (1 B to 512 KiB). - Change the "opt.prof_accum" default from true to false. Removed features: - Remove the swap feature, including the "config.swap", "swap.avail", "swap.prezeroed", "swap.nfds", and "swap.fds" mallctls. - Remove highruns statistics, including the "stats.arenas..bins..highruns" and "stats.arenas..lruns..highruns" mallctls. - As part of small size class refactoring, remove the "opt.lg_[qc]space_max", "arenas.cacheline", "arenas.subpage", "arenas.[tqcs]space_{min,max}", and "arenas.[tqcs]bins" mallctls. - Remove the "arenas.chunksize" mallctl. - Remove the "opt.lg_prof_tcmax" option. - Remove the "opt.lg_prof_bt_max" option. - Remove the "opt.lg_tcache_gc_sweep" option. - Remove the --disable-tiny option, including the "config.tiny" mallctl. - Remove the --enable-dynamic-page-shift configure option. - Remove the --enable-sysv configure option. Bug fixes: - Fix a statistics-related bug in the "thread.arena" mallctl that could cause invalid statistics and crashes. - Work around TLS deallocation via free() on Linux. This bug could cause write-after-free memory corruption. 
- Fix a potential deadlock that could occur during interval- and growth-triggered heap profile dumps. - Fix large calloc() zeroing bugs due to dropping chunk map unzeroed flags. - Fix chunk_alloc_dss() to stop claiming memory is zeroed. This bug could cause memory corruption and crashes with --enable-dss specified. - Fix fork-related bugs that could cause deadlock in children between fork and exec. - Fix malloc_stats_print() to honor 'b' and 'l' in the opts parameter. - Fix realloc(p, 0) to act like free(p). - Do not enforce minimum alignment in memalign(). - Check for NULL pointer in malloc_usable_size(). - Fix an off-by-one heap profile statistics bug that could be observed in interval- and growth-triggered heap profiles. - Fix the "epoch" mallctl to update cached stats even if the passed in epoch is 0. - Fix bin->runcur management to fix a layout policy bug. This bug did not affect correctness. - Fix a bug in choose_arena_hard() that potentially caused more arenas to be initialized than necessary. - Add missing "opt.lg_tcache_max" mallctl implementation. - Use glibc allocator hooks to make mixed allocator usage less likely. - Fix build issues for --disable-tcache. - Don't mangle pthread_create() when --with-private-namespace is specified. * 2.2.5 (November 14, 2011) Bug fixes: - Fix huge_ralloc() race when using mremap(2). This is a serious bug that could cause memory corruption and/or crashes. - Fix huge_ralloc() to maintain chunk statistics. - Fix malloc_stats_print(..., "a") output. * 2.2.4 (November 5, 2011) Bug fixes: - Initialize arenas_tsd before using it. This bug existed for 2.2.[0-3], as well as for --disable-tls builds in earlier releases. - Do not assume a 4 KiB page size in test/rallocm.c. * 2.2.3 (August 31, 2011) This version fixes numerous bugs related to heap profiling. Bug fixes: - Fix a prof-related race condition. This bug could cause memory corruption, but only occurred in non-default configurations (prof_accum:false). - Fix off-by-one backtracing issues (make sure that prof_alloc_prep() is excluded from backtraces). - Fix a prof-related bug in realloc() (only triggered by OOM errors). - Fix prof-related bugs in allocm() and rallocm(). - Fix prof_tdata_cleanup() for --disable-tls builds. - Fix a relative include path, to fix objdir builds. * 2.2.2 (July 30, 2011) Bug fixes: - Fix a build error for --disable-tcache. - Fix assertions in arena_purge() (for real this time). - Add the --with-private-namespace option. This is a workaround for symbol conflicts that can inadvertently arise when using static libraries. * 2.2.1 (March 30, 2011) Bug fixes: - Implement atomic operations for x86/x64. This fixes compilation failures for versions of gcc that are still in wide use. - Fix an assertion in arena_purge(). * 2.2.0 (March 22, 2011) This version incorporates several improvements to algorithms and data structures that tend to reduce fragmentation and increase speed. New features: - Add the "stats.cactive" mallctl. - Update pprof (from google-perftools 1.7). - Improve backtracing-related configuration logic, and add the --disable-prof-libgcc option. Bug fixes: - Change default symbol visibility from "internal", to "hidden", which decreases the overhead of library-internal function calls. - Fix symbol visibility so that it is also set on OS X. - Fix a build dependency regression caused by the introduction of the .pic.o suffix for PIC object files. - Add missing checks for mutex initialization failures. 
- Don't use libgcc-based backtracing except on x64, where it is known to work. - Fix deadlocks on OS X that were due to memory allocation in pthread_mutex_lock(). - Heap profiling-specific fixes: + Fix memory corruption due to integer overflow in small region index computation, when using a small enough sample interval that profiling context pointers are stored in small run headers. + Fix a bootstrap ordering bug that only occurred with TLS disabled. + Fix a rallocm() rsize bug. + Fix error detection bugs for aligned memory allocation. * 2.1.3 (March 14, 2011) Bug fixes: - Fix a cpp logic regression (due to the "thread.{de,}allocatedp" mallctl fix for OS X in 2.1.2). - Fix a "thread.arena" mallctl bug. - Fix a thread cache stats merging bug. * 2.1.2 (March 2, 2011) Bug fixes: - Fix "thread.{de,}allocatedp" mallctl for OS X. - Add missing jemalloc.a to build system. * 2.1.1 (January 31, 2011) Bug fixes: - Fix aligned huge reallocation (affected allocm()). - Fix the ALLOCM_LG_ALIGN macro definition. - Fix a heap dumping deadlock. - Fix a "thread.arena" mallctl bug. * 2.1.0 (December 3, 2010) This version incorporates some optimizations that can't quite be considered bug fixes. New features: - Use Linux's mremap(2) for huge object reallocation when possible. - Avoid locking in mallctl*() when possible. - Add the "thread.[de]allocatedp" mallctl's. - Convert the manual page source from roff to DocBook, and generate both roff and HTML manuals. Bug fixes: - Fix a crash due to incorrect bootstrap ordering. This only impacted --enable-debug --enable-dss configurations. - Fix a minor statistics bug for mallctl("swap.avail", ...). * 2.0.1 (October 29, 2010) Bug fixes: - Fix a race condition in heap profiling that could cause undefined behavior if "opt.prof_accum" were disabled. - Add missing mutex unlocks for some OOM error paths in the heap profiling code. - Fix a compilation error for non-C99 builds. * 2.0.0 (October 24, 2010) This version focuses on the experimental *allocm() API, and on improved run-time configuration/introspection. Nonetheless, numerous performance improvements are also included. New features: - Implement the experimental {,r,s,d}allocm() API, which provides a superset of the functionality available via malloc(), calloc(), posix_memalign(), realloc(), malloc_usable_size(), and free(). These functions can be used to allocate/reallocate aligned zeroed memory, ask for optional extra memory during reallocation, prevent object movement during reallocation, etc. - Replace JEMALLOC_OPTIONS/JEMALLOC_PROF_PREFIX with MALLOC_CONF, which is more human-readable, and more flexible. For example: JEMALLOC_OPTIONS=AJP is now: MALLOC_CONF=abort:true,fill:true,stats_print:true - Port to Apple OS X. Sponsored by Mozilla. - Make it possible for the application to control thread-->arena mappings via the "thread.arena" mallctl. - Add compile-time support for all TLS-related functionality via pthreads TSD. This is mainly of interest for OS X, which does not support TLS, but has a TSD implementation with similar performance. - Override memalign() and valloc() if they are provided by the system. - Add the "arenas.purge" mallctl, which can be used to synchronously purge all dirty unused pages. - Make cumulative heap profiling data optional, so that it is possible to limit the amount of memory consumed by heap profiling data structures. - Add per thread allocation counters that can be accessed via the "thread.allocated" and "thread.deallocated" mallctls. 
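[Editorial sketch, not part of the upstream ChangeLog: the per-thread allocation counters and thread-to-arena mapping added in 2.0.0 are read through mallctl(). Assumes FreeBSD's <malloc_np.h>; the uint64_t and unsigned types follow the documented "thread.allocated" and "thread.arena" entries.]

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <malloc_np.h>

int
main(void)
{
	uint64_t allocated, deallocated;
	unsigned arena;
	size_t sz;

	free(malloc(64));	/* Give the counters something to count. */

	sz = sizeof(allocated);
	if (mallctl("thread.allocated", &allocated, &sz, NULL, 0) == 0 &&
	    mallctl("thread.deallocated", &deallocated, &sz, NULL, 0) == 0) {
		printf("thread: %ju bytes allocated, %ju bytes deallocated\n",
		    (uintmax_t)allocated, (uintmax_t)deallocated);
	}

	/* Query this thread's arena assignment ("thread.arena", also writable). */
	sz = sizeof(arena);
	if (mallctl("thread.arena", &arena, &sz, NULL, 0) == 0)
		printf("thread.arena: %u\n", arena);

	return (0);
}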
Incompatible changes: - Remove JEMALLOC_OPTIONS and malloc_options (see MALLOC_CONF above). - Increase default backtrace depth from 4 to 128 for heap profiling. - Disable interval-based profile dumps by default. Bug fixes: - Remove bad assertions in fork handler functions. These assertions could cause aborts for some combinations of configure settings. - Fix strerror_r() usage to deal with non-standard semantics in GNU libc. - Fix leak context reporting. This bug tended to cause the number of contexts to be underreported (though the reported number of objects and bytes were correct). - Fix a realloc() bug for large in-place growing reallocation. This bug could cause memory corruption, but it was hard to trigger. - Fix an allocation bug for small allocations that could be triggered if multiple threads raced to create a new run of backing pages. - Enhance the heap profiler to trigger samples based on usable size, rather than request size. - Fix a heap profiling bug due to sometimes losing track of requested object size for sampled objects. * 1.0.3 (August 12, 2010) Bug fixes: - Fix the libunwind-based implementation of stack backtracing (used for heap profiling). This bug could cause zero-length backtraces to be reported. - Add a missing mutex unlock in library initialization code. If multiple threads raced to initialize malloc, some of them could end up permanently blocked. * 1.0.2 (May 11, 2010) Bug fixes: - Fix junk filling of large objects, which could cause memory corruption. - Add MAP_NORESERVE support for chunk mapping, because otherwise virtual memory limits could cause swap file configuration to fail. Contributed by Jordan DeLong. * 1.0.1 (April 14, 2010) Bug fixes: - Fix compilation when --enable-fill is specified. - Fix threads-related profiling bugs that affected accuracy and caused memory to be leaked during thread exit. - Fix dirty page purging race conditions that could cause crashes. - Fix crash in tcache flushing code during thread destruction. * 1.0.0 (April 11, 2010) This release focuses on speed and run-time introspection. Numerous algorithmic improvements make this release substantially faster than its predecessors. New features: - Implement autoconf-based configuration system. - Add mallctl*(), for the purposes of introspection and run-time configuration. - Make it possible for the application to manually flush a thread's cache, via the "tcache.flush" mallctl. - Base maximum dirty page count on proportion of active memory. - Compute various additional run-time statistics, including per size class statistics for large objects. - Expose malloc_stats_print(), which can be called repeatedly by the application. - Simplify the malloc_message() signature to only take one string argument, and incorporate an opaque data pointer argument for use by the application in combination with malloc_stats_print(). - Add support for allocation backed by one or more swap files, and allow the application to disable over-commit if swap files are in use. - Implement allocation profiling and leak checking. Removed features: - Remove the dynamic arena rebalancing code, since thread-specific caching reduces its utility. Bug fixes: - Modify chunk allocation to work when address space layout randomization (ASLR) is in use. - Fix thread cleanup bugs related to TLS destruction. - Handle 0-size allocation requests in posix_memalign(). - Fix a chunk leak. The leaked chunks were never touched, so this impacted virtual memory usage, but not physical memory usage. 
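[Editorial sketch, not part of the upstream ChangeLog: using the malloc_stats_print() interface exposed in 1.0.0, with a caller-supplied write callback. The output path is purely illustrative; the 'b'/'l' opts characters are the ones referenced in the 3.0.0 fix above.]

#include <stdio.h>
#include <malloc_np.h>

/* Collect the statistics report through a callback instead of stderr. */
static void
write_cb(void *cbopaque, const char *s)
{
	fputs(s, (FILE *)cbopaque);
}

int
main(void)
{
	FILE *f;

	/* Default: emit the full report via malloc_message() (stderr by default). */
	malloc_stats_print(NULL, NULL, NULL);

	/*
	 * Route the report to a file and trim it with the opts string; 'b' and
	 * 'l' omit the per-size-class bin and large-object tables.
	 */
	f = fopen("/tmp/jemalloc-stats.txt", "w");
	if (f != NULL) {
		malloc_stats_print(write_cb, f, "bl");
		fclose(f);
	}
	return (0);
}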
* linux_2008082[78]a (August 27/28, 2008) These snapshot releases are the simple result of incorporating Linux-specific support into the FreeBSD malloc sources. -------------------------------------------------------------------------------- vim:filetype=text:textwidth=80 Index: head/contrib/jemalloc/FREEBSD-diffs =================================================================== --- head/contrib/jemalloc/FREEBSD-diffs (revision 299586) +++ head/contrib/jemalloc/FREEBSD-diffs (revision 299587) @@ -1,509 +1,512 @@ diff --git a/doc/jemalloc.xml.in b/doc/jemalloc.xml.in -index bc5dbd1..ba182da 100644 +index c4a44e3..4626e9b 100644 --- a/doc/jemalloc.xml.in +++ b/doc/jemalloc.xml.in @@ -53,11 +53,23 @@ This manual describes jemalloc @jemalloc_version@. More information can be found at the jemalloc website. + + The following configuration options are enabled in libc's built-in + jemalloc: , + , , + , , + , , and + . Additionally, + is enabled in development versions of + FreeBSD (controlled by the MALLOC_PRODUCTION make + variable). + SYNOPSIS - #include <jemalloc/jemalloc.h> + #include <stdlib.h> +#include <malloc_np.h> Standard API -@@ -2905,4 +2917,18 @@ malloc_conf = "lg_chunk:24";]]> +@@ -2961,4 +2973,18 @@ malloc_conf = "lg_chunk:24";]]> The posix_memalign function conforms to IEEE Std 1003.1-2001 (“POSIX.1”). + + HISTORY + The malloc_usable_size and + posix_memalign functions first appeared in + FreeBSD 7.0. + + The aligned_alloc, + malloc_stats_print, and + mallctl* functions first appeared in + FreeBSD 10.0. + + The *allocx functions first appeared + in FreeBSD 11.0. + diff --git a/include/jemalloc/internal/jemalloc_internal.h.in b/include/jemalloc/internal/jemalloc_internal.h.in -index 3f54391..d240256 100644 +index 51bf897..7de22ea 100644 --- a/include/jemalloc/internal/jemalloc_internal.h.in +++ b/include/jemalloc/internal/jemalloc_internal.h.in @@ -8,6 +8,9 @@ #include #endif +#include "un-namespace.h" +#include "libc_private.h" + #define JEMALLOC_NO_DEMANGLE #ifdef JEMALLOC_JET # define JEMALLOC_N(n) jet_##n @@ -42,13 +45,7 @@ static const bool config_fill = false #endif ; -static const bool config_lazy_lock = -#ifdef JEMALLOC_LAZY_LOCK - true -#else - false -#endif - ; +static const bool config_lazy_lock = true; static const char * const config_malloc_conf = JEMALLOC_CONFIG_MALLOC_CONF; static const bool config_prof = #ifdef JEMALLOC_PROF diff --git a/include/jemalloc/internal/jemalloc_internal_decls.h b/include/jemalloc/internal/jemalloc_internal_decls.h index 2b8ca5d..42d97f2 100644 --- a/include/jemalloc/internal/jemalloc_internal_decls.h +++ b/include/jemalloc/internal/jemalloc_internal_decls.h @@ -1,6 +1,9 @@ #ifndef JEMALLOC_INTERNAL_DECLS_H #define JEMALLOC_INTERNAL_DECLS_H +#include "libc_private.h" +#include "namespace.h" + #include #ifdef _WIN32 # include diff --git a/include/jemalloc/internal/mutex.h b/include/jemalloc/internal/mutex.h -index f051f29..561378f 100644 +index 5221799..60ab041 100644 --- a/include/jemalloc/internal/mutex.h +++ b/include/jemalloc/internal/mutex.h -@@ -47,15 +47,13 @@ struct malloc_mutex_s { +@@ -52,9 +52,6 @@ struct malloc_mutex_s { #ifdef JEMALLOC_LAZY_LOCK extern bool isthreaded; -#else -# undef isthreaded /* Undo private_namespace.h definition. 
*/ -# define isthreaded true #endif - bool malloc_mutex_init(malloc_mutex_t *mutex); - void malloc_mutex_prefork(malloc_mutex_t *mutex); - void malloc_mutex_postfork_parent(malloc_mutex_t *mutex); - void malloc_mutex_postfork_child(malloc_mutex_t *mutex); + bool malloc_mutex_init(malloc_mutex_t *mutex, const char *name, +@@ -62,6 +59,7 @@ bool malloc_mutex_init(malloc_mutex_t *mutex, const char *name, + void malloc_mutex_prefork(tsdn_t *tsdn, malloc_mutex_t *mutex); + void malloc_mutex_postfork_parent(tsdn_t *tsdn, malloc_mutex_t *mutex); + void malloc_mutex_postfork_child(tsdn_t *tsdn, malloc_mutex_t *mutex); +bool malloc_mutex_first_thread(void); - bool mutex_boot(void); + bool malloc_mutex_boot(void); #endif /* JEMALLOC_H_EXTERNS */ diff --git a/include/jemalloc/internal/private_symbols.txt b/include/jemalloc/internal/private_symbols.txt -index 5880996..6e94e03 100644 +index f2b6a55..69369c9 100644 --- a/include/jemalloc/internal/private_symbols.txt +++ b/include/jemalloc/internal/private_symbols.txt -@@ -296,7 +296,6 @@ iralloct_realign +@@ -311,7 +311,6 @@ iralloct_realign isalloc isdalloct isqalloc -isthreaded ivsalloc ixalloc jemalloc_postfork_child diff --git a/include/jemalloc/jemalloc_FreeBSD.h b/include/jemalloc/jemalloc_FreeBSD.h new file mode 100644 -index 0000000..433dab5 +index 0000000..c58a8f3 --- /dev/null +++ b/include/jemalloc/jemalloc_FreeBSD.h -@@ -0,0 +1,160 @@ +@@ -0,0 +1,162 @@ +/* + * Override settings that were generated in jemalloc_defs.h as necessary. + */ + +#undef JEMALLOC_OVERRIDE_VALLOC + +#ifndef MALLOC_PRODUCTION +#define JEMALLOC_DEBUG +#endif + ++#undef JEMALLOC_DSS ++ +/* + * The following are architecture-dependent, so conditionally define them for + * each supported architecture. + */ +#undef JEMALLOC_TLS_MODEL +#undef STATIC_PAGE_SHIFT +#undef LG_SIZEOF_PTR +#undef LG_SIZEOF_INT +#undef LG_SIZEOF_LONG +#undef LG_SIZEOF_INTMAX_T + +#ifdef __i386__ +# define LG_SIZEOF_PTR 2 +# define JEMALLOC_TLS_MODEL __attribute__((tls_model("initial-exec"))) +#endif +#ifdef __ia64__ +# define LG_SIZEOF_PTR 3 +#endif +#ifdef __sparc64__ +# define LG_SIZEOF_PTR 3 +# define JEMALLOC_TLS_MODEL __attribute__((tls_model("initial-exec"))) +#endif +#ifdef __amd64__ +# define LG_SIZEOF_PTR 3 +# define JEMALLOC_TLS_MODEL __attribute__((tls_model("initial-exec"))) +#endif +#ifdef __arm__ +# define LG_SIZEOF_PTR 2 +#endif +#ifdef __aarch64__ +# define LG_SIZEOF_PTR 3 +#endif +#ifdef __mips__ +#ifdef __mips_n64 +# define LG_SIZEOF_PTR 3 +#else +# define LG_SIZEOF_PTR 2 +#endif +#endif +#ifdef __powerpc64__ +# define LG_SIZEOF_PTR 3 +#elif defined(__powerpc__) +# define LG_SIZEOF_PTR 2 +#endif +#ifdef __riscv__ +# define LG_SIZEOF_PTR 3 +#endif + +#ifndef JEMALLOC_TLS_MODEL +# define JEMALLOC_TLS_MODEL /* Default. */ +#endif + +#define STATIC_PAGE_SHIFT PAGE_SHIFT +#define LG_SIZEOF_INT 2 +#define LG_SIZEOF_LONG LG_SIZEOF_PTR +#define LG_SIZEOF_INTMAX_T 3 + +#undef CPU_SPINWAIT +#include +#include +#define CPU_SPINWAIT cpu_spinwait() + +/* Disable lazy-lock machinery, mangle isthreaded, and adjust its type. */ +#undef JEMALLOC_LAZY_LOCK +extern int __isthreaded; +#define isthreaded ((bool)__isthreaded) + +/* Mangle. 
*/ +#undef je_malloc +#undef je_calloc +#undef je_posix_memalign +#undef je_aligned_alloc +#undef je_realloc +#undef je_free +#undef je_malloc_usable_size +#undef je_mallocx +#undef je_rallocx +#undef je_xallocx +#undef je_sallocx +#undef je_dallocx +#undef je_sdallocx +#undef je_nallocx +#undef je_mallctl +#undef je_mallctlnametomib +#undef je_mallctlbymib +#undef je_malloc_stats_print +#undef je_allocm +#undef je_rallocm +#undef je_sallocm +#undef je_dallocm +#undef je_nallocm +#define je_malloc __malloc +#define je_calloc __calloc +#define je_posix_memalign __posix_memalign +#define je_aligned_alloc __aligned_alloc +#define je_realloc __realloc +#define je_free __free +#define je_malloc_usable_size __malloc_usable_size +#define je_mallocx __mallocx +#define je_rallocx __rallocx +#define je_xallocx __xallocx +#define je_sallocx __sallocx +#define je_dallocx __dallocx +#define je_sdallocx __sdallocx +#define je_nallocx __nallocx +#define je_mallctl __mallctl +#define je_mallctlnametomib __mallctlnametomib +#define je_mallctlbymib __mallctlbymib +#define je_malloc_stats_print __malloc_stats_print +#define je_allocm __allocm +#define je_rallocm __rallocm +#define je_sallocm __sallocm +#define je_dallocm __dallocm +#define je_nallocm __nallocm +#define open _open +#define read _read +#define write _write +#define close _close +#define pthread_mutex_lock _pthread_mutex_lock +#define pthread_mutex_unlock _pthread_mutex_unlock + +#ifdef JEMALLOC_C_ +/* + * Define 'weak' symbols so that an application can have its own versions + * of malloc, calloc, realloc, free, et al. + */ +__weak_reference(__malloc, malloc); +__weak_reference(__calloc, calloc); +__weak_reference(__posix_memalign, posix_memalign); +__weak_reference(__aligned_alloc, aligned_alloc); +__weak_reference(__realloc, realloc); +__weak_reference(__free, free); +__weak_reference(__malloc_usable_size, malloc_usable_size); +__weak_reference(__mallocx, mallocx); +__weak_reference(__rallocx, rallocx); +__weak_reference(__xallocx, xallocx); +__weak_reference(__sallocx, sallocx); +__weak_reference(__dallocx, dallocx); +__weak_reference(__sdallocx, sdallocx); +__weak_reference(__nallocx, nallocx); +__weak_reference(__mallctl, mallctl); +__weak_reference(__mallctlnametomib, mallctlnametomib); +__weak_reference(__mallctlbymib, mallctlbymib); +__weak_reference(__malloc_stats_print, malloc_stats_print); +__weak_reference(__allocm, allocm); +__weak_reference(__rallocm, rallocm); +__weak_reference(__sallocm, sallocm); +__weak_reference(__dallocm, dallocm); +__weak_reference(__nallocm, nallocm); +#endif diff --git a/include/jemalloc/jemalloc_rename.sh b/include/jemalloc/jemalloc_rename.sh index f943891..47d032c 100755 --- a/include/jemalloc/jemalloc_rename.sh +++ b/include/jemalloc/jemalloc_rename.sh @@ -19,4 +19,6 @@ done cat <: */ +const char *__malloc_options_1_0 = NULL; +__sym_compat(_malloc_options, __malloc_options_1_0, FBSD_1.0); + /* Runtime configuration options. */ const char *je_malloc_conf JEMALLOC_ATTR(weak); bool opt_abort = -@@ -2611,6 +2615,107 @@ je_malloc_usable_size(JEMALLOC_USABLE_SIZE_CONST void *ptr) +@@ -2673,6 +2677,107 @@ je_malloc_usable_size(JEMALLOC_USABLE_SIZE_CONST void *ptr) */ /******************************************************************************/ /* + * Begin compatibility functions. 
+ */ + +#define ALLOCM_LG_ALIGN(la) (la) +#define ALLOCM_ALIGN(a) (ffsl(a)-1) +#define ALLOCM_ZERO ((int)0x40) +#define ALLOCM_NO_MOVE ((int)0x80) + +#define ALLOCM_SUCCESS 0 +#define ALLOCM_ERR_OOM 1 +#define ALLOCM_ERR_NOT_MOVED 2 + +int +je_allocm(void **ptr, size_t *rsize, size_t size, int flags) +{ + void *p; + + assert(ptr != NULL); + + p = je_mallocx(size, flags); + if (p == NULL) + return (ALLOCM_ERR_OOM); + if (rsize != NULL) -+ *rsize = isalloc(p, config_prof); ++ *rsize = isalloc(tsdn_fetch(), p, config_prof); + *ptr = p; + return (ALLOCM_SUCCESS); +} + +int +je_rallocm(void **ptr, size_t *rsize, size_t size, size_t extra, int flags) +{ + int ret; + bool no_move = flags & ALLOCM_NO_MOVE; + + assert(ptr != NULL); + assert(*ptr != NULL); + assert(size != 0); + assert(SIZE_T_MAX - size >= extra); + + if (no_move) { + size_t usize = je_xallocx(*ptr, size, extra, flags); + ret = (usize >= size) ? ALLOCM_SUCCESS : ALLOCM_ERR_NOT_MOVED; + if (rsize != NULL) + *rsize = usize; + } else { + void *p = je_rallocx(*ptr, size+extra, flags); + if (p != NULL) { + *ptr = p; + ret = ALLOCM_SUCCESS; + } else + ret = ALLOCM_ERR_OOM; + if (rsize != NULL) -+ *rsize = isalloc(*ptr, config_prof); ++ *rsize = isalloc(tsdn_fetch(), *ptr, config_prof); + } + return (ret); +} + +int +je_sallocm(const void *ptr, size_t *rsize, int flags) +{ + + assert(rsize != NULL); + *rsize = je_sallocx(ptr, flags); + return (ALLOCM_SUCCESS); +} + +int +je_dallocm(void *ptr, int flags) +{ + + je_dallocx(ptr, flags); + return (ALLOCM_SUCCESS); +} + +int +je_nallocm(size_t *rsize, size_t size, int flags) +{ + size_t usize; + + usize = je_nallocx(size, flags); + if (usize == 0) + return (ALLOCM_ERR_OOM); + if (rsize != NULL) + *rsize = usize; + return (ALLOCM_SUCCESS); +} + +#undef ALLOCM_LG_ALIGN +#undef ALLOCM_ALIGN +#undef ALLOCM_ZERO +#undef ALLOCM_NO_MOVE + +#undef ALLOCM_SUCCESS +#undef ALLOCM_ERR_OOM +#undef ALLOCM_ERR_NOT_MOVED + +/* + * End compatibility functions. + */ +/******************************************************************************/ +/* * The following functions are used by threading libraries for protection of * malloc during fork(). 
*/ -@@ -2717,4 +2822,11 @@ jemalloc_postfork_child(void) - ctl_postfork_child(); +@@ -2814,4 +2919,11 @@ jemalloc_postfork_child(void) + ctl_postfork_child(tsd_tsdn(tsd)); } +void +_malloc_first_thread(void) +{ + + (void)malloc_mutex_first_thread(); +} + /******************************************************************************/ diff --git a/src/mutex.c b/src/mutex.c -index 2d47af9..934d5aa 100644 +index a1fac34..a24e420 100644 --- a/src/mutex.c +++ b/src/mutex.c @@ -66,6 +66,17 @@ pthread_create(pthread_t *__restrict thread, #ifdef JEMALLOC_MUTEX_INIT_CB JEMALLOC_EXPORT int _pthread_mutex_init_calloc_cb(pthread_mutex_t *mutex, void *(calloc_cb)(size_t, size_t)); + +#pragma weak _pthread_mutex_init_calloc_cb +int +_pthread_mutex_init_calloc_cb(pthread_mutex_t *mutex, + void *(calloc_cb)(size_t, size_t)) +{ + + return (((int (*)(pthread_mutex_t *, void *(*)(size_t, size_t))) + __libc_interposing[INTERPOS__pthread_mutex_init_calloc_cb])(mutex, + calloc_cb)); +} #endif bool -@@ -137,7 +148,7 @@ malloc_mutex_postfork_child(malloc_mutex_t *mutex) +@@ -140,7 +151,7 @@ malloc_mutex_postfork_child(tsdn_t *tsdn, malloc_mutex_t *mutex) } bool --mutex_boot(void) +-malloc_mutex_boot(void) +malloc_mutex_first_thread(void) { #ifdef JEMALLOC_MUTEX_INIT_CB -@@ -151,3 +162,14 @@ mutex_boot(void) +@@ -154,3 +165,14 @@ malloc_mutex_boot(void) #endif return (false); } + +bool -+mutex_boot(void) ++malloc_mutex_boot(void) +{ + +#ifndef JEMALLOC_MUTEX_INIT_CB + return (malloc_mutex_first_thread()); +#else + return (false); +#endif +} diff --git a/src/util.c b/src/util.c -index 02673c7..116e981 100644 +index a1c4a2a..04f9153 100644 --- a/src/util.c +++ b/src/util.c -@@ -66,6 +66,22 @@ wrtmessage(void *cbopaque, const char *s) +@@ -67,6 +67,22 @@ wrtmessage(void *cbopaque, const char *s) JEMALLOC_EXPORT void (*je_malloc_message)(void *, const char *s); +JEMALLOC_ATTR(visibility("hidden")) +void +wrtmessage_1_0(const char *s1, const char *s2, const char *s3, + const char *s4) +{ + + wrtmessage(NULL, s1); + wrtmessage(NULL, s2); + wrtmessage(NULL, s3); + wrtmessage(NULL, s4); +} + +void (*__malloc_message_1_0)(const char *s1, const char *s2, const char *s3, + const char *s4) = wrtmessage_1_0; +__sym_compat(_malloc_message, __malloc_message_1_0, FBSD_1.0); + /* * Wrapper around malloc_message() that avoids the need for * je_malloc_message(...) throughout the code. 
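[Editorial sketch, not part of the diff above: the compatibility shims in the jemalloc.c hunk keep the legacy experimental *allocm() API working by forwarding to the *allocx() functions, with the legacy symbols exported through the weak references shown earlier. Whether the allocm() prototypes and ALLOCM_* macros are still visible via <malloc_np.h> depends on the FreeBSD release, so treat those declarations as an assumption.]

#include <stddef.h>
#include <malloc_np.h>

int
legacy_path(void)
{
	void *p;
	size_t rsize;

	/* Legacy API (removed upstream in jemalloc 4.0, kept here for compatibility). */
	if (allocm(&p, &rsize, 4096, ALLOCM_ZERO) != ALLOCM_SUCCESS)
		return (-1);
	dallocm(p, 0);
	return (0);
}

int
modern_path(void)
{
	/* What the je_allocm()/je_dallocm() shims forward to. */
	void *p = mallocx(4096, MALLOCX_ZERO);

	if (p == NULL)
		return (-1);
	dallocx(p, 0);
	return (0);
}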
Index: head/contrib/jemalloc/VERSION =================================================================== --- head/contrib/jemalloc/VERSION (revision 299586) +++ head/contrib/jemalloc/VERSION (revision 299587) @@ -1 +1 @@ -4.1.0-1-g994da4232621dd1210fcf39bdf0d6454cefda473 +4.2.0-1-gdc7ff6306d7a15b53479e2fb8e5546404b82e6fc Index: head/contrib/jemalloc/doc/jemalloc.3 =================================================================== --- head/contrib/jemalloc/doc/jemalloc.3 (revision 299586) +++ head/contrib/jemalloc/doc/jemalloc.3 (revision 299587) @@ -1,2117 +1,2154 @@ '\" t .\" Title: JEMALLOC .\" Author: Jason Evans .\" Generator: DocBook XSL Stylesheets v1.76.1 -.\" Date: 02/28/2016 +.\" Date: 05/12/2016 .\" Manual: User Manual -.\" Source: jemalloc 4.1.0-1-g994da4232621dd1210fcf39bdf0d6454cefda473 +.\" Source: jemalloc 4.2.0-1-gdc7ff6306d7a15b53479e2fb8e5546404b82e6fc .\" Language: English .\" -.TH "JEMALLOC" "3" "02/28/2016" "jemalloc 4.1.0-1-g994da4232621" "User Manual" +.TH "JEMALLOC" "3" "05/12/2016" "jemalloc 4.2.0-1-gdc7ff6306d7a" "User Manual" .\" ----------------------------------------------------------------- .\" * Define some portability stuff .\" ----------------------------------------------------------------- .\" ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .\" http://bugs.debian.org/507673 .\" http://lists.gnu.org/archive/html/groff/2009-02/msg00013.html .\" ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .ie \n(.g .ds Aq \(aq .el .ds Aq ' .\" ----------------------------------------------------------------- .\" * set default formatting .\" ----------------------------------------------------------------- .\" disable hyphenation .nh .\" disable justification (adjust text to left margin only) .ad l .\" ----------------------------------------------------------------- .\" * MAIN CONTENT STARTS HERE * .\" ----------------------------------------------------------------- .SH "NAME" jemalloc \- general purpose memory allocation functions .SH "LIBRARY" .PP -This manual describes jemalloc 4\&.1\&.0\-1\-g994da4232621dd1210fcf39bdf0d6454cefda473\&. More information can be found at the +This manual describes jemalloc 4\&.2\&.0\-1\-gdc7ff6306d7a15b53479e2fb8e5546404b82e6fc\&. More information can be found at the \m[blue]\fBjemalloc website\fR\m[]\&\s-2\u[1]\d\s+2\&. .PP The following configuration options are enabled in libc\*(Aqs built\-in jemalloc: \fB\-\-enable\-fill\fR, \fB\-\-enable\-lazy\-lock\fR, \fB\-\-enable\-munmap\fR, \fB\-\-enable\-stats\fR, \fB\-\-enable\-tcache\fR, \fB\-\-enable\-tls\fR, \fB\-\-enable\-utrace\fR, and \fB\-\-enable\-xmalloc\fR\&. Additionally, \fB\-\-enable\-debug\fR is enabled in development versions of FreeBSD (controlled by the \fBMALLOC_PRODUCTION\fR make variable)\&. 
.SH "SYNOPSIS" .sp .ft B .nf #include #include .fi .ft .SS "Standard API" .HP \w'void\ *malloc('u .BI "void *malloc(size_t\ " "size" ");" .HP \w'void\ *calloc('u .BI "void *calloc(size_t\ " "number" ", size_t\ " "size" ");" .HP \w'int\ posix_memalign('u .BI "int posix_memalign(void\ **" "ptr" ", size_t\ " "alignment" ", size_t\ " "size" ");" .HP \w'void\ *aligned_alloc('u .BI "void *aligned_alloc(size_t\ " "alignment" ", size_t\ " "size" ");" .HP \w'void\ *realloc('u .BI "void *realloc(void\ *" "ptr" ", size_t\ " "size" ");" .HP \w'void\ free('u .BI "void free(void\ *" "ptr" ");" .SS "Non\-standard API" .HP \w'void\ *mallocx('u .BI "void *mallocx(size_t\ " "size" ", int\ " "flags" ");" .HP \w'void\ *rallocx('u .BI "void *rallocx(void\ *" "ptr" ", size_t\ " "size" ", int\ " "flags" ");" .HP \w'size_t\ xallocx('u .BI "size_t xallocx(void\ *" "ptr" ", size_t\ " "size" ", size_t\ " "extra" ", int\ " "flags" ");" .HP \w'size_t\ sallocx('u .BI "size_t sallocx(void\ *" "ptr" ", int\ " "flags" ");" .HP \w'void\ dallocx('u .BI "void dallocx(void\ *" "ptr" ", int\ " "flags" ");" .HP \w'void\ sdallocx('u .BI "void sdallocx(void\ *" "ptr" ", size_t\ " "size" ", int\ " "flags" ");" .HP \w'size_t\ nallocx('u .BI "size_t nallocx(size_t\ " "size" ", int\ " "flags" ");" .HP \w'int\ mallctl('u .BI "int mallctl(const\ char\ *" "name" ", void\ *" "oldp" ", size_t\ *" "oldlenp" ", void\ *" "newp" ", size_t\ " "newlen" ");" .HP \w'int\ mallctlnametomib('u .BI "int mallctlnametomib(const\ char\ *" "name" ", size_t\ *" "mibp" ", size_t\ *" "miblenp" ");" .HP \w'int\ mallctlbymib('u .BI "int mallctlbymib(const\ size_t\ *" "mib" ", size_t\ " "miblen" ", void\ *" "oldp" ", size_t\ *" "oldlenp" ", void\ *" "newp" ", size_t\ " "newlen" ");" .HP \w'void\ malloc_stats_print('u .BI "void malloc_stats_print(void\ " "(*write_cb)" "\ (void\ *,\ const\ char\ *), void\ *" "cbopaque" ", const\ char\ *" "opts" ");" .HP \w'size_t\ malloc_usable_size('u .BI "size_t malloc_usable_size(const\ void\ *" "ptr" ");" .HP \w'void\ (*malloc_message)('u .BI "void (*malloc_message)(void\ *" "cbopaque" ", const\ char\ *" "s" ");" .PP const char *\fImalloc_conf\fR; .SH "DESCRIPTION" .SS "Standard API" .PP The \fBmalloc\fR\fB\fR function allocates \fIsize\fR bytes of uninitialized memory\&. The allocated space is suitably aligned (after possible pointer coercion) for storage of any type of object\&. .PP The \fBcalloc\fR\fB\fR function allocates space for \fInumber\fR objects, each \fIsize\fR bytes in length\&. The result is identical to calling \fBmalloc\fR\fB\fR with an argument of \fInumber\fR * \fIsize\fR, with the exception that the allocated memory is explicitly initialized to zero bytes\&. .PP The \fBposix_memalign\fR\fB\fR function allocates \fIsize\fR bytes of memory such that the allocation\*(Aqs base address is a multiple of \fIalignment\fR, and returns the allocation in the value pointed to by \fIptr\fR\&. The requested \fIalignment\fR must be a power of 2 at least as large as sizeof(\fBvoid *\fR)\&. .PP The \fBaligned_alloc\fR\fB\fR function allocates \fIsize\fR bytes of memory such that the allocation\*(Aqs base address is a multiple of \fIalignment\fR\&. The requested \fIalignment\fR must be a power of 2\&. Behavior is undefined if \fIsize\fR is not an integral multiple of \fIalignment\fR\&. .PP The \fBrealloc\fR\fB\fR function changes the size of the previously allocated memory referenced by \fIptr\fR to \fIsize\fR bytes\&. The contents of the memory are unchanged up to the lesser of the new and old sizes\&. 
If the new size is larger, the contents of the newly allocated portion of the memory are undefined\&. Upon success, the memory referenced by \fIptr\fR is freed and a pointer to the newly allocated memory is returned\&. Note that \fBrealloc\fR\fB\fR may move the memory allocation, resulting in a different return value than \fIptr\fR\&. If \fIptr\fR is \fBNULL\fR, the \fBrealloc\fR\fB\fR function behaves identically to \fBmalloc\fR\fB\fR for the specified size\&. .PP The \fBfree\fR\fB\fR function causes the allocated memory referenced by \fIptr\fR to be made available for future allocations\&. If \fIptr\fR is \fBNULL\fR, no action occurs\&. .SS "Non\-standard API" .PP The \fBmallocx\fR\fB\fR, \fBrallocx\fR\fB\fR, \fBxallocx\fR\fB\fR, \fBsallocx\fR\fB\fR, \fBdallocx\fR\fB\fR, \fBsdallocx\fR\fB\fR, and \fBnallocx\fR\fB\fR functions all have a \fIflags\fR argument that can be used to specify options\&. The functions only check the options that are contextually relevant\&. Use bitwise or (|) operations to specify one or more of the following: .PP \fBMALLOCX_LG_ALIGN(\fR\fB\fIla\fR\fR\fB) \fR .RS 4 Align the memory allocation to start at an address that is a multiple of (1 << \fIla\fR)\&. This macro does not validate that \fIla\fR is within the valid range\&. .RE .PP \fBMALLOCX_ALIGN(\fR\fB\fIa\fR\fR\fB) \fR .RS 4 Align the memory allocation to start at an address that is a multiple of \fIa\fR, where \fIa\fR is a power of two\&. This macro does not validate that \fIa\fR is a power of 2\&. .RE .PP \fBMALLOCX_ZERO\fR .RS 4 Initialize newly allocated memory to contain zero bytes\&. In the growing reallocation case, the real size prior to reallocation defines the boundary between untouched bytes and those that are initialized to contain zero bytes\&. If this macro is absent, newly allocated memory is uninitialized\&. .RE .PP \fBMALLOCX_TCACHE(\fR\fB\fItc\fR\fR\fB) \fR .RS 4 Use the thread\-specific cache (tcache) specified by the identifier \fItc\fR, which must have been acquired via the "tcache\&.create" mallctl\&. This macro does not validate that \fItc\fR specifies a valid identifier\&. .RE .PP \fBMALLOCX_TCACHE_NONE\fR .RS 4 Do not use a thread\-specific cache (tcache)\&. Unless \fBMALLOCX_TCACHE(\fR\fB\fItc\fR\fR\fB)\fR or \fBMALLOCX_TCACHE_NONE\fR is specified, an automatically managed tcache will be used under many circumstances\&. This macro cannot be used in the same \fIflags\fR argument as \fBMALLOCX_TCACHE(\fR\fB\fItc\fR\fR\fB)\fR\&. .RE .PP \fBMALLOCX_ARENA(\fR\fB\fIa\fR\fR\fB) \fR .RS 4 Use the arena specified by the index \fIa\fR\&. This macro has no effect for regions that were allocated via an arena other than the one specified\&. This macro does not validate that \fIa\fR specifies an arena index in the valid range\&. .RE .PP The \fBmallocx\fR\fB\fR function allocates at least \fIsize\fR bytes of memory, and returns a pointer to the base address of the allocation\&. Behavior is undefined if \fIsize\fR is \fB0\fR\&. .PP The \fBrallocx\fR\fB\fR function resizes the allocation at \fIptr\fR to be at least \fIsize\fR bytes, and returns a pointer to the base address of the resulting allocation, which may or may not have moved from its original location\&. Behavior is undefined if \fIsize\fR is \fB0\fR\&. .PP The \fBxallocx\fR\fB\fR function resizes the allocation at \fIptr\fR in place to be at least \fIsize\fR bytes, and returns the real size of the allocation\&. 
If \fIextra\fR is non\-zero, an attempt is made to resize the allocation to be at least (\fIsize\fR + \fIextra\fR) bytes, though inability to allocate the extra byte(s) will not by itself result in failure to resize\&. Behavior is undefined if \fIsize\fR is \fB0\fR, or if (\fIsize\fR + \fIextra\fR > \fBSIZE_T_MAX\fR)\&. .PP The \fBsallocx\fR\fB\fR function returns the real size of the allocation at \fIptr\fR\&. .PP The \fBdallocx\fR\fB\fR function causes the memory referenced by \fIptr\fR to be made available for future allocations\&. .PP The \fBsdallocx\fR\fB\fR function is an extension of \fBdallocx\fR\fB\fR with a \fIsize\fR parameter to allow the caller to pass in the allocation size as an optimization\&. The minimum valid input size is the original requested size of the allocation, and the maximum valid input size is the corresponding value returned by \fBnallocx\fR\fB\fR or \fBsallocx\fR\fB\fR\&. .PP The \fBnallocx\fR\fB\fR function allocates no memory, but it performs the same size computation as the \fBmallocx\fR\fB\fR function, and returns the real size of the allocation that would result from the equivalent \fBmallocx\fR\fB\fR function call, or \fB0\fR if the inputs exceed the maximum supported size class and/or alignment\&. Behavior is undefined if \fIsize\fR is \fB0\fR\&. .PP The \fBmallctl\fR\fB\fR function provides a general interface for introspecting the memory allocator, as well as setting modifiable parameters and triggering actions\&. The period\-separated \fIname\fR argument specifies a location in a tree\-structured namespace; see the MALLCTL NAMESPACE section for documentation on the tree contents\&. To read a value, pass a pointer via \fIoldp\fR to adequate space to contain the value, and a pointer to its length via \fIoldlenp\fR; otherwise pass \fBNULL\fR and \fBNULL\fR\&. Similarly, to write a value, pass a pointer to the value via \fInewp\fR, and its length via \fInewlen\fR; otherwise pass \fBNULL\fR and \fB0\fR\&. .PP The \fBmallctlnametomib\fR\fB\fR function provides a way to avoid repeated name lookups for applications that repeatedly query the same portion of the namespace, by translating a name to a \(lqManagement Information Base\(rq (MIB) that can be passed repeatedly to \fBmallctlbymib\fR\fB\fR\&. Upon successful return from \fBmallctlnametomib\fR\fB\fR, \fImibp\fR contains an array of \fI*miblenp\fR integers, where \fI*miblenp\fR is the lesser of the number of components in \fIname\fR and the input value of \fI*miblenp\fR\&. Thus it is possible to pass a \fI*miblenp\fR that is smaller than the number of period\-separated name components, which results in a partial MIB that can be used as the basis for constructing a complete MIB\&. For name components that are integers (e\&.g\&. the 2 in "arenas\&.bin\&.2\&.size"), the corresponding MIB component will always be that integer\&. Therefore, it is legitimate to construct code like the following: .sp .if n \{\ .RS 4 .\} .nf unsigned nbins, i; size_t mib[4]; size_t len, miblen; len = sizeof(nbins); mallctl("arenas\&.nbins", &nbins, &len, NULL, 0); miblen = 4; mallctlnametomib("arenas\&.bin\&.0\&.size", mib, &miblen); for (i = 0; i < nbins; i++) { size_t bin_size; mib[2] = i; len = sizeof(bin_size); mallctlbymib(mib, miblen, &bin_size, &len, NULL, 0); /* Do something with bin_size\&.\&.\&. 
*/ } .fi .if n \{\ .RE .\} .PP The \fBmalloc_stats_print\fR\fB\fR function writes human\-readable summary statistics via the \fIwrite_cb\fR callback function pointer and \fIcbopaque\fR data passed to \fIwrite_cb\fR, or \fBmalloc_message\fR\fB\fR if \fIwrite_cb\fR is \fBNULL\fR\&. This function can be called repeatedly\&. General information that never changes during execution can be omitted by specifying "g" as a character within the \fIopts\fR string\&. Note that \fBmalloc_message\fR\fB\fR uses the \fBmallctl*\fR\fB\fR functions internally, so inconsistent statistics can be reported if multiple threads use these functions simultaneously\&. If \fB\-\-enable\-stats\fR is specified during configuration, \(lqm\(rq and \(lqa\(rq can be specified to omit merged arena and per arena statistics, respectively; \(lqb\(rq, \(lql\(rq, and \(lqh\(rq can be specified to omit per size class statistics for bins, large objects, and huge objects, respectively\&. Unrecognized characters are silently ignored\&. Note that thread caching may prevent some statistics from being completely up to date, since extra locking would be required to merge counters that track thread cache operations\&. .PP The \fBmalloc_usable_size\fR\fB\fR function returns the usable size of the allocation pointed to by \fIptr\fR\&. The return value may be larger than the size that was requested during allocation\&. The \fBmalloc_usable_size\fR\fB\fR function is not a mechanism for in\-place \fBrealloc\fR\fB\fR; rather it is provided solely as a tool for introspection purposes\&. Any discrepancy between the requested allocation size and the size reported by \fBmalloc_usable_size\fR\fB\fR should not be depended on, since such behavior is entirely implementation\-dependent\&. .SH "TUNING" .PP Once, when the first call is made to one of the memory allocation routines, the allocator initializes its internals based in part on various options that can be specified at compile\- or run\-time\&. .PP The string specified via \fB\-\-with\-malloc\-conf\fR, the string pointed to by the global variable \fImalloc_conf\fR, the \(lqname\(rq of the file referenced by the symbolic link named /etc/malloc\&.conf, and the value of the environment variable \fBMALLOC_CONF\fR, will be interpreted, in that order, from left to right as options\&. Note that \fImalloc_conf\fR may be read before \fBmain\fR\fB\fR is entered, so the declaration of \fImalloc_conf\fR should specify an initializer that contains the final value to be read by jemalloc\&. \fB\-\-with\-malloc\-conf\fR and \fImalloc_conf\fR are compile\-time mechanisms, whereas /etc/malloc\&.conf and \fBMALLOC_CONF\fR can be safely set any time prior to program invocation\&. .PP An options string is a comma\-separated list of option:value pairs\&. There is one key corresponding to each "opt\&.*" mallctl (see the MALLCTL NAMESPACE section for options documentation)\&. For example, abort:true,narenas:1 sets the "opt\&.abort" and "opt\&.narenas" options\&. Some options have boolean values (true/false), others have integer values (base 8, 10, or 16, depending on prefix), and yet others have raw string values\&. .SH "IMPLEMENTATION NOTES" .PP Traditionally, allocators have used \fBsbrk\fR(2) to obtain memory, which is suboptimal for several reasons, including race conditions, increased fragmentation, and artificial limitations on maximum usable memory\&. 
If \fBsbrk\fR(2) is supported by the operating system, this allocator uses both \fBmmap\fR(2) and \fBsbrk\fR(2), in that order of preference; otherwise only \fBmmap\fR(2) is used\&. .PP This allocator uses multiple arenas in order to reduce lock contention for threaded programs on multi\-processor systems\&. This works well with regard to threading scalability, but incurs some costs\&. There is a small fixed per\-arena overhead, and additionally, arenas manage memory completely independently of each other, which means a small fixed increase in overall memory fragmentation\&. These overheads are not generally an issue, given the number of arenas normally used\&. Note that using substantially more arenas than the default is not likely to improve performance, mainly due to reduced cache performance\&. However, it may make sense to reduce the number of arenas if an application does not make much use of the allocation functions\&. .PP In addition to multiple arenas, unless \fB\-\-disable\-tcache\fR is specified during configuration, this allocator supports thread\-specific caching for small and large objects, in order to make it possible to completely avoid synchronization for most allocation requests\&. Such caching allows very fast allocation in the common case, but it increases memory usage and fragmentation, since a bounded number of objects can remain allocated in each thread cache\&. .PP Memory is conceptually broken into equal\-sized chunks, where the chunk size is a power of two that is greater than the page size\&. Chunks are always aligned to multiples of the chunk size\&. This alignment makes it possible to find metadata for user objects very quickly\&. User objects are broken into three categories according to size: small, large, and huge\&. Multiple small and large objects can reside within a single chunk, whereas huge objects each have one or more chunks backing them\&. Each chunk that contains small and/or large objects tracks its contents as runs of contiguous pages (unused, backing a set of small objects, or backing one large object)\&. The combination of chunk alignment and chunk page maps makes it possible to determine all metadata regarding small and large allocations in constant time\&. .PP Small objects are managed in groups by page runs\&. Each run maintains a bitmap to track which regions are in use\&. Allocation requests that are no more than half the quantum (8 or 16, depending on architecture) are rounded up to the nearest power of two that is at least sizeof(\fBdouble\fR)\&. All other object size classes are multiples of the quantum, spaced such that there are four size classes for each doubling in size, which limits internal fragmentation to approximately 20% for all but the smallest size classes\&. Small size classes are smaller than four times the page size, large size classes are smaller than the chunk size (see the "opt\&.lg_chunk" -option), and huge size classes extend from the chunk size up to one size class less than the full address space size\&. +option), and huge size classes extend from the chunk size up to the largest size class that does not exceed +\fBPTRDIFF_MAX\fR\&. .PP Allocations are packed tightly together, which can be an issue for multi\-threaded applications\&. If you need to assure that allocations do not suffer from cacheline sharing, round your allocation requests up to the nearest multiple of the cacheline size, or specify cacheline alignment when allocating\&. 
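For the cacheline-sharing concern just described, alignment can be requested through either API. A minimal sketch, assuming a 64-byte cacheline (the CACHELINE constant is an assumption here, not something the allocator exports) and assuming <malloc_np.h> declares mallocx(), dallocx(), and MALLOCX_ALIGN() as on FreeBSD:

    #include <stdio.h>
    #include <stdlib.h>
    #include <malloc_np.h>		/* mallocx(), dallocx(), MALLOCX_ALIGN() */

    #define	CACHELINE	64	/* assumed cacheline size */

    int
    main(void)
    {
    	void *a, *b;

    	/* Standard API: explicit alignment; size must be a multiple of it. */
    	a = aligned_alloc(CACHELINE, 2 * CACHELINE);

    	/* Non-standard API: the alignment travels in the flags argument. */
    	b = mallocx(100, MALLOCX_ALIGN(CACHELINE));

    	if (a == NULL || b == NULL)
    		return (1);
    	printf("a=%p b=%p\n", a, b);
    	free(a);
    	dallocx(b, 0);
    	return (0);
    }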
.PP The \fBrealloc\fR\fB\fR, \fBrallocx\fR\fB\fR, and \fBxallocx\fR\fB\fR functions may resize allocations without moving them under limited circumstances\&. Unlike the \fB*allocx\fR\fB\fR API, the standard API does not officially round up the usable size of an allocation to the nearest size class, so technically it is necessary to call \fBrealloc\fR\fB\fR to grow e\&.g\&. a 9\-byte allocation to 16 bytes, or shrink a 16\-byte allocation to 9 bytes\&. Growth and shrinkage trivially succeeds in place as long as the pre\-size and post\-size both round up to the same size class\&. No other API guarantees are made regarding in\-place resizing, but the current implementation also tries to resize large and huge allocations in place, as long as the pre\-size and post\-size are both large or both huge\&. In such cases shrinkage always succeeds for large size classes, but for huge size classes the chunk allocator must support splitting (see "arena\&.\&.chunk_hooks")\&. Growth only succeeds if the trailing memory is currently available, and additionally for huge size classes the chunk allocator must support merging\&. .PP Assuming 2 MiB chunks, 4 KiB pages, and a 16\-byte quantum on a 64\-bit system, the size classes in each category are as shown in Table 1\&. .sp .it 1 an-trap .nr an-no-space-flag 1 .nr an-break-flag 1 .br .B Table\ \&1.\ \&Size classes .TS allbox tab(:); lB rB lB. T{ Category T}:T{ Spacing T}:T{ Size T} .T& l r l ^ r l ^ r l ^ r l ^ r l ^ r l ^ r l ^ r l ^ r l l r l ^ r l ^ r l ^ r l ^ r l ^ r l ^ r l ^ r l l r l ^ r l ^ r l ^ r l ^ r l ^ r l +^ r l +^ r l ^ r l. T{ Small T}:T{ lg T}:T{ [8] T} :T{ 16 T}:T{ [16, 32, 48, 64, 80, 96, 112, 128] T} :T{ 32 T}:T{ [160, 192, 224, 256] T} :T{ 64 T}:T{ [320, 384, 448, 512] T} :T{ 128 T}:T{ [640, 768, 896, 1024] T} :T{ 256 T}:T{ [1280, 1536, 1792, 2048] T} :T{ 512 T}:T{ [2560, 3072, 3584, 4096] T} :T{ 1 KiB T}:T{ [5 KiB, 6 KiB, 7 KiB, 8 KiB] T} :T{ 2 KiB T}:T{ [10 KiB, 12 KiB, 14 KiB] T} T{ Large T}:T{ 2 KiB T}:T{ [16 KiB] T} :T{ 4 KiB T}:T{ [20 KiB, 24 KiB, 28 KiB, 32 KiB] T} :T{ 8 KiB T}:T{ [40 KiB, 48 KiB, 54 KiB, 64 KiB] T} :T{ 16 KiB T}:T{ [80 KiB, 96 KiB, 112 KiB, 128 KiB] T} :T{ 32 KiB T}:T{ [160 KiB, 192 KiB, 224 KiB, 256 KiB] T} :T{ 64 KiB T}:T{ [320 KiB, 384 KiB, 448 KiB, 512 KiB] T} :T{ 128 KiB T}:T{ [640 KiB, 768 KiB, 896 KiB, 1 MiB] T} :T{ 256 KiB T}:T{ [1280 KiB, 1536 KiB, 1792 KiB] T} T{ Huge T}:T{ 256 KiB T}:T{ [2 MiB] T} :T{ 512 KiB T}:T{ [2560 KiB, 3 MiB, 3584 KiB, 4 MiB] T} :T{ 1 MiB T}:T{ [5 MiB, 6 MiB, 7 MiB, 8 MiB] T} :T{ 2 MiB T}:T{ [10 MiB, 12 MiB, 14 MiB, 16 MiB] T} :T{ 4 MiB T}:T{ [20 MiB, 24 MiB, 28 MiB, 32 MiB] T} :T{ 8 MiB T}:T{ [40 MiB, 48 MiB, 56 MiB, 64 MiB] T} :T{ \&.\&.\&. T}:T{ \&.\&.\&. T} +:T{ +512 PiB +T}:T{ +[2560 PiB, 3 EiB, 3584 PiB, 4 EiB] +T} +:T{ +1 EiB +T}:T{ +[5 EiB, 6 EiB, 7 EiB] +T} .TE .sp 1 .SH "MALLCTL NAMESPACE" .PP The following names are defined in the namespace accessible via the \fBmallctl*\fR\fB\fR functions\&. Value types are specified in parentheses, their readable/writable statuses are encoded as rw, r\-, \-w, or \-\-, and required build configuration flags follow, if any\&. A name element encoded as or indicates an integer component, where the integer varies from 0 to some upper value that must be determined via introspection\&. In the case of "stats\&.arenas\&.\&.*", equal to "arenas\&.narenas" can be used to access the summation of statistics from all arenas\&. Take special note of the "epoch" mallctl, which controls refreshing of cached dynamic statistics\&. 
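For example, writing "epoch" before reading a statistic refreshes the cached data so the reported value is current. A minimal sketch, again assuming <malloc_np.h> provides mallctl() as on FreeBSD; the "stats.allocated" read requires a build with --enable-stats:

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <malloc_np.h>		/* mallctl() on FreeBSD */

    int
    main(void)
    {
    	const char *version;
    	uint64_t epoch;
    	size_t allocated, len;

    	/* Read-only node: returns a pointer to the version string. */
    	len = sizeof(version);
    	if (mallctl("version", &version, &len, NULL, 0) == 0)
    		printf("jemalloc %s\n", version);

    	/* Writing "epoch" refreshes the data behind cached statistics. */
    	epoch = 1;
    	len = sizeof(epoch);
    	mallctl("epoch", &epoch, &len, &epoch, sizeof(epoch));

    	/* "stats.allocated" is available when built with --enable-stats. */
    	len = sizeof(allocated);
    	if (mallctl("stats.allocated", &allocated, &len, NULL, 0) == 0)
    		printf("allocated: %zu bytes\n", allocated);
    	return (0);
    }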
.PP "version" (\fBconst char *\fR) r\- .RS 4 Return the jemalloc version string\&. .RE .PP "epoch" (\fBuint64_t\fR) rw .RS 4 If a value is passed in, refresh the data from which the \fBmallctl*\fR\fB\fR functions report values, and increment the epoch\&. Return the current epoch\&. This is useful for detecting whether another thread caused a refresh\&. .RE .PP "config\&.cache_oblivious" (\fBbool\fR) r\- .RS 4 \fB\-\-enable\-cache\-oblivious\fR was specified during build configuration\&. .RE .PP "config\&.debug" (\fBbool\fR) r\- .RS 4 \fB\-\-enable\-debug\fR was specified during build configuration\&. .RE .PP "config\&.fill" (\fBbool\fR) r\- .RS 4 \fB\-\-enable\-fill\fR was specified during build configuration\&. .RE .PP "config\&.lazy_lock" (\fBbool\fR) r\- .RS 4 \fB\-\-enable\-lazy\-lock\fR was specified during build configuration\&. .RE .PP "config\&.malloc_conf" (\fBconst char *\fR) r\- .RS 4 Embedded configure\-time\-specified run\-time options string, empty unless \fB\-\-with\-malloc\-conf\fR was specified during build configuration\&. .RE .PP "config\&.munmap" (\fBbool\fR) r\- .RS 4 \fB\-\-enable\-munmap\fR was specified during build configuration\&. .RE .PP "config\&.prof" (\fBbool\fR) r\- .RS 4 \fB\-\-enable\-prof\fR was specified during build configuration\&. .RE .PP "config\&.prof_libgcc" (\fBbool\fR) r\- .RS 4 \fB\-\-disable\-prof\-libgcc\fR was not specified during build configuration\&. .RE .PP "config\&.prof_libunwind" (\fBbool\fR) r\- .RS 4 \fB\-\-enable\-prof\-libunwind\fR was specified during build configuration\&. .RE .PP "config\&.stats" (\fBbool\fR) r\- .RS 4 \fB\-\-enable\-stats\fR was specified during build configuration\&. .RE .PP "config\&.tcache" (\fBbool\fR) r\- .RS 4 \fB\-\-disable\-tcache\fR was not specified during build configuration\&. .RE .PP "config\&.tls" (\fBbool\fR) r\- .RS 4 \fB\-\-disable\-tls\fR was not specified during build configuration\&. .RE .PP "config\&.utrace" (\fBbool\fR) r\- .RS 4 \fB\-\-enable\-utrace\fR was specified during build configuration\&. .RE .PP "config\&.valgrind" (\fBbool\fR) r\- .RS 4 \fB\-\-enable\-valgrind\fR was specified during build configuration\&. .RE .PP "config\&.xmalloc" (\fBbool\fR) r\- .RS 4 \fB\-\-enable\-xmalloc\fR was specified during build configuration\&. .RE .PP "opt\&.abort" (\fBbool\fR) r\- .RS 4 Abort\-on\-warning enabled/disabled\&. If true, most warnings are fatal\&. The process will call \fBabort\fR(3) in these cases\&. This option is disabled by default unless \fB\-\-enable\-debug\fR is specified during configuration, in which case it is enabled by default\&. .RE .PP "opt\&.dss" (\fBconst char *\fR) r\- .RS 4 dss (\fBsbrk\fR(2)) allocation precedence as related to \fBmmap\fR(2) allocation\&. The following settings are supported if \fBsbrk\fR(2) is supported by the operating system: \(lqdisabled\(rq, \(lqprimary\(rq, and \(lqsecondary\(rq; otherwise only \(lqdisabled\(rq is supported\&. The default is \(lqsecondary\(rq if \fBsbrk\fR(2) is supported by the operating system; \(lqdisabled\(rq otherwise\&. .RE .PP "opt\&.lg_chunk" (\fBsize_t\fR) r\- .RS 4 Virtual memory chunk size (log base 2)\&. If a chunk size outside the supported size range is specified, the size is silently clipped to the minimum/maximum supported size\&. The default chunk size is 2 MiB (2^21)\&. .RE .PP "opt\&.narenas" (\fBunsigned\fR) r\- .RS 4 Maximum number of arenas to use for automatic multiplexing of threads and arenas\&. The default is four times the number of CPUs, or one if there is a single CPU\&. 
.RE .PP "opt\&.purge" (\fBconst char *\fR) r\- .RS 4 Purge mode is \(lqratio\(rq (default) or \(lqdecay\(rq\&. See "opt\&.lg_dirty_mult" for details of the ratio mode\&. See "opt\&.decay_time" for details of the decay mode\&. .RE .PP "opt\&.lg_dirty_mult" (\fBssize_t\fR) r\- .RS 4 Per\-arena minimum ratio (log base 2) of active to dirty pages\&. Some dirty unused pages may be allowed to accumulate, within the limit set by the ratio (or one chunk worth of dirty pages, whichever is greater), before informing the kernel about some of those pages via \fBmadvise\fR(2) or a similar system call\&. This provides the kernel with sufficient information to recycle dirty pages if physical memory becomes scarce and the pages remain unused\&. The default minimum ratio is 8:1 (2^3:1); an option value of \-1 will disable dirty page purging\&. See "arenas\&.lg_dirty_mult" and "arena\&.\&.lg_dirty_mult" for related dynamic control options\&. .RE .PP "opt\&.decay_time" (\fBssize_t\fR) r\- .RS 4 Approximate time in seconds from the creation of a set of unused dirty pages until an equivalent set of unused dirty pages is purged and/or reused\&. The pages are incrementally purged according to a sigmoidal decay curve that starts and ends with zero purge rate\&. A decay time of 0 causes all unused dirty pages to be purged immediately upon creation\&. A decay time of \-1 disables purging\&. The default decay time is 10 seconds\&. See "arenas\&.decay_time" and "arena\&.\&.decay_time" for related dynamic control options\&. .RE .PP "opt\&.stats_print" (\fBbool\fR) r\- .RS 4 Enable/disable statistics printing at exit\&. If enabled, the \fBmalloc_stats_print\fR\fB\fR function is called at program exit via an \fBatexit\fR(3) function\&. If \fB\-\-enable\-stats\fR is specified during configuration, this has the potential to cause deadlock for a multi\-threaded process that exits while one or more threads are executing in the memory allocation functions\&. Furthermore, \fBatexit\fR\fB\fR may allocate memory during application initialization and then deadlock internally when jemalloc in turn calls -\fBatexit\fR\fB\fR, so this option is not univerally usable (though the application can register its own +\fBatexit\fR\fB\fR, so this option is not universally usable (though the application can register its own \fBatexit\fR\fB\fR function with equivalent functionality)\&. Therefore, this option should only be used with care; it is primarily intended as a performance tuning aid during application development\&. This option is disabled by default\&. .RE .PP "opt\&.junk" (\fBconst char *\fR) r\- [\fB\-\-enable\-fill\fR] .RS 4 Junk filling\&. If set to "alloc", each byte of uninitialized allocated memory will be initialized to 0xa5\&. If set to "free", all deallocated memory will be initialized to 0x5a\&. If set to "true", both allocated and deallocated memory will be initialized, and if set to "false", junk filling be disabled entirely\&. This is intended for debugging and will impact performance negatively\&. This option is "false" by default unless \fB\-\-enable\-debug\fR is specified during configuration, in which case it is "true" by default unless running inside \m[blue]\fBValgrind\fR\m[]\&\s-2\u[2]\d\s+2\&. .RE .PP "opt\&.quarantine" (\fBsize_t\fR) r\- [\fB\-\-enable\-fill\fR] .RS 4 Per thread quarantine size in bytes\&. If non\-zero, each thread maintains a FIFO object quarantine that stores up to the specified number of bytes of memory\&. 
The quarantined memory is not freed until it is released from quarantine, though it is immediately junk\-filled if the "opt\&.junk" option is enabled\&. This feature is of particular use in combination with \m[blue]\fBValgrind\fR\m[]\&\s-2\u[2]\d\s+2, which can detect attempts to access quarantined objects\&. This is intended for debugging and will impact performance negatively\&. The default quarantine size is 0 unless running inside Valgrind, in which case the default is 16 MiB\&. .RE .PP "opt\&.redzone" (\fBbool\fR) r\- [\fB\-\-enable\-fill\fR] .RS 4 Redzones enabled/disabled\&. If enabled, small allocations have redzones before and after them\&. Furthermore, if the "opt\&.junk" option is enabled, the redzones are checked for corruption during deallocation\&. However, the primary intended purpose of this feature is to be used in combination with \m[blue]\fBValgrind\fR\m[]\&\s-2\u[2]\d\s+2, which needs redzones in order to do effective buffer overflow/underflow detection\&. This option is intended for debugging and will impact performance negatively\&. This option is disabled by default unless running inside Valgrind\&. .RE .PP "opt\&.zero" (\fBbool\fR) r\- [\fB\-\-enable\-fill\fR] .RS 4 Zero filling enabled/disabled\&. If enabled, each byte of uninitialized allocated memory will be initialized to 0\&. Note that this initialization only happens once for each byte, so \fBrealloc\fR\fB\fR and \fBrallocx\fR\fB\fR calls do not zero memory that was previously allocated\&. This is intended for debugging and will impact performance negatively\&. This option is disabled by default\&. .RE .PP "opt\&.utrace" (\fBbool\fR) r\- [\fB\-\-enable\-utrace\fR] .RS 4 Allocation tracing based on \fButrace\fR(2) enabled/disabled\&. This option is disabled by default\&. .RE .PP "opt\&.xmalloc" (\fBbool\fR) r\- [\fB\-\-enable\-xmalloc\fR] .RS 4 Abort\-on\-out\-of\-memory enabled/disabled\&. If enabled, rather than returning failure for any allocation function, display a diagnostic message on \fBSTDERR_FILENO\fR and cause the program to drop core (using \fBabort\fR(3))\&. If an application is designed to depend on this behavior, set the option at compile time by including the following in the source code: .sp .if n \{\ .RS 4 .\} .nf malloc_conf = "xmalloc:true"; .fi .if n \{\ .RE .\} .sp This option is disabled by default\&. .RE .PP "opt\&.tcache" (\fBbool\fR) r\- [\fB\-\-enable\-tcache\fR] .RS 4 Thread\-specific caching (tcache) enabled/disabled\&. When there are multiple threads, each thread uses a tcache for objects up to a certain size\&. Thread\-specific caching allows many allocations to be satisfied without performing any thread synchronization, at the cost of increased memory use\&. See the "opt\&.lg_tcache_max" option for related tuning information\&. This option is enabled by default unless running inside \m[blue]\fBValgrind\fR\m[]\&\s-2\u[2]\d\s+2, in which case it is forcefully disabled\&. .RE .PP "opt\&.lg_tcache_max" (\fBsize_t\fR) r\- [\fB\-\-enable\-tcache\fR] .RS 4 Maximum size class (log base 2) to cache in the thread\-specific cache (tcache)\&. At a minimum, all small size classes are cached, and at a maximum all large size classes are cached\&. The default maximum is 32 KiB (2^15)\&. .RE .PP "opt\&.prof" (\fBbool\fR) r\- [\fB\-\-enable\-prof\fR] .RS 4 Memory profiling enabled/disabled\&. If enabled, profile memory allocation activity\&. See the "opt\&.prof_active" option for on\-the\-fly activation/deactivation\&. See the "opt\&.lg_prof_sample" option for probabilistic sampling control\&. 
See the "opt\&.prof_accum" option for control of cumulative sample reporting\&. See the "opt\&.lg_prof_interval" option for information on interval\-triggered profile dumping, the "opt\&.prof_gdump" option for information on high\-water\-triggered profile dumping, and the "opt\&.prof_final" option for final profile dumping\&. Profile output is compatible with the \fBjeprof\fR command, which is based on the \fBpprof\fR that is developed as part of the \m[blue]\fBgperftools package\fR\m[]\&\s-2\u[3]\d\s+2\&. See HEAP PROFILE FORMAT for heap profile format documentation\&. .RE .PP "opt\&.prof_prefix" (\fBconst char *\fR) r\- [\fB\-\-enable\-prof\fR] .RS 4 Filename prefix for profile dumps\&. If the prefix is set to the empty string, no automatic dumps will occur; this is primarily useful for disabling the automatic final heap dump (which also disables leak reporting, if enabled)\&. The default prefix is jeprof\&. .RE .PP "opt\&.prof_active" (\fBbool\fR) r\- [\fB\-\-enable\-prof\fR] .RS 4 Profiling activated/deactivated\&. This is a secondary control mechanism that makes it possible to start the application with profiling enabled (see the "opt\&.prof" option) but inactive, then toggle profiling at any time during program execution with the "prof\&.active" mallctl\&. This option is enabled by default\&. .RE .PP "opt\&.prof_thread_active_init" (\fBbool\fR) r\- [\fB\-\-enable\-prof\fR] .RS 4 Initial setting for "thread\&.prof\&.active" in newly created threads\&. The initial setting for newly created threads can also be changed during execution via the "prof\&.thread_active_init" mallctl\&. This option is enabled by default\&. .RE .PP "opt\&.lg_prof_sample" (\fBsize_t\fR) r\- [\fB\-\-enable\-prof\fR] .RS 4 Average interval (log base 2) between allocation samples, as measured in bytes of allocation activity\&. Increasing the sampling interval decreases profile fidelity, but also decreases the computational overhead\&. The default sample interval is 512 KiB (2^19 B)\&. .RE .PP "opt\&.prof_accum" (\fBbool\fR) r\- [\fB\-\-enable\-prof\fR] .RS 4 Reporting of cumulative object/byte counts in profile dumps enabled/disabled\&. If this option is enabled, every unique backtrace must be stored for the duration of execution\&. Depending on the application, this can impose a large memory overhead, and the cumulative counts are not always of interest\&. This option is disabled by default\&. .RE .PP "opt\&.lg_prof_interval" (\fBssize_t\fR) r\- [\fB\-\-enable\-prof\fR] .RS 4 Average interval (log base 2) between memory profile dumps, as measured in bytes of allocation activity\&. The actual interval between dumps may be sporadic because decentralized allocation counters are used to avoid synchronization bottlenecks\&. Profiles are dumped to files named according to the pattern \&.\&.\&.i\&.heap, where is controlled by the "opt\&.prof_prefix" option\&. By default, interval\-triggered profile dumping is disabled (encoded as \-1)\&. .RE .PP "opt\&.prof_gdump" (\fBbool\fR) r\- [\fB\-\-enable\-prof\fR] .RS 4 Set the initial state of "prof\&.gdump", which when enabled triggers a memory profile dump every time the total virtual memory exceeds the previous maximum\&. This option is disabled by default\&. .RE .PP "opt\&.prof_final" (\fBbool\fR) r\- [\fB\-\-enable\-prof\fR] .RS 4 Use an \fBatexit\fR(3) function to dump final memory usage to a file named according to the pattern \&.\&.\&.f\&.heap, where is controlled by the "opt\&.prof_prefix" option\&. 
Note that \fBatexit\fR\fB\fR may allocate memory during application initialization and then deadlock internally when jemalloc in turn calls -\fBatexit\fR\fB\fR, so this option is not univerally usable (though the application can register its own +\fBatexit\fR\fB\fR, so this option is not universally usable (though the application can register its own \fBatexit\fR\fB\fR function with equivalent functionality)\&. This option is disabled by default\&. .RE .PP "opt\&.prof_leak" (\fBbool\fR) r\- [\fB\-\-enable\-prof\fR] .RS 4 Leak reporting enabled/disabled\&. If enabled, use an \fBatexit\fR(3) function to report memory leaks detected by allocation sampling\&. See the "opt\&.prof" option for information on analyzing heap profile output\&. This option is disabled by default\&. .RE .PP "thread\&.arena" (\fBunsigned\fR) rw .RS 4 Get or set the arena associated with the calling thread\&. If the specified arena was not initialized beforehand (see the "arenas\&.initialized" mallctl), it will be automatically initialized as a side effect of calling this interface\&. .RE .PP "thread\&.allocated" (\fBuint64_t\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Get the total number of bytes ever allocated by the calling thread\&. This counter has the potential to wrap around; it is up to the application to appropriately interpret the counter in such cases\&. .RE .PP "thread\&.allocatedp" (\fBuint64_t *\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Get a pointer to the the value that is returned by the "thread\&.allocated" mallctl\&. This is useful for avoiding the overhead of repeated \fBmallctl*\fR\fB\fR calls\&. .RE .PP "thread\&.deallocated" (\fBuint64_t\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Get the total number of bytes ever deallocated by the calling thread\&. This counter has the potential to wrap around; it is up to the application to appropriately interpret the counter in such cases\&. .RE .PP "thread\&.deallocatedp" (\fBuint64_t *\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Get a pointer to the the value that is returned by the "thread\&.deallocated" mallctl\&. This is useful for avoiding the overhead of repeated \fBmallctl*\fR\fB\fR calls\&. .RE .PP "thread\&.tcache\&.enabled" (\fBbool\fR) rw [\fB\-\-enable\-tcache\fR] .RS 4 Enable/disable calling thread\*(Aqs tcache\&. The tcache is implicitly flushed as a side effect of becoming disabled (see "thread\&.tcache\&.flush")\&. .RE .PP "thread\&.tcache\&.flush" (\fBvoid\fR) \-\- [\fB\-\-enable\-tcache\fR] .RS 4 Flush calling thread\*(Aqs thread\-specific cache (tcache)\&. This interface releases all cached objects and internal data structures associated with the calling thread\*(Aqs tcache\&. Ordinarily, this interface need not be called, since automatic periodic incremental garbage collection occurs, and the thread cache is automatically discarded when a thread exits\&. However, garbage collection is triggered by allocation activity, so it is possible for a thread that stops allocating/deallocating to retain its cache indefinitely, in which case the developer may find manual flushing useful\&. .RE .PP "thread\&.prof\&.name" (\fBconst char *\fR) r\- or \-w [\fB\-\-enable\-prof\fR] .RS 4 Get/set the descriptive name associated with the calling thread in memory profile dumps\&. An internal copy of the name string is created, so the input string need not be maintained after this interface completes execution\&. 
The output string of this interface should be copied for non\-ephemeral uses, because multiple implementation details can cause asynchronous string deallocation\&. Furthermore, each invocation of this interface can only read or write; simultaneous read/write is not supported due to string lifetime limitations\&. The name string must be nil\-terminated and comprised only of characters in the sets recognized by \fBisgraph\fR(3) and \fBisblank\fR(3)\&. .RE .PP "thread\&.prof\&.active" (\fBbool\fR) rw [\fB\-\-enable\-prof\fR] .RS 4 Control whether sampling is currently active for the calling thread\&. This is an activation mechanism in addition to "prof\&.active"; both must be active for the calling thread to sample\&. This flag is enabled by default\&. .RE .PP "tcache\&.create" (\fBunsigned\fR) r\- [\fB\-\-enable\-tcache\fR] .RS 4 Create an explicit thread\-specific cache (tcache) and return an identifier that can be passed to the \fBMALLOCX_TCACHE(\fR\fB\fItc\fR\fR\fB)\fR macro to explicitly use the specified cache rather than the automatically managed one that is used by default\&. Each explicit cache can be used by only one thread at a time; the application must assure that this constraint holds\&. .RE .PP "tcache\&.flush" (\fBunsigned\fR) \-w [\fB\-\-enable\-tcache\fR] .RS 4 Flush the specified thread\-specific cache (tcache)\&. The same considerations apply to this interface as to "thread\&.tcache\&.flush", except that the tcache will never be automatically discarded\&. .RE .PP "tcache\&.destroy" (\fBunsigned\fR) \-w [\fB\-\-enable\-tcache\fR] .RS 4 Flush the specified thread\-specific cache (tcache) and make the identifier available for use during a future tcache creation\&. .RE .PP "arena\&.\&.purge" (\fBvoid\fR) \-\- .RS 4 Purge all unused dirty pages for arena , or for all arenas if equals "arenas\&.narenas"\&. .RE .PP "arena\&.\&.decay" (\fBvoid\fR) \-\- .RS 4 Trigger decay\-based purging of unused dirty pages for arena , or for all arenas if equals "arenas\&.narenas"\&. The proportion of unused dirty pages to be purged depends on the current time; see "opt\&.decay_time" for details\&. .RE .PP +"arena\&.\&.reset" (\fBvoid\fR) \-\- +.RS 4 +Discard all of the arena\*(Aqs extant allocations\&. This interface can only be used with arenas created via +"arenas\&.extend"\&. None of the arena\*(Aqs discarded/cached allocations may accessed afterward\&. As part of this requirement, all thread caches which were used to allocate/deallocate in conjunction with the arena must be flushed beforehand\&. This interface cannot be used if running inside Valgrind, nor if the +quarantine +size is non\-zero\&. +.RE +.PP "arena\&.\&.dss" (\fBconst char *\fR) rw .RS 4 Set the precedence of dss allocation as related to mmap allocation for arena , or for all arenas if equals "arenas\&.narenas"\&. See "opt\&.dss" for supported settings\&. .RE .PP "arena\&.\&.lg_dirty_mult" (\fBssize_t\fR) rw .RS 4 Current per\-arena minimum ratio (log base 2) of active to dirty pages for arena \&. Each time this interface is set and the ratio is increased, pages are synchronously purged as necessary to impose the new ratio\&. See "opt\&.lg_dirty_mult" for additional information\&. .RE .PP "arena\&.\&.decay_time" (\fBssize_t\fR) rw .RS 4 Current per\-arena approximate time in seconds from the creation of a set of unused dirty pages until an equivalent set of unused dirty pages is purged and/or reused\&. 
Each time this interface is set, all currently unused dirty pages are considered to have fully decayed, which causes immediate purging of all unused dirty pages unless the decay time is set to \-1 (i\&.e\&. purging disabled)\&. See "opt\&.decay_time" for additional information\&. .RE .PP "arena\&.\&.chunk_hooks" (\fBchunk_hooks_t\fR) rw .RS 4 Get or set the chunk management hook functions for arena \&. The functions must be capable of operating on all extant chunks associated with arena , usually by passing unknown chunks to the replaced functions\&. In practice, it is feasible to control allocation for arenas created via "arenas\&.extend" such that all chunks originate from an application\-supplied chunk allocator (by setting custom chunk hook functions just after arena creation), but the automatically created arenas may have already created chunks prior to the application having an opportunity to take over chunk allocation\&. .sp .if n \{\ .RS 4 .\} .nf typedef struct { chunk_alloc_t *alloc; chunk_dalloc_t *dalloc; chunk_commit_t *commit; chunk_decommit_t *decommit; chunk_purge_t *purge; chunk_split_t *split; chunk_merge_t *merge; } chunk_hooks_t; .fi .if n \{\ .RE .\} .sp The \fBchunk_hooks_t\fR structure comprises function pointers which are described individually below\&. jemalloc uses these functions to manage chunk lifetime, which starts off with allocation of mapped committed memory, in the simplest case followed by deallocation\&. However, there are performance and platform reasons to retain chunks for later reuse\&. Cleanup attempts cascade from deallocation to decommit to purging, which gives the chunk management functions opportunities to reject the most permanent cleanup operations in favor of less permanent (and often less costly) operations\&. The chunk splitting and merging operations can also be opted out of, but this is mainly intended to support platforms on which virtual memory mappings provided by the operating system kernel do not automatically coalesce and split, e\&.g\&. Windows\&. .HP \w'typedef\ void\ *(chunk_alloc_t)('u .BI "typedef void *(chunk_alloc_t)(void\ *" "chunk" ", size_t\ " "size" ", size_t\ " "alignment" ", bool\ *" "zero" ", bool\ *" "commit" ", unsigned\ " "arena_ind" ");" .sp .if n \{\ .RS 4 .\} .nf .fi .if n \{\ .RE .\} .sp A chunk allocation function conforms to the \fBchunk_alloc_t\fR type and upon success returns a pointer to \fIsize\fR bytes of mapped memory on behalf of arena \fIarena_ind\fR such that the chunk\*(Aqs base address is a multiple of \fIalignment\fR, as well as setting \fI*zero\fR to indicate whether the chunk is zeroed and \fI*commit\fR to indicate whether the chunk is committed\&. Upon error the function returns \fBNULL\fR and leaves \fI*zero\fR and \fI*commit\fR unmodified\&. The \fIsize\fR parameter is always a multiple of the chunk size\&. The \fIalignment\fR parameter is always a power of two at least as large as the chunk size\&. Zeroing is mandatory if \fI*zero\fR is true upon function entry\&. Committing is mandatory if \fI*commit\fR is true upon function entry\&. If \fIchunk\fR is not \fBNULL\fR, the returned pointer must be \fIchunk\fR on success or \fBNULL\fR on error\&. Committed memory may be committed in absolute terms as on a system that does not overcommit, or in implicit terms as on a system that overcommits and satisfies physical memory needs on demand via soft page faults\&. Note that replacing the default chunk allocation function makes the arena\*(Aqs "arena\&.\&.dss" setting irrelevant\&. 
.HP \w'typedef\ bool\ (chunk_dalloc_t)('u .BI "typedef bool (chunk_dalloc_t)(void\ *" "chunk" ", size_t\ " "size" ", bool\ " "committed" ", unsigned\ " "arena_ind" ");" .sp .if n \{\ .RS 4 .\} .nf .fi .if n \{\ .RE .\} .sp A chunk deallocation function conforms to the \fBchunk_dalloc_t\fR type and deallocates a \fIchunk\fR of given \fIsize\fR with \fIcommitted\fR/decommited memory as indicated, on behalf of arena \fIarena_ind\fR, returning false upon success\&. If the function returns true, this indicates opt\-out from deallocation; the virtual memory mapping associated with the chunk remains mapped, in the same commit state, and available for future use, in which case it will be automatically retained for later reuse\&. .HP \w'typedef\ bool\ (chunk_commit_t)('u .BI "typedef bool (chunk_commit_t)(void\ *" "chunk" ", size_t\ " "size" ", size_t\ " "offset" ", size_t\ " "length" ", unsigned\ " "arena_ind" ");" .sp .if n \{\ .RS 4 .\} .nf .fi .if n \{\ .RE .\} .sp A chunk commit function conforms to the \fBchunk_commit_t\fR type and commits zeroed physical memory to back pages within a \fIchunk\fR of given \fIsize\fR at \fIoffset\fR bytes, extending for \fIlength\fR on behalf of arena \fIarena_ind\fR, returning false upon success\&. Committed memory may be committed in absolute terms as on a system that does not overcommit, or in implicit terms as on a system that overcommits and satisfies physical memory needs on demand via soft page faults\&. If the function returns true, this indicates insufficient physical memory to satisfy the request\&. .HP \w'typedef\ bool\ (chunk_decommit_t)('u .BI "typedef bool (chunk_decommit_t)(void\ *" "chunk" ", size_t\ " "size" ", size_t\ " "offset" ", size_t\ " "length" ", unsigned\ " "arena_ind" ");" .sp .if n \{\ .RS 4 .\} .nf .fi .if n \{\ .RE .\} .sp A chunk decommit function conforms to the \fBchunk_decommit_t\fR type and decommits any physical memory that is backing pages within a \fIchunk\fR of given \fIsize\fR at \fIoffset\fR bytes, extending for \fIlength\fR on behalf of arena \fIarena_ind\fR, returning false upon success, in which case the pages will be committed via the chunk commit function before being reused\&. If the function returns true, this indicates opt\-out from decommit; the memory remains committed and available for future use, in which case it will be automatically retained for later reuse\&. .HP \w'typedef\ bool\ (chunk_purge_t)('u .BI "typedef bool (chunk_purge_t)(void\ *" "chunk" ", size_t" "size" ", size_t\ " "offset" ", size_t\ " "length" ", unsigned\ " "arena_ind" ");" .sp .if n \{\ .RS 4 .\} .nf .fi .if n \{\ .RE .\} .sp A chunk purge function conforms to the \fBchunk_purge_t\fR type and optionally discards physical pages within the virtual memory mapping associated with \fIchunk\fR of given \fIsize\fR at \fIoffset\fR bytes, extending for \fIlength\fR on behalf of arena \fIarena_ind\fR, returning false if pages within the purged virtual memory range will be zero\-filled the next time they are accessed\&. 
.HP \w'typedef\ bool\ (chunk_split_t)('u .BI "typedef bool (chunk_split_t)(void\ *" "chunk" ", size_t\ " "size" ", size_t\ " "size_a" ", size_t\ " "size_b" ", bool\ " "committed" ", unsigned\ " "arena_ind" ");" .sp .if n \{\ .RS 4 .\} .nf .fi .if n \{\ .RE .\} .sp A chunk split function conforms to the \fBchunk_split_t\fR type and optionally splits \fIchunk\fR of given \fIsize\fR into two adjacent chunks, the first of \fIsize_a\fR bytes, and the second of \fIsize_b\fR bytes, operating on \fIcommitted\fR/decommitted memory as indicated, on behalf of arena \fIarena_ind\fR, returning false upon success\&. If the function returns true, this indicates that the chunk remains unsplit and therefore should continue to be operated on as a whole\&. .HP \w'typedef\ bool\ (chunk_merge_t)('u .BI "typedef bool (chunk_merge_t)(void\ *" "chunk_a" ", size_t\ " "size_a" ", void\ *" "chunk_b" ", size_t\ " "size_b" ", bool\ " "committed" ", unsigned\ " "arena_ind" ");" .sp .if n \{\ .RS 4 .\} .nf .fi .if n \{\ .RE .\} .sp A chunk merge function conforms to the \fBchunk_merge_t\fR type and optionally merges adjacent chunks, \fIchunk_a\fR of given \fIsize_a\fR and \fIchunk_b\fR of given \fIsize_b\fR into one contiguous chunk, operating on \fIcommitted\fR/decommitted memory as indicated, on behalf of arena \fIarena_ind\fR, returning false upon success\&. If the function returns true, this indicates that the chunks remain distinct mappings and therefore should continue to be operated on independently\&. .RE .PP "arenas\&.narenas" (\fBunsigned\fR) r\- .RS 4 Current limit on number of arenas\&. .RE .PP "arenas\&.initialized" (\fBbool *\fR) r\- .RS 4 An array of "arenas\&.narenas" booleans\&. Each boolean indicates whether the corresponding arena is initialized\&. .RE .PP "arenas\&.lg_dirty_mult" (\fBssize_t\fR) rw .RS 4 Current default per\-arena minimum ratio (log base 2) of active to dirty pages, used to initialize "arena\&.\&.lg_dirty_mult" during arena creation\&. See "opt\&.lg_dirty_mult" for additional information\&. .RE .PP "arenas\&.decay_time" (\fBssize_t\fR) rw .RS 4 Current default per\-arena approximate time in seconds from the creation of a set of unused dirty pages until an equivalent set of unused dirty pages is purged and/or reused, used to initialize "arena\&.\&.decay_time" during arena creation\&. See "opt\&.decay_time" for additional information\&. .RE .PP "arenas\&.quantum" (\fBsize_t\fR) r\- .RS 4 Quantum size\&. .RE .PP "arenas\&.page" (\fBsize_t\fR) r\- .RS 4 Page size\&. .RE .PP "arenas\&.tcache_max" (\fBsize_t\fR) r\- [\fB\-\-enable\-tcache\fR] .RS 4 Maximum thread\-cached size class\&. .RE .PP "arenas\&.nbins" (\fBunsigned\fR) r\- .RS 4 Number of bin size classes\&. .RE .PP "arenas\&.nhbins" (\fBunsigned\fR) r\- [\fB\-\-enable\-tcache\fR] .RS 4 Total number of thread cache bin size classes\&. .RE .PP "arenas\&.bin\&.\&.size" (\fBsize_t\fR) r\- .RS 4 Maximum size supported by size class\&. .RE .PP "arenas\&.bin\&.\&.nregs" (\fBuint32_t\fR) r\- .RS 4 Number of regions per page run\&. .RE .PP "arenas\&.bin\&.\&.run_size" (\fBsize_t\fR) r\- .RS 4 Number of bytes per page run\&. .RE .PP "arenas\&.nlruns" (\fBunsigned\fR) r\- .RS 4 Total number of large size classes\&. .RE .PP "arenas\&.lrun\&.\&.size" (\fBsize_t\fR) r\- .RS 4 Maximum size supported by this large size class\&. .RE .PP "arenas\&.nhchunks" (\fBunsigned\fR) r\- .RS 4 Total number of huge size classes\&. .RE .PP "arenas\&.hchunk\&.\&.size" (\fBsize_t\fR) r\- .RS 4 Maximum size supported by this huge size class\&. 
.RE .PP "arenas\&.extend" (\fBunsigned\fR) r\- .RS 4 Extend the array of arenas by appending a new arena, and returning the new arena index\&. .RE .PP "prof\&.thread_active_init" (\fBbool\fR) rw [\fB\-\-enable\-prof\fR] .RS 4 Control the initial setting for "thread\&.prof\&.active" in newly created threads\&. See the "opt\&.prof_thread_active_init" option for additional information\&. .RE .PP "prof\&.active" (\fBbool\fR) rw [\fB\-\-enable\-prof\fR] .RS 4 Control whether sampling is currently active\&. See the "opt\&.prof_active" option for additional information, as well as the interrelated "thread\&.prof\&.active" mallctl\&. .RE .PP "prof\&.dump" (\fBconst char *\fR) \-w [\fB\-\-enable\-prof\fR] .RS 4 Dump a memory profile to the specified file, or if NULL is specified, to a file according to the pattern \&.\&.\&.m\&.heap, where is controlled by the "opt\&.prof_prefix" option\&. .RE .PP "prof\&.gdump" (\fBbool\fR) rw [\fB\-\-enable\-prof\fR] .RS 4 When enabled, trigger a memory profile dump every time the total virtual memory exceeds the previous maximum\&. Profiles are dumped to files named according to the pattern \&.\&.\&.u\&.heap, where is controlled by the "opt\&.prof_prefix" option\&. .RE .PP "prof\&.reset" (\fBsize_t\fR) \-w [\fB\-\-enable\-prof\fR] .RS 4 Reset all memory profile statistics, and optionally update the sample rate (see "opt\&.lg_prof_sample" and "prof\&.lg_sample")\&. .RE .PP "prof\&.lg_sample" (\fBsize_t\fR) r\- [\fB\-\-enable\-prof\fR] .RS 4 Get the current sample rate (see "opt\&.lg_prof_sample")\&. .RE .PP "prof\&.interval" (\fBuint64_t\fR) r\- [\fB\-\-enable\-prof\fR] .RS 4 -Average number of bytes allocated between inverval\-based profile dumps\&. See the +Average number of bytes allocated between interval\-based profile dumps\&. See the "opt\&.lg_prof_interval" option for additional information\&. .RE .PP "stats\&.cactive" (\fBsize_t *\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Pointer to a counter that contains an approximate count of the current number of bytes in active pages\&. The estimate may be high, but never low, because each arena rounds up when computing its contribution to the counter\&. Note that the "epoch" mallctl has no bearing on this counter\&. Furthermore, counter consistency is maintained via atomic operations, so it is necessary to use an atomic operation in order to guarantee a consistent read when dereferencing the pointer\&. .RE .PP "stats\&.allocated" (\fBsize_t\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Total number of bytes allocated by the application\&. .RE .PP "stats\&.active" (\fBsize_t\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Total number of bytes in active pages allocated by the application\&. This is a multiple of the page size, and greater than or equal to "stats\&.allocated"\&. This does not include "stats\&.arenas\&.\&.pdirty", nor pages entirely devoted to allocator metadata\&. .RE .PP "stats\&.metadata" (\fBsize_t\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Total number of bytes dedicated to metadata, which comprise base allocations used for bootstrap\-sensitive internal allocator data structures, arena chunk headers (see "stats\&.arenas\&.\&.metadata\&.mapped"), and internal allocations (see "stats\&.arenas\&.\&.metadata\&.allocated")\&. .RE .PP "stats\&.resident" (\fBsize_t\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Maximum number of bytes in physically resident data pages mapped by the allocator, comprising all pages dedicated to allocator metadata, pages backing active allocations, and unused dirty pages\&. 
This is a maximum rather than precise because pages may not actually be physically resident if they correspond to demand\-zeroed virtual memory that has not yet been touched\&. This is a multiple of the page size, and is larger than "stats\&.active"\&. .RE .PP "stats\&.mapped" (\fBsize_t\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Total number of bytes in active chunks mapped by the allocator\&. This is a multiple of the chunk size, and is larger than "stats\&.active"\&. This does not include inactive chunks, even those that contain unused dirty pages, which means that there is no strict ordering between this and "stats\&.resident"\&. .RE .PP +"stats\&.retained" (\fBsize_t\fR) r\- [\fB\-\-enable\-stats\fR] +.RS 4 +Total number of bytes in virtual memory mappings that were retained rather than being returned to the operating system via e\&.g\&. +\fBmunmap\fR(2)\&. Retained virtual memory is typically untouched, decommitted, or purged, so it has no strongly associated physical memory (see +chunk hooks +for details)\&. Retained memory is excluded from mapped memory statistics, e\&.g\&. +"stats\&.mapped"\&. +.RE +.PP "stats\&.arenas\&.\&.dss" (\fBconst char *\fR) r\- .RS 4 dss (\fBsbrk\fR(2)) allocation precedence as related to \fBmmap\fR(2) allocation\&. See "opt\&.dss" for details\&. .RE .PP "stats\&.arenas\&.\&.lg_dirty_mult" (\fBssize_t\fR) r\- .RS 4 Minimum ratio (log base 2) of active to dirty pages\&. See "opt\&.lg_dirty_mult" for details\&. .RE .PP "stats\&.arenas\&.\&.decay_time" (\fBssize_t\fR) r\- .RS 4 Approximate time in seconds from the creation of a set of unused dirty pages until an equivalent set of unused dirty pages is purged and/or reused\&. See "opt\&.decay_time" for details\&. .RE .PP "stats\&.arenas\&.\&.nthreads" (\fBunsigned\fR) r\- .RS 4 Number of threads currently assigned to arena\&. .RE .PP "stats\&.arenas\&.\&.pactive" (\fBsize_t\fR) r\- .RS 4 Number of pages in active runs\&. .RE .PP "stats\&.arenas\&.\&.pdirty" (\fBsize_t\fR) r\- .RS 4 Number of pages within unused runs that are potentially dirty, and for which \fBmadvise\fR\fB\fI\&.\&.\&.\fR\fR\fB \fR\fB\fI\fBMADV_DONTNEED\fR\fR\fR or similar has not been called\&. .RE .PP "stats\&.arenas\&.\&.mapped" (\fBsize_t\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Number of mapped bytes\&. +.RE +.PP +"stats\&.arenas\&.\&.retained" (\fBsize_t\fR) r\- [\fB\-\-enable\-stats\fR] +.RS 4 +Number of retained bytes\&. See +"stats\&.retained" +for details\&. .RE .PP "stats\&.arenas\&.\&.metadata\&.mapped" (\fBsize_t\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Number of mapped bytes in arena chunk headers, which track the states of the non\-metadata pages\&. .RE .PP "stats\&.arenas\&.\&.metadata\&.allocated" (\fBsize_t\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Number of bytes dedicated to internal allocations\&. Internal allocations differ from application\-originated allocations in that they are for internal use, and that they are omitted from heap profiles\&. This statistic is reported separately from "stats\&.metadata" and "stats\&.arenas\&.\&.metadata\&.mapped" because it overlaps with e\&.g\&. the "stats\&.allocated" and "stats\&.active" statistics, whereas the other metadata statistics do not\&. .RE .PP "stats\&.arenas\&.\&.npurge" (\fBuint64_t\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Number of dirty page purge sweeps performed\&. 
.RE .PP "stats\&.arenas\&.\&.nmadvise" (\fBuint64_t\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Number of \fBmadvise\fR\fB\fI\&.\&.\&.\fR\fR\fB \fR\fB\fI\fBMADV_DONTNEED\fR\fR\fR or similar calls made to purge dirty pages\&. .RE .PP "stats\&.arenas\&.\&.purged" (\fBuint64_t\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Number of pages purged\&. .RE .PP "stats\&.arenas\&.\&.small\&.allocated" (\fBsize_t\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Number of bytes currently allocated by small objects\&. .RE .PP "stats\&.arenas\&.\&.small\&.nmalloc" (\fBuint64_t\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Cumulative number of allocation requests served by small bins\&. .RE .PP "stats\&.arenas\&.\&.small\&.ndalloc" (\fBuint64_t\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Cumulative number of small objects returned to bins\&. .RE .PP "stats\&.arenas\&.\&.small\&.nrequests" (\fBuint64_t\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Cumulative number of small allocation requests\&. .RE .PP "stats\&.arenas\&.\&.large\&.allocated" (\fBsize_t\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Number of bytes currently allocated by large objects\&. .RE .PP "stats\&.arenas\&.\&.large\&.nmalloc" (\fBuint64_t\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Cumulative number of large allocation requests served directly by the arena\&. .RE .PP "stats\&.arenas\&.\&.large\&.ndalloc" (\fBuint64_t\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Cumulative number of large deallocation requests served directly by the arena\&. .RE .PP "stats\&.arenas\&.\&.large\&.nrequests" (\fBuint64_t\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Cumulative number of large allocation requests\&. .RE .PP "stats\&.arenas\&.\&.huge\&.allocated" (\fBsize_t\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Number of bytes currently allocated by huge objects\&. .RE .PP "stats\&.arenas\&.\&.huge\&.nmalloc" (\fBuint64_t\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Cumulative number of huge allocation requests served directly by the arena\&. .RE .PP "stats\&.arenas\&.\&.huge\&.ndalloc" (\fBuint64_t\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Cumulative number of huge deallocation requests served directly by the arena\&. .RE .PP "stats\&.arenas\&.\&.huge\&.nrequests" (\fBuint64_t\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Cumulative number of huge allocation requests\&. .RE .PP "stats\&.arenas\&.\&.bins\&.\&.nmalloc" (\fBuint64_t\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Cumulative number of allocations served by bin\&. .RE .PP "stats\&.arenas\&.\&.bins\&.\&.ndalloc" (\fBuint64_t\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Cumulative number of allocations returned to bin\&. .RE .PP "stats\&.arenas\&.\&.bins\&.\&.nrequests" (\fBuint64_t\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Cumulative number of allocation requests\&. .RE .PP "stats\&.arenas\&.\&.bins\&.\&.curregs" (\fBsize_t\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Current number of regions for this size class\&. .RE .PP "stats\&.arenas\&.\&.bins\&.\&.nfills" (\fBuint64_t\fR) r\- [\fB\-\-enable\-stats\fR \fB\-\-enable\-tcache\fR] .RS 4 Cumulative number of tcache fills\&. .RE .PP "stats\&.arenas\&.\&.bins\&.\&.nflushes" (\fBuint64_t\fR) r\- [\fB\-\-enable\-stats\fR \fB\-\-enable\-tcache\fR] .RS 4 Cumulative number of tcache flushes\&. .RE .PP "stats\&.arenas\&.\&.bins\&.\&.nruns" (\fBuint64_t\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Cumulative number of runs created\&. .RE .PP "stats\&.arenas\&.\&.bins\&.\&.nreruns" (\fBuint64_t\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Cumulative number of times the current run from which to allocate changed\&. 
.RE .PP "stats\&.arenas\&.\&.bins\&.\&.curruns" (\fBsize_t\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Current number of runs\&. .RE .PP "stats\&.arenas\&.\&.lruns\&.\&.nmalloc" (\fBuint64_t\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Cumulative number of allocation requests for this size class served directly by the arena\&. .RE .PP "stats\&.arenas\&.\&.lruns\&.\&.ndalloc" (\fBuint64_t\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Cumulative number of deallocation requests for this size class served directly by the arena\&. .RE .PP "stats\&.arenas\&.\&.lruns\&.\&.nrequests" (\fBuint64_t\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Cumulative number of allocation requests for this size class\&. .RE .PP "stats\&.arenas\&.\&.lruns\&.\&.curruns" (\fBsize_t\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Current number of runs for this size class\&. .RE .PP "stats\&.arenas\&.\&.hchunks\&.\&.nmalloc" (\fBuint64_t\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Cumulative number of allocation requests for this size class served directly by the arena\&. .RE .PP "stats\&.arenas\&.\&.hchunks\&.\&.ndalloc" (\fBuint64_t\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Cumulative number of deallocation requests for this size class served directly by the arena\&. .RE .PP "stats\&.arenas\&.\&.hchunks\&.\&.nrequests" (\fBuint64_t\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Cumulative number of allocation requests for this size class\&. .RE .PP "stats\&.arenas\&.\&.hchunks\&.\&.curhchunks" (\fBsize_t\fR) r\- [\fB\-\-enable\-stats\fR] .RS 4 Current number of huge allocations for this size class\&. .RE .SH "HEAP PROFILE FORMAT" .PP Although the heap profiling functionality was originally designed to be compatible with the \fBpprof\fR command that is developed as part of the \m[blue]\fBgperftools package\fR\m[]\&\s-2\u[3]\d\s+2, the addition of per thread heap profiling functionality required a different heap profile format\&. The \fBjeprof\fR command is derived from \fBpprof\fR, with enhancements to support the heap profile format described here\&. .PP In the following hypothetical heap profile, \fB[\&.\&.\&.]\fR indicates elision for the sake of compactness\&. .sp .if n \{\ .RS 4 .\} .nf heap_v2/524288 t*: 28106: 56637512 [0: 0] [\&.\&.\&.] t3: 352: 16777344 [0: 0] [\&.\&.\&.] t99: 17754: 29341640 [0: 0] [\&.\&.\&.] @ 0x5f86da8 0x5f5a1dc [\&.\&.\&.] 0x29e4d4e 0xa200316 0xabb2988 [\&.\&.\&.] t*: 13: 6688 [0: 0] t3: 12: 6496 [0: ] t99: 1: 192 [0: 0] [\&.\&.\&.] MAPPED_LIBRARIES: [\&.\&.\&.] .fi .if n \{\ .RE .\} .sp The following matches the above heap profile, but most tokens are replaced with \fB\fR to indicate descriptions of the corresponding fields\&. .sp .if n \{\ .RS 4 .\} .nf / : : [: ] [\&.\&.\&.] : : [: ] [\&.\&.\&.] : : [: ] [\&.\&.\&.] @ [\&.\&.\&.] [\&.\&.\&.] : : [: ] : : [: ] : : [: ] [\&.\&.\&.] MAPPED_LIBRARIES: /maps> .fi .if n \{\ .RE .\} .SH "DEBUGGING MALLOC PROBLEMS" .PP When debugging, it is a good idea to configure/build jemalloc with the \fB\-\-enable\-debug\fR and \fB\-\-enable\-fill\fR options, and recompile the program with suitable options and symbols for debugger support\&. When so configured, jemalloc incorporates a wide variety of run\-time assertions that catch application errors such as double\-free, write\-after\-free, etc\&. .PP Programs often accidentally depend on \(lquninitialized\(rq memory actually being filled with zero bytes\&. Junk filling (see the "opt\&.junk" option) tends to expose such bugs in the form of obviously incorrect results and/or coredumps\&. 
Conversely, zero filling (see the "opt\&.zero" option) eliminates the symptoms of such bugs\&. Between these two options, it is usually possible to quickly detect, diagnose, and eliminate such bugs\&. .PP This implementation does not provide much detail about the problems it detects, because the performance impact for storing such information would be prohibitive\&. However, jemalloc does integrate with the most excellent \m[blue]\fBValgrind\fR\m[]\&\s-2\u[2]\d\s+2 tool if the \fB\-\-enable\-valgrind\fR configuration option is enabled\&. .SH "DIAGNOSTIC MESSAGES" .PP If any of the memory allocation/deallocation functions detect an error or warning condition, a message will be printed to file descriptor \fBSTDERR_FILENO\fR\&. Errors will result in the process dumping core\&. If the "opt\&.abort" option is set, most warnings are treated as errors\&. .PP The \fImalloc_message\fR variable allows the programmer to override the function which emits the text strings forming the errors and warnings if for some reason the \fBSTDERR_FILENO\fR file descriptor is not suitable for this\&. \fBmalloc_message\fR\fB\fR takes the \fIcbopaque\fR pointer argument that is \fBNULL\fR unless overridden by the arguments in a call to \fBmalloc_stats_print\fR\fB\fR, followed by a string pointer\&. Please note that doing anything which tries to allocate memory in this function is likely to result in a crash or deadlock\&. .PP All messages are prefixed by \(lq:\(rq\&. .SH "RETURN VALUES" .SS "Standard API" .PP The \fBmalloc\fR\fB\fR and \fBcalloc\fR\fB\fR functions return a pointer to the allocated memory if successful; otherwise a \fBNULL\fR pointer is returned and \fIerrno\fR is set to ENOMEM\&. .PP The \fBposix_memalign\fR\fB\fR function returns the value 0 if successful; otherwise it returns an error value\&. The \fBposix_memalign\fR\fB\fR function will fail if: .PP EINVAL .RS 4 The \fIalignment\fR parameter is not a power of 2 at least as large as sizeof(\fBvoid *\fR)\&. .RE .PP ENOMEM .RS 4 Memory allocation error\&. .RE .PP The \fBaligned_alloc\fR\fB\fR function returns a pointer to the allocated memory if successful; otherwise a \fBNULL\fR pointer is returned and \fIerrno\fR is set\&. The \fBaligned_alloc\fR\fB\fR function will fail if: .PP EINVAL .RS 4 The \fIalignment\fR parameter is not a power of 2\&. .RE .PP ENOMEM .RS 4 Memory allocation error\&. .RE .PP The \fBrealloc\fR\fB\fR function returns a pointer, possibly identical to \fIptr\fR, to the allocated memory if successful; otherwise a \fBNULL\fR pointer is returned, and \fIerrno\fR is set to ENOMEM if the error was the result of an allocation failure\&. The \fBrealloc\fR\fB\fR function always leaves the original buffer intact when an error occurs\&. .PP The \fBfree\fR\fB\fR function returns no value\&. .SS "Non\-standard API" .PP The \fBmallocx\fR\fB\fR and \fBrallocx\fR\fB\fR functions return a pointer to the allocated memory if successful; otherwise a \fBNULL\fR pointer is returned to indicate insufficient contiguous memory was available to service the allocation request\&. .PP The \fBxallocx\fR\fB\fR function returns the real size of the resulting resized allocation pointed to by \fIptr\fR, which is a value less than \fIsize\fR if the allocation could not be adequately grown in place\&. .PP The \fBsallocx\fR\fB\fR function returns the real size of the allocation pointed to by \fIptr\fR\&. 
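.PP
For example, the return value of
\fBxallocx\fR\fB\fR
can be used to attempt in\-place growth and fall back to a moving reallocation only when necessary\&. The following is a minimal sketch, assuming the declarations from
\fI<malloc_np\&.h>\fR:
.sp
.if n \{\
.RS 4
.\}
.nf
#include <stddef.h>
#include <malloc_np.h>

/* Grow *ptrp to at least size bytes, preferring in-place expansion. */
static int
grow_prefer_in_place(void **ptrp, size_t size)
{
	void *p = *ptrp;

	/*
	 * xallocx() returns the real size of the resized allocation; a
	 * result smaller than size means it could not be grown in place.
	 */
	if (xallocx(p, size, 0, 0) >= size)
		return (0);

	/* rallocx() may move the allocation; NULL means failure. */
	p = rallocx(p, size, 0);
	if (p == NULL)
		return (-1);
	*ptrp = p;
	return (0);
}
.fi
.if n \{\
.RE
.\}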
.PP The \fBnallocx\fR\fB\fR returns the real size that would result from a successful equivalent \fBmallocx\fR\fB\fR function call, or zero if insufficient memory is available to perform the size computation\&. .PP The \fBmallctl\fR\fB\fR, \fBmallctlnametomib\fR\fB\fR, and \fBmallctlbymib\fR\fB\fR functions return 0 on success; otherwise they return an error value\&. The functions will fail if: .PP EINVAL .RS 4 \fInewp\fR is not \fBNULL\fR, and \fInewlen\fR is too large or too small\&. Alternatively, \fI*oldlenp\fR is too large or too small; in this case as much data as possible are read despite the error\&. .RE .PP ENOENT .RS 4 \fIname\fR or \fImib\fR specifies an unknown/invalid value\&. .RE .PP EPERM .RS 4 Attempt to read or write void value, or attempt to write read\-only value\&. .RE .PP EAGAIN .RS 4 A memory allocation failure occurred\&. .RE .PP EFAULT .RS 4 An interface with side effects failed in some way not directly related to \fBmallctl*\fR\fB\fR read/write processing\&. .RE .PP The \fBmalloc_usable_size\fR\fB\fR function returns the usable size of the allocation pointed to by \fIptr\fR\&. .SH "ENVIRONMENT" .PP The following environment variable affects the execution of the allocation functions: .PP \fBMALLOC_CONF\fR .RS 4 If the environment variable \fBMALLOC_CONF\fR is set, the characters it contains will be interpreted as options\&. .RE .SH "EXAMPLES" .PP To dump core whenever a problem occurs: .sp .if n \{\ .RS 4 .\} .nf ln \-s \*(Aqabort:true\*(Aq /etc/malloc\&.conf .fi .if n \{\ .RE .\} .PP To specify in the source a chunk size that is 16 MiB: .sp .if n \{\ .RS 4 .\} .nf malloc_conf = "lg_chunk:24"; .fi .if n \{\ .RE .\} .SH "SEE ALSO" .PP \fBmadvise\fR(2), \fBmmap\fR(2), \fBsbrk\fR(2), \fButrace\fR(2), \fBalloca\fR(3), \fBatexit\fR(3), \fBgetpagesize\fR(3) .SH "STANDARDS" .PP The \fBmalloc\fR\fB\fR, \fBcalloc\fR\fB\fR, \fBrealloc\fR\fB\fR, and \fBfree\fR\fB\fR functions conform to ISO/IEC 9899:1990 (\(lqISO C90\(rq)\&. .PP The \fBposix_memalign\fR\fB\fR function conforms to IEEE Std 1003\&.1\-2001 (\(lqPOSIX\&.1\(rq)\&. .SH "HISTORY" .PP The \fBmalloc_usable_size\fR\fB\fR and \fBposix_memalign\fR\fB\fR functions first appeared in FreeBSD 7\&.0\&. .PP The \fBaligned_alloc\fR\fB\fR, \fBmalloc_stats_print\fR\fB\fR, and \fBmallctl*\fR\fB\fR functions first appeared in FreeBSD 10\&.0\&. .PP The \fB*allocx\fR\fB\fR functions first appeared in FreeBSD 11\&.0\&. .SH "AUTHOR" .PP \fBJason Evans\fR .RS 4 .RE .SH "NOTES" .IP " 1." 4 jemalloc website .RS 4 \%http://www.canonware.com/jemalloc/ .RE .IP " 2." 4 Valgrind .RS 4 \%http://valgrind.org/ .RE .IP " 3." 4 gperftools package .RS 4 \%http://code.google.com/p/gperftools/ .RE Index: head/contrib/jemalloc/include/jemalloc/internal/arena.h =================================================================== --- head/contrib/jemalloc/include/jemalloc/internal/arena.h (revision 299586) +++ head/contrib/jemalloc/include/jemalloc/internal/arena.h (revision 299587) @@ -1,1450 +1,1515 @@ /******************************************************************************/ #ifdef JEMALLOC_H_TYPES #define LARGE_MINCLASS (ZU(1) << LG_LARGE_MINCLASS) /* Maximum number of regions in one run. */ #define LG_RUN_MAXREGS (LG_PAGE - LG_TINY_MIN) #define RUN_MAXREGS (1U << LG_RUN_MAXREGS) /* * Minimum redzone size. Redzones may be larger than this if necessary to * preserve region alignment. 
*/ #define REDZONE_MINSIZE 16 /* * The minimum ratio of active:dirty pages per arena is computed as: * * (nactive >> lg_dirty_mult) >= ndirty * * So, supposing that lg_dirty_mult is 3, there can be no less than 8 times as * many active pages as dirty pages. */ #define LG_DIRTY_MULT_DEFAULT 3 typedef enum { purge_mode_ratio = 0, purge_mode_decay = 1, purge_mode_limit = 2 } purge_mode_t; #define PURGE_DEFAULT purge_mode_ratio /* Default decay time in seconds. */ #define DECAY_TIME_DEFAULT 10 /* Number of event ticks between time checks. */ #define DECAY_NTICKS_PER_UPDATE 1000 typedef struct arena_runs_dirty_link_s arena_runs_dirty_link_t; +typedef struct arena_avail_links_s arena_avail_links_t; typedef struct arena_run_s arena_run_t; typedef struct arena_chunk_map_bits_s arena_chunk_map_bits_t; typedef struct arena_chunk_map_misc_s arena_chunk_map_misc_t; typedef struct arena_chunk_s arena_chunk_t; typedef struct arena_bin_info_s arena_bin_info_t; typedef struct arena_bin_s arena_bin_t; typedef struct arena_s arena_t; typedef struct arena_tdata_s arena_tdata_t; #endif /* JEMALLOC_H_TYPES */ /******************************************************************************/ #ifdef JEMALLOC_H_STRUCTS #ifdef JEMALLOC_ARENA_STRUCTS_A struct arena_run_s { /* Index of bin this run is associated with. */ szind_t binind; /* Number of free regions in run. */ unsigned nfree; /* Per region allocated/deallocated bitmap. */ bitmap_t bitmap[BITMAP_GROUPS_MAX]; }; /* Each element of the chunk map corresponds to one page within the chunk. */ struct arena_chunk_map_bits_s { /* * Run address (or size) and various flags are stored together. The bit * layout looks like (assuming 32-bit system): * * ???????? ???????? ???nnnnn nnndumla * * ? : Unallocated: Run address for first/last pages, unset for internal * pages. * Small: Run page offset. * Large: Run page count for first page, unset for trailing pages. * n : binind for small size class, BININD_INVALID for large size class. * d : dirty? * u : unzeroed? * m : decommitted? * l : large? * a : allocated? * * Following are example bit patterns for the three types of runs. 
* * p : run page offset * s : run size * n : binind for size class; large objects set these to BININD_INVALID * x : don't care * - : 0 * + : 1 * [DUMLA] : bit set * [dumla] : bit unset * * Unallocated (clean): * ssssssss ssssssss sss+++++ +++dum-a * xxxxxxxx xxxxxxxx xxxxxxxx xxx-Uxxx * ssssssss ssssssss sss+++++ +++dUm-a * * Unallocated (dirty): * ssssssss ssssssss sss+++++ +++D-m-a * xxxxxxxx xxxxxxxx xxxxxxxx xxxxxxxx * ssssssss ssssssss sss+++++ +++D-m-a * * Small: * pppppppp pppppppp pppnnnnn nnnd---A * pppppppp pppppppp pppnnnnn nnn----A * pppppppp pppppppp pppnnnnn nnnd---A * * Large: * ssssssss ssssssss sss+++++ +++D--LA * xxxxxxxx xxxxxxxx xxxxxxxx xxxxxxxx * -------- -------- ---+++++ +++D--LA * * Large (sampled, size <= LARGE_MINCLASS): * ssssssss ssssssss sssnnnnn nnnD--LA * xxxxxxxx xxxxxxxx xxxxxxxx xxxxxxxx * -------- -------- ---+++++ +++D--LA * * Large (not sampled, size == LARGE_MINCLASS): * ssssssss ssssssss sss+++++ +++D--LA * xxxxxxxx xxxxxxxx xxxxxxxx xxxxxxxx * -------- -------- ---+++++ +++D--LA */ size_t bits; #define CHUNK_MAP_ALLOCATED ((size_t)0x01U) #define CHUNK_MAP_LARGE ((size_t)0x02U) #define CHUNK_MAP_STATE_MASK ((size_t)0x3U) #define CHUNK_MAP_DECOMMITTED ((size_t)0x04U) #define CHUNK_MAP_UNZEROED ((size_t)0x08U) #define CHUNK_MAP_DIRTY ((size_t)0x10U) #define CHUNK_MAP_FLAGS_MASK ((size_t)0x1cU) #define CHUNK_MAP_BININD_SHIFT 5 #define BININD_INVALID ((size_t)0xffU) #define CHUNK_MAP_BININD_MASK (BININD_INVALID << CHUNK_MAP_BININD_SHIFT) #define CHUNK_MAP_BININD_INVALID CHUNK_MAP_BININD_MASK #define CHUNK_MAP_RUNIND_SHIFT (CHUNK_MAP_BININD_SHIFT + 8) #define CHUNK_MAP_SIZE_SHIFT (CHUNK_MAP_RUNIND_SHIFT - LG_PAGE) #define CHUNK_MAP_SIZE_MASK \ (~(CHUNK_MAP_BININD_MASK | CHUNK_MAP_FLAGS_MASK | CHUNK_MAP_STATE_MASK)) }; struct arena_runs_dirty_link_s { qr(arena_runs_dirty_link_t) rd_link; }; /* * Each arena_chunk_map_misc_t corresponds to one page within the chunk, just * like arena_chunk_map_bits_t. Two separate arrays are stored within each * chunk header in order to improve cache locality. */ struct arena_chunk_map_misc_s { /* - * Linkage for run trees. There are two disjoint uses: + * Linkage for run heaps. There are two disjoint uses: * - * 1) arena_t's runs_avail tree. + * 1) arena_t's runs_avail heaps. * 2) arena_run_t conceptually uses this linkage for in-use non-full * runs, rather than directly embedding linkage. */ - rb_node(arena_chunk_map_misc_t) rb_link; + phn(arena_chunk_map_misc_t) ph_link; union { /* Linkage for list of dirty runs. */ arena_runs_dirty_link_t rd; /* Profile counters, used for large object runs. */ union { void *prof_tctx_pun; prof_tctx_t *prof_tctx; }; /* Small region run metadata. */ arena_run_t run; }; }; -typedef rb_tree(arena_chunk_map_misc_t) arena_run_tree_t; +typedef ph(arena_chunk_map_misc_t) arena_run_heap_t; #endif /* JEMALLOC_ARENA_STRUCTS_A */ #ifdef JEMALLOC_ARENA_STRUCTS_B /* Arena chunk header. */ struct arena_chunk_s { /* * A pointer to the arena that owns the chunk is stored within the node. * This field as a whole is used by chunks_rtree to support both * ivsalloc() and core-based debugging. */ extent_node_t node; /* * Map of pages within chunk that keeps track of free/large/small. The * first map_bias entries are omitted, since the chunk header does not * need to be tracked in the map. This omission saves a header page * for common chunk sizes (e.g. 4 MiB). */ arena_chunk_map_bits_t map_bits[1]; /* Dynamically sized. 
*/ }; /* * Read-only information associated with each element of arena_t's bins array * is stored separately, partly to reduce memory usage (only one copy, rather * than one per arena), but mainly to avoid false cacheline sharing. * * Each run has the following layout: * * /--------------------\ * | pad? | * |--------------------| * | redzone | * reg0_offset | region 0 | * | redzone | * |--------------------| \ * | redzone | | * | region 1 | > reg_interval * | redzone | / * |--------------------| * | ... | * | ... | * | ... | * |--------------------| * | redzone | * | region nregs-1 | * | redzone | * |--------------------| * | alignment pad? | * \--------------------/ * * reg_interval has at least the same minimum alignment as reg_size; this * preserves the alignment constraint that sa2u() depends on. Alignment pad is * either 0 or redzone_size; it is present only if needed to align reg0_offset. */ struct arena_bin_info_s { /* Size of regions in a run for this bin's size class. */ size_t reg_size; /* Redzone size. */ size_t redzone_size; /* Interval between regions (reg_size + (redzone_size << 1)). */ size_t reg_interval; /* Total size of a run for this bin's size class. */ size_t run_size; /* Total number of regions in a run for this bin's size class. */ uint32_t nregs; /* * Metadata used to manipulate bitmaps for runs associated with this * bin. */ bitmap_info_t bitmap_info; /* Offset of first region in a run for this bin's size class. */ uint32_t reg0_offset; }; struct arena_bin_s { /* * All operations on runcur, runs, and stats require that lock be * locked. Run allocation/deallocation are protected by the arena lock, * which may be acquired while holding one or more bin locks, but not * vise versa. */ malloc_mutex_t lock; /* * Current run being used to service allocations of this bin's size * class. */ arena_run_t *runcur; /* - * Tree of non-full runs. This tree is used when looking for an + * Heap of non-full runs. This heap is used when looking for an * existing run when runcur is no longer usable. We choose the * non-full run that is lowest in memory; this policy tends to keep * objects packed well, and it can also help reduce the number of * almost-empty chunks. */ - arena_run_tree_t runs; + arena_run_heap_t runs; /* Bin statistics. */ malloc_bin_stats_t stats; }; struct arena_s { /* This arena's index within the arenas array. */ unsigned ind; /* - * Number of threads currently assigned to this arena. This field is - * synchronized via atomic operations. + * Number of threads currently assigned to this arena, synchronized via + * atomic operations. Each thread has two distinct assignments, one for + * application-serving allocation, and the other for internal metadata + * allocation. Internal metadata must not be allocated from arenas + * created via the arenas.extend mallctl, because the arena..reset + * mallctl indiscriminately discards all allocations for the affected + * arena. + * + * 0: Application allocation. + * 1: Internal metadata allocation. */ - unsigned nthreads; + unsigned nthreads[2]; /* * There are three classes of arena operations from a locking * perspective: * 1) Thread assignment (modifies nthreads) is synchronized via atomics. * 2) Bin-related operations are protected by bin locks. * 3) Chunk- and run-related operations are protected by this mutex. */ malloc_mutex_t lock; arena_stats_t stats; /* * List of tcaches for extant threads associated with this arena. * Stats from these are merged incrementally, and at exit if * opt_stats_print is enabled. 
*/ ql_head(tcache_t) tcache_ql; uint64_t prof_accumbytes; /* * PRNG state for cache index randomization of large allocation base * pointers. */ uint64_t offset_state; dss_prec_t dss_prec; + + /* Extant arena chunks. */ + ql_head(extent_node_t) achunks; + /* * In order to avoid rapid chunk allocation/deallocation when an arena * oscillates right on the cusp of needing a new chunk, cache the most * recently freed chunk. The spare is left in the arena's chunk trees * until it is deleted. * * There is one spare chunk per arena, rather than one spare total, in * order to avoid interactions between multiple threads that could make * a single spare inadequate. */ arena_chunk_t *spare; /* Minimum ratio (log base 2) of nactive:ndirty. */ ssize_t lg_dirty_mult; /* True if a thread is currently executing arena_purge_to_limit(). */ bool purging; /* Number of pages in active runs and huge regions. */ size_t nactive; /* * Current count of pages within unused runs that are potentially * dirty, and for which madvise(... MADV_DONTNEED) has not been called. * By tracking this, we can institute a limit on how much dirty unused * memory is mapped for each arena. */ size_t ndirty; /* * Unused dirty memory this arena manages. Dirty memory is conceptually * tracked as an arbitrarily interleaved LRU of dirty runs and cached * chunks, but the list linkage is actually semi-duplicated in order to * avoid extra arena_chunk_map_misc_t space overhead. * * LRU-----------------------------------------------------------MRU * * /-- arena ---\ * | | * | | * |------------| /- chunk -\ * ...->|chunks_cache|<--------------------------->| /----\ |<--... * |------------| | |node| | * | | | | | | * | | /- run -\ /- run -\ | | | | * | | | | | | | | | | * | | | | | | | | | | * |------------| |-------| |-------| | |----| | * ...->|runs_dirty |<-->|rd |<-->|rd |<---->|rd |<----... * |------------| |-------| |-------| | |----| | * | | | | | | | | | | * | | | | | | | \----/ | * | | \-------/ \-------/ | | * | | | | * | | | | * \------------/ \---------/ */ arena_runs_dirty_link_t runs_dirty; extent_node_t chunks_cache; /* * Approximate time in seconds from the creation of a set of unused * dirty pages until an equivalent set of unused dirty pages is purged * and/or reused. */ ssize_t decay_time; /* decay_time / SMOOTHSTEP_NSTEPS. */ nstime_t decay_interval; /* * Time at which the current decay interval logically started. We do * not actually advance to a new epoch until sometime after it starts * because of scheduling and computation delays, and it is even possible * to completely skip epochs. In all cases, during epoch advancement we * merge all relevant activity into the most recently recorded epoch. */ nstime_t decay_epoch; /* decay_deadline randomness generator. */ uint64_t decay_jitter_state; /* * Deadline for current epoch. This is the sum of decay_interval and * per epoch jitter which is a uniform random variable in * [0..decay_interval). Epochs always advance by precise multiples of * decay_interval, but we randomize the deadline to reduce the * likelihood of arenas purging in lockstep. */ nstime_t decay_deadline; /* * Number of dirty pages at beginning of current epoch. During epoch * advancement we use the delta between decay_ndirty and ndirty to * determine how many dirty pages, if any, were generated, and record * the result in decay_backlog. */ size_t decay_ndirty; /* * Memoized result of arena_decay_backlog_npages_limit() corresponding * to the current contents of decay_backlog, i.e. 
the limit on how many * pages are allowed to exist for the decay epochs. */ size_t decay_backlog_npages_limit; /* * Trailing log of how many unused dirty pages were generated during * each of the past SMOOTHSTEP_NSTEPS decay epochs, where the last * element is the most recent epoch. Corresponding epoch times are * relative to decay_epoch. */ size_t decay_backlog[SMOOTHSTEP_NSTEPS]; /* Extant huge allocations. */ ql_head(extent_node_t) huge; /* Synchronizes all huge allocation/update/deallocation. */ malloc_mutex_t huge_mtx; /* * Trees of chunks that were previously allocated (trees differ only in * node ordering). These are used when allocating chunks, in an attempt * to re-use address space. Depending on function, different tree * orderings are needed, which is why there are two trees with the same * contents. */ extent_tree_t chunks_szad_cached; extent_tree_t chunks_ad_cached; extent_tree_t chunks_szad_retained; extent_tree_t chunks_ad_retained; malloc_mutex_t chunks_mtx; /* Cache of nodes that were allocated via base_alloc(). */ ql_head(extent_node_t) node_cache; malloc_mutex_t node_cache_mtx; /* User-configurable chunk hook functions. */ chunk_hooks_t chunk_hooks; /* bins is used to store trees of free regions. */ arena_bin_t bins[NBINS]; /* - * Quantized address-ordered trees of this arena's available runs. The - * trees are used for first-best-fit run allocation. + * Quantized address-ordered heaps of this arena's available runs. The + * heaps are used for first-best-fit run allocation. */ - arena_run_tree_t runs_avail[1]; /* Dynamically sized. */ + arena_run_heap_t runs_avail[1]; /* Dynamically sized. */ }; /* Used in conjunction with tsd for fast arena-related context lookup. */ struct arena_tdata_s { ticker_t decay_ticker; }; #endif /* JEMALLOC_ARENA_STRUCTS_B */ #endif /* JEMALLOC_H_STRUCTS */ /******************************************************************************/ #ifdef JEMALLOC_H_EXTERNS static const size_t large_pad = #ifdef JEMALLOC_CACHE_OBLIVIOUS PAGE #else 0 #endif ; extern purge_mode_t opt_purge; extern const char *purge_mode_names[]; extern ssize_t opt_lg_dirty_mult; extern ssize_t opt_decay_time; extern arena_bin_info_t arena_bin_info[NBINS]; extern size_t map_bias; /* Number of arena chunk header pages. */ extern size_t map_misc_offset; extern size_t arena_maxrun; /* Max run size for arenas. */ extern size_t large_maxclass; /* Max large size class. */ extern size_t run_quantize_max; /* Max run_quantize_*() input. */ extern unsigned nlclasses; /* Number of large size classes. */ extern unsigned nhclasses; /* Number of huge size classes. 
*/ #ifdef JEMALLOC_JET typedef size_t (run_quantize_t)(size_t); extern run_quantize_t *run_quantize_floor; extern run_quantize_t *run_quantize_ceil; #endif void arena_chunk_cache_maybe_insert(arena_t *arena, extent_node_t *node, bool cache); void arena_chunk_cache_maybe_remove(arena_t *arena, extent_node_t *node, bool cache); -extent_node_t *arena_node_alloc(arena_t *arena); -void arena_node_dalloc(arena_t *arena, extent_node_t *node); -void *arena_chunk_alloc_huge(arena_t *arena, size_t usize, size_t alignment, - bool *zero); -void arena_chunk_dalloc_huge(arena_t *arena, void *chunk, size_t usize); -void arena_chunk_ralloc_huge_similar(arena_t *arena, void *chunk, - size_t oldsize, size_t usize); -void arena_chunk_ralloc_huge_shrink(arena_t *arena, void *chunk, - size_t oldsize, size_t usize); -bool arena_chunk_ralloc_huge_expand(arena_t *arena, void *chunk, - size_t oldsize, size_t usize, bool *zero); -ssize_t arena_lg_dirty_mult_get(arena_t *arena); -bool arena_lg_dirty_mult_set(arena_t *arena, ssize_t lg_dirty_mult); -ssize_t arena_decay_time_get(arena_t *arena); -bool arena_decay_time_set(arena_t *arena, ssize_t decay_time); -void arena_maybe_purge(arena_t *arena); -void arena_purge(arena_t *arena, bool all); -void arena_tcache_fill_small(tsd_t *tsd, arena_t *arena, tcache_bin_t *tbin, - szind_t binind, uint64_t prof_accumbytes); +extent_node_t *arena_node_alloc(tsdn_t *tsdn, arena_t *arena); +void arena_node_dalloc(tsdn_t *tsdn, arena_t *arena, extent_node_t *node); +void *arena_chunk_alloc_huge(tsdn_t *tsdn, arena_t *arena, size_t usize, + size_t alignment, bool *zero); +void arena_chunk_dalloc_huge(tsdn_t *tsdn, arena_t *arena, void *chunk, + size_t usize); +void arena_chunk_ralloc_huge_similar(tsdn_t *tsdn, arena_t *arena, + void *chunk, size_t oldsize, size_t usize); +void arena_chunk_ralloc_huge_shrink(tsdn_t *tsdn, arena_t *arena, + void *chunk, size_t oldsize, size_t usize); +bool arena_chunk_ralloc_huge_expand(tsdn_t *tsdn, arena_t *arena, + void *chunk, size_t oldsize, size_t usize, bool *zero); +ssize_t arena_lg_dirty_mult_get(tsdn_t *tsdn, arena_t *arena); +bool arena_lg_dirty_mult_set(tsdn_t *tsdn, arena_t *arena, + ssize_t lg_dirty_mult); +ssize_t arena_decay_time_get(tsdn_t *tsdn, arena_t *arena); +bool arena_decay_time_set(tsdn_t *tsdn, arena_t *arena, ssize_t decay_time); +void arena_purge(tsdn_t *tsdn, arena_t *arena, bool all); +void arena_maybe_purge(tsdn_t *tsdn, arena_t *arena); +void arena_reset(tsd_t *tsd, arena_t *arena); +void arena_tcache_fill_small(tsdn_t *tsdn, arena_t *arena, + tcache_bin_t *tbin, szind_t binind, uint64_t prof_accumbytes); void arena_alloc_junk_small(void *ptr, arena_bin_info_t *bin_info, bool zero); #ifdef JEMALLOC_JET typedef void (arena_redzone_corruption_t)(void *, size_t, bool, size_t, uint8_t); extern arena_redzone_corruption_t *arena_redzone_corruption; typedef void (arena_dalloc_junk_small_t)(void *, arena_bin_info_t *); extern arena_dalloc_junk_small_t *arena_dalloc_junk_small; #else void arena_dalloc_junk_small(void *ptr, arena_bin_info_t *bin_info); #endif void arena_quarantine_junk_small(void *ptr, size_t usize); -void *arena_malloc_large(tsd_t *tsd, arena_t *arena, szind_t ind, bool zero); -void *arena_malloc_hard(tsd_t *tsd, arena_t *arena, size_t size, szind_t ind, - bool zero, tcache_t *tcache); -void *arena_palloc(tsd_t *tsd, arena_t *arena, size_t usize, +void *arena_malloc_large(tsdn_t *tsdn, arena_t *arena, szind_t ind, + bool zero); +void *arena_malloc_hard(tsdn_t *tsdn, arena_t *arena, size_t size, + szind_t ind, 
bool zero); +void *arena_palloc(tsdn_t *tsdn, arena_t *arena, size_t usize, size_t alignment, bool zero, tcache_t *tcache); -void arena_prof_promoted(const void *ptr, size_t size); -void arena_dalloc_bin_junked_locked(arena_t *arena, arena_chunk_t *chunk, - void *ptr, arena_chunk_map_bits_t *bitselm); -void arena_dalloc_bin(arena_t *arena, arena_chunk_t *chunk, void *ptr, - size_t pageind, arena_chunk_map_bits_t *bitselm); -void arena_dalloc_small(tsd_t *tsd, arena_t *arena, arena_chunk_t *chunk, +void arena_prof_promoted(tsdn_t *tsdn, const void *ptr, size_t size); +void arena_dalloc_bin_junked_locked(tsdn_t *tsdn, arena_t *arena, + arena_chunk_t *chunk, void *ptr, arena_chunk_map_bits_t *bitselm); +void arena_dalloc_bin(tsdn_t *tsdn, arena_t *arena, arena_chunk_t *chunk, + void *ptr, size_t pageind, arena_chunk_map_bits_t *bitselm); +void arena_dalloc_small(tsdn_t *tsdn, arena_t *arena, arena_chunk_t *chunk, void *ptr, size_t pageind); #ifdef JEMALLOC_JET typedef void (arena_dalloc_junk_large_t)(void *, size_t); extern arena_dalloc_junk_large_t *arena_dalloc_junk_large; #else void arena_dalloc_junk_large(void *ptr, size_t usize); #endif -void arena_dalloc_large_junked_locked(arena_t *arena, arena_chunk_t *chunk, +void arena_dalloc_large_junked_locked(tsdn_t *tsdn, arena_t *arena, + arena_chunk_t *chunk, void *ptr); +void arena_dalloc_large(tsdn_t *tsdn, arena_t *arena, arena_chunk_t *chunk, void *ptr); -void arena_dalloc_large(tsd_t *tsd, arena_t *arena, arena_chunk_t *chunk, - void *ptr); #ifdef JEMALLOC_JET typedef void (arena_ralloc_junk_large_t)(void *, size_t, size_t); extern arena_ralloc_junk_large_t *arena_ralloc_junk_large; #endif -bool arena_ralloc_no_move(tsd_t *tsd, void *ptr, size_t oldsize, size_t size, - size_t extra, bool zero); +bool arena_ralloc_no_move(tsdn_t *tsdn, void *ptr, size_t oldsize, + size_t size, size_t extra, bool zero); void *arena_ralloc(tsd_t *tsd, arena_t *arena, void *ptr, size_t oldsize, size_t size, size_t alignment, bool zero, tcache_t *tcache); -dss_prec_t arena_dss_prec_get(arena_t *arena); -bool arena_dss_prec_set(arena_t *arena, dss_prec_t dss_prec); +dss_prec_t arena_dss_prec_get(tsdn_t *tsdn, arena_t *arena); +bool arena_dss_prec_set(tsdn_t *tsdn, arena_t *arena, dss_prec_t dss_prec); ssize_t arena_lg_dirty_mult_default_get(void); bool arena_lg_dirty_mult_default_set(ssize_t lg_dirty_mult); ssize_t arena_decay_time_default_get(void); bool arena_decay_time_default_set(ssize_t decay_time); -void arena_basic_stats_merge(arena_t *arena, unsigned *nthreads, +void arena_basic_stats_merge(tsdn_t *tsdn, arena_t *arena, + unsigned *nthreads, const char **dss, ssize_t *lg_dirty_mult, + ssize_t *decay_time, size_t *nactive, size_t *ndirty); +void arena_stats_merge(tsdn_t *tsdn, arena_t *arena, unsigned *nthreads, const char **dss, ssize_t *lg_dirty_mult, ssize_t *decay_time, - size_t *nactive, size_t *ndirty); -void arena_stats_merge(arena_t *arena, unsigned *nthreads, const char **dss, - ssize_t *lg_dirty_mult, ssize_t *decay_time, size_t *nactive, - size_t *ndirty, arena_stats_t *astats, malloc_bin_stats_t *bstats, - malloc_large_stats_t *lstats, malloc_huge_stats_t *hstats); -unsigned arena_nthreads_get(arena_t *arena); -void arena_nthreads_inc(arena_t *arena); -void arena_nthreads_dec(arena_t *arena); -arena_t *arena_new(unsigned ind); + size_t *nactive, size_t *ndirty, arena_stats_t *astats, + malloc_bin_stats_t *bstats, malloc_large_stats_t *lstats, + malloc_huge_stats_t *hstats); +unsigned arena_nthreads_get(arena_t *arena, bool internal); +void 
arena_nthreads_inc(arena_t *arena, bool internal); +void arena_nthreads_dec(arena_t *arena, bool internal); +arena_t *arena_new(tsdn_t *tsdn, unsigned ind); bool arena_boot(void); -void arena_prefork(arena_t *arena); -void arena_postfork_parent(arena_t *arena); -void arena_postfork_child(arena_t *arena); +void arena_prefork0(tsdn_t *tsdn, arena_t *arena); +void arena_prefork1(tsdn_t *tsdn, arena_t *arena); +void arena_prefork2(tsdn_t *tsdn, arena_t *arena); +void arena_prefork3(tsdn_t *tsdn, arena_t *arena); +void arena_postfork_parent(tsdn_t *tsdn, arena_t *arena); +void arena_postfork_child(tsdn_t *tsdn, arena_t *arena); #endif /* JEMALLOC_H_EXTERNS */ /******************************************************************************/ #ifdef JEMALLOC_H_INLINES #ifndef JEMALLOC_ENABLE_INLINE -arena_chunk_map_bits_t *arena_bitselm_get(arena_chunk_t *chunk, +arena_chunk_map_bits_t *arena_bitselm_get_mutable(arena_chunk_t *chunk, size_t pageind); -arena_chunk_map_misc_t *arena_miscelm_get(arena_chunk_t *chunk, +const arena_chunk_map_bits_t *arena_bitselm_get_const( + const arena_chunk_t *chunk, size_t pageind); +arena_chunk_map_misc_t *arena_miscelm_get_mutable(arena_chunk_t *chunk, size_t pageind); +const arena_chunk_map_misc_t *arena_miscelm_get_const( + const arena_chunk_t *chunk, size_t pageind); size_t arena_miscelm_to_pageind(const arena_chunk_map_misc_t *miscelm); -void *arena_miscelm_to_rpages(arena_chunk_map_misc_t *miscelm); +void *arena_miscelm_to_rpages(const arena_chunk_map_misc_t *miscelm); arena_chunk_map_misc_t *arena_rd_to_miscelm(arena_runs_dirty_link_t *rd); arena_chunk_map_misc_t *arena_run_to_miscelm(arena_run_t *run); -size_t *arena_mapbitsp_get(arena_chunk_t *chunk, size_t pageind); -size_t arena_mapbitsp_read(size_t *mapbitsp); -size_t arena_mapbits_get(arena_chunk_t *chunk, size_t pageind); +size_t *arena_mapbitsp_get_mutable(arena_chunk_t *chunk, size_t pageind); +const size_t *arena_mapbitsp_get_const(const arena_chunk_t *chunk, + size_t pageind); +size_t arena_mapbitsp_read(const size_t *mapbitsp); +size_t arena_mapbits_get(const arena_chunk_t *chunk, size_t pageind); size_t arena_mapbits_size_decode(size_t mapbits); -size_t arena_mapbits_unallocated_size_get(arena_chunk_t *chunk, +size_t arena_mapbits_unallocated_size_get(const arena_chunk_t *chunk, size_t pageind); -size_t arena_mapbits_large_size_get(arena_chunk_t *chunk, size_t pageind); -size_t arena_mapbits_small_runind_get(arena_chunk_t *chunk, size_t pageind); -szind_t arena_mapbits_binind_get(arena_chunk_t *chunk, size_t pageind); -size_t arena_mapbits_dirty_get(arena_chunk_t *chunk, size_t pageind); -size_t arena_mapbits_unzeroed_get(arena_chunk_t *chunk, size_t pageind); -size_t arena_mapbits_decommitted_get(arena_chunk_t *chunk, size_t pageind); -size_t arena_mapbits_large_get(arena_chunk_t *chunk, size_t pageind); -size_t arena_mapbits_allocated_get(arena_chunk_t *chunk, size_t pageind); +size_t arena_mapbits_large_size_get(const arena_chunk_t *chunk, + size_t pageind); +size_t arena_mapbits_small_runind_get(const arena_chunk_t *chunk, + size_t pageind); +szind_t arena_mapbits_binind_get(const arena_chunk_t *chunk, size_t pageind); +size_t arena_mapbits_dirty_get(const arena_chunk_t *chunk, size_t pageind); +size_t arena_mapbits_unzeroed_get(const arena_chunk_t *chunk, size_t pageind); +size_t arena_mapbits_decommitted_get(const arena_chunk_t *chunk, + size_t pageind); +size_t arena_mapbits_large_get(const arena_chunk_t *chunk, size_t pageind); +size_t arena_mapbits_allocated_get(const arena_chunk_t 
*chunk, size_t pageind); void arena_mapbitsp_write(size_t *mapbitsp, size_t mapbits); size_t arena_mapbits_size_encode(size_t size); void arena_mapbits_unallocated_set(arena_chunk_t *chunk, size_t pageind, size_t size, size_t flags); void arena_mapbits_unallocated_size_set(arena_chunk_t *chunk, size_t pageind, size_t size); void arena_mapbits_internal_set(arena_chunk_t *chunk, size_t pageind, size_t flags); void arena_mapbits_large_set(arena_chunk_t *chunk, size_t pageind, size_t size, size_t flags); void arena_mapbits_large_binind_set(arena_chunk_t *chunk, size_t pageind, szind_t binind); void arena_mapbits_small_set(arena_chunk_t *chunk, size_t pageind, size_t runind, szind_t binind, size_t flags); void arena_metadata_allocated_add(arena_t *arena, size_t size); void arena_metadata_allocated_sub(arena_t *arena, size_t size); size_t arena_metadata_allocated_get(arena_t *arena); bool arena_prof_accum_impl(arena_t *arena, uint64_t accumbytes); bool arena_prof_accum_locked(arena_t *arena, uint64_t accumbytes); -bool arena_prof_accum(arena_t *arena, uint64_t accumbytes); +bool arena_prof_accum(tsdn_t *tsdn, arena_t *arena, uint64_t accumbytes); szind_t arena_ptr_small_binind_get(const void *ptr, size_t mapbits); szind_t arena_bin_index(arena_t *arena, arena_bin_t *bin); size_t arena_run_regind(arena_run_t *run, arena_bin_info_t *bin_info, const void *ptr); -prof_tctx_t *arena_prof_tctx_get(const void *ptr); -void arena_prof_tctx_set(const void *ptr, size_t usize, prof_tctx_t *tctx); -void arena_prof_tctx_reset(const void *ptr, size_t usize, +prof_tctx_t *arena_prof_tctx_get(tsdn_t *tsdn, const void *ptr); +void arena_prof_tctx_set(tsdn_t *tsdn, const void *ptr, size_t usize, + prof_tctx_t *tctx); +void arena_prof_tctx_reset(tsdn_t *tsdn, const void *ptr, size_t usize, const void *old_ptr, prof_tctx_t *old_tctx); -void arena_decay_ticks(tsd_t *tsd, arena_t *arena, unsigned nticks); -void arena_decay_tick(tsd_t *tsd, arena_t *arena); -void *arena_malloc(tsd_t *tsd, arena_t *arena, size_t size, szind_t ind, +void arena_decay_ticks(tsdn_t *tsdn, arena_t *arena, unsigned nticks); +void arena_decay_tick(tsdn_t *tsdn, arena_t *arena); +void *arena_malloc(tsdn_t *tsdn, arena_t *arena, size_t size, szind_t ind, bool zero, tcache_t *tcache, bool slow_path); arena_t *arena_aalloc(const void *ptr); -size_t arena_salloc(const void *ptr, bool demote); -void arena_dalloc(tsd_t *tsd, void *ptr, tcache_t *tcache, bool slow_path); -void arena_sdalloc(tsd_t *tsd, void *ptr, size_t size, tcache_t *tcache); +size_t arena_salloc(tsdn_t *tsdn, const void *ptr, bool demote); +void arena_dalloc(tsdn_t *tsdn, void *ptr, tcache_t *tcache, bool slow_path); +void arena_sdalloc(tsdn_t *tsdn, void *ptr, size_t size, tcache_t *tcache, + bool slow_path); #endif #if (defined(JEMALLOC_ENABLE_INLINE) || defined(JEMALLOC_ARENA_C_)) # ifdef JEMALLOC_ARENA_INLINE_A JEMALLOC_ALWAYS_INLINE arena_chunk_map_bits_t * -arena_bitselm_get(arena_chunk_t *chunk, size_t pageind) +arena_bitselm_get_mutable(arena_chunk_t *chunk, size_t pageind) { assert(pageind >= map_bias); assert(pageind < chunk_npages); return (&chunk->map_bits[pageind-map_bias]); } +JEMALLOC_ALWAYS_INLINE const arena_chunk_map_bits_t * +arena_bitselm_get_const(const arena_chunk_t *chunk, size_t pageind) +{ + + return (arena_bitselm_get_mutable((arena_chunk_t *)chunk, pageind)); +} + JEMALLOC_ALWAYS_INLINE arena_chunk_map_misc_t * -arena_miscelm_get(arena_chunk_t *chunk, size_t pageind) +arena_miscelm_get_mutable(arena_chunk_t *chunk, size_t pageind) { assert(pageind >= 
map_bias); assert(pageind < chunk_npages); return ((arena_chunk_map_misc_t *)((uintptr_t)chunk + (uintptr_t)map_misc_offset) + pageind-map_bias); } +JEMALLOC_ALWAYS_INLINE const arena_chunk_map_misc_t * +arena_miscelm_get_const(const arena_chunk_t *chunk, size_t pageind) +{ + + return (arena_miscelm_get_mutable((arena_chunk_t *)chunk, pageind)); +} + JEMALLOC_ALWAYS_INLINE size_t arena_miscelm_to_pageind(const arena_chunk_map_misc_t *miscelm) { arena_chunk_t *chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(miscelm); size_t pageind = ((uintptr_t)miscelm - ((uintptr_t)chunk + map_misc_offset)) / sizeof(arena_chunk_map_misc_t) + map_bias; assert(pageind >= map_bias); assert(pageind < chunk_npages); return (pageind); } JEMALLOC_ALWAYS_INLINE void * -arena_miscelm_to_rpages(arena_chunk_map_misc_t *miscelm) +arena_miscelm_to_rpages(const arena_chunk_map_misc_t *miscelm) { arena_chunk_t *chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(miscelm); size_t pageind = arena_miscelm_to_pageind(miscelm); return ((void *)((uintptr_t)chunk + (pageind << LG_PAGE))); } JEMALLOC_ALWAYS_INLINE arena_chunk_map_misc_t * arena_rd_to_miscelm(arena_runs_dirty_link_t *rd) { arena_chunk_map_misc_t *miscelm = (arena_chunk_map_misc_t *)((uintptr_t)rd - offsetof(arena_chunk_map_misc_t, rd)); assert(arena_miscelm_to_pageind(miscelm) >= map_bias); assert(arena_miscelm_to_pageind(miscelm) < chunk_npages); return (miscelm); } JEMALLOC_ALWAYS_INLINE arena_chunk_map_misc_t * arena_run_to_miscelm(arena_run_t *run) { arena_chunk_map_misc_t *miscelm = (arena_chunk_map_misc_t *)((uintptr_t)run - offsetof(arena_chunk_map_misc_t, run)); assert(arena_miscelm_to_pageind(miscelm) >= map_bias); assert(arena_miscelm_to_pageind(miscelm) < chunk_npages); return (miscelm); } JEMALLOC_ALWAYS_INLINE size_t * -arena_mapbitsp_get(arena_chunk_t *chunk, size_t pageind) +arena_mapbitsp_get_mutable(arena_chunk_t *chunk, size_t pageind) { - return (&arena_bitselm_get(chunk, pageind)->bits); + return (&arena_bitselm_get_mutable(chunk, pageind)->bits); } +JEMALLOC_ALWAYS_INLINE const size_t * +arena_mapbitsp_get_const(const arena_chunk_t *chunk, size_t pageind) +{ + + return (arena_mapbitsp_get_mutable((arena_chunk_t *)chunk, pageind)); +} + JEMALLOC_ALWAYS_INLINE size_t -arena_mapbitsp_read(size_t *mapbitsp) +arena_mapbitsp_read(const size_t *mapbitsp) { return (*mapbitsp); } JEMALLOC_ALWAYS_INLINE size_t -arena_mapbits_get(arena_chunk_t *chunk, size_t pageind) +arena_mapbits_get(const arena_chunk_t *chunk, size_t pageind) { - return (arena_mapbitsp_read(arena_mapbitsp_get(chunk, pageind))); + return (arena_mapbitsp_read(arena_mapbitsp_get_const(chunk, pageind))); } JEMALLOC_ALWAYS_INLINE size_t arena_mapbits_size_decode(size_t mapbits) { size_t size; #if CHUNK_MAP_SIZE_SHIFT > 0 size = (mapbits & CHUNK_MAP_SIZE_MASK) >> CHUNK_MAP_SIZE_SHIFT; #elif CHUNK_MAP_SIZE_SHIFT == 0 size = mapbits & CHUNK_MAP_SIZE_MASK; #else size = (mapbits & CHUNK_MAP_SIZE_MASK) << -CHUNK_MAP_SIZE_SHIFT; #endif return (size); } JEMALLOC_ALWAYS_INLINE size_t -arena_mapbits_unallocated_size_get(arena_chunk_t *chunk, size_t pageind) +arena_mapbits_unallocated_size_get(const arena_chunk_t *chunk, size_t pageind) { size_t mapbits; mapbits = arena_mapbits_get(chunk, pageind); assert((mapbits & (CHUNK_MAP_LARGE|CHUNK_MAP_ALLOCATED)) == 0); return (arena_mapbits_size_decode(mapbits)); } JEMALLOC_ALWAYS_INLINE size_t -arena_mapbits_large_size_get(arena_chunk_t *chunk, size_t pageind) +arena_mapbits_large_size_get(const arena_chunk_t *chunk, size_t pageind) { size_t mapbits; mapbits = 
arena_mapbits_get(chunk, pageind); assert((mapbits & (CHUNK_MAP_LARGE|CHUNK_MAP_ALLOCATED)) == (CHUNK_MAP_LARGE|CHUNK_MAP_ALLOCATED)); return (arena_mapbits_size_decode(mapbits)); } JEMALLOC_ALWAYS_INLINE size_t -arena_mapbits_small_runind_get(arena_chunk_t *chunk, size_t pageind) +arena_mapbits_small_runind_get(const arena_chunk_t *chunk, size_t pageind) { size_t mapbits; mapbits = arena_mapbits_get(chunk, pageind); assert((mapbits & (CHUNK_MAP_LARGE|CHUNK_MAP_ALLOCATED)) == CHUNK_MAP_ALLOCATED); return (mapbits >> CHUNK_MAP_RUNIND_SHIFT); } JEMALLOC_ALWAYS_INLINE szind_t -arena_mapbits_binind_get(arena_chunk_t *chunk, size_t pageind) +arena_mapbits_binind_get(const arena_chunk_t *chunk, size_t pageind) { size_t mapbits; szind_t binind; mapbits = arena_mapbits_get(chunk, pageind); binind = (mapbits & CHUNK_MAP_BININD_MASK) >> CHUNK_MAP_BININD_SHIFT; assert(binind < NBINS || binind == BININD_INVALID); return (binind); } JEMALLOC_ALWAYS_INLINE size_t -arena_mapbits_dirty_get(arena_chunk_t *chunk, size_t pageind) +arena_mapbits_dirty_get(const arena_chunk_t *chunk, size_t pageind) { size_t mapbits; mapbits = arena_mapbits_get(chunk, pageind); assert((mapbits & CHUNK_MAP_DECOMMITTED) == 0 || (mapbits & (CHUNK_MAP_DIRTY|CHUNK_MAP_UNZEROED)) == 0); return (mapbits & CHUNK_MAP_DIRTY); } JEMALLOC_ALWAYS_INLINE size_t -arena_mapbits_unzeroed_get(arena_chunk_t *chunk, size_t pageind) +arena_mapbits_unzeroed_get(const arena_chunk_t *chunk, size_t pageind) { size_t mapbits; mapbits = arena_mapbits_get(chunk, pageind); assert((mapbits & CHUNK_MAP_DECOMMITTED) == 0 || (mapbits & (CHUNK_MAP_DIRTY|CHUNK_MAP_UNZEROED)) == 0); return (mapbits & CHUNK_MAP_UNZEROED); } JEMALLOC_ALWAYS_INLINE size_t -arena_mapbits_decommitted_get(arena_chunk_t *chunk, size_t pageind) +arena_mapbits_decommitted_get(const arena_chunk_t *chunk, size_t pageind) { size_t mapbits; mapbits = arena_mapbits_get(chunk, pageind); assert((mapbits & CHUNK_MAP_DECOMMITTED) == 0 || (mapbits & (CHUNK_MAP_DIRTY|CHUNK_MAP_UNZEROED)) == 0); return (mapbits & CHUNK_MAP_DECOMMITTED); } JEMALLOC_ALWAYS_INLINE size_t -arena_mapbits_large_get(arena_chunk_t *chunk, size_t pageind) +arena_mapbits_large_get(const arena_chunk_t *chunk, size_t pageind) { size_t mapbits; mapbits = arena_mapbits_get(chunk, pageind); return (mapbits & CHUNK_MAP_LARGE); } JEMALLOC_ALWAYS_INLINE size_t -arena_mapbits_allocated_get(arena_chunk_t *chunk, size_t pageind) +arena_mapbits_allocated_get(const arena_chunk_t *chunk, size_t pageind) { size_t mapbits; mapbits = arena_mapbits_get(chunk, pageind); return (mapbits & CHUNK_MAP_ALLOCATED); } JEMALLOC_ALWAYS_INLINE void arena_mapbitsp_write(size_t *mapbitsp, size_t mapbits) { *mapbitsp = mapbits; } JEMALLOC_ALWAYS_INLINE size_t arena_mapbits_size_encode(size_t size) { size_t mapbits; #if CHUNK_MAP_SIZE_SHIFT > 0 mapbits = size << CHUNK_MAP_SIZE_SHIFT; #elif CHUNK_MAP_SIZE_SHIFT == 0 mapbits = size; #else mapbits = size >> -CHUNK_MAP_SIZE_SHIFT; #endif assert((mapbits & ~CHUNK_MAP_SIZE_MASK) == 0); return (mapbits); } JEMALLOC_ALWAYS_INLINE void arena_mapbits_unallocated_set(arena_chunk_t *chunk, size_t pageind, size_t size, size_t flags) { - size_t *mapbitsp = arena_mapbitsp_get(chunk, pageind); + size_t *mapbitsp = arena_mapbitsp_get_mutable(chunk, pageind); assert((size & PAGE_MASK) == 0); assert((flags & CHUNK_MAP_FLAGS_MASK) == flags); assert((flags & CHUNK_MAP_DECOMMITTED) == 0 || (flags & (CHUNK_MAP_DIRTY|CHUNK_MAP_UNZEROED)) == 0); arena_mapbitsp_write(mapbitsp, arena_mapbits_size_encode(size) | 
CHUNK_MAP_BININD_INVALID | flags); } JEMALLOC_ALWAYS_INLINE void arena_mapbits_unallocated_size_set(arena_chunk_t *chunk, size_t pageind, size_t size) { - size_t *mapbitsp = arena_mapbitsp_get(chunk, pageind); + size_t *mapbitsp = arena_mapbitsp_get_mutable(chunk, pageind); size_t mapbits = arena_mapbitsp_read(mapbitsp); assert((size & PAGE_MASK) == 0); assert((mapbits & (CHUNK_MAP_LARGE|CHUNK_MAP_ALLOCATED)) == 0); arena_mapbitsp_write(mapbitsp, arena_mapbits_size_encode(size) | (mapbits & ~CHUNK_MAP_SIZE_MASK)); } JEMALLOC_ALWAYS_INLINE void arena_mapbits_internal_set(arena_chunk_t *chunk, size_t pageind, size_t flags) { - size_t *mapbitsp = arena_mapbitsp_get(chunk, pageind); + size_t *mapbitsp = arena_mapbitsp_get_mutable(chunk, pageind); assert((flags & CHUNK_MAP_UNZEROED) == flags); arena_mapbitsp_write(mapbitsp, flags); } JEMALLOC_ALWAYS_INLINE void arena_mapbits_large_set(arena_chunk_t *chunk, size_t pageind, size_t size, size_t flags) { - size_t *mapbitsp = arena_mapbitsp_get(chunk, pageind); + size_t *mapbitsp = arena_mapbitsp_get_mutable(chunk, pageind); assert((size & PAGE_MASK) == 0); assert((flags & CHUNK_MAP_FLAGS_MASK) == flags); assert((flags & CHUNK_MAP_DECOMMITTED) == 0 || (flags & (CHUNK_MAP_DIRTY|CHUNK_MAP_UNZEROED)) == 0); arena_mapbitsp_write(mapbitsp, arena_mapbits_size_encode(size) | CHUNK_MAP_BININD_INVALID | flags | CHUNK_MAP_LARGE | CHUNK_MAP_ALLOCATED); } JEMALLOC_ALWAYS_INLINE void arena_mapbits_large_binind_set(arena_chunk_t *chunk, size_t pageind, szind_t binind) { - size_t *mapbitsp = arena_mapbitsp_get(chunk, pageind); + size_t *mapbitsp = arena_mapbitsp_get_mutable(chunk, pageind); size_t mapbits = arena_mapbitsp_read(mapbitsp); assert(binind <= BININD_INVALID); assert(arena_mapbits_large_size_get(chunk, pageind) == LARGE_MINCLASS + large_pad); arena_mapbitsp_write(mapbitsp, (mapbits & ~CHUNK_MAP_BININD_MASK) | (binind << CHUNK_MAP_BININD_SHIFT)); } JEMALLOC_ALWAYS_INLINE void arena_mapbits_small_set(arena_chunk_t *chunk, size_t pageind, size_t runind, szind_t binind, size_t flags) { - size_t *mapbitsp = arena_mapbitsp_get(chunk, pageind); + size_t *mapbitsp = arena_mapbitsp_get_mutable(chunk, pageind); assert(binind < BININD_INVALID); assert(pageind - runind >= map_bias); assert((flags & CHUNK_MAP_UNZEROED) == flags); arena_mapbitsp_write(mapbitsp, (runind << CHUNK_MAP_RUNIND_SHIFT) | (binind << CHUNK_MAP_BININD_SHIFT) | flags | CHUNK_MAP_ALLOCATED); } JEMALLOC_INLINE void arena_metadata_allocated_add(arena_t *arena, size_t size) { atomic_add_z(&arena->stats.metadata_allocated, size); } JEMALLOC_INLINE void arena_metadata_allocated_sub(arena_t *arena, size_t size) { atomic_sub_z(&arena->stats.metadata_allocated, size); } JEMALLOC_INLINE size_t arena_metadata_allocated_get(arena_t *arena) { return (atomic_read_z(&arena->stats.metadata_allocated)); } JEMALLOC_INLINE bool arena_prof_accum_impl(arena_t *arena, uint64_t accumbytes) { cassert(config_prof); assert(prof_interval != 0); arena->prof_accumbytes += accumbytes; if (arena->prof_accumbytes >= prof_interval) { arena->prof_accumbytes -= prof_interval; return (true); } return (false); } JEMALLOC_INLINE bool arena_prof_accum_locked(arena_t *arena, uint64_t accumbytes) { cassert(config_prof); if (likely(prof_interval == 0)) return (false); return (arena_prof_accum_impl(arena, accumbytes)); } JEMALLOC_INLINE bool -arena_prof_accum(arena_t *arena, uint64_t accumbytes) +arena_prof_accum(tsdn_t *tsdn, arena_t *arena, uint64_t accumbytes) { cassert(config_prof); if (likely(prof_interval == 0)) return (false); { 
bool ret; - malloc_mutex_lock(&arena->lock); + malloc_mutex_lock(tsdn, &arena->lock); ret = arena_prof_accum_impl(arena, accumbytes); - malloc_mutex_unlock(&arena->lock); + malloc_mutex_unlock(tsdn, &arena->lock); return (ret); } } JEMALLOC_ALWAYS_INLINE szind_t arena_ptr_small_binind_get(const void *ptr, size_t mapbits) { szind_t binind; binind = (mapbits & CHUNK_MAP_BININD_MASK) >> CHUNK_MAP_BININD_SHIFT; if (config_debug) { arena_chunk_t *chunk; arena_t *arena; size_t pageind; size_t actual_mapbits; size_t rpages_ind; - arena_run_t *run; + const arena_run_t *run; arena_bin_t *bin; szind_t run_binind, actual_binind; arena_bin_info_t *bin_info; - arena_chunk_map_misc_t *miscelm; - void *rpages; + const arena_chunk_map_misc_t *miscelm; + const void *rpages; assert(binind != BININD_INVALID); assert(binind < NBINS); chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(ptr); arena = extent_node_arena_get(&chunk->node); pageind = ((uintptr_t)ptr - (uintptr_t)chunk) >> LG_PAGE; actual_mapbits = arena_mapbits_get(chunk, pageind); assert(mapbits == actual_mapbits); assert(arena_mapbits_large_get(chunk, pageind) == 0); assert(arena_mapbits_allocated_get(chunk, pageind) != 0); rpages_ind = pageind - arena_mapbits_small_runind_get(chunk, pageind); - miscelm = arena_miscelm_get(chunk, rpages_ind); + miscelm = arena_miscelm_get_const(chunk, rpages_ind); run = &miscelm->run; run_binind = run->binind; bin = &arena->bins[run_binind]; actual_binind = (szind_t)(bin - arena->bins); assert(run_binind == actual_binind); bin_info = &arena_bin_info[actual_binind]; rpages = arena_miscelm_to_rpages(miscelm); assert(((uintptr_t)ptr - ((uintptr_t)rpages + (uintptr_t)bin_info->reg0_offset)) % bin_info->reg_interval == 0); } return (binind); } # endif /* JEMALLOC_ARENA_INLINE_A */ # ifdef JEMALLOC_ARENA_INLINE_B JEMALLOC_INLINE szind_t arena_bin_index(arena_t *arena, arena_bin_t *bin) { szind_t binind = (szind_t)(bin - arena->bins); assert(binind < NBINS); return (binind); } JEMALLOC_INLINE size_t arena_run_regind(arena_run_t *run, arena_bin_info_t *bin_info, const void *ptr) { size_t diff, interval, shift, regind; arena_chunk_map_misc_t *miscelm = arena_run_to_miscelm(run); void *rpages = arena_miscelm_to_rpages(miscelm); /* * Freeing a pointer lower than region zero can cause assertion * failure. */ assert((uintptr_t)ptr >= (uintptr_t)rpages + (uintptr_t)bin_info->reg0_offset); /* * Avoid doing division with a variable divisor if possible. Using * actual division here can reduce allocator throughput by over 20%! */ diff = (size_t)((uintptr_t)ptr - (uintptr_t)rpages - bin_info->reg0_offset); /* Rescale (factor powers of 2 out of the numerator and denominator). */ interval = bin_info->reg_interval; shift = ffs_zu(interval) - 1; diff >>= shift; interval >>= shift; if (interval == 1) { /* The divisor was a power of 2. */ regind = diff; } else { /* * To divide by a number D that is not a power of two we * multiply by (2^21 / D) and then right shift by 21 positions. * * X / D * * becomes * * (X * interval_invs[D - 3]) >> SIZE_INV_SHIFT * * We can omit the first three elements, because we never * divide by 0, and 1 and 2 are both powers of two, which are * handled above. 
*/ #define SIZE_INV_SHIFT ((sizeof(size_t) << 3) - LG_RUN_MAXREGS) #define SIZE_INV(s) (((ZU(1) << SIZE_INV_SHIFT) / (s)) + 1) static const size_t interval_invs[] = { SIZE_INV(3), SIZE_INV(4), SIZE_INV(5), SIZE_INV(6), SIZE_INV(7), SIZE_INV(8), SIZE_INV(9), SIZE_INV(10), SIZE_INV(11), SIZE_INV(12), SIZE_INV(13), SIZE_INV(14), SIZE_INV(15), SIZE_INV(16), SIZE_INV(17), SIZE_INV(18), SIZE_INV(19), SIZE_INV(20), SIZE_INV(21), SIZE_INV(22), SIZE_INV(23), SIZE_INV(24), SIZE_INV(25), SIZE_INV(26), SIZE_INV(27), SIZE_INV(28), SIZE_INV(29), SIZE_INV(30), SIZE_INV(31) }; if (likely(interval <= ((sizeof(interval_invs) / sizeof(size_t)) + 2))) { regind = (diff * interval_invs[interval - 3]) >> SIZE_INV_SHIFT; } else regind = diff / interval; #undef SIZE_INV #undef SIZE_INV_SHIFT } assert(diff == regind * interval); assert(regind < bin_info->nregs); return (regind); } JEMALLOC_INLINE prof_tctx_t * -arena_prof_tctx_get(const void *ptr) +arena_prof_tctx_get(tsdn_t *tsdn, const void *ptr) { prof_tctx_t *ret; arena_chunk_t *chunk; cassert(config_prof); assert(ptr != NULL); chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(ptr); if (likely(chunk != ptr)) { size_t pageind = ((uintptr_t)ptr - (uintptr_t)chunk) >> LG_PAGE; size_t mapbits = arena_mapbits_get(chunk, pageind); assert((mapbits & CHUNK_MAP_ALLOCATED) != 0); if (likely((mapbits & CHUNK_MAP_LARGE) == 0)) ret = (prof_tctx_t *)(uintptr_t)1U; else { - arena_chunk_map_misc_t *elm = arena_miscelm_get(chunk, - pageind); + arena_chunk_map_misc_t *elm = + arena_miscelm_get_mutable(chunk, pageind); ret = atomic_read_p(&elm->prof_tctx_pun); } } else - ret = huge_prof_tctx_get(ptr); + ret = huge_prof_tctx_get(tsdn, ptr); return (ret); } JEMALLOC_INLINE void -arena_prof_tctx_set(const void *ptr, size_t usize, prof_tctx_t *tctx) +arena_prof_tctx_set(tsdn_t *tsdn, const void *ptr, size_t usize, + prof_tctx_t *tctx) { arena_chunk_t *chunk; cassert(config_prof); assert(ptr != NULL); chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(ptr); if (likely(chunk != ptr)) { size_t pageind = ((uintptr_t)ptr - (uintptr_t)chunk) >> LG_PAGE; assert(arena_mapbits_allocated_get(chunk, pageind) != 0); if (unlikely(usize > SMALL_MAXCLASS || (uintptr_t)tctx > (uintptr_t)1U)) { arena_chunk_map_misc_t *elm; assert(arena_mapbits_large_get(chunk, pageind) != 0); - elm = arena_miscelm_get(chunk, pageind); + elm = arena_miscelm_get_mutable(chunk, pageind); atomic_write_p(&elm->prof_tctx_pun, tctx); } else { /* * tctx must always be initialized for large runs. * Assert that the surrounding conditional logic is * equivalent to checking whether ptr refers to a large * run. 
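/*
 * Illustrative aside (not part of the patch): the SIZE_INV()/interval_invs[]
 * table above replaces "diff / interval" with a multiply and a shift, which
 * is far cheaper than dividing by a run-time divisor.  A stand-alone sketch
 * with a fixed 21-bit shift (div_by_inv and SHIFT_SKETCH are hypothetical
 * names); as in arena_run_regind(), the dividend is assumed to be an exact
 * multiple of the divisor and small enough that the rounded-up reciprocal
 * never overshoots:
 */
#include <assert.h>
#include <stddef.h>

#define SHIFT_SKETCH 21

static size_t
div_by_inv(size_t x, size_t d)
{
	/* Precomputing this per divisor is what interval_invs[] caches. */
	size_t inv = (((size_t)1 << SHIFT_SKETCH) / d) + 1;

	return ((x * inv) >> SHIFT_SKETCH);
}

static void
div_by_inv_check(void)
{
	size_t d, k;

	/* Spot-check exactness over the divisors the table above covers. */
	for (d = 3; d <= 31; d++) {
		for (k = 0; k < 2048; k++)
			assert(div_by_inv(k * d, d) == k);
	}
}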
*/ assert(arena_mapbits_large_get(chunk, pageind) == 0); } } else - huge_prof_tctx_set(ptr, tctx); + huge_prof_tctx_set(tsdn, ptr, tctx); } JEMALLOC_INLINE void -arena_prof_tctx_reset(const void *ptr, size_t usize, const void *old_ptr, - prof_tctx_t *old_tctx) +arena_prof_tctx_reset(tsdn_t *tsdn, const void *ptr, size_t usize, + const void *old_ptr, prof_tctx_t *old_tctx) { cassert(config_prof); assert(ptr != NULL); if (unlikely(usize > SMALL_MAXCLASS || (ptr == old_ptr && (uintptr_t)old_tctx > (uintptr_t)1U))) { arena_chunk_t *chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(ptr); if (likely(chunk != ptr)) { size_t pageind; arena_chunk_map_misc_t *elm; pageind = ((uintptr_t)ptr - (uintptr_t)chunk) >> LG_PAGE; assert(arena_mapbits_allocated_get(chunk, pageind) != 0); assert(arena_mapbits_large_get(chunk, pageind) != 0); - elm = arena_miscelm_get(chunk, pageind); + elm = arena_miscelm_get_mutable(chunk, pageind); atomic_write_p(&elm->prof_tctx_pun, (prof_tctx_t *)(uintptr_t)1U); } else - huge_prof_tctx_reset(ptr); + huge_prof_tctx_reset(tsdn, ptr); } } JEMALLOC_ALWAYS_INLINE void -arena_decay_ticks(tsd_t *tsd, arena_t *arena, unsigned nticks) +arena_decay_ticks(tsdn_t *tsdn, arena_t *arena, unsigned nticks) { + tsd_t *tsd; ticker_t *decay_ticker; - if (unlikely(tsd == NULL)) + if (unlikely(tsdn_null(tsdn))) return; + tsd = tsdn_tsd(tsdn); decay_ticker = decay_ticker_get(tsd, arena->ind); if (unlikely(decay_ticker == NULL)) return; if (unlikely(ticker_ticks(decay_ticker, nticks))) - arena_purge(arena, false); + arena_purge(tsdn, arena, false); } JEMALLOC_ALWAYS_INLINE void -arena_decay_tick(tsd_t *tsd, arena_t *arena) +arena_decay_tick(tsdn_t *tsdn, arena_t *arena) { - arena_decay_ticks(tsd, arena, 1); + arena_decay_ticks(tsdn, arena, 1); } JEMALLOC_ALWAYS_INLINE void * -arena_malloc(tsd_t *tsd, arena_t *arena, size_t size, szind_t ind, bool zero, +arena_malloc(tsdn_t *tsdn, arena_t *arena, size_t size, szind_t ind, bool zero, tcache_t *tcache, bool slow_path) { + assert(!tsdn_null(tsdn) || tcache == NULL); assert(size != 0); if (likely(tcache != NULL)) { if (likely(size <= SMALL_MAXCLASS)) { - return (tcache_alloc_small(tsd, arena, tcache, size, - ind, zero, slow_path)); + return (tcache_alloc_small(tsdn_tsd(tsdn), arena, + tcache, size, ind, zero, slow_path)); } if (likely(size <= tcache_maxclass)) { - return (tcache_alloc_large(tsd, arena, tcache, size, - ind, zero, slow_path)); + return (tcache_alloc_large(tsdn_tsd(tsdn), arena, + tcache, size, ind, zero, slow_path)); } /* (size > tcache_maxclass) case falls through. */ assert(size > tcache_maxclass); } - return (arena_malloc_hard(tsd, arena, size, ind, zero, tcache)); + return (arena_malloc_hard(tsdn, arena, size, ind, zero)); } JEMALLOC_ALWAYS_INLINE arena_t * arena_aalloc(const void *ptr) { arena_chunk_t *chunk; chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(ptr); if (likely(chunk != ptr)) return (extent_node_arena_get(&chunk->node)); else return (huge_aalloc(ptr)); } /* Return the size of the allocation pointed to by ptr. 
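/*
 * Illustrative aside (not part of the patch): arena_decay_ticks() above uses
 * a per-thread countdown ticker so that the comparatively expensive purge
 * check runs only once every nticks allocation events on the fast path.  A
 * minimal stand-alone sketch of such a ticker (ticker_sketch_t and its
 * functions are hypothetical simplifications, not jemalloc's ticker API):
 */
#include <stdbool.h>
#include <stdint.h>

typedef struct {
	int32_t tick;	/* Current countdown value. */
	int32_t nticks;	/* Reload value. */
} ticker_sketch_t;

static void
ticker_sketch_init(ticker_sketch_t *t, int32_t nticks)
{
	t->tick = nticks;
	t->nticks = nticks;
}

/* Returns true once every nticks calls; the caller then does the slow work. */
static bool
ticker_sketch_tick(ticker_sketch_t *t)
{
	if (--t->tick == 0) {
		t->tick = t->nticks;
		return (true);
	}
	return (false);
}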
*/ JEMALLOC_ALWAYS_INLINE size_t -arena_salloc(const void *ptr, bool demote) +arena_salloc(tsdn_t *tsdn, const void *ptr, bool demote) { size_t ret; arena_chunk_t *chunk; size_t pageind; szind_t binind; assert(ptr != NULL); chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(ptr); if (likely(chunk != ptr)) { pageind = ((uintptr_t)ptr - (uintptr_t)chunk) >> LG_PAGE; assert(arena_mapbits_allocated_get(chunk, pageind) != 0); binind = arena_mapbits_binind_get(chunk, pageind); if (unlikely(binind == BININD_INVALID || (config_prof && !demote && arena_mapbits_large_get(chunk, pageind) != 0))) { /* * Large allocation. In the common case (demote), and * as this is an inline function, most callers will only * end up looking at binind to determine that ptr is a * small allocation. */ assert(config_cache_oblivious || ((uintptr_t)ptr & PAGE_MASK) == 0); ret = arena_mapbits_large_size_get(chunk, pageind) - large_pad; assert(ret != 0); assert(pageind + ((ret+large_pad)>>LG_PAGE) <= chunk_npages); assert(arena_mapbits_dirty_get(chunk, pageind) == arena_mapbits_dirty_get(chunk, pageind+((ret+large_pad)>>LG_PAGE)-1)); } else { /* * Small allocation (possibly promoted to a large * object). */ assert(arena_mapbits_large_get(chunk, pageind) != 0 || arena_ptr_small_binind_get(ptr, arena_mapbits_get(chunk, pageind)) == binind); ret = index2size(binind); } } else - ret = huge_salloc(ptr); + ret = huge_salloc(tsdn, ptr); return (ret); } JEMALLOC_ALWAYS_INLINE void -arena_dalloc(tsd_t *tsd, void *ptr, tcache_t *tcache, bool slow_path) +arena_dalloc(tsdn_t *tsdn, void *ptr, tcache_t *tcache, bool slow_path) { arena_chunk_t *chunk; size_t pageind, mapbits; + assert(!tsdn_null(tsdn) || tcache == NULL); assert(ptr != NULL); chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(ptr); if (likely(chunk != ptr)) { pageind = ((uintptr_t)ptr - (uintptr_t)chunk) >> LG_PAGE; mapbits = arena_mapbits_get(chunk, pageind); assert(arena_mapbits_allocated_get(chunk, pageind) != 0); if (likely((mapbits & CHUNK_MAP_LARGE) == 0)) { /* Small allocation. 
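/*
 * Illustrative aside (not part of the patch): arena_salloc() above recovers
 * all of its metadata from pointer arithmetic alone -- the chunk base comes
 * from masking off the low chunk-offset bits, and the page index is the byte
 * offset into the chunk shifted down by the page size.  A stand-alone sketch
 * with hypothetical constants (LG_CHUNK_SKETCH, LG_PAGE_SKETCH):
 */
#include <stddef.h>
#include <stdint.h>

#define LG_CHUNK_SKETCH		21	/* 2 MiB chunks. */
#define LG_PAGE_SKETCH		12	/* 4 KiB pages. */
#define CHUNK_MASK_SKETCH	(((uintptr_t)1 << LG_CHUNK_SKETCH) - 1)

static void *
chunk_base_sketch(const void *ptr)
{
	return ((void *)((uintptr_t)ptr & ~CHUNK_MASK_SKETCH));
}

static size_t
page_index_sketch(const void *ptr)
{
	const void *chunk = chunk_base_sketch(ptr);

	return (((uintptr_t)ptr - (uintptr_t)chunk) >> LG_PAGE_SKETCH);
}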
*/ if (likely(tcache != NULL)) { szind_t binind = arena_ptr_small_binind_get(ptr, mapbits); - tcache_dalloc_small(tsd, tcache, ptr, binind, - slow_path); + tcache_dalloc_small(tsdn_tsd(tsdn), tcache, ptr, + binind, slow_path); } else { - arena_dalloc_small(tsd, extent_node_arena_get( - &chunk->node), chunk, ptr, pageind); + arena_dalloc_small(tsdn, + extent_node_arena_get(&chunk->node), chunk, + ptr, pageind); } } else { size_t size = arena_mapbits_large_size_get(chunk, pageind); assert(config_cache_oblivious || ((uintptr_t)ptr & PAGE_MASK) == 0); if (likely(tcache != NULL) && size - large_pad <= tcache_maxclass) { - tcache_dalloc_large(tsd, tcache, ptr, size - - large_pad, slow_path); + tcache_dalloc_large(tsdn_tsd(tsdn), tcache, ptr, + size - large_pad, slow_path); } else { - arena_dalloc_large(tsd, extent_node_arena_get( - &chunk->node), chunk, ptr); + arena_dalloc_large(tsdn, + extent_node_arena_get(&chunk->node), chunk, + ptr); } } } else - huge_dalloc(tsd, ptr, tcache); + huge_dalloc(tsdn, ptr); } JEMALLOC_ALWAYS_INLINE void -arena_sdalloc(tsd_t *tsd, void *ptr, size_t size, tcache_t *tcache) +arena_sdalloc(tsdn_t *tsdn, void *ptr, size_t size, tcache_t *tcache, + bool slow_path) { arena_chunk_t *chunk; + assert(!tsdn_null(tsdn) || tcache == NULL); + chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(ptr); if (likely(chunk != ptr)) { if (config_prof && opt_prof) { size_t pageind = ((uintptr_t)ptr - (uintptr_t)chunk) >> LG_PAGE; assert(arena_mapbits_allocated_get(chunk, pageind) != 0); if (arena_mapbits_large_get(chunk, pageind) != 0) { /* * Make sure to use promoted size, not request * size. */ size = arena_mapbits_large_size_get(chunk, pageind) - large_pad; } } - assert(s2u(size) == s2u(arena_salloc(ptr, false))); + assert(s2u(size) == s2u(arena_salloc(tsdn, ptr, false))); if (likely(size <= SMALL_MAXCLASS)) { /* Small allocation. 
*/ if (likely(tcache != NULL)) { szind_t binind = size2index(size); - tcache_dalloc_small(tsd, tcache, ptr, binind, - true); + tcache_dalloc_small(tsdn_tsd(tsdn), tcache, ptr, + binind, slow_path); } else { size_t pageind = ((uintptr_t)ptr - (uintptr_t)chunk) >> LG_PAGE; - arena_dalloc_small(tsd, extent_node_arena_get( - &chunk->node), chunk, ptr, pageind); + arena_dalloc_small(tsdn, + extent_node_arena_get(&chunk->node), chunk, + ptr, pageind); } } else { assert(config_cache_oblivious || ((uintptr_t)ptr & PAGE_MASK) == 0); if (likely(tcache != NULL) && size <= tcache_maxclass) { - tcache_dalloc_large(tsd, tcache, ptr, size, - true); + tcache_dalloc_large(tsdn_tsd(tsdn), tcache, ptr, + size, slow_path); } else { - arena_dalloc_large(tsd, extent_node_arena_get( - &chunk->node), chunk, ptr); + arena_dalloc_large(tsdn, + extent_node_arena_get(&chunk->node), chunk, + ptr); } } } else - huge_dalloc(tsd, ptr, tcache); + huge_dalloc(tsdn, ptr); } # endif /* JEMALLOC_ARENA_INLINE_B */ #endif #endif /* JEMALLOC_H_INLINES */ /******************************************************************************/ Index: head/contrib/jemalloc/include/jemalloc/internal/base.h =================================================================== --- head/contrib/jemalloc/include/jemalloc/internal/base.h (revision 299586) +++ head/contrib/jemalloc/include/jemalloc/internal/base.h (revision 299587) @@ -1,24 +1,25 @@ /******************************************************************************/ #ifdef JEMALLOC_H_TYPES #endif /* JEMALLOC_H_TYPES */ /******************************************************************************/ #ifdef JEMALLOC_H_STRUCTS #endif /* JEMALLOC_H_STRUCTS */ /******************************************************************************/ #ifdef JEMALLOC_H_EXTERNS -void *base_alloc(size_t size); -void base_stats_get(size_t *allocated, size_t *resident, size_t *mapped); +void *base_alloc(tsdn_t *tsdn, size_t size); +void base_stats_get(tsdn_t *tsdn, size_t *allocated, size_t *resident, + size_t *mapped); bool base_boot(void); -void base_prefork(void); -void base_postfork_parent(void); -void base_postfork_child(void); +void base_prefork(tsdn_t *tsdn); +void base_postfork_parent(tsdn_t *tsdn); +void base_postfork_child(tsdn_t *tsdn); #endif /* JEMALLOC_H_EXTERNS */ /******************************************************************************/ #ifdef JEMALLOC_H_INLINES #endif /* JEMALLOC_H_INLINES */ /******************************************************************************/ Index: head/contrib/jemalloc/include/jemalloc/internal/bitmap.h =================================================================== --- head/contrib/jemalloc/include/jemalloc/internal/bitmap.h (revision 299586) +++ head/contrib/jemalloc/include/jemalloc/internal/bitmap.h (revision 299587) @@ -1,274 +1,274 @@ /******************************************************************************/ #ifdef JEMALLOC_H_TYPES /* Maximum bitmap bit count is 2^LG_BITMAP_MAXBITS. */ #define LG_BITMAP_MAXBITS LG_RUN_MAXREGS #define BITMAP_MAXBITS (ZU(1) << LG_BITMAP_MAXBITS) typedef struct bitmap_level_s bitmap_level_t; typedef struct bitmap_info_s bitmap_info_t; typedef unsigned long bitmap_t; #define LG_SIZEOF_BITMAP LG_SIZEOF_LONG /* Number of bits per group. */ #define LG_BITMAP_GROUP_NBITS (LG_SIZEOF_BITMAP + 3) #define BITMAP_GROUP_NBITS (ZU(1) << LG_BITMAP_GROUP_NBITS) #define BITMAP_GROUP_NBITS_MASK (BITMAP_GROUP_NBITS-1) /* * Do some analysis on how big the bitmap is before we use a tree. 
For a brute - * force linear search, if we would have to call ffsl more than 2^3 times, use a - * tree instead. + * force linear search, if we would have to call ffs_lu() more than 2^3 times, + * use a tree instead. */ #if LG_BITMAP_MAXBITS - LG_BITMAP_GROUP_NBITS > 3 # define USE_TREE #endif /* Number of groups required to store a given number of bits. */ #define BITMAP_BITS2GROUPS(nbits) \ ((nbits + BITMAP_GROUP_NBITS_MASK) >> LG_BITMAP_GROUP_NBITS) /* * Number of groups required at a particular level for a given number of bits. */ #define BITMAP_GROUPS_L0(nbits) \ BITMAP_BITS2GROUPS(nbits) #define BITMAP_GROUPS_L1(nbits) \ BITMAP_BITS2GROUPS(BITMAP_BITS2GROUPS(nbits)) #define BITMAP_GROUPS_L2(nbits) \ BITMAP_BITS2GROUPS(BITMAP_BITS2GROUPS(BITMAP_BITS2GROUPS((nbits)))) #define BITMAP_GROUPS_L3(nbits) \ BITMAP_BITS2GROUPS(BITMAP_BITS2GROUPS(BITMAP_BITS2GROUPS( \ BITMAP_BITS2GROUPS((nbits))))) /* * Assuming the number of levels, number of groups required for a given number * of bits. */ #define BITMAP_GROUPS_1_LEVEL(nbits) \ BITMAP_GROUPS_L0(nbits) #define BITMAP_GROUPS_2_LEVEL(nbits) \ (BITMAP_GROUPS_1_LEVEL(nbits) + BITMAP_GROUPS_L1(nbits)) #define BITMAP_GROUPS_3_LEVEL(nbits) \ (BITMAP_GROUPS_2_LEVEL(nbits) + BITMAP_GROUPS_L2(nbits)) #define BITMAP_GROUPS_4_LEVEL(nbits) \ (BITMAP_GROUPS_3_LEVEL(nbits) + BITMAP_GROUPS_L3(nbits)) /* * Maximum number of groups required to support LG_BITMAP_MAXBITS. */ #ifdef USE_TREE #if LG_BITMAP_MAXBITS <= LG_BITMAP_GROUP_NBITS # define BITMAP_GROUPS_MAX BITMAP_GROUPS_1_LEVEL(BITMAP_MAXBITS) #elif LG_BITMAP_MAXBITS <= LG_BITMAP_GROUP_NBITS * 2 # define BITMAP_GROUPS_MAX BITMAP_GROUPS_2_LEVEL(BITMAP_MAXBITS) #elif LG_BITMAP_MAXBITS <= LG_BITMAP_GROUP_NBITS * 3 # define BITMAP_GROUPS_MAX BITMAP_GROUPS_3_LEVEL(BITMAP_MAXBITS) #elif LG_BITMAP_MAXBITS <= LG_BITMAP_GROUP_NBITS * 4 # define BITMAP_GROUPS_MAX BITMAP_GROUPS_4_LEVEL(BITMAP_MAXBITS) #else # error "Unsupported bitmap size" #endif /* Maximum number of levels possible. */ #define BITMAP_MAX_LEVELS \ (LG_BITMAP_MAXBITS / LG_SIZEOF_BITMAP) \ + !!(LG_BITMAP_MAXBITS % LG_SIZEOF_BITMAP) #else /* USE_TREE */ #define BITMAP_GROUPS_MAX BITMAP_BITS2GROUPS(BITMAP_MAXBITS) #endif /* USE_TREE */ #endif /* JEMALLOC_H_TYPES */ /******************************************************************************/ #ifdef JEMALLOC_H_STRUCTS struct bitmap_level_s { /* Offset of this level's groups within the array of groups. */ size_t group_offset; }; struct bitmap_info_s { /* Logical number of bits in bitmap (stored at bottom level). */ size_t nbits; #ifdef USE_TREE /* Number of levels necessary for nbits. */ unsigned nlevels; /* * Only the first (nlevels+1) elements are used, and levels are ordered * bottom to top (e.g. the bottom level is stored in levels[0]). */ bitmap_level_t levels[BITMAP_MAX_LEVELS+1]; #else /* USE_TREE */ /* Number of groups necessary for nbits. 
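/*
 * Illustrative aside (not part of the patch): BITMAP_BITS2GROUPS() above
 * rounds a bit count up to whole groups, and each tree level needs one bit
 * per group of the level below it.  A worked example with 64-bit groups
 * (LG_BITMAP_GROUP_NBITS == 6) and a hypothetical 2048-bit bitmap:
 *
 *   level 0: BITMAP_BITS2GROUPS(2048) == (2048 + 63) >> 6 == 32 groups
 *   level 1: BITMAP_BITS2GROUPS(32)   == (32 + 63) >> 6   ==  1 group
 *
 * so the whole tree occupies 33 groups, and a tree-based search touches one
 * group per level instead of scanning up to 32 groups linearly.
 */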
*/ size_t ngroups; #endif /* USE_TREE */ }; #endif /* JEMALLOC_H_STRUCTS */ /******************************************************************************/ #ifdef JEMALLOC_H_EXTERNS void bitmap_info_init(bitmap_info_t *binfo, size_t nbits); void bitmap_init(bitmap_t *bitmap, const bitmap_info_t *binfo); size_t bitmap_size(const bitmap_info_t *binfo); #endif /* JEMALLOC_H_EXTERNS */ /******************************************************************************/ #ifdef JEMALLOC_H_INLINES #ifndef JEMALLOC_ENABLE_INLINE bool bitmap_full(bitmap_t *bitmap, const bitmap_info_t *binfo); bool bitmap_get(bitmap_t *bitmap, const bitmap_info_t *binfo, size_t bit); void bitmap_set(bitmap_t *bitmap, const bitmap_info_t *binfo, size_t bit); size_t bitmap_sfu(bitmap_t *bitmap, const bitmap_info_t *binfo); void bitmap_unset(bitmap_t *bitmap, const bitmap_info_t *binfo, size_t bit); #endif #if (defined(JEMALLOC_ENABLE_INLINE) || defined(JEMALLOC_BITMAP_C_)) JEMALLOC_INLINE bool bitmap_full(bitmap_t *bitmap, const bitmap_info_t *binfo) { #ifdef USE_TREE size_t rgoff = binfo->levels[binfo->nlevels].group_offset - 1; bitmap_t rg = bitmap[rgoff]; /* The bitmap is full iff the root group is 0. */ return (rg == 0); #else size_t i; for (i = 0; i < binfo->ngroups; i++) { if (bitmap[i] != 0) return (false); } return (true); #endif } JEMALLOC_INLINE bool bitmap_get(bitmap_t *bitmap, const bitmap_info_t *binfo, size_t bit) { size_t goff; bitmap_t g; assert(bit < binfo->nbits); goff = bit >> LG_BITMAP_GROUP_NBITS; g = bitmap[goff]; return (!(g & (ZU(1) << (bit & BITMAP_GROUP_NBITS_MASK)))); } JEMALLOC_INLINE void bitmap_set(bitmap_t *bitmap, const bitmap_info_t *binfo, size_t bit) { size_t goff; bitmap_t *gp; bitmap_t g; assert(bit < binfo->nbits); assert(!bitmap_get(bitmap, binfo, bit)); goff = bit >> LG_BITMAP_GROUP_NBITS; gp = &bitmap[goff]; g = *gp; assert(g & (ZU(1) << (bit & BITMAP_GROUP_NBITS_MASK))); g ^= ZU(1) << (bit & BITMAP_GROUP_NBITS_MASK); *gp = g; assert(bitmap_get(bitmap, binfo, bit)); #ifdef USE_TREE /* Propagate group state transitions up the tree. */ if (g == 0) { unsigned i; for (i = 1; i < binfo->nlevels; i++) { bit = goff; goff = bit >> LG_BITMAP_GROUP_NBITS; gp = &bitmap[binfo->levels[i].group_offset + goff]; g = *gp; assert(g & (ZU(1) << (bit & BITMAP_GROUP_NBITS_MASK))); g ^= ZU(1) << (bit & BITMAP_GROUP_NBITS_MASK); *gp = g; if (g != 0) break; } } #endif } /* sfu: set first unset. 
*/ JEMALLOC_INLINE size_t bitmap_sfu(bitmap_t *bitmap, const bitmap_info_t *binfo) { size_t bit; bitmap_t g; unsigned i; assert(!bitmap_full(bitmap, binfo)); #ifdef USE_TREE i = binfo->nlevels - 1; g = bitmap[binfo->levels[i].group_offset]; bit = ffs_lu(g) - 1; while (i > 0) { i--; g = bitmap[binfo->levels[i].group_offset + bit]; bit = (bit << LG_BITMAP_GROUP_NBITS) + (ffs_lu(g) - 1); } #else i = 0; g = bitmap[0]; while ((bit = ffs_lu(g)) == 0) { i++; g = bitmap[i]; } - bit = (bit - 1) + (i << 6); + bit = (i << LG_BITMAP_GROUP_NBITS) + (bit - 1); #endif bitmap_set(bitmap, binfo, bit); return (bit); } JEMALLOC_INLINE void bitmap_unset(bitmap_t *bitmap, const bitmap_info_t *binfo, size_t bit) { size_t goff; bitmap_t *gp; bitmap_t g; UNUSED bool propagate; assert(bit < binfo->nbits); assert(bitmap_get(bitmap, binfo, bit)); goff = bit >> LG_BITMAP_GROUP_NBITS; gp = &bitmap[goff]; g = *gp; propagate = (g == 0); assert((g & (ZU(1) << (bit & BITMAP_GROUP_NBITS_MASK))) == 0); g ^= ZU(1) << (bit & BITMAP_GROUP_NBITS_MASK); *gp = g; assert(!bitmap_get(bitmap, binfo, bit)); #ifdef USE_TREE /* Propagate group state transitions up the tree. */ if (propagate) { unsigned i; for (i = 1; i < binfo->nlevels; i++) { bit = goff; goff = bit >> LG_BITMAP_GROUP_NBITS; gp = &bitmap[binfo->levels[i].group_offset + goff]; g = *gp; propagate = (g == 0); assert((g & (ZU(1) << (bit & BITMAP_GROUP_NBITS_MASK))) == 0); g ^= ZU(1) << (bit & BITMAP_GROUP_NBITS_MASK); *gp = g; if (!propagate) break; } } #endif /* USE_TREE */ } #endif #endif /* JEMALLOC_H_INLINES */ /******************************************************************************/ Index: head/contrib/jemalloc/include/jemalloc/internal/chunk.h =================================================================== --- head/contrib/jemalloc/include/jemalloc/internal/chunk.h (revision 299586) +++ head/contrib/jemalloc/include/jemalloc/internal/chunk.h (revision 299587) @@ -1,99 +1,99 @@ /******************************************************************************/ #ifdef JEMALLOC_H_TYPES /* * Size and alignment of memory chunks that are allocated by the OS's virtual * memory system. */ #define LG_CHUNK_DEFAULT 21 /* Return the chunk address for allocation address a. */ #define CHUNK_ADDR2BASE(a) \ ((void *)((uintptr_t)(a) & ~chunksize_mask)) /* Return the chunk offset of address a. */ #define CHUNK_ADDR2OFFSET(a) \ ((size_t)((uintptr_t)(a) & chunksize_mask)) /* Return the smallest chunk multiple that is >= s. */ #define CHUNK_CEILING(s) \ (((s) + chunksize_mask) & ~chunksize_mask) #define CHUNK_HOOKS_INITIALIZER { \ NULL, \ NULL, \ NULL, \ NULL, \ NULL, \ NULL, \ NULL \ } #endif /* JEMALLOC_H_TYPES */ /******************************************************************************/ #ifdef JEMALLOC_H_STRUCTS #endif /* JEMALLOC_H_STRUCTS */ /******************************************************************************/ #ifdef JEMALLOC_H_EXTERNS extern size_t opt_lg_chunk; extern const char *opt_dss; extern rtree_t chunks_rtree; extern size_t chunksize; extern size_t chunksize_mask; /* (chunksize - 1). 
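/*
 * Illustrative aside (not part of the patch): in the linear-scan hunk above,
 * the group index must be shifted by LG_BITMAP_GROUP_NBITS rather than by a
 * hard-coded 6.  Groups are sizeof(long) * 8 bits wide, which is 64 on LP64
 * systems but only 32 on LLP64 systems such as 64-bit Windows, so "i << 6"
 * over-shifted there (this is the bitmap_sfu() fix noted under 4.1.1 in the
 * ChangeLog).  With 32-bit groups, group 3 / in-group bit 5 is absolute bit
 * 3 * 32 + 5 == 101, whereas the old expression computed 3 * 64 + 5 == 197.
 */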
*/ extern size_t chunk_npages; extern const chunk_hooks_t chunk_hooks_default; -chunk_hooks_t chunk_hooks_get(arena_t *arena); -chunk_hooks_t chunk_hooks_set(arena_t *arena, +chunk_hooks_t chunk_hooks_get(tsdn_t *tsdn, arena_t *arena); +chunk_hooks_t chunk_hooks_set(tsdn_t *tsdn, arena_t *arena, const chunk_hooks_t *chunk_hooks); -bool chunk_register(const void *chunk, const extent_node_t *node); +bool chunk_register(tsdn_t *tsdn, const void *chunk, + const extent_node_t *node); void chunk_deregister(const void *chunk, const extent_node_t *node); void *chunk_alloc_base(size_t size); -void *chunk_alloc_cache(arena_t *arena, chunk_hooks_t *chunk_hooks, - void *new_addr, size_t size, size_t alignment, bool *zero, - bool dalloc_node); -void *chunk_alloc_wrapper(arena_t *arena, chunk_hooks_t *chunk_hooks, - void *new_addr, size_t size, size_t alignment, bool *zero, bool *commit); -void chunk_dalloc_cache(arena_t *arena, chunk_hooks_t *chunk_hooks, - void *chunk, size_t size, bool committed); -void chunk_dalloc_arena(arena_t *arena, chunk_hooks_t *chunk_hooks, - void *chunk, size_t size, bool zeroed, bool committed); -void chunk_dalloc_wrapper(arena_t *arena, chunk_hooks_t *chunk_hooks, - void *chunk, size_t size, bool committed); -bool chunk_purge_arena(arena_t *arena, void *chunk, size_t offset, +void *chunk_alloc_cache(tsdn_t *tsdn, arena_t *arena, + chunk_hooks_t *chunk_hooks, void *new_addr, size_t size, size_t alignment, + bool *zero, bool dalloc_node); +void *chunk_alloc_wrapper(tsdn_t *tsdn, arena_t *arena, + chunk_hooks_t *chunk_hooks, void *new_addr, size_t size, size_t alignment, + bool *zero, bool *commit); +void chunk_dalloc_cache(tsdn_t *tsdn, arena_t *arena, + chunk_hooks_t *chunk_hooks, void *chunk, size_t size, bool committed); +void chunk_dalloc_wrapper(tsdn_t *tsdn, arena_t *arena, + chunk_hooks_t *chunk_hooks, void *chunk, size_t size, bool zeroed, + bool committed); +bool chunk_purge_wrapper(tsdn_t *tsdn, arena_t *arena, + chunk_hooks_t *chunk_hooks, void *chunk, size_t size, size_t offset, size_t length); -bool chunk_purge_wrapper(arena_t *arena, chunk_hooks_t *chunk_hooks, - void *chunk, size_t size, size_t offset, size_t length); bool chunk_boot(void); -void chunk_prefork(void); -void chunk_postfork_parent(void); -void chunk_postfork_child(void); +void chunk_prefork(tsdn_t *tsdn); +void chunk_postfork_parent(tsdn_t *tsdn); +void chunk_postfork_child(tsdn_t *tsdn); #endif /* JEMALLOC_H_EXTERNS */ /******************************************************************************/ #ifdef JEMALLOC_H_INLINES #ifndef JEMALLOC_ENABLE_INLINE extent_node_t *chunk_lookup(const void *chunk, bool dependent); #endif #if (defined(JEMALLOC_ENABLE_INLINE) || defined(JEMALLOC_CHUNK_C_)) JEMALLOC_INLINE extent_node_t * chunk_lookup(const void *ptr, bool dependent) { return (rtree_get(&chunks_rtree, (uintptr_t)ptr, dependent)); } #endif #endif /* JEMALLOC_H_INLINES */ /******************************************************************************/ #include "jemalloc/internal/chunk_dss.h" #include "jemalloc/internal/chunk_mmap.h" Index: head/contrib/jemalloc/include/jemalloc/internal/chunk_dss.h =================================================================== --- head/contrib/jemalloc/include/jemalloc/internal/chunk_dss.h (revision 299586) +++ head/contrib/jemalloc/include/jemalloc/internal/chunk_dss.h (revision 299587) @@ -1,39 +1,39 @@ /******************************************************************************/ #ifdef JEMALLOC_H_TYPES typedef enum { dss_prec_disabled = 0, 
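/*
 * Illustrative aside (not part of the patch): a worked example of the chunk
 * address macros defined above, assuming the default 2 MiB chunks
 * (LG_CHUNK_DEFAULT == 21, so chunksize_mask == 0x1fffff) and a hypothetical
 * address a == 0x7f3a12345678:
 *
 *   CHUNK_ADDR2BASE(a)   == a & ~0x1fffff == 0x7f3a12200000
 *   CHUNK_ADDR2OFFSET(a) == a &  0x1fffff ==       0x145678
 *   CHUNK_CEILING(3 MiB) == (0x300000 + 0x1fffff) & ~0x1fffff == 0x400000
 *
 * A pointer whose chunk offset is zero is the chunk itself, which is how the
 * arena inlines distinguish huge allocations from chunk-backed ones.
 */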
dss_prec_primary = 1, dss_prec_secondary = 2, dss_prec_limit = 3 } dss_prec_t; #define DSS_PREC_DEFAULT dss_prec_secondary #define DSS_DEFAULT "secondary" #endif /* JEMALLOC_H_TYPES */ /******************************************************************************/ #ifdef JEMALLOC_H_STRUCTS extern const char *dss_prec_names[]; #endif /* JEMALLOC_H_STRUCTS */ /******************************************************************************/ #ifdef JEMALLOC_H_EXTERNS -dss_prec_t chunk_dss_prec_get(void); -bool chunk_dss_prec_set(dss_prec_t dss_prec); -void *chunk_alloc_dss(arena_t *arena, void *new_addr, size_t size, - size_t alignment, bool *zero, bool *commit); -bool chunk_in_dss(void *chunk); +dss_prec_t chunk_dss_prec_get(tsdn_t *tsdn); +bool chunk_dss_prec_set(tsdn_t *tsdn, dss_prec_t dss_prec); +void *chunk_alloc_dss(tsdn_t *tsdn, arena_t *arena, void *new_addr, + size_t size, size_t alignment, bool *zero, bool *commit); +bool chunk_in_dss(tsdn_t *tsdn, void *chunk); bool chunk_dss_boot(void); -void chunk_dss_prefork(void); -void chunk_dss_postfork_parent(void); -void chunk_dss_postfork_child(void); +void chunk_dss_prefork(tsdn_t *tsdn); +void chunk_dss_postfork_parent(tsdn_t *tsdn); +void chunk_dss_postfork_child(tsdn_t *tsdn); #endif /* JEMALLOC_H_EXTERNS */ /******************************************************************************/ #ifdef JEMALLOC_H_INLINES #endif /* JEMALLOC_H_INLINES */ /******************************************************************************/ Index: head/contrib/jemalloc/include/jemalloc/internal/ckh.h =================================================================== --- head/contrib/jemalloc/include/jemalloc/internal/ckh.h (revision 299586) +++ head/contrib/jemalloc/include/jemalloc/internal/ckh.h (revision 299587) @@ -1,86 +1,86 @@ /******************************************************************************/ #ifdef JEMALLOC_H_TYPES typedef struct ckh_s ckh_t; typedef struct ckhc_s ckhc_t; /* Typedefs to allow easy function pointer passing. */ typedef void ckh_hash_t (const void *, size_t[2]); typedef bool ckh_keycomp_t (const void *, const void *); /* Maintain counters used to get an idea of performance. */ /* #define CKH_COUNT */ /* Print counter values in ckh_delete() (requires CKH_COUNT). */ /* #define CKH_VERBOSE */ /* * There are 2^LG_CKH_BUCKET_CELLS cells in each hash table bucket. Try to fit * one bucket per L1 cache line. */ #define LG_CKH_BUCKET_CELLS (LG_CACHELINE - LG_SIZEOF_PTR - 1) #endif /* JEMALLOC_H_TYPES */ /******************************************************************************/ #ifdef JEMALLOC_H_STRUCTS /* Hash table cell. */ struct ckhc_s { const void *key; const void *data; }; struct ckh_s { #ifdef CKH_COUNT /* Counters used to get an idea of performance. */ uint64_t ngrows; uint64_t nshrinks; uint64_t nshrinkfails; uint64_t ninserts; uint64_t nrelocs; #endif /* Used for pseudo-random number generation. */ uint64_t prng_state; /* Total number of items. */ size_t count; /* * Minimum and current number of hash table buckets. There are * 2^LG_CKH_BUCKET_CELLS cells per bucket. */ unsigned lg_minbuckets; unsigned lg_curbuckets; /* Hash and comparison functions. */ ckh_hash_t *hash; ckh_keycomp_t *keycomp; /* Hash table with 2^lg_curbuckets buckets. 
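/*
 * Illustrative aside (not part of the patch): the bucket sizing above aims to
 * fit one cuckoo-hash bucket into one L1 cache line.  On a typical LP64
 * system, LG_CACHELINE == 6 (64-byte lines) and LG_SIZEOF_PTR == 3 (8-byte
 * pointers), so LG_CKH_BUCKET_CELLS == 6 - 3 - 1 == 2, i.e. 4 cells per
 * bucket; on 32-bit systems it works out to 8 smaller cells instead.  Each
 * cell is a key/data pointer pair:
 */
#include <assert.h>
#include <stddef.h>

typedef struct {
	const void	*key;
	const void	*data;
} ckhc_sketch_t;	/* Mirrors ckhc_s above; 16 bytes on LP64. */

static void
ckh_bucket_size_check(void)
{
	/* Assumes LP64: 4 cells * 16 bytes == 64 bytes == one cache line. */
	assert(4 * sizeof(ckhc_sketch_t) == 64);
}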
*/ ckhc_t *tab; }; #endif /* JEMALLOC_H_STRUCTS */ /******************************************************************************/ #ifdef JEMALLOC_H_EXTERNS -bool ckh_new(tsd_t *tsd, ckh_t *ckh, size_t minitems, ckh_hash_t *hash, +bool ckh_new(tsdn_t *tsdn, ckh_t *ckh, size_t minitems, ckh_hash_t *hash, ckh_keycomp_t *keycomp); -void ckh_delete(tsd_t *tsd, ckh_t *ckh); +void ckh_delete(tsdn_t *tsdn, ckh_t *ckh); size_t ckh_count(ckh_t *ckh); bool ckh_iter(ckh_t *ckh, size_t *tabind, void **key, void **data); -bool ckh_insert(tsd_t *tsd, ckh_t *ckh, const void *key, const void *data); -bool ckh_remove(tsd_t *tsd, ckh_t *ckh, const void *searchkey, void **key, +bool ckh_insert(tsdn_t *tsdn, ckh_t *ckh, const void *key, const void *data); +bool ckh_remove(tsdn_t *tsdn, ckh_t *ckh, const void *searchkey, void **key, void **data); bool ckh_search(ckh_t *ckh, const void *searchkey, void **key, void **data); void ckh_string_hash(const void *key, size_t r_hash[2]); bool ckh_string_keycomp(const void *k1, const void *k2); void ckh_pointer_hash(const void *key, size_t r_hash[2]); bool ckh_pointer_keycomp(const void *k1, const void *k2); #endif /* JEMALLOC_H_EXTERNS */ /******************************************************************************/ #ifdef JEMALLOC_H_INLINES #endif /* JEMALLOC_H_INLINES */ /******************************************************************************/ Index: head/contrib/jemalloc/include/jemalloc/internal/ctl.h =================================================================== --- head/contrib/jemalloc/include/jemalloc/internal/ctl.h (revision 299586) +++ head/contrib/jemalloc/include/jemalloc/internal/ctl.h (revision 299587) @@ -1,115 +1,118 @@ /******************************************************************************/ #ifdef JEMALLOC_H_TYPES typedef struct ctl_node_s ctl_node_t; typedef struct ctl_named_node_s ctl_named_node_t; typedef struct ctl_indexed_node_s ctl_indexed_node_t; typedef struct ctl_arena_stats_s ctl_arena_stats_t; typedef struct ctl_stats_s ctl_stats_t; #endif /* JEMALLOC_H_TYPES */ /******************************************************************************/ #ifdef JEMALLOC_H_STRUCTS struct ctl_node_s { bool named; }; struct ctl_named_node_s { struct ctl_node_s node; const char *name; /* If (nchildren == 0), this is a terminal node. */ unsigned nchildren; const ctl_node_t *children; - int (*ctl)(const size_t *, size_t, void *, size_t *, - void *, size_t); + int (*ctl)(tsd_t *, const size_t *, size_t, void *, + size_t *, void *, size_t); }; struct ctl_indexed_node_s { struct ctl_node_s node; - const ctl_named_node_t *(*index)(const size_t *, size_t, size_t); + const ctl_named_node_t *(*index)(tsdn_t *, const size_t *, size_t, + size_t); }; struct ctl_arena_stats_s { bool initialized; unsigned nthreads; const char *dss; ssize_t lg_dirty_mult; ssize_t decay_time; size_t pactive; size_t pdirty; /* The remainder are only populated if config_stats is true. */ arena_stats_t astats; /* Aggregate stats for small size classes, based on bin stats. */ size_t allocated_small; uint64_t nmalloc_small; uint64_t ndalloc_small; uint64_t nrequests_small; malloc_bin_stats_t bstats[NBINS]; malloc_large_stats_t *lstats; /* nlclasses elements. */ malloc_huge_stats_t *hstats; /* nhclasses elements. */ }; struct ctl_stats_s { size_t allocated; size_t active; size_t metadata; size_t resident; size_t mapped; + size_t retained; unsigned narenas; ctl_arena_stats_t *arenas; /* (narenas + 1) elements. 
*/ }; #endif /* JEMALLOC_H_STRUCTS */ /******************************************************************************/ #ifdef JEMALLOC_H_EXTERNS -int ctl_byname(const char *name, void *oldp, size_t *oldlenp, void *newp, - size_t newlen); -int ctl_nametomib(const char *name, size_t *mibp, size_t *miblenp); - -int ctl_bymib(const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, +int ctl_byname(tsd_t *tsd, const char *name, void *oldp, size_t *oldlenp, void *newp, size_t newlen); +int ctl_nametomib(tsdn_t *tsdn, const char *name, size_t *mibp, + size_t *miblenp); + +int ctl_bymib(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp, + size_t *oldlenp, void *newp, size_t newlen); bool ctl_boot(void); -void ctl_prefork(void); -void ctl_postfork_parent(void); -void ctl_postfork_child(void); +void ctl_prefork(tsdn_t *tsdn); +void ctl_postfork_parent(tsdn_t *tsdn); +void ctl_postfork_child(tsdn_t *tsdn); #define xmallctl(name, oldp, oldlenp, newp, newlen) do { \ if (je_mallctl(name, oldp, oldlenp, newp, newlen) \ != 0) { \ malloc_printf( \ ": Failure in xmallctl(\"%s\", ...)\n", \ name); \ abort(); \ } \ } while (0) #define xmallctlnametomib(name, mibp, miblenp) do { \ if (je_mallctlnametomib(name, mibp, miblenp) != 0) { \ malloc_printf(": Failure in " \ "xmallctlnametomib(\"%s\", ...)\n", name); \ abort(); \ } \ } while (0) #define xmallctlbymib(mib, miblen, oldp, oldlenp, newp, newlen) do { \ if (je_mallctlbymib(mib, miblen, oldp, oldlenp, newp, \ newlen) != 0) { \ malloc_write( \ ": Failure in xmallctlbymib()\n"); \ abort(); \ } \ } while (0) #endif /* JEMALLOC_H_EXTERNS */ /******************************************************************************/ #ifdef JEMALLOC_H_INLINES #endif /* JEMALLOC_H_INLINES */ /******************************************************************************/ Index: head/contrib/jemalloc/include/jemalloc/internal/extent.h =================================================================== --- head/contrib/jemalloc/include/jemalloc/internal/extent.h (revision 299586) +++ head/contrib/jemalloc/include/jemalloc/internal/extent.h (revision 299587) @@ -1,239 +1,239 @@ /******************************************************************************/ #ifdef JEMALLOC_H_TYPES typedef struct extent_node_s extent_node_t; #endif /* JEMALLOC_H_TYPES */ /******************************************************************************/ #ifdef JEMALLOC_H_STRUCTS /* Tree of extents. Use accessor functions for en_* fields. */ struct extent_node_s { /* Arena from which this extent came, if any. */ arena_t *en_arena; /* Pointer to the extent that this tree node is responsible for. */ void *en_addr; /* Total region size. */ size_t en_size; /* * The zeroed flag is used by chunk recycling code to track whether * memory is zero-filled. */ bool en_zeroed; /* * True if physical memory is committed to the extent, whether * explicitly or implicitly as on a system that overcommits and * satisfies physical memory needs on demand via soft page faults. */ bool en_committed; /* * The achunk flag is used to validate that huge allocation lookups * don't return arena chunks. */ bool en_achunk; /* Profile counters, used for huge objects. */ prof_tctx_t *en_prof_tctx; /* Linkage for arena's runs_dirty and chunks_cache rings. */ arena_runs_dirty_link_t rd; qr(extent_node_t) cc_link; union { /* Linkage for the size/address-ordered tree. */ rb_node(extent_node_t) szad_link; - /* Linkage for arena's huge and node_cache lists. */ + /* Linkage for arena's achunks, huge, and node_cache lists. 
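/*
 * Illustrative aside (not part of the patch): the ctl_byname()/ctl_bymib()
 * functions declared above back the public mallctl*() interface, and the
 * xmallctl*() macros are simply abort-on-failure wrappers used internally.
 * A minimal sketch of the public read path (on FreeBSD the declarations come
 * from <malloc_np.h>; stand-alone jemalloc uses <jemalloc/jemalloc.h>, and
 * "stats.allocated" is only meaningful when statistics are enabled):
 */
#include <stddef.h>
#include <stdio.h>
#include <malloc_np.h>

static void
print_allocated(void)
{
	size_t allocated, sz = sizeof(allocated);

	if (mallctl("stats.allocated", &allocated, &sz, NULL, 0) == 0)
		printf("stats.allocated: %zu\n", allocated);
}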
*/ ql_elm(extent_node_t) ql_link; }; /* Linkage for the address-ordered tree. */ rb_node(extent_node_t) ad_link; }; typedef rb_tree(extent_node_t) extent_tree_t; #endif /* JEMALLOC_H_STRUCTS */ /******************************************************************************/ #ifdef JEMALLOC_H_EXTERNS rb_proto(, extent_tree_szad_, extent_tree_t, extent_node_t) rb_proto(, extent_tree_ad_, extent_tree_t, extent_node_t) #endif /* JEMALLOC_H_EXTERNS */ /******************************************************************************/ #ifdef JEMALLOC_H_INLINES #ifndef JEMALLOC_ENABLE_INLINE arena_t *extent_node_arena_get(const extent_node_t *node); void *extent_node_addr_get(const extent_node_t *node); size_t extent_node_size_get(const extent_node_t *node); bool extent_node_zeroed_get(const extent_node_t *node); bool extent_node_committed_get(const extent_node_t *node); bool extent_node_achunk_get(const extent_node_t *node); prof_tctx_t *extent_node_prof_tctx_get(const extent_node_t *node); void extent_node_arena_set(extent_node_t *node, arena_t *arena); void extent_node_addr_set(extent_node_t *node, void *addr); void extent_node_size_set(extent_node_t *node, size_t size); void extent_node_zeroed_set(extent_node_t *node, bool zeroed); void extent_node_committed_set(extent_node_t *node, bool committed); void extent_node_achunk_set(extent_node_t *node, bool achunk); void extent_node_prof_tctx_set(extent_node_t *node, prof_tctx_t *tctx); void extent_node_init(extent_node_t *node, arena_t *arena, void *addr, size_t size, bool zeroed, bool committed); void extent_node_dirty_linkage_init(extent_node_t *node); void extent_node_dirty_insert(extent_node_t *node, arena_runs_dirty_link_t *runs_dirty, extent_node_t *chunks_dirty); void extent_node_dirty_remove(extent_node_t *node); #endif #if (defined(JEMALLOC_ENABLE_INLINE) || defined(JEMALLOC_EXTENT_C_)) JEMALLOC_INLINE arena_t * extent_node_arena_get(const extent_node_t *node) { return (node->en_arena); } JEMALLOC_INLINE void * extent_node_addr_get(const extent_node_t *node) { return (node->en_addr); } JEMALLOC_INLINE size_t extent_node_size_get(const extent_node_t *node) { return (node->en_size); } JEMALLOC_INLINE bool extent_node_zeroed_get(const extent_node_t *node) { return (node->en_zeroed); } JEMALLOC_INLINE bool extent_node_committed_get(const extent_node_t *node) { assert(!node->en_achunk); return (node->en_committed); } JEMALLOC_INLINE bool extent_node_achunk_get(const extent_node_t *node) { return (node->en_achunk); } JEMALLOC_INLINE prof_tctx_t * extent_node_prof_tctx_get(const extent_node_t *node) { return (node->en_prof_tctx); } JEMALLOC_INLINE void extent_node_arena_set(extent_node_t *node, arena_t *arena) { node->en_arena = arena; } JEMALLOC_INLINE void extent_node_addr_set(extent_node_t *node, void *addr) { node->en_addr = addr; } JEMALLOC_INLINE void extent_node_size_set(extent_node_t *node, size_t size) { node->en_size = size; } JEMALLOC_INLINE void extent_node_zeroed_set(extent_node_t *node, bool zeroed) { node->en_zeroed = zeroed; } JEMALLOC_INLINE void extent_node_committed_set(extent_node_t *node, bool committed) { node->en_committed = committed; } JEMALLOC_INLINE void extent_node_achunk_set(extent_node_t *node, bool achunk) { node->en_achunk = achunk; } JEMALLOC_INLINE void extent_node_prof_tctx_set(extent_node_t *node, prof_tctx_t *tctx) { node->en_prof_tctx = tctx; } JEMALLOC_INLINE void extent_node_init(extent_node_t *node, arena_t *arena, void *addr, size_t size, bool zeroed, bool committed) { extent_node_arena_set(node, arena); 
extent_node_addr_set(node, addr); extent_node_size_set(node, size); extent_node_zeroed_set(node, zeroed); extent_node_committed_set(node, committed); extent_node_achunk_set(node, false); if (config_prof) extent_node_prof_tctx_set(node, NULL); } JEMALLOC_INLINE void extent_node_dirty_linkage_init(extent_node_t *node) { qr_new(&node->rd, rd_link); qr_new(node, cc_link); } JEMALLOC_INLINE void extent_node_dirty_insert(extent_node_t *node, arena_runs_dirty_link_t *runs_dirty, extent_node_t *chunks_dirty) { qr_meld(runs_dirty, &node->rd, rd_link); qr_meld(chunks_dirty, node, cc_link); } JEMALLOC_INLINE void extent_node_dirty_remove(extent_node_t *node) { qr_remove(&node->rd, rd_link); qr_remove(node, cc_link); } #endif #endif /* JEMALLOC_H_INLINES */ /******************************************************************************/ Index: head/contrib/jemalloc/include/jemalloc/internal/hash.h =================================================================== --- head/contrib/jemalloc/include/jemalloc/internal/hash.h (revision 299586) +++ head/contrib/jemalloc/include/jemalloc/internal/hash.h (revision 299587) @@ -1,357 +1,357 @@ /* * The following hash function is based on MurmurHash3, placed into the public * domain by Austin Appleby. See https://github.com/aappleby/smhasher for * details. */ /******************************************************************************/ #ifdef JEMALLOC_H_TYPES #endif /* JEMALLOC_H_TYPES */ /******************************************************************************/ #ifdef JEMALLOC_H_STRUCTS #endif /* JEMALLOC_H_STRUCTS */ /******************************************************************************/ #ifdef JEMALLOC_H_EXTERNS #endif /* JEMALLOC_H_EXTERNS */ /******************************************************************************/ #ifdef JEMALLOC_H_INLINES #ifndef JEMALLOC_ENABLE_INLINE uint32_t hash_x86_32(const void *key, int len, uint32_t seed); void hash_x86_128(const void *key, const int len, uint32_t seed, uint64_t r_out[2]); void hash_x64_128(const void *key, const int len, const uint32_t seed, uint64_t r_out[2]); void hash(const void *key, size_t len, const uint32_t seed, size_t r_hash[2]); #endif #if (defined(JEMALLOC_ENABLE_INLINE) || defined(JEMALLOC_HASH_C_)) /******************************************************************************/ /* Internal implementation. */ JEMALLOC_INLINE uint32_t hash_rotl_32(uint32_t x, int8_t r) { return ((x << r) | (x >> (32 - r))); } JEMALLOC_INLINE uint64_t hash_rotl_64(uint64_t x, int8_t r) { return ((x << r) | (x >> (64 - r))); } JEMALLOC_INLINE uint32_t hash_get_block_32(const uint32_t *p, int i) { /* Handle unaligned read. */ if (unlikely((uintptr_t)p & (sizeof(uint32_t)-1)) != 0) { uint32_t ret; - memcpy(&ret, &p[i], sizeof(uint32_t)); + memcpy(&ret, (uint8_t *)(p + i), sizeof(uint32_t)); return (ret); } return (p[i]); } JEMALLOC_INLINE uint64_t hash_get_block_64(const uint64_t *p, int i) { /* Handle unaligned read. 
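/*
 * Illustrative aside (not part of the patch): hash_get_block_32() above goes
 * through memcpy() when the source pointer might be misaligned; a direct
 * dereference would be undefined behavior and can fault on strict-alignment
 * targets such as some ARM cores (cf. the 4.1.1 hashing fix in the
 * ChangeLog).  The general pattern, as a stand-alone sketch:
 */
#include <stdint.h>
#include <string.h>

static uint32_t
load_u32_unaligned(const void *p)
{
	uint32_t v;

	/* Compilers lower this to a single load where unaligned loads are cheap. */
	memcpy(&v, p, sizeof(v));
	return (v);
}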
*/ if (unlikely((uintptr_t)p & (sizeof(uint64_t)-1)) != 0) { uint64_t ret; - memcpy(&ret, &p[i], sizeof(uint64_t)); + memcpy(&ret, (uint8_t *)(p + i), sizeof(uint64_t)); return (ret); } return (p[i]); } JEMALLOC_INLINE uint32_t hash_fmix_32(uint32_t h) { h ^= h >> 16; h *= 0x85ebca6b; h ^= h >> 13; h *= 0xc2b2ae35; h ^= h >> 16; return (h); } JEMALLOC_INLINE uint64_t hash_fmix_64(uint64_t k) { k ^= k >> 33; k *= KQU(0xff51afd7ed558ccd); k ^= k >> 33; k *= KQU(0xc4ceb9fe1a85ec53); k ^= k >> 33; return (k); } JEMALLOC_INLINE uint32_t hash_x86_32(const void *key, int len, uint32_t seed) { const uint8_t *data = (const uint8_t *) key; const int nblocks = len / 4; uint32_t h1 = seed; const uint32_t c1 = 0xcc9e2d51; const uint32_t c2 = 0x1b873593; /* body */ { const uint32_t *blocks = (const uint32_t *) (data + nblocks*4); int i; for (i = -nblocks; i; i++) { uint32_t k1 = hash_get_block_32(blocks, i); k1 *= c1; k1 = hash_rotl_32(k1, 15); k1 *= c2; h1 ^= k1; h1 = hash_rotl_32(h1, 13); h1 = h1*5 + 0xe6546b64; } } /* tail */ { const uint8_t *tail = (const uint8_t *) (data + nblocks*4); uint32_t k1 = 0; switch (len & 3) { case 3: k1 ^= tail[2] << 16; case 2: k1 ^= tail[1] << 8; case 1: k1 ^= tail[0]; k1 *= c1; k1 = hash_rotl_32(k1, 15); k1 *= c2; h1 ^= k1; } } /* finalization */ h1 ^= len; h1 = hash_fmix_32(h1); return (h1); } UNUSED JEMALLOC_INLINE void hash_x86_128(const void *key, const int len, uint32_t seed, uint64_t r_out[2]) { const uint8_t * data = (const uint8_t *) key; const int nblocks = len / 16; uint32_t h1 = seed; uint32_t h2 = seed; uint32_t h3 = seed; uint32_t h4 = seed; const uint32_t c1 = 0x239b961b; const uint32_t c2 = 0xab0e9789; const uint32_t c3 = 0x38b34ae5; const uint32_t c4 = 0xa1e38b93; /* body */ { const uint32_t *blocks = (const uint32_t *) (data + nblocks*16); int i; for (i = -nblocks; i; i++) { uint32_t k1 = hash_get_block_32(blocks, i*4 + 0); uint32_t k2 = hash_get_block_32(blocks, i*4 + 1); uint32_t k3 = hash_get_block_32(blocks, i*4 + 2); uint32_t k4 = hash_get_block_32(blocks, i*4 + 3); k1 *= c1; k1 = hash_rotl_32(k1, 15); k1 *= c2; h1 ^= k1; h1 = hash_rotl_32(h1, 19); h1 += h2; h1 = h1*5 + 0x561ccd1b; k2 *= c2; k2 = hash_rotl_32(k2, 16); k2 *= c3; h2 ^= k2; h2 = hash_rotl_32(h2, 17); h2 += h3; h2 = h2*5 + 0x0bcaa747; k3 *= c3; k3 = hash_rotl_32(k3, 17); k3 *= c4; h3 ^= k3; h3 = hash_rotl_32(h3, 15); h3 += h4; h3 = h3*5 + 0x96cd1c35; k4 *= c4; k4 = hash_rotl_32(k4, 18); k4 *= c1; h4 ^= k4; h4 = hash_rotl_32(h4, 13); h4 += h1; h4 = h4*5 + 0x32ac3b17; } } /* tail */ { const uint8_t *tail = (const uint8_t *) (data + nblocks*16); uint32_t k1 = 0; uint32_t k2 = 0; uint32_t k3 = 0; uint32_t k4 = 0; switch (len & 15) { case 15: k4 ^= tail[14] << 16; case 14: k4 ^= tail[13] << 8; case 13: k4 ^= tail[12] << 0; k4 *= c4; k4 = hash_rotl_32(k4, 18); k4 *= c1; h4 ^= k4; case 12: k3 ^= tail[11] << 24; case 11: k3 ^= tail[10] << 16; case 10: k3 ^= tail[ 9] << 8; case 9: k3 ^= tail[ 8] << 0; k3 *= c3; k3 = hash_rotl_32(k3, 17); k3 *= c4; h3 ^= k3; case 8: k2 ^= tail[ 7] << 24; case 7: k2 ^= tail[ 6] << 16; case 6: k2 ^= tail[ 5] << 8; case 5: k2 ^= tail[ 4] << 0; k2 *= c2; k2 = hash_rotl_32(k2, 16); k2 *= c3; h2 ^= k2; case 4: k1 ^= tail[ 3] << 24; case 3: k1 ^= tail[ 2] << 16; case 2: k1 ^= tail[ 1] << 8; case 1: k1 ^= tail[ 0] << 0; k1 *= c1; k1 = hash_rotl_32(k1, 15); k1 *= c2; h1 ^= k1; } } /* finalization */ h1 ^= len; h2 ^= len; h3 ^= len; h4 ^= len; h1 += h2; h1 += h3; h1 += h4; h2 += h1; h3 += h1; h4 += h1; h1 = hash_fmix_32(h1); h2 = hash_fmix_32(h2); h3 = hash_fmix_32(h3); 
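/*
 * Illustrative aside (not part of the patch): the "tail" switch statements in
 * the MurmurHash3 bodies above fall through deliberately -- execution starts
 * at the case matching (len & 15) (or (len & 3) for the 32-bit variant) and
 * cascades downward, mixing exactly the 1..15 remaining bytes that did not
 * fill a whole block.  The absence of break statements is intentional, not
 * an oversight.
 */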
h4 = hash_fmix_32(h4); h1 += h2; h1 += h3; h1 += h4; h2 += h1; h3 += h1; h4 += h1; r_out[0] = (((uint64_t) h2) << 32) | h1; r_out[1] = (((uint64_t) h4) << 32) | h3; } UNUSED JEMALLOC_INLINE void hash_x64_128(const void *key, const int len, const uint32_t seed, uint64_t r_out[2]) { const uint8_t *data = (const uint8_t *) key; const int nblocks = len / 16; uint64_t h1 = seed; uint64_t h2 = seed; const uint64_t c1 = KQU(0x87c37b91114253d5); const uint64_t c2 = KQU(0x4cf5ad432745937f); /* body */ { const uint64_t *blocks = (const uint64_t *) (data); int i; for (i = 0; i < nblocks; i++) { uint64_t k1 = hash_get_block_64(blocks, i*2 + 0); uint64_t k2 = hash_get_block_64(blocks, i*2 + 1); k1 *= c1; k1 = hash_rotl_64(k1, 31); k1 *= c2; h1 ^= k1; h1 = hash_rotl_64(h1, 27); h1 += h2; h1 = h1*5 + 0x52dce729; k2 *= c2; k2 = hash_rotl_64(k2, 33); k2 *= c1; h2 ^= k2; h2 = hash_rotl_64(h2, 31); h2 += h1; h2 = h2*5 + 0x38495ab5; } } /* tail */ { const uint8_t *tail = (const uint8_t*)(data + nblocks*16); uint64_t k1 = 0; uint64_t k2 = 0; switch (len & 15) { case 15: k2 ^= ((uint64_t)(tail[14])) << 48; case 14: k2 ^= ((uint64_t)(tail[13])) << 40; case 13: k2 ^= ((uint64_t)(tail[12])) << 32; case 12: k2 ^= ((uint64_t)(tail[11])) << 24; case 11: k2 ^= ((uint64_t)(tail[10])) << 16; case 10: k2 ^= ((uint64_t)(tail[ 9])) << 8; case 9: k2 ^= ((uint64_t)(tail[ 8])) << 0; k2 *= c2; k2 = hash_rotl_64(k2, 33); k2 *= c1; h2 ^= k2; case 8: k1 ^= ((uint64_t)(tail[ 7])) << 56; case 7: k1 ^= ((uint64_t)(tail[ 6])) << 48; case 6: k1 ^= ((uint64_t)(tail[ 5])) << 40; case 5: k1 ^= ((uint64_t)(tail[ 4])) << 32; case 4: k1 ^= ((uint64_t)(tail[ 3])) << 24; case 3: k1 ^= ((uint64_t)(tail[ 2])) << 16; case 2: k1 ^= ((uint64_t)(tail[ 1])) << 8; case 1: k1 ^= ((uint64_t)(tail[ 0])) << 0; k1 *= c1; k1 = hash_rotl_64(k1, 31); k1 *= c2; h1 ^= k1; } } /* finalization */ h1 ^= len; h2 ^= len; h1 += h2; h2 += h1; h1 = hash_fmix_64(h1); h2 = hash_fmix_64(h2); h1 += h2; h2 += h1; r_out[0] = h1; r_out[1] = h2; } /******************************************************************************/ /* API. */ JEMALLOC_INLINE void hash(const void *key, size_t len, const uint32_t seed, size_t r_hash[2]) { assert(len <= INT_MAX); /* Unfortunate implementation limitation. 
*/ #if (LG_SIZEOF_PTR == 3 && !defined(JEMALLOC_BIG_ENDIAN)) hash_x64_128(key, (int)len, seed, (uint64_t *)r_hash); #else { uint64_t hashes[2]; hash_x86_128(key, (int)len, seed, hashes); r_hash[0] = (size_t)hashes[0]; r_hash[1] = (size_t)hashes[1]; } #endif } #endif #endif /* JEMALLOC_H_INLINES */ /******************************************************************************/ Index: head/contrib/jemalloc/include/jemalloc/internal/huge.h =================================================================== --- head/contrib/jemalloc/include/jemalloc/internal/huge.h (revision 299586) +++ head/contrib/jemalloc/include/jemalloc/internal/huge.h (revision 299587) @@ -1,36 +1,35 @@ /******************************************************************************/ #ifdef JEMALLOC_H_TYPES #endif /* JEMALLOC_H_TYPES */ /******************************************************************************/ #ifdef JEMALLOC_H_STRUCTS #endif /* JEMALLOC_H_STRUCTS */ /******************************************************************************/ #ifdef JEMALLOC_H_EXTERNS -void *huge_malloc(tsd_t *tsd, arena_t *arena, size_t usize, bool zero, - tcache_t *tcache); -void *huge_palloc(tsd_t *tsd, arena_t *arena, size_t usize, size_t alignment, - bool zero, tcache_t *tcache); -bool huge_ralloc_no_move(tsd_t *tsd, void *ptr, size_t oldsize, +void *huge_malloc(tsdn_t *tsdn, arena_t *arena, size_t usize, bool zero); +void *huge_palloc(tsdn_t *tsdn, arena_t *arena, size_t usize, + size_t alignment, bool zero); +bool huge_ralloc_no_move(tsdn_t *tsdn, void *ptr, size_t oldsize, size_t usize_min, size_t usize_max, bool zero); void *huge_ralloc(tsd_t *tsd, arena_t *arena, void *ptr, size_t oldsize, size_t usize, size_t alignment, bool zero, tcache_t *tcache); #ifdef JEMALLOC_JET -typedef void (huge_dalloc_junk_t)(void *, size_t); +typedef void (huge_dalloc_junk_t)(tsdn_t *, void *, size_t); extern huge_dalloc_junk_t *huge_dalloc_junk; #endif -void huge_dalloc(tsd_t *tsd, void *ptr, tcache_t *tcache); +void huge_dalloc(tsdn_t *tsdn, void *ptr); arena_t *huge_aalloc(const void *ptr); -size_t huge_salloc(const void *ptr); -prof_tctx_t *huge_prof_tctx_get(const void *ptr); -void huge_prof_tctx_set(const void *ptr, prof_tctx_t *tctx); -void huge_prof_tctx_reset(const void *ptr); +size_t huge_salloc(tsdn_t *tsdn, const void *ptr); +prof_tctx_t *huge_prof_tctx_get(tsdn_t *tsdn, const void *ptr); +void huge_prof_tctx_set(tsdn_t *tsdn, const void *ptr, prof_tctx_t *tctx); +void huge_prof_tctx_reset(tsdn_t *tsdn, const void *ptr); #endif /* JEMALLOC_H_EXTERNS */ /******************************************************************************/ #ifdef JEMALLOC_H_INLINES #endif /* JEMALLOC_H_INLINES */ /******************************************************************************/ Index: head/contrib/jemalloc/include/jemalloc/internal/jemalloc_internal.h =================================================================== --- head/contrib/jemalloc/include/jemalloc/internal/jemalloc_internal.h (revision 299586) +++ head/contrib/jemalloc/include/jemalloc/internal/jemalloc_internal.h (revision 299587) @@ -1,1185 +1,1191 @@ #ifndef JEMALLOC_INTERNAL_H #define JEMALLOC_INTERNAL_H #include "jemalloc_internal_defs.h" #include "jemalloc/internal/jemalloc_internal_decls.h" #ifdef JEMALLOC_UTRACE #include #endif #include "un-namespace.h" #include "libc_private.h" #define JEMALLOC_NO_DEMANGLE #ifdef JEMALLOC_JET # define JEMALLOC_N(n) jet_##n # include "jemalloc/internal/public_namespace.h" # define JEMALLOC_NO_RENAME # include "../jemalloc.h" # undef 
JEMALLOC_NO_RENAME #else # define JEMALLOC_N(n) __je_##n # include "../jemalloc.h" #endif #include "jemalloc/internal/private_namespace.h" static const bool config_debug = #ifdef JEMALLOC_DEBUG true #else false #endif ; static const bool have_dss = #ifdef JEMALLOC_DSS true #else false #endif ; static const bool config_fill = #ifdef JEMALLOC_FILL true #else false #endif ; static const bool config_lazy_lock = true; static const char * const config_malloc_conf = JEMALLOC_CONFIG_MALLOC_CONF; static const bool config_prof = #ifdef JEMALLOC_PROF true #else false #endif ; static const bool config_prof_libgcc = #ifdef JEMALLOC_PROF_LIBGCC true #else false #endif ; static const bool config_prof_libunwind = #ifdef JEMALLOC_PROF_LIBUNWIND true #else false #endif ; static const bool maps_coalesce = #ifdef JEMALLOC_MAPS_COALESCE true #else false #endif ; static const bool config_munmap = #ifdef JEMALLOC_MUNMAP true #else false #endif ; static const bool config_stats = #ifdef JEMALLOC_STATS true #else false #endif ; static const bool config_tcache = #ifdef JEMALLOC_TCACHE true #else false #endif ; static const bool config_tls = #ifdef JEMALLOC_TLS true #else false #endif ; static const bool config_utrace = #ifdef JEMALLOC_UTRACE true #else false #endif ; static const bool config_valgrind = #ifdef JEMALLOC_VALGRIND true #else false #endif ; static const bool config_xmalloc = #ifdef JEMALLOC_XMALLOC true #else false #endif ; static const bool config_ivsalloc = #ifdef JEMALLOC_IVSALLOC true #else false #endif ; static const bool config_cache_oblivious = #ifdef JEMALLOC_CACHE_OBLIVIOUS true #else false #endif ; #ifdef JEMALLOC_C11ATOMICS #include #endif #ifdef JEMALLOC_ATOMIC9 #include #endif #if (defined(JEMALLOC_OSATOMIC) || defined(JEMALLOC_OSSPIN)) #include #endif #ifdef JEMALLOC_ZONE #include #include #include #include #endif +#include "jemalloc/internal/ph.h" #define RB_COMPACT #include "jemalloc/internal/rb.h" #include "jemalloc/internal/qr.h" #include "jemalloc/internal/ql.h" /* * jemalloc can conceptually be broken into components (arena, tcache, etc.), * but there are circular dependencies that cannot be broken without * substantial performance degradation. In order to reduce the effect on * visual code flow, read the header files in multiple passes, with one of the * following cpp variables defined during each pass: * * JEMALLOC_H_TYPES : Preprocessor-defined constants and psuedo-opaque data * types. * JEMALLOC_H_STRUCTS : Data structures. * JEMALLOC_H_EXTERNS : Extern data declarations and function prototypes. * JEMALLOC_H_INLINES : Inline functions. */ /******************************************************************************/ #define JEMALLOC_H_TYPES #include "jemalloc/internal/jemalloc_internal_macros.h" /* Size class index type. */ typedef unsigned szind_t; /* * Flags bits: * * a: arena * t: tcache * 0: unused * z: zero * n: alignment * * aaaaaaaa aaaatttt tttttttt 0znnnnnn */ #define MALLOCX_ARENA_MASK ((int)~0xfffff) #define MALLOCX_ARENA_MAX 0xffe #define MALLOCX_TCACHE_MASK ((int)~0xfff000ffU) #define MALLOCX_TCACHE_MAX 0xffd #define MALLOCX_LG_ALIGN_MASK ((int)0x3f) /* Use MALLOCX_ALIGN_GET() if alignment may not be specified in flags. 
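/*
 * Illustrative aside (not part of the patch): the flags layout documented
 * above packs the arena index (a), tcache id (t), zero flag (z) and
 * lg(alignment) (n) into the single int flags argument of the public
 * *allocx() functions.  A minimal usage sketch (on FreeBSD the prototypes
 * come from <malloc_np.h>; stand-alone jemalloc uses <jemalloc/jemalloc.h>):
 */
#include <stddef.h>
#include <malloc_np.h>

static void *
alloc_aligned_zeroed(size_t size)
{
	/*
	 * MALLOCX_LG_ALIGN(6) stores 6 in the low "n" bits, so the allocator
	 * recovers an alignment of 1 << 6 == 64 bytes; MALLOCX_ZERO sets the
	 * "z" bit.  mallocx() requires size != 0 and returns NULL on failure.
	 */
	return (mallocx(size, MALLOCX_LG_ALIGN(6) | MALLOCX_ZERO));
}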
*/ #define MALLOCX_ALIGN_GET_SPECIFIED(flags) \ (ZU(1) << (flags & MALLOCX_LG_ALIGN_MASK)) #define MALLOCX_ALIGN_GET(flags) \ (MALLOCX_ALIGN_GET_SPECIFIED(flags) & (SIZE_T_MAX-1)) #define MALLOCX_ZERO_GET(flags) \ ((bool)(flags & MALLOCX_ZERO)) #define MALLOCX_TCACHE_GET(flags) \ (((unsigned)((flags & MALLOCX_TCACHE_MASK) >> 8)) - 2) #define MALLOCX_ARENA_GET(flags) \ (((unsigned)(((unsigned)flags) >> 20)) - 1) /* Smallest size class to support. */ #define TINY_MIN (1U << LG_TINY_MIN) /* * Minimum allocation alignment is 2^LG_QUANTUM bytes (ignoring tiny size * classes). */ #ifndef LG_QUANTUM # if (defined(__i386__) || defined(_M_IX86)) # define LG_QUANTUM 4 # endif # ifdef __ia64__ # define LG_QUANTUM 4 # endif # ifdef __alpha__ # define LG_QUANTUM 4 # endif # if (defined(__sparc64__) || defined(__sparcv9)) # define LG_QUANTUM 4 # endif # if (defined(__amd64__) || defined(__x86_64__) || defined(_M_X64)) # define LG_QUANTUM 4 # endif # ifdef __arm__ # define LG_QUANTUM 3 # endif # ifdef __aarch64__ # define LG_QUANTUM 4 # endif # ifdef __hppa__ # define LG_QUANTUM 4 # endif # ifdef __mips__ # define LG_QUANTUM 3 # endif # ifdef __or1k__ # define LG_QUANTUM 3 # endif # ifdef __powerpc__ # define LG_QUANTUM 4 # endif # ifdef __riscv__ # define LG_QUANTUM 4 # endif # ifdef __s390__ # define LG_QUANTUM 4 # endif # ifdef __SH4__ # define LG_QUANTUM 4 # endif # ifdef __tile__ # define LG_QUANTUM 4 # endif # ifdef __le32__ # define LG_QUANTUM 4 # endif # ifndef LG_QUANTUM # error "Unknown minimum alignment for architecture; specify via " "--with-lg-quantum" # endif #endif #define QUANTUM ((size_t)(1U << LG_QUANTUM)) #define QUANTUM_MASK (QUANTUM - 1) /* Return the smallest quantum multiple that is >= a. */ #define QUANTUM_CEILING(a) \ (((a) + QUANTUM_MASK) & ~QUANTUM_MASK) #define LONG ((size_t)(1U << LG_SIZEOF_LONG)) #define LONG_MASK (LONG - 1) /* Return the smallest long multiple that is >= a. */ #define LONG_CEILING(a) \ (((a) + LONG_MASK) & ~LONG_MASK) #define SIZEOF_PTR (1U << LG_SIZEOF_PTR) #define PTR_MASK (SIZEOF_PTR - 1) /* Return the smallest (void *) multiple that is >= a. */ #define PTR_CEILING(a) \ (((a) + PTR_MASK) & ~PTR_MASK) /* * Maximum size of L1 cache line. This is used to avoid cache line aliasing. * In addition, this controls the spacing of cacheline-spaced size classes. * * CACHELINE cannot be based on LG_CACHELINE because __declspec(align()) can * only handle raw constants. */ #define LG_CACHELINE 6 #define CACHELINE 64 #define CACHELINE_MASK (CACHELINE - 1) /* Return the smallest cacheline multiple that is >= s. */ #define CACHELINE_CEILING(s) \ (((s) + CACHELINE_MASK) & ~CACHELINE_MASK) /* Page size. LG_PAGE is determined by the configure script. */ #ifdef PAGE_MASK # undef PAGE_MASK #endif #define PAGE ((size_t)(1U << LG_PAGE)) #define PAGE_MASK ((size_t)(PAGE - 1)) /* Return the page base address for the page containing address a. */ #define PAGE_ADDR2BASE(a) \ ((void *)((uintptr_t)(a) & ~PAGE_MASK)) /* Return the smallest pagesize multiple that is >= s. */ #define PAGE_CEILING(s) \ (((s) + PAGE_MASK) & ~PAGE_MASK) /* Return the nearest aligned address at or below a. */ #define ALIGNMENT_ADDR2BASE(a, alignment) \ ((void *)((uintptr_t)(a) & (-(alignment)))) /* Return the offset between a and the nearest aligned address at or below a. */ #define ALIGNMENT_ADDR2OFFSET(a, alignment) \ ((size_t)((uintptr_t)(a) & (alignment - 1))) /* Return the smallest alignment multiple that is >= s. 
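/*
 * Illustrative aside (not part of the patch): the *_CEILING() macros above
 * all rely on the same power-of-two rounding identity,
 *
 *   ceil(s, 2^k) == (s + (2^k - 1)) & ~(2^k - 1)
 *
 * A stand-alone sketch with a couple of spot checks (round_up_pow2 is a
 * hypothetical helper, not part of jemalloc):
 */
#include <assert.h>
#include <stddef.h>

static size_t
round_up_pow2(size_t s, size_t align)	/* align must be a power of two. */
{
	size_t mask = align - 1;

	return ((s + mask) & ~mask);
}

static void
round_up_pow2_check(void)
{
	assert(round_up_pow2(1, 16) == 16);		/* QUANTUM_CEILING-style. */
	assert(round_up_pow2(4097, 4096) == 8192);	/* PAGE_CEILING-style. */
	assert(round_up_pow2(4096, 4096) == 4096);	/* Already aligned. */
}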
*/ #define ALIGNMENT_CEILING(s, alignment) \ (((s) + (alignment - 1)) & (-(alignment))) /* Declare a variable-length array. */ #if __STDC_VERSION__ < 199901L # ifdef _MSC_VER # include # define alloca _alloca # else # ifdef JEMALLOC_HAS_ALLOCA_H # include # else # include # endif # endif # define VARIABLE_ARRAY(type, name, count) \ type *name = alloca(sizeof(type) * (count)) #else # define VARIABLE_ARRAY(type, name, count) type name[(count)] #endif #include "jemalloc/internal/nstime.h" #include "jemalloc/internal/valgrind.h" #include "jemalloc/internal/util.h" #include "jemalloc/internal/atomic.h" #include "jemalloc/internal/prng.h" #include "jemalloc/internal/ticker.h" #include "jemalloc/internal/ckh.h" #include "jemalloc/internal/size_classes.h" #include "jemalloc/internal/smoothstep.h" #include "jemalloc/internal/stats.h" #include "jemalloc/internal/ctl.h" +#include "jemalloc/internal/witness.h" #include "jemalloc/internal/mutex.h" #include "jemalloc/internal/tsd.h" #include "jemalloc/internal/mb.h" #include "jemalloc/internal/extent.h" #include "jemalloc/internal/arena.h" #include "jemalloc/internal/bitmap.h" #include "jemalloc/internal/base.h" #include "jemalloc/internal/rtree.h" #include "jemalloc/internal/pages.h" #include "jemalloc/internal/chunk.h" #include "jemalloc/internal/huge.h" #include "jemalloc/internal/tcache.h" #include "jemalloc/internal/hash.h" #include "jemalloc/internal/quarantine.h" #include "jemalloc/internal/prof.h" #undef JEMALLOC_H_TYPES /******************************************************************************/ #define JEMALLOC_H_STRUCTS #include "jemalloc/internal/nstime.h" #include "jemalloc/internal/valgrind.h" #include "jemalloc/internal/util.h" #include "jemalloc/internal/atomic.h" #include "jemalloc/internal/prng.h" #include "jemalloc/internal/ticker.h" #include "jemalloc/internal/ckh.h" #include "jemalloc/internal/size_classes.h" #include "jemalloc/internal/smoothstep.h" #include "jemalloc/internal/stats.h" #include "jemalloc/internal/ctl.h" +#include "jemalloc/internal/witness.h" #include "jemalloc/internal/mutex.h" #include "jemalloc/internal/mb.h" #include "jemalloc/internal/bitmap.h" #define JEMALLOC_ARENA_STRUCTS_A #include "jemalloc/internal/arena.h" #undef JEMALLOC_ARENA_STRUCTS_A #include "jemalloc/internal/extent.h" #define JEMALLOC_ARENA_STRUCTS_B #include "jemalloc/internal/arena.h" #undef JEMALLOC_ARENA_STRUCTS_B #include "jemalloc/internal/base.h" #include "jemalloc/internal/rtree.h" #include "jemalloc/internal/pages.h" #include "jemalloc/internal/chunk.h" #include "jemalloc/internal/huge.h" #include "jemalloc/internal/tcache.h" #include "jemalloc/internal/hash.h" #include "jemalloc/internal/quarantine.h" #include "jemalloc/internal/prof.h" #include "jemalloc/internal/tsd.h" #undef JEMALLOC_H_STRUCTS /******************************************************************************/ #define JEMALLOC_H_EXTERNS extern bool opt_abort; extern const char *opt_junk; extern bool opt_junk_alloc; extern bool opt_junk_free; extern size_t opt_quarantine; extern bool opt_redzone; extern bool opt_utrace; extern bool opt_xmalloc; extern bool opt_zero; extern unsigned opt_narenas; extern bool in_valgrind; /* Number of CPUs. */ extern unsigned ncpus; +/* Number of arenas used for automatic multiplexing of threads and arenas. */ +extern unsigned narenas_auto; + /* * Arenas that are used to service external requests. Not all elements of the * arenas array are necessarily used; arenas are created lazily as needed. 
*/ extern arena_t **arenas; /* * index2size_tab encodes the same information as could be computed (at * unacceptable cost in some code paths) by index2size_compute(). */ extern size_t const index2size_tab[NSIZES+1]; /* * size2index_tab is a compact lookup table that rounds request sizes up to * size classes. In order to reduce cache footprint, the table is compressed, * and all accesses are via size2index(). */ extern uint8_t const size2index_tab[]; void *a0malloc(size_t size); void a0dalloc(void *ptr); void *bootstrap_malloc(size_t size); void *bootstrap_calloc(size_t num, size_t size); void bootstrap_free(void *ptr); -arena_t *arenas_extend(unsigned ind); unsigned narenas_total_get(void); -arena_t *arena_init(unsigned ind); +arena_t *arena_init(tsdn_t *tsdn, unsigned ind); arena_tdata_t *arena_tdata_get_hard(tsd_t *tsd, unsigned ind); -arena_t *arena_choose_hard(tsd_t *tsd); +arena_t *arena_choose_hard(tsd_t *tsd, bool internal); void arena_migrate(tsd_t *tsd, unsigned oldind, unsigned newind); void thread_allocated_cleanup(tsd_t *tsd); void thread_deallocated_cleanup(tsd_t *tsd); +void iarena_cleanup(tsd_t *tsd); void arena_cleanup(tsd_t *tsd); void arenas_tdata_cleanup(tsd_t *tsd); void narenas_tdata_cleanup(tsd_t *tsd); void arenas_tdata_bypass_cleanup(tsd_t *tsd); void jemalloc_prefork(void); void jemalloc_postfork_parent(void); void jemalloc_postfork_child(void); #include "jemalloc/internal/nstime.h" #include "jemalloc/internal/valgrind.h" #include "jemalloc/internal/util.h" #include "jemalloc/internal/atomic.h" #include "jemalloc/internal/prng.h" #include "jemalloc/internal/ticker.h" #include "jemalloc/internal/ckh.h" #include "jemalloc/internal/size_classes.h" #include "jemalloc/internal/smoothstep.h" #include "jemalloc/internal/stats.h" #include "jemalloc/internal/ctl.h" +#include "jemalloc/internal/witness.h" #include "jemalloc/internal/mutex.h" #include "jemalloc/internal/mb.h" #include "jemalloc/internal/bitmap.h" #include "jemalloc/internal/extent.h" #include "jemalloc/internal/arena.h" #include "jemalloc/internal/base.h" #include "jemalloc/internal/rtree.h" #include "jemalloc/internal/pages.h" #include "jemalloc/internal/chunk.h" #include "jemalloc/internal/huge.h" #include "jemalloc/internal/tcache.h" #include "jemalloc/internal/hash.h" #include "jemalloc/internal/quarantine.h" #include "jemalloc/internal/prof.h" #include "jemalloc/internal/tsd.h" #undef JEMALLOC_H_EXTERNS /******************************************************************************/ #define JEMALLOC_H_INLINES #include "jemalloc/internal/nstime.h" #include "jemalloc/internal/valgrind.h" #include "jemalloc/internal/util.h" #include "jemalloc/internal/atomic.h" #include "jemalloc/internal/prng.h" #include "jemalloc/internal/ticker.h" #include "jemalloc/internal/ckh.h" #include "jemalloc/internal/size_classes.h" #include "jemalloc/internal/smoothstep.h" #include "jemalloc/internal/stats.h" #include "jemalloc/internal/ctl.h" -#include "jemalloc/internal/mutex.h" #include "jemalloc/internal/tsd.h" +#include "jemalloc/internal/witness.h" +#include "jemalloc/internal/mutex.h" #include "jemalloc/internal/mb.h" #include "jemalloc/internal/extent.h" #include "jemalloc/internal/base.h" #include "jemalloc/internal/rtree.h" #include "jemalloc/internal/pages.h" #include "jemalloc/internal/chunk.h" #include "jemalloc/internal/huge.h" #ifndef JEMALLOC_ENABLE_INLINE szind_t size2index_compute(size_t size); szind_t size2index_lookup(size_t size); szind_t size2index(size_t size); size_t index2size_compute(szind_t index); 
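
The prototypes surrounding this point declare jemalloc's size-class helpers: size2index() maps a request size to a size-class index, index2size() maps an index back to that class's usable size, and s2u() fuses the two steps (the s2u_lookup() definition later in this header is literally index2size_lookup(size2index_lookup(size))). The following is a minimal illustrative sketch, not part of this patch, assuming it is compiled inside the library where these internal symbols and assert() are visible and that the request is no larger than the maximum size class; usable_size_of_request() is a hypothetical name.

/*
 * Illustrative sketch (hypothetical helper, not in jemalloc): how the
 * size-class helpers declared here compose for an in-range request size.
 */
static size_t
usable_size_of_request(size_t size)
{
	szind_t ind = size2index(size);	/* Smallest class that fits size. */
	size_t usize = index2size(ind);	/* Usable bytes of that class. */

	assert(usize == s2u(size));	/* s2u() fuses the two lookups. */
	assert(usize >= size);
	return (usize);
}

The table-backed lookup variants exist because, as the comments above note, recomputing this mapping via the *_compute() bit arithmetic would be unacceptably costly on hot paths; index2size_tab and size2index_tab memoize it.
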
size_t index2size_lookup(szind_t index); size_t index2size(szind_t index); size_t s2u_compute(size_t size); size_t s2u_lookup(size_t size); size_t s2u(size_t size); size_t sa2u(size_t size, size_t alignment); +arena_t *arena_choose_impl(tsd_t *tsd, arena_t *arena, bool internal); arena_t *arena_choose(tsd_t *tsd, arena_t *arena); +arena_t *arena_ichoose(tsdn_t *tsdn, arena_t *arena); arena_tdata_t *arena_tdata_get(tsd_t *tsd, unsigned ind, bool refresh_if_missing); -arena_t *arena_get(unsigned ind, bool init_if_missing); +arena_t *arena_get(tsdn_t *tsdn, unsigned ind, bool init_if_missing); ticker_t *decay_ticker_get(tsd_t *tsd, unsigned ind); #endif #if (defined(JEMALLOC_ENABLE_INLINE) || defined(JEMALLOC_C_)) JEMALLOC_INLINE szind_t size2index_compute(size_t size) { #if (NTBINS != 0) if (size <= (ZU(1) << LG_TINY_MAXCLASS)) { szind_t lg_tmin = LG_TINY_MAXCLASS - NTBINS + 1; szind_t lg_ceil = lg_floor(pow2_ceil_zu(size)); return (lg_ceil < lg_tmin ? 0 : lg_ceil - lg_tmin); } #endif { szind_t x = unlikely(ZI(size) < 0) ? ((size<<1) ? (ZU(1)<<(LG_SIZEOF_PTR+3)) : ((ZU(1)<<(LG_SIZEOF_PTR+3))-1)) : lg_floor((size<<1)-1); szind_t shift = (x < LG_SIZE_CLASS_GROUP + LG_QUANTUM) ? 0 : x - (LG_SIZE_CLASS_GROUP + LG_QUANTUM); szind_t grp = shift << LG_SIZE_CLASS_GROUP; szind_t lg_delta = (x < LG_SIZE_CLASS_GROUP + LG_QUANTUM + 1) ? LG_QUANTUM : x - LG_SIZE_CLASS_GROUP - 1; size_t delta_inverse_mask = ZI(-1) << lg_delta; szind_t mod = ((((size-1) & delta_inverse_mask) >> lg_delta)) & ((ZU(1) << LG_SIZE_CLASS_GROUP) - 1); szind_t index = NTBINS + grp + mod; return (index); } } JEMALLOC_ALWAYS_INLINE szind_t size2index_lookup(size_t size) { assert(size <= LOOKUP_MAXCLASS); { szind_t ret = (size2index_tab[(size-1) >> LG_TINY_MIN]); assert(ret == size2index_compute(size)); return (ret); } } JEMALLOC_ALWAYS_INLINE szind_t size2index(size_t size) { assert(size > 0); if (likely(size <= LOOKUP_MAXCLASS)) return (size2index_lookup(size)); return (size2index_compute(size)); } JEMALLOC_INLINE size_t index2size_compute(szind_t index) { #if (NTBINS > 0) if (index < NTBINS) return (ZU(1) << (LG_TINY_MAXCLASS - NTBINS + 1 + index)); #endif { size_t reduced_index = index - NTBINS; size_t grp = reduced_index >> LG_SIZE_CLASS_GROUP; size_t mod = reduced_index & ((ZU(1) << LG_SIZE_CLASS_GROUP) - 1); size_t grp_size_mask = ~((!!grp)-1); size_t grp_size = ((ZU(1) << (LG_QUANTUM + (LG_SIZE_CLASS_GROUP-1))) << grp) & grp_size_mask; size_t shift = (grp == 0) ? 1 : grp; size_t lg_delta = shift + (LG_QUANTUM-1); size_t mod_size = (mod+1) << lg_delta; size_t usize = grp_size + mod_size; return (usize); } } JEMALLOC_ALWAYS_INLINE size_t index2size_lookup(szind_t index) { size_t ret = (size_t)index2size_tab[index]; assert(ret == index2size_compute(index)); return (ret); } JEMALLOC_ALWAYS_INLINE size_t index2size(szind_t index) { assert(index < NSIZES); return (index2size_lookup(index)); } JEMALLOC_ALWAYS_INLINE size_t s2u_compute(size_t size) { #if (NTBINS > 0) if (size <= (ZU(1) << LG_TINY_MAXCLASS)) { size_t lg_tmin = LG_TINY_MAXCLASS - NTBINS + 1; size_t lg_ceil = lg_floor(pow2_ceil_zu(size)); return (lg_ceil < lg_tmin ? (ZU(1) << lg_tmin) : (ZU(1) << lg_ceil)); } #endif { size_t x = unlikely(ZI(size) < 0) ? ((size<<1) ? (ZU(1)<<(LG_SIZEOF_PTR+3)) : ((ZU(1)<<(LG_SIZEOF_PTR+3))-1)) : lg_floor((size<<1)-1); size_t lg_delta = (x < LG_SIZE_CLASS_GROUP + LG_QUANTUM + 1) ? 
LG_QUANTUM : x - LG_SIZE_CLASS_GROUP - 1; size_t delta = ZU(1) << lg_delta; size_t delta_mask = delta - 1; size_t usize = (size + delta_mask) & ~delta_mask; return (usize); } } JEMALLOC_ALWAYS_INLINE size_t s2u_lookup(size_t size) { size_t ret = index2size_lookup(size2index_lookup(size)); assert(ret == s2u_compute(size)); return (ret); } /* * Compute usable size that would result from allocating an object with the * specified size. */ JEMALLOC_ALWAYS_INLINE size_t s2u(size_t size) { assert(size > 0); if (likely(size <= LOOKUP_MAXCLASS)) return (s2u_lookup(size)); return (s2u_compute(size)); } /* * Compute usable size that would result from allocating an object with the * specified size and alignment. */ JEMALLOC_ALWAYS_INLINE size_t sa2u(size_t size, size_t alignment) { size_t usize; assert(alignment != 0 && ((alignment - 1) & alignment) == 0); /* Try for a small size class. */ if (size <= SMALL_MAXCLASS && alignment < PAGE) { /* * Round size up to the nearest multiple of alignment. * * This done, we can take advantage of the fact that for each * small size class, every object is aligned at the smallest * power of two that is non-zero in the base two representation * of the size. For example: * * Size | Base 2 | Minimum alignment * -----+----------+------------------ * 96 | 1100000 | 32 * 144 | 10100000 | 32 * 192 | 11000000 | 64 */ usize = s2u(ALIGNMENT_CEILING(size, alignment)); if (usize < LARGE_MINCLASS) return (usize); } /* Try for a large size class. */ if (likely(size <= large_maxclass) && likely(alignment < chunksize)) { /* * We can't achieve subpage alignment, so round up alignment * to the minimum that can actually be supported. */ alignment = PAGE_CEILING(alignment); /* Make sure result is a large size class. */ usize = (size <= LARGE_MINCLASS) ? LARGE_MINCLASS : s2u(size); /* * Calculate the size of the over-size run that arena_palloc() * would need to allocate in order to guarantee the alignment. */ - if (usize + large_pad + alignment - PAGE <= arena_maxrun) + if (usize + large_pad + alignment <= arena_maxrun) return (usize); } /* Huge size class. Beware of overflow. */ if (unlikely(alignment > HUGE_MAXCLASS)) return (0); /* * We can't achieve subchunk alignment, so round up alignment to the * minimum that can actually be supported. */ alignment = CHUNK_CEILING(alignment); /* Make sure result is a huge size class. */ if (size <= chunksize) usize = chunksize; else { usize = s2u(size); if (usize < size) { /* size_t overflow. */ return (0); } } /* * Calculate the multi-chunk mapping that huge_palloc() would need in * order to guarantee the alignment. */ - if (usize + alignment - PAGE < usize) { + if (usize + alignment < usize) { /* size_t overflow. */ return (0); } return (usize); } /* Choose an arena based on a per-thread value. */ JEMALLOC_INLINE arena_t * -arena_choose(tsd_t *tsd, arena_t *arena) +arena_choose_impl(tsd_t *tsd, arena_t *arena, bool internal) { arena_t *ret; if (arena != NULL) return (arena); - if (unlikely((ret = tsd_arena_get(tsd)) == NULL)) - ret = arena_choose_hard(tsd); + ret = internal ? 
tsd_iarena_get(tsd) : tsd_arena_get(tsd); + if (unlikely(ret == NULL)) + ret = arena_choose_hard(tsd, internal); return (ret); } +JEMALLOC_INLINE arena_t * +arena_choose(tsd_t *tsd, arena_t *arena) +{ + + return (arena_choose_impl(tsd, arena, false)); +} + +JEMALLOC_INLINE arena_t * +arena_ichoose(tsdn_t *tsdn, arena_t *arena) +{ + + assert(!tsdn_null(tsdn) || arena != NULL); + + if (!tsdn_null(tsdn)) + return (arena_choose_impl(tsdn_tsd(tsdn), NULL, true)); + return (arena); +} + JEMALLOC_INLINE arena_tdata_t * arena_tdata_get(tsd_t *tsd, unsigned ind, bool refresh_if_missing) { arena_tdata_t *tdata; arena_tdata_t *arenas_tdata = tsd_arenas_tdata_get(tsd); if (unlikely(arenas_tdata == NULL)) { /* arenas_tdata hasn't been initialized yet. */ return (arena_tdata_get_hard(tsd, ind)); } if (unlikely(ind >= tsd_narenas_tdata_get(tsd))) { /* * ind is invalid, cache is old (too small), or tdata to be * initialized. */ return (refresh_if_missing ? arena_tdata_get_hard(tsd, ind) : NULL); } tdata = &arenas_tdata[ind]; if (likely(tdata != NULL) || !refresh_if_missing) return (tdata); return (arena_tdata_get_hard(tsd, ind)); } JEMALLOC_INLINE arena_t * -arena_get(unsigned ind, bool init_if_missing) +arena_get(tsdn_t *tsdn, unsigned ind, bool init_if_missing) { arena_t *ret; assert(ind <= MALLOCX_ARENA_MAX); ret = arenas[ind]; if (unlikely(ret == NULL)) { ret = atomic_read_p((void *)&arenas[ind]); if (init_if_missing && unlikely(ret == NULL)) - ret = arena_init(ind); + ret = arena_init(tsdn, ind); } return (ret); } JEMALLOC_INLINE ticker_t * decay_ticker_get(tsd_t *tsd, unsigned ind) { arena_tdata_t *tdata; tdata = arena_tdata_get(tsd, ind, true); if (unlikely(tdata == NULL)) return (NULL); return (&tdata->decay_ticker); } #endif #include "jemalloc/internal/bitmap.h" /* * Include portions of arena.h interleaved with tcache.h in order to resolve * circular dependencies. 
*/ #define JEMALLOC_ARENA_INLINE_A #include "jemalloc/internal/arena.h" #undef JEMALLOC_ARENA_INLINE_A #include "jemalloc/internal/tcache.h" #define JEMALLOC_ARENA_INLINE_B #include "jemalloc/internal/arena.h" #undef JEMALLOC_ARENA_INLINE_B #include "jemalloc/internal/hash.h" #include "jemalloc/internal/quarantine.h" #ifndef JEMALLOC_ENABLE_INLINE arena_t *iaalloc(const void *ptr); -size_t isalloc(const void *ptr, bool demote); -void *iallocztm(tsd_t *tsd, size_t size, szind_t ind, bool zero, +size_t isalloc(tsdn_t *tsdn, const void *ptr, bool demote); +void *iallocztm(tsdn_t *tsdn, size_t size, szind_t ind, bool zero, tcache_t *tcache, bool is_metadata, arena_t *arena, bool slow_path); -void *imalloct(tsd_t *tsd, size_t size, szind_t ind, tcache_t *tcache, - arena_t *arena); -void *imalloc(tsd_t *tsd, size_t size, szind_t ind, bool slow_path); -void *icalloct(tsd_t *tsd, size_t size, szind_t ind, tcache_t *tcache, - arena_t *arena); -void *icalloc(tsd_t *tsd, size_t size, szind_t ind); -void *ipallocztm(tsd_t *tsd, size_t usize, size_t alignment, bool zero, +void *ialloc(tsd_t *tsd, size_t size, szind_t ind, bool zero, + bool slow_path); +void *ipallocztm(tsdn_t *tsdn, size_t usize, size_t alignment, bool zero, tcache_t *tcache, bool is_metadata, arena_t *arena); -void *ipalloct(tsd_t *tsd, size_t usize, size_t alignment, bool zero, +void *ipalloct(tsdn_t *tsdn, size_t usize, size_t alignment, bool zero, tcache_t *tcache, arena_t *arena); void *ipalloc(tsd_t *tsd, size_t usize, size_t alignment, bool zero); -size_t ivsalloc(const void *ptr, bool demote); +size_t ivsalloc(tsdn_t *tsdn, const void *ptr, bool demote); size_t u2rz(size_t usize); -size_t p2rz(const void *ptr); -void idalloctm(tsd_t *tsd, void *ptr, tcache_t *tcache, bool is_metadata, +size_t p2rz(tsdn_t *tsdn, const void *ptr); +void idalloctm(tsdn_t *tsdn, void *ptr, tcache_t *tcache, bool is_metadata, bool slow_path); -void idalloct(tsd_t *tsd, void *ptr, tcache_t *tcache); void idalloc(tsd_t *tsd, void *ptr); void iqalloc(tsd_t *tsd, void *ptr, tcache_t *tcache, bool slow_path); -void isdalloct(tsd_t *tsd, void *ptr, size_t size, tcache_t *tcache); -void isqalloc(tsd_t *tsd, void *ptr, size_t size, tcache_t *tcache); +void isdalloct(tsdn_t *tsdn, void *ptr, size_t size, tcache_t *tcache, + bool slow_path); +void isqalloc(tsd_t *tsd, void *ptr, size_t size, tcache_t *tcache, + bool slow_path); void *iralloct_realign(tsd_t *tsd, void *ptr, size_t oldsize, size_t size, size_t extra, size_t alignment, bool zero, tcache_t *tcache, arena_t *arena); void *iralloct(tsd_t *tsd, void *ptr, size_t oldsize, size_t size, size_t alignment, bool zero, tcache_t *tcache, arena_t *arena); void *iralloc(tsd_t *tsd, void *ptr, size_t oldsize, size_t size, size_t alignment, bool zero); -bool ixalloc(tsd_t *tsd, void *ptr, size_t oldsize, size_t size, +bool ixalloc(tsdn_t *tsdn, void *ptr, size_t oldsize, size_t size, size_t extra, size_t alignment, bool zero); #endif #if (defined(JEMALLOC_ENABLE_INLINE) || defined(JEMALLOC_C_)) JEMALLOC_ALWAYS_INLINE arena_t * iaalloc(const void *ptr) { assert(ptr != NULL); return (arena_aalloc(ptr)); } /* * Typical usage: + * tsdn_t *tsdn = [...] * void *ptr = [...] - * size_t sz = isalloc(ptr, config_prof); + * size_t sz = isalloc(tsdn, ptr, config_prof); */ JEMALLOC_ALWAYS_INLINE size_t -isalloc(const void *ptr, bool demote) +isalloc(tsdn_t *tsdn, const void *ptr, bool demote) { assert(ptr != NULL); /* Demotion only makes sense if config_prof is true. 
*/ assert(config_prof || !demote); - return (arena_salloc(ptr, demote)); + return (arena_salloc(tsdn, ptr, demote)); } JEMALLOC_ALWAYS_INLINE void * -iallocztm(tsd_t *tsd, size_t size, szind_t ind, bool zero, tcache_t *tcache, +iallocztm(tsdn_t *tsdn, size_t size, szind_t ind, bool zero, tcache_t *tcache, bool is_metadata, arena_t *arena, bool slow_path) { void *ret; assert(size != 0); + assert(!is_metadata || tcache == NULL); + assert(!is_metadata || arena == NULL || arena->ind < narenas_auto); - ret = arena_malloc(tsd, arena, size, ind, zero, tcache, slow_path); + ret = arena_malloc(tsdn, arena, size, ind, zero, tcache, slow_path); if (config_stats && is_metadata && likely(ret != NULL)) { - arena_metadata_allocated_add(iaalloc(ret), isalloc(ret, - config_prof)); + arena_metadata_allocated_add(iaalloc(ret), + isalloc(tsdn, ret, config_prof)); } return (ret); } JEMALLOC_ALWAYS_INLINE void * -imalloct(tsd_t *tsd, size_t size, szind_t ind, tcache_t *tcache, arena_t *arena) +ialloc(tsd_t *tsd, size_t size, szind_t ind, bool zero, bool slow_path) { - return (iallocztm(tsd, size, ind, false, tcache, false, arena, true)); + return (iallocztm(tsd_tsdn(tsd), size, ind, zero, tcache_get(tsd, true), + false, NULL, slow_path)); } JEMALLOC_ALWAYS_INLINE void * -imalloc(tsd_t *tsd, size_t size, szind_t ind, bool slow_path) -{ - - return (iallocztm(tsd, size, ind, false, tcache_get(tsd, true), false, - NULL, slow_path)); -} - -JEMALLOC_ALWAYS_INLINE void * -icalloct(tsd_t *tsd, size_t size, szind_t ind, tcache_t *tcache, arena_t *arena) -{ - - return (iallocztm(tsd, size, ind, true, tcache, false, arena, true)); -} - -JEMALLOC_ALWAYS_INLINE void * -icalloc(tsd_t *tsd, size_t size, szind_t ind) -{ - - return (iallocztm(tsd, size, ind, true, tcache_get(tsd, true), false, - NULL, true)); -} - -JEMALLOC_ALWAYS_INLINE void * -ipallocztm(tsd_t *tsd, size_t usize, size_t alignment, bool zero, +ipallocztm(tsdn_t *tsdn, size_t usize, size_t alignment, bool zero, tcache_t *tcache, bool is_metadata, arena_t *arena) { void *ret; assert(usize != 0); assert(usize == sa2u(usize, alignment)); + assert(!is_metadata || tcache == NULL); + assert(!is_metadata || arena == NULL || arena->ind < narenas_auto); - ret = arena_palloc(tsd, arena, usize, alignment, zero, tcache); + ret = arena_palloc(tsdn, arena, usize, alignment, zero, tcache); assert(ALIGNMENT_ADDR2BASE(ret, alignment) == ret); if (config_stats && is_metadata && likely(ret != NULL)) { - arena_metadata_allocated_add(iaalloc(ret), isalloc(ret, + arena_metadata_allocated_add(iaalloc(ret), isalloc(tsdn, ret, config_prof)); } return (ret); } JEMALLOC_ALWAYS_INLINE void * -ipalloct(tsd_t *tsd, size_t usize, size_t alignment, bool zero, +ipalloct(tsdn_t *tsdn, size_t usize, size_t alignment, bool zero, tcache_t *tcache, arena_t *arena) { - return (ipallocztm(tsd, usize, alignment, zero, tcache, false, arena)); + return (ipallocztm(tsdn, usize, alignment, zero, tcache, false, arena)); } JEMALLOC_ALWAYS_INLINE void * ipalloc(tsd_t *tsd, size_t usize, size_t alignment, bool zero) { - return (ipallocztm(tsd, usize, alignment, zero, tcache_get(tsd, true), - false, NULL)); + return (ipallocztm(tsd_tsdn(tsd), usize, alignment, zero, + tcache_get(tsd, true), false, NULL)); } JEMALLOC_ALWAYS_INLINE size_t -ivsalloc(const void *ptr, bool demote) +ivsalloc(tsdn_t *tsdn, const void *ptr, bool demote) { extent_node_t *node; /* Return 0 if ptr is not within a chunk managed by jemalloc. 
*/ node = chunk_lookup(ptr, false); if (node == NULL) return (0); /* Only arena chunks should be looked up via interior pointers. */ assert(extent_node_addr_get(node) == ptr || extent_node_achunk_get(node)); - return (isalloc(ptr, demote)); + return (isalloc(tsdn, ptr, demote)); } JEMALLOC_INLINE size_t u2rz(size_t usize) { size_t ret; if (usize <= SMALL_MAXCLASS) { szind_t binind = size2index(usize); ret = arena_bin_info[binind].redzone_size; } else ret = 0; return (ret); } JEMALLOC_INLINE size_t -p2rz(const void *ptr) +p2rz(tsdn_t *tsdn, const void *ptr) { - size_t usize = isalloc(ptr, false); + size_t usize = isalloc(tsdn, ptr, false); return (u2rz(usize)); } JEMALLOC_ALWAYS_INLINE void -idalloctm(tsd_t *tsd, void *ptr, tcache_t *tcache, bool is_metadata, +idalloctm(tsdn_t *tsdn, void *ptr, tcache_t *tcache, bool is_metadata, bool slow_path) { assert(ptr != NULL); + assert(!is_metadata || tcache == NULL); + assert(!is_metadata || iaalloc(ptr)->ind < narenas_auto); if (config_stats && is_metadata) { - arena_metadata_allocated_sub(iaalloc(ptr), isalloc(ptr, + arena_metadata_allocated_sub(iaalloc(ptr), isalloc(tsdn, ptr, config_prof)); } - arena_dalloc(tsd, ptr, tcache, slow_path); + arena_dalloc(tsdn, ptr, tcache, slow_path); } JEMALLOC_ALWAYS_INLINE void -idalloct(tsd_t *tsd, void *ptr, tcache_t *tcache) -{ - - idalloctm(tsd, ptr, tcache, false, true); -} - -JEMALLOC_ALWAYS_INLINE void idalloc(tsd_t *tsd, void *ptr) { - idalloctm(tsd, ptr, tcache_get(tsd, false), false, true); + idalloctm(tsd_tsdn(tsd), ptr, tcache_get(tsd, false), false, true); } JEMALLOC_ALWAYS_INLINE void iqalloc(tsd_t *tsd, void *ptr, tcache_t *tcache, bool slow_path) { if (slow_path && config_fill && unlikely(opt_quarantine)) quarantine(tsd, ptr); else - idalloctm(tsd, ptr, tcache, false, slow_path); + idalloctm(tsd_tsdn(tsd), ptr, tcache, false, slow_path); } JEMALLOC_ALWAYS_INLINE void -isdalloct(tsd_t *tsd, void *ptr, size_t size, tcache_t *tcache) +isdalloct(tsdn_t *tsdn, void *ptr, size_t size, tcache_t *tcache, + bool slow_path) { - arena_sdalloc(tsd, ptr, size, tcache); + arena_sdalloc(tsdn, ptr, size, tcache, slow_path); } JEMALLOC_ALWAYS_INLINE void -isqalloc(tsd_t *tsd, void *ptr, size_t size, tcache_t *tcache) +isqalloc(tsd_t *tsd, void *ptr, size_t size, tcache_t *tcache, bool slow_path) { - if (config_fill && unlikely(opt_quarantine)) + if (slow_path && config_fill && unlikely(opt_quarantine)) quarantine(tsd, ptr); else - isdalloct(tsd, ptr, size, tcache); + isdalloct(tsd_tsdn(tsd), ptr, size, tcache, slow_path); } JEMALLOC_ALWAYS_INLINE void * iralloct_realign(tsd_t *tsd, void *ptr, size_t oldsize, size_t size, size_t extra, size_t alignment, bool zero, tcache_t *tcache, arena_t *arena) { void *p; size_t usize, copysize; usize = sa2u(size + extra, alignment); if (unlikely(usize == 0 || usize > HUGE_MAXCLASS)) return (NULL); - p = ipalloct(tsd, usize, alignment, zero, tcache, arena); + p = ipalloct(tsd_tsdn(tsd), usize, alignment, zero, tcache, arena); if (p == NULL) { if (extra == 0) return (NULL); /* Try again, without extra this time. */ usize = sa2u(size, alignment); if (unlikely(usize == 0 || usize > HUGE_MAXCLASS)) return (NULL); - p = ipalloct(tsd, usize, alignment, zero, tcache, arena); + p = ipalloct(tsd_tsdn(tsd), usize, alignment, zero, tcache, + arena); if (p == NULL) return (NULL); } /* * Copy at most size bytes (not size+extra), since the caller has no * expectation that the extra bytes will be reliably preserved. */ copysize = (size < oldsize) ? 
size : oldsize; memcpy(p, ptr, copysize); - isqalloc(tsd, ptr, oldsize, tcache); + isqalloc(tsd, ptr, oldsize, tcache, true); return (p); } JEMALLOC_ALWAYS_INLINE void * iralloct(tsd_t *tsd, void *ptr, size_t oldsize, size_t size, size_t alignment, bool zero, tcache_t *tcache, arena_t *arena) { assert(ptr != NULL); assert(size != 0); if (alignment != 0 && ((uintptr_t)ptr & ((uintptr_t)alignment-1)) != 0) { /* * Existing object alignment is inadequate; allocate new space * and copy. */ return (iralloct_realign(tsd, ptr, oldsize, size, 0, alignment, zero, tcache, arena)); } return (arena_ralloc(tsd, arena, ptr, oldsize, size, alignment, zero, tcache)); } JEMALLOC_ALWAYS_INLINE void * iralloc(tsd_t *tsd, void *ptr, size_t oldsize, size_t size, size_t alignment, bool zero) { return (iralloct(tsd, ptr, oldsize, size, alignment, zero, tcache_get(tsd, true), NULL)); } JEMALLOC_ALWAYS_INLINE bool -ixalloc(tsd_t *tsd, void *ptr, size_t oldsize, size_t size, size_t extra, +ixalloc(tsdn_t *tsdn, void *ptr, size_t oldsize, size_t size, size_t extra, size_t alignment, bool zero) { assert(ptr != NULL); assert(size != 0); if (alignment != 0 && ((uintptr_t)ptr & ((uintptr_t)alignment-1)) != 0) { /* Existing object alignment is inadequate. */ return (true); } - return (arena_ralloc_no_move(tsd, ptr, oldsize, size, extra, zero)); + return (arena_ralloc_no_move(tsdn, ptr, oldsize, size, extra, zero)); } #endif #include "jemalloc/internal/prof.h" #undef JEMALLOC_H_INLINES /******************************************************************************/ #endif /* JEMALLOC_INTERNAL_H */ Index: head/contrib/jemalloc/include/jemalloc/internal/jemalloc_internal_defs.h =================================================================== --- head/contrib/jemalloc/include/jemalloc/internal/jemalloc_internal_defs.h (revision 299586) +++ head/contrib/jemalloc/include/jemalloc/internal/jemalloc_internal_defs.h (revision 299587) @@ -1,270 +1,279 @@ /* include/jemalloc/internal/jemalloc_internal_defs.h. Generated from jemalloc_internal_defs.h.in by configure. */ #ifndef JEMALLOC_INTERNAL_DEFS_H_ #define JEMALLOC_INTERNAL_DEFS_H_ /* * If JEMALLOC_PREFIX is defined via --with-jemalloc-prefix, it will cause all * public APIs to be prefixed. This makes it possible, with some care, to use * multiple allocators simultaneously. */ /* #undef JEMALLOC_PREFIX */ /* #undef JEMALLOC_CPREFIX */ /* * JEMALLOC_PRIVATE_NAMESPACE is used as a prefix for all library-private APIs. * For shared libraries, symbol visibility mechanisms prevent these symbols * from being exported, but for static libraries, naming collisions are a real * possibility. */ #define JEMALLOC_PRIVATE_NAMESPACE __je_ /* * Hyper-threaded CPUs may need a special instruction inside spin loops in * order to yield to another virtual CPU. */ #define CPU_SPINWAIT __asm__ volatile("pause") /* Defined if C11 atomics are available. */ /* #undef JEMALLOC_C11ATOMICS */ /* Defined if the equivalent of FreeBSD's atomic(9) functions are available. */ #define JEMALLOC_ATOMIC9 1 /* * Defined if OSAtomic*() functions are available, as provided by Darwin, and * documented in the atomic(3) manual page. */ /* #undef JEMALLOC_OSATOMIC */ /* * Defined if __sync_add_and_fetch(uint32_t *, uint32_t) and * __sync_sub_and_fetch(uint32_t *, uint32_t) are available, despite * __GCC_HAVE_SYNC_COMPARE_AND_SWAP_4 not being defined (which means the * functions are defined in libgcc instead of being inlines). 
*/ /* #undef JE_FORCE_SYNC_COMPARE_AND_SWAP_4 */ /* * Defined if __sync_add_and_fetch(uint64_t *, uint64_t) and * __sync_sub_and_fetch(uint64_t *, uint64_t) are available, despite * __GCC_HAVE_SYNC_COMPARE_AND_SWAP_8 not being defined (which means the * functions are defined in libgcc instead of being inlines). */ /* #undef JE_FORCE_SYNC_COMPARE_AND_SWAP_8 */ /* * Defined if __builtin_clz() and __builtin_clzl() are available. */ #define JEMALLOC_HAVE_BUILTIN_CLZ /* * Defined if madvise(2) is available. */ #define JEMALLOC_HAVE_MADVISE /* * Defined if OSSpin*() functions are available, as provided by Darwin, and * documented in the spinlock(3) manual page. */ /* #undef JEMALLOC_OSSPIN */ /* * Defined if secure_getenv(3) is available. */ /* #undef JEMALLOC_HAVE_SECURE_GETENV */ /* * Defined if issetugid(2) is available. */ #define JEMALLOC_HAVE_ISSETUGID /* * Defined if _malloc_thread_cleanup() exists. At least in the case of * FreeBSD, pthread_key_create() allocates, which if used during malloc * bootstrapping will cause recursion into the pthreads library. Therefore, if * _malloc_thread_cleanup() exists, use it as the basis for thread cleanup in * malloc_tsd. */ #define JEMALLOC_MALLOC_THREAD_CLEANUP /* * Defined if threaded initialization is known to be safe on this platform. * Among other things, it must be possible to initialize a mutex without * triggering allocation in order for threaded allocation to be safe. */ /* #undef JEMALLOC_THREADED_INIT */ /* * Defined if the pthreads implementation defines * _pthread_mutex_init_calloc_cb(), in which case the function is used in order * to avoid recursive allocation during mutex initialization. */ #define JEMALLOC_MUTEX_INIT_CB 1 /* Non-empty if the tls_model attribute is supported. */ #define JEMALLOC_TLS_MODEL __attribute__((tls_model("initial-exec"))) /* JEMALLOC_CC_SILENCE enables code that silences unuseful compiler warnings. */ #define JEMALLOC_CC_SILENCE /* JEMALLOC_CODE_COVERAGE enables test code coverage analysis. */ /* #undef JEMALLOC_CODE_COVERAGE */ /* * JEMALLOC_DEBUG enables assertions and other sanity checks, and disables * inline functions. */ /* #undef JEMALLOC_DEBUG */ /* JEMALLOC_STATS enables statistics calculation. */ #define JEMALLOC_STATS /* JEMALLOC_PROF enables allocation profiling. */ /* #undef JEMALLOC_PROF */ /* Use libunwind for profile backtracing if defined. */ /* #undef JEMALLOC_PROF_LIBUNWIND */ /* Use libgcc for profile backtracing if defined. */ /* #undef JEMALLOC_PROF_LIBGCC */ /* Use gcc intrinsics for profile backtracing if defined. */ /* #undef JEMALLOC_PROF_GCC */ /* * JEMALLOC_TCACHE enables a thread-specific caching layer for small objects. * This makes it possible to allocate/deallocate objects without any locking * when the cache is in the steady state. */ #define JEMALLOC_TCACHE /* * JEMALLOC_DSS enables use of sbrk(2) to allocate chunks from the data storage * segment (DSS). */ #define JEMALLOC_DSS /* Support memory filling (junk/zero/quarantine/redzone). */ #define JEMALLOC_FILL /* Support utrace(2)-based tracing. */ #define JEMALLOC_UTRACE /* Support Valgrind. */ /* #undef JEMALLOC_VALGRIND */ /* Support optional abort() on OOM. */ #define JEMALLOC_XMALLOC /* Support lazy locking (avoid locking unless a second thread is launched). */ #define JEMALLOC_LAZY_LOCK /* Minimum size class to support is 2^LG_TINY_MIN bytes. */ #define LG_TINY_MIN 3 /* * Minimum allocation alignment is 2^LG_QUANTUM bytes (ignoring tiny size * classes). */ /* #undef LG_QUANTUM */ /* One page is 2^LG_PAGE bytes. 
*/ #define LG_PAGE 12 /* * If defined, adjacent virtual memory mappings with identical attributes * automatically coalesce, and they fragment when changes are made to subranges. * This is the normal order of things for mmap()/munmap(), but on Windows * VirtualAlloc()/VirtualFree() operations must be precisely matched, i.e. * mappings do *not* coalesce/fragment. */ #define JEMALLOC_MAPS_COALESCE /* * If defined, use munmap() to unmap freed chunks, rather than storing them for * later reuse. This is disabled by default on Linux because common sequences * of mmap()/munmap() calls will cause virtual memory map holes. */ #define JEMALLOC_MUNMAP /* TLS is used to map arenas and magazine caches to threads. */ #define JEMALLOC_TLS /* * ffs*() functions to use for bitmapping. Don't use these directly; instead, * use ffs_*() from util.h. */ #define JEMALLOC_INTERNAL_FFSLL __builtin_ffsll #define JEMALLOC_INTERNAL_FFSL __builtin_ffsl #define JEMALLOC_INTERNAL_FFS __builtin_ffs /* * JEMALLOC_IVSALLOC enables ivsalloc(), which verifies that pointers reside * within jemalloc-owned chunks before dereferencing them. */ /* #undef JEMALLOC_IVSALLOC */ /* * If defined, explicitly attempt to more uniformly distribute large allocation * pointer alignments across all cache indices. */ #define JEMALLOC_CACHE_OBLIVIOUS /* * Darwin (OS X) uses zones to work around Mach-O symbol override shortcomings. */ /* #undef JEMALLOC_ZONE */ /* #undef JEMALLOC_ZONE_VERSION */ /* + * Methods for determining whether the OS overcommits. + * JEMALLOC_PROC_SYS_VM_OVERCOMMIT_MEMORY: Linux's + * /proc/sys/vm.overcommit_memory file. + * JEMALLOC_SYSCTL_VM_OVERCOMMIT: FreeBSD's vm.overcommit sysctl. + */ +#define JEMALLOC_SYSCTL_VM_OVERCOMMIT +/* #undef JEMALLOC_PROC_SYS_VM_OVERCOMMIT_MEMORY */ + +/* * Methods for purging unused pages differ between operating systems. * * madvise(..., MADV_DONTNEED) : On Linux, this immediately discards pages, * such that new pages will be demand-zeroed if * the address region is later touched. * madvise(..., MADV_FREE) : On FreeBSD and Darwin, this marks pages as being * unused, such that they will be discarded rather * than swapped out. */ /* #undef JEMALLOC_PURGE_MADVISE_DONTNEED */ #define JEMALLOC_PURGE_MADVISE_FREE /* Define if operating system has alloca.h header. */ /* #undef JEMALLOC_HAS_ALLOCA_H */ /* C99 restrict keyword supported. */ #define JEMALLOC_HAS_RESTRICT 1 /* For use by hash code. */ /* #undef JEMALLOC_BIG_ENDIAN */ /* sizeof(int) == 2^LG_SIZEOF_INT. */ #define LG_SIZEOF_INT 2 /* sizeof(long) == 2^LG_SIZEOF_LONG. */ #define LG_SIZEOF_LONG 3 /* sizeof(long long) == 2^LG_SIZEOF_LONG_LONG. */ #define LG_SIZEOF_LONG_LONG 3 /* sizeof(intmax_t) == 2^LG_SIZEOF_INTMAX_T. */ #define LG_SIZEOF_INTMAX_T 3 /* glibc malloc hooks (__malloc_hook, __realloc_hook, __free_hook). */ /* #undef JEMALLOC_GLIBC_MALLOC_HOOK */ /* glibc memalign hook. */ /* #undef JEMALLOC_GLIBC_MEMALIGN_HOOK */ /* Adaptive mutex support in pthreads. */ #define JEMALLOC_HAVE_PTHREAD_MUTEX_ADAPTIVE_NP /* * If defined, jemalloc symbols are not exported (doesn't work when * JEMALLOC_PREFIX is not defined). */ /* #undef JEMALLOC_EXPORT */ /* config.malloc_conf options string. 
*/ #define JEMALLOC_CONFIG_MALLOC_CONF "" #endif /* JEMALLOC_INTERNAL_DEFS_H_ */ Index: head/contrib/jemalloc/include/jemalloc/internal/mb.h =================================================================== --- head/contrib/jemalloc/include/jemalloc/internal/mb.h (revision 299586) +++ head/contrib/jemalloc/include/jemalloc/internal/mb.h (revision 299587) @@ -1,115 +1,115 @@ /******************************************************************************/ #ifdef JEMALLOC_H_TYPES #endif /* JEMALLOC_H_TYPES */ /******************************************************************************/ #ifdef JEMALLOC_H_STRUCTS #endif /* JEMALLOC_H_STRUCTS */ /******************************************************************************/ #ifdef JEMALLOC_H_EXTERNS #endif /* JEMALLOC_H_EXTERNS */ /******************************************************************************/ #ifdef JEMALLOC_H_INLINES #ifndef JEMALLOC_ENABLE_INLINE void mb_write(void); #endif #if (defined(JEMALLOC_ENABLE_INLINE) || defined(JEMALLOC_MB_C_)) #ifdef __i386__ /* * According to the Intel Architecture Software Developer's Manual, current * processors execute instructions in order from the perspective of other * processors in a multiprocessor system, but 1) Intel reserves the right to * change that, and 2) the compiler's optimizer could re-order instructions if * there weren't some form of barrier. Therefore, even if running on an * architecture that does not need memory barriers (everything through at least * i686), an "optimizer barrier" is necessary. */ JEMALLOC_INLINE void mb_write(void) { # if 0 /* This is a true memory barrier. */ asm volatile ("pusha;" "xor %%eax,%%eax;" "cpuid;" "popa;" : /* Outputs. */ : /* Inputs. */ : "memory" /* Clobbers. */ ); -#else +# else /* * This is hopefully enough to keep the compiler from reordering * instructions around this one. */ asm volatile ("nop;" : /* Outputs. */ : /* Inputs. */ : "memory" /* Clobbers. */ ); -#endif +# endif } #elif (defined(__amd64__) || defined(__x86_64__)) JEMALLOC_INLINE void mb_write(void) { asm volatile ("sfence" : /* Outputs. */ : /* Inputs. */ : "memory" /* Clobbers. */ ); } #elif defined(__powerpc__) JEMALLOC_INLINE void mb_write(void) { asm volatile ("eieio" : /* Outputs. */ : /* Inputs. */ : "memory" /* Clobbers. */ ); } #elif defined(__sparc64__) JEMALLOC_INLINE void mb_write(void) { asm volatile ("membar #StoreStore" : /* Outputs. */ : /* Inputs. */ : "memory" /* Clobbers. */ ); } #elif defined(__tile__) JEMALLOC_INLINE void mb_write(void) { __sync_synchronize(); } #else /* * This is much slower than a simple memory barrier, but the semantics of mutex * unlock make this work. 
*/ JEMALLOC_INLINE void mb_write(void) { malloc_mutex_t mtx; - malloc_mutex_init(&mtx); - malloc_mutex_lock(&mtx); - malloc_mutex_unlock(&mtx); + malloc_mutex_init(&mtx, "mb", WITNESS_RANK_OMIT); + malloc_mutex_lock(NULL, &mtx); + malloc_mutex_unlock(NULL, &mtx); } #endif #endif #endif /* JEMALLOC_H_INLINES */ /******************************************************************************/ Index: head/contrib/jemalloc/include/jemalloc/internal/mutex.h =================================================================== --- head/contrib/jemalloc/include/jemalloc/internal/mutex.h (revision 299586) +++ head/contrib/jemalloc/include/jemalloc/internal/mutex.h (revision 299587) @@ -1,109 +1,136 @@ /******************************************************************************/ #ifdef JEMALLOC_H_TYPES typedef struct malloc_mutex_s malloc_mutex_t; #ifdef _WIN32 # define MALLOC_MUTEX_INITIALIZER #elif (defined(JEMALLOC_OSSPIN)) -# define MALLOC_MUTEX_INITIALIZER {0} +# define MALLOC_MUTEX_INITIALIZER {0, WITNESS_INITIALIZER(WITNESS_RANK_OMIT)} #elif (defined(JEMALLOC_MUTEX_INIT_CB)) -# define MALLOC_MUTEX_INITIALIZER {PTHREAD_MUTEX_INITIALIZER, NULL} +# define MALLOC_MUTEX_INITIALIZER \ + {PTHREAD_MUTEX_INITIALIZER, NULL, WITNESS_INITIALIZER(WITNESS_RANK_OMIT)} #else # if (defined(JEMALLOC_HAVE_PTHREAD_MUTEX_ADAPTIVE_NP) && \ defined(PTHREAD_ADAPTIVE_MUTEX_INITIALIZER_NP)) # define MALLOC_MUTEX_TYPE PTHREAD_MUTEX_ADAPTIVE_NP -# define MALLOC_MUTEX_INITIALIZER {PTHREAD_ADAPTIVE_MUTEX_INITIALIZER_NP} +# define MALLOC_MUTEX_INITIALIZER \ + {PTHREAD_ADAPTIVE_MUTEX_INITIALIZER_NP, \ + WITNESS_INITIALIZER(WITNESS_RANK_OMIT)} # else # define MALLOC_MUTEX_TYPE PTHREAD_MUTEX_DEFAULT -# define MALLOC_MUTEX_INITIALIZER {PTHREAD_MUTEX_INITIALIZER} +# define MALLOC_MUTEX_INITIALIZER \ + {PTHREAD_MUTEX_INITIALIZER, WITNESS_INITIALIZER(WITNESS_RANK_OMIT)} # endif #endif #endif /* JEMALLOC_H_TYPES */ /******************************************************************************/ #ifdef JEMALLOC_H_STRUCTS struct malloc_mutex_s { #ifdef _WIN32 # if _WIN32_WINNT >= 0x0600 SRWLOCK lock; # else CRITICAL_SECTION lock; # endif #elif (defined(JEMALLOC_OSSPIN)) OSSpinLock lock; #elif (defined(JEMALLOC_MUTEX_INIT_CB)) pthread_mutex_t lock; malloc_mutex_t *postponed_next; #else pthread_mutex_t lock; #endif + witness_t witness; }; #endif /* JEMALLOC_H_STRUCTS */ /******************************************************************************/ #ifdef JEMALLOC_H_EXTERNS #ifdef JEMALLOC_LAZY_LOCK extern bool isthreaded; #endif -bool malloc_mutex_init(malloc_mutex_t *mutex); -void malloc_mutex_prefork(malloc_mutex_t *mutex); -void malloc_mutex_postfork_parent(malloc_mutex_t *mutex); -void malloc_mutex_postfork_child(malloc_mutex_t *mutex); +bool malloc_mutex_init(malloc_mutex_t *mutex, const char *name, + witness_rank_t rank); +void malloc_mutex_prefork(tsdn_t *tsdn, malloc_mutex_t *mutex); +void malloc_mutex_postfork_parent(tsdn_t *tsdn, malloc_mutex_t *mutex); +void malloc_mutex_postfork_child(tsdn_t *tsdn, malloc_mutex_t *mutex); bool malloc_mutex_first_thread(void); -bool mutex_boot(void); +bool malloc_mutex_boot(void); #endif /* JEMALLOC_H_EXTERNS */ /******************************************************************************/ #ifdef JEMALLOC_H_INLINES #ifndef JEMALLOC_ENABLE_INLINE -void malloc_mutex_lock(malloc_mutex_t *mutex); -void malloc_mutex_unlock(malloc_mutex_t *mutex); +void malloc_mutex_lock(tsdn_t *tsdn, malloc_mutex_t *mutex); +void malloc_mutex_unlock(tsdn_t *tsdn, malloc_mutex_t *mutex); +void 
malloc_mutex_assert_owner(tsdn_t *tsdn, malloc_mutex_t *mutex); +void malloc_mutex_assert_not_owner(tsdn_t *tsdn, malloc_mutex_t *mutex); #endif #if (defined(JEMALLOC_ENABLE_INLINE) || defined(JEMALLOC_MUTEX_C_)) JEMALLOC_INLINE void -malloc_mutex_lock(malloc_mutex_t *mutex) +malloc_mutex_lock(tsdn_t *tsdn, malloc_mutex_t *mutex) { if (isthreaded) { + witness_assert_not_owner(tsdn, &mutex->witness); #ifdef _WIN32 # if _WIN32_WINNT >= 0x0600 AcquireSRWLockExclusive(&mutex->lock); # else EnterCriticalSection(&mutex->lock); # endif #elif (defined(JEMALLOC_OSSPIN)) OSSpinLockLock(&mutex->lock); #else pthread_mutex_lock(&mutex->lock); #endif + witness_lock(tsdn, &mutex->witness); } } JEMALLOC_INLINE void -malloc_mutex_unlock(malloc_mutex_t *mutex) +malloc_mutex_unlock(tsdn_t *tsdn, malloc_mutex_t *mutex) { if (isthreaded) { + witness_unlock(tsdn, &mutex->witness); #ifdef _WIN32 # if _WIN32_WINNT >= 0x0600 ReleaseSRWLockExclusive(&mutex->lock); # else LeaveCriticalSection(&mutex->lock); # endif #elif (defined(JEMALLOC_OSSPIN)) OSSpinLockUnlock(&mutex->lock); #else pthread_mutex_unlock(&mutex->lock); #endif } +} + +JEMALLOC_INLINE void +malloc_mutex_assert_owner(tsdn_t *tsdn, malloc_mutex_t *mutex) +{ + + if (isthreaded) + witness_assert_owner(tsdn, &mutex->witness); +} + +JEMALLOC_INLINE void +malloc_mutex_assert_not_owner(tsdn_t *tsdn, malloc_mutex_t *mutex) +{ + + if (isthreaded) + witness_assert_not_owner(tsdn, &mutex->witness); } #endif #endif /* JEMALLOC_H_INLINES */ /******************************************************************************/ Index: head/contrib/jemalloc/include/jemalloc/internal/nstime.h =================================================================== --- head/contrib/jemalloc/include/jemalloc/internal/nstime.h (revision 299586) +++ head/contrib/jemalloc/include/jemalloc/internal/nstime.h (revision 299587) @@ -1,48 +1,48 @@ /******************************************************************************/ #ifdef JEMALLOC_H_TYPES -#define JEMALLOC_CLOCK_GETTIME defined(_POSIX_MONOTONIC_CLOCK) \ +#define JEMALLOC_CLOCK_GETTIME defined(_POSIX_MONOTONIC_CLOCK) \ && _POSIX_MONOTONIC_CLOCK >= 0 typedef struct nstime_s nstime_t; /* Maximum supported number of seconds (~584 years). 
*/ #define NSTIME_SEC_MAX KQU(18446744072) #endif /* JEMALLOC_H_TYPES */ /******************************************************************************/ #ifdef JEMALLOC_H_STRUCTS struct nstime_s { uint64_t ns; }; #endif /* JEMALLOC_H_STRUCTS */ /******************************************************************************/ #ifdef JEMALLOC_H_EXTERNS void nstime_init(nstime_t *time, uint64_t ns); void nstime_init2(nstime_t *time, uint64_t sec, uint64_t nsec); uint64_t nstime_ns(const nstime_t *time); uint64_t nstime_sec(const nstime_t *time); uint64_t nstime_nsec(const nstime_t *time); void nstime_copy(nstime_t *time, const nstime_t *source); int nstime_compare(const nstime_t *a, const nstime_t *b); void nstime_add(nstime_t *time, const nstime_t *addend); void nstime_subtract(nstime_t *time, const nstime_t *subtrahend); void nstime_imultiply(nstime_t *time, uint64_t multiplier); void nstime_idivide(nstime_t *time, uint64_t divisor); uint64_t nstime_divide(const nstime_t *time, const nstime_t *divisor); #ifdef JEMALLOC_JET typedef bool (nstime_update_t)(nstime_t *); extern nstime_update_t *nstime_update; #else bool nstime_update(nstime_t *time); #endif #endif /* JEMALLOC_H_EXTERNS */ /******************************************************************************/ #ifdef JEMALLOC_H_INLINES #endif /* JEMALLOC_H_INLINES */ /******************************************************************************/ Index: head/contrib/jemalloc/include/jemalloc/internal/pages.h =================================================================== --- head/contrib/jemalloc/include/jemalloc/internal/pages.h (revision 299586) +++ head/contrib/jemalloc/include/jemalloc/internal/pages.h (revision 299587) @@ -1,26 +1,27 @@ /******************************************************************************/ #ifdef JEMALLOC_H_TYPES #endif /* JEMALLOC_H_TYPES */ /******************************************************************************/ #ifdef JEMALLOC_H_STRUCTS #endif /* JEMALLOC_H_STRUCTS */ /******************************************************************************/ #ifdef JEMALLOC_H_EXTERNS -void *pages_map(void *addr, size_t size); +void *pages_map(void *addr, size_t size, bool *commit); void pages_unmap(void *addr, size_t size); void *pages_trim(void *addr, size_t alloc_size, size_t leadsize, - size_t size); + size_t size, bool *commit); bool pages_commit(void *addr, size_t size); bool pages_decommit(void *addr, size_t size); bool pages_purge(void *addr, size_t size); +void pages_boot(void); #endif /* JEMALLOC_H_EXTERNS */ /******************************************************************************/ #ifdef JEMALLOC_H_INLINES #endif /* JEMALLOC_H_INLINES */ /******************************************************************************/ Index: head/contrib/jemalloc/include/jemalloc/internal/ph.h =================================================================== --- head/contrib/jemalloc/include/jemalloc/internal/ph.h (nonexistent) +++ head/contrib/jemalloc/include/jemalloc/internal/ph.h (revision 299587) @@ -0,0 +1,345 @@ +/* + * A Pairing Heap implementation. + * + * "The Pairing Heap: A New Form of Self-Adjusting Heap" + * https://www.cs.cmu.edu/~sleator/papers/pairing-heaps.pdf + * + * With auxiliary twopass list, described in a follow on paper. 
+ * + * "Pairing Heaps: Experiments and Analysis" + * http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.106.2988&rep=rep1&type=pdf + * + ******************************************************************************* + */ + +#ifndef PH_H_ +#define PH_H_ + +/* Node structure. */ +#define phn(a_type) \ +struct { \ + a_type *phn_prev; \ + a_type *phn_next; \ + a_type *phn_lchild; \ +} + +/* Root structure. */ +#define ph(a_type) \ +struct { \ + a_type *ph_root; \ +} + +/* Internal utility macros. */ +#define phn_lchild_get(a_type, a_field, a_phn) \ + (a_phn->a_field.phn_lchild) +#define phn_lchild_set(a_type, a_field, a_phn, a_lchild) do { \ + a_phn->a_field.phn_lchild = a_lchild; \ +} while (0) + +#define phn_next_get(a_type, a_field, a_phn) \ + (a_phn->a_field.phn_next) +#define phn_prev_set(a_type, a_field, a_phn, a_prev) do { \ + a_phn->a_field.phn_prev = a_prev; \ +} while (0) + +#define phn_prev_get(a_type, a_field, a_phn) \ + (a_phn->a_field.phn_prev) +#define phn_next_set(a_type, a_field, a_phn, a_next) do { \ + a_phn->a_field.phn_next = a_next; \ +} while (0) + +#define phn_merge_ordered(a_type, a_field, a_phn0, a_phn1, a_cmp) do { \ + a_type *phn0child; \ + \ + assert(a_phn0 != NULL); \ + assert(a_phn1 != NULL); \ + assert(a_cmp(a_phn0, a_phn1) <= 0); \ + \ + phn_prev_set(a_type, a_field, a_phn1, a_phn0); \ + phn0child = phn_lchild_get(a_type, a_field, a_phn0); \ + phn_next_set(a_type, a_field, a_phn1, phn0child); \ + if (phn0child != NULL) \ + phn_prev_set(a_type, a_field, phn0child, a_phn1); \ + phn_lchild_set(a_type, a_field, a_phn0, a_phn1); \ +} while (0) + +#define phn_merge(a_type, a_field, a_phn0, a_phn1, a_cmp, r_phn) do { \ + if (a_phn0 == NULL) \ + r_phn = a_phn1; \ + else if (a_phn1 == NULL) \ + r_phn = a_phn0; \ + else if (a_cmp(a_phn0, a_phn1) < 0) { \ + phn_merge_ordered(a_type, a_field, a_phn0, a_phn1, \ + a_cmp); \ + r_phn = a_phn0; \ + } else { \ + phn_merge_ordered(a_type, a_field, a_phn1, a_phn0, \ + a_cmp); \ + r_phn = a_phn1; \ + } \ +} while (0) + +#define ph_merge_siblings(a_type, a_field, a_phn, a_cmp, r_phn) do { \ + a_type *head = NULL; \ + a_type *tail = NULL; \ + a_type *phn0 = a_phn; \ + a_type *phn1 = phn_next_get(a_type, a_field, phn0); \ + \ + /* \ + * Multipass merge, wherein the first two elements of a FIFO \ + * are repeatedly merged, and each result is appended to the \ + * singly linked FIFO, until the FIFO contains only a single \ + * element. We start with a sibling list but no reference to \ + * its tail, so we do a single pass over the sibling list to \ + * populate the FIFO. 
\ + */ \ + if (phn1 != NULL) { \ + a_type *phnrest = phn_next_get(a_type, a_field, phn1); \ + if (phnrest != NULL) \ + phn_prev_set(a_type, a_field, phnrest, NULL); \ + phn_prev_set(a_type, a_field, phn0, NULL); \ + phn_next_set(a_type, a_field, phn0, NULL); \ + phn_prev_set(a_type, a_field, phn1, NULL); \ + phn_next_set(a_type, a_field, phn1, NULL); \ + phn_merge(a_type, a_field, phn0, phn1, a_cmp, phn0); \ + head = tail = phn0; \ + phn0 = phnrest; \ + while (phn0 != NULL) { \ + phn1 = phn_next_get(a_type, a_field, phn0); \ + if (phn1 != NULL) { \ + phnrest = phn_next_get(a_type, a_field, \ + phn1); \ + if (phnrest != NULL) { \ + phn_prev_set(a_type, a_field, \ + phnrest, NULL); \ + } \ + phn_prev_set(a_type, a_field, phn0, \ + NULL); \ + phn_next_set(a_type, a_field, phn0, \ + NULL); \ + phn_prev_set(a_type, a_field, phn1, \ + NULL); \ + phn_next_set(a_type, a_field, phn1, \ + NULL); \ + phn_merge(a_type, a_field, phn0, phn1, \ + a_cmp, phn0); \ + phn_next_set(a_type, a_field, tail, \ + phn0); \ + tail = phn0; \ + phn0 = phnrest; \ + } else { \ + phn_next_set(a_type, a_field, tail, \ + phn0); \ + tail = phn0; \ + phn0 = NULL; \ + } \ + } \ + phn0 = head; \ + phn1 = phn_next_get(a_type, a_field, phn0); \ + if (phn1 != NULL) { \ + while (true) { \ + head = phn_next_get(a_type, a_field, \ + phn1); \ + assert(phn_prev_get(a_type, a_field, \ + phn0) == NULL); \ + phn_next_set(a_type, a_field, phn0, \ + NULL); \ + assert(phn_prev_get(a_type, a_field, \ + phn1) == NULL); \ + phn_next_set(a_type, a_field, phn1, \ + NULL); \ + phn_merge(a_type, a_field, phn0, phn1, \ + a_cmp, phn0); \ + if (head == NULL) \ + break; \ + phn_next_set(a_type, a_field, tail, \ + phn0); \ + tail = phn0; \ + phn0 = head; \ + phn1 = phn_next_get(a_type, a_field, \ + phn0); \ + } \ + } \ + } \ + r_phn = phn0; \ +} while (0) + +#define ph_merge_aux(a_type, a_field, a_ph, a_cmp) do { \ + a_type *phn = phn_next_get(a_type, a_field, a_ph->ph_root); \ + if (phn != NULL) { \ + phn_prev_set(a_type, a_field, a_ph->ph_root, NULL); \ + phn_next_set(a_type, a_field, a_ph->ph_root, NULL); \ + phn_prev_set(a_type, a_field, phn, NULL); \ + ph_merge_siblings(a_type, a_field, phn, a_cmp, phn); \ + assert(phn_next_get(a_type, a_field, phn) == NULL); \ + phn_merge(a_type, a_field, a_ph->ph_root, phn, a_cmp, \ + a_ph->ph_root); \ + } \ +} while (0) + +#define ph_merge_children(a_type, a_field, a_phn, a_cmp, r_phn) do { \ + a_type *lchild = phn_lchild_get(a_type, a_field, a_phn); \ + if (lchild == NULL) \ + r_phn = NULL; \ + else { \ + ph_merge_siblings(a_type, a_field, lchild, a_cmp, \ + r_phn); \ + } \ +} while (0) + +/* + * The ph_proto() macro generates function prototypes that correspond to the + * functions generated by an equivalently parameterized call to ph_gen(). + */ +#define ph_proto(a_attr, a_prefix, a_ph_type, a_type) \ +a_attr void a_prefix##new(a_ph_type *ph); \ +a_attr bool a_prefix##empty(a_ph_type *ph); \ +a_attr a_type *a_prefix##first(a_ph_type *ph); \ +a_attr void a_prefix##insert(a_ph_type *ph, a_type *phn); \ +a_attr a_type *a_prefix##remove_first(a_ph_type *ph); \ +a_attr void a_prefix##remove(a_ph_type *ph, a_type *phn); + +/* + * The ph_gen() macro generates a type-specific pairing heap implementation, + * based on the above cpp macros. 
+ */ +#define ph_gen(a_attr, a_prefix, a_ph_type, a_type, a_field, a_cmp) \ +a_attr void \ +a_prefix##new(a_ph_type *ph) \ +{ \ + \ + memset(ph, 0, sizeof(ph(a_type))); \ +} \ +a_attr bool \ +a_prefix##empty(a_ph_type *ph) \ +{ \ + \ + return (ph->ph_root == NULL); \ +} \ +a_attr a_type * \ +a_prefix##first(a_ph_type *ph) \ +{ \ + \ + if (ph->ph_root == NULL) \ + return (NULL); \ + ph_merge_aux(a_type, a_field, ph, a_cmp); \ + return (ph->ph_root); \ +} \ +a_attr void \ +a_prefix##insert(a_ph_type *ph, a_type *phn) \ +{ \ + \ + memset(&phn->a_field, 0, sizeof(phn(a_type))); \ + \ + /* \ + * Treat the root as an aux list during insertion, and lazily \ + * merge during a_prefix##remove_first(). For elements that \ + * are inserted, then removed via a_prefix##remove() before the \ + * aux list is ever processed, this makes insert/remove \ + * constant-time, whereas eager merging would make insert \ + * O(log n). \ + */ \ + if (ph->ph_root == NULL) \ + ph->ph_root = phn; \ + else { \ + phn_next_set(a_type, a_field, phn, phn_next_get(a_type, \ + a_field, ph->ph_root)); \ + if (phn_next_get(a_type, a_field, ph->ph_root) != \ + NULL) { \ + phn_prev_set(a_type, a_field, \ + phn_next_get(a_type, a_field, ph->ph_root), \ + phn); \ + } \ + phn_prev_set(a_type, a_field, phn, ph->ph_root); \ + phn_next_set(a_type, a_field, ph->ph_root, phn); \ + } \ +} \ +a_attr a_type * \ +a_prefix##remove_first(a_ph_type *ph) \ +{ \ + a_type *ret; \ + \ + if (ph->ph_root == NULL) \ + return (NULL); \ + ph_merge_aux(a_type, a_field, ph, a_cmp); \ + \ + ret = ph->ph_root; \ + \ + ph_merge_children(a_type, a_field, ph->ph_root, a_cmp, \ + ph->ph_root); \ + \ + return (ret); \ +} \ +a_attr void \ +a_prefix##remove(a_ph_type *ph, a_type *phn) \ +{ \ + a_type *replace, *parent; \ + \ + /* \ + * We can delete from aux list without merging it, but we need \ + * to merge if we are dealing with the root node. \ + */ \ + if (ph->ph_root == phn) { \ + ph_merge_aux(a_type, a_field, ph, a_cmp); \ + if (ph->ph_root == phn) { \ + ph_merge_children(a_type, a_field, ph->ph_root, \ + a_cmp, ph->ph_root); \ + return; \ + } \ + } \ + \ + /* Get parent (if phn is leftmost child) before mutating. */ \ + if ((parent = phn_prev_get(a_type, a_field, phn)) != NULL) { \ + if (phn_lchild_get(a_type, a_field, parent) != phn) \ + parent = NULL; \ + } \ + /* Find a possible replacement node, and link to parent. */ \ + ph_merge_children(a_type, a_field, phn, a_cmp, replace); \ + /* Set next/prev for sibling linked list. 
*/ \ + if (replace != NULL) { \ + if (parent != NULL) { \ + phn_prev_set(a_type, a_field, replace, parent); \ + phn_lchild_set(a_type, a_field, parent, \ + replace); \ + } else { \ + phn_prev_set(a_type, a_field, replace, \ + phn_prev_get(a_type, a_field, phn)); \ + if (phn_prev_get(a_type, a_field, phn) != \ + NULL) { \ + phn_next_set(a_type, a_field, \ + phn_prev_get(a_type, a_field, phn), \ + replace); \ + } \ + } \ + phn_next_set(a_type, a_field, replace, \ + phn_next_get(a_type, a_field, phn)); \ + if (phn_next_get(a_type, a_field, phn) != NULL) { \ + phn_prev_set(a_type, a_field, \ + phn_next_get(a_type, a_field, phn), \ + replace); \ + } \ + } else { \ + if (parent != NULL) { \ + a_type *next = phn_next_get(a_type, a_field, \ + phn); \ + phn_lchild_set(a_type, a_field, parent, next); \ + if (next != NULL) { \ + phn_prev_set(a_type, a_field, next, \ + parent); \ + } \ + } else { \ + assert(phn_prev_get(a_type, a_field, phn) != \ + NULL); \ + phn_next_set(a_type, a_field, \ + phn_prev_get(a_type, a_field, phn), \ + phn_next_get(a_type, a_field, phn)); \ + } \ + if (phn_next_get(a_type, a_field, phn) != NULL) { \ + phn_prev_set(a_type, a_field, \ + phn_next_get(a_type, a_field, phn), \ + phn_prev_get(a_type, a_field, phn)); \ + } \ + } \ +} + +#endif /* PH_H_ */ Property changes on: head/contrib/jemalloc/include/jemalloc/internal/ph.h ___________________________________________________________________ Added: svn:eol-style ## -0,0 +1 ## +native \ No newline at end of property Added: svn:keywords ## -0,0 +1 ## +FreeBSD=%H \ No newline at end of property Added: svn:mime-type ## -0,0 +1 ## +text/plain \ No newline at end of property Index: head/contrib/jemalloc/include/jemalloc/internal/private_namespace.h =================================================================== --- head/contrib/jemalloc/include/jemalloc/internal/private_namespace.h (revision 299586) +++ head/contrib/jemalloc/include/jemalloc/internal/private_namespace.h (revision 299587) @@ -1,542 +1,611 @@ #define a0dalloc JEMALLOC_N(a0dalloc) #define a0malloc JEMALLOC_N(a0malloc) #define arena_aalloc JEMALLOC_N(arena_aalloc) #define arena_alloc_junk_small JEMALLOC_N(arena_alloc_junk_small) #define arena_basic_stats_merge JEMALLOC_N(arena_basic_stats_merge) #define arena_bin_index JEMALLOC_N(arena_bin_index) #define arena_bin_info JEMALLOC_N(arena_bin_info) -#define arena_bitselm_get JEMALLOC_N(arena_bitselm_get) +#define arena_bitselm_get_const JEMALLOC_N(arena_bitselm_get_const) +#define arena_bitselm_get_mutable JEMALLOC_N(arena_bitselm_get_mutable) #define arena_boot JEMALLOC_N(arena_boot) #define arena_choose JEMALLOC_N(arena_choose) #define arena_choose_hard JEMALLOC_N(arena_choose_hard) +#define arena_choose_impl JEMALLOC_N(arena_choose_impl) #define arena_chunk_alloc_huge JEMALLOC_N(arena_chunk_alloc_huge) #define arena_chunk_cache_maybe_insert JEMALLOC_N(arena_chunk_cache_maybe_insert) #define arena_chunk_cache_maybe_remove JEMALLOC_N(arena_chunk_cache_maybe_remove) #define arena_chunk_dalloc_huge JEMALLOC_N(arena_chunk_dalloc_huge) #define arena_chunk_ralloc_huge_expand JEMALLOC_N(arena_chunk_ralloc_huge_expand) #define arena_chunk_ralloc_huge_shrink JEMALLOC_N(arena_chunk_ralloc_huge_shrink) #define arena_chunk_ralloc_huge_similar JEMALLOC_N(arena_chunk_ralloc_huge_similar) #define arena_cleanup JEMALLOC_N(arena_cleanup) #define arena_dalloc JEMALLOC_N(arena_dalloc) #define arena_dalloc_bin JEMALLOC_N(arena_dalloc_bin) #define arena_dalloc_bin_junked_locked JEMALLOC_N(arena_dalloc_bin_junked_locked) #define 
arena_dalloc_junk_large JEMALLOC_N(arena_dalloc_junk_large) #define arena_dalloc_junk_small JEMALLOC_N(arena_dalloc_junk_small) #define arena_dalloc_large JEMALLOC_N(arena_dalloc_large) #define arena_dalloc_large_junked_locked JEMALLOC_N(arena_dalloc_large_junked_locked) #define arena_dalloc_small JEMALLOC_N(arena_dalloc_small) #define arena_decay_tick JEMALLOC_N(arena_decay_tick) #define arena_decay_ticks JEMALLOC_N(arena_decay_ticks) #define arena_decay_time_default_get JEMALLOC_N(arena_decay_time_default_get) #define arena_decay_time_default_set JEMALLOC_N(arena_decay_time_default_set) #define arena_decay_time_get JEMALLOC_N(arena_decay_time_get) #define arena_decay_time_set JEMALLOC_N(arena_decay_time_set) #define arena_dss_prec_get JEMALLOC_N(arena_dss_prec_get) #define arena_dss_prec_set JEMALLOC_N(arena_dss_prec_set) #define arena_get JEMALLOC_N(arena_get) +#define arena_ichoose JEMALLOC_N(arena_ichoose) #define arena_init JEMALLOC_N(arena_init) #define arena_lg_dirty_mult_default_get JEMALLOC_N(arena_lg_dirty_mult_default_get) #define arena_lg_dirty_mult_default_set JEMALLOC_N(arena_lg_dirty_mult_default_set) #define arena_lg_dirty_mult_get JEMALLOC_N(arena_lg_dirty_mult_get) #define arena_lg_dirty_mult_set JEMALLOC_N(arena_lg_dirty_mult_set) #define arena_malloc JEMALLOC_N(arena_malloc) #define arena_malloc_hard JEMALLOC_N(arena_malloc_hard) #define arena_malloc_large JEMALLOC_N(arena_malloc_large) #define arena_mapbits_allocated_get JEMALLOC_N(arena_mapbits_allocated_get) #define arena_mapbits_binind_get JEMALLOC_N(arena_mapbits_binind_get) #define arena_mapbits_decommitted_get JEMALLOC_N(arena_mapbits_decommitted_get) #define arena_mapbits_dirty_get JEMALLOC_N(arena_mapbits_dirty_get) #define arena_mapbits_get JEMALLOC_N(arena_mapbits_get) #define arena_mapbits_internal_set JEMALLOC_N(arena_mapbits_internal_set) #define arena_mapbits_large_binind_set JEMALLOC_N(arena_mapbits_large_binind_set) #define arena_mapbits_large_get JEMALLOC_N(arena_mapbits_large_get) #define arena_mapbits_large_set JEMALLOC_N(arena_mapbits_large_set) #define arena_mapbits_large_size_get JEMALLOC_N(arena_mapbits_large_size_get) #define arena_mapbits_size_decode JEMALLOC_N(arena_mapbits_size_decode) #define arena_mapbits_size_encode JEMALLOC_N(arena_mapbits_size_encode) #define arena_mapbits_small_runind_get JEMALLOC_N(arena_mapbits_small_runind_get) #define arena_mapbits_small_set JEMALLOC_N(arena_mapbits_small_set) #define arena_mapbits_unallocated_set JEMALLOC_N(arena_mapbits_unallocated_set) #define arena_mapbits_unallocated_size_get JEMALLOC_N(arena_mapbits_unallocated_size_get) #define arena_mapbits_unallocated_size_set JEMALLOC_N(arena_mapbits_unallocated_size_set) #define arena_mapbits_unzeroed_get JEMALLOC_N(arena_mapbits_unzeroed_get) -#define arena_mapbitsp_get JEMALLOC_N(arena_mapbitsp_get) +#define arena_mapbitsp_get_const JEMALLOC_N(arena_mapbitsp_get_const) +#define arena_mapbitsp_get_mutable JEMALLOC_N(arena_mapbitsp_get_mutable) #define arena_mapbitsp_read JEMALLOC_N(arena_mapbitsp_read) #define arena_mapbitsp_write JEMALLOC_N(arena_mapbitsp_write) #define arena_maxrun JEMALLOC_N(arena_maxrun) #define arena_maybe_purge JEMALLOC_N(arena_maybe_purge) #define arena_metadata_allocated_add JEMALLOC_N(arena_metadata_allocated_add) #define arena_metadata_allocated_get JEMALLOC_N(arena_metadata_allocated_get) #define arena_metadata_allocated_sub JEMALLOC_N(arena_metadata_allocated_sub) #define arena_migrate JEMALLOC_N(arena_migrate) -#define arena_miscelm_get JEMALLOC_N(arena_miscelm_get) 
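/*
 * Editorial sketch, not part of the upstream diff: private_namespace.h routes
 * every internal symbol through JEMALLOC_N so that a statically linked
 * jemalloc does not expose unprefixed names.  Assuming the conventional
 * expansion (the actual prefix is chosen at configure time and may differ in
 * the FreeBSD build):
 *
 *     #define JEMALLOC_N(n)  je_##n
 *
 * an entry such as the arena_miscelm_get_const mapping added just below makes
 * every source-level reference to arena_miscelm_get_const resolve to the
 * linker symbol je_arena_miscelm_get_const.  The new *_get_const/*_get_mutable
 * pairs that replace single accessors in this revision presumably exist to
 * give read-only callers a const-correct variant.
 */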
+#define arena_miscelm_get_const JEMALLOC_N(arena_miscelm_get_const) +#define arena_miscelm_get_mutable JEMALLOC_N(arena_miscelm_get_mutable) #define arena_miscelm_to_pageind JEMALLOC_N(arena_miscelm_to_pageind) #define arena_miscelm_to_rpages JEMALLOC_N(arena_miscelm_to_rpages) #define arena_new JEMALLOC_N(arena_new) #define arena_node_alloc JEMALLOC_N(arena_node_alloc) #define arena_node_dalloc JEMALLOC_N(arena_node_dalloc) #define arena_nthreads_dec JEMALLOC_N(arena_nthreads_dec) #define arena_nthreads_get JEMALLOC_N(arena_nthreads_get) #define arena_nthreads_inc JEMALLOC_N(arena_nthreads_inc) #define arena_palloc JEMALLOC_N(arena_palloc) #define arena_postfork_child JEMALLOC_N(arena_postfork_child) #define arena_postfork_parent JEMALLOC_N(arena_postfork_parent) -#define arena_prefork JEMALLOC_N(arena_prefork) +#define arena_prefork0 JEMALLOC_N(arena_prefork0) +#define arena_prefork1 JEMALLOC_N(arena_prefork1) +#define arena_prefork2 JEMALLOC_N(arena_prefork2) +#define arena_prefork3 JEMALLOC_N(arena_prefork3) #define arena_prof_accum JEMALLOC_N(arena_prof_accum) #define arena_prof_accum_impl JEMALLOC_N(arena_prof_accum_impl) #define arena_prof_accum_locked JEMALLOC_N(arena_prof_accum_locked) #define arena_prof_promoted JEMALLOC_N(arena_prof_promoted) #define arena_prof_tctx_get JEMALLOC_N(arena_prof_tctx_get) #define arena_prof_tctx_reset JEMALLOC_N(arena_prof_tctx_reset) #define arena_prof_tctx_set JEMALLOC_N(arena_prof_tctx_set) #define arena_ptr_small_binind_get JEMALLOC_N(arena_ptr_small_binind_get) #define arena_purge JEMALLOC_N(arena_purge) #define arena_quarantine_junk_small JEMALLOC_N(arena_quarantine_junk_small) #define arena_ralloc JEMALLOC_N(arena_ralloc) #define arena_ralloc_junk_large JEMALLOC_N(arena_ralloc_junk_large) #define arena_ralloc_no_move JEMALLOC_N(arena_ralloc_no_move) #define arena_rd_to_miscelm JEMALLOC_N(arena_rd_to_miscelm) #define arena_redzone_corruption JEMALLOC_N(arena_redzone_corruption) +#define arena_reset JEMALLOC_N(arena_reset) #define arena_run_regind JEMALLOC_N(arena_run_regind) #define arena_run_to_miscelm JEMALLOC_N(arena_run_to_miscelm) #define arena_salloc JEMALLOC_N(arena_salloc) #define arena_sdalloc JEMALLOC_N(arena_sdalloc) #define arena_stats_merge JEMALLOC_N(arena_stats_merge) #define arena_tcache_fill_small JEMALLOC_N(arena_tcache_fill_small) #define arena_tdata_get JEMALLOC_N(arena_tdata_get) #define arena_tdata_get_hard JEMALLOC_N(arena_tdata_get_hard) #define arenas JEMALLOC_N(arenas) #define arenas_tdata_bypass_cleanup JEMALLOC_N(arenas_tdata_bypass_cleanup) #define arenas_tdata_cleanup JEMALLOC_N(arenas_tdata_cleanup) #define atomic_add_p JEMALLOC_N(atomic_add_p) #define atomic_add_u JEMALLOC_N(atomic_add_u) #define atomic_add_uint32 JEMALLOC_N(atomic_add_uint32) #define atomic_add_uint64 JEMALLOC_N(atomic_add_uint64) #define atomic_add_z JEMALLOC_N(atomic_add_z) #define atomic_cas_p JEMALLOC_N(atomic_cas_p) #define atomic_cas_u JEMALLOC_N(atomic_cas_u) #define atomic_cas_uint32 JEMALLOC_N(atomic_cas_uint32) #define atomic_cas_uint64 JEMALLOC_N(atomic_cas_uint64) #define atomic_cas_z JEMALLOC_N(atomic_cas_z) #define atomic_sub_p JEMALLOC_N(atomic_sub_p) #define atomic_sub_u JEMALLOC_N(atomic_sub_u) #define atomic_sub_uint32 JEMALLOC_N(atomic_sub_uint32) #define atomic_sub_uint64 JEMALLOC_N(atomic_sub_uint64) #define atomic_sub_z JEMALLOC_N(atomic_sub_z) +#define atomic_write_p JEMALLOC_N(atomic_write_p) +#define atomic_write_u JEMALLOC_N(atomic_write_u) +#define atomic_write_uint32 JEMALLOC_N(atomic_write_uint32) +#define 
atomic_write_uint64 JEMALLOC_N(atomic_write_uint64) +#define atomic_write_z JEMALLOC_N(atomic_write_z) #define base_alloc JEMALLOC_N(base_alloc) #define base_boot JEMALLOC_N(base_boot) #define base_postfork_child JEMALLOC_N(base_postfork_child) #define base_postfork_parent JEMALLOC_N(base_postfork_parent) #define base_prefork JEMALLOC_N(base_prefork) #define base_stats_get JEMALLOC_N(base_stats_get) #define bitmap_full JEMALLOC_N(bitmap_full) #define bitmap_get JEMALLOC_N(bitmap_get) #define bitmap_info_init JEMALLOC_N(bitmap_info_init) #define bitmap_init JEMALLOC_N(bitmap_init) #define bitmap_set JEMALLOC_N(bitmap_set) #define bitmap_sfu JEMALLOC_N(bitmap_sfu) #define bitmap_size JEMALLOC_N(bitmap_size) #define bitmap_unset JEMALLOC_N(bitmap_unset) #define bootstrap_calloc JEMALLOC_N(bootstrap_calloc) #define bootstrap_free JEMALLOC_N(bootstrap_free) #define bootstrap_malloc JEMALLOC_N(bootstrap_malloc) #define bt_init JEMALLOC_N(bt_init) #define buferror JEMALLOC_N(buferror) #define chunk_alloc_base JEMALLOC_N(chunk_alloc_base) #define chunk_alloc_cache JEMALLOC_N(chunk_alloc_cache) #define chunk_alloc_dss JEMALLOC_N(chunk_alloc_dss) #define chunk_alloc_mmap JEMALLOC_N(chunk_alloc_mmap) #define chunk_alloc_wrapper JEMALLOC_N(chunk_alloc_wrapper) #define chunk_boot JEMALLOC_N(chunk_boot) -#define chunk_dalloc_arena JEMALLOC_N(chunk_dalloc_arena) #define chunk_dalloc_cache JEMALLOC_N(chunk_dalloc_cache) #define chunk_dalloc_mmap JEMALLOC_N(chunk_dalloc_mmap) #define chunk_dalloc_wrapper JEMALLOC_N(chunk_dalloc_wrapper) #define chunk_deregister JEMALLOC_N(chunk_deregister) #define chunk_dss_boot JEMALLOC_N(chunk_dss_boot) #define chunk_dss_postfork_child JEMALLOC_N(chunk_dss_postfork_child) #define chunk_dss_postfork_parent JEMALLOC_N(chunk_dss_postfork_parent) #define chunk_dss_prec_get JEMALLOC_N(chunk_dss_prec_get) #define chunk_dss_prec_set JEMALLOC_N(chunk_dss_prec_set) #define chunk_dss_prefork JEMALLOC_N(chunk_dss_prefork) #define chunk_hooks_default JEMALLOC_N(chunk_hooks_default) #define chunk_hooks_get JEMALLOC_N(chunk_hooks_get) #define chunk_hooks_set JEMALLOC_N(chunk_hooks_set) #define chunk_in_dss JEMALLOC_N(chunk_in_dss) #define chunk_lookup JEMALLOC_N(chunk_lookup) #define chunk_npages JEMALLOC_N(chunk_npages) #define chunk_postfork_child JEMALLOC_N(chunk_postfork_child) #define chunk_postfork_parent JEMALLOC_N(chunk_postfork_parent) #define chunk_prefork JEMALLOC_N(chunk_prefork) -#define chunk_purge_arena JEMALLOC_N(chunk_purge_arena) #define chunk_purge_wrapper JEMALLOC_N(chunk_purge_wrapper) #define chunk_register JEMALLOC_N(chunk_register) #define chunks_rtree JEMALLOC_N(chunks_rtree) #define chunksize JEMALLOC_N(chunksize) #define chunksize_mask JEMALLOC_N(chunksize_mask) #define ckh_count JEMALLOC_N(ckh_count) #define ckh_delete JEMALLOC_N(ckh_delete) #define ckh_insert JEMALLOC_N(ckh_insert) #define ckh_iter JEMALLOC_N(ckh_iter) #define ckh_new JEMALLOC_N(ckh_new) #define ckh_pointer_hash JEMALLOC_N(ckh_pointer_hash) #define ckh_pointer_keycomp JEMALLOC_N(ckh_pointer_keycomp) #define ckh_remove JEMALLOC_N(ckh_remove) #define ckh_search JEMALLOC_N(ckh_search) #define ckh_string_hash JEMALLOC_N(ckh_string_hash) #define ckh_string_keycomp JEMALLOC_N(ckh_string_keycomp) #define ctl_boot JEMALLOC_N(ctl_boot) #define ctl_bymib JEMALLOC_N(ctl_bymib) #define ctl_byname JEMALLOC_N(ctl_byname) #define ctl_nametomib JEMALLOC_N(ctl_nametomib) #define ctl_postfork_child JEMALLOC_N(ctl_postfork_child) #define ctl_postfork_parent JEMALLOC_N(ctl_postfork_parent) #define ctl_prefork 
JEMALLOC_N(ctl_prefork) #define decay_ticker_get JEMALLOC_N(decay_ticker_get) #define dss_prec_names JEMALLOC_N(dss_prec_names) #define extent_node_achunk_get JEMALLOC_N(extent_node_achunk_get) #define extent_node_achunk_set JEMALLOC_N(extent_node_achunk_set) #define extent_node_addr_get JEMALLOC_N(extent_node_addr_get) #define extent_node_addr_set JEMALLOC_N(extent_node_addr_set) #define extent_node_arena_get JEMALLOC_N(extent_node_arena_get) #define extent_node_arena_set JEMALLOC_N(extent_node_arena_set) +#define extent_node_committed_get JEMALLOC_N(extent_node_committed_get) +#define extent_node_committed_set JEMALLOC_N(extent_node_committed_set) #define extent_node_dirty_insert JEMALLOC_N(extent_node_dirty_insert) #define extent_node_dirty_linkage_init JEMALLOC_N(extent_node_dirty_linkage_init) #define extent_node_dirty_remove JEMALLOC_N(extent_node_dirty_remove) #define extent_node_init JEMALLOC_N(extent_node_init) #define extent_node_prof_tctx_get JEMALLOC_N(extent_node_prof_tctx_get) #define extent_node_prof_tctx_set JEMALLOC_N(extent_node_prof_tctx_set) #define extent_node_size_get JEMALLOC_N(extent_node_size_get) #define extent_node_size_set JEMALLOC_N(extent_node_size_set) #define extent_node_zeroed_get JEMALLOC_N(extent_node_zeroed_get) #define extent_node_zeroed_set JEMALLOC_N(extent_node_zeroed_set) +#define extent_tree_ad_destroy JEMALLOC_N(extent_tree_ad_destroy) +#define extent_tree_ad_destroy_recurse JEMALLOC_N(extent_tree_ad_destroy_recurse) #define extent_tree_ad_empty JEMALLOC_N(extent_tree_ad_empty) #define extent_tree_ad_first JEMALLOC_N(extent_tree_ad_first) #define extent_tree_ad_insert JEMALLOC_N(extent_tree_ad_insert) #define extent_tree_ad_iter JEMALLOC_N(extent_tree_ad_iter) #define extent_tree_ad_iter_recurse JEMALLOC_N(extent_tree_ad_iter_recurse) #define extent_tree_ad_iter_start JEMALLOC_N(extent_tree_ad_iter_start) #define extent_tree_ad_last JEMALLOC_N(extent_tree_ad_last) #define extent_tree_ad_new JEMALLOC_N(extent_tree_ad_new) #define extent_tree_ad_next JEMALLOC_N(extent_tree_ad_next) #define extent_tree_ad_nsearch JEMALLOC_N(extent_tree_ad_nsearch) #define extent_tree_ad_prev JEMALLOC_N(extent_tree_ad_prev) #define extent_tree_ad_psearch JEMALLOC_N(extent_tree_ad_psearch) #define extent_tree_ad_remove JEMALLOC_N(extent_tree_ad_remove) #define extent_tree_ad_reverse_iter JEMALLOC_N(extent_tree_ad_reverse_iter) #define extent_tree_ad_reverse_iter_recurse JEMALLOC_N(extent_tree_ad_reverse_iter_recurse) #define extent_tree_ad_reverse_iter_start JEMALLOC_N(extent_tree_ad_reverse_iter_start) #define extent_tree_ad_search JEMALLOC_N(extent_tree_ad_search) +#define extent_tree_szad_destroy JEMALLOC_N(extent_tree_szad_destroy) +#define extent_tree_szad_destroy_recurse JEMALLOC_N(extent_tree_szad_destroy_recurse) #define extent_tree_szad_empty JEMALLOC_N(extent_tree_szad_empty) #define extent_tree_szad_first JEMALLOC_N(extent_tree_szad_first) #define extent_tree_szad_insert JEMALLOC_N(extent_tree_szad_insert) #define extent_tree_szad_iter JEMALLOC_N(extent_tree_szad_iter) #define extent_tree_szad_iter_recurse JEMALLOC_N(extent_tree_szad_iter_recurse) #define extent_tree_szad_iter_start JEMALLOC_N(extent_tree_szad_iter_start) #define extent_tree_szad_last JEMALLOC_N(extent_tree_szad_last) #define extent_tree_szad_new JEMALLOC_N(extent_tree_szad_new) #define extent_tree_szad_next JEMALLOC_N(extent_tree_szad_next) #define extent_tree_szad_nsearch JEMALLOC_N(extent_tree_szad_nsearch) #define extent_tree_szad_prev JEMALLOC_N(extent_tree_szad_prev) #define 
extent_tree_szad_psearch JEMALLOC_N(extent_tree_szad_psearch) #define extent_tree_szad_remove JEMALLOC_N(extent_tree_szad_remove) #define extent_tree_szad_reverse_iter JEMALLOC_N(extent_tree_szad_reverse_iter) #define extent_tree_szad_reverse_iter_recurse JEMALLOC_N(extent_tree_szad_reverse_iter_recurse) #define extent_tree_szad_reverse_iter_start JEMALLOC_N(extent_tree_szad_reverse_iter_start) #define extent_tree_szad_search JEMALLOC_N(extent_tree_szad_search) #define ffs_llu JEMALLOC_N(ffs_llu) #define ffs_lu JEMALLOC_N(ffs_lu) #define ffs_u JEMALLOC_N(ffs_u) #define ffs_u32 JEMALLOC_N(ffs_u32) #define ffs_u64 JEMALLOC_N(ffs_u64) #define ffs_zu JEMALLOC_N(ffs_zu) #define get_errno JEMALLOC_N(get_errno) #define hash JEMALLOC_N(hash) #define hash_fmix_32 JEMALLOC_N(hash_fmix_32) #define hash_fmix_64 JEMALLOC_N(hash_fmix_64) #define hash_get_block_32 JEMALLOC_N(hash_get_block_32) #define hash_get_block_64 JEMALLOC_N(hash_get_block_64) #define hash_rotl_32 JEMALLOC_N(hash_rotl_32) #define hash_rotl_64 JEMALLOC_N(hash_rotl_64) #define hash_x64_128 JEMALLOC_N(hash_x64_128) #define hash_x86_128 JEMALLOC_N(hash_x86_128) #define hash_x86_32 JEMALLOC_N(hash_x86_32) #define huge_aalloc JEMALLOC_N(huge_aalloc) #define huge_dalloc JEMALLOC_N(huge_dalloc) #define huge_dalloc_junk JEMALLOC_N(huge_dalloc_junk) #define huge_malloc JEMALLOC_N(huge_malloc) #define huge_palloc JEMALLOC_N(huge_palloc) #define huge_prof_tctx_get JEMALLOC_N(huge_prof_tctx_get) #define huge_prof_tctx_reset JEMALLOC_N(huge_prof_tctx_reset) #define huge_prof_tctx_set JEMALLOC_N(huge_prof_tctx_set) #define huge_ralloc JEMALLOC_N(huge_ralloc) #define huge_ralloc_no_move JEMALLOC_N(huge_ralloc_no_move) #define huge_salloc JEMALLOC_N(huge_salloc) #define iaalloc JEMALLOC_N(iaalloc) +#define ialloc JEMALLOC_N(ialloc) #define iallocztm JEMALLOC_N(iallocztm) -#define icalloc JEMALLOC_N(icalloc) -#define icalloct JEMALLOC_N(icalloct) +#define iarena_cleanup JEMALLOC_N(iarena_cleanup) #define idalloc JEMALLOC_N(idalloc) -#define idalloct JEMALLOC_N(idalloct) #define idalloctm JEMALLOC_N(idalloctm) -#define imalloc JEMALLOC_N(imalloc) -#define imalloct JEMALLOC_N(imalloct) #define in_valgrind JEMALLOC_N(in_valgrind) #define index2size JEMALLOC_N(index2size) #define index2size_compute JEMALLOC_N(index2size_compute) #define index2size_lookup JEMALLOC_N(index2size_lookup) #define index2size_tab JEMALLOC_N(index2size_tab) #define ipalloc JEMALLOC_N(ipalloc) #define ipalloct JEMALLOC_N(ipalloct) #define ipallocztm JEMALLOC_N(ipallocztm) #define iqalloc JEMALLOC_N(iqalloc) #define iralloc JEMALLOC_N(iralloc) #define iralloct JEMALLOC_N(iralloct) #define iralloct_realign JEMALLOC_N(iralloct_realign) #define isalloc JEMALLOC_N(isalloc) #define isdalloct JEMALLOC_N(isdalloct) #define isqalloc JEMALLOC_N(isqalloc) #define ivsalloc JEMALLOC_N(ivsalloc) #define ixalloc JEMALLOC_N(ixalloc) #define jemalloc_postfork_child JEMALLOC_N(jemalloc_postfork_child) #define jemalloc_postfork_parent JEMALLOC_N(jemalloc_postfork_parent) #define jemalloc_prefork JEMALLOC_N(jemalloc_prefork) #define large_maxclass JEMALLOC_N(large_maxclass) #define lg_floor JEMALLOC_N(lg_floor) +#define lg_prof_sample JEMALLOC_N(lg_prof_sample) #define malloc_cprintf JEMALLOC_N(malloc_cprintf) +#define malloc_mutex_assert_not_owner JEMALLOC_N(malloc_mutex_assert_not_owner) +#define malloc_mutex_assert_owner JEMALLOC_N(malloc_mutex_assert_owner) +#define malloc_mutex_boot JEMALLOC_N(malloc_mutex_boot) #define malloc_mutex_init JEMALLOC_N(malloc_mutex_init) #define malloc_mutex_lock 
JEMALLOC_N(malloc_mutex_lock) #define malloc_mutex_postfork_child JEMALLOC_N(malloc_mutex_postfork_child) #define malloc_mutex_postfork_parent JEMALLOC_N(malloc_mutex_postfork_parent) #define malloc_mutex_prefork JEMALLOC_N(malloc_mutex_prefork) #define malloc_mutex_unlock JEMALLOC_N(malloc_mutex_unlock) #define malloc_printf JEMALLOC_N(malloc_printf) #define malloc_snprintf JEMALLOC_N(malloc_snprintf) #define malloc_strtoumax JEMALLOC_N(malloc_strtoumax) #define malloc_tsd_boot0 JEMALLOC_N(malloc_tsd_boot0) #define malloc_tsd_boot1 JEMALLOC_N(malloc_tsd_boot1) #define malloc_tsd_cleanup_register JEMALLOC_N(malloc_tsd_cleanup_register) #define malloc_tsd_dalloc JEMALLOC_N(malloc_tsd_dalloc) #define malloc_tsd_malloc JEMALLOC_N(malloc_tsd_malloc) #define malloc_tsd_no_cleanup JEMALLOC_N(malloc_tsd_no_cleanup) #define malloc_vcprintf JEMALLOC_N(malloc_vcprintf) #define malloc_vsnprintf JEMALLOC_N(malloc_vsnprintf) #define malloc_write JEMALLOC_N(malloc_write) #define map_bias JEMALLOC_N(map_bias) #define map_misc_offset JEMALLOC_N(map_misc_offset) #define mb_write JEMALLOC_N(mb_write) -#define mutex_boot JEMALLOC_N(mutex_boot) +#define narenas_auto JEMALLOC_N(narenas_auto) #define narenas_tdata_cleanup JEMALLOC_N(narenas_tdata_cleanup) #define narenas_total_get JEMALLOC_N(narenas_total_get) #define ncpus JEMALLOC_N(ncpus) #define nhbins JEMALLOC_N(nhbins) +#define nhclasses JEMALLOC_N(nhclasses) +#define nlclasses JEMALLOC_N(nlclasses) #define nstime_add JEMALLOC_N(nstime_add) #define nstime_compare JEMALLOC_N(nstime_compare) #define nstime_copy JEMALLOC_N(nstime_copy) #define nstime_divide JEMALLOC_N(nstime_divide) #define nstime_idivide JEMALLOC_N(nstime_idivide) #define nstime_imultiply JEMALLOC_N(nstime_imultiply) #define nstime_init JEMALLOC_N(nstime_init) #define nstime_init2 JEMALLOC_N(nstime_init2) #define nstime_ns JEMALLOC_N(nstime_ns) #define nstime_nsec JEMALLOC_N(nstime_nsec) #define nstime_sec JEMALLOC_N(nstime_sec) #define nstime_subtract JEMALLOC_N(nstime_subtract) #define nstime_update JEMALLOC_N(nstime_update) #define opt_abort JEMALLOC_N(opt_abort) #define opt_decay_time JEMALLOC_N(opt_decay_time) #define opt_dss JEMALLOC_N(opt_dss) #define opt_junk JEMALLOC_N(opt_junk) #define opt_junk_alloc JEMALLOC_N(opt_junk_alloc) #define opt_junk_free JEMALLOC_N(opt_junk_free) #define opt_lg_chunk JEMALLOC_N(opt_lg_chunk) #define opt_lg_dirty_mult JEMALLOC_N(opt_lg_dirty_mult) #define opt_lg_prof_interval JEMALLOC_N(opt_lg_prof_interval) #define opt_lg_prof_sample JEMALLOC_N(opt_lg_prof_sample) #define opt_lg_tcache_max JEMALLOC_N(opt_lg_tcache_max) #define opt_narenas JEMALLOC_N(opt_narenas) #define opt_prof JEMALLOC_N(opt_prof) #define opt_prof_accum JEMALLOC_N(opt_prof_accum) #define opt_prof_active JEMALLOC_N(opt_prof_active) #define opt_prof_final JEMALLOC_N(opt_prof_final) #define opt_prof_gdump JEMALLOC_N(opt_prof_gdump) #define opt_prof_leak JEMALLOC_N(opt_prof_leak) #define opt_prof_prefix JEMALLOC_N(opt_prof_prefix) #define opt_prof_thread_active_init JEMALLOC_N(opt_prof_thread_active_init) #define opt_purge JEMALLOC_N(opt_purge) #define opt_quarantine JEMALLOC_N(opt_quarantine) #define opt_redzone JEMALLOC_N(opt_redzone) #define opt_stats_print JEMALLOC_N(opt_stats_print) #define opt_tcache JEMALLOC_N(opt_tcache) #define opt_utrace JEMALLOC_N(opt_utrace) #define opt_xmalloc JEMALLOC_N(opt_xmalloc) #define opt_zero JEMALLOC_N(opt_zero) #define p2rz JEMALLOC_N(p2rz) +#define pages_boot JEMALLOC_N(pages_boot) #define pages_commit JEMALLOC_N(pages_commit) #define 
pages_decommit JEMALLOC_N(pages_decommit) #define pages_map JEMALLOC_N(pages_map) #define pages_purge JEMALLOC_N(pages_purge) #define pages_trim JEMALLOC_N(pages_trim) #define pages_unmap JEMALLOC_N(pages_unmap) #define pow2_ceil_u32 JEMALLOC_N(pow2_ceil_u32) #define pow2_ceil_u64 JEMALLOC_N(pow2_ceil_u64) #define pow2_ceil_zu JEMALLOC_N(pow2_ceil_zu) #define prng_lg_range JEMALLOC_N(prng_lg_range) #define prng_range JEMALLOC_N(prng_range) +#define prof_active JEMALLOC_N(prof_active) #define prof_active_get JEMALLOC_N(prof_active_get) #define prof_active_get_unlocked JEMALLOC_N(prof_active_get_unlocked) #define prof_active_set JEMALLOC_N(prof_active_set) #define prof_alloc_prep JEMALLOC_N(prof_alloc_prep) #define prof_alloc_rollback JEMALLOC_N(prof_alloc_rollback) #define prof_backtrace JEMALLOC_N(prof_backtrace) #define prof_boot0 JEMALLOC_N(prof_boot0) #define prof_boot1 JEMALLOC_N(prof_boot1) #define prof_boot2 JEMALLOC_N(prof_boot2) +#define prof_bt_count JEMALLOC_N(prof_bt_count) #define prof_dump_header JEMALLOC_N(prof_dump_header) #define prof_dump_open JEMALLOC_N(prof_dump_open) #define prof_free JEMALLOC_N(prof_free) #define prof_free_sampled_object JEMALLOC_N(prof_free_sampled_object) #define prof_gdump JEMALLOC_N(prof_gdump) #define prof_gdump_get JEMALLOC_N(prof_gdump_get) #define prof_gdump_get_unlocked JEMALLOC_N(prof_gdump_get_unlocked) #define prof_gdump_set JEMALLOC_N(prof_gdump_set) #define prof_gdump_val JEMALLOC_N(prof_gdump_val) #define prof_idump JEMALLOC_N(prof_idump) #define prof_interval JEMALLOC_N(prof_interval) #define prof_lookup JEMALLOC_N(prof_lookup) #define prof_malloc JEMALLOC_N(prof_malloc) #define prof_malloc_sample_object JEMALLOC_N(prof_malloc_sample_object) #define prof_mdump JEMALLOC_N(prof_mdump) #define prof_postfork_child JEMALLOC_N(prof_postfork_child) #define prof_postfork_parent JEMALLOC_N(prof_postfork_parent) -#define prof_prefork JEMALLOC_N(prof_prefork) +#define prof_prefork0 JEMALLOC_N(prof_prefork0) +#define prof_prefork1 JEMALLOC_N(prof_prefork1) #define prof_realloc JEMALLOC_N(prof_realloc) #define prof_reset JEMALLOC_N(prof_reset) #define prof_sample_accum_update JEMALLOC_N(prof_sample_accum_update) #define prof_sample_threshold_update JEMALLOC_N(prof_sample_threshold_update) #define prof_tctx_get JEMALLOC_N(prof_tctx_get) #define prof_tctx_reset JEMALLOC_N(prof_tctx_reset) #define prof_tctx_set JEMALLOC_N(prof_tctx_set) #define prof_tdata_cleanup JEMALLOC_N(prof_tdata_cleanup) +#define prof_tdata_count JEMALLOC_N(prof_tdata_count) #define prof_tdata_get JEMALLOC_N(prof_tdata_get) #define prof_tdata_init JEMALLOC_N(prof_tdata_init) #define prof_tdata_reinit JEMALLOC_N(prof_tdata_reinit) #define prof_thread_active_get JEMALLOC_N(prof_thread_active_get) #define prof_thread_active_init_get JEMALLOC_N(prof_thread_active_init_get) #define prof_thread_active_init_set JEMALLOC_N(prof_thread_active_init_set) #define prof_thread_active_set JEMALLOC_N(prof_thread_active_set) #define prof_thread_name_get JEMALLOC_N(prof_thread_name_get) #define prof_thread_name_set JEMALLOC_N(prof_thread_name_set) #define purge_mode_names JEMALLOC_N(purge_mode_names) #define quarantine JEMALLOC_N(quarantine) #define quarantine_alloc_hook JEMALLOC_N(quarantine_alloc_hook) #define quarantine_alloc_hook_work JEMALLOC_N(quarantine_alloc_hook_work) #define quarantine_cleanup JEMALLOC_N(quarantine_cleanup) #define register_zone JEMALLOC_N(register_zone) #define rtree_child_read JEMALLOC_N(rtree_child_read) #define rtree_child_read_hard JEMALLOC_N(rtree_child_read_hard) 
#define rtree_child_tryread JEMALLOC_N(rtree_child_tryread) #define rtree_delete JEMALLOC_N(rtree_delete) #define rtree_get JEMALLOC_N(rtree_get) #define rtree_new JEMALLOC_N(rtree_new) #define rtree_node_valid JEMALLOC_N(rtree_node_valid) #define rtree_set JEMALLOC_N(rtree_set) #define rtree_start_level JEMALLOC_N(rtree_start_level) #define rtree_subkey JEMALLOC_N(rtree_subkey) #define rtree_subtree_read JEMALLOC_N(rtree_subtree_read) #define rtree_subtree_read_hard JEMALLOC_N(rtree_subtree_read_hard) #define rtree_subtree_tryread JEMALLOC_N(rtree_subtree_tryread) #define rtree_val_read JEMALLOC_N(rtree_val_read) #define rtree_val_write JEMALLOC_N(rtree_val_write) #define run_quantize_ceil JEMALLOC_N(run_quantize_ceil) #define run_quantize_floor JEMALLOC_N(run_quantize_floor) #define run_quantize_max JEMALLOC_N(run_quantize_max) #define s2u JEMALLOC_N(s2u) #define s2u_compute JEMALLOC_N(s2u_compute) #define s2u_lookup JEMALLOC_N(s2u_lookup) #define sa2u JEMALLOC_N(sa2u) #define set_errno JEMALLOC_N(set_errno) #define size2index JEMALLOC_N(size2index) #define size2index_compute JEMALLOC_N(size2index_compute) #define size2index_lookup JEMALLOC_N(size2index_lookup) #define size2index_tab JEMALLOC_N(size2index_tab) #define stats_cactive JEMALLOC_N(stats_cactive) #define stats_cactive_add JEMALLOC_N(stats_cactive_add) #define stats_cactive_get JEMALLOC_N(stats_cactive_get) #define stats_cactive_sub JEMALLOC_N(stats_cactive_sub) #define stats_print JEMALLOC_N(stats_print) #define tcache_alloc_easy JEMALLOC_N(tcache_alloc_easy) #define tcache_alloc_large JEMALLOC_N(tcache_alloc_large) #define tcache_alloc_small JEMALLOC_N(tcache_alloc_small) #define tcache_alloc_small_hard JEMALLOC_N(tcache_alloc_small_hard) -#define tcache_arena_associate JEMALLOC_N(tcache_arena_associate) -#define tcache_arena_dissociate JEMALLOC_N(tcache_arena_dissociate) #define tcache_arena_reassociate JEMALLOC_N(tcache_arena_reassociate) #define tcache_bin_flush_large JEMALLOC_N(tcache_bin_flush_large) #define tcache_bin_flush_small JEMALLOC_N(tcache_bin_flush_small) #define tcache_bin_info JEMALLOC_N(tcache_bin_info) #define tcache_boot JEMALLOC_N(tcache_boot) #define tcache_cleanup JEMALLOC_N(tcache_cleanup) #define tcache_create JEMALLOC_N(tcache_create) #define tcache_dalloc_large JEMALLOC_N(tcache_dalloc_large) #define tcache_dalloc_small JEMALLOC_N(tcache_dalloc_small) #define tcache_enabled_cleanup JEMALLOC_N(tcache_enabled_cleanup) #define tcache_enabled_get JEMALLOC_N(tcache_enabled_get) #define tcache_enabled_set JEMALLOC_N(tcache_enabled_set) #define tcache_event JEMALLOC_N(tcache_event) #define tcache_event_hard JEMALLOC_N(tcache_event_hard) #define tcache_flush JEMALLOC_N(tcache_flush) #define tcache_get JEMALLOC_N(tcache_get) #define tcache_get_hard JEMALLOC_N(tcache_get_hard) #define tcache_maxclass JEMALLOC_N(tcache_maxclass) #define tcache_salloc JEMALLOC_N(tcache_salloc) #define tcache_stats_merge JEMALLOC_N(tcache_stats_merge) #define tcaches JEMALLOC_N(tcaches) #define tcaches_create JEMALLOC_N(tcaches_create) #define tcaches_destroy JEMALLOC_N(tcaches_destroy) #define tcaches_flush JEMALLOC_N(tcaches_flush) #define tcaches_get JEMALLOC_N(tcaches_get) #define thread_allocated_cleanup JEMALLOC_N(thread_allocated_cleanup) #define thread_deallocated_cleanup JEMALLOC_N(thread_deallocated_cleanup) #define ticker_copy JEMALLOC_N(ticker_copy) #define ticker_init JEMALLOC_N(ticker_init) #define ticker_read JEMALLOC_N(ticker_read) #define ticker_tick JEMALLOC_N(ticker_tick) #define ticker_ticks 
JEMALLOC_N(ticker_ticks) #define tsd_arena_get JEMALLOC_N(tsd_arena_get) #define tsd_arena_set JEMALLOC_N(tsd_arena_set) +#define tsd_arenap_get JEMALLOC_N(tsd_arenap_get) +#define tsd_arenas_tdata_bypass_get JEMALLOC_N(tsd_arenas_tdata_bypass_get) +#define tsd_arenas_tdata_bypass_set JEMALLOC_N(tsd_arenas_tdata_bypass_set) +#define tsd_arenas_tdata_bypassp_get JEMALLOC_N(tsd_arenas_tdata_bypassp_get) +#define tsd_arenas_tdata_get JEMALLOC_N(tsd_arenas_tdata_get) +#define tsd_arenas_tdata_set JEMALLOC_N(tsd_arenas_tdata_set) +#define tsd_arenas_tdatap_get JEMALLOC_N(tsd_arenas_tdatap_get) #define tsd_boot JEMALLOC_N(tsd_boot) #define tsd_boot0 JEMALLOC_N(tsd_boot0) #define tsd_boot1 JEMALLOC_N(tsd_boot1) #define tsd_booted JEMALLOC_N(tsd_booted) +#define tsd_booted_get JEMALLOC_N(tsd_booted_get) #define tsd_cleanup JEMALLOC_N(tsd_cleanup) #define tsd_cleanup_wrapper JEMALLOC_N(tsd_cleanup_wrapper) #define tsd_fetch JEMALLOC_N(tsd_fetch) #define tsd_get JEMALLOC_N(tsd_get) -#define tsd_wrapper_get JEMALLOC_N(tsd_wrapper_get) -#define tsd_wrapper_set JEMALLOC_N(tsd_wrapper_set) +#define tsd_iarena_get JEMALLOC_N(tsd_iarena_get) +#define tsd_iarena_set JEMALLOC_N(tsd_iarena_set) +#define tsd_iarenap_get JEMALLOC_N(tsd_iarenap_get) #define tsd_initialized JEMALLOC_N(tsd_initialized) #define tsd_init_check_recursion JEMALLOC_N(tsd_init_check_recursion) #define tsd_init_finish JEMALLOC_N(tsd_init_finish) #define tsd_init_head JEMALLOC_N(tsd_init_head) +#define tsd_narenas_tdata_get JEMALLOC_N(tsd_narenas_tdata_get) +#define tsd_narenas_tdata_set JEMALLOC_N(tsd_narenas_tdata_set) +#define tsd_narenas_tdatap_get JEMALLOC_N(tsd_narenas_tdatap_get) +#define tsd_wrapper_get JEMALLOC_N(tsd_wrapper_get) +#define tsd_wrapper_set JEMALLOC_N(tsd_wrapper_set) #define tsd_nominal JEMALLOC_N(tsd_nominal) #define tsd_prof_tdata_get JEMALLOC_N(tsd_prof_tdata_get) #define tsd_prof_tdata_set JEMALLOC_N(tsd_prof_tdata_set) +#define tsd_prof_tdatap_get JEMALLOC_N(tsd_prof_tdatap_get) #define tsd_quarantine_get JEMALLOC_N(tsd_quarantine_get) #define tsd_quarantine_set JEMALLOC_N(tsd_quarantine_set) +#define tsd_quarantinep_get JEMALLOC_N(tsd_quarantinep_get) #define tsd_set JEMALLOC_N(tsd_set) #define tsd_tcache_enabled_get JEMALLOC_N(tsd_tcache_enabled_get) #define tsd_tcache_enabled_set JEMALLOC_N(tsd_tcache_enabled_set) +#define tsd_tcache_enabledp_get JEMALLOC_N(tsd_tcache_enabledp_get) #define tsd_tcache_get JEMALLOC_N(tsd_tcache_get) #define tsd_tcache_set JEMALLOC_N(tsd_tcache_set) +#define tsd_tcachep_get JEMALLOC_N(tsd_tcachep_get) #define tsd_thread_allocated_get JEMALLOC_N(tsd_thread_allocated_get) #define tsd_thread_allocated_set JEMALLOC_N(tsd_thread_allocated_set) +#define tsd_thread_allocatedp_get JEMALLOC_N(tsd_thread_allocatedp_get) #define tsd_thread_deallocated_get JEMALLOC_N(tsd_thread_deallocated_get) #define tsd_thread_deallocated_set JEMALLOC_N(tsd_thread_deallocated_set) +#define tsd_thread_deallocatedp_get JEMALLOC_N(tsd_thread_deallocatedp_get) #define tsd_tls JEMALLOC_N(tsd_tls) #define tsd_tsd JEMALLOC_N(tsd_tsd) +#define tsd_tsdn JEMALLOC_N(tsd_tsdn) +#define tsd_witness_fork_get JEMALLOC_N(tsd_witness_fork_get) +#define tsd_witness_fork_set JEMALLOC_N(tsd_witness_fork_set) +#define tsd_witness_forkp_get JEMALLOC_N(tsd_witness_forkp_get) +#define tsd_witnesses_get JEMALLOC_N(tsd_witnesses_get) +#define tsd_witnesses_set JEMALLOC_N(tsd_witnesses_set) +#define tsd_witnessesp_get JEMALLOC_N(tsd_witnessesp_get) +#define tsdn_fetch JEMALLOC_N(tsdn_fetch) +#define tsdn_null 
JEMALLOC_N(tsdn_null) +#define tsdn_tsd JEMALLOC_N(tsdn_tsd) #define u2rz JEMALLOC_N(u2rz) #define valgrind_freelike_block JEMALLOC_N(valgrind_freelike_block) #define valgrind_make_mem_defined JEMALLOC_N(valgrind_make_mem_defined) #define valgrind_make_mem_noaccess JEMALLOC_N(valgrind_make_mem_noaccess) #define valgrind_make_mem_undefined JEMALLOC_N(valgrind_make_mem_undefined) +#define witness_assert_lockless JEMALLOC_N(witness_assert_lockless) +#define witness_assert_not_owner JEMALLOC_N(witness_assert_not_owner) +#define witness_assert_owner JEMALLOC_N(witness_assert_owner) +#define witness_fork_cleanup JEMALLOC_N(witness_fork_cleanup) +#define witness_init JEMALLOC_N(witness_init) +#define witness_lock JEMALLOC_N(witness_lock) +#define witness_lock_error JEMALLOC_N(witness_lock_error) +#define witness_lockless_error JEMALLOC_N(witness_lockless_error) +#define witness_not_owner_error JEMALLOC_N(witness_not_owner_error) +#define witness_owner_error JEMALLOC_N(witness_owner_error) +#define witness_postfork_child JEMALLOC_N(witness_postfork_child) +#define witness_postfork_parent JEMALLOC_N(witness_postfork_parent) +#define witness_prefork JEMALLOC_N(witness_prefork) +#define witness_unlock JEMALLOC_N(witness_unlock) +#define witnesses_cleanup JEMALLOC_N(witnesses_cleanup) Index: head/contrib/jemalloc/include/jemalloc/internal/prof.h =================================================================== --- head/contrib/jemalloc/include/jemalloc/internal/prof.h (revision 299586) +++ head/contrib/jemalloc/include/jemalloc/internal/prof.h (revision 299587) @@ -1,545 +1,546 @@ /******************************************************************************/ #ifdef JEMALLOC_H_TYPES typedef struct prof_bt_s prof_bt_t; typedef struct prof_cnt_s prof_cnt_t; typedef struct prof_tctx_s prof_tctx_t; typedef struct prof_gctx_s prof_gctx_t; typedef struct prof_tdata_s prof_tdata_t; /* Option defaults. */ #ifdef JEMALLOC_PROF # define PROF_PREFIX_DEFAULT "jeprof" #else # define PROF_PREFIX_DEFAULT "" #endif #define LG_PROF_SAMPLE_DEFAULT 19 #define LG_PROF_INTERVAL_DEFAULT -1 /* * Hard limit on stack backtrace depth. The version of prof_backtrace() that * is based on __builtin_return_address() necessarily has a hard-coded number * of backtrace frame handlers, and should be kept in sync with this setting. */ #define PROF_BT_MAX 128 /* Initial hash table size. */ #define PROF_CKH_MINITEMS 64 /* Size of memory buffer to use when writing dump files. */ #define PROF_DUMP_BUFSIZE 65536 /* Size of stack-allocated buffer used by prof_printf(). */ #define PROF_PRINTF_BUFSIZE 128 /* * Number of mutexes shared among all gctx's. No space is allocated for these * unless profiling is enabled, so it's okay to over-provision. */ #define PROF_NCTX_LOCKS 1024 /* * Number of mutexes shared among all tdata's. No space is allocated for these * unless profiling is enabled, so it's okay to over-provision. */ #define PROF_NTDATA_LOCKS 256 /* * prof_tdata pointers close to NULL are used to encode state information that * is used for cleaning up during thread shutdown. */ #define PROF_TDATA_STATE_REINCARNATED ((prof_tdata_t *)(uintptr_t)1) #define PROF_TDATA_STATE_PURGATORY ((prof_tdata_t *)(uintptr_t)2) #define PROF_TDATA_STATE_MAX PROF_TDATA_STATE_PURGATORY #endif /* JEMALLOC_H_TYPES */ /******************************************************************************/ #ifdef JEMALLOC_H_STRUCTS struct prof_bt_s { /* Backtrace, stored as len program counters. 
*/ void **vec; unsigned len; }; #ifdef JEMALLOC_PROF_LIBGCC /* Data structure passed to libgcc _Unwind_Backtrace() callback functions. */ typedef struct { prof_bt_t *bt; unsigned max; } prof_unwind_data_t; #endif struct prof_cnt_s { /* Profiling counters. */ uint64_t curobjs; uint64_t curbytes; uint64_t accumobjs; uint64_t accumbytes; }; typedef enum { prof_tctx_state_initializing, prof_tctx_state_nominal, prof_tctx_state_dumping, prof_tctx_state_purgatory /* Dumper must finish destroying. */ } prof_tctx_state_t; struct prof_tctx_s { /* Thread data for thread that performed the allocation. */ prof_tdata_t *tdata; /* * Copy of tdata->thr_{uid,discrim}, necessary because tdata may be * defunct during teardown. */ uint64_t thr_uid; uint64_t thr_discrim; /* Profiling counters, protected by tdata->lock. */ prof_cnt_t cnts; /* Associated global context. */ prof_gctx_t *gctx; /* * UID that distinguishes multiple tctx's created by the same thread, * but coexisting in gctx->tctxs. There are two ways that such * coexistence can occur: * - A dumper thread can cause a tctx to be retained in the purgatory * state. * - Although a single "producer" thread must create all tctx's which * share the same thr_uid, multiple "consumers" can each concurrently * execute portions of prof_tctx_destroy(). prof_tctx_destroy() only * gets called once each time cnts.cur{objs,bytes} drop to 0, but this * threshold can be hit again before the first consumer finishes * executing prof_tctx_destroy(). */ uint64_t tctx_uid; /* Linkage into gctx's tctxs. */ rb_node(prof_tctx_t) tctx_link; /* * True during prof_alloc_prep()..prof_malloc_sample_object(), prevents * sample vs destroy race. */ bool prepared; /* Current dump-related state, protected by gctx->lock. */ prof_tctx_state_t state; /* * Copy of cnts snapshotted during early dump phase, protected by * dump_mtx. */ prof_cnt_t dump_cnts; }; typedef rb_tree(prof_tctx_t) prof_tctx_tree_t; struct prof_gctx_s { /* Protects nlimbo, cnt_summed, and tctxs. */ malloc_mutex_t *lock; /* * Number of threads that currently cause this gctx to be in a state of * limbo due to one of: * - Initializing this gctx. * - Initializing per thread counters associated with this gctx. * - Preparing to destroy this gctx. * - Dumping a heap profile that includes this gctx. * nlimbo must be 1 (single destroyer) in order to safely destroy the * gctx. */ unsigned nlimbo; /* * Tree of profile counters, one for each thread that has allocated in * this context. */ prof_tctx_tree_t tctxs; /* Linkage for tree of contexts to be dumped. */ rb_node(prof_gctx_t) dump_link; /* Temporary storage for summation during dump. */ prof_cnt_t cnt_summed; /* Associated backtrace. */ prof_bt_t bt; /* Backtrace vector, variable size, referred to by bt. */ void *vec[1]; }; typedef rb_tree(prof_gctx_t) prof_gctx_tree_t; struct prof_tdata_s { malloc_mutex_t *lock; /* Monotonically increasing unique thread identifier. */ uint64_t thr_uid; /* * Monotonically increasing discriminator among tdata structures * associated with the same thr_uid. */ uint64_t thr_discrim; /* Included in heap profile dumps if non-NULL. */ char *thread_name; bool attached; bool expired; rb_node(prof_tdata_t) tdata_link; /* * Counter used to initialize prof_tctx_t's tctx_uid. No locking is * necessary when incrementing this field, because only one thread ever * does so. */ uint64_t tctx_uid_next; /* * Hash of (prof_bt_t *)-->(prof_tctx_t *). 
Each thread tracks * backtraces for which it has non-zero allocation/deallocation counters * associated with thread-specific prof_tctx_t objects. Other threads * may write to prof_tctx_t contents when freeing associated objects. */ ckh_t bt2tctx; /* Sampling state. */ uint64_t prng_state; uint64_t bytes_until_sample; /* State used to avoid dumping while operating on prof internals. */ bool enq; bool enq_idump; bool enq_gdump; /* * Set to true during an early dump phase for tdata's which are * currently being dumped. New threads' tdata's have this initialized * to false so that they aren't accidentally included in later dump * phases. */ bool dumping; /* * True if profiling is active for this tdata's thread * (thread.prof.active mallctl). */ bool active; /* Temporary storage for summation during dump. */ prof_cnt_t cnt_summed; /* Backtrace vector, used for calls to prof_backtrace(). */ void *vec[PROF_BT_MAX]; }; typedef rb_tree(prof_tdata_t) prof_tdata_tree_t; #endif /* JEMALLOC_H_STRUCTS */ /******************************************************************************/ #ifdef JEMALLOC_H_EXTERNS extern bool opt_prof; extern bool opt_prof_active; extern bool opt_prof_thread_active_init; extern size_t opt_lg_prof_sample; /* Mean bytes between samples. */ extern ssize_t opt_lg_prof_interval; /* lg(prof_interval). */ extern bool opt_prof_gdump; /* High-water memory dumping. */ extern bool opt_prof_final; /* Final profile dumping. */ extern bool opt_prof_leak; /* Dump leak summary at exit. */ extern bool opt_prof_accum; /* Report cumulative bytes. */ extern char opt_prof_prefix[ /* Minimize memory bloat for non-prof builds. */ #ifdef JEMALLOC_PROF PATH_MAX + #endif 1]; /* Accessed via prof_active_[gs]et{_unlocked,}(). */ extern bool prof_active; /* Accessed via prof_gdump_[gs]et{_unlocked,}(). */ extern bool prof_gdump_val; /* * Profile dump interval, measured in bytes allocated. Each arena triggers a * profile dump when it reaches this threshold. The effect is that the * interval between profile dumps averages prof_interval, though the actual * interval between dumps will tend to be sporadic, and the interval will be a * maximum of approximately (prof_interval * narenas). */ extern uint64_t prof_interval; /* * Initialized as opt_lg_prof_sample, and potentially modified during profiling * resets. 
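 *
 * (Editorial note, not part of the upstream diff: with the
 * LG_PROF_SAMPLE_DEFAULT of 19 defined earlier in this header, the mean
 * interval between sampled allocations works out to 2^19 bytes, i.e. 512 KiB
 * of allocation activity.)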
*/ extern size_t lg_prof_sample; void prof_alloc_rollback(tsd_t *tsd, prof_tctx_t *tctx, bool updated); -void prof_malloc_sample_object(const void *ptr, size_t usize, +void prof_malloc_sample_object(tsdn_t *tsdn, const void *ptr, size_t usize, prof_tctx_t *tctx); void prof_free_sampled_object(tsd_t *tsd, size_t usize, prof_tctx_t *tctx); void bt_init(prof_bt_t *bt, void **vec); void prof_backtrace(prof_bt_t *bt); prof_tctx_t *prof_lookup(tsd_t *tsd, prof_bt_t *bt); #ifdef JEMALLOC_JET size_t prof_tdata_count(void); size_t prof_bt_count(void); const prof_cnt_t *prof_cnt_all(void); typedef int (prof_dump_open_t)(bool, const char *); extern prof_dump_open_t *prof_dump_open; -typedef bool (prof_dump_header_t)(bool, const prof_cnt_t *); +typedef bool (prof_dump_header_t)(tsdn_t *, bool, const prof_cnt_t *); extern prof_dump_header_t *prof_dump_header; #endif -void prof_idump(void); -bool prof_mdump(const char *filename); -void prof_gdump(void); -prof_tdata_t *prof_tdata_init(tsd_t *tsd); +void prof_idump(tsdn_t *tsdn); +bool prof_mdump(tsd_t *tsd, const char *filename); +void prof_gdump(tsdn_t *tsdn); +prof_tdata_t *prof_tdata_init(tsdn_t *tsdn); prof_tdata_t *prof_tdata_reinit(tsd_t *tsd, prof_tdata_t *tdata); -void prof_reset(tsd_t *tsd, size_t lg_sample); +void prof_reset(tsdn_t *tsdn, size_t lg_sample); void prof_tdata_cleanup(tsd_t *tsd); -const char *prof_thread_name_get(void); -bool prof_active_get(void); -bool prof_active_set(bool active); +bool prof_active_get(tsdn_t *tsdn); +bool prof_active_set(tsdn_t *tsdn, bool active); +const char *prof_thread_name_get(tsd_t *tsd); int prof_thread_name_set(tsd_t *tsd, const char *thread_name); -bool prof_thread_active_get(void); -bool prof_thread_active_set(bool active); -bool prof_thread_active_init_get(void); -bool prof_thread_active_init_set(bool active_init); -bool prof_gdump_get(void); -bool prof_gdump_set(bool active); +bool prof_thread_active_get(tsd_t *tsd); +bool prof_thread_active_set(tsd_t *tsd, bool active); +bool prof_thread_active_init_get(tsdn_t *tsdn); +bool prof_thread_active_init_set(tsdn_t *tsdn, bool active_init); +bool prof_gdump_get(tsdn_t *tsdn); +bool prof_gdump_set(tsdn_t *tsdn, bool active); void prof_boot0(void); void prof_boot1(void); -bool prof_boot2(void); -void prof_prefork(void); -void prof_postfork_parent(void); -void prof_postfork_child(void); +bool prof_boot2(tsdn_t *tsdn); +void prof_prefork0(tsdn_t *tsdn); +void prof_prefork1(tsdn_t *tsdn); +void prof_postfork_parent(tsdn_t *tsdn); +void prof_postfork_child(tsdn_t *tsdn); void prof_sample_threshold_update(prof_tdata_t *tdata); #endif /* JEMALLOC_H_EXTERNS */ /******************************************************************************/ #ifdef JEMALLOC_H_INLINES #ifndef JEMALLOC_ENABLE_INLINE bool prof_active_get_unlocked(void); bool prof_gdump_get_unlocked(void); prof_tdata_t *prof_tdata_get(tsd_t *tsd, bool create); +prof_tctx_t *prof_tctx_get(tsdn_t *tsdn, const void *ptr); +void prof_tctx_set(tsdn_t *tsdn, const void *ptr, size_t usize, + prof_tctx_t *tctx); +void prof_tctx_reset(tsdn_t *tsdn, const void *ptr, size_t usize, + const void *old_ptr, prof_tctx_t *tctx); bool prof_sample_accum_update(tsd_t *tsd, size_t usize, bool commit, prof_tdata_t **tdata_out); prof_tctx_t *prof_alloc_prep(tsd_t *tsd, size_t usize, bool prof_active, bool update); -prof_tctx_t *prof_tctx_get(const void *ptr); -void prof_tctx_set(const void *ptr, size_t usize, prof_tctx_t *tctx); -void prof_tctx_reset(const void *ptr, size_t usize, const void *old_ptr, +void 
prof_malloc(tsdn_t *tsdn, const void *ptr, size_t usize, prof_tctx_t *tctx); -void prof_malloc_sample_object(const void *ptr, size_t usize, - prof_tctx_t *tctx); -void prof_malloc(const void *ptr, size_t usize, prof_tctx_t *tctx); void prof_realloc(tsd_t *tsd, const void *ptr, size_t usize, prof_tctx_t *tctx, bool prof_active, bool updated, const void *old_ptr, size_t old_usize, prof_tctx_t *old_tctx); void prof_free(tsd_t *tsd, const void *ptr, size_t usize); #endif #if (defined(JEMALLOC_ENABLE_INLINE) || defined(JEMALLOC_PROF_C_)) JEMALLOC_ALWAYS_INLINE bool prof_active_get_unlocked(void) { /* * Even if opt_prof is true, sampling can be temporarily disabled by * setting prof_active to false. No locking is used when reading * prof_active in the fast path, so there are no guarantees regarding * how long it will take for all threads to notice state changes. */ return (prof_active); } JEMALLOC_ALWAYS_INLINE bool prof_gdump_get_unlocked(void) { /* * No locking is used when reading prof_gdump_val in the fast path, so * there are no guarantees regarding how long it will take for all * threads to notice state changes. */ return (prof_gdump_val); } JEMALLOC_ALWAYS_INLINE prof_tdata_t * prof_tdata_get(tsd_t *tsd, bool create) { prof_tdata_t *tdata; cassert(config_prof); tdata = tsd_prof_tdata_get(tsd); if (create) { if (unlikely(tdata == NULL)) { if (tsd_nominal(tsd)) { - tdata = prof_tdata_init(tsd); + tdata = prof_tdata_init(tsd_tsdn(tsd)); tsd_prof_tdata_set(tsd, tdata); } } else if (unlikely(tdata->expired)) { tdata = prof_tdata_reinit(tsd, tdata); tsd_prof_tdata_set(tsd, tdata); } assert(tdata == NULL || tdata->attached); } return (tdata); } JEMALLOC_ALWAYS_INLINE prof_tctx_t * -prof_tctx_get(const void *ptr) +prof_tctx_get(tsdn_t *tsdn, const void *ptr) { cassert(config_prof); assert(ptr != NULL); - return (arena_prof_tctx_get(ptr)); + return (arena_prof_tctx_get(tsdn, ptr)); } JEMALLOC_ALWAYS_INLINE void -prof_tctx_set(const void *ptr, size_t usize, prof_tctx_t *tctx) +prof_tctx_set(tsdn_t *tsdn, const void *ptr, size_t usize, prof_tctx_t *tctx) { cassert(config_prof); assert(ptr != NULL); - arena_prof_tctx_set(ptr, usize, tctx); + arena_prof_tctx_set(tsdn, ptr, usize, tctx); } JEMALLOC_ALWAYS_INLINE void -prof_tctx_reset(const void *ptr, size_t usize, const void *old_ptr, +prof_tctx_reset(tsdn_t *tsdn, const void *ptr, size_t usize, const void *old_ptr, prof_tctx_t *old_tctx) { cassert(config_prof); assert(ptr != NULL); - arena_prof_tctx_reset(ptr, usize, old_ptr, old_tctx); + arena_prof_tctx_reset(tsdn, ptr, usize, old_ptr, old_tctx); } JEMALLOC_ALWAYS_INLINE bool prof_sample_accum_update(tsd_t *tsd, size_t usize, bool update, prof_tdata_t **tdata_out) { prof_tdata_t *tdata; cassert(config_prof); tdata = prof_tdata_get(tsd, true); if (unlikely((uintptr_t)tdata <= (uintptr_t)PROF_TDATA_STATE_MAX)) tdata = NULL; if (tdata_out != NULL) *tdata_out = tdata; if (unlikely(tdata == NULL)) return (true); if (likely(tdata->bytes_until_sample >= usize)) { if (update) tdata->bytes_until_sample -= usize; return (true); } else { /* Compute new sample threshold. 
*/ if (update) prof_sample_threshold_update(tdata); return (!tdata->active); } } JEMALLOC_ALWAYS_INLINE prof_tctx_t * prof_alloc_prep(tsd_t *tsd, size_t usize, bool prof_active, bool update) { prof_tctx_t *ret; prof_tdata_t *tdata; prof_bt_t bt; assert(usize == s2u(usize)); if (!prof_active || likely(prof_sample_accum_update(tsd, usize, update, &tdata))) ret = (prof_tctx_t *)(uintptr_t)1U; else { bt_init(&bt, tdata->vec); prof_backtrace(&bt); ret = prof_lookup(tsd, &bt); } return (ret); } JEMALLOC_ALWAYS_INLINE void -prof_malloc(const void *ptr, size_t usize, prof_tctx_t *tctx) +prof_malloc(tsdn_t *tsdn, const void *ptr, size_t usize, prof_tctx_t *tctx) { cassert(config_prof); assert(ptr != NULL); - assert(usize == isalloc(ptr, true)); + assert(usize == isalloc(tsdn, ptr, true)); if (unlikely((uintptr_t)tctx > (uintptr_t)1U)) - prof_malloc_sample_object(ptr, usize, tctx); + prof_malloc_sample_object(tsdn, ptr, usize, tctx); else - prof_tctx_set(ptr, usize, (prof_tctx_t *)(uintptr_t)1U); + prof_tctx_set(tsdn, ptr, usize, (prof_tctx_t *)(uintptr_t)1U); } JEMALLOC_ALWAYS_INLINE void prof_realloc(tsd_t *tsd, const void *ptr, size_t usize, prof_tctx_t *tctx, bool prof_active, bool updated, const void *old_ptr, size_t old_usize, prof_tctx_t *old_tctx) { bool sampled, old_sampled; cassert(config_prof); assert(ptr != NULL || (uintptr_t)tctx <= (uintptr_t)1U); if (prof_active && !updated && ptr != NULL) { - assert(usize == isalloc(ptr, true)); + assert(usize == isalloc(tsd_tsdn(tsd), ptr, true)); if (prof_sample_accum_update(tsd, usize, true, NULL)) { /* * Don't sample. The usize passed to prof_alloc_prep() * was larger than what actually got allocated, so a * backtrace was captured for this allocation, even * though its actual usize was insufficient to cross the * sample threshold. */ tctx = (prof_tctx_t *)(uintptr_t)1U; } } sampled = ((uintptr_t)tctx > (uintptr_t)1U); old_sampled = ((uintptr_t)old_tctx > (uintptr_t)1U); if (unlikely(sampled)) - prof_malloc_sample_object(ptr, usize, tctx); + prof_malloc_sample_object(tsd_tsdn(tsd), ptr, usize, tctx); else - prof_tctx_reset(ptr, usize, old_ptr, old_tctx); + prof_tctx_reset(tsd_tsdn(tsd), ptr, usize, old_ptr, old_tctx); if (unlikely(old_sampled)) prof_free_sampled_object(tsd, old_usize, old_tctx); } JEMALLOC_ALWAYS_INLINE void prof_free(tsd_t *tsd, const void *ptr, size_t usize) { - prof_tctx_t *tctx = prof_tctx_get(ptr); + prof_tctx_t *tctx = prof_tctx_get(tsd_tsdn(tsd), ptr); cassert(config_prof); - assert(usize == isalloc(ptr, true)); + assert(usize == isalloc(tsd_tsdn(tsd), ptr, true)); if (unlikely((uintptr_t)tctx > (uintptr_t)1U)) prof_free_sampled_object(tsd, usize, tctx); } #endif #endif /* JEMALLOC_H_INLINES */ /******************************************************************************/ Index: head/contrib/jemalloc/include/jemalloc/internal/rtree.h =================================================================== --- head/contrib/jemalloc/include/jemalloc/internal/rtree.h (revision 299586) +++ head/contrib/jemalloc/include/jemalloc/internal/rtree.h (revision 299587) @@ -1,294 +1,366 @@ /* * This radix tree implementation is tailored to the singular purpose of * associating metadata with chunks that are currently owned by jemalloc. 
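 *
 * Editorial sketch, not part of the upstream diff: conceptually the tree maps
 * a key derived from a chunk's address, consumed RTREE_BITS_PER_LEVEL bits at
 * a time, to the extent_node_t describing that chunk.  A hypothetical lookup
 * on behalf of a pointer already known to refer to a live allocation could be
 *
 *     extent_node_t *node = rtree_get(&chunks_rtree, key, true);
 *
 * where dependent=true promises that the mapping exists, letting the reader
 * skip the atomic re-read fallbacks; speculative callers such as ivsalloc()
 * must pass false and tolerate a NULL result.
 *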
* ******************************************************************************* */ #ifdef JEMALLOC_H_TYPES typedef struct rtree_node_elm_s rtree_node_elm_t; typedef struct rtree_level_s rtree_level_t; typedef struct rtree_s rtree_t; /* * RTREE_BITS_PER_LEVEL must be a power of two that is no larger than the * machine address width. */ #define LG_RTREE_BITS_PER_LEVEL 4 -#define RTREE_BITS_PER_LEVEL (ZU(1) << LG_RTREE_BITS_PER_LEVEL) +#define RTREE_BITS_PER_LEVEL (1U << LG_RTREE_BITS_PER_LEVEL) +/* Maximum rtree height. */ #define RTREE_HEIGHT_MAX \ - ((ZU(1) << (LG_SIZEOF_PTR+3)) / RTREE_BITS_PER_LEVEL) + ((1U << (LG_SIZEOF_PTR+3)) / RTREE_BITS_PER_LEVEL) /* Used for two-stage lock-free node initialization. */ #define RTREE_NODE_INITIALIZING ((rtree_node_elm_t *)0x1) /* * The node allocation callback function's argument is the number of contiguous * rtree_node_elm_t structures to allocate, and the resulting memory must be * zeroed. */ typedef rtree_node_elm_t *(rtree_node_alloc_t)(size_t); typedef void (rtree_node_dalloc_t)(rtree_node_elm_t *); #endif /* JEMALLOC_H_TYPES */ /******************************************************************************/ #ifdef JEMALLOC_H_STRUCTS struct rtree_node_elm_s { union { void *pun; rtree_node_elm_t *child; extent_node_t *val; }; }; struct rtree_level_s { /* * A non-NULL subtree points to a subtree rooted along the hypothetical * path to the leaf node corresponding to key 0. Depending on what keys * have been used to store to the tree, an arbitrary combination of * subtree pointers may remain NULL. * * Suppose keys comprise 48 bits, and LG_RTREE_BITS_PER_LEVEL is 4. * This results in a 3-level tree, and the leftmost leaf can be directly * accessed via subtrees[2], the subtree prefixed by 0x0000 (excluding * 0x00000000) can be accessed via subtrees[1], and the remainder of the * tree can be accessed via subtrees[0]. * * levels[0] : [ | 0x0001******** | 0x0002******** | ...] * * levels[1] : [ | 0x00000001**** | 0x00000002**** | ... ] * * levels[2] : [val(0x000000000000) | val(0x000000000001) | ...] * * This has practical implications on x64, which currently uses only the * lower 47 bits of virtual address space in userland, thus leaving * subtrees[0] unused and avoiding a level of tree traversal. */ union { void *subtree_pun; rtree_node_elm_t *subtree; }; /* Number of key bits distinguished by this level. */ unsigned bits; /* * Cumulative number of key bits distinguished by traversing to * corresponding tree level. */ unsigned cumbits; }; struct rtree_s { rtree_node_alloc_t *alloc; rtree_node_dalloc_t *dalloc; unsigned height; /* * Precomputed table used to convert from the number of leading 0 key * bits to which subtree level to start at. 
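 *
 * (Editorial example, not part of the upstream diff: in the 3-level,
 * 16-bits-per-level layout sketched for 48-bit keys above, a key whose
 * highest set bit falls in bits 0..15 can start directly at the leaf level,
 * one whose highest set bit falls in bits 16..31 starts one level up, and
 * only keys with a bit set at position 32 or higher must start at the root.
 * rtree_start_level() below picks that level by indexing this table with
 * lg_floor(key) >> LG_RTREE_BITS_PER_LEVEL.)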
*/ unsigned start_level[RTREE_HEIGHT_MAX]; rtree_level_t levels[RTREE_HEIGHT_MAX]; }; #endif /* JEMALLOC_H_STRUCTS */ /******************************************************************************/ #ifdef JEMALLOC_H_EXTERNS bool rtree_new(rtree_t *rtree, unsigned bits, rtree_node_alloc_t *alloc, rtree_node_dalloc_t *dalloc); void rtree_delete(rtree_t *rtree); rtree_node_elm_t *rtree_subtree_read_hard(rtree_t *rtree, unsigned level); rtree_node_elm_t *rtree_child_read_hard(rtree_t *rtree, rtree_node_elm_t *elm, unsigned level); #endif /* JEMALLOC_H_EXTERNS */ /******************************************************************************/ #ifdef JEMALLOC_H_INLINES #ifndef JEMALLOC_ENABLE_INLINE unsigned rtree_start_level(rtree_t *rtree, uintptr_t key); uintptr_t rtree_subkey(rtree_t *rtree, uintptr_t key, unsigned level); bool rtree_node_valid(rtree_node_elm_t *node); -rtree_node_elm_t *rtree_child_tryread(rtree_node_elm_t *elm); +rtree_node_elm_t *rtree_child_tryread(rtree_node_elm_t *elm, + bool dependent); rtree_node_elm_t *rtree_child_read(rtree_t *rtree, rtree_node_elm_t *elm, - unsigned level); + unsigned level, bool dependent); extent_node_t *rtree_val_read(rtree_t *rtree, rtree_node_elm_t *elm, bool dependent); void rtree_val_write(rtree_t *rtree, rtree_node_elm_t *elm, const extent_node_t *val); -rtree_node_elm_t *rtree_subtree_tryread(rtree_t *rtree, unsigned level); -rtree_node_elm_t *rtree_subtree_read(rtree_t *rtree, unsigned level); +rtree_node_elm_t *rtree_subtree_tryread(rtree_t *rtree, unsigned level, + bool dependent); +rtree_node_elm_t *rtree_subtree_read(rtree_t *rtree, unsigned level, + bool dependent); extent_node_t *rtree_get(rtree_t *rtree, uintptr_t key, bool dependent); bool rtree_set(rtree_t *rtree, uintptr_t key, const extent_node_t *val); #endif #if (defined(JEMALLOC_ENABLE_INLINE) || defined(JEMALLOC_RTREE_C_)) -JEMALLOC_INLINE unsigned +JEMALLOC_ALWAYS_INLINE unsigned rtree_start_level(rtree_t *rtree, uintptr_t key) { unsigned start_level; if (unlikely(key == 0)) return (rtree->height - 1); start_level = rtree->start_level[lg_floor(key) >> LG_RTREE_BITS_PER_LEVEL]; assert(start_level < rtree->height); return (start_level); } -JEMALLOC_INLINE uintptr_t +JEMALLOC_ALWAYS_INLINE uintptr_t rtree_subkey(rtree_t *rtree, uintptr_t key, unsigned level) { return ((key >> ((ZU(1) << (LG_SIZEOF_PTR+3)) - rtree->levels[level].cumbits)) & ((ZU(1) << rtree->levels[level].bits) - 1)); } -JEMALLOC_INLINE bool +JEMALLOC_ALWAYS_INLINE bool rtree_node_valid(rtree_node_elm_t *node) { return ((uintptr_t)node > (uintptr_t)RTREE_NODE_INITIALIZING); } -JEMALLOC_INLINE rtree_node_elm_t * -rtree_child_tryread(rtree_node_elm_t *elm) +JEMALLOC_ALWAYS_INLINE rtree_node_elm_t * +rtree_child_tryread(rtree_node_elm_t *elm, bool dependent) { rtree_node_elm_t *child; /* Double-checked read (first read may be stale. 
*/ child = elm->child; - if (!rtree_node_valid(child)) + if (!dependent && !rtree_node_valid(child)) child = atomic_read_p(&elm->pun); + assert(!dependent || child != NULL); return (child); } -JEMALLOC_INLINE rtree_node_elm_t * -rtree_child_read(rtree_t *rtree, rtree_node_elm_t *elm, unsigned level) +JEMALLOC_ALWAYS_INLINE rtree_node_elm_t * +rtree_child_read(rtree_t *rtree, rtree_node_elm_t *elm, unsigned level, + bool dependent) { rtree_node_elm_t *child; - child = rtree_child_tryread(elm); - if (unlikely(!rtree_node_valid(child))) + child = rtree_child_tryread(elm, dependent); + if (!dependent && unlikely(!rtree_node_valid(child))) child = rtree_child_read_hard(rtree, elm, level); + assert(!dependent || child != NULL); return (child); } -JEMALLOC_INLINE extent_node_t * +JEMALLOC_ALWAYS_INLINE extent_node_t * rtree_val_read(rtree_t *rtree, rtree_node_elm_t *elm, bool dependent) { if (dependent) { /* * Reading a val on behalf of a pointer to a valid allocation is * guaranteed to be a clean read even without synchronization, * because the rtree update became visible in memory before the * pointer came into existence. */ return (elm->val); } else { /* * An arbitrary read, e.g. on behalf of ivsalloc(), may not be * dependent on a previous rtree write, which means a stale read * could result if synchronization were omitted here. */ return (atomic_read_p(&elm->pun)); } } JEMALLOC_INLINE void rtree_val_write(rtree_t *rtree, rtree_node_elm_t *elm, const extent_node_t *val) { atomic_write_p(&elm->pun, val); } -JEMALLOC_INLINE rtree_node_elm_t * -rtree_subtree_tryread(rtree_t *rtree, unsigned level) +JEMALLOC_ALWAYS_INLINE rtree_node_elm_t * +rtree_subtree_tryread(rtree_t *rtree, unsigned level, bool dependent) { rtree_node_elm_t *subtree; /* Double-checked read (first read may be stale. */ subtree = rtree->levels[level].subtree; - if (!rtree_node_valid(subtree)) + if (!dependent && unlikely(!rtree_node_valid(subtree))) subtree = atomic_read_p(&rtree->levels[level].subtree_pun); + assert(!dependent || subtree != NULL); return (subtree); } -JEMALLOC_INLINE rtree_node_elm_t * -rtree_subtree_read(rtree_t *rtree, unsigned level) +JEMALLOC_ALWAYS_INLINE rtree_node_elm_t * +rtree_subtree_read(rtree_t *rtree, unsigned level, bool dependent) { rtree_node_elm_t *subtree; - subtree = rtree_subtree_tryread(rtree, level); - if (unlikely(!rtree_node_valid(subtree))) + subtree = rtree_subtree_tryread(rtree, level, dependent); + if (!dependent && unlikely(!rtree_node_valid(subtree))) subtree = rtree_subtree_read_hard(rtree, level); + assert(!dependent || subtree != NULL); return (subtree); } -JEMALLOC_INLINE extent_node_t * +JEMALLOC_ALWAYS_INLINE extent_node_t * rtree_get(rtree_t *rtree, uintptr_t key, bool dependent) { uintptr_t subkey; - unsigned i, start_level; - rtree_node_elm_t *node, *child; + unsigned start_level; + rtree_node_elm_t *node; start_level = rtree_start_level(rtree, key); - for (i = start_level, node = rtree_subtree_tryread(rtree, start_level); - /**/; i++, node = child) { - if (!dependent && unlikely(!rtree_node_valid(node))) - return (NULL); - subkey = rtree_subkey(rtree, key, i); - if (i == rtree->height - 1) { - /* - * node is a leaf, so it contains values rather than - * child pointers. 
- */ - return (rtree_val_read(rtree, &node[subkey], - dependent)); - } - assert(i < rtree->height - 1); - child = rtree_child_tryread(&node[subkey]); + node = rtree_subtree_tryread(rtree, start_level, dependent); +#define RTREE_GET_BIAS (RTREE_HEIGHT_MAX - rtree->height) + switch (start_level + RTREE_GET_BIAS) { +#define RTREE_GET_SUBTREE(level) \ + case level: \ + assert(level < (RTREE_HEIGHT_MAX-1)); \ + if (!dependent && unlikely(!rtree_node_valid(node))) \ + return (NULL); \ + subkey = rtree_subkey(rtree, key, level - \ + RTREE_GET_BIAS); \ + node = rtree_child_tryread(&node[subkey], dependent); \ + /* Fall through. */ +#define RTREE_GET_LEAF(level) \ + case level: \ + assert(level == (RTREE_HEIGHT_MAX-1)); \ + if (!dependent && unlikely(!rtree_node_valid(node))) \ + return (NULL); \ + subkey = rtree_subkey(rtree, key, level - \ + RTREE_GET_BIAS); \ + /* \ + * node is a leaf, so it contains values rather than \ + * child pointers. \ + */ \ + return (rtree_val_read(rtree, &node[subkey], \ + dependent)); +#if RTREE_HEIGHT_MAX > 1 + RTREE_GET_SUBTREE(0) +#endif +#if RTREE_HEIGHT_MAX > 2 + RTREE_GET_SUBTREE(1) +#endif +#if RTREE_HEIGHT_MAX > 3 + RTREE_GET_SUBTREE(2) +#endif +#if RTREE_HEIGHT_MAX > 4 + RTREE_GET_SUBTREE(3) +#endif +#if RTREE_HEIGHT_MAX > 5 + RTREE_GET_SUBTREE(4) +#endif +#if RTREE_HEIGHT_MAX > 6 + RTREE_GET_SUBTREE(5) +#endif +#if RTREE_HEIGHT_MAX > 7 + RTREE_GET_SUBTREE(6) +#endif +#if RTREE_HEIGHT_MAX > 8 + RTREE_GET_SUBTREE(7) +#endif +#if RTREE_HEIGHT_MAX > 9 + RTREE_GET_SUBTREE(8) +#endif +#if RTREE_HEIGHT_MAX > 10 + RTREE_GET_SUBTREE(9) +#endif +#if RTREE_HEIGHT_MAX > 11 + RTREE_GET_SUBTREE(10) +#endif +#if RTREE_HEIGHT_MAX > 12 + RTREE_GET_SUBTREE(11) +#endif +#if RTREE_HEIGHT_MAX > 13 + RTREE_GET_SUBTREE(12) +#endif +#if RTREE_HEIGHT_MAX > 14 + RTREE_GET_SUBTREE(13) +#endif +#if RTREE_HEIGHT_MAX > 15 + RTREE_GET_SUBTREE(14) +#endif +#if RTREE_HEIGHT_MAX > 16 +# error Unsupported RTREE_HEIGHT_MAX +#endif + RTREE_GET_LEAF(RTREE_HEIGHT_MAX-1) +#undef RTREE_GET_SUBTREE +#undef RTREE_GET_LEAF + default: not_reached(); } +#undef RTREE_GET_BIAS not_reached(); } JEMALLOC_INLINE bool rtree_set(rtree_t *rtree, uintptr_t key, const extent_node_t *val) { uintptr_t subkey; unsigned i, start_level; rtree_node_elm_t *node, *child; start_level = rtree_start_level(rtree, key); - node = rtree_subtree_read(rtree, start_level); + node = rtree_subtree_read(rtree, start_level, false); if (node == NULL) return (true); for (i = start_level; /**/; i++, node = child) { subkey = rtree_subkey(rtree, key, i); if (i == rtree->height - 1) { /* * node is a leaf, so it contains values rather than * child pointers. 
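/*
 * Illustrative expansion of the unrolled switch above, assuming
 * RTREE_HEIGHT_MAX == 3 and a tree of height 2, so RTREE_GET_BIAS == 1 (the
 * actual constants depend on the configured page and pointer sizes).  The
 * bias shifts shallow trees onto the high-numbered cases so that the leaf is
 * always the single case RTREE_HEIGHT_MAX-1; case 0 is compiled in but never
 * reached at this height.  With the validity checks elided for brevity:
 *
 *	switch (start_level + 1) {
 *	case 1:					// top level
 *		subkey = rtree_subkey(rtree, key, 0);
 *		node = rtree_child_tryread(&node[subkey], dependent);
 *		// fall through
 *	case 2:					// leaf level
 *		subkey = rtree_subkey(rtree, key, 1);
 *		return (rtree_val_read(rtree, &node[subkey], dependent));
 *	default:
 *		not_reached();
 *	}
 *
 * A lookup starting at level 0 thus reads one child and falls through to the
 * leaf; a lookup starting at level 1 jumps straight to the leaf case.
 */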
*/ rtree_val_write(rtree, &node[subkey], val); return (false); } assert(i + 1 < rtree->height); - child = rtree_child_read(rtree, &node[subkey], i); + child = rtree_child_read(rtree, &node[subkey], i, false); if (child == NULL) return (true); } not_reached(); } #endif #endif /* JEMALLOC_H_INLINES */ /******************************************************************************/ Index: head/contrib/jemalloc/include/jemalloc/internal/stats.h =================================================================== --- head/contrib/jemalloc/include/jemalloc/internal/stats.h (revision 299586) +++ head/contrib/jemalloc/include/jemalloc/internal/stats.h (revision 299587) @@ -1,193 +1,201 @@ /******************************************************************************/ #ifdef JEMALLOC_H_TYPES typedef struct tcache_bin_stats_s tcache_bin_stats_t; typedef struct malloc_bin_stats_s malloc_bin_stats_t; typedef struct malloc_large_stats_s malloc_large_stats_t; typedef struct malloc_huge_stats_s malloc_huge_stats_t; typedef struct arena_stats_s arena_stats_t; typedef struct chunk_stats_s chunk_stats_t; #endif /* JEMALLOC_H_TYPES */ /******************************************************************************/ #ifdef JEMALLOC_H_STRUCTS struct tcache_bin_stats_s { /* * Number of allocation requests that corresponded to the size of this * bin. */ uint64_t nrequests; }; struct malloc_bin_stats_s { /* * Total number of allocation/deallocation requests served directly by * the bin. Note that tcache may allocate an object, then recycle it * many times, resulting many increments to nrequests, but only one * each to nmalloc and ndalloc. */ uint64_t nmalloc; uint64_t ndalloc; /* * Number of allocation requests that correspond to the size of this * bin. This includes requests served by tcache, though tcache only * periodically merges into this counter. */ uint64_t nrequests; /* * Current number of regions of this size class, including regions * currently cached by tcache. */ size_t curregs; /* Number of tcache fills from this bin. */ uint64_t nfills; /* Number of tcache flushes to this bin. */ uint64_t nflushes; /* Total number of runs created for this bin's size class. */ uint64_t nruns; /* * Total number of runs reused by extracting them from the runs tree for * this bin's size class. */ uint64_t reruns; /* Current number of runs in this bin. */ size_t curruns; }; struct malloc_large_stats_s { /* * Total number of allocation/deallocation requests served directly by * the arena. Note that tcache may allocate an object, then recycle it * many times, resulting many increments to nrequests, but only one * each to nmalloc and ndalloc. */ uint64_t nmalloc; uint64_t ndalloc; /* * Number of allocation requests that correspond to this size class. * This includes requests served by tcache, though tcache only * periodically merges into this counter. */ uint64_t nrequests; /* * Current number of runs of this size class, including runs currently * cached by tcache. */ size_t curruns; }; struct malloc_huge_stats_s { /* * Total number of allocation/deallocation requests served directly by * the arena. */ uint64_t nmalloc; uint64_t ndalloc; /* Current number of (multi-)chunk allocations of this size class. */ size_t curhchunks; }; struct arena_stats_s { /* Number of bytes currently mapped. */ size_t mapped; /* + * Number of bytes currently retained as a side effect of munmap() being + * disabled/bypassed. 
Retained bytes are technically mapped (though + * always decommitted or purged), but they are excluded from the mapped + * statistic (above). + */ + size_t retained; + + /* * Total number of purge sweeps, total number of madvise calls made, * and total pages purged in order to keep dirty unused memory under * control. */ uint64_t npurge; uint64_t nmadvise; uint64_t purged; /* * Number of bytes currently mapped purely for metadata purposes, and * number of bytes currently allocated for internal metadata. */ size_t metadata_mapped; size_t metadata_allocated; /* Protected via atomic_*_z(). */ /* Per-size-category statistics. */ size_t allocated_large; uint64_t nmalloc_large; uint64_t ndalloc_large; uint64_t nrequests_large; size_t allocated_huge; uint64_t nmalloc_huge; uint64_t ndalloc_huge; /* One element for each large size class. */ malloc_large_stats_t *lstats; /* One element for each huge size class. */ malloc_huge_stats_t *hstats; }; #endif /* JEMALLOC_H_STRUCTS */ /******************************************************************************/ #ifdef JEMALLOC_H_EXTERNS extern bool opt_stats_print; extern size_t stats_cactive; void stats_print(void (*write)(void *, const char *), void *cbopaque, const char *opts); #endif /* JEMALLOC_H_EXTERNS */ /******************************************************************************/ #ifdef JEMALLOC_H_INLINES #ifndef JEMALLOC_ENABLE_INLINE size_t stats_cactive_get(void); void stats_cactive_add(size_t size); void stats_cactive_sub(size_t size); #endif #if (defined(JEMALLOC_ENABLE_INLINE) || defined(JEMALLOC_STATS_C_)) JEMALLOC_INLINE size_t stats_cactive_get(void) { return (atomic_read_z(&stats_cactive)); } JEMALLOC_INLINE void stats_cactive_add(size_t size) { UNUSED size_t cactive; assert(size > 0); assert((size & chunksize_mask) == 0); cactive = atomic_add_z(&stats_cactive, size); assert(cactive - size < cactive); } JEMALLOC_INLINE void stats_cactive_sub(size_t size) { UNUSED size_t cactive; assert(size > 0); assert((size & chunksize_mask) == 0); cactive = atomic_sub_z(&stats_cactive, size); assert(cactive + size > cactive); } #endif #endif /* JEMALLOC_H_INLINES */ /******************************************************************************/ Index: head/contrib/jemalloc/include/jemalloc/internal/tcache.h =================================================================== --- head/contrib/jemalloc/include/jemalloc/internal/tcache.h (revision 299586) +++ head/contrib/jemalloc/include/jemalloc/internal/tcache.h (revision 299587) @@ -1,468 +1,469 @@ /******************************************************************************/ #ifdef JEMALLOC_H_TYPES typedef struct tcache_bin_info_s tcache_bin_info_t; typedef struct tcache_bin_s tcache_bin_t; typedef struct tcache_s tcache_t; typedef struct tcaches_s tcaches_t; /* * tcache pointers close to NULL are used to encode state information that is * used for two purposes: preventing thread caching on a per thread basis and * cleaning up during thread shutdown. */ #define TCACHE_STATE_DISABLED ((tcache_t *)(uintptr_t)1) #define TCACHE_STATE_REINCARNATED ((tcache_t *)(uintptr_t)2) #define TCACHE_STATE_PURGATORY ((tcache_t *)(uintptr_t)3) #define TCACHE_STATE_MAX TCACHE_STATE_PURGATORY /* * Absolute minimum number of cache slots for each small bin. */ #define TCACHE_NSLOTS_SMALL_MIN 20 /* * Absolute maximum number of cache slots for each small bin in the thread * cache. This is an additional constraint beyond that imposed as: twice the * number of regions per run for this size class. 
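/*
 * The retained counter added to arena_stats_s above is exported through the
 * stats.retained and stats.arenas.<i>.retained mallctls.  Usage sketch
 * against the public API (the header name varies: <jemalloc/jemalloc.h>
 * upstream, <malloc_np.h> in the FreeBSD base system); error handling is
 * kept minimal.
 */
#include <stdint.h>
#include <stdio.h>
#include <jemalloc/jemalloc.h>

static void
print_mapped_vs_retained(void)
{
	uint64_t epoch = 1;
	size_t sz, mapped, retained;

	/* Refresh the cached statistics snapshot before reading it. */
	sz = sizeof(epoch);
	if (mallctl("epoch", &epoch, &sz, &epoch, sz) != 0)
		return;

	sz = sizeof(size_t);
	if (mallctl("stats.mapped", &mapped, &sz, NULL, 0) != 0 ||
	    mallctl("stats.retained", &retained, &sz, NULL, 0) != 0)
		return;

	printf("mapped: %zu bytes, retained: %zu bytes\n", mapped, retained);
}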
* * This constant must be an even number. */ #define TCACHE_NSLOTS_SMALL_MAX 200 /* Number of cache slots for large size classes. */ #define TCACHE_NSLOTS_LARGE 20 /* (1U << opt_lg_tcache_max) is used to compute tcache_maxclass. */ #define LG_TCACHE_MAXCLASS_DEFAULT 15 /* * TCACHE_GC_SWEEP is the approximate number of allocation events between * full GC sweeps. Integer rounding may cause the actual number to be * slightly higher, since GC is performed incrementally. */ #define TCACHE_GC_SWEEP 8192 /* Number of tcache allocation/deallocation events between incremental GCs. */ #define TCACHE_GC_INCR \ ((TCACHE_GC_SWEEP / NBINS) + ((TCACHE_GC_SWEEP / NBINS == 0) ? 0 : 1)) #endif /* JEMALLOC_H_TYPES */ /******************************************************************************/ #ifdef JEMALLOC_H_STRUCTS typedef enum { tcache_enabled_false = 0, /* Enable cast to/from bool. */ tcache_enabled_true = 1, tcache_enabled_default = 2 } tcache_enabled_t; /* * Read-only information associated with each element of tcache_t's tbins array * is stored separately, mainly to reduce memory usage. */ struct tcache_bin_info_s { unsigned ncached_max; /* Upper limit on ncached. */ }; struct tcache_bin_s { tcache_bin_stats_t tstats; int low_water; /* Min # cached since last GC. */ unsigned lg_fill_div; /* Fill (ncached_max >> lg_fill_div). */ unsigned ncached; /* # of cached objects. */ /* * To make use of adjacent cacheline prefetch, the items in the avail * stack goes to higher address for newer allocations. avail points * just above the available space, which means that * avail[-ncached, ... -1] are available items and the lowest item will * be allocated first. */ void **avail; /* Stack of available objects. */ }; struct tcache_s { ql_elm(tcache_t) link; /* Used for aggregating stats. */ uint64_t prof_accumbytes;/* Cleared after arena_prof_accum(). */ ticker_t gc_ticker; /* Drives incremental GC. */ szind_t next_gc_bin; /* Next bin to GC. */ tcache_bin_t tbins[1]; /* Dynamically sized. */ /* * The pointer stacks associated with tbins follow as a contiguous * array. During tcache initialization, the avail pointer in each * element of tbins is initialized to point to the proper offset within * this array. */ }; /* Linkage for list of available (previously used) explicit tcache IDs. */ struct tcaches_s { union { tcache_t *tcache; tcaches_t *next; }; }; #endif /* JEMALLOC_H_STRUCTS */ /******************************************************************************/ #ifdef JEMALLOC_H_EXTERNS extern bool opt_tcache; extern ssize_t opt_lg_tcache_max; extern tcache_bin_info_t *tcache_bin_info; /* * Number of tcache bins. There are NBINS small-object bins, plus 0 or more * large-object bins. */ extern unsigned nhbins; /* Maximum cached size class. */ extern size_t tcache_maxclass; /* * Explicit tcaches, managed via the tcache.{create,flush,destroy} mallctls and * usable via the MALLOCX_TCACHE() flag. The automatic per thread tcaches are * completely disjoint from this data structure. tcaches starts off as a sparse * array, so it has no physical memory footprint until individual pages are * touched. This allows the entire array to be allocated the first time an * explicit tcache is created without a disproportionate impact on memory usage. 
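/*
 * Standalone sketch (toy_* names are illustrative, not jemalloc API) of the
 * avail-stack layout described for tcache_bin_s above: avail points one past
 * the end of the slot array, cached items live at avail[-ncached .. -1], and
 * allocation pops the lowest-addressed (most recently cached) item first, so
 * successive pops walk upward through adjacent addresses and cooperate with
 * hardware prefetch.
 */
#include <stdbool.h>
#include <stddef.h>

typedef struct {
	void		**avail;	/* One past the end of the slot array. */
	unsigned	ncached;	/* Number of currently cached items. */
} toy_bin_t;

static void *
toy_bin_pop(toy_bin_t *bin, bool *success)
{
	void *ret;

	if (bin->ncached == 0) {
		*success = false;
		return (NULL);
	}
	*success = true;
	ret = *(bin->avail - bin->ncached);	/* Lowest-addressed item. */
	bin->ncached--;
	return (ret);
}

static void
toy_bin_push(toy_bin_t *bin, void *ptr)
{
	/* The stack grows toward lower addresses as items are cached. */
	bin->ncached++;
	*(bin->avail - bin->ncached) = ptr;
}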
*/ extern tcaches_t *tcaches; -size_t tcache_salloc(const void *ptr); +size_t tcache_salloc(tsdn_t *tsdn, const void *ptr); void tcache_event_hard(tsd_t *tsd, tcache_t *tcache); -void *tcache_alloc_small_hard(tsd_t *tsd, arena_t *arena, tcache_t *tcache, +void *tcache_alloc_small_hard(tsdn_t *tsdn, arena_t *arena, tcache_t *tcache, tcache_bin_t *tbin, szind_t binind, bool *tcache_success); void tcache_bin_flush_small(tsd_t *tsd, tcache_t *tcache, tcache_bin_t *tbin, szind_t binind, unsigned rem); void tcache_bin_flush_large(tsd_t *tsd, tcache_bin_t *tbin, szind_t binind, unsigned rem, tcache_t *tcache); -void tcache_arena_associate(tcache_t *tcache, arena_t *arena); -void tcache_arena_reassociate(tcache_t *tcache, arena_t *oldarena, - arena_t *newarena); -void tcache_arena_dissociate(tcache_t *tcache, arena_t *arena); +void tcache_arena_reassociate(tsdn_t *tsdn, tcache_t *tcache, + arena_t *oldarena, arena_t *newarena); tcache_t *tcache_get_hard(tsd_t *tsd); -tcache_t *tcache_create(tsd_t *tsd, arena_t *arena); +tcache_t *tcache_create(tsdn_t *tsdn, arena_t *arena); void tcache_cleanup(tsd_t *tsd); void tcache_enabled_cleanup(tsd_t *tsd); -void tcache_stats_merge(tcache_t *tcache, arena_t *arena); -bool tcaches_create(tsd_t *tsd, unsigned *r_ind); +void tcache_stats_merge(tsdn_t *tsdn, tcache_t *tcache, arena_t *arena); +bool tcaches_create(tsdn_t *tsdn, unsigned *r_ind); void tcaches_flush(tsd_t *tsd, unsigned ind); void tcaches_destroy(tsd_t *tsd, unsigned ind); -bool tcache_boot(void); +bool tcache_boot(tsdn_t *tsdn); #endif /* JEMALLOC_H_EXTERNS */ /******************************************************************************/ #ifdef JEMALLOC_H_INLINES #ifndef JEMALLOC_ENABLE_INLINE void tcache_event(tsd_t *tsd, tcache_t *tcache); void tcache_flush(void); bool tcache_enabled_get(void); tcache_t *tcache_get(tsd_t *tsd, bool create); void tcache_enabled_set(bool enabled); void *tcache_alloc_easy(tcache_bin_t *tbin, bool *tcache_success); void *tcache_alloc_small(tsd_t *tsd, arena_t *arena, tcache_t *tcache, size_t size, szind_t ind, bool zero, bool slow_path); void *tcache_alloc_large(tsd_t *tsd, arena_t *arena, tcache_t *tcache, size_t size, szind_t ind, bool zero, bool slow_path); void tcache_dalloc_small(tsd_t *tsd, tcache_t *tcache, void *ptr, szind_t binind, bool slow_path); void tcache_dalloc_large(tsd_t *tsd, tcache_t *tcache, void *ptr, size_t size, bool slow_path); tcache_t *tcaches_get(tsd_t *tsd, unsigned ind); #endif #if (defined(JEMALLOC_ENABLE_INLINE) || defined(JEMALLOC_TCACHE_C_)) JEMALLOC_INLINE void tcache_flush(void) { tsd_t *tsd; cassert(config_tcache); tsd = tsd_fetch(); tcache_cleanup(tsd); } JEMALLOC_INLINE bool tcache_enabled_get(void) { tsd_t *tsd; tcache_enabled_t tcache_enabled; cassert(config_tcache); tsd = tsd_fetch(); tcache_enabled = tsd_tcache_enabled_get(tsd); if (tcache_enabled == tcache_enabled_default) { tcache_enabled = (tcache_enabled_t)opt_tcache; tsd_tcache_enabled_set(tsd, tcache_enabled); } return ((bool)tcache_enabled); } JEMALLOC_INLINE void tcache_enabled_set(bool enabled) { tsd_t *tsd; tcache_enabled_t tcache_enabled; cassert(config_tcache); tsd = tsd_fetch(); tcache_enabled = (tcache_enabled_t)enabled; tsd_tcache_enabled_set(tsd, tcache_enabled); if (!enabled) tcache_cleanup(tsd); } JEMALLOC_ALWAYS_INLINE tcache_t * tcache_get(tsd_t *tsd, bool create) { tcache_t *tcache; if (!config_tcache) return (NULL); tcache = tsd_tcache_get(tsd); if (!create) return (tcache); if (unlikely(tcache == NULL) && tsd_nominal(tsd)) { tcache = 
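/*
 * Usage sketch for the explicit-tcache facility described above, via the
 * public tcache.{create,destroy} mallctls and the MALLOCX_TCACHE() flag
 * (header name varies: <jemalloc/jemalloc.h> upstream, <malloc_np.h> on
 * FreeBSD; error handling is kept minimal).
 */
#include <stddef.h>
#include <jemalloc/jemalloc.h>

static void *
alloc_from_private_tcache(size_t size, unsigned *tci)
{
	size_t sz = sizeof(*tci);
	void *p;

	if (mallctl("tcache.create", tci, &sz, NULL, 0) != 0)
		return (NULL);
	/* Route this allocation through the explicit tcache just created. */
	p = mallocx(size, MALLOCX_TCACHE(*tci));
	if (p == NULL)
		mallctl("tcache.destroy", NULL, NULL, tci, sizeof(*tci));
	/* Free later with dallocx(p, MALLOCX_TCACHE(*tci)). */
	return (p);
}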
tcache_get_hard(tsd); tsd_tcache_set(tsd, tcache); } return (tcache); } JEMALLOC_ALWAYS_INLINE void tcache_event(tsd_t *tsd, tcache_t *tcache) { if (TCACHE_GC_INCR == 0) return; if (unlikely(ticker_tick(&tcache->gc_ticker))) tcache_event_hard(tsd, tcache); } JEMALLOC_ALWAYS_INLINE void * tcache_alloc_easy(tcache_bin_t *tbin, bool *tcache_success) { void *ret; if (unlikely(tbin->ncached == 0)) { tbin->low_water = -1; *tcache_success = false; return (NULL); } /* * tcache_success (instead of ret) should be checked upon the return of * this function. We avoid checking (ret == NULL) because there is * never a null stored on the avail stack (which is unknown to the * compiler), and eagerly checking ret would cause pipeline stall * (waiting for the cacheline). */ *tcache_success = true; ret = *(tbin->avail - tbin->ncached); tbin->ncached--; if (unlikely((int)tbin->ncached < tbin->low_water)) tbin->low_water = tbin->ncached; return (ret); } JEMALLOC_ALWAYS_INLINE void * tcache_alloc_small(tsd_t *tsd, arena_t *arena, tcache_t *tcache, size_t size, szind_t binind, bool zero, bool slow_path) { void *ret; tcache_bin_t *tbin; bool tcache_success; size_t usize JEMALLOC_CC_SILENCE_INIT(0); assert(binind < NBINS); tbin = &tcache->tbins[binind]; ret = tcache_alloc_easy(tbin, &tcache_success); assert(tcache_success == (ret != NULL)); if (unlikely(!tcache_success)) { bool tcache_hard_success; arena = arena_choose(tsd, arena); if (unlikely(arena == NULL)) return (NULL); - ret = tcache_alloc_small_hard(tsd, arena, tcache, tbin, binind, - &tcache_hard_success); + ret = tcache_alloc_small_hard(tsd_tsdn(tsd), arena, tcache, + tbin, binind, &tcache_hard_success); if (tcache_hard_success == false) return (NULL); } assert(ret); /* * Only compute usize if required. The checks in the following if * statement are all static. */ if (config_prof || (slow_path && config_fill) || unlikely(zero)) { usize = index2size(binind); - assert(tcache_salloc(ret) == usize); + assert(tcache_salloc(tsd_tsdn(tsd), ret) == usize); } if (likely(!zero)) { if (slow_path && config_fill) { if (unlikely(opt_junk_alloc)) { arena_alloc_junk_small(ret, &arena_bin_info[binind], false); } else if (unlikely(opt_zero)) memset(ret, 0, usize); } } else { if (slow_path && config_fill && unlikely(opt_junk_alloc)) { arena_alloc_junk_small(ret, &arena_bin_info[binind], true); } memset(ret, 0, usize); } if (config_stats) tbin->tstats.nrequests++; if (config_prof) tcache->prof_accumbytes += usize; tcache_event(tsd, tcache); return (ret); } JEMALLOC_ALWAYS_INLINE void * tcache_alloc_large(tsd_t *tsd, arena_t *arena, tcache_t *tcache, size_t size, szind_t binind, bool zero, bool slow_path) { void *ret; tcache_bin_t *tbin; bool tcache_success; assert(binind < nhbins); tbin = &tcache->tbins[binind]; ret = tcache_alloc_easy(tbin, &tcache_success); assert(tcache_success == (ret != NULL)); if (unlikely(!tcache_success)) { /* * Only allocate one large object at a time, because it's quite * expensive to create one and not use it. 
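/*
 * Sketch of the ticker idiom that drives tcache_event() above.  ticker_t
 * itself lives in another jemalloc header; toy_ticker_t below is an
 * illustrative stand-in.  Every allocation/deallocation event ticks the
 * counter, and roughly one event in TCACHE_GC_INCR takes the slow path
 * (tcache_event_hard) to perform one step of incremental garbage collection.
 */
#include <stdbool.h>
#include <stdint.h>

typedef struct {
	int32_t	tick;	/* Counts down to the next slow-path event. */
	int32_t	nticks;	/* Reload value, e.g. TCACHE_GC_INCR. */
} toy_ticker_t;

static bool
toy_ticker_tick(toy_ticker_t *ticker)
{
	ticker->tick--;
	if (ticker->tick < 0) {
		ticker->tick = ticker->nticks;
		return (true);	/* Caller runs its periodic work now. */
	}
	return (false);
}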
*/ arena = arena_choose(tsd, arena); if (unlikely(arena == NULL)) return (NULL); - ret = arena_malloc_large(tsd, arena, binind, zero); + ret = arena_malloc_large(tsd_tsdn(tsd), arena, binind, zero); if (ret == NULL) return (NULL); } else { size_t usize JEMALLOC_CC_SILENCE_INIT(0); /* Only compute usize on demand */ if (config_prof || (slow_path && config_fill) || unlikely(zero)) { usize = index2size(binind); assert(usize <= tcache_maxclass); } if (config_prof && usize == LARGE_MINCLASS) { arena_chunk_t *chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(ret); size_t pageind = (((uintptr_t)ret - (uintptr_t)chunk) >> LG_PAGE); arena_mapbits_large_binind_set(chunk, pageind, BININD_INVALID); } if (likely(!zero)) { if (slow_path && config_fill) { - if (unlikely(opt_junk_alloc)) - memset(ret, 0xa5, usize); - else if (unlikely(opt_zero)) + if (unlikely(opt_junk_alloc)) { + memset(ret, JEMALLOC_ALLOC_JUNK, + usize); + } else if (unlikely(opt_zero)) memset(ret, 0, usize); } } else memset(ret, 0, usize); if (config_stats) tbin->tstats.nrequests++; if (config_prof) tcache->prof_accumbytes += usize; } tcache_event(tsd, tcache); return (ret); } JEMALLOC_ALWAYS_INLINE void tcache_dalloc_small(tsd_t *tsd, tcache_t *tcache, void *ptr, szind_t binind, bool slow_path) { tcache_bin_t *tbin; tcache_bin_info_t *tbin_info; - assert(tcache_salloc(ptr) <= SMALL_MAXCLASS); + assert(tcache_salloc(tsd_tsdn(tsd), ptr) <= SMALL_MAXCLASS); if (slow_path && config_fill && unlikely(opt_junk_free)) arena_dalloc_junk_small(ptr, &arena_bin_info[binind]); tbin = &tcache->tbins[binind]; tbin_info = &tcache_bin_info[binind]; if (unlikely(tbin->ncached == tbin_info->ncached_max)) { tcache_bin_flush_small(tsd, tcache, tbin, binind, (tbin_info->ncached_max >> 1)); } assert(tbin->ncached < tbin_info->ncached_max); tbin->ncached++; *(tbin->avail - tbin->ncached) = ptr; tcache_event(tsd, tcache); } JEMALLOC_ALWAYS_INLINE void tcache_dalloc_large(tsd_t *tsd, tcache_t *tcache, void *ptr, size_t size, bool slow_path) { szind_t binind; tcache_bin_t *tbin; tcache_bin_info_t *tbin_info; assert((size & PAGE_MASK) == 0); - assert(tcache_salloc(ptr) > SMALL_MAXCLASS); - assert(tcache_salloc(ptr) <= tcache_maxclass); + assert(tcache_salloc(tsd_tsdn(tsd), ptr) > SMALL_MAXCLASS); + assert(tcache_salloc(tsd_tsdn(tsd), ptr) <= tcache_maxclass); binind = size2index(size); if (slow_path && config_fill && unlikely(opt_junk_free)) arena_dalloc_junk_large(ptr, size); tbin = &tcache->tbins[binind]; tbin_info = &tcache_bin_info[binind]; if (unlikely(tbin->ncached == tbin_info->ncached_max)) { tcache_bin_flush_large(tsd, tbin, binind, (tbin_info->ncached_max >> 1), tcache); } assert(tbin->ncached < tbin_info->ncached_max); tbin->ncached++; *(tbin->avail - tbin->ncached) = ptr; tcache_event(tsd, tcache); } JEMALLOC_ALWAYS_INLINE tcache_t * tcaches_get(tsd_t *tsd, unsigned ind) { tcaches_t *elm = &tcaches[ind]; - if (unlikely(elm->tcache == NULL)) - elm->tcache = tcache_create(tsd, arena_choose(tsd, NULL)); + if (unlikely(elm->tcache == NULL)) { + elm->tcache = tcache_create(tsd_tsdn(tsd), arena_choose(tsd, + NULL)); + } return (elm->tcache); } #endif #endif /* JEMALLOC_H_INLINES */ /******************************************************************************/ Index: head/contrib/jemalloc/include/jemalloc/internal/tsd.h =================================================================== --- head/contrib/jemalloc/include/jemalloc/internal/tsd.h (revision 299586) +++ head/contrib/jemalloc/include/jemalloc/internal/tsd.h (revision 299587) @@ -1,665 +1,747 @@ 
/******************************************************************************/ #ifdef JEMALLOC_H_TYPES /* Maximum number of malloc_tsd users with cleanup functions. */ #define MALLOC_TSD_CLEANUPS_MAX 2 typedef bool (*malloc_tsd_cleanup_t)(void); #if (!defined(JEMALLOC_MALLOC_THREAD_CLEANUP) && !defined(JEMALLOC_TLS) && \ !defined(_WIN32)) typedef struct tsd_init_block_s tsd_init_block_t; typedef struct tsd_init_head_s tsd_init_head_t; #endif typedef struct tsd_s tsd_t; +typedef struct tsdn_s tsdn_t; +#define TSDN_NULL ((tsdn_t *)0) + typedef enum { tsd_state_uninitialized, tsd_state_nominal, tsd_state_purgatory, tsd_state_reincarnated } tsd_state_t; /* * TLS/TSD-agnostic macro-based implementation of thread-specific data. There * are five macros that support (at least) three use cases: file-private, * library-private, and library-private inlined. Following is an example * library-private tsd variable: * * In example.h: * typedef struct { * int x; * int y; * } example_t; * #define EX_INITIALIZER JEMALLOC_CONCAT({0, 0}) * malloc_tsd_types(example_, example_t) * malloc_tsd_protos(, example_, example_t) * malloc_tsd_externs(example_, example_t) * In example.c: * malloc_tsd_data(, example_, example_t, EX_INITIALIZER) * malloc_tsd_funcs(, example_, example_t, EX_INITIALIZER, * example_tsd_cleanup) * * The result is a set of generated functions, e.g.: * * bool example_tsd_boot(void) {...} + * bool example_tsd_booted_get(void) {...} * example_t *example_tsd_get() {...} * void example_tsd_set(example_t *val) {...} * * Note that all of the functions deal in terms of (a_type *) rather than * (a_type) so that it is possible to support non-pointer types (unlike * pthreads TSD). example_tsd_cleanup() is passed an (a_type *) pointer that is * cast to (void *). This means that the cleanup function needs to cast the * function argument to (a_type *), then dereference the resulting pointer to * access fields, e.g. * * void * example_tsd_cleanup(void *arg) * { * example_t *example = (example_t *)arg; * * example->x = 42; * [...] * if ([want the cleanup function to be called again]) * example_tsd_set(example); * } * * If example_tsd_set() is called within example_tsd_cleanup(), it will be * called again. This is similar to how pthreads TSD destruction works, except * that pthreads only calls the cleanup function again if the value was set to * non-NULL. */ /* malloc_tsd_types(). */ #ifdef JEMALLOC_MALLOC_THREAD_CLEANUP #define malloc_tsd_types(a_name, a_type) #elif (defined(JEMALLOC_TLS)) #define malloc_tsd_types(a_name, a_type) #elif (defined(_WIN32)) #define malloc_tsd_types(a_name, a_type) \ typedef struct { \ bool initialized; \ a_type val; \ } a_name##tsd_wrapper_t; #else #define malloc_tsd_types(a_name, a_type) \ typedef struct { \ bool initialized; \ a_type val; \ } a_name##tsd_wrapper_t; #endif /* malloc_tsd_protos(). */ #define malloc_tsd_protos(a_attr, a_name, a_type) \ a_attr bool \ a_name##tsd_boot0(void); \ a_attr void \ a_name##tsd_boot1(void); \ a_attr bool \ a_name##tsd_boot(void); \ +a_attr bool \ +a_name##tsd_booted_get(void); \ a_attr a_type * \ a_name##tsd_get(void); \ a_attr void \ a_name##tsd_set(a_type *val); /* malloc_tsd_externs(). 
*/ #ifdef JEMALLOC_MALLOC_THREAD_CLEANUP #define malloc_tsd_externs(a_name, a_type) \ extern __thread a_type a_name##tsd_tls; \ extern __thread bool a_name##tsd_initialized; \ extern bool a_name##tsd_booted; #elif (defined(JEMALLOC_TLS)) #define malloc_tsd_externs(a_name, a_type) \ extern __thread a_type a_name##tsd_tls; \ extern pthread_key_t a_name##tsd_tsd; \ extern bool a_name##tsd_booted; #elif (defined(_WIN32)) #define malloc_tsd_externs(a_name, a_type) \ extern DWORD a_name##tsd_tsd; \ extern a_name##tsd_wrapper_t a_name##tsd_boot_wrapper; \ extern bool a_name##tsd_booted; #else #define malloc_tsd_externs(a_name, a_type) \ extern pthread_key_t a_name##tsd_tsd; \ extern tsd_init_head_t a_name##tsd_init_head; \ extern a_name##tsd_wrapper_t a_name##tsd_boot_wrapper; \ extern bool a_name##tsd_booted; #endif /* malloc_tsd_data(). */ #ifdef JEMALLOC_MALLOC_THREAD_CLEANUP #define malloc_tsd_data(a_attr, a_name, a_type, a_initializer) \ a_attr __thread a_type JEMALLOC_TLS_MODEL \ a_name##tsd_tls = a_initializer; \ a_attr __thread bool JEMALLOC_TLS_MODEL \ a_name##tsd_initialized = false; \ a_attr bool a_name##tsd_booted = false; #elif (defined(JEMALLOC_TLS)) #define malloc_tsd_data(a_attr, a_name, a_type, a_initializer) \ a_attr __thread a_type JEMALLOC_TLS_MODEL \ a_name##tsd_tls = a_initializer; \ a_attr pthread_key_t a_name##tsd_tsd; \ a_attr bool a_name##tsd_booted = false; #elif (defined(_WIN32)) #define malloc_tsd_data(a_attr, a_name, a_type, a_initializer) \ a_attr DWORD a_name##tsd_tsd; \ a_attr a_name##tsd_wrapper_t a_name##tsd_boot_wrapper = { \ false, \ a_initializer \ }; \ a_attr bool a_name##tsd_booted = false; #else #define malloc_tsd_data(a_attr, a_name, a_type, a_initializer) \ a_attr pthread_key_t a_name##tsd_tsd; \ a_attr tsd_init_head_t a_name##tsd_init_head = { \ ql_head_initializer(blocks), \ MALLOC_MUTEX_INITIALIZER \ }; \ a_attr a_name##tsd_wrapper_t a_name##tsd_boot_wrapper = { \ false, \ a_initializer \ }; \ a_attr bool a_name##tsd_booted = false; #endif /* malloc_tsd_funcs(). */ #ifdef JEMALLOC_MALLOC_THREAD_CLEANUP #define malloc_tsd_funcs(a_attr, a_name, a_type, a_initializer, \ a_cleanup) \ /* Initialization/cleanup. */ \ a_attr bool \ a_name##tsd_cleanup_wrapper(void) \ { \ \ if (a_name##tsd_initialized) { \ a_name##tsd_initialized = false; \ a_cleanup(&a_name##tsd_tls); \ } \ return (a_name##tsd_initialized); \ } \ a_attr bool \ a_name##tsd_boot0(void) \ { \ \ if (a_cleanup != malloc_tsd_no_cleanup) { \ malloc_tsd_cleanup_register( \ &a_name##tsd_cleanup_wrapper); \ } \ a_name##tsd_booted = true; \ return (false); \ } \ a_attr void \ a_name##tsd_boot1(void) \ { \ \ /* Do nothing. */ \ } \ a_attr bool \ a_name##tsd_boot(void) \ { \ \ return (a_name##tsd_boot0()); \ } \ +a_attr bool \ +a_name##tsd_booted_get(void) \ +{ \ + \ + return (a_name##tsd_booted); \ +} \ /* Get/set. */ \ a_attr a_type * \ a_name##tsd_get(void) \ { \ \ assert(a_name##tsd_booted); \ return (&a_name##tsd_tls); \ } \ a_attr void \ a_name##tsd_set(a_type *val) \ { \ \ assert(a_name##tsd_booted); \ a_name##tsd_tls = (*val); \ if (a_cleanup != malloc_tsd_no_cleanup) \ a_name##tsd_initialized = true; \ } #elif (defined(JEMALLOC_TLS)) #define malloc_tsd_funcs(a_attr, a_name, a_type, a_initializer, \ a_cleanup) \ /* Initialization/cleanup. 
*/ \ a_attr bool \ a_name##tsd_boot0(void) \ { \ \ if (a_cleanup != malloc_tsd_no_cleanup) { \ if (pthread_key_create(&a_name##tsd_tsd, a_cleanup) != \ 0) \ return (true); \ } \ a_name##tsd_booted = true; \ return (false); \ } \ a_attr void \ a_name##tsd_boot1(void) \ { \ \ /* Do nothing. */ \ } \ a_attr bool \ a_name##tsd_boot(void) \ { \ \ return (a_name##tsd_boot0()); \ } \ +a_attr bool \ +a_name##tsd_booted_get(void) \ +{ \ + \ + return (a_name##tsd_booted); \ +} \ /* Get/set. */ \ a_attr a_type * \ a_name##tsd_get(void) \ { \ \ assert(a_name##tsd_booted); \ return (&a_name##tsd_tls); \ } \ a_attr void \ a_name##tsd_set(a_type *val) \ { \ \ assert(a_name##tsd_booted); \ a_name##tsd_tls = (*val); \ if (a_cleanup != malloc_tsd_no_cleanup) { \ if (pthread_setspecific(a_name##tsd_tsd, \ (void *)(&a_name##tsd_tls))) { \ malloc_write(": Error" \ " setting TSD for "#a_name"\n"); \ if (opt_abort) \ abort(); \ } \ } \ } #elif (defined(_WIN32)) #define malloc_tsd_funcs(a_attr, a_name, a_type, a_initializer, \ a_cleanup) \ /* Initialization/cleanup. */ \ a_attr bool \ a_name##tsd_cleanup_wrapper(void) \ { \ DWORD error = GetLastError(); \ a_name##tsd_wrapper_t *wrapper = (a_name##tsd_wrapper_t *) \ TlsGetValue(a_name##tsd_tsd); \ SetLastError(error); \ \ if (wrapper == NULL) \ return (false); \ if (a_cleanup != malloc_tsd_no_cleanup && \ wrapper->initialized) { \ wrapper->initialized = false; \ a_cleanup(&wrapper->val); \ if (wrapper->initialized) { \ /* Trigger another cleanup round. */ \ return (true); \ } \ } \ malloc_tsd_dalloc(wrapper); \ return (false); \ } \ a_attr void \ a_name##tsd_wrapper_set(a_name##tsd_wrapper_t *wrapper) \ { \ \ if (!TlsSetValue(a_name##tsd_tsd, (void *)wrapper)) { \ malloc_write(": Error setting" \ " TSD for "#a_name"\n"); \ abort(); \ } \ } \ a_attr a_name##tsd_wrapper_t * \ a_name##tsd_wrapper_get(void) \ { \ DWORD error = GetLastError(); \ a_name##tsd_wrapper_t *wrapper = (a_name##tsd_wrapper_t *) \ TlsGetValue(a_name##tsd_tsd); \ SetLastError(error); \ \ if (unlikely(wrapper == NULL)) { \ wrapper = (a_name##tsd_wrapper_t *) \ malloc_tsd_malloc(sizeof(a_name##tsd_wrapper_t)); \ if (wrapper == NULL) { \ malloc_write(": Error allocating" \ " TSD for "#a_name"\n"); \ abort(); \ } else { \ wrapper->initialized = false; \ wrapper->val = a_initializer; \ } \ a_name##tsd_wrapper_set(wrapper); \ } \ return (wrapper); \ } \ a_attr bool \ a_name##tsd_boot0(void) \ { \ \ a_name##tsd_tsd = TlsAlloc(); \ if (a_name##tsd_tsd == TLS_OUT_OF_INDEXES) \ return (true); \ if (a_cleanup != malloc_tsd_no_cleanup) { \ malloc_tsd_cleanup_register( \ &a_name##tsd_cleanup_wrapper); \ } \ a_name##tsd_wrapper_set(&a_name##tsd_boot_wrapper); \ a_name##tsd_booted = true; \ return (false); \ } \ a_attr void \ a_name##tsd_boot1(void) \ { \ a_name##tsd_wrapper_t *wrapper; \ wrapper = (a_name##tsd_wrapper_t *) \ malloc_tsd_malloc(sizeof(a_name##tsd_wrapper_t)); \ if (wrapper == NULL) { \ malloc_write(": Error allocating" \ " TSD for "#a_name"\n"); \ abort(); \ } \ memcpy(wrapper, &a_name##tsd_boot_wrapper, \ sizeof(a_name##tsd_wrapper_t)); \ a_name##tsd_wrapper_set(wrapper); \ } \ a_attr bool \ a_name##tsd_boot(void) \ { \ \ if (a_name##tsd_boot0()) \ return (true); \ a_name##tsd_boot1(); \ return (false); \ } \ +a_attr bool \ +a_name##tsd_booted_get(void) \ +{ \ + \ + return (a_name##tsd_booted); \ +} \ /* Get/set. 
*/ \ a_attr a_type * \ a_name##tsd_get(void) \ { \ a_name##tsd_wrapper_t *wrapper; \ \ assert(a_name##tsd_booted); \ wrapper = a_name##tsd_wrapper_get(); \ return (&wrapper->val); \ } \ a_attr void \ a_name##tsd_set(a_type *val) \ { \ a_name##tsd_wrapper_t *wrapper; \ \ assert(a_name##tsd_booted); \ wrapper = a_name##tsd_wrapper_get(); \ wrapper->val = *(val); \ if (a_cleanup != malloc_tsd_no_cleanup) \ wrapper->initialized = true; \ } #else #define malloc_tsd_funcs(a_attr, a_name, a_type, a_initializer, \ a_cleanup) \ /* Initialization/cleanup. */ \ a_attr void \ a_name##tsd_cleanup_wrapper(void *arg) \ { \ a_name##tsd_wrapper_t *wrapper = (a_name##tsd_wrapper_t *)arg; \ \ if (a_cleanup != malloc_tsd_no_cleanup && \ wrapper->initialized) { \ wrapper->initialized = false; \ a_cleanup(&wrapper->val); \ if (wrapper->initialized) { \ /* Trigger another cleanup round. */ \ if (pthread_setspecific(a_name##tsd_tsd, \ (void *)wrapper)) { \ malloc_write(": Error" \ " setting TSD for "#a_name"\n"); \ if (opt_abort) \ abort(); \ } \ return; \ } \ } \ malloc_tsd_dalloc(wrapper); \ } \ a_attr void \ a_name##tsd_wrapper_set(a_name##tsd_wrapper_t *wrapper) \ { \ \ if (pthread_setspecific(a_name##tsd_tsd, \ (void *)wrapper)) { \ malloc_write(": Error setting" \ " TSD for "#a_name"\n"); \ abort(); \ } \ } \ a_attr a_name##tsd_wrapper_t * \ a_name##tsd_wrapper_get(void) \ { \ a_name##tsd_wrapper_t *wrapper = (a_name##tsd_wrapper_t *) \ pthread_getspecific(a_name##tsd_tsd); \ \ if (unlikely(wrapper == NULL)) { \ tsd_init_block_t block; \ wrapper = tsd_init_check_recursion( \ &a_name##tsd_init_head, &block); \ if (wrapper) \ return (wrapper); \ wrapper = (a_name##tsd_wrapper_t *) \ malloc_tsd_malloc(sizeof(a_name##tsd_wrapper_t)); \ block.data = wrapper; \ if (wrapper == NULL) { \ malloc_write(": Error allocating" \ " TSD for "#a_name"\n"); \ abort(); \ } else { \ wrapper->initialized = false; \ wrapper->val = a_initializer; \ } \ a_name##tsd_wrapper_set(wrapper); \ tsd_init_finish(&a_name##tsd_init_head, &block); \ } \ return (wrapper); \ } \ a_attr bool \ a_name##tsd_boot0(void) \ { \ \ if (pthread_key_create(&a_name##tsd_tsd, \ a_name##tsd_cleanup_wrapper) != 0) \ return (true); \ a_name##tsd_wrapper_set(&a_name##tsd_boot_wrapper); \ a_name##tsd_booted = true; \ return (false); \ } \ a_attr void \ a_name##tsd_boot1(void) \ { \ a_name##tsd_wrapper_t *wrapper; \ wrapper = (a_name##tsd_wrapper_t *) \ malloc_tsd_malloc(sizeof(a_name##tsd_wrapper_t)); \ if (wrapper == NULL) { \ malloc_write(": Error allocating" \ " TSD for "#a_name"\n"); \ abort(); \ } \ memcpy(wrapper, &a_name##tsd_boot_wrapper, \ sizeof(a_name##tsd_wrapper_t)); \ a_name##tsd_wrapper_set(wrapper); \ } \ a_attr bool \ a_name##tsd_boot(void) \ { \ \ if (a_name##tsd_boot0()) \ return (true); \ a_name##tsd_boot1(); \ return (false); \ } \ +a_attr bool \ +a_name##tsd_booted_get(void) \ +{ \ + \ + return (a_name##tsd_booted); \ +} \ /* Get/set. 
*/ \ a_attr a_type * \ a_name##tsd_get(void) \ { \ a_name##tsd_wrapper_t *wrapper; \ \ assert(a_name##tsd_booted); \ wrapper = a_name##tsd_wrapper_get(); \ return (&wrapper->val); \ } \ a_attr void \ a_name##tsd_set(a_type *val) \ { \ a_name##tsd_wrapper_t *wrapper; \ \ assert(a_name##tsd_booted); \ wrapper = a_name##tsd_wrapper_get(); \ wrapper->val = *(val); \ if (a_cleanup != malloc_tsd_no_cleanup) \ wrapper->initialized = true; \ } #endif #endif /* JEMALLOC_H_TYPES */ /******************************************************************************/ #ifdef JEMALLOC_H_STRUCTS #if (!defined(JEMALLOC_MALLOC_THREAD_CLEANUP) && !defined(JEMALLOC_TLS) && \ !defined(_WIN32)) struct tsd_init_block_s { ql_elm(tsd_init_block_t) link; pthread_t thread; void *data; }; struct tsd_init_head_s { ql_head(tsd_init_block_t) blocks; malloc_mutex_t lock; }; #endif #define MALLOC_TSD \ /* O(name, type) */ \ O(tcache, tcache_t *) \ O(thread_allocated, uint64_t) \ O(thread_deallocated, uint64_t) \ O(prof_tdata, prof_tdata_t *) \ + O(iarena, arena_t *) \ O(arena, arena_t *) \ O(arenas_tdata, arena_tdata_t *) \ O(narenas_tdata, unsigned) \ O(arenas_tdata_bypass, bool) \ O(tcache_enabled, tcache_enabled_t) \ O(quarantine, quarantine_t *) \ + O(witnesses, witness_list_t) \ + O(witness_fork, bool) \ #define TSD_INITIALIZER { \ tsd_state_uninitialized, \ NULL, \ 0, \ 0, \ NULL, \ NULL, \ NULL, \ + NULL, \ 0, \ false, \ tcache_enabled_default, \ - NULL \ + NULL, \ + ql_head_initializer(witnesses), \ + false \ } struct tsd_s { tsd_state_t state; #define O(n, t) \ t n; MALLOC_TSD #undef O }; +/* + * Wrapper around tsd_t that makes it possible to avoid implicit conversion + * between tsd_t and tsdn_t, where tsdn_t is "nullable" and has to be + * explicitly converted to tsd_t, which is non-nullable. 
+ */ +struct tsdn_s { + tsd_t tsd; +}; + static const tsd_t tsd_initializer = TSD_INITIALIZER; malloc_tsd_types(, tsd_t) #endif /* JEMALLOC_H_STRUCTS */ /******************************************************************************/ #ifdef JEMALLOC_H_EXTERNS void *malloc_tsd_malloc(size_t size); void malloc_tsd_dalloc(void *wrapper); void malloc_tsd_no_cleanup(void *arg); void malloc_tsd_cleanup_register(bool (*f)(void)); -bool malloc_tsd_boot0(void); +tsd_t *malloc_tsd_boot0(void); void malloc_tsd_boot1(void); #if (!defined(JEMALLOC_MALLOC_THREAD_CLEANUP) && !defined(JEMALLOC_TLS) && \ !defined(_WIN32)) void *tsd_init_check_recursion(tsd_init_head_t *head, tsd_init_block_t *block); void tsd_init_finish(tsd_init_head_t *head, tsd_init_block_t *block); #endif void tsd_cleanup(void *arg); #endif /* JEMALLOC_H_EXTERNS */ /******************************************************************************/ #ifdef JEMALLOC_H_INLINES #ifndef JEMALLOC_ENABLE_INLINE malloc_tsd_protos(JEMALLOC_ATTR(unused), , tsd_t) tsd_t *tsd_fetch(void); +tsdn_t *tsd_tsdn(tsd_t *tsd); bool tsd_nominal(tsd_t *tsd); #define O(n, t) \ t *tsd_##n##p_get(tsd_t *tsd); \ t tsd_##n##_get(tsd_t *tsd); \ void tsd_##n##_set(tsd_t *tsd, t n); MALLOC_TSD #undef O +tsdn_t *tsdn_fetch(void); +bool tsdn_null(const tsdn_t *tsdn); +tsd_t *tsdn_tsd(tsdn_t *tsdn); #endif #if (defined(JEMALLOC_ENABLE_INLINE) || defined(JEMALLOC_TSD_C_)) malloc_tsd_externs(, tsd_t) malloc_tsd_funcs(JEMALLOC_ALWAYS_INLINE, , tsd_t, tsd_initializer, tsd_cleanup) JEMALLOC_ALWAYS_INLINE tsd_t * tsd_fetch(void) { tsd_t *tsd = tsd_get(); if (unlikely(tsd->state != tsd_state_nominal)) { if (tsd->state == tsd_state_uninitialized) { tsd->state = tsd_state_nominal; /* Trigger cleanup handler registration. */ tsd_set(tsd); } else if (tsd->state == tsd_state_purgatory) { tsd->state = tsd_state_reincarnated; tsd_set(tsd); } else assert(tsd->state == tsd_state_reincarnated); } return (tsd); } +JEMALLOC_ALWAYS_INLINE tsdn_t * +tsd_tsdn(tsd_t *tsd) +{ + + return ((tsdn_t *)tsd); +} + JEMALLOC_INLINE bool tsd_nominal(tsd_t *tsd) { return (tsd->state == tsd_state_nominal); } #define O(n, t) \ JEMALLOC_ALWAYS_INLINE t * \ tsd_##n##p_get(tsd_t *tsd) \ { \ \ return (&tsd->n); \ } \ \ JEMALLOC_ALWAYS_INLINE t \ tsd_##n##_get(tsd_t *tsd) \ { \ \ return (*tsd_##n##p_get(tsd)); \ } \ \ JEMALLOC_ALWAYS_INLINE void \ tsd_##n##_set(tsd_t *tsd, t n) \ { \ \ assert(tsd->state == tsd_state_nominal); \ tsd->n = n; \ } MALLOC_TSD #undef O + +JEMALLOC_ALWAYS_INLINE tsdn_t * +tsdn_fetch(void) +{ + + if (!tsd_booted_get()) + return (NULL); + + return (tsd_tsdn(tsd_fetch())); +} + +JEMALLOC_ALWAYS_INLINE bool +tsdn_null(const tsdn_t *tsdn) +{ + + return (tsdn == NULL); +} + +JEMALLOC_ALWAYS_INLINE tsd_t * +tsdn_tsd(tsdn_t *tsdn) +{ + + assert(!tsdn_null(tsdn)); + + return (&tsdn->tsd); +} #endif #endif /* JEMALLOC_H_INLINES */ /******************************************************************************/ Index: head/contrib/jemalloc/include/jemalloc/internal/util.h =================================================================== --- head/contrib/jemalloc/include/jemalloc/internal/util.h (revision 299586) +++ head/contrib/jemalloc/include/jemalloc/internal/util.h (revision 299587) @@ -1,344 +1,348 @@ /******************************************************************************/ #ifdef JEMALLOC_H_TYPES #ifdef _WIN32 # ifdef _WIN64 # define FMT64_PREFIX "ll" # define FMTPTR_PREFIX "ll" # else # define FMT64_PREFIX "ll" # define FMTPTR_PREFIX "" # endif # define FMTd32 "d" # define FMTu32 
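/*
 * Self-contained sketch of the tsdn_t convention introduced above (toy_*
 * names are illustrative, not jemalloc API).  A tsdn_t handle is "nullable":
 * callers that may run before TSD is booted, or during teardown, pass NULL,
 * and callees must check tsdn_null() before converting back to the
 * non-nullable tsd_t with tsdn_tsd().  The wrapper struct makes the compiler
 * flag any implicit mixing of the two types.
 */
#include <stdbool.h>
#include <stddef.h>

typedef struct { int allocated; } toy_tsd_t;
typedef struct { toy_tsd_t tsd; } toy_tsdn_t;	/* Nullable wrapper. */

static inline bool
toy_tsdn_null(const toy_tsdn_t *tsdn)
{
	return (tsdn == NULL);
}

static inline toy_tsdn_t *
toy_tsd_tsdn(toy_tsd_t *tsd)
{
	/* Valid because toy_tsd_t is the first (and only) member. */
	return ((toy_tsdn_t *)tsd);
}

static inline toy_tsd_t *
toy_tsdn_tsd(toy_tsdn_t *tsdn)
{
	return (&tsdn->tsd);
}

static void
toy_do_work(toy_tsdn_t *tsdn)
{
	if (!toy_tsdn_null(tsdn))
		toy_tsdn_tsd(tsdn)->allocated++;	/* tsd-dependent bookkeeping. */
	/* Work that needs no thread-specific data proceeds either way. */
}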
"u" # define FMTx32 "x" # define FMTd64 FMT64_PREFIX "d" # define FMTu64 FMT64_PREFIX "u" # define FMTx64 FMT64_PREFIX "x" # define FMTdPTR FMTPTR_PREFIX "d" # define FMTuPTR FMTPTR_PREFIX "u" # define FMTxPTR FMTPTR_PREFIX "x" #else # include # define FMTd32 PRId32 # define FMTu32 PRIu32 # define FMTx32 PRIx32 # define FMTd64 PRId64 # define FMTu64 PRIu64 # define FMTx64 PRIx64 # define FMTdPTR PRIdPTR # define FMTuPTR PRIuPTR # define FMTxPTR PRIxPTR #endif /* Size of stack-allocated buffer passed to buferror(). */ #define BUFERROR_BUF 64 /* * Size of stack-allocated buffer used by malloc_{,v,vc}printf(). This must be * large enough for all possible uses within jemalloc. */ #define MALLOC_PRINTF_BUFSIZE 4096 +/* Junk fill patterns. */ +#define JEMALLOC_ALLOC_JUNK ((uint8_t)0xa5) +#define JEMALLOC_FREE_JUNK ((uint8_t)0x5a) + /* * Wrap a cpp argument that contains commas such that it isn't broken up into * multiple arguments. */ #define JEMALLOC_ARG_CONCAT(...) __VA_ARGS__ /* * Silence compiler warnings due to uninitialized values. This is used * wherever the compiler fails to recognize that the variable is never used * uninitialized. */ #ifdef JEMALLOC_CC_SILENCE # define JEMALLOC_CC_SILENCE_INIT(v) = v #else # define JEMALLOC_CC_SILENCE_INIT(v) #endif #define JEMALLOC_GNUC_PREREQ(major, minor) \ (!defined(__clang__) && \ (__GNUC__ > (major) || (__GNUC__ == (major) && __GNUC_MINOR__ >= (minor)))) #ifndef __has_builtin # define __has_builtin(builtin) (0) #endif #define JEMALLOC_CLANG_HAS_BUILTIN(builtin) \ (defined(__clang__) && __has_builtin(builtin)) #ifdef __GNUC__ # define likely(x) __builtin_expect(!!(x), 1) # define unlikely(x) __builtin_expect(!!(x), 0) # if JEMALLOC_GNUC_PREREQ(4, 6) || \ JEMALLOC_CLANG_HAS_BUILTIN(__builtin_unreachable) # define unreachable() __builtin_unreachable() # else -# define unreachable() +# define unreachable() abort() # endif #else # define likely(x) !!(x) # define unlikely(x) !!(x) -# define unreachable() +# define unreachable() abort() #endif #include "jemalloc/internal/assert.h" /* Use to assert a particular configuration, e.g., cassert(config_debug). */ #define cassert(c) do { \ if (unlikely(!(c))) \ not_reached(); \ } while (0) #endif /* JEMALLOC_H_TYPES */ /******************************************************************************/ #ifdef JEMALLOC_H_STRUCTS #endif /* JEMALLOC_H_STRUCTS */ /******************************************************************************/ #ifdef JEMALLOC_H_EXTERNS int buferror(int err, char *buf, size_t buflen); uintmax_t malloc_strtoumax(const char *restrict nptr, char **restrict endptr, int base); void malloc_write(const char *s); /* * malloc_vsnprintf() supports a subset of snprintf(3) that avoids floating * point math. */ -int malloc_vsnprintf(char *str, size_t size, const char *format, +size_t malloc_vsnprintf(char *str, size_t size, const char *format, va_list ap); -int malloc_snprintf(char *str, size_t size, const char *format, ...) +size_t malloc_snprintf(char *str, size_t size, const char *format, ...) JEMALLOC_FORMAT_PRINTF(3, 4); void malloc_vcprintf(void (*write_cb)(void *, const char *), void *cbopaque, const char *format, va_list ap); void malloc_cprintf(void (*write)(void *, const char *), void *cbopaque, const char *format, ...) JEMALLOC_FORMAT_PRINTF(3, 4); void malloc_printf(const char *format, ...) 
JEMALLOC_FORMAT_PRINTF(1, 2); #endif /* JEMALLOC_H_EXTERNS */ /******************************************************************************/ #ifdef JEMALLOC_H_INLINES #ifndef JEMALLOC_ENABLE_INLINE unsigned ffs_llu(unsigned long long bitmap); unsigned ffs_lu(unsigned long bitmap); unsigned ffs_u(unsigned bitmap); unsigned ffs_zu(size_t bitmap); unsigned ffs_u64(uint64_t bitmap); unsigned ffs_u32(uint32_t bitmap); uint64_t pow2_ceil_u64(uint64_t x); uint32_t pow2_ceil_u32(uint32_t x); size_t pow2_ceil_zu(size_t x); unsigned lg_floor(size_t x); void set_errno(int errnum); int get_errno(void); #endif #if (defined(JEMALLOC_ENABLE_INLINE) || defined(JEMALLOC_UTIL_C_)) /* Sanity check. */ #if !defined(JEMALLOC_INTERNAL_FFSLL) || !defined(JEMALLOC_INTERNAL_FFSL) \ || !defined(JEMALLOC_INTERNAL_FFS) # error JEMALLOC_INTERNAL_FFS{,L,LL} should have been defined by configure #endif JEMALLOC_ALWAYS_INLINE unsigned ffs_llu(unsigned long long bitmap) { return (JEMALLOC_INTERNAL_FFSLL(bitmap)); } JEMALLOC_ALWAYS_INLINE unsigned ffs_lu(unsigned long bitmap) { return (JEMALLOC_INTERNAL_FFSL(bitmap)); } JEMALLOC_ALWAYS_INLINE unsigned ffs_u(unsigned bitmap) { return (JEMALLOC_INTERNAL_FFS(bitmap)); } JEMALLOC_ALWAYS_INLINE unsigned ffs_zu(size_t bitmap) { #if LG_SIZEOF_PTR == LG_SIZEOF_INT return (ffs_u(bitmap)); #elif LG_SIZEOF_PTR == LG_SIZEOF_LONG return (ffs_lu(bitmap)); #elif LG_SIZEOF_PTR == LG_SIZEOF_LONG_LONG return (ffs_llu(bitmap)); #else #error No implementation for size_t ffs() #endif } JEMALLOC_ALWAYS_INLINE unsigned ffs_u64(uint64_t bitmap) { #if LG_SIZEOF_LONG == 3 return (ffs_lu(bitmap)); #elif LG_SIZEOF_LONG_LONG == 3 return (ffs_llu(bitmap)); #else #error No implementation for 64-bit ffs() #endif } JEMALLOC_ALWAYS_INLINE unsigned ffs_u32(uint32_t bitmap) { #if LG_SIZEOF_INT == 2 return (ffs_u(bitmap)); #else #error No implementation for 32-bit ffs() #endif return (ffs_u(bitmap)); } JEMALLOC_INLINE uint64_t pow2_ceil_u64(uint64_t x) { x--; x |= x >> 1; x |= x >> 2; x |= x >> 4; x |= x >> 8; x |= x >> 16; x |= x >> 32; x++; return (x); } JEMALLOC_INLINE uint32_t pow2_ceil_u32(uint32_t x) { x--; x |= x >> 1; x |= x >> 2; x |= x >> 4; x |= x >> 8; x |= x >> 16; x++; return (x); } /* Compute the smallest power of 2 that is >= x. */ JEMALLOC_INLINE size_t pow2_ceil_zu(size_t x) { #if (LG_SIZEOF_PTR == 3) return (pow2_ceil_u64(x)); #else return (pow2_ceil_u32(x)); #endif } #if (defined(__i386__) || defined(__amd64__) || defined(__x86_64__)) JEMALLOC_INLINE unsigned lg_floor(size_t x) { size_t ret; assert(x != 0); asm ("bsr %1, %0" : "=r"(ret) // Outputs. : "r"(x) // Inputs. 
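/*
 * Worked example for the bit-smearing trick in pow2_ceil_u32() above, for
 * x == 37:
 *
 *	x--                       ->  36 == 0b100100
 *	x |= x >> 1, 2, 4, 8, 16  ->  63 == 0b111111  (MSB and every bit below it set)
 *	x++                       ->  64
 *
 * The initial decrement keeps exact powers of two unchanged, e.g.
 * pow2_ceil_u32(64) == 64; the 64-bit variant only adds the x >> 32 step.
 */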
); assert(ret < UINT_MAX); return ((unsigned)ret); } #elif (defined(_MSC_VER)) JEMALLOC_INLINE unsigned lg_floor(size_t x) { unsigned long ret; assert(x != 0); #if (LG_SIZEOF_PTR == 3) _BitScanReverse64(&ret, x); #elif (LG_SIZEOF_PTR == 2) _BitScanReverse(&ret, x); #else # error "Unsupported type size for lg_floor()" #endif assert(ret < UINT_MAX); return ((unsigned)ret); } #elif (defined(JEMALLOC_HAVE_BUILTIN_CLZ)) JEMALLOC_INLINE unsigned lg_floor(size_t x) { assert(x != 0); #if (LG_SIZEOF_PTR == LG_SIZEOF_INT) return (((8 << LG_SIZEOF_PTR) - 1) - __builtin_clz(x)); #elif (LG_SIZEOF_PTR == LG_SIZEOF_LONG) return (((8 << LG_SIZEOF_PTR) - 1) - __builtin_clzl(x)); #else # error "Unsupported type size for lg_floor()" #endif } #else JEMALLOC_INLINE unsigned lg_floor(size_t x) { assert(x != 0); x |= (x >> 1); x |= (x >> 2); x |= (x >> 4); x |= (x >> 8); x |= (x >> 16); #if (LG_SIZEOF_PTR == 3) x |= (x >> 32); #endif if (x == SIZE_T_MAX) return ((8 << LG_SIZEOF_PTR) - 1); x++; return (ffs_zu(x) - 2); } #endif /* Set error code. */ JEMALLOC_INLINE void set_errno(int errnum) { #ifdef _WIN32 SetLastError(errnum); #else errno = errnum; #endif } /* Get last error code. */ JEMALLOC_INLINE int get_errno(void) { #ifdef _WIN32 return (GetLastError()); #else return (errno); #endif } #endif #endif /* JEMALLOC_H_INLINES */ /******************************************************************************/ Index: head/contrib/jemalloc/include/jemalloc/internal/valgrind.h =================================================================== --- head/contrib/jemalloc/include/jemalloc/internal/valgrind.h (revision 299586) +++ head/contrib/jemalloc/include/jemalloc/internal/valgrind.h (revision 299587) @@ -1,112 +1,114 @@ /******************************************************************************/ #ifdef JEMALLOC_H_TYPES #ifdef JEMALLOC_VALGRIND #include /* * The size that is reported to Valgrind must be consistent through a chain of * malloc..realloc..realloc calls. Request size isn't recorded anywhere in * jemalloc, so it is critical that all callers of these macros provide usize * rather than request size. As a result, buffer overflow detection is * technically weakened for the standard API, though it is generally accepted * practice to consider any extra bytes reported by malloc_usable_size() as * usable space. */ #define JEMALLOC_VALGRIND_MAKE_MEM_NOACCESS(ptr, usize) do { \ if (unlikely(in_valgrind)) \ valgrind_make_mem_noaccess(ptr, usize); \ } while (0) #define JEMALLOC_VALGRIND_MAKE_MEM_UNDEFINED(ptr, usize) do { \ if (unlikely(in_valgrind)) \ valgrind_make_mem_undefined(ptr, usize); \ } while (0) #define JEMALLOC_VALGRIND_MAKE_MEM_DEFINED(ptr, usize) do { \ if (unlikely(in_valgrind)) \ valgrind_make_mem_defined(ptr, usize); \ } while (0) /* * The VALGRIND_MALLOCLIKE_BLOCK() and VALGRIND_RESIZEINPLACE_BLOCK() macro * calls must be embedded in macros rather than in functions so that when * Valgrind reports errors, there are no extra stack frames in the backtraces. 
*/ -#define JEMALLOC_VALGRIND_MALLOC(cond, ptr, usize, zero) do { \ - if (unlikely(in_valgrind && cond)) \ - VALGRIND_MALLOCLIKE_BLOCK(ptr, usize, p2rz(ptr), zero); \ +#define JEMALLOC_VALGRIND_MALLOC(cond, tsdn, ptr, usize, zero) do { \ + if (unlikely(in_valgrind && cond)) { \ + VALGRIND_MALLOCLIKE_BLOCK(ptr, usize, p2rz(tsdn, ptr), \ + zero); \ + } \ } while (0) -#define JEMALLOC_VALGRIND_REALLOC(maybe_moved, ptr, usize, \ +#define JEMALLOC_VALGRIND_REALLOC(maybe_moved, tsdn, ptr, usize, \ ptr_maybe_null, old_ptr, old_usize, old_rzsize, old_ptr_maybe_null, \ zero) do { \ if (unlikely(in_valgrind)) { \ - size_t rzsize = p2rz(ptr); \ + size_t rzsize = p2rz(tsdn, ptr); \ \ if (!maybe_moved || ptr == old_ptr) { \ VALGRIND_RESIZEINPLACE_BLOCK(ptr, old_usize, \ usize, rzsize); \ if (zero && old_usize < usize) { \ valgrind_make_mem_defined( \ (void *)((uintptr_t)ptr + \ old_usize), usize - old_usize); \ } \ } else { \ if (!old_ptr_maybe_null || old_ptr != NULL) { \ valgrind_freelike_block(old_ptr, \ old_rzsize); \ } \ if (!ptr_maybe_null || ptr != NULL) { \ size_t copy_size = (old_usize < usize) \ ? old_usize : usize; \ size_t tail_size = usize - copy_size; \ VALGRIND_MALLOCLIKE_BLOCK(ptr, usize, \ rzsize, false); \ if (copy_size > 0) { \ valgrind_make_mem_defined(ptr, \ copy_size); \ } \ if (zero && tail_size > 0) { \ valgrind_make_mem_defined( \ (void *)((uintptr_t)ptr + \ copy_size), tail_size); \ } \ } \ } \ } \ } while (0) #define JEMALLOC_VALGRIND_FREE(ptr, rzsize) do { \ if (unlikely(in_valgrind)) \ valgrind_freelike_block(ptr, rzsize); \ } while (0) #else #define RUNNING_ON_VALGRIND ((unsigned)0) #define JEMALLOC_VALGRIND_MAKE_MEM_NOACCESS(ptr, usize) do {} while (0) #define JEMALLOC_VALGRIND_MAKE_MEM_UNDEFINED(ptr, usize) do {} while (0) #define JEMALLOC_VALGRIND_MAKE_MEM_DEFINED(ptr, usize) do {} while (0) -#define JEMALLOC_VALGRIND_MALLOC(cond, ptr, usize, zero) do {} while (0) -#define JEMALLOC_VALGRIND_REALLOC(maybe_moved, ptr, usize, \ +#define JEMALLOC_VALGRIND_MALLOC(cond, tsdn, ptr, usize, zero) do {} while (0) +#define JEMALLOC_VALGRIND_REALLOC(maybe_moved, tsdn, ptr, usize, \ ptr_maybe_null, old_ptr, old_usize, old_rzsize, old_ptr_maybe_null, \ zero) do {} while (0) #define JEMALLOC_VALGRIND_FREE(ptr, rzsize) do {} while (0) #endif #endif /* JEMALLOC_H_TYPES */ /******************************************************************************/ #ifdef JEMALLOC_H_STRUCTS #endif /* JEMALLOC_H_STRUCTS */ /******************************************************************************/ #ifdef JEMALLOC_H_EXTERNS #ifdef JEMALLOC_VALGRIND void valgrind_make_mem_noaccess(void *ptr, size_t usize); void valgrind_make_mem_undefined(void *ptr, size_t usize); void valgrind_make_mem_defined(void *ptr, size_t usize); void valgrind_freelike_block(void *ptr, size_t usize); #endif #endif /* JEMALLOC_H_EXTERNS */ /******************************************************************************/ #ifdef JEMALLOC_H_INLINES #endif /* JEMALLOC_H_INLINES */ /******************************************************************************/ Index: head/contrib/jemalloc/include/jemalloc/internal/witness.h =================================================================== --- head/contrib/jemalloc/include/jemalloc/internal/witness.h (nonexistent) +++ head/contrib/jemalloc/include/jemalloc/internal/witness.h (revision 299587) @@ -0,0 +1,249 @@ +/******************************************************************************/ +#ifdef JEMALLOC_H_TYPES + +typedef struct witness_s witness_t; +typedef unsigned 
witness_rank_t; +typedef ql_head(witness_t) witness_list_t; +typedef int witness_comp_t (const witness_t *, const witness_t *); + +/* + * Lock ranks. Witnesses with rank WITNESS_RANK_OMIT are completely ignored by + * the witness machinery. + */ +#define WITNESS_RANK_OMIT 0U + +#define WITNESS_RANK_INIT 1U +#define WITNESS_RANK_CTL 1U +#define WITNESS_RANK_ARENAS 2U + +#define WITNESS_RANK_PROF_DUMP 3U +#define WITNESS_RANK_PROF_BT2GCTX 4U +#define WITNESS_RANK_PROF_TDATAS 5U +#define WITNESS_RANK_PROF_TDATA 6U +#define WITNESS_RANK_PROF_GCTX 7U + +#define WITNESS_RANK_ARENA 8U +#define WITNESS_RANK_ARENA_CHUNKS 9U +#define WITNESS_RANK_ARENA_NODE_CACHE 10 + +#define WITNESS_RANK_BASE 11U + +#define WITNESS_RANK_LEAF 0xffffffffU +#define WITNESS_RANK_ARENA_BIN WITNESS_RANK_LEAF +#define WITNESS_RANK_ARENA_HUGE WITNESS_RANK_LEAF +#define WITNESS_RANK_DSS WITNESS_RANK_LEAF +#define WITNESS_RANK_PROF_ACTIVE WITNESS_RANK_LEAF +#define WITNESS_RANK_PROF_DUMP_SEQ WITNESS_RANK_LEAF +#define WITNESS_RANK_PROF_GDUMP WITNESS_RANK_LEAF +#define WITNESS_RANK_PROF_NEXT_THR_UID WITNESS_RANK_LEAF +#define WITNESS_RANK_PROF_THREAD_ACTIVE_INIT WITNESS_RANK_LEAF + +#define WITNESS_INITIALIZER(rank) {"initializer", rank, NULL, {NULL, NULL}} + +#endif /* JEMALLOC_H_TYPES */ +/******************************************************************************/ +#ifdef JEMALLOC_H_STRUCTS + +struct witness_s { + /* Name, used for printing lock order reversal messages. */ + const char *name; + + /* + * Witness rank, where 0 is lowest and UINT_MAX is highest. Witnesses + * must be acquired in order of increasing rank. + */ + witness_rank_t rank; + + /* + * If two witnesses are of equal rank and they have the samp comp + * function pointer, it is called as a last attempt to differentiate + * between witnesses of equal rank. + */ + witness_comp_t *comp; + + /* Linkage for thread's currently owned locks. 
*/ + ql_elm(witness_t) link; +}; + +#endif /* JEMALLOC_H_STRUCTS */ +/******************************************************************************/ +#ifdef JEMALLOC_H_EXTERNS + +void witness_init(witness_t *witness, const char *name, witness_rank_t rank, + witness_comp_t *comp); +#ifdef JEMALLOC_JET +typedef void (witness_lock_error_t)(const witness_list_t *, const witness_t *); +extern witness_lock_error_t *witness_lock_error; +#else +void witness_lock_error(const witness_list_t *witnesses, + const witness_t *witness); +#endif +#ifdef JEMALLOC_JET +typedef void (witness_owner_error_t)(const witness_t *); +extern witness_owner_error_t *witness_owner_error; +#else +void witness_owner_error(const witness_t *witness); +#endif +#ifdef JEMALLOC_JET +typedef void (witness_not_owner_error_t)(const witness_t *); +extern witness_not_owner_error_t *witness_not_owner_error; +#else +void witness_not_owner_error(const witness_t *witness); +#endif +#ifdef JEMALLOC_JET +typedef void (witness_lockless_error_t)(const witness_list_t *); +extern witness_lockless_error_t *witness_lockless_error; +#else +void witness_lockless_error(const witness_list_t *witnesses); +#endif + +void witnesses_cleanup(tsd_t *tsd); +void witness_fork_cleanup(tsd_t *tsd); +void witness_prefork(tsd_t *tsd); +void witness_postfork_parent(tsd_t *tsd); +void witness_postfork_child(tsd_t *tsd); + +#endif /* JEMALLOC_H_EXTERNS */ +/******************************************************************************/ +#ifdef JEMALLOC_H_INLINES + +#ifndef JEMALLOC_ENABLE_INLINE +void witness_assert_owner(tsdn_t *tsdn, const witness_t *witness); +void witness_assert_not_owner(tsdn_t *tsdn, const witness_t *witness); +void witness_assert_lockless(tsdn_t *tsdn); +void witness_lock(tsdn_t *tsdn, witness_t *witness); +void witness_unlock(tsdn_t *tsdn, witness_t *witness); +#endif + +#if (defined(JEMALLOC_ENABLE_INLINE) || defined(JEMALLOC_MUTEX_C_)) +JEMALLOC_INLINE void +witness_assert_owner(tsdn_t *tsdn, const witness_t *witness) +{ + tsd_t *tsd; + witness_list_t *witnesses; + witness_t *w; + + if (!config_debug) + return; + + if (tsdn_null(tsdn)) + return; + tsd = tsdn_tsd(tsdn); + if (witness->rank == WITNESS_RANK_OMIT) + return; + + witnesses = tsd_witnessesp_get(tsd); + ql_foreach(w, witnesses, link) { + if (w == witness) + return; + } + witness_owner_error(witness); +} + +JEMALLOC_INLINE void +witness_assert_not_owner(tsdn_t *tsdn, const witness_t *witness) +{ + tsd_t *tsd; + witness_list_t *witnesses; + witness_t *w; + + if (!config_debug) + return; + + if (tsdn_null(tsdn)) + return; + tsd = tsdn_tsd(tsdn); + if (witness->rank == WITNESS_RANK_OMIT) + return; + + witnesses = tsd_witnessesp_get(tsd); + ql_foreach(w, witnesses, link) { + if (w == witness) + witness_not_owner_error(witness); + } +} + +JEMALLOC_INLINE void +witness_assert_lockless(tsdn_t *tsdn) +{ + tsd_t *tsd; + witness_list_t *witnesses; + witness_t *w; + + if (!config_debug) + return; + + if (tsdn_null(tsdn)) + return; + tsd = tsdn_tsd(tsdn); + + witnesses = tsd_witnessesp_get(tsd); + w = ql_last(witnesses, link); + if (w != NULL) + witness_lockless_error(witnesses); +} + +JEMALLOC_INLINE void +witness_lock(tsdn_t *tsdn, witness_t *witness) +{ + tsd_t *tsd; + witness_list_t *witnesses; + witness_t *w; + + if (!config_debug) + return; + + if (tsdn_null(tsdn)) + return; + tsd = tsdn_tsd(tsdn); + if (witness->rank == WITNESS_RANK_OMIT) + return; + + witness_assert_not_owner(tsdn, witness); + + witnesses = tsd_witnessesp_get(tsd); + w = ql_last(witnesses, link); + if (w == NULL) 
{ + /* No other locks; do nothing. */ + } else if (tsd_witness_fork_get(tsd) && w->rank <= witness->rank) { + /* Forking, and relaxed ranking satisfied. */ + } else if (w->rank > witness->rank) { + /* Not forking, rank order reversal. */ + witness_lock_error(witnesses, witness); + } else if (w->rank == witness->rank && (w->comp == NULL || w->comp != + witness->comp || w->comp(w, witness) > 0)) { + /* + * Missing/incompatible comparison function, or comparison + * function indicates rank order reversal. + */ + witness_lock_error(witnesses, witness); + } + + ql_elm_new(witness, link); + ql_tail_insert(witnesses, witness, link); +} + +JEMALLOC_INLINE void +witness_unlock(tsdn_t *tsdn, witness_t *witness) +{ + tsd_t *tsd; + witness_list_t *witnesses; + + if (!config_debug) + return; + + if (tsdn_null(tsdn)) + return; + tsd = tsdn_tsd(tsdn); + if (witness->rank == WITNESS_RANK_OMIT) + return; + + witness_assert_owner(tsdn, witness); + + witnesses = tsd_witnessesp_get(tsd); + ql_remove(witnesses, witness, link); +} +#endif + +#endif /* JEMALLOC_H_INLINES */ +/******************************************************************************/ Property changes on: head/contrib/jemalloc/include/jemalloc/internal/witness.h ___________________________________________________________________ Added: svn:eol-style ## -0,0 +1 ## +native \ No newline at end of property Added: svn:keywords ## -0,0 +1 ## +FreeBSD=%H \ No newline at end of property Added: svn:mime-type ## -0,0 +1 ## +text/plain \ No newline at end of property Index: head/contrib/jemalloc/include/jemalloc/jemalloc.h =================================================================== --- head/contrib/jemalloc/include/jemalloc/jemalloc.h (revision 299586) +++ head/contrib/jemalloc/include/jemalloc/jemalloc.h (revision 299587) @@ -1,381 +1,381 @@ #ifndef JEMALLOC_H_ #define JEMALLOC_H_ #ifdef __cplusplus extern "C" { #endif /* Defined if __attribute__((...)) syntax is supported. */ #define JEMALLOC_HAVE_ATTR /* Defined if alloc_size attribute is supported. */ /* #undef JEMALLOC_HAVE_ATTR_ALLOC_SIZE */ /* Defined if format(gnu_printf, ...) attribute is supported. */ /* #undef JEMALLOC_HAVE_ATTR_FORMAT_GNU_PRINTF */ /* Defined if format(printf, ...) attribute is supported. */ #define JEMALLOC_HAVE_ATTR_FORMAT_PRINTF /* * Define overrides for non-standard allocator-related functions if they are * present on the system. */ /* #undef JEMALLOC_OVERRIDE_MEMALIGN */ #define JEMALLOC_OVERRIDE_VALLOC /* * At least Linux omits the "const" in: * * size_t malloc_usable_size(const void *ptr); * * Match the operating system's prototype. */ #define JEMALLOC_USABLE_SIZE_CONST const /* * If defined, specify throw() for the public function prototypes when compiling * with C++. The only justification for this is to match the prototypes that * glibc defines. */ /* #undef JEMALLOC_USE_CXX_THROW */ #ifdef _MSC_VER # ifdef _WIN64 # define LG_SIZEOF_PTR_WIN 3 # else # define LG_SIZEOF_PTR_WIN 2 # endif #endif /* sizeof(void *) == 2^LG_SIZEOF_PTR. */ #define LG_SIZEOF_PTR 3 /* * Name mangling for public symbols is controlled by --with-mangling and * --with-jemalloc-prefix. With default settings the je_ prefix is stripped by * these macro definitions. 
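The witness_lock() body above is the heart of the new lock-order checking: every lock carries a rank, a per-thread list records the witnesses currently held, and acquiring a lock whose rank does not exceed the most recently acquired one is reported as a reversal (with a relaxed rule while forking, and a comparator for equal ranks). A rough standalone sketch of the same idea, not jemalloc's implementation, with hypothetical names and the equal-rank comparator omitted:

#include <assert.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical ranked mutex: lower ranks must always be acquired first. */
typedef struct {
	pthread_mutex_t	mtx;
	unsigned	rank;
} ranked_mutex_t;

static __thread unsigned	held_ranks[16];	/* Ranks held by this thread. */
static __thread unsigned	nheld;

static void
ranked_mutex_lock(ranked_mutex_t *rm)
{

	/* A held lock with an equal or higher rank means a reversal. */
	if (nheld > 0 && held_ranks[nheld - 1] >= rm->rank) {
		fprintf(stderr, "lock order reversal: rank %u after %u\n",
		    rm->rank, held_ranks[nheld - 1]);
		abort();
	}
	pthread_mutex_lock(&rm->mtx);
	held_ranks[nheld++] = rm->rank;
}

static void
ranked_mutex_unlock(ranked_mutex_t *rm)
{

	assert(nheld > 0 && held_ranks[nheld - 1] == rm->rank);
	nheld--;
	pthread_mutex_unlock(&rm->mtx);
}

Releases are assumed to be LIFO here purely to keep the sketch short; witness_unlock() above removes an arbitrary element from the per-thread list instead.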
*/ #ifndef JEMALLOC_NO_RENAME # define je_malloc_conf malloc_conf # define je_malloc_message malloc_message # define je_malloc malloc # define je_calloc calloc # define je_posix_memalign posix_memalign # define je_aligned_alloc aligned_alloc # define je_realloc realloc # define je_free free # define je_mallocx mallocx # define je_rallocx rallocx # define je_xallocx xallocx # define je_sallocx sallocx # define je_dallocx dallocx # define je_sdallocx sdallocx # define je_nallocx nallocx # define je_mallctl mallctl # define je_mallctlnametomib mallctlnametomib # define je_mallctlbymib mallctlbymib # define je_malloc_stats_print malloc_stats_print # define je_malloc_usable_size malloc_usable_size # define je_valloc valloc #endif #include "jemalloc_FreeBSD.h" #include <stdlib.h> #include <stdbool.h> #include <stdint.h> #include <limits.h> #include <strings.h> -#define JEMALLOC_VERSION "4.1.0-1-g994da4232621dd1210fcf39bdf0d6454cefda473" +#define JEMALLOC_VERSION "4.2.0-1-gdc7ff6306d7a15b53479e2fb8e5546404b82e6fc" #define JEMALLOC_VERSION_MAJOR 4 -#define JEMALLOC_VERSION_MINOR 1 +#define JEMALLOC_VERSION_MINOR 2 #define JEMALLOC_VERSION_BUGFIX 0 #define JEMALLOC_VERSION_NREV 1 -#define JEMALLOC_VERSION_GID "994da4232621dd1210fcf39bdf0d6454cefda473" +#define JEMALLOC_VERSION_GID "dc7ff6306d7a15b53479e2fb8e5546404b82e6fc" # define MALLOCX_LG_ALIGN(la) ((int)(la)) # if LG_SIZEOF_PTR == 2 -# define MALLOCX_ALIGN(a) ((int)(ffs(a)-1)) +# define MALLOCX_ALIGN(a) ((int)(ffs((int)(a))-1)) # else # define MALLOCX_ALIGN(a) \ - ((int)(((a) < (size_t)INT_MAX) ? ffs((int)(a))-1 : \ - ffs((int)((a)>>32))+31)) + ((int)(((size_t)(a) < (size_t)INT_MAX) ? ffs((int)(a))-1 : \ + ffs((int)(((size_t)(a))>>32))+31)) # endif # define MALLOCX_ZERO ((int)0x40) /* * Bias tcache index bits so that 0 encodes "automatic tcache management", and 1 * encodes MALLOCX_TCACHE_NONE. */ # define MALLOCX_TCACHE(tc) ((int)(((tc)+2) << 8)) # define MALLOCX_TCACHE_NONE MALLOCX_TCACHE(-1) /* * Bias arena index bits so that 0 encodes "use an automatically chosen arena".
*/ -# define MALLOCX_ARENA(a) ((int)(((a)+1) << 20)) +# define MALLOCX_ARENA(a) ((((int)(a))+1) << 20) #if defined(__cplusplus) && defined(JEMALLOC_USE_CXX_THROW) # define JEMALLOC_CXX_THROW throw() #else # define JEMALLOC_CXX_THROW #endif #if _MSC_VER # define JEMALLOC_ATTR(s) # define JEMALLOC_ALIGNED(s) __declspec(align(s)) # define JEMALLOC_ALLOC_SIZE(s) # define JEMALLOC_ALLOC_SIZE2(s1, s2) # ifndef JEMALLOC_EXPORT # ifdef DLLEXPORT # define JEMALLOC_EXPORT __declspec(dllexport) # else # define JEMALLOC_EXPORT __declspec(dllimport) # endif # endif # define JEMALLOC_FORMAT_PRINTF(s, i) # define JEMALLOC_NOINLINE __declspec(noinline) # ifdef __cplusplus # define JEMALLOC_NOTHROW __declspec(nothrow) # else # define JEMALLOC_NOTHROW # endif # define JEMALLOC_SECTION(s) __declspec(allocate(s)) # define JEMALLOC_RESTRICT_RETURN __declspec(restrict) # if _MSC_VER >= 1900 && !defined(__EDG__) # define JEMALLOC_ALLOCATOR __declspec(allocator) # else # define JEMALLOC_ALLOCATOR # endif #elif defined(JEMALLOC_HAVE_ATTR) # define JEMALLOC_ATTR(s) __attribute__((s)) # define JEMALLOC_ALIGNED(s) JEMALLOC_ATTR(aligned(s)) # ifdef JEMALLOC_HAVE_ATTR_ALLOC_SIZE # define JEMALLOC_ALLOC_SIZE(s) JEMALLOC_ATTR(alloc_size(s)) # define JEMALLOC_ALLOC_SIZE2(s1, s2) JEMALLOC_ATTR(alloc_size(s1, s2)) # else # define JEMALLOC_ALLOC_SIZE(s) # define JEMALLOC_ALLOC_SIZE2(s1, s2) # endif # ifndef JEMALLOC_EXPORT # define JEMALLOC_EXPORT JEMALLOC_ATTR(visibility("default")) # endif # ifdef JEMALLOC_HAVE_ATTR_FORMAT_GNU_PRINTF # define JEMALLOC_FORMAT_PRINTF(s, i) JEMALLOC_ATTR(format(gnu_printf, s, i)) # elif defined(JEMALLOC_HAVE_ATTR_FORMAT_PRINTF) # define JEMALLOC_FORMAT_PRINTF(s, i) JEMALLOC_ATTR(format(printf, s, i)) # else # define JEMALLOC_FORMAT_PRINTF(s, i) # endif # define JEMALLOC_NOINLINE JEMALLOC_ATTR(noinline) # define JEMALLOC_NOTHROW JEMALLOC_ATTR(nothrow) # define JEMALLOC_SECTION(s) JEMALLOC_ATTR(section(s)) # define JEMALLOC_RESTRICT_RETURN # define JEMALLOC_ALLOCATOR #else # define JEMALLOC_ATTR(s) # define JEMALLOC_ALIGNED(s) # define JEMALLOC_ALLOC_SIZE(s) # define JEMALLOC_ALLOC_SIZE2(s1, s2) # define JEMALLOC_EXPORT # define JEMALLOC_FORMAT_PRINTF(s, i) # define JEMALLOC_NOINLINE # define JEMALLOC_NOTHROW # define JEMALLOC_SECTION(s) # define JEMALLOC_RESTRICT_RETURN # define JEMALLOC_ALLOCATOR #endif /* * The je_ prefix on the following public symbol declarations is an artifact * of namespace management, and should be omitted in application code unless * JEMALLOC_NO_DEMANGLE is defined (see jemalloc_mangle.h). 
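The MALLOCX_* bias encodings above exist so that alignment, zeroing, tcache selection, and arena selection can all be packed into the single flags argument of the *allocx() functions, and with the default FreeBSD configuration the unprefixed names from the mangling block are used directly. A brief usage sketch of that public API (declared on FreeBSD in <malloc_np.h>; the sizes and flag combination are arbitrary):

#include <stddef.h>
#include <malloc_np.h>	/* FreeBSD declarations of mallocx() and friends. */

static int
allocx_example(void)
{
	void *p;

	/* 4 KiB, zero-filled, 64-byte aligned, bypassing the thread cache. */
	p = mallocx(4096, MALLOCX_ALIGN(64) | MALLOCX_ZERO |
	    MALLOCX_TCACHE_NONE);
	if (p == NULL)
		return (1);

	/* Try to grow in place up to 8 KiB; xallocx() never moves the object. */
	(void)xallocx(p, 4096, 4096, 0);

	dallocx(p, MALLOCX_TCACHE_NONE);
	return (0);
}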
*/ extern JEMALLOC_EXPORT const char *je_malloc_conf; extern JEMALLOC_EXPORT void (*je_malloc_message)(void *cbopaque, const char *s); JEMALLOC_EXPORT JEMALLOC_ALLOCATOR JEMALLOC_RESTRICT_RETURN void JEMALLOC_NOTHROW *je_malloc(size_t size) JEMALLOC_CXX_THROW JEMALLOC_ATTR(malloc) JEMALLOC_ALLOC_SIZE(1); JEMALLOC_EXPORT JEMALLOC_ALLOCATOR JEMALLOC_RESTRICT_RETURN void JEMALLOC_NOTHROW *je_calloc(size_t num, size_t size) JEMALLOC_CXX_THROW JEMALLOC_ATTR(malloc) JEMALLOC_ALLOC_SIZE2(1, 2); JEMALLOC_EXPORT int JEMALLOC_NOTHROW je_posix_memalign(void **memptr, size_t alignment, size_t size) JEMALLOC_CXX_THROW JEMALLOC_ATTR(nonnull(1)); JEMALLOC_EXPORT JEMALLOC_ALLOCATOR JEMALLOC_RESTRICT_RETURN void JEMALLOC_NOTHROW *je_aligned_alloc(size_t alignment, size_t size) JEMALLOC_CXX_THROW JEMALLOC_ATTR(malloc) JEMALLOC_ALLOC_SIZE(2); JEMALLOC_EXPORT JEMALLOC_ALLOCATOR JEMALLOC_RESTRICT_RETURN void JEMALLOC_NOTHROW *je_realloc(void *ptr, size_t size) JEMALLOC_CXX_THROW JEMALLOC_ALLOC_SIZE(2); JEMALLOC_EXPORT void JEMALLOC_NOTHROW je_free(void *ptr) JEMALLOC_CXX_THROW; JEMALLOC_EXPORT JEMALLOC_ALLOCATOR JEMALLOC_RESTRICT_RETURN void JEMALLOC_NOTHROW *je_mallocx(size_t size, int flags) JEMALLOC_ATTR(malloc) JEMALLOC_ALLOC_SIZE(1); JEMALLOC_EXPORT JEMALLOC_ALLOCATOR JEMALLOC_RESTRICT_RETURN void JEMALLOC_NOTHROW *je_rallocx(void *ptr, size_t size, int flags) JEMALLOC_ALLOC_SIZE(2); JEMALLOC_EXPORT size_t JEMALLOC_NOTHROW je_xallocx(void *ptr, size_t size, size_t extra, int flags); JEMALLOC_EXPORT size_t JEMALLOC_NOTHROW je_sallocx(const void *ptr, int flags) JEMALLOC_ATTR(pure); JEMALLOC_EXPORT void JEMALLOC_NOTHROW je_dallocx(void *ptr, int flags); JEMALLOC_EXPORT void JEMALLOC_NOTHROW je_sdallocx(void *ptr, size_t size, int flags); JEMALLOC_EXPORT size_t JEMALLOC_NOTHROW je_nallocx(size_t size, int flags) JEMALLOC_ATTR(pure); JEMALLOC_EXPORT int JEMALLOC_NOTHROW je_mallctl(const char *name, void *oldp, size_t *oldlenp, void *newp, size_t newlen); JEMALLOC_EXPORT int JEMALLOC_NOTHROW je_mallctlnametomib(const char *name, size_t *mibp, size_t *miblenp); JEMALLOC_EXPORT int JEMALLOC_NOTHROW je_mallctlbymib(const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, void *newp, size_t newlen); JEMALLOC_EXPORT void JEMALLOC_NOTHROW je_malloc_stats_print( void (*write_cb)(void *, const char *), void *je_cbopaque, const char *opts); JEMALLOC_EXPORT size_t JEMALLOC_NOTHROW je_malloc_usable_size( JEMALLOC_USABLE_SIZE_CONST void *ptr) JEMALLOC_CXX_THROW; #ifdef JEMALLOC_OVERRIDE_MEMALIGN JEMALLOC_EXPORT JEMALLOC_ALLOCATOR JEMALLOC_RESTRICT_RETURN void JEMALLOC_NOTHROW *je_memalign(size_t alignment, size_t size) JEMALLOC_CXX_THROW JEMALLOC_ATTR(malloc); #endif #ifdef JEMALLOC_OVERRIDE_VALLOC JEMALLOC_EXPORT JEMALLOC_ALLOCATOR JEMALLOC_RESTRICT_RETURN void JEMALLOC_NOTHROW *je_valloc(size_t size) JEMALLOC_CXX_THROW JEMALLOC_ATTR(malloc); #endif /* * void * * chunk_alloc(void *new_addr, size_t size, size_t alignment, bool *zero, * bool *commit, unsigned arena_ind); */ typedef void *(chunk_alloc_t)(void *, size_t, size_t, bool *, bool *, unsigned); /* * bool * chunk_dalloc(void *chunk, size_t size, bool committed, unsigned arena_ind); */ typedef bool (chunk_dalloc_t)(void *, size_t, bool, unsigned); /* * bool * chunk_commit(void *chunk, size_t size, size_t offset, size_t length, * unsigned arena_ind); */ typedef bool (chunk_commit_t)(void *, size_t, size_t, size_t, unsigned); /* * bool * chunk_decommit(void *chunk, size_t size, size_t offset, size_t length, * unsigned arena_ind); */ typedef bool 
(chunk_decommit_t)(void *, size_t, size_t, size_t, unsigned); /* * bool * chunk_purge(void *chunk, size_t size, size_t offset, size_t length, * unsigned arena_ind); */ typedef bool (chunk_purge_t)(void *, size_t, size_t, size_t, unsigned); /* * bool * chunk_split(void *chunk, size_t size, size_t size_a, size_t size_b, * bool committed, unsigned arena_ind); */ typedef bool (chunk_split_t)(void *, size_t, size_t, size_t, bool, unsigned); /* * bool * chunk_merge(void *chunk_a, size_t size_a, void *chunk_b, size_t size_b, * bool committed, unsigned arena_ind); */ typedef bool (chunk_merge_t)(void *, size_t, void *, size_t, bool, unsigned); typedef struct { chunk_alloc_t *alloc; chunk_dalloc_t *dalloc; chunk_commit_t *commit; chunk_decommit_t *decommit; chunk_purge_t *purge; chunk_split_t *split; chunk_merge_t *merge; } chunk_hooks_t; /* * By default application code must explicitly refer to mangled symbol names, * so that it is possible to use jemalloc in conjunction with another allocator * in the same application. Define JEMALLOC_MANGLE in order to cause automatic * name mangling that matches the API prefixing that happened as a result of * --with-mangling and/or --with-jemalloc-prefix configuration settings. */ #ifdef JEMALLOC_MANGLE # ifndef JEMALLOC_NO_DEMANGLE # define JEMALLOC_NO_DEMANGLE # endif # define malloc_conf je_malloc_conf # define malloc_message je_malloc_message # define malloc je_malloc # define calloc je_calloc # define posix_memalign je_posix_memalign # define aligned_alloc je_aligned_alloc # define realloc je_realloc # define free je_free # define mallocx je_mallocx # define rallocx je_rallocx # define xallocx je_xallocx # define sallocx je_sallocx # define dallocx je_dallocx # define sdallocx je_sdallocx # define nallocx je_nallocx # define mallctl je_mallctl # define mallctlnametomib je_mallctlnametomib # define mallctlbymib je_mallctlbymib # define malloc_stats_print je_malloc_stats_print # define malloc_usable_size je_malloc_usable_size # define valloc je_valloc #endif /* * The je_* macros can be used as stable alternative names for the * public jemalloc API if JEMALLOC_NO_DEMANGLE is defined. This is primarily * meant for use in jemalloc itself, but it can be used by application code to * provide isolation from the name mangling specified via --with-mangling * and/or --with-jemalloc-prefix. */ #ifndef JEMALLOC_NO_DEMANGLE # undef je_malloc_conf # undef je_malloc_message # undef je_malloc # undef je_calloc # undef je_posix_memalign # undef je_aligned_alloc # undef je_realloc # undef je_free # undef je_mallocx # undef je_rallocx # undef je_xallocx # undef je_sallocx # undef je_dallocx # undef je_sdallocx # undef je_nallocx # undef je_mallctl # undef je_mallctlnametomib # undef je_mallctlbymib # undef je_malloc_stats_print # undef je_malloc_usable_size # undef je_valloc #endif #ifdef __cplusplus } #endif #endif /* JEMALLOC_H_ */ Index: head/contrib/jemalloc/src/arena.c =================================================================== --- head/contrib/jemalloc/src/arena.c (revision 299586) +++ head/contrib/jemalloc/src/arena.c (revision 299587) @@ -1,3689 +1,3891 @@ #define JEMALLOC_ARENA_C_ #include "jemalloc/internal/jemalloc_internal.h" /******************************************************************************/ /* Data. 
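The chunk_hooks_t table above is what an application hands to the arena.<i>.chunk_hooks mallctl in order to interpose on chunk management. A minimal sketch that wraps only the allocation hook and delegates everything else to the defaults; the name log_alloc and the use of arena index 0 are illustrative, and it assumes these declarations are visible (on FreeBSD via <malloc_np.h>):

#include <stdbool.h>
#include <stdio.h>
#include <malloc_np.h>

static chunk_hooks_t	default_hooks;	/* Saved default hook table. */

/* Matches chunk_alloc_t above; logs each chunk allocation, then delegates. */
static void *
log_alloc(void *new_addr, size_t size, size_t alignment, bool *zero,
    bool *commit, unsigned arena_ind)
{
	void *ret;

	ret = default_hooks.alloc(new_addr, size, alignment, zero, commit,
	    arena_ind);
	/* A real hook would avoid stdio here; this is only for illustration. */
	fprintf(stderr, "chunk alloc %zu -> %p (arena %u)\n", size, ret,
	    arena_ind);
	return (ret);
}

static void
install_logging_hooks(void)
{
	chunk_hooks_t hooks;
	size_t sz = sizeof(chunk_hooks_t);

	/* Read arena 0's current (default) hooks, then override only alloc. */
	if (mallctl("arena.0.chunk_hooks", &default_hooks, &sz, NULL, 0) != 0)
		return;
	hooks = default_hooks;
	hooks.alloc = log_alloc;
	mallctl("arena.0.chunk_hooks", NULL, NULL, &hooks, sizeof(hooks));
}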
*/ purge_mode_t opt_purge = PURGE_DEFAULT; const char *purge_mode_names[] = { "ratio", "decay", "N/A" }; ssize_t opt_lg_dirty_mult = LG_DIRTY_MULT_DEFAULT; static ssize_t lg_dirty_mult_default; ssize_t opt_decay_time = DECAY_TIME_DEFAULT; static ssize_t decay_time_default; arena_bin_info_t arena_bin_info[NBINS]; size_t map_bias; size_t map_misc_offset; size_t arena_maxrun; /* Max run size for arenas. */ size_t large_maxclass; /* Max large size class. */ size_t run_quantize_max; /* Max run_quantize_*() input. */ static size_t small_maxrun; /* Max run size for small size classes. */ static bool *small_run_tab; /* Valid small run page multiples. */ static size_t *run_quantize_floor_tab; /* run_quantize_floor() memoization. */ static size_t *run_quantize_ceil_tab; /* run_quantize_ceil() memoization. */ unsigned nlclasses; /* Number of large size classes. */ unsigned nhclasses; /* Number of huge size classes. */ static szind_t runs_avail_bias; /* Size index for first runs_avail tree. */ static szind_t runs_avail_nclasses; /* Number of runs_avail trees. */ /******************************************************************************/ /* * Function prototypes for static functions that are referenced prior to * definition. */ -static void arena_purge_to_limit(arena_t *arena, size_t ndirty_limit); -static void arena_run_dalloc(arena_t *arena, arena_run_t *run, bool dirty, - bool cleaned, bool decommitted); -static void arena_dalloc_bin_run(arena_t *arena, arena_chunk_t *chunk, - arena_run_t *run, arena_bin_t *bin); +static void arena_purge_to_limit(tsdn_t *tsdn, arena_t *arena, + size_t ndirty_limit); +static void arena_run_dalloc(tsdn_t *tsdn, arena_t *arena, arena_run_t *run, + bool dirty, bool cleaned, bool decommitted); +static void arena_dalloc_bin_run(tsdn_t *tsdn, arena_t *arena, + arena_chunk_t *chunk, arena_run_t *run, arena_bin_t *bin); static void arena_bin_lower_run(arena_t *arena, arena_chunk_t *chunk, arena_run_t *run, arena_bin_t *bin); /******************************************************************************/ JEMALLOC_INLINE_C size_t arena_miscelm_size_get(const arena_chunk_map_misc_t *miscelm) { arena_chunk_t *chunk; size_t pageind, mapbits; chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(miscelm); pageind = arena_miscelm_to_pageind(miscelm); mapbits = arena_mapbits_get(chunk, pageind); return (arena_mapbits_size_decode(mapbits)); } JEMALLOC_INLINE_C int arena_run_addr_comp(const arena_chunk_map_misc_t *a, const arena_chunk_map_misc_t *b) { uintptr_t a_miscelm = (uintptr_t)a; uintptr_t b_miscelm = (uintptr_t)b; assert(a != NULL); assert(b != NULL); return ((a_miscelm > b_miscelm) - (a_miscelm < b_miscelm)); } -/* Generate red-black tree functions. */ -rb_gen(static UNUSED, arena_run_tree_, arena_run_tree_t, arena_chunk_map_misc_t, - rb_link, arena_run_addr_comp) +/* Generate pairing heap functions. */ +ph_gen(static UNUSED, arena_run_heap_, arena_run_heap_t, arena_chunk_map_misc_t, + ph_link, arena_run_addr_comp) static size_t run_quantize_floor_compute(size_t size) { size_t qsize; assert(size != 0); assert(size == PAGE_CEILING(size)); /* Don't change sizes that are valid small run sizes. */ if (size <= small_maxrun && small_run_tab[size >> LG_PAGE]) return (size); /* * Round down to the nearest run size that can actually be requested * during normal large allocation. Add large_pad so that cache index * randomization can offset the allocation from the page boundary. 
*/ qsize = index2size(size2index(size - large_pad + 1) - 1) + large_pad; if (qsize <= SMALL_MAXCLASS + large_pad) return (run_quantize_floor_compute(size - large_pad)); assert(qsize <= size); return (qsize); } static size_t run_quantize_ceil_compute_hard(size_t size) { size_t large_run_size_next; assert(size != 0); assert(size == PAGE_CEILING(size)); /* * Return the next quantized size greater than the input size. * Quantized sizes comprise the union of run sizes that back small * region runs, and run sizes that back large regions with no explicit * alignment constraints. */ if (size > SMALL_MAXCLASS) { large_run_size_next = PAGE_CEILING(index2size(size2index(size - large_pad) + 1) + large_pad); } else large_run_size_next = SIZE_T_MAX; if (size >= small_maxrun) return (large_run_size_next); while (true) { size += PAGE; assert(size <= small_maxrun); if (small_run_tab[size >> LG_PAGE]) { if (large_run_size_next < size) return (large_run_size_next); return (size); } } } static size_t run_quantize_ceil_compute(size_t size) { size_t qsize = run_quantize_floor_compute(size); if (qsize < size) { /* * Skip a quantization that may have an adequately large run, * because under-sized runs may be mixed in. This only happens * when an unusual size is requested, i.e. for aligned * allocation, and is just one of several places where linear * search would potentially find sufficiently aligned available * memory somewhere lower. */ qsize = run_quantize_ceil_compute_hard(qsize); } return (qsize); } #ifdef JEMALLOC_JET #undef run_quantize_floor -#define run_quantize_floor JEMALLOC_N(run_quantize_floor_impl) +#define run_quantize_floor JEMALLOC_N(n_run_quantize_floor) #endif static size_t run_quantize_floor(size_t size) { size_t ret; assert(size > 0); assert(size <= run_quantize_max); assert((size & PAGE_MASK) == 0); ret = run_quantize_floor_tab[(size >> LG_PAGE) - 1]; assert(ret == run_quantize_floor_compute(size)); return (ret); } #ifdef JEMALLOC_JET #undef run_quantize_floor #define run_quantize_floor JEMALLOC_N(run_quantize_floor) -run_quantize_t *run_quantize_floor = JEMALLOC_N(run_quantize_floor_impl); +run_quantize_t *run_quantize_floor = JEMALLOC_N(n_run_quantize_floor); #endif #ifdef JEMALLOC_JET #undef run_quantize_ceil -#define run_quantize_ceil JEMALLOC_N(run_quantize_ceil_impl) +#define run_quantize_ceil JEMALLOC_N(n_run_quantize_ceil) #endif static size_t run_quantize_ceil(size_t size) { size_t ret; assert(size > 0); assert(size <= run_quantize_max); assert((size & PAGE_MASK) == 0); ret = run_quantize_ceil_tab[(size >> LG_PAGE) - 1]; assert(ret == run_quantize_ceil_compute(size)); return (ret); } #ifdef JEMALLOC_JET #undef run_quantize_ceil #define run_quantize_ceil JEMALLOC_N(run_quantize_ceil) -run_quantize_t *run_quantize_ceil = JEMALLOC_N(run_quantize_ceil_impl); +run_quantize_t *run_quantize_ceil = JEMALLOC_N(n_run_quantize_ceil); #endif -static arena_run_tree_t * +static arena_run_heap_t * arena_runs_avail_get(arena_t *arena, szind_t ind) { assert(ind >= runs_avail_bias); assert(ind - runs_avail_bias < runs_avail_nclasses); return (&arena->runs_avail[ind - runs_avail_bias]); } static void arena_avail_insert(arena_t *arena, arena_chunk_t *chunk, size_t pageind, size_t npages) { szind_t ind = size2index(run_quantize_floor(arena_miscelm_size_get( - arena_miscelm_get(chunk, pageind)))); + arena_miscelm_get_const(chunk, pageind)))); assert(npages == (arena_mapbits_unallocated_size_get(chunk, pageind) >> LG_PAGE)); - arena_run_tree_insert(arena_runs_avail_get(arena, ind), - 
arena_miscelm_get(chunk, pageind)); + arena_run_heap_insert(arena_runs_avail_get(arena, ind), + arena_miscelm_get_mutable(chunk, pageind)); } static void arena_avail_remove(arena_t *arena, arena_chunk_t *chunk, size_t pageind, size_t npages) { szind_t ind = size2index(run_quantize_floor(arena_miscelm_size_get( - arena_miscelm_get(chunk, pageind)))); + arena_miscelm_get_const(chunk, pageind)))); assert(npages == (arena_mapbits_unallocated_size_get(chunk, pageind) >> LG_PAGE)); - arena_run_tree_remove(arena_runs_avail_get(arena, ind), - arena_miscelm_get(chunk, pageind)); + arena_run_heap_remove(arena_runs_avail_get(arena, ind), + arena_miscelm_get_mutable(chunk, pageind)); } static void arena_run_dirty_insert(arena_t *arena, arena_chunk_t *chunk, size_t pageind, size_t npages) { - arena_chunk_map_misc_t *miscelm = arena_miscelm_get(chunk, pageind); + arena_chunk_map_misc_t *miscelm = arena_miscelm_get_mutable(chunk, + pageind); assert(npages == (arena_mapbits_unallocated_size_get(chunk, pageind) >> LG_PAGE)); assert(arena_mapbits_dirty_get(chunk, pageind) == CHUNK_MAP_DIRTY); assert(arena_mapbits_dirty_get(chunk, pageind+npages-1) == CHUNK_MAP_DIRTY); qr_new(&miscelm->rd, rd_link); qr_meld(&arena->runs_dirty, &miscelm->rd, rd_link); arena->ndirty += npages; } static void arena_run_dirty_remove(arena_t *arena, arena_chunk_t *chunk, size_t pageind, size_t npages) { - arena_chunk_map_misc_t *miscelm = arena_miscelm_get(chunk, pageind); + arena_chunk_map_misc_t *miscelm = arena_miscelm_get_mutable(chunk, + pageind); assert(npages == (arena_mapbits_unallocated_size_get(chunk, pageind) >> LG_PAGE)); assert(arena_mapbits_dirty_get(chunk, pageind) == CHUNK_MAP_DIRTY); assert(arena_mapbits_dirty_get(chunk, pageind+npages-1) == CHUNK_MAP_DIRTY); qr_remove(&miscelm->rd, rd_link); assert(arena->ndirty >= npages); arena->ndirty -= npages; } static size_t arena_chunk_dirty_npages(const extent_node_t *node) { return (extent_node_size_get(node) >> LG_PAGE); } void arena_chunk_cache_maybe_insert(arena_t *arena, extent_node_t *node, bool cache) { if (cache) { extent_node_dirty_linkage_init(node); extent_node_dirty_insert(node, &arena->runs_dirty, &arena->chunks_cache); arena->ndirty += arena_chunk_dirty_npages(node); } } void arena_chunk_cache_maybe_remove(arena_t *arena, extent_node_t *node, bool dirty) { if (dirty) { extent_node_dirty_remove(node); assert(arena->ndirty >= arena_chunk_dirty_npages(node)); arena->ndirty -= arena_chunk_dirty_npages(node); } } JEMALLOC_INLINE_C void * arena_run_reg_alloc(arena_run_t *run, arena_bin_info_t *bin_info) { void *ret; size_t regind; arena_chunk_map_misc_t *miscelm; void *rpages; assert(run->nfree > 0); assert(!bitmap_full(run->bitmap, &bin_info->bitmap_info)); regind = (unsigned)bitmap_sfu(run->bitmap, &bin_info->bitmap_info); miscelm = arena_run_to_miscelm(run); rpages = arena_miscelm_to_rpages(miscelm); ret = (void *)((uintptr_t)rpages + (uintptr_t)bin_info->reg0_offset + (uintptr_t)(bin_info->reg_interval * regind)); run->nfree--; return (ret); } JEMALLOC_INLINE_C void arena_run_reg_dalloc(arena_run_t *run, void *ptr) { arena_chunk_t *chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(run); size_t pageind = ((uintptr_t)ptr - (uintptr_t)chunk) >> LG_PAGE; size_t mapbits = arena_mapbits_get(chunk, pageind); szind_t binind = arena_ptr_small_binind_get(ptr, mapbits); arena_bin_info_t *bin_info = &arena_bin_info[binind]; size_t regind = arena_run_regind(run, bin_info, ptr); assert(run->nfree < bin_info->nregs); /* Freeing an interior pointer can cause assertion failure. 
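arena_run_reg_alloc() above converts the index of the first unset bitmap bit into a region address using reg0_offset and reg_interval. A simplified single-word sketch of that indexing; jemalloc's real bitmap is multi-level and sized per bin, and the toy_* names are purely illustrative:

#include <assert.h>
#include <stdint.h>

/* Toy run header: a single 64-bit word tracks up to 64 regions. */
typedef struct {
	uint64_t	bitmap;		/* Bit i set => region i allocated. */
	uint32_t	reg0_offset;	/* Offset of region 0 from run base. */
	uint32_t	reg_interval;	/* Distance between adjacent regions. */
} toy_run_t;

static void *
toy_reg_alloc(toy_run_t *run, void *rpages)
{
	unsigned regind;

	assert(~run->bitmap != 0);	/* At least one region is free. */
	regind = (unsigned)__builtin_ctzll(~run->bitmap); /* First clear bit. */
	run->bitmap |= (UINT64_C(1) << regind);
	return ((void *)((uintptr_t)rpages + run->reg0_offset +
	    (uintptr_t)run->reg_interval * regind));
}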
*/ assert(((uintptr_t)ptr - ((uintptr_t)arena_miscelm_to_rpages(arena_run_to_miscelm(run)) + (uintptr_t)bin_info->reg0_offset)) % (uintptr_t)bin_info->reg_interval == 0); assert((uintptr_t)ptr >= (uintptr_t)arena_miscelm_to_rpages(arena_run_to_miscelm(run)) + (uintptr_t)bin_info->reg0_offset); /* Freeing an unallocated pointer can cause assertion failure. */ assert(bitmap_get(run->bitmap, &bin_info->bitmap_info, regind)); bitmap_unset(run->bitmap, &bin_info->bitmap_info, regind); run->nfree++; } JEMALLOC_INLINE_C void arena_run_zero(arena_chunk_t *chunk, size_t run_ind, size_t npages) { JEMALLOC_VALGRIND_MAKE_MEM_UNDEFINED((void *)((uintptr_t)chunk + (run_ind << LG_PAGE)), (npages << LG_PAGE)); memset((void *)((uintptr_t)chunk + (run_ind << LG_PAGE)), 0, (npages << LG_PAGE)); } JEMALLOC_INLINE_C void arena_run_page_mark_zeroed(arena_chunk_t *chunk, size_t run_ind) { JEMALLOC_VALGRIND_MAKE_MEM_DEFINED((void *)((uintptr_t)chunk + (run_ind << LG_PAGE)), PAGE); } JEMALLOC_INLINE_C void arena_run_page_validate_zeroed(arena_chunk_t *chunk, size_t run_ind) { size_t i; UNUSED size_t *p = (size_t *)((uintptr_t)chunk + (run_ind << LG_PAGE)); arena_run_page_mark_zeroed(chunk, run_ind); for (i = 0; i < PAGE / sizeof(size_t); i++) assert(p[i] == 0); } static void arena_nactive_add(arena_t *arena, size_t add_pages) { if (config_stats) { size_t cactive_add = CHUNK_CEILING((arena->nactive + add_pages) << LG_PAGE) - CHUNK_CEILING(arena->nactive << LG_PAGE); if (cactive_add != 0) stats_cactive_add(cactive_add); } arena->nactive += add_pages; } static void arena_nactive_sub(arena_t *arena, size_t sub_pages) { if (config_stats) { size_t cactive_sub = CHUNK_CEILING(arena->nactive << LG_PAGE) - CHUNK_CEILING((arena->nactive - sub_pages) << LG_PAGE); if (cactive_sub != 0) stats_cactive_sub(cactive_sub); } arena->nactive -= sub_pages; } static void arena_run_split_remove(arena_t *arena, arena_chunk_t *chunk, size_t run_ind, size_t flag_dirty, size_t flag_decommitted, size_t need_pages) { size_t total_pages, rem_pages; assert(flag_dirty == 0 || flag_decommitted == 0); total_pages = arena_mapbits_unallocated_size_get(chunk, run_ind) >> LG_PAGE; assert(arena_mapbits_dirty_get(chunk, run_ind+total_pages-1) == flag_dirty); assert(need_pages <= total_pages); rem_pages = total_pages - need_pages; arena_avail_remove(arena, chunk, run_ind, total_pages); if (flag_dirty != 0) arena_run_dirty_remove(arena, chunk, run_ind, total_pages); arena_nactive_add(arena, need_pages); /* Keep track of trailing unused pages for later use. */ if (rem_pages > 0) { size_t flags = flag_dirty | flag_decommitted; size_t flag_unzeroed_mask = (flags == 0) ? 
CHUNK_MAP_UNZEROED : 0; arena_mapbits_unallocated_set(chunk, run_ind+need_pages, (rem_pages << LG_PAGE), flags | (arena_mapbits_unzeroed_get(chunk, run_ind+need_pages) & flag_unzeroed_mask)); arena_mapbits_unallocated_set(chunk, run_ind+total_pages-1, (rem_pages << LG_PAGE), flags | (arena_mapbits_unzeroed_get(chunk, run_ind+total_pages-1) & flag_unzeroed_mask)); if (flag_dirty != 0) { arena_run_dirty_insert(arena, chunk, run_ind+need_pages, rem_pages); } arena_avail_insert(arena, chunk, run_ind+need_pages, rem_pages); } } static bool arena_run_split_large_helper(arena_t *arena, arena_run_t *run, size_t size, bool remove, bool zero) { arena_chunk_t *chunk; arena_chunk_map_misc_t *miscelm; size_t flag_dirty, flag_decommitted, run_ind, need_pages; size_t flag_unzeroed_mask; chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(run); miscelm = arena_run_to_miscelm(run); run_ind = arena_miscelm_to_pageind(miscelm); flag_dirty = arena_mapbits_dirty_get(chunk, run_ind); flag_decommitted = arena_mapbits_decommitted_get(chunk, run_ind); need_pages = (size >> LG_PAGE); assert(need_pages > 0); if (flag_decommitted != 0 && arena->chunk_hooks.commit(chunk, chunksize, run_ind << LG_PAGE, size, arena->ind)) return (true); if (remove) { arena_run_split_remove(arena, chunk, run_ind, flag_dirty, flag_decommitted, need_pages); } if (zero) { if (flag_decommitted != 0) { /* The run is untouched, and therefore zeroed. */ JEMALLOC_VALGRIND_MAKE_MEM_DEFINED((void *)((uintptr_t)chunk + (run_ind << LG_PAGE)), (need_pages << LG_PAGE)); } else if (flag_dirty != 0) { /* The run is dirty, so all pages must be zeroed. */ arena_run_zero(chunk, run_ind, need_pages); } else { /* * The run is clean, so some pages may be zeroed (i.e. * never before touched). */ size_t i; for (i = 0; i < need_pages; i++) { if (arena_mapbits_unzeroed_get(chunk, run_ind+i) != 0) arena_run_zero(chunk, run_ind+i, 1); else if (config_debug) { arena_run_page_validate_zeroed(chunk, run_ind+i); } else { arena_run_page_mark_zeroed(chunk, run_ind+i); } } } } else { JEMALLOC_VALGRIND_MAKE_MEM_UNDEFINED((void *)((uintptr_t)chunk + (run_ind << LG_PAGE)), (need_pages << LG_PAGE)); } /* * Set the last element first, in case the run only contains one page * (i.e. both statements set the same element). */ flag_unzeroed_mask = (flag_dirty | flag_decommitted) == 0 ? 
CHUNK_MAP_UNZEROED : 0; arena_mapbits_large_set(chunk, run_ind+need_pages-1, 0, flag_dirty | (flag_unzeroed_mask & arena_mapbits_unzeroed_get(chunk, run_ind+need_pages-1))); arena_mapbits_large_set(chunk, run_ind, size, flag_dirty | (flag_unzeroed_mask & arena_mapbits_unzeroed_get(chunk, run_ind))); return (false); } static bool arena_run_split_large(arena_t *arena, arena_run_t *run, size_t size, bool zero) { return (arena_run_split_large_helper(arena, run, size, true, zero)); } static bool arena_run_init_large(arena_t *arena, arena_run_t *run, size_t size, bool zero) { return (arena_run_split_large_helper(arena, run, size, false, zero)); } static bool arena_run_split_small(arena_t *arena, arena_run_t *run, size_t size, szind_t binind) { arena_chunk_t *chunk; arena_chunk_map_misc_t *miscelm; size_t flag_dirty, flag_decommitted, run_ind, need_pages, i; assert(binind != BININD_INVALID); chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(run); miscelm = arena_run_to_miscelm(run); run_ind = arena_miscelm_to_pageind(miscelm); flag_dirty = arena_mapbits_dirty_get(chunk, run_ind); flag_decommitted = arena_mapbits_decommitted_get(chunk, run_ind); need_pages = (size >> LG_PAGE); assert(need_pages > 0); if (flag_decommitted != 0 && arena->chunk_hooks.commit(chunk, chunksize, run_ind << LG_PAGE, size, arena->ind)) return (true); arena_run_split_remove(arena, chunk, run_ind, flag_dirty, flag_decommitted, need_pages); for (i = 0; i < need_pages; i++) { size_t flag_unzeroed = arena_mapbits_unzeroed_get(chunk, run_ind+i); arena_mapbits_small_set(chunk, run_ind+i, i, binind, flag_unzeroed); if (config_debug && flag_dirty == 0 && flag_unzeroed == 0) arena_run_page_validate_zeroed(chunk, run_ind+i); } JEMALLOC_VALGRIND_MAKE_MEM_UNDEFINED((void *)((uintptr_t)chunk + (run_ind << LG_PAGE)), (need_pages << LG_PAGE)); return (false); } static arena_chunk_t * arena_chunk_init_spare(arena_t *arena) { arena_chunk_t *chunk; assert(arena->spare != NULL); chunk = arena->spare; arena->spare = NULL; assert(arena_mapbits_allocated_get(chunk, map_bias) == 0); assert(arena_mapbits_allocated_get(chunk, chunk_npages-1) == 0); assert(arena_mapbits_unallocated_size_get(chunk, map_bias) == arena_maxrun); assert(arena_mapbits_unallocated_size_get(chunk, chunk_npages-1) == arena_maxrun); assert(arena_mapbits_dirty_get(chunk, map_bias) == arena_mapbits_dirty_get(chunk, chunk_npages-1)); return (chunk); } static bool -arena_chunk_register(arena_t *arena, arena_chunk_t *chunk, bool zero) +arena_chunk_register(tsdn_t *tsdn, arena_t *arena, arena_chunk_t *chunk, + bool zero) { /* * The extent node notion of "committed" doesn't directly apply to * arena chunks. Arbitrarily mark them as committed. The commit state * of runs is tracked individually, and upon chunk deallocation the * entire chunk is in a consistent commit state. 
*/ extent_node_init(&chunk->node, arena, chunk, chunksize, zero, true); extent_node_achunk_set(&chunk->node, true); - return (chunk_register(chunk, &chunk->node)); + return (chunk_register(tsdn, chunk, &chunk->node)); } static arena_chunk_t * -arena_chunk_alloc_internal_hard(arena_t *arena, chunk_hooks_t *chunk_hooks, - bool *zero, bool *commit) +arena_chunk_alloc_internal_hard(tsdn_t *tsdn, arena_t *arena, + chunk_hooks_t *chunk_hooks, bool *zero, bool *commit) { arena_chunk_t *chunk; - malloc_mutex_unlock(&arena->lock); + malloc_mutex_unlock(tsdn, &arena->lock); - chunk = (arena_chunk_t *)chunk_alloc_wrapper(arena, chunk_hooks, NULL, - chunksize, chunksize, zero, commit); + chunk = (arena_chunk_t *)chunk_alloc_wrapper(tsdn, arena, chunk_hooks, + NULL, chunksize, chunksize, zero, commit); if (chunk != NULL && !*commit) { /* Commit header. */ if (chunk_hooks->commit(chunk, chunksize, 0, map_bias << LG_PAGE, arena->ind)) { - chunk_dalloc_wrapper(arena, chunk_hooks, - (void *)chunk, chunksize, *commit); + chunk_dalloc_wrapper(tsdn, arena, chunk_hooks, + (void *)chunk, chunksize, *zero, *commit); chunk = NULL; } } - if (chunk != NULL && arena_chunk_register(arena, chunk, *zero)) { + if (chunk != NULL && arena_chunk_register(tsdn, arena, chunk, *zero)) { if (!*commit) { /* Undo commit of header. */ chunk_hooks->decommit(chunk, chunksize, 0, map_bias << LG_PAGE, arena->ind); } - chunk_dalloc_wrapper(arena, chunk_hooks, (void *)chunk, - chunksize, *commit); + chunk_dalloc_wrapper(tsdn, arena, chunk_hooks, (void *)chunk, + chunksize, *zero, *commit); chunk = NULL; } - malloc_mutex_lock(&arena->lock); + malloc_mutex_lock(tsdn, &arena->lock); return (chunk); } static arena_chunk_t * -arena_chunk_alloc_internal(arena_t *arena, bool *zero, bool *commit) +arena_chunk_alloc_internal(tsdn_t *tsdn, arena_t *arena, bool *zero, + bool *commit) { arena_chunk_t *chunk; chunk_hooks_t chunk_hooks = CHUNK_HOOKS_INITIALIZER; - chunk = chunk_alloc_cache(arena, &chunk_hooks, NULL, chunksize, + chunk = chunk_alloc_cache(tsdn, arena, &chunk_hooks, NULL, chunksize, chunksize, zero, true); if (chunk != NULL) { - if (arena_chunk_register(arena, chunk, *zero)) { - chunk_dalloc_cache(arena, &chunk_hooks, chunk, + if (arena_chunk_register(tsdn, arena, chunk, *zero)) { + chunk_dalloc_cache(tsdn, arena, &chunk_hooks, chunk, chunksize, true); return (NULL); } *commit = true; } if (chunk == NULL) { - chunk = arena_chunk_alloc_internal_hard(arena, &chunk_hooks, - zero, commit); + chunk = arena_chunk_alloc_internal_hard(tsdn, arena, + &chunk_hooks, zero, commit); } if (config_stats && chunk != NULL) { arena->stats.mapped += chunksize; arena->stats.metadata_mapped += (map_bias << LG_PAGE); } return (chunk); } static arena_chunk_t * -arena_chunk_init_hard(arena_t *arena) +arena_chunk_init_hard(tsdn_t *tsdn, arena_t *arena) { arena_chunk_t *chunk; bool zero, commit; size_t flag_unzeroed, flag_decommitted, i; assert(arena->spare == NULL); zero = false; commit = false; - chunk = arena_chunk_alloc_internal(arena, &zero, &commit); + chunk = arena_chunk_alloc_internal(tsdn, arena, &zero, &commit); if (chunk == NULL) return (NULL); /* * Initialize the map to contain one maximal free untouched run. Mark - * the pages as zeroed if chunk_alloc() returned a zeroed or decommitted - * chunk. + * the pages as zeroed if arena_chunk_alloc_internal() returned a zeroed + * or decommitted chunk. */ flag_unzeroed = (zero || !commit) ? 0 : CHUNK_MAP_UNZEROED; flag_decommitted = commit ? 
0 : CHUNK_MAP_DECOMMITTED; arena_mapbits_unallocated_set(chunk, map_bias, arena_maxrun, flag_unzeroed | flag_decommitted); /* * There is no need to initialize the internal page map entries unless * the chunk is not zeroed. */ if (!zero) { JEMALLOC_VALGRIND_MAKE_MEM_UNDEFINED( - (void *)arena_bitselm_get(chunk, map_bias+1), - (size_t)((uintptr_t) arena_bitselm_get(chunk, - chunk_npages-1) - (uintptr_t)arena_bitselm_get(chunk, - map_bias+1))); + (void *)arena_bitselm_get_const(chunk, map_bias+1), + (size_t)((uintptr_t)arena_bitselm_get_const(chunk, + chunk_npages-1) - + (uintptr_t)arena_bitselm_get_const(chunk, map_bias+1))); for (i = map_bias+1; i < chunk_npages-1; i++) arena_mapbits_internal_set(chunk, i, flag_unzeroed); } else { JEMALLOC_VALGRIND_MAKE_MEM_DEFINED((void - *)arena_bitselm_get(chunk, map_bias+1), (size_t)((uintptr_t) - arena_bitselm_get(chunk, chunk_npages-1) - - (uintptr_t)arena_bitselm_get(chunk, map_bias+1))); + *)arena_bitselm_get_const(chunk, map_bias+1), + (size_t)((uintptr_t)arena_bitselm_get_const(chunk, + chunk_npages-1) - + (uintptr_t)arena_bitselm_get_const(chunk, map_bias+1))); if (config_debug) { for (i = map_bias+1; i < chunk_npages-1; i++) { assert(arena_mapbits_unzeroed_get(chunk, i) == flag_unzeroed); } } } arena_mapbits_unallocated_set(chunk, chunk_npages-1, arena_maxrun, flag_unzeroed); return (chunk); } static arena_chunk_t * -arena_chunk_alloc(arena_t *arena) +arena_chunk_alloc(tsdn_t *tsdn, arena_t *arena) { arena_chunk_t *chunk; if (arena->spare != NULL) chunk = arena_chunk_init_spare(arena); else { - chunk = arena_chunk_init_hard(arena); + chunk = arena_chunk_init_hard(tsdn, arena); if (chunk == NULL) return (NULL); } + ql_elm_new(&chunk->node, ql_link); + ql_tail_insert(&arena->achunks, &chunk->node, ql_link); arena_avail_insert(arena, chunk, map_bias, chunk_npages-map_bias); return (chunk); } static void -arena_chunk_dalloc(arena_t *arena, arena_chunk_t *chunk) +arena_chunk_discard(tsdn_t *tsdn, arena_t *arena, arena_chunk_t *chunk) { + bool committed; + chunk_hooks_t chunk_hooks = CHUNK_HOOKS_INITIALIZER; + chunk_deregister(chunk, &chunk->node); + + committed = (arena_mapbits_decommitted_get(chunk, map_bias) == 0); + if (!committed) { + /* + * Decommit the header. Mark the chunk as decommitted even if + * header decommit fails, since treating a partially committed + * chunk as committed has a high potential for causing later + * access of decommitted memory. 
+ */ + chunk_hooks = chunk_hooks_get(tsdn, arena); + chunk_hooks.decommit(chunk, chunksize, 0, map_bias << LG_PAGE, + arena->ind); + } + + chunk_dalloc_cache(tsdn, arena, &chunk_hooks, (void *)chunk, chunksize, + committed); + + if (config_stats) { + arena->stats.mapped -= chunksize; + arena->stats.metadata_mapped -= (map_bias << LG_PAGE); + } +} + +static void +arena_spare_discard(tsdn_t *tsdn, arena_t *arena, arena_chunk_t *spare) +{ + + assert(arena->spare != spare); + + if (arena_mapbits_dirty_get(spare, map_bias) != 0) { + arena_run_dirty_remove(arena, spare, map_bias, + chunk_npages-map_bias); + } + + arena_chunk_discard(tsdn, arena, spare); +} + +static void +arena_chunk_dalloc(tsdn_t *tsdn, arena_t *arena, arena_chunk_t *chunk) +{ + arena_chunk_t *spare; + assert(arena_mapbits_allocated_get(chunk, map_bias) == 0); assert(arena_mapbits_allocated_get(chunk, chunk_npages-1) == 0); assert(arena_mapbits_unallocated_size_get(chunk, map_bias) == arena_maxrun); assert(arena_mapbits_unallocated_size_get(chunk, chunk_npages-1) == arena_maxrun); assert(arena_mapbits_dirty_get(chunk, map_bias) == arena_mapbits_dirty_get(chunk, chunk_npages-1)); assert(arena_mapbits_decommitted_get(chunk, map_bias) == arena_mapbits_decommitted_get(chunk, chunk_npages-1)); /* Remove run from runs_avail, so that the arena does not use it. */ arena_avail_remove(arena, chunk, map_bias, chunk_npages-map_bias); - if (arena->spare != NULL) { - arena_chunk_t *spare = arena->spare; - chunk_hooks_t chunk_hooks = CHUNK_HOOKS_INITIALIZER; - bool committed; - - arena->spare = chunk; - if (arena_mapbits_dirty_get(spare, map_bias) != 0) { - arena_run_dirty_remove(arena, spare, map_bias, - chunk_npages-map_bias); - } - - chunk_deregister(spare, &spare->node); - - committed = (arena_mapbits_decommitted_get(spare, map_bias) == - 0); - if (!committed) { - /* - * Decommit the header. Mark the chunk as decommitted - * even if header decommit fails, since treating a - * partially committed chunk as committed has a high - * potential for causing later access of decommitted - * memory. 
- */ - chunk_hooks = chunk_hooks_get(arena); - chunk_hooks.decommit(spare, chunksize, 0, map_bias << - LG_PAGE, arena->ind); - } - - chunk_dalloc_cache(arena, &chunk_hooks, (void *)spare, - chunksize, committed); - - if (config_stats) { - arena->stats.mapped -= chunksize; - arena->stats.metadata_mapped -= (map_bias << LG_PAGE); - } - } else - arena->spare = chunk; + ql_remove(&arena->achunks, &chunk->node, ql_link); + spare = arena->spare; + arena->spare = chunk; + if (spare != NULL) + arena_spare_discard(tsdn, arena, spare); } static void arena_huge_malloc_stats_update(arena_t *arena, size_t usize) { szind_t index = size2index(usize) - nlclasses - NBINS; cassert(config_stats); arena->stats.nmalloc_huge++; arena->stats.allocated_huge += usize; arena->stats.hstats[index].nmalloc++; arena->stats.hstats[index].curhchunks++; } static void arena_huge_malloc_stats_update_undo(arena_t *arena, size_t usize) { szind_t index = size2index(usize) - nlclasses - NBINS; cassert(config_stats); arena->stats.nmalloc_huge--; arena->stats.allocated_huge -= usize; arena->stats.hstats[index].nmalloc--; arena->stats.hstats[index].curhchunks--; } static void arena_huge_dalloc_stats_update(arena_t *arena, size_t usize) { szind_t index = size2index(usize) - nlclasses - NBINS; cassert(config_stats); arena->stats.ndalloc_huge++; arena->stats.allocated_huge -= usize; arena->stats.hstats[index].ndalloc++; arena->stats.hstats[index].curhchunks--; } static void +arena_huge_reset_stats_cancel(arena_t *arena, size_t usize) +{ + szind_t index = size2index(usize) - nlclasses - NBINS; + + cassert(config_stats); + + arena->stats.ndalloc_huge++; + arena->stats.hstats[index].ndalloc--; +} + +static void arena_huge_dalloc_stats_update_undo(arena_t *arena, size_t usize) { szind_t index = size2index(usize) - nlclasses - NBINS; cassert(config_stats); arena->stats.ndalloc_huge--; arena->stats.allocated_huge += usize; arena->stats.hstats[index].ndalloc--; arena->stats.hstats[index].curhchunks++; } static void arena_huge_ralloc_stats_update(arena_t *arena, size_t oldsize, size_t usize) { arena_huge_dalloc_stats_update(arena, oldsize); arena_huge_malloc_stats_update(arena, usize); } static void arena_huge_ralloc_stats_update_undo(arena_t *arena, size_t oldsize, size_t usize) { arena_huge_dalloc_stats_update_undo(arena, oldsize); arena_huge_malloc_stats_update_undo(arena, usize); } extent_node_t * -arena_node_alloc(arena_t *arena) +arena_node_alloc(tsdn_t *tsdn, arena_t *arena) { extent_node_t *node; - malloc_mutex_lock(&arena->node_cache_mtx); + malloc_mutex_lock(tsdn, &arena->node_cache_mtx); node = ql_last(&arena->node_cache, ql_link); if (node == NULL) { - malloc_mutex_unlock(&arena->node_cache_mtx); - return (base_alloc(sizeof(extent_node_t))); + malloc_mutex_unlock(tsdn, &arena->node_cache_mtx); + return (base_alloc(tsdn, sizeof(extent_node_t))); } ql_tail_remove(&arena->node_cache, extent_node_t, ql_link); - malloc_mutex_unlock(&arena->node_cache_mtx); + malloc_mutex_unlock(tsdn, &arena->node_cache_mtx); return (node); } void -arena_node_dalloc(arena_t *arena, extent_node_t *node) +arena_node_dalloc(tsdn_t *tsdn, arena_t *arena, extent_node_t *node) { - malloc_mutex_lock(&arena->node_cache_mtx); + malloc_mutex_lock(tsdn, &arena->node_cache_mtx); ql_elm_new(node, ql_link); ql_tail_insert(&arena->node_cache, node, ql_link); - malloc_mutex_unlock(&arena->node_cache_mtx); + malloc_mutex_unlock(tsdn, &arena->node_cache_mtx); } static void * -arena_chunk_alloc_huge_hard(arena_t *arena, chunk_hooks_t *chunk_hooks, - size_t usize, size_t 
alignment, bool *zero, size_t csize) +arena_chunk_alloc_huge_hard(tsdn_t *tsdn, arena_t *arena, + chunk_hooks_t *chunk_hooks, size_t usize, size_t alignment, bool *zero, + size_t csize) { void *ret; bool commit = true; - ret = chunk_alloc_wrapper(arena, chunk_hooks, NULL, csize, alignment, - zero, &commit); + ret = chunk_alloc_wrapper(tsdn, arena, chunk_hooks, NULL, csize, + alignment, zero, &commit); if (ret == NULL) { /* Revert optimistic stats updates. */ - malloc_mutex_lock(&arena->lock); + malloc_mutex_lock(tsdn, &arena->lock); if (config_stats) { arena_huge_malloc_stats_update_undo(arena, usize); arena->stats.mapped -= usize; } arena_nactive_sub(arena, usize >> LG_PAGE); - malloc_mutex_unlock(&arena->lock); + malloc_mutex_unlock(tsdn, &arena->lock); } return (ret); } void * -arena_chunk_alloc_huge(arena_t *arena, size_t usize, size_t alignment, - bool *zero) +arena_chunk_alloc_huge(tsdn_t *tsdn, arena_t *arena, size_t usize, + size_t alignment, bool *zero) { void *ret; chunk_hooks_t chunk_hooks = CHUNK_HOOKS_INITIALIZER; size_t csize = CHUNK_CEILING(usize); - malloc_mutex_lock(&arena->lock); + malloc_mutex_lock(tsdn, &arena->lock); /* Optimistically update stats. */ if (config_stats) { arena_huge_malloc_stats_update(arena, usize); arena->stats.mapped += usize; } arena_nactive_add(arena, usize >> LG_PAGE); - ret = chunk_alloc_cache(arena, &chunk_hooks, NULL, csize, alignment, - zero, true); - malloc_mutex_unlock(&arena->lock); + ret = chunk_alloc_cache(tsdn, arena, &chunk_hooks, NULL, csize, + alignment, zero, true); + malloc_mutex_unlock(tsdn, &arena->lock); if (ret == NULL) { - ret = arena_chunk_alloc_huge_hard(arena, &chunk_hooks, usize, - alignment, zero, csize); + ret = arena_chunk_alloc_huge_hard(tsdn, arena, &chunk_hooks, + usize, alignment, zero, csize); } return (ret); } void -arena_chunk_dalloc_huge(arena_t *arena, void *chunk, size_t usize) +arena_chunk_dalloc_huge(tsdn_t *tsdn, arena_t *arena, void *chunk, size_t usize) { chunk_hooks_t chunk_hooks = CHUNK_HOOKS_INITIALIZER; size_t csize; csize = CHUNK_CEILING(usize); - malloc_mutex_lock(&arena->lock); + malloc_mutex_lock(tsdn, &arena->lock); if (config_stats) { arena_huge_dalloc_stats_update(arena, usize); arena->stats.mapped -= usize; } arena_nactive_sub(arena, usize >> LG_PAGE); - chunk_dalloc_cache(arena, &chunk_hooks, chunk, csize, true); - malloc_mutex_unlock(&arena->lock); + chunk_dalloc_cache(tsdn, arena, &chunk_hooks, chunk, csize, true); + malloc_mutex_unlock(tsdn, &arena->lock); } void -arena_chunk_ralloc_huge_similar(arena_t *arena, void *chunk, size_t oldsize, - size_t usize) +arena_chunk_ralloc_huge_similar(tsdn_t *tsdn, arena_t *arena, void *chunk, + size_t oldsize, size_t usize) { assert(CHUNK_CEILING(oldsize) == CHUNK_CEILING(usize)); assert(oldsize != usize); - malloc_mutex_lock(&arena->lock); + malloc_mutex_lock(tsdn, &arena->lock); if (config_stats) arena_huge_ralloc_stats_update(arena, oldsize, usize); if (oldsize < usize) arena_nactive_add(arena, (usize - oldsize) >> LG_PAGE); else arena_nactive_sub(arena, (oldsize - usize) >> LG_PAGE); - malloc_mutex_unlock(&arena->lock); + malloc_mutex_unlock(tsdn, &arena->lock); } void -arena_chunk_ralloc_huge_shrink(arena_t *arena, void *chunk, size_t oldsize, - size_t usize) +arena_chunk_ralloc_huge_shrink(tsdn_t *tsdn, arena_t *arena, void *chunk, + size_t oldsize, size_t usize) { size_t udiff = oldsize - usize; size_t cdiff = CHUNK_CEILING(oldsize) - CHUNK_CEILING(usize); - malloc_mutex_lock(&arena->lock); + malloc_mutex_lock(tsdn, &arena->lock); if 
(config_stats) { arena_huge_ralloc_stats_update(arena, oldsize, usize); if (cdiff != 0) arena->stats.mapped -= cdiff; } arena_nactive_sub(arena, udiff >> LG_PAGE); if (cdiff != 0) { chunk_hooks_t chunk_hooks = CHUNK_HOOKS_INITIALIZER; void *nchunk = (void *)((uintptr_t)chunk + CHUNK_CEILING(usize)); - chunk_dalloc_cache(arena, &chunk_hooks, nchunk, cdiff, true); + chunk_dalloc_cache(tsdn, arena, &chunk_hooks, nchunk, cdiff, + true); } - malloc_mutex_unlock(&arena->lock); + malloc_mutex_unlock(tsdn, &arena->lock); } static bool -arena_chunk_ralloc_huge_expand_hard(arena_t *arena, chunk_hooks_t *chunk_hooks, - void *chunk, size_t oldsize, size_t usize, bool *zero, void *nchunk, - size_t udiff, size_t cdiff) +arena_chunk_ralloc_huge_expand_hard(tsdn_t *tsdn, arena_t *arena, + chunk_hooks_t *chunk_hooks, void *chunk, size_t oldsize, size_t usize, + bool *zero, void *nchunk, size_t udiff, size_t cdiff) { bool err; bool commit = true; - err = (chunk_alloc_wrapper(arena, chunk_hooks, nchunk, cdiff, chunksize, - zero, &commit) == NULL); + err = (chunk_alloc_wrapper(tsdn, arena, chunk_hooks, nchunk, cdiff, + chunksize, zero, &commit) == NULL); if (err) { /* Revert optimistic stats updates. */ - malloc_mutex_lock(&arena->lock); + malloc_mutex_lock(tsdn, &arena->lock); if (config_stats) { arena_huge_ralloc_stats_update_undo(arena, oldsize, usize); arena->stats.mapped -= cdiff; } arena_nactive_sub(arena, udiff >> LG_PAGE); - malloc_mutex_unlock(&arena->lock); + malloc_mutex_unlock(tsdn, &arena->lock); } else if (chunk_hooks->merge(chunk, CHUNK_CEILING(oldsize), nchunk, cdiff, true, arena->ind)) { - chunk_dalloc_arena(arena, chunk_hooks, nchunk, cdiff, *zero, - true); + chunk_dalloc_wrapper(tsdn, arena, chunk_hooks, nchunk, cdiff, + *zero, true); err = true; } return (err); } bool -arena_chunk_ralloc_huge_expand(arena_t *arena, void *chunk, size_t oldsize, - size_t usize, bool *zero) +arena_chunk_ralloc_huge_expand(tsdn_t *tsdn, arena_t *arena, void *chunk, + size_t oldsize, size_t usize, bool *zero) { bool err; - chunk_hooks_t chunk_hooks = chunk_hooks_get(arena); + chunk_hooks_t chunk_hooks = chunk_hooks_get(tsdn, arena); void *nchunk = (void *)((uintptr_t)chunk + CHUNK_CEILING(oldsize)); size_t udiff = usize - oldsize; size_t cdiff = CHUNK_CEILING(usize) - CHUNK_CEILING(oldsize); - malloc_mutex_lock(&arena->lock); + malloc_mutex_lock(tsdn, &arena->lock); /* Optimistically update stats. */ if (config_stats) { arena_huge_ralloc_stats_update(arena, oldsize, usize); arena->stats.mapped += cdiff; } arena_nactive_add(arena, udiff >> LG_PAGE); - err = (chunk_alloc_cache(arena, &arena->chunk_hooks, nchunk, cdiff, + err = (chunk_alloc_cache(tsdn, arena, &chunk_hooks, nchunk, cdiff, chunksize, zero, true) == NULL); - malloc_mutex_unlock(&arena->lock); + malloc_mutex_unlock(tsdn, &arena->lock); if (err) { - err = arena_chunk_ralloc_huge_expand_hard(arena, &chunk_hooks, - chunk, oldsize, usize, zero, nchunk, udiff, + err = arena_chunk_ralloc_huge_expand_hard(tsdn, arena, + &chunk_hooks, chunk, oldsize, usize, zero, nchunk, udiff, cdiff); } else if (chunk_hooks.merge(chunk, CHUNK_CEILING(oldsize), nchunk, cdiff, true, arena->ind)) { - chunk_dalloc_arena(arena, &chunk_hooks, nchunk, cdiff, *zero, - true); + chunk_dalloc_wrapper(tsdn, arena, &chunk_hooks, nchunk, cdiff, + *zero, true); err = true; } return (err); } /* * Do first-best-fit run selection, i.e. select the lowest run that best fits. * Run sizes are indexed, so not all candidate runs are necessarily exactly the * same size. 
*/ static arena_run_t * arena_run_first_best_fit(arena_t *arena, size_t size) { szind_t ind, i; ind = size2index(run_quantize_ceil(size)); for (i = ind; i < runs_avail_nclasses + runs_avail_bias; i++) { - arena_chunk_map_misc_t *miscelm = arena_run_tree_first( + arena_chunk_map_misc_t *miscelm = arena_run_heap_first( arena_runs_avail_get(arena, i)); if (miscelm != NULL) return (&miscelm->run); } return (NULL); } static arena_run_t * arena_run_alloc_large_helper(arena_t *arena, size_t size, bool zero) { arena_run_t *run = arena_run_first_best_fit(arena, s2u(size)); if (run != NULL) { if (arena_run_split_large(arena, run, size, zero)) run = NULL; } return (run); } static arena_run_t * -arena_run_alloc_large(arena_t *arena, size_t size, bool zero) +arena_run_alloc_large(tsdn_t *tsdn, arena_t *arena, size_t size, bool zero) { arena_chunk_t *chunk; arena_run_t *run; assert(size <= arena_maxrun); assert(size == PAGE_CEILING(size)); /* Search the arena's chunks for the lowest best fit. */ run = arena_run_alloc_large_helper(arena, size, zero); if (run != NULL) return (run); /* * No usable runs. Create a new chunk from which to allocate the run. */ - chunk = arena_chunk_alloc(arena); + chunk = arena_chunk_alloc(tsdn, arena); if (chunk != NULL) { - run = &arena_miscelm_get(chunk, map_bias)->run; + run = &arena_miscelm_get_mutable(chunk, map_bias)->run; if (arena_run_split_large(arena, run, size, zero)) run = NULL; return (run); } /* * arena_chunk_alloc() failed, but another thread may have made * sufficient memory available while this one dropped arena->lock in * arena_chunk_alloc(), so search one more time. */ return (arena_run_alloc_large_helper(arena, size, zero)); } static arena_run_t * arena_run_alloc_small_helper(arena_t *arena, size_t size, szind_t binind) { arena_run_t *run = arena_run_first_best_fit(arena, size); if (run != NULL) { if (arena_run_split_small(arena, run, size, binind)) run = NULL; } return (run); } static arena_run_t * -arena_run_alloc_small(arena_t *arena, size_t size, szind_t binind) +arena_run_alloc_small(tsdn_t *tsdn, arena_t *arena, size_t size, szind_t binind) { arena_chunk_t *chunk; arena_run_t *run; assert(size <= arena_maxrun); assert(size == PAGE_CEILING(size)); assert(binind != BININD_INVALID); /* Search the arena's chunks for the lowest best fit. */ run = arena_run_alloc_small_helper(arena, size, binind); if (run != NULL) return (run); /* * No usable runs. Create a new chunk from which to allocate the run. */ - chunk = arena_chunk_alloc(arena); + chunk = arena_chunk_alloc(tsdn, arena); if (chunk != NULL) { - run = &arena_miscelm_get(chunk, map_bias)->run; + run = &arena_miscelm_get_mutable(chunk, map_bias)->run; if (arena_run_split_small(arena, run, size, binind)) run = NULL; return (run); } /* * arena_chunk_alloc() failed, but another thread may have made * sufficient memory available while this one dropped arena->lock in * arena_chunk_alloc(), so search one more time. 
*/ return (arena_run_alloc_small_helper(arena, size, binind)); } static bool arena_lg_dirty_mult_valid(ssize_t lg_dirty_mult) { return (lg_dirty_mult >= -1 && lg_dirty_mult < (ssize_t)(sizeof(size_t) << 3)); } ssize_t -arena_lg_dirty_mult_get(arena_t *arena) +arena_lg_dirty_mult_get(tsdn_t *tsdn, arena_t *arena) { ssize_t lg_dirty_mult; - malloc_mutex_lock(&arena->lock); + malloc_mutex_lock(tsdn, &arena->lock); lg_dirty_mult = arena->lg_dirty_mult; - malloc_mutex_unlock(&arena->lock); + malloc_mutex_unlock(tsdn, &arena->lock); return (lg_dirty_mult); } bool -arena_lg_dirty_mult_set(arena_t *arena, ssize_t lg_dirty_mult) +arena_lg_dirty_mult_set(tsdn_t *tsdn, arena_t *arena, ssize_t lg_dirty_mult) { if (!arena_lg_dirty_mult_valid(lg_dirty_mult)) return (true); - malloc_mutex_lock(&arena->lock); + malloc_mutex_lock(tsdn, &arena->lock); arena->lg_dirty_mult = lg_dirty_mult; - arena_maybe_purge(arena); - malloc_mutex_unlock(&arena->lock); + arena_maybe_purge(tsdn, arena); + malloc_mutex_unlock(tsdn, &arena->lock); return (false); } static void arena_decay_deadline_init(arena_t *arena) { assert(opt_purge == purge_mode_decay); /* * Generate a new deadline that is uniformly random within the next * epoch after the current one. */ nstime_copy(&arena->decay_deadline, &arena->decay_epoch); nstime_add(&arena->decay_deadline, &arena->decay_interval); if (arena->decay_time > 0) { nstime_t jitter; nstime_init(&jitter, prng_range(&arena->decay_jitter_state, nstime_ns(&arena->decay_interval))); nstime_add(&arena->decay_deadline, &jitter); } } static bool arena_decay_deadline_reached(const arena_t *arena, const nstime_t *time) { assert(opt_purge == purge_mode_decay); return (nstime_compare(&arena->decay_deadline, time) <= 0); } static size_t arena_decay_backlog_npages_limit(const arena_t *arena) { static const uint64_t h_steps[] = { #define STEP(step, h, x, y) \ h, SMOOTHSTEP #undef STEP }; uint64_t sum; size_t npages_limit_backlog; unsigned i; assert(opt_purge == purge_mode_decay); /* * For each element of decay_backlog, multiply by the corresponding * fixed-point smoothstep decay factor. Sum the products, then divide * to round down to the nearest whole number of pages. */ sum = 0; for (i = 0; i < SMOOTHSTEP_NSTEPS; i++) sum += arena->decay_backlog[i] * h_steps[i]; - npages_limit_backlog = (sum >> SMOOTHSTEP_BFP); + npages_limit_backlog = (size_t)(sum >> SMOOTHSTEP_BFP); return (npages_limit_backlog); } static void arena_decay_epoch_advance(arena_t *arena, const nstime_t *time) { - uint64_t nadvance; + uint64_t nadvance_u64; nstime_t delta; size_t ndirty_delta; assert(opt_purge == purge_mode_decay); assert(arena_decay_deadline_reached(arena, time)); nstime_copy(&delta, time); nstime_subtract(&delta, &arena->decay_epoch); - nadvance = nstime_divide(&delta, &arena->decay_interval); - assert(nadvance > 0); + nadvance_u64 = nstime_divide(&delta, &arena->decay_interval); + assert(nadvance_u64 > 0); - /* Add nadvance decay intervals to epoch. */ + /* Add nadvance_u64 decay intervals to epoch. */ nstime_copy(&delta, &arena->decay_interval); - nstime_imultiply(&delta, nadvance); + nstime_imultiply(&delta, nadvance_u64); nstime_add(&arena->decay_epoch, &delta); /* Set a new deadline. */ arena_decay_deadline_init(arena); /* Update the backlog. 
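The backlog limit computed above is a fixed-point weighted sum: each of the SMOOTHSTEP_NSTEPS buckets of recently dirtied pages is multiplied by a coefficient carrying SMOOTHSTEP_BFP fractional bits, and the total is shifted back down to whole pages. A self-contained sketch with made-up four-step weights (jemalloc's actual coefficients come from the generated smoothstep table, oldest bucket first, newest last):

#include <stddef.h>
#include <stdint.h>

#define	TOY_NSTEPS	4
#define	TOY_BFP		24	/* 24 fractional bits of fixed point. */

/*
 * Illustrative per-age weights scaled by 2^TOY_BFP: old buckets are nearly
 * decayed, newly dirtied pages are fully retained.
 */
static const uint64_t toy_h[TOY_NSTEPS] = {
	(1U << TOY_BFP) / 8,	/* 0.125 */
	(1U << TOY_BFP) / 4,	/* 0.25 */
	(1U << TOY_BFP) / 2,	/* 0.5 */
	(1U << TOY_BFP),	/* 1.0 */
};

/* Pages allowed to remain dirty, given pages dirtied in each past epoch. */
static size_t
toy_npages_limit(const size_t backlog[TOY_NSTEPS])
{
	uint64_t sum = 0;
	unsigned i;

	for (i = 0; i < TOY_NSTEPS; i++)
		sum += (uint64_t)backlog[i] * toy_h[i];
	return ((size_t)(sum >> TOY_BFP));
}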
*/ - if (nadvance >= SMOOTHSTEP_NSTEPS) { + if (nadvance_u64 >= SMOOTHSTEP_NSTEPS) { memset(arena->decay_backlog, 0, (SMOOTHSTEP_NSTEPS-1) * sizeof(size_t)); } else { - memmove(arena->decay_backlog, &arena->decay_backlog[nadvance], - (SMOOTHSTEP_NSTEPS - nadvance) * sizeof(size_t)); - if (nadvance > 1) { + size_t nadvance_z = (size_t)nadvance_u64; + + assert((uint64_t)nadvance_z == nadvance_u64); + + memmove(arena->decay_backlog, &arena->decay_backlog[nadvance_z], + (SMOOTHSTEP_NSTEPS - nadvance_z) * sizeof(size_t)); + if (nadvance_z > 1) { memset(&arena->decay_backlog[SMOOTHSTEP_NSTEPS - - nadvance], 0, (nadvance-1) * sizeof(size_t)); + nadvance_z], 0, (nadvance_z-1) * sizeof(size_t)); } } ndirty_delta = (arena->ndirty > arena->decay_ndirty) ? arena->ndirty - arena->decay_ndirty : 0; arena->decay_ndirty = arena->ndirty; arena->decay_backlog[SMOOTHSTEP_NSTEPS-1] = ndirty_delta; arena->decay_backlog_npages_limit = arena_decay_backlog_npages_limit(arena); } static size_t arena_decay_npages_limit(arena_t *arena) { size_t npages_limit; assert(opt_purge == purge_mode_decay); npages_limit = arena->decay_backlog_npages_limit; /* Add in any dirty pages created during the current epoch. */ if (arena->ndirty > arena->decay_ndirty) npages_limit += arena->ndirty - arena->decay_ndirty; return (npages_limit); } static void arena_decay_init(arena_t *arena, ssize_t decay_time) { arena->decay_time = decay_time; if (decay_time > 0) { nstime_init2(&arena->decay_interval, decay_time, 0); nstime_idivide(&arena->decay_interval, SMOOTHSTEP_NSTEPS); } nstime_init(&arena->decay_epoch, 0); nstime_update(&arena->decay_epoch); arena->decay_jitter_state = (uint64_t)(uintptr_t)arena; arena_decay_deadline_init(arena); arena->decay_ndirty = arena->ndirty; arena->decay_backlog_npages_limit = 0; memset(arena->decay_backlog, 0, SMOOTHSTEP_NSTEPS * sizeof(size_t)); } static bool arena_decay_time_valid(ssize_t decay_time) { if (decay_time < -1) return (false); if (decay_time == -1 || (uint64_t)decay_time <= NSTIME_SEC_MAX) return (true); return (false); } ssize_t -arena_decay_time_get(arena_t *arena) +arena_decay_time_get(tsdn_t *tsdn, arena_t *arena) { ssize_t decay_time; - malloc_mutex_lock(&arena->lock); + malloc_mutex_lock(tsdn, &arena->lock); decay_time = arena->decay_time; - malloc_mutex_unlock(&arena->lock); + malloc_mutex_unlock(tsdn, &arena->lock); return (decay_time); } bool -arena_decay_time_set(arena_t *arena, ssize_t decay_time) +arena_decay_time_set(tsdn_t *tsdn, arena_t *arena, ssize_t decay_time) { if (!arena_decay_time_valid(decay_time)) return (true); - malloc_mutex_lock(&arena->lock); + malloc_mutex_lock(tsdn, &arena->lock); /* * Restart decay backlog from scratch, which may cause many dirty pages * to be immediately purged. It would conceptually be possible to map * the old backlog onto the new backlog, but there is no justification * for such complexity since decay_time changes are intended to be * infrequent, either between the {-1, 0, >0} states, or a one-time * arbitrary change during initial arena configuration. */ arena_decay_init(arena, decay_time); - arena_maybe_purge(arena); - malloc_mutex_unlock(&arena->lock); + arena_maybe_purge(tsdn, arena); + malloc_mutex_unlock(tsdn, &arena->lock); return (false); } static void -arena_maybe_purge_ratio(arena_t *arena) +arena_maybe_purge_ratio(tsdn_t *tsdn, arena_t *arena) { assert(opt_purge == purge_mode_ratio); /* Don't purge if the option is disabled. 
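arena_decay_backlog_npages_limit() above is a fixed-point weighted sum: each backlog entry is scaled by a smoothstep coefficient carrying SMOOTHSTEP_BFP fractional bits, and the sum is shifted back down to whole pages. A generic sketch of that computation, with the coefficient table and bit count passed in rather than taken from SMOOTHSTEP (illustrative only, not part of this patch):

#include <stdint.h>
#include <stddef.h>

/*
 * Weighted backlog limit in fixed point: each h[i] carries `bfp`
 * fractional bits, so each product backlog[i] * h[i] does too; shifting
 * the sum by bfp rounds down to a whole number of pages.
 */
static size_t
backlog_npages_limit(const size_t *backlog, const uint64_t *h,
    unsigned nsteps, unsigned bfp)
{
	uint64_t sum = 0;
	unsigned i;

	for (i = 0; i < nsteps; i++)
		sum += (uint64_t)backlog[i] * h[i];
	return ((size_t)(sum >> bfp));
}

With bfp fractional bits, a coefficient of 1 << bfp counts an entry at full weight and 0 drops it entirely, which is how old epochs decay out of the limit.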
*/ if (arena->lg_dirty_mult < 0) return; /* * Iterate, since preventing recursive purging could otherwise leave too * many dirty pages. */ while (true) { size_t threshold = (arena->nactive >> arena->lg_dirty_mult); if (threshold < chunk_npages) threshold = chunk_npages; /* * Don't purge unless the number of purgeable pages exceeds the * threshold. */ if (arena->ndirty <= threshold) return; - arena_purge_to_limit(arena, threshold); + arena_purge_to_limit(tsdn, arena, threshold); } } static void -arena_maybe_purge_decay(arena_t *arena) +arena_maybe_purge_decay(tsdn_t *tsdn, arena_t *arena) { nstime_t time; size_t ndirty_limit; assert(opt_purge == purge_mode_decay); /* Purge all or nothing if the option is disabled. */ if (arena->decay_time <= 0) { if (arena->decay_time == 0) - arena_purge_to_limit(arena, 0); + arena_purge_to_limit(tsdn, arena, 0); return; } nstime_copy(&time, &arena->decay_epoch); if (unlikely(nstime_update(&time))) { /* Time went backwards. Force an epoch advance. */ nstime_copy(&time, &arena->decay_deadline); } if (arena_decay_deadline_reached(arena, &time)) arena_decay_epoch_advance(arena, &time); ndirty_limit = arena_decay_npages_limit(arena); /* * Don't try to purge unless the number of purgeable pages exceeds the * current limit. */ if (arena->ndirty <= ndirty_limit) return; - arena_purge_to_limit(arena, ndirty_limit); + arena_purge_to_limit(tsdn, arena, ndirty_limit); } void -arena_maybe_purge(arena_t *arena) +arena_maybe_purge(tsdn_t *tsdn, arena_t *arena) { /* Don't recursively purge. */ if (arena->purging) return; if (opt_purge == purge_mode_ratio) - arena_maybe_purge_ratio(arena); + arena_maybe_purge_ratio(tsdn, arena); else - arena_maybe_purge_decay(arena); + arena_maybe_purge_decay(tsdn, arena); } static size_t arena_dirty_count(arena_t *arena) { size_t ndirty = 0; arena_runs_dirty_link_t *rdelm; extent_node_t *chunkselm; for (rdelm = qr_next(&arena->runs_dirty, rd_link), chunkselm = qr_next(&arena->chunks_cache, cc_link); rdelm != &arena->runs_dirty; rdelm = qr_next(rdelm, rd_link)) { size_t npages; if (rdelm == &chunkselm->rd) { npages = extent_node_size_get(chunkselm) >> LG_PAGE; chunkselm = qr_next(chunkselm, cc_link); } else { arena_chunk_t *chunk = (arena_chunk_t *)CHUNK_ADDR2BASE( rdelm); arena_chunk_map_misc_t *miscelm = arena_rd_to_miscelm(rdelm); size_t pageind = arena_miscelm_to_pageind(miscelm); assert(arena_mapbits_allocated_get(chunk, pageind) == 0); assert(arena_mapbits_large_get(chunk, pageind) == 0); assert(arena_mapbits_dirty_get(chunk, pageind) != 0); npages = arena_mapbits_unallocated_size_get(chunk, pageind) >> LG_PAGE; } ndirty += npages; } return (ndirty); } static size_t -arena_stash_dirty(arena_t *arena, chunk_hooks_t *chunk_hooks, +arena_stash_dirty(tsdn_t *tsdn, arena_t *arena, chunk_hooks_t *chunk_hooks, size_t ndirty_limit, arena_runs_dirty_link_t *purge_runs_sentinel, extent_node_t *purge_chunks_sentinel) { arena_runs_dirty_link_t *rdelm, *rdelm_next; extent_node_t *chunkselm; size_t nstashed = 0; /* Stash runs/chunks according to ndirty_limit. 
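For reference, the ratio-mode trigger in arena_maybe_purge_ratio() reduces to a threshold of nactive >> lg_dirty_mult, clamped to at least chunk_npages. A small sketch of that predicate (hypothetical helper name, not part of this patch; the real code loops because purging can itself create dirty pages):

#include <stdbool.h>
#include <stddef.h>

/* Returns true if a purge is warranted; writes the dirty-page threshold. */
static bool
ratio_purge_needed(size_t nactive, size_t ndirty, unsigned lg_dirty_mult,
    size_t chunk_npages, size_t *threshold)
{
	size_t t = nactive >> lg_dirty_mult;

	if (t < chunk_npages)
		t = chunk_npages;
	*threshold = t;
	return (ndirty > t);
}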
*/ for (rdelm = qr_next(&arena->runs_dirty, rd_link), chunkselm = qr_next(&arena->chunks_cache, cc_link); rdelm != &arena->runs_dirty; rdelm = rdelm_next) { size_t npages; rdelm_next = qr_next(rdelm, rd_link); if (rdelm == &chunkselm->rd) { extent_node_t *chunkselm_next; bool zero; UNUSED void *chunk; npages = extent_node_size_get(chunkselm) >> LG_PAGE; if (opt_purge == purge_mode_decay && arena->ndirty - (nstashed + npages) < ndirty_limit) break; chunkselm_next = qr_next(chunkselm, cc_link); /* * Allocate. chunkselm remains valid due to the * dalloc_node=false argument to chunk_alloc_cache(). */ zero = false; - chunk = chunk_alloc_cache(arena, chunk_hooks, + chunk = chunk_alloc_cache(tsdn, arena, chunk_hooks, extent_node_addr_get(chunkselm), extent_node_size_get(chunkselm), chunksize, &zero, false); assert(chunk == extent_node_addr_get(chunkselm)); assert(zero == extent_node_zeroed_get(chunkselm)); extent_node_dirty_insert(chunkselm, purge_runs_sentinel, purge_chunks_sentinel); assert(npages == (extent_node_size_get(chunkselm) >> LG_PAGE)); chunkselm = chunkselm_next; } else { arena_chunk_t *chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(rdelm); arena_chunk_map_misc_t *miscelm = arena_rd_to_miscelm(rdelm); size_t pageind = arena_miscelm_to_pageind(miscelm); arena_run_t *run = &miscelm->run; size_t run_size = arena_mapbits_unallocated_size_get(chunk, pageind); npages = run_size >> LG_PAGE; if (opt_purge == purge_mode_decay && arena->ndirty - (nstashed + npages) < ndirty_limit) break; assert(pageind + npages <= chunk_npages); assert(arena_mapbits_dirty_get(chunk, pageind) == arena_mapbits_dirty_get(chunk, pageind+npages-1)); /* * If purging the spare chunk's run, make it available * prior to allocation. */ if (chunk == arena->spare) - arena_chunk_alloc(arena); + arena_chunk_alloc(tsdn, arena); /* Temporarily allocate the free dirty run. */ arena_run_split_large(arena, run, run_size, false); /* Stash. */ if (false) qr_new(rdelm, rd_link); /* Redundant. */ else { assert(qr_next(rdelm, rd_link) == rdelm); assert(qr_prev(rdelm, rd_link) == rdelm); } qr_meld(purge_runs_sentinel, rdelm, rd_link); } nstashed += npages; if (opt_purge == purge_mode_ratio && arena->ndirty - nstashed <= ndirty_limit) break; } return (nstashed); } static size_t -arena_purge_stashed(arena_t *arena, chunk_hooks_t *chunk_hooks, +arena_purge_stashed(tsdn_t *tsdn, arena_t *arena, chunk_hooks_t *chunk_hooks, arena_runs_dirty_link_t *purge_runs_sentinel, extent_node_t *purge_chunks_sentinel) { size_t npurged, nmadvise; arena_runs_dirty_link_t *rdelm; extent_node_t *chunkselm; if (config_stats) nmadvise = 0; npurged = 0; - malloc_mutex_unlock(&arena->lock); + malloc_mutex_unlock(tsdn, &arena->lock); for (rdelm = qr_next(purge_runs_sentinel, rd_link), chunkselm = qr_next(purge_chunks_sentinel, cc_link); rdelm != purge_runs_sentinel; rdelm = qr_next(rdelm, rd_link)) { size_t npages; if (rdelm == &chunkselm->rd) { /* * Don't actually purge the chunk here because 1) * chunkselm is embedded in the chunk and must remain * valid, and 2) we deallocate the chunk in * arena_unstash_purged(), where it is destroyed, * decommitted, or purged, depending on chunk * deallocation policy. 
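The stash loop above interprets ndirty_limit differently per purge mode: decay mode treats it as a floor and stops before stashing a run that would leave fewer dirty pages than the limit, while ratio mode treats it as a ceiling and stops once the limit is met. A simplified model, assuming the runs' page counts sum to at most ndirty (hypothetical helper, illustrative only):

#include <stdbool.h>
#include <stddef.h>

static size_t
stash_dirty(const size_t *run_npages, size_t nruns, size_t ndirty,
    size_t limit, bool decay_mode)
{
	size_t nstashed = 0, i;

	for (i = 0; i < nruns; i++) {
		size_t npages = run_npages[i];

		/* Decay mode: `limit` is a floor; never stash past it. */
		if (decay_mode && ndirty - (nstashed + npages) < limit)
			break;
		nstashed += npages;
		/* Ratio mode: `limit` is a ceiling; stop once it is met. */
		if (!decay_mode && ndirty - nstashed <= limit)
			break;
	}
	return (nstashed);
}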
*/ size_t size = extent_node_size_get(chunkselm); npages = size >> LG_PAGE; chunkselm = qr_next(chunkselm, cc_link); } else { size_t pageind, run_size, flag_unzeroed, flags, i; bool decommitted; arena_chunk_t *chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(rdelm); arena_chunk_map_misc_t *miscelm = arena_rd_to_miscelm(rdelm); pageind = arena_miscelm_to_pageind(miscelm); run_size = arena_mapbits_large_size_get(chunk, pageind); npages = run_size >> LG_PAGE; assert(pageind + npages <= chunk_npages); assert(!arena_mapbits_decommitted_get(chunk, pageind)); assert(!arena_mapbits_decommitted_get(chunk, pageind+npages-1)); decommitted = !chunk_hooks->decommit(chunk, chunksize, pageind << LG_PAGE, npages << LG_PAGE, arena->ind); if (decommitted) { flag_unzeroed = 0; flags = CHUNK_MAP_DECOMMITTED; } else { - flag_unzeroed = chunk_purge_wrapper(arena, + flag_unzeroed = chunk_purge_wrapper(tsdn, arena, chunk_hooks, chunk, chunksize, pageind << LG_PAGE, run_size) ? CHUNK_MAP_UNZEROED : 0; flags = flag_unzeroed; } arena_mapbits_large_set(chunk, pageind+npages-1, 0, flags); arena_mapbits_large_set(chunk, pageind, run_size, flags); /* * Set the unzeroed flag for internal pages, now that * chunk_purge_wrapper() has returned whether the pages * were zeroed as a side effect of purging. This chunk * map modification is safe even though the arena mutex * isn't currently owned by this thread, because the run * is marked as allocated, thus protecting it from being * modified by any other thread. As long as these * writes don't perturb the first and last elements' * CHUNK_MAP_ALLOCATED bits, behavior is well defined. */ for (i = 1; i < npages-1; i++) { arena_mapbits_internal_set(chunk, pageind+i, flag_unzeroed); } } npurged += npages; if (config_stats) nmadvise++; } - malloc_mutex_lock(&arena->lock); + malloc_mutex_lock(tsdn, &arena->lock); if (config_stats) { arena->stats.nmadvise += nmadvise; arena->stats.purged += npurged; } return (npurged); } static void -arena_unstash_purged(arena_t *arena, chunk_hooks_t *chunk_hooks, +arena_unstash_purged(tsdn_t *tsdn, arena_t *arena, chunk_hooks_t *chunk_hooks, arena_runs_dirty_link_t *purge_runs_sentinel, extent_node_t *purge_chunks_sentinel) { arena_runs_dirty_link_t *rdelm, *rdelm_next; extent_node_t *chunkselm; /* Deallocate chunks/runs. 
*/ for (rdelm = qr_next(purge_runs_sentinel, rd_link), chunkselm = qr_next(purge_chunks_sentinel, cc_link); rdelm != purge_runs_sentinel; rdelm = rdelm_next) { rdelm_next = qr_next(rdelm, rd_link); if (rdelm == &chunkselm->rd) { extent_node_t *chunkselm_next = qr_next(chunkselm, cc_link); void *addr = extent_node_addr_get(chunkselm); size_t size = extent_node_size_get(chunkselm); bool zeroed = extent_node_zeroed_get(chunkselm); bool committed = extent_node_committed_get(chunkselm); extent_node_dirty_remove(chunkselm); - arena_node_dalloc(arena, chunkselm); + arena_node_dalloc(tsdn, arena, chunkselm); chunkselm = chunkselm_next; - chunk_dalloc_arena(arena, chunk_hooks, addr, size, - zeroed, committed); + chunk_dalloc_wrapper(tsdn, arena, chunk_hooks, addr, + size, zeroed, committed); } else { arena_chunk_t *chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(rdelm); arena_chunk_map_misc_t *miscelm = arena_rd_to_miscelm(rdelm); size_t pageind = arena_miscelm_to_pageind(miscelm); bool decommitted = (arena_mapbits_decommitted_get(chunk, pageind) != 0); arena_run_t *run = &miscelm->run; qr_remove(rdelm, rd_link); - arena_run_dalloc(arena, run, false, true, decommitted); + arena_run_dalloc(tsdn, arena, run, false, true, + decommitted); } } } /* * NB: ndirty_limit is interpreted differently depending on opt_purge: * - purge_mode_ratio: Purge as few dirty run/chunks as possible to reach the * desired state: * (arena->ndirty <= ndirty_limit) * - purge_mode_decay: Purge as many dirty runs/chunks as possible without * violating the invariant: * (arena->ndirty >= ndirty_limit) */ static void -arena_purge_to_limit(arena_t *arena, size_t ndirty_limit) +arena_purge_to_limit(tsdn_t *tsdn, arena_t *arena, size_t ndirty_limit) { - chunk_hooks_t chunk_hooks = chunk_hooks_get(arena); + chunk_hooks_t chunk_hooks = chunk_hooks_get(tsdn, arena); size_t npurge, npurged; arena_runs_dirty_link_t purge_runs_sentinel; extent_node_t purge_chunks_sentinel; arena->purging = true; /* * Calls to arena_dirty_count() are disabled even for debug builds * because overhead grows nonlinearly as memory usage increases. 
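arena_purge_to_limit() splits purging into three phases: stash dirty runs/chunks under arena->lock, drop the lock while the slow decommit/madvise work happens in arena_purge_stashed(), then retake it to unstash. A schematic of that lock-dropping pattern using plain pthreads and placeholder phase functions (not jemalloc's malloc_mutex API; illustrative only):

#include <pthread.h>
#include <stddef.h>

struct stash { size_t npages; };

/* Placeholder phases; the real work is stashing, madvise()/decommit, unstashing. */
static size_t stash_phase(struct stash *s)        { s->npages = 0; return (s->npages); }
static void   purge_phase(const struct stash *s)  { (void)s; }
static void   unstash_phase(const struct stash *s){ (void)s; }

static void
purge_to_limit(pthread_mutex_t *lock)
{
	struct stash s;

	pthread_mutex_lock(lock);
	if (stash_phase(&s) == 0) {
		pthread_mutex_unlock(lock);
		return;
	}
	/* Drop the lock across the slow OS calls, as the purge phase does. */
	pthread_mutex_unlock(lock);
	purge_phase(&s);
	pthread_mutex_lock(lock);
	unstash_phase(&s);
	pthread_mutex_unlock(lock);
}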
*/ if (false && config_debug) { size_t ndirty = arena_dirty_count(arena); assert(ndirty == arena->ndirty); } assert(opt_purge != purge_mode_ratio || (arena->nactive >> arena->lg_dirty_mult) < arena->ndirty || ndirty_limit == 0); qr_new(&purge_runs_sentinel, rd_link); extent_node_dirty_linkage_init(&purge_chunks_sentinel); - npurge = arena_stash_dirty(arena, &chunk_hooks, ndirty_limit, + npurge = arena_stash_dirty(tsdn, arena, &chunk_hooks, ndirty_limit, &purge_runs_sentinel, &purge_chunks_sentinel); if (npurge == 0) goto label_return; - npurged = arena_purge_stashed(arena, &chunk_hooks, &purge_runs_sentinel, - &purge_chunks_sentinel); + npurged = arena_purge_stashed(tsdn, arena, &chunk_hooks, + &purge_runs_sentinel, &purge_chunks_sentinel); assert(npurged == npurge); - arena_unstash_purged(arena, &chunk_hooks, &purge_runs_sentinel, + arena_unstash_purged(tsdn, arena, &chunk_hooks, &purge_runs_sentinel, &purge_chunks_sentinel); if (config_stats) arena->stats.npurge++; label_return: arena->purging = false; } void -arena_purge(arena_t *arena, bool all) +arena_purge(tsdn_t *tsdn, arena_t *arena, bool all) { - malloc_mutex_lock(&arena->lock); + malloc_mutex_lock(tsdn, &arena->lock); if (all) - arena_purge_to_limit(arena, 0); + arena_purge_to_limit(tsdn, arena, 0); else - arena_maybe_purge(arena); - malloc_mutex_unlock(&arena->lock); + arena_maybe_purge(tsdn, arena); + malloc_mutex_unlock(tsdn, &arena->lock); } static void +arena_achunk_prof_reset(tsd_t *tsd, arena_t *arena, arena_chunk_t *chunk) +{ + size_t pageind, npages; + + cassert(config_prof); + assert(opt_prof); + + /* + * Iterate over the allocated runs and remove profiled allocations from + * the sample set. + */ + for (pageind = map_bias; pageind < chunk_npages; pageind += npages) { + if (arena_mapbits_allocated_get(chunk, pageind) != 0) { + if (arena_mapbits_large_get(chunk, pageind) != 0) { + void *ptr = (void *)((uintptr_t)chunk + (pageind + << LG_PAGE)); + size_t usize = isalloc(tsd_tsdn(tsd), ptr, + config_prof); + + prof_free(tsd, ptr, usize); + npages = arena_mapbits_large_size_get(chunk, + pageind) >> LG_PAGE; + } else { + /* Skip small run. */ + size_t binind = arena_mapbits_binind_get(chunk, + pageind); + arena_bin_info_t *bin_info = + &arena_bin_info[binind]; + npages = bin_info->run_size >> LG_PAGE; + } + } else { + /* Skip unallocated run. */ + npages = arena_mapbits_unallocated_size_get(chunk, + pageind) >> LG_PAGE; + } + assert(pageind + npages <= chunk_npages); + } +} + +void +arena_reset(tsd_t *tsd, arena_t *arena) +{ + unsigned i; + extent_node_t *node; + + /* + * Locking in this function is unintuitive. The caller guarantees that + * no concurrent operations are happening in this arena, but there are + * still reasons that some locking is necessary: + * + * - Some of the functions in the transitive closure of calls assume + * appropriate locks are held, and in some cases these locks are + * temporarily dropped to avoid lock order reversal or deadlock due to + * reentry. + * - mallctl("epoch", ...) may concurrently refresh stats. While + * strictly speaking this is a "concurrent operation", disallowing + * stats refreshes would impose an inconvenient burden. + */ + + /* Remove large allocations from prof sample set. */ + if (config_prof && opt_prof) { + ql_foreach(node, &arena->achunks, ql_link) { + arena_achunk_prof_reset(tsd, arena, + extent_node_addr_get(node)); + } + } + + /* Reset curruns for large size classes. 
*/ + if (config_stats) { + for (i = 0; i < nlclasses; i++) + arena->stats.lstats[i].curruns = 0; + } + + /* Huge allocations. */ + malloc_mutex_lock(tsd_tsdn(tsd), &arena->huge_mtx); + for (node = ql_last(&arena->huge, ql_link); node != NULL; node = + ql_last(&arena->huge, ql_link)) { + void *ptr = extent_node_addr_get(node); + size_t usize; + + malloc_mutex_unlock(tsd_tsdn(tsd), &arena->huge_mtx); + if (config_stats || (config_prof && opt_prof)) + usize = isalloc(tsd_tsdn(tsd), ptr, config_prof); + /* Remove huge allocation from prof sample set. */ + if (config_prof && opt_prof) + prof_free(tsd, ptr, usize); + huge_dalloc(tsd_tsdn(tsd), ptr); + malloc_mutex_lock(tsd_tsdn(tsd), &arena->huge_mtx); + /* Cancel out unwanted effects on stats. */ + if (config_stats) + arena_huge_reset_stats_cancel(arena, usize); + } + malloc_mutex_unlock(tsd_tsdn(tsd), &arena->huge_mtx); + + malloc_mutex_lock(tsd_tsdn(tsd), &arena->lock); + + /* Bins. */ + for (i = 0; i < NBINS; i++) { + arena_bin_t *bin = &arena->bins[i]; + malloc_mutex_lock(tsd_tsdn(tsd), &bin->lock); + bin->runcur = NULL; + arena_run_heap_new(&bin->runs); + if (config_stats) { + bin->stats.curregs = 0; + bin->stats.curruns = 0; + } + malloc_mutex_unlock(tsd_tsdn(tsd), &bin->lock); + } + + /* + * Re-initialize runs_dirty such that the chunks_cache and runs_dirty + * chains directly correspond. + */ + qr_new(&arena->runs_dirty, rd_link); + for (node = qr_next(&arena->chunks_cache, cc_link); + node != &arena->chunks_cache; node = qr_next(node, cc_link)) { + qr_new(&node->rd, rd_link); + qr_meld(&arena->runs_dirty, &node->rd, rd_link); + } + + /* Arena chunks. */ + for (node = ql_last(&arena->achunks, ql_link); node != NULL; node = + ql_last(&arena->achunks, ql_link)) { + ql_remove(&arena->achunks, node, ql_link); + arena_chunk_discard(tsd_tsdn(tsd), arena, + extent_node_addr_get(node)); + } + + /* Spare. */ + if (arena->spare != NULL) { + arena_chunk_discard(tsd_tsdn(tsd), arena, arena->spare); + arena->spare = NULL; + } + + assert(!arena->purging); + arena->nactive = 0; + + for(i = 0; i < runs_avail_nclasses; i++) + arena_run_heap_new(&arena->runs_avail[i]); + + malloc_mutex_unlock(tsd_tsdn(tsd), &arena->lock); +} + +static void arena_run_coalesce(arena_t *arena, arena_chunk_t *chunk, size_t *p_size, size_t *p_run_ind, size_t *p_run_pages, size_t flag_dirty, size_t flag_decommitted) { size_t size = *p_size; size_t run_ind = *p_run_ind; size_t run_pages = *p_run_pages; /* Try to coalesce forward. */ if (run_ind + run_pages < chunk_npages && arena_mapbits_allocated_get(chunk, run_ind+run_pages) == 0 && arena_mapbits_dirty_get(chunk, run_ind+run_pages) == flag_dirty && arena_mapbits_decommitted_get(chunk, run_ind+run_pages) == flag_decommitted) { size_t nrun_size = arena_mapbits_unallocated_size_get(chunk, run_ind+run_pages); size_t nrun_pages = nrun_size >> LG_PAGE; /* * Remove successor from runs_avail; the coalesced run is * inserted later. */ assert(arena_mapbits_unallocated_size_get(chunk, run_ind+run_pages+nrun_pages-1) == nrun_size); assert(arena_mapbits_dirty_get(chunk, run_ind+run_pages+nrun_pages-1) == flag_dirty); assert(arena_mapbits_decommitted_get(chunk, run_ind+run_pages+nrun_pages-1) == flag_decommitted); arena_avail_remove(arena, chunk, run_ind+run_pages, nrun_pages); /* * If the successor is dirty, remove it from the set of dirty * pages. 
*/ if (flag_dirty != 0) { arena_run_dirty_remove(arena, chunk, run_ind+run_pages, nrun_pages); } size += nrun_size; run_pages += nrun_pages; arena_mapbits_unallocated_size_set(chunk, run_ind, size); arena_mapbits_unallocated_size_set(chunk, run_ind+run_pages-1, size); } /* Try to coalesce backward. */ if (run_ind > map_bias && arena_mapbits_allocated_get(chunk, run_ind-1) == 0 && arena_mapbits_dirty_get(chunk, run_ind-1) == flag_dirty && arena_mapbits_decommitted_get(chunk, run_ind-1) == flag_decommitted) { size_t prun_size = arena_mapbits_unallocated_size_get(chunk, run_ind-1); size_t prun_pages = prun_size >> LG_PAGE; run_ind -= prun_pages; /* * Remove predecessor from runs_avail; the coalesced run is * inserted later. */ assert(arena_mapbits_unallocated_size_get(chunk, run_ind) == prun_size); assert(arena_mapbits_dirty_get(chunk, run_ind) == flag_dirty); assert(arena_mapbits_decommitted_get(chunk, run_ind) == flag_decommitted); arena_avail_remove(arena, chunk, run_ind, prun_pages); /* * If the predecessor is dirty, remove it from the set of dirty * pages. */ if (flag_dirty != 0) { arena_run_dirty_remove(arena, chunk, run_ind, prun_pages); } size += prun_size; run_pages += prun_pages; arena_mapbits_unallocated_size_set(chunk, run_ind, size); arena_mapbits_unallocated_size_set(chunk, run_ind+run_pages-1, size); } *p_size = size; *p_run_ind = run_ind; *p_run_pages = run_pages; } static size_t arena_run_size_get(arena_t *arena, arena_chunk_t *chunk, arena_run_t *run, size_t run_ind) { size_t size; assert(run_ind >= map_bias); assert(run_ind < chunk_npages); if (arena_mapbits_large_get(chunk, run_ind) != 0) { size = arena_mapbits_large_size_get(chunk, run_ind); assert(size == PAGE || arena_mapbits_large_size_get(chunk, run_ind+(size>>LG_PAGE)-1) == 0); } else { arena_bin_info_t *bin_info = &arena_bin_info[run->binind]; size = bin_info->run_size; } return (size); } static void -arena_run_dalloc(arena_t *arena, arena_run_t *run, bool dirty, bool cleaned, - bool decommitted) +arena_run_dalloc(tsdn_t *tsdn, arena_t *arena, arena_run_t *run, bool dirty, + bool cleaned, bool decommitted) { arena_chunk_t *chunk; arena_chunk_map_misc_t *miscelm; size_t size, run_ind, run_pages, flag_dirty, flag_decommitted; chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(run); miscelm = arena_run_to_miscelm(run); run_ind = arena_miscelm_to_pageind(miscelm); assert(run_ind >= map_bias); assert(run_ind < chunk_npages); size = arena_run_size_get(arena, chunk, run, run_ind); run_pages = (size >> LG_PAGE); arena_nactive_sub(arena, run_pages); /* * The run is dirty if the caller claims to have dirtied it, as well as * if it was already dirty before being allocated and the caller * doesn't claim to have cleaned it. */ assert(arena_mapbits_dirty_get(chunk, run_ind) == arena_mapbits_dirty_get(chunk, run_ind+run_pages-1)); if (!cleaned && !decommitted && arena_mapbits_dirty_get(chunk, run_ind) != 0) dirty = true; flag_dirty = dirty ? CHUNK_MAP_DIRTY : 0; flag_decommitted = decommitted ? CHUNK_MAP_DECOMMITTED : 0; /* Mark pages as unallocated in the chunk map. 
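arena_run_coalesce() above merges a freed run with an unallocated neighbor on either side, relying on the chunk map recording each free run's size at both its first and last page so the neighbor's extent is known in O(1). A toy version over plain arrays, with the dirty/decommitted flag matching omitted (illustrative only, not part of this patch):

#include <stdbool.h>
#include <stddef.h>

static void
coalesce(const bool *is_free, size_t *run_size, size_t npages,
    size_t *run_ind, size_t *run_len)
{
	size_t ind = *run_ind, len = *run_len;

	/* Forward: the successor records its length at its first page. */
	if (ind + len < npages && is_free[ind + len])
		len += run_size[ind + len];

	/* Backward: the predecessor records its length at its last page. */
	if (ind > 0 && is_free[ind - 1]) {
		size_t plen = run_size[ind - 1];

		ind -= plen;
		len += plen;
	}

	/* Mirror the merged length at both ends, as the chunk map does. */
	run_size[ind] = len;
	run_size[ind + len - 1] = len;

	*run_ind = ind;
	*run_len = len;
}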
*/ if (dirty || decommitted) { size_t flags = flag_dirty | flag_decommitted; arena_mapbits_unallocated_set(chunk, run_ind, size, flags); arena_mapbits_unallocated_set(chunk, run_ind+run_pages-1, size, flags); } else { arena_mapbits_unallocated_set(chunk, run_ind, size, arena_mapbits_unzeroed_get(chunk, run_ind)); arena_mapbits_unallocated_set(chunk, run_ind+run_pages-1, size, arena_mapbits_unzeroed_get(chunk, run_ind+run_pages-1)); } arena_run_coalesce(arena, chunk, &size, &run_ind, &run_pages, flag_dirty, flag_decommitted); /* Insert into runs_avail, now that coalescing is complete. */ assert(arena_mapbits_unallocated_size_get(chunk, run_ind) == arena_mapbits_unallocated_size_get(chunk, run_ind+run_pages-1)); assert(arena_mapbits_dirty_get(chunk, run_ind) == arena_mapbits_dirty_get(chunk, run_ind+run_pages-1)); assert(arena_mapbits_decommitted_get(chunk, run_ind) == arena_mapbits_decommitted_get(chunk, run_ind+run_pages-1)); arena_avail_insert(arena, chunk, run_ind, run_pages); if (dirty) arena_run_dirty_insert(arena, chunk, run_ind, run_pages); /* Deallocate chunk if it is now completely unused. */ if (size == arena_maxrun) { assert(run_ind == map_bias); assert(run_pages == (arena_maxrun >> LG_PAGE)); - arena_chunk_dalloc(arena, chunk); + arena_chunk_dalloc(tsdn, arena, chunk); } /* * It is okay to do dirty page processing here even if the chunk was * deallocated above, since in that case it is the spare. Waiting * until after possible chunk deallocation to do dirty processing * allows for an old spare to be fully deallocated, thus decreasing the * chances of spuriously crossing the dirty page purging threshold. */ if (dirty) - arena_maybe_purge(arena); + arena_maybe_purge(tsdn, arena); } static void -arena_run_trim_head(arena_t *arena, arena_chunk_t *chunk, arena_run_t *run, - size_t oldsize, size_t newsize) +arena_run_trim_head(tsdn_t *tsdn, arena_t *arena, arena_chunk_t *chunk, + arena_run_t *run, size_t oldsize, size_t newsize) { arena_chunk_map_misc_t *miscelm = arena_run_to_miscelm(run); size_t pageind = arena_miscelm_to_pageind(miscelm); size_t head_npages = (oldsize - newsize) >> LG_PAGE; size_t flag_dirty = arena_mapbits_dirty_get(chunk, pageind); size_t flag_decommitted = arena_mapbits_decommitted_get(chunk, pageind); size_t flag_unzeroed_mask = (flag_dirty | flag_decommitted) == 0 ? CHUNK_MAP_UNZEROED : 0; assert(oldsize > newsize); /* * Update the chunk map so that arena_run_dalloc() can treat the * leading run as separately allocated. Set the last element of each * run first, in case of single-page runs. 
*/ assert(arena_mapbits_large_size_get(chunk, pageind) == oldsize); arena_mapbits_large_set(chunk, pageind+head_npages-1, 0, flag_dirty | (flag_unzeroed_mask & arena_mapbits_unzeroed_get(chunk, pageind+head_npages-1))); arena_mapbits_large_set(chunk, pageind, oldsize-newsize, flag_dirty | (flag_unzeroed_mask & arena_mapbits_unzeroed_get(chunk, pageind))); if (config_debug) { UNUSED size_t tail_npages = newsize >> LG_PAGE; assert(arena_mapbits_large_size_get(chunk, pageind+head_npages+tail_npages-1) == 0); assert(arena_mapbits_dirty_get(chunk, pageind+head_npages+tail_npages-1) == flag_dirty); } arena_mapbits_large_set(chunk, pageind+head_npages, newsize, flag_dirty | (flag_unzeroed_mask & arena_mapbits_unzeroed_get(chunk, pageind+head_npages))); - arena_run_dalloc(arena, run, false, false, (flag_decommitted != 0)); + arena_run_dalloc(tsdn, arena, run, false, false, (flag_decommitted != + 0)); } static void -arena_run_trim_tail(arena_t *arena, arena_chunk_t *chunk, arena_run_t *run, - size_t oldsize, size_t newsize, bool dirty) +arena_run_trim_tail(tsdn_t *tsdn, arena_t *arena, arena_chunk_t *chunk, + arena_run_t *run, size_t oldsize, size_t newsize, bool dirty) { arena_chunk_map_misc_t *miscelm = arena_run_to_miscelm(run); size_t pageind = arena_miscelm_to_pageind(miscelm); size_t head_npages = newsize >> LG_PAGE; size_t flag_dirty = arena_mapbits_dirty_get(chunk, pageind); size_t flag_decommitted = arena_mapbits_decommitted_get(chunk, pageind); size_t flag_unzeroed_mask = (flag_dirty | flag_decommitted) == 0 ? CHUNK_MAP_UNZEROED : 0; arena_chunk_map_misc_t *tail_miscelm; arena_run_t *tail_run; assert(oldsize > newsize); /* * Update the chunk map so that arena_run_dalloc() can treat the * trailing run as separately allocated. Set the last element of each * run first, in case of single-page runs. 
*/ assert(arena_mapbits_large_size_get(chunk, pageind) == oldsize); arena_mapbits_large_set(chunk, pageind+head_npages-1, 0, flag_dirty | (flag_unzeroed_mask & arena_mapbits_unzeroed_get(chunk, pageind+head_npages-1))); arena_mapbits_large_set(chunk, pageind, newsize, flag_dirty | (flag_unzeroed_mask & arena_mapbits_unzeroed_get(chunk, pageind))); if (config_debug) { UNUSED size_t tail_npages = (oldsize - newsize) >> LG_PAGE; assert(arena_mapbits_large_size_get(chunk, pageind+head_npages+tail_npages-1) == 0); assert(arena_mapbits_dirty_get(chunk, pageind+head_npages+tail_npages-1) == flag_dirty); } arena_mapbits_large_set(chunk, pageind+head_npages, oldsize-newsize, flag_dirty | (flag_unzeroed_mask & arena_mapbits_unzeroed_get(chunk, pageind+head_npages))); - tail_miscelm = arena_miscelm_get(chunk, pageind + head_npages); + tail_miscelm = arena_miscelm_get_mutable(chunk, pageind + head_npages); tail_run = &tail_miscelm->run; - arena_run_dalloc(arena, tail_run, dirty, false, (flag_decommitted != - 0)); + arena_run_dalloc(tsdn, arena, tail_run, dirty, false, (flag_decommitted + != 0)); } -static arena_run_t * -arena_bin_runs_first(arena_bin_t *bin) -{ - arena_chunk_map_misc_t *miscelm = arena_run_tree_first(&bin->runs); - if (miscelm != NULL) - return (&miscelm->run); - - return (NULL); -} - static void arena_bin_runs_insert(arena_bin_t *bin, arena_run_t *run) { arena_chunk_map_misc_t *miscelm = arena_run_to_miscelm(run); - assert(arena_run_tree_search(&bin->runs, miscelm) == NULL); - - arena_run_tree_insert(&bin->runs, miscelm); + arena_run_heap_insert(&bin->runs, miscelm); } -static void -arena_bin_runs_remove(arena_bin_t *bin, arena_run_t *run) +static arena_run_t * +arena_bin_nonfull_run_tryget(arena_bin_t *bin) { - arena_chunk_map_misc_t *miscelm = arena_run_to_miscelm(run); + arena_chunk_map_misc_t *miscelm; - assert(arena_run_tree_search(&bin->runs, miscelm) != NULL); + miscelm = arena_run_heap_remove_first(&bin->runs); + if (miscelm == NULL) + return (NULL); + if (config_stats) + bin->stats.reruns++; - arena_run_tree_remove(&bin->runs, miscelm); + return (&miscelm->run); } static arena_run_t * -arena_bin_nonfull_run_tryget(arena_bin_t *bin) +arena_bin_nonfull_run_get(tsdn_t *tsdn, arena_t *arena, arena_bin_t *bin) { - arena_run_t *run = arena_bin_runs_first(bin); - if (run != NULL) { - arena_bin_runs_remove(bin, run); - if (config_stats) - bin->stats.reruns++; - } - return (run); -} - -static arena_run_t * -arena_bin_nonfull_run_get(arena_t *arena, arena_bin_t *bin) -{ arena_run_t *run; szind_t binind; arena_bin_info_t *bin_info; /* Look for a usable run. */ run = arena_bin_nonfull_run_tryget(bin); if (run != NULL) return (run); /* No existing runs have any space available. */ binind = arena_bin_index(arena, bin); bin_info = &arena_bin_info[binind]; /* Allocate a new run. */ - malloc_mutex_unlock(&bin->lock); + malloc_mutex_unlock(tsdn, &bin->lock); /******************************/ - malloc_mutex_lock(&arena->lock); - run = arena_run_alloc_small(arena, bin_info->run_size, binind); + malloc_mutex_lock(tsdn, &arena->lock); + run = arena_run_alloc_small(tsdn, arena, bin_info->run_size, binind); if (run != NULL) { /* Initialize run internals. 
*/ run->binind = binind; run->nfree = bin_info->nregs; bitmap_init(run->bitmap, &bin_info->bitmap_info); } - malloc_mutex_unlock(&arena->lock); + malloc_mutex_unlock(tsdn, &arena->lock); /********************************/ - malloc_mutex_lock(&bin->lock); + malloc_mutex_lock(tsdn, &bin->lock); if (run != NULL) { if (config_stats) { bin->stats.nruns++; bin->stats.curruns++; } return (run); } /* * arena_run_alloc_small() failed, but another thread may have made * sufficient memory available while this one dropped bin->lock above, * so search one more time. */ run = arena_bin_nonfull_run_tryget(bin); if (run != NULL) return (run); return (NULL); } /* Re-fill bin->runcur, then call arena_run_reg_alloc(). */ static void * -arena_bin_malloc_hard(arena_t *arena, arena_bin_t *bin) +arena_bin_malloc_hard(tsdn_t *tsdn, arena_t *arena, arena_bin_t *bin) { szind_t binind; arena_bin_info_t *bin_info; arena_run_t *run; binind = arena_bin_index(arena, bin); bin_info = &arena_bin_info[binind]; bin->runcur = NULL; - run = arena_bin_nonfull_run_get(arena, bin); + run = arena_bin_nonfull_run_get(tsdn, arena, bin); if (bin->runcur != NULL && bin->runcur->nfree > 0) { /* * Another thread updated runcur while this one ran without the * bin lock in arena_bin_nonfull_run_get(). */ void *ret; assert(bin->runcur->nfree > 0); ret = arena_run_reg_alloc(bin->runcur, bin_info); if (run != NULL) { arena_chunk_t *chunk; /* * arena_run_alloc_small() may have allocated run, or * it may have pulled run from the bin's run tree. * Therefore it is unsafe to make any assumptions about * how run has previously been used, and * arena_bin_lower_run() must be called, as if a region * were just deallocated from the run. */ chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(run); - if (run->nfree == bin_info->nregs) - arena_dalloc_bin_run(arena, chunk, run, bin); - else + if (run->nfree == bin_info->nregs) { + arena_dalloc_bin_run(tsdn, arena, chunk, run, + bin); + } else arena_bin_lower_run(arena, chunk, run, bin); } return (ret); } if (run == NULL) return (NULL); bin->runcur = run; assert(bin->runcur->nfree > 0); return (arena_run_reg_alloc(bin->runcur, bin_info)); } void -arena_tcache_fill_small(tsd_t *tsd, arena_t *arena, tcache_bin_t *tbin, +arena_tcache_fill_small(tsdn_t *tsdn, arena_t *arena, tcache_bin_t *tbin, szind_t binind, uint64_t prof_accumbytes) { unsigned i, nfill; arena_bin_t *bin; assert(tbin->ncached == 0); - if (config_prof && arena_prof_accum(arena, prof_accumbytes)) - prof_idump(); + if (config_prof && arena_prof_accum(tsdn, arena, prof_accumbytes)) + prof_idump(tsdn); bin = &arena->bins[binind]; - malloc_mutex_lock(&bin->lock); + malloc_mutex_lock(tsdn, &bin->lock); for (i = 0, nfill = (tcache_bin_info[binind].ncached_max >> tbin->lg_fill_div); i < nfill; i++) { arena_run_t *run; void *ptr; if ((run = bin->runcur) != NULL && run->nfree > 0) ptr = arena_run_reg_alloc(run, &arena_bin_info[binind]); else - ptr = arena_bin_malloc_hard(arena, bin); + ptr = arena_bin_malloc_hard(tsdn, arena, bin); if (ptr == NULL) { /* * OOM. tbin->avail isn't yet filled down to its first * element, so the successful allocations (if any) must * be moved just before tbin->avail before bailing out. */ if (i > 0) { memmove(tbin->avail - i, tbin->avail - nfill, i * sizeof(void *)); } break; } if (config_fill && unlikely(opt_junk_alloc)) { arena_alloc_junk_small(ptr, &arena_bin_info[binind], true); } /* Insert such that low regions get used first. 
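One detail of the fill loop above worth calling out: on OOM it bails with only i slots filled, and those pointers were written starting at tbin->avail - nfill, so they are slid up to sit just below tbin->avail where the consumer expects them. A standalone sketch of that compaction (mirrors the memmove; names are placeholders, not part of this patch):

#include <string.h>
#include <stddef.h>

static void
compact_partial_fill(void **avail, size_t nfill, size_t filled)
{
	if (filled > 0 && filled < nfill)
		memmove(avail - filled, avail - nfill, filled * sizeof(void *));
}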
*/ *(tbin->avail - nfill + i) = ptr; } if (config_stats) { bin->stats.nmalloc += i; bin->stats.nrequests += tbin->tstats.nrequests; bin->stats.curregs += i; bin->stats.nfills++; tbin->tstats.nrequests = 0; } - malloc_mutex_unlock(&bin->lock); + malloc_mutex_unlock(tsdn, &bin->lock); tbin->ncached = i; - arena_decay_tick(tsd, arena); + arena_decay_tick(tsdn, arena); } void arena_alloc_junk_small(void *ptr, arena_bin_info_t *bin_info, bool zero) { + size_t redzone_size = bin_info->redzone_size; + if (zero) { - size_t redzone_size = bin_info->redzone_size; - memset((void *)((uintptr_t)ptr - redzone_size), 0xa5, - redzone_size); - memset((void *)((uintptr_t)ptr + bin_info->reg_size), 0xa5, - redzone_size); + memset((void *)((uintptr_t)ptr - redzone_size), + JEMALLOC_ALLOC_JUNK, redzone_size); + memset((void *)((uintptr_t)ptr + bin_info->reg_size), + JEMALLOC_ALLOC_JUNK, redzone_size); } else { - memset((void *)((uintptr_t)ptr - bin_info->redzone_size), 0xa5, - bin_info->reg_interval); + memset((void *)((uintptr_t)ptr - redzone_size), + JEMALLOC_ALLOC_JUNK, bin_info->reg_interval); } } #ifdef JEMALLOC_JET #undef arena_redzone_corruption -#define arena_redzone_corruption JEMALLOC_N(arena_redzone_corruption_impl) +#define arena_redzone_corruption JEMALLOC_N(n_arena_redzone_corruption) #endif static void arena_redzone_corruption(void *ptr, size_t usize, bool after, size_t offset, uint8_t byte) { malloc_printf(": Corrupt redzone %zu byte%s %s %p " "(size %zu), byte=%#x\n", offset, (offset == 1) ? "" : "s", after ? "after" : "before", ptr, usize, byte); } #ifdef JEMALLOC_JET #undef arena_redzone_corruption #define arena_redzone_corruption JEMALLOC_N(arena_redzone_corruption) arena_redzone_corruption_t *arena_redzone_corruption = - JEMALLOC_N(arena_redzone_corruption_impl); + JEMALLOC_N(n_arena_redzone_corruption); #endif static void arena_redzones_validate(void *ptr, arena_bin_info_t *bin_info, bool reset) { bool error = false; if (opt_junk_alloc) { size_t size = bin_info->reg_size; size_t redzone_size = bin_info->redzone_size; size_t i; for (i = 1; i <= redzone_size; i++) { uint8_t *byte = (uint8_t *)((uintptr_t)ptr - i); - if (*byte != 0xa5) { + if (*byte != JEMALLOC_ALLOC_JUNK) { error = true; arena_redzone_corruption(ptr, size, false, i, *byte); if (reset) - *byte = 0xa5; + *byte = JEMALLOC_ALLOC_JUNK; } } for (i = 0; i < redzone_size; i++) { uint8_t *byte = (uint8_t *)((uintptr_t)ptr + size + i); - if (*byte != 0xa5) { + if (*byte != JEMALLOC_ALLOC_JUNK) { error = true; arena_redzone_corruption(ptr, size, true, i, *byte); if (reset) - *byte = 0xa5; + *byte = JEMALLOC_ALLOC_JUNK; } } } if (opt_abort && error) abort(); } #ifdef JEMALLOC_JET #undef arena_dalloc_junk_small -#define arena_dalloc_junk_small JEMALLOC_N(arena_dalloc_junk_small_impl) +#define arena_dalloc_junk_small JEMALLOC_N(n_arena_dalloc_junk_small) #endif void arena_dalloc_junk_small(void *ptr, arena_bin_info_t *bin_info) { size_t redzone_size = bin_info->redzone_size; arena_redzones_validate(ptr, bin_info, false); - memset((void *)((uintptr_t)ptr - redzone_size), 0x5a, + memset((void *)((uintptr_t)ptr - redzone_size), JEMALLOC_FREE_JUNK, bin_info->reg_interval); } #ifdef JEMALLOC_JET #undef arena_dalloc_junk_small #define arena_dalloc_junk_small JEMALLOC_N(arena_dalloc_junk_small) arena_dalloc_junk_small_t *arena_dalloc_junk_small = - JEMALLOC_N(arena_dalloc_junk_small_impl); + JEMALLOC_N(n_arena_dalloc_junk_small); #endif void arena_quarantine_junk_small(void *ptr, size_t usize) { szind_t binind; arena_bin_info_t *bin_info; 
cassert(config_fill); assert(opt_junk_free); assert(opt_quarantine); assert(usize <= SMALL_MAXCLASS); binind = size2index(usize); bin_info = &arena_bin_info[binind]; arena_redzones_validate(ptr, bin_info, true); } static void * -arena_malloc_small(tsd_t *tsd, arena_t *arena, szind_t binind, bool zero) +arena_malloc_small(tsdn_t *tsdn, arena_t *arena, szind_t binind, bool zero) { void *ret; arena_bin_t *bin; size_t usize; arena_run_t *run; assert(binind < NBINS); bin = &arena->bins[binind]; usize = index2size(binind); - malloc_mutex_lock(&bin->lock); + malloc_mutex_lock(tsdn, &bin->lock); if ((run = bin->runcur) != NULL && run->nfree > 0) ret = arena_run_reg_alloc(run, &arena_bin_info[binind]); else - ret = arena_bin_malloc_hard(arena, bin); + ret = arena_bin_malloc_hard(tsdn, arena, bin); if (ret == NULL) { - malloc_mutex_unlock(&bin->lock); + malloc_mutex_unlock(tsdn, &bin->lock); return (NULL); } if (config_stats) { bin->stats.nmalloc++; bin->stats.nrequests++; bin->stats.curregs++; } - malloc_mutex_unlock(&bin->lock); - if (config_prof && !isthreaded && arena_prof_accum(arena, usize)) - prof_idump(); + malloc_mutex_unlock(tsdn, &bin->lock); + if (config_prof && !isthreaded && arena_prof_accum(tsdn, arena, usize)) + prof_idump(tsdn); if (!zero) { if (config_fill) { if (unlikely(opt_junk_alloc)) { arena_alloc_junk_small(ret, &arena_bin_info[binind], false); } else if (unlikely(opt_zero)) memset(ret, 0, usize); } JEMALLOC_VALGRIND_MAKE_MEM_UNDEFINED(ret, usize); } else { if (config_fill && unlikely(opt_junk_alloc)) { arena_alloc_junk_small(ret, &arena_bin_info[binind], true); } JEMALLOC_VALGRIND_MAKE_MEM_UNDEFINED(ret, usize); memset(ret, 0, usize); } - arena_decay_tick(tsd, arena); + arena_decay_tick(tsdn, arena); return (ret); } void * -arena_malloc_large(tsd_t *tsd, arena_t *arena, szind_t binind, bool zero) +arena_malloc_large(tsdn_t *tsdn, arena_t *arena, szind_t binind, bool zero) { void *ret; size_t usize; uintptr_t random_offset; arena_run_t *run; arena_chunk_map_misc_t *miscelm; - UNUSED bool idump; + UNUSED bool idump JEMALLOC_CC_SILENCE_INIT(false); /* Large allocation. */ usize = index2size(binind); - malloc_mutex_lock(&arena->lock); + malloc_mutex_lock(tsdn, &arena->lock); if (config_cache_oblivious) { uint64_t r; /* * Compute a uniformly distributed offset within the first page * that is a multiple of the cacheline size, e.g. [0 .. 63) * 64 * for 4 KiB pages and 64-byte cachelines. 
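The cache-oblivious offset described in the comment above is just a uniform draw from the 2^(LG_PAGE - LG_CACHELINE) cacheline slots in the first page, shifted into place. A standalone version with assumed 4 KiB pages and 64-byte lines, using rand() in place of the per-arena PRNG (illustrative only, not part of this patch):

#include <stdint.h>
#include <stdlib.h>

#define LG_PAGE		12	/* 4 KiB pages (assumed) */
#define LG_CACHELINE	6	/* 64-byte cache lines (assumed) */

static uintptr_t
random_cacheline_offset(void)
{
	/* One of 2^(LG_PAGE - LG_CACHELINE) = 64 slots, cacheline aligned. */
	uint64_t r = (uint64_t)rand() & ((1u << (LG_PAGE - LG_CACHELINE)) - 1);
	return ((uintptr_t)r << LG_CACHELINE);
}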
*/ r = prng_lg_range(&arena->offset_state, LG_PAGE - LG_CACHELINE); random_offset = ((uintptr_t)r) << LG_CACHELINE; } else random_offset = 0; - run = arena_run_alloc_large(arena, usize + large_pad, zero); + run = arena_run_alloc_large(tsdn, arena, usize + large_pad, zero); if (run == NULL) { - malloc_mutex_unlock(&arena->lock); + malloc_mutex_unlock(tsdn, &arena->lock); return (NULL); } miscelm = arena_run_to_miscelm(run); ret = (void *)((uintptr_t)arena_miscelm_to_rpages(miscelm) + random_offset); if (config_stats) { szind_t index = binind - NBINS; arena->stats.nmalloc_large++; arena->stats.nrequests_large++; arena->stats.allocated_large += usize; arena->stats.lstats[index].nmalloc++; arena->stats.lstats[index].nrequests++; arena->stats.lstats[index].curruns++; } if (config_prof) idump = arena_prof_accum_locked(arena, usize); - malloc_mutex_unlock(&arena->lock); + malloc_mutex_unlock(tsdn, &arena->lock); if (config_prof && idump) - prof_idump(); + prof_idump(tsdn); if (!zero) { if (config_fill) { if (unlikely(opt_junk_alloc)) - memset(ret, 0xa5, usize); + memset(ret, JEMALLOC_ALLOC_JUNK, usize); else if (unlikely(opt_zero)) memset(ret, 0, usize); } } - arena_decay_tick(tsd, arena); + arena_decay_tick(tsdn, arena); return (ret); } void * -arena_malloc_hard(tsd_t *tsd, arena_t *arena, size_t size, szind_t ind, - bool zero, tcache_t *tcache) +arena_malloc_hard(tsdn_t *tsdn, arena_t *arena, size_t size, szind_t ind, + bool zero) { - arena = arena_choose(tsd, arena); + assert(!tsdn_null(tsdn) || arena != NULL); + + if (likely(!tsdn_null(tsdn))) + arena = arena_choose(tsdn_tsd(tsdn), arena); if (unlikely(arena == NULL)) return (NULL); if (likely(size <= SMALL_MAXCLASS)) - return (arena_malloc_small(tsd, arena, ind, zero)); + return (arena_malloc_small(tsdn, arena, ind, zero)); if (likely(size <= large_maxclass)) - return (arena_malloc_large(tsd, arena, ind, zero)); - return (huge_malloc(tsd, arena, index2size(ind), zero, tcache)); + return (arena_malloc_large(tsdn, arena, ind, zero)); + return (huge_malloc(tsdn, arena, index2size(ind), zero)); } /* Only handles large allocations that require more than page alignment. 
*/ static void * -arena_palloc_large(tsd_t *tsd, arena_t *arena, size_t usize, size_t alignment, +arena_palloc_large(tsdn_t *tsdn, arena_t *arena, size_t usize, size_t alignment, bool zero) { void *ret; size_t alloc_size, leadsize, trailsize; arena_run_t *run; arena_chunk_t *chunk; arena_chunk_map_misc_t *miscelm; void *rpages; + assert(!tsdn_null(tsdn) || arena != NULL); assert(usize == PAGE_CEILING(usize)); - arena = arena_choose(tsd, arena); + if (likely(!tsdn_null(tsdn))) + arena = arena_choose(tsdn_tsd(tsdn), arena); if (unlikely(arena == NULL)) return (NULL); alignment = PAGE_CEILING(alignment); - alloc_size = usize + large_pad + alignment - PAGE; + alloc_size = usize + large_pad + alignment; - malloc_mutex_lock(&arena->lock); - run = arena_run_alloc_large(arena, alloc_size, false); + malloc_mutex_lock(tsdn, &arena->lock); + run = arena_run_alloc_large(tsdn, arena, alloc_size, false); if (run == NULL) { - malloc_mutex_unlock(&arena->lock); + malloc_mutex_unlock(tsdn, &arena->lock); return (NULL); } chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(run); miscelm = arena_run_to_miscelm(run); rpages = arena_miscelm_to_rpages(miscelm); leadsize = ALIGNMENT_CEILING((uintptr_t)rpages, alignment) - (uintptr_t)rpages; assert(alloc_size >= leadsize + usize); trailsize = alloc_size - leadsize - usize - large_pad; if (leadsize != 0) { arena_chunk_map_misc_t *head_miscelm = miscelm; arena_run_t *head_run = run; - miscelm = arena_miscelm_get(chunk, + miscelm = arena_miscelm_get_mutable(chunk, arena_miscelm_to_pageind(head_miscelm) + (leadsize >> LG_PAGE)); run = &miscelm->run; - arena_run_trim_head(arena, chunk, head_run, alloc_size, + arena_run_trim_head(tsdn, arena, chunk, head_run, alloc_size, alloc_size - leadsize); } if (trailsize != 0) { - arena_run_trim_tail(arena, chunk, run, usize + large_pad + + arena_run_trim_tail(tsdn, arena, chunk, run, usize + large_pad + trailsize, usize + large_pad, false); } if (arena_run_init_large(arena, run, usize + large_pad, zero)) { size_t run_ind = arena_miscelm_to_pageind(arena_run_to_miscelm(run)); bool dirty = (arena_mapbits_dirty_get(chunk, run_ind) != 0); bool decommitted = (arena_mapbits_decommitted_get(chunk, run_ind) != 0); assert(decommitted); /* Cause of OOM. */ - arena_run_dalloc(arena, run, dirty, false, decommitted); - malloc_mutex_unlock(&arena->lock); + arena_run_dalloc(tsdn, arena, run, dirty, false, decommitted); + malloc_mutex_unlock(tsdn, &arena->lock); return (NULL); } ret = arena_miscelm_to_rpages(miscelm); if (config_stats) { szind_t index = size2index(usize) - NBINS; arena->stats.nmalloc_large++; arena->stats.nrequests_large++; arena->stats.allocated_large += usize; arena->stats.lstats[index].nmalloc++; arena->stats.lstats[index].nrequests++; arena->stats.lstats[index].curruns++; } - malloc_mutex_unlock(&arena->lock); + malloc_mutex_unlock(tsdn, &arena->lock); if (config_fill && !zero) { if (unlikely(opt_junk_alloc)) - memset(ret, 0xa5, usize); + memset(ret, JEMALLOC_ALLOC_JUNK, usize); else if (unlikely(opt_zero)) memset(ret, 0, usize); } - arena_decay_tick(tsd, arena); + arena_decay_tick(tsdn, arena); return (ret); } void * -arena_palloc(tsd_t *tsd, arena_t *arena, size_t usize, size_t alignment, +arena_palloc(tsdn_t *tsdn, arena_t *arena, size_t usize, size_t alignment, bool zero, tcache_t *tcache) { void *ret; if (usize <= SMALL_MAXCLASS && (alignment < PAGE || (alignment == PAGE && (usize & PAGE_MASK) == 0))) { /* Small; alignment doesn't require special run placement. 
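The arena_palloc_large() path shown above over-allocates by the alignment, trims a lead run so the usable region lands on the alignment boundary, and trims whatever is left at the tail. The arithmetic, with large_pad omitted and a hypothetical helper name (illustrative only, not part of this patch):

#include <stdint.h>
#include <stddef.h>

/* Round addr up to a multiple of `align` (a power of two). */
#define ALIGN_CEIL(a, align) (((a) + (align) - 1) & ~((uintptr_t)(align) - 1))

static void
plan_aligned_run(uintptr_t base, size_t usize, size_t alignment,
    size_t alloc_size, size_t *leadsize, size_t *trailsize)
{
	*leadsize = (size_t)(ALIGN_CEIL(base, alignment) - base);
	*trailsize = alloc_size - *leadsize - usize;
}

Since leadsize is always less than alignment, over-allocating by alignment bytes guarantees that trailsize never underflows.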
*/ - ret = arena_malloc(tsd, arena, usize, size2index(usize), zero, + ret = arena_malloc(tsdn, arena, usize, size2index(usize), zero, tcache, true); } else if (usize <= large_maxclass && alignment <= PAGE) { /* * Large; alignment doesn't require special run placement. * However, the cached pointer may be at a random offset from * the base of the run, so do some bit manipulation to retrieve * the base. */ - ret = arena_malloc(tsd, arena, usize, size2index(usize), zero, + ret = arena_malloc(tsdn, arena, usize, size2index(usize), zero, tcache, true); if (config_cache_oblivious) ret = (void *)((uintptr_t)ret & ~PAGE_MASK); } else { if (likely(usize <= large_maxclass)) { - ret = arena_palloc_large(tsd, arena, usize, alignment, + ret = arena_palloc_large(tsdn, arena, usize, alignment, zero); } else if (likely(alignment <= chunksize)) - ret = huge_malloc(tsd, arena, usize, zero, tcache); + ret = huge_malloc(tsdn, arena, usize, zero); else { - ret = huge_palloc(tsd, arena, usize, alignment, zero, - tcache); + ret = huge_palloc(tsdn, arena, usize, alignment, zero); } } return (ret); } void -arena_prof_promoted(const void *ptr, size_t size) +arena_prof_promoted(tsdn_t *tsdn, const void *ptr, size_t size) { arena_chunk_t *chunk; size_t pageind; szind_t binind; cassert(config_prof); assert(ptr != NULL); assert(CHUNK_ADDR2BASE(ptr) != ptr); - assert(isalloc(ptr, false) == LARGE_MINCLASS); - assert(isalloc(ptr, true) == LARGE_MINCLASS); + assert(isalloc(tsdn, ptr, false) == LARGE_MINCLASS); + assert(isalloc(tsdn, ptr, true) == LARGE_MINCLASS); assert(size <= SMALL_MAXCLASS); chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(ptr); pageind = ((uintptr_t)ptr - (uintptr_t)chunk) >> LG_PAGE; binind = size2index(size); assert(binind < NBINS); arena_mapbits_large_binind_set(chunk, pageind, binind); - assert(isalloc(ptr, false) == LARGE_MINCLASS); - assert(isalloc(ptr, true) == size); + assert(isalloc(tsdn, ptr, false) == LARGE_MINCLASS); + assert(isalloc(tsdn, ptr, true) == size); } static void arena_dissociate_bin_run(arena_chunk_t *chunk, arena_run_t *run, arena_bin_t *bin) { /* Dissociate run from bin. */ if (run == bin->runcur) bin->runcur = NULL; else { szind_t binind = arena_bin_index(extent_node_arena_get( &chunk->node), bin); arena_bin_info_t *bin_info = &arena_bin_info[binind]; + /* + * The following block's conditional is necessary because if the + * run only contains one region, then it never gets inserted + * into the non-full runs tree. + */ if (bin_info->nregs != 1) { - /* - * This block's conditional is necessary because if the - * run only contains one region, then it never gets - * inserted into the non-full runs tree. 
- */ - arena_bin_runs_remove(bin, run); + arena_chunk_map_misc_t *miscelm = + arena_run_to_miscelm(run); + + arena_run_heap_remove(&bin->runs, miscelm); } } } static void -arena_dalloc_bin_run(arena_t *arena, arena_chunk_t *chunk, arena_run_t *run, - arena_bin_t *bin) +arena_dalloc_bin_run(tsdn_t *tsdn, arena_t *arena, arena_chunk_t *chunk, + arena_run_t *run, arena_bin_t *bin) { assert(run != bin->runcur); - assert(arena_run_tree_search(&bin->runs, arena_run_to_miscelm(run)) == - NULL); - malloc_mutex_unlock(&bin->lock); + malloc_mutex_unlock(tsdn, &bin->lock); /******************************/ - malloc_mutex_lock(&arena->lock); - arena_run_dalloc(arena, run, true, false, false); - malloc_mutex_unlock(&arena->lock); + malloc_mutex_lock(tsdn, &arena->lock); + arena_run_dalloc(tsdn, arena, run, true, false, false); + malloc_mutex_unlock(tsdn, &arena->lock); /****************************/ - malloc_mutex_lock(&bin->lock); + malloc_mutex_lock(tsdn, &bin->lock); if (config_stats) bin->stats.curruns--; } static void arena_bin_lower_run(arena_t *arena, arena_chunk_t *chunk, arena_run_t *run, arena_bin_t *bin) { /* * Make sure that if bin->runcur is non-NULL, it refers to the lowest * non-full run. It is okay to NULL runcur out rather than proactively * keeping it pointing at the lowest non-full run. */ if ((uintptr_t)run < (uintptr_t)bin->runcur) { /* Switch runcur. */ if (bin->runcur->nfree > 0) arena_bin_runs_insert(bin, bin->runcur); bin->runcur = run; if (config_stats) bin->stats.reruns++; } else arena_bin_runs_insert(bin, run); } static void -arena_dalloc_bin_locked_impl(arena_t *arena, arena_chunk_t *chunk, void *ptr, - arena_chunk_map_bits_t *bitselm, bool junked) +arena_dalloc_bin_locked_impl(tsdn_t *tsdn, arena_t *arena, arena_chunk_t *chunk, + void *ptr, arena_chunk_map_bits_t *bitselm, bool junked) { size_t pageind, rpages_ind; arena_run_t *run; arena_bin_t *bin; arena_bin_info_t *bin_info; szind_t binind; pageind = ((uintptr_t)ptr - (uintptr_t)chunk) >> LG_PAGE; rpages_ind = pageind - arena_mapbits_small_runind_get(chunk, pageind); - run = &arena_miscelm_get(chunk, rpages_ind)->run; + run = &arena_miscelm_get_mutable(chunk, rpages_ind)->run; binind = run->binind; bin = &arena->bins[binind]; bin_info = &arena_bin_info[binind]; if (!junked && config_fill && unlikely(opt_junk_free)) arena_dalloc_junk_small(ptr, bin_info); arena_run_reg_dalloc(run, ptr); if (run->nfree == bin_info->nregs) { arena_dissociate_bin_run(chunk, run, bin); - arena_dalloc_bin_run(arena, chunk, run, bin); + arena_dalloc_bin_run(tsdn, arena, chunk, run, bin); } else if (run->nfree == 1 && run != bin->runcur) arena_bin_lower_run(arena, chunk, run, bin); if (config_stats) { bin->stats.ndalloc++; bin->stats.curregs--; } } void -arena_dalloc_bin_junked_locked(arena_t *arena, arena_chunk_t *chunk, void *ptr, - arena_chunk_map_bits_t *bitselm) +arena_dalloc_bin_junked_locked(tsdn_t *tsdn, arena_t *arena, + arena_chunk_t *chunk, void *ptr, arena_chunk_map_bits_t *bitselm) { - arena_dalloc_bin_locked_impl(arena, chunk, ptr, bitselm, true); + arena_dalloc_bin_locked_impl(tsdn, arena, chunk, ptr, bitselm, true); } void -arena_dalloc_bin(arena_t *arena, arena_chunk_t *chunk, void *ptr, +arena_dalloc_bin(tsdn_t *tsdn, arena_t *arena, arena_chunk_t *chunk, void *ptr, size_t pageind, arena_chunk_map_bits_t *bitselm) { arena_run_t *run; arena_bin_t *bin; size_t rpages_ind; rpages_ind = pageind - arena_mapbits_small_runind_get(chunk, pageind); - run = &arena_miscelm_get(chunk, rpages_ind)->run; + run = 
&arena_miscelm_get_mutable(chunk, rpages_ind)->run; bin = &arena->bins[run->binind]; - malloc_mutex_lock(&bin->lock); - arena_dalloc_bin_locked_impl(arena, chunk, ptr, bitselm, false); - malloc_mutex_unlock(&bin->lock); + malloc_mutex_lock(tsdn, &bin->lock); + arena_dalloc_bin_locked_impl(tsdn, arena, chunk, ptr, bitselm, false); + malloc_mutex_unlock(tsdn, &bin->lock); } void -arena_dalloc_small(tsd_t *tsd, arena_t *arena, arena_chunk_t *chunk, void *ptr, - size_t pageind) +arena_dalloc_small(tsdn_t *tsdn, arena_t *arena, arena_chunk_t *chunk, + void *ptr, size_t pageind) { arena_chunk_map_bits_t *bitselm; if (config_debug) { /* arena_ptr_small_binind_get() does extra sanity checking. */ assert(arena_ptr_small_binind_get(ptr, arena_mapbits_get(chunk, pageind)) != BININD_INVALID); } - bitselm = arena_bitselm_get(chunk, pageind); - arena_dalloc_bin(arena, chunk, ptr, pageind, bitselm); - arena_decay_tick(tsd, arena); + bitselm = arena_bitselm_get_mutable(chunk, pageind); + arena_dalloc_bin(tsdn, arena, chunk, ptr, pageind, bitselm); + arena_decay_tick(tsdn, arena); } #ifdef JEMALLOC_JET #undef arena_dalloc_junk_large -#define arena_dalloc_junk_large JEMALLOC_N(arena_dalloc_junk_large_impl) +#define arena_dalloc_junk_large JEMALLOC_N(n_arena_dalloc_junk_large) #endif void arena_dalloc_junk_large(void *ptr, size_t usize) { if (config_fill && unlikely(opt_junk_free)) - memset(ptr, 0x5a, usize); + memset(ptr, JEMALLOC_FREE_JUNK, usize); } #ifdef JEMALLOC_JET #undef arena_dalloc_junk_large #define arena_dalloc_junk_large JEMALLOC_N(arena_dalloc_junk_large) arena_dalloc_junk_large_t *arena_dalloc_junk_large = - JEMALLOC_N(arena_dalloc_junk_large_impl); + JEMALLOC_N(n_arena_dalloc_junk_large); #endif static void -arena_dalloc_large_locked_impl(arena_t *arena, arena_chunk_t *chunk, - void *ptr, bool junked) +arena_dalloc_large_locked_impl(tsdn_t *tsdn, arena_t *arena, + arena_chunk_t *chunk, void *ptr, bool junked) { size_t pageind = ((uintptr_t)ptr - (uintptr_t)chunk) >> LG_PAGE; - arena_chunk_map_misc_t *miscelm = arena_miscelm_get(chunk, pageind); + arena_chunk_map_misc_t *miscelm = arena_miscelm_get_mutable(chunk, + pageind); arena_run_t *run = &miscelm->run; if (config_fill || config_stats) { size_t usize = arena_mapbits_large_size_get(chunk, pageind) - large_pad; if (!junked) arena_dalloc_junk_large(ptr, usize); if (config_stats) { szind_t index = size2index(usize) - NBINS; arena->stats.ndalloc_large++; arena->stats.allocated_large -= usize; arena->stats.lstats[index].ndalloc++; arena->stats.lstats[index].curruns--; } } - arena_run_dalloc(arena, run, true, false, false); + arena_run_dalloc(tsdn, arena, run, true, false, false); } void -arena_dalloc_large_junked_locked(arena_t *arena, arena_chunk_t *chunk, - void *ptr) +arena_dalloc_large_junked_locked(tsdn_t *tsdn, arena_t *arena, + arena_chunk_t *chunk, void *ptr) { - arena_dalloc_large_locked_impl(arena, chunk, ptr, true); + arena_dalloc_large_locked_impl(tsdn, arena, chunk, ptr, true); } void -arena_dalloc_large(tsd_t *tsd, arena_t *arena, arena_chunk_t *chunk, void *ptr) +arena_dalloc_large(tsdn_t *tsdn, arena_t *arena, arena_chunk_t *chunk, + void *ptr) { - malloc_mutex_lock(&arena->lock); - arena_dalloc_large_locked_impl(arena, chunk, ptr, false); - malloc_mutex_unlock(&arena->lock); - arena_decay_tick(tsd, arena); + malloc_mutex_lock(tsdn, &arena->lock); + arena_dalloc_large_locked_impl(tsdn, arena, chunk, ptr, false); + malloc_mutex_unlock(tsdn, &arena->lock); + arena_decay_tick(tsdn, arena); } static void 
-arena_ralloc_large_shrink(arena_t *arena, arena_chunk_t *chunk, void *ptr, - size_t oldsize, size_t size) +arena_ralloc_large_shrink(tsdn_t *tsdn, arena_t *arena, arena_chunk_t *chunk, + void *ptr, size_t oldsize, size_t size) { size_t pageind = ((uintptr_t)ptr - (uintptr_t)chunk) >> LG_PAGE; - arena_chunk_map_misc_t *miscelm = arena_miscelm_get(chunk, pageind); + arena_chunk_map_misc_t *miscelm = arena_miscelm_get_mutable(chunk, + pageind); arena_run_t *run = &miscelm->run; assert(size < oldsize); /* * Shrink the run, and make trailing pages available for other * allocations. */ - malloc_mutex_lock(&arena->lock); - arena_run_trim_tail(arena, chunk, run, oldsize + large_pad, size + + malloc_mutex_lock(tsdn, &arena->lock); + arena_run_trim_tail(tsdn, arena, chunk, run, oldsize + large_pad, size + large_pad, true); if (config_stats) { szind_t oldindex = size2index(oldsize) - NBINS; szind_t index = size2index(size) - NBINS; arena->stats.ndalloc_large++; arena->stats.allocated_large -= oldsize; arena->stats.lstats[oldindex].ndalloc++; arena->stats.lstats[oldindex].curruns--; arena->stats.nmalloc_large++; arena->stats.nrequests_large++; arena->stats.allocated_large += size; arena->stats.lstats[index].nmalloc++; arena->stats.lstats[index].nrequests++; arena->stats.lstats[index].curruns++; } - malloc_mutex_unlock(&arena->lock); + malloc_mutex_unlock(tsdn, &arena->lock); } static bool -arena_ralloc_large_grow(arena_t *arena, arena_chunk_t *chunk, void *ptr, - size_t oldsize, size_t usize_min, size_t usize_max, bool zero) +arena_ralloc_large_grow(tsdn_t *tsdn, arena_t *arena, arena_chunk_t *chunk, + void *ptr, size_t oldsize, size_t usize_min, size_t usize_max, bool zero) { size_t pageind = ((uintptr_t)ptr - (uintptr_t)chunk) >> LG_PAGE; size_t npages = (oldsize + large_pad) >> LG_PAGE; size_t followsize; assert(oldsize == arena_mapbits_large_size_get(chunk, pageind) - large_pad); /* Try to extend the run. */ - malloc_mutex_lock(&arena->lock); + malloc_mutex_lock(tsdn, &arena->lock); if (pageind+npages >= chunk_npages || arena_mapbits_allocated_get(chunk, pageind+npages) != 0) goto label_fail; followsize = arena_mapbits_unallocated_size_get(chunk, pageind+npages); if (oldsize + followsize >= usize_min) { /* * The next run is available and sufficiently large. Split the * following run, then merge the first part with the existing * allocation. */ arena_run_t *run; size_t usize, splitsize, size, flag_dirty, flag_unzeroed_mask; usize = usize_max; while (oldsize + followsize < usize) usize = index2size(size2index(usize)-1); assert(usize >= usize_min); assert(usize >= oldsize); splitsize = usize - oldsize; if (splitsize == 0) goto label_fail; - run = &arena_miscelm_get(chunk, pageind+npages)->run; + run = &arena_miscelm_get_mutable(chunk, pageind+npages)->run; if (arena_run_split_large(arena, run, splitsize, zero)) goto label_fail; if (config_cache_oblivious && zero) { /* * Zero the trailing bytes of the original allocation's * last page, since they are in an indeterminate state. * There will always be trailing bytes, because ptr's * offset from the beginning of the run is a multiple of * CACHELINE in [0 .. PAGE). */ void *zbase = (void *)((uintptr_t)ptr + oldsize); void *zpast = PAGE_ADDR2BASE((void *)((uintptr_t)zbase + PAGE)); size_t nzero = (uintptr_t)zpast - (uintptr_t)zbase; assert(nzero > 0); memset(zbase, 0, nzero); } size = oldsize + splitsize; npages = (size + large_pad) >> LG_PAGE; /* * Mark the extended run as dirty if either portion of the run * was dirty before allocation. 
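The in-place grow path above picks the largest size class that still fits in oldsize plus the following free run, walking down from usize_max. A sketch of that selection over a hypothetical size-class table (illustrative only, not part of this patch; the real code walks size2index()/index2size()):

#include <stddef.h>

/* Returns 0 if no class in [usize_min, usize_max] fits the available space. */
static size_t
largest_fitting_class(const size_t *classes, size_t nclasses,
    size_t usize_min, size_t usize_max, size_t oldsize, size_t followsize)
{
	size_t avail = oldsize + followsize;
	size_t i;

	for (i = nclasses; i-- > 0;) {
		size_t usize = classes[i];

		if (usize > usize_max || usize < usize_min)
			continue;
		if (usize <= avail)
			return (usize);
	}
	return (0);
}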
This is rather pedantic, * because there's not actually any sequence of events that * could cause the resulting run to be passed to * arena_run_dalloc() with the dirty argument set to false * (which is when dirty flag consistency would really matter). */ flag_dirty = arena_mapbits_dirty_get(chunk, pageind) | arena_mapbits_dirty_get(chunk, pageind+npages-1); flag_unzeroed_mask = flag_dirty == 0 ? CHUNK_MAP_UNZEROED : 0; arena_mapbits_large_set(chunk, pageind, size + large_pad, flag_dirty | (flag_unzeroed_mask & arena_mapbits_unzeroed_get(chunk, pageind))); arena_mapbits_large_set(chunk, pageind+npages-1, 0, flag_dirty | (flag_unzeroed_mask & arena_mapbits_unzeroed_get(chunk, pageind+npages-1))); if (config_stats) { szind_t oldindex = size2index(oldsize) - NBINS; szind_t index = size2index(size) - NBINS; arena->stats.ndalloc_large++; arena->stats.allocated_large -= oldsize; arena->stats.lstats[oldindex].ndalloc++; arena->stats.lstats[oldindex].curruns--; arena->stats.nmalloc_large++; arena->stats.nrequests_large++; arena->stats.allocated_large += size; arena->stats.lstats[index].nmalloc++; arena->stats.lstats[index].nrequests++; arena->stats.lstats[index].curruns++; } - malloc_mutex_unlock(&arena->lock); + malloc_mutex_unlock(tsdn, &arena->lock); return (false); } label_fail: - malloc_mutex_unlock(&arena->lock); + malloc_mutex_unlock(tsdn, &arena->lock); return (true); } #ifdef JEMALLOC_JET #undef arena_ralloc_junk_large -#define arena_ralloc_junk_large JEMALLOC_N(arena_ralloc_junk_large_impl) +#define arena_ralloc_junk_large JEMALLOC_N(n_arena_ralloc_junk_large) #endif static void arena_ralloc_junk_large(void *ptr, size_t old_usize, size_t usize) { if (config_fill && unlikely(opt_junk_free)) { - memset((void *)((uintptr_t)ptr + usize), 0x5a, + memset((void *)((uintptr_t)ptr + usize), JEMALLOC_FREE_JUNK, old_usize - usize); } } #ifdef JEMALLOC_JET #undef arena_ralloc_junk_large #define arena_ralloc_junk_large JEMALLOC_N(arena_ralloc_junk_large) arena_ralloc_junk_large_t *arena_ralloc_junk_large = - JEMALLOC_N(arena_ralloc_junk_large_impl); + JEMALLOC_N(n_arena_ralloc_junk_large); #endif /* * Try to resize a large allocation, in order to avoid copying. This will * always fail if growing an object, and the following run is already in use. */ static bool -arena_ralloc_large(void *ptr, size_t oldsize, size_t usize_min, +arena_ralloc_large(tsdn_t *tsdn, void *ptr, size_t oldsize, size_t usize_min, size_t usize_max, bool zero) { arena_chunk_t *chunk; arena_t *arena; if (oldsize == usize_max) { /* Current size class is compatible and maximal. */ return (false); } chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(ptr); arena = extent_node_arena_get(&chunk->node); if (oldsize < usize_max) { - bool ret = arena_ralloc_large_grow(arena, chunk, ptr, oldsize, - usize_min, usize_max, zero); + bool ret = arena_ralloc_large_grow(tsdn, arena, chunk, ptr, + oldsize, usize_min, usize_max, zero); if (config_fill && !ret && !zero) { if (unlikely(opt_junk_alloc)) { - memset((void *)((uintptr_t)ptr + oldsize), 0xa5, - isalloc(ptr, config_prof) - oldsize); + memset((void *)((uintptr_t)ptr + oldsize), + JEMALLOC_ALLOC_JUNK, + isalloc(tsdn, ptr, config_prof) - oldsize); } else if (unlikely(opt_zero)) { memset((void *)((uintptr_t)ptr + oldsize), 0, - isalloc(ptr, config_prof) - oldsize); + isalloc(tsdn, ptr, config_prof) - oldsize); } } return (ret); } assert(oldsize > usize_max); /* Fill before shrinking in order avoid a race. 
*/ arena_ralloc_junk_large(ptr, oldsize, usize_max); - arena_ralloc_large_shrink(arena, chunk, ptr, oldsize, usize_max); + arena_ralloc_large_shrink(tsdn, arena, chunk, ptr, oldsize, usize_max); return (false); } bool -arena_ralloc_no_move(tsd_t *tsd, void *ptr, size_t oldsize, size_t size, +arena_ralloc_no_move(tsdn_t *tsdn, void *ptr, size_t oldsize, size_t size, size_t extra, bool zero) { size_t usize_min, usize_max; /* Calls with non-zero extra had to clamp extra. */ assert(extra == 0 || size + extra <= HUGE_MAXCLASS); if (unlikely(size > HUGE_MAXCLASS)) return (true); usize_min = s2u(size); usize_max = s2u(size + extra); if (likely(oldsize <= large_maxclass && usize_min <= large_maxclass)) { arena_chunk_t *chunk; /* * Avoid moving the allocation if the size class can be left the * same. */ if (oldsize <= SMALL_MAXCLASS) { assert(arena_bin_info[size2index(oldsize)].reg_size == oldsize); if ((usize_max > SMALL_MAXCLASS || size2index(usize_max) != size2index(oldsize)) && (size > oldsize || usize_max < oldsize)) return (true); } else { if (usize_max <= SMALL_MAXCLASS) return (true); - if (arena_ralloc_large(ptr, oldsize, usize_min, + if (arena_ralloc_large(tsdn, ptr, oldsize, usize_min, usize_max, zero)) return (true); } chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(ptr); - arena_decay_tick(tsd, extent_node_arena_get(&chunk->node)); + arena_decay_tick(tsdn, extent_node_arena_get(&chunk->node)); return (false); } else { - return (huge_ralloc_no_move(tsd, ptr, oldsize, usize_min, + return (huge_ralloc_no_move(tsdn, ptr, oldsize, usize_min, usize_max, zero)); } } static void * -arena_ralloc_move_helper(tsd_t *tsd, arena_t *arena, size_t usize, +arena_ralloc_move_helper(tsdn_t *tsdn, arena_t *arena, size_t usize, size_t alignment, bool zero, tcache_t *tcache) { if (alignment == 0) - return (arena_malloc(tsd, arena, usize, size2index(usize), zero, - tcache, true)); + return (arena_malloc(tsdn, arena, usize, size2index(usize), + zero, tcache, true)); usize = sa2u(usize, alignment); if (unlikely(usize == 0 || usize > HUGE_MAXCLASS)) return (NULL); - return (ipalloct(tsd, usize, alignment, zero, tcache, arena)); + return (ipalloct(tsdn, usize, alignment, zero, tcache, arena)); } void * arena_ralloc(tsd_t *tsd, arena_t *arena, void *ptr, size_t oldsize, size_t size, size_t alignment, bool zero, tcache_t *tcache) { void *ret; size_t usize; usize = s2u(size); if (unlikely(usize == 0 || size > HUGE_MAXCLASS)) return (NULL); if (likely(usize <= large_maxclass)) { size_t copysize; /* Try to avoid moving the allocation. */ - if (!arena_ralloc_no_move(tsd, ptr, oldsize, usize, 0, zero)) + if (!arena_ralloc_no_move(tsd_tsdn(tsd), ptr, oldsize, usize, 0, + zero)) return (ptr); /* * size and oldsize are different enough that we need to move * the object. In that case, fall back to allocating new space * and copying. */ - ret = arena_ralloc_move_helper(tsd, arena, usize, alignment, - zero, tcache); + ret = arena_ralloc_move_helper(tsd_tsdn(tsd), arena, usize, + alignment, zero, tcache); if (ret == NULL) return (NULL); /* * Junk/zero-filling were already done by * ipalloc()/arena_malloc(). */ copysize = (usize < oldsize) ? 
usize : oldsize; JEMALLOC_VALGRIND_MAKE_MEM_UNDEFINED(ret, copysize); memcpy(ret, ptr, copysize); - isqalloc(tsd, ptr, oldsize, tcache); + isqalloc(tsd, ptr, oldsize, tcache, true); } else { ret = huge_ralloc(tsd, arena, ptr, oldsize, usize, alignment, zero, tcache); } return (ret); } dss_prec_t -arena_dss_prec_get(arena_t *arena) +arena_dss_prec_get(tsdn_t *tsdn, arena_t *arena) { dss_prec_t ret; - malloc_mutex_lock(&arena->lock); + malloc_mutex_lock(tsdn, &arena->lock); ret = arena->dss_prec; - malloc_mutex_unlock(&arena->lock); + malloc_mutex_unlock(tsdn, &arena->lock); return (ret); } bool -arena_dss_prec_set(arena_t *arena, dss_prec_t dss_prec) +arena_dss_prec_set(tsdn_t *tsdn, arena_t *arena, dss_prec_t dss_prec) { if (!have_dss) return (dss_prec != dss_prec_disabled); - malloc_mutex_lock(&arena->lock); + malloc_mutex_lock(tsdn, &arena->lock); arena->dss_prec = dss_prec; - malloc_mutex_unlock(&arena->lock); + malloc_mutex_unlock(tsdn, &arena->lock); return (false); } ssize_t arena_lg_dirty_mult_default_get(void) { return ((ssize_t)atomic_read_z((size_t *)&lg_dirty_mult_default)); } bool arena_lg_dirty_mult_default_set(ssize_t lg_dirty_mult) { if (opt_purge != purge_mode_ratio) return (true); if (!arena_lg_dirty_mult_valid(lg_dirty_mult)) return (true); atomic_write_z((size_t *)&lg_dirty_mult_default, (size_t)lg_dirty_mult); return (false); } ssize_t arena_decay_time_default_get(void) { return ((ssize_t)atomic_read_z((size_t *)&decay_time_default)); } bool arena_decay_time_default_set(ssize_t decay_time) { if (opt_purge != purge_mode_decay) return (true); if (!arena_decay_time_valid(decay_time)) return (true); atomic_write_z((size_t *)&decay_time_default, (size_t)decay_time); return (false); } static void arena_basic_stats_merge_locked(arena_t *arena, unsigned *nthreads, const char **dss, ssize_t *lg_dirty_mult, ssize_t *decay_time, size_t *nactive, size_t *ndirty) { - *nthreads += arena_nthreads_get(arena); + *nthreads += arena_nthreads_get(arena, false); *dss = dss_prec_names[arena->dss_prec]; *lg_dirty_mult = arena->lg_dirty_mult; *decay_time = arena->decay_time; *nactive += arena->nactive; *ndirty += arena->ndirty; } void -arena_basic_stats_merge(arena_t *arena, unsigned *nthreads, const char **dss, - ssize_t *lg_dirty_mult, ssize_t *decay_time, size_t *nactive, - size_t *ndirty) +arena_basic_stats_merge(tsdn_t *tsdn, arena_t *arena, unsigned *nthreads, + const char **dss, ssize_t *lg_dirty_mult, ssize_t *decay_time, + size_t *nactive, size_t *ndirty) { - malloc_mutex_lock(&arena->lock); + malloc_mutex_lock(tsdn, &arena->lock); arena_basic_stats_merge_locked(arena, nthreads, dss, lg_dirty_mult, decay_time, nactive, ndirty); - malloc_mutex_unlock(&arena->lock); + malloc_mutex_unlock(tsdn, &arena->lock); } void -arena_stats_merge(arena_t *arena, unsigned *nthreads, const char **dss, - ssize_t *lg_dirty_mult, ssize_t *decay_time, size_t *nactive, - size_t *ndirty, arena_stats_t *astats, malloc_bin_stats_t *bstats, - malloc_large_stats_t *lstats, malloc_huge_stats_t *hstats) +arena_stats_merge(tsdn_t *tsdn, arena_t *arena, unsigned *nthreads, + const char **dss, ssize_t *lg_dirty_mult, ssize_t *decay_time, + size_t *nactive, size_t *ndirty, arena_stats_t *astats, + malloc_bin_stats_t *bstats, malloc_large_stats_t *lstats, + malloc_huge_stats_t *hstats) { unsigned i; cassert(config_stats); - malloc_mutex_lock(&arena->lock); + malloc_mutex_lock(tsdn, &arena->lock); arena_basic_stats_merge_locked(arena, nthreads, dss, lg_dirty_mult, decay_time, nactive, ndirty); astats->mapped += 
arena->stats.mapped; + astats->retained += arena->stats.retained; astats->npurge += arena->stats.npurge; astats->nmadvise += arena->stats.nmadvise; astats->purged += arena->stats.purged; astats->metadata_mapped += arena->stats.metadata_mapped; astats->metadata_allocated += arena_metadata_allocated_get(arena); astats->allocated_large += arena->stats.allocated_large; astats->nmalloc_large += arena->stats.nmalloc_large; astats->ndalloc_large += arena->stats.ndalloc_large; astats->nrequests_large += arena->stats.nrequests_large; astats->allocated_huge += arena->stats.allocated_huge; astats->nmalloc_huge += arena->stats.nmalloc_huge; astats->ndalloc_huge += arena->stats.ndalloc_huge; for (i = 0; i < nlclasses; i++) { lstats[i].nmalloc += arena->stats.lstats[i].nmalloc; lstats[i].ndalloc += arena->stats.lstats[i].ndalloc; lstats[i].nrequests += arena->stats.lstats[i].nrequests; lstats[i].curruns += arena->stats.lstats[i].curruns; } for (i = 0; i < nhclasses; i++) { hstats[i].nmalloc += arena->stats.hstats[i].nmalloc; hstats[i].ndalloc += arena->stats.hstats[i].ndalloc; hstats[i].curhchunks += arena->stats.hstats[i].curhchunks; } - malloc_mutex_unlock(&arena->lock); + malloc_mutex_unlock(tsdn, &arena->lock); for (i = 0; i < NBINS; i++) { arena_bin_t *bin = &arena->bins[i]; - malloc_mutex_lock(&bin->lock); + malloc_mutex_lock(tsdn, &bin->lock); bstats[i].nmalloc += bin->stats.nmalloc; bstats[i].ndalloc += bin->stats.ndalloc; bstats[i].nrequests += bin->stats.nrequests; bstats[i].curregs += bin->stats.curregs; if (config_tcache) { bstats[i].nfills += bin->stats.nfills; bstats[i].nflushes += bin->stats.nflushes; } bstats[i].nruns += bin->stats.nruns; bstats[i].reruns += bin->stats.reruns; bstats[i].curruns += bin->stats.curruns; - malloc_mutex_unlock(&bin->lock); + malloc_mutex_unlock(tsdn, &bin->lock); } } unsigned -arena_nthreads_get(arena_t *arena) +arena_nthreads_get(arena_t *arena, bool internal) { - return (atomic_read_u(&arena->nthreads)); + return (atomic_read_u(&arena->nthreads[internal])); } void -arena_nthreads_inc(arena_t *arena) +arena_nthreads_inc(arena_t *arena, bool internal) { - atomic_add_u(&arena->nthreads, 1); + atomic_add_u(&arena->nthreads[internal], 1); } void -arena_nthreads_dec(arena_t *arena) +arena_nthreads_dec(arena_t *arena, bool internal) { - atomic_sub_u(&arena->nthreads, 1); + atomic_sub_u(&arena->nthreads[internal], 1); } arena_t * -arena_new(unsigned ind) +arena_new(tsdn_t *tsdn, unsigned ind) { arena_t *arena; size_t arena_size; unsigned i; - arena_bin_t *bin; /* Compute arena size to incorporate sufficient runs_avail elements. */ - arena_size = offsetof(arena_t, runs_avail) + (sizeof(arena_run_tree_t) * + arena_size = offsetof(arena_t, runs_avail) + (sizeof(arena_run_heap_t) * runs_avail_nclasses); /* * Allocate arena, arena->lstats, and arena->hstats contiguously, mainly * because there is no way to clean up if base_alloc() OOMs. 
*/ if (config_stats) { - arena = (arena_t *)base_alloc(CACHELINE_CEILING(arena_size) + - QUANTUM_CEILING(nlclasses * sizeof(malloc_large_stats_t) + - nhclasses) * sizeof(malloc_huge_stats_t)); + arena = (arena_t *)base_alloc(tsdn, + CACHELINE_CEILING(arena_size) + QUANTUM_CEILING(nlclasses * + sizeof(malloc_large_stats_t) + nhclasses) * + sizeof(malloc_huge_stats_t)); } else - arena = (arena_t *)base_alloc(arena_size); + arena = (arena_t *)base_alloc(tsdn, arena_size); if (arena == NULL) return (NULL); arena->ind = ind; - arena->nthreads = 0; - if (malloc_mutex_init(&arena->lock)) + arena->nthreads[0] = arena->nthreads[1] = 0; + if (malloc_mutex_init(&arena->lock, "arena", WITNESS_RANK_ARENA)) return (NULL); if (config_stats) { memset(&arena->stats, 0, sizeof(arena_stats_t)); arena->stats.lstats = (malloc_large_stats_t *)((uintptr_t)arena + CACHELINE_CEILING(arena_size)); memset(arena->stats.lstats, 0, nlclasses * sizeof(malloc_large_stats_t)); arena->stats.hstats = (malloc_huge_stats_t *)((uintptr_t)arena + CACHELINE_CEILING(arena_size) + QUANTUM_CEILING(nlclasses * sizeof(malloc_large_stats_t))); memset(arena->stats.hstats, 0, nhclasses * sizeof(malloc_huge_stats_t)); if (config_tcache) ql_new(&arena->tcache_ql); } if (config_prof) arena->prof_accumbytes = 0; if (config_cache_oblivious) { /* * A nondeterministic seed based on the address of arena reduces * the likelihood of lockstep non-uniform cache index * utilization among identical concurrent processes, but at the * cost of test repeatability. For debug builds, instead use a * deterministic seed. */ arena->offset_state = config_debug ? ind : (uint64_t)(uintptr_t)arena; } - arena->dss_prec = chunk_dss_prec_get(); + arena->dss_prec = chunk_dss_prec_get(tsdn); + ql_new(&arena->achunks); + arena->spare = NULL; arena->lg_dirty_mult = arena_lg_dirty_mult_default_get(); arena->purging = false; arena->nactive = 0; arena->ndirty = 0; for(i = 0; i < runs_avail_nclasses; i++) - arena_run_tree_new(&arena->runs_avail[i]); + arena_run_heap_new(&arena->runs_avail[i]); qr_new(&arena->runs_dirty, rd_link); qr_new(&arena->chunks_cache, cc_link); if (opt_purge == purge_mode_decay) arena_decay_init(arena, arena_decay_time_default_get()); ql_new(&arena->huge); - if (malloc_mutex_init(&arena->huge_mtx)) + if (malloc_mutex_init(&arena->huge_mtx, "arena_huge", + WITNESS_RANK_ARENA_HUGE)) return (NULL); extent_tree_szad_new(&arena->chunks_szad_cached); extent_tree_ad_new(&arena->chunks_ad_cached); extent_tree_szad_new(&arena->chunks_szad_retained); extent_tree_ad_new(&arena->chunks_ad_retained); - if (malloc_mutex_init(&arena->chunks_mtx)) + if (malloc_mutex_init(&arena->chunks_mtx, "arena_chunks", + WITNESS_RANK_ARENA_CHUNKS)) return (NULL); ql_new(&arena->node_cache); - if (malloc_mutex_init(&arena->node_cache_mtx)) + if (malloc_mutex_init(&arena->node_cache_mtx, "arena_node_cache", + WITNESS_RANK_ARENA_NODE_CACHE)) return (NULL); arena->chunk_hooks = chunk_hooks_default; /* Initialize bins. 
*/ for (i = 0; i < NBINS; i++) { - bin = &arena->bins[i]; - if (malloc_mutex_init(&bin->lock)) + arena_bin_t *bin = &arena->bins[i]; + if (malloc_mutex_init(&bin->lock, "arena_bin", + WITNESS_RANK_ARENA_BIN)) return (NULL); bin->runcur = NULL; - arena_run_tree_new(&bin->runs); + arena_run_heap_new(&bin->runs); if (config_stats) memset(&bin->stats, 0, sizeof(malloc_bin_stats_t)); } return (arena); } /* * Calculate bin_info->run_size such that it meets the following constraints: * * *) bin_info->run_size <= arena_maxrun * *) bin_info->nregs <= RUN_MAXREGS * * bin_info->nregs and bin_info->reg0_offset are also calculated here, since * these settings are all interdependent. */ static void bin_info_run_size_calc(arena_bin_info_t *bin_info) { size_t pad_size; size_t try_run_size, perfect_run_size, actual_run_size; uint32_t try_nregs, perfect_nregs, actual_nregs; /* * Determine redzone size based on minimum alignment and minimum * redzone size. Add padding to the end of the run if it is needed to * align the regions. The padding allows each redzone to be half the * minimum alignment; without the padding, each redzone would have to * be twice as large in order to maintain alignment. */ if (config_fill && unlikely(opt_redzone)) { size_t align_min = ZU(1) << (ffs_zu(bin_info->reg_size) - 1); if (align_min <= REDZONE_MINSIZE) { bin_info->redzone_size = REDZONE_MINSIZE; pad_size = 0; } else { bin_info->redzone_size = align_min >> 1; pad_size = bin_info->redzone_size; } } else { bin_info->redzone_size = 0; pad_size = 0; } bin_info->reg_interval = bin_info->reg_size + (bin_info->redzone_size << 1); /* * Compute run size under ideal conditions (no redzones, no limit on run * size). */ try_run_size = PAGE; try_nregs = (uint32_t)(try_run_size / bin_info->reg_size); do { perfect_run_size = try_run_size; perfect_nregs = try_nregs; try_run_size += PAGE; try_nregs = (uint32_t)(try_run_size / bin_info->reg_size); } while (perfect_run_size != perfect_nregs * bin_info->reg_size); assert(perfect_nregs <= RUN_MAXREGS); actual_run_size = perfect_run_size; actual_nregs = (uint32_t)((actual_run_size - pad_size) / bin_info->reg_interval); /* * Redzones can require enough padding that not even a single region can * fit within the number of pages that would normally be dedicated to a * run for this size class. Increase the run size until at least one * region fits. */ while (actual_nregs == 0) { assert(config_fill && unlikely(opt_redzone)); actual_run_size += PAGE; actual_nregs = (uint32_t)((actual_run_size - pad_size) / bin_info->reg_interval); } /* * Make sure that the run will fit within an arena chunk. */ while (actual_run_size > arena_maxrun) { actual_run_size -= PAGE; actual_nregs = (uint32_t)((actual_run_size - pad_size) / bin_info->reg_interval); } assert(actual_nregs > 0); assert(actual_run_size == s2u(actual_run_size)); /* Copy final settings. 
*/ bin_info->run_size = actual_run_size; bin_info->nregs = actual_nregs; bin_info->reg0_offset = (uint32_t)(actual_run_size - (actual_nregs * bin_info->reg_interval) - pad_size + bin_info->redzone_size); if (actual_run_size > small_maxrun) small_maxrun = actual_run_size; assert(bin_info->reg0_offset - bin_info->redzone_size + (bin_info->nregs * bin_info->reg_interval) + pad_size == bin_info->run_size); } static void bin_info_init(void) { arena_bin_info_t *bin_info; #define BIN_INFO_INIT_bin_yes(index, size) \ bin_info = &arena_bin_info[index]; \ bin_info->reg_size = size; \ bin_info_run_size_calc(bin_info); \ bitmap_info_init(&bin_info->bitmap_info, bin_info->nregs); #define BIN_INFO_INIT_bin_no(index, size) #define SC(index, lg_grp, lg_delta, ndelta, bin, lg_delta_lookup) \ BIN_INFO_INIT_bin_##bin(index, (ZU(1)<> + small_run_tab = (bool *)base_alloc(NULL, sizeof(bool) * (small_maxrun >> LG_PAGE)); if (small_run_tab == NULL) return (true); #define TAB_INIT_bin_yes(index, size) { \ arena_bin_info_t *bin_info = &arena_bin_info[index]; \ small_run_tab[bin_info->run_size >> LG_PAGE] = true; \ } #define TAB_INIT_bin_no(index, size) #define SC(index, lg_grp, lg_delta, ndelta, bin, lg_delta_lookup) \ TAB_INIT_bin_##bin(index, (ZU(1)<> LG_PAGE)); if (run_quantize_floor_tab == NULL) return (true); - run_quantize_ceil_tab = (size_t *)base_alloc(sizeof(size_t) * + run_quantize_ceil_tab = (size_t *)base_alloc(NULL, sizeof(size_t) * (run_quantize_max >> LG_PAGE)); if (run_quantize_ceil_tab == NULL) return (true); for (i = 1; i <= run_quantize_max >> LG_PAGE; i++) { size_t run_size = i << LG_PAGE; run_quantize_floor_tab[i-1] = run_quantize_floor_compute(run_size); run_quantize_ceil_tab[i-1] = run_quantize_ceil_compute(run_size); } return (false); } bool arena_boot(void) { unsigned i; arena_lg_dirty_mult_default_set(opt_lg_dirty_mult); arena_decay_time_default_set(opt_decay_time); /* * Compute the header size such that it is large enough to contain the * page map. The page map is biased to omit entries for the header * itself, so some iteration is necessary to compute the map bias. * * 1) Compute safe header_size and map_bias values that include enough * space for an unbiased page map. * 2) Refine map_bias based on (1) to omit the header pages in the page * map. The resulting map_bias may be one too small. * 3) Refine map_bias based on (2). The result will be >= the result * from (2), and will always be correct. */ map_bias = 0; for (i = 0; i < 3; i++) { size_t header_size = offsetof(arena_chunk_t, map_bits) + ((sizeof(arena_chunk_map_bits_t) + sizeof(arena_chunk_map_misc_t)) * (chunk_npages-map_bias)); map_bias = (header_size + PAGE_MASK) >> LG_PAGE; } assert(map_bias > 0); map_misc_offset = offsetof(arena_chunk_t, map_bits) + sizeof(arena_chunk_map_bits_t) * (chunk_npages-map_bias); arena_maxrun = chunksize - (map_bias << LG_PAGE); assert(arena_maxrun > 0); large_maxclass = index2size(size2index(chunksize)-1); if (large_maxclass > arena_maxrun) { /* * For small chunk sizes it's possible for there to be fewer * non-header pages available than are necessary to serve the * size classes just below chunksize. 
*/ large_maxclass = arena_maxrun; } assert(large_maxclass > 0); nlclasses = size2index(large_maxclass) - size2index(SMALL_MAXCLASS); nhclasses = NSIZES - nlclasses - NBINS; bin_info_init(); if (small_run_size_init()) return (true); if (run_quantize_init()) return (true); runs_avail_bias = size2index(PAGE); runs_avail_nclasses = size2index(run_quantize_max)+1 - runs_avail_bias; return (false); } void -arena_prefork(arena_t *arena) +arena_prefork0(tsdn_t *tsdn, arena_t *arena) { + + malloc_mutex_prefork(tsdn, &arena->lock); +} + +void +arena_prefork1(tsdn_t *tsdn, arena_t *arena) +{ + + malloc_mutex_prefork(tsdn, &arena->chunks_mtx); +} + +void +arena_prefork2(tsdn_t *tsdn, arena_t *arena) +{ + + malloc_mutex_prefork(tsdn, &arena->node_cache_mtx); +} + +void +arena_prefork3(tsdn_t *tsdn, arena_t *arena) +{ unsigned i; - malloc_mutex_prefork(&arena->lock); - malloc_mutex_prefork(&arena->huge_mtx); - malloc_mutex_prefork(&arena->chunks_mtx); - malloc_mutex_prefork(&arena->node_cache_mtx); for (i = 0; i < NBINS; i++) - malloc_mutex_prefork(&arena->bins[i].lock); + malloc_mutex_prefork(tsdn, &arena->bins[i].lock); + malloc_mutex_prefork(tsdn, &arena->huge_mtx); } void -arena_postfork_parent(arena_t *arena) +arena_postfork_parent(tsdn_t *tsdn, arena_t *arena) { unsigned i; + malloc_mutex_postfork_parent(tsdn, &arena->huge_mtx); for (i = 0; i < NBINS; i++) - malloc_mutex_postfork_parent(&arena->bins[i].lock); - malloc_mutex_postfork_parent(&arena->node_cache_mtx); - malloc_mutex_postfork_parent(&arena->chunks_mtx); - malloc_mutex_postfork_parent(&arena->huge_mtx); - malloc_mutex_postfork_parent(&arena->lock); + malloc_mutex_postfork_parent(tsdn, &arena->bins[i].lock); + malloc_mutex_postfork_parent(tsdn, &arena->node_cache_mtx); + malloc_mutex_postfork_parent(tsdn, &arena->chunks_mtx); + malloc_mutex_postfork_parent(tsdn, &arena->lock); } void -arena_postfork_child(arena_t *arena) +arena_postfork_child(tsdn_t *tsdn, arena_t *arena) { unsigned i; + malloc_mutex_postfork_child(tsdn, &arena->huge_mtx); for (i = 0; i < NBINS; i++) - malloc_mutex_postfork_child(&arena->bins[i].lock); - malloc_mutex_postfork_child(&arena->node_cache_mtx); - malloc_mutex_postfork_child(&arena->chunks_mtx); - malloc_mutex_postfork_child(&arena->huge_mtx); - malloc_mutex_postfork_child(&arena->lock); + malloc_mutex_postfork_child(tsdn, &arena->bins[i].lock); + malloc_mutex_postfork_child(tsdn, &arena->node_cache_mtx); + malloc_mutex_postfork_child(tsdn, &arena->chunks_mtx); + malloc_mutex_postfork_child(tsdn, &arena->lock); } Index: head/contrib/jemalloc/src/base.c =================================================================== --- head/contrib/jemalloc/src/base.c (revision 299586) +++ head/contrib/jemalloc/src/base.c (revision 299587) @@ -1,174 +1,177 @@ #define JEMALLOC_BASE_C_ #include "jemalloc/internal/jemalloc_internal.h" /******************************************************************************/ /* Data. */ static malloc_mutex_t base_mtx; static extent_tree_t base_avail_szad; static extent_node_t *base_nodes; static size_t base_allocated; static size_t base_resident; static size_t base_mapped; /******************************************************************************/ -/* base_mtx must be held. 
*/ static extent_node_t * -base_node_try_alloc(void) +base_node_try_alloc(tsdn_t *tsdn) { extent_node_t *node; + malloc_mutex_assert_owner(tsdn, &base_mtx); + if (base_nodes == NULL) return (NULL); node = base_nodes; base_nodes = *(extent_node_t **)node; JEMALLOC_VALGRIND_MAKE_MEM_UNDEFINED(node, sizeof(extent_node_t)); return (node); } -/* base_mtx must be held. */ static void -base_node_dalloc(extent_node_t *node) +base_node_dalloc(tsdn_t *tsdn, extent_node_t *node) { + malloc_mutex_assert_owner(tsdn, &base_mtx); + JEMALLOC_VALGRIND_MAKE_MEM_UNDEFINED(node, sizeof(extent_node_t)); *(extent_node_t **)node = base_nodes; base_nodes = node; } -/* base_mtx must be held. */ static extent_node_t * -base_chunk_alloc(size_t minsize) +base_chunk_alloc(tsdn_t *tsdn, size_t minsize) { extent_node_t *node; size_t csize, nsize; void *addr; + malloc_mutex_assert_owner(tsdn, &base_mtx); assert(minsize != 0); - node = base_node_try_alloc(); + node = base_node_try_alloc(tsdn); /* Allocate enough space to also carve a node out if necessary. */ nsize = (node == NULL) ? CACHELINE_CEILING(sizeof(extent_node_t)) : 0; csize = CHUNK_CEILING(minsize + nsize); addr = chunk_alloc_base(csize); if (addr == NULL) { if (node != NULL) - base_node_dalloc(node); + base_node_dalloc(tsdn, node); return (NULL); } base_mapped += csize; if (node == NULL) { node = (extent_node_t *)addr; addr = (void *)((uintptr_t)addr + nsize); csize -= nsize; if (config_stats) { base_allocated += nsize; base_resident += PAGE_CEILING(nsize); } } extent_node_init(node, NULL, addr, csize, true, true); return (node); } /* * base_alloc() guarantees demand-zeroed memory, in order to make multi-page * sparse data structures such as radix tree nodes efficient with respect to * physical memory usage. */ void * -base_alloc(size_t size) +base_alloc(tsdn_t *tsdn, size_t size) { void *ret; size_t csize, usize; extent_node_t *node; extent_node_t key; /* * Round size up to nearest multiple of the cacheline size, so that * there is no chance of false cache line sharing. */ csize = CACHELINE_CEILING(size); usize = s2u(csize); extent_node_init(&key, NULL, NULL, usize, false, false); - malloc_mutex_lock(&base_mtx); + malloc_mutex_lock(tsdn, &base_mtx); node = extent_tree_szad_nsearch(&base_avail_szad, &key); if (node != NULL) { /* Use existing space. */ extent_tree_szad_remove(&base_avail_szad, node); } else { /* Try to allocate more space. */ - node = base_chunk_alloc(csize); + node = base_chunk_alloc(tsdn, csize); } if (node == NULL) { ret = NULL; goto label_return; } ret = extent_node_addr_get(node); if (extent_node_size_get(node) > csize) { extent_node_addr_set(node, (void *)((uintptr_t)ret + csize)); extent_node_size_set(node, extent_node_size_get(node) - csize); extent_tree_szad_insert(&base_avail_szad, node); } else - base_node_dalloc(node); + base_node_dalloc(tsdn, node); if (config_stats) { base_allocated += csize; /* * Add one PAGE to base_resident for every page boundary that is * crossed by the new allocation. 
*/ base_resident += PAGE_CEILING((uintptr_t)ret + csize) - PAGE_CEILING((uintptr_t)ret); } JEMALLOC_VALGRIND_MAKE_MEM_DEFINED(ret, csize); label_return: - malloc_mutex_unlock(&base_mtx); + malloc_mutex_unlock(tsdn, &base_mtx); return (ret); } void -base_stats_get(size_t *allocated, size_t *resident, size_t *mapped) +base_stats_get(tsdn_t *tsdn, size_t *allocated, size_t *resident, + size_t *mapped) { - malloc_mutex_lock(&base_mtx); + malloc_mutex_lock(tsdn, &base_mtx); assert(base_allocated <= base_resident); assert(base_resident <= base_mapped); *allocated = base_allocated; *resident = base_resident; *mapped = base_mapped; - malloc_mutex_unlock(&base_mtx); + malloc_mutex_unlock(tsdn, &base_mtx); } bool base_boot(void) { - if (malloc_mutex_init(&base_mtx)) + if (malloc_mutex_init(&base_mtx, "base", WITNESS_RANK_BASE)) return (true); extent_tree_szad_new(&base_avail_szad); base_nodes = NULL; return (false); } void -base_prefork(void) +base_prefork(tsdn_t *tsdn) { - malloc_mutex_prefork(&base_mtx); + malloc_mutex_prefork(tsdn, &base_mtx); } void -base_postfork_parent(void) +base_postfork_parent(tsdn_t *tsdn) { - malloc_mutex_postfork_parent(&base_mtx); + malloc_mutex_postfork_parent(tsdn, &base_mtx); } void -base_postfork_child(void) +base_postfork_child(tsdn_t *tsdn) { - malloc_mutex_postfork_child(&base_mtx); + malloc_mutex_postfork_child(tsdn, &base_mtx); } Index: head/contrib/jemalloc/src/bitmap.c =================================================================== --- head/contrib/jemalloc/src/bitmap.c (revision 299586) +++ head/contrib/jemalloc/src/bitmap.c (revision 299587) @@ -1,114 +1,111 @@ #define JEMALLOC_BITMAP_C_ #include "jemalloc/internal/jemalloc_internal.h" /******************************************************************************/ #ifdef USE_TREE void bitmap_info_init(bitmap_info_t *binfo, size_t nbits) { unsigned i; size_t group_count; assert(nbits > 0); assert(nbits <= (ZU(1) << LG_BITMAP_MAXBITS)); /* * Compute the number of groups necessary to store nbits bits, and * progressively work upward through the levels until reaching a level * that requires only one group. */ binfo->levels[0].group_offset = 0; group_count = BITMAP_BITS2GROUPS(nbits); for (i = 1; group_count > 1; i++) { assert(i < BITMAP_MAX_LEVELS); binfo->levels[i].group_offset = binfo->levels[i-1].group_offset + group_count; group_count = BITMAP_BITS2GROUPS(group_count); } binfo->levels[i].group_offset = binfo->levels[i-1].group_offset + group_count; assert(binfo->levels[i].group_offset <= BITMAP_GROUPS_MAX); binfo->nlevels = i; binfo->nbits = nbits; } static size_t bitmap_info_ngroups(const bitmap_info_t *binfo) { return (binfo->levels[binfo->nlevels].group_offset); } void bitmap_init(bitmap_t *bitmap, const bitmap_info_t *binfo) { size_t extra; unsigned i; /* * Bits are actually inverted with regard to the external bitmap * interface, so the bitmap starts out with all 1 bits, except for * trailing unused bits (if any). Note that each group uses bit 0 to * correspond to the first logical bit in the group, so extra bits * are the most significant bits of the last group. 
*/ memset(bitmap, 0xffU, bitmap_size(binfo)); extra = (BITMAP_GROUP_NBITS - (binfo->nbits & BITMAP_GROUP_NBITS_MASK)) & BITMAP_GROUP_NBITS_MASK; if (extra != 0) bitmap[binfo->levels[1].group_offset - 1] >>= extra; for (i = 1; i < binfo->nlevels; i++) { size_t group_count = binfo->levels[i].group_offset - binfo->levels[i-1].group_offset; extra = (BITMAP_GROUP_NBITS - (group_count & BITMAP_GROUP_NBITS_MASK)) & BITMAP_GROUP_NBITS_MASK; if (extra != 0) bitmap[binfo->levels[i+1].group_offset - 1] >>= extra; } } #else /* USE_TREE */ void bitmap_info_init(bitmap_info_t *binfo, size_t nbits) { - size_t i; assert(nbits > 0); assert(nbits <= (ZU(1) << LG_BITMAP_MAXBITS)); - i = nbits >> LG_BITMAP_GROUP_NBITS; - if (nbits % BITMAP_GROUP_NBITS != 0) - i++; - binfo->ngroups = i; + binfo->ngroups = BITMAP_BITS2GROUPS(nbits); binfo->nbits = nbits; } static size_t bitmap_info_ngroups(const bitmap_info_t *binfo) { return (binfo->ngroups); } void bitmap_init(bitmap_t *bitmap, const bitmap_info_t *binfo) { size_t extra; memset(bitmap, 0xffU, bitmap_size(binfo)); - extra = (binfo->nbits % (binfo->ngroups * BITMAP_GROUP_NBITS)); + extra = (BITMAP_GROUP_NBITS - (binfo->nbits & BITMAP_GROUP_NBITS_MASK)) + & BITMAP_GROUP_NBITS_MASK; if (extra != 0) - bitmap[binfo->ngroups - 1] >>= (BITMAP_GROUP_NBITS - extra); + bitmap[binfo->ngroups - 1] >>= extra; } #endif /* USE_TREE */ size_t bitmap_size(const bitmap_info_t *binfo) { return (bitmap_info_ngroups(binfo) << LG_SIZEOF_BITMAP); } Index: head/contrib/jemalloc/src/chunk.c =================================================================== --- head/contrib/jemalloc/src/chunk.c (revision 299586) +++ head/contrib/jemalloc/src/chunk.c (revision 299587) @@ -1,770 +1,768 @@ #define JEMALLOC_CHUNK_C_ #include "jemalloc/internal/jemalloc_internal.h" /******************************************************************************/ /* Data. */ const char *opt_dss = DSS_DEFAULT; size_t opt_lg_chunk = 0; /* Used exclusively for gdump triggering. */ static size_t curchunks; static size_t highchunks; rtree_t chunks_rtree; /* Various chunk-related settings. */ size_t chunksize; size_t chunksize_mask; /* (chunksize - 1). */ size_t chunk_npages; static void *chunk_alloc_default(void *new_addr, size_t size, size_t alignment, bool *zero, bool *commit, unsigned arena_ind); static bool chunk_dalloc_default(void *chunk, size_t size, bool committed, unsigned arena_ind); static bool chunk_commit_default(void *chunk, size_t size, size_t offset, size_t length, unsigned arena_ind); static bool chunk_decommit_default(void *chunk, size_t size, size_t offset, size_t length, unsigned arena_ind); static bool chunk_purge_default(void *chunk, size_t size, size_t offset, size_t length, unsigned arena_ind); static bool chunk_split_default(void *chunk, size_t size, size_t size_a, size_t size_b, bool committed, unsigned arena_ind); static bool chunk_merge_default(void *chunk_a, size_t size_a, void *chunk_b, size_t size_b, bool committed, unsigned arena_ind); const chunk_hooks_t chunk_hooks_default = { chunk_alloc_default, chunk_dalloc_default, chunk_commit_default, chunk_decommit_default, chunk_purge_default, chunk_split_default, chunk_merge_default }; /******************************************************************************/ /* * Function prototypes for static functions that are referenced prior to * definition. 
*/ -static void chunk_record(arena_t *arena, chunk_hooks_t *chunk_hooks, - extent_tree_t *chunks_szad, extent_tree_t *chunks_ad, bool cache, - void *chunk, size_t size, bool zeroed, bool committed); +static void chunk_record(tsdn_t *tsdn, arena_t *arena, + chunk_hooks_t *chunk_hooks, extent_tree_t *chunks_szad, + extent_tree_t *chunks_ad, bool cache, void *chunk, size_t size, bool zeroed, + bool committed); /******************************************************************************/ static chunk_hooks_t chunk_hooks_get_locked(arena_t *arena) { return (arena->chunk_hooks); } chunk_hooks_t -chunk_hooks_get(arena_t *arena) +chunk_hooks_get(tsdn_t *tsdn, arena_t *arena) { chunk_hooks_t chunk_hooks; - malloc_mutex_lock(&arena->chunks_mtx); + malloc_mutex_lock(tsdn, &arena->chunks_mtx); chunk_hooks = chunk_hooks_get_locked(arena); - malloc_mutex_unlock(&arena->chunks_mtx); + malloc_mutex_unlock(tsdn, &arena->chunks_mtx); return (chunk_hooks); } chunk_hooks_t -chunk_hooks_set(arena_t *arena, const chunk_hooks_t *chunk_hooks) +chunk_hooks_set(tsdn_t *tsdn, arena_t *arena, const chunk_hooks_t *chunk_hooks) { chunk_hooks_t old_chunk_hooks; - malloc_mutex_lock(&arena->chunks_mtx); + malloc_mutex_lock(tsdn, &arena->chunks_mtx); old_chunk_hooks = arena->chunk_hooks; /* * Copy each field atomically so that it is impossible for readers to * see partially updated pointers. There are places where readers only * need one hook function pointer (therefore no need to copy the * entirety of arena->chunk_hooks), and stale reads do not affect * correctness, so they perform unlocked reads. */ #define ATOMIC_COPY_HOOK(n) do { \ union { \ chunk_##n##_t **n; \ void **v; \ } u; \ u.n = &arena->chunk_hooks.n; \ atomic_write_p(u.v, chunk_hooks->n); \ } while (0) ATOMIC_COPY_HOOK(alloc); ATOMIC_COPY_HOOK(dalloc); ATOMIC_COPY_HOOK(commit); ATOMIC_COPY_HOOK(decommit); ATOMIC_COPY_HOOK(purge); ATOMIC_COPY_HOOK(split); ATOMIC_COPY_HOOK(merge); #undef ATOMIC_COPY_HOOK - malloc_mutex_unlock(&arena->chunks_mtx); + malloc_mutex_unlock(tsdn, &arena->chunks_mtx); return (old_chunk_hooks); } static void -chunk_hooks_assure_initialized_impl(arena_t *arena, chunk_hooks_t *chunk_hooks, - bool locked) +chunk_hooks_assure_initialized_impl(tsdn_t *tsdn, arena_t *arena, + chunk_hooks_t *chunk_hooks, bool locked) { static const chunk_hooks_t uninitialized_hooks = CHUNK_HOOKS_INITIALIZER; if (memcmp(chunk_hooks, &uninitialized_hooks, sizeof(chunk_hooks_t)) == 0) { *chunk_hooks = locked ? 
chunk_hooks_get_locked(arena) : - chunk_hooks_get(arena); + chunk_hooks_get(tsdn, arena); } } static void -chunk_hooks_assure_initialized_locked(arena_t *arena, +chunk_hooks_assure_initialized_locked(tsdn_t *tsdn, arena_t *arena, chunk_hooks_t *chunk_hooks) { - chunk_hooks_assure_initialized_impl(arena, chunk_hooks, true); + chunk_hooks_assure_initialized_impl(tsdn, arena, chunk_hooks, true); } static void -chunk_hooks_assure_initialized(arena_t *arena, chunk_hooks_t *chunk_hooks) +chunk_hooks_assure_initialized(tsdn_t *tsdn, arena_t *arena, + chunk_hooks_t *chunk_hooks) { - chunk_hooks_assure_initialized_impl(arena, chunk_hooks, false); + chunk_hooks_assure_initialized_impl(tsdn, arena, chunk_hooks, false); } bool -chunk_register(const void *chunk, const extent_node_t *node) +chunk_register(tsdn_t *tsdn, const void *chunk, const extent_node_t *node) { assert(extent_node_addr_get(node) == chunk); if (rtree_set(&chunks_rtree, (uintptr_t)chunk, node)) return (true); if (config_prof && opt_prof) { size_t size = extent_node_size_get(node); size_t nadd = (size == 0) ? 1 : size / chunksize; size_t cur = atomic_add_z(&curchunks, nadd); size_t high = atomic_read_z(&highchunks); while (cur > high && atomic_cas_z(&highchunks, high, cur)) { /* * Don't refresh cur, because it may have decreased * since this thread lost the highchunks update race. */ high = atomic_read_z(&highchunks); } if (cur > high && prof_gdump_get_unlocked()) - prof_gdump(); + prof_gdump(tsdn); } return (false); } void chunk_deregister(const void *chunk, const extent_node_t *node) { bool err; err = rtree_set(&chunks_rtree, (uintptr_t)chunk, NULL); assert(!err); if (config_prof && opt_prof) { size_t size = extent_node_size_get(node); size_t nsub = (size == 0) ? 1 : size / chunksize; assert(atomic_read_z(&curchunks) >= nsub); atomic_sub_z(&curchunks, nsub); } } /* * Do first-best-fit chunk selection, i.e. select the lowest chunk that best * fits. */ static extent_node_t * chunk_first_best_fit(arena_t *arena, extent_tree_t *chunks_szad, extent_tree_t *chunks_ad, size_t size) { extent_node_t key; assert(size == CHUNK_CEILING(size)); extent_node_init(&key, arena, NULL, size, false, false); return (extent_tree_szad_nsearch(chunks_szad, &key)); } static void * -chunk_recycle(arena_t *arena, chunk_hooks_t *chunk_hooks, +chunk_recycle(tsdn_t *tsdn, arena_t *arena, chunk_hooks_t *chunk_hooks, extent_tree_t *chunks_szad, extent_tree_t *chunks_ad, bool cache, void *new_addr, size_t size, size_t alignment, bool *zero, bool *commit, bool dalloc_node) { void *ret; extent_node_t *node; size_t alloc_size, leadsize, trailsize; bool zeroed, committed; assert(new_addr == NULL || alignment == chunksize); /* * Cached chunks use the node linkage embedded in their headers, in * which case dalloc_node is true, and new_addr is non-NULL because * we're operating on a specific chunk. */ assert(dalloc_node || new_addr != NULL); alloc_size = CHUNK_CEILING(s2u(size + alignment - chunksize)); /* Beware size_t wrap-around. 
*/ if (alloc_size < size) return (NULL); - malloc_mutex_lock(&arena->chunks_mtx); - chunk_hooks_assure_initialized_locked(arena, chunk_hooks); + malloc_mutex_lock(tsdn, &arena->chunks_mtx); + chunk_hooks_assure_initialized_locked(tsdn, arena, chunk_hooks); if (new_addr != NULL) { extent_node_t key; extent_node_init(&key, arena, new_addr, alloc_size, false, false); node = extent_tree_ad_search(chunks_ad, &key); } else { node = chunk_first_best_fit(arena, chunks_szad, chunks_ad, alloc_size); } if (node == NULL || (new_addr != NULL && extent_node_size_get(node) < size)) { - malloc_mutex_unlock(&arena->chunks_mtx); + malloc_mutex_unlock(tsdn, &arena->chunks_mtx); return (NULL); } leadsize = ALIGNMENT_CEILING((uintptr_t)extent_node_addr_get(node), alignment) - (uintptr_t)extent_node_addr_get(node); assert(new_addr == NULL || leadsize == 0); assert(extent_node_size_get(node) >= leadsize + size); trailsize = extent_node_size_get(node) - leadsize - size; ret = (void *)((uintptr_t)extent_node_addr_get(node) + leadsize); zeroed = extent_node_zeroed_get(node); if (zeroed) *zero = true; committed = extent_node_committed_get(node); if (committed) *commit = true; /* Split the lead. */ if (leadsize != 0 && chunk_hooks->split(extent_node_addr_get(node), extent_node_size_get(node), leadsize, size, false, arena->ind)) { - malloc_mutex_unlock(&arena->chunks_mtx); + malloc_mutex_unlock(tsdn, &arena->chunks_mtx); return (NULL); } /* Remove node from the tree. */ extent_tree_szad_remove(chunks_szad, node); extent_tree_ad_remove(chunks_ad, node); arena_chunk_cache_maybe_remove(arena, node, cache); if (leadsize != 0) { /* Insert the leading space as a smaller chunk. */ extent_node_size_set(node, leadsize); extent_tree_szad_insert(chunks_szad, node); extent_tree_ad_insert(chunks_ad, node); arena_chunk_cache_maybe_insert(arena, node, cache); node = NULL; } if (trailsize != 0) { /* Split the trail. */ if (chunk_hooks->split(ret, size + trailsize, size, trailsize, false, arena->ind)) { if (dalloc_node && node != NULL) - arena_node_dalloc(arena, node); - malloc_mutex_unlock(&arena->chunks_mtx); - chunk_record(arena, chunk_hooks, chunks_szad, chunks_ad, - cache, ret, size + trailsize, zeroed, committed); + arena_node_dalloc(tsdn, arena, node); + malloc_mutex_unlock(tsdn, &arena->chunks_mtx); + chunk_record(tsdn, arena, chunk_hooks, chunks_szad, + chunks_ad, cache, ret, size + trailsize, zeroed, + committed); return (NULL); } /* Insert the trailing space as a smaller chunk. 
*/ if (node == NULL) { - node = arena_node_alloc(arena); + node = arena_node_alloc(tsdn, arena); if (node == NULL) { - malloc_mutex_unlock(&arena->chunks_mtx); - chunk_record(arena, chunk_hooks, chunks_szad, - chunks_ad, cache, ret, size + trailsize, - zeroed, committed); + malloc_mutex_unlock(tsdn, &arena->chunks_mtx); + chunk_record(tsdn, arena, chunk_hooks, + chunks_szad, chunks_ad, cache, ret, size + + trailsize, zeroed, committed); return (NULL); } } extent_node_init(node, arena, (void *)((uintptr_t)(ret) + size), trailsize, zeroed, committed); extent_tree_szad_insert(chunks_szad, node); extent_tree_ad_insert(chunks_ad, node); arena_chunk_cache_maybe_insert(arena, node, cache); node = NULL; } if (!committed && chunk_hooks->commit(ret, size, 0, size, arena->ind)) { - malloc_mutex_unlock(&arena->chunks_mtx); - chunk_record(arena, chunk_hooks, chunks_szad, chunks_ad, cache, - ret, size, zeroed, committed); + malloc_mutex_unlock(tsdn, &arena->chunks_mtx); + chunk_record(tsdn, arena, chunk_hooks, chunks_szad, chunks_ad, + cache, ret, size, zeroed, committed); return (NULL); } - malloc_mutex_unlock(&arena->chunks_mtx); + malloc_mutex_unlock(tsdn, &arena->chunks_mtx); assert(dalloc_node || node != NULL); if (dalloc_node && node != NULL) - arena_node_dalloc(arena, node); + arena_node_dalloc(tsdn, arena, node); if (*zero) { if (!zeroed) memset(ret, 0, size); else if (config_debug) { size_t i; size_t *p = (size_t *)(uintptr_t)ret; JEMALLOC_VALGRIND_MAKE_MEM_DEFINED(ret, size); for (i = 0; i < size / sizeof(size_t); i++) assert(p[i] == 0); } } return (ret); } /* * If the caller specifies (!*zero), it is still possible to receive zeroed * memory, in which case *zero is toggled to true. arena_chunk_alloc() takes * advantage of this to avoid demanding zeroed chunks, but taking advantage of * them if they are returned. */ static void * -chunk_alloc_core(arena_t *arena, void *new_addr, size_t size, size_t alignment, - bool *zero, bool *commit, dss_prec_t dss_prec) +chunk_alloc_core(tsdn_t *tsdn, arena_t *arena, void *new_addr, size_t size, + size_t alignment, bool *zero, bool *commit, dss_prec_t dss_prec) { void *ret; assert(size != 0); assert((size & chunksize_mask) == 0); assert(alignment != 0); assert((alignment & chunksize_mask) == 0); /* "primary" dss. */ if (have_dss && dss_prec == dss_prec_primary && (ret = - chunk_alloc_dss(arena, new_addr, size, alignment, zero, commit)) != - NULL) + chunk_alloc_dss(tsdn, arena, new_addr, size, alignment, zero, + commit)) != NULL) return (ret); /* mmap. */ if ((ret = chunk_alloc_mmap(new_addr, size, alignment, zero, commit)) != NULL) return (ret); /* "secondary" dss. */ if (have_dss && dss_prec == dss_prec_secondary && (ret = - chunk_alloc_dss(arena, new_addr, size, alignment, zero, commit)) != - NULL) + chunk_alloc_dss(tsdn, arena, new_addr, size, alignment, zero, + commit)) != NULL) return (ret); /* All strategies for allocation failed. */ return (NULL); } void * chunk_alloc_base(size_t size) { void *ret; bool zero, commit; /* * Directly call chunk_alloc_mmap() rather than chunk_alloc_core() * because it's critical that chunk_alloc_base() return untouched * demand-zeroed virtual memory. 
*/ zero = true; commit = true; ret = chunk_alloc_mmap(NULL, size, chunksize, &zero, &commit); if (ret == NULL) return (NULL); if (config_valgrind) JEMALLOC_VALGRIND_MAKE_MEM_UNDEFINED(ret, size); return (ret); } void * -chunk_alloc_cache(arena_t *arena, chunk_hooks_t *chunk_hooks, void *new_addr, - size_t size, size_t alignment, bool *zero, bool dalloc_node) +chunk_alloc_cache(tsdn_t *tsdn, arena_t *arena, chunk_hooks_t *chunk_hooks, + void *new_addr, size_t size, size_t alignment, bool *zero, bool dalloc_node) { void *ret; bool commit; assert(size != 0); assert((size & chunksize_mask) == 0); assert(alignment != 0); assert((alignment & chunksize_mask) == 0); commit = true; - ret = chunk_recycle(arena, chunk_hooks, &arena->chunks_szad_cached, - &arena->chunks_ad_cached, true, new_addr, size, alignment, zero, - &commit, dalloc_node); + ret = chunk_recycle(tsdn, arena, chunk_hooks, + &arena->chunks_szad_cached, &arena->chunks_ad_cached, true, + new_addr, size, alignment, zero, &commit, dalloc_node); if (ret == NULL) return (NULL); assert(commit); if (config_valgrind) JEMALLOC_VALGRIND_MAKE_MEM_UNDEFINED(ret, size); return (ret); } static arena_t * -chunk_arena_get(unsigned arena_ind) +chunk_arena_get(tsdn_t *tsdn, unsigned arena_ind) { arena_t *arena; - arena = arena_get(arena_ind, false); + arena = arena_get(tsdn, arena_ind, false); /* * The arena we're allocating on behalf of must have been initialized * already. */ assert(arena != NULL); return (arena); } static void * chunk_alloc_default(void *new_addr, size_t size, size_t alignment, bool *zero, bool *commit, unsigned arena_ind) { void *ret; + tsdn_t *tsdn; arena_t *arena; - arena = chunk_arena_get(arena_ind); - ret = chunk_alloc_core(arena, new_addr, size, alignment, zero, + tsdn = tsdn_fetch(); + arena = chunk_arena_get(tsdn, arena_ind); + ret = chunk_alloc_core(tsdn, arena, new_addr, size, alignment, zero, commit, arena->dss_prec); if (ret == NULL) return (NULL); if (config_valgrind) JEMALLOC_VALGRIND_MAKE_MEM_UNDEFINED(ret, size); return (ret); } static void * -chunk_alloc_retained(arena_t *arena, chunk_hooks_t *chunk_hooks, void *new_addr, - size_t size, size_t alignment, bool *zero, bool *commit) +chunk_alloc_retained(tsdn_t *tsdn, arena_t *arena, chunk_hooks_t *chunk_hooks, + void *new_addr, size_t size, size_t alignment, bool *zero, bool *commit) { + void *ret; assert(size != 0); assert((size & chunksize_mask) == 0); assert(alignment != 0); assert((alignment & chunksize_mask) == 0); - return (chunk_recycle(arena, chunk_hooks, &arena->chunks_szad_retained, - &arena->chunks_ad_retained, false, new_addr, size, alignment, zero, - commit, true)); + ret = chunk_recycle(tsdn, arena, chunk_hooks, + &arena->chunks_szad_retained, &arena->chunks_ad_retained, false, + new_addr, size, alignment, zero, commit, true); + + if (config_stats && ret != NULL) + arena->stats.retained -= size; + + return (ret); } void * -chunk_alloc_wrapper(arena_t *arena, chunk_hooks_t *chunk_hooks, void *new_addr, - size_t size, size_t alignment, bool *zero, bool *commit) +chunk_alloc_wrapper(tsdn_t *tsdn, arena_t *arena, chunk_hooks_t *chunk_hooks, + void *new_addr, size_t size, size_t alignment, bool *zero, bool *commit) { void *ret; - chunk_hooks_assure_initialized(arena, chunk_hooks); + chunk_hooks_assure_initialized(tsdn, arena, chunk_hooks); - ret = chunk_alloc_retained(arena, chunk_hooks, new_addr, size, + ret = chunk_alloc_retained(tsdn, arena, chunk_hooks, new_addr, size, alignment, zero, commit); if (ret == NULL) { ret = chunk_hooks->alloc(new_addr, size, 
alignment, zero, commit, arena->ind); if (ret == NULL) return (NULL); } if (config_valgrind && chunk_hooks->alloc != chunk_alloc_default) JEMALLOC_VALGRIND_MAKE_MEM_UNDEFINED(ret, chunksize); return (ret); } static void -chunk_record(arena_t *arena, chunk_hooks_t *chunk_hooks, +chunk_record(tsdn_t *tsdn, arena_t *arena, chunk_hooks_t *chunk_hooks, extent_tree_t *chunks_szad, extent_tree_t *chunks_ad, bool cache, void *chunk, size_t size, bool zeroed, bool committed) { bool unzeroed; extent_node_t *node, *prev; extent_node_t key; assert(!cache || !zeroed); unzeroed = cache || !zeroed; JEMALLOC_VALGRIND_MAKE_MEM_NOACCESS(chunk, size); - malloc_mutex_lock(&arena->chunks_mtx); - chunk_hooks_assure_initialized_locked(arena, chunk_hooks); + malloc_mutex_lock(tsdn, &arena->chunks_mtx); + chunk_hooks_assure_initialized_locked(tsdn, arena, chunk_hooks); extent_node_init(&key, arena, (void *)((uintptr_t)chunk + size), 0, false, false); node = extent_tree_ad_nsearch(chunks_ad, &key); /* Try to coalesce forward. */ if (node != NULL && extent_node_addr_get(node) == extent_node_addr_get(&key) && extent_node_committed_get(node) == committed && !chunk_hooks->merge(chunk, size, extent_node_addr_get(node), extent_node_size_get(node), false, arena->ind)) { /* * Coalesce chunk with the following address range. This does * not change the position within chunks_ad, so only * remove/insert from/into chunks_szad. */ extent_tree_szad_remove(chunks_szad, node); arena_chunk_cache_maybe_remove(arena, node, cache); extent_node_addr_set(node, chunk); extent_node_size_set(node, size + extent_node_size_get(node)); extent_node_zeroed_set(node, extent_node_zeroed_get(node) && !unzeroed); extent_tree_szad_insert(chunks_szad, node); arena_chunk_cache_maybe_insert(arena, node, cache); } else { /* Coalescing forward failed, so insert a new node. */ - node = arena_node_alloc(arena); + node = arena_node_alloc(tsdn, arena); if (node == NULL) { /* * Node allocation failed, which is an exceedingly * unlikely failure. Leak chunk after making sure its * pages have already been purged, so that this is only * a virtual memory leak. */ if (cache) { - chunk_purge_wrapper(arena, chunk_hooks, chunk, - size, 0, size); + chunk_purge_wrapper(tsdn, arena, chunk_hooks, + chunk, size, 0, size); } goto label_return; } extent_node_init(node, arena, chunk, size, !unzeroed, committed); extent_tree_ad_insert(chunks_ad, node); extent_tree_szad_insert(chunks_szad, node); arena_chunk_cache_maybe_insert(arena, node, cache); } /* Try to coalesce backward. */ prev = extent_tree_ad_prev(chunks_ad, node); if (prev != NULL && (void *)((uintptr_t)extent_node_addr_get(prev) + extent_node_size_get(prev)) == chunk && extent_node_committed_get(prev) == committed && !chunk_hooks->merge(extent_node_addr_get(prev), extent_node_size_get(prev), chunk, size, false, arena->ind)) { /* * Coalesce chunk with the previous address range. This does * not change the position within chunks_ad, so only * remove/insert node from/into chunks_szad. 
*/ extent_tree_szad_remove(chunks_szad, prev); extent_tree_ad_remove(chunks_ad, prev); arena_chunk_cache_maybe_remove(arena, prev, cache); extent_tree_szad_remove(chunks_szad, node); arena_chunk_cache_maybe_remove(arena, node, cache); extent_node_addr_set(node, extent_node_addr_get(prev)); extent_node_size_set(node, extent_node_size_get(prev) + extent_node_size_get(node)); extent_node_zeroed_set(node, extent_node_zeroed_get(prev) && extent_node_zeroed_get(node)); extent_tree_szad_insert(chunks_szad, node); arena_chunk_cache_maybe_insert(arena, node, cache); - arena_node_dalloc(arena, prev); + arena_node_dalloc(tsdn, arena, prev); } label_return: - malloc_mutex_unlock(&arena->chunks_mtx); + malloc_mutex_unlock(tsdn, &arena->chunks_mtx); } void -chunk_dalloc_cache(arena_t *arena, chunk_hooks_t *chunk_hooks, void *chunk, - size_t size, bool committed) +chunk_dalloc_cache(tsdn_t *tsdn, arena_t *arena, chunk_hooks_t *chunk_hooks, + void *chunk, size_t size, bool committed) { assert(chunk != NULL); assert(CHUNK_ADDR2BASE(chunk) == chunk); assert(size != 0); assert((size & chunksize_mask) == 0); - chunk_record(arena, chunk_hooks, &arena->chunks_szad_cached, + chunk_record(tsdn, arena, chunk_hooks, &arena->chunks_szad_cached, &arena->chunks_ad_cached, true, chunk, size, false, committed); - arena_maybe_purge(arena); + arena_maybe_purge(tsdn, arena); } +static bool +chunk_dalloc_default(void *chunk, size_t size, bool committed, + unsigned arena_ind) +{ + + if (!have_dss || !chunk_in_dss(tsdn_fetch(), chunk)) + return (chunk_dalloc_mmap(chunk, size)); + return (true); +} + void -chunk_dalloc_arena(arena_t *arena, chunk_hooks_t *chunk_hooks, void *chunk, - size_t size, bool zeroed, bool committed) +chunk_dalloc_wrapper(tsdn_t *tsdn, arena_t *arena, chunk_hooks_t *chunk_hooks, + void *chunk, size_t size, bool zeroed, bool committed) { assert(chunk != NULL); assert(CHUNK_ADDR2BASE(chunk) == chunk); assert(size != 0); assert((size & chunksize_mask) == 0); - chunk_hooks_assure_initialized(arena, chunk_hooks); + chunk_hooks_assure_initialized(tsdn, arena, chunk_hooks); /* Try to deallocate. */ if (!chunk_hooks->dalloc(chunk, size, committed, arena->ind)) return; /* Try to decommit; purge if that fails. 
*/ if (committed) { committed = chunk_hooks->decommit(chunk, size, 0, size, arena->ind); } zeroed = !committed || !chunk_hooks->purge(chunk, size, 0, size, arena->ind); - chunk_record(arena, chunk_hooks, &arena->chunks_szad_retained, + chunk_record(tsdn, arena, chunk_hooks, &arena->chunks_szad_retained, &arena->chunks_ad_retained, false, chunk, size, zeroed, committed); -} -static bool -chunk_dalloc_default(void *chunk, size_t size, bool committed, - unsigned arena_ind) -{ - - if (!have_dss || !chunk_in_dss(chunk)) - return (chunk_dalloc_mmap(chunk, size)); - return (true); + if (config_stats) + arena->stats.retained += size; } -void -chunk_dalloc_wrapper(arena_t *arena, chunk_hooks_t *chunk_hooks, void *chunk, - size_t size, bool committed) -{ - - chunk_hooks_assure_initialized(arena, chunk_hooks); - chunk_hooks->dalloc(chunk, size, committed, arena->ind); - if (config_valgrind && chunk_hooks->dalloc != chunk_dalloc_default) - JEMALLOC_VALGRIND_MAKE_MEM_NOACCESS(chunk, size); -} - static bool chunk_commit_default(void *chunk, size_t size, size_t offset, size_t length, unsigned arena_ind) { return (pages_commit((void *)((uintptr_t)chunk + (uintptr_t)offset), length)); } static bool chunk_decommit_default(void *chunk, size_t size, size_t offset, size_t length, unsigned arena_ind) { return (pages_decommit((void *)((uintptr_t)chunk + (uintptr_t)offset), length)); } -bool -chunk_purge_arena(arena_t *arena, void *chunk, size_t offset, size_t length) +static bool +chunk_purge_default(void *chunk, size_t size, size_t offset, size_t length, + unsigned arena_ind) { assert(chunk != NULL); assert(CHUNK_ADDR2BASE(chunk) == chunk); assert((offset & PAGE_MASK) == 0); assert(length != 0); assert((length & PAGE_MASK) == 0); return (pages_purge((void *)((uintptr_t)chunk + (uintptr_t)offset), length)); } -static bool -chunk_purge_default(void *chunk, size_t size, size_t offset, size_t length, - unsigned arena_ind) -{ - - return (chunk_purge_arena(chunk_arena_get(arena_ind), chunk, offset, - length)); -} - bool -chunk_purge_wrapper(arena_t *arena, chunk_hooks_t *chunk_hooks, void *chunk, - size_t size, size_t offset, size_t length) +chunk_purge_wrapper(tsdn_t *tsdn, arena_t *arena, chunk_hooks_t *chunk_hooks, + void *chunk, size_t size, size_t offset, size_t length) { - chunk_hooks_assure_initialized(arena, chunk_hooks); + chunk_hooks_assure_initialized(tsdn, arena, chunk_hooks); return (chunk_hooks->purge(chunk, size, offset, length, arena->ind)); } static bool chunk_split_default(void *chunk, size_t size, size_t size_a, size_t size_b, bool committed, unsigned arena_ind) { if (!maps_coalesce) return (true); return (false); } static bool chunk_merge_default(void *chunk_a, size_t size_a, void *chunk_b, size_t size_b, bool committed, unsigned arena_ind) { if (!maps_coalesce) return (true); - if (have_dss && chunk_in_dss(chunk_a) != chunk_in_dss(chunk_b)) - return (true); + if (have_dss) { + tsdn_t *tsdn = tsdn_fetch(); + if (chunk_in_dss(tsdn, chunk_a) != chunk_in_dss(tsdn, chunk_b)) + return (true); + } return (false); } static rtree_node_elm_t * chunks_rtree_node_alloc(size_t nelms) { - return ((rtree_node_elm_t *)base_alloc(nelms * + return ((rtree_node_elm_t *)base_alloc(tsdn_fetch(), nelms * sizeof(rtree_node_elm_t))); } bool chunk_boot(void) { #ifdef _WIN32 SYSTEM_INFO info; GetSystemInfo(&info); /* * Verify actual page size is equal to or an integral multiple of * configured page size. 
*/ if (info.dwPageSize & ((1U << LG_PAGE) - 1)) return (true); /* * Configure chunksize (if not set) to match granularity (usually 64K), * so pages_map will always take fast path. */ if (!opt_lg_chunk) { opt_lg_chunk = ffs_u((unsigned)info.dwAllocationGranularity) - 1; } #else if (!opt_lg_chunk) opt_lg_chunk = LG_CHUNK_DEFAULT; #endif /* Set variables according to the value of opt_lg_chunk. */ chunksize = (ZU(1) << opt_lg_chunk); assert(chunksize >= PAGE); chunksize_mask = chunksize - 1; chunk_npages = (chunksize >> LG_PAGE); if (have_dss && chunk_dss_boot()) return (true); if (rtree_new(&chunks_rtree, (unsigned)((ZU(1) << (LG_SIZEOF_PTR+3)) - opt_lg_chunk), chunks_rtree_node_alloc, NULL)) return (true); return (false); } void -chunk_prefork(void) +chunk_prefork(tsdn_t *tsdn) { - chunk_dss_prefork(); + chunk_dss_prefork(tsdn); } void -chunk_postfork_parent(void) +chunk_postfork_parent(tsdn_t *tsdn) { - chunk_dss_postfork_parent(); + chunk_dss_postfork_parent(tsdn); } void -chunk_postfork_child(void) +chunk_postfork_child(tsdn_t *tsdn) { - chunk_dss_postfork_child(); + chunk_dss_postfork_child(tsdn); } Index: head/contrib/jemalloc/src/chunk_dss.c =================================================================== --- head/contrib/jemalloc/src/chunk_dss.c (revision 299586) +++ head/contrib/jemalloc/src/chunk_dss.c (revision 299587) @@ -1,214 +1,214 @@ #define JEMALLOC_CHUNK_DSS_C_ #include "jemalloc/internal/jemalloc_internal.h" /******************************************************************************/ /* Data. */ const char *dss_prec_names[] = { "disabled", "primary", "secondary", "N/A" }; /* Current dss precedence default, used when creating new arenas. */ static dss_prec_t dss_prec_default = DSS_PREC_DEFAULT; /* * Protects sbrk() calls. This avoids malloc races among threads, though it * does not protect against races with threads that call sbrk() directly. */ static malloc_mutex_t dss_mtx; /* Base address of the DSS. */ static void *dss_base; /* Current end of the DSS, or ((void *)-1) if the DSS is exhausted. */ static void *dss_prev; /* Current upper limit on DSS addresses. */ static void *dss_max; /******************************************************************************/ static void * chunk_dss_sbrk(intptr_t increment) { #ifdef JEMALLOC_DSS return (sbrk(increment)); #else not_implemented(); return (NULL); #endif } dss_prec_t -chunk_dss_prec_get(void) +chunk_dss_prec_get(tsdn_t *tsdn) { dss_prec_t ret; if (!have_dss) return (dss_prec_disabled); - malloc_mutex_lock(&dss_mtx); + malloc_mutex_lock(tsdn, &dss_mtx); ret = dss_prec_default; - malloc_mutex_unlock(&dss_mtx); + malloc_mutex_unlock(tsdn, &dss_mtx); return (ret); } bool -chunk_dss_prec_set(dss_prec_t dss_prec) +chunk_dss_prec_set(tsdn_t *tsdn, dss_prec_t dss_prec) { if (!have_dss) return (dss_prec != dss_prec_disabled); - malloc_mutex_lock(&dss_mtx); + malloc_mutex_lock(tsdn, &dss_mtx); dss_prec_default = dss_prec; - malloc_mutex_unlock(&dss_mtx); + malloc_mutex_unlock(tsdn, &dss_mtx); return (false); } void * -chunk_alloc_dss(arena_t *arena, void *new_addr, size_t size, size_t alignment, - bool *zero, bool *commit) +chunk_alloc_dss(tsdn_t *tsdn, arena_t *arena, void *new_addr, size_t size, + size_t alignment, bool *zero, bool *commit) { cassert(have_dss); assert(size > 0 && (size & chunksize_mask) == 0); assert(alignment > 0 && (alignment & chunksize_mask) == 0); /* * sbrk() uses a signed increment argument, so take care not to * interpret a huge allocation request as a negative increment. 
*/ if ((intptr_t)size < 0) return (NULL); - malloc_mutex_lock(&dss_mtx); + malloc_mutex_lock(tsdn, &dss_mtx); if (dss_prev != (void *)-1) { /* * The loop is necessary to recover from races with other * threads that are using the DSS for something other than * malloc. */ do { void *ret, *cpad, *dss_next; size_t gap_size, cpad_size; intptr_t incr; /* Avoid an unnecessary system call. */ if (new_addr != NULL && dss_max != new_addr) break; /* Get the current end of the DSS. */ dss_max = chunk_dss_sbrk(0); /* Make sure the earlier condition still holds. */ if (new_addr != NULL && dss_max != new_addr) break; /* * Calculate how much padding is necessary to * chunk-align the end of the DSS. */ gap_size = (chunksize - CHUNK_ADDR2OFFSET(dss_max)) & chunksize_mask; /* * Compute how much chunk-aligned pad space (if any) is * necessary to satisfy alignment. This space can be * recycled for later use. */ cpad = (void *)((uintptr_t)dss_max + gap_size); ret = (void *)ALIGNMENT_CEILING((uintptr_t)dss_max, alignment); cpad_size = (uintptr_t)ret - (uintptr_t)cpad; dss_next = (void *)((uintptr_t)ret + size); if ((uintptr_t)ret < (uintptr_t)dss_max || (uintptr_t)dss_next < (uintptr_t)dss_max) { /* Wrap-around. */ - malloc_mutex_unlock(&dss_mtx); + malloc_mutex_unlock(tsdn, &dss_mtx); return (NULL); } incr = gap_size + cpad_size + size; dss_prev = chunk_dss_sbrk(incr); if (dss_prev == dss_max) { /* Success. */ dss_max = dss_next; - malloc_mutex_unlock(&dss_mtx); + malloc_mutex_unlock(tsdn, &dss_mtx); if (cpad_size != 0) { chunk_hooks_t chunk_hooks = CHUNK_HOOKS_INITIALIZER; - chunk_dalloc_wrapper(arena, + chunk_dalloc_wrapper(tsdn, arena, &chunk_hooks, cpad, cpad_size, - true); + false, true); } if (*zero) { JEMALLOC_VALGRIND_MAKE_MEM_UNDEFINED( ret, size); memset(ret, 0, size); } if (!*commit) *commit = pages_decommit(ret, size); return (ret); } } while (dss_prev != (void *)-1); } - malloc_mutex_unlock(&dss_mtx); + malloc_mutex_unlock(tsdn, &dss_mtx); return (NULL); } bool -chunk_in_dss(void *chunk) +chunk_in_dss(tsdn_t *tsdn, void *chunk) { bool ret; cassert(have_dss); - malloc_mutex_lock(&dss_mtx); + malloc_mutex_lock(tsdn, &dss_mtx); if ((uintptr_t)chunk >= (uintptr_t)dss_base && (uintptr_t)chunk < (uintptr_t)dss_max) ret = true; else ret = false; - malloc_mutex_unlock(&dss_mtx); + malloc_mutex_unlock(tsdn, &dss_mtx); return (ret); } bool chunk_dss_boot(void) { cassert(have_dss); - if (malloc_mutex_init(&dss_mtx)) + if (malloc_mutex_init(&dss_mtx, "dss", WITNESS_RANK_DSS)) return (true); dss_base = chunk_dss_sbrk(0); dss_prev = dss_base; dss_max = dss_base; return (false); } void -chunk_dss_prefork(void) +chunk_dss_prefork(tsdn_t *tsdn) { if (have_dss) - malloc_mutex_prefork(&dss_mtx); + malloc_mutex_prefork(tsdn, &dss_mtx); } void -chunk_dss_postfork_parent(void) +chunk_dss_postfork_parent(tsdn_t *tsdn) { if (have_dss) - malloc_mutex_postfork_parent(&dss_mtx); + malloc_mutex_postfork_parent(tsdn, &dss_mtx); } void -chunk_dss_postfork_child(void) +chunk_dss_postfork_child(tsdn_t *tsdn) { if (have_dss) - malloc_mutex_postfork_child(&dss_mtx); + malloc_mutex_postfork_child(tsdn, &dss_mtx); } /******************************************************************************/ Index: head/contrib/jemalloc/src/chunk_mmap.c =================================================================== --- head/contrib/jemalloc/src/chunk_mmap.c (revision 299586) +++ head/contrib/jemalloc/src/chunk_mmap.c (revision 299587) @@ -1,82 +1,78 @@ #define JEMALLOC_CHUNK_MMAP_C_ #include "jemalloc/internal/jemalloc_internal.h" 
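For readers unfamiliar with the over-allocate-and-trim approach that chunk_alloc_mmap_slow() below relies on, here is a minimal standalone sketch. It is illustrative only: it uses raw mmap(2)/munmap(2) rather than jemalloc's pages_* wrappers (which the patch extends with an explicit commit flag), and the name aligned_mmap_sketch is made up for this example.

#include <stddef.h>
#include <stdint.h>
#include <sys/mman.h>

/*
 * Map size bytes aligned to "alignment" (a power of two >= the page size) by
 * mapping an over-sized region and trimming the excess.  Sketch only;
 * chunk_alloc_mmap_slow() does the same through pages_map()/pages_trim() and
 * additionally tracks whether the pages are committed.
 */
static void *
aligned_mmap_sketch(size_t size, size_t alignment)
{
	size_t alloc_size = size + alignment;	/* Room to slide forward. */
	if (alloc_size < size)
		return (NULL);			/* size_t wrap-around. */

	char *pages = mmap(NULL, alloc_size, PROT_READ | PROT_WRITE,
	    MAP_PRIVATE | MAP_ANON, -1, 0);
	if (pages == MAP_FAILED)
		return (NULL);

	uintptr_t addr = (uintptr_t)pages;
	uintptr_t aligned = (addr + alignment - 1) & ~(alignment - 1);
	size_t leadsize = aligned - addr;
	size_t trailsize = alloc_size - leadsize - size;

	if (leadsize != 0)
		munmap(pages, leadsize);		/* Trim leading excess. */
	if (trailsize != 0)
		munmap((void *)(aligned + size), trailsize);
	return ((void *)aligned);
}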
/******************************************************************************/ static void * chunk_alloc_mmap_slow(size_t size, size_t alignment, bool *zero, bool *commit) { void *ret; size_t alloc_size; - alloc_size = size + alignment - PAGE; + alloc_size = size + alignment; /* Beware size_t wrap-around. */ if (alloc_size < size) return (NULL); do { void *pages; size_t leadsize; - pages = pages_map(NULL, alloc_size); + pages = pages_map(NULL, alloc_size, commit); if (pages == NULL) return (NULL); leadsize = ALIGNMENT_CEILING((uintptr_t)pages, alignment) - (uintptr_t)pages; - ret = pages_trim(pages, alloc_size, leadsize, size); + ret = pages_trim(pages, alloc_size, leadsize, size, commit); } while (ret == NULL); assert(ret != NULL); *zero = true; - if (!*commit) - *commit = pages_decommit(ret, size); return (ret); } void * chunk_alloc_mmap(void *new_addr, size_t size, size_t alignment, bool *zero, bool *commit) { void *ret; size_t offset; /* * Ideally, there would be a way to specify alignment to mmap() (like * NetBSD has), but in the absence of such a feature, we have to work * hard to efficiently create aligned mappings. The reliable, but * slow method is to create a mapping that is over-sized, then trim the * excess. However, that always results in one or two calls to * pages_unmap(). * * Optimistically try mapping precisely the right amount before falling * back to the slow method, with the expectation that the optimistic * approach works most of the time. */ assert(alignment != 0); assert((alignment & chunksize_mask) == 0); - ret = pages_map(new_addr, size); + ret = pages_map(new_addr, size, commit); if (ret == NULL || ret == new_addr) return (ret); assert(new_addr == NULL); offset = ALIGNMENT_ADDR2OFFSET(ret, alignment); if (offset != 0) { pages_unmap(ret, size); return (chunk_alloc_mmap_slow(size, alignment, zero, commit)); } assert(ret != NULL); *zero = true; - if (!*commit) - *commit = pages_decommit(ret, size); return (ret); } bool chunk_dalloc_mmap(void *chunk, size_t size) { if (config_munmap) pages_unmap(chunk, size); return (!config_munmap); } Index: head/contrib/jemalloc/src/ckh.c =================================================================== --- head/contrib/jemalloc/src/ckh.c (revision 299586) +++ head/contrib/jemalloc/src/ckh.c (revision 299587) @@ -1,568 +1,568 @@ /* ******************************************************************************* * Implementation of (2^1+,2) cuckoo hashing, where 2^1+ indicates that each * hash bucket contains 2^n cells, for n >= 1, and 2 indicates that two hash * functions are employed. The original cuckoo hashing algorithm was described * in: * * Pagh, R., F.F. Rodler (2004) Cuckoo Hashing. Journal of Algorithms * 51(2):122-144. * * Generalization of cuckoo hashing was discussed in: * * Erlingsson, U., M. Manasse, F. McSherry (2006) A cool and practical * alternative to traditional hash tables. In Proceedings of the 7th * Workshop on Distributed Data and Structures (WDAS'06), Santa Clara, CA, * January 2006. * * This implementation uses precisely two hash functions because that is the * fewest that can work, and supporting multiple hashes is an implementation * burden. Here is a reproduction of Figure 1 from Erlingsson et al. 
(2006) * that shows approximate expected maximum load factors for various * configurations: * * | #cells/bucket | * #hashes | 1 | 2 | 4 | 8 | * --------+-------+-------+-------+-------+ * 1 | 0.006 | 0.006 | 0.03 | 0.12 | * 2 | 0.49 | 0.86 |>0.93< |>0.96< | * 3 | 0.91 | 0.97 | 0.98 | 0.999 | * 4 | 0.97 | 0.99 | 0.999 | | * * The number of cells per bucket is chosen such that a bucket fits in one cache * line. So, on 32- and 64-bit systems, we use (8,2) and (4,2) cuckoo hashing, * respectively. * ******************************************************************************/ #define JEMALLOC_CKH_C_ #include "jemalloc/internal/jemalloc_internal.h" /******************************************************************************/ /* Function prototypes for non-inline static functions. */ -static bool ckh_grow(tsd_t *tsd, ckh_t *ckh); -static void ckh_shrink(tsd_t *tsd, ckh_t *ckh); +static bool ckh_grow(tsdn_t *tsdn, ckh_t *ckh); +static void ckh_shrink(tsdn_t *tsdn, ckh_t *ckh); /******************************************************************************/ /* * Search bucket for key and return the cell number if found; SIZE_T_MAX * otherwise. */ JEMALLOC_INLINE_C size_t ckh_bucket_search(ckh_t *ckh, size_t bucket, const void *key) { ckhc_t *cell; unsigned i; for (i = 0; i < (ZU(1) << LG_CKH_BUCKET_CELLS); i++) { cell = &ckh->tab[(bucket << LG_CKH_BUCKET_CELLS) + i]; if (cell->key != NULL && ckh->keycomp(key, cell->key)) return ((bucket << LG_CKH_BUCKET_CELLS) + i); } return (SIZE_T_MAX); } /* * Search table for key and return cell number if found; SIZE_T_MAX otherwise. */ JEMALLOC_INLINE_C size_t ckh_isearch(ckh_t *ckh, const void *key) { size_t hashes[2], bucket, cell; assert(ckh != NULL); ckh->hash(key, hashes); /* Search primary bucket. */ bucket = hashes[0] & ((ZU(1) << ckh->lg_curbuckets) - 1); cell = ckh_bucket_search(ckh, bucket, key); if (cell != SIZE_T_MAX) return (cell); /* Search secondary bucket. */ bucket = hashes[1] & ((ZU(1) << ckh->lg_curbuckets) - 1); cell = ckh_bucket_search(ckh, bucket, key); return (cell); } JEMALLOC_INLINE_C bool ckh_try_bucket_insert(ckh_t *ckh, size_t bucket, const void *key, const void *data) { ckhc_t *cell; unsigned offset, i; /* * Cycle through the cells in the bucket, starting at a random position. * The randomness avoids worst-case search overhead as buckets fill up. */ offset = (unsigned)prng_lg_range(&ckh->prng_state, LG_CKH_BUCKET_CELLS); for (i = 0; i < (ZU(1) << LG_CKH_BUCKET_CELLS); i++) { cell = &ckh->tab[(bucket << LG_CKH_BUCKET_CELLS) + ((i + offset) & ((ZU(1) << LG_CKH_BUCKET_CELLS) - 1))]; if (cell->key == NULL) { cell->key = key; cell->data = data; ckh->count++; return (false); } } return (true); } /* * No space is available in bucket. Randomly evict an item, then try to find an * alternate location for that item. Iteratively repeat this * eviction/relocation procedure until either success or detection of an * eviction/relocation bucket cycle. */ JEMALLOC_INLINE_C bool ckh_evict_reloc_insert(ckh_t *ckh, size_t argbucket, void const **argkey, void const **argdata) { const void *key, *data, *tkey, *tdata; ckhc_t *cell; size_t hashes[2], bucket, tbucket; unsigned i; bucket = argbucket; key = *argkey; data = *argdata; while (true) { /* * Choose a random item within the bucket to evict. This is * critical to correct function, because without (eventually) * evicting all items within a bucket during iteration, it * would be possible to get stuck in an infinite loop if there * were an item for which both hashes indicated the same * bucket. 
*/ i = (unsigned)prng_lg_range(&ckh->prng_state, LG_CKH_BUCKET_CELLS); cell = &ckh->tab[(bucket << LG_CKH_BUCKET_CELLS) + i]; assert(cell->key != NULL); /* Swap cell->{key,data} and {key,data} (evict). */ tkey = cell->key; tdata = cell->data; cell->key = key; cell->data = data; key = tkey; data = tdata; #ifdef CKH_COUNT ckh->nrelocs++; #endif /* Find the alternate bucket for the evicted item. */ ckh->hash(key, hashes); tbucket = hashes[1] & ((ZU(1) << ckh->lg_curbuckets) - 1); if (tbucket == bucket) { tbucket = hashes[0] & ((ZU(1) << ckh->lg_curbuckets) - 1); /* * It may be that (tbucket == bucket) still, if the * item's hashes both indicate this bucket. However, * we are guaranteed to eventually escape this bucket * during iteration, assuming pseudo-random item * selection (true randomness would make infinite * looping a remote possibility). The reason we can * never get trapped forever is that there are two * cases: * * 1) This bucket == argbucket, so we will quickly * detect an eviction cycle and terminate. * 2) An item was evicted to this bucket from another, * which means that at least one item in this bucket * has hashes that indicate distinct buckets. */ } /* Check for a cycle. */ if (tbucket == argbucket) { *argkey = key; *argdata = data; return (true); } bucket = tbucket; if (!ckh_try_bucket_insert(ckh, bucket, key, data)) return (false); } } JEMALLOC_INLINE_C bool ckh_try_insert(ckh_t *ckh, void const**argkey, void const**argdata) { size_t hashes[2], bucket; const void *key = *argkey; const void *data = *argdata; ckh->hash(key, hashes); /* Try to insert in primary bucket. */ bucket = hashes[0] & ((ZU(1) << ckh->lg_curbuckets) - 1); if (!ckh_try_bucket_insert(ckh, bucket, key, data)) return (false); /* Try to insert in secondary bucket. */ bucket = hashes[1] & ((ZU(1) << ckh->lg_curbuckets) - 1); if (!ckh_try_bucket_insert(ckh, bucket, key, data)) return (false); /* * Try to find a place for this item via iterative eviction/relocation. */ return (ckh_evict_reloc_insert(ckh, bucket, argkey, argdata)); } /* * Try to rebuild the hash table from scratch by inserting all items from the * old table into the new. */ JEMALLOC_INLINE_C bool ckh_rebuild(ckh_t *ckh, ckhc_t *aTab) { size_t count, i, nins; const void *key, *data; count = ckh->count; ckh->count = 0; for (i = nins = 0; nins < count; i++) { if (aTab[i].key != NULL) { key = aTab[i].key; data = aTab[i].data; if (ckh_try_insert(ckh, &key, &data)) { ckh->count = count; return (true); } nins++; } } return (false); } static bool -ckh_grow(tsd_t *tsd, ckh_t *ckh) +ckh_grow(tsdn_t *tsdn, ckh_t *ckh) { bool ret; ckhc_t *tab, *ttab; unsigned lg_prevbuckets, lg_curcells; #ifdef CKH_COUNT ckh->ngrows++; #endif /* * It is possible (though unlikely, given well behaved hashes) that the * table will have to be doubled more than once in order to create a * usable table. */ lg_prevbuckets = ckh->lg_curbuckets; lg_curcells = ckh->lg_curbuckets + LG_CKH_BUCKET_CELLS; while (true) { size_t usize; lg_curcells++; usize = sa2u(sizeof(ckhc_t) << lg_curcells, CACHELINE); if (unlikely(usize == 0 || usize > HUGE_MAXCLASS)) { ret = true; goto label_return; } - tab = (ckhc_t *)ipallocztm(tsd, usize, CACHELINE, true, NULL, - true, NULL); + tab = (ckhc_t *)ipallocztm(tsdn, usize, CACHELINE, true, NULL, + true, arena_ichoose(tsdn, NULL)); if (tab == NULL) { ret = true; goto label_return; } /* Swap in new table. 
*/ ttab = ckh->tab; ckh->tab = tab; tab = ttab; ckh->lg_curbuckets = lg_curcells - LG_CKH_BUCKET_CELLS; if (!ckh_rebuild(ckh, tab)) { - idalloctm(tsd, tab, tcache_get(tsd, false), true, true); + idalloctm(tsdn, tab, NULL, true, true); break; } /* Rebuilding failed, so back out partially rebuilt table. */ - idalloctm(tsd, ckh->tab, tcache_get(tsd, false), true, true); + idalloctm(tsdn, ckh->tab, NULL, true, true); ckh->tab = tab; ckh->lg_curbuckets = lg_prevbuckets; } ret = false; label_return: return (ret); } static void -ckh_shrink(tsd_t *tsd, ckh_t *ckh) +ckh_shrink(tsdn_t *tsdn, ckh_t *ckh) { ckhc_t *tab, *ttab; size_t usize; unsigned lg_prevbuckets, lg_curcells; /* * It is possible (though unlikely, given well behaved hashes) that the * table rebuild will fail. */ lg_prevbuckets = ckh->lg_curbuckets; lg_curcells = ckh->lg_curbuckets + LG_CKH_BUCKET_CELLS - 1; usize = sa2u(sizeof(ckhc_t) << lg_curcells, CACHELINE); if (unlikely(usize == 0 || usize > HUGE_MAXCLASS)) return; - tab = (ckhc_t *)ipallocztm(tsd, usize, CACHELINE, true, NULL, true, - NULL); + tab = (ckhc_t *)ipallocztm(tsdn, usize, CACHELINE, true, NULL, true, + arena_ichoose(tsdn, NULL)); if (tab == NULL) { /* * An OOM error isn't worth propagating, since it doesn't * prevent this or future operations from proceeding. */ return; } /* Swap in new table. */ ttab = ckh->tab; ckh->tab = tab; tab = ttab; ckh->lg_curbuckets = lg_curcells - LG_CKH_BUCKET_CELLS; if (!ckh_rebuild(ckh, tab)) { - idalloctm(tsd, tab, tcache_get(tsd, false), true, true); + idalloctm(tsdn, tab, NULL, true, true); #ifdef CKH_COUNT ckh->nshrinks++; #endif return; } /* Rebuilding failed, so back out partially rebuilt table. */ - idalloctm(tsd, ckh->tab, tcache_get(tsd, false), true, true); + idalloctm(tsdn, ckh->tab, NULL, true, true); ckh->tab = tab; ckh->lg_curbuckets = lg_prevbuckets; #ifdef CKH_COUNT ckh->nshrinkfails++; #endif } bool -ckh_new(tsd_t *tsd, ckh_t *ckh, size_t minitems, ckh_hash_t *hash, +ckh_new(tsdn_t *tsdn, ckh_t *ckh, size_t minitems, ckh_hash_t *hash, ckh_keycomp_t *keycomp) { bool ret; size_t mincells, usize; unsigned lg_mincells; assert(minitems > 0); assert(hash != NULL); assert(keycomp != NULL); #ifdef CKH_COUNT ckh->ngrows = 0; ckh->nshrinks = 0; ckh->nshrinkfails = 0; ckh->ninserts = 0; ckh->nrelocs = 0; #endif ckh->prng_state = 42; /* Value doesn't really matter. */ ckh->count = 0; /* * Find the minimum power of 2 that is large enough to fit minitems * entries. We are using (2+,2) cuckoo hashing, which has an expected * maximum load factor of at least ~0.86, so 0.75 is a conservative load * factor that will typically allow mincells items to fit without ever * growing the table. */ assert(LG_CKH_BUCKET_CELLS > 0); mincells = ((minitems + (3 - (minitems % 3))) / 3) << 2; for (lg_mincells = LG_CKH_BUCKET_CELLS; (ZU(1) << lg_mincells) < mincells; lg_mincells++) ; /* Do nothing. 
*/ ckh->lg_minbuckets = lg_mincells - LG_CKH_BUCKET_CELLS; ckh->lg_curbuckets = lg_mincells - LG_CKH_BUCKET_CELLS; ckh->hash = hash; ckh->keycomp = keycomp; usize = sa2u(sizeof(ckhc_t) << lg_mincells, CACHELINE); if (unlikely(usize == 0 || usize > HUGE_MAXCLASS)) { ret = true; goto label_return; } - ckh->tab = (ckhc_t *)ipallocztm(tsd, usize, CACHELINE, true, NULL, true, - NULL); + ckh->tab = (ckhc_t *)ipallocztm(tsdn, usize, CACHELINE, true, NULL, + true, arena_ichoose(tsdn, NULL)); if (ckh->tab == NULL) { ret = true; goto label_return; } ret = false; label_return: return (ret); } void -ckh_delete(tsd_t *tsd, ckh_t *ckh) +ckh_delete(tsdn_t *tsdn, ckh_t *ckh) { assert(ckh != NULL); #ifdef CKH_VERBOSE malloc_printf( "%s(%p): ngrows: %"FMTu64", nshrinks: %"FMTu64"," " nshrinkfails: %"FMTu64", ninserts: %"FMTu64"," " nrelocs: %"FMTu64"\n", __func__, ckh, (unsigned long long)ckh->ngrows, (unsigned long long)ckh->nshrinks, (unsigned long long)ckh->nshrinkfails, (unsigned long long)ckh->ninserts, (unsigned long long)ckh->nrelocs); #endif - idalloctm(tsd, ckh->tab, tcache_get(tsd, false), true, true); + idalloctm(tsdn, ckh->tab, NULL, true, true); if (config_debug) - memset(ckh, 0x5a, sizeof(ckh_t)); + memset(ckh, JEMALLOC_FREE_JUNK, sizeof(ckh_t)); } size_t ckh_count(ckh_t *ckh) { assert(ckh != NULL); return (ckh->count); } bool ckh_iter(ckh_t *ckh, size_t *tabind, void **key, void **data) { size_t i, ncells; for (i = *tabind, ncells = (ZU(1) << (ckh->lg_curbuckets + LG_CKH_BUCKET_CELLS)); i < ncells; i++) { if (ckh->tab[i].key != NULL) { if (key != NULL) *key = (void *)ckh->tab[i].key; if (data != NULL) *data = (void *)ckh->tab[i].data; *tabind = i + 1; return (false); } } return (true); } bool -ckh_insert(tsd_t *tsd, ckh_t *ckh, const void *key, const void *data) +ckh_insert(tsdn_t *tsdn, ckh_t *ckh, const void *key, const void *data) { bool ret; assert(ckh != NULL); assert(ckh_search(ckh, key, NULL, NULL)); #ifdef CKH_COUNT ckh->ninserts++; #endif while (ckh_try_insert(ckh, &key, &data)) { - if (ckh_grow(tsd, ckh)) { + if (ckh_grow(tsdn, ckh)) { ret = true; goto label_return; } } ret = false; label_return: return (ret); } bool -ckh_remove(tsd_t *tsd, ckh_t *ckh, const void *searchkey, void **key, +ckh_remove(tsdn_t *tsdn, ckh_t *ckh, const void *searchkey, void **key, void **data) { size_t cell; assert(ckh != NULL); cell = ckh_isearch(ckh, searchkey); if (cell != SIZE_T_MAX) { if (key != NULL) *key = (void *)ckh->tab[cell].key; if (data != NULL) *data = (void *)ckh->tab[cell].data; ckh->tab[cell].key = NULL; ckh->tab[cell].data = NULL; /* Not necessary. */ ckh->count--; /* Try to halve the table if it is less than 1/4 full. */ if (ckh->count < (ZU(1) << (ckh->lg_curbuckets + LG_CKH_BUCKET_CELLS - 2)) && ckh->lg_curbuckets > ckh->lg_minbuckets) { /* Ignore error due to OOM. */ - ckh_shrink(tsd, ckh); + ckh_shrink(tsdn, ckh); } return (false); } return (true); } bool ckh_search(ckh_t *ckh, const void *searchkey, void **key, void **data) { size_t cell; assert(ckh != NULL); cell = ckh_isearch(ckh, searchkey); if (cell != SIZE_T_MAX) { if (key != NULL) *key = (void *)ckh->tab[cell].key; if (data != NULL) *data = (void *)ckh->tab[cell].data; return (false); } return (true); } void ckh_string_hash(const void *key, size_t r_hash[2]) { hash(key, strlen((const char *)key), 0x94122f33U, r_hash); } bool ckh_string_keycomp(const void *k1, const void *k2) { assert(k1 != NULL); assert(k2 != NULL); return (strcmp((char *)k1, (char *)k2) ? 
false : true); } void ckh_pointer_hash(const void *key, size_t r_hash[2]) { union { const void *v; size_t i; } u; assert(sizeof(u.v) == sizeof(u.i)); u.v = key; hash(&u.i, sizeof(u.i), 0xd983396eU, r_hash); } bool ckh_pointer_keycomp(const void *k1, const void *k2) { return ((k1 == k2) ? true : false); } Index: head/contrib/jemalloc/src/ctl.c =================================================================== --- head/contrib/jemalloc/src/ctl.c (revision 299586) +++ head/contrib/jemalloc/src/ctl.c (revision 299587) @@ -1,2220 +1,2254 @@ #define JEMALLOC_CTL_C_ #include "jemalloc/internal/jemalloc_internal.h" /******************************************************************************/ /* Data. */ /* * ctl_mtx protects the following: * - ctl_stats.* */ static malloc_mutex_t ctl_mtx; static bool ctl_initialized; static uint64_t ctl_epoch; static ctl_stats_t ctl_stats; /******************************************************************************/ /* Helpers for named and indexed nodes. */ JEMALLOC_INLINE_C const ctl_named_node_t * ctl_named_node(const ctl_node_t *node) { return ((node->named) ? (const ctl_named_node_t *)node : NULL); } JEMALLOC_INLINE_C const ctl_named_node_t * ctl_named_children(const ctl_named_node_t *node, size_t index) { const ctl_named_node_t *children = ctl_named_node(node->children); return (children ? &children[index] : NULL); } JEMALLOC_INLINE_C const ctl_indexed_node_t * ctl_indexed_node(const ctl_node_t *node) { return (!node->named ? (const ctl_indexed_node_t *)node : NULL); } /******************************************************************************/ /* Function prototypes for non-inline static functions. */ #define CTL_PROTO(n) \ -static int n##_ctl(const size_t *mib, size_t miblen, void *oldp, \ - size_t *oldlenp, void *newp, size_t newlen); +static int n##_ctl(tsd_t *tsd, const size_t *mib, size_t miblen, \ + void *oldp, size_t *oldlenp, void *newp, size_t newlen); #define INDEX_PROTO(n) \ -static const ctl_named_node_t *n##_index(const size_t *mib, \ - size_t miblen, size_t i); +static const ctl_named_node_t *n##_index(tsdn_t *tsdn, \ + const size_t *mib, size_t miblen, size_t i); static bool ctl_arena_init(ctl_arena_stats_t *astats); static void ctl_arena_clear(ctl_arena_stats_t *astats); -static void ctl_arena_stats_amerge(ctl_arena_stats_t *cstats, +static void ctl_arena_stats_amerge(tsdn_t *tsdn, ctl_arena_stats_t *cstats, arena_t *arena); static void ctl_arena_stats_smerge(ctl_arena_stats_t *sstats, ctl_arena_stats_t *astats); -static void ctl_arena_refresh(arena_t *arena, unsigned i); -static bool ctl_grow(void); -static void ctl_refresh(void); -static bool ctl_init(void); -static int ctl_lookup(const char *name, ctl_node_t const **nodesp, - size_t *mibp, size_t *depthp); +static void ctl_arena_refresh(tsdn_t *tsdn, arena_t *arena, unsigned i); +static bool ctl_grow(tsdn_t *tsdn); +static void ctl_refresh(tsdn_t *tsdn); +static bool ctl_init(tsdn_t *tsdn); +static int ctl_lookup(tsdn_t *tsdn, const char *name, + ctl_node_t const **nodesp, size_t *mibp, size_t *depthp); CTL_PROTO(version) CTL_PROTO(epoch) CTL_PROTO(thread_tcache_enabled) CTL_PROTO(thread_tcache_flush) CTL_PROTO(thread_prof_name) CTL_PROTO(thread_prof_active) CTL_PROTO(thread_arena) CTL_PROTO(thread_allocated) CTL_PROTO(thread_allocatedp) CTL_PROTO(thread_deallocated) CTL_PROTO(thread_deallocatedp) CTL_PROTO(config_cache_oblivious) CTL_PROTO(config_debug) CTL_PROTO(config_fill) CTL_PROTO(config_lazy_lock) CTL_PROTO(config_malloc_conf) CTL_PROTO(config_munmap) 
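Concretely, after the CTL_PROTO() signature change above, each handler now receives the caller's tsd_t from ctl_byname()/ctl_bymib() instead of fetching thread-specific data itself. For example, CTL_PROTO(epoch) expands roughly to:

/* Old expansion (handler had no access to the caller's tsd): */
static int epoch_ctl(const size_t *mib, size_t miblen, void *oldp,
    size_t *oldlenp, void *newp, size_t newlen);

/* New expansion (tsd is threaded through from ctl_byname()/ctl_bymib()): */
static int epoch_ctl(tsd_t *tsd, const size_t *mib, size_t miblen,
    void *oldp, size_t *oldlenp, void *newp, size_t newlen);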
CTL_PROTO(config_prof) CTL_PROTO(config_prof_libgcc) CTL_PROTO(config_prof_libunwind) CTL_PROTO(config_stats) CTL_PROTO(config_tcache) CTL_PROTO(config_tls) CTL_PROTO(config_utrace) CTL_PROTO(config_valgrind) CTL_PROTO(config_xmalloc) CTL_PROTO(opt_abort) CTL_PROTO(opt_dss) CTL_PROTO(opt_lg_chunk) CTL_PROTO(opt_narenas) CTL_PROTO(opt_purge) CTL_PROTO(opt_lg_dirty_mult) CTL_PROTO(opt_decay_time) CTL_PROTO(opt_stats_print) CTL_PROTO(opt_junk) CTL_PROTO(opt_zero) CTL_PROTO(opt_quarantine) CTL_PROTO(opt_redzone) CTL_PROTO(opt_utrace) CTL_PROTO(opt_xmalloc) CTL_PROTO(opt_tcache) CTL_PROTO(opt_lg_tcache_max) CTL_PROTO(opt_prof) CTL_PROTO(opt_prof_prefix) CTL_PROTO(opt_prof_active) CTL_PROTO(opt_prof_thread_active_init) CTL_PROTO(opt_lg_prof_sample) CTL_PROTO(opt_lg_prof_interval) CTL_PROTO(opt_prof_gdump) CTL_PROTO(opt_prof_final) CTL_PROTO(opt_prof_leak) CTL_PROTO(opt_prof_accum) CTL_PROTO(tcache_create) CTL_PROTO(tcache_flush) CTL_PROTO(tcache_destroy) -static void arena_i_purge(unsigned arena_ind, bool all); +static void arena_i_purge(tsdn_t *tsdn, unsigned arena_ind, bool all); CTL_PROTO(arena_i_purge) CTL_PROTO(arena_i_decay) +CTL_PROTO(arena_i_reset) CTL_PROTO(arena_i_dss) CTL_PROTO(arena_i_lg_dirty_mult) CTL_PROTO(arena_i_decay_time) CTL_PROTO(arena_i_chunk_hooks) INDEX_PROTO(arena_i) CTL_PROTO(arenas_bin_i_size) CTL_PROTO(arenas_bin_i_nregs) CTL_PROTO(arenas_bin_i_run_size) INDEX_PROTO(arenas_bin_i) CTL_PROTO(arenas_lrun_i_size) INDEX_PROTO(arenas_lrun_i) CTL_PROTO(arenas_hchunk_i_size) INDEX_PROTO(arenas_hchunk_i) CTL_PROTO(arenas_narenas) CTL_PROTO(arenas_initialized) CTL_PROTO(arenas_lg_dirty_mult) CTL_PROTO(arenas_decay_time) CTL_PROTO(arenas_quantum) CTL_PROTO(arenas_page) CTL_PROTO(arenas_tcache_max) CTL_PROTO(arenas_nbins) CTL_PROTO(arenas_nhbins) CTL_PROTO(arenas_nlruns) CTL_PROTO(arenas_nhchunks) CTL_PROTO(arenas_extend) CTL_PROTO(prof_thread_active_init) CTL_PROTO(prof_active) CTL_PROTO(prof_dump) CTL_PROTO(prof_gdump) CTL_PROTO(prof_reset) CTL_PROTO(prof_interval) CTL_PROTO(lg_prof_sample) CTL_PROTO(stats_arenas_i_small_allocated) CTL_PROTO(stats_arenas_i_small_nmalloc) CTL_PROTO(stats_arenas_i_small_ndalloc) CTL_PROTO(stats_arenas_i_small_nrequests) CTL_PROTO(stats_arenas_i_large_allocated) CTL_PROTO(stats_arenas_i_large_nmalloc) CTL_PROTO(stats_arenas_i_large_ndalloc) CTL_PROTO(stats_arenas_i_large_nrequests) CTL_PROTO(stats_arenas_i_huge_allocated) CTL_PROTO(stats_arenas_i_huge_nmalloc) CTL_PROTO(stats_arenas_i_huge_ndalloc) CTL_PROTO(stats_arenas_i_huge_nrequests) CTL_PROTO(stats_arenas_i_bins_j_nmalloc) CTL_PROTO(stats_arenas_i_bins_j_ndalloc) CTL_PROTO(stats_arenas_i_bins_j_nrequests) CTL_PROTO(stats_arenas_i_bins_j_curregs) CTL_PROTO(stats_arenas_i_bins_j_nfills) CTL_PROTO(stats_arenas_i_bins_j_nflushes) CTL_PROTO(stats_arenas_i_bins_j_nruns) CTL_PROTO(stats_arenas_i_bins_j_nreruns) CTL_PROTO(stats_arenas_i_bins_j_curruns) INDEX_PROTO(stats_arenas_i_bins_j) CTL_PROTO(stats_arenas_i_lruns_j_nmalloc) CTL_PROTO(stats_arenas_i_lruns_j_ndalloc) CTL_PROTO(stats_arenas_i_lruns_j_nrequests) CTL_PROTO(stats_arenas_i_lruns_j_curruns) INDEX_PROTO(stats_arenas_i_lruns_j) CTL_PROTO(stats_arenas_i_hchunks_j_nmalloc) CTL_PROTO(stats_arenas_i_hchunks_j_ndalloc) CTL_PROTO(stats_arenas_i_hchunks_j_nrequests) CTL_PROTO(stats_arenas_i_hchunks_j_curhchunks) INDEX_PROTO(stats_arenas_i_hchunks_j) CTL_PROTO(stats_arenas_i_nthreads) CTL_PROTO(stats_arenas_i_dss) CTL_PROTO(stats_arenas_i_lg_dirty_mult) CTL_PROTO(stats_arenas_i_decay_time) CTL_PROTO(stats_arenas_i_pactive) 
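The arena_i_reset handler declared above is reachable from application code through the usual mallctl namespace as arena.<i>.reset, alongside the stats.retained counter wired up further below. A minimal caller sketch follows; it assumes <malloc_np.h> is the right header for mallctl() on this platform, picks an arbitrary arena index, and elides most error handling.

#include <stdint.h>
#include <stdio.h>
#include <malloc_np.h>

static void
arena_reset_sketch(unsigned arena_ind)
{
	char name[64];
	uint64_t epoch = 1;
	size_t retained, sz = sizeof(retained);

	/*
	 * arena.<i>.reset discards the arena's extant allocations; the caller
	 * is responsible for ensuring none of them are still in use.
	 */
	snprintf(name, sizeof(name), "arena.%u.reset", arena_ind);
	if (mallctl(name, NULL, NULL, NULL, 0) != 0)
		return;

	/*
	 * Bump the epoch so the stats snapshot is refreshed, then read the
	 * stats.retained total (only meaningful when stats are enabled).
	 */
	mallctl("epoch", NULL, NULL, &epoch, sizeof(epoch));
	if (mallctl("stats.retained", &retained, &sz, NULL, 0) == 0)
		printf("retained: %zu bytes\n", retained);
}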
CTL_PROTO(stats_arenas_i_pdirty) CTL_PROTO(stats_arenas_i_mapped) +CTL_PROTO(stats_arenas_i_retained) CTL_PROTO(stats_arenas_i_npurge) CTL_PROTO(stats_arenas_i_nmadvise) CTL_PROTO(stats_arenas_i_purged) CTL_PROTO(stats_arenas_i_metadata_mapped) CTL_PROTO(stats_arenas_i_metadata_allocated) INDEX_PROTO(stats_arenas_i) CTL_PROTO(stats_cactive) CTL_PROTO(stats_allocated) CTL_PROTO(stats_active) CTL_PROTO(stats_metadata) CTL_PROTO(stats_resident) CTL_PROTO(stats_mapped) +CTL_PROTO(stats_retained) /******************************************************************************/ /* mallctl tree. */ /* Maximum tree depth. */ #define CTL_MAX_DEPTH 6 #define NAME(n) {true}, n #define CHILD(t, c) \ sizeof(c##_node) / sizeof(ctl_##t##_node_t), \ (ctl_node_t *)c##_node, \ NULL #define CTL(c) 0, NULL, c##_ctl /* * Only handles internal indexed nodes, since there are currently no external * ones. */ #define INDEX(i) {false}, i##_index static const ctl_named_node_t thread_tcache_node[] = { {NAME("enabled"), CTL(thread_tcache_enabled)}, {NAME("flush"), CTL(thread_tcache_flush)} }; static const ctl_named_node_t thread_prof_node[] = { {NAME("name"), CTL(thread_prof_name)}, {NAME("active"), CTL(thread_prof_active)} }; static const ctl_named_node_t thread_node[] = { {NAME("arena"), CTL(thread_arena)}, {NAME("allocated"), CTL(thread_allocated)}, {NAME("allocatedp"), CTL(thread_allocatedp)}, {NAME("deallocated"), CTL(thread_deallocated)}, {NAME("deallocatedp"), CTL(thread_deallocatedp)}, {NAME("tcache"), CHILD(named, thread_tcache)}, {NAME("prof"), CHILD(named, thread_prof)} }; static const ctl_named_node_t config_node[] = { {NAME("cache_oblivious"), CTL(config_cache_oblivious)}, {NAME("debug"), CTL(config_debug)}, {NAME("fill"), CTL(config_fill)}, {NAME("lazy_lock"), CTL(config_lazy_lock)}, {NAME("malloc_conf"), CTL(config_malloc_conf)}, {NAME("munmap"), CTL(config_munmap)}, {NAME("prof"), CTL(config_prof)}, {NAME("prof_libgcc"), CTL(config_prof_libgcc)}, {NAME("prof_libunwind"), CTL(config_prof_libunwind)}, {NAME("stats"), CTL(config_stats)}, {NAME("tcache"), CTL(config_tcache)}, {NAME("tls"), CTL(config_tls)}, {NAME("utrace"), CTL(config_utrace)}, {NAME("valgrind"), CTL(config_valgrind)}, {NAME("xmalloc"), CTL(config_xmalloc)} }; static const ctl_named_node_t opt_node[] = { {NAME("abort"), CTL(opt_abort)}, {NAME("dss"), CTL(opt_dss)}, {NAME("lg_chunk"), CTL(opt_lg_chunk)}, {NAME("narenas"), CTL(opt_narenas)}, {NAME("purge"), CTL(opt_purge)}, {NAME("lg_dirty_mult"), CTL(opt_lg_dirty_mult)}, {NAME("decay_time"), CTL(opt_decay_time)}, {NAME("stats_print"), CTL(opt_stats_print)}, {NAME("junk"), CTL(opt_junk)}, {NAME("zero"), CTL(opt_zero)}, {NAME("quarantine"), CTL(opt_quarantine)}, {NAME("redzone"), CTL(opt_redzone)}, {NAME("utrace"), CTL(opt_utrace)}, {NAME("xmalloc"), CTL(opt_xmalloc)}, {NAME("tcache"), CTL(opt_tcache)}, {NAME("lg_tcache_max"), CTL(opt_lg_tcache_max)}, {NAME("prof"), CTL(opt_prof)}, {NAME("prof_prefix"), CTL(opt_prof_prefix)}, {NAME("prof_active"), CTL(opt_prof_active)}, {NAME("prof_thread_active_init"), CTL(opt_prof_thread_active_init)}, {NAME("lg_prof_sample"), CTL(opt_lg_prof_sample)}, {NAME("lg_prof_interval"), CTL(opt_lg_prof_interval)}, {NAME("prof_gdump"), CTL(opt_prof_gdump)}, {NAME("prof_final"), CTL(opt_prof_final)}, {NAME("prof_leak"), CTL(opt_prof_leak)}, {NAME("prof_accum"), CTL(opt_prof_accum)} }; static const ctl_named_node_t tcache_node[] = { {NAME("create"), CTL(tcache_create)}, {NAME("flush"), CTL(tcache_flush)}, {NAME("destroy"), CTL(tcache_destroy)} }; static const 
ctl_named_node_t arena_i_node[] = { {NAME("purge"), CTL(arena_i_purge)}, {NAME("decay"), CTL(arena_i_decay)}, + {NAME("reset"), CTL(arena_i_reset)}, {NAME("dss"), CTL(arena_i_dss)}, {NAME("lg_dirty_mult"), CTL(arena_i_lg_dirty_mult)}, {NAME("decay_time"), CTL(arena_i_decay_time)}, {NAME("chunk_hooks"), CTL(arena_i_chunk_hooks)} }; static const ctl_named_node_t super_arena_i_node[] = { {NAME(""), CHILD(named, arena_i)} }; static const ctl_indexed_node_t arena_node[] = { {INDEX(arena_i)} }; static const ctl_named_node_t arenas_bin_i_node[] = { {NAME("size"), CTL(arenas_bin_i_size)}, {NAME("nregs"), CTL(arenas_bin_i_nregs)}, {NAME("run_size"), CTL(arenas_bin_i_run_size)} }; static const ctl_named_node_t super_arenas_bin_i_node[] = { {NAME(""), CHILD(named, arenas_bin_i)} }; static const ctl_indexed_node_t arenas_bin_node[] = { {INDEX(arenas_bin_i)} }; static const ctl_named_node_t arenas_lrun_i_node[] = { {NAME("size"), CTL(arenas_lrun_i_size)} }; static const ctl_named_node_t super_arenas_lrun_i_node[] = { {NAME(""), CHILD(named, arenas_lrun_i)} }; static const ctl_indexed_node_t arenas_lrun_node[] = { {INDEX(arenas_lrun_i)} }; static const ctl_named_node_t arenas_hchunk_i_node[] = { {NAME("size"), CTL(arenas_hchunk_i_size)} }; static const ctl_named_node_t super_arenas_hchunk_i_node[] = { {NAME(""), CHILD(named, arenas_hchunk_i)} }; static const ctl_indexed_node_t arenas_hchunk_node[] = { {INDEX(arenas_hchunk_i)} }; static const ctl_named_node_t arenas_node[] = { {NAME("narenas"), CTL(arenas_narenas)}, {NAME("initialized"), CTL(arenas_initialized)}, {NAME("lg_dirty_mult"), CTL(arenas_lg_dirty_mult)}, {NAME("decay_time"), CTL(arenas_decay_time)}, {NAME("quantum"), CTL(arenas_quantum)}, {NAME("page"), CTL(arenas_page)}, {NAME("tcache_max"), CTL(arenas_tcache_max)}, {NAME("nbins"), CTL(arenas_nbins)}, {NAME("nhbins"), CTL(arenas_nhbins)}, {NAME("bin"), CHILD(indexed, arenas_bin)}, {NAME("nlruns"), CTL(arenas_nlruns)}, {NAME("lrun"), CHILD(indexed, arenas_lrun)}, {NAME("nhchunks"), CTL(arenas_nhchunks)}, {NAME("hchunk"), CHILD(indexed, arenas_hchunk)}, {NAME("extend"), CTL(arenas_extend)} }; static const ctl_named_node_t prof_node[] = { {NAME("thread_active_init"), CTL(prof_thread_active_init)}, {NAME("active"), CTL(prof_active)}, {NAME("dump"), CTL(prof_dump)}, {NAME("gdump"), CTL(prof_gdump)}, {NAME("reset"), CTL(prof_reset)}, {NAME("interval"), CTL(prof_interval)}, {NAME("lg_sample"), CTL(lg_prof_sample)} }; static const ctl_named_node_t stats_arenas_i_metadata_node[] = { {NAME("mapped"), CTL(stats_arenas_i_metadata_mapped)}, {NAME("allocated"), CTL(stats_arenas_i_metadata_allocated)} }; static const ctl_named_node_t stats_arenas_i_small_node[] = { {NAME("allocated"), CTL(stats_arenas_i_small_allocated)}, {NAME("nmalloc"), CTL(stats_arenas_i_small_nmalloc)}, {NAME("ndalloc"), CTL(stats_arenas_i_small_ndalloc)}, {NAME("nrequests"), CTL(stats_arenas_i_small_nrequests)} }; static const ctl_named_node_t stats_arenas_i_large_node[] = { {NAME("allocated"), CTL(stats_arenas_i_large_allocated)}, {NAME("nmalloc"), CTL(stats_arenas_i_large_nmalloc)}, {NAME("ndalloc"), CTL(stats_arenas_i_large_ndalloc)}, {NAME("nrequests"), CTL(stats_arenas_i_large_nrequests)} }; static const ctl_named_node_t stats_arenas_i_huge_node[] = { {NAME("allocated"), CTL(stats_arenas_i_huge_allocated)}, {NAME("nmalloc"), CTL(stats_arenas_i_huge_nmalloc)}, {NAME("ndalloc"), CTL(stats_arenas_i_huge_ndalloc)}, {NAME("nrequests"), CTL(stats_arenas_i_huge_nrequests)} }; static const ctl_named_node_t stats_arenas_i_bins_j_node[] = { 
{NAME("nmalloc"), CTL(stats_arenas_i_bins_j_nmalloc)}, {NAME("ndalloc"), CTL(stats_arenas_i_bins_j_ndalloc)}, {NAME("nrequests"), CTL(stats_arenas_i_bins_j_nrequests)}, {NAME("curregs"), CTL(stats_arenas_i_bins_j_curregs)}, {NAME("nfills"), CTL(stats_arenas_i_bins_j_nfills)}, {NAME("nflushes"), CTL(stats_arenas_i_bins_j_nflushes)}, {NAME("nruns"), CTL(stats_arenas_i_bins_j_nruns)}, {NAME("nreruns"), CTL(stats_arenas_i_bins_j_nreruns)}, {NAME("curruns"), CTL(stats_arenas_i_bins_j_curruns)} }; static const ctl_named_node_t super_stats_arenas_i_bins_j_node[] = { {NAME(""), CHILD(named, stats_arenas_i_bins_j)} }; static const ctl_indexed_node_t stats_arenas_i_bins_node[] = { {INDEX(stats_arenas_i_bins_j)} }; static const ctl_named_node_t stats_arenas_i_lruns_j_node[] = { {NAME("nmalloc"), CTL(stats_arenas_i_lruns_j_nmalloc)}, {NAME("ndalloc"), CTL(stats_arenas_i_lruns_j_ndalloc)}, {NAME("nrequests"), CTL(stats_arenas_i_lruns_j_nrequests)}, {NAME("curruns"), CTL(stats_arenas_i_lruns_j_curruns)} }; static const ctl_named_node_t super_stats_arenas_i_lruns_j_node[] = { {NAME(""), CHILD(named, stats_arenas_i_lruns_j)} }; static const ctl_indexed_node_t stats_arenas_i_lruns_node[] = { {INDEX(stats_arenas_i_lruns_j)} }; static const ctl_named_node_t stats_arenas_i_hchunks_j_node[] = { {NAME("nmalloc"), CTL(stats_arenas_i_hchunks_j_nmalloc)}, {NAME("ndalloc"), CTL(stats_arenas_i_hchunks_j_ndalloc)}, {NAME("nrequests"), CTL(stats_arenas_i_hchunks_j_nrequests)}, {NAME("curhchunks"), CTL(stats_arenas_i_hchunks_j_curhchunks)} }; static const ctl_named_node_t super_stats_arenas_i_hchunks_j_node[] = { {NAME(""), CHILD(named, stats_arenas_i_hchunks_j)} }; static const ctl_indexed_node_t stats_arenas_i_hchunks_node[] = { {INDEX(stats_arenas_i_hchunks_j)} }; static const ctl_named_node_t stats_arenas_i_node[] = { {NAME("nthreads"), CTL(stats_arenas_i_nthreads)}, {NAME("dss"), CTL(stats_arenas_i_dss)}, {NAME("lg_dirty_mult"), CTL(stats_arenas_i_lg_dirty_mult)}, {NAME("decay_time"), CTL(stats_arenas_i_decay_time)}, {NAME("pactive"), CTL(stats_arenas_i_pactive)}, {NAME("pdirty"), CTL(stats_arenas_i_pdirty)}, {NAME("mapped"), CTL(stats_arenas_i_mapped)}, + {NAME("retained"), CTL(stats_arenas_i_retained)}, {NAME("npurge"), CTL(stats_arenas_i_npurge)}, {NAME("nmadvise"), CTL(stats_arenas_i_nmadvise)}, {NAME("purged"), CTL(stats_arenas_i_purged)}, {NAME("metadata"), CHILD(named, stats_arenas_i_metadata)}, {NAME("small"), CHILD(named, stats_arenas_i_small)}, {NAME("large"), CHILD(named, stats_arenas_i_large)}, {NAME("huge"), CHILD(named, stats_arenas_i_huge)}, {NAME("bins"), CHILD(indexed, stats_arenas_i_bins)}, {NAME("lruns"), CHILD(indexed, stats_arenas_i_lruns)}, {NAME("hchunks"), CHILD(indexed, stats_arenas_i_hchunks)} }; static const ctl_named_node_t super_stats_arenas_i_node[] = { {NAME(""), CHILD(named, stats_arenas_i)} }; static const ctl_indexed_node_t stats_arenas_node[] = { {INDEX(stats_arenas_i)} }; static const ctl_named_node_t stats_node[] = { {NAME("cactive"), CTL(stats_cactive)}, {NAME("allocated"), CTL(stats_allocated)}, {NAME("active"), CTL(stats_active)}, {NAME("metadata"), CTL(stats_metadata)}, {NAME("resident"), CTL(stats_resident)}, {NAME("mapped"), CTL(stats_mapped)}, + {NAME("retained"), CTL(stats_retained)}, {NAME("arenas"), CHILD(indexed, stats_arenas)} }; static const ctl_named_node_t root_node[] = { {NAME("version"), CTL(version)}, {NAME("epoch"), CTL(epoch)}, {NAME("thread"), CHILD(named, thread)}, {NAME("config"), CHILD(named, config)}, {NAME("opt"), CHILD(named, opt)}, {NAME("tcache"), 
CHILD(named, tcache)}, {NAME("arena"), CHILD(indexed, arena)}, {NAME("arenas"), CHILD(named, arenas)}, {NAME("prof"), CHILD(named, prof)}, {NAME("stats"), CHILD(named, stats)} }; static const ctl_named_node_t super_root_node[] = { {NAME(""), CHILD(named, root)} }; #undef NAME #undef CHILD #undef CTL #undef INDEX /******************************************************************************/ static bool ctl_arena_init(ctl_arena_stats_t *astats) { if (astats->lstats == NULL) { astats->lstats = (malloc_large_stats_t *)a0malloc(nlclasses * sizeof(malloc_large_stats_t)); if (astats->lstats == NULL) return (true); } if (astats->hstats == NULL) { astats->hstats = (malloc_huge_stats_t *)a0malloc(nhclasses * sizeof(malloc_huge_stats_t)); if (astats->hstats == NULL) return (true); } return (false); } static void ctl_arena_clear(ctl_arena_stats_t *astats) { astats->nthreads = 0; astats->dss = dss_prec_names[dss_prec_limit]; astats->lg_dirty_mult = -1; astats->decay_time = -1; astats->pactive = 0; astats->pdirty = 0; if (config_stats) { memset(&astats->astats, 0, sizeof(arena_stats_t)); astats->allocated_small = 0; astats->nmalloc_small = 0; astats->ndalloc_small = 0; astats->nrequests_small = 0; memset(astats->bstats, 0, NBINS * sizeof(malloc_bin_stats_t)); memset(astats->lstats, 0, nlclasses * sizeof(malloc_large_stats_t)); memset(astats->hstats, 0, nhclasses * sizeof(malloc_huge_stats_t)); } } static void -ctl_arena_stats_amerge(ctl_arena_stats_t *cstats, arena_t *arena) +ctl_arena_stats_amerge(tsdn_t *tsdn, ctl_arena_stats_t *cstats, arena_t *arena) { unsigned i; if (config_stats) { - arena_stats_merge(arena, &cstats->nthreads, &cstats->dss, + arena_stats_merge(tsdn, arena, &cstats->nthreads, &cstats->dss, &cstats->lg_dirty_mult, &cstats->decay_time, &cstats->pactive, &cstats->pdirty, &cstats->astats, cstats->bstats, cstats->lstats, cstats->hstats); for (i = 0; i < NBINS; i++) { cstats->allocated_small += cstats->bstats[i].curregs * index2size(i); cstats->nmalloc_small += cstats->bstats[i].nmalloc; cstats->ndalloc_small += cstats->bstats[i].ndalloc; cstats->nrequests_small += cstats->bstats[i].nrequests; } } else { - arena_basic_stats_merge(arena, &cstats->nthreads, &cstats->dss, - &cstats->lg_dirty_mult, &cstats->decay_time, + arena_basic_stats_merge(tsdn, arena, &cstats->nthreads, + &cstats->dss, &cstats->lg_dirty_mult, &cstats->decay_time, &cstats->pactive, &cstats->pdirty); } } static void ctl_arena_stats_smerge(ctl_arena_stats_t *sstats, ctl_arena_stats_t *astats) { unsigned i; sstats->nthreads += astats->nthreads; sstats->pactive += astats->pactive; sstats->pdirty += astats->pdirty; if (config_stats) { sstats->astats.mapped += astats->astats.mapped; + sstats->astats.retained += astats->astats.retained; sstats->astats.npurge += astats->astats.npurge; sstats->astats.nmadvise += astats->astats.nmadvise; sstats->astats.purged += astats->astats.purged; sstats->astats.metadata_mapped += astats->astats.metadata_mapped; sstats->astats.metadata_allocated += astats->astats.metadata_allocated; sstats->allocated_small += astats->allocated_small; sstats->nmalloc_small += astats->nmalloc_small; sstats->ndalloc_small += astats->ndalloc_small; sstats->nrequests_small += astats->nrequests_small; sstats->astats.allocated_large += astats->astats.allocated_large; sstats->astats.nmalloc_large += astats->astats.nmalloc_large; sstats->astats.ndalloc_large += astats->astats.ndalloc_large; sstats->astats.nrequests_large += astats->astats.nrequests_large; sstats->astats.allocated_huge += 
astats->astats.allocated_huge; sstats->astats.nmalloc_huge += astats->astats.nmalloc_huge; sstats->astats.ndalloc_huge += astats->astats.ndalloc_huge; for (i = 0; i < NBINS; i++) { sstats->bstats[i].nmalloc += astats->bstats[i].nmalloc; sstats->bstats[i].ndalloc += astats->bstats[i].ndalloc; sstats->bstats[i].nrequests += astats->bstats[i].nrequests; sstats->bstats[i].curregs += astats->bstats[i].curregs; if (config_tcache) { sstats->bstats[i].nfills += astats->bstats[i].nfills; sstats->bstats[i].nflushes += astats->bstats[i].nflushes; } sstats->bstats[i].nruns += astats->bstats[i].nruns; sstats->bstats[i].reruns += astats->bstats[i].reruns; sstats->bstats[i].curruns += astats->bstats[i].curruns; } for (i = 0; i < nlclasses; i++) { sstats->lstats[i].nmalloc += astats->lstats[i].nmalloc; sstats->lstats[i].ndalloc += astats->lstats[i].ndalloc; sstats->lstats[i].nrequests += astats->lstats[i].nrequests; sstats->lstats[i].curruns += astats->lstats[i].curruns; } for (i = 0; i < nhclasses; i++) { sstats->hstats[i].nmalloc += astats->hstats[i].nmalloc; sstats->hstats[i].ndalloc += astats->hstats[i].ndalloc; sstats->hstats[i].curhchunks += astats->hstats[i].curhchunks; } } } static void -ctl_arena_refresh(arena_t *arena, unsigned i) +ctl_arena_refresh(tsdn_t *tsdn, arena_t *arena, unsigned i) { ctl_arena_stats_t *astats = &ctl_stats.arenas[i]; ctl_arena_stats_t *sstats = &ctl_stats.arenas[ctl_stats.narenas]; ctl_arena_clear(astats); - ctl_arena_stats_amerge(astats, arena); + ctl_arena_stats_amerge(tsdn, astats, arena); /* Merge into sum stats as well. */ ctl_arena_stats_smerge(sstats, astats); } static bool -ctl_grow(void) +ctl_grow(tsdn_t *tsdn) { ctl_arena_stats_t *astats; /* Initialize new arena. */ - if (arena_init(ctl_stats.narenas) == NULL) + if (arena_init(tsdn, ctl_stats.narenas) == NULL) return (true); /* Allocate extended arena stats. */ astats = (ctl_arena_stats_t *)a0malloc((ctl_stats.narenas + 2) * sizeof(ctl_arena_stats_t)); if (astats == NULL) return (true); /* Initialize the new astats element. */ memcpy(astats, ctl_stats.arenas, (ctl_stats.narenas + 1) * sizeof(ctl_arena_stats_t)); memset(&astats[ctl_stats.narenas + 1], 0, sizeof(ctl_arena_stats_t)); if (ctl_arena_init(&astats[ctl_stats.narenas + 1])) { a0dalloc(astats); return (true); } /* Swap merged stats to their new location. */ { ctl_arena_stats_t tstats; memcpy(&tstats, &astats[ctl_stats.narenas], sizeof(ctl_arena_stats_t)); memcpy(&astats[ctl_stats.narenas], &astats[ctl_stats.narenas + 1], sizeof(ctl_arena_stats_t)); memcpy(&astats[ctl_stats.narenas + 1], &tstats, sizeof(ctl_arena_stats_t)); } a0dalloc(ctl_stats.arenas); ctl_stats.arenas = astats; ctl_stats.narenas++; return (false); } static void -ctl_refresh(void) +ctl_refresh(tsdn_t *tsdn) { unsigned i; VARIABLE_ARRAY(arena_t *, tarenas, ctl_stats.narenas); /* * Clear sum stats, since they will be merged into by * ctl_arena_refresh(). 
*/ ctl_arena_clear(&ctl_stats.arenas[ctl_stats.narenas]); for (i = 0; i < ctl_stats.narenas; i++) - tarenas[i] = arena_get(i, false); + tarenas[i] = arena_get(tsdn, i, false); for (i = 0; i < ctl_stats.narenas; i++) { bool initialized = (tarenas[i] != NULL); ctl_stats.arenas[i].initialized = initialized; if (initialized) - ctl_arena_refresh(tarenas[i], i); + ctl_arena_refresh(tsdn, tarenas[i], i); } if (config_stats) { size_t base_allocated, base_resident, base_mapped; - base_stats_get(&base_allocated, &base_resident, &base_mapped); + base_stats_get(tsdn, &base_allocated, &base_resident, + &base_mapped); ctl_stats.allocated = ctl_stats.arenas[ctl_stats.narenas].allocated_small + ctl_stats.arenas[ctl_stats.narenas].astats.allocated_large + ctl_stats.arenas[ctl_stats.narenas].astats.allocated_huge; ctl_stats.active = (ctl_stats.arenas[ctl_stats.narenas].pactive << LG_PAGE); ctl_stats.metadata = base_allocated + ctl_stats.arenas[ctl_stats.narenas].astats.metadata_mapped + ctl_stats.arenas[ctl_stats.narenas].astats .metadata_allocated; ctl_stats.resident = base_resident + ctl_stats.arenas[ctl_stats.narenas].astats.metadata_mapped + ((ctl_stats.arenas[ctl_stats.narenas].pactive + ctl_stats.arenas[ctl_stats.narenas].pdirty) << LG_PAGE); ctl_stats.mapped = base_mapped + ctl_stats.arenas[ctl_stats.narenas].astats.mapped; + ctl_stats.retained = + ctl_stats.arenas[ctl_stats.narenas].astats.retained; } ctl_epoch++; } static bool -ctl_init(void) +ctl_init(tsdn_t *tsdn) { bool ret; - malloc_mutex_lock(&ctl_mtx); + malloc_mutex_lock(tsdn, &ctl_mtx); if (!ctl_initialized) { /* * Allocate space for one extra arena stats element, which * contains summed stats across all arenas. */ ctl_stats.narenas = narenas_total_get(); ctl_stats.arenas = (ctl_arena_stats_t *)a0malloc( (ctl_stats.narenas + 1) * sizeof(ctl_arena_stats_t)); if (ctl_stats.arenas == NULL) { ret = true; goto label_return; } memset(ctl_stats.arenas, 0, (ctl_stats.narenas + 1) * sizeof(ctl_arena_stats_t)); /* * Initialize all stats structures, regardless of whether they * ever get used. Lazy initialization would allow errors to * cause inconsistent state to be viewable by the application. */ if (config_stats) { unsigned i; for (i = 0; i <= ctl_stats.narenas; i++) { if (ctl_arena_init(&ctl_stats.arenas[i])) { unsigned j; for (j = 0; j < i; j++) { a0dalloc( ctl_stats.arenas[j].lstats); a0dalloc( ctl_stats.arenas[j].hstats); } a0dalloc(ctl_stats.arenas); ctl_stats.arenas = NULL; ret = true; goto label_return; } } } ctl_stats.arenas[ctl_stats.narenas].initialized = true; ctl_epoch = 0; - ctl_refresh(); + ctl_refresh(tsdn); ctl_initialized = true; } ret = false; label_return: - malloc_mutex_unlock(&ctl_mtx); + malloc_mutex_unlock(tsdn, &ctl_mtx); return (ret); } static int -ctl_lookup(const char *name, ctl_node_t const **nodesp, size_t *mibp, - size_t *depthp) +ctl_lookup(tsdn_t *tsdn, const char *name, ctl_node_t const **nodesp, + size_t *mibp, size_t *depthp) { int ret; const char *elm, *tdot, *dot; size_t elen, i, j; const ctl_named_node_t *node; elm = name; /* Equivalent to strchrnul(). */ dot = ((tdot = strchr(elm, '.')) != NULL) ? tdot : strchr(elm, '\0'); elen = (size_t)((uintptr_t)dot - (uintptr_t)elm); if (elen == 0) { ret = ENOENT; goto label_return; } node = super_root_node; for (i = 0; i < *depthp; i++) { assert(node); assert(node->nchildren > 0); if (ctl_named_node(node->children) != NULL) { const ctl_named_node_t *pnode = node; /* Children are named. 
*/ for (j = 0; j < node->nchildren; j++) { const ctl_named_node_t *child = ctl_named_children(node, j); if (strlen(child->name) == elen && strncmp(elm, child->name, elen) == 0) { node = child; if (nodesp != NULL) nodesp[i] = (const ctl_node_t *)node; mibp[i] = j; break; } } if (node == pnode) { ret = ENOENT; goto label_return; } } else { uintmax_t index; const ctl_indexed_node_t *inode; /* Children are indexed. */ index = malloc_strtoumax(elm, NULL, 10); if (index == UINTMAX_MAX || index > SIZE_T_MAX) { ret = ENOENT; goto label_return; } inode = ctl_indexed_node(node->children); - node = inode->index(mibp, *depthp, (size_t)index); + node = inode->index(tsdn, mibp, *depthp, (size_t)index); if (node == NULL) { ret = ENOENT; goto label_return; } if (nodesp != NULL) nodesp[i] = (const ctl_node_t *)node; mibp[i] = (size_t)index; } if (node->ctl != NULL) { /* Terminal node. */ if (*dot != '\0') { /* * The name contains more elements than are * in this path through the tree. */ ret = ENOENT; goto label_return; } /* Complete lookup successful. */ *depthp = i + 1; break; } /* Update elm. */ if (*dot == '\0') { /* No more elements. */ ret = ENOENT; goto label_return; } elm = &dot[1]; dot = ((tdot = strchr(elm, '.')) != NULL) ? tdot : strchr(elm, '\0'); elen = (size_t)((uintptr_t)dot - (uintptr_t)elm); } ret = 0; label_return: return (ret); } int -ctl_byname(const char *name, void *oldp, size_t *oldlenp, void *newp, - size_t newlen) +ctl_byname(tsd_t *tsd, const char *name, void *oldp, size_t *oldlenp, + void *newp, size_t newlen) { int ret; size_t depth; ctl_node_t const *nodes[CTL_MAX_DEPTH]; size_t mib[CTL_MAX_DEPTH]; const ctl_named_node_t *node; - if (!ctl_initialized && ctl_init()) { + if (!ctl_initialized && ctl_init(tsd_tsdn(tsd))) { ret = EAGAIN; goto label_return; } depth = CTL_MAX_DEPTH; - ret = ctl_lookup(name, nodes, mib, &depth); + ret = ctl_lookup(tsd_tsdn(tsd), name, nodes, mib, &depth); if (ret != 0) goto label_return; node = ctl_named_node(nodes[depth-1]); if (node != NULL && node->ctl) - ret = node->ctl(mib, depth, oldp, oldlenp, newp, newlen); + ret = node->ctl(tsd, mib, depth, oldp, oldlenp, newp, newlen); else { /* The name refers to a partial path through the ctl tree. */ ret = ENOENT; } label_return: return(ret); } int -ctl_nametomib(const char *name, size_t *mibp, size_t *miblenp) +ctl_nametomib(tsdn_t *tsdn, const char *name, size_t *mibp, size_t *miblenp) { int ret; - if (!ctl_initialized && ctl_init()) { + if (!ctl_initialized && ctl_init(tsdn)) { ret = EAGAIN; goto label_return; } - ret = ctl_lookup(name, NULL, mibp, miblenp); + ret = ctl_lookup(tsdn, name, NULL, mibp, miblenp); label_return: return(ret); } int -ctl_bymib(const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, - void *newp, size_t newlen) +ctl_bymib(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp, + size_t *oldlenp, void *newp, size_t newlen) { int ret; const ctl_named_node_t *node; size_t i; - if (!ctl_initialized && ctl_init()) { + if (!ctl_initialized && ctl_init(tsd_tsdn(tsd))) { ret = EAGAIN; goto label_return; } /* Iterate down the tree. */ node = super_root_node; for (i = 0; i < miblen; i++) { assert(node); assert(node->nchildren > 0); if (ctl_named_node(node->children) != NULL) { /* Children are named. */ if (node->nchildren <= (unsigned)mib[i]) { ret = ENOENT; goto label_return; } node = ctl_named_children(node, mib[i]); } else { const ctl_indexed_node_t *inode; /* Indexed element. 
*/ inode = ctl_indexed_node(node->children); - node = inode->index(mib, miblen, mib[i]); + node = inode->index(tsd_tsdn(tsd), mib, miblen, mib[i]); if (node == NULL) { ret = ENOENT; goto label_return; } } } /* Call the ctl function. */ if (node && node->ctl) - ret = node->ctl(mib, miblen, oldp, oldlenp, newp, newlen); + ret = node->ctl(tsd, mib, miblen, oldp, oldlenp, newp, newlen); else { /* Partial MIB. */ ret = ENOENT; } label_return: return(ret); } bool ctl_boot(void) { - if (malloc_mutex_init(&ctl_mtx)) + if (malloc_mutex_init(&ctl_mtx, "ctl", WITNESS_RANK_CTL)) return (true); ctl_initialized = false; return (false); } void -ctl_prefork(void) +ctl_prefork(tsdn_t *tsdn) { - malloc_mutex_prefork(&ctl_mtx); + malloc_mutex_prefork(tsdn, &ctl_mtx); } void -ctl_postfork_parent(void) +ctl_postfork_parent(tsdn_t *tsdn) { - malloc_mutex_postfork_parent(&ctl_mtx); + malloc_mutex_postfork_parent(tsdn, &ctl_mtx); } void -ctl_postfork_child(void) +ctl_postfork_child(tsdn_t *tsdn) { - malloc_mutex_postfork_child(&ctl_mtx); + malloc_mutex_postfork_child(tsdn, &ctl_mtx); } /******************************************************************************/ /* *_ctl() functions. */ #define READONLY() do { \ if (newp != NULL || newlen != 0) { \ ret = EPERM; \ goto label_return; \ } \ } while (0) #define WRITEONLY() do { \ if (oldp != NULL || oldlenp != NULL) { \ ret = EPERM; \ goto label_return; \ } \ } while (0) #define READ_XOR_WRITE() do { \ if ((oldp != NULL && oldlenp != NULL) && (newp != NULL || \ newlen != 0)) { \ ret = EPERM; \ goto label_return; \ } \ } while (0) #define READ(v, t) do { \ if (oldp != NULL && oldlenp != NULL) { \ if (*oldlenp != sizeof(t)) { \ size_t copylen = (sizeof(t) <= *oldlenp) \ ? sizeof(t) : *oldlenp; \ memcpy(oldp, (void *)&(v), copylen); \ ret = EINVAL; \ goto label_return; \ } \ *(t *)oldp = (v); \ } \ } while (0) #define WRITE(v, t) do { \ if (newp != NULL) { \ if (newlen != sizeof(t)) { \ ret = EINVAL; \ goto label_return; \ } \ (v) = *(t *)newp; \ } \ } while (0) /* * There's a lot of code duplication in the following macros due to limitations * in how nested cpp macros are expanded. 
*/ #define CTL_RO_CLGEN(c, l, n, v, t) \ static int \ -n##_ctl(const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, \ - void *newp, size_t newlen) \ +n##_ctl(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp, \ + size_t *oldlenp, void *newp, size_t newlen) \ { \ int ret; \ t oldval; \ \ if (!(c)) \ return (ENOENT); \ if (l) \ - malloc_mutex_lock(&ctl_mtx); \ + malloc_mutex_lock(tsd_tsdn(tsd), &ctl_mtx); \ READONLY(); \ oldval = (v); \ READ(oldval, t); \ \ ret = 0; \ label_return: \ if (l) \ - malloc_mutex_unlock(&ctl_mtx); \ + malloc_mutex_unlock(tsd_tsdn(tsd), &ctl_mtx); \ return (ret); \ } #define CTL_RO_CGEN(c, n, v, t) \ static int \ -n##_ctl(const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, \ - void *newp, size_t newlen) \ +n##_ctl(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp, \ + size_t *oldlenp, void *newp, size_t newlen) \ { \ int ret; \ t oldval; \ \ if (!(c)) \ return (ENOENT); \ - malloc_mutex_lock(&ctl_mtx); \ + malloc_mutex_lock(tsd_tsdn(tsd), &ctl_mtx); \ READONLY(); \ oldval = (v); \ READ(oldval, t); \ \ ret = 0; \ label_return: \ - malloc_mutex_unlock(&ctl_mtx); \ + malloc_mutex_unlock(tsd_tsdn(tsd), &ctl_mtx); \ return (ret); \ } #define CTL_RO_GEN(n, v, t) \ static int \ -n##_ctl(const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, \ - void *newp, size_t newlen) \ +n##_ctl(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp, \ + size_t *oldlenp, void *newp, size_t newlen) \ { \ int ret; \ t oldval; \ \ - malloc_mutex_lock(&ctl_mtx); \ + malloc_mutex_lock(tsd_tsdn(tsd), &ctl_mtx); \ READONLY(); \ oldval = (v); \ READ(oldval, t); \ \ ret = 0; \ label_return: \ - malloc_mutex_unlock(&ctl_mtx); \ + malloc_mutex_unlock(tsd_tsdn(tsd), &ctl_mtx); \ return (ret); \ } /* * ctl_mtx is not acquired, under the assumption that no pertinent data will * mutate during the call. 
*/ #define CTL_RO_NL_CGEN(c, n, v, t) \ static int \ -n##_ctl(const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, \ - void *newp, size_t newlen) \ +n##_ctl(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp, \ + size_t *oldlenp, void *newp, size_t newlen) \ { \ int ret; \ t oldval; \ \ if (!(c)) \ return (ENOENT); \ READONLY(); \ oldval = (v); \ READ(oldval, t); \ \ ret = 0; \ label_return: \ return (ret); \ } #define CTL_RO_NL_GEN(n, v, t) \ static int \ -n##_ctl(const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, \ - void *newp, size_t newlen) \ +n##_ctl(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp, \ + size_t *oldlenp, void *newp, size_t newlen) \ { \ int ret; \ t oldval; \ \ READONLY(); \ oldval = (v); \ READ(oldval, t); \ \ ret = 0; \ label_return: \ return (ret); \ } #define CTL_TSD_RO_NL_CGEN(c, n, m, t) \ static int \ -n##_ctl(const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, \ - void *newp, size_t newlen) \ +n##_ctl(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp, \ + size_t *oldlenp, void *newp, size_t newlen) \ { \ int ret; \ t oldval; \ - tsd_t *tsd; \ \ if (!(c)) \ return (ENOENT); \ READONLY(); \ - tsd = tsd_fetch(); \ oldval = (m(tsd)); \ READ(oldval, t); \ \ ret = 0; \ label_return: \ return (ret); \ } #define CTL_RO_CONFIG_GEN(n, t) \ static int \ -n##_ctl(const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, \ - void *newp, size_t newlen) \ +n##_ctl(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp, \ + size_t *oldlenp, void *newp, size_t newlen) \ { \ int ret; \ t oldval; \ \ READONLY(); \ oldval = n; \ READ(oldval, t); \ \ ret = 0; \ label_return: \ return (ret); \ } /******************************************************************************/ CTL_RO_NL_GEN(version, JEMALLOC_VERSION, const char *) static int -epoch_ctl(const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, - void *newp, size_t newlen) +epoch_ctl(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp, + size_t *oldlenp, void *newp, size_t newlen) { int ret; UNUSED uint64_t newval; - malloc_mutex_lock(&ctl_mtx); + malloc_mutex_lock(tsd_tsdn(tsd), &ctl_mtx); WRITE(newval, uint64_t); if (newp != NULL) - ctl_refresh(); + ctl_refresh(tsd_tsdn(tsd)); READ(ctl_epoch, uint64_t); ret = 0; label_return: - malloc_mutex_unlock(&ctl_mtx); + malloc_mutex_unlock(tsd_tsdn(tsd), &ctl_mtx); return (ret); } /******************************************************************************/ CTL_RO_CONFIG_GEN(config_cache_oblivious, bool) CTL_RO_CONFIG_GEN(config_debug, bool) CTL_RO_CONFIG_GEN(config_fill, bool) CTL_RO_CONFIG_GEN(config_lazy_lock, bool) CTL_RO_CONFIG_GEN(config_malloc_conf, const char *) CTL_RO_CONFIG_GEN(config_munmap, bool) CTL_RO_CONFIG_GEN(config_prof, bool) CTL_RO_CONFIG_GEN(config_prof_libgcc, bool) CTL_RO_CONFIG_GEN(config_prof_libunwind, bool) CTL_RO_CONFIG_GEN(config_stats, bool) CTL_RO_CONFIG_GEN(config_tcache, bool) CTL_RO_CONFIG_GEN(config_tls, bool) CTL_RO_CONFIG_GEN(config_utrace, bool) CTL_RO_CONFIG_GEN(config_valgrind, bool) CTL_RO_CONFIG_GEN(config_xmalloc, bool) /******************************************************************************/ CTL_RO_NL_GEN(opt_abort, opt_abort, bool) CTL_RO_NL_GEN(opt_dss, opt_dss, const char *) CTL_RO_NL_GEN(opt_lg_chunk, opt_lg_chunk, size_t) CTL_RO_NL_GEN(opt_narenas, opt_narenas, unsigned) CTL_RO_NL_GEN(opt_purge, purge_mode_names[opt_purge], const char *) CTL_RO_NL_GEN(opt_lg_dirty_mult, opt_lg_dirty_mult, ssize_t) CTL_RO_NL_GEN(opt_decay_time, opt_decay_time, 
ssize_t) CTL_RO_NL_GEN(opt_stats_print, opt_stats_print, bool) CTL_RO_NL_CGEN(config_fill, opt_junk, opt_junk, const char *) CTL_RO_NL_CGEN(config_fill, opt_quarantine, opt_quarantine, size_t) CTL_RO_NL_CGEN(config_fill, opt_redzone, opt_redzone, bool) CTL_RO_NL_CGEN(config_fill, opt_zero, opt_zero, bool) CTL_RO_NL_CGEN(config_utrace, opt_utrace, opt_utrace, bool) CTL_RO_NL_CGEN(config_xmalloc, opt_xmalloc, opt_xmalloc, bool) CTL_RO_NL_CGEN(config_tcache, opt_tcache, opt_tcache, bool) CTL_RO_NL_CGEN(config_tcache, opt_lg_tcache_max, opt_lg_tcache_max, ssize_t) CTL_RO_NL_CGEN(config_prof, opt_prof, opt_prof, bool) CTL_RO_NL_CGEN(config_prof, opt_prof_prefix, opt_prof_prefix, const char *) CTL_RO_NL_CGEN(config_prof, opt_prof_active, opt_prof_active, bool) CTL_RO_NL_CGEN(config_prof, opt_prof_thread_active_init, opt_prof_thread_active_init, bool) CTL_RO_NL_CGEN(config_prof, opt_lg_prof_sample, opt_lg_prof_sample, size_t) CTL_RO_NL_CGEN(config_prof, opt_prof_accum, opt_prof_accum, bool) CTL_RO_NL_CGEN(config_prof, opt_lg_prof_interval, opt_lg_prof_interval, ssize_t) CTL_RO_NL_CGEN(config_prof, opt_prof_gdump, opt_prof_gdump, bool) CTL_RO_NL_CGEN(config_prof, opt_prof_final, opt_prof_final, bool) CTL_RO_NL_CGEN(config_prof, opt_prof_leak, opt_prof_leak, bool) /******************************************************************************/ static int -thread_arena_ctl(const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, - void *newp, size_t newlen) +thread_arena_ctl(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp, + size_t *oldlenp, void *newp, size_t newlen) { int ret; - tsd_t *tsd; arena_t *oldarena; unsigned newind, oldind; - tsd = tsd_fetch(); oldarena = arena_choose(tsd, NULL); if (oldarena == NULL) return (EAGAIN); - malloc_mutex_lock(&ctl_mtx); + malloc_mutex_lock(tsd_tsdn(tsd), &ctl_mtx); newind = oldind = oldarena->ind; WRITE(newind, unsigned); READ(oldind, unsigned); if (newind != oldind) { arena_t *newarena; if (newind >= ctl_stats.narenas) { /* New arena index is out of range. */ ret = EFAULT; goto label_return; } /* Initialize arena if necessary. */ - newarena = arena_get(newind, true); + newarena = arena_get(tsd_tsdn(tsd), newind, true); if (newarena == NULL) { ret = EAGAIN; goto label_return; } /* Set new arena/tcache associations. 
*/ arena_migrate(tsd, oldind, newind); if (config_tcache) { tcache_t *tcache = tsd_tcache_get(tsd); if (tcache != NULL) { - tcache_arena_reassociate(tcache, oldarena, - newarena); + tcache_arena_reassociate(tsd_tsdn(tsd), tcache, + oldarena, newarena); } } } ret = 0; label_return: - malloc_mutex_unlock(&ctl_mtx); + malloc_mutex_unlock(tsd_tsdn(tsd), &ctl_mtx); return (ret); } CTL_TSD_RO_NL_CGEN(config_stats, thread_allocated, tsd_thread_allocated_get, uint64_t) CTL_TSD_RO_NL_CGEN(config_stats, thread_allocatedp, tsd_thread_allocatedp_get, uint64_t *) CTL_TSD_RO_NL_CGEN(config_stats, thread_deallocated, tsd_thread_deallocated_get, uint64_t) CTL_TSD_RO_NL_CGEN(config_stats, thread_deallocatedp, tsd_thread_deallocatedp_get, uint64_t *) static int -thread_tcache_enabled_ctl(const size_t *mib, size_t miblen, void *oldp, - size_t *oldlenp, void *newp, size_t newlen) +thread_tcache_enabled_ctl(tsd_t *tsd, const size_t *mib, size_t miblen, + void *oldp, size_t *oldlenp, void *newp, size_t newlen) { int ret; bool oldval; if (!config_tcache) return (ENOENT); oldval = tcache_enabled_get(); if (newp != NULL) { if (newlen != sizeof(bool)) { ret = EINVAL; goto label_return; } tcache_enabled_set(*(bool *)newp); } READ(oldval, bool); ret = 0; label_return: return (ret); } static int -thread_tcache_flush_ctl(const size_t *mib, size_t miblen, void *oldp, - size_t *oldlenp, void *newp, size_t newlen) +thread_tcache_flush_ctl(tsd_t *tsd, const size_t *mib, size_t miblen, + void *oldp, size_t *oldlenp, void *newp, size_t newlen) { int ret; if (!config_tcache) return (ENOENT); READONLY(); WRITEONLY(); tcache_flush(); ret = 0; label_return: return (ret); } static int -thread_prof_name_ctl(const size_t *mib, size_t miblen, void *oldp, +thread_prof_name_ctl(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, void *newp, size_t newlen) { int ret; if (!config_prof) return (ENOENT); READ_XOR_WRITE(); if (newp != NULL) { - tsd_t *tsd; - if (newlen != sizeof(const char *)) { ret = EINVAL; goto label_return; } - tsd = tsd_fetch(); - if ((ret = prof_thread_name_set(tsd, *(const char **)newp)) != 0) goto label_return; } else { - const char *oldname = prof_thread_name_get(); + const char *oldname = prof_thread_name_get(tsd); READ(oldname, const char *); } ret = 0; label_return: return (ret); } static int -thread_prof_active_ctl(const size_t *mib, size_t miblen, void *oldp, +thread_prof_active_ctl(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, void *newp, size_t newlen) { int ret; bool oldval; if (!config_prof) return (ENOENT); - oldval = prof_thread_active_get(); + oldval = prof_thread_active_get(tsd); if (newp != NULL) { if (newlen != sizeof(bool)) { ret = EINVAL; goto label_return; } - if (prof_thread_active_set(*(bool *)newp)) { + if (prof_thread_active_set(tsd, *(bool *)newp)) { ret = EAGAIN; goto label_return; } } READ(oldval, bool); ret = 0; label_return: return (ret); } /******************************************************************************/ static int -tcache_create_ctl(const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, - void *newp, size_t newlen) +tcache_create_ctl(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp, + size_t *oldlenp, void *newp, size_t newlen) { int ret; - tsd_t *tsd; unsigned tcache_ind; if (!config_tcache) return (ENOENT); - tsd = tsd_fetch(); - - malloc_mutex_lock(&ctl_mtx); + malloc_mutex_lock(tsd_tsdn(tsd), &ctl_mtx); READONLY(); - if (tcaches_create(tsd, &tcache_ind)) { + if (tcaches_create(tsd_tsdn(tsd), 
&tcache_ind)) { ret = EFAULT; goto label_return; } READ(tcache_ind, unsigned); ret = 0; label_return: - malloc_mutex_unlock(&ctl_mtx); + malloc_mutex_unlock(tsd_tsdn(tsd), &ctl_mtx); return (ret); } static int -tcache_flush_ctl(const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, - void *newp, size_t newlen) +tcache_flush_ctl(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp, + size_t *oldlenp, void *newp, size_t newlen) { int ret; - tsd_t *tsd; unsigned tcache_ind; if (!config_tcache) return (ENOENT); - tsd = tsd_fetch(); - WRITEONLY(); tcache_ind = UINT_MAX; WRITE(tcache_ind, unsigned); if (tcache_ind == UINT_MAX) { ret = EFAULT; goto label_return; } tcaches_flush(tsd, tcache_ind); ret = 0; label_return: return (ret); } static int -tcache_destroy_ctl(const size_t *mib, size_t miblen, void *oldp, +tcache_destroy_ctl(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, void *newp, size_t newlen) { int ret; - tsd_t *tsd; unsigned tcache_ind; if (!config_tcache) return (ENOENT); - tsd = tsd_fetch(); - WRITEONLY(); tcache_ind = UINT_MAX; WRITE(tcache_ind, unsigned); if (tcache_ind == UINT_MAX) { ret = EFAULT; goto label_return; } tcaches_destroy(tsd, tcache_ind); ret = 0; label_return: return (ret); } /******************************************************************************/ static void -arena_i_purge(unsigned arena_ind, bool all) +arena_i_purge(tsdn_t *tsdn, unsigned arena_ind, bool all) { - malloc_mutex_lock(&ctl_mtx); + malloc_mutex_lock(tsdn, &ctl_mtx); { unsigned narenas = ctl_stats.narenas; if (arena_ind == narenas) { unsigned i; VARIABLE_ARRAY(arena_t *, tarenas, narenas); for (i = 0; i < narenas; i++) - tarenas[i] = arena_get(i, false); + tarenas[i] = arena_get(tsdn, i, false); /* * No further need to hold ctl_mtx, since narenas and * tarenas contain everything needed below. */ - malloc_mutex_unlock(&ctl_mtx); + malloc_mutex_unlock(tsdn, &ctl_mtx); for (i = 0; i < narenas; i++) { if (tarenas[i] != NULL) - arena_purge(tarenas[i], all); + arena_purge(tsdn, tarenas[i], all); } } else { arena_t *tarena; assert(arena_ind < narenas); - tarena = arena_get(arena_ind, false); + tarena = arena_get(tsdn, arena_ind, false); /* No further need to hold ctl_mtx. 
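arena_i_purge() above treats an index equal to arenas.narenas as a request to operate on every arena, and the arena.<i>.reset handler added just below is driven the same way from user code. A caller-side sketch, assuming the public mallctl() interface; the helper names are illustrative:

#include <malloc_np.h>
#include <stdio.h>

/* Purge unused dirty pages in every arena. */
int
purge_all_arenas(void)
{
        unsigned narenas;
        size_t len = sizeof(narenas);
        char name[64];

        if (mallctl("arenas.narenas", &narenas, &len, NULL, 0) != 0)
                return (-1);
        /* An index equal to narenas is the "all arenas" pseudo-index. */
        snprintf(name, sizeof(name), "arena.%u.purge", narenas);
        return (mallctl(name, NULL, NULL, NULL, 0));
}

/* Discard every allocation made from one explicitly created arena. */
int
reset_arena(unsigned arena_ind)
{
        char name[64];

        /*
         * Only meaningful for arenas obtained via arenas.extend; the handler
         * asserts that the index is not one of the automatic arenas, and it
         * refuses to run under Valgrind or with quarantine enabled.
         */
        snprintf(name, sizeof(name), "arena.%u.reset", arena_ind);
        return (mallctl(name, NULL, NULL, NULL, 0));
}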
*/ - malloc_mutex_unlock(&ctl_mtx); + malloc_mutex_unlock(tsdn, &ctl_mtx); if (tarena != NULL) - arena_purge(tarena, all); + arena_purge(tsdn, tarena, all); } } } static int -arena_i_purge_ctl(const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, - void *newp, size_t newlen) +arena_i_purge_ctl(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp, + size_t *oldlenp, void *newp, size_t newlen) { int ret; READONLY(); WRITEONLY(); - arena_i_purge((unsigned)mib[1], true); + arena_i_purge(tsd_tsdn(tsd), (unsigned)mib[1], true); ret = 0; label_return: return (ret); } static int -arena_i_decay_ctl(const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, - void *newp, size_t newlen) +arena_i_decay_ctl(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp, + size_t *oldlenp, void *newp, size_t newlen) { int ret; READONLY(); WRITEONLY(); - arena_i_purge((unsigned)mib[1], false); + arena_i_purge(tsd_tsdn(tsd), (unsigned)mib[1], false); ret = 0; label_return: return (ret); } static int -arena_i_dss_ctl(const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, - void *newp, size_t newlen) +arena_i_reset_ctl(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp, + size_t *oldlenp, void *newp, size_t newlen) { int ret; + unsigned arena_ind; + arena_t *arena; + + READONLY(); + WRITEONLY(); + + if ((config_valgrind && unlikely(in_valgrind)) || (config_fill && + unlikely(opt_quarantine))) { + ret = EFAULT; + goto label_return; + } + + arena_ind = (unsigned)mib[1]; + if (config_debug) { + malloc_mutex_lock(tsd_tsdn(tsd), &ctl_mtx); + assert(arena_ind < ctl_stats.narenas); + malloc_mutex_unlock(tsd_tsdn(tsd), &ctl_mtx); + } + assert(arena_ind >= opt_narenas); + + arena = arena_get(tsd_tsdn(tsd), arena_ind, false); + + arena_reset(tsd, arena); + + ret = 0; +label_return: + return (ret); +} + +static int +arena_i_dss_ctl(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp, + size_t *oldlenp, void *newp, size_t newlen) +{ + int ret; const char *dss = NULL; unsigned arena_ind = (unsigned)mib[1]; dss_prec_t dss_prec_old = dss_prec_limit; dss_prec_t dss_prec = dss_prec_limit; - malloc_mutex_lock(&ctl_mtx); + malloc_mutex_lock(tsd_tsdn(tsd), &ctl_mtx); WRITE(dss, const char *); if (dss != NULL) { int i; bool match = false; for (i = 0; i < dss_prec_limit; i++) { if (strcmp(dss_prec_names[i], dss) == 0) { dss_prec = i; match = true; break; } } if (!match) { ret = EINVAL; goto label_return; } } if (arena_ind < ctl_stats.narenas) { - arena_t *arena = arena_get(arena_ind, false); + arena_t *arena = arena_get(tsd_tsdn(tsd), arena_ind, false); if (arena == NULL || (dss_prec != dss_prec_limit && - arena_dss_prec_set(arena, dss_prec))) { + arena_dss_prec_set(tsd_tsdn(tsd), arena, dss_prec))) { ret = EFAULT; goto label_return; } - dss_prec_old = arena_dss_prec_get(arena); + dss_prec_old = arena_dss_prec_get(tsd_tsdn(tsd), arena); } else { if (dss_prec != dss_prec_limit && - chunk_dss_prec_set(dss_prec)) { + chunk_dss_prec_set(tsd_tsdn(tsd), dss_prec)) { ret = EFAULT; goto label_return; } - dss_prec_old = chunk_dss_prec_get(); + dss_prec_old = chunk_dss_prec_get(tsd_tsdn(tsd)); } dss = dss_prec_names[dss_prec_old]; READ(dss, const char *); ret = 0; label_return: - malloc_mutex_unlock(&ctl_mtx); + malloc_mutex_unlock(tsd_tsdn(tsd), &ctl_mtx); return (ret); } static int -arena_i_lg_dirty_mult_ctl(const size_t *mib, size_t miblen, void *oldp, - size_t *oldlenp, void *newp, size_t newlen) +arena_i_lg_dirty_mult_ctl(tsd_t *tsd, const size_t *mib, size_t miblen, + void *oldp, size_t *oldlenp, void 
*newp, size_t newlen) { int ret; unsigned arena_ind = (unsigned)mib[1]; arena_t *arena; - arena = arena_get(arena_ind, false); + arena = arena_get(tsd_tsdn(tsd), arena_ind, false); if (arena == NULL) { ret = EFAULT; goto label_return; } if (oldp != NULL && oldlenp != NULL) { - size_t oldval = arena_lg_dirty_mult_get(arena); + size_t oldval = arena_lg_dirty_mult_get(tsd_tsdn(tsd), arena); READ(oldval, ssize_t); } if (newp != NULL) { if (newlen != sizeof(ssize_t)) { ret = EINVAL; goto label_return; } - if (arena_lg_dirty_mult_set(arena, *(ssize_t *)newp)) { + if (arena_lg_dirty_mult_set(tsd_tsdn(tsd), arena, + *(ssize_t *)newp)) { ret = EFAULT; goto label_return; } } ret = 0; label_return: return (ret); } static int -arena_i_decay_time_ctl(const size_t *mib, size_t miblen, void *oldp, +arena_i_decay_time_ctl(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, void *newp, size_t newlen) { int ret; unsigned arena_ind = (unsigned)mib[1]; arena_t *arena; - arena = arena_get(arena_ind, false); + arena = arena_get(tsd_tsdn(tsd), arena_ind, false); if (arena == NULL) { ret = EFAULT; goto label_return; } if (oldp != NULL && oldlenp != NULL) { - size_t oldval = arena_decay_time_get(arena); + size_t oldval = arena_decay_time_get(tsd_tsdn(tsd), arena); READ(oldval, ssize_t); } if (newp != NULL) { if (newlen != sizeof(ssize_t)) { ret = EINVAL; goto label_return; } - if (arena_decay_time_set(arena, *(ssize_t *)newp)) { + if (arena_decay_time_set(tsd_tsdn(tsd), arena, + *(ssize_t *)newp)) { ret = EFAULT; goto label_return; } } ret = 0; label_return: return (ret); } static int -arena_i_chunk_hooks_ctl(const size_t *mib, size_t miblen, void *oldp, - size_t *oldlenp, void *newp, size_t newlen) +arena_i_chunk_hooks_ctl(tsd_t *tsd, const size_t *mib, size_t miblen, + void *oldp, size_t *oldlenp, void *newp, size_t newlen) { int ret; unsigned arena_ind = (unsigned)mib[1]; arena_t *arena; - malloc_mutex_lock(&ctl_mtx); + malloc_mutex_lock(tsd_tsdn(tsd), &ctl_mtx); if (arena_ind < narenas_total_get() && (arena = - arena_get(arena_ind, false)) != NULL) { + arena_get(tsd_tsdn(tsd), arena_ind, false)) != NULL) { if (newp != NULL) { chunk_hooks_t old_chunk_hooks, new_chunk_hooks; WRITE(new_chunk_hooks, chunk_hooks_t); - old_chunk_hooks = chunk_hooks_set(arena, + old_chunk_hooks = chunk_hooks_set(tsd_tsdn(tsd), arena, &new_chunk_hooks); READ(old_chunk_hooks, chunk_hooks_t); } else { - chunk_hooks_t old_chunk_hooks = chunk_hooks_get(arena); + chunk_hooks_t old_chunk_hooks = + chunk_hooks_get(tsd_tsdn(tsd), arena); READ(old_chunk_hooks, chunk_hooks_t); } } else { ret = EFAULT; goto label_return; } ret = 0; label_return: - malloc_mutex_unlock(&ctl_mtx); + malloc_mutex_unlock(tsd_tsdn(tsd), &ctl_mtx); return (ret); } static const ctl_named_node_t * -arena_i_index(const size_t *mib, size_t miblen, size_t i) +arena_i_index(tsdn_t *tsdn, const size_t *mib, size_t miblen, size_t i) { - const ctl_named_node_t * ret; + const ctl_named_node_t *ret; - malloc_mutex_lock(&ctl_mtx); + malloc_mutex_lock(tsdn, &ctl_mtx); if (i > ctl_stats.narenas) { ret = NULL; goto label_return; } ret = super_arena_i_node; label_return: - malloc_mutex_unlock(&ctl_mtx); + malloc_mutex_unlock(tsdn, &ctl_mtx); return (ret); } /******************************************************************************/ static int -arenas_narenas_ctl(const size_t *mib, size_t miblen, void *oldp, +arenas_narenas_ctl(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, void *newp, size_t newlen) { int ret; unsigned narenas; - 
malloc_mutex_lock(&ctl_mtx); + malloc_mutex_lock(tsd_tsdn(tsd), &ctl_mtx); READONLY(); if (*oldlenp != sizeof(unsigned)) { ret = EINVAL; goto label_return; } narenas = ctl_stats.narenas; READ(narenas, unsigned); ret = 0; label_return: - malloc_mutex_unlock(&ctl_mtx); + malloc_mutex_unlock(tsd_tsdn(tsd), &ctl_mtx); return (ret); } static int -arenas_initialized_ctl(const size_t *mib, size_t miblen, void *oldp, +arenas_initialized_ctl(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, void *newp, size_t newlen) { int ret; unsigned nread, i; - malloc_mutex_lock(&ctl_mtx); + malloc_mutex_lock(tsd_tsdn(tsd), &ctl_mtx); READONLY(); if (*oldlenp != ctl_stats.narenas * sizeof(bool)) { ret = EINVAL; nread = (*oldlenp < ctl_stats.narenas * sizeof(bool)) ? (unsigned)(*oldlenp / sizeof(bool)) : ctl_stats.narenas; } else { ret = 0; nread = ctl_stats.narenas; } for (i = 0; i < nread; i++) ((bool *)oldp)[i] = ctl_stats.arenas[i].initialized; label_return: - malloc_mutex_unlock(&ctl_mtx); + malloc_mutex_unlock(tsd_tsdn(tsd), &ctl_mtx); return (ret); } static int -arenas_lg_dirty_mult_ctl(const size_t *mib, size_t miblen, void *oldp, - size_t *oldlenp, void *newp, size_t newlen) +arenas_lg_dirty_mult_ctl(tsd_t *tsd, const size_t *mib, size_t miblen, + void *oldp, size_t *oldlenp, void *newp, size_t newlen) { int ret; if (oldp != NULL && oldlenp != NULL) { size_t oldval = arena_lg_dirty_mult_default_get(); READ(oldval, ssize_t); } if (newp != NULL) { if (newlen != sizeof(ssize_t)) { ret = EINVAL; goto label_return; } if (arena_lg_dirty_mult_default_set(*(ssize_t *)newp)) { ret = EFAULT; goto label_return; } } ret = 0; label_return: return (ret); } static int -arenas_decay_time_ctl(const size_t *mib, size_t miblen, void *oldp, +arenas_decay_time_ctl(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, void *newp, size_t newlen) { int ret; if (oldp != NULL && oldlenp != NULL) { size_t oldval = arena_decay_time_default_get(); READ(oldval, ssize_t); } if (newp != NULL) { if (newlen != sizeof(ssize_t)) { ret = EINVAL; goto label_return; } if (arena_decay_time_default_set(*(ssize_t *)newp)) { ret = EFAULT; goto label_return; } } ret = 0; label_return: return (ret); } CTL_RO_NL_GEN(arenas_quantum, QUANTUM, size_t) CTL_RO_NL_GEN(arenas_page, PAGE, size_t) CTL_RO_NL_CGEN(config_tcache, arenas_tcache_max, tcache_maxclass, size_t) CTL_RO_NL_GEN(arenas_nbins, NBINS, unsigned) CTL_RO_NL_CGEN(config_tcache, arenas_nhbins, nhbins, unsigned) CTL_RO_NL_GEN(arenas_bin_i_size, arena_bin_info[mib[2]].reg_size, size_t) CTL_RO_NL_GEN(arenas_bin_i_nregs, arena_bin_info[mib[2]].nregs, uint32_t) CTL_RO_NL_GEN(arenas_bin_i_run_size, arena_bin_info[mib[2]].run_size, size_t) static const ctl_named_node_t * -arenas_bin_i_index(const size_t *mib, size_t miblen, size_t i) +arenas_bin_i_index(tsdn_t *tsdn, const size_t *mib, size_t miblen, size_t i) { if (i > NBINS) return (NULL); return (super_arenas_bin_i_node); } CTL_RO_NL_GEN(arenas_nlruns, nlclasses, unsigned) CTL_RO_NL_GEN(arenas_lrun_i_size, index2size(NBINS+(szind_t)mib[2]), size_t) static const ctl_named_node_t * -arenas_lrun_i_index(const size_t *mib, size_t miblen, size_t i) +arenas_lrun_i_index(tsdn_t *tsdn, const size_t *mib, size_t miblen, size_t i) { if (i > nlclasses) return (NULL); return (super_arenas_lrun_i_node); } CTL_RO_NL_GEN(arenas_nhchunks, nhclasses, unsigned) CTL_RO_NL_GEN(arenas_hchunk_i_size, index2size(NBINS+nlclasses+(szind_t)mib[2]), size_t) static const ctl_named_node_t * -arenas_hchunk_i_index(const size_t 
*mib, size_t miblen, size_t i) +arenas_hchunk_i_index(tsdn_t *tsdn, const size_t *mib, size_t miblen, size_t i) { if (i > nhclasses) return (NULL); return (super_arenas_hchunk_i_node); } static int -arenas_extend_ctl(const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, - void *newp, size_t newlen) +arenas_extend_ctl(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp, + size_t *oldlenp, void *newp, size_t newlen) { int ret; unsigned narenas; - malloc_mutex_lock(&ctl_mtx); + malloc_mutex_lock(tsd_tsdn(tsd), &ctl_mtx); READONLY(); - if (ctl_grow()) { + if (ctl_grow(tsd_tsdn(tsd))) { ret = EAGAIN; goto label_return; } narenas = ctl_stats.narenas - 1; READ(narenas, unsigned); ret = 0; label_return: - malloc_mutex_unlock(&ctl_mtx); + malloc_mutex_unlock(tsd_tsdn(tsd), &ctl_mtx); return (ret); } /******************************************************************************/ static int -prof_thread_active_init_ctl(const size_t *mib, size_t miblen, void *oldp, - size_t *oldlenp, void *newp, size_t newlen) +prof_thread_active_init_ctl(tsd_t *tsd, const size_t *mib, size_t miblen, + void *oldp, size_t *oldlenp, void *newp, size_t newlen) { int ret; bool oldval; if (!config_prof) return (ENOENT); if (newp != NULL) { if (newlen != sizeof(bool)) { ret = EINVAL; goto label_return; } - oldval = prof_thread_active_init_set(*(bool *)newp); + oldval = prof_thread_active_init_set(tsd_tsdn(tsd), + *(bool *)newp); } else - oldval = prof_thread_active_init_get(); + oldval = prof_thread_active_init_get(tsd_tsdn(tsd)); READ(oldval, bool); ret = 0; label_return: return (ret); } static int -prof_active_ctl(const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, - void *newp, size_t newlen) +prof_active_ctl(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp, + size_t *oldlenp, void *newp, size_t newlen) { int ret; bool oldval; if (!config_prof) return (ENOENT); if (newp != NULL) { if (newlen != sizeof(bool)) { ret = EINVAL; goto label_return; } - oldval = prof_active_set(*(bool *)newp); + oldval = prof_active_set(tsd_tsdn(tsd), *(bool *)newp); } else - oldval = prof_active_get(); + oldval = prof_active_get(tsd_tsdn(tsd)); READ(oldval, bool); ret = 0; label_return: return (ret); } static int -prof_dump_ctl(const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, - void *newp, size_t newlen) +prof_dump_ctl(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp, + size_t *oldlenp, void *newp, size_t newlen) { int ret; const char *filename = NULL; if (!config_prof) return (ENOENT); WRITEONLY(); WRITE(filename, const char *); - if (prof_mdump(filename)) { + if (prof_mdump(tsd, filename)) { ret = EFAULT; goto label_return; } ret = 0; label_return: return (ret); } static int -prof_gdump_ctl(const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, - void *newp, size_t newlen) +prof_gdump_ctl(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp, + size_t *oldlenp, void *newp, size_t newlen) { int ret; bool oldval; if (!config_prof) return (ENOENT); if (newp != NULL) { if (newlen != sizeof(bool)) { ret = EINVAL; goto label_return; } - oldval = prof_gdump_set(*(bool *)newp); + oldval = prof_gdump_set(tsd_tsdn(tsd), *(bool *)newp); } else - oldval = prof_gdump_get(); + oldval = prof_gdump_get(tsd_tsdn(tsd)); READ(oldval, bool); ret = 0; label_return: return (ret); } static int -prof_reset_ctl(const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, - void *newp, size_t newlen) +prof_reset_ctl(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp, + size_t *oldlenp, void 
*newp, size_t newlen) { int ret; size_t lg_sample = lg_prof_sample; - tsd_t *tsd; if (!config_prof) return (ENOENT); WRITEONLY(); WRITE(lg_sample, size_t); if (lg_sample >= (sizeof(uint64_t) << 3)) lg_sample = (sizeof(uint64_t) << 3) - 1; - tsd = tsd_fetch(); + prof_reset(tsd_tsdn(tsd), lg_sample); - prof_reset(tsd, lg_sample); - ret = 0; label_return: return (ret); } CTL_RO_NL_CGEN(config_prof, prof_interval, prof_interval, uint64_t) CTL_RO_NL_CGEN(config_prof, lg_prof_sample, lg_prof_sample, size_t) /******************************************************************************/ CTL_RO_CGEN(config_stats, stats_cactive, &stats_cactive, size_t *) CTL_RO_CGEN(config_stats, stats_allocated, ctl_stats.allocated, size_t) CTL_RO_CGEN(config_stats, stats_active, ctl_stats.active, size_t) CTL_RO_CGEN(config_stats, stats_metadata, ctl_stats.metadata, size_t) CTL_RO_CGEN(config_stats, stats_resident, ctl_stats.resident, size_t) CTL_RO_CGEN(config_stats, stats_mapped, ctl_stats.mapped, size_t) +CTL_RO_CGEN(config_stats, stats_retained, ctl_stats.retained, size_t) CTL_RO_GEN(stats_arenas_i_dss, ctl_stats.arenas[mib[2]].dss, const char *) CTL_RO_GEN(stats_arenas_i_lg_dirty_mult, ctl_stats.arenas[mib[2]].lg_dirty_mult, ssize_t) CTL_RO_GEN(stats_arenas_i_decay_time, ctl_stats.arenas[mib[2]].decay_time, ssize_t) CTL_RO_GEN(stats_arenas_i_nthreads, ctl_stats.arenas[mib[2]].nthreads, unsigned) CTL_RO_GEN(stats_arenas_i_pactive, ctl_stats.arenas[mib[2]].pactive, size_t) CTL_RO_GEN(stats_arenas_i_pdirty, ctl_stats.arenas[mib[2]].pdirty, size_t) CTL_RO_CGEN(config_stats, stats_arenas_i_mapped, ctl_stats.arenas[mib[2]].astats.mapped, size_t) +CTL_RO_CGEN(config_stats, stats_arenas_i_retained, + ctl_stats.arenas[mib[2]].astats.retained, size_t) CTL_RO_CGEN(config_stats, stats_arenas_i_npurge, ctl_stats.arenas[mib[2]].astats.npurge, uint64_t) CTL_RO_CGEN(config_stats, stats_arenas_i_nmadvise, ctl_stats.arenas[mib[2]].astats.nmadvise, uint64_t) CTL_RO_CGEN(config_stats, stats_arenas_i_purged, ctl_stats.arenas[mib[2]].astats.purged, uint64_t) CTL_RO_CGEN(config_stats, stats_arenas_i_metadata_mapped, ctl_stats.arenas[mib[2]].astats.metadata_mapped, size_t) CTL_RO_CGEN(config_stats, stats_arenas_i_metadata_allocated, ctl_stats.arenas[mib[2]].astats.metadata_allocated, size_t) CTL_RO_CGEN(config_stats, stats_arenas_i_small_allocated, ctl_stats.arenas[mib[2]].allocated_small, size_t) CTL_RO_CGEN(config_stats, stats_arenas_i_small_nmalloc, ctl_stats.arenas[mib[2]].nmalloc_small, uint64_t) CTL_RO_CGEN(config_stats, stats_arenas_i_small_ndalloc, ctl_stats.arenas[mib[2]].ndalloc_small, uint64_t) CTL_RO_CGEN(config_stats, stats_arenas_i_small_nrequests, ctl_stats.arenas[mib[2]].nrequests_small, uint64_t) CTL_RO_CGEN(config_stats, stats_arenas_i_large_allocated, ctl_stats.arenas[mib[2]].astats.allocated_large, size_t) CTL_RO_CGEN(config_stats, stats_arenas_i_large_nmalloc, ctl_stats.arenas[mib[2]].astats.nmalloc_large, uint64_t) CTL_RO_CGEN(config_stats, stats_arenas_i_large_ndalloc, ctl_stats.arenas[mib[2]].astats.ndalloc_large, uint64_t) CTL_RO_CGEN(config_stats, stats_arenas_i_large_nrequests, ctl_stats.arenas[mib[2]].astats.nrequests_large, uint64_t) CTL_RO_CGEN(config_stats, stats_arenas_i_huge_allocated, ctl_stats.arenas[mib[2]].astats.allocated_huge, size_t) CTL_RO_CGEN(config_stats, stats_arenas_i_huge_nmalloc, ctl_stats.arenas[mib[2]].astats.nmalloc_huge, uint64_t) CTL_RO_CGEN(config_stats, stats_arenas_i_huge_ndalloc, ctl_stats.arenas[mib[2]].astats.ndalloc_huge, uint64_t) CTL_RO_CGEN(config_stats, 
stats_arenas_i_huge_nrequests, ctl_stats.arenas[mib[2]].astats.nmalloc_huge, uint64_t) /* Intentional. */ CTL_RO_CGEN(config_stats, stats_arenas_i_bins_j_nmalloc, ctl_stats.arenas[mib[2]].bstats[mib[4]].nmalloc, uint64_t) CTL_RO_CGEN(config_stats, stats_arenas_i_bins_j_ndalloc, ctl_stats.arenas[mib[2]].bstats[mib[4]].ndalloc, uint64_t) CTL_RO_CGEN(config_stats, stats_arenas_i_bins_j_nrequests, ctl_stats.arenas[mib[2]].bstats[mib[4]].nrequests, uint64_t) CTL_RO_CGEN(config_stats, stats_arenas_i_bins_j_curregs, ctl_stats.arenas[mib[2]].bstats[mib[4]].curregs, size_t) CTL_RO_CGEN(config_stats && config_tcache, stats_arenas_i_bins_j_nfills, ctl_stats.arenas[mib[2]].bstats[mib[4]].nfills, uint64_t) CTL_RO_CGEN(config_stats && config_tcache, stats_arenas_i_bins_j_nflushes, ctl_stats.arenas[mib[2]].bstats[mib[4]].nflushes, uint64_t) CTL_RO_CGEN(config_stats, stats_arenas_i_bins_j_nruns, ctl_stats.arenas[mib[2]].bstats[mib[4]].nruns, uint64_t) CTL_RO_CGEN(config_stats, stats_arenas_i_bins_j_nreruns, ctl_stats.arenas[mib[2]].bstats[mib[4]].reruns, uint64_t) CTL_RO_CGEN(config_stats, stats_arenas_i_bins_j_curruns, ctl_stats.arenas[mib[2]].bstats[mib[4]].curruns, size_t) static const ctl_named_node_t * -stats_arenas_i_bins_j_index(const size_t *mib, size_t miblen, size_t j) +stats_arenas_i_bins_j_index(tsdn_t *tsdn, const size_t *mib, size_t miblen, + size_t j) { if (j > NBINS) return (NULL); return (super_stats_arenas_i_bins_j_node); } CTL_RO_CGEN(config_stats, stats_arenas_i_lruns_j_nmalloc, ctl_stats.arenas[mib[2]].lstats[mib[4]].nmalloc, uint64_t) CTL_RO_CGEN(config_stats, stats_arenas_i_lruns_j_ndalloc, ctl_stats.arenas[mib[2]].lstats[mib[4]].ndalloc, uint64_t) CTL_RO_CGEN(config_stats, stats_arenas_i_lruns_j_nrequests, ctl_stats.arenas[mib[2]].lstats[mib[4]].nrequests, uint64_t) CTL_RO_CGEN(config_stats, stats_arenas_i_lruns_j_curruns, ctl_stats.arenas[mib[2]].lstats[mib[4]].curruns, size_t) static const ctl_named_node_t * -stats_arenas_i_lruns_j_index(const size_t *mib, size_t miblen, size_t j) +stats_arenas_i_lruns_j_index(tsdn_t *tsdn, const size_t *mib, size_t miblen, + size_t j) { if (j > nlclasses) return (NULL); return (super_stats_arenas_i_lruns_j_node); } CTL_RO_CGEN(config_stats, stats_arenas_i_hchunks_j_nmalloc, ctl_stats.arenas[mib[2]].hstats[mib[4]].nmalloc, uint64_t) CTL_RO_CGEN(config_stats, stats_arenas_i_hchunks_j_ndalloc, ctl_stats.arenas[mib[2]].hstats[mib[4]].ndalloc, uint64_t) CTL_RO_CGEN(config_stats, stats_arenas_i_hchunks_j_nrequests, ctl_stats.arenas[mib[2]].hstats[mib[4]].nmalloc, /* Intentional. 
*/ uint64_t) CTL_RO_CGEN(config_stats, stats_arenas_i_hchunks_j_curhchunks, ctl_stats.arenas[mib[2]].hstats[mib[4]].curhchunks, size_t) static const ctl_named_node_t * -stats_arenas_i_hchunks_j_index(const size_t *mib, size_t miblen, size_t j) +stats_arenas_i_hchunks_j_index(tsdn_t *tsdn, const size_t *mib, size_t miblen, + size_t j) { if (j > nhclasses) return (NULL); return (super_stats_arenas_i_hchunks_j_node); } static const ctl_named_node_t * -stats_arenas_i_index(const size_t *mib, size_t miblen, size_t i) +stats_arenas_i_index(tsdn_t *tsdn, const size_t *mib, size_t miblen, size_t i) { const ctl_named_node_t * ret; - malloc_mutex_lock(&ctl_mtx); + malloc_mutex_lock(tsdn, &ctl_mtx); if (i > ctl_stats.narenas || !ctl_stats.arenas[i].initialized) { ret = NULL; goto label_return; } ret = super_stats_arenas_i_node; label_return: - malloc_mutex_unlock(&ctl_mtx); + malloc_mutex_unlock(tsdn, &ctl_mtx); return (ret); } Index: head/contrib/jemalloc/src/huge.c =================================================================== --- head/contrib/jemalloc/src/huge.c (revision 299586) +++ head/contrib/jemalloc/src/huge.c (revision 299587) @@ -1,448 +1,471 @@ #define JEMALLOC_HUGE_C_ #include "jemalloc/internal/jemalloc_internal.h" /******************************************************************************/ static extent_node_t * huge_node_get(const void *ptr) { extent_node_t *node; node = chunk_lookup(ptr, true); assert(!extent_node_achunk_get(node)); return (node); } static bool -huge_node_set(const void *ptr, extent_node_t *node) +huge_node_set(tsdn_t *tsdn, const void *ptr, extent_node_t *node) { assert(extent_node_addr_get(node) == ptr); assert(!extent_node_achunk_get(node)); - return (chunk_register(ptr, node)); + return (chunk_register(tsdn, ptr, node)); } static void +huge_node_reset(tsdn_t *tsdn, const void *ptr, extent_node_t *node) +{ + bool err; + + err = huge_node_set(tsdn, ptr, node); + assert(!err); +} + +static void huge_node_unset(const void *ptr, const extent_node_t *node) { chunk_deregister(ptr, node); } void * -huge_malloc(tsd_t *tsd, arena_t *arena, size_t usize, bool zero, - tcache_t *tcache) +huge_malloc(tsdn_t *tsdn, arena_t *arena, size_t usize, bool zero) { assert(usize == s2u(usize)); - return (huge_palloc(tsd, arena, usize, chunksize, zero, tcache)); + return (huge_palloc(tsdn, arena, usize, chunksize, zero)); } void * -huge_palloc(tsd_t *tsd, arena_t *arena, size_t usize, size_t alignment, - bool zero, tcache_t *tcache) +huge_palloc(tsdn_t *tsdn, arena_t *arena, size_t usize, size_t alignment, + bool zero) { void *ret; size_t ausize; extent_node_t *node; bool is_zeroed; /* Allocate one or more contiguous chunks for this request. */ + assert(!tsdn_null(tsdn) || arena != NULL); + ausize = sa2u(usize, alignment); if (unlikely(ausize == 0 || ausize > HUGE_MAXCLASS)) return (NULL); assert(ausize >= chunksize); /* Allocate an extent node with which to track the chunk. */ - node = ipallocztm(tsd, CACHELINE_CEILING(sizeof(extent_node_t)), - CACHELINE, false, tcache, true, arena); + node = ipallocztm(tsdn, CACHELINE_CEILING(sizeof(extent_node_t)), + CACHELINE, false, NULL, true, arena_ichoose(tsdn, arena)); if (node == NULL) return (NULL); /* * Copy zero into is_zeroed and pass the copy to chunk_alloc(), so that * it is possible to make correct junk/zero fill decisions below. 
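The tsdn_t threaded through huge_palloc() and the rest of this import is either a real tsd_t or NULL for contexts that must not touch TLS; tsd_tsdn() wraps a tsd, tsdn_null() tests for the NULL case, and tsdn_tsd() unwraps when a full tsd is known to be present. The recurring idiom looks roughly like the following; the function itself is illustrative and not part of the patch:

static void
tsdn_usage_sketch(tsdn_t *tsdn, arena_t *arena)
{
        /* Mutex operations only need the tsdn (witness bookkeeping). */
        malloc_mutex_lock(tsdn, &arena->huge_mtx);
        malloc_mutex_unlock(tsdn, &arena->huge_mtx);

        /* Per-thread arena selection needs a full tsd_t. */
        if (!tsdn_null(tsdn))
                arena = arena_choose(tsdn_tsd(tsdn), arena);
        (void)arena;
}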
*/ is_zeroed = zero; - arena = arena_choose(tsd, arena); - if (unlikely(arena == NULL) || (ret = arena_chunk_alloc_huge(arena, - usize, alignment, &is_zeroed)) == NULL) { - idalloctm(tsd, node, tcache, true, true); + if (likely(!tsdn_null(tsdn))) + arena = arena_choose(tsdn_tsd(tsdn), arena); + if (unlikely(arena == NULL) || (ret = arena_chunk_alloc_huge(tsdn, + arena, usize, alignment, &is_zeroed)) == NULL) { + idalloctm(tsdn, node, NULL, true, true); return (NULL); } extent_node_init(node, arena, ret, usize, is_zeroed, true); - if (huge_node_set(ret, node)) { - arena_chunk_dalloc_huge(arena, ret, usize); - idalloctm(tsd, node, tcache, true, true); + if (huge_node_set(tsdn, ret, node)) { + arena_chunk_dalloc_huge(tsdn, arena, ret, usize); + idalloctm(tsdn, node, NULL, true, true); return (NULL); } /* Insert node into huge. */ - malloc_mutex_lock(&arena->huge_mtx); + malloc_mutex_lock(tsdn, &arena->huge_mtx); ql_elm_new(node, ql_link); ql_tail_insert(&arena->huge, node, ql_link); - malloc_mutex_unlock(&arena->huge_mtx); + malloc_mutex_unlock(tsdn, &arena->huge_mtx); if (zero || (config_fill && unlikely(opt_zero))) { if (!is_zeroed) memset(ret, 0, usize); } else if (config_fill && unlikely(opt_junk_alloc)) - memset(ret, 0xa5, usize); + memset(ret, JEMALLOC_ALLOC_JUNK, usize); - arena_decay_tick(tsd, arena); + arena_decay_tick(tsdn, arena); return (ret); } #ifdef JEMALLOC_JET #undef huge_dalloc_junk #define huge_dalloc_junk JEMALLOC_N(huge_dalloc_junk_impl) #endif static void -huge_dalloc_junk(void *ptr, size_t usize) +huge_dalloc_junk(tsdn_t *tsdn, void *ptr, size_t usize) { if (config_fill && have_dss && unlikely(opt_junk_free)) { /* * Only bother junk filling if the chunk isn't about to be * unmapped. */ - if (!config_munmap || (have_dss && chunk_in_dss(ptr))) - memset(ptr, 0x5a, usize); + if (!config_munmap || (have_dss && chunk_in_dss(tsdn, ptr))) + memset(ptr, JEMALLOC_FREE_JUNK, usize); } } #ifdef JEMALLOC_JET #undef huge_dalloc_junk #define huge_dalloc_junk JEMALLOC_N(huge_dalloc_junk) huge_dalloc_junk_t *huge_dalloc_junk = JEMALLOC_N(huge_dalloc_junk_impl); #endif static void -huge_ralloc_no_move_similar(void *ptr, size_t oldsize, size_t usize_min, - size_t usize_max, bool zero) +huge_ralloc_no_move_similar(tsdn_t *tsdn, void *ptr, size_t oldsize, + size_t usize_min, size_t usize_max, bool zero) { size_t usize, usize_next; extent_node_t *node; arena_t *arena; chunk_hooks_t chunk_hooks = CHUNK_HOOKS_INITIALIZER; bool pre_zeroed, post_zeroed; /* Increase usize to incorporate extra. */ for (usize = usize_min; usize < usize_max && (usize_next = s2u(usize+1)) <= oldsize; usize = usize_next) ; /* Do nothing. */ if (oldsize == usize) return; node = huge_node_get(ptr); arena = extent_node_arena_get(node); pre_zeroed = extent_node_zeroed_get(node); /* Fill if necessary (shrinking). */ if (oldsize > usize) { size_t sdiff = oldsize - usize; if (config_fill && unlikely(opt_junk_free)) { - memset((void *)((uintptr_t)ptr + usize), 0x5a, sdiff); + memset((void *)((uintptr_t)ptr + usize), + JEMALLOC_FREE_JUNK, sdiff); post_zeroed = false; } else { - post_zeroed = !chunk_purge_wrapper(arena, &chunk_hooks, - ptr, CHUNK_CEILING(oldsize), usize, sdiff); + post_zeroed = !chunk_purge_wrapper(tsdn, arena, + &chunk_hooks, ptr, CHUNK_CEILING(oldsize), usize, + sdiff); } } else post_zeroed = pre_zeroed; - malloc_mutex_lock(&arena->huge_mtx); + malloc_mutex_lock(tsdn, &arena->huge_mtx); /* Update the size of the huge allocation. 
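The hunks that follow bracket each extent_node_size_set() on a live huge allocation with a huge_node_unset()/huge_node_reset() pair, so the chunk registry is never left pointing at a node whose size is mid-update. Condensed into a single hypothetical helper (huge_resize_node() does not exist in the tree; it only restates the pattern):

static void
huge_resize_node(tsdn_t *tsdn, arena_t *arena, void *ptr, extent_node_t *node,
    size_t newsize, bool zeroed)
{
        malloc_mutex_lock(tsdn, &arena->huge_mtx);
        huge_node_unset(ptr, node);
        extent_node_size_set(node, newsize);
        huge_node_reset(tsdn, ptr, node);   /* re-register; asserts success */
        extent_node_zeroed_set(node, zeroed);
        malloc_mutex_unlock(tsdn, &arena->huge_mtx);
}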
*/ + huge_node_unset(ptr, node); assert(extent_node_size_get(node) != usize); extent_node_size_set(node, usize); + huge_node_reset(tsdn, ptr, node); /* Update zeroed. */ extent_node_zeroed_set(node, post_zeroed); - malloc_mutex_unlock(&arena->huge_mtx); + malloc_mutex_unlock(tsdn, &arena->huge_mtx); - arena_chunk_ralloc_huge_similar(arena, ptr, oldsize, usize); + arena_chunk_ralloc_huge_similar(tsdn, arena, ptr, oldsize, usize); /* Fill if necessary (growing). */ if (oldsize < usize) { if (zero || (config_fill && unlikely(opt_zero))) { if (!pre_zeroed) { memset((void *)((uintptr_t)ptr + oldsize), 0, usize - oldsize); } } else if (config_fill && unlikely(opt_junk_alloc)) { - memset((void *)((uintptr_t)ptr + oldsize), 0xa5, usize - - oldsize); + memset((void *)((uintptr_t)ptr + oldsize), + JEMALLOC_ALLOC_JUNK, usize - oldsize); } } } static bool -huge_ralloc_no_move_shrink(void *ptr, size_t oldsize, size_t usize) +huge_ralloc_no_move_shrink(tsdn_t *tsdn, void *ptr, size_t oldsize, + size_t usize) { extent_node_t *node; arena_t *arena; chunk_hooks_t chunk_hooks; size_t cdiff; bool pre_zeroed, post_zeroed; node = huge_node_get(ptr); arena = extent_node_arena_get(node); pre_zeroed = extent_node_zeroed_get(node); - chunk_hooks = chunk_hooks_get(arena); + chunk_hooks = chunk_hooks_get(tsdn, arena); assert(oldsize > usize); /* Split excess chunks. */ cdiff = CHUNK_CEILING(oldsize) - CHUNK_CEILING(usize); if (cdiff != 0 && chunk_hooks.split(ptr, CHUNK_CEILING(oldsize), CHUNK_CEILING(usize), cdiff, true, arena->ind)) return (true); if (oldsize > usize) { size_t sdiff = oldsize - usize; if (config_fill && unlikely(opt_junk_free)) { - huge_dalloc_junk((void *)((uintptr_t)ptr + usize), + huge_dalloc_junk(tsdn, (void *)((uintptr_t)ptr + usize), sdiff); post_zeroed = false; } else { - post_zeroed = !chunk_purge_wrapper(arena, &chunk_hooks, - CHUNK_ADDR2BASE((uintptr_t)ptr + usize), - CHUNK_CEILING(oldsize), + post_zeroed = !chunk_purge_wrapper(tsdn, arena, + &chunk_hooks, CHUNK_ADDR2BASE((uintptr_t)ptr + + usize), CHUNK_CEILING(oldsize), CHUNK_ADDR2OFFSET((uintptr_t)ptr + usize), sdiff); } } else post_zeroed = pre_zeroed; - malloc_mutex_lock(&arena->huge_mtx); + malloc_mutex_lock(tsdn, &arena->huge_mtx); /* Update the size of the huge allocation. */ + huge_node_unset(ptr, node); extent_node_size_set(node, usize); + huge_node_reset(tsdn, ptr, node); /* Update zeroed. */ extent_node_zeroed_set(node, post_zeroed); - malloc_mutex_unlock(&arena->huge_mtx); + malloc_mutex_unlock(tsdn, &arena->huge_mtx); /* Zap the excess chunks. */ - arena_chunk_ralloc_huge_shrink(arena, ptr, oldsize, usize); + arena_chunk_ralloc_huge_shrink(tsdn, arena, ptr, oldsize, usize); return (false); } static bool -huge_ralloc_no_move_expand(void *ptr, size_t oldsize, size_t usize, bool zero) { +huge_ralloc_no_move_expand(tsdn_t *tsdn, void *ptr, size_t oldsize, + size_t usize, bool zero) { extent_node_t *node; arena_t *arena; bool is_zeroed_subchunk, is_zeroed_chunk; node = huge_node_get(ptr); arena = extent_node_arena_get(node); - malloc_mutex_lock(&arena->huge_mtx); + malloc_mutex_lock(tsdn, &arena->huge_mtx); is_zeroed_subchunk = extent_node_zeroed_get(node); - malloc_mutex_unlock(&arena->huge_mtx); + malloc_mutex_unlock(tsdn, &arena->huge_mtx); /* * Copy zero into is_zeroed_chunk and pass the copy to chunk_alloc(), so * that it is possible to make correct junk/zero fill decisions below. 
*/ is_zeroed_chunk = zero; - if (arena_chunk_ralloc_huge_expand(arena, ptr, oldsize, usize, + if (arena_chunk_ralloc_huge_expand(tsdn, arena, ptr, oldsize, usize, &is_zeroed_chunk)) return (true); - malloc_mutex_lock(&arena->huge_mtx); + malloc_mutex_lock(tsdn, &arena->huge_mtx); /* Update the size of the huge allocation. */ + huge_node_unset(ptr, node); extent_node_size_set(node, usize); - malloc_mutex_unlock(&arena->huge_mtx); + huge_node_reset(tsdn, ptr, node); + malloc_mutex_unlock(tsdn, &arena->huge_mtx); if (zero || (config_fill && unlikely(opt_zero))) { if (!is_zeroed_subchunk) { memset((void *)((uintptr_t)ptr + oldsize), 0, CHUNK_CEILING(oldsize) - oldsize); } if (!is_zeroed_chunk) { memset((void *)((uintptr_t)ptr + CHUNK_CEILING(oldsize)), 0, usize - CHUNK_CEILING(oldsize)); } } else if (config_fill && unlikely(opt_junk_alloc)) { - memset((void *)((uintptr_t)ptr + oldsize), 0xa5, usize - - oldsize); + memset((void *)((uintptr_t)ptr + oldsize), JEMALLOC_ALLOC_JUNK, + usize - oldsize); } return (false); } bool -huge_ralloc_no_move(tsd_t *tsd, void *ptr, size_t oldsize, size_t usize_min, +huge_ralloc_no_move(tsdn_t *tsdn, void *ptr, size_t oldsize, size_t usize_min, size_t usize_max, bool zero) { assert(s2u(oldsize) == oldsize); /* The following should have been caught by callers. */ assert(usize_min > 0 && usize_max <= HUGE_MAXCLASS); /* Both allocations must be huge to avoid a move. */ if (oldsize < chunksize || usize_max < chunksize) return (true); if (CHUNK_CEILING(usize_max) > CHUNK_CEILING(oldsize)) { /* Attempt to expand the allocation in-place. */ - if (!huge_ralloc_no_move_expand(ptr, oldsize, usize_max, + if (!huge_ralloc_no_move_expand(tsdn, ptr, oldsize, usize_max, zero)) { - arena_decay_tick(tsd, huge_aalloc(ptr)); + arena_decay_tick(tsdn, huge_aalloc(ptr)); return (false); } /* Try again, this time with usize_min. */ if (usize_min < usize_max && CHUNK_CEILING(usize_min) > - CHUNK_CEILING(oldsize) && huge_ralloc_no_move_expand(ptr, - oldsize, usize_min, zero)) { - arena_decay_tick(tsd, huge_aalloc(ptr)); + CHUNK_CEILING(oldsize) && huge_ralloc_no_move_expand(tsdn, + ptr, oldsize, usize_min, zero)) { + arena_decay_tick(tsdn, huge_aalloc(ptr)); return (false); } } /* * Avoid moving the allocation if the existing chunk size accommodates * the new size. */ if (CHUNK_CEILING(oldsize) >= CHUNK_CEILING(usize_min) && CHUNK_CEILING(oldsize) <= CHUNK_CEILING(usize_max)) { - huge_ralloc_no_move_similar(ptr, oldsize, usize_min, usize_max, - zero); - arena_decay_tick(tsd, huge_aalloc(ptr)); + huge_ralloc_no_move_similar(tsdn, ptr, oldsize, usize_min, + usize_max, zero); + arena_decay_tick(tsdn, huge_aalloc(ptr)); return (false); } /* Attempt to shrink the allocation in-place. 
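huge_ralloc_no_move() is what xallocx() and rallocx() reach when both the old and new sizes are huge, and the expand/shrink/similar helpers above let the resize happen without moving the chunk. A caller-side sketch using the public *allocx() API, with sizes chosen to land in the huge class under the default 2 MiB chunk size:

#include <malloc_np.h>

void
grow_huge_in_place(void)
{
        void *p = mallocx((size_t)8 << 20, 0);  /* 8 MiB: huge class */
        size_t got;

        if (p == NULL)
                return;
        /* Try to grow to 16 MiB without moving; returns the usable size. */
        got = xallocx(p, (size_t)16 << 20, 0, 0);
        if (got < ((size_t)16 << 20)) {
                /* In-place expansion failed; fall back to rallocx() etc. */
        }
        dallocx(p, 0);
}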
*/ if (CHUNK_CEILING(oldsize) > CHUNK_CEILING(usize_max)) { - if (!huge_ralloc_no_move_shrink(ptr, oldsize, usize_max)) { - arena_decay_tick(tsd, huge_aalloc(ptr)); + if (!huge_ralloc_no_move_shrink(tsdn, ptr, oldsize, + usize_max)) { + arena_decay_tick(tsdn, huge_aalloc(ptr)); return (false); } } return (true); } static void * -huge_ralloc_move_helper(tsd_t *tsd, arena_t *arena, size_t usize, - size_t alignment, bool zero, tcache_t *tcache) +huge_ralloc_move_helper(tsdn_t *tsdn, arena_t *arena, size_t usize, + size_t alignment, bool zero) { if (alignment <= chunksize) - return (huge_malloc(tsd, arena, usize, zero, tcache)); - return (huge_palloc(tsd, arena, usize, alignment, zero, tcache)); + return (huge_malloc(tsdn, arena, usize, zero)); + return (huge_palloc(tsdn, arena, usize, alignment, zero)); } void * -huge_ralloc(tsd_t *tsd, arena_t *arena, void *ptr, size_t oldsize, size_t usize, - size_t alignment, bool zero, tcache_t *tcache) +huge_ralloc(tsd_t *tsd, arena_t *arena, void *ptr, size_t oldsize, + size_t usize, size_t alignment, bool zero, tcache_t *tcache) { void *ret; size_t copysize; /* The following should have been caught by callers. */ assert(usize > 0 && usize <= HUGE_MAXCLASS); /* Try to avoid moving the allocation. */ - if (!huge_ralloc_no_move(tsd, ptr, oldsize, usize, usize, zero)) + if (!huge_ralloc_no_move(tsd_tsdn(tsd), ptr, oldsize, usize, usize, + zero)) return (ptr); /* * usize and oldsize are different enough that we need to use a * different size class. In that case, fall back to allocating new * space and copying. */ - ret = huge_ralloc_move_helper(tsd, arena, usize, alignment, zero, - tcache); + ret = huge_ralloc_move_helper(tsd_tsdn(tsd), arena, usize, alignment, + zero); if (ret == NULL) return (NULL); copysize = (usize < oldsize) ? 
usize : oldsize; memcpy(ret, ptr, copysize); - isqalloc(tsd, ptr, oldsize, tcache); + isqalloc(tsd, ptr, oldsize, tcache, true); return (ret); } void -huge_dalloc(tsd_t *tsd, void *ptr, tcache_t *tcache) +huge_dalloc(tsdn_t *tsdn, void *ptr) { extent_node_t *node; arena_t *arena; node = huge_node_get(ptr); arena = extent_node_arena_get(node); huge_node_unset(ptr, node); - malloc_mutex_lock(&arena->huge_mtx); + malloc_mutex_lock(tsdn, &arena->huge_mtx); ql_remove(&arena->huge, node, ql_link); - malloc_mutex_unlock(&arena->huge_mtx); + malloc_mutex_unlock(tsdn, &arena->huge_mtx); - huge_dalloc_junk(extent_node_addr_get(node), + huge_dalloc_junk(tsdn, extent_node_addr_get(node), extent_node_size_get(node)); - arena_chunk_dalloc_huge(extent_node_arena_get(node), + arena_chunk_dalloc_huge(tsdn, extent_node_arena_get(node), extent_node_addr_get(node), extent_node_size_get(node)); - idalloctm(tsd, node, tcache, true, true); + idalloctm(tsdn, node, NULL, true, true); - arena_decay_tick(tsd, arena); + arena_decay_tick(tsdn, arena); } arena_t * huge_aalloc(const void *ptr) { return (extent_node_arena_get(huge_node_get(ptr))); } size_t -huge_salloc(const void *ptr) +huge_salloc(tsdn_t *tsdn, const void *ptr) { size_t size; extent_node_t *node; arena_t *arena; node = huge_node_get(ptr); arena = extent_node_arena_get(node); - malloc_mutex_lock(&arena->huge_mtx); + malloc_mutex_lock(tsdn, &arena->huge_mtx); size = extent_node_size_get(node); - malloc_mutex_unlock(&arena->huge_mtx); + malloc_mutex_unlock(tsdn, &arena->huge_mtx); return (size); } prof_tctx_t * -huge_prof_tctx_get(const void *ptr) +huge_prof_tctx_get(tsdn_t *tsdn, const void *ptr) { prof_tctx_t *tctx; extent_node_t *node; arena_t *arena; node = huge_node_get(ptr); arena = extent_node_arena_get(node); - malloc_mutex_lock(&arena->huge_mtx); + malloc_mutex_lock(tsdn, &arena->huge_mtx); tctx = extent_node_prof_tctx_get(node); - malloc_mutex_unlock(&arena->huge_mtx); + malloc_mutex_unlock(tsdn, &arena->huge_mtx); return (tctx); } void -huge_prof_tctx_set(const void *ptr, prof_tctx_t *tctx) +huge_prof_tctx_set(tsdn_t *tsdn, const void *ptr, prof_tctx_t *tctx) { extent_node_t *node; arena_t *arena; node = huge_node_get(ptr); arena = extent_node_arena_get(node); - malloc_mutex_lock(&arena->huge_mtx); + malloc_mutex_lock(tsdn, &arena->huge_mtx); extent_node_prof_tctx_set(node, tctx); - malloc_mutex_unlock(&arena->huge_mtx); + malloc_mutex_unlock(tsdn, &arena->huge_mtx); } void -huge_prof_tctx_reset(const void *ptr) +huge_prof_tctx_reset(tsdn_t *tsdn, const void *ptr) { - huge_prof_tctx_set(ptr, (prof_tctx_t *)(uintptr_t)1U); + huge_prof_tctx_set(tsdn, ptr, (prof_tctx_t *)(uintptr_t)1U); } Index: head/contrib/jemalloc/src/jemalloc.c =================================================================== --- head/contrib/jemalloc/src/jemalloc.c (revision 299586) +++ head/contrib/jemalloc/src/jemalloc.c (revision 299587) @@ -1,2832 +1,2929 @@ #define JEMALLOC_C_ #include "jemalloc/internal/jemalloc_internal.h" /******************************************************************************/ /* Data. */ /* Work around : */ const char *__malloc_options_1_0 = NULL; __sym_compat(_malloc_options, __malloc_options_1_0, FBSD_1.0); /* Runtime configuration options. 
*/ const char *je_malloc_conf JEMALLOC_ATTR(weak); bool opt_abort = #ifdef JEMALLOC_DEBUG true #else false #endif ; const char *opt_junk = #if (defined(JEMALLOC_DEBUG) && defined(JEMALLOC_FILL)) "true" #else "false" #endif ; bool opt_junk_alloc = #if (defined(JEMALLOC_DEBUG) && defined(JEMALLOC_FILL)) true #else false #endif ; bool opt_junk_free = #if (defined(JEMALLOC_DEBUG) && defined(JEMALLOC_FILL)) true #else false #endif ; size_t opt_quarantine = ZU(0); bool opt_redzone = false; bool opt_utrace = false; bool opt_xmalloc = false; bool opt_zero = false; unsigned opt_narenas = 0; /* Initialized to true if the process is running inside Valgrind. */ bool in_valgrind; unsigned ncpus; /* Protects arenas initialization. */ static malloc_mutex_t arenas_lock; /* * Arenas that are used to service external requests. Not all elements of the * arenas array are necessarily used; arenas are created lazily as needed. * * arenas[0..narenas_auto) are used for automatic multiplexing of threads and * arenas. arenas[narenas_auto..narenas_total) are only used if the application * takes some action to create them and allocate from them. */ arena_t **arenas; static unsigned narenas_total; /* Use narenas_total_*(). */ static arena_t *a0; /* arenas[0]; read-only after initialization. */ -static unsigned narenas_auto; /* Read-only after initialization. */ +unsigned narenas_auto; /* Read-only after initialization. */ typedef enum { malloc_init_uninitialized = 3, malloc_init_a0_initialized = 2, malloc_init_recursible = 1, malloc_init_initialized = 0 /* Common case --> jnz. */ } malloc_init_t; static malloc_init_t malloc_init_state = malloc_init_uninitialized; -/* 0 should be the common case. Set to true to trigger initialization. */ +/* False should be the common case. Set to true to trigger initialization. */ static bool malloc_slow = true; -/* When malloc_slow != 0, set the corresponding bits for sanity check. */ +/* When malloc_slow is true, set the corresponding bits for sanity check. */ enum { flag_opt_junk_alloc = (1U), flag_opt_junk_free = (1U << 1), flag_opt_quarantine = (1U << 2), flag_opt_zero = (1U << 3), flag_opt_utrace = (1U << 4), flag_in_valgrind = (1U << 5), flag_opt_xmalloc = (1U << 6) }; static uint8_t malloc_slow_flags; /* Last entry for overflow detection only. */ JEMALLOC_ALIGNED(CACHELINE) const size_t index2size_tab[NSIZES+1] = { #define SC(index, lg_grp, lg_delta, ndelta, bin, lg_delta_lookup) \ ((ZU(1)<= 0x0600 static malloc_mutex_t init_lock = SRWLOCK_INIT; #else static malloc_mutex_t init_lock; static bool init_lock_initialized = false; JEMALLOC_ATTR(constructor) static void WINAPI _init_init_lock(void) { /* If another constructor in the same binary is using mallctl to * e.g. setup chunk hooks, it may end up running before this one, * and malloc_init_hard will crash trying to lock the uninitialized * lock. So we force an initialization of the lock in * malloc_init_hard as well. We don't try to care about atomicity * of the accessed to the init_lock_initialized boolean, since it * really only matters early in the process creation, before any * separate thread normally starts doing anything. 
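The flag_opt_* bits above exist so the allocation fast path can test a single malloc_slow boolean instead of every rarely-enabled option. The helper that fills malloc_slow_flags sits outside the hunks shown here; its effect is roughly the following sketch, not the actual function body:

static void
malloc_slow_flags_sketch(void)
{
        malloc_slow_flags = (opt_junk_alloc ? flag_opt_junk_alloc : 0) |
            (opt_junk_free ? flag_opt_junk_free : 0) |
            (opt_quarantine ? flag_opt_quarantine : 0) |
            (opt_zero ? flag_opt_zero : 0) |
            (opt_utrace ? flag_opt_utrace : 0) |
            (opt_xmalloc ? flag_opt_xmalloc : 0) |
            (in_valgrind ? flag_in_valgrind : 0);
        malloc_slow = (malloc_slow_flags != 0);
}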
*/ if (!init_lock_initialized) - malloc_mutex_init(&init_lock); + malloc_mutex_init(&init_lock, "init", WITNESS_RANK_INIT); init_lock_initialized = true; } #ifdef _MSC_VER # pragma section(".CRT$XCU", read) JEMALLOC_SECTION(".CRT$XCU") JEMALLOC_ATTR(used) static const void (WINAPI *init_init_lock)(void) = _init_init_lock; #endif #endif #else static malloc_mutex_t init_lock = MALLOC_MUTEX_INITIALIZER; #endif typedef struct { void *p; /* Input pointer (as in realloc(p, s)). */ size_t s; /* Request size. */ void *r; /* Result pointer. */ } malloc_utrace_t; #ifdef JEMALLOC_UTRACE # define UTRACE(a, b, c) do { \ if (unlikely(opt_utrace)) { \ int utrace_serrno = errno; \ malloc_utrace_t ut; \ ut.p = (a); \ ut.s = (b); \ ut.r = (c); \ utrace(&ut, sizeof(ut)); \ errno = utrace_serrno; \ } \ } while (0) #else # define UTRACE(a, b, c) #endif /******************************************************************************/ /* * Function prototypes for static functions that are referenced prior to * definition. */ static bool malloc_init_hard_a0(void); static bool malloc_init_hard(void); /******************************************************************************/ /* * Begin miscellaneous support functions. */ JEMALLOC_ALWAYS_INLINE_C bool malloc_initialized(void) { return (malloc_init_state == malloc_init_initialized); } JEMALLOC_ALWAYS_INLINE_C void malloc_thread_init(void) { /* * TSD initialization can't be safely done as a side effect of * deallocation, because it is possible for a thread to do nothing but * deallocate its TLS data via free(), in which case writing to TLS * would cause write-after-free memory corruption. The quarantine * facility *only* gets used as a side effect of deallocation, so make * a best effort attempt at initializing its TSD by hooking all * allocation events. */ if (config_fill && unlikely(opt_quarantine)) quarantine_alloc_hook(); } JEMALLOC_ALWAYS_INLINE_C bool malloc_init_a0(void) { if (unlikely(malloc_init_state == malloc_init_uninitialized)) return (malloc_init_hard_a0()); return (false); } JEMALLOC_ALWAYS_INLINE_C bool malloc_init(void) { if (unlikely(!malloc_initialized()) && malloc_init_hard()) return (true); malloc_thread_init(); return (false); } /* - * The a0*() functions are used instead of i[mcd]alloc() in situations that + * The a0*() functions are used instead of i{d,}alloc() in situations that * cannot tolerate TLS variable access. */ static void * a0ialloc(size_t size, bool zero, bool is_metadata) { if (unlikely(malloc_init_a0())) return (NULL); - return (iallocztm(NULL, size, size2index(size), zero, false, - is_metadata, arena_get(0, false), true)); + return (iallocztm(TSDN_NULL, size, size2index(size), zero, NULL, + is_metadata, arena_get(TSDN_NULL, 0, true), true)); } static void a0idalloc(void *ptr, bool is_metadata) { - idalloctm(NULL, ptr, false, is_metadata, true); + idalloctm(TSDN_NULL, ptr, false, is_metadata, true); } void * a0malloc(size_t size) { return (a0ialloc(size, false, true)); } void a0dalloc(void *ptr) { a0idalloc(ptr, true); } /* * FreeBSD's libc uses the bootstrap_*() functions in bootstrap-senstive * situations that cannot tolerate TLS variable access (TLS allocation and very * early internal data structure initialization). 
*/ void * bootstrap_malloc(size_t size) { if (unlikely(size == 0)) size = 1; return (a0ialloc(size, false, false)); } void * bootstrap_calloc(size_t num, size_t size) { size_t num_size; num_size = num * size; if (unlikely(num_size == 0)) { assert(num == 0 || size == 0); num_size = 1; } return (a0ialloc(num_size, true, false)); } void bootstrap_free(void *ptr) { if (unlikely(ptr == NULL)) return; a0idalloc(ptr, false); } static void arena_set(unsigned ind, arena_t *arena) { atomic_write_p((void **)&arenas[ind], arena); } static void narenas_total_set(unsigned narenas) { atomic_write_u(&narenas_total, narenas); } static void narenas_total_inc(void) { atomic_add_u(&narenas_total, 1); } unsigned narenas_total_get(void) { return (atomic_read_u(&narenas_total)); } /* Create a new arena and insert it into the arenas array at index ind. */ static arena_t * -arena_init_locked(unsigned ind) +arena_init_locked(tsdn_t *tsdn, unsigned ind) { arena_t *arena; assert(ind <= narenas_total_get()); if (ind > MALLOCX_ARENA_MAX) return (NULL); if (ind == narenas_total_get()) narenas_total_inc(); /* * Another thread may have already initialized arenas[ind] if it's an * auto arena. */ - arena = arena_get(ind, false); + arena = arena_get(tsdn, ind, false); if (arena != NULL) { assert(ind < narenas_auto); return (arena); } /* Actually initialize the arena. */ - arena = arena_new(ind); + arena = arena_new(tsdn, ind); arena_set(ind, arena); return (arena); } arena_t * -arena_init(unsigned ind) +arena_init(tsdn_t *tsdn, unsigned ind) { arena_t *arena; - malloc_mutex_lock(&arenas_lock); - arena = arena_init_locked(ind); - malloc_mutex_unlock(&arenas_lock); + malloc_mutex_lock(tsdn, &arenas_lock); + arena = arena_init_locked(tsdn, ind); + malloc_mutex_unlock(tsdn, &arenas_lock); return (arena); } static void -arena_bind(tsd_t *tsd, unsigned ind) +arena_bind(tsd_t *tsd, unsigned ind, bool internal) { arena_t *arena; - arena = arena_get(ind, false); - arena_nthreads_inc(arena); + arena = arena_get(tsd_tsdn(tsd), ind, false); + arena_nthreads_inc(arena, internal); - if (tsd_nominal(tsd)) - tsd_arena_set(tsd, arena); + if (tsd_nominal(tsd)) { + if (internal) + tsd_iarena_set(tsd, arena); + else + tsd_arena_set(tsd, arena); + } } void arena_migrate(tsd_t *tsd, unsigned oldind, unsigned newind) { arena_t *oldarena, *newarena; - oldarena = arena_get(oldind, false); - newarena = arena_get(newind, false); - arena_nthreads_dec(oldarena); - arena_nthreads_inc(newarena); + oldarena = arena_get(tsd_tsdn(tsd), oldind, false); + newarena = arena_get(tsd_tsdn(tsd), newind, false); + arena_nthreads_dec(oldarena, false); + arena_nthreads_inc(newarena, false); tsd_arena_set(tsd, newarena); } static void -arena_unbind(tsd_t *tsd, unsigned ind) +arena_unbind(tsd_t *tsd, unsigned ind, bool internal) { arena_t *arena; - arena = arena_get(ind, false); - arena_nthreads_dec(arena); - tsd_arena_set(tsd, NULL); + arena = arena_get(tsd_tsdn(tsd), ind, false); + arena_nthreads_dec(arena, internal); + if (internal) + tsd_iarena_set(tsd, NULL); + else + tsd_arena_set(tsd, NULL); } arena_tdata_t * arena_tdata_get_hard(tsd_t *tsd, unsigned ind) { arena_tdata_t *tdata, *arenas_tdata_old; arena_tdata_t *arenas_tdata = tsd_arenas_tdata_get(tsd); unsigned narenas_tdata_old, i; unsigned narenas_tdata = tsd_narenas_tdata_get(tsd); unsigned narenas_actual = narenas_total_get(); /* * Dissociate old tdata array (and set up for deallocation upon return) * if it's too small. 
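arena_bind() and arena_migrate() above are what ultimately service the thread.arena mallctl; pinning the calling thread to an explicit arena from application code is just a write (with optional read-back of the previous binding) of that name. A sketch, assuming <malloc_np.h>:

#include <malloc_np.h>

int
pin_thread_to_arena(unsigned ind)
{
        unsigned old;
        size_t len = sizeof(old);

        /* Returns the previous binding in old and switches to ind. */
        return (mallctl("thread.arena", &old, &len, &ind, sizeof(ind)));
}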
*/ if (arenas_tdata != NULL && narenas_tdata < narenas_actual) { arenas_tdata_old = arenas_tdata; narenas_tdata_old = narenas_tdata; arenas_tdata = NULL; narenas_tdata = 0; tsd_arenas_tdata_set(tsd, arenas_tdata); tsd_narenas_tdata_set(tsd, narenas_tdata); } else { arenas_tdata_old = NULL; narenas_tdata_old = 0; } /* Allocate tdata array if it's missing. */ if (arenas_tdata == NULL) { bool *arenas_tdata_bypassp = tsd_arenas_tdata_bypassp_get(tsd); narenas_tdata = (ind < narenas_actual) ? narenas_actual : ind+1; if (tsd_nominal(tsd) && !*arenas_tdata_bypassp) { *arenas_tdata_bypassp = true; arenas_tdata = (arena_tdata_t *)a0malloc( sizeof(arena_tdata_t) * narenas_tdata); *arenas_tdata_bypassp = false; } if (arenas_tdata == NULL) { tdata = NULL; goto label_return; } assert(tsd_nominal(tsd) && !*arenas_tdata_bypassp); tsd_arenas_tdata_set(tsd, arenas_tdata); tsd_narenas_tdata_set(tsd, narenas_tdata); } /* * Copy to tdata array. It's possible that the actual number of arenas * has increased since narenas_total_get() was called above, but that * causes no correctness issues unless two threads concurrently execute * the arenas.extend mallctl, which we trust mallctl synchronization to * prevent. */ /* Copy/initialize tickers. */ for (i = 0; i < narenas_actual; i++) { if (i < narenas_tdata_old) { ticker_copy(&arenas_tdata[i].decay_ticker, &arenas_tdata_old[i].decay_ticker); } else { ticker_init(&arenas_tdata[i].decay_ticker, DECAY_NTICKS_PER_UPDATE); } } if (narenas_tdata > narenas_actual) { memset(&arenas_tdata[narenas_actual], 0, sizeof(arena_tdata_t) * (narenas_tdata - narenas_actual)); } /* Read the refreshed tdata array. */ tdata = &arenas_tdata[ind]; label_return: if (arenas_tdata_old != NULL) a0dalloc(arenas_tdata_old); return (tdata); } /* Slow path, called only by arena_choose(). */ arena_t * -arena_choose_hard(tsd_t *tsd) +arena_choose_hard(tsd_t *tsd, bool internal) { - arena_t *ret; + arena_t *ret JEMALLOC_CC_SILENCE_INIT(NULL); if (narenas_auto > 1) { - unsigned i, choose, first_null; + unsigned i, j, choose[2], first_null; - choose = 0; + /* + * Determine binding for both non-internal and internal + * allocation. + * + * choose[0]: For application allocation. + * choose[1]: For internal metadata allocation. + */ + + for (j = 0; j < 2; j++) + choose[j] = 0; + first_null = narenas_auto; - malloc_mutex_lock(&arenas_lock); - assert(arena_get(0, false) != NULL); + malloc_mutex_lock(tsd_tsdn(tsd), &arenas_lock); + assert(arena_get(tsd_tsdn(tsd), 0, false) != NULL); for (i = 1; i < narenas_auto; i++) { - if (arena_get(i, false) != NULL) { + if (arena_get(tsd_tsdn(tsd), i, false) != NULL) { /* * Choose the first arena that has the lowest * number of threads assigned to it. */ - if (arena_nthreads_get(arena_get(i, false)) < - arena_nthreads_get(arena_get(choose, - false))) - choose = i; + for (j = 0; j < 2; j++) { + if (arena_nthreads_get(arena_get( + tsd_tsdn(tsd), i, false), !!j) < + arena_nthreads_get(arena_get( + tsd_tsdn(tsd), choose[j], false), + !!j)) + choose[j] = i; + } } else if (first_null == narenas_auto) { /* * Record the index of the first uninitialized * arena, in case all extant arenas are in use. * * NB: It is possible for there to be * discontinuities in terms of initialized * versus uninitialized arenas, due to the * "thread.arena" mallctl. */ first_null = i; } } - if (arena_nthreads_get(arena_get(choose, false)) == 0 - || first_null == narenas_auto) { - /* - * Use an unloaded arena, or the least loaded arena if - * all arenas are already initialized. 
- */ - ret = arena_get(choose, false); - } else { - /* Initialize a new arena. */ - choose = first_null; - ret = arena_init_locked(choose); - if (ret == NULL) { - malloc_mutex_unlock(&arenas_lock); - return (NULL); + for (j = 0; j < 2; j++) { + if (arena_nthreads_get(arena_get(tsd_tsdn(tsd), + choose[j], false), !!j) == 0 || first_null == + narenas_auto) { + /* + * Use an unloaded arena, or the least loaded + * arena if all arenas are already initialized. + */ + if (!!j == internal) { + ret = arena_get(tsd_tsdn(tsd), + choose[j], false); + } + } else { + arena_t *arena; + + /* Initialize a new arena. */ + choose[j] = first_null; + arena = arena_init_locked(tsd_tsdn(tsd), + choose[j]); + if (arena == NULL) { + malloc_mutex_unlock(tsd_tsdn(tsd), + &arenas_lock); + return (NULL); + } + if (!!j == internal) + ret = arena; } + arena_bind(tsd, choose[j], !!j); } - arena_bind(tsd, choose); - malloc_mutex_unlock(&arenas_lock); + malloc_mutex_unlock(tsd_tsdn(tsd), &arenas_lock); } else { - ret = arena_get(0, false); - arena_bind(tsd, 0); + ret = arena_get(tsd_tsdn(tsd), 0, false); + arena_bind(tsd, 0, false); + arena_bind(tsd, 0, true); } return (ret); } void thread_allocated_cleanup(tsd_t *tsd) { /* Do nothing. */ } void thread_deallocated_cleanup(tsd_t *tsd) { /* Do nothing. */ } void +iarena_cleanup(tsd_t *tsd) +{ + arena_t *iarena; + + iarena = tsd_iarena_get(tsd); + if (iarena != NULL) + arena_unbind(tsd, iarena->ind, true); +} + +void arena_cleanup(tsd_t *tsd) { arena_t *arena; arena = tsd_arena_get(tsd); if (arena != NULL) - arena_unbind(tsd, arena->ind); + arena_unbind(tsd, arena->ind, false); } void arenas_tdata_cleanup(tsd_t *tsd) { arena_tdata_t *arenas_tdata; /* Prevent tsd->arenas_tdata from being (re)created. */ *tsd_arenas_tdata_bypassp_get(tsd) = true; arenas_tdata = tsd_arenas_tdata_get(tsd); if (arenas_tdata != NULL) { tsd_arenas_tdata_set(tsd, NULL); a0dalloc(arenas_tdata); } } void narenas_tdata_cleanup(tsd_t *tsd) { /* Do nothing. */ } void arenas_tdata_bypass_cleanup(tsd_t *tsd) { /* Do nothing. */ } static void stats_print_atexit(void) { if (config_tcache && config_stats) { + tsdn_t *tsdn; unsigned narenas, i; + tsdn = tsdn_fetch(); + /* * Merge stats from extant threads. This is racy, since * individual threads do not lock when recording tcache stats * events. As a consequence, the final stats may be slightly * out of date by the time they are reported, if other threads * continue to allocate. */ for (i = 0, narenas = narenas_total_get(); i < narenas; i++) { - arena_t *arena = arena_get(i, false); + arena_t *arena = arena_get(tsdn, i, false); if (arena != NULL) { tcache_t *tcache; /* * tcache_stats_merge() locks bins, so if any * code is introduced that acquires both arena * and bin locks in the opposite order, * deadlocks may result. */ - malloc_mutex_lock(&arena->lock); + malloc_mutex_lock(tsdn, &arena->lock); ql_foreach(tcache, &arena->tcache_ql, link) { - tcache_stats_merge(tcache, arena); + tcache_stats_merge(tsdn, tcache, arena); } - malloc_mutex_unlock(&arena->lock); + malloc_mutex_unlock(tsdn, &arena->lock); } } } je_malloc_stats_print(NULL, NULL, NULL); } /* * End miscellaneous support functions. */ /******************************************************************************/ /* * Begin initialization functions. 
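[Editorial sketch] arena_choose_hard() above now computes two bindings at once, one for application allocation and one for internal metadata allocation, each choosing the least-loaded arena for its class. A minimal standalone restatement of that per-class selection, with a hypothetical toy_arena_t type and the assumption (as in the code above) that arena 0 is always initialized:

    #include <stddef.h>

    #define TOY_NCLASSES 2  /* 0: application, 1: internal metadata */

    typedef struct {
        unsigned nthreads[TOY_NCLASSES];  /* threads bound per class */
        int initialized;
    } toy_arena_t;

    /* Return the initialized arena with the fewest threads bound for cls. */
    static size_t
    toy_choose(const toy_arena_t *arenas, size_t narenas, unsigned cls)
    {
        size_t i, best = 0;

        for (i = 1; i < narenas; i++) {
            if (!arenas[i].initialized)
                continue;
            if (arenas[i].nthreads[cls] < arenas[best].nthreads[cls])
                best = i;
        }
        return (best);
    }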
*/ #ifndef JEMALLOC_HAVE_SECURE_GETENV static char * secure_getenv(const char *name) { # ifdef JEMALLOC_HAVE_ISSETUGID if (issetugid() != 0) return (NULL); # endif return (getenv(name)); } #endif static unsigned malloc_ncpus(void) { long result; #ifdef _WIN32 SYSTEM_INFO si; GetSystemInfo(&si); result = si.dwNumberOfProcessors; #else result = sysconf(_SC_NPROCESSORS_ONLN); #endif return ((result == -1) ? 1 : (unsigned)result); } static bool malloc_conf_next(char const **opts_p, char const **k_p, size_t *klen_p, char const **v_p, size_t *vlen_p) { bool accept; const char *opts = *opts_p; *k_p = opts; for (accept = false; !accept;) { switch (*opts) { case 'A': case 'B': case 'C': case 'D': case 'E': case 'F': case 'G': case 'H': case 'I': case 'J': case 'K': case 'L': case 'M': case 'N': case 'O': case 'P': case 'Q': case 'R': case 'S': case 'T': case 'U': case 'V': case 'W': case 'X': case 'Y': case 'Z': case 'a': case 'b': case 'c': case 'd': case 'e': case 'f': case 'g': case 'h': case 'i': case 'j': case 'k': case 'l': case 'm': case 'n': case 'o': case 'p': case 'q': case 'r': case 's': case 't': case 'u': case 'v': case 'w': case 'x': case 'y': case 'z': case '0': case '1': case '2': case '3': case '4': case '5': case '6': case '7': case '8': case '9': case '_': opts++; break; case ':': opts++; *klen_p = (uintptr_t)opts - 1 - (uintptr_t)*k_p; *v_p = opts; accept = true; break; case '\0': if (opts != *opts_p) { malloc_write(": Conf string ends " "with key\n"); } return (true); default: malloc_write(": Malformed conf string\n"); return (true); } } for (accept = false; !accept;) { switch (*opts) { case ',': opts++; /* * Look ahead one character here, because the next time * this function is called, it will assume that end of * input has been cleanly reached if no input remains, * but we have optimistically already consumed the * comma if one exists. */ if (*opts == '\0') { malloc_write(": Conf string ends " "with comma\n"); } *vlen_p = (uintptr_t)opts - 1 - (uintptr_t)*v_p; accept = true; break; case '\0': *vlen_p = (uintptr_t)opts - (uintptr_t)*v_p; accept = true; break; default: opts++; break; } } *opts_p = opts; return (false); } static void malloc_conf_error(const char *msg, const char *k, size_t klen, const char *v, size_t vlen) { malloc_printf(": %s: %.*s:%.*s\n", msg, (int)klen, k, (int)vlen, v); } static void malloc_slow_flag_init(void) { /* * Combine the runtime options into malloc_slow for fast path. Called * after processing all the options. */ malloc_slow_flags |= (opt_junk_alloc ? flag_opt_junk_alloc : 0) | (opt_junk_free ? flag_opt_junk_free : 0) | (opt_quarantine ? flag_opt_quarantine : 0) | (opt_zero ? flag_opt_zero : 0) | (opt_utrace ? flag_opt_utrace : 0) | (opt_xmalloc ? flag_opt_xmalloc : 0); if (config_valgrind) malloc_slow_flags |= (in_valgrind ? flag_in_valgrind : 0); malloc_slow = (malloc_slow_flags != 0); } static void malloc_conf_init(void) { unsigned i; char buf[PATH_MAX + 1]; const char *opts, *k, *v; size_t klen, vlen; /* * Automatically configure valgrind before processing options. The * valgrind option remains in jemalloc 3.x for compatibility reasons. */ if (config_valgrind) { in_valgrind = (RUNNING_ON_VALGRIND != 0) ? true : false; if (config_fill && unlikely(in_valgrind)) { opt_junk = "false"; opt_junk_alloc = false; opt_junk_free = false; assert(!opt_zero); opt_quarantine = JEMALLOC_VALGRIND_QUARANTINE_DEFAULT; opt_redzone = true; } if (config_tcache && unlikely(in_valgrind)) opt_tcache = false; } for (i = 0; i < 4; i++) { /* Get runtime configuration. 
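[Editorial sketch] malloc_slow_flag_init() above folds several boolean options into a single word once, after option processing, so the allocation fast path needs only one test. The same technique with made-up option names:

    #include <stdbool.h>

    enum {
        TOY_FLAG_JUNK  = 1u << 0,
        TOY_FLAG_ZERO  = 1u << 1,
        TOY_FLAG_TRACE = 1u << 2
    };

    static unsigned toy_slow_flags;
    static bool toy_slow;  /* tested once on the hot path */

    static void
    toy_slow_flag_init(bool opt_junk, bool opt_zero, bool opt_trace)
    {

        toy_slow_flags = (opt_junk ? TOY_FLAG_JUNK : 0) |
            (opt_zero ? TOY_FLAG_ZERO : 0) |
            (opt_trace ? TOY_FLAG_TRACE : 0);
        toy_slow = (toy_slow_flags != 0);
    }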
*/ switch (i) { case 0: opts = config_malloc_conf; break; case 1: if (je_malloc_conf != NULL) { /* * Use options that were compiled into the * program. */ opts = je_malloc_conf; } else { /* No configuration specified. */ buf[0] = '\0'; opts = buf; } break; case 2: { ssize_t linklen = 0; #ifndef _WIN32 int saved_errno = errno; const char *linkname = # ifdef JEMALLOC_PREFIX "/etc/"JEMALLOC_PREFIX"malloc.conf" # else "/etc/malloc.conf" # endif ; /* * Try to use the contents of the "/etc/malloc.conf" * symbolic link's name. */ linklen = readlink(linkname, buf, sizeof(buf) - 1); if (linklen == -1) { /* No configuration specified. */ linklen = 0; /* Restore errno. */ set_errno(saved_errno); } #endif buf[linklen] = '\0'; opts = buf; break; } case 3: { const char *envname = #ifdef JEMALLOC_PREFIX JEMALLOC_CPREFIX"MALLOC_CONF" #else "MALLOC_CONF" #endif ; if ((opts = secure_getenv(envname)) != NULL) { /* * Do nothing; opts is already initialized to * the value of the MALLOC_CONF environment * variable. */ } else { /* No configuration specified. */ buf[0] = '\0'; opts = buf; } break; } default: not_reached(); buf[0] = '\0'; opts = buf; } while (*opts != '\0' && !malloc_conf_next(&opts, &k, &klen, &v, &vlen)) { #define CONF_MATCH(n) \ (sizeof(n)-1 == klen && strncmp(n, k, klen) == 0) #define CONF_MATCH_VALUE(n) \ (sizeof(n)-1 == vlen && strncmp(n, v, vlen) == 0) #define CONF_HANDLE_BOOL(o, n, cont) \ if (CONF_MATCH(n)) { \ if (CONF_MATCH_VALUE("true")) \ o = true; \ else if (CONF_MATCH_VALUE("false")) \ o = false; \ else { \ malloc_conf_error( \ "Invalid conf value", \ k, klen, v, vlen); \ } \ if (cont) \ continue; \ } #define CONF_HANDLE_T_U(t, o, n, min, max, clip) \ if (CONF_MATCH(n)) { \ uintmax_t um; \ char *end; \ \ set_errno(0); \ um = malloc_strtoumax(v, &end, 0); \ if (get_errno() != 0 || (uintptr_t)end -\ (uintptr_t)v != vlen) { \ malloc_conf_error( \ "Invalid conf value", \ k, klen, v, vlen); \ } else if (clip) { \ if ((min) != 0 && um < (min)) \ o = (t)(min); \ else if (um > (max)) \ o = (t)(max); \ else \ o = (t)um; \ } else { \ if (((min) != 0 && um < (min)) \ || um > (max)) { \ malloc_conf_error( \ "Out-of-range " \ "conf value", \ k, klen, v, vlen); \ } else \ o = (t)um; \ } \ continue; \ } #define CONF_HANDLE_UNSIGNED(o, n, min, max, clip) \ CONF_HANDLE_T_U(unsigned, o, n, min, max, clip) #define CONF_HANDLE_SIZE_T(o, n, min, max, clip) \ CONF_HANDLE_T_U(size_t, o, n, min, max, clip) #define CONF_HANDLE_SSIZE_T(o, n, min, max) \ if (CONF_MATCH(n)) { \ long l; \ char *end; \ \ set_errno(0); \ l = strtol(v, &end, 0); \ if (get_errno() != 0 || (uintptr_t)end -\ (uintptr_t)v != vlen) { \ malloc_conf_error( \ "Invalid conf value", \ k, klen, v, vlen); \ } else if (l < (ssize_t)(min) || l > \ (ssize_t)(max)) { \ malloc_conf_error( \ "Out-of-range conf value", \ k, klen, v, vlen); \ } else \ o = l; \ continue; \ } #define CONF_HANDLE_CHAR_P(o, n, d) \ if (CONF_MATCH(n)) { \ size_t cpylen = (vlen <= \ sizeof(o)-1) ? vlen : \ sizeof(o)-1; \ strncpy(o, v, cpylen); \ o[cpylen] = '\0'; \ continue; \ } CONF_HANDLE_BOOL(opt_abort, "abort", true) /* * Chunks always require at least one header page, * as many as 2^(LG_SIZE_CLASS_GROUP+1) data pages, and * possibly an additional page in the presence of * redzones. In order to simplify options processing, * use a conservative bound that accommodates all these * constraints. */ CONF_HANDLE_SIZE_T(opt_lg_chunk, "lg_chunk", LG_PAGE + LG_SIZE_CLASS_GROUP + (config_fill ? 
2 : 1), (sizeof(size_t) << 3) - 1, true) if (strncmp("dss", k, klen) == 0) { int i; bool match = false; for (i = 0; i < dss_prec_limit; i++) { if (strncmp(dss_prec_names[i], v, vlen) == 0) { - if (chunk_dss_prec_set(i)) { + if (chunk_dss_prec_set(NULL, + i)) { malloc_conf_error( "Error setting dss", k, klen, v, vlen); } else { opt_dss = dss_prec_names[i]; match = true; break; } } } if (!match) { malloc_conf_error("Invalid conf value", k, klen, v, vlen); } continue; } CONF_HANDLE_UNSIGNED(opt_narenas, "narenas", 1, UINT_MAX, false) if (strncmp("purge", k, klen) == 0) { int i; bool match = false; for (i = 0; i < purge_mode_limit; i++) { if (strncmp(purge_mode_names[i], v, vlen) == 0) { opt_purge = (purge_mode_t)i; match = true; break; } } if (!match) { malloc_conf_error("Invalid conf value", k, klen, v, vlen); } continue; } CONF_HANDLE_SSIZE_T(opt_lg_dirty_mult, "lg_dirty_mult", -1, (sizeof(size_t) << 3) - 1) CONF_HANDLE_SSIZE_T(opt_decay_time, "decay_time", -1, NSTIME_SEC_MAX); CONF_HANDLE_BOOL(opt_stats_print, "stats_print", true) if (config_fill) { if (CONF_MATCH("junk")) { if (CONF_MATCH_VALUE("true")) { opt_junk = "true"; opt_junk_alloc = opt_junk_free = true; } else if (CONF_MATCH_VALUE("false")) { opt_junk = "false"; opt_junk_alloc = opt_junk_free = false; } else if (CONF_MATCH_VALUE("alloc")) { opt_junk = "alloc"; opt_junk_alloc = true; opt_junk_free = false; } else if (CONF_MATCH_VALUE("free")) { opt_junk = "free"; opt_junk_alloc = false; opt_junk_free = true; } else { malloc_conf_error( "Invalid conf value", k, klen, v, vlen); } continue; } CONF_HANDLE_SIZE_T(opt_quarantine, "quarantine", 0, SIZE_T_MAX, false) CONF_HANDLE_BOOL(opt_redzone, "redzone", true) CONF_HANDLE_BOOL(opt_zero, "zero", true) } if (config_utrace) { CONF_HANDLE_BOOL(opt_utrace, "utrace", true) } if (config_xmalloc) { CONF_HANDLE_BOOL(opt_xmalloc, "xmalloc", true) } if (config_tcache) { CONF_HANDLE_BOOL(opt_tcache, "tcache", !config_valgrind || !in_valgrind) if (CONF_MATCH("tcache")) { assert(config_valgrind && in_valgrind); if (opt_tcache) { opt_tcache = false; malloc_conf_error( "tcache cannot be enabled " "while running inside Valgrind", k, klen, v, vlen); } continue; } CONF_HANDLE_SSIZE_T(opt_lg_tcache_max, "lg_tcache_max", -1, (sizeof(size_t) << 3) - 1) } if (config_prof) { CONF_HANDLE_BOOL(opt_prof, "prof", true) CONF_HANDLE_CHAR_P(opt_prof_prefix, "prof_prefix", "jeprof") CONF_HANDLE_BOOL(opt_prof_active, "prof_active", true) CONF_HANDLE_BOOL(opt_prof_thread_active_init, "prof_thread_active_init", true) CONF_HANDLE_SIZE_T(opt_lg_prof_sample, "lg_prof_sample", 0, (sizeof(uint64_t) << 3) - 1, true) CONF_HANDLE_BOOL(opt_prof_accum, "prof_accum", true) CONF_HANDLE_SSIZE_T(opt_lg_prof_interval, "lg_prof_interval", -1, (sizeof(uint64_t) << 3) - 1) CONF_HANDLE_BOOL(opt_prof_gdump, "prof_gdump", true) CONF_HANDLE_BOOL(opt_prof_final, "prof_final", true) CONF_HANDLE_BOOL(opt_prof_leak, "prof_leak", true) } malloc_conf_error("Invalid conf pair", k, klen, v, vlen); #undef CONF_MATCH #undef CONF_HANDLE_BOOL #undef CONF_HANDLE_SIZE_T #undef CONF_HANDLE_SSIZE_T #undef CONF_HANDLE_CHAR_P } } } -/* init_lock must be held. */ static bool malloc_init_hard_needed(void) { if (malloc_initialized() || (IS_INITIALIZER && malloc_init_state == malloc_init_recursible)) { /* * Another thread initialized the allocator before this one * acquired init_lock, or this thread is the initializing * thread, and it is recursively allocating. 
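[Editorial sketch] The CONF_MATCH()/CONF_HANDLE_*() macros above compare a parsed key against a string literal whose length is known at compile time, so each option is matched with a single strncmp(). A tiny self-contained example of that idiom (hypothetical toy_* names):

    #include <stdbool.h>
    #include <stddef.h>
    #include <string.h>

    #define TOY_MATCH(n, k, klen) \
        (sizeof(n) - 1 == (klen) && strncmp((n), (k), (klen)) == 0)

    static bool
    toy_is_abort_key(const char *k, size_t klen)
    {

        return (TOY_MATCH("abort", k, klen));
    }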
*/ return (false); } #ifdef JEMALLOC_THREADED_INIT if (malloc_initializer != NO_INITIALIZER && !IS_INITIALIZER) { /* Busy-wait until the initializing thread completes. */ do { - malloc_mutex_unlock(&init_lock); + malloc_mutex_unlock(NULL, &init_lock); CPU_SPINWAIT; - malloc_mutex_lock(&init_lock); + malloc_mutex_lock(NULL, &init_lock); } while (!malloc_initialized()); return (false); } #endif return (true); } -/* init_lock must be held. */ static bool -malloc_init_hard_a0_locked(void) +malloc_init_hard_a0_locked() { malloc_initializer = INITIALIZER; if (config_prof) prof_boot0(); malloc_conf_init(); if (opt_stats_print) { /* Print statistics at exit. */ if (atexit(stats_print_atexit) != 0) { malloc_write(": Error in atexit()\n"); if (opt_abort) abort(); } } + pages_boot(); if (base_boot()) return (true); if (chunk_boot()) return (true); if (ctl_boot()) return (true); if (config_prof) prof_boot1(); if (arena_boot()) return (true); - if (config_tcache && tcache_boot()) + if (config_tcache && tcache_boot(TSDN_NULL)) return (true); - if (malloc_mutex_init(&arenas_lock)) + if (malloc_mutex_init(&arenas_lock, "arenas", WITNESS_RANK_ARENAS)) return (true); /* * Create enough scaffolding to allow recursive allocation in * malloc_ncpus(). */ narenas_auto = 1; narenas_total_set(narenas_auto); arenas = &a0; memset(arenas, 0, sizeof(arena_t *) * narenas_auto); /* * Initialize one arena here. The rest are lazily created in * arena_choose_hard(). */ - if (arena_init(0) == NULL) + if (arena_init(TSDN_NULL, 0) == NULL) return (true); + malloc_init_state = malloc_init_a0_initialized; + return (false); } static bool malloc_init_hard_a0(void) { bool ret; - malloc_mutex_lock(&init_lock); + malloc_mutex_lock(TSDN_NULL, &init_lock); ret = malloc_init_hard_a0_locked(); - malloc_mutex_unlock(&init_lock); + malloc_mutex_unlock(TSDN_NULL, &init_lock); return (ret); } -/* - * Initialize data structures which may trigger recursive allocation. - * - * init_lock must be held. - */ +/* Initialize data structures which may trigger recursive allocation. */ static bool malloc_init_hard_recursible(void) { - bool ret = false; malloc_init_state = malloc_init_recursible; - malloc_mutex_unlock(&init_lock); - /* LinuxThreads' pthread_setspecific() allocates. */ - if (malloc_tsd_boot0()) { - ret = true; - goto label_return; - } - ncpus = malloc_ncpus(); #if (!defined(JEMALLOC_MUTEX_INIT_CB) && !defined(JEMALLOC_ZONE) \ && !defined(_WIN32) && !defined(__native_client__)) /* LinuxThreads' pthread_atfork() allocates. */ if (pthread_atfork(jemalloc_prefork, jemalloc_postfork_parent, jemalloc_postfork_child) != 0) { - ret = true; malloc_write(": Error in pthread_atfork()\n"); if (opt_abort) abort(); + return (true); } #endif -label_return: - malloc_mutex_lock(&init_lock); - return (ret); + return (false); } -/* init_lock must be held. */ static bool -malloc_init_hard_finish(void) +malloc_init_hard_finish(tsdn_t *tsdn) { - if (mutex_boot()) + if (malloc_mutex_boot()) return (true); if (opt_narenas == 0) { /* * For SMP systems, create more than one arena per CPU by * default. */ if (ncpus > 1) opt_narenas = ncpus << 2; else opt_narenas = 1; } narenas_auto = opt_narenas; /* * Limit the number of arenas to the indexing range of MALLOCX_ARENA(). */ if (narenas_auto > MALLOCX_ARENA_MAX) { narenas_auto = MALLOCX_ARENA_MAX; malloc_printf(": Reducing narenas to limit (%d)\n", narenas_auto); } narenas_total_set(narenas_auto); /* Allocate and initialize arenas. 
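[Editorial sketch] malloc_mutex_init() now takes a name and a witness rank (e.g. WITNESS_RANK_INIT, WITNESS_RANK_ARENAS), and the API entry points assert a lock-free state. The following is a greatly simplified, hypothetical sketch of rank-ordered lock checking, not the patch's witness implementation; __thread is the GCC/Clang thread-local extension:

    #include <assert.h>

    #define TOY_MAX_DEPTH 8

    static __thread unsigned toy_ranks[TOY_MAX_DEPTH];
    static __thread unsigned toy_depth;

    static void
    toy_witness_lock(unsigned rank)
    {

        /* Locks must be acquired in strictly increasing rank order. */
        assert(toy_depth < TOY_MAX_DEPTH);
        assert(toy_depth == 0 || rank > toy_ranks[toy_depth - 1]);
        toy_ranks[toy_depth++] = rank;
    }

    static void
    toy_witness_unlock(unsigned rank)
    {

        /* Assume LIFO release for simplicity. */
        assert(toy_depth > 0 && toy_ranks[toy_depth - 1] == rank);
        toy_depth--;
    }

    static void
    toy_witness_assert_lockless(void)
    {

        assert(toy_depth == 0);
    }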
*/ - arenas = (arena_t **)base_alloc(sizeof(arena_t *) * + arenas = (arena_t **)base_alloc(tsdn, sizeof(arena_t *) * (MALLOCX_ARENA_MAX+1)); if (arenas == NULL) return (true); /* Copy the pointer to the one arena that was already initialized. */ arena_set(0, a0); malloc_init_state = malloc_init_initialized; malloc_slow_flag_init(); return (false); } static bool malloc_init_hard(void) { + tsd_t *tsd; #if defined(_WIN32) && _WIN32_WINNT < 0x0600 _init_init_lock(); #endif - malloc_mutex_lock(&init_lock); + malloc_mutex_lock(TSDN_NULL, &init_lock); if (!malloc_init_hard_needed()) { - malloc_mutex_unlock(&init_lock); + malloc_mutex_unlock(TSDN_NULL, &init_lock); return (false); } if (malloc_init_state != malloc_init_a0_initialized && malloc_init_hard_a0_locked()) { - malloc_mutex_unlock(&init_lock); + malloc_mutex_unlock(TSDN_NULL, &init_lock); return (true); } - if (malloc_init_hard_recursible()) { - malloc_mutex_unlock(&init_lock); + malloc_mutex_unlock(TSDN_NULL, &init_lock); + /* Recursive allocation relies on functional tsd. */ + tsd = malloc_tsd_boot0(); + if (tsd == NULL) return (true); - } + if (malloc_init_hard_recursible()) + return (true); + malloc_mutex_lock(tsd_tsdn(tsd), &init_lock); - if (config_prof && prof_boot2()) { - malloc_mutex_unlock(&init_lock); + if (config_prof && prof_boot2(tsd_tsdn(tsd))) { + malloc_mutex_unlock(tsd_tsdn(tsd), &init_lock); return (true); } - if (malloc_init_hard_finish()) { - malloc_mutex_unlock(&init_lock); + if (malloc_init_hard_finish(tsd_tsdn(tsd))) { + malloc_mutex_unlock(tsd_tsdn(tsd), &init_lock); return (true); } - malloc_mutex_unlock(&init_lock); + malloc_mutex_unlock(tsd_tsdn(tsd), &init_lock); malloc_tsd_boot1(); return (false); } /* * End initialization functions. */ /******************************************************************************/ /* * Begin malloc(3)-compatible functions. */ static void * -imalloc_prof_sample(tsd_t *tsd, size_t usize, szind_t ind, +ialloc_prof_sample(tsd_t *tsd, size_t usize, szind_t ind, bool zero, prof_tctx_t *tctx, bool slow_path) { void *p; if (tctx == NULL) return (NULL); if (usize <= SMALL_MAXCLASS) { szind_t ind_large = size2index(LARGE_MINCLASS); - p = imalloc(tsd, LARGE_MINCLASS, ind_large, slow_path); + p = ialloc(tsd, LARGE_MINCLASS, ind_large, zero, slow_path); if (p == NULL) return (NULL); - arena_prof_promoted(p, usize); + arena_prof_promoted(tsd_tsdn(tsd), p, usize); } else - p = imalloc(tsd, usize, ind, slow_path); + p = ialloc(tsd, usize, ind, zero, slow_path); return (p); } JEMALLOC_ALWAYS_INLINE_C void * -imalloc_prof(tsd_t *tsd, size_t usize, szind_t ind, bool slow_path) +ialloc_prof(tsd_t *tsd, size_t usize, szind_t ind, bool zero, bool slow_path) { void *p; prof_tctx_t *tctx; tctx = prof_alloc_prep(tsd, usize, prof_active_get_unlocked(), true); if (unlikely((uintptr_t)tctx != (uintptr_t)1U)) - p = imalloc_prof_sample(tsd, usize, ind, tctx, slow_path); + p = ialloc_prof_sample(tsd, usize, ind, zero, tctx, slow_path); else - p = imalloc(tsd, usize, ind, slow_path); + p = ialloc(tsd, usize, ind, zero, slow_path); if (unlikely(p == NULL)) { prof_alloc_rollback(tsd, tctx, true); return (NULL); } - prof_malloc(p, usize, tctx); + prof_malloc(tsd_tsdn(tsd), p, usize, tctx); return (p); } +/* + * ialloc_body() is inlined so that fast and slow paths are generated separately + * with statically known slow_path. + * + * This function guarantees that *tsdn is non-NULL on success. 
+ */ JEMALLOC_ALWAYS_INLINE_C void * -imalloc_body(size_t size, tsd_t **tsd, size_t *usize, bool slow_path) +ialloc_body(size_t size, bool zero, tsdn_t **tsdn, size_t *usize, + bool slow_path) { + tsd_t *tsd; szind_t ind; - if (slow_path && unlikely(malloc_init())) + if (slow_path && unlikely(malloc_init())) { + *tsdn = NULL; return (NULL); - *tsd = tsd_fetch(); + } + + tsd = tsd_fetch(); + *tsdn = tsd_tsdn(tsd); + witness_assert_lockless(tsd_tsdn(tsd)); + ind = size2index(size); if (unlikely(ind >= NSIZES)) return (NULL); if (config_stats || (config_prof && opt_prof) || (slow_path && config_valgrind && unlikely(in_valgrind))) { *usize = index2size(ind); assert(*usize > 0 && *usize <= HUGE_MAXCLASS); } if (config_prof && opt_prof) - return (imalloc_prof(*tsd, *usize, ind, slow_path)); + return (ialloc_prof(tsd, *usize, ind, zero, slow_path)); - return (imalloc(*tsd, size, ind, slow_path)); + return (ialloc(tsd, size, ind, zero, slow_path)); } JEMALLOC_ALWAYS_INLINE_C void -imalloc_post_check(void *ret, tsd_t *tsd, size_t usize, bool slow_path) +ialloc_post_check(void *ret, tsdn_t *tsdn, size_t usize, const char *func, + bool update_errno, bool slow_path) { + + assert(!tsdn_null(tsdn) || ret == NULL); + if (unlikely(ret == NULL)) { if (slow_path && config_xmalloc && unlikely(opt_xmalloc)) { - malloc_write(": Error in malloc(): " - "out of memory\n"); + malloc_printf(": Error in %s(): out of " + "memory\n", func); abort(); } - set_errno(ENOMEM); + if (update_errno) + set_errno(ENOMEM); } if (config_stats && likely(ret != NULL)) { - assert(usize == isalloc(ret, config_prof)); - *tsd_thread_allocatedp_get(tsd) += usize; + assert(usize == isalloc(tsdn, ret, config_prof)); + *tsd_thread_allocatedp_get(tsdn_tsd(tsdn)) += usize; } + witness_assert_lockless(tsdn); } JEMALLOC_EXPORT JEMALLOC_ALLOCATOR JEMALLOC_RESTRICT_RETURN void JEMALLOC_NOTHROW * JEMALLOC_ATTR(malloc) JEMALLOC_ALLOC_SIZE(1) je_malloc(size_t size) { void *ret; - tsd_t *tsd; + tsdn_t *tsdn; size_t usize JEMALLOC_CC_SILENCE_INIT(0); if (size == 0) size = 1; if (likely(!malloc_slow)) { - /* - * imalloc_body() is inlined so that fast and slow paths are - * generated separately with statically known slow_path. 
- */ - ret = imalloc_body(size, &tsd, &usize, false); - imalloc_post_check(ret, tsd, usize, false); + ret = ialloc_body(size, false, &tsdn, &usize, false); + ialloc_post_check(ret, tsdn, usize, "malloc", true, false); } else { - ret = imalloc_body(size, &tsd, &usize, true); - imalloc_post_check(ret, tsd, usize, true); + ret = ialloc_body(size, false, &tsdn, &usize, true); + ialloc_post_check(ret, tsdn, usize, "malloc", true, true); UTRACE(0, size, ret); - JEMALLOC_VALGRIND_MALLOC(ret != NULL, ret, usize, false); + JEMALLOC_VALGRIND_MALLOC(ret != NULL, tsdn, ret, usize, false); } return (ret); } static void * imemalign_prof_sample(tsd_t *tsd, size_t alignment, size_t usize, prof_tctx_t *tctx) { void *p; if (tctx == NULL) return (NULL); if (usize <= SMALL_MAXCLASS) { assert(sa2u(LARGE_MINCLASS, alignment) == LARGE_MINCLASS); p = ipalloc(tsd, LARGE_MINCLASS, alignment, false); if (p == NULL) return (NULL); - arena_prof_promoted(p, usize); + arena_prof_promoted(tsd_tsdn(tsd), p, usize); } else p = ipalloc(tsd, usize, alignment, false); return (p); } JEMALLOC_ALWAYS_INLINE_C void * imemalign_prof(tsd_t *tsd, size_t alignment, size_t usize) { void *p; prof_tctx_t *tctx; tctx = prof_alloc_prep(tsd, usize, prof_active_get_unlocked(), true); if (unlikely((uintptr_t)tctx != (uintptr_t)1U)) p = imemalign_prof_sample(tsd, alignment, usize, tctx); else p = ipalloc(tsd, usize, alignment, false); if (unlikely(p == NULL)) { prof_alloc_rollback(tsd, tctx, true); return (NULL); } - prof_malloc(p, usize, tctx); + prof_malloc(tsd_tsdn(tsd), p, usize, tctx); return (p); } JEMALLOC_ATTR(nonnull(1)) static int imemalign(void **memptr, size_t alignment, size_t size, size_t min_alignment) { int ret; tsd_t *tsd; size_t usize; void *result; assert(min_alignment != 0); if (unlikely(malloc_init())) { + tsd = NULL; result = NULL; goto label_oom; } tsd = tsd_fetch(); + witness_assert_lockless(tsd_tsdn(tsd)); if (size == 0) size = 1; /* Make sure that alignment is a large enough power of 2. 
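[Editorial sketch] As the comment above notes, ialloc_body() is inlined with a compile-time-constant slow_path argument so each call site compiles into either a lean fast path or a fully instrumented slow path. A minimal illustration of that specialization pattern, with plain malloc() standing in for the allocator internals:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>
    #include <stdlib.h>

    static inline void *
    toy_alloc_body(size_t size, bool slow_path)
    {
        void *p = malloc(size);

        if (slow_path) {
            /* Diagnostics compiled only into the slow variant. */
            if (p == NULL)
                fprintf(stderr, "toy_alloc: out of memory\n");
        }
        return (p);
    }

    void *
    toy_alloc_fast(size_t size)
    {

        return (toy_alloc_body(size, false)); /* dead branch folds away */
    }

    void *
    toy_alloc_slow(size_t size)
    {

        return (toy_alloc_body(size, true));
    }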
*/ if (unlikely(((alignment - 1) & alignment) != 0 || (alignment < min_alignment))) { if (config_xmalloc && unlikely(opt_xmalloc)) { malloc_write(": Error allocating " "aligned memory: invalid alignment\n"); abort(); } result = NULL; ret = EINVAL; goto label_return; } usize = sa2u(size, alignment); if (unlikely(usize == 0 || usize > HUGE_MAXCLASS)) { result = NULL; goto label_oom; } if (config_prof && opt_prof) result = imemalign_prof(tsd, alignment, usize); else result = ipalloc(tsd, usize, alignment, false); if (unlikely(result == NULL)) goto label_oom; assert(((uintptr_t)result & (alignment - 1)) == ZU(0)); *memptr = result; ret = 0; label_return: if (config_stats && likely(result != NULL)) { - assert(usize == isalloc(result, config_prof)); + assert(usize == isalloc(tsd_tsdn(tsd), result, config_prof)); *tsd_thread_allocatedp_get(tsd) += usize; } UTRACE(0, size, result); + JEMALLOC_VALGRIND_MALLOC(result != NULL, tsd_tsdn(tsd), result, usize, + false); + witness_assert_lockless(tsd_tsdn(tsd)); return (ret); label_oom: assert(result == NULL); if (config_xmalloc && unlikely(opt_xmalloc)) { malloc_write(": Error allocating aligned memory: " "out of memory\n"); abort(); } ret = ENOMEM; + witness_assert_lockless(tsd_tsdn(tsd)); goto label_return; } JEMALLOC_EXPORT int JEMALLOC_NOTHROW JEMALLOC_ATTR(nonnull(1)) je_posix_memalign(void **memptr, size_t alignment, size_t size) { - int ret = imemalign(memptr, alignment, size, sizeof(void *)); - JEMALLOC_VALGRIND_MALLOC(ret == 0, *memptr, isalloc(*memptr, - config_prof), false); + int ret; + + ret = imemalign(memptr, alignment, size, sizeof(void *)); + return (ret); } JEMALLOC_EXPORT JEMALLOC_ALLOCATOR JEMALLOC_RESTRICT_RETURN void JEMALLOC_NOTHROW * JEMALLOC_ATTR(malloc) JEMALLOC_ALLOC_SIZE(2) je_aligned_alloc(size_t alignment, size_t size) { void *ret; int err; if (unlikely((err = imemalign(&ret, alignment, size, 1)) != 0)) { ret = NULL; set_errno(err); } - JEMALLOC_VALGRIND_MALLOC(err == 0, ret, isalloc(ret, config_prof), - false); + return (ret); } -static void * -icalloc_prof_sample(tsd_t *tsd, size_t usize, szind_t ind, prof_tctx_t *tctx) -{ - void *p; - - if (tctx == NULL) - return (NULL); - if (usize <= SMALL_MAXCLASS) { - szind_t ind_large = size2index(LARGE_MINCLASS); - p = icalloc(tsd, LARGE_MINCLASS, ind_large); - if (p == NULL) - return (NULL); - arena_prof_promoted(p, usize); - } else - p = icalloc(tsd, usize, ind); - - return (p); -} - -JEMALLOC_ALWAYS_INLINE_C void * -icalloc_prof(tsd_t *tsd, size_t usize, szind_t ind) -{ - void *p; - prof_tctx_t *tctx; - - tctx = prof_alloc_prep(tsd, usize, prof_active_get_unlocked(), true); - if (unlikely((uintptr_t)tctx != (uintptr_t)1U)) - p = icalloc_prof_sample(tsd, usize, ind, tctx); - else - p = icalloc(tsd, usize, ind); - if (unlikely(p == NULL)) { - prof_alloc_rollback(tsd, tctx, true); - return (NULL); - } - prof_malloc(p, usize, tctx); - - return (p); -} - JEMALLOC_EXPORT JEMALLOC_ALLOCATOR JEMALLOC_RESTRICT_RETURN void JEMALLOC_NOTHROW * JEMALLOC_ATTR(malloc) JEMALLOC_ALLOC_SIZE2(1, 2) je_calloc(size_t num, size_t size) { void *ret; - tsd_t *tsd; + tsdn_t *tsdn; size_t num_size; - szind_t ind; size_t usize JEMALLOC_CC_SILENCE_INIT(0); - if (unlikely(malloc_init())) { - num_size = 0; - ret = NULL; - goto label_return; - } - tsd = tsd_fetch(); - num_size = num * size; if (unlikely(num_size == 0)) { if (num == 0 || size == 0) num_size = 1; - else { - ret = NULL; - goto label_return; - } + else + num_size = HUGE_MAXCLASS + 1; /* Trigger OOM. */ /* * Try to avoid division here. 
We know that it isn't possible to * overflow during multiplication if neither operand uses any of the * most significant half of the bits in a size_t. */ } else if (unlikely(((num | size) & (SIZE_T_MAX << (sizeof(size_t) << - 2))) && (num_size / size != num))) { - /* size_t overflow. */ - ret = NULL; - goto label_return; - } + 2))) && (num_size / size != num))) + num_size = HUGE_MAXCLASS + 1; /* size_t overflow. */ - ind = size2index(num_size); - if (unlikely(ind >= NSIZES)) { - ret = NULL; - goto label_return; - } - if (config_prof && opt_prof) { - usize = index2size(ind); - ret = icalloc_prof(tsd, usize, ind); + if (likely(!malloc_slow)) { + ret = ialloc_body(num_size, true, &tsdn, &usize, false); + ialloc_post_check(ret, tsdn, usize, "calloc", true, false); } else { - if (config_stats || (config_valgrind && unlikely(in_valgrind))) - usize = index2size(ind); - ret = icalloc(tsd, num_size, ind); + ret = ialloc_body(num_size, true, &tsdn, &usize, true); + ialloc_post_check(ret, tsdn, usize, "calloc", true, true); + UTRACE(0, num_size, ret); + JEMALLOC_VALGRIND_MALLOC(ret != NULL, tsdn, ret, usize, false); } -label_return: - if (unlikely(ret == NULL)) { - if (config_xmalloc && unlikely(opt_xmalloc)) { - malloc_write(": Error in calloc(): out of " - "memory\n"); - abort(); - } - set_errno(ENOMEM); - } - if (config_stats && likely(ret != NULL)) { - assert(usize == isalloc(ret, config_prof)); - *tsd_thread_allocatedp_get(tsd) += usize; - } - UTRACE(0, num_size, ret); - JEMALLOC_VALGRIND_MALLOC(ret != NULL, ret, usize, true); return (ret); } static void * irealloc_prof_sample(tsd_t *tsd, void *old_ptr, size_t old_usize, size_t usize, prof_tctx_t *tctx) { void *p; if (tctx == NULL) return (NULL); if (usize <= SMALL_MAXCLASS) { p = iralloc(tsd, old_ptr, old_usize, LARGE_MINCLASS, 0, false); if (p == NULL) return (NULL); - arena_prof_promoted(p, usize); + arena_prof_promoted(tsd_tsdn(tsd), p, usize); } else p = iralloc(tsd, old_ptr, old_usize, usize, 0, false); return (p); } JEMALLOC_ALWAYS_INLINE_C void * irealloc_prof(tsd_t *tsd, void *old_ptr, size_t old_usize, size_t usize) { void *p; bool prof_active; prof_tctx_t *old_tctx, *tctx; prof_active = prof_active_get_unlocked(); - old_tctx = prof_tctx_get(old_ptr); + old_tctx = prof_tctx_get(tsd_tsdn(tsd), old_ptr); tctx = prof_alloc_prep(tsd, usize, prof_active, true); if (unlikely((uintptr_t)tctx != (uintptr_t)1U)) p = irealloc_prof_sample(tsd, old_ptr, old_usize, usize, tctx); else p = iralloc(tsd, old_ptr, old_usize, usize, 0, false); if (unlikely(p == NULL)) { prof_alloc_rollback(tsd, tctx, true); return (NULL); } prof_realloc(tsd, p, usize, tctx, prof_active, true, old_ptr, old_usize, old_tctx); return (p); } JEMALLOC_INLINE_C void ifree(tsd_t *tsd, void *ptr, tcache_t *tcache, bool slow_path) { size_t usize; UNUSED size_t rzsize JEMALLOC_CC_SILENCE_INIT(0); + witness_assert_lockless(tsd_tsdn(tsd)); + assert(ptr != NULL); assert(malloc_initialized() || IS_INITIALIZER); if (config_prof && opt_prof) { - usize = isalloc(ptr, config_prof); + usize = isalloc(tsd_tsdn(tsd), ptr, config_prof); prof_free(tsd, ptr, usize); } else if (config_stats || config_valgrind) - usize = isalloc(ptr, config_prof); + usize = isalloc(tsd_tsdn(tsd), ptr, config_prof); if (config_stats) *tsd_thread_deallocatedp_get(tsd) += usize; if (likely(!slow_path)) iqalloc(tsd, ptr, tcache, false); else { if (config_valgrind && unlikely(in_valgrind)) - rzsize = p2rz(ptr); + rzsize = p2rz(tsd_tsdn(tsd), ptr); iqalloc(tsd, ptr, tcache, true); JEMALLOC_VALGRIND_FREE(ptr, rzsize); } 
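[Editorial sketch] The je_calloc() overflow test above skips the division whenever neither operand uses the most significant half of size_t's bits, since the product of two half-width values cannot overflow. A self-contained restatement of that check with a hypothetical helper name:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    static bool
    toy_mul_overflows(size_t num, size_t size, size_t *prod)
    {
        /* Mask covering the most significant half of size_t. */
        const size_t high_half = SIZE_MAX << (sizeof(size_t) << 2);

        *prod = num * size;
        if (((num | size) & high_half) == 0)
            return (false); /* both operands fit in a half word */
        return (size != 0 && *prod / size != num); /* exact check */
    }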
} JEMALLOC_INLINE_C void -isfree(tsd_t *tsd, void *ptr, size_t usize, tcache_t *tcache) +isfree(tsd_t *tsd, void *ptr, size_t usize, tcache_t *tcache, bool slow_path) { UNUSED size_t rzsize JEMALLOC_CC_SILENCE_INIT(0); + witness_assert_lockless(tsd_tsdn(tsd)); + assert(ptr != NULL); assert(malloc_initialized() || IS_INITIALIZER); if (config_prof && opt_prof) prof_free(tsd, ptr, usize); if (config_stats) *tsd_thread_deallocatedp_get(tsd) += usize; if (config_valgrind && unlikely(in_valgrind)) - rzsize = p2rz(ptr); - isqalloc(tsd, ptr, usize, tcache); + rzsize = p2rz(tsd_tsdn(tsd), ptr); + isqalloc(tsd, ptr, usize, tcache, slow_path); JEMALLOC_VALGRIND_FREE(ptr, rzsize); } JEMALLOC_EXPORT JEMALLOC_ALLOCATOR JEMALLOC_RESTRICT_RETURN void JEMALLOC_NOTHROW * JEMALLOC_ALLOC_SIZE(2) je_realloc(void *ptr, size_t size) { void *ret; - tsd_t *tsd JEMALLOC_CC_SILENCE_INIT(NULL); + tsdn_t *tsdn JEMALLOC_CC_SILENCE_INIT(NULL); size_t usize JEMALLOC_CC_SILENCE_INIT(0); size_t old_usize = 0; UNUSED size_t old_rzsize JEMALLOC_CC_SILENCE_INIT(0); if (unlikely(size == 0)) { if (ptr != NULL) { + tsd_t *tsd; + /* realloc(ptr, 0) is equivalent to free(ptr). */ UTRACE(ptr, 0, 0); tsd = tsd_fetch(); ifree(tsd, ptr, tcache_get(tsd, false), true); return (NULL); } size = 1; } if (likely(ptr != NULL)) { + tsd_t *tsd; + assert(malloc_initialized() || IS_INITIALIZER); malloc_thread_init(); tsd = tsd_fetch(); - old_usize = isalloc(ptr, config_prof); - if (config_valgrind && unlikely(in_valgrind)) - old_rzsize = config_prof ? p2rz(ptr) : u2rz(old_usize); + witness_assert_lockless(tsd_tsdn(tsd)); + old_usize = isalloc(tsd_tsdn(tsd), ptr, config_prof); + if (config_valgrind && unlikely(in_valgrind)) { + old_rzsize = config_prof ? p2rz(tsd_tsdn(tsd), ptr) : + u2rz(old_usize); + } + if (config_prof && opt_prof) { usize = s2u(size); ret = unlikely(usize == 0 || usize > HUGE_MAXCLASS) ? NULL : irealloc_prof(tsd, ptr, old_usize, usize); } else { if (config_stats || (config_valgrind && unlikely(in_valgrind))) usize = s2u(size); ret = iralloc(tsd, ptr, old_usize, size, 0, false); } + tsdn = tsd_tsdn(tsd); } else { /* realloc(NULL, size) is equivalent to malloc(size). */ if (likely(!malloc_slow)) - ret = imalloc_body(size, &tsd, &usize, false); + ret = ialloc_body(size, false, &tsdn, &usize, false); else - ret = imalloc_body(size, &tsd, &usize, true); + ret = ialloc_body(size, false, &tsdn, &usize, true); + assert(!tsdn_null(tsdn) || ret == NULL); } if (unlikely(ret == NULL)) { if (config_xmalloc && unlikely(opt_xmalloc)) { malloc_write(": Error in realloc(): " "out of memory\n"); abort(); } set_errno(ENOMEM); } if (config_stats && likely(ret != NULL)) { - assert(usize == isalloc(ret, config_prof)); + tsd_t *tsd; + + assert(usize == isalloc(tsdn, ret, config_prof)); + tsd = tsdn_tsd(tsdn); *tsd_thread_allocatedp_get(tsd) += usize; *tsd_thread_deallocatedp_get(tsd) += old_usize; } UTRACE(ptr, size, ret); - JEMALLOC_VALGRIND_REALLOC(true, ret, usize, true, ptr, old_usize, + JEMALLOC_VALGRIND_REALLOC(true, tsdn, ret, usize, true, ptr, old_usize, old_rzsize, true, false); + witness_assert_lockless(tsdn); return (ret); } JEMALLOC_EXPORT void JEMALLOC_NOTHROW je_free(void *ptr) { UTRACE(ptr, 0, 0); if (likely(ptr != NULL)) { tsd_t *tsd = tsd_fetch(); + witness_assert_lockless(tsd_tsdn(tsd)); if (likely(!malloc_slow)) ifree(tsd, ptr, tcache_get(tsd, false), false); else ifree(tsd, ptr, tcache_get(tsd, false), true); + witness_assert_lockless(tsd_tsdn(tsd)); } } /* * End malloc(3)-compatible functions. 
*/ /******************************************************************************/ /* * Begin non-standard override functions. */ #ifdef JEMALLOC_OVERRIDE_MEMALIGN JEMALLOC_EXPORT JEMALLOC_ALLOCATOR JEMALLOC_RESTRICT_RETURN void JEMALLOC_NOTHROW * JEMALLOC_ATTR(malloc) je_memalign(size_t alignment, size_t size) { void *ret JEMALLOC_CC_SILENCE_INIT(NULL); if (unlikely(imemalign(&ret, alignment, size, 1) != 0)) ret = NULL; - JEMALLOC_VALGRIND_MALLOC(ret != NULL, ret, size, false); return (ret); } #endif #ifdef JEMALLOC_OVERRIDE_VALLOC JEMALLOC_EXPORT JEMALLOC_ALLOCATOR JEMALLOC_RESTRICT_RETURN void JEMALLOC_NOTHROW * JEMALLOC_ATTR(malloc) je_valloc(size_t size) { void *ret JEMALLOC_CC_SILENCE_INIT(NULL); if (unlikely(imemalign(&ret, PAGE, size, 1) != 0)) ret = NULL; - JEMALLOC_VALGRIND_MALLOC(ret != NULL, ret, size, false); return (ret); } #endif /* * is_malloc(je_malloc) is some macro magic to detect if jemalloc_defs.h has * #define je_malloc malloc */ #define malloc_is_malloc 1 #define is_malloc_(a) malloc_is_ ## a #define is_malloc(a) is_malloc_(a) #if ((is_malloc(je_malloc) == 1) && defined(JEMALLOC_GLIBC_MALLOC_HOOK)) /* * glibc provides the RTLD_DEEPBIND flag for dlopen which can make it possible * to inconsistently reference libc's malloc(3)-compatible functions * (https://bugzilla.mozilla.org/show_bug.cgi?id=493541). * * These definitions interpose hooks in glibc. The functions are actually * passed an extra argument for the caller return address, which will be * ignored. */ JEMALLOC_EXPORT void (*__free_hook)(void *ptr) = je_free; JEMALLOC_EXPORT void *(*__malloc_hook)(size_t size) = je_malloc; JEMALLOC_EXPORT void *(*__realloc_hook)(void *ptr, size_t size) = je_realloc; # ifdef JEMALLOC_GLIBC_MEMALIGN_HOOK JEMALLOC_EXPORT void *(*__memalign_hook)(size_t alignment, size_t size) = je_memalign; # endif #endif /* * End non-standard override functions. */ /******************************************************************************/ /* * Begin non-standard functions. 
*/ JEMALLOC_ALWAYS_INLINE_C bool -imallocx_flags_decode_hard(tsd_t *tsd, size_t size, int flags, size_t *usize, +imallocx_flags_decode(tsd_t *tsd, size_t size, int flags, size_t *usize, size_t *alignment, bool *zero, tcache_t **tcache, arena_t **arena) { if ((flags & MALLOCX_LG_ALIGN_MASK) == 0) { *alignment = 0; *usize = s2u(size); } else { *alignment = MALLOCX_ALIGN_GET_SPECIFIED(flags); *usize = sa2u(size, *alignment); } if (unlikely(*usize == 0 || *usize > HUGE_MAXCLASS)) return (true); *zero = MALLOCX_ZERO_GET(flags); if ((flags & MALLOCX_TCACHE_MASK) != 0) { if ((flags & MALLOCX_TCACHE_MASK) == MALLOCX_TCACHE_NONE) *tcache = NULL; else *tcache = tcaches_get(tsd, MALLOCX_TCACHE_GET(flags)); } else *tcache = tcache_get(tsd, true); if ((flags & MALLOCX_ARENA_MASK) != 0) { unsigned arena_ind = MALLOCX_ARENA_GET(flags); - *arena = arena_get(arena_ind, true); + *arena = arena_get(tsd_tsdn(tsd), arena_ind, true); if (unlikely(*arena == NULL)) return (true); } else *arena = NULL; return (false); } -JEMALLOC_ALWAYS_INLINE_C bool -imallocx_flags_decode(tsd_t *tsd, size_t size, int flags, size_t *usize, - size_t *alignment, bool *zero, tcache_t **tcache, arena_t **arena) -{ - - if (likely(flags == 0)) { - *usize = s2u(size); - if (unlikely(*usize == 0 || *usize > HUGE_MAXCLASS)) - return (true); - *alignment = 0; - *zero = false; - *tcache = tcache_get(tsd, true); - *arena = NULL; - return (false); - } else { - return (imallocx_flags_decode_hard(tsd, size, flags, usize, - alignment, zero, tcache, arena)); - } -} - JEMALLOC_ALWAYS_INLINE_C void * -imallocx_flags(tsd_t *tsd, size_t usize, size_t alignment, bool zero, - tcache_t *tcache, arena_t *arena) +imallocx_flags(tsdn_t *tsdn, size_t usize, size_t alignment, bool zero, + tcache_t *tcache, arena_t *arena, bool slow_path) { szind_t ind; if (unlikely(alignment != 0)) - return (ipalloct(tsd, usize, alignment, zero, tcache, arena)); + return (ipalloct(tsdn, usize, alignment, zero, tcache, arena)); ind = size2index(usize); assert(ind < NSIZES); - if (unlikely(zero)) - return (icalloct(tsd, usize, ind, tcache, arena)); - return (imalloct(tsd, usize, ind, tcache, arena)); + return (iallocztm(tsdn, usize, ind, zero, tcache, false, arena, + slow_path)); } static void * -imallocx_prof_sample(tsd_t *tsd, size_t usize, size_t alignment, bool zero, - tcache_t *tcache, arena_t *arena) +imallocx_prof_sample(tsdn_t *tsdn, size_t usize, size_t alignment, bool zero, + tcache_t *tcache, arena_t *arena, bool slow_path) { void *p; if (usize <= SMALL_MAXCLASS) { assert(((alignment == 0) ? 
s2u(LARGE_MINCLASS) : sa2u(LARGE_MINCLASS, alignment)) == LARGE_MINCLASS); - p = imallocx_flags(tsd, LARGE_MINCLASS, alignment, zero, tcache, - arena); + p = imallocx_flags(tsdn, LARGE_MINCLASS, alignment, zero, + tcache, arena, slow_path); if (p == NULL) return (NULL); - arena_prof_promoted(p, usize); - } else - p = imallocx_flags(tsd, usize, alignment, zero, tcache, arena); + arena_prof_promoted(tsdn, p, usize); + } else { + p = imallocx_flags(tsdn, usize, alignment, zero, tcache, arena, + slow_path); + } return (p); } JEMALLOC_ALWAYS_INLINE_C void * -imallocx_prof(tsd_t *tsd, size_t size, int flags, size_t *usize) +imallocx_prof(tsd_t *tsd, size_t size, int flags, size_t *usize, bool slow_path) { void *p; size_t alignment; bool zero; tcache_t *tcache; arena_t *arena; prof_tctx_t *tctx; if (unlikely(imallocx_flags_decode(tsd, size, flags, usize, &alignment, &zero, &tcache, &arena))) return (NULL); tctx = prof_alloc_prep(tsd, *usize, prof_active_get_unlocked(), true); - if (likely((uintptr_t)tctx == (uintptr_t)1U)) - p = imallocx_flags(tsd, *usize, alignment, zero, tcache, arena); - else if ((uintptr_t)tctx > (uintptr_t)1U) { - p = imallocx_prof_sample(tsd, *usize, alignment, zero, tcache, - arena); + if (likely((uintptr_t)tctx == (uintptr_t)1U)) { + p = imallocx_flags(tsd_tsdn(tsd), *usize, alignment, zero, + tcache, arena, slow_path); + } else if ((uintptr_t)tctx > (uintptr_t)1U) { + p = imallocx_prof_sample(tsd_tsdn(tsd), *usize, alignment, zero, + tcache, arena, slow_path); } else p = NULL; if (unlikely(p == NULL)) { prof_alloc_rollback(tsd, tctx, true); return (NULL); } - prof_malloc(p, *usize, tctx); + prof_malloc(tsd_tsdn(tsd), p, *usize, tctx); assert(alignment == 0 || ((uintptr_t)p & (alignment - 1)) == ZU(0)); return (p); } JEMALLOC_ALWAYS_INLINE_C void * -imallocx_no_prof(tsd_t *tsd, size_t size, int flags, size_t *usize) +imallocx_no_prof(tsd_t *tsd, size_t size, int flags, size_t *usize, + bool slow_path) { void *p; size_t alignment; bool zero; tcache_t *tcache; arena_t *arena; + if (unlikely(imallocx_flags_decode(tsd, size, flags, usize, &alignment, + &zero, &tcache, &arena))) + return (NULL); + p = imallocx_flags(tsd_tsdn(tsd), *usize, alignment, zero, tcache, + arena, slow_path); + assert(alignment == 0 || ((uintptr_t)p & (alignment - 1)) == ZU(0)); + return (p); +} + +/* This function guarantees that *tsdn is non-NULL on success. 
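[Editorial sketch] imallocx_flags_decode() above unpacks alignment, zeroing, tcache, and arena selections from the single flags argument, in the same spirit as the ALLOCM_* compat macros defined later in this file (lg(alignment) in the low bits, a zero bit above them). A toy encoder/decoder for just those two fields, with illustrative macro names and values:

    #include <stddef.h>

    #define TOY_LG_ALIGN(la)   ((int)(la))   /* low 6 bits */
    #define TOY_LG_ALIGN_MASK  ((int)0x3f)
    #define TOY_ZERO           ((int)0x40)

    static size_t
    toy_align_get(int flags)
    {
        int la = flags & TOY_LG_ALIGN_MASK;

        return (la == 0 ? 0 : ((size_t)1 << la));
    }

    static int
    toy_zero_get(int flags)
    {

        return ((flags & TOY_ZERO) != 0);
    }

    /* Example: request 64-byte-aligned, zeroed memory. */
    /* int flags = TOY_LG_ALIGN(6) | TOY_ZERO; */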
*/ +JEMALLOC_ALWAYS_INLINE_C void * +imallocx_body(size_t size, int flags, tsdn_t **tsdn, size_t *usize, + bool slow_path) +{ + tsd_t *tsd; + + if (slow_path && unlikely(malloc_init())) { + *tsdn = NULL; + return (NULL); + } + + tsd = tsd_fetch(); + *tsdn = tsd_tsdn(tsd); + witness_assert_lockless(tsd_tsdn(tsd)); + if (likely(flags == 0)) { szind_t ind = size2index(size); if (unlikely(ind >= NSIZES)) return (NULL); - if (config_stats || (config_valgrind && - unlikely(in_valgrind))) { + if (config_stats || (config_prof && opt_prof) || (slow_path && + config_valgrind && unlikely(in_valgrind))) { *usize = index2size(ind); assert(*usize > 0 && *usize <= HUGE_MAXCLASS); } - return (imalloc(tsd, size, ind, true)); + + if (config_prof && opt_prof) { + return (ialloc_prof(tsd, *usize, ind, false, + slow_path)); + } + + return (ialloc(tsd, size, ind, false, slow_path)); } - if (unlikely(imallocx_flags_decode_hard(tsd, size, flags, usize, - &alignment, &zero, &tcache, &arena))) - return (NULL); - p = imallocx_flags(tsd, *usize, alignment, zero, tcache, arena); - assert(alignment == 0 || ((uintptr_t)p & (alignment - 1)) == ZU(0)); - return (p); + if (config_prof && opt_prof) + return (imallocx_prof(tsd, size, flags, usize, slow_path)); + + return (imallocx_no_prof(tsd, size, flags, usize, slow_path)); } JEMALLOC_EXPORT JEMALLOC_ALLOCATOR JEMALLOC_RESTRICT_RETURN void JEMALLOC_NOTHROW * JEMALLOC_ATTR(malloc) JEMALLOC_ALLOC_SIZE(1) je_mallocx(size_t size, int flags) { - tsd_t *tsd; + tsdn_t *tsdn; void *p; size_t usize; assert(size != 0); - if (unlikely(malloc_init())) - goto label_oom; - tsd = tsd_fetch(); - - if (config_prof && opt_prof) - p = imallocx_prof(tsd, size, flags, &usize); - else - p = imallocx_no_prof(tsd, size, flags, &usize); - if (unlikely(p == NULL)) - goto label_oom; - - if (config_stats) { - assert(usize == isalloc(p, config_prof)); - *tsd_thread_allocatedp_get(tsd) += usize; + if (likely(!malloc_slow)) { + p = imallocx_body(size, flags, &tsdn, &usize, false); + ialloc_post_check(p, tsdn, usize, "mallocx", false, false); + } else { + p = imallocx_body(size, flags, &tsdn, &usize, true); + ialloc_post_check(p, tsdn, usize, "mallocx", false, true); + UTRACE(0, size, p); + JEMALLOC_VALGRIND_MALLOC(p != NULL, tsdn, p, usize, + MALLOCX_ZERO_GET(flags)); } - UTRACE(0, size, p); - JEMALLOC_VALGRIND_MALLOC(true, p, usize, MALLOCX_ZERO_GET(flags)); + return (p); -label_oom: - if (config_xmalloc && unlikely(opt_xmalloc)) { - malloc_write(": Error in mallocx(): out of memory\n"); - abort(); - } - UTRACE(0, size, 0); - return (NULL); } static void * irallocx_prof_sample(tsd_t *tsd, void *old_ptr, size_t old_usize, size_t usize, size_t alignment, bool zero, tcache_t *tcache, arena_t *arena, prof_tctx_t *tctx) { void *p; if (tctx == NULL) return (NULL); if (usize <= SMALL_MAXCLASS) { p = iralloct(tsd, old_ptr, old_usize, LARGE_MINCLASS, alignment, zero, tcache, arena); if (p == NULL) return (NULL); - arena_prof_promoted(p, usize); + arena_prof_promoted(tsd_tsdn(tsd), p, usize); } else { p = iralloct(tsd, old_ptr, old_usize, usize, alignment, zero, tcache, arena); } return (p); } JEMALLOC_ALWAYS_INLINE_C void * irallocx_prof(tsd_t *tsd, void *old_ptr, size_t old_usize, size_t size, size_t alignment, size_t *usize, bool zero, tcache_t *tcache, arena_t *arena) { void *p; bool prof_active; prof_tctx_t *old_tctx, *tctx; prof_active = prof_active_get_unlocked(); - old_tctx = prof_tctx_get(old_ptr); + old_tctx = prof_tctx_get(tsd_tsdn(tsd), old_ptr); tctx = prof_alloc_prep(tsd, *usize, prof_active, 
true); if (unlikely((uintptr_t)tctx != (uintptr_t)1U)) { p = irallocx_prof_sample(tsd, old_ptr, old_usize, *usize, alignment, zero, tcache, arena, tctx); } else { p = iralloct(tsd, old_ptr, old_usize, size, alignment, zero, tcache, arena); } if (unlikely(p == NULL)) { prof_alloc_rollback(tsd, tctx, true); return (NULL); } if (p == old_ptr && alignment != 0) { /* * The allocation did not move, so it is possible that the size * class is smaller than would guarantee the requested * alignment, and that the alignment constraint was * serendipitously satisfied. Additionally, old_usize may not * be the same as the current usize because of in-place large * reallocation. Therefore, query the actual value of usize. */ - *usize = isalloc(p, config_prof); + *usize = isalloc(tsd_tsdn(tsd), p, config_prof); } prof_realloc(tsd, p, *usize, tctx, prof_active, true, old_ptr, old_usize, old_tctx); return (p); } JEMALLOC_EXPORT JEMALLOC_ALLOCATOR JEMALLOC_RESTRICT_RETURN void JEMALLOC_NOTHROW * JEMALLOC_ALLOC_SIZE(2) je_rallocx(void *ptr, size_t size, int flags) { void *p; tsd_t *tsd; size_t usize; size_t old_usize; UNUSED size_t old_rzsize JEMALLOC_CC_SILENCE_INIT(0); size_t alignment = MALLOCX_ALIGN_GET(flags); bool zero = flags & MALLOCX_ZERO; arena_t *arena; tcache_t *tcache; assert(ptr != NULL); assert(size != 0); assert(malloc_initialized() || IS_INITIALIZER); malloc_thread_init(); tsd = tsd_fetch(); + witness_assert_lockless(tsd_tsdn(tsd)); if (unlikely((flags & MALLOCX_ARENA_MASK) != 0)) { unsigned arena_ind = MALLOCX_ARENA_GET(flags); - arena = arena_get(arena_ind, true); + arena = arena_get(tsd_tsdn(tsd), arena_ind, true); if (unlikely(arena == NULL)) goto label_oom; } else arena = NULL; if (unlikely((flags & MALLOCX_TCACHE_MASK) != 0)) { if ((flags & MALLOCX_TCACHE_MASK) == MALLOCX_TCACHE_NONE) tcache = NULL; else tcache = tcaches_get(tsd, MALLOCX_TCACHE_GET(flags)); } else tcache = tcache_get(tsd, true); - old_usize = isalloc(ptr, config_prof); + old_usize = isalloc(tsd_tsdn(tsd), ptr, config_prof); if (config_valgrind && unlikely(in_valgrind)) old_rzsize = u2rz(old_usize); if (config_prof && opt_prof) { usize = (alignment == 0) ? 
s2u(size) : sa2u(size, alignment); if (unlikely(usize == 0 || usize > HUGE_MAXCLASS)) goto label_oom; p = irallocx_prof(tsd, ptr, old_usize, size, alignment, &usize, zero, tcache, arena); if (unlikely(p == NULL)) goto label_oom; } else { p = iralloct(tsd, ptr, old_usize, size, alignment, zero, tcache, arena); if (unlikely(p == NULL)) goto label_oom; if (config_stats || (config_valgrind && unlikely(in_valgrind))) - usize = isalloc(p, config_prof); + usize = isalloc(tsd_tsdn(tsd), p, config_prof); } assert(alignment == 0 || ((uintptr_t)p & (alignment - 1)) == ZU(0)); if (config_stats) { *tsd_thread_allocatedp_get(tsd) += usize; *tsd_thread_deallocatedp_get(tsd) += old_usize; } UTRACE(ptr, size, p); - JEMALLOC_VALGRIND_REALLOC(true, p, usize, false, ptr, old_usize, - old_rzsize, false, zero); + JEMALLOC_VALGRIND_REALLOC(true, tsd_tsdn(tsd), p, usize, false, ptr, + old_usize, old_rzsize, false, zero); + witness_assert_lockless(tsd_tsdn(tsd)); return (p); label_oom: if (config_xmalloc && unlikely(opt_xmalloc)) { malloc_write(": Error in rallocx(): out of memory\n"); abort(); } UTRACE(ptr, size, 0); + witness_assert_lockless(tsd_tsdn(tsd)); return (NULL); } JEMALLOC_ALWAYS_INLINE_C size_t -ixallocx_helper(tsd_t *tsd, void *ptr, size_t old_usize, size_t size, +ixallocx_helper(tsdn_t *tsdn, void *ptr, size_t old_usize, size_t size, size_t extra, size_t alignment, bool zero) { size_t usize; - if (ixalloc(tsd, ptr, old_usize, size, extra, alignment, zero)) + if (ixalloc(tsdn, ptr, old_usize, size, extra, alignment, zero)) return (old_usize); - usize = isalloc(ptr, config_prof); + usize = isalloc(tsdn, ptr, config_prof); return (usize); } static size_t -ixallocx_prof_sample(tsd_t *tsd, void *ptr, size_t old_usize, size_t size, +ixallocx_prof_sample(tsdn_t *tsdn, void *ptr, size_t old_usize, size_t size, size_t extra, size_t alignment, bool zero, prof_tctx_t *tctx) { size_t usize; if (tctx == NULL) return (old_usize); - usize = ixallocx_helper(tsd, ptr, old_usize, size, extra, alignment, + usize = ixallocx_helper(tsdn, ptr, old_usize, size, extra, alignment, zero); return (usize); } JEMALLOC_ALWAYS_INLINE_C size_t ixallocx_prof(tsd_t *tsd, void *ptr, size_t old_usize, size_t size, size_t extra, size_t alignment, bool zero) { size_t usize_max, usize; bool prof_active; prof_tctx_t *old_tctx, *tctx; prof_active = prof_active_get_unlocked(); - old_tctx = prof_tctx_get(ptr); + old_tctx = prof_tctx_get(tsd_tsdn(tsd), ptr); /* * usize isn't knowable before ixalloc() returns when extra is non-zero. * Therefore, compute its maximum possible value and use that in * prof_alloc_prep() to decide whether to capture a backtrace. * prof_realloc() will use the actual usize to decide whether to sample. */ if (alignment == 0) { usize_max = s2u(size+extra); assert(usize_max > 0 && usize_max <= HUGE_MAXCLASS); } else { usize_max = sa2u(size+extra, alignment); if (unlikely(usize_max == 0 || usize_max > HUGE_MAXCLASS)) { /* * usize_max is out of range, and chances are that * allocation will fail, but use the maximum possible * value and carry on with prof_alloc_prep(), just in * case allocation succeeds. 
*/ usize_max = HUGE_MAXCLASS; } } tctx = prof_alloc_prep(tsd, usize_max, prof_active, false); if (unlikely((uintptr_t)tctx != (uintptr_t)1U)) { - usize = ixallocx_prof_sample(tsd, ptr, old_usize, size, extra, - alignment, zero, tctx); + usize = ixallocx_prof_sample(tsd_tsdn(tsd), ptr, old_usize, + size, extra, alignment, zero, tctx); } else { - usize = ixallocx_helper(tsd, ptr, old_usize, size, extra, - alignment, zero); + usize = ixallocx_helper(tsd_tsdn(tsd), ptr, old_usize, size, + extra, alignment, zero); } if (usize == old_usize) { prof_alloc_rollback(tsd, tctx, false); return (usize); } prof_realloc(tsd, ptr, usize, tctx, prof_active, false, ptr, old_usize, old_tctx); return (usize); } JEMALLOC_EXPORT size_t JEMALLOC_NOTHROW je_xallocx(void *ptr, size_t size, size_t extra, int flags) { tsd_t *tsd; size_t usize, old_usize; UNUSED size_t old_rzsize JEMALLOC_CC_SILENCE_INIT(0); size_t alignment = MALLOCX_ALIGN_GET(flags); bool zero = flags & MALLOCX_ZERO; assert(ptr != NULL); assert(size != 0); assert(SIZE_T_MAX - size >= extra); assert(malloc_initialized() || IS_INITIALIZER); malloc_thread_init(); tsd = tsd_fetch(); + witness_assert_lockless(tsd_tsdn(tsd)); - old_usize = isalloc(ptr, config_prof); + old_usize = isalloc(tsd_tsdn(tsd), ptr, config_prof); /* * The API explicitly absolves itself of protecting against (size + * extra) numerical overflow, but we may need to clamp extra to avoid * exceeding HUGE_MAXCLASS. * * Ordinarily, size limit checking is handled deeper down, but here we * have to check as part of (size + extra) clamping, since we need the * clamped value in the above helper functions. */ if (unlikely(size > HUGE_MAXCLASS)) { usize = old_usize; goto label_not_resized; } if (unlikely(HUGE_MAXCLASS - size < extra)) extra = HUGE_MAXCLASS - size; if (config_valgrind && unlikely(in_valgrind)) old_rzsize = u2rz(old_usize); if (config_prof && opt_prof) { usize = ixallocx_prof(tsd, ptr, old_usize, size, extra, alignment, zero); } else { - usize = ixallocx_helper(tsd, ptr, old_usize, size, extra, - alignment, zero); + usize = ixallocx_helper(tsd_tsdn(tsd), ptr, old_usize, size, + extra, alignment, zero); } if (unlikely(usize == old_usize)) goto label_not_resized; if (config_stats) { *tsd_thread_allocatedp_get(tsd) += usize; *tsd_thread_deallocatedp_get(tsd) += old_usize; } - JEMALLOC_VALGRIND_REALLOC(false, ptr, usize, false, ptr, old_usize, - old_rzsize, false, zero); + JEMALLOC_VALGRIND_REALLOC(false, tsd_tsdn(tsd), ptr, usize, false, ptr, + old_usize, old_rzsize, false, zero); label_not_resized: UTRACE(ptr, size, ptr); + witness_assert_lockless(tsd_tsdn(tsd)); return (usize); } JEMALLOC_EXPORT size_t JEMALLOC_NOTHROW JEMALLOC_ATTR(pure) je_sallocx(const void *ptr, int flags) { size_t usize; + tsdn_t *tsdn; assert(malloc_initialized() || IS_INITIALIZER); malloc_thread_init(); + tsdn = tsdn_fetch(); + witness_assert_lockless(tsdn); + if (config_ivsalloc) - usize = ivsalloc(ptr, config_prof); + usize = ivsalloc(tsdn, ptr, config_prof); else - usize = isalloc(ptr, config_prof); + usize = isalloc(tsdn, ptr, config_prof); + witness_assert_lockless(tsdn); return (usize); } JEMALLOC_EXPORT void JEMALLOC_NOTHROW je_dallocx(void *ptr, int flags) { tsd_t *tsd; tcache_t *tcache; assert(ptr != NULL); assert(malloc_initialized() || IS_INITIALIZER); tsd = tsd_fetch(); + witness_assert_lockless(tsd_tsdn(tsd)); if (unlikely((flags & MALLOCX_TCACHE_MASK) != 0)) { if ((flags & MALLOCX_TCACHE_MASK) == MALLOCX_TCACHE_NONE) tcache = NULL; else tcache = tcaches_get(tsd, MALLOCX_TCACHE_GET(flags)); } 
else tcache = tcache_get(tsd, false); UTRACE(ptr, 0, 0); - ifree(tsd_fetch(), ptr, tcache, true); + if (likely(!malloc_slow)) + ifree(tsd, ptr, tcache, false); + else + ifree(tsd, ptr, tcache, true); + witness_assert_lockless(tsd_tsdn(tsd)); } JEMALLOC_ALWAYS_INLINE_C size_t -inallocx(size_t size, int flags) +inallocx(tsdn_t *tsdn, size_t size, int flags) { size_t usize; + witness_assert_lockless(tsdn); + if (likely((flags & MALLOCX_LG_ALIGN_MASK) == 0)) usize = s2u(size); else usize = sa2u(size, MALLOCX_ALIGN_GET_SPECIFIED(flags)); + witness_assert_lockless(tsdn); return (usize); } JEMALLOC_EXPORT void JEMALLOC_NOTHROW je_sdallocx(void *ptr, size_t size, int flags) { tsd_t *tsd; tcache_t *tcache; size_t usize; assert(ptr != NULL); assert(malloc_initialized() || IS_INITIALIZER); - usize = inallocx(size, flags); - assert(usize == isalloc(ptr, config_prof)); - tsd = tsd_fetch(); + usize = inallocx(tsd_tsdn(tsd), size, flags); + assert(usize == isalloc(tsd_tsdn(tsd), ptr, config_prof)); + + witness_assert_lockless(tsd_tsdn(tsd)); if (unlikely((flags & MALLOCX_TCACHE_MASK) != 0)) { if ((flags & MALLOCX_TCACHE_MASK) == MALLOCX_TCACHE_NONE) tcache = NULL; else tcache = tcaches_get(tsd, MALLOCX_TCACHE_GET(flags)); } else tcache = tcache_get(tsd, false); UTRACE(ptr, 0, 0); - isfree(tsd, ptr, usize, tcache); + if (likely(!malloc_slow)) + isfree(tsd, ptr, usize, tcache, false); + else + isfree(tsd, ptr, usize, tcache, true); + witness_assert_lockless(tsd_tsdn(tsd)); } JEMALLOC_EXPORT size_t JEMALLOC_NOTHROW JEMALLOC_ATTR(pure) je_nallocx(size_t size, int flags) { size_t usize; + tsdn_t *tsdn; assert(size != 0); if (unlikely(malloc_init())) return (0); - usize = inallocx(size, flags); + tsdn = tsdn_fetch(); + witness_assert_lockless(tsdn); + + usize = inallocx(tsdn, size, flags); if (unlikely(usize > HUGE_MAXCLASS)) return (0); + witness_assert_lockless(tsdn); return (usize); } JEMALLOC_EXPORT int JEMALLOC_NOTHROW je_mallctl(const char *name, void *oldp, size_t *oldlenp, void *newp, size_t newlen) { + int ret; + tsd_t *tsd; if (unlikely(malloc_init())) return (EAGAIN); - return (ctl_byname(name, oldp, oldlenp, newp, newlen)); + tsd = tsd_fetch(); + witness_assert_lockless(tsd_tsdn(tsd)); + ret = ctl_byname(tsd, name, oldp, oldlenp, newp, newlen); + witness_assert_lockless(tsd_tsdn(tsd)); + return (ret); } JEMALLOC_EXPORT int JEMALLOC_NOTHROW je_mallctlnametomib(const char *name, size_t *mibp, size_t *miblenp) { + int ret; + tsdn_t *tsdn; if (unlikely(malloc_init())) return (EAGAIN); - return (ctl_nametomib(name, mibp, miblenp)); + tsdn = tsdn_fetch(); + witness_assert_lockless(tsdn); + ret = ctl_nametomib(tsdn, name, mibp, miblenp); + witness_assert_lockless(tsdn); + return (ret); } JEMALLOC_EXPORT int JEMALLOC_NOTHROW je_mallctlbymib(const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, void *newp, size_t newlen) { + int ret; + tsd_t *tsd; if (unlikely(malloc_init())) return (EAGAIN); - return (ctl_bymib(mib, miblen, oldp, oldlenp, newp, newlen)); + tsd = tsd_fetch(); + witness_assert_lockless(tsd_tsdn(tsd)); + ret = ctl_bymib(tsd, mib, miblen, oldp, oldlenp, newp, newlen); + witness_assert_lockless(tsd_tsdn(tsd)); + return (ret); } JEMALLOC_EXPORT void JEMALLOC_NOTHROW je_malloc_stats_print(void (*write_cb)(void *, const char *), void *cbopaque, const char *opts) { + tsdn_t *tsdn; + tsdn = tsdn_fetch(); + witness_assert_lockless(tsdn); stats_print(write_cb, cbopaque, opts); + witness_assert_lockless(tsdn); } JEMALLOC_EXPORT size_t JEMALLOC_NOTHROW 
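[Illustration, not part of this change] The three mallctl entry points above are normally used together: resolve a name to a MIB once with mallctlnametomib(), then query repeatedly with mallctlbymib(). A minimal sketch, again assuming the unprefixed public names and the "arenas.narenas" node:

	#include <jemalloc/jemalloc.h>
	#include <stdio.h>

	int
	main(void)
	{
		size_t mib[2];
		size_t miblen = sizeof(mib) / sizeof(size_t);
		unsigned narenas;
		size_t sz = sizeof(narenas);

		/* Resolve the name once... */
		if (mallctlnametomib("arenas.narenas", mib, &miblen) != 0)
			return (1);
		/* ...then query by MIB as often as needed. */
		if (mallctlbymib(mib, miblen, &narenas, &sz, NULL, 0) != 0)
			return (1);
		printf("narenas: %u\n", narenas);
		return (0);
	}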
je_malloc_usable_size(JEMALLOC_USABLE_SIZE_CONST void *ptr) { size_t ret; + tsdn_t *tsdn; assert(malloc_initialized() || IS_INITIALIZER); malloc_thread_init(); + tsdn = tsdn_fetch(); + witness_assert_lockless(tsdn); + if (config_ivsalloc) - ret = ivsalloc(ptr, config_prof); + ret = ivsalloc(tsdn, ptr, config_prof); else - ret = (ptr == NULL) ? 0 : isalloc(ptr, config_prof); + ret = (ptr == NULL) ? 0 : isalloc(tsdn, ptr, config_prof); + witness_assert_lockless(tsdn); return (ret); } /* * End non-standard functions. */ /******************************************************************************/ /* * Begin compatibility functions. */ #define ALLOCM_LG_ALIGN(la) (la) #define ALLOCM_ALIGN(a) (ffsl(a)-1) #define ALLOCM_ZERO ((int)0x40) #define ALLOCM_NO_MOVE ((int)0x80) #define ALLOCM_SUCCESS 0 #define ALLOCM_ERR_OOM 1 #define ALLOCM_ERR_NOT_MOVED 2 int je_allocm(void **ptr, size_t *rsize, size_t size, int flags) { void *p; assert(ptr != NULL); p = je_mallocx(size, flags); if (p == NULL) return (ALLOCM_ERR_OOM); if (rsize != NULL) - *rsize = isalloc(p, config_prof); + *rsize = isalloc(tsdn_fetch(), p, config_prof); *ptr = p; return (ALLOCM_SUCCESS); } int je_rallocm(void **ptr, size_t *rsize, size_t size, size_t extra, int flags) { int ret; bool no_move = flags & ALLOCM_NO_MOVE; assert(ptr != NULL); assert(*ptr != NULL); assert(size != 0); assert(SIZE_T_MAX - size >= extra); if (no_move) { size_t usize = je_xallocx(*ptr, size, extra, flags); ret = (usize >= size) ? ALLOCM_SUCCESS : ALLOCM_ERR_NOT_MOVED; if (rsize != NULL) *rsize = usize; } else { void *p = je_rallocx(*ptr, size+extra, flags); if (p != NULL) { *ptr = p; ret = ALLOCM_SUCCESS; } else ret = ALLOCM_ERR_OOM; if (rsize != NULL) - *rsize = isalloc(*ptr, config_prof); + *rsize = isalloc(tsdn_fetch(), *ptr, config_prof); } return (ret); } int je_sallocm(const void *ptr, size_t *rsize, int flags) { assert(rsize != NULL); *rsize = je_sallocx(ptr, flags); return (ALLOCM_SUCCESS); } int je_dallocm(void *ptr, int flags) { je_dallocx(ptr, flags); return (ALLOCM_SUCCESS); } int je_nallocm(size_t *rsize, size_t size, int flags) { size_t usize; usize = je_nallocx(size, flags); if (usize == 0) return (ALLOCM_ERR_OOM); if (rsize != NULL) *rsize = usize; return (ALLOCM_SUCCESS); } #undef ALLOCM_LG_ALIGN #undef ALLOCM_ALIGN #undef ALLOCM_ZERO #undef ALLOCM_NO_MOVE #undef ALLOCM_SUCCESS #undef ALLOCM_ERR_OOM #undef ALLOCM_ERR_NOT_MOVED /* * End compatibility functions. */ /******************************************************************************/ /* * The following functions are used by threading libraries for protection of * malloc during fork(). */ /* * If an application creates a thread before doing any allocation in the main * thread, then calls fork(2) in the main thread followed by memory allocation * in the child process, a race can occur that results in deadlock within the * child: the main thread may have forked while the created thread had * partially initialized the allocator. Ordinarily jemalloc prevents * fork/malloc races via the following functions it registers during * initialization using pthread_atfork(), but of course that does no good if * the allocator isn't fully initialized at fork time. The following library * constructor is a partial solution to this problem. It may still be possible * to trigger the deadlock described above, but doing so would involve forking * via a library constructor that runs before jemalloc's runs. 
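[Illustration, not part of this change] The compatibility shims above reduce the deprecated *allocm() experimental API to the current *allocx() one. Roughly, an old aligned, zeroed allocation maps onto the new calls as sketched here:

	#include <jemalloc/jemalloc.h>

	int
	main(void)
	{
		void *p;
		size_t rsize;

		/*
		 * What the shim reduces
		 *     allocm(&p, &rsize, 4096, ALLOCM_LG_ALIGN(12) | ALLOCM_ZERO)
		 * to, in terms of the current API:
		 */
		p = mallocx(4096, MALLOCX_LG_ALIGN(12) | MALLOCX_ZERO);
		if (p == NULL)
			return (1);		/* ALLOCM_ERR_OOM */
		rsize = sallocx(p, 0);		/* *rsize = isalloc(...) */
		dallocx(p, 0);			/* je_dallocm() */
		(void)rsize;
		return (0);
	}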
*/ +#ifndef JEMALLOC_JET JEMALLOC_ATTR(constructor) static void jemalloc_constructor(void) { malloc_init(); } +#endif #ifndef JEMALLOC_MUTEX_INIT_CB void jemalloc_prefork(void) #else JEMALLOC_EXPORT void _malloc_prefork(void) #endif { - unsigned i, narenas; + tsd_t *tsd; + unsigned i, j, narenas; + arena_t *arena; #ifdef JEMALLOC_MUTEX_INIT_CB if (!malloc_initialized()) return; #endif assert(malloc_initialized()); - /* Acquire all mutexes in a safe order. */ - ctl_prefork(); - prof_prefork(); - malloc_mutex_prefork(&arenas_lock); - for (i = 0, narenas = narenas_total_get(); i < narenas; i++) { - arena_t *arena; + tsd = tsd_fetch(); - if ((arena = arena_get(i, false)) != NULL) - arena_prefork(arena); + narenas = narenas_total_get(); + + witness_prefork(tsd); + /* Acquire all mutexes in a safe order. */ + ctl_prefork(tsd_tsdn(tsd)); + malloc_mutex_prefork(tsd_tsdn(tsd), &arenas_lock); + prof_prefork0(tsd_tsdn(tsd)); + for (i = 0; i < 3; i++) { + for (j = 0; j < narenas; j++) { + if ((arena = arena_get(tsd_tsdn(tsd), j, false)) != + NULL) { + switch (i) { + case 0: + arena_prefork0(tsd_tsdn(tsd), arena); + break; + case 1: + arena_prefork1(tsd_tsdn(tsd), arena); + break; + case 2: + arena_prefork2(tsd_tsdn(tsd), arena); + break; + default: not_reached(); + } + } + } } - chunk_prefork(); - base_prefork(); + base_prefork(tsd_tsdn(tsd)); + chunk_prefork(tsd_tsdn(tsd)); + for (i = 0; i < narenas; i++) { + if ((arena = arena_get(tsd_tsdn(tsd), i, false)) != NULL) + arena_prefork3(tsd_tsdn(tsd), arena); + } + prof_prefork1(tsd_tsdn(tsd)); } #ifndef JEMALLOC_MUTEX_INIT_CB void jemalloc_postfork_parent(void) #else JEMALLOC_EXPORT void _malloc_postfork(void) #endif { + tsd_t *tsd; unsigned i, narenas; #ifdef JEMALLOC_MUTEX_INIT_CB if (!malloc_initialized()) return; #endif assert(malloc_initialized()); + tsd = tsd_fetch(); + + witness_postfork_parent(tsd); /* Release all mutexes, now that fork() has completed. */ - base_postfork_parent(); - chunk_postfork_parent(); + chunk_postfork_parent(tsd_tsdn(tsd)); + base_postfork_parent(tsd_tsdn(tsd)); for (i = 0, narenas = narenas_total_get(); i < narenas; i++) { arena_t *arena; - if ((arena = arena_get(i, false)) != NULL) - arena_postfork_parent(arena); + if ((arena = arena_get(tsd_tsdn(tsd), i, false)) != NULL) + arena_postfork_parent(tsd_tsdn(tsd), arena); } - malloc_mutex_postfork_parent(&arenas_lock); - prof_postfork_parent(); - ctl_postfork_parent(); + prof_postfork_parent(tsd_tsdn(tsd)); + malloc_mutex_postfork_parent(tsd_tsdn(tsd), &arenas_lock); + ctl_postfork_parent(tsd_tsdn(tsd)); } void jemalloc_postfork_child(void) { + tsd_t *tsd; unsigned i, narenas; assert(malloc_initialized()); + tsd = tsd_fetch(); + + witness_postfork_child(tsd); /* Release all mutexes, now that fork() has completed. 
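[Illustration, not part of this change] The comment preceding jemalloc_constructor() explains why these fork handlers exist; on platforms without JEMALLOC_MUTEX_INIT_CB they are registered during initialization roughly as sketched below. The wrapper name here is hypothetical; only the pthread_atfork() call and the handler names come from this file.

	#include <pthread.h>

	void	jemalloc_prefork(void);
	void	jemalloc_postfork_parent(void);
	void	jemalloc_postfork_child(void);

	static void
	register_fork_handlers(void)
	{

		/*
		 * The prefork handler acquires every allocator mutex in a safe
		 * order; the postfork handlers release (or, in the child,
		 * re-initialize) them so the child can allocate safely.
		 */
		pthread_atfork(jemalloc_prefork, jemalloc_postfork_parent,
		    jemalloc_postfork_child);
	}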
*/ - base_postfork_child(); - chunk_postfork_child(); + chunk_postfork_child(tsd_tsdn(tsd)); + base_postfork_child(tsd_tsdn(tsd)); for (i = 0, narenas = narenas_total_get(); i < narenas; i++) { arena_t *arena; - if ((arena = arena_get(i, false)) != NULL) - arena_postfork_child(arena); + if ((arena = arena_get(tsd_tsdn(tsd), i, false)) != NULL) + arena_postfork_child(tsd_tsdn(tsd), arena); } - malloc_mutex_postfork_child(&arenas_lock); - prof_postfork_child(); - ctl_postfork_child(); + prof_postfork_child(tsd_tsdn(tsd)); + malloc_mutex_postfork_child(tsd_tsdn(tsd), &arenas_lock); + ctl_postfork_child(tsd_tsdn(tsd)); } void _malloc_first_thread(void) { (void)malloc_mutex_first_thread(); } /******************************************************************************/ Index: head/contrib/jemalloc/src/mutex.c =================================================================== --- head/contrib/jemalloc/src/mutex.c (revision 299586) +++ head/contrib/jemalloc/src/mutex.c (revision 299587) @@ -1,175 +1,178 @@ #define JEMALLOC_MUTEX_C_ #include "jemalloc/internal/jemalloc_internal.h" #if defined(JEMALLOC_LAZY_LOCK) && !defined(_WIN32) #include #endif #ifndef _CRT_SPINCOUNT #define _CRT_SPINCOUNT 4000 #endif /******************************************************************************/ /* Data. */ #ifdef JEMALLOC_LAZY_LOCK bool isthreaded = false; #endif #ifdef JEMALLOC_MUTEX_INIT_CB static bool postpone_init = true; static malloc_mutex_t *postponed_mutexes = NULL; #endif #if defined(JEMALLOC_LAZY_LOCK) && !defined(_WIN32) static void pthread_create_once(void); #endif /******************************************************************************/ /* * We intercept pthread_create() calls in order to toggle isthreaded if the * process goes multi-threaded. */ #if defined(JEMALLOC_LAZY_LOCK) && !defined(_WIN32) static int (*pthread_create_fptr)(pthread_t *__restrict, const pthread_attr_t *, void *(*)(void *), void *__restrict); static void pthread_create_once(void) { pthread_create_fptr = dlsym(RTLD_NEXT, "pthread_create"); if (pthread_create_fptr == NULL) { malloc_write(": Error in dlsym(RTLD_NEXT, " "\"pthread_create\")\n"); abort(); } isthreaded = true; } JEMALLOC_EXPORT int pthread_create(pthread_t *__restrict thread, const pthread_attr_t *__restrict attr, void *(*start_routine)(void *), void *__restrict arg) { static pthread_once_t once_control = PTHREAD_ONCE_INIT; pthread_once(&once_control, pthread_create_once); return (pthread_create_fptr(thread, attr, start_routine, arg)); } #endif /******************************************************************************/ #ifdef JEMALLOC_MUTEX_INIT_CB JEMALLOC_EXPORT int _pthread_mutex_init_calloc_cb(pthread_mutex_t *mutex, void *(calloc_cb)(size_t, size_t)); #pragma weak _pthread_mutex_init_calloc_cb int _pthread_mutex_init_calloc_cb(pthread_mutex_t *mutex, void *(calloc_cb)(size_t, size_t)) { return (((int (*)(pthread_mutex_t *, void *(*)(size_t, size_t))) __libc_interposing[INTERPOS__pthread_mutex_init_calloc_cb])(mutex, calloc_cb)); } #endif bool -malloc_mutex_init(malloc_mutex_t *mutex) +malloc_mutex_init(malloc_mutex_t *mutex, const char *name, witness_rank_t rank) { #ifdef _WIN32 # if _WIN32_WINNT >= 0x0600 InitializeSRWLock(&mutex->lock); # else if (!InitializeCriticalSectionAndSpinCount(&mutex->lock, _CRT_SPINCOUNT)) return (true); # endif #elif (defined(JEMALLOC_OSSPIN)) mutex->lock = 0; #elif (defined(JEMALLOC_MUTEX_INIT_CB)) if (postpone_init) { mutex->postponed_next = postponed_mutexes; postponed_mutexes = mutex; } else { if 
(_pthread_mutex_init_calloc_cb(&mutex->lock, bootstrap_calloc) != 0) return (true); } #else pthread_mutexattr_t attr; if (pthread_mutexattr_init(&attr) != 0) return (true); pthread_mutexattr_settype(&attr, MALLOC_MUTEX_TYPE); if (pthread_mutex_init(&mutex->lock, &attr) != 0) { pthread_mutexattr_destroy(&attr); return (true); } pthread_mutexattr_destroy(&attr); #endif + if (config_debug) + witness_init(&mutex->witness, name, rank, NULL); return (false); } void -malloc_mutex_prefork(malloc_mutex_t *mutex) +malloc_mutex_prefork(tsdn_t *tsdn, malloc_mutex_t *mutex) { - malloc_mutex_lock(mutex); + malloc_mutex_lock(tsdn, mutex); } void -malloc_mutex_postfork_parent(malloc_mutex_t *mutex) +malloc_mutex_postfork_parent(tsdn_t *tsdn, malloc_mutex_t *mutex) { - malloc_mutex_unlock(mutex); + malloc_mutex_unlock(tsdn, mutex); } void -malloc_mutex_postfork_child(malloc_mutex_t *mutex) +malloc_mutex_postfork_child(tsdn_t *tsdn, malloc_mutex_t *mutex) { #ifdef JEMALLOC_MUTEX_INIT_CB - malloc_mutex_unlock(mutex); + malloc_mutex_unlock(tsdn, mutex); #else - if (malloc_mutex_init(mutex)) { + if (malloc_mutex_init(mutex, mutex->witness.name, + mutex->witness.rank)) { malloc_printf(": Error re-initializing mutex in " "child\n"); if (opt_abort) abort(); } #endif } bool malloc_mutex_first_thread(void) { #ifdef JEMALLOC_MUTEX_INIT_CB postpone_init = false; while (postponed_mutexes != NULL) { if (_pthread_mutex_init_calloc_cb(&postponed_mutexes->lock, bootstrap_calloc) != 0) return (true); postponed_mutexes = postponed_mutexes->postponed_next; } #endif return (false); } bool -mutex_boot(void) +malloc_mutex_boot(void) { #ifndef JEMALLOC_MUTEX_INIT_CB return (malloc_mutex_first_thread()); #else return (false); #endif } Index: head/contrib/jemalloc/src/nstime.c =================================================================== --- head/contrib/jemalloc/src/nstime.c (revision 299586) +++ head/contrib/jemalloc/src/nstime.c (revision 299587) @@ -1,148 +1,148 @@ #include "jemalloc/internal/jemalloc_internal.h" #define BILLION UINT64_C(1000000000) void nstime_init(nstime_t *time, uint64_t ns) { time->ns = ns; } void nstime_init2(nstime_t *time, uint64_t sec, uint64_t nsec) { time->ns = sec * BILLION + nsec; } uint64_t nstime_ns(const nstime_t *time) { return (time->ns); } uint64_t nstime_sec(const nstime_t *time) { return (time->ns / BILLION); } uint64_t nstime_nsec(const nstime_t *time) { return (time->ns % BILLION); } void nstime_copy(nstime_t *time, const nstime_t *source) { *time = *source; } int nstime_compare(const nstime_t *a, const nstime_t *b) { return ((a->ns > b->ns) - (a->ns < b->ns)); } void nstime_add(nstime_t *time, const nstime_t *addend) { assert(UINT64_MAX - time->ns >= addend->ns); time->ns += addend->ns; } void nstime_subtract(nstime_t *time, const nstime_t *subtrahend) { assert(nstime_compare(time, subtrahend) >= 0); time->ns -= subtrahend->ns; } void nstime_imultiply(nstime_t *time, uint64_t multiplier) { assert((((time->ns | multiplier) & (UINT64_MAX << (sizeof(uint64_t) << 2))) == 0) || ((time->ns * multiplier) / multiplier == time->ns)); time->ns *= multiplier; } void nstime_idivide(nstime_t *time, uint64_t divisor) { assert(divisor != 0); time->ns /= divisor; } uint64_t nstime_divide(const nstime_t *time, const nstime_t *divisor) { assert(divisor->ns != 0); return (time->ns / divisor->ns); } #ifdef JEMALLOC_JET #undef nstime_update -#define nstime_update JEMALLOC_N(nstime_update_impl) +#define nstime_update JEMALLOC_N(n_nstime_update) #endif bool nstime_update(nstime_t *time) { nstime_t 
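[Illustration, not part of this change] Since nstime_t holds a single nanosecond counter, the nstime operations above are plain integer arithmetic. A standalone sketch mirroring that one-field layout (not the real header):

	#include <assert.h>
	#include <stdint.h>

	#define BILLION UINT64_C(1000000000)

	typedef struct {
		uint64_t ns;
	} nstime_t;

	int
	main(void)
	{
		nstime_t a, b;

		a.ns = 2 * BILLION + 250;	/* nstime_init2(&a, 2, 250) */
		b.ns = BILLION;			/* nstime_init2(&b, 1, 0) */

		a.ns -= b.ns;			/* nstime_subtract(&a, &b) */
		assert(a.ns / BILLION == 1);	/* nstime_sec(&a) */
		assert(a.ns % BILLION == 250);	/* nstime_nsec(&a) */
		/* nstime_compare(&a, &b): a is now larger, so 1. */
		assert(((a.ns > b.ns) - (a.ns < b.ns)) == 1);
		return (0);
	}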
old_time; nstime_copy(&old_time, time); #ifdef _WIN32 { FILETIME ft; uint64_t ticks; GetSystemTimeAsFileTime(&ft); ticks = (((uint64_t)ft.dwHighDateTime) << 32) | ft.dwLowDateTime; time->ns = ticks * 100; } #elif JEMALLOC_CLOCK_GETTIME { struct timespec ts; if (sysconf(_SC_MONOTONIC_CLOCK) > 0) clock_gettime(CLOCK_MONOTONIC, &ts); else clock_gettime(CLOCK_REALTIME, &ts); time->ns = ts.tv_sec * BILLION + ts.tv_nsec; } #else struct timeval tv; gettimeofday(&tv, NULL); time->ns = tv.tv_sec * BILLION + tv.tv_usec * 1000; #endif /* Handle non-monotonic clocks. */ if (unlikely(nstime_compare(&old_time, time) > 0)) { nstime_copy(time, &old_time); return (true); } return (false); } #ifdef JEMALLOC_JET #undef nstime_update #define nstime_update JEMALLOC_N(nstime_update) -nstime_update_t *nstime_update = JEMALLOC_N(nstime_update_impl); +nstime_update_t *nstime_update = JEMALLOC_N(n_nstime_update); #endif Index: head/contrib/jemalloc/src/pages.c =================================================================== --- head/contrib/jemalloc/src/pages.c (revision 299586) +++ head/contrib/jemalloc/src/pages.c (revision 299587) @@ -1,173 +1,253 @@ #define JEMALLOC_PAGES_C_ #include "jemalloc/internal/jemalloc_internal.h" +#ifdef JEMALLOC_SYSCTL_VM_OVERCOMMIT +#include +#endif + /******************************************************************************/ +/* Data. */ +#ifndef _WIN32 +# define PAGES_PROT_COMMIT (PROT_READ | PROT_WRITE) +# define PAGES_PROT_DECOMMIT (PROT_NONE) +static int mmap_flags; +#endif +static bool os_overcommits; + +/******************************************************************************/ + void * -pages_map(void *addr, size_t size) +pages_map(void *addr, size_t size, bool *commit) { void *ret; assert(size != 0); + if (os_overcommits) + *commit = true; + #ifdef _WIN32 /* * If VirtualAlloc can't allocate at the given address when one is * given, it fails and returns NULL. */ - ret = VirtualAlloc(addr, size, MEM_COMMIT | MEM_RESERVE, + ret = VirtualAlloc(addr, size, MEM_RESERVE | (*commit ? MEM_COMMIT : 0), PAGE_READWRITE); #else /* * We don't use MAP_FIXED here, because it can cause the *replacement* * of existing mappings, and we only want to create new mappings. */ - ret = mmap(addr, size, PROT_READ | PROT_WRITE, MAP_PRIVATE | MAP_ANON, - -1, 0); + { + int prot = *commit ? PAGES_PROT_COMMIT : PAGES_PROT_DECOMMIT; + + ret = mmap(addr, size, prot, mmap_flags, -1, 0); + } assert(ret != NULL); if (ret == MAP_FAILED) ret = NULL; else if (addr != NULL && ret != addr) { /* * We succeeded in mapping memory, but not in the right place. 
*/ pages_unmap(ret, size); ret = NULL; } #endif assert(ret == NULL || (addr == NULL && ret != addr) || (addr != NULL && ret == addr)); return (ret); } void pages_unmap(void *addr, size_t size) { #ifdef _WIN32 if (VirtualFree(addr, 0, MEM_RELEASE) == 0) #else if (munmap(addr, size) == -1) #endif { char buf[BUFERROR_BUF]; buferror(get_errno(), buf, sizeof(buf)); malloc_printf(": Error in " #ifdef _WIN32 "VirtualFree" #else "munmap" #endif "(): %s\n", buf); if (opt_abort) abort(); } } void * -pages_trim(void *addr, size_t alloc_size, size_t leadsize, size_t size) +pages_trim(void *addr, size_t alloc_size, size_t leadsize, size_t size, + bool *commit) { void *ret = (void *)((uintptr_t)addr + leadsize); assert(alloc_size >= leadsize + size); #ifdef _WIN32 { void *new_addr; pages_unmap(addr, alloc_size); - new_addr = pages_map(ret, size); + new_addr = pages_map(ret, size, commit); if (new_addr == ret) return (ret); if (new_addr) pages_unmap(new_addr, size); return (NULL); } #else { size_t trailsize = alloc_size - leadsize - size; if (leadsize != 0) pages_unmap(addr, leadsize); if (trailsize != 0) pages_unmap((void *)((uintptr_t)ret + size), trailsize); return (ret); } #endif } static bool pages_commit_impl(void *addr, size_t size, bool commit) { -#ifndef _WIN32 - /* - * The following decommit/commit implementation is functional, but - * always disabled because it doesn't add value beyong improved - * debugging (at the cost of extra system calls) on systems that - * overcommit. - */ - if (false) { - int prot = commit ? (PROT_READ | PROT_WRITE) : PROT_NONE; - void *result = mmap(addr, size, prot, MAP_PRIVATE | MAP_ANON | - MAP_FIXED, -1, 0); + if (os_overcommits) + return (true); + +#ifdef _WIN32 + return (commit ? (addr != VirtualAlloc(addr, size, MEM_COMMIT, + PAGE_READWRITE)) : (!VirtualFree(addr, size, MEM_DECOMMIT))); +#else + { + int prot = commit ? PAGES_PROT_COMMIT : PAGES_PROT_DECOMMIT; + void *result = mmap(addr, size, prot, mmap_flags | MAP_FIXED, + -1, 0); if (result == MAP_FAILED) return (true); if (result != addr) { /* * We succeeded in mapping memory, but not in the right * place. */ pages_unmap(result, size); return (true); } return (false); } #endif - return (true); } bool pages_commit(void *addr, size_t size) { return (pages_commit_impl(addr, size, true)); } bool pages_decommit(void *addr, size_t size) { return (pages_commit_impl(addr, size, false)); } bool pages_purge(void *addr, size_t size) { bool unzeroed; #ifdef _WIN32 VirtualAlloc(addr, size, MEM_RESET, PAGE_READWRITE); unzeroed = true; #elif defined(JEMALLOC_HAVE_MADVISE) # ifdef JEMALLOC_PURGE_MADVISE_DONTNEED # define JEMALLOC_MADV_PURGE MADV_DONTNEED # define JEMALLOC_MADV_ZEROS true # elif defined(JEMALLOC_PURGE_MADVISE_FREE) # define JEMALLOC_MADV_PURGE MADV_FREE # define JEMALLOC_MADV_ZEROS false # else # error "No madvise(2) flag defined for purging unused dirty pages." # endif int err = madvise(addr, size, JEMALLOC_MADV_PURGE); unzeroed = (!JEMALLOC_MADV_ZEROS || err != 0); # undef JEMALLOC_MADV_PURGE # undef JEMALLOC_MADV_ZEROS #else /* Last resort no-op. */ unzeroed = true; #endif return (unzeroed); } +#ifdef JEMALLOC_SYSCTL_VM_OVERCOMMIT +static bool +os_overcommits_sysctl(void) +{ + int vm_overcommit; + size_t sz; + + sz = sizeof(vm_overcommit); + if (sysctlbyname("vm.overcommit", &vm_overcommit, &sz, NULL, 0) != 0) + return (false); /* Error. 
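[Illustration, not part of this change] The commit/decommit path above amounts to re-mmap()ing the same range over itself with MAP_FIXED, toggling between PROT_NONE and PROT_READ|PROT_WRITE. A standalone POSIX sketch of that technique:

	#include <assert.h>
	#include <string.h>
	#include <sys/mman.h>

	int
	main(void)
	{
		size_t size = 1 << 20;
		void *addr, *res;

		/* Reserve address space without committing it (PROT_NONE). */
		addr = mmap(NULL, size, PROT_NONE, MAP_PRIVATE | MAP_ANON,
		    -1, 0);
		assert(addr != MAP_FAILED);

		/* Commit: map read/write pages over the same range in place. */
		res = mmap(addr, size, PROT_READ | PROT_WRITE,
		    MAP_PRIVATE | MAP_ANON | MAP_FIXED, -1, 0);
		assert(res == addr);
		memset(addr, 0, size);

		/* Decommit: back to PROT_NONE, releasing physical memory. */
		res = mmap(addr, size, PROT_NONE,
		    MAP_PRIVATE | MAP_ANON | MAP_FIXED, -1, 0);
		assert(res == addr);

		munmap(addr, size);
		return (0);
	}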
*/ + + return ((vm_overcommit & 0x3) == 0); +} +#endif + +#ifdef JEMALLOC_PROC_SYS_VM_OVERCOMMIT_MEMORY +static bool +os_overcommits_proc(void) +{ + int fd; + char buf[1]; + ssize_t nread; + + fd = open("/proc/sys/vm/overcommit_memory", O_RDONLY); + if (fd == -1) + return (false); /* Error. */ + + nread = read(fd, &buf, sizeof(buf)); + if (nread < 1) + return (false); /* Error. */ + /* + * /proc/sys/vm/overcommit_memory meanings: + * 0: Heuristic overcommit. + * 1: Always overcommit. + * 2: Never overcommit. + */ + return (buf[0] == '0' || buf[0] == '1'); +} +#endif + +void +pages_boot(void) +{ + +#ifndef _WIN32 + mmap_flags = MAP_PRIVATE | MAP_ANON; +#endif + +#ifdef JEMALLOC_SYSCTL_VM_OVERCOMMIT + os_overcommits = os_overcommits_sysctl(); +#elif defined(JEMALLOC_PROC_SYS_VM_OVERCOMMIT_MEMORY) + os_overcommits = os_overcommits_proc(); +# ifdef MAP_NORESERVE + if (os_overcommits) + mmap_flags |= MAP_NORESERVE; +# endif +#else + os_overcommits = false; +#endif +} Index: head/contrib/jemalloc/src/prof.c =================================================================== --- head/contrib/jemalloc/src/prof.c (revision 299586) +++ head/contrib/jemalloc/src/prof.c (revision 299587) @@ -1,2254 +1,2358 @@ #define JEMALLOC_PROF_C_ #include "jemalloc/internal/jemalloc_internal.h" /******************************************************************************/ #ifdef JEMALLOC_PROF_LIBUNWIND #define UNW_LOCAL_ONLY #include #endif #ifdef JEMALLOC_PROF_LIBGCC #include #endif /******************************************************************************/ /* Data. */ bool opt_prof = false; bool opt_prof_active = true; bool opt_prof_thread_active_init = true; size_t opt_lg_prof_sample = LG_PROF_SAMPLE_DEFAULT; ssize_t opt_lg_prof_interval = LG_PROF_INTERVAL_DEFAULT; bool opt_prof_gdump = false; bool opt_prof_final = false; bool opt_prof_leak = false; bool opt_prof_accum = false; char opt_prof_prefix[ /* Minimize memory bloat for non-prof builds. */ #ifdef JEMALLOC_PROF PATH_MAX + #endif 1]; /* * Initialized as opt_prof_active, and accessed via * prof_active_[gs]et{_unlocked,}(). */ bool prof_active; static malloc_mutex_t prof_active_mtx; /* * Initialized as opt_prof_thread_active_init, and accessed via * prof_thread_active_init_[gs]et(). */ static bool prof_thread_active_init; static malloc_mutex_t prof_thread_active_init_mtx; /* * Initialized as opt_prof_gdump, and accessed via * prof_gdump_[gs]et{_unlocked,}(). */ bool prof_gdump_val; static malloc_mutex_t prof_gdump_mtx; uint64_t prof_interval = 0; size_t lg_prof_sample; /* * Table of mutexes that are shared among gctx's. These are leaf locks, so * there is no problem with using them for more than one gctx at the same time. * The primary motivation for this sharing though is that gctx's are ephemeral, * and destroying mutexes causes complications for systems that allocate when * creating/destroying mutexes. */ static malloc_mutex_t *gctx_locks; static unsigned cum_gctxs; /* Atomic counter. */ /* * Table of mutexes that are shared among tdata's. No operations require * holding multiple tdata locks, so there is no problem with using them for more * than one tdata at the same time, even though a gctx lock may be acquired * while holding a tdata lock. */ static malloc_mutex_t *tdata_locks; /* * Global hash of (prof_bt_t *)-->(prof_gctx_t *). This is the master data * structure that knows about all backtraces currently captured. 
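[Illustration, not part of this change] For reference, the overcommit_memory modes interpreted above can also be checked outside the allocator; a tiny Linux-only diagnostic:

	#include <fcntl.h>
	#include <stdio.h>
	#include <unistd.h>

	int
	main(void)
	{
		char c;
		int fd = open("/proc/sys/vm/overcommit_memory", O_RDONLY);

		if (fd == -1 || read(fd, &c, 1) != 1) {
			printf("cannot determine overcommit mode\n");
			return (1);
		}
		close(fd);
		/* 0: heuristic overcommit, 1: always overcommit, 2: never. */
		printf("overcommit_memory=%c -> jemalloc %s decommit unused "
		    "pages\n", c, (c == '0' || c == '1') ? "skips" : "uses");
		return (0);
	}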
*/ static ckh_t bt2gctx; static malloc_mutex_t bt2gctx_mtx; /* * Tree of all extant prof_tdata_t structures, regardless of state, * {attached,detached,expired}. */ static prof_tdata_tree_t tdatas; static malloc_mutex_t tdatas_mtx; static uint64_t next_thr_uid; static malloc_mutex_t next_thr_uid_mtx; static malloc_mutex_t prof_dump_seq_mtx; static uint64_t prof_dump_seq; static uint64_t prof_dump_iseq; static uint64_t prof_dump_mseq; static uint64_t prof_dump_useq; /* * This buffer is rather large for stack allocation, so use a single buffer for * all profile dumps. */ static malloc_mutex_t prof_dump_mtx; static char prof_dump_buf[ /* Minimize memory bloat for non-prof builds. */ #ifdef JEMALLOC_PROF PROF_DUMP_BUFSIZE #else 1 #endif ]; static size_t prof_dump_buf_end; static int prof_dump_fd; /* Do not dump any profiles until bootstrapping is complete. */ static bool prof_booted = false; /******************************************************************************/ /* * Function prototypes for static functions that are referenced prior to * definition. */ -static bool prof_tctx_should_destroy(prof_tctx_t *tctx); +static bool prof_tctx_should_destroy(tsdn_t *tsdn, prof_tctx_t *tctx); static void prof_tctx_destroy(tsd_t *tsd, prof_tctx_t *tctx); -static bool prof_tdata_should_destroy(prof_tdata_t *tdata, +static bool prof_tdata_should_destroy(tsdn_t *tsdn, prof_tdata_t *tdata, bool even_if_attached); -static void prof_tdata_destroy(tsd_t *tsd, prof_tdata_t *tdata, +static void prof_tdata_destroy(tsdn_t *tsdn, prof_tdata_t *tdata, bool even_if_attached); -static char *prof_thread_name_alloc(tsd_t *tsd, const char *thread_name); +static char *prof_thread_name_alloc(tsdn_t *tsdn, const char *thread_name); /******************************************************************************/ /* Red-black trees. */ JEMALLOC_INLINE_C int prof_tctx_comp(const prof_tctx_t *a, const prof_tctx_t *b) { uint64_t a_thr_uid = a->thr_uid; uint64_t b_thr_uid = b->thr_uid; int ret = (a_thr_uid > b_thr_uid) - (a_thr_uid < b_thr_uid); if (ret == 0) { uint64_t a_thr_discrim = a->thr_discrim; uint64_t b_thr_discrim = b->thr_discrim; ret = (a_thr_discrim > b_thr_discrim) - (a_thr_discrim < b_thr_discrim); if (ret == 0) { uint64_t a_tctx_uid = a->tctx_uid; uint64_t b_tctx_uid = b->tctx_uid; ret = (a_tctx_uid > b_tctx_uid) - (a_tctx_uid < b_tctx_uid); } } return (ret); } rb_gen(static UNUSED, tctx_tree_, prof_tctx_tree_t, prof_tctx_t, tctx_link, prof_tctx_comp) JEMALLOC_INLINE_C int prof_gctx_comp(const prof_gctx_t *a, const prof_gctx_t *b) { unsigned a_len = a->bt.len; unsigned b_len = b->bt.len; unsigned comp_len = (a_len < b_len) ? 
a_len : b_len; int ret = memcmp(a->bt.vec, b->bt.vec, comp_len * sizeof(void *)); if (ret == 0) ret = (a_len > b_len) - (a_len < b_len); return (ret); } rb_gen(static UNUSED, gctx_tree_, prof_gctx_tree_t, prof_gctx_t, dump_link, prof_gctx_comp) JEMALLOC_INLINE_C int prof_tdata_comp(const prof_tdata_t *a, const prof_tdata_t *b) { int ret; uint64_t a_uid = a->thr_uid; uint64_t b_uid = b->thr_uid; ret = ((a_uid > b_uid) - (a_uid < b_uid)); if (ret == 0) { uint64_t a_discrim = a->thr_discrim; uint64_t b_discrim = b->thr_discrim; ret = ((a_discrim > b_discrim) - (a_discrim < b_discrim)); } return (ret); } rb_gen(static UNUSED, tdata_tree_, prof_tdata_tree_t, prof_tdata_t, tdata_link, prof_tdata_comp) /******************************************************************************/ void prof_alloc_rollback(tsd_t *tsd, prof_tctx_t *tctx, bool updated) { prof_tdata_t *tdata; cassert(config_prof); if (updated) { /* * Compute a new sample threshold. This isn't very important in * practice, because this function is rarely executed, so the * potential for sample bias is minimal except in contrived * programs. */ tdata = prof_tdata_get(tsd, true); if (tdata != NULL) prof_sample_threshold_update(tdata); } if ((uintptr_t)tctx > (uintptr_t)1U) { - malloc_mutex_lock(tctx->tdata->lock); + malloc_mutex_lock(tsd_tsdn(tsd), tctx->tdata->lock); tctx->prepared = false; - if (prof_tctx_should_destroy(tctx)) + if (prof_tctx_should_destroy(tsd_tsdn(tsd), tctx)) prof_tctx_destroy(tsd, tctx); else - malloc_mutex_unlock(tctx->tdata->lock); + malloc_mutex_unlock(tsd_tsdn(tsd), tctx->tdata->lock); } } void -prof_malloc_sample_object(const void *ptr, size_t usize, prof_tctx_t *tctx) +prof_malloc_sample_object(tsdn_t *tsdn, const void *ptr, size_t usize, + prof_tctx_t *tctx) { - prof_tctx_set(ptr, usize, tctx); + prof_tctx_set(tsdn, ptr, usize, tctx); - malloc_mutex_lock(tctx->tdata->lock); + malloc_mutex_lock(tsdn, tctx->tdata->lock); tctx->cnts.curobjs++; tctx->cnts.curbytes += usize; if (opt_prof_accum) { tctx->cnts.accumobjs++; tctx->cnts.accumbytes += usize; } tctx->prepared = false; - malloc_mutex_unlock(tctx->tdata->lock); + malloc_mutex_unlock(tsdn, tctx->tdata->lock); } void prof_free_sampled_object(tsd_t *tsd, size_t usize, prof_tctx_t *tctx) { - malloc_mutex_lock(tctx->tdata->lock); + malloc_mutex_lock(tsd_tsdn(tsd), tctx->tdata->lock); assert(tctx->cnts.curobjs > 0); assert(tctx->cnts.curbytes >= usize); tctx->cnts.curobjs--; tctx->cnts.curbytes -= usize; - if (prof_tctx_should_destroy(tctx)) + if (prof_tctx_should_destroy(tsd_tsdn(tsd), tctx)) prof_tctx_destroy(tsd, tctx); else - malloc_mutex_unlock(tctx->tdata->lock); + malloc_mutex_unlock(tsd_tsdn(tsd), tctx->tdata->lock); } void bt_init(prof_bt_t *bt, void **vec) { cassert(config_prof); bt->vec = vec; bt->len = 0; } JEMALLOC_INLINE_C void prof_enter(tsd_t *tsd, prof_tdata_t *tdata) { cassert(config_prof); assert(tdata == prof_tdata_get(tsd, false)); if (tdata != NULL) { assert(!tdata->enq); tdata->enq = true; } - malloc_mutex_lock(&bt2gctx_mtx); + malloc_mutex_lock(tsd_tsdn(tsd), &bt2gctx_mtx); } JEMALLOC_INLINE_C void prof_leave(tsd_t *tsd, prof_tdata_t *tdata) { cassert(config_prof); assert(tdata == prof_tdata_get(tsd, false)); - malloc_mutex_unlock(&bt2gctx_mtx); + malloc_mutex_unlock(tsd_tsdn(tsd), &bt2gctx_mtx); if (tdata != NULL) { bool idump, gdump; assert(tdata->enq); tdata->enq = false; idump = tdata->enq_idump; tdata->enq_idump = false; gdump = tdata->enq_gdump; tdata->enq_gdump = false; if (idump) - prof_idump(); + prof_idump(tsd_tsdn(tsd)); if 
(gdump) - prof_gdump(); + prof_gdump(tsd_tsdn(tsd)); } } #ifdef JEMALLOC_PROF_LIBUNWIND void prof_backtrace(prof_bt_t *bt) { int nframes; cassert(config_prof); assert(bt->len == 0); assert(bt->vec != NULL); nframes = unw_backtrace(bt->vec, PROF_BT_MAX); if (nframes <= 0) return; bt->len = nframes; } #elif (defined(JEMALLOC_PROF_LIBGCC)) static _Unwind_Reason_Code prof_unwind_init_callback(struct _Unwind_Context *context, void *arg) { cassert(config_prof); return (_URC_NO_REASON); } static _Unwind_Reason_Code prof_unwind_callback(struct _Unwind_Context *context, void *arg) { prof_unwind_data_t *data = (prof_unwind_data_t *)arg; void *ip; cassert(config_prof); ip = (void *)_Unwind_GetIP(context); if (ip == NULL) return (_URC_END_OF_STACK); data->bt->vec[data->bt->len] = ip; data->bt->len++; if (data->bt->len == data->max) return (_URC_END_OF_STACK); return (_URC_NO_REASON); } void prof_backtrace(prof_bt_t *bt) { prof_unwind_data_t data = {bt, PROF_BT_MAX}; cassert(config_prof); _Unwind_Backtrace(prof_unwind_callback, &data); } #elif (defined(JEMALLOC_PROF_GCC)) void prof_backtrace(prof_bt_t *bt) { #define BT_FRAME(i) \ if ((i) < PROF_BT_MAX) { \ void *p; \ if (__builtin_frame_address(i) == 0) \ return; \ p = __builtin_return_address(i); \ if (p == NULL) \ return; \ bt->vec[(i)] = p; \ bt->len = (i) + 1; \ } else \ return; cassert(config_prof); BT_FRAME(0) BT_FRAME(1) BT_FRAME(2) BT_FRAME(3) BT_FRAME(4) BT_FRAME(5) BT_FRAME(6) BT_FRAME(7) BT_FRAME(8) BT_FRAME(9) BT_FRAME(10) BT_FRAME(11) BT_FRAME(12) BT_FRAME(13) BT_FRAME(14) BT_FRAME(15) BT_FRAME(16) BT_FRAME(17) BT_FRAME(18) BT_FRAME(19) BT_FRAME(20) BT_FRAME(21) BT_FRAME(22) BT_FRAME(23) BT_FRAME(24) BT_FRAME(25) BT_FRAME(26) BT_FRAME(27) BT_FRAME(28) BT_FRAME(29) BT_FRAME(30) BT_FRAME(31) BT_FRAME(32) BT_FRAME(33) BT_FRAME(34) BT_FRAME(35) BT_FRAME(36) BT_FRAME(37) BT_FRAME(38) BT_FRAME(39) BT_FRAME(40) BT_FRAME(41) BT_FRAME(42) BT_FRAME(43) BT_FRAME(44) BT_FRAME(45) BT_FRAME(46) BT_FRAME(47) BT_FRAME(48) BT_FRAME(49) BT_FRAME(50) BT_FRAME(51) BT_FRAME(52) BT_FRAME(53) BT_FRAME(54) BT_FRAME(55) BT_FRAME(56) BT_FRAME(57) BT_FRAME(58) BT_FRAME(59) BT_FRAME(60) BT_FRAME(61) BT_FRAME(62) BT_FRAME(63) BT_FRAME(64) BT_FRAME(65) BT_FRAME(66) BT_FRAME(67) BT_FRAME(68) BT_FRAME(69) BT_FRAME(70) BT_FRAME(71) BT_FRAME(72) BT_FRAME(73) BT_FRAME(74) BT_FRAME(75) BT_FRAME(76) BT_FRAME(77) BT_FRAME(78) BT_FRAME(79) BT_FRAME(80) BT_FRAME(81) BT_FRAME(82) BT_FRAME(83) BT_FRAME(84) BT_FRAME(85) BT_FRAME(86) BT_FRAME(87) BT_FRAME(88) BT_FRAME(89) BT_FRAME(90) BT_FRAME(91) BT_FRAME(92) BT_FRAME(93) BT_FRAME(94) BT_FRAME(95) BT_FRAME(96) BT_FRAME(97) BT_FRAME(98) BT_FRAME(99) BT_FRAME(100) BT_FRAME(101) BT_FRAME(102) BT_FRAME(103) BT_FRAME(104) BT_FRAME(105) BT_FRAME(106) BT_FRAME(107) BT_FRAME(108) BT_FRAME(109) BT_FRAME(110) BT_FRAME(111) BT_FRAME(112) BT_FRAME(113) BT_FRAME(114) BT_FRAME(115) BT_FRAME(116) BT_FRAME(117) BT_FRAME(118) BT_FRAME(119) BT_FRAME(120) BT_FRAME(121) BT_FRAME(122) BT_FRAME(123) BT_FRAME(124) BT_FRAME(125) BT_FRAME(126) BT_FRAME(127) #undef BT_FRAME } #else void prof_backtrace(prof_bt_t *bt) { cassert(config_prof); not_reached(); } #endif static malloc_mutex_t * prof_gctx_mutex_choose(void) { unsigned ngctxs = atomic_add_u(&cum_gctxs, 1); return (&gctx_locks[(ngctxs - 1) % PROF_NCTX_LOCKS]); } static malloc_mutex_t * prof_tdata_mutex_choose(uint64_t thr_uid) { return (&tdata_locks[thr_uid % PROF_NTDATA_LOCKS]); } static prof_gctx_t * -prof_gctx_create(tsd_t *tsd, prof_bt_t *bt) +prof_gctx_create(tsdn_t *tsdn, prof_bt_t *bt) { /* * 
Create a single allocation that has space for vec of length bt->len. */ size_t size = offsetof(prof_gctx_t, vec) + (bt->len * sizeof(void *)); - prof_gctx_t *gctx = (prof_gctx_t *)iallocztm(tsd, size, - size2index(size), false, tcache_get(tsd, true), true, NULL, true); + prof_gctx_t *gctx = (prof_gctx_t *)iallocztm(tsdn, size, + size2index(size), false, NULL, true, arena_get(TSDN_NULL, 0, true), + true); if (gctx == NULL) return (NULL); gctx->lock = prof_gctx_mutex_choose(); /* * Set nlimbo to 1, in order to avoid a race condition with * prof_tctx_destroy()/prof_gctx_try_destroy(). */ gctx->nlimbo = 1; tctx_tree_new(&gctx->tctxs); /* Duplicate bt. */ memcpy(gctx->vec, bt->vec, bt->len * sizeof(void *)); gctx->bt.vec = gctx->vec; gctx->bt.len = bt->len; return (gctx); } static void prof_gctx_try_destroy(tsd_t *tsd, prof_tdata_t *tdata_self, prof_gctx_t *gctx, prof_tdata_t *tdata) { cassert(config_prof); /* * Check that gctx is still unused by any thread cache before destroying * it. prof_lookup() increments gctx->nlimbo in order to avoid a race * condition with this function, as does prof_tctx_destroy() in order to * avoid a race between the main body of prof_tctx_destroy() and entry * into this function. */ prof_enter(tsd, tdata_self); - malloc_mutex_lock(gctx->lock); + malloc_mutex_lock(tsd_tsdn(tsd), gctx->lock); assert(gctx->nlimbo != 0); if (tctx_tree_empty(&gctx->tctxs) && gctx->nlimbo == 1) { /* Remove gctx from bt2gctx. */ - if (ckh_remove(tsd, &bt2gctx, &gctx->bt, NULL, NULL)) + if (ckh_remove(tsd_tsdn(tsd), &bt2gctx, &gctx->bt, NULL, NULL)) not_reached(); prof_leave(tsd, tdata_self); /* Destroy gctx. */ - malloc_mutex_unlock(gctx->lock); - idalloctm(tsd, gctx, tcache_get(tsd, false), true, true); + malloc_mutex_unlock(tsd_tsdn(tsd), gctx->lock); + idalloctm(tsd_tsdn(tsd), gctx, NULL, true, true); } else { /* * Compensate for increment in prof_tctx_destroy() or * prof_lookup(). */ gctx->nlimbo--; - malloc_mutex_unlock(gctx->lock); + malloc_mutex_unlock(tsd_tsdn(tsd), gctx->lock); prof_leave(tsd, tdata_self); } } -/* tctx->tdata->lock must be held. */ static bool -prof_tctx_should_destroy(prof_tctx_t *tctx) +prof_tctx_should_destroy(tsdn_t *tsdn, prof_tctx_t *tctx) { + malloc_mutex_assert_owner(tsdn, tctx->tdata->lock); + if (opt_prof_accum) return (false); if (tctx->cnts.curobjs != 0) return (false); if (tctx->prepared) return (false); return (true); } static bool prof_gctx_should_destroy(prof_gctx_t *gctx) { if (opt_prof_accum) return (false); if (!tctx_tree_empty(&gctx->tctxs)) return (false); if (gctx->nlimbo != 0) return (false); return (true); } -/* tctx->tdata->lock is held upon entry, and released before return. 
*/ static void prof_tctx_destroy(tsd_t *tsd, prof_tctx_t *tctx) { prof_tdata_t *tdata = tctx->tdata; prof_gctx_t *gctx = tctx->gctx; bool destroy_tdata, destroy_tctx, destroy_gctx; + malloc_mutex_assert_owner(tsd_tsdn(tsd), tctx->tdata->lock); + assert(tctx->cnts.curobjs == 0); assert(tctx->cnts.curbytes == 0); assert(!opt_prof_accum); assert(tctx->cnts.accumobjs == 0); assert(tctx->cnts.accumbytes == 0); - ckh_remove(tsd, &tdata->bt2tctx, &gctx->bt, NULL, NULL); - destroy_tdata = prof_tdata_should_destroy(tdata, false); - malloc_mutex_unlock(tdata->lock); + ckh_remove(tsd_tsdn(tsd), &tdata->bt2tctx, &gctx->bt, NULL, NULL); + destroy_tdata = prof_tdata_should_destroy(tsd_tsdn(tsd), tdata, false); + malloc_mutex_unlock(tsd_tsdn(tsd), tdata->lock); - malloc_mutex_lock(gctx->lock); + malloc_mutex_lock(tsd_tsdn(tsd), gctx->lock); switch (tctx->state) { case prof_tctx_state_nominal: tctx_tree_remove(&gctx->tctxs, tctx); destroy_tctx = true; if (prof_gctx_should_destroy(gctx)) { /* * Increment gctx->nlimbo in order to keep another * thread from winning the race to destroy gctx while * this one has gctx->lock dropped. Without this, it * would be possible for another thread to: * * 1) Sample an allocation associated with gctx. * 2) Deallocate the sampled object. * 3) Successfully prof_gctx_try_destroy(gctx). * * The result would be that gctx no longer exists by the * time this thread accesses it in * prof_gctx_try_destroy(). */ gctx->nlimbo++; destroy_gctx = true; } else destroy_gctx = false; break; case prof_tctx_state_dumping: /* * A dumping thread needs tctx to remain valid until dumping * has finished. Change state such that the dumping thread will * complete destruction during a late dump iteration phase. */ tctx->state = prof_tctx_state_purgatory; destroy_tctx = false; destroy_gctx = false; break; default: not_reached(); destroy_tctx = false; destroy_gctx = false; } - malloc_mutex_unlock(gctx->lock); + malloc_mutex_unlock(tsd_tsdn(tsd), gctx->lock); if (destroy_gctx) { prof_gctx_try_destroy(tsd, prof_tdata_get(tsd, false), gctx, tdata); } + malloc_mutex_assert_not_owner(tsd_tsdn(tsd), tctx->tdata->lock); + if (destroy_tdata) - prof_tdata_destroy(tsd, tdata, false); + prof_tdata_destroy(tsd_tsdn(tsd), tdata, false); if (destroy_tctx) - idalloctm(tsd, tctx, tcache_get(tsd, false), true, true); + idalloctm(tsd_tsdn(tsd), tctx, NULL, true, true); } static bool prof_lookup_global(tsd_t *tsd, prof_bt_t *bt, prof_tdata_t *tdata, void **p_btkey, prof_gctx_t **p_gctx, bool *p_new_gctx) { union { prof_gctx_t *p; void *v; } gctx; union { prof_bt_t *p; void *v; } btkey; bool new_gctx; prof_enter(tsd, tdata); if (ckh_search(&bt2gctx, bt, &btkey.v, &gctx.v)) { /* bt has never been seen before. Insert it. */ - gctx.p = prof_gctx_create(tsd, bt); + gctx.p = prof_gctx_create(tsd_tsdn(tsd), bt); if (gctx.v == NULL) { prof_leave(tsd, tdata); return (true); } btkey.p = &gctx.p->bt; - if (ckh_insert(tsd, &bt2gctx, btkey.v, gctx.v)) { + if (ckh_insert(tsd_tsdn(tsd), &bt2gctx, btkey.v, gctx.v)) { /* OOM. */ prof_leave(tsd, tdata); - idalloctm(tsd, gctx.v, tcache_get(tsd, false), true, - true); + idalloctm(tsd_tsdn(tsd), gctx.v, NULL, true, true); return (true); } new_gctx = true; } else { /* * Increment nlimbo, in order to avoid a race condition with * prof_tctx_destroy()/prof_gctx_try_destroy(). 
*/ - malloc_mutex_lock(gctx.p->lock); + malloc_mutex_lock(tsd_tsdn(tsd), gctx.p->lock); gctx.p->nlimbo++; - malloc_mutex_unlock(gctx.p->lock); + malloc_mutex_unlock(tsd_tsdn(tsd), gctx.p->lock); new_gctx = false; } prof_leave(tsd, tdata); *p_btkey = btkey.v; *p_gctx = gctx.p; *p_new_gctx = new_gctx; return (false); } prof_tctx_t * prof_lookup(tsd_t *tsd, prof_bt_t *bt) { union { prof_tctx_t *p; void *v; } ret; prof_tdata_t *tdata; bool not_found; cassert(config_prof); tdata = prof_tdata_get(tsd, false); if (tdata == NULL) return (NULL); - malloc_mutex_lock(tdata->lock); + malloc_mutex_lock(tsd_tsdn(tsd), tdata->lock); not_found = ckh_search(&tdata->bt2tctx, bt, NULL, &ret.v); if (!not_found) /* Note double negative! */ ret.p->prepared = true; - malloc_mutex_unlock(tdata->lock); + malloc_mutex_unlock(tsd_tsdn(tsd), tdata->lock); if (not_found) { - tcache_t *tcache; void *btkey; prof_gctx_t *gctx; bool new_gctx, error; /* * This thread's cache lacks bt. Look for it in the global * cache. */ if (prof_lookup_global(tsd, bt, tdata, &btkey, &gctx, &new_gctx)) return (NULL); /* Link a prof_tctx_t into gctx for this thread. */ - tcache = tcache_get(tsd, true); - ret.v = iallocztm(tsd, sizeof(prof_tctx_t), - size2index(sizeof(prof_tctx_t)), false, tcache, true, NULL, - true); + ret.v = iallocztm(tsd_tsdn(tsd), sizeof(prof_tctx_t), + size2index(sizeof(prof_tctx_t)), false, NULL, true, + arena_ichoose(tsd_tsdn(tsd), NULL), true); if (ret.p == NULL) { if (new_gctx) prof_gctx_try_destroy(tsd, tdata, gctx, tdata); return (NULL); } ret.p->tdata = tdata; ret.p->thr_uid = tdata->thr_uid; ret.p->thr_discrim = tdata->thr_discrim; memset(&ret.p->cnts, 0, sizeof(prof_cnt_t)); ret.p->gctx = gctx; ret.p->tctx_uid = tdata->tctx_uid_next++; ret.p->prepared = true; ret.p->state = prof_tctx_state_initializing; - malloc_mutex_lock(tdata->lock); - error = ckh_insert(tsd, &tdata->bt2tctx, btkey, ret.v); - malloc_mutex_unlock(tdata->lock); + malloc_mutex_lock(tsd_tsdn(tsd), tdata->lock); + error = ckh_insert(tsd_tsdn(tsd), &tdata->bt2tctx, btkey, + ret.v); + malloc_mutex_unlock(tsd_tsdn(tsd), tdata->lock); if (error) { if (new_gctx) prof_gctx_try_destroy(tsd, tdata, gctx, tdata); - idalloctm(tsd, ret.v, tcache, true, true); + idalloctm(tsd_tsdn(tsd), ret.v, NULL, true, true); return (NULL); } - malloc_mutex_lock(gctx->lock); + malloc_mutex_lock(tsd_tsdn(tsd), gctx->lock); ret.p->state = prof_tctx_state_nominal; tctx_tree_insert(&gctx->tctxs, ret.p); gctx->nlimbo--; - malloc_mutex_unlock(gctx->lock); + malloc_mutex_unlock(tsd_tsdn(tsd), gctx->lock); } return (ret.p); } +/* + * The bodies of this function and prof_leakcheck() are compiled out unless heap + * profiling is enabled, so that it is possible to compile jemalloc with + * floating point support completely disabled. Avoiding floating point code is + * important on memory-constrained systems, but it also enables a workaround for + * versions of glibc that don't properly save/restore floating point registers + * during dynamic lazy symbol loading (which internally calls into whatever + * malloc implementation happens to be integrated into the application). Note + * that some compilers (e.g. gcc 4.8) may use floating point registers for fast + * memory moves, so jemalloc must be compiled with such optimizations disabled + * (e.g. + * -mno-sse) in order for the workaround to be complete. 
+ */ void prof_sample_threshold_update(prof_tdata_t *tdata) { - /* - * The body of this function is compiled out unless heap profiling is - * enabled, so that it is possible to compile jemalloc with floating - * point support completely disabled. Avoiding floating point code is - * important on memory-constrained systems, but it also enables a - * workaround for versions of glibc that don't properly save/restore - * floating point registers during dynamic lazy symbol loading (which - * internally calls into whatever malloc implementation happens to be - * integrated into the application). Note that some compilers (e.g. - * gcc 4.8) may use floating point registers for fast memory moves, so - * jemalloc must be compiled with such optimizations disabled (e.g. - * -mno-sse) in order for the workaround to be complete. - */ #ifdef JEMALLOC_PROF uint64_t r; double u; if (!config_prof) return; if (lg_prof_sample == 0) { tdata->bytes_until_sample = 0; return; } /* * Compute sample interval as a geometrically distributed random * variable with mean (2^lg_prof_sample). * * __ __ * | log(u) | 1 * tdata->bytes_until_sample = | -------- |, where p = --------------- * | log(1-p) | lg_prof_sample * 2 * * For more information on the math, see: * * Non-Uniform Random Variate Generation * Luc Devroye * Springer-Verlag, New York, 1986 * pp 500 * (http://luc.devroye.org/rnbookindex.html) */ r = prng_lg_range(&tdata->prng_state, 53); u = (double)r * (1.0/9007199254740992.0L); tdata->bytes_until_sample = (uint64_t)(log(u) / log(1.0 - (1.0 / (double)((uint64_t)1U << lg_prof_sample)))) + (uint64_t)1U; #endif } #ifdef JEMALLOC_JET static prof_tdata_t * prof_tdata_count_iter(prof_tdata_tree_t *tdatas, prof_tdata_t *tdata, void *arg) { size_t *tdata_count = (size_t *)arg; (*tdata_count)++; return (NULL); } size_t prof_tdata_count(void) { size_t tdata_count = 0; + tsdn_t *tsdn; - malloc_mutex_lock(&tdatas_mtx); + tsdn = tsdn_fetch(); + malloc_mutex_lock(tsdn, &tdatas_mtx); tdata_tree_iter(&tdatas, NULL, prof_tdata_count_iter, (void *)&tdata_count); - malloc_mutex_unlock(&tdatas_mtx); + malloc_mutex_unlock(tsdn, &tdatas_mtx); return (tdata_count); } #endif #ifdef JEMALLOC_JET size_t prof_bt_count(void) { size_t bt_count; tsd_t *tsd; prof_tdata_t *tdata; tsd = tsd_fetch(); tdata = prof_tdata_get(tsd, false); if (tdata == NULL) return (0); - malloc_mutex_lock(&bt2gctx_mtx); + malloc_mutex_lock(tsd_tsdn(tsd), &bt2gctx_mtx); bt_count = ckh_count(&bt2gctx); - malloc_mutex_unlock(&bt2gctx_mtx); + malloc_mutex_unlock(tsd_tsdn(tsd), &bt2gctx_mtx); return (bt_count); } #endif #ifdef JEMALLOC_JET #undef prof_dump_open #define prof_dump_open JEMALLOC_N(prof_dump_open_impl) #endif static int prof_dump_open(bool propagate_err, const char *filename) { int fd; fd = creat(filename, 0644); if (fd == -1 && !propagate_err) { malloc_printf(": creat(\"%s\"), 0644) failed\n", filename); if (opt_abort) abort(); } return (fd); } #ifdef JEMALLOC_JET #undef prof_dump_open #define prof_dump_open JEMALLOC_N(prof_dump_open) prof_dump_open_t *prof_dump_open = JEMALLOC_N(prof_dump_open_impl); #endif static bool prof_dump_flush(bool propagate_err) { bool ret = false; ssize_t err; cassert(config_prof); err = write(prof_dump_fd, prof_dump_buf, prof_dump_buf_end); if (err == -1) { if (!propagate_err) { malloc_write(": write() failed during heap " "profile flush\n"); if (opt_abort) abort(); } ret = true; } prof_dump_buf_end = 0; return (ret); } static bool prof_dump_close(bool propagate_err) { bool ret; assert(prof_dump_fd != -1); ret = 
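[Illustration, not part of this change] The geometric sampling interval derived above is easy to evaluate numerically. A standalone check (link with -lm), assuming the documented default lg_prof_sample of 19 (512 KiB mean interval) and an arbitrary uniform draw in place of the PRNG output:

	#include <math.h>
	#include <stdint.h>
	#include <stdio.h>

	int
	main(void)
	{
		unsigned lg_prof_sample = 19;	/* 2^19 bytes mean interval */
		double u = 0.37;		/* stand-in for the PRNG draw */
		double p = 1.0 / (double)((uint64_t)1 << lg_prof_sample);
		uint64_t bytes_until_sample =
		    (uint64_t)(log(u) / log(1.0 - p)) + (uint64_t)1;

		printf("next sample after %llu bytes\n",
		    (unsigned long long)bytes_until_sample);
		return (0);
	}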
prof_dump_flush(propagate_err); close(prof_dump_fd); prof_dump_fd = -1; return (ret); } static bool prof_dump_write(bool propagate_err, const char *s) { size_t i, slen, n; cassert(config_prof); i = 0; slen = strlen(s); while (i < slen) { /* Flush the buffer if it is full. */ if (prof_dump_buf_end == PROF_DUMP_BUFSIZE) if (prof_dump_flush(propagate_err) && propagate_err) return (true); if (prof_dump_buf_end + slen <= PROF_DUMP_BUFSIZE) { /* Finish writing. */ n = slen - i; } else { /* Write as much of s as will fit. */ n = PROF_DUMP_BUFSIZE - prof_dump_buf_end; } memcpy(&prof_dump_buf[prof_dump_buf_end], &s[i], n); prof_dump_buf_end += n; i += n; } return (false); } JEMALLOC_FORMAT_PRINTF(2, 3) static bool prof_dump_printf(bool propagate_err, const char *format, ...) { bool ret; va_list ap; char buf[PROF_PRINTF_BUFSIZE]; va_start(ap, format); malloc_vsnprintf(buf, sizeof(buf), format, ap); va_end(ap); ret = prof_dump_write(propagate_err, buf); return (ret); } -/* tctx->tdata->lock is held. */ static void -prof_tctx_merge_tdata(prof_tctx_t *tctx, prof_tdata_t *tdata) +prof_tctx_merge_tdata(tsdn_t *tsdn, prof_tctx_t *tctx, prof_tdata_t *tdata) { - malloc_mutex_lock(tctx->gctx->lock); + malloc_mutex_assert_owner(tsdn, tctx->tdata->lock); + malloc_mutex_lock(tsdn, tctx->gctx->lock); + switch (tctx->state) { case prof_tctx_state_initializing: - malloc_mutex_unlock(tctx->gctx->lock); + malloc_mutex_unlock(tsdn, tctx->gctx->lock); return; case prof_tctx_state_nominal: tctx->state = prof_tctx_state_dumping; - malloc_mutex_unlock(tctx->gctx->lock); + malloc_mutex_unlock(tsdn, tctx->gctx->lock); memcpy(&tctx->dump_cnts, &tctx->cnts, sizeof(prof_cnt_t)); tdata->cnt_summed.curobjs += tctx->dump_cnts.curobjs; tdata->cnt_summed.curbytes += tctx->dump_cnts.curbytes; if (opt_prof_accum) { tdata->cnt_summed.accumobjs += tctx->dump_cnts.accumobjs; tdata->cnt_summed.accumbytes += tctx->dump_cnts.accumbytes; } break; case prof_tctx_state_dumping: case prof_tctx_state_purgatory: not_reached(); } } -/* gctx->lock is held. */ static void -prof_tctx_merge_gctx(prof_tctx_t *tctx, prof_gctx_t *gctx) +prof_tctx_merge_gctx(tsdn_t *tsdn, prof_tctx_t *tctx, prof_gctx_t *gctx) { + malloc_mutex_assert_owner(tsdn, gctx->lock); + gctx->cnt_summed.curobjs += tctx->dump_cnts.curobjs; gctx->cnt_summed.curbytes += tctx->dump_cnts.curbytes; if (opt_prof_accum) { gctx->cnt_summed.accumobjs += tctx->dump_cnts.accumobjs; gctx->cnt_summed.accumbytes += tctx->dump_cnts.accumbytes; } } -/* tctx->gctx is held. */ static prof_tctx_t * prof_tctx_merge_iter(prof_tctx_tree_t *tctxs, prof_tctx_t *tctx, void *arg) { + tsdn_t *tsdn = (tsdn_t *)arg; + malloc_mutex_assert_owner(tsdn, tctx->gctx->lock); + switch (tctx->state) { case prof_tctx_state_nominal: /* New since dumping started; ignore. */ break; case prof_tctx_state_dumping: case prof_tctx_state_purgatory: - prof_tctx_merge_gctx(tctx, tctx->gctx); + prof_tctx_merge_gctx(tsdn, tctx, tctx->gctx); break; default: not_reached(); } return (NULL); } -/* gctx->lock is held. 
*/ +struct prof_tctx_dump_iter_arg_s { + tsdn_t *tsdn; + bool propagate_err; +}; + static prof_tctx_t * -prof_tctx_dump_iter(prof_tctx_tree_t *tctxs, prof_tctx_t *tctx, void *arg) +prof_tctx_dump_iter(prof_tctx_tree_t *tctxs, prof_tctx_t *tctx, void *opaque) { - bool propagate_err = *(bool *)arg; + struct prof_tctx_dump_iter_arg_s *arg = + (struct prof_tctx_dump_iter_arg_s *)opaque; + malloc_mutex_assert_owner(arg->tsdn, tctx->gctx->lock); + switch (tctx->state) { case prof_tctx_state_initializing: case prof_tctx_state_nominal: /* Not captured by this dump. */ break; case prof_tctx_state_dumping: case prof_tctx_state_purgatory: - if (prof_dump_printf(propagate_err, + if (prof_dump_printf(arg->propagate_err, " t%"FMTu64": %"FMTu64": %"FMTu64" [%"FMTu64": " "%"FMTu64"]\n", tctx->thr_uid, tctx->dump_cnts.curobjs, tctx->dump_cnts.curbytes, tctx->dump_cnts.accumobjs, tctx->dump_cnts.accumbytes)) return (tctx); break; default: not_reached(); } return (NULL); } -/* tctx->gctx is held. */ static prof_tctx_t * prof_tctx_finish_iter(prof_tctx_tree_t *tctxs, prof_tctx_t *tctx, void *arg) { + tsdn_t *tsdn = (tsdn_t *)arg; prof_tctx_t *ret; + malloc_mutex_assert_owner(tsdn, tctx->gctx->lock); + switch (tctx->state) { case prof_tctx_state_nominal: /* New since dumping started; ignore. */ break; case prof_tctx_state_dumping: tctx->state = prof_tctx_state_nominal; break; case prof_tctx_state_purgatory: ret = tctx; goto label_return; default: not_reached(); } ret = NULL; label_return: return (ret); } static void -prof_dump_gctx_prep(prof_gctx_t *gctx, prof_gctx_tree_t *gctxs) +prof_dump_gctx_prep(tsdn_t *tsdn, prof_gctx_t *gctx, prof_gctx_tree_t *gctxs) { cassert(config_prof); - malloc_mutex_lock(gctx->lock); + malloc_mutex_lock(tsdn, gctx->lock); /* * Increment nlimbo so that gctx won't go away before dump. * Additionally, link gctx into the dump list so that it is included in * prof_dump()'s second pass. */ gctx->nlimbo++; gctx_tree_insert(gctxs, gctx); memset(&gctx->cnt_summed, 0, sizeof(prof_cnt_t)); - malloc_mutex_unlock(gctx->lock); + malloc_mutex_unlock(tsdn, gctx->lock); } +struct prof_gctx_merge_iter_arg_s { + tsdn_t *tsdn; + size_t leak_ngctx; +}; + static prof_gctx_t * -prof_gctx_merge_iter(prof_gctx_tree_t *gctxs, prof_gctx_t *gctx, void *arg) +prof_gctx_merge_iter(prof_gctx_tree_t *gctxs, prof_gctx_t *gctx, void *opaque) { - size_t *leak_ngctx = (size_t *)arg; + struct prof_gctx_merge_iter_arg_s *arg = + (struct prof_gctx_merge_iter_arg_s *)opaque; - malloc_mutex_lock(gctx->lock); - tctx_tree_iter(&gctx->tctxs, NULL, prof_tctx_merge_iter, NULL); + malloc_mutex_lock(arg->tsdn, gctx->lock); + tctx_tree_iter(&gctx->tctxs, NULL, prof_tctx_merge_iter, + (void *)arg->tsdn); if (gctx->cnt_summed.curobjs != 0) - (*leak_ngctx)++; - malloc_mutex_unlock(gctx->lock); + arg->leak_ngctx++; + malloc_mutex_unlock(arg->tsdn, gctx->lock); return (NULL); } static void prof_gctx_finish(tsd_t *tsd, prof_gctx_tree_t *gctxs) { prof_tdata_t *tdata = prof_tdata_get(tsd, false); prof_gctx_t *gctx; /* * Standard tree iteration won't work here, because as soon as we * decrement gctx->nlimbo and unlock gctx, another thread can * concurrently destroy it, which will corrupt the tree. Therefore, * tear down the tree one node at a time during iteration. 
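[Illustration, not part of this change] Several iterators in this hunk now pass a small struct through the void * callback argument (e.g. prof_gctx_merge_iter_arg_s) so that a tsdn_t * and an accumulator travel together instead of overloading the pointer with a single value. The idiom in isolation, with invented names:

	#include <stdio.h>

	struct count_iter_arg_s {
		const char	*tag;	/* extra context for the callback */
		unsigned	count;	/* accumulator, formerly the arg itself */
	};

	static void
	visit(int elm, void *opaque)
	{
		struct count_iter_arg_s *arg = (struct count_iter_arg_s *)opaque;

		if (elm > 0)
			arg->count++;
	}

	int
	main(void)
	{
		int elms[] = {3, -1, 7, 0, 5};
		struct count_iter_arg_s arg = {"positives", 0};
		unsigned i;

		for (i = 0; i < sizeof(elms) / sizeof(elms[0]); i++)
			visit(elms[i], &arg);
		printf("%s: %u\n", arg.tag, arg.count);
		return (0);
	}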
*/ while ((gctx = gctx_tree_first(gctxs)) != NULL) { gctx_tree_remove(gctxs, gctx); - malloc_mutex_lock(gctx->lock); + malloc_mutex_lock(tsd_tsdn(tsd), gctx->lock); { prof_tctx_t *next; next = NULL; do { prof_tctx_t *to_destroy = tctx_tree_iter(&gctx->tctxs, next, - prof_tctx_finish_iter, NULL); + prof_tctx_finish_iter, + (void *)tsd_tsdn(tsd)); if (to_destroy != NULL) { next = tctx_tree_next(&gctx->tctxs, to_destroy); tctx_tree_remove(&gctx->tctxs, to_destroy); - idalloctm(tsd, to_destroy, - tcache_get(tsd, false), true, true); + idalloctm(tsd_tsdn(tsd), to_destroy, + NULL, true, true); } else next = NULL; } while (next != NULL); } gctx->nlimbo--; if (prof_gctx_should_destroy(gctx)) { gctx->nlimbo++; - malloc_mutex_unlock(gctx->lock); + malloc_mutex_unlock(tsd_tsdn(tsd), gctx->lock); prof_gctx_try_destroy(tsd, tdata, gctx, tdata); } else - malloc_mutex_unlock(gctx->lock); + malloc_mutex_unlock(tsd_tsdn(tsd), gctx->lock); } } +struct prof_tdata_merge_iter_arg_s { + tsdn_t *tsdn; + prof_cnt_t cnt_all; +}; + static prof_tdata_t * -prof_tdata_merge_iter(prof_tdata_tree_t *tdatas, prof_tdata_t *tdata, void *arg) +prof_tdata_merge_iter(prof_tdata_tree_t *tdatas, prof_tdata_t *tdata, + void *opaque) { - prof_cnt_t *cnt_all = (prof_cnt_t *)arg; + struct prof_tdata_merge_iter_arg_s *arg = + (struct prof_tdata_merge_iter_arg_s *)opaque; - malloc_mutex_lock(tdata->lock); + malloc_mutex_lock(arg->tsdn, tdata->lock); if (!tdata->expired) { size_t tabind; union { prof_tctx_t *p; void *v; } tctx; tdata->dumping = true; memset(&tdata->cnt_summed, 0, sizeof(prof_cnt_t)); for (tabind = 0; !ckh_iter(&tdata->bt2tctx, &tabind, NULL, &tctx.v);) - prof_tctx_merge_tdata(tctx.p, tdata); + prof_tctx_merge_tdata(arg->tsdn, tctx.p, tdata); - cnt_all->curobjs += tdata->cnt_summed.curobjs; - cnt_all->curbytes += tdata->cnt_summed.curbytes; + arg->cnt_all.curobjs += tdata->cnt_summed.curobjs; + arg->cnt_all.curbytes += tdata->cnt_summed.curbytes; if (opt_prof_accum) { - cnt_all->accumobjs += tdata->cnt_summed.accumobjs; - cnt_all->accumbytes += tdata->cnt_summed.accumbytes; + arg->cnt_all.accumobjs += tdata->cnt_summed.accumobjs; + arg->cnt_all.accumbytes += tdata->cnt_summed.accumbytes; } } else tdata->dumping = false; - malloc_mutex_unlock(tdata->lock); + malloc_mutex_unlock(arg->tsdn, tdata->lock); return (NULL); } static prof_tdata_t * prof_tdata_dump_iter(prof_tdata_tree_t *tdatas, prof_tdata_t *tdata, void *arg) { bool propagate_err = *(bool *)arg; if (!tdata->dumping) return (NULL); if (prof_dump_printf(propagate_err, " t%"FMTu64": %"FMTu64": %"FMTu64" [%"FMTu64": %"FMTu64"]%s%s\n", tdata->thr_uid, tdata->cnt_summed.curobjs, tdata->cnt_summed.curbytes, tdata->cnt_summed.accumobjs, tdata->cnt_summed.accumbytes, (tdata->thread_name != NULL) ? " " : "", (tdata->thread_name != NULL) ? 
tdata->thread_name : "")) return (tdata); return (NULL); } #ifdef JEMALLOC_JET #undef prof_dump_header #define prof_dump_header JEMALLOC_N(prof_dump_header_impl) #endif static bool -prof_dump_header(bool propagate_err, const prof_cnt_t *cnt_all) +prof_dump_header(tsdn_t *tsdn, bool propagate_err, const prof_cnt_t *cnt_all) { bool ret; if (prof_dump_printf(propagate_err, "heap_v2/%"FMTu64"\n" " t*: %"FMTu64": %"FMTu64" [%"FMTu64": %"FMTu64"]\n", ((uint64_t)1U << lg_prof_sample), cnt_all->curobjs, cnt_all->curbytes, cnt_all->accumobjs, cnt_all->accumbytes)) return (true); - malloc_mutex_lock(&tdatas_mtx); + malloc_mutex_lock(tsdn, &tdatas_mtx); ret = (tdata_tree_iter(&tdatas, NULL, prof_tdata_dump_iter, (void *)&propagate_err) != NULL); - malloc_mutex_unlock(&tdatas_mtx); + malloc_mutex_unlock(tsdn, &tdatas_mtx); return (ret); } #ifdef JEMALLOC_JET #undef prof_dump_header #define prof_dump_header JEMALLOC_N(prof_dump_header) prof_dump_header_t *prof_dump_header = JEMALLOC_N(prof_dump_header_impl); #endif -/* gctx->lock is held. */ static bool -prof_dump_gctx(bool propagate_err, prof_gctx_t *gctx, const prof_bt_t *bt, - prof_gctx_tree_t *gctxs) +prof_dump_gctx(tsdn_t *tsdn, bool propagate_err, prof_gctx_t *gctx, + const prof_bt_t *bt, prof_gctx_tree_t *gctxs) { bool ret; unsigned i; + struct prof_tctx_dump_iter_arg_s prof_tctx_dump_iter_arg; cassert(config_prof); + malloc_mutex_assert_owner(tsdn, gctx->lock); /* Avoid dumping such gctx's that have no useful data. */ if ((!opt_prof_accum && gctx->cnt_summed.curobjs == 0) || (opt_prof_accum && gctx->cnt_summed.accumobjs == 0)) { assert(gctx->cnt_summed.curobjs == 0); assert(gctx->cnt_summed.curbytes == 0); assert(gctx->cnt_summed.accumobjs == 0); assert(gctx->cnt_summed.accumbytes == 0); ret = false; goto label_return; } if (prof_dump_printf(propagate_err, "@")) { ret = true; goto label_return; } for (i = 0; i < bt->len; i++) { if (prof_dump_printf(propagate_err, " %#"FMTxPTR, (uintptr_t)bt->vec[i])) { ret = true; goto label_return; } } if (prof_dump_printf(propagate_err, "\n" " t*: %"FMTu64": %"FMTu64" [%"FMTu64": %"FMTu64"]\n", gctx->cnt_summed.curobjs, gctx->cnt_summed.curbytes, gctx->cnt_summed.accumobjs, gctx->cnt_summed.accumbytes)) { ret = true; goto label_return; } + prof_tctx_dump_iter_arg.tsdn = tsdn; + prof_tctx_dump_iter_arg.propagate_err = propagate_err; if (tctx_tree_iter(&gctx->tctxs, NULL, prof_tctx_dump_iter, - (void *)&propagate_err) != NULL) { + (void *)&prof_tctx_dump_iter_arg) != NULL) { ret = true; goto label_return; } ret = false; label_return: return (ret); } #ifndef _WIN32 JEMALLOC_FORMAT_PRINTF(1, 2) static int prof_open_maps(const char *format, ...) 
{ int mfd; va_list ap; char filename[PATH_MAX + 1]; va_start(ap, format); malloc_vsnprintf(filename, sizeof(filename), format, ap); va_end(ap); mfd = open(filename, O_RDONLY); return (mfd); } #endif static int prof_getpid(void) { #ifdef _WIN32 return (GetCurrentProcessId()); #else return (getpid()); #endif } static bool prof_dump_maps(bool propagate_err) { bool ret; int mfd; cassert(config_prof); #ifdef __FreeBSD__ mfd = prof_open_maps("/proc/curproc/map"); #elif defined(_WIN32) mfd = -1; // Not implemented #else { int pid = prof_getpid(); mfd = prof_open_maps("/proc/%d/task/%d/maps", pid, pid); if (mfd == -1) mfd = prof_open_maps("/proc/%d/maps", pid); } #endif if (mfd != -1) { ssize_t nread; if (prof_dump_write(propagate_err, "\nMAPPED_LIBRARIES:\n") && propagate_err) { ret = true; goto label_return; } nread = 0; do { prof_dump_buf_end += nread; if (prof_dump_buf_end == PROF_DUMP_BUFSIZE) { /* Make space in prof_dump_buf before read(). */ if (prof_dump_flush(propagate_err) && propagate_err) { ret = true; goto label_return; } } nread = read(mfd, &prof_dump_buf[prof_dump_buf_end], PROF_DUMP_BUFSIZE - prof_dump_buf_end); } while (nread > 0); } else { ret = true; goto label_return; } ret = false; label_return: if (mfd != -1) close(mfd); return (ret); } +/* + * See prof_sample_threshold_update() comment for why the body of this function + * is conditionally compiled. + */ static void prof_leakcheck(const prof_cnt_t *cnt_all, size_t leak_ngctx, const char *filename) { +#ifdef JEMALLOC_PROF + /* + * Scaling is equivalent AdjustSamples() in jeprof, but the result may + * differ slightly from what jeprof reports, because here we scale the + * summary values, whereas jeprof scales each context individually and + * reports the sums of the scaled values. + */ if (cnt_all->curbytes != 0) { - malloc_printf(": Leak summary: %"FMTu64" byte%s, %" - FMTu64" object%s, %zu context%s\n", - cnt_all->curbytes, (cnt_all->curbytes != 1) ? "s" : "", - cnt_all->curobjs, (cnt_all->curobjs != 1) ? "s" : "", - leak_ngctx, (leak_ngctx != 1) ? "s" : ""); + double sample_period = (double)((uint64_t)1 << lg_prof_sample); + double ratio = (((double)cnt_all->curbytes) / + (double)cnt_all->curobjs) / sample_period; + double scale_factor = 1.0 / (1.0 - exp(-ratio)); + uint64_t curbytes = (uint64_t)round(((double)cnt_all->curbytes) + * scale_factor); + uint64_t curobjs = (uint64_t)round(((double)cnt_all->curobjs) * + scale_factor); + + malloc_printf(": Leak approximation summary: ~%"FMTu64 + " byte%s, ~%"FMTu64" object%s, >= %zu context%s\n", + curbytes, (curbytes != 1) ? "s" : "", curobjs, (curobjs != + 1) ? "s" : "", leak_ngctx, (leak_ngctx != 1) ? 
"s" : ""); malloc_printf( ": Run jeprof on \"%s\" for leak detail\n", filename); } +#endif } +struct prof_gctx_dump_iter_arg_s { + tsdn_t *tsdn; + bool propagate_err; +}; + static prof_gctx_t * -prof_gctx_dump_iter(prof_gctx_tree_t *gctxs, prof_gctx_t *gctx, void *arg) +prof_gctx_dump_iter(prof_gctx_tree_t *gctxs, prof_gctx_t *gctx, void *opaque) { prof_gctx_t *ret; - bool propagate_err = *(bool *)arg; + struct prof_gctx_dump_iter_arg_s *arg = + (struct prof_gctx_dump_iter_arg_s *)opaque; - malloc_mutex_lock(gctx->lock); + malloc_mutex_lock(arg->tsdn, gctx->lock); - if (prof_dump_gctx(propagate_err, gctx, &gctx->bt, gctxs)) { + if (prof_dump_gctx(arg->tsdn, arg->propagate_err, gctx, &gctx->bt, + gctxs)) { ret = gctx; goto label_return; } ret = NULL; label_return: - malloc_mutex_unlock(gctx->lock); + malloc_mutex_unlock(arg->tsdn, gctx->lock); return (ret); } static bool prof_dump(tsd_t *tsd, bool propagate_err, const char *filename, bool leakcheck) { prof_tdata_t *tdata; - prof_cnt_t cnt_all; + struct prof_tdata_merge_iter_arg_s prof_tdata_merge_iter_arg; size_t tabind; union { prof_gctx_t *p; void *v; } gctx; - size_t leak_ngctx; + struct prof_gctx_merge_iter_arg_s prof_gctx_merge_iter_arg; + struct prof_gctx_dump_iter_arg_s prof_gctx_dump_iter_arg; prof_gctx_tree_t gctxs; cassert(config_prof); tdata = prof_tdata_get(tsd, true); if (tdata == NULL) return (true); - malloc_mutex_lock(&prof_dump_mtx); + malloc_mutex_lock(tsd_tsdn(tsd), &prof_dump_mtx); prof_enter(tsd, tdata); /* * Put gctx's in limbo and clear their counters in preparation for * summing. */ gctx_tree_new(&gctxs); for (tabind = 0; !ckh_iter(&bt2gctx, &tabind, NULL, &gctx.v);) - prof_dump_gctx_prep(gctx.p, &gctxs); + prof_dump_gctx_prep(tsd_tsdn(tsd), gctx.p, &gctxs); /* * Iterate over tdatas, and for the non-expired ones snapshot their tctx * stats and merge them into the associated gctx's. */ - memset(&cnt_all, 0, sizeof(prof_cnt_t)); - malloc_mutex_lock(&tdatas_mtx); - tdata_tree_iter(&tdatas, NULL, prof_tdata_merge_iter, (void *)&cnt_all); - malloc_mutex_unlock(&tdatas_mtx); + prof_tdata_merge_iter_arg.tsdn = tsd_tsdn(tsd); + memset(&prof_tdata_merge_iter_arg.cnt_all, 0, sizeof(prof_cnt_t)); + malloc_mutex_lock(tsd_tsdn(tsd), &tdatas_mtx); + tdata_tree_iter(&tdatas, NULL, prof_tdata_merge_iter, + (void *)&prof_tdata_merge_iter_arg); + malloc_mutex_unlock(tsd_tsdn(tsd), &tdatas_mtx); /* Merge tctx stats into gctx's. */ - leak_ngctx = 0; - gctx_tree_iter(&gctxs, NULL, prof_gctx_merge_iter, (void *)&leak_ngctx); + prof_gctx_merge_iter_arg.tsdn = tsd_tsdn(tsd); + prof_gctx_merge_iter_arg.leak_ngctx = 0; + gctx_tree_iter(&gctxs, NULL, prof_gctx_merge_iter, + (void *)&prof_gctx_merge_iter_arg); prof_leave(tsd, tdata); /* Create dump file. */ if ((prof_dump_fd = prof_dump_open(propagate_err, filename)) == -1) goto label_open_close_error; /* Dump profile header. */ - if (prof_dump_header(propagate_err, &cnt_all)) + if (prof_dump_header(tsd_tsdn(tsd), propagate_err, + &prof_tdata_merge_iter_arg.cnt_all)) goto label_write_error; /* Dump per gctx profile stats. */ + prof_gctx_dump_iter_arg.tsdn = tsd_tsdn(tsd); + prof_gctx_dump_iter_arg.propagate_err = propagate_err; if (gctx_tree_iter(&gctxs, NULL, prof_gctx_dump_iter, - (void *)&propagate_err) != NULL) + (void *)&prof_gctx_dump_iter_arg) != NULL) goto label_write_error; /* Dump /proc//maps if possible. 
*/ if (prof_dump_maps(propagate_err)) goto label_write_error; if (prof_dump_close(propagate_err)) goto label_open_close_error; prof_gctx_finish(tsd, &gctxs); - malloc_mutex_unlock(&prof_dump_mtx); + malloc_mutex_unlock(tsd_tsdn(tsd), &prof_dump_mtx); - if (leakcheck) - prof_leakcheck(&cnt_all, leak_ngctx, filename); - + if (leakcheck) { + prof_leakcheck(&prof_tdata_merge_iter_arg.cnt_all, + prof_gctx_merge_iter_arg.leak_ngctx, filename); + } return (false); label_write_error: prof_dump_close(propagate_err); label_open_close_error: prof_gctx_finish(tsd, &gctxs); - malloc_mutex_unlock(&prof_dump_mtx); + malloc_mutex_unlock(tsd_tsdn(tsd), &prof_dump_mtx); return (true); } #define DUMP_FILENAME_BUFSIZE (PATH_MAX + 1) #define VSEQ_INVALID UINT64_C(0xffffffffffffffff) static void prof_dump_filename(char *filename, char v, uint64_t vseq) { cassert(config_prof); if (vseq != VSEQ_INVALID) { /* "...v.heap" */ malloc_snprintf(filename, DUMP_FILENAME_BUFSIZE, "%s.%d.%"FMTu64".%c%"FMTu64".heap", opt_prof_prefix, prof_getpid(), prof_dump_seq, v, vseq); } else { /* "....heap" */ malloc_snprintf(filename, DUMP_FILENAME_BUFSIZE, "%s.%d.%"FMTu64".%c.heap", opt_prof_prefix, prof_getpid(), prof_dump_seq, v); } prof_dump_seq++; } static void prof_fdump(void) { tsd_t *tsd; char filename[DUMP_FILENAME_BUFSIZE]; cassert(config_prof); assert(opt_prof_final); assert(opt_prof_prefix[0] != '\0'); if (!prof_booted) return; tsd = tsd_fetch(); - malloc_mutex_lock(&prof_dump_seq_mtx); + malloc_mutex_lock(tsd_tsdn(tsd), &prof_dump_seq_mtx); prof_dump_filename(filename, 'f', VSEQ_INVALID); - malloc_mutex_unlock(&prof_dump_seq_mtx); + malloc_mutex_unlock(tsd_tsdn(tsd), &prof_dump_seq_mtx); prof_dump(tsd, false, filename, opt_prof_leak); } void -prof_idump(void) +prof_idump(tsdn_t *tsdn) { tsd_t *tsd; prof_tdata_t *tdata; cassert(config_prof); - if (!prof_booted) + if (!prof_booted || tsdn_null(tsdn)) return; - tsd = tsd_fetch(); + tsd = tsdn_tsd(tsdn); tdata = prof_tdata_get(tsd, false); if (tdata == NULL) return; if (tdata->enq) { tdata->enq_idump = true; return; } if (opt_prof_prefix[0] != '\0') { char filename[PATH_MAX + 1]; - malloc_mutex_lock(&prof_dump_seq_mtx); + malloc_mutex_lock(tsd_tsdn(tsd), &prof_dump_seq_mtx); prof_dump_filename(filename, 'i', prof_dump_iseq); prof_dump_iseq++; - malloc_mutex_unlock(&prof_dump_seq_mtx); + malloc_mutex_unlock(tsd_tsdn(tsd), &prof_dump_seq_mtx); prof_dump(tsd, false, filename, false); } } bool -prof_mdump(const char *filename) +prof_mdump(tsd_t *tsd, const char *filename) { - tsd_t *tsd; char filename_buf[DUMP_FILENAME_BUFSIZE]; cassert(config_prof); if (!opt_prof || !prof_booted) return (true); - tsd = tsd_fetch(); if (filename == NULL) { /* No filename specified, so automatically generate one. 
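prof_idump() and prof_gdump() now take a tsdn_t *, a handle that may be NULL when no thread-specific data is available yet; instead of calling tsd_fetch(), they bail out via tsdn_null() and otherwise unwrap the tsd with tsdn_tsd(). A sketch of that idiom with hypothetical stand-in types (the real tsd_t/tsdn_t definitions are more involved):

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-ins: a tsdn_t behaves like "tsd_t pointer or NULL". */
typedef struct {
	int	thread_id;
} tsd_t;

typedef tsd_t tsdn_t;

static bool tsdn_null(const tsdn_t *tsdn) { return (tsdn == NULL); }
static tsd_t *tsdn_tsd(tsdn_t *tsdn) { return (tsdn); }
static tsdn_t *tsd_tsdn(tsd_t *tsd) { return (tsd); }

/* Mirrors the entry check pattern: refuse to do work without a tsd. */
static void
maybe_dump(tsdn_t *tsdn)
{
	tsd_t *tsd;

	if (tsdn_null(tsdn)) {
		printf("no tsd yet; skipping dump\n");
		return;
	}
	tsd = tsdn_tsd(tsdn);
	printf("dumping on behalf of thread %d\n", tsd->thread_id);
}

int
main(void)
{
	tsd_t tsd = {42};

	maybe_dump(NULL);		/* bootstrap-time caller */
	maybe_dump(tsd_tsdn(&tsd));	/* normal caller */
	return (0);
}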
*/ if (opt_prof_prefix[0] == '\0') return (true); - malloc_mutex_lock(&prof_dump_seq_mtx); + malloc_mutex_lock(tsd_tsdn(tsd), &prof_dump_seq_mtx); prof_dump_filename(filename_buf, 'm', prof_dump_mseq); prof_dump_mseq++; - malloc_mutex_unlock(&prof_dump_seq_mtx); + malloc_mutex_unlock(tsd_tsdn(tsd), &prof_dump_seq_mtx); filename = filename_buf; } return (prof_dump(tsd, true, filename, false)); } void -prof_gdump(void) +prof_gdump(tsdn_t *tsdn) { tsd_t *tsd; prof_tdata_t *tdata; cassert(config_prof); - if (!prof_booted) + if (!prof_booted || tsdn_null(tsdn)) return; - tsd = tsd_fetch(); + tsd = tsdn_tsd(tsdn); tdata = prof_tdata_get(tsd, false); if (tdata == NULL) return; if (tdata->enq) { tdata->enq_gdump = true; return; } if (opt_prof_prefix[0] != '\0') { char filename[DUMP_FILENAME_BUFSIZE]; - malloc_mutex_lock(&prof_dump_seq_mtx); + malloc_mutex_lock(tsdn, &prof_dump_seq_mtx); prof_dump_filename(filename, 'u', prof_dump_useq); prof_dump_useq++; - malloc_mutex_unlock(&prof_dump_seq_mtx); + malloc_mutex_unlock(tsdn, &prof_dump_seq_mtx); prof_dump(tsd, false, filename, false); } } static void prof_bt_hash(const void *key, size_t r_hash[2]) { prof_bt_t *bt = (prof_bt_t *)key; cassert(config_prof); hash(bt->vec, bt->len * sizeof(void *), 0x94122f33U, r_hash); } static bool prof_bt_keycomp(const void *k1, const void *k2) { const prof_bt_t *bt1 = (prof_bt_t *)k1; const prof_bt_t *bt2 = (prof_bt_t *)k2; cassert(config_prof); if (bt1->len != bt2->len) return (false); return (memcmp(bt1->vec, bt2->vec, bt1->len * sizeof(void *)) == 0); } JEMALLOC_INLINE_C uint64_t -prof_thr_uid_alloc(void) +prof_thr_uid_alloc(tsdn_t *tsdn) { uint64_t thr_uid; - malloc_mutex_lock(&next_thr_uid_mtx); + malloc_mutex_lock(tsdn, &next_thr_uid_mtx); thr_uid = next_thr_uid; next_thr_uid++; - malloc_mutex_unlock(&next_thr_uid_mtx); + malloc_mutex_unlock(tsdn, &next_thr_uid_mtx); return (thr_uid); } static prof_tdata_t * -prof_tdata_init_impl(tsd_t *tsd, uint64_t thr_uid, uint64_t thr_discrim, +prof_tdata_init_impl(tsdn_t *tsdn, uint64_t thr_uid, uint64_t thr_discrim, char *thread_name, bool active) { prof_tdata_t *tdata; - tcache_t *tcache; cassert(config_prof); /* Initialize an empty cache for this thread. 
*/ - tcache = tcache_get(tsd, true); - tdata = (prof_tdata_t *)iallocztm(tsd, sizeof(prof_tdata_t), - size2index(sizeof(prof_tdata_t)), false, tcache, true, NULL, true); + tdata = (prof_tdata_t *)iallocztm(tsdn, sizeof(prof_tdata_t), + size2index(sizeof(prof_tdata_t)), false, NULL, true, + arena_get(TSDN_NULL, 0, true), true); if (tdata == NULL) return (NULL); tdata->lock = prof_tdata_mutex_choose(thr_uid); tdata->thr_uid = thr_uid; tdata->thr_discrim = thr_discrim; tdata->thread_name = thread_name; tdata->attached = true; tdata->expired = false; tdata->tctx_uid_next = 0; - if (ckh_new(tsd, &tdata->bt2tctx, PROF_CKH_MINITEMS, + if (ckh_new(tsdn, &tdata->bt2tctx, PROF_CKH_MINITEMS, prof_bt_hash, prof_bt_keycomp)) { - idalloctm(tsd, tdata, tcache, true, true); + idalloctm(tsdn, tdata, NULL, true, true); return (NULL); } tdata->prng_state = (uint64_t)(uintptr_t)tdata; prof_sample_threshold_update(tdata); tdata->enq = false; tdata->enq_idump = false; tdata->enq_gdump = false; tdata->dumping = false; tdata->active = active; - malloc_mutex_lock(&tdatas_mtx); + malloc_mutex_lock(tsdn, &tdatas_mtx); tdata_tree_insert(&tdatas, tdata); - malloc_mutex_unlock(&tdatas_mtx); + malloc_mutex_unlock(tsdn, &tdatas_mtx); return (tdata); } prof_tdata_t * -prof_tdata_init(tsd_t *tsd) +prof_tdata_init(tsdn_t *tsdn) { - return (prof_tdata_init_impl(tsd, prof_thr_uid_alloc(), 0, NULL, - prof_thread_active_init_get())); + return (prof_tdata_init_impl(tsdn, prof_thr_uid_alloc(tsdn), 0, NULL, + prof_thread_active_init_get(tsdn))); } -/* tdata->lock must be held. */ static bool -prof_tdata_should_destroy(prof_tdata_t *tdata, bool even_if_attached) +prof_tdata_should_destroy_unlocked(prof_tdata_t *tdata, bool even_if_attached) { if (tdata->attached && !even_if_attached) return (false); if (ckh_count(&tdata->bt2tctx) != 0) return (false); return (true); } -/* tdatas_mtx must be held. 
*/ +static bool +prof_tdata_should_destroy(tsdn_t *tsdn, prof_tdata_t *tdata, + bool even_if_attached) +{ + + malloc_mutex_assert_owner(tsdn, tdata->lock); + + return (prof_tdata_should_destroy_unlocked(tdata, even_if_attached)); +} + static void -prof_tdata_destroy_locked(tsd_t *tsd, prof_tdata_t *tdata, +prof_tdata_destroy_locked(tsdn_t *tsdn, prof_tdata_t *tdata, bool even_if_attached) { - tcache_t *tcache; - assert(prof_tdata_should_destroy(tdata, even_if_attached)); - assert(tsd_prof_tdata_get(tsd) != tdata); + malloc_mutex_assert_owner(tsdn, &tdatas_mtx); + assert(tsdn_null(tsdn) || tsd_prof_tdata_get(tsdn_tsd(tsdn)) != tdata); + tdata_tree_remove(&tdatas, tdata); - tcache = tcache_get(tsd, false); + assert(prof_tdata_should_destroy_unlocked(tdata, even_if_attached)); + if (tdata->thread_name != NULL) - idalloctm(tsd, tdata->thread_name, tcache, true, true); - ckh_delete(tsd, &tdata->bt2tctx); - idalloctm(tsd, tdata, tcache, true, true); + idalloctm(tsdn, tdata->thread_name, NULL, true, true); + ckh_delete(tsdn, &tdata->bt2tctx); + idalloctm(tsdn, tdata, NULL, true, true); } static void -prof_tdata_destroy(tsd_t *tsd, prof_tdata_t *tdata, bool even_if_attached) +prof_tdata_destroy(tsdn_t *tsdn, prof_tdata_t *tdata, bool even_if_attached) { - malloc_mutex_lock(&tdatas_mtx); - prof_tdata_destroy_locked(tsd, tdata, even_if_attached); - malloc_mutex_unlock(&tdatas_mtx); + malloc_mutex_lock(tsdn, &tdatas_mtx); + prof_tdata_destroy_locked(tsdn, tdata, even_if_attached); + malloc_mutex_unlock(tsdn, &tdatas_mtx); } static void prof_tdata_detach(tsd_t *tsd, prof_tdata_t *tdata) { bool destroy_tdata; - malloc_mutex_lock(tdata->lock); + malloc_mutex_lock(tsd_tsdn(tsd), tdata->lock); if (tdata->attached) { - destroy_tdata = prof_tdata_should_destroy(tdata, true); + destroy_tdata = prof_tdata_should_destroy(tsd_tsdn(tsd), tdata, + true); /* * Only detach if !destroy_tdata, because detaching would allow * another thread to win the race to destroy tdata. */ if (!destroy_tdata) tdata->attached = false; tsd_prof_tdata_set(tsd, NULL); } else destroy_tdata = false; - malloc_mutex_unlock(tdata->lock); + malloc_mutex_unlock(tsd_tsdn(tsd), tdata->lock); if (destroy_tdata) - prof_tdata_destroy(tsd, tdata, true); + prof_tdata_destroy(tsd_tsdn(tsd), tdata, true); } prof_tdata_t * prof_tdata_reinit(tsd_t *tsd, prof_tdata_t *tdata) { uint64_t thr_uid = tdata->thr_uid; uint64_t thr_discrim = tdata->thr_discrim + 1; char *thread_name = (tdata->thread_name != NULL) ? - prof_thread_name_alloc(tsd, tdata->thread_name) : NULL; + prof_thread_name_alloc(tsd_tsdn(tsd), tdata->thread_name) : NULL; bool active = tdata->active; prof_tdata_detach(tsd, tdata); - return (prof_tdata_init_impl(tsd, thr_uid, thr_discrim, thread_name, - active)); + return (prof_tdata_init_impl(tsd_tsdn(tsd), thr_uid, thr_discrim, + thread_name, active)); } static bool -prof_tdata_expire(prof_tdata_t *tdata) +prof_tdata_expire(tsdn_t *tsdn, prof_tdata_t *tdata) { bool destroy_tdata; - malloc_mutex_lock(tdata->lock); + malloc_mutex_lock(tsdn, tdata->lock); if (!tdata->expired) { tdata->expired = true; destroy_tdata = tdata->attached ? false : - prof_tdata_should_destroy(tdata, false); + prof_tdata_should_destroy(tsdn, tdata, false); } else destroy_tdata = false; - malloc_mutex_unlock(tdata->lock); + malloc_mutex_unlock(tsdn, tdata->lock); return (destroy_tdata); } static prof_tdata_t * prof_tdata_reset_iter(prof_tdata_tree_t *tdatas, prof_tdata_t *tdata, void *arg) { + tsdn_t *tsdn = (tsdn_t *)arg; - return (prof_tdata_expire(tdata) ? 
tdata : NULL); + return (prof_tdata_expire(tsdn, tdata) ? tdata : NULL); } void -prof_reset(tsd_t *tsd, size_t lg_sample) +prof_reset(tsdn_t *tsdn, size_t lg_sample) { prof_tdata_t *next; assert(lg_sample < (sizeof(uint64_t) << 3)); - malloc_mutex_lock(&prof_dump_mtx); - malloc_mutex_lock(&tdatas_mtx); + malloc_mutex_lock(tsdn, &prof_dump_mtx); + malloc_mutex_lock(tsdn, &tdatas_mtx); lg_prof_sample = lg_sample; next = NULL; do { prof_tdata_t *to_destroy = tdata_tree_iter(&tdatas, next, - prof_tdata_reset_iter, NULL); + prof_tdata_reset_iter, (void *)tsdn); if (to_destroy != NULL) { next = tdata_tree_next(&tdatas, to_destroy); - prof_tdata_destroy_locked(tsd, to_destroy, false); + prof_tdata_destroy_locked(tsdn, to_destroy, false); } else next = NULL; } while (next != NULL); - malloc_mutex_unlock(&tdatas_mtx); - malloc_mutex_unlock(&prof_dump_mtx); + malloc_mutex_unlock(tsdn, &tdatas_mtx); + malloc_mutex_unlock(tsdn, &prof_dump_mtx); } void prof_tdata_cleanup(tsd_t *tsd) { prof_tdata_t *tdata; if (!config_prof) return; tdata = tsd_prof_tdata_get(tsd); if (tdata != NULL) prof_tdata_detach(tsd, tdata); } bool -prof_active_get(void) +prof_active_get(tsdn_t *tsdn) { bool prof_active_current; - malloc_mutex_lock(&prof_active_mtx); + malloc_mutex_lock(tsdn, &prof_active_mtx); prof_active_current = prof_active; - malloc_mutex_unlock(&prof_active_mtx); + malloc_mutex_unlock(tsdn, &prof_active_mtx); return (prof_active_current); } bool -prof_active_set(bool active) +prof_active_set(tsdn_t *tsdn, bool active) { bool prof_active_old; - malloc_mutex_lock(&prof_active_mtx); + malloc_mutex_lock(tsdn, &prof_active_mtx); prof_active_old = prof_active; prof_active = active; - malloc_mutex_unlock(&prof_active_mtx); + malloc_mutex_unlock(tsdn, &prof_active_mtx); return (prof_active_old); } const char * -prof_thread_name_get(void) +prof_thread_name_get(tsd_t *tsd) { - tsd_t *tsd; prof_tdata_t *tdata; - tsd = tsd_fetch(); tdata = prof_tdata_get(tsd, true); if (tdata == NULL) return (""); return (tdata->thread_name != NULL ? tdata->thread_name : ""); } static char * -prof_thread_name_alloc(tsd_t *tsd, const char *thread_name) +prof_thread_name_alloc(tsdn_t *tsdn, const char *thread_name) { char *ret; size_t size; if (thread_name == NULL) return (NULL); size = strlen(thread_name) + 1; if (size == 1) return (""); - ret = iallocztm(tsd, size, size2index(size), false, tcache_get(tsd, - true), true, NULL, true); + ret = iallocztm(tsdn, size, size2index(size), false, NULL, true, + arena_get(TSDN_NULL, 0, true), true); if (ret == NULL) return (NULL); memcpy(ret, thread_name, size); return (ret); } int prof_thread_name_set(tsd_t *tsd, const char *thread_name) { prof_tdata_t *tdata; unsigned i; char *s; tdata = prof_tdata_get(tsd, true); if (tdata == NULL) return (EAGAIN); /* Validate input. 
*/ if (thread_name == NULL) return (EFAULT); for (i = 0; thread_name[i] != '\0'; i++) { char c = thread_name[i]; if (!isgraph(c) && !isblank(c)) return (EFAULT); } - s = prof_thread_name_alloc(tsd, thread_name); + s = prof_thread_name_alloc(tsd_tsdn(tsd), thread_name); if (s == NULL) return (EAGAIN); if (tdata->thread_name != NULL) { - idalloctm(tsd, tdata->thread_name, tcache_get(tsd, false), - true, true); + idalloctm(tsd_tsdn(tsd), tdata->thread_name, NULL, true, true); tdata->thread_name = NULL; } if (strlen(s) > 0) tdata->thread_name = s; return (0); } bool -prof_thread_active_get(void) +prof_thread_active_get(tsd_t *tsd) { - tsd_t *tsd; prof_tdata_t *tdata; - tsd = tsd_fetch(); tdata = prof_tdata_get(tsd, true); if (tdata == NULL) return (false); return (tdata->active); } bool -prof_thread_active_set(bool active) +prof_thread_active_set(tsd_t *tsd, bool active) { - tsd_t *tsd; prof_tdata_t *tdata; - tsd = tsd_fetch(); tdata = prof_tdata_get(tsd, true); if (tdata == NULL) return (true); tdata->active = active; return (false); } bool -prof_thread_active_init_get(void) +prof_thread_active_init_get(tsdn_t *tsdn) { bool active_init; - malloc_mutex_lock(&prof_thread_active_init_mtx); + malloc_mutex_lock(tsdn, &prof_thread_active_init_mtx); active_init = prof_thread_active_init; - malloc_mutex_unlock(&prof_thread_active_init_mtx); + malloc_mutex_unlock(tsdn, &prof_thread_active_init_mtx); return (active_init); } bool -prof_thread_active_init_set(bool active_init) +prof_thread_active_init_set(tsdn_t *tsdn, bool active_init) { bool active_init_old; - malloc_mutex_lock(&prof_thread_active_init_mtx); + malloc_mutex_lock(tsdn, &prof_thread_active_init_mtx); active_init_old = prof_thread_active_init; prof_thread_active_init = active_init; - malloc_mutex_unlock(&prof_thread_active_init_mtx); + malloc_mutex_unlock(tsdn, &prof_thread_active_init_mtx); return (active_init_old); } bool -prof_gdump_get(void) +prof_gdump_get(tsdn_t *tsdn) { bool prof_gdump_current; - malloc_mutex_lock(&prof_gdump_mtx); + malloc_mutex_lock(tsdn, &prof_gdump_mtx); prof_gdump_current = prof_gdump_val; - malloc_mutex_unlock(&prof_gdump_mtx); + malloc_mutex_unlock(tsdn, &prof_gdump_mtx); return (prof_gdump_current); } bool -prof_gdump_set(bool gdump) +prof_gdump_set(tsdn_t *tsdn, bool gdump) { bool prof_gdump_old; - malloc_mutex_lock(&prof_gdump_mtx); + malloc_mutex_lock(tsdn, &prof_gdump_mtx); prof_gdump_old = prof_gdump_val; prof_gdump_val = gdump; - malloc_mutex_unlock(&prof_gdump_mtx); + malloc_mutex_unlock(tsdn, &prof_gdump_mtx); return (prof_gdump_old); } void prof_boot0(void) { cassert(config_prof); memcpy(opt_prof_prefix, PROF_PREFIX_DEFAULT, sizeof(PROF_PREFIX_DEFAULT)); } void prof_boot1(void) { cassert(config_prof); /* * opt_prof must be in its final state before any arenas are * initialized, so this function must be executed early. */ if (opt_prof_leak && !opt_prof) { /* * Enable opt_prof, but in such a way that profiles are never * automatically dumped. 
*/ opt_prof = true; opt_prof_gdump = false; } else if (opt_prof) { if (opt_lg_prof_interval >= 0) { prof_interval = (((uint64_t)1U) << opt_lg_prof_interval); } } } bool -prof_boot2(void) +prof_boot2(tsdn_t *tsdn) { cassert(config_prof); if (opt_prof) { - tsd_t *tsd; unsigned i; lg_prof_sample = opt_lg_prof_sample; prof_active = opt_prof_active; - if (malloc_mutex_init(&prof_active_mtx)) + if (malloc_mutex_init(&prof_active_mtx, "prof_active", + WITNESS_RANK_PROF_ACTIVE)) return (true); prof_gdump_val = opt_prof_gdump; - if (malloc_mutex_init(&prof_gdump_mtx)) + if (malloc_mutex_init(&prof_gdump_mtx, "prof_gdump", + WITNESS_RANK_PROF_GDUMP)) return (true); prof_thread_active_init = opt_prof_thread_active_init; - if (malloc_mutex_init(&prof_thread_active_init_mtx)) + if (malloc_mutex_init(&prof_thread_active_init_mtx, + "prof_thread_active_init", + WITNESS_RANK_PROF_THREAD_ACTIVE_INIT)) return (true); - tsd = tsd_fetch(); - if (ckh_new(tsd, &bt2gctx, PROF_CKH_MINITEMS, prof_bt_hash, + if (ckh_new(tsdn, &bt2gctx, PROF_CKH_MINITEMS, prof_bt_hash, prof_bt_keycomp)) return (true); - if (malloc_mutex_init(&bt2gctx_mtx)) + if (malloc_mutex_init(&bt2gctx_mtx, "prof_bt2gctx", + WITNESS_RANK_PROF_BT2GCTX)) return (true); tdata_tree_new(&tdatas); - if (malloc_mutex_init(&tdatas_mtx)) + if (malloc_mutex_init(&tdatas_mtx, "prof_tdatas", + WITNESS_RANK_PROF_TDATAS)) return (true); next_thr_uid = 0; - if (malloc_mutex_init(&next_thr_uid_mtx)) + if (malloc_mutex_init(&next_thr_uid_mtx, "prof_next_thr_uid", + WITNESS_RANK_PROF_NEXT_THR_UID)) return (true); - if (malloc_mutex_init(&prof_dump_seq_mtx)) + if (malloc_mutex_init(&prof_dump_seq_mtx, "prof_dump_seq", + WITNESS_RANK_PROF_DUMP_SEQ)) return (true); - if (malloc_mutex_init(&prof_dump_mtx)) + if (malloc_mutex_init(&prof_dump_mtx, "prof_dump", + WITNESS_RANK_PROF_DUMP)) return (true); if (opt_prof_final && opt_prof_prefix[0] != '\0' && atexit(prof_fdump) != 0) { malloc_write(": Error in atexit()\n"); if (opt_abort) abort(); } - gctx_locks = (malloc_mutex_t *)base_alloc(PROF_NCTX_LOCKS * - sizeof(malloc_mutex_t)); + gctx_locks = (malloc_mutex_t *)base_alloc(tsdn, PROF_NCTX_LOCKS + * sizeof(malloc_mutex_t)); if (gctx_locks == NULL) return (true); for (i = 0; i < PROF_NCTX_LOCKS; i++) { - if (malloc_mutex_init(&gctx_locks[i])) + if (malloc_mutex_init(&gctx_locks[i], "prof_gctx", + WITNESS_RANK_PROF_GCTX)) return (true); } - tdata_locks = (malloc_mutex_t *)base_alloc(PROF_NTDATA_LOCKS * - sizeof(malloc_mutex_t)); + tdata_locks = (malloc_mutex_t *)base_alloc(tsdn, + PROF_NTDATA_LOCKS * sizeof(malloc_mutex_t)); if (tdata_locks == NULL) return (true); for (i = 0; i < PROF_NTDATA_LOCKS; i++) { - if (malloc_mutex_init(&tdata_locks[i])) + if (malloc_mutex_init(&tdata_locks[i], "prof_tdata", + WITNESS_RANK_PROF_TDATA)) return (true); } } #ifdef JEMALLOC_PROF_LIBGCC /* * Cause the backtracing machinery to allocate its internal state * before enabling profiling. 
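The malloc_mutex_init() calls above now attach a name and a WITNESS_RANK_* value to each mutex; in debug builds the witness machinery checks that locks are acquired in non-decreasing rank order, which is how fork-related lock rank reversals can be caught. The following is a much-simplified sketch of that idea, not the actual witness implementation, and it deliberately omits tracking the full set of held locks:

#include <assert.h>
#include <pthread.h>
#include <stdio.h>

typedef struct {
	pthread_mutex_t	lock;
	const char	*name;
	unsigned	rank;
} ranked_mutex_t;

/* Highest rank currently held by this thread (simplification). */
static __thread unsigned witness_max_rank_held = 0;

static void
ranked_mutex_lock(ranked_mutex_t *m)
{
	/* Taking a lower-ranked lock while holding a higher one is a bug. */
	assert(m->rank >= witness_max_rank_held);
	pthread_mutex_lock(&m->lock);
	witness_max_rank_held = m->rank;
}

static void
ranked_mutex_unlock(ranked_mutex_t *m)
{
	pthread_mutex_unlock(&m->lock);
}

int
main(void)
{
	ranked_mutex_t a = {PTHREAD_MUTEX_INITIALIZER, "outer", 10};
	ranked_mutex_t b = {PTHREAD_MUTEX_INITIALIZER, "inner", 20};

	ranked_mutex_lock(&a);	/* rank 10 */
	ranked_mutex_lock(&b);	/* rank 20: non-decreasing, allowed */
	ranked_mutex_unlock(&b);
	ranked_mutex_unlock(&a);
	printf("acquired %s (rank %u) then %s (rank %u)\n", a.name, a.rank,
	    b.name, b.rank);
	return (0);
}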
*/ _Unwind_Backtrace(prof_unwind_init_callback, NULL); #endif prof_booted = true; return (false); } void -prof_prefork(void) +prof_prefork0(tsdn_t *tsdn) { if (opt_prof) { unsigned i; - malloc_mutex_prefork(&tdatas_mtx); - malloc_mutex_prefork(&bt2gctx_mtx); - malloc_mutex_prefork(&next_thr_uid_mtx); - malloc_mutex_prefork(&prof_dump_seq_mtx); - for (i = 0; i < PROF_NCTX_LOCKS; i++) - malloc_mutex_prefork(&gctx_locks[i]); + malloc_mutex_prefork(tsdn, &prof_dump_mtx); + malloc_mutex_prefork(tsdn, &bt2gctx_mtx); + malloc_mutex_prefork(tsdn, &tdatas_mtx); for (i = 0; i < PROF_NTDATA_LOCKS; i++) - malloc_mutex_prefork(&tdata_locks[i]); + malloc_mutex_prefork(tsdn, &tdata_locks[i]); + for (i = 0; i < PROF_NCTX_LOCKS; i++) + malloc_mutex_prefork(tsdn, &gctx_locks[i]); } } void -prof_postfork_parent(void) +prof_prefork1(tsdn_t *tsdn) { if (opt_prof) { + malloc_mutex_prefork(tsdn, &prof_active_mtx); + malloc_mutex_prefork(tsdn, &prof_dump_seq_mtx); + malloc_mutex_prefork(tsdn, &prof_gdump_mtx); + malloc_mutex_prefork(tsdn, &next_thr_uid_mtx); + malloc_mutex_prefork(tsdn, &prof_thread_active_init_mtx); + } +} + +void +prof_postfork_parent(tsdn_t *tsdn) +{ + + if (opt_prof) { unsigned i; - for (i = 0; i < PROF_NTDATA_LOCKS; i++) - malloc_mutex_postfork_parent(&tdata_locks[i]); + malloc_mutex_postfork_parent(tsdn, + &prof_thread_active_init_mtx); + malloc_mutex_postfork_parent(tsdn, &next_thr_uid_mtx); + malloc_mutex_postfork_parent(tsdn, &prof_gdump_mtx); + malloc_mutex_postfork_parent(tsdn, &prof_dump_seq_mtx); + malloc_mutex_postfork_parent(tsdn, &prof_active_mtx); for (i = 0; i < PROF_NCTX_LOCKS; i++) - malloc_mutex_postfork_parent(&gctx_locks[i]); - malloc_mutex_postfork_parent(&prof_dump_seq_mtx); - malloc_mutex_postfork_parent(&next_thr_uid_mtx); - malloc_mutex_postfork_parent(&bt2gctx_mtx); - malloc_mutex_postfork_parent(&tdatas_mtx); + malloc_mutex_postfork_parent(tsdn, &gctx_locks[i]); + for (i = 0; i < PROF_NTDATA_LOCKS; i++) + malloc_mutex_postfork_parent(tsdn, &tdata_locks[i]); + malloc_mutex_postfork_parent(tsdn, &tdatas_mtx); + malloc_mutex_postfork_parent(tsdn, &bt2gctx_mtx); + malloc_mutex_postfork_parent(tsdn, &prof_dump_mtx); } } void -prof_postfork_child(void) +prof_postfork_child(tsdn_t *tsdn) { if (opt_prof) { unsigned i; - for (i = 0; i < PROF_NTDATA_LOCKS; i++) - malloc_mutex_postfork_child(&tdata_locks[i]); + malloc_mutex_postfork_child(tsdn, &prof_thread_active_init_mtx); + malloc_mutex_postfork_child(tsdn, &next_thr_uid_mtx); + malloc_mutex_postfork_child(tsdn, &prof_gdump_mtx); + malloc_mutex_postfork_child(tsdn, &prof_dump_seq_mtx); + malloc_mutex_postfork_child(tsdn, &prof_active_mtx); for (i = 0; i < PROF_NCTX_LOCKS; i++) - malloc_mutex_postfork_child(&gctx_locks[i]); - malloc_mutex_postfork_child(&prof_dump_seq_mtx); - malloc_mutex_postfork_child(&next_thr_uid_mtx); - malloc_mutex_postfork_child(&bt2gctx_mtx); - malloc_mutex_postfork_child(&tdatas_mtx); + malloc_mutex_postfork_child(tsdn, &gctx_locks[i]); + for (i = 0; i < PROF_NTDATA_LOCKS; i++) + malloc_mutex_postfork_child(tsdn, &tdata_locks[i]); + malloc_mutex_postfork_child(tsdn, &tdatas_mtx); + malloc_mutex_postfork_child(tsdn, &bt2gctx_mtx); + malloc_mutex_postfork_child(tsdn, &prof_dump_mtx); } } /******************************************************************************/ Index: head/contrib/jemalloc/src/quarantine.c =================================================================== --- head/contrib/jemalloc/src/quarantine.c (revision 299586) +++ head/contrib/jemalloc/src/quarantine.c (revision 299587) @@ 
-1,185 +1,183 @@ #define JEMALLOC_QUARANTINE_C_ #include "jemalloc/internal/jemalloc_internal.h" /* * Quarantine pointers close to NULL are used to encode state information that * is used for cleaning up during thread shutdown. */ #define QUARANTINE_STATE_REINCARNATED ((quarantine_t *)(uintptr_t)1) #define QUARANTINE_STATE_PURGATORY ((quarantine_t *)(uintptr_t)2) #define QUARANTINE_STATE_MAX QUARANTINE_STATE_PURGATORY /******************************************************************************/ /* Function prototypes for non-inline static functions. */ static quarantine_t *quarantine_grow(tsd_t *tsd, quarantine_t *quarantine); -static void quarantine_drain_one(tsd_t *tsd, quarantine_t *quarantine); -static void quarantine_drain(tsd_t *tsd, quarantine_t *quarantine, +static void quarantine_drain_one(tsdn_t *tsdn, quarantine_t *quarantine); +static void quarantine_drain(tsdn_t *tsdn, quarantine_t *quarantine, size_t upper_bound); /******************************************************************************/ static quarantine_t * -quarantine_init(tsd_t *tsd, size_t lg_maxobjs) +quarantine_init(tsdn_t *tsdn, size_t lg_maxobjs) { quarantine_t *quarantine; size_t size; - assert(tsd_nominal(tsd)); - size = offsetof(quarantine_t, objs) + ((ZU(1) << lg_maxobjs) * sizeof(quarantine_obj_t)); - quarantine = (quarantine_t *)iallocztm(tsd, size, size2index(size), - false, tcache_get(tsd, true), true, NULL, true); + quarantine = (quarantine_t *)iallocztm(tsdn, size, size2index(size), + false, NULL, true, arena_get(TSDN_NULL, 0, true), true); if (quarantine == NULL) return (NULL); quarantine->curbytes = 0; quarantine->curobjs = 0; quarantine->first = 0; quarantine->lg_maxobjs = lg_maxobjs; return (quarantine); } void quarantine_alloc_hook_work(tsd_t *tsd) { quarantine_t *quarantine; if (!tsd_nominal(tsd)) return; - quarantine = quarantine_init(tsd, LG_MAXOBJS_INIT); + quarantine = quarantine_init(tsd_tsdn(tsd), LG_MAXOBJS_INIT); /* * Check again whether quarantine has been initialized, because * quarantine_init() may have triggered recursive initialization. */ if (tsd_quarantine_get(tsd) == NULL) tsd_quarantine_set(tsd, quarantine); else - idalloctm(tsd, quarantine, tcache_get(tsd, false), true, true); + idalloctm(tsd_tsdn(tsd), quarantine, NULL, true, true); } static quarantine_t * quarantine_grow(tsd_t *tsd, quarantine_t *quarantine) { quarantine_t *ret; - ret = quarantine_init(tsd, quarantine->lg_maxobjs + 1); + ret = quarantine_init(tsd_tsdn(tsd), quarantine->lg_maxobjs + 1); if (ret == NULL) { - quarantine_drain_one(tsd, quarantine); + quarantine_drain_one(tsd_tsdn(tsd), quarantine); return (quarantine); } ret->curbytes = quarantine->curbytes; ret->curobjs = quarantine->curobjs; if (quarantine->first + quarantine->curobjs <= (ZU(1) << quarantine->lg_maxobjs)) { /* objs ring buffer data are contiguous. */ memcpy(ret->objs, &quarantine->objs[quarantine->first], quarantine->curobjs * sizeof(quarantine_obj_t)); } else { /* objs ring buffer data wrap around. 
*/ size_t ncopy_a = (ZU(1) << quarantine->lg_maxobjs) - quarantine->first; size_t ncopy_b = quarantine->curobjs - ncopy_a; memcpy(ret->objs, &quarantine->objs[quarantine->first], ncopy_a * sizeof(quarantine_obj_t)); memcpy(&ret->objs[ncopy_a], quarantine->objs, ncopy_b * sizeof(quarantine_obj_t)); } - idalloctm(tsd, quarantine, tcache_get(tsd, false), true, true); + idalloctm(tsd_tsdn(tsd), quarantine, NULL, true, true); tsd_quarantine_set(tsd, ret); return (ret); } static void -quarantine_drain_one(tsd_t *tsd, quarantine_t *quarantine) +quarantine_drain_one(tsdn_t *tsdn, quarantine_t *quarantine) { quarantine_obj_t *obj = &quarantine->objs[quarantine->first]; - assert(obj->usize == isalloc(obj->ptr, config_prof)); - idalloctm(tsd, obj->ptr, NULL, false, true); + assert(obj->usize == isalloc(tsdn, obj->ptr, config_prof)); + idalloctm(tsdn, obj->ptr, NULL, false, true); quarantine->curbytes -= obj->usize; quarantine->curobjs--; quarantine->first = (quarantine->first + 1) & ((ZU(1) << quarantine->lg_maxobjs) - 1); } static void -quarantine_drain(tsd_t *tsd, quarantine_t *quarantine, size_t upper_bound) +quarantine_drain(tsdn_t *tsdn, quarantine_t *quarantine, size_t upper_bound) { while (quarantine->curbytes > upper_bound && quarantine->curobjs > 0) - quarantine_drain_one(tsd, quarantine); + quarantine_drain_one(tsdn, quarantine); } void quarantine(tsd_t *tsd, void *ptr) { quarantine_t *quarantine; - size_t usize = isalloc(ptr, config_prof); + size_t usize = isalloc(tsd_tsdn(tsd), ptr, config_prof); cassert(config_fill); assert(opt_quarantine); if ((quarantine = tsd_quarantine_get(tsd)) == NULL) { - idalloctm(tsd, ptr, NULL, false, true); + idalloctm(tsd_tsdn(tsd), ptr, NULL, false, true); return; } /* * Drain one or more objects if the quarantine size limit would be * exceeded by appending ptr. */ if (quarantine->curbytes + usize > opt_quarantine) { size_t upper_bound = (opt_quarantine >= usize) ? opt_quarantine - usize : 0; - quarantine_drain(tsd, quarantine, upper_bound); + quarantine_drain(tsd_tsdn(tsd), quarantine, upper_bound); } /* Grow the quarantine ring buffer if it's full. */ if (quarantine->curobjs == (ZU(1) << quarantine->lg_maxobjs)) quarantine = quarantine_grow(tsd, quarantine); /* quarantine_grow() must free a slot if it fails to grow. */ assert(quarantine->curobjs < (ZU(1) << quarantine->lg_maxobjs)); /* Append ptr if its size doesn't exceed the quarantine size. */ if (quarantine->curbytes + usize <= opt_quarantine) { size_t offset = (quarantine->first + quarantine->curobjs) & ((ZU(1) << quarantine->lg_maxobjs) - 1); quarantine_obj_t *obj = &quarantine->objs[offset]; obj->ptr = ptr; obj->usize = usize; quarantine->curbytes += usize; quarantine->curobjs++; if (config_fill && unlikely(opt_junk_free)) { /* * Only do redzone validation if Valgrind isn't in * operation. 
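quarantine_grow() copies the live region of the old ring buffer to the front of the new one, and must handle the case where that region wraps past the end of the array. A standalone sketch of the same two-memcpy wrap-aware copy, using a small int ring instead of quarantine_obj_t:

#include <stdio.h>
#include <string.h>

/*
 * Copy `curobjs` live elements of a ring buffer that start at index `first`
 * in `src` (capacity `cap`) to the beginning of `dst`.
 */
static void
ring_copy(int *dst, const int *src, size_t cap, size_t first, size_t curobjs)
{
	if (first + curobjs <= cap) {
		/* Live data are contiguous. */
		memcpy(dst, &src[first], curobjs * sizeof(int));
	} else {
		/* Live data wrap around the end of the array. */
		size_t ncopy_a = cap - first;
		size_t ncopy_b = curobjs - ncopy_a;

		memcpy(dst, &src[first], ncopy_a * sizeof(int));
		memcpy(&dst[ncopy_a], src, ncopy_b * sizeof(int));
	}
}

int
main(void)
{
	/* Capacity 8, 5 live elements starting at index 6, so they wrap. */
	int src[8] = {102, 103, 104, 0, 0, 0, 100, 101};
	int dst[8] = {0};
	size_t i;

	ring_copy(dst, src, 8, 6, 5);
	for (i = 0; i < 5; i++)
		printf("%d ", dst[i]);	/* 100 101 102 103 104 */
	printf("\n");
	return (0);
}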
*/ if ((!config_valgrind || likely(!in_valgrind)) && usize <= SMALL_MAXCLASS) arena_quarantine_junk_small(ptr, usize); else - memset(ptr, 0x5a, usize); + memset(ptr, JEMALLOC_FREE_JUNK, usize); } } else { assert(quarantine->curbytes == 0); - idalloctm(tsd, ptr, NULL, false, true); + idalloctm(tsd_tsdn(tsd), ptr, NULL, false, true); } } void quarantine_cleanup(tsd_t *tsd) { quarantine_t *quarantine; if (!config_fill) return; quarantine = tsd_quarantine_get(tsd); if (quarantine != NULL) { - quarantine_drain(tsd, quarantine, 0); - idalloctm(tsd, quarantine, tcache_get(tsd, false), true, true); + quarantine_drain(tsd_tsdn(tsd), quarantine, 0); + idalloctm(tsd_tsdn(tsd), quarantine, NULL, true, true); tsd_quarantine_set(tsd, NULL); } } Index: head/contrib/jemalloc/src/rtree.c =================================================================== --- head/contrib/jemalloc/src/rtree.c (revision 299586) +++ head/contrib/jemalloc/src/rtree.c (revision 299587) @@ -1,127 +1,129 @@ #define JEMALLOC_RTREE_C_ #include "jemalloc/internal/jemalloc_internal.h" static unsigned hmin(unsigned ha, unsigned hb) { return (ha < hb ? ha : hb); } /* Only the most significant bits of keys passed to rtree_[gs]et() are used. */ bool rtree_new(rtree_t *rtree, unsigned bits, rtree_node_alloc_t *alloc, rtree_node_dalloc_t *dalloc) { unsigned bits_in_leaf, height, i; + assert(RTREE_HEIGHT_MAX == ((ZU(1) << (LG_SIZEOF_PTR+3)) / + RTREE_BITS_PER_LEVEL)); assert(bits > 0 && bits <= (sizeof(uintptr_t) << 3)); bits_in_leaf = (bits % RTREE_BITS_PER_LEVEL) == 0 ? RTREE_BITS_PER_LEVEL : (bits % RTREE_BITS_PER_LEVEL); if (bits > bits_in_leaf) { height = 1 + (bits - bits_in_leaf) / RTREE_BITS_PER_LEVEL; if ((height-1) * RTREE_BITS_PER_LEVEL + bits_in_leaf != bits) height++; } else height = 1; assert((height-1) * RTREE_BITS_PER_LEVEL + bits_in_leaf == bits); rtree->alloc = alloc; rtree->dalloc = dalloc; rtree->height = height; /* Root level. */ rtree->levels[0].subtree = NULL; rtree->levels[0].bits = (height > 1) ? RTREE_BITS_PER_LEVEL : bits_in_leaf; rtree->levels[0].cumbits = rtree->levels[0].bits; /* Interior levels. */ for (i = 1; i < height-1; i++) { rtree->levels[i].subtree = NULL; rtree->levels[i].bits = RTREE_BITS_PER_LEVEL; rtree->levels[i].cumbits = rtree->levels[i-1].cumbits + RTREE_BITS_PER_LEVEL; } /* Leaf level. */ if (height > 1) { rtree->levels[height-1].subtree = NULL; rtree->levels[height-1].bits = bits_in_leaf; rtree->levels[height-1].cumbits = bits; } /* Compute lookup table to be used by rtree_start_level(). */ for (i = 0; i < RTREE_HEIGHT_MAX; i++) { rtree->start_level[i] = hmin(RTREE_HEIGHT_MAX - 1 - i, height - 1); } return (false); } static void rtree_delete_subtree(rtree_t *rtree, rtree_node_elm_t *node, unsigned level) { if (level + 1 < rtree->height) { size_t nchildren, i; nchildren = ZU(1) << rtree->levels[level].bits; for (i = 0; i < nchildren; i++) { rtree_node_elm_t *child = node[i].child; if (child != NULL) rtree_delete_subtree(rtree, child, level + 1); } } rtree->dalloc(node); } void rtree_delete(rtree_t *rtree) { unsigned i; for (i = 0; i < rtree->height; i++) { rtree_node_elm_t *subtree = rtree->levels[i].subtree; if (subtree != NULL) rtree_delete_subtree(rtree, subtree, i); } } static rtree_node_elm_t * rtree_node_init(rtree_t *rtree, unsigned level, rtree_node_elm_t **elmp) { rtree_node_elm_t *node; if (atomic_cas_p((void **)elmp, NULL, RTREE_NODE_INITIALIZING)) { /* * Another thread is already in the process of initializing. * Spin-wait until initialization is complete. 
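rtree_new() divides the significant key bits across the tree levels: the leaf level receives bits % RTREE_BITS_PER_LEVEL bits (or a full level's worth when the remainder is zero) and every other level receives RTREE_BITS_PER_LEVEL bits. A small arithmetic sketch of that height computation; RTREE_BITS_PER_LEVEL is assumed to be 4 here purely for illustration, since the real value is configuration dependent:

#include <stdio.h>

#define RTREE_BITS_PER_LEVEL	4	/* assumed value, for illustration */

/* Same height computation as rtree_new(), for `bits` significant key bits. */
static unsigned
rtree_height(unsigned bits)
{
	unsigned bits_in_leaf, height;

	bits_in_leaf = (bits % RTREE_BITS_PER_LEVEL) == 0 ?
	    RTREE_BITS_PER_LEVEL : (bits % RTREE_BITS_PER_LEVEL);
	if (bits > bits_in_leaf) {
		height = 1 + (bits - bits_in_leaf) / RTREE_BITS_PER_LEVEL;
		if ((height - 1) * RTREE_BITS_PER_LEVEL + bits_in_leaf != bits)
			height++;
	} else
		height = 1;
	return (height);
}

int
main(void)
{
	/* 30 key bits: a 2-bit leaf level plus 7 levels of 4 bits, height 8. */
	printf("bits=30 -> height=%u\n", rtree_height(30));
	/* 32 key bits divide evenly: 8 levels of 4 bits each. */
	printf("bits=32 -> height=%u\n", rtree_height(32));
	return (0);
}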
*/ do { CPU_SPINWAIT; node = atomic_read_p((void **)elmp); } while (node == RTREE_NODE_INITIALIZING); } else { node = rtree->alloc(ZU(1) << rtree->levels[level].bits); if (node == NULL) return (NULL); atomic_write_p((void **)elmp, node); } return (node); } rtree_node_elm_t * rtree_subtree_read_hard(rtree_t *rtree, unsigned level) { return (rtree_node_init(rtree, level, &rtree->levels[level].subtree)); } rtree_node_elm_t * rtree_child_read_hard(rtree_t *rtree, rtree_node_elm_t *elm, unsigned level) { return (rtree_node_init(rtree, level, &elm->child)); } Index: head/contrib/jemalloc/src/stats.c =================================================================== --- head/contrib/jemalloc/src/stats.c (revision 299586) +++ head/contrib/jemalloc/src/stats.c (revision 299587) @@ -1,672 +1,676 @@ #define JEMALLOC_STATS_C_ #include "jemalloc/internal/jemalloc_internal.h" #define CTL_GET(n, v, t) do { \ size_t sz = sizeof(t); \ xmallctl(n, v, &sz, NULL, 0); \ } while (0) #define CTL_M2_GET(n, i, v, t) do { \ size_t mib[6]; \ size_t miblen = sizeof(mib) / sizeof(size_t); \ size_t sz = sizeof(t); \ xmallctlnametomib(n, mib, &miblen); \ mib[2] = (i); \ xmallctlbymib(mib, miblen, v, &sz, NULL, 0); \ } while (0) #define CTL_M2_M4_GET(n, i, j, v, t) do { \ size_t mib[6]; \ size_t miblen = sizeof(mib) / sizeof(size_t); \ size_t sz = sizeof(t); \ xmallctlnametomib(n, mib, &miblen); \ mib[2] = (i); \ mib[4] = (j); \ xmallctlbymib(mib, miblen, v, &sz, NULL, 0); \ } while (0) /******************************************************************************/ /* Data. */ bool opt_stats_print = false; size_t stats_cactive = 0; /******************************************************************************/ /* Function prototypes for non-inline static functions. */ static void stats_arena_bins_print(void (*write_cb)(void *, const char *), void *cbopaque, unsigned i); static void stats_arena_lruns_print(void (*write_cb)(void *, const char *), void *cbopaque, unsigned i); static void stats_arena_hchunks_print( void (*write_cb)(void *, const char *), void *cbopaque, unsigned i); static void stats_arena_print(void (*write_cb)(void *, const char *), void *cbopaque, unsigned i, bool bins, bool large, bool huge); /******************************************************************************/ static void stats_arena_bins_print(void (*write_cb)(void *, const char *), void *cbopaque, unsigned i) { size_t page; bool config_tcache, in_gap; unsigned nbins, j; CTL_GET("arenas.page", &page, size_t); CTL_GET("config.tcache", &config_tcache, bool); if (config_tcache) { malloc_cprintf(write_cb, cbopaque, "bins: size ind allocated nmalloc" " ndalloc nrequests curregs curruns regs" " pgs util nfills nflushes newruns" " reruns\n"); } else { malloc_cprintf(write_cb, cbopaque, "bins: size ind allocated nmalloc" " ndalloc nrequests curregs curruns regs" " pgs util newruns reruns\n"); } CTL_GET("arenas.nbins", &nbins, unsigned); for (j = 0, in_gap = false; j < nbins; j++) { uint64_t nruns; CTL_M2_M4_GET("stats.arenas.0.bins.0.nruns", i, j, &nruns, uint64_t); if (nruns == 0) in_gap = true; else { size_t reg_size, run_size, curregs, availregs, milli; size_t curruns; uint32_t nregs; uint64_t nmalloc, ndalloc, nrequests, nfills, nflushes; uint64_t reruns; char util[6]; /* "x.yyy". 
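The CTL_M2_GET() and CTL_M2_M4_GET() macros defined above translate a mallctl name containing placeholder components to a MIB once, patch the arena (and size-class) index into the MIB, and then query by MIB. A standalone sketch of the same pattern using jemalloc's public mallctlnametomib()/mallctlbymib() interface; the header name varies by platform (FreeBSD's libc exposes these via <malloc_np.h>), so treat the include line as an assumption:

#include <stdio.h>
#include <malloc_np.h>	/* assumed; a standalone jemalloc uses <jemalloc/jemalloc.h> */

int
main(void)
{
	size_t mib[6];
	size_t miblen = sizeof(mib) / sizeof(size_t);
	size_t pactive, sz = sizeof(pactive);
	unsigned arena_ind = 0;

	/* Resolve the name once... */
	if (mallctlnametomib("stats.arenas.0.pactive", mib, &miblen) != 0)
		return (1);
	/* ...then substitute the arena index, as CTL_M2_GET() does. */
	mib[2] = arena_ind;
	if (mallctlbymib(mib, miblen, &pactive, &sz, NULL, 0) != 0)
		return (1);
	printf("arena %u active pages: %zu\n", arena_ind, pactive);
	return (0);
}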
*/ if (in_gap) { malloc_cprintf(write_cb, cbopaque, " ---\n"); in_gap = false; } CTL_M2_GET("arenas.bin.0.size", j, ®_size, size_t); CTL_M2_GET("arenas.bin.0.nregs", j, &nregs, uint32_t); CTL_M2_GET("arenas.bin.0.run_size", j, &run_size, size_t); CTL_M2_M4_GET("stats.arenas.0.bins.0.nmalloc", i, j, &nmalloc, uint64_t); CTL_M2_M4_GET("stats.arenas.0.bins.0.ndalloc", i, j, &ndalloc, uint64_t); CTL_M2_M4_GET("stats.arenas.0.bins.0.curregs", i, j, &curregs, size_t); CTL_M2_M4_GET("stats.arenas.0.bins.0.nrequests", i, j, &nrequests, uint64_t); if (config_tcache) { CTL_M2_M4_GET("stats.arenas.0.bins.0.nfills", i, j, &nfills, uint64_t); CTL_M2_M4_GET("stats.arenas.0.bins.0.nflushes", i, j, &nflushes, uint64_t); } CTL_M2_M4_GET("stats.arenas.0.bins.0.nreruns", i, j, &reruns, uint64_t); CTL_M2_M4_GET("stats.arenas.0.bins.0.curruns", i, j, &curruns, size_t); availregs = nregs * curruns; milli = (availregs != 0) ? (1000 * curregs) / availregs : 1000; assert(milli <= 1000); if (milli < 10) { malloc_snprintf(util, sizeof(util), "0.00%zu", milli); } else if (milli < 100) { malloc_snprintf(util, sizeof(util), "0.0%zu", milli); } else if (milli < 1000) { malloc_snprintf(util, sizeof(util), "0.%zu", milli); } else malloc_snprintf(util, sizeof(util), "1"); if (config_tcache) { malloc_cprintf(write_cb, cbopaque, "%20zu %3u %12zu %12"FMTu64 " %12"FMTu64" %12"FMTu64" %12zu" " %12zu %4u %3zu %-5s %12"FMTu64 " %12"FMTu64" %12"FMTu64" %12"FMTu64"\n", reg_size, j, curregs * reg_size, nmalloc, ndalloc, nrequests, curregs, curruns, nregs, run_size / page, util, nfills, nflushes, nruns, reruns); } else { malloc_cprintf(write_cb, cbopaque, "%20zu %3u %12zu %12"FMTu64 " %12"FMTu64" %12"FMTu64" %12zu" " %12zu %4u %3zu %-5s %12"FMTu64 " %12"FMTu64"\n", reg_size, j, curregs * reg_size, nmalloc, ndalloc, nrequests, curregs, curruns, nregs, run_size / page, util, nruns, reruns); } } } if (in_gap) { malloc_cprintf(write_cb, cbopaque, " ---\n"); } } static void stats_arena_lruns_print(void (*write_cb)(void *, const char *), void *cbopaque, unsigned i) { unsigned nbins, nlruns, j; bool in_gap; malloc_cprintf(write_cb, cbopaque, "large: size ind allocated nmalloc ndalloc" " nrequests curruns\n"); CTL_GET("arenas.nbins", &nbins, unsigned); CTL_GET("arenas.nlruns", &nlruns, unsigned); for (j = 0, in_gap = false; j < nlruns; j++) { uint64_t nmalloc, ndalloc, nrequests; size_t run_size, curruns; CTL_M2_M4_GET("stats.arenas.0.lruns.0.nmalloc", i, j, &nmalloc, uint64_t); CTL_M2_M4_GET("stats.arenas.0.lruns.0.ndalloc", i, j, &ndalloc, uint64_t); CTL_M2_M4_GET("stats.arenas.0.lruns.0.nrequests", i, j, &nrequests, uint64_t); if (nrequests == 0) in_gap = true; else { CTL_M2_GET("arenas.lrun.0.size", j, &run_size, size_t); CTL_M2_M4_GET("stats.arenas.0.lruns.0.curruns", i, j, &curruns, size_t); if (in_gap) { malloc_cprintf(write_cb, cbopaque, " ---\n"); in_gap = false; } malloc_cprintf(write_cb, cbopaque, "%20zu %3u %12zu %12"FMTu64" %12"FMTu64 " %12"FMTu64" %12zu\n", run_size, nbins + j, curruns * run_size, nmalloc, ndalloc, nrequests, curruns); } } if (in_gap) { malloc_cprintf(write_cb, cbopaque, " ---\n"); } } static void stats_arena_hchunks_print(void (*write_cb)(void *, const char *), void *cbopaque, unsigned i) { unsigned nbins, nlruns, nhchunks, j; bool in_gap; malloc_cprintf(write_cb, cbopaque, "huge: size ind allocated nmalloc ndalloc" " nrequests curhchunks\n"); CTL_GET("arenas.nbins", &nbins, unsigned); CTL_GET("arenas.nlruns", &nlruns, unsigned); CTL_GET("arenas.nhchunks", &nhchunks, unsigned); for (j = 0, in_gap = false; j < 
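The per-bin "util" column is a fixed-point utilization, milli = (1000 * curregs) / availregs, rendered as "x.yyy" without touching floating point. A standalone sketch of the same formatting with made-up region counts (plain snprintf() stands in for malloc_snprintf()):

#include <assert.h>
#include <stdio.h>

static void
format_util(char *util, size_t len, size_t curregs, size_t availregs)
{
	size_t milli = (availregs != 0) ? (1000 * curregs) / availregs : 1000;

	assert(milli <= 1000);
	if (milli < 10)
		snprintf(util, len, "0.00%zu", milli);
	else if (milli < 100)
		snprintf(util, len, "0.0%zu", milli);
	else if (milli < 1000)
		snprintf(util, len, "0.%zu", milli);
	else
		snprintf(util, len, "1");
}

int
main(void)
{
	char util[6];	/* "x.yyy" plus terminating NUL */

	format_util(util, sizeof(util), 75, 512);
	printf("75 of 512 regions used  -> util %s\n", util);	/* 0.146 */
	format_util(util, sizeof(util), 512, 512);
	printf("512 of 512 regions used -> util %s\n", util);	/* 1 */
	return (0);
}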
nhchunks; j++) { uint64_t nmalloc, ndalloc, nrequests; size_t hchunk_size, curhchunks; CTL_M2_M4_GET("stats.arenas.0.hchunks.0.nmalloc", i, j, &nmalloc, uint64_t); CTL_M2_M4_GET("stats.arenas.0.hchunks.0.ndalloc", i, j, &ndalloc, uint64_t); CTL_M2_M4_GET("stats.arenas.0.hchunks.0.nrequests", i, j, &nrequests, uint64_t); if (nrequests == 0) in_gap = true; else { CTL_M2_GET("arenas.hchunk.0.size", j, &hchunk_size, size_t); CTL_M2_M4_GET("stats.arenas.0.hchunks.0.curhchunks", i, j, &curhchunks, size_t); if (in_gap) { malloc_cprintf(write_cb, cbopaque, " ---\n"); in_gap = false; } malloc_cprintf(write_cb, cbopaque, "%20zu %3u %12zu %12"FMTu64" %12"FMTu64 " %12"FMTu64" %12zu\n", hchunk_size, nbins + nlruns + j, curhchunks * hchunk_size, nmalloc, ndalloc, nrequests, curhchunks); } } if (in_gap) { malloc_cprintf(write_cb, cbopaque, " ---\n"); } } static void stats_arena_print(void (*write_cb)(void *, const char *), void *cbopaque, unsigned i, bool bins, bool large, bool huge) { unsigned nthreads; const char *dss; ssize_t lg_dirty_mult, decay_time; - size_t page, pactive, pdirty, mapped; + size_t page, pactive, pdirty, mapped, retained; size_t metadata_mapped, metadata_allocated; uint64_t npurge, nmadvise, purged; size_t small_allocated; uint64_t small_nmalloc, small_ndalloc, small_nrequests; size_t large_allocated; uint64_t large_nmalloc, large_ndalloc, large_nrequests; size_t huge_allocated; uint64_t huge_nmalloc, huge_ndalloc, huge_nrequests; CTL_GET("arenas.page", &page, size_t); CTL_M2_GET("stats.arenas.0.nthreads", i, &nthreads, unsigned); malloc_cprintf(write_cb, cbopaque, "assigned threads: %u\n", nthreads); CTL_M2_GET("stats.arenas.0.dss", i, &dss, const char *); malloc_cprintf(write_cb, cbopaque, "dss allocation precedence: %s\n", dss); CTL_M2_GET("stats.arenas.0.lg_dirty_mult", i, &lg_dirty_mult, ssize_t); if (opt_purge == purge_mode_ratio) { if (lg_dirty_mult >= 0) { malloc_cprintf(write_cb, cbopaque, "min active:dirty page ratio: %u:1\n", (1U << lg_dirty_mult)); } else { malloc_cprintf(write_cb, cbopaque, "min active:dirty page ratio: N/A\n"); } } CTL_M2_GET("stats.arenas.0.decay_time", i, &decay_time, ssize_t); if (opt_purge == purge_mode_decay) { if (decay_time >= 0) { malloc_cprintf(write_cb, cbopaque, "decay time: %zd\n", decay_time); } else malloc_cprintf(write_cb, cbopaque, "decay time: N/A\n"); } CTL_M2_GET("stats.arenas.0.pactive", i, &pactive, size_t); CTL_M2_GET("stats.arenas.0.pdirty", i, &pdirty, size_t); CTL_M2_GET("stats.arenas.0.npurge", i, &npurge, uint64_t); CTL_M2_GET("stats.arenas.0.nmadvise", i, &nmadvise, uint64_t); CTL_M2_GET("stats.arenas.0.purged", i, &purged, uint64_t); malloc_cprintf(write_cb, cbopaque, "purging: dirty: %zu, sweeps: %"FMTu64", madvises: %"FMTu64", " "purged: %"FMTu64"\n", pdirty, npurge, nmadvise, purged); malloc_cprintf(write_cb, cbopaque, " allocated nmalloc ndalloc" " nrequests\n"); CTL_M2_GET("stats.arenas.0.small.allocated", i, &small_allocated, size_t); CTL_M2_GET("stats.arenas.0.small.nmalloc", i, &small_nmalloc, uint64_t); CTL_M2_GET("stats.arenas.0.small.ndalloc", i, &small_ndalloc, uint64_t); CTL_M2_GET("stats.arenas.0.small.nrequests", i, &small_nrequests, uint64_t); malloc_cprintf(write_cb, cbopaque, "small: %12zu %12"FMTu64" %12"FMTu64 " %12"FMTu64"\n", small_allocated, small_nmalloc, small_ndalloc, small_nrequests); CTL_M2_GET("stats.arenas.0.large.allocated", i, &large_allocated, size_t); CTL_M2_GET("stats.arenas.0.large.nmalloc", i, &large_nmalloc, uint64_t); CTL_M2_GET("stats.arenas.0.large.ndalloc", i, &large_ndalloc, 
uint64_t); CTL_M2_GET("stats.arenas.0.large.nrequests", i, &large_nrequests, uint64_t); malloc_cprintf(write_cb, cbopaque, "large: %12zu %12"FMTu64" %12"FMTu64 " %12"FMTu64"\n", large_allocated, large_nmalloc, large_ndalloc, large_nrequests); CTL_M2_GET("stats.arenas.0.huge.allocated", i, &huge_allocated, size_t); CTL_M2_GET("stats.arenas.0.huge.nmalloc", i, &huge_nmalloc, uint64_t); CTL_M2_GET("stats.arenas.0.huge.ndalloc", i, &huge_ndalloc, uint64_t); CTL_M2_GET("stats.arenas.0.huge.nrequests", i, &huge_nrequests, uint64_t); malloc_cprintf(write_cb, cbopaque, "huge: %12zu %12"FMTu64" %12"FMTu64 " %12"FMTu64"\n", huge_allocated, huge_nmalloc, huge_ndalloc, huge_nrequests); malloc_cprintf(write_cb, cbopaque, "total: %12zu %12"FMTu64" %12"FMTu64 " %12"FMTu64"\n", small_allocated + large_allocated + huge_allocated, small_nmalloc + large_nmalloc + huge_nmalloc, small_ndalloc + large_ndalloc + huge_ndalloc, small_nrequests + large_nrequests + huge_nrequests); malloc_cprintf(write_cb, cbopaque, "active: %12zu\n", pactive * page); CTL_M2_GET("stats.arenas.0.mapped", i, &mapped, size_t); malloc_cprintf(write_cb, cbopaque, "mapped: %12zu\n", mapped); + CTL_M2_GET("stats.arenas.0.retained", i, &retained, size_t); + malloc_cprintf(write_cb, cbopaque, + "retained: %12zu\n", retained); CTL_M2_GET("stats.arenas.0.metadata.mapped", i, &metadata_mapped, size_t); CTL_M2_GET("stats.arenas.0.metadata.allocated", i, &metadata_allocated, size_t); malloc_cprintf(write_cb, cbopaque, "metadata: mapped: %zu, allocated: %zu\n", metadata_mapped, metadata_allocated); if (bins) stats_arena_bins_print(write_cb, cbopaque, i); if (large) stats_arena_lruns_print(write_cb, cbopaque, i); if (huge) stats_arena_hchunks_print(write_cb, cbopaque, i); } void stats_print(void (*write_cb)(void *, const char *), void *cbopaque, const char *opts) { int err; uint64_t epoch; size_t u64sz; bool general = true; bool merged = true; bool unmerged = true; bool bins = true; bool large = true; bool huge = true; /* * Refresh stats, in case mallctl() was called by the application. * * Check for OOM here, since refreshing the ctl cache can trigger * allocation. In practice, none of the subsequent mallctl()-related * calls in this function will cause OOM if this one succeeds. * */ epoch = 1; u64sz = sizeof(uint64_t); err = je_mallctl("epoch", &epoch, &u64sz, &epoch, sizeof(uint64_t)); if (err != 0) { if (err == EAGAIN) { malloc_write(": Memory allocation failure in " "mallctl(\"epoch\", ...)\n"); return; } malloc_write(": Failure in mallctl(\"epoch\", " "...)\n"); abort(); } if (opts != NULL) { unsigned i; for (i = 0; opts[i] != '\0'; i++) { switch (opts[i]) { case 'g': general = false; break; case 'm': merged = false; break; case 'a': unmerged = false; break; case 'b': bins = false; break; case 'l': large = false; break; case 'h': huge = false; break; default:; } } } malloc_cprintf(write_cb, cbopaque, "___ Begin jemalloc statistics ___\n"); if (general) { const char *cpv; bool bv; unsigned uv; ssize_t ssv; size_t sv, bsz, usz, ssz, sssz, cpsz; bsz = sizeof(bool); usz = sizeof(unsigned); ssz = sizeof(size_t); sssz = sizeof(ssize_t); cpsz = sizeof(const char *); CTL_GET("version", &cpv, const char *); malloc_cprintf(write_cb, cbopaque, "Version: %s\n", cpv); CTL_GET("config.debug", &bv, bool); malloc_cprintf(write_cb, cbopaque, "Assertions %s\n", bv ? 
"enabled" : "disabled"); malloc_cprintf(write_cb, cbopaque, "config.malloc_conf: \"%s\"\n", config_malloc_conf); #define OPT_WRITE_BOOL(n) \ if (je_mallctl("opt."#n, &bv, &bsz, NULL, 0) == 0) { \ malloc_cprintf(write_cb, cbopaque, \ " opt."#n": %s\n", bv ? "true" : "false"); \ } #define OPT_WRITE_BOOL_MUTABLE(n, m) { \ bool bv2; \ if (je_mallctl("opt."#n, &bv, &bsz, NULL, 0) == 0 && \ je_mallctl(#m, &bv2, &bsz, NULL, 0) == 0) { \ malloc_cprintf(write_cb, cbopaque, \ " opt."#n": %s ("#m": %s)\n", bv ? "true" \ : "false", bv2 ? "true" : "false"); \ } \ } #define OPT_WRITE_UNSIGNED(n) \ if (je_mallctl("opt."#n, &uv, &usz, NULL, 0) == 0) { \ malloc_cprintf(write_cb, cbopaque, \ - " opt."#n": %zu\n", sv); \ + " opt."#n": %u\n", uv); \ } #define OPT_WRITE_SIZE_T(n) \ if (je_mallctl("opt."#n, &sv, &ssz, NULL, 0) == 0) { \ malloc_cprintf(write_cb, cbopaque, \ " opt."#n": %zu\n", sv); \ } #define OPT_WRITE_SSIZE_T(n) \ if (je_mallctl("opt."#n, &ssv, &sssz, NULL, 0) == 0) { \ malloc_cprintf(write_cb, cbopaque, \ " opt."#n": %zd\n", ssv); \ } #define OPT_WRITE_SSIZE_T_MUTABLE(n, m) { \ ssize_t ssv2; \ if (je_mallctl("opt."#n, &ssv, &sssz, NULL, 0) == 0 && \ je_mallctl(#m, &ssv2, &sssz, NULL, 0) == 0) { \ malloc_cprintf(write_cb, cbopaque, \ " opt."#n": %zd ("#m": %zd)\n", \ ssv, ssv2); \ } \ } #define OPT_WRITE_CHAR_P(n) \ if (je_mallctl("opt."#n, &cpv, &cpsz, NULL, 0) == 0) { \ malloc_cprintf(write_cb, cbopaque, \ " opt."#n": \"%s\"\n", cpv); \ } malloc_cprintf(write_cb, cbopaque, "Run-time option settings:\n"); OPT_WRITE_BOOL(abort) OPT_WRITE_SIZE_T(lg_chunk) OPT_WRITE_CHAR_P(dss) OPT_WRITE_UNSIGNED(narenas) OPT_WRITE_CHAR_P(purge) if (opt_purge == purge_mode_ratio) { OPT_WRITE_SSIZE_T_MUTABLE(lg_dirty_mult, arenas.lg_dirty_mult) } if (opt_purge == purge_mode_decay) OPT_WRITE_SSIZE_T_MUTABLE(decay_time, arenas.decay_time) OPT_WRITE_BOOL(stats_print) OPT_WRITE_CHAR_P(junk) OPT_WRITE_SIZE_T(quarantine) OPT_WRITE_BOOL(redzone) OPT_WRITE_BOOL(zero) OPT_WRITE_BOOL(utrace) OPT_WRITE_BOOL(valgrind) OPT_WRITE_BOOL(xmalloc) OPT_WRITE_BOOL(tcache) OPT_WRITE_SSIZE_T(lg_tcache_max) OPT_WRITE_BOOL(prof) OPT_WRITE_CHAR_P(prof_prefix) OPT_WRITE_BOOL_MUTABLE(prof_active, prof.active) OPT_WRITE_BOOL_MUTABLE(prof_thread_active_init, prof.thread_active_init) OPT_WRITE_SSIZE_T(lg_prof_sample) OPT_WRITE_BOOL(prof_accum) OPT_WRITE_SSIZE_T(lg_prof_interval) OPT_WRITE_BOOL(prof_gdump) OPT_WRITE_BOOL(prof_final) OPT_WRITE_BOOL(prof_leak) #undef OPT_WRITE_BOOL #undef OPT_WRITE_BOOL_MUTABLE #undef OPT_WRITE_SIZE_T #undef OPT_WRITE_SSIZE_T #undef OPT_WRITE_CHAR_P malloc_cprintf(write_cb, cbopaque, "CPUs: %u\n", ncpus); CTL_GET("arenas.narenas", &uv, unsigned); malloc_cprintf(write_cb, cbopaque, "Arenas: %u\n", uv); malloc_cprintf(write_cb, cbopaque, "Pointer size: %zu\n", sizeof(void *)); CTL_GET("arenas.quantum", &sv, size_t); malloc_cprintf(write_cb, cbopaque, "Quantum size: %zu\n", sv); CTL_GET("arenas.page", &sv, size_t); malloc_cprintf(write_cb, cbopaque, "Page size: %zu\n", sv); CTL_GET("arenas.lg_dirty_mult", &ssv, ssize_t); if (opt_purge == purge_mode_ratio) { if (ssv >= 0) { malloc_cprintf(write_cb, cbopaque, "Min active:dirty page ratio per arena: " "%u:1\n", (1U << ssv)); } else { malloc_cprintf(write_cb, cbopaque, "Min active:dirty page ratio per arena: " "N/A\n"); } } CTL_GET("arenas.decay_time", &ssv, ssize_t); if (opt_purge == purge_mode_decay) { malloc_cprintf(write_cb, cbopaque, "Unused dirty page decay time: %zd%s\n", ssv, (ssv < 0) ? 
" (no decay)" : ""); } if (je_mallctl("arenas.tcache_max", &sv, &ssz, NULL, 0) == 0) { malloc_cprintf(write_cb, cbopaque, "Maximum thread-cached size class: %zu\n", sv); } if (je_mallctl("opt.prof", &bv, &bsz, NULL, 0) == 0 && bv) { CTL_GET("prof.lg_sample", &sv, size_t); malloc_cprintf(write_cb, cbopaque, "Average profile sample interval: %"FMTu64 " (2^%zu)\n", (((uint64_t)1U) << sv), sv); CTL_GET("opt.lg_prof_interval", &ssv, ssize_t); if (ssv >= 0) { malloc_cprintf(write_cb, cbopaque, "Average profile dump interval: %"FMTu64 " (2^%zd)\n", (((uint64_t)1U) << ssv), ssv); } else { malloc_cprintf(write_cb, cbopaque, "Average profile dump interval: N/A\n"); } } CTL_GET("opt.lg_chunk", &sv, size_t); malloc_cprintf(write_cb, cbopaque, "Chunk size: %zu (2^%zu)\n", (ZU(1) << sv), sv); } if (config_stats) { size_t *cactive; - size_t allocated, active, metadata, resident, mapped; + size_t allocated, active, metadata, resident, mapped, retained; CTL_GET("stats.cactive", &cactive, size_t *); CTL_GET("stats.allocated", &allocated, size_t); CTL_GET("stats.active", &active, size_t); CTL_GET("stats.metadata", &metadata, size_t); CTL_GET("stats.resident", &resident, size_t); CTL_GET("stats.mapped", &mapped, size_t); + CTL_GET("stats.retained", &retained, size_t); malloc_cprintf(write_cb, cbopaque, "Allocated: %zu, active: %zu, metadata: %zu," - " resident: %zu, mapped: %zu\n", - allocated, active, metadata, resident, mapped); + " resident: %zu, mapped: %zu, retained: %zu\n", + allocated, active, metadata, resident, mapped, retained); malloc_cprintf(write_cb, cbopaque, "Current active ceiling: %zu\n", atomic_read_z(cactive)); if (merged) { unsigned narenas; CTL_GET("arenas.narenas", &narenas, unsigned); { VARIABLE_ARRAY(bool, initialized, narenas); size_t isz; unsigned i, ninitialized; isz = sizeof(bool) * narenas; xmallctl("arenas.initialized", initialized, &isz, NULL, 0); for (i = ninitialized = 0; i < narenas; i++) { if (initialized[i]) ninitialized++; } if (ninitialized > 1 || !unmerged) { /* Print merged arena stats. */ malloc_cprintf(write_cb, cbopaque, "\nMerged arenas stats:\n"); stats_arena_print(write_cb, cbopaque, narenas, bins, large, huge); } } } if (unmerged) { unsigned narenas; /* Print stats for each arena. */ CTL_GET("arenas.narenas", &narenas, unsigned); { VARIABLE_ARRAY(bool, initialized, narenas); size_t isz; unsigned i; isz = sizeof(bool) * narenas; xmallctl("arenas.initialized", initialized, &isz, NULL, 0); for (i = 0; i < narenas; i++) { if (initialized[i]) { malloc_cprintf(write_cb, cbopaque, "\narenas[%u]:\n", i); stats_arena_print(write_cb, cbopaque, i, bins, large, huge); } } } } } malloc_cprintf(write_cb, cbopaque, "--- End jemalloc statistics ---\n"); } Index: head/contrib/jemalloc/src/tcache.c =================================================================== --- head/contrib/jemalloc/src/tcache.c (revision 299586) +++ head/contrib/jemalloc/src/tcache.c (revision 299587) @@ -1,546 +1,555 @@ #define JEMALLOC_TCACHE_C_ #include "jemalloc/internal/jemalloc_internal.h" /******************************************************************************/ /* Data. */ bool opt_tcache = true; ssize_t opt_lg_tcache_max = LG_TCACHE_MAXCLASS_DEFAULT; tcache_bin_info_t *tcache_bin_info; static unsigned stack_nelms; /* Total stack elms per tcache. */ unsigned nhbins; size_t tcache_maxclass; tcaches_t *tcaches; /* Index of first element within tcaches that has never been used. */ static unsigned tcaches_past; /* Head of singly linked list tracking available tcaches elements. 
*/ static tcaches_t *tcaches_avail; /******************************************************************************/ -size_t tcache_salloc(const void *ptr) +size_t +tcache_salloc(tsdn_t *tsdn, const void *ptr) { - return (arena_salloc(ptr, false)); + return (arena_salloc(tsdn, ptr, false)); } void tcache_event_hard(tsd_t *tsd, tcache_t *tcache) { szind_t binind = tcache->next_gc_bin; tcache_bin_t *tbin = &tcache->tbins[binind]; tcache_bin_info_t *tbin_info = &tcache_bin_info[binind]; if (tbin->low_water > 0) { /* * Flush (ceiling) 3/4 of the objects below the low water mark. */ if (binind < NBINS) { tcache_bin_flush_small(tsd, tcache, tbin, binind, tbin->ncached - tbin->low_water + (tbin->low_water >> 2)); } else { tcache_bin_flush_large(tsd, tbin, binind, tbin->ncached - tbin->low_water + (tbin->low_water >> 2), tcache); } /* * Reduce fill count by 2X. Limit lg_fill_div such that the * fill count is always at least 1. */ if ((tbin_info->ncached_max >> (tbin->lg_fill_div+1)) >= 1) tbin->lg_fill_div++; } else if (tbin->low_water < 0) { /* * Increase fill count by 2X. Make sure lg_fill_div stays * greater than 0. */ if (tbin->lg_fill_div > 1) tbin->lg_fill_div--; } tbin->low_water = tbin->ncached; tcache->next_gc_bin++; if (tcache->next_gc_bin == nhbins) tcache->next_gc_bin = 0; } void * -tcache_alloc_small_hard(tsd_t *tsd, arena_t *arena, tcache_t *tcache, +tcache_alloc_small_hard(tsdn_t *tsdn, arena_t *arena, tcache_t *tcache, tcache_bin_t *tbin, szind_t binind, bool *tcache_success) { void *ret; - arena_tcache_fill_small(tsd, arena, tbin, binind, config_prof ? + arena_tcache_fill_small(tsdn, arena, tbin, binind, config_prof ? tcache->prof_accumbytes : 0); if (config_prof) tcache->prof_accumbytes = 0; ret = tcache_alloc_easy(tbin, tcache_success); return (ret); } void tcache_bin_flush_small(tsd_t *tsd, tcache_t *tcache, tcache_bin_t *tbin, szind_t binind, unsigned rem) { arena_t *arena; void *ptr; unsigned i, nflush, ndeferred; bool merged_stats = false; assert(binind < NBINS); assert(rem <= tbin->ncached); arena = arena_choose(tsd, NULL); assert(arena != NULL); for (nflush = tbin->ncached - rem; nflush > 0; nflush = ndeferred) { /* Lock the arena bin associated with the first object. 
*/ arena_chunk_t *chunk = (arena_chunk_t *)CHUNK_ADDR2BASE( *(tbin->avail - 1)); arena_t *bin_arena = extent_node_arena_get(&chunk->node); arena_bin_t *bin = &bin_arena->bins[binind]; if (config_prof && bin_arena == arena) { - if (arena_prof_accum(arena, tcache->prof_accumbytes)) - prof_idump(); + if (arena_prof_accum(tsd_tsdn(tsd), arena, + tcache->prof_accumbytes)) + prof_idump(tsd_tsdn(tsd)); tcache->prof_accumbytes = 0; } - malloc_mutex_lock(&bin->lock); + malloc_mutex_lock(tsd_tsdn(tsd), &bin->lock); if (config_stats && bin_arena == arena) { assert(!merged_stats); merged_stats = true; bin->stats.nflushes++; bin->stats.nrequests += tbin->tstats.nrequests; tbin->tstats.nrequests = 0; } ndeferred = 0; for (i = 0; i < nflush; i++) { ptr = *(tbin->avail - 1 - i); assert(ptr != NULL); chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(ptr); if (extent_node_arena_get(&chunk->node) == bin_arena) { size_t pageind = ((uintptr_t)ptr - (uintptr_t)chunk) >> LG_PAGE; arena_chunk_map_bits_t *bitselm = - arena_bitselm_get(chunk, pageind); - arena_dalloc_bin_junked_locked(bin_arena, chunk, - ptr, bitselm); + arena_bitselm_get_mutable(chunk, pageind); + arena_dalloc_bin_junked_locked(tsd_tsdn(tsd), + bin_arena, chunk, ptr, bitselm); } else { /* * This object was allocated via a different * arena bin than the one that is currently * locked. Stash the object, so that it can be * handled in a future pass. */ *(tbin->avail - 1 - ndeferred) = ptr; ndeferred++; } } - malloc_mutex_unlock(&bin->lock); - arena_decay_ticks(tsd, bin_arena, nflush - ndeferred); + malloc_mutex_unlock(tsd_tsdn(tsd), &bin->lock); + arena_decay_ticks(tsd_tsdn(tsd), bin_arena, nflush - ndeferred); } if (config_stats && !merged_stats) { /* * The flush loop didn't happen to flush to this thread's * arena, so the stats didn't get merged. Manually do so now. */ arena_bin_t *bin = &arena->bins[binind]; - malloc_mutex_lock(&bin->lock); + malloc_mutex_lock(tsd_tsdn(tsd), &bin->lock); bin->stats.nflushes++; bin->stats.nrequests += tbin->tstats.nrequests; tbin->tstats.nrequests = 0; - malloc_mutex_unlock(&bin->lock); + malloc_mutex_unlock(tsd_tsdn(tsd), &bin->lock); } memmove(tbin->avail - rem, tbin->avail - tbin->ncached, rem * sizeof(void *)); tbin->ncached = rem; if ((int)tbin->ncached < tbin->low_water) tbin->low_water = tbin->ncached; } void tcache_bin_flush_large(tsd_t *tsd, tcache_bin_t *tbin, szind_t binind, unsigned rem, tcache_t *tcache) { arena_t *arena; void *ptr; unsigned i, nflush, ndeferred; bool merged_stats = false; assert(binind < nhbins); assert(rem <= tbin->ncached); arena = arena_choose(tsd, NULL); assert(arena != NULL); for (nflush = tbin->ncached - rem; nflush > 0; nflush = ndeferred) { /* Lock the arena associated with the first object. 
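[Editorial note: tcache_bin_flush_small() and tcache_bin_flush_large() share the same deferral pattern: lock the arena that owns the first cached object, free every object with that owner, and stash the rest for the next pass. A simplified standalone sketch, not jemalloc code; owner_of() and free_locked() are hypothetical callbacks:]

static void
flush_deferred(void **items, unsigned nitems,
    void *(*owner_of)(void *ptr), void (*free_locked)(void *ptr))
{
	unsigned nflush, ndeferred, i;

	for (nflush = nitems; nflush > 0; nflush = ndeferred) {
		void *owner = owner_of(items[0]);

		/* lock(owner) */
		ndeferred = 0;
		for (i = 0; i < nflush; i++) {
			if (owner_of(items[i]) == owner) {
				free_locked(items[i]);
			} else {
				/* Wrong owner; stash for a later pass. */
				items[ndeferred++] = items[i];
			}
		}
		/* unlock(owner) */
	}
}
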
*/ arena_chunk_t *chunk = (arena_chunk_t *)CHUNK_ADDR2BASE( *(tbin->avail - 1)); arena_t *locked_arena = extent_node_arena_get(&chunk->node); UNUSED bool idump; if (config_prof) idump = false; - malloc_mutex_lock(&locked_arena->lock); + malloc_mutex_lock(tsd_tsdn(tsd), &locked_arena->lock); if ((config_prof || config_stats) && locked_arena == arena) { if (config_prof) { idump = arena_prof_accum_locked(arena, tcache->prof_accumbytes); tcache->prof_accumbytes = 0; } if (config_stats) { merged_stats = true; arena->stats.nrequests_large += tbin->tstats.nrequests; arena->stats.lstats[binind - NBINS].nrequests += tbin->tstats.nrequests; tbin->tstats.nrequests = 0; } } ndeferred = 0; for (i = 0; i < nflush; i++) { ptr = *(tbin->avail - 1 - i); assert(ptr != NULL); chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(ptr); if (extent_node_arena_get(&chunk->node) == locked_arena) { - arena_dalloc_large_junked_locked(locked_arena, - chunk, ptr); + arena_dalloc_large_junked_locked(tsd_tsdn(tsd), + locked_arena, chunk, ptr); } else { /* * This object was allocated via a different * arena than the one that is currently locked. * Stash the object, so that it can be handled * in a future pass. */ *(tbin->avail - 1 - ndeferred) = ptr; ndeferred++; } } - malloc_mutex_unlock(&locked_arena->lock); + malloc_mutex_unlock(tsd_tsdn(tsd), &locked_arena->lock); if (config_prof && idump) - prof_idump(); - arena_decay_ticks(tsd, locked_arena, nflush - ndeferred); + prof_idump(tsd_tsdn(tsd)); + arena_decay_ticks(tsd_tsdn(tsd), locked_arena, nflush - + ndeferred); } if (config_stats && !merged_stats) { /* * The flush loop didn't happen to flush to this thread's * arena, so the stats didn't get merged. Manually do so now. */ - malloc_mutex_lock(&arena->lock); + malloc_mutex_lock(tsd_tsdn(tsd), &arena->lock); arena->stats.nrequests_large += tbin->tstats.nrequests; arena->stats.lstats[binind - NBINS].nrequests += tbin->tstats.nrequests; tbin->tstats.nrequests = 0; - malloc_mutex_unlock(&arena->lock); + malloc_mutex_unlock(tsd_tsdn(tsd), &arena->lock); } memmove(tbin->avail - rem, tbin->avail - tbin->ncached, rem * sizeof(void *)); tbin->ncached = rem; if ((int)tbin->ncached < tbin->low_water) tbin->low_water = tbin->ncached; } -void -tcache_arena_associate(tcache_t *tcache, arena_t *arena) +static void +tcache_arena_associate(tsdn_t *tsdn, tcache_t *tcache, arena_t *arena) { if (config_stats) { /* Link into list of extant tcaches. */ - malloc_mutex_lock(&arena->lock); + malloc_mutex_lock(tsdn, &arena->lock); ql_elm_new(tcache, link); ql_tail_insert(&arena->tcache_ql, tcache, link); - malloc_mutex_unlock(&arena->lock); + malloc_mutex_unlock(tsdn, &arena->lock); } } -void -tcache_arena_reassociate(tcache_t *tcache, arena_t *oldarena, arena_t *newarena) +static void +tcache_arena_dissociate(tsdn_t *tsdn, tcache_t *tcache, arena_t *arena) { - tcache_arena_dissociate(tcache, oldarena); - tcache_arena_associate(tcache, newarena); -} - -void -tcache_arena_dissociate(tcache_t *tcache, arena_t *arena) -{ - if (config_stats) { /* Unlink from list of extant tcaches. 
*/ - malloc_mutex_lock(&arena->lock); + malloc_mutex_lock(tsdn, &arena->lock); if (config_debug) { bool in_ql = false; tcache_t *iter; ql_foreach(iter, &arena->tcache_ql, link) { if (iter == tcache) { in_ql = true; break; } } assert(in_ql); } ql_remove(&arena->tcache_ql, tcache, link); - tcache_stats_merge(tcache, arena); - malloc_mutex_unlock(&arena->lock); + tcache_stats_merge(tsdn, tcache, arena); + malloc_mutex_unlock(tsdn, &arena->lock); } } +void +tcache_arena_reassociate(tsdn_t *tsdn, tcache_t *tcache, arena_t *oldarena, + arena_t *newarena) +{ + + tcache_arena_dissociate(tsdn, tcache, oldarena); + tcache_arena_associate(tsdn, tcache, newarena); +} + tcache_t * tcache_get_hard(tsd_t *tsd) { arena_t *arena; if (!tcache_enabled_get()) { if (tsd_nominal(tsd)) tcache_enabled_set(false); /* Memoize. */ return (NULL); } arena = arena_choose(tsd, NULL); if (unlikely(arena == NULL)) return (NULL); - return (tcache_create(tsd, arena)); + return (tcache_create(tsd_tsdn(tsd), arena)); } tcache_t * -tcache_create(tsd_t *tsd, arena_t *arena) +tcache_create(tsdn_t *tsdn, arena_t *arena) { tcache_t *tcache; size_t size, stack_offset; unsigned i; size = offsetof(tcache_t, tbins) + (sizeof(tcache_bin_t) * nhbins); /* Naturally align the pointer stacks. */ size = PTR_CEILING(size); stack_offset = size; size += stack_nelms * sizeof(void *); /* Avoid false cacheline sharing. */ size = sa2u(size, CACHELINE); - tcache = ipallocztm(tsd, size, CACHELINE, true, false, true, - arena_get(0, false)); + tcache = ipallocztm(tsdn, size, CACHELINE, true, NULL, true, + arena_get(TSDN_NULL, 0, true)); if (tcache == NULL) return (NULL); - tcache_arena_associate(tcache, arena); + tcache_arena_associate(tsdn, tcache, arena); ticker_init(&tcache->gc_ticker, TCACHE_GC_INCR); assert((TCACHE_NSLOTS_SMALL_MAX & 1U) == 0); for (i = 0; i < nhbins; i++) { tcache->tbins[i].lg_fill_div = 1; stack_offset += tcache_bin_info[i].ncached_max * sizeof(void *); /* * avail points past the available space. Allocations will * access the slots toward higher addresses (for the benefit of * prefetch). 
*/ tcache->tbins[i].avail = (void **)((uintptr_t)tcache + (uintptr_t)stack_offset); } return (tcache); } static void tcache_destroy(tsd_t *tsd, tcache_t *tcache) { arena_t *arena; unsigned i; arena = arena_choose(tsd, NULL); - tcache_arena_dissociate(tcache, arena); + tcache_arena_dissociate(tsd_tsdn(tsd), tcache, arena); for (i = 0; i < NBINS; i++) { tcache_bin_t *tbin = &tcache->tbins[i]; tcache_bin_flush_small(tsd, tcache, tbin, i, 0); if (config_stats && tbin->tstats.nrequests != 0) { arena_bin_t *bin = &arena->bins[i]; - malloc_mutex_lock(&bin->lock); + malloc_mutex_lock(tsd_tsdn(tsd), &bin->lock); bin->stats.nrequests += tbin->tstats.nrequests; - malloc_mutex_unlock(&bin->lock); + malloc_mutex_unlock(tsd_tsdn(tsd), &bin->lock); } } for (; i < nhbins; i++) { tcache_bin_t *tbin = &tcache->tbins[i]; tcache_bin_flush_large(tsd, tbin, i, 0, tcache); if (config_stats && tbin->tstats.nrequests != 0) { - malloc_mutex_lock(&arena->lock); + malloc_mutex_lock(tsd_tsdn(tsd), &arena->lock); arena->stats.nrequests_large += tbin->tstats.nrequests; arena->stats.lstats[i - NBINS].nrequests += tbin->tstats.nrequests; - malloc_mutex_unlock(&arena->lock); + malloc_mutex_unlock(tsd_tsdn(tsd), &arena->lock); } } if (config_prof && tcache->prof_accumbytes > 0 && - arena_prof_accum(arena, tcache->prof_accumbytes)) - prof_idump(); + arena_prof_accum(tsd_tsdn(tsd), arena, tcache->prof_accumbytes)) + prof_idump(tsd_tsdn(tsd)); - idalloctm(tsd, tcache, false, true, true); + idalloctm(tsd_tsdn(tsd), tcache, NULL, true, true); } void tcache_cleanup(tsd_t *tsd) { tcache_t *tcache; if (!config_tcache) return; if ((tcache = tsd_tcache_get(tsd)) != NULL) { tcache_destroy(tsd, tcache); tsd_tcache_set(tsd, NULL); } } void tcache_enabled_cleanup(tsd_t *tsd) { /* Do nothing. */ } -/* Caller must own arena->lock. */ void -tcache_stats_merge(tcache_t *tcache, arena_t *arena) +tcache_stats_merge(tsdn_t *tsdn, tcache_t *tcache, arena_t *arena) { unsigned i; cassert(config_stats); + malloc_mutex_assert_owner(tsdn, &arena->lock); + /* Merge and reset tcache stats. 
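[Editorial note: tcache_stats_merge() now asserts its locking precondition with malloc_mutex_assert_owner() instead of relying on a "caller must own arena->lock" comment. A minimal sketch of the same idiom using the internal types from this diff; merge_into() is a made-up stand-in:]

static void
merge_into(tsdn_t *tsdn, arena_t *arena)
{

	/* Debug builds abort if the caller does not hold arena->lock. */
	malloc_mutex_assert_owner(tsdn, &arena->lock);
	/* ... merge per-thread counters into arena->stats ... */
}
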
*/ for (i = 0; i < NBINS; i++) { arena_bin_t *bin = &arena->bins[i]; tcache_bin_t *tbin = &tcache->tbins[i]; - malloc_mutex_lock(&bin->lock); + malloc_mutex_lock(tsdn, &bin->lock); bin->stats.nrequests += tbin->tstats.nrequests; - malloc_mutex_unlock(&bin->lock); + malloc_mutex_unlock(tsdn, &bin->lock); tbin->tstats.nrequests = 0; } for (; i < nhbins; i++) { malloc_large_stats_t *lstats = &arena->stats.lstats[i - NBINS]; tcache_bin_t *tbin = &tcache->tbins[i]; arena->stats.nrequests_large += tbin->tstats.nrequests; lstats->nrequests += tbin->tstats.nrequests; tbin->tstats.nrequests = 0; } } bool -tcaches_create(tsd_t *tsd, unsigned *r_ind) +tcaches_create(tsdn_t *tsdn, unsigned *r_ind) { + arena_t *arena; tcache_t *tcache; tcaches_t *elm; if (tcaches == NULL) { - tcaches = base_alloc(sizeof(tcache_t *) * + tcaches = base_alloc(tsdn, sizeof(tcache_t *) * (MALLOCX_TCACHE_MAX+1)); if (tcaches == NULL) return (true); } if (tcaches_avail == NULL && tcaches_past > MALLOCX_TCACHE_MAX) return (true); - tcache = tcache_create(tsd, arena_get(0, false)); + arena = arena_ichoose(tsdn, NULL); + if (unlikely(arena == NULL)) + return (true); + tcache = tcache_create(tsdn, arena); if (tcache == NULL) return (true); if (tcaches_avail != NULL) { elm = tcaches_avail; tcaches_avail = tcaches_avail->next; elm->tcache = tcache; *r_ind = (unsigned)(elm - tcaches); } else { elm = &tcaches[tcaches_past]; elm->tcache = tcache; *r_ind = tcaches_past; tcaches_past++; } return (false); } static void tcaches_elm_flush(tsd_t *tsd, tcaches_t *elm) { if (elm->tcache == NULL) return; tcache_destroy(tsd, elm->tcache); elm->tcache = NULL; } void tcaches_flush(tsd_t *tsd, unsigned ind) { tcaches_elm_flush(tsd, &tcaches[ind]); } void tcaches_destroy(tsd_t *tsd, unsigned ind) { tcaches_t *elm = &tcaches[ind]; tcaches_elm_flush(tsd, elm); elm->next = tcaches_avail; tcaches_avail = elm; } bool -tcache_boot(void) +tcache_boot(tsdn_t *tsdn) { unsigned i; /* * If necessary, clamp opt_lg_tcache_max, now that large_maxclass is * known. */ if (opt_lg_tcache_max < 0 || (1U << opt_lg_tcache_max) < SMALL_MAXCLASS) tcache_maxclass = SMALL_MAXCLASS; else if ((1U << opt_lg_tcache_max) > large_maxclass) tcache_maxclass = large_maxclass; else tcache_maxclass = (1U << opt_lg_tcache_max); nhbins = size2index(tcache_maxclass) + 1; /* Initialize tcache_bin_info. */ - tcache_bin_info = (tcache_bin_info_t *)base_alloc(nhbins * + tcache_bin_info = (tcache_bin_info_t *)base_alloc(tsdn, nhbins * sizeof(tcache_bin_info_t)); if (tcache_bin_info == NULL) return (true); stack_nelms = 0; for (i = 0; i < NBINS; i++) { if ((arena_bin_info[i].nregs << 1) <= TCACHE_NSLOTS_SMALL_MIN) { tcache_bin_info[i].ncached_max = TCACHE_NSLOTS_SMALL_MIN; } else if ((arena_bin_info[i].nregs << 1) <= TCACHE_NSLOTS_SMALL_MAX) { tcache_bin_info[i].ncached_max = (arena_bin_info[i].nregs << 1); } else { tcache_bin_info[i].ncached_max = TCACHE_NSLOTS_SMALL_MAX; } stack_nelms += tcache_bin_info[i].ncached_max; } for (; i < nhbins; i++) { tcache_bin_info[i].ncached_max = TCACHE_NSLOTS_LARGE; stack_nelms += tcache_bin_info[i].ncached_max; } return (false); } Index: head/contrib/jemalloc/src/tsd.c =================================================================== --- head/contrib/jemalloc/src/tsd.c (revision 299586) +++ head/contrib/jemalloc/src/tsd.c (revision 299587) @@ -1,195 +1,197 @@ #define JEMALLOC_TSD_C_ #include "jemalloc/internal/jemalloc_internal.h" /******************************************************************************/ /* Data. 
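[Editorial note: tcaches_create(), tcaches_flush(), and tcaches_destroy() are the internal helpers behind jemalloc's public "tcache.*" mallctls. A hedged caller-side sketch of explicit tcache use; error handling elided and MALLOCX_TCACHE() assumed available from the same header:]

#include <malloc_np.h>

static void
explicit_tcache_example(void)
{
	unsigned tci;
	size_t sz = sizeof(tci);
	void *p;

	mallctl("tcache.create", &tci, &sz, NULL, 0);
	p = mallocx(64, MALLOCX_TCACHE(tci));
	dallocx(p, MALLOCX_TCACHE(tci));
	mallctl("tcache.destroy", NULL, NULL, &tci, sizeof(tci));
}
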
*/ static unsigned ncleanups; static malloc_tsd_cleanup_t cleanups[MALLOC_TSD_CLEANUPS_MAX]; malloc_tsd_data(, , tsd_t, TSD_INITIALIZER) /******************************************************************************/ void * malloc_tsd_malloc(size_t size) { return (a0malloc(CACHELINE_CEILING(size))); } void malloc_tsd_dalloc(void *wrapper) { a0dalloc(wrapper); } void malloc_tsd_no_cleanup(void *arg) { not_reached(); } #if defined(JEMALLOC_MALLOC_THREAD_CLEANUP) || defined(_WIN32) #ifndef _WIN32 JEMALLOC_EXPORT #endif void _malloc_thread_cleanup(void) { bool pending[MALLOC_TSD_CLEANUPS_MAX], again; unsigned i; for (i = 0; i < ncleanups; i++) pending[i] = true; do { again = false; for (i = 0; i < ncleanups; i++) { if (pending[i]) { pending[i] = cleanups[i](); if (pending[i]) again = true; } } } while (again); } #endif void malloc_tsd_cleanup_register(bool (*f)(void)) { assert(ncleanups < MALLOC_TSD_CLEANUPS_MAX); cleanups[ncleanups] = f; ncleanups++; } void tsd_cleanup(void *arg) { tsd_t *tsd = (tsd_t *)arg; switch (tsd->state) { case tsd_state_uninitialized: /* Do nothing. */ break; case tsd_state_nominal: -#define O(n, t) \ +#define O(n, t) \ n##_cleanup(tsd); MALLOC_TSD #undef O tsd->state = tsd_state_purgatory; tsd_set(tsd); break; case tsd_state_purgatory: /* * The previous time this destructor was called, we set the * state to tsd_state_purgatory so that other destructors * wouldn't cause re-creation of the tsd. This time, do * nothing, and do not request another callback. */ break; case tsd_state_reincarnated: /* * Another destructor deallocated memory after this destructor * was called. Reset state to tsd_state_purgatory and request * another callback. */ tsd->state = tsd_state_purgatory; tsd_set(tsd); break; default: not_reached(); } } -bool +tsd_t * malloc_tsd_boot0(void) { + tsd_t *tsd; ncleanups = 0; if (tsd_boot0()) - return (true); - *tsd_arenas_tdata_bypassp_get(tsd_fetch()) = true; - return (false); + return (NULL); + tsd = tsd_fetch(); + *tsd_arenas_tdata_bypassp_get(tsd) = true; + return (tsd); } void malloc_tsd_boot1(void) { tsd_boot1(); *tsd_arenas_tdata_bypassp_get(tsd_fetch()) = false; } #ifdef _WIN32 static BOOL WINAPI _tls_callback(HINSTANCE hinstDLL, DWORD fdwReason, LPVOID lpvReserved) { switch (fdwReason) { #ifdef JEMALLOC_LAZY_LOCK case DLL_THREAD_ATTACH: isthreaded = true; break; #endif case DLL_THREAD_DETACH: _malloc_thread_cleanup(); break; default: break; } return (true); } #ifdef _MSC_VER # ifdef _M_IX86 # pragma comment(linker, "/INCLUDE:__tls_used") # pragma comment(linker, "/INCLUDE:_tls_callback") # else # pragma comment(linker, "/INCLUDE:_tls_used") # pragma comment(linker, "/INCLUDE:tls_callback") # endif # pragma section(".CRT$XLY",long,read) #endif JEMALLOC_SECTION(".CRT$XLY") JEMALLOC_ATTR(used) BOOL (WINAPI *const tls_callback)(HINSTANCE hinstDLL, DWORD fdwReason, LPVOID lpvReserved) = _tls_callback; #endif #if (!defined(JEMALLOC_MALLOC_THREAD_CLEANUP) && !defined(JEMALLOC_TLS) && \ !defined(_WIN32)) void * tsd_init_check_recursion(tsd_init_head_t *head, tsd_init_block_t *block) { pthread_t self = pthread_self(); tsd_init_block_t *iter; /* Check whether this thread has already inserted into the list. */ - malloc_mutex_lock(&head->lock); + malloc_mutex_lock(NULL, &head->lock); ql_foreach(iter, &head->blocks, link) { if (iter->thread == self) { - malloc_mutex_unlock(&head->lock); + malloc_mutex_unlock(NULL, &head->lock); return (iter->data); } } /* Insert block into list. 
*/ ql_elm_new(block, link); block->thread = self; ql_tail_insert(&head->blocks, block, link); - malloc_mutex_unlock(&head->lock); + malloc_mutex_unlock(NULL, &head->lock); return (NULL); } void tsd_init_finish(tsd_init_head_t *head, tsd_init_block_t *block) { - malloc_mutex_lock(&head->lock); + malloc_mutex_lock(NULL, &head->lock); ql_remove(&head->blocks, block, link); - malloc_mutex_unlock(&head->lock); + malloc_mutex_unlock(NULL, &head->lock); } #endif Index: head/contrib/jemalloc/src/util.c =================================================================== --- head/contrib/jemalloc/src/util.c (revision 299586) +++ head/contrib/jemalloc/src/util.c (revision 299587) @@ -1,684 +1,682 @@ /* * Define simple versions of assertion macros that won't recurse in case * of assertion failures in malloc_*printf(). */ #define assert(e) do { \ if (config_debug && !(e)) { \ malloc_write(": Failed assertion\n"); \ abort(); \ } \ } while (0) #define not_reached() do { \ if (config_debug) { \ malloc_write(": Unreachable code reached\n"); \ abort(); \ } \ + unreachable(); \ } while (0) #define not_implemented() do { \ if (config_debug) { \ malloc_write(": Not implemented\n"); \ abort(); \ } \ } while (0) #define JEMALLOC_UTIL_C_ #include "jemalloc/internal/jemalloc_internal.h" /******************************************************************************/ /* Function prototypes for non-inline static functions. */ static void wrtmessage(void *cbopaque, const char *s); #define U2S_BUFSIZE ((1U << (LG_SIZEOF_INTMAX_T + 3)) + 1) static char *u2s(uintmax_t x, unsigned base, bool uppercase, char *s, size_t *slen_p); #define D2S_BUFSIZE (1 + U2S_BUFSIZE) static char *d2s(intmax_t x, char sign, char *s, size_t *slen_p); #define O2S_BUFSIZE (1 + U2S_BUFSIZE) static char *o2s(uintmax_t x, bool alt_form, char *s, size_t *slen_p); #define X2S_BUFSIZE (2 + U2S_BUFSIZE) static char *x2s(uintmax_t x, bool alt_form, bool uppercase, char *s, size_t *slen_p); /******************************************************************************/ /* malloc_message() setup. */ static void wrtmessage(void *cbopaque, const char *s) { #ifdef SYS_write /* * Use syscall(2) rather than write(2) when possible in order to avoid * the possibility of memory allocation within libc. This is necessary * on FreeBSD; most operating systems do not have this problem though. * * syscall() returns long or int, depending on platform, so capture the * unused result in the widest plausible type to avoid compiler * warnings. */ UNUSED long result = syscall(SYS_write, STDERR_FILENO, s, strlen(s)); #else UNUSED ssize_t result = write(STDERR_FILENO, s, strlen(s)); #endif } JEMALLOC_EXPORT void (*je_malloc_message)(void *, const char *s); JEMALLOC_ATTR(visibility("hidden")) void wrtmessage_1_0(const char *s1, const char *s2, const char *s3, const char *s4) { wrtmessage(NULL, s1); wrtmessage(NULL, s2); wrtmessage(NULL, s3); wrtmessage(NULL, s4); } void (*__malloc_message_1_0)(const char *s1, const char *s2, const char *s3, const char *s4) = wrtmessage_1_0; __sym_compat(_malloc_message, __malloc_message_1_0, FBSD_1.0); /* * Wrapper around malloc_message() that avoids the need for * je_malloc_message(...) throughout the code. */ void malloc_write(const char *s) { if (je_malloc_message != NULL) je_malloc_message(NULL, s); else wrtmessage(NULL, s); } /* * glibc provides a non-standard strerror_r() when _GNU_SOURCE is defined, so * provide a wrapper. 
*/ int buferror(int err, char *buf, size_t buflen) { #ifdef _WIN32 FormatMessageA(FORMAT_MESSAGE_FROM_SYSTEM, NULL, err, 0, (LPSTR)buf, (DWORD)buflen, NULL); return (0); #elif defined(__GLIBC__) && defined(_GNU_SOURCE) char *b = strerror_r(err, buf, buflen); if (b != buf) { strncpy(buf, b, buflen); buf[buflen-1] = '\0'; } return (0); #else return (strerror_r(err, buf, buflen)); #endif } uintmax_t malloc_strtoumax(const char *restrict nptr, char **restrict endptr, int base) { uintmax_t ret, digit; unsigned b; bool neg; const char *p, *ns; p = nptr; if (base < 0 || base == 1 || base > 36) { ns = p; set_errno(EINVAL); ret = UINTMAX_MAX; goto label_return; } b = base; /* Swallow leading whitespace and get sign, if any. */ neg = false; while (true) { switch (*p) { case '\t': case '\n': case '\v': case '\f': case '\r': case ' ': p++; break; case '-': neg = true; /* Fall through. */ case '+': p++; /* Fall through. */ default: goto label_prefix; } } /* Get prefix, if any. */ label_prefix: /* * Note where the first non-whitespace/sign character is so that it is * possible to tell whether any digits are consumed (e.g., " 0" vs. * " -x"). */ ns = p; if (*p == '0') { switch (p[1]) { case '0': case '1': case '2': case '3': case '4': case '5': case '6': case '7': if (b == 0) b = 8; if (b == 8) p++; break; case 'X': case 'x': switch (p[2]) { case '0': case '1': case '2': case '3': case '4': case '5': case '6': case '7': case '8': case '9': case 'A': case 'B': case 'C': case 'D': case 'E': case 'F': case 'a': case 'b': case 'c': case 'd': case 'e': case 'f': if (b == 0) b = 16; if (b == 16) p += 2; break; default: break; } break; default: p++; ret = 0; goto label_return; } } if (b == 0) b = 10; /* Convert. */ ret = 0; while ((*p >= '0' && *p <= '9' && (digit = *p - '0') < b) || (*p >= 'A' && *p <= 'Z' && (digit = 10 + *p - 'A') < b) || (*p >= 'a' && *p <= 'z' && (digit = 10 + *p - 'a') < b)) { uintmax_t pret = ret; ret *= b; ret += digit; if (ret < pret) { /* Overflow. */ set_errno(ERANGE); ret = UINTMAX_MAX; goto label_return; } p++; } if (neg) ret = -ret; if (p == ns) { /* No conversion performed. */ set_errno(EINVAL); ret = UINTMAX_MAX; goto label_return; } label_return: if (endptr != NULL) { if (p == ns) { /* No characters were converted. */ *endptr = (char *)nptr; } else *endptr = (char *)p; } return (ret); } static char * u2s(uintmax_t x, unsigned base, bool uppercase, char *s, size_t *slen_p) { unsigned i; i = U2S_BUFSIZE - 1; s[i] = '\0'; switch (base) { case 10: do { i--; s[i] = "0123456789"[x % (uint64_t)10]; x /= (uint64_t)10; } while (x > 0); break; case 16: { const char *digits = (uppercase) ? "0123456789ABCDEF" : "0123456789abcdef"; do { i--; s[i] = digits[x & 0xf]; x >>= 4; } while (x > 0); break; } default: { const char *digits = (uppercase) ? "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ" : "0123456789abcdefghijklmnopqrstuvwxyz"; assert(base >= 2 && base <= 36); do { i--; s[i] = digits[x % (uint64_t)base]; x /= (uint64_t)base; } while (x > 0); }} *slen_p = U2S_BUFSIZE - 1 - i; return (&s[i]); } static char * d2s(intmax_t x, char sign, char *s, size_t *slen_p) { bool neg; if ((neg = (x < 0))) x = -x; s = u2s(x, 10, false, s, slen_p); if (neg) sign = '-'; switch (sign) { case '-': if (!neg) break; /* Fall through. 
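[Editorial note: a usage sketch for the in-house malloc_strtoumax() defined above (an internal function, not a public API). With base 0 it auto-detects "0x" and leading-"0" prefixes, like strtoumax(3), without allocating:]

static void
strtoumax_example(void)
{
	char *end;
	uintmax_t v;

	v = malloc_strtoumax("0x1f", &end, 0);	/* v == 31, *end == '\0' */
	v = malloc_strtoumax("0755", &end, 0);	/* v == 493 (octal) */
	v = malloc_strtoumax("12cm", &end, 10);	/* v == 12, end -> "cm" */
	(void)v;
}
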
*/ case ' ': case '+': s--; (*slen_p)++; *s = sign; break; default: not_reached(); } return (s); } static char * o2s(uintmax_t x, bool alt_form, char *s, size_t *slen_p) { s = u2s(x, 8, false, s, slen_p); if (alt_form && *s != '0') { s--; (*slen_p)++; *s = '0'; } return (s); } static char * x2s(uintmax_t x, bool alt_form, bool uppercase, char *s, size_t *slen_p) { s = u2s(x, 16, uppercase, s, slen_p); if (alt_form) { s -= 2; (*slen_p) += 2; memcpy(s, uppercase ? "0X" : "0x", 2); } return (s); } -int +size_t malloc_vsnprintf(char *str, size_t size, const char *format, va_list ap) { - int ret; size_t i; const char *f; #define APPEND_C(c) do { \ if (i < size) \ str[i] = (c); \ i++; \ } while (0) #define APPEND_S(s, slen) do { \ if (i < size) { \ size_t cpylen = (slen <= size - i) ? slen : size - i; \ memcpy(&str[i], s, cpylen); \ } \ i += slen; \ } while (0) #define APPEND_PADDED_S(s, slen, width, left_justify) do { \ /* Left padding. */ \ size_t pad_len = (width == -1) ? 0 : ((slen < (size_t)width) ? \ (size_t)width - slen : 0); \ if (!left_justify && pad_len != 0) { \ size_t j; \ for (j = 0; j < pad_len; j++) \ APPEND_C(' '); \ } \ /* Value. */ \ APPEND_S(s, slen); \ /* Right padding. */ \ if (left_justify && pad_len != 0) { \ size_t j; \ for (j = 0; j < pad_len; j++) \ APPEND_C(' '); \ } \ } while (0) #define GET_ARG_NUMERIC(val, len) do { \ switch (len) { \ case '?': \ val = va_arg(ap, int); \ break; \ case '?' | 0x80: \ val = va_arg(ap, unsigned int); \ break; \ case 'l': \ val = va_arg(ap, long); \ break; \ case 'l' | 0x80: \ val = va_arg(ap, unsigned long); \ break; \ case 'q': \ val = va_arg(ap, long long); \ break; \ case 'q' | 0x80: \ val = va_arg(ap, unsigned long long); \ break; \ case 'j': \ val = va_arg(ap, intmax_t); \ break; \ case 'j' | 0x80: \ val = va_arg(ap, uintmax_t); \ break; \ case 't': \ val = va_arg(ap, ptrdiff_t); \ break; \ case 'z': \ val = va_arg(ap, ssize_t); \ break; \ case 'z' | 0x80: \ val = va_arg(ap, size_t); \ break; \ case 'p': /* Synthetic; used for %p. */ \ val = va_arg(ap, uintptr_t); \ break; \ default: \ not_reached(); \ val = 0; \ } \ } while (0) i = 0; f = format; while (true) { switch (*f) { case '\0': goto label_out; case '%': { bool alt_form = false; bool left_justify = false; bool plus_space = false; bool plus_plus = false; int prec = -1; int width = -1; unsigned char len = '?'; + char *s; + size_t slen; f++; /* Flags. */ while (true) { switch (*f) { case '#': assert(!alt_form); alt_form = true; break; case '-': assert(!left_justify); left_justify = true; break; case ' ': assert(!plus_space); plus_space = true; break; case '+': assert(!plus_plus); plus_plus = true; break; default: goto label_width; } f++; } /* Width. */ label_width: switch (*f) { case '*': width = va_arg(ap, int); f++; if (width < 0) { left_justify = true; width = -width; } break; case '0': case '1': case '2': case '3': case '4': case '5': case '6': case '7': case '8': case '9': { uintmax_t uwidth; set_errno(0); uwidth = malloc_strtoumax(f, (char **)&f, 10); assert(uwidth != UINTMAX_MAX || get_errno() != ERANGE); width = (int)uwidth; break; } default: break; } /* Width/precision separator. */ if (*f == '.') f++; else goto label_length; /* Precision. 
*/ switch (*f) { case '*': prec = va_arg(ap, int); f++; break; case '0': case '1': case '2': case '3': case '4': case '5': case '6': case '7': case '8': case '9': { uintmax_t uprec; set_errno(0); uprec = malloc_strtoumax(f, (char **)&f, 10); assert(uprec != UINTMAX_MAX || get_errno() != ERANGE); prec = (int)uprec; break; } default: break; } /* Length. */ label_length: switch (*f) { case 'l': f++; if (*f == 'l') { len = 'q'; f++; } else len = 'l'; break; case 'q': case 'j': case 't': case 'z': len = *f; f++; break; default: break; } /* Conversion specifier. */ switch (*f) { - char *s; - size_t slen; case '%': /* %% */ APPEND_C(*f); f++; break; case 'd': case 'i': { intmax_t val JEMALLOC_CC_SILENCE_INIT(0); char buf[D2S_BUFSIZE]; GET_ARG_NUMERIC(val, len); s = d2s(val, (plus_plus ? '+' : (plus_space ? ' ' : '-')), buf, &slen); APPEND_PADDED_S(s, slen, width, left_justify); f++; break; } case 'o': { uintmax_t val JEMALLOC_CC_SILENCE_INIT(0); char buf[O2S_BUFSIZE]; GET_ARG_NUMERIC(val, len | 0x80); s = o2s(val, alt_form, buf, &slen); APPEND_PADDED_S(s, slen, width, left_justify); f++; break; } case 'u': { uintmax_t val JEMALLOC_CC_SILENCE_INIT(0); char buf[U2S_BUFSIZE]; GET_ARG_NUMERIC(val, len | 0x80); s = u2s(val, 10, false, buf, &slen); APPEND_PADDED_S(s, slen, width, left_justify); f++; break; } case 'x': case 'X': { uintmax_t val JEMALLOC_CC_SILENCE_INIT(0); char buf[X2S_BUFSIZE]; GET_ARG_NUMERIC(val, len | 0x80); s = x2s(val, alt_form, *f == 'X', buf, &slen); APPEND_PADDED_S(s, slen, width, left_justify); f++; break; } case 'c': { unsigned char val; char buf[2]; assert(len == '?' || len == 'l'); assert_not_implemented(len != 'l'); val = va_arg(ap, int); buf[0] = val; buf[1] = '\0'; APPEND_PADDED_S(buf, 1, width, left_justify); f++; break; } case 's': assert(len == '?' || len == 'l'); assert_not_implemented(len != 'l'); s = va_arg(ap, char *); slen = (prec < 0) ? strlen(s) : (size_t)prec; APPEND_PADDED_S(s, slen, width, left_justify); f++; break; case 'p': { uintmax_t val; char buf[X2S_BUFSIZE]; GET_ARG_NUMERIC(val, 'p'); s = x2s(val, true, false, buf, &slen); APPEND_PADDED_S(s, slen, width, left_justify); f++; break; } default: not_reached(); } break; } default: { APPEND_C(*f); f++; break; }} } label_out: if (i < size) str[i] = '\0'; else str[size - 1] = '\0'; - assert(i < INT_MAX); - ret = (int)i; #undef APPEND_C #undef APPEND_S #undef APPEND_PADDED_S #undef GET_ARG_NUMERIC - return (ret); + return (i); } JEMALLOC_FORMAT_PRINTF(3, 4) -int +size_t malloc_snprintf(char *str, size_t size, const char *format, ...) { - int ret; + size_t ret; va_list ap; va_start(ap, format); ret = malloc_vsnprintf(str, size, format, ap); va_end(ap); return (ret); } void malloc_vcprintf(void (*write_cb)(void *, const char *), void *cbopaque, const char *format, va_list ap) { char buf[MALLOC_PRINTF_BUFSIZE]; if (write_cb == NULL) { /* * The caller did not provide an alternate write_cb callback * function, so use the default one. malloc_write() is an * inline function, so use malloc_message() directly here. */ write_cb = (je_malloc_message != NULL) ? je_malloc_message : wrtmessage; cbopaque = NULL; } malloc_vsnprintf(buf, sizeof(buf), format, ap); write_cb(cbopaque, buf); } /* * Print to a callback function in such a way as to (hopefully) avoid memory * allocation. */ JEMALLOC_FORMAT_PRINTF(3, 4) void malloc_cprintf(void (*write_cb)(void *, const char *), void *cbopaque, const char *format, ...) 
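[Editorial note: with malloc_vsnprintf()/malloc_snprintf() now returning size_t (the would-be output length) instead of int, truncation checks read like snprintf(3) with no signed-overflow caveat. A short sketch against the internal API above:]

static void
snprintf_example(void)
{
	char buf[32];
	size_t len;

	len = malloc_snprintf(buf, sizeof(buf), "retained: %zu bytes",
	    (size_t)123456);
	if (len >= sizeof(buf)) {
		/* Output was truncated; len is the length that was needed. */
	}
}
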
{ va_list ap; va_start(ap, format); malloc_vcprintf(write_cb, cbopaque, format, ap); va_end(ap); } /* Print to stderr in such a way as to avoid memory allocation. */ JEMALLOC_FORMAT_PRINTF(1, 2) void malloc_printf(const char *format, ...) { va_list ap; va_start(ap, format); malloc_vcprintf(NULL, NULL, format, ap); va_end(ap); } /* * Restore normal assertion macros, in order to make it possible to compile all * C files as a single concatenation. */ #undef assert #undef not_reached #undef not_implemented #include "jemalloc/internal/assert.h" Index: head/contrib/jemalloc/src/witness.c =================================================================== --- head/contrib/jemalloc/src/witness.c (nonexistent) +++ head/contrib/jemalloc/src/witness.c (revision 299587) @@ -0,0 +1,136 @@ +#define JEMALLOC_WITNESS_C_ +#include "jemalloc/internal/jemalloc_internal.h" + +void +witness_init(witness_t *witness, const char *name, witness_rank_t rank, + witness_comp_t *comp) +{ + + witness->name = name; + witness->rank = rank; + witness->comp = comp; +} + +#ifdef JEMALLOC_JET +#undef witness_lock_error +#define witness_lock_error JEMALLOC_N(n_witness_lock_error) +#endif +void +witness_lock_error(const witness_list_t *witnesses, const witness_t *witness) +{ + witness_t *w; + + malloc_printf(": Lock rank order reversal:"); + ql_foreach(w, witnesses, link) { + malloc_printf(" %s(%u)", w->name, w->rank); + } + malloc_printf(" %s(%u)\n", witness->name, witness->rank); + abort(); +} +#ifdef JEMALLOC_JET +#undef witness_lock_error +#define witness_lock_error JEMALLOC_N(witness_lock_error) +witness_lock_error_t *witness_lock_error = JEMALLOC_N(n_witness_lock_error); +#endif + +#ifdef JEMALLOC_JET +#undef witness_owner_error +#define witness_owner_error JEMALLOC_N(n_witness_owner_error) +#endif +void +witness_owner_error(const witness_t *witness) +{ + + malloc_printf(": Should own %s(%u)\n", witness->name, + witness->rank); + abort(); +} +#ifdef JEMALLOC_JET +#undef witness_owner_error +#define witness_owner_error JEMALLOC_N(witness_owner_error) +witness_owner_error_t *witness_owner_error = JEMALLOC_N(n_witness_owner_error); +#endif + +#ifdef JEMALLOC_JET +#undef witness_not_owner_error +#define witness_not_owner_error JEMALLOC_N(n_witness_not_owner_error) +#endif +void +witness_not_owner_error(const witness_t *witness) +{ + + malloc_printf(": Should not own %s(%u)\n", witness->name, + witness->rank); + abort(); +} +#ifdef JEMALLOC_JET +#undef witness_not_owner_error +#define witness_not_owner_error JEMALLOC_N(witness_not_owner_error) +witness_not_owner_error_t *witness_not_owner_error = + JEMALLOC_N(n_witness_not_owner_error); +#endif + +#ifdef JEMALLOC_JET +#undef witness_lockless_error +#define witness_lockless_error JEMALLOC_N(n_witness_lockless_error) +#endif +void +witness_lockless_error(const witness_list_t *witnesses) +{ + witness_t *w; + + malloc_printf(": Should not own any locks:"); + ql_foreach(w, witnesses, link) { + malloc_printf(" %s(%u)", w->name, w->rank); + } + malloc_printf("\n"); + abort(); +} +#ifdef JEMALLOC_JET +#undef witness_lockless_error +#define witness_lockless_error JEMALLOC_N(witness_lockless_error) +witness_lockless_error_t *witness_lockless_error = + JEMALLOC_N(n_witness_lockless_error); +#endif + +void +witnesses_cleanup(tsd_t *tsd) +{ + + witness_assert_lockless(tsd_tsdn(tsd)); + + /* Do nothing. */ +} + +void +witness_fork_cleanup(tsd_t *tsd) +{ + + /* Do nothing. 
*/ +} + +void +witness_prefork(tsd_t *tsd) +{ + + tsd_witness_fork_set(tsd, true); +} + +void +witness_postfork_parent(tsd_t *tsd) +{ + + tsd_witness_fork_set(tsd, false); +} + +void +witness_postfork_child(tsd_t *tsd) +{ +#ifndef JEMALLOC_MUTEX_INIT_CB + witness_list_t *witnesses; + + witnesses = tsd_witnessesp_get(tsd); + ql_new(witnesses); +#endif + tsd_witness_fork_set(tsd, false); +} Property changes on: head/contrib/jemalloc/src/witness.c ___________________________________________________________________ Added: svn:eol-style ## -0,0 +1 ## +native \ No newline at end of property Added: svn:keywords ## -0,0 +1 ## +FreeBSD=%H \ No newline at end of property Added: svn:mime-type ## -0,0 +1 ## +text/plain \ No newline at end of property Index: head/lib/libc/stdlib/jemalloc/Makefile.inc =================================================================== --- head/lib/libc/stdlib/jemalloc/Makefile.inc (revision 299586) +++ head/lib/libc/stdlib/jemalloc/Makefile.inc (revision 299587) @@ -1,49 +1,49 @@ # $FreeBSD$ .PATH: ${LIBC_SRCTOP}/stdlib/jemalloc JEMALLOCSRCS:= jemalloc.c arena.c atomic.c base.c bitmap.c chunk.c \ chunk_dss.c chunk_mmap.c ckh.c ctl.c extent.c hash.c huge.c mb.c \ mutex.c nstime.c pages.c prng.c prof.c quarantine.c rtree.c stats.c \ - tcache.c ticker.c tsd.c util.c + tcache.c ticker.c tsd.c util.c witness.c SYM_MAPS+=${LIBC_SRCTOP}/stdlib/jemalloc/Symbol.map CFLAGS+=-I${LIBC_SRCTOP}/../../contrib/jemalloc/include .for src in ${JEMALLOCSRCS} MISRCS+=jemalloc_${src} CLEANFILES+=jemalloc_${src} jemalloc_${src}: ${LIBC_SRCTOP}/../../contrib/jemalloc/src/${src} .NOMETA ln -sf ${.ALLSRC} ${.TARGET} .endfor MAN+=jemalloc.3 CLEANFILES+=jemalloc.3 jemalloc.3: ${LIBC_SRCTOP}/../../contrib/jemalloc/doc/jemalloc.3 .NOMETA ln -sf ${.ALLSRC} ${.TARGET} MLINKS+= \ jemalloc.3 malloc.3 \ jemalloc.3 calloc.3 \ jemalloc.3 posix_memalign.3 \ jemalloc.3 aligned_alloc.3 \ jemalloc.3 realloc.3 \ jemalloc.3 free.3 \ jemalloc.3 malloc_usable_size.3 \ jemalloc.3 malloc_stats_print.3 \ jemalloc.3 mallctl.3 \ jemalloc.3 mallctlnametomib.3 \ jemalloc.3 mallctlbymib.3 \ jemalloc.3 mallocx.3 \ jemalloc.3 rallocx.3 \ jemalloc.3 xallocx.3 \ jemalloc.3 sallocx.3 \ jemalloc.3 dallocx.3 \ jemalloc.3 sdallocx.3 \ jemalloc.3 nallocx.3 \ jemalloc.3 malloc.conf.5 .if defined(MALLOC_PRODUCTION) CFLAGS+= -DMALLOC_PRODUCTION .endif
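[Editorial note: the new witness machinery assigns each mutex a rank and, in debug builds, reports "Lock rank order reversal" when locks are acquired out of rank order. An illustrative sketch using witness_init() from the file above; the names and ranks are invented:]

static witness_t example_outer_witness;
static witness_t example_inner_witness;

static void
witness_example_init(void)
{

	/* Lower-ranked witnesses must be acquired first; comp may be NULL. */
	witness_init(&example_outer_witness, "example_outer", 10, NULL);
	witness_init(&example_inner_witness, "example_inner", 11, NULL);
	/*
	 * Acquiring the rank-10 witness while the rank-11 witness is held
	 * would trip witness_lock_error() in a debug build.
	 */
}
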