Index: projects/import-googletest-1.8.1/UPDATING
===================================================================
--- projects/import-googletest-1.8.1/UPDATING	(revision 344270)
+++ projects/import-googletest-1.8.1/UPDATING	(revision 344271)
@@ -1,1960 +1,1966 @@
Updating Information for FreeBSD current users.

This file is maintained and copyrighted by M. Warner Losh. See end of
file for further details. For commonly done items, please see the
COMMON ITEMS: section later in the file. These instructions assume that
you basically know what you are doing. If not, then please consult the
FreeBSD handbook:

	https://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/makeworld.html

Items affecting the ports and packages system can be found in
/usr/ports/UPDATING. Please read that file before running portupgrade.

NOTE: FreeBSD has switched from gcc to clang. If you have trouble
bootstrapping from older versions of FreeBSD, try WITHOUT_CLANG and
WITH_GCC to bootstrap to the tip of head, and then rebuild without this
option. The bootstrap process from older versions of current across the
gcc/clang cutover is a bit fragile.

NOTE TO PEOPLE WHO THINK THAT FreeBSD 13.x IS SLOW:
	FreeBSD 13.x has many debugging features turned on, in both the
	kernel and userland. These features attempt to detect incorrect
	use of system primitives, and encourage loud failure through
	extra sanity checking and fail stop semantics. They also
	substantially impact system performance. If you want to do
	performance measurement, benchmarking, and optimization, you'll
	want to turn them off. This includes various WITNESS-related
	kernel options, INVARIANTS, malloc debugging flags in userland,
	and various verbose features in the kernel. Many developers
	choose to disable these features on build machines to maximize
	performance. (To completely disable malloc debugging, define
	MALLOC_PRODUCTION in /etc/make.conf, or to merely disable the
	most expensive debugging functionality run
	"ln -s 'abort:false,junk:false' /etc/malloc.conf".)

20190131:
	Iflib is no longer unconditionally compiled into the kernel.
	Drivers that use iflib and are statically compiled into the
	kernel now require the 'device iflib' config option. For the
	same drivers loaded as modules on kernels not having 'device
	iflib', the iflib.ko module is loaded automatically.

+20190125:
+	The IEEE80211_AMPDU_AGE and AH_SUPPORT_AR5416 kernel configuration
+	options no longer exist since r343219 and r343427 respectively;
+	nothing uses them, so they should simply be removed from custom
+	kernel config files.
+
20181230:
	r342635 changes the way efibootmgr(8) works by requiring users
	to add the -b (bootnum) parameter for commands where the bootnum
	was previously specified with each option. For example
	'efibootmgr -B 0001' is now 'efibootmgr -B -b 0001'.

20181220:
	r342286 modifies the NFSv4 server so that it obeys
	vfs.nfsd.nfs_privport in the same way as it is applied to NFSv2
	and 3. This implies that NFSv4 servers that have
	vfs.nfsd.nfs_privport set will only allow mounts from clients
	using a reserved port#. Since both the FreeBSD and Linux NFSv4
	clients use reserved port#s by default, this should not affect
	most NFSv4 mounts.

20181219:
	The XLP config has been removed. We can't support 64-bit atomics
	in this kernel because it is running in 32-bit mode. XLP users
	must transition to running a 64-bit kernel (XLP64 or XLPN32).

	The mips GXEMUL support has been removed from FreeBSD. MALTA* +
	qemu is the preferred emulator today and we don't need two
	different ones.
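
	For example (an illustrative command, not part of the original
	entry), a MALTA kernel for use with qemu can be cross-built
	from another architecture with:

	# make buildkernel TARGET=mips TARGET_ARCH=mips KERNCONF=MALTA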

	The old sibyte / swarm / Broadcom BCM1250 support has been
	removed from the mips port.

20181211:
	Clang, llvm, lld, lldb, compiler-rt and libc++ have been
	upgraded to 7.0.1. Please see the 20141231 entry below for
	information about prerequisites and upgrading, if you are not
	already using clang 3.5.0 or higher.

20181211:
	Remove the timed and netdate programs from the base tree.
	Setting the time with these daemons has been obsolete for over a
	decade.

20181126:
	On amd64, arm64 and armv7 (architectures that install LLVM's
	ld.lld linker as /usr/bin/ld) GNU ld is no longer installed as
	ld.bfd, as it produces broken binaries when ifuncs are in use.
	Users needing GNU ld should install the binutils port or
	package.

20181123:
	The BSD crtbegin and crtend code has been enabled by default. It
	has had extensive testing on amd64, arm64, and i386. It can be
	disabled by building a world with -DWITHOUT_BSD_CRTBEGIN.

20181115:
	The set of CTM commands (ctm, ctm_smail, ctm_rmail, ctm_dequeue)
	has been converted to a port (misc/ctm) and will be removed from
	FreeBSD-13. It is available as a package (ctm) for all supported
	FreeBSD versions.

20181110:
	The default newsyslog.conf(5) file has been changed to only
	include files in /etc/newsyslog.conf.d/ and
	/usr/local/etc/newsyslog.conf.d/ if the filenames end in '.conf'
	and do not begin with a '.'. You should check that the
	configuration files in these two directories match this naming
	convention. You can verify which configuration files are being
	included using the command:

	$ newsyslog -Nrv

20181015:
	Ports for the DRM modules have been simplified. Now, amd64 users
	should just install the drm-kmod port. All others should install
	drm-legacy-kmod.

	Graphics hardware that's newer than about 2010 usually works
	with drm-kmod. For hardware older than 2013, however, some users
	will need to use drm-legacy-kmod if drm-kmod doesn't work for
	them. Hardware older than 2008 usually only works in
	drm-legacy-kmod. The graphics team can only commit to hardware
	made since 2013 due to the complexity of the market and the
	difficulty of testing all the older cards effectively. If you
	have hardware supported by drm-kmod, you are strongly encouraged
	to use that as you will get better support.

	Other than KPI chasing, drm-legacy-kmod will not be updated. As
	outlined elsewhere, the drm and drm2 modules will be eliminated
	from the src base soon (with a limited exception for arm).
	Please update to the package asap and report any issues to
	x11@freebsd.org.

	Generally, anybody using the drm*-kmod packages should add
	WITHOUT_DRM_MODULE=t and WITHOUT_DRM2_MODULE=t to avoid nasty
	cross-threading surprises, especially with automatic driver
	loading from X11 startup. These will become the defaults in
	13-current shortly.

20181012:
	The ixlv(4) driver has been renamed to iavf(4). As a
	consequence, custom kernel and module loading configuration
	files must be updated accordingly. Moreover, interfaces
	previously presented as ixlvN to the system are now exposed as
	iavfN and network configuration files must be adjusted as
	necessary.

20181009:
	OpenSSL has been updated to version 1.1.1. This update includes
	various API changes throughout the base system. It is important
	to rebuild third-party software after upgrading. The value of
	__FreeBSD_version has been bumped accordingly.

20181006:
	The legacy DRM modules and drivers have now been added to the
	loader's module blacklist, in favor of loading them with
	kld_list in rc.conf(5).
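
	For example (the module name is purely illustrative; use the one
	that matches your hardware), the rc.conf(5) line to load a DRM
	KMS module at boot looks like:

	kld_list="i915kms"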

	The module blacklist may be overridden with the loader.conf(5)
	'module_blacklist' variable, but loading them via rc.conf(5) is
	strongly encouraged.

20181002:
	The cam(4) based nda(4) driver will be used over nvd(4) by
	default on powerpc64. You may set 'options NVME_USE_NVD=1' in
	your kernel conf or the loader tunable 'hw.nvme.use_nvd=1' if
	you wish to use the existing driver. Make sure to edit
	/boot/etc/kboot.conf and fstab to use the nda device name.

20180913:
	Reproducible build mode is now on by default, in preparation for
	FreeBSD 12.0. This eliminates build metadata such as the user,
	host, and time from the kernel (and uname), unless the working
	tree corresponds to a modified checkout from a version control
	system. The previous behavior can be obtained by setting the
	/etc/src.conf knob WITHOUT_REPRODUCIBLE_BUILD.

20180826:
	The Yarrow CSPRNG has been removed from the kernel as it has not
	been supported by its designers since at least 2003. Fortuna has
	been the default since FreeBSD-11.

20180822:
	devctl freeze/thaw have gone into the tree, the rc scripts have
	been updated to use them and devmatch has been changed. You
	should update kernel, userland and rc scripts all at the same
	time.

20180818:
	The default interpreter has been switched from 4th to Lua.
	LOADER_DEFAULT_INTERP, documented in build(7), will override the
	default interpreter. If you have custom FORTH code you will need
	to set LOADER_DEFAULT_INTERP=4th (valid values are 4th, lua or
	simp) in src.conf for the build. This will create default hard
	links between loader and loader_4th instead of loader and
	loader_lua, the new default. If you are using UEFI it will
	create the proper hard link to loader.efi.

	bhyve uses userboot.so. It remains 4th-only until some issues
	regarding coexisting with multiple versions of FreeBSD are
	resolved.

20180815:
	ls(1) now respects the COLORTERM environment variable used in
	other systems and software to indicate that a colored terminal
	is both supported and desired. If ls(1) is suddenly emitting
	colors, they may be disabled again by either removing the
	unwanted COLORTERM from your environment, or using
	`ls --color=never`. The ls(1) specific CLICOLOR may not be
	observed in a future release.

20180808:
	The default pager for most commands has been changed to "less".
	To restore the old behavior, set PAGER="more" and
	MANPAGER="more -s" in your environment.

20180731:
	The jedec_ts(4) driver has been removed. A superset of its
	functionality is available in the jedec_dimm(4) driver, and the
	manpage for that driver includes migration instructions. If you
	have "device jedec_ts" in your kernel configuration file, it
	must be removed.

20180730:
	amd64/GENERIC now has EFI runtime services, EFIRT, enabled by
	default. This should have no effect if the kernel is booted via
	BIOS/legacy boot. EFIRT may be disabled via a loader tunable,
	efi.rt.disabled, if a system has buggy firmware that prevents a
	successful boot due to use of runtime services.

20180727:
	Atmel AT91RM9200 and AT91SAM9, Cavium CNS 11xx and XScale
	support has been removed from the tree. These ports were
	obsolete and/or known to be broken for many years.

20180723:
	loader.efi has been augmented to participate more fully in the
	UEFI boot manager protocol. loader.efi will now look at the
	BootXXXX environment variable to determine if a specific kernel
	or root partition was specified. XXXX is derived from
	BootCurrent. efibootmgr(8) manages these standard UEFI
	variables.

20180720:
	zfsloader's functionality has now been folded into loader.
	zfsloader is no longer necessary once you've updated your boot
	blocks. For a transition period, we will install a hardlink for
	zfsloader to loader to allow a smooth transition until the boot
	blocks can be updated (hard link because old zfs boot blocks
	don't understand symlinks).

20180719:
	ARM64 now has efifb support. If you want to have a serial
	console on your arm64 board when a screen is connected and the
	bootloader
-	setup a frambuffer for us to use, just add :
+	setup a framebuffer for us to use, just add :

	boot_serial=YES
	boot_multicons=YES

	in /boot/loader.conf. For Raspberry Pi 3 (RPI) users, this is
	needed even if you don't have a screen connected, as the
	firmware will set up a framebuffer area that u-boot will expose
	as an EFI framebuffer.

20180719:
	New uid:gid added, ntpd:ntpd (123:123). Be sure to run
	mergemaster or take steps to update /etc/passwd before doing
	installworld on existing systems. Do not skip the
	"mergemaster -Fp" step before installworld, as described in the
	update procedures near the bottom of this document. Also,
	rc.d/ntpd now starts ntpd(8) as user ntpd if the new mac_ntpd(4)
	policy is available, unless ntpd_flags or the ntp config file
	contain options that change file/dir locations. When such
	options (e.g., "statsdir" or "crypto") are used, ntpd can still
	be run as non-root by setting ntpd_user=ntpd in rc.conf, after
	taking steps to ensure that all required files/dirs are
	accessible by the ntpd user.

20180717:
	Big endian arm support has been removed.

20180711:
	The static environment setup in kernel configs is no longer
	mutually exclusive with the loader(8) environment by default. In
	order to restore the previous default behavior of disabling the
	loader(8) environment if a static environment is present, you
	must specify loader_env.disabled=1 in the static environment.

20180705:
	The ABI of syscalls used by management tools like sockstat and
	netstat has been broken to allow 32-bit binaries to work on
	64-bit kernels without modification. These programs will need to
	match the kernel in order to function. External programs may
	require minor modifications to accommodate a change of type in
	structures from pointers to 64-bit virtual addresses.

20180702:
	On i386 and amd64 atomics are now inlined. Out of tree modules
	using atomics will need to be rebuilt.

20180701:
	The '%I' format in the kern.corefile sysctl limits the number of
	core files that a process can generate to the number stored in
	the debug.ncores sysctl. The '%I' format is replaced by the
	single digit index. Previously, if all indexes were taken, the
	kernel would overwrite the core file with the highest index.
	Now the system will create a new core file if there is a free
	index, or overwrite the oldest one if all slots are taken.

20180630:
	Clang, llvm, lld, lldb, compiler-rt and libc++ have been
	upgraded to 6.0.1. Please see the 20141231 entry below for
	information about prerequisites and upgrading, if you are not
	already using clang 3.5.0 or higher.

20180628:
	r335753 introduced a new quoting method. However,
	etc/devd/devmatch.conf needed to be changed to work with it.
	This change was made with r335763 and requires a mergemaster /
	etcupdate / etc. to update the installed file.

20180612:
	r334930 changed the interface between the NFS modules, so they
	all need to be rebuilt. r335018 did a __FreeBSD_version bump for
	this.

20180530:
	As of r334391 lld is the default amd64 system linker; it is
	installed as /usr/bin/ld. Kernel build workarounds (see 20180510
	entry) are no longer necessary.
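
	To confirm which linker is installed after updating (a quick
	illustrative check; lld identifies itself as "LLD" in its
	version banner):

	$ ld --version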

20180530:
	The kernel / userland interface for devinfo changed, so you'll
	need a new kernel and userland as a pair for it to work
	(rebuilding lib/libdevinfo is all that's required). When kernel
	and userland are mismatched, devinfo and devmatch will not work,
	but everything else will.

20180523:
	The on-disk format for hwpmc callchain records has changed to
	include the threadid corresponding to a given record. This
	changes the field offsets and thus requires that libpmcstat be
	rebuilt before using a kernel later than r334108.

20180517:
	The vxge(4) driver has been removed. This driver was introduced
	into HEAD one week before Exar left the Ethernet market and is
	not known to be used. If you have device vxge in your kernel
	config file it must be removed.

20180510:
	The amd64 kernel now requires a ld that supports ifunc to
	produce a working kernel, either lld or a newer binutils. lld is
	built by default on amd64, and the 'buildkernel' target uses it
	automatically. However, it is not the default linker, so
	building the kernel the traditional way requires LD=ld.lld on
	the command line (or LD=/usr/local/bin/ld for the binutils
	port/package). lld will soon be default, and this requirement
	will go away.

	NOTE: As of r334391 lld is the default system linker on amd64,
	and no workaround is necessary.

20180508:
	The nxge(4) driver has been removed. This driver was for PCI-X
	10g cards made by s2io/Neterion. The company was acquired by
	Exar and no longer sells or supports Ethernet products. If you
	have device nxge in your kernel config file it must be removed.

20180504:
	The tz database (tzdb) has been updated to 2018e. This version
	more correctly models time stamps in time zones with negative
	DST such as Europe/Dublin (from 1971 on), Europe/Prague
	(1946/7), and Africa/Windhoek (1994/2017). This does not affect
	the UT offsets, only time zone abbreviations and the tm_isdst
	flag.

20180502:
	The ixgb(4) driver has been removed. This driver was for an
	early and uncommon legacy PCI 10GbE card based on a single ASIC,
	the Intel 82597EX. Intel quickly shifted to the long-lived ixgbe
	family. If you have device ixgb in your kernel config file it
	must be removed.

20180501:
	The lmc(4) driver has been removed. This was a WAN interface
	card that was already reportedly rare in 2003, and had an
	ambiguous license. If you have device lmc in your kernel config
	file it must be removed.

20180413:
	Support for Arcnet networks has been removed. If you have device
	arcnet or device cm in your kernel config file they must be
	removed.

20180411:
	Support for FDDI networks has been removed. If you have device
	fddi or device fpa in your kernel config file they must be
	removed.

20180406:
	In addition to supporting RFC 3164 formatted messages, the
	syslogd(8) service is now capable of parsing RFC 5424 formatted
	log messages. The main benefit of using RFC 5424 is that clients
	may now send log messages with timestamps containing year
	numbers, microseconds and time zone offsets.

	Similarly, the syslog(3) C library function has been altered to
	send RFC 5424 formatted messages to the local system logging
	daemon. On systems using syslogd(8), this change should have no
	negative impact, as long as syslogd(8) and the C library are
	updated at the same time. On systems using a different system
	logging daemon, it may be necessary to make configuration
	adjustments, depending on the software used.

	When using syslog-ng, add the 'syslog-protocol' flag to local
	input sources to enable parsing of RFC 5424 formatted messages:

	source src {
		unix-dgram("/var/run/log" flags(syslog-protocol));
	}

	When using rsyslog, disable the 'SysSock.UseSpecialParser'
	option of the 'imuxsock' module to let messages be processed by
	the regular RFC 3164/5424 parsing pipeline:

	module(load="imuxsock" SysSock.UseSpecialParser="off")

	Do note that these changes only affect communication between
	local applications and syslogd(8). The format that syslogd(8)
	uses to store messages on disk or forward messages to other
	systems remains unchanged. syslogd(8) still uses RFC 3164 for
	these purposes. Options to customize this behaviour will be
	added in the future. Utilities that process log files stored in
	/var/log are thus expected to continue to function as before.

	__FreeBSD_version has been incremented to 1200061 to denote this
	change.

20180328:
	Support for token ring networks has been removed. If you have
	"device token" in your kernel config you should remove it. No
	device drivers supported token ring.

20180323:
	makefs was modified to be able to tag ISO9660 El Torito boot
	catalog entries as EFI instead of overloading the i386 tag as
	done previously. The amd64 mkisoimages.sh script used to build
	amd64 ISO images for release was updated to use this. This may
	mean that makefs must be updated before "make cdrom" can be run
	in the release directory. This should be as simple as:

	$ cd $SRCDIR/usr.sbin/makefs
	$ make depend all install

20180212:
	FreeBSD boot loader enhanced with Lua scripting. It's purely
	opt-in for now by building WITH_LOADER_LUA and WITHOUT_FORTH in
	/etc/src.conf. Co-existence for the transition period will come
	shortly. Booting is a complex environment and test coverage for
	Lua-enabled loaders has been thin, so it would be prudent to
	assume it might not work and make provisions for backup boot
	methods.

20180211:
	devmatch functionality has been turned on in devd. It will
	automatically load drivers for unattached devices. This may
	cause unexpected drivers to be loaded. Please report any
	problems to current@ and imp@freebsd.org.

20180114:
	Clang, llvm, lld, lldb, compiler-rt and libc++ have been
	upgraded to 6.0.0. Please see the 20141231 entry below for
	information about prerequisites and upgrading, if you are not
	already using clang 3.5.0 or higher.

20180110:
	LLVM's lld linker is now used as the FreeBSD/amd64 bootstrap
	linker. This means it is used to link the kernel and userland
	libraries and executables, but is not yet installed as
	/usr/bin/ld by default. To revert to ld.bfd as the bootstrap
	linker, in /etc/src.conf set

	WITHOUT_LLD_BOOTSTRAP=yes

20180110:
	On i386, pmtimer has been removed. Its functionality has been
	folded into apm. It was a no-op on ACPI in current for a while
	now (but was still needed on i386 in FreeBSD 11 and earlier).
	Users may need to remove it from kernel config files.

20180104:
	The use of RSS hash from the network card, aka flowid, has been
	disabled by default for lagg(4) as it's currently incompatible
	with the lacp and loadbalance protocols. This can be re-enabled
	by setting the following in loader.conf:

	net.link.lagg.default_use_flowid="1"

20180102:
	The SW_WATCHDOG option is no longer necessary to enable the
	hardclock-based software watchdog if no hardware watchdog is
	configured. As before, SW_WATCHDOG will cause the software
	watchdog to be enabled even if a hardware watchdog is
	configured.

20171215:
	r326887 fixes the issue described in the 20171214 UPDATING
	entry.
	r326888 flips the switch back to building GELI support always.

20171214:
	r326593 broke ZFS + GELI support for reasons unknown. However,
	it also broke ZFS support generally, so GELI has been turned off
	by default as the lesser evil in r326857. If you boot off ZFS
	and/or GELI, it might not be a good time to update.

20171125:
	PowerPC users must update loader(8) by rebuilding world before
	installing a new kernel, as the protocol connecting them has
	changed. Without the update, loader metadata will not be passed
	successfully to the kernel and users will have to enter their
	root partition at the kernel mountroot prompt to continue
	booting. Newer versions of loader can boot old kernels without
	issue.

20171110:
	The LOADER_FIREWIRE_SUPPORT build variable has been renamed to
	WITH/OUT_LOADER_FIREWIRE. LOADER_{NO_,}GELI_SUPPORT has been
	renamed to WITH/OUT_LOADER_GELI.

20171106:
	The naive and non-compliant support of posix_fallocate(2) in ZFS
	has been removed as of r325320. The system call now returns
	EINVAL when used on a ZFS file. Although the new behavior
	complies with the standard, some consumers are not prepared to
	cope with it. One known victim is lld prior to r325420.

20171102:
	Building in a FreeBSD src checkout will automatically create
	object directories now rather than store files in the current
	directory if 'make obj' was not run. Calling 'make obj' is no
	longer necessary. This feature can be disabled by setting
	WITHOUT_AUTO_OBJ=yes in /etc/src-env.conf (not /etc/src.conf),
	or passing the option in the environment.

20171101:
	The default MAKEOBJDIR has changed from /usr/obj/<srcdir> for
	native builds, and /usr/obj/<arch>/<srcdir> for cross-builds, to
	a unified /usr/obj/<srcdir>/<arch>. This behavior can be changed
	to the old format by setting WITHOUT_UNIFIED_OBJDIR=yes in
	/etc/src-env.conf, the environment, or with
	-DWITHOUT_UNIFIED_OBJDIR when building. The UNIFIED_OBJDIR
	option is a transitional feature that will be removed for the
	12.0 release; please migrate any tools to the new format by
	looking up the OBJDIR with 'make -V .OBJDIR' rather than
	hardcoding paths.

20171028:
	The native-xtools target no longer installs the files by default
	to the OBJDIR. Use the native-xtools-install target with a
	DESTDIR to install to ${DESTDIR}/${NXTP} where NXTP defaults to
	/nxb-bin.

20171021:
	As part of the boot loader infrastructure cleanup,
	LOADER_*_SUPPORT options are changing from controlling the build
	if defined / undefined to controlling the build with explicit
	'yes' or 'no' values. They will shift to WITH/WITHOUT options to
	match other options in the system.

20171010:
	libstand has turned into a private library for sys/boot use
	only. It is no longer supported as a public interface outside of
	sys/boot.

20171005:
	The arm port has split armv6 into armv6 and armv7. armv7 is now
	a valid TARGET_ARCH/MACHINE_ARCH setting. If you have an armv7
	system and are running a kernel from before r324363, you will
	need to add MACHINE_ARCH=armv7 to 'make buildworld' to do a
	native build.

20171003:
	When building multiple kernels using KERNCONF, non-existent
	KERNCONF files will produce an error and buildkernel will fail.
	Previously, missing KERNCONF files silently failed, giving no
	indication as to why, leaving you to discover during
	installkernel that the desired kernel was never built in the
	first place.

20170912:
	The default serial number format for CTL LUNs has changed. This
	will affect users who use /dev/diskid/* device nodes, or whose
	FibreChannel or iSCSI clients care about their LUNs' serial
	numbers.
	Users who require serial number stability should hardcode serial
	numbers in /etc/ctl.conf.

20170912:
	For 32-bit arm compiled for hard-float support, soft-floating
	point binaries now always get their shared libraries from
	LD_SOFT_LIBRARY_PATH (in the past, this was only used if
	/usr/libsoft also existed). Only users with a hard-float ld.so
	but soft-float everything else should be affected.

20170826:
	The geli password typed at boot is now hidden. To restore the
	previous behavior, see geli(8) for configuration options.

20170825:
	Move PMTUD blackhole counters to TCPSTATS and remove them from
	bare sysctl values. Minor nit, but requires a rebuild of both
	world/kernel to complete.

20170814:
	"make check" behavior (made in ^/head@r295380) has been changed
	to execute from a limited sandbox, as opposed to executing from
	${TESTSDIR}.

	Behavioral changes:
	- The "beforecheck" and "aftercheck" targets are now specified.
	- ${CHECKDIR} (added in commit noted above) has been removed.
	- Legacy behavior can be enabled by setting
	  WITHOUT_MAKE_CHECK_USE_SANDBOX in src.conf(5) or the
	  environment.

	If the limited sandbox mode is enabled, "make check" will
	execute "make distribution", then install, execute the tests,
	and clean up the sandbox if successful.

	The "make distribution" and "make install" targets are typically
	run as root to set appropriate permissions and ownership at
	installation time. The end-user should set "WITH_INSTALL_AS_USER"
	in src.conf(5) or the environment if executing "make check" with
	limited sandbox mode using an unprivileged user.

20170808:
	Since the switch to GPT disk labels, fsck for UFS/FFS has been
	unable to automatically find alternate superblocks. As of
	r322297, the information needed to find alternate superblocks
	has been moved to the end of the area reserved for the boot
	block. Filesystems created with a newfs of this vintage or later
	will create the recovery information. If you have a filesystem
	created prior to this change and wish to have a recovery block
	created for your filesystem, you can do so by running fsck in
	foreground mode (i.e., do not use the -p or -y options). As it
	starts, fsck will ask ``SAVE DATA TO FIND ALTERNATE SUPERBLOCKS''
	to which you should answer yes.

20170728:
	As of r321665, an NFSv4 server configuration that services
	Kerberos mounts or clients that do not support the uid/gid in
	owner/owner_group string capability, must explicitly enable the
	nfsuserd daemon by adding nfsuserd_enable="YES" to the machine's
	/etc/rc.conf file.

20170722:
	Clang, llvm, lldb, compiler-rt and libc++ have been upgraded to
	5.0.0. Please see the 20141231 entry below for information about
	prerequisites and upgrading, if you are not already using clang
	3.5.0 or higher.

20170701:
	WITHOUT_RCMDS is now the default. Set WITH_RCMDS if you need the
	r-commands (rlogin, rsh, etc.) to be built with the base system.

20170625:
	The FreeBSD/powerpc platform now uses a 64-bit type for time_t.
	This is a very major ABI incompatible change, so users of
	FreeBSD/powerpc must be careful when performing source upgrades.
	It is best to run 'make installworld' from an alternate root
	system, either a live CD/memory stick, or a temporary root
	partition. Additionally, all ports must be recompiled. powerpc64
	is largely unaffected, except in the case of 32-bit
	compatibility. All 32-bit binaries will be affected.

20170623:
	Forward compatibility for the "ino64" project has been
	committed. This will allow most new binaries to run on older
	kernels in a limited fashion.
	This prevents many of the common foot-shooting actions in the
	upgrade and provides a limited ability to roll back the kernel
	across the ino64 upgrade. Complicated use cases may not work
	properly, though enough simpler ones work to allow recovery in
	most situations.

20170620:
	Switch back to the BSDL dtc (Device Tree Compiler). Set
	WITH_GPL_DTC if you require the GPL compiler.

20170618:
	The internal ABI used for communication between the NFS kernel
	modules was changed by r320085, so __FreeBSD_version was bumped
	to ensure all the NFS related modules are updated together.

20170617:
	The ABI of struct event was changed by extending the data member
	to 64bit and adding ext fields. For the upgrade, the same
	precautions as for the 20170523 "ino64" entry must be followed.

20170531:
	The GNU roff toolchain has been removed from base. To render
	manpages which are not supported by mandoc(1), man(1) can fall
	back on GNU roff from ports (and will recommend installing it).
	To render roff(7) documents, consider using GNU roff from ports
	(pkg install groff) or the heirloom doctools roff toolchain from
	ports (pkg install heirloom-doctools).

20170524:
	The ath(4) and ath_hal(4) modules now build piecemeal to allow
	for smaller runtime footprint builds. This is useful for
	embedded systems which only require one chipset support.

	If you load it as a module, make sure this is in
	/boot/loader.conf:

	if_ath_load="YES"

	This will load the HAL, all chip/RF backends and if_ath_pci. If
	you have if_ath_pci in /boot/loader.conf, ensure it is after
	if_ath or it will not load any HAL chipset support.

	If you want to selectively load things (eg on ye cheape ARM/MIPS
	platforms where RAM is at a premium) you should:

	* load ath_hal
	* load the chip modules in question
	* load ath_rate, ath_dfs
	* load ath_main
	* load if_ath_pci and/or if_ath_ahb depending upon your
	  particular bus bind type - this is where probe/attach is done.

	For further comments/feedback, poke adrian@ .

20170523:
	The "ino64" 64-bit inode project has been committed, which
	extends a number of types to 64 bits. Upgrading in place
	requires care and adherence to the documented upgrade procedure.

	If using a custom kernel configuration ensure that the
	COMPAT_FREEBSD11 option is included (as during the upgrade the
	system will be running the ino64 kernel with the existing
	world).

	For the safest in-place upgrade begin by removing previous build
	artifacts via "rm -rf /usr/obj/*". Then, carefully follow the
	full procedure documented below under the heading "To rebuild
	everything and install it on the current system." Specifically,
	a reboot is required after installing the new kernel before
	installing world.

20170424:
	The NATM framework including the en(4), fatm(4), hatm(4), and
	patm(4) devices has been removed. Consumers should plan a
	migration before the end-of-life date for FreeBSD 11.

20170420:
	GNU diff has been replaced by a BSD licensed diff. Some features
	of GNU diff have not been implemented; if those are needed, a
	newer version of GNU diff is available via the diffutils package
	under the gdiff name.

20170413:
	As of r316810 for ipfilter, keep frags is no longer assumed when
	keep state is specified in a rule. r316810 aligns ipfilter with
	the documentation in man pages, separating keep frags from keep
	state. This allows keep state to be specified without forcing
	keep frags, and allows keep frags to be specified independently
	of keep state. To maintain previous behaviour, also specify keep
	frags with keep state (as documented in ipf.conf.5).
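
	For example (an illustrative rule, not taken from the original
	entry), a rule that previously relied on the implicit behaviour
	now has to name both keywords:

	pass in quick proto tcp from any to any port = 22 keep state keep frags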

20170407:
	arm64 builds now use the base system LLD 4.0.0 linker by
	default, instead of requiring that the aarch64-binutils port or
	package be installed. To continue using aarch64-binutils, set
	CROSS_BINUTILS_PREFIX=/usr/local/aarch64-freebsd/bin .

20170405:
	The UDP optimization in entry 20160818 that added the sysctl
	net.inet.udp.require_l2_bcast has been reverted. L2 broadcast
	packets will no longer be treated as L3 broadcast packets.

20170331:
	Binds and sends to the loopback addresses, IPv6 and IPv4, will
	now use any explicitly assigned loopback address available in
	the jail instead of using the first assigned address of the
	jail.

20170329:
	The ctl.ko module no longer implements the iSCSI target
	frontend: cfiscsi.ko does instead. If building cfiscsi.ko as a
	kernel module, the module can be loaded via one of the following
	methods:

	- `cfiscsi_load="YES"` in loader.conf(5).
	- Add `cfiscsi` to `$kld_list` in rc.conf(5).
	- ctladm(8)/ctld(8), when compiled with iSCSI support
	  (`WITH_ISCSI=yes` in src.conf(5))

	Please see cfiscsi(4) for more details.

20170316:
	The mmcsd.ko module now additionally depends on geom_flashmap.ko.
	Also, mmc.ko and mmcsd.ko need to be a matching pair built from
	the same source (previously, the dependency of mmcsd.ko on
	mmc.ko was missing, but mmcsd.ko now will refuse to load if it
	is incompatible with mmc.ko).

20170315:
	The syntax of ipfw(8) named states was changed to avoid
	ambiguity. If you have used named states in the firewall rules,
	you need to modify them after installworld and before rebooting.
	Now named states must be prefixed with a colon.

20170311:
	The old drm (sys/dev/drm/) drivers for i915 and radeon have been
	removed as the userland we provide cannot use them. The KMS
	version (sys/dev/drm2) supports the same hardware.

20170302:
	Clang, llvm, lldb, compiler-rt and libc++ have been upgraded to
	4.0.0. Please see the 20141231 entry below for information about
	prerequisites and upgrading, if you are not already using clang
	3.5.0 or higher.

20170221:
	The code that provides support for ZFS .zfs/ directory
	functionality has been reimplemented. It is no longer possible
	to create a snapshot by mkdir under .zfs/snapshot/. That should
	be the only user visible change.

20170216:
	EISA bus support has been removed. The WITH_EISA option is no
	longer valid.

20170215:
	MCA bus support has been removed.

20170127:
	The WITH_LLD_AS_LD / WITHOUT_LLD_AS_LD build knobs have been
	renamed WITH_LLD_IS_LD / WITHOUT_LLD_IS_LD, for consistency with
	CLANG_IS_CC.

20170112:
	The EM_MULTIQUEUE kernel configuration option is deprecated now
	that the em(4) driver conforms to iflib specifications.

20170109:
	The igb(4), em(4) and lem(4) ethernet drivers are now
	implemented via IFLIB. If you have a custom kernel configuration
	that excludes em(4) but you use igb(4), you need to re-add em(4)
	to your custom configuration.

20161217:
	Clang, llvm, lldb, compiler-rt and libc++ have been upgraded to
	3.9.1. Please see the 20141231 entry below for information about
	prerequisites and upgrading, if you are not already using clang
	3.5.0 or higher.

20161124:
	Clang, llvm, lldb, compiler-rt and libc++ have been upgraded to
	3.9.0. Please see the 20141231 entry below for information about
	prerequisites and upgrading, if you are not already using clang
	3.5.0 or higher.

20161119:
	The layout of the pmap structure has changed for powerpc to put
	the pmap statistics at the front for all CPU variations.
	libkvm(3) and all tools that link against it need to be
	recompiled.
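
	A full buildworld/installworld covers this; as an illustrative
	shortcut only (a full world build is the safer route), the
	library and one of its consumers can be rebuilt by hand:

	# cd /usr/src/lib/libkvm && make clean all install
	# cd /usr/src/bin/ps && make clean all install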

20161030:
	isl(4) and cyapa(4) drivers now require a new driver,
	chromebook_platform(4), to work properly on Chromebook-class
	hardware. On other types of hardware the drivers may need to be
	configured using device hints. Please see the corresponding
	manual pages for details.

20161017:
	The urtwn(4) driver was merged into rtwn(4) and now consists of
	the rtwn(4) main module + rtwn_usb(4) and rtwn_pci(4)
	bus-specific parts. Also, firmware for RTL8188CE was renamed due
	to a possible name conflict (rtwnrtl8192cU(B) -> rtwnrtl8192cE(B)).

20161015:
	GNU rcs has been removed from base. It is available as packages:
	- rcs: Latest GPLv3 GNU rcs version.
	- rcs57: Copy of the latest version of GNU rcs (GPLv2) before it
	  was removed from base.

20161008:
	Use of the cc_cdg, cc_chd, cc_hd, or cc_vegas congestion control
	modules now requires that the kernel configuration contain the
	TCP_HHOOK option. (This option is included in the GENERIC
	kernel.)

20161003:
	The WITHOUT_ELFCOPY_AS_OBJCOPY src.conf(5) knob has been
	retired. ELF Tool Chain's elfcopy is always installed as
	/usr/bin/objcopy.

20160924:
	Relocatable object files with the extension of .So have been
	renamed to use an extension of .pico instead. The purpose of
	this change is to avoid a name clash with shared libraries on
	case-insensitive file systems. On those file systems, foo.So is
	the same file as foo.so.

20160918:
	GNU rcs has been turned off by default. It can (temporarily) be
	built again by adding the WITH_RCS knob in src.conf. Otherwise,
	GNU rcs is available from packages:
	- rcs: Latest GPLv3 GNU rcs version.
	- rcs57: Copy of the latest version of GNU rcs (GPLv2) from
	  base.

20160918:
	The backup_uses_rcs functionality has been removed from rc.subr.

20160908:
	The queue(3) debugging macro, QUEUE_MACRO_DEBUG, has been split
	into two separate components, QUEUE_MACRO_DEBUG_TRACE and
	QUEUE_MACRO_DEBUG_TRASH. Define both for the original
	QUEUE_MACRO_DEBUG behavior.

20160824:
	r304787 changed some ioctl interfaces between the iSCSI
	userspace programs and the kernel. ctladm, ctld, iscsictl, and
	iscsid must be rebuilt to work with new kernels.
	__FreeBSD_version has been bumped to 1200005.

20160818:
	The UDP receive code has been updated to only treat incoming UDP
	packets that were addressed to an L2 broadcast address as L3
	broadcast packets. It is not expected that this will affect any
	standards-conforming UDP application. The new behaviour can be
	disabled by setting the sysctl net.inet.udp.require_l2_bcast to
	0.

20160818:
	Remove the openbsd_poll system call. __FreeBSD_version has been
	bumped because of this.

20160708:
	The stable/11 branch has been created from head@r302406.

20160622:
	The libc stub for the pipe(2) system call has been replaced with
	a wrapper that calls the pipe2(2) system call, and the pipe(2)
	system call is now only implemented by the kernels that include
	"options COMPAT_FREEBSD10" in their config file (this is the
	default). Users should ensure that this option is enabled in
	their kernel or upgrade userspace to r302092 before upgrading
	their kernel.

20160527:
	CAM will now strip leading spaces from SCSI disks' serial
	numbers. This will affect users who create UFS filesystems on
	SCSI disks using those disk's diskid device nodes. For example,
	if /etc/fstab previously contained a line like
	"/dev/diskid/DISK-%20%20%20%20%20%20%20ABCDEFG0123456", you
	should change it to "/dev/diskid/DISK-ABCDEFG0123456". Users of
	geom transforms like gmirror may also be affected. ZFS users
	should generally be fine.
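
	To see the serial-number based device nodes currently in use
	before editing /etc/fstab (an illustrative check):

	$ ls /dev/diskid/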

20160523:
	The bitstring(3) API has been updated with new functionality and
	improved performance. But it is binary-incompatible with the old
	API. Objects built with the new headers may not be linked
	against objects built with the old headers.

20160520:
	The brk and sbrk functions have been removed from libc on arm64.
	Binutils from ports has been updated to not link to these
	functions and should be updated to the latest version before
	installing a new libc.

20160517:
	The armv6 port now defaults to the hard float ABI. Limited
	support for running both hardfloat and soft float on the same
	system is available using the libraries installed with
	-DWITH_LIBSOFT. This has only been tested as an upgrade path for
	installworld and packages may fail or need manual intervention
	to run. New packages will be needed.

	To update an existing self-hosted armv6hf system, you must add
	TARGET_ARCH=armv6 on the make command line for both the build
	and the install steps.

20160510:
	Kernel modules compiled outside of a kernel build now default to
	installing to /boot/modules instead of /boot/kernel. Many kernel
	modules built this way (such as those in ports) already overrode
	KMODDIR explicitly to install into /boot/modules. However,
	manually building and installing a module from /sys/modules will
	now install to /boot/modules instead of /boot/kernel.

20160414:
	The CAM I/O scheduler has been committed to the kernel. There
	should be no user visible impact. This does enable NCQ Trim on
	ada SSDs. While the list of known rogues that claim support for
	this but actually corrupt data is believed to be complete, be on
	the lookout for data corruption.

	The known rogue list:

	o Crucial MX100, M550 drives with MU01 firmware.
	o Micron M510 and M550 drives with MU01 firmware.
	o Micron M500 prior to MU07 firmware
	o Samsung 830, 840, and 850 all firmwares
	o FCCT M500 all firmwares

	Crucial has firmware
	http://www.crucial.com/usa/en/support-ssd-firmware
	with working NCQ TRIM. For Micron branded drives, see your sales
	rep for updated firmware. Blacklisted drives will work
	correctly, since these drives work correctly so long as no NCQ
	TRIMs are sent to them. Given this list is the same as found in
	Linux, it's believed there are no other rogues in the market
	place. All other models from the above vendors work.

	To be safe, if you are at all concerned, you can quirk each of
	your drives to prevent NCQ from being sent by setting:

	kern.cam.ada.X.quirks="0x2"

	in loader.conf. If the drive requires the 4k sector quirk, set
	the quirks entry to 0x3.

20160330:
	The FAST_DEPEND build option has been removed and its
	functionality is now the one true way. The old mkdep(1) style of
	'make depend' has been removed. See 20160311 for further
	details.

20160317:
	Resource range types have grown from unsigned long to uintmax_t.
	All drivers, and anything using libdevinfo, need to be
	recompiled.

20160311:
	WITH_FAST_DEPEND is now enabled by default for in-tree and
	out-of-tree builds. It no longer runs mkdep(1) during
	'make depend', and the 'make depend' stage can safely be skipped
	now as it is run automatically when building 'make all' and will
	generate all SRCS and DPSRCS before building anything else.
	Dependencies are gathered at compile time with -MF flags kept in
	separate .depend files per object file. Users should run
	'make cleandepend' once if using -DNO_CLEAN to clean out older
	stale .depend files.

20160306:
	On amd64, clang 3.8.0 can now insert sections of type
	AMD64_UNWIND into kernel modules.
	Therefore, if you load any kernel modules at boot time, please
	install the boot loaders after you install the kernel, but
	before rebooting, e.g.:

	make buildworld
	make buildkernel KERNCONF=YOUR_KERNEL_HERE
	make installkernel KERNCONF=YOUR_KERNEL_HERE
	make -C sys/boot install

	Then follow the usual steps, described in the General Notes
	section, below.

20160305:
	Clang, llvm, lldb and compiler-rt have been upgraded to 3.8.0.
	Please see the 20141231 entry below for information about
	prerequisites and upgrading, if you are not already using clang
	3.5.0 or higher.

20160301:
	The AIO subsystem is now a standard part of the kernel. The
	VFS_AIO kernel option and aio.ko kernel module have been
	removed. Due to stability concerns, asynchronous I/O requests
	are only permitted on sockets and raw disks by default. To
	enable asynchronous I/O requests on all file types, set the
	vfs.aio.enable_unsafe sysctl to a non-zero value.

20160226:
	The ELF object manipulation tool objcopy is now provided by the
	ELF Tool Chain project rather than by GNU binutils. It should be
	a drop-in replacement, with the addition of arm64 support. The
	(temporary) src.conf knob WITHOUT_ELFCOPY_AS_OBJCOPY may be set
	to obtain the GNU version if necessary.

20160129:
	Building ZFS pools on top of zvols is prohibited by default.
	That feature has never worked safely; it's always been prone to
	deadlocks. Using a zvol as the backing store for a VM guest's
	virtual disk will still work, even if the guest is using ZFS.
	Legacy behavior can be restored by setting
	vfs.zfs.vol.recursive=1.

20160119:
	The NONE and HPN patches have been removed from OpenSSH. They
	are still available in the security/openssh-portable port.

20160113:
	With the addition of ypldap(8), a new _ypldap user is now
	required during installworld. "mergemaster -p" can be used to
	add the user prior to installworld, as documented in the
	handbook.

20151216:
	The tftp loader (pxeboot) now uses the option root-path
	directive. As a consequence it no longer looks for a
	pxeboot.4th file on the tftp server. Instead it uses the regular
	/boot infrastructure as with the other loaders.

20151211:
	The code to start recording plug and play data into the modules
	has been committed. While the old tools will properly build a
	new kernel, a number of warnings about "unknown metadata record
	4" will be produced for an older kldxref. To avoid such
	warnings, make sure to rebuild the kernel toolchain (or world).
	Make sure that you have r292078 or later before trying to build
	r292077 or later.

20151207:
	Debug data files are now built by default with 'make buildworld'
	and installed with 'make installworld'. This facilitates
	debugging but requires more disk space both during the build and
	for the installed world. Debug files may be disabled by setting
	WITHOUT_DEBUG_FILES=yes in src.conf(5).

20151130:
	r291527 changed the internal interface between the nfsd.ko and
	nfscommon.ko modules. As such, they must both be upgraded
	together. __FreeBSD_version has been bumped because of this.

20151108:
	Added support for unicode collation strings leads to a change in
	the order of files listed by ls(1), for example. To get back to
	the old behaviour, set the LC_COLLATE environment variable to
	"C".

	Database administrators will need to reindex their databases
	given collation results will be different.

	Due to a bug in install(1) it is recommended to remove the
	ancient locales before running make installworld:

	rm -rf /usr/share/locale/*

20151030:
	OpenSSL has been upgraded to 1.0.2d.
	Any binaries requiring libcrypto.so.7 or libssl.so.7 must be
	recompiled.

20151020:
	Qlogic 24xx/25xx firmware images were updated from 5.5.0 to
	7.3.0. Kernel modules isp_2400_multi and isp_2500_multi were
	removed and should be replaced with the isp_2400 and isp_2500
	modules respectively.

20151017:
	The build previously allowed using 'make -n' to not recurse into
	sub-directories while showing what commands would be executed,
	and 'make -n -n' to recursively show commands. Now 'make -n'
	will recurse and 'make -N' will not.

20151012:
	If you specify SENDMAIL_MC or SENDMAIL_CF in make.conf,
	mergemaster and etcupdate will now use this file. A custom
	sendmail.cf is now updated via this mechanism rather than via
	installworld. If you had excluded sendmail.cf in mergemaster.rc
	or etcupdate.conf, you may want to remove the exclusion or
	change it to "always install". /etc/mail/sendmail.cf is now
	managed the same way regardless of whether
	SENDMAIL_MC/SENDMAIL_CF is used. If you are not using
	SENDMAIL_MC/SENDMAIL_CF there should be no change in behavior.

20151011:
	Compatibility shims for legacy ATA device names have been
	removed. This includes the ATA_STATIC_ID kernel option, the
	kern.cam.ada.legacy_aliases and kern.geom.raid.legacy_aliases
	loader tunables, the kern.devalias.* environment variables, and
	the /dev/ad* and /dev/ar* symbolic links.

20151006:
	Clang, llvm, lldb, compiler-rt and libc++ have been upgraded to
	3.7.0. Please see the 20141231 entry below for information about
	prerequisites and upgrading, if you are not already using clang
	3.5.0 or higher.

20150924:
	Kernel debug files have been moved to
	/usr/lib/debug/boot/kernel/, and renamed from .symbols to
	.debug. This reduces the size requirements on the boot partition
	or file system and provides consistency with userland debug
	files.

	When using the supported kernel installation method the
	/usr/lib/debug/boot/kernel directory will be renamed (to
	kernel.old) as is done with /boot/kernel.

	Developers wishing to maintain the historical behavior of
	installing debug files in /boot/kernel/ can set
	KERN_DEBUGDIR="" in src.conf(5).

20150827:
	The wireless drivers have undergone changes that remove the
	'parent interface' from the ifconfig -l output. The rc.d network
	scripts used to check for the presence of a parent interface in
	the list, so old scripts would fail to start wireless
	networking. Thus, an etcupdate(8) or mergemaster(8) run is
	required after a kernel update, to update your rc.d scripts in
	/etc.

20150827:
	pf no longer supports 'scrub fragment crop' or 'scrub fragment
	drop-ovl'. These configurations are now automatically
	interpreted as 'scrub fragment reassemble'.

20150817:
	Kernel-loadable modules for the random(4) device are back. To
	use them, the kernel must have

	device	random
	options	RANDOM_LOADABLE

	kldload(8) can then be used to load random_fortuna.ko or
	random_yarrow.ko. Please note that due to the indirect function
	calls that the loadable modules need to provide, the built-in
	variants will be slightly more efficient.

	The random(4) kernel option RANDOM_DUMMY has been retired due to
	unpopularity. It was not all that useful anyway.

20150813:
	The WITHOUT_ELFTOOLCHAIN_TOOLS src.conf(5) knob has been
	retired. Control over building the ELF Tool Chain tools is now
	provided by the WITHOUT_TOOLCHAIN knob.

20150810:
	The polarity of Pulse Per Second (PPS) capture events with the
	uart(4) driver has been corrected. Prior to this change the PPS
	"assert" event corresponded to the trailing edge of a positive
	PPS pulse and the "clear" event was the leading edge of the next
	pulse.
	As the width of a PPS pulse in a typical GPS receiver is on the
	order of 1 millisecond, most users will not notice any
	significant difference with this change.

	Anyone who has compensated for the historical polarity reversal
	by configuring a negative offset equal to the pulse width will
	need to remove that workaround.

20150809:
	The default group assigned to /dev/dri entries has been changed
	from 'wheel' to 'video' with the id of '44'. If you want to have
	access to the dri devices please add yourself to the video group
	with:

	# pw groupmod video -m $USER

20150806:
	The menu.rc and loader.rc files will now be replaced during
	upgrades. Please migrate local changes to menu.rc.local and
	loader.rc.local instead.

20150805:
	GNU Binutils versions of addr2line, c++filt, nm, readelf, size,
	strings and strip have been removed. The src.conf(5) knob
	WITHOUT_ELFTOOLCHAIN_TOOLS no longer provides the binutils
	tools.

20150728:
	As ZFS requires more kernel stack pages than is the default on
	some architectures, e.g. i386, it now warns if KSTACK_PAGES is
	less than ZFS_MIN_KSTACK_PAGES (which is 4 at the time of
	writing). Please consider using 'options KSTACK_PAGES=X' where X
	is greater than or equal to ZFS_MIN_KSTACK_PAGES, i.e. 4, in
	such configurations.

20150706:
	sendmail has been updated to 8.15.2. Starting with FreeBSD 11.0
	and sendmail 8.15, sendmail uses uncompressed IPv6 addresses by
	default, i.e., they will not contain "::". For example, instead
	of ::1, it will be 0:0:0:0:0:0:0:1. This permits a zero subnet
	to have a more specific match, such as different map entries for
	IPv6:0:0 vs IPv6:0. This change requires that configuration data
	(including maps, files, classes, custom ruleset, etc.) use the
	same format, so make certain such configuration data is
	upgraded. As a very simple check, search for patterns like
	'IPv6:[0-9a-fA-F:]*::' and 'IPv6::'. To return to the old
	behavior, set the m4 option confUSE_COMPRESSED_IPV6_ADDRESSES or
	the cf option UseCompressedIPv6Addresses.

20150630:
	The default kernel entropy-processing algorithm is now Fortuna,
	replacing Yarrow.

	Assuming you have 'device random' in your kernel config file,
	the configurations allow a kernel option to override this
	default. You may choose *ONE* of:

	options	RANDOM_YARROW	# Legacy /dev/random algorithm.
	options	RANDOM_DUMMY	# Blocking-only driver.

	If you have neither, you get Fortuna. For most people, read no
	further, Fortuna will give a /dev/random that works like it
	always used to, and the difference will be irrelevant.

	If you remove 'device random', you get *NO* kernel-processed
	entropy at all. This may be acceptable to folks building
	embedded systems, but has complications. Carry on reading, and
	it is assumed you know what you need.

	*PLEASE* read random(4) and random(9) if you are in the habit of
	tweaking kernel configs, and/or if you are a member of the
	embedded community, wanting specific and not-usual behaviour
	from your security subsystems.

	NOTE!! If you use RANDOM_DUMMY and/or have no 'device random',
	you will NOT have a functioning /dev/random, and many
	cryptographic features will not work, including SSH. You may
	also find strange behaviour from the random(3) set of library
	functions, in particular sranddev(3), srandomdev(3) and
	arc4random(3). The reason for this is that the KERN_ARND sysctl
	only returns entropy if it thinks it has some to share, and with
	RANDOM_DUMMY or no 'device random' this will never happen.

20150623:
	An additional fix for the issue described in the 20150614
	sendmail entry below has been committed in revision 284717.

20150616:
	FreeBSD's old make (fmake) has been removed from the system. It
	is available as the devel/fmake port or via pkg install fmake.

20150615:
	The fix for the issue described in the 20150614 sendmail entry
	below has been committed in revision 284436. The work around
	described in that entry is no longer needed unless the default
	setting is overridden by a confDH_PARAMETERS configuration
	setting of '5' or pointing to a 512 bit DH parameter file.

20150614:
	ALLOW_DEPRECATED_ATF_TOOLS/ATFFILE support has been removed from
	atf.test.mk (included from bsd.test.mk). Please upgrade devel/atf
	and devel/kyua to version 0.20+ and adjust any calling code to
	work with Kyuafile and kyua.

20150614:
	The import of openssl to address the FreeBSD-SA-15:10.openssl
	security advisory includes a change which rejects handshakes
	with DH parameters below 768 bits. sendmail releases prior to
	8.15.2 (not yet released) defaulted to a 512 bit DH parameter
	setting for client connections. To work around this
	interoperability issue, sendmail can be configured to use a
	2048 bit DH parameter by:

	1. Edit /etc/mail/`hostname`.mc
	2. If a setting for confDH_PARAMETERS does not exist or exists
	   and is set to a string beginning with '5', replace it with
	   '2'.
	3. If a setting for confDH_PARAMETERS exists and is set to a
	   file path, create a new file with:
	   openssl dhparam -out /path/to/file 2048
	4. Rebuild the .cf file:
	   cd /etc/mail/; make; make install
	5. Restart sendmail:
	   cd /etc/mail/; make restart

	A sendmail patch is coming, at which time this file will be
	updated.

20150604:
	Generation of legacy formatted entries has been disabled by
	default in pwd_mkdb(8), as all base system consumers of the
	legacy formatted entries were converted to use the new format by
	default when the new, machine-independent format was added; that
	format has been supported since FreeBSD 5.x. Please see the
	pwd_mkdb(8) manual page for further details.

20150525:
	Clang and llvm have been upgraded to the 3.6.1 release. Please
	see the 20141231 entry below for information about prerequisites
	and upgrading, if you are not already using 3.5.0 or higher.

20150521:
	TI platform code switched to using vendor DTS files, and this
	update may break existing systems running on Beaglebone,
	Beaglebone Black, and Pandaboard:

	- dtb files should be regenerated/reinstalled. Filenames are the
	  same but the content is different now
	- GPIO addressing was changed, now each GPIO bank (32 pins per
	  bank) has its own /dev/gpiocX device, e.g. pin 121 on
	  /dev/gpioc0 in the old addressing scheme is now pin 25 on
	  /dev/gpioc3.
	- Pandaboard: /etc/ttys should be updated, serial console device
	  is now /dev/ttyu2, not /dev/ttyu0

20150501:
	soelim(1) from gnu/usr.bin/groff has been replaced by
	usr.bin/soelim. If you need the GNU extension from groff
	soelim(1), install groff from package: pkg install groff, or via
	ports: textproc/groff.

20150423:
	chmod, chflags, chown and chgrp now affect symlinks in -R mode
	as defined in symlink(7); previously symlinks were silently
	ignored.

20150415:
	The const qualifier has been removed from iconv(3) to comply
	with POSIX. The ports tree is aware of this from r384038
	onwards.

20150416:
	Libraries specified by LIBADD in Makefiles must have a
	corresponding DPADD_<lib> variable to ensure correct
	dependencies. This is now enforced in src.libnames.mk.
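
	As an illustrative sketch (mytool is a hypothetical utility, not
	part of the tree), a base-system style Makefile lists bare
	library names in LIBADD, and src.libnames.mk supplies the
	matching DPADD_<lib> dependency for each:

	PROG=	mytool
	LIBADD=	kvm util

	.include <bsd.prog.mk>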

20150324:
	Support for SATA controllers that are handled by the more
	functional ahci(4), siis(4) and mvs(4) drivers has been removed
	from the legacy ata(4) driver. The ataahci and ataadaptec kernel
	modules were removed completely, replaced by the ahci and mvs
	modules respectively.

20150315:
	Clang, llvm and lldb have been upgraded to the 3.6.0 release.
	Please see the 20141231 entry below for information about
	prerequisites and upgrading, if you are not already using 3.5.0
	or higher.

20150307:
	The 32-bit PowerPC kernel has been changed to a
	position-independent executable. This can only be booted with a
	version of loader(8) newer than January 31, 2015, so make sure
	to update both world and kernel before rebooting.

20150217:
	If you are running a -CURRENT kernel since r273872 (Oct 30th,
	2014), but before r278950, the RNG was not seeded properly.
	Immediately upgrade the kernel to r278950 or later and
	regenerate any keys (e.g. ssh keys or openssl keys) that were
	generated with a kernel from that range. This does not affect
	programs that directly used /dev/random or /dev/urandom. All
	userland uses of arc4random(3) are affected.

20150210:
	The autofs(4) ABI was changed in order to restore binary
	compatibility with 10.1-RELEASE. The automountd(8) daemon needs
	to be rebuilt to work with the new kernel.

20150131:
	The powerpc64 kernel has been changed to a position-independent
	executable. This can only be booted with a new version of
	loader(8), so make sure to update both world and kernel before
	rebooting.

20150118:
	Clang and llvm have been upgraded to the 3.5.1 release. This is
	a bugfix only release, no new features have been added. Please
	see the 20141231 entry below for information about prerequisites
	and upgrading, if you are not already using 3.5.0.

20150107:
	ELF tools addr2line, elfcopy (strip), nm, size, and strings are
	now taken from the ELF Tool Chain project rather than GNU
	binutils. They should be drop-in replacements, with the addition
	of arm64 support. The WITHOUT_ELFTOOLCHAIN_TOOLS= knob may be
	used to obtain the binutils tools, if necessary. See 20150805
	for updated information.

20150105:
	The default Unbound configuration now enables remote control
	using a local socket. Users who have already enabled the
	local_unbound service should regenerate their configuration by
	running "service local_unbound setup" as root.

20150102:
	The GNU texinfo and GNU info pages have been removed. To be able
	to view GNU info pages please install texinfo from ports.

20141231:
	Clang, llvm and lldb have been upgraded to the 3.5.0 release.

	As of this release, a prerequisite for building clang, llvm and
	lldb is a C++11 capable compiler and C++11 standard library.
	This means that to be able to successfully build the cross-tools
	stage of buildworld, with clang as the bootstrap compiler, your
	system compiler or cross compiler should either be clang 3.3 or
	later, or gcc 4.8 or later, and your system C++ library should
	be libc++, or libstdc++ from gcc 4.8 or later.

	On any standard FreeBSD 10.x or 11.x installation, where clang
	and libc++ are on by default (that is, on x86 or arm), this
	should work out of the box.

	On 9.x installations where clang is enabled by default, e.g. on
	x86 and powerpc, libc++ will not be enabled by default, so
	libc++ should be built (with clang) and installed first. If both
	clang and libc++ are missing, build clang first, then use it to
	build libc++.

	On 8.x and earlier installations, upgrade to 9.x first, and then
	follow the instructions for 9.x above.
Sparc64 and mips users are unaffected, as they still use gcc 4.2.1 by default, and do not build clang. Many embedded systems are resource constrained, and will not be able to build clang in a reasonable time, or in some cases at all. In those cases, cross building bootable systems on amd64 is a workaround. This new version of clang introduces a number of new warnings, of which the following are most likely to appear: -Wabsolute-value This warns in two cases, for both C and C++: * When the code is trying to take the absolute value of an unsigned quantity, which is effectively a no-op, and almost never what was intended. The code should be fixed, if at all possible. If you are sure that the unsigned quantity can be safely cast to signed, without loss of information or undefined behavior, you can add an explicit cast, or disable the warning. * When the code is trying to take an absolute value, but the called abs() variant is for the wrong type, which can lead to truncation. If you want to disable the warning instead of fixing the code, please make sure that truncation will not occur, or it might lead to unwanted side-effects. -Wtautological-undefined-compare and -Wundefined-bool-conversion These warn when C++ code is trying to compare 'this' against NULL, while 'this' should never be NULL in well-defined C++ code. However, there is some legacy (pre C++11) code out there, which actively abuses this feature, which was less strictly defined in previous C++ versions. Squid and openjdk do this, for example. The warning can be turned off for C++98 and earlier, but compiling the code in C++11 mode might result in unexpected behavior; for example, the parts of the program that are unreachable could be optimized away. 20141222: The old NFS client and server (kernel options NFSCLIENT, NFSSERVER) kernel sources have been removed. The .h files remain, since some utilities include them. This will need to be fixed later. If "mount -t oldnfs ..." is attempted, it will fail. If the "-o" option on mountd(8), nfsd(8) or nfsstat(1) is used, the utilities will report errors. 20141121: The handling of LOCAL_LIB_DIRS has been altered to skip addition of directories to top level SUBDIR variable when their parent directory is included in LOCAL_DIRS. Users with build systems with such hierarchies and without SUBDIR entries in the parent directory Makefiles should add them or add the directories to LOCAL_DIRS. 20141109: faith(4) and faithd(8) have been removed from the base system. Faith has been obsolete for a very long time. 20141104: vt(4), the new console driver, is enabled by default. It brings support for Unicode and double-width characters, as well as support for UEFI and integration with the KMS kernel video drivers. You may need to update your console settings in /etc/rc.conf, most probably the keymap. During boot, /etc/rc.d/syscons will indicate what you need to do. vt(4) still has issues and lacks some features compared to syscons(4). See the wiki for up-to-date information: https://wiki.freebsd.org/Newcons If you want to keep using syscons(4), you can do so by adding the following line to /boot/loader.conf: kern.vty=sc 20141102: pjdfstest has been integrated into kyua as an opt-in test suite. Please see share/doc/pjdfstest/README for more details on how to execute it. 20141009: gperf has been removed from the base system for architectures that use clang. Ports that require gperf will obtain it from the devel/gperf port. 
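	To make the -Wabsolute-value discussion in the 20141231 entry above
	concrete, the following minimal C fragment (illustrative only, not
	code from the tree) triggers both forms of the warning:

	#include <stdlib.h>

	void
	example(unsigned int u, double d)
	{
		/* Case 1: absolute value of an unsigned quantity; a no-op. */
		unsigned int a = abs(u);	/* warns: -Wabsolute-value */

		/* Case 2: integer abs() applied to a double truncates;
		   fabs(d) from <math.h> was almost certainly intended. */
		double b = abs(d);		/* warns: -Wabsolute-value */

		(void)a;
		(void)b;
	}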
20140923:
	pjdfstest has been moved from tools/regression/pjdfstest to
	contrib/pjdfstest.

20140922:
	At svn r271982, the default linux compat kernel ABI has been adjusted
	to 2.6.18 in support of the linux-c6 compat ports infrastructure
	update.  If you wish to continue using the linux-f10 compat ports,
	add compat.linux.osrelease=2.6.16 to your local sysctl.conf.  Users
	are encouraged to update their linux-compat packages to linux-c6
	during their next update cycle.

20140729:
	The ofwfb driver, used to provide a graphics console on PowerPC when
	using vt(4), no longer allows mmap() of all physical memory.  This
	will prevent Xorg on PowerPC with some ATI graphics cards from
	initializing properly unless x11-servers/xorg-server is updated to
	1.12.4_8 or newer.

20140723:
	The xdev targets have been converted to using TARGET and TARGET_ARCH
	instead of XDEV and XDEV_ARCH.

20140719:
	The default unbound configuration has been modified to address issues
	with reverse lookups on networks that use private address ranges.  If
	you use the local_unbound service, run "service local_unbound setup"
	as root to regenerate your configuration, then
	"service local_unbound reload" to load the new configuration.

20140709:
	The GNU texinfo and GNU info pages are no longer built and installed;
	a WITH_INFO knob has been added to allow building and installing them
	again.  UPDATE: see the 20150102 entry on texinfo's removal.

20140708:
	The GNU readline library is now an INTERNALLIB - that is, it is
	statically linked into consumers (GDB and variants) in the base
	system, and the shared library is no longer installed.  The
	devel/readline port is available for third party software that
	requires readline.

20140702:
	The Itanium architecture (ia64) has been removed from the list of
	known architectures.  This is the first step in the removal of the
	architecture.

20140701:
	Commit r268115 has added NFSv4.1 server support, merged from
	projects/nfsv4.1-server.  Since this includes changes to the internal
	interfaces between the NFS related modules, a full build of the
	kernel and modules will be necessary.  __FreeBSD_version has been
	bumped.

20140629:
	The WITHOUT_VT_SUPPORT kernel config knob has been renamed
	WITHOUT_VT.  (The other _SUPPORT knobs have a consistent meaning
	which differs from the behaviour controlled by this knob.)

20140619:
	The maximum length of the serial number in CTL was increased from 16
	to 64 chars, which breaks the ABI.  All CTL-related tools, such as
	ctladm and ctld, need to be rebuilt to work with the new kernel.

20140606:
	The libatf-c and libatf-c++ major versions were downgraded to 0 and
	1 respectively to match the upstream numbers.  They were out of sync
	because, when they were originally added to FreeBSD, the upstream
	versions were not respected.  These libraries are private and not
	yet built by default, so renumbering them should be a non-issue.
	However, unclean source trees will yield broken test programs once
	the operator executes "make delete-old-libs" after a
	"make installworld".

	Additionally, the atf-sh binary was made private by moving it into
	/usr/libexec/.  Already-built shell test programs will keep the path
	to the old binary so they will break after "make delete-old" is run.

	If you are using WITH_TESTS=yes (not the default), wipe the object
	tree and rebuild from scratch to prevent spurious test failures.
	This is only needed once: the misnumbered libraries and misplaced
	binaries have been added to OptionalObsoleteFiles.inc so they will
	be removed during a clean upgrade.

20140512:
	Clang and llvm have been upgraded to the 3.4.1 release.
20140508: We bogusly installed src.opts.mk in /usr/share/mk. This file should be removed to avoid issues in the future (and has been added to ObsoleteFiles.inc). 20140505: /etc/src.conf now affects only builds of the FreeBSD src tree. In the past, it affected all builds that used the bsd.*.mk files. The old behavior was a bug, but people may have relied upon it. To get this behavior back, you can .include /etc/src.conf from /etc/make.conf (which is still global and isn't changed). This also changes the behavior of incremental builds inside the tree of individual directories. Set MAKESYSPATH to ".../share/mk" to do that. Although this has survived make universe and some upgrade scenarios, other upgrade scenarios may have broken. At least one form of temporary breakage was fixed with MAKESYSPATH settings for buildworld as well... In cases where MAKESYSPATH isn't working with this setting, you'll need to set it to the full path to your tree. One side effect of all this cleaning up is that bsd.compiler.mk is no longer implicitly included by bsd.own.mk. If you wish to use COMPILER_TYPE, you must now explicitly include bsd.compiler.mk as well. 20140430: The lindev device has been removed since /dev/full has been made a standard device. __FreeBSD_version has been bumped. 20140424: The knob WITHOUT_VI was added to the base system, which controls building ex(1), vi(1), etc. Older releases of FreeBSD required ex(1) in order to reorder files share/termcap and didn't build ex(1) as a build tool, so building/installing with WITH_VI is highly advised for build hosts for older releases. This issue has been fixed in stable/9 and stable/10 in r277022 and r276991, respectively. 20140418: The YES_HESIOD knob has been removed. It has been obsolete for a decade. Please move to using WITH_HESIOD instead or your builds will silently lack HESIOD. 20140405: The uart(4) driver has been changed with respect to its handling of the low-level console. Previously the uart(4) driver prevented any process from changing the baudrate or the CLOCAL and HUPCL control flags. By removing the restrictions, operators can make changes to the serial console port without having to reboot. However, when getty(8) is started on the serial device that is associated with the low-level console, a misconfigured terminal line in /etc/ttys will now have a real impact. Before upgrading the kernel, make sure that /etc/ttys has the serial console device configured as 3wire without baudrate to preserve the previous behaviour. E.g: ttyu0 "/usr/libexec/getty 3wire" vt100 on secure 20140306: Support for libwrap (TCP wrappers) in rpcbind was disabled by default to improve performance. To re-enable it, if needed, run rpcbind with command line option -W. 20140226: Switched back to the GPL dtc compiler due to updates in the upstream dts files not being supported by the BSDL dtc compiler. You will need to rebuild your kernel toolchain to pick up the new compiler. Core dumps may result while building dtb files during a kernel build if you fail to do so. Set WITHOUT_GPL_DTC if you require the BSDL compiler. 20140216: Clang and llvm have been upgraded to 3.4 release. 20140216: The nve(4) driver has been removed. Please use the nfe(4) driver for NVIDIA nForce MCP Ethernet adapters instead. 20140212: An ABI incompatibility crept into the libc++ 3.4 import in r261283. This could cause certain C++ applications using shared libraries built against the previous version of libc++ to crash. 
The incompatibility has now been fixed, but any C++ applications or shared
libraries built between r261283 and r261801 should be recompiled.

20140204:
	OpenSSH will now ignore errors caused by a kernel lacking Capsicum
	capability mode support.  Please note that enabling the feature in
	the kernel is still highly recommended.

20140131:
	OpenSSH is now built with sandbox support, and will use sandbox as
	the default privilege separation method.  This requires Capsicum
	capability mode support in the kernel.

20140128:
	The libelf and libdwarf libraries have been updated to newer versions
	from upstream.  Shared library version numbers for these two
	libraries were bumped.  Any ports or binaries requiring these two
	libraries should be recompiled.  __FreeBSD_version is bumped to
	1100006.

20140110:
	If a Makefile in a tests/ directory was auto-generating a Kyuafile
	instead of providing an explicit one, this would prevent such a
	Makefile from providing its own Kyuafile in the future during
	NO_CLEAN builds.  This has been fixed in the Makefiles but manual
	intervention is needed to clean an objdir if you use NO_CLEAN:

	# find /usr/obj -name Kyuafile | xargs rm -f

20131213:
	The behavior of gss_pseudo_random() for the krb5 mechanism has
	changed, for applications requesting a longer random string than
	produced by the underlying enctype's pseudo-random() function.  In
	particular, the random string produced from a session key of enctype
	aes128-cts-hmac-sha1-96 or aes256-cts-hmac-sha1-96 will be different
	at the 17th octet and later, after this change.  The counter used in
	the PRF+ construction is now encoded as a big-endian integer in
	accordance with RFC 4402.  (The first PRF+ block uses counter value
	zero, whose encoding is byte-order independent; with the 16-octet
	PRF output of these enctypes, the first difference therefore appears
	at octet 17.)  __FreeBSD_version is bumped to 1100004.

20131108:
	The WITHOUT_ATF build knob has been removed and its functionality
	has been subsumed into the more generic WITHOUT_TESTS.  If you were
	using the former to disable the build of the ATF libraries, you
	should change your settings to use the latter.

20131025:
	The default version of mtree is now nmtree, which is obtained from
	NetBSD.  The output is generally the same, but may vary slightly.
	If you find you need identical output, adding "-F freebsd9" to the
	command line should do the trick.  For the time being, the old mtree
	is available as fmtree.

20131014:
	libbsdyml has been renamed to libyaml and moved to /usr/lib/private.
	This will break ports-mgmt/pkg.  Rebuild the port, or upgrade to pkg
	1.1.4_8 and verify that bsdyml is not linked in, before running
	"make delete-old-libs":

	# make -C /usr/ports/ports-mgmt/pkg build deinstall install clean

	or

	# pkg install pkg; ldd /usr/local/sbin/pkg | grep bsdyml

20131010:
	The stable/10 branch has been created in subversion from head
	revision r256279.

COMMON ITEMS:

	General Notes
	-------------
	Avoid using make -j when upgrading.  While generally safe, there are
	sometimes problems using -j to upgrade.  If your upgrade fails with
	-j, please try again without -j.  From time to time in the past
	there have been problems using -j with buildworld and/or
	installworld.  This is especially true when upgrading between
	"distant" versions (e.g. ones that cross a major release boundary or
	several minor releases, or when several months have passed on the
	-current branch).

	Sometimes, obscure build problems are the result of environment
	poisoning.  This can happen because the make utility reads its
	environment when searching for values for global variables.  To run
	your build attempts in an "environmental clean room", prefix all
	make commands with 'env -i '.  See the env(1) manual page for more
	details.
When upgrading from one major version to another it is generally best to upgrade to the latest code in the currently installed branch first, then do an upgrade to the new branch. This is the best-tested upgrade path, and has the highest probability of being successful. Please try this approach if you encounter problems with a major version upgrade. Since the stable 4.x branch point, one has generally been able to upgrade from anywhere in the most recent stable branch to head / current (or even the last couple of stable branches). See the top of this file when there's an exception. When upgrading a live system, having a root shell around before installing anything can help undo problems. Not having a root shell around can lead to problems if pam has changed too much from your starting point to allow continued authentication after the upgrade. This file should be read as a log of events. When a later event changes information of a prior event, the prior event should not be deleted. Instead, a pointer to the entry with the new information should be placed in the old entry. Readers of this file should also sanity check older entries before relying on them blindly. Authors of new entries should write them with this in mind. ZFS notes --------- When upgrading the boot ZFS pool to a new version, always follow these two steps: 1.) recompile and reinstall the ZFS boot loader and boot block (this is part of "make buildworld" and "make installworld") 2.) update the ZFS boot block on your boot drive The following example updates the ZFS boot block on the first partition (freebsd-boot) of a GPT partitioned drive ada0: "gpart bootcode -p /boot/gptzfsboot -i 1 ada0" Non-boot pools do not need these updates. To build a kernel ----------------- If you are updating from a prior version of FreeBSD (even one just a few days old), you should follow this procedure. It is the most failsafe as it uses a /usr/obj tree with a fresh mini-buildworld, make kernel-toolchain make -DALWAYS_CHECK_MAKE buildkernel KERNCONF=YOUR_KERNEL_HERE make -DALWAYS_CHECK_MAKE installkernel KERNCONF=YOUR_KERNEL_HERE To test a kernel once --------------------- If you just want to boot a kernel once (because you are not sure if it works, or if you want to boot a known bad kernel to provide debugging information) run make installkernel KERNCONF=YOUR_KERNEL_HERE KODIR=/boot/testkernel nextboot -k testkernel To rebuild everything and install it on the current system. ----------------------------------------------------------- # Note: sometimes if you are running current you gotta do more than # is listed here if you are upgrading from a really old current. make buildworld make buildkernel KERNCONF=YOUR_KERNEL_HERE make installkernel KERNCONF=YOUR_KERNEL_HERE [1] [3] mergemaster -Fp [5] make installworld mergemaster -Fi [4] make delete-old [6] To cross-install current onto a separate partition -------------------------------------------------- # In this approach we use a separate partition to hold # current's root, 'usr', and 'var' directories. A partition # holding "/", "/usr" and "/var" should be about 2GB in # size. 
	make buildworld
	make buildkernel KERNCONF=YOUR_KERNEL_HERE
	make installworld DESTDIR=${CURRENT_ROOT} -DDB_FROM_SRC
	make distribution DESTDIR=${CURRENT_ROOT}	# if newfs'd
	make installkernel KERNCONF=YOUR_KERNEL_HERE DESTDIR=${CURRENT_ROOT}
	cp /etc/fstab ${CURRENT_ROOT}/etc/fstab		# if newfs'd

	To upgrade in-place from stable to current
	------------------------------------------
	make buildworld [9]
	make buildkernel KERNCONF=YOUR_KERNEL_HERE [8]
	make installkernel KERNCONF=YOUR_KERNEL_HERE [1] [3]
	mergemaster -Fp [5]
	make installworld
	mergemaster -Fi [4]
	make delete-old [6]

	Make sure that you've read the UPDATING file to understand the
	tweaks to various things you need.  At this point in the life cycle
	of current, things change often and you are on your own to cope.
	The defaults can also change, so please read ALL of the UPDATING
	entries.

	Also, if you are tracking -current, you must be subscribed to
	freebsd-current@freebsd.org.  Make sure that, before you update your
	sources, you have read and understood all the recent messages there.
	If in doubt, please track -stable which has far fewer pitfalls.

	[1] If you have third party modules, such as vmware, you should
	disable them at this point so they don't crash your system on
	reboot.

	[3] From the bootblocks, boot -s, and then do
		fsck -p
		mount -u /
		mount -a
		cd src
		adjkerntz -i		# if CMOS is wall time
	Also, when doing a major release upgrade, it is required that you
	boot into single user mode to do the installworld.

	[4] Note: This step is non-optional.  Failure to do this step can
	result in a significant reduction in the functionality of the
	system.  Attempting to do it by hand is not recommended and those
	that pursue this avenue should read this file carefully, as well as
	the archives of freebsd-current and freebsd-hackers mailing lists
	for potential gotchas.  The -U option is also useful to consider.
	See mergemaster(8) for more information.

	[5] Usually this step is a no-op.  However, from time to time you
	may need to do this if you get unknown user in the following step.
	It never hurts to do it all the time.  You may need to install a new
	mergemaster (cd src/usr.sbin/mergemaster && make install) after the
	buildworld before this step if you last updated from current before
	20130425 or from -stable before 20130430.

	[6] This only deletes old files and directories.  Old libraries can
	be deleted by "make delete-old-libs", but you have to make sure that
	no program is using those libraries anymore.

	[8] The new kernel must be able to run existing binaries used by an
	installworld.  When upgrading across major versions, the new
	kernel's configuration must include the correct COMPAT_FREEBSD<n>
	option for existing binaries (e.g. COMPAT_FREEBSD11 to run 11.x
	binaries).  Failure to do so may leave you with a system that is
	hard to boot to recover.  A GENERIC kernel will include suitable
	compatibility options to run binaries from older branches.  Make
	sure that you merge any new devices from GENERIC since the last time
	you updated your kernel config file.

	[9] If CPUTYPE is defined in your /etc/make.conf, make sure to use
	the "?=" instead of the "=" assignment operator, so that buildworld
	can override the CPUTYPE if it needs to.

	MAKEOBJDIRPREFIX must be defined in an environment variable, and
	not on the command line, or in /etc/make.conf.  buildworld will
	warn if it is improperly defined.

FORMAT:

This file contains a list, in reverse chronological order, of major
breakages in tracking -current.
It is not guaranteed to be a complete list of such breakages, and only contains entries since September 23, 2011. If you need to see UPDATING entries from before that date, you will need to fetch an UPDATING file from an older FreeBSD release. Copyright information: Copyright 1998-2009 M. Warner Losh. Redistribution, publication, translation and use, with or without modification, in full or in part, in any form or format of this document are permitted without further permission from the author. THIS DOCUMENT IS PROVIDED BY WARNER LOSH ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL WARNER LOSH BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. Contact Warner Losh if you have any questions about your use of this document. $FreeBSD$ Index: projects/import-googletest-1.8.1/contrib/libarchive/libarchive/archive_read_disk_posix.c =================================================================== --- projects/import-googletest-1.8.1/contrib/libarchive/libarchive/archive_read_disk_posix.c (revision 344270) +++ projects/import-googletest-1.8.1/contrib/libarchive/libarchive/archive_read_disk_posix.c (revision 344271) @@ -1,2689 +1,2690 @@ /*- * Copyright (c) 2003-2009 Tim Kientzle * Copyright (c) 2010-2012 Michihiro NAKAJIMA * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer * in this position and unchanged. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR(S) ``AS IS'' AND ANY EXPRESS OR * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. * IN NO EVENT SHALL THE AUTHOR(S) BE LIABLE FOR ANY DIRECT, INDIRECT, * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* This is the tree-walking code for POSIX systems. 
 */
#if !defined(_WIN32) || defined(__CYGWIN__)

#include "archive_platform.h"
__FBSDID("$FreeBSD$");

#ifdef HAVE_SYS_PARAM_H
#include <sys/param.h>
#endif
#ifdef HAVE_SYS_MOUNT_H
#include <sys/mount.h>
#endif
#ifdef HAVE_SYS_STAT_H
#include <sys/stat.h>
#endif
#ifdef HAVE_SYS_STATFS_H
#include <sys/statfs.h>
#endif
#ifdef HAVE_SYS_STATVFS_H
#include <sys/statvfs.h>
#endif
#ifdef HAVE_SYS_TIME_H
#include <sys/time.h>
#endif
#ifdef HAVE_LINUX_MAGIC_H
#include <linux/magic.h>
#endif
#ifdef HAVE_LINUX_FS_H
#include <linux/fs.h>
#endif
/*
 * Some Linux distributions have both linux/ext2_fs.h and ext2fs/ext2_fs.h.
 * As the include guards don't agree, the order of include is important.
 */
#ifdef HAVE_LINUX_EXT2_FS_H
#include <linux/ext2_fs.h>	/* for Linux file flags */
#endif
#if defined(HAVE_EXT2FS_EXT2_FS_H) && !defined(__CYGWIN__)
#include <ext2fs/ext2_fs.h>	/* Linux file flags, broken on Cygwin */
#endif
#ifdef HAVE_DIRECT_H
#include <direct.h>
#endif
#ifdef HAVE_DIRENT_H
#include <dirent.h>
#endif
#ifdef HAVE_ERRNO_H
#include <errno.h>
#endif
#ifdef HAVE_FCNTL_H
#include <fcntl.h>
#endif
#ifdef HAVE_LIMITS_H
#include <limits.h>
#endif
#ifdef HAVE_STDLIB_H
#include <stdlib.h>
#endif
#ifdef HAVE_STRING_H
#include <string.h>
#endif
#ifdef HAVE_UNISTD_H
#include <unistd.h>
#endif
#ifdef HAVE_SYS_IOCTL_H
#include <sys/ioctl.h>
#endif

#include "archive.h"
#include "archive_string.h"
#include "archive_entry.h"
#include "archive_private.h"
#include "archive_read_disk_private.h"

#ifndef HAVE_FCHDIR
#error fchdir function required.
#endif
#ifndef O_BINARY
#define O_BINARY 0
#endif
#ifndef O_CLOEXEC
#define O_CLOEXEC 0
#endif

/*-
 * This is a new directory-walking system that addresses a number
 * of problems I've had with fts(3).  In particular, it has no
 * pathname-length limits (other than the size of 'int'), handles
 * deep logical traversals, uses considerably less memory, and has
 * an opaque interface (easier to modify in the future).
 *
 * Internally, it keeps a single list of "tree_entry" items that
 * represent filesystem objects that require further attention.
 * Non-directories are not kept in memory: they are pulled from
 * readdir(), returned to the client, then freed as soon as possible.
 * Any directory entry to be traversed gets pushed onto the stack.
 *
 * There is surprisingly little information that needs to be kept for
 * each item on the stack.  Just the name, depth (represented here as the
 * string length of the parent directory's pathname), and some markers
 * indicating how to get back to the parent (via chdir("..") for a
 * regular dir or via fchdir(2) for a symlink).
 */

/*
 * TODO:
 * 1) Loop checking.
 * 3) Arbitrary logical traversals by closing/reopening intermediate fds.
 */

struct restore_time {
	const char		*name;
	time_t			 mtime;
	long			 mtime_nsec;
	time_t			 atime;
	long			 atime_nsec;
	mode_t			 filetype;
	int			 noatime;
};

struct tree_entry {
	int			 depth;
	struct tree_entry	*next;
	struct tree_entry	*parent;
	struct archive_string	 name;
	size_t			 dirname_length;
	int64_t			 dev;
	int64_t			 ino;
	int			 flags;
	int			 filesystem_id;
	/* How to return back to the parent of a symlink. */
	int			 symlink_parent_fd;
	/* How to restore time of a directory. */
	struct restore_time	 restore_time;
};

struct filesystem {
	int64_t		dev;
	int		synthetic;
	int		remote;
	int		noatime;
#if defined(USE_READDIR_R)
	size_t		name_max;
#endif
	long		incr_xfer_size;
	long		max_xfer_size;
	long		min_xfer_size;
	long		xfer_align;

	/*
	 * Buffer used for reading file contents.
	 */
	/* Exactly allocated memory pointer. */
	unsigned char	*allocation_ptr;
	/* Pointer adjusted to the filesystem alignment. */
	unsigned char	*buff;
	size_t		 buff_size;
};

/* Definitions for tree_entry.flags bitmap. */
#define isDir		1  /* This entry is a regular directory. */
#define isDirLink	2  /* This entry is a symbolic link to a directory.
*/ #define needsFirstVisit 4 /* This is an initial entry. */ #define needsDescent 8 /* This entry needs to be previsited. */ #define needsOpen 16 /* This is a directory that needs to be opened. */ #define needsAscent 32 /* This entry needs to be postvisited. */ /* * Local data for this package. */ struct tree { struct tree_entry *stack; struct tree_entry *current; DIR *d; #define INVALID_DIR_HANDLE NULL struct dirent *de; #if defined(USE_READDIR_R) struct dirent *dirent; size_t dirent_allocated; #endif int flags; int visit_type; /* Error code from last failed operation. */ int tree_errno; /* Dynamically-sized buffer for holding path */ struct archive_string path; /* Last path element */ const char *basename; /* Leading dir length */ size_t dirname_length; int depth; int openCount; int maxOpenCount; int initial_dir_fd; int working_dir_fd; struct stat lst; struct stat st; int descend; int nlink; /* How to restore time of a file. */ struct restore_time restore_time; struct entry_sparse { int64_t length; int64_t offset; } *sparse_list, *current_sparse; int sparse_count; int sparse_list_size; char initial_symlink_mode; char symlink_mode; struct filesystem *current_filesystem; struct filesystem *filesystem_table; int initial_filesystem_id; int current_filesystem_id; int max_filesystem_id; int allocated_filesystem; int entry_fd; int entry_eof; int64_t entry_remaining_bytes; int64_t entry_total; unsigned char *entry_buff; size_t entry_buff_size; }; /* Definitions for tree.flags bitmap. */ #define hasStat 16 /* The st entry is valid. */ #define hasLstat 32 /* The lst entry is valid. */ #define onWorkingDir 64 /* We are on the working dir where we are * reading directory entry at this time. */ #define needsRestoreTimes 128 #define onInitialDir 256 /* We are on the initial dir. */ static int tree_dir_next_posix(struct tree *t); #ifdef HAVE_DIRENT_D_NAMLEN /* BSD extension; avoids need for a strlen() call. */ #define D_NAMELEN(dp) (dp)->d_namlen #else #define D_NAMELEN(dp) (strlen((dp)->d_name)) #endif /* Initiate/terminate a tree traversal. */ static struct tree *tree_open(const char *, int, int); static struct tree *tree_reopen(struct tree *, const char *, int); static void tree_close(struct tree *); static void tree_free(struct tree *); static void tree_push(struct tree *, const char *, int, int64_t, int64_t, struct restore_time *); static int tree_enter_initial_dir(struct tree *); static int tree_enter_working_dir(struct tree *); static int tree_current_dir_fd(struct tree *); /* * tree_next() returns Zero if there is no next entry, non-zero if * there is. Note that directories are visited three times. * Directories are always visited first as part of enumerating their * parent; that is a "regular" visit. If tree_descend() is invoked at * that time, the directory is added to a work list and will * subsequently be visited two more times: once just after descending * into the directory ("postdescent") and again just after ascending * back to the parent ("postascent"). * * TREE_ERROR_DIR is returned if the descent failed (because the * directory couldn't be opened, for instance). This is returned * instead of TREE_POSTDESCENT/TREE_POSTASCENT. TREE_ERROR_DIR is not a * fatal error, but it does imply that the relevant subtree won't be * visited. TREE_ERROR_FATAL is returned for an error that left the * traversal completely hosed. Right now, this is only returned for * chdir() failures during ascent. 
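 *
 * For illustration: for a directory "d" containing a single file "f",
 * a client that requests descent into "d" sees the sequence
 * TREE_REGULAR ("d"), TREE_POSTDESCENT ("d"), TREE_REGULAR ("f"),
 * TREE_POSTASCENT ("d").  The two extra directory visits give the
 * caller well-defined points before and after the directory's contents
 * are processed.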
*/ #define TREE_REGULAR 1 #define TREE_POSTDESCENT 2 #define TREE_POSTASCENT 3 #define TREE_ERROR_DIR -1 #define TREE_ERROR_FATAL -2 static int tree_next(struct tree *); /* * Return information about the current entry. */ /* * The current full pathname, length of the full pathname, and a name * that can be used to access the file. Because tree does use chdir * extensively, the access path is almost never the same as the full * current path. * * TODO: On platforms that support it, use openat()-style operations * to eliminate the chdir() operations entirely while still supporting * arbitrarily deep traversals. This makes access_path troublesome to * support, of course, which means we'll need a rich enough interface * that clients can function without it. (In particular, we'll need * tree_current_open() that returns an open file descriptor.) * */ static const char *tree_current_path(struct tree *); static const char *tree_current_access_path(struct tree *); /* * Request the lstat() or stat() data for the current path. Since the * tree package needs to do some of this anyway, and caches the * results, you should take advantage of it here if you need it rather * than make a redundant stat() or lstat() call of your own. */ static const struct stat *tree_current_stat(struct tree *); static const struct stat *tree_current_lstat(struct tree *); static int tree_current_is_symblic_link_target(struct tree *); /* The following functions use tricks to avoid a certain number of * stat()/lstat() calls. */ /* "is_physical_dir" is equivalent to S_ISDIR(tree_current_lstat()->st_mode) */ static int tree_current_is_physical_dir(struct tree *); /* "is_dir" is equivalent to S_ISDIR(tree_current_stat()->st_mode) */ static int tree_current_is_dir(struct tree *); static int update_current_filesystem(struct archive_read_disk *a, int64_t dev); static int setup_current_filesystem(struct archive_read_disk *); static int tree_target_is_same_as_parent(struct tree *, const struct stat *); static int _archive_read_disk_open(struct archive *, const char *); static int _archive_read_free(struct archive *); static int _archive_read_close(struct archive *); static int _archive_read_data_block(struct archive *, const void **, size_t *, int64_t *); static int _archive_read_next_header(struct archive *, struct archive_entry **); static int _archive_read_next_header2(struct archive *, struct archive_entry *); static const char *trivial_lookup_gname(void *, int64_t gid); static const char *trivial_lookup_uname(void *, int64_t uid); static int setup_sparse(struct archive_read_disk *, struct archive_entry *); static int close_and_restore_time(int fd, struct tree *, struct restore_time *); static int open_on_current_dir(struct tree *, const char *, int); static int tree_dup(int); static struct archive_vtable * archive_read_disk_vtable(void) { static struct archive_vtable av; static int inited = 0; if (!inited) { av.archive_free = _archive_read_free; av.archive_close = _archive_read_close; av.archive_read_data_block = _archive_read_data_block; av.archive_read_next_header = _archive_read_next_header; av.archive_read_next_header2 = _archive_read_next_header2; inited = 1; } return (&av); } const char * archive_read_disk_gname(struct archive *_a, la_int64_t gid) { struct archive_read_disk *a = (struct archive_read_disk *)_a; if (ARCHIVE_OK != __archive_check_magic(_a, ARCHIVE_READ_DISK_MAGIC, ARCHIVE_STATE_ANY, "archive_read_disk_gname")) return (NULL); if (a->lookup_gname == NULL) return (NULL); return 
((*a->lookup_gname)(a->lookup_gname_data, gid)); } const char * archive_read_disk_uname(struct archive *_a, la_int64_t uid) { struct archive_read_disk *a = (struct archive_read_disk *)_a; if (ARCHIVE_OK != __archive_check_magic(_a, ARCHIVE_READ_DISK_MAGIC, ARCHIVE_STATE_ANY, "archive_read_disk_uname")) return (NULL); if (a->lookup_uname == NULL) return (NULL); return ((*a->lookup_uname)(a->lookup_uname_data, uid)); } int archive_read_disk_set_gname_lookup(struct archive *_a, void *private_data, const char * (*lookup_gname)(void *private, la_int64_t gid), void (*cleanup_gname)(void *private)) { struct archive_read_disk *a = (struct archive_read_disk *)_a; archive_check_magic(&a->archive, ARCHIVE_READ_DISK_MAGIC, ARCHIVE_STATE_ANY, "archive_read_disk_set_gname_lookup"); if (a->cleanup_gname != NULL && a->lookup_gname_data != NULL) (a->cleanup_gname)(a->lookup_gname_data); a->lookup_gname = lookup_gname; a->cleanup_gname = cleanup_gname; a->lookup_gname_data = private_data; return (ARCHIVE_OK); } int archive_read_disk_set_uname_lookup(struct archive *_a, void *private_data, const char * (*lookup_uname)(void *private, la_int64_t uid), void (*cleanup_uname)(void *private)) { struct archive_read_disk *a = (struct archive_read_disk *)_a; archive_check_magic(&a->archive, ARCHIVE_READ_DISK_MAGIC, ARCHIVE_STATE_ANY, "archive_read_disk_set_uname_lookup"); if (a->cleanup_uname != NULL && a->lookup_uname_data != NULL) (a->cleanup_uname)(a->lookup_uname_data); a->lookup_uname = lookup_uname; a->cleanup_uname = cleanup_uname; a->lookup_uname_data = private_data; return (ARCHIVE_OK); } /* * Create a new archive_read_disk object and initialize it with global state. */ struct archive * archive_read_disk_new(void) { struct archive_read_disk *a; a = (struct archive_read_disk *)calloc(1, sizeof(*a)); if (a == NULL) return (NULL); a->archive.magic = ARCHIVE_READ_DISK_MAGIC; a->archive.state = ARCHIVE_STATE_NEW; a->archive.vtable = archive_read_disk_vtable(); a->entry = archive_entry_new2(&a->archive); a->lookup_uname = trivial_lookup_uname; a->lookup_gname = trivial_lookup_gname; a->flags = ARCHIVE_READDISK_MAC_COPYFILE; a->open_on_current_dir = open_on_current_dir; a->tree_current_dir_fd = tree_current_dir_fd; a->tree_enter_working_dir = tree_enter_working_dir; return (&a->archive); } static int _archive_read_free(struct archive *_a) { struct archive_read_disk *a = (struct archive_read_disk *)_a; int r; if (_a == NULL) return (ARCHIVE_OK); archive_check_magic(_a, ARCHIVE_READ_DISK_MAGIC, ARCHIVE_STATE_ANY | ARCHIVE_STATE_FATAL, "archive_read_free"); if (a->archive.state != ARCHIVE_STATE_CLOSED) r = _archive_read_close(&a->archive); else r = ARCHIVE_OK; tree_free(a->tree); if (a->cleanup_gname != NULL && a->lookup_gname_data != NULL) (a->cleanup_gname)(a->lookup_gname_data); if (a->cleanup_uname != NULL && a->lookup_uname_data != NULL) (a->cleanup_uname)(a->lookup_uname_data); archive_string_free(&a->archive.error_string); archive_entry_free(a->entry); a->archive.magic = 0; __archive_clean(&a->archive); free(a); return (r); } static int _archive_read_close(struct archive *_a) { struct archive_read_disk *a = (struct archive_read_disk *)_a; archive_check_magic(_a, ARCHIVE_READ_DISK_MAGIC, ARCHIVE_STATE_ANY | ARCHIVE_STATE_FATAL, "archive_read_close"); if (a->archive.state != ARCHIVE_STATE_FATAL) a->archive.state = ARCHIVE_STATE_CLOSED; tree_close(a->tree); return (ARCHIVE_OK); } static void setup_symlink_mode(struct archive_read_disk *a, char symlink_mode, int follow_symlinks) { a->symlink_mode = symlink_mode; 
a->follow_symlinks = follow_symlinks; if (a->tree != NULL) { a->tree->initial_symlink_mode = a->symlink_mode; a->tree->symlink_mode = a->symlink_mode; } } int archive_read_disk_set_symlink_logical(struct archive *_a) { struct archive_read_disk *a = (struct archive_read_disk *)_a; archive_check_magic(_a, ARCHIVE_READ_DISK_MAGIC, ARCHIVE_STATE_ANY, "archive_read_disk_set_symlink_logical"); setup_symlink_mode(a, 'L', 1); return (ARCHIVE_OK); } int archive_read_disk_set_symlink_physical(struct archive *_a) { struct archive_read_disk *a = (struct archive_read_disk *)_a; archive_check_magic(_a, ARCHIVE_READ_DISK_MAGIC, ARCHIVE_STATE_ANY, "archive_read_disk_set_symlink_physical"); setup_symlink_mode(a, 'P', 0); return (ARCHIVE_OK); } int archive_read_disk_set_symlink_hybrid(struct archive *_a) { struct archive_read_disk *a = (struct archive_read_disk *)_a; archive_check_magic(_a, ARCHIVE_READ_DISK_MAGIC, ARCHIVE_STATE_ANY, "archive_read_disk_set_symlink_hybrid"); setup_symlink_mode(a, 'H', 1);/* Follow symlinks initially. */ return (ARCHIVE_OK); } int archive_read_disk_set_atime_restored(struct archive *_a) { struct archive_read_disk *a = (struct archive_read_disk *)_a; archive_check_magic(_a, ARCHIVE_READ_DISK_MAGIC, ARCHIVE_STATE_ANY, "archive_read_disk_restore_atime"); #ifdef HAVE_UTIMES a->flags |= ARCHIVE_READDISK_RESTORE_ATIME; if (a->tree != NULL) a->tree->flags |= needsRestoreTimes; return (ARCHIVE_OK); #else /* Display warning and unset flag */ archive_set_error(&a->archive, ARCHIVE_ERRNO_MISC, "Cannot restore access time on this system"); a->flags &= ~ARCHIVE_READDISK_RESTORE_ATIME; return (ARCHIVE_WARN); #endif } int archive_read_disk_set_behavior(struct archive *_a, int flags) { struct archive_read_disk *a = (struct archive_read_disk *)_a; int r = ARCHIVE_OK; archive_check_magic(_a, ARCHIVE_READ_DISK_MAGIC, ARCHIVE_STATE_ANY, "archive_read_disk_honor_nodump"); a->flags = flags; if (flags & ARCHIVE_READDISK_RESTORE_ATIME) r = archive_read_disk_set_atime_restored(_a); else { if (a->tree != NULL) a->tree->flags &= ~needsRestoreTimes; } return (r); } /* * Trivial implementations of gname/uname lookup functions. * These are normally overridden by the client, but these stub * versions ensure that we always have something that works. */ static const char * trivial_lookup_gname(void *private_data, int64_t gid) { (void)private_data; /* UNUSED */ (void)gid; /* UNUSED */ return (NULL); } static const char * trivial_lookup_uname(void *private_data, int64_t uid) { (void)private_data; /* UNUSED */ (void)uid; /* UNUSED */ return (NULL); } /* * Allocate memory for the reading buffer adjusted to the filesystem * alignment. */ static int setup_suitable_read_buffer(struct archive_read_disk *a) { struct tree *t = a->tree; struct filesystem *cf = t->current_filesystem; size_t asize; size_t s; if (cf->allocation_ptr == NULL) { /* If we couldn't get a filesystem alignment, * we use 4096 as default value but we won't use * O_DIRECT to open() and openat() operations. */ long xfer_align = (cf->xfer_align == -1)?4096:cf->xfer_align; if (cf->max_xfer_size != -1) asize = cf->max_xfer_size + xfer_align; else { long incr = cf->incr_xfer_size; /* Some platform does not set a proper value to * incr_xfer_size.*/ if (incr < 0) incr = cf->min_xfer_size; if (cf->min_xfer_size < 0) { incr = xfer_align; asize = xfer_align; } else asize = cf->min_xfer_size; /* Increase a buffer size up to 64K bytes in * a proper increment size. 
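		/* For illustration: if min_xfer_size and incr_xfer_size
		 * were both 16K, the loop below would grow asize
		 * 16K -> 32K -> 48K -> 64K.  The xfer_align margin added
		 * afterwards leaves room to slide the buffer pointer
		 * forward to an aligned address.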
*/ while (asize < 1024*64) asize += incr; /* Take a margin to adjust to the filesystem * alignment. */ asize += xfer_align; } cf->allocation_ptr = malloc(asize); if (cf->allocation_ptr == NULL) { archive_set_error(&a->archive, ENOMEM, "Couldn't allocate memory"); a->archive.state = ARCHIVE_STATE_FATAL; return (ARCHIVE_FATAL); } /* * Calculate proper address for the filesystem. */ s = (uintptr_t)cf->allocation_ptr; s %= xfer_align; if (s > 0) s = xfer_align - s; /* * Set a read buffer pointer in the proper alignment of * the current filesystem. */ cf->buff = cf->allocation_ptr + s; cf->buff_size = asize - xfer_align; } return (ARCHIVE_OK); } static int _archive_read_data_block(struct archive *_a, const void **buff, size_t *size, int64_t *offset) { struct archive_read_disk *a = (struct archive_read_disk *)_a; struct tree *t = a->tree; int r; ssize_t bytes; size_t buffbytes; int empty_sparse_region = 0; archive_check_magic(_a, ARCHIVE_READ_DISK_MAGIC, ARCHIVE_STATE_DATA, "archive_read_data_block"); if (t->entry_eof || t->entry_remaining_bytes <= 0) { r = ARCHIVE_EOF; goto abort_read_data; } /* * Open the current file. */ if (t->entry_fd < 0) { int flags = O_RDONLY | O_BINARY | O_CLOEXEC; /* * Eliminate or reduce cache effects if we can. * * Carefully consider this to be enabled. */ #if defined(O_DIRECT) && 0/* Disabled for now */ if (t->current_filesystem->xfer_align != -1 && t->nlink == 1) flags |= O_DIRECT; #endif #if defined(O_NOATIME) /* * Linux has O_NOATIME flag; use it if we need. */ if ((t->flags & needsRestoreTimes) != 0 && t->restore_time.noatime == 0) flags |= O_NOATIME; do { #endif t->entry_fd = open_on_current_dir(t, tree_current_access_path(t), flags); __archive_ensure_cloexec_flag(t->entry_fd); #if defined(O_NOATIME) /* * When we did open the file with O_NOATIME flag, * if successful, set 1 to t->restore_time.noatime * not to restore an atime of the file later. * if failed by EPERM, retry it without O_NOATIME flag. */ if (flags & O_NOATIME) { if (t->entry_fd >= 0) t->restore_time.noatime = 1; else if (errno == EPERM) { flags &= ~O_NOATIME; continue; } } } while (0); #endif if (t->entry_fd < 0) { archive_set_error(&a->archive, errno, "Couldn't open %s", tree_current_path(t)); r = ARCHIVE_FAILED; tree_enter_initial_dir(t); goto abort_read_data; } tree_enter_initial_dir(t); } /* * Allocate read buffer if not allocated. */ if (t->current_filesystem->allocation_ptr == NULL) { r = setup_suitable_read_buffer(a); if (r != ARCHIVE_OK) { a->archive.state = ARCHIVE_STATE_FATAL; goto abort_read_data; } } t->entry_buff = t->current_filesystem->buff; t->entry_buff_size = t->current_filesystem->buff_size; buffbytes = t->entry_buff_size; if ((int64_t)buffbytes > t->current_sparse->length) buffbytes = t->current_sparse->length; if (t->current_sparse->length == 0) empty_sparse_region = 1; /* * Skip hole. * TODO: Should we consider t->current_filesystem->xfer_align? */ if (t->current_sparse->offset > t->entry_total) { if (lseek(t->entry_fd, (off_t)t->current_sparse->offset, SEEK_SET) < 0) { archive_set_error(&a->archive, errno, "Seek error"); r = ARCHIVE_FATAL; a->archive.state = ARCHIVE_STATE_FATAL; goto abort_read_data; } bytes = t->current_sparse->offset - t->entry_total; t->entry_remaining_bytes -= bytes; t->entry_total += bytes; } /* * Read file contents. 
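	 * Note that buffbytes was clamped above to the remaining length
	 * of the current sparse region, so a single read never crosses a
	 * region boundary.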
*/ if (buffbytes > 0) { bytes = read(t->entry_fd, t->entry_buff, buffbytes); if (bytes < 0) { archive_set_error(&a->archive, errno, "Read error"); r = ARCHIVE_FATAL; a->archive.state = ARCHIVE_STATE_FATAL; goto abort_read_data; } } else bytes = 0; /* * Return an EOF unless we've read a leading empty sparse region, which * is used to represent fully-sparse files. */ if (bytes == 0 && !empty_sparse_region) { /* Get EOF */ t->entry_eof = 1; r = ARCHIVE_EOF; goto abort_read_data; } *buff = t->entry_buff; *size = bytes; *offset = t->entry_total; t->entry_total += bytes; t->entry_remaining_bytes -= bytes; if (t->entry_remaining_bytes == 0) { /* Close the current file descriptor */ close_and_restore_time(t->entry_fd, t, &t->restore_time); t->entry_fd = -1; t->entry_eof = 1; } t->current_sparse->offset += bytes; t->current_sparse->length -= bytes; if (t->current_sparse->length == 0 && !t->entry_eof) t->current_sparse++; return (ARCHIVE_OK); abort_read_data: *buff = NULL; *size = 0; *offset = t->entry_total; if (t->entry_fd >= 0) { /* Close the current file descriptor */ close_and_restore_time(t->entry_fd, t, &t->restore_time); t->entry_fd = -1; } return (r); } static int next_entry(struct archive_read_disk *a, struct tree *t, struct archive_entry *entry) { const struct stat *st; /* info to use for this entry */ const struct stat *lst;/* lstat() information */ const char *name; int delayed, delayed_errno, descend, r; struct archive_string delayed_str; delayed = ARCHIVE_OK; + delayed_errno = 0; archive_string_init(&delayed_str); st = NULL; lst = NULL; t->descend = 0; do { switch (tree_next(t)) { case TREE_ERROR_FATAL: archive_set_error(&a->archive, t->tree_errno, "%s: Unable to continue traversing directory tree", tree_current_path(t)); a->archive.state = ARCHIVE_STATE_FATAL; tree_enter_initial_dir(t); return (ARCHIVE_FATAL); case TREE_ERROR_DIR: archive_set_error(&a->archive, ARCHIVE_ERRNO_MISC, "%s: Couldn't visit directory", tree_current_path(t)); tree_enter_initial_dir(t); return (ARCHIVE_FAILED); case 0: tree_enter_initial_dir(t); return (ARCHIVE_EOF); case TREE_POSTDESCENT: case TREE_POSTASCENT: break; case TREE_REGULAR: lst = tree_current_lstat(t); if (lst == NULL) { if (errno == ENOENT && t->depth > 0) { delayed = ARCHIVE_WARN; delayed_errno = errno; if (delayed_str.length == 0) { archive_string_sprintf(&delayed_str, "%s", tree_current_path(t)); } else { archive_string_sprintf(&delayed_str, " %s", tree_current_path(t)); } } else { archive_set_error(&a->archive, errno, "%s: Cannot stat", tree_current_path(t)); tree_enter_initial_dir(t); return (ARCHIVE_FAILED); } } break; } } while (lst == NULL); #ifdef __APPLE__ if (a->flags & ARCHIVE_READDISK_MAC_COPYFILE) { /* If we're using copyfile(), ignore "._XXX" files. */ const char *bname = strrchr(tree_current_path(t), '/'); if (bname == NULL) bname = tree_current_path(t); else ++bname; if (bname[0] == '.' && bname[1] == '_') return (ARCHIVE_RETRY); } #endif archive_entry_copy_pathname(entry, tree_current_path(t)); /* * Perform path matching. */ if (a->matching) { r = archive_match_path_excluded(a->matching, entry); if (r < 0) { archive_set_error(&(a->archive), errno, "Failed : %s", archive_error_string(a->matching)); return (r); } if (r) { if (a->excluded_cb_func) a->excluded_cb_func(&(a->archive), a->excluded_cb_data, entry); return (ARCHIVE_RETRY); } } /* * Distinguish 'L'/'P'/'H' symlink following. */ switch(t->symlink_mode) { case 'H': /* 'H': After the first item, rest like 'P'. 
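		 * (This mirrors the usual -H command-line semantics of
		 * tar-like tools: only the top-level argument itself is
		 * followed if it is a symlink.)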
*/ t->symlink_mode = 'P'; /* 'H': First item (from command line) like 'L'. */ /* FALLTHROUGH */ case 'L': /* 'L': Do descend through a symlink to dir. */ descend = tree_current_is_dir(t); /* 'L': Follow symlinks to files. */ a->symlink_mode = 'L'; a->follow_symlinks = 1; /* 'L': Archive symlinks as targets, if we can. */ st = tree_current_stat(t); if (st != NULL && !tree_target_is_same_as_parent(t, st)) break; /* If stat fails, we have a broken symlink; * in that case, don't follow the link. */ /* FALLTHROUGH */ default: /* 'P': Don't descend through a symlink to dir. */ descend = tree_current_is_physical_dir(t); /* 'P': Don't follow symlinks to files. */ a->symlink_mode = 'P'; a->follow_symlinks = 0; /* 'P': Archive symlinks as symlinks. */ st = lst; break; } if (update_current_filesystem(a, st->st_dev) != ARCHIVE_OK) { a->archive.state = ARCHIVE_STATE_FATAL; tree_enter_initial_dir(t); return (ARCHIVE_FATAL); } if (t->initial_filesystem_id == -1) t->initial_filesystem_id = t->current_filesystem_id; if (a->flags & ARCHIVE_READDISK_NO_TRAVERSE_MOUNTS) { if (t->initial_filesystem_id != t->current_filesystem_id) descend = 0; } t->descend = descend; /* * Honor nodump flag. * If the file is marked with nodump flag, do not return this entry. */ if (a->flags & ARCHIVE_READDISK_HONOR_NODUMP) { #if defined(HAVE_STRUCT_STAT_ST_FLAGS) && defined(UF_NODUMP) if (st->st_flags & UF_NODUMP) return (ARCHIVE_RETRY); #elif (defined(FS_IOC_GETFLAGS) && defined(FS_NODUMP_FL) && \ defined(HAVE_WORKING_FS_IOC_GETFLAGS)) || \ (defined(EXT2_IOC_GETFLAGS) && defined(EXT2_NODUMP_FL) && \ defined(HAVE_WORKING_EXT2_IOC_GETFLAGS)) if (S_ISREG(st->st_mode) || S_ISDIR(st->st_mode)) { int stflags; t->entry_fd = open_on_current_dir(t, tree_current_access_path(t), O_RDONLY | O_NONBLOCK | O_CLOEXEC); __archive_ensure_cloexec_flag(t->entry_fd); if (t->entry_fd >= 0) { r = ioctl(t->entry_fd, #ifdef FS_IOC_GETFLAGS FS_IOC_GETFLAGS, #else EXT2_IOC_GETFLAGS, #endif &stflags); #ifdef FS_NODUMP_FL if (r == 0 && (stflags & FS_NODUMP_FL) != 0) #else if (r == 0 && (stflags & EXT2_NODUMP_FL) != 0) #endif return (ARCHIVE_RETRY); } } #endif } archive_entry_copy_stat(entry, st); /* Save the times to be restored. This must be in before * calling archive_read_disk_descend() or any chance of it, * especially, invoking a callback. */ t->restore_time.mtime = archive_entry_mtime(entry); t->restore_time.mtime_nsec = archive_entry_mtime_nsec(entry); t->restore_time.atime = archive_entry_atime(entry); t->restore_time.atime_nsec = archive_entry_atime_nsec(entry); t->restore_time.filetype = archive_entry_filetype(entry); t->restore_time.noatime = t->current_filesystem->noatime; /* * Perform time matching. */ if (a->matching) { r = archive_match_time_excluded(a->matching, entry); if (r < 0) { archive_set_error(&(a->archive), errno, "Failed : %s", archive_error_string(a->matching)); return (r); } if (r) { if (a->excluded_cb_func) a->excluded_cb_func(&(a->archive), a->excluded_cb_data, entry); return (ARCHIVE_RETRY); } } /* Lookup uname/gname */ name = archive_read_disk_uname(&(a->archive), archive_entry_uid(entry)); if (name != NULL) archive_entry_copy_uname(entry, name); name = archive_read_disk_gname(&(a->archive), archive_entry_gid(entry)); if (name != NULL) archive_entry_copy_gname(entry, name); /* * Perform owner matching. 
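	 * As with the path and time matching above, a negative result is
	 * an error and a positive result means the entry is excluded;
	 * excluded entries are reported to the excluded-entry callback,
	 * if any, and skipped via ARCHIVE_RETRY.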
*/ if (a->matching) { r = archive_match_owner_excluded(a->matching, entry); if (r < 0) { archive_set_error(&(a->archive), errno, "Failed : %s", archive_error_string(a->matching)); return (r); } if (r) { if (a->excluded_cb_func) a->excluded_cb_func(&(a->archive), a->excluded_cb_data, entry); return (ARCHIVE_RETRY); } } /* * Invoke a meta data filter callback. */ if (a->metadata_filter_func) { if (!a->metadata_filter_func(&(a->archive), a->metadata_filter_data, entry)) return (ARCHIVE_RETRY); } /* * Populate the archive_entry with metadata from the disk. */ archive_entry_copy_sourcepath(entry, tree_current_access_path(t)); r = archive_read_disk_entry_from_file(&(a->archive), entry, t->entry_fd, st); if (r == ARCHIVE_OK) { r = delayed; if (r != ARCHIVE_OK) { archive_string_sprintf(&delayed_str, ": %s", "File removed before we read it"); archive_set_error(&(a->archive), delayed_errno, "%s", delayed_str.s); } } if (!archive_string_empty(&delayed_str)) archive_string_free(&delayed_str); return (r); } static int _archive_read_next_header(struct archive *_a, struct archive_entry **entryp) { int ret; struct archive_read_disk *a = (struct archive_read_disk *)_a; *entryp = NULL; ret = _archive_read_next_header2(_a, a->entry); *entryp = a->entry; return ret; } static int _archive_read_next_header2(struct archive *_a, struct archive_entry *entry) { struct archive_read_disk *a = (struct archive_read_disk *)_a; struct tree *t; int r; archive_check_magic(_a, ARCHIVE_READ_DISK_MAGIC, ARCHIVE_STATE_HEADER | ARCHIVE_STATE_DATA, "archive_read_next_header2"); t = a->tree; if (t->entry_fd >= 0) { close_and_restore_time(t->entry_fd, t, &t->restore_time); t->entry_fd = -1; } for (;;) { r = next_entry(a, t, entry); if (t->entry_fd >= 0) { close(t->entry_fd); t->entry_fd = -1; } if (r == ARCHIVE_RETRY) { archive_entry_clear(entry); continue; } break; } /* Return to the initial directory. */ tree_enter_initial_dir(t); /* * EOF and FATAL are persistent at this layer. By * modifying the state, we guarantee that future calls to * read a header or read data will fail. */ switch (r) { case ARCHIVE_EOF: a->archive.state = ARCHIVE_STATE_EOF; break; case ARCHIVE_OK: case ARCHIVE_WARN: /* Overwrite the sourcepath based on the initial directory. */ archive_entry_copy_sourcepath(entry, tree_current_path(t)); t->entry_total = 0; if (archive_entry_filetype(entry) == AE_IFREG) { t->nlink = archive_entry_nlink(entry); t->entry_remaining_bytes = archive_entry_size(entry); t->entry_eof = (t->entry_remaining_bytes == 0)? 
1: 0; if (!t->entry_eof && setup_sparse(a, entry) != ARCHIVE_OK) return (ARCHIVE_FATAL); } else { t->entry_remaining_bytes = 0; t->entry_eof = 1; } a->archive.state = ARCHIVE_STATE_DATA; break; case ARCHIVE_RETRY: break; case ARCHIVE_FATAL: a->archive.state = ARCHIVE_STATE_FATAL; break; } __archive_reset_read_data(&a->archive); return (r); } static int setup_sparse(struct archive_read_disk *a, struct archive_entry *entry) { struct tree *t = a->tree; int64_t length, offset; int i; t->sparse_count = archive_entry_sparse_reset(entry); if (t->sparse_count+1 > t->sparse_list_size) { free(t->sparse_list); t->sparse_list_size = t->sparse_count + 1; t->sparse_list = malloc(sizeof(t->sparse_list[0]) * t->sparse_list_size); if (t->sparse_list == NULL) { t->sparse_list_size = 0; archive_set_error(&a->archive, ENOMEM, "Can't allocate data"); a->archive.state = ARCHIVE_STATE_FATAL; return (ARCHIVE_FATAL); } } for (i = 0; i < t->sparse_count; i++) { archive_entry_sparse_next(entry, &offset, &length); t->sparse_list[i].offset = offset; t->sparse_list[i].length = length; } if (i == 0) { t->sparse_list[i].offset = 0; t->sparse_list[i].length = archive_entry_size(entry); } else { t->sparse_list[i].offset = archive_entry_size(entry); t->sparse_list[i].length = 0; } t->current_sparse = t->sparse_list; return (ARCHIVE_OK); } int archive_read_disk_set_matching(struct archive *_a, struct archive *_ma, void (*_excluded_func)(struct archive *, void *, struct archive_entry *), void *_client_data) { struct archive_read_disk *a = (struct archive_read_disk *)_a; archive_check_magic(_a, ARCHIVE_READ_DISK_MAGIC, ARCHIVE_STATE_ANY, "archive_read_disk_set_matching"); a->matching = _ma; a->excluded_cb_func = _excluded_func; a->excluded_cb_data = _client_data; return (ARCHIVE_OK); } int archive_read_disk_set_metadata_filter_callback(struct archive *_a, int (*_metadata_filter_func)(struct archive *, void *, struct archive_entry *), void *_client_data) { struct archive_read_disk *a = (struct archive_read_disk *)_a; archive_check_magic(_a, ARCHIVE_READ_DISK_MAGIC, ARCHIVE_STATE_ANY, "archive_read_disk_set_metadata_filter_callback"); a->metadata_filter_func = _metadata_filter_func; a->metadata_filter_data = _client_data; return (ARCHIVE_OK); } int archive_read_disk_can_descend(struct archive *_a) { struct archive_read_disk *a = (struct archive_read_disk *)_a; struct tree *t = a->tree; archive_check_magic(_a, ARCHIVE_READ_DISK_MAGIC, ARCHIVE_STATE_HEADER | ARCHIVE_STATE_DATA, "archive_read_disk_can_descend"); return (t->visit_type == TREE_REGULAR && t->descend); } /* * Called by the client to mark the directory just returned from * tree_next() as needing to be visited. 
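 *
 * The request is honored only if the most recent visit was a
 * TREE_REGULAR visit of a directory (t->descend is set).  The directory
 * is pushed together with its device/inode numbers, and flagged as
 * either a physical directory or a symlink to one, so the traversal
 * knows how to return to the parent later.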
*/ int archive_read_disk_descend(struct archive *_a) { struct archive_read_disk *a = (struct archive_read_disk *)_a; struct tree *t = a->tree; archive_check_magic(_a, ARCHIVE_READ_DISK_MAGIC, ARCHIVE_STATE_HEADER | ARCHIVE_STATE_DATA, "archive_read_disk_descend"); if (t->visit_type != TREE_REGULAR || !t->descend) return (ARCHIVE_OK); if (tree_current_is_physical_dir(t)) { tree_push(t, t->basename, t->current_filesystem_id, t->lst.st_dev, t->lst.st_ino, &t->restore_time); t->stack->flags |= isDir; } else if (tree_current_is_dir(t)) { tree_push(t, t->basename, t->current_filesystem_id, t->st.st_dev, t->st.st_ino, &t->restore_time); t->stack->flags |= isDirLink; } t->descend = 0; return (ARCHIVE_OK); } int archive_read_disk_open(struct archive *_a, const char *pathname) { struct archive_read_disk *a = (struct archive_read_disk *)_a; archive_check_magic(_a, ARCHIVE_READ_DISK_MAGIC, ARCHIVE_STATE_NEW | ARCHIVE_STATE_CLOSED, "archive_read_disk_open"); archive_clear_error(&a->archive); return (_archive_read_disk_open(_a, pathname)); } int archive_read_disk_open_w(struct archive *_a, const wchar_t *pathname) { struct archive_read_disk *a = (struct archive_read_disk *)_a; struct archive_string path; int ret; archive_check_magic(_a, ARCHIVE_READ_DISK_MAGIC, ARCHIVE_STATE_NEW | ARCHIVE_STATE_CLOSED, "archive_read_disk_open_w"); archive_clear_error(&a->archive); /* Make a char string from a wchar_t string. */ archive_string_init(&path); if (archive_string_append_from_wcs(&path, pathname, wcslen(pathname)) != 0) { if (errno == ENOMEM) archive_set_error(&a->archive, ENOMEM, "Can't allocate memory"); else archive_set_error(&a->archive, ARCHIVE_ERRNO_MISC, "Can't convert a path to a char string"); a->archive.state = ARCHIVE_STATE_FATAL; ret = ARCHIVE_FATAL; } else ret = _archive_read_disk_open(_a, path.s); archive_string_free(&path); return (ret); } static int _archive_read_disk_open(struct archive *_a, const char *pathname) { struct archive_read_disk *a = (struct archive_read_disk *)_a; if (a->tree != NULL) a->tree = tree_reopen(a->tree, pathname, a->flags & ARCHIVE_READDISK_RESTORE_ATIME); else a->tree = tree_open(pathname, a->symlink_mode, a->flags & ARCHIVE_READDISK_RESTORE_ATIME); if (a->tree == NULL) { archive_set_error(&a->archive, ENOMEM, "Can't allocate tar data"); a->archive.state = ARCHIVE_STATE_FATAL; return (ARCHIVE_FATAL); } a->archive.state = ARCHIVE_STATE_HEADER; return (ARCHIVE_OK); } /* * Return a current filesystem ID which is index of the filesystem entry * you've visited through archive_read_disk. */ int archive_read_disk_current_filesystem(struct archive *_a) { struct archive_read_disk *a = (struct archive_read_disk *)_a; archive_check_magic(_a, ARCHIVE_READ_DISK_MAGIC, ARCHIVE_STATE_DATA, "archive_read_disk_current_filesystem"); return (a->tree->current_filesystem_id); } static int update_current_filesystem(struct archive_read_disk *a, int64_t dev) { struct tree *t = a->tree; int i, fid; if (t->current_filesystem != NULL && t->current_filesystem->dev == dev) return (ARCHIVE_OK); for (i = 0; i < t->max_filesystem_id; i++) { if (t->filesystem_table[i].dev == dev) { /* There is the filesystem ID we've already generated. */ t->current_filesystem_id = i; t->current_filesystem = &(t->filesystem_table[i]); return (ARCHIVE_OK); } } /* * This is the new filesystem which we have to generate a new ID for. 
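 * The filesystem table is grown by doubling (see the realloc below), so
 * ID assignment stays cheap even when a traversal crosses many mount
 * points.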
*/ fid = t->max_filesystem_id++; if (t->max_filesystem_id > t->allocated_filesystem) { size_t s; void *p; s = t->max_filesystem_id * 2; p = realloc(t->filesystem_table, s * sizeof(*t->filesystem_table)); if (p == NULL) { archive_set_error(&a->archive, ENOMEM, "Can't allocate tar data"); return (ARCHIVE_FATAL); } t->filesystem_table = (struct filesystem *)p; t->allocated_filesystem = s; } t->current_filesystem_id = fid; t->current_filesystem = &(t->filesystem_table[fid]); t->current_filesystem->dev = dev; t->current_filesystem->allocation_ptr = NULL; t->current_filesystem->buff = NULL; /* Setup the current filesystem properties which depend on * platform specific. */ return (setup_current_filesystem(a)); } /* * Returns 1 if current filesystem is generated filesystem, 0 if it is not * or -1 if it is unknown. */ int archive_read_disk_current_filesystem_is_synthetic(struct archive *_a) { struct archive_read_disk *a = (struct archive_read_disk *)_a; archive_check_magic(_a, ARCHIVE_READ_DISK_MAGIC, ARCHIVE_STATE_DATA, "archive_read_disk_current_filesystem"); return (a->tree->current_filesystem->synthetic); } /* * Returns 1 if current filesystem is remote filesystem, 0 if it is not * or -1 if it is unknown. */ int archive_read_disk_current_filesystem_is_remote(struct archive *_a) { struct archive_read_disk *a = (struct archive_read_disk *)_a; archive_check_magic(_a, ARCHIVE_READ_DISK_MAGIC, ARCHIVE_STATE_DATA, "archive_read_disk_current_filesystem"); return (a->tree->current_filesystem->remote); } #if defined(_PC_REC_INCR_XFER_SIZE) && defined(_PC_REC_MAX_XFER_SIZE) &&\ defined(_PC_REC_MIN_XFER_SIZE) && defined(_PC_REC_XFER_ALIGN) static int get_xfer_size(struct tree *t, int fd, const char *path) { t->current_filesystem->xfer_align = -1; errno = 0; if (fd >= 0) { t->current_filesystem->incr_xfer_size = fpathconf(fd, _PC_REC_INCR_XFER_SIZE); t->current_filesystem->max_xfer_size = fpathconf(fd, _PC_REC_MAX_XFER_SIZE); t->current_filesystem->min_xfer_size = fpathconf(fd, _PC_REC_MIN_XFER_SIZE); t->current_filesystem->xfer_align = fpathconf(fd, _PC_REC_XFER_ALIGN); } else if (path != NULL) { t->current_filesystem->incr_xfer_size = pathconf(path, _PC_REC_INCR_XFER_SIZE); t->current_filesystem->max_xfer_size = pathconf(path, _PC_REC_MAX_XFER_SIZE); t->current_filesystem->min_xfer_size = pathconf(path, _PC_REC_MIN_XFER_SIZE); t->current_filesystem->xfer_align = pathconf(path, _PC_REC_XFER_ALIGN); } /* At least we need an alignment size. */ if (t->current_filesystem->xfer_align == -1) return ((errno == EINVAL)?1:-1); else return (0); } #else static int get_xfer_size(struct tree *t, int fd, const char *path) { (void)t; /* UNUSED */ (void)fd; /* UNUSED */ (void)path; /* UNUSED */ return (1);/* Not supported */ } #endif #if defined(HAVE_STATFS) && defined(HAVE_FSTATFS) && defined(MNT_LOCAL) \ && !defined(ST_LOCAL) /* * Gather current filesystem properties on FreeBSD, OpenBSD and Mac OS X. 
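 *
 * Aside (illustration only): the _PC_REC_* names probed by
 * get_xfer_size() above are optional POSIX extensions, so the probe
 * clears errno and treats EINVAL as "not supported"; callers then fall
 * back to the statfs-derived sizes computed below:
 *
 *	errno = 0;
 *	long align = fpathconf(fd, _PC_REC_XFER_ALIGN);
 *	if (align == -1 && errno == EINVAL)
 *		align = sfs.f_bsize;
 *
 * (Here sfs is the struct statfs filled in by this function.)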
*/ static int setup_current_filesystem(struct archive_read_disk *a) { struct tree *t = a->tree; struct statfs sfs; #if defined(HAVE_GETVFSBYNAME) && defined(VFCF_SYNTHETIC) /* TODO: configure should set GETVFSBYNAME_ARG_TYPE to make * this accurate; some platforms have both and we need the one that's * used by getvfsbyname() * * Then the following would become: * #if defined(GETVFSBYNAME_ARG_TYPE) * GETVFSBYNAME_ARG_TYPE vfc; * #endif */ # if defined(HAVE_STRUCT_XVFSCONF) struct xvfsconf vfc; # else struct vfsconf vfc; # endif #endif int r, xr = 0; #if !defined(HAVE_STRUCT_STATFS_F_NAMEMAX) long nm; #endif t->current_filesystem->synthetic = -1; t->current_filesystem->remote = -1; if (tree_current_is_symblic_link_target(t)) { #if defined(HAVE_OPENAT) /* * Get file system statistics on any directory * where current is. */ int fd = openat(tree_current_dir_fd(t), tree_current_access_path(t), O_RDONLY | O_CLOEXEC); __archive_ensure_cloexec_flag(fd); if (fd < 0) { archive_set_error(&a->archive, errno, "openat failed"); return (ARCHIVE_FAILED); } r = fstatfs(fd, &sfs); if (r == 0) xr = get_xfer_size(t, fd, NULL); close(fd); #else if (tree_enter_working_dir(t) != 0) { archive_set_error(&a->archive, errno, "fchdir failed"); return (ARCHIVE_FAILED); } r = statfs(tree_current_access_path(t), &sfs); if (r == 0) xr = get_xfer_size(t, -1, tree_current_access_path(t)); #endif } else { r = fstatfs(tree_current_dir_fd(t), &sfs); if (r == 0) xr = get_xfer_size(t, tree_current_dir_fd(t), NULL); } if (r == -1 || xr == -1) { archive_set_error(&a->archive, errno, "statfs failed"); return (ARCHIVE_FAILED); } else if (xr == 1) { /* pathconf(_PC_REX_*) operations are not supported. */ t->current_filesystem->xfer_align = sfs.f_bsize; t->current_filesystem->max_xfer_size = -1; t->current_filesystem->min_xfer_size = sfs.f_iosize; t->current_filesystem->incr_xfer_size = sfs.f_iosize; } if (sfs.f_flags & MNT_LOCAL) t->current_filesystem->remote = 0; else t->current_filesystem->remote = 1; #if defined(HAVE_GETVFSBYNAME) && defined(VFCF_SYNTHETIC) r = getvfsbyname(sfs.f_fstypename, &vfc); if (r == -1) { archive_set_error(&a->archive, errno, "getvfsbyname failed"); return (ARCHIVE_FAILED); } if (vfc.vfc_flags & VFCF_SYNTHETIC) t->current_filesystem->synthetic = 1; else t->current_filesystem->synthetic = 0; #endif #if defined(MNT_NOATIME) if (sfs.f_flags & MNT_NOATIME) t->current_filesystem->noatime = 1; else #endif t->current_filesystem->noatime = 0; #if defined(USE_READDIR_R) /* Set maximum filename length. */ #if defined(HAVE_STRUCT_STATFS_F_NAMEMAX) t->current_filesystem->name_max = sfs.f_namemax; #else # if defined(_PC_NAME_MAX) /* Mac OS X does not have f_namemax in struct statfs. 
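 *
 * Aside (illustration only; "/proc" is a placeholder mount point): the
 * VFCF_SYNTHETIC probe earlier in this function maps the mounted
 * fstype name back to its vfsconf entry; a standalone FreeBSD sketch:
 *
 *	struct statfs sfs;
 *	struct xvfsconf vfc;
 *	int synthetic = 0;
 *	if (statfs("/proc", &sfs) == 0 &&
 *	    getvfsbyname(sfs.f_fstypename, &vfc) == 0)
 *		synthetic = (vfc.vfc_flags & VFCF_SYNTHETIC) != 0;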
*/ if (tree_current_is_symblic_link_target(t)) { if (tree_enter_working_dir(t) != 0) { archive_set_error(&a->archive, errno, "fchdir failed"); return (ARCHIVE_FAILED); } nm = pathconf(tree_current_access_path(t), _PC_NAME_MAX); } else nm = fpathconf(tree_current_dir_fd(t), _PC_NAME_MAX); # else nm = -1; # endif if (nm == -1) t->current_filesystem->name_max = NAME_MAX; else t->current_filesystem->name_max = nm; #endif #endif /* USE_READDIR_R */ return (ARCHIVE_OK); } #elif (defined(HAVE_STATVFS) || defined(HAVE_FSTATVFS)) && defined(ST_LOCAL) /* * Gather current filesystem properties on NetBSD */ static int setup_current_filesystem(struct archive_read_disk *a) { struct tree *t = a->tree; struct statvfs sfs; int r, xr = 0; t->current_filesystem->synthetic = -1; if (tree_enter_working_dir(t) != 0) { archive_set_error(&a->archive, errno, "fchdir failed"); return (ARCHIVE_FAILED); } if (tree_current_is_symblic_link_target(t)) { r = statvfs(tree_current_access_path(t), &sfs); if (r == 0) xr = get_xfer_size(t, -1, tree_current_access_path(t)); } else { #ifdef HAVE_FSTATVFS r = fstatvfs(tree_current_dir_fd(t), &sfs); if (r == 0) xr = get_xfer_size(t, tree_current_dir_fd(t), NULL); #else r = statvfs(".", &sfs); if (r == 0) xr = get_xfer_size(t, -1, "."); #endif } if (r == -1 || xr == -1) { t->current_filesystem->remote = -1; archive_set_error(&a->archive, errno, "statvfs failed"); return (ARCHIVE_FAILED); } else if (xr == 1) { /* Usually come here unless NetBSD supports _PC_REC_XFER_ALIGN * for pathconf() function. */ t->current_filesystem->xfer_align = sfs.f_frsize; t->current_filesystem->max_xfer_size = -1; #if defined(HAVE_STRUCT_STATVFS_F_IOSIZE) t->current_filesystem->min_xfer_size = sfs.f_iosize; t->current_filesystem->incr_xfer_size = sfs.f_iosize; #else t->current_filesystem->min_xfer_size = sfs.f_bsize; t->current_filesystem->incr_xfer_size = sfs.f_bsize; #endif } if (sfs.f_flag & ST_LOCAL) t->current_filesystem->remote = 0; else t->current_filesystem->remote = 1; #if defined(ST_NOATIME) if (sfs.f_flag & ST_NOATIME) t->current_filesystem->noatime = 1; else #endif t->current_filesystem->noatime = 0; /* Set maximum filename length. */ t->current_filesystem->name_max = sfs.f_namemax; return (ARCHIVE_OK); } #elif defined(HAVE_SYS_STATFS_H) && defined(HAVE_LINUX_MAGIC_H) &&\ defined(HAVE_STATFS) && defined(HAVE_FSTATFS) /* * Note: statfs is deprecated since LSB 3.2 */ #ifndef CIFS_SUPER_MAGIC #define CIFS_SUPER_MAGIC 0xFF534D42 #endif #ifndef DEVFS_SUPER_MAGIC #define DEVFS_SUPER_MAGIC 0x1373 #endif /* * Gather current filesystem properties on Linux */ static int setup_current_filesystem(struct archive_read_disk *a) { struct tree *t = a->tree; struct statfs sfs; #if defined(HAVE_STATVFS) struct statvfs svfs; #endif int r, vr = 0, xr = 0; if (tree_current_is_symblic_link_target(t)) { #if defined(HAVE_OPENAT) /* * Get file system statistics on any directory * where current is. 
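 *
 * Illustration only ("dirfd" and "name" are placeholders): statting
 * relative to the already-open directory fd avoids a chdir() round
 * trip and is robust against the tree being renamed mid-traversal:
 *
 *	struct statfs sfs;
 *	int ok = 0;
 *	int fd = openat(dirfd, name, O_RDONLY | O_CLOEXEC);
 *	if (fd >= 0) {
 *		ok = (fstatfs(fd, &sfs) == 0);
 *		close(fd);
 *	}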
*/ int fd = openat(tree_current_dir_fd(t), tree_current_access_path(t), O_RDONLY | O_CLOEXEC); __archive_ensure_cloexec_flag(fd); if (fd < 0) { archive_set_error(&a->archive, errno, "openat failed"); return (ARCHIVE_FAILED); } #if defined(HAVE_FSTATVFS) vr = fstatvfs(fd, &svfs);/* for f_flag, mount flags */ #endif r = fstatfs(fd, &sfs); if (r == 0) xr = get_xfer_size(t, fd, NULL); close(fd); #else if (tree_enter_working_dir(t) != 0) { archive_set_error(&a->archive, errno, "fchdir failed"); return (ARCHIVE_FAILED); } #if defined(HAVE_STATVFS) vr = statvfs(tree_current_access_path(t), &svfs); #endif r = statfs(tree_current_access_path(t), &sfs); if (r == 0) xr = get_xfer_size(t, -1, tree_current_access_path(t)); #endif } else { #ifdef HAVE_FSTATFS #if defined(HAVE_FSTATVFS) vr = fstatvfs(tree_current_dir_fd(t), &svfs); #endif r = fstatfs(tree_current_dir_fd(t), &sfs); if (r == 0) xr = get_xfer_size(t, tree_current_dir_fd(t), NULL); #else if (tree_enter_working_dir(t) != 0) { archive_set_error(&a->archive, errno, "fchdir failed"); return (ARCHIVE_FAILED); } #if defined(HAVE_STATVFS) vr = statvfs(".", &svfs); #endif r = statfs(".", &sfs); if (r == 0) xr = get_xfer_size(t, -1, "."); #endif } if (r == -1 || xr == -1 || vr == -1) { t->current_filesystem->synthetic = -1; t->current_filesystem->remote = -1; archive_set_error(&a->archive, errno, "statfs failed"); return (ARCHIVE_FAILED); } else if (xr == 1) { /* pathconf(_PC_REX_*) operations are not supported. */ #if defined(HAVE_STATVFS) t->current_filesystem->xfer_align = svfs.f_frsize; t->current_filesystem->max_xfer_size = -1; t->current_filesystem->min_xfer_size = svfs.f_bsize; t->current_filesystem->incr_xfer_size = svfs.f_bsize; #else t->current_filesystem->xfer_align = sfs.f_frsize; t->current_filesystem->max_xfer_size = -1; t->current_filesystem->min_xfer_size = sfs.f_bsize; t->current_filesystem->incr_xfer_size = sfs.f_bsize; #endif } switch (sfs.f_type) { case AFS_SUPER_MAGIC: case CIFS_SUPER_MAGIC: case CODA_SUPER_MAGIC: case NCP_SUPER_MAGIC:/* NetWare */ case NFS_SUPER_MAGIC: case SMB_SUPER_MAGIC: t->current_filesystem->remote = 1; t->current_filesystem->synthetic = 0; break; case DEVFS_SUPER_MAGIC: case PROC_SUPER_MAGIC: case USBDEVICE_SUPER_MAGIC: t->current_filesystem->remote = 0; t->current_filesystem->synthetic = 1; break; default: t->current_filesystem->remote = 0; t->current_filesystem->synthetic = 0; break; } #if defined(ST_NOATIME) #if defined(HAVE_STATVFS) if (svfs.f_flag & ST_NOATIME) #else if (sfs.f_flag & ST_NOATIME) #endif t->current_filesystem->noatime = 1; else #endif t->current_filesystem->noatime = 0; #if defined(USE_READDIR_R) /* Set maximum filename length. */ t->current_filesystem->name_max = sfs.f_namelen; #endif return (ARCHIVE_OK); } #elif defined(HAVE_SYS_STATVFS_H) &&\ (defined(HAVE_STATVFS) || defined(HAVE_FSTATVFS)) /* * Gather current filesystem properties on other posix platform. */ static int setup_current_filesystem(struct archive_read_disk *a) { struct tree *t = a->tree; struct statvfs sfs; int r, xr = 0; t->current_filesystem->synthetic = -1;/* Not supported */ t->current_filesystem->remote = -1;/* Not supported */ if (tree_current_is_symblic_link_target(t)) { #if defined(HAVE_OPENAT) /* * Get file system statistics on any directory * where current is. 
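 *
 * Aside (illustration only; "/mnt/nfs" is a placeholder path): the
 * remote/synthetic classification in the Linux variant above keys off
 * the statfs(2) magic number:
 *
 *	#include <sys/vfs.h>
 *	#include <linux/magic.h>
 *
 *	struct statfs sfs;
 *	int remote = (statfs("/mnt/nfs", &sfs) == 0 &&
 *	    sfs.f_type == NFS_SUPER_MAGIC);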
*/ int fd = openat(tree_current_dir_fd(t), tree_current_access_path(t), O_RDONLY | O_CLOEXEC); __archive_ensure_cloexec_flag(fd); if (fd < 0) { archive_set_error(&a->archive, errno, "openat failed"); return (ARCHIVE_FAILED); } r = fstatvfs(fd, &sfs); if (r == 0) xr = get_xfer_size(t, fd, NULL); close(fd); #else if (tree_enter_working_dir(t) != 0) { archive_set_error(&a->archive, errno, "fchdir failed"); return (ARCHIVE_FAILED); } r = statvfs(tree_current_access_path(t), &sfs); if (r == 0) xr = get_xfer_size(t, -1, tree_current_access_path(t)); #endif } else { #ifdef HAVE_FSTATVFS r = fstatvfs(tree_current_dir_fd(t), &sfs); if (r == 0) xr = get_xfer_size(t, tree_current_dir_fd(t), NULL); #else if (tree_enter_working_dir(t) != 0) { archive_set_error(&a->archive, errno, "fchdir failed"); return (ARCHIVE_FAILED); } r = statvfs(".", &sfs); if (r == 0) xr = get_xfer_size(t, -1, "."); #endif } if (r == -1 || xr == -1) { t->current_filesystem->synthetic = -1; t->current_filesystem->remote = -1; archive_set_error(&a->archive, errno, "statvfs failed"); return (ARCHIVE_FAILED); } else if (xr == 1) { /* pathconf(_PC_REX_*) operations are not supported. */ t->current_filesystem->xfer_align = sfs.f_frsize; t->current_filesystem->max_xfer_size = -1; t->current_filesystem->min_xfer_size = sfs.f_bsize; t->current_filesystem->incr_xfer_size = sfs.f_bsize; } #if defined(ST_NOATIME) if (sfs.f_flag & ST_NOATIME) t->current_filesystem->noatime = 1; else #endif t->current_filesystem->noatime = 0; #if defined(USE_READDIR_R) /* Set maximum filename length. */ t->current_filesystem->name_max = sfs.f_namemax; #endif return (ARCHIVE_OK); } #else /* * Generic: Gather current filesystem properties. * TODO: Is this generic function really needed? */ static int setup_current_filesystem(struct archive_read_disk *a) { struct tree *t = a->tree; #if defined(_PC_NAME_MAX) && defined(USE_READDIR_R) long nm; #endif t->current_filesystem->synthetic = -1;/* Not supported */ t->current_filesystem->remote = -1;/* Not supported */ t->current_filesystem->noatime = 0; (void)get_xfer_size(t, -1, ".");/* Dummy call to avoid build error. */ t->current_filesystem->xfer_align = -1;/* Unknown */ t->current_filesystem->max_xfer_size = -1; t->current_filesystem->min_xfer_size = -1; t->current_filesystem->incr_xfer_size = -1; #if defined(USE_READDIR_R) /* Set maximum filename length. */ # if defined(_PC_NAME_MAX) if (tree_current_is_symblic_link_target(t)) { if (tree_enter_working_dir(t) != 0) { archive_set_error(&a->archive, errno, "fchdir failed"); return (ARCHIVE_FAILED); } nm = pathconf(tree_current_access_path(t), _PC_NAME_MAX); } else nm = fpathconf(tree_current_dir_fd(t), _PC_NAME_MAX); if (nm == -1) # endif /* _PC_NAME_MAX */ /* * Some systems (HP-UX or others?) incorrectly defined * NAME_MAX macro to be a smaller value. */ # if defined(NAME_MAX) && NAME_MAX >= 255 t->current_filesystem->name_max = NAME_MAX; # else /* No way to get a trusted value of maximum filename * length. 
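 *
 * The resulting fallback ladder, summarized (illustration only; the
 * real code makes the NAME_MAX choice at compile time, and "dirfd" is
 * a placeholder):
 *
 *	long nm = fpathconf(dirfd, _PC_NAME_MAX);
 *	if (nm == -1)
 *		nm = (NAME_MAX >= 255) ? NAME_MAX : PATH_MAX;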
*/ t->current_filesystem->name_max = PATH_MAX; # endif /* NAME_MAX */ # if defined(_PC_NAME_MAX) else t->current_filesystem->name_max = nm; # endif /* _PC_NAME_MAX */ #endif /* USE_READDIR_R */ return (ARCHIVE_OK); } #endif static int close_and_restore_time(int fd, struct tree *t, struct restore_time *rt) { #ifndef HAVE_UTIMES (void)t; /* UNUSED */ (void)rt; /* UNUSED */ return (close(fd)); #else #if defined(HAVE_FUTIMENS) && !defined(__CYGWIN__) struct timespec timespecs[2]; #endif struct timeval times[2]; if ((t->flags & needsRestoreTimes) == 0 || rt->noatime) { if (fd >= 0) return (close(fd)); else return (0); } #if defined(HAVE_FUTIMENS) && !defined(__CYGWIN__) timespecs[1].tv_sec = rt->mtime; timespecs[1].tv_nsec = rt->mtime_nsec; timespecs[0].tv_sec = rt->atime; timespecs[0].tv_nsec = rt->atime_nsec; /* futimens() is defined in POSIX.1-2008. */ if (futimens(fd, timespecs) == 0) return (close(fd)); #endif times[1].tv_sec = rt->mtime; times[1].tv_usec = rt->mtime_nsec / 1000; times[0].tv_sec = rt->atime; times[0].tv_usec = rt->atime_nsec / 1000; #if !defined(HAVE_FUTIMENS) && defined(HAVE_FUTIMES) && !defined(__CYGWIN__) if (futimes(fd, times) == 0) return (close(fd)); #endif close(fd); #if defined(HAVE_FUTIMESAT) if (futimesat(tree_current_dir_fd(t), rt->name, times) == 0) return (0); #endif #ifdef HAVE_LUTIMES if (lutimes(rt->name, times) != 0) #else if (AE_IFLNK != rt->filetype && utimes(rt->name, times) != 0) #endif return (-1); #endif return (0); } static int open_on_current_dir(struct tree *t, const char *path, int flags) { #ifdef HAVE_OPENAT return (openat(tree_current_dir_fd(t), path, flags)); #else if (tree_enter_working_dir(t) != 0) return (-1); return (open(path, flags)); #endif } static int tree_dup(int fd) { int new_fd; #ifdef F_DUPFD_CLOEXEC static volatile int can_dupfd_cloexec = 1; if (can_dupfd_cloexec) { new_fd = fcntl(fd, F_DUPFD_CLOEXEC, 0); if (new_fd != -1) return (new_fd); /* Linux 2.6.18 - 2.6.23 declare F_DUPFD_CLOEXEC, * but it cannot be used. So we have to try dup(). */ /* We won't try F_DUPFD_CLOEXEC. */ can_dupfd_cloexec = 0; } #endif /* F_DUPFD_CLOEXEC */ new_fd = dup(fd); __archive_ensure_cloexec_flag(new_fd); return (new_fd); } /* * Add a directory path to the current stack. */ static void tree_push(struct tree *t, const char *path, int filesystem_id, int64_t dev, int64_t ino, struct restore_time *rt) { struct tree_entry *te; te = calloc(1, sizeof(*te)); te->next = t->stack; te->parent = t->current; if (te->parent) te->depth = te->parent->depth + 1; t->stack = te; archive_string_init(&te->name); te->symlink_parent_fd = -1; archive_strcpy(&te->name, path); te->flags = needsDescent | needsOpen | needsAscent; te->filesystem_id = filesystem_id; te->dev = dev; te->ino = ino; te->dirname_length = t->dirname_length; te->restore_time.name = te->name.s; if (rt != NULL) { te->restore_time.mtime = rt->mtime; te->restore_time.mtime_nsec = rt->mtime_nsec; te->restore_time.atime = rt->atime; te->restore_time.atime_nsec = rt->atime_nsec; te->restore_time.filetype = rt->filetype; te->restore_time.noatime = rt->noatime; } } /* * Append a name to the current dir path. */ static void tree_append(struct tree *t, const char *name, size_t name_length) { size_t size_needed; t->path.s[t->dirname_length] = '\0'; t->path.length = t->dirname_length; /* Strip trailing '/' from name, unless entire name is "/". */ while (name_length > 1 && name[name_length - 1] == '/') name_length--; /* Resize pathname buffer as needed. 
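 *
 * (The "+ 2" in the computation below reserves room for a separating
 * '/' and the terminating NUL.)
 *
 * Aside (illustration only): tree_dup() earlier in this file prefers
 * an atomic duplicate-with-close-on-exec, falling back to plain dup()
 * on kernels that declare but reject F_DUPFD_CLOEXEC; the real code
 * then sets FD_CLOEXEC separately on the fallback descriptor:
 *
 *	int nfd = fcntl(fd, F_DUPFD_CLOEXEC, 0);
 *	if (nfd == -1)
 *		nfd = dup(fd);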
*/ size_needed = name_length + t->dirname_length + 2; archive_string_ensure(&t->path, size_needed); /* Add a separating '/' if it's needed. */ if (t->dirname_length > 0 && t->path.s[archive_strlen(&t->path)-1] != '/') archive_strappend_char(&t->path, '/'); t->basename = t->path.s + archive_strlen(&t->path); archive_strncat(&t->path, name, name_length); t->restore_time.name = t->basename; } /* * Open a directory tree for traversal. */ static struct tree * tree_open(const char *path, int symlink_mode, int restore_time) { struct tree *t; if ((t = calloc(1, sizeof(*t))) == NULL) return (NULL); archive_string_init(&t->path); archive_string_ensure(&t->path, 31); t->initial_symlink_mode = symlink_mode; return (tree_reopen(t, path, restore_time)); } static struct tree * tree_reopen(struct tree *t, const char *path, int restore_time) { t->flags = (restore_time != 0)?needsRestoreTimes:0; t->flags |= onInitialDir; t->visit_type = 0; t->tree_errno = 0; t->dirname_length = 0; t->depth = 0; t->descend = 0; t->current = NULL; t->d = INVALID_DIR_HANDLE; t->symlink_mode = t->initial_symlink_mode; archive_string_empty(&t->path); t->entry_fd = -1; t->entry_eof = 0; t->entry_remaining_bytes = 0; t->initial_filesystem_id = -1; /* First item is set up a lot like a symlink traversal. */ tree_push(t, path, 0, 0, 0, NULL); t->stack->flags = needsFirstVisit; t->maxOpenCount = t->openCount = 1; t->initial_dir_fd = open(".", O_RDONLY | O_CLOEXEC); __archive_ensure_cloexec_flag(t->initial_dir_fd); t->working_dir_fd = tree_dup(t->initial_dir_fd); return (t); } static int tree_descent(struct tree *t) { int flag, new_fd, r = 0; t->dirname_length = archive_strlen(&t->path); flag = O_RDONLY | O_CLOEXEC; #if defined(O_DIRECTORY) flag |= O_DIRECTORY; #endif new_fd = open_on_current_dir(t, t->stack->name.s, flag); __archive_ensure_cloexec_flag(new_fd); if (new_fd < 0) { t->tree_errno = errno; r = TREE_ERROR_DIR; } else { t->depth++; /* If it is a link, set up fd for the ascent. */ if (t->stack->flags & isDirLink) { t->stack->symlink_parent_fd = t->working_dir_fd; t->openCount++; if (t->openCount > t->maxOpenCount) t->maxOpenCount = t->openCount; } else close(t->working_dir_fd); /* Renew the current working directory. */ t->working_dir_fd = new_fd; t->flags &= ~onWorkingDir; } return (r); } /* * We've finished a directory; ascend back to the parent. */ static int tree_ascend(struct tree *t) { struct tree_entry *te; int new_fd, r = 0, prev_dir_fd; te = t->stack; prev_dir_fd = t->working_dir_fd; if (te->flags & isDirLink) new_fd = te->symlink_parent_fd; else { new_fd = open_on_current_dir(t, "..", O_RDONLY | O_CLOEXEC); __archive_ensure_cloexec_flag(new_fd); } if (new_fd < 0) { t->tree_errno = errno; r = TREE_ERROR_FATAL; } else { /* Renew the current working directory. */ t->working_dir_fd = new_fd; t->flags &= ~onWorkingDir; /* Current directory has been changed, we should * close an fd of previous working directory. */ close_and_restore_time(prev_dir_fd, t, &te->restore_time); if (te->flags & isDirLink) { t->openCount--; te->symlink_parent_fd = -1; } t->depth--; } return (r); } /* * Return to the initial directory where tree_open() was performed. */ static int tree_enter_initial_dir(struct tree *t) { int r = 0; if ((t->flags & onInitialDir) == 0) { r = fchdir(t->initial_dir_fd); if (r == 0) { t->flags &= ~onWorkingDir; t->flags |= onInitialDir; } } return (r); } /* * Restore working directory of directory traversals. 
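 *
 * Illustration only ("/tmp" is a placeholder destination): the
 * traversal tracks directories as open file descriptors rather than
 * path strings, so returning to one is a single fchdir() on a saved
 * fd:
 *
 *	int saved = open(".", O_RDONLY | O_CLOEXEC);
 *	chdir("/tmp");
 *	fchdir(saved);
 *	close(saved);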
*/ static int tree_enter_working_dir(struct tree *t) { int r = 0; /* * Change the current directory if really needed. * Sometimes this is unneeded when we did not do * descent. */ if (t->depth > 0 && (t->flags & onWorkingDir) == 0) { r = fchdir(t->working_dir_fd); if (r == 0) { t->flags &= ~onInitialDir; t->flags |= onWorkingDir; } } return (r); } static int tree_current_dir_fd(struct tree *t) { return (t->working_dir_fd); } /* * Pop the working stack. */ static void tree_pop(struct tree *t) { struct tree_entry *te; t->path.s[t->dirname_length] = '\0'; t->path.length = t->dirname_length; if (t->stack == t->current && t->current != NULL) t->current = t->current->parent; te = t->stack; t->stack = te->next; t->dirname_length = te->dirname_length; t->basename = t->path.s + t->dirname_length; while (t->basename[0] == '/') t->basename++; archive_string_free(&te->name); free(te); } /* * Get the next item in the tree traversal. */ static int tree_next(struct tree *t) { int r; while (t->stack != NULL) { /* If there's an open dir, get the next entry from there. */ if (t->d != INVALID_DIR_HANDLE) { r = tree_dir_next_posix(t); if (r == 0) continue; return (r); } if (t->stack->flags & needsFirstVisit) { /* Top stack item needs a regular visit. */ t->current = t->stack; tree_append(t, t->stack->name.s, archive_strlen(&(t->stack->name))); /* t->dirname_length = t->path_length; */ /* tree_pop(t); */ t->stack->flags &= ~needsFirstVisit; return (t->visit_type = TREE_REGULAR); } else if (t->stack->flags & needsDescent) { /* Top stack item is dir to descend into. */ t->current = t->stack; tree_append(t, t->stack->name.s, archive_strlen(&(t->stack->name))); t->stack->flags &= ~needsDescent; r = tree_descent(t); if (r != 0) { tree_pop(t); t->visit_type = r; } else t->visit_type = TREE_POSTDESCENT; return (t->visit_type); } else if (t->stack->flags & needsOpen) { t->stack->flags &= ~needsOpen; r = tree_dir_next_posix(t); if (r == 0) continue; return (r); } else if (t->stack->flags & needsAscent) { /* Top stack item is dir and we're done with it. */ r = tree_ascend(t); tree_pop(t); t->visit_type = r != 0 ? r : TREE_POSTASCENT; return (t->visit_type); } else { /* Top item on stack is dead. */ tree_pop(t); t->flags &= ~hasLstat; t->flags &= ~hasStat; } } return (t->visit_type = 0); } static int tree_dir_next_posix(struct tree *t) { int r; const char *name; size_t namelen; if (t->d == NULL) { #if defined(USE_READDIR_R) size_t dirent_size; #endif #if defined(HAVE_FDOPENDIR) t->d = fdopendir(tree_dup(t->working_dir_fd)); #else /* HAVE_FDOPENDIR */ if (tree_enter_working_dir(t) == 0) { t->d = opendir("."); #if HAVE_DIRFD || defined(dirfd) __archive_ensure_cloexec_flag(dirfd(t->d)); #endif } #endif /* HAVE_FDOPENDIR */ if (t->d == NULL) { r = tree_ascend(t); /* Undo "chdir" */ tree_pop(t); t->tree_errno = errno; t->visit_type = r != 0 ? 
r : TREE_ERROR_DIR; return (t->visit_type); } #if defined(USE_READDIR_R) dirent_size = offsetof(struct dirent, d_name) + t->filesystem_table[t->current->filesystem_id].name_max + 1; if (t->dirent == NULL || t->dirent_allocated < dirent_size) { free(t->dirent); t->dirent = malloc(dirent_size); if (t->dirent == NULL) { closedir(t->d); t->d = INVALID_DIR_HANDLE; (void)tree_ascend(t); tree_pop(t); t->tree_errno = ENOMEM; t->visit_type = TREE_ERROR_DIR; return (t->visit_type); } t->dirent_allocated = dirent_size; } #endif /* USE_READDIR_R */ } for (;;) { errno = 0; #if defined(USE_READDIR_R) r = readdir_r(t->d, t->dirent, &t->de); #ifdef _AIX /* Note: According to the man page, return value 9 indicates * that the readdir_r was not successful and the error code * is set to the global errno variable. And then if the end * of directory entries was reached, the return value is 9 * and the third parameter is set to NULL and errno is * unchanged. */ if (r == 9) r = errno; #endif /* _AIX */ if (r != 0 || t->de == NULL) { #else t->de = readdir(t->d); if (t->de == NULL) { r = errno; #endif closedir(t->d); t->d = INVALID_DIR_HANDLE; if (r != 0) { t->tree_errno = r; t->visit_type = TREE_ERROR_DIR; return (t->visit_type); } else return (0); } name = t->de->d_name; namelen = D_NAMELEN(t->de); t->flags &= ~hasLstat; t->flags &= ~hasStat; if (name[0] == '.' && name[1] == '\0') continue; if (name[0] == '.' && name[1] == '.' && name[2] == '\0') continue; tree_append(t, name, namelen); return (t->visit_type = TREE_REGULAR); } } /* * Get the stat() data for the entry just returned from tree_next(). */ static const struct stat * tree_current_stat(struct tree *t) { if (!(t->flags & hasStat)) { #ifdef HAVE_FSTATAT if (fstatat(tree_current_dir_fd(t), tree_current_access_path(t), &t->st, 0) != 0) #else if (tree_enter_working_dir(t) != 0) return NULL; if (stat(tree_current_access_path(t), &t->st) != 0) #endif return NULL; t->flags |= hasStat; } return (&t->st); } /* * Get the lstat() data for the entry just returned from tree_next(). */ static const struct stat * tree_current_lstat(struct tree *t) { if (!(t->flags & hasLstat)) { #ifdef HAVE_FSTATAT if (fstatat(tree_current_dir_fd(t), tree_current_access_path(t), &t->lst, AT_SYMLINK_NOFOLLOW) != 0) #else if (tree_enter_working_dir(t) != 0) return NULL; if (lstat(tree_current_access_path(t), &t->lst) != 0) #endif return NULL; t->flags |= hasLstat; } return (&t->lst); } /* * Test whether current entry is a dir or link to a dir. */ static int tree_current_is_dir(struct tree *t) { const struct stat *st; /* * If we already have lstat() info, then try some * cheap tests to determine if this is a dir. */ if (t->flags & hasLstat) { /* If lstat() says it's a dir, it must be a dir. */ st = tree_current_lstat(t); if (st == NULL) return 0; if (S_ISDIR(st->st_mode)) return 1; /* Not a dir; might be a link to a dir. */ /* If it's not a link, then it's not a link to a dir. */ if (!S_ISLNK(st->st_mode)) return 0; /* * It's a link, but we don't know what it's a link to, * so we'll have to use stat(). */ } st = tree_current_stat(t); /* If we can't stat it, it's not a dir. */ if (st == NULL) return 0; /* Use the definitive test. Hopefully this is cached. */ return (S_ISDIR(st->st_mode)); } /* * Test whether current entry is a physical directory. Usually, we * already have at least one of stat() or lstat() in memory, so we * use tricks to try to avoid an extra trip to the disk. 
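 *
 * Aside (illustration only; "dirfd" and "name" are placeholders): with
 * HAVE_FSTATAT, both lookups below stat the entry relative to the open
 * directory fd, so no chdir() is needed:
 *
 *	struct stat st, lst;
 *	fstatat(dirfd, name, &st, 0);
 *	fstatat(dirfd, name, &lst, AT_SYMLINK_NOFOLLOW);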
 */
static int
tree_current_is_physical_dir(struct tree *t)
{
	const struct stat *st;

	/*
	 * If stat() says it isn't a dir, then it's not a dir.
	 * If stat() data is cached, this check is free, so do it first.
	 */
	if (t->flags & hasStat) {
		st = tree_current_stat(t);
		if (st == NULL)
			return (0);
		if (!S_ISDIR(st->st_mode))
			return (0);
	}

	/*
	 * Either stat() said it was a dir (in which case, we have
	 * to determine whether it's really a link to a dir) or
	 * stat() info wasn't available.  So we use lstat(), which
	 * hopefully is already cached.
	 */
	st = tree_current_lstat(t);
	/* If we can't stat it, it's not a dir. */
	if (st == NULL)
		return (0);
	/* Use the definitive test.  Hopefully this is cached. */
	return (S_ISDIR(st->st_mode));
}

/*
 * Test whether the same file already appears in the tree as one of
 * the current entry's ancestors.
 */
static int
tree_target_is_same_as_parent(struct tree *t, const struct stat *st)
{
	struct tree_entry *te;

	for (te = t->current->parent; te != NULL; te = te->parent) {
		if (te->dev == (int64_t)st->st_dev &&
		    te->ino == (int64_t)st->st_ino)
			return (1);
	}
	return (0);
}

/*
 * Test whether the current file is a symbolic link target and
 * resides on another filesystem.
 */
static int
tree_current_is_symblic_link_target(struct tree *t)
{
	const struct stat *lst, *st;

	lst = tree_current_lstat(t);
	st = tree_current_stat(t);
	return (st != NULL && lst != NULL &&
	    (int64_t)st->st_dev == t->current_filesystem->dev &&
	    st->st_dev != lst->st_dev);
}

/*
 * Return the access path for the entry just returned from tree_next().
 */
static const char *
tree_current_access_path(struct tree *t)
{
	return (t->basename);
}

/*
 * Return the full path for the entry just returned from tree_next().
 */
static const char *
tree_current_path(struct tree *t)
{
	return (t->path.s);
}

/*
 * Terminate the traversal.
 */
static void
tree_close(struct tree *t)
{
	if (t == NULL)
		return;
	if (t->entry_fd >= 0) {
		close_and_restore_time(t->entry_fd, t, &t->restore_time);
		t->entry_fd = -1;
	}
	/* Close the handle of readdir(). */
	if (t->d != INVALID_DIR_HANDLE) {
		closedir(t->d);
		t->d = INVALID_DIR_HANDLE;
	}
	/* Release anything remaining in the stack. */
	while (t->stack != NULL) {
		if (t->stack->flags & isDirLink)
			close(t->stack->symlink_parent_fd);
		tree_pop(t);
	}
	if (t->working_dir_fd >= 0) {
		close(t->working_dir_fd);
		t->working_dir_fd = -1;
	}
	if (t->initial_dir_fd >= 0) {
		close(t->initial_dir_fd);
		t->initial_dir_fd = -1;
	}
}

/*
 * Release any resources.
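 *
 * Aside (illustration only; "path1" and "path2" are placeholders):
 * tree_target_is_same_as_parent() above is the classic symlink-loop
 * guard; two names refer to the same object exactly when their
 * (st_dev, st_ino) pairs match:
 *
 *	struct stat a, b;
 *	int same = (stat(path1, &a) == 0 && stat(path2, &b) == 0 &&
 *	    a.st_dev == b.st_dev && a.st_ino == b.st_ino);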
*/ static void tree_free(struct tree *t) { int i; if (t == NULL) return; archive_string_free(&t->path); #if defined(USE_READDIR_R) free(t->dirent); #endif free(t->sparse_list); for (i = 0; i < t->max_filesystem_id; i++) free(t->filesystem_table[i].allocation_ptr); free(t->filesystem_table); free(t); } #endif Index: projects/import-googletest-1.8.1/contrib/libarchive =================================================================== --- projects/import-googletest-1.8.1/contrib/libarchive (revision 344270) +++ projects/import-googletest-1.8.1/contrib/libarchive (revision 344271) Property changes on: projects/import-googletest-1.8.1/contrib/libarchive ___________________________________________________________________ Modified: svn:mergeinfo ## -0,0 +0,2 ## Merged /head/contrib/libarchive:r344081-344270 Merged /vendor/libarchive/dist:r344088 Index: projects/import-googletest-1.8.1/contrib/libc++/include/type_traits =================================================================== --- projects/import-googletest-1.8.1/contrib/libc++/include/type_traits (revision 344270) +++ projects/import-googletest-1.8.1/contrib/libc++/include/type_traits (revision 344271) @@ -1,4856 +1,4850 @@ // -*- C++ -*- //===------------------------ type_traits ---------------------------------===// // // The LLVM Compiler Infrastructure // // This file is dual licensed under the MIT and the University of Illinois Open // Source Licenses. See LICENSE.TXT for details. // //===----------------------------------------------------------------------===// #ifndef _LIBCPP_TYPE_TRAITS #define _LIBCPP_TYPE_TRAITS /* type_traits synopsis namespace std { // helper class: template struct integral_constant; typedef integral_constant true_type; // C++11 typedef integral_constant false_type; // C++11 template // C++14 using bool_constant = integral_constant; // C++14 typedef bool_constant true_type; // C++14 typedef bool_constant false_type; // C++14 // helper traits template struct enable_if; template struct conditional; // Primary classification traits: template struct is_void; template struct is_null_pointer; // C++14 template struct is_integral; template struct is_floating_point; template struct is_array; template struct is_pointer; template struct is_lvalue_reference; template struct is_rvalue_reference; template struct is_member_object_pointer; template struct is_member_function_pointer; template struct is_enum; template struct is_union; template struct is_class; template struct is_function; // Secondary classification traits: template struct is_reference; template struct is_arithmetic; template struct is_fundamental; template struct is_member_pointer; template struct is_scalar; template struct is_object; template struct is_compound; // Const-volatile properties and transformations: template struct is_const; template struct is_volatile; template struct remove_const; template struct remove_volatile; template struct remove_cv; template struct add_const; template struct add_volatile; template struct add_cv; // Reference transformations: template struct remove_reference; template struct add_lvalue_reference; template struct add_rvalue_reference; // Pointer transformations: template struct remove_pointer; template struct add_pointer; // Integral properties: template struct is_signed; template struct is_unsigned; template struct make_signed; template struct make_unsigned; // Array properties and transformations: template struct rank; template struct extent; template struct remove_extent; template struct remove_all_extents; // 
Member introspection: template struct is_pod; template struct is_trivial; template struct is_trivially_copyable; template struct is_standard_layout; template struct is_literal_type; template struct is_empty; template struct is_polymorphic; template struct is_abstract; template struct is_final; // C++14 template struct is_aggregate; // C++17 template struct is_constructible; template struct is_default_constructible; template struct is_copy_constructible; template struct is_move_constructible; template struct is_assignable; template struct is_copy_assignable; template struct is_move_assignable; template struct is_swappable_with; // C++17 template struct is_swappable; // C++17 template struct is_destructible; template struct is_trivially_constructible; template struct is_trivially_default_constructible; template struct is_trivially_copy_constructible; template struct is_trivially_move_constructible; template struct is_trivially_assignable; template struct is_trivially_copy_assignable; template struct is_trivially_move_assignable; template struct is_trivially_destructible; template struct is_nothrow_constructible; template struct is_nothrow_default_constructible; template struct is_nothrow_copy_constructible; template struct is_nothrow_move_constructible; template struct is_nothrow_assignable; template struct is_nothrow_copy_assignable; template struct is_nothrow_move_assignable; template struct is_nothrow_swappable_with; // C++17 template struct is_nothrow_swappable; // C++17 template struct is_nothrow_destructible; template struct has_virtual_destructor; template struct has_unique_object_representations; // C++17 // Relationships between types: template struct is_same; template struct is_base_of; template struct is_convertible; template struct is_invocable; template struct is_invocable_r; template struct is_nothrow_invocable; template struct is_nothrow_invocable_r; // Alignment properties and transformations: template struct alignment_of; template struct aligned_storage; template struct aligned_union; template struct remove_cvref; // C++20 template struct decay; template struct common_type; template struct underlying_type; template class result_of; // undefined template class result_of; template struct invoke_result; // C++17 // const-volatile modifications: template using remove_const_t = typename remove_const::type; // C++14 template using remove_volatile_t = typename remove_volatile::type; // C++14 template using remove_cv_t = typename remove_cv::type; // C++14 template using add_const_t = typename add_const::type; // C++14 template using add_volatile_t = typename add_volatile::type; // C++14 template using add_cv_t = typename add_cv::type; // C++14 // reference modifications: template using remove_reference_t = typename remove_reference::type; // C++14 template using add_lvalue_reference_t = typename add_lvalue_reference::type; // C++14 template using add_rvalue_reference_t = typename add_rvalue_reference::type; // C++14 // sign modifications: template using make_signed_t = typename make_signed::type; // C++14 template using make_unsigned_t = typename make_unsigned::type; // C++14 // array modifications: template using remove_extent_t = typename remove_extent::type; // C++14 template using remove_all_extents_t = typename remove_all_extents::type; // C++14 // pointer modifications: template using remove_pointer_t = typename remove_pointer::type; // C++14 template using add_pointer_t = typename add_pointer::type; // C++14 // other transformations: template using aligned_storage_t = 
typename aligned_storage::type; // C++14 template using aligned_union_t = typename aligned_union::type; // C++14 template using remove_cvref_t = typename remove_cvref::type; // C++20 template using decay_t = typename decay::type; // C++14 template using enable_if_t = typename enable_if::type; // C++14 template using conditional_t = typename conditional::type; // C++14 template using common_type_t = typename common_type::type; // C++14 template using underlying_type_t = typename underlying_type::type; // C++14 template using result_of_t = typename result_of::type; // C++14 template using invoke_result_t = typename invoke_result::type; // C++17 template using void_t = void; // C++17 // See C++14 20.10.4.1, primary type categories template inline constexpr bool is_void_v = is_void::value; // C++17 template inline constexpr bool is_null_pointer_v = is_null_pointer::value; // C++17 template inline constexpr bool is_integral_v = is_integral::value; // C++17 template inline constexpr bool is_floating_point_v = is_floating_point::value; // C++17 template inline constexpr bool is_array_v = is_array::value; // C++17 template inline constexpr bool is_pointer_v = is_pointer::value; // C++17 template inline constexpr bool is_lvalue_reference_v = is_lvalue_reference::value; // C++17 template inline constexpr bool is_rvalue_reference_v = is_rvalue_reference::value; // C++17 template inline constexpr bool is_member_object_pointer_v = is_member_object_pointer::value; // C++17 template inline constexpr bool is_member_function_pointer_v = is_member_function_pointer::value; // C++17 template inline constexpr bool is_enum_v = is_enum::value; // C++17 template inline constexpr bool is_union_v = is_union::value; // C++17 template inline constexpr bool is_class_v = is_class::value; // C++17 template inline constexpr bool is_function_v = is_function::value; // C++17 // See C++14 20.10.4.2, composite type categories template inline constexpr bool is_reference_v = is_reference::value; // C++17 template inline constexpr bool is_arithmetic_v = is_arithmetic::value; // C++17 template inline constexpr bool is_fundamental_v = is_fundamental::value; // C++17 template inline constexpr bool is_object_v = is_object::value; // C++17 template inline constexpr bool is_scalar_v = is_scalar::value; // C++17 template inline constexpr bool is_compound_v = is_compound::value; // C++17 template inline constexpr bool is_member_pointer_v = is_member_pointer::value; // C++17 // See C++14 20.10.4.3, type properties template inline constexpr bool is_const_v = is_const::value; // C++17 template inline constexpr bool is_volatile_v = is_volatile::value; // C++17 template inline constexpr bool is_trivial_v = is_trivial::value; // C++17 template inline constexpr bool is_trivially_copyable_v = is_trivially_copyable::value; // C++17 template inline constexpr bool is_standard_layout_v = is_standard_layout::value; // C++17 template inline constexpr bool is_pod_v = is_pod::value; // C++17 template inline constexpr bool is_literal_type_v = is_literal_type::value; // C++17 template inline constexpr bool is_empty_v = is_empty::value; // C++17 template inline constexpr bool is_polymorphic_v = is_polymorphic::value; // C++17 template inline constexpr bool is_abstract_v = is_abstract::value; // C++17 template inline constexpr bool is_final_v = is_final::value; // C++17 template inline constexpr bool is_aggregate_v = is_aggregate::value; // C++17 template inline constexpr bool is_signed_v = is_signed::value; // C++17 template inline constexpr bool 
is_unsigned_v = is_unsigned::value; // C++17 template inline constexpr bool is_constructible_v = is_constructible::value; // C++17 template inline constexpr bool is_default_constructible_v = is_default_constructible::value; // C++17 template inline constexpr bool is_copy_constructible_v = is_copy_constructible::value; // C++17 template inline constexpr bool is_move_constructible_v = is_move_constructible::value; // C++17 template inline constexpr bool is_assignable_v = is_assignable::value; // C++17 template inline constexpr bool is_copy_assignable_v = is_copy_assignable::value; // C++17 template inline constexpr bool is_move_assignable_v = is_move_assignable::value; // C++17 template inline constexpr bool is_swappable_with_v = is_swappable_with::value; // C++17 template inline constexpr bool is_swappable_v = is_swappable::value; // C++17 template inline constexpr bool is_destructible_v = is_destructible::value; // C++17 template inline constexpr bool is_trivially_constructible_v = is_trivially_constructible::value; // C++17 template inline constexpr bool is_trivially_default_constructible_v = is_trivially_default_constructible::value; // C++17 template inline constexpr bool is_trivially_copy_constructible_v = is_trivially_copy_constructible::value; // C++17 template inline constexpr bool is_trivially_move_constructible_v = is_trivially_move_constructible::value; // C++17 template inline constexpr bool is_trivially_assignable_v = is_trivially_assignable::value; // C++17 template inline constexpr bool is_trivially_copy_assignable_v = is_trivially_copy_assignable::value; // C++17 template inline constexpr bool is_trivially_move_assignable_v = is_trivially_move_assignable::value; // C++17 template inline constexpr bool is_trivially_destructible_v = is_trivially_destructible::value; // C++17 template inline constexpr bool is_nothrow_constructible_v = is_nothrow_constructible::value; // C++17 template inline constexpr bool is_nothrow_default_constructible_v = is_nothrow_default_constructible::value; // C++17 template inline constexpr bool is_nothrow_copy_constructible_v = is_nothrow_copy_constructible::value; // C++17 template inline constexpr bool is_nothrow_move_constructible_v = is_nothrow_move_constructible::value; // C++17 template inline constexpr bool is_nothrow_assignable_v = is_nothrow_assignable::value; // C++17 template inline constexpr bool is_nothrow_copy_assignable_v = is_nothrow_copy_assignable::value; // C++17 template inline constexpr bool is_nothrow_move_assignable_v = is_nothrow_move_assignable::value; // C++17 template inline constexpr bool is_nothrow_swappable_with_v = is_nothrow_swappable_with::value; // C++17 template inline constexpr bool is_nothrow_swappable_v = is_nothrow_swappable::value; // C++17 template inline constexpr bool is_nothrow_destructible_v = is_nothrow_destructible::value; // C++17 template inline constexpr bool has_virtual_destructor_v = has_virtual_destructor::value; // C++17 template inline constexpr bool has_unique_object_representations_v // C++17 = has_unique_object_representations::value; // See C++14 20.10.5, type property queries template inline constexpr size_t alignment_of_v = alignment_of::value; // C++17 template inline constexpr size_t rank_v = rank::value; // C++17 template inline constexpr size_t extent_v = extent::value; // C++17 // See C++14 20.10.6, type relations template inline constexpr bool is_same_v = is_same::value; // C++17 template inline constexpr bool is_base_of_v = is_base_of::value; // C++17 template inline constexpr bool 
is_convertible_v = is_convertible::value; // C++17 template inline constexpr bool is_invocable_v = is_invocable::value; // C++17 template inline constexpr bool is_invocable_r_v = is_invocable_r::value; // C++17 template inline constexpr bool is_nothrow_invocable_v = is_nothrow_invocable::value; // C++17 template inline constexpr bool is_nothrow_invocable_r_v = is_nothrow_invocable_r::value; // C++17 // [meta.logical], logical operator traits: template struct conjunction; // C++17 template inline constexpr bool conjunction_v = conjunction::value; // C++17 template struct disjunction; // C++17 template inline constexpr bool disjunction_v = disjunction::value; // C++17 template struct negation; // C++17 template inline constexpr bool negation_v = negation::value; // C++17 } */ #include <__config> #include #if !defined(_LIBCPP_HAS_NO_PRAGMA_SYSTEM_HEADER) #pragma GCC system_header #endif _LIBCPP_BEGIN_NAMESPACE_STD template struct _LIBCPP_TEMPLATE_VIS pair; template class _LIBCPP_TEMPLATE_VIS reference_wrapper; template struct _LIBCPP_TEMPLATE_VIS hash; template struct __void_t { typedef void type; }; template struct __identity { typedef _Tp type; }; template struct _LIBCPP_TEMPLATE_VIS __dependent_type : public _Tp {}; template struct _LIBCPP_TEMPLATE_VIS conditional {typedef _If type;}; template struct _LIBCPP_TEMPLATE_VIS conditional {typedef _Then type;}; #if _LIBCPP_STD_VER > 11 template using conditional_t = typename conditional<_Bp, _If, _Then>::type; #endif template struct _LIBCPP_TEMPLATE_VIS __lazy_enable_if {}; template struct _LIBCPP_TEMPLATE_VIS __lazy_enable_if {typedef typename _Tp::type type;}; template struct _LIBCPP_TEMPLATE_VIS enable_if {}; template struct _LIBCPP_TEMPLATE_VIS enable_if {typedef _Tp type;}; #if _LIBCPP_STD_VER > 11 template using enable_if_t = typename enable_if<_Bp, _Tp>::type; #endif // addressof #ifndef _LIBCPP_HAS_NO_BUILTIN_ADDRESSOF template inline _LIBCPP_CONSTEXPR_AFTER_CXX14 _LIBCPP_NO_CFI _LIBCPP_INLINE_VISIBILITY _Tp* addressof(_Tp& __x) _NOEXCEPT { return __builtin_addressof(__x); } #else template inline _LIBCPP_NO_CFI _LIBCPP_INLINE_VISIBILITY _Tp* addressof(_Tp& __x) _NOEXCEPT { return reinterpret_cast<_Tp *>( const_cast(&reinterpret_cast(__x))); } #endif // _LIBCPP_HAS_NO_BUILTIN_ADDRESSOF #if defined(_LIBCPP_HAS_OBJC_ARC) && !defined(_LIBCPP_PREDEFINED_OBJC_ARC_ADDRESSOF) // Objective-C++ Automatic Reference Counting uses qualified pointers // that require special addressof() signatures. When // _LIBCPP_PREDEFINED_OBJC_ARC_ADDRESSOF is defined, the compiler // itself is providing these definitions. Otherwise, we provide them. 
template inline _LIBCPP_INLINE_VISIBILITY __strong _Tp* addressof(__strong _Tp& __x) _NOEXCEPT { return &__x; } #ifdef _LIBCPP_HAS_OBJC_ARC_WEAK template inline _LIBCPP_INLINE_VISIBILITY __weak _Tp* addressof(__weak _Tp& __x) _NOEXCEPT { return &__x; } #endif template inline _LIBCPP_INLINE_VISIBILITY __autoreleasing _Tp* addressof(__autoreleasing _Tp& __x) _NOEXCEPT { return &__x; } template inline _LIBCPP_INLINE_VISIBILITY __unsafe_unretained _Tp* addressof(__unsafe_unretained _Tp& __x) _NOEXCEPT { return &__x; } #endif #if !defined(_LIBCPP_CXX03_LANG) template _Tp* addressof(const _Tp&&) noexcept = delete; #endif struct __two {char __lx[2];}; // helper class: template struct _LIBCPP_TEMPLATE_VIS integral_constant { static _LIBCPP_CONSTEXPR const _Tp value = __v; typedef _Tp value_type; typedef integral_constant type; _LIBCPP_INLINE_VISIBILITY _LIBCPP_CONSTEXPR operator value_type() const _NOEXCEPT {return value;} #if _LIBCPP_STD_VER > 11 _LIBCPP_INLINE_VISIBILITY constexpr value_type operator ()() const _NOEXCEPT {return value;} #endif }; template _LIBCPP_CONSTEXPR const _Tp integral_constant<_Tp, __v>::value; #if _LIBCPP_STD_VER > 14 template using bool_constant = integral_constant; #define _LIBCPP_BOOL_CONSTANT(__b) bool_constant<(__b)> #else #define _LIBCPP_BOOL_CONSTANT(__b) integral_constant #endif typedef _LIBCPP_BOOL_CONSTANT(true) true_type; typedef _LIBCPP_BOOL_CONSTANT(false) false_type; #if !defined(_LIBCPP_CXX03_LANG) // __lazy_and template struct __lazy_and_impl; template struct __lazy_and_impl : false_type {}; template <> struct __lazy_and_impl : true_type {}; template struct __lazy_and_impl : integral_constant {}; template struct __lazy_and_impl : __lazy_and_impl<_Hp::type::value, _Tp...> {}; template struct __lazy_and : __lazy_and_impl<_P1::type::value, _Pr...> {}; // __lazy_or template struct __lazy_or_impl; template struct __lazy_or_impl : true_type {}; template <> struct __lazy_or_impl : false_type {}; template struct __lazy_or_impl : __lazy_or_impl<_Hp::type::value, _Tp...> {}; template struct __lazy_or : __lazy_or_impl<_P1::type::value, _Pr...> {}; // __lazy_not template struct __lazy_not : integral_constant {}; // __and_ template struct __and_; template<> struct __and_<> : true_type {}; template struct __and_<_B0> : _B0 {}; template struct __and_<_B0, _B1> : conditional<_B0::value, _B1, _B0>::type {}; template struct __and_<_B0, _B1, _B2, _Bn...> : conditional<_B0::value, __and_<_B1, _B2, _Bn...>, _B0>::type {}; // __or_ template struct __or_; template<> struct __or_<> : false_type {}; template struct __or_<_B0> : _B0 {}; template struct __or_<_B0, _B1> : conditional<_B0::value, _B0, _B1>::type {}; template struct __or_<_B0, _B1, _B2, _Bn...> : conditional<_B0::value, _B0, __or_<_B1, _B2, _Bn...> >::type {}; // __not_ template struct __not_ : conditional<_Tp::value, false_type, true_type>::type {}; #endif // !defined(_LIBCPP_CXX03_LANG) // is_const template struct _LIBCPP_TEMPLATE_VIS is_const : public false_type {}; template struct _LIBCPP_TEMPLATE_VIS is_const<_Tp const> : public true_type {}; #if _LIBCPP_STD_VER > 14 && !defined(_LIBCPP_HAS_NO_VARIABLE_TEMPLATES) template _LIBCPP_INLINE_VAR _LIBCPP_CONSTEXPR bool is_const_v = is_const<_Tp>::value; #endif // is_volatile template struct _LIBCPP_TEMPLATE_VIS is_volatile : public false_type {}; template struct _LIBCPP_TEMPLATE_VIS is_volatile<_Tp volatile> : public true_type {}; #if _LIBCPP_STD_VER > 14 && !defined(_LIBCPP_HAS_NO_VARIABLE_TEMPLATES) template _LIBCPP_INLINE_VAR _LIBCPP_CONSTEXPR bool is_volatile_v = 
is_volatile<_Tp>::value; #endif // remove_const template struct _LIBCPP_TEMPLATE_VIS remove_const {typedef _Tp type;}; template struct _LIBCPP_TEMPLATE_VIS remove_const {typedef _Tp type;}; #if _LIBCPP_STD_VER > 11 template using remove_const_t = typename remove_const<_Tp>::type; #endif // remove_volatile template struct _LIBCPP_TEMPLATE_VIS remove_volatile {typedef _Tp type;}; template struct _LIBCPP_TEMPLATE_VIS remove_volatile {typedef _Tp type;}; #if _LIBCPP_STD_VER > 11 template using remove_volatile_t = typename remove_volatile<_Tp>::type; #endif // remove_cv template struct _LIBCPP_TEMPLATE_VIS remove_cv {typedef typename remove_volatile::type>::type type;}; #if _LIBCPP_STD_VER > 11 template using remove_cv_t = typename remove_cv<_Tp>::type; #endif // is_void template struct __libcpp_is_void : public false_type {}; template <> struct __libcpp_is_void : public true_type {}; template struct _LIBCPP_TEMPLATE_VIS is_void : public __libcpp_is_void::type> {}; #if _LIBCPP_STD_VER > 14 && !defined(_LIBCPP_HAS_NO_VARIABLE_TEMPLATES) template _LIBCPP_INLINE_VAR _LIBCPP_CONSTEXPR bool is_void_v = is_void<_Tp>::value; #endif // __is_nullptr_t template struct __is_nullptr_t_impl : public false_type {}; template <> struct __is_nullptr_t_impl : public true_type {}; template struct _LIBCPP_TEMPLATE_VIS __is_nullptr_t : public __is_nullptr_t_impl::type> {}; #if _LIBCPP_STD_VER > 11 template struct _LIBCPP_TEMPLATE_VIS is_null_pointer : public __is_nullptr_t_impl::type> {}; #if _LIBCPP_STD_VER > 14 && !defined(_LIBCPP_HAS_NO_VARIABLE_TEMPLATES) template _LIBCPP_INLINE_VAR _LIBCPP_CONSTEXPR bool is_null_pointer_v = is_null_pointer<_Tp>::value; #endif #endif // is_integral template struct __libcpp_is_integral : public false_type {}; template <> struct __libcpp_is_integral : public true_type {}; template <> struct __libcpp_is_integral : public true_type {}; template <> struct __libcpp_is_integral : public true_type {}; template <> struct __libcpp_is_integral : public true_type {}; template <> struct __libcpp_is_integral : public true_type {}; #ifndef _LIBCPP_HAS_NO_UNICODE_CHARS template <> struct __libcpp_is_integral : public true_type {}; template <> struct __libcpp_is_integral : public true_type {}; #endif // _LIBCPP_HAS_NO_UNICODE_CHARS template <> struct __libcpp_is_integral : public true_type {}; template <> struct __libcpp_is_integral : public true_type {}; template <> struct __libcpp_is_integral : public true_type {}; template <> struct __libcpp_is_integral : public true_type {}; template <> struct __libcpp_is_integral : public true_type {}; template <> struct __libcpp_is_integral : public true_type {}; template <> struct __libcpp_is_integral : public true_type {}; template <> struct __libcpp_is_integral : public true_type {}; #ifndef _LIBCPP_HAS_NO_INT128 template <> struct __libcpp_is_integral<__int128_t> : public true_type {}; template <> struct __libcpp_is_integral<__uint128_t> : public true_type {}; #endif template struct _LIBCPP_TEMPLATE_VIS is_integral : public __libcpp_is_integral::type> {}; #if _LIBCPP_STD_VER > 14 && !defined(_LIBCPP_HAS_NO_VARIABLE_TEMPLATES) template _LIBCPP_INLINE_VAR _LIBCPP_CONSTEXPR bool is_integral_v = is_integral<_Tp>::value; #endif // is_floating_point template struct __libcpp_is_floating_point : public false_type {}; -#ifdef __clang__ -template <> struct __libcpp_is_floating_point<__fp16> : public true_type {}; -#endif -#ifdef __FLT16_MANT_DIG__ -template <> struct __libcpp_is_floating_point<_Float16> : public true_type {}; -#endif template <> struct 
__libcpp_is_floating_point : public true_type {}; template <> struct __libcpp_is_floating_point : public true_type {}; template <> struct __libcpp_is_floating_point : public true_type {}; template struct _LIBCPP_TEMPLATE_VIS is_floating_point : public __libcpp_is_floating_point::type> {}; #if _LIBCPP_STD_VER > 14 && !defined(_LIBCPP_HAS_NO_VARIABLE_TEMPLATES) template _LIBCPP_INLINE_VAR _LIBCPP_CONSTEXPR bool is_floating_point_v = is_floating_point<_Tp>::value; #endif // is_array template struct _LIBCPP_TEMPLATE_VIS is_array : public false_type {}; template struct _LIBCPP_TEMPLATE_VIS is_array<_Tp[]> : public true_type {}; template struct _LIBCPP_TEMPLATE_VIS is_array<_Tp[_Np]> : public true_type {}; #if _LIBCPP_STD_VER > 14 && !defined(_LIBCPP_HAS_NO_VARIABLE_TEMPLATES) template _LIBCPP_INLINE_VAR _LIBCPP_CONSTEXPR bool is_array_v = is_array<_Tp>::value; #endif // is_pointer template struct __libcpp_is_pointer : public false_type {}; template struct __libcpp_is_pointer<_Tp*> : public true_type {}; template struct _LIBCPP_TEMPLATE_VIS is_pointer : public __libcpp_is_pointer::type> {}; #if _LIBCPP_STD_VER > 14 && !defined(_LIBCPP_HAS_NO_VARIABLE_TEMPLATES) template _LIBCPP_INLINE_VAR _LIBCPP_CONSTEXPR bool is_pointer_v = is_pointer<_Tp>::value; #endif // is_reference template struct _LIBCPP_TEMPLATE_VIS is_lvalue_reference : public false_type {}; template struct _LIBCPP_TEMPLATE_VIS is_lvalue_reference<_Tp&> : public true_type {}; template struct _LIBCPP_TEMPLATE_VIS is_rvalue_reference : public false_type {}; #ifndef _LIBCPP_HAS_NO_RVALUE_REFERENCES template struct _LIBCPP_TEMPLATE_VIS is_rvalue_reference<_Tp&&> : public true_type {}; #endif template struct _LIBCPP_TEMPLATE_VIS is_reference : public false_type {}; template struct _LIBCPP_TEMPLATE_VIS is_reference<_Tp&> : public true_type {}; #ifndef _LIBCPP_HAS_NO_RVALUE_REFERENCES template struct _LIBCPP_TEMPLATE_VIS is_reference<_Tp&&> : public true_type {}; #endif #if _LIBCPP_STD_VER > 14 && !defined(_LIBCPP_HAS_NO_VARIABLE_TEMPLATES) template _LIBCPP_INLINE_VAR _LIBCPP_CONSTEXPR bool is_reference_v = is_reference<_Tp>::value; template _LIBCPP_INLINE_VAR _LIBCPP_CONSTEXPR bool is_lvalue_reference_v = is_lvalue_reference<_Tp>::value; template _LIBCPP_INLINE_VAR _LIBCPP_CONSTEXPR bool is_rvalue_reference_v = is_rvalue_reference<_Tp>::value; #endif // is_union #if __has_feature(is_union) || (_GNUC_VER >= 403) template struct _LIBCPP_TEMPLATE_VIS is_union : public integral_constant {}; #else template struct __libcpp_union : public false_type {}; template struct _LIBCPP_TEMPLATE_VIS is_union : public __libcpp_union::type> {}; #endif #if _LIBCPP_STD_VER > 14 && !defined(_LIBCPP_HAS_NO_VARIABLE_TEMPLATES) template _LIBCPP_INLINE_VAR _LIBCPP_CONSTEXPR bool is_union_v = is_union<_Tp>::value; #endif // is_class #if __has_feature(is_class) || (_GNUC_VER >= 403) template struct _LIBCPP_TEMPLATE_VIS is_class : public integral_constant {}; #else namespace __is_class_imp { template char __test(int _Tp::*); template __two __test(...); } template struct _LIBCPP_TEMPLATE_VIS is_class : public integral_constant(0)) == 1 && !is_union<_Tp>::value> {}; #endif #if _LIBCPP_STD_VER > 14 && !defined(_LIBCPP_HAS_NO_VARIABLE_TEMPLATES) template _LIBCPP_INLINE_VAR _LIBCPP_CONSTEXPR bool is_class_v = is_class<_Tp>::value; #endif // is_same template struct _LIBCPP_TEMPLATE_VIS is_same : public false_type {}; template struct _LIBCPP_TEMPLATE_VIS is_same<_Tp, _Tp> : public true_type {}; #if _LIBCPP_STD_VER > 14 && !defined(_LIBCPP_HAS_NO_VARIABLE_TEMPLATES) template 
// is_function

namespace __libcpp_is_function_imp
{
struct __dummy_type {};
template <class _Tp> char  __test(_Tp*);
template <class _Tp> char  __test(__dummy_type);
template <class _Tp> __two __test(...);
template <class _Tp> _Tp&  __source(int);
template <class _Tp> __dummy_type __source(...);
}

template <class _Tp, bool = is_class<_Tp>::value ||
                            is_union<_Tp>::value ||
                            is_void<_Tp>::value  ||
                            is_reference<_Tp>::value ||
                            __is_nullptr_t<_Tp>::value >
struct __libcpp_is_function
    : public integral_constant<bool, sizeof(__libcpp_is_function_imp::__test<_Tp>(
                                __libcpp_is_function_imp::__source<_Tp>(0))) == 1>
    {};
template <class _Tp> struct __libcpp_is_function<_Tp, true> : public false_type {};

template <class _Tp> struct _LIBCPP_TEMPLATE_VIS is_function
    : public __libcpp_is_function<_Tp> {};

#if _LIBCPP_STD_VER > 14 && !defined(_LIBCPP_HAS_NO_VARIABLE_TEMPLATES)
template <class _Tp> _LIBCPP_INLINE_VAR _LIBCPP_CONSTEXPR bool is_function_v
    = is_function<_Tp>::value;
#endif

// is_member_function_pointer

// template <class _Tp> struct __libcpp_is_member_function_pointer             : public false_type {};
// template <class _Tp, class _Up> struct __libcpp_is_member_function_pointer<_Tp _Up::*> : public is_function<_Tp> {};
//

template <class _MP, bool _IsMemberFunctionPtr, bool _IsMemberObjectPtr>
struct __member_pointer_traits_imp
{  // forward declaration; specializations later
};

template <class _Tp> struct __libcpp_is_member_function_pointer
    : public false_type {};

template <class _Ret, class _Class>
struct __libcpp_is_member_function_pointer<_Ret _Class::*>
    : public is_function<_Ret> {};

template <class _Tp> struct _LIBCPP_TEMPLATE_VIS is_member_function_pointer
    : public __libcpp_is_member_function_pointer<typename remove_cv<_Tp>::type>::type {};

#if _LIBCPP_STD_VER > 14 && !defined(_LIBCPP_HAS_NO_VARIABLE_TEMPLATES)
template <class _Tp> _LIBCPP_INLINE_VAR _LIBCPP_CONSTEXPR bool is_member_function_pointer_v
    = is_member_function_pointer<_Tp>::value;
#endif

// is_member_pointer

template <class _Tp>            struct __libcpp_is_member_pointer             : public false_type {};
template <class _Tp, class _Up> struct __libcpp_is_member_pointer<_Tp _Up::*> : public true_type {};

template <class _Tp> struct _LIBCPP_TEMPLATE_VIS is_member_pointer
    : public __libcpp_is_member_pointer<typename remove_cv<_Tp>::type> {};

#if _LIBCPP_STD_VER > 14 && !defined(_LIBCPP_HAS_NO_VARIABLE_TEMPLATES)
template <class _Tp> _LIBCPP_INLINE_VAR _LIBCPP_CONSTEXPR bool is_member_pointer_v
    = is_member_pointer<_Tp>::value;
#endif

// is_member_object_pointer

template <class _Tp> struct _LIBCPP_TEMPLATE_VIS is_member_object_pointer
    : public integral_constant<bool, is_member_pointer<_Tp>::value &&
                                    !is_member_function_pointer<_Tp>::value> {};

#if _LIBCPP_STD_VER > 14 && !defined(_LIBCPP_HAS_NO_VARIABLE_TEMPLATES)
template <class _Tp> _LIBCPP_INLINE_VAR _LIBCPP_CONSTEXPR bool is_member_object_pointer_v
    = is_member_object_pointer<_Tp>::value;
#endif

// is_enum

#if __has_feature(is_enum) || (_GNUC_VER >= 403)

template <class _Tp> struct _LIBCPP_TEMPLATE_VIS is_enum
    : public integral_constant<bool, __is_enum(_Tp)> {};

#else

template <class _Tp> struct _LIBCPP_TEMPLATE_VIS is_enum
    : public integral_constant<bool, !is_void<_Tp>::value             &&
                                     !is_integral<_Tp>::value         &&
                                     !is_floating_point<_Tp>::value   &&
                                     !is_array<_Tp>::value            &&
                                     !is_pointer<_Tp>::value          &&
                                     !is_reference<_Tp>::value        &&
                                     !is_member_pointer<_Tp>::value   &&
                                     !is_union<_Tp>::value            &&
                                     !is_class<_Tp>::value            &&
                                     !is_function<_Tp>::value         > {};

#endif

#if _LIBCPP_STD_VER > 14 && !defined(_LIBCPP_HAS_NO_VARIABLE_TEMPLATES)
template <class _Tp> _LIBCPP_INLINE_VAR _LIBCPP_CONSTEXPR bool is_enum_v
    = is_enum<_Tp>::value;
#endif

// is_arithmetic

template <class _Tp> struct _LIBCPP_TEMPLATE_VIS is_arithmetic
    : public integral_constant<bool, is_integral<_Tp>::value      ||
                                     is_floating_point<_Tp>::value> {};

#if _LIBCPP_STD_VER > 14 && !defined(_LIBCPP_HAS_NO_VARIABLE_TEMPLATES)
template <class _Tp> _LIBCPP_INLINE_VAR _LIBCPP_CONSTEXPR bool is_arithmetic_v
    = is_arithmetic<_Tp>::value;
#endif
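// Editor's illustration (not part of the original header): a minimal sketch of
// the member-pointer partition; `_Foo` is a hypothetical type.
//
//   struct _Foo { int __x; void __f(); };
//   static_assert(is_member_object_pointer<int _Foo::*>::value, "");
//   static_assert(is_member_function_pointer<void (_Foo::*)()>::value, "");
//   static_assert(is_member_pointer<int _Foo::*>::value, "");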
// is_fundamental

template <class _Tp> struct _LIBCPP_TEMPLATE_VIS is_fundamental
    : public integral_constant<bool, is_void<_Tp>::value        ||
                                     __is_nullptr_t<_Tp>::value ||
                                     is_arithmetic<_Tp>::value> {};

#if _LIBCPP_STD_VER > 14 && !defined(_LIBCPP_HAS_NO_VARIABLE_TEMPLATES)
template <class _Tp> _LIBCPP_INLINE_VAR _LIBCPP_CONSTEXPR bool is_fundamental_v
    = is_fundamental<_Tp>::value;
#endif

// is_scalar

template <class _Tp> struct _LIBCPP_TEMPLATE_VIS is_scalar
    : public integral_constant<bool, is_arithmetic<_Tp>::value     ||
                                     is_member_pointer<_Tp>::value ||
                                     is_pointer<_Tp>::value        ||
                                     __is_nullptr_t<_Tp>::value    ||
                                     is_enum<_Tp>::value           > {};

template <> struct _LIBCPP_TEMPLATE_VIS is_scalar<nullptr_t> : public true_type {};

#if _LIBCPP_STD_VER > 14 && !defined(_LIBCPP_HAS_NO_VARIABLE_TEMPLATES)
template <class _Tp> _LIBCPP_INLINE_VAR _LIBCPP_CONSTEXPR bool is_scalar_v
    = is_scalar<_Tp>::value;
#endif

// is_object

template <class _Tp> struct _LIBCPP_TEMPLATE_VIS is_object
    : public integral_constant<bool, is_scalar<_Tp>::value ||
                                     is_array<_Tp>::value  ||
                                     is_union<_Tp>::value  ||
                                     is_class<_Tp>::value  > {};

#if _LIBCPP_STD_VER > 14 && !defined(_LIBCPP_HAS_NO_VARIABLE_TEMPLATES)
template <class _Tp> _LIBCPP_INLINE_VAR _LIBCPP_CONSTEXPR bool is_object_v
    = is_object<_Tp>::value;
#endif

// is_compound

template <class _Tp> struct _LIBCPP_TEMPLATE_VIS is_compound
    : public integral_constant<bool, !is_fundamental<_Tp>::value> {};

#if _LIBCPP_STD_VER > 14 && !defined(_LIBCPP_HAS_NO_VARIABLE_TEMPLATES)
template <class _Tp> _LIBCPP_INLINE_VAR _LIBCPP_CONSTEXPR bool is_compound_v
    = is_compound<_Tp>::value;
#endif

// __is_referenceable  [defns.referenceable]

struct __is_referenceable_impl {
    template <class _Tp> static _Tp& __test(int);
    template <class _Tp> static __two __test(...);
};

template <class _Tp>
struct __is_referenceable : integral_constant<bool,
    !is_same<decltype(__is_referenceable_impl::__test<_Tp>(0)), __two>::value> {};

// add_const

template <class _Tp, bool = is_reference<_Tp>::value ||
                            is_function<_Tp>::value  ||
                            is_const<_Tp>::value     >
struct __add_const             {typedef _Tp type;};

template <class _Tp>
struct __add_const<_Tp, false> {typedef const _Tp type;};

template <class _Tp> struct _LIBCPP_TEMPLATE_VIS add_const
    {typedef typename __add_const<_Tp>::type type;};

#if _LIBCPP_STD_VER > 11
template <class _Tp> using add_const_t = typename add_const<_Tp>::type;
#endif

// add_volatile

template <class _Tp, bool = is_reference<_Tp>::value ||
                            is_function<_Tp>::value  ||
                            is_volatile<_Tp>::value  >
struct __add_volatile             {typedef _Tp type;};

template <class _Tp>
struct __add_volatile<_Tp, false> {typedef volatile _Tp type;};

template <class _Tp> struct _LIBCPP_TEMPLATE_VIS add_volatile
    {typedef typename __add_volatile<_Tp>::type type;};

#if _LIBCPP_STD_VER > 11
template <class _Tp> using add_volatile_t = typename add_volatile<_Tp>::type;
#endif

// add_cv

template <class _Tp> struct _LIBCPP_TEMPLATE_VIS add_cv
    {typedef typename add_const<typename add_volatile<_Tp>::type>::type type;};

#if _LIBCPP_STD_VER > 11
template <class _Tp> using add_cv_t = typename add_cv<_Tp>::type;
#endif

// remove_reference

template <class _Tp> struct _LIBCPP_TEMPLATE_VIS remove_reference        {typedef _Tp type;};
template <class _Tp> struct _LIBCPP_TEMPLATE_VIS remove_reference<_Tp&>  {typedef _Tp type;};
#ifndef _LIBCPP_HAS_NO_RVALUE_REFERENCES
template <class _Tp> struct _LIBCPP_TEMPLATE_VIS remove_reference<_Tp&&> {typedef _Tp type;};
#endif

#if _LIBCPP_STD_VER > 11
template <class _Tp> using remove_reference_t = typename remove_reference<_Tp>::type;
#endif

// add_lvalue_reference

template <class _Tp, bool = __is_referenceable<_Tp>::value>
struct __add_lvalue_reference_impl            { typedef _Tp  type; };
template <class _Tp>
struct __add_lvalue_reference_impl<_Tp, true> { typedef _Tp& type; };

template <class _Tp> struct _LIBCPP_TEMPLATE_VIS add_lvalue_reference
    {typedef typename __add_lvalue_reference_impl<_Tp>::type type;};

#if _LIBCPP_STD_VER > 11
template <class _Tp> using add_lvalue_reference_t = typename add_lvalue_reference<_Tp>::type;
#endif

#ifndef _LIBCPP_HAS_NO_RVALUE_REFERENCES

template <class _Tp, bool = __is_referenceable<_Tp>::value>
struct __add_rvalue_reference_impl            { typedef _Tp   type; };
template <class _Tp>
struct __add_rvalue_reference_impl<_Tp, true> { typedef _Tp&& type; };
template <class _Tp> struct _LIBCPP_TEMPLATE_VIS add_rvalue_reference
    {typedef typename __add_rvalue_reference_impl<_Tp>::type type;};

#if _LIBCPP_STD_VER > 11
template <class _Tp> using add_rvalue_reference_t = typename add_rvalue_reference<_Tp>::type;
#endif

#endif  // _LIBCPP_HAS_NO_RVALUE_REFERENCES

#ifndef _LIBCPP_HAS_NO_RVALUE_REFERENCES

template <class _Tp> _Tp&& __declval(int);
template <class _Tp> _Tp   __declval(long);

template <class _Tp>
decltype(_VSTD::__declval<_Tp>(0))
declval() _NOEXCEPT;

#else  // _LIBCPP_HAS_NO_RVALUE_REFERENCES

template <class _Tp>
typename add_lvalue_reference<_Tp>::type
declval();

#endif  // _LIBCPP_HAS_NO_RVALUE_REFERENCES

// __uncvref

template <class _Tp>
struct __uncvref  {
    typedef typename remove_cv<typename remove_reference<_Tp>::type>::type type;
};

template <class _Tp>
struct __unconstref {
    typedef typename remove_const<typename remove_reference<_Tp>::type>::type type;
};

#ifndef _LIBCPP_CXX03_LANG
template <class _Tp>
using __uncvref_t = typename __uncvref<_Tp>::type;
#endif

// __is_same_uncvref

template <class _Tp, class _Up>
struct __is_same_uncvref : is_same<typename __uncvref<_Tp>::type,
                                   typename __uncvref<_Up>::type> {};

#if _LIBCPP_STD_VER > 17
// remove_cvref - same as __uncvref
template <class _Tp>
struct remove_cvref : public __uncvref<_Tp> {};

template <class _Tp> using remove_cvref_t = typename remove_cvref<_Tp>::type;
#endif

struct __any
{
    __any(...);
};

// remove_pointer

template <class _Tp> struct _LIBCPP_TEMPLATE_VIS remove_pointer                      {typedef _Tp type;};
template <class _Tp> struct _LIBCPP_TEMPLATE_VIS remove_pointer<_Tp*>                {typedef _Tp type;};
template <class _Tp> struct _LIBCPP_TEMPLATE_VIS remove_pointer<_Tp* const>          {typedef _Tp type;};
template <class _Tp> struct _LIBCPP_TEMPLATE_VIS remove_pointer<_Tp* volatile>       {typedef _Tp type;};
template <class _Tp> struct _LIBCPP_TEMPLATE_VIS remove_pointer<_Tp* const volatile> {typedef _Tp type;};

#if _LIBCPP_STD_VER > 11
template <class _Tp> using remove_pointer_t = typename remove_pointer<_Tp>::type;
#endif

// add_pointer

template <class _Tp,
        bool = __is_referenceable<_Tp>::value ||
               is_same<typename remove_cv<_Tp>::type, void>::value>
struct __add_pointer_impl
    {typedef typename remove_reference<_Tp>::type* type;};

template <class _Tp> struct __add_pointer_impl<_Tp, false>
    {typedef _Tp type;};

template <class _Tp> struct _LIBCPP_TEMPLATE_VIS add_pointer
    {typedef typename __add_pointer_impl<_Tp>::type type;};

#if _LIBCPP_STD_VER > 11
template <class _Tp> using add_pointer_t = typename add_pointer<_Tp>::type;
#endif

// is_signed

template <class _Tp, bool = is_integral<_Tp>::value>
struct __libcpp_is_signed_impl : public _LIBCPP_BOOL_CONSTANT(_Tp(-1) < _Tp(0)) {};

template <class _Tp>
struct __libcpp_is_signed_impl<_Tp, false> : public true_type {};  // floating point

template <class _Tp, bool = is_arithmetic<_Tp>::value>
struct __libcpp_is_signed : public __libcpp_is_signed_impl<_Tp> {};

template <class _Tp> struct __libcpp_is_signed<_Tp, false> : public false_type {};

template <class _Tp> struct _LIBCPP_TEMPLATE_VIS is_signed : public __libcpp_is_signed<_Tp> {};

#if _LIBCPP_STD_VER > 14 && !defined(_LIBCPP_HAS_NO_VARIABLE_TEMPLATES)
template <class _Tp> _LIBCPP_INLINE_VAR _LIBCPP_CONSTEXPR bool is_signed_v
    = is_signed<_Tp>::value;
#endif

// is_unsigned

template <class _Tp, bool = is_integral<_Tp>::value>
struct __libcpp_is_unsigned_impl : public _LIBCPP_BOOL_CONSTANT(_Tp(0) < _Tp(-1)) {};

template <class _Tp>
struct __libcpp_is_unsigned_impl<_Tp, false> : public false_type {};  // floating point

template <class _Tp, bool = is_arithmetic<_Tp>::value>
struct __libcpp_is_unsigned : public __libcpp_is_unsigned_impl<_Tp> {};

template <class _Tp> struct __libcpp_is_unsigned<_Tp, false> : public false_type {};

template <class _Tp> struct _LIBCPP_TEMPLATE_VIS is_unsigned : public __libcpp_is_unsigned<_Tp> {};

#if _LIBCPP_STD_VER > 14 && !defined(_LIBCPP_HAS_NO_VARIABLE_TEMPLATES)
template <class _Tp> _LIBCPP_INLINE_VAR _LIBCPP_CONSTEXPR bool is_unsigned_v
    = is_unsigned<_Tp>::value;
#endif
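// Editor's illustration (not part of the original header): declval
// "materializes" a value of any referenceable type inside an unevaluated
// operand, which the traits below rely on heavily; `_NoDefault` is a
// hypothetical type.
//
//   struct _NoDefault { _NoDefault() = delete; int __m; };
//   static_assert(is_same<decltype(declval<_NoDefault>().__m), int>::value, "");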
// rank

template <class _Tp> struct _LIBCPP_TEMPLATE_VIS rank
    : public integral_constant<size_t, 0> {};
template <class _Tp> struct _LIBCPP_TEMPLATE_VIS rank<_Tp[]>
    : public integral_constant<size_t, rank<_Tp>::value + 1> {};
template <class _Tp, size_t _Np> struct _LIBCPP_TEMPLATE_VIS rank<_Tp[_Np]>
    : public integral_constant<size_t, rank<_Tp>::value + 1> {};

#if _LIBCPP_STD_VER > 14 && !defined(_LIBCPP_HAS_NO_VARIABLE_TEMPLATES)
template <class _Tp> _LIBCPP_INLINE_VAR _LIBCPP_CONSTEXPR size_t rank_v
    = rank<_Tp>::value;
#endif

// extent

template <class _Tp, unsigned _Ip = 0> struct _LIBCPP_TEMPLATE_VIS extent
    : public integral_constant<size_t, 0> {};
template <class _Tp> struct _LIBCPP_TEMPLATE_VIS extent<_Tp[], 0>
    : public integral_constant<size_t, 0> {};
template <class _Tp, unsigned _Ip> struct _LIBCPP_TEMPLATE_VIS extent<_Tp[], _Ip>
    : public integral_constant<size_t, extent<_Tp, _Ip-1>::value> {};
template <class _Tp, size_t _Np> struct _LIBCPP_TEMPLATE_VIS extent<_Tp[_Np], 0>
    : public integral_constant<size_t, _Np> {};
template <class _Tp, size_t _Np, unsigned _Ip> struct _LIBCPP_TEMPLATE_VIS extent<_Tp[_Np], _Ip>
    : public integral_constant<size_t, extent<_Tp, _Ip-1>::value> {};

#if _LIBCPP_STD_VER > 14 && !defined(_LIBCPP_HAS_NO_VARIABLE_TEMPLATES)
template <class _Tp, unsigned _Ip = 0> _LIBCPP_INLINE_VAR _LIBCPP_CONSTEXPR size_t extent_v
    = extent<_Tp, _Ip>::value;
#endif

// remove_extent

template <class _Tp> struct _LIBCPP_TEMPLATE_VIS remove_extent
    {typedef _Tp type;};
template <class _Tp> struct _LIBCPP_TEMPLATE_VIS remove_extent<_Tp[]>
    {typedef _Tp type;};
template <class _Tp, size_t _Np> struct _LIBCPP_TEMPLATE_VIS remove_extent<_Tp[_Np]>
    {typedef _Tp type;};

#if _LIBCPP_STD_VER > 11
template <class _Tp> using remove_extent_t = typename remove_extent<_Tp>::type;
#endif

// remove_all_extents

template <class _Tp> struct _LIBCPP_TEMPLATE_VIS remove_all_extents
    {typedef _Tp type;};
template <class _Tp> struct _LIBCPP_TEMPLATE_VIS remove_all_extents<_Tp[]>
    {typedef typename remove_all_extents<_Tp>::type type;};
template <class _Tp, size_t _Np> struct _LIBCPP_TEMPLATE_VIS remove_all_extents<_Tp[_Np]>
    {typedef typename remove_all_extents<_Tp>::type type;};

#if _LIBCPP_STD_VER > 11
template <class _Tp> using remove_all_extents_t = typename remove_all_extents<_Tp>::type;
#endif

// decay

template <class _Up, bool>
struct __decay {
    typedef typename remove_cv<_Up>::type type;
};

template <class _Up>
struct __decay<_Up, true> {
public:
    typedef typename conditional
                     <
                         is_array<_Up>::value,
                         typename remove_extent<_Up>::type*,
                         typename conditional
                         <
                              is_function<_Up>::value,
                              typename add_pointer<_Up>::type,
                              typename remove_cv<_Up>::type
                         >::type
                     >::type type;
};

template <class _Tp>
struct _LIBCPP_TEMPLATE_VIS decay
{
private:
    typedef typename remove_reference<_Tp>::type _Up;
public:
    typedef typename __decay<_Up, __is_referenceable<_Up>::value>::type type;
};

#if _LIBCPP_STD_VER > 11
template <class _Tp> using decay_t = typename decay<_Tp>::type;
#endif

// is_abstract

template <class _Tp> struct _LIBCPP_TEMPLATE_VIS is_abstract
    : public integral_constant<bool, __is_abstract(_Tp)> {};

#if _LIBCPP_STD_VER > 14 && !defined(_LIBCPP_HAS_NO_VARIABLE_TEMPLATES)
template <class _Tp> _LIBCPP_INLINE_VAR _LIBCPP_CONSTEXPR bool is_abstract_v
    = is_abstract<_Tp>::value;
#endif

// is_final

#if defined(_LIBCPP_HAS_IS_FINAL)
template <class _Tp> struct _LIBCPP_TEMPLATE_VIS
__libcpp_is_final : public integral_constant<bool, __is_final(_Tp)> {};
#else
template <class _Tp> struct _LIBCPP_TEMPLATE_VIS
__libcpp_is_final : public false_type {};
#endif

#if defined(_LIBCPP_HAS_IS_FINAL) && _LIBCPP_STD_VER > 11
template <class _Tp> struct _LIBCPP_TEMPLATE_VIS
is_final : public integral_constant<bool, __is_final(_Tp)> {};
#endif

#if _LIBCPP_STD_VER > 14 && !defined(_LIBCPP_HAS_NO_VARIABLE_TEMPLATES)
template <class _Tp> _LIBCPP_INLINE_VAR _LIBCPP_CONSTEXPR bool is_final_v
    = is_final<_Tp>::value;
#endif

// is_aggregate

#if _LIBCPP_STD_VER > 14 && !defined(_LIBCPP_HAS_NO_IS_AGGREGATE)

template <class _Tp> struct _LIBCPP_TEMPLATE_VIS
is_aggregate : public integral_constant<bool, __is_aggregate(_Tp)> {};

#if !defined(_LIBCPP_HAS_NO_VARIABLE_TEMPLATES)
template <class _Tp> _LIBCPP_INLINE_VAR constexpr bool is_aggregate_v
    = is_aggregate<_Tp>::value;
#endif

#endif // _LIBCPP_STD_VER > 14 && !defined(_LIBCPP_HAS_NO_IS_AGGREGATE)
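// Editor's illustration (not part of the original header): decay models the
// by-value parameter adjustments, and extent reads array bounds per dimension.
//
//   static_assert(is_same<decay<int[3]>::type, int*>::value, "");
//   static_assert(is_same<decay<int(char)>::type, int(*)(char)>::value, "");
//   static_assert(is_same<decay<const int&>::type, int>::value, "");
//   static_assert(extent<int[2][3], 1>::value == 3, "");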
// is_base_of

#ifdef _LIBCPP_HAS_IS_BASE_OF

template <class _Bp, class _Dp>
struct _LIBCPP_TEMPLATE_VIS is_base_of
    : public integral_constant<bool, __is_base_of(_Bp, _Dp)> {};

#else  // _LIBCPP_HAS_IS_BASE_OF

namespace __is_base_of_imp
{
template <class _Tp>
struct _Dst
{
    _Dst(const volatile _Tp &);
};
template <class _Tp>
struct _Src
{
    operator const volatile _Tp &();
    template <class _Up> operator const _Dst<_Up> &();
};
template <size_t> struct __one { typedef char type; };
template <class _Bp, class _Dp>
typename __one<sizeof(_Dst<_Bp>(declval<_Src<_Dp> >()))>::type __test(int);
template <class _Bp, class _Dp> __two __test(...);
}

template <class _Bp, class _Dp>
struct _LIBCPP_TEMPLATE_VIS is_base_of
    : public integral_constant<bool, is_class<_Bp>::value &&
                                     sizeof(__is_base_of_imp::__test<_Bp, _Dp>(0)) == 2> {};

#endif  // _LIBCPP_HAS_IS_BASE_OF

#if _LIBCPP_STD_VER > 14 && !defined(_LIBCPP_HAS_NO_VARIABLE_TEMPLATES)
template <class _Bp, class _Dp> _LIBCPP_INLINE_VAR _LIBCPP_CONSTEXPR bool is_base_of_v
    = is_base_of<_Bp, _Dp>::value;
#endif

// is_convertible

#if __has_feature(is_convertible_to) && !defined(_LIBCPP_USE_IS_CONVERTIBLE_FALLBACK)

template <class _T1, class _T2> struct _LIBCPP_TEMPLATE_VIS is_convertible
    : public integral_constant<bool, __is_convertible_to(_T1, _T2) &&
                                     !is_abstract<_T2>::value> {};

#else  // __has_feature(is_convertible_to)

namespace __is_convertible_imp
{
template <class _Tp> void  __test_convert(_Tp);

template <class _From, class _To, class = void>
struct __is_convertible_test : public false_type {};

template <class _From, class _To>
struct __is_convertible_test<_From, _To,
    decltype(_VSTD::__is_convertible_imp::__test_convert<_To>(_VSTD::declval<_From>()))> : public true_type
{};

template <class _Tp, bool _IsArray =    is_array<_Tp>::value,
                     bool _IsFunction = is_function<_Tp>::value,
                     bool _IsVoid =     is_void<_Tp>::value>
                     struct __is_array_function_or_void                          {enum {value = 0};};
template <class _Tp> struct __is_array_function_or_void<_Tp, true, false, false> {enum {value = 1};};
template <class _Tp> struct __is_array_function_or_void<_Tp, false, true, false> {enum {value = 2};};
template <class _Tp> struct __is_array_function_or_void<_Tp, false, false, true> {enum {value = 3};};
}

template <class _Tp,
    unsigned = __is_convertible_imp::__is_array_function_or_void<typename remove_reference<_Tp>::type>::value>
struct __is_convertible_check
{
    static const size_t __v = 0;
};

template <class _Tp>
struct __is_convertible_check<_Tp, 0>
{
    static const size_t __v = sizeof(_Tp);
};

template <class _T1, class _T2,
    unsigned _T1_is_array_function_or_void = __is_convertible_imp::__is_array_function_or_void<_T1>::value,
    unsigned _T2_is_array_function_or_void = __is_convertible_imp::__is_array_function_or_void<_T2>::value>
struct __is_convertible
    : public integral_constant<bool,
        __is_convertible_imp::__is_convertible_test<_T1, _T2>::value
#if defined(_LIBCPP_HAS_NO_RVALUE_REFERENCES)
         && !(!is_function<_T1>::value && !is_reference<_T1>::value && is_reference<_T2>::value
              && (!is_const<typename remove_reference<_T2>::type>::value
                  || is_volatile<typename remove_reference<_T2>::type>::value)
                  && (is_same<typename remove_cv<_T1>::type,
                              typename remove_cv<typename remove_reference<_T2>::type>::type>::value
                      || is_base_of<typename remove_reference<_T2>::type, _T1>::value))
#endif
    >
{};

template <class _T1, class _T2> struct __is_convertible<_T1, _T2, 0, 1> : public false_type {};
template <class _T1, class _T2> struct __is_convertible<_T1, _T2, 1, 1> : public false_type {};
template <class _T1, class _T2> struct __is_convertible<_T1, _T2, 2, 1> : public false_type {};
template <class _T1, class _T2> struct __is_convertible<_T1, _T2, 3, 1> : public false_type {};

template <class _T1, class _T2> struct __is_convertible<_T1, _T2, 0, 2> : public false_type {};
template <class _T1, class _T2> struct __is_convertible<_T1, _T2, 1, 2> : public false_type {};
template <class _T1, class _T2> struct __is_convertible<_T1, _T2, 2, 2> : public false_type {};
template <class _T1, class _T2> struct __is_convertible<_T1, _T2, 3, 2> : public false_type {};

template <class _T1, class _T2> struct __is_convertible<_T1, _T2, 0, 3> : public false_type {};
template <class _T1, class _T2> struct __is_convertible<_T1, _T2, 1, 3> : public false_type {};
template <class _T1, class _T2> struct __is_convertible<_T1, _T2, 2, 3> : public false_type {};
template <class _T1, class _T2> struct __is_convertible<_T1, _T2, 3, 3> : public true_type {};

template <class _T1, class _T2> struct _LIBCPP_TEMPLATE_VIS is_convertible
    : public __is_convertible<_T1, _T2>
{
    static const size_t __complete_check1 = __is_convertible_check<_T1>::__v;
    static const size_t __complete_check2 = __is_convertible_check<_T2>::__v;
};

#endif  // __has_feature(is_convertible_to)
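// Editor's illustration (not part of the original header): both the builtin
// and the fallback above test *implicit* convertibility only.
//
//   static_assert(is_convertible<int, double>::value, "");
//   static_assert(is_convertible<char*, const void*>::value, "");
//   static_assert(!is_convertible<const void*, char*>::value, "");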
#if _LIBCPP_STD_VER > 14 && !defined(_LIBCPP_HAS_NO_VARIABLE_TEMPLATES)
template <class _From, class _To> _LIBCPP_INLINE_VAR _LIBCPP_CONSTEXPR bool is_convertible_v
    = is_convertible<_From, _To>::value;
#endif

// is_empty

#if __has_feature(is_empty) || (_GNUC_VER >= 407)

template <class _Tp> struct _LIBCPP_TEMPLATE_VIS is_empty
    : public integral_constant<bool, __is_empty(_Tp)> {};

#else  // __has_feature(is_empty)

template <class _Tp>
struct __is_empty1
    : public _Tp
{
    double __lx;
};

struct __is_empty2
{
    double __lx;
};

template <class _Tp, bool = is_class<_Tp>::value>
struct __libcpp_empty : public integral_constant<bool, sizeof(__is_empty1<_Tp>) == sizeof(__is_empty2)> {};

template <class _Tp> struct __libcpp_empty<_Tp, false> : public false_type {};

template <class _Tp> struct _LIBCPP_TEMPLATE_VIS is_empty : public __libcpp_empty<_Tp> {};

#endif  // __has_feature(is_empty)

#if _LIBCPP_STD_VER > 14 && !defined(_LIBCPP_HAS_NO_VARIABLE_TEMPLATES)
template <class _Tp> _LIBCPP_INLINE_VAR _LIBCPP_CONSTEXPR bool is_empty_v
    = is_empty<_Tp>::value;
#endif

// is_polymorphic

#if __has_feature(is_polymorphic) || defined(_LIBCPP_COMPILER_MSVC)

template <class _Tp> struct _LIBCPP_TEMPLATE_VIS is_polymorphic
    : public integral_constant<bool, __is_polymorphic(_Tp)> {};

#else

template<typename _Tp> char &__is_polymorphic_impl(
    typename enable_if<sizeof((_Tp*)dynamic_cast<const volatile void*>(declval<_Tp*>())) != 0,
                       int>::type);
template<typename _Tp> __two &__is_polymorphic_impl(...);

template <class _Tp> struct _LIBCPP_TEMPLATE_VIS is_polymorphic
    : public integral_constant<bool, sizeof(__is_polymorphic_impl<_Tp>(0)) == 1> {};

#endif // __has_feature(is_polymorphic)

#if _LIBCPP_STD_VER > 14 && !defined(_LIBCPP_HAS_NO_VARIABLE_TEMPLATES)
template <class _Tp> _LIBCPP_INLINE_VAR _LIBCPP_CONSTEXPR bool is_polymorphic_v
    = is_polymorphic<_Tp>::value;
#endif

// has_virtual_destructor

#if __has_feature(has_virtual_destructor) || (_GNUC_VER >= 403)

template <class _Tp> struct _LIBCPP_TEMPLATE_VIS has_virtual_destructor
    : public integral_constant<bool, __has_virtual_destructor(_Tp)> {};

#else

template <class _Tp> struct _LIBCPP_TEMPLATE_VIS has_virtual_destructor
    : public false_type {};

#endif

#if _LIBCPP_STD_VER > 14 && !defined(_LIBCPP_HAS_NO_VARIABLE_TEMPLATES)
template <class _Tp> _LIBCPP_INLINE_VAR _LIBCPP_CONSTEXPR bool has_virtual_destructor_v
    = has_virtual_destructor<_Tp>::value;
#endif

// has_unique_object_representations

#if _LIBCPP_STD_VER > 14 && defined(_LIBCPP_HAS_UNIQUE_OBJECT_REPRESENTATIONS)

template <class _Tp> struct _LIBCPP_TEMPLATE_VIS has_unique_object_representations
    : public integral_constant<bool,
         __has_unique_object_representations(remove_cv_t<remove_all_extents_t<_Tp>>)> {};

#if !defined(_LIBCPP_HAS_NO_VARIABLE_TEMPLATES)
template <class _Tp> _LIBCPP_INLINE_VAR _LIBCPP_CONSTEXPR bool has_unique_object_representations_v
    = has_unique_object_representations<_Tp>::value;
#endif

#endif

// alignment_of

template <class _Tp> struct _LIBCPP_TEMPLATE_VIS alignment_of
    : public integral_constant<size_t, __alignof__(_Tp)> {};

#if _LIBCPP_STD_VER > 14 && !defined(_LIBCPP_HAS_NO_VARIABLE_TEMPLATES)
template <class _Tp> _LIBCPP_INLINE_VAR _LIBCPP_CONSTEXPR size_t alignment_of_v
    = alignment_of<_Tp>::value;
#endif
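// Editor's illustration (not part of the original header): the class-shape
// queries above are pure compile-time predicates; `_Base` is a hypothetical
// type.
//
//   struct _Base { virtual ~_Base() {} };
//   static_assert(is_polymorphic<_Base>::value, "");
//   static_assert(has_virtual_destructor<_Base>::value, "");
//   static_assert(alignment_of<char>::value == 1, "");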
// aligned_storage

template <class _Hp, class _Tp>
struct __type_list
{
    typedef _Hp _Head;
    typedef _Tp _Tail;
};

struct __nat
{
#ifndef _LIBCPP_CXX03_LANG
    __nat() = delete;
    __nat(const __nat&) = delete;
    __nat& operator=(const __nat&) = delete;
    ~__nat() = delete;
#endif
};

template <class _Tp>
struct __align_type
{
    static const size_t value = alignment_of<_Tp>::value;
    typedef _Tp type;
};

struct __struct_double {long double __lx;};
struct __struct_double4 {double __lx[4];};

typedef
    __type_list<__align_type<unsigned char>,
    __type_list<__align_type<unsigned short>,
    __type_list<__align_type<unsigned int>,
    __type_list<__align_type<unsigned long>,
    __type_list<__align_type<unsigned long long>,
    __type_list<__align_type<double>,
    __type_list<__align_type<long double>,
    __type_list<__align_type<__struct_double>,
    __type_list<__align_type<__struct_double4>,
    __type_list<__align_type<int*>,
    __nat
    > > > > > > > > > > __all_types;

template <class _TL, size_t _Align> struct __find_pod;

template <class _Hp, size_t _Align>
struct __find_pod<__type_list<_Hp, __nat>, _Align>
{
    typedef typename conditional<
                             _Align == _Hp::value,
                             typename _Hp::type,
                             void
                         >::type type;
};

template <class _Hp, class _Tp, size_t _Align>
struct __find_pod<__type_list<_Hp, _Tp>, _Align>
{
    typedef typename conditional<
                             _Align == _Hp::value,
                             typename _Hp::type,
                             typename __find_pod<_Tp, _Align>::type
                         >::type type;
};

template <class _TL, size_t _Len> struct __find_max_align;

template <class _Hp, size_t _Len>
struct __find_max_align<__type_list<_Hp, __nat>, _Len>
    : public integral_constant<size_t, _Hp::value> {};

template <size_t _Len, size_t _A1, size_t _A2>
struct __select_align
{
private:
    static const size_t __min = _A2 < _A1 ? _A2 : _A1;
    static const size_t __max = _A1 < _A2 ? _A2 : _A1;
public:
    static const size_t value = _Len < __max ? __min : __max;
};

template <class _Hp, class _Tp, size_t _Len>
struct __find_max_align<__type_list<_Hp, _Tp>, _Len>
    : public integral_constant<size_t,
          __select_align<_Len, _Hp::value, __find_max_align<_Tp, _Len>::value>::value> {};

template <size_t _Len, size_t _Align = __find_max_align<__all_types, _Len>::value>
struct _LIBCPP_TEMPLATE_VIS aligned_storage
{
    typedef typename __find_pod<__all_types, _Align>::type _Aligner;
    static_assert(!is_void<_Aligner>::value, "");
    union type
    {
        _Aligner __align;
        unsigned char __data[(_Len + _Align - 1)/_Align * _Align];
    };
};

#if _LIBCPP_STD_VER > 11
template <size_t _Len, size_t _Align = __find_max_align<__all_types, _Len>::value>
using aligned_storage_t = typename aligned_storage<_Len, _Align>::type;
#endif

#define _CREATE_ALIGNED_STORAGE_SPECIALIZATION(n) \
template <size_t _Len>\
struct _LIBCPP_TEMPLATE_VIS aligned_storage<_Len, n>\
{\
    struct _ALIGNAS(n) type\
    {\
        unsigned char __lx[(_Len + n - 1)/n * n];\
    };\
}

_CREATE_ALIGNED_STORAGE_SPECIALIZATION(0x1);
_CREATE_ALIGNED_STORAGE_SPECIALIZATION(0x2);
_CREATE_ALIGNED_STORAGE_SPECIALIZATION(0x4);
_CREATE_ALIGNED_STORAGE_SPECIALIZATION(0x8);
_CREATE_ALIGNED_STORAGE_SPECIALIZATION(0x10);
_CREATE_ALIGNED_STORAGE_SPECIALIZATION(0x20);
_CREATE_ALIGNED_STORAGE_SPECIALIZATION(0x40);
_CREATE_ALIGNED_STORAGE_SPECIALIZATION(0x80);
_CREATE_ALIGNED_STORAGE_SPECIALIZATION(0x100);
_CREATE_ALIGNED_STORAGE_SPECIALIZATION(0x200);
_CREATE_ALIGNED_STORAGE_SPECIALIZATION(0x400);
_CREATE_ALIGNED_STORAGE_SPECIALIZATION(0x800);
_CREATE_ALIGNED_STORAGE_SPECIALIZATION(0x1000);
_CREATE_ALIGNED_STORAGE_SPECIALIZATION(0x2000);
// PE/COFF does not support alignment beyond 8192 (=0x2000)
#if !defined(_LIBCPP_OBJECT_FORMAT_COFF)
_CREATE_ALIGNED_STORAGE_SPECIALIZATION(0x4000);
#endif // !defined(_LIBCPP_OBJECT_FORMAT_COFF)

#undef _CREATE_ALIGNED_STORAGE_SPECIALIZATION
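// Editor's illustration (not part of the original header): aligned_storage
// yields raw, suitably aligned bytes for placement-new, as in a small-buffer
// optimization.
//
//   typedef aligned_storage<sizeof(long double),
//                           alignment_of<long double>::value>::type _Buf;
//   static_assert(sizeof(_Buf) >= sizeof(long double), "");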
#ifndef _LIBCPP_HAS_NO_VARIADICS

// aligned_union

template <size_t _I0, size_t ..._In>
struct __static_max;

template <size_t _I0>
struct __static_max<_I0>
{
    static const size_t value = _I0;
};

template <size_t _I0, size_t _I1, size_t ..._In>
struct __static_max<_I0, _I1, _In...>
{
    static const size_t value = _I0 >= _I1 ? __static_max<_I0, _In...>::value :
                                             __static_max<_I1, _In...>::value;
};

template <size_t _Len, class _Type0, class ..._Types>
struct aligned_union
{
    static const size_t alignment_value = __static_max<__alignof__(_Type0),
                                                       __alignof__(_Types)...>::value;
    static const size_t __len = __static_max<_Len, sizeof(_Type0),
                                             sizeof(_Types)...>::value;
    typedef typename aligned_storage<__len, alignment_value>::type type;
};

#if _LIBCPP_STD_VER > 11
template <size_t _Len, class ..._Types> using aligned_union_t = typename aligned_union<_Len, _Types...>::type;
#endif

#endif  // _LIBCPP_HAS_NO_VARIADICS

template <class _Tp>
struct __numeric_type
{
   static void __test(...);
   static float __test(float);
   static double __test(char);
   static double __test(int);
   static double __test(unsigned);
   static double __test(long);
   static double __test(unsigned long);
   static double __test(long long);
   static double __test(unsigned long long);
   static double __test(double);
   static long double __test(long double);

   typedef decltype(__test(declval<_Tp>())) type;
   static const bool value = !is_same<type, void>::value;
};

template <>
struct __numeric_type<void>
{
   static const bool value = true;
};

// __promote

template <class _A1, class _A2 = void, class _A3 = void,
          bool = __numeric_type<_A1>::value &&
                 __numeric_type<_A2>::value &&
                 __numeric_type<_A3>::value>
class __promote_imp
{
public:
    static const bool value = false;
};

template <class _A1, class _A2, class _A3>
class __promote_imp<_A1, _A2, _A3, true>
{
private:
    typedef typename __promote_imp<_A1>::type __type1;
    typedef typename __promote_imp<_A2>::type __type2;
    typedef typename __promote_imp<_A3>::type __type3;
public:
    typedef decltype(__type1() + __type2() + __type3()) type;
    static const bool value = true;
};

template <class _A1, class _A2>
class __promote_imp<_A1, _A2, void, true>
{
private:
    typedef typename __promote_imp<_A1>::type __type1;
    typedef typename __promote_imp<_A2>::type __type2;
public:
    typedef decltype(__type1() + __type2()) type;
    static const bool value = true;
};

template <class _A1>
class __promote_imp<_A1, void, void, true>
{
public:
    typedef typename __numeric_type<_A1>::type type;
    static const bool value = true;
};

template <class _A1, class _A2 = void, class _A3 = void>
class __promote : public __promote_imp<_A1, _A2, _A3> {};

// make_signed / make_unsigned

typedef
    __type_list<signed char,
    __type_list<signed short,
    __type_list<signed int,
    __type_list<signed long,
    __type_list<signed long long,
#ifndef _LIBCPP_HAS_NO_INT128
    __type_list<__int128_t,
#endif
    __nat
#ifndef _LIBCPP_HAS_NO_INT128
    >
#endif
    > > > > > __signed_types;

typedef
    __type_list<unsigned char,
    __type_list<unsigned short,
    __type_list<unsigned int,
    __type_list<unsigned long,
    __type_list<unsigned long long,
#ifndef _LIBCPP_HAS_NO_INT128
    __type_list<__uint128_t,
#endif
    __nat
#ifndef _LIBCPP_HAS_NO_INT128
    >
#endif
    > > > > > __unsigned_types;

template <class _TypeList, size_t _Size, bool = _Size <= sizeof(typename _TypeList::_Head)> struct __find_first;

template <class _Hp, class _Tp, size_t _Size>
struct __find_first<__type_list<_Hp, _Tp>, _Size, true>
{
    typedef _Hp type;
};

template <class _Hp, class _Tp, size_t _Size>
struct __find_first<__type_list<_Hp, _Tp>, _Size, false>
{
    typedef typename __find_first<_Tp, _Size>::type type;
};

template <class _Tp, class _Up, bool = is_const<typename remove_reference<_Tp>::type>::value,
                             bool = is_volatile<typename remove_reference<_Tp>::type>::value>
struct __apply_cv
{
    typedef _Up type;
};

template <class _Tp, class _Up> struct __apply_cv<_Tp, _Up, true, false>  { typedef const _Up type; };
template <class _Tp, class _Up> struct __apply_cv<_Tp, _Up, false, true>  { typedef volatile _Up type; };
template <class _Tp, class _Up> struct __apply_cv<_Tp, _Up, true, true>   { typedef const volatile _Up type; };
template <class _Tp, class _Up> struct __apply_cv<_Tp&, _Up, false, false> { typedef _Up& type; };
template <class _Tp, class _Up> struct __apply_cv<_Tp&, _Up, true, false>  { typedef const _Up& type; };
template <class _Tp, class _Up> struct __apply_cv<_Tp&, _Up, false, true>  { typedef volatile _Up& type; };
template <class _Tp, class _Up> struct __apply_cv<_Tp&, _Up, true, true>   { typedef const volatile _Up& type; };
template <class _Tp, bool = is_integral<_Tp>::value || is_enum<_Tp>::value>
struct __make_signed {};

template <class _Tp>
struct __make_signed<_Tp, true>
{
    typedef typename __find_first<__signed_types, sizeof(_Tp)>::type type;
};

template <> struct __make_signed<bool,               true> {};
template <> struct __make_signed<  signed short,     true> {typedef short     type;};
template <> struct __make_signed<unsigned short,     true> {typedef short     type;};
template <> struct __make_signed<  signed int,       true> {typedef int       type;};
template <> struct __make_signed<unsigned int,       true> {typedef int       type;};
template <> struct __make_signed<  signed long,      true> {typedef long      type;};
template <> struct __make_signed<unsigned long,      true> {typedef long      type;};
template <> struct __make_signed<  signed long long, true> {typedef long long type;};
template <> struct __make_signed<unsigned long long, true> {typedef long long type;};
#ifndef _LIBCPP_HAS_NO_INT128
template <> struct __make_signed<__int128_t,         true> {typedef __int128_t type;};
template <> struct __make_signed<__uint128_t,        true> {typedef __int128_t type;};
#endif

template <class _Tp>
struct _LIBCPP_TEMPLATE_VIS make_signed
{
    typedef typename __apply_cv<_Tp, typename __make_signed<typename remove_cv<_Tp>::type>::type>::type type;
};

#if _LIBCPP_STD_VER > 11
template <class _Tp> using make_signed_t = typename make_signed<_Tp>::type;
#endif

template <class _Tp, bool = is_integral<_Tp>::value || is_enum<_Tp>::value>
struct __make_unsigned {};

template <class _Tp>
struct __make_unsigned<_Tp, true>
{
    typedef typename __find_first<__unsigned_types, sizeof(_Tp)>::type type;
};

template <> struct __make_unsigned<bool,               true> {};
template <> struct __make_unsigned<  signed short,     true> {typedef unsigned short     type;};
template <> struct __make_unsigned<unsigned short,     true> {typedef unsigned short     type;};
template <> struct __make_unsigned<  signed int,       true> {typedef unsigned int       type;};
template <> struct __make_unsigned<unsigned int,       true> {typedef unsigned int       type;};
template <> struct __make_unsigned<  signed long,      true> {typedef unsigned long      type;};
template <> struct __make_unsigned<unsigned long,      true> {typedef unsigned long      type;};
template <> struct __make_unsigned<  signed long long, true> {typedef unsigned long long type;};
template <> struct __make_unsigned<unsigned long long, true> {typedef unsigned long long type;};
#ifndef _LIBCPP_HAS_NO_INT128
template <> struct __make_unsigned<__int128_t,         true> {typedef __uint128_t type;};
template <> struct __make_unsigned<__uint128_t,        true> {typedef __uint128_t type;};
#endif

template <class _Tp>
struct _LIBCPP_TEMPLATE_VIS make_unsigned
{
    typedef typename __apply_cv<_Tp, typename __make_unsigned<typename remove_cv<_Tp>::type>::type>::type type;
};

#if _LIBCPP_STD_VER > 11
template <class _Tp> using make_unsigned_t = typename make_unsigned<_Tp>::type;
#endif

#ifdef _LIBCPP_HAS_NO_VARIADICS

template <class _Tp, class _Up = void, class _Vp = void>
struct _LIBCPP_TEMPLATE_VIS common_type
{
public:
    typedef typename common_type<typename common_type<_Tp, _Up>::type, _Vp>::type type;
};

template <>
struct _LIBCPP_TEMPLATE_VIS common_type<void, void, void>
{
public:
    typedef void type;
};

template <class _Tp>
struct _LIBCPP_TEMPLATE_VIS common_type<_Tp, void, void>
{
public:
    typedef typename common_type<_Tp, _Tp>::type type;
};

template <class _Tp, class _Up>
struct _LIBCPP_TEMPLATE_VIS common_type<_Tp, _Up, void>
{
    typedef typename decay<decltype(
        true ? _VSTD::declval<_Tp>() : _VSTD::declval<_Up>()
      )>::type type;
};

#else  // _LIBCPP_HAS_NO_VARIADICS

// bullet 1 - sizeof...(Tp) == 0

template <class ..._Tp>
struct _LIBCPP_TEMPLATE_VIS common_type {};

// bullet 2 - sizeof...(Tp) == 1

template <class _Tp>
struct _LIBCPP_TEMPLATE_VIS common_type<_Tp>
    : public common_type<_Tp, _Tp> {};

// bullet 3 - sizeof...(Tp) == 2

template <class _Tp, class _Up, class = void>
struct __common_type2_imp {};

template <class _Tp, class _Up>
struct __common_type2_imp<_Tp, _Up,
    typename __void_t<decltype(
        true ? _VSTD::declval<_Tp>() : _VSTD::declval<_Up>()
    )>::type>
{
    typedef typename decay<decltype(
        true ? _VSTD::declval<_Tp>() : _VSTD::declval<_Up>()
    )>::type type;
};

template <class _Tp, class _Up,
          class _DTp = typename decay<_Tp>::type,
          class _DUp = typename decay<_Up>::type>
using __common_type2 =
    typename conditional<
        is_same<_Tp, _DTp>::value && is_same<_Up, _DUp>::value,
        __common_type2_imp<_Tp, _Up>,
        common_type<_DTp, _DUp>
    >::type;

template <class _Tp, class _Up>
struct _LIBCPP_TEMPLATE_VIS common_type<_Tp, _Up>
    : __common_type2<_Tp, _Up> {};

// bullet 4 - sizeof...(Tp) > 2

template <class ..._Tp> struct __common_types;

template <class _TTp, class = void>
struct __common_type_impl {};

template <class _Tp, class _Up>
struct __common_type_impl<
    __common_types<_Tp, _Up>,
    typename __void_t<typename common_type<_Tp, _Up>::type>::type>
{
    typedef typename common_type<_Tp, _Up>::type type;
};
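// Editor's illustration (not part of the original header): make_signed /
// make_unsigned map to the equally sized counterpart and re-apply the
// original cv-qualifiers through __apply_cv.
//
//   static_assert(is_same<make_unsigned<const int>::type, const unsigned int>::value, "");
//   static_assert(is_same<make_signed<unsigned long>::type, long>::value, "");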
template <class _Tp, class _Up, class ..._Vp>
struct __common_type_impl<__common_types<_Tp, _Up, _Vp...>,
    typename __void_t<typename common_type<_Tp, _Up>::type>::type>
    : __common_type_impl<
        __common_types<typename common_type<_Tp, _Up>::type, _Vp...> >
{};

template <class _Tp, class _Up, class ..._Vp>
struct _LIBCPP_TEMPLATE_VIS common_type<_Tp, _Up, _Vp...>
    : __common_type_impl<__common_types<_Tp, _Up, _Vp...> > {};

#if _LIBCPP_STD_VER > 11
template <class ..._Tp> using common_type_t = typename common_type<_Tp...>::type;
#endif

#endif  // _LIBCPP_HAS_NO_VARIADICS

// is_assignable

template<typename, typename _Tp> struct __select_2nd { typedef _Tp type; };

template <class _Tp, class _Arg>
typename __select_2nd<decltype((_VSTD::declval<_Tp>() = _VSTD::declval<_Arg>())), true_type>::type
__is_assignable_test(int);

template <class, class>
false_type __is_assignable_test(...);

template <class _Tp, class _Arg, bool = is_void<_Tp>::value || is_void<_Arg>::value>
struct __is_assignable_imp
    : public decltype((_VSTD::__is_assignable_test<_Tp, _Arg>(0))) {};

template <class _Tp, class _Arg>
struct __is_assignable_imp<_Tp, _Arg, true>
    : public false_type
{
};

template <class _Tp, class _Arg>
struct is_assignable
    : public __is_assignable_imp<_Tp, _Arg> {};

#if _LIBCPP_STD_VER > 14 && !defined(_LIBCPP_HAS_NO_VARIABLE_TEMPLATES)
template <class _Tp, class _Arg> _LIBCPP_INLINE_VAR _LIBCPP_CONSTEXPR bool is_assignable_v
    = is_assignable<_Tp, _Arg>::value;
#endif

// is_copy_assignable

template <class _Tp> struct _LIBCPP_TEMPLATE_VIS is_copy_assignable
    : public is_assignable<typename add_lvalue_reference<_Tp>::type,
                  typename add_lvalue_reference<typename add_const<_Tp>::type>::type> {};

#if _LIBCPP_STD_VER > 14 && !defined(_LIBCPP_HAS_NO_VARIABLE_TEMPLATES)
template <class _Tp> _LIBCPP_INLINE_VAR _LIBCPP_CONSTEXPR bool is_copy_assignable_v
    = is_copy_assignable<_Tp>::value;
#endif

// is_move_assignable

template <class _Tp> struct _LIBCPP_TEMPLATE_VIS is_move_assignable
#ifndef _LIBCPP_HAS_NO_RVALUE_REFERENCES
    : public is_assignable<typename add_lvalue_reference<_Tp>::type,
                           typename add_rvalue_reference<_Tp>::type> {};
#else
    : public is_copy_assignable<_Tp> {};
#endif

#if _LIBCPP_STD_VER > 14 && !defined(_LIBCPP_HAS_NO_VARIABLE_TEMPLATES)
template <class _Tp> _LIBCPP_INLINE_VAR _LIBCPP_CONSTEXPR bool is_move_assignable_v
    = is_move_assignable<_Tp>::value;
#endif

// is_destructible

//  if it's a reference, return true
//  if it's a function, return false
//  if it's   void,     return false
//  if it's an array of unknown bound, return false
//  Otherwise, return "std::declval<_Up&>().~_Up()" is well-formed
//    where _Up is remove_all_extents<_Tp>::type

template <class>
struct __is_destructible_apply { typedef int type; };

template <typename _Tp>
struct __is_destructor_wellformed {
    template <typename _Tp1>
    static char  __test (
        typename __is_destructible_apply<decltype(_VSTD::declval<_Tp1&>().~_Tp1())>::type
    );

    template <typename _Tp1>
    static __two __test (...);

    static const bool value = sizeof(__test<_Tp>(12)) == sizeof(char);
};

template <class _Tp, bool>
struct __destructible_imp;

template <class _Tp>
struct __destructible_imp<_Tp, false>
   : public _VSTD::integral_constant<bool,
        __is_destructor_wellformed<typename _VSTD::remove_all_extents<_Tp>::type>::value> {};

template <class _Tp>
struct __destructible_imp<_Tp, true>
    : public _VSTD::true_type {};

template <class _Tp, bool>
struct __destructible_false;

template <class _Tp>
struct __destructible_false<_Tp, false> : public __destructible_imp<_Tp, _VSTD::is_reference<_Tp>::value> {};

template <class _Tp>
struct __destructible_false<_Tp, true> : public _VSTD::false_type {};

template <class _Tp>
struct is_destructible
    : public __destructible_false<_Tp, _VSTD::is_function<_Tp>::value> {};

template <class _Tp>
struct is_destructible<_Tp[]>
    : public _VSTD::false_type {};

template <>
struct is_destructible<void>
    : public _VSTD::false_type {};

#if _LIBCPP_STD_VER > 14 && !defined(_LIBCPP_HAS_NO_VARIABLE_TEMPLATES)
template <class _Tp> _LIBCPP_INLINE_VAR _LIBCPP_CONSTEXPR bool is_destructible_v
    = is_destructible<_Tp>::value;
#endif
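// Editor's illustration (not part of the original header): common_type decays
// its result, and the assignment/destruction traits answer for reference and
// void cases per the rules spelled out above.
//
//   static_assert(is_same<common_type<char, short, int>::type, int>::value, "");
//   static_assert(is_copy_assignable<int>::value, "");
//   static_assert(!is_destructible<void>::value, "");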
// move

#ifndef _LIBCPP_HAS_NO_RVALUE_REFERENCES

template <class _Tp>
inline _LIBCPP_INLINE_VISIBILITY
_LIBCPP_CONSTEXPR
typename remove_reference<_Tp>::type&&
move(_Tp&& __t) _NOEXCEPT
{
    typedef typename remove_reference<_Tp>::type _Up;
    return static_cast<_Up&&>(__t);
}

template <class _Tp>
inline _LIBCPP_INLINE_VISIBILITY
_LIBCPP_CONSTEXPR
_Tp&&
forward(typename remove_reference<_Tp>::type& __t) _NOEXCEPT
{
    return static_cast<_Tp&&>(__t);
}

template <class _Tp>
inline _LIBCPP_INLINE_VISIBILITY
_LIBCPP_CONSTEXPR
_Tp&&
forward(typename remove_reference<_Tp>::type&& __t) _NOEXCEPT
{
    static_assert(!is_lvalue_reference<_Tp>::value,
                  "can not forward an rvalue as an lvalue");
    return static_cast<_Tp&&>(__t);
}

#else  // _LIBCPP_HAS_NO_RVALUE_REFERENCES

template <class _Tp>
inline _LIBCPP_INLINE_VISIBILITY
_Tp&
move(_Tp& __t)
{
    return __t;
}

template <class _Tp>
inline _LIBCPP_INLINE_VISIBILITY
const _Tp&
move(const _Tp& __t)
{
    return __t;
}

template <class _Tp>
inline _LIBCPP_INLINE_VISIBILITY
_Tp&
forward(typename remove_reference<_Tp>::type& __t) _NOEXCEPT
{
    return __t;
}

template <class _Tp>
class __rv
{
    typedef typename remove_reference<_Tp>::type _Trr;
    _Trr& t_;
public:
    _LIBCPP_INLINE_VISIBILITY
    _Trr* operator->() {return &t_;}
    _LIBCPP_INLINE_VISIBILITY
    explicit __rv(_Trr& __t) : t_(__t) {}
};

#endif  // _LIBCPP_HAS_NO_RVALUE_REFERENCES

#ifndef _LIBCPP_HAS_NO_RVALUE_REFERENCES

template <class _Tp>
inline _LIBCPP_INLINE_VISIBILITY
typename decay<_Tp>::type
__decay_copy(_Tp&& __t)
{
    return _VSTD::forward<_Tp>(__t);
}

#else

template <class _Tp>
inline _LIBCPP_INLINE_VISIBILITY
typename decay<_Tp>::type
__decay_copy(const _Tp& __t)
{
    return _VSTD::forward<_Tp>(__t);
}

#endif
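// Editor's illustration (not part of the original header): move is just a
// cast to rvalue reference, and forward re-applies the deduced value
// category; `__apply_fwd` is a hypothetical helper.
//
//   template <class _Fn, class _Arg>
//   void __apply_fwd(_Fn __f, _Arg&& __a) { __f(_VSTD::forward<_Arg>(__a)); }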
#ifndef _LIBCPP_HAS_NO_VARIADICS

template <class _Rp, class _Class, class ..._Param>
struct __member_pointer_traits_imp<_Rp (_Class::*)(_Param...), true, false>
{ typedef _Class _ClassType; typedef _Rp _ReturnType; typedef _Rp (_FnType) (_Param...); };

template <class _Rp, class _Class, class ..._Param>
struct __member_pointer_traits_imp<_Rp (_Class::*)(_Param..., ...), true, false>
{ typedef _Class _ClassType; typedef _Rp _ReturnType; typedef _Rp (_FnType) (_Param..., ...); };

template <class _Rp, class _Class, class ..._Param>
struct __member_pointer_traits_imp<_Rp (_Class::*)(_Param...) const, true, false>
{ typedef _Class const _ClassType; typedef _Rp _ReturnType; typedef _Rp (_FnType) (_Param...); };

template <class _Rp, class _Class, class ..._Param>
struct __member_pointer_traits_imp<_Rp (_Class::*)(_Param..., ...) const, true, false>
{ typedef _Class const _ClassType; typedef _Rp _ReturnType; typedef _Rp (_FnType) (_Param..., ...); };

template <class _Rp, class _Class, class ..._Param>
struct __member_pointer_traits_imp<_Rp (_Class::*)(_Param...) volatile, true, false>
{ typedef _Class volatile _ClassType; typedef _Rp _ReturnType; typedef _Rp (_FnType) (_Param...); };

template <class _Rp, class _Class, class ..._Param>
struct __member_pointer_traits_imp<_Rp (_Class::*)(_Param..., ...) volatile, true, false>
{ typedef _Class volatile _ClassType; typedef _Rp _ReturnType; typedef _Rp (_FnType) (_Param..., ...); };

template <class _Rp, class _Class, class ..._Param>
struct __member_pointer_traits_imp<_Rp (_Class::*)(_Param...) const volatile, true, false>
{ typedef _Class const volatile _ClassType; typedef _Rp _ReturnType; typedef _Rp (_FnType) (_Param...); };

template <class _Rp, class _Class, class ..._Param>
struct __member_pointer_traits_imp<_Rp (_Class::*)(_Param..., ...) const volatile, true, false>
{ typedef _Class const volatile _ClassType; typedef _Rp _ReturnType; typedef _Rp (_FnType) (_Param..., ...); };

#if __has_feature(cxx_reference_qualified_functions) || \
    (defined(_GNUC_VER) && _GNUC_VER >= 409)

template <class _Rp, class _Class, class ..._Param>
struct __member_pointer_traits_imp<_Rp (_Class::*)(_Param...) &, true, false>
{ typedef _Class& _ClassType; typedef _Rp _ReturnType; typedef _Rp (_FnType) (_Param...); };

template <class _Rp, class _Class, class ..._Param>
struct __member_pointer_traits_imp<_Rp (_Class::*)(_Param..., ...) &, true, false>
{ typedef _Class& _ClassType; typedef _Rp _ReturnType; typedef _Rp (_FnType) (_Param..., ...); };

template <class _Rp, class _Class, class ..._Param>
struct __member_pointer_traits_imp<_Rp (_Class::*)(_Param...) const&, true, false>
{ typedef _Class const& _ClassType; typedef _Rp _ReturnType; typedef _Rp (_FnType) (_Param...); };

template <class _Rp, class _Class, class ..._Param>
struct __member_pointer_traits_imp<_Rp (_Class::*)(_Param..., ...) const&, true, false>
{ typedef _Class const& _ClassType; typedef _Rp _ReturnType; typedef _Rp (_FnType) (_Param..., ...); };

template <class _Rp, class _Class, class ..._Param>
struct __member_pointer_traits_imp<_Rp (_Class::*)(_Param...) volatile&, true, false>
{ typedef _Class volatile& _ClassType; typedef _Rp _ReturnType; typedef _Rp (_FnType) (_Param...); };

template <class _Rp, class _Class, class ..._Param>
struct __member_pointer_traits_imp<_Rp (_Class::*)(_Param..., ...) volatile&, true, false>
{ typedef _Class volatile& _ClassType; typedef _Rp _ReturnType; typedef _Rp (_FnType) (_Param..., ...); };

template <class _Rp, class _Class, class ..._Param>
struct __member_pointer_traits_imp<_Rp (_Class::*)(_Param...) const volatile&, true, false>
{ typedef _Class const volatile& _ClassType; typedef _Rp _ReturnType; typedef _Rp (_FnType) (_Param...); };

template <class _Rp, class _Class, class ..._Param>
struct __member_pointer_traits_imp<_Rp (_Class::*)(_Param..., ...) const volatile&, true, false>
{ typedef _Class const volatile& _ClassType; typedef _Rp _ReturnType; typedef _Rp (_FnType) (_Param..., ...); };

template <class _Rp, class _Class, class ..._Param>
struct __member_pointer_traits_imp<_Rp (_Class::*)(_Param...) &&, true, false>
{ typedef _Class&& _ClassType; typedef _Rp _ReturnType; typedef _Rp (_FnType) (_Param...); };

template <class _Rp, class _Class, class ..._Param>
struct __member_pointer_traits_imp<_Rp (_Class::*)(_Param..., ...) &&, true, false>
{ typedef _Class&& _ClassType; typedef _Rp _ReturnType; typedef _Rp (_FnType) (_Param..., ...); };

template <class _Rp, class _Class, class ..._Param>
struct __member_pointer_traits_imp<_Rp (_Class::*)(_Param...) const&&, true, false>
{ typedef _Class const&& _ClassType; typedef _Rp _ReturnType; typedef _Rp (_FnType) (_Param...); };

template <class _Rp, class _Class, class ..._Param>
struct __member_pointer_traits_imp<_Rp (_Class::*)(_Param..., ...) const&&, true, false>
{ typedef _Class const&& _ClassType; typedef _Rp _ReturnType; typedef _Rp (_FnType) (_Param..., ...); };

template <class _Rp, class _Class, class ..._Param>
struct __member_pointer_traits_imp<_Rp (_Class::*)(_Param...) volatile&&, true, false>
{ typedef _Class volatile&& _ClassType; typedef _Rp _ReturnType; typedef _Rp (_FnType) (_Param...); };

template <class _Rp, class _Class, class ..._Param>
struct __member_pointer_traits_imp<_Rp (_Class::*)(_Param..., ...) volatile&&, true, false>
{ typedef _Class volatile&& _ClassType; typedef _Rp _ReturnType; typedef _Rp (_FnType) (_Param..., ...); };

template <class _Rp, class _Class, class ..._Param>
struct __member_pointer_traits_imp<_Rp (_Class::*)(_Param...) const volatile&&, true, false>
{ typedef _Class const volatile&& _ClassType; typedef _Rp _ReturnType; typedef _Rp (_FnType) (_Param...); };
template <class _Rp, class _Class, class ..._Param>
struct __member_pointer_traits_imp<_Rp (_Class::*)(_Param..., ...) const volatile&&, true, false>
{ typedef _Class const volatile&& _ClassType; typedef _Rp _ReturnType; typedef _Rp (_FnType) (_Param..., ...); };

#endif  // __has_feature(cxx_reference_qualified_functions) || _GNUC_VER >= 409

#else  // _LIBCPP_HAS_NO_VARIADICS

template <class _Rp, class _Class>
struct __member_pointer_traits_imp<_Rp (_Class::*)(), true, false>
{ typedef _Class _ClassType; typedef _Rp _ReturnType; typedef _Rp (_FnType) (); };

template <class _Rp, class _Class>
struct __member_pointer_traits_imp<_Rp (_Class::*)(...), true, false>
{ typedef _Class _ClassType; typedef _Rp _ReturnType; typedef _Rp (_FnType) (...); };

template <class _Rp, class _Class, class _P0>
struct __member_pointer_traits_imp<_Rp (_Class::*)(_P0), true, false>
{ typedef _Class _ClassType; typedef _Rp _ReturnType; typedef _Rp (_FnType) (_P0); };

template <class _Rp, class _Class, class _P0>
struct __member_pointer_traits_imp<_Rp (_Class::*)(_P0, ...), true, false>
{ typedef _Class _ClassType; typedef _Rp _ReturnType; typedef _Rp (_FnType) (_P0, ...); };

template <class _Rp, class _Class, class _P0, class _P1>
struct __member_pointer_traits_imp<_Rp (_Class::*)(_P0, _P1), true, false>
{ typedef _Class _ClassType; typedef _Rp _ReturnType; typedef _Rp (_FnType) (_P0, _P1); };

template <class _Rp, class _Class, class _P0, class _P1>
struct __member_pointer_traits_imp<_Rp (_Class::*)(_P0, _P1, ...), true, false>
{ typedef _Class _ClassType; typedef _Rp _ReturnType; typedef _Rp (_FnType) (_P0, _P1, ...); };

template <class _Rp, class _Class, class _P0, class _P1, class _P2>
struct __member_pointer_traits_imp<_Rp (_Class::*)(_P0, _P1, _P2), true, false>
{ typedef _Class _ClassType; typedef _Rp _ReturnType; typedef _Rp (_FnType) (_P0, _P1, _P2); };

template <class _Rp, class _Class, class _P0, class _P1, class _P2>
struct __member_pointer_traits_imp<_Rp (_Class::*)(_P0, _P1, _P2, ...), true, false>
{ typedef _Class _ClassType; typedef _Rp _ReturnType; typedef _Rp (_FnType) (_P0, _P1, _P2, ...); };

template <class _Rp, class _Class>
struct __member_pointer_traits_imp<_Rp (_Class::*)() const, true, false>
{ typedef _Class const _ClassType; typedef _Rp _ReturnType; typedef _Rp (_FnType) (); };

template <class _Rp, class _Class>
struct __member_pointer_traits_imp<_Rp (_Class::*)(...) const, true, false>
{ typedef _Class const _ClassType; typedef _Rp _ReturnType; typedef _Rp (_FnType) (...); };

template <class _Rp, class _Class, class _P0>
struct __member_pointer_traits_imp<_Rp (_Class::*)(_P0) const, true, false>
{ typedef _Class const _ClassType; typedef _Rp _ReturnType; typedef _Rp (_FnType) (_P0); };

template <class _Rp, class _Class, class _P0>
struct __member_pointer_traits_imp<_Rp (_Class::*)(_P0, ...) const, true, false>
{ typedef _Class const _ClassType; typedef _Rp _ReturnType; typedef _Rp (_FnType) (_P0, ...); };

template <class _Rp, class _Class, class _P0, class _P1>
struct __member_pointer_traits_imp<_Rp (_Class::*)(_P0, _P1) const, true, false>
{ typedef _Class const _ClassType; typedef _Rp _ReturnType; typedef _Rp (_FnType) (_P0, _P1); };

template <class _Rp, class _Class, class _P0, class _P1>
struct __member_pointer_traits_imp<_Rp (_Class::*)(_P0, _P1, ...) const, true, false>
{ typedef _Class const _ClassType; typedef _Rp _ReturnType; typedef _Rp (_FnType) (_P0, _P1, ...); };

template <class _Rp, class _Class, class _P0, class _P1, class _P2>
struct __member_pointer_traits_imp<_Rp (_Class::*)(_P0, _P1, _P2) const, true, false>
{ typedef _Class const _ClassType; typedef _Rp _ReturnType; typedef _Rp (_FnType) (_P0, _P1, _P2); };

template <class _Rp, class _Class, class _P0, class _P1, class _P2>
struct __member_pointer_traits_imp<_Rp (_Class::*)(_P0, _P1, _P2, ...) const, true, false>
{ typedef _Class const _ClassType; typedef _Rp _ReturnType; typedef _Rp (_FnType) (_P0, _P1, _P2, ...); };

template <class _Rp, class _Class>
struct __member_pointer_traits_imp<_Rp (_Class::*)() volatile, true, false>
{ typedef _Class volatile _ClassType; typedef _Rp _ReturnType; typedef _Rp (_FnType) (); };
template <class _Rp, class _Class>
struct __member_pointer_traits_imp<_Rp (_Class::*)(...) volatile, true, false>
{ typedef _Class volatile _ClassType; typedef _Rp _ReturnType; typedef _Rp (_FnType) (...); };

template <class _Rp, class _Class, class _P0>
struct __member_pointer_traits_imp<_Rp (_Class::*)(_P0) volatile, true, false>
{ typedef _Class volatile _ClassType; typedef _Rp _ReturnType; typedef _Rp (_FnType) (_P0); };

template <class _Rp, class _Class, class _P0>
struct __member_pointer_traits_imp<_Rp (_Class::*)(_P0, ...) volatile, true, false>
{ typedef _Class volatile _ClassType; typedef _Rp _ReturnType; typedef _Rp (_FnType) (_P0, ...); };

template <class _Rp, class _Class, class _P0, class _P1>
struct __member_pointer_traits_imp<_Rp (_Class::*)(_P0, _P1) volatile, true, false>
{ typedef _Class volatile _ClassType; typedef _Rp _ReturnType; typedef _Rp (_FnType) (_P0, _P1); };

template <class _Rp, class _Class, class _P0, class _P1>
struct __member_pointer_traits_imp<_Rp (_Class::*)(_P0, _P1, ...) volatile, true, false>
{ typedef _Class volatile _ClassType; typedef _Rp _ReturnType; typedef _Rp (_FnType) (_P0, _P1, ...); };

template <class _Rp, class _Class, class _P0, class _P1, class _P2>
struct __member_pointer_traits_imp<_Rp (_Class::*)(_P0, _P1, _P2) volatile, true, false>
{ typedef _Class volatile _ClassType; typedef _Rp _ReturnType; typedef _Rp (_FnType) (_P0, _P1, _P2); };

template <class _Rp, class _Class, class _P0, class _P1, class _P2>
struct __member_pointer_traits_imp<_Rp (_Class::*)(_P0, _P1, _P2, ...) volatile, true, false>
{ typedef _Class volatile _ClassType; typedef _Rp _ReturnType; typedef _Rp (_FnType) (_P0, _P1, _P2, ...); };

template <class _Rp, class _Class>
struct __member_pointer_traits_imp<_Rp (_Class::*)() const volatile, true, false>
{ typedef _Class const volatile _ClassType; typedef _Rp _ReturnType; typedef _Rp (_FnType) (); };

template <class _Rp, class _Class>
struct __member_pointer_traits_imp<_Rp (_Class::*)(...) const volatile, true, false>
{ typedef _Class const volatile _ClassType; typedef _Rp _ReturnType; typedef _Rp (_FnType) (...); };

template <class _Rp, class _Class, class _P0>
struct __member_pointer_traits_imp<_Rp (_Class::*)(_P0) const volatile, true, false>
{ typedef _Class const volatile _ClassType; typedef _Rp _ReturnType; typedef _Rp (_FnType) (_P0); };

template <class _Rp, class _Class, class _P0>
struct __member_pointer_traits_imp<_Rp (_Class::*)(_P0, ...) const volatile, true, false>
{ typedef _Class const volatile _ClassType; typedef _Rp _ReturnType; typedef _Rp (_FnType) (_P0, ...); };

template <class _Rp, class _Class, class _P0, class _P1>
struct __member_pointer_traits_imp<_Rp (_Class::*)(_P0, _P1) const volatile, true, false>
{ typedef _Class const volatile _ClassType; typedef _Rp _ReturnType; typedef _Rp (_FnType) (_P0, _P1); };

template <class _Rp, class _Class, class _P0, class _P1>
struct __member_pointer_traits_imp<_Rp (_Class::*)(_P0, _P1, ...) const volatile, true, false>
{ typedef _Class const volatile _ClassType; typedef _Rp _ReturnType; typedef _Rp (_FnType) (_P0, _P1, ...); };

template <class _Rp, class _Class, class _P0, class _P1, class _P2>
struct __member_pointer_traits_imp<_Rp (_Class::*)(_P0, _P1, _P2) const volatile, true, false>
{ typedef _Class const volatile _ClassType; typedef _Rp _ReturnType; typedef _Rp (_FnType) (_P0, _P1, _P2); };

template <class _Rp, class _Class, class _P0, class _P1, class _P2>
struct __member_pointer_traits_imp<_Rp (_Class::*)(_P0, _P1, _P2, ...) const volatile, true, false>
{ typedef _Class const volatile _ClassType; typedef _Rp _ReturnType; typedef _Rp (_FnType) (_P0, _P1, _P2, ...); };

#endif  // _LIBCPP_HAS_NO_VARIADICS

template <class _Rp, class _Class>
struct __member_pointer_traits_imp<_Rp _Class::*, false, true>
{
    typedef _Class _ClassType;
    typedef _Rp _ReturnType;
};

template <class _MP>
struct __member_pointer_traits
    : public __member_pointer_traits_imp<typename remove_cv<_MP>::type,
                    is_member_function_pointer<_MP>::value,
                    is_member_object_pointer<_MP>::value>
{
//     typedef ... _ClassType;
//     typedef ... _ReturnType;
//     typedef ... _FnType;
};
template <class _Tp>
struct __member_pointer_class_type {};

template <class _Ret, class _ClassType>
struct __member_pointer_class_type<_Ret _ClassType::*> {
    typedef _ClassType type;
};

// result_of

template <class _Callable> class result_of;

#ifdef _LIBCPP_HAS_NO_VARIADICS

template <class _Fn, bool, bool>
class __result_of
{
};

template <class _Fn>
class __result_of<_Fn(), true, false>
{
public:
    typedef decltype(declval<_Fn>()()) type;
};

template <class _Fn, class _A0>
class __result_of<_Fn(_A0), true, false>
{
public:
    typedef decltype(declval<_Fn>()(declval<_A0>())) type;
};

template <class _Fn, class _A0, class _A1>
class __result_of<_Fn(_A0, _A1), true, false>
{
public:
    typedef decltype(declval<_Fn>()(declval<_A0>(), declval<_A1>())) type;
};

template <class _Fn, class _A0, class _A1, class _A2>
class __result_of<_Fn(_A0, _A1, _A2), true, false>
{
public:
    typedef decltype(declval<_Fn>()(declval<_A0>(), declval<_A1>(), declval<_A2>())) type;
};

template <class _MP, class _Tp, bool _IsMemberFunctionPtr>
struct __result_of_mp;

// member function pointer

template <class _MP, class _Tp>
struct __result_of_mp<_MP, _Tp, true>
    : public __identity<typename __member_pointer_traits<_MP>::_ReturnType>
{
};

// member data pointer

template <class _MP, class _Tp, bool>
struct __result_of_mdp;

template <class _Rp, class _Class, class _Tp>
struct __result_of_mdp<_Rp _Class::*, _Tp, false>
{
    typedef typename __apply_cv<decltype(*_VSTD::declval<_Tp>()), _Rp>::type& type;
};

template <class _Rp, class _Class, class _Tp>
struct __result_of_mdp<_Rp _Class::*, _Tp, true>
{
    typedef typename __apply_cv<_Tp, _Rp>::type& type;
};

template <class _Rp, class _Class, class _Tp>
struct __result_of_mp<_Rp _Class::*, _Tp, false>
    : public __result_of_mdp<_Rp _Class::*, _Tp,
            is_base_of<_Class, typename remove_reference<_Tp>::type>::value>
{
};

template <class _Fn, class _Tp>
class __result_of<_Fn(_Tp), false, true>  // _Fn must be member pointer
    : public __result_of_mp<typename remove_reference<_Fn>::type, _Tp,
            is_member_function_pointer<typename remove_reference<_Fn>::type>::value>
{
};

template <class _Fn, class _Tp, class _A0>
class __result_of<_Fn(_Tp, _A0), false, true>  // _Fn must be member pointer
    : public __result_of_mp<typename remove_reference<_Fn>::type, _Tp,
            is_member_function_pointer<typename remove_reference<_Fn>::type>::value>
{
};

template <class _Fn, class _Tp, class _A0, class _A1>
class __result_of<_Fn(_Tp, _A0, _A1), false, true>  // _Fn must be member pointer
    : public __result_of_mp<typename remove_reference<_Fn>::type, _Tp,
            is_member_function_pointer<typename remove_reference<_Fn>::type>::value>
{
};

template <class _Fn, class _Tp, class _A0, class _A1, class _A2>
class __result_of<_Fn(_Tp, _A0, _A1, _A2), false, true>  // _Fn must be member pointer
    : public __result_of_mp<typename remove_reference<_Fn>::type, _Tp,
            is_member_function_pointer<typename remove_reference<_Fn>::type>::value>
{
};

// result_of

template <class _Fn>
class _LIBCPP_TEMPLATE_VIS result_of<_Fn()>
    : public __result_of<_Fn(),
                         is_class<typename remove_reference<_Fn>::type>::value ||
                         is_function<typename remove_pointer<typename remove_reference<_Fn>::type>::type>::value,
                         is_member_pointer<typename remove_reference<_Fn>::type>::value
                        >
{
};

template <class _Fn, class _A0>
class _LIBCPP_TEMPLATE_VIS result_of<_Fn(_A0)>
    : public __result_of<_Fn(_A0),
                         is_class<typename remove_reference<_Fn>::type>::value ||
                         is_function<typename remove_pointer<typename remove_reference<_Fn>::type>::type>::value,
                         is_member_pointer<typename remove_reference<_Fn>::type>::value
                        >
{
};

template <class _Fn, class _A0, class _A1>
class _LIBCPP_TEMPLATE_VIS result_of<_Fn(_A0, _A1)>
    : public __result_of<_Fn(_A0, _A1),
                         is_class<typename remove_reference<_Fn>::type>::value ||
                         is_function<typename remove_pointer<typename remove_reference<_Fn>::type>::type>::value,
                         is_member_pointer<typename remove_reference<_Fn>::type>::value
                        >
{
};

template <class _Fn, class _A0, class _A1, class _A2>
class _LIBCPP_TEMPLATE_VIS result_of<_Fn(_A0, _A1, _A2)>
    : public __result_of<_Fn(_A0, _A1, _A2),
                         is_class<typename remove_reference<_Fn>::type>::value ||
                         is_function<typename remove_pointer<typename remove_reference<_Fn>::type>::type>::value,
                         is_member_pointer<typename remove_reference<_Fn>::type>::value
                        >
{
};

#endif  // _LIBCPP_HAS_NO_VARIADICS

// template <class _Tp> struct is_constructible;

namespace __is_construct
{
struct __nat {};
}

#if !defined(_LIBCPP_CXX03_LANG) && (!__has_feature(is_constructible) || \
    defined(_LIBCPP_TESTING_FALLBACK_IS_CONSTRUCTIBLE))

template <class _Tp, class... _Args>
struct __libcpp_is_constructible;

template <class _To, class _From>
struct __is_invalid_base_to_derived_cast {
  static_assert(is_reference<_To>::value, "Wrong specialization");
  using _RawFrom = __uncvref_t<_From>;
  using _RawTo = __uncvref_t<_To>;
  static const bool value = __lazy_and<
        __lazy_not<is_same<_RawFrom, _RawTo>>,
        is_base_of<_RawFrom, _RawTo>,
        __lazy_not<__libcpp_is_constructible<_RawTo, _From>>
  >::value;
};
template <class _To, class _From>
struct __is_invalid_lvalue_to_rvalue_cast : false_type {
  static_assert(is_reference<_To>::value, "Wrong specialization");
};

template <class _ToRef, class _FromRef>
struct __is_invalid_lvalue_to_rvalue_cast<_ToRef&&, _FromRef&> {
  using _RawFrom = __uncvref_t<_FromRef>;
  using _RawTo = __uncvref_t<_ToRef>;
  static const bool value = __lazy_and<
      __lazy_not<is_function<_RawTo>>,
      __lazy_or<
        is_same<_RawFrom, _RawTo>,
        is_base_of<_RawTo, _RawFrom>>
  >::value;
};

struct __is_constructible_helper
{
    template <class _To>
    static void __eat(_To);

    // This overload is needed to work around a Clang bug that disallows
    // static_cast<T&&>(e) for non-reference-compatible types.
    // Example: static_cast<int&&>(declval<double>());
    // NOTE: The static_cast implementation below is required to support
    //  classes with explicit conversion operators.
    template <class _To, class _From,
              class = decltype(__eat<_To>(_VSTD::declval<_From>()))>
    static true_type __test_cast(int);

    template <class _To, class _From,
              class = decltype(static_cast<_To>(_VSTD::declval<_From>()))>
    static integral_constant<bool,
        !__is_invalid_base_to_derived_cast<_To, _From>::value &&
        !__is_invalid_lvalue_to_rvalue_cast<_To, _From>::value
    > __test_cast(long);

    template <class, class>
    static false_type __test_cast(...);

    template <class _Tp, class ..._Args,
        class = decltype(_Tp(_VSTD::declval<_Args>()...))>
    static true_type __test_nary(int);
    template <class _Tp, class...>
    static false_type __test_nary(...);

    template <class _Tp, class _A0, class = decltype(::new _Tp(_VSTD::declval<_A0>()))>
    static is_destructible<_Tp> __test_unary(int);
    template <class, class>
    static false_type __test_unary(...);
};

template <class _Tp, bool = is_void<_Tp>::value>
struct __is_default_constructible
    : decltype(__is_constructible_helper::__test_nary<_Tp>(0))
{};

template <class _Tp>
struct __is_default_constructible<_Tp, true> : false_type {};

template <class _Tp>
struct __is_default_constructible<_Tp[], false> : false_type {};

template <class _Tp, size_t _Nx>
struct __is_default_constructible<_Tp[_Nx], false>
    : __is_default_constructible<typename remove_all_extents<_Tp>::type> {};

template <class _Tp, class... _Args>
struct __libcpp_is_constructible
{
  static_assert(sizeof...(_Args) > 1, "Wrong specialization");
  typedef decltype(__is_constructible_helper::__test_nary<_Tp, _Args...>(0)) type;
};

template <class _Tp>
struct __libcpp_is_constructible<_Tp> : __is_default_constructible<_Tp> {};

template <class _Tp, class _A0>
struct __libcpp_is_constructible<_Tp, _A0>
    : public decltype(__is_constructible_helper::__test_unary<_Tp, _A0>(0))
{};

template <class _Tp, class _A0>
struct __libcpp_is_constructible<_Tp&, _A0>
    : public decltype(__is_constructible_helper::
    __test_cast<_Tp&, _A0>(0))
{};

template <class _Tp, class _A0>
struct __libcpp_is_constructible<_Tp&&, _A0>
    : public decltype(__is_constructible_helper::
    __test_cast<_Tp&&, _A0>(0))
{};

#endif

#if __has_feature(is_constructible)

template <class _Tp, class ..._Args>
struct _LIBCPP_TEMPLATE_VIS is_constructible
    : public integral_constant<bool, __is_constructible(_Tp, _Args...)>
    {};

#elif !defined(_LIBCPP_CXX03_LANG)

template <class _Tp, class... _Args>
struct _LIBCPP_TEMPLATE_VIS is_constructible
    : public __libcpp_is_constructible<_Tp, _Args...>::type {};

#else

// template <class _Tp> struct is_constructible0;

//      main is_constructible0 test

template <class _Tp>
decltype((_Tp(), true_type()))
__is_constructible0_test(_Tp&);

false_type
__is_constructible0_test(__any);

template <class _Tp, class _A0>
decltype((_Tp(_VSTD::declval<_A0>()), true_type()))
__is_constructible1_test(_Tp&, _A0&);

template <class _A0>
false_type
__is_constructible1_test(__any, _A0&);

template <class _Tp, class _A0, class _A1>
decltype((_Tp(_VSTD::declval<_A0>(), _VSTD::declval<_A1>()), true_type()))
__is_constructible2_test(_Tp&, _A0&, _A1&);

template <class _A0, class _A1>
false_type
__is_constructible2_test(__any, _A0&, _A1&);

template <class _Tp, class _A0, class _A1, class _A2>
decltype((_Tp(_VSTD::declval<_A0>(), _VSTD::declval<_A1>(), _VSTD::declval<_A2>()), true_type()))
__is_constructible3_test(_Tp&, _A0&, _A1&, _A2&);

template <class _A0, class _A1, class _A2>
false_type
__is_constructible3_test(__any, _A0&, _A1&, _A2&);

template <bool, class _Tp>
struct __is_constructible0_imp // false, _Tp is not a scalar
    : public common_type
             <
                 decltype(__is_constructible0_test(declval<_Tp&>()))
             >::type
    {};
template <bool, class _Tp, class _A0>
struct __is_constructible1_imp // false, _Tp is not a scalar
    : public common_type
             <
                 decltype(__is_constructible1_test(declval<_Tp&>(), declval<_A0&>()))
             >::type
    {};

template <bool, class _Tp, class _A0, class _A1>
struct __is_constructible2_imp // false, _Tp is not a scalar
    : public common_type
             <
                 decltype(__is_constructible2_test(declval<_Tp&>(), declval<_A0>(), declval<_A1>()))
             >::type
    {};

template <bool, class _Tp, class _A0, class _A1, class _A2>
struct __is_constructible3_imp // false, _Tp is not a scalar
    : public common_type
             <
                 decltype(__is_constructible3_test(declval<_Tp&>(), declval<_A0>(), declval<_A1>(), declval<_A2>()))
             >::type
    {};

//      handle scalars and reference types

//      Scalars are default constructible, references are not

template <class _Tp>
struct __is_constructible0_imp<true, _Tp>
    : public is_scalar<_Tp>
    {};

template <class _Tp, class _A0>
struct __is_constructible1_imp<true, _Tp, _A0>
    : public is_convertible<_A0, _Tp>
    {};

template <class _Tp, class _A0, class _A1>
struct __is_constructible2_imp<true, _Tp, _A0, _A1>
    : public false_type
    {};

template <class _Tp, class _A0, class _A1, class _A2>
struct __is_constructible3_imp<true, _Tp, _A0, _A1, _A2>
    : public false_type
    {};

//      Treat scalars and reference types separately

template <bool, class _Tp>
struct __is_constructible0_void_check
    : public __is_constructible0_imp<is_scalar<_Tp>::value || is_reference<_Tp>::value,
                                _Tp>
    {};

template <bool, class _Tp, class _A0>
struct __is_constructible1_void_check
    : public __is_constructible1_imp<is_scalar<_Tp>::value || is_reference<_Tp>::value,
                                _Tp, _A0>
    {};

template <bool, class _Tp, class _A0, class _A1>
struct __is_constructible2_void_check
    : public __is_constructible2_imp<is_scalar<_Tp>::value || is_reference<_Tp>::value,
                                _Tp, _A0, _A1>
    {};

template <bool, class _Tp, class _A0, class _A1, class _A2>
struct __is_constructible3_void_check
    : public __is_constructible3_imp<is_scalar<_Tp>::value || is_reference<_Tp>::value,
                                _Tp, _A0, _A1, _A2>
    {};

//      If any of T or Args is void, is_constructible should be false

template <class _Tp>
struct __is_constructible0_void_check<true, _Tp>
    : public false_type
    {};

template <class _Tp, class _A0>
struct __is_constructible1_void_check<true, _Tp, _A0>
    : public false_type
    {};

template <class _Tp, class _A0, class _A1>
struct __is_constructible2_void_check<true, _Tp, _A0, _A1>
    : public false_type
    {};

template <class _Tp, class _A0, class _A1, class _A2>
struct __is_constructible3_void_check<true, _Tp, _A0, _A1, _A2>
    : public false_type
    {};

//      is_constructible entry point

template <class _Tp, class _A0 = __is_construct::__nat,
                     class _A1 = __is_construct::__nat,
                     class _A2 = __is_construct::__nat>
struct _LIBCPP_TEMPLATE_VIS is_constructible
    : public __is_constructible3_void_check<is_void<_Tp>::value
                                        || is_abstract<_Tp>::value
                                        || is_function<_Tp>::value
                                        || is_void<_A0>::value
                                        || is_void<_A1>::value
                                        || is_void<_A2>::value,
                                           _Tp, _A0, _A1, _A2>
    {};

template <class _Tp>
struct _LIBCPP_TEMPLATE_VIS is_constructible<_Tp, __is_construct::__nat,
                                                  __is_construct::__nat>
    : public __is_constructible0_void_check<is_void<_Tp>::value
                                        || is_abstract<_Tp>::value
                                        || is_function<_Tp>::value,
                                           _Tp>
    {};

template <class _Tp, class _A0>
struct _LIBCPP_TEMPLATE_VIS is_constructible<_Tp, _A0, __is_construct::__nat>
    : public __is_constructible1_void_check<is_void<_Tp>::value
                                        || is_abstract<_Tp>::value
                                        || is_function<_Tp>::value
                                        || is_void<_A0>::value,
                                           _Tp, _A0>
    {};

template <class _Tp, class _A0, class _A1>
struct _LIBCPP_TEMPLATE_VIS is_constructible<_Tp, _A0, _A1, __is_construct::__nat>
    : public __is_constructible2_void_check<is_void<_Tp>::value
                                        || is_abstract<_Tp>::value
                                        || is_function<_Tp>::value
                                        || is_void<_A0>::value
                                        || is_void<_A1>::value,
                                           _Tp, _A0, _A1>
    {};

//      Array types are default constructible if their element type
//      is default constructible

template <class _Ap, size_t _Np>
struct __is_constructible0_imp<false, _Ap[_Np]>
    : public is_constructible<typename remove_all_extents<_Ap>::type>
    {};

template <class _Ap, size_t _Np, class _A0>
struct __is_constructible1_imp<false, _Ap[_Np], _A0>
    : public false_type
    {};

template <class _Ap, size_t _Np, class _A0, class _A1>
struct __is_constructible2_imp<false, _Ap[_Np], _A0, _A1>
    : public false_type
    {};

template <class _Ap, size_t _Np, class _A0, class _A1, class _A2>
struct __is_constructible3_imp<false, _Ap[_Np], _A0, _A1, _A2>
    : public false_type
    {};

//      Incomplete array types are not constructible

template <class _Ap>
struct __is_constructible0_imp<false, _Ap[]>
    : public false_type
    {};

template <class _Ap, class _A0>
struct __is_constructible1_imp<false, _Ap[], _A0>
    : public false_type
    {};

template <class _Ap, class _A0, class _A1>
struct __is_constructible2_imp<false, _Ap[], _A0, _A1>
    : public false_type
    {};

template <class _Ap, class _A0, class _A1, class _A2>
struct __is_constructible3_imp<false, _Ap[], _A0, _A1, _A2>
    : public false_type
    {};

#endif  // __has_feature(is_constructible)
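// Editor's illustration (not part of the original header): all three
// implementations above agree on the basic cases.
//
//   static_assert(is_constructible<int, double>::value, "");
//   static_assert(!is_constructible<int, void>::value, "");
//   static_assert(is_constructible<int&, int&>::value, "");
//   static_assert(!is_constructible<int&, int>::value, "");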
_LIBCPP_CONSTEXPR bool is_constructible_v = is_constructible<_Tp, _Args...>::value; #endif // is_default_constructible template struct _LIBCPP_TEMPLATE_VIS is_default_constructible : public is_constructible<_Tp> {}; #if _LIBCPP_STD_VER > 14 && !defined(_LIBCPP_HAS_NO_VARIABLE_TEMPLATES) template _LIBCPP_INLINE_VAR _LIBCPP_CONSTEXPR bool is_default_constructible_v = is_default_constructible<_Tp>::value; #endif // is_copy_constructible template struct _LIBCPP_TEMPLATE_VIS is_copy_constructible : public is_constructible<_Tp, typename add_lvalue_reference::type>::type> {}; #if _LIBCPP_STD_VER > 14 && !defined(_LIBCPP_HAS_NO_VARIABLE_TEMPLATES) template _LIBCPP_INLINE_VAR _LIBCPP_CONSTEXPR bool is_copy_constructible_v = is_copy_constructible<_Tp>::value; #endif // is_move_constructible template struct _LIBCPP_TEMPLATE_VIS is_move_constructible #ifndef _LIBCPP_HAS_NO_RVALUE_REFERENCES : public is_constructible<_Tp, typename add_rvalue_reference<_Tp>::type> #else : public is_copy_constructible<_Tp> #endif {}; #if _LIBCPP_STD_VER > 14 && !defined(_LIBCPP_HAS_NO_VARIABLE_TEMPLATES) template _LIBCPP_INLINE_VAR _LIBCPP_CONSTEXPR bool is_move_constructible_v = is_move_constructible<_Tp>::value; #endif // is_trivially_constructible #ifndef _LIBCPP_HAS_NO_VARIADICS #if __has_feature(is_trivially_constructible) || _GNUC_VER >= 501 template struct _LIBCPP_TEMPLATE_VIS is_trivially_constructible : integral_constant { }; #else // !__has_feature(is_trivially_constructible) template struct _LIBCPP_TEMPLATE_VIS is_trivially_constructible : false_type { }; template struct _LIBCPP_TEMPLATE_VIS is_trivially_constructible<_Tp> #if __has_feature(has_trivial_constructor) || (_GNUC_VER >= 403) : integral_constant #else : integral_constant::value> #endif { }; template #ifndef _LIBCPP_HAS_NO_RVALUE_REFERENCES struct _LIBCPP_TEMPLATE_VIS is_trivially_constructible<_Tp, _Tp&&> #else struct _LIBCPP_TEMPLATE_VIS is_trivially_constructible<_Tp, _Tp> #endif : integral_constant::value> { }; template struct _LIBCPP_TEMPLATE_VIS is_trivially_constructible<_Tp, const _Tp&> : integral_constant::value> { }; template struct _LIBCPP_TEMPLATE_VIS is_trivially_constructible<_Tp, _Tp&> : integral_constant::value> { }; #endif // !__has_feature(is_trivially_constructible) #else // _LIBCPP_HAS_NO_VARIADICS template struct _LIBCPP_TEMPLATE_VIS is_trivially_constructible : false_type { }; #if __has_feature(is_trivially_constructible) || _GNUC_VER >= 501 template struct _LIBCPP_TEMPLATE_VIS is_trivially_constructible<_Tp, __is_construct::__nat, __is_construct::__nat> : integral_constant { }; template struct _LIBCPP_TEMPLATE_VIS is_trivially_constructible<_Tp, _Tp, __is_construct::__nat> : integral_constant { }; template struct _LIBCPP_TEMPLATE_VIS is_trivially_constructible<_Tp, const _Tp&, __is_construct::__nat> : integral_constant { }; template struct _LIBCPP_TEMPLATE_VIS is_trivially_constructible<_Tp, _Tp&, __is_construct::__nat> : integral_constant { }; #else // !__has_feature(is_trivially_constructible) template struct _LIBCPP_TEMPLATE_VIS is_trivially_constructible<_Tp, __is_construct::__nat, __is_construct::__nat> : integral_constant::value> { }; template struct _LIBCPP_TEMPLATE_VIS is_trivially_constructible<_Tp, _Tp, __is_construct::__nat> : integral_constant::value> { }; template struct _LIBCPP_TEMPLATE_VIS is_trivially_constructible<_Tp, const _Tp&, __is_construct::__nat> : integral_constant::value> { }; template struct _LIBCPP_TEMPLATE_VIS is_trivially_constructible<_Tp, _Tp&, __is_construct::__nat> : integral_constant::value> { 
}; #endif // !__has_feature(is_trivially_constructible) #endif // _LIBCPP_HAS_NO_VARIADICS #if _LIBCPP_STD_VER > 14 && !defined(_LIBCPP_HAS_NO_VARIABLE_TEMPLATES) && !defined(_LIBCPP_HAS_NO_VARIADICS) template _LIBCPP_INLINE_VAR _LIBCPP_CONSTEXPR bool is_trivially_constructible_v = is_trivially_constructible<_Tp, _Args...>::value; #endif // is_trivially_default_constructible template struct _LIBCPP_TEMPLATE_VIS is_trivially_default_constructible : public is_trivially_constructible<_Tp> {}; #if _LIBCPP_STD_VER > 14 && !defined(_LIBCPP_HAS_NO_VARIABLE_TEMPLATES) template _LIBCPP_INLINE_VAR _LIBCPP_CONSTEXPR bool is_trivially_default_constructible_v = is_trivially_default_constructible<_Tp>::value; #endif // is_trivially_copy_constructible template struct _LIBCPP_TEMPLATE_VIS is_trivially_copy_constructible : public is_trivially_constructible<_Tp, typename add_lvalue_reference::type> {}; #if _LIBCPP_STD_VER > 14 && !defined(_LIBCPP_HAS_NO_VARIABLE_TEMPLATES) template _LIBCPP_INLINE_VAR _LIBCPP_CONSTEXPR bool is_trivially_copy_constructible_v = is_trivially_copy_constructible<_Tp>::value; #endif // is_trivially_move_constructible template struct _LIBCPP_TEMPLATE_VIS is_trivially_move_constructible #ifndef _LIBCPP_HAS_NO_RVALUE_REFERENCES : public is_trivially_constructible<_Tp, typename add_rvalue_reference<_Tp>::type> #else : public is_trivially_copy_constructible<_Tp> #endif {}; #if _LIBCPP_STD_VER > 14 && !defined(_LIBCPP_HAS_NO_VARIABLE_TEMPLATES) template _LIBCPP_INLINE_VAR _LIBCPP_CONSTEXPR bool is_trivially_move_constructible_v = is_trivially_move_constructible<_Tp>::value; #endif // is_trivially_assignable #if __has_feature(is_trivially_assignable) || _GNUC_VER >= 501 template struct is_trivially_assignable : integral_constant { }; #else // !__has_feature(is_trivially_assignable) template struct is_trivially_assignable : public false_type {}; template struct is_trivially_assignable<_Tp&, _Tp> : integral_constant::value> {}; template struct is_trivially_assignable<_Tp&, _Tp&> : integral_constant::value> {}; template struct is_trivially_assignable<_Tp&, const _Tp&> : integral_constant::value> {}; #ifndef _LIBCPP_HAS_NO_RVALUE_REFERENCES template struct is_trivially_assignable<_Tp&, _Tp&&> : integral_constant::value> {}; #endif // _LIBCPP_HAS_NO_RVALUE_REFERENCES #endif // !__has_feature(is_trivially_assignable) #if _LIBCPP_STD_VER > 14 && !defined(_LIBCPP_HAS_NO_VARIABLE_TEMPLATES) template _LIBCPP_INLINE_VAR _LIBCPP_CONSTEXPR bool is_trivially_assignable_v = is_trivially_assignable<_Tp, _Arg>::value; #endif // is_trivially_copy_assignable template struct _LIBCPP_TEMPLATE_VIS is_trivially_copy_assignable : public is_trivially_assignable::type, typename add_lvalue_reference::type>::type> {}; #if _LIBCPP_STD_VER > 14 && !defined(_LIBCPP_HAS_NO_VARIABLE_TEMPLATES) template _LIBCPP_INLINE_VAR _LIBCPP_CONSTEXPR bool is_trivially_copy_assignable_v = is_trivially_copy_assignable<_Tp>::value; #endif // is_trivially_move_assignable template struct _LIBCPP_TEMPLATE_VIS is_trivially_move_assignable : public is_trivially_assignable::type, #ifndef _LIBCPP_HAS_NO_RVALUE_REFERENCES typename add_rvalue_reference<_Tp>::type> #else typename add_lvalue_reference<_Tp>::type> #endif {}; #if _LIBCPP_STD_VER > 14 && !defined(_LIBCPP_HAS_NO_VARIABLE_TEMPLATES) template _LIBCPP_INLINE_VAR _LIBCPP_CONSTEXPR bool is_trivially_move_assignable_v = is_trivially_move_assignable<_Tp>::value; #endif // is_trivially_destructible #if __has_feature(has_trivial_destructor) || (_GNUC_VER >= 403) template struct 
_LIBCPP_TEMPLATE_VIS is_trivially_destructible : public integral_constant::value && __has_trivial_destructor(_Tp)> {}; #else template struct __libcpp_trivial_destructor : public integral_constant::value || is_reference<_Tp>::value> {}; template struct _LIBCPP_TEMPLATE_VIS is_trivially_destructible : public __libcpp_trivial_destructor::type> {}; template struct _LIBCPP_TEMPLATE_VIS is_trivially_destructible<_Tp[]> : public false_type {}; #endif #if _LIBCPP_STD_VER > 14 && !defined(_LIBCPP_HAS_NO_VARIABLE_TEMPLATES) template _LIBCPP_INLINE_VAR _LIBCPP_CONSTEXPR bool is_trivially_destructible_v = is_trivially_destructible<_Tp>::value; #endif // is_nothrow_constructible #if 0 template struct _LIBCPP_TEMPLATE_VIS is_nothrow_constructible : public integral_constant { }; #else #ifndef _LIBCPP_HAS_NO_VARIADICS #if __has_feature(cxx_noexcept) || (_GNUC_VER >= 407 && __cplusplus >= 201103L) template struct __libcpp_is_nothrow_constructible; template struct __libcpp_is_nothrow_constructible : public integral_constant()...))> { }; template void __implicit_conversion_to(_Tp) noexcept { } template struct __libcpp_is_nothrow_constructible : public integral_constant(declval<_Arg>()))> { }; template struct __libcpp_is_nothrow_constructible : public false_type { }; template struct _LIBCPP_TEMPLATE_VIS is_nothrow_constructible : __libcpp_is_nothrow_constructible::value, is_reference<_Tp>::value, _Tp, _Args...> { }; template struct _LIBCPP_TEMPLATE_VIS is_nothrow_constructible<_Tp[_Ns]> : __libcpp_is_nothrow_constructible::value, is_reference<_Tp>::value, _Tp> { }; #else // __has_feature(cxx_noexcept) template struct _LIBCPP_TEMPLATE_VIS is_nothrow_constructible : false_type { }; template struct _LIBCPP_TEMPLATE_VIS is_nothrow_constructible<_Tp> #if __has_feature(has_nothrow_constructor) || (_GNUC_VER >= 403) : integral_constant #else : integral_constant::value> #endif { }; template #ifndef _LIBCPP_HAS_NO_RVALUE_REFERENCES struct _LIBCPP_TEMPLATE_VIS is_nothrow_constructible<_Tp, _Tp&&> #else struct _LIBCPP_TEMPLATE_VIS is_nothrow_constructible<_Tp, _Tp> #endif #if __has_feature(has_nothrow_copy) || (_GNUC_VER >= 403) : integral_constant #else : integral_constant::value> #endif { }; template struct _LIBCPP_TEMPLATE_VIS is_nothrow_constructible<_Tp, const _Tp&> #if __has_feature(has_nothrow_copy) || (_GNUC_VER >= 403) : integral_constant #else : integral_constant::value> #endif { }; template struct _LIBCPP_TEMPLATE_VIS is_nothrow_constructible<_Tp, _Tp&> #if __has_feature(has_nothrow_copy) || (_GNUC_VER >= 403) : integral_constant #else : integral_constant::value> #endif { }; #endif // __has_feature(cxx_noexcept) #else // _LIBCPP_HAS_NO_VARIADICS template struct _LIBCPP_TEMPLATE_VIS is_nothrow_constructible : false_type { }; template struct _LIBCPP_TEMPLATE_VIS is_nothrow_constructible<_Tp, __is_construct::__nat, __is_construct::__nat> #if __has_feature(has_nothrow_constructor) || (_GNUC_VER >= 403) : integral_constant #else : integral_constant::value> #endif { }; template struct _LIBCPP_TEMPLATE_VIS is_nothrow_constructible<_Tp, _Tp, __is_construct::__nat> #if __has_feature(has_nothrow_copy) || (_GNUC_VER >= 403) : integral_constant #else : integral_constant::value> #endif { }; template struct _LIBCPP_TEMPLATE_VIS is_nothrow_constructible<_Tp, const _Tp&, __is_construct::__nat> #if __has_feature(has_nothrow_copy) || (_GNUC_VER >= 403) : integral_constant #else : integral_constant::value> #endif { }; template struct _LIBCPP_TEMPLATE_VIS is_nothrow_constructible<_Tp, _Tp&, __is_construct::__nat> #if 
__has_feature(has_nothrow_copy) || (_GNUC_VER >= 403) : integral_constant #else : integral_constant::value> #endif { }; #endif // _LIBCPP_HAS_NO_VARIADICS #endif // __has_feature(is_nothrow_constructible) #if _LIBCPP_STD_VER > 14 && !defined(_LIBCPP_HAS_NO_VARIABLE_TEMPLATES) && !defined(_LIBCPP_HAS_NO_VARIADICS) template _LIBCPP_INLINE_VAR _LIBCPP_CONSTEXPR bool is_nothrow_constructible_v = is_nothrow_constructible<_Tp, _Args...>::value; #endif // is_nothrow_default_constructible template struct _LIBCPP_TEMPLATE_VIS is_nothrow_default_constructible : public is_nothrow_constructible<_Tp> {}; #if _LIBCPP_STD_VER > 14 && !defined(_LIBCPP_HAS_NO_VARIABLE_TEMPLATES) template _LIBCPP_INLINE_VAR _LIBCPP_CONSTEXPR bool is_nothrow_default_constructible_v = is_nothrow_default_constructible<_Tp>::value; #endif // is_nothrow_copy_constructible template struct _LIBCPP_TEMPLATE_VIS is_nothrow_copy_constructible : public is_nothrow_constructible<_Tp, typename add_lvalue_reference::type>::type> {}; #if _LIBCPP_STD_VER > 14 && !defined(_LIBCPP_HAS_NO_VARIABLE_TEMPLATES) template _LIBCPP_INLINE_VAR _LIBCPP_CONSTEXPR bool is_nothrow_copy_constructible_v = is_nothrow_copy_constructible<_Tp>::value; #endif // is_nothrow_move_constructible template struct _LIBCPP_TEMPLATE_VIS is_nothrow_move_constructible #ifndef _LIBCPP_HAS_NO_RVALUE_REFERENCES : public is_nothrow_constructible<_Tp, typename add_rvalue_reference<_Tp>::type> #else : public is_nothrow_copy_constructible<_Tp> #endif {}; #if _LIBCPP_STD_VER > 14 && !defined(_LIBCPP_HAS_NO_VARIABLE_TEMPLATES) template _LIBCPP_INLINE_VAR _LIBCPP_CONSTEXPR bool is_nothrow_move_constructible_v = is_nothrow_move_constructible<_Tp>::value; #endif // is_nothrow_assignable #if __has_feature(cxx_noexcept) || (_GNUC_VER >= 407 && __cplusplus >= 201103L) template struct __libcpp_is_nothrow_assignable; template struct __libcpp_is_nothrow_assignable : public false_type { }; template struct __libcpp_is_nothrow_assignable : public integral_constant() = _VSTD::declval<_Arg>()) > { }; template struct _LIBCPP_TEMPLATE_VIS is_nothrow_assignable : public __libcpp_is_nothrow_assignable::value, _Tp, _Arg> { }; #else // __has_feature(cxx_noexcept) template struct _LIBCPP_TEMPLATE_VIS is_nothrow_assignable : public false_type {}; template struct _LIBCPP_TEMPLATE_VIS is_nothrow_assignable<_Tp&, _Tp> #if __has_feature(has_nothrow_assign) || (_GNUC_VER >= 403) : integral_constant {}; #else : integral_constant::value> {}; #endif template struct _LIBCPP_TEMPLATE_VIS is_nothrow_assignable<_Tp&, _Tp&> #if __has_feature(has_nothrow_assign) || (_GNUC_VER >= 403) : integral_constant {}; #else : integral_constant::value> {}; #endif template struct _LIBCPP_TEMPLATE_VIS is_nothrow_assignable<_Tp&, const _Tp&> #if __has_feature(has_nothrow_assign) || (_GNUC_VER >= 403) : integral_constant {}; #else : integral_constant::value> {}; #endif #ifndef _LIBCPP_HAS_NO_RVALUE_REFERENCES template struct is_nothrow_assignable<_Tp&, _Tp&&> #if __has_feature(has_nothrow_assign) || (_GNUC_VER >= 403) : integral_constant {}; #else : integral_constant::value> {}; #endif #endif // _LIBCPP_HAS_NO_RVALUE_REFERENCES #endif // __has_feature(cxx_noexcept) #if _LIBCPP_STD_VER > 14 && !defined(_LIBCPP_HAS_NO_VARIABLE_TEMPLATES) template _LIBCPP_INLINE_VAR _LIBCPP_CONSTEXPR bool is_nothrow_assignable_v = is_nothrow_assignable<_Tp, _Arg>::value; #endif // is_nothrow_copy_assignable template struct _LIBCPP_TEMPLATE_VIS is_nothrow_copy_assignable : public is_nothrow_assignable::type, typename add_lvalue_reference::type>::type> 
{}; #if _LIBCPP_STD_VER > 14 && !defined(_LIBCPP_HAS_NO_VARIABLE_TEMPLATES) template _LIBCPP_INLINE_VAR _LIBCPP_CONSTEXPR bool is_nothrow_copy_assignable_v = is_nothrow_copy_assignable<_Tp>::value; #endif // is_nothrow_move_assignable template struct _LIBCPP_TEMPLATE_VIS is_nothrow_move_assignable : public is_nothrow_assignable::type, #ifndef _LIBCPP_HAS_NO_RVALUE_REFERENCES typename add_rvalue_reference<_Tp>::type> #else typename add_lvalue_reference<_Tp>::type> #endif {}; #if _LIBCPP_STD_VER > 14 && !defined(_LIBCPP_HAS_NO_VARIABLE_TEMPLATES) template _LIBCPP_INLINE_VAR _LIBCPP_CONSTEXPR bool is_nothrow_move_assignable_v = is_nothrow_move_assignable<_Tp>::value; #endif // is_nothrow_destructible #if __has_feature(cxx_noexcept) || (_GNUC_VER >= 407 && __cplusplus >= 201103L) template struct __libcpp_is_nothrow_destructible; template struct __libcpp_is_nothrow_destructible : public false_type { }; template struct __libcpp_is_nothrow_destructible : public integral_constant().~_Tp()) > { }; template struct _LIBCPP_TEMPLATE_VIS is_nothrow_destructible : public __libcpp_is_nothrow_destructible::value, _Tp> { }; template struct _LIBCPP_TEMPLATE_VIS is_nothrow_destructible<_Tp[_Ns]> : public is_nothrow_destructible<_Tp> { }; template struct _LIBCPP_TEMPLATE_VIS is_nothrow_destructible<_Tp&> : public true_type { }; #ifndef _LIBCPP_HAS_NO_RVALUE_REFERENCES template struct _LIBCPP_TEMPLATE_VIS is_nothrow_destructible<_Tp&&> : public true_type { }; #endif #else template struct __libcpp_nothrow_destructor : public integral_constant::value || is_reference<_Tp>::value> {}; template struct _LIBCPP_TEMPLATE_VIS is_nothrow_destructible : public __libcpp_nothrow_destructor::type> {}; template struct _LIBCPP_TEMPLATE_VIS is_nothrow_destructible<_Tp[]> : public false_type {}; #endif #if _LIBCPP_STD_VER > 14 && !defined(_LIBCPP_HAS_NO_VARIABLE_TEMPLATES) template _LIBCPP_INLINE_VAR _LIBCPP_CONSTEXPR bool is_nothrow_destructible_v = is_nothrow_destructible<_Tp>::value; #endif // is_pod #if __has_feature(is_pod) || (_GNUC_VER >= 403) template struct _LIBCPP_TEMPLATE_VIS is_pod : public integral_constant {}; #else template struct _LIBCPP_TEMPLATE_VIS is_pod : public integral_constant::value && is_trivially_copy_constructible<_Tp>::value && is_trivially_copy_assignable<_Tp>::value && is_trivially_destructible<_Tp>::value> {}; #endif #if _LIBCPP_STD_VER > 14 && !defined(_LIBCPP_HAS_NO_VARIABLE_TEMPLATES) template _LIBCPP_INLINE_VAR _LIBCPP_CONSTEXPR bool is_pod_v = is_pod<_Tp>::value; #endif // is_literal_type; template struct _LIBCPP_TEMPLATE_VIS is_literal_type #ifdef _LIBCPP_IS_LITERAL : public integral_constant #else : integral_constant::type>::value || is_reference::type>::value> #endif {}; #if _LIBCPP_STD_VER > 14 && !defined(_LIBCPP_HAS_NO_VARIABLE_TEMPLATES) template _LIBCPP_INLINE_VAR _LIBCPP_CONSTEXPR bool is_literal_type_v = is_literal_type<_Tp>::value; #endif // is_standard_layout; template struct _LIBCPP_TEMPLATE_VIS is_standard_layout #if __has_feature(is_standard_layout) || (_GNUC_VER >= 407) : public integral_constant #else : integral_constant::type>::value> #endif {}; #if _LIBCPP_STD_VER > 14 && !defined(_LIBCPP_HAS_NO_VARIABLE_TEMPLATES) template _LIBCPP_INLINE_VAR _LIBCPP_CONSTEXPR bool is_standard_layout_v = is_standard_layout<_Tp>::value; #endif // is_trivially_copyable; template struct _LIBCPP_TEMPLATE_VIS is_trivially_copyable #if __has_feature(is_trivially_copyable) : public integral_constant #elif _GNUC_VER >= 501 : public integral_constant::value && __is_trivially_copyable(_Tp)> #else : 
integral_constant::type>::value> #endif {}; #if _LIBCPP_STD_VER > 14 && !defined(_LIBCPP_HAS_NO_VARIABLE_TEMPLATES) template _LIBCPP_INLINE_VAR _LIBCPP_CONSTEXPR bool is_trivially_copyable_v = is_trivially_copyable<_Tp>::value; #endif // is_trivial; template struct _LIBCPP_TEMPLATE_VIS is_trivial #if __has_feature(is_trivial) || _GNUC_VER >= 407 : public integral_constant #else : integral_constant::value && is_trivially_default_constructible<_Tp>::value> #endif {}; #if _LIBCPP_STD_VER > 14 && !defined(_LIBCPP_HAS_NO_VARIABLE_TEMPLATES) template _LIBCPP_INLINE_VAR _LIBCPP_CONSTEXPR bool is_trivial_v = is_trivial<_Tp>::value; #endif template struct __is_reference_wrapper_impl : public false_type {}; template struct __is_reference_wrapper_impl > : public true_type {}; template struct __is_reference_wrapper : public __is_reference_wrapper_impl::type> {}; #ifndef _LIBCPP_CXX03_LANG template ::type, class _DecayA0 = typename decay<_A0>::type, class _ClassT = typename __member_pointer_class_type<_DecayFp>::type> using __enable_if_bullet1 = typename enable_if < is_member_function_pointer<_DecayFp>::value && is_base_of<_ClassT, _DecayA0>::value >::type; template ::type, class _DecayA0 = typename decay<_A0>::type> using __enable_if_bullet2 = typename enable_if < is_member_function_pointer<_DecayFp>::value && __is_reference_wrapper<_DecayA0>::value >::type; template ::type, class _DecayA0 = typename decay<_A0>::type, class _ClassT = typename __member_pointer_class_type<_DecayFp>::type> using __enable_if_bullet3 = typename enable_if < is_member_function_pointer<_DecayFp>::value && !is_base_of<_ClassT, _DecayA0>::value && !__is_reference_wrapper<_DecayA0>::value >::type; template ::type, class _DecayA0 = typename decay<_A0>::type, class _ClassT = typename __member_pointer_class_type<_DecayFp>::type> using __enable_if_bullet4 = typename enable_if < is_member_object_pointer<_DecayFp>::value && is_base_of<_ClassT, _DecayA0>::value >::type; template ::type, class _DecayA0 = typename decay<_A0>::type> using __enable_if_bullet5 = typename enable_if < is_member_object_pointer<_DecayFp>::value && __is_reference_wrapper<_DecayA0>::value >::type; template ::type, class _DecayA0 = typename decay<_A0>::type, class _ClassT = typename __member_pointer_class_type<_DecayFp>::type> using __enable_if_bullet6 = typename enable_if < is_member_object_pointer<_DecayFp>::value && !is_base_of<_ClassT, _DecayA0>::value && !__is_reference_wrapper<_DecayA0>::value >::type; // __invoke forward declarations // fall back - none of the bullets #define _LIBCPP_INVOKE_RETURN(...) 
\ noexcept(noexcept(__VA_ARGS__)) -> decltype(__VA_ARGS__) \ { return __VA_ARGS__; } template auto __invoke(__any, _Args&& ...__args) -> __nat; template auto __invoke_constexpr(__any, _Args&& ...__args) -> __nat; // bullets 1, 2 and 3 template > inline _LIBCPP_INLINE_VISIBILITY auto __invoke(_Fp&& __f, _A0&& __a0, _Args&& ...__args) _LIBCPP_INVOKE_RETURN((_VSTD::forward<_A0>(__a0).*__f)(_VSTD::forward<_Args>(__args)...)) template > inline _LIBCPP_INLINE_VISIBILITY _LIBCPP_CONSTEXPR auto __invoke_constexpr(_Fp&& __f, _A0&& __a0, _Args&& ...__args) _LIBCPP_INVOKE_RETURN((_VSTD::forward<_A0>(__a0).*__f)(_VSTD::forward<_Args>(__args)...)) template > inline _LIBCPP_INLINE_VISIBILITY auto __invoke(_Fp&& __f, _A0&& __a0, _Args&& ...__args) _LIBCPP_INVOKE_RETURN((__a0.get().*__f)(_VSTD::forward<_Args>(__args)...)) template > inline _LIBCPP_INLINE_VISIBILITY _LIBCPP_CONSTEXPR auto __invoke_constexpr(_Fp&& __f, _A0&& __a0, _Args&& ...__args) _LIBCPP_INVOKE_RETURN((__a0.get().*__f)(_VSTD::forward<_Args>(__args)...)) template > inline _LIBCPP_INLINE_VISIBILITY auto __invoke(_Fp&& __f, _A0&& __a0, _Args&& ...__args) _LIBCPP_INVOKE_RETURN(((*_VSTD::forward<_A0>(__a0)).*__f)(_VSTD::forward<_Args>(__args)...)) template > inline _LIBCPP_INLINE_VISIBILITY _LIBCPP_CONSTEXPR auto __invoke_constexpr(_Fp&& __f, _A0&& __a0, _Args&& ...__args) _LIBCPP_INVOKE_RETURN(((*_VSTD::forward<_A0>(__a0)).*__f)(_VSTD::forward<_Args>(__args)...)) // bullets 4, 5 and 6 template > inline _LIBCPP_INLINE_VISIBILITY auto __invoke(_Fp&& __f, _A0&& __a0) _LIBCPP_INVOKE_RETURN(_VSTD::forward<_A0>(__a0).*__f) template > inline _LIBCPP_INLINE_VISIBILITY _LIBCPP_CONSTEXPR auto __invoke_constexpr(_Fp&& __f, _A0&& __a0) _LIBCPP_INVOKE_RETURN(_VSTD::forward<_A0>(__a0).*__f) template > inline _LIBCPP_INLINE_VISIBILITY auto __invoke(_Fp&& __f, _A0&& __a0) _LIBCPP_INVOKE_RETURN(__a0.get().*__f) template > inline _LIBCPP_INLINE_VISIBILITY _LIBCPP_CONSTEXPR auto __invoke_constexpr(_Fp&& __f, _A0&& __a0) _LIBCPP_INVOKE_RETURN(__a0.get().*__f) template > inline _LIBCPP_INLINE_VISIBILITY auto __invoke(_Fp&& __f, _A0&& __a0) _LIBCPP_INVOKE_RETURN((*_VSTD::forward<_A0>(__a0)).*__f) template > inline _LIBCPP_INLINE_VISIBILITY _LIBCPP_CONSTEXPR auto __invoke_constexpr(_Fp&& __f, _A0&& __a0) _LIBCPP_INVOKE_RETURN((*_VSTD::forward<_A0>(__a0)).*__f) // bullet 7 template inline _LIBCPP_INLINE_VISIBILITY auto __invoke(_Fp&& __f, _Args&& ...__args) _LIBCPP_INVOKE_RETURN(_VSTD::forward<_Fp>(__f)(_VSTD::forward<_Args>(__args)...)) template inline _LIBCPP_INLINE_VISIBILITY _LIBCPP_CONSTEXPR auto __invoke_constexpr(_Fp&& __f, _Args&& ...__args) _LIBCPP_INVOKE_RETURN(_VSTD::forward<_Fp>(__f)(_VSTD::forward<_Args>(__args)...)) #undef _LIBCPP_INVOKE_RETURN // __invokable template struct __invokable_r { // FIXME: Check that _Ret, _Fp, and _Args... are all complete types, cv void, // or incomplete array types as required by the standard. 
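A note on the detection pattern used by __is_constructible_helper earlier in this header: the __test_nary(int) overload is viable only when its default template argument, which names the construction expression, is well-formed; if it is not, substitution fails silently and overload resolution falls back to the __test_nary(...) overload returning false_type. A reduced, self-contained sketch of the same idea, with illustrative names (my_is_constructible is not libc++'s code, and it omits the separate unary/reference handling that __test_cast exists for, since a one-argument T(x) behaves like a cast):

#include <type_traits>
#include <utility>

struct detect {
    // Viable only when T(declval<Args>()...) is well-formed.
    template <class T, class... Args,
              class = decltype(T(std::declval<Args>()...))>
    static std::true_type test(int);
    template <class, class...>
    static std::false_type test(...);
};

template <class T, class... Args>
struct my_is_constructible : decltype(detect::test<T, Args...>(0)) {};

struct Widget { Widget(int, double); };
static_assert(my_is_constructible<Widget, int, double>::value, "");
static_assert(!my_is_constructible<Widget>::value, "");        // no default ctor
static_assert(!my_is_constructible<Widget, void*>::value, ""); // no such ctor

The int/long (or int/...) parameter pair is what makes the true case win: 0 converts to int exactly, so the true_type overload is preferred whenever it survives substitution.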
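Similarly, the _LIBCPP_INVOKE_RETURN macro defined just above exists so that the noexcept-specifier, the trailing return type, and the return statement are all generated from one expression and cannot drift apart, while the __enable_if_bulletN aliases route each call to the correct INVOKE bullet. A minimal sketch combining both ideas, covering only bullets 1 and 7 (illustrative names; the real __invoke also handles reference_wrapper and pointer-like receivers, bullets 2 through 6):

#include <type_traits>
#include <utility>

#define SKETCH_INVOKE_RETURN(...) \
    noexcept(noexcept(__VA_ARGS__)) -> decltype(__VA_ARGS__) \
    { return __VA_ARGS__; }

// Bullet 1: pointer to member function, applied to an object of its class.
template <class F, class A0, class... Args,
          class = typename std::enable_if<std::is_member_function_pointer<
              typename std::decay<F>::type>::value>::type>
auto my_invoke(F&& f, A0&& a0, Args&&... args)
    SKETCH_INVOKE_RETURN((std::forward<A0>(a0).*f)(std::forward<Args>(args)...))

// Bullet 7: anything directly callable (functions, lambdas, functors).
template <class F, class... Args,
          class = typename std::enable_if<!std::is_member_pointer<
              typename std::decay<F>::type>::value>::type>
auto my_invoke(F&& f, Args&&... args)
    SKETCH_INVOKE_RETURN(std::forward<F>(f)(std::forward<Args>(args)...))

#undef SKETCH_INVOKE_RETURN

struct Counter { int n; int get() const noexcept { return n; } };

int demo() {
    Counter c{3};
    return my_invoke(&Counter::get, c)            // bullet 1
         + my_invoke([](int x) { return x; }, 4); // bullet 7
}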
using _Result = decltype( _VSTD::__invoke(_VSTD::declval<_Fp>(), _VSTD::declval<_Args>()...)); using type = typename conditional< !is_same<_Result, __nat>::value, typename conditional< is_void<_Ret>::value, true_type, is_convertible<_Result, _Ret> >::type, false_type >::type; static const bool value = type::value; }; template using __invokable = __invokable_r; template struct __nothrow_invokable_r_imp { static const bool value = false; }; template struct __nothrow_invokable_r_imp { typedef __nothrow_invokable_r_imp _ThisT; template static void __test_noexcept(_Tp) noexcept; static const bool value = noexcept(_ThisT::__test_noexcept<_Ret>( _VSTD::__invoke(_VSTD::declval<_Fp>(), _VSTD::declval<_Args>()...))); }; template struct __nothrow_invokable_r_imp { static const bool value = noexcept( _VSTD::__invoke(_VSTD::declval<_Fp>(), _VSTD::declval<_Args>()...)); }; template using __nothrow_invokable_r = __nothrow_invokable_r_imp< __invokable_r<_Ret, _Fp, _Args...>::value, is_void<_Ret>::value, _Ret, _Fp, _Args... >; template using __nothrow_invokable = __nothrow_invokable_r_imp< __invokable<_Fp, _Args...>::value, true, void, _Fp, _Args... >; template struct __invoke_of : public enable_if< __invokable<_Fp, _Args...>::value, typename __invokable_r::_Result> { }; // result_of template class _LIBCPP_TEMPLATE_VIS result_of<_Fp(_Args...)> : public __invoke_of<_Fp, _Args...> { }; #if _LIBCPP_STD_VER > 11 template using result_of_t = typename result_of<_Tp>::type; #endif #if _LIBCPP_STD_VER > 14 // invoke_result template struct _LIBCPP_TEMPLATE_VIS invoke_result : __invoke_of<_Fn, _Args...> { }; template using invoke_result_t = typename invoke_result<_Fn, _Args...>::type; // is_invocable template struct _LIBCPP_TEMPLATE_VIS is_invocable : integral_constant::value> {}; template struct _LIBCPP_TEMPLATE_VIS is_invocable_r : integral_constant::value> {}; template _LIBCPP_INLINE_VAR constexpr bool is_invocable_v = is_invocable<_Fn, _Args...>::value; template _LIBCPP_INLINE_VAR constexpr bool is_invocable_r_v = is_invocable_r<_Ret, _Fn, _Args...>::value; // is_nothrow_invocable template struct _LIBCPP_TEMPLATE_VIS is_nothrow_invocable : integral_constant::value> {}; template struct _LIBCPP_TEMPLATE_VIS is_nothrow_invocable_r : integral_constant::value> {}; template _LIBCPP_INLINE_VAR constexpr bool is_nothrow_invocable_v = is_nothrow_invocable<_Fn, _Args...>::value; template _LIBCPP_INLINE_VAR constexpr bool is_nothrow_invocable_r_v = is_nothrow_invocable_r<_Ret, _Fn, _Args...>::value; #endif // _LIBCPP_STD_VER > 14 #endif // !defined(_LIBCPP_CXX03_LANG) template struct __is_swappable; template struct __is_nothrow_swappable; template inline _LIBCPP_INLINE_VISIBILITY #ifndef _LIBCPP_CXX03_LANG typename enable_if < is_move_constructible<_Tp>::value && is_move_assignable<_Tp>::value >::type #else void #endif swap(_Tp& __x, _Tp& __y) _NOEXCEPT_(is_nothrow_move_constructible<_Tp>::value && is_nothrow_move_assignable<_Tp>::value) { _Tp __t(_VSTD::move(__x)); __x = _VSTD::move(__y); __y = _VSTD::move(__t); } template inline _LIBCPP_INLINE_VISIBILITY typename enable_if< __is_swappable<_Tp>::value >::type swap(_Tp (&__a)[_Np], _Tp (&__b)[_Np]) _NOEXCEPT_(__is_nothrow_swappable<_Tp>::value); template inline _LIBCPP_INLINE_VISIBILITY void iter_swap(_ForwardIterator1 __a, _ForwardIterator2 __b) // _NOEXCEPT_(_NOEXCEPT_(swap(*__a, *__b))) _NOEXCEPT_(_NOEXCEPT_(swap(*_VSTD::declval<_ForwardIterator1>(), *_VSTD::declval<_ForwardIterator2>()))) { swap(*__a, *__b); } // __swappable namespace __detail { // ALL generic swap 
overloads MUST already have a declaration available at this point. template ::value && !is_void<_Up>::value> struct __swappable_with { template static decltype(swap(_VSTD::declval<_LHS>(), _VSTD::declval<_RHS>())) __test_swap(int); template static __nat __test_swap(long); // Extra parens are needed for the C++03 definition of decltype. typedef decltype((__test_swap<_Tp, _Up>(0))) __swap1; typedef decltype((__test_swap<_Up, _Tp>(0))) __swap2; static const bool value = !is_same<__swap1, __nat>::value && !is_same<__swap2, __nat>::value; }; template struct __swappable_with<_Tp, _Up, false> : false_type {}; template ::value> struct __nothrow_swappable_with { static const bool value = #ifndef _LIBCPP_HAS_NO_NOEXCEPT noexcept(swap(_VSTD::declval<_Tp>(), _VSTD::declval<_Up>())) && noexcept(swap(_VSTD::declval<_Up>(), _VSTD::declval<_Tp>())); #else false; #endif }; template struct __nothrow_swappable_with<_Tp, _Up, false> : false_type {}; } // __detail template struct __is_swappable : public integral_constant::value> { }; template struct __is_nothrow_swappable : public integral_constant::value> { }; #if _LIBCPP_STD_VER > 14 template struct _LIBCPP_TEMPLATE_VIS is_swappable_with : public integral_constant::value> { }; template struct _LIBCPP_TEMPLATE_VIS is_swappable : public conditional< __is_referenceable<_Tp>::value, is_swappable_with< typename add_lvalue_reference<_Tp>::type, typename add_lvalue_reference<_Tp>::type>, false_type >::type { }; template struct _LIBCPP_TEMPLATE_VIS is_nothrow_swappable_with : public integral_constant::value> { }; template struct _LIBCPP_TEMPLATE_VIS is_nothrow_swappable : public conditional< __is_referenceable<_Tp>::value, is_nothrow_swappable_with< typename add_lvalue_reference<_Tp>::type, typename add_lvalue_reference<_Tp>::type>, false_type >::type { }; template _LIBCPP_INLINE_VAR constexpr bool is_swappable_with_v = is_swappable_with<_Tp, _Up>::value; template _LIBCPP_INLINE_VAR constexpr bool is_swappable_v = is_swappable<_Tp>::value; template _LIBCPP_INLINE_VAR constexpr bool is_nothrow_swappable_with_v = is_nothrow_swappable_with<_Tp, _Up>::value; template _LIBCPP_INLINE_VAR constexpr bool is_nothrow_swappable_v = is_nothrow_swappable<_Tp>::value; #endif // _LIBCPP_STD_VER > 14 #ifdef _LIBCPP_UNDERLYING_TYPE template struct underlying_type { typedef _LIBCPP_UNDERLYING_TYPE(_Tp) type; }; #if _LIBCPP_STD_VER > 11 template using underlying_type_t = typename underlying_type<_Tp>::type; #endif #else // _LIBCPP_UNDERLYING_TYPE template struct underlying_type { static_assert(_Support, "The underyling_type trait requires compiler " "support. 
Either no such support exists or " "libc++ does not know how to use it."); }; #endif // _LIBCPP_UNDERLYING_TYPE template ::value> struct __sfinae_underlying_type { typedef typename underlying_type<_Tp>::type type; typedef decltype(((type)1) + 0) __promoted_type; }; template struct __sfinae_underlying_type<_Tp, false> {}; inline _LIBCPP_INLINE_VISIBILITY _LIBCPP_CONSTEXPR int __convert_to_integral(int __val) { return __val; } inline _LIBCPP_INLINE_VISIBILITY _LIBCPP_CONSTEXPR unsigned __convert_to_integral(unsigned __val) { return __val; } inline _LIBCPP_INLINE_VISIBILITY _LIBCPP_CONSTEXPR long __convert_to_integral(long __val) { return __val; } inline _LIBCPP_INLINE_VISIBILITY _LIBCPP_CONSTEXPR unsigned long __convert_to_integral(unsigned long __val) { return __val; } inline _LIBCPP_INLINE_VISIBILITY _LIBCPP_CONSTEXPR long long __convert_to_integral(long long __val) { return __val; } inline _LIBCPP_INLINE_VISIBILITY _LIBCPP_CONSTEXPR unsigned long long __convert_to_integral(unsigned long long __val) {return __val; } template inline _LIBCPP_INLINE_VISIBILITY _LIBCPP_CONSTEXPR typename enable_if::value, long long>::type __convert_to_integral(_Fp __val) { return __val; } #ifndef _LIBCPP_HAS_NO_INT128 inline _LIBCPP_INLINE_VISIBILITY _LIBCPP_CONSTEXPR __int128_t __convert_to_integral(__int128_t __val) { return __val; } inline _LIBCPP_INLINE_VISIBILITY _LIBCPP_CONSTEXPR __uint128_t __convert_to_integral(__uint128_t __val) { return __val; } #endif template inline _LIBCPP_INLINE_VISIBILITY _LIBCPP_CONSTEXPR typename __sfinae_underlying_type<_Tp>::__promoted_type __convert_to_integral(_Tp __val) { return __val; } #ifndef _LIBCPP_CXX03_LANG template struct __has_operator_addressof_member_imp { template static auto __test(int) -> typename __select_2nd().operator&()), true_type>::type; template static auto __test(long) -> false_type; static const bool value = decltype(__test<_Tp>(0))::value; }; template struct __has_operator_addressof_free_imp { template static auto __test(int) -> typename __select_2nd())), true_type>::type; template static auto __test(long) -> false_type; static const bool value = decltype(__test<_Tp>(0))::value; }; template struct __has_operator_addressof : public integral_constant::value || __has_operator_addressof_free_imp<_Tp>::value> {}; #endif // _LIBCPP_CXX03_LANG #if _LIBCPP_STD_VER > 14 #define __cpp_lib_void_t 201411 template using void_t = void; # ifndef _LIBCPP_HAS_NO_VARIADICS template struct conjunction : __and_<_Args...> {}; template _LIBCPP_INLINE_VAR constexpr bool conjunction_v = conjunction<_Args...>::value; template struct disjunction : __or_<_Args...> {}; template _LIBCPP_INLINE_VAR constexpr bool disjunction_v = disjunction<_Args...>::value; template struct negation : __not_<_Tp> {}; template _LIBCPP_INLINE_VAR constexpr bool negation_v = negation<_Tp>::value; # endif // _LIBCPP_HAS_NO_VARIADICS #endif // _LIBCPP_STD_VER > 14 // These traits are used in __tree and __hash_table #ifndef _LIBCPP_CXX03_LANG struct __extract_key_fail_tag {}; struct __extract_key_self_tag {}; struct __extract_key_first_tag {}; template ::type> struct __can_extract_key : conditional::value, __extract_key_self_tag, __extract_key_fail_tag>::type {}; template struct __can_extract_key<_Pair, _Key, pair<_First, _Second>> : conditional::type, _Key>::value, __extract_key_first_tag, __extract_key_fail_tag>::type {}; // __can_extract_map_key uses true_type/false_type instead of the tags. // It returns true if _Key != _ContainerValueTy (the container is a map not a set) // and _ValTy == _Key. 
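A recurring shape in the is_nothrow_* traits above is a two-phase test: first establish that the operation is well-formed at all (is_constructible, is_assignable, is_destructible), and only then apply the noexcept operator to the expression, routed through a bool-headed partial specialization so the expression is never instantiated for ill-formed cases. A compact sketch of that shape for construction (illustrative names; the real trait additionally special-cases references):

#include <type_traits>
#include <utility>

template <bool /*IsConstructible*/, class T, class... Args>
struct nothrow_ctor_imp : std::false_type {};

template <class T, class... Args>
struct nothrow_ctor_imp<true, T, Args...>  // only instantiated when well-formed
    : std::integral_constant<bool, noexcept(T(std::declval<Args>()...))> {};

template <class T, class... Args>
struct my_is_nothrow_constructible
    : nothrow_ctor_imp<std::is_constructible<T, Args...>::value, T, Args...> {};

struct Safe  { Safe(int) noexcept; };
struct Risky { Risky(int); };
static_assert(my_is_nothrow_constructible<Safe, int>::value, "");
static_assert(!my_is_nothrow_constructible<Risky, int>::value, "");
static_assert(!my_is_nothrow_constructible<Risky, void*>::value, ""); // ill-formed, not merely throwing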
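The void_t alias and the conjunction/disjunction/negation wrappers defined above are the building blocks of the standard detection idiom; a short usage sketch (the has_value_type trait is illustrative, not part of the library; requires C++17 for the _v helpers):

#include <type_traits>
#include <vector>

template <class, class = void>
struct has_value_type : std::false_type {};

template <class T>  // chosen when T::value_type names a type
struct has_value_type<T, std::void_t<typename T::value_type>>
    : std::true_type {};

static_assert(has_value_type<std::vector<int>>::value, "");
static_assert(!has_value_type<int>::value, "");
static_assert(std::conjunction_v<std::is_integral<int>, std::is_signed<int>>, "");
static_assert(std::negation_v<std::is_void<int>>, "");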
template ::type> struct __can_extract_map_key : integral_constant::value> {}; // This specialization returns __extract_key_fail_tag for non-map containers // because _Key == _ContainerValueTy template struct __can_extract_map_key<_ValTy, _Key, _Key, _RawValTy> : false_type {}; #endif #if _LIBCPP_STD_VER > 17 enum class endian { little = 0xDEAD, big = 0xFACE, #if defined(_LIBCPP_LITTLE_ENDIAN) native = little #elif defined(_LIBCPP_BIG_ENDIAN) native = big #else native = 0xCAFE #endif }; #endif _LIBCPP_END_NAMESPACE_STD #if _LIBCPP_STD_VER > 14 // std::byte namespace std // purposefully not versioned { template constexpr typename enable_if, byte>::type & operator<<=(byte& __lhs, _Integer __shift) noexcept { return __lhs = __lhs << __shift; } template constexpr typename enable_if, byte>::type operator<< (byte __lhs, _Integer __shift) noexcept { return static_cast(static_cast(static_cast(__lhs) << __shift)); } template constexpr typename enable_if, byte>::type & operator>>=(byte& __lhs, _Integer __shift) noexcept { return __lhs = __lhs >> __shift; } template constexpr typename enable_if, byte>::type operator>> (byte __lhs, _Integer __shift) noexcept { return static_cast(static_cast(static_cast(__lhs) >> __shift)); } template constexpr typename enable_if, _Integer>::type to_integer(byte __b) noexcept { return static_cast<_Integer>(__b); } } #endif #endif // _LIBCPP_TYPE_TRAITS Index: projects/import-googletest-1.8.1/contrib/libc++ =================================================================== --- projects/import-googletest-1.8.1/contrib/libc++ (revision 344270) +++ projects/import-googletest-1.8.1/contrib/libc++ (revision 344271) Property changes on: projects/import-googletest-1.8.1/contrib/libc++ ___________________________________________________________________ Modified: svn:mergeinfo ## -0,0 +0,1 ## Merged /head/contrib/libc++:r344081-344270 Index: projects/import-googletest-1.8.1/contrib/llvm/lib/MC/ELFObjectWriter.cpp =================================================================== --- projects/import-googletest-1.8.1/contrib/llvm/lib/MC/ELFObjectWriter.cpp (revision 344270) +++ projects/import-googletest-1.8.1/contrib/llvm/lib/MC/ELFObjectWriter.cpp (revision 344271) @@ -1,1520 +1,1526 @@ //===- lib/MC/ELFObjectWriter.cpp - ELF File Writer -----------------------===// // // The LLVM Compiler Infrastructure // // This file is distributed under the University of Illinois Open Source // License. See LICENSE.TXT for details. // //===----------------------------------------------------------------------===// // // This file implements ELF object file writer information. 
// //===----------------------------------------------------------------------===// #include "llvm/ADT/ArrayRef.h" #include "llvm/ADT/DenseMap.h" #include "llvm/ADT/STLExtras.h" #include "llvm/ADT/SmallString.h" #include "llvm/ADT/SmallVector.h" #include "llvm/ADT/StringRef.h" #include "llvm/ADT/Twine.h" #include "llvm/BinaryFormat/ELF.h" #include "llvm/MC/MCAsmBackend.h" #include "llvm/MC/MCAsmInfo.h" #include "llvm/MC/MCAsmLayout.h" #include "llvm/MC/MCAssembler.h" #include "llvm/MC/MCContext.h" #include "llvm/MC/MCELFObjectWriter.h" #include "llvm/MC/MCExpr.h" #include "llvm/MC/MCFixup.h" #include "llvm/MC/MCFixupKindInfo.h" #include "llvm/MC/MCFragment.h" #include "llvm/MC/MCObjectWriter.h" #include "llvm/MC/MCSection.h" #include "llvm/MC/MCSectionELF.h" #include "llvm/MC/MCSymbol.h" #include "llvm/MC/MCSymbolELF.h" #include "llvm/MC/MCValue.h" #include "llvm/MC/StringTableBuilder.h" #include "llvm/Support/Allocator.h" #include "llvm/Support/Casting.h" #include "llvm/Support/Compression.h" #include "llvm/Support/Endian.h" #include "llvm/Support/Error.h" #include "llvm/Support/ErrorHandling.h" #include "llvm/Support/Host.h" #include "llvm/Support/LEB128.h" #include "llvm/Support/MathExtras.h" #include "llvm/Support/SMLoc.h" #include "llvm/Support/StringSaver.h" #include "llvm/Support/SwapByteOrder.h" #include "llvm/Support/raw_ostream.h" #include #include #include #include #include #include #include #include #include using namespace llvm; #undef DEBUG_TYPE #define DEBUG_TYPE "reloc-info" namespace { using SectionIndexMapTy = DenseMap; class ELFObjectWriter; struct ELFWriter; bool isDwoSection(const MCSectionELF &Sec) { return Sec.getSectionName().endswith(".dwo"); } class SymbolTableWriter { ELFWriter &EWriter; bool Is64Bit; // indexes we are going to write to .symtab_shndx. std::vector ShndxIndexes; // The numbel of symbols written so far. unsigned NumWritten; void createSymtabShndx(); template void write(T Value); public: SymbolTableWriter(ELFWriter &EWriter, bool Is64Bit); void writeSymbol(uint32_t name, uint8_t info, uint64_t value, uint64_t size, uint8_t other, uint32_t shndx, bool Reserved); ArrayRef getShndxIndexes() const { return ShndxIndexes; } }; struct ELFWriter { ELFObjectWriter &OWriter; support::endian::Writer W; enum DwoMode { AllSections, NonDwoOnly, DwoOnly, } Mode; static uint64_t SymbolValue(const MCSymbol &Sym, const MCAsmLayout &Layout); static bool isInSymtab(const MCAsmLayout &Layout, const MCSymbolELF &Symbol, bool Used, bool Renamed); /// Helper struct for containing some precomputed information on symbols. struct ELFSymbolData { const MCSymbolELF *Symbol; uint32_t SectionIndex; StringRef Name; // Support lexicographic sorting. bool operator<(const ELFSymbolData &RHS) const { unsigned LHSType = Symbol->getType(); unsigned RHSType = RHS.Symbol->getType(); if (LHSType == ELF::STT_SECTION && RHSType != ELF::STT_SECTION) return false; if (LHSType != ELF::STT_SECTION && RHSType == ELF::STT_SECTION) return true; if (LHSType == ELF::STT_SECTION && RHSType == ELF::STT_SECTION) return SectionIndex < RHS.SectionIndex; return Name < RHS.Name; } }; /// @} /// @name Symbol Table Data /// @{ StringTableBuilder StrTabBuilder{StringTableBuilder::ELF}; /// @} // This holds the symbol table index of the last local symbol. unsigned LastLocalSymbolIndex; // This holds the .strtab section index. unsigned StringTableIndex; // This holds the .symtab section index. unsigned SymbolTableIndex; // Sections in the order they are to be output in the section table. 
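The operator< of ELFSymbolData above encodes the writer's symbol ordering: non-section symbols sort before STT_SECTION symbols, section symbols order among themselves by section index, and everything else orders by name. A standalone sketch of that comparison over plain data (illustrative struct, not the MCSymbolELF-based original):

#include <algorithm>
#include <string>
#include <vector>

struct SymKey {
    bool IsSection;         // stands in for getType() == ELF::STT_SECTION
    unsigned SectionIndex;
    std::string Name;

    bool operator<(const SymKey &RHS) const {
        if (IsSection != RHS.IsSection)
            return RHS.IsSection;               // non-section symbols first
        if (IsSection)
            return SectionIndex < RHS.SectionIndex;
        return Name < RHS.Name;                 // lexicographic otherwise
    }
};

void sortSymbols(std::vector<SymKey> &Syms) {
    std::sort(Syms.begin(), Syms.end());
}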
std::vector SectionTable; unsigned addToSectionTable(const MCSectionELF *Sec); // TargetObjectWriter wrappers. bool is64Bit() const; bool hasRelocationAddend() const; void align(unsigned Alignment); bool maybeWriteCompression(uint64_t Size, SmallVectorImpl &CompressedContents, bool ZLibStyle, unsigned Alignment); public: ELFWriter(ELFObjectWriter &OWriter, raw_pwrite_stream &OS, bool IsLittleEndian, DwoMode Mode) : OWriter(OWriter), W(OS, IsLittleEndian ? support::little : support::big), Mode(Mode) {} void WriteWord(uint64_t Word) { if (is64Bit()) W.write(Word); else W.write(Word); } template void write(T Val) { W.write(Val); } void writeHeader(const MCAssembler &Asm); void writeSymbol(SymbolTableWriter &Writer, uint32_t StringIndex, ELFSymbolData &MSD, const MCAsmLayout &Layout); // Start and end offset of each section using SectionOffsetsTy = std::map>; // Map from a signature symbol to the group section index using RevGroupMapTy = DenseMap; /// Compute the symbol table data /// /// \param Asm - The assembler. /// \param SectionIndexMap - Maps a section to its index. /// \param RevGroupMap - Maps a signature symbol to the group section. void computeSymbolTable(MCAssembler &Asm, const MCAsmLayout &Layout, const SectionIndexMapTy &SectionIndexMap, const RevGroupMapTy &RevGroupMap, SectionOffsetsTy &SectionOffsets); void writeAddrsigSection(); MCSectionELF *createRelocationSection(MCContext &Ctx, const MCSectionELF &Sec); const MCSectionELF *createStringTable(MCContext &Ctx); void writeSectionHeader(const MCAsmLayout &Layout, const SectionIndexMapTy &SectionIndexMap, const SectionOffsetsTy &SectionOffsets); void writeSectionData(const MCAssembler &Asm, MCSection &Sec, const MCAsmLayout &Layout); void WriteSecHdrEntry(uint32_t Name, uint32_t Type, uint64_t Flags, uint64_t Address, uint64_t Offset, uint64_t Size, uint32_t Link, uint32_t Info, uint64_t Alignment, uint64_t EntrySize); void writeRelocations(const MCAssembler &Asm, const MCSectionELF &Sec); uint64_t writeObject(MCAssembler &Asm, const MCAsmLayout &Layout); void writeSection(const SectionIndexMapTy &SectionIndexMap, uint32_t GroupSymbolIndex, uint64_t Offset, uint64_t Size, const MCSectionELF &Section); }; class ELFObjectWriter : public MCObjectWriter { /// The target specific ELF writer instance. 
std::unique_ptr TargetObjectWriter; DenseMap> Relocations; DenseMap Renames; bool EmitAddrsigSection = false; std::vector AddrsigSyms; bool hasRelocationAddend() const; bool shouldRelocateWithSymbol(const MCAssembler &Asm, const MCSymbolRefExpr *RefA, const MCSymbolELF *Sym, uint64_t C, unsigned Type) const; public: ELFObjectWriter(std::unique_ptr MOTW) : TargetObjectWriter(std::move(MOTW)) {} void reset() override { Relocations.clear(); Renames.clear(); MCObjectWriter::reset(); } bool isSymbolRefDifferenceFullyResolvedImpl(const MCAssembler &Asm, const MCSymbol &SymA, const MCFragment &FB, bool InSet, bool IsPCRel) const override; virtual bool checkRelocation(MCContext &Ctx, SMLoc Loc, const MCSectionELF *From, const MCSectionELF *To) { return true; } void recordRelocation(MCAssembler &Asm, const MCAsmLayout &Layout, const MCFragment *Fragment, const MCFixup &Fixup, MCValue Target, uint64_t &FixedValue) override; void executePostLayoutBinding(MCAssembler &Asm, const MCAsmLayout &Layout) override; void emitAddrsigSection() override { EmitAddrsigSection = true; } void addAddrsigSymbol(const MCSymbol *Sym) override { AddrsigSyms.push_back(Sym); } friend struct ELFWriter; }; class ELFSingleObjectWriter : public ELFObjectWriter { raw_pwrite_stream &OS; bool IsLittleEndian; public: ELFSingleObjectWriter(std::unique_ptr MOTW, raw_pwrite_stream &OS, bool IsLittleEndian) : ELFObjectWriter(std::move(MOTW)), OS(OS), IsLittleEndian(IsLittleEndian) {} uint64_t writeObject(MCAssembler &Asm, const MCAsmLayout &Layout) override { return ELFWriter(*this, OS, IsLittleEndian, ELFWriter::AllSections) .writeObject(Asm, Layout); } friend struct ELFWriter; }; class ELFDwoObjectWriter : public ELFObjectWriter { raw_pwrite_stream &OS, &DwoOS; bool IsLittleEndian; public: ELFDwoObjectWriter(std::unique_ptr MOTW, raw_pwrite_stream &OS, raw_pwrite_stream &DwoOS, bool IsLittleEndian) : ELFObjectWriter(std::move(MOTW)), OS(OS), DwoOS(DwoOS), IsLittleEndian(IsLittleEndian) {} virtual bool checkRelocation(MCContext &Ctx, SMLoc Loc, const MCSectionELF *From, const MCSectionELF *To) override { if (isDwoSection(*From)) { Ctx.reportError(Loc, "A dwo section may not contain relocations"); return false; } if (To && isDwoSection(*To)) { Ctx.reportError(Loc, "A relocation may not refer to a dwo section"); return false; } return true; } uint64_t writeObject(MCAssembler &Asm, const MCAsmLayout &Layout) override { uint64_t Size = ELFWriter(*this, OS, IsLittleEndian, ELFWriter::NonDwoOnly) .writeObject(Asm, Layout); Size += ELFWriter(*this, DwoOS, IsLittleEndian, ELFWriter::DwoOnly) .writeObject(Asm, Layout); return Size; } }; } // end anonymous namespace void ELFWriter::align(unsigned Alignment) { uint64_t Padding = OffsetToAlignment(W.OS.tell(), Alignment); W.OS.write_zeros(Padding); } unsigned ELFWriter::addToSectionTable(const MCSectionELF *Sec) { SectionTable.push_back(Sec); StrTabBuilder.add(Sec->getSectionName()); return SectionTable.size(); } void SymbolTableWriter::createSymtabShndx() { if (!ShndxIndexes.empty()) return; ShndxIndexes.resize(NumWritten); } template void SymbolTableWriter::write(T Value) { EWriter.write(Value); } SymbolTableWriter::SymbolTableWriter(ELFWriter &EWriter, bool Is64Bit) : EWriter(EWriter), Is64Bit(Is64Bit), NumWritten(0) {} void SymbolTableWriter::writeSymbol(uint32_t name, uint8_t info, uint64_t value, uint64_t size, uint8_t other, uint32_t shndx, bool Reserved) { bool LargeIndex = shndx >= ELF::SHN_LORESERVE && !Reserved; if (LargeIndex) createSymtabShndx(); if (!ShndxIndexes.empty()) { if 
(LargeIndex) ShndxIndexes.push_back(shndx); else ShndxIndexes.push_back(0); } uint16_t Index = LargeIndex ? uint16_t(ELF::SHN_XINDEX) : shndx; if (Is64Bit) { write(name); // st_name write(info); // st_info write(other); // st_other write(Index); // st_shndx write(value); // st_value write(size); // st_size } else { write(name); // st_name write(uint32_t(value)); // st_value write(uint32_t(size)); // st_size write(info); // st_info write(other); // st_other write(Index); // st_shndx } ++NumWritten; } bool ELFWriter::is64Bit() const { return OWriter.TargetObjectWriter->is64Bit(); } bool ELFWriter::hasRelocationAddend() const { return OWriter.hasRelocationAddend(); } // Emit the ELF header. void ELFWriter::writeHeader(const MCAssembler &Asm) { // ELF Header // ---------- // // Note // ---- // emitWord method behaves differently for ELF32 and ELF64, writing // 4 bytes in the former and 8 in the latter. W.OS << ELF::ElfMagic; // e_ident[EI_MAG0] to e_ident[EI_MAG3] W.OS << char(is64Bit() ? ELF::ELFCLASS64 : ELF::ELFCLASS32); // e_ident[EI_CLASS] // e_ident[EI_DATA] W.OS << char(W.Endian == support::little ? ELF::ELFDATA2LSB : ELF::ELFDATA2MSB); W.OS << char(ELF::EV_CURRENT); // e_ident[EI_VERSION] // e_ident[EI_OSABI] W.OS << char(OWriter.TargetObjectWriter->getOSABI()); W.OS << char(0); // e_ident[EI_ABIVERSION] W.OS.write_zeros(ELF::EI_NIDENT - ELF::EI_PAD); W.write(ELF::ET_REL); // e_type W.write(OWriter.TargetObjectWriter->getEMachine()); // e_machine = target W.write(ELF::EV_CURRENT); // e_version WriteWord(0); // e_entry, no entry point in .o file WriteWord(0); // e_phoff, no program header for .o WriteWord(0); // e_shoff = sec hdr table off in bytes // e_flags = whatever the target wants W.write(Asm.getELFHeaderEFlags()); // e_ehsize = ELF header size W.write(is64Bit() ? sizeof(ELF::Elf64_Ehdr) : sizeof(ELF::Elf32_Ehdr)); W.write(0); // e_phentsize = prog header entry size W.write(0); // e_phnum = # prog header entries = 0 // e_shentsize = Section header entry size W.write(is64Bit() ? 
sizeof(ELF::Elf64_Shdr) : sizeof(ELF::Elf32_Shdr)); // e_shnum = # of section header ents W.write(0); // e_shstrndx = Section # of '.shstrtab' assert(StringTableIndex < ELF::SHN_LORESERVE); W.write(StringTableIndex); } uint64_t ELFWriter::SymbolValue(const MCSymbol &Sym, const MCAsmLayout &Layout) { if (Sym.isCommon() && Sym.isExternal()) return Sym.getCommonAlignment(); uint64_t Res; if (!Layout.getSymbolOffset(Sym, Res)) return 0; if (Layout.getAssembler().isThumbFunc(&Sym)) Res |= 1; return Res; } static uint8_t mergeTypeForSet(uint8_t origType, uint8_t newType) { uint8_t Type = newType; // Propagation rules: // IFUNC > FUNC > OBJECT > NOTYPE // TLS_OBJECT > OBJECT > NOTYPE // // dont let the new type degrade the old type switch (origType) { default: break; case ELF::STT_GNU_IFUNC: if (Type == ELF::STT_FUNC || Type == ELF::STT_OBJECT || Type == ELF::STT_NOTYPE || Type == ELF::STT_TLS) Type = ELF::STT_GNU_IFUNC; break; case ELF::STT_FUNC: if (Type == ELF::STT_OBJECT || Type == ELF::STT_NOTYPE || Type == ELF::STT_TLS) Type = ELF::STT_FUNC; break; case ELF::STT_OBJECT: if (Type == ELF::STT_NOTYPE) Type = ELF::STT_OBJECT; break; case ELF::STT_TLS: if (Type == ELF::STT_OBJECT || Type == ELF::STT_NOTYPE || Type == ELF::STT_GNU_IFUNC || Type == ELF::STT_FUNC) Type = ELF::STT_TLS; break; } return Type; } void ELFWriter::writeSymbol(SymbolTableWriter &Writer, uint32_t StringIndex, ELFSymbolData &MSD, const MCAsmLayout &Layout) { const auto &Symbol = cast(*MSD.Symbol); const MCSymbolELF *Base = cast_or_null(Layout.getBaseSymbol(Symbol)); // This has to be in sync with when computeSymbolTable uses SHN_ABS or // SHN_COMMON. bool IsReserved = !Base || Symbol.isCommon(); // Binding and Type share the same byte as upper and lower nibbles uint8_t Binding = Symbol.getBinding(); uint8_t Type = Symbol.getType(); if (Base) { Type = mergeTypeForSet(Type, Base->getType()); } uint8_t Info = (Binding << 4) | Type; // Other and Visibility share the same byte with Visibility using the lower // 2 bits uint8_t Visibility = Symbol.getVisibility(); uint8_t Other = Symbol.getOther() | Visibility; uint64_t Value = SymbolValue(*MSD.Symbol, Layout); uint64_t Size = 0; const MCExpr *ESize = MSD.Symbol->getSize(); if (!ESize && Base) ESize = Base->getSize(); if (ESize) { int64_t Res; if (!ESize->evaluateKnownAbsolute(Res, Layout)) report_fatal_error("Size expression must be absolute."); Size = Res; } // Write out the symbol table entry Writer.writeSymbol(StringIndex, Info, Value, Size, Other, MSD.SectionIndex, IsReserved); } // True if the assembler knows nothing about the final value of the symbol. // This doesn't cover the comdat issues, since in those cases the assembler // can at least know that all symbols in the section will move together. static bool isWeak(const MCSymbolELF &Sym) { if (Sym.getType() == ELF::STT_GNU_IFUNC) return true; switch (Sym.getBinding()) { default: llvm_unreachable("Unknown binding"); case ELF::STB_LOCAL: return false; case ELF::STB_GLOBAL: return false; case ELF::STB_WEAK: case ELF::STB_GNU_UNIQUE: return true; } } bool ELFWriter::isInSymtab(const MCAsmLayout &Layout, const MCSymbolELF &Symbol, bool Used, bool Renamed) { if (Symbol.isVariable()) { const MCExpr *Expr = Symbol.getVariableValue(); if (const MCSymbolRefExpr *Ref = dyn_cast(Expr)) { if (Ref->getKind() == MCSymbolRefExpr::VK_WEAKREF) return false; } } if (Used) return true; if (Renamed) return false; if (Symbol.isVariable() && Symbol.isUndefined()) { // FIXME: this is here just to diagnose the case of a var = commmon_sym. 
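writeSymbol above packs two fields into single bytes: binding and type share st_info as upper and lower nibbles, and visibility occupies the low two bits of st_other. A minimal sketch of that packing (the constants shown are the standard ELF values; the helper names are illustrative):

#include <cstdint>

constexpr uint8_t makeStInfo(uint8_t Binding, uint8_t Type) {
    return static_cast<uint8_t>((Binding << 4) | (Type & 0x0f));
}
constexpr uint8_t bindingOf(uint8_t Info) { return Info >> 4; }
constexpr uint8_t typeOf(uint8_t Info)    { return Info & 0x0f; }

// STB_GLOBAL == 1 and STT_FUNC == 2 in the ELF specification:
static_assert(makeStInfo(1, 2) == 0x12, "global function symbol");
static_assert(bindingOf(0x12) == 1 && typeOf(0x12) == 2, "round-trips");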
Layout.getBaseSymbol(Symbol); return false; } if (Symbol.isUndefined() && !Symbol.isBindingSet()) return false; if (Symbol.isTemporary()) return false; if (Symbol.getType() == ELF::STT_SECTION) return false; return true; } void ELFWriter::computeSymbolTable( MCAssembler &Asm, const MCAsmLayout &Layout, const SectionIndexMapTy &SectionIndexMap, const RevGroupMapTy &RevGroupMap, SectionOffsetsTy &SectionOffsets) { MCContext &Ctx = Asm.getContext(); SymbolTableWriter Writer(*this, is64Bit()); // Symbol table unsigned EntrySize = is64Bit() ? ELF::SYMENTRY_SIZE64 : ELF::SYMENTRY_SIZE32; MCSectionELF *SymtabSection = Ctx.getELFSection(".symtab", ELF::SHT_SYMTAB, 0, EntrySize, ""); SymtabSection->setAlignment(is64Bit() ? 8 : 4); SymbolTableIndex = addToSectionTable(SymtabSection); align(SymtabSection->getAlignment()); uint64_t SecStart = W.OS.tell(); // The first entry is the undefined symbol entry. Writer.writeSymbol(0, 0, 0, 0, 0, 0, false); std::vector LocalSymbolData; std::vector ExternalSymbolData; // Add the data for the symbols. bool HasLargeSectionIndex = false; for (const MCSymbol &S : Asm.symbols()) { const auto &Symbol = cast(S); bool Used = Symbol.isUsedInReloc(); bool WeakrefUsed = Symbol.isWeakrefUsedInReloc(); bool isSignature = Symbol.isSignature(); if (!isInSymtab(Layout, Symbol, Used || WeakrefUsed || isSignature, OWriter.Renames.count(&Symbol))) continue; if (Symbol.isTemporary() && Symbol.isUndefined()) { Ctx.reportError(SMLoc(), "Undefined temporary symbol"); continue; } ELFSymbolData MSD; MSD.Symbol = cast(&Symbol); bool Local = Symbol.getBinding() == ELF::STB_LOCAL; assert(Local || !Symbol.isTemporary()); if (Symbol.isAbsolute()) { MSD.SectionIndex = ELF::SHN_ABS; } else if (Symbol.isCommon()) { assert(!Local); MSD.SectionIndex = ELF::SHN_COMMON; } else if (Symbol.isUndefined()) { if (isSignature && !Used) { MSD.SectionIndex = RevGroupMap.lookup(&Symbol); if (MSD.SectionIndex >= ELF::SHN_LORESERVE) HasLargeSectionIndex = true; } else { MSD.SectionIndex = ELF::SHN_UNDEF; } } else { const MCSectionELF &Section = static_cast(Symbol.getSection()); if (Mode == NonDwoOnly && isDwoSection(Section)) continue; MSD.SectionIndex = SectionIndexMap.lookup(&Section); assert(MSD.SectionIndex && "Invalid section index!"); if (MSD.SectionIndex >= ELF::SHN_LORESERVE) HasLargeSectionIndex = true; } StringRef Name = Symbol.getName(); // Sections have their own string table if (Symbol.getType() != ELF::STT_SECTION) { MSD.Name = Name; StrTabBuilder.add(Name); } if (Local) LocalSymbolData.push_back(MSD); else ExternalSymbolData.push_back(MSD); } // This holds the .symtab_shndx section index. unsigned SymtabShndxSectionIndex = 0; if (HasLargeSectionIndex) { MCSectionELF *SymtabShndxSection = Ctx.getELFSection(".symtab_shndxr", ELF::SHT_SYMTAB_SHNDX, 0, 4, ""); SymtabShndxSectionIndex = addToSectionTable(SymtabShndxSection); SymtabShndxSection->setAlignment(4); } ArrayRef FileNames = Asm.getFileNames(); for (const std::string &Name : FileNames) StrTabBuilder.add(Name); StrTabBuilder.finalize(); // File symbols are emitted first and handled separately from normal symbols, // i.e. a non-STT_FILE symbol with the same name may appear. for (const std::string &Name : FileNames) Writer.writeSymbol(StrTabBuilder.getOffset(Name), ELF::STT_FILE | ELF::STB_LOCAL, 0, 0, ELF::STV_DEFAULT, ELF::SHN_ABS, true); // Symbols are required to be in lexicographic order. 
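The ordering enforced in computeSymbolTable matters because the ELF format encodes it: index 0 is the null entry, the STT_FILE entries are written next, every STB_LOCAL symbol must precede every non-local symbol, and the symtab section's sh_info field must hold the index of the first non-local symbol (which is what LastLocalSymbolIndex feeds into writeSection later). A tiny sketch of that index arithmetic (illustrative helper, assuming exactly the layout just described):

// Index of the first non-local .symtab entry, i.e. the value stored in the
// SHT_SYMTAB section header's sh_info field.
constexpr unsigned firstNonLocalIndex(unsigned NumFileSyms, unsigned NumLocalSyms) {
    return 1 /* null symbol at index 0 */ + NumFileSyms + NumLocalSyms;
}
static_assert(firstNonLocalIndex(1, 3) == 5, "");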
array_pod_sort(LocalSymbolData.begin(), LocalSymbolData.end()); array_pod_sort(ExternalSymbolData.begin(), ExternalSymbolData.end()); // Set the symbol indices. Local symbols must come before all other // symbols with non-local bindings. unsigned Index = FileNames.size() + 1; for (ELFSymbolData &MSD : LocalSymbolData) { unsigned StringIndex = MSD.Symbol->getType() == ELF::STT_SECTION ? 0 : StrTabBuilder.getOffset(MSD.Name); MSD.Symbol->setIndex(Index++); writeSymbol(Writer, StringIndex, MSD, Layout); } // Write the symbol table entries. LastLocalSymbolIndex = Index; for (ELFSymbolData &MSD : ExternalSymbolData) { unsigned StringIndex = StrTabBuilder.getOffset(MSD.Name); MSD.Symbol->setIndex(Index++); writeSymbol(Writer, StringIndex, MSD, Layout); assert(MSD.Symbol->getBinding() != ELF::STB_LOCAL); } uint64_t SecEnd = W.OS.tell(); SectionOffsets[SymtabSection] = std::make_pair(SecStart, SecEnd); ArrayRef ShndxIndexes = Writer.getShndxIndexes(); if (ShndxIndexes.empty()) { assert(SymtabShndxSectionIndex == 0); return; } assert(SymtabShndxSectionIndex != 0); SecStart = W.OS.tell(); const MCSectionELF *SymtabShndxSection = SectionTable[SymtabShndxSectionIndex - 1]; for (uint32_t Index : ShndxIndexes) write(Index); SecEnd = W.OS.tell(); SectionOffsets[SymtabShndxSection] = std::make_pair(SecStart, SecEnd); } void ELFWriter::writeAddrsigSection() { for (const MCSymbol *Sym : OWriter.AddrsigSyms) encodeULEB128(Sym->getIndex(), W.OS); } MCSectionELF *ELFWriter::createRelocationSection(MCContext &Ctx, const MCSectionELF &Sec) { if (OWriter.Relocations[&Sec].empty()) return nullptr; const StringRef SectionName = Sec.getSectionName(); std::string RelaSectionName = hasRelocationAddend() ? ".rela" : ".rel"; RelaSectionName += SectionName; unsigned EntrySize; if (hasRelocationAddend()) EntrySize = is64Bit() ? sizeof(ELF::Elf64_Rela) : sizeof(ELF::Elf32_Rela); else EntrySize = is64Bit() ? sizeof(ELF::Elf64_Rel) : sizeof(ELF::Elf32_Rel); unsigned Flags = 0; if (Sec.getFlags() & ELF::SHF_GROUP) Flags = ELF::SHF_GROUP; MCSectionELF *RelaSection = Ctx.createELFRelSection( RelaSectionName, hasRelocationAddend() ? ELF::SHT_RELA : ELF::SHT_REL, Flags, EntrySize, Sec.getGroup(), &Sec); RelaSection->setAlignment(is64Bit() ? 8 : 4); return RelaSection; } // Include the debug info compression header. bool ELFWriter::maybeWriteCompression( uint64_t Size, SmallVectorImpl &CompressedContents, bool ZLibStyle, unsigned Alignment) { if (ZLibStyle) { uint64_t HdrSize = is64Bit() ? sizeof(ELF::Elf32_Chdr) : sizeof(ELF::Elf64_Chdr); if (Size <= HdrSize + CompressedContents.size()) return false; // Platform specific header is followed by compressed data. if (is64Bit()) { // Write Elf64_Chdr header. write(static_cast(ELF::ELFCOMPRESS_ZLIB)); write(static_cast(0)); // ch_reserved field. write(static_cast(Size)); write(static_cast(Alignment)); } else { // Write Elf32_Chdr header otherwise. write(static_cast(ELF::ELFCOMPRESS_ZLIB)); write(static_cast(Size)); write(static_cast(Alignment)); } return true; } // "ZLIB" followed by 8 bytes representing the uncompressed size of the section, // useful for consumers to preallocate a buffer to decompress into. 
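maybeWriteCompression handles two on-disk layouts: the standards-track SHF_COMPRESSED form, which begins with an Elf32_Chdr/Elf64_Chdr, and the older zlib-gnu form, in which the section is renamed from .debug_* to .zdebug_* and begins with the literal "ZLIB" followed by the uncompressed size as a big-endian 64-bit integer. A standalone sketch of the zlib-gnu header write (plain iostream in place of LLVM's endian Writer; the function name is illustrative):

#include <cstdint>
#include <ostream>

void writeZlibGnuHeader(std::ostream &OS, uint64_t UncompressedSize) {
    OS.write("ZLIB", 4);
    // Emit big-endian bytes regardless of host byte order, matching the
    // support::big argument used above.
    for (int Shift = 56; Shift >= 0; Shift -= 8)
        OS.put(static_cast<char>((UncompressedSize >> Shift) & 0xff));
}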
const StringRef Magic = "ZLIB"; if (Size <= Magic.size() + sizeof(Size) + CompressedContents.size()) return false; W.OS << Magic; support::endian::write(W.OS, Size, support::big); return true; } void ELFWriter::writeSectionData(const MCAssembler &Asm, MCSection &Sec, const MCAsmLayout &Layout) { MCSectionELF &Section = static_cast<MCSectionELF &>(Sec); StringRef SectionName = Section.getSectionName(); auto &MC = Asm.getContext(); const auto &MAI = MC.getAsmInfo(); // Compressing debug_frame requires handling alignment fragments which is // more work (possibly generalizing MCAssembler.cpp:writeFragment to allow // for writing to arbitrary buffers) for little benefit. bool CompressionEnabled = MAI->compressDebugSections() != DebugCompressionType::None; if (!CompressionEnabled || !SectionName.startswith(".debug_") || SectionName == ".debug_frame") { Asm.writeSectionData(W.OS, &Section, Layout); return; } assert((MAI->compressDebugSections() == DebugCompressionType::Z || MAI->compressDebugSections() == DebugCompressionType::GNU) && "expected zlib or zlib-gnu style compression"); SmallVector<char, 128> UncompressedData; raw_svector_ostream VecOS(UncompressedData); Asm.writeSectionData(VecOS, &Section, Layout); SmallVector<char, 128> CompressedContents; if (Error E = zlib::compress( StringRef(UncompressedData.data(), UncompressedData.size()), CompressedContents)) { consumeError(std::move(E)); W.OS << UncompressedData; return; } bool ZlibStyle = MAI->compressDebugSections() == DebugCompressionType::Z; if (!maybeWriteCompression(UncompressedData.size(), CompressedContents, ZlibStyle, Sec.getAlignment())) { W.OS << UncompressedData; return; } if (ZlibStyle) // Set the compressed flag. That is zlib style. Section.setFlags(Section.getFlags() | ELF::SHF_COMPRESSED); else // Add "z" prefix to section name. This is zlib-gnu style. MC.renameELFSection(&Section, (".z" + SectionName.drop_front(1)).str()); W.OS << CompressedContents; } void ELFWriter::WriteSecHdrEntry(uint32_t Name, uint32_t Type, uint64_t Flags, uint64_t Address, uint64_t Offset, uint64_t Size, uint32_t Link, uint32_t Info, uint64_t Alignment, uint64_t EntrySize) { W.write<uint32_t>(Name); // sh_name: index into string table W.write<uint32_t>(Type); // sh_type WriteWord(Flags); // sh_flags WriteWord(Address); // sh_addr WriteWord(Offset); // sh_offset WriteWord(Size); // sh_size W.write<uint32_t>(Link); // sh_link W.write<uint32_t>(Info); // sh_info WriteWord(Alignment); // sh_addralign WriteWord(EntrySize); // sh_entsize } void ELFWriter::writeRelocations(const MCAssembler &Asm, const MCSectionELF &Sec) { std::vector<ELFRelocationEntry> &Relocs = OWriter.Relocations[&Sec]; // We record relocations by pushing to the end of a vector. Reverse the vector // to get the relocations in the order they were created. // In most cases that is not important, but it can be for special sections // (.eh_frame) or specific relocations (TLS optimizations on SystemZ). std::reverse(Relocs.begin(), Relocs.end()); // Sort the relocation entries. MIPS needs this. OWriter.TargetObjectWriter->sortRelocs(Asm, Relocs); for (const ELFRelocationEntry &Entry : Relocs) { unsigned Index = Entry.Symbol ?
Entry.Symbol->getIndex() : 0; if (is64Bit()) { write(Entry.Offset); if (OWriter.TargetObjectWriter->getEMachine() == ELF::EM_MIPS) { write(uint32_t(Index)); write(OWriter.TargetObjectWriter->getRSsym(Entry.Type)); write(OWriter.TargetObjectWriter->getRType3(Entry.Type)); write(OWriter.TargetObjectWriter->getRType2(Entry.Type)); write(OWriter.TargetObjectWriter->getRType(Entry.Type)); } else { struct ELF::Elf64_Rela ERE64; ERE64.setSymbolAndType(Index, Entry.Type); write(ERE64.r_info); } if (hasRelocationAddend()) write(Entry.Addend); } else { write(uint32_t(Entry.Offset)); struct ELF::Elf32_Rela ERE32; ERE32.setSymbolAndType(Index, Entry.Type); write(ERE32.r_info); if (hasRelocationAddend()) write(uint32_t(Entry.Addend)); if (OWriter.TargetObjectWriter->getEMachine() == ELF::EM_MIPS) { if (uint32_t RType = OWriter.TargetObjectWriter->getRType2(Entry.Type)) { write(uint32_t(Entry.Offset)); ERE32.setSymbolAndType(0, RType); write(ERE32.r_info); write(uint32_t(0)); } if (uint32_t RType = OWriter.TargetObjectWriter->getRType3(Entry.Type)) { write(uint32_t(Entry.Offset)); ERE32.setSymbolAndType(0, RType); write(ERE32.r_info); write(uint32_t(0)); } } } } } const MCSectionELF *ELFWriter::createStringTable(MCContext &Ctx) { const MCSectionELF *StrtabSection = SectionTable[StringTableIndex - 1]; StrTabBuilder.write(W.OS); return StrtabSection; } void ELFWriter::writeSection(const SectionIndexMapTy &SectionIndexMap, uint32_t GroupSymbolIndex, uint64_t Offset, uint64_t Size, const MCSectionELF &Section) { uint64_t sh_link = 0; uint64_t sh_info = 0; switch(Section.getType()) { default: // Nothing to do. break; case ELF::SHT_DYNAMIC: llvm_unreachable("SHT_DYNAMIC in a relocatable object"); case ELF::SHT_REL: case ELF::SHT_RELA: { sh_link = SymbolTableIndex; assert(sh_link && ".symtab not found"); const MCSection *InfoSection = Section.getAssociatedSection(); sh_info = SectionIndexMap.lookup(cast(InfoSection)); break; } case ELF::SHT_SYMTAB: sh_link = StringTableIndex; sh_info = LastLocalSymbolIndex; break; case ELF::SHT_SYMTAB_SHNDX: case ELF::SHT_LLVM_CALL_GRAPH_PROFILE: case ELF::SHT_LLVM_ADDRSIG: sh_link = SymbolTableIndex; break; case ELF::SHT_GROUP: sh_link = SymbolTableIndex; sh_info = GroupSymbolIndex; break; } if (Section.getFlags() & ELF::SHF_LINK_ORDER) { const MCSymbol *Sym = Section.getAssociatedSymbol(); const MCSectionELF *Sec = cast(&Sym->getSection()); sh_link = SectionIndexMap.lookup(Sec); } WriteSecHdrEntry(StrTabBuilder.getOffset(Section.getSectionName()), Section.getType(), Section.getFlags(), 0, Offset, Size, sh_link, sh_info, Section.getAlignment(), Section.getEntrySize()); } void ELFWriter::writeSectionHeader( const MCAsmLayout &Layout, const SectionIndexMapTy &SectionIndexMap, const SectionOffsetsTy &SectionOffsets) { const unsigned NumSections = SectionTable.size(); // Null section first. uint64_t FirstSectionSize = (NumSections + 1) >= ELF::SHN_LORESERVE ? 
NumSections + 1 : 0; WriteSecHdrEntry(0, 0, 0, 0, 0, FirstSectionSize, 0, 0, 0, 0); for (const MCSectionELF *Section : SectionTable) { uint32_t GroupSymbolIndex; unsigned Type = Section->getType(); if (Type != ELF::SHT_GROUP) GroupSymbolIndex = 0; else GroupSymbolIndex = Section->getGroup()->getIndex(); const std::pair &Offsets = SectionOffsets.find(Section)->second; uint64_t Size; if (Type == ELF::SHT_NOBITS) Size = Layout.getSectionAddressSize(Section); else Size = Offsets.second - Offsets.first; writeSection(SectionIndexMap, GroupSymbolIndex, Offsets.first, Size, *Section); } } uint64_t ELFWriter::writeObject(MCAssembler &Asm, const MCAsmLayout &Layout) { uint64_t StartOffset = W.OS.tell(); MCContext &Ctx = Asm.getContext(); MCSectionELF *StrtabSection = Ctx.getELFSection(".strtab", ELF::SHT_STRTAB, 0); StringTableIndex = addToSectionTable(StrtabSection); RevGroupMapTy RevGroupMap; SectionIndexMapTy SectionIndexMap; std::map> GroupMembers; // Write out the ELF header ... writeHeader(Asm); // ... then the sections ... SectionOffsetsTy SectionOffsets; std::vector Groups; std::vector Relocations; for (MCSection &Sec : Asm) { MCSectionELF &Section = static_cast(Sec); if (Mode == NonDwoOnly && isDwoSection(Section)) continue; if (Mode == DwoOnly && !isDwoSection(Section)) continue; align(Section.getAlignment()); // Remember the offset into the file for this section. uint64_t SecStart = W.OS.tell(); const MCSymbolELF *SignatureSymbol = Section.getGroup(); writeSectionData(Asm, Section, Layout); uint64_t SecEnd = W.OS.tell(); SectionOffsets[&Section] = std::make_pair(SecStart, SecEnd); MCSectionELF *RelSection = createRelocationSection(Ctx, Section); if (SignatureSymbol) { Asm.registerSymbol(*SignatureSymbol); unsigned &GroupIdx = RevGroupMap[SignatureSymbol]; if (!GroupIdx) { MCSectionELF *Group = Ctx.createELFGroupSection(SignatureSymbol); GroupIdx = addToSectionTable(Group); Group->setAlignment(4); Groups.push_back(Group); } std::vector &Members = GroupMembers[SignatureSymbol]; Members.push_back(&Section); if (RelSection) Members.push_back(RelSection); } SectionIndexMap[&Section] = addToSectionTable(&Section); if (RelSection) { SectionIndexMap[RelSection] = addToSectionTable(RelSection); Relocations.push_back(RelSection); } } MCSectionELF *CGProfileSection = nullptr; if (!Asm.CGProfile.empty()) { CGProfileSection = Ctx.getELFSection(".llvm.call-graph-profile", ELF::SHT_LLVM_CALL_GRAPH_PROFILE, ELF::SHF_EXCLUDE, 16, ""); SectionIndexMap[CGProfileSection] = addToSectionTable(CGProfileSection); } for (MCSectionELF *Group : Groups) { align(Group->getAlignment()); // Remember the offset into the file for this section. uint64_t SecStart = W.OS.tell(); const MCSymbol *SignatureSymbol = Group->getGroup(); assert(SignatureSymbol); write(uint32_t(ELF::GRP_COMDAT)); for (const MCSectionELF *Member : GroupMembers[SignatureSymbol]) { uint32_t SecIndex = SectionIndexMap.lookup(Member); write(SecIndex); } uint64_t SecEnd = W.OS.tell(); SectionOffsets[Group] = std::make_pair(SecStart, SecEnd); } if (Mode == DwoOnly) { // dwo files don't have symbol tables or relocations, but they do have // string tables. StrTabBuilder.finalize(); } else { MCSectionELF *AddrsigSection; if (OWriter.EmitAddrsigSection) { AddrsigSection = Ctx.getELFSection(".llvm_addrsig", ELF::SHT_LLVM_ADDRSIG, ELF::SHF_EXCLUDE); addToSectionTable(AddrsigSection); } // Compute symbol table information. 
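// From here writeObject() emits the symbol table, the relocation
// sections, the .llvm_addrsig and .llvm.call-graph-profile payloads,
// .strtab, and finally the section header table, then backpatches
// e_shoff and e_shnum into the already-written ELF header. Note the
// overflow convention visible both in writeSectionHeader() above and in
// the e_shnum handling below: with SHN_LORESERVE (0xff00) or more
// sections, e_shnum is written as 0 and readers take the real count from
// sh_size of the null section header.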
computeSymbolTable(Asm, Layout, SectionIndexMap, RevGroupMap, SectionOffsets); for (MCSectionELF *RelSection : Relocations) { align(RelSection->getAlignment()); // Remember the offset into the file for this section. uint64_t SecStart = W.OS.tell(); writeRelocations(Asm, cast(*RelSection->getAssociatedSection())); uint64_t SecEnd = W.OS.tell(); SectionOffsets[RelSection] = std::make_pair(SecStart, SecEnd); } if (OWriter.EmitAddrsigSection) { uint64_t SecStart = W.OS.tell(); writeAddrsigSection(); uint64_t SecEnd = W.OS.tell(); SectionOffsets[AddrsigSection] = std::make_pair(SecStart, SecEnd); } } if (CGProfileSection) { uint64_t SecStart = W.OS.tell(); for (const MCAssembler::CGProfileEntry &CGPE : Asm.CGProfile) { W.write(CGPE.From->getSymbol().getIndex()); W.write(CGPE.To->getSymbol().getIndex()); W.write(CGPE.Count); } uint64_t SecEnd = W.OS.tell(); SectionOffsets[CGProfileSection] = std::make_pair(SecStart, SecEnd); } { uint64_t SecStart = W.OS.tell(); const MCSectionELF *Sec = createStringTable(Ctx); uint64_t SecEnd = W.OS.tell(); SectionOffsets[Sec] = std::make_pair(SecStart, SecEnd); } uint64_t NaturalAlignment = is64Bit() ? 8 : 4; align(NaturalAlignment); const uint64_t SectionHeaderOffset = W.OS.tell(); // ... then the section header table ... writeSectionHeader(Layout, SectionIndexMap, SectionOffsets); uint16_t NumSections = support::endian::byte_swap( (SectionTable.size() + 1 >= ELF::SHN_LORESERVE) ? (uint16_t)ELF::SHN_UNDEF : SectionTable.size() + 1, W.Endian); unsigned NumSectionsOffset; auto &Stream = static_cast(W.OS); if (is64Bit()) { uint64_t Val = support::endian::byte_swap(SectionHeaderOffset, W.Endian); Stream.pwrite(reinterpret_cast(&Val), sizeof(Val), offsetof(ELF::Elf64_Ehdr, e_shoff)); NumSectionsOffset = offsetof(ELF::Elf64_Ehdr, e_shnum); } else { uint32_t Val = support::endian::byte_swap(SectionHeaderOffset, W.Endian); Stream.pwrite(reinterpret_cast(&Val), sizeof(Val), offsetof(ELF::Elf32_Ehdr, e_shoff)); NumSectionsOffset = offsetof(ELF::Elf32_Ehdr, e_shnum); } Stream.pwrite(reinterpret_cast(&NumSections), sizeof(NumSections), NumSectionsOffset); return W.OS.tell() - StartOffset; } bool ELFObjectWriter::hasRelocationAddend() const { return TargetObjectWriter->hasRelocationAddend(); } void ELFObjectWriter::executePostLayoutBinding(MCAssembler &Asm, const MCAsmLayout &Layout) { // The presence of symbol versions causes undefined symbols and // versions declared with @@@ to be renamed. for (const std::pair &P : Asm.Symvers) { StringRef AliasName = P.first; const auto &Symbol = cast(*P.second); size_t Pos = AliasName.find('@'); assert(Pos != StringRef::npos); StringRef Prefix = AliasName.substr(0, Pos); StringRef Rest = AliasName.substr(Pos); StringRef Tail = Rest; if (Rest.startswith("@@@")) Tail = Rest.substr(Symbol.isUndefined() ? 2 : 1); auto *Alias = cast(Asm.getContext().getOrCreateSymbol(Prefix + Tail)); Asm.registerSymbol(*Alias); const MCExpr *Value = MCSymbolRefExpr::create(&Symbol, Asm.getContext()); Alias->setVariableValue(Value); // Aliases defined with .symvar copy the binding from the symbol they alias. // This is the first place we are able to copy this information. Alias->setExternal(Symbol.isExternal()); Alias->setBinding(Symbol.getBinding()); if (!Symbol.isUndefined() && !Rest.startswith("@@@")) continue; - // FIXME: produce a better error message. + // FIXME: Get source locations for these errors or diagnose them earlier. 
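// Recall the syntax handled here: "foo@VER" introduces a non-default
// version, "foo@@VER" a default version (which therefore needs a
// definition), and "foo@@@VER" resolves to "@@VER" when foo is defined
// and to "@VER" when it is undefined, which is why Tail skips two
// characters in the undefined case above.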
if (Symbol.isUndefined() && Rest.startswith("@@") && - !Rest.startswith("@@@")) - report_fatal_error("A @@ version cannot be undefined"); + !Rest.startswith("@@@")) { + Asm.getContext().reportError(SMLoc(), "versioned symbol " + AliasName + + " must be defined"); + continue; + } - if (Renames.count(&Symbol) && Renames[&Symbol] != Alias) - report_fatal_error(llvm::Twine("Multiple symbol versions defined for ") + - Symbol.getName()); + if (Renames.count(&Symbol) && Renames[&Symbol] != Alias) { + Asm.getContext().reportError( + SMLoc(), llvm::Twine("multiple symbol versions defined for ") + + Symbol.getName()); + continue; + } Renames.insert(std::make_pair(&Symbol, Alias)); } for (const MCSymbol *&Sym : AddrsigSyms) { if (const MCSymbol *R = Renames.lookup(cast(Sym))) Sym = R; Sym->setUsedInReloc(); } } // It is always valid to create a relocation with a symbol. It is preferable // to use a relocation with a section if that is possible. Using the section // allows us to omit some local symbols from the symbol table. bool ELFObjectWriter::shouldRelocateWithSymbol(const MCAssembler &Asm, const MCSymbolRefExpr *RefA, const MCSymbolELF *Sym, uint64_t C, unsigned Type) const { // A PCRel relocation to an absolute value has no symbol (or section). We // represent that with a relocation to a null section. if (!RefA) return false; MCSymbolRefExpr::VariantKind Kind = RefA->getKind(); switch (Kind) { default: break; // The .odp creation emits a relocation against the symbol ".TOC." which // create a R_PPC64_TOC relocation. However the relocation symbol name // in final object creation should be NULL, since the symbol does not // really exist, it is just the reference to TOC base for the current // object file. Since the symbol is undefined, returning false results // in a relocation with a null section which is the desired result. case MCSymbolRefExpr::VK_PPC_TOCBASE: return false; // These VariantKind cause the relocation to refer to something other than // the symbol itself, like a linker generated table. Since the address of // symbol is not relevant, we cannot replace the symbol with the // section and patch the difference in the addend. case MCSymbolRefExpr::VK_GOT: case MCSymbolRefExpr::VK_PLT: case MCSymbolRefExpr::VK_GOTPCREL: case MCSymbolRefExpr::VK_PPC_GOT_LO: case MCSymbolRefExpr::VK_PPC_GOT_HI: case MCSymbolRefExpr::VK_PPC_GOT_HA: return true; } // An undefined symbol is not in any section, so the relocation has to point // to the symbol itself. assert(Sym && "Expected a symbol"); if (Sym->isUndefined()) return true; unsigned Binding = Sym->getBinding(); switch(Binding) { default: llvm_unreachable("Invalid Binding"); case ELF::STB_LOCAL: break; case ELF::STB_WEAK: // If the symbol is weak, it might be overridden by a symbol in another // file. The relocation has to point to the symbol so that the linker // can update it. return true; case ELF::STB_GLOBAL: // Global ELF symbols can be preempted by the dynamic linker. The relocation // has to point to the symbol for a reason analogous to the STB_WEAK case. return true; } // If a relocation points to a mergeable section, we have to be careful. // If the offset is zero, a relocation with the section will encode the // same information. With a non-zero offset, the situation is different. // For example, a relocation can point 42 bytes past the end of a string. // If we change such a relocation to use the section, the linker would think // that it pointed to another string and subtracting 42 at runtime will // produce the wrong value. 
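// Concretely: if "str" sits at offset 10 in an SHF_MERGE string section
// and a reference targets str+42, lowering it as (section, addend 52)
// leaves the linker to interpret 52 relative to the section as laid out
// before string merging, so the reference breaks as soon as strings are
// coalesced or reordered; relocating against the symbol keeps the extra
// 42 attached to str itself.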
if (Sym->isInSection()) { auto &Sec = cast(Sym->getSection()); unsigned Flags = Sec.getFlags(); if (Flags & ELF::SHF_MERGE) { if (C != 0) return true; // It looks like gold has a bug (http://sourceware.org/PR16794) and can // only handle section relocations to mergeable sections if using RELA. if (!hasRelocationAddend()) return true; } // Most TLS relocations use a got, so they need the symbol. Even those that // are just an offset (@tpoff), require a symbol in gold versions before // 5efeedf61e4fe720fd3e9a08e6c91c10abb66d42 (2014-09-26) which fixed // http://sourceware.org/PR16773. if (Flags & ELF::SHF_TLS) return true; } // If the symbol is a thumb function the final relocation must set the lowest // bit. With a symbol that is done by just having the symbol have that bit // set, so we would lose the bit if we relocated with the section. // FIXME: We could use the section but add the bit to the relocation value. if (Asm.isThumbFunc(Sym)) return true; if (TargetObjectWriter->needsRelocateWithSymbol(*Sym, Type)) return true; return false; } void ELFObjectWriter::recordRelocation(MCAssembler &Asm, const MCAsmLayout &Layout, const MCFragment *Fragment, const MCFixup &Fixup, MCValue Target, uint64_t &FixedValue) { MCAsmBackend &Backend = Asm.getBackend(); bool IsPCRel = Backend.getFixupKindInfo(Fixup.getKind()).Flags & MCFixupKindInfo::FKF_IsPCRel; const MCSectionELF &FixupSection = cast(*Fragment->getParent()); uint64_t C = Target.getConstant(); uint64_t FixupOffset = Layout.getFragmentOffset(Fragment) + Fixup.getOffset(); MCContext &Ctx = Asm.getContext(); if (const MCSymbolRefExpr *RefB = Target.getSymB()) { // Let A, B and C being the components of Target and R be the location of // the fixup. If the fixup is not pcrel, we want to compute (A - B + C). // If it is pcrel, we want to compute (A - B + C - R). // In general, ELF has no relocations for -B. It can only represent (A + C) // or (A + C - R). If B = R + K and the relocation is not pcrel, we can // replace B to implement it: (A - R - K + C) if (IsPCRel) { Ctx.reportError( Fixup.getLoc(), "No relocation available to represent this relative expression"); return; } const auto &SymB = cast(RefB->getSymbol()); if (SymB.isUndefined()) { Ctx.reportError(Fixup.getLoc(), Twine("symbol '") + SymB.getName() + "' can not be undefined in a subtraction expression"); return; } assert(!SymB.isAbsolute() && "Should have been folded"); const MCSection &SecB = SymB.getSection(); if (&SecB != &FixupSection) { Ctx.reportError(Fixup.getLoc(), "Cannot represent a difference across sections"); return; } uint64_t SymBOffset = Layout.getSymbolOffset(SymB); uint64_t K = SymBOffset - FixupOffset; IsPCRel = true; C -= K; } // We either rejected the fixup or folded B into C at this point. const MCSymbolRefExpr *RefA = Target.getSymA(); const auto *SymA = RefA ? 
cast(&RefA->getSymbol()) : nullptr; bool ViaWeakRef = false; if (SymA && SymA->isVariable()) { const MCExpr *Expr = SymA->getVariableValue(); if (const auto *Inner = dyn_cast(Expr)) { if (Inner->getKind() == MCSymbolRefExpr::VK_WEAKREF) { SymA = cast(&Inner->getSymbol()); ViaWeakRef = true; } } } unsigned Type = TargetObjectWriter->getRelocType(Ctx, Target, Fixup, IsPCRel); uint64_t OriginalC = C; bool RelocateWithSymbol = shouldRelocateWithSymbol(Asm, RefA, SymA, C, Type); if (!RelocateWithSymbol && SymA && !SymA->isUndefined()) C += Layout.getSymbolOffset(*SymA); uint64_t Addend = 0; if (hasRelocationAddend()) { Addend = C; C = 0; } FixedValue = C; const MCSectionELF *SecA = (SymA && SymA->isInSection()) ? cast(&SymA->getSection()) : nullptr; if (!checkRelocation(Ctx, Fixup.getLoc(), &FixupSection, SecA)) return; if (!RelocateWithSymbol) { const auto *SectionSymbol = SecA ? cast(SecA->getBeginSymbol()) : nullptr; if (SectionSymbol) SectionSymbol->setUsedInReloc(); ELFRelocationEntry Rec(FixupOffset, SectionSymbol, Type, Addend, SymA, OriginalC); Relocations[&FixupSection].push_back(Rec); return; } const auto *RenamedSymA = SymA; if (SymA) { if (const MCSymbolELF *R = Renames.lookup(SymA)) RenamedSymA = R; if (ViaWeakRef) RenamedSymA->setIsWeakrefUsedInReloc(); else RenamedSymA->setUsedInReloc(); } ELFRelocationEntry Rec(FixupOffset, RenamedSymA, Type, Addend, SymA, OriginalC); Relocations[&FixupSection].push_back(Rec); } bool ELFObjectWriter::isSymbolRefDifferenceFullyResolvedImpl( const MCAssembler &Asm, const MCSymbol &SA, const MCFragment &FB, bool InSet, bool IsPCRel) const { const auto &SymA = cast(SA); if (IsPCRel) { assert(!InSet); if (isWeak(SymA)) return false; } return MCObjectWriter::isSymbolRefDifferenceFullyResolvedImpl(Asm, SymA, FB, InSet, IsPCRel); } std::unique_ptr llvm::createELFObjectWriter(std::unique_ptr MOTW, raw_pwrite_stream &OS, bool IsLittleEndian) { return llvm::make_unique(std::move(MOTW), OS, IsLittleEndian); } std::unique_ptr llvm::createELFDwoObjectWriter(std::unique_ptr MOTW, raw_pwrite_stream &OS, raw_pwrite_stream &DwoOS, bool IsLittleEndian) { return llvm::make_unique(std::move(MOTW), OS, DwoOS, IsLittleEndian); } Index: projects/import-googletest-1.8.1/contrib/llvm =================================================================== --- projects/import-googletest-1.8.1/contrib/llvm (revision 344270) +++ projects/import-googletest-1.8.1/contrib/llvm (revision 344271) Property changes on: projects/import-googletest-1.8.1/contrib/llvm ___________________________________________________________________ Modified: svn:mergeinfo ## -0,0 +0,1 ## Merged /head/contrib/llvm:r344081-344270 Index: projects/import-googletest-1.8.1/etc/mtree/BSD.root.dist =================================================================== --- projects/import-googletest-1.8.1/etc/mtree/BSD.root.dist (revision 344270) +++ projects/import-googletest-1.8.1/etc/mtree/BSD.root.dist (revision 344271) @@ -1,124 +1,126 @@ # $FreeBSD$ # # Please see the file src/etc/mtree/README before making changes to this file. # /set type=dir uname=root gname=wheel mode=0755 . bin .. boot defaults .. dtb allwinner tags=package=runtime .. overlays tags=package=runtime .. rockchip tags=package=runtime .. .. firmware .. lua .. kernel .. modules .. + uboot + .. zfs .. .. dev mode=0555 .. etc X11 .. authpf .. autofs .. bluetooth .. cron.d .. defaults .. devd .. dma .. gss .. mail .. mtree .. newsyslog.conf.d .. ntp mode=0700 .. pam.d .. periodic daily .. monthly .. security .. weekly .. .. pkg .. ppp .. 
rc.conf.d .. rc.d .. security .. ssh .. ssl .. syslog.d .. zfs .. .. lib casper .. geom .. nvmecontrol .. .. libexec resolvconf .. .. media .. mnt .. net .. proc mode=0555 .. rescue .. root .. sbin .. tmp mode=01777 .. usr .. var .. .. Index: projects/import-googletest-1.8.1/kerberos5/tools/asn1_compile/Makefile =================================================================== --- projects/import-googletest-1.8.1/kerberos5/tools/asn1_compile/Makefile (revision 344270) +++ projects/import-googletest-1.8.1/kerberos5/tools/asn1_compile/Makefile (revision 344271) @@ -1,36 +1,37 @@ # $FreeBSD$ PROG= asn1_compile MAN= LIBROKEN_A= ${.OBJDIR:H:H}/lib/libroken/libroken.a LIBADD= vers LDADD= ${LIBROKEN_A} DPADD= ${LIBROKEN_A} +MK_PIE:= no SRCS= \ asn1parse.y \ gen.c \ gen_copy.c \ gen_decode.c \ gen_encode.c \ gen_free.c \ gen_glue.c \ gen_length.c \ gen_seq.c \ gen_template.c \ hash.c \ lex.l \ main.c \ roken.h \ symbol.c CFLAGS+=-I${KRB5DIR}/lib/roken -I${KRB5DIR}/lib/asn1 -I. CLEANFILES= roken.h lex.c parse.c roken.h: make-roken > ${.TARGET} .include .PATH: ${KRB5DIR}/lib/asn1 Index: projects/import-googletest-1.8.1/kerberos5/tools/slc/Makefile =================================================================== --- projects/import-googletest-1.8.1/kerberos5/tools/slc/Makefile (revision 344270) +++ projects/import-googletest-1.8.1/kerberos5/tools/slc/Makefile (revision 344271) @@ -1,25 +1,26 @@ # $FreeBSD$ PROG= slc LIBROKEN_A= ${.OBJDIR:H:H}/lib/libroken/libroken.a LIBADD= vers LDADD= ${LIBROKEN_A} DPADD= ${LIBROKEN_A} MAN= +MK_PIE:= no SRCS= roken.h \ slc-gram.y \ slc-lex.l CFLAGS+=-I${KRB5DIR}/lib/roken -I${KRB5DIR}/lib/sl -I${KRB5DIR}/lib/vers -I. CLEANFILES= roken.h slc-gram.c slc-lex.c roken.h: ${MAKE_ROKEN} > ${.TARGET} # ${.OBJDIR:H}/make-roken/make-roken > ${.TARGET} .include .PATH: ${KRB5DIR}/lib/roken ${KRB5DIR}/lib/sl Index: projects/import-googletest-1.8.1/lib/clang/Makefile.inc =================================================================== --- projects/import-googletest-1.8.1/lib/clang/Makefile.inc (revision 344270) +++ projects/import-googletest-1.8.1/lib/clang/Makefile.inc (revision 344271) @@ -1,9 +1,11 @@ # $FreeBSD$ .include +MK_PIE:= no # Explicit libXXX.a references + .if ${COMPILER_TYPE} == "clang" DEBUG_FILES_CFLAGS= -gline-tables-only .else DEBUG_FILES_CFLAGS= -g1 .endif Index: projects/import-googletest-1.8.1/lib/clang/libllvmminimal/Makefile =================================================================== --- projects/import-googletest-1.8.1/lib/clang/libllvmminimal/Makefile (revision 344270) +++ projects/import-googletest-1.8.1/lib/clang/libllvmminimal/Makefile (revision 344271) @@ -1,73 +1,74 @@ # $FreeBSD$ .include "../llvm.pre.mk" LIB= llvmminimal INTERNALLIB= SRCDIR= lib SRCS+= Support/APFloat.cpp SRCS+= Support/APInt.cpp SRCS+= Support/Atomic.cpp SRCS+= Support/CodeGenCoverage.cpp SRCS+= Support/CommandLine.cpp SRCS+= Support/ConvertUTF.cpp SRCS+= Support/ConvertUTFWrapper.cpp SRCS+= Support/Debug.cpp SRCS+= Support/Errno.cpp SRCS+= Support/Error.cpp SRCS+= Support/ErrorHandling.cpp SRCS+= Support/FoldingSet.cpp +SRCS+= Support/FormatVariadic.cpp SRCS+= Support/FormattedStream.cpp SRCS+= Support/Hashing.cpp SRCS+= Support/Host.cpp SRCS+= Support/IntEqClasses.cpp SRCS+= Support/JSON.cpp SRCS+= Support/Locale.cpp SRCS+= Support/LowLevelType.cpp SRCS+= Support/MD5.cpp SRCS+= Support/ManagedStatic.cpp SRCS+= Support/MemoryBuffer.cpp SRCS+= Support/Mutex.cpp SRCS+= Support/NativeFormatting.cpp SRCS+= Support/Path.cpp SRCS+= Support/PrettyStackTrace.cpp SRCS+= 
Support/Process.cpp SRCS+= Support/Program.cpp SRCS+= Support/Regex.cpp SRCS+= Support/Signals.cpp SRCS+= Support/SmallPtrSet.cpp SRCS+= Support/SmallVector.cpp SRCS+= Support/SourceMgr.cpp SRCS+= Support/Statistic.cpp SRCS+= Support/StringExtras.cpp SRCS+= Support/StringMap.cpp SRCS+= Support/StringRef.cpp SRCS+= Support/StringSaver.cpp SRCS+= Support/TargetParser.cpp SRCS+= Support/Threading.cpp SRCS+= Support/Timer.cpp SRCS+= Support/ToolOutputFile.cpp SRCS+= Support/Triple.cpp SRCS+= Support/Twine.cpp SRCS+= Support/Unicode.cpp SRCS+= Support/WithColor.cpp SRCS+= Support/circular_raw_ostream.cpp SRCS+= Support/raw_ostream.cpp SRCS+= Support/regcomp.c SRCS+= Support/regerror.c SRCS+= Support/regexec.c SRCS+= Support/regfree.c SRCS+= Support/regstrlcpy.c SRCS+= TableGen/Error.cpp SRCS+= TableGen/JSONBackend.cpp SRCS+= TableGen/Main.cpp SRCS+= TableGen/Record.cpp SRCS+= TableGen/SetTheory.cpp SRCS+= TableGen/StringMatcher.cpp SRCS+= TableGen/TGLexer.cpp SRCS+= TableGen/TGParser.cpp SRCS+= TableGen/TableGenBackend.cpp .include "../llvm.build.mk" .include Index: projects/import-googletest-1.8.1/lib/libbe/be.c =================================================================== --- projects/import-googletest-1.8.1/lib/libbe/be.c (revision 344270) +++ projects/import-googletest-1.8.1/lib/libbe/be.c (revision 344271) @@ -1,965 +1,1017 @@ /*- * SPDX-License-Identifier: BSD-2-Clause-FreeBSD * * Copyright (c) 2017 Kyle J. Kneitinger * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ #include __FBSDID("$FreeBSD$"); #include #include #include #include #include #include #include #include #include #include #include #include "be.h" #include "be_impl.h" +struct be_destroy_data { + libbe_handle_t *lbh; + char *snapname; +}; + #if SOON static int be_create_child_noent(libbe_handle_t *lbh, const char *active, const char *child_path); static int be_create_child_cloned(libbe_handle_t *lbh, const char *active); #endif /* * Iterator function for locating the rootfs amongst the children of the * zfs_be_root set by loader(8). data is expected to be a libbe_handle_t *. */ static int be_locate_rootfs(libbe_handle_t *lbh) { struct statfs sfs; struct extmnttab entry; zfs_handle_t *zfs; /* * Check first if root is ZFS; if not, we'll bail on rootfs capture. 
* Unfortunately needed because zfs_path_to_zhandle will emit to * stderr if / isn't actually a ZFS filesystem, which we'd like * to avoid. */ if (statfs("/", &sfs) == 0) { statfs2mnttab(&sfs, &entry); if (strcmp(entry.mnt_fstype, MNTTYPE_ZFS) != 0) return (1); } else return (1); zfs = zfs_path_to_zhandle(lbh->lzh, "/", ZFS_TYPE_FILESYSTEM); if (zfs == NULL) return (1); strlcpy(lbh->rootfs, zfs_get_name(zfs), sizeof(lbh->rootfs)); zfs_close(zfs); return (0); } /* * Initializes the libbe context to operate in the root boot environment * dataset, for example, zroot/ROOT. */ libbe_handle_t * libbe_init(const char *root) { char altroot[MAXPATHLEN]; libbe_handle_t *lbh; char *poolname, *pos; int pnamelen; lbh = NULL; poolname = pos = NULL; if ((lbh = calloc(1, sizeof(libbe_handle_t))) == NULL) goto err; if ((lbh->lzh = libzfs_init()) == NULL) goto err; /* * Grab rootfs, we'll work backwards from there if an optional BE root * has not been passed in. */ if (be_locate_rootfs(lbh) != 0) { if (root == NULL) goto err; *lbh->rootfs = '\0'; } if (root == NULL) { /* Strip off the final slash from rootfs to get the be root */ strlcpy(lbh->root, lbh->rootfs, sizeof(lbh->root)); pos = strrchr(lbh->root, '/'); if (pos == NULL) goto err; *pos = '\0'; } else strlcpy(lbh->root, root, sizeof(lbh->root)); if ((pos = strchr(lbh->root, '/')) == NULL) goto err; pnamelen = pos - lbh->root; poolname = malloc(pnamelen + 1); if (poolname == NULL) goto err; strlcpy(poolname, lbh->root, pnamelen + 1); if ((lbh->active_phandle = zpool_open(lbh->lzh, poolname)) == NULL) goto err; free(poolname); poolname = NULL; if (zpool_get_prop(lbh->active_phandle, ZPOOL_PROP_BOOTFS, lbh->bootfs, sizeof(lbh->bootfs), NULL, true) != 0) goto err; if (zpool_get_prop(lbh->active_phandle, ZPOOL_PROP_ALTROOT, altroot, sizeof(altroot), NULL, true) == 0 && strcmp(altroot, "-") != 0) lbh->altroot_len = strlen(altroot); return (lbh); err: if (lbh != NULL) { if (lbh->active_phandle != NULL) zpool_close(lbh->active_phandle); if (lbh->lzh != NULL) libzfs_fini(lbh->lzh); free(lbh); } free(poolname); return (NULL); } /* * Free memory allocated by libbe_init() */ void libbe_close(libbe_handle_t *lbh) { if (lbh->active_phandle != NULL) zpool_close(lbh->active_phandle); libzfs_fini(lbh->lzh); free(lbh); } /* * Proxy through to libzfs for the moment. */ void be_nicenum(uint64_t num, char *buf, size_t buflen) { zfs_nicenum(num, buf, buflen); } static int be_destroy_cb(zfs_handle_t *zfs_hdl, void *data) { + char path[BE_MAXPATHLEN]; + struct be_destroy_data *bdd; + zfs_handle_t *snap; int err; - if ((err = zfs_iter_children(zfs_hdl, be_destroy_cb, data)) != 0) + bdd = (struct be_destroy_data *)data; + if (bdd->snapname == NULL) { + err = zfs_iter_children(zfs_hdl, be_destroy_cb, data); + if (err != 0) + return (err); + return (zfs_destroy(zfs_hdl, false)); + } + /* If we're dealing with snapshots instead, delete that one alone */ + err = zfs_iter_filesystems(zfs_hdl, be_destroy_cb, data); + if (err != 0) return (err); - if ((err = zfs_destroy(zfs_hdl, false)) != 0) - return (err); + /* + * This part is intentionally glossing over any potential errors, + * because there's a lot less potential for errors when we're cleaning + * up snapshots rather than a full deep BE. The primary error case + * here being if the snapshot doesn't exist in the first place, which + * the caller will likely deem insignificant as long as it doesn't + * exist after the call. Thus, such a missing snapshot shouldn't jam + * up the destruction. 
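+ * For example, destroying "mybe@mysnap" when some child dataset of
+ * mybe was created after the snapshot was taken (and so has no
+ * @mysnap of its own) still succeeds; the missing child snapshot is
+ * simply skipped.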
+ */ + snprintf(path, sizeof(path), "%s@%s", zfs_get_name(zfs_hdl), + bdd->snapname); + if (!zfs_dataset_exists(bdd->lbh->lzh, path, ZFS_TYPE_SNAPSHOT)) + return (0); + snap = zfs_open(bdd->lbh->lzh, path, ZFS_TYPE_SNAPSHOT); + if (snap != NULL) + zfs_destroy(snap, false); return (0); } /* * Destroy the boot environment or snapshot specified by the name * parameter. Options are or'd together with the possible values: * BE_DESTROY_FORCE : forces operation on mounted datasets + * BE_DESTROY_ORIGIN: destroy the origin snapshot as well */ int be_destroy(libbe_handle_t *lbh, const char *name, int options) { + struct be_destroy_data bdd; char origin[BE_MAXPATHLEN], path[BE_MAXPATHLEN]; zfs_handle_t *fs; - char *p; + char *snapdelim; int err, force, mounted; + size_t rootlen; - p = path; + bdd.lbh = lbh; + bdd.snapname = NULL; force = options & BE_DESTROY_FORCE; *origin = '\0'; be_root_concat(lbh, name, path); - if (strchr(name, '@') == NULL) { + if ((snapdelim = strchr(path, '@')) == NULL) { if (!zfs_dataset_exists(lbh->lzh, path, ZFS_TYPE_FILESYSTEM)) return (set_error(lbh, BE_ERR_NOENT)); if (strcmp(path, lbh->rootfs) == 0 || strcmp(path, lbh->bootfs) == 0) return (set_error(lbh, BE_ERR_DESTROYACT)); - fs = zfs_open(lbh->lzh, p, ZFS_TYPE_FILESYSTEM); + fs = zfs_open(lbh->lzh, path, ZFS_TYPE_FILESYSTEM); if (fs == NULL) return (set_error(lbh, BE_ERR_ZFSOPEN)); + if ((options & BE_DESTROY_ORIGIN) != 0 && zfs_prop_get(fs, ZFS_PROP_ORIGIN, origin, sizeof(origin), NULL, NULL, 0, 1) != 0) return (set_error(lbh, BE_ERR_NOORIGIN)); } else { if (!zfs_dataset_exists(lbh->lzh, path, ZFS_TYPE_SNAPSHOT)) return (set_error(lbh, BE_ERR_NOENT)); - fs = zfs_open(lbh->lzh, p, ZFS_TYPE_SNAPSHOT); - if (fs == NULL) + bdd.snapname = strdup(snapdelim + 1); + if (bdd.snapname == NULL) + return (set_error(lbh, BE_ERR_NOMEM)); + *snapdelim = '\0'; + fs = zfs_open(lbh->lzh, path, ZFS_TYPE_DATASET); + if (fs == NULL) { + free(bdd.snapname); return (set_error(lbh, BE_ERR_ZFSOPEN)); + } } /* Check if mounted, unmount if force is specified */ if ((mounted = zfs_is_mounted(fs, NULL)) != 0) { - if (force) + if (force) { zfs_unmount(fs, NULL, 0); - else + } else { + free(bdd.snapname); return (set_error(lbh, BE_ERR_DESTROYMNT)); + } } - if ((err = be_destroy_cb(fs, NULL)) != 0) { + err = be_destroy_cb(fs, &bdd); + zfs_close(fs); + free(bdd.snapname); + if (err != 0) { /* Children are still present or the mount is referenced */ if (err == EBUSY) return (set_error(lbh, BE_ERR_DESTROYMNT)); return (set_error(lbh, BE_ERR_UNKNOWN)); } - if (*origin != '\0') { - fs = zfs_open(lbh->lzh, origin, ZFS_TYPE_SNAPSHOT); - if (fs == NULL) - return (set_error(lbh, BE_ERR_ZFSOPEN)); - err = zfs_destroy(fs, false); - if (err == EBUSY) - return (set_error(lbh, BE_ERR_DESTROYMNT)); - else if (err != 0) - return (set_error(lbh, BE_ERR_UNKNOWN)); - } + if ((options & BE_DESTROY_ORIGIN) == 0) + return (0); - return (0); -} + /* The origin can't possibly be shorter than the BE root */ + rootlen = strlen(lbh->root); + if (*origin == '\0' || strlen(origin) <= rootlen + 1) + return (set_error(lbh, BE_ERR_INVORIGIN)); + /* + * We'll be chopping off the BE root and running this back through + * be_destroy, so that we properly handle the origin snapshot whether + * it be that of a deep BE or not. 
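+ * For example, with a BE root of "zroot/ROOT" and an origin of
+ * "zroot/ROOT/default@2019-02-18", the recursive call below becomes
+ * be_destroy(lbh, "default@2019-02-18", options & ~BE_DESTROY_ORIGIN)
+ * and is handled by the snapshot branch above.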
+ */ + if (strncmp(origin, lbh->root, rootlen) != 0 || origin[rootlen] != '/') + return (0); + + return (be_destroy(lbh, origin + rootlen + 1, + options & ~BE_DESTROY_ORIGIN)); +} int be_snapshot(libbe_handle_t *lbh, const char *source, const char *snap_name, bool recursive, char *result) { char buf[BE_MAXPATHLEN]; time_t rawtime; int len, err; be_root_concat(lbh, source, buf); if ((err = be_exists(lbh, buf)) != 0) return (set_error(lbh, err)); if (snap_name != NULL) { if (strlcat(buf, "@", sizeof(buf)) >= sizeof(buf)) return (set_error(lbh, BE_ERR_INVALIDNAME)); if (strlcat(buf, snap_name, sizeof(buf)) >= sizeof(buf)) return (set_error(lbh, BE_ERR_INVALIDNAME)); if (result != NULL) snprintf(result, BE_MAXPATHLEN, "%s@%s", source, snap_name); } else { time(&rawtime); len = strlen(buf); strftime(buf + len, sizeof(buf) - len, "@%F-%T", localtime(&rawtime)); if (result != NULL && strlcpy(result, strrchr(buf, '/') + 1, sizeof(buf)) >= sizeof(buf)) return (set_error(lbh, BE_ERR_INVALIDNAME)); } if ((err = zfs_snapshot(lbh->lzh, buf, recursive, NULL)) != 0) { switch (err) { case EZFS_INVALIDNAME: return (set_error(lbh, BE_ERR_INVALIDNAME)); default: /* * The other errors that zfs_ioc_snapshot might return * shouldn't happen if we've set things up properly, so * we'll gloss over them and call it UNKNOWN as it will * require further triage. */ if (errno == ENOTSUP) return (set_error(lbh, BE_ERR_NOPOOL)); return (set_error(lbh, BE_ERR_UNKNOWN)); } } return (BE_ERR_SUCCESS); } /* * Create the boot environment specified by the name parameter */ int be_create(libbe_handle_t *lbh, const char *name) { int err; err = be_create_from_existing(lbh, name, be_active_path(lbh)); return (set_error(lbh, err)); } static int be_deep_clone_prop(int prop, void *cb) { int err; struct libbe_dccb *dccb; zprop_source_t src; char pval[BE_MAXPATHLEN]; char source[BE_MAXPATHLEN]; char *val; dccb = cb; /* Skip some properties we don't want to touch */ if (prop == ZFS_PROP_CANMOUNT) return (ZPROP_CONT); /* Don't copy readonly properties */ if (zfs_prop_readonly(prop)) return (ZPROP_CONT); if ((err = zfs_prop_get(dccb->zhp, prop, (char *)&pval, sizeof(pval), &src, (char *)&source, sizeof(source), false))) /* Just continue if we fail to read a property */ return (ZPROP_CONT); /* Only copy locally defined properties */ if (src != ZPROP_SRC_LOCAL) return (ZPROP_CONT); /* Augment mountpoint with altroot, if needed */ val = pval; if (prop == ZFS_PROP_MOUNTPOINT) val = be_mountpoint_augmented(dccb->lbh, val); nvlist_add_string(dccb->props, zfs_prop_to_name(prop), val); return (ZPROP_CONT); } static int be_deep_clone(zfs_handle_t *ds, void *data) { int err; char be_path[BE_MAXPATHLEN]; char snap_path[BE_MAXPATHLEN]; const char *dspath; char *dsname; zfs_handle_t *snap_hdl; nvlist_t *props; struct libbe_deep_clone *isdc, sdc; struct libbe_dccb dccb; isdc = (struct libbe_deep_clone *)data; dspath = zfs_get_name(ds); if ((dsname = strrchr(dspath, '/')) == NULL) return (BE_ERR_UNKNOWN); dsname++; if (isdc->bename == NULL) snprintf(be_path, sizeof(be_path), "%s/%s", isdc->be_root, dsname); else snprintf(be_path, sizeof(be_path), "%s/%s", isdc->be_root, isdc->bename); snprintf(snap_path, sizeof(snap_path), "%s@%s", dspath, isdc->snapname); if (zfs_dataset_exists(isdc->lbh->lzh, be_path, ZFS_TYPE_DATASET)) return (set_error(isdc->lbh, BE_ERR_EXISTS)); if ((snap_hdl = zfs_open(isdc->lbh->lzh, snap_path, ZFS_TYPE_SNAPSHOT)) == NULL) return (set_error(isdc->lbh, BE_ERR_ZFSOPEN)); nvlist_alloc(&props, NV_UNIQUE_NAME, KM_SLEEP); 
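/*
 * Clones are created with canmount=noauto so a new boot environment does
 * not mount itself alongside the active one; the source dataset's other
 * locally-set properties are copied over by be_deep_clone_prop() via the
 * zprop_iter() call below.
 */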
nvlist_add_string(props, "canmount", "noauto"); dccb.lbh = isdc->lbh; dccb.zhp = ds; dccb.props = props; if (zprop_iter(be_deep_clone_prop, &dccb, B_FALSE, B_FALSE, ZFS_TYPE_FILESYSTEM) == ZPROP_INVAL) return (-1); if ((err = zfs_clone(snap_hdl, be_path, props)) != 0) err = BE_ERR_ZFSCLONE; nvlist_free(props); zfs_close(snap_hdl); /* Failed to clone */ if (err != BE_ERR_SUCCESS) return (set_error(isdc->lbh, err)); sdc.lbh = isdc->lbh; sdc.bename = NULL; sdc.snapname = isdc->snapname; sdc.be_root = (char *)&be_path; err = zfs_iter_filesystems(ds, be_deep_clone, &sdc); return (err); } /* * Create the boot environment from pre-existing snapshot */ int be_create_from_existing_snap(libbe_handle_t *lbh, const char *name, const char *snap) { int err; char be_path[BE_MAXPATHLEN]; char snap_path[BE_MAXPATHLEN]; const char *bename; char *parentname, *snapname; zfs_handle_t *parent_hdl; struct libbe_deep_clone sdc; if ((err = be_validate_name(lbh, name)) != 0) return (set_error(lbh, err)); if ((err = be_root_concat(lbh, snap, snap_path)) != 0) return (set_error(lbh, err)); if ((err = be_validate_snap(lbh, snap_path)) != 0) return (set_error(lbh, err)); if ((err = be_root_concat(lbh, name, be_path)) != 0) return (set_error(lbh, err)); if ((bename = strrchr(name, '/')) == NULL) bename = name; else bename++; if ((parentname = strdup(snap_path)) == NULL) return (set_error(lbh, BE_ERR_UNKNOWN)); snapname = strchr(parentname, '@'); if (snapname == NULL) { free(parentname); return (set_error(lbh, BE_ERR_UNKNOWN)); } *snapname = '\0'; snapname++; sdc.lbh = lbh; sdc.bename = bename; sdc.snapname = snapname; sdc.be_root = lbh->root; parent_hdl = zfs_open(lbh->lzh, parentname, ZFS_TYPE_DATASET); err = be_deep_clone(parent_hdl, &sdc); free(parentname); return (set_error(lbh, err)); } /* * Create a boot environment from an existing boot environment */ int be_create_from_existing(libbe_handle_t *lbh, const char *name, const char *old) { int err; char buf[BE_MAXPATHLEN]; if ((err = be_snapshot(lbh, old, NULL, true, (char *)&buf)) != 0) return (set_error(lbh, err)); err = be_create_from_existing_snap(lbh, name, (char *)buf); return (set_error(lbh, err)); } /* * Verifies that a snapshot has a valid name, exists, and has a mountpoint of * '/'. Returns BE_ERR_SUCCESS (0), upon success, or the relevant BE_ERR_* upon * failure. Does not set the internal library error state. */ int be_validate_snap(libbe_handle_t *lbh, const char *snap_name) { if (strlen(snap_name) >= BE_MAXPATHLEN) return (BE_ERR_PATHLEN); if (!zfs_dataset_exists(lbh->lzh, snap_name, ZFS_TYPE_SNAPSHOT)) return (BE_ERR_NOENT); return (BE_ERR_SUCCESS); } /* * Idempotently appends the name argument to the root boot environment path * and copies the resulting string into the result buffer (which is assumed * to be at least BE_MAXPATHLEN characters long. Returns BE_ERR_SUCCESS upon * success, BE_ERR_PATHLEN if the resulting path is longer than BE_MAXPATHLEN, * or BE_ERR_INVALIDNAME if the name is a path that does not begin with * zfs_be_root. Does not set internal library error state. 
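 * For example, with a boot environment root of "zroot/ROOT", the name
 * "default" yields "zroot/ROOT/default" in result, while the full path
 * "zroot/ROOT/default" is copied through unchanged.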
*/ int be_root_concat(libbe_handle_t *lbh, const char *name, char *result) { size_t name_len, root_len; name_len = strlen(name); root_len = strlen(lbh->root); /* Act idempotently; return be name if it is already a full path */ if (strrchr(name, '/') != NULL) { if (strstr(name, lbh->root) != name) return (BE_ERR_INVALIDNAME); if (name_len >= BE_MAXPATHLEN) return (BE_ERR_PATHLEN); strlcpy(result, name, BE_MAXPATHLEN); return (BE_ERR_SUCCESS); } else if (name_len + root_len + 1 < BE_MAXPATHLEN) { snprintf(result, BE_MAXPATHLEN, "%s/%s", lbh->root, name); return (BE_ERR_SUCCESS); } return (BE_ERR_PATHLEN); } /* * Verifies the validity of a boot environment name (A-Za-z0-9-_.). Returns * BE_ERR_SUCCESS (0) if name is valid, otherwise returns BE_ERR_INVALIDNAME * or BE_ERR_PATHLEN. * Does not set internal library error state. */ int be_validate_name(libbe_handle_t *lbh, const char *name) { for (int i = 0; *name; i++) { char c = *(name++); if (isalnum(c) || (c == '-') || (c == '_') || (c == '.')) continue; return (BE_ERR_INVALIDNAME); } /* * Impose the additional restriction that the entire dataset name must * not exceed the maximum length of a dataset, i.e. MAXNAMELEN. */ if (strlen(lbh->root) + 1 + strlen(name) > MAXNAMELEN) return (BE_ERR_PATHLEN); return (BE_ERR_SUCCESS); } /* * usage */ int be_rename(libbe_handle_t *lbh, const char *old, const char *new) { char full_old[BE_MAXPATHLEN]; char full_new[BE_MAXPATHLEN]; zfs_handle_t *zfs_hdl; int err; /* * be_validate_name is documented not to set error state, so we should * do so here. */ if ((err = be_validate_name(lbh, new)) != 0) return (set_error(lbh, err)); if ((err = be_root_concat(lbh, old, full_old)) != 0) return (set_error(lbh, err)); if ((err = be_root_concat(lbh, new, full_new)) != 0) return (set_error(lbh, err)); if (!zfs_dataset_exists(lbh->lzh, full_old, ZFS_TYPE_DATASET)) return (set_error(lbh, BE_ERR_NOENT)); if (zfs_dataset_exists(lbh->lzh, full_new, ZFS_TYPE_DATASET)) return (set_error(lbh, BE_ERR_EXISTS)); if ((zfs_hdl = zfs_open(lbh->lzh, full_old, ZFS_TYPE_FILESYSTEM)) == NULL) return (set_error(lbh, BE_ERR_ZFSOPEN)); /* recurse, nounmount, forceunmount */ struct renameflags flags = { .nounmount = 1, }; err = zfs_rename(zfs_hdl, NULL, full_new, flags); zfs_close(zfs_hdl); if (err != 0) return (set_error(lbh, BE_ERR_UNKNOWN)); return (0); } int be_export(libbe_handle_t *lbh, const char *bootenv, int fd) { char snap_name[BE_MAXPATHLEN]; char buf[BE_MAXPATHLEN]; zfs_handle_t *zfs; int err; if ((err = be_snapshot(lbh, bootenv, NULL, true, snap_name)) != 0) /* Use the error set by be_snapshot */ return (err); be_root_concat(lbh, snap_name, buf); if ((zfs = zfs_open(lbh->lzh, buf, ZFS_TYPE_DATASET)) == NULL) return (set_error(lbh, BE_ERR_ZFSOPEN)); err = zfs_send_one(zfs, NULL, fd, 0); zfs_close(zfs); return (err); } int be_import(libbe_handle_t *lbh, const char *bootenv, int fd) { char buf[BE_MAXPATHLEN]; nvlist_t *props; zfs_handle_t *zfs; recvflags_t flags = { .nomount = 1 }; int err; be_root_concat(lbh, bootenv, buf); if ((err = zfs_receive(lbh->lzh, buf, NULL, &flags, fd, NULL)) != 0) { switch (err) { case EINVAL: return (set_error(lbh, BE_ERR_NOORIGIN)); case ENOENT: return (set_error(lbh, BE_ERR_NOENT)); case EIO: return (set_error(lbh, BE_ERR_IO)); default: return (set_error(lbh, BE_ERR_UNKNOWN)); } } if ((zfs = zfs_open(lbh->lzh, buf, ZFS_TYPE_FILESYSTEM)) == NULL) return (set_error(lbh, BE_ERR_ZFSOPEN)); nvlist_alloc(&props, NV_UNIQUE_NAME, KM_SLEEP); nvlist_add_string(props, "canmount", "noauto"); 
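/*
 * The received dataset is normalized into a bootable boot environment:
 * canmount=noauto, as with freshly created BEs, and a mountpoint of "/",
 * which the library's checks (e.g. be_exists()) expect.
 */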
nvlist_add_string(props, "mountpoint", "/"); err = zfs_prop_set_list(zfs, props); nvlist_free(props); zfs_close(zfs); if (err != 0) return (set_error(lbh, BE_ERR_UNKNOWN)); return (0); } #if SOON static int be_create_child_noent(libbe_handle_t *lbh, const char *active, const char *child_path) { nvlist_t *props; zfs_handle_t *zfs; int err; nvlist_alloc(&props, NV_UNIQUE_NAME, KM_SLEEP); nvlist_add_string(props, "canmount", "noauto"); nvlist_add_string(props, "mountpoint", child_path); /* Create */ if ((err = zfs_create(lbh->lzh, active, ZFS_TYPE_DATASET, props)) != 0) { switch (err) { case EZFS_EXISTS: return (set_error(lbh, BE_ERR_EXISTS)); case EZFS_NOENT: return (set_error(lbh, BE_ERR_NOENT)); case EZFS_BADTYPE: case EZFS_BADVERSION: return (set_error(lbh, BE_ERR_NOPOOL)); case EZFS_BADPROP: default: /* We set something up wrong, probably... */ return (set_error(lbh, BE_ERR_UNKNOWN)); } } nvlist_free(props); if ((zfs = zfs_open(lbh->lzh, active, ZFS_TYPE_DATASET)) == NULL) return (set_error(lbh, BE_ERR_ZFSOPEN)); /* Set props */ if ((err = zfs_prop_set(zfs, "canmount", "noauto")) != 0) { zfs_close(zfs); /* * Similar to other cases, this shouldn't fail unless we've * done something wrong. This is a new dataset that shouldn't * have been mounted anywhere between creation and now. */ if (err == EZFS_NOMEM) return (set_error(lbh, BE_ERR_NOMEM)); return (set_error(lbh, BE_ERR_UNKNOWN)); } zfs_close(zfs); return (BE_ERR_SUCCESS); } static int be_create_child_cloned(libbe_handle_t *lbh, const char *active) { char buf[BE_MAXPATHLEN], tmp[BE_MAXPATHLEN];; zfs_handle_t *zfs; int err; /* XXX TODO ? */ /* * Establish if the existing path is a zfs dataset or just * the subdirectory of one */ strlcpy(tmp, "tmp/be_snap.XXXXX", sizeof(tmp)); if (mktemp(tmp) == NULL) return (set_error(lbh, BE_ERR_UNKNOWN)); be_root_concat(lbh, tmp, buf); printf("Here %s?\n", buf); if ((err = zfs_snapshot(lbh->lzh, buf, false, NULL)) != 0) { switch (err) { case EZFS_INVALIDNAME: return (set_error(lbh, BE_ERR_INVALIDNAME)); default: /* * The other errors that zfs_ioc_snapshot might return * shouldn't happen if we've set things up properly, so * we'll gloss over them and call it UNKNOWN as it will * require further triage. 
*/ if (errno == ENOTSUP) return (set_error(lbh, BE_ERR_NOPOOL)); return (set_error(lbh, BE_ERR_UNKNOWN)); } } /* Clone */ if ((zfs = zfs_open(lbh->lzh, buf, ZFS_TYPE_SNAPSHOT)) == NULL) return (BE_ERR_ZFSOPEN); if ((err = zfs_clone(zfs, active, NULL)) != 0) /* XXX TODO correct error */ return (set_error(lbh, BE_ERR_UNKNOWN)); /* set props */ zfs_close(zfs); return (BE_ERR_SUCCESS); } int be_add_child(libbe_handle_t *lbh, const char *child_path, bool cp_if_exists) { struct stat sb; char active[BE_MAXPATHLEN], buf[BE_MAXPATHLEN]; nvlist_t *props; const char *s; /* Require absolute paths */ if (*child_path != '/') return (set_error(lbh, BE_ERR_BADPATH)); strlcpy(active, be_active_path(lbh), BE_MAXPATHLEN); strcpy(buf, active); /* Create non-mountable parent dataset(s) */ s = child_path; for (char *p; (p = strchr(s+1, '/')) != NULL; s = p) { size_t len = p - s; strncat(buf, s, len); nvlist_alloc(&props, NV_UNIQUE_NAME, KM_SLEEP); nvlist_add_string(props, "canmount", "off"); nvlist_add_string(props, "mountpoint", "none"); zfs_create(lbh->lzh, buf, ZFS_TYPE_DATASET, props); nvlist_free(props); } /* Path does not exist as a descendent of / yet */ if (strlcat(active, child_path, BE_MAXPATHLEN) >= BE_MAXPATHLEN) return (set_error(lbh, BE_ERR_PATHLEN)); if (stat(child_path, &sb) != 0) { /* Verify that error is ENOENT */ if (errno != ENOENT) return (set_error(lbh, BE_ERR_UNKNOWN)); return (be_create_child_noent(lbh, active, child_path)); } else if (cp_if_exists) /* Path is already a descendent of / and should be copied */ return (be_create_child_cloned(lbh, active)); return (set_error(lbh, BE_ERR_EXISTS)); } #endif /* SOON */ static int be_set_nextboot(libbe_handle_t *lbh, nvlist_t *config, uint64_t pool_guid, const char *zfsdev) { nvlist_t **child; uint64_t vdev_guid; int c, children; if (nvlist_lookup_nvlist_array(config, ZPOOL_CONFIG_CHILDREN, &child, &children) == 0) { for (c = 0; c < children; ++c) if (be_set_nextboot(lbh, child[c], pool_guid, zfsdev) != 0) return (1); return (0); } if (nvlist_lookup_uint64(config, ZPOOL_CONFIG_GUID, &vdev_guid) != 0) { return (1); } if (zpool_nextboot(lbh->lzh, pool_guid, vdev_guid, zfsdev) != 0) { perror("ZFS_IOC_NEXTBOOT failed"); return (1); } return (0); } /* * Deactivate old BE dataset; currently just sets canmount=noauto */ static int be_deactivate(libbe_handle_t *lbh, const char *ds) { zfs_handle_t *zfs; if ((zfs = zfs_open(lbh->lzh, ds, ZFS_TYPE_DATASET)) == NULL) return (1); if (zfs_prop_set(zfs, "canmount", "noauto") != 0) return (1); zfs_close(zfs); return (0); } int be_activate(libbe_handle_t *lbh, const char *bootenv, bool temporary) { char be_path[BE_MAXPATHLEN]; char buf[BE_MAXPATHLEN]; nvlist_t *config, *dsprops, *vdevs; char *origin; uint64_t pool_guid; zfs_handle_t *zhp; int err; be_root_concat(lbh, bootenv, be_path); /* Note: be_exists fails if mountpoint is not / */ if ((err = be_exists(lbh, be_path)) != 0) return (set_error(lbh, err)); if (temporary) { config = zpool_get_config(lbh->active_phandle, NULL); if (config == NULL) /* config should be fetchable... 
*/ return (set_error(lbh, BE_ERR_UNKNOWN)); if (nvlist_lookup_uint64(config, ZPOOL_CONFIG_POOL_GUID, &pool_guid) != 0) /* Similarly, it shouldn't be possible */ return (set_error(lbh, BE_ERR_UNKNOWN)); /* Expected format according to zfsbootcfg(8) man */ snprintf(buf, sizeof(buf), "zfs:%s:", be_path); /* We have no config tree */ if (nvlist_lookup_nvlist(config, ZPOOL_CONFIG_VDEV_TREE, &vdevs) != 0) return (set_error(lbh, BE_ERR_NOPOOL)); return (be_set_nextboot(lbh, vdevs, pool_guid, buf)); } else { if (be_deactivate(lbh, lbh->bootfs) != 0) return (-1); /* Obtain bootenv zpool */ err = zpool_set_prop(lbh->active_phandle, "bootfs", be_path); if (err) return (-1); zhp = zfs_open(lbh->lzh, be_path, ZFS_TYPE_FILESYSTEM); if (zhp == NULL) return (-1); if (be_prop_list_alloc(&dsprops) != 0) return (-1); if (be_get_dataset_props(lbh, be_path, dsprops) != 0) { nvlist_free(dsprops); return (-1); } if (nvlist_lookup_string(dsprops, "origin", &origin) == 0) err = zfs_promote(zhp); nvlist_free(dsprops); zfs_close(zhp); if (err) return (-1); } return (BE_ERR_SUCCESS); } Index: projects/import-googletest-1.8.1/lib/libbe/be.h =================================================================== --- projects/import-googletest-1.8.1/lib/libbe/be.h (revision 344270) +++ projects/import-googletest-1.8.1/lib/libbe/be.h (revision 344271) @@ -1,132 +1,133 @@ /*- * SPDX-License-Identifier: BSD-2-Clause-FreeBSD * * Copyright (c) 2017 Kyle J. Kneitinger * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. 
* * $FreeBSD$ */ #ifndef _LIBBE_H #define _LIBBE_H #include #include #define BE_MAXPATHLEN 512 typedef struct libbe_handle libbe_handle_t; typedef enum be_error { BE_ERR_SUCCESS = 0, /* No error */ BE_ERR_INVALIDNAME, /* invalid boot env name */ BE_ERR_EXISTS, /* boot env name already taken */ BE_ERR_NOENT, /* boot env doesn't exist */ BE_ERR_PERMS, /* insufficient permissions */ BE_ERR_DESTROYACT, /* cannot destroy active boot env */ BE_ERR_DESTROYMNT, /* destroying a mounted be requires force */ BE_ERR_BADPATH, /* path not suitable for operation */ BE_ERR_PATHBUSY, /* requested path is busy */ BE_ERR_PATHLEN, /* provided name exceeds maximum length limit */ BE_ERR_BADMOUNT, /* mountpoint is not '/' */ BE_ERR_NOORIGIN, /* could not open snapshot's origin */ BE_ERR_MOUNTED, /* boot environment is already mounted */ BE_ERR_NOMOUNT, /* boot environment is not mounted */ BE_ERR_ZFSOPEN, /* calling zfs_open() failed */ BE_ERR_ZFSCLONE, /* error when calling zfs_clone to create be */ BE_ERR_IO, /* error when doing some I/O operation */ BE_ERR_NOPOOL, /* operation not supported on this pool */ BE_ERR_NOMEM, /* insufficient memory */ BE_ERR_UNKNOWN, /* unknown error */ + BE_ERR_INVORIGIN, /* invalid origin */ } be_error_t; /* Library handling functions: be.c */ libbe_handle_t *libbe_init(const char *root); void libbe_close(libbe_handle_t *); /* Bootenv information functions: be_info.c */ const char *be_active_name(libbe_handle_t *); const char *be_active_path(libbe_handle_t *); const char *be_nextboot_name(libbe_handle_t *); const char *be_nextboot_path(libbe_handle_t *); const char *be_root_path(libbe_handle_t *); int be_get_bootenv_props(libbe_handle_t *, nvlist_t *); int be_get_dataset_props(libbe_handle_t *, const char *, nvlist_t *); int be_get_dataset_snapshots(libbe_handle_t *, const char *, nvlist_t *); int be_prop_list_alloc(nvlist_t **be_list); void be_prop_list_free(nvlist_t *be_list); int be_activate(libbe_handle_t *, const char *, bool); /* Bootenv creation functions */ int be_create(libbe_handle_t *, const char *); int be_create_from_existing(libbe_handle_t *, const char *, const char *); int be_create_from_existing_snap(libbe_handle_t *, const char *, const char *); int be_snapshot(libbe_handle_t *, const char *, const char *, bool, char *); /* Bootenv manipulation functions */ int be_rename(libbe_handle_t *, const char *, const char *); /* Bootenv removal functions */ typedef enum { BE_DESTROY_FORCE = 1 << 0, BE_DESTROY_ORIGIN = 1 << 1, } be_destroy_opt_t; int be_destroy(libbe_handle_t *, const char *, int); /* Bootenv mounting functions: be_access.c */ typedef enum { BE_MNT_FORCE = 1 << 0, BE_MNT_DEEP = 1 << 1, } be_mount_opt_t; int be_mount(libbe_handle_t *, char *, char *, int, char *); int be_unmount(libbe_handle_t *, char *, int); int be_mounted_at(libbe_handle_t *, const char *path, nvlist_t *); /* Error related functions: be_error.c */ int libbe_errno(libbe_handle_t *); const char *libbe_error_description(libbe_handle_t *); void libbe_print_on_error(libbe_handle_t *, bool); /* Utility Functions */ int be_root_concat(libbe_handle_t *, const char *, char *); int be_validate_name(libbe_handle_t * __unused, const char *); int be_validate_snap(libbe_handle_t *, const char *); int be_exists(libbe_handle_t *, char *); int be_export(libbe_handle_t *, const char *, int fd); int be_import(libbe_handle_t *, const char *, int fd); #if SOON int be_add_child(libbe_handle_t *, const char *, bool); #endif void be_nicenum(uint64_t num, char *buf, size_t buflen); #endif /* _LIBBE_H */ Index: 
projects/import-googletest-1.8.1/lib/libbe/be_error.c =================================================================== --- projects/import-googletest-1.8.1/lib/libbe/be_error.c (revision 344270) +++ projects/import-googletest-1.8.1/lib/libbe/be_error.c (revision 344271) @@ -1,133 +1,136 @@ /*- * SPDX-License-Identifier: BSD-2-Clause-FreeBSD * * Copyright (c) 2017 Kyle J. Kneitinger * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ #include <sys/cdefs.h> __FBSDID("$FreeBSD$"); #include "be.h" #include "be_impl.h" /* * Usage */ int libbe_errno(libbe_handle_t *lbh) { return (lbh->error); } const char * libbe_error_description(libbe_handle_t *lbh) { switch (lbh->error) { case BE_ERR_INVALIDNAME: return ("invalid boot environment name"); case BE_ERR_EXISTS: return ("boot environment name already taken"); case BE_ERR_NOENT: return ("specified boot environment does not exist"); case BE_ERR_PERMS: return ("insufficient permissions"); case BE_ERR_DESTROYACT: return ("cannot destroy active boot environment"); case BE_ERR_DESTROYMNT: return ("cannot destroy mounted boot env unless forced"); case BE_ERR_BADPATH: return ("path not suitable for operation"); case BE_ERR_PATHBUSY: return ("specified path is busy"); case BE_ERR_PATHLEN: return ("provided path name exceeds maximum length limit"); case BE_ERR_BADMOUNT: return ("mountpoint is not \"/\""); case BE_ERR_NOORIGIN: return ("could not open snapshot's origin"); case BE_ERR_MOUNTED: return ("boot environment is already mounted"); case BE_ERR_NOMOUNT: return ("boot environment is not mounted"); case BE_ERR_ZFSOPEN: return ("calling zfs_open() failed"); case BE_ERR_ZFSCLONE: return ("error when calling zfs_clone() to create boot env"); case BE_ERR_IO: return ("input/output error"); case BE_ERR_NOPOOL: return ("operation not supported on this pool"); case BE_ERR_NOMEM: return ("insufficient memory"); case BE_ERR_UNKNOWN: return ("unknown error"); + case BE_ERR_INVORIGIN: + return ("invalid origin"); + default: assert(lbh->error == BE_ERR_SUCCESS); return ("no error"); } } void libbe_print_on_error(libbe_handle_t *lbh, bool val) { lbh->print_on_err = val; libzfs_print_on_error(lbh->lzh, val); } int set_error(libbe_handle_t *lbh, be_error_t err) { lbh->error = err; if (lbh->print_on_err && (err != BE_ERR_SUCCESS)) fprintf(stderr, "%s\n",
libbe_error_description(lbh)); return (err); } Index: projects/import-googletest-1.8.1/lib/libbe/libbe.3 =================================================================== --- projects/import-googletest-1.8.1/lib/libbe/libbe.3 (revision 344270) +++ projects/import-googletest-1.8.1/lib/libbe/libbe.3 (revision 344271) @@ -1,502 +1,504 @@ .\" .\" SPDX-License-Identifier: BSD-2-Clause-FreeBSD .\" .\" Copyright (c) 2017 Kyle Kneitinger .\" All rights reserved. .\" Copyright (c) 2018 Kyle Evans .\" .\" Redistribution and use in source and binary forms, with or without .\" modification, are permitted provided that the following conditions .\" are met: .\" 1. Redistributions of source code must retain the above copyright .\" notice, this list of conditions and the following disclaimer. .\" 2. Redistributions in binary form must reproduce the above copyright .\" notice, this list of conditions and the following disclaimer in the .\" documentation and/or other materials provided with the distribution. .\" .\" THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND .\" ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE .\" IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE .\" ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE .\" FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL .\" DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS .\" OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) .\" HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT .\" LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY .\" OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF .\" SUCH DAMAGE. 
.\" .\" $FreeBSD$ .\" -.Dd February 11, 2019 +.Dd February 12, 2019 .Dt LIBBE 3 .Os .Sh NAME .Nm libbe .Nd library for creating, destroying and modifying ZFS boot environments .Sh LIBRARY .Lb libbe .Sh SYNOPSIS .In be.h .Ft "libbe_handle_t *hdl" Ns .Fn libbe_init "const char *be_root" .Pp .Ft void .Fn libbe_close "libbe_handle_t *hdl" .Pp .Ft const char * Ns .Fn be_active_name "libbe_handle_t *hdl" .Pp .Ft const char * Ns .Fn be_active_path "libbe_handle_t *hdl" .Pp .Ft const char * Ns .Fn be_nextboot_name "libbe_handle_t *hdl" .Pp .Ft const char * Ns .Fn be_nextboot_path "libbe_handle_t *hdl" .Pp .Ft const char * Ns .Fn be_root_path "libbe_handle_t *hdl" .Pp .Ft int .Fn be_create "libbe_handle_t *hdl" "const char *be_name" .Pp .Ft int .Fn be_create_from_existing "libbe_handle_t *hdl" "const char *be_name" "const char *be_origin" .Pp .Ft int .Fn be_create_from_existing_snap "libbe_handle_t *hdl" "const char *be_name" "const char *snap" .Pp .Ft int .Fn be_rename "libbe_handle_t *hdl" "const char *be_old" "const char *be_new" .Pp .Ft int .Fn be_activate "libbe_handle_t *hdl" "const char *be_name" "bool temporary" .Ft int .Fn be_destroy "libbe_handle_t *hdl" "const char *be_name" "int options" .Pp .Ft void .Fn be_nicenum "uint64_t num" "char *buf" "size_t bufsz" .Pp .\" TODO: Write up of mount options .\" typedef enum { .\" BE_MNT_FORCE = 1 << 0, .\" BE_MNT_DEEP = 1 << 1, .\" } be_mount_opt_t .Ft int .Fn be_mount "libbe_handle_t *hdl" "char *be_name" "char *mntpoint" "int flags" "char *result" .Pp .Ft int .Fn be_mounted_at "libbe_handle_t *hdl" "const char *path" "nvlist_t *details" .Pp .Ft int .Fn be_unmount "libbe_handle_t *hdl" "char *be_name" "int flags" .Pp .Ft int .Fn libbe_errno "libbe_handle_t *hdl" .Pp .Ft const char * Ns .Fn libbe_error_description "libbe_handle_t *hdl" .Pp .Ft void .Fn libbe_print_on_error "libbe_handle_t *hdl" "bool doprint" .Pp .Ft int .Fn be_root_concat "libbe_handle_t *hdl" "const char *be_name" "char *result" .Pp .Ft int .Fn be_validate_name "libbe_handle_t *hdl" "const char *be_name" .Pp .Ft int .Fn be_validate_snap "libbe_handle_t *hdl" "const char *snap" .Pp .Ft int .Fn be_exists "libbe_handle_t *hdl" "char *be_name" .Pp .Ft int .Fn be_export "libbe_handle_t *hdl" "const char *be_name" "int fd" .Pp .Ft int .Fn be_import "libbe_handle_t *hdl" "const char *be_name" "int fd" .Pp .Ft int .Fn be_prop_list_alloc "nvlist_t **prop_list" .Pp .Ft int .Fn be_get_bootenv_props "libbe_handle_t *hdl" "nvlist_t *be_list" .Pp .Ft int .Fn be_get_dataset_props "libbe_handle_t *hdl" "const char *ds_name" "nvlist_t *props" .Pp .Ft int .Fn be_get_dataset_snapshots "libbe_handle_t *hdl" "const char *ds_name" "nvlist_t *snap_list" .Pp .Ft void .Fn be_prop_list_free "nvlist_t *prop_list" .Sh DESCRIPTION .Nm interfaces with libzfs to provide a set of functions for various operations regarding ZFS boot environments including "deep" boot environments in which a boot environments has child datasets. .Pp A context structure is passed to each function, allowing for a small amount of state to be retained, such as errors from previous operations. .Nm may be configured to print the corresponding error message to .Dv stderr when an error is encountered with .Fn libbe_print_on_error . .Pp All functions returning an .Vt int return 0 on success, or a .Nm errno otherwise as described in .Sx DIAGNOSTICS . .Pp The .Fn libbe_init function takes an optional BE root and initializes .Nm , returning a .Vt "libbe_handle_t *" on success, or .Dv NULL on error. 
If a BE root is supplied, .Nm will only operate out of that pool and BE root. An error may occur if: .Bl -column .It /boot and / are not on the same filesystem and device, .It libzfs fails to initialize, .It The system has not been properly booted with a ZFS boot environment, .It Nm fails to open the zpool the active boot environment resides on, or .It Nm fails to locate the boot environment that is currently mounted. .El .Pp The .Fn libbe_close function frees all resources previously acquired in .Fn libbe_init , invalidating the handle in the process. .Pp The .Fn be_active_name function returns the name of the currently booted boot environment. This boot environment may not belong to the same BE root as the root libbe is operating on! .Pp The .Fn be_active_path function returns the full path of the currently booted boot environment. This boot environment may not belong to the same BE root as the root libbe is operating on! .Pp The .Fn be_nextboot_name function returns the name of the boot environment that will be active on reboot. .Pp The .Fn be_nextboot_path function returns the full path of the boot environment that will be active on reboot. .Pp The .Fn be_root_path function returns the boot environment root path. .Pp The .Fn be_create function creates a boot environment with the given name. It will be created from a snapshot of the currently booted boot environment. .Pp The .Fn be_create_from_existing function creates a boot environment with the given name from the name of an existing boot environment. A snapshot will be made of the base boot environment, and the new boot environment will be created from that. .Pp The .Fn be_create_from_existing_snap function creates a boot environment with the given name from an existing snapshot. .Pp The .Fn be_rename function renames a boot environment without unmounting it, as if the .Fl u argument were passed to .Nm zfs .Cm rename . .Pp The .Fn be_activate function makes a boot environment active on the next boot. If the .Fa temporary flag is set, then it will be active for the next boot only, as done by .Xr zfsbootcfg 8 . Next boot functionality is currently only available when booting in x86 BIOS mode. .Pp The .Fn be_destroy function will recursively destroy the given boot environment. It will not destroy a mounted boot environment unless the .Dv BE_DESTROY_FORCE option is set in .Fa options . If the .Dv BE_DESTROY_ORIGIN option is set in .Fa options , the .Fn be_destroy function will destroy the origin snapshot of this boot environment as well. .Pp The .Fn be_nicenum function will format .Fa num in a traditional ZFS humanized format, similar to .Xr humanize_number 3 . This function effectively proxies .Fn zfs_nicenum from libzfs. .Pp The .Fn be_mount function will mount the given boot environment. If .Fa mountpoint is .Dv NULL , a mount point will be generated in .Pa /tmp using .Xr mkdtemp 3 . If .Fa result is not .Dv NULL , it should be large enough to accommodate .Dv BE_MAXPATHLEN bytes, including the null terminator. The final mount point will be copied into it. Setting the .Dv BE_MNT_FORCE flag will pass .Dv MNT_FORCE to the underlying .Xr mount 2 call. .Pp The .Fn be_mounted_at function will check if there is a boot environment mounted at the given .Fa path . If .Fa details is not .Dv NULL , it will be populated with a list of the mounted dataset's properties. This list of properties matches the properties collected by .Fn be_get_bootenv_props . .Pp The .Fn be_unmount function will unmount the given boot environment.
Setting the .Dv BE_MNT_FORCE flag will pass .Dv MNT_FORCE to the underlying .Xr mount 2 call. .Pp The .Fn libbe_errno function returns the .Nm errno. .Pp The .Fn libbe_error_description function returns a string description of the currently set .Nm errno. .Pp The .Fn libbe_print_on_error function will change whether or not .Nm prints the description of any encountered error to .Dv stderr , based on .Fa doprint . .Pp The .Fn be_root_concat function will concatenate the boot environment root and the given boot environment name into .Fa result . .Pp The .Fn be_validate_name function will validate the given boot environment name for both length restrictions and valid character restrictions. This function does not set the internal library error state. .Pp The .Fn be_validate_snap function will validate the given snapshot name. The snapshot must have a valid name, exist, and have a mountpoint of .Pa / . This function does not set the internal library error state. .Pp The .Fn be_exists function will check whether the given boot environment exists and has a mountpoint of .Pa / . This function does not set the internal library error state, but will return the appropriate error. .Pp The .Fn be_export function will export the given boot environment to the file specified by .Fa fd . A snapshot will be created of the boot environment prior to export. .Pp The .Fn be_import function will import the boot environment in the file specified by .Fa fd , and give it the name .Fa be_name . .Pp The .Fn be_prop_list_alloc function allocates a property list suitable for passing to .Fn be_get_bootenv_props , .Fn be_get_dataset_props , or .Fn be_get_dataset_snapshots . It should be freed later by .Fn be_prop_list_free . .Pp The .Fn be_get_bootenv_props function will populate .Fa be_list with .Vt nvpair_t of boot environment names paired with an .Vt nvlist_t of their properties. The following properties are currently collected as appropriate: .Bl -column "Returned name" .It Sy Returned name Ta Sy Description .It dataset Ta - .It name Ta Boot environment name .It mounted Ta Current mount point .It mountpoint Ta Do mountpoint Dc property .It origin Ta Do origin Dc property .It creation Ta Do creation Dc property .It active Ta Currently booted environment .It used Ta Literal Do used Dc property .It usedds Ta Literal Do usedds Dc property .It usedsnap Ta Literal Do usedsnap Dc property .It referenced Ta Literal Do referenced Dc property .It nextboot Ta Active on next boot .El .Pp Only the .Dq dataset , .Dq name , .Dq active , and .Dq nextboot returned values will always be present. All other properties may be omitted if not available. .Pp The .Fn be_get_dataset_props function will get properties of the specified dataset. .Fa props is populated directly with a list of the properties as returned by .Fn be_get_bootenv_props . .Pp The .Fn be_get_dataset_snapshots function will retrieve all snapshots of the given dataset. .Fa snap_list will be populated with a list of .Vt nvpair_t exactly as specified by .Fn be_get_bootenv_props . .Pp The .Fn be_prop_list_free function will free the property list.
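For illustration, a minimal, hedged sketch of driving the interfaces above from C; it assumes a system booted from a ZFS boot environment, linking with -lbe and -lnvpair, and the error handling is deliberately thin:

    #include <err.h>
    #include <stdio.h>
    #include <string.h>
    #include <be.h>

    int
    main(void)
    {
            libbe_handle_t *lbh;
            nvlist_t *props;
            nvpair_t *cur;

            /* NULL asks libbe to discover the BE root itself. */
            if ((lbh = libbe_init(NULL)) == NULL)
                    errx(1, "libbe_init");
            libbe_print_on_error(lbh, true);
            if (be_prop_list_alloc(&props) != 0 ||
                be_get_bootenv_props(lbh, props) != 0)
                    errx(1, "cannot collect BE properties");
            /* Each nvpair is a BE name paired with its property nvlist. */
            for (cur = nvlist_next_nvpair(props, NULL); cur != NULL;
                cur = nvlist_next_nvpair(props, cur))
                    printf("%s%s\n", nvpair_name(cur),
                        strcmp(nvpair_name(cur), be_active_name(lbh)) == 0 ?
                        " (active)" : "");
            be_prop_list_free(props);
            libbe_close(lbh);
            return (0);
    }

A real consumer would descend into each boot environment's nvlist and read the "dataset", "active", and "nextboot" pairs described above rather than printing only the names.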
.Sh DIAGNOSTICS Upon error, one of the following values will be returned: .Bl -dash -offset indent -compact .It BE_ERR_SUCCESS .It BE_ERR_INVALIDNAME .It BE_ERR_EXISTS .It BE_ERR_NOENT .It BE_ERR_PERMS .It BE_ERR_DESTROYACT .It BE_ERR_DESTROYMNT .It BE_ERR_BADPATH .It BE_ERR_PATHBUSY .It BE_ERR_PATHLEN .It BE_ERR_BADMOUNT .It BE_ERR_NOORIGIN .It BE_ERR_MOUNTED .It BE_ERR_NOMOUNT .It BE_ERR_ZFSOPEN .It BE_ERR_ZFSCLONE .It BE_ERR_IO .It BE_ERR_NOPOOL .It BE_ERR_NOMEM .It BE_ERR_UNKNOWN +.It +BE_ERR_INVORIGIN .El .Sh SEE ALSO .Xr bectl 8 .Sh HISTORY .Nm and its corresponding command, .Xr bectl 8 , were written as a 2017 Google Summer of Code project with Allan Jude serving as a mentor. Later work was done by .An Kyle Evans Aq Mt kevans@FreeBSD.org . Index: projects/import-googletest-1.8.1/lib/libc/Makefile =================================================================== --- projects/import-googletest-1.8.1/lib/libc/Makefile (revision 344270) +++ projects/import-googletest-1.8.1/lib/libc/Makefile (revision 344271) @@ -1,215 +1,216 @@ # @(#)Makefile 8.2 (Berkeley) 2/3/94 # $FreeBSD$ PACKAGE= clibs SHLIBDIR?= /lib .include <src.opts.mk> # BIND_NOW in libc results in segfault at startup (PR 233333) MK_BIND_NOW= no # Force building of libc_pic.a MK_TOOLCHAIN= yes LIBC_SRCTOP?= ${.CURDIR} # Pick the current architecture directory for libc. In general, this is # named MACHINE_CPUARCH, but some ABIs are different enough to require # their own libc, so allow a directory named MACHINE_ARCH to override this. .if exists(${LIBC_SRCTOP}/${MACHINE_ARCH}) LIBC_ARCH=${MACHINE_ARCH} .else LIBC_ARCH=${MACHINE_CPUARCH} .endif # All library objects contain FreeBSD revision strings by default; they may be # excluded as a space-saving measure. To produce a library that does # not contain these strings, add -DSTRIP_FBSDID (see <sys/cdefs.h>) to CFLAGS # below. Note: there are no IDs for syscall stubs whose sources are generated. # To include legacy CSRG SCCS ID strings, remove -DNO__SCCSID from CFLAGS. # To include RCS ID strings from other BSD projects, remove -DNO__RCSID from CFLAGS. CFLAGS+=-DNO__SCCSID -DNO__RCSID LIB=c SHLIB_MAJOR= 7 .if ${MK_SSP} != "no" SHLIB_LDSCRIPT=libc.ldscript .else SHLIB_LDSCRIPT=libc_nossp.ldscript .endif SHLIB_LDSCRIPT_LINKS=libxnet.so WARNS?= 2 CFLAGS+=-I${LIBC_SRCTOP}/include -I${SRCTOP}/include CFLAGS+=-I${LIBC_SRCTOP}/${LIBC_ARCH} .if ${MK_NLS} != "no" CFLAGS+=-DNLS .endif CLEANFILES+=tags INSTALL_PIC_ARCHIVE= BUILD_NOSSP_PIC_ARCHIVE= PRECIOUSLIB= .ifndef NO_THREAD_STACK_UNWIND CANCELPOINTS_CFLAGS=-fexceptions CFLAGS+=${CANCELPOINTS_CFLAGS} .endif # # Link with static libcompiler_rt.a.
# LDFLAGS+= -nodefaultlibs LIBADD+= compiler_rt .if ${MK_SSP} != "no" LIBADD+= ssp_nonshared .endif # Extras that live in either libc.a or libc_nonshared.a LIBC_NONSHARED_SRCS= # Define (empty) variables so that make doesn't give substitution # errors if the included makefiles don't change these: MDSRCS= MISRCS= MDASM= MIASM= NOASM= .include "${LIBC_SRCTOP}/${LIBC_ARCH}/Makefile.inc" .include "${LIBC_SRCTOP}/db/Makefile.inc" .include "${LIBC_SRCTOP}/compat-43/Makefile.inc" .include "${LIBC_SRCTOP}/gdtoa/Makefile.inc" .include "${LIBC_SRCTOP}/gen/Makefile.inc" .include "${LIBC_SRCTOP}/gmon/Makefile.inc" .if ${MK_ICONV} != "no" .include "${LIBC_SRCTOP}/iconv/Makefile.inc" .endif .include "${LIBC_SRCTOP}/inet/Makefile.inc" .include "${LIBC_SRCTOP}/isc/Makefile.inc" .include "${LIBC_SRCTOP}/locale/Makefile.inc" .include "${LIBC_SRCTOP}/md/Makefile.inc" .include "${LIBC_SRCTOP}/nameser/Makefile.inc" .include "${LIBC_SRCTOP}/net/Makefile.inc" .include "${LIBC_SRCTOP}/nls/Makefile.inc" .include "${LIBC_SRCTOP}/posix1e/Makefile.inc" .if ${LIBC_ARCH} != "aarch64" && \ ${LIBC_ARCH} != "amd64" && \ ${LIBC_ARCH} != "powerpc64" && \ ${LIBC_ARCH} != "riscv" && \ ${LIBC_ARCH} != "sparc64" && \ ${MACHINE_ARCH:Mmipsn32*} == "" && \ ${MACHINE_ARCH:Mmips64*} == "" .include "${LIBC_SRCTOP}/quad/Makefile.inc" .endif .include "${LIBC_SRCTOP}/regex/Makefile.inc" .include "${LIBC_SRCTOP}/resolv/Makefile.inc" .include "${LIBC_SRCTOP}/stdio/Makefile.inc" .include "${LIBC_SRCTOP}/stdlib/Makefile.inc" .include "${LIBC_SRCTOP}/stdlib/jemalloc/Makefile.inc" .include "${LIBC_SRCTOP}/stdtime/Makefile.inc" .include "${LIBC_SRCTOP}/string/Makefile.inc" .include "${LIBC_SRCTOP}/sys/Makefile.inc" .include "${LIBC_SRCTOP}/secure/Makefile.inc" .include "${LIBC_SRCTOP}/rpc/Makefile.inc" .include "${LIBC_SRCTOP}/uuid/Makefile.inc" .include "${LIBC_SRCTOP}/xdr/Makefile.inc" .if (${LIBC_ARCH} == "arm" && \ (${MACHINE_ARCH:Marmv[67]*} == "" || (defined(CPUTYPE) && ${CPUTYPE:M*soft*}))) || \ (${LIBC_ARCH} == "mips" && ${MACHINE_ARCH:Mmips*hf} == "") || \ (${LIBC_ARCH} == "riscv" && ${MACHINE_ARCH:Mriscv*sf} != "") .include "${LIBC_SRCTOP}/softfloat/Makefile.inc" .endif .if ${LIBC_ARCH} == "i386" || ${LIBC_ARCH} == "amd64" .include "${LIBC_SRCTOP}/x86/sys/Makefile.inc" +.include "${LIBC_SRCTOP}/x86/gen/Makefile.inc" .endif .if ${MK_NIS} != "no" CFLAGS+= -DYP .include "${LIBC_SRCTOP}/yp/Makefile.inc" .endif .include "${LIBC_SRCTOP}/capability/Makefile.inc" .if ${MK_HESIOD} != "no" CFLAGS+= -DHESIOD .endif .if ${MK_FP_LIBC} == "no" CFLAGS+= -DNO_FLOATING_POINT .endif .if ${MK_NS_CACHING} != "no" CFLAGS+= -DNS_CACHING .endif .if defined(_FREEFALL_CONFIG) CFLAGS+=-D_FREEFALL_CONFIG .endif STATICOBJS+=${LIBC_NONSHARED_SRCS:S/.c$/.o/} VERSION_DEF=${LIBC_SRCTOP}/Versions.def SYMBOL_MAPS=${SYM_MAPS} CFLAGS+= -DSYMBOL_VERSIONING # If there are no machine dependent sources, append all the # machine-independent sources: .if empty(MDSRCS) SRCS+= ${MISRCS} .else # Append machine-dependent sources, then append machine-independent sources # for which there is no machine-dependent variant. 
SRCS+= ${MDSRCS} .for _src in ${MISRCS} .if ${MDSRCS:R:M${_src:R}} == "" SRCS+= ${_src} .endif .endfor .endif KQSRCS= adddi3.c anddi3.c ashldi3.c ashrdi3.c cmpdi2.c divdi3.c iordi3.c \ lshldi3.c lshrdi3.c moddi3.c muldi3.c negdi2.c notdi2.c qdivrem.c \ subdi3.c ucmpdi2.c udivdi3.c umoddi3.c xordi3.c KSRCS= bcmp.c ffs.c ffsl.c fls.c flsl.c mcount.c strcat.c strchr.c \ strcmp.c strcpy.c strlen.c strncpy.c strrchr.c libkern: libkern.gen libkern.${LIBC_ARCH} libkern.gen: ${KQSRCS} ${KSRCS} ${CP} ${LIBC_SRCTOP}/quad/quad.h ${.ALLSRC} ${DESTDIR}/sys/libkern libkern.${LIBC_ARCH}:: ${KMSRCS} .if defined(KMSRCS) && !empty(KMSRCS) ${CP} ${.ALLSRC} ${DESTDIR}/sys/libkern/${LIBC_ARCH} .endif HAS_TESTS= SUBDIR.${MK_TESTS}+= tests .include .if (${LIBC_ARCH} == amd64 || ${LIBC_ARCH} == i386) && \ ${.TARGETS:Mall} == all && \ defined(LINKER_FEATURES) && ${LINKER_FEATURES:Mifunc} == "" .error ${LIBC_ARCH} libc requires linker ifunc support .endif .if !defined(_SKIP_BUILD) # We need libutil.h, get it directly to avoid # recording a build dependency CFLAGS+= -I${SRCTOP}/lib/libutil # Same issue with libm MSUN_ARCH_SUBDIR != ${MAKE} -B -C ${SRCTOP}/lib/msun -V ARCH_SUBDIR # unfortunately msun/src contains both private and public headers CFLAGS+= -I${SRCTOP}/lib/msun/${MSUN_ARCH_SUBDIR} .if ${MACHINE_CPUARCH} == "i386" || ${MACHINE_CPUARCH} == "amd64" CFLAGS+= -I${SRCTOP}/lib/msun/x86 .endif CFLAGS+= -I${SRCTOP}/lib/msun/src # and we do not want to record a dependency on msun .if ${.MAKE.LEVEL} > 0 GENDIRDEPS_FILTER+= N${RELDIR:H}/msun .endif .endif # Disable warnings in contributed sources. CWARNFLAGS:= ${.IMPSRC:Ngdtoa_*.c:C/^.+$/${CWARNFLAGS}/:C/^$/-w/} # Disable stack protection for SSP symbols. SSP_CFLAGS:= ${.IMPSRC:N*/stack_protector.c:C/^.+$/${SSP_CFLAGS}/} # Generate stack unwinding tables for cancellation points CANCELPOINTS_CFLAGS:= ${.IMPSRC:Mcancelpoints_*:C/^.+$/${CANCELPOINTS_CFLAGS}/:C/^$//} Index: projects/import-googletest-1.8.1/lib/libc/amd64/gen/getcontextx.c =================================================================== --- projects/import-googletest-1.8.1/lib/libc/amd64/gen/getcontextx.c (revision 344270) +++ projects/import-googletest-1.8.1/lib/libc/amd64/gen/getcontextx.c (nonexistent) @@ -1,113 +0,0 @@ -/*- - * SPDX-License-Identifier: BSD-2-Clause-FreeBSD - * - * Copyright (c) 2011 Konstantin Belousov - * All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * - * 1. Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * 2. Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in the - * documentation and/or other materials provided with the distribution. - * - * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR - * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES - * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
- * IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, - * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT - * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, - * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY - * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF - * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - */ - -#include -__FBSDID("$FreeBSD$"); - -#include -#include -#include -#include -#include -#include -#include -#include -#include - -static int xstate_sz = -1; - -int -__getcontextx_size(void) -{ - u_int p[4]; - - if (xstate_sz == -1) { - do_cpuid(1, p); - if ((p[2] & CPUID2_OSXSAVE) != 0) { - cpuid_count(0xd, 0x0, p); - xstate_sz = p[1] - sizeof(struct savefpu); - } else - xstate_sz = 0; - } - - return (sizeof(ucontext_t) + xstate_sz); -} - -int -__fillcontextx2(char *ctx) -{ - struct amd64_get_xfpustate xfpu; - ucontext_t *ucp; - - ucp = (ucontext_t *)ctx; - if (xstate_sz != 0) { - xfpu.addr = (char *)(ucp + 1); - xfpu.len = xstate_sz; - if (sysarch(AMD64_GET_XFPUSTATE, &xfpu) == -1) - return (-1); - ucp->uc_mcontext.mc_xfpustate = (__register_t)xfpu.addr; - ucp->uc_mcontext.mc_xfpustate_len = xstate_sz; - ucp->uc_mcontext.mc_flags |= _MC_HASFPXSTATE; - } else { - ucp->uc_mcontext.mc_xfpustate = 0; - ucp->uc_mcontext.mc_xfpustate_len = 0; - } - return (0); -} - -int -__fillcontextx(char *ctx) -{ - ucontext_t *ucp; - - ucp = (ucontext_t *)ctx; - if (getcontext(ucp) == -1) - return (-1); - __fillcontextx2(ctx); - return (0); -} - -__weak_reference(__getcontextx, getcontextx); - -ucontext_t * -__getcontextx(void) -{ - char *ctx; - int error; - - ctx = malloc(__getcontextx_size()); - if (ctx == NULL) - return (NULL); - if (__fillcontextx(ctx) == -1) { - error = errno; - free(ctx); - errno = error; - return (NULL); - } - return ((ucontext_t *)ctx); -} Property changes on: projects/import-googletest-1.8.1/lib/libc/amd64/gen/getcontextx.c ___________________________________________________________________ Deleted: svn:keywords ## -1 +0,0 ## -FreeBSD=%H \ No newline at end of property Index: projects/import-googletest-1.8.1/lib/libc/amd64/gen/Makefile.inc =================================================================== --- projects/import-googletest-1.8.1/lib/libc/amd64/gen/Makefile.inc (revision 344270) +++ projects/import-googletest-1.8.1/lib/libc/amd64/gen/Makefile.inc (revision 344271) @@ -1,8 +1,8 @@ # @(#)Makefile.inc 8.1 (Berkeley) 6/4/93 # $FreeBSD$ SRCS+= _setjmp.S _set_tp.c rfork_thread.S setjmp.S sigsetjmp.S \ - fabs.S getcontextx.c \ + fabs.S \ infinity.c ldexp.c makecontext.c signalcontext.c \ flt_rounds.c fpgetmask.c fpsetmask.c fpgetprec.c fpsetprec.c \ fpgetround.c fpsetround.c fpgetsticky.c Index: projects/import-googletest-1.8.1/lib/libc/gen/readpassphrase.3 =================================================================== --- projects/import-googletest-1.8.1/lib/libc/gen/readpassphrase.3 (revision 344270) +++ projects/import-googletest-1.8.1/lib/libc/gen/readpassphrase.3 (revision 344271) @@ -1,181 +1,183 @@ .\" $OpenBSD: readpassphrase.3,v 1.17 2007/05/31 19:19:28 jmc Exp $ .\" .\" Copyright (c) 2000, 2002 Todd C. Miller .\" .\" Permission to use, copy, modify, and distribute this software for any .\" purpose with or without fee is hereby granted, provided that the above .\" copyright notice and this permission notice appear in all copies. 
.\" .\" THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES .\" WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF .\" MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR .\" ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES .\" WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN .\" ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF .\" OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. .\" .\" Sponsored in part by the Defense Advanced Research Projects .\" Agency (DARPA) and Air Force Research Laboratory, Air Force .\" Materiel Command, USAF, under agreement number F39502-99-1-0512. .\" .\" $FreeBSD$ .\" .Dd May 31, 2007 .Dt READPASSPHRASE 3 .Os .Sh NAME .Nm readpassphrase .Nd get a passphrase from the user .Sh SYNOPSIS .In readpassphrase.h .Ft "char *" .Fn readpassphrase "const char *prompt" "char *buf" "size_t bufsiz" "int flags" .Sh DESCRIPTION The .Fn readpassphrase function displays a prompt to, and reads in a passphrase from, .Pa /dev/tty . If this file is inaccessible and the .Dv RPP_REQUIRE_TTY flag is not set, .Fn readpassphrase displays the prompt on the standard error output and reads from the standard input. In this case it is generally not possible to turn off echo. .Pp Up to .Fa bufsiz \- 1 characters (one is for the .Dv NUL ) are read into the provided buffer .Fa buf . Any additional characters and the terminating newline (or return) character are discarded. .Pp The .Fn readpassphrase function takes the following optional .Fa flags : .Pp .Bl -tag -width ".Dv RPP_REQUIRE_TTY" -compact .It Dv RPP_ECHO_OFF turn off echo (default behavior) .It Dv RPP_ECHO_ON leave echo on .It Dv RPP_REQUIRE_TTY fail if there is no tty .It Dv RPP_FORCELOWER force input to lower case .It Dv RPP_FORCEUPPER force input to upper case .It Dv RPP_SEVENBIT strip the high bit from input .It Dv RPP_STDIN force read of passphrase from stdin .El .Pp The calling process should zero the passphrase as soon as possible to avoid leaving the cleartext passphrase visible in the process's address space. .Sh RETURN VALUES Upon successful completion, .Fn readpassphrase returns a pointer to the NUL-terminated passphrase. If an error is encountered, the terminal state is restored and a .Dv NULL pointer is returned. .Sh FILES .Bl -tag -width ".Pa /dev/tty" -compact .It Pa /dev/tty .El .Sh EXAMPLES The following code fragment will read a passphrase from .Pa /dev/tty into the buffer .Fa passbuf . .Bd -literal -offset indent char passbuf[1024]; \&... if (readpassphrase("Response: ", passbuf, sizeof(passbuf), RPP_REQUIRE_TTY) == NULL) errx(1, "unable to read passphrase"); if (compare(transform(passbuf), epass) != 0) errx(1, "bad passphrase"); \&... memset(passbuf, 0, sizeof(passbuf)); .Ed .Sh ERRORS .Bl -tag -width Er .It Bq Er EINTR The .Fn readpassphrase function was interrupted by a signal. .It Bq Er EINVAL The .Ar bufsiz argument was zero. .It Bq Er EIO The process is a member of a background process attempting to read from its controlling terminal, the process is ignoring or blocking the .Dv SIGTTIN signal, or the process group is orphaned. .It Bq Er EMFILE The process has already reached its limit for open file descriptors. .It Bq Er ENFILE The system file table is full. .It Bq Er ENOTTY There is no controlling terminal and the .Dv RPP_REQUIRE_TTY flag was specified. 
.El .Sh SIGNALS The .Fn readpassphrase function will catch the following signals: .Bd -literal -offset indent SIGALRM SIGHUP SIGINT SIGPIPE SIGQUIT SIGTERM SIGTSTP SIGTTIN SIGTTOU .Ed .Pp When one of the above signals is intercepted, terminal echo will be restored if it had previously been turned off. If a signal handler was installed for the signal when .Fn readpassphrase was called, that handler is then executed. If no handler was previously installed for the signal then the default action is taken as per .Xr sigaction 2 . .Pp The .Dv SIGTSTP , SIGTTIN and .Dv SIGTTOU signals (stop signals generated from keyboard or due to terminal I/O from a background process) are treated specially. When the process is resumed after it has been stopped, .Fn readpassphrase will reprint the prompt and the user may then enter a passphrase. .Sh SEE ALSO .Xr sigaction 2 , .Xr getpass 3 .Sh STANDARDS The .Fn readpassphrase function is an extension and should not be used if portability is desired. .Sh HISTORY The .Fn readpassphrase function first appeared in +.Fx 4.6 +and .Ox 2.9 . Index: projects/import-googletest-1.8.1/lib/libc/i386/gen/getcontextx.c =================================================================== --- projects/import-googletest-1.8.1/lib/libc/i386/gen/getcontextx.c (revision 344270) +++ projects/import-googletest-1.8.1/lib/libc/i386/gen/getcontextx.c (nonexistent) @@ -1,145 +0,0 @@ -/*- - * SPDX-License-Identifier: BSD-2-Clause-FreeBSD - * - * Copyright (c) 2011 Konstantin Belousov - * All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * - * 1. Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * 2. Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in the - * documentation and/or other materials provided with the distribution. - * - * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR - * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES - * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. - * IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, - * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT - * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, - * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY - * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF - * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
- */ - -#include -__FBSDID("$FreeBSD$"); - -#include -#include -#include -#include -#include -#include -#include -#include - -static int xstate_sz = -1; - -int -__getcontextx_size(void) -{ - u_int p[4]; - int cpuid_supported; - - if (xstate_sz == -1) { - __asm __volatile( - " pushfl\n" - " popl %%eax\n" - " movl %%eax,%%ecx\n" - " xorl $0x200000,%%eax\n" - " pushl %%eax\n" - " popfl\n" - " pushfl\n" - " popl %%eax\n" - " xorl %%eax,%%ecx\n" - " je 1f\n" - " movl $1,%0\n" - " jmp 2f\n" - "1: movl $0,%0\n" - "2:\n" - : "=r" (cpuid_supported) : : "eax", "ecx"); - if (cpuid_supported) { - __asm __volatile( - " pushl %%ebx\n" - " cpuid\n" - " movl %%ebx,%1\n" - " popl %%ebx\n" - : "=a" (p[0]), "=r" (p[1]), "=c" (p[2]), "=d" (p[3]) - : "0" (0x1)); - if ((p[2] & CPUID2_OSXSAVE) != 0) { - __asm __volatile( - " pushl %%ebx\n" - " cpuid\n" - " movl %%ebx,%1\n" - " popl %%ebx\n" - : "=a" (p[0]), "=r" (p[1]), "=c" (p[2]), - "=d" (p[3]) - : "0" (0xd), "2" (0x0)); - xstate_sz = p[1] - sizeof(struct savexmm); - } else - xstate_sz = 0; - } else - xstate_sz = 0; - } - - return (sizeof(ucontext_t) + xstate_sz); -} - -int -__fillcontextx2(char *ctx) -{ - struct i386_get_xfpustate xfpu; - ucontext_t *ucp; - - ucp = (ucontext_t *)ctx; - if (xstate_sz != 0) { - xfpu.addr = (char *)(ucp + 1); - xfpu.len = xstate_sz; - if (sysarch(I386_GET_XFPUSTATE, &xfpu) == -1) - return (-1); - ucp->uc_mcontext.mc_xfpustate = (__register_t)xfpu.addr; - ucp->uc_mcontext.mc_xfpustate_len = xstate_sz; - ucp->uc_mcontext.mc_flags |= _MC_HASFPXSTATE; - } else { - ucp->uc_mcontext.mc_xfpustate = 0; - ucp->uc_mcontext.mc_xfpustate_len = 0; - } - return (0); -} - -int -__fillcontextx(char *ctx) -{ - ucontext_t *ucp; - - ucp = (ucontext_t *)ctx; - if (getcontext(ucp) == -1) - return (-1); - __fillcontextx2(ctx); - return (0); -} - -__weak_reference(__getcontextx, getcontextx); - -ucontext_t * -__getcontextx(void) -{ - char *ctx; - int error; - - ctx = malloc(__getcontextx_size()); - if (ctx == NULL) - return (NULL); - if (__fillcontextx(ctx) == -1) { - error = errno; - free(ctx); - errno = error; - return (NULL); - } - return ((ucontext_t *)ctx); -} Property changes on: projects/import-googletest-1.8.1/lib/libc/i386/gen/getcontextx.c ___________________________________________________________________ Deleted: svn:keywords ## -1 +0,0 ## -FreeBSD=%H \ No newline at end of property Index: projects/import-googletest-1.8.1/lib/libc/i386/gen/Makefile.inc =================================================================== --- projects/import-googletest-1.8.1/lib/libc/i386/gen/Makefile.inc (revision 344270) +++ projects/import-googletest-1.8.1/lib/libc/i386/gen/Makefile.inc (revision 344271) @@ -1,6 +1,6 @@ # @(#)Makefile.inc 8.1 (Berkeley) 6/4/93 # $FreeBSD$ SRCS+= _ctx_start.S _setjmp.S _set_tp.c fabs.S \ - flt_rounds.c getcontextx.c infinity.c ldexp.c makecontext.c \ + flt_rounds.c infinity.c ldexp.c makecontext.c \ rfork_thread.S setjmp.S signalcontext.c sigsetjmp.S Index: projects/import-googletest-1.8.1/lib/libc/sys/sendfile.2 =================================================================== --- projects/import-googletest-1.8.1/lib/libc/sys/sendfile.2 (revision 344270) +++ projects/import-googletest-1.8.1/lib/libc/sys/sendfile.2 (revision 344271) @@ -1,416 +1,433 @@ .\" Copyright (c) 2003, David G. Lawrence .\" All rights reserved. .\" .\" Redistribution and use in source and binary forms, with or without .\" modification, are permitted provided that the following conditions .\" are met: .\" 1. 
Redistributions of source code must retain the above copyright .\" notice unmodified, this list of conditions, and the following .\" disclaimer. .\" 2. Redistributions in binary form must reproduce the above copyright .\" notice, this list of conditions and the following disclaimer in the .\" documentation and/or other materials provided with the distribution. .\" .\" THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND .\" ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE .\" IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE .\" ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE .\" FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL .\" DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS .\" OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) .\" HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT .\" LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY .\" OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF .\" SUCH DAMAGE. .\" .\" $FreeBSD$ .\" -.Dd January 25, 2019 +.Dd February 15, 2019 .Dt SENDFILE 2 .Os .Sh NAME .Nm sendfile .Nd send a file to a socket .Sh LIBRARY .Lb libc .Sh SYNOPSIS .In sys/types.h .In sys/socket.h .In sys/uio.h .Ft int .Fo sendfile .Fa "int fd" "int s" "off_t offset" "size_t nbytes" .Fa "struct sf_hdtr *hdtr" "off_t *sbytes" "int flags" .Fc .Sh DESCRIPTION The .Fn sendfile system call sends a regular file or shared memory object specified by descriptor .Fa fd out a stream socket specified by descriptor .Fa s . .Pp The .Fa offset argument specifies where to begin in the file. Should .Fa offset fall beyond the end of file, the system will return success and report 0 bytes sent as described below. The .Fa nbytes argument specifies how many bytes of the file should be sent, with 0 having the special meaning of send until the end of file has been reached. .Pp An optional header and/or trailer can be sent before and after the file data by specifying a pointer to a .Vt "struct sf_hdtr" , which has the following structure: .Pp .Bd -literal -offset indent -compact struct sf_hdtr { struct iovec *headers; /* pointer to header iovecs */ int hdr_cnt; /* number of header iovecs */ struct iovec *trailers; /* pointer to trailer iovecs */ int trl_cnt; /* number of trailer iovecs */ }; .Ed .Pp The .Fa headers and .Fa trailers pointers, if .Pf non- Dv NULL , point to arrays of .Vt "struct iovec" structures. See the .Fn writev system call for information on the iovec structure. The number of iovecs in these arrays is specified by .Fa hdr_cnt and .Fa trl_cnt . .Pp If .Pf non- Dv NULL , the system will write the total number of bytes sent on the socket to the variable pointed to by .Fa sbytes . .Pp The least significant 16 bits of the .Fa flags argument are a bitmap of these values: .Bl -tag -offset indent -width "SF_USER_READAHEAD" .It Dv SF_NODISKIO This flag causes .Nm to return .Er EBUSY instead of blocking when a busy page is encountered. This rare situation can happen if some other process is currently working with the same region of the file. It is advised to retry the operation after a short period. .Pp Note that in older .Fx versions .Dv SF_NODISKIO had a slightly different meaning. The flag prevented .Nm from running I/O operations when an invalid (not cached) page was encountered, thus avoiding blocking on I/O.
Starting with .Fx 11 .Nm sending files off the .Xr ffs 7 filesystem does not block on I/O (see .Sx IMPLEMENTATION NOTES ), so the condition no longer applies. However, it is safe if an application utilizes .Dv SF_NODISKIO and on .Er EBUSY performs the same action as it did in older .Fx versions, e.g., .Xr aio_read 2 , .Xr read 2 or .Nm in a different context. .It Dv SF_NOCACHE The data sent to the socket will not be cached by the virtual memory system, and will be freed directly to the pool of free pages. .It Dv SF_SYNC .Nm sleeps until the network stack no longer references the VM pages of the file, making subsequent modifications to it safe. Please note that this is not a guarantee that the data has actually been sent. .It Dv SF_USER_READAHEAD .Nm has some internal heuristics to do readahead when sending data. This flag forces .Nm to override any heuristically calculated readahead and use exactly the application specified readahead. See .Sx SETTING READAHEAD for more details on readahead. .El .Pp When using a socket marked for non-blocking I/O, .Fn sendfile may send fewer bytes than requested. In this case, the number of bytes successfully written is returned in .Fa *sbytes (if specified), and the error .Er EAGAIN is returned. .Sh SETTING READAHEAD .Nm uses internal heuristics based on request size and file system layout to do readahead. Additionally, the application may request extra readahead. The most significant 16 bits of .Fa flags specify the number of pages that .Nm may read ahead when reading the file. The .Fn SF_FLAGS macro is provided to combine the readahead amount and the flags. An example specifying a readahead of 16 pages and the .Dv SF_NOCACHE flag: .Pp .Bd -literal -offset indent -compact SF_FLAGS(16, SF_NOCACHE) .Ed .Pp .Nm will use either the application specified readahead or the internally calculated one, whichever is bigger. Setting the .Dv SF_USER_READAHEAD flag turns off the heuristics and sets the maximum possible readahead length to the number of pages specified via .Fa flags . .Sh IMPLEMENTATION NOTES The .Fx implementation of .Fn sendfile does not block on disk I/O when it sends a file off the .Xr ffs 7 filesystem. The syscall returns success before the actual I/O completes, and data is put into the socket later unattended. However, the order of data in the socket is preserved, so it is safe to do further writes to the socket. .Pp The .Fx implementation of .Fn sendfile is "zero-copy", meaning that it has been optimized so that copying of the file data is avoided. .Sh TUNING +.Ss physical paging buffers +.Fn sendfile +uses the vnode pager to read file pages into memory. +The pager uses a pool of physical buffers to run its I/O operations. +When the system runs out of pbufs, sendfile will block and report the state +.Dq Li zonelimit . +The size of the pool can be tuned with the +.Va vm.vnode_pbufs +.Xr loader.conf 5 +tunable, and can be checked at run time with the +.Xr sysctl 8 +OID of the same name. +.Ss sendfile(2) buffers On some architectures, this system call internally uses a special .Fn sendfile buffer .Pq Vt "struct sf_buf" to handle sending file data to the client. If the sending socket is blocking, and there are not enough .Fn sendfile buffers available, .Fn sendfile will block and report a state of .Dq Li sfbufa . If the sending socket is non-blocking and there are not enough .Fn sendfile buffers available, the call will block and wait for the necessary buffers to become available before finishing the call.
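To make the readahead macro above concrete, a hedged sketch of a send loop on a non-blocking socket; the helper name send_whole_file and the retry policy are illustrative, only sendfile(2), SF_FLAGS() and SF_NOCACHE are real interfaces:

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <sys/uio.h>
    #include <errno.h>

    /*
     * Queue an entire file (fd) onto a non-blocking stream socket (s),
     * requesting 16 pages of readahead and bypassing the page cache.
     */
    static int
    send_whole_file(int fd, int s)
    {
            off_t off, sbytes;

            off = 0;
            for (;;) {
                    sbytes = 0;
                    if (sendfile(fd, s, off, 0, NULL, &sbytes,
                        SF_FLAGS(16, SF_NOCACHE)) == 0)
                            return (0);     /* rest of file queued */
                    if (errno != EAGAIN)
                            return (-1);    /* real error */
                    /* Socket buffer filled; resume past what was sent. */
                    off += sbytes;
                    /* A real program would poll(2) for POLLOUT here. */
            }
    }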
.Pp The number of .Vt sf_buf Ns 's allocated should be proportional to the number of nmbclusters used to send data to a client via .Fn sendfile . Tune accordingly to avoid blocking! Busy installations that make extensive use of .Fn sendfile may want to increase these values to be in line with their .Va kern.ipc.nmbclusters (see .Xr tuning 7 for details). .Pp The number of .Fn sendfile buffers available is determined at boot time by either the .Va kern.ipc.nsfbufs .Xr loader.conf 5 variable or the .Dv NSFBUFS kernel configuration tunable. The number of .Fn sendfile buffers scales with .Va kern.maxusers . The .Va kern.ipc.nsfbufsused and .Va kern.ipc.nsfbufspeak read-only .Xr sysctl 8 variables show current and peak .Fn sendfile buffer usage, respectively. These values may also be viewed through .Nm netstat Fl m . .Pp -If a value of zero is reported for -.Va kern.ipc.nsfbufs , -your architecture does not need to use +If +.Xr sysctl 8 +OID +.Va kern.ipc.nsfbufs +does not exist, your architecture does not need to use .Fn sendfile buffers because their task can be efficiently performed by the generic virtual memory structures. .Sh RETURN VALUES .Rv -std sendfile .Sh ERRORS .Bl -tag -width Er .It Bq Er EAGAIN The socket is marked for non-blocking I/O and not all data was sent due to the socket buffer being filled. If specified, the number of bytes successfully sent will be returned in .Fa *sbytes . .It Bq Er EBADF The .Fa fd argument is not a valid file descriptor. .It Bq Er EBADF The .Fa s argument is not a valid socket descriptor. .It Bq Er EBUSY A busy page was encountered and .Dv SF_NODISKIO had been specified. Partial data may have been sent. .It Bq Er EFAULT An invalid address was specified for an argument. .It Bq Er EINTR A signal interrupted .Fn sendfile before it could be completed. If specified, the number of bytes successfully sent will be returned in .Fa *sbytes . .It Bq Er EINVAL The .Fa fd argument is not a regular file. .It Bq Er EINVAL The .Fa s argument is not a SOCK_STREAM type socket. .It Bq Er EINVAL The .Fa offset argument is negative. .It Bq Er EIO An error occurred while reading from .Fa fd . .It Bq Er ENOTCAPABLE The .Fa fd or the .Fa s argument has insufficient rights. .It Bq Er ENOBUFS The system was unable to allocate an internal buffer. .It Bq Er ENOTCONN The .Fa s argument points to an unconnected socket. .It Bq Er ENOTSOCK The .Fa s argument is not a socket. .It Bq Er EOPNOTSUPP The file system for descriptor .Fa fd does not support .Fn sendfile . .It Bq Er EPIPE The socket peer has closed the connection. .El .Sh SEE ALSO .Xr netstat 1 , .Xr open 2 , .Xr send 2 , .Xr socket 2 , .Xr writev 2 , +.Xr loader.conf 5 , .Xr tuning 7 , +.Xr sysctl 8 .Rs .%A K. Elmeleegy .%A A. Chanda .%A A. L. Cox .%A W. Zwaenepoel .%T A Portable Kernel Abstraction for Low-Overhead Ephemeral Mapping Management .%J The Proceedings of the 2005 USENIX Annual Technical Conference .%P pp 223-236 .%D 2005 .Re .Sh HISTORY The .Fn sendfile system call first appeared in .Fx 3.0 . This manual page first appeared in .Fx 3.1 . In .Fx 10 support for sending shared memory descriptors was introduced. In .Fx 11 a non-blocking implementation was introduced. .Sh AUTHORS The initial implementation of the .Fn sendfile system call and this manual page were written by .An David G. Lawrence Aq Mt dg@dglawrence.com . The .Fx 11 implementation was written by .An Gleb Smirnoff Aq Mt glebius@FreeBSD.org .
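Since the TUNING section notes that the kern.ipc.nsfbufs OID may be absent entirely, here is a small, hedged sketch of probing for it with sysctlbyname(3); the printed messages are illustrative:

    #include <sys/types.h>
    #include <sys/sysctl.h>
    #include <errno.h>
    #include <stdio.h>

    int
    main(void)
    {
            int nsfbufs;
            size_t len = sizeof(nsfbufs);

            /* ENOENT means the architecture has no sendfile buffers. */
            if (sysctlbyname("kern.ipc.nsfbufs", &nsfbufs, &len,
                NULL, 0) == 0)
                    printf("sendfile buffers: %d\n", nsfbufs);
            else if (errno == ENOENT)
                    printf("no sendfile buffers on this architecture\n");
            return (0);
    }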
.Sh BUGS The .Fn sendfile system call will not fail, i.e., return .Dv -1 and set .Va errno to .Er EFAULT , if provided an invalid address for .Fa sbytes . Index: projects/import-googletest-1.8.1/lib/libc/x86/gen/Makefile.inc =================================================================== --- projects/import-googletest-1.8.1/lib/libc/x86/gen/Makefile.inc (nonexistent) +++ projects/import-googletest-1.8.1/lib/libc/x86/gen/Makefile.inc (revision 344271) @@ -0,0 +1,6 @@ +# $FreeBSD$ + +.PATH: ${LIBC_SRCTOP}/x86/gen + +SRCS+= \ + getcontextx.c Property changes on: projects/import-googletest-1.8.1/lib/libc/x86/gen/Makefile.inc ___________________________________________________________________ Added: svn:eol-style ## -0,0 +1 ## +native \ No newline at end of property Added: svn:keywords ## -0,0 +1 ## +FreeBSD=%H \ No newline at end of property Added: svn:mime-type ## -0,0 +1 ## +text/plain \ No newline at end of property Index: projects/import-googletest-1.8.1/lib/libc/x86/gen/getcontextx.c =================================================================== --- projects/import-googletest-1.8.1/lib/libc/x86/gen/getcontextx.c (nonexistent) +++ projects/import-googletest-1.8.1/lib/libc/x86/gen/getcontextx.c (revision 344271) @@ -0,0 +1,140 @@ +/*- + * SPDX-License-Identifier: BSD-2-Clause-FreeBSD + * + * Copyright (c) 2011 Konstantin Belousov + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * 1. Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * 2. Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in the + * documentation and/or other materials provided with the distribution. + * + * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR + * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES + * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. + * IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, + * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT + * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF + * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */ + +#include +__FBSDID("$FreeBSD$"); + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#if defined __i386__ +#define X86_GET_XFPUSTATE I386_GET_XFPUSTATE +typedef struct savexmm savex86_t ; +typedef struct i386_get_xfpustate x86_get_xfpustate_t; +#elif defined __amd64__ +#define X86_GET_XFPUSTATE AMD64_GET_XFPUSTATE +typedef struct savefpu savex86_t; +typedef struct amd64_get_xfpustate x86_get_xfpustate_t; +#else +#error "Wrong arch" +#endif + +static int xstate_sz = 0; + +static int +__getcontextx_size_xfpu(void) +{ + + return (sizeof(ucontext_t) + xstate_sz); +} + +DEFINE_UIFUNC(, int, __getcontextx_size, (void), static) +{ + u_int p[4]; + + if ((cpu_feature2 & CPUID2_OSXSAVE) != 0) { + cpuid_count(0xd, 0x0, p); + xstate_sz = p[1] - sizeof(savex86_t); + } + return (__getcontextx_size_xfpu); +} + +static int +__fillcontextx2_xfpu(char *ctx) +{ + x86_get_xfpustate_t xfpu; + ucontext_t *ucp; + + ucp = (ucontext_t *)ctx; + xfpu.addr = (char *)(ucp + 1); + xfpu.len = xstate_sz; + if (sysarch(X86_GET_XFPUSTATE, &xfpu) == -1) + return (-1); + ucp->uc_mcontext.mc_xfpustate = (__register_t)xfpu.addr; + ucp->uc_mcontext.mc_xfpustate_len = xstate_sz; + ucp->uc_mcontext.mc_flags |= _MC_HASFPXSTATE; + return (0); +} + +static int +__fillcontextx2_noxfpu(char *ctx) +{ + ucontext_t *ucp; + + ucp = (ucontext_t *)ctx; + ucp->uc_mcontext.mc_xfpustate = 0; + ucp->uc_mcontext.mc_xfpustate_len = 0; + return (0); +} + +DEFINE_UIFUNC(, int, __fillcontextx2, (char *), static) +{ + + return ((cpu_feature2 & CPUID2_OSXSAVE) != 0 ? __fillcontextx2_xfpu : + __fillcontextx2_noxfpu); +} + +int +__fillcontextx(char *ctx) +{ + ucontext_t *ucp; + + ucp = (ucontext_t *)ctx; + if (getcontext(ucp) == -1) + return (-1); + __fillcontextx2(ctx); + return (0); +} + +__weak_reference(__getcontextx, getcontextx); + +ucontext_t * +__getcontextx(void) +{ + char *ctx; + int error; + + ctx = malloc(__getcontextx_size()); + if (ctx == NULL) + return (NULL); + if (__fillcontextx(ctx) == -1) { + error = errno; + free(ctx); + errno = error; + return (NULL); + } + return ((ucontext_t *)ctx); +} Property changes on: projects/import-googletest-1.8.1/lib/libc/x86/gen/getcontextx.c ___________________________________________________________________ Added: svn:keywords ## -0,0 +1 ## +FreeBSD=%H \ No newline at end of property Index: projects/import-googletest-1.8.1/lib/libc/x86/sys/__vdso_gettc.c =================================================================== --- projects/import-googletest-1.8.1/lib/libc/x86/sys/__vdso_gettc.c (revision 344270) +++ projects/import-googletest-1.8.1/lib/libc/x86/sys/__vdso_gettc.c (revision 344271) @@ -1,295 +1,270 @@ /*- * Copyright (c) 2012 Konstantin Belousov * Copyright (c) 2016, 2017, 2019 The FreeBSD Foundation * All rights reserved. * * Portions of this software were developed by Konstantin Belousov * under sponsorship from the FreeBSD Foundation. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. 
* * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ #include __FBSDID("$FreeBSD$"); #include #include "namespace.h" #include #include #include #include #include #include #include #include #include #include "un-namespace.h" #include #include #include #include #ifdef WANT_HYPERV #include #endif #include #include "libc_private.h" static void -cpuidp(u_int leaf, u_int p[4]) -{ - - __asm __volatile( -#if defined(__i386__) - " pushl %%ebx\n" -#endif - " cpuid\n" -#if defined(__i386__) - " movl %%ebx,%1\n" - " popl %%ebx" -#endif - : "=a" (p[0]), -#if defined(__i386__) - "=r" (p[1]), -#elif defined(__amd64__) - "=b" (p[1]), -#else -#error "Arch" -#endif - "=c" (p[2]), "=d" (p[3]) - : "0" (leaf)); -} - -static void rdtsc_mb_lfence(void) { lfence(); } static void rdtsc_mb_mfence(void) { mfence(); } static void rdtsc_mb_none(void) { } DEFINE_UIFUNC(static, void, rdtsc_mb, (void), static) { u_int p[4]; - /* Not a typo, string matches our cpuidp() registers use. */ + /* Not a typo, string matches our do_cpuid() registers use. */ static const char intel_id[] = "GenuntelineI"; if ((cpu_feature & CPUID_SSE2) == 0) return (rdtsc_mb_none); - cpuidp(0, p); + do_cpuid(0, p); return (memcmp(p + 1, intel_id, sizeof(intel_id) - 1) == 0 ? rdtsc_mb_lfence : rdtsc_mb_mfence); } static u_int __vdso_gettc_rdtsc_low(const struct vdso_timehands *th) { u_int rv; rdtsc_mb(); __asm __volatile("rdtsc; shrd %%cl, %%edx, %0" : "=a" (rv) : "c" (th->th_x86_shift) : "edx"); return (rv); } static u_int __vdso_rdtsc32(void) { rdtsc_mb(); return (rdtsc32()); } #define HPET_DEV_MAP_MAX 10 static volatile char *hpet_dev_map[HPET_DEV_MAP_MAX]; static void __vdso_init_hpet(uint32_t u) { static const char devprefix[] = "/dev/hpet"; char devname[64], *c, *c1, t; volatile char *new_map, *old_map; unsigned int mode; uint32_t u1; int fd; c1 = c = stpcpy(devname, devprefix); u1 = u; do { *c++ = u1 % 10 + '0'; u1 /= 10; } while (u1 != 0); *c = '\0'; for (c--; c1 != c; c1++, c--) { t = *c1; *c1 = *c; *c = t; } old_map = hpet_dev_map[u]; if (old_map != NULL) return; /* * Explicitely check for the capability mode to avoid * triggering trap_enocap on the device open by absolute path. */ if ((cap_getmode(&mode) == 0 && mode != 0) || (fd = _open(devname, O_RDONLY)) == -1) { /* Prevent the caller from re-entering. */ atomic_cmpset_rel_ptr((volatile uintptr_t *)&hpet_dev_map[u], (uintptr_t)old_map, (uintptr_t)MAP_FAILED); return; } new_map = mmap(NULL, PAGE_SIZE, PROT_READ, MAP_SHARED, fd, 0); _close(fd); if (atomic_cmpset_rel_ptr((volatile uintptr_t *)&hpet_dev_map[u], (uintptr_t)old_map, (uintptr_t)new_map) == 0 && new_map != MAP_FAILED) munmap((void *)new_map, PAGE_SIZE); } #ifdef WANT_HYPERV #define HYPERV_REFTSC_DEVPATH "/dev/" HYPERV_REFTSC_DEVNAME /* * NOTE: * We use 'NULL' for this variable to indicate that initialization * is required. 
And if this variable is 'MAP_FAILED', then Hyper-V * reference TSC cannot be used, e.g., in a misconfigured jail. */ static struct hyperv_reftsc *hyperv_ref_tsc; static void __vdso_init_hyperv_tsc(void) { int fd; unsigned int mode; if (cap_getmode(&mode) == 0 && mode != 0) goto fail; fd = _open(HYPERV_REFTSC_DEVPATH, O_RDONLY); if (fd < 0) goto fail; hyperv_ref_tsc = mmap(NULL, sizeof(*hyperv_ref_tsc), PROT_READ, MAP_SHARED, fd, 0); _close(fd); return; fail: /* Prevent the caller from re-entering. */ hyperv_ref_tsc = MAP_FAILED; } static int __vdso_hyperv_tsc(struct hyperv_reftsc *tsc_ref, u_int *tc) { uint64_t disc, ret, tsc, scale; uint32_t seq; int64_t ofs; while ((seq = atomic_load_acq_int(&tsc_ref->tsc_seq)) != 0) { scale = tsc_ref->tsc_scale; ofs = tsc_ref->tsc_ofs; rdtsc_mb(); tsc = rdtsc(); /* ret = ((tsc * scale) >> 64) + ofs */ __asm__ __volatile__ ("mulq %3" : "=d" (ret), "=a" (disc) : "a" (tsc), "r" (scale)); ret += ofs; atomic_thread_fence_acq(); if (tsc_ref->tsc_seq == seq) { *tc = ret; return (0); } /* Sequence changed; re-sync. */ } return (ENOSYS); } #endif /* WANT_HYPERV */ #pragma weak __vdso_gettc int __vdso_gettc(const struct vdso_timehands *th, u_int *tc) { volatile char *map; uint32_t idx; switch (th->th_algo) { case VDSO_TH_ALGO_X86_TSC: *tc = th->th_x86_shift > 0 ? __vdso_gettc_rdtsc_low(th) : __vdso_rdtsc32(); return (0); case VDSO_TH_ALGO_X86_HPET: idx = th->th_x86_hpet_idx; if (idx >= HPET_DEV_MAP_MAX) return (ENOSYS); map = (volatile char *)atomic_load_acq_ptr( (volatile uintptr_t *)&hpet_dev_map[idx]); if (map == NULL) { __vdso_init_hpet(idx); map = (volatile char *)atomic_load_acq_ptr( (volatile uintptr_t *)&hpet_dev_map[idx]); } if (map == MAP_FAILED) return (ENOSYS); *tc = *(volatile uint32_t *)(map + HPET_MAIN_COUNTER); return (0); #ifdef WANT_HYPERV case VDSO_TH_ALGO_X86_HVTSC: if (hyperv_ref_tsc == NULL) __vdso_init_hyperv_tsc(); if (hyperv_ref_tsc == MAP_FAILED) return (ENOSYS); return (__vdso_hyperv_tsc(hyperv_ref_tsc, tc)); #endif default: return (ENOSYS); } } #pragma weak __vdso_gettimekeep int __vdso_gettimekeep(struct vdso_timekeep **tk) { return (_elf_aux_info(AT_TIMEKEEP, tk, sizeof(*tk))); } Index: projects/import-googletest-1.8.1/lib/libmemstat/memstat_uma.c =================================================================== --- projects/import-googletest-1.8.1/lib/libmemstat/memstat_uma.c (revision 344270) +++ projects/import-googletest-1.8.1/lib/libmemstat/memstat_uma.c (revision 344271) @@ -1,480 +1,489 @@ /*- * SPDX-License-Identifier: BSD-2-Clause-FreeBSD * * Copyright (c) 2005-2006 Robert N. M. Watson * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED.
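Aside: the mulq above computes the high 64 bits of the 64x64-bit product tsc * scale. On compilers that provide a 128-bit integer type, the same reference-TSC formula can be written without assembler (equivalent sketch):

	#include <stdint.h>

	// C-only equivalent of "ret = ((tsc * scale) >> 64) + ofs" from
	// __vdso_hyperv_tsc() above; assumes a compiler with __int128.
	static inline uint64_t
	hv_reftsc_to_count(uint64_t tsc, uint64_t scale, int64_t ofs)
	{
		return ((uint64_t)(((unsigned __int128)tsc * scale) >> 64) +
		    ofs);
	}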
IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * $FreeBSD$ */ #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include "memstat.h" #include "memstat_internal.h" static struct nlist namelist[] = { #define X_UMA_KEGS 0 { .n_name = "_uma_kegs" }, #define X_MP_MAXID 1 { .n_name = "_mp_maxid" }, #define X_ALL_CPUS 2 { .n_name = "_all_cpus" }, #define X_VM_NDOMAINS 3 { .n_name = "_vm_ndomains" }, { .n_name = "" }, }; /* * Extract uma(9) statistics from the running kernel, and store all memory * type information in the passed list. For each type, check the list for an * existing entry with the right name/allocator -- if present, update that * entry. Otherwise, add a new entry. On error, the entire list will be * cleared, as entries will be in an inconsistent state. * * To reduce the level of work for a list that starts empty, we keep around a * hint as to whether it was empty when we began, so we can avoid searching * the list for entries to update. Updates are O(n^2) due to searching for * each entry before adding it. */ int memstat_sysctl_uma(struct memory_type_list *list, int flags) { struct uma_stream_header *ushp; struct uma_type_header *uthp; struct uma_percpu_stat *upsp; struct memory_type *mtp; int count, hint_dontsearch, i, j, maxcpus, maxid; char *buffer, *p; size_t size; hint_dontsearch = LIST_EMPTY(&list->mtl_list); /* * Query the number of CPUs, number of malloc types so that we can * guess an initial buffer size. We loop until we succeed or really * fail. Note that the value of maxcpus we query using sysctl is not * the version we use when processing the real data -- that is read * from the header. */ retry: size = sizeof(maxid); if (sysctlbyname("kern.smp.maxid", &maxid, &size, NULL, 0) < 0) { if (errno == EACCES || errno == EPERM) list->mtl_error = MEMSTAT_ERROR_PERMISSION; else list->mtl_error = MEMSTAT_ERROR_DATAERROR; return (-1); } if (size != sizeof(maxid)) { list->mtl_error = MEMSTAT_ERROR_DATAERROR; return (-1); } size = sizeof(count); if (sysctlbyname("vm.zone_count", &count, &size, NULL, 0) < 0) { if (errno == EACCES || errno == EPERM) list->mtl_error = MEMSTAT_ERROR_PERMISSION; else list->mtl_error = MEMSTAT_ERROR_VERSION; return (-1); } if (size != sizeof(count)) { list->mtl_error = MEMSTAT_ERROR_DATAERROR; return (-1); } size = sizeof(*uthp) + count * (sizeof(*uthp) + sizeof(*upsp) * (maxid + 1)); buffer = malloc(size); if (buffer == NULL) { list->mtl_error = MEMSTAT_ERROR_NOMEMORY; return (-1); } if (sysctlbyname("vm.zone_stats", buffer, &size, NULL, 0) < 0) { /* * XXXRW: ENOMEM is an ambiguous return, we should bound the * number of loops, perhaps. 
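 * One possible shape for such a bound (hypothetical sketch; the buffer
 * resizing on each pass is elided):
 *
 *	// Retry a handful of times instead of looping indefinitely
 *	// on ENOMEM; other errors are handled exactly as below.
 *	for (tries = 0; tries < MAX_RETRIES; tries++) {
 *		if (sysctlbyname("vm.zone_stats", buffer, &size,
 *		    NULL, 0) == 0)
 *			break;
 *		if (errno != ENOMEM)
 *			break;
 *		// free(buffer), grow size, malloc again ...
 *	}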
*/ if (errno == ENOMEM) { free(buffer); goto retry; } if (errno == EACCES || errno == EPERM) list->mtl_error = MEMSTAT_ERROR_PERMISSION; else list->mtl_error = MEMSTAT_ERROR_VERSION; free(buffer); return (-1); } if (size == 0) { free(buffer); return (0); } if (size < sizeof(*ushp)) { list->mtl_error = MEMSTAT_ERROR_VERSION; free(buffer); return (-1); } p = buffer; ushp = (struct uma_stream_header *)p; p += sizeof(*ushp); if (ushp->ush_version != UMA_STREAM_VERSION) { list->mtl_error = MEMSTAT_ERROR_VERSION; free(buffer); return (-1); } /* * For the remainder of this function, we are quite trusting about * the layout of structures and sizes, since we've determined we have * a matching version and acceptable CPU count. */ maxcpus = ushp->ush_maxcpus; count = ushp->ush_count; for (i = 0; i < count; i++) { uthp = (struct uma_type_header *)p; p += sizeof(*uthp); if (hint_dontsearch == 0) { mtp = memstat_mtl_find(list, ALLOCATOR_UMA, uthp->uth_name); } else mtp = NULL; if (mtp == NULL) mtp = _memstat_mt_allocate(list, ALLOCATOR_UMA, uthp->uth_name, maxid + 1); if (mtp == NULL) { _memstat_mtl_empty(list); free(buffer); list->mtl_error = MEMSTAT_ERROR_NOMEMORY; return (-1); } /* * Reset the statistics on a current node. */ _memstat_mt_reset_stats(mtp, maxid + 1); mtp->mt_numallocs = uthp->uth_allocs; mtp->mt_numfrees = uthp->uth_frees; mtp->mt_failures = uthp->uth_fails; mtp->mt_sleeps = uthp->uth_sleeps; for (j = 0; j < maxcpus; j++) { upsp = (struct uma_percpu_stat *)p; p += sizeof(*upsp); mtp->mt_percpu_cache[j].mtp_free = upsp->ups_cache_free; mtp->mt_free += upsp->ups_cache_free; mtp->mt_numallocs += upsp->ups_allocs; mtp->mt_numfrees += upsp->ups_frees; } + /* + * Values for uth_allocs and uth_frees are snapshots. + * It may happen that the kernel reports that the number + * of frees is greater than the number of allocs. See + * counter(9) for details. + */ + if (mtp->mt_numallocs < mtp->mt_numfrees) + mtp->mt_numallocs = mtp->mt_numfrees; + mtp->mt_size = uthp->uth_size; mtp->mt_rsize = uthp->uth_rsize; mtp->mt_memalloced = mtp->mt_numallocs * uthp->uth_size; mtp->mt_memfreed = mtp->mt_numfrees * uthp->uth_size; mtp->mt_bytes = mtp->mt_memalloced - mtp->mt_memfreed; mtp->mt_countlimit = uthp->uth_limit; mtp->mt_byteslimit = uthp->uth_limit * uthp->uth_size; mtp->mt_count = mtp->mt_numallocs - mtp->mt_numfrees; mtp->mt_zonefree = uthp->uth_zone_free; /* * UMA secondary zones share a keg with the primary zone. To * avoid double-reporting of free items, report keg free * items only in the primary zone. */ if (!(uthp->uth_zone_flags & UTH_ZONE_SECONDARY)) { mtp->mt_kegfree = uthp->uth_keg_free; mtp->mt_free += mtp->mt_kegfree; } mtp->mt_free += mtp->mt_zonefree; } free(buffer); return (0); } static int kread(kvm_t *kvm, void *kvm_pointer, void *address, size_t size, size_t offset) { ssize_t ret; ret = kvm_read(kvm, (unsigned long)kvm_pointer + offset, address, size); if (ret < 0) return (MEMSTAT_ERROR_KVM); if ((size_t)ret != size) return (MEMSTAT_ERROR_KVM_SHORTREAD); return (0); } static int kread_string(kvm_t *kvm, const void *kvm_pointer, char *buffer, int buflen) { ssize_t ret; int i; for (i = 0; i < buflen; i++) { ret = kvm_read(kvm, (unsigned long)kvm_pointer + i, &(buffer[i]), sizeof(char)); if (ret < 0) return (MEMSTAT_ERROR_KVM); if ((size_t)ret != sizeof(char)) return (MEMSTAT_ERROR_KVM_SHORTREAD); if (buffer[i] == '\0') return (0); } /* Truncate.
*/ buffer[i-1] = '\0'; return (0); } static int kread_symbol(kvm_t *kvm, int index, void *address, size_t size, size_t offset) { ssize_t ret; ret = kvm_read(kvm, namelist[index].n_value + offset, address, size); if (ret < 0) return (MEMSTAT_ERROR_KVM); if ((size_t)ret != size) return (MEMSTAT_ERROR_KVM_SHORTREAD); return (0); } /* * memstat_kvm_uma() is similar to memstat_sysctl_uma(), only it extracts * UMA(9) statistics from a kernel core/memory file. */ int memstat_kvm_uma(struct memory_type_list *list, void *kvm_handle) { LIST_HEAD(, uma_keg) uma_kegs; struct memory_type *mtp; struct uma_zone_domain uzd; struct uma_bucket *ubp, ub; struct uma_cache *ucp, *ucp_array; struct uma_zone *uzp, uz; struct uma_keg *kzp, kz; int hint_dontsearch, i, mp_maxid, ndomains, ret; char name[MEMTYPE_MAXNAME]; cpuset_t all_cpus; long cpusetsize; kvm_t *kvm; kvm = (kvm_t *)kvm_handle; hint_dontsearch = LIST_EMPTY(&list->mtl_list); if (kvm_nlist(kvm, namelist) != 0) { list->mtl_error = MEMSTAT_ERROR_KVM; return (-1); } if (namelist[X_UMA_KEGS].n_type == 0 || namelist[X_UMA_KEGS].n_value == 0) { list->mtl_error = MEMSTAT_ERROR_KVM_NOSYMBOL; return (-1); } ret = kread_symbol(kvm, X_MP_MAXID, &mp_maxid, sizeof(mp_maxid), 0); if (ret != 0) { list->mtl_error = ret; return (-1); } ret = kread_symbol(kvm, X_VM_NDOMAINS, &ndomains, sizeof(ndomains), 0); if (ret != 0) { list->mtl_error = ret; return (-1); } ret = kread_symbol(kvm, X_UMA_KEGS, &uma_kegs, sizeof(uma_kegs), 0); if (ret != 0) { list->mtl_error = ret; return (-1); } cpusetsize = sysconf(_SC_CPUSET_SIZE); if (cpusetsize == -1 || (u_long)cpusetsize > sizeof(cpuset_t)) { list->mtl_error = MEMSTAT_ERROR_KVM_NOSYMBOL; return (-1); } CPU_ZERO(&all_cpus); ret = kread_symbol(kvm, X_ALL_CPUS, &all_cpus, cpusetsize, 0); if (ret != 0) { list->mtl_error = ret; return (-1); } ucp_array = malloc(sizeof(struct uma_cache) * (mp_maxid + 1)); if (ucp_array == NULL) { list->mtl_error = MEMSTAT_ERROR_NOMEMORY; return (-1); } for (kzp = LIST_FIRST(&uma_kegs); kzp != NULL; kzp = LIST_NEXT(&kz, uk_link)) { ret = kread(kvm, kzp, &kz, sizeof(kz), 0); if (ret != 0) { free(ucp_array); _memstat_mtl_empty(list); list->mtl_error = ret; return (-1); } for (uzp = LIST_FIRST(&kz.uk_zones); uzp != NULL; uzp = LIST_NEXT(&uz, uz_link)) { ret = kread(kvm, uzp, &uz, sizeof(uz), 0); if (ret != 0) { free(ucp_array); _memstat_mtl_empty(list); list->mtl_error = ret; return (-1); } ret = kread(kvm, uzp, ucp_array, sizeof(struct uma_cache) * (mp_maxid + 1), offsetof(struct uma_zone, uz_cpu[0])); if (ret != 0) { free(ucp_array); _memstat_mtl_empty(list); list->mtl_error = ret; return (-1); } ret = kread_string(kvm, uz.uz_name, name, MEMTYPE_MAXNAME); if (ret != 0) { free(ucp_array); _memstat_mtl_empty(list); list->mtl_error = ret; return (-1); } if (hint_dontsearch == 0) { mtp = memstat_mtl_find(list, ALLOCATOR_UMA, name); } else mtp = NULL; if (mtp == NULL) mtp = _memstat_mt_allocate(list, ALLOCATOR_UMA, name, mp_maxid + 1); if (mtp == NULL) { free(ucp_array); _memstat_mtl_empty(list); list->mtl_error = MEMSTAT_ERROR_NOMEMORY; return (-1); } /* * Reset the statistics on a current node. 
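For orientation, the kread*() helpers earlier in this file are thin wrappers around kvm_read(3); the same pattern, reduced to a self-contained sketch (symbol name and error handling illustrative):

	#include <sys/types.h>
	#include <fcntl.h>
	#include <kvm.h>
	#include <nlist.h>

	// Resolve one kernel symbol and copy its (int) value out of the
	// running kernel or a core file, mirroring kread_symbol() above.
	static int
	read_int_symbol(const char *sym, int *out)
	{
		struct nlist nl[] = { { .n_name = sym }, { .n_name = NULL } };
		kvm_t *kd;
		int rv;

		kd = kvm_open(NULL, NULL, NULL, O_RDONLY, "kvm");
		if (kd == NULL)
			return (-1);
		rv = (kvm_nlist(kd, nl) == 0 && nl[0].n_value != 0 &&
		    kvm_read(kd, nl[0].n_value, out, sizeof(*out)) ==
		    (ssize_t)sizeof(*out)) ? 0 : -1;
		kvm_close(kd);
		return (rv);
	}

A call such as read_int_symbol("_mp_maxid", &v) then corresponds to the kread_symbol(kvm, X_MP_MAXID, ...) lookups used below.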
*/ _memstat_mt_reset_stats(mtp, mp_maxid + 1); mtp->mt_numallocs = kvm_counter_u64_fetch(kvm, (unsigned long )uz.uz_allocs); mtp->mt_numfrees = kvm_counter_u64_fetch(kvm, (unsigned long )uz.uz_frees); mtp->mt_failures = kvm_counter_u64_fetch(kvm, (unsigned long )uz.uz_fails); mtp->mt_sleeps = uz.uz_sleeps; if (kz.uk_flags & UMA_ZFLAG_INTERNAL) goto skip_percpu; for (i = 0; i < mp_maxid + 1; i++) { if (!CPU_ISSET(i, &all_cpus)) continue; ucp = &ucp_array[i]; mtp->mt_numallocs += ucp->uc_allocs; mtp->mt_numfrees += ucp->uc_frees; if (ucp->uc_allocbucket != NULL) { ret = kread(kvm, ucp->uc_allocbucket, &ub, sizeof(ub), 0); if (ret != 0) { free(ucp_array); _memstat_mtl_empty(list); list->mtl_error = ret; return (-1); } mtp->mt_free += ub.ub_cnt; } if (ucp->uc_freebucket != NULL) { ret = kread(kvm, ucp->uc_freebucket, &ub, sizeof(ub), 0); if (ret != 0) { free(ucp_array); _memstat_mtl_empty(list); list->mtl_error = ret; return (-1); } mtp->mt_free += ub.ub_cnt; } } skip_percpu: mtp->mt_size = kz.uk_size; mtp->mt_rsize = kz.uk_rsize; mtp->mt_memalloced = mtp->mt_numallocs * mtp->mt_size; mtp->mt_memfreed = mtp->mt_numfrees * mtp->mt_size; mtp->mt_bytes = mtp->mt_memalloced - mtp->mt_memfreed; mtp->mt_countlimit = uz.uz_max_items; mtp->mt_byteslimit = mtp->mt_countlimit * mtp->mt_size; mtp->mt_count = mtp->mt_numallocs - mtp->mt_numfrees; for (i = 0; i < ndomains; i++) { ret = kread(kvm, &uz.uz_domain[i], &uzd, sizeof(uzd), 0); for (ubp = LIST_FIRST(&uzd.uzd_buckets); ubp != NULL; ubp = LIST_NEXT(&ub, ub_link)) { ret = kread(kvm, ubp, &ub, sizeof(ub), 0); mtp->mt_zonefree += ub.ub_cnt; } } if (!((kz.uk_flags & UMA_ZONE_SECONDARY) && LIST_FIRST(&kz.uk_zones) != uzp)) { mtp->mt_kegfree = kz.uk_free; mtp->mt_free += mtp->mt_kegfree; } mtp->mt_free += mtp->mt_zonefree; } } free(ucp_array); return (0); } Index: projects/import-googletest-1.8.1/lib/libthr/arch/powerpc/include/pthread_md.h =================================================================== --- projects/import-googletest-1.8.1/lib/libthr/arch/powerpc/include/pthread_md.h (revision 344270) +++ projects/import-googletest-1.8.1/lib/libthr/arch/powerpc/include/pthread_md.h (revision 344271) @@ -1,93 +1,94 @@ /*- * SPDX-License-Identifier: BSD-3-Clause * * Copyright 2004 by Peter Grehan. All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * 3. The name of the author may not be used to endorse or promote products * derived from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
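In the header below, the PowerPC thread pointer register (%r2 on 32-bit, %r13 on 64-bit) holds the tcb address biased upward by TP_OFFSET; the revised _tcb_get() folds the unbiasing into a single addi of -TP_OFFSET instead of a register move followed by a C-level subtraction. In plain C the arithmetic amounts to (sketch against the definitions that follow):

	// The thread pointer is (uintptr_t)tcb + TP_OFFSET, so the tcb
	// is recovered with one constant subtraction.
	static inline struct tcb *
	tcb_from_tp(uintptr_t tp)
	{
		return ((struct tcb *)(tp - TP_OFFSET));
	}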
* IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, * BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED * AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, * OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * $FreeBSD$ */ /* * Machine-dependent thread prototypes/definitions. */ #ifndef _PTHREAD_MD_H_ #define _PTHREAD_MD_H_ #include #include #define CPU_SPINWAIT #define DTV_OFFSET offsetof(struct tcb, tcb_dtv) #ifdef __powerpc64__ #define TP_OFFSET 0x7010 #else #define TP_OFFSET 0x7008 #endif /* * Variant I tcb. The structure layout is fixed, don't blindly * change it. * %r2 (32-bit) or %r13 (64-bit) points to end of the structure. */ struct tcb { void *tcb_dtv; struct pthread *tcb_thread; }; static __inline void _tcb_set(struct tcb *tcb) { #ifdef __powerpc64__ __asm __volatile("mr 13,%0" :: "r"((uint8_t *)tcb + TP_OFFSET)); #else __asm __volatile("mr 2,%0" :: "r"((uint8_t *)tcb + TP_OFFSET)); #endif } static __inline struct tcb * _tcb_get(void) { - register uint8_t *_tp; + register struct tcb *tcb; + #ifdef __powerpc64__ - __asm __volatile("mr %0,13" : "=r"(_tp)); + __asm __volatile("addi %0,13,%1" : "=r"(tcb) : "i"(-TP_OFFSET)); #else - __asm __volatile("mr %0,2" : "=r"(_tp)); + __asm __volatile("addi %0,2,%1" : "=r"(tcb) : "i"(-TP_OFFSET)); #endif - return ((struct tcb *)(_tp - TP_OFFSET)); + return (tcb); } static __inline struct pthread * _get_curthread(void) { if (_thr_initial) return (_tcb_get()->tcb_thread); return (NULL); } #endif /* _PTHREAD_MD_H_ */ Index: projects/import-googletest-1.8.1/libexec/rc/rc.d/nfsd =================================================================== --- projects/import-googletest-1.8.1/libexec/rc/rc.d/nfsd (revision 344270) +++ projects/import-googletest-1.8.1/libexec/rc/rc.d/nfsd (revision 344271) @@ -1,51 +1,56 @@ #!/bin/sh # # $FreeBSD$ # # PROVIDE: nfsd # REQUIRE: mountcritremote mountd hostname gssd nfsuserd # KEYWORD: nojail shutdown . /etc/rc.subr name="nfsd" desc="Remote NFS server" rcvar="nfs_server_enable" command="/usr/sbin/${name}" +nfs_server_vhost="" load_rc_config $name start_precmd="nfsd_precmd" sig_stop="USR1" nfsd_precmd() { + local _vhost rc_flags="${nfs_server_flags}" # Load the modules now, so that the vfs.nfsd sysctl # oids are available. 
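	# As an illustration, the nfs_server_vhost knob added in this
	# revision is set from rc.conf(5) like the other nfsd options and,
	# when non-empty, is passed to nfsd(8) as its -V argument below
	# (the host name here is hypothetical):
	#
	#	nfs_server_enable="YES"
	#	nfs_server_vhost="nfs.example.org"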
load_kld nfsd if checkyesno nfs_reserved_port_only; then echo 'NFS on reserved port only=YES' sysctl vfs.nfsd.nfs_privport=1 > /dev/null else sysctl vfs.nfsd.nfs_privport=0 > /dev/null fi if checkyesno nfs_server_managegids; then force_depend nfsuserd || err 1 "Cannot run nfsuserd" fi if checkyesno nfsv4_server_enable; then sysctl vfs.nfsd.server_max_nfsvers=4 > /dev/null else echo 'NFSv4 is disabled' sysctl vfs.nfsd.server_max_nfsvers=3 > /dev/null fi force_depend rpcbind || return 1 force_depend mountd || return 1 + if [ -n "${nfs_server_vhost}" ]; then + command_args="-V \"${nfs_server_vhost}\"" + fi } run_rc_command "$1" Index: projects/import-googletest-1.8.1/libexec/rtld-elf/Makefile =================================================================== --- projects/import-googletest-1.8.1/libexec/rtld-elf/Makefile (revision 344270) +++ projects/import-googletest-1.8.1/libexec/rtld-elf/Makefile (revision 344271) @@ -1,118 +1,119 @@ # $FreeBSD$ # Use the following command to build a local debug version of the dynamic # linker: # make DEBUG_FLAGS=-g DEBUG=-DDEBUG WITHOUT_TESTS=yes all .include PACKAGE= clibs MK_BIND_NOW= no +MK_PIE= no # Always position independent using local rules MK_SSP= no CONFS= libmap.conf PROG?= ld-elf.so.1 .if (${PROG:M*ld-elf32*} != "") TAGS+= lib32 .endif SRCS= \ rtld_start.S \ reloc.c \ rtld.c \ rtld_lock.c \ rtld_malloc.c \ rtld_printf.c \ map_object.c \ xmalloc.c \ debug.c \ libmap.c MAN= rtld.1 CSTD?= gnu99 CFLAGS+= -Wall -DFREEBSD_ELF -DIN_RTLD -ffreestanding CFLAGS+= -I${SRCTOP}/lib/csu/common .if exists(${.CURDIR}/${MACHINE_ARCH}) RTLD_ARCH= ${MACHINE_ARCH} .else RTLD_ARCH= ${MACHINE_CPUARCH} .endif CFLAGS+= -I${.CURDIR}/${RTLD_ARCH} -I${.CURDIR} .if ${MACHINE_ARCH} == "powerpc64" LDFLAGS+= -nostdlib -e _rtld_start .else LDFLAGS+= -nostdlib -e .rtld_start .endif NO_WCAST_ALIGN= yes WARNS?= 6 INSTALLFLAGS= -C -b PRECIOUSPROG= BINDIR= /libexec SYMLINKS= ../..${BINDIR}/${PROG} ${LIBEXECDIR}/${PROG} MLINKS= rtld.1 ld-elf.so.1.1 \ rtld.1 ld.so.1 .if ${MACHINE_CPUARCH} == "sparc64" CFLAGS+= -fPIC .else CFLAGS+= -fpic .endif CFLAGS+= -DPIC $(DEBUG) .if ${MACHINE_CPUARCH} == "amd64" || ${MACHINE_CPUARCH} == "i386" CFLAGS+= -fvisibility=hidden .endif .if ${MACHINE_CPUARCH} == "mips" CFLAGS.reloc.c+=-fno-jump-tables .endif LDFLAGS+= -shared -Wl,-Bsymbolic -Wl,-z,defs LIBADD= c_nossp_pic .if ${MK_TOOLCHAIN} == "no" LDFLAGS+= -L${LIBCDIR} .endif .if ${MACHINE_CPUARCH} == "arm" # Some of the required math functions (div & mod) are implemented in # libcompiler_rt on ARM. The library also needs to be placed first to be # correctly linked, as some of the functions are used before we have # shared libraries. LIBADD+= compiler_rt .endif .if ${MK_SYMVER} == "yes" VERSION_DEF= ${LIBCSRCDIR}/Versions.def SYMBOL_MAPS= ${.CURDIR}/Symbol.map VERSION_MAP= Version.map LDFLAGS+= -Wl,--version-script=${VERSION_MAP} .if exists(${.CURDIR}/${RTLD_ARCH}/Symbol.map) SYMBOL_MAPS+= ${.CURDIR}/${RTLD_ARCH}/Symbol.map .endif .endif .sinclude "${.CURDIR}/${RTLD_ARCH}/Makefile.inc" # Since moving rtld-elf to /libexec, we need to create a symlink. # Fixup the existing binary that's there so we can symlink over it.
beforeinstall: .if exists(${DESTDIR}/usr/libexec/${PROG}) && ${MK_STAGING} == "no" -chflags -h noschg ${DESTDIR}/usr/libexec/${PROG} .endif .PATH: ${.CURDIR}/${RTLD_ARCH} HAS_TESTS= SUBDIR.${MK_TESTS}+= tests .include ${PROG_FULL}: ${VERSION_MAP} .include .if ${COMPILER_TYPE} == "gcc" # GCC warns about redeclarations even though they have __exported # and are therefore not identical to the ones from the system headers. CFLAGS+= -Wno-redundant-decls .if ${COMPILER_VERSION} < 40300 # Silence -Wshadow false positives in ancient GCC CFLAGS+= -Wno-shadow .endif .endif Index: projects/import-googletest-1.8.1/sbin/mdmfs/mdmfs.c =================================================================== --- projects/import-googletest-1.8.1/sbin/mdmfs/mdmfs.c (revision 344270) +++ projects/import-googletest-1.8.1/sbin/mdmfs/mdmfs.c (revision 344271) @@ -1,829 +1,830 @@ /*- * SPDX-License-Identifier: BSD-2-Clause-FreeBSD * * Copyright (c) 2001 Dima Dorfman. * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ /* * mdmfs (md/MFS) is a wrapper around mdconfig(8), * newfs(8), and mount(8) that mimics the command line option set of * the deprecated mount_mfs(8). */ #include __FBSDID("$FreeBSD$"); #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include typedef enum { false, true } bool; struct mtpt_info { uid_t mi_uid; bool mi_have_uid; gid_t mi_gid; bool mi_have_gid; mode_t mi_mode; bool mi_have_mode; bool mi_forced_pw; }; static bool debug; /* Emit debugging information? */ static bool loudsubs; /* Suppress output from helper programs? */ static bool norun; /* Actually run the helper programs? */ static int unit; /* The unit we're working with. */ static const char *mdname; /* Name of memory disk device (e.g., "md"). */ static const char *mdsuffix; /* Suffix of memory disk device (e.g., ".uzip"). */ static size_t mdnamelen; /* Length of mdname. */ static const char *path_mdconfig =_PATH_MDCONFIG; static void argappend(char **, const char *, ...) __printflike(2, 3); static void debugprintf(const char *, ...) 
__printflike(1, 2); static void do_mdconfig_attach(const char *, const enum md_types); static void do_mdconfig_attach_au(const char *, const enum md_types); static void do_mdconfig_detach(void); static void do_mount_md(const char *, const char *); static void do_mount_tmpfs(const char *, const char *); static void do_mtptsetup(const char *, struct mtpt_info *); static void do_newfs(const char *); static void extract_ugid(const char *, struct mtpt_info *); static int run(int *, const char *, ...) __printflike(2, 3); static const char *run_exitstr(int); static int run_exitnumber(int); static void usage(void); int main(int argc, char **argv) { struct mtpt_info mi; /* Mountpoint info. */ intmax_t mdsize; char *mdconfig_arg, *newfs_arg, /* Args to helper programs. */ *mount_arg; enum md_types mdtype; /* The type of our memory disk. */ bool have_mdtype, mlmac; bool detach, softdep, autounit, newfs; const char *mtpoint, *size_arg, *unitstr; char *p; int ch, idx; void *set; unsigned long ul; /* Misc. initialization. */ (void)memset(&mi, '\0', sizeof(mi)); detach = true; softdep = true; autounit = false; mlmac = false; newfs = true; have_mdtype = false; mdtype = MD_SWAP; mdname = MD_NAME; mdnamelen = strlen(mdname); mdsize = 0; /* * Can't set these to NULL. They may be passed to the * respective programs without modification. I.e., we may not * receive any command-line options which will caused them to * be modified. */ mdconfig_arg = strdup(""); newfs_arg = strdup(""); mount_arg = strdup(""); size_arg = NULL; /* If we were started as mount_mfs or mfs, imply -C. */ if (strcmp(getprogname(), "mount_mfs") == 0 || strcmp(getprogname(), "mfs") == 0) { /* Make compatibility assumptions. */ mi.mi_mode = 01777; mi.mi_have_mode = true; } while ((ch = getopt(argc, argv, "a:b:Cc:Dd:E:e:F:f:hi:LlMm:NnO:o:Pp:Ss:tT:Uv:w:X")) != -1) switch (ch) { case 'a': argappend(&newfs_arg, "-a %s", optarg); break; case 'b': argappend(&newfs_arg, "-b %s", optarg); break; case 'C': /* Ignored for compatibility. 
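Most handlers in this switch forward their option to newfs(8), mdconfig(8), or mount(8) by way of argappend(), defined later in this file: it formats the new fragment with vasprintf() and rejoins it to the existing string with a separating space. A usage sketch (fragment; values illustrative):

	// Each call grows the flag string in place; note the leading
	// space the helper introduces, which run()'s splitter skips.
	char *args = strdup("");
	argappend(&args, "-t %s", "swap");
	argappend(&args, "-s %jdB", (intmax_t)16384);
	// args now holds " -t swap -s 16384B"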
*/ break; case 'c': argappend(&newfs_arg, "-c %s", optarg); break; case 'D': detach = false; break; case 'd': argappend(&newfs_arg, "-d %s", optarg); break; case 'E': path_mdconfig = optarg; break; case 'e': argappend(&newfs_arg, "-e %s", optarg); break; case 'F': if (have_mdtype) usage(); mdtype = MD_VNODE; have_mdtype = true; argappend(&mdconfig_arg, "-f %s", optarg); break; case 'f': argappend(&newfs_arg, "-f %s", optarg); break; case 'h': usage(); break; case 'i': argappend(&newfs_arg, "-i %s", optarg); break; case 'L': loudsubs = true; break; case 'l': mlmac = true; argappend(&newfs_arg, "-l"); break; case 'M': if (have_mdtype) usage(); mdtype = MD_MALLOC; have_mdtype = true; + argappend(&mdconfig_arg, "-o reserve"); break; case 'm': argappend(&newfs_arg, "-m %s", optarg); break; case 'N': norun = true; break; case 'n': argappend(&newfs_arg, "-n"); break; case 'O': argappend(&newfs_arg, "-o %s", optarg); break; case 'o': argappend(&mount_arg, "-o %s", optarg); break; case 'P': newfs = false; break; case 'p': if ((set = setmode(optarg)) == NULL) usage(); mi.mi_mode = getmode(set, S_IRWXU | S_IRWXG | S_IRWXO); mi.mi_have_mode = true; mi.mi_forced_pw = true; free(set); break; case 'S': softdep = false; break; case 's': size_arg = optarg; break; case 't': argappend(&newfs_arg, "-t"); break; case 'T': argappend(&mount_arg, "-t %s", optarg); break; case 'U': softdep = true; break; case 'v': argappend(&newfs_arg, "-O %s", optarg); break; case 'w': extract_ugid(optarg, &mi); mi.mi_forced_pw = true; break; case 'X': debug = true; break; default: usage(); } argc -= optind; argv += optind; if (argc < 2) usage(); /* * Historically our size arg was passed directly to mdconfig, which * treats a number without a suffix as a count of 512-byte sectors; * tmpfs would treat it as a count of bytes. To get predictable * behavior for 'auto' we document that the size always uses mdconfig * rules. To make that work, decode the size here so it can be passed * to either tmpfs or mdconfig as a count of bytes. */ if (size_arg != NULL) { mdsize = (intmax_t)strtoumax(size_arg, &p, 0); if (p == size_arg || (p[0] != 0 && p[1] != 0) || mdsize < 0) errx(1, "invalid size '%s'", size_arg); switch (*p) { case 'p': case 'P': mdsize *= 1024; case 't': case 'T': mdsize *= 1024; case 'g': case 'G': mdsize *= 1024; case 'm': case 'M': mdsize *= 1024; case 'k': case 'K': mdsize *= 1024; case 'b': case 'B': break; case '\0': mdsize *= 512; break; default: errx(1, "invalid size suffix on '%s'", size_arg); } } /* * Based on the command line 'md-device' either mount a tmpfs filesystem * or configure the md device then format and mount a filesystem on it. * If the device is 'auto' use tmpfs if it is available and there is no * request for multilabel MAC (which tmpfs does not support). */ unitstr = argv[0]; mtpoint = argv[1]; if (strcmp(unitstr, "auto") == 0) { if (mlmac) idx = -1; /* Must use md for mlmac. 
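The suffix switch above relies on deliberate fall-through: each recognized suffix multiplies by 1024 once per level on the way down to bytes, and a bare number is treated as a count of 512-byte sectors, per the mdconfig(8) rules documented in the comment. Restated as a self-contained helper (sketch; the original additionally rejects multi-character suffixes):

	#include <inttypes.h>

	// "16m" -> 16 * 1024 * 1024 = 16777216 bytes;
	// "32768" (no suffix) -> 32768 * 512 = 16777216 bytes.
	static intmax_t
	decode_md_size(const char *s)
	{
		char *p;
		intmax_t v;

		v = (intmax_t)strtoumax(s, &p, 0);
		switch (*p) {
		case 'p': case 'P': v *= 1024;	// FALLTHROUGH
		case 't': case 'T': v *= 1024;	// FALLTHROUGH
		case 'g': case 'G': v *= 1024;	// FALLTHROUGH
		case 'm': case 'M': v *= 1024;	// FALLTHROUGH
		case 'k': case 'K': v *= 1024;	// FALLTHROUGH
		case 'b': case 'B': break;
		case '\0': v *= 512; break;	// sectors
		default: return (-1);		// bad suffix
		}
		return (v);
	}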
*/ else if ((idx = modfind("tmpfs")) == -1) idx = kldload("tmpfs"); if (idx == -1) unitstr = "md"; else unitstr = "tmpfs"; } if (strcmp(unitstr, "tmpfs") == 0) { if (size_arg != NULL && mdsize != 0) argappend(&mount_arg, "-o size=%jd", mdsize); do_mount_tmpfs(mount_arg, mtpoint); } else { if (size_arg != NULL) argappend(&mdconfig_arg, "-s %jdB", mdsize); if (strncmp(unitstr, "/dev/", 5) == 0) unitstr += 5; if (strncmp(unitstr, mdname, mdnamelen) == 0) unitstr += mdnamelen; if (!isdigit(*unitstr)) { autounit = true; unit = -1; mdsuffix = unitstr; } else { ul = strtoul(unitstr, &p, 10); if (ul == ULONG_MAX) errx(1, "bad device unit: %s", unitstr); unit = ul; mdsuffix = p; /* can be empty */ } if (!have_mdtype) mdtype = MD_SWAP; if (softdep) argappend(&newfs_arg, "-U"); if (mdtype != MD_VNODE && !newfs) errx(1, "-P requires a vnode-backed disk"); /* Do the work. */ if (detach && !autounit) do_mdconfig_detach(); if (autounit) do_mdconfig_attach_au(mdconfig_arg, mdtype); else do_mdconfig_attach(mdconfig_arg, mdtype); if (newfs) do_newfs(newfs_arg); do_mount_md(mount_arg, mtpoint); } do_mtptsetup(mtpoint, &mi); return (0); } /* * Append the expansion of 'fmt' to the buffer pointed to by '*dstp'; * reallocate as required. */ static void argappend(char **dstp, const char *fmt, ...) { char *old, *new; va_list ap; old = *dstp; assert(old != NULL); va_start(ap, fmt); if (vasprintf(&new, fmt,ap) == -1) errx(1, "vasprintf"); va_end(ap); *dstp = new; if (asprintf(&new, "%s %s", old, new) == -1) errx(1, "asprintf"); free(*dstp); free(old); *dstp = new; } /* * If run-time debugging is enabled, print the expansion of 'fmt'. * Otherwise, do nothing. */ static void debugprintf(const char *fmt, ...) { va_list ap; if (!debug) return; fprintf(stderr, "DEBUG: "); va_start(ap, fmt); vfprintf(stderr, fmt, ap); va_end(ap); fprintf(stderr, "\n"); fflush(stderr); } /* * Attach a memory disk with a known unit. */ static void do_mdconfig_attach(const char *args, const enum md_types mdtype) { int rv; const char *ta; /* Type arg. */ switch (mdtype) { case MD_SWAP: ta = "-t swap"; break; case MD_VNODE: ta = "-t vnode"; break; case MD_MALLOC: ta = "-t malloc"; break; default: abort(); } rv = run(NULL, "%s -a %s%s -u %s%d", path_mdconfig, ta, args, mdname, unit); if (rv) errx(1, "mdconfig (attach) exited %s %d", run_exitstr(rv), run_exitnumber(rv)); } /* * Attach a memory disk with an unknown unit; use autounit. */ static void do_mdconfig_attach_au(const char *args, const enum md_types mdtype) { const char *ta; /* Type arg. */ char *linep; char linebuf[12]; /* 32-bit unit (10) + '\n' (1) + '\0' (1) */ int fd; /* Standard output of mdconfig invocation. */ FILE *sfd; int rv; char *p; size_t linelen; unsigned long ul; switch (mdtype) { case MD_SWAP: ta = "-t swap"; break; case MD_VNODE: ta = "-t vnode"; break; case MD_MALLOC: ta = "-t malloc"; break; default: abort(); } rv = run(&fd, "%s -a %s%s", path_mdconfig, ta, args); if (rv) errx(1, "mdconfig (attach) exited %s %d", run_exitstr(rv), run_exitnumber(rv)); /* Receive the unit number. */ if (norun) { /* Since we didn't run, we can't read. Fake it. */ unit = 0; return; } sfd = fdopen(fd, "r"); if (sfd == NULL) err(1, "fdopen"); linep = fgetln(sfd, &linelen); /* If the output format changes, we want to know about it. 
*/ if (linep == NULL || linelen <= mdnamelen + 1 || linelen - mdnamelen >= sizeof(linebuf) || strncmp(linep, mdname, mdnamelen) != 0) errx(1, "unexpected output from mdconfig (attach)"); linep += mdnamelen; linelen -= mdnamelen; /* Can't use strlcpy because linep is not NULL-terminated. */ strncpy(linebuf, linep, linelen); linebuf[linelen] = '\0'; ul = strtoul(linebuf, &p, 10); if (ul == ULONG_MAX || *p != '\n') errx(1, "unexpected output from mdconfig (attach)"); unit = ul; fclose(sfd); } /* * Detach a memory disk. */ static void do_mdconfig_detach(void) { int rv; rv = run(NULL, "%s -d -u %s%d", path_mdconfig, mdname, unit); if (rv && debug) /* This is allowed to fail. */ warnx("mdconfig (detach) exited %s %d (ignored)", run_exitstr(rv), run_exitnumber(rv)); } /* * Mount the configured memory disk. */ static void do_mount_md(const char *args, const char *mtpoint) { int rv; rv = run(NULL, "%s%s /dev/%s%d%s %s", _PATH_MOUNT, args, mdname, unit, mdsuffix, mtpoint); if (rv) errx(1, "mount exited %s %d", run_exitstr(rv), run_exitnumber(rv)); } /* * Mount the configured tmpfs. */ static void do_mount_tmpfs(const char *args, const char *mtpoint) { int rv; rv = run(NULL, "%s -t tmpfs %s tmp %s", _PATH_MOUNT, args, mtpoint); if (rv) errx(1, "tmpfs mount exited %s %d", run_exitstr(rv), run_exitnumber(rv)); } /* * Various configuration of the mountpoint. Mostly, enact 'mip'. */ static void do_mtptsetup(const char *mtpoint, struct mtpt_info *mip) { struct statfs sfs; if (!mip->mi_have_mode && !mip->mi_have_uid && !mip->mi_have_gid) return; if (!norun) { if (statfs(mtpoint, &sfs) == -1) { warn("statfs: %s", mtpoint); return; } if ((sfs.f_flags & MNT_RDONLY) != 0) { if (mip->mi_forced_pw) { warnx( "Not changing mode/owner of %s since it is read-only", mtpoint); } else { debugprintf( "Not changing mode/owner of %s since it is read-only", mtpoint); } return; } } if (mip->mi_have_mode) { debugprintf("changing mode of %s to %o.", mtpoint, mip->mi_mode); if (!norun) if (chmod(mtpoint, mip->mi_mode) == -1) err(1, "chmod: %s", mtpoint); } /* * We have to do these separately because the user may have * only specified one of them. */ if (mip->mi_have_uid) { debugprintf("changing owner (user) or %s to %u.", mtpoint, mip->mi_uid); if (!norun) if (chown(mtpoint, mip->mi_uid, -1) == -1) err(1, "chown %s to %u (user)", mtpoint, mip->mi_uid); } if (mip->mi_have_gid) { debugprintf("changing owner (group) or %s to %u.", mtpoint, mip->mi_gid); if (!norun) if (chown(mtpoint, -1, mip->mi_gid) == -1) err(1, "chown %s to %u (group)", mtpoint, mip->mi_gid); } } /* * Put a file system on the memory disk. */ static void do_newfs(const char *args) { int rv; rv = run(NULL, "%s%s /dev/%s%d", _PATH_NEWFS, args, mdname, unit); if (rv) errx(1, "newfs exited %s %d", run_exitstr(rv), run_exitnumber(rv)); } /* * 'str' should be a user and group name similar to the last argument * to chown(1); i.e., a user, followed by a colon, followed by a * group. The user and group in 'str' may be either a [ug]id or a * name. Upon return, the uid and gid fields in 'mip' will contain * the uid and gid of the user and group name in 'str', respectively. * * In other words, this derives a user and group id from a string * formatted like the last argument to chown(1). * * Notice: At this point we don't support only a username or only a * group name. do_mtptsetup already does, so when this feature is * desired, this is the only routine that needs to be changed. 
*/ static void extract_ugid(const char *str, struct mtpt_info *mip) { char *ug; /* Writable 'str'. */ char *user, *group; /* Result of extraction. */ struct passwd *pw; struct group *gr; char *p; uid_t *uid; gid_t *gid; uid = &mip->mi_uid; gid = &mip->mi_gid; mip->mi_have_uid = mip->mi_have_gid = false; /* Extract the user and group from 'str'. Format above. */ ug = strdup(str); assert(ug != NULL); group = ug; user = strsep(&group, ":"); if (user == NULL || group == NULL || *user == '\0' || *group == '\0') usage(); /* Derive uid. */ *uid = strtoul(user, &p, 10); if (*uid == (uid_t)ULONG_MAX) usage(); if (*p != '\0') { pw = getpwnam(user); if (pw == NULL) errx(1, "invalid user: %s", user); *uid = pw->pw_uid; } mip->mi_have_uid = true; /* Derive gid. */ *gid = strtoul(group, &p, 10); if (*gid == (gid_t)ULONG_MAX) usage(); if (*p != '\0') { gr = getgrnam(group); if (gr == NULL) errx(1, "invalid group: %s", group); *gid = gr->gr_gid; } mip->mi_have_gid = true; free(ug); } /* * Run a process with command name and arguments pointed to by the * formatted string 'cmdline'. Since system(3) is not used, the first * space-delimited token of 'cmdline' must be the full pathname of the * program to run. * * The return value is the return code of the process spawned, or a negative * signal number if the process exited due to an uncaught signal. * * If 'ofd' is non-NULL, it is set to the standard output of * the program spawned (i.e., you can read from ofd and get the output * of the program). */ static int run(int *ofd, const char *cmdline, ...) { char **argv, **argvp; /* Result of splitting 'cmd'. */ int argc; char *cmd; /* Expansion of 'cmdline'. */ int pid, status; /* Child info. */ int pfd[2]; /* Pipe to the child. */ int nfd; /* Null (/dev/null) file descriptor. */ bool dup2dn; /* Dup /dev/null to stdout? */ va_list ap; char *p; int rv, i; dup2dn = true; va_start(ap, cmdline); rv = vasprintf(&cmd, cmdline, ap); if (rv == -1) err(1, "vasprintf"); va_end(ap); /* Split up 'cmd' into 'argv' for use with execve. */ for (argc = 1, p = cmd; (p = strchr(p, ' ')) != NULL; p++) argc++; /* 'argc' generation loop. */ argv = (char **)malloc(sizeof(*argv) * (argc + 1)); assert(argv != NULL); for (p = cmd, argvp = argv; (*argvp = strsep(&p, " ")) != NULL;) if (**argvp != '\0') if (++argvp >= &argv[argc]) { *argvp = NULL; break; } assert(*argv); /* The argv array ends up NULL-terminated here. */ /* Make sure the above loop works as expected. */ if (debug) { /* * We can't, but should, use debugprintf here. First, * it appends a trailing newline to the output, and * second it prepends "DEBUG: " to the output. The * former is a problem for this would-be first call, * and the latter for the would-be call inside the * loop. */ (void)fprintf(stderr, "DEBUG: running:"); /* Should be equivalent to 'cmd' (before strsep, of course). */ for (i = 0; argv[i] != NULL; i++) (void)fprintf(stderr, " %s", argv[i]); (void)fprintf(stderr, "\n"); } /* Create a pipe if necessary and fork the helper program. */ if (ofd != NULL) { if (pipe(&pfd[0]) == -1) err(1, "pipe"); *ofd = pfd[0]; dup2dn = false; } pid = fork(); switch (pid) { case 0: /* XXX can we call err() in here?
*/ if (norun) _exit(0); if (ofd != NULL) if (dup2(pfd[1], STDOUT_FILENO) < 0) err(1, "dup2"); if (!loudsubs) { nfd = open(_PATH_DEVNULL, O_RDWR); if (nfd == -1) err(1, "open: %s", _PATH_DEVNULL); if (dup2(nfd, STDIN_FILENO) < 0) err(1, "dup2"); if (dup2dn) if (dup2(nfd, STDOUT_FILENO) < 0) err(1, "dup2"); if (dup2(nfd, STDERR_FILENO) < 0) err(1, "dup2"); } (void)execv(argv[0], argv); warn("exec: %s", argv[0]); _exit(-1); case -1: err(1, "fork"); } free(cmd); free(argv); while (waitpid(pid, &status, 0) != pid) ; if (WIFEXITED(status)) return (WEXITSTATUS(status)); if (WIFSIGNALED(status)) return (-WTERMSIG(status)); err(1, "unexpected waitpid status: 0x%x", status); } /* * If run() returns non-zero, provide a string explaining why. */ static const char * run_exitstr(int rv) { if (rv > 0) return ("with error code"); if (rv < 0) return ("with signal"); return (NULL); } /* * If run returns non-zero, provide a relevant number. */ static int run_exitnumber(int rv) { if (rv < 0) return (-rv); return (rv); } static void usage(void) { fprintf(stderr, "usage: %s [-DLlMNnPStUX] [-a maxcontig] [-b block-size]\n" "\t[-c blocks-per-cylinder-group][-d max-extent-size] [-E path-mdconfig]\n" "\t[-e maxbpg] [-F file] [-f frag-size] [-i bytes] [-m percent-free]\n" "\t[-O optimization] [-o mount-options]\n" "\t[-p permissions] [-s size] [-v version] [-w user:group]\n" "\tmd-device mount-point\n", getprogname()); exit(1); } Index: projects/import-googletest-1.8.1/sbin/nvmecontrol/firmware.c =================================================================== --- projects/import-googletest-1.8.1/sbin/nvmecontrol/firmware.c (revision 344270) +++ projects/import-googletest-1.8.1/sbin/nvmecontrol/firmware.c (revision 344271) @@ -1,337 +1,336 @@ /*- * SPDX-License-Identifier: BSD-2-Clause-FreeBSD * * Copyright (c) 2013 EMC Corp. * All rights reserved. * * Copyright (C) 2012-2013 Intel Corporation * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. 
*/ #include __FBSDID("$FreeBSD$"); #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include "nvmecontrol.h" #define FIRMWARE_USAGE \ "firmware [-s slot] [-f path_to_firmware] [-a] \n" static int slot_has_valid_firmware(int fd, int slot) { struct nvme_firmware_page fw; int has_fw = false; read_logpage(fd, NVME_LOG_FIRMWARE_SLOT, NVME_GLOBAL_NAMESPACE_TAG, &fw, sizeof(fw)); if (fw.revision[slot-1] != 0LLU) has_fw = true; return (has_fw); } static void read_image_file(char *path, void **buf, int32_t *size) { struct stat sb; int32_t filesize; int fd; *size = 0; *buf = NULL; if ((fd = open(path, O_RDONLY)) < 0) err(1, "unable to open '%s'", path); if (fstat(fd, &sb) < 0) err(1, "unable to stat '%s'", path); /* * The NVMe spec does not explicitly state a maximum firmware image * size, although one can be inferred from the dword size limitation * for the size and offset fields in the Firmware Image Download * command. * * Technically, the max is UINT32_MAX * sizeof(uint32_t), since the * size and offsets are specified in terms of dwords (not bytes), but * realistically INT32_MAX is sufficient here and simplifies matters * a bit. */ if (sb.st_size > INT32_MAX) errx(1, "size of file '%s' is too large (%jd bytes)", path, (intmax_t)sb.st_size); filesize = (int32_t)sb.st_size; if ((*buf = malloc(filesize)) == NULL) errx(1, "unable to malloc %d bytes", filesize); if ((*size = read(fd, *buf, filesize)) < 0) err(1, "error reading '%s'", path); /* XXX assuming no short reads */ if (*size != filesize) errx(1, "error reading '%s' (read %d bytes, requested %d bytes)", path, *size, filesize); } static void update_firmware(int fd, uint8_t *payload, int32_t payload_size) { struct nvme_pt_command pt; int32_t off, resid, size; void *chunk; off = 0; resid = payload_size; if ((chunk = aligned_alloc(PAGE_SIZE, NVME_MAX_XFER_SIZE)) == NULL) errx(1, "unable to malloc %d bytes", NVME_MAX_XFER_SIZE); while (resid > 0) { size = (resid >= NVME_MAX_XFER_SIZE) ? 
NVME_MAX_XFER_SIZE : resid; memcpy(chunk, payload + off, size); memset(&pt, 0, sizeof(pt)); pt.cmd.opc = NVME_OPC_FIRMWARE_IMAGE_DOWNLOAD; pt.cmd.cdw10 = htole32((size / sizeof(uint32_t)) - 1); pt.cmd.cdw11 = htole32(off / sizeof(uint32_t)); pt.buf = chunk; pt.len = size; pt.is_read = 0; if (ioctl(fd, NVME_PASSTHROUGH_CMD, &pt) < 0) err(1, "firmware download request failed"); if (nvme_completion_is_error(&pt.cpl)) errx(1, "firmware download request returned error"); resid -= size; off += size; } } static int activate_firmware(int fd, int slot, int activate_action) { struct nvme_pt_command pt; uint16_t sct, sc; memset(&pt, 0, sizeof(pt)); pt.cmd.opc = NVME_OPC_FIRMWARE_ACTIVATE; pt.cmd.cdw10 = htole32((activate_action << 3) | slot); pt.is_read = 0; if (ioctl(fd, NVME_PASSTHROUGH_CMD, &pt) < 0) err(1, "firmware activate request failed"); sct = NVME_STATUS_GET_SCT(pt.cpl.status); sc = NVME_STATUS_GET_SC(pt.cpl.status); if (sct == NVME_SCT_COMMAND_SPECIFIC && sc == NVME_SC_FIRMWARE_REQUIRES_RESET) return 1; if (nvme_completion_is_error(&pt.cpl)) errx(1, "firmware activate request returned error"); return 0; } static void firmware(const struct nvme_function *nf, int argc, char *argv[]) { int fd = -1, slot = 0; - int a_flag, s_flag, f_flag; + int a_flag, f_flag; int activate_action, reboot_required; int opt; char *p, *image = NULL; char *controller = NULL, prompt[64]; void *buf = NULL; int32_t size = 0; uint16_t oacs_fw; uint8_t fw_slot1_ro, fw_num_slots; struct nvme_controller_data cdata; - a_flag = s_flag = f_flag = false; + a_flag = f_flag = false; while ((opt = getopt(argc, argv, "af:s:")) != -1) { switch (opt) { case 'a': a_flag = true; break; case 's': slot = strtol(optarg, &p, 0); if (p != NULL && *p != '\0') { fprintf(stderr, "\"%s\" not valid slot.\n", optarg); usage(nf); } else if (slot == 0) { fprintf(stderr, "0 is not a valid slot number. " "Slot numbers start at 1.\n"); usage(nf); } else if (slot > 7) { fprintf(stderr, "Slot number %s specified which is " "greater than max allowed slot number of " "7.\n", optarg); usage(nf); } - s_flag = true; break; case 'f': image = optarg; f_flag = true; break; } } /* Check that a controller (and not a namespace) was specified. 
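For the download loop earlier in this file, note that the Firmware Image Download command counts in dwords: cdw10 carries the 0-based number of dwords in the chunk and cdw11 the dword offset of the chunk within the image. Worked numbers (illustrative; assumes a 128 KiB NVME_MAX_XFER_SIZE and a 384 KiB image, giving three passes):

	// chunk 0: cdw10 = 131072/4 - 1 = 32767, cdw11 = 0
	// chunk 1: cdw10 = 32767,                cdw11 = 131072/4 = 32768
	// chunk 2: cdw10 = 32767,                cdw11 = 262144/4 = 65536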
*/ if (optind >= argc || strstr(argv[optind], NVME_NS_PREFIX) != NULL) usage(nf); if (!f_flag && !a_flag) { fprintf(stderr, "Neither a replace ([-f path_to_firmware]) nor " "activate ([-a]) firmware image action\n" "was specified.\n"); usage(nf); } if (!f_flag && a_flag && slot == 0) { fprintf(stderr, "Slot number to activate not specified.\n"); usage(nf); } controller = argv[optind]; open_dev(controller, &fd, 1, 1); read_controller_data(fd, &cdata); oacs_fw = (cdata.oacs >> NVME_CTRLR_DATA_OACS_FIRMWARE_SHIFT) & NVME_CTRLR_DATA_OACS_FIRMWARE_MASK; if (oacs_fw == 0) errx(1, "controller does not support firmware activate/download"); fw_slot1_ro = (cdata.frmw >> NVME_CTRLR_DATA_FRMW_SLOT1_RO_SHIFT) & NVME_CTRLR_DATA_FRMW_SLOT1_RO_MASK; if (f_flag && slot == 1 && fw_slot1_ro) errx(1, "slot %d is marked as read only", slot); fw_num_slots = (cdata.frmw >> NVME_CTRLR_DATA_FRMW_NUM_SLOTS_SHIFT) & NVME_CTRLR_DATA_FRMW_NUM_SLOTS_MASK; if (slot > fw_num_slots) errx(1, "slot %d specified but controller only supports %d slots", slot, fw_num_slots); if (a_flag && !f_flag && !slot_has_valid_firmware(fd, slot)) errx(1, "slot %d does not contain valid firmware,\n" "try 'nvmecontrol logpage -p 3 %s' to get a list " "of available images\n", slot, controller); if (f_flag) read_image_file(image, &buf, &size); if (f_flag && a_flag) printf("You are about to download and activate " "firmware image (%s) to controller %s.\n" "This may damage your controller and/or " "overwrite an existing firmware image.\n", image, controller); else if (a_flag) printf("You are about to activate a new firmware " "image on controller %s.\n" "This may damage your controller.\n", controller); else if (f_flag) printf("You are about to download firmware image " "(%s) to controller %s.\n" "This may damage your controller and/or " "overwrite an existing firmware image.\n", image, controller); printf("Are you sure you want to continue? (yes/no) "); while (1) { fgets(prompt, sizeof(prompt), stdin); if (strncasecmp(prompt, "yes", 3) == 0) break; if (strncasecmp(prompt, "no", 2) == 0) exit(1); printf("Please answer \"yes\" or \"no\". "); } if (f_flag) { update_firmware(fd, buf, size); if (a_flag) activate_action = NVME_AA_REPLACE_ACTIVATE; else activate_action = NVME_AA_REPLACE_NO_ACTIVATE; } else { activate_action = NVME_AA_ACTIVATE; } reboot_required = activate_firmware(fd, slot, activate_action); if (a_flag) { if (reboot_required) { printf("New firmware image activated but requires " "conventional reset (i.e. reboot) to " "complete activation.\n"); } else { printf("New firmware image activated and will take " "effect after next controller reset.\n" "Controller reset can be initiated via " "'nvmecontrol reset %s'\n", controller); } } close(fd); exit(0); } NVME_COMMAND(top, firmware, firmware, FIRMWARE_USAGE); Index: projects/import-googletest-1.8.1/share/man/man5/src.conf.5 =================================================================== --- projects/import-googletest-1.8.1/share/man/man5/src.conf.5 (revision 344270) +++ projects/import-googletest-1.8.1/share/man/man5/src.conf.5 (revision 344271) @@ -1,1937 +1,1941 @@ .\" DO NOT EDIT-- this file is @generated by tools/build/options/makeman. .\" $FreeBSD$ -.Dd January 31, 2019 +.Dd February 15, 2019 .Dt SRC.CONF 5 .Os .Sh NAME .Nm src.conf .Nd "source build options" .Sh DESCRIPTION The .Nm file contains settings that will apply to every build involving the .Fx source tree; see .Xr build 7 . .Pp The .Nm file uses the standard makefile syntax. 
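For example, a minimal
.Nm
consists of nothing but option assignments (illustrative values; as noted below, only the presence of a variable matters, not its value):
.Bd -literal -offset indent
WITH_CCACHE_BUILD=yes
WITHOUT_TESTS=yes
.Ed
.Pp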
However, .Nm should not specify any dependencies to .Xr make 1 . Instead, .Nm is to set .Xr make 1 variables that control aspects of how the system builds. .Pp The default location of .Nm is .Pa /etc/src.conf , though an alternative location can be specified in the .Xr make 1 variable .Va SRCCONF . Overriding the location of .Nm may be necessary if the system-wide settings are not suitable for a particular build. For instance, setting .Va SRCCONF to .Pa /dev/null effectively resets all build controls to their defaults. .Pp The only purpose of .Nm is to control the compilation of the .Fx source code, which is usually located in .Pa /usr/src . As a rule, the system administrator creates .Nm when the values of certain control variables need to be changed from their defaults. .Pp In addition, control variables can be specified for a particular build via the .Fl D option of .Xr make 1 or in its environment; see .Xr environ 7 . .Pp The environment of .Xr make 1 for the build can be controlled via the .Va SRC_ENV_CONF variable, which defaults to .Pa /etc/src-env.conf . Some examples that may only be set in this file are .Va WITH_DIRDEPS_BUILD , .Va WITH_META_MODE , and .Va MAKEOBJDIRPREFIX , as they are environment-only variables. .Pp The values of variables are ignored regardless of their setting, even if they would be set to .Dq Li FALSE or .Dq Li NO . The presence of an option causes it to be honored by .Xr make 1 . .Pp This list provides a name and short description for variables that can be used for source builds. .Bl -tag -width indent .It Va WITHOUT_ACCT Set to not build process accounting tools such as .Xr accton 8 and .Xr sa 8 . .It Va WITHOUT_ACPI Set to not build .Xr acpiconf 8 , .Xr acpidump 8 and related programs. .It Va WITHOUT_AMD Set to not build .Xr amd 8 and related programs. .It Va WITHOUT_APM Set to not build .Xr apm 8 , .Xr apmd 8 and related programs. .It Va WITHOUT_ASSERT_DEBUG Set to compile programs and libraries without the .Xr assert 3 checks. .It Va WITHOUT_AT Set to not build .Xr at 1 and related utilities. .It Va WITHOUT_ATM Set to not build programs and libraries related to ATM networking. .It Va WITHOUT_AUDIT Set to not build audit support into system programs. .It Va WITHOUT_AUTHPF Set to not build .Xr authpf 8 . .It Va WITHOUT_AUTOFS Set to not build .Xr autofs 5 related programs, libraries, and kernel modules. .It Va WITHOUT_AUTO_OBJ Disable automatic creation of objdirs. This is enabled by default if the wanted OBJDIR is writable by the current user. .Pp This must be set in the environment, make command line, or .Pa /etc/src-env.conf , not .Pa /etc/src.conf . .It Va WITHOUT_BHYVE Set to not build or install .Xr bhyve 8 , associated utilities, and examples. .Pp This option only affects amd64/amd64. .It Va WITH_BIND_NOW Build all binaries with the .Dv DF_BIND_NOW flag set to indicate that the run-time loader should perform all relocation processing at process startup rather than on demand. .It Va WITHOUT_BINUTILS Set to not build or install GNU .Xr as 1 , .Xr objdump 1 , and for some CPU architectures .Xr ld.bfd 1 as part of the normal system build. The resulting system cannot build programs from source. .Pp This is a default setting on arm64/aarch64 and riscv/riscv64. When set, it enforces these options: .Pp .Bl -item -compact .It .Va WITHOUT_GDB .El .It Va WITH_BINUTILS Set to build and install GNU .Xr as 1 , .Xr objdump 1 , and for some CPU architectures .Xr ld.bfd 1 as part of the normal system build.
.Pp This is a default setting on amd64/amd64, arm/arm, arm/armv6, arm/armv7, i386/i386, mips/mipsel, mips/mips, mips/mips64el, mips/mips64, mips/mipsn32, mips/mipselhf, mips/mipshf, mips/mips64elhf, mips/mips64hf, powerpc/powerpc, powerpc/powerpc64, powerpc/powerpcspe and sparc64/sparc64. .It Va WITHOUT_BINUTILS_BOOTSTRAP Set to not build binutils (as, ld, and objdump) as part of the bootstrap process. .Bf -symbolic The option does not work for build targets unless some alternative toolchain is provided. .Ef .Pp This is a default setting on arm64/aarch64 and riscv/riscv64. .It Va WITH_BINUTILS_BOOTSTRAP Set to build binutils (as, ld, and objdump) as part of the bootstrap process. .Pp This is a default setting on amd64/amd64, arm/arm, arm/armv6, arm/armv7, i386/i386, mips/mipsel, mips/mips, mips/mips64el, mips/mips64, mips/mipsn32, mips/mipselhf, mips/mipshf, mips/mips64elhf, mips/mips64hf, powerpc/powerpc, powerpc/powerpc64, powerpc/powerpcspe and sparc64/sparc64. .It Va WITHOUT_BLACKLIST Set this if you do not want to build .Xr blacklistd 8 and .Xr blacklistctl 8 . When set, these options are also in effect: .Pp .Bl -inset -compact .It Va WITHOUT_BLACKLIST_SUPPORT (unless .Va WITH_BLACKLIST_SUPPORT is set explicitly) .El .It Va WITHOUT_BLACKLIST_SUPPORT Set to build some programs without .Xr libblacklist 3 support, like .Xr fingerd 8 , .Xr ftpd 8 , .Xr rlogind 8 , .Xr rshd 8 , and .Xr sshd 8 . .It Va WITHOUT_BLUETOOTH Set to not build Bluetooth related kernel modules, programs and libraries. .It Va WITHOUT_BOOT Set to not build the boot blocks and loader. .It Va WITHOUT_BOOTPARAMD Set to not build or install .Xr bootparamd 8 . .It Va WITHOUT_BOOTPD Set to not build or install .Xr bootpd 8 . .It Va WITHOUT_BSDINSTALL Set to not build .Xr bsdinstall 8 , .Xr sade 8 , and related programs. .It Va WITHOUT_BSD_CPIO Set to not build the BSD licensed version of cpio based on .Xr libarchive 3 . .It Va WITHOUT_BSD_CRTBEGIN Disable the BSD licensed .Pa crtbegin.o and .Pa crtend.o . .Pp This is a default setting on powerpc/powerpc, powerpc/powerpc64, powerpc/powerpcspe and sparc64/sparc64. .It Va WITH_BSD_CRTBEGIN Enable the BSD licensed .Pa crtbegin.o and .Pa crtend.o . .Pp This is a default setting on amd64/amd64, arm/arm, arm/armv6, arm/armv7, arm64/aarch64, i386/i386, mips/mipsel, mips/mips, mips/mips64el, mips/mips64, mips/mipsn32, mips/mipselhf, mips/mipshf, mips/mips64elhf, mips/mips64hf and riscv/riscv64. .It Va WITH_BSD_GREP Install BSD-licensed grep as '[ef]grep' instead of GNU grep. .It Va WITHOUT_BSNMP Set to not build or install .Xr bsnmpd 1 and related libraries and data files. .It Va WITHOUT_BZIP2 Set to not build contributed bzip2 software as a part of the base system. .Bf -symbolic The option has no effect yet. .Ef When set, these options are also in effect: .Pp .Bl -inset -compact .It Va WITHOUT_BZIP2_SUPPORT (unless .Va WITH_BZIP2_SUPPORT is set explicitly) .El .It Va WITHOUT_BZIP2_SUPPORT Set to build some programs without optional bzip2 support. .It Va WITHOUT_CALENDAR Set to not build .Xr calendar 1 . .It Va WITHOUT_CAPSICUM Set to not build Capsicum support into system programs. When set, it enforces these options: .Pp .Bl -item -compact .It .Va WITHOUT_CASPER .El .It Va WITHOUT_CASPER Set to not build the Casper program and related libraries. .It Va WITH_CCACHE_BUILD Set to use .Xr ccache 1 for the build. No configuration is required except to install the .Sy devel/ccache package. When used with .Xr distcc 1 , set .Sy CCACHE_PREFIX=/usr/local/bin/distcc .
The default cache directory of .Pa $HOME/.ccache will be used, which can be overridden by setting .Sy CCACHE_DIR . The .Sy CCACHE_COMPILERCHECK option defaults to .Sy content when using the in-tree bootstrap compiler, and .Sy mtime when using an external compiler. The .Sy CCACHE_CPP2 option is used for Clang but not GCC. .Pp Sharing a cache between multiple work directories requires using a layout similar to .Pa /some/prefix/src .Pa /some/prefix/obj and an environment such as: .Bd -literal -offset indent CCACHE_BASEDIR='${SRCTOP:H}' MAKEOBJDIRPREFIX='${SRCTOP:H}/obj' .Ed .Pp See .Xr ccache 1 for more configuration options. .It Va WITHOUT_CCD Set to not build .Xr geom_ccd 4 and related utilities. .It Va WITHOUT_CDDL Set to not build code licensed under Sun's CDDL. When set, it enforces these options: .Pp .Bl -item -compact .It .Va WITHOUT_CTF .It .Va WITHOUT_LOADER_ZFS .It .Va WITHOUT_ZFS .El .It Va WITHOUT_CLANG Set to not build the Clang C/C++ compiler during the regular phase of the build. .Pp This is a default setting on riscv/riscv64 and sparc64/sparc64. When set, it enforces these options: .Pp .Bl -item -compact .It .Va WITHOUT_CLANG_EXTRAS .It .Va WITHOUT_CLANG_FULL .It .Va WITHOUT_LLVM_COV .El .Pp When set, these options are also in effect: .Pp .Bl -inset -compact .It Va WITHOUT_LLVM_TARGET_AARCH64 (unless .Va WITH_LLVM_TARGET_AARCH64 is set explicitly) .It Va WITHOUT_LLVM_TARGET_ALL (unless .Va WITH_LLVM_TARGET_ALL is set explicitly) .It Va WITHOUT_LLVM_TARGET_ARM (unless .Va WITH_LLVM_TARGET_ARM is set explicitly) .It Va WITHOUT_LLVM_TARGET_MIPS (unless .Va WITH_LLVM_TARGET_MIPS is set explicitly) .It Va WITHOUT_LLVM_TARGET_POWERPC (unless .Va WITH_LLVM_TARGET_POWERPC is set explicitly) .It Va WITHOUT_LLVM_TARGET_SPARC (unless .Va WITH_LLVM_TARGET_SPARC is set explicitly) .It Va WITHOUT_LLVM_TARGET_X86 (unless .Va WITH_LLVM_TARGET_X86 is set explicitly) .El .It Va WITH_CLANG Set to build the Clang C/C++ compiler during the normal phase of the build. .Pp This is a default setting on amd64/amd64, arm/arm, arm/armv6, arm/armv7, arm64/aarch64, i386/i386, mips/mipsel, mips/mips, mips/mips64el, mips/mips64, mips/mipsn32, mips/mipselhf, mips/mipshf, mips/mips64elhf, mips/mips64hf, powerpc/powerpc, powerpc/powerpc64 and powerpc/powerpcspe. .It Va WITHOUT_CLANG_BOOTSTRAP Set to not build the Clang C/C++ compiler during the bootstrap phase of the build. To be able to build the system, either gcc or clang bootstrap must be enabled unless an alternate compiler is provided via XCC. .Pp This is a default setting on mips/mipsel, mips/mips, mips/mips64el, mips/mips64, mips/mipsn32, mips/mipselhf, mips/mipshf, mips/mips64elhf, mips/mips64hf, powerpc/powerpc, powerpc/powerpc64, powerpc/powerpcspe, riscv/riscv64 and sparc64/sparc64. .It Va WITH_CLANG_BOOTSTRAP Set to build the Clang C/C++ compiler during the bootstrap phase of the build. .Pp This is a default setting on amd64/amd64, arm/arm, arm/armv6, arm/armv7, arm64/aarch64 and i386/i386. .It Va WITH_CLANG_EXTRAS -Set to build additional clang and llvm tools, such as bugpoint. +Set to build additional clang and llvm tools, such as bugpoint and +clang-format. .It Va WITHOUT_CLANG_FULL Set to avoid building the ARCMigrate, Rewriter and StaticAnalyzer components of the Clang C/C++ compiler. .Pp This is a default setting on riscv/riscv64 and sparc64/sparc64. .It Va WITH_CLANG_FULL Set to build the ARCMigrate, Rewriter and StaticAnalyzer components of the Clang C/C++ compiler. 
.Pp This is a default setting on amd64/amd64, arm/arm, arm/armv6, arm/armv7, arm64/aarch64, i386/i386, mips/mipsel, mips/mips, mips/mips64el, mips/mips64, mips/mipsn32, mips/mipselhf, mips/mipshf, mips/mips64elhf, mips/mips64hf, powerpc/powerpc, powerpc/powerpc64 and powerpc/powerpcspe. .It Va WITHOUT_CLANG_IS_CC Set to install the GCC compiler as .Pa /usr/bin/cc , .Pa /usr/bin/c++ and .Pa /usr/bin/cpp . .Pp This is a default setting on mips/mipsel, mips/mips, mips/mips64el, mips/mips64, mips/mipsn32, mips/mipselhf, mips/mipshf, mips/mips64elhf, mips/mips64hf, powerpc/powerpc, powerpc/powerpc64, powerpc/powerpcspe, riscv/riscv64 and sparc64/sparc64. .It Va WITH_CLANG_IS_CC Set to install the Clang C/C++ compiler as .Pa /usr/bin/cc , .Pa /usr/bin/c++ and .Pa /usr/bin/cpp . .Pp This is a default setting on amd64/amd64, arm/arm, arm/armv6, arm/armv7, arm64/aarch64 and i386/i386. .It Va WITHOUT_CPP Set to not build .Xr cpp 1 . .It Va WITHOUT_CROSS_COMPILER Set to not build any cross compiler in the cross-tools stage of buildworld. When compiling a different version of .Fx than what is installed on the system, provide an alternate compiler with XCC to ensure success. When compiling with an identical version of .Fx to the host, this option may be safely used. This option may also be safe when the host version of .Fx is close to the sources being built, but all bets are off if there have been any changes to the toolchain between the versions. When set, it enforces these options: .Pp .Bl -item -compact .It .Va WITHOUT_BINUTILS_BOOTSTRAP .It .Va WITHOUT_CLANG_BOOTSTRAP .It .Va WITHOUT_ELFTOOLCHAIN_BOOTSTRAP .It .Va WITHOUT_GCC_BOOTSTRAP .It .Va WITHOUT_LLD_BOOTSTRAP .El .It Va WITHOUT_CRYPT Set to not build any crypto code. When set, it enforces these options: .Pp .Bl -item -compact .It .Va WITHOUT_KERBEROS .It .Va WITHOUT_OPENSSH .It .Va WITHOUT_OPENSSL .El .Pp When set, these options are also in effect: .Pp .Bl -inset -compact .It Va WITHOUT_GSSAPI (unless .Va WITH_GSSAPI is set explicitly) .El .It Va WITH_CTF Set to compile with CTF (Compact C Type Format) data. CTF data encapsulates a reduced form of debugging information similar to DWARF and the venerable stabs and is required for DTrace. .It Va WITHOUT_CUSE Set to not build CUSE-related programs and libraries. .It Va WITHOUT_CXGBETOOL Set to not build .Xr cxgbetool 8 . .Pp This is a default setting on arm/arm, arm/armv6, arm/armv7, mips/mipsel, mips/mips, mips/mips64el, mips/mips64, mips/mipsn32, mips/mipselhf, mips/mipshf, mips/mips64elhf, mips/mips64hf, powerpc/powerpc, powerpc/powerpcspe and riscv/riscv64. .It Va WITH_CXGBETOOL Set to build .Xr cxgbetool 8 . .Pp This is a default setting on amd64/amd64, arm64/aarch64, i386/i386, powerpc/powerpc64 and sparc64/sparc64. .It Va WITHOUT_CXX Set to not build .Xr c++ 1 and related libraries. It will also prevent building of .Xr gperf 1 and .Xr devd 8 . When set, it enforces these options: .Pp .Bl -item -compact .It .Va WITHOUT_CLANG .It .Va WITHOUT_CLANG_EXTRAS .It .Va WITHOUT_CLANG_FULL .It .Va WITHOUT_DTRACE_TESTS .It .Va WITHOUT_GNUCXX .It .Va WITHOUT_LLVM_COV .It .Va WITHOUT_TESTS .El .It Va WITHOUT_DEBUG_FILES Set to avoid building or installing standalone debug files for each executable binary and shared library. .It Va WITHOUT_DIALOG Set to not build .Xr dialog 1 , .Xr dialog 3 , .Xr dpv 1 , and .Xr dpv 3 . When set, it enforces these options: .Pp .Bl -item -compact .It .Va WITHOUT_BSDINSTALL .El .It Va WITHOUT_DICT Set to not build the Webster dictionary files.
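.Pp The boolean semantics described earlier are easiest to see in a concrete file. The following is an illustrative sketch of a minimal .Pa /etc/src.conf (the chosen knobs are arbitrary examples, not a recommendation):
.Bd -literal -offset indent
# Values are ignored; the mere presence of a variable enables it.
WITHOUT_GAMES=yes
WITH_CCACHE_BUILD=yes
.Ed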
.It Va WITH_DIRDEPS_BUILD This is an experimental build system. For details see http://www.crufty.net/sjg/docs/freebsd-meta-mode.htm. Build commands can be seen from the top-level with: .Dl make show-valid-targets The build is driven by dirdeps.mk using .Va DIRDEPS stored in Makefile.depend files found in each directory. .Pp The build can be started from anywhere, and behaves the same. The initial instance of .Xr make 1 recursively reads .Va DIRDEPS from .Pa Makefile.depend , computing a graph of tree dependencies from the current origin. Setting .Va NO_DIRDEPS skips checking dirdep dependencies and will only build in the current and child directories. .Va NO_DIRDEPS_BELOW skips building any dirdeps and only builds the current directory. .Pp This also utilizes the .Va WITH_META_MODE logic for incremental builds. .Pp The build hides commands executed unless .Va NO_SILENT is defined. .Pp Note that there is currently no mass install feature for this. .Pp When set, it enforces these options: .Pp .Bl -item -compact .It .Va WITH_INSTALL_AS_USER .El .Pp When set, these options are also in effect: .Pp .Bl -inset -compact .It Va WITH_META_MODE (unless .Va WITHOUT_META_MODE is set explicitly) .It Va WITH_STAGING (unless .Va WITHOUT_STAGING is set explicitly) .It Va WITH_STAGING_MAN (unless .Va WITHOUT_STAGING_MAN is set explicitly) .It Va WITH_STAGING_PROG (unless .Va WITHOUT_STAGING_PROG is set explicitly) .It Va WITH_SYSROOT (unless .Va WITHOUT_SYSROOT is set explicitly) .El .Pp This must be set in the environment, make command line, or .Pa /etc/src-env.conf , not .Pa /etc/src.conf . .It Va WITH_DIRDEPS_CACHE Cache the result of dirdeps.mk, which can save significant time for subsequent builds. Depends on .Va WITH_DIRDEPS_BUILD . .Pp This must be set in the environment, make command line, or .Pa /etc/src-env.conf , not .Pa /etc/src.conf . .It Va WITHOUT_DMAGENT Set to not build the dma Mail Transport Agent. .It Va WITHOUT_DOCCOMPRESS Set to not install compressed system documentation. Only the uncompressed version will be installed. .It Va WITH_DTRACE_TESTS Set to build and install the DTrace test suite in .Pa /usr/tests/cddl/usr.sbin/dtrace . This test suite is considered experimental on architectures other than amd64/amd64 and running it may cause system instability. .It Va WITHOUT_DYNAMICROOT Set this if you do not want to link .Pa /bin and .Pa /sbin dynamically. .It Va WITHOUT_EE Set to not build and install .Xr edit 1 , .Xr ee 1 , and related programs. .It Va WITHOUT_EFI Set to not build .Xr efivar 3 and .Xr efivar 8 . .Pp This is a default setting on mips/mipsel, mips/mips, mips/mips64el, mips/mips64, mips/mipsn32, mips/mipselhf, mips/mipshf, mips/mips64elhf, mips/mips64hf, powerpc/powerpc, powerpc/powerpc64, powerpc/powerpcspe, riscv/riscv64 and sparc64/sparc64. .It Va WITH_EFI Set to build .Xr efivar 3 and .Xr efivar 8 . .Pp This is a default setting on amd64/amd64, arm/arm, arm/armv6, arm/armv7, arm64/aarch64 and i386/i386. .It Va WITHOUT_ELFTOOLCHAIN_BOOTSTRAP Set to not build ELF Tool Chain tools (addr2line, nm, size, strings and strip) as part of the bootstrap process. .Bf -symbolic An alternate bootstrap tool chain must be provided. .Ef .It Va WITHOUT_EXAMPLES Set to avoid installing examples to .Pa /usr/share/examples/ . .It Va WITH_EXPERIMENTAL Set to include experimental features in the build. .It Va WITH_EXTRA_TCP_STACKS Set to build extra TCP stack modules. .It Va WITHOUT_FDT Set to not build Flattened Device Tree support as part of the base system.
This includes the device tree compiler (dtc) and libfdt support library. .It Va WITHOUT_FILE Set to not build .Xr file 1 and related programs. .It Va WITHOUT_FINGER Set to not build or install .Xr finger 1 and .Xr fingerd 8 . .It Va WITHOUT_FLOPPY Set to not build or install programs for operating the floppy disk driver. .It Va WITHOUT_FMTREE Set to not build and install .Pa /usr/sbin/fmtree . .It Va WITHOUT_FORMAT_EXTENSIONS Set to not enable .Fl fformat-extensions when compiling the kernel. Also disables all format checking. .It Va WITHOUT_FORTH Set to build bootloaders without Forth support. .It Va WITHOUT_FP_LIBC Set to build .Nm libc without floating-point support. .It Va WITHOUT_FREEBSD_UPDATE Set to not build .Xr freebsd-update 8 . .It Va WITHOUT_FTP Set to not build or install .Xr ftp 1 and .Xr ftpd 8 . .It Va WITHOUT_GAMES Set to not build games. .It Va WITHOUT_GCC Set to not build and install gcc and g++ as part of the normal build process. .Pp This is a default setting on amd64/amd64, arm/arm, arm/armv6, arm/armv7, arm64/aarch64, i386/i386 and riscv/riscv64. .It Va WITH_GCC Set to build and install gcc and g++. .Pp This is a default setting on mips/mipsel, mips/mips, mips/mips64el, mips/mips64, mips/mipsn32, mips/mipselhf, mips/mipshf, mips/mips64elhf, mips/mips64hf, powerpc/powerpc, powerpc/powerpc64, powerpc/powerpcspe and sparc64/sparc64. .It Va WITHOUT_GCC_BOOTSTRAP Set to not build gcc and g++ as part of the bootstrap process. You must enable either gcc or clang bootstrap to be able to build the system, unless an alternative compiler is provided via XCC. .Pp This is a default setting on amd64/amd64, arm/arm, arm/armv6, arm/armv7, arm64/aarch64, i386/i386 and riscv/riscv64. .It Va WITH_GCC_BOOTSTRAP Set to build gcc and g++ as part of the bootstrap process. .Pp This is a default setting on mips/mipsel, mips/mips, mips/mips64el, mips/mips64, mips/mipsn32, mips/mipselhf, mips/mipshf, mips/mips64elhf, mips/mips64hf, powerpc/powerpc, powerpc/powerpc64, powerpc/powerpcspe and sparc64/sparc64. .It Va WITHOUT_GCOV Set to not build the .Xr gcov 1 tool. .It Va WITHOUT_GDB Set to not build .Xr gdb 1 . .Pp This is a default setting on arm64/aarch64 and riscv/riscv64. .It Va WITH_GDB Set to build .Xr gdb 1 . .Pp This is a default setting on amd64/amd64, arm/arm, arm/armv6, arm/armv7, i386/i386, mips/mipsel, mips/mips, mips/mips64el, mips/mips64, mips/mipsn32, mips/mipselhf, mips/mipshf, mips/mips64elhf, mips/mips64hf, powerpc/powerpc, powerpc/powerpc64, powerpc/powerpcspe and sparc64/sparc64. .It Va WITHOUT_GDB_LIBEXEC Set to install .Xr gdb 1 into .Pa /usr/bin . .Pp This is a default setting on sparc64/sparc64. .It Va WITH_GDB_LIBEXEC Set to install .Xr gdb 1 into .Pa /usr/libexec . This permits .Xr gdb 1 to be used as a fallback for .Xr crashinfo 8 if a newer version is not installed. .Pp This is a default setting on amd64/amd64, arm/arm, arm/armv6, arm/armv7, arm64/aarch64, i386/i386, mips/mipsel, mips/mips, mips/mips64el, mips/mips64, mips/mipsn32, mips/mipselhf, mips/mipshf, mips/mips64elhf, mips/mips64hf, powerpc/powerpc, powerpc/powerpc64, powerpc/powerpcspe and riscv/riscv64. .It Va WITHOUT_GNUCXX Do not build the GNU C++ stack (g++, libstdc++). This is the default on platforms where clang is the system compiler. .Pp This is a default setting on amd64/amd64, arm/arm, arm/armv6, arm/armv7, arm64/aarch64 and i386/i386. .It Va WITH_GNUCXX Build the GNU C++ stack (g++, libstdc++). This is the default on platforms where gcc is the system compiler.
.Pp This is a default setting on mips/mipsel, mips/mips, mips/mips64el, mips/mips64, mips/mipsn32, mips/mipselhf, mips/mipshf, mips/mips64elhf, mips/mips64hf, powerpc/powerpc, powerpc/powerpc64, powerpc/powerpcspe, riscv/riscv64 and sparc64/sparc64. .It Va WITHOUT_GNU_DIFF Set to not build GNU .Xr diff 1 and .Xr diff3 1 . .It Va WITHOUT_GNU_GREP Set to not build GNU .Xr grep 1 . .It Va WITH_GNU_GREP_COMPAT Set this option to include GNU extensions in .Xr bsdgrep 1 by linking against libgnuregex. .It Va WITHOUT_GPIO Set to not build .Xr gpioctl 8 as part of the base system. .It Va WITHOUT_GPL_DTC Set to build the BSD licensed version of the device tree compiler rather than the GPLed one from elinux.org. .Pp This is a default setting on amd64/amd64, arm/arm, arm/armv6, arm/armv7, arm64/aarch64 and i386/i386. .It Va WITH_GPL_DTC Set to build the GPL'd version of the device tree compiler from elinux.org, instead of the BSD licensed one. .Pp This is a default setting on mips/mipsel, mips/mips, mips/mips64el, mips/mips64, mips/mipsn32, mips/mipselhf, mips/mipshf, mips/mips64elhf, mips/mips64hf, powerpc/powerpc, powerpc/powerpc64, powerpc/powerpcspe, riscv/riscv64 and sparc64/sparc64. .It Va WITHOUT_GSSAPI Set to not build libgssapi. .It Va WITHOUT_HAST Set to not build .Xr hastd 8 and related utilities. .It Va WITH_HESIOD Set to build Hesiod support. .It Va WITHOUT_HTML Set to not build HTML docs. .It Va WITHOUT_HYPERV Set to not build or install HyperV utilities. .Pp This is a default setting on arm/arm, arm/armv6, arm/armv7, arm64/aarch64, mips/mipsel, mips/mips, mips/mips64el, mips/mips64, mips/mipsn32, mips/mipselhf, mips/mipshf, mips/mips64elhf, mips/mips64hf, powerpc/powerpc, powerpc/powerpc64, powerpc/powerpcspe, riscv/riscv64 and sparc64/sparc64. .It Va WITH_HYPERV Set to build and install HyperV utilities. .Pp This is a default setting on amd64/amd64 and i386/i386. .It Va WITHOUT_ICONV Set to not build iconv as part of libc. .It Va WITHOUT_INCLUDES Set to not install header files. This option used to be spelled .Va NO_INCS . .Bf -symbolic The option does not work for build targets. .Ef .It Va WITHOUT_INET Set to not build programs and libraries related to IPv4 networking. When set, it enforces these options: .Pp .Bl -item -compact .It .Va WITHOUT_INET_SUPPORT .El .It Va WITHOUT_INET6 Set to not build programs and libraries related to IPv6 networking. When set, it enforces these options: .Pp .Bl -item -compact .It .Va WITHOUT_INET6_SUPPORT .El .It Va WITHOUT_INET6_SUPPORT Set to build libraries, programs, and kernel modules without IPv6 support. .It Va WITHOUT_INETD Set to not build .Xr inetd 8 . .It Va WITHOUT_INET_SUPPORT Set to build libraries, programs, and kernel modules without IPv4 support. .It Va WITHOUT_INSTALLLIB Set this to not install optional libraries, for example when creating a .Xr nanobsd 8 image. .Bf -symbolic The option does not work for build targets. .Ef .It Va WITH_INSTALL_AS_USER Set to make install targets succeed for non-root users by installing files with owner and group attributes set to that of the user running the .Xr make 1 command. The user still must set the .Va DESTDIR variable to point to a directory where the user has write permissions. .It Va WITHOUT_IPFILTER Set to not build the IP Filter package. .It Va WITHOUT_IPFW Set to not build IPFW tools. .It Va WITHOUT_IPSEC_SUPPORT Set to not build the kernel with .Xr ipsec 4 support. This support is needed for .Xr ipsec 4 and .Xr tcpmd5 4 . .It Va WITHOUT_ISCSI Set to not build .Xr iscsid 8 and related utilities.
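.Pp A hypothetical unprivileged install using .Va WITH_INSTALL_AS_USER as described above (the destination path is illustrative):
.Bd -literal -offset indent
# Installed files are owned by the invoking user; DESTDIR must
# point to a directory writable by that user.
make -DWITH_INSTALL_AS_USER DESTDIR=$HOME/stage installworld
.Ed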
.It Va WITHOUT_JAIL Set to not build tools for the support of jails; e.g., .Xr jail 8 . .It Va WITHOUT_KDUMP Set to not build .Xr kdump 1 and .Xr truss 1 . .It Va WITHOUT_KERBEROS Set this to not build Kerberos 5 (KTH Heimdal). When set, these options are also in effect: .Pp .Bl -inset -compact .It Va WITHOUT_GSSAPI (unless .Va WITH_GSSAPI is set explicitly) .It Va WITHOUT_KERBEROS_SUPPORT (unless .Va WITH_KERBEROS_SUPPORT is set explicitly) .El .It Va WITHOUT_KERBEROS_SUPPORT Set to build some programs without Kerberos support, like .Xr ssh 1 , .Xr telnet 1 , .Xr sshd 8 , and .Xr telnetd 8 . .It Va WITH_KERNEL_RETPOLINE Set to enable the "retpoline" mitigation for CVE-2017-5715 in the kernel build. .It Va WITHOUT_KERNEL_SYMBOLS Set to not install kernel symbol files. .Bf -symbolic This option is recommended for those with small root partitions. .Ef .It Va WITHOUT_KVM Set to not build the .Nm libkvm library as a part of the base system. .Bf -symbolic The option has no effect yet. .Ef When set, these options are also in effect: .Pp .Bl -inset -compact .It Va WITHOUT_KVM_SUPPORT (unless .Va WITH_KVM_SUPPORT is set explicitly) .El .It Va WITHOUT_KVM_SUPPORT Set to build some programs without optional .Nm libkvm support. .It Va WITHOUT_LDNS Setting this variable will prevent the LDNS library from being built. When set, it enforces these options: .Pp .Bl -item -compact .It .Va WITHOUT_LDNS_UTILS .It .Va WITHOUT_UNBOUND .El .It Va WITHOUT_LDNS_UTILS Setting this variable will prevent building the LDNS utilities .Xr drill 1 and .Xr host 1 . .It Va WITHOUT_LEGACY_CONSOLE Set to not build programs that support a legacy PC console; e.g., .Xr kbdcontrol 1 and .Xr vidcontrol 1 . .It Va WITHOUT_LIB32 On 64-bit platforms, set to not build the 32-bit library set and the .Nm ld-elf32.so.1 runtime linker. .It Va WITHOUT_LIBCPLUSPLUS Set to avoid building libcxxrt and libc++. .It Va WITHOUT_LIBPTHREAD Set to not build .Nm libthr , the library providing .Nm libpthread . When set, it enforces these options: .Pp .Bl -item -compact .It .Va WITHOUT_LIBTHR .El .It Va WITH_LIBSOFT On armv6 only, set to enable soft float ABI compatibility libraries. This option is for transitioning to the new hard float ABI. .It Va WITHOUT_LIBTHR Set to not build the .Nm libthr (1:1 threading) library. .It Va WITHOUT_LLD Set to not build LLVM's lld linker. .Pp This is a default setting on riscv/riscv64 and sparc64/sparc64. .It Va WITH_LLD Set to build LLVM's lld linker. .Pp This is a default setting on amd64/amd64, arm/arm, arm/armv6, arm/armv7, arm64/aarch64, i386/i386, mips/mipsel, mips/mips, mips/mips64el, mips/mips64, mips/mipsn32, mips/mipselhf, mips/mipshf, mips/mips64elhf, mips/mips64hf, powerpc/powerpc, powerpc/powerpc64 and powerpc/powerpcspe. .It Va WITHOUT_LLDB Set to not build the LLDB debugger. .Pp This is a default setting on arm/arm, arm/armv6, arm/armv7, mips/mipsel, mips/mips, mips/mips64el, mips/mips64, mips/mipsn32, mips/mipselhf, mips/mipshf, mips/mips64elhf, mips/mips64hf, powerpc/powerpc, powerpc/powerpc64, powerpc/powerpcspe, riscv/riscv64 and sparc64/sparc64. .It Va WITH_LLDB Set to build the LLDB debugger. .Pp This is a default setting on amd64/amd64, arm64/aarch64 and i386/i386. .It Va WITHOUT_LLD_BOOTSTRAP Set to not build the LLD linker during the bootstrap phase of the build. To be able to build the system, either Binutils or LLD bootstrap must be enabled unless an alternate linker is provided via XLD.
.Pp This is a default setting on arm/arm, arm/armv6, mips/mipsel, mips/mips, mips/mips64el, mips/mips64, mips/mipsn32, mips/mipselhf, mips/mipshf, mips/mips64elhf, mips/mips64hf, powerpc/powerpc, powerpc/powerpc64, powerpc/powerpcspe, riscv/riscv64 and sparc64/sparc64. .It Va WITH_LLD_BOOTSTRAP Set to build the LLD linker during the bootstrap phase of the build, and use it during buildworld and buildkernel. .Pp This is a default setting on amd64/amd64, arm/armv7, arm64/aarch64 and i386/i386. .It Va WITHOUT_LLD_IS_LD Set to use GNU binutils ld as the system linker, instead of LLVM's LLD. .Pp This is a default setting on arm/arm, arm/armv6, mips/mipsel, mips/mips, mips/mips64el, mips/mips64, mips/mipsn32, mips/mipselhf, mips/mipshf, mips/mips64elhf, mips/mips64hf, powerpc/powerpc, powerpc/powerpc64, powerpc/powerpcspe, riscv/riscv64 and sparc64/sparc64. .It Va WITH_LLD_IS_LD Set to use LLVM's LLD as the system linker, instead of GNU binutils ld. .Pp This is a default setting on amd64/amd64, arm/armv7, arm64/aarch64 and i386/i386. .It Va WITHOUT_LLVM_COV Set to not build the .Xr llvm-cov 1 tool. .Pp This is a default setting on riscv/riscv64 and sparc64/sparc64. .It Va WITH_LLVM_COV Set to build the .Xr llvm-cov 1 tool. .Pp This is a default setting on amd64/amd64, arm/arm, arm/armv6, arm/armv7, arm64/aarch64, i386/i386, mips/mipsel, mips/mips, mips/mips64el, mips/mips64, mips/mipsn32, mips/mipselhf, mips/mipshf, mips/mips64elhf, mips/mips64hf, powerpc/powerpc, powerpc/powerpc64 and powerpc/powerpcspe. .It Va WITHOUT_LLVM_LIBUNWIND Set to use GCC's stack unwinder (instead of LLVM's libunwind). .Pp This is a default setting on arm/arm, arm/armv6, arm/armv7, powerpc/powerpc, powerpc/powerpc64, powerpc/powerpcspe and sparc64/sparc64. .It Va WITH_LLVM_LIBUNWIND Set to use LLVM's libunwind stack unwinder (instead of GCC's unwinder). .Pp This is a default setting on amd64/amd64, arm64/aarch64, i386/i386, mips/mipsel, mips/mips, mips/mips64el, mips/mips64, mips/mipsn32, mips/mipselhf, mips/mipshf, mips/mips64elhf, mips/mips64hf and riscv/riscv64. .It Va WITHOUT_LLVM_TARGET_AARCH64 Set to not build LLVM target support for AArch64. The .Va LLVM_TARGET_ALL option should be used rather than this in most cases. .Pp This is a default setting on arm/arm, arm/armv6, riscv/riscv64 and sparc64/sparc64. .It Va WITH_LLVM_TARGET_AARCH64 Set to build LLVM target support for AArch64. The .Va LLVM_TARGET_ALL option should be used rather than this in most cases. .Pp This is a default setting on amd64/amd64, arm/armv7, arm64/aarch64, i386/i386, mips/mipsel, mips/mips, mips/mips64el, mips/mips64, mips/mipsn32, mips/mipselhf, mips/mipshf, mips/mips64elhf, mips/mips64hf, powerpc/powerpc, powerpc/powerpc64 and powerpc/powerpcspe. .It Va WITHOUT_LLVM_TARGET_ALL Set to only build the required LLVM target support. This option is preferred to specific target support options. .Pp This is a default setting on riscv/riscv64 and sparc64/sparc64. When set, these options are also in effect: .Pp .Bl -inset -compact .It Va WITHOUT_LLVM_TARGET_AARCH64 (unless .Va WITH_LLVM_TARGET_AARCH64 is set explicitly) .It Va WITHOUT_LLVM_TARGET_ARM (unless .Va WITH_LLVM_TARGET_ARM is set explicitly) .It Va WITHOUT_LLVM_TARGET_MIPS (unless .Va WITH_LLVM_TARGET_MIPS is set explicitly) .It Va WITHOUT_LLVM_TARGET_POWERPC (unless .Va WITH_LLVM_TARGET_POWERPC is set explicitly) .It Va WITHOUT_LLVM_TARGET_SPARC (unless .Va WITH_LLVM_TARGET_SPARC is set explicitly) .El .It Va WITH_LLVM_TARGET_ALL Set to build support for all LLVM targets. 
This option is always applied to the bootstrap compiler for buildworld when LLVM is used. .Pp This is a default setting on amd64/amd64, arm/arm, arm/armv6, arm/armv7, arm64/aarch64, i386/i386, mips/mipsel, mips/mips, mips/mips64el, mips/mips64, mips/mipsn32, mips/mipselhf, mips/mipshf, mips/mips64elhf, mips/mips64hf, powerpc/powerpc, powerpc/powerpc64 and powerpc/powerpcspe. .It Va WITHOUT_LLVM_TARGET_ARM Set to not build LLVM target support for ARM. The .Va LLVM_TARGET_ALL option should be used rather than this in most cases. .Pp This is a default setting on riscv/riscv64 and sparc64/sparc64. .It Va WITH_LLVM_TARGET_ARM Set to build LLVM target support for ARM. The .Va LLVM_TARGET_ALL option should be used rather than this in most cases. .Pp This is a default setting on amd64/amd64, arm/arm, arm/armv6, arm/armv7, arm64/aarch64, i386/i386, mips/mipsel, mips/mips, mips/mips64el, mips/mips64, mips/mipsn32, mips/mipselhf, mips/mipshf, mips/mips64elhf, mips/mips64hf, powerpc/powerpc, powerpc/powerpc64 and powerpc/powerpcspe. .It Va WITH_LLVM_TARGET_BPF Set to build LLVM target support for BPF. The .Va LLVM_TARGET_ALL option should be used rather than this in most cases. .It Va WITHOUT_LLVM_TARGET_MIPS Set to not build LLVM target support for MIPS. The .Va LLVM_TARGET_ALL option should be used rather than this in most cases. .Pp This is a default setting on arm/arm, arm/armv6, riscv/riscv64 and sparc64/sparc64. .It Va WITH_LLVM_TARGET_MIPS Set to build LLVM target support for MIPS. The .Va LLVM_TARGET_ALL option should be used rather than this in most cases. .Pp This is a default setting on amd64/amd64, arm/armv7, arm64/aarch64, i386/i386, mips/mipsel, mips/mips, mips/mips64el, mips/mips64, mips/mipsn32, mips/mipselhf, mips/mipshf, mips/mips64elhf, mips/mips64hf, powerpc/powerpc, powerpc/powerpc64 and powerpc/powerpcspe. .It Va WITHOUT_LLVM_TARGET_POWERPC Set to not build LLVM target support for PowerPC. The .Va LLVM_TARGET_ALL option should be used rather than this in most cases. .Pp This is a default setting on arm/arm, arm/armv6, riscv/riscv64 and sparc64/sparc64. .It Va WITH_LLVM_TARGET_POWERPC Set to build LLVM target support for PowerPC. The .Va LLVM_TARGET_ALL option should be used rather than this in most cases. .Pp This is a default setting on amd64/amd64, arm/armv7, arm64/aarch64, i386/i386, mips/mipsel, mips/mips, mips/mips64el, mips/mips64, mips/mipsn32, mips/mipselhf, mips/mipshf, mips/mips64elhf, mips/mips64hf, powerpc/powerpc, powerpc/powerpc64 and powerpc/powerpcspe. .It Va WITHOUT_LLVM_TARGET_SPARC Set to not build LLVM target support for SPARC. The .Va LLVM_TARGET_ALL option should be used rather than this in most cases. .Pp This is a default setting on arm/arm, arm/armv6, riscv/riscv64 and sparc64/sparc64. .It Va WITH_LLVM_TARGET_SPARC Set to build LLVM target support for SPARC. The .Va LLVM_TARGET_ALL option should be used rather than this in most cases. .Pp This is a default setting on amd64/amd64, arm/armv7, arm64/aarch64, i386/i386, mips/mipsel, mips/mips, mips/mips64el, mips/mips64, mips/mipsn32, mips/mipselhf, mips/mipshf, mips/mips64elhf, mips/mips64hf, powerpc/powerpc, powerpc/powerpc64 and powerpc/powerpcspe. .It Va WITHOUT_LLVM_TARGET_X86 Set to not build LLVM target support for X86. The .Va LLVM_TARGET_ALL option should be used rather than this in most cases. .Pp This is a default setting on arm/arm, arm/armv6, riscv/riscv64 and sparc64/sparc64. .It Va WITH_LLVM_TARGET_X86 Set to build LLVM target support for X86. 
The .Va LLVM_TARGET_ALL option should be used rather than this in most cases. .Pp This is a default setting on amd64/amd64, arm/armv7, arm64/aarch64, i386/i386, mips/mipsel, mips/mips, mips/mips64el, mips/mips64, mips/mipsn32, mips/mipselhf, mips/mipshf, mips/mips64elhf, mips/mips64hf, powerpc/powerpc, powerpc/powerpc64 and powerpc/powerpcspe. .It Va WITH_LOADER_FIREWIRE Enable FireWire support in /boot/loader on x86. This option is a no-op on all other platforms. .It Va WITH_LOADER_FORCE_LE Set to force the powerpc boot loader to launch the kernel in little endian mode. .It Va WITHOUT_LOADER_GELI Disable inclusion of GELI crypto support in the boot chain binaries. .Pp This is a default setting on powerpc/powerpc, powerpc/powerpc64, powerpc/powerpcspe and sparc64/sparc64. .It Va WITH_LOADER_GELI Set to build GELI bootloader support. .Pp This is a default setting on amd64/amd64, arm/arm, arm/armv6, arm/armv7, arm64/aarch64, i386/i386, mips/mipsel, mips/mips, mips/mips64el, mips/mips64, mips/mipsn32, mips/mipselhf, mips/mipshf, mips/mips64elhf, mips/mips64hf and riscv/riscv64. .It Va WITHOUT_LOADER_LUA Set to not build Lua bindings for the boot loader. .Pp This is a default setting on powerpc/powerpc, powerpc/powerpc64, powerpc/powerpcspe and sparc64/sparc64. .It Va WITH_LOADER_LUA Set to build Lua bindings for the boot loader. .Pp This is a default setting on amd64/amd64, arm/arm, arm/armv6, arm/armv7, arm64/aarch64, i386/i386, mips/mipsel, mips/mips, mips/mips64el, mips/mips64, mips/mipsn32, mips/mipselhf, mips/mipshf, mips/mips64elhf, mips/mips64hf and riscv/riscv64. .It Va WITHOUT_LOADER_OFW Disable building of Open Firmware bootloader components. .Pp This is a default setting on amd64/amd64, arm/arm, arm/armv6, arm/armv7, arm64/aarch64, i386/i386, mips/mipsel, mips/mips, mips/mips64el, mips/mips64, mips/mipsn32, mips/mipselhf, mips/mipshf, mips/mips64elhf, mips/mips64hf and riscv/riscv64. .It Va WITH_LOADER_OFW Set to build Open Firmware bootloader components. .Pp This is a default setting on powerpc/powerpc, powerpc/powerpc64, powerpc/powerpcspe and sparc64/sparc64. .It Va WITHOUT_LOADER_UBOOT Disable building of ubldr. .Pp This is a default setting on amd64/amd64, arm64/aarch64, i386/i386, riscv/riscv64 and sparc64/sparc64. .It Va WITH_LOADER_UBOOT Set to build ubldr. .Pp This is a default setting on arm/arm, arm/armv6, arm/armv7, mips/mipsel, mips/mips, mips/mips64el, mips/mips64, mips/mipsn32, mips/mipselhf, mips/mipshf, mips/mips64elhf, mips/mips64hf, powerpc/powerpc, powerpc/powerpc64 and powerpc/powerpcspe. .It Va WITH_LOADER_VERBOSE Set to build with extra verbose debugging in the loader. May push the already nearly too large loader over the size limit. Use with care. .It Va WITHOUT_LOADER_ZFS Set to not build ZFS file system boot loader support. .It Va WITHOUT_LOCALES Set to not build localization files; see .Xr locale 1 . .It Va WITHOUT_LOCATE Set to not build .Xr locate 1 and related programs. .It Va WITHOUT_LPR Set to not build .Xr lpr 1 and related programs. .It Va WITHOUT_LS_COLORS Set to build .Xr ls 1 without support for colors to distinguish file types. .It Va WITHOUT_LZMA_SUPPORT Set to build some programs without optional lzma compression support. .It Va WITHOUT_MAIL Set to not build any mail support (MUA or MTA). When set, it enforces these options: .Pp .Bl -item -compact .It .Va WITHOUT_DMAGENT .It .Va WITHOUT_MAILWRAPPER .It .Va WITHOUT_SENDMAIL .El .It Va WITHOUT_MAILWRAPPER Set to not build the .Xr mailwrapper 8 MTA selector.
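.Pp The .Dq enforces relationships compose, so a single illustrative .Pa /etc/src.conf entry is enough to skip the whole mail stack, since .Va WITHOUT_MAIL implies .Va WITHOUT_DMAGENT , .Va WITHOUT_MAILWRAPPER , and .Va WITHOUT_SENDMAIL :
.Bd -literal -offset indent
# Also enforces WITHOUT_DMAGENT, WITHOUT_MAILWRAPPER,
# and WITHOUT_SENDMAIL.
WITHOUT_MAIL=yes
.Ed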
.It Va WITHOUT_MAKE Set to not install .Xr make 1 and related support files. .It Va WITHOUT_MAKE_CHECK_USE_SANDBOX Set to not execute .Dq Li "make check" in limited sandbox mode. This option should be paired with .Va WITH_INSTALL_AS_USER if executed as an unprivileged user. See .Xr tests 7 for more details. .It Va WITHOUT_MAN Set to not build manual pages. When set, these options are also in effect: .Pp .Bl -inset -compact .It Va WITHOUT_MAN_UTILS (unless .Va WITH_MAN_UTILS is set explicitly) .El .It Va WITHOUT_MANCOMPRESS Set to not install compressed man pages. Only the uncompressed versions will be installed. .It Va WITHOUT_MAN_UTILS Set to not build utilities for manual pages, .Xr apropos 1 , .Xr makewhatis 1 , .Xr man 1 , .Xr whatis 1 , .Xr manctl 8 , and related support files. .It Va WITH_META_MODE Create .Xr make 1 meta files when building, which can provide a reliable incremental build when using .Xr filemon 4 . The meta file is created in OBJDIR as .Pa target.meta . These meta files track the command that was executed, its output, and the current directory. The .Xr filemon 4 module is required unless .Va NO_FILEMON is defined. When the module is loaded, any files used by the commands executed are tracked as dependencies for the target in its meta file. The target is considered out-of-date and rebuilt if any of these conditions are true compared to the last build: .Bl -bullet -compact .It The command to execute changes. .It The current working directory changes. .It The target's meta file is missing. .It The target's meta file is missing filemon data when filemon is loaded and a previous run did not have it loaded. .It [requires .Xr filemon 4 ] Files read, executed or linked to are newer than the target. .It [requires .Xr filemon 4 ] Files read, written, executed or linked are missing. .El The meta files can also be useful for debugging. .Pp The build hides commands that are executed unless .Va NO_SILENT is defined. Errors cause .Xr make 1 to show some of its environment for further debugging. .Pp The build operates as it normally would otherwise. This option originally invoked a different build system but that was renamed to .Va WITH_DIRDEPS_BUILD . .Pp This must be set in the environment, make command line, or .Pa /etc/src-env.conf , not .Pa /etc/src.conf . .It Va WITHOUT_MLX5TOOL Set to not build .Xr mlx5tool 8 . .Pp This is a default setting on arm/arm, arm/armv6, arm/armv7, mips/mipsel, mips/mips, mips/mips64el, mips/mips64, mips/mipsn32, mips/mipselhf, mips/mipshf, mips/mips64elhf, mips/mips64hf, powerpc/powerpc, powerpc/powerpcspe and riscv/riscv64. .It Va WITH_MLX5TOOL Set to build .Xr mlx5tool 8 . .Pp This is a default setting on amd64/amd64, arm64/aarch64, i386/i386, powerpc/powerpc64 and sparc64/sparc64. .It Va WITH_MODULE_DRM Enable creation of old drm video modules. .It Va WITH_MODULE_DRM2 Enable creation of old drm2 video modules. .It Va WITH_NAND Set to build the NAND Flash components. .It Va WITHOUT_NDIS Set to not build programs and libraries related to NDIS emulation support. .It Va WITHOUT_NETCAT Set to not build the .Xr nc 1 utility. .It Va WITHOUT_NETGRAPH Set to not build applications to support .Xr netgraph 4 .
When set, it enforces these options: .Pp .Bl -item -compact .It .Va WITHOUT_ATM .It .Va WITHOUT_BLUETOOTH .El .Pp When set, these options are also in effect: .Pp .Bl -inset -compact .It Va WITHOUT_NETGRAPH_SUPPORT (unless .Va WITH_NETGRAPH_SUPPORT is set explicitly) .El .It Va WITHOUT_NETGRAPH_SUPPORT Set to build libraries, programs, and kernel modules without netgraph support. .It Va WITHOUT_NIS Set to not build .Xr NIS 8 support and related programs. If set, you might need to adapt your .Xr nsswitch.conf 5 and remove .Sq nis entries. .It Va WITHOUT_NLS Set to not build NLS catalogs. When set, it enforces these options: .Pp .Bl -item -compact .It .Va WITHOUT_NLS_CATALOGS .El .It Va WITHOUT_NLS_CATALOGS Set to not build NLS catalog support for .Xr csh 1 . .It Va WITHOUT_NS_CACHING Set to disable name caching in the .Pa nsswitch subsystem. The generic caching daemon, .Xr nscd 8 , will not be built either if this option is set. .It Va WITHOUT_NTP Set to not build .Xr ntpd 8 and related programs. .It Va WITHOUT_NVME Set to not build NVMe-related tools and kernel modules. .Pp This is a default setting on arm/arm, arm/armv6, arm/armv7, arm64/aarch64, mips/mipsel, mips/mips, mips/mips64el, mips/mips64, mips/mipsn32, mips/mipselhf, mips/mipshf, mips/mips64elhf, mips/mips64hf, powerpc/powerpc, powerpc/powerpcspe, riscv/riscv64 and sparc64/sparc64. .It Va WITH_NVME Set to build NVMe-related tools and kernel modules. .Pp This is a default setting on amd64/amd64, i386/i386 and powerpc/powerpc64. .It Va WITH_OFED Set to build the .Dq "OpenFabrics Enterprise Distribution" InfiniBand software stack. .It Va WITH_OFED_EXTRA Set to build the non-essential components of the .Dq "OpenFabrics Enterprise Distribution" InfiniBand software stack, mostly examples. .It Va WITH_OPENLDAP Enable building OpenLDAP support for Kerberos. .It Va WITHOUT_OPENSSH Set to not build OpenSSH. .It Va WITHOUT_OPENSSL Set to not build OpenSSL. When set, it enforces these options: .Pp .Bl -item -compact .It .Va WITHOUT_KERBEROS .It .Va WITHOUT_OPENSSH .El .Pp When set, these options are also in effect: .Pp .Bl -inset -compact .It Va WITHOUT_GSSAPI (unless .Va WITH_GSSAPI is set explicitly) .El .It Va WITHOUT_PAM Set to not build PAM library and modules. .Bf -symbolic This option is deprecated and does nothing. .Ef When set, these options are also in effect: .Pp .Bl -inset -compact .It Va WITHOUT_PAM_SUPPORT (unless .Va WITH_PAM_SUPPORT is set explicitly) .El .It Va WITHOUT_PAM_SUPPORT Set to build some programs without PAM support, particularly .Xr ftpd 8 and .Xr ppp 8 . .It Va WITHOUT_PC_SYSINSTALL Set to not build .Xr pc-sysinstall 8 and related programs. .It Va WITHOUT_PF Set to not build the PF firewall package. When set, it enforces these options: .Pp .Bl -item -compact .It .Va WITHOUT_AUTHPF .El +.It Va WITH_PIE +Build dynamically linked binaries as +Position-Independent Executables (PIE). .It Va WITHOUT_PKGBOOTSTRAP Set to not build the .Xr pkg 7 bootstrap tool. .It Va WITHOUT_PMC Set to not build .Xr pmccontrol 8 and related programs. .It Va WITHOUT_PORTSNAP Set to not build or install .Xr portsnap 8 and related files. When set, it enforces these options: .Pp .Bl -item -compact .It .Va WITHOUT_FREEBSD_UPDATE .El .It Va WITHOUT_PPP Set to not build .Xr ppp 8 and related programs. .It Va WITHOUT_PROFILE Set to not build profiled libraries for use with .Xr gprof 8 . .Pp This is a default setting on mips/mips64el, mips/mips64, mips/mips64elhf and mips/mips64hf.
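.Pp Because .Va WITH_META_MODE (described above) is an environment-only variable, it belongs in .Pa /etc/src-env.conf rather than .Pa /etc/src.conf . An illustrative setup:
.Bd -literal -offset indent
# /etc/src-env.conf
WITH_META_MODE=yes
.Ed
.Pp Unless .Va NO_FILEMON is defined, the required module must also be loaded first, e.g.:
.Bd -literal -offset indent
kldload filemon
.Ed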
.It Va WITH_PROFILE Set to build profiled libraries for use with .Xr gprof 8 . .Pp This is a default setting on amd64/amd64, arm/arm, arm/armv6, arm/armv7, arm64/aarch64, i386/i386, mips/mipsel, mips/mips, mips/mipsn32, mips/mipselhf, mips/mipshf, powerpc/powerpc, powerpc/powerpc64, powerpc/powerpcspe, riscv/riscv64 and sparc64/sparc64. .It Va WITHOUT_QUOTAS Set to not build .Xr quota 1 and related programs. .It Va WITHOUT_RADIUS_SUPPORT Set to not build RADIUS support into various applications, like .Xr pam_radius 8 and .Xr ppp 8 . .It Va WITH_RATELIMIT Set to build the system with rate limit support. .Pp This makes .Dv SO_MAX_PACING_RATE effective in .Xr getsockopt 2 , and .Ar txrlimit support in .Xr ifconfig 8 , by proxy. .It Va WITHOUT_RBOOTD Set to not build or install .Xr rbootd 8 . .It Va WITHOUT_REPRODUCIBLE_BUILD Set to include build metadata (such as the build time, user, and host) in the kernel, boot loaders, and uname output. Successive builds will not be bit-for-bit identical. .It Va WITHOUT_RESCUE Set to not build .Xr rescue 8 . .It Va WITH_RETPOLINE Set to build the base system with the retpoline speculative execution vulnerability mitigation for CVE-2017-5715. .It Va WITHOUT_ROUTED Set to not build the .Xr routed 8 utility. .It Va WITH_RPCBIND_WARMSTART_SUPPORT Set to build .Xr rpcbind 8 with warmstart support. .It Va WITHOUT_SENDMAIL Set to not build .Xr sendmail 8 and related programs. .It Va WITHOUT_SERVICESDB Set to not install .Pa /var/db/services.db . .It Va WITHOUT_SETUID_LOGIN Set this to disable the installation of .Xr login 1 as a set-user-ID root program. .It Va WITHOUT_SHAREDOCS Set to not build the .Bx 4.4 legacy docs. .It Va WITH_SHARED_TOOLCHAIN Set to build the toolchain binaries as shared binaries. The set includes .Xr cc 1 , .Xr make 1 , and necessary utilities such as the assembler, linker and library archive manager. .It Va WITH_SORT_THREADS Set to enable threads in .Xr sort 1 . .It Va WITHOUT_SOURCELESS Set to not build kernel modules that include sourceless code (either microcode or native code for the host CPU). When set, it enforces these options: .Pp .Bl -item -compact .It .Va WITHOUT_SOURCELESS_HOST .It .Va WITHOUT_SOURCELESS_UCODE .El .It Va WITHOUT_SOURCELESS_HOST Set to not build kernel modules that include sourceless native code for the host CPU. .It Va WITHOUT_SOURCELESS_UCODE Set to not build kernel modules that include sourceless microcode. .It Va WITHOUT_SSP Set to not build world with propolice stack smashing protection. .Pp This is a default setting on mips/mipsel, mips/mips, mips/mips64el, mips/mips64, mips/mipsn32, mips/mipselhf, mips/mipshf, mips/mips64elhf and mips/mips64hf. .It Va WITH_SSP Set to build world with propolice stack smashing protection. .Pp This is a default setting on amd64/amd64, arm/arm, arm/armv6, arm/armv7, arm64/aarch64, i386/i386, powerpc/powerpc, powerpc/powerpc64, powerpc/powerpcspe, riscv/riscv64 and sparc64/sparc64. .It Va WITH_STAGING Enable staging of files to a stage tree. This can best be thought of as an auto-install to .Va DESTDIR with some extra metadata to ensure that dependencies can be tracked. Depends on .Va WITH_DIRDEPS_BUILD . When set, these options are also in effect: .Pp .Bl -inset -compact .It Va WITH_STAGING_MAN (unless .Va WITHOUT_STAGING_MAN is set explicitly) .It Va WITH_STAGING_PROG (unless .Va WITHOUT_STAGING_PROG is set explicitly) .El .Pp This must be set in the environment, make command line, or .Pa /etc/src-env.conf , not .Pa /etc/src.conf . .It Va WITH_STAGING_MAN Enable staging of man pages to the stage tree.
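.Pp To see the effect of .Va WITH_PIE described earlier: a position-independent executable is an .Dv ET_DYN ELF object, which can be verified with .Xr readelf 1 (an illustrative check; the binary chosen and the exact output layout may vary):
.Bd -literal -offset indent
$ readelf -h /bin/ls | grep 'Type:'
  Type:   DYN (Shared object file)
.Ed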
.It Va WITH_STAGING_PROG Enable staging of PROGs to the stage tree. .It Va WITH_STALE_STAGED Check that staged files are not stale. .It Va WITH_SVN Set to install .Xr svnlite 1 as .Xr svn 1 . .It Va WITHOUT_SVNLITE Set to not build .Xr svnlite 1 and related programs. .It Va WITHOUT_SYMVER Set to disable symbol versioning when building shared libraries. .It Va WITHOUT_SYSCONS Set to not build .Xr syscons 4 support files such as keyboard maps, fonts, and screen output maps. .It Va WITH_SYSROOT Enable use of sysroot during build. Depends on .Va WITH_DIRDEPS_BUILD . .Pp This must be set in the environment, make command line, or .Pa /etc/src-env.conf , not .Pa /etc/src.conf . .It Va WITHOUT_SYSTEM_COMPILER Set to not opportunistically skip building a cross-compiler during the bootstrap phase of the build. Normally, if the currently installed compiler matches the planned bootstrap compiler type and revision, then it will not be built. This does not prevent a compiler from being built for installation, though; it only controls building one for use during the build itself. The .Va WITHOUT_CLANG and .Va WITHOUT_GCC options control those. .It Va WITHOUT_SYSTEM_LINKER Set to not opportunistically skip building a cross-linker during the bootstrap phase of the build. Normally, if the currently installed linker matches the planned bootstrap linker type and revision, then it will not be built. This does not prevent a linker from being built for installation, though; it only controls building one for use during the build itself. The .Va WITHOUT_LLD and .Va WITHOUT_BINUTILS options control those. .Pp This option is only relevant when .Va WITH_LLD_BOOTSTRAP is set. .It Va WITHOUT_TALK Set to not build or install .Xr talk 1 and .Xr talkd 8 . .It Va WITHOUT_TCP_WRAPPERS Set to not build or install .Xr tcpd 8 and related utilities. .It Va WITHOUT_TCSH Set to not build and install .Pa /bin/csh (which is .Xr tcsh 1 ) . .It Va WITHOUT_TELNET Set to not build .Xr telnet 1 and related programs. .It Va WITHOUT_TESTS Set to not build or install the .Fx Test Suite in .Pa /usr/tests/ . See .Xr tests 7 for more details. This also disables the build of all test-related dependencies, including ATF. When set, it enforces these options: .Pp .Bl -item -compact .It .Va WITHOUT_DTRACE_TESTS .El .Pp When set, these options are also in effect: .Pp .Bl -inset -compact .It Va WITHOUT_TESTS_SUPPORT (unless .Va WITH_TESTS_SUPPORT is set explicitly) .El .It Va WITHOUT_TESTS_SUPPORT Set to disable the build of all test-related dependencies, including ATF. .It Va WITHOUT_TEXTPROC Set to not build programs used for text processing. .It Va WITHOUT_TFTP Set to not build or install .Xr tftp 1 and .Xr tftpd 8 . .It Va WITHOUT_TOOLCHAIN Set to not install headers or programs used for program development, such as compilers and debuggers. When set, it enforces these options: .Pp .Bl -item -compact .It .Va WITHOUT_BINUTILS .It .Va WITHOUT_CLANG .It .Va WITHOUT_CLANG_EXTRAS .It .Va WITHOUT_CLANG_FULL .It .Va WITHOUT_GCC .It .Va WITHOUT_GDB .It .Va WITHOUT_INCLUDES .It .Va WITHOUT_LLD .It .Va WITHOUT_LLDB .It .Va WITHOUT_LLVM_COV .El .It Va WITHOUT_UNBOUND Set to not build .Xr unbound 8 and related programs. .It Va WITHOUT_UNIFIED_OBJDIR Set to use the historical object directory format for .Xr build 7 targets. For native builds and builds done directly in sub-directories the format of .Pa ${MAKEOBJDIRPREFIX}/${.CURDIR} is used, while for cross-builds .Pa ${MAKEOBJDIRPREFIX}/${TARGET}.${TARGET_ARCH}/${.CURDIR} is used.
.Pp This option is transitional and will be removed before the 12.0 release, at which time .Va WITH_UNIFIED_OBJDIR will be enabled permanently. .Pp This must be set in the environment, make command line, or .Pa /etc/src-env.conf , not .Pa /etc/src.conf . .It Va WITHOUT_USB Set to not build USB-related programs and libraries. .It Va WITHOUT_USB_GADGET_EXAMPLES Set to not build USB gadget kernel modules. .It Va WITHOUT_UTMPX Set to not build user accounting tools such as .Xr last 1 , .Xr users 1 , .Xr who 1 , .Xr ac 8 , .Xr lastlogin 8 and .Xr utx 8 . .It Va WITHOUT_VI Set to not build and install vi, view, ex and related programs. .It Va WITHOUT_VT Set to not build .Xr vt 4 support files (fonts and keymaps). .It Va WITHOUT_WARNS Set this to not add warning flags to the compiler invocations. Useful as a temporary workaround when code that triggers warnings in environments differing from the original developer's enters the tree. .It Va WITHOUT_WIRELESS Set to not build programs used for 802.11 wireless networks; especially .Xr wpa_supplicant 8 and .Xr hostapd 8 . When set, these options are also in effect: .Pp .Bl -inset -compact .It Va WITHOUT_WIRELESS_SUPPORT (unless .Va WITH_WIRELESS_SUPPORT is set explicitly) .El .It Va WITHOUT_WIRELESS_SUPPORT Set to build libraries, programs, and kernel modules without 802.11 wireless support. .It Va WITHOUT_WPA_SUPPLICANT_EAPOL Build .Xr wpa_supplicant 8 without support for the IEEE 802.1X protocol and without support for EAP-PEAP, EAP-TLS, EAP-LEAP, and EAP-TTLS protocols (usable only via 802.1X). .It Va WITHOUT_ZFS Set to not build ZFS file system kernel module, libraries, and user commands. .It Va WITHOUT_ZONEINFO Set to not build the timezone database. When set, it enforces these options: .Pp .Bl -item -compact .It .Va WITHOUT_ZONEINFO_LEAPSECONDS_SUPPORT .It .Va WITHOUT_ZONEINFO_OLD_TIMEZONES_SUPPORT .El .It Va WITH_ZONEINFO_LEAPSECONDS_SUPPORT Set to build leap second information into the timezone database. .It Va WITH_ZONEINFO_OLD_TIMEZONES_SUPPORT Set to build backward-compatibility timezone aliases into the timezone database. .El .Sh FILES .Bl -tag -compact -width Pa .It Pa /etc/src.conf .It Pa /etc/src-env.conf .It Pa /usr/share/mk/bsd.own.mk .El .Sh SEE ALSO .Xr make 1 , .Xr make.conf 5 , .Xr build 7 , .Xr ports 7 .Sh HISTORY The .Nm file appeared in .Fx 7.0 . .Sh AUTHORS This manual page was autogenerated by .An tools/build/options/makeman . Index: projects/import-googletest-1.8.1/share/mk/bsd.lib.mk =================================================================== --- projects/import-googletest-1.8.1/share/mk/bsd.lib.mk (revision 344270) +++ projects/import-googletest-1.8.1/share/mk/bsd.lib.mk (revision 344271) @@ -1,495 +1,529 @@ # from: @(#)bsd.lib.mk 5.26 (Berkeley) 5/2/91 # $FreeBSD$ # .include .if defined(LIB_CXX) || defined(SHLIB_CXX) _LD= ${CXX} .else _LD= ${CC} .endif .if defined(LIB_CXX) LIB= ${LIB_CXX} .endif .if defined(SHLIB_CXX) SHLIB= ${SHLIB_CXX} .endif LIB_PRIVATE= ${PRIVATELIB:Dprivate} # Set up the variables controlling shared libraries. After this section, # SHLIB_NAME will be defined only if we are to create a shared library. # SHLIB_LINK will be defined only if we are to create a link to it. # INSTALL_PIC_ARCHIVE will be defined only if we are to create a PIC archive. # BUILD_NOSSP_PIC_ARCHIVE will be defined only if we are to create a PIC archive.
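# For example (hypothetical library Makefile, names illustrative):
#   LIB=         foo
#   SHLIB_MAJOR= 1
# With the defaults below, SHLIB_NAME becomes libfoo.so.1 and
# SHLIB_LINK becomes libfoo.so.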
.if defined(NO_PIC) .undef SHLIB_NAME .undef INSTALL_PIC_ARCHIVE .undef BUILD_NOSSP_PIC_ARCHIVE .else .if !defined(SHLIB) && defined(LIB) SHLIB= ${LIB} .endif .if !defined(SHLIB_NAME) && defined(SHLIB) && defined(SHLIB_MAJOR) SHLIB_NAME= lib${LIB_PRIVATE}${SHLIB}.so.${SHLIB_MAJOR} .endif .if defined(SHLIB_NAME) && !empty(SHLIB_NAME:M*.so.*) SHLIB_LINK?= ${SHLIB_NAME:R} .endif SONAME?= ${SHLIB_NAME} .endif .if defined(CRUNCH_CFLAGS) CFLAGS+= ${CRUNCH_CFLAGS} .endif .if ${MK_ASSERT_DEBUG} == "no" CFLAGS+= -DNDEBUG NO_WERROR= .endif .if defined(DEBUG_FLAGS) CFLAGS+= ${DEBUG_FLAGS} .if ${MK_CTF} != "no" && ${DEBUG_FLAGS:M-g} != "" CTFFLAGS+= -g .endif .else STRIP?= -s .endif .if ${SHLIBDIR:M*lib32*} TAGS+= lib32 .endif .if defined(NO_ROOT) .if !defined(TAGS) || ! ${TAGS:Mpackage=*} TAGS+= package=${PACKAGE:Uruntime} .endif TAG_ARGS= -T ${TAGS:[*]:S/ /,/g} .endif # ELF hardening knobs .if ${MK_BIND_NOW} != "no" LDFLAGS+= -Wl,-znow .endif .if ${MK_RETPOLINE} != "no" CFLAGS+= -mretpoline CXXFLAGS+= -mretpoline LDFLAGS+= -Wl,-zretpolineplt .endif .if ${MK_DEBUG_FILES} != "no" && empty(DEBUG_FLAGS:M-g) && \ empty(DEBUG_FLAGS:M-gdwarf*) CFLAGS+= ${DEBUG_FILES_CFLAGS} CXXFLAGS+= ${DEBUG_FILES_CFLAGS} CTFFLAGS+= -g .endif .include # prefer .s to a .c, add .po, remove stuff not used in the BSD libraries # .pico used for PIC object files # .nossppico used for NOSSP PIC object files -.SUFFIXES: .out .o .bc .ll .po .pico .nossppico .S .asm .s .c .cc .cpp .cxx .C .f .y .l .ln +# .pieo used for PIE object files +.SUFFIXES: .out .o .bc .ll .po .pico .nossppico .pieo .S .asm .s .c .cc .cpp .cxx .C .f .y .l .ln .if !defined(PICFLAG) .if ${MACHINE_CPUARCH} == "sparc64" PICFLAG=-fPIC +PIEFLAG=-fPIE .else PICFLAG=-fpic +PIEFLAG=-fpie .endif .endif PO_FLAG=-pg .c.po: ${CC} ${PO_FLAG} ${STATIC_CFLAGS} ${PO_CFLAGS} -c ${.IMPSRC} -o ${.TARGET} ${CTFCONVERT_CMD} .c.pico: ${CC} ${PICFLAG} -DPIC ${SHARED_CFLAGS} ${CFLAGS} -c ${.IMPSRC} -o ${.TARGET} ${CTFCONVERT_CMD} .c.nossppico: ${CC} ${PICFLAG} -DPIC ${SHARED_CFLAGS:C/^-fstack-protector.*$//} ${CFLAGS:C/^-fstack-protector.*$//} -c ${.IMPSRC} -o ${.TARGET} ${CTFCONVERT_CMD} +.c.pieo: + ${CC} ${PIEFLAG} -DPIC ${SHARED_CFLAGS} ${CFLAGS} -c ${.IMPSRC} -o ${.TARGET} + ${CTFCONVERT_CMD} + .cc.po .C.po .cpp.po .cxx.po: ${CXX} ${PO_FLAG} ${STATIC_CXXFLAGS} ${PO_CXXFLAGS} -c ${.IMPSRC} -o ${.TARGET} .cc.pico .C.pico .cpp.pico .cxx.pico: ${CXX} ${PICFLAG} -DPIC ${SHARED_CXXFLAGS} ${CXXFLAGS} -c ${.IMPSRC} -o ${.TARGET} .cc.nossppico .C.nossppico .cpp.nossppico .cxx.nossppico: ${CXX} ${PICFLAG} -DPIC ${SHARED_CXXFLAGS:C/^-fstack-protector.*$//} ${CXXFLAGS:C/^-fstack-protector.*$//} -c ${.IMPSRC} -o ${.TARGET} +.cc.pieo .C.pieo .cpp.pieo .cxx.pieo: + ${CXX} ${PIEFLAG} ${SHARED_CXXFLAGS} ${CXXFLAGS} -c ${.IMPSRC} -o ${.TARGET} + .f.po: ${FC} -pg ${FFLAGS} -o ${.TARGET} -c ${.IMPSRC} ${CTFCONVERT_CMD} .f.pico: ${FC} ${PICFLAG} -DPIC ${FFLAGS} -o ${.TARGET} -c ${.IMPSRC} ${CTFCONVERT_CMD} .f.nossppico: ${FC} ${PICFLAG} -DPIC ${FFLAGS:C/^-fstack-protector.*$//} -o ${.TARGET} -c ${.IMPSRC} ${CTFCONVERT_CMD} -.s.po .s.pico .s.nossppico: +.s.po .s.pico .s.nossppico .s.pieo: ${AS} ${AFLAGS} -o ${.TARGET} ${.IMPSRC} ${CTFCONVERT_CMD} .asm.po: ${CC:N${CCACHE_BIN}} -x assembler-with-cpp -DPROF ${PO_CFLAGS} \ ${ACFLAGS} -c ${.IMPSRC} -o ${.TARGET} ${CTFCONVERT_CMD} .asm.pico: ${CC:N${CCACHE_BIN}} -x assembler-with-cpp ${PICFLAG} -DPIC \ ${CFLAGS} ${ACFLAGS} -c ${.IMPSRC} -o ${.TARGET} ${CTFCONVERT_CMD} .asm.nossppico: ${CC:N${CCACHE_BIN}} -x assembler-with-cpp ${PICFLAG} -DPIC \ 
${CFLAGS:C/^-fstack-protector.*$//} ${ACFLAGS} -c ${.IMPSRC} -o ${.TARGET} ${CTFCONVERT_CMD} +.asm.pieo: + ${CC:N${CCACHE_BIN}} -x assembler-with-cpp ${PIEFLAG} -DPIC \ + ${CFLAGS} ${ACFLAGS} -c ${.IMPSRC} -o ${.TARGET} + ${CTFCONVERT_CMD} + .S.po: ${CC:N${CCACHE_BIN}} -DPROF ${PO_CFLAGS} ${ACFLAGS} -c ${.IMPSRC} \ -o ${.TARGET} ${CTFCONVERT_CMD} .S.pico: ${CC:N${CCACHE_BIN}} ${PICFLAG} -DPIC ${CFLAGS} ${ACFLAGS} \ -c ${.IMPSRC} -o ${.TARGET} ${CTFCONVERT_CMD} .S.nossppico: ${CC:N${CCACHE_BIN}} ${PICFLAG} -DPIC ${CFLAGS:C/^-fstack-protector.*$//} ${ACFLAGS} \ -c ${.IMPSRC} -o ${.TARGET} ${CTFCONVERT_CMD} +.S.pieo: + ${CC:N${CCACHE_BIN}} ${PIEFLAG} -DPIC ${CFLAGS} ${ACFLAGS} \ + -c ${.IMPSRC} -o ${.TARGET} + ${CTFCONVERT_CMD} + _LIBDIR:=${LIBDIR} _SHLIBDIR:=${SHLIBDIR} .if defined(SHLIB_NAME) .if ${MK_DEBUG_FILES} != "no" SHLIB_NAME_FULL=${SHLIB_NAME}.full # Use ${DEBUGDIR} for base system debug files, else .debug subdirectory .if ${_SHLIBDIR} == "/boot" ||\ ${SHLIBDIR:C%/lib(/.*)?$%/lib%} == "/lib" ||\ ${SHLIBDIR:C%/usr/(tests/)?lib(32|exec)?(/.*)?%/usr/lib%} == "/usr/lib" DEBUGFILEDIR=${DEBUGDIR}${_SHLIBDIR} .else DEBUGFILEDIR=${_SHLIBDIR}/.debug .endif .if !exists(${DESTDIR}${DEBUGFILEDIR}) DEBUGMKDIR= .endif .else SHLIB_NAME_FULL=${SHLIB_NAME} .endif .endif .include # Allow libraries to specify their own version map or have it # automatically generated (see bsd.symver.mk above). .if ${MK_SYMVER} == "yes" && !empty(VERSION_MAP) ${SHLIB_NAME_FULL}: ${VERSION_MAP} LDFLAGS+= -Wl,--version-script=${VERSION_MAP} .endif .if defined(LIB) && !empty(LIB) || defined(SHLIB_NAME) OBJS+= ${SRCS:N*.h:${OBJS_SRCS_FILTER:ts:}:S/$/.o/} BCOBJS+= ${SRCS:N*.[hsS]:N*.asm:${OBJS_SRCS_FILTER:ts:}:S/$/.bco/g} LLOBJS+= ${SRCS:N*.[hsS]:N*.asm:${OBJS_SRCS_FILTER:ts:}:S/$/.llo/g} CLEANFILES+= ${OBJS} ${BCOBJS} ${LLOBJS} ${STATICOBJS} .endif .if defined(LIB) && !empty(LIB) _LIBS= lib${LIB_PRIVATE}${LIB}.a lib${LIB_PRIVATE}${LIB}.a: ${OBJS} ${STATICOBJS} @${ECHO} building static ${LIB} library @rm -f ${.TARGET} ${AR} ${ARFLAGS} ${.TARGET} `NM='${NM}' NMFLAGS='${NMFLAGS}' \ ${LORDER} ${OBJS} ${STATICOBJS} | ${TSORT} ${TSORTFLAGS}` ${ARADD} ${RANLIB} ${RANLIBFLAGS} ${.TARGET} .endif .if !defined(INTERNALLIB) .if ${MK_PROFILE} != "no" && defined(LIB) && !empty(LIB) _LIBS+= lib${LIB_PRIVATE}${LIB}_p.a POBJS+= ${OBJS:.o=.po} ${STATICOBJS:.o=.po} DEPENDOBJS+= ${POBJS} CLEANFILES+= ${POBJS} lib${LIB_PRIVATE}${LIB}_p.a: ${POBJS} @${ECHO} building profiled ${LIB} library @rm -f ${.TARGET} ${AR} ${ARFLAGS} ${.TARGET} `NM='${NM}' NMFLAGS='${NMFLAGS}' \ ${LORDER} ${POBJS} | ${TSORT} ${TSORTFLAGS}` ${ARADD} ${RANLIB} ${RANLIBFLAGS} ${.TARGET} .endif .if defined(LLVM_LINK) lib${LIB_PRIVATE}${LIB}.bc: ${BCOBJS} ${LLVM_LINK} -o ${.TARGET} ${BCOBJS} lib${LIB_PRIVATE}${LIB}.ll: ${LLOBJS} ${LLVM_LINK} -S -o ${.TARGET} ${LLOBJS} CLEANFILES+= lib${LIB_PRIVATE}${LIB}.bc lib${LIB_PRIVATE}${LIB}.ll .endif .if defined(SHLIB_NAME) || \ defined(INSTALL_PIC_ARCHIVE) && defined(LIB) && !empty(LIB) SOBJS+= ${OBJS:.o=.pico} DEPENDOBJS+= ${SOBJS} CLEANFILES+= ${SOBJS} .endif .if defined(SHLIB_NAME) _LIBS+= ${SHLIB_NAME} SOLINKOPTS+= -shared -Wl,-x .if defined(LD_FATAL_WARNINGS) && ${LD_FATAL_WARNINGS} == "no" SOLINKOPTS+= -Wl,--no-fatal-warnings .else SOLINKOPTS+= -Wl,--fatal-warnings .endif SOLINKOPTS+= -Wl,--warn-shared-textrel .if target(beforelinking) beforelinking: ${SOBJS} ${SHLIB_NAME_FULL}: beforelinking .endif .if defined(SHLIB_LINK) .if defined(SHLIB_LDSCRIPT) && !empty(SHLIB_LDSCRIPT) && exists(${.CURDIR}/${SHLIB_LDSCRIPT}) 
${SHLIB_LINK:R}.ld: ${.CURDIR}/${SHLIB_LDSCRIPT} sed -e 's,@@SHLIB@@,${_SHLIBDIR}/${SHLIB_NAME},g' \ -e 's,@@LIBDIR@@,${_LIBDIR},g' \ ${.ALLSRC} > ${.TARGET} ${SHLIB_NAME_FULL}: ${SHLIB_LINK:R}.ld CLEANFILES+= ${SHLIB_LINK:R}.ld .endif CLEANFILES+= ${SHLIB_LINK} .endif ${SHLIB_NAME_FULL}: ${SOBJS} @${ECHO} building shared library ${SHLIB_NAME} @rm -f ${SHLIB_NAME} ${SHLIB_LINK} .if defined(SHLIB_LINK) && !commands(${SHLIB_LINK:R}.ld) && ${MK_DEBUG_FILES} == "no" @${INSTALL_LIBSYMLINK} ${TAG_ARGS:D${TAG_ARGS},development} ${SHLIB_NAME} ${SHLIB_LINK} .endif ${_LD:N${CCACHE_BIN}} ${LDFLAGS} ${SSP_CFLAGS} ${SOLINKOPTS} \ -o ${.TARGET} -Wl,-soname,${SONAME} \ `NM='${NM}' NMFLAGS='${NMFLAGS}' ${LORDER} ${SOBJS} | \ ${TSORT} ${TSORTFLAGS}` ${LDADD} .if ${MK_CTF} != "no" ${CTFMERGE} ${CTFFLAGS} -o ${.TARGET} ${SOBJS} .endif .if ${MK_DEBUG_FILES} != "no" CLEANFILES+= ${SHLIB_NAME_FULL} ${SHLIB_NAME}.debug ${SHLIB_NAME}: ${SHLIB_NAME_FULL} ${SHLIB_NAME}.debug ${OBJCOPY} --strip-debug --add-gnu-debuglink=${SHLIB_NAME}.debug \ ${SHLIB_NAME_FULL} ${.TARGET} .if defined(SHLIB_LINK) && !commands(${SHLIB_LINK:R}.ld) @${INSTALL_LIBSYMLINK} ${TAG_ARGS:D${TAG_ARGS},development} ${SHLIB_NAME} ${SHLIB_LINK} .endif ${SHLIB_NAME}.debug: ${SHLIB_NAME_FULL} ${OBJCOPY} --only-keep-debug ${SHLIB_NAME_FULL} ${.TARGET} .endif .endif #defined(SHLIB_NAME) .if defined(INSTALL_PIC_ARCHIVE) && defined(LIB) && !empty(LIB) && ${MK_TOOLCHAIN} != "no" _LIBS+= lib${LIB_PRIVATE}${LIB}_pic.a lib${LIB_PRIVATE}${LIB}_pic.a: ${SOBJS} @${ECHO} building special pic ${LIB} library @rm -f ${.TARGET} ${AR} ${ARFLAGS} ${.TARGET} ${SOBJS} ${ARADD} ${RANLIB} ${RANLIBFLAGS} ${.TARGET} .endif .if defined(BUILD_NOSSP_PIC_ARCHIVE) && defined(LIB) && !empty(LIB) NOSSPSOBJS+= ${OBJS:.o=.nossppico} DEPENDOBJS+= ${NOSSPSOBJS} CLEANFILES+= ${NOSSPSOBJS} _LIBS+= lib${LIB_PRIVATE}${LIB}_nossp_pic.a lib${LIB_PRIVATE}${LIB}_nossp_pic.a: ${NOSSPSOBJS} @${ECHO} building special nossp pic ${LIB} library @rm -f ${.TARGET} ${AR} ${ARFLAGS} ${.TARGET} ${NOSSPSOBJS} ${ARADD} ${RANLIB} ${RANLIBFLAGS} ${.TARGET} .endif .endif # !defined(INTERNALLIB) + +.if defined(INTERNALLIB) && ${MK_PIE} != "no" +PIEOBJS+= ${OBJS:.o=.pieo} +DEPENDOBJS+= ${PIEOBJS} +CLEANFILES+= ${PIEOBJS} + +_LIBS+= lib${LIB_PRIVATE}${LIB}_pie.a + +lib${LIB_PRIVATE}${LIB}_pie.a: ${PIEOBJS} + @${ECHO} building pie ${LIB} library + @rm -f ${.TARGET} + ${AR} ${ARFLAGS} ${.TARGET} ${PIEOBJS} ${ARADD} + ${RANLIB} ${RANLIBFLAGS} ${.TARGET} +.endif .if defined(_SKIP_BUILD) all: .else .if defined(_LIBS) && !empty(_LIBS) all: ${_LIBS} .endif .if ${MK_MAN} != "no" && !defined(LIBRARIES_ONLY) all: all-man .endif .endif CLEANFILES+= ${_LIBS} _EXTRADEPEND: .if !defined(NO_EXTRADEPEND) && defined(SHLIB_NAME) .if defined(DPADD) && !empty(DPADD) echo ${SHLIB_NAME_FULL}: ${DPADD} >> ${DEPENDFILE} .endif .endif .if !target(install) .if defined(PRECIOUSLIB) .if !defined(NO_FSCHG) SHLINSTALLFLAGS+= -fschg .endif .endif # Install libraries with -S to avoid risk of modifying in-use libraries when # installing to a running system. It is safe to avoid this for NO_ROOT builds # that are only creating an image. 
.if !defined(NO_SAFE_LIBINSTALL) && !defined(NO_ROOT) SHLINSTALLFLAGS+= -S .endif _INSTALLFLAGS:= ${INSTALLFLAGS} .for ie in ${INSTALLFLAGS_EDIT} _INSTALLFLAGS:= ${_INSTALLFLAGS${ie}} .endfor _SHLINSTALLFLAGS:= ${SHLINSTALLFLAGS} .for ie in ${INSTALLFLAGS_EDIT} _SHLINSTALLFLAGS:= ${_SHLINSTALLFLAGS${ie}} .endfor .if !defined(INTERNALLIB) realinstall: _libinstall .ORDER: beforeinstall _libinstall _libinstall: .if defined(LIB) && !empty(LIB) && ${MK_INSTALLLIB} != "no" ${INSTALL} ${TAG_ARGS:D${TAG_ARGS},development} -C -o ${LIBOWN} -g ${LIBGRP} -m ${LIBMODE} \ ${_INSTALLFLAGS} lib${LIB_PRIVATE}${LIB}.a ${DESTDIR}${_LIBDIR}/ .endif .if ${MK_PROFILE} != "no" && defined(LIB) && !empty(LIB) ${INSTALL} ${TAG_ARGS:D${TAG_ARGS},profile} -C -o ${LIBOWN} -g ${LIBGRP} -m ${LIBMODE} \ ${_INSTALLFLAGS} lib${LIB_PRIVATE}${LIB}_p.a ${DESTDIR}${_LIBDIR}/ .endif .if defined(SHLIB_NAME) ${INSTALL} ${TAG_ARGS} ${STRIP} -o ${LIBOWN} -g ${LIBGRP} -m ${LIBMODE} \ ${_INSTALLFLAGS} ${_SHLINSTALLFLAGS} \ ${SHLIB_NAME} ${DESTDIR}${_SHLIBDIR}/ .if ${MK_DEBUG_FILES} != "no" .if defined(DEBUGMKDIR) ${INSTALL} ${TAG_ARGS:D${TAG_ARGS},debug} -d ${DESTDIR}${DEBUGFILEDIR}/ .endif ${INSTALL} ${TAG_ARGS:D${TAG_ARGS},debug} -o ${LIBOWN} -g ${LIBGRP} -m ${DEBUGMODE} \ ${_INSTALLFLAGS} \ ${SHLIB_NAME}.debug ${DESTDIR}${DEBUGFILEDIR}/ .endif .if defined(SHLIB_LINK) .if commands(${SHLIB_LINK:R}.ld) ${INSTALL} ${TAG_ARGS:D${TAG_ARGS},development} -S -C -o ${LIBOWN} -g ${LIBGRP} -m ${LIBMODE} \ ${_INSTALLFLAGS} ${SHLIB_LINK:R}.ld \ ${DESTDIR}${_LIBDIR}/${SHLIB_LINK} .for _SHLIB_LINK_LINK in ${SHLIB_LDSCRIPT_LINKS} ${INSTALL_LIBSYMLINK} ${SHLIB_LINK} ${DESTDIR}${_LIBDIR}/${_SHLIB_LINK_LINK} .endfor .else .if ${_SHLIBDIR} == ${_LIBDIR} .if ${SHLIB_LINK:Mlib*} ${INSTALL_RSYMLINK} ${TAG_ARGS:D${TAG_ARGS},development} ${SHLIB_NAME} ${DESTDIR}${_LIBDIR}/${SHLIB_LINK} .else ${INSTALL_RSYMLINK} ${TAG_ARGS} ${DESTDIR}${_SHLIBDIR}/${SHLIB_NAME} \ ${DESTDIR}${_LIBDIR}/${SHLIB_LINK} .endif .else .if ${SHLIB_LINK:Mlib*} ${INSTALL_RSYMLINK} ${TAG_ARGS:D${TAG_ARGS},development} ${DESTDIR}${_SHLIBDIR}/${SHLIB_NAME} \ ${DESTDIR}${_LIBDIR}/${SHLIB_LINK} .else ${INSTALL_RSYMLINK} ${TAG_ARGS} ${DESTDIR}${_SHLIBDIR}/${SHLIB_NAME} \ ${DESTDIR}${_LIBDIR}/${SHLIB_LINK} .endif .if exists(${DESTDIR}${_LIBDIR}/${SHLIB_NAME}) -chflags noschg ${DESTDIR}${_LIBDIR}/${SHLIB_NAME} rm -f ${DESTDIR}${_LIBDIR}/${SHLIB_NAME} .endif .endif .endif # SHLIB_LDSCRIPT .endif # SHLIB_LINK .endif # SHIB_NAME .if defined(INSTALL_PIC_ARCHIVE) && defined(LIB) && !empty(LIB) && ${MK_TOOLCHAIN} != "no" ${INSTALL} ${TAG_ARGS:D${TAG_ARGS},development} -o ${LIBOWN} -g ${LIBGRP} -m ${LIBMODE} \ ${_INSTALLFLAGS} lib${LIB}_pic.a ${DESTDIR}${_LIBDIR}/ .endif .endif # !defined(INTERNALLIB) .if !defined(LIBRARIES_ONLY) .include .include .include .include .endif LINKOWN?= ${LIBOWN} LINKGRP?= ${LIBGRP} LINKMODE?= ${LIBMODE} SYMLINKOWN?= ${LIBOWN} SYMLINKGRP?= ${LIBGRP} .include .if ${MK_MAN} != "no" && !defined(LIBRARIES_ONLY) realinstall: maninstall .ORDER: beforeinstall maninstall .endif .endif .if ${MK_MAN} != "no" && !defined(LIBRARIES_ONLY) .include .endif .if defined(LIB) && !empty(LIB) OBJS_DEPEND_GUESS+= ${SRCS:M*.h} .for _S in ${SRCS:N*.[hly]} OBJS_DEPEND_GUESS.${_S:${OBJS_SRCS_FILTER:ts:}}.po+= ${_S} .endfor .endif .if defined(SHLIB_NAME) || \ defined(INSTALL_PIC_ARCHIVE) && defined(LIB) && !empty(LIB) .for _S in ${SRCS:N*.[hly]} OBJS_DEPEND_GUESS.${_S:${OBJS_SRCS_FILTER:ts:}}.pico+= ${_S} .endfor .endif .if defined(BUILD_NOSSP_PIC_ARCHIVE) && defined(LIB) && !empty(LIB) .for _S in 
${SRCS:N*.[hly]} OBJS_DEPEND_GUESS.${_S:${OBJS_SRCS_FILTER:ts:}}.nossppico+= ${_S} .endfor .endif .if defined(HAS_TESTS) MAKE+= MK_MAKE_CHECK_USE_SANDBOX=yes SUBDIR_TARGETS+= check TESTS_LD_LIBRARY_PATH+= ${.OBJDIR} .endif .include .include .include .include Index: projects/import-googletest-1.8.1/share/mk/bsd.opts.mk =================================================================== --- projects/import-googletest-1.8.1/share/mk/bsd.opts.mk (revision 344270) +++ projects/import-googletest-1.8.1/share/mk/bsd.opts.mk (revision 344271) @@ -1,111 +1,112 @@ # $FreeBSD$ # # Option file for src builds. # # Users define WITH_FOO and WITHOUT_FOO on the command line or in /etc/src.conf # and /etc/make.conf files. These translate in the build system to MK_FOO={yes,no} # with (usually) sensible defaults. # # Makefiles must include bsd.opts.mk after defining specific MK_FOO options that # are applicable for that Makefile (typically there are none, but sometimes there # are exceptions). Recursive makes usually add MK_FOO=no for options that they wish # to omit from that make. # # Makefiles must include bsd.mkopt.mk before they test the value of any MK_FOO # variable. # # Makefiles may also assume that this file is included by bsd.own.mk should it # need variables defined there prior to the end of the Makefile where # bsd.{subdir,lib.bin}.mk is traditionally included. # # The old-style YES_FOO and NO_FOO are being phased out. No new instances of them # should be added. Old instances should be removed since they were just to # bridge the gap between FreeBSD 4 and FreeBSD 5. # # Makefiles should never test WITH_FOO or WITHOUT_FOO directly (although an # exception is made for _WITHOUT_SRCCONF which turns off this mechanism # completely). # .if !target(____) ____: .if !defined(_WITHOUT_SRCCONF) # # Define MK_* variables (which are either "yes" or "no") for users # to set via WITH_*/WITHOUT_* in /etc/src.conf and override in the # make(1) environment. # These should be tested with `== "no"' or `!= "no"' in makefiles. # The NO_* variables should only be set by makefiles for variables # that haven't been converted over. # # Only these options are used by bsd.*.mk. KERBEROS and OPENSSH are # unfortunately needed to support statically linking the entire # tree. su(1) wouldn't link since it depends on PAM which depends on # ssh libraries when building with OPENSSH, and likewise for KERBEROS. # All other variables used to build /usr/src live in src.opts.mk # and variables from both files are documented in src.conf(5). __DEFAULT_YES_OPTIONS = \ ASSERT_DEBUG \ DEBUG_FILES \ DOCCOMPRESS \ INCLUDES \ INSTALLLIB \ KERBEROS \ MAKE_CHECK_USE_SANDBOX \ MAN \ MANCOMPRESS \ NIS \ NLS \ OPENSSH \ PROFILE \ SSP \ SYMVER \ TESTS \ TOOLCHAIN \ WARNS __DEFAULT_NO_OPTIONS = \ BIND_NOW \ CCACHE_BUILD \ CTF \ INSTALL_AS_USER \ + PIE \ RETPOLINE \ STALE_STAGED __DEFAULT_DEPENDENT_OPTIONS = \ MAKE_CHECK_USE_SANDBOX/TESTS \ STAGING_MAN/STAGING \ STAGING_PROG/STAGING \ STALE_STAGED/STAGING \ .include # # Supported NO_* options (if defined, MK_* will be forced to "no", # regardless of the user's setting). # # These are transitional and will disappear in FreeBSD 12. # .for var in \ CTF \ DEBUG_FILES \ INSTALLLIB \ MAN \ PROFILE \ WARNS .if defined(NO_${var}) .warning "NO_${var} is defined, but deprecated. Please use MK_${var}=no instead."
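# The assignment below makes the legacy spelling behave exactly like the
# supported one, e.g. a stray NO_PROFILE=yes in /etc/make.conf still ends
# up as MK_PROFILE=no.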
MK_${var}:=no .endif .endfor .include .endif # !_WITHOUT_SRCCONF .endif Index: projects/import-googletest-1.8.1/share/mk/bsd.prog.mk =================================================================== --- projects/import-googletest-1.8.1/share/mk/bsd.prog.mk (revision 344270) +++ projects/import-googletest-1.8.1/share/mk/bsd.prog.mk (revision 344271) @@ -1,332 +1,337 @@ # from: @(#)bsd.prog.mk 5.26 (Berkeley) 6/25/91 # $FreeBSD$ .include .include .SUFFIXES: .out .o .bc .c .cc .cpp .cxx .C .m .y .l .ll .ln .s .S .asm # XXX The use of COPTS in modern makefiles is discouraged. .if defined(COPTS) .warning ${.CURDIR}: COPTS should be CFLAGS. CFLAGS+=${COPTS} .endif .if ${MK_ASSERT_DEBUG} == "no" CFLAGS+= -DNDEBUG NO_WERROR= .endif .if defined(DEBUG_FLAGS) CFLAGS+=${DEBUG_FLAGS} CXXFLAGS+=${DEBUG_FLAGS} .if ${MK_CTF} != "no" && ${DEBUG_FLAGS:M-g} != "" CTFFLAGS+= -g .endif .endif .if defined(PROG_CXX) PROG= ${PROG_CXX} .endif .if !empty(LDFLAGS:M-Wl,*--oformat,*) || !empty(LDFLAGS:M-static) MK_DEBUG_FILES= no .endif # ELF hardening knobs .if ${MK_BIND_NOW} != "no" LDFLAGS+= -Wl,-znow .endif +.if ${MK_PIE} != "no" && (!defined(NO_SHARED) || ${NO_SHARED:tl} == "no") +CFLAGS+= -fPIE +CXXFLAGS+= -fPIE +LDFLAGS+= -pie +.endif .if ${MK_RETPOLINE} != "no" CFLAGS+= -mretpoline CXXFLAGS+= -mretpoline # retpolineplt is broken with static linking (PR 233336) -.if !defined(NO_SHARED) || ${NO_SHARED} == "no" || ${NO_SHARED} == "NO" +.if !defined(NO_SHARED) || ${NO_SHARED:tl} == "no" LDFLAGS+= -Wl,-zretpolineplt .endif .endif .if defined(CRUNCH_CFLAGS) CFLAGS+=${CRUNCH_CFLAGS} .else .if ${MK_DEBUG_FILES} != "no" && empty(DEBUG_FLAGS:M-g) && \ empty(DEBUG_FLAGS:M-gdwarf-*) CFLAGS+= ${DEBUG_FILES_CFLAGS} CTFFLAGS+= -g .endif .endif .if !defined(DEBUG_FLAGS) STRIP?= -s .endif .if defined(NO_ROOT) .if !defined(TAGS) || ! 
${TAGS:Mpackage=*} TAGS+= package=${PACKAGE:Uruntime} .endif TAG_ARGS= -T ${TAGS:[*]:S/ /,/g} .endif -.if defined(NO_SHARED) && (${NO_SHARED} != "no" && ${NO_SHARED} != "NO") +.if defined(NO_SHARED) && ${NO_SHARED:tl} != "no" LDFLAGS+= -static .endif .if ${MK_DEBUG_FILES} != "no" PROG_FULL=${PROG}.full # Use ${DEBUGDIR} for base system debug files, else .debug subdirectory .if defined(BINDIR) && (\ ${BINDIR} == "/bin" ||\ ${BINDIR:C%/libexec(/.*)?%/libexec%} == "/libexec" ||\ ${BINDIR} == "/sbin" ||\ ${BINDIR:C%/usr/(bin|bsdinstall|libexec|lpr|sendmail|sm.bin|sbin|tests)(/.*)?%/usr/bin%} == "/usr/bin" ||\ ${BINDIR} == "/usr/lib" \ ) DEBUGFILEDIR= ${DEBUGDIR}${BINDIR} .else DEBUGFILEDIR?= ${BINDIR}/.debug .endif .if !exists(${DESTDIR}${DEBUGFILEDIR}) DEBUGMKDIR= .endif .else PROG_FULL= ${PROG} .endif .if defined(PROG) PROGNAME?= ${PROG} .if defined(SRCS) OBJS+= ${SRCS:N*.h:${OBJS_SRCS_FILTER:ts:}:S/$/.o/g} # LLVM bitcode / textual IR representations of the program BCOBJS+=${SRCS:N*.[hsS]:N*.asm:${OBJS_SRCS_FILTER:ts:}:S/$/.bco/g} LLOBJS+=${SRCS:N*.[hsS]:N*.asm:${OBJS_SRCS_FILTER:ts:}:S/$/.llo/g} .if target(beforelinking) beforelinking: ${OBJS} ${PROG_FULL}: beforelinking .endif ${PROG_FULL}: ${OBJS} .if defined(PROG_CXX) ${CXX:N${CCACHE_BIN}} ${CXXFLAGS:N-M*} ${LDFLAGS} -o ${.TARGET} \ ${OBJS} ${LDADD} .else ${CC:N${CCACHE_BIN}} ${CFLAGS:N-M*} ${LDFLAGS} -o ${.TARGET} ${OBJS} \ ${LDADD} .endif .if ${MK_CTF} != "no" ${CTFMERGE} ${CTFFLAGS} -o ${.TARGET} ${OBJS} .endif .else # !defined(SRCS) .if !target(${PROG}) .if defined(PROG_CXX) SRCS= ${PROG}.cc .else SRCS= ${PROG}.c .endif # Always make an intermediate object file because: # - it saves time rebuilding when only the library has changed # - the name of the object gets put into the executable symbol table instead of # the name of a variable temporary object. # - it's useful to keep objects around for crunching. 
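# With no SRCS given, a one-file program is assumed: SRCS defaults to
# ${PROG}.c (or ${PROG}.cc for PROG_CXX), so exactly one intermediate
# object is built below.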
OBJS+= ${PROG}.o BCOBJS+= ${PROG}.bc LLOBJS+= ${PROG}.ll CLEANFILES+= ${PROG}.o ${PROG}.bc ${PROG}.ll .if target(beforelinking) beforelinking: ${OBJS} ${PROG_FULL}: beforelinking .endif ${PROG_FULL}: ${OBJS} .if defined(PROG_CXX) ${CXX:N${CCACHE_BIN}} ${CXXFLAGS:N-M*} ${LDFLAGS} -o ${.TARGET} \ ${OBJS} ${LDADD} .else ${CC:N${CCACHE_BIN}} ${CFLAGS:N-M*} ${LDFLAGS} -o ${.TARGET} ${OBJS} \ ${LDADD} .endif .if ${MK_CTF} != "no" ${CTFMERGE} ${CTFFLAGS} -o ${.TARGET} ${OBJS} .endif .endif # !target(${PROG}) .endif # !defined(SRCS) .if ${MK_DEBUG_FILES} != "no" ${PROG}: ${PROG_FULL} ${PROGNAME}.debug ${OBJCOPY} --strip-debug --add-gnu-debuglink=${PROGNAME}.debug \ ${PROG_FULL} ${.TARGET} ${PROGNAME}.debug: ${PROG_FULL} ${OBJCOPY} --only-keep-debug ${PROG_FULL} ${.TARGET} .endif .if defined(LLVM_LINK) ${PROG_FULL}.bc: ${BCOBJS} ${LLVM_LINK} -o ${.TARGET} ${BCOBJS} ${PROG_FULL}.ll: ${LLOBJS} ${LLVM_LINK} -S -o ${.TARGET} ${LLOBJS} CLEANFILES+= ${PROG_FULL}.bc ${PROG_FULL}.ll .endif # defined(LLVM_LINK) .if ${MK_MAN} != "no" && !defined(MAN) && \ !defined(MAN1) && !defined(MAN2) && !defined(MAN3) && \ !defined(MAN4) && !defined(MAN5) && !defined(MAN6) && \ !defined(MAN7) && !defined(MAN8) && !defined(MAN9) MAN= ${PROG}.1 MAN1= ${MAN} .endif .endif # defined(PROG) .if defined(_SKIP_BUILD) all: .else all: ${PROG} ${SCRIPTS} .if ${MK_MAN} != "no" all: all-man .endif .endif .if defined(PROG) CLEANFILES+= ${PROG} ${PROG}.bc ${PROG}.ll .if ${MK_DEBUG_FILES} != "no" CLEANFILES+= ${PROG_FULL} ${PROGNAME}.debug .endif .endif .if defined(OBJS) CLEANFILES+= ${OBJS} ${BCOBJS} ${LLOBJS} .endif .include .if defined(PROG) .if !defined(NO_EXTRADEPEND) _EXTRADEPEND: .if defined(LDFLAGS) && !empty(LDFLAGS:M-nostdlib) .if defined(DPADD) && !empty(DPADD) echo ${PROG_FULL}: ${DPADD} >> ${DEPENDFILE} .endif .else echo ${PROG_FULL}: ${LIBC} ${DPADD} >> ${DEPENDFILE} .if defined(PROG_CXX) .if ${COMPILER_TYPE} == "clang" && empty(CXXFLAGS:M-stdlib=libstdc++) echo ${PROG_FULL}: ${LIBCPLUSPLUS} >> ${DEPENDFILE} .else echo ${PROG_FULL}: ${LIBSTDCPLUSPLUS} >> ${DEPENDFILE} .endif .endif .endif .endif # !defined(NO_EXTRADEPEND) .endif .if !target(install) .if defined(PRECIOUSPROG) .if !defined(NO_FSCHG) INSTALLFLAGS+= -fschg .endif INSTALLFLAGS+= -S .endif _INSTALLFLAGS:= ${INSTALLFLAGS} .for ie in ${INSTALLFLAGS_EDIT} _INSTALLFLAGS:= ${_INSTALLFLAGS${ie}} .endfor .if !target(realinstall) && !defined(INTERNALPROG) realinstall: _proginstall .ORDER: beforeinstall _proginstall _proginstall: .if defined(PROG) ${INSTALL} ${TAG_ARGS} ${STRIP} -o ${BINOWN} -g ${BINGRP} -m ${BINMODE} \ ${_INSTALLFLAGS} ${PROG} ${DESTDIR}${BINDIR}/${PROGNAME} .if ${MK_DEBUG_FILES} != "no" .if defined(DEBUGMKDIR) ${INSTALL} ${TAG_ARGS:D${TAG_ARGS},debug} -d ${DESTDIR}${DEBUGFILEDIR}/ .endif ${INSTALL} ${TAG_ARGS:D${TAG_ARGS},debug} -o ${BINOWN} -g ${BINGRP} -m ${DEBUGMODE} \ ${PROGNAME}.debug ${DESTDIR}${DEBUGFILEDIR}/${PROGNAME}.debug .endif .endif .endif # !target(realinstall) .if defined(SCRIPTS) && !empty(SCRIPTS) realinstall: _scriptsinstall .ORDER: beforeinstall _scriptsinstall SCRIPTSDIR?= ${BINDIR} SCRIPTSOWN?= ${BINOWN} SCRIPTSGRP?= ${BINGRP} SCRIPTSMODE?= ${BINMODE} STAGE_AS_SETS+= scripts stage_as.scripts: ${SCRIPTS} FLAGS.stage_as.scripts= -m ${SCRIPTSMODE} STAGE_FILES_DIR.scripts= ${STAGE_OBJTOP} .for script in ${SCRIPTS} .if defined(SCRIPTSNAME) SCRIPTSNAME_${script:T}?= ${SCRIPTSNAME} .else SCRIPTSNAME_${script:T}?= ${script:T:R} .endif SCRIPTSDIR_${script:T}?= ${SCRIPTSDIR} SCRIPTSOWN_${script:T}?= ${SCRIPTSOWN} SCRIPTSGRP_${script:T}?= 
${SCRIPTSGRP} SCRIPTSMODE_${script:T}?= ${SCRIPTSMODE} STAGE_AS_${script:T}= ${SCRIPTSDIR_${script:T}}/${SCRIPTSNAME_${script:T}} _scriptsinstall: _SCRIPTSINS_${script:T} _SCRIPTSINS_${script:T}: ${script} ${INSTALL} ${TAG_ARGS} -o ${SCRIPTSOWN_${.ALLSRC:T}} \ -g ${SCRIPTSGRP_${.ALLSRC:T}} -m ${SCRIPTSMODE_${.ALLSRC:T}} \ ${.ALLSRC} \ ${DESTDIR}${SCRIPTSDIR_${.ALLSRC:T}}/${SCRIPTSNAME_${.ALLSRC:T}} .endfor .endif NLSNAME?= ${PROG} .include .include .include .include LINKOWN?= ${BINOWN} LINKGRP?= ${BINGRP} LINKMODE?= ${BINMODE} .include .if ${MK_MAN} != "no" realinstall: maninstall .ORDER: beforeinstall maninstall .endif .endif # !target(install) .if ${MK_MAN} != "no" .include .endif .if defined(HAS_TESTS) MAKE+= MK_MAKE_CHECK_USE_SANDBOX=yes SUBDIR_TARGETS+= check TESTS_LD_LIBRARY_PATH+= ${.OBJDIR} TESTS_PATH+= ${.OBJDIR} .endif .if defined(PROG) OBJS_DEPEND_GUESS+= ${SRCS:M*.h} .endif .include .include .include .include Index: projects/import-googletest-1.8.1/share/mk/src.libnames.mk =================================================================== --- projects/import-googletest-1.8.1/share/mk/src.libnames.mk (revision 344270) +++ projects/import-googletest-1.8.1/share/mk/src.libnames.mk (revision 344271) @@ -1,635 +1,641 @@ # $FreeBSD$ # # The include file define library names suitable # for INTERNALLIB and PRIVATELIB definition .if !target(____) .error src.libnames.mk cannot be included directly. .endif .if !target(____) ____: .include _PRIVATELIBS= \ atf_c \ atf_cxx \ bsdstat \ devdctl \ event \ gmock \ gtest \ gmock_main \ gtest_main \ heimipcc \ heimipcs \ ifconfig \ ldns \ sqlite3 \ ssh \ ucl \ unbound \ zstd _INTERNALLIBS= \ amu \ bsnmptools \ c_nossp_pic \ cron \ elftc \ fifolog \ ipf \ lpr \ netbsd \ ntp \ ntpevent \ openbsd \ opts \ parse \ pe \ pmcstat \ sl \ sm \ smdb \ smutil \ telnet \ vers _LIBRARIES= \ ${_PRIVATELIBS} \ ${_INTERNALLIBS} \ ${LOCAL_LIBRARIES} \ 80211 \ alias \ archive \ asn1 \ auditd \ avl \ be \ begemot \ bluetooth \ bsdxml \ bsm \ bsnmp \ bz2 \ c \ c_pic \ calendar \ cam \ casper \ cap_dns \ cap_fileargs \ cap_grp \ cap_pwd \ cap_random \ cap_sysctl \ cap_syslog \ com_err \ compiler_rt \ crypt \ crypto \ ctf \ cuse \ cxxrt \ devctl \ devdctl \ devinfo \ devstat \ dialog \ dl \ dpv \ dtrace \ dwarf \ edit \ efivar \ elf \ execinfo \ fetch \ figpar \ geom \ gnuregex \ gpio \ gssapi \ gssapi_krb5 \ hdb \ heimbase \ heimntlm \ heimsqlite \ hx509 \ ipsec \ ipt \ jail \ kadm5clnt \ kadm5srv \ kafs5 \ kdc \ kiconv \ krb5 \ kvm \ l \ lzma \ m \ magic \ md \ memstat \ mp \ mt \ nandfs \ ncurses \ ncursesw \ netgraph \ ngatm \ nv \ nvpair \ opencsd \ opie \ pam \ panel \ panelw \ pcap \ pcsclite \ pjdlog \ pmc \ proc \ procstat \ pthread \ radius \ regex \ roken \ rpcsec_gss \ rpcsvc \ rt \ rtld_db \ sbuf \ sdp \ sm \ smb \ ssl \ ssp_nonshared \ stdthreads \ supcplusplus \ sysdecode \ tacplus \ termcap \ termcapw \ ufs \ ugidfw \ ulog \ umem \ usb \ usbhid \ util \ uutil \ vmmapi \ wind \ wrap \ xo \ y \ ypclnt \ z \ zfs_core \ zfs \ zpool \ .if ${MK_BLACKLIST} != "no" _LIBRARIES+= \ blacklist \ .endif .if ${MK_OFED} != "no" _LIBRARIES+= \ cxgb4 \ ibcm \ ibmad \ ibnetdisc \ ibumad \ ibverbs \ mlx4 \ mlx5 \ rdmacm \ osmcomp \ opensm \ osmvendor .endif # Each library's LIBADD needs to be duplicated here for static linkage of # 2nd+ order consumers. Auto-generating this would be better. 
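# Example: a statically linked consumer with LIBADD+=archive must also be
# linked against libarchive's own dependencies (z, bz2, lzma, bsdxml and,
# depending on MK_OPENSSL, crypto or md); the _DP_archive entries below
# record exactly that, so LDADD_archive can be expanded recursively.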
_DP_80211= sbuf bsdxml _DP_archive= z bz2 lzma bsdxml _DP_zstd= pthread .if ${MK_BLACKLIST} != "no" _DP_blacklist+= pthread .endif _DP_crypto= pthread .if ${MK_OPENSSL} != "no" _DP_archive+= crypto .else _DP_archive+= md .endif _DP_sqlite3= pthread _DP_ssl= crypto _DP_ssh= crypto crypt z .if ${MK_LDNS} != "no" _DP_ssh+= ldns .endif _DP_edit= ncursesw .if ${MK_OPENSSL} != "no" _DP_bsnmp= crypto .endif _DP_geom= bsdxml sbuf _DP_cam= sbuf _DP_kvm= elf _DP_casper= nv _DP_cap_dns= nv _DP_cap_fileargs= nv _DP_cap_grp= nv _DP_cap_pwd= nv _DP_cap_random= nv _DP_cap_sysctl= nv _DP_cap_syslog= nv .if ${MK_OFED} != "no" _DP_pcap= ibverbs mlx5 .endif _DP_pjdlog= util _DP_opie= md _DP_usb= pthread _DP_unbound= ssl crypto pthread _DP_rt= pthread .if ${MK_OPENSSL} == "no" _DP_radius= md .else _DP_radius= crypto .endif _DP_rtld_db= elf procstat _DP_procstat= kvm util elf .if ${MK_CXX} == "yes" .if ${MK_LIBCPLUSPLUS} != "no" _DP_proc= cxxrt .else _DP_proc= supcplusplus .endif .endif .if ${MK_CDDL} != "no" _DP_proc+= ctf .endif _DP_proc+= elf procstat rtld_db util _DP_mp= crypto _DP_memstat= kvm _DP_magic= z _DP_mt= sbuf bsdxml _DP_ldns= ssl crypto .if ${MK_OPENSSL} != "no" _DP_fetch= ssl crypto .else _DP_fetch= md .endif _DP_execinfo= elf _DP_dwarf= elf _DP_dpv= dialog figpar util ncursesw _DP_dialog= ncursesw m _DP_cuse= pthread _DP_atf_cxx= atf_c _DP_gtest= pthread _DP_gmock= gtest _DP_gmock_main= gmock _DP_gtest_main= gtest _DP_devstat= kvm _DP_pam= radius tacplus opie md util .if ${MK_KERBEROS} != "no" _DP_pam+= krb5 .endif .if ${MK_OPENSSH} != "no" _DP_pam+= ssh .endif .if ${MK_NIS} != "no" _DP_pam+= ypclnt .endif _DP_roken= crypt _DP_kadm5clnt= com_err krb5 roken _DP_kadm5srv= com_err hdb krb5 roken _DP_heimntlm= crypto com_err krb5 roken _DP_hx509= asn1 com_err crypto roken wind _DP_hdb= asn1 com_err krb5 roken sqlite3 _DP_asn1= com_err roken _DP_kdc= roken hdb hx509 krb5 heimntlm asn1 crypto _DP_wind= com_err roken _DP_heimbase= pthread _DP_heimipcc= heimbase roken pthread _DP_heimipcs= heimbase roken pthread _DP_kafs5= asn1 krb5 roken _DP_krb5+= asn1 com_err crypt crypto hx509 roken wind heimbase heimipcc _DP_gssapi_krb5+= gssapi krb5 crypto roken asn1 com_err _DP_lzma= pthread _DP_ucl= m _DP_vmmapi= util _DP_opencsd= cxxrt _DP_ctf= z _DP_dtrace= ctf elf proc pthread rtld_db _DP_xo= util # The libc dependencies are not strictly needed but are defined to make the # assert happy. 
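# ("The assert" is the consistency check near the end of this file, which
# insists that _DP_${LIB} match the library's own LIBADD when the library
# itself is being built.)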
_DP_c= compiler_rt .if ${MK_SSP} != "no" _DP_c+= ssp_nonshared .endif _DP_stdthreads= pthread _DP_tacplus= md _DP_panel= ncurses _DP_panelw= ncursesw _DP_rpcsec_gss= gssapi _DP_smb= kiconv _DP_ulog= md _DP_fifolog= z _DP_ipf= kvm _DP_zfs= md pthread umem util uutil m nvpair avl bsdxml geom nvpair z \ zfs_core _DP_zfs_core= nvpair _DP_zpool= md pthread z nvpair avl umem _DP_be= zfs nvpair # OFED support .if ${MK_OFED} != "no" _DP_cxgb4= ibverbs pthread _DP_ibcm= ibverbs _DP_ibmad= ibumad _DP_ibnetdisc= osmcomp ibmad ibumad _DP_ibumad= _DP_ibverbs= _DP_mlx4= ibverbs pthread _DP_mlx5= ibverbs pthread _DP_rdmacm= ibverbs _DP_osmcomp= pthread _DP_opensm= pthread _DP_osmvendor= ibumad pthread .endif # Define special cases LDADD_supcplusplus= -lsupc++ LIBATF_C= ${LIBDESTDIR}${LIBDIR_BASE}/libprivateatf-c.a LIBATF_CXX= ${LIBDESTDIR}${LIBDIR_BASE}/libprivateatf-c++.a LDADD_atf_c= -lprivateatf-c LDADD_atf_cxx= -lprivateatf-c++ LIBGMOCK= ${LIBDESTDIR}${LIBDIR_BASE}/libprivategmock.a LIBGTEST= ${LIBDESTDIR}${LIBDIR_BASE}/libprivategtest.a LIBGMOCKMAIN= ${LIBDESTDIR}${LIBDIR_BASE}/libprivategmockmain.a LIBGTESTMAIN= ${LIBDESTDIR}${LIBDIR_BASE}/libprivategtestmain.a LDADD_gmock= -lprivategmock LDADD_gtest= -lprivategtest LDADD_gmock_main= -lprivategmock_main LDADD_gtest_main= -lprivategtest_main .for _l in ${_PRIVATELIBS} LIB${_l:tu}?= ${LIBDESTDIR}${LIBDIR_BASE}/libprivate${_l}.a .endfor +.if ${MK_PIE} != "no" +PIE_SUFFIX= _pie +.endif + .for _l in ${_LIBRARIES} .if ${_INTERNALLIBS:M${_l}} || !defined(SYSROOT) LDADD_${_l}_L+= -L${LIB${_l:tu}DIR} .endif DPADD_${_l}?= ${LIB${_l:tu}} .if ${_PRIVATELIBS:M${_l}} LDADD_${_l}?= -lprivate${_l} +.elif ${_INTERNALLIBS:M${_l}} +LDADD_${_l}?= ${LDADD_${_l}_L} -l${_l:S/${PIE_SUFFIX}//}${PIE_SUFFIX} .else LDADD_${_l}?= ${LDADD_${_l}_L} -l${_l} .endif # Add in all dependencies for static linkage. .if defined(_DP_${_l}) && (${_INTERNALLIBS:M${_l}} || \ - (defined(NO_SHARED) && (${NO_SHARED} != "no" && ${NO_SHARED} != "NO"))) + (defined(NO_SHARED) && ${NO_SHARED:tl} != "no")) .for _d in ${_DP_${_l}} DPADD_${_l}+= ${DPADD_${_d}} LDADD_${_l}+= ${LDADD_${_d}} .endfor .endif .endfor # These are special cases where the library is broken and anything that uses # it needs to add more dependencies. Broken usually means that it has a # cyclic dependency and cannot link its own dependencies. This is bad, please # fix the library instead. # Unless the library itself is broken then the proper place to define # dependencies is _DP_* above. # libatf-c++ exposes libatf-c abi hence we need to explicit link to atf_c for # atf_cxx DPADD_atf_cxx+= ${DPADD_atf_c} LDADD_atf_cxx+= ${LDADD_atf_c} DPADD_gmock+= ${DPADD_gtest} LDADD_gmock+= ${LDADD_gtest} DPADD_gmock_main+= ${DPADD_gmock} LDADD_gmock_main+= ${LDADD_gmock} DPADD_gtest_main+= ${DPADD_gtest} LDADD_gtest_main+= ${LDADD_gtest} # Detect LDADD/DPADD that should be LIBADD, before modifying LDADD here. _BADLDADD= .for _l in ${LDADD:M-l*:N-l*/*:C,^-l,,} .if ${_LIBRARIES:M${_l}} && !${_PRIVATELIBS:M${_l}} _BADLDADD+= ${_l} .endif .endfor .if !empty(_BADLDADD) .error ${.CURDIR}: These libraries should be LIBADD+=foo rather than DPADD/LDADD+=-lfoo: ${_BADLDADD} .endif .for _l in ${LIBADD} DPADD+= ${DPADD_${_l}} LDADD+= ${LDADD_${_l}} .endfor # INTERNALLIB definitions. 
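# When WITH_PIE is set, PIE_SUFFIX above expands to "_pie" and the
# INTERNALLIB paths below resolve to e.g. libelftc_pie.a instead of
# libelftc.a, matching the lib${LIB}_pie.a archives bsd.lib.mk now builds
# for INTERNALLIBs.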
LIBELFTCDIR= ${OBJTOP}/lib/libelftc -LIBELFTC?= ${LIBELFTCDIR}/libelftc.a +LIBELFTC?= ${LIBELFTCDIR}/libelftc${PIE_SUFFIX}.a LIBPEDIR= ${OBJTOP}/lib/libpe -LIBPE?= ${LIBPEDIR}/libpe.a +LIBPE?= ${LIBPEDIR}/libpe${PIE_SUFFIX}.a LIBOPENBSDDIR= ${OBJTOP}/lib/libopenbsd -LIBOPENBSD?= ${LIBOPENBSDDIR}/libopenbsd.a +LIBOPENBSD?= ${LIBOPENBSDDIR}/libopenbsd${PIE_SUFFIX}.a LIBSMDIR= ${OBJTOP}/lib/libsm -LIBSM?= ${LIBSMDIR}/libsm.a +LIBSM?= ${LIBSMDIR}/libsm${PIE_SUFFIX}.a LIBSMDBDIR= ${OBJTOP}/lib/libsmdb -LIBSMDB?= ${LIBSMDBDIR}/libsmdb.a +LIBSMDB?= ${LIBSMDBDIR}/libsmdb${PIE_SUFFIX}.a LIBSMUTILDIR= ${OBJTOP}/lib/libsmutil -LIBSMUTIL?= ${LIBSMUTILDIR}/libsmutil.a +LIBSMUTIL?= ${LIBSMUTILDIR}/libsmutil${PIE_SUFFIX}.a LIBNETBSDDIR?= ${OBJTOP}/lib/libnetbsd -LIBNETBSD?= ${LIBNETBSDDIR}/libnetbsd.a +LIBNETBSD?= ${LIBNETBSDDIR}/libnetbsd${PIE_SUFFIX}.a LIBVERSDIR?= ${OBJTOP}/kerberos5/lib/libvers -LIBVERS?= ${LIBVERSDIR}/libvers.a +LIBVERS?= ${LIBVERSDIR}/libvers${PIE_SUFFIX}.a LIBSLDIR= ${OBJTOP}/kerberos5/lib/libsl -LIBSL?= ${LIBSLDIR}/libsl.a +LIBSL?= ${LIBSLDIR}/libsl${PIE_SUFFIX}.a LIBIPFDIR= ${OBJTOP}/sbin/ipf/libipf -LIBIPF?= ${LIBIPFDIR}/libipf.a +LIBIPF?= ${LIBIPFDIR}/libipf${PIE_SUFFIX}.a LIBTELNETDIR= ${OBJTOP}/lib/libtelnet -LIBTELNET?= ${LIBTELNETDIR}/libtelnet.a +LIBTELNET?= ${LIBTELNETDIR}/libtelnet${PIE_SUFFIX}.a LIBCRONDIR= ${OBJTOP}/usr.sbin/cron/lib -LIBCRON?= ${LIBCRONDIR}/libcron.a +LIBCRON?= ${LIBCRONDIR}/libcron${PIE_SUFFIX}.a LIBNTPDIR= ${OBJTOP}/usr.sbin/ntp/libntp -LIBNTP?= ${LIBNTPDIR}/libntp.a +LIBNTP?= ${LIBNTPDIR}/libntp${PIE_SUFFIX}.a LIBNTPEVENTDIR= ${OBJTOP}/usr.sbin/ntp/libntpevent -LIBNTPEVENT?= ${LIBNTPEVENTDIR}/libntpevent.a +LIBNTPEVENT?= ${LIBNTPEVENTDIR}/libntpevent${PIE_SUFFIX}.a LIBOPTSDIR= ${OBJTOP}/usr.sbin/ntp/libopts -LIBOPTS?= ${LIBOPTSDIR}/libopts.a +LIBOPTS?= ${LIBOPTSDIR}/libopts${PIE_SUFFIX}.a LIBPARSEDIR= ${OBJTOP}/usr.sbin/ntp/libparse -LIBPARSE?= ${LIBPARSEDIR}/libparse.a +LIBPARSE?= ${LIBPARSEDIR}/libparse${PIE_SUFFIX}.a LIBLPRDIR= ${OBJTOP}/usr.sbin/lpr/common_source -LIBLPR?= ${LIBLPRDIR}/liblpr.a +LIBLPR?= ${LIBLPRDIR}/liblpr${PIE_SUFFIX}.a LIBFIFOLOGDIR= ${OBJTOP}/usr.sbin/fifolog/lib -LIBFIFOLOG?= ${LIBFIFOLOGDIR}/libfifolog.a +LIBFIFOLOG?= ${LIBFIFOLOGDIR}/libfifolog${PIE_SUFFIX}.a LIBBSNMPTOOLSDIR= ${OBJTOP}/usr.sbin/bsnmpd/tools/libbsnmptools -LIBBSNMPTOOLS?= ${LIBBSNMPTOOLSDIR}/libbsnmptools.a +LIBBSNMPTOOLS?= ${LIBBSNMPTOOLSDIR}/libbsnmptools${PIE_SUFFIX}.a LIBAMUDIR= ${OBJTOP}/usr.sbin/amd/libamu -LIBAMU?= ${LIBAMUDIR}/libamu.a +LIBAMU?= ${LIBAMUDIR}/libamu${PIE_SUFFIX}.a -LIBBE?= ${LIBBEDIR}/libbe.a +LIBBE?= ${LIBBEDIR}/libbe${PIE_SUFFIX}.a LIBPMCSTATDIR= ${OBJTOP}/lib/libpmcstat -LIBPMCSTAT?= ${LIBPMCSTATDIR}/libpmcstat.a +LIBPMCSTAT?= ${LIBPMCSTATDIR}/libpmcstat${PIE_SUFFIX}.a LIBC_NOSSP_PICDIR= ${OBJTOP}/lib/libc LIBC_NOSSP_PIC?= ${LIBC_NOSSP_PICDIR}/libc_nossp_pic.a # Define a directory for each library. This is useful for adding -L in when # not using a --sysroot or for meta mode bootstrapping when there is no # Makefile.depend. These are sorted by directory. 
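# These per-library directories feed the -L flags added through
# LDADD_*_L above and the LIB*DIR existence check at the bottom of this
# file.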
LIBAVLDIR= ${OBJTOP}/cddl/lib/libavl LIBCTFDIR= ${OBJTOP}/cddl/lib/libctf LIBDTRACEDIR= ${OBJTOP}/cddl/lib/libdtrace LIBNVPAIRDIR= ${OBJTOP}/cddl/lib/libnvpair LIBUMEMDIR= ${OBJTOP}/cddl/lib/libumem LIBUUTILDIR= ${OBJTOP}/cddl/lib/libuutil LIBZFSDIR= ${OBJTOP}/cddl/lib/libzfs LIBZFS_COREDIR= ${OBJTOP}/cddl/lib/libzfs_core LIBZPOOLDIR= ${OBJTOP}/cddl/lib/libzpool # OFED support LIBCXGB4DIR= ${OBJTOP}/lib/ofed/libcxgb4 LIBIBCMDIR= ${OBJTOP}/lib/ofed/libibcm LIBIBMADDIR= ${OBJTOP}/lib/ofed/libibmad LIBIBNETDISCDIR=${OBJTOP}/lib/ofed/libibnetdisc LIBIBUMADDIR= ${OBJTOP}/lib/ofed/libibumad LIBIBVERBSDIR= ${OBJTOP}/lib/ofed/libibverbs LIBMLX4DIR= ${OBJTOP}/lib/ofed/libmlx4 LIBMLX5DIR= ${OBJTOP}/lib/ofed/libmlx5 LIBRDMACMDIR= ${OBJTOP}/lib/ofed/librdmacm LIBOSMCOMPDIR= ${OBJTOP}/lib/ofed/complib LIBOPENSMDIR= ${OBJTOP}/lib/ofed/libopensm LIBOSMVENDORDIR=${OBJTOP}/lib/ofed/libvendor LIBDIALOGDIR= ${OBJTOP}/gnu/lib/libdialog LIBGCOVDIR= ${OBJTOP}/gnu/lib/libgcov LIBGOMPDIR= ${OBJTOP}/gnu/lib/libgomp LIBGNUREGEXDIR= ${OBJTOP}/gnu/lib/libregex LIBSSPDIR= ${OBJTOP}/gnu/lib/libssp LIBSSP_NONSHAREDDIR= ${OBJTOP}/gnu/lib/libssp/libssp_nonshared LIBSUPCPLUSPLUSDIR= ${OBJTOP}/gnu/lib/libsupc++ LIBASN1DIR= ${OBJTOP}/kerberos5/lib/libasn1 LIBGSSAPI_KRB5DIR= ${OBJTOP}/kerberos5/lib/libgssapi_krb5 LIBGSSAPI_NTLMDIR= ${OBJTOP}/kerberos5/lib/libgssapi_ntlm LIBGSSAPI_SPNEGODIR= ${OBJTOP}/kerberos5/lib/libgssapi_spnego LIBHDBDIR= ${OBJTOP}/kerberos5/lib/libhdb LIBHEIMBASEDIR= ${OBJTOP}/kerberos5/lib/libheimbase LIBHEIMIPCCDIR= ${OBJTOP}/kerberos5/lib/libheimipcc LIBHEIMIPCSDIR= ${OBJTOP}/kerberos5/lib/libheimipcs LIBHEIMNTLMDIR= ${OBJTOP}/kerberos5/lib/libheimntlm LIBHX509DIR= ${OBJTOP}/kerberos5/lib/libhx509 LIBKADM5CLNTDIR= ${OBJTOP}/kerberos5/lib/libkadm5clnt LIBKADM5SRVDIR= ${OBJTOP}/kerberos5/lib/libkadm5srv LIBKAFS5DIR= ${OBJTOP}/kerberos5/lib/libkafs5 LIBKDCDIR= ${OBJTOP}/kerberos5/lib/libkdc LIBKRB5DIR= ${OBJTOP}/kerberos5/lib/libkrb5 LIBROKENDIR= ${OBJTOP}/kerberos5/lib/libroken LIBWINDDIR= ${OBJTOP}/kerberos5/lib/libwind LIBATF_CDIR= ${OBJTOP}/lib/atf/libatf-c LIBATF_CXXDIR= ${OBJTOP}/lib/atf/libatf-c++ LIBGMOCKDIR= ${OBJTOP}/lib/googletest/gmock LIBGMOCK_MAINDIR= ${OBJTOP}/lib/googletest/gmock_main LIBGTESTDIR= ${OBJTOP}/lib/googletest/gtest LIBGTEST_MAINDIR= ${OBJTOP}/lib/googletest/gtest_main LIBALIASDIR= ${OBJTOP}/lib/libalias/libalias LIBBLACKLISTDIR= ${OBJTOP}/lib/libblacklist LIBBLOCKSRUNTIMEDIR= ${OBJTOP}/lib/libblocksruntime LIBBSNMPDIR= ${OBJTOP}/lib/libbsnmp/libbsnmp LIBCASPERDIR= ${OBJTOP}/lib/libcasper/libcasper LIBCAP_DNSDIR= ${OBJTOP}/lib/libcasper/services/cap_dns LIBCAP_GRPDIR= ${OBJTOP}/lib/libcasper/services/cap_grp LIBCAP_PWDDIR= ${OBJTOP}/lib/libcasper/services/cap_pwd LIBCAP_RANDOMDIR= ${OBJTOP}/lib/libcasper/services/cap_random LIBCAP_SYSCTLDIR= ${OBJTOP}/lib/libcasper/services/cap_sysctl LIBCAP_SYSLOGDIR= ${OBJTOP}/lib/libcasper/services/cap_syslog LIBBSDXMLDIR= ${OBJTOP}/lib/libexpat LIBKVMDIR= ${OBJTOP}/lib/libkvm LIBPTHREADDIR= ${OBJTOP}/lib/libthr LIBMDIR= ${OBJTOP}/lib/msun LIBFORMDIR= ${OBJTOP}/lib/ncurses/form LIBFORMLIBWDIR= ${OBJTOP}/lib/ncurses/formw LIBMENUDIR= ${OBJTOP}/lib/ncurses/menu LIBMENULIBWDIR= ${OBJTOP}/lib/ncurses/menuw LIBNCURSESDIR= ${OBJTOP}/lib/ncurses/ncurses LIBNCURSESWDIR= ${OBJTOP}/lib/ncurses/ncursesw LIBPANELDIR= ${OBJTOP}/lib/ncurses/panel LIBPANELWDIR= ${OBJTOP}/lib/ncurses/panelw LIBCRYPTODIR= ${OBJTOP}/secure/lib/libcrypto LIBSSHDIR= ${OBJTOP}/secure/lib/libssh LIBSSLDIR= ${OBJTOP}/secure/lib/libssl LIBTEKENDIR= ${OBJTOP}/sys/teken/libteken 
LIBEGACYDIR= ${OBJTOP}/tools/build LIBLNDIR= ${OBJTOP}/usr.bin/lex/lib LIBTERMCAPDIR= ${LIBNCURSESDIR} LIBTERMCAPWDIR= ${LIBNCURSESWDIR} # Default other library directories to lib/libNAME. .for lib in ${_LIBRARIES} LIB${lib:tu}DIR?= ${OBJTOP}/lib/lib${lib} .endfor # Validate that listed LIBADD are valid. .for _l in ${LIBADD} .if empty(_LIBRARIES:M${_l}) _BADLIBADD+= ${_l} .endif .endfor .if !empty(_BADLIBADD) .error ${.CURDIR}: Invalid LIBADD used which may need to be added to ${_this:T}: ${_BADLIBADD} .endif # Sanity check that libraries are defined here properly when building them. .if defined(LIB) && ${_LIBRARIES:M${LIB}} != "" .if !empty(LIBADD) && \ (!defined(_DP_${LIB}) || ${LIBADD:O:u} != ${_DP_${LIB}:O:u}) .error ${.CURDIR}: Missing or incorrect _DP_${LIB} entry in ${_this:T}. Should match LIBADD for ${LIB} ('${LIBADD}' vs '${_DP_${LIB}}') .endif # Note that OBJTOP is not yet defined here but for the purpose of the check # it is fine as it resolves to the SRC directory. .if !defined(LIB${LIB:tu}DIR) || !exists(${SRCTOP}/${LIB${LIB:tu}DIR:S,^${OBJTOP}/,,}) .error ${.CURDIR}: Missing or incorrect value for LIB${LIB:tu}DIR in ${_this:T}: ${LIB${LIB:tu}DIR:S,^${OBJTOP}/,,} .endif .if ${_INTERNALLIBS:M${LIB}} != "" && !defined(LIB${LIB:tu}) .error ${.CURDIR}: Missing value for LIB${LIB:tu} in ${_this:T}. Likely should be: LIB${LIB:tu}?= $${LIB${LIB:tu}DIR}/lib${LIB}.a .endif .endif .endif # !target(____) Index: projects/import-googletest-1.8.1/stand/common/dev_net.c =================================================================== --- projects/import-googletest-1.8.1/stand/common/dev_net.c (revision 344270) +++ projects/import-googletest-1.8.1/stand/common/dev_net.c (revision 344271) @@ -1,435 +1,437 @@ /* $NetBSD: dev_net.c,v 1.23 2008/04/28 20:24:06 martin Exp $ */ /*- * Copyright (c) 1997 The NetBSD Foundation, Inc. * All rights reserved. * * This code is derived from software contributed to The NetBSD Foundation * by Gordon W. Ross. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE NETBSD FOUNDATION, INC. AND CONTRIBUTORS * ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED * TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE FOUNDATION OR CONTRIBUTORS * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE * POSSIBILITY OF SUCH DAMAGE. */ #include __FBSDID("$FreeBSD$"); /*- * This module implements a "raw device" interface suitable for * use by the stand-alone I/O library NFS code. 
This interface * does not support any "block" access, and exists only for the * purpose of initializing the network interface, getting boot * parameters, and performing the NFS mount. * * At open time, this does: * * find interface - netif_open() * RARP for IP address - rarp_getipaddress() * RPC/bootparams - callrpc(d, RPC_BOOTPARAMS, ...) * RPC/mountd - nfs_mount(sock, ip, path) * * the root file handle from mountd is saved in a global * for use by the NFS open code (NFS/lookup). */ #include #include #include #include #include #include #include #include #include #include #include #include #include #include "dev_net.h" #include "bootstrap.h" #ifdef NETIF_DEBUG int debug = 0; #endif static char *netdev_name; static int netdev_sock = -1; static int netdev_opens; static int net_init(void); static int net_open(struct open_file *, ...); static int net_close(struct open_file *); static void net_cleanup(void); static int net_strategy(void *, int, daddr_t, size_t, char *, size_t *); static int net_print(int); static int net_getparams(int sock); struct devsw netdev = { "net", DEVT_NET, net_init, net_strategy, net_open, net_close, noioctl, net_print, net_cleanup }; static struct uri_scheme { const char *scheme; int proto; } uri_schemes[] = { { "tftp:/", NET_TFTP }, { "nfs:/", NET_NFS }, }; static int net_init(void) { return (0); } /* * Called by devopen after it sets f->f_dev to our devsw entry. * This opens the low-level device and sets f->f_devdata. * This is declared with variable arguments... */ static int net_open(struct open_file *f, ...) { struct iodesc *d; va_list args; - char *devname; /* Device part of file name (or NULL). */ + struct devdesc *dev; + const char *devname; /* Device part of file name (or NULL). */ int error = 0; va_start(args, f); - devname = va_arg(args, char*); + dev = va_arg(args, struct devdesc *); va_end(args); + devname = dev->d_dev->dv_name; /* Before opening another interface, close the previous one first. */ if (netdev_sock >= 0 && strcmp(devname, netdev_name) != 0) net_cleanup(); /* On first open, do netif open, mount, etc. */ if (netdev_opens == 0) { /* Find network interface. */ if (netdev_sock < 0) { - netdev_sock = netif_open(devname); + netdev_sock = netif_open(dev); if (netdev_sock < 0) { printf("net_open: netif_open() failed\n"); return (ENXIO); } netdev_name = strdup(devname); #ifdef NETIF_DEBUG if (debug) printf("net_open: netif_open() succeeded\n"); #endif } /* * If network params were not set by netif_open(), try to get * them via bootp, rarp, etc. */ if (rootip.s_addr == 0) { /* Get root IP address, and path, etc. */ error = net_getparams(netdev_sock); if (error) { /* getparams makes its own noise */ free(netdev_name); netif_close(netdev_sock); netdev_sock = -1; return (error); } } /* * Set the variables required by the kernel's nfs_diskless * mechanism. This is the minimum set of variables required to * mount a root filesystem without needing to obtain additional * info from bootp or other sources. 
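 * For example, after a successful NFS open the loader environment
 * carries boot.netif.ip, boot.netif.netmask, boot.netif.gateway and
 * boot.nfsroot.server / boot.nfsroot.path, which the nfs_diskless
 * code reads back when the kernel mounts its root filesystem.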
*/ d = socktodesc(netdev_sock); setenv("boot.netif.hwaddr", ether_sprintf(d->myea), 1); setenv("boot.netif.ip", inet_ntoa(myip), 1); setenv("boot.netif.netmask", intoa(netmask), 1); setenv("boot.netif.gateway", inet_ntoa(gateip), 1); setenv("boot.netif.server", inet_ntoa(rootip), 1); if (netproto == NET_TFTP) { setenv("boot.tftproot.server", inet_ntoa(rootip), 1); setenv("boot.tftproot.path", rootpath, 1); } else if (netproto == NET_NFS) { setenv("boot.nfsroot.server", inet_ntoa(rootip), 1); setenv("boot.nfsroot.path", rootpath, 1); } if (intf_mtu != 0) { char mtu[16]; snprintf(mtu, sizeof(mtu), "%u", intf_mtu); setenv("boot.netif.mtu", mtu, 1); } } netdev_opens++; f->f_devdata = &netdev_sock; return (error); } static int net_close(struct open_file *f) { #ifdef NETIF_DEBUG if (debug) printf("net_close: opens=%d\n", netdev_opens); #endif f->f_devdata = NULL; return (0); } static void net_cleanup(void) { if (netdev_sock >= 0) { #ifdef NETIF_DEBUG if (debug) printf("net_cleanup: calling netif_close()\n"); #endif rootip.s_addr = 0; free(netdev_name); netif_close(netdev_sock); netdev_sock = -1; } } static int net_strategy(void *devdata, int rw, daddr_t blk, size_t size, char *buf, size_t *rsize) { return (EIO); } #define SUPPORT_BOOTP /* * Get info for NFS boot: our IP address, our hostname, * server IP address, and our root path on the server. * There are two ways to do this: The old, Sun way, * and the more modern, BOOTP way. (RFC951, RFC1048) * * The default is to use the Sun bootparams RPC * (because that is what the kernel will do). * MD code can make try_bootp initialied data, * which will override this common definition. */ #ifdef SUPPORT_BOOTP int try_bootp = 1; #endif extern n_long ip_convertaddr(char *p); static int net_getparams(int sock) { char buf[MAXHOSTNAMELEN]; n_long rootaddr, smask; #ifdef SUPPORT_BOOTP /* * Try to get boot info using BOOTP. If we succeed, then * the server IP address, gateway, and root path will all * be initialized. If any remain uninitialized, we will * use RARP and RPC/bootparam (the Sun way) to get them. */ if (try_bootp) bootp(sock); if (myip.s_addr != 0) goto exit; #ifdef NETIF_DEBUG if (debug) printf("net_open: BOOTP failed, trying RARP/RPC...\n"); #endif #endif /* * Use RARP to get our IP address. This also sets our * netmask to the "natural" default for our address. */ if (rarp_getipaddress(sock)) { printf("net_open: RARP failed\n"); return (EIO); } printf("net_open: client addr: %s\n", inet_ntoa(myip)); /* Get our hostname, server IP address, gateway. */ if (bp_whoami(sock)) { printf("net_open: bootparam/whoami RPC failed\n"); return (EIO); } #ifdef NETIF_DEBUG if (debug) printf("net_open: client name: %s\n", hostname); #endif /* * Ignore the gateway from whoami (unreliable). * Use the "gateway" parameter instead. */ smask = 0; gateip.s_addr = 0; if (bp_getfile(sock, "gateway", &gateip, buf) == 0) { /* Got it! Parse the netmask. */ smask = ip_convertaddr(buf); } if (smask) { netmask = smask; #ifdef NETIF_DEBUG if (debug) printf("net_open: subnet mask: %s\n", intoa(netmask)); #endif } #ifdef NETIF_DEBUG if (gateip.s_addr && debug) printf("net_open: net gateway: %s\n", inet_ntoa(gateip)); #endif /* Get the root server and pathname. 
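 * bp_getfile(sock, "root", ...) asks the bootparam server for the
 * "root" entry and fills in rootip (the file server address) and
 * rootpath (the exported path) from the reply.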
*/ if (bp_getfile(sock, "root", &rootip, rootpath)) { printf("net_open: bootparam/getfile RPC failed\n"); return (EIO); } exit: if ((rootaddr = net_parse_rootpath()) != INADDR_NONE) rootip.s_addr = rootaddr; #ifdef NETIF_DEBUG if (debug) { printf("net_open: server addr: %s\n", inet_ntoa(rootip)); printf("net_open: server path: %s\n", rootpath); } #endif return (0); } static int net_print(int verbose) { struct netif_driver *drv; int i, d, cnt; int ret = 0; if (netif_drivers[0] == NULL) return (ret); printf("%s devices:", netdev.dv_name); if ((ret = pager_output("\n")) != 0) return (ret); cnt = 0; for (d = 0; netif_drivers[d]; d++) { drv = netif_drivers[d]; for (i = 0; i < drv->netif_nifs; i++) { printf("\t%s%d:", netdev.dv_name, cnt++); if (verbose) { printf(" (%s%d)", drv->netif_bname, drv->netif_ifs[i].dif_unit); } if ((ret = pager_output("\n")) != 0) return (ret); } } return (ret); } /* * Parses the rootpath if present * * The rootpath format can be in the form * ://ip/path * :/path * * For compatibility with previous behaviour it also accepts as an NFS scheme * ip:/path * /path * * If an ip is set it returns it in network byte order. * The default scheme defined in the global netproto, if not set it defaults to * NFS. * It leaves just the pathname in the global rootpath. */ uint32_t net_parse_rootpath(void) { n_long addr = htonl(INADDR_NONE); size_t i; char ip[FNAME_SIZE]; char *ptr, *val; netproto = NET_NONE; for (i = 0; i < nitems(uri_schemes); i++) { if (strncmp(rootpath, uri_schemes[i].scheme, strlen(uri_schemes[i].scheme)) != 0) continue; netproto = uri_schemes[i].proto; break; } ptr = rootpath; /* Fallback for compatibility mode */ if (netproto == NET_NONE) { netproto = NET_NFS; (void)strsep(&ptr, ":"); if (ptr != NULL) { addr = inet_addr(rootpath); bcopy(ptr, rootpath, strlen(ptr) + 1); } } else { ptr += strlen(uri_schemes[i].scheme); if (*ptr == '/') { /* we are in the form ://, we do expect an ip */ ptr++; /* * XXX when http will be there we will need to check for * a port, but right now we do not need it yet */ val = strchr(ptr, '/'); if (val != NULL) { snprintf(ip, sizeof(ip), "%.*s", (int)((uintptr_t)val - (uintptr_t)ptr), ptr); addr = inet_addr(ip); bcopy(val, rootpath, strlen(val) + 1); } } else { ptr--; bcopy(ptr, rootpath, strlen(ptr) + 1); } } return (addr); } Index: projects/import-googletest-1.8.1/stand/common/disk.c =================================================================== --- projects/import-googletest-1.8.1/stand/common/disk.c (revision 344270) +++ projects/import-googletest-1.8.1/stand/common/disk.c (revision 344271) @@ -1,440 +1,443 @@ /*- * Copyright (c) 1998 Michael Smith * Copyright (c) 2012 Andrey V. Elsukov * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. 
IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ #include __FBSDID("$FreeBSD$"); #include #include #include #include #include #include #include "disk.h" #ifdef DISK_DEBUG # define DEBUG(fmt, args...) printf("%s: " fmt "\n" , __func__ , ## args) #else # define DEBUG(fmt, args...) #endif struct open_disk { struct ptable *table; uint64_t mediasize; uint64_t entrysize; u_int sectorsize; }; struct print_args { struct disk_devdesc *dev; const char *prefix; int verbose; }; /* Convert size to a human-readable number. */ static char * display_size(uint64_t size, u_int sectorsize) { static char buf[80]; char unit; size = size * sectorsize / 1024; unit = 'K'; if (size >= 10485760000LL) { size /= 1073741824; unit = 'T'; } else if (size >= 10240000) { size /= 1048576; unit = 'G'; } else if (size >= 10000) { size /= 1024; unit = 'M'; } - sprintf(buf, "%ld%cB", (long)size, unit); + sprintf(buf, "%4ld%cB", (long)size, unit); return (buf); } int ptblread(void *d, void *buf, size_t blocks, uint64_t offset) { struct disk_devdesc *dev; struct open_disk *od; dev = (struct disk_devdesc *)d; od = (struct open_disk *)dev->dd.d_opendata; /* * The strategy function assumes the offset is in units of 512 byte * sectors. For larger sector sizes, we need to adjust the offset to * match the actual sector size. */ offset *= (od->sectorsize / 512); /* * As the GPT backup partition is located at the end of the disk, * to avoid reading past disk end, flag bcache not to use RA. */ return (dev->dd.d_dev->dv_strategy(dev, F_READ | F_NORA, offset, blocks * od->sectorsize, (char *)buf, NULL)); } -#define PWIDTH 35 static int ptable_print(void *arg, const char *pname, const struct ptable_entry *part) { struct disk_devdesc dev; struct print_args *pa, bsd; struct open_disk *od; struct ptable *table; char line[80]; int res; + u_int sectsize; + uint64_t partsize; pa = (struct print_args *)arg; od = (struct open_disk *)pa->dev->dd.d_opendata; - sprintf(line, " %s%s: %s", pa->prefix, pname, - parttype2str(part->type)); - if (pa->verbose) - sprintf(line, "%-*s%s", PWIDTH, line, - display_size(part->end - part->start + 1, - od->sectorsize)); - strcat(line, "\n"); + sectsize = od->sectorsize; + partsize = part->end - part->start + 1; + sprintf(line, " %s%s: %s\t%s\n", pa->prefix, pname, + parttype2str(part->type), + pa->verbose ? display_size(partsize, sectsize) : ""); if (pager_output(line)) return 1; res = 0; if (part->type == PART_FREEBSD) { /* Open slice with BSD label */ dev.dd.d_dev = pa->dev->dd.d_dev; dev.dd.d_unit = pa->dev->dd.d_unit; dev.d_slice = part->index; dev.d_partition = -1; - if (disk_open(&dev, part->end - part->start + 1, - od->sectorsize) == 0) { - table = ptable_open(&dev, part->end - part->start + 1, - od->sectorsize, ptblread); + if (disk_open(&dev, partsize, sectsize) == 0) { + /* + * disk_open() for partition -1 on a bsd slice assumes + * you want the first bsd partition. Reset things so + * that we're looking at the start of the raw slice. 
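+ * (Without the reset, d_offset would still point at the label's 'a'
+ * partition picked by disk_open(), and the nested ptable_open() below
+ * would read the BSD label from the wrong offset.)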
+ */ + dev.d_partition = -1; + dev.d_offset = part->start; + table = ptable_open(&dev, partsize, sectsize, ptblread); if (table != NULL) { sprintf(line, " %s%s", pa->prefix, pname); bsd.dev = pa->dev; bsd.prefix = line; bsd.verbose = pa->verbose; res = ptable_iterate(table, &bsd, ptable_print); ptable_close(table); } disk_close(&dev); } } return (res); } -#undef PWIDTH int disk_print(struct disk_devdesc *dev, char *prefix, int verbose) { struct open_disk *od; struct print_args pa; /* Disk should be opened */ od = (struct open_disk *)dev->dd.d_opendata; pa.dev = dev; pa.prefix = prefix; pa.verbose = verbose; return (ptable_iterate(od->table, &pa, ptable_print)); } int disk_read(struct disk_devdesc *dev, void *buf, uint64_t offset, u_int blocks) { struct open_disk *od; int ret; od = (struct open_disk *)dev->dd.d_opendata; ret = dev->dd.d_dev->dv_strategy(dev, F_READ, dev->d_offset + offset, blocks * od->sectorsize, buf, NULL); return (ret); } int disk_write(struct disk_devdesc *dev, void *buf, uint64_t offset, u_int blocks) { struct open_disk *od; int ret; od = (struct open_disk *)dev->dd.d_opendata; ret = dev->dd.d_dev->dv_strategy(dev, F_WRITE, dev->d_offset + offset, blocks * od->sectorsize, buf, NULL); return (ret); } int disk_ioctl(struct disk_devdesc *dev, u_long cmd, void *data) { struct open_disk *od = dev->dd.d_opendata; if (od == NULL) return (ENOTTY); switch (cmd) { case DIOCGSECTORSIZE: *(u_int *)data = od->sectorsize; break; case DIOCGMEDIASIZE: if (dev->d_offset == 0) *(uint64_t *)data = od->mediasize; else *(uint64_t *)data = od->entrysize * od->sectorsize; break; default: return (ENOTTY); } return (0); } int disk_open(struct disk_devdesc *dev, uint64_t mediasize, u_int sectorsize) { struct disk_devdesc partdev; struct open_disk *od; struct ptable *table; struct ptable_entry part; int rc, slice, partition; rc = 0; od = (struct open_disk *)malloc(sizeof(struct open_disk)); if (od == NULL) { DEBUG("no memory"); return (ENOMEM); } dev->dd.d_opendata = od; od->entrysize = 0; od->mediasize = mediasize; od->sectorsize = sectorsize; /* * While we are reading disk metadata, make sure we do it relative * to the start of the disk */ memcpy(&partdev, dev, sizeof(partdev)); partdev.d_offset = 0; partdev.d_slice = -1; partdev.d_partition = -1; dev->d_offset = 0; table = NULL; slice = dev->d_slice; partition = dev->d_partition; DEBUG("%s unit %d, slice %d, partition %d => %p", disk_fmtdev(dev), dev->dd.d_unit, dev->d_slice, dev->d_partition, od); /* Determine disk layout. 
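 * ptable_open() probes the device for GPT, MBR/EBR, BSD label, VTOC8 or
 * ISO9660 metadata, using ptblread() for the actual I/O, and returns the
 * handle used by all later partition lookups on this device.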
*/ od->table = ptable_open(&partdev, mediasize / sectorsize, sectorsize, ptblread); if (od->table == NULL) { DEBUG("Can't read partition table"); rc = ENXIO; goto out; } if (ptable_getsize(od->table, &mediasize) != 0) { rc = ENXIO; goto out; } od->mediasize = mediasize; if (ptable_gettype(od->table) == PTABLE_BSD && partition >= 0) { /* It doesn't matter what value has d_slice */ rc = ptable_getpart(od->table, &part, partition); if (rc == 0) { dev->d_offset = part.start; od->entrysize = part.end - part.start + 1; } } else if (ptable_gettype(od->table) == PTABLE_ISO9660) { dev->d_offset = 0; od->entrysize = mediasize; } else if (slice >= 0) { /* Try to get information about partition */ if (slice == 0) rc = ptable_getbestpart(od->table, &part); else rc = ptable_getpart(od->table, &part, slice); if (rc != 0) /* Partition doesn't exist */ goto out; dev->d_offset = part.start; od->entrysize = part.end - part.start + 1; slice = part.index; if (ptable_gettype(od->table) == PTABLE_GPT) { partition = 255; goto out; /* Nothing more to do */ } else if (partition == 255) { /* * When we try to open GPT partition, but partition * table isn't GPT, reset d_partition value to -1 * and try to autodetect appropriate value. */ partition = -1; } /* * If d_partition < 0 and we are looking at a BSD slice, * then try to read BSD label, otherwise return the * whole MBR slice. */ if (partition == -1 && part.type != PART_FREEBSD) goto out; /* Try to read BSD label */ table = ptable_open(dev, part.end - part.start + 1, od->sectorsize, ptblread); if (table == NULL) { DEBUG("Can't read BSD label"); rc = ENXIO; goto out; } /* * If slice contains BSD label and d_partition < 0, then * assume the 'a' partition. Otherwise just return the * whole MBR slice, because it can contain ZFS. 
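 * (A ZFS pool created directly on the MBR slice carries no BSD label,
 * so handing back the whole slice keeps such pools reachable.)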
*/ if (partition < 0) { if (ptable_gettype(table) != PTABLE_BSD) goto out; partition = 0; } rc = ptable_getpart(table, &part, partition); if (rc != 0) goto out; dev->d_offset += part.start; od->entrysize = part.end - part.start + 1; } out: if (table != NULL) ptable_close(table); if (rc != 0) { if (od->table != NULL) ptable_close(od->table); free(od); DEBUG("%s could not open", disk_fmtdev(dev)); } else { /* Save the slice and partition number to the dev */ dev->d_slice = slice; dev->d_partition = partition; DEBUG("%s offset %lld => %p", disk_fmtdev(dev), (long long)dev->d_offset, od); } return (rc); } int disk_close(struct disk_devdesc *dev) { struct open_disk *od; od = (struct open_disk *)dev->dd.d_opendata; DEBUG("%s closed => %p", disk_fmtdev(dev), od); ptable_close(od->table); free(od); return (0); } char* disk_fmtdev(struct disk_devdesc *dev) { static char buf[128]; char *cp; cp = buf + sprintf(buf, "%s%d", dev->dd.d_dev->dv_name, dev->dd.d_unit); if (dev->d_slice >= 0) { #ifdef LOADER_GPT_SUPPORT if (dev->d_partition == 255) { sprintf(cp, "p%d:", dev->d_slice); return (buf); } else #endif #ifdef LOADER_MBR_SUPPORT cp += sprintf(cp, "s%d", dev->d_slice); #endif } if (dev->d_partition >= 0) cp += sprintf(cp, "%c", dev->d_partition + 'a'); strcat(cp, ":"); return (buf); } int disk_parsedev(struct disk_devdesc *dev, const char *devspec, const char **path) { int unit, slice, partition; const char *np; char *cp; np = devspec; unit = slice = partition = -1; if (*np != '\0' && *np != ':') { unit = strtol(np, &cp, 10); if (cp == np) return (EUNIT); #ifdef LOADER_GPT_SUPPORT if (*cp == 'p') { np = cp + 1; slice = strtol(np, &cp, 10); if (np == cp) return (ESLICE); /* we don't support nested partitions on GPT */ if (*cp != '\0' && *cp != ':') return (EINVAL); partition = 255; } else #endif #ifdef LOADER_MBR_SUPPORT if (*cp == 's') { np = cp + 1; slice = strtol(np, &cp, 10); if (np == cp) return (ESLICE); } #endif if (*cp != '\0' && *cp != ':') { partition = *cp - 'a'; if (partition < 0) return (EPART); cp++; } } else return (EINVAL); if (*cp != '\0' && *cp != ':') return (EINVAL); dev->dd.d_unit = unit; dev->d_slice = slice; dev->d_partition = partition; if (path != NULL) *path = (*cp == '\0') ? cp: cp + 1; return (0); } Index: projects/import-googletest-1.8.1/stand/common/part.c =================================================================== --- projects/import-googletest-1.8.1/stand/common/part.c (revision 344270) +++ projects/import-googletest-1.8.1/stand/common/part.c (revision 344271) @@ -1,946 +1,949 @@ /*- * Copyright (c) 2012 Andrey V. Elsukov * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHORS AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. 
IN NO EVENT SHALL THE AUTHORS OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ #include __FBSDID("$FreeBSD$"); #include #include #include #include #include #include #include #include #include #include #include #include #include #ifdef PART_DEBUG #define DEBUG(fmt, args...) printf("%s: " fmt "\n", __func__, ## args) #else #define DEBUG(fmt, args...) #endif #ifdef LOADER_GPT_SUPPORT #define MAXTBLSZ 64 static const uuid_t gpt_uuid_unused = GPT_ENT_TYPE_UNUSED; static const uuid_t gpt_uuid_ms_basic_data = GPT_ENT_TYPE_MS_BASIC_DATA; static const uuid_t gpt_uuid_freebsd_ufs = GPT_ENT_TYPE_FREEBSD_UFS; static const uuid_t gpt_uuid_efi = GPT_ENT_TYPE_EFI; static const uuid_t gpt_uuid_freebsd = GPT_ENT_TYPE_FREEBSD; static const uuid_t gpt_uuid_freebsd_boot = GPT_ENT_TYPE_FREEBSD_BOOT; static const uuid_t gpt_uuid_freebsd_nandfs = GPT_ENT_TYPE_FREEBSD_NANDFS; static const uuid_t gpt_uuid_freebsd_swap = GPT_ENT_TYPE_FREEBSD_SWAP; static const uuid_t gpt_uuid_freebsd_zfs = GPT_ENT_TYPE_FREEBSD_ZFS; static const uuid_t gpt_uuid_freebsd_vinum = GPT_ENT_TYPE_FREEBSD_VINUM; #endif struct pentry { struct ptable_entry part; uint64_t flags; union { uint8_t bsd; uint8_t mbr; uuid_t gpt; uint16_t vtoc8; } type; STAILQ_ENTRY(pentry) entry; }; struct ptable { enum ptable_type type; uint16_t sectorsize; uint64_t sectors; STAILQ_HEAD(, pentry) entries; }; static struct parttypes { enum partition_type type; const char *desc; } ptypes[] = { { PART_UNKNOWN, "Unknown" }, { PART_EFI, "EFI" }, { PART_FREEBSD, "FreeBSD" }, { PART_FREEBSD_BOOT, "FreeBSD boot" }, { PART_FREEBSD_NANDFS, "FreeBSD nandfs" }, { PART_FREEBSD_UFS, "FreeBSD UFS" }, { PART_FREEBSD_ZFS, "FreeBSD ZFS" }, { PART_FREEBSD_SWAP, "FreeBSD swap" }, { PART_FREEBSD_VINUM, "FreeBSD vinum" }, { PART_LINUX, "Linux" }, { PART_LINUX_SWAP, "Linux swap" }, { PART_DOS, "DOS/Windows" }, { PART_ISO9660, "ISO9660" }, }; const char * parttype2str(enum partition_type type) { size_t i; for (i = 0; i < nitems(ptypes); i++) if (ptypes[i].type == type) return (ptypes[i].desc); return (ptypes[0].desc); } #ifdef LOADER_GPT_SUPPORT static void uuid_letoh(uuid_t *uuid) { uuid->time_low = le32toh(uuid->time_low); uuid->time_mid = le16toh(uuid->time_mid); uuid->time_hi_and_version = le16toh(uuid->time_hi_and_version); } static enum partition_type gpt_parttype(uuid_t type) { if (uuid_equal(&type, &gpt_uuid_efi, NULL)) return (PART_EFI); else if (uuid_equal(&type, &gpt_uuid_ms_basic_data, NULL)) return (PART_DOS); else if (uuid_equal(&type, &gpt_uuid_freebsd_boot, NULL)) return (PART_FREEBSD_BOOT); else if (uuid_equal(&type, &gpt_uuid_freebsd_ufs, NULL)) return (PART_FREEBSD_UFS); else if (uuid_equal(&type, &gpt_uuid_freebsd_zfs, NULL)) return (PART_FREEBSD_ZFS); else if (uuid_equal(&type, &gpt_uuid_freebsd_swap, NULL)) return (PART_FREEBSD_SWAP); else if (uuid_equal(&type, &gpt_uuid_freebsd_vinum, NULL)) return (PART_FREEBSD_VINUM); else if (uuid_equal(&type, &gpt_uuid_freebsd_nandfs, NULL)) return (PART_FREEBSD_NANDFS); else if (uuid_equal(&type, &gpt_uuid_freebsd, NULL)) return (PART_FREEBSD); return (PART_UNKNOWN); } static struct gpt_hdr 
* gpt_checkhdr(struct gpt_hdr *hdr, uint64_t lba_self, uint64_t lba_last, uint16_t sectorsize) { uint32_t sz, crc; if (memcmp(hdr->hdr_sig, GPT_HDR_SIG, sizeof(hdr->hdr_sig)) != 0) { DEBUG("no GPT signature"); return (NULL); } sz = le32toh(hdr->hdr_size); if (sz < 92 || sz > sectorsize) { DEBUG("invalid GPT header size: %d", sz); return (NULL); } crc = le32toh(hdr->hdr_crc_self); hdr->hdr_crc_self = 0; if (crc32(hdr, sz) != crc) { DEBUG("GPT header's CRC doesn't match"); return (NULL); } hdr->hdr_crc_self = crc; hdr->hdr_revision = le32toh(hdr->hdr_revision); if (hdr->hdr_revision < GPT_HDR_REVISION) { DEBUG("unsupported GPT revision %d", hdr->hdr_revision); return (NULL); } hdr->hdr_lba_self = le64toh(hdr->hdr_lba_self); if (hdr->hdr_lba_self != lba_self) { DEBUG("self LBA doesn't match"); return (NULL); } hdr->hdr_lba_alt = le64toh(hdr->hdr_lba_alt); if (hdr->hdr_lba_alt == hdr->hdr_lba_self) { DEBUG("invalid alternate LBA"); return (NULL); } hdr->hdr_entries = le32toh(hdr->hdr_entries); hdr->hdr_entsz = le32toh(hdr->hdr_entsz); if (hdr->hdr_entries == 0 || hdr->hdr_entsz < sizeof(struct gpt_ent) || sectorsize % hdr->hdr_entsz != 0) { DEBUG("invalid entry size or number of entries"); return (NULL); } hdr->hdr_lba_start = le64toh(hdr->hdr_lba_start); hdr->hdr_lba_end = le64toh(hdr->hdr_lba_end); hdr->hdr_lba_table = le64toh(hdr->hdr_lba_table); hdr->hdr_crc_table = le32toh(hdr->hdr_crc_table); uuid_letoh(&hdr->hdr_uuid); return (hdr); } static int gpt_checktbl(const struct gpt_hdr *hdr, uint8_t *tbl, size_t size, uint64_t lba_last) { struct gpt_ent *ent; uint32_t i, cnt; cnt = size / hdr->hdr_entsz; if (hdr->hdr_entries <= cnt) { cnt = hdr->hdr_entries; /* Check CRC only when buffer size is enough for table. */ if (hdr->hdr_crc_table != crc32(tbl, hdr->hdr_entries * hdr->hdr_entsz)) { DEBUG("GPT table's CRC doesn't match"); return (-1); } } for (i = 0; i < cnt; i++) { ent = (struct gpt_ent *)(tbl + i * hdr->hdr_entsz); uuid_letoh(&ent->ent_type); if (uuid_equal(&ent->ent_type, &gpt_uuid_unused, NULL)) continue; ent->ent_lba_start = le64toh(ent->ent_lba_start); ent->ent_lba_end = le64toh(ent->ent_lba_end); } return (0); } static struct ptable * ptable_gptread(struct ptable *table, void *dev, diskread_t dread) { struct pentry *entry; struct gpt_hdr *phdr, hdr; struct gpt_ent *ent; uint8_t *buf, *tbl; uint64_t offset; int pri, sec; size_t size, i; buf = malloc(table->sectorsize); if (buf == NULL) return (NULL); tbl = malloc(table->sectorsize * MAXTBLSZ); if (tbl == NULL) { free(buf); return (NULL); } /* Read the primary GPT header. */ if (dread(dev, buf, 1, 1) != 0) { ptable_close(table); table = NULL; goto out; } pri = sec = 0; /* Check the primary GPT header. */ phdr = gpt_checkhdr((struct gpt_hdr *)buf, 1, table->sectors - 1, table->sectorsize); if (phdr != NULL) { /* Read the primary GPT table. */ size = MIN(MAXTBLSZ, howmany(phdr->hdr_entries * phdr->hdr_entsz, table->sectorsize)); if (dread(dev, tbl, size, phdr->hdr_lba_table) == 0 && gpt_checktbl(phdr, tbl, size * table->sectorsize, table->sectors - 1) == 0) { memcpy(&hdr, phdr, sizeof(hdr)); pri = 1; } } offset = pri ? hdr.hdr_lba_alt: table->sectors - 1; /* Read the backup GPT header. */ if (dread(dev, buf, 1, offset) != 0) phdr = NULL; else phdr = gpt_checkhdr((struct gpt_hdr *)buf, offset, table->sectors - 1, table->sectorsize); if (phdr != NULL) { /* * Compare primary and backup headers. * If they are equal, then we do not need to read backup * table. 
If they are different, then prefer backup header * and try to read backup table. */ if (pri == 0 || uuid_equal(&hdr.hdr_uuid, &phdr->hdr_uuid, NULL) == 0 || hdr.hdr_revision != phdr->hdr_revision || hdr.hdr_size != phdr->hdr_size || hdr.hdr_lba_start != phdr->hdr_lba_start || hdr.hdr_lba_end != phdr->hdr_lba_end || hdr.hdr_entries != phdr->hdr_entries || hdr.hdr_entsz != phdr->hdr_entsz || hdr.hdr_crc_table != phdr->hdr_crc_table) { /* Read the backup GPT table. */ size = MIN(MAXTBLSZ, howmany(phdr->hdr_entries * phdr->hdr_entsz, table->sectorsize)); if (dread(dev, tbl, size, phdr->hdr_lba_table) == 0 && gpt_checktbl(phdr, tbl, size * table->sectorsize, table->sectors - 1) == 0) { memcpy(&hdr, phdr, sizeof(hdr)); sec = 1; } } } if (pri == 0 && sec == 0) { /* Both primary and backup tables are invalid. */ table->type = PTABLE_NONE; goto out; } DEBUG("GPT detected"); size = MIN(hdr.hdr_entries * hdr.hdr_entsz, MAXTBLSZ * table->sectorsize); /* * If the disk's sector count is smaller than the sector count recorded * in the disk's GPT table header, set the table->sectors to the value * recorded in GPT tables. This is done to work around buggy firmware * that returns truncated disk sizes. * * Note, this is still not a foolproof way to get disk's size. For * example, an image file can be truncated when copied to smaller media. */ table->sectors = hdr.hdr_lba_alt + 1; for (i = 0; i < size / hdr.hdr_entsz; i++) { ent = (struct gpt_ent *)(tbl + i * hdr.hdr_entsz); if (uuid_equal(&ent->ent_type, &gpt_uuid_unused, NULL)) continue; /* Simple sanity checks. */ if (ent->ent_lba_start < hdr.hdr_lba_start || ent->ent_lba_end > hdr.hdr_lba_end || ent->ent_lba_start > ent->ent_lba_end) continue; entry = malloc(sizeof(*entry)); if (entry == NULL) break; entry->part.start = ent->ent_lba_start; entry->part.end = ent->ent_lba_end; entry->part.index = i + 1; entry->part.type = gpt_parttype(ent->ent_type); entry->flags = le64toh(ent->ent_attr); memcpy(&entry->type.gpt, &ent->ent_type, sizeof(uuid_t)); STAILQ_INSERT_TAIL(&table->entries, entry, entry); DEBUG("new GPT partition added"); } out: free(buf); free(tbl); return (table); } #endif /* LOADER_GPT_SUPPORT */ #ifdef LOADER_MBR_SUPPORT /* We do not need to support too many EBR partitions in the loader */ #define MAXEBRENTRIES 8 static enum partition_type mbr_parttype(uint8_t type) { switch (type) { case DOSPTYP_386BSD: return (PART_FREEBSD); case DOSPTYP_LINSWP: return (PART_LINUX_SWAP); case DOSPTYP_LINUX: return (PART_LINUX); case 0x01: case 0x04: case 0x06: case 0x07: case 0x0b: case 0x0c: case 0x0e: return (PART_DOS); } return (PART_UNKNOWN); } static struct ptable * ptable_ebrread(struct ptable *table, void *dev, diskread_t dread) { struct dos_partition *dp; struct pentry *e1, *entry; uint32_t start, end, offset; u_char *buf; int i, index; STAILQ_FOREACH(e1, &table->entries, entry) { if (e1->type.mbr == DOSPTYP_EXT || e1->type.mbr == DOSPTYP_EXTLBA) break; } if (e1 == NULL) return (table); index = 5; offset = e1->part.start; buf = malloc(table->sectorsize); if (buf == NULL) return (table); DEBUG("EBR detected"); for (i = 0; i < MAXEBRENTRIES; i++) { #if 0 /* Some BIOSes return an incorrect number of sectors */ if (offset >= table->sectors) break; #endif if (dread(dev, buf, 1, offset) != 0) break; dp = (struct dos_partition *)(buf + DOSPARTOFF); if (dp[0].dp_typ == 0) break; start = le32toh(dp[0].dp_start); if (dp[0].dp_typ == DOSPTYP_EXT && dp[1].dp_typ == 0) { offset = e1->part.start + start; continue; } end = le32toh(dp[0].dp_size); entry = 
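/* Remember this partition entry; if the allocation fails, stop scanning. */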
malloc(sizeof(*entry)); if (entry == NULL) break; entry->part.start = offset + start; entry->part.end = entry->part.start + end - 1; entry->part.index = index++; entry->part.type = mbr_parttype(dp[0].dp_typ); entry->flags = dp[0].dp_flag; entry->type.mbr = dp[0].dp_typ; STAILQ_INSERT_TAIL(&table->entries, entry, entry); DEBUG("new EBR partition added"); if (dp[1].dp_typ == 0) break; offset = e1->part.start + le32toh(dp[1].dp_start); } free(buf); return (table); } #endif /* LOADER_MBR_SUPPORT */ static enum partition_type bsd_parttype(uint8_t type) { switch (type) { case FS_NANDFS: return (PART_FREEBSD_NANDFS); case FS_SWAP: return (PART_FREEBSD_SWAP); case FS_BSDFFS: return (PART_FREEBSD_UFS); case FS_VINUM: return (PART_FREEBSD_VINUM); case FS_ZFS: return (PART_FREEBSD_ZFS); } return (PART_UNKNOWN); } static struct ptable * ptable_bsdread(struct ptable *table, void *dev, diskread_t dread) { struct disklabel *dl; struct partition *part; struct pentry *entry; uint8_t *buf; uint32_t raw_offset; int i; if (table->sectorsize < sizeof(struct disklabel)) { DEBUG("Too small sectorsize"); return (table); } buf = malloc(table->sectorsize); if (buf == NULL) return (table); if (dread(dev, buf, 1, 1) != 0) { DEBUG("read failed"); ptable_close(table); table = NULL; goto out; } dl = (struct disklabel *)buf; if (le32toh(dl->d_magic) != DISKMAGIC && le32toh(dl->d_magic2) != DISKMAGIC) goto out; if (le32toh(dl->d_secsize) != table->sectorsize) { DEBUG("unsupported sector size"); goto out; } dl->d_npartitions = le16toh(dl->d_npartitions); if (dl->d_npartitions > 20 || dl->d_npartitions < 8) { DEBUG("invalid number of partitions"); goto out; } DEBUG("BSD detected"); part = &dl->d_partitions[0]; raw_offset = le32toh(part[RAW_PART].p_offset); for (i = 0; i < dl->d_npartitions; i++, part++) { if (i == RAW_PART) continue; if (part->p_size == 0) continue; entry = malloc(sizeof(*entry)); if (entry == NULL) break; entry->part.start = le32toh(part->p_offset) - raw_offset; entry->part.end = entry->part.start + le32toh(part->p_size) - 1; entry->part.type = bsd_parttype(part->p_fstype); entry->part.index = i; /* starts from zero */ entry->type.bsd = part->p_fstype; STAILQ_INSERT_TAIL(&table->entries, entry, entry); DEBUG("new BSD partition added"); } table->type = PTABLE_BSD; out: free(buf); return (table); } #ifdef LOADER_VTOC8_SUPPORT static enum partition_type vtoc8_parttype(uint16_t type) { switch (type) { case VTOC_TAG_FREEBSD_NANDFS: return (PART_FREEBSD_NANDFS); case VTOC_TAG_FREEBSD_SWAP: return (PART_FREEBSD_SWAP); case VTOC_TAG_FREEBSD_UFS: return (PART_FREEBSD_UFS); case VTOC_TAG_FREEBSD_VINUM: return (PART_FREEBSD_VINUM); case VTOC_TAG_FREEBSD_ZFS: return (PART_FREEBSD_ZFS); } return (PART_UNKNOWN); } static struct ptable * ptable_vtoc8read(struct ptable *table, void *dev, diskread_t dread) { struct pentry *entry; struct vtoc8 *dl; uint8_t *buf; uint16_t sum, heads, sectors; int i; if (table->sectorsize != sizeof(struct vtoc8)) return (table); buf = malloc(table->sectorsize); if (buf == NULL) return (table); if (dread(dev, buf, 1, 0) != 0) { DEBUG("read failed"); ptable_close(table); table = NULL; goto out; } dl = (struct vtoc8 *)buf; /* Check the sum */ for (i = sum = 0; i < sizeof(struct vtoc8); i += sizeof(sum)) sum ^= be16dec(buf + i); if (sum != 0) { DEBUG("incorrect checksum"); goto out; } if (be16toh(dl->nparts) != VTOC8_NPARTS) { DEBUG("invalid number of entries"); goto out; } sectors = be16toh(dl->nsecs); heads = be16toh(dl->nheads); if (sectors * heads == 0) { DEBUG("invalid geometry"); goto out; 
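/*
 * A sane geometry is required because VTOC8 records partition starts
 * in cylinders; they are converted below as cyl * heads * sectors, so
 * a zero head or sector count would make every start LBA meaningless.
 */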
} DEBUG("VTOC8 detected"); for (i = 0; i < VTOC8_NPARTS; i++) { dl->part[i].tag = be16toh(dl->part[i].tag); if (i == VTOC_RAW_PART || dl->part[i].tag == VTOC_TAG_UNASSIGNED) continue; entry = malloc(sizeof(*entry)); if (entry == NULL) break; entry->part.start = be32toh(dl->map[i].cyl) * heads * sectors; entry->part.end = be32toh(dl->map[i].nblks) + entry->part.start - 1; entry->part.type = vtoc8_parttype(dl->part[i].tag); entry->part.index = i; /* starts from zero */ entry->type.vtoc8 = dl->part[i].tag; STAILQ_INSERT_TAIL(&table->entries, entry, entry); DEBUG("new VTOC8 partition added"); } table->type = PTABLE_VTOC8; out: free(buf); return (table); } #endif /* LOADER_VTOC8_SUPPORT */ #define cdb2devb(bno) ((bno) * ISO_DEFAULT_BLOCK_SIZE / table->sectorsize) static struct ptable * ptable_iso9660read(struct ptable *table, void *dev, diskread_t dread) { uint8_t *buf; struct iso_primary_descriptor *vd; struct pentry *entry; buf = malloc(table->sectorsize); if (buf == NULL) return (table); if (dread(dev, buf, 1, cdb2devb(16)) != 0) { DEBUG("read failed"); ptable_close(table); table = NULL; goto out; } vd = (struct iso_primary_descriptor *)buf; if (bcmp(vd->id, ISO_STANDARD_ID, sizeof vd->id) != 0) goto out; entry = malloc(sizeof(*entry)); if (entry == NULL) goto out; entry->part.start = 0; entry->part.end = table->sectors; entry->part.type = PART_ISO9660; entry->part.index = 0; STAILQ_INSERT_TAIL(&table->entries, entry, entry); table->type = PTABLE_ISO9660; out: free(buf); return (table); } struct ptable * ptable_open(void *dev, uint64_t sectors, uint16_t sectorsize, diskread_t *dread) { struct dos_partition *dp; struct ptable *table; uint8_t *buf; int i, count; #ifdef LOADER_MBR_SUPPORT struct pentry *entry; uint32_t start, end; int has_ext; #endif table = NULL; buf = malloc(sectorsize); if (buf == NULL) return (NULL); /* First, read the MBR. */ if (dread(dev, buf, 1, DOSBBSECTOR) != 0) { DEBUG("read failed"); goto out; } table = malloc(sizeof(*table)); if (table == NULL) goto out; table->sectors = sectors; table->sectorsize = sectorsize; table->type = PTABLE_NONE; STAILQ_INIT(&table->entries); if (ptable_iso9660read(table, dev, dread) == NULL) { /* Read error. */ table = NULL; goto out; } else if (table->type == PTABLE_ISO9660) goto out; #ifdef LOADER_VTOC8_SUPPORT if (be16dec(buf + offsetof(struct vtoc8, magic)) == VTOC_MAGIC) { if (ptable_vtoc8read(table, dev, dread) == NULL) { /* Read error. */ table = NULL; goto out; } else if (table->type == PTABLE_VTOC8) goto out; } #endif /* Check the BSD label. */ if (ptable_bsdread(table, dev, dread) == NULL) { /* Read error. */ table = NULL; goto out; } else if (table->type == PTABLE_BSD) goto out; #if defined(LOADER_GPT_SUPPORT) || defined(LOADER_MBR_SUPPORT) /* Check the MBR magic. */ if (buf[DOSMAGICOFFSET] != 0x55 || buf[DOSMAGICOFFSET + 1] != 0xaa) { DEBUG("magic sequence not found"); #if defined(LOADER_GPT_SUPPORT) /* There is no PMBR, check that we have backup GPT */ table->type = PTABLE_GPT; table = ptable_gptread(table, dev, dread); #endif goto out; } /* Check that we have PMBR. Also do some validation. */ dp = (struct dos_partition *)(buf + DOSPARTOFF); for (i = 0, count = 0; i < NDOSPART; i++) { if (dp[i].dp_flag != 0 && dp[i].dp_flag != 0x80) { DEBUG("invalid partition flag %x", dp[i].dp_flag); goto out; } #ifdef LOADER_GPT_SUPPORT if (dp[i].dp_typ == DOSPTYP_PMBR) { table->type = PTABLE_GPT; DEBUG("PMBR detected"); } #endif if (dp[i].dp_typ != 0) count++; } /* Do we have some invalid values? 
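 * A protective MBR normally carries a single 0xee (DOSPTYP_PMBR) entry
 * covering the whole disk.  Extra non-empty entries usually mean a
 * hybrid layout such as Apple's Boot Camp, which is tolerated below
 * only when the second slice is HFS (DOSPTYP_HFS).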
*/ if (table->type == PTABLE_GPT && count > 1) { if (dp[1].dp_typ != DOSPTYP_HFS) { table->type = PTABLE_NONE; DEBUG("Incorrect PMBR, ignore it"); } else { DEBUG("Bootcamp detected"); } } #ifdef LOADER_GPT_SUPPORT if (table->type == PTABLE_GPT) { table = ptable_gptread(table, dev, dread); goto out; } #endif #ifdef LOADER_MBR_SUPPORT /* Read MBR. */ DEBUG("MBR detected"); table->type = PTABLE_MBR; for (i = has_ext = 0; i < NDOSPART; i++) { if (dp[i].dp_typ == 0) continue; start = le32dec(&(dp[i].dp_start)); end = le32dec(&(dp[i].dp_size)); if (start == 0 || end == 0) continue; #if 0 /* Some BIOSes return an incorrect number of sectors */ if (start + end - 1 >= sectors) continue; /* XXX: ignore */ #endif if (dp[i].dp_typ == DOSPTYP_EXT || dp[i].dp_typ == DOSPTYP_EXTLBA) has_ext = 1; entry = malloc(sizeof(*entry)); if (entry == NULL) break; entry->part.start = start; entry->part.end = start + end - 1; entry->part.index = i + 1; entry->part.type = mbr_parttype(dp[i].dp_typ); entry->flags = dp[i].dp_flag; entry->type.mbr = dp[i].dp_typ; STAILQ_INSERT_TAIL(&table->entries, entry, entry); DEBUG("new MBR partition added"); } if (has_ext) { table = ptable_ebrread(table, dev, dread); /* FALLTHROUGH */ } #endif /* LOADER_MBR_SUPPORT */ #endif /* LOADER_MBR_SUPPORT || LOADER_GPT_SUPPORT */ out: free(buf); return (table); } void ptable_close(struct ptable *table) { struct pentry *entry; + if (table == NULL) + return; + while (!STAILQ_EMPTY(&table->entries)) { entry = STAILQ_FIRST(&table->entries); STAILQ_REMOVE_HEAD(&table->entries, entry); free(entry); } free(table); } enum ptable_type ptable_gettype(const struct ptable *table) { return (table->type); } int ptable_getsize(const struct ptable *table, uint64_t *sizep) { uint64_t tmp = table->sectors * table->sectorsize; if (tmp < table->sectors) return (EOVERFLOW); if (sizep != NULL) *sizep = tmp; return (0); } int ptable_getpart(const struct ptable *table, struct ptable_entry *part, int index) { struct pentry *entry; if (part == NULL || table == NULL) return (EINVAL); STAILQ_FOREACH(entry, &table->entries, entry) { if (entry->part.index != index) continue; memcpy(part, &entry->part, sizeof(*part)); return (0); } return (ENOENT); } /* * Search for a slice with the following preferences: * * 1: Active FreeBSD slice * 2: Non-active FreeBSD slice * 3: Active Linux slice * 4: non-active Linux slice * 5: Active FAT/FAT32 slice * 6: non-active FAT/FAT32 slice */ #define PREF_RAWDISK 0 #define PREF_FBSD_ACT 1 #define PREF_FBSD 2 #define PREF_LINUX_ACT 3 #define PREF_LINUX 4 #define PREF_DOS_ACT 5 #define PREF_DOS 6 #define PREF_NONE 7 int ptable_getbestpart(const struct ptable *table, struct ptable_entry *part) { struct pentry *entry, *best; int pref, preflevel; if (part == NULL || table == NULL) return (EINVAL); best = NULL; preflevel = pref = PREF_NONE; STAILQ_FOREACH(entry, &table->entries, entry) { #ifdef LOADER_MBR_SUPPORT if (table->type == PTABLE_MBR) { switch (entry->type.mbr) { case DOSPTYP_386BSD: pref = entry->flags & 0x80 ? PREF_FBSD_ACT: PREF_FBSD; break; case DOSPTYP_LINUX: pref = entry->flags & 0x80 ? PREF_LINUX_ACT: PREF_LINUX; break; case 0x01: /* DOS/Windows */ case 0x04: case 0x06: case 0x0c: case 0x0e: case DOSPTYP_FAT32: pref = entry->flags & 0x80 ? 
PREF_DOS_ACT: PREF_DOS; break; default: pref = PREF_NONE; } } #endif /* LOADER_MBR_SUPPORT */ #ifdef LOADER_GPT_SUPPORT if (table->type == PTABLE_GPT) { if (entry->part.type == PART_DOS) pref = PREF_DOS; else if (entry->part.type == PART_FREEBSD_UFS || entry->part.type == PART_FREEBSD_ZFS) pref = PREF_FBSD; else pref = PREF_NONE; } #endif /* LOADER_GPT_SUPPORT */ if (pref < preflevel) { preflevel = pref; best = entry; } } if (best != NULL) { memcpy(part, &best->part, sizeof(*part)); return (0); } return (ENOENT); } int ptable_iterate(const struct ptable *table, void *arg, ptable_iterate_t *iter) { struct pentry *entry; char name[32]; int ret = 0; name[0] = '\0'; STAILQ_FOREACH(entry, &table->entries, entry) { #ifdef LOADER_MBR_SUPPORT if (table->type == PTABLE_MBR) sprintf(name, "s%d", entry->part.index); else #endif #ifdef LOADER_GPT_SUPPORT if (table->type == PTABLE_GPT) sprintf(name, "p%d", entry->part.index); else #endif #ifdef LOADER_VTOC8_SUPPORT if (table->type == PTABLE_VTOC8) sprintf(name, "%c", (uint8_t) 'a' + entry->part.index); else #endif if (table->type == PTABLE_BSD) sprintf(name, "%c", (uint8_t) 'a' + entry->part.index); if ((ret = iter(arg, name, &entry->part)) != 0) return (ret); } return (ret); } Index: projects/import-googletest-1.8.1/stand/i386/Makefile.inc =================================================================== --- projects/import-googletest-1.8.1/stand/i386/Makefile.inc (revision 344270) +++ projects/import-googletest-1.8.1/stand/i386/Makefile.inc (revision 344271) @@ -1,41 +1,42 @@ # Common defines for all of stand/i386/ # # $FreeBSD$ .include "bsd.linker.mk" LOADER_ADDRESS?=0x200000 LDFLAGS+= -nostdlib LDFLAGS.lld+= -Wl,--no-rosegment +MK_PIE:= no # BTX components BTXDIR= ${BOOTOBJ}/i386/btx BTXLDR= ${BTXDIR}/btxldr/btxldr BTXKERN= ${BTXDIR}/btx/btx BTXCRT= ${BTXDIR}/lib/crt0.o BTXSRC= ${BOOTSRC}/i386/btx BTXLIB= ${BTXSRC}/lib CFLAGS+= -I${BTXLIB} # compact binary with no padding between text, data, bss LDSCRIPT= ${BOOTSRC}/i386/boot.ldscript # LDFLAGS_BIN=-e start -Ttext ${ORG} -Wl,-T,${LDSCRIPT},-S,--oformat,binary # LD_FLAGS_BIN=-static -T ${LDSCRIPT} --gc-sections LDFLAGS_BIN=-e start -Ttext ${ORG} -Wl,-N,-S,--oformat,binary .if ${LINKER_FEATURES:Mbuild-id} != "" LDFLAGS_BIN+=-Wl,--build-id=none .endif LD_FLAGS_BIN=-static -N --gc-sections .if ${MACHINE_CPUARCH} == "amd64" DO32=1 .endif .if defined(LOADER_FIREWIRE_SUPPORT) MK_LOADER_FIREWIRE=yes .warning "LOADER_FIREWIRE_SUPPORT deprecated, please move to WITH_LOADER_FIREWIRE" .endif .include "../Makefile.inc" Index: projects/import-googletest-1.8.1/stand/i386/zfsboot/zfsboot.c =================================================================== --- projects/import-googletest-1.8.1/stand/i386/zfsboot/zfsboot.c (revision 344270) +++ projects/import-googletest-1.8.1/stand/i386/zfsboot/zfsboot.c (revision 344271) @@ -1,1141 +1,1128 @@ /*- * Copyright (c) 1998 Robert Nordier * All rights reserved. * * Redistribution and use in source and binary forms are freely * permitted provided that the above copyright notice and this * paragraph and the following disclaimer are duplicated in all * such forms. * * This software is provided "AS IS" and without any express or * implied warranties, including, without limitation, the implied * warranties of merchantability and fitness for a particular * purpose. 
*/ #include __FBSDID("$FreeBSD$"); #include "stand.h" #include #include #include #ifdef GPT #include #endif #include #include #include #include #include #include #include #include #include #include "lib.h" #include "rbx.h" #include "drv.h" #include "edd.h" #include "cons.h" #include "bootargs.h" #include "paths.h" #include "libzfs.h" #define ARGS 0x900 #define NOPT 14 #define NDEV 3 #define BIOS_NUMDRIVES 0x475 #define DRV_HARD 0x80 #define DRV_MASK 0x7f #define TYPE_AD 0 #define TYPE_DA 1 #define TYPE_MAXHARD TYPE_DA #define TYPE_FD 2 #define DEV_GELIBOOT_BSIZE 4096 extern uint32_t _end; #ifdef GPT static const uuid_t freebsd_zfs_uuid = GPT_ENT_TYPE_FREEBSD_ZFS; #endif static const char optstr[NOPT] = "DhaCcdgmnpqrsv"; /* Also 'P', 'S' */ static const unsigned char flags[NOPT] = { RBX_DUAL, RBX_SERIAL, RBX_ASKNAME, RBX_CDROM, RBX_CONFIG, RBX_KDB, RBX_GDB, RBX_MUTE, RBX_NOINTR, RBX_PAUSE, RBX_QUIET, RBX_DFLTROOT, RBX_SINGLE, RBX_VERBOSE }; uint32_t opts; static const unsigned char dev_maj[NDEV] = {30, 4, 2}; static char cmd[512]; static char cmddup[512]; static char kname[1024]; static char rootname[256]; static int comspeed = SIOSPD; static struct bootinfo bootinfo; static uint32_t bootdev; static struct zfs_boot_args zfsargs; vm_offset_t high_heap_base; uint32_t bios_basemem, bios_extmem, high_heap_size; static struct bios_smap smap; /* * The minimum amount of memory to reserve in bios_extmem for the heap. */ #define HEAP_MIN (64 * 1024 * 1024) static char *heap_next; static char *heap_end; /* Buffers that must not span a 64k boundary. */ #define READ_BUF_SIZE 8192 struct dmadat { char rdbuf[READ_BUF_SIZE]; /* for reading large things */ char secbuf[READ_BUF_SIZE]; /* for MBR/disklabel */ }; static struct dmadat *dmadat; void exit(int); void reboot(void); static void load(void); static int parse_cmd(void); static void bios_getmem(void); int main(void); #ifdef LOADER_GELI_SUPPORT #include "geliboot.h" static char gelipw[GELI_PW_MAXLEN]; #endif struct zfsdsk { struct dsk dsk; #ifdef LOADER_GELI_SUPPORT struct geli_dev *gdev; #endif }; #include "zfsimpl.c" /* * Read from a dnode (which must be from a ZPL filesystem). */ static int zfs_read(spa_t *spa, const dnode_phys_t *dnode, off_t *offp, void *start, size_t size) { const znode_phys_t *zp = (const znode_phys_t *) dnode->dn_bonus; size_t n; int rc; n = size; if (*offp + n > zp->zp_size) n = zp->zp_size - *offp; rc = dnode_read(spa, dnode, *offp, start, n); if (rc) return (-1); *offp += n; return (n); } /* * Current ZFS pool */ static spa_t *spa; static spa_t *primary_spa; static vdev_t *primary_vdev; /* * A wrapper for dskread that doesn't have to worry about whether the * buffer pointer crosses a 64k boundary. */ static int vdev_read(void *xvdev, void *priv, off_t off, void *buf, size_t bytes) { char *p; daddr_t lba, alignlba; off_t diff; unsigned int nb, alignnb; struct zfsdsk *zdsk = (struct zfsdsk *) priv; if ((off & (DEV_BSIZE - 1)) || (bytes & (DEV_BSIZE - 1))) return -1; p = buf; lba = off / DEV_BSIZE; lba += zdsk->dsk.start; /* * Align reads to 4k else 4k sector GELIs will not decrypt. * Round LBA down to nearest multiple of DEV_GELIBOOT_BSIZE bytes. */ alignlba = rounddown2(off, DEV_GELIBOOT_BSIZE) / DEV_BSIZE; /* * The read must be aligned to DEV_GELIBOOT_BSIZE bytes relative to the * start of the GELI partition, not the start of the actual disk. 
*/ alignlba += zdsk->dsk.start; diff = (lba - alignlba) * DEV_BSIZE; while (bytes > 0) { nb = bytes / DEV_BSIZE; /* * Ensure that the read size plus the leading offset does not * exceed the size of the read buffer. */ if (nb > (READ_BUF_SIZE - diff) / DEV_BSIZE) nb = (READ_BUF_SIZE - diff) / DEV_BSIZE; /* * Round the number of blocks to read up to the nearest multiple * of DEV_GELIBOOT_BSIZE. */ alignnb = roundup2(nb * DEV_BSIZE + diff, DEV_GELIBOOT_BSIZE) / DEV_BSIZE; if (zdsk->dsk.size > 0 && alignlba + alignnb > zdsk->dsk.size + zdsk->dsk.start) { printf("Shortening read at %lld from %d to %lld\n", alignlba, alignnb, (zdsk->dsk.size + zdsk->dsk.start) - alignlba); alignnb = (zdsk->dsk.size + zdsk->dsk.start) - alignlba; } if (drvread(&zdsk->dsk, dmadat->rdbuf, alignlba, alignnb)) return -1; #ifdef LOADER_GELI_SUPPORT /* decrypt */ if (zdsk->gdev != NULL) { if (geli_read(zdsk->gdev, ((alignlba - zdsk->dsk.start) * DEV_BSIZE), dmadat->rdbuf, alignnb * DEV_BSIZE)) return (-1); } #endif memcpy(p, dmadat->rdbuf + diff, nb * DEV_BSIZE); p += nb * DEV_BSIZE; lba += nb; alignlba += alignnb; bytes -= nb * DEV_BSIZE; /* Don't need the leading offset after the first block. */ diff = 0; } return 0; } /* Match the signature exactly due to signature madness */ static int vdev_read2(vdev_t *vdev, void *priv, off_t off, void *buf, size_t bytes) { return vdev_read(vdev, priv, off, buf, bytes); } static int vdev_write(vdev_t *vdev, void *priv, off_t off, void *buf, size_t bytes) { char *p; daddr_t lba; unsigned int nb; struct zfsdsk *zdsk = (struct zfsdsk *) priv; if ((off & (DEV_BSIZE - 1)) || (bytes & (DEV_BSIZE - 1))) return -1; p = buf; lba = off / DEV_BSIZE; lba += zdsk->dsk.start; while (bytes > 0) { nb = bytes / DEV_BSIZE; if (nb > READ_BUF_SIZE / DEV_BSIZE) nb = READ_BUF_SIZE / DEV_BSIZE; memcpy(dmadat->rdbuf, p, nb * DEV_BSIZE); if (drvwrite(&zdsk->dsk, dmadat->rdbuf, lba, nb)) return -1; p += nb * DEV_BSIZE; lba += nb; bytes -= nb * DEV_BSIZE; } return 0; } static int xfsread(const dnode_phys_t *dnode, off_t *offp, void *buf, size_t nbyte) { if ((size_t)zfs_read(spa, dnode, offp, buf, nbyte) != nbyte) { printf("Invalid format\n"); return -1; } return 0; } /* * Read Pad2 (formerly "Boot Block Header") area of the first * vdev label of the given vdev. 
*/ static int vdev_read_pad2(vdev_t *vdev, char *buf, size_t size) { blkptr_t bp; char *tmp = zap_scratch; off_t off = offsetof(vdev_label_t, vl_pad2); if (size > VDEV_PAD_SIZE) size = VDEV_PAD_SIZE; BP_ZERO(&bp); BP_SET_LSIZE(&bp, VDEV_PAD_SIZE); BP_SET_PSIZE(&bp, VDEV_PAD_SIZE); BP_SET_CHECKSUM(&bp, ZIO_CHECKSUM_LABEL); BP_SET_COMPRESS(&bp, ZIO_COMPRESS_OFF); DVA_SET_OFFSET(BP_IDENTITY(&bp), off); if (vdev_read_phys(vdev, &bp, tmp, off, 0)) return (EIO); memcpy(buf, tmp, size); return (0); } static int vdev_clear_pad2(vdev_t *vdev) { char *zeroes = zap_scratch; uint64_t *end; off_t off = offsetof(vdev_label_t, vl_pad2); memset(zeroes, 0, VDEV_PAD_SIZE); end = (uint64_t *)(zeroes + VDEV_PAD_SIZE); /* ZIO_CHECKSUM_LABEL magic and pre-calculated checksum for all zeros */ end[-5] = 0x0210da7ab10c7a11; end[-4] = 0x97f48f807f6e2a3f; end[-3] = 0xaf909f1658aacefc; end[-2] = 0xcbd1ea57ff6db48b; end[-1] = 0x6ec692db0d465fab; if (vdev_write(vdev, vdev->v_read_priv, off, zeroes, VDEV_PAD_SIZE)) return (EIO); return (0); } static void bios_getmem(void) { uint64_t size; /* Parse system memory map */ v86.ebx = 0; do { v86.ctl = V86_FLAGS; v86.addr = 0x15; /* int 0x15 function 0xe820 */ v86.eax = 0xe820; v86.ecx = sizeof(struct bios_smap); v86.edx = SMAP_SIG; v86.es = VTOPSEG(&smap); v86.edi = VTOPOFF(&smap); v86int(); if (V86_CY(v86.efl) || (v86.eax != SMAP_SIG)) break; /* look for a low-memory segment that's large enough */ if ((smap.type == SMAP_TYPE_MEMORY) && (smap.base == 0) && (smap.length >= (512 * 1024))) bios_basemem = smap.length; /* look for the first segment in 'extended' memory */ if ((smap.type == SMAP_TYPE_MEMORY) && (smap.base == 0x100000)) { bios_extmem = smap.length; } /* * Look for the largest segment in 'extended' memory beyond * 1MB but below 4GB. */ if ((smap.type == SMAP_TYPE_MEMORY) && (smap.base > 0x100000) && (smap.base < 0x100000000ull)) { size = smap.length; /* * If this segment crosses the 4GB boundary, truncate it. */ if (smap.base + size > 0x100000000ull) size = 0x100000000ull - smap.base; if (size > high_heap_size) { high_heap_size = size; high_heap_base = smap.base; } } } while (v86.ebx != 0); /* Fall back to the old compatibility function for base memory */ if (bios_basemem == 0) { v86.ctl = 0; v86.addr = 0x12; /* int 0x12 */ v86int(); bios_basemem = (v86.eax & 0xffff) * 1024; } /* Fall back through several compatibility functions for extended memory */ if (bios_extmem == 0) { v86.ctl = V86_FLAGS; v86.addr = 0x15; /* int 0x15 function 0xe801 */ v86.eax = 0xe801; v86int(); if (!V86_CY(v86.efl)) { bios_extmem = ((v86.ecx & 0xffff) + ((v86.edx & 0xffff) * 64)) * 1024; } } if (bios_extmem == 0) { v86.ctl = 0; v86.addr = 0x15; /* int 0x15 function 0x88 */ v86.eax = 0x8800; v86int(); bios_extmem = (v86.eax & 0xffff) * 1024; } /* * If we have extended memory and did not find a suitable heap * region in the SMAP, use the last HEAP_MIN (64MB) of 'extended' memory as a * high heap candidate.
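 * For example, with HEAP_MIN = 64MB and bios_extmem = 512MB, the
 * fallback heap is placed at 0x100000 + 512MB - 64MB.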
*/ if (bios_extmem >= HEAP_MIN && high_heap_size < HEAP_MIN) { high_heap_size = HEAP_MIN; high_heap_base = bios_extmem + 0x100000 - HEAP_MIN; } } /* * Try to detect a device supported by the legacy int13 BIOS */ static int int13probe(int drive) { v86.ctl = V86_FLAGS; v86.addr = 0x13; v86.eax = 0x800; v86.edx = drive; v86int(); if (!V86_CY(v86.efl) && /* carry clear */ ((v86.edx & 0xff) != (drive & DRV_MASK))) { /* unit # OK */ if ((v86.ecx & 0x3f) == 0) { /* absurd sector size */ return(0); /* skip device */ } return (1); } return(0); } /* * We call this when we find a ZFS vdev - ZFS consumes the dsk * structure so we must make a new one. */ static struct zfsdsk * copy_dsk(struct zfsdsk *zdsk) { struct zfsdsk *newdsk; newdsk = malloc(sizeof(struct zfsdsk)); *newdsk = *zdsk; return (newdsk); } /* * Get disk size from eax=0x800 and 0x4800. We need to probe both * because 0x4800 may not be available and we would like to get more * or less correct disk size - if it is possible at all. * Note we do not really want to touch drv.c because that code is shared * with boot2 and we can not afford to grow that code. */ static uint64_t drvsize_ext(struct zfsdsk *zdsk) { struct dsk *dskp; uint64_t size, tmp; int cyl, hds, sec; dskp = &zdsk->dsk; v86.ctl = V86_FLAGS; v86.addr = 0x13; v86.eax = 0x800; v86.edx = dskp->drive; v86int(); /* Don't error out if we get bad sector number, try EDD as well */ if (V86_CY(v86.efl) || /* carry set */ (v86.edx & 0xff) <= (unsigned)(dskp->drive & 0x7f)) /* unit # bad */ return (0); cyl = ((v86.ecx & 0xc0) << 2) + ((v86.ecx & 0xff00) >> 8) + 1; /* Convert max head # -> # of heads */ hds = ((v86.edx & 0xff00) >> 8) + 1; sec = v86.ecx & 0x3f; size = (uint64_t)cyl * hds * sec; /* Determine if we can use EDD with this device. */ v86.ctl = V86_FLAGS; v86.addr = 0x13; v86.eax = 0x4100; v86.edx = dskp->drive; v86.ebx = 0x55aa; v86int(); if (V86_CY(v86.efl) || /* carry set */ (v86.ebx & 0xffff) != 0xaa55 || /* signature */ (v86.ecx & EDD_INTERFACE_FIXED_DISK) == 0) return (size); tmp = drvsize(dskp); if (tmp > size) size = tmp; return (size); } /* * The "layered" ioctl to read disk/partition size. Unfortunately * the zfsboot case is hardest, because we do not have full software * stack available, so we need to do some manual work here. */ uint64_t ldi_get_size(void *priv) { struct zfsdsk *zdsk = priv; uint64_t size = zdsk->dsk.size; if (zdsk->dsk.start == 0) size = drvsize_ext(zdsk); return (size * DEV_BSIZE); } static void probe_drive(struct zfsdsk *zdsk) { #ifdef GPT struct gpt_hdr hdr; struct gpt_ent *ent; unsigned part, entries_per_sec; daddr_t slba; #endif #if defined(GPT) || defined(LOADER_GELI_SUPPORT) daddr_t elba; #endif struct dos_partition *dp; char *sec; unsigned i; - /* - * If we find a vdev on the whole disk, stop here. - */ - if (vdev_probe(vdev_read2, zdsk, NULL) == 0) - return; - #ifdef LOADER_GELI_SUPPORT /* - * Taste the disk, if it is GELI encrypted, decrypt it and check to see if - * it is a usable vdev then. Otherwise dig - * out the partition table and probe each slice/partition - * in turn for a vdev or GELI encrypted vdev. + * Taste the disk, if it is GELI encrypted, decrypt it then dig out the + * partition table and probe each slice/partition in turn for a vdev or + * GELI encrypted vdev. 
*/ elba = drvsize_ext(zdsk); if (elba > 0) { elba--; } zdsk->gdev = geli_taste(vdev_read, zdsk, elba, "disk%u:0:"); - if (zdsk->gdev != NULL) { - if (geli_havekey(zdsk->gdev) == 0 || - geli_passphrase(zdsk->gdev, gelipw) == 0) { - if (vdev_probe(vdev_read2, zdsk, NULL) == 0) { - return; - } - } - } + if ((zdsk->gdev != NULL) && (geli_havekey(zdsk->gdev) == 0)) + geli_passphrase(zdsk->gdev, gelipw); #endif /* LOADER_GELI_SUPPORT */ sec = dmadat->secbuf; zdsk->dsk.start = 0; #ifdef GPT /* * First check for GPT. */ if (drvread(&zdsk->dsk, sec, 1, 1)) { return; } memcpy(&hdr, sec, sizeof(hdr)); if (memcmp(hdr.hdr_sig, GPT_HDR_SIG, sizeof(hdr.hdr_sig)) != 0 || hdr.hdr_lba_self != 1 || hdr.hdr_revision < 0x00010000 || hdr.hdr_entsz < sizeof(*ent) || DEV_BSIZE % hdr.hdr_entsz != 0) { goto trymbr; } /* * Probe all GPT partitions for the presence of ZFS pools. We * return the spa_t for the first we find (if requested). This * will have the effect of booting from the first pool on the * disk. * * If no vdev is found, try GELI decrypting the device and probe again. */ entries_per_sec = DEV_BSIZE / hdr.hdr_entsz; slba = hdr.hdr_lba_table; elba = slba + hdr.hdr_entries / entries_per_sec; while (slba < elba) { zdsk->dsk.start = 0; if (drvread(&zdsk->dsk, sec, slba, 1)) return; for (part = 0; part < entries_per_sec; part++) { ent = (struct gpt_ent *)(sec + part * hdr.hdr_entsz); if (memcmp(&ent->ent_type, &freebsd_zfs_uuid, sizeof(uuid_t)) == 0) { zdsk->dsk.start = ent->ent_lba_start; zdsk->dsk.size = ent->ent_lba_end - ent->ent_lba_start + 1; zdsk->dsk.slice = part + 1; zdsk->dsk.part = 255; if (vdev_probe(vdev_read2, zdsk, NULL) == 0) { /* * This slice had a vdev. We need a new dsk * structure now since the vdev now owns this one. */ zdsk = copy_dsk(zdsk); } #ifdef LOADER_GELI_SUPPORT else if ((zdsk->gdev = geli_taste(vdev_read, zdsk, ent->ent_lba_end - ent->ent_lba_start, "disk%up%u:", zdsk->dsk.unit, zdsk->dsk.slice)) != NULL) { if (geli_havekey(zdsk->gdev) == 0 || geli_passphrase(zdsk->gdev, gelipw) == 0) { /* * This slice has GELI, check it for ZFS. */ if (vdev_probe(vdev_read2, zdsk, NULL) == 0) { /* * This slice had a vdev. We need a new dsk * structure now since the vdev now owns this one. */ zdsk = copy_dsk(zdsk); } break; } } #endif /* LOADER_GELI_SUPPORT */ } } slba++; } return; trymbr: #endif /* GPT */ if (drvread(&zdsk->dsk, sec, DOSBBSECTOR, 1)) return; dp = (void *)(sec + DOSPARTOFF); for (i = 0; i < NDOSPART; i++) { if (!dp[i].dp_typ) continue; zdsk->dsk.start = dp[i].dp_start; zdsk->dsk.size = dp[i].dp_size; zdsk->dsk.slice = i + 1; if (vdev_probe(vdev_read2, zdsk, NULL) == 0) { zdsk = copy_dsk(zdsk); } #ifdef LOADER_GELI_SUPPORT else if ((zdsk->gdev = geli_taste(vdev_read, zdsk, dp[i].dp_size - dp[i].dp_start, "disk%us%u:")) != NULL) { if (geli_havekey(zdsk->gdev) == 0 || geli_passphrase(zdsk->gdev, gelipw) == 0) { /* * This slice has GELI, check it for ZFS. */ if (vdev_probe(vdev_read2, zdsk, NULL) == 0) { /* * This slice had a vdev. We need a new dsk * structure now since the vdev now owns this one.
*/ zdsk = copy_dsk(zdsk); } break; } } #endif /* LOADER_GELI_SUPPORT */ } } int main(void) { dnode_phys_t dn; off_t off; struct zfsdsk *zdsk; int autoboot, i; int nextboot; int rc; dmadat = (void *)(roundup2(__base + (int32_t)&_end, 0x10000) - __base); bios_getmem(); if (high_heap_size > 0) { heap_end = PTOV(high_heap_base + high_heap_size); heap_next = PTOV(high_heap_base); } else { heap_next = (char *)dmadat + sizeof(*dmadat); heap_end = (char *)PTOV(bios_basemem); } setheap(heap_next, heap_end); zdsk = calloc(1, sizeof(struct zfsdsk)); zdsk->dsk.drive = *(uint8_t *)PTOV(ARGS); zdsk->dsk.type = zdsk->dsk.drive & DRV_HARD ? TYPE_AD : TYPE_FD; zdsk->dsk.unit = zdsk->dsk.drive & DRV_MASK; zdsk->dsk.slice = *(uint8_t *)PTOV(ARGS + 1) + 1; zdsk->dsk.part = 0; zdsk->dsk.start = 0; zdsk->dsk.size = drvsize_ext(zdsk); bootinfo.bi_version = BOOTINFO_VERSION; bootinfo.bi_size = sizeof(bootinfo); bootinfo.bi_basemem = bios_basemem / 1024; bootinfo.bi_extmem = bios_extmem / 1024; bootinfo.bi_memsizes_valid++; bootinfo.bi_bios_dev = zdsk->dsk.drive; bootdev = MAKEBOOTDEV(dev_maj[zdsk->dsk.type], zdsk->dsk.slice, zdsk->dsk.unit, zdsk->dsk.part); /* Process configuration file */ autoboot = 1; zfs_init(); /* * Probe the boot drive first - we will try to boot from whatever * pool we find on that drive. */ probe_drive(zdsk); /* * Probe the rest of the drives that the bios knows about. This * will find any other available pools and it may fill in missing * vdevs for the boot pool. */ #ifndef VIRTUALBOX for (i = 0; i < *(unsigned char *)PTOV(BIOS_NUMDRIVES); i++) #else for (i = 0; i < MAXBDDEV; i++) #endif { if ((i | DRV_HARD) == *(uint8_t *)PTOV(ARGS)) continue; if (!int13probe(i | DRV_HARD)) break; zdsk = calloc(1, sizeof(struct zfsdsk)); zdsk->dsk.drive = i | DRV_HARD; zdsk->dsk.type = zdsk->dsk.drive & TYPE_AD; zdsk->dsk.unit = i; zdsk->dsk.slice = 0; zdsk->dsk.part = 0; zdsk->dsk.start = 0; zdsk->dsk.size = drvsize_ext(zdsk); probe_drive(zdsk); } /* * The first discovered pool, if any, is the pool. */ spa = spa_get_primary(); if (!spa) { printf("%s: No ZFS pools located, can't boot\n", BOOTPROG); for (;;) ; } primary_spa = spa; primary_vdev = spa_get_primary_vdev(spa); nextboot = 0; rc = vdev_read_pad2(primary_vdev, cmd, sizeof(cmd)); if (vdev_clear_pad2(primary_vdev)) printf("failed to clear pad2 area of primary vdev\n"); if (rc == 0) { if (*cmd) { /* * We could find an old-style ZFS Boot Block header here. * Simply ignore it. */ if (*(uint64_t *)cmd != 0x2f5b007b10c) { /* * Note that parse() is destructive to cmd[] and we also want * to honor RBX_QUIET option that could be present in cmd[]. */ nextboot = 1; memcpy(cmddup, cmd, sizeof(cmd)); if (parse_cmd()) { printf("failed to parse pad2 area of primary vdev\n"); reboot(); } if (!OPT_CHECK(RBX_QUIET)) printf("zfs nextboot: %s\n", cmddup); } /* Do not process this command twice */ *cmd = 0; } } else printf("failed to read pad2 area of primary vdev\n"); /* Mount ZFS only if it's not already mounted via nextboot parsing. */ if (zfsmount.spa == NULL && (zfs_spa_init(spa) != 0 || zfs_mount(spa, 0, &zfsmount) != 0)) { printf("%s: failed to mount default pool %s\n", BOOTPROG, spa->spa_name); autoboot = 0; } else if (zfs_lookup(&zfsmount, PATH_CONFIG, &dn) == 0 || zfs_lookup(&zfsmount, PATH_DOTCONFIG, &dn) == 0) { off = 0; zfs_read(spa, &dn, &off, cmd, sizeof(cmd)); } if (*cmd) { /* * Note that parse_cmd() is destructive to cmd[] and we also want * to honor RBX_QUIET option that could be present in cmd[]. 
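 * The untokenized line is kept in cmddup[] purely so the informational
 * printf below can still display it.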
*/ memcpy(cmddup, cmd, sizeof(cmd)); if (parse_cmd()) autoboot = 0; if (!OPT_CHECK(RBX_QUIET)) printf("%s: %s\n", PATH_CONFIG, cmddup); /* Do not process this command twice */ *cmd = 0; } /* Do not risk waiting at the prompt forever. */ if (nextboot && !autoboot) reboot(); /* * Try to exec /boot/loader. If interrupted by a keypress, * or in case of failure, try to load a kernel directly instead. */ if (autoboot && !*kname) { memcpy(kname, PATH_LOADER, sizeof(PATH_LOADER)); if (!keyhit(3)) { load(); memcpy(kname, PATH_KERNEL, sizeof(PATH_KERNEL)); } } /* Present the user with the boot2 prompt. */ for (;;) { if (!autoboot || !OPT_CHECK(RBX_QUIET)) { printf("\nFreeBSD/x86 boot\n"); if (zfs_rlookup(spa, zfsmount.rootobj, rootname) != 0) printf("Default: %s/<0x%llx>:%s\n" "boot: ", spa->spa_name, zfsmount.rootobj, kname); else if (rootname[0] != '\0') printf("Default: %s/%s:%s\n" "boot: ", spa->spa_name, rootname, kname); else printf("Default: %s:%s\n" "boot: ", spa->spa_name, kname); } if (ioctrl & IO_SERIAL) sio_flush(); if (!autoboot || keyhit(5)) getstr(cmd, sizeof(cmd)); else if (!autoboot || !OPT_CHECK(RBX_QUIET)) putchar('\n'); autoboot = 0; if (parse_cmd()) putchar('\a'); else load(); } } /* XXX - Needed for btxld to link the boot2 binary; do not remove. */ void exit(int x) { __exit(x); } void reboot(void) { __exit(0); } static void load(void) { union { struct exec ex; Elf32_Ehdr eh; } hdr; static Elf32_Phdr ep[2]; static Elf32_Shdr es[2]; caddr_t p; dnode_phys_t dn; off_t off; uint32_t addr, x; int fmt, i, j; if (zfs_lookup(&zfsmount, kname, &dn)) { printf("\nCan't find %s\n", kname); return; } off = 0; if (xfsread(&dn, &off, &hdr, sizeof(hdr))) return; if (N_GETMAGIC(hdr.ex) == ZMAGIC) fmt = 0; else if (IS_ELF(hdr.eh)) fmt = 1; else { printf("Invalid %s\n", "format"); return; } if (fmt == 0) { addr = hdr.ex.a_entry & 0xffffff; p = PTOV(addr); off = PAGE_SIZE; if (xfsread(&dn, &off, p, hdr.ex.a_text)) return; p += roundup2(hdr.ex.a_text, PAGE_SIZE); if (xfsread(&dn, &off, p, hdr.ex.a_data)) return; p += hdr.ex.a_data + roundup2(hdr.ex.a_bss, PAGE_SIZE); bootinfo.bi_symtab = VTOP(p); memcpy(p, &hdr.ex.a_syms, sizeof(hdr.ex.a_syms)); p += sizeof(hdr.ex.a_syms); if (hdr.ex.a_syms) { if (xfsread(&dn, &off, p, hdr.ex.a_syms)) return; p += hdr.ex.a_syms; if (xfsread(&dn, &off, p, sizeof(int))) return; x = *(uint32_t *)p; p += sizeof(int); x -= sizeof(int); if (xfsread(&dn, &off, p, x)) return; p += x; } } else { off = hdr.eh.e_phoff; for (j = i = 0; i < hdr.eh.e_phnum && j < 2; i++) { if (xfsread(&dn, &off, ep + j, sizeof(ep[0]))) return; if (ep[j].p_type == PT_LOAD) j++; } for (i = 0; i < 2; i++) { p = PTOV(ep[i].p_paddr & 0xffffff); off = ep[i].p_offset; if (xfsread(&dn, &off, p, ep[i].p_filesz)) return; } p += roundup2(ep[1].p_memsz, PAGE_SIZE); bootinfo.bi_symtab = VTOP(p); if (hdr.eh.e_shnum == hdr.eh.e_shstrndx + 3) { off = hdr.eh.e_shoff + sizeof(es[0]) * (hdr.eh.e_shstrndx + 1); if (xfsread(&dn, &off, &es, sizeof(es))) return; for (i = 0; i < 2; i++) { memcpy(p, &es[i].sh_size, sizeof(es[i].sh_size)); p += sizeof(es[i].sh_size); off = es[i].sh_offset; if (xfsread(&dn, &off, p, es[i].sh_size)) return; p += es[i].sh_size; } } addr = hdr.eh.e_entry & 0xffffff; } bootinfo.bi_esymtab = VTOP(p); bootinfo.bi_kernelname = VTOP(kname); zfsargs.size = sizeof(zfsargs); zfsargs.pool = zfsmount.spa->spa_guid; zfsargs.root = zfsmount.rootobj; zfsargs.primary_pool = primary_spa->spa_guid; #ifdef LOADER_GELI_SUPPORT explicit_bzero(gelipw, sizeof(gelipw)); export_geli_boot_data(&zfsargs.gelidata); 
#endif if (primary_vdev != NULL) zfsargs.primary_vdev = primary_vdev->v_guid; else printf("failed to detect primary vdev\n"); /* * Note that the zfsargs struct is passed by value, not by pointer. Code in * btxldr.S copies the values from the entry stack to a fixed location * within loader(8) at startup due to the presence of KARGS_FLAGS_EXTARG. */ __exec((caddr_t)addr, RB_BOOTINFO | (opts & RBX_MASK), bootdev, KARGS_FLAGS_ZFS | KARGS_FLAGS_EXTARG, (uint32_t) spa->spa_guid, (uint32_t) (spa->spa_guid >> 32), VTOP(&bootinfo), zfsargs); } static int zfs_mount_ds(char *dsname) { uint64_t newroot; spa_t *newspa; char *q; q = strchr(dsname, '/'); if (q) *q++ = '\0'; newspa = spa_find_by_name(dsname); if (newspa == NULL) { printf("\nCan't find ZFS pool %s\n", dsname); return -1; } if (zfs_spa_init(newspa)) return -1; newroot = 0; if (q) { if (zfs_lookup_dataset(newspa, q, &newroot)) { printf("\nCan't find dataset %s in ZFS pool %s\n", q, newspa->spa_name); return -1; } } if (zfs_mount(newspa, newroot, &zfsmount)) { printf("\nCan't mount ZFS dataset\n"); return -1; } spa = newspa; return (0); } static int parse_cmd(void) { char *arg = cmd; char *ep, *p, *q; const char *cp; int c, i, j; while ((c = *arg++)) { if (c == ' ' || c == '\t' || c == '\n') continue; for (p = arg; *p && *p != '\n' && *p != ' ' && *p != '\t'; p++); ep = p; if (*p) *p++ = 0; if (c == '-') { while ((c = *arg++)) { if (c == 'P') { if (*(uint8_t *)PTOV(0x496) & 0x10) { cp = "yes"; } else { opts |= OPT_SET(RBX_DUAL) | OPT_SET(RBX_SERIAL); cp = "no"; } printf("Keyboard: %s\n", cp); continue; } else if (c == 'S') { j = 0; while ((unsigned int)(i = *arg++ - '0') <= 9) j = j * 10 + i; if (j > 0 && i == -'0') { comspeed = j; break; } /* Fall through to error below ('S' not in optstr[]). */ } for (i = 0; c != optstr[i]; i++) if (i == NOPT - 1) return -1; opts ^= OPT_SET(flags[i]); } ioctrl = OPT_CHECK(RBX_DUAL) ? (IO_SERIAL|IO_KEYBOARD) : OPT_CHECK(RBX_SERIAL) ? IO_SERIAL : IO_KEYBOARD; if (ioctrl & IO_SERIAL) { if (sio_init(115200 / comspeed) != 0) ioctrl &= ~IO_SERIAL; } } if (c == '?') { dnode_phys_t dn; if (zfs_lookup(&zfsmount, arg, &dn) == 0) { zap_list(spa, &dn); } return -1; } else { arg--; /* * Report pool status if the command is 'status'. Let's * hope no-one wants to load /status as a kernel. */ if (!strcmp(arg, "status")) { spa_all_status(); return -1; } /* * If there is a "zfs:" prefix, simply ignore it. */ if (strncmp(arg, "zfs:", 4) == 0) arg += 4; /* * If there is a colon, switch pools. */ q = strchr(arg, ':'); if (q) { *q++ = '\0'; if (zfs_mount_ds(arg) != 0) return -1; arg = q; } if ((i = ep - arg)) { if ((size_t)i >= sizeof(kname)) return -1; memcpy(kname, arg, i + 1); } } arg = p; } return 0; } Index: projects/import-googletest-1.8.1/stand/libsa/cd9660.c =================================================================== --- projects/import-googletest-1.8.1/stand/libsa/cd9660.c (revision 344270) +++ projects/import-googletest-1.8.1/stand/libsa/cd9660.c (revision 344271) @@ -1,593 +1,597 @@ /* $NetBSD: cd9660.c,v 1.5 1997/06/26 19:11:33 drochner Exp $ */ /* * Copyright (C) 1996 Wolfgang Solfrank. * Copyright (C) 1996 TooLs GmbH. * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2.
Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * 3. All advertising materials mentioning features or use of this software * must display the following acknowledgement: * This product includes software developed by TooLs GmbH. * 4. The name of TooLs GmbH may not be used to endorse or promote products * derived from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY TOOLS GMBH ``AS IS'' AND ANY EXPRESS OR * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. * IN NO EVENT SHALL TOOLS GMBH BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; * OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR * OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF * ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ #include __FBSDID("$FreeBSD$"); /* * Stand-alone ISO9660 file reading package. * * Note: This doesn't support Rock Ridge extensions, extended attributes, * blocksizes other than 2048 bytes, multi-extent files, etc. */ #include #include #include #include #include #include "stand.h" #define SUSP_CONTINUATION "CE" #define SUSP_PRESENT "SP" #define SUSP_STOP "ST" #define SUSP_EXTREF "ER" #define RRIP_NAME "NM" typedef struct { ISO_SUSP_HEADER h; u_char signature [ISODCL ( 5, 6)]; u_char len_skp [ISODCL ( 7, 7)]; /* 711 */ } ISO_SUSP_PRESENT; static int buf_read_file(struct open_file *f, char **buf_p, size_t *size_p); static int cd9660_open(const char *path, struct open_file *f); static int cd9660_close(struct open_file *f); static int cd9660_read(struct open_file *f, void *buf, size_t size, size_t *resid); static off_t cd9660_seek(struct open_file *f, off_t offset, int where); static int cd9660_stat(struct open_file *f, struct stat *sb); static int cd9660_readdir(struct open_file *f, struct dirent *d); static int dirmatch(struct open_file *f, const char *path, struct iso_directory_record *dp, int use_rrip, int lenskip); static int rrip_check(struct open_file *f, struct iso_directory_record *dp, int *lenskip); static char *rrip_lookup_name(struct open_file *f, struct iso_directory_record *dp, int lenskip, size_t *len); static ISO_SUSP_HEADER *susp_lookup_record(struct open_file *f, const char *identifier, struct iso_directory_record *dp, int lenskip); struct fs_ops cd9660_fsops = { "cd9660", cd9660_open, cd9660_close, cd9660_read, null_write, cd9660_seek, cd9660_stat, cd9660_readdir }; #define F_ISDIR 0x0001 /* Directory */ #define F_ROOTDIR 0x0002 /* Root directory */ #define F_RR 0x0004 /* Rock Ridge on this volume */ struct file { int f_flags; /* file flags */ off_t f_off; /* Current offset within file */ daddr_t f_bno; /* Starting block number */ off_t f_size; /* Size of file */ daddr_t f_buf_blkno; /* block number of data block */ char *f_buf; /* buffer for data block */ int f_susp_skip; /* len_skip for SUSP records */ }; struct ptable_ent { char namlen [ISODCL( 1, 1)]; /* 711 */ char extlen [ISODCL( 2, 2)]; /* 711 */ char block [ISODCL( 3, 6)]; /* 732 */ char parent [ISODCL( 7, 8)]; /* 722 */ char name [1]; }; #define PTFIXSZ 8 #define PTSIZE(pp) 
roundup(PTFIXSZ + isonum_711((pp)->namlen), 2) #define cdb2devb(bno) ((bno) * ISO_DEFAULT_BLOCK_SIZE / DEV_BSIZE) static ISO_SUSP_HEADER * susp_lookup_record(struct open_file *f, const char *identifier, struct iso_directory_record *dp, int lenskip) { static char susp_buffer[ISO_DEFAULT_BLOCK_SIZE]; ISO_SUSP_HEADER *sh; ISO_RRIP_CONT *shc; char *p, *end; int error; size_t read; p = dp->name + isonum_711(dp->name_len) + lenskip; /* Names of even length have a padding byte after the name. */ if ((isonum_711(dp->name_len) & 1) == 0) p++; end = (char *)dp + isonum_711(dp->length); while (p + 3 < end) { sh = (ISO_SUSP_HEADER *)p; if (bcmp(sh->type, identifier, 2) == 0) return (sh); if (bcmp(sh->type, SUSP_STOP, 2) == 0) return (NULL); if (bcmp(sh->type, SUSP_CONTINUATION, 2) == 0) { shc = (ISO_RRIP_CONT *)sh; error = f->f_dev->dv_strategy(f->f_devdata, F_READ, cdb2devb(isonum_733(shc->location)), ISO_DEFAULT_BLOCK_SIZE, susp_buffer, &read); /* Bail if it fails. */ if (error != 0 || read != ISO_DEFAULT_BLOCK_SIZE) return (NULL); p = susp_buffer + isonum_733(shc->offset); end = p + isonum_733(shc->length); } else { /* Ignore this record and skip to the next. */ p += isonum_711(sh->length); /* Avoid infinite loops with corrupted file systems */ if (isonum_711(sh->length) == 0) return (NULL); } } return (NULL); } static char * rrip_lookup_name(struct open_file *f, struct iso_directory_record *dp, int lenskip, size_t *len) { ISO_RRIP_ALTNAME *p; if (len == NULL) return (NULL); p = (ISO_RRIP_ALTNAME *)susp_lookup_record(f, RRIP_NAME, dp, lenskip); if (p == NULL) return (NULL); switch (*p->flags) { case ISO_SUSP_CFLAG_CURRENT: *len = 1; return ("."); case ISO_SUSP_CFLAG_PARENT: *len = 2; return (".."); case 0: *len = isonum_711(p->h.length) - 5; return ((char *)p + 5); default: /* * We don't handle hostnames or continued names as they are * too hard, so just bail and use the default name. */ return (NULL); } } static int rrip_check(struct open_file *f, struct iso_directory_record *dp, int *lenskip) { ISO_SUSP_PRESENT *sp; ISO_RRIP_EXTREF *er; char *p; /* First, see if we can find a SP field. */ p = dp->name + isonum_711(dp->name_len); if (p > (char *)dp + isonum_711(dp->length)) return (0); sp = (ISO_SUSP_PRESENT *)p; if (bcmp(sp->h.type, SUSP_PRESENT, 2) != 0) return (0); if (isonum_711(sp->h.length) != sizeof(ISO_SUSP_PRESENT)) return (0); if (sp->signature[0] != 0xbe || sp->signature[1] != 0xef) return (0); *lenskip = isonum_711(sp->len_skp); /* * Now look for an ER field. If RRIP is present, then there must * be at least one of these. It would be more pedantic to walk * through the list of fields looking for a Rock Ridge ER field. */ er = (ISO_RRIP_EXTREF *)susp_lookup_record(f, SUSP_EXTREF, dp, 0); if (er == NULL) return (0); return (1); } static int dirmatch(struct open_file *f, const char *path, struct iso_directory_record *dp, int use_rrip, int lenskip) { size_t len; char *cp; int i, icase; if (use_rrip) cp = rrip_lookup_name(f, dp, lenskip, &len); else cp = NULL; if (cp == NULL) { len = isonum_711(dp->name_len); cp = dp->name; icase = 1; } else icase = 0; + + if (strlen(path) != len) + return (0); + for (i = len; --i >= 0; path++, cp++) { if (!*path || *path == '/') break; if (*path == *cp) continue; if (!icase && toupper(*path) == *cp) continue; return 0; } if (*path && *path != '/') return 0; /* * Allow stripping of trailing dots and the version number. * Note that this will find the first instead of the last version * of a file. 
*/ if (i >= 0 && (*cp == ';' || *cp == '.')) { /* This is to prevent matching of numeric extensions */ if (*cp == '.' && cp[1] != ';') return 0; while (--i >= 0) if (*++cp != ';' && (*cp < '0' || *cp > '9')) return 0; } return 1; } static int cd9660_open(const char *path, struct open_file *f) { struct file *fp = NULL; void *buf; struct iso_primary_descriptor *vd; size_t buf_size, read, dsize, off; daddr_t bno, boff; struct iso_directory_record rec; struct iso_directory_record *dp = NULL; int rc, first, use_rrip, lenskip; /* First find the volume descriptor */ buf = malloc(buf_size = ISO_DEFAULT_BLOCK_SIZE); vd = buf; for (bno = 16;; bno++) { twiddle(1); rc = f->f_dev->dv_strategy(f->f_devdata, F_READ, cdb2devb(bno), ISO_DEFAULT_BLOCK_SIZE, buf, &read); if (rc) goto out; if (read != ISO_DEFAULT_BLOCK_SIZE) { rc = EIO; goto out; } rc = EINVAL; if (bcmp(vd->id, ISO_STANDARD_ID, sizeof vd->id) != 0) goto out; if (isonum_711(vd->type) == ISO_VD_END) goto out; if (isonum_711(vd->type) == ISO_VD_PRIMARY) break; } if (isonum_723(vd->logical_block_size) != ISO_DEFAULT_BLOCK_SIZE) goto out; bcopy(vd->root_directory_record, &rec, sizeof(rec)); if (*path == '/') path++; /* eat leading '/' */ first = 1; use_rrip = 0; lenskip = 0; while (*path) { bno = isonum_733(rec.extent) + isonum_711(rec.ext_attr_length); dsize = isonum_733(rec.size); off = 0; boff = 0; while (off < dsize) { if ((off % ISO_DEFAULT_BLOCK_SIZE) == 0) { twiddle(1); rc = f->f_dev->dv_strategy (f->f_devdata, F_READ, cdb2devb(bno + boff), ISO_DEFAULT_BLOCK_SIZE, buf, &read); if (rc) goto out; if (read != ISO_DEFAULT_BLOCK_SIZE) { rc = EIO; goto out; } boff++; dp = (struct iso_directory_record *) buf; } if (isonum_711(dp->length) == 0) { /* skip to next block, if any */ off = boff * ISO_DEFAULT_BLOCK_SIZE; continue; } /* See if RRIP is in use. */ if (first) use_rrip = rrip_check(f, dp, &lenskip); if (dirmatch(f, path, dp, use_rrip, first ? 0 : lenskip)) { first = 0; break; } else first = 0; dp = (struct iso_directory_record *) ((char *) dp + isonum_711(dp->length)); /* If the new block has zero length, it is padding. */ if (isonum_711(dp->length) == 0) { /* Skip to next block, if any. */ off = boff * ISO_DEFAULT_BLOCK_SIZE; continue; } off += isonum_711(dp->length); } if (off >= dsize) { rc = ENOENT; goto out; } rec = *dp; while (*path && *path != '/') /* look for next component */ path++; if (*path) path++; /* skip '/' */ } /* allocate file system specific data structure */ fp = malloc(sizeof(struct file)); bzero(fp, sizeof(struct file)); f->f_fsdata = (void *)fp; if ((isonum_711(rec.flags) & 2) != 0) { fp->f_flags = F_ISDIR; } if (first) { fp->f_flags |= F_ROOTDIR; /* Check for Rock Ridge since we didn't in the loop above. 
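 * The root's own first record (the "." entry) is where a SUSP "SP"
 * field would live, so one block of the root extent is read and
 * rrip_check() is run against that record.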
*/ bno = isonum_733(rec.extent) + isonum_711(rec.ext_attr_length); twiddle(1); rc = f->f_dev->dv_strategy(f->f_devdata, F_READ, cdb2devb(bno), ISO_DEFAULT_BLOCK_SIZE, buf, &read); if (rc) goto out; if (read != ISO_DEFAULT_BLOCK_SIZE) { rc = EIO; goto out; } dp = (struct iso_directory_record *)buf; use_rrip = rrip_check(f, dp, &lenskip); } if (use_rrip) { fp->f_flags |= F_RR; fp->f_susp_skip = lenskip; } fp->f_off = 0; fp->f_bno = isonum_733(rec.extent) + isonum_711(rec.ext_attr_length); fp->f_size = isonum_733(rec.size); free(buf); return 0; out: if (fp) free(fp); free(buf); return rc; } static int cd9660_close(struct open_file *f) { struct file *fp = (struct file *)f->f_fsdata; f->f_fsdata = NULL; free(fp); return 0; } static int buf_read_file(struct open_file *f, char **buf_p, size_t *size_p) { struct file *fp = (struct file *)f->f_fsdata; daddr_t blkno, blkoff; int rc = 0; size_t read; blkno = fp->f_off / ISO_DEFAULT_BLOCK_SIZE + fp->f_bno; blkoff = fp->f_off % ISO_DEFAULT_BLOCK_SIZE; if (blkno != fp->f_buf_blkno) { if (fp->f_buf == (char *)0) fp->f_buf = malloc(ISO_DEFAULT_BLOCK_SIZE); twiddle(16); rc = f->f_dev->dv_strategy(f->f_devdata, F_READ, cdb2devb(blkno), ISO_DEFAULT_BLOCK_SIZE, fp->f_buf, &read); if (rc) return (rc); if (read != ISO_DEFAULT_BLOCK_SIZE) return (EIO); fp->f_buf_blkno = blkno; } *buf_p = fp->f_buf + blkoff; *size_p = ISO_DEFAULT_BLOCK_SIZE - blkoff; if (*size_p > fp->f_size - fp->f_off) *size_p = fp->f_size - fp->f_off; return (rc); } static int cd9660_read(struct open_file *f, void *start, size_t size, size_t *resid) { struct file *fp = (struct file *)f->f_fsdata; char *buf, *addr; size_t buf_size, csize; int rc = 0; addr = start; while (size) { if (fp->f_off < 0 || fp->f_off >= fp->f_size) break; rc = buf_read_file(f, &buf, &buf_size); if (rc) break; csize = size > buf_size ? 
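/* Copy the smaller of the caller's residual and what remains in the buffered block. */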
buf_size : size; bcopy(buf, addr, csize); fp->f_off += csize; addr += csize; size -= csize; } if (resid) *resid = size; return (rc); } static int cd9660_readdir(struct open_file *f, struct dirent *d) { struct file *fp = (struct file *)f->f_fsdata; struct iso_directory_record *ep; size_t buf_size, reclen, namelen; int error = 0; int lenskip; char *buf, *name; again: if (fp->f_off >= fp->f_size) return (ENOENT); error = buf_read_file(f, &buf, &buf_size); if (error) return (error); ep = (struct iso_directory_record *)buf; if (isonum_711(ep->length) == 0) { daddr_t blkno; /* skip to next block, if any */ blkno = fp->f_off / ISO_DEFAULT_BLOCK_SIZE; fp->f_off = (blkno + 1) * ISO_DEFAULT_BLOCK_SIZE; goto again; } if (fp->f_flags & F_RR) { if (fp->f_flags & F_ROOTDIR && fp->f_off == 0) lenskip = 0; else lenskip = fp->f_susp_skip; name = rrip_lookup_name(f, ep, lenskip, &namelen); } else name = NULL; if (name == NULL) { namelen = isonum_711(ep->name_len); name = ep->name; if (namelen == 1) { if (ep->name[0] == 0) name = "."; else if (ep->name[0] == 1) { namelen = 2; name = ".."; } } } reclen = sizeof(struct dirent) - (MAXNAMLEN+1) + namelen + 1; reclen = (reclen + 3) & ~3; d->d_fileno = isonum_733(ep->extent); d->d_reclen = reclen; if (isonum_711(ep->flags) & 2) d->d_type = DT_DIR; else d->d_type = DT_REG; d->d_namlen = namelen; bcopy(name, d->d_name, d->d_namlen); d->d_name[d->d_namlen] = 0; fp->f_off += isonum_711(ep->length); return (0); } static off_t cd9660_seek(struct open_file *f, off_t offset, int where) { struct file *fp = (struct file *)f->f_fsdata; switch (where) { case SEEK_SET: fp->f_off = offset; break; case SEEK_CUR: fp->f_off += offset; break; case SEEK_END: fp->f_off = fp->f_size - offset; break; default: return -1; } return fp->f_off; } static int cd9660_stat(struct open_file *f, struct stat *sb) { struct file *fp = (struct file *)f->f_fsdata; /* only important stuff */ sb->st_mode = S_IRUSR | S_IRGRP | S_IROTH; if (fp->f_flags & F_ISDIR) sb->st_mode |= S_IFDIR; else sb->st_mode |= S_IFREG; sb->st_uid = sb->st_gid = 0; sb->st_size = fp->f_size; return 0; } Index: projects/import-googletest-1.8.1/stand/libsa/zfs/zfs.c =================================================================== --- projects/import-googletest-1.8.1/stand/libsa/zfs/zfs.c (revision 344270) +++ projects/import-googletest-1.8.1/stand/libsa/zfs/zfs.c (revision 344271) @@ -1,1017 +1,1066 @@ /*- * Copyright (c) 2007 Doug Rabson * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. 
IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * $FreeBSD$ */ #include __FBSDID("$FreeBSD$"); /* * Stand-alone file reading package. */ #include #include #include #include #include #include #include #include #include #include #include #include "libzfs.h" #include "zfsimpl.c" /* Define the range of indexes to be populated with ZFS Boot Environments */ #define ZFS_BE_FIRST 4 #define ZFS_BE_LAST 8 static int zfs_open(const char *path, struct open_file *f); static int zfs_close(struct open_file *f); static int zfs_read(struct open_file *f, void *buf, size_t size, size_t *resid); static off_t zfs_seek(struct open_file *f, off_t offset, int where); static int zfs_stat(struct open_file *f, struct stat *sb); static int zfs_readdir(struct open_file *f, struct dirent *d); static void zfs_bootenv_initial(const char *); struct devsw zfs_dev; struct fs_ops zfs_fsops = { "zfs", zfs_open, zfs_close, zfs_read, null_write, zfs_seek, zfs_stat, zfs_readdir }; /* * In-core open file. */ struct file { off_t f_seekp; /* seek pointer */ dnode_phys_t f_dnode; uint64_t f_zap_type; /* zap type for readdir */ uint64_t f_num_leafs; /* number of fzap leaf blocks */ zap_leaf_phys_t *f_zap_leaf; /* zap leaf buffer */ }; static int zfs_env_index; static int zfs_env_count; SLIST_HEAD(zfs_be_list, zfs_be_entry) zfs_be_head = SLIST_HEAD_INITIALIZER(zfs_be_head); struct zfs_be_list *zfs_be_headp; struct zfs_be_entry { const char *name; SLIST_ENTRY(zfs_be_entry) entries; } *zfs_be, *zfs_be_tmp; /* * Open a file. */ static int zfs_open(const char *upath, struct open_file *f) { struct zfsmount *mount = (struct zfsmount *)f->f_devdata; struct file *fp; int rc; if (f->f_dev != &zfs_dev) return (EINVAL); /* allocate file system specific data structure */ fp = malloc(sizeof(struct file)); bzero(fp, sizeof(struct file)); f->f_fsdata = (void *)fp; rc = zfs_lookup(mount, upath, &fp->f_dnode); fp->f_seekp = 0; if (rc) { f->f_fsdata = NULL; free(fp); } return (rc); } static int zfs_close(struct open_file *f) { struct file *fp = (struct file *)f->f_fsdata; dnode_cache_obj = NULL; f->f_fsdata = (void *)0; if (fp == (struct file *)0) return (0); free(fp); return (0); } /* * Copy a portion of a file into kernel memory. * Cross block boundaries when necessary. 
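 * Reads are clamped to the file size: a request that extends past
 * EOF copies only the bytes that exist and reports the shortfall
 * through *resid.  For example, asking for 8192 bytes when only
 * 100 remain copies 100 bytes and sets *resid to 8092.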
*/ static int zfs_read(struct open_file *f, void *start, size_t size, size_t *resid /* out */) { const spa_t *spa = ((struct zfsmount *)f->f_devdata)->spa; struct file *fp = (struct file *)f->f_fsdata; struct stat sb; size_t n; int rc; rc = zfs_stat(f, &sb); if (rc) return (rc); n = size; if (fp->f_seekp + n > sb.st_size) n = sb.st_size - fp->f_seekp; rc = dnode_read(spa, &fp->f_dnode, fp->f_seekp, start, n); if (rc) return (rc); if (0) { int i; for (i = 0; i < n; i++) putchar(((char*) start)[i]); } fp->f_seekp += n; if (resid) *resid = size - n; return (0); } static off_t zfs_seek(struct open_file *f, off_t offset, int where) { struct file *fp = (struct file *)f->f_fsdata; switch (where) { case SEEK_SET: fp->f_seekp = offset; break; case SEEK_CUR: fp->f_seekp += offset; break; case SEEK_END: { struct stat sb; int error; error = zfs_stat(f, &sb); if (error != 0) { errno = error; return (-1); } fp->f_seekp = sb.st_size - offset; break; } default: errno = EINVAL; return (-1); } return (fp->f_seekp); } static int zfs_stat(struct open_file *f, struct stat *sb) { const spa_t *spa = ((struct zfsmount *)f->f_devdata)->spa; struct file *fp = (struct file *)f->f_fsdata; return (zfs_dnode_stat(spa, &fp->f_dnode, sb)); } static int zfs_readdir(struct open_file *f, struct dirent *d) { const spa_t *spa = ((struct zfsmount *)f->f_devdata)->spa; struct file *fp = (struct file *)f->f_fsdata; mzap_ent_phys_t mze; struct stat sb; size_t bsize = fp->f_dnode.dn_datablkszsec << SPA_MINBLOCKSHIFT; int rc; rc = zfs_stat(f, &sb); if (rc) return (rc); if (!S_ISDIR(sb.st_mode)) return (ENOTDIR); /* * If this is the first read, get the zap type. */ if (fp->f_seekp == 0) { rc = dnode_read(spa, &fp->f_dnode, 0, &fp->f_zap_type, sizeof(fp->f_zap_type)); if (rc) return (rc); if (fp->f_zap_type == ZBT_MICRO) { fp->f_seekp = offsetof(mzap_phys_t, mz_chunk); } else { rc = dnode_read(spa, &fp->f_dnode, offsetof(zap_phys_t, zap_num_leafs), &fp->f_num_leafs, sizeof(fp->f_num_leafs)); if (rc) return (rc); fp->f_seekp = bsize; fp->f_zap_leaf = (zap_leaf_phys_t *)malloc(bsize); rc = dnode_read(spa, &fp->f_dnode, fp->f_seekp, fp->f_zap_leaf, bsize); if (rc) return (rc); } } if (fp->f_zap_type == ZBT_MICRO) { mzap_next: if (fp->f_seekp >= bsize) return (ENOENT); rc = dnode_read(spa, &fp->f_dnode, fp->f_seekp, &mze, sizeof(mze)); if (rc) return (rc); fp->f_seekp += sizeof(mze); if (!mze.mze_name[0]) goto mzap_next; d->d_fileno = ZFS_DIRENT_OBJ(mze.mze_value); d->d_type = ZFS_DIRENT_TYPE(mze.mze_value); strcpy(d->d_name, mze.mze_name); d->d_namlen = strlen(d->d_name); return (0); } else { zap_leaf_t zl; zap_leaf_chunk_t *zc, *nc; int chunk; size_t namelen; char *p; uint64_t value; /* * Initialise this so we can use the ZAP size * calculating macros. */ zl.l_bs = ilog2(bsize); zl.l_phys = fp->f_zap_leaf; /* * Figure out which chunk we are currently looking at * and consider seeking to the next leaf. We use the * low bits of f_seekp as a simple chunk index. */ fzap_next: chunk = fp->f_seekp & (bsize - 1); if (chunk == ZAP_LEAF_NUMCHUNKS(&zl)) { fp->f_seekp = rounddown2(fp->f_seekp, bsize) + bsize; chunk = 0; /* * Check for EOF and read the new leaf. 
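 * For example, with a 16 KiB leaf (bsize = 16384, l_bs = 14) the
 * low 14 bits of f_seekp hold the chunk index within the current
 * leaf and the bits above them give the leaf's byte offset; since
 * ZAP_LEAF_NUMCHUNKS(&zl) is much smaller than bsize, the chunk
 * index can never overflow into the offset bits.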
*/ if (fp->f_seekp >= bsize * fp->f_num_leafs) return (ENOENT); rc = dnode_read(spa, &fp->f_dnode, fp->f_seekp, fp->f_zap_leaf, bsize); if (rc) return (rc); } zc = &ZAP_LEAF_CHUNK(&zl, chunk); fp->f_seekp++; if (zc->l_entry.le_type != ZAP_CHUNK_ENTRY) goto fzap_next; namelen = zc->l_entry.le_name_numints; if (namelen > sizeof(d->d_name)) namelen = sizeof(d->d_name); /* * Paste the name back together. */ nc = &ZAP_LEAF_CHUNK(&zl, zc->l_entry.le_name_chunk); p = d->d_name; while (namelen > 0) { int len; len = namelen; if (len > ZAP_LEAF_ARRAY_BYTES) len = ZAP_LEAF_ARRAY_BYTES; memcpy(p, nc->l_array.la_array, len); p += len; namelen -= len; nc = &ZAP_LEAF_CHUNK(&zl, nc->l_array.la_next); } d->d_name[sizeof(d->d_name) - 1] = 0; /* * Assume the first eight bytes of the value are * a uint64_t. */ value = fzap_leaf_value(&zl, zc); d->d_fileno = ZFS_DIRENT_OBJ(value); d->d_type = ZFS_DIRENT_TYPE(value); d->d_namlen = strlen(d->d_name); return (0); } } static int vdev_read(vdev_t *vdev, void *priv, off_t offset, void *buf, size_t bytes) { int fd, ret; - size_t res, size, remainder, rb_size, blksz; - unsigned secsz; - off_t off; - char *bouncebuf, *rb_buf; + size_t res, head, tail, total_size, full_sec_size; + unsigned secsz, do_tail_read; + off_t start_sec; + char *outbuf, *bouncebuf; fd = (uintptr_t) priv; + outbuf = (char *) buf; bouncebuf = NULL; ret = ioctl(fd, DIOCGSECTORSIZE, &secsz); if (ret != 0) return (ret); - off = offset / secsz; - remainder = offset % secsz; - if (lseek(fd, off * secsz, SEEK_SET) == -1) - return (errno);
+ /*
+  * Handling reads of arbitrary offset and size - multi-sector case
+  * and single-sector case.
+  *
+  *                       Multi-sector Case
+  *               (do_tail_read = true if tail > 0)
+  *
+  *   |<----------------------total_size--------------------->|
+  *   |                                                       |
+  *   |<--head-->|<--------------bytes------------>|<--tail-->|
+  *   |          |                                 |          |
+  *   |          |       |<~full_sec_size~>|       |          |
+  *   +------------------+                 +------------------+
+  *   |          |0101010|     . . .       |0101011|          |
+  *   +------------------+                 +------------------+
+  *         start_sec                         start_sec + n
+  *
+  *
+  *                      Single-sector Case
+  *                    (do_tail_read = false)
+  *
+  *   |<------total_size = secsz----->|
+  *   |                               |
+  *   |<-head->|<---bytes--->|<-tail->|
+  *   +-------------------------------+
+  *   |        |0101010101010|        |
+  *   +-------------------------------+
+  *             start_sec
+  */
+ start_sec = offset / secsz; + head = offset % secsz; + total_size = roundup2(head + bytes, secsz); + tail = total_size - (head + bytes); + do_tail_read = ((tail > 0) && (head + bytes > secsz)); + full_sec_size = total_size; + if (head > 0) + full_sec_size -= secsz; + if (do_tail_read) + full_sec_size -= secsz; - rb_buf = buf; - rb_size = bytes; - size = roundup2(bytes + remainder, secsz); - blksz = size; - if (remainder != 0 || size != bytes) { + /* Return of partial sector data requires a bounce buffer.
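 * A worked example, assuming secsz = 512, offset = 700 and
 * bytes = 2000: start_sec = 1, head = 188, total_size =
 * roundup2(2188, 512) = 2560 and tail = 372, so do_tail_read is
 * true and full_sec_size = 2560 - 512 - 512 = 1536.  The head
 * read then supplies 512 - 188 = 324 bytes, the middle read 1536
 * bytes, and the tail read 512 - 372 = 140 bytes; 324 + 1536 +
 * 140 = 2000 = bytes, as required.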
*/ + if ((head > 0) || do_tail_read) { bouncebuf = zfs_alloc(secsz); if (bouncebuf == NULL) { printf("vdev_read: out of memory\n"); return (ENOMEM); } - rb_buf = bouncebuf; - blksz = rb_size - remainder; } - while (bytes > 0) { - res = read(fd, rb_buf, rb_size); - if (res != rb_size) { + if (lseek(fd, start_sec * secsz, SEEK_SET) == -1) + return (errno); + + /* Partial data return from first sector */ + if (head > 0) { + res = read(fd, bouncebuf, secsz); + if (res != secsz) { ret = EIO; goto error; } - if (bytes < blksz) - blksz = bytes; - if (bouncebuf != NULL) - memcpy(buf, rb_buf + remainder, blksz); - buf = (void *)((uintptr_t)buf + blksz); - bytes -= blksz; - remainder = 0; - blksz = rb_size; + memcpy(outbuf, bouncebuf + head, min(secsz - head, bytes)); + outbuf += min(secsz - head, bytes); + } + + /* Full data return from read sectors */ + if (full_sec_size > 0) { + res = read(fd, outbuf, full_sec_size); + if (res != full_sec_size) { + ret = EIO; + goto error; + } + outbuf += full_sec_size; + } + + /* Partial data return from last sector */ + if (do_tail_read) { + res = read(fd, bouncebuf, secsz); + if (res != secsz) { + ret = EIO; + goto error; + } + memcpy(outbuf, bouncebuf, secsz - tail); } ret = 0; error: if (bouncebuf != NULL) zfs_free(bouncebuf, secsz); return (ret); } static int zfs_dev_init(void) { spa_t *spa; spa_t *next; spa_t *prev; zfs_init(); if (archsw.arch_zfs_probe == NULL) return (ENXIO); archsw.arch_zfs_probe(); prev = NULL; spa = STAILQ_FIRST(&zfs_pools); while (spa != NULL) { next = STAILQ_NEXT(spa, spa_link); if (zfs_spa_init(spa)) { if (prev == NULL) STAILQ_REMOVE_HEAD(&zfs_pools, spa_link); else STAILQ_REMOVE_AFTER(&zfs_pools, prev, spa_link); } else prev = spa; spa = next; } return (0); } struct zfs_probe_args { int fd; const char *devname; uint64_t *pool_guid; u_int secsz; }; static int zfs_diskread(void *arg, void *buf, size_t blocks, uint64_t offset) { struct zfs_probe_args *ppa; ppa = (struct zfs_probe_args *)arg; return (vdev_read(NULL, (void *)(uintptr_t)ppa->fd, offset * ppa->secsz, buf, blocks * ppa->secsz)); } static int zfs_probe(int fd, uint64_t *pool_guid) { spa_t *spa; int ret; spa = NULL; ret = vdev_probe(vdev_read, (void *)(uintptr_t)fd, &spa); if (ret == 0 && pool_guid != NULL) *pool_guid = spa->spa_guid; return (ret); } static int zfs_probe_partition(void *arg, const char *partname, const struct ptable_entry *part) { struct zfs_probe_args *ppa, pa; struct ptable *table; char devname[32]; int ret; /* Probe only freebsd-zfs and freebsd partitions */ if (part->type != PART_FREEBSD && part->type != PART_FREEBSD_ZFS) return (0); ppa = (struct zfs_probe_args *)arg; strncpy(devname, ppa->devname, strlen(ppa->devname) - 1); devname[strlen(ppa->devname) - 1] = '\0'; sprintf(devname, "%s%s:", devname, partname); pa.fd = open(devname, O_RDONLY); if (pa.fd == -1) return (0); ret = zfs_probe(pa.fd, ppa->pool_guid); if (ret == 0) return (0); /* Do we have BSD label here? 
*/ if (part->type == PART_FREEBSD) { pa.devname = devname; pa.pool_guid = ppa->pool_guid; pa.secsz = ppa->secsz; table = ptable_open(&pa, part->end - part->start + 1, ppa->secsz, zfs_diskread); if (table != NULL) { ptable_iterate(table, &pa, zfs_probe_partition); ptable_close(table); } } close(pa.fd); return (0); } int zfs_probe_dev(const char *devname, uint64_t *pool_guid) { struct disk_devdesc *dev; struct ptable *table; struct zfs_probe_args pa; uint64_t mediasz; int ret; if (pool_guid) *pool_guid = 0; pa.fd = open(devname, O_RDONLY); if (pa.fd == -1) return (ENXIO); /* * We will not probe the whole disk, we can not boot from such * disks and some systems will misreport the disk sizes and will * hang while accessing the disk. */ if (archsw.arch_getdev((void **)&dev, devname, NULL) == 0) { int partition = dev->d_partition; int slice = dev->d_slice; free(dev); if (partition != -1 && slice != -1) { ret = zfs_probe(pa.fd, pool_guid); if (ret == 0) return (0); } } /* Probe each partition */ ret = ioctl(pa.fd, DIOCGMEDIASIZE, &mediasz); if (ret == 0) ret = ioctl(pa.fd, DIOCGSECTORSIZE, &pa.secsz); if (ret == 0) { pa.devname = devname; pa.pool_guid = pool_guid; table = ptable_open(&pa, mediasz / pa.secsz, pa.secsz, zfs_diskread); if (table != NULL) { ptable_iterate(table, &pa, zfs_probe_partition); ptable_close(table); } } close(pa.fd); if (pool_guid && *pool_guid == 0) ret = ENXIO; return (ret); } /* * Print information about ZFS pools */ static int zfs_dev_print(int verbose) { spa_t *spa; char line[80]; int ret = 0; if (STAILQ_EMPTY(&zfs_pools)) return (0); printf("%s devices:", zfs_dev.dv_name); if ((ret = pager_output("\n")) != 0) return (ret); if (verbose) { return (spa_all_status()); } STAILQ_FOREACH(spa, &zfs_pools, spa_link) { snprintf(line, sizeof(line), " zfs:%s\n", spa->spa_name); ret = pager_output(line); if (ret != 0) break; } return (ret); } /* * Attempt to open the pool described by (dev) for use by (f). */ static int zfs_dev_open(struct open_file *f, ...) 
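/*
 * On success the zfs_devdesc passed in through the varargs is
 * consumed (freed) and f->f_devdata takes ownership of the newly
 * allocated zfsmount, which zfs_dev_close() frees later.
 */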
{ va_list args; struct zfs_devdesc *dev; struct zfsmount *mount; spa_t *spa; int rv; va_start(args, f); dev = va_arg(args, struct zfs_devdesc *); va_end(args); if (dev->pool_guid == 0) spa = STAILQ_FIRST(&zfs_pools); else spa = spa_find_by_guid(dev->pool_guid); if (!spa) return (ENXIO); mount = malloc(sizeof(*mount)); rv = zfs_mount(spa, dev->root_guid, mount); if (rv != 0) { free(mount); return (rv); } if (mount->objset.os_type != DMU_OST_ZFS) { printf("Unexpected object set type %ju\n", (uintmax_t)mount->objset.os_type); free(mount); return (EIO); } f->f_devdata = mount; free(dev); return (0); } static int zfs_dev_close(struct open_file *f) { free(f->f_devdata); f->f_devdata = NULL; return (0); } static int zfs_dev_strategy(void *devdata, int rw, daddr_t dblk, size_t size, char *buf, size_t *rsize) { return (ENOSYS); } struct devsw zfs_dev = { .dv_name = "zfs", .dv_type = DEVT_ZFS, .dv_init = zfs_dev_init, .dv_strategy = zfs_dev_strategy, .dv_open = zfs_dev_open, .dv_close = zfs_dev_close, .dv_ioctl = noioctl, .dv_print = zfs_dev_print, .dv_cleanup = NULL }; int zfs_parsedev(struct zfs_devdesc *dev, const char *devspec, const char **path) { static char rootname[ZFS_MAXNAMELEN]; static char poolname[ZFS_MAXNAMELEN]; spa_t *spa; const char *end; const char *np; const char *sep; int rv; np = devspec; if (*np != ':') return (EINVAL); np++; end = strrchr(np, ':'); if (end == NULL) return (EINVAL); sep = strchr(np, '/'); if (sep == NULL || sep >= end) sep = end; memcpy(poolname, np, sep - np); poolname[sep - np] = '\0'; if (sep < end) { sep++; memcpy(rootname, sep, end - sep); rootname[end - sep] = '\0'; } else rootname[0] = '\0'; spa = spa_find_by_name(poolname); if (!spa) return (ENXIO); dev->pool_guid = spa->spa_guid; rv = zfs_lookup_dataset(spa, rootname, &dev->root_guid); if (rv != 0) return (rv); if (path != NULL) *path = (*end == '\0') ? 
end : end + 1; dev->dd.d_dev = &zfs_dev; return (0); } char * zfs_fmtdev(void *vdev) { static char rootname[ZFS_MAXNAMELEN]; static char buf[2 * ZFS_MAXNAMELEN + 8]; struct zfs_devdesc *dev = (struct zfs_devdesc *)vdev; spa_t *spa; buf[0] = '\0'; if (dev->dd.d_dev->dv_type != DEVT_ZFS) return (buf); if (dev->pool_guid == 0) { spa = STAILQ_FIRST(&zfs_pools); dev->pool_guid = spa->spa_guid; } else spa = spa_find_by_guid(dev->pool_guid); if (spa == NULL) { printf("ZFS: can't find pool by guid\n"); return (buf); } if (dev->root_guid == 0 && zfs_get_root(spa, &dev->root_guid)) { printf("ZFS: can't find root filesystem\n"); return (buf); } if (zfs_rlookup(spa, dev->root_guid, rootname)) { printf("ZFS: can't find filesystem by guid\n"); return (buf); } if (rootname[0] == '\0') sprintf(buf, "%s:%s:", dev->dd.d_dev->dv_name, spa->spa_name); else sprintf(buf, "%s:%s/%s:", dev->dd.d_dev->dv_name, spa->spa_name, rootname); return (buf); } int zfs_list(const char *name) { static char poolname[ZFS_MAXNAMELEN]; uint64_t objid; spa_t *spa; const char *dsname; int len; int rv; len = strlen(name); dsname = strchr(name, '/'); if (dsname != NULL) { len = dsname - name; dsname++; } else dsname = ""; memcpy(poolname, name, len); poolname[len] = '\0'; spa = spa_find_by_name(poolname); if (!spa) return (ENXIO); rv = zfs_lookup_dataset(spa, dsname, &objid); if (rv != 0) return (rv); return (zfs_list_dataset(spa, objid)); } void init_zfs_bootenv(const char *currdev_in) { char *beroot, *currdev; int currdev_len; currdev = NULL; currdev_len = strlen(currdev_in); if (currdev_len == 0) return; if (strncmp(currdev_in, "zfs:", 4) != 0) return; currdev = strdup(currdev_in); if (currdev == NULL) return; /* Remove the trailing : */ currdev[currdev_len - 1] = '\0'; setenv("zfs_be_active", currdev, 1); setenv("zfs_be_currpage", "1", 1); /* Remove the last element (current bootenv) */ beroot = strrchr(currdev, '/'); if (beroot != NULL) beroot[0] = '\0'; beroot = strchr(currdev, ':') + 1; setenv("zfs_be_root", beroot, 1); zfs_bootenv_initial(beroot); free(currdev); } static void zfs_bootenv_initial(const char *name) { char poolname[ZFS_MAXNAMELEN], *dsname; char envname[32], envval[256]; uint64_t objid; spa_t *spa; int bootenvs_idx, len, rv; SLIST_INIT(&zfs_be_head); zfs_env_count = 0; len = strlen(name); dsname = strchr(name, '/'); if (dsname != NULL) { len = dsname - name; dsname++; } else dsname = ""; strlcpy(poolname, name, len + 1); spa = spa_find_by_name(poolname); if (spa == NULL) return; rv = zfs_lookup_dataset(spa, dsname, &objid); if (rv != 0) return; rv = zfs_callback_dataset(spa, objid, zfs_belist_add); bootenvs_idx = 0; /* Populate the initial environment variables */ SLIST_FOREACH_SAFE(zfs_be, &zfs_be_head, entries, zfs_be_tmp) { /* Enumerate all bootenvs for general usage */ snprintf(envname, sizeof(envname), "bootenvs[%d]", bootenvs_idx); snprintf(envval, sizeof(envval), "zfs:%s/%s", name, zfs_be->name); rv = setenv(envname, envval, 1); if (rv != 0) break; bootenvs_idx++; } snprintf(envval, sizeof(envval), "%d", bootenvs_idx); setenv("bootenvs_count", envval, 1); /* Clean up the SLIST of ZFS BEs */ while (!SLIST_EMPTY(&zfs_be_head)) { zfs_be = SLIST_FIRST(&zfs_be_head); SLIST_REMOVE_HEAD(&zfs_be_head, entries); free(zfs_be); } return; } int zfs_bootenv(const char *name) { static char poolname[ZFS_MAXNAMELEN], *dsname, *root; char becount[4]; uint64_t objid; spa_t *spa; int len, rv, pages, perpage, currpage; if (name == NULL) return (EINVAL); if ((root = getenv("zfs_be_root")) == NULL) return (EINVAL); if 
(strcmp(name, root) != 0) { if (setenv("zfs_be_root", name, 1) != 0) return (ENOMEM); } SLIST_INIT(&zfs_be_head); zfs_env_count = 0; len = strlen(name); dsname = strchr(name, '/'); if (dsname != NULL) { len = dsname - name; dsname++; } else dsname = ""; memcpy(poolname, name, len); poolname[len] = '\0'; spa = spa_find_by_name(poolname); if (!spa) return (ENXIO); rv = zfs_lookup_dataset(spa, dsname, &objid); if (rv != 0) return (rv); rv = zfs_callback_dataset(spa, objid, zfs_belist_add); /* Calculate and store the number of pages of BEs */ perpage = (ZFS_BE_LAST - ZFS_BE_FIRST + 1); pages = (zfs_env_count / perpage) + ((zfs_env_count % perpage) > 0 ? 1 : 0); snprintf(becount, 4, "%d", pages); if (setenv("zfs_be_pages", becount, 1) != 0) return (ENOMEM); /* Roll over the page counter if it has exceeded the maximum */ currpage = strtol(getenv("zfs_be_currpage"), NULL, 10); if (currpage > pages) { if (setenv("zfs_be_currpage", "1", 1) != 0) return (ENOMEM); } /* Populate the menu environment variables */ zfs_set_env(); /* Clean up the SLIST of ZFS BEs */ while (!SLIST_EMPTY(&zfs_be_head)) { zfs_be = SLIST_FIRST(&zfs_be_head); SLIST_REMOVE_HEAD(&zfs_be_head, entries); free(zfs_be); } return (rv); } int zfs_belist_add(const char *name, uint64_t value __unused) { /* Skip special datasets that start with a $ character */ if (strncmp(name, "$", 1) == 0) { return (0); } /* Add the boot environment to the head of the SLIST */ zfs_be = malloc(sizeof(struct zfs_be_entry)); if (zfs_be == NULL) { return (ENOMEM); } zfs_be->name = name; SLIST_INSERT_HEAD(&zfs_be_head, zfs_be, entries); zfs_env_count++; return (0); } int zfs_set_env(void) { char envname[32], envval[256]; char *beroot, *pagenum; int rv, page, ctr; beroot = getenv("zfs_be_root"); if (beroot == NULL) { return (1); } pagenum = getenv("zfs_be_currpage"); if (pagenum != NULL) { page = strtol(pagenum, NULL, 10); } else { page = 1; } ctr = 1; rv = 0; zfs_env_index = ZFS_BE_FIRST; SLIST_FOREACH_SAFE(zfs_be, &zfs_be_head, entries, zfs_be_tmp) { /* Skip to the requested page number */ if (ctr <= ((ZFS_BE_LAST - ZFS_BE_FIRST + 1) * (page - 1))) { ctr++; continue; } snprintf(envname, sizeof(envname), "bootenvmenu_caption[%d]", zfs_env_index); snprintf(envval, sizeof(envval), "%s", zfs_be->name); rv = setenv(envname, envval, 1); if (rv != 0) { break; } snprintf(envname, sizeof(envname), "bootenvansi_caption[%d]", zfs_env_index); rv = setenv(envname, envval, 1); if (rv != 0){ break; } snprintf(envname, sizeof(envname), "bootenvmenu_command[%d]", zfs_env_index); rv = setenv(envname, "set_bootenv", 1); if (rv != 0){ break; } snprintf(envname, sizeof(envname), "bootenv_root[%d]", zfs_env_index); snprintf(envval, sizeof(envval), "zfs:%s/%s", beroot, zfs_be->name); rv = setenv(envname, envval, 1); if (rv != 0){ break; } zfs_env_index++; if (zfs_env_index > ZFS_BE_LAST) { break; } } for (; zfs_env_index <= ZFS_BE_LAST; zfs_env_index++) { snprintf(envname, sizeof(envname), "bootenvmenu_caption[%d]", zfs_env_index); (void)unsetenv(envname); snprintf(envname, sizeof(envname), "bootenvansi_caption[%d]", zfs_env_index); (void)unsetenv(envname); snprintf(envname, sizeof(envname), "bootenvmenu_command[%d]", zfs_env_index); (void)unsetenv(envname); snprintf(envname, sizeof(envname), "bootenv_root[%d]", zfs_env_index); (void)unsetenv(envname); } return (rv); } Index: projects/import-googletest-1.8.1/stand/lua/password.lua =================================================================== --- projects/import-googletest-1.8.1/stand/lua/password.lua (revision 344270) +++ 
projects/import-googletest-1.8.1/stand/lua/password.lua (revision 344271) @@ -1,133 +1,138 @@ -- -- SPDX-License-Identifier: BSD-2-Clause-FreeBSD -- -- Copyright (c) 2015 Pedro Souza -- Copyright (C) 2018 Kyle Evans -- All rights reserved. -- -- Redistribution and use in source and binary forms, with or without -- modification, are permitted provided that the following conditions -- are met: -- 1. Redistributions of source code must retain the above copyright -- notice, this list of conditions and the following disclaimer. -- 2. Redistributions in binary form must reproduce the above copyright -- notice, this list of conditions and the following disclaimer in the -- documentation and/or other materials provided with the distribution. -- -- THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND -- ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE -- IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE -- ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE -- FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL -- DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS -- OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) -- HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT -- LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY -- OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF -- SUCH DAMAGE. -- -- $FreeBSD$ -- local core = require("core") local screen = require("screen") local password = {} local INCORRECT_PASSWORD = "loader: incorrect password" -- Asterisks as a password mask local show_password_mask = false local twiddle_chars = {"/", "-", "\\", "|"} +local screen_setup = false -- Module exports function password.read(prompt_length) local str = "" local twiddle_pos = 1 local function draw_twiddle() printc(twiddle_chars[twiddle_pos]) -- Reset cursor to just after the password prompt screen.setcursor(prompt_length + 2, screen.default_y) twiddle_pos = (twiddle_pos % #twiddle_chars) + 1 end -- Space between the prompt and any on-screen feedback printc(" ") while true do local ch = io.getchar() if ch == core.KEY_ENTER then break end if ch == core.KEY_BACKSPACE or ch == core.KEY_DELETE then if #str > 0 then if show_password_mask then printc("\008 \008") else draw_twiddle() end str = str:sub(1, #str - 1) end else if show_password_mask then printc("*") else draw_twiddle() end str = str .. string.char(ch) end end return str end function password.check() - screen.clear() - screen.defcursor() -- pwd is optionally supplied if we want to check it local function doPrompt(prompt, pwd) local attempts = 1 local function clear_incorrect_text_prompt() printc("\r" .. string.rep(" ", #INCORRECT_PASSWORD)) + end + + if not screen_setup then + screen.clear() + screen.defcursor() + screen_setup = true end while true do if attempts > 1 then clear_incorrect_text_prompt() end screen.defcursor() printc(prompt) local read_pwd = password.read(#prompt) if pwd == nil or pwd == read_pwd then -- Clear the prompt + twiddle printc(string.rep(" ", #prompt + 5)) return read_pwd end printc("\n" .. 
INCORRECT_PASSWORD) attempts = attempts + 1 loader.delay(3*1000*1000) end end local function compare(prompt, pwd) if pwd == nil then return end doPrompt(prompt, pwd) end local boot_pwd = loader.getenv("bootlock_password") compare("Bootlock password:", boot_pwd) local geli_prompt = loader.getenv("geom_eli_passphrase_prompt") if geli_prompt ~= nil and geli_prompt:lower() == "yes" then local passphrase = doPrompt("GELI Passphrase:") loader.setenv("kern.geom.eli.passphrase", passphrase) end local pwd = loader.getenv("password") if pwd ~= nil then core.autoboot() end compare("Loader password:", pwd) end return password Index: projects/import-googletest-1.8.1/stand/powerpc/uboot/Makefile =================================================================== --- projects/import-googletest-1.8.1/stand/powerpc/uboot/Makefile (revision 344270) +++ projects/import-googletest-1.8.1/stand/powerpc/uboot/Makefile (revision 344271) @@ -1,33 +1,34 @@ # $FreeBSD$ LOADER_UFS_SUPPORT?= yes LOADER_CD9660_SUPPORT?= no LOADER_EXT2FS_SUPPORT?= no LOADER_NET_SUPPORT?= yes LOADER_NFS_SUPPORT?= yes LOADER_TFTP_SUPPORT?= no LOADER_GZIP_SUPPORT?= no LOADER_BZIP2_SUPPORT?= no .include +BINDIR= /boot/uboot PROG= ubldr NEWVERSWHAT= "U-Boot loader" ${MACHINE_ARCH} INSTALLFLAGS= -b # Architecture-specific loader code SRCS= start.S conf.c vers.c ppc64_elf_freebsd.c SRCS+= ucmpdi2.c # Always add MI sources .include "${BOOTSRC}/loader.mk" .PATH: ${SYSDIR}/libkern LDFLAGS= -nostdlib -static -T ${.CURDIR}/ldscript.powerpc .include "${BOOTSRC}/uboot.mk" DPADD= ${LDR_INTERP} ${LIBUBOOT} ${LIBFDT} ${LIBUBOOT_FDT} ${LIBSA} LDADD= ${LDR_INTERP} ${LIBUBOOT} ${LIBFDT} ${LIBUBOOT_FDT} ${LIBSA} .include Index: projects/import-googletest-1.8.1/stand/uboot/common/main.c =================================================================== --- projects/import-googletest-1.8.1/stand/uboot/common/main.c (revision 344270) +++ projects/import-googletest-1.8.1/stand/uboot/common/main.c (revision 344271) @@ -1,689 +1,711 @@ /*- * Copyright (c) 2000 Benno Rice * Copyright (c) 2000 Stephane Potvin * Copyright (c) 2007-2008 Semihalf, Rafal Jaworowski * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHORS AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. 
*/ #include __FBSDID("$FreeBSD$"); #include #include #include "api_public.h" #include "bootstrap.h" #include "glue.h" #include "libuboot.h" #ifndef nitems #define nitems(x) (sizeof((x)) / sizeof((x)[0])) #endif #ifndef HEAP_SIZE #define HEAP_SIZE (2 * 1024 * 1024) #endif struct uboot_devdesc currdev; struct arch_switch archsw; /* MI/MD interface boundary */ int devs_no; uintptr_t uboot_heap_start; uintptr_t uboot_heap_end; struct device_type { const char *name; int type; } device_types[] = { { "disk", DEV_TYP_STOR }, { "ide", DEV_TYP_STOR | DT_STOR_IDE }, { "mmc", DEV_TYP_STOR | DT_STOR_MMC }, { "sata", DEV_TYP_STOR | DT_STOR_SATA }, { "scsi", DEV_TYP_STOR | DT_STOR_SCSI }, { "usb", DEV_TYP_STOR | DT_STOR_USB }, { "net", DEV_TYP_NET } }; extern char end[]; extern unsigned char _etext[]; extern unsigned char _edata[]; extern unsigned char __bss_start[]; extern unsigned char __sbss_start[]; extern unsigned char __sbss_end[]; extern unsigned char _end[]; #ifdef LOADER_FDT_SUPPORT extern int command_fdt_internal(int argc, char *argv[]); #endif static void dump_sig(struct api_signature *sig) { #ifdef DEBUG printf("signature:\n"); printf(" version\t= %d\n", sig->version); printf(" checksum\t= 0x%08x\n", sig->checksum); printf(" sc entry\t= 0x%08x\n", sig->syscall); #endif } static void dump_addr_info(void) { #ifdef DEBUG printf("\naddresses info:\n"); printf(" _etext (sdata) = 0x%08x\n", (uint32_t)_etext); printf(" _edata = 0x%08x\n", (uint32_t)_edata); printf(" __sbss_start = 0x%08x\n", (uint32_t)__sbss_start); printf(" __sbss_end = 0x%08x\n", (uint32_t)__sbss_end); printf(" __sbss_start = 0x%08x\n", (uint32_t)__bss_start); printf(" _end = 0x%08x\n", (uint32_t)_end); printf(" syscall entry = 0x%08x\n", (uint32_t)syscall_ptr); #endif } static uint64_t memsize(struct sys_info *si, int flags) { uint64_t size; int i; size = 0; for (i = 0; i < si->mr_no; i++) if (si->mr[i].flags == flags && si->mr[i].size) size += (si->mr[i].size); return (size); } static void meminfo(void) { uint64_t size; struct sys_info *si; int t[3] = { MR_ATTR_DRAM, MR_ATTR_FLASH, MR_ATTR_SRAM }; int i; if ((si = ub_get_sys_info()) == NULL) panic("could not retrieve system info"); for (i = 0; i < 3; i++) { size = memsize(si, t[i]); if (size > 0) printf("%s: %juMB\n", ub_mem_type(t[i]), (uintmax_t)(size / 1024 / 1024)); } } static const char * get_device_type(const char *devstr, int *devtype) { int i; int namelen; struct device_type *dt; if (devstr) { for (i = 0; i < nitems(device_types); i++) { dt = &device_types[i]; namelen = strlen(dt->name); if (strncmp(dt->name, devstr, namelen) == 0) { *devtype = dt->type; return (devstr + namelen); } } printf("Unknown device type '%s'\n", devstr); } - *devtype = -1; + *devtype = DEV_TYP_NONE; return (NULL); } static const char * device_typename(int type) { int i; for (i = 0; i < nitems(device_types); i++) if (device_types[i].type == type) return (device_types[i].name); return (""); } /* * Parse a device string into type, unit, slice and partition numbers. A * returned value of -1 for type indicates a search should be done for the * first loadable device, otherwise a returned value of -1 for unit * indicates a search should be done for the first loadable device of the * given type. * * The returned values for slice and partition are interpreted by * disk_open(). 
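 * For example, loaderdev='mmc 0:2.1' names MMC unit 0, slice 2,
 * partition 1, while loaderdev='disk0s2a' expresses a disk in the
 * standard loader(8) syntax handled by disk_parsedev(); both forms
 * are described below.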
*
+ * The device string can be a standard loader(8) disk specifier:
+ *
+ * disk<unit>s<slice>                        disk0s1
+ * disk<unit>s<slice><partition>             disk1s2a
+ * disk<unit>p<partition>                    disk0p4
+ *
+ * or one of the following formats:
+ *
 * Valid device strings:                      For device types:
 *
 * <type_name>                                DEV_TYP_STOR, DEV_TYP_NET
 * <type_name><unit>                          DEV_TYP_STOR, DEV_TYP_NET
 * <type_name><unit>:                         DEV_TYP_STOR, DEV_TYP_NET
 * <type_name><unit>:<slice>                  DEV_TYP_STOR
 * <type_name><unit>:<slice>.                 DEV_TYP_STOR
 * <type_name><unit>:<slice>.<partition>      DEV_TYP_STOR
 *
 * For valid type names, see the device_types array, above.
 *
 * Slice numbers are 1-based. 0 is a wildcard.
 */
static void get_load_device(int *type, int *unit, int *slice, int *partition) { + struct disk_devdesc dev; char *devstr; const char *p; char *endp; - *type = -1; + *type = DEV_TYP_NONE; *unit = -1; *slice = 0; *partition = -1; devstr = ub_env_get("loaderdev"); if (devstr == NULL) { printf("U-Boot env: loaderdev not set, will probe all devices.\n"); return; } printf("U-Boot env: loaderdev='%s'\n", devstr); p = get_device_type(devstr, type); + /* + * If type is DEV_TYP_STOR we have a disk-like device. If we can parse + * the remainder of the string as a standard unit+slice+partition (e.g., + * 0s2a or 1p12), return those results. Otherwise we'll fall through to + * the code that parses the legacy format. + */ + if ((*type & DEV_TYP_STOR) && disk_parsedev(&dev, p, NULL) == 0) { + *unit = dev.dd.d_unit; + *slice = dev.d_slice; + *partition = dev.d_partition; + return; + } + /* Ignore optional spaces after the device name. */ while (*p == ' ') p++; /* Unknown device name, or a known name without unit number. */ - if ((*type == -1) || (*p == '\0')) { + if ((*type == DEV_TYP_NONE) || (*p == '\0')) { return; } /* Malformed unit number. */ if (!isdigit(*p)) { - *type = -1; + *type = DEV_TYP_NONE; return; } /* Guaranteed to extract a number from the string, as *p is a digit. */ *unit = strtol(p, &endp, 10); p = endp; /* Known device name with unit number and nothing else. */ if (*p == '\0') { return; } /* Device string is malformed beyond unit number. */ if (*p != ':') { - *type = -1; + *type = DEV_TYP_NONE; *unit = -1; return; } p++; /* No slice and partition specification. */ if ('\0' == *p ) return; /* Only DEV_TYP_STOR devices can have a slice specification. */ if (!(*type & DEV_TYP_STOR)) { - *type = -1; + *type = DEV_TYP_NONE; *unit = -1; return; } *slice = strtoul(p, &endp, 10); /* Malformed slice number. */ if (p == endp) { - *type = -1; + *type = DEV_TYP_NONE; *unit = -1; *slice = 0; return; } p = endp; /* No partition specification. */ if (*p == '\0') return; /* Device string is malformed beyond slice number. */ if (*p != '.') { - *type = -1; + *type = DEV_TYP_NONE; *unit = -1; *slice = 0; return; } p++; /* No partition specification. */ if (*p == '\0') return; *partition = strtol(p, &endp, 10); p = endp; /* Full, valid device string. */ if (*endp == '\0') return; /* Junk beyond partition number.
*/ - *type = -1; + *type = DEV_TYP_NONE; *unit = -1; *slice = 0; *partition = -1; } static void print_disk_probe_info() { char slice[32]; char partition[32]; - if (currdev.d_disk.slice > 0) - sprintf(slice, "%d", currdev.d_disk.slice); + if (currdev.d_disk.d_slice > 0) + sprintf(slice, "%d", currdev.d_disk.d_slice); else strcpy(slice, ""); - if (currdev.d_disk.partition >= 0) - sprintf(partition, "%d", currdev.d_disk.partition); + if (currdev.d_disk.d_partition >= 0) + sprintf(partition, "%d", currdev.d_disk.d_partition); else strcpy(partition, ""); printf(" Checking unit=%d slice=%s partition=%s...", currdev.dd.d_unit, slice, partition); } static int probe_disks(int devidx, int load_type, int load_unit, int load_slice, int load_partition) { int open_result, unit; struct open_file f; - currdev.d_disk.slice = load_slice; - currdev.d_disk.partition = load_partition; + currdev.d_disk.d_slice = load_slice; + currdev.d_disk.d_partition = load_partition; f.f_devdata = &currdev; open_result = -1; if (load_type == -1) { printf(" Probing all disk devices...\n"); /* Try each disk in succession until one works. */ for (currdev.dd.d_unit = 0; currdev.dd.d_unit < UB_MAX_DEV; currdev.dd.d_unit++) { print_disk_probe_info(); open_result = devsw[devidx]->dv_open(&f, &currdev); if (open_result == 0) { printf(" good.\n"); return (0); } printf("\n"); } return (-1); } if (load_unit == -1) { printf(" Probing all %s devices...\n", device_typename(load_type)); /* Try each disk of given type in succession until one works. */ for (unit = 0; unit < UB_MAX_DEV; unit++) { currdev.dd.d_unit = uboot_diskgetunit(load_type, unit); if (currdev.dd.d_unit == -1) break; print_disk_probe_info(); open_result = devsw[devidx]->dv_open(&f, &currdev); if (open_result == 0) { printf(" good.\n"); return (0); } printf("\n"); } return (-1); } if ((currdev.dd.d_unit = uboot_diskgetunit(load_type, load_unit)) != -1) { print_disk_probe_info(); open_result = devsw[devidx]->dv_open(&f,&currdev); if (open_result == 0) { printf(" good.\n"); return (0); } printf("\n"); } printf(" Requested disk type/unit/slice/partition not found\n"); return (-1); } int main(int argc, char **argv) { struct api_signature *sig = NULL; int load_type, load_unit, load_slice, load_partition; int i; const char *ldev; /* * We first check if a command line argument was passed to us containing * API's signature address. If it wasn't then we try to search for the * API signature via the usual hinted address. * If we can't find the magic signature and related info, exit with a * unique error code that U-Boot reports as "## Application terminated, * rc = 0xnnbadab1". Hopefully 'badab1' looks enough like "bad api" to * provide a clue. It's better than 0xffffffff anyway. */ if (!api_parse_cmdline_sig(argc, argv, &sig) && !api_search_sig(&sig)) return (0x01badab1); syscall_ptr = sig->syscall; if (syscall_ptr == NULL) return (0x02badab1); if (sig->version > API_SIG_VERSION) return (0x03badab1); /* Clear BSS sections */ bzero(__sbss_start, __sbss_end - __sbss_start); bzero(__bss_start, _end - __bss_start); /* * Initialise the heap as early as possible. Once this is done, * alloc() is usable. We are using the stack u-boot set up near the top * of physical ram; hopefully there is sufficient space between the end * of our bss and the bottom of the u-boot stack to avoid overlap. */ uboot_heap_start = round_page((uintptr_t)end); uboot_heap_end = uboot_heap_start + HEAP_SIZE; setheap((void *)uboot_heap_start, (void *)uboot_heap_end); /* * Set up console. 
*/ cons_probe(); printf("Compatible U-Boot API signature found @%p\n", sig); printf("\n%s", bootprog_info); printf("\n"); dump_sig(sig); dump_addr_info(); meminfo(); /* * Enumerate U-Boot devices */ if ((devs_no = ub_dev_enum()) == 0) { printf("no U-Boot devices found"); goto do_interact; } printf("Number of U-Boot devices: %d\n", devs_no); get_load_device(&load_type, &load_unit, &load_slice, &load_partition); /* * March through the device switch probing for things. */ for (i = 0; devsw[i] != NULL; i++) { if (devsw[i]->dv_init == NULL) continue; if ((devsw[i]->dv_init)() != 0) continue; printf("Found U-Boot device: %s\n", devsw[i]->dv_name); currdev.dd.d_dev = devsw[i]; currdev.dd.d_unit = 0; - if ((load_type == -1 || (load_type & DEV_TYP_STOR)) && + if ((load_type == DEV_TYP_NONE || (load_type & DEV_TYP_STOR)) && strcmp(devsw[i]->dv_name, "disk") == 0) { if (probe_disks(i, load_type, load_unit, load_slice, load_partition) == 0) break; } - if ((load_type == -1 || (load_type & DEV_TYP_NET)) && + if ((load_type == DEV_TYP_NONE || (load_type & DEV_TYP_NET)) && strcmp(devsw[i]->dv_name, "net") == 0) break; } /* * If we couldn't find a boot device, return an error to u-boot. * U-boot may be running a boot script that can try something different * so returning an error is better than forcing a reboot. */ if (devsw[i] == NULL) { printf("No boot device found!\n"); return (0xbadef1ce); } ldev = uboot_fmtdev(&currdev); env_setenv("currdev", EV_VOLATILE, ldev, uboot_setcurrdev, env_nounset); env_setenv("loaddev", EV_VOLATILE, ldev, env_noset, env_nounset); printf("Booting from %s\n", ldev); do_interact: setenv("LINES", "24", 1); /* optional */ setenv("prompt", "loader>", 1); #ifdef __powerpc__ setenv("usefdt", "1", 1); #endif archsw.arch_loadaddr = uboot_loadaddr; archsw.arch_getdev = uboot_getdev; archsw.arch_copyin = uboot_copyin; archsw.arch_copyout = uboot_copyout; archsw.arch_readin = uboot_readin; archsw.arch_autoload = uboot_autoload; interact(); /* doesn't return */ return (0); } COMMAND_SET(heap, "heap", "show heap usage", command_heap); static int command_heap(int argc, char *argv[]) { printf("heap base at %p, top at %p, used %td\n", end, sbrk(0), sbrk(0) - end); return (CMD_OK); } COMMAND_SET(reboot, "reboot", "reboot the system", command_reboot); static int command_reboot(int argc, char *argv[]) { printf("Resetting...\n"); ub_reset(); printf("Reset failed!\n"); while (1); __unreachable(); } COMMAND_SET(devinfo, "devinfo", "show U-Boot devices", command_devinfo); static int command_devinfo(int argc, char *argv[]) { int i; if ((devs_no = ub_dev_enum()) == 0) { command_errmsg = "no U-Boot devices found!?"; return (CMD_ERROR); } printf("U-Boot devices:\n"); for (i = 0; i < devs_no; i++) { ub_dump_di(i); printf("\n"); } return (CMD_OK); } COMMAND_SET(sysinfo, "sysinfo", "show U-Boot system info", command_sysinfo); static int command_sysinfo(int argc, char *argv[]) { struct sys_info *si; if ((si = ub_get_sys_info()) == NULL) { command_errmsg = "could not retrieve U-Boot sys info!?"; return (CMD_ERROR); } printf("U-Boot system info:\n"); ub_dump_si(si); return (CMD_OK); } enum ubenv_action { UBENV_UNKNOWN, UBENV_SHOW, UBENV_IMPORT }; static void handle_uboot_env_var(enum ubenv_action action, const char * var) { char ldvar[128]; const char *val; char *wrk; int len; /* * On an import with the variable name formatted as ldname=ubname, * import the uboot variable ubname into the loader variable ldname, * otherwise the historical behavior is to import to uboot.ubname. 
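 * For example, 'ubenv import ethaddr' creates the loader variable
 * uboot.ethaddr, while 'ubenv import bootfile=bootfile' imports
 * the U-Boot variable bootfile under the loader name bootfile.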
*/ if (action == UBENV_IMPORT) { len = strcspn(var, "="); if (len == 0) { printf("name cannot start with '=': '%s'\n", var); return; } if (var[len] == 0) { strcpy(ldvar, "uboot."); strncat(ldvar, var, sizeof(ldvar) - 7); } else { len = MIN(len, sizeof(ldvar) - 1); strncpy(ldvar, var, len); ldvar[len] = 0; var = &var[len + 1]; } } /* * If the user prepended "uboot." (which is how they usually see these * names) strip it off as a convenience. */ if (strncmp(var, "uboot.", 6) == 0) { var = &var[6]; } /* If there is no variable name left, punt. */ if (var[0] == 0) { printf("empty variable name\n"); return; } val = ub_env_get(var); if (action == UBENV_SHOW) { if (val == NULL) printf("uboot.%s is not set\n", var); else printf("uboot.%s=%s\n", var, val); } else if (action == UBENV_IMPORT) { if (val != NULL) { setenv(ldvar, val, 1); } } } static int command_ubenv(int argc, char *argv[]) { enum ubenv_action action; const char *var; int i; action = UBENV_UNKNOWN; if (argc > 1) { if (strcasecmp(argv[1], "import") == 0) action = UBENV_IMPORT; else if (strcasecmp(argv[1], "show") == 0) action = UBENV_SHOW; } if (action == UBENV_UNKNOWN) { command_errmsg = "usage: 'ubenv <import|show> [var ...]'"; return (CMD_ERROR); } if (argc > 2) { for (i = 2; i < argc; i++) handle_uboot_env_var(action, argv[i]); } else { var = NULL; for (;;) { if ((var = ub_env_enum(var)) == NULL) break; handle_uboot_env_var(action, var); } } return (CMD_OK); } COMMAND_SET(ubenv, "ubenv", "show or import U-Boot env vars", command_ubenv); #ifdef LOADER_FDT_SUPPORT /* * Since the proper fdt command handling function is defined in * fdt_loader_cmd.c, and declaring it as extern contradicts the COMMAND_SET() * macro (which uses a static pointer), we define a wrapper function that * calls the proper fdt handling routine. */ static int command_fdt(int argc, char *argv[]) { return (command_fdt_internal(argc, argv)); } COMMAND_SET(fdt, "fdt", "flattened device tree handling", command_fdt); #endif Index: projects/import-googletest-1.8.1/stand/uboot/lib/libuboot.h =================================================================== --- projects/import-googletest-1.8.1/stand/uboot/lib/libuboot.h (revision 344270) +++ projects/import-googletest-1.8.1/stand/uboot/lib/libuboot.h (revision 344271) @@ -1,80 +1,76 @@ /*- * Copyright (C) 2000 Benno Rice. * Copyright (C) 2007 Semihalf, Rafal Jaworowski * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED.
IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * $FreeBSD$ */ +#include + struct uboot_devdesc { - struct devdesc dd; /* Must be first. */ union { - struct { - int slice; - int partition; - off_t offset; - } disk; - } d_kind; + struct devdesc dd; + struct disk_devdesc d_disk; + }; }; - -#define d_disk d_kind.disk /* * Default network packet alignment in memory. On arm arches packets must be * aligned to cacheline boundaries. */ #if defined(__aarch64__) #define PKTALIGN 128 #elif defined(__arm__) #define PKTALIGN 64 #else #define PKTALIGN 32 #endif int uboot_getdev(void **vdev, const char *devspec, const char **path); char *uboot_fmtdev(void *vdev); int uboot_setcurrdev(struct env_var *ev, int flags, const void *value); extern int devs_no; extern struct netif_driver uboot_net; extern struct devsw uboot_storage; extern uintptr_t uboot_heap_start; extern uintptr_t uboot_heap_end; uint64_t uboot_loadaddr(u_int type, void *data, uint64_t addr); ssize_t uboot_copyin(const void *src, vm_offset_t dest, const size_t len); ssize_t uboot_copyout(const vm_offset_t src, void *dest, const size_t len); ssize_t uboot_readin(const int fd, vm_offset_t dest, const size_t len); extern int uboot_autoload(void); struct preloaded_file; struct file_format; extern struct file_format uboot_elf; void reboot(void); int uboot_diskgetunit(int type, int type_unit); Index: projects/import-googletest-1.8.1/sys/amd64/amd64/pmap.c =================================================================== --- projects/import-googletest-1.8.1/sys/amd64/amd64/pmap.c (revision 344270) +++ projects/import-googletest-1.8.1/sys/amd64/amd64/pmap.c (revision 344271) @@ -1,9023 +1,9003 @@ /*- * SPDX-License-Identifier: BSD-4-Clause * * Copyright (c) 1991 Regents of the University of California. * All rights reserved. * Copyright (c) 1994 John S. Dyson * All rights reserved. * Copyright (c) 1994 David Greenman * All rights reserved. * Copyright (c) 2003 Peter Wemm * All rights reserved. * Copyright (c) 2005-2010 Alan L. Cox * All rights reserved. * * This code is derived from software contributed to Berkeley by * the Systems Programming Group of the University of Utah Computer * Science Department and William Jolitz of UUNET Technologies Inc. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * 3. All advertising materials mentioning features or use of this software * must display the following acknowledgement: * This product includes software developed by the University of * California, Berkeley and its contributors. * 4. 
Neither the name of the University nor the names of its contributors * may be used to endorse or promote products derived from this software * without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * from: @(#)pmap.c 7.7 (Berkeley) 5/12/91 */ /*- * Copyright (c) 2003 Networks Associates Technology, Inc. * Copyright (c) 2014-2018 The FreeBSD Foundation * All rights reserved. * * This software was developed for the FreeBSD Project by Jake Burkholder, * Safeport Network Services, and Network Associates Laboratories, the * Security Research Division of Network Associates, Inc. under * DARPA/SPAWAR contract N66001-01-C-8035 ("CBOSS"), as part of the DARPA * CHATS research program. * * Portions of this software were developed by * Konstantin Belousov under sponsorship from * the FreeBSD Foundation. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ #define AMD64_NPT_AWARE #include __FBSDID("$FreeBSD$"); /* * Manages physical address maps. * * Since the information managed by this module is * also stored by the logical address mapping module, * this module may throw away valid virtual-to-physical * mappings at almost any time. However, invalidations * of virtual-to-physical mappings must be done as * requested. * * In order to cope with hardware architectures which * make virtual-to-physical map invalidates expensive, * this module may delay invalidate or reduced protection * operations until such time as they are actually * necessary. 
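 * For example, a request that removes write permission may update
 * the page-table entries at once while the TLB shootdown that
 * makes the change visible to every processor is deferred until
 * correctness actually requires it.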
This module is given full information as * to which processors are currently using which maps, * and to when physical maps must be made correct. */ #include "opt_pmap.h" #include "opt_vm.h" #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #ifdef SMP #include #endif #include static __inline boolean_t pmap_type_guest(pmap_t pmap) { return ((pmap->pm_type == PT_EPT) || (pmap->pm_type == PT_RVI)); } static __inline boolean_t pmap_emulate_ad_bits(pmap_t pmap) { return ((pmap->pm_flags & PMAP_EMULATE_AD_BITS) != 0); } static __inline pt_entry_t pmap_valid_bit(pmap_t pmap) { pt_entry_t mask; switch (pmap->pm_type) { case PT_X86: case PT_RVI: mask = X86_PG_V; break; case PT_EPT: if (pmap_emulate_ad_bits(pmap)) mask = EPT_PG_EMUL_V; else mask = EPT_PG_READ; break; default: panic("pmap_valid_bit: invalid pm_type %d", pmap->pm_type); } return (mask); } static __inline pt_entry_t pmap_rw_bit(pmap_t pmap) { pt_entry_t mask; switch (pmap->pm_type) { case PT_X86: case PT_RVI: mask = X86_PG_RW; break; case PT_EPT: if (pmap_emulate_ad_bits(pmap)) mask = EPT_PG_EMUL_RW; else mask = EPT_PG_WRITE; break; default: panic("pmap_rw_bit: invalid pm_type %d", pmap->pm_type); } return (mask); } static pt_entry_t pg_g; static __inline pt_entry_t pmap_global_bit(pmap_t pmap) { pt_entry_t mask; switch (pmap->pm_type) { case PT_X86: mask = pg_g; break; case PT_RVI: case PT_EPT: mask = 0; break; default: panic("pmap_global_bit: invalid pm_type %d", pmap->pm_type); } return (mask); } static __inline pt_entry_t pmap_accessed_bit(pmap_t pmap) { pt_entry_t mask; switch (pmap->pm_type) { case PT_X86: case PT_RVI: mask = X86_PG_A; break; case PT_EPT: if (pmap_emulate_ad_bits(pmap)) mask = EPT_PG_READ; else mask = EPT_PG_A; break; default: panic("pmap_accessed_bit: invalid pm_type %d", pmap->pm_type); } return (mask); } static __inline pt_entry_t pmap_modified_bit(pmap_t pmap) { pt_entry_t mask; switch (pmap->pm_type) { case PT_X86: case PT_RVI: mask = X86_PG_M; break; case PT_EPT: if (pmap_emulate_ad_bits(pmap)) mask = EPT_PG_WRITE; else mask = EPT_PG_M; break; default: panic("pmap_modified_bit: invalid pm_type %d", pmap->pm_type); } return (mask); } #if !defined(DIAGNOSTIC) #ifdef __GNUC_GNU_INLINE__ #define PMAP_INLINE __attribute__((__gnu_inline__)) inline #else #define PMAP_INLINE extern inline #endif #else #define PMAP_INLINE #endif #ifdef PV_STATS #define PV_STAT(x) do { x ; } while (0) #else #define PV_STAT(x) do { } while (0) #endif #define pa_index(pa) ((pa) >> PDRSHIFT) #define pa_to_pvh(pa) (&pv_table[pa_index(pa)]) #define NPV_LIST_LOCKS MAXCPU #define PHYS_TO_PV_LIST_LOCK(pa) \ (&pv_list_locks[pa_index(pa) % NPV_LIST_LOCKS]) #define CHANGE_PV_LIST_LOCK_TO_PHYS(lockp, pa) do { \ struct rwlock **_lockp = (lockp); \ struct rwlock *_new_lock; \ \ _new_lock = PHYS_TO_PV_LIST_LOCK(pa); \ if (_new_lock != *_lockp) { \ if (*_lockp != NULL) \ rw_wunlock(*_lockp); \ *_lockp = _new_lock; \ rw_wlock(*_lockp); \ } \ } while (0) #define CHANGE_PV_LIST_LOCK_TO_VM_PAGE(lockp, m) \ CHANGE_PV_LIST_LOCK_TO_PHYS(lockp, VM_PAGE_TO_PHYS(m)) #define RELEASE_PV_LIST_LOCK(lockp) do { \ struct rwlock **_lockp = (lockp); \ \ if (*_lockp != NULL) { \ rw_wunlock(*_lockp); \ *_lockp = NULL; \ } \ } while (0) #define 
#define	VM_PAGE_TO_PV_LIST_LOCK(m)	\
			PHYS_TO_PV_LIST_LOCK(VM_PAGE_TO_PHYS(m))

struct pmap kernel_pmap_store;

vm_offset_t virtual_avail;	/* VA of first avail page (after kernel bss) */
vm_offset_t virtual_end;	/* VA of last avail page (end of kernel AS) */

int nkpt;
SYSCTL_INT(_machdep, OID_AUTO, nkpt, CTLFLAG_RD, &nkpt, 0,
    "Number of kernel page table pages allocated on bootup");

static int ndmpdp;
vm_paddr_t dmaplimit;
vm_offset_t kernel_vm_end = VM_MIN_KERNEL_ADDRESS;
pt_entry_t pg_nx;

static SYSCTL_NODE(_vm, OID_AUTO, pmap, CTLFLAG_RD, 0, "VM/pmap parameters");

-static int pat_works = 1;
-SYSCTL_INT(_vm_pmap, OID_AUTO, pat_works, CTLFLAG_RD, &pat_works, 1,
-    "Is page attribute table fully functional?");
-
static int pg_ps_enabled = 1;
SYSCTL_INT(_vm_pmap, OID_AUTO, pg_ps_enabled, CTLFLAG_RDTUN | CTLFLAG_NOFETCH,
    &pg_ps_enabled, 0, "Are large page mappings enabled?");

#define	PAT_INDEX_SIZE	8
static int pat_index[PAT_INDEX_SIZE];	/* cache mode to PAT index conversion */

static u_int64_t	KPTphys;	/* phys addr of kernel level 1 */
static u_int64_t	KPDphys;	/* phys addr of kernel level 2 */
u_int64_t		KPDPphys;	/* phys addr of kernel level 3 */
u_int64_t		KPML4phys;	/* phys addr of kernel level 4 */

static u_int64_t	DMPDphys;	/* phys addr of direct mapped level 2 */
static u_int64_t	DMPDPphys;	/* phys addr of direct mapped level 3 */
static int		ndmpdpphys;	/* number of DMPDPphys pages */

static vm_paddr_t	KERNend;	/* phys addr of end of bootstrap data */

/*
 * pmap_mapdev support pre-initialization (i.e. console)
 */
#define	PMAP_PREINIT_MAPPING_COUNT	8
static struct pmap_preinit_mapping {
	vm_paddr_t	pa;
	vm_offset_t	va;
	vm_size_t	sz;
	int		mode;
} pmap_preinit_mapping[PMAP_PREINIT_MAPPING_COUNT];
static int pmap_initialized;

/*
 * Data for the pv entry allocation mechanism.
 * Updates to pv_invl_gen are protected by the pv_list_locks[]
 * elements, but reads are not.
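 * A reader may thus observe a stale generation number; the wait
 * loop in pmap_delayed_invl_wait() below copes with this by
 * re-checking the generation after queueing itself on the
 * turnstile.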
*/ static TAILQ_HEAD(pch, pv_chunk) pv_chunks = TAILQ_HEAD_INITIALIZER(pv_chunks); static struct mtx __exclusive_cache_line pv_chunks_mutex; static struct rwlock __exclusive_cache_line pv_list_locks[NPV_LIST_LOCKS]; static u_long pv_invl_gen[NPV_LIST_LOCKS]; static struct md_page *pv_table; static struct md_page pv_dummy; /* * All those kernel PT submaps that BSD is so fond of */ pt_entry_t *CMAP1 = NULL; caddr_t CADDR1 = 0; static vm_offset_t qframe = 0; static struct mtx qframe_mtx; static int pmap_flags = PMAP_PDE_SUPERPAGE; /* flags for x86 pmaps */ static vmem_t *large_vmem; static u_int lm_ents; int pmap_pcid_enabled = 1; SYSCTL_INT(_vm_pmap, OID_AUTO, pcid_enabled, CTLFLAG_RDTUN | CTLFLAG_NOFETCH, &pmap_pcid_enabled, 0, "Is TLB Context ID enabled ?"); int invpcid_works = 0; SYSCTL_INT(_vm_pmap, OID_AUTO, invpcid_works, CTLFLAG_RD, &invpcid_works, 0, "Is the invpcid instruction available ?"); int __read_frequently pti = 0; SYSCTL_INT(_vm_pmap, OID_AUTO, pti, CTLFLAG_RDTUN | CTLFLAG_NOFETCH, &pti, 0, "Page Table Isolation enabled"); static vm_object_t pti_obj; static pml4_entry_t *pti_pml4; static vm_pindex_t pti_pg_idx; static bool pti_finalized; static int pmap_pcid_save_cnt_proc(SYSCTL_HANDLER_ARGS) { int i; uint64_t res; res = 0; CPU_FOREACH(i) { res += cpuid_to_pcpu[i]->pc_pm_save_cnt; } return (sysctl_handle_64(oidp, &res, 0, req)); } SYSCTL_PROC(_vm_pmap, OID_AUTO, pcid_save_cnt, CTLTYPE_U64 | CTLFLAG_RW | CTLFLAG_MPSAFE, NULL, 0, pmap_pcid_save_cnt_proc, "QU", "Count of saved TLB context on switch"); static LIST_HEAD(, pmap_invl_gen) pmap_invl_gen_tracker = LIST_HEAD_INITIALIZER(&pmap_invl_gen_tracker); static struct mtx invl_gen_mtx; static u_long pmap_invl_gen = 0; /* Fake lock object to satisfy turnstiles interface. */ static struct lock_object invl_gen_ts = { .lo_name = "invlts", }; static bool pmap_not_in_di(void) { return (curthread->td_md.md_invl_gen.gen == 0); } #define PMAP_ASSERT_NOT_IN_DI() \ KASSERT(pmap_not_in_di(), ("DI already started")) /* * Start a new Delayed Invalidation (DI) block of code, executed by * the current thread. Within a DI block, the current thread may * destroy both the page table and PV list entries for a mapping and * then release the corresponding PV list lock before ensuring that * the mapping is flushed from the TLBs of any processors with the * pmap active. */ static void pmap_delayed_invl_started(void) { struct pmap_invl_gen *invl_gen; u_long currgen; invl_gen = &curthread->td_md.md_invl_gen; PMAP_ASSERT_NOT_IN_DI(); mtx_lock(&invl_gen_mtx); if (LIST_EMPTY(&pmap_invl_gen_tracker)) currgen = pmap_invl_gen; else currgen = LIST_FIRST(&pmap_invl_gen_tracker)->gen; invl_gen->gen = currgen + 1; LIST_INSERT_HEAD(&pmap_invl_gen_tracker, invl_gen, link); mtx_unlock(&invl_gen_mtx); } /* * Finish the DI block, previously started by the current thread. All * required TLB flushes for the pages marked by * pmap_delayed_invl_page() must be finished before this function is * called. * * This function works by bumping the global DI generation number to * the generation number of the current thread's DI, unless there is a * pending DI that started earlier. In the latter case, bumping the * global DI generation number would incorrectly signal that the * earlier DI had finished. Instead, this function bumps the earlier * DI's generation number to match the generation number of the * current thread's DI. 
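 *
 * An illustrative caller sequence (a sketch only, not code from
 * this file) ties the DI primitives together:
 *
 *	pmap_delayed_invl_started();
 *	(remove the PTE and the PV entry; pmap_delayed_invl_page(m);
 *	 drop the PV list lock)
 *	pmap_invalidate_page(pmap, va);
 *	pmap_delayed_invl_finished();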
*/ static void pmap_delayed_invl_finished(void) { struct pmap_invl_gen *invl_gen, *next; struct turnstile *ts; invl_gen = &curthread->td_md.md_invl_gen; KASSERT(invl_gen->gen != 0, ("missed invl_started")); mtx_lock(&invl_gen_mtx); next = LIST_NEXT(invl_gen, link); if (next == NULL) { turnstile_chain_lock(&invl_gen_ts); ts = turnstile_lookup(&invl_gen_ts); pmap_invl_gen = invl_gen->gen; if (ts != NULL) { turnstile_broadcast(ts, TS_SHARED_QUEUE); turnstile_unpend(ts); } turnstile_chain_unlock(&invl_gen_ts); } else { next->gen = invl_gen->gen; } LIST_REMOVE(invl_gen, link); mtx_unlock(&invl_gen_mtx); invl_gen->gen = 0; } #ifdef PV_STATS static long invl_wait; SYSCTL_LONG(_vm_pmap, OID_AUTO, invl_wait, CTLFLAG_RD, &invl_wait, 0, "Number of times DI invalidation blocked pmap_remove_all/write"); #endif static u_long * pmap_delayed_invl_genp(vm_page_t m) { return (&pv_invl_gen[pa_index(VM_PAGE_TO_PHYS(m)) % NPV_LIST_LOCKS]); } /* * Ensure that all currently executing DI blocks, that need to flush * TLB for the given page m, actually flushed the TLB at the time the * function returned. If the page m has an empty PV list and we call * pmap_delayed_invl_wait(), upon its return we know that no CPU has a * valid mapping for the page m in either its page table or TLB. * * This function works by blocking until the global DI generation * number catches up with the generation number associated with the * given page m and its PV list. Since this function's callers * typically own an object lock and sometimes own a page lock, it * cannot sleep. Instead, it blocks on a turnstile to relinquish the * processor. */ static void pmap_delayed_invl_wait(vm_page_t m) { struct turnstile *ts; u_long *m_gen; #ifdef PV_STATS bool accounted = false; #endif m_gen = pmap_delayed_invl_genp(m); while (*m_gen > pmap_invl_gen) { #ifdef PV_STATS if (!accounted) { atomic_add_long(&invl_wait, 1); accounted = true; } #endif ts = turnstile_trywait(&invl_gen_ts); if (*m_gen > pmap_invl_gen) turnstile_wait(ts, NULL, TS_SHARED_QUEUE); else turnstile_cancel(ts); } } /* * Mark the page m's PV list as participating in the current thread's * DI block. Any threads concurrently using m's PV list to remove or * restrict all mappings to m will wait for the current thread's DI * block to complete before proceeding. * * The function works by setting the DI generation number for m's PV * list to at least the DI generation number of the current thread. * This forces a caller of pmap_delayed_invl_wait() to block until * current thread calls pmap_delayed_invl_finished(). */ static void pmap_delayed_invl_page(vm_page_t m) { u_long gen, *m_gen; rw_assert(VM_PAGE_TO_PV_LIST_LOCK(m), RA_WLOCKED); gen = curthread->td_md.md_invl_gen.gen; if (gen == 0) return; m_gen = pmap_delayed_invl_genp(m); if (*m_gen < gen) *m_gen = gen; } /* * Crashdump maps. */ static caddr_t crashdumpmap; /* * Internal flags for pmap_enter()'s helper functions. */ #define PMAP_ENTER_NORECLAIM 0x1000000 /* Don't reclaim PV entries. */ #define PMAP_ENTER_NOREPLACE 0x2000000 /* Don't replace mappings. 
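 * Both flags are internal to pmap_enter() and its helpers; their
 * values presumably sit above the machine-independent PMAP_ENTER_*
 * bits so that the two flag name spaces cannot collide.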
*/ static void free_pv_chunk(struct pv_chunk *pc); static void free_pv_entry(pmap_t pmap, pv_entry_t pv); static pv_entry_t get_pv_entry(pmap_t pmap, struct rwlock **lockp); static int popcnt_pc_map_pq(uint64_t *map); static vm_page_t reclaim_pv_chunk(pmap_t locked_pmap, struct rwlock **lockp); static void reserve_pv_entries(pmap_t pmap, int needed, struct rwlock **lockp); static void pmap_pv_demote_pde(pmap_t pmap, vm_offset_t va, vm_paddr_t pa, struct rwlock **lockp); static bool pmap_pv_insert_pde(pmap_t pmap, vm_offset_t va, pd_entry_t pde, u_int flags, struct rwlock **lockp); #if VM_NRESERVLEVEL > 0 static void pmap_pv_promote_pde(pmap_t pmap, vm_offset_t va, vm_paddr_t pa, struct rwlock **lockp); #endif static void pmap_pvh_free(struct md_page *pvh, pmap_t pmap, vm_offset_t va); static pv_entry_t pmap_pvh_remove(struct md_page *pvh, pmap_t pmap, vm_offset_t va); static int pmap_change_attr_locked(vm_offset_t va, vm_size_t size, int mode, bool noflush); static boolean_t pmap_demote_pde(pmap_t pmap, pd_entry_t *pde, vm_offset_t va); static boolean_t pmap_demote_pde_locked(pmap_t pmap, pd_entry_t *pde, vm_offset_t va, struct rwlock **lockp); static boolean_t pmap_demote_pdpe(pmap_t pmap, pdp_entry_t *pdpe, vm_offset_t va); static bool pmap_enter_2mpage(pmap_t pmap, vm_offset_t va, vm_page_t m, vm_prot_t prot, struct rwlock **lockp); static int pmap_enter_pde(pmap_t pmap, vm_offset_t va, pd_entry_t newpde, u_int flags, vm_page_t m, struct rwlock **lockp); static vm_page_t pmap_enter_quick_locked(pmap_t pmap, vm_offset_t va, vm_page_t m, vm_prot_t prot, vm_page_t mpte, struct rwlock **lockp); static void pmap_fill_ptp(pt_entry_t *firstpte, pt_entry_t newpte); static int pmap_insert_pt_page(pmap_t pmap, vm_page_t mpte); static void pmap_invalidate_cache_range_selfsnoop(vm_offset_t sva, vm_offset_t eva); static void pmap_invalidate_cache_range_all(vm_offset_t sva, vm_offset_t eva); static void pmap_invalidate_pde_page(pmap_t pmap, vm_offset_t va, pd_entry_t pde); static void pmap_kenter_attr(vm_offset_t va, vm_paddr_t pa, int mode); static vm_page_t pmap_large_map_getptp_unlocked(void); static void pmap_pde_attr(pd_entry_t *pde, int cache_bits, int mask); #if VM_NRESERVLEVEL > 0 static void pmap_promote_pde(pmap_t pmap, pd_entry_t *pde, vm_offset_t va, struct rwlock **lockp); #endif static boolean_t pmap_protect_pde(pmap_t pmap, pd_entry_t *pde, vm_offset_t sva, vm_prot_t prot); static void pmap_pte_attr(pt_entry_t *pte, int cache_bits, int mask); static void pmap_pti_add_kva_locked(vm_offset_t sva, vm_offset_t eva, bool exec); static pdp_entry_t *pmap_pti_pdpe(vm_offset_t va); static pd_entry_t *pmap_pti_pde(vm_offset_t va); static void pmap_pti_wire_pte(void *pte); static int pmap_remove_pde(pmap_t pmap, pd_entry_t *pdq, vm_offset_t sva, struct spglist *free, struct rwlock **lockp); static int pmap_remove_pte(pmap_t pmap, pt_entry_t *ptq, vm_offset_t sva, pd_entry_t ptepde, struct spglist *free, struct rwlock **lockp); static vm_page_t pmap_remove_pt_page(pmap_t pmap, vm_offset_t va); static void pmap_remove_page(pmap_t pmap, vm_offset_t va, pd_entry_t *pde, struct spglist *free); static bool pmap_remove_ptes(pmap_t pmap, vm_offset_t sva, vm_offset_t eva, pd_entry_t *pde, struct spglist *free, struct rwlock **lockp); static boolean_t pmap_try_insert_pv_entry(pmap_t pmap, vm_offset_t va, vm_page_t m, struct rwlock **lockp); static void pmap_update_pde(pmap_t pmap, vm_offset_t va, pd_entry_t *pde, pd_entry_t newpde); static void pmap_update_pde_invalidate(pmap_t, vm_offset_t va, pd_entry_t 
pde); static vm_page_t _pmap_allocpte(pmap_t pmap, vm_pindex_t ptepindex, struct rwlock **lockp); static vm_page_t pmap_allocpde(pmap_t pmap, vm_offset_t va, struct rwlock **lockp); static vm_page_t pmap_allocpte(pmap_t pmap, vm_offset_t va, struct rwlock **lockp); static void _pmap_unwire_ptp(pmap_t pmap, vm_offset_t va, vm_page_t m, struct spglist *free); static int pmap_unuse_pt(pmap_t, vm_offset_t, pd_entry_t, struct spglist *); /********************/ /* Inline functions */ /********************/ /* Return a non-clipped PD index for a given VA */ static __inline vm_pindex_t pmap_pde_pindex(vm_offset_t va) { return (va >> PDRSHIFT); } /* Return a pointer to the PML4 slot that corresponds to a VA */ static __inline pml4_entry_t * pmap_pml4e(pmap_t pmap, vm_offset_t va) { return (&pmap->pm_pml4[pmap_pml4e_index(va)]); } /* Return a pointer to the PDP slot that corresponds to a VA */ static __inline pdp_entry_t * pmap_pml4e_to_pdpe(pml4_entry_t *pml4e, vm_offset_t va) { pdp_entry_t *pdpe; pdpe = (pdp_entry_t *)PHYS_TO_DMAP(*pml4e & PG_FRAME); return (&pdpe[pmap_pdpe_index(va)]); } /* Return a pointer to the PDP slot that corresponds to a VA */ static __inline pdp_entry_t * pmap_pdpe(pmap_t pmap, vm_offset_t va) { pml4_entry_t *pml4e; pt_entry_t PG_V; PG_V = pmap_valid_bit(pmap); pml4e = pmap_pml4e(pmap, va); if ((*pml4e & PG_V) == 0) return (NULL); return (pmap_pml4e_to_pdpe(pml4e, va)); } /* Return a pointer to the PD slot that corresponds to a VA */ static __inline pd_entry_t * pmap_pdpe_to_pde(pdp_entry_t *pdpe, vm_offset_t va) { pd_entry_t *pde; pde = (pd_entry_t *)PHYS_TO_DMAP(*pdpe & PG_FRAME); return (&pde[pmap_pde_index(va)]); } /* Return a pointer to the PD slot that corresponds to a VA */ static __inline pd_entry_t * pmap_pde(pmap_t pmap, vm_offset_t va) { pdp_entry_t *pdpe; pt_entry_t PG_V; PG_V = pmap_valid_bit(pmap); pdpe = pmap_pdpe(pmap, va); if (pdpe == NULL || (*pdpe & PG_V) == 0) return (NULL); return (pmap_pdpe_to_pde(pdpe, va)); } /* Return a pointer to the PT slot that corresponds to a VA */ static __inline pt_entry_t * pmap_pde_to_pte(pd_entry_t *pde, vm_offset_t va) { pt_entry_t *pte; pte = (pt_entry_t *)PHYS_TO_DMAP(*pde & PG_FRAME); return (&pte[pmap_pte_index(va)]); } /* Return a pointer to the PT slot that corresponds to a VA */ static __inline pt_entry_t * pmap_pte(pmap_t pmap, vm_offset_t va) { pd_entry_t *pde; pt_entry_t PG_V; PG_V = pmap_valid_bit(pmap); pde = pmap_pde(pmap, va); if (pde == NULL || (*pde & PG_V) == 0) return (NULL); if ((*pde & PG_PS) != 0) /* compat with i386 pmap_pte() */ return ((pt_entry_t *)pde); return (pmap_pde_to_pte(pde, va)); } static __inline void pmap_resident_count_inc(pmap_t pmap, int count) { PMAP_LOCK_ASSERT(pmap, MA_OWNED); pmap->pm_stats.resident_count += count; } static __inline void pmap_resident_count_dec(pmap_t pmap, int count) { PMAP_LOCK_ASSERT(pmap, MA_OWNED); KASSERT(pmap->pm_stats.resident_count >= count, ("pmap %p resident count underflow %ld %d", pmap, pmap->pm_stats.resident_count, count)); pmap->pm_stats.resident_count -= count; } PMAP_INLINE pt_entry_t * vtopte(vm_offset_t va) { u_int64_t mask = ((1ul << (NPTEPGSHIFT + NPDEPGSHIFT + NPDPEPGSHIFT + NPML4EPGSHIFT)) - 1); KASSERT(va >= VM_MAXUSER_ADDRESS, ("vtopte on a uva/gpa 0x%0lx", va)); return (PTmap + ((va >> PAGE_SHIFT) & mask)); } static __inline pd_entry_t * vtopde(vm_offset_t va) { u_int64_t mask = ((1ul << (NPDEPGSHIFT + NPDPEPGSHIFT + NPML4EPGSHIFT)) - 1); KASSERT(va >= VM_MAXUSER_ADDRESS, ("vtopde on a uva/gpa 0x%0lx", va)); return (PDmap + ((va >> 
PDRSHIFT) & mask)); } static u_int64_t allocpages(vm_paddr_t *firstaddr, int n) { u_int64_t ret; ret = *firstaddr; bzero((void *)ret, n * PAGE_SIZE); *firstaddr += n * PAGE_SIZE; return (ret); } CTASSERT(powerof2(NDMPML4E)); /* number of kernel PDP slots */ #define NKPDPE(ptpgs) howmany(ptpgs, NPDEPG) static void nkpt_init(vm_paddr_t addr) { int pt_pages; #ifdef NKPT pt_pages = NKPT; #else pt_pages = howmany(addr, 1 << PDRSHIFT); pt_pages += NKPDPE(pt_pages); /* * Add some slop beyond the bare minimum required for bootstrapping * the kernel. * * This is quite important when allocating KVA for kernel modules. * The modules are required to be linked in the negative 2GB of * the address space. If we run out of KVA in this region then * pmap_growkernel() will need to allocate page table pages to map * the entire 512GB of KVA space which is an unnecessary tax on * physical memory. * * Secondly, device memory mapped as part of setting up the low- * level console(s) is taken from KVA, starting at virtual_avail. * This is because cninit() is called after pmap_bootstrap() but * before vm_init() and pmap_init(). 20MB for a frame buffer is * not uncommon. */ pt_pages += 32; /* 64MB additional slop. */ #endif nkpt = pt_pages; } /* * Returns the proper write/execute permission for a physical page that is * part of the initial boot allocations. * * If the page has kernel text, it is marked as read-only. If the page has * kernel read-only data, it is marked as read-only/not-executable. If the * page has only read-write data, it is marked as read-write/not-executable. * If the page is below/above the kernel range, it is marked as read-write. * * This function operates on 2M pages, since we map the kernel space that * way. * * Note that this doesn't currently provide any protection for modules. */ static inline pt_entry_t bootaddr_rwx(vm_paddr_t pa) { /* * Everything in the same 2M page as the start of the kernel * should be static. On the other hand, things in the same 2M * page as the end of the kernel could be read-write/executable, * as the kernel image is not guaranteed to end on a 2M boundary. */ if (pa < trunc_2mpage(btext - KERNBASE) || pa >= trunc_2mpage(_end - KERNBASE)) return (X86_PG_RW); /* * The linker should ensure that the read-only and read-write * portions don't share the same 2M page, so this shouldn't * impact read-only data. However, in any case, any page with * read-write data needs to be read-write. */ if (pa >= trunc_2mpage(brwsection - KERNBASE)) return (X86_PG_RW | pg_nx); /* * Mark any 2M page containing kernel text as read-only. Mark * other pages with read-only data as read-only and not executable. * (It is likely a small portion of the read-only data section will * be marked as read-only, but executable. This should be acceptable * since the read-only protection will keep the data from changing.) * Note that fixups to the .text section will still work until we * set CR0.WP. */ if (pa < round_2mpage(etext - KERNBASE)) return (0); return (pg_nx); } static void create_pagetables(vm_paddr_t *firstaddr) { int i, j, ndm1g, nkpdpe, nkdmpde; pt_entry_t *pt_p; pd_entry_t *pd_p; pdp_entry_t *pdp_p; pml4_entry_t *p4_p; uint64_t DMPDkernphys; /* Allocate page table pages for the direct map */ ndmpdp = howmany(ptoa(Maxmem), NBPDP); if (ndmpdp < 4) /* Minimum 4GB of dirmap */ ndmpdp = 4; ndmpdpphys = howmany(ndmpdp, NPDPEPG); if (ndmpdpphys > NDMPML4E) { /* * Each NDMPML4E allows 512 GB, so limit to that, * and then readjust ndmpdp and ndmpdpphys. 
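		 * For example, with NDMPML4E == 1 the direct map
		 * tops out at 512 GB: ndmpdp is clamped to NPDEPG
		 * (512) 1 GB slots and Maxmem to atop(NBPML4).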
*/ printf("NDMPML4E limits system to %d GB\n", NDMPML4E * 512); Maxmem = atop(NDMPML4E * NBPML4); ndmpdpphys = NDMPML4E; ndmpdp = NDMPML4E * NPDEPG; } DMPDPphys = allocpages(firstaddr, ndmpdpphys); ndm1g = 0; if ((amd_feature & AMDID_PAGE1GB) != 0) { /* * Calculate the number of 1G pages that will fully fit in * Maxmem. */ ndm1g = ptoa(Maxmem) >> PDPSHIFT; /* * Allocate 2M pages for the kernel. These will be used in * place of the first one or more 1G pages from ndm1g. */ nkdmpde = howmany((vm_offset_t)(brwsection - KERNBASE), NBPDP); DMPDkernphys = allocpages(firstaddr, nkdmpde); } if (ndm1g < ndmpdp) DMPDphys = allocpages(firstaddr, ndmpdp - ndm1g); dmaplimit = (vm_paddr_t)ndmpdp << PDPSHIFT; /* Allocate pages */ KPML4phys = allocpages(firstaddr, 1); KPDPphys = allocpages(firstaddr, NKPML4E); /* * Allocate the initial number of kernel page table pages required to * bootstrap. We defer this until after all memory-size dependent * allocations are done (e.g. direct map), so that we don't have to * build in too much slop in our estimate. * * Note that when NKPML4E > 1, we have an empty page underneath * all but the KPML4I'th one, so we need NKPML4E-1 extra (zeroed) * pages. (pmap_enter requires a PD page to exist for each KPML4E.) */ nkpt_init(*firstaddr); nkpdpe = NKPDPE(nkpt); KPTphys = allocpages(firstaddr, nkpt); KPDphys = allocpages(firstaddr, nkpdpe); /* Fill in the underlying page table pages */ /* XXX not fully used, underneath 2M pages */ pt_p = (pt_entry_t *)KPTphys; for (i = 0; ptoa(i) < *firstaddr; i++) pt_p[i] = ptoa(i) | X86_PG_V | pg_g | bootaddr_rwx(ptoa(i)); /* Now map the page tables at their location within PTmap */ pd_p = (pd_entry_t *)KPDphys; for (i = 0; i < nkpt; i++) pd_p[i] = (KPTphys + ptoa(i)) | X86_PG_RW | X86_PG_V; /* Map from zero to end of allocations under 2M pages */ /* This replaces some of the KPTphys entries above */ for (i = 0; (i << PDRSHIFT) < *firstaddr; i++) /* Preset PG_M and PG_A because demotion expects it. */ pd_p[i] = (i << PDRSHIFT) | X86_PG_V | PG_PS | pg_g | X86_PG_M | X86_PG_A | bootaddr_rwx(i << PDRSHIFT); /* * Because we map the physical blocks in 2M pages, adjust firstaddr * to record the physical blocks we've actually mapped into kernel * virtual address space. */ *firstaddr = round_2mpage(*firstaddr); /* And connect up the PD to the PDP (leaving room for L4 pages) */ pdp_p = (pdp_entry_t *)(KPDPphys + ptoa(KPML4I - KPML4BASE)); for (i = 0; i < nkpdpe; i++) pdp_p[i + KPDPI] = (KPDphys + ptoa(i)) | X86_PG_RW | X86_PG_V; /* * Now, set up the direct map region using 2MB and/or 1GB pages. If * the end of physical memory is not aligned to a 1GB page boundary, * then the residual physical memory is mapped with 2MB pages. Later, * if pmap_mapdev{_attr}() uses the direct map for non-write-back * memory, pmap_change_attr() will demote any 2MB or 1GB page mappings * that are partially used. */ pd_p = (pd_entry_t *)DMPDphys; for (i = NPDEPG * ndm1g, j = 0; i < NPDEPG * ndmpdp; i++, j++) { pd_p[j] = (vm_paddr_t)i << PDRSHIFT; /* Preset PG_M and PG_A because demotion expects it. */ pd_p[j] |= X86_PG_RW | X86_PG_V | PG_PS | pg_g | X86_PG_M | X86_PG_A | pg_nx; } pdp_p = (pdp_entry_t *)DMPDPphys; for (i = 0; i < ndm1g; i++) { pdp_p[i] = (vm_paddr_t)i << PDPSHIFT; /* Preset PG_M and PG_A because demotion expects it. 
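		 * (The demotion code builds the replacement 4 KB
		 * PTEs by copying the 2 MB PDE, PG_A/PG_M included,
		 * rather than re-deriving those bits.)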
*/ pdp_p[i] |= X86_PG_RW | X86_PG_V | PG_PS | pg_g | X86_PG_M | X86_PG_A | pg_nx; } for (j = 0; i < ndmpdp; i++, j++) { pdp_p[i] = DMPDphys + ptoa(j); pdp_p[i] |= X86_PG_RW | X86_PG_V; } /* * Instead of using a 1G page for the memory containing the kernel, * use 2M pages with appropriate permissions. (If using 1G pages, * this will partially overwrite the PDPEs above.) */ if (ndm1g) { pd_p = (pd_entry_t *)DMPDkernphys; for (i = 0; i < (NPDEPG * nkdmpde); i++) pd_p[i] = (i << PDRSHIFT) | X86_PG_V | PG_PS | pg_g | X86_PG_M | X86_PG_A | pg_nx | bootaddr_rwx(i << PDRSHIFT); for (i = 0; i < nkdmpde; i++) pdp_p[i] = (DMPDkernphys + ptoa(i)) | X86_PG_RW | X86_PG_V; } /* And recursively map PML4 to itself in order to get PTmap */ p4_p = (pml4_entry_t *)KPML4phys; p4_p[PML4PML4I] = KPML4phys; p4_p[PML4PML4I] |= X86_PG_RW | X86_PG_V | pg_nx; /* Connect the Direct Map slot(s) up to the PML4. */ for (i = 0; i < ndmpdpphys; i++) { p4_p[DMPML4I + i] = DMPDPphys + ptoa(i); p4_p[DMPML4I + i] |= X86_PG_RW | X86_PG_V; } /* Connect the KVA slots up to the PML4 */ for (i = 0; i < NKPML4E; i++) { p4_p[KPML4BASE + i] = KPDPphys + ptoa(i); p4_p[KPML4BASE + i] |= X86_PG_RW | X86_PG_V; } } /* * Bootstrap the system enough to run with virtual memory. * * On amd64 this is called after mapping has already been enabled * and just syncs the pmap module with what has already been done. * [We can't call it easily with mapping off since the kernel is not * mapped with PA == VA, hence we would have to relocate every address * from the linked base (virtual) address "KERNBASE" to the actual * (physical) address starting relative to 0] */ void pmap_bootstrap(vm_paddr_t *firstaddr) { vm_offset_t va; pt_entry_t *pte; uint64_t cr4; u_long res; int i; KERNend = *firstaddr; res = atop(KERNend - (vm_paddr_t)kernphys); if (!pti) pg_g = X86_PG_G; /* * Create an initial set of page tables to run the kernel in. */ create_pagetables(firstaddr); /* * Add a physical memory segment (vm_phys_seg) corresponding to the * preallocated kernel page table pages so that vm_page structures * representing these pages will be created. The vm_page structures * are required for promotion of the corresponding kernel virtual * addresses to superpage mappings. */ vm_phys_add_seg(KPTphys, KPTphys + ptoa(nkpt)); virtual_avail = (vm_offset_t) KERNBASE + *firstaddr; virtual_end = VM_MAX_KERNEL_ADDRESS; /* * Enable PG_G global pages, then switch to the kernel page * table from the bootstrap page table. After the switch, it * is possible to enable SMEP and SMAP since PG_U bits are * correct now. */ cr4 = rcr4(); cr4 |= CR4_PGE; load_cr4(cr4); load_cr3(KPML4phys); if (cpu_stdext_feature & CPUID_STDEXT_SMEP) cr4 |= CR4_SMEP; if (cpu_stdext_feature & CPUID_STDEXT_SMAP) cr4 |= CR4_SMAP; load_cr4(cr4); /* * Initialize the kernel pmap (which is statically allocated). * Count bootstrap data as being resident in case any of this data is * later unmapped (using pmap_remove()) and freed. */ PMAP_LOCK_INIT(kernel_pmap); kernel_pmap->pm_pml4 = (pdp_entry_t *)PHYS_TO_DMAP(KPML4phys); kernel_pmap->pm_cr3 = KPML4phys; kernel_pmap->pm_ucr3 = PMAP_NO_CR3; CPU_FILL(&kernel_pmap->pm_active); /* don't allow deactivation */ TAILQ_INIT(&kernel_pmap->pm_pvchunk); kernel_pmap->pm_stats.resident_count = res; kernel_pmap->pm_flags = pmap_flags; /* * Initialize the TLB invalidations generation number lock. */ mtx_init(&invl_gen_mtx, "invlgn", NULL, MTX_DEF); /* * Reserve some special page table entries/VA space for temporary * mapping of pages. 
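	 * The SYSMAP() macro below just advances the boot-time 'pte'
	 * and 'va' cursors; for instance,
	 * SYSMAP(caddr_t, CMAP1, crashdumpmap, MAXDUMPPGS) expands to:
	 *
	 *	crashdumpmap = (caddr_t)va;
	 *	va += MAXDUMPPGS * PAGE_SIZE;
	 *	CMAP1 = pte;
	 *	pte += MAXDUMPPGS;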
	 */
#define	SYSMAP(c, p, v, n)	\
	v = (c)va; va += ((n)*PAGE_SIZE); p = pte; pte += (n);

	va = virtual_avail;
	pte = vtopte(va);

	/*
	 * Crashdump maps.  The first page is reused as CMAP1 for the
	 * memory test.
	 */
	SYSMAP(caddr_t, CMAP1, crashdumpmap, MAXDUMPPGS)
	CADDR1 = crashdumpmap;

	virtual_avail = va;

	/*
	 * Initialize the PAT MSR.
	 * pmap_init_pat() clears and sets CR4_PGE, which, as a
	 * side-effect, invalidates stale PG_G TLB entries that might
	 * have been created in our pre-boot environment.
	 */
	pmap_init_pat();

	/* Initialize TLB Context Id. */
	if (pmap_pcid_enabled) {
		for (i = 0; i < MAXCPU; i++) {
			kernel_pmap->pm_pcids[i].pm_pcid = PMAP_PCID_KERN;
			kernel_pmap->pm_pcids[i].pm_gen = 1;
		}

		/*
		 * PMAP_PCID_KERN + 1 is used for initialization of
		 * proc0 pmap.  The pmap's pcid state might be used by
		 * EFIRT entry before first context switch, so it
		 * needs to be valid.
		 */
		PCPU_SET(pcid_next, PMAP_PCID_KERN + 2);
		PCPU_SET(pcid_gen, 1);

		/*
		 * pcpu area for APs is zeroed during AP startup.
		 * pc_pcid_next and pc_pcid_gen are initialized by AP
		 * during pcpu setup.
		 */
		load_cr4(rcr4() | CR4_PCIDE);
	}
}

/*
 * Setup the PAT MSR.
 */
void
pmap_init_pat(void)
{
-	int pat_table[PAT_INDEX_SIZE];
	uint64_t pat_msr;
	u_long cr0, cr4;
	int i;

	/* Bail if this CPU doesn't implement PAT. */
	if ((cpu_feature & CPUID_PAT) == 0)
		panic("no PAT??");

	/* Set default PAT index table. */
	for (i = 0; i < PAT_INDEX_SIZE; i++)
-		pat_table[i] = -1;
-	pat_table[PAT_WRITE_BACK] = 0;
-	pat_table[PAT_WRITE_THROUGH] = 1;
-	pat_table[PAT_UNCACHEABLE] = 3;
-	pat_table[PAT_WRITE_COMBINING] = 3;
-	pat_table[PAT_WRITE_PROTECTED] = 3;
-	pat_table[PAT_UNCACHED] = 3;
+		pat_index[i] = -1;
+	pat_index[PAT_WRITE_BACK] = 0;
+	pat_index[PAT_WRITE_THROUGH] = 1;
+	pat_index[PAT_UNCACHEABLE] = 3;
+	pat_index[PAT_WRITE_COMBINING] = 6;
+	pat_index[PAT_WRITE_PROTECTED] = 5;
+	pat_index[PAT_UNCACHED] = 2;

-	/* Initialize default PAT entries. */
+	/*
+	 * Initialize default PAT entries.
+	 * Leave the indices 0-3 at the default of WB, WT, UC-, and UC.
+	 * Program 5 and 6 as WP and WC.
+	 *
+	 * Leave 4 and 7 as WB and UC.  Note that a recursive page table
+	 * mapping for a 2M page uses a PAT value with the bit 3 set due
+	 * to its overload with PG_PS.
+	 */
	pat_msr = PAT_VALUE(0, PAT_WRITE_BACK) |
	    PAT_VALUE(1, PAT_WRITE_THROUGH) |
	    PAT_VALUE(2, PAT_UNCACHED) |
	    PAT_VALUE(3, PAT_UNCACHEABLE) |
	    PAT_VALUE(4, PAT_WRITE_BACK) |
-	    PAT_VALUE(5, PAT_WRITE_THROUGH) |
-	    PAT_VALUE(6, PAT_UNCACHED) |
+	    PAT_VALUE(5, PAT_WRITE_PROTECTED) |
+	    PAT_VALUE(6, PAT_WRITE_COMBINING) |
	    PAT_VALUE(7, PAT_UNCACHEABLE);

-	if (pat_works) {
-		/*
-		 * Leave the indices 0-3 at the default of WB, WT, UC-, and UC.
-		 * Program 5 and 6 as WP and WC.
-		 * Leave 4 and 7 as WB and UC.
-		 */
-		pat_msr &= ~(PAT_MASK(5) | PAT_MASK(6));
-		pat_msr |= PAT_VALUE(5, PAT_WRITE_PROTECTED) |
-		    PAT_VALUE(6, PAT_WRITE_COMBINING);
-		pat_table[PAT_UNCACHED] = 2;
-		pat_table[PAT_WRITE_PROTECTED] = 5;
-		pat_table[PAT_WRITE_COMBINING] = 6;
-	} else {
-		/*
-		 * Just replace PAT Index 2 with WC instead of UC-.
-		 */
-		pat_msr &= ~PAT_MASK(2);
-		pat_msr |= PAT_VALUE(2, PAT_WRITE_COMBINING);
-		pat_table[PAT_WRITE_COMBINING] = 2;
-	}
-
	/* Disable PGE. */
	cr4 = rcr4();
	load_cr4(cr4 & ~CR4_PGE);

	/* Disable caches (CD = 1, NW = 0). */
	cr0 = rcr0();
	load_cr0((cr0 & ~CR0_NW) | CR0_CD);

	/* Flushes caches and TLBs. */
	wbinvd();
	invltlb();

	/* Update PAT and index table. */
	wrmsr(MSR_PAT, pat_msr);
-	for (i = 0; i < PAT_INDEX_SIZE; i++)
-		pat_index[i] = pat_table[i];

	/* Flush caches and TLBs again. */
	wbinvd();
	invltlb();

	/* Restore caches and PGE.
*/ load_cr0(cr0); load_cr4(cr4); } /* * Initialize a vm_page's machine-dependent fields. */ void pmap_page_init(vm_page_t m) { TAILQ_INIT(&m->md.pv_list); m->md.pat_mode = PAT_WRITE_BACK; } /* * Initialize the pmap module. * Called by vm_init, to initialize any structures that the pmap * system needs to map virtual memory. */ void pmap_init(void) { struct pmap_preinit_mapping *ppim; vm_page_t m, mpte; vm_size_t s; int error, i, pv_npg, ret, skz63; /* L1TF, reserve page @0 unconditionally */ vm_page_blacklist_add(0, bootverbose); /* Detect bare-metal Skylake Server and Skylake-X. */ if (vm_guest == VM_GUEST_NO && cpu_vendor_id == CPU_VENDOR_INTEL && CPUID_TO_FAMILY(cpu_id) == 0x6 && CPUID_TO_MODEL(cpu_id) == 0x55) { /* * Skylake-X errata SKZ63. Processor May Hang When * Executing Code In an HLE Transaction Region between * 40000000H and 403FFFFFH. * * Mark the pages in the range as preallocated. It * seems to be impossible to distinguish between * Skylake Server and Skylake X. */ skz63 = 1; TUNABLE_INT_FETCH("hw.skz63_enable", &skz63); if (skz63 != 0) { if (bootverbose) printf("SKZ63: skipping 4M RAM starting " "at physical 1G\n"); for (i = 0; i < atop(0x400000); i++) { ret = vm_page_blacklist_add(0x40000000 + ptoa(i), FALSE); if (!ret && bootverbose) printf("page at %#lx already used\n", 0x40000000 + ptoa(i)); } } } /* * Initialize the vm page array entries for the kernel pmap's * page table pages. */ PMAP_LOCK(kernel_pmap); for (i = 0; i < nkpt; i++) { mpte = PHYS_TO_VM_PAGE(KPTphys + (i << PAGE_SHIFT)); KASSERT(mpte >= vm_page_array && mpte < &vm_page_array[vm_page_array_size], ("pmap_init: page table page is out of range")); mpte->pindex = pmap_pde_pindex(KERNBASE) + i; mpte->phys_addr = KPTphys + (i << PAGE_SHIFT); mpte->wire_count = 1; if (i << PDRSHIFT < KERNend && pmap_insert_pt_page(kernel_pmap, mpte)) panic("pmap_init: pmap_insert_pt_page failed"); } PMAP_UNLOCK(kernel_pmap); vm_wire_add(nkpt); /* * If the kernel is running on a virtual machine, then it must assume * that MCA is enabled by the hypervisor. Moreover, the kernel must * be prepared for the hypervisor changing the vendor and family that * are reported by CPUID. Consequently, the workaround for AMD Family * 10h Erratum 383 is enabled if the processor's feature set does not * include at least one feature that is only supported by older Intel * or newer AMD processors. */ if (vm_guest != VM_GUEST_NO && (cpu_feature & CPUID_SS) == 0 && (cpu_feature2 & (CPUID2_SSSE3 | CPUID2_SSE41 | CPUID2_AESNI | CPUID2_AVX | CPUID2_XSAVE)) == 0 && (amd_feature2 & (AMDID2_XOP | AMDID2_FMA4)) == 0) workaround_erratum383 = 1; /* * Are large page mappings enabled? */ TUNABLE_INT_FETCH("vm.pmap.pg_ps_enabled", &pg_ps_enabled); if (pg_ps_enabled) { KASSERT(MAXPAGESIZES > 1 && pagesizes[1] == 0, ("pmap_init: can't assign to pagesizes[1]")); pagesizes[1] = NBPDR; } /* * Initialize the pv chunk list mutex. */ mtx_init(&pv_chunks_mutex, "pmap pv chunk list", NULL, MTX_DEF); /* * Initialize the pool of pv list locks. */ for (i = 0; i < NPV_LIST_LOCKS; i++) rw_init(&pv_list_locks[i], "pmap pv list"); /* * Calculate the size of the pv head table for superpages. */ pv_npg = howmany(vm_phys_segs[vm_phys_nsegs - 1].end, NBPDR); /* * Allocate memory for the pv head table for superpages. 
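	 * One md_page is needed per potential superpage, i.e. per
	 * NBPDR (2 MB) of physical address space up to the end of the
	 * last vm_phys segment; covering 16 GB, for example, takes
	 * pv_npg == 8192 entries.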
*/ s = (vm_size_t)(pv_npg * sizeof(struct md_page)); s = round_page(s); pv_table = (struct md_page *)kmem_malloc(s, M_WAITOK | M_ZERO); for (i = 0; i < pv_npg; i++) TAILQ_INIT(&pv_table[i].pv_list); TAILQ_INIT(&pv_dummy.pv_list); pmap_initialized = 1; for (i = 0; i < PMAP_PREINIT_MAPPING_COUNT; i++) { ppim = pmap_preinit_mapping + i; if (ppim->va == 0) continue; /* Make the direct map consistent */ if (ppim->pa < dmaplimit && ppim->pa + ppim->sz <= dmaplimit) { (void)pmap_change_attr(PHYS_TO_DMAP(ppim->pa), ppim->sz, ppim->mode); } if (!bootverbose) continue; printf("PPIM %u: PA=%#lx, VA=%#lx, size=%#lx, mode=%#x\n", i, ppim->pa, ppim->va, ppim->sz, ppim->mode); } mtx_init(&qframe_mtx, "qfrmlk", NULL, MTX_SPIN); error = vmem_alloc(kernel_arena, PAGE_SIZE, M_BESTFIT | M_WAITOK, (vmem_addr_t *)&qframe); if (error != 0) panic("qframe allocation failed"); lm_ents = 8; TUNABLE_INT_FETCH("vm.pmap.large_map_pml4_entries", &lm_ents); if (lm_ents > LMEPML4I - LMSPML4I + 1) lm_ents = LMEPML4I - LMSPML4I + 1; if (bootverbose) printf("pmap: large map %u PML4 slots (%lu Gb)\n", lm_ents, (u_long)lm_ents * (NBPML4 / 1024 / 1024 / 1024)); if (lm_ents != 0) { large_vmem = vmem_create("large", LARGEMAP_MIN_ADDRESS, (vmem_size_t)lm_ents * NBPML4, PAGE_SIZE, 0, M_WAITOK); if (large_vmem == NULL) { printf("pmap: cannot create large map\n"); lm_ents = 0; } for (i = 0; i < lm_ents; i++) { m = pmap_large_map_getptp_unlocked(); kernel_pmap->pm_pml4[LMSPML4I + i] = X86_PG_V | X86_PG_RW | X86_PG_A | X86_PG_M | pg_nx | VM_PAGE_TO_PHYS(m); } } } static SYSCTL_NODE(_vm_pmap, OID_AUTO, pde, CTLFLAG_RD, 0, "2MB page mapping counters"); static u_long pmap_pde_demotions; SYSCTL_ULONG(_vm_pmap_pde, OID_AUTO, demotions, CTLFLAG_RD, &pmap_pde_demotions, 0, "2MB page demotions"); static u_long pmap_pde_mappings; SYSCTL_ULONG(_vm_pmap_pde, OID_AUTO, mappings, CTLFLAG_RD, &pmap_pde_mappings, 0, "2MB page mappings"); static u_long pmap_pde_p_failures; SYSCTL_ULONG(_vm_pmap_pde, OID_AUTO, p_failures, CTLFLAG_RD, &pmap_pde_p_failures, 0, "2MB page promotion failures"); static u_long pmap_pde_promotions; SYSCTL_ULONG(_vm_pmap_pde, OID_AUTO, promotions, CTLFLAG_RD, &pmap_pde_promotions, 0, "2MB page promotions"); static SYSCTL_NODE(_vm_pmap, OID_AUTO, pdpe, CTLFLAG_RD, 0, "1GB page mapping counters"); static u_long pmap_pdpe_demotions; SYSCTL_ULONG(_vm_pmap_pdpe, OID_AUTO, demotions, CTLFLAG_RD, &pmap_pdpe_demotions, 0, "1GB page demotions"); /*************************************************** * Low level helper routines..... ***************************************************/ static pt_entry_t pmap_swap_pat(pmap_t pmap, pt_entry_t entry) { int x86_pat_bits = X86_PG_PTE_PAT | X86_PG_PDE_PAT; switch (pmap->pm_type) { case PT_X86: case PT_RVI: /* Verify that both PAT bits are not set at the same time */ KASSERT((entry & x86_pat_bits) != x86_pat_bits, ("Invalid PAT bits in entry %#lx", entry)); /* Swap the PAT bits if one of them is set */ if ((entry & x86_pat_bits) != 0) entry ^= x86_pat_bits; break; case PT_EPT: /* * Nothing to do - the memory attributes are represented * the same way for regular pages and superpages. */ break; default: panic("pmap_switch_pat_bits: bad pm_type %d", pmap->pm_type); } return (entry); } boolean_t pmap_is_valid_memattr(pmap_t pmap __unused, vm_memattr_t mode) { return (mode >= 0 && mode < PAT_INDEX_SIZE && pat_index[(int)mode] >= 0); } /* * Determine the appropriate bits to set in a PTE or PDE for a specified * caching mode. 
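 * For example, with the PAT programming above, PAT_WRITE_COMBINING
 * resolves to pat_idx 6 (binary 110), which yields PAT = 1, PCD = 1,
 * and PWT = 0 in the resulting PTE or PDE bits.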
*/ int pmap_cache_bits(pmap_t pmap, int mode, boolean_t is_pde) { int cache_bits, pat_flag, pat_idx; if (!pmap_is_valid_memattr(pmap, mode)) panic("Unknown caching mode %d\n", mode); switch (pmap->pm_type) { case PT_X86: case PT_RVI: /* The PAT bit is different for PTE's and PDE's. */ pat_flag = is_pde ? X86_PG_PDE_PAT : X86_PG_PTE_PAT; /* Map the caching mode to a PAT index. */ pat_idx = pat_index[mode]; /* Map the 3-bit index value into the PAT, PCD, and PWT bits. */ cache_bits = 0; if (pat_idx & 0x4) cache_bits |= pat_flag; if (pat_idx & 0x2) cache_bits |= PG_NC_PCD; if (pat_idx & 0x1) cache_bits |= PG_NC_PWT; break; case PT_EPT: cache_bits = EPT_PG_IGNORE_PAT | EPT_PG_MEMORY_TYPE(mode); break; default: panic("unsupported pmap type %d", pmap->pm_type); } return (cache_bits); } static int pmap_cache_mask(pmap_t pmap, boolean_t is_pde) { int mask; switch (pmap->pm_type) { case PT_X86: case PT_RVI: mask = is_pde ? X86_PG_PDE_CACHE : X86_PG_PTE_CACHE; break; case PT_EPT: mask = EPT_PG_IGNORE_PAT | EPT_PG_MEMORY_TYPE(0x7); break; default: panic("pmap_cache_mask: invalid pm_type %d", pmap->pm_type); } return (mask); } bool pmap_ps_enabled(pmap_t pmap) { return (pg_ps_enabled && (pmap->pm_flags & PMAP_PDE_SUPERPAGE) != 0); } static void pmap_update_pde_store(pmap_t pmap, pd_entry_t *pde, pd_entry_t newpde) { switch (pmap->pm_type) { case PT_X86: break; case PT_RVI: case PT_EPT: /* * XXX * This is a little bogus since the generation number is * supposed to be bumped up when a region of the address * space is invalidated in the page tables. * * In this case the old PDE entry is valid but yet we want * to make sure that any mappings using the old entry are * invalidated in the TLB. * * The reason this works as expected is because we rendezvous * "all" host cpus and force any vcpu context to exit as a * side-effect. */ atomic_add_acq_long(&pmap->pm_eptgen, 1); break; default: panic("pmap_update_pde_store: bad pm_type %d", pmap->pm_type); } pde_store(pde, newpde); } /* * After changing the page size for the specified virtual address in the page * table, flush the corresponding entries from the processor's TLB. Only the * calling processor's TLB is affected. * * The calling thread must be pinned to a processor. */ static void pmap_update_pde_invalidate(pmap_t pmap, vm_offset_t va, pd_entry_t newpde) { pt_entry_t PG_G; if (pmap_type_guest(pmap)) return; KASSERT(pmap->pm_type == PT_X86, ("pmap_update_pde_invalidate: invalid type %d", pmap->pm_type)); PG_G = pmap_global_bit(pmap); if ((newpde & PG_PS) == 0) /* Demotion: flush a specific 2MB page mapping. */ invlpg(va); else if ((newpde & PG_G) == 0) /* * Promotion: flush every 4KB page mapping from the TLB * because there are too many to flush individually. */ invltlb(); else { /* * Promotion: flush every 4KB page mapping from the TLB, * including any global (PG_G) mappings. */ invltlb_glob(); } } #ifdef SMP /* * For SMP, these functions have to use the IPI mechanism for coherence. * * N.B.: Before calling any of the following TLB invalidation functions, * the calling processor must ensure that all stores updating a non- * kernel page table are globally performed. Otherwise, another * processor could cache an old, pre-update entry without being * invalidated. This can happen one of two ways: (1) The pmap becomes * active on another processor after its pm_active field is checked by * one of the following functions but before a store updating the page * table is globally performed. 
(2) The pmap becomes active on another * processor before its pm_active field is checked but due to * speculative loads one of the following functions stills reads the * pmap as inactive on the other processor. * * The kernel page table is exempt because its pm_active field is * immutable. The kernel page table is always active on every * processor. */ /* * Interrupt the cpus that are executing in the guest context. * This will force the vcpu to exit and the cached EPT mappings * will be invalidated by the host before the next vmresume. */ static __inline void pmap_invalidate_ept(pmap_t pmap) { int ipinum; sched_pin(); KASSERT(!CPU_ISSET(curcpu, &pmap->pm_active), ("pmap_invalidate_ept: absurd pm_active")); /* * The TLB mappings associated with a vcpu context are not * flushed each time a different vcpu is chosen to execute. * * This is in contrast with a process's vtop mappings that * are flushed from the TLB on each context switch. * * Therefore we need to do more than just a TLB shootdown on * the active cpus in 'pmap->pm_active'. To do this we keep * track of the number of invalidations performed on this pmap. * * Each vcpu keeps a cache of this counter and compares it * just before a vmresume. If the counter is out-of-date an * invept will be done to flush stale mappings from the TLB. */ atomic_add_acq_long(&pmap->pm_eptgen, 1); /* * Force the vcpu to exit and trap back into the hypervisor. */ ipinum = pmap->pm_flags & PMAP_NESTED_IPIMASK; ipi_selected(pmap->pm_active, ipinum); sched_unpin(); } static cpuset_t pmap_invalidate_cpu_mask(pmap_t pmap) { return (pmap == kernel_pmap ? all_cpus : pmap->pm_active); } static inline void pmap_invalidate_page_pcid(pmap_t pmap, vm_offset_t va, const bool invpcid_works1) { struct invpcid_descr d; uint64_t kcr3, ucr3; uint32_t pcid; u_int cpuid, i; cpuid = PCPU_GET(cpuid); if (pmap == PCPU_GET(curpmap)) { if (pmap->pm_ucr3 != PMAP_NO_CR3) { /* * Because pm_pcid is recalculated on a * context switch, we must disable switching. * Otherwise, we might use a stale value * below. */ critical_enter(); pcid = pmap->pm_pcids[cpuid].pm_pcid; if (invpcid_works1) { d.pcid = pcid | PMAP_PCID_USER_PT; d.pad = 0; d.addr = va; invpcid(&d, INVPCID_ADDR); } else { kcr3 = pmap->pm_cr3 | pcid | CR3_PCID_SAVE; ucr3 = pmap->pm_ucr3 | pcid | PMAP_PCID_USER_PT | CR3_PCID_SAVE; pmap_pti_pcid_invlpg(ucr3, kcr3, va); } critical_exit(); } } else pmap->pm_pcids[cpuid].pm_gen = 0; CPU_FOREACH(i) { if (cpuid != i) pmap->pm_pcids[i].pm_gen = 0; } /* * The fence is between stores to pm_gen and the read of the * pm_active mask. We need to ensure that it is impossible * for us to miss the bit update in pm_active and * simultaneously observe a non-zero pm_gen in * pmap_activate_sw(), otherwise TLB update is missed. * Without the fence, IA32 allows such an outcome. Note that * pm_active is updated by a locked operation, which provides * the reciprocal fence. */ atomic_thread_fence_seq_cst(); } static void pmap_invalidate_page_pcid_invpcid(pmap_t pmap, vm_offset_t va) { pmap_invalidate_page_pcid(pmap, va, true); } static void pmap_invalidate_page_pcid_noinvpcid(pmap_t pmap, vm_offset_t va) { pmap_invalidate_page_pcid(pmap, va, false); } static void pmap_invalidate_page_nopcid(pmap_t pmap, vm_offset_t va) { } DEFINE_IFUNC(static, void, pmap_invalidate_page_mode, (pmap_t, vm_offset_t), static) { if (pmap_pcid_enabled) return (invpcid_works ? 
pmap_invalidate_page_pcid_invpcid : pmap_invalidate_page_pcid_noinvpcid); return (pmap_invalidate_page_nopcid); } void pmap_invalidate_page(pmap_t pmap, vm_offset_t va) { if (pmap_type_guest(pmap)) { pmap_invalidate_ept(pmap); return; } KASSERT(pmap->pm_type == PT_X86, ("pmap_invalidate_page: invalid type %d", pmap->pm_type)); sched_pin(); if (pmap == kernel_pmap) { invlpg(va); } else { if (pmap == PCPU_GET(curpmap)) invlpg(va); pmap_invalidate_page_mode(pmap, va); } smp_masked_invlpg(pmap_invalidate_cpu_mask(pmap), va, pmap); sched_unpin(); } /* 4k PTEs -- Chosen to exceed the total size of Broadwell L2 TLB */ #define PMAP_INVLPG_THRESHOLD (4 * 1024 * PAGE_SIZE) static void pmap_invalidate_range_pcid(pmap_t pmap, vm_offset_t sva, vm_offset_t eva, const bool invpcid_works1) { struct invpcid_descr d; uint64_t kcr3, ucr3; uint32_t pcid; u_int cpuid, i; cpuid = PCPU_GET(cpuid); if (pmap == PCPU_GET(curpmap)) { if (pmap->pm_ucr3 != PMAP_NO_CR3) { critical_enter(); pcid = pmap->pm_pcids[cpuid].pm_pcid; if (invpcid_works1) { d.pcid = pcid | PMAP_PCID_USER_PT; d.pad = 0; d.addr = sva; for (; d.addr < eva; d.addr += PAGE_SIZE) invpcid(&d, INVPCID_ADDR); } else { kcr3 = pmap->pm_cr3 | pcid | CR3_PCID_SAVE; ucr3 = pmap->pm_ucr3 | pcid | PMAP_PCID_USER_PT | CR3_PCID_SAVE; pmap_pti_pcid_invlrng(ucr3, kcr3, sva, eva); } critical_exit(); } } else pmap->pm_pcids[cpuid].pm_gen = 0; CPU_FOREACH(i) { if (cpuid != i) pmap->pm_pcids[i].pm_gen = 0; } /* See the comment in pmap_invalidate_page_pcid(). */ atomic_thread_fence_seq_cst(); } static void pmap_invalidate_range_pcid_invpcid(pmap_t pmap, vm_offset_t sva, vm_offset_t eva) { pmap_invalidate_range_pcid(pmap, sva, eva, true); } static void pmap_invalidate_range_pcid_noinvpcid(pmap_t pmap, vm_offset_t sva, vm_offset_t eva) { pmap_invalidate_range_pcid(pmap, sva, eva, false); } static void pmap_invalidate_range_nopcid(pmap_t pmap, vm_offset_t sva, vm_offset_t eva) { } DEFINE_IFUNC(static, void, pmap_invalidate_range_mode, (pmap_t, vm_offset_t, vm_offset_t), static) { if (pmap_pcid_enabled) return (invpcid_works ? 
pmap_invalidate_range_pcid_invpcid : pmap_invalidate_range_pcid_noinvpcid); return (pmap_invalidate_range_nopcid); } void pmap_invalidate_range(pmap_t pmap, vm_offset_t sva, vm_offset_t eva) { vm_offset_t addr; if (eva - sva >= PMAP_INVLPG_THRESHOLD) { pmap_invalidate_all(pmap); return; } if (pmap_type_guest(pmap)) { pmap_invalidate_ept(pmap); return; } KASSERT(pmap->pm_type == PT_X86, ("pmap_invalidate_range: invalid type %d", pmap->pm_type)); sched_pin(); if (pmap == kernel_pmap) { for (addr = sva; addr < eva; addr += PAGE_SIZE) invlpg(addr); } else { if (pmap == PCPU_GET(curpmap)) { for (addr = sva; addr < eva; addr += PAGE_SIZE) invlpg(addr); } pmap_invalidate_range_mode(pmap, sva, eva); } smp_masked_invlpg_range(pmap_invalidate_cpu_mask(pmap), sva, eva, pmap); sched_unpin(); } static inline void pmap_invalidate_all_pcid(pmap_t pmap, bool invpcid_works1) { struct invpcid_descr d; uint64_t kcr3, ucr3; uint32_t pcid; u_int cpuid, i; if (pmap == kernel_pmap) { if (invpcid_works1) { bzero(&d, sizeof(d)); invpcid(&d, INVPCID_CTXGLOB); } else { invltlb_glob(); } } else { cpuid = PCPU_GET(cpuid); if (pmap == PCPU_GET(curpmap)) { critical_enter(); pcid = pmap->pm_pcids[cpuid].pm_pcid; if (invpcid_works1) { d.pcid = pcid; d.pad = 0; d.addr = 0; invpcid(&d, INVPCID_CTX); if (pmap->pm_ucr3 != PMAP_NO_CR3) { d.pcid |= PMAP_PCID_USER_PT; invpcid(&d, INVPCID_CTX); } } else { kcr3 = pmap->pm_cr3 | pcid; ucr3 = pmap->pm_ucr3; if (ucr3 != PMAP_NO_CR3) { ucr3 |= pcid | PMAP_PCID_USER_PT; pmap_pti_pcid_invalidate(ucr3, kcr3); } else { load_cr3(kcr3); } } critical_exit(); } else pmap->pm_pcids[cpuid].pm_gen = 0; CPU_FOREACH(i) { if (cpuid != i) pmap->pm_pcids[i].pm_gen = 0; } } /* See the comment in pmap_invalidate_page_pcid(). */ atomic_thread_fence_seq_cst(); } static void pmap_invalidate_all_pcid_invpcid(pmap_t pmap) { pmap_invalidate_all_pcid(pmap, true); } static void pmap_invalidate_all_pcid_noinvpcid(pmap_t pmap) { pmap_invalidate_all_pcid(pmap, false); } static void pmap_invalidate_all_nopcid(pmap_t pmap) { if (pmap == kernel_pmap) invltlb_glob(); else if (pmap == PCPU_GET(curpmap)) invltlb(); } DEFINE_IFUNC(static, void, pmap_invalidate_all_mode, (pmap_t), static) { if (pmap_pcid_enabled) return (invpcid_works ? pmap_invalidate_all_pcid_invpcid : pmap_invalidate_all_pcid_noinvpcid); return (pmap_invalidate_all_nopcid); } void pmap_invalidate_all(pmap_t pmap) { if (pmap_type_guest(pmap)) { pmap_invalidate_ept(pmap); return; } KASSERT(pmap->pm_type == PT_X86, ("pmap_invalidate_all: invalid type %d", pmap->pm_type)); sched_pin(); pmap_invalidate_all_mode(pmap); smp_masked_invltlb(pmap_invalidate_cpu_mask(pmap), pmap); sched_unpin(); } void pmap_invalidate_cache(void) { sched_pin(); wbinvd(); smp_cache_flush(); sched_unpin(); } struct pde_action { cpuset_t invalidate; /* processors that invalidate their TLB */ pmap_t pmap; vm_offset_t va; pd_entry_t *pde; pd_entry_t newpde; u_int store; /* processor that updates the PDE */ }; static void pmap_update_pde_action(void *arg) { struct pde_action *act = arg; if (act->store == PCPU_GET(cpuid)) pmap_update_pde_store(act->pmap, act->pde, act->newpde); } static void pmap_update_pde_teardown(void *arg) { struct pde_action *act = arg; if (CPU_ISSET(PCPU_GET(cpuid), &act->invalidate)) pmap_update_pde_invalidate(act->pmap, act->va, act->newpde); } /* * Change the page size for the specified virtual address in a way that * prevents any possibility of the TLB ever having two entries that map the * same virtual address using different page sizes. 
This is the recommended * workaround for Erratum 383 on AMD Family 10h processors. It prevents a * machine check exception for a TLB state that is improperly diagnosed as a * hardware error. */ static void pmap_update_pde(pmap_t pmap, vm_offset_t va, pd_entry_t *pde, pd_entry_t newpde) { struct pde_action act; cpuset_t active, other_cpus; u_int cpuid; sched_pin(); cpuid = PCPU_GET(cpuid); other_cpus = all_cpus; CPU_CLR(cpuid, &other_cpus); if (pmap == kernel_pmap || pmap_type_guest(pmap)) active = all_cpus; else { active = pmap->pm_active; } if (CPU_OVERLAP(&active, &other_cpus)) { act.store = cpuid; act.invalidate = active; act.va = va; act.pmap = pmap; act.pde = pde; act.newpde = newpde; CPU_SET(cpuid, &active); smp_rendezvous_cpus(active, smp_no_rendezvous_barrier, pmap_update_pde_action, pmap_update_pde_teardown, &act); } else { pmap_update_pde_store(pmap, pde, newpde); if (CPU_ISSET(cpuid, &active)) pmap_update_pde_invalidate(pmap, va, newpde); } sched_unpin(); } #else /* !SMP */ /* * Normal, non-SMP, invalidation functions. */ void pmap_invalidate_page(pmap_t pmap, vm_offset_t va) { struct invpcid_descr d; uint64_t kcr3, ucr3; uint32_t pcid; if (pmap->pm_type == PT_RVI || pmap->pm_type == PT_EPT) { pmap->pm_eptgen++; return; } KASSERT(pmap->pm_type == PT_X86, ("pmap_invalidate_range: unknown type %d", pmap->pm_type)); if (pmap == kernel_pmap || pmap == PCPU_GET(curpmap)) { invlpg(va); if (pmap == PCPU_GET(curpmap) && pmap_pcid_enabled && pmap->pm_ucr3 != PMAP_NO_CR3) { critical_enter(); pcid = pmap->pm_pcids[0].pm_pcid; if (invpcid_works) { d.pcid = pcid | PMAP_PCID_USER_PT; d.pad = 0; d.addr = va; invpcid(&d, INVPCID_ADDR); } else { kcr3 = pmap->pm_cr3 | pcid | CR3_PCID_SAVE; ucr3 = pmap->pm_ucr3 | pcid | PMAP_PCID_USER_PT | CR3_PCID_SAVE; pmap_pti_pcid_invlpg(ucr3, kcr3, va); } critical_exit(); } } else if (pmap_pcid_enabled) pmap->pm_pcids[0].pm_gen = 0; } void pmap_invalidate_range(pmap_t pmap, vm_offset_t sva, vm_offset_t eva) { struct invpcid_descr d; vm_offset_t addr; uint64_t kcr3, ucr3; if (pmap->pm_type == PT_RVI || pmap->pm_type == PT_EPT) { pmap->pm_eptgen++; return; } KASSERT(pmap->pm_type == PT_X86, ("pmap_invalidate_range: unknown type %d", pmap->pm_type)); if (pmap == kernel_pmap || pmap == PCPU_GET(curpmap)) { for (addr = sva; addr < eva; addr += PAGE_SIZE) invlpg(addr); if (pmap == PCPU_GET(curpmap) && pmap_pcid_enabled && pmap->pm_ucr3 != PMAP_NO_CR3) { critical_enter(); if (invpcid_works) { d.pcid = pmap->pm_pcids[0].pm_pcid | PMAP_PCID_USER_PT; d.pad = 0; d.addr = sva; for (; d.addr < eva; d.addr += PAGE_SIZE) invpcid(&d, INVPCID_ADDR); } else { kcr3 = pmap->pm_cr3 | pmap->pm_pcids[0]. pm_pcid | CR3_PCID_SAVE; ucr3 = pmap->pm_ucr3 | pmap->pm_pcids[0]. 
pm_pcid | PMAP_PCID_USER_PT | CR3_PCID_SAVE; pmap_pti_pcid_invlrng(ucr3, kcr3, sva, eva); } critical_exit(); } } else if (pmap_pcid_enabled) { pmap->pm_pcids[0].pm_gen = 0; } } void pmap_invalidate_all(pmap_t pmap) { struct invpcid_descr d; uint64_t kcr3, ucr3; if (pmap->pm_type == PT_RVI || pmap->pm_type == PT_EPT) { pmap->pm_eptgen++; return; } KASSERT(pmap->pm_type == PT_X86, ("pmap_invalidate_all: unknown type %d", pmap->pm_type)); if (pmap == kernel_pmap) { if (pmap_pcid_enabled && invpcid_works) { bzero(&d, sizeof(d)); invpcid(&d, INVPCID_CTXGLOB); } else { invltlb_glob(); } } else if (pmap == PCPU_GET(curpmap)) { if (pmap_pcid_enabled) { critical_enter(); if (invpcid_works) { d.pcid = pmap->pm_pcids[0].pm_pcid; d.pad = 0; d.addr = 0; invpcid(&d, INVPCID_CTX); if (pmap->pm_ucr3 != PMAP_NO_CR3) { d.pcid |= PMAP_PCID_USER_PT; invpcid(&d, INVPCID_CTX); } } else { kcr3 = pmap->pm_cr3 | pmap->pm_pcids[0].pm_pcid; if (pmap->pm_ucr3 != PMAP_NO_CR3) { ucr3 = pmap->pm_ucr3 | pmap->pm_pcids[ 0].pm_pcid | PMAP_PCID_USER_PT; pmap_pti_pcid_invalidate(ucr3, kcr3); } else load_cr3(kcr3); } critical_exit(); } else { invltlb(); } } else if (pmap_pcid_enabled) { pmap->pm_pcids[0].pm_gen = 0; } } PMAP_INLINE void pmap_invalidate_cache(void) { wbinvd(); } static void pmap_update_pde(pmap_t pmap, vm_offset_t va, pd_entry_t *pde, pd_entry_t newpde) { pmap_update_pde_store(pmap, pde, newpde); if (pmap == kernel_pmap || pmap == PCPU_GET(curpmap)) pmap_update_pde_invalidate(pmap, va, newpde); else pmap->pm_pcids[0].pm_gen = 0; } #endif /* !SMP */ static void pmap_invalidate_pde_page(pmap_t pmap, vm_offset_t va, pd_entry_t pde) { /* * When the PDE has PG_PROMOTED set, the 2MB page mapping was created * by a promotion that did not invalidate the 512 4KB page mappings * that might exist in the TLB. Consequently, at this point, the TLB * may hold both 4KB and 2MB page mappings for the address range [va, * va + NBPDR). Therefore, the entire range must be invalidated here. * In contrast, when PG_PROMOTED is clear, the TLB will not hold any * 4KB page mappings for the address range [va, va + NBPDR), and so a * single INVLPG suffices to invalidate the 2MB page mapping from the * TLB. */ if ((pde & PG_PROMOTED) != 0) pmap_invalidate_range(pmap, va, va + NBPDR - 1); else pmap_invalidate_page(pmap, va); } DEFINE_IFUNC(, void, pmap_invalidate_cache_range, (vm_offset_t sva, vm_offset_t eva), static) { if ((cpu_feature & CPUID_SS) != 0) return (pmap_invalidate_cache_range_selfsnoop); if ((cpu_feature & CPUID_CLFSH) != 0) return (pmap_force_invalidate_cache_range); return (pmap_invalidate_cache_range_all); } #define PMAP_CLFLUSH_THRESHOLD (2 * 1024 * 1024) static void pmap_invalidate_cache_range_check_align(vm_offset_t sva, vm_offset_t eva) { KASSERT((sva & PAGE_MASK) == 0, ("pmap_invalidate_cache_range: sva not page-aligned")); KASSERT((eva & PAGE_MASK) == 0, ("pmap_invalidate_cache_range: eva not page-aligned")); } static void pmap_invalidate_cache_range_selfsnoop(vm_offset_t sva, vm_offset_t eva) { pmap_invalidate_cache_range_check_align(sva, eva); } void pmap_force_invalidate_cache_range(vm_offset_t sva, vm_offset_t eva) { sva &= ~(vm_offset_t)(cpu_clflush_line_size - 1); /* * XXX: Some CPUs fault, hang, or trash the local APIC * registers if we use CLFLUSH on the local APIC range. The * local APIC is always uncached, so we don't need to flush * for that range anyway. */ if (pmap_kextract(sva) == lapic_paddr) return; if ((cpu_stdext_feature & CPUID_STDEXT_CLFLUSHOPT) != 0) { /* * Do per-cache line flush. 
Use the sfence * instruction to insure that previous stores are * included in the write-back. The processor * propagates flush to other processors in the cache * coherence domain. */ sfence(); for (; sva < eva; sva += cpu_clflush_line_size) clflushopt(sva); sfence(); } else { /* * Writes are ordered by CLFLUSH on Intel CPUs. */ if (cpu_vendor_id != CPU_VENDOR_INTEL) mfence(); for (; sva < eva; sva += cpu_clflush_line_size) clflush(sva); if (cpu_vendor_id != CPU_VENDOR_INTEL) mfence(); } } static void pmap_invalidate_cache_range_all(vm_offset_t sva, vm_offset_t eva) { pmap_invalidate_cache_range_check_align(sva, eva); pmap_invalidate_cache(); } /* * Remove the specified set of pages from the data and instruction caches. * * In contrast to pmap_invalidate_cache_range(), this function does not * rely on the CPU's self-snoop feature, because it is intended for use * when moving pages into a different cache domain. */ void pmap_invalidate_cache_pages(vm_page_t *pages, int count) { vm_offset_t daddr, eva; int i; bool useclflushopt; useclflushopt = (cpu_stdext_feature & CPUID_STDEXT_CLFLUSHOPT) != 0; if (count >= PMAP_CLFLUSH_THRESHOLD / PAGE_SIZE || ((cpu_feature & CPUID_CLFSH) == 0 && !useclflushopt)) pmap_invalidate_cache(); else { if (useclflushopt) sfence(); else if (cpu_vendor_id != CPU_VENDOR_INTEL) mfence(); for (i = 0; i < count; i++) { daddr = PHYS_TO_DMAP(VM_PAGE_TO_PHYS(pages[i])); eva = daddr + PAGE_SIZE; for (; daddr < eva; daddr += cpu_clflush_line_size) { if (useclflushopt) clflushopt(daddr); else clflush(daddr); } } if (useclflushopt) sfence(); else if (cpu_vendor_id != CPU_VENDOR_INTEL) mfence(); } } void pmap_flush_cache_range(vm_offset_t sva, vm_offset_t eva) { pmap_invalidate_cache_range_check_align(sva, eva); if ((cpu_stdext_feature & CPUID_STDEXT_CLWB) == 0) { pmap_force_invalidate_cache_range(sva, eva); return; } /* See comment in pmap_force_invalidate_cache_range(). */ if (pmap_kextract(sva) == lapic_paddr) return; sfence(); for (; sva < eva; sva += cpu_clflush_line_size) clwb(sva); sfence(); } void pmap_flush_cache_phys_range(vm_paddr_t spa, vm_paddr_t epa, vm_memattr_t mattr) { pt_entry_t *pte; vm_offset_t vaddr; int error, pte_bits; KASSERT((spa & PAGE_MASK) == 0, ("pmap_flush_cache_phys_range: spa not page-aligned")); KASSERT((epa & PAGE_MASK) == 0, ("pmap_flush_cache_phys_range: epa not page-aligned")); if (spa < dmaplimit) { pmap_flush_cache_range(PHYS_TO_DMAP(spa), PHYS_TO_DMAP(MIN( dmaplimit, epa))); if (dmaplimit >= epa) return; spa = dmaplimit; } pte_bits = pmap_cache_bits(kernel_pmap, mattr, 0) | X86_PG_RW | X86_PG_V; error = vmem_alloc(kernel_arena, PAGE_SIZE, M_BESTFIT | M_WAITOK, &vaddr); KASSERT(error == 0, ("vmem_alloc failed: %d", error)); pte = vtopte(vaddr); for (; spa < epa; spa += PAGE_SIZE) { sched_pin(); pte_store(pte, spa | pte_bits); invlpg(vaddr); /* XXXKIB sfences inside flush_cache_range are excessive */ pmap_flush_cache_range(vaddr, vaddr + PAGE_SIZE); sched_unpin(); } vmem_free(kernel_arena, vaddr, PAGE_SIZE); } /* * Routine: pmap_extract * Function: * Extract the physical page address associated * with the given map/virtual_address pair. 
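 *	The returned physical address is left at 0 when no valid
 *	mapping exists; e.g. (illustrative only):
 *
 *		pa = pmap_extract(kernel_pmap, va);
 *
 *	yields 0 for an unmapped va.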
*/ vm_paddr_t pmap_extract(pmap_t pmap, vm_offset_t va) { pdp_entry_t *pdpe; pd_entry_t *pde; pt_entry_t *pte, PG_V; vm_paddr_t pa; pa = 0; PG_V = pmap_valid_bit(pmap); PMAP_LOCK(pmap); pdpe = pmap_pdpe(pmap, va); if (pdpe != NULL && (*pdpe & PG_V) != 0) { if ((*pdpe & PG_PS) != 0) pa = (*pdpe & PG_PS_FRAME) | (va & PDPMASK); else { pde = pmap_pdpe_to_pde(pdpe, va); if ((*pde & PG_V) != 0) { if ((*pde & PG_PS) != 0) { pa = (*pde & PG_PS_FRAME) | (va & PDRMASK); } else { pte = pmap_pde_to_pte(pde, va); pa = (*pte & PG_FRAME) | (va & PAGE_MASK); } } } } PMAP_UNLOCK(pmap); return (pa); } /* * Routine: pmap_extract_and_hold * Function: * Atomically extract and hold the physical page * with the given pmap and virtual address pair * if that mapping permits the given protection. */ vm_page_t pmap_extract_and_hold(pmap_t pmap, vm_offset_t va, vm_prot_t prot) { pd_entry_t pde, *pdep; pt_entry_t pte, PG_RW, PG_V; vm_paddr_t pa; vm_page_t m; pa = 0; m = NULL; PG_RW = pmap_rw_bit(pmap); PG_V = pmap_valid_bit(pmap); PMAP_LOCK(pmap); retry: pdep = pmap_pde(pmap, va); if (pdep != NULL && (pde = *pdep)) { if (pde & PG_PS) { if ((pde & PG_RW) || (prot & VM_PROT_WRITE) == 0) { if (vm_page_pa_tryrelock(pmap, (pde & PG_PS_FRAME) | (va & PDRMASK), &pa)) goto retry; m = PHYS_TO_VM_PAGE(pa); } } else { pte = *pmap_pde_to_pte(pdep, va); if ((pte & PG_V) && ((pte & PG_RW) || (prot & VM_PROT_WRITE) == 0)) { if (vm_page_pa_tryrelock(pmap, pte & PG_FRAME, &pa)) goto retry; m = PHYS_TO_VM_PAGE(pa); } } if (m != NULL) vm_page_hold(m); } PA_UNLOCK_COND(pa); PMAP_UNLOCK(pmap); return (m); } vm_paddr_t pmap_kextract(vm_offset_t va) { pd_entry_t pde; vm_paddr_t pa; if (va >= DMAP_MIN_ADDRESS && va < DMAP_MAX_ADDRESS) { pa = DMAP_TO_PHYS(va); } else { pde = *vtopde(va); if (pde & PG_PS) { pa = (pde & PG_PS_FRAME) | (va & PDRMASK); } else { /* * Beware of a concurrent promotion that changes the * PDE at this point! For example, vtopte() must not * be used to access the PTE because it would use the * new PDE. It is, however, safe to use the old PDE * because the page table page is preserved by the * promotion. */ pa = *pmap_pde_to_pte(&pde, va); pa = (pa & PG_FRAME) | (va & PAGE_MASK); } } return (pa); } /*************************************************** * Low level mapping routines..... ***************************************************/ /* * Add a wired page to the kva. * Note: not SMP coherent. */ PMAP_INLINE void pmap_kenter(vm_offset_t va, vm_paddr_t pa) { pt_entry_t *pte; pte = vtopte(va); pte_store(pte, pa | X86_PG_RW | X86_PG_V | pg_g); } static __inline void pmap_kenter_attr(vm_offset_t va, vm_paddr_t pa, int mode) { pt_entry_t *pte; int cache_bits; pte = vtopte(va); cache_bits = pmap_cache_bits(kernel_pmap, mode, 0); pte_store(pte, pa | X86_PG_RW | X86_PG_V | pg_g | cache_bits); } /* * Remove a page from the kernel pagetables. * Note: not SMP coherent. */ PMAP_INLINE void pmap_kremove(vm_offset_t va) { pt_entry_t *pte; pte = vtopte(va); pte_clear(pte); } /* * Used to map a range of physical addresses into kernel * virtual address space. * * The value passed in '*virt' is a suggested virtual address for * the mapping. Architectures which can support a direct-mapped * physical to virtual region can return the appropriate address * within that region, leaving '*virt' unchanged. Other * architectures should map the pages starting at '*virt' and * update '*virt' with the first usable address after the mapped * region. 
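* On amd64 the direct map already provides a linear mapping of physical memory, so the implementation below simply returns PHYS_TO_DMAP(start) and leaves '*virt' untouched; the call consumes no kernel virtual address space.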
*/ vm_offset_t pmap_map(vm_offset_t *virt, vm_paddr_t start, vm_paddr_t end, int prot) { return PHYS_TO_DMAP(start); } /* * Add a list of wired pages to the kva. * This routine is only used for temporary * kernel mappings that do not need to have * page modification or references recorded. * Note that old mappings are simply written * over. The page *must* be wired. * Note: SMP coherent. Uses a ranged shootdown IPI. */ void pmap_qenter(vm_offset_t sva, vm_page_t *ma, int count) { pt_entry_t *endpte, oldpte, pa, *pte; vm_page_t m; int cache_bits; oldpte = 0; pte = vtopte(sva); endpte = pte + count; while (pte < endpte) { m = *ma++; cache_bits = pmap_cache_bits(kernel_pmap, m->md.pat_mode, 0); pa = VM_PAGE_TO_PHYS(m) | cache_bits; if ((*pte & (PG_FRAME | X86_PG_PTE_CACHE)) != pa) { oldpte |= *pte; pte_store(pte, pa | pg_g | pg_nx | X86_PG_RW | X86_PG_V); } pte++; } if (__predict_false((oldpte & X86_PG_V) != 0)) pmap_invalidate_range(kernel_pmap, sva, sva + count * PAGE_SIZE); } /* * This routine tears out page mappings from the * kernel -- it is meant only for temporary mappings. * Note: SMP coherent. Uses a ranged shootdown IPI. */ void pmap_qremove(vm_offset_t sva, int count) { vm_offset_t va; va = sva; while (count-- > 0) { KASSERT(va >= VM_MIN_KERNEL_ADDRESS, ("usermode va %lx", va)); pmap_kremove(va); va += PAGE_SIZE; } pmap_invalidate_range(kernel_pmap, sva, va); } /*************************************************** * Page table page management routines..... ***************************************************/ /* * Schedule the specified unused page table page to be freed. Specifically, * add the page to the specified list of pages that will be released to the * physical memory manager after the TLB has been updated. */ static __inline void pmap_add_delayed_free_list(vm_page_t m, struct spglist *free, boolean_t set_PG_ZERO) { if (set_PG_ZERO) m->flags |= PG_ZERO; else m->flags &= ~PG_ZERO; SLIST_INSERT_HEAD(free, m, plinks.s.ss); } /* * Inserts the specified page table page into the specified pmap's collection * of idle page table pages. Each of a pmap's page table pages is responsible * for mapping a distinct range of virtual addresses. The pmap's collection is * ordered by this virtual address range. */ static __inline int pmap_insert_pt_page(pmap_t pmap, vm_page_t mpte) { PMAP_LOCK_ASSERT(pmap, MA_OWNED); return (vm_radix_insert(&pmap->pm_root, mpte)); } /* * Removes the page table page mapping the specified virtual address from the * specified pmap's collection of idle page table pages, and returns it, or * NULL if there is no page table page corresponding to the specified virtual * address. */ static __inline vm_page_t pmap_remove_pt_page(pmap_t pmap, vm_offset_t va) { PMAP_LOCK_ASSERT(pmap, MA_OWNED); return (vm_radix_remove(&pmap->pm_root, pmap_pde_pindex(va))); } /* * Decrements a page table page's wire count, which is used to record the * number of valid page table entries within the page. If the wire count * drops to zero, then the page table page is unmapped. Returns TRUE if the * page table page was unmapped and FALSE otherwise.
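* For example, a page table page mapping 100 valid PTEs has a wire count of 100; removing the last PTE drops the count to zero and _pmap_unwire_ptp() unmaps the page. The unwiring can cascade upward: freeing a PT page unholds its PD page, which may in turn unhold its PDP page.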
*/ static inline boolean_t pmap_unwire_ptp(pmap_t pmap, vm_offset_t va, vm_page_t m, struct spglist *free) { --m->wire_count; if (m->wire_count == 0) { _pmap_unwire_ptp(pmap, va, m, free); return (TRUE); } else return (FALSE); } static void _pmap_unwire_ptp(pmap_t pmap, vm_offset_t va, vm_page_t m, struct spglist *free) { PMAP_LOCK_ASSERT(pmap, MA_OWNED); /* * unmap the page table page */ if (m->pindex >= (NUPDE + NUPDPE)) { /* PDP page */ pml4_entry_t *pml4; pml4 = pmap_pml4e(pmap, va); *pml4 = 0; if (pmap->pm_pml4u != NULL && va <= VM_MAXUSER_ADDRESS) { pml4 = &pmap->pm_pml4u[pmap_pml4e_index(va)]; *pml4 = 0; } } else if (m->pindex >= NUPDE) { /* PD page */ pdp_entry_t *pdp; pdp = pmap_pdpe(pmap, va); *pdp = 0; } else { /* PTE page */ pd_entry_t *pd; pd = pmap_pde(pmap, va); *pd = 0; } pmap_resident_count_dec(pmap, 1); if (m->pindex < NUPDE) { /* We just released a PT, unhold the matching PD */ vm_page_t pdpg; pdpg = PHYS_TO_VM_PAGE(*pmap_pdpe(pmap, va) & PG_FRAME); pmap_unwire_ptp(pmap, va, pdpg, free); } if (m->pindex >= NUPDE && m->pindex < (NUPDE + NUPDPE)) { /* We just released a PD, unhold the matching PDP */ vm_page_t pdppg; pdppg = PHYS_TO_VM_PAGE(*pmap_pml4e(pmap, va) & PG_FRAME); pmap_unwire_ptp(pmap, va, pdppg, free); } /* * Put page on a list so that it is released after * *ALL* TLB shootdown is done */ pmap_add_delayed_free_list(m, free, TRUE); } /* * After removing a page table entry, this routine is used to * conditionally free the page, and manage the hold/wire counts. */ static int pmap_unuse_pt(pmap_t pmap, vm_offset_t va, pd_entry_t ptepde, struct spglist *free) { vm_page_t mpte; if (va >= VM_MAXUSER_ADDRESS) return (0); KASSERT(ptepde != 0, ("pmap_unuse_pt: ptepde != 0")); mpte = PHYS_TO_VM_PAGE(ptepde & PG_FRAME); return (pmap_unwire_ptp(pmap, va, mpte, free)); } void pmap_pinit0(pmap_t pmap) { int i; PMAP_LOCK_INIT(pmap); pmap->pm_pml4 = (pml4_entry_t *)PHYS_TO_DMAP(KPML4phys); pmap->pm_pml4u = NULL; pmap->pm_cr3 = KPML4phys; /* hack to keep pmap_pti_pcid_invalidate() alive */ pmap->pm_ucr3 = PMAP_NO_CR3; pmap->pm_root.rt_root = 0; CPU_ZERO(&pmap->pm_active); TAILQ_INIT(&pmap->pm_pvchunk); bzero(&pmap->pm_stats, sizeof pmap->pm_stats); pmap->pm_flags = pmap_flags; CPU_FOREACH(i) { pmap->pm_pcids[i].pm_pcid = PMAP_PCID_KERN + 1; pmap->pm_pcids[i].pm_gen = 1; } pmap_activate_boot(pmap); } void pmap_pinit_pml4(vm_page_t pml4pg) { pml4_entry_t *pm_pml4; int i; pm_pml4 = (pml4_entry_t *)PHYS_TO_DMAP(VM_PAGE_TO_PHYS(pml4pg)); /* Wire in kernel global address entries. */ for (i = 0; i < NKPML4E; i++) { pm_pml4[KPML4BASE + i] = (KPDPphys + ptoa(i)) | X86_PG_RW | X86_PG_V; } for (i = 0; i < ndmpdpphys; i++) { pm_pml4[DMPML4I + i] = (DMPDPphys + ptoa(i)) | X86_PG_RW | X86_PG_V; } /* install self-referential address mapping entry(s) */ pm_pml4[PML4PML4I] = VM_PAGE_TO_PHYS(pml4pg) | X86_PG_V | X86_PG_RW | X86_PG_A | X86_PG_M; /* install large map entries if configured */ for (i = 0; i < lm_ents; i++) pm_pml4[LMSPML4I + i] = kernel_pmap->pm_pml4[LMSPML4I + i]; } static void pmap_pinit_pml4_pti(vm_page_t pml4pg) { pml4_entry_t *pm_pml4; int i; pm_pml4 = (pml4_entry_t *)PHYS_TO_DMAP(VM_PAGE_TO_PHYS(pml4pg)); for (i = 0; i < NPML4EPG; i++) pm_pml4[i] = pti_pml4[i]; } /* * Initialize a preallocated and zeroed pmap structure, * such as one in a vmspace structure. 
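* Ordinary process pmaps are initialized through pmap_pinit() below, which is simply pmap_pinit_type(pmap, PT_X86, pmap_flags); the PT_EPT and PT_RVI variants used for nested paging deliberately omit the host kernel mappings.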
*/ int pmap_pinit_type(pmap_t pmap, enum pmap_type pm_type, int flags) { vm_page_t pml4pg, pml4pgu; vm_paddr_t pml4phys; int i; /* * allocate the page directory page */ pml4pg = vm_page_alloc(NULL, 0, VM_ALLOC_NORMAL | VM_ALLOC_NOOBJ | VM_ALLOC_WIRED | VM_ALLOC_ZERO | VM_ALLOC_WAITOK); pml4phys = VM_PAGE_TO_PHYS(pml4pg); pmap->pm_pml4 = (pml4_entry_t *)PHYS_TO_DMAP(pml4phys); CPU_FOREACH(i) { pmap->pm_pcids[i].pm_pcid = PMAP_PCID_NONE; pmap->pm_pcids[i].pm_gen = 0; } pmap->pm_cr3 = PMAP_NO_CR3; /* initialize to an invalid value */ pmap->pm_ucr3 = PMAP_NO_CR3; pmap->pm_pml4u = NULL; pmap->pm_type = pm_type; if ((pml4pg->flags & PG_ZERO) == 0) pagezero(pmap->pm_pml4); /* * Do not install the host kernel mappings in the nested page * tables. These mappings are meaningless in the guest physical * address space. * Install minimal kernel mappings in PTI case. */ if (pm_type == PT_X86) { pmap->pm_cr3 = pml4phys; pmap_pinit_pml4(pml4pg); if (pti) { pml4pgu = vm_page_alloc(NULL, 0, VM_ALLOC_NORMAL | VM_ALLOC_NOOBJ | VM_ALLOC_WIRED | VM_ALLOC_WAITOK); pmap->pm_pml4u = (pml4_entry_t *)PHYS_TO_DMAP( VM_PAGE_TO_PHYS(pml4pgu)); pmap_pinit_pml4_pti(pml4pgu); pmap->pm_ucr3 = VM_PAGE_TO_PHYS(pml4pgu); } } pmap->pm_root.rt_root = 0; CPU_ZERO(&pmap->pm_active); TAILQ_INIT(&pmap->pm_pvchunk); bzero(&pmap->pm_stats, sizeof pmap->pm_stats); pmap->pm_flags = flags; pmap->pm_eptgen = 0; return (1); } int pmap_pinit(pmap_t pmap) { return (pmap_pinit_type(pmap, PT_X86, pmap_flags)); } /* * This routine is called if the desired page table page does not exist. * * If page table page allocation fails, this routine may sleep before * returning NULL. It sleeps only if a lock pointer was given. * * Note: If a page allocation fails at page table level two or three, * one or two pages may be held during the wait, only to be released * afterwards. This conservative approach is easily argued to avoid * race conditions. */ static vm_page_t _pmap_allocpte(pmap_t pmap, vm_pindex_t ptepindex, struct rwlock **lockp) { vm_page_t m, pdppg, pdpg; pt_entry_t PG_A, PG_M, PG_RW, PG_V; PMAP_LOCK_ASSERT(pmap, MA_OWNED); PG_A = pmap_accessed_bit(pmap); PG_M = pmap_modified_bit(pmap); PG_V = pmap_valid_bit(pmap); PG_RW = pmap_rw_bit(pmap); /* * Allocate a page table page. */ if ((m = vm_page_alloc(NULL, ptepindex, VM_ALLOC_NOOBJ | VM_ALLOC_WIRED | VM_ALLOC_ZERO)) == NULL) { if (lockp != NULL) { RELEASE_PV_LIST_LOCK(lockp); PMAP_UNLOCK(pmap); PMAP_ASSERT_NOT_IN_DI(); vm_wait(NULL); PMAP_LOCK(pmap); } /* * Indicate the need to retry. While waiting, the page table * page may have been allocated. */ return (NULL); } if ((m->flags & PG_ZERO) == 0) pmap_zero_page(m); /* * Map the pagetable page into the process address space, if * it isn't already there. */ if (ptepindex >= (NUPDE + NUPDPE)) { pml4_entry_t *pml4, *pml4u; vm_pindex_t pml4index; /* Wire up a new PDPE page */ pml4index = ptepindex - (NUPDE + NUPDPE); pml4 = &pmap->pm_pml4[pml4index]; *pml4 = VM_PAGE_TO_PHYS(m) | PG_U | PG_RW | PG_V | PG_A | PG_M; if (pmap->pm_pml4u != NULL && pml4index < NUPML4E) { /* * PTI: Make all user-space mappings in the * kernel-mode page table no-execute so that * we detect any programming errors that leave * the kernel-mode page table active on return * to user space. 
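* For example, with PTI active a user mapping reached through the kernel-mode pm_pml4 carries pg_nx, so if a buggy return-to-user path leaves the kernel-mode page table loaded, the first user instruction fetch faults immediately instead of silently executing.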
*/ if (pmap->pm_ucr3 != PMAP_NO_CR3) *pml4 |= pg_nx; pml4u = &pmap->pm_pml4u[pml4index]; *pml4u = VM_PAGE_TO_PHYS(m) | PG_U | PG_RW | PG_V | PG_A | PG_M; } } else if (ptepindex >= NUPDE) { vm_pindex_t pml4index; vm_pindex_t pdpindex; pml4_entry_t *pml4; pdp_entry_t *pdp; /* Wire up a new PDE page */ pdpindex = ptepindex - NUPDE; pml4index = pdpindex >> NPML4EPGSHIFT; pml4 = &pmap->pm_pml4[pml4index]; if ((*pml4 & PG_V) == 0) { /* Have to allocate a new pdp, recurse */ if (_pmap_allocpte(pmap, NUPDE + NUPDPE + pml4index, lockp) == NULL) { vm_page_unwire_noq(m); vm_page_free_zero(m); return (NULL); } } else { /* Add reference to pdp page */ pdppg = PHYS_TO_VM_PAGE(*pml4 & PG_FRAME); pdppg->wire_count++; } pdp = (pdp_entry_t *)PHYS_TO_DMAP(*pml4 & PG_FRAME); /* Now find the pdp page */ pdp = &pdp[pdpindex & ((1ul << NPDPEPGSHIFT) - 1)]; *pdp = VM_PAGE_TO_PHYS(m) | PG_U | PG_RW | PG_V | PG_A | PG_M; } else { vm_pindex_t pml4index; vm_pindex_t pdpindex; pml4_entry_t *pml4; pdp_entry_t *pdp; pd_entry_t *pd; /* Wire up a new PTE page */ pdpindex = ptepindex >> NPDPEPGSHIFT; pml4index = pdpindex >> NPML4EPGSHIFT; /* First, find the pdp and check that it's valid. */ pml4 = &pmap->pm_pml4[pml4index]; if ((*pml4 & PG_V) == 0) { /* Have to allocate a new pd, recurse */ if (_pmap_allocpte(pmap, NUPDE + pdpindex, lockp) == NULL) { vm_page_unwire_noq(m); vm_page_free_zero(m); return (NULL); } pdp = (pdp_entry_t *)PHYS_TO_DMAP(*pml4 & PG_FRAME); pdp = &pdp[pdpindex & ((1ul << NPDPEPGSHIFT) - 1)]; } else { pdp = (pdp_entry_t *)PHYS_TO_DMAP(*pml4 & PG_FRAME); pdp = &pdp[pdpindex & ((1ul << NPDPEPGSHIFT) - 1)]; if ((*pdp & PG_V) == 0) { /* Have to allocate a new pd, recurse */ if (_pmap_allocpte(pmap, NUPDE + pdpindex, lockp) == NULL) { vm_page_unwire_noq(m); vm_page_free_zero(m); return (NULL); } } else { /* Add reference to the pd page */ pdpg = PHYS_TO_VM_PAGE(*pdp & PG_FRAME); pdpg->wire_count++; } } pd = (pd_entry_t *)PHYS_TO_DMAP(*pdp & PG_FRAME); /* Now we know where the page directory page is */ pd = &pd[ptepindex & ((1ul << NPDEPGSHIFT) - 1)]; *pd = VM_PAGE_TO_PHYS(m) | PG_U | PG_RW | PG_V | PG_A | PG_M; } pmap_resident_count_inc(pmap, 1); return (m); } static vm_page_t pmap_allocpde(pmap_t pmap, vm_offset_t va, struct rwlock **lockp) { vm_pindex_t pdpindex, ptepindex; pdp_entry_t *pdpe, PG_V; vm_page_t pdpg; PG_V = pmap_valid_bit(pmap); retry: pdpe = pmap_pdpe(pmap, va); if (pdpe != NULL && (*pdpe & PG_V) != 0) { /* Add a reference to the pd page. */ pdpg = PHYS_TO_VM_PAGE(*pdpe & PG_FRAME); pdpg->wire_count++; } else { /* Allocate a pd page. */ ptepindex = pmap_pde_pindex(va); pdpindex = ptepindex >> NPDPEPGSHIFT; pdpg = _pmap_allocpte(pmap, NUPDE + pdpindex, lockp); if (pdpg == NULL && lockp != NULL) goto retry; } return (pdpg); } static vm_page_t pmap_allocpte(pmap_t pmap, vm_offset_t va, struct rwlock **lockp) { vm_pindex_t ptepindex; pd_entry_t *pd, PG_V; vm_page_t m; PG_V = pmap_valid_bit(pmap); /* * Calculate pagetable page index */ ptepindex = pmap_pde_pindex(va); retry: /* * Get the page directory entry */ pd = pmap_pde(pmap, va); /* * This supports switching from a 2MB page to a * normal 4K page. */ if (pd != NULL && (*pd & (PG_PS | PG_V)) == (PG_PS | PG_V)) { if (!pmap_demote_pde_locked(pmap, pd, va, lockp)) { /* * Invalidation of the 2MB page mapping may have caused * the deallocation of the underlying PD page. */ pd = NULL; } } /* * If the page table page is mapped, we just increment the * hold count, and activate it.
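* (The "hold count" here is the page table page's wire_count; pmap_unwire_ptp() above performs the matching decrement when mappings are removed.)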
*/ if (pd != NULL && (*pd & PG_V) != 0) { m = PHYS_TO_VM_PAGE(*pd & PG_FRAME); m->wire_count++; } else { /* * Here if the pte page isn't mapped, or if it has been * deallocated. */ m = _pmap_allocpte(pmap, ptepindex, lockp); if (m == NULL && lockp != NULL) goto retry; } return (m); } /*************************************************** * Pmap allocation/deallocation routines. ***************************************************/ /* * Release any resources held by the given physical map. * Called when a pmap initialized by pmap_pinit is being released. * Should only be called if the map contains no valid mappings. */ void pmap_release(pmap_t pmap) { vm_page_t m; int i; KASSERT(pmap->pm_stats.resident_count == 0, ("pmap_release: pmap resident count %ld != 0", pmap->pm_stats.resident_count)); KASSERT(vm_radix_is_empty(&pmap->pm_root), ("pmap_release: pmap has reserved page table page(s)")); KASSERT(CPU_EMPTY(&pmap->pm_active), ("releasing active pmap %p", pmap)); m = PHYS_TO_VM_PAGE(DMAP_TO_PHYS((vm_offset_t)pmap->pm_pml4)); for (i = 0; i < NKPML4E; i++) /* KVA */ pmap->pm_pml4[KPML4BASE + i] = 0; for (i = 0; i < ndmpdpphys; i++)/* Direct Map */ pmap->pm_pml4[DMPML4I + i] = 0; pmap->pm_pml4[PML4PML4I] = 0; /* Recursive Mapping */ for (i = 0; i < lm_ents; i++) /* Large Map */ pmap->pm_pml4[LMSPML4I + i] = 0; vm_page_unwire_noq(m); vm_page_free_zero(m); if (pmap->pm_pml4u != NULL) { m = PHYS_TO_VM_PAGE(DMAP_TO_PHYS((vm_offset_t)pmap->pm_pml4u)); vm_page_unwire_noq(m); vm_page_free(m); } } static int kvm_size(SYSCTL_HANDLER_ARGS) { unsigned long ksize = VM_MAX_KERNEL_ADDRESS - VM_MIN_KERNEL_ADDRESS; return sysctl_handle_long(oidp, &ksize, 0, req); } SYSCTL_PROC(_vm, OID_AUTO, kvm_size, CTLTYPE_LONG|CTLFLAG_RD, 0, 0, kvm_size, "LU", "Size of KVM"); static int kvm_free(SYSCTL_HANDLER_ARGS) { unsigned long kfree = VM_MAX_KERNEL_ADDRESS - kernel_vm_end; return sysctl_handle_long(oidp, &kfree, 0, req); } SYSCTL_PROC(_vm, OID_AUTO, kvm_free, CTLTYPE_LONG|CTLFLAG_RD, 0, 0, kvm_free, "LU", "Amount of KVM free"); /* * grow the number of kernel page table entries, if needed */ void pmap_growkernel(vm_offset_t addr) { vm_paddr_t paddr; vm_page_t nkpg; pd_entry_t *pde, newpdir; pdp_entry_t *pdpe; mtx_assert(&kernel_map->system_mtx, MA_OWNED); /* * Return if "addr" is within the range of kernel page table pages * that were preallocated during pmap bootstrap. Moreover, leave * "kernel_vm_end" and the kernel page table as they were. * * The correctness of this action is based on the following * argument: vm_map_insert() allocates contiguous ranges of the * kernel virtual address space. It calls this function if a range * ends after "kernel_vm_end". If the kernel is mapped between * "kernel_vm_end" and "addr", then the range cannot begin at * "kernel_vm_end". In fact, its beginning address cannot be less * than the kernel. Thus, there is no immediate need to allocate * any new kernel page table pages between "kernel_vm_end" and * "KERNBASE". 
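* As a worked example, each iteration of the loop below extends the mapped range by one 2MB page-directory span (NBPDR); growing the kernel map by 1GB therefore installs 512 page table pages and allocates at most one new PDP page per 1GB boundary crossed.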
*/ if (KERNBASE < addr && addr <= KERNBASE + nkpt * NBPDR) return; addr = roundup2(addr, NBPDR); if (addr - 1 >= vm_map_max(kernel_map)) addr = vm_map_max(kernel_map); while (kernel_vm_end < addr) { pdpe = pmap_pdpe(kernel_pmap, kernel_vm_end); if ((*pdpe & X86_PG_V) == 0) { /* We need a new PDP entry */ nkpg = vm_page_alloc(NULL, kernel_vm_end >> PDPSHIFT, VM_ALLOC_INTERRUPT | VM_ALLOC_NOOBJ | VM_ALLOC_WIRED | VM_ALLOC_ZERO); if (nkpg == NULL) panic("pmap_growkernel: no memory to grow kernel"); if ((nkpg->flags & PG_ZERO) == 0) pmap_zero_page(nkpg); paddr = VM_PAGE_TO_PHYS(nkpg); *pdpe = (pdp_entry_t)(paddr | X86_PG_V | X86_PG_RW | X86_PG_A | X86_PG_M); continue; /* try again */ } pde = pmap_pdpe_to_pde(pdpe, kernel_vm_end); if ((*pde & X86_PG_V) != 0) { kernel_vm_end = (kernel_vm_end + NBPDR) & ~PDRMASK; if (kernel_vm_end - 1 >= vm_map_max(kernel_map)) { kernel_vm_end = vm_map_max(kernel_map); break; } continue; } nkpg = vm_page_alloc(NULL, pmap_pde_pindex(kernel_vm_end), VM_ALLOC_INTERRUPT | VM_ALLOC_NOOBJ | VM_ALLOC_WIRED | VM_ALLOC_ZERO); if (nkpg == NULL) panic("pmap_growkernel: no memory to grow kernel"); if ((nkpg->flags & PG_ZERO) == 0) pmap_zero_page(nkpg); paddr = VM_PAGE_TO_PHYS(nkpg); newpdir = paddr | X86_PG_V | X86_PG_RW | X86_PG_A | X86_PG_M; pde_store(pde, newpdir); kernel_vm_end = (kernel_vm_end + NBPDR) & ~PDRMASK; if (kernel_vm_end - 1 >= vm_map_max(kernel_map)) { kernel_vm_end = vm_map_max(kernel_map); break; } } } /*************************************************** * page management routines. ***************************************************/ CTASSERT(sizeof(struct pv_chunk) == PAGE_SIZE); CTASSERT(_NPCM == 3); CTASSERT(_NPCPV == 168); static __inline struct pv_chunk * pv_to_chunk(pv_entry_t pv) { return ((struct pv_chunk *)((uintptr_t)pv & ~(uintptr_t)PAGE_MASK)); } #define PV_PMAP(pv) (pv_to_chunk(pv)->pc_pmap) #define PC_FREE0 0xfffffffffffffffful #define PC_FREE1 0xfffffffffffffffful #define PC_FREE2 0x000000fffffffffful static const uint64_t pc_freemask[_NPCM] = { PC_FREE0, PC_FREE1, PC_FREE2 }; #ifdef PV_STATS static int pc_chunk_count, pc_chunk_allocs, pc_chunk_frees, pc_chunk_tryfail; SYSCTL_INT(_vm_pmap, OID_AUTO, pc_chunk_count, CTLFLAG_RD, &pc_chunk_count, 0, "Current number of pv entry chunks"); SYSCTL_INT(_vm_pmap, OID_AUTO, pc_chunk_allocs, CTLFLAG_RD, &pc_chunk_allocs, 0, "Current number of pv entry chunks allocated"); SYSCTL_INT(_vm_pmap, OID_AUTO, pc_chunk_frees, CTLFLAG_RD, &pc_chunk_frees, 0, "Current number of pv entry chunks frees"); SYSCTL_INT(_vm_pmap, OID_AUTO, pc_chunk_tryfail, CTLFLAG_RD, &pc_chunk_tryfail, 0, "Number of times tried to get a chunk page but failed."); static long pv_entry_frees, pv_entry_allocs, pv_entry_count; static int pv_entry_spare; SYSCTL_LONG(_vm_pmap, OID_AUTO, pv_entry_frees, CTLFLAG_RD, &pv_entry_frees, 0, "Current number of pv entry frees"); SYSCTL_LONG(_vm_pmap, OID_AUTO, pv_entry_allocs, CTLFLAG_RD, &pv_entry_allocs, 0, "Current number of pv entry allocs"); SYSCTL_LONG(_vm_pmap, OID_AUTO, pv_entry_count, CTLFLAG_RD, &pv_entry_count, 0, "Current number of pv entries"); SYSCTL_INT(_vm_pmap, OID_AUTO, pv_entry_spare, CTLFLAG_RD, &pv_entry_spare, 0, "Current number of spare pv entries"); #endif static void reclaim_pv_chunk_leave_pmap(pmap_t pmap, pmap_t locked_pmap, bool start_di) { if (pmap == NULL) return; pmap_invalidate_all(pmap); if (pmap != locked_pmap) PMAP_UNLOCK(pmap); if (start_di) pmap_delayed_invl_finished(); } /* * We are in a serious low memory condition. 
Resort to * drastic measures to free some pages so we can allocate * another pv entry chunk. * * Returns NULL if PV entries were reclaimed from the specified pmap. * * We do not, however, unmap 2mpages because subsequent accesses will * allocate per-page pv entries until repromotion occurs, thereby * exacerbating the shortage of free pv entries. */ static vm_page_t reclaim_pv_chunk(pmap_t locked_pmap, struct rwlock **lockp) { struct pv_chunk *pc, *pc_marker, *pc_marker_end; struct pv_chunk_header pc_marker_b, pc_marker_end_b; struct md_page *pvh; pd_entry_t *pde; pmap_t next_pmap, pmap; pt_entry_t *pte, tpte; pt_entry_t PG_G, PG_A, PG_M, PG_RW; pv_entry_t pv; vm_offset_t va; vm_page_t m, m_pc; struct spglist free; uint64_t inuse; int bit, field, freed; bool start_di; static int active_reclaims = 0; PMAP_LOCK_ASSERT(locked_pmap, MA_OWNED); KASSERT(lockp != NULL, ("reclaim_pv_chunk: lockp is NULL")); pmap = NULL; m_pc = NULL; PG_G = PG_A = PG_M = PG_RW = 0; SLIST_INIT(&free); bzero(&pc_marker_b, sizeof(pc_marker_b)); bzero(&pc_marker_end_b, sizeof(pc_marker_end_b)); pc_marker = (struct pv_chunk *)&pc_marker_b; pc_marker_end = (struct pv_chunk *)&pc_marker_end_b; /* * A delayed invalidation block should already be active if * pmap_advise() or pmap_remove() called this function by way * of pmap_demote_pde_locked(). */ start_di = pmap_not_in_di(); mtx_lock(&pv_chunks_mutex); active_reclaims++; TAILQ_INSERT_HEAD(&pv_chunks, pc_marker, pc_lru); TAILQ_INSERT_TAIL(&pv_chunks, pc_marker_end, pc_lru); while ((pc = TAILQ_NEXT(pc_marker, pc_lru)) != pc_marker_end && SLIST_EMPTY(&free)) { next_pmap = pc->pc_pmap; if (next_pmap == NULL) { /* * The next chunk is a marker. However, it is * not our marker, so active_reclaims must be * > 1. Consequently, the next_chunk code * will not rotate the pv_chunks list. */ goto next_chunk; } mtx_unlock(&pv_chunks_mutex); /* * A pv_chunk can only be removed from the pc_lru list * when both pv_chunks_mutex is owned and the * corresponding pmap is locked. */ if (pmap != next_pmap) { reclaim_pv_chunk_leave_pmap(pmap, locked_pmap, start_di); pmap = next_pmap; /* Avoid deadlock and lock recursion. */ if (pmap > locked_pmap) { RELEASE_PV_LIST_LOCK(lockp); PMAP_LOCK(pmap); if (start_di) pmap_delayed_invl_started(); mtx_lock(&pv_chunks_mutex); continue; } else if (pmap != locked_pmap) { if (PMAP_TRYLOCK(pmap)) { if (start_di) pmap_delayed_invl_started(); mtx_lock(&pv_chunks_mutex); continue; } else { pmap = NULL; /* pmap is not locked */ mtx_lock(&pv_chunks_mutex); pc = TAILQ_NEXT(pc_marker, pc_lru); if (pc == NULL || pc->pc_pmap != next_pmap) continue; goto next_chunk; } } else if (start_di) pmap_delayed_invl_started(); PG_G = pmap_global_bit(pmap); PG_A = pmap_accessed_bit(pmap); PG_M = pmap_modified_bit(pmap); PG_RW = pmap_rw_bit(pmap); } /* * Destroy every non-wired, 4 KB page mapping in the chunk.
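* A sketch of the scan below: inuse = ~pc_map[field] & pc_freemask[field] has a bit set for every allocated pv entry, bsfq() extracts the lowest such bit, and clearing that bit from inuse advances the scan; e.g. pc_map[field] == ~0x5ul visits entries 0 and 2 of that field only.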
*/ freed = 0; for (field = 0; field < _NPCM; field++) { for (inuse = ~pc->pc_map[field] & pc_freemask[field]; inuse != 0; inuse &= ~(1UL << bit)) { bit = bsfq(inuse); pv = &pc->pc_pventry[field * 64 + bit]; va = pv->pv_va; pde = pmap_pde(pmap, va); if ((*pde & PG_PS) != 0) continue; pte = pmap_pde_to_pte(pde, va); if ((*pte & PG_W) != 0) continue; tpte = pte_load_clear(pte); if ((tpte & PG_G) != 0) pmap_invalidate_page(pmap, va); m = PHYS_TO_VM_PAGE(tpte & PG_FRAME); if ((tpte & (PG_M | PG_RW)) == (PG_M | PG_RW)) vm_page_dirty(m); if ((tpte & PG_A) != 0) vm_page_aflag_set(m, PGA_REFERENCED); CHANGE_PV_LIST_LOCK_TO_VM_PAGE(lockp, m); TAILQ_REMOVE(&m->md.pv_list, pv, pv_next); m->md.pv_gen++; if (TAILQ_EMPTY(&m->md.pv_list) && (m->flags & PG_FICTITIOUS) == 0) { pvh = pa_to_pvh(VM_PAGE_TO_PHYS(m)); if (TAILQ_EMPTY(&pvh->pv_list)) { vm_page_aflag_clear(m, PGA_WRITEABLE); } } pmap_delayed_invl_page(m); pc->pc_map[field] |= 1UL << bit; pmap_unuse_pt(pmap, va, *pde, &free); freed++; } } if (freed == 0) { mtx_lock(&pv_chunks_mutex); goto next_chunk; } /* Every freed mapping is for a 4 KB page. */ pmap_resident_count_dec(pmap, freed); PV_STAT(atomic_add_long(&pv_entry_frees, freed)); PV_STAT(atomic_add_int(&pv_entry_spare, freed)); PV_STAT(atomic_subtract_long(&pv_entry_count, freed)); TAILQ_REMOVE(&pmap->pm_pvchunk, pc, pc_list); if (pc->pc_map[0] == PC_FREE0 && pc->pc_map[1] == PC_FREE1 && pc->pc_map[2] == PC_FREE2) { PV_STAT(atomic_subtract_int(&pv_entry_spare, _NPCPV)); PV_STAT(atomic_subtract_int(&pc_chunk_count, 1)); PV_STAT(atomic_add_int(&pc_chunk_frees, 1)); /* Entire chunk is free; return it. */ m_pc = PHYS_TO_VM_PAGE(DMAP_TO_PHYS((vm_offset_t)pc)); dump_drop_page(m_pc->phys_addr); mtx_lock(&pv_chunks_mutex); TAILQ_REMOVE(&pv_chunks, pc, pc_lru); break; } TAILQ_INSERT_HEAD(&pmap->pm_pvchunk, pc, pc_list); mtx_lock(&pv_chunks_mutex); /* One freed pv entry in locked_pmap is sufficient. */ if (pmap == locked_pmap) break; next_chunk: TAILQ_REMOVE(&pv_chunks, pc_marker, pc_lru); TAILQ_INSERT_AFTER(&pv_chunks, pc, pc_marker, pc_lru); if (active_reclaims == 1 && pmap != NULL) { /* * Rotate the pv chunks list so that we do not * scan the same pv chunks that could not be * freed (because they contained a wired * and/or superpage mapping) on every * invocation of reclaim_pv_chunk(). */ while ((pc = TAILQ_FIRST(&pv_chunks)) != pc_marker) { MPASS(pc->pc_pmap != NULL); TAILQ_REMOVE(&pv_chunks, pc, pc_lru); TAILQ_INSERT_TAIL(&pv_chunks, pc, pc_lru); } } } TAILQ_REMOVE(&pv_chunks, pc_marker, pc_lru); TAILQ_REMOVE(&pv_chunks, pc_marker_end, pc_lru); active_reclaims--; mtx_unlock(&pv_chunks_mutex); reclaim_pv_chunk_leave_pmap(pmap, locked_pmap, start_di); if (m_pc == NULL && !SLIST_EMPTY(&free)) { m_pc = SLIST_FIRST(&free); SLIST_REMOVE_HEAD(&free, plinks.s.ss); /* Recycle a freed page table page. */ m_pc->wire_count = 1; } vm_page_free_pages_toq(&free, true); return (m_pc); } /* * free the pv_entry back to the free list */ static void free_pv_entry(pmap_t pmap, pv_entry_t pv) { struct pv_chunk *pc; int idx, field, bit; PMAP_LOCK_ASSERT(pmap, MA_OWNED); PV_STAT(atomic_add_long(&pv_entry_frees, 1)); PV_STAT(atomic_add_int(&pv_entry_spare, 1)); PV_STAT(atomic_subtract_long(&pv_entry_count, 1)); pc = pv_to_chunk(pv); idx = pv - &pc->pc_pventry[0]; field = idx / 64; bit = idx % 64; pc->pc_map[field] |= 1ul << bit; if (pc->pc_map[0] != PC_FREE0 || pc->pc_map[1] != PC_FREE1 || pc->pc_map[2] != PC_FREE2) { /* 98% of the time, pc is already at the head of the list. 
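* Keeping partially free chunks at the head means that get_pv_entry() normally finds a free slot in the first chunk it examines.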
*/ if (__predict_false(pc != TAILQ_FIRST(&pmap->pm_pvchunk))) { TAILQ_REMOVE(&pmap->pm_pvchunk, pc, pc_list); TAILQ_INSERT_HEAD(&pmap->pm_pvchunk, pc, pc_list); } return; } TAILQ_REMOVE(&pmap->pm_pvchunk, pc, pc_list); free_pv_chunk(pc); } static void free_pv_chunk(struct pv_chunk *pc) { vm_page_t m; mtx_lock(&pv_chunks_mutex); TAILQ_REMOVE(&pv_chunks, pc, pc_lru); mtx_unlock(&pv_chunks_mutex); PV_STAT(atomic_subtract_int(&pv_entry_spare, _NPCPV)); PV_STAT(atomic_subtract_int(&pc_chunk_count, 1)); PV_STAT(atomic_add_int(&pc_chunk_frees, 1)); /* entire chunk is free, return it */ m = PHYS_TO_VM_PAGE(DMAP_TO_PHYS((vm_offset_t)pc)); dump_drop_page(m->phys_addr); vm_page_unwire(m, PQ_NONE); vm_page_free(m); } /* * Returns a new PV entry, allocating a new PV chunk from the system when * needed. If this PV chunk allocation fails and a PV list lock pointer was * given, a PV chunk is reclaimed from an arbitrary pmap. Otherwise, NULL is * returned. * * The given PV list lock may be released. */ static pv_entry_t get_pv_entry(pmap_t pmap, struct rwlock **lockp) { int bit, field; pv_entry_t pv; struct pv_chunk *pc; vm_page_t m; PMAP_LOCK_ASSERT(pmap, MA_OWNED); PV_STAT(atomic_add_long(&pv_entry_allocs, 1)); retry: pc = TAILQ_FIRST(&pmap->pm_pvchunk); if (pc != NULL) { for (field = 0; field < _NPCM; field++) { if (pc->pc_map[field]) { bit = bsfq(pc->pc_map[field]); break; } } if (field < _NPCM) { pv = &pc->pc_pventry[field * 64 + bit]; pc->pc_map[field] &= ~(1ul << bit); /* If this was the last item, move it to tail */ if (pc->pc_map[0] == 0 && pc->pc_map[1] == 0 && pc->pc_map[2] == 0) { TAILQ_REMOVE(&pmap->pm_pvchunk, pc, pc_list); TAILQ_INSERT_TAIL(&pmap->pm_pvchunk, pc, pc_list); } PV_STAT(atomic_add_long(&pv_entry_count, 1)); PV_STAT(atomic_subtract_int(&pv_entry_spare, 1)); return (pv); } } /* No free items, allocate another chunk */ m = vm_page_alloc(NULL, 0, VM_ALLOC_NORMAL | VM_ALLOC_NOOBJ | VM_ALLOC_WIRED); if (m == NULL) { if (lockp == NULL) { PV_STAT(pc_chunk_tryfail++); return (NULL); } m = reclaim_pv_chunk(pmap, lockp); if (m == NULL) goto retry; } PV_STAT(atomic_add_int(&pc_chunk_count, 1)); PV_STAT(atomic_add_int(&pc_chunk_allocs, 1)); dump_add_page(m->phys_addr); pc = (void *)PHYS_TO_DMAP(m->phys_addr); pc->pc_pmap = pmap; pc->pc_map[0] = PC_FREE0 & ~1ul; /* preallocated bit 0 */ pc->pc_map[1] = PC_FREE1; pc->pc_map[2] = PC_FREE2; mtx_lock(&pv_chunks_mutex); TAILQ_INSERT_TAIL(&pv_chunks, pc, pc_lru); mtx_unlock(&pv_chunks_mutex); pv = &pc->pc_pventry[0]; TAILQ_INSERT_HEAD(&pmap->pm_pvchunk, pc, pc_list); PV_STAT(atomic_add_long(&pv_entry_count, 1)); PV_STAT(atomic_add_int(&pv_entry_spare, _NPCPV - 1)); return (pv); } /* * Returns the number of one bits within the given PV chunk map. * * The errata for Intel processors state that "POPCNT Instruction May * Take Longer to Execute Than Expected". It is believed that the * issue is the spurious dependency on the destination register. * Provide a hint to the register rename logic that the destination * value is overwritten, by clearing it, as suggested in the * optimization manual. It should be cheap for unaffected processors * as well.
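* In plain C the routine below simply computes popcount(map[0]) + popcount(map[1]) + popcount(map[2]); the xorl of each destination register before popcntq exists only to break the false output dependency described above.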
* * Reference numbers for these errata are * 4th Gen Core: HSD146 * 5th Gen Core: BDM85 * 6th Gen Core: SKL029 */ static int popcnt_pc_map_pq(uint64_t *map) { u_long result, tmp; __asm __volatile("xorl %k0,%k0;popcntq %2,%0;" "xorl %k1,%k1;popcntq %3,%1;addl %k1,%k0;" "xorl %k1,%k1;popcntq %4,%1;addl %k1,%k0" : "=&r" (result), "=&r" (tmp) : "m" (map[0]), "m" (map[1]), "m" (map[2])); return (result); } /* * Ensure that the number of spare PV entries in the specified pmap meets or * exceeds the given count, "needed". * * The given PV list lock may be released. */ static void reserve_pv_entries(pmap_t pmap, int needed, struct rwlock **lockp) { struct pch new_tail; struct pv_chunk *pc; vm_page_t m; int avail, free; bool reclaimed; PMAP_LOCK_ASSERT(pmap, MA_OWNED); KASSERT(lockp != NULL, ("reserve_pv_entries: lockp is NULL")); /* * Newly allocated PV chunks must be stored in a private list until * the required number of PV chunks have been allocated. Otherwise, * reclaim_pv_chunk() could recycle one of these chunks. In * contrast, these chunks must be added to the pmap upon allocation. */ TAILQ_INIT(&new_tail); retry: avail = 0; TAILQ_FOREACH(pc, &pmap->pm_pvchunk, pc_list) { #ifndef __POPCNT__ if ((cpu_feature2 & CPUID2_POPCNT) == 0) bit_count((bitstr_t *)pc->pc_map, 0, sizeof(pc->pc_map) * NBBY, &free); else #endif free = popcnt_pc_map_pq(pc->pc_map); if (free == 0) break; avail += free; if (avail >= needed) break; } for (reclaimed = false; avail < needed; avail += _NPCPV) { m = vm_page_alloc(NULL, 0, VM_ALLOC_NORMAL | VM_ALLOC_NOOBJ | VM_ALLOC_WIRED); if (m == NULL) { m = reclaim_pv_chunk(pmap, lockp); if (m == NULL) goto retry; reclaimed = true; } PV_STAT(atomic_add_int(&pc_chunk_count, 1)); PV_STAT(atomic_add_int(&pc_chunk_allocs, 1)); dump_add_page(m->phys_addr); pc = (void *)PHYS_TO_DMAP(m->phys_addr); pc->pc_pmap = pmap; pc->pc_map[0] = PC_FREE0; pc->pc_map[1] = PC_FREE1; pc->pc_map[2] = PC_FREE2; TAILQ_INSERT_HEAD(&pmap->pm_pvchunk, pc, pc_list); TAILQ_INSERT_TAIL(&new_tail, pc, pc_lru); PV_STAT(atomic_add_int(&pv_entry_spare, _NPCPV)); /* * The reclaim might have freed a chunk from the current pmap. * If that chunk contained available entries, we need to * re-count the number of available entries. */ if (reclaimed) goto retry; } if (!TAILQ_EMPTY(&new_tail)) { mtx_lock(&pv_chunks_mutex); TAILQ_CONCAT(&pv_chunks, &new_tail, pc_lru); mtx_unlock(&pv_chunks_mutex); } } /* * First find and then remove the pv entry for the specified pmap and virtual * address from the specified pv list. Returns the pv entry if found and NULL * otherwise. This operation can be performed on pv lists for either 4KB or * 2MB page mappings. */ static __inline pv_entry_t pmap_pvh_remove(struct md_page *pvh, pmap_t pmap, vm_offset_t va) { pv_entry_t pv; TAILQ_FOREACH(pv, &pvh->pv_list, pv_next) { if (pmap == PV_PMAP(pv) && va == pv->pv_va) { TAILQ_REMOVE(&pvh->pv_list, pv, pv_next); pvh->pv_gen++; break; } } return (pv); } /* * After demotion from a 2MB page mapping to 512 4KB page mappings, * destroy the pv entry for the 2MB page mapping and reinstantiate the pv * entries for each of the 4KB page mappings.
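* Concretely, with NPTEPG == 512, demotion reuses the 2MB mapping's pv entry for the first 4KB page and draws the remaining NPTEPG - 1 == 511 entries from the pmap's pv chunks, which reserve_pv_entries() must have stocked beforehand.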
*/ static void pmap_pv_demote_pde(pmap_t pmap, vm_offset_t va, vm_paddr_t pa, struct rwlock **lockp) { struct md_page *pvh; struct pv_chunk *pc; pv_entry_t pv; vm_offset_t va_last; vm_page_t m; int bit, field; PMAP_LOCK_ASSERT(pmap, MA_OWNED); KASSERT((pa & PDRMASK) == 0, ("pmap_pv_demote_pde: pa is not 2mpage aligned")); CHANGE_PV_LIST_LOCK_TO_PHYS(lockp, pa); /* * Transfer the 2mpage's pv entry for this mapping to the first * page's pv list. Once this transfer begins, the pv list lock * must not be released until the last pv entry is reinstantiated. */ pvh = pa_to_pvh(pa); va = trunc_2mpage(va); pv = pmap_pvh_remove(pvh, pmap, va); KASSERT(pv != NULL, ("pmap_pv_demote_pde: pv not found")); m = PHYS_TO_VM_PAGE(pa); TAILQ_INSERT_TAIL(&m->md.pv_list, pv, pv_next); m->md.pv_gen++; /* Instantiate the remaining NPTEPG - 1 pv entries. */ PV_STAT(atomic_add_long(&pv_entry_allocs, NPTEPG - 1)); va_last = va + NBPDR - PAGE_SIZE; for (;;) { pc = TAILQ_FIRST(&pmap->pm_pvchunk); KASSERT(pc->pc_map[0] != 0 || pc->pc_map[1] != 0 || pc->pc_map[2] != 0, ("pmap_pv_demote_pde: missing spare")); for (field = 0; field < _NPCM; field++) { while (pc->pc_map[field]) { bit = bsfq(pc->pc_map[field]); pc->pc_map[field] &= ~(1ul << bit); pv = &pc->pc_pventry[field * 64 + bit]; va += PAGE_SIZE; pv->pv_va = va; m++; KASSERT((m->oflags & VPO_UNMANAGED) == 0, ("pmap_pv_demote_pde: page %p is not managed", m)); TAILQ_INSERT_TAIL(&m->md.pv_list, pv, pv_next); m->md.pv_gen++; if (va == va_last) goto out; } } TAILQ_REMOVE(&pmap->pm_pvchunk, pc, pc_list); TAILQ_INSERT_TAIL(&pmap->pm_pvchunk, pc, pc_list); } out: if (pc->pc_map[0] == 0 && pc->pc_map[1] == 0 && pc->pc_map[2] == 0) { TAILQ_REMOVE(&pmap->pm_pvchunk, pc, pc_list); TAILQ_INSERT_TAIL(&pmap->pm_pvchunk, pc, pc_list); } PV_STAT(atomic_add_long(&pv_entry_count, NPTEPG - 1)); PV_STAT(atomic_subtract_int(&pv_entry_spare, NPTEPG - 1)); } #if VM_NRESERVLEVEL > 0 /* * After promotion from 512 4KB page mappings to a single 2MB page mapping, * replace the many pv entries for the 4KB page mappings by a single pv entry * for the 2MB page mapping. */ static void pmap_pv_promote_pde(pmap_t pmap, vm_offset_t va, vm_paddr_t pa, struct rwlock **lockp) { struct md_page *pvh; pv_entry_t pv; vm_offset_t va_last; vm_page_t m; KASSERT((pa & PDRMASK) == 0, ("pmap_pv_promote_pde: pa is not 2mpage aligned")); CHANGE_PV_LIST_LOCK_TO_PHYS(lockp, pa); /* * Transfer the first page's pv entry for this mapping to the 2mpage's * pv list. Aside from avoiding the cost of a call to get_pv_entry(), * a transfer avoids the possibility that get_pv_entry() calls * reclaim_pv_chunk() and that reclaim_pv_chunk() removes one of the * mappings that is being promoted. */ m = PHYS_TO_VM_PAGE(pa); va = trunc_2mpage(va); pv = pmap_pvh_remove(&m->md, pmap, va); KASSERT(pv != NULL, ("pmap_pv_promote_pde: pv not found")); pvh = pa_to_pvh(pa); TAILQ_INSERT_TAIL(&pvh->pv_list, pv, pv_next); pvh->pv_gen++; /* Free the remaining NPTEPG - 1 pv entries. */ va_last = va + NBPDR - PAGE_SIZE; do { m++; va += PAGE_SIZE; pmap_pvh_free(&m->md, pmap, va); } while (va < va_last); } #endif /* VM_NRESERVLEVEL > 0 */ /* * First find and then destroy the pv entry for the specified pmap and virtual * address. This operation can be performed on pv lists for either 4KB or 2MB * page mappings. 
*/ static void pmap_pvh_free(struct md_page *pvh, pmap_t pmap, vm_offset_t va) { pv_entry_t pv; pv = pmap_pvh_remove(pvh, pmap, va); KASSERT(pv != NULL, ("pmap_pvh_free: pv not found")); free_pv_entry(pmap, pv); } /* * Conditionally create the PV entry for a 4KB page mapping if the required * memory can be allocated without resorting to reclamation. */ static boolean_t pmap_try_insert_pv_entry(pmap_t pmap, vm_offset_t va, vm_page_t m, struct rwlock **lockp) { pv_entry_t pv; PMAP_LOCK_ASSERT(pmap, MA_OWNED); /* Pass NULL instead of the lock pointer to disable reclamation. */ if ((pv = get_pv_entry(pmap, NULL)) != NULL) { pv->pv_va = va; CHANGE_PV_LIST_LOCK_TO_VM_PAGE(lockp, m); TAILQ_INSERT_TAIL(&m->md.pv_list, pv, pv_next); m->md.pv_gen++; return (TRUE); } else return (FALSE); } /* * Create the PV entry for a 2MB page mapping. Always returns true unless the * flag PMAP_ENTER_NORECLAIM is specified. If that flag is specified, returns * false if the PV entry cannot be allocated without resorting to reclamation. */ static bool pmap_pv_insert_pde(pmap_t pmap, vm_offset_t va, pd_entry_t pde, u_int flags, struct rwlock **lockp) { struct md_page *pvh; pv_entry_t pv; vm_paddr_t pa; PMAP_LOCK_ASSERT(pmap, MA_OWNED); /* Pass NULL instead of the lock pointer to disable reclamation. */ if ((pv = get_pv_entry(pmap, (flags & PMAP_ENTER_NORECLAIM) != 0 ? NULL : lockp)) == NULL) return (false); pv->pv_va = va; pa = pde & PG_PS_FRAME; CHANGE_PV_LIST_LOCK_TO_PHYS(lockp, pa); pvh = pa_to_pvh(pa); TAILQ_INSERT_TAIL(&pvh->pv_list, pv, pv_next); pvh->pv_gen++; return (true); } /* * Fills a page table page with mappings to consecutive physical pages. */ static void pmap_fill_ptp(pt_entry_t *firstpte, pt_entry_t newpte) { pt_entry_t *pte; for (pte = firstpte; pte < firstpte + NPTEPG; pte++) { *pte = newpte; newpte += PAGE_SIZE; } } /* * Tries to demote a 2MB page mapping. If demotion fails, the 2MB page * mapping is invalidated. */ static boolean_t pmap_demote_pde(pmap_t pmap, pd_entry_t *pde, vm_offset_t va) { struct rwlock *lock; boolean_t rv; lock = NULL; rv = pmap_demote_pde_locked(pmap, pde, va, &lock); if (lock != NULL) rw_wunlock(lock); return (rv); } static boolean_t pmap_demote_pde_locked(pmap_t pmap, pd_entry_t *pde, vm_offset_t va, struct rwlock **lockp) { pd_entry_t newpde, oldpde; pt_entry_t *firstpte, newpte; pt_entry_t PG_A, PG_G, PG_M, PG_RW, PG_V; vm_paddr_t mptepa; vm_page_t mpte; struct spglist free; vm_offset_t sva; int PG_PTE_CACHE; PG_G = pmap_global_bit(pmap); PG_A = pmap_accessed_bit(pmap); PG_M = pmap_modified_bit(pmap); PG_RW = pmap_rw_bit(pmap); PG_V = pmap_valid_bit(pmap); PG_PTE_CACHE = pmap_cache_mask(pmap, 0); PMAP_LOCK_ASSERT(pmap, MA_OWNED); oldpde = *pde; KASSERT((oldpde & (PG_PS | PG_V)) == (PG_PS | PG_V), ("pmap_demote_pde: oldpde is missing PG_PS and/or PG_V")); if ((oldpde & PG_A) == 0 || (mpte = pmap_remove_pt_page(pmap, va)) == NULL) { KASSERT((oldpde & PG_W) == 0, ("pmap_demote_pde: page table page for a wired mapping" " is missing")); /* * Invalidate the 2MB page mapping and return "failure" if the * mapping was never accessed or the allocation of the new * page table page fails. If the 2MB page mapping belongs to * the direct map region of the kernel's address space, then * the page allocation request specifies the highest possible * priority (VM_ALLOC_INTERRUPT). Otherwise, the priority is * normal. 
Page table pages are preallocated for every other * part of the kernel address space, so the direct map region * is the only part of the kernel address space that must be * handled here. */ if ((oldpde & PG_A) == 0 || (mpte = vm_page_alloc(NULL, pmap_pde_pindex(va), (va >= DMAP_MIN_ADDRESS && va < DMAP_MAX_ADDRESS ? VM_ALLOC_INTERRUPT : VM_ALLOC_NORMAL) | VM_ALLOC_NOOBJ | VM_ALLOC_WIRED)) == NULL) { SLIST_INIT(&free); sva = trunc_2mpage(va); pmap_remove_pde(pmap, pde, sva, &free, lockp); if ((oldpde & PG_G) == 0) pmap_invalidate_pde_page(pmap, sva, oldpde); vm_page_free_pages_toq(&free, true); CTR2(KTR_PMAP, "pmap_demote_pde: failure for va %#lx" " in pmap %p", va, pmap); return (FALSE); } if (va < VM_MAXUSER_ADDRESS) pmap_resident_count_inc(pmap, 1); } mptepa = VM_PAGE_TO_PHYS(mpte); firstpte = (pt_entry_t *)PHYS_TO_DMAP(mptepa); newpde = mptepa | PG_M | PG_A | (oldpde & PG_U) | PG_RW | PG_V; KASSERT((oldpde & PG_A) != 0, ("pmap_demote_pde: oldpde is missing PG_A")); KASSERT((oldpde & (PG_M | PG_RW)) != PG_RW, ("pmap_demote_pde: oldpde is missing PG_M")); newpte = oldpde & ~PG_PS; newpte = pmap_swap_pat(pmap, newpte); /* * If the page table page is new, initialize it. */ if (mpte->wire_count == 1) { mpte->wire_count = NPTEPG; pmap_fill_ptp(firstpte, newpte); } KASSERT((*firstpte & PG_FRAME) == (newpte & PG_FRAME), ("pmap_demote_pde: firstpte and newpte map different physical" " addresses")); /* * If the mapping has changed attributes, update the page table * entries. */ if ((*firstpte & PG_PTE_PROMOTE) != (newpte & PG_PTE_PROMOTE)) pmap_fill_ptp(firstpte, newpte); /* * The spare PV entries must be reserved prior to demoting the * mapping, that is, prior to changing the PDE. Otherwise, the state * of the PDE and the PV lists will be inconsistent, which can result * in reclaim_pv_chunk() attempting to remove a PV entry from the * wrong PV list and pmap_pv_demote_pde() failing to find the expected * PV entry for the 2MB page mapping that is being demoted. */ if ((oldpde & PG_MANAGED) != 0) reserve_pv_entries(pmap, NPTEPG - 1, lockp); /* * Demote the mapping. This pmap is locked. The old PDE has * PG_A set. If the old PDE has PG_RW set, it also has PG_M * set. Thus, there is no danger of a race with another * processor changing the setting of PG_A and/or PG_M between * the read above and the store below. */ if (workaround_erratum383) pmap_update_pde(pmap, va, pde, newpde); else pde_store(pde, newpde); /* * Invalidate a stale recursive mapping of the page table page. */ if (va >= VM_MAXUSER_ADDRESS) pmap_invalidate_page(pmap, (vm_offset_t)vtopte(va)); /* * Demote the PV entry. */ if ((oldpde & PG_MANAGED) != 0) pmap_pv_demote_pde(pmap, va, oldpde & PG_PS_FRAME, lockp); atomic_add_long(&pmap_pde_demotions, 1); CTR2(KTR_PMAP, "pmap_demote_pde: success for va %#lx" " in pmap %p", va, pmap); return (TRUE); } /* * pmap_remove_kernel_pde: Remove a kernel superpage mapping. */ static void pmap_remove_kernel_pde(pmap_t pmap, pd_entry_t *pde, vm_offset_t va) { pd_entry_t newpde; vm_paddr_t mptepa; vm_page_t mpte; KASSERT(pmap == kernel_pmap, ("pmap %p is not kernel_pmap", pmap)); PMAP_LOCK_ASSERT(pmap, MA_OWNED); mpte = pmap_remove_pt_page(pmap, va); if (mpte == NULL) panic("pmap_remove_kernel_pde: Missing pt page."); mptepa = VM_PAGE_TO_PHYS(mpte); newpde = mptepa | X86_PG_M | X86_PG_A | X86_PG_RW | X86_PG_V; /* * Initialize the page table page. */ pagezero((void *)PHYS_TO_DMAP(mptepa)); /* * Demote the mapping. 
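* Note that the kernel PDE is not simply cleared: the page table page saved at promotion time is zeroed and reinstalled, so the kernel page table stays fully populated while the 2MB range becomes unmapped at 4KB granularity.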
*/ if (workaround_erratum383) pmap_update_pde(pmap, va, pde, newpde); else pde_store(pde, newpde); /* * Invalidate a stale recursive mapping of the page table page. */ pmap_invalidate_page(pmap, (vm_offset_t)vtopte(va)); } /* * pmap_remove_pde: do the things to unmap a superpage in a process */ static int pmap_remove_pde(pmap_t pmap, pd_entry_t *pdq, vm_offset_t sva, struct spglist *free, struct rwlock **lockp) { struct md_page *pvh; pd_entry_t oldpde; vm_offset_t eva, va; vm_page_t m, mpte; pt_entry_t PG_G, PG_A, PG_M, PG_RW; PG_G = pmap_global_bit(pmap); PG_A = pmap_accessed_bit(pmap); PG_M = pmap_modified_bit(pmap); PG_RW = pmap_rw_bit(pmap); PMAP_LOCK_ASSERT(pmap, MA_OWNED); KASSERT((sva & PDRMASK) == 0, ("pmap_remove_pde: sva is not 2mpage aligned")); oldpde = pte_load_clear(pdq); if (oldpde & PG_W) pmap->pm_stats.wired_count -= NBPDR / PAGE_SIZE; if ((oldpde & PG_G) != 0) pmap_invalidate_pde_page(kernel_pmap, sva, oldpde); pmap_resident_count_dec(pmap, NBPDR / PAGE_SIZE); if (oldpde & PG_MANAGED) { CHANGE_PV_LIST_LOCK_TO_PHYS(lockp, oldpde & PG_PS_FRAME); pvh = pa_to_pvh(oldpde & PG_PS_FRAME); pmap_pvh_free(pvh, pmap, sva); eva = sva + NBPDR; for (va = sva, m = PHYS_TO_VM_PAGE(oldpde & PG_PS_FRAME); va < eva; va += PAGE_SIZE, m++) { if ((oldpde & (PG_M | PG_RW)) == (PG_M | PG_RW)) vm_page_dirty(m); if (oldpde & PG_A) vm_page_aflag_set(m, PGA_REFERENCED); if (TAILQ_EMPTY(&m->md.pv_list) && TAILQ_EMPTY(&pvh->pv_list)) vm_page_aflag_clear(m, PGA_WRITEABLE); pmap_delayed_invl_page(m); } } if (pmap == kernel_pmap) { pmap_remove_kernel_pde(pmap, pdq, sva); } else { mpte = pmap_remove_pt_page(pmap, sva); if (mpte != NULL) { pmap_resident_count_dec(pmap, 1); KASSERT(mpte->wire_count == NPTEPG, ("pmap_remove_pde: pte page wire count error")); mpte->wire_count = 0; pmap_add_delayed_free_list(mpte, free, FALSE); } } return (pmap_unuse_pt(pmap, sva, *pmap_pdpe(pmap, sva), free)); } /* * pmap_remove_pte: do the things to unmap a page in a process */ static int pmap_remove_pte(pmap_t pmap, pt_entry_t *ptq, vm_offset_t va, pd_entry_t ptepde, struct spglist *free, struct rwlock **lockp) { struct md_page *pvh; pt_entry_t oldpte, PG_A, PG_M, PG_RW; vm_page_t m; PG_A = pmap_accessed_bit(pmap); PG_M = pmap_modified_bit(pmap); PG_RW = pmap_rw_bit(pmap); PMAP_LOCK_ASSERT(pmap, MA_OWNED); oldpte = pte_load_clear(ptq); if (oldpte & PG_W) pmap->pm_stats.wired_count -= 1; pmap_resident_count_dec(pmap, 1); if (oldpte & PG_MANAGED) { m = PHYS_TO_VM_PAGE(oldpte & PG_FRAME); if ((oldpte & (PG_M | PG_RW)) == (PG_M | PG_RW)) vm_page_dirty(m); if (oldpte & PG_A) vm_page_aflag_set(m, PGA_REFERENCED); CHANGE_PV_LIST_LOCK_TO_VM_PAGE(lockp, m); pmap_pvh_free(&m->md, pmap, va); if (TAILQ_EMPTY(&m->md.pv_list) && (m->flags & PG_FICTITIOUS) == 0) { pvh = pa_to_pvh(VM_PAGE_TO_PHYS(m)); if (TAILQ_EMPTY(&pvh->pv_list)) vm_page_aflag_clear(m, PGA_WRITEABLE); } pmap_delayed_invl_page(m); } return (pmap_unuse_pt(pmap, va, ptepde, free)); } /* * Remove a single page from a process address space */ static void pmap_remove_page(pmap_t pmap, vm_offset_t va, pd_entry_t *pde, struct spglist *free) { struct rwlock *lock; pt_entry_t *pte, PG_V; PG_V = pmap_valid_bit(pmap); PMAP_LOCK_ASSERT(pmap, MA_OWNED); if ((*pde & PG_V) == 0) return; pte = pmap_pde_to_pte(pde, va); if ((*pte & PG_V) == 0) return; lock = NULL; pmap_remove_pte(pmap, pte, va, *pde, free, &lock); if (lock != NULL) rw_wunlock(lock); pmap_invalidate_page(pmap, va); } /* * Removes the specified range of addresses from the page table page. 
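* In the loop below, "va" marks the start of a pending run of global (PG_G) mappings, which must be flushed eagerly with pmap_invalidate_range() because global TLB entries survive a %cr3 reload; removals of non-global mappings merely set the return value, and the caller issues a single pmap_invalidate_all() afterwards.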
*/ static bool pmap_remove_ptes(pmap_t pmap, vm_offset_t sva, vm_offset_t eva, pd_entry_t *pde, struct spglist *free, struct rwlock **lockp) { pt_entry_t PG_G, *pte; vm_offset_t va; bool anyvalid; PMAP_LOCK_ASSERT(pmap, MA_OWNED); PG_G = pmap_global_bit(pmap); anyvalid = false; va = eva; for (pte = pmap_pde_to_pte(pde, sva); sva != eva; pte++, sva += PAGE_SIZE) { if (*pte == 0) { if (va != eva) { pmap_invalidate_range(pmap, va, sva); va = eva; } continue; } if ((*pte & PG_G) == 0) anyvalid = true; else if (va == eva) va = sva; if (pmap_remove_pte(pmap, pte, sva, *pde, free, lockp)) { sva += PAGE_SIZE; break; } } if (va != eva) pmap_invalidate_range(pmap, va, sva); return (anyvalid); } /* * Remove the given range of addresses from the specified map. * * It is assumed that the start and end are properly * rounded to the page size. */ void pmap_remove(pmap_t pmap, vm_offset_t sva, vm_offset_t eva) { struct rwlock *lock; vm_offset_t va_next; pml4_entry_t *pml4e; pdp_entry_t *pdpe; pd_entry_t ptpaddr, *pde; pt_entry_t PG_G, PG_V; struct spglist free; int anyvalid; PG_G = pmap_global_bit(pmap); PG_V = pmap_valid_bit(pmap); /* * Perform an unsynchronized read. This is, however, safe. */ if (pmap->pm_stats.resident_count == 0) return; anyvalid = 0; SLIST_INIT(&free); pmap_delayed_invl_started(); PMAP_LOCK(pmap); /* * special handling of removing one page. a very * common operation and easy to short circuit some * code. */ if (sva + PAGE_SIZE == eva) { pde = pmap_pde(pmap, sva); if (pde && (*pde & PG_PS) == 0) { pmap_remove_page(pmap, sva, pde, &free); goto out; } } lock = NULL; for (; sva < eva; sva = va_next) { if (pmap->pm_stats.resident_count == 0) break; pml4e = pmap_pml4e(pmap, sva); if ((*pml4e & PG_V) == 0) { va_next = (sva + NBPML4) & ~PML4MASK; if (va_next < sva) va_next = eva; continue; } pdpe = pmap_pml4e_to_pdpe(pml4e, sva); if ((*pdpe & PG_V) == 0) { va_next = (sva + NBPDP) & ~PDPMASK; if (va_next < sva) va_next = eva; continue; } /* * Calculate index for next page table. */ va_next = (sva + NBPDR) & ~PDRMASK; if (va_next < sva) va_next = eva; pde = pmap_pdpe_to_pde(pdpe, sva); ptpaddr = *pde; /* * Weed out invalid mappings. */ if (ptpaddr == 0) continue; /* * Check for large page. */ if ((ptpaddr & PG_PS) != 0) { /* * Are we removing the entire large page? If not, * demote the mapping and fall through. */ if (sva + NBPDR == va_next && eva >= va_next) { /* * The TLB entry for a PG_G mapping is * invalidated by pmap_remove_pde(). */ if ((ptpaddr & PG_G) == 0) anyvalid = 1; pmap_remove_pde(pmap, pde, sva, &free, &lock); continue; } else if (!pmap_demote_pde_locked(pmap, pde, sva, &lock)) { /* The large page mapping was destroyed. */ continue; } else ptpaddr = *pde; } /* * Limit our scan to either the end of the va represented * by the current page table page, or to the end of the * range being removed. */ if (va_next > eva) va_next = eva; if (pmap_remove_ptes(pmap, sva, va_next, pde, &free, &lock)) anyvalid = 1; } if (lock != NULL) rw_wunlock(lock); out: if (anyvalid) pmap_invalidate_all(pmap); PMAP_UNLOCK(pmap); pmap_delayed_invl_finished(); vm_page_free_pages_toq(&free, true); } /* * Routine: pmap_remove_all * Function: * Removes this physical page from * all physical maps in which it resides. * Reflects back modify bits to the pager. * * Notes: * Original versions of this routine were very * inefficient because they iteratively called * pmap_remove (slow...) 
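* The locking below follows a pattern common in this file: with the pv list lock held, PMAP_TRYLOCK() is attempted on each pmap; on failure, the pv list lock is dropped, the pmap lock is taken, the pv list lock is reacquired, and the saved pv_gen/md_gen generation counts reveal whether the pv lists changed in the unlocked window, forcing a retry.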
*/ void pmap_remove_all(vm_page_t m) { struct md_page *pvh; pv_entry_t pv; pmap_t pmap; struct rwlock *lock; pt_entry_t *pte, tpte, PG_A, PG_M, PG_RW; pd_entry_t *pde; vm_offset_t va; struct spglist free; int pvh_gen, md_gen; KASSERT((m->oflags & VPO_UNMANAGED) == 0, ("pmap_remove_all: page %p is not managed", m)); SLIST_INIT(&free); lock = VM_PAGE_TO_PV_LIST_LOCK(m); pvh = (m->flags & PG_FICTITIOUS) != 0 ? &pv_dummy : pa_to_pvh(VM_PAGE_TO_PHYS(m)); retry: rw_wlock(lock); while ((pv = TAILQ_FIRST(&pvh->pv_list)) != NULL) { pmap = PV_PMAP(pv); if (!PMAP_TRYLOCK(pmap)) { pvh_gen = pvh->pv_gen; rw_wunlock(lock); PMAP_LOCK(pmap); rw_wlock(lock); if (pvh_gen != pvh->pv_gen) { rw_wunlock(lock); PMAP_UNLOCK(pmap); goto retry; } } va = pv->pv_va; pde = pmap_pde(pmap, va); (void)pmap_demote_pde_locked(pmap, pde, va, &lock); PMAP_UNLOCK(pmap); } while ((pv = TAILQ_FIRST(&m->md.pv_list)) != NULL) { pmap = PV_PMAP(pv); if (!PMAP_TRYLOCK(pmap)) { pvh_gen = pvh->pv_gen; md_gen = m->md.pv_gen; rw_wunlock(lock); PMAP_LOCK(pmap); rw_wlock(lock); if (pvh_gen != pvh->pv_gen || md_gen != m->md.pv_gen) { rw_wunlock(lock); PMAP_UNLOCK(pmap); goto retry; } } PG_A = pmap_accessed_bit(pmap); PG_M = pmap_modified_bit(pmap); PG_RW = pmap_rw_bit(pmap); pmap_resident_count_dec(pmap, 1); pde = pmap_pde(pmap, pv->pv_va); KASSERT((*pde & PG_PS) == 0, ("pmap_remove_all: found" " a 2mpage in page %p's pv list", m)); pte = pmap_pde_to_pte(pde, pv->pv_va); tpte = pte_load_clear(pte); if (tpte & PG_W) pmap->pm_stats.wired_count--; if (tpte & PG_A) vm_page_aflag_set(m, PGA_REFERENCED); /* * Update the vm_page_t clean and reference bits. */ if ((tpte & (PG_M | PG_RW)) == (PG_M | PG_RW)) vm_page_dirty(m); pmap_unuse_pt(pmap, pv->pv_va, *pde, &free); pmap_invalidate_page(pmap, pv->pv_va); TAILQ_REMOVE(&m->md.pv_list, pv, pv_next); m->md.pv_gen++; free_pv_entry(pmap, pv); PMAP_UNLOCK(pmap); } vm_page_aflag_clear(m, PGA_WRITEABLE); rw_wunlock(lock); pmap_delayed_invl_wait(m); vm_page_free_pages_toq(&free, true); } /* * pmap_protect_pde: do the things to protect a 2mpage in a process */ static boolean_t pmap_protect_pde(pmap_t pmap, pd_entry_t *pde, vm_offset_t sva, vm_prot_t prot) { pd_entry_t newpde, oldpde; vm_offset_t eva, va; vm_page_t m; boolean_t anychanged; pt_entry_t PG_G, PG_M, PG_RW; PG_G = pmap_global_bit(pmap); PG_M = pmap_modified_bit(pmap); PG_RW = pmap_rw_bit(pmap); PMAP_LOCK_ASSERT(pmap, MA_OWNED); KASSERT((sva & PDRMASK) == 0, ("pmap_protect_pde: sva is not 2mpage aligned")); anychanged = FALSE; retry: oldpde = newpde = *pde; if ((oldpde & (PG_MANAGED | PG_M | PG_RW)) == (PG_MANAGED | PG_M | PG_RW)) { eva = sva + NBPDR; for (va = sva, m = PHYS_TO_VM_PAGE(oldpde & PG_PS_FRAME); va < eva; va += PAGE_SIZE, m++) vm_page_dirty(m); } if ((prot & VM_PROT_WRITE) == 0) newpde &= ~(PG_RW | PG_M); if ((prot & VM_PROT_EXECUTE) == 0) newpde |= pg_nx; if (newpde != oldpde) { /* * As an optimization to future operations on this PDE, clear * PG_PROMOTED. The impending invalidation will remove any * lingering 4KB page mappings from the TLB. */ if (!atomic_cmpset_long(pde, oldpde, newpde & ~PG_PROMOTED)) goto retry; if ((oldpde & PG_G) != 0) pmap_invalidate_pde_page(kernel_pmap, sva, oldpde); else anychanged = TRUE; } return (anychanged); } /* * Set the physical protection on the * specified range of this map as requested. 
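* For example, pmap_protect(pmap, sva, eva, VM_PROT_READ) clears PG_RW and PG_M on every mapping in the range and sets pg_nx, while VM_PROT_NONE is simply forwarded to pmap_remove().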
*/ void pmap_protect(pmap_t pmap, vm_offset_t sva, vm_offset_t eva, vm_prot_t prot) { vm_offset_t va_next; pml4_entry_t *pml4e; pdp_entry_t *pdpe; pd_entry_t ptpaddr, *pde; pt_entry_t *pte, PG_G, PG_M, PG_RW, PG_V; boolean_t anychanged; KASSERT((prot & ~VM_PROT_ALL) == 0, ("invalid prot %x", prot)); if (prot == VM_PROT_NONE) { pmap_remove(pmap, sva, eva); return; } if ((prot & (VM_PROT_WRITE|VM_PROT_EXECUTE)) == (VM_PROT_WRITE|VM_PROT_EXECUTE)) return; PG_G = pmap_global_bit(pmap); PG_M = pmap_modified_bit(pmap); PG_V = pmap_valid_bit(pmap); PG_RW = pmap_rw_bit(pmap); anychanged = FALSE; /* * Although this function delays and batches the invalidation * of stale TLB entries, it does not need to call * pmap_delayed_invl_started() and * pmap_delayed_invl_finished(), because it does not * ordinarily destroy mappings. Stale TLB entries from * protection-only changes need only be invalidated before the * pmap lock is released, because protection-only changes do * not destroy PV entries. Even operations that iterate over * a physical page's PV list of mappings, like * pmap_remove_write(), acquire the pmap lock for each * mapping. Consequently, for protection-only changes, the * pmap lock suffices to synchronize both page table and TLB * updates. * * This function only destroys a mapping if pmap_demote_pde() * fails. In that case, stale TLB entries are immediately * invalidated. */ PMAP_LOCK(pmap); for (; sva < eva; sva = va_next) { pml4e = pmap_pml4e(pmap, sva); if ((*pml4e & PG_V) == 0) { va_next = (sva + NBPML4) & ~PML4MASK; if (va_next < sva) va_next = eva; continue; } pdpe = pmap_pml4e_to_pdpe(pml4e, sva); if ((*pdpe & PG_V) == 0) { va_next = (sva + NBPDP) & ~PDPMASK; if (va_next < sva) va_next = eva; continue; } va_next = (sva + NBPDR) & ~PDRMASK; if (va_next < sva) va_next = eva; pde = pmap_pdpe_to_pde(pdpe, sva); ptpaddr = *pde; /* * Weed out invalid mappings. */ if (ptpaddr == 0) continue; /* * Check for large page. */ if ((ptpaddr & PG_PS) != 0) { /* * Are we protecting the entire large page? If not, * demote the mapping and fall through. */ if (sva + NBPDR == va_next && eva >= va_next) { /* * The TLB entry for a PG_G mapping is * invalidated by pmap_protect_pde(). */ if (pmap_protect_pde(pmap, pde, sva, prot)) anychanged = TRUE; continue; } else if (!pmap_demote_pde(pmap, pde, sva)) { /* * The large page mapping was destroyed. */ continue; } } if (va_next > eva) va_next = eva; for (pte = pmap_pde_to_pte(pde, sva); sva != va_next; pte++, sva += PAGE_SIZE) { pt_entry_t obits, pbits; vm_page_t m; retry: obits = pbits = *pte; if ((pbits & PG_V) == 0) continue; if ((prot & VM_PROT_WRITE) == 0) { if ((pbits & (PG_MANAGED | PG_M | PG_RW)) == (PG_MANAGED | PG_M | PG_RW)) { m = PHYS_TO_VM_PAGE(pbits & PG_FRAME); vm_page_dirty(m); } pbits &= ~(PG_RW | PG_M); } if ((prot & VM_PROT_EXECUTE) == 0) pbits |= pg_nx; if (pbits != obits) { if (!atomic_cmpset_long(pte, obits, pbits)) goto retry; if (obits & PG_G) pmap_invalidate_page(pmap, sva); else anychanged = TRUE; } } } if (anychanged) pmap_invalidate_all(pmap); PMAP_UNLOCK(pmap); } #if VM_NRESERVLEVEL > 0 /* * Tries to promote the 512, contiguous 4KB page mappings that are within a * single page table page (PTP) to a single 2MB page mapping. For promotion * to occur, two conditions must be met: (1) the 4KB page mappings must map * aligned, contiguous physical memory and (2) the 4KB page mappings must have * identical characteristics. 
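* Here "identical characteristics" means that all 512 PTEs agree in every bit covered by PG_PTE_PROMOTE (the protection, cache-mode, and related attribute bits); for example, a single PTE with a different cache mode aborts the promotion.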
*/ static void pmap_promote_pde(pmap_t pmap, pd_entry_t *pde, vm_offset_t va, struct rwlock **lockp) { pd_entry_t newpde; pt_entry_t *firstpte, oldpte, pa, *pte; pt_entry_t PG_G, PG_A, PG_M, PG_RW, PG_V; vm_page_t mpte; int PG_PTE_CACHE; PG_A = pmap_accessed_bit(pmap); PG_G = pmap_global_bit(pmap); PG_M = pmap_modified_bit(pmap); PG_V = pmap_valid_bit(pmap); PG_RW = pmap_rw_bit(pmap); PG_PTE_CACHE = pmap_cache_mask(pmap, 0); PMAP_LOCK_ASSERT(pmap, MA_OWNED); /* * Examine the first PTE in the specified PTP. Abort if this PTE is * either invalid, unused, or does not map the first 4KB physical page * within a 2MB page. */ firstpte = (pt_entry_t *)PHYS_TO_DMAP(*pde & PG_FRAME); setpde: newpde = *firstpte; if ((newpde & ((PG_FRAME & PDRMASK) | PG_A | PG_V)) != (PG_A | PG_V)) { atomic_add_long(&pmap_pde_p_failures, 1); CTR2(KTR_PMAP, "pmap_promote_pde: failure for va %#lx" " in pmap %p", va, pmap); return; } if ((newpde & (PG_M | PG_RW)) == PG_RW) { /* * When PG_M is already clear, PG_RW can be cleared without * a TLB invalidation. */ if (!atomic_cmpset_long(firstpte, newpde, newpde & ~PG_RW)) goto setpde; newpde &= ~PG_RW; } /* * Examine each of the other PTEs in the specified PTP. Abort if this * PTE maps an unexpected 4KB physical page or does not have identical * characteristics to the first PTE. */ pa = (newpde & (PG_PS_FRAME | PG_A | PG_V)) + NBPDR - PAGE_SIZE; for (pte = firstpte + NPTEPG - 1; pte > firstpte; pte--) { setpte: oldpte = *pte; if ((oldpte & (PG_FRAME | PG_A | PG_V)) != pa) { atomic_add_long(&pmap_pde_p_failures, 1); CTR2(KTR_PMAP, "pmap_promote_pde: failure for va %#lx" " in pmap %p", va, pmap); return; } if ((oldpte & (PG_M | PG_RW)) == PG_RW) { /* * When PG_M is already clear, PG_RW can be cleared * without a TLB invalidation. */ if (!atomic_cmpset_long(pte, oldpte, oldpte & ~PG_RW)) goto setpte; oldpte &= ~PG_RW; CTR2(KTR_PMAP, "pmap_promote_pde: protect for va %#lx" " in pmap %p", (oldpte & PG_FRAME & PDRMASK) | (va & ~PDRMASK), pmap); } if ((oldpte & PG_PTE_PROMOTE) != (newpde & PG_PTE_PROMOTE)) { atomic_add_long(&pmap_pde_p_failures, 1); CTR2(KTR_PMAP, "pmap_promote_pde: failure for va %#lx" " in pmap %p", va, pmap); return; } pa -= PAGE_SIZE; } /* * Save the page table page in its current state until the PDE * mapping the superpage is demoted by pmap_demote_pde() or * destroyed by pmap_remove_pde(). */ mpte = PHYS_TO_VM_PAGE(*pde & PG_FRAME); KASSERT(mpte >= vm_page_array && mpte < &vm_page_array[vm_page_array_size], ("pmap_promote_pde: page table page is out of range")); KASSERT(mpte->pindex == pmap_pde_pindex(va), ("pmap_promote_pde: page table page's pindex is wrong")); if (pmap_insert_pt_page(pmap, mpte)) { atomic_add_long(&pmap_pde_p_failures, 1); CTR2(KTR_PMAP, "pmap_promote_pde: failure for va %#lx in pmap %p", va, pmap); return; } /* * Promote the pv entries. */ if ((newpde & PG_MANAGED) != 0) pmap_pv_promote_pde(pmap, va, newpde & PG_PS_FRAME, lockp); /* * Propagate the PAT index to its proper position. */ newpde = pmap_swap_pat(pmap, newpde); /* * Map the superpage. */ if (workaround_erratum383) pmap_update_pde(pmap, va, pde, PG_PS | newpde); else pde_store(pde, PG_PROMOTED | PG_PS | newpde); atomic_add_long(&pmap_pde_promotions, 1); CTR2(KTR_PMAP, "pmap_promote_pde: success for va %#lx" " in pmap %p", va, pmap); } #endif /* VM_NRESERVLEVEL > 0 */ /* * Insert the given physical page (p) at * the specified virtual address (v) in the * target physical map with the protection requested. 
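 *
 * A hypothetical example call, not taken from this file:
 *
 *	(void)pmap_enter(pmap, va, m, VM_PROT_READ | VM_PROT_WRITE,
 *	    VM_PROT_WRITE | PMAP_ENTER_WIRED, 0);
 *
 * creates a wired, writeable 4KB mapping (psind 0) of page m at va;
 * passing VM_PROT_WRITE in "flags" additionally pre-sets PG_M because
 * the access that provoked the call was itself a write.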
* * If specified, the page will be wired down, meaning * that the related pte can not be reclaimed. * * NB: This is the only routine which MAY NOT lazy-evaluate * or lose information. That is, this routine must actually * insert this page into the given map NOW. * * When destroying both a page table and PV entry, this function * performs the TLB invalidation before releasing the PV list * lock, so we do not need pmap_delayed_invl_page() calls here. */ int pmap_enter(pmap_t pmap, vm_offset_t va, vm_page_t m, vm_prot_t prot, u_int flags, int8_t psind) { struct rwlock *lock; pd_entry_t *pde; pt_entry_t *pte, PG_G, PG_A, PG_M, PG_RW, PG_V; pt_entry_t newpte, origpte; pv_entry_t pv; vm_paddr_t opa, pa; vm_page_t mpte, om; int rv; boolean_t nosleep; PG_A = pmap_accessed_bit(pmap); PG_G = pmap_global_bit(pmap); PG_M = pmap_modified_bit(pmap); PG_V = pmap_valid_bit(pmap); PG_RW = pmap_rw_bit(pmap); va = trunc_page(va); KASSERT(va <= VM_MAX_KERNEL_ADDRESS, ("pmap_enter: toobig")); KASSERT(va < UPT_MIN_ADDRESS || va >= UPT_MAX_ADDRESS, ("pmap_enter: invalid to pmap_enter page table pages (va: 0x%lx)", va)); KASSERT((m->oflags & VPO_UNMANAGED) != 0 || va < kmi.clean_sva || va >= kmi.clean_eva, ("pmap_enter: managed mapping within the clean submap")); if ((m->oflags & VPO_UNMANAGED) == 0 && !vm_page_xbusied(m)) VM_OBJECT_ASSERT_LOCKED(m->object); KASSERT((flags & PMAP_ENTER_RESERVED) == 0, ("pmap_enter: flags %u has reserved bits set", flags)); pa = VM_PAGE_TO_PHYS(m); newpte = (pt_entry_t)(pa | PG_A | PG_V); if ((flags & VM_PROT_WRITE) != 0) newpte |= PG_M; if ((prot & VM_PROT_WRITE) != 0) newpte |= PG_RW; KASSERT((newpte & (PG_M | PG_RW)) != PG_M, ("pmap_enter: flags includes VM_PROT_WRITE but prot doesn't")); if ((prot & VM_PROT_EXECUTE) == 0) newpte |= pg_nx; if ((flags & PMAP_ENTER_WIRED) != 0) newpte |= PG_W; if (va < VM_MAXUSER_ADDRESS) newpte |= PG_U; if (pmap == kernel_pmap) newpte |= PG_G; newpte |= pmap_cache_bits(pmap, m->md.pat_mode, psind > 0); /* * Set modified bit gratuitously for writeable mappings if * the page is unmanaged. We do not want to take a fault * to do the dirty bit accounting for these mappings. */ if ((m->oflags & VPO_UNMANAGED) != 0) { if ((newpte & PG_RW) != 0) newpte |= PG_M; } else newpte |= PG_MANAGED; lock = NULL; PMAP_LOCK(pmap); if (psind == 1) { /* Assert the required virtual and physical alignment. */ KASSERT((va & PDRMASK) == 0, ("pmap_enter: va unaligned")); KASSERT(m->psind > 0, ("pmap_enter: m->psind < psind")); rv = pmap_enter_pde(pmap, va, newpte | PG_PS, flags, m, &lock); goto out; } mpte = NULL; /* * In the case that a page table page is not * resident, we are creating it here. */ retry: pde = pmap_pde(pmap, va); if (pde != NULL && (*pde & PG_V) != 0 && ((*pde & PG_PS) == 0 || pmap_demote_pde_locked(pmap, pde, va, &lock))) { pte = pmap_pde_to_pte(pde, va); if (va < VM_MAXUSER_ADDRESS && mpte == NULL) { mpte = PHYS_TO_VM_PAGE(*pde & PG_FRAME); mpte->wire_count++; } } else if (va < VM_MAXUSER_ADDRESS) { /* * Here if the pte page isn't mapped, or if it has been * deallocated. */ nosleep = (flags & PMAP_ENTER_NOSLEEP) != 0; mpte = _pmap_allocpte(pmap, pmap_pde_pindex(va), nosleep ? NULL : &lock); if (mpte == NULL && nosleep) { rv = KERN_RESOURCE_SHORTAGE; goto out; } goto retry; } else panic("pmap_enter: invalid page directory va=%#lx", va); origpte = *pte; pv = NULL; /* * Is the specified virtual address already mapped? */ if ((origpte & PG_V) != 0) { /* * Wiring change, just update stats. 
We don't worry about * wiring PT pages as they remain resident as long as there * are valid mappings in them. Hence, if a user page is wired, * the PT page will be also. */ if ((newpte & PG_W) != 0 && (origpte & PG_W) == 0) pmap->pm_stats.wired_count++; else if ((newpte & PG_W) == 0 && (origpte & PG_W) != 0) pmap->pm_stats.wired_count--; /* * Remove the extra PT page reference. */ if (mpte != NULL) { mpte->wire_count--; KASSERT(mpte->wire_count > 0, ("pmap_enter: missing reference to page table page," " va: 0x%lx", va)); } /* * Has the physical page changed? */ opa = origpte & PG_FRAME; if (opa == pa) { /* * No, might be a protection or wiring change. */ if ((origpte & PG_MANAGED) != 0 && (newpte & PG_RW) != 0) vm_page_aflag_set(m, PGA_WRITEABLE); if (((origpte ^ newpte) & ~(PG_M | PG_A)) == 0) goto unchanged; goto validate; } /* * The physical page has changed. Temporarily invalidate * the mapping. This ensures that all threads sharing the * pmap keep a consistent view of the mapping, which is * necessary for the correct handling of COW faults. It * also permits reuse of the old mapping's PV entry, * avoiding an allocation. * * For consistency, handle unmanaged mappings the same way. */ origpte = pte_load_clear(pte); KASSERT((origpte & PG_FRAME) == opa, ("pmap_enter: unexpected pa update for %#lx", va)); if ((origpte & PG_MANAGED) != 0) { om = PHYS_TO_VM_PAGE(opa); /* * The pmap lock is sufficient to synchronize with * concurrent calls to pmap_page_test_mappings() and * pmap_ts_referenced(). */ if ((origpte & (PG_M | PG_RW)) == (PG_M | PG_RW)) vm_page_dirty(om); if ((origpte & PG_A) != 0) vm_page_aflag_set(om, PGA_REFERENCED); CHANGE_PV_LIST_LOCK_TO_PHYS(&lock, opa); pv = pmap_pvh_remove(&om->md, pmap, va); KASSERT(pv != NULL, ("pmap_enter: no PV entry for %#lx", va)); if ((newpte & PG_MANAGED) == 0) free_pv_entry(pmap, pv); if ((om->aflags & PGA_WRITEABLE) != 0 && TAILQ_EMPTY(&om->md.pv_list) && ((om->flags & PG_FICTITIOUS) != 0 || TAILQ_EMPTY(&pa_to_pvh(opa)->pv_list))) vm_page_aflag_clear(om, PGA_WRITEABLE); } if ((origpte & PG_A) != 0) pmap_invalidate_page(pmap, va); origpte = 0; } else { /* * Increment the counters. */ if ((newpte & PG_W) != 0) pmap->pm_stats.wired_count++; pmap_resident_count_inc(pmap, 1); } /* * Enter on the PV list if part of our managed memory. */ if ((newpte & PG_MANAGED) != 0) { if (pv == NULL) { pv = get_pv_entry(pmap, &lock); pv->pv_va = va; } CHANGE_PV_LIST_LOCK_TO_PHYS(&lock, pa); TAILQ_INSERT_TAIL(&m->md.pv_list, pv, pv_next); m->md.pv_gen++; if ((newpte & PG_RW) != 0) vm_page_aflag_set(m, PGA_WRITEABLE); } /* * Update the PTE. */ if ((origpte & PG_V) != 0) { validate: origpte = pte_load_store(pte, newpte); KASSERT((origpte & PG_FRAME) == pa, ("pmap_enter: unexpected pa update for %#lx", va)); if ((newpte & PG_M) == 0 && (origpte & (PG_M | PG_RW)) == (PG_M | PG_RW)) { if ((origpte & PG_MANAGED) != 0) vm_page_dirty(m); /* * Although the PTE may still have PG_RW set, TLB * invalidation may nonetheless be required because * the PTE no longer has PG_M set. */ } else if ((origpte & PG_NX) != 0 || (newpte & PG_NX) == 0) { /* * This PTE change does not require TLB invalidation. */ goto unchanged; } if ((origpte & PG_A) != 0) pmap_invalidate_page(pmap, va); } else pte_store(pte, newpte); unchanged: #if VM_NRESERVLEVEL > 0 /* * If both the page table page and the reservation are fully * populated, then attempt promotion. 
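 *
 * Here "fully populated" means: the PTP backing this 2MB region has
 * all NPTEPG (512) of its PTEs in use (mpte is NULL when no PTP
 * reference was taken, e.g. for kernel addresses), the page is not
 * fictitious, and vm_reserv_level_iffullpop() reports the page's
 * reservation complete.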
*/ if ((mpte == NULL || mpte->wire_count == NPTEPG) && pmap_ps_enabled(pmap) && (m->flags & PG_FICTITIOUS) == 0 && vm_reserv_level_iffullpop(m) == 0) pmap_promote_pde(pmap, pde, va, &lock); #endif rv = KERN_SUCCESS; out: if (lock != NULL) rw_wunlock(lock); PMAP_UNLOCK(pmap); return (rv); } /* * Tries to create a read- and/or execute-only 2MB page mapping. Returns true * if successful. Returns false if (1) a page table page cannot be allocated * without sleeping, (2) a mapping already exists at the specified virtual * address, or (3) a PV entry cannot be allocated without reclaiming another * PV entry. */ static bool pmap_enter_2mpage(pmap_t pmap, vm_offset_t va, vm_page_t m, vm_prot_t prot, struct rwlock **lockp) { pd_entry_t newpde; pt_entry_t PG_V; PMAP_LOCK_ASSERT(pmap, MA_OWNED); PG_V = pmap_valid_bit(pmap); newpde = VM_PAGE_TO_PHYS(m) | pmap_cache_bits(pmap, m->md.pat_mode, 1) | PG_PS | PG_V; if ((m->oflags & VPO_UNMANAGED) == 0) newpde |= PG_MANAGED; if ((prot & VM_PROT_EXECUTE) == 0) newpde |= pg_nx; if (va < VM_MAXUSER_ADDRESS) newpde |= PG_U; return (pmap_enter_pde(pmap, va, newpde, PMAP_ENTER_NOSLEEP | PMAP_ENTER_NOREPLACE | PMAP_ENTER_NORECLAIM, NULL, lockp) == KERN_SUCCESS); } /* * Tries to create the specified 2MB page mapping. Returns KERN_SUCCESS if * the mapping was created, and either KERN_FAILURE or KERN_RESOURCE_SHORTAGE * otherwise. Returns KERN_FAILURE if PMAP_ENTER_NOREPLACE was specified and * a mapping already exists at the specified virtual address. Returns * KERN_RESOURCE_SHORTAGE if PMAP_ENTER_NOSLEEP was specified and a page table * page allocation failed. Returns KERN_RESOURCE_SHORTAGE if * PMAP_ENTER_NORECLAIM was specified and a PV entry allocation failed. * * The parameter "m" is only used when creating a managed, writeable mapping. */ static int pmap_enter_pde(pmap_t pmap, vm_offset_t va, pd_entry_t newpde, u_int flags, vm_page_t m, struct rwlock **lockp) { struct spglist free; pd_entry_t oldpde, *pde; pt_entry_t PG_G, PG_RW, PG_V; vm_page_t mt, pdpg; PG_G = pmap_global_bit(pmap); PG_RW = pmap_rw_bit(pmap); KASSERT((newpde & (pmap_modified_bit(pmap) | PG_RW)) != PG_RW, ("pmap_enter_pde: newpde is missing PG_M")); PG_V = pmap_valid_bit(pmap); PMAP_LOCK_ASSERT(pmap, MA_OWNED); if ((pdpg = pmap_allocpde(pmap, va, (flags & PMAP_ENTER_NOSLEEP) != 0 ? NULL : lockp)) == NULL) { CTR2(KTR_PMAP, "pmap_enter_pde: failure for va %#lx" " in pmap %p", va, pmap); return (KERN_RESOURCE_SHORTAGE); } pde = (pd_entry_t *)PHYS_TO_DMAP(VM_PAGE_TO_PHYS(pdpg)); pde = &pde[pmap_pde_index(va)]; oldpde = *pde; if ((oldpde & PG_V) != 0) { KASSERT(pdpg->wire_count > 1, ("pmap_enter_pde: pdpg's wire count is too low")); if ((flags & PMAP_ENTER_NOREPLACE) != 0) { pdpg->wire_count--; CTR2(KTR_PMAP, "pmap_enter_pde: failure for va %#lx" " in pmap %p", va, pmap); return (KERN_FAILURE); } /* Break the existing mapping(s). */ SLIST_INIT(&free); if ((oldpde & PG_PS) != 0) { /* * The reference to the PD page that was acquired by * pmap_allocpde() ensures that it won't be freed. * However, if the PDE resulted from a promotion, then * a reserved PT page could be freed. 
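 * In that case the freed pages are collected on the local "free" list
 * and handed back to the VM system via vm_page_free_pages_toq() once
 * the old mapping has been fully dismantled.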
*/ (void)pmap_remove_pde(pmap, pde, va, &free, lockp); if ((oldpde & PG_G) == 0) pmap_invalidate_pde_page(pmap, va, oldpde); } else { pmap_delayed_invl_started(); if (pmap_remove_ptes(pmap, va, va + NBPDR, pde, &free, lockp)) pmap_invalidate_all(pmap); pmap_delayed_invl_finished(); } vm_page_free_pages_toq(&free, true); if (va >= VM_MAXUSER_ADDRESS) { mt = PHYS_TO_VM_PAGE(*pde & PG_FRAME); if (pmap_insert_pt_page(pmap, mt)) { /* * XXX Currently, this can't happen because * we do not perform pmap_enter(psind == 1) * on the kernel pmap. */ panic("pmap_enter_pde: trie insert failed"); } } else KASSERT(*pde == 0, ("pmap_enter_pde: non-zero pde %p", pde)); } if ((newpde & PG_MANAGED) != 0) { /* * Abort this mapping if its PV entry could not be created. */ if (!pmap_pv_insert_pde(pmap, va, newpde, flags, lockp)) { SLIST_INIT(&free); if (pmap_unwire_ptp(pmap, va, pdpg, &free)) { /* * Although "va" is not mapped, paging- * structure caches could nonetheless have * entries that refer to the freed page table * pages. Invalidate those entries. */ pmap_invalidate_page(pmap, va); vm_page_free_pages_toq(&free, true); } CTR2(KTR_PMAP, "pmap_enter_pde: failure for va %#lx" " in pmap %p", va, pmap); return (KERN_RESOURCE_SHORTAGE); } if ((newpde & PG_RW) != 0) { for (mt = m; mt < &m[NBPDR / PAGE_SIZE]; mt++) vm_page_aflag_set(mt, PGA_WRITEABLE); } } /* * Increment counters. */ if ((newpde & PG_W) != 0) pmap->pm_stats.wired_count += NBPDR / PAGE_SIZE; pmap_resident_count_inc(pmap, NBPDR / PAGE_SIZE); /* * Map the superpage. (This is not a promoted mapping; there will not * be any lingering 4KB page mappings in the TLB.) */ pde_store(pde, newpde); atomic_add_long(&pmap_pde_mappings, 1); CTR2(KTR_PMAP, "pmap_enter_pde: success for va %#lx" " in pmap %p", va, pmap); return (KERN_SUCCESS); } /* * Maps a sequence of resident pages belonging to the same object. * The sequence begins with the given page m_start. This page is * mapped at the given virtual address start. Each subsequent page is * mapped at a virtual address that is offset from start by the same * amount as the page is offset from m_start within the object. The * last page in the sequence is the page with the largest offset from * m_start that can be mapped at a virtual address less than the given * virtual address end. Not every virtual page between start and end * is mapped; only those for which a resident page exists with the * corresponding offset from m_start are mapped. */ void pmap_enter_object(pmap_t pmap, vm_offset_t start, vm_offset_t end, vm_page_t m_start, vm_prot_t prot) { struct rwlock *lock; vm_offset_t va; vm_page_t m, mpte; vm_pindex_t diff, psize; VM_OBJECT_ASSERT_LOCKED(m_start->object); psize = atop(end - start); mpte = NULL; m = m_start; lock = NULL; PMAP_LOCK(pmap); while (m != NULL && (diff = m->pindex - m_start->pindex) < psize) { va = start + ptoa(diff); if ((va & PDRMASK) == 0 && va + NBPDR <= end && m->psind == 1 && pmap_ps_enabled(pmap) && pmap_enter_2mpage(pmap, va, m, prot, &lock)) m = &m[NBPDR / PAGE_SIZE - 1]; else mpte = pmap_enter_quick_locked(pmap, va, m, prot, mpte, &lock); m = TAILQ_NEXT(m, listq); } if (lock != NULL) rw_wunlock(lock); PMAP_UNLOCK(pmap); } /* * this code makes some *MAJOR* assumptions: * 1. Current pmap & pmap exists. * 2. Not wired. * 3. Read access. * 4. No page table pages. * but is *MUCH* faster than pmap_enter... 
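 *
 * pmap_enter_object() above relies on exactly these assumptions when
 * it prefaults the resident pages of an object: each mapping it
 * creates through pmap_enter_quick_locked() is unwired and read-only,
 * and an existing mapping or a failed PTP/PV allocation is simply
 * skipped rather than retried.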
*/ void pmap_enter_quick(pmap_t pmap, vm_offset_t va, vm_page_t m, vm_prot_t prot) { struct rwlock *lock; lock = NULL; PMAP_LOCK(pmap); (void)pmap_enter_quick_locked(pmap, va, m, prot, NULL, &lock); if (lock != NULL) rw_wunlock(lock); PMAP_UNLOCK(pmap); } static vm_page_t pmap_enter_quick_locked(pmap_t pmap, vm_offset_t va, vm_page_t m, vm_prot_t prot, vm_page_t mpte, struct rwlock **lockp) { struct spglist free; pt_entry_t newpte, *pte, PG_V; KASSERT(va < kmi.clean_sva || va >= kmi.clean_eva || (m->oflags & VPO_UNMANAGED) != 0, ("pmap_enter_quick_locked: managed mapping within the clean submap")); PG_V = pmap_valid_bit(pmap); PMAP_LOCK_ASSERT(pmap, MA_OWNED); /* * In the case that a page table page is not * resident, we are creating it here. */ if (va < VM_MAXUSER_ADDRESS) { vm_pindex_t ptepindex; pd_entry_t *ptepa; /* * Calculate pagetable page index */ ptepindex = pmap_pde_pindex(va); if (mpte && (mpte->pindex == ptepindex)) { mpte->wire_count++; } else { /* * Get the page directory entry */ ptepa = pmap_pde(pmap, va); /* * If the page table page is mapped, we just increment * the hold count, and activate it. Otherwise, we * attempt to allocate a page table page. If this * attempt fails, we don't retry. Instead, we give up. */ if (ptepa && (*ptepa & PG_V) != 0) { if (*ptepa & PG_PS) return (NULL); mpte = PHYS_TO_VM_PAGE(*ptepa & PG_FRAME); mpte->wire_count++; } else { /* * Pass NULL instead of the PV list lock * pointer, because we don't intend to sleep. */ mpte = _pmap_allocpte(pmap, ptepindex, NULL); if (mpte == NULL) return (mpte); } } pte = (pt_entry_t *)PHYS_TO_DMAP(VM_PAGE_TO_PHYS(mpte)); pte = &pte[pmap_pte_index(va)]; } else { mpte = NULL; pte = vtopte(va); } if (*pte) { if (mpte != NULL) { mpte->wire_count--; mpte = NULL; } return (mpte); } /* * Enter on the PV list if part of our managed memory. */ if ((m->oflags & VPO_UNMANAGED) == 0 && !pmap_try_insert_pv_entry(pmap, va, m, lockp)) { if (mpte != NULL) { SLIST_INIT(&free); if (pmap_unwire_ptp(pmap, va, mpte, &free)) { /* * Although "va" is not mapped, paging- * structure caches could nonetheless have * entries that refer to the freed page table * pages. Invalidate those entries. */ pmap_invalidate_page(pmap, va); vm_page_free_pages_toq(&free, true); } mpte = NULL; } return (mpte); } /* * Increment counters */ pmap_resident_count_inc(pmap, 1); newpte = VM_PAGE_TO_PHYS(m) | PG_V | pmap_cache_bits(pmap, m->md.pat_mode, 0); if ((m->oflags & VPO_UNMANAGED) == 0) newpte |= PG_MANAGED; if ((prot & VM_PROT_EXECUTE) == 0) newpte |= pg_nx; if (va < VM_MAXUSER_ADDRESS) newpte |= PG_U; pte_store(pte, newpte); return (mpte); } /* * Make a temporary mapping for a physical address. This is only intended * to be used for panic dumps. */ void * pmap_kenter_temporary(vm_paddr_t pa, int i) { vm_offset_t va; va = (vm_offset_t)crashdumpmap + (i * PAGE_SIZE); pmap_kenter(va, pa); invlpg(va); return ((void *)crashdumpmap); } /* * This code maps large physical mmap regions into the * processor address space. Note that some shortcuts * are taken, but the code works. 
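 *
 * The shortcuts: only OBJT_DEVICE and OBJT_SG objects are accepted
 * (see the KASSERT below), and mappings are created only when "addr"
 * and "size" are both 2MB-aligned and the backing pages are physically
 * contiguous with identical memory attributes, in which case the
 * region is mapped entirely with 2MB page mappings.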
*/ void pmap_object_init_pt(pmap_t pmap, vm_offset_t addr, vm_object_t object, vm_pindex_t pindex, vm_size_t size) { pd_entry_t *pde; pt_entry_t PG_A, PG_M, PG_RW, PG_V; vm_paddr_t pa, ptepa; vm_page_t p, pdpg; int pat_mode; PG_A = pmap_accessed_bit(pmap); PG_M = pmap_modified_bit(pmap); PG_V = pmap_valid_bit(pmap); PG_RW = pmap_rw_bit(pmap); VM_OBJECT_ASSERT_WLOCKED(object); KASSERT(object->type == OBJT_DEVICE || object->type == OBJT_SG, ("pmap_object_init_pt: non-device object")); if ((addr & (NBPDR - 1)) == 0 && (size & (NBPDR - 1)) == 0) { if (!pmap_ps_enabled(pmap)) return; if (!vm_object_populate(object, pindex, pindex + atop(size))) return; p = vm_page_lookup(object, pindex); KASSERT(p->valid == VM_PAGE_BITS_ALL, ("pmap_object_init_pt: invalid page %p", p)); pat_mode = p->md.pat_mode; /* * Abort the mapping if the first page is not physically * aligned to a 2MB page boundary. */ ptepa = VM_PAGE_TO_PHYS(p); if (ptepa & (NBPDR - 1)) return; /* * Skip the first page. Abort the mapping if the rest of * the pages are not physically contiguous or have differing * memory attributes. */ p = TAILQ_NEXT(p, listq); for (pa = ptepa + PAGE_SIZE; pa < ptepa + size; pa += PAGE_SIZE) { KASSERT(p->valid == VM_PAGE_BITS_ALL, ("pmap_object_init_pt: invalid page %p", p)); if (pa != VM_PAGE_TO_PHYS(p) || pat_mode != p->md.pat_mode) return; p = TAILQ_NEXT(p, listq); } /* * Map using 2MB pages. Since "ptepa" is 2M aligned and * "size" is a multiple of 2M, adding the PAT setting to "pa" * will not affect the termination of this loop. */ PMAP_LOCK(pmap); for (pa = ptepa | pmap_cache_bits(pmap, pat_mode, 1); pa < ptepa + size; pa += NBPDR) { pdpg = pmap_allocpde(pmap, addr, NULL); if (pdpg == NULL) { /* * The creation of mappings below is only an * optimization. If a page directory page * cannot be allocated without blocking, * continue on to the next mapping rather than * blocking. */ addr += NBPDR; continue; } pde = (pd_entry_t *)PHYS_TO_DMAP(VM_PAGE_TO_PHYS(pdpg)); pde = &pde[pmap_pde_index(addr)]; if ((*pde & PG_V) == 0) { pde_store(pde, pa | PG_PS | PG_M | PG_A | PG_U | PG_RW | PG_V); pmap_resident_count_inc(pmap, NBPDR / PAGE_SIZE); atomic_add_long(&pmap_pde_mappings, 1); } else { /* Continue on if the PDE is already valid. */ pdpg->wire_count--; KASSERT(pdpg->wire_count > 0, ("pmap_object_init_pt: missing reference " "to page directory page, va: 0x%lx", addr)); } addr += NBPDR; } PMAP_UNLOCK(pmap); } } /* * Clear the wired attribute from the mappings for the specified range of * addresses in the given pmap. Every valid mapping within that range * must have the wired attribute set. In contrast, invalid mappings * cannot have the wired attribute set, so they are ignored. * * The wired attribute of the page table entry is not a hardware * feature, so there is no need to invalidate any TLB entries. * Since pmap_demote_pde() for the wired entry must never fail, * pmap_delayed_invl_started()/finished() calls around the * function are not needed. 
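 *
 * For example, unwiring a 2MB range that is mapped by a single wired
 * PDE only clears PG_W from that PDE and subtracts NBPDR / PAGE_SIZE
 * from the pmap's wired count; a partially covered 2MB mapping must
 * first be demoted, and that demotion is required to succeed.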
*/ void pmap_unwire(pmap_t pmap, vm_offset_t sva, vm_offset_t eva) { vm_offset_t va_next; pml4_entry_t *pml4e; pdp_entry_t *pdpe; pd_entry_t *pde; pt_entry_t *pte, PG_V; PG_V = pmap_valid_bit(pmap); PMAP_LOCK(pmap); for (; sva < eva; sva = va_next) { pml4e = pmap_pml4e(pmap, sva); if ((*pml4e & PG_V) == 0) { va_next = (sva + NBPML4) & ~PML4MASK; if (va_next < sva) va_next = eva; continue; } pdpe = pmap_pml4e_to_pdpe(pml4e, sva); if ((*pdpe & PG_V) == 0) { va_next = (sva + NBPDP) & ~PDPMASK; if (va_next < sva) va_next = eva; continue; } va_next = (sva + NBPDR) & ~PDRMASK; if (va_next < sva) va_next = eva; pde = pmap_pdpe_to_pde(pdpe, sva); if ((*pde & PG_V) == 0) continue; if ((*pde & PG_PS) != 0) { if ((*pde & PG_W) == 0) panic("pmap_unwire: pde %#jx is missing PG_W", (uintmax_t)*pde); /* * Are we unwiring the entire large page? If not, * demote the mapping and fall through. */ if (sva + NBPDR == va_next && eva >= va_next) { atomic_clear_long(pde, PG_W); pmap->pm_stats.wired_count -= NBPDR / PAGE_SIZE; continue; } else if (!pmap_demote_pde(pmap, pde, sva)) panic("pmap_unwire: demotion failed"); } if (va_next > eva) va_next = eva; for (pte = pmap_pde_to_pte(pde, sva); sva != va_next; pte++, sva += PAGE_SIZE) { if ((*pte & PG_V) == 0) continue; if ((*pte & PG_W) == 0) panic("pmap_unwire: pte %#jx is missing PG_W", (uintmax_t)*pte); /* * PG_W must be cleared atomically. Although the pmap * lock synchronizes access to PG_W, another processor * could be setting PG_M and/or PG_A concurrently. */ atomic_clear_long(pte, PG_W); pmap->pm_stats.wired_count--; } } PMAP_UNLOCK(pmap); } /* * Copy the range specified by src_addr/len * from the source map to the range dst_addr/len * in the destination map. * * This routine is only advisory and need not do anything. */ void pmap_copy(pmap_t dst_pmap, pmap_t src_pmap, vm_offset_t dst_addr, vm_size_t len, vm_offset_t src_addr) { struct rwlock *lock; struct spglist free; vm_offset_t addr; vm_offset_t end_addr = src_addr + len; vm_offset_t va_next; vm_page_t dst_pdpg, dstmpte, srcmpte; pt_entry_t PG_A, PG_M, PG_V; if (dst_addr != src_addr) return; if (dst_pmap->pm_type != src_pmap->pm_type) return; /* * EPT page table entries that require emulation of A/D bits are * sensitive to clearing the PG_A bit (aka EPT_PG_READ). Although * we clear PG_M (aka EPT_PG_WRITE) concomitantly, the PG_U bit * (aka EPT_PG_EXECUTE) could still be set. Since some EPT * implementations flag an EPT misconfiguration for exec-only * mappings we skip this function entirely for emulated pmaps. 
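 * (The same constraint is why safe_to_clear_referenced() below refuses
 * to clear the referenced bit of an EPT entry whose EPT_PG_WRITE bit
 * is set.)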
*/ if (pmap_emulate_ad_bits(dst_pmap)) return; lock = NULL; if (dst_pmap < src_pmap) { PMAP_LOCK(dst_pmap); PMAP_LOCK(src_pmap); } else { PMAP_LOCK(src_pmap); PMAP_LOCK(dst_pmap); } PG_A = pmap_accessed_bit(dst_pmap); PG_M = pmap_modified_bit(dst_pmap); PG_V = pmap_valid_bit(dst_pmap); for (addr = src_addr; addr < end_addr; addr = va_next) { pt_entry_t *src_pte, *dst_pte; pml4_entry_t *pml4e; pdp_entry_t *pdpe; pd_entry_t srcptepaddr, *pde; KASSERT(addr < UPT_MIN_ADDRESS, ("pmap_copy: invalid to pmap_copy page tables")); pml4e = pmap_pml4e(src_pmap, addr); if ((*pml4e & PG_V) == 0) { va_next = (addr + NBPML4) & ~PML4MASK; if (va_next < addr) va_next = end_addr; continue; } pdpe = pmap_pml4e_to_pdpe(pml4e, addr); if ((*pdpe & PG_V) == 0) { va_next = (addr + NBPDP) & ~PDPMASK; if (va_next < addr) va_next = end_addr; continue; } va_next = (addr + NBPDR) & ~PDRMASK; if (va_next < addr) va_next = end_addr; pde = pmap_pdpe_to_pde(pdpe, addr); srcptepaddr = *pde; if (srcptepaddr == 0) continue; if (srcptepaddr & PG_PS) { if ((addr & PDRMASK) != 0 || addr + NBPDR > end_addr) continue; dst_pdpg = pmap_allocpde(dst_pmap, addr, NULL); if (dst_pdpg == NULL) break; pde = (pd_entry_t *) PHYS_TO_DMAP(VM_PAGE_TO_PHYS(dst_pdpg)); pde = &pde[pmap_pde_index(addr)]; if (*pde == 0 && ((srcptepaddr & PG_MANAGED) == 0 || pmap_pv_insert_pde(dst_pmap, addr, srcptepaddr, PMAP_ENTER_NORECLAIM, &lock))) { *pde = srcptepaddr & ~PG_W; pmap_resident_count_inc(dst_pmap, NBPDR / PAGE_SIZE); atomic_add_long(&pmap_pde_mappings, 1); } else dst_pdpg->wire_count--; continue; } srcptepaddr &= PG_FRAME; srcmpte = PHYS_TO_VM_PAGE(srcptepaddr); KASSERT(srcmpte->wire_count > 0, ("pmap_copy: source page table page is unused")); if (va_next > end_addr) va_next = end_addr; src_pte = (pt_entry_t *)PHYS_TO_DMAP(srcptepaddr); src_pte = &src_pte[pmap_pte_index(addr)]; dstmpte = NULL; while (addr < va_next) { pt_entry_t ptetemp; ptetemp = *src_pte; /* * we only virtual-copy managed pages */ if ((ptetemp & PG_MANAGED) != 0) { if (dstmpte != NULL && dstmpte->pindex == pmap_pde_pindex(addr)) dstmpte->wire_count++; else if ((dstmpte = pmap_allocpte(dst_pmap, addr, NULL)) == NULL) goto out; dst_pte = (pt_entry_t *) PHYS_TO_DMAP(VM_PAGE_TO_PHYS(dstmpte)); dst_pte = &dst_pte[pmap_pte_index(addr)]; if (*dst_pte == 0 && pmap_try_insert_pv_entry(dst_pmap, addr, PHYS_TO_VM_PAGE(ptetemp & PG_FRAME), &lock)) { /* * Clear the wired, modified, and * accessed (referenced) bits * during the copy. */ *dst_pte = ptetemp & ~(PG_W | PG_M | PG_A); pmap_resident_count_inc(dst_pmap, 1); } else { SLIST_INIT(&free); if (pmap_unwire_ptp(dst_pmap, addr, dstmpte, &free)) { /* * Although "addr" is not * mapped, paging-structure * caches could nonetheless * have entries that refer to * the freed page table pages. * Invalidate those entries. */ pmap_invalidate_page(dst_pmap, addr); vm_page_free_pages_toq(&free, true); } goto out; } if (dstmpte->wire_count >= srcmpte->wire_count) break; } addr += PAGE_SIZE; src_pte++; } } out: if (lock != NULL) rw_wunlock(lock); PMAP_UNLOCK(src_pmap); PMAP_UNLOCK(dst_pmap); } /* * Zero the specified hardware page. */ void pmap_zero_page(vm_page_t m) { vm_offset_t va = PHYS_TO_DMAP(VM_PAGE_TO_PHYS(m)); pagezero((void *)va); } /* * Zero an area within a single hardware page. off and size must not * cover an area beyond a single hardware page.
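 *
 * A hypothetical use (valid_len is an assumed caller variable): after
 * reading valid_len bytes into a fresh page, the remainder can be
 * cleared with
 *
 *	pmap_zero_page_area(m, valid_len, PAGE_SIZE - valid_len);
 *
 * The full-page case falls through to pagezero(); everything else goes
 * through bzero() on the page's direct map address.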
*/ void pmap_zero_page_area(vm_page_t m, int off, int size) { vm_offset_t va = PHYS_TO_DMAP(VM_PAGE_TO_PHYS(m)); if (off == 0 && size == PAGE_SIZE) pagezero((void *)va); else bzero((char *)va + off, size); } /* * Copy 1 specified hardware page to another. */ void pmap_copy_page(vm_page_t msrc, vm_page_t mdst) { vm_offset_t src = PHYS_TO_DMAP(VM_PAGE_TO_PHYS(msrc)); vm_offset_t dst = PHYS_TO_DMAP(VM_PAGE_TO_PHYS(mdst)); pagecopy((void *)src, (void *)dst); } int unmapped_buf_allowed = 1; void pmap_copy_pages(vm_page_t ma[], vm_offset_t a_offset, vm_page_t mb[], vm_offset_t b_offset, int xfersize) { void *a_cp, *b_cp; vm_page_t pages[2]; vm_offset_t vaddr[2], a_pg_offset, b_pg_offset; int cnt; boolean_t mapped; while (xfersize > 0) { a_pg_offset = a_offset & PAGE_MASK; pages[0] = ma[a_offset >> PAGE_SHIFT]; b_pg_offset = b_offset & PAGE_MASK; pages[1] = mb[b_offset >> PAGE_SHIFT]; cnt = min(xfersize, PAGE_SIZE - a_pg_offset); cnt = min(cnt, PAGE_SIZE - b_pg_offset); mapped = pmap_map_io_transient(pages, vaddr, 2, FALSE); a_cp = (char *)vaddr[0] + a_pg_offset; b_cp = (char *)vaddr[1] + b_pg_offset; bcopy(a_cp, b_cp, cnt); if (__predict_false(mapped)) pmap_unmap_io_transient(pages, vaddr, 2, FALSE); a_offset += cnt; b_offset += cnt; xfersize -= cnt; } } /* * Returns true if the pmap's pv is one of the first * 16 pvs linked to from this page. This count may * be changed upwards or downwards in the future; it * is only necessary that true be returned for a small * subset of pmaps for proper page aging. */ boolean_t pmap_page_exists_quick(pmap_t pmap, vm_page_t m) { struct md_page *pvh; struct rwlock *lock; pv_entry_t pv; int loops = 0; boolean_t rv; KASSERT((m->oflags & VPO_UNMANAGED) == 0, ("pmap_page_exists_quick: page %p is not managed", m)); rv = FALSE; lock = VM_PAGE_TO_PV_LIST_LOCK(m); rw_rlock(lock); TAILQ_FOREACH(pv, &m->md.pv_list, pv_next) { if (PV_PMAP(pv) == pmap) { rv = TRUE; break; } loops++; if (loops >= 16) break; } if (!rv && loops < 16 && (m->flags & PG_FICTITIOUS) == 0) { pvh = pa_to_pvh(VM_PAGE_TO_PHYS(m)); TAILQ_FOREACH(pv, &pvh->pv_list, pv_next) { if (PV_PMAP(pv) == pmap) { rv = TRUE; break; } loops++; if (loops >= 16) break; } } rw_runlock(lock); return (rv); } /* * pmap_page_wired_mappings: * * Return the number of managed mappings to the given physical page * that are wired. */ int pmap_page_wired_mappings(vm_page_t m) { struct rwlock *lock; struct md_page *pvh; pmap_t pmap; pt_entry_t *pte; pv_entry_t pv; int count, md_gen, pvh_gen; if ((m->oflags & VPO_UNMANAGED) != 0) return (0); lock = VM_PAGE_TO_PV_LIST_LOCK(m); rw_rlock(lock); restart: count = 0; TAILQ_FOREACH(pv, &m->md.pv_list, pv_next) { pmap = PV_PMAP(pv); if (!PMAP_TRYLOCK(pmap)) { md_gen = m->md.pv_gen; rw_runlock(lock); PMAP_LOCK(pmap); rw_rlock(lock); if (md_gen != m->md.pv_gen) { PMAP_UNLOCK(pmap); goto restart; } } pte = pmap_pte(pmap, pv->pv_va); if ((*pte & PG_W) != 0) count++; PMAP_UNLOCK(pmap); } if ((m->flags & PG_FICTITIOUS) == 0) { pvh = pa_to_pvh(VM_PAGE_TO_PHYS(m)); TAILQ_FOREACH(pv, &pvh->pv_list, pv_next) { pmap = PV_PMAP(pv); if (!PMAP_TRYLOCK(pmap)) { md_gen = m->md.pv_gen; pvh_gen = pvh->pv_gen; rw_runlock(lock); PMAP_LOCK(pmap); rw_rlock(lock); if (md_gen != m->md.pv_gen || pvh_gen != pvh->pv_gen) { PMAP_UNLOCK(pmap); goto restart; } } pte = pmap_pde(pmap, pv->pv_va); if ((*pte & PG_W) != 0) count++; PMAP_UNLOCK(pmap); } } rw_runlock(lock); return (count); } /* * Returns TRUE if the given page is mapped individually or as part of * a 2mpage. Otherwise, returns FALSE. 
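 * Both the page's own pv list and, for non-fictitious pages, the pv
 * list of its containing 2MB region (pa_to_pvh()) are consulted, so a
 * page mapped only by a superpage mapping still reports TRUE.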
*/ boolean_t pmap_page_is_mapped(vm_page_t m) { struct rwlock *lock; boolean_t rv; if ((m->oflags & VPO_UNMANAGED) != 0) return (FALSE); lock = VM_PAGE_TO_PV_LIST_LOCK(m); rw_rlock(lock); rv = !TAILQ_EMPTY(&m->md.pv_list) || ((m->flags & PG_FICTITIOUS) == 0 && !TAILQ_EMPTY(&pa_to_pvh(VM_PAGE_TO_PHYS(m))->pv_list)); rw_runlock(lock); return (rv); } /* * Destroy all managed, non-wired mappings in the given user-space * pmap. This pmap cannot be active on any processor besides the * caller. * * This function cannot be applied to the kernel pmap. Moreover, it * is not intended for general use. It is only to be used during * process termination. Consequently, it can be implemented in ways * that make it faster than pmap_remove(). First, it can more quickly * destroy mappings by iterating over the pmap's collection of PV * entries, rather than searching the page table. Second, it doesn't * have to test and clear the page table entries atomically, because * no processor is currently accessing the user address space. In * particular, a page table entry's dirty bit won't change state once * this function starts. * * Although this function destroys all of the pmap's managed, * non-wired mappings, it can delay and batch the invalidation of TLB * entries without calling pmap_delayed_invl_started() and * pmap_delayed_invl_finished(). Because the pmap is not active on * any other processor, none of these TLB entries will ever be used * before their eventual invalidation. Consequently, there is no need * for either pmap_remove_all() or pmap_remove_write() to wait for * that eventual TLB invalidation. */ void pmap_remove_pages(pmap_t pmap) { pd_entry_t ptepde; pt_entry_t *pte, tpte; pt_entry_t PG_M, PG_RW, PG_V; struct spglist free; vm_page_t m, mpte, mt; pv_entry_t pv; struct md_page *pvh; struct pv_chunk *pc, *npc; struct rwlock *lock; int64_t bit; uint64_t inuse, bitmask; int allfree, field, freed, idx; boolean_t superpage; vm_paddr_t pa; /* * Assert that the given pmap is only active on the current * CPU. Unfortunately, we cannot block another CPU from * activating the pmap while this function is executing. */ KASSERT(pmap == PCPU_GET(curpmap), ("non-current pmap %p", pmap)); #ifdef INVARIANTS { cpuset_t other_cpus; other_cpus = all_cpus; critical_enter(); CPU_CLR(PCPU_GET(cpuid), &other_cpus); CPU_AND(&other_cpus, &pmap->pm_active); critical_exit(); KASSERT(CPU_EMPTY(&other_cpus), ("pmap active %p", pmap)); } #endif lock = NULL; PG_M = pmap_modified_bit(pmap); PG_V = pmap_valid_bit(pmap); PG_RW = pmap_rw_bit(pmap); SLIST_INIT(&free); PMAP_LOCK(pmap); TAILQ_FOREACH_SAFE(pc, &pmap->pm_pvchunk, pc_list, npc) { allfree = 1; freed = 0; for (field = 0; field < _NPCM; field++) { inuse = ~pc->pc_map[field] & pc_freemask[field]; while (inuse != 0) { bit = bsfq(inuse); bitmask = 1UL << bit; idx = field * 64 + bit; pv = &pc->pc_pventry[idx]; inuse &= ~bitmask; pte = pmap_pdpe(pmap, pv->pv_va); ptepde = *pte; pte = pmap_pdpe_to_pde(pte, pv->pv_va); tpte = *pte; if ((tpte & (PG_PS | PG_V)) == PG_V) { superpage = FALSE; ptepde = tpte; pte = (pt_entry_t *)PHYS_TO_DMAP(tpte & PG_FRAME); pte = &pte[pmap_pte_index(pv->pv_va)]; tpte = *pte; } else { /* * Keep track whether 'tpte' is a * superpage explicitly instead of * relying on PG_PS being set. * * This is because PG_PS is numerically * identical to PG_PTE_PAT and thus a * regular page could be mistaken for * a superpage. 
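 * (PG_PS and PG_PTE_PAT are the same bit, 0x80: in a PDE it selects a
 * 2MB page size, while in a 4KB PTE it selects a PAT entry, so the bit
 * alone cannot distinguish the two once the entry has been read out.)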
*/ superpage = TRUE; } if ((tpte & PG_V) == 0) { panic("bad pte va %lx pte %lx", pv->pv_va, tpte); } /* * We cannot remove wired pages from a process' mapping at this time */ if (tpte & PG_W) { allfree = 0; continue; } if (superpage) pa = tpte & PG_PS_FRAME; else pa = tpte & PG_FRAME; m = PHYS_TO_VM_PAGE(pa); KASSERT(m->phys_addr == pa, ("vm_page_t %p phys_addr mismatch %016jx %016jx", m, (uintmax_t)m->phys_addr, (uintmax_t)tpte)); KASSERT((m->flags & PG_FICTITIOUS) != 0 || m < &vm_page_array[vm_page_array_size], ("pmap_remove_pages: bad tpte %#jx", (uintmax_t)tpte)); pte_clear(pte); /* * Update the vm_page_t clean/reference bits. */ if ((tpte & (PG_M | PG_RW)) == (PG_M | PG_RW)) { if (superpage) { for (mt = m; mt < &m[NBPDR / PAGE_SIZE]; mt++) vm_page_dirty(mt); } else vm_page_dirty(m); } CHANGE_PV_LIST_LOCK_TO_VM_PAGE(&lock, m); /* Mark free */ pc->pc_map[field] |= bitmask; if (superpage) { pmap_resident_count_dec(pmap, NBPDR / PAGE_SIZE); pvh = pa_to_pvh(tpte & PG_PS_FRAME); TAILQ_REMOVE(&pvh->pv_list, pv, pv_next); pvh->pv_gen++; if (TAILQ_EMPTY(&pvh->pv_list)) { for (mt = m; mt < &m[NBPDR / PAGE_SIZE]; mt++) if ((mt->aflags & PGA_WRITEABLE) != 0 && TAILQ_EMPTY(&mt->md.pv_list)) vm_page_aflag_clear(mt, PGA_WRITEABLE); } mpte = pmap_remove_pt_page(pmap, pv->pv_va); if (mpte != NULL) { pmap_resident_count_dec(pmap, 1); KASSERT(mpte->wire_count == NPTEPG, ("pmap_remove_pages: pte page wire count error")); mpte->wire_count = 0; pmap_add_delayed_free_list(mpte, &free, FALSE); } } else { pmap_resident_count_dec(pmap, 1); TAILQ_REMOVE(&m->md.pv_list, pv, pv_next); m->md.pv_gen++; if ((m->aflags & PGA_WRITEABLE) != 0 && TAILQ_EMPTY(&m->md.pv_list) && (m->flags & PG_FICTITIOUS) == 0) { pvh = pa_to_pvh(VM_PAGE_TO_PHYS(m)); if (TAILQ_EMPTY(&pvh->pv_list)) vm_page_aflag_clear(m, PGA_WRITEABLE); } } pmap_unuse_pt(pmap, pv->pv_va, ptepde, &free); freed++; } } PV_STAT(atomic_add_long(&pv_entry_frees, freed)); PV_STAT(atomic_add_int(&pv_entry_spare, freed)); PV_STAT(atomic_subtract_long(&pv_entry_count, freed)); if (allfree) { TAILQ_REMOVE(&pmap->pm_pvchunk, pc, pc_list); free_pv_chunk(pc); } } if (lock != NULL) rw_wunlock(lock); pmap_invalidate_all(pmap); PMAP_UNLOCK(pmap); vm_page_free_pages_toq(&free, true); } static boolean_t pmap_page_test_mappings(vm_page_t m, boolean_t accessed, boolean_t modified) { struct rwlock *lock; pv_entry_t pv; struct md_page *pvh; pt_entry_t *pte, mask; pt_entry_t PG_A, PG_M, PG_RW, PG_V; pmap_t pmap; int md_gen, pvh_gen; boolean_t rv; rv = FALSE; lock = VM_PAGE_TO_PV_LIST_LOCK(m); rw_rlock(lock); restart: TAILQ_FOREACH(pv, &m->md.pv_list, pv_next) { pmap = PV_PMAP(pv); if (!PMAP_TRYLOCK(pmap)) { md_gen = m->md.pv_gen; rw_runlock(lock); PMAP_LOCK(pmap); rw_rlock(lock); if (md_gen != m->md.pv_gen) { PMAP_UNLOCK(pmap); goto restart; } } pte = pmap_pte(pmap, pv->pv_va); mask = 0; if (modified) { PG_M = pmap_modified_bit(pmap); PG_RW = pmap_rw_bit(pmap); mask |= PG_RW | PG_M; } if (accessed) { PG_A = pmap_accessed_bit(pmap); PG_V = pmap_valid_bit(pmap); mask |= PG_V | PG_A; } rv = (*pte & mask) == mask; PMAP_UNLOCK(pmap); if (rv) goto out; } if ((m->flags & PG_FICTITIOUS) == 0) { pvh = pa_to_pvh(VM_PAGE_TO_PHYS(m)); TAILQ_FOREACH(pv, &pvh->pv_list, pv_next) { pmap = PV_PMAP(pv); if (!PMAP_TRYLOCK(pmap)) { md_gen = m->md.pv_gen; pvh_gen = pvh->pv_gen; rw_runlock(lock); PMAP_LOCK(pmap); rw_rlock(lock); if (md_gen != m->md.pv_gen || pvh_gen != pvh->pv_gen) { PMAP_UNLOCK(pmap); goto restart; } } pte = pmap_pde(pmap, pv->pv_va); mask = 0; if (modified) { PG_M = 
pmap_modified_bit(pmap); PG_RW = pmap_rw_bit(pmap); mask |= PG_RW | PG_M; } if (accessed) { PG_A = pmap_accessed_bit(pmap); PG_V = pmap_valid_bit(pmap); mask |= PG_V | PG_A; } rv = (*pte & mask) == mask; PMAP_UNLOCK(pmap); if (rv) goto out; } } out: rw_runlock(lock); return (rv); } /* * pmap_is_modified: * * Return whether or not the specified physical page was modified * in any physical maps. */ boolean_t pmap_is_modified(vm_page_t m) { KASSERT((m->oflags & VPO_UNMANAGED) == 0, ("pmap_is_modified: page %p is not managed", m)); /* * If the page is not exclusive busied, then PGA_WRITEABLE cannot be * concurrently set while the object is locked. Thus, if PGA_WRITEABLE * is clear, no PTEs can have PG_M set. */ VM_OBJECT_ASSERT_WLOCKED(m->object); if (!vm_page_xbusied(m) && (m->aflags & PGA_WRITEABLE) == 0) return (FALSE); return (pmap_page_test_mappings(m, FALSE, TRUE)); } /* * pmap_is_prefaultable: * * Return whether or not the specified virtual address is eligible * for prefault. */ boolean_t pmap_is_prefaultable(pmap_t pmap, vm_offset_t addr) { pd_entry_t *pde; pt_entry_t *pte, PG_V; boolean_t rv; PG_V = pmap_valid_bit(pmap); rv = FALSE; PMAP_LOCK(pmap); pde = pmap_pde(pmap, addr); if (pde != NULL && (*pde & (PG_PS | PG_V)) == PG_V) { pte = pmap_pde_to_pte(pde, addr); rv = (*pte & PG_V) == 0; } PMAP_UNLOCK(pmap); return (rv); } /* * pmap_is_referenced: * * Return whether or not the specified physical page was referenced * in any physical maps. */ boolean_t pmap_is_referenced(vm_page_t m) { KASSERT((m->oflags & VPO_UNMANAGED) == 0, ("pmap_is_referenced: page %p is not managed", m)); return (pmap_page_test_mappings(m, TRUE, FALSE)); } /* * Clear the write and modified bits in each of the given page's mappings. */ void pmap_remove_write(vm_page_t m) { struct md_page *pvh; pmap_t pmap; struct rwlock *lock; pv_entry_t next_pv, pv; pd_entry_t *pde; pt_entry_t oldpte, *pte, PG_M, PG_RW; vm_offset_t va; int pvh_gen, md_gen; KASSERT((m->oflags & VPO_UNMANAGED) == 0, ("pmap_remove_write: page %p is not managed", m)); /* * If the page is not exclusive busied, then PGA_WRITEABLE cannot be * set by another thread while the object is locked. Thus, * if PGA_WRITEABLE is clear, no page table entries need updating. */ VM_OBJECT_ASSERT_WLOCKED(m->object); if (!vm_page_xbusied(m) && (m->aflags & PGA_WRITEABLE) == 0) return; lock = VM_PAGE_TO_PV_LIST_LOCK(m); pvh = (m->flags & PG_FICTITIOUS) != 0 ? 
&pv_dummy : pa_to_pvh(VM_PAGE_TO_PHYS(m)); retry_pv_loop: rw_wlock(lock); TAILQ_FOREACH_SAFE(pv, &pvh->pv_list, pv_next, next_pv) { pmap = PV_PMAP(pv); if (!PMAP_TRYLOCK(pmap)) { pvh_gen = pvh->pv_gen; rw_wunlock(lock); PMAP_LOCK(pmap); rw_wlock(lock); if (pvh_gen != pvh->pv_gen) { PMAP_UNLOCK(pmap); rw_wunlock(lock); goto retry_pv_loop; } } PG_RW = pmap_rw_bit(pmap); va = pv->pv_va; pde = pmap_pde(pmap, va); if ((*pde & PG_RW) != 0) (void)pmap_demote_pde_locked(pmap, pde, va, &lock); KASSERT(lock == VM_PAGE_TO_PV_LIST_LOCK(m), ("inconsistent pv lock %p %p for page %p", lock, VM_PAGE_TO_PV_LIST_LOCK(m), m)); PMAP_UNLOCK(pmap); } TAILQ_FOREACH(pv, &m->md.pv_list, pv_next) { pmap = PV_PMAP(pv); if (!PMAP_TRYLOCK(pmap)) { pvh_gen = pvh->pv_gen; md_gen = m->md.pv_gen; rw_wunlock(lock); PMAP_LOCK(pmap); rw_wlock(lock); if (pvh_gen != pvh->pv_gen || md_gen != m->md.pv_gen) { PMAP_UNLOCK(pmap); rw_wunlock(lock); goto retry_pv_loop; } } PG_M = pmap_modified_bit(pmap); PG_RW = pmap_rw_bit(pmap); pde = pmap_pde(pmap, pv->pv_va); KASSERT((*pde & PG_PS) == 0, ("pmap_remove_write: found a 2mpage in page %p's pv list", m)); pte = pmap_pde_to_pte(pde, pv->pv_va); retry: oldpte = *pte; if (oldpte & PG_RW) { if (!atomic_cmpset_long(pte, oldpte, oldpte & ~(PG_RW | PG_M))) goto retry; if ((oldpte & PG_M) != 0) vm_page_dirty(m); pmap_invalidate_page(pmap, pv->pv_va); } PMAP_UNLOCK(pmap); } rw_wunlock(lock); vm_page_aflag_clear(m, PGA_WRITEABLE); pmap_delayed_invl_wait(m); } static __inline boolean_t safe_to_clear_referenced(pmap_t pmap, pt_entry_t pte) { if (!pmap_emulate_ad_bits(pmap)) return (TRUE); KASSERT(pmap->pm_type == PT_EPT, ("invalid pm_type %d", pmap->pm_type)); /* * XWR = 010 or 110 will cause an unconditional EPT misconfiguration * so we don't let the referenced (aka EPT_PG_READ) bit to be cleared * if the EPT_PG_WRITE bit is set. */ if ((pte & EPT_PG_WRITE) != 0) return (FALSE); /* * XWR = 100 is allowed only if the PMAP_SUPPORTS_EXEC_ONLY is set. */ if ((pte & EPT_PG_EXECUTE) == 0 || ((pmap->pm_flags & PMAP_SUPPORTS_EXEC_ONLY) != 0)) return (TRUE); else return (FALSE); } /* * pmap_ts_referenced: * * Return a count of reference bits for a page, clearing those bits. * It is not necessary for every reference bit to be cleared, but it * is necessary that 0 only be returned when there are truly no * reference bits set. * * As an optimization, update the page's dirty field if a modified bit is * found while counting reference bits. This opportunistic update can be * performed at low cost and can eliminate the need for some future calls * to pmap_is_modified(). However, since this function stops after * finding PMAP_TS_REFERENCED_MAX reference bits, it may not detect some * dirty pages. Those dirty pages will only be detected by a future call * to pmap_is_modified(). * * A DI block is not needed within this function, because * invalidations are performed before the PV list lock is * released. */ int pmap_ts_referenced(vm_page_t m) { struct md_page *pvh; pv_entry_t pv, pvf; pmap_t pmap; struct rwlock *lock; pd_entry_t oldpde, *pde; pt_entry_t *pte, PG_A, PG_M, PG_RW; vm_offset_t va; vm_paddr_t pa; int cleared, md_gen, not_cleared, pvh_gen; struct spglist free; boolean_t demoted; KASSERT((m->oflags & VPO_UNMANAGED) == 0, ("pmap_ts_referenced: page %p is not managed", m)); SLIST_INIT(&free); cleared = 0; pa = VM_PAGE_TO_PHYS(m); lock = PHYS_TO_PV_LIST_LOCK(pa); pvh = (m->flags & PG_FICTITIOUS) != 0 ? 
&pv_dummy : pa_to_pvh(pa); rw_wlock(lock); retry: not_cleared = 0; if ((pvf = TAILQ_FIRST(&pvh->pv_list)) == NULL) goto small_mappings; pv = pvf; do { if (pvf == NULL) pvf = pv; pmap = PV_PMAP(pv); if (!PMAP_TRYLOCK(pmap)) { pvh_gen = pvh->pv_gen; rw_wunlock(lock); PMAP_LOCK(pmap); rw_wlock(lock); if (pvh_gen != pvh->pv_gen) { PMAP_UNLOCK(pmap); goto retry; } } PG_A = pmap_accessed_bit(pmap); PG_M = pmap_modified_bit(pmap); PG_RW = pmap_rw_bit(pmap); va = pv->pv_va; pde = pmap_pde(pmap, pv->pv_va); oldpde = *pde; if ((oldpde & (PG_M | PG_RW)) == (PG_M | PG_RW)) { /* * Although "oldpde" is mapping a 2MB page, because * this function is called at a 4KB page granularity, * we only update the 4KB page under test. */ vm_page_dirty(m); } if ((oldpde & PG_A) != 0) { /* * Since this reference bit is shared by 512 4KB * pages, it should not be cleared every time it is * tested. Apply a simple "hash" function on the * physical page number, the virtual superpage number, * and the pmap address to select one 4KB page out of * the 512 on which testing the reference bit will * result in clearing that reference bit. This * function is designed to avoid the selection of the * same 4KB page for every 2MB page mapping. * * On demotion, a mapping that hasn't been referenced * is simply destroyed. To avoid the possibility of a * subsequent page fault on a demoted wired mapping, * always leave its reference bit set. Moreover, * since the superpage is wired, the current state of * its reference bit won't affect page replacement. */ if ((((pa >> PAGE_SHIFT) ^ (pv->pv_va >> PDRSHIFT) ^ (uintptr_t)pmap) & (NPTEPG - 1)) == 0 && (oldpde & PG_W) == 0) { if (safe_to_clear_referenced(pmap, oldpde)) { atomic_clear_long(pde, PG_A); pmap_invalidate_page(pmap, pv->pv_va); demoted = FALSE; } else if (pmap_demote_pde_locked(pmap, pde, pv->pv_va, &lock)) { /* * Remove the mapping to a single page * so that a subsequent access may * repromote. Since the underlying * page table page is fully populated, * this removal never frees a page * table page. */ demoted = TRUE; va += VM_PAGE_TO_PHYS(m) - (oldpde & PG_PS_FRAME); pte = pmap_pde_to_pte(pde, va); pmap_remove_pte(pmap, pte, va, *pde, NULL, &lock); pmap_invalidate_page(pmap, va); } else demoted = TRUE; if (demoted) { /* * The superpage mapping was removed * entirely and therefore 'pv' is no * longer valid. */ if (pvf == pv) pvf = NULL; pv = NULL; } cleared++; KASSERT(lock == VM_PAGE_TO_PV_LIST_LOCK(m), ("inconsistent pv lock %p %p for page %p", lock, VM_PAGE_TO_PV_LIST_LOCK(m), m)); } else not_cleared++; } PMAP_UNLOCK(pmap); /* Rotate the PV list if it has more than one entry. 
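 * Rotation moves the entry just processed to the tail, so successive
 * calls sample the reference bits of different mappings once the
 * PMAP_TS_REFERENCED_MAX cutoff starts terminating scans early.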
*/ if (pv != NULL && TAILQ_NEXT(pv, pv_next) != NULL) { TAILQ_REMOVE(&pvh->pv_list, pv, pv_next); TAILQ_INSERT_TAIL(&pvh->pv_list, pv, pv_next); pvh->pv_gen++; } if (cleared + not_cleared >= PMAP_TS_REFERENCED_MAX) goto out; } while ((pv = TAILQ_FIRST(&pvh->pv_list)) != pvf); small_mappings: if ((pvf = TAILQ_FIRST(&m->md.pv_list)) == NULL) goto out; pv = pvf; do { if (pvf == NULL) pvf = pv; pmap = PV_PMAP(pv); if (!PMAP_TRYLOCK(pmap)) { pvh_gen = pvh->pv_gen; md_gen = m->md.pv_gen; rw_wunlock(lock); PMAP_LOCK(pmap); rw_wlock(lock); if (pvh_gen != pvh->pv_gen || md_gen != m->md.pv_gen) { PMAP_UNLOCK(pmap); goto retry; } } PG_A = pmap_accessed_bit(pmap); PG_M = pmap_modified_bit(pmap); PG_RW = pmap_rw_bit(pmap); pde = pmap_pde(pmap, pv->pv_va); KASSERT((*pde & PG_PS) == 0, ("pmap_ts_referenced: found a 2mpage in page %p's pv list", m)); pte = pmap_pde_to_pte(pde, pv->pv_va); if ((*pte & (PG_M | PG_RW)) == (PG_M | PG_RW)) vm_page_dirty(m); if ((*pte & PG_A) != 0) { if (safe_to_clear_referenced(pmap, *pte)) { atomic_clear_long(pte, PG_A); pmap_invalidate_page(pmap, pv->pv_va); cleared++; } else if ((*pte & PG_W) == 0) { /* * Wired pages cannot be paged out so * doing accessed bit emulation for * them is wasted effort. We do the * hard work for unwired pages only. */ pmap_remove_pte(pmap, pte, pv->pv_va, *pde, &free, &lock); pmap_invalidate_page(pmap, pv->pv_va); cleared++; if (pvf == pv) pvf = NULL; pv = NULL; KASSERT(lock == VM_PAGE_TO_PV_LIST_LOCK(m), ("inconsistent pv lock %p %p for page %p", lock, VM_PAGE_TO_PV_LIST_LOCK(m), m)); } else not_cleared++; } PMAP_UNLOCK(pmap); /* Rotate the PV list if it has more than one entry. */ if (pv != NULL && TAILQ_NEXT(pv, pv_next) != NULL) { TAILQ_REMOVE(&m->md.pv_list, pv, pv_next); TAILQ_INSERT_TAIL(&m->md.pv_list, pv, pv_next); m->md.pv_gen++; } } while ((pv = TAILQ_FIRST(&m->md.pv_list)) != pvf && cleared + not_cleared < PMAP_TS_REFERENCED_MAX); out: rw_wunlock(lock); vm_page_free_pages_toq(&free, true); return (cleared + not_cleared); } /* * Apply the given advice to the specified range of addresses within the * given pmap. Depending on the advice, clear the referenced and/or * modified flags in each mapping and set the mapped page's dirty field. */ void pmap_advise(pmap_t pmap, vm_offset_t sva, vm_offset_t eva, int advice) { struct rwlock *lock; pml4_entry_t *pml4e; pdp_entry_t *pdpe; pd_entry_t oldpde, *pde; pt_entry_t *pte, PG_A, PG_G, PG_M, PG_RW, PG_V; vm_offset_t va, va_next; vm_page_t m; boolean_t anychanged; if (advice != MADV_DONTNEED && advice != MADV_FREE) return; /* * A/D bit emulation requires an alternate code path when clearing * the modified and accessed bits below. Since this function is * advisory in nature we skip it entirely for pmaps that require * A/D bit emulation. 
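 *
 * For both MADV_DONTNEED and MADV_FREE the loop below clears PG_A and,
 * for writeable dirty mappings, PG_M; under MADV_DONTNEED a dirty
 * mapping's page is first marked dirty in its vm_page_t so the
 * modification is not lost.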
*/ if (pmap_emulate_ad_bits(pmap)) return; PG_A = pmap_accessed_bit(pmap); PG_G = pmap_global_bit(pmap); PG_M = pmap_modified_bit(pmap); PG_V = pmap_valid_bit(pmap); PG_RW = pmap_rw_bit(pmap); anychanged = FALSE; pmap_delayed_invl_started(); PMAP_LOCK(pmap); for (; sva < eva; sva = va_next) { pml4e = pmap_pml4e(pmap, sva); if ((*pml4e & PG_V) == 0) { va_next = (sva + NBPML4) & ~PML4MASK; if (va_next < sva) va_next = eva; continue; } pdpe = pmap_pml4e_to_pdpe(pml4e, sva); if ((*pdpe & PG_V) == 0) { va_next = (sva + NBPDP) & ~PDPMASK; if (va_next < sva) va_next = eva; continue; } va_next = (sva + NBPDR) & ~PDRMASK; if (va_next < sva) va_next = eva; pde = pmap_pdpe_to_pde(pdpe, sva); oldpde = *pde; if ((oldpde & PG_V) == 0) continue; else if ((oldpde & PG_PS) != 0) { if ((oldpde & PG_MANAGED) == 0) continue; lock = NULL; if (!pmap_demote_pde_locked(pmap, pde, sva, &lock)) { if (lock != NULL) rw_wunlock(lock); /* * The large page mapping was destroyed. */ continue; } /* * Unless the page mappings are wired, remove the * mapping to a single page so that a subsequent * access may repromote. Since the underlying page * table page is fully populated, this removal never * frees a page table page. */ if ((oldpde & PG_W) == 0) { pte = pmap_pde_to_pte(pde, sva); KASSERT((*pte & PG_V) != 0, ("pmap_advise: invalid PTE")); pmap_remove_pte(pmap, pte, sva, *pde, NULL, &lock); anychanged = TRUE; } if (lock != NULL) rw_wunlock(lock); } if (va_next > eva) va_next = eva; va = va_next; for (pte = pmap_pde_to_pte(pde, sva); sva != va_next; pte++, sva += PAGE_SIZE) { if ((*pte & (PG_MANAGED | PG_V)) != (PG_MANAGED | PG_V)) goto maybe_invlrng; else if ((*pte & (PG_M | PG_RW)) == (PG_M | PG_RW)) { if (advice == MADV_DONTNEED) { /* * Future calls to pmap_is_modified() * can be avoided by making the page * dirty now. */ m = PHYS_TO_VM_PAGE(*pte & PG_FRAME); vm_page_dirty(m); } atomic_clear_long(pte, PG_M | PG_A); } else if ((*pte & PG_A) != 0) atomic_clear_long(pte, PG_A); else goto maybe_invlrng; if ((*pte & PG_G) != 0) { if (va == va_next) va = sva; } else anychanged = TRUE; continue; maybe_invlrng: if (va != va_next) { pmap_invalidate_range(pmap, va, sva); va = va_next; } } if (va != va_next) pmap_invalidate_range(pmap, va, sva); } if (anychanged) pmap_invalidate_all(pmap); PMAP_UNLOCK(pmap); pmap_delayed_invl_finished(); } /* * Clear the modify bits on the specified physical page. */ void pmap_clear_modify(vm_page_t m) { struct md_page *pvh; pmap_t pmap; pv_entry_t next_pv, pv; pd_entry_t oldpde, *pde; pt_entry_t oldpte, *pte, PG_M, PG_RW, PG_V; struct rwlock *lock; vm_offset_t va; int md_gen, pvh_gen; KASSERT((m->oflags & VPO_UNMANAGED) == 0, ("pmap_clear_modify: page %p is not managed", m)); VM_OBJECT_ASSERT_WLOCKED(m->object); KASSERT(!vm_page_xbusied(m), ("pmap_clear_modify: page %p is exclusive busied", m)); /* * If the page is not PGA_WRITEABLE, then no PTEs can have PG_M set. * If the object containing the page is locked and the page is not * exclusive busied, then PGA_WRITEABLE cannot be concurrently set. */ if ((m->aflags & PGA_WRITEABLE) == 0) return; pvh = (m->flags & PG_FICTITIOUS) != 0 ? 
&pv_dummy : pa_to_pvh(VM_PAGE_TO_PHYS(m)); lock = VM_PAGE_TO_PV_LIST_LOCK(m); rw_wlock(lock); restart: TAILQ_FOREACH_SAFE(pv, &pvh->pv_list, pv_next, next_pv) { pmap = PV_PMAP(pv); if (!PMAP_TRYLOCK(pmap)) { pvh_gen = pvh->pv_gen; rw_wunlock(lock); PMAP_LOCK(pmap); rw_wlock(lock); if (pvh_gen != pvh->pv_gen) { PMAP_UNLOCK(pmap); goto restart; } } PG_M = pmap_modified_bit(pmap); PG_V = pmap_valid_bit(pmap); PG_RW = pmap_rw_bit(pmap); va = pv->pv_va; pde = pmap_pde(pmap, va); oldpde = *pde; if ((oldpde & PG_RW) != 0) { if (pmap_demote_pde_locked(pmap, pde, va, &lock)) { if ((oldpde & PG_W) == 0) { /* * Write protect the mapping to a * single page so that a subsequent * write access may repromote. */ va += VM_PAGE_TO_PHYS(m) - (oldpde & PG_PS_FRAME); pte = pmap_pde_to_pte(pde, va); oldpte = *pte; if ((oldpte & PG_V) != 0) { while (!atomic_cmpset_long(pte, oldpte, oldpte & ~(PG_M | PG_RW))) oldpte = *pte; vm_page_dirty(m); pmap_invalidate_page(pmap, va); } } } } PMAP_UNLOCK(pmap); } TAILQ_FOREACH(pv, &m->md.pv_list, pv_next) { pmap = PV_PMAP(pv); if (!PMAP_TRYLOCK(pmap)) { md_gen = m->md.pv_gen; pvh_gen = pvh->pv_gen; rw_wunlock(lock); PMAP_LOCK(pmap); rw_wlock(lock); if (pvh_gen != pvh->pv_gen || md_gen != m->md.pv_gen) { PMAP_UNLOCK(pmap); goto restart; } } PG_M = pmap_modified_bit(pmap); PG_RW = pmap_rw_bit(pmap); pde = pmap_pde(pmap, pv->pv_va); KASSERT((*pde & PG_PS) == 0, ("pmap_clear_modify: found" " a 2mpage in page %p's pv list", m)); pte = pmap_pde_to_pte(pde, pv->pv_va); if ((*pte & (PG_M | PG_RW)) == (PG_M | PG_RW)) { atomic_clear_long(pte, PG_M); pmap_invalidate_page(pmap, pv->pv_va); } PMAP_UNLOCK(pmap); } rw_wunlock(lock); } /* * Miscellaneous support routines follow */ /* Adjust the cache mode for a 4KB page mapped via a PTE. */ static __inline void pmap_pte_attr(pt_entry_t *pte, int cache_bits, int mask) { u_int opte, npte; /* * The cache mode bits are all in the low 32-bits of the * PTE, so we can just spin on updating the low 32-bits. */ do { opte = *(u_int *)pte; npte = opte & ~mask; npte |= cache_bits; } while (npte != opte && !atomic_cmpset_int((u_int *)pte, opte, npte)); } /* Adjust the cache mode for a 2MB page mapped via a PDE. */ static __inline void pmap_pde_attr(pd_entry_t *pde, int cache_bits, int mask) { u_int opde, npde; /* * The cache mode bits are all in the low 32-bits of the * PDE, so we can just spin on updating the low 32-bits. */ do { opde = *(u_int *)pde; npde = opde & ~mask; npde |= cache_bits; } while (npde != opde && !atomic_cmpset_int((u_int *)pde, opde, npde)); } /* * Map a set of physical memory pages into the kernel virtual * address space. Return a pointer to where it is mapped. This * routine is intended to be used for mapping device memory, * NOT real memory. */ static void * pmap_mapdev_internal(vm_paddr_t pa, vm_size_t size, int mode, bool noflush) { struct pmap_preinit_mapping *ppim; vm_offset_t va, offset; vm_size_t tmpsize; int i; offset = pa & PAGE_MASK; size = round_page(offset + size); pa = trunc_page(pa); if (!pmap_initialized) { va = 0; for (i = 0; i < PMAP_PREINIT_MAPPING_COUNT; i++) { ppim = pmap_preinit_mapping + i; if (ppim->va == 0) { ppim->pa = pa; ppim->sz = size; ppim->mode = mode; ppim->va = virtual_avail; virtual_avail += size; va = ppim->va; break; } } if (va == 0) panic("%s: too many preinit mappings", __func__); } else { /* * If we have a preinit mapping, re-use it. 
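 * A preinit mapping is one recorded in pmap_preinit_mapping[] before
 * pmap_initialized was set; it is matched on physical address, size,
 * and mode.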
*/ for (i = 0; i < PMAP_PREINIT_MAPPING_COUNT; i++) { ppim = pmap_preinit_mapping + i; if (ppim->pa == pa && ppim->sz == size && ppim->mode == mode) return ((void *)(ppim->va + offset)); } /* * If the specified range of physical addresses fits within * the direct map window, use the direct map. */ if (pa < dmaplimit && pa + size <= dmaplimit) { va = PHYS_TO_DMAP(pa); PMAP_LOCK(kernel_pmap); i = pmap_change_attr_locked(va, size, mode, noflush); PMAP_UNLOCK(kernel_pmap); if (!i) return ((void *)(va + offset)); } va = kva_alloc(size); if (va == 0) panic("%s: Couldn't allocate KVA", __func__); } for (tmpsize = 0; tmpsize < size; tmpsize += PAGE_SIZE) pmap_kenter_attr(va + tmpsize, pa + tmpsize, mode); pmap_invalidate_range(kernel_pmap, va, va + tmpsize); if (!noflush) pmap_invalidate_cache_range(va, va + tmpsize); return ((void *)(va + offset)); } void * pmap_mapdev_attr(vm_paddr_t pa, vm_size_t size, int mode) { return (pmap_mapdev_internal(pa, size, mode, false)); } void * pmap_mapdev(vm_paddr_t pa, vm_size_t size) { return (pmap_mapdev_internal(pa, size, PAT_UNCACHEABLE, false)); } void * pmap_mapdev_pciecfg(vm_paddr_t pa, vm_size_t size) { return (pmap_mapdev_internal(pa, size, PAT_UNCACHEABLE, true)); } void * pmap_mapbios(vm_paddr_t pa, vm_size_t size) { return (pmap_mapdev_internal(pa, size, PAT_WRITE_BACK, false)); } void pmap_unmapdev(vm_offset_t va, vm_size_t size) { struct pmap_preinit_mapping *ppim; vm_offset_t offset; int i; /* If we gave a direct map region in pmap_mapdev, do nothing */ if (va >= DMAP_MIN_ADDRESS && va < DMAP_MAX_ADDRESS) return; offset = va & PAGE_MASK; size = round_page(offset + size); va = trunc_page(va); for (i = 0; i < PMAP_PREINIT_MAPPING_COUNT; i++) { ppim = pmap_preinit_mapping + i; if (ppim->va == va && ppim->sz == size) { if (pmap_initialized) return; ppim->pa = 0; ppim->va = 0; ppim->sz = 0; ppim->mode = 0; if (va + size == virtual_avail) virtual_avail = va; return; } } if (pmap_initialized) kva_free(va, size); } /* * Tries to demote a 1GB page mapping. */ static boolean_t pmap_demote_pdpe(pmap_t pmap, pdp_entry_t *pdpe, vm_offset_t va) { pdp_entry_t newpdpe, oldpdpe; pd_entry_t *firstpde, newpde, *pde; pt_entry_t PG_A, PG_M, PG_RW, PG_V; vm_paddr_t pdpgpa; vm_page_t pdpg; PG_A = pmap_accessed_bit(pmap); PG_M = pmap_modified_bit(pmap); PG_V = pmap_valid_bit(pmap); PG_RW = pmap_rw_bit(pmap); PMAP_LOCK_ASSERT(pmap, MA_OWNED); oldpdpe = *pdpe; KASSERT((oldpdpe & (PG_PS | PG_V)) == (PG_PS | PG_V), ("pmap_demote_pdpe: oldpdpe is missing PG_PS and/or PG_V")); if ((pdpg = vm_page_alloc(NULL, va >> PDPSHIFT, VM_ALLOC_INTERRUPT | VM_ALLOC_NOOBJ | VM_ALLOC_WIRED)) == NULL) { CTR2(KTR_PMAP, "pmap_demote_pdpe: failure for va %#lx" " in pmap %p", va, pmap); return (FALSE); } pdpgpa = VM_PAGE_TO_PHYS(pdpg); firstpde = (pd_entry_t *)PHYS_TO_DMAP(pdpgpa); newpdpe = pdpgpa | PG_M | PG_A | (oldpdpe & PG_U) | PG_RW | PG_V; KASSERT((oldpdpe & PG_A) != 0, ("pmap_demote_pdpe: oldpdpe is missing PG_A")); KASSERT((oldpdpe & (PG_M | PG_RW)) != PG_RW, ("pmap_demote_pdpe: oldpdpe is missing PG_M")); newpde = oldpdpe; /* * Initialize the page directory page. */ for (pde = firstpde; pde < firstpde + NPDEPG; pde++) { *pde = newpde; newpde += NBPDR; } /* * Demote the mapping. */ *pdpe = newpdpe; /* * Invalidate a stale recursive mapping of the page directory page. 
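 * Before the demotion the PDP entry was a 1GB leaf, so the TLB may
 * still translate vtopde(va), the recursive-map address of the new
 * page directory page, through the old superpage; invalidating that
 * single address keeps later PDE accesses coherent.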
*/ pmap_invalidate_page(pmap, (vm_offset_t)vtopde(va)); pmap_pdpe_demotions++; CTR2(KTR_PMAP, "pmap_demote_pdpe: success for va %#lx" " in pmap %p", va, pmap); return (TRUE); } /* * Sets the memory attribute for the specified page. */ void pmap_page_set_memattr(vm_page_t m, vm_memattr_t ma) { m->md.pat_mode = ma; /* * If "m" is a normal page, update its direct mapping. This update * can be relied upon to perform any cache operations that are * required for data coherence. */ if ((m->flags & PG_FICTITIOUS) == 0 && pmap_change_attr(PHYS_TO_DMAP(VM_PAGE_TO_PHYS(m)), PAGE_SIZE, m->md.pat_mode)) panic("memory attribute change on the direct map failed"); } /* * Changes the specified virtual address range's memory type to that given by * the parameter "mode". The specified virtual address range must be * completely contained within either the direct map or the kernel map. If * the virtual address range is contained within the kernel map, then the * memory type for each of the corresponding ranges of the direct map is also * changed. (The corresponding ranges of the direct map are those ranges that * map the same physical pages as the specified virtual address range.) These * changes to the direct map are necessary because Intel describes the * behavior of their processors as "undefined" if two or more mappings to the * same physical page have different memory types. * * Returns zero if the change completed successfully, and either EINVAL or * ENOMEM if the change failed. Specifically, EINVAL is returned if some part * of the virtual address range was not mapped, and ENOMEM is returned if * there was insufficient memory available to complete the change. In the * latter case, the memory type may have been changed on some part of the * virtual address range or the direct map. */ int pmap_change_attr(vm_offset_t va, vm_size_t size, int mode) { int error; PMAP_LOCK(kernel_pmap); error = pmap_change_attr_locked(va, size, mode, false); PMAP_UNLOCK(kernel_pmap); return (error); } static int pmap_change_attr_locked(vm_offset_t va, vm_size_t size, int mode, bool noflush) { vm_offset_t base, offset, tmpva; vm_paddr_t pa_start, pa_end, pa_end1; pdp_entry_t *pdpe; pd_entry_t *pde; pt_entry_t *pte; int cache_bits_pte, cache_bits_pde, error; boolean_t changed; PMAP_LOCK_ASSERT(kernel_pmap, MA_OWNED); base = trunc_page(va); offset = va & PAGE_MASK; size = round_page(offset + size); /* * Only supported on kernel virtual addresses, including the direct * map but excluding the recursive map. */ if (base < DMAP_MIN_ADDRESS) return (EINVAL); cache_bits_pde = pmap_cache_bits(kernel_pmap, mode, 1); cache_bits_pte = pmap_cache_bits(kernel_pmap, mode, 0); changed = FALSE; /* * Pages that aren't mapped aren't supported. Also break down 2MB pages * into 4KB pages if required. */ for (tmpva = base; tmpva < base + size; ) { pdpe = pmap_pdpe(kernel_pmap, tmpva); if (pdpe == NULL || *pdpe == 0) return (EINVAL); if (*pdpe & PG_PS) { /* * If the current 1GB page already has the required * memory type, then we need not demote this page. Just * increment tmpva to the next 1GB page frame. */ if ((*pdpe & X86_PG_PDE_CACHE) == cache_bits_pde) { tmpva = trunc_1gpage(tmpva) + NBPDP; continue; } /* * If the current offset aligns with a 1GB page frame * and there is at least 1GB left within the range, then * we need not break down this page into 2MB pages. 
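 * For example, with NBPDP == 1GB and PDPMASK == NBPDP - 1, a tmpva
 * of 0x40000000 has no offset bits set, and once base + size is at
 * least 0x80000000 the test tmpva + PDPMASK < base + size holds
 * (0x7fffffff < 0x80000000), so the whole 1GB page lies inside the
 * range and can keep its mapping size.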
*/ if ((tmpva & PDPMASK) == 0 && tmpva + PDPMASK < base + size) { tmpva += NBPDP; continue; } if (!pmap_demote_pdpe(kernel_pmap, pdpe, tmpva)) return (ENOMEM); } pde = pmap_pdpe_to_pde(pdpe, tmpva); if (*pde == 0) return (EINVAL); if (*pde & PG_PS) { /* * If the current 2MB page already has the required * memory type, then we need not demote this page. Just * increment tmpva to the next 2MB page frame. */ if ((*pde & X86_PG_PDE_CACHE) == cache_bits_pde) { tmpva = trunc_2mpage(tmpva) + NBPDR; continue; } /* * If the current offset aligns with a 2MB page frame * and there is at least 2MB left within the range, then * we need not break down this page into 4KB pages. */ if ((tmpva & PDRMASK) == 0 && tmpva + PDRMASK < base + size) { tmpva += NBPDR; continue; } if (!pmap_demote_pde(kernel_pmap, pde, tmpva)) return (ENOMEM); } pte = pmap_pde_to_pte(pde, tmpva); if (*pte == 0) return (EINVAL); tmpva += PAGE_SIZE; } error = 0; /* * Ok, all the pages exist, so run through them updating their * cache mode if required. */ pa_start = pa_end = 0; for (tmpva = base; tmpva < base + size; ) { pdpe = pmap_pdpe(kernel_pmap, tmpva); if (*pdpe & PG_PS) { if ((*pdpe & X86_PG_PDE_CACHE) != cache_bits_pde) { pmap_pde_attr(pdpe, cache_bits_pde, X86_PG_PDE_CACHE); changed = TRUE; } if (tmpva >= VM_MIN_KERNEL_ADDRESS && (*pdpe & PG_PS_FRAME) < dmaplimit) { if (pa_start == pa_end) { /* Start physical address run. */ pa_start = *pdpe & PG_PS_FRAME; pa_end = pa_start + NBPDP; } else if (pa_end == (*pdpe & PG_PS_FRAME)) pa_end += NBPDP; else { /* Run ended, update direct map. */ error = pmap_change_attr_locked( PHYS_TO_DMAP(pa_start), pa_end - pa_start, mode, noflush); if (error != 0) break; /* Start physical address run. */ pa_start = *pdpe & PG_PS_FRAME; pa_end = pa_start + NBPDP; } } tmpva = trunc_1gpage(tmpva) + NBPDP; continue; } pde = pmap_pdpe_to_pde(pdpe, tmpva); if (*pde & PG_PS) { if ((*pde & X86_PG_PDE_CACHE) != cache_bits_pde) { pmap_pde_attr(pde, cache_bits_pde, X86_PG_PDE_CACHE); changed = TRUE; } if (tmpva >= VM_MIN_KERNEL_ADDRESS && (*pde & PG_PS_FRAME) < dmaplimit) { if (pa_start == pa_end) { /* Start physical address run. */ pa_start = *pde & PG_PS_FRAME; pa_end = pa_start + NBPDR; } else if (pa_end == (*pde & PG_PS_FRAME)) pa_end += NBPDR; else { /* Run ended, update direct map. */ error = pmap_change_attr_locked( PHYS_TO_DMAP(pa_start), pa_end - pa_start, mode, noflush); if (error != 0) break; /* Start physical address run. */ pa_start = *pde & PG_PS_FRAME; pa_end = pa_start + NBPDR; } } tmpva = trunc_2mpage(tmpva) + NBPDR; } else { pte = pmap_pde_to_pte(pde, tmpva); if ((*pte & X86_PG_PTE_CACHE) != cache_bits_pte) { pmap_pte_attr(pte, cache_bits_pte, X86_PG_PTE_CACHE); changed = TRUE; } if (tmpva >= VM_MIN_KERNEL_ADDRESS && (*pte & PG_FRAME) < dmaplimit) { if (pa_start == pa_end) { /* Start physical address run. */ pa_start = *pte & PG_FRAME; pa_end = pa_start + PAGE_SIZE; } else if (pa_end == (*pte & PG_FRAME)) pa_end += PAGE_SIZE; else { /* Run ended, update direct map. */ error = pmap_change_attr_locked( PHYS_TO_DMAP(pa_start), pa_end - pa_start, mode, noflush); if (error != 0) break; /* Start physical address run. 
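 * Contiguous frames are batched into [pa_start, pa_end) runs so the
 * matching direct map pages are updated with one recursive call per
 * run rather than one call per page.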
*/ pa_start = *pte & PG_FRAME; pa_end = pa_start + PAGE_SIZE; } } tmpva += PAGE_SIZE; } } if (error == 0 && pa_start != pa_end && pa_start < dmaplimit) { pa_end1 = MIN(pa_end, dmaplimit); if (pa_start != pa_end1) error = pmap_change_attr_locked(PHYS_TO_DMAP(pa_start), pa_end1 - pa_start, mode, noflush); } /* * Flush CPU caches if required to make sure any data isn't cached that * shouldn't be, etc. */ if (changed) { pmap_invalidate_range(kernel_pmap, base, tmpva); if (!noflush) pmap_invalidate_cache_range(base, tmpva); } return (error); } /* * Demotes any mapping within the direct map region that covers more than the * specified range of physical addresses. This range's size must be a power * of two and its starting address must be a multiple of its size. Since the * demotion does not change any attributes of the mapping, a TLB invalidation * is not mandatory. The caller may, however, request a TLB invalidation. */ void pmap_demote_DMAP(vm_paddr_t base, vm_size_t len, boolean_t invalidate) { pdp_entry_t *pdpe; pd_entry_t *pde; vm_offset_t va; boolean_t changed; if (len == 0) return; KASSERT(powerof2(len), ("pmap_demote_DMAP: len is not a power of 2")); KASSERT((base & (len - 1)) == 0, ("pmap_demote_DMAP: base is not a multiple of len")); if (len < NBPDP && base < dmaplimit) { va = PHYS_TO_DMAP(base); changed = FALSE; PMAP_LOCK(kernel_pmap); pdpe = pmap_pdpe(kernel_pmap, va); if ((*pdpe & X86_PG_V) == 0) panic("pmap_demote_DMAP: invalid PDPE"); if ((*pdpe & PG_PS) != 0) { if (!pmap_demote_pdpe(kernel_pmap, pdpe, va)) panic("pmap_demote_DMAP: PDPE failed"); changed = TRUE; } if (len < NBPDR) { pde = pmap_pdpe_to_pde(pdpe, va); if ((*pde & X86_PG_V) == 0) panic("pmap_demote_DMAP: invalid PDE"); if ((*pde & PG_PS) != 0) { if (!pmap_demote_pde(kernel_pmap, pde, va)) panic("pmap_demote_DMAP: PDE failed"); changed = TRUE; } } if (changed && invalidate) pmap_invalidate_page(kernel_pmap, va); PMAP_UNLOCK(kernel_pmap); } } /* * perform the pmap work for mincore */ int pmap_mincore(pmap_t pmap, vm_offset_t addr, vm_paddr_t *locked_pa) { pd_entry_t *pdep; pt_entry_t pte, PG_A, PG_M, PG_RW, PG_V; vm_paddr_t pa; int val; PG_A = pmap_accessed_bit(pmap); PG_M = pmap_modified_bit(pmap); PG_V = pmap_valid_bit(pmap); PG_RW = pmap_rw_bit(pmap); PMAP_LOCK(pmap); retry: pdep = pmap_pde(pmap, addr); if (pdep != NULL && (*pdep & PG_V)) { if (*pdep & PG_PS) { pte = *pdep; /* Compute the physical address of the 4KB page. */ pa = ((*pdep & PG_PS_FRAME) | (addr & PDRMASK)) & PG_FRAME; val = MINCORE_SUPER; } else { pte = *pmap_pde_to_pte(pdep, addr); pa = pte & PG_FRAME; val = 0; } } else { pte = 0; pa = 0; val = 0; } if ((pte & PG_V) != 0) { val |= MINCORE_INCORE; if ((pte & (PG_M | PG_RW)) == (PG_M | PG_RW)) val |= MINCORE_MODIFIED | MINCORE_MODIFIED_OTHER; if ((pte & PG_A) != 0) val |= MINCORE_REFERENCED | MINCORE_REFERENCED_OTHER; } if ((val & (MINCORE_MODIFIED_OTHER | MINCORE_REFERENCED_OTHER)) != (MINCORE_MODIFIED_OTHER | MINCORE_REFERENCED_OTHER) && (pte & (PG_MANAGED | PG_V)) == (PG_MANAGED | PG_V)) { /* Ensure that "PHYS_TO_VM_PAGE(pa)->object" doesn't change. */ if (vm_page_pa_tryrelock(pmap, pa, locked_pa)) goto retry; } else PA_UNLOCK_COND(*locked_pa); PMAP_UNLOCK(pmap); return (val); } static uint64_t pmap_pcid_alloc(pmap_t pmap, u_int cpuid) { uint32_t gen, new_gen, pcid_next; CRITICAL_ASSERT(curthread); gen = PCPU_GET(pcid_gen); if (pmap->pm_pcids[cpuid].pm_pcid == PMAP_PCID_KERN) return (pti ? 
0 : CR3_PCID_SAVE); if (pmap->pm_pcids[cpuid].pm_gen == gen) return (CR3_PCID_SAVE); pcid_next = PCPU_GET(pcid_next); KASSERT((!pti && pcid_next <= PMAP_PCID_OVERMAX) || (pti && pcid_next <= PMAP_PCID_OVERMAX_KERN), ("cpu %d pcid_next %#x", cpuid, pcid_next)); if ((!pti && pcid_next == PMAP_PCID_OVERMAX) || (pti && pcid_next == PMAP_PCID_OVERMAX_KERN)) { new_gen = gen + 1; if (new_gen == 0) new_gen = 1; PCPU_SET(pcid_gen, new_gen); pcid_next = PMAP_PCID_KERN + 1; } else { new_gen = gen; } pmap->pm_pcids[cpuid].pm_pcid = pcid_next; pmap->pm_pcids[cpuid].pm_gen = new_gen; PCPU_SET(pcid_next, pcid_next + 1); return (0); } static uint64_t pmap_pcid_alloc_checked(pmap_t pmap, u_int cpuid) { uint64_t cached; cached = pmap_pcid_alloc(pmap, cpuid); KASSERT(pmap->pm_pcids[cpuid].pm_pcid < PMAP_PCID_OVERMAX, ("pmap %p cpu %d pcid %#x", pmap, cpuid, pmap->pm_pcids[cpuid].pm_pcid)); KASSERT(pmap->pm_pcids[cpuid].pm_pcid != PMAP_PCID_KERN || pmap == kernel_pmap, ("non-kernel pmap pmap %p cpu %d pcid %#x", pmap, cpuid, pmap->pm_pcids[cpuid].pm_pcid)); return (cached); } static void pmap_activate_sw_pti_post(pmap_t pmap) { if (pmap->pm_ucr3 != PMAP_NO_CR3) PCPU_GET(tssp)->tss_rsp0 = ((vm_offset_t)PCPU_PTR(pti_stack) + PC_PTI_STACK_SZ * sizeof(uint64_t)) & ~0xful; } static void inline pmap_activate_sw_pcid_pti(pmap_t pmap, u_int cpuid, const bool invpcid_works1) { struct invpcid_descr d; uint64_t cached, cr3, kcr3, ucr3; cached = pmap_pcid_alloc_checked(pmap, cpuid); cr3 = rcr3(); if ((cr3 & ~CR3_PCID_MASK) != pmap->pm_cr3) load_cr3(pmap->pm_cr3 | pmap->pm_pcids[cpuid].pm_pcid); PCPU_SET(curpmap, pmap); kcr3 = pmap->pm_cr3 | pmap->pm_pcids[cpuid].pm_pcid; ucr3 = pmap->pm_ucr3 | pmap->pm_pcids[cpuid].pm_pcid | PMAP_PCID_USER_PT; if (!cached && pmap->pm_ucr3 != PMAP_NO_CR3) { /* * Explicitly invalidate translations cached from the * user page table. They are not automatically * flushed by reload of cr3 with the kernel page table * pointer above. * * Note that the if() condition is resolved statically * by using the function argument instead of * runtime-evaluated invpcid_works value. */ if (invpcid_works1) { d.pcid = PMAP_PCID_USER_PT | pmap->pm_pcids[cpuid].pm_pcid; d.pad = 0; d.addr = 0; invpcid(&d, INVPCID_CTX); } else { pmap_pti_pcid_invalidate(ucr3, kcr3); } } PCPU_SET(kcr3, kcr3 | CR3_PCID_SAVE); PCPU_SET(ucr3, ucr3 | CR3_PCID_SAVE); if (cached) PCPU_INC(pm_save_cnt); } static void pmap_activate_sw_pcid_invpcid_pti(pmap_t pmap, u_int cpuid) { pmap_activate_sw_pcid_pti(pmap, cpuid, true); pmap_activate_sw_pti_post(pmap); } static void pmap_activate_sw_pcid_noinvpcid_pti(pmap_t pmap, u_int cpuid) { register_t rflags; /* * If the INVPCID instruction is not available, * invltlb_pcid_handler() is used to handle an invalidate_all * IPI, which checks for curpmap == smp_tlb_pmap. The below * sequence of operations has a window where %CR3 is loaded * with the new pmap's PML4 address, but the curpmap value has * not yet been updated. This causes the invltlb IPI handler, * which is called between the updates, to execute as a NOP, * which leaves stale TLB entries. * * Note that the most typical use of pmap_activate_sw(), from * the context switch, is immune to this race, because * interrupts are disabled (while the thread lock is owned), * and the IPI happens after curpmap is updated. Protect * other callers in a similar way, by disabling interrupts * around the %cr3 register reload and curpmap assignment. 
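 *
 * The window, step by step: (1) load_cr3() installs the new page
 * tables; (2) an invalidate_all IPI arrives, finds curpmap still
 * pointing at the old pmap rather than at smp_tlb_pmap, and returns
 * without flushing; (3) curpmap is finally updated, leaving stale
 * TLB entries for the new pmap.  Disabling interrupts across (1)
 * and (3) closes the window.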
*/ rflags = intr_disable(); pmap_activate_sw_pcid_pti(pmap, cpuid, false); intr_restore(rflags); pmap_activate_sw_pti_post(pmap); } static void pmap_activate_sw_pcid_nopti(pmap_t pmap, u_int cpuid) { uint64_t cached, cr3; cached = pmap_pcid_alloc_checked(pmap, cpuid); cr3 = rcr3(); if (!cached || (cr3 & ~CR3_PCID_MASK) != pmap->pm_cr3) load_cr3(pmap->pm_cr3 | pmap->pm_pcids[cpuid].pm_pcid | cached); PCPU_SET(curpmap, pmap); if (cached) PCPU_INC(pm_save_cnt); } static void pmap_activate_sw_pcid_noinvpcid_nopti(pmap_t pmap, u_int cpuid) { register_t rflags; rflags = intr_disable(); pmap_activate_sw_pcid_nopti(pmap, cpuid); intr_restore(rflags); } static void pmap_activate_sw_nopcid_nopti(pmap_t pmap, u_int cpuid __unused) { load_cr3(pmap->pm_cr3); PCPU_SET(curpmap, pmap); } static void pmap_activate_sw_nopcid_pti(pmap_t pmap, u_int cpuid __unused) { pmap_activate_sw_nopcid_nopti(pmap, cpuid); PCPU_SET(kcr3, pmap->pm_cr3); PCPU_SET(ucr3, pmap->pm_ucr3); pmap_activate_sw_pti_post(pmap); } DEFINE_IFUNC(static, void, pmap_activate_sw_mode, (pmap_t, u_int), static) { if (pmap_pcid_enabled && pti && invpcid_works) return (pmap_activate_sw_pcid_invpcid_pti); else if (pmap_pcid_enabled && pti && !invpcid_works) return (pmap_activate_sw_pcid_noinvpcid_pti); else if (pmap_pcid_enabled && !pti && invpcid_works) return (pmap_activate_sw_pcid_nopti); else if (pmap_pcid_enabled && !pti && !invpcid_works) return (pmap_activate_sw_pcid_noinvpcid_nopti); else if (!pmap_pcid_enabled && pti) return (pmap_activate_sw_nopcid_pti); else /* if (!pmap_pcid_enabled && !pti) */ return (pmap_activate_sw_nopcid_nopti); } void pmap_activate_sw(struct thread *td) { pmap_t oldpmap, pmap; u_int cpuid; oldpmap = PCPU_GET(curpmap); pmap = vmspace_pmap(td->td_proc->p_vmspace); if (oldpmap == pmap) return; cpuid = PCPU_GET(cpuid); #ifdef SMP CPU_SET_ATOMIC(cpuid, &pmap->pm_active); #else CPU_SET(cpuid, &pmap->pm_active); #endif pmap_activate_sw_mode(pmap, cpuid); #ifdef SMP CPU_CLR_ATOMIC(cpuid, &oldpmap->pm_active); #else CPU_CLR(cpuid, &oldpmap->pm_active); #endif } void pmap_activate(struct thread *td) { critical_enter(); pmap_activate_sw(td); critical_exit(); } void pmap_activate_boot(pmap_t pmap) { uint64_t kcr3; u_int cpuid; /* * kernel_pmap must be never deactivated, and we ensure that * by never activating it at all. */ MPASS(pmap != kernel_pmap); cpuid = PCPU_GET(cpuid); #ifdef SMP CPU_SET_ATOMIC(cpuid, &pmap->pm_active); #else CPU_SET(cpuid, &pmap->pm_active); #endif PCPU_SET(curpmap, pmap); if (pti) { kcr3 = pmap->pm_cr3; if (pmap_pcid_enabled) kcr3 |= pmap->pm_pcids[cpuid].pm_pcid | CR3_PCID_SAVE; } else { kcr3 = PMAP_NO_CR3; } PCPU_SET(kcr3, kcr3); PCPU_SET(ucr3, PMAP_NO_CR3); } void pmap_sync_icache(pmap_t pm, vm_offset_t va, vm_size_t sz) { } /* * Increase the starting virtual address of the given mapping if a * different alignment might result in more superpage mappings. 
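 *
 * Illustrative sketch (comment-only, with a hypothetical name): the
 * adjustment applied below behaves like this pure function, where
 * off is the offset the object's pages want within a 2MB superpage:
 *
 *	vm_offset_t
 *	superpage_realign(vm_offset_t addr, vm_offset_t off)
 *	{
 *		if ((addr & PDRMASK) == off)
 *			return (addr);
 *		if ((addr & PDRMASK) < off)
 *			return ((addr & ~PDRMASK) + off);
 *		return (((addr + PDRMASK) & ~PDRMASK) + off);
 *	}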
*/ void pmap_align_superpage(vm_object_t object, vm_ooffset_t offset, vm_offset_t *addr, vm_size_t size) { vm_offset_t superpage_offset; if (size < NBPDR) return; if (object != NULL && (object->flags & OBJ_COLORED) != 0) offset += ptoa(object->pg_color); superpage_offset = offset & PDRMASK; if (size - ((NBPDR - superpage_offset) & PDRMASK) < NBPDR || (*addr & PDRMASK) == superpage_offset) return; if ((*addr & PDRMASK) < superpage_offset) *addr = (*addr & ~PDRMASK) + superpage_offset; else *addr = ((*addr + PDRMASK) & ~PDRMASK) + superpage_offset; } #ifdef INVARIANTS static unsigned long num_dirty_emulations; SYSCTL_ULONG(_vm_pmap, OID_AUTO, num_dirty_emulations, CTLFLAG_RW, &num_dirty_emulations, 0, NULL); static unsigned long num_accessed_emulations; SYSCTL_ULONG(_vm_pmap, OID_AUTO, num_accessed_emulations, CTLFLAG_RW, &num_accessed_emulations, 0, NULL); static unsigned long num_superpage_accessed_emulations; SYSCTL_ULONG(_vm_pmap, OID_AUTO, num_superpage_accessed_emulations, CTLFLAG_RW, &num_superpage_accessed_emulations, 0, NULL); static unsigned long ad_emulation_superpage_promotions; SYSCTL_ULONG(_vm_pmap, OID_AUTO, ad_emulation_superpage_promotions, CTLFLAG_RW, &ad_emulation_superpage_promotions, 0, NULL); #endif /* INVARIANTS */ int pmap_emulate_accessed_dirty(pmap_t pmap, vm_offset_t va, int ftype) { int rv; struct rwlock *lock; #if VM_NRESERVLEVEL > 0 vm_page_t m, mpte; #endif pd_entry_t *pde; pt_entry_t *pte, PG_A, PG_M, PG_RW, PG_V; KASSERT(ftype == VM_PROT_READ || ftype == VM_PROT_WRITE, ("pmap_emulate_accessed_dirty: invalid fault type %d", ftype)); if (!pmap_emulate_ad_bits(pmap)) return (-1); PG_A = pmap_accessed_bit(pmap); PG_M = pmap_modified_bit(pmap); PG_V = pmap_valid_bit(pmap); PG_RW = pmap_rw_bit(pmap); rv = -1; lock = NULL; PMAP_LOCK(pmap); pde = pmap_pde(pmap, va); if (pde == NULL || (*pde & PG_V) == 0) goto done; if ((*pde & PG_PS) != 0) { if (ftype == VM_PROT_READ) { #ifdef INVARIANTS atomic_add_long(&num_superpage_accessed_emulations, 1); #endif *pde |= PG_A; rv = 0; } goto done; } pte = pmap_pde_to_pte(pde, va); if ((*pte & PG_V) == 0) goto done; if (ftype == VM_PROT_WRITE) { if ((*pte & PG_RW) == 0) goto done; /* * Set the modified and accessed bits simultaneously. * * Intel EPT PTEs that do software emulation of A/D bits map * PG_A and PG_M to EPT_PG_READ and EPT_PG_WRITE respectively. * An EPT misconfiguration is triggered if the PTE is writable * but not readable (WR=10). This is avoided by setting PG_A * and PG_M simultaneously. 
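 * Put differently: EPT read permission follows PG_A and EPT write
 * permission follows PG_M here, so a PTE with PG_M set while PG_A
 * is clear would be writable but not readable.  Or-ing both bits in
 * with a single store never makes that combination visible.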
*/ *pte |= PG_M | PG_A; } else { *pte |= PG_A; } #if VM_NRESERVLEVEL > 0 /* try to promote the mapping */ if (va < VM_MAXUSER_ADDRESS) mpte = PHYS_TO_VM_PAGE(*pde & PG_FRAME); else mpte = NULL; m = PHYS_TO_VM_PAGE(*pte & PG_FRAME); if ((mpte == NULL || mpte->wire_count == NPTEPG) && pmap_ps_enabled(pmap) && (m->flags & PG_FICTITIOUS) == 0 && vm_reserv_level_iffullpop(m) == 0) { pmap_promote_pde(pmap, pde, va, &lock); #ifdef INVARIANTS atomic_add_long(&ad_emulation_superpage_promotions, 1); #endif } #endif #ifdef INVARIANTS if (ftype == VM_PROT_WRITE) atomic_add_long(&num_dirty_emulations, 1); else atomic_add_long(&num_accessed_emulations, 1); #endif rv = 0; /* success */ done: if (lock != NULL) rw_wunlock(lock); PMAP_UNLOCK(pmap); return (rv); } void pmap_get_mapping(pmap_t pmap, vm_offset_t va, uint64_t *ptr, int *num) { pml4_entry_t *pml4; pdp_entry_t *pdp; pd_entry_t *pde; pt_entry_t *pte, PG_V; int idx; idx = 0; PG_V = pmap_valid_bit(pmap); PMAP_LOCK(pmap); pml4 = pmap_pml4e(pmap, va); ptr[idx++] = *pml4; if ((*pml4 & PG_V) == 0) goto done; pdp = pmap_pml4e_to_pdpe(pml4, va); ptr[idx++] = *pdp; if ((*pdp & PG_V) == 0 || (*pdp & PG_PS) != 0) goto done; pde = pmap_pdpe_to_pde(pdp, va); ptr[idx++] = *pde; if ((*pde & PG_V) == 0 || (*pde & PG_PS) != 0) goto done; pte = pmap_pde_to_pte(pde, va); ptr[idx++] = *pte; done: PMAP_UNLOCK(pmap); *num = idx; } /** * Get the kernel virtual address of a set of physical pages. If there are * physical addresses not covered by the DMAP perform a transient mapping * that will be removed when calling pmap_unmap_io_transient. * * \param page The pages the caller wishes to obtain the virtual * address on the kernel memory map. * \param vaddr On return contains the kernel virtual memory address * of the pages passed in the page parameter. * \param count Number of pages passed in. * \param can_fault TRUE if the thread using the mapped pages can take * page faults, FALSE otherwise. * * \returns TRUE if the caller must call pmap_unmap_io_transient when * finished or FALSE otherwise. * */ boolean_t pmap_map_io_transient(vm_page_t page[], vm_offset_t vaddr[], int count, boolean_t can_fault) { vm_paddr_t paddr; boolean_t needs_mapping; pt_entry_t *pte; int cache_bits, error __unused, i; /* * Allocate any KVA space that we need, this is done in a separate * loop to prevent calling vmem_alloc while pinned. */ needs_mapping = FALSE; for (i = 0; i < count; i++) { paddr = VM_PAGE_TO_PHYS(page[i]); if (__predict_false(paddr >= dmaplimit)) { error = vmem_alloc(kernel_arena, PAGE_SIZE, M_BESTFIT | M_WAITOK, &vaddr[i]); KASSERT(error == 0, ("vmem_alloc failed: %d", error)); needs_mapping = TRUE; } else { vaddr[i] = PHYS_TO_DMAP(paddr); } } /* Exit early if everything is covered by the DMAP */ if (!needs_mapping) return (FALSE); /* * NB: The sequence of updating a page table followed by accesses * to the corresponding pages used in the !DMAP case is subject to * the situation described in the "AMD64 Architecture Programmer's * Manual Volume 2: System Programming" rev. 3.23, "7.3.1 Special * Coherency Considerations". Therefore, issuing the INVLPG right * after modifying the PTE bits is crucial. */ if (!can_fault) sched_pin(); for (i = 0; i < count; i++) { paddr = VM_PAGE_TO_PHYS(page[i]); if (paddr >= dmaplimit) { if (can_fault) { /* * Slow path, since we can get page faults * while mappings are active don't pin the * thread to the CPU and instead add a global * mapping visible to all CPUs. 
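 * pmap_qenter() installs the page with a system-wide invalidation,
 * so the mapping stays valid on whatever CPU the thread migrates
 * to; the pinned path below instead writes the PTE directly and
 * issues one local INVLPG.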
*/ pmap_qenter(vaddr[i], &page[i], 1); } else { pte = vtopte(vaddr[i]); cache_bits = pmap_cache_bits(kernel_pmap, page[i]->md.pat_mode, 0); pte_store(pte, paddr | X86_PG_RW | X86_PG_V | cache_bits); invlpg(vaddr[i]); } } } return (needs_mapping); } void pmap_unmap_io_transient(vm_page_t page[], vm_offset_t vaddr[], int count, boolean_t can_fault) { vm_paddr_t paddr; int i; if (!can_fault) sched_unpin(); for (i = 0; i < count; i++) { paddr = VM_PAGE_TO_PHYS(page[i]); if (paddr >= dmaplimit) { if (can_fault) pmap_qremove(vaddr[i], 1); vmem_free(kernel_arena, vaddr[i], PAGE_SIZE); } } } vm_offset_t pmap_quick_enter_page(vm_page_t m) { vm_paddr_t paddr; paddr = VM_PAGE_TO_PHYS(m); if (paddr < dmaplimit) return (PHYS_TO_DMAP(paddr)); mtx_lock_spin(&qframe_mtx); KASSERT(*vtopte(qframe) == 0, ("qframe busy")); pte_store(vtopte(qframe), paddr | X86_PG_RW | X86_PG_V | X86_PG_A | X86_PG_M | pmap_cache_bits(kernel_pmap, m->md.pat_mode, 0)); return (qframe); } void pmap_quick_remove_page(vm_offset_t addr) { if (addr != qframe) return; pte_store(vtopte(qframe), 0); invlpg(qframe); mtx_unlock_spin(&qframe_mtx); } /* * Pdp pages from the large map are managed differently from either * kernel or user page table pages. They are permanently allocated at * initialization time, and their wire count is permanently set to * zero. The pml4 entries pointing to those pages are copied into * each allocated pmap. * * In contrast, pd and pt pages are managed like user page table * pages. They are dynamically allocated, and their wire count * represents the number of valid entries within the page. */ static vm_page_t pmap_large_map_getptp_unlocked(void) { vm_page_t m; m = vm_page_alloc(NULL, 0, VM_ALLOC_NORMAL | VM_ALLOC_NOOBJ | VM_ALLOC_ZERO); if (m != NULL && (m->flags & PG_ZERO) == 0) pmap_zero_page(m); return (m); } static vm_page_t pmap_large_map_getptp(void) { vm_page_t m; PMAP_LOCK_ASSERT(kernel_pmap, MA_OWNED); m = pmap_large_map_getptp_unlocked(); if (m == NULL) { PMAP_UNLOCK(kernel_pmap); vm_wait(NULL); PMAP_LOCK(kernel_pmap); /* Callers retry. 
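 * The kernel pmap lock was dropped around vm_wait(), so any page
 * table state observed before sleeping may be stale; returning NULL
 * makes the caller re-walk from its retry label.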
*/ } return (m); } static pdp_entry_t * pmap_large_map_pdpe(vm_offset_t va) { vm_pindex_t pml4_idx; vm_paddr_t mphys; pml4_idx = pmap_pml4e_index(va); KASSERT(LMSPML4I <= pml4_idx && pml4_idx < LMSPML4I + lm_ents, ("pmap_large_map_pdpe: va %#jx out of range idx %#jx LMSPML4I " "%#jx lm_ents %d", (uintmax_t)va, (uintmax_t)pml4_idx, LMSPML4I, lm_ents)); KASSERT((kernel_pmap->pm_pml4[pml4_idx] & X86_PG_V) != 0, ("pmap_large_map_pdpe: invalid pml4 for va %#jx idx %#jx " "LMSPML4I %#jx lm_ents %d", (uintmax_t)va, (uintmax_t)pml4_idx, LMSPML4I, lm_ents)); mphys = kernel_pmap->pm_pml4[pml4_idx] & PG_FRAME; return ((pdp_entry_t *)PHYS_TO_DMAP(mphys) + pmap_pdpe_index(va)); } static pd_entry_t * pmap_large_map_pde(vm_offset_t va) { pdp_entry_t *pdpe; vm_page_t m; vm_paddr_t mphys; retry: pdpe = pmap_large_map_pdpe(va); if (*pdpe == 0) { m = pmap_large_map_getptp(); if (m == NULL) goto retry; mphys = VM_PAGE_TO_PHYS(m); *pdpe = mphys | X86_PG_A | X86_PG_RW | X86_PG_V | pg_nx; } else { MPASS((*pdpe & X86_PG_PS) == 0); mphys = *pdpe & PG_FRAME; } return ((pd_entry_t *)PHYS_TO_DMAP(mphys) + pmap_pde_index(va)); } static pt_entry_t * pmap_large_map_pte(vm_offset_t va) { pd_entry_t *pde; vm_page_t m; vm_paddr_t mphys; retry: pde = pmap_large_map_pde(va); if (*pde == 0) { m = pmap_large_map_getptp(); if (m == NULL) goto retry; mphys = VM_PAGE_TO_PHYS(m); *pde = mphys | X86_PG_A | X86_PG_RW | X86_PG_V | pg_nx; PHYS_TO_VM_PAGE(DMAP_TO_PHYS((uintptr_t)pde))->wire_count++; } else { MPASS((*pde & X86_PG_PS) == 0); mphys = *pde & PG_FRAME; } return ((pt_entry_t *)PHYS_TO_DMAP(mphys) + pmap_pte_index(va)); } static int pmap_large_map_getva(vm_size_t len, vm_offset_t align, vm_offset_t phase, vmem_addr_t *vmem_res) { /* * Large mappings are all but static. Consequently, there * is no point in waiting for an earlier allocation to be * freed. */ return (vmem_xalloc(large_vmem, len, align, phase, 0, VMEM_ADDR_MIN, VMEM_ADDR_MAX, M_NOWAIT | M_BESTFIT, vmem_res)); } int pmap_large_map(vm_paddr_t spa, vm_size_t len, void **addr, vm_memattr_t mattr) { pdp_entry_t *pdpe; pd_entry_t *pde; pt_entry_t *pte; vm_offset_t va, inc; vmem_addr_t vmem_res; vm_paddr_t pa; int error; if (len == 0 || spa + len < spa) return (EINVAL); /* See if DMAP can serve. */ if (spa + len <= dmaplimit) { va = PHYS_TO_DMAP(spa); *addr = (void *)va; return (pmap_change_attr(va, len, mattr)); } /* * No, allocate KVA. Fit the address with best possible * alignment for superpages. Fall back to worse align if * failed. */ error = ENOMEM; if ((amd_feature & AMDID_PAGE1GB) != 0 && rounddown2(spa + len, NBPDP) >= roundup2(spa, NBPDP) + NBPDP) error = pmap_large_map_getva(len, NBPDP, spa & PDPMASK, &vmem_res); if (error != 0 && rounddown2(spa + len, NBPDR) >= roundup2(spa, NBPDR) + NBPDR) error = pmap_large_map_getva(len, NBPDR, spa & PDRMASK, &vmem_res); if (error != 0) error = pmap_large_map_getva(len, PAGE_SIZE, 0, &vmem_res); if (error != 0) return (error); /* * Fill pagetable. PG_M is not pre-set, we scan modified bits * in the pagetable to minimize flushing. No need to * invalidate TLB, since we only update invalid entries. 
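 *
 * The loop below always picks the largest mapping size whose
 * alignment both pa and va satisfy: for instance, a 2MB-aligned
 * 4MB range is mapped as two 2MB pages, while a 1GB-aligned 1GB
 * range on AMDID_PAGE1GB hardware becomes a single 1GB page.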
*/ PMAP_LOCK(kernel_pmap); for (pa = spa, va = vmem_res; len > 0; pa += inc, va += inc, len -= inc) { if ((amd_feature & AMDID_PAGE1GB) != 0 && len >= NBPDP && (pa & PDPMASK) == 0 && (va & PDPMASK) == 0) { pdpe = pmap_large_map_pdpe(va); MPASS(*pdpe == 0); *pdpe = pa | pg_g | X86_PG_PS | X86_PG_RW | X86_PG_V | X86_PG_A | pg_nx | pmap_cache_bits(kernel_pmap, mattr, TRUE); inc = NBPDP; } else if (len >= NBPDR && (pa & PDRMASK) == 0 && (va & PDRMASK) == 0) { pde = pmap_large_map_pde(va); MPASS(*pde == 0); *pde = pa | pg_g | X86_PG_PS | X86_PG_RW | X86_PG_V | X86_PG_A | pg_nx | pmap_cache_bits(kernel_pmap, mattr, TRUE); PHYS_TO_VM_PAGE(DMAP_TO_PHYS((uintptr_t)pde))-> wire_count++; inc = NBPDR; } else { pte = pmap_large_map_pte(va); MPASS(*pte == 0); *pte = pa | pg_g | X86_PG_RW | X86_PG_V | X86_PG_A | pg_nx | pmap_cache_bits(kernel_pmap, mattr, FALSE); PHYS_TO_VM_PAGE(DMAP_TO_PHYS((uintptr_t)pte))-> wire_count++; inc = PAGE_SIZE; } } PMAP_UNLOCK(kernel_pmap); MPASS(len == 0); *addr = (void *)vmem_res; return (0); } void pmap_large_unmap(void *svaa, vm_size_t len) { vm_offset_t sva, va; vm_size_t inc; pdp_entry_t *pdpe, pdp; pd_entry_t *pde, pd; pt_entry_t *pte; vm_page_t m; struct spglist spgf; sva = (vm_offset_t)svaa; if (len == 0 || sva + len < sva || (sva >= DMAP_MIN_ADDRESS && sva + len <= DMAP_MIN_ADDRESS + dmaplimit)) return; SLIST_INIT(&spgf); KASSERT(LARGEMAP_MIN_ADDRESS <= sva && sva + len <= LARGEMAP_MAX_ADDRESS + NBPML4 * (u_long)lm_ents, ("not largemap range %#lx %#lx", (u_long)svaa, (u_long)svaa + len)); PMAP_LOCK(kernel_pmap); for (va = sva; va < sva + len; va += inc) { pdpe = pmap_large_map_pdpe(va); pdp = *pdpe; KASSERT((pdp & X86_PG_V) != 0, ("invalid pdp va %#lx pdpe %#lx pdp %#lx", va, (u_long)pdpe, pdp)); if ((pdp & X86_PG_PS) != 0) { KASSERT((amd_feature & AMDID_PAGE1GB) != 0, ("no 1G pages, va %#lx pdpe %#lx pdp %#lx", va, (u_long)pdpe, pdp)); KASSERT((va & PDPMASK) == 0, ("PDPMASK bit set, va %#lx pdpe %#lx pdp %#lx", va, (u_long)pdpe, pdp)); KASSERT(va + NBPDP <= sva + len, ("unmap covers partial 1GB page, sva %#lx va %#lx " "pdpe %#lx pdp %#lx len %#lx", sva, va, (u_long)pdpe, pdp, len)); *pdpe = 0; inc = NBPDP; continue; } pde = pmap_pdpe_to_pde(pdpe, va); pd = *pde; KASSERT((pd & X86_PG_V) != 0, ("invalid pd va %#lx pde %#lx pd %#lx", va, (u_long)pde, pd)); if ((pd & X86_PG_PS) != 0) { KASSERT((va & PDRMASK) == 0, ("PDRMASK bit set, va %#lx pde %#lx pd %#lx", va, (u_long)pde, pd)); KASSERT(va + NBPDR <= sva + len, ("unmap covers partial 2MB page, sva %#lx va %#lx " "pde %#lx pd %#lx len %#lx", sva, va, (u_long)pde, pd, len)); pde_store(pde, 0); inc = NBPDR; m = PHYS_TO_VM_PAGE(DMAP_TO_PHYS((vm_offset_t)pde)); m->wire_count--; if (m->wire_count == 0) { *pdpe = 0; SLIST_INSERT_HEAD(&spgf, m, plinks.s.ss); } continue; } pte = pmap_pde_to_pte(pde, va); KASSERT((*pte & X86_PG_V) != 0, ("invalid pte va %#lx pte %#lx pt %#lx", va, (u_long)pte, *pte)); pte_clear(pte); inc = PAGE_SIZE; m = PHYS_TO_VM_PAGE(DMAP_TO_PHYS((vm_offset_t)pte)); m->wire_count--; if (m->wire_count == 0) { *pde = 0; SLIST_INSERT_HEAD(&spgf, m, plinks.s.ss); m = PHYS_TO_VM_PAGE(DMAP_TO_PHYS((vm_offset_t)pde)); m->wire_count--; if (m->wire_count == 0) { *pdpe = 0; SLIST_INSERT_HEAD(&spgf, m, plinks.s.ss); } } } pmap_invalidate_range(kernel_pmap, sva, sva + len); PMAP_UNLOCK(kernel_pmap); vm_page_free_pages_toq(&spgf, false); vmem_free(large_vmem, sva, len); } static void pmap_large_map_wb_fence_mfence(void) { mfence(); } static void pmap_large_map_wb_fence_sfence(void) { sfence(); } static void 
pmap_large_map_wb_fence_nop(void) { } DEFINE_IFUNC(static, void, pmap_large_map_wb_fence, (void), static) { if (cpu_vendor_id != CPU_VENDOR_INTEL) return (pmap_large_map_wb_fence_mfence); else if ((cpu_stdext_feature & (CPUID_STDEXT_CLWB | CPUID_STDEXT_CLFLUSHOPT)) == 0) return (pmap_large_map_wb_fence_sfence); else /* clflush is ordered strongly enough */ return (pmap_large_map_wb_fence_nop); } static void pmap_large_map_flush_range_clwb(vm_offset_t va, vm_size_t len) { for (; len > 0; len -= cpu_clflush_line_size, va += cpu_clflush_line_size) clwb(va); } static void pmap_large_map_flush_range_clflushopt(vm_offset_t va, vm_size_t len) { for (; len > 0; len -= cpu_clflush_line_size, va += cpu_clflush_line_size) clflushopt(va); } static void pmap_large_map_flush_range_clflush(vm_offset_t va, vm_size_t len) { for (; len > 0; len -= cpu_clflush_line_size, va += cpu_clflush_line_size) clflush(va); } static void pmap_large_map_flush_range_nop(vm_offset_t sva __unused, vm_size_t len __unused) { } DEFINE_IFUNC(static, void, pmap_large_map_flush_range, (vm_offset_t, vm_size_t), static) { if ((cpu_stdext_feature & CPUID_STDEXT_CLWB) != 0) return (pmap_large_map_flush_range_clwb); else if ((cpu_stdext_feature & CPUID_STDEXT_CLFLUSHOPT) != 0) return (pmap_large_map_flush_range_clflushopt); else if ((cpu_feature & CPUID_CLFSH) != 0) return (pmap_large_map_flush_range_clflush); else return (pmap_large_map_flush_range_nop); } static void pmap_large_map_wb_large(vm_offset_t sva, vm_offset_t eva) { volatile u_long *pe; u_long p; vm_offset_t va; vm_size_t inc; bool seen_other; for (va = sva; va < eva; va += inc) { inc = 0; if ((amd_feature & AMDID_PAGE1GB) != 0) { pe = (volatile u_long *)pmap_large_map_pdpe(va); p = *pe; if ((p & X86_PG_PS) != 0) inc = NBPDP; } if (inc == 0) { pe = (volatile u_long *)pmap_large_map_pde(va); p = *pe; if ((p & X86_PG_PS) != 0) inc = NBPDR; } if (inc == 0) { pe = (volatile u_long *)pmap_large_map_pte(va); p = *pe; inc = PAGE_SIZE; } seen_other = false; for (;;) { if ((p & X86_PG_AVAIL1) != 0) { /* * Spin-wait for the end of a parallel * write-back. */ cpu_spinwait(); p = *pe; /* * If we saw another write-back * occurring, we cannot rely on PG_M to * indicate the state of the cache. The * PG_M bit is cleared before the * flush to avoid ignoring new writes, * and writes that are relevant to * us might happen afterwards. */ seen_other = true; continue; } if ((p & X86_PG_M) != 0 || seen_other) { if (!atomic_fcmpset_long(pe, &p, (p & ~X86_PG_M) | X86_PG_AVAIL1)) /* * If we saw PG_M without * PG_AVAIL1, and then on the * next attempt we do not * observe either PG_M or * PG_AVAIL1, the other * write-back started after us * and finished before us. We * can rely on it doing our * work. */ continue; pmap_large_map_flush_range(va, inc); atomic_clear_long(pe, X86_PG_AVAIL1); } break; } maybe_yield(); } } /* * Write-back cache lines for the given address range. * * Must be called only on the range or sub-range returned from * pmap_large_map(). Must not be called on the coalesced ranges. * * Does nothing on CPUs lacking CLWB, CLFLUSHOPT, and CLFLUSH * instruction support.
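 *
 * Illustrative usage sketch (comment-only; spa, len and src are
 * hypothetical):
 *
 *	void *va;
 *
 *	if (pmap_large_map(spa, len, &va, VM_MEMATTR_WRITE_BACK) == 0) {
 *		memcpy(va, src, len);
 *		pmap_large_map_wb(va, len);
 *		pmap_large_unmap(va, len);
 *	}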
*/ void pmap_large_map_wb(void *svap, vm_size_t len) { vm_offset_t eva, sva; sva = (vm_offset_t)svap; eva = sva + len; pmap_large_map_wb_fence(); if (sva >= DMAP_MIN_ADDRESS && eva <= DMAP_MIN_ADDRESS + dmaplimit) { pmap_large_map_flush_range(sva, len); } else { KASSERT(sva >= LARGEMAP_MIN_ADDRESS && eva <= LARGEMAP_MIN_ADDRESS + lm_ents * NBPML4, ("pmap_large_map_wb: not largemap %#lx %#lx", sva, len)); pmap_large_map_wb_large(sva, eva); } pmap_large_map_wb_fence(); } static vm_page_t pmap_pti_alloc_page(void) { vm_page_t m; VM_OBJECT_ASSERT_WLOCKED(pti_obj); m = vm_page_grab(pti_obj, pti_pg_idx++, VM_ALLOC_NOBUSY | VM_ALLOC_WIRED | VM_ALLOC_ZERO); return (m); } static bool pmap_pti_free_page(vm_page_t m) { KASSERT(m->wire_count > 0, ("page %p not wired", m)); if (!vm_page_unwire_noq(m)) return (false); vm_page_free_zero(m); return (true); } static void pmap_pti_init(void) { vm_page_t pml4_pg; pdp_entry_t *pdpe; vm_offset_t va; int i; if (!pti) return; pti_obj = vm_pager_allocate(OBJT_PHYS, NULL, 0, VM_PROT_ALL, 0, NULL); VM_OBJECT_WLOCK(pti_obj); pml4_pg = pmap_pti_alloc_page(); pti_pml4 = (pml4_entry_t *)PHYS_TO_DMAP(VM_PAGE_TO_PHYS(pml4_pg)); for (va = VM_MIN_KERNEL_ADDRESS; va <= VM_MAX_KERNEL_ADDRESS && va >= VM_MIN_KERNEL_ADDRESS && va > NBPML4; va += NBPML4) { pdpe = pmap_pti_pdpe(va); pmap_pti_wire_pte(pdpe); } pmap_pti_add_kva_locked((vm_offset_t)&__pcpu[0], (vm_offset_t)&__pcpu[0] + sizeof(__pcpu[0]) * MAXCPU, false); pmap_pti_add_kva_locked((vm_offset_t)gdt, (vm_offset_t)gdt + sizeof(struct user_segment_descriptor) * NGDT * MAXCPU, false); pmap_pti_add_kva_locked((vm_offset_t)idt, (vm_offset_t)idt + sizeof(struct gate_descriptor) * NIDT, false); pmap_pti_add_kva_locked((vm_offset_t)common_tss, (vm_offset_t)common_tss + sizeof(struct amd64tss) * MAXCPU, false); CPU_FOREACH(i) { /* Doublefault stack IST 1 */ va = common_tss[i].tss_ist1; pmap_pti_add_kva_locked(va - PAGE_SIZE, va, false); /* NMI stack IST 2 */ va = common_tss[i].tss_ist2 + sizeof(struct nmi_pcpu); pmap_pti_add_kva_locked(va - PAGE_SIZE, va, false); /* MC# stack IST 3 */ va = common_tss[i].tss_ist3 + sizeof(struct nmi_pcpu); pmap_pti_add_kva_locked(va - PAGE_SIZE, va, false); /* DB# stack IST 4 */ va = common_tss[i].tss_ist4 + sizeof(struct nmi_pcpu); pmap_pti_add_kva_locked(va - PAGE_SIZE, va, false); } pmap_pti_add_kva_locked((vm_offset_t)kernphys + KERNBASE, (vm_offset_t)etext, true); pti_finalized = true; VM_OBJECT_WUNLOCK(pti_obj); } SYSINIT(pmap_pti, SI_SUB_CPU + 1, SI_ORDER_ANY, pmap_pti_init, NULL); static pdp_entry_t * pmap_pti_pdpe(vm_offset_t va) { pml4_entry_t *pml4e; pdp_entry_t *pdpe; vm_page_t m; vm_pindex_t pml4_idx; vm_paddr_t mphys; VM_OBJECT_ASSERT_WLOCKED(pti_obj); pml4_idx = pmap_pml4e_index(va); pml4e = &pti_pml4[pml4_idx]; m = NULL; if (*pml4e == 0) { if (pti_finalized) panic("pml4 alloc after finalization\n"); m = pmap_pti_alloc_page(); if (*pml4e != 0) { pmap_pti_free_page(m); mphys = *pml4e & ~PAGE_MASK; } else { mphys = VM_PAGE_TO_PHYS(m); *pml4e = mphys | X86_PG_RW | X86_PG_V; } } else { mphys = *pml4e & ~PAGE_MASK; } pdpe = (pdp_entry_t *)PHYS_TO_DMAP(mphys) + pmap_pdpe_index(va); return (pdpe); } static void pmap_pti_wire_pte(void *pte) { vm_page_t m; VM_OBJECT_ASSERT_WLOCKED(pti_obj); m = PHYS_TO_VM_PAGE(DMAP_TO_PHYS((uintptr_t)pte)); m->wire_count++; } static void pmap_pti_unwire_pde(void *pde, bool only_ref) { vm_page_t m; VM_OBJECT_ASSERT_WLOCKED(pti_obj); m = PHYS_TO_VM_PAGE(DMAP_TO_PHYS((uintptr_t)pde)); MPASS(m->wire_count > 0); MPASS(only_ref || m->wire_count > 1); 
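	/*
	 * Release one wiring of the page directory page; the page
	 * itself is freed only when this was its last reference.
	 */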
pmap_pti_free_page(m); } static void pmap_pti_unwire_pte(void *pte, vm_offset_t va) { vm_page_t m; pd_entry_t *pde; VM_OBJECT_ASSERT_WLOCKED(pti_obj); m = PHYS_TO_VM_PAGE(DMAP_TO_PHYS((uintptr_t)pte)); MPASS(m->wire_count > 0); if (pmap_pti_free_page(m)) { pde = pmap_pti_pde(va); MPASS((*pde & (X86_PG_PS | X86_PG_V)) == X86_PG_V); *pde = 0; pmap_pti_unwire_pde(pde, false); } } static pd_entry_t * pmap_pti_pde(vm_offset_t va) { pdp_entry_t *pdpe; pd_entry_t *pde; vm_page_t m; vm_pindex_t pd_idx; vm_paddr_t mphys; VM_OBJECT_ASSERT_WLOCKED(pti_obj); pdpe = pmap_pti_pdpe(va); if (*pdpe == 0) { m = pmap_pti_alloc_page(); if (*pdpe != 0) { pmap_pti_free_page(m); MPASS((*pdpe & X86_PG_PS) == 0); mphys = *pdpe & ~PAGE_MASK; } else { mphys = VM_PAGE_TO_PHYS(m); *pdpe = mphys | X86_PG_RW | X86_PG_V; } } else { MPASS((*pdpe & X86_PG_PS) == 0); mphys = *pdpe & ~PAGE_MASK; } pde = (pd_entry_t *)PHYS_TO_DMAP(mphys); pd_idx = pmap_pde_index(va); pde += pd_idx; return (pde); } static pt_entry_t * pmap_pti_pte(vm_offset_t va, bool *unwire_pde) { pd_entry_t *pde; pt_entry_t *pte; vm_page_t m; vm_paddr_t mphys; VM_OBJECT_ASSERT_WLOCKED(pti_obj); pde = pmap_pti_pde(va); if (unwire_pde != NULL) { *unwire_pde = true; pmap_pti_wire_pte(pde); } if (*pde == 0) { m = pmap_pti_alloc_page(); if (*pde != 0) { pmap_pti_free_page(m); MPASS((*pde & X86_PG_PS) == 0); mphys = *pde & ~(PAGE_MASK | pg_nx); } else { mphys = VM_PAGE_TO_PHYS(m); *pde = mphys | X86_PG_RW | X86_PG_V; if (unwire_pde != NULL) *unwire_pde = false; } } else { MPASS((*pde & X86_PG_PS) == 0); mphys = *pde & ~(PAGE_MASK | pg_nx); } pte = (pt_entry_t *)PHYS_TO_DMAP(mphys); pte += pmap_pte_index(va); return (pte); } static void pmap_pti_add_kva_locked(vm_offset_t sva, vm_offset_t eva, bool exec) { vm_paddr_t pa; pd_entry_t *pde; pt_entry_t *pte, ptev; bool unwire_pde; VM_OBJECT_ASSERT_WLOCKED(pti_obj); sva = trunc_page(sva); MPASS(sva > VM_MAXUSER_ADDRESS); eva = round_page(eva); MPASS(sva < eva); for (; sva < eva; sva += PAGE_SIZE) { pte = pmap_pti_pte(sva, &unwire_pde); pa = pmap_kextract(sva); ptev = pa | X86_PG_RW | X86_PG_V | X86_PG_A | X86_PG_G | (exec ? 
0 : pg_nx) | pmap_cache_bits(kernel_pmap, VM_MEMATTR_DEFAULT, FALSE); if (*pte == 0) { pte_store(pte, ptev); pmap_pti_wire_pte(pte); } else { KASSERT(!pti_finalized, ("pti overlap after fin %#lx %#lx %#lx", sva, *pte, ptev)); KASSERT(*pte == ptev, ("pti non-identical pte after fin %#lx %#lx %#lx", sva, *pte, ptev)); } if (unwire_pde) { pde = pmap_pti_pde(sva); pmap_pti_unwire_pde(pde, true); } } } void pmap_pti_add_kva(vm_offset_t sva, vm_offset_t eva, bool exec) { if (!pti) return; VM_OBJECT_WLOCK(pti_obj); pmap_pti_add_kva_locked(sva, eva, exec); VM_OBJECT_WUNLOCK(pti_obj); } void pmap_pti_remove_kva(vm_offset_t sva, vm_offset_t eva) { pt_entry_t *pte; vm_offset_t va; if (!pti) return; sva = rounddown2(sva, PAGE_SIZE); MPASS(sva > VM_MAXUSER_ADDRESS); eva = roundup2(eva, PAGE_SIZE); MPASS(sva < eva); VM_OBJECT_WLOCK(pti_obj); for (va = sva; va < eva; va += PAGE_SIZE) { pte = pmap_pti_pte(va, NULL); KASSERT((*pte & X86_PG_V) != 0, ("invalid pte va %#lx pte %#lx pt %#lx", va, (u_long)pte, *pte)); pte_clear(pte); pmap_pti_unwire_pte(pte, va); } pmap_invalidate_range(kernel_pmap, sva, eva); VM_OBJECT_WUNLOCK(pti_obj); } #include "opt_ddb.h" #ifdef DDB #include #include DB_SHOW_COMMAND(pte, pmap_print_pte) { pmap_t pmap; pml4_entry_t *pml4; pdp_entry_t *pdp; pd_entry_t *pde; pt_entry_t *pte, PG_V; vm_offset_t va; if (!have_addr) { db_printf("show pte addr\n"); return; } va = (vm_offset_t)addr; if (kdb_thread != NULL) pmap = vmspace_pmap(kdb_thread->td_proc->p_vmspace); else pmap = PCPU_GET(curpmap); PG_V = pmap_valid_bit(pmap); pml4 = pmap_pml4e(pmap, va); db_printf("VA %#016lx pml4e %#016lx", va, *pml4); if ((*pml4 & PG_V) == 0) { db_printf("\n"); return; } pdp = pmap_pml4e_to_pdpe(pml4, va); db_printf(" pdpe %#016lx", *pdp); if ((*pdp & PG_V) == 0 || (*pdp & PG_PS) != 0) { db_printf("\n"); return; } pde = pmap_pdpe_to_pde(pdp, va); db_printf(" pde %#016lx", *pde); if ((*pde & PG_V) == 0 || (*pde & PG_PS) != 0) { db_printf("\n"); return; } pte = pmap_pde_to_pte(pde, va); db_printf(" pte %#016lx\n", *pte); } DB_SHOW_COMMAND(phys2dmap, pmap_phys2dmap) { vm_paddr_t a; if (have_addr) { a = (vm_paddr_t)addr; db_printf("0x%jx\n", (uintmax_t)PHYS_TO_DMAP(a)); } else { db_printf("show phys2dmap addr\n"); } } #endif Index: projects/import-googletest-1.8.1/sys/amd64/sgx/sgx_linux.c =================================================================== --- projects/import-googletest-1.8.1/sys/amd64/sgx/sgx_linux.c (revision 344270) +++ projects/import-googletest-1.8.1/sys/amd64/sgx/sgx_linux.c (revision 344271) @@ -1,119 +1,115 @@ /*- * Copyright (c) 2017 Ruslan Bukin * All rights reserved. * * This software was developed by BAE Systems, the University of Cambridge * Computer Laboratory, and Memorial University under DARPA/AFRL contract * FA8650-15-C-7558 ("CADETS"), as part of the DARPA Transparent Computing * (TC) research program. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. 
* * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ #include __FBSDID("$FreeBSD$"); #include #include #include #include #include #include #include #include #include #include #include #include #include #include #define SGX_LINUX_IOCTL_MIN (SGX_IOC_ENCLAVE_CREATE & 0xffff) #define SGX_LINUX_IOCTL_MAX (SGX_IOC_ENCLAVE_INIT & 0xffff) static int sgx_linux_ioctl(struct thread *td, struct linux_ioctl_args *args) { uint8_t data[SGX_IOCTL_MAX_DATA_LEN]; cap_rights_t rights; struct file *fp; u_long cmd; int error; int len; error = fget(td, args->fd, cap_rights_init(&rights, CAP_IOCTL), &fp); if (error != 0) return (error); cmd = args->cmd; args->cmd &= ~(LINUX_IOC_IN | LINUX_IOC_OUT); - if (cmd & LINUX_IOC_IN) + if ((cmd & LINUX_IOC_IN) != 0) args->cmd |= IOC_IN; - if (cmd & LINUX_IOC_OUT) + if ((cmd & LINUX_IOC_OUT) != 0) args->cmd |= IOC_OUT; len = IOCPARM_LEN(cmd); if (len > SGX_IOCTL_MAX_DATA_LEN) { - printf("%s: Can't copy data: cmd len is too big %d\n", - __func__, len); - return (EINVAL); + error = EINVAL; + goto out; } - if (cmd & LINUX_IOC_IN) { + if ((cmd & LINUX_IOC_IN) != 0) { error = copyin((void *)args->arg, data, len); - if (error) { - printf("%s: Can't copy data, error %d\n", - __func__, error); - return (EINVAL); - } + if (error != 0) + goto out; } - error = (fo_ioctl(fp, args->cmd, (caddr_t)data, td->td_ucred, td)); + error = fo_ioctl(fp, args->cmd, (caddr_t)data, td->td_ucred, td); +out: fdrop(fp, td); - return (error); } static struct linux_ioctl_handler sgx_linux_handler = { sgx_linux_ioctl, SGX_LINUX_IOCTL_MIN, SGX_LINUX_IOCTL_MAX, }; SYSINIT(sgx_linux_register, SI_SUB_KLD, SI_ORDER_MIDDLE, linux_ioctl_register_handler, &sgx_linux_handler); SYSUNINIT(sgx_linux_unregister, SI_SUB_KLD, SI_ORDER_MIDDLE, linux_ioctl_unregister_handler, &sgx_linux_handler); static int sgx_linux_modevent(module_t mod, int type, void *data) { return (0); } DEV_MODULE(sgx_linux, sgx_linux_modevent, NULL); MODULE_DEPEND(sgx_linux, linux64, 1, 1, 1); Index: projects/import-googletest-1.8.1/sys/arm/allwinner/axp81x.c =================================================================== --- projects/import-googletest-1.8.1/sys/arm/allwinner/axp81x.c (revision 344270) +++ projects/import-googletest-1.8.1/sys/arm/allwinner/axp81x.c (revision 344271) @@ -1,1578 +1,1619 @@ /*- * Copyright (c) 2018 Emmanuel Vadot * Copyright (c) 2016 Jared McNeill * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. 
Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. * IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, * BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED * AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, * OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * $FreeBSD$ */ /* * X-Powers AXP803/813/818 PMU for Allwinner SoCs */ #include __FBSDID("$FreeBSD$"); #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include "gpio_if.h" #include "iicbus_if.h" #include "regdev_if.h" MALLOC_DEFINE(M_AXP8XX_REG, "AXP8xx regulator", "AXP8xx power regulator"); #define AXP_POWERSRC 0x00 #define AXP_POWERSRC_ACIN (1 << 7) #define AXP_POWERSRC_VBUS (1 << 5) #define AXP_POWERSRC_VBAT (1 << 3) #define AXP_POWERSRC_CHARING (1 << 2) /* Charging Direction */ #define AXP_POWERSRC_SHORTED (1 << 1) #define AXP_POWERSRC_STARTUP (1 << 0) #define AXP_POWERMODE 0x01 #define AXP_POWERMODE_BAT_CHARGING (1 << 6) #define AXP_POWERMODE_BAT_PRESENT (1 << 5) #define AXP_POWERMODE_BAT_VALID (1 << 4) #define AXP_ICTYPE 0x03 #define AXP_POWERCTL1 0x10 #define AXP_POWERCTL1_DCDC7 (1 << 6) /* AXP813/818 only */ #define AXP_POWERCTL1_DCDC6 (1 << 5) #define AXP_POWERCTL1_DCDC5 (1 << 4) #define AXP_POWERCTL1_DCDC4 (1 << 3) #define AXP_POWERCTL1_DCDC3 (1 << 2) #define AXP_POWERCTL1_DCDC2 (1 << 1) #define AXP_POWERCTL1_DCDC1 (1 << 0) #define AXP_POWERCTL2 0x12 #define AXP_POWERCTL2_DC1SW (1 << 7) /* AXP803 only */ #define AXP_POWERCTL2_DLDO4 (1 << 6) #define AXP_POWERCTL2_DLDO3 (1 << 5) #define AXP_POWERCTL2_DLDO2 (1 << 4) #define AXP_POWERCTL2_DLDO1 (1 << 3) #define AXP_POWERCTL2_ELDO3 (1 << 2) #define AXP_POWERCTL2_ELDO2 (1 << 1) #define AXP_POWERCTL2_ELDO1 (1 << 0) #define AXP_POWERCTL3 0x13 #define AXP_POWERCTL3_ALDO3 (1 << 7) #define AXP_POWERCTL3_ALDO2 (1 << 6) #define AXP_POWERCTL3_ALDO1 (1 << 5) #define AXP_POWERCTL3_FLDO3 (1 << 4) /* AXP813/818 only */ #define AXP_POWERCTL3_FLDO2 (1 << 3) #define AXP_POWERCTL3_FLDO1 (1 << 2) #define AXP_VOLTCTL_DLDO1 0x15 #define AXP_VOLTCTL_DLDO2 0x16 #define AXP_VOLTCTL_DLDO3 0x17 #define AXP_VOLTCTL_DLDO4 0x18 #define AXP_VOLTCTL_ELDO1 0x19 #define AXP_VOLTCTL_ELDO2 0x1A #define AXP_VOLTCTL_ELDO3 0x1B #define AXP_VOLTCTL_FLDO1 0x1C #define AXP_VOLTCTL_FLDO2 0x1D #define AXP_VOLTCTL_DCDC1 0x20 #define AXP_VOLTCTL_DCDC2 0x21 #define AXP_VOLTCTL_DCDC3 0x22 #define AXP_VOLTCTL_DCDC4 0x23 #define AXP_VOLTCTL_DCDC5 0x24 #define AXP_VOLTCTL_DCDC6 0x25 #define AXP_VOLTCTL_DCDC7 0x26 #define AXP_VOLTCTL_ALDO1 0x28 #define AXP_VOLTCTL_ALDO2 0x29 #define AXP_VOLTCTL_ALDO3 0x2A #define AXP_VOLTCTL_STATUS (1 << 7) #define AXP_VOLTCTL_MASK 0x7f #define AXP_POWERBAT 0x32 #define AXP_POWERBAT_SHUTDOWN (1 << 7) +#define AXP_CHARGERCTL1 0x33 +#define AXP_CHARGERCTL1_MIN 0 +#define AXP_CHARGERCTL1_MAX 13 +#define AXP_CHARGERCTL1_CMASK 0xf #define 
AXP_IRQEN1 0x40 #define AXP_IRQEN1_ACIN_HI (1 << 6) #define AXP_IRQEN1_ACIN_LO (1 << 5) #define AXP_IRQEN1_VBUS_HI (1 << 3) #define AXP_IRQEN1_VBUS_LO (1 << 2) #define AXP_IRQEN2 0x41 #define AXP_IRQEN2_BAT_IN (1 << 7) #define AXP_IRQEN2_BAT_NO (1 << 6) #define AXP_IRQEN2_BATCHGC (1 << 3) #define AXP_IRQEN2_BATCHGD (1 << 2) #define AXP_IRQEN3 0x42 #define AXP_IRQEN4 0x43 #define AXP_IRQEN4_BATLVL_LO1 (1 << 1) #define AXP_IRQEN4_BATLVL_LO0 (1 << 0) #define AXP_IRQEN5 0x44 #define AXP_IRQEN5_POKSIRQ (1 << 4) #define AXP_IRQEN5_POKLIRQ (1 << 3) #define AXP_IRQEN6 0x45 #define AXP_IRQSTAT1 0x48 #define AXP_IRQSTAT1_ACIN_HI (1 << 6) #define AXP_IRQSTAT1_ACIN_LO (1 << 5) #define AXP_IRQSTAT1_VBUS_HI (1 << 3) #define AXP_IRQSTAT1_VBUS_LO (1 << 2) #define AXP_IRQSTAT2 0x49 #define AXP_IRQSTAT2_BAT_IN (1 << 7) #define AXP_IRQSTAT2_BAT_NO (1 << 6) #define AXP_IRQSTAT2_BATCHGC (1 << 3) #define AXP_IRQSTAT2_BATCHGD (1 << 2) #define AXP_IRQSTAT3 0x4a #define AXP_IRQSTAT4 0x4b #define AXP_IRQSTAT4_BATLVL_LO1 (1 << 1) #define AXP_IRQSTAT4_BATLVL_LO0 (1 << 0) #define AXP_IRQSTAT5 0x4c #define AXP_IRQSTAT5_POKSIRQ (1 << 4) #define AXP_IRQEN5_POKLIRQ (1 << 3) #define AXP_IRQSTAT6 0x4d #define AXP_BATSENSE_HI 0x78 #define AXP_BATSENSE_LO 0x79 #define AXP_BATCHG_HI 0x7a #define AXP_BATCHG_LO 0x7b #define AXP_BATDISCHG_HI 0x7c #define AXP_BATDISCHG_LO 0x7d #define AXP_GPIO0_CTRL 0x90 #define AXP_GPIO0LDO_CTRL 0x91 #define AXP_GPIO1_CTRL 0x92 #define AXP_GPIO1LDO_CTRL 0x93 #define AXP_GPIO_FUNC (0x7 << 0) #define AXP_GPIO_FUNC_SHIFT 0 #define AXP_GPIO_FUNC_DRVLO 0 #define AXP_GPIO_FUNC_DRVHI 1 #define AXP_GPIO_FUNC_INPUT 2 #define AXP_GPIO_FUNC_LDO_ON 3 #define AXP_GPIO_FUNC_LDO_OFF 4 #define AXP_GPIO_SIGBIT 0x94 #define AXP_GPIO_PD 0x97 #define AXP_FUEL_GAUGECTL 0xb8 #define AXP_FUEL_GAUGECTL_EN (1 << 7) #define AXP_BAT_CAP 0xb9 #define AXP_BAT_CAP_VALID (1 << 7) #define AXP_BAT_CAP_PERCENT 0x7f #define AXP_BAT_MAX_CAP_HI 0xe0 #define AXP_BAT_MAX_CAP_VALID (1 << 7) #define AXP_BAT_MAX_CAP_LO 0xe1 #define AXP_BAT_COULOMB_HI 0xe2 #define AXP_BAT_COULOMB_VALID (1 << 7) #define AXP_BAT_COULOMB_LO 0xe3 #define AXP_BAT_CAP_WARN 0xe6 #define AXP_BAT_CAP_WARN_LV1 0xf0 /* Bits 4, 5, 6, 7 */ #define AXP_BAT_CAP_WARN_LV2 0xf /* Bits 0, 1, 2, 3 */ /* Sensor conversion macros */ #define AXP_SENSOR_BAT_H(hi) ((hi) << 4) #define AXP_SENSOR_BAT_L(lo) ((lo) & 0xf) #define AXP_SENSOR_COULOMB(hi, lo) (((hi & ~(1 << 7)) << 8) | (lo)) static const struct { const char *name; uint8_t ctrl_reg; } axp8xx_pins[] = { { "GPIO0", AXP_GPIO0_CTRL }, { "GPIO1", AXP_GPIO1_CTRL }, }; enum AXP8XX_TYPE { AXP803 = 1, AXP813, }; static struct ofw_compat_data compat_data[] = { { "x-powers,axp803", AXP803 }, { "x-powers,axp813", AXP813 }, { "x-powers,axp818", AXP813 }, { NULL, 0 } }; static struct resource_spec axp8xx_spec[] = { { SYS_RES_IRQ, 0, RF_ACTIVE }, { -1, 0 } }; struct axp8xx_regdef { intptr_t id; char *name; char *supply_name; uint8_t enable_reg; uint8_t enable_mask; uint8_t enable_value; uint8_t disable_value; uint8_t voltage_reg; int voltage_min; int voltage_max; int voltage_step1; int voltage_nstep1; int voltage_step2; int voltage_nstep2; }; enum axp8xx_reg_id { AXP8XX_REG_ID_DCDC1 = 100, AXP8XX_REG_ID_DCDC2, AXP8XX_REG_ID_DCDC3, AXP8XX_REG_ID_DCDC4, AXP8XX_REG_ID_DCDC5, AXP8XX_REG_ID_DCDC6, AXP813_REG_ID_DCDC7, AXP803_REG_ID_DC1SW, AXP8XX_REG_ID_DLDO1, AXP8XX_REG_ID_DLDO2, AXP8XX_REG_ID_DLDO3, AXP8XX_REG_ID_DLDO4, AXP8XX_REG_ID_ELDO1, AXP8XX_REG_ID_ELDO2, AXP8XX_REG_ID_ELDO3, AXP8XX_REG_ID_ALDO1, AXP8XX_REG_ID_ALDO2, 
AXP8XX_REG_ID_ALDO3, AXP8XX_REG_ID_FLDO1, AXP8XX_REG_ID_FLDO2, AXP813_REG_ID_FLDO3, AXP8XX_REG_ID_GPIO0_LDO, AXP8XX_REG_ID_GPIO1_LDO, }; static struct axp8xx_regdef axp803_regdefs[] = { { .id = AXP803_REG_ID_DC1SW, .name = "dc1sw", .enable_reg = AXP_POWERCTL2, .enable_mask = (uint8_t) AXP_POWERCTL2_DC1SW, .enable_value = AXP_POWERCTL2_DC1SW, }, }; static struct axp8xx_regdef axp813_regdefs[] = { { .id = AXP813_REG_ID_DCDC7, .name = "dcdc7", .enable_reg = AXP_POWERCTL1, .enable_mask = (uint8_t) AXP_POWERCTL1_DCDC7, .enable_value = AXP_POWERCTL1_DCDC7, .voltage_reg = AXP_VOLTCTL_DCDC7, .voltage_min = 600, .voltage_max = 1520, .voltage_step1 = 10, .voltage_nstep1 = 50, .voltage_step2 = 20, .voltage_nstep2 = 21, }, }; static struct axp8xx_regdef axp8xx_common_regdefs[] = { { .id = AXP8XX_REG_ID_DCDC1, .name = "dcdc1", .enable_reg = AXP_POWERCTL1, .enable_mask = (uint8_t) AXP_POWERCTL1_DCDC1, .enable_value = AXP_POWERCTL1_DCDC1, .voltage_reg = AXP_VOLTCTL_DCDC1, .voltage_min = 1600, .voltage_max = 3400, .voltage_step1 = 100, .voltage_nstep1 = 18, }, { .id = AXP8XX_REG_ID_DCDC2, .name = "dcdc2", .enable_reg = AXP_POWERCTL1, .enable_mask = (uint8_t) AXP_POWERCTL1_DCDC2, .enable_value = AXP_POWERCTL1_DCDC2, .voltage_reg = AXP_VOLTCTL_DCDC2, .voltage_min = 500, .voltage_max = 1300, .voltage_step1 = 10, .voltage_nstep1 = 70, .voltage_step2 = 20, .voltage_nstep2 = 5, }, { .id = AXP8XX_REG_ID_DCDC3, .name = "dcdc3", .enable_reg = AXP_POWERCTL1, .enable_mask = (uint8_t) AXP_POWERCTL1_DCDC3, .enable_value = AXP_POWERCTL1_DCDC3, .voltage_reg = AXP_VOLTCTL_DCDC3, .voltage_min = 500, .voltage_max = 1300, .voltage_step1 = 10, .voltage_nstep1 = 70, .voltage_step2 = 20, .voltage_nstep2 = 5, }, { .id = AXP8XX_REG_ID_DCDC4, .name = "dcdc4", .enable_reg = AXP_POWERCTL1, .enable_mask = (uint8_t) AXP_POWERCTL1_DCDC4, .enable_value = AXP_POWERCTL1_DCDC4, .voltage_reg = AXP_VOLTCTL_DCDC4, .voltage_min = 500, .voltage_max = 1300, .voltage_step1 = 10, .voltage_nstep1 = 70, .voltage_step2 = 20, .voltage_nstep2 = 5, }, { .id = AXP8XX_REG_ID_DCDC5, .name = "dcdc5", .enable_reg = AXP_POWERCTL1, .enable_mask = (uint8_t) AXP_POWERCTL1_DCDC5, .enable_value = AXP_POWERCTL1_DCDC5, .voltage_reg = AXP_VOLTCTL_DCDC5, .voltage_min = 800, .voltage_max = 1840, .voltage_step1 = 10, .voltage_nstep1 = 42, .voltage_step2 = 20, .voltage_nstep2 = 36, }, { .id = AXP8XX_REG_ID_DCDC6, .name = "dcdc6", .enable_reg = AXP_POWERCTL1, .enable_mask = (uint8_t) AXP_POWERCTL1_DCDC6, .enable_value = AXP_POWERCTL1_DCDC6, .voltage_reg = AXP_VOLTCTL_DCDC6, .voltage_min = 600, .voltage_max = 1520, .voltage_step1 = 10, .voltage_nstep1 = 50, .voltage_step2 = 20, .voltage_nstep2 = 21, }, { .id = AXP8XX_REG_ID_DLDO1, .name = "dldo1", .enable_reg = AXP_POWERCTL2, .enable_mask = (uint8_t) AXP_POWERCTL2_DLDO1, .enable_value = AXP_POWERCTL2_DLDO1, .voltage_reg = AXP_VOLTCTL_DLDO1, .voltage_min = 700, .voltage_max = 3300, .voltage_step1 = 100, .voltage_nstep1 = 26, }, { .id = AXP8XX_REG_ID_DLDO2, .name = "dldo2", .enable_reg = AXP_POWERCTL2, .enable_mask = (uint8_t) AXP_POWERCTL2_DLDO2, .enable_value = AXP_POWERCTL2_DLDO2, .voltage_reg = AXP_VOLTCTL_DLDO2, .voltage_min = 700, .voltage_max = 4200, .voltage_step1 = 100, .voltage_nstep1 = 27, .voltage_step2 = 200, .voltage_nstep2 = 4, }, { .id = AXP8XX_REG_ID_DLDO3, .name = "dldo3", .enable_reg = AXP_POWERCTL2, .enable_mask = (uint8_t) AXP_POWERCTL2_DLDO3, .enable_value = AXP_POWERCTL2_DLDO3, .voltage_reg = AXP_VOLTCTL_DLDO3, .voltage_min = 700, .voltage_max = 3300, .voltage_step1 = 100, .voltage_nstep1 = 26, }, { 
.id = AXP8XX_REG_ID_DLDO4, .name = "dldo4", .enable_reg = AXP_POWERCTL2, .enable_mask = (uint8_t) AXP_POWERCTL2_DLDO4, .enable_value = AXP_POWERCTL2_DLDO4, .voltage_reg = AXP_VOLTCTL_DLDO4, .voltage_min = 700, .voltage_max = 3300, .voltage_step1 = 100, .voltage_nstep1 = 26, }, { .id = AXP8XX_REG_ID_ALDO1, .name = "aldo1", .enable_reg = AXP_POWERCTL3, .enable_mask = (uint8_t) AXP_POWERCTL3_ALDO1, .enable_value = AXP_POWERCTL3_ALDO1, .voltage_min = 700, .voltage_max = 3300, .voltage_step1 = 100, .voltage_nstep1 = 26, }, { .id = AXP8XX_REG_ID_ALDO2, .name = "aldo2", .enable_reg = AXP_POWERCTL3, .enable_mask = (uint8_t) AXP_POWERCTL3_ALDO2, .enable_value = AXP_POWERCTL3_ALDO2, .voltage_min = 700, .voltage_max = 3300, .voltage_step1 = 100, .voltage_nstep1 = 26, }, { .id = AXP8XX_REG_ID_ALDO3, .name = "aldo3", .enable_reg = AXP_POWERCTL3, .enable_mask = (uint8_t) AXP_POWERCTL3_ALDO3, .enable_value = AXP_POWERCTL3_ALDO3, .voltage_min = 700, .voltage_max = 3300, .voltage_step1 = 100, .voltage_nstep1 = 26, }, { .id = AXP8XX_REG_ID_ELDO1, .name = "eldo1", .enable_reg = AXP_POWERCTL2, .enable_mask = (uint8_t) AXP_POWERCTL2_ELDO1, .enable_value = AXP_POWERCTL2_ELDO1, .voltage_min = 700, .voltage_max = 1900, .voltage_step1 = 50, .voltage_nstep1 = 24, }, { .id = AXP8XX_REG_ID_ELDO2, .name = "eldo2", .enable_reg = AXP_POWERCTL2, .enable_mask = (uint8_t) AXP_POWERCTL2_ELDO2, .enable_value = AXP_POWERCTL2_ELDO2, .voltage_min = 700, .voltage_max = 1900, .voltage_step1 = 50, .voltage_nstep1 = 24, }, { .id = AXP8XX_REG_ID_ELDO3, .name = "eldo3", .enable_reg = AXP_POWERCTL2, .enable_mask = (uint8_t) AXP_POWERCTL2_ELDO3, .enable_value = AXP_POWERCTL2_ELDO3, .voltage_min = 700, .voltage_max = 1900, .voltage_step1 = 50, .voltage_nstep1 = 24, }, { .id = AXP8XX_REG_ID_FLDO1, .name = "fldo1", .enable_reg = AXP_POWERCTL3, .enable_mask = (uint8_t) AXP_POWERCTL3_FLDO1, .enable_value = AXP_POWERCTL3_FLDO1, .voltage_min = 700, .voltage_max = 1450, .voltage_step1 = 50, .voltage_nstep1 = 15, }, { .id = AXP8XX_REG_ID_FLDO2, .name = "fldo2", .enable_reg = AXP_POWERCTL3, .enable_mask = (uint8_t) AXP_POWERCTL3_FLDO2, .enable_value = AXP_POWERCTL3_FLDO2, .voltage_min = 700, .voltage_max = 1450, .voltage_step1 = 50, .voltage_nstep1 = 15, }, { .id = AXP8XX_REG_ID_GPIO0_LDO, .name = "ldo-io0", .enable_reg = AXP_GPIO0_CTRL, .enable_mask = (uint8_t) AXP_GPIO_FUNC, .enable_value = AXP_GPIO_FUNC_LDO_ON, .disable_value = AXP_GPIO_FUNC_LDO_OFF, .voltage_reg = AXP_GPIO0LDO_CTRL, .voltage_min = 700, .voltage_max = 3300, .voltage_step1 = 100, .voltage_nstep1 = 26, }, { .id = AXP8XX_REG_ID_GPIO1_LDO, .name = "ldo-io1", .enable_reg = AXP_GPIO1_CTRL, .enable_mask = (uint8_t) AXP_GPIO_FUNC, .enable_value = AXP_GPIO_FUNC_LDO_ON, .disable_value = AXP_GPIO_FUNC_LDO_OFF, .voltage_reg = AXP_GPIO1LDO_CTRL, .voltage_min = 700, .voltage_max = 3300, .voltage_step1 = 100, .voltage_nstep1 = 26, }, }; enum axp8xx_sensor { AXP_SENSOR_ACIN_PRESENT, AXP_SENSOR_VBUS_PRESENT, AXP_SENSOR_BATT_PRESENT, AXP_SENSOR_BATT_CHARGING, AXP_SENSOR_BATT_CHARGE_STATE, AXP_SENSOR_BATT_VOLTAGE, AXP_SENSOR_BATT_CHARGE_CURRENT, AXP_SENSOR_BATT_DISCHARGE_CURRENT, AXP_SENSOR_BATT_CAPACITY_PERCENT, AXP_SENSOR_BATT_MAXIMUM_CAPACITY, AXP_SENSOR_BATT_CURRENT_CAPACITY, }; enum battery_capacity_state { BATT_CAPACITY_NORMAL = 1, /* normal cap in battery */ BATT_CAPACITY_WARNING, /* warning cap in battery */ BATT_CAPACITY_CRITICAL, /* critical cap in battery */ BATT_CAPACITY_HIGH, /* high cap in battery */ BATT_CAPACITY_MAX, /* maximum cap in battery */ BATT_CAPACITY_LOW /* low cap in 
battery */ }; struct axp8xx_sensors { int id; const char *name; const char *desc; const char *format; }; static const struct axp8xx_sensors axp8xx_common_sensors[] = { { .id = AXP_SENSOR_ACIN_PRESENT, .name = "acin", .format = "I", .desc = "ACIN Present", }, { .id = AXP_SENSOR_VBUS_PRESENT, .name = "vbus", .format = "I", .desc = "VBUS Present", }, { .id = AXP_SENSOR_BATT_PRESENT, .name = "bat", .format = "I", .desc = "Battery Present", }, { .id = AXP_SENSOR_BATT_CHARGING, .name = "batcharging", .format = "I", .desc = "Battery Charging", }, { .id = AXP_SENSOR_BATT_CHARGE_STATE, .name = "batchargestate", .format = "I", .desc = "Battery Charge State", }, { .id = AXP_SENSOR_BATT_VOLTAGE, .name = "batvolt", .format = "I", .desc = "Battery Voltage", }, { .id = AXP_SENSOR_BATT_CHARGE_CURRENT, .name = "batchargecurrent", .format = "I", - .desc = "Battery Charging Current", + .desc = "Average Battery Charging Current", }, { .id = AXP_SENSOR_BATT_DISCHARGE_CURRENT, .name = "batdischargecurrent", .format = "I", - .desc = "Battery Discharging Current", + .desc = "Average Battery Discharging Current", }, { .id = AXP_SENSOR_BATT_CAPACITY_PERCENT, .name = "batcapacitypercent", .format = "I", .desc = "Battery Capacity Percentage", }, { .id = AXP_SENSOR_BATT_MAXIMUM_CAPACITY, .name = "batmaxcapacity", .format = "I", .desc = "Battery Maximum Capacity", }, { .id = AXP_SENSOR_BATT_CURRENT_CAPACITY, .name = "batcurrentcapacity", .format = "I", .desc = "Battery Current Capacity", }, }; struct axp8xx_config { const char *name; int batsense_step; /* uV */ int charge_step; /* uA */ int discharge_step; /* uA */ int maxcap_step; /* uAh */ int coulomb_step; /* uAh */ }; static struct axp8xx_config axp803_config = { .name = "AXP803", .batsense_step = 1100, .charge_step = 1000, .discharge_step = 1000, .maxcap_step = 1456, .coulomb_step = 1456, }; struct axp8xx_softc; struct axp8xx_reg_sc { struct regnode *regnode; device_t base_dev; struct axp8xx_regdef *def; phandle_t xref; struct regnode_std_param *param; }; struct axp8xx_softc { struct resource *res; uint16_t addr; void *ih; device_t gpiodev; struct mtx mtx; int busy; int type; /* Configs */ const struct axp8xx_config *config; /* Sensors */ const struct axp8xx_sensors *sensors; int nsensors; /* Regulators */ struct axp8xx_reg_sc **regs; int nregs; /* Warning, shutdown thresholds */ int warn_thres; int shut_thres; }; #define AXP_LOCK(sc) mtx_lock(&(sc)->mtx) #define AXP_UNLOCK(sc) mtx_unlock(&(sc)->mtx) static int axp8xx_read(device_t dev, uint8_t reg, uint8_t *data, uint8_t size) { struct axp8xx_softc *sc; struct iic_msg msg[2]; sc = device_get_softc(dev); msg[0].slave = sc->addr; msg[0].flags = IIC_M_WR; msg[0].len = 1; msg[0].buf = ® msg[1].slave = sc->addr; msg[1].flags = IIC_M_RD; msg[1].len = size; msg[1].buf = data; return (iicbus_transfer(dev, msg, 2)); } static int axp8xx_write(device_t dev, uint8_t reg, uint8_t val) { struct axp8xx_softc *sc; struct iic_msg msg[2]; sc = device_get_softc(dev); msg[0].slave = sc->addr; msg[0].flags = IIC_M_WR; msg[0].len = 1; msg[0].buf = ® msg[1].slave = sc->addr; msg[1].flags = IIC_M_WR; msg[1].len = 1; msg[1].buf = &val; return (iicbus_transfer(dev, msg, 2)); } static int axp8xx_regnode_init(struct regnode *regnode) { return (0); } static int axp8xx_regnode_enable(struct regnode *regnode, bool enable, int *udelay) { struct axp8xx_reg_sc *sc; uint8_t val; sc = regnode_get_softc(regnode); if (bootverbose) device_printf(sc->base_dev, "%sable %s (%s)\n", enable ? 
"En" : "Dis", regnode_get_name(regnode), sc->def->name); axp8xx_read(sc->base_dev, sc->def->enable_reg, &val, 1); val &= ~sc->def->enable_mask; if (enable) val |= sc->def->enable_value; else { if (sc->def->disable_value) val |= sc->def->disable_value; else val &= ~sc->def->enable_value; } axp8xx_write(sc->base_dev, sc->def->enable_reg, val); *udelay = 0; return (0); } static void axp8xx_regnode_reg_to_voltage(struct axp8xx_reg_sc *sc, uint8_t val, int *uv) { if (val < sc->def->voltage_nstep1) *uv = sc->def->voltage_min + val * sc->def->voltage_step1; else *uv = sc->def->voltage_min + (sc->def->voltage_nstep1 * sc->def->voltage_step1) + ((val - sc->def->voltage_nstep1) * sc->def->voltage_step2); *uv *= 1000; } static int axp8xx_regnode_voltage_to_reg(struct axp8xx_reg_sc *sc, int min_uvolt, int max_uvolt, uint8_t *val) { uint8_t nval; int nstep, uvolt; nval = 0; uvolt = sc->def->voltage_min * 1000; for (nstep = 0; nstep < sc->def->voltage_nstep1 && uvolt < min_uvolt; nstep++) { ++nval; uvolt += (sc->def->voltage_step1 * 1000); } for (nstep = 0; nstep < sc->def->voltage_nstep2 && uvolt < min_uvolt; nstep++) { ++nval; uvolt += (sc->def->voltage_step2 * 1000); } if (uvolt > max_uvolt) return (EINVAL); *val = nval; return (0); } static int axp8xx_regnode_set_voltage(struct regnode *regnode, int min_uvolt, int max_uvolt, int *udelay) { struct axp8xx_reg_sc *sc; uint8_t val; sc = regnode_get_softc(regnode); if (bootverbose) device_printf(sc->base_dev, "Setting %s (%s) to %d<->%d\n", regnode_get_name(regnode), sc->def->name, min_uvolt, max_uvolt); if (sc->def->voltage_step1 == 0) return (ENXIO); if (axp8xx_regnode_voltage_to_reg(sc, min_uvolt, max_uvolt, &val) != 0) return (ERANGE); axp8xx_write(sc->base_dev, sc->def->voltage_reg, val); *udelay = 0; return (0); } static int axp8xx_regnode_get_voltage(struct regnode *regnode, int *uvolt) { struct axp8xx_reg_sc *sc; uint8_t val; sc = regnode_get_softc(regnode); if (!sc->def->voltage_step1 || !sc->def->voltage_step2) return (ENXIO); axp8xx_read(sc->base_dev, sc->def->voltage_reg, &val, 1); axp8xx_regnode_reg_to_voltage(sc, val & AXP_VOLTCTL_MASK, uvolt); return (0); } static regnode_method_t axp8xx_regnode_methods[] = { /* Regulator interface */ REGNODEMETHOD(regnode_init, axp8xx_regnode_init), REGNODEMETHOD(regnode_enable, axp8xx_regnode_enable), REGNODEMETHOD(regnode_set_voltage, axp8xx_regnode_set_voltage), REGNODEMETHOD(regnode_get_voltage, axp8xx_regnode_get_voltage), REGNODEMETHOD_END }; DEFINE_CLASS_1(axp8xx_regnode, axp8xx_regnode_class, axp8xx_regnode_methods, sizeof(struct axp8xx_reg_sc), regnode_class); static void axp8xx_shutdown(void *devp, int howto) { device_t dev; if ((howto & RB_POWEROFF) == 0) return; dev = devp; if (bootverbose) device_printf(dev, "Shutdown Axp8xx\n"); axp8xx_write(dev, AXP_POWERBAT, AXP_POWERBAT_SHUTDOWN); } static int +axp8xx_sysctl_chargecurrent(SYSCTL_HANDLER_ARGS) +{ + device_t dev = arg1; + uint8_t data; + int val, error; + + error = axp8xx_read(dev, AXP_CHARGERCTL1, &data, 1); + if (error != 0) + return (error); + + if (bootverbose) + device_printf(dev, "Raw CHARGECTL1 val: 0x%0x\n", data); + val = (data & AXP_CHARGERCTL1_CMASK); + error = sysctl_handle_int(oidp, &val, 0, req); + if (error || !req->newptr) /* error || read request */ + return (error); + + if ((val < AXP_CHARGERCTL1_MIN) || (val > AXP_CHARGERCTL1_MAX)) + return (EINVAL); + + val |= (data & (AXP_CHARGERCTL1_CMASK << 4)); + axp8xx_write(dev, AXP_CHARGERCTL1, val); + + return (0); +} + +static int axp8xx_sysctl(SYSCTL_HANDLER_ARGS) { struct 
axp8xx_softc *sc; device_t dev = arg1; enum axp8xx_sensor sensor = arg2; const struct axp8xx_config *c; uint8_t data; int val, i, found, batt_val; uint8_t lo, hi; sc = device_get_softc(dev); c = sc->config; for (found = 0, i = 0; i < sc->nsensors; i++) { if (sc->sensors[i].id == sensor) { found = 1; break; } } if (found == 0) return (ENOENT); switch (sensor) { case AXP_SENSOR_ACIN_PRESENT: if (axp8xx_read(dev, AXP_POWERSRC, &data, 1) == 0) val = !!(data & AXP_POWERSRC_ACIN); break; case AXP_SENSOR_VBUS_PRESENT: if (axp8xx_read(dev, AXP_POWERSRC, &data, 1) == 0) val = !!(data & AXP_POWERSRC_VBUS); break; case AXP_SENSOR_BATT_PRESENT: if (axp8xx_read(dev, AXP_POWERMODE, &data, 1) == 0) { if (data & AXP_POWERMODE_BAT_VALID) val = !!(data & AXP_POWERMODE_BAT_PRESENT); } break; case AXP_SENSOR_BATT_CHARGING: if (axp8xx_read(dev, AXP_POWERMODE, &data, 1) == 0) val = !!(data & AXP_POWERMODE_BAT_CHARGING); break; case AXP_SENSOR_BATT_CHARGE_STATE: if (axp8xx_read(dev, AXP_BAT_CAP, &data, 1) == 0 && (data & AXP_BAT_CAP_VALID) != 0) { batt_val = (data & AXP_BAT_CAP_PERCENT); if (batt_val <= sc->shut_thres) val = BATT_CAPACITY_CRITICAL; else if (batt_val <= sc->warn_thres) val = BATT_CAPACITY_WARNING; else val = BATT_CAPACITY_NORMAL; } break; case AXP_SENSOR_BATT_CAPACITY_PERCENT: if (axp8xx_read(dev, AXP_BAT_CAP, &data, 1) == 0 && (data & AXP_BAT_CAP_VALID) != 0) val = (data & AXP_BAT_CAP_PERCENT); break; case AXP_SENSOR_BATT_VOLTAGE: if (axp8xx_read(dev, AXP_BATSENSE_HI, &hi, 1) == 0 && axp8xx_read(dev, AXP_BATSENSE_LO, &lo, 1) == 0) { val = (AXP_SENSOR_BAT_H(hi) | AXP_SENSOR_BAT_L(lo)); val *= c->batsense_step; } break; case AXP_SENSOR_BATT_CHARGE_CURRENT: if (axp8xx_read(dev, AXP_POWERSRC, &data, 1) == 0 && (data & AXP_POWERSRC_CHARING) != 0 && axp8xx_read(dev, AXP_BATCHG_HI, &hi, 1) == 0 && axp8xx_read(dev, AXP_BATCHG_LO, &lo, 1) == 0) { val = (AXP_SENSOR_BAT_H(hi) | AXP_SENSOR_BAT_L(lo)); val *= c->charge_step; } break; case AXP_SENSOR_BATT_DISCHARGE_CURRENT: if (axp8xx_read(dev, AXP_POWERSRC, &data, 1) == 0 && (data & AXP_POWERSRC_CHARING) == 0 && axp8xx_read(dev, AXP_BATDISCHG_HI, &hi, 1) == 0 && axp8xx_read(dev, AXP_BATDISCHG_LO, &lo, 1) == 0) { val = (AXP_SENSOR_BAT_H(hi) | AXP_SENSOR_BAT_L(lo)); val *= c->discharge_step; } break; case AXP_SENSOR_BATT_MAXIMUM_CAPACITY: if (axp8xx_read(dev, AXP_BAT_MAX_CAP_HI, &hi, 1) == 0 && axp8xx_read(dev, AXP_BAT_MAX_CAP_LO, &lo, 1) == 0) { val = AXP_SENSOR_COULOMB(hi, lo); val *= c->maxcap_step; } break; case AXP_SENSOR_BATT_CURRENT_CAPACITY: if (axp8xx_read(dev, AXP_BAT_COULOMB_HI, &hi, 1) == 0 && axp8xx_read(dev, AXP_BAT_COULOMB_LO, &lo, 1) == 0) { val = AXP_SENSOR_COULOMB(hi, lo); val *= c->coulomb_step; } break; } return sysctl_handle_opaque(oidp, &val, sizeof(val), req); } static void axp8xx_intr(void *arg) { device_t dev; uint8_t val; int error; dev = arg; error = axp8xx_read(dev, AXP_IRQSTAT1, &val, 1); if (error != 0) return; if (val) { if (bootverbose) device_printf(dev, "AXP_IRQSTAT1 val: %x\n", val); if (val & AXP_IRQSTAT1_ACIN_HI) devctl_notify("PMU", "AC", "plugged", NULL); if (val & AXP_IRQSTAT1_ACIN_LO) devctl_notify("PMU", "AC", "unplugged", NULL); if (val & AXP_IRQSTAT1_VBUS_HI) devctl_notify("PMU", "USB", "plugged", NULL); if (val & AXP_IRQSTAT1_VBUS_LO) devctl_notify("PMU", "USB", "unplugged", NULL); /* Acknowledge */ axp8xx_write(dev, AXP_IRQSTAT1, val); } error = axp8xx_read(dev, AXP_IRQSTAT2, &val, 1); if (error != 0) return; if (val) { if (bootverbose) device_printf(dev, "AXP_IRQSTAT2 val: %x\n", val); if (val & 
AXP_IRQSTAT2_BATCHGD) devctl_notify("PMU", "Battery", "charged", NULL); if (val & AXP_IRQSTAT2_BATCHGC) devctl_notify("PMU", "Battery", "charging", NULL); if (val & AXP_IRQSTAT2_BAT_NO) devctl_notify("PMU", "Battery", "absent", NULL); if (val & AXP_IRQSTAT2_BAT_IN) devctl_notify("PMU", "Battery", "plugged", NULL); /* Acknowledge */ axp8xx_write(dev, AXP_IRQSTAT2, val); } error = axp8xx_read(dev, AXP_IRQSTAT3, &val, 1); if (error != 0) return; if (val) { /* Acknowledge */ axp8xx_write(dev, AXP_IRQSTAT3, val); } error = axp8xx_read(dev, AXP_IRQSTAT4, &val, 1); if (error != 0) return; if (val) { if (bootverbose) device_printf(dev, "AXP_IRQSTAT4 val: %x\n", val); if (val & AXP_IRQSTAT4_BATLVL_LO0) devctl_notify("PMU", "Battery", "lower than level 2", NULL); if (val & AXP_IRQSTAT4_BATLVL_LO1) devctl_notify("PMU", "Battery", "lower than level 1", NULL); /* Acknowledge */ axp8xx_write(dev, AXP_IRQSTAT4, val); } error = axp8xx_read(dev, AXP_IRQSTAT5, &val, 1); if (error != 0) return; if (val != 0) { if ((val & AXP_IRQSTAT5_POKSIRQ) != 0) { if (bootverbose) device_printf(dev, "Power button pressed\n"); shutdown_nice(RB_POWEROFF); } /* Acknowledge */ axp8xx_write(dev, AXP_IRQSTAT5, val); } error = axp8xx_read(dev, AXP_IRQSTAT6, &val, 1); if (error != 0) return; if (val) { /* Acknowledge */ axp8xx_write(dev, AXP_IRQSTAT6, val); } } static device_t axp8xx_gpio_get_bus(device_t dev) { struct axp8xx_softc *sc; sc = device_get_softc(dev); return (sc->gpiodev); } static int axp8xx_gpio_pin_max(device_t dev, int *maxpin) { *maxpin = nitems(axp8xx_pins) - 1; return (0); } static int axp8xx_gpio_pin_getname(device_t dev, uint32_t pin, char *name) { if (pin >= nitems(axp8xx_pins)) return (EINVAL); snprintf(name, GPIOMAXNAME, "%s", axp8xx_pins[pin].name); return (0); } static int axp8xx_gpio_pin_getcaps(device_t dev, uint32_t pin, uint32_t *caps) { if (pin >= nitems(axp8xx_pins)) return (EINVAL); *caps = GPIO_PIN_INPUT | GPIO_PIN_OUTPUT; return (0); } static int axp8xx_gpio_pin_getflags(device_t dev, uint32_t pin, uint32_t *flags) { struct axp8xx_softc *sc; uint8_t data, func; int error; if (pin >= nitems(axp8xx_pins)) return (EINVAL); sc = device_get_softc(dev); AXP_LOCK(sc); error = axp8xx_read(dev, axp8xx_pins[pin].ctrl_reg, &data, 1); if (error == 0) { func = (data & AXP_GPIO_FUNC) >> AXP_GPIO_FUNC_SHIFT; if (func == AXP_GPIO_FUNC_INPUT) *flags = GPIO_PIN_INPUT; else if (func == AXP_GPIO_FUNC_DRVLO || func == AXP_GPIO_FUNC_DRVHI) *flags = GPIO_PIN_OUTPUT; else *flags = 0; } AXP_UNLOCK(sc); return (error); } static int axp8xx_gpio_pin_setflags(device_t dev, uint32_t pin, uint32_t flags) { struct axp8xx_softc *sc; uint8_t data; int error; if (pin >= nitems(axp8xx_pins)) return (EINVAL); sc = device_get_softc(dev); AXP_LOCK(sc); error = axp8xx_read(dev, axp8xx_pins[pin].ctrl_reg, &data, 1); if (error == 0) { data &= ~AXP_GPIO_FUNC; if ((flags & (GPIO_PIN_INPUT|GPIO_PIN_OUTPUT)) != 0) { if ((flags & GPIO_PIN_OUTPUT) == 0) data |= AXP_GPIO_FUNC_INPUT; } error = axp8xx_write(dev, axp8xx_pins[pin].ctrl_reg, data); } AXP_UNLOCK(sc); return (error); } static int axp8xx_gpio_pin_get(device_t dev, uint32_t pin, unsigned int *val) { struct axp8xx_softc *sc; uint8_t data, func; int error; if (pin >= nitems(axp8xx_pins)) return (EINVAL); sc = device_get_softc(dev); AXP_LOCK(sc); error = axp8xx_read(dev, axp8xx_pins[pin].ctrl_reg, &data, 1); if (error == 0) { func = (data & AXP_GPIO_FUNC) >> AXP_GPIO_FUNC_SHIFT; switch (func) { case AXP_GPIO_FUNC_DRVLO: *val = 0; break; case AXP_GPIO_FUNC_DRVHI: *val = 1; break; case 
AXP_GPIO_FUNC_INPUT: error = axp8xx_read(dev, AXP_GPIO_SIGBIT, &data, 1); if (error == 0) *val = (data & (1 << pin)) ? 1 : 0; break; default: error = EIO; break; } } AXP_UNLOCK(sc); return (error); } static int axp8xx_gpio_pin_set(device_t dev, uint32_t pin, unsigned int val) { struct axp8xx_softc *sc; uint8_t data, func; int error; if (pin >= nitems(axp8xx_pins)) return (EINVAL); sc = device_get_softc(dev); AXP_LOCK(sc); error = axp8xx_read(dev, axp8xx_pins[pin].ctrl_reg, &data, 1); if (error == 0) { func = (data & AXP_GPIO_FUNC) >> AXP_GPIO_FUNC_SHIFT; switch (func) { case AXP_GPIO_FUNC_DRVLO: case AXP_GPIO_FUNC_DRVHI: data &= ~AXP_GPIO_FUNC; data |= (val << AXP_GPIO_FUNC_SHIFT); break; default: error = EIO; break; } } if (error == 0) error = axp8xx_write(dev, axp8xx_pins[pin].ctrl_reg, data); AXP_UNLOCK(sc); return (error); } static int axp8xx_gpio_pin_toggle(device_t dev, uint32_t pin) { struct axp8xx_softc *sc; uint8_t data, func; int error; if (pin >= nitems(axp8xx_pins)) return (EINVAL); sc = device_get_softc(dev); AXP_LOCK(sc); error = axp8xx_read(dev, axp8xx_pins[pin].ctrl_reg, &data, 1); if (error == 0) { func = (data & AXP_GPIO_FUNC) >> AXP_GPIO_FUNC_SHIFT; switch (func) { case AXP_GPIO_FUNC_DRVLO: data &= ~AXP_GPIO_FUNC; data |= (AXP_GPIO_FUNC_DRVHI << AXP_GPIO_FUNC_SHIFT); break; case AXP_GPIO_FUNC_DRVHI: data &= ~AXP_GPIO_FUNC; data |= (AXP_GPIO_FUNC_DRVLO << AXP_GPIO_FUNC_SHIFT); break; default: error = EIO; break; } } if (error == 0) error = axp8xx_write(dev, axp8xx_pins[pin].ctrl_reg, data); AXP_UNLOCK(sc); return (error); } static int axp8xx_gpio_map_gpios(device_t bus, phandle_t dev, phandle_t gparent, int gcells, pcell_t *gpios, uint32_t *pin, uint32_t *flags) { if (gpios[0] >= nitems(axp8xx_pins)) return (EINVAL); *pin = gpios[0]; *flags = gpios[1]; return (0); } static phandle_t axp8xx_get_node(device_t dev, device_t bus) { return (ofw_bus_get_node(dev)); } static struct axp8xx_reg_sc * axp8xx_reg_attach(device_t dev, phandle_t node, struct axp8xx_regdef *def) { struct axp8xx_reg_sc *reg_sc; struct regnode_init_def initdef; struct regnode *regnode; memset(&initdef, 0, sizeof(initdef)); if (regulator_parse_ofw_stdparam(dev, node, &initdef) != 0) return (NULL); if (initdef.std_param.min_uvolt == 0) initdef.std_param.min_uvolt = def->voltage_min * 1000; if (initdef.std_param.max_uvolt == 0) initdef.std_param.max_uvolt = def->voltage_max * 1000; initdef.id = def->id; initdef.ofw_node = node; regnode = regnode_create(dev, &axp8xx_regnode_class, &initdef); if (regnode == NULL) { device_printf(dev, "cannot create regulator\n"); return (NULL); } reg_sc = regnode_get_softc(regnode); reg_sc->regnode = regnode; reg_sc->base_dev = dev; reg_sc->def = def; reg_sc->xref = OF_xref_from_node(node); reg_sc->param = regnode_get_stdparam(regnode); regnode_register(regnode); return (reg_sc); } static int axp8xx_regdev_map(device_t dev, phandle_t xref, int ncells, pcell_t *cells, intptr_t *num) { struct axp8xx_softc *sc; int i; sc = device_get_softc(dev); for (i = 0; i < sc->nregs; i++) { if (sc->regs[i] == NULL) continue; if (sc->regs[i]->xref == xref) { *num = sc->regs[i]->def->id; return (0); } } return (ENXIO); } static int axp8xx_probe(device_t dev) { if (!ofw_bus_status_okay(dev)) return (ENXIO); switch (ofw_bus_search_compatible(dev, compat_data)->ocd_data) { case AXP803: device_set_desc(dev, "X-Powers AXP803 Power Management Unit"); break; case AXP813: device_set_desc(dev, "X-Powers AXP813 Power Management Unit"); break; default: return (ENXIO); } return (BUS_PROBE_DEFAULT); } 
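/*
 * Editor's illustrative sketch, not part of the committed change: the
 * helpers below (hypothetical names, compiled out) show how the
 * table-driven voltage encoding above and the CHARGERCTL1 current field
 * map to physical units.
 */
#if 0
/*
 * A voltage register value selects voltage_min plus up to voltage_nstep1
 * steps of voltage_step1 mV, then up to voltage_nstep2 further steps of
 * voltage_step2 mV, mirroring axp8xx_regnode_reg_to_voltage(). E.g. for
 * dcdc2 (min 500, 70 x 10 mV, then 5 x 20 mV): 0 -> 500 mV,
 * 70 -> 1200 mV, 75 -> 1300 mV.
 */
static int
axp8xx_example_reg_to_mv(int val, int min, int step1, int nstep1, int step2)
{
	if (val < nstep1)
		return (min + val * step1);
	return (min + nstep1 * step1 + (val - nstep1) * step2);
}

/*
 * The AXP_CHARGERCTL1_CMASK field selects the target charge current in
 * 200 mA increments starting at 200 mA, so the legal steps 0..13 span
 * 200..2800 mA; this matches the sysctl help string added below, e.g.
 * "sysctl dev.axp8xx_pmu.0.batchargecurrentstep=4" would request 1000 mA.
 */
static int
axp8xx_example_chargestep_to_ma(int step)
{
	return ((step + 1) * 200);
}
#endif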
static int
axp8xx_attach(device_t dev)
{
	struct axp8xx_softc *sc;
	struct axp8xx_reg_sc *reg;
	uint8_t chip_id, val;
	phandle_t rnode, child;
	int error, i;

	sc = device_get_softc(dev);
	sc->addr = iicbus_get_addr(dev);
	mtx_init(&sc->mtx, device_get_nameunit(dev), NULL, MTX_DEF);
	error = bus_alloc_resources(dev, axp8xx_spec, &sc->res);
	if (error != 0) {
		device_printf(dev, "cannot allocate resources for device\n");
		return (error);
	}
	if (bootverbose) {
		axp8xx_read(dev, AXP_ICTYPE, &chip_id, 1);
		device_printf(dev, "chip ID 0x%02x\n", chip_id);
	}
	sc->nregs = nitems(axp8xx_common_regdefs);
	sc->type = ofw_bus_search_compatible(dev, compat_data)->ocd_data;
	switch (sc->type) {
	case AXP803:
		sc->nregs += nitems(axp803_regdefs);
		break;
	case AXP813:
		sc->nregs += nitems(axp813_regdefs);
		break;
	}
	sc->config = &axp803_config;
	sc->sensors = axp8xx_common_sensors;
	sc->nsensors = nitems(axp8xx_common_sensors);
	sc->regs = malloc(sizeof(struct axp8xx_reg_sc *) * sc->nregs,
	    M_AXP8XX_REG, M_WAITOK | M_ZERO);
	/* Attach known regulators that exist in the DT */
	rnode = ofw_bus_find_child(ofw_bus_get_node(dev), "regulators");
	if (rnode > 0) {
		for (i = 0; i < sc->nregs; i++) {
			char *regname;
			struct axp8xx_regdef *regdef;
			if (i < nitems(axp8xx_common_regdefs)) {
				regname = axp8xx_common_regdefs[i].name;
				regdef = &axp8xx_common_regdefs[i];
			} else {
				int off;
				off = i - nitems(axp8xx_common_regdefs);
				switch (sc->type) {
				case AXP803:
					regname = axp803_regdefs[off].name;
					regdef = &axp803_regdefs[off];
					break;
				case AXP813:
					regname = axp813_regdefs[off].name;
					regdef = &axp813_regdefs[off];
					break;
				}
			}
			child = ofw_bus_find_child(rnode, regname);
			if (child == 0)
				continue;
			reg = axp8xx_reg_attach(dev, child, regdef);
			if (reg == NULL) {
				device_printf(dev,
				    "cannot attach regulator %s\n", regname);
				continue;
			}
			sc->regs[i] = reg;
		}
	}
	/* Add sensors */
	for (i = 0; i < sc->nsensors; i++) {
		SYSCTL_ADD_PROC(device_get_sysctl_ctx(dev),
		    SYSCTL_CHILDREN(device_get_sysctl_tree(dev)),
		    OID_AUTO, sc->sensors[i].name,
		    CTLTYPE_INT | CTLFLAG_RD,
		    dev, sc->sensors[i].id, axp8xx_sysctl,
		    sc->sensors[i].format,
		    sc->sensors[i].desc);
	}
+	SYSCTL_ADD_PROC(device_get_sysctl_ctx(dev),
+	    SYSCTL_CHILDREN(device_get_sysctl_tree(dev)),
+	    OID_AUTO, "batchargecurrentstep",
+	    CTLTYPE_INT | CTLFLAG_RW,
+	    dev, 0, axp8xx_sysctl_chargecurrent,
+	    "I", "Battery Charging Current Step, "
+	    "0: 200mA, 1: 400mA, 2: 600mA, 3: 800mA, "
+	    "4: 1000mA, 5: 1200mA, 6: 1400mA, 7: 1600mA, "
+	    "8: 1800mA, 9: 2000mA, 10: 2200mA, 11: 2400mA, "
+	    "12: 2600mA, 13: 2800mA");
	/* Get thresholds */
	if (axp8xx_read(dev, AXP_BAT_CAP_WARN, &val, 1) == 0) {
		sc->warn_thres = (val & AXP_BAT_CAP_WARN_LV1) >> 4;
		sc->shut_thres = (val & AXP_BAT_CAP_WARN_LV2);
		if (bootverbose) {
			device_printf(dev, "Raw reg val: 0x%02x\n", val);
			device_printf(dev, "Warning threshold: 0x%02x\n",
			    sc->warn_thres);
			device_printf(dev, "Shutdown threshold: 0x%02x\n",
			    sc->shut_thres);
		}
	}
	/* Enable interrupts */
	axp8xx_write(dev, AXP_IRQEN1,
	    AXP_IRQEN1_VBUS_LO | AXP_IRQEN1_VBUS_HI |
	    AXP_IRQEN1_ACIN_LO | AXP_IRQEN1_ACIN_HI);
	axp8xx_write(dev, AXP_IRQEN2,
	    AXP_IRQEN2_BATCHGD | AXP_IRQEN2_BATCHGC |
	    AXP_IRQEN2_BAT_NO | AXP_IRQEN2_BAT_IN);
	axp8xx_write(dev, AXP_IRQEN3, 0);
	axp8xx_write(dev, AXP_IRQEN4,
	    AXP_IRQEN4_BATLVL_LO0 | AXP_IRQEN4_BATLVL_LO1);
	axp8xx_write(dev, AXP_IRQEN5,
	    AXP_IRQEN5_POKSIRQ | AXP_IRQEN5_POKLIRQ);
	axp8xx_write(dev, AXP_IRQEN6, 0);
	/* Install interrupt handler */
	error = bus_setup_intr(dev, sc->res, INTR_TYPE_MISC | INTR_MPSAFE,
	    NULL, axp8xx_intr, dev, &sc->ih);
	if (error != 0) {
		device_printf(dev, "cannot setup
interrupt handler\n"); return (error); } EVENTHANDLER_REGISTER(shutdown_final, axp8xx_shutdown, dev, SHUTDOWN_PRI_LAST); sc->gpiodev = gpiobus_attach_bus(dev); return (0); } static device_method_t axp8xx_methods[] = { /* Device interface */ DEVMETHOD(device_probe, axp8xx_probe), DEVMETHOD(device_attach, axp8xx_attach), /* GPIO interface */ DEVMETHOD(gpio_get_bus, axp8xx_gpio_get_bus), DEVMETHOD(gpio_pin_max, axp8xx_gpio_pin_max), DEVMETHOD(gpio_pin_getname, axp8xx_gpio_pin_getname), DEVMETHOD(gpio_pin_getcaps, axp8xx_gpio_pin_getcaps), DEVMETHOD(gpio_pin_getflags, axp8xx_gpio_pin_getflags), DEVMETHOD(gpio_pin_setflags, axp8xx_gpio_pin_setflags), DEVMETHOD(gpio_pin_get, axp8xx_gpio_pin_get), DEVMETHOD(gpio_pin_set, axp8xx_gpio_pin_set), DEVMETHOD(gpio_pin_toggle, axp8xx_gpio_pin_toggle), DEVMETHOD(gpio_map_gpios, axp8xx_gpio_map_gpios), /* Regdev interface */ DEVMETHOD(regdev_map, axp8xx_regdev_map), /* OFW bus interface */ DEVMETHOD(ofw_bus_get_node, axp8xx_get_node), DEVMETHOD_END }; static driver_t axp8xx_driver = { "axp8xx_pmu", axp8xx_methods, sizeof(struct axp8xx_softc), }; static devclass_t axp8xx_devclass; extern devclass_t ofwgpiobus_devclass, gpioc_devclass; extern driver_t ofw_gpiobus_driver, gpioc_driver; EARLY_DRIVER_MODULE(axp8xx, iicbus, axp8xx_driver, axp8xx_devclass, 0, 0, BUS_PASS_INTERRUPT + BUS_PASS_ORDER_LAST); EARLY_DRIVER_MODULE(ofw_gpiobus, axp8xx_pmu, ofw_gpiobus_driver, ofwgpiobus_devclass, 0, 0, BUS_PASS_INTERRUPT + BUS_PASS_ORDER_LAST); DRIVER_MODULE(gpioc, axp8xx_pmu, gpioc_driver, gpioc_devclass, 0, 0); MODULE_VERSION(axp8xx, 1); MODULE_DEPEND(axp8xx, iicbus, 1, 1, 1); Index: projects/import-googletest-1.8.1/sys/arm/arm/elf_machdep.c =================================================================== --- projects/import-googletest-1.8.1/sys/arm/arm/elf_machdep.c (revision 344270) +++ projects/import-googletest-1.8.1/sys/arm/arm/elf_machdep.c (revision 344271) @@ -1,320 +1,320 @@ /*- * SPDX-License-Identifier: BSD-2-Clause-FreeBSD * * Copyright 1996-1998 John D. Polstra. * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. * IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
*/
#include
__FBSDID("$FreeBSD$");
#include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include
#ifdef VFP
#include
#endif

static boolean_t elf32_arm_abi_supported(struct image_params *);

u_long elf_hwcap;
u_long elf_hwcap2;

struct sysentvec elf32_freebsd_sysvec = {
	.sv_size = SYS_MAXSYSCALL,
	.sv_table = sysent,
	.sv_errsize = 0,
	.sv_errtbl = NULL,
	.sv_transtrap = NULL,
	.sv_fixup = __elfN(freebsd_fixup),
	.sv_sendsig = sendsig,
	.sv_sigcode = sigcode,
	.sv_szsigcode = &szsigcode,
	.sv_name = "FreeBSD ELF32",
	.sv_coredump = __elfN(coredump),
	.sv_imgact_try = NULL,
	.sv_minsigstksz = MINSIGSTKSZ,
	.sv_pagesize = PAGE_SIZE,
	.sv_minuser = VM_MIN_ADDRESS,
	.sv_maxuser = VM_MAXUSER_ADDRESS,
	.sv_usrstack = USRSTACK,
	.sv_psstrings = PS_STRINGS,
	.sv_stackprot = VM_PROT_ALL,
	.sv_copyout_strings = exec_copyout_strings,
	.sv_setregs = exec_setregs,
	.sv_fixlimit = NULL,
	.sv_maxssiz = NULL,
	.sv_flags =
#if __ARM_ARCH >= 6
	    SV_ASLR | SV_SHP | SV_TIMEKEEP |
#endif
-	    SV_ABI_FREEBSD | SV_ILP32,
+	    SV_ABI_FREEBSD | SV_ILP32 | SV_ASLR,
	.sv_set_syscall_retval = cpu_set_syscall_retval,
	.sv_fetch_syscall_args = cpu_fetch_syscall_args,
	.sv_syscallnames = syscallnames,
	.sv_shared_page_base = SHAREDPAGE,
	.sv_shared_page_len = PAGE_SIZE,
	.sv_schedtail = NULL,
	.sv_thread_detach = NULL,
	.sv_trap = NULL,
	.sv_hwcap = &elf_hwcap,
	.sv_hwcap2 = &elf_hwcap2,
};
INIT_SYSENTVEC(elf32_sysvec, &elf32_freebsd_sysvec);

static Elf32_Brandinfo freebsd_brand_info = {
	.brand = ELFOSABI_FREEBSD,
	.machine = EM_ARM,
	.compat_3_brand = "FreeBSD",
	.emul_path = NULL,
	.interp_path = "/libexec/ld-elf.so.1",
	.sysvec = &elf32_freebsd_sysvec,
	.interp_newpath = NULL,
	.brand_note = &elf32_freebsd_brandnote,
	.flags = BI_CAN_EXEC_DYN | BI_BRAND_NOTE,
	.header_supported= elf32_arm_abi_supported,
};
SYSINIT(elf32, SI_SUB_EXEC, SI_ORDER_FIRST,
    (sysinit_cfunc_t) elf32_insert_brand_entry, &freebsd_brand_info);

static boolean_t
elf32_arm_abi_supported(struct image_params *imgp)
{
	const Elf_Ehdr *hdr = (const Elf_Ehdr *)imgp->image_header;

	/*
	 * When configured for EABI, FreeBSD supports EABI versions 4 and 5.
	 */
	if (EF_ARM_EABI_VERSION(hdr->e_flags) < EF_ARM_EABI_FREEBSD_MIN) {
		if (bootverbose)
			uprintf("Attempting to execute non-EABI binary (rev %d) image %s",
			    EF_ARM_EABI_VERSION(hdr->e_flags), imgp->args->fname);
		return (FALSE);
	}
	return (TRUE);
}

void
elf32_dump_thread(struct thread *td, void *dst, size_t *off)
{
#ifdef VFP
	mcontext_vfp_t vfp;

	if (dst != NULL) {
		get_vfpcontext(td, &vfp);
		*off = elf32_populate_note(NT_ARM_VFP, &vfp, dst,
		    sizeof(vfp), NULL);
	} else
		*off = elf32_populate_note(NT_ARM_VFP, NULL, NULL,
		    sizeof(vfp), NULL);
#endif
}

bool
elf_is_ifunc_reloc(Elf_Size r_info __unused)
{
	return (false);
}

/*
 * It is possible for the compiler to emit relocations for unaligned data.
 * We handle this situation with these inlines.
 */
#define RELOC_ALIGNED_P(x) \
	(((uintptr_t)(x) & (sizeof(void *) - 1)) == 0)

static __inline Elf_Addr
load_ptr(Elf_Addr *where)
{
	Elf_Addr res;

	if (RELOC_ALIGNED_P(where))
		return *where;
	memcpy(&res, where, sizeof(res));
	return (res);
}

static __inline void
store_ptr(Elf_Addr *where, Elf_Addr val)
{
	if (RELOC_ALIGNED_P(where))
		*where = val;
	else
		memcpy(where, &val, sizeof(val));
}
#undef RELOC_ALIGNED_P

/* Process one elf relocation with addend.
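 * For ELF_RELOC_REL entries the addend is read from the word being
 * relocated, via load_ptr() above so that unaligned targets are handled;
 * ELF_RELOC_RELA entries carry an explicit r_addend instead. Both forms
 * are normalized to (rtype, symidx, addend) before the type switch below.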
*/ static int elf_reloc_internal(linker_file_t lf, Elf_Addr relocbase, const void *data, int type, int local, elf_lookup_fn lookup) { Elf_Addr *where; Elf_Addr addr; Elf_Addr addend; Elf_Word rtype, symidx; const Elf_Rel *rel; const Elf_Rela *rela; int error; switch (type) { case ELF_RELOC_REL: rel = (const Elf_Rel *)data; where = (Elf_Addr *) (relocbase + rel->r_offset); addend = load_ptr(where); rtype = ELF_R_TYPE(rel->r_info); symidx = ELF_R_SYM(rel->r_info); break; case ELF_RELOC_RELA: rela = (const Elf_Rela *)data; where = (Elf_Addr *) (relocbase + rela->r_offset); addend = rela->r_addend; rtype = ELF_R_TYPE(rela->r_info); symidx = ELF_R_SYM(rela->r_info); break; default: panic("unknown reloc type %d\n", type); } if (local) { if (rtype == R_ARM_RELATIVE) { /* A + B */ addr = elf_relocaddr(lf, relocbase + addend); if (load_ptr(where) != addr) store_ptr(where, addr); } return (0); } switch (rtype) { case R_ARM_NONE: /* none */ break; case R_ARM_ABS32: error = lookup(lf, symidx, 1, &addr); if (error != 0) return -1; store_ptr(where, addr + load_ptr(where)); break; case R_ARM_COPY: /* none */ /* * There shouldn't be copy relocations in kernel * objects. */ printf("kldload: unexpected R_COPY relocation\n"); return -1; break; case R_ARM_JUMP_SLOT: error = lookup(lf, symidx, 1, &addr); if (error == 0) { store_ptr(where, addr); return (0); } return (-1); case R_ARM_RELATIVE: break; default: printf("kldload: unexpected relocation type %d\n", rtype); return -1; } return(0); } int elf_reloc(linker_file_t lf, Elf_Addr relocbase, const void *data, int type, elf_lookup_fn lookup) { return (elf_reloc_internal(lf, relocbase, data, type, 0, lookup)); } int elf_reloc_local(linker_file_t lf, Elf_Addr relocbase, const void *data, int type, elf_lookup_fn lookup) { return (elf_reloc_internal(lf, relocbase, data, type, 1, lookup)); } int elf_cpu_load_file(linker_file_t lf) { /* * The pmap code does not do an icache sync upon establishing executable * mappings in the kernel pmap. It's an optimization based on the fact * that kernel memory allocations always have EXECUTABLE protection even * when the memory isn't going to hold executable code. The only time * kernel memory holding instructions does need a sync is after loading * a kernel module, and that's when this function gets called. * * This syncs data and instruction caches after loading a module. We * don't worry about the kernel itself (lf->id is 1) as locore.S did * that on entry. Even if data cache maintenance was done by IO code, * the relocation fixup process creates dirty cache entries that we must * write back before doing icache sync. The instruction cache sync also * invalidates the branch predictor cache on platforms that have one. 
*/ if (lf->id == 1) return (0); #if __ARM_ARCH >= 6 dcache_wb_pou((vm_offset_t)lf->address, (vm_size_t)lf->size); icache_inv_all(); #else cpu_dcache_wb_range((vm_offset_t)lf->address, (vm_size_t)lf->size); cpu_l2cache_wb_range((vm_offset_t)lf->address, (vm_size_t)lf->size); cpu_icache_sync_range((vm_offset_t)lf->address, (vm_size_t)lf->size); #endif return (0); } int elf_cpu_unload_file(linker_file_t lf __unused) { return (0); } Index: projects/import-googletest-1.8.1/sys/arm/freescale/imx/imx6_snvs.c =================================================================== --- projects/import-googletest-1.8.1/sys/arm/freescale/imx/imx6_snvs.c (revision 344270) +++ projects/import-googletest-1.8.1/sys/arm/freescale/imx/imx6_snvs.c (revision 344271) @@ -1,239 +1,240 @@ /*- * Copyright (c) 2017 Ian Lepore * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ #include __FBSDID("$FreeBSD$"); /* * Driver for imx6 Secure Non-Volatile Storage system, which really means "all * the stuff that's powered by a battery when main power is off". This includes * realtime clock, tamper monitor, and power-management functions. Currently * this driver provides only realtime clock support. */ #include #include #include #include #include #include #include #include #include "clock_if.h" #define SNVS_LPCR 0x38 /* Control register */ #define LPCR_LPCALB_VAL_SHIFT 10 /* Calibration shift */ #define LPCR_LPCALB_VAL_MASK 0x1f /* Calibration mask */ #define LPCR_LPCALB_EN (1u << 8) /* Calibration enable */ #define LPCR_SRTC_ENV (1u << 0) /* RTC enabled/valid */ #define SNVS_LPSRTCMR 0x50 /* Counter MSB */ #define SNVS_LPSRTCLR 0x54 /* Counter LSB */ #define RTC_RESOLUTION_US (1000000 / 32768) /* 32khz clock */ /* * The RTC is a 47-bit counter clocked at 32KHz and organized as a 32.15 * fixed-point binary value. Shifting by SBT_LSB bits translates between * counter and sbintime values. 
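 * As a worked example: one second is 2^15 counter units at the 32768 Hz
 * clock, and with SBT_LSB = 64 - 47 = 17 the shift yields
 * 2^15 << 17 = 2^32, which is exactly 1.0 second in the kernel's 32.32
 * sbintime format.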
*/ #define RTC_BITS 47 #define SBT_BITS 64 #define SBT_LSB (SBT_BITS - RTC_BITS) struct snvs_softc { device_t dev; struct resource * memres; uint32_t lpcr; }; static struct ofw_compat_data compat_data[] = { + {"fsl,sec-v4.0-mon-rtc-lp", true}, {"fsl,sec-v4.0-mon", true}, {NULL, false} }; static inline uint32_t RD4(struct snvs_softc *sc, bus_size_t offset) { return (bus_read_4(sc->memres, offset)); } static inline void WR4(struct snvs_softc *sc, bus_size_t offset, uint32_t value) { bus_write_4(sc->memres, offset, value); } static void snvs_rtc_enable(struct snvs_softc *sc, bool enable) { uint32_t enbit; if (enable) sc->lpcr |= LPCR_SRTC_ENV; else sc->lpcr &= ~LPCR_SRTC_ENV; WR4(sc, SNVS_LPCR, sc->lpcr); /* Wait for the hardware to achieve the requested state. */ enbit = sc->lpcr & LPCR_SRTC_ENV; while ((RD4(sc, SNVS_LPCR) & LPCR_SRTC_ENV) != enbit) continue; } static int snvs_gettime(device_t dev, struct timespec *ts) { struct snvs_softc *sc; sbintime_t counter1, counter2; sc = device_get_softc(dev); /* If the clock is not enabled and valid, we can't help. */ if (!(RD4(sc, SNVS_LPCR) & LPCR_SRTC_ENV)) { return (EINVAL); } /* * The counter is clocked asynchronously to cpu accesses; read and * assemble the pieces of the counter until we get the same value twice. * The counter is 47 bits, organized as a 32.15 binary fixed-point * value. If we shift it up to the high order part of a 64-bit word it * turns into an sbintime. */ do { counter1 = (uint64_t)RD4(sc, SNVS_LPSRTCMR) << (SBT_LSB + 32); counter1 |= (uint64_t)RD4(sc, SNVS_LPSRTCLR) << (SBT_LSB); counter2 = (uint64_t)RD4(sc, SNVS_LPSRTCMR) << (SBT_LSB + 32); counter2 |= (uint64_t)RD4(sc, SNVS_LPSRTCLR) << (SBT_LSB); } while (counter1 != counter2); *ts = sbttots(counter1); clock_dbgprint_ts(sc->dev, CLOCK_DBG_READ, ts); return (0); } static int snvs_settime(device_t dev, struct timespec *ts) { struct snvs_softc *sc; sbintime_t sbt; sc = device_get_softc(dev); /* * The hardware format is the same as sbt (with fewer fractional bits), * so first convert the time to sbt. It takes two clock cycles for the * counter to start after setting the enable bit, so add two SBT_LSBs to * what we're about to set. 
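 * (At 32768 Hz, two counter LSBs equal 2 << SBT_LSB in sbt units, or
 * roughly 61 microseconds of compensation.)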
*/ sbt = tstosbt(*ts); sbt += 2 << SBT_LSB; snvs_rtc_enable(sc, false); WR4(sc, SNVS_LPSRTCMR, (uint32_t)(sbt >> (SBT_LSB + 32))); WR4(sc, SNVS_LPSRTCLR, (uint32_t)(sbt >> (SBT_LSB))); snvs_rtc_enable(sc, true); clock_dbgprint_ts(sc->dev, CLOCK_DBG_WRITE, ts); return (0); } static int snvs_probe(device_t dev) { if (!ofw_bus_status_okay(dev)) return (ENXIO); if (!ofw_bus_search_compatible(dev, compat_data)->ocd_data) return (ENXIO); device_set_desc(dev, "i.MX6 SNVS RTC"); return (BUS_PROBE_DEFAULT); } static int snvs_attach(device_t dev) { struct snvs_softc *sc; int rid; sc = device_get_softc(dev); sc->dev = dev; rid = 0; sc->memres = bus_alloc_resource_any(sc->dev, SYS_RES_MEMORY, &rid, RF_ACTIVE); if (sc->memres == NULL) { device_printf(sc->dev, "could not allocate registers\n"); return (ENXIO); } clock_register(sc->dev, RTC_RESOLUTION_US); return (0); } static int snvs_detach(device_t dev) { struct snvs_softc *sc; sc = device_get_softc(dev); clock_unregister(sc->dev); bus_release_resource(sc->dev, SYS_RES_MEMORY, 0, sc->memres); return (0); } static device_method_t snvs_methods[] = { DEVMETHOD(device_probe, snvs_probe), DEVMETHOD(device_attach, snvs_attach), DEVMETHOD(device_detach, snvs_detach), /* clock_if methods */ DEVMETHOD(clock_gettime, snvs_gettime), DEVMETHOD(clock_settime, snvs_settime), DEVMETHOD_END }; static driver_t snvs_driver = { "snvs", snvs_methods, sizeof(struct snvs_softc), }; static devclass_t snvs_devclass; DRIVER_MODULE(snvs, simplebus, snvs_driver, snvs_devclass, 0, 0); SIMPLEBUS_PNP_INFO(compat_data); Index: projects/import-googletest-1.8.1/sys/arm64/arm64/elf_machdep.c =================================================================== --- projects/import-googletest-1.8.1/sys/arm64/arm64/elf_machdep.c (revision 344270) +++ projects/import-googletest-1.8.1/sys/arm64/arm64/elf_machdep.c (revision 344271) @@ -1,214 +1,215 @@ /*- * Copyright (c) 2014, 2015 The FreeBSD Foundation. * Copyright (c) 2014 Andrew Turner. * All rights reserved. * * This software was developed by Andrew Turner under * sponsorship from the FreeBSD Foundation. * * Portions of this software were developed by Konstantin Belousov * under sponsorship from the FreeBSD Foundation. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. 
*/ #include __FBSDID("$FreeBSD$"); #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include "linker_if.h" static struct sysentvec elf64_freebsd_sysvec = { .sv_size = SYS_MAXSYSCALL, .sv_table = sysent, .sv_errsize = 0, .sv_errtbl = NULL, .sv_transtrap = NULL, .sv_fixup = __elfN(freebsd_fixup), .sv_sendsig = sendsig, .sv_sigcode = sigcode, .sv_szsigcode = &szsigcode, .sv_name = "FreeBSD ELF64", .sv_coredump = __elfN(coredump), .sv_imgact_try = NULL, .sv_minsigstksz = MINSIGSTKSZ, .sv_pagesize = PAGE_SIZE, .sv_minuser = VM_MIN_ADDRESS, .sv_maxuser = VM_MAXUSER_ADDRESS, .sv_usrstack = USRSTACK, .sv_psstrings = PS_STRINGS, .sv_stackprot = VM_PROT_READ | VM_PROT_WRITE, .sv_copyout_strings = exec_copyout_strings, .sv_setregs = exec_setregs, .sv_fixlimit = NULL, .sv_maxssiz = NULL, - .sv_flags = SV_SHP | SV_TIMEKEEP | SV_ABI_FREEBSD | SV_LP64, + .sv_flags = SV_SHP | SV_TIMEKEEP | SV_ABI_FREEBSD | SV_LP64 | + SV_ASLR, .sv_set_syscall_retval = cpu_set_syscall_retval, .sv_fetch_syscall_args = cpu_fetch_syscall_args, .sv_syscallnames = syscallnames, .sv_shared_page_base = SHAREDPAGE, .sv_shared_page_len = PAGE_SIZE, .sv_schedtail = NULL, .sv_thread_detach = NULL, .sv_trap = NULL, }; INIT_SYSENTVEC(elf64_sysvec, &elf64_freebsd_sysvec); static Elf64_Brandinfo freebsd_brand_info = { .brand = ELFOSABI_FREEBSD, .machine = EM_AARCH64, .compat_3_brand = "FreeBSD", .emul_path = NULL, .interp_path = "/libexec/ld-elf.so.1", .sysvec = &elf64_freebsd_sysvec, .interp_newpath = NULL, .brand_note = &elf64_freebsd_brandnote, .flags = BI_CAN_EXEC_DYN | BI_BRAND_NOTE }; SYSINIT(elf64, SI_SUB_EXEC, SI_ORDER_FIRST, (sysinit_cfunc_t)elf64_insert_brand_entry, &freebsd_brand_info); void elf64_dump_thread(struct thread *td __unused, void *dst __unused, size_t *off __unused) { } bool elf_is_ifunc_reloc(Elf_Size r_info __unused) { return (ELF_R_TYPE(r_info) == R_AARCH64_IRELATIVE); } static int elf_reloc_internal(linker_file_t lf, Elf_Addr relocbase, const void *data, int type, int local, elf_lookup_fn lookup) { Elf_Addr *where, addr, addend, val; Elf_Word rtype, symidx; const Elf_Rel *rel; const Elf_Rela *rela; int error; switch (type) { case ELF_RELOC_REL: rel = (const Elf_Rel *)data; where = (Elf_Addr *) (relocbase + rel->r_offset); addend = *where; rtype = ELF_R_TYPE(rel->r_info); symidx = ELF_R_SYM(rel->r_info); break; case ELF_RELOC_RELA: rela = (const Elf_Rela *)data; where = (Elf_Addr *) (relocbase + rela->r_offset); addend = rela->r_addend; rtype = ELF_R_TYPE(rela->r_info); symidx = ELF_R_SYM(rela->r_info); break; default: panic("unknown reloc type %d\n", type); } if (local) { if (rtype == R_AARCH64_RELATIVE) *where = elf_relocaddr(lf, relocbase + addend); return (0); } switch (rtype) { case R_AARCH64_NONE: case R_AARCH64_RELATIVE: break; case R_AARCH64_ABS64: case R_AARCH64_GLOB_DAT: case R_AARCH64_JUMP_SLOT: error = lookup(lf, symidx, 1, &addr); if (error != 0) return (-1); *where = addr + addend; break; case R_AARCH64_IRELATIVE: addr = relocbase + addend; val = ((Elf64_Addr (*)(void))addr)(); if (*where != val) *where = val; break; default: printf("kldload: unexpected relocation type %d\n", rtype); return (-1); } return (0); } int elf_reloc_local(linker_file_t lf, Elf_Addr relocbase, const void *data, int type, elf_lookup_fn lookup) { return (elf_reloc_internal(lf, relocbase, data, type, 1, lookup)); } /* Process one elf relocation with addend. 
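 * Unlike the 32-bit arm handler, elf_reloc_internal() above also handles
 * R_AARCH64_IRELATIVE: relocbase + addend locates an ifunc resolver
 * function, which is invoked and whose return value is stored as the
 * final relocated target.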
*/ int elf_reloc(linker_file_t lf, Elf_Addr relocbase, const void *data, int type, elf_lookup_fn lookup) { return (elf_reloc_internal(lf, relocbase, data, type, 0, lookup)); } int elf_cpu_load_file(linker_file_t lf) { if (lf->id != 1) cpu_icache_sync_range((vm_offset_t)lf->address, lf->size); return (0); } int elf_cpu_unload_file(linker_file_t lf __unused) { return (0); } Index: projects/import-googletest-1.8.1/sys/conf/NOTES =================================================================== --- projects/import-googletest-1.8.1/sys/conf/NOTES (revision 344270) +++ projects/import-googletest-1.8.1/sys/conf/NOTES (revision 344271) @@ -1,3010 +1,3002 @@ # $FreeBSD$ # # NOTES -- Lines that can be cut/pasted into kernel and hints configs. # # Lines that begin with 'device', 'options', 'machine', 'ident', 'maxusers', # 'makeoptions', 'hints', etc. go into the kernel configuration that you # run config(8) with. # # Lines that begin with 'hint.' are NOT for config(8), they go into your # hints file. See /boot/device.hints and/or the 'hints' config(8) directive. # # Please use ``make LINT'' to create an old-style LINT file if you want to # do kernel test-builds. # # This file contains machine independent kernel configuration notes. For # machine dependent notes, look in /sys//conf/NOTES. # # # NOTES conventions and style guide: # # Large block comments should begin and end with a line containing only a # comment character. # # To describe a particular object, a block comment (if it exists) should # come first. Next should come device, options, and hints lines in that # order. All device and option lines must be described by a comment that # doesn't just expand the device or option name. Use only a concise # comment on the same line if possible. Very detailed descriptions of # devices and subsystems belong in man pages. # # A space followed by a tab separates 'options' from an option name. Two # spaces followed by a tab separate 'device' from a device name. Comments # after an option or device should use one space after the comment character. # To comment out a negative option that disables code and thus should not be # enabled for LINT builds, precede 'options' with "#!". # # # This is the ``identification'' of the kernel. Usually this should # be the same as the name of your kernel. # ident LINT # # The `maxusers' parameter controls the static sizing of a number of # internal system tables by a formula defined in subr_param.c. # Omitting this parameter or setting it to 0 will cause the system to # auto-size based on physical memory. # maxusers 10 # To statically compile in device wiring instead of /boot/device.hints #hints "LINT.hints" # Default places to look for devices. # Use the following to compile in values accessible to the kernel # through getenv() (or kenv(1) in userland). The format of the file # is 'variable=value', see kenv(1) # #env "LINT.env" # # The `makeoptions' parameter allows variables to be passed to the # generated Makefile in the build area. # # CONF_CFLAGS gives some extra compiler flags that are added to ${CFLAGS} # after most other flags. Here we use it to inhibit use of non-optimal # gcc built-in functions (e.g., memcmp). # # DEBUG happens to be magic. # The following is equivalent to 'config -g KERNELNAME' and creates # 'kernel.debug' compiled with -g debugging as well as a normal # 'kernel'. Use 'make install.debug' to install the debug kernel # but that isn't normally necessary as the debug symbols are not loaded # by the kernel and are not useful there anyway. 
#
# KERNEL can be overridden so that you can change the default name of your
# kernel.
#
# MODULES_OVERRIDE can be used to limit modules built to a specific list.
#
makeoptions CONF_CFLAGS=-fno-builtin #Don't allow use of memcmp, etc.
#makeoptions DEBUG=-g #Build kernel with gdb(1) debug symbols
#makeoptions KERNEL=foo #Build kernel "foo" and install "/foo"
# Only build ext2fs module plus those parts of the sound system I need.
#makeoptions MODULES_OVERRIDE="ext2fs sound/sound sound/driver/maestro3"
makeoptions DESTDIR=/tmp
#
# FreeBSD processes are subject to certain limits to their consumption
# of system resources. See getrlimit(2) for more details. Each
# resource limit has two values, a "soft" limit and a "hard" limit.
# The soft limits can be modified during normal system operation, but
# the hard limits are set at boot time. Their default values are
# in sys//include/vmparam.h. There are two ways to change them:
#
# 1. Set the values at kernel build time. The options below are one
# way to allow that limit to grow to 1GB. They can be increased
# further by changing the parameters:
#
# 2. In /boot/loader.conf, set the tunables kern.maxswzone,
# kern.maxbcache, kern.maxtsiz, kern.dfldsiz, kern.maxdsiz,
# kern.dflssiz, kern.maxssiz and kern.sgrowsiz.
#
# The options in /boot/loader.conf override anything in the kernel
# configuration file. See the function init_param1 in
# sys/kern/subr_param.c for more details.
#
options MAXDSIZ=(1024UL*1024*1024)
options MAXSSIZ=(128UL*1024*1024)
options DFLDSIZ=(1024UL*1024*1024)
#
# BLKDEV_IOSIZE sets the default block size used in user block
# device I/O. Note that this value will be overridden by the label
# when specifying a block device from a label with a non-0
# partition blocksize. The default is PAGE_SIZE.
#
options BLKDEV_IOSIZE=8192
#
# MAXPHYS and DFLTPHYS
#
# These are the maximal and safe 'raw' I/O block device access sizes.
# Reads and writes will be split into MAXPHYS chunks for known good
# devices and DFLTPHYS for the rest. Some applications have better
# performance with larger raw I/O access sizes. Note that certain VM
# parameters are derived from these values and making them too large
# can make an unbootable kernel.
#
# The defaults are 64K and 128K respectively.
options DFLTPHYS=(64*1024)
options MAXPHYS=(128*1024)
# This allows you to actually store this configuration file into
# the kernel binary itself. See config(8) for more details.
#
options INCLUDE_CONFIG_FILE # Include this file in kernel
#
# Compile-time defaults for various boot parameters
#
options BOOTVERBOSE=1
options BOOTHOWTO=RB_MULTIPLE
#
# Compile-time defaults for dmesg boot tagging
#
# Default boot tag; may use 'kern.boot_tag' loader tunable to override. The
# current boot's tag is also exposed via the 'kern.boot_tag' sysctl.
options BOOT_TAG=\"---<>---\"
# Maximum boot tag size the kernel's static buffer should accommodate. Maximum
# size for both BOOT_TAG and the associated tunable.
options BOOT_TAG_SZ=32
options GEOM_BDE # Disk encryption.
options GEOM_BSD # BSD disklabels (obsolete, gone in 12)
options GEOM_CACHE # Disk cache.
options GEOM_CONCAT # Disk concatenation.
options GEOM_ELI # Disk encryption.
options GEOM_FOX # Redundant path mitigation (obsolete, gone in 12)
options GEOM_GATE # Userland services.
options GEOM_JOURNAL # Journaling.
options GEOM_LABEL # Providers labelization.
options GEOM_LINUX_LVM # Linux LVM2 volumes options GEOM_MAP # Map based partitioning options GEOM_MBR # DOS/MBR partitioning (obsolete, gone in 12) options GEOM_MIRROR # Disk mirroring. options GEOM_MULTIPATH # Disk multipath options GEOM_NOP # Test class. options GEOM_PART_APM # Apple partitioning options GEOM_PART_BSD # BSD disklabel options GEOM_PART_BSD64 # BSD disklabel64 options GEOM_PART_EBR # Extended Boot Records options GEOM_PART_EBR_COMPAT # Backward compatible partition names options GEOM_PART_GPT # GPT partitioning options GEOM_PART_LDM # Logical Disk Manager options GEOM_PART_MBR # MBR partitioning options GEOM_PART_VTOC8 # SMI VTOC8 disk label options GEOM_RAID # Soft RAID functionality. options GEOM_RAID3 # RAID3 functionality. options GEOM_SHSEC # Shared secret. options GEOM_STRIPE # Disk striping. options GEOM_SUNLABEL # Sun/Solaris partitioning (obsolete, gone in 12) options GEOM_UZIP # Read-only compressed disks options GEOM_VINUM # Vinum logical volume manager options GEOM_VIRSTOR # Virtual storage. options GEOM_VOL # Volume names from UFS superblock (obsolete, gone in 12) options GEOM_ZERO # Performance testing helper. # # The root device and filesystem type can be compiled in; # this provides a fallback option if the root device cannot # be correctly guessed by the bootstrap code, or an override if # the RB_DFLTROOT flag (-r) is specified when booting the kernel. # options ROOTDEVNAME=\"ufs:da0s2e\" ##################################################################### # Scheduler options: # # Specifying one of SCHED_4BSD or SCHED_ULE is mandatory. These options # select which scheduler is compiled in. # # SCHED_4BSD is the historical, proven, BSD scheduler. It has a global run # queue and no CPU affinity which makes it suboptimal for SMP. It has very # good interactivity and priority selection. # # SCHED_ULE provides significant performance advantages over 4BSD on many # workloads on SMP machines. It supports cpu-affinity, per-cpu runqueues # and scheduler locks. It also has a stronger notion of interactivity # which leads to better responsiveness even on uniprocessor machines. This # is the default scheduler. # # SCHED_STATS is a debugging option which keeps some stats in the sysctl # tree at 'kern.sched.stats' and is useful for debugging scheduling decisions. # options SCHED_4BSD options SCHED_STATS #options SCHED_ULE ##################################################################### # SMP OPTIONS: # # SMP enables building of a Symmetric MultiProcessor Kernel. # Mandatory: options SMP # Symmetric MultiProcessor Kernel # EARLY_AP_STARTUP releases the Application Processors earlier in the # kernel startup process (before devices are probed) rather than at the # end. This is a temporary option for use during the transition from # late to early AP startup. options EARLY_AP_STARTUP # MAXCPU defines the maximum number of CPUs that can boot in the system. # A default value should be already present, for every architecture. options MAXCPU=32 # NUMA enables use of Non-Uniform Memory Access policies in various kernel # subsystems. options NUMA # MAXMEMDOM defines the maximum number of memory domains that can boot in the # system. A default value should already be defined by every architecture. options MAXMEMDOM=2 # ADAPTIVE_MUTEXES changes the behavior of blocking mutexes to spin # if the thread that currently owns the mutex is executing on another # CPU. This behavior is enabled by default, so this option can be used # to disable it. 
options NO_ADAPTIVE_MUTEXES # ADAPTIVE_RWLOCKS changes the behavior of reader/writer locks to spin # if the thread that currently owns the rwlock is executing on another # CPU. This behavior is enabled by default, so this option can be used # to disable it. options NO_ADAPTIVE_RWLOCKS # ADAPTIVE_SX changes the behavior of sx locks to spin if the thread that # currently owns the sx lock is executing on another CPU. # This behavior is enabled by default, so this option can be used to # disable it. options NO_ADAPTIVE_SX # MUTEX_NOINLINE forces mutex operations to call functions to perform each # operation rather than inlining the simple cases. This can be used to # shrink the size of the kernel text segment. Note that this behavior is # already implied by the INVARIANT_SUPPORT, INVARIANTS, KTR, LOCK_PROFILING, # and WITNESS options. options MUTEX_NOINLINE # RWLOCK_NOINLINE forces rwlock operations to call functions to perform each # operation rather than inlining the simple cases. This can be used to # shrink the size of the kernel text segment. Note that this behavior is # already implied by the INVARIANT_SUPPORT, INVARIANTS, KTR, LOCK_PROFILING, # and WITNESS options. options RWLOCK_NOINLINE # SX_NOINLINE forces sx lock operations to call functions to perform each # operation rather than inlining the simple cases. This can be used to # shrink the size of the kernel text segment. Note that this behavior is # already implied by the INVARIANT_SUPPORT, INVARIANTS, KTR, LOCK_PROFILING, # and WITNESS options. options SX_NOINLINE # SMP Debugging Options: # # CALLOUT_PROFILING enables rudimentary profiling of the callwheel data # structure used as backend in callout(9). # PREEMPTION allows the threads that are in the kernel to be preempted by # higher priority [interrupt] threads. It helps with interactivity # and allows interrupt threads to run sooner rather than waiting. # WARNING! Only tested on amd64 and i386. # FULL_PREEMPTION instructs the kernel to preempt non-realtime kernel # threads. Its sole use is to expose race conditions and other # bugs during development. Enabling this option will reduce # performance and increase the frequency of kernel panics by # design. If you aren't sure that you need it then you don't. # Relies on the PREEMPTION option. DON'T TURN THIS ON. # SLEEPQUEUE_PROFILING enables rudimentary profiling of the hash table # used to hold active sleep queues as well as sleep wait message # frequency. # TURNSTILE_PROFILING enables rudimentary profiling of the hash table # used to hold active lock queues. # UMTX_PROFILING enables rudimentary profiling of the hash table used # to hold active lock queues. # WITNESS enables the witness code which detects deadlocks and cycles # during locking operations. # WITNESS_KDB causes the witness code to drop into the kernel debugger if # a lock hierarchy violation occurs or if locks are held when going to # sleep. # WITNESS_SKIPSPIN disables the witness checks on spin mutexes. options PREEMPTION options FULL_PREEMPTION options WITNESS options WITNESS_KDB options WITNESS_SKIPSPIN # LOCK_PROFILING - Profiling locks. See LOCK_PROFILING(9) for details. options LOCK_PROFILING # Set the number of buffers and the hash size. The hash size MUST be larger # than the number of buffers. Hash size should be prime. options MPROF_BUFFERS="1536" options MPROF_HASH_SIZE="1543" # Profiling for the callout(9) backend. options CALLOUT_PROFILING # Profiling for internal hash tables. 
options 	SLEEPQUEUE_PROFILING
options 	TURNSTILE_PROFILING
options 	UMTX_PROFILING

#####################################################################
# COMPATIBILITY OPTIONS

#
# Implement system calls compatible with 4.3BSD and older versions of
# FreeBSD.  You probably do NOT want to remove this as much current code
# still relies on the 4.3 emulation.  Note that some architectures that
# are supported by FreeBSD do not include support for certain important
# aspects of this compatibility option, namely those related to the
# signal delivery mechanism.
#
options 	COMPAT_43

# Old tty interface.
options 	COMPAT_43TTY

# Note that as a general rule, COMPAT_FREEBSD<n> depends on
# COMPAT_FREEBSD<n+1>, COMPAT_FREEBSD<n+2>, etc.

# Enable FreeBSD4 compatibility syscalls
options 	COMPAT_FREEBSD4

# Enable FreeBSD5 compatibility syscalls
options 	COMPAT_FREEBSD5

# Enable FreeBSD6 compatibility syscalls
options 	COMPAT_FREEBSD6

# Enable FreeBSD7 compatibility syscalls
options 	COMPAT_FREEBSD7

# Enable FreeBSD9 compatibility syscalls
options 	COMPAT_FREEBSD9

# Enable FreeBSD10 compatibility syscalls
options 	COMPAT_FREEBSD10

# Enable FreeBSD11 compatibility syscalls
options 	COMPAT_FREEBSD11

# Enable Linux Kernel Programming Interface
options 	COMPAT_LINUXKPI

#
# These three options provide support for System V Interface
# Definition-style interprocess communication, in the form of shared
# memory, semaphores, and message queues, respectively.
#
options 	SYSVSHM
options 	SYSVSEM
options 	SYSVMSG

#####################################################################
# DEBUGGING OPTIONS

#
# Compile with kernel debugger related code.
#
options 	KDB

#
# Print a stack trace of the current thread on the console for a panic.
#
options 	KDB_TRACE

#
# Don't enter the debugger for a panic.  Intended for unattended operation
# where you may want to enter the debugger from the console, but still want
# the machine to recover from a panic.
#
options 	KDB_UNATTENDED

#
# Enable the ddb debugger backend.
#
options 	DDB

#
# Print the numerical value of symbols in addition to the symbolic
# representation.
#
options 	DDB_NUMSYM

#
# Enable the remote gdb debugger backend.
#
options 	GDB

#
# SYSCTL_DEBUG enables a 'sysctl' debug tree that can be used to dump the
# contents of the registered sysctl nodes on the console.  It is disabled by
# default because it generates excessively verbose console output that can
# interfere with serial console operation.
#
options 	SYSCTL_DEBUG

#
# Enable textdump by default; this disables kernel core dumps.
#
options 	TEXTDUMP_PREFERRED

#
# Enable extra debug messages while performing textdumps.
#
options 	TEXTDUMP_VERBOSE

#
# NO_SYSCTL_DESCR omits the sysctl node descriptions to save space in the
# resulting kernel.
options 	NO_SYSCTL_DESCR

#
# MALLOC_DEBUG_MAXZONES enables multiple uma zones for malloc(9)
# allocations that are smaller than a page.  The purpose is to isolate
# different malloc types into hash classes, so that any buffer
# overruns or use-after-free will usually only affect memory from
# malloc types in that hash class.  This is purely a debugging tool;
# by varying the hash function and tracking which hash class was
# corrupted, the intersection of the hash classes from each instance
# will point to a single malloc type that is being misused.  At this
# point inspection or memguard(9) can be used to catch the offending
# code.
#
options 	MALLOC_DEBUG_MAXZONES=8

#
# DEBUG_MEMGUARD builds and enables memguard(9), a replacement allocator
# for the kernel used to detect modify-after-free scenarios.
# See the memguard(9) man page for more information on usage.
#
options 	DEBUG_MEMGUARD

#
# DEBUG_REDZONE enables buffer underflow and buffer overflow detection for
# malloc(9).
#
options 	DEBUG_REDZONE

#
# EARLY_PRINTF enables support for calling a special printf (eprintf)
# very early in the kernel (before cn_init() has been called).  This
# should only be used for debugging purposes early in boot.  Normally,
# it is not defined.  It is commented out here because this feature
# isn't generally available and the required eputc() isn't defined.
#
#options 	EARLY_PRINTF

#
# KTRACE enables the system-call tracing facility ktrace(2).  To be more
# SMP-friendly, KTRACE uses a worker thread to process most trace events
# asynchronously to the thread generating the event.  This requires a
# pre-allocated store of objects representing trace events.  The
# KTRACE_REQUEST_POOL option specifies the initial size of this store.
# The size of the pool can be adjusted both at boottime and runtime via
# the kern.ktrace_request_pool tunable and sysctl.
#
options 	KTRACE			#kernel tracing
options 	KTRACE_REQUEST_POOL=101

#
# KTR is a kernel tracing facility imported from BSD/OS.  It is
# enabled with the KTR option.  KTR_ENTRIES defines the number of
# entries in the circular trace buffer; it may be an arbitrary number.
# KTR_BOOT_ENTRIES defines the number of entries during the early boot,
# before malloc(9) is functional.
# KTR_COMPILE defines the mask of events to compile into the kernel as
# defined by the KTR_* constants in <sys/ktr.h>.  KTR_MASK defines the
# initial value of the ktr_mask variable which determines at runtime
# what events to trace.  KTR_CPUMASK determines which CPUs log
# events, with bit X corresponding to CPU X.  The layout of the string
# passed as KTR_CPUMASK must match a series of bitmasks each of them
# separated by the "," character (i.e.:
# KTR_CPUMASK=0xAF,0xFFFFFFFFFFFFFFFF).  KTR_VERBOSE enables
# dumping of KTR events to the console by default.  This functionality
# can be toggled via the debug.ktr_verbose sysctl and defaults to off
# if KTR_VERBOSE is not defined.  See ktr(4) and ktrdump(8) for details.
#
options 	KTR
options 	KTR_BOOT_ENTRIES=1024
options 	KTR_ENTRIES=(128*1024)
options 	KTR_COMPILE=(KTR_ALL)
options 	KTR_MASK=KTR_INTR
options 	KTR_CPUMASK=0x3
options 	KTR_VERBOSE

#
# ALQ(9) is a facility for the asynchronous queuing of records from the kernel
# to a vnode, and is employed by services such as ktr(4) to produce trace
# files based on a kernel event stream.  Records are written asynchronously
# in a worker thread.
#
options 	ALQ
options 	KTR_ALQ

#
# The INVARIANTS option is used in a number of source files to enable
# extra sanity checking of internal structures.  This support is not
# enabled by default because of the extra time it would take to check
# for these conditions, which can only occur as a result of
# programming errors.
#
options 	INVARIANTS

#
# The INVARIANT_SUPPORT option makes us compile in support for
# verifying some of the internal structures.  It is a prerequisite for
# 'INVARIANTS', as enabling 'INVARIANTS' will make these functions be
# called.  The intent is that you can set 'INVARIANTS' for single
# source files (by changing the source file or specifying it on the
# command line) if you have 'INVARIANT_SUPPORT' enabled.  Also, if you
# wish to build a kernel module with 'INVARIANTS', then adding
# 'INVARIANT_SUPPORT' to your kernel will provide all the necessary
# infrastructure without the added overhead.
#
options 	INVARIANT_SUPPORT
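#
# As an illustrative runtime example for the KTR facility described
# above (the mask value here is a placeholder, not a default), the
# event mask can be changed and the trace buffer inspected on a live
# system with:
#
#	sysctl debug.ktr.mask=0x1200
#	ktrdump -ct
#
# See <sys/ktr.h> for the KTR_* event bits behind such a mask.
#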
#
# The KASSERT_PANIC_OPTIONAL option allows kasserts to fire without
# necessarily inducing a panic.  Panic is the default behavior, but
# runtime options can configure it either entirely off, or off with a
# limit.
#
options 	KASSERT_PANIC_OPTIONAL

#
# The DIAGNOSTIC option is used to enable extra debugging information
# from some parts of the kernel.  As this makes everything more noisy,
# it is disabled by default.
#
options 	DIAGNOSTIC

#
# REGRESSION causes optional kernel interfaces necessary only for regression
# testing to be enabled.  These interfaces may constitute security risks
# when enabled, as they permit processes to easily modify aspects of the
# run-time environment to reproduce unlikely or unusual (possibly normally
# impossible) scenarios.
#
options 	REGRESSION

#
# This option lets some drivers co-exist that can't co-exist in a running
# system.  This is used to be able to compile all kernel code in one go for
# quality assurance purposes (like this file, which the option takes its
# name from.)
#
options 	COMPILING_LINT

#
# STACK enables the stack(9) facility, allowing the capture of kernel stack
# for the purpose of procinfo(1), etc.  stack(9) will also be compiled in
# automatically if DDB(4) is compiled into the kernel.
#
options 	STACK

#
# The NUM_CORE_FILES option specifies the limit for the number of core
# files generated by a particular process, when the core file format
# specifier includes the %I pattern.  Since we only have 1 character for
# the core count in the format string, meaning the range will be 0-9, the
# maximum value allowed for this option is 10.
# This core file limit can be adjusted at runtime via the debug.ncores
# sysctl.
#
options 	NUM_CORE_FILES=5

#
# The TSLOG option enables timestamped logging of events, especially
# function entries/exits, in order to track the time spent by the kernel.
# In particular, this is useful when investigating the early boot process,
# before it is possible to use more sophisticated tools like DTrace.
# The TSLOGSIZE option controls the size of the (preallocated, fixed
# length) buffer used for storing these events (default: 262144 records).
#
# For security reasons the TSLOG option should not be enabled on systems
# used in production.
#
options 	TSLOG
options 	TSLOGSIZE=262144

#####################################################################
# PERFORMANCE MONITORING OPTIONS

# The hwpmc driver allows the use of in-CPU performance monitoring
# counters for performance monitoring.  The base kernel needs to be
# configured with the 'options' line, while the hwpmc device can be either
# compiled in or loaded as a loadable kernel module.
#
# Additional configuration options may be required on specific architectures,
# please see hwpmc(4).

device		hwpmc			# Driver (also a loadable module)
options 	HWPMC_DEBUG
options 	HWPMC_HOOKS		# Other necessary kernel hooks

#####################################################################
# NETWORKING OPTIONS

#
# Protocol families
#
options 	INET			#Internet communications protocols
options 	INET6			#IPv6 communications protocols
options 	RATELIMIT		# TX rate limiting support

options 	ROUTETABLES=2		# allocated fibs up to 65536; default
					# is 1.  Allocating the maximum would
					# be a bad idea as they are large.

options 	TCP_OFFLOAD		# TCP offload support.
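#
# As an illustrative example (the addresses are placeholders), with
# ROUTETABLES=2 as above the second FIB can be populated and used at
# runtime with setfib(1):
#
#	setfib 1 route add default 192.0.2.1
#	setfib 1 ping -c 1 198.51.100.1
#
# The number of FIBs may also be set with the net.fibs loader tunable.
#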
options 	TCPHPTS

# In order to enable IPSEC you MUST also add device crypto to
# your kernel configuration
options 	IPSEC			#IP security (requires device crypto)

# Option IPSEC_SUPPORT does not enable IPsec, but makes it possible to
# load it as a kernel module.  You still MUST add device crypto to your
# kernel configuration.
options 	IPSEC_SUPPORT
#options 	IPSEC_DEBUG		#debug for IP security

#
# SMB/CIFS requester
# NETSMB enables support for the SMB protocol; it requires the LIBMCHAIN
# and LIBICONV options.
options 	NETSMB			#SMB/CIFS requester

# mchain library.  It can either be loaded as a KLD or compiled into the
# kernel.
options 	LIBMCHAIN

# libalias library, performing NAT
options 	LIBALIAS

#
# SCTP is a NEW transport protocol defined by
# RFC2960, updated by RFC3309 and RFC3758, and
# soon to have a new base RFC and many more
# extensions.  This release supports all the extensions
# including many drafts (most about to become RFCs).
# It is the reference implementation of SCTP
# and is quite well tested.
#
# Note that YOU MUST have both INET and INET6 defined.
# You don't have to enable V6, but SCTP is
# dual stacked and so far we have not torn apart
# the V6 and V4, since an association can span
# both a V6 and a V4 address at the SAME time :-)
#
options 	SCTP
# There are bunches of options.  This one turns on all
# sorts of nasty debug printing that you can do.  It is
# all controlled by a bit mask (settable by socket option
# and by sysctl).  Including it will not cause logging
# until you set the bits, but it can be quite verbose,
# so without this option we don't do any of the tests
# for bits and prints, which makes the code run faster.
# If you are not debugging, don't use it.
options 	SCTP_DEBUG

#
# The options after that turn on specific types of
# logging.  You can monitor CWND growth, flight size
# and all sorts of things.  Go look at the code and
# see.  I have used this to produce interesting
# charts and graphs as well :->
#
# I have not yet committed the tools to get and print
# the logs; I will do that eventually.  Before then, if
# you want them, send me an email: rrs@freebsd.org.
# You basically must have ktr(4) enabled for these,
# and you then set the sysctl to turn on/off various
# logging bits.  Use ktrdump(8) to pull the log and run
# it through a display program, for graphs and other
# things as well.
#
options 	SCTP_LOCK_LOGGING
options 	SCTP_MBUF_LOGGING
options 	SCTP_MBCNT_LOGGING
options 	SCTP_PACKET_LOGGING
options 	SCTP_LTRACE_CHUNKS
options 	SCTP_LTRACE_ERRORS

# altq(9).  Enable the base part of the hooks with the ALTQ option.
# Individual disciplines must be built into the base system and can not be
# loaded as modules at this point.  ALTQ requires a stable TSC, so if yours
# is broken or changes with CPU throttling then you must also have the
# ALTQ_NOPCC option.
options 	ALTQ
options 	ALTQ_CBQ	# Class Based Queueing
options 	ALTQ_RED	# Random Early Detection
options 	ALTQ_RIO	# RED In/Out
options 	ALTQ_CODEL	# CoDel Active Queueing
options 	ALTQ_HFSC	# Hierarchical Packet Scheduler
options 	ALTQ_FAIRQ	# Fair Packet Scheduler
options 	ALTQ_CDNR	# Traffic conditioner
options 	ALTQ_PRIQ	# Priority Queueing
options 	ALTQ_NOPCC	# Required if the TSC is unusable
options 	ALTQ_DEBUG

# netgraph(4).  Enable the base netgraph code with the NETGRAPH option.
# Individual node types can be enabled with the corresponding option
# listed below; however, this is not strictly necessary as netgraph
# will automatically load the corresponding KLD module if the node type
# is not already compiled into the kernel.
# Each type below has a corresponding man page, e.g., ng_async(4).
options 	NETGRAPH		# netgraph(4) system
options 	NETGRAPH_DEBUG		# enable extra debugging, this
					# affects netgraph(4) and nodes
# Node types
options 	NETGRAPH_ASYNC
options 	NETGRAPH_ATMLLC
options 	NETGRAPH_ATM_ATMPIF
options 	NETGRAPH_BLUETOOTH		# ng_bluetooth(4)
options 	NETGRAPH_BLUETOOTH_BT3C		# ng_bt3c(4)
options 	NETGRAPH_BLUETOOTH_HCI		# ng_hci(4)
options 	NETGRAPH_BLUETOOTH_L2CAP	# ng_l2cap(4)
options 	NETGRAPH_BLUETOOTH_SOCKET	# ng_btsocket(4)
options 	NETGRAPH_BLUETOOTH_UBT		# ng_ubt(4)
options 	NETGRAPH_BLUETOOTH_UBTBCMFW	# ubtbcmfw(4)
options 	NETGRAPH_BPF
options 	NETGRAPH_BRIDGE
options 	NETGRAPH_CAR
options 	NETGRAPH_CHECKSUM
options 	NETGRAPH_CISCO
options 	NETGRAPH_DEFLATE
options 	NETGRAPH_DEVICE
options 	NETGRAPH_ECHO
options 	NETGRAPH_EIFACE
options 	NETGRAPH_ETHER
options 	NETGRAPH_FRAME_RELAY
options 	NETGRAPH_GIF
options 	NETGRAPH_GIF_DEMUX
options 	NETGRAPH_HOLE
options 	NETGRAPH_IFACE
options 	NETGRAPH_IP_INPUT
options 	NETGRAPH_IPFW
options 	NETGRAPH_KSOCKET
options 	NETGRAPH_L2TP
options 	NETGRAPH_LMI
options 	NETGRAPH_MPPC_COMPRESSION
options 	NETGRAPH_MPPC_ENCRYPTION
options 	NETGRAPH_NETFLOW
options 	NETGRAPH_NAT
options 	NETGRAPH_ONE2MANY
options 	NETGRAPH_PATCH
options 	NETGRAPH_PIPE
options 	NETGRAPH_PPP
options 	NETGRAPH_PPPOE
options 	NETGRAPH_PPTPGRE
options 	NETGRAPH_PRED1
options 	NETGRAPH_RFC1490
options 	NETGRAPH_SOCKET
options 	NETGRAPH_SPLIT
options 	NETGRAPH_SPPP
options 	NETGRAPH_TAG
options 	NETGRAPH_TCPMSS
options 	NETGRAPH_TEE
options 	NETGRAPH_UI
options 	NETGRAPH_VJC
options 	NETGRAPH_VLAN

# NgATM - Netgraph ATM
options 	NGATM_ATM
options 	NGATM_ATMBASE
options 	NGATM_SSCOP
options 	NGATM_SSCFU
options 	NGATM_UNI
options 	NGATM_CCATM

device		mn	# Munich32x/Falc54 Nx64kbit/sec cards.

# Network stack virtualization.
options 	VIMAGE
options 	VNET_DEBUG	# debug for VIMAGE

#
# Network interfaces:
# The `loop' device is MANDATORY when networking is enabled.
device		loop

# The `ether' device provides generic code to handle
# Ethernets; it is MANDATORY when an Ethernet device driver is
# configured.
device		ether

# The `vlan' device implements the VLAN tagging of Ethernet frames
# according to IEEE 802.1Q.
device		vlan

# The `vxlan' device implements the VXLAN encapsulation of Ethernet
# frames in UDP packets according to RFC7348.
device		vxlan

# The `wlan' device provides generic code to support 802.11
# drivers, including host AP mode; it is MANDATORY for the wi
# and ath drivers and will eventually be required by all 802.11 drivers.
device		wlan
options 	IEEE80211_DEBUG		#enable debugging msgs
options 	IEEE80211_SUPPORT_MESH	#enable 802.11s D3.0 support
options 	IEEE80211_SUPPORT_TDMA	#enable TDMA support

# The `wlan_wep', `wlan_tkip', and `wlan_ccmp' devices provide
# support for WEP, TKIP, and AES-CCMP crypto protocols optionally
# used with 802.11 devices that depend on the `wlan' module.
device		wlan_wep
device		wlan_ccmp
device		wlan_tkip

# The `wlan_xauth' device provides support for external (i.e. user-mode)
# authenticators for use with 802.11 drivers that use the `wlan'
# module and support 802.1x and/or WPA security protocols.
device		wlan_xauth

# The `wlan_acl' device provides a MAC-based access control mechanism
# for use with 802.11 drivers operating in ap mode and using the
# `wlan' module.
# The `wlan_amrr' device provides the AMRR transmit rate control algorithm.
device		wlan_acl
device		wlan_amrr

# The `sppp' device serves a similar role for certain types
# of synchronous PPP links (like `cx', `ar').
device		sppp

# The `bpf' device enables the Berkeley Packet Filter.  Be
# aware of the legal and administrative consequences of enabling this
# option.  DHCP requires bpf.
device		bpf

# The `netmap' device implements memory-mapped access to network
# devices from userspace, enabling wire-speed packet capture and
# generation even at 10Gbit/s.  Requires support in the device
# driver.  Supported drivers are ixgbe, e1000, re.
device		netmap

# The `disc' device implements a minimal network interface,
# which throws away all packets sent and never receives any.  It is
# included for testing and benchmarking purposes.
device		disc

# The `epair' device implements a virtual back-to-back connected Ethernet
# like interface pair.
device		epair

# The `edsc' device implements a minimal Ethernet interface,
# which discards all packets sent and receives none.
device		edsc

# The `tap' device is a pty-like virtual Ethernet interface
device		tap

# The `tun' device implements (user-)ppp and nos-tun(8)
device		tun

# The `gif' device implements IPv6 over IPv4 tunneling,
# IPv4 over IPv6 tunneling, IPv4 over IPv4 tunneling and
# IPv6 over IPv6 tunneling.
# The `gre' device implements GRE (Generic Routing Encapsulation)
# tunneling, as specified in RFC 2784 and RFC 2890.
# The `me' device implements Minimal Encapsulation within IPv4 as
# specified in RFC 2004.
# The XBONEHACK option allows the same pair of addresses to be configured
# on multiple gif interfaces.
device		gif
device		gre
device		me
options 	XBONEHACK

# The `stf' device implements 6to4 encapsulation.
device		stf

# The pf packet filter consists of three devices:
# The `pf' device provides /dev/pf and the firewall code itself.
# The `pflog' device provides the pflog0 interface which logs packets.
# The `pfsync' device provides the pfsync0 interface used for
# synchronization of firewall state tables (over the net).
device		pf
device		pflog
device		pfsync

# Bridge interface.
device		if_bridge

# Common Address Redundancy Protocol.  See carp(4) for more details.
device		carp

# IPsec interface.
device		enc

# Link aggregation interface.
device		lagg

#
# Internet family options:
#
# MROUTING enables the kernel multicast packet forwarder, which works
# with mrouted and XORP.
#
# IPFIREWALL enables support for IP firewall construction, in
# conjunction with the `ipfw' program.  IPFIREWALL_VERBOSE sends
# logged packets to the system logger.  IPFIREWALL_VERBOSE_LIMIT
# limits the number of times a matching entry can be logged.
#
# WARNING:  IPFIREWALL defaults to a policy of "deny ip from any to any"
# and if you do not add other rules during startup to allow access,
# YOU WILL LOCK YOURSELF OUT.  It is suggested that you set
# firewall_type=open in /etc/rc.conf when first enabling this feature
# (see the example below), then refine the firewall rules in
# /etc/rc.firewall after you've tested that the new kernel feature
# works properly.
#
# IPFIREWALL_DEFAULT_TO_ACCEPT causes the default rule (at boot) to
# allow everything.  Use with care: if a cracker can crash your
# firewall machine, they can get to your protected machines.  However,
# if you are using it as an as-needed filter for specific problems as
# they arise, then this may be for you.  Changing the default to 'allow'
# means that you won't get stuck if the kernel and /sbin/ipfw binary get
# out of sync.
#
# IPDIVERT enables the divert IP sockets, used by ``ipfw divert''.  It
# depends on IPFIREWALL if compiled into the kernel.
#
# IPFIREWALL_NAT adds support for in kernel nat in ipfw, and it requires
# LIBALIAS.
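#
# The example mentioned above: a conservative first-time IPFIREWALL
# setup enables an open ruleset from /etc/rc.conf until the kernel
# feature has been tested (illustrative lines, not kernel options):
#
#	firewall_enable="YES"
#	firewall_type="open"
#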
#
# IPFIREWALL_NAT64 adds support for in kernel NAT64 in ipfw.
#
# IPFIREWALL_NPTV6 adds support for in kernel NPTv6 in ipfw.
#
# IPFIREWALL_PMOD adds support for the protocol modification module.
# Currently it supports only TCP MSS modification.
#
# IPSTEALTH enables code to support stealth forwarding (i.e., forwarding
# packets without touching the TTL).  This can be useful to hide firewalls
# from traceroute and similar tools.
#
# PF_DEFAULT_TO_DROP causes the default pf(4) rule to deny everything.
#
# TCPDEBUG enables code which keeps traces of the TCP state machine
# for sockets with the SO_DEBUG option set, which can then be examined
# using the trpt(8) utility.
#
# TCPPCAP enables code which keeps the last n packets sent and received
# on a TCP socket.
#
# TCP_BLACKBOX enables enhanced TCP event logging.
#
# TCP_HHOOK enables the hhook(9) framework hooks for the TCP stack.
#
# RADIX_MPATH provides support for equal-cost multi-path routing.
#
options 	MROUTING		# Multicast routing
options 	IPFIREWALL		#firewall
options 	IPFIREWALL_VERBOSE	#enable logging to syslogd(8)
options 	IPFIREWALL_VERBOSE_LIMIT=100	#limit verbosity
options 	IPFIREWALL_DEFAULT_TO_ACCEPT	#allow everything by default
options 	IPFIREWALL_NAT		#ipfw kernel nat support
options 	IPFIREWALL_NAT64	#ipfw kernel NAT64 support
options 	IPFIREWALL_NPTV6	#ipfw kernel IPv6 NPT support
options 	IPDIVERT		#divert sockets
options 	IPFILTER		#ipfilter support
options 	IPFILTER_LOG		#ipfilter logging
options 	IPFILTER_LOOKUP		#ipfilter pools
options 	IPFILTER_DEFAULT_BLOCK	#block all packets by default
options 	IPSTEALTH		#support for stealth forwarding
options 	PF_DEFAULT_TO_DROP	#drop everything by default
options 	TCPDEBUG
options 	TCPPCAP
options 	TCP_BLACKBOX
options 	TCP_HHOOK
options 	RADIX_MPATH

# The MBUF_STRESS_TEST option enables options which create
# various random failures / extreme cases related to mbuf
# functions.  See mbuf(9) for a list of available test cases.
# MBUF_PROFILING enables code to profile the mbuf chains
# exiting the system (via participating interfaces) and
# return a logarithmic histogram of monitored parameters
# (e.g. packet size, wasted space, number of mbufs in chain).
options 	MBUF_STRESS_TEST
options 	MBUF_PROFILING

# Statically link in accept filters
options 	ACCEPT_FILTER_DATA
options 	ACCEPT_FILTER_DNS
options 	ACCEPT_FILTER_HTTP

# TCP_SIGNATURE adds support for RFC 2385 (TCP-MD5) digests.  These are
# carried in TCP option 19.  This option is commonly used to protect
# TCP sessions (e.g. BGP) where IPSEC is not available nor desirable.
# This is enabled on a per-socket basis using the TCP_MD5SIG socket option.
# This requires the use of 'device crypto' and either 'options IPSEC' or
# 'options IPSEC_SUPPORT'.
options 	TCP_SIGNATURE		#include support for RFC 2385

# DUMMYNET enables the "dummynet" bandwidth limiter.  You need IPFIREWALL
# as well.  See dummynet(4) and ipfw(8) for more info.  When you run
# DUMMYNET it is advisable to also have at least "options HZ=1000" to
# achieve a smooth scheduling of the traffic.
options 	DUMMYNET

# The NETDUMP option enables netdump(4) client support in the kernel.
# This allows a panicking kernel to transmit a kernel dump to a remote host.
options 	NETDUMP

#####################################################################
# FILESYSTEM OPTIONS

#
# Only the root filesystem needs to be statically compiled or preloaded
# as a module; everything else will be automatically loaded at mount
# time.  Some people still prefer to statically compile other
# filesystems as well.
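#
# For example, a filesystem omitted from the kernel configuration is
# simply loaded on first use; mounting a FAT volume pulls in msdosfs.ko
# automatically (illustrative commands, the device name is a placeholder):
#
#	mount -t msdosfs /dev/da0s1 /mnt
#	kldstat -m msdosfs
#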
# # NB: The UNION filesystem was known to be buggy in the past. It is now # being actively maintained, although there are still some issues being # resolved. # # One of these is mandatory: options FFS #Fast filesystem options NFSCL #Network File System client # The rest are optional: options AUTOFS #Automounter filesystem options CD9660 #ISO 9660 filesystem options FDESCFS #File descriptor filesystem options FUSE #FUSE support module options MSDOSFS #MS DOS File System (FAT, FAT32) options NFSLOCKD #Network Lock Manager options NFSD #Network Filesystem Server options KGSSAPI #Kernel GSSAPI implementation options NULLFS #NULL filesystem options PROCFS #Process filesystem (requires PSEUDOFS) options PSEUDOFS #Pseudo-filesystem framework options PSEUDOFS_TRACE #Debugging support for PSEUDOFS options SMBFS #SMB/CIFS filesystem options TMPFS #Efficient memory filesystem options UDF #Universal Disk Format options UNIONFS #Union filesystem # The xFS_ROOT options REQUIRE the associated ``options xFS'' options NFS_ROOT #NFS usable as root device # Soft updates is a technique for improving filesystem speed and # making abrupt shutdown less risky. # options SOFTUPDATES # Extended attributes allow additional data to be associated with files, # and is used for ACLs, Capabilities, and MAC labels. # See src/sys/ufs/ufs/README.extattr for more information. options UFS_EXTATTR options UFS_EXTATTR_AUTOSTART # Access Control List support for UFS filesystems. The current ACL # implementation requires extended attribute support, UFS_EXTATTR, # for the underlying filesystem. # See src/sys/ufs/ufs/README.acls for more information. options UFS_ACL # Directory hashing improves the speed of operations on very large # directories at the expense of some memory. options UFS_DIRHASH # Gjournal-based UFS journaling support. options UFS_GJOURNAL # Make space in the kernel for a root filesystem on a md device. # Define to the number of kilobytes to reserve for the filesystem. # This is now optional. # If not defined, the root filesystem passed in as the MFS_IMAGE makeoption # will be automatically embedded in the kernel during linking. Its exact size # will be consumed within the kernel. # If defined, the old way of embedding the filesystem in the kernel will be # used. That is to say MD_ROOT_SIZE KB will be allocated in the kernel and # later, the filesystem image passed in as the MFS_IMAGE makeoption will be # dd'd into the reserved space if it fits. options MD_ROOT_SIZE=10 # Make the md device a potential root device, either with preloaded # images of type mfs_root or md_root. options MD_ROOT # Write-protect the md root device so that it may not be mounted writeable. options MD_ROOT_READONLY # Allow to read MD image from external memory regions options MD_ROOT_MEM # Disk quotas are supported when this option is enabled. options QUOTA #enable disk quotas # If you are running a machine just as a fileserver for PC and MAC # users, using SAMBA, you may consider setting this option # and keeping all those users' directories on a filesystem that is # mounted with the suiddir option. This gives new files the same # ownership as the directory (similar to group). It's a security hole # if you let these users run programs, so confine it to file-servers # (but it'll save you lots of headaches in those cases). Root owned # directories are exempt and X bits are cleared. The suid bit must be # set on the directory as well; see chmod(1). PC owners can't see/set # ownerships so they keep getting their toes trodden on. 
This saves # you all the support calls as the filesystem it's used on will act as # they expect: "It's my dir so it must be my file". # options SUIDDIR # NFS options: options NFS_MINATTRTIMO=3 # VREG attrib cache timeout in sec options NFS_MAXATTRTIMO=60 options NFS_MINDIRATTRTIMO=30 # VDIR attrib cache timeout in sec options NFS_MAXDIRATTRTIMO=60 options NFS_DEBUG # Enable NFS Debugging # # Add support for the EXT2FS filesystem of Linux fame. Be a bit # careful with this - the ext2fs code has a tendency to lag behind # changes and not be exercised very much, so mounting read/write could # be dangerous (and even mounting read only could result in panics.) # options EXT2FS # Cryptographically secure random number generator; /dev/random device random # The system memory devices; /dev/mem, /dev/kmem device mem # The kernel symbol table device; /dev/ksyms device ksyms # Optional character code conversion support with LIBICONV. # Each option requires their base file system and LIBICONV. options CD9660_ICONV options MSDOSFS_ICONV options UDF_ICONV ##################################################################### # POSIX P1003.1B # Real time extensions added in the 1993 POSIX # _KPOSIX_PRIORITY_SCHEDULING: Build in _POSIX_PRIORITY_SCHEDULING options _KPOSIX_PRIORITY_SCHEDULING # p1003_1b_semaphores are very experimental, # user should be ready to assist in debugging if problems arise. options P1003_1B_SEMAPHORES # POSIX message queue options P1003_1B_MQUEUE ##################################################################### # SECURITY POLICY PARAMETERS # Support for BSM audit options AUDIT # Support for Mandatory Access Control (MAC): options MAC options MAC_BIBA options MAC_BSDEXTENDED options MAC_IFOFF options MAC_LOMAC options MAC_MLS options MAC_NONE options MAC_NTPD options MAC_PARTITION options MAC_PORTACL options MAC_SEEOTHERUIDS options MAC_STUB options MAC_TEST # Support for Capsicum options CAPABILITIES # fine-grained rights on file descriptors options CAPABILITY_MODE # sandboxes with no global namespace access ##################################################################### # CLOCK OPTIONS # The granularity of operation is controlled by the kernel option HZ whose # default value (1000 on most architectures) means a granularity of 1ms # (1s/HZ). Historically, the default was 100, but finer granularity is # required for DUMMYNET and other systems on modern hardware. There are # reasonable arguments that HZ should, in fact, be 100 still; consider, # that reducing the granularity too much might cause excessive overhead in # clock interrupt processing, potentially causing ticks to be missed and thus # actually reducing the accuracy of operation. options HZ=100 # Enable support for the kernel PLL to use an external PPS signal, # under supervision of [x]ntpd(8) # More info in ntpd documentation: http://www.eecis.udel.edu/~ntp options PPS_SYNC # Enable support for generic feed-forward clocks in the kernel. # The feed-forward clock support is an alternative to the feedback oriented # ntpd/system clock approach, and is to be used with a feed-forward # synchronization algorithm such as the RADclock: # More info here: http://www.synclab.org/radclock options FFCLOCK ##################################################################### # SCSI DEVICES # SCSI DEVICE CONFIGURATION # The SCSI subsystem consists of the `base' SCSI code, a number of # high-level SCSI device `type' drivers, and the low-level host-adapter # device drivers. 
The host adapters are listed in the ISA and PCI # device configuration sections below. # # It is possible to wire down your SCSI devices so that a given bus, # target, and LUN always come on line as the same device unit. In # earlier versions the unit numbers were assigned in the order that # the devices were probed on the SCSI bus. This means that if you # removed a disk drive, you may have had to rewrite your /etc/fstab # file, and also that you had to be careful when adding a new disk # as it may have been probed earlier and moved your device configuration # around. (See also option GEOM_VOL for a different solution to this # problem.) # This old behavior is maintained as the default behavior. The unit # assignment begins with the first non-wired down unit for a device # type. For example, if you wire a disk as "da3" then the first # non-wired disk will be assigned da4. # The syntax for wiring down devices is: hint.scbus.0.at="ahc0" hint.scbus.1.at="ahc1" hint.scbus.1.bus="0" hint.scbus.3.at="ahc2" hint.scbus.3.bus="0" hint.scbus.2.at="ahc2" hint.scbus.2.bus="1" hint.da.0.at="scbus0" hint.da.0.target="0" hint.da.0.unit="0" hint.da.1.at="scbus3" hint.da.1.target="1" hint.da.2.at="scbus2" hint.da.2.target="3" hint.sa.1.at="scbus1" hint.sa.1.target="6" # "units" (SCSI logical unit number) that are not specified are # treated as if specified as LUN 0. # All SCSI devices allocate as many units as are required. # The ch driver drives SCSI Media Changer ("jukebox") devices. # # The da driver drives SCSI Direct Access ("disk") and Optical Media # ("WORM") devices. # # The sa driver drives SCSI Sequential Access ("tape") devices. # # The cd driver drives SCSI Read Only Direct Access ("cd") devices. # # The ses driver drives SCSI Environment Services ("ses") and # SAF-TE ("SCSI Accessible Fault-Tolerant Enclosure") devices. # # The pt driver drives SCSI Processor devices. # # The sg driver provides a passthrough API that is compatible with the # Linux SG driver. It will work in conjunction with the COMPAT_LINUX # option to run linux SG apps. It can also stand on its own and provide # source level API compatibility for porting apps to FreeBSD. # # Target Mode support is provided here but also requires that a SIM # (SCSI Host Adapter Driver) provide support as well. # # The targ driver provides target mode support as a Processor type device. # It exists to give the minimal context necessary to respond to Inquiry # commands. There is a sample user application that shows how the rest # of the command support might be done in /usr/share/examples/scsi_target. # # The targbh driver provides target mode support and exists to respond # to incoming commands that do not otherwise have a logical unit assigned # to them. # # The pass driver provides a passthrough API to access the CAM subsystem. device scbus #base SCSI code device ch #SCSI media changers device da #SCSI direct access devices (aka disks) device sa #SCSI tapes device cd #SCSI CD-ROMs device ses #Enclosure Services (SES and SAF-TE) device pt #SCSI processor device targ #SCSI Target Mode Code device targbh #SCSI Target Mode Blackhole Device device pass #CAM passthrough driver device sg #Linux SCSI passthrough device ctl #CAM Target Layer # CAM OPTIONS: # debugging options: # CAMDEBUG Compile in all possible debugging. # CAM_DEBUG_COMPILE Debug levels to compile in. # CAM_DEBUG_FLAGS Debug levels to enable on boot. # CAM_DEBUG_BUS Limit debugging to the given bus. # CAM_DEBUG_TARGET Limit debugging to the given target. 
# CAM_DEBUG_LUN Limit debugging to the given lun. # CAM_DEBUG_DELAY Delay in us after printing each debug line. # # CAM_MAX_HIGHPOWER: Maximum number of concurrent high power (start unit) cmds # SCSI_NO_SENSE_STRINGS: When defined disables sense descriptions # SCSI_NO_OP_STRINGS: When defined disables opcode descriptions # SCSI_DELAY: The number of MILLISECONDS to freeze the SIM (scsi adapter) # queue after a bus reset, and the number of milliseconds to # freeze the device queue after a bus device reset. This # can be changed at boot and runtime with the # kern.cam.scsi_delay tunable/sysctl. options CAMDEBUG options CAM_DEBUG_COMPILE=-1 options CAM_DEBUG_FLAGS=(CAM_DEBUG_INFO|CAM_DEBUG_PROBE|CAM_DEBUG_PERIPH) options CAM_DEBUG_BUS=-1 options CAM_DEBUG_TARGET=-1 options CAM_DEBUG_LUN=-1 options CAM_DEBUG_DELAY=1 options CAM_MAX_HIGHPOWER=4 options SCSI_NO_SENSE_STRINGS options SCSI_NO_OP_STRINGS options SCSI_DELAY=5000 # Be pessimistic about Joe SCSI device options CAM_IOSCHED_DYNAMIC options CAM_TEST_FAILURE # Options for the CAM CDROM driver: # CHANGER_MIN_BUSY_SECONDS: Guaranteed minimum time quantum for a changer LUN # CHANGER_MAX_BUSY_SECONDS: Maximum time quantum per changer LUN, only # enforced if there is I/O waiting for another LUN # The compiled in defaults for these variables are 2 and 10 seconds, # respectively. # # These can also be changed on the fly with the following sysctl variables: # kern.cam.cd.changer.min_busy_seconds # kern.cam.cd.changer.max_busy_seconds # options CHANGER_MIN_BUSY_SECONDS=2 options CHANGER_MAX_BUSY_SECONDS=10 # Options for the CAM sequential access driver: # SA_IO_TIMEOUT: Timeout for read/write/wfm operations, in minutes # SA_SPACE_TIMEOUT: Timeout for space operations, in minutes # SA_REWIND_TIMEOUT: Timeout for rewind operations, in minutes # SA_ERASE_TIMEOUT: Timeout for erase operations, in minutes # SA_1FM_AT_EOD: Default to model which only has a default one filemark at EOT. options SA_IO_TIMEOUT=4 options SA_SPACE_TIMEOUT=60 options SA_REWIND_TIMEOUT=(2*60) options SA_ERASE_TIMEOUT=(4*60) options SA_1FM_AT_EOD # Optional timeout for the CAM processor target (pt) device # This is specified in seconds. The default is 60 seconds. options SCSI_PT_DEFAULT_TIMEOUT=60 # Optional enable of doing SES passthrough on other devices (e.g., disks) # # Normally disabled because a lot of newer SCSI disks report themselves # as having SES capabilities, but this can then clot up attempts to build # a topology with the SES device that's on the box these drives are in.... options SES_ENABLE_PASSTHROUGH ##################################################################### # MISCELLANEOUS DEVICES AND OPTIONS device pty #BSD-style compatibility pseudo ttys device nmdm #back-to-back tty devices device md #Memory/malloc disk device snp #Snoop device - to look at pty/vty/etc.. device ccd #Concatenated disk driver device firmware #firmware(9) support # Kernel side iconv library options LIBICONV # Size of the kernel message buffer. Should be N * pagesize. options MSGBUF_SIZE=40960 ##################################################################### # HARDWARE BUS CONFIGURATION # # PCI bus & PCI options: # device pci options PCI_HP # PCI-Express native HotPlug options PCI_IOV # PCI SR-IOV support ##################################################################### # HARDWARE DEVICE CONFIGURATION # For ISA the required hints are listed. # PCI, CardBus, SD/MMC and pccard are self identifying buses, so # no hints are needed. 
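#
# As an illustrative sketch of the PCI_IOV option above (the driver name
# and VF count are placeholders), virtual functions are created at
# runtime with iovctl(8) and a configuration file such as
# /etc/iov/ix0.conf:
#
#	PF {
#		device : "ix0";
#		num_vfs : 4;
#	}
#
# followed by "iovctl -C -f /etc/iov/ix0.conf".  See iovctl.conf(5).
#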
# # Mandatory devices: # # These options are valid for other keyboard drivers as well. options KBD_DISABLE_KEYMAP_LOAD # refuse to load a keymap options KBD_INSTALL_CDEV # install a CDEV entry in /dev device kbdmux # keyboard multiplexer options KBDMUX_DFLT_KEYMAP # specify the built-in keymap makeoptions KBDMUX_DFLT_KEYMAP=it.iso options FB_DEBUG # Frame buffer debugging device splash # Splash screen and screen saver support # Various screen savers. device blank_saver device daemon_saver device dragon_saver device fade_saver device fire_saver device green_saver device logo_saver device rain_saver device snake_saver device star_saver device warp_saver # The syscons console driver (SCO color console compatible). device sc hint.sc.0.at="isa" options MAXCONS=16 # number of virtual consoles options SC_ALT_MOUSE_IMAGE # simplified mouse cursor in text mode options SC_DFLT_FONT # compile font in makeoptions SC_DFLT_FONT=cp850 options SC_DISABLE_KDBKEY # disable `debug' key options SC_DISABLE_REBOOT # disable reboot key sequence options SC_HISTORY_SIZE=200 # number of history buffer lines options SC_MOUSE_CHAR=0x3 # char code for text mode mouse cursor options SC_PIXEL_MODE # add support for the raster text mode # The following options will let you change the default colors of syscons. options SC_NORM_ATTR=(FG_GREEN|BG_BLACK) options SC_NORM_REV_ATTR=(FG_YELLOW|BG_GREEN) options SC_KERNEL_CONS_ATTR=(FG_RED|BG_BLACK) options SC_KERNEL_CONS_ATTRS=\"\x0c\x0d\x0e\x0f\x02\x09\x0a\x0b\" options SC_KERNEL_CONS_REV_ATTR=(FG_BLACK|BG_RED) # The following options will let you change the default behavior of # cut-n-paste feature options SC_CUT_SPACES2TABS # convert leading spaces into tabs options SC_CUT_SEPCHARS=\"x09\" # set of characters that delimit words # (default is single space - \"x20\") # If you have a two button mouse, you may want to add the following option # to use the right button of the mouse to paste text. options SC_TWOBUTTON_MOUSE # You can selectively disable features in syscons. options SC_NO_CUTPASTE options SC_NO_FONT_LOADING options SC_NO_HISTORY options SC_NO_MODE_CHANGE options SC_NO_SYSMOUSE options SC_NO_SUSPEND_VTYSWITCH # `flags' for sc # 0x80 Put the video card in the VESA 800x600 dots, 16 color mode # 0x100 Probe for a keyboard device periodically if one is not present # Enable experimental features of the syscons terminal emulator (teken). options TEKEN_CONS25 # cons25-style terminal emulation options TEKEN_UTF8 # UTF-8 output handling # The vt video console driver. device vt options VT_ALT_TO_ESC_HACK=1 # Prepend ESC sequence to ALT keys options VT_MAXWINDOWS=16 # Number of virtual consoles options VT_TWOBUTTON_MOUSE # Use right mouse button to paste # The following options set the default framebuffer size. options VT_FB_DEFAULT_HEIGHT=480 options VT_FB_DEFAULT_WIDTH=640 # The following options will let you change the default vt terminal colors. options TERMINAL_NORM_ATTR=(FG_GREEN|BG_BLACK) options TERMINAL_KERN_ATTR=(FG_LIGHTRED|BG_BLACK) # # Optional devices: # # # SCSI host adapters: # # ahc: Adaptec 274x/284x/2910/293x/294x/394x/3950x/3960x/398X/4944/ # 19160x/29160x, aic7770/aic78xx # ahd: Adaptec 29320/39320 Controllers. 
# esp: Emulex ESP, NCR 53C9x and QLogic FAS families based controllers # including the AMD Am53C974 (found on devices such as the Tekram # DC-390(T)) and the Sun ESP and FAS families of controllers # isp: Qlogic ISP 1020, 1040 and 1040B PCI SCSI host adapters, # ISP 1240 Dual Ultra SCSI, ISP 1080 and 1280 (Dual) Ultra2, # ISP 12160 Ultra3 SCSI, # Qlogic ISP 2100 and ISP 2200 1Gb Fibre Channel host adapters. # Qlogic ISP 2300 and ISP 2312 2Gb Fibre Channel host adapters. # Qlogic ISP 2322 and ISP 6322 2Gb Fibre Channel host adapters. # ispfw: Firmware module for Qlogic host adapters # mpt: LSI-Logic MPT/Fusion 53c1020 or 53c1030 Ultra4 # or FC9x9 Fibre Channel host adapters. # sym: Symbios/Logic 53C8XX family of PCI-SCSI I/O processors: # 53C810, 53C810A, 53C815, 53C825, 53C825A, 53C860, 53C875, # 53C876, 53C885, 53C895, 53C895A, 53C896, 53C897, 53C1510D, # 53C1010-33, 53C1010-66. # trm: Tekram DC395U/UW/F DC315U adapters. device ahc device ahd device esp device iscsi_initiator device isp hint.isp.0.disable="1" hint.isp.0.role="3" hint.isp.0.prefer_iomap="1" hint.isp.0.prefer_memmap="1" hint.isp.0.fwload_disable="1" hint.isp.0.ignore_nvram="1" hint.isp.0.fullduplex="1" hint.isp.0.topology="lport" hint.isp.0.topology="nport" hint.isp.0.topology="lport-only" hint.isp.0.topology="nport-only" # we can't get u_int64_t types, nor can we get strings if it's got # a leading 0x, hence this silly dodge. hint.isp.0.portwnn="w50000000aaaa0000" hint.isp.0.nodewnn="w50000000aaaa0001" device ispfw device mpt device sym device trm # The aic7xxx driver will attempt to use memory mapped I/O for all PCI # controllers that have it configured only if this option is set. Unfortunately, # this doesn't work on some motherboards, which prevents it from being the # default. options AHC_ALLOW_MEMIO # Dump the contents of the ahc controller configuration PROM. options AHC_DUMP_EEPROM # Bitmap of units to enable targetmode operations. options AHC_TMODE_ENABLE # Compile in Aic7xxx Debugging code. options AHC_DEBUG # Aic7xxx driver debugging options. See sys/dev/aic7xxx/aic7xxx.h options AHC_DEBUG_OPTS # Print register bitfields in debug output. Adds ~128k to driver # See ahc(4). options AHC_REG_PRETTY_PRINT # Compile in aic79xx debugging code. options AHD_DEBUG # Aic79xx driver debugging options. Adds ~215k to driver. See ahd(4). options AHD_DEBUG_OPTS=0xFFFFFFFF # Print human-readable register definitions when debugging options AHD_REG_PRETTY_PRINT # Bitmap of units to enable targetmode operations. options AHD_TMODE_ENABLE # Options used in dev/iscsi (Software iSCSI stack) # options ISCSI_INITIATOR_DEBUG=9 # Options used in dev/isp/ (Qlogic SCSI/FC driver). # # ISP_TARGET_MODE - enable target mode operation # options ISP_TARGET_MODE=1 # # ISP_DEFAULT_ROLES - default role # none=0 # target=1 # initiator=2 # both=3 (not supported currently) # # ISP_INTERNAL_TARGET (trivial internal disk target, for testing) # options ISP_DEFAULT_ROLES=0 #options SYM_SETUP_SCSI_DIFF #-HVD support for 825a, 875, 885 # disabled:0 (default), enabled:1 #options SYM_SETUP_PCI_PARITY #-PCI parity checking # disabled:0, enabled:1 (default) #options SYM_SETUP_MAX_LUN #-Number of LUNs supported # default:8, range:[1..64] # # Compaq "CISS" RAID controllers (SmartRAID 5* series) # These controllers have a SCSI-like interface, and require the # CAM infrastructure. # device ciss # # Intel Integrated RAID controllers. # This driver was developed and is maintained by Intel. 
Contacts at Intel for this driver are
# Boji T. Kannanthanam and Achim Leubner.
#
device		iir

#
# Mylex AcceleRAID and eXtremeRAID controllers with v6 and later
# firmware.  These controllers have a SCSI-like interface, and require
# the CAM infrastructure.
#
device		mly

#
# Compaq Smart RAID, Mylex DAC960 and AMI MegaRAID controllers.  Only
# one entry is needed; the code will find and configure all supported
# controllers.
#
device		ida		# Compaq Smart RAID
device		mlx		# Mylex DAC960
device		amr		# AMI MegaRAID
device		amrp		# SCSI Passthrough interface (optional, CAM req.)
device		mfi		# LSI MegaRAID SAS
device		mfip		# LSI MegaRAID SAS passthrough, requires CAM
options 	MFI_DEBUG
device		mrsas		# LSI/Avago MegaRAID SAS/SATA, 6Gb/s and 12Gb/s

#
# 3ware ATA RAID
#
device		twe		# 3ware ATA RAID

#
# Serial ATA host controllers:
#
# ahci: Advanced Host Controller Interface (AHCI) compatible
# mvs:  Marvell 88SX50XX/88SX60XX/88SX70XX/SoC controllers
# siis: SiliconImage SiI3124/SiI3132/SiI3531 controllers
#
# These drivers are part of the cam(4) subsystem.  They supersede the less
# featured ata(4) subsystem drivers, supporting the same hardware.
device		ahci
device		mvs
device		siis

#
# The 'ATA' driver supports all legacy ATA/ATAPI controllers, including
# PC Card devices.  You only need one "device ata" for it to find all
# PCI and PC Card ATA/ATAPI devices on modern machines.
# Alternatively, individual bus and chipset drivers may be chosen by using
# the 'atacore' driver and then selecting the drivers on a per vendor basis.
# For example, to build a system which only supports a VIA chipset,
# omit 'ata' and include the 'atacore', 'atapci' and 'atavia' drivers.
device		ata		# Modular ATA
#device		atacore		# Core ATA functionality
#device		atacard		# CARDBUS support
#device		ataisa		# ISA bus support
#device		atapci		# PCI bus support; only generic chipset support

# PCI ATA chipsets
#device		ataacard	# ACARD
#device		ataacerlabs	# Acer Labs Inc. (ALI)
#device		ataamd		# American Micro Devices (AMD)
#device		ataati		# ATI
#device		atacenatek	# Cenatek
#device		atacypress	# Cypress
#device		atacyrix	# Cyrix
#device		atahighpoint	# HighPoint
#device		ataintel	# Intel
#device		ataite		# Integrated Technology Inc. (ITE)
#device		atajmicron	# JMicron
#device		atamarvell	# Marvell
#device		atamicron	# Micron
#device		atanational	# National
#device		atanetcell	# NetCell
#device		atanvidia	# nVidia
#device		atapromise	# Promise
#device		ataserverworks	# ServerWorks
#device		atasiliconimage	# Silicon Image Inc. (SiI) (formerly CMD)
#device		atasis		# Silicon Integrated Systems Corp.(SiS)
#device		atavia		# VIA Technologies Inc.

#
# For older non-PCI, non-PnPBIOS systems, these are the hints lines to add:
hint.ata.0.at="isa"
hint.ata.0.port="0x1f0"
hint.ata.0.irq="14"
hint.ata.1.at="isa"
hint.ata.1.port="0x170"
hint.ata.1.irq="15"
#
-# The following options are valid on the ATA driver:
-#
-# ATA_REQUEST_TIMEOUT: the number of seconds to wait for an ATA request
-#	before timing out.
-
-#options ATA_REQUEST_TIMEOUT=10
-
-#
# Standard floppy disk controllers and floppy tapes, supports
# the Y-E DATA External FDD (PC Card)
#
device		fdc
hint.fdc.0.at="isa"
hint.fdc.0.port="0x3F0"
hint.fdc.0.irq="6"
hint.fdc.0.drq="2"
#
# FDC_DEBUG enables floppy debugging.  Since the debug output is huge, you
# must actually turn it on by setting the variable fd_debug with DDB,
# however.
options 	FDC_DEBUG
#
# Activate this line if you happen to have an Insight floppy tape.
# Probing them proved to be dangerous for people with floppy disks only,
# so it's "hidden" behind a flag:
#hint.fdc.0.flags="1"

# Specify floppy devices
hint.fd.0.at="fdc0"
hint.fd.0.drive="0"
hint.fd.1.at="fdc0"
hint.fd.1.drive="1"

#
# uart: newbusified driver for serial interfaces.  It consolidates the
#       sio(4), sab(4) and zs(4) drivers.
#
device		uart

# Options for uart(4)
options 	UART_PPS_ON_CTS		# Do time pulse capturing using CTS
					# instead of DCD.
options 	UART_POLL_FREQ		# Set polling rate, used when hw has
					# no interrupt support (50 Hz default).

# The following hint should only be used for pure ISA devices.  It is not
# needed otherwise.  Use of hints is strongly discouraged.
hint.uart.0.at="isa"

# The following 3 hints are used when the UART is a system device (i.e., a
# console or debug port), but only on platforms that don't have any other
# means to pass the information to the kernel.  The unit number of the hint
# is only used to bundle the hints together.  There is no relation to the
# unit number of the probed UART.
hint.uart.0.port="0x3f8"
hint.uart.0.flags="0x10"
hint.uart.0.baud="115200"

# `flags' for serial drivers that support consoles like sio(4) and uart(4):
#	0x10	enable console support for this unit.  Other console flags
#		(if applicable) are ignored unless this is set.  Enabling
#		console support does not make the unit the preferred console.
#		Boot with -h or set boot_serial=YES in the loader.  For sio(4)
#		specifically, the 0x20 flag can also be set (see above).
#		Currently, at most one unit can have console support; the
#		first one (in config file order) with this flag set is
#		preferred.  Setting this flag for sio0 gives the old behavior.
#	0x80	use this port for serial line gdb support in ddb.  Also known
#		as the debug port.
#

# Options for serial drivers that support consoles:
options 	BREAK_TO_DEBUGGER	# A BREAK/DBG on the console goes to
					# ddb, if available.

# Solaris implements a new BREAK which is initiated by a character
# sequence CR ~ ^b which is similar to a familiar pattern used on
# Sun servers by the Remote Console.  There are FreeBSD extensions:
# CR ~ ^p requests force panic and CR ~ ^r requests a clean reboot.
options 	ALT_BREAK_TO_DEBUGGER

# Serial Communications Controller
# Supports the Siemens SAB 82532 and Zilog Z8530 multi-channel
# communications controllers.
device		scc

# PCI Universal Communications driver
# Supports various multi port PCI I/O cards.
device		puc

#
# Network interfaces:
#
# MII bus support is required for many PCI Ethernet NICs,
# namely those which use MII-compliant transceivers or implement
# transceiver control interfaces that operate like an MII.  Adding
# "device miibus" to the kernel config pulls in support for the generic
# miibus API, the common support for bit-bang'ing the MII and all
# of the PHY drivers, including a generic one for PHYs that aren't
# specifically handled by an individual driver.  Support for specific
# PHYs may be built by adding "device mii", "device mii_bitbang" if
# needed by the NIC driver and then adding the appropriate PHY driver.
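#
# As an illustrative example, a kernel config for a PCIe RealTek NIC
# could carry just the following, pulling in the MII framework and all
# PHY drivers (including rgephy, used by re(4)):
#
#	device miibus
#	device re
#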
device		mii		# Minimal MII support
device		mii_bitbang	# Common module for bit-bang'ing the MII
device		miibus		# MII support w/ bit-bang'ing and all PHYs

device		acphy		# Altima Communications AC101
device		amphy		# AMD AM79c873 / Davicom DM910{1,2}
device		atphy		# Attansic/Atheros F1
device		axphy		# Asix Semiconductor AX88x9x
device		bmtphy		# Broadcom BCM5201/BCM5202 and 3Com 3c905C
device		bnxt		# Broadcom NetXtreme-C/NetXtreme-E
device		brgphy		# Broadcom BCM54xx/57xx 1000baseTX
device		ciphy		# Cicada/Vitesse CS/VSC8xxx
device		e1000phy	# Marvell 88E1000 1000/100/10-BT
device		gentbi		# Generic 10-bit 1000BASE-{LX,SX} fiber ifaces
device		icsphy		# ICS ICS1889-1893
device		ip1000phy	# IC Plus IP1000A/IP1001
device		jmphy		# JMicron JMP211/JMP202
device		lxtphy		# Level One LXT-970
device		mlphy		# Micro Linear 6692
device		nsgphy		# NatSemi DP8361/DP83865/DP83891
device		nsphy		# NatSemi DP83840A
device		nsphyter	# NatSemi DP83843/DP83815
device		pnaphy		# HomePNA
device		qsphy		# Quality Semiconductor QS6612
device		rdcphy		# RDC Semiconductor R6040
device		rgephy		# RealTek 8169S/8110S/8211B/8211C
device		rlphy		# RealTek 8139
device		rlswitch	# RealTek 8305
device		smcphy		# SMSC LAN91C111
device		tdkphy		# TDK 89Q2120
device		tlphy		# Texas Instruments ThunderLAN
device		truephy		# LSI TruePHY
device		xmphy		# XaQti XMAC II

# an:	Aironet 4500/4800 802.11 wireless adapters.  Supports the PCMCIA,
#	PCI and ISA varieties.
# ae:	Support for fast ethernet adapters based on the Attansic/Atheros
#	L2 PCI-Express FastEthernet controllers.
# age:	Support for gigabit ethernet adapters based on the Attansic/Atheros
#	L1 PCI express gigabit ethernet controllers.
# alc:	Support for Atheros AR8131/AR8132 PCIe ethernet controllers.
# ale:	Support for Atheros AR8121/AR8113/AR8114 PCIe ethernet controllers.
# ath:	Atheros a/b/g WiFi adapters (requires ath_hal and wlan)
# bce:	Broadcom NetXtreme II (BCM5706/BCM5708) PCI/PCIe Gigabit Ethernet
#	adapters.
# bfe:	Broadcom BCM4401 Ethernet adapter.
# bge:	Support for gigabit ethernet adapters based on the Broadcom
#	BCM570x family of controllers, including the 3Com 3c996-T,
#	the Netgear GA302T, the SysKonnect SK-9D21 and SK-9D41, and
#	the embedded gigE NICs on Dell PowerEdge 2550 servers.
# bnxt:	Broadcom NetXtreme-C and NetXtreme-E PCIe 10/25/50G Ethernet adapters.
# bxe:	Broadcom NetXtreme II (BCM5771X/BCM578XX) PCIe 10Gb Ethernet
#	adapters.
# bwi:	Broadcom BCM430* and BCM431* family of wireless adapters.
# bwn:	Broadcom BCM43xx family of wireless adapters.
# cas:	Sun Cassini/Cassini+ and National Semiconductor DP83065 Saturn
# cxgb:	Chelsio T3 based 1GbE/10GbE PCIe Ethernet adapters.
# cxgbe: Chelsio T4, T5, and T6-based 1/10/25/40/100GbE PCIe Ethernet
#	adapters.
# cxgbev: Chelsio T4, T5, and T6-based PCIe Virtual Functions.
# dc:	Support for PCI fast ethernet adapters based on the DEC/Intel 21143
#	and various workalikes including:
#	the ADMtek AL981 Comet and AN985 Centaur, the ASIX Electronics
#	AX88140A and AX88141, the Davicom DM9100 and DM9102, the Lite-On
#	82c168 and 82c169 PNIC, the Lite-On/Macronix LC82C115 PNIC II
#	and the Macronix 98713/98713A/98715/98715A/98725 PMAC.  This driver
#	replaces the old al, ax, dm, pn and mx drivers.  List of brands:
#	Digital DE500-BA, Kingston KNE100TX, D-Link DFE-570TX, SOHOware
#	SFA110, SVEC PN102-TX, CNet Pro110B, 120A, and 120B, Compex RL100-TX,
#	LinkSys LNE100TX, LNE100TX V2.0, Jaton XpressNet, Alfa Inc GFC2204,
#	KNE110TX.
# de:	Digital Equipment DC21040
# em:	Intel Pro/1000 Gigabit Ethernet 82542, 82543, 82544 based adapters.
# ep:   3Com 3C509, 3C529, 3C556, 3C562D, 3C563D, 3C572, 3C574X, 3C579, 3C589
#       and PC Card devices using these chipsets.
# ex:   Intel EtherExpress Pro/10 and other i82595-based adapters,
#       Olicom Ethernet PC Card devices.
# fe:   Fujitsu MB86960A/MB86965A Ethernet
# fxp:  Intel EtherExpress Pro/100B
#       (the prefer_iomap hint can be set to prefer I/O port mapping
#       instead of memory mapping)
# gem:  Apple GMAC/Sun ERI/Sun GEM
# hme:  Sun HME (Happy Meal Ethernet)
# jme:  JMicron JMC260 Fast Ethernet/JMC250 Gigabit Ethernet based adapters.
# le:   AMD Am7900 LANCE and Am79C9xx PCnet
# lge:  Support for PCI gigabit ethernet adapters based on the Level 1
#       LXT1001 NetCellerator chipset.  This includes the D-Link DGE-500SX,
#       SMC TigerCard 1000 (SMC9462SX), and some Addtron cards.
# lio:  Support for Cavium 23XX Ethernet adapters
# malo: Marvell Libertas wireless NICs.
# mwl:  Marvell 88W8363 802.11n wireless NICs.
#       Requires the mwl firmware module
# mwlfw: Marvell 88W8363 firmware
# msk:  Support for gigabit ethernet adapters based on the Marvell/SysKonnect
#       Yukon II Gigabit controllers, including 88E8021, 88E8022, 88E8061,
#       88E8062, 88E8035, 88E8036, 88E8038, 88E8050, 88E8052, 88E8053,
#       88E8055, 88E8056 and D-Link 560T/550SX.
# mlx5: Mellanox ConnectX-4 and ConnectX-4 LX IB and Eth shared code module.
# mlx5en:Mellanox ConnectX-4 and ConnectX-4 LX PCIe Ethernet adapters.
# my:   Myson Fast Ethernet (MTD80X, MTD89X)
# nge:  Support for PCI gigabit ethernet adapters based on the National
#       Semiconductor DP83820 and DP83821 chipset.  This includes the
#       SMC EZ Card 1000 (SMC9462TX), D-Link DGE-500T, Asante FriendlyNet
#       GigaNIX 1000TA and 1000TPC, the Addtron AEG320T, the Surecom
#       EP-320G-TX and the Netgear GA622T.
# oce:  Emulex 10 Gbit adapters (OneConnect Ethernet)
# pcn:  Support for PCI fast ethernet adapters based on the AMD Am79c97x
#       PCnet-FAST, PCnet-FAST+, PCnet-FAST III, PCnet-PRO and PCnet-Home
#       chipsets.  These can also be handled by the le(4) driver if the
#       pcn(4) driver is left out of the kernel.  The le(4) driver does not
#       support the additional features like the MII bus and burst mode of
#       the PCnet-FAST and greater chipsets though.
# ral:  Ralink Technology IEEE 802.11 wireless adapter
# re:   RealTek 8139C+/8169/816xS/811xS/8101E PCI/PCIe Ethernet adapter
# rl:   Support for PCI fast ethernet adapters based on the RealTek 8129/8139
#       chipset.  Note that the RealTek driver defaults to using programmed
#       I/O to do register accesses because memory mapped mode seems to cause
#       severe lockups on SMP hardware.  This driver also supports the
#       Accton EN1207D `Cheetah' adapter, which uses a chip called
#       the MPX 5030/5038, which is either a RealTek in disguise or a
#       RealTek workalike.  Note that the D-Link DFE-530TX+ uses the RealTek
#       chipset and is supported by this driver, not the 'vr' driver.
# rtwn: RealTek wireless adapters.
# rtwnfw: RealTek wireless firmware.
# sf:   Support for Adaptec Duralink PCI fast ethernet adapters based on the
#       Adaptec AIC-6915 "starfire" controller.
#       This includes dual and quad port cards, as well as one 100baseFX card.
#       Most of these are 64-bit PCI devices, except for one single port
#       card which is 32-bit.
# sge:  Silicon Integrated Systems SiS190/191 Fast/Gigabit Ethernet adapter
# sis:  Support for NICs based on the Silicon Integrated Systems SiS 900,
#       SiS 7016 and NS DP83815 PCI fast ethernet controller chips.
# sk:   Support for the SysKonnect SK-984x series PCI gigabit ethernet NICs.
# This includes the SK-9841 and SK-9842 single port cards (single mode # and multimode fiber) and the SK-9843 and SK-9844 dual port cards # (also single mode and multimode). # The driver will autodetect the number of ports on the card and # attach each one as a separate network interface. # sn: Support for ISA and PC Card Ethernet devices using the # SMC91C90/92/94/95 chips. # ste: Sundance Technologies ST201 PCI fast ethernet controller, includes # the D-Link DFE-550TX. # stge: Support for gigabit ethernet adapters based on the Sundance/Tamarack # TC9021 family of controllers, including the Sundance ST2021/ST2023, # the Sundance/Tamarack TC9021, the D-Link DL-4000 and ASUS NX1101. # ti: Support for PCI gigabit ethernet NICs based on the Alteon Networks # Tigon 1 and Tigon 2 chipsets. This includes the Alteon AceNIC, the # 3Com 3c985, the Netgear GA620 and various others. Note that you will # probably want to bump up kern.ipc.nmbclusters a lot to use this driver. # tl: Support for the Texas Instruments TNETE100 series 'ThunderLAN' # cards and integrated ethernet controllers. This includes several # Compaq Netelligent 10/100 cards and the built-in ethernet controllers # in several Compaq Prosignia, Proliant and Deskpro systems. It also # supports several Olicom 10Mbps and 10/100 boards. # tx: SMC 9432 TX, BTX and FTX cards. (SMC EtherPower II series) # txp: Support for 3Com 3cR990 cards with the "Typhoon" chipset # vr: Support for various fast ethernet adapters based on the VIA # Technologies VT3043 `Rhine I' and VT86C100A `Rhine II' chips, # including the D-Link DFE520TX and D-Link DFE530TX (see 'rl' for # DFE530TX+), the Hawking Technologies PN102TX, and the AOpen/Acer ALN-320. # vte: DM&P Vortex86 RDC R6040 Fast Ethernet # vx: 3Com 3C590 and 3C595 # wb: Support for fast ethernet adapters based on the Winbond W89C840F chip. # Note: this is not the same as the Winbond W89C940F, which is a # NE2000 clone. # wi: Lucent WaveLAN/IEEE 802.11 PCMCIA adapters. Note: this supports both # the PCMCIA and ISA cards: the ISA card is really a PCMCIA to ISA # bridge with a PCMCIA adapter plugged into it. # xe: Xircom/Intel EtherExpress Pro100/16 PC Card ethernet controller, # Accton Fast EtherCard-16, Compaq Netelligent 10/100 PC Card, # Toshiba 10/100 Ethernet PC Card, Xircom 16-bit Ethernet + Modem 56 # xl: Support for the 3Com 3c900, 3c905, 3c905B and 3c905C (Fast) # Etherlink XL cards and integrated controllers. This includes the # integrated 3c905B-TX chips in certain Dell Optiplex and Dell # Precision desktop machines and the integrated 3c905-TX chips # in Dell Latitude laptop docking stations. # Also supported: 3Com 3c980(C)-TX, 3Com 3cSOHO100-TX, 3Com 3c450-TX # Order for ISA devices is important here device ep device ex device fe hint.fe.0.at="isa" hint.fe.0.port="0x300" device sn hint.sn.0.at="isa" hint.sn.0.port="0x300" hint.sn.0.irq="10" device an device wi device xe # PCI Ethernet NICs that use the common MII bus controller code. 
device		ae		# Attansic/Atheros L2 FastEthernet
device		age		# Attansic/Atheros L1 Gigabit Ethernet
device		alc		# Atheros AR8131/AR8132 Ethernet
device		ale		# Atheros AR8121/AR8113/AR8114 Ethernet
device		bce		# Broadcom BCM5706/BCM5708 Gigabit Ethernet
device		bfe		# Broadcom BCM440x 10/100 Ethernet
device		bge		# Broadcom BCM570xx Gigabit Ethernet
device		cas		# Sun Cassini/Cassini+ and NS DP83065 Saturn
device		dc		# DEC/Intel 21143 and various workalikes
device		et		# Agere ET1310 10/100/Gigabit Ethernet
device		fxp		# Intel EtherExpress PRO/100B (82557, 82558)
hint.fxp.0.prefer_iomap="0"
device		gem		# Apple GMAC/Sun ERI/Sun GEM
device		hme		# Sun HME (Happy Meal Ethernet)
device		jme		# JMicron JMC250 Gigabit/JMC260 Fast Ethernet
device		lge		# Level 1 LXT1001 gigabit Ethernet
device		mlx5		# Shared code module between IB and Ethernet
device		mlx5en		# Mellanox ConnectX-4 and ConnectX-4 LX
device		msk		# Marvell/SysKonnect Yukon II Gigabit Ethernet
device		my		# Myson Fast Ethernet (MTD80X, MTD89X)
device		nge		# NatSemi DP83820 gigabit Ethernet
device		re		# RealTek 8139C+/8169/8169S/8110S
device		rl		# RealTek 8129/8139
device		pcn		# AMD Am79C97x PCI 10/100 NICs
device		sf		# Adaptec AIC-6915 (``Starfire'')
device		sge		# Silicon Integrated Systems SiS190/191
device		sis		# Silicon Integrated Systems SiS 900/SiS 7016
device		sk		# SysKonnect SK-984x & SK-982x gigabit Ethernet
device		ste		# Sundance ST201 (D-Link DFE-550TX)
device		stge		# Sundance/Tamarack TC9021 gigabit Ethernet
device		tl		# Texas Instruments ThunderLAN
device		tx		# SMC EtherPower II (83c170 ``EPIC'')
device		vr		# VIA Rhine, Rhine II
device		vte		# DM&P Vortex86 RDC R6040 Fast Ethernet
device		wb		# Winbond W89C840F
device		xl		# 3Com 3c90x (``Boomerang'', ``Cyclone'')

# PCI/PCI-X/PCIe Ethernet NICs that use iflib infrastructure
device		iflib
device		em		# Intel Pro/1000 Gigabit Ethernet
device		ix		# Intel Pro/10GbE PCIe Ethernet
device		ixv		# Intel Pro/10GbE PCIe Ethernet VF

# PCI Ethernet NICs.
device		cxgb		# Chelsio T3 10 Gigabit Ethernet
device		cxgb_t3fw	# Chelsio T3 10 Gigabit Ethernet firmware
device		cxgbe		# Chelsio T4-T6 1/10/25/40/100 Gigabit Ethernet
device		cxgbev		# Chelsio T4-T6 Virtual Functions
device		de		# DEC/Intel DC21x4x (``Tulip'')
device		le		# AMD Am7900 LANCE and Am79C9xx PCnet
device		mxge		# Myricom Myri-10G 10GbE NIC
device		oce		# Emulex 10 GbE (OneConnect Ethernet)
device		ti		# Alteon Networks Tigon I/II gigabit Ethernet
device		txp		# 3Com 3cR990 (``Typhoon'')
device		vx		# 3Com 3c590, 3c595 (``Vortex'')

# PCI IEEE 802.11 Wireless NICs
device		ath		# Atheros pci/cardbus NICs
device		ath_hal		# pci/cardbus chip support
#device		ath_ar5210	# AR5210 chips
#device		ath_ar5211	# AR5211 chips
#device		ath_ar5212	# AR5212 chips
#device		ath_rf2413
#device		ath_rf2417
#device		ath_rf2425
#device		ath_rf5111
#device		ath_rf5112
#device		ath_rf5413
#device		ath_ar5416	# AR5416 chips
# All of the AR5212 parts have a problem when paired with the AR71xx
# CPUs.  These parts have a bug that triggers a fatal bus error on the AR71xx
# only.  Details of the exact nature of the bug are sketchy, but some can be
# found at https://forum.openwrt.org/viewtopic.php?pid=70060 on pages 4, 5 and
# 6.  This option enables this workaround.  There is a performance penalty
# for this workaround, but without it things don't work at all.  The DMA
# from the card usually bursts 128 bytes, but on the affected CPUs, only
# 4 are safe.
options 	AH_RXCFG_SDMAMW_4BYTES
#device		ath_ar9160	# AR9160 chips
#device		ath_ar9280	# AR9280 chips
#device		ath_ar9285	# AR9285 chips
device		ath_rate_sample	# SampleRate tx rate control for ath
device		bwi		# Broadcom BCM430* BCM431*
device		bwn		# Broadcom BCM43xx
device		malo		# Marvell Libertas wireless NICs.
device		mwl		# Marvell 88W8363 802.11n wireless NICs.
device		mwlfw
device		ral		# Ralink Technology RT2500 wireless NICs.
device		rtwn		# Realtek wireless NICs
device		rtwnfw

# Use sf_buf(9) interface for jumbo buffers on ti(4) controllers.
#options 	TI_SF_BUF_JUMBO
# Turn on the header splitting option for the ti(4) driver firmware.  This
# only works for Tigon II chips, and has no effect for Tigon I chips.
# This option requires the TI_SF_BUF_JUMBO option above.
#options 	TI_JUMBO_HDRSPLIT

# These two options allow manipulating the mbuf cluster size and mbuf size,
# respectively.  Be very careful with NIC driver modules when changing
# these from their default values, because that can potentially cause a
# mismatch between the mbuf size assumed by the kernel and the mbuf size
# assumed by a module.  The only driver that currently has the ability to
# detect a mismatch is ti(4).
options 	MCLSHIFT=12	# mbuf cluster shift in bits, 12 == 4KB
options 	MSIZE=512	# mbuf size in bytes

#
# Sound drivers
#
# sound: The generic sound driver.
#

device		sound

#
# snd_*: Device-specific drivers.
#
# The flags of the device tell the driver a bit more about the
# device than is normally obtained through the PnP interface:
#	bit  2..0   secondary DMA channel;
#	bit  4      set if the board uses two dma channels;
#	bit 15..8   board type, overrides autodetection; leave it
#		    zero if you don't know what to put in (and you don't,
#		    since this is unsupported at the moment...).
#
# snd_ad1816:	Analog Devices AD1816 ISA PnP/non-PnP.
# snd_als4000:	Avance Logic ALS4000 PCI.
# snd_atiixp:	ATI IXP 200/300/400 PCI.
# snd_audiocs:	Crystal Semiconductor CS4231 SBus/EBus.  Only
#		for sparc64.
# snd_cmi:	CMedia CMI8338/CMI8738 PCI.
# snd_cs4281:	Crystal Semiconductor CS4281 PCI.
# snd_csa:	Crystal Semiconductor CS461x/428x PCI.  (except
#		4281)
# snd_ds1:	Yamaha DS-1 PCI.
# snd_emu10k1:	Creative EMU10K1 PCI and EMU10K2 (Audigy) PCI.
# snd_emu10kx:	Creative SoundBlaster Live! and Audigy
# snd_envy24:	VIA Envy24 and compatible, needs snd_spicds.
# snd_envy24ht:	VIA Envy24HT and compatible, needs snd_spicds.
# snd_es137x:	Ensoniq AudioPCI ES137x PCI.
# snd_ess:	Ensoniq ESS ISA PnP/non-PnP, to be used in
#		conjunction with snd_sbc.
# snd_fm801:	Forte Media FM801 PCI.
# snd_gusc:	Gravis UltraSound ISA PnP/non-PnP.
# snd_hda:	Intel High Definition Audio (Controller) and
#		compatible.
# snd_hdspe:	RME HDSPe AIO and RayDAT.
# snd_ich:	Intel ICH AC'97 and some more audio controllers
#		embedded in a chipset, for example nVidia
#		nForce controllers.
# snd_maestro:	ESS Technology Maestro-1/2x PCI.
# snd_maestro3:	ESS Technology Maestro-3/Allegro PCI.
# snd_mss:	Microsoft Sound System ISA PnP/non-PnP.
# snd_neomagic:	Neomagic 256 AV/ZX PCI.
# snd_sb16:	Creative SoundBlaster16, to be used in
#		conjunction with snd_sbc.
# snd_sb8:	Creative SoundBlaster (pre-16), to be used in
#		conjunction with snd_sbc.
# snd_sbc:	Creative SoundBlaster ISA PnP/non-PnP.
#		Supports ESS and Avance ISA chips as well.
# snd_solo:	ESS Solo-1x PCI.
# snd_spicds:	SPI codec driver, needed by Envy24/Envy24HT drivers.
# snd_t4dwave:	Trident 4DWave DX/NX PCI, Sis 7018 PCI and Acer Labs
#		M5451 PCI.
# snd_uaudio:	USB audio.
# snd_via8233:	VIA VT8233x PCI.
# snd_via82c686:	VIA VT82C686A PCI.
# snd_vibes:	S3 Sonicvibes PCI.

device		snd_ad1816
device		snd_als4000
device		snd_atiixp
#device		snd_audiocs
device		snd_cmi
device		snd_cs4281
device		snd_csa
device		snd_ds1
device		snd_emu10k1
device		snd_emu10kx
device		snd_envy24
device		snd_envy24ht
device		snd_es137x
device		snd_ess
device		snd_fm801
device		snd_gusc
device		snd_hda
device		snd_hdspe
device		snd_ich
device		snd_maestro
device		snd_maestro3
device		snd_mss
device		snd_neomagic
device		snd_sb16
device		snd_sb8
device		snd_sbc
device		snd_solo
device		snd_spicds
device		snd_t4dwave
device		snd_uaudio
device		snd_via8233
device		snd_via82c686
device		snd_vibes

# For non-PnP sound cards:
hint.pcm.0.at="isa"
hint.pcm.0.irq="10"
hint.pcm.0.drq="1"
hint.pcm.0.flags="0x0"
hint.sbc.0.at="isa"
hint.sbc.0.port="0x220"
hint.sbc.0.irq="5"
hint.sbc.0.drq="1"
hint.sbc.0.flags="0x15"
hint.gusc.0.at="isa"
hint.gusc.0.port="0x220"
hint.gusc.0.irq="5"
hint.gusc.0.drq="1"
hint.gusc.0.flags="0x13"

#
# Following options are intended for debugging/testing purposes:
#
# SND_DEBUG			Enable extra debugging code that includes
#				sanity checking and possible increase of
#				verbosity.
#
# SND_DIAGNOSTIC		Similar in spirit to INVARIANTS/DIAGNOSTIC;
#				zero tolerance for inconsistencies.
#
# SND_FEEDER_MULTIFORMAT	By default, only 16/32 bit feeders are compiled
#				in.  This option enables most feeder converters
#				except for 8-bit ones.  WARNING: May bloat the
#				kernel.
#
# SND_FEEDER_FULL_MULTIFORMAT	Ditto, but includes 8-bit feeders as well.
#
# SND_FEEDER_RATE_HP		(feeder_rate) High precision 64-bit arithmetic
#				as much as possible (the default tries to
#				avoid it).  Possible slowdown.
#
# SND_PCM_64			(Only applicable for i386/32-bit arch)
#				Process 32-bit samples through 64-bit
#				integer/arithmetic.  Slight increase of dynamic
#				range at the cost of a possible slowdown.
#
# SND_OLDSTEREO			Only 2 channels are allowed, effectively
#				disabling multichannel processing.
#
options 	SND_DEBUG
options 	SND_DIAGNOSTIC
options 	SND_FEEDER_MULTIFORMAT
options 	SND_FEEDER_FULL_MULTIFORMAT
options 	SND_FEEDER_RATE_HP
options 	SND_PCM_64
options 	SND_OLDSTEREO

#
# Miscellaneous hardware:
#
# bktr: Brooktree bt848/848a/849a/878/879 video capture and TV Tuner board
# cmx:  OmniKey CardMan 4040 pccard smartcard reader

device		cmx

#
# The 'bktr' device is a PCI video capture device using the Brooktree
# bt848/bt848a/bt849a/bt878/bt879 chipset.  When used with a TV Tuner it forms
# a TV card, e.g. Miro PC/TV, Hauppauge WinCast/TV WinTV, VideoLogic
# Captivator, Intel Smart Video III, AverMedia, IMS Turbo, FlyVideo.
#
# options 	OVERRIDE_CARD=xxx
# options 	OVERRIDE_TUNER=xxx
# options 	OVERRIDE_MSP=1
# options 	OVERRIDE_DBX=1
# These options can be used to override the autodetection.
# The current values for xxx are found in src/sys/dev/bktr/bktr_card.h.
# Run-time overrides on a per-card basis can be made using sysctl(8).
#
# options 	BROOKTREE_SYSTEM_DEFAULT=BROOKTREE_PAL
# or
# options 	BROOKTREE_SYSTEM_DEFAULT=BROOKTREE_NTSC
# Specifies the default video capture mode.
# This is required for Dual Crystal (28&35MHz) boards where PAL is used
# to prevent hangs during initialization, e.g. VideoLogic Captivator PCI.
#
# options 	BKTR_USE_PLL
# This is required for PAL or SECAM boards with a 28MHz crystal and no 35MHz
# crystal, e.g. some new Bt878 cards.
#
# options 	BKTR_GPIO_ACCESS
# This enables IOCTLs which give user level access to the GPIO port.
#
# options 	BKTR_NO_MSP_RESET
# Prevents the MSP34xx reset.
# Good if you initialize the MSP in another OS first.
#
# options 	BKTR_430_FX_MODE
# Switch Bt878/879 cards into Intel 430FX chipset compatibility mode.
#
# options 	BKTR_SIS_VIA_MODE
# Switch Bt878/879 cards into SIS/VIA chipset compatibility mode which is
# needed for some old SiS and VIA chipset motherboards.
# This also allows Bt878/879 chips to work on old OPTi (<1997) chipset
# motherboards and motherboards with bad or incomplete PCI 2.1 support.
# As a rough guess, old = before 1998
#
# options 	BKTR_NEW_MSP34XX_DRIVER
# Use the new, more complete initialization scheme for the msp34* soundchip.
# Should fix stereo autodetection if the old driver only outputs
# mono sound.
#
# options 	BKTR_USE_FREEBSD_SMBUS
# Compile with the FreeBSD SMBus implementation.
#
# The Brooktree driver has been ported to the new I2C framework.  Thus,
# you'll need to have the following lines in the kernel config:
#	device smbus
#	device iicbus
#	device iicbb
#	device iicsmb
# The iic and smb devices are only needed if you want to control other
# I2C slaves connected to the external connector of some cards.
#
device		bktr

#
# PC Card/PCMCIA and Cardbus
#
# cbb: pci/cardbus bridge implementing YENTA interface
# pccard: pccard slots
# cardbus: cardbus slots
device		cbb
device		pccard
device		cardbus

#
# MMC/SD
#
# mmc		MMC/SD bus
# mmcsd		MMC/SD memory card
# sdhci		Generic PCI SD Host Controller
#
device		mmc
device		mmcsd
device		sdhci

#
# SMB bus
#
# System Management Bus support is provided by the 'smbus' device.
# Access to the SMBus device is via the 'smb' device (/dev/smb*),
# which is a child of the 'smbus' device.
#
# Supported devices:
# smb		standard I/O through /dev/smb*
#
# Supported SMB interfaces:
# iicsmb	I2C to SMB bridge with any iicbus interface
# bktr		brooktree848 I2C hardware interface
# intpm		Intel PIIX4 (82371AB, 82443MX) Power Management Unit
# alpm		Acer Aladdin-IV/V/Pro2 Power Management Unit
# ichsmb	Intel ICH SMBus controller chips (82801AA, 82801AB, 82801BA)
# viapm		VIA VT82C586B/596B/686A and VT8233 Power Management Unit
# amdpm		AMD 756 Power Management Unit
# amdsmb	AMD 8111 SMBus 2.0 Controller
# nfpm		NVIDIA nForce Power Management Unit
# nfsmb		NVIDIA nForce2/3/4 MCP SMBus 2.0 Controller
# ismt		Intel SMBus 2.0 controller chips (on Atom S1200, C2000)
#
device		smbus		# Bus support, required for smb below.

device		intpm
device		alpm
device		ichsmb
device		viapm
device		amdpm
device		amdsmb
device		nfpm
device		nfsmb
device		ismt

device		smb

# SMBus peripheral devices
#
# jedec_dimm	Asset and temperature reporting for DDR3 and DDR4 DIMMs
#
device		jedec_dimm

# I2C Bus
#
# Philips i2c bus support is provided by the `iicbus' device.
#
# Supported devices:
# ic	i2c network interface
# iic	i2c standard io
# iicsmb i2c to smb bridge.  Allow i2c i/o with smb commands.
# iicoc	simple polling driver for OpenCores I2C controller
#
# Supported interfaces:
# bktr	brooktree848 I2C software interface
#
# Other:
# iicbb	generic I2C bit-banging code (needed by lpbb, bktr)
#
device		iicbus		# Bus support, required for ic/iic/iicsmb below.
device		iicbb
device		ic
device		iic
device		iicsmb		# smb over i2c bridge
device		iicoc		# OpenCores I2C controller support

# I2C peripheral devices
#
device		ds1307		# Dallas DS1307 RTC and compatible
device		ds13rtc		# All Dallas/Maxim ds13xx chips
device		ds1672		# Dallas DS1672 RTC
device		ds3231		# Dallas DS3231 RTC + temperature
device		icee		# AT24Cxxx and compatible EEPROMs
device		lm75		# LM75 compatible temperature sensor
device		nxprtc		# NXP RTCs: PCA/PFC212x PCA/PCF85xx
device		s35390a		# Seiko Instruments S-35390A RTC

# Parallel-Port Bus
#
# Parallel port bus support is provided by the `ppbus' device.
# Multiple devices may be attached to the parallel port, devices
# are automatically probed and attached when found.
#
# Supported devices:
# vpo	Iomega Zip Drive
#	Requires SCSI disk support ('scbus' and 'da'), best
#	performance is achieved with ports in EPP 1.9 mode.
# lpt	Parallel Printer
# plip	Parallel network interface
# ppi	General-purpose I/O ("Geek Port") + IEEE1284 I/O
# pps	Pulse per second Timing Interface
# lpbb	Philips official parallel port I2C bit-banging interface
# pcfclock Parallel port clock driver.
#
# Supported interfaces:
# ppc	ISA-bus parallel port interfaces.
#

options 	PPC_PROBE_CHIPSET	# Enable chipset specific detection
					# (see flags in ppc(4))
options 	DEBUG_1284		# IEEE1284 signaling protocol debug
options 	PERIPH_1284		# Makes your computer act as an IEEE1284
					# compliant peripheral
options 	DONTPROBE_1284		# Avoid boot detection of PnP parallel devices
options 	VP0_DEBUG		# ZIP/ZIP+ debug
options 	LPT_DEBUG		# Printer driver debug
options 	PPC_DEBUG		# Parallel chipset level debug
options 	PLIP_DEBUG		# Parallel network IP interface debug
options 	PCFCLOCK_VERBOSE	# Verbose pcfclock driver
options 	PCFCLOCK_MAX_RETRIES=5	# Maximum read tries (default 10)

device		ppc
hint.ppc.0.at="isa"
hint.ppc.0.irq="7"
device		ppbus
device		vpo
device		lpt
device		plip
device		ppi
device		pps
device		lpbb
device		pcfclock

#
# Etherswitch framework and drivers
#
# etherswitch	The etherswitch(4) framework
# miiproxy	Proxy device for miibus(4) functionality
#
# Switch hardware support:
# arswitch	Atheros switches
# ip17x		IC+ 17x family switches
# rtl8366rb	Realtek RTL8366 switches
# ukswitch	Multi-PHY switches
#
device		etherswitch
device		miiproxy
device		arswitch
device		ip17x
device		rtl8366rb
device		ukswitch

# Kernel BOOTP support

options 	BOOTP		# Use BOOTP to obtain IP address/hostname
				# Requires NFSCL and NFS_ROOT
options 	BOOTP_NFSROOT	# NFS mount root filesystem using BOOTP info
options 	BOOTP_NFSV3	# Use NFS v3 to NFS mount root
options 	BOOTP_COMPAT	# Workaround for broken bootp daemons.
options 	BOOTP_WIRED_TO=fxp0 # Use interface fxp0 for BOOTP
options 	BOOTP_BLOCKSIZE=8192 # Override NFS block size

#
# Enable software watchdog routines, even if a hardware watchdog is present.
# By default, the software watchdog timer is enabled only if no hardware
# watchdog is present.
#
options 	SW_WATCHDOG

#
# Add the software deadlock resolver thread.
#
options 	DEADLKRES

#
# Disable swapping of stack pages.  This option removes all
# code which actually performs swapping, so it's not possible to turn
# it back on at run-time.
#
# This is sometimes useful for systems which don't have any swap space
# (see also sysctl "vm.disable_swapspace_pageouts")
#
#options 	NO_SWAPPING

# Set the number of sf_bufs to allocate.  sf_bufs are virtual buffers
# for sendfile(2) that are used to map file VM pages, and normally
# default to a quantity that is roughly 16*MAXUSERS+512.
# You would typically want about 4 of these for each simultaneous file send.
#
options 	NSFBUFS=1024

#
# Enable extra debugging code for locks.  This stores the filename and
# line of whatever acquired the lock in the lock itself, and changes a
# number of function calls to pass around the relevant data.  This is
# not at all useful unless you are debugging lock code.  Note that
# modules should be recompiled as this option modifies KBI.
#
options 	DEBUG_LOCKS

#####################################################################
# USB support
# UHCI controller
device		uhci
# OHCI controller
device		ohci
# EHCI controller
device		ehci
# XHCI controller
device		xhci
# SL811 Controller
#device		slhci
# General USB code (mandatory for USB)
device		usb
#
# USB Double Bulk Pipe devices
device		udbp
# USB FM Radio
device		ufm
# USB temperature meter
device		ugold
# USB LED
device		uled
# Human Interface Device (anything with buttons and dials)
device		uhid
# USB keyboard
device		ukbd
# USB printer
device		ulpt
# USB mass storage driver (Requires scbus and da)
device		umass
# USB mass storage driver for device-side mode
device		usfs
# USB support for Belkin F5U109 and Magic Control Technology serial adapters
device		umct
# USB modem support
device		umodem
# USB mouse
device		ums
# USB touchpad(s)
device		atp
device		wsp
# eGalax USB touch screen
device		uep
# Diamond Rio 500 MP3 player
device		urio
#
# USB serial support
device		ucom
# USB support for 3G modem cards by Option, Novatel, Huawei and Sierra
device		u3g
# USB support for ArkMicro Technologies ARK3116 based serial adapters
device		uark
# USB support for Belkin F5U103 and compatible serial adapters
device		ubsa
# USB support for serial adapters based on the FT8U100AX and FT8U232AM
device		uftdi
# USB support for some Windows CE based serial communication.
device		uipaq
# USB support for Prolific PL-2303 serial adapters
device		uplcom
# USB support for Silicon Laboratories CP2101/CP2102 based USB serial adapters
device		uslcom
# USB Visor and Palm devices
device		uvisor
# USB serial support for DDI pocket's PHS
device		uvscom
#
# USB ethernet support
device		uether
# ADMtek USB ethernet.  Supports the LinkSys USB100TX,
# the Billionton USB100, the Melco LU-ATX, the D-Link DSB-650TX
# and the SMC 2202USB.  Also works with the ADMtek AN986 Pegasus
# eval board.
device		aue
# ASIX Electronics AX88172 USB 2.0 ethernet driver.  Used in the
# LinkSys USB200M and various other adapters.
device		axe
# ASIX Electronics AX88178A/AX88179 USB 2.0/3.0 gigabit ethernet driver.
device		axge
#
# Devices which communicate using Ethernet over USB, particularly
# Communication Device Class (CDC) Ethernet specification.  Supports
# Sharp Zaurus PDAs, some DOCSIS cable modems and so on.
device		cdce
#
# CATC USB-EL1201A USB ethernet.  Supports the CATC Netmate
# and Netmate II, and the Belkin F5U111.
device		cue
#
# Kawasaki LSI ethernet.  Supports the LinkSys USB10T,
# Entrega USB-NET-E45, Peracom Ethernet Adapter, the
# 3Com 3c19250, the ADS Technologies USB-10BT, the ATen UC10T,
# the Netgear EA101, the D-Link DSB-650, the SMC 2102USB
# and 2104USB, and the Corega USB-T.
device		kue
#
# RealTek RTL8150 USB to fast ethernet.  Supports the Melco LUA-KTX
# and the GREEN HOUSE GH-USB100B.
device		rue
#
# Davicom DM9601E USB to fast ethernet.  Supports the Corega FEther USB-TXC.
device		udav
#
# RealTek RTL8152/RTL8153 USB Ethernet driver
device		ure
#
# Moschip MCS7730/MCS7840 USB to fast ethernet.  Supports the Sitecom LN030.
device		mos
#
# HSxPA devices from Option N.V.
device		uhso

# Realtek RTL8188SU/RTL8191SU/RTL8192SU wireless driver
device		rsu
#
# Ralink Technology RT2501USB/RT2601USB wireless driver
device		rum
# Ralink Technology RT2700U/RT2800U/RT3000U wireless driver
device		run
#
# Atheros AR5523 wireless driver
device		uath
#
# Conexant/Intersil PrismGT wireless driver
device		upgt
#
# Ralink Technology RT2500USB wireless driver
device		ural
#
# RNDIS USB ethernet driver
device		urndis
# Realtek RTL8187B/L wireless driver
device		urtw
#
# ZyDas ZD1211/ZD1211B wireless driver
device		zyd
#
# Sierra USB wireless driver
device		usie

#
# debugging options for the USB subsystem
#
options 	USB_DEBUG
options 	U3G_DEBUG

# options for ukbd:
options 	UKBD_DFLT_KEYMAP	# specify the built-in keymap
makeoptions	UKBD_DFLT_KEYMAP=jp

# options for uplcom:
options 	UPLCOM_INTR_INTERVAL=100	# interrupt pipe interval
						# in milliseconds

# options for uvscom:
options 	UVSCOM_DEFAULT_OPKTSIZE=8	# default output packet size
options 	UVSCOM_INTR_INTERVAL=100	# interrupt pipe interval
						# in milliseconds

#####################################################################
# FireWire support

device		firewire	# FireWire bus code
device		sbp		# SCSI over Firewire (Requires scbus and da)
device		sbp_targ	# SBP-2 Target mode (Requires scbus and targ)
device		fwe		# Ethernet over FireWire (non-standard!)
device		fwip		# IP over FireWire (RFC2734 and RFC3146)

#####################################################################
# dcons support (Dumb Console Device)

device		dcons		# dumb console driver
device		dcons_crom	# FireWire attachment
options 	DCONS_BUF_SIZE=16384	# buffer size
options 	DCONS_POLL_HZ=100	# polling rate
options 	DCONS_FORCE_CONSOLE=0	# force to be the primary console
options 	DCONS_FORCE_GDB=1	# force to be the gdb device

#####################################################################
# crypto subsystem
#
# This is a port of the OpenBSD crypto framework.  Include this when
# configuring IPSEC and when you have a h/w crypto device to accelerate
# user applications that link to OpenSSL.
#
# Drivers are ports from OpenBSD with some simple enhancements that have
# been fed back to OpenBSD.

device		crypto		# core crypto support

# Only install the cryptodev device if you are running tests, or know
# specifically why you need it.  In most cases, it is not needed and
# will make things slower.
device		cryptodev	# /dev/crypto for access to h/w

device		rndtest		# FIPS 140-2 entropy tester

device		ccr		# Chelsio T6

device		hifn		# Hifn 7951, 7781, etc.
options 	HIFN_DEBUG	# enable debugging support: hw.hifn.debug
options 	HIFN_RNDTEST	# enable rndtest support

device		ubsec		# Broadcom 5501, 5601, 58xx
options 	UBSEC_DEBUG	# enable debugging support: hw.ubsec.debug
options 	UBSEC_RNDTEST	# enable rndtest support

#####################################################################
#
# Embedded system options:
#
# An embedded system might want to run something other than init.
options 	INIT_PATH=/sbin/init:/rescue/init

# Debug options
options 	BUS_DEBUG	# enable newbus debugging
options 	DEBUG_VFS_LOCKS	# enable VFS lock debugging
options 	SOCKBUF_DEBUG	# enable sockbuf last record/mb tail checking
options 	IFMEDIA_DEBUG	# enable debugging in net/if_media.c

#
# Verbose SYSINIT
#
# Make the SYSINIT process performed by mi_startup() verbose.  This is very
# useful when porting to a new architecture.  If DDB is also enabled, this
# will print function names instead of addresses.
# If defined with a value of zero, the verbose code is compiled-in but
# disabled by default, and can be enabled with the debug.verbose_sysinit=1
# tunable.
options 	VERBOSE_SYSINIT

#####################################################################
# SYSV IPC KERNEL PARAMETERS
#
# Maximum number of System V semaphores that can be used on the system at
# one time.
options 	SEMMNI=11
# Total number of semaphores system wide
options 	SEMMNS=61
# Total number of undo structures in system
options 	SEMMNU=31
# Maximum number of System V semaphores that can be used by a single process
# at one time.
options 	SEMMSL=61
# Maximum number of operations that can be outstanding on a single System V
# semaphore at one time.
options 	SEMOPM=101
# Maximum number of undo operations that can be outstanding on a single
# System V semaphore at one time.
options 	SEMUME=11
# Maximum number of shared memory pages system wide.
options 	SHMALL=1025
# Maximum size, in bytes, of a single System V shared memory region.
options 	SHMMAX=(SHMMAXPGS*PAGE_SIZE+1)
options 	SHMMAXPGS=1025
# Minimum size, in bytes, of a single System V shared memory region.
options 	SHMMIN=2
# Maximum number of shared memory regions that can be used on the system
# at one time.
options 	SHMMNI=33
# Maximum number of System V shared memory regions that can be attached to
# a single process at one time.
options 	SHMSEG=9

# Set the amount of time (in seconds) the system will wait before
# rebooting automatically when a kernel panic occurs.  If set to (-1),
# the system will wait indefinitely until a key is pressed on the
# console.
options 	PANIC_REBOOT_WAIT_TIME=16

# Attempt to bypass the buffer cache and put data directly into the
# userland buffer for read operations when the O_DIRECT flag is set on the
# file.  Both the offset and length of the read operation must be
# multiples of the physical media sector size.
#
#options 	DIRECTIO

# Specify a lower limit for the number of swap I/O buffers.  They are
# (among other things) used when bypassing the buffer cache because the
# DIRECTIO kernel option is enabled and the O_DIRECT flag is set on a file.
#
#options 	NSWBUF_MIN=120

#####################################################################
# More undocumented options for linting.
# Note that documenting these is not considered an affront.

options 	CAM_DEBUG_DELAY

# VFS cluster debugging.
options 	CLUSTERDEBUG

options 	DEBUG

# Kernel filelock debugging.
options 	LOCKF_DEBUG

# System V compatible message queues
# Please note that the values provided here are used to test kernel
# building.  The defaults in the sources provide almost the same numbers.
# MSGSSZ must be a power of 2 between 8 and 1024.
options 	MSGMNB=2049	# Max number of chars in queue
options 	MSGMNI=41	# Max number of message queue identifiers
options 	MSGSEG=2049	# Max number of message segments
options 	MSGSSZ=16	# Size of a message segment
options 	MSGTQL=41	# Max number of messages in system

options 	NBUF=512	# Number of buffer headers

options 	SC_DEBUG_LEVEL=5	# Syscons debug level
options 	SC_RENDER_DEBUG	# syscons rendering debugging

options 	VFS_BIO_DEBUG	# VFS buffer I/O debugging

options 	KSTACK_MAX_PAGES=32	# Maximum pages to give the kernel stack
options 	KSTACK_USAGE_PROF

# Adaptec Array Controller driver options
options 	AAC_DEBUG	# Debugging levels:
				# 0 - quiet, only emit warnings
				# 1 - noisy, emit major function
				#     points and things done
				# 2 - extremely noisy, emit trace
				#     items in loops, etc.

# Resource Accounting
options 	RACCT

# Resource Limits
options 	RCTL
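# As a usage sketch (assuming a kernel built with RACCT/RCTL and booted
# with the kern.racct.enable=1 tunable set), limits can then be managed
# from userland with rctl(8), for example:
#
#	rctl -a user:www:memoryuse:deny=512m	# cap user www at 512MB
#	rctl -l user:www			# list rules for user www
#
# The rule syntax subject:subject-id:resource:action=amount is described
# in rctl(8).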
# Yet more undocumented options for linting.
# BKTR_ALLOC_PAGES has no effect except to cause warnings, and
# BROOKTREE_ALLOC_PAGES hasn't actually been superseded by it, since the
# driver still mostly spells this option BROOKTREE_ALLOC_PAGES.
##options 	BKTR_ALLOC_PAGES=(217*4+1)
options 	BROOKTREE_ALLOC_PAGES=(217*4+1)

options 	MAXFILES=999

# Random number generator
# Allow the CSPRNG algorithm to be loaded as a module.
#options 	RANDOM_LOADABLE
# Select this to allow high-rate but potentially expensive
# harvesting of Slab-Allocator entropy.  In very high-rate
# situations the value of doing this is dubious at best.
options 	RANDOM_ENABLE_UMA	# slab allocator

# Select this to allow high-rate but potentially expensive
# harvesting of the m_next pointer in the mbuf.  Note that
# the m_next pointer is NULL except when receiving > 4K
# jumbo frames or sustained bursts by way of LRO.  Thus in
# the common case it is stirring zero into the entropy
# pool.  In cases where it is not NULL it is pointing to one
# of a small (in the thousands to 10s of thousands) number
# of 256 byte aligned mbufs.  Hence it is, even in the best
# case, a poor source of entropy.  And in the absence of actual
# runtime analysis of entropy collection it may mislead the user
# into believing that substantially more entropy is being collected
# than in fact is - leading to a different class of security
# risk.  In high packet rate situations ethernet entropy
# collection is also very expensive, possibly leading to as
# much as a 50% drop in packets received.
# This option is present to maintain backwards compatibility
# if desired, however it cannot be recommended for use in any
# environment.
options 	RANDOM_ENABLE_ETHER	# ether_input

# Module to enable execution of applications via emulators like QEMU
options 	IMAGACT_BINMISC

# zlib I/O stream support
# This enables support for compressed core dumps.
options 	GZIO

# zstd I/O stream support
# This enables support for Zstd compressed core dumps.
options 	ZSTDIO

# BHND(4) drivers
options 	BHND_LOGLEVEL	# Logging threshold level

# evdev interface
device		evdev		# input event device support
options 	EVDEV_SUPPORT	# evdev support in legacy drivers
options 	EVDEV_DEBUG	# enable event debug msgs
device		uinput		# install /dev/uinput cdev
options 	UINPUT_DEBUG	# enable uinput debug msgs

# Encrypted kernel crash dumps.
options 	EKCD

# Serial Peripheral Interface (SPI) support.
device		spibus		# Bus support.
device		at45d		# DataFlash driver
device		cqspi		#
device		mx25l		# SPIFlash driver
device		n25q		#
device		spigen		# Generic access to SPI devices from userland.
# Enable legacy /dev/spigenN name aliases for /dev/spigenX.Y devices.
options 	SPIGEN_LEGACY_CDEVNAME	# legacy device names for spigen

Index: projects/import-googletest-1.8.1/sys/conf/files
===================================================================
--- projects/import-googletest-1.8.1/sys/conf/files	(revision 344270)
+++ projects/import-googletest-1.8.1/sys/conf/files	(revision 344271)
@@ -1,4994 +1,4996 @@
# $FreeBSD$
#
# The long compile-with and dependency lines are required because of
# limitations in config: backslash-newline doesn't work in strings, and
# dependency lines other than the first are silently ignored.
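# As a schematic illustration (not a real entry), a line in this file has
# the general shape:
#
# path/to/file.c	optional somedevice \
#	dependency "some/generated/input" \
#	compile-with "${NORMAL_C}" \
#	no-obj no-implicit-rule before-depend \
#	clean "generated-file.h"
#
# where "somedevice" and the file names are placeholders; everything past
# the optional/standard keyword may be omitted.  See config(8) and the
# real entries below for the exact rules.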
# acpi_quirks.h optional acpi \ dependency "$S/tools/acpi_quirks2h.awk $S/dev/acpica/acpi_quirks" \ compile-with "${AWK} -f $S/tools/acpi_quirks2h.awk $S/dev/acpica/acpi_quirks" \ no-obj no-implicit-rule before-depend \ clean "acpi_quirks.h" bhnd_nvram_map.h optional bhnd \ dependency "$S/dev/bhnd/tools/nvram_map_gen.sh $S/dev/bhnd/tools/nvram_map_gen.awk $S/dev/bhnd/nvram/nvram_map" \ compile-with "sh $S/dev/bhnd/tools/nvram_map_gen.sh $S/dev/bhnd/nvram/nvram_map -h" \ no-obj no-implicit-rule before-depend \ clean "bhnd_nvram_map.h" bhnd_nvram_map_data.h optional bhnd \ dependency "$S/dev/bhnd/tools/nvram_map_gen.sh $S/dev/bhnd/tools/nvram_map_gen.awk $S/dev/bhnd/nvram/nvram_map" \ compile-with "sh $S/dev/bhnd/tools/nvram_map_gen.sh $S/dev/bhnd/nvram/nvram_map -d" \ no-obj no-implicit-rule before-depend \ clean "bhnd_nvram_map_data.h" # # The 'fdt_dtb_file' target covers an actual DTB file name, which is derived # from the specified source (DTS) file: .dts -> .dtb # fdt_dtb_file optional fdt fdt_dtb_static \ compile-with "sh -c 'MACHINE=${MACHINE} $S/tools/fdt/make_dtb.sh $S ${FDT_DTS_FILE} ${.CURDIR}'" \ no-obj no-implicit-rule before-depend \ clean "${FDT_DTS_FILE:R}.dtb" fdt_static_dtb.h optional fdt fdt_dtb_static \ compile-with "sh -c 'MACHINE=${MACHINE} $S/tools/fdt/make_dtbh.sh ${FDT_DTS_FILE} ${.CURDIR}'" \ dependency "fdt_dtb_file" \ no-obj no-implicit-rule before-depend \ clean "fdt_static_dtb.h" feeder_eq_gen.h optional sound \ dependency "$S/tools/sound/feeder_eq_mkfilter.awk" \ compile-with "${AWK} -f $S/tools/sound/feeder_eq_mkfilter.awk -- ${FEEDER_EQ_PRESETS} > feeder_eq_gen.h" \ no-obj no-implicit-rule before-depend \ clean "feeder_eq_gen.h" feeder_rate_gen.h optional sound \ dependency "$S/tools/sound/feeder_rate_mkfilter.awk" \ compile-with "${AWK} -f $S/tools/sound/feeder_rate_mkfilter.awk -- ${FEEDER_RATE_PRESETS} > feeder_rate_gen.h" \ no-obj no-implicit-rule before-depend \ clean "feeder_rate_gen.h" snd_fxdiv_gen.h optional sound \ dependency "$S/tools/sound/snd_fxdiv_gen.awk" \ compile-with "${AWK} -f $S/tools/sound/snd_fxdiv_gen.awk -- > snd_fxdiv_gen.h" \ no-obj no-implicit-rule before-depend \ clean "snd_fxdiv_gen.h" miidevs.h optional miibus | mii \ dependency "$S/tools/miidevs2h.awk $S/dev/mii/miidevs" \ compile-with "${AWK} -f $S/tools/miidevs2h.awk $S/dev/mii/miidevs" \ no-obj no-implicit-rule before-depend \ clean "miidevs.h" pccarddevs.h standard \ dependency "$S/tools/pccarddevs2h.awk $S/dev/pccard/pccarddevs" \ compile-with "${AWK} -f $S/tools/pccarddevs2h.awk $S/dev/pccard/pccarddevs" \ no-obj no-implicit-rule before-depend \ clean "pccarddevs.h" kbdmuxmap.h optional kbdmux_dflt_keymap \ compile-with "kbdcontrol -P ${S:S/sys$/share/}/vt/keymaps -P ${S:S/sys$/share/}/syscons/keymaps -L ${KBDMUX_DFLT_KEYMAP} | sed -e 's/^static keymap_t.* = /static keymap_t key_map = /' -e 's/^static accentmap_t.* = /static accentmap_t accent_map = /' > kbdmuxmap.h" \ no-obj no-implicit-rule before-depend \ clean "kbdmuxmap.h" teken_state.h optional sc | vt \ dependency "$S/teken/gensequences $S/teken/sequences" \ compile-with "${AWK} -f $S/teken/gensequences $S/teken/sequences > teken_state.h" \ no-obj no-implicit-rule before-depend \ clean "teken_state.h" usbdevs.h optional usb \ dependency "$S/tools/usbdevs2h.awk $S/dev/usb/usbdevs" \ compile-with "${AWK} -f $S/tools/usbdevs2h.awk $S/dev/usb/usbdevs -h" \ no-obj no-implicit-rule before-depend \ clean "usbdevs.h" usbdevs_data.h optional usb \ dependency "$S/tools/usbdevs2h.awk $S/dev/usb/usbdevs" \ compile-with "${AWK} 
-f $S/tools/usbdevs2h.awk $S/dev/usb/usbdevs -d" \ no-obj no-implicit-rule before-depend \ clean "usbdevs_data.h" cam/cam.c optional scbus cam/cam_compat.c optional scbus cam/cam_iosched.c optional scbus cam/cam_periph.c optional scbus cam/cam_queue.c optional scbus cam/cam_sim.c optional scbus cam/cam_xpt.c optional scbus cam/ata/ata_all.c optional scbus cam/ata/ata_xpt.c optional scbus cam/ata/ata_pmp.c optional scbus cam/nvme/nvme_all.c optional scbus nvme cam/nvme/nvme_da.c optional scbus nvme da cam/nvme/nvme_xpt.c optional scbus nvme cam/scsi/scsi_xpt.c optional scbus cam/scsi/scsi_all.c optional scbus cam/scsi/scsi_cd.c optional cd cam/scsi/scsi_ch.c optional ch cam/ata/ata_da.c optional ada | da cam/ctl/ctl.c optional ctl cam/ctl/ctl_backend.c optional ctl cam/ctl/ctl_backend_block.c optional ctl cam/ctl/ctl_backend_ramdisk.c optional ctl cam/ctl/ctl_cmd_table.c optional ctl cam/ctl/ctl_frontend.c optional ctl cam/ctl/ctl_frontend_cam_sim.c optional ctl cam/ctl/ctl_frontend_ioctl.c optional ctl cam/ctl/ctl_frontend_iscsi.c optional ctl cfiscsi cam/ctl/ctl_ha.c optional ctl cam/ctl/ctl_scsi_all.c optional ctl cam/ctl/ctl_tpc.c optional ctl cam/ctl/ctl_tpc_local.c optional ctl cam/ctl/ctl_error.c optional ctl cam/ctl/ctl_util.c optional ctl cam/ctl/scsi_ctl.c optional ctl cam/mmc/mmc_xpt.c optional scbus mmccam cam/mmc/mmc_da.c optional scbus mmccam da cam/scsi/scsi_da.c optional da cam/scsi/scsi_pass.c optional pass cam/scsi/scsi_pt.c optional pt cam/scsi/scsi_sa.c optional sa cam/scsi/scsi_enc.c optional ses cam/scsi/scsi_enc_ses.c optional ses cam/scsi/scsi_enc_safte.c optional ses cam/scsi/scsi_sg.c optional sg cam/scsi/scsi_targ_bh.c optional targbh cam/scsi/scsi_target.c optional targ cam/scsi/smp_all.c optional scbus # shared between zfs and dtrace cddl/compat/opensolaris/kern/opensolaris.c optional zfs | dtrace compile-with "${CDDL_C}" cddl/compat/opensolaris/kern/opensolaris_cmn_err.c optional zfs | dtrace compile-with "${CDDL_C}" cddl/compat/opensolaris/kern/opensolaris_kmem.c optional zfs | dtrace compile-with "${CDDL_C}" cddl/compat/opensolaris/kern/opensolaris_misc.c optional zfs | dtrace compile-with "${CDDL_C}" cddl/compat/opensolaris/kern/opensolaris_proc.c optional zfs | dtrace compile-with "${CDDL_C}" cddl/compat/opensolaris/kern/opensolaris_sunddi.c optional zfs | dtrace compile-with "${CDDL_C}" cddl/compat/opensolaris/kern/opensolaris_taskq.c optional zfs | dtrace compile-with "${CDDL_C}" # zfs specific cddl/compat/opensolaris/kern/opensolaris_acl.c optional zfs compile-with "${ZFS_C}" cddl/compat/opensolaris/kern/opensolaris_dtrace.c optional zfs compile-with "${ZFS_C}" cddl/compat/opensolaris/kern/opensolaris_kobj.c optional zfs compile-with "${ZFS_C}" cddl/compat/opensolaris/kern/opensolaris_kstat.c optional zfs compile-with "${ZFS_C}" cddl/compat/opensolaris/kern/opensolaris_lookup.c optional zfs compile-with "${ZFS_C}" cddl/compat/opensolaris/kern/opensolaris_policy.c optional zfs compile-with "${ZFS_C}" cddl/compat/opensolaris/kern/opensolaris_string.c optional zfs compile-with "${ZFS_C}" cddl/compat/opensolaris/kern/opensolaris_sysevent.c optional zfs compile-with "${ZFS_C}" cddl/compat/opensolaris/kern/opensolaris_uio.c optional zfs compile-with "${ZFS_C}" cddl/compat/opensolaris/kern/opensolaris_vfs.c optional zfs compile-with "${ZFS_C}" cddl/compat/opensolaris/kern/opensolaris_vm.c optional zfs compile-with "${ZFS_C}" cddl/compat/opensolaris/kern/opensolaris_zone.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/common/acl/acl_common.c 
optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/common/avl/avl.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/common/nvpair/opensolaris_fnvpair.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/common/nvpair/opensolaris_nvpair.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/common/nvpair/opensolaris_nvpair_alloc_fixed.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/common/unicode/u8_textprep.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/common/zfs/zfeature_common.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/common/zfs/zfs_comutil.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/common/zfs/zfs_deleg.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/common/zfs/zfs_fletcher.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/common/zfs/zfs_ioctl_compat.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/common/zfs/zfs_namecheck.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/common/zfs/zfs_prop.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/common/zfs/zpool_prop.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/common/zfs/zprop_common.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/vnode.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/abd.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/aggsum.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/blkptr.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/bplist.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/bpobj.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/bptree.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/bqueue.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/cityhash.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/dbuf.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/dbuf_stats.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/ddt.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/ddt_zap.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/dmu.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_diff.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_object.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_objset.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_send.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_traverse.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_tx.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_zfetch.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/dnode.c optional zfs compile-with "${ZFS_C}" \ warning "kernel contains CDDL licensed ZFS filesystem" cddl/contrib/opensolaris/uts/common/fs/zfs/dnode_sync.c optional zfs compile-with "${ZFS_C}" 
cddl/contrib/opensolaris/uts/common/fs/zfs/dsl_bookmark.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/dsl_dataset.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/dsl_deadlist.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/dsl_deleg.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/dsl_destroy.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/dsl_dir.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/dsl_pool.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/dsl_prop.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/dsl_scan.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/dsl_userhold.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/dsl_synctask.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/gzip.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/lz4.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/lzjb.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/metaslab.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/multilist.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/range_tree.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/refcount.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/rrwlock.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/sa.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/sha256.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/skein_zfs.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/spa.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/spa_checkpoint.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/spa_config.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/spa_errlog.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/spa_history.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/spa_misc.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/space_map.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/space_reftree.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/trim_map.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/txg.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/uberblock.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/unique.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/vdev.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_cache.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_file.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_indirect.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_indirect_births.c optional zfs 
compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_indirect_mapping.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_initialize.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_geom.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_label.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_mirror.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_missing.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_queue.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_raidz.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_removal.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_root.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/zap.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/zap_leaf.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/zap_micro.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/zcp.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/zcp_get.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/zcp_global.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/zcp_iter.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/zcp_synctask.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/zfeature.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_acl.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_byteswap.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_ctldir.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_debug.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_dir.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_fm.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_fuid.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_ioctl.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_log.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_onexit.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_replay.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_rlock.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_sa.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vfsops.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_znode.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/zil.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/zio_checksum.c optional zfs compile-with "${ZFS_C}" 
cddl/contrib/opensolaris/uts/common/fs/zfs/zio_compress.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/zio_inject.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/zle.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/zrlock.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/zthr.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/zvol.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/os/callb.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/os/fm.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/os/list.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/os/nvpair_alloc_system.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/zmod/adler32.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/zmod/deflate.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/zmod/inffast.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/zmod/inflate.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/zmod/inftrees.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/zmod/opensolaris_crc32.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/zmod/trees.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/zmod/zmod.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/zmod/zmod_subr.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/zmod/zutil.c optional zfs compile-with "${ZFS_C}" # zfs lua support cddl/contrib/opensolaris/uts/common/fs/zfs/lua/lapi.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/lua/lauxlib.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/lua/lbaselib.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/lua/lbitlib.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/lua/lcode.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/lua/lcompat.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/lua/lcorolib.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/lua/lctype.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/lua/ldebug.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/lua/ldo.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/lua/ldump.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/lua/lfunc.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/lua/lgc.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/lua/llex.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/lua/lmem.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/lua/lobject.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/lua/lopcodes.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/lua/lparser.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/lua/lstate.c optional zfs compile-with "${ZFS_C}" 
cddl/contrib/opensolaris/uts/common/fs/zfs/lua/lstring.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/lua/lstrlib.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/lua/ltable.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/lua/ltablib.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/lua/ltm.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/lua/lundump.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/lua/lvm.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/lua/lzio.c optional zfs compile-with "${ZFS_C}" # dtrace specific cddl/contrib/opensolaris/uts/common/dtrace/dtrace.c optional dtrace compile-with "${DTRACE_C}" \ warning "kernel contains CDDL licensed DTRACE" cddl/contrib/opensolaris/uts/common/dtrace/dtrace_xoroshiro128_plus.c optional dtrace compile-with "${DTRACE_C}" cddl/dev/dtmalloc/dtmalloc.c optional dtmalloc | dtraceall compile-with "${CDDL_C}" cddl/dev/profile/profile.c optional dtrace_profile | dtraceall compile-with "${CDDL_C}" cddl/dev/sdt/sdt.c optional dtrace_sdt | dtraceall compile-with "${CDDL_C}" cddl/dev/fbt/fbt.c optional dtrace_fbt | dtraceall compile-with "${FBT_C}" cddl/dev/systrace/systrace.c optional dtrace_systrace | dtraceall compile-with "${CDDL_C}" cddl/dev/prototype.c optional dtrace_prototype | dtraceall compile-with "${CDDL_C}" fs/nfsclient/nfs_clkdtrace.c optional dtnfscl nfscl | dtraceall nfscl compile-with "${CDDL_C}" compat/cloudabi/cloudabi_clock.c optional compat_cloudabi32 | compat_cloudabi64 compat/cloudabi/cloudabi_errno.c optional compat_cloudabi32 | compat_cloudabi64 compat/cloudabi/cloudabi_fd.c optional compat_cloudabi32 | compat_cloudabi64 compat/cloudabi/cloudabi_file.c optional compat_cloudabi32 | compat_cloudabi64 compat/cloudabi/cloudabi_futex.c optional compat_cloudabi32 | compat_cloudabi64 compat/cloudabi/cloudabi_mem.c optional compat_cloudabi32 | compat_cloudabi64 compat/cloudabi/cloudabi_proc.c optional compat_cloudabi32 | compat_cloudabi64 compat/cloudabi/cloudabi_random.c optional compat_cloudabi32 | compat_cloudabi64 compat/cloudabi/cloudabi_sock.c optional compat_cloudabi32 | compat_cloudabi64 compat/cloudabi/cloudabi_thread.c optional compat_cloudabi32 | compat_cloudabi64 compat/cloudabi/cloudabi_vdso.c optional compat_cloudabi32 | compat_cloudabi64 compat/cloudabi32/cloudabi32_fd.c optional compat_cloudabi32 compat/cloudabi32/cloudabi32_module.c optional compat_cloudabi32 compat/cloudabi32/cloudabi32_poll.c optional compat_cloudabi32 compat/cloudabi32/cloudabi32_sock.c optional compat_cloudabi32 compat/cloudabi32/cloudabi32_syscalls.c optional compat_cloudabi32 compat/cloudabi32/cloudabi32_sysent.c optional compat_cloudabi32 compat/cloudabi32/cloudabi32_thread.c optional compat_cloudabi32 compat/cloudabi64/cloudabi64_fd.c optional compat_cloudabi64 compat/cloudabi64/cloudabi64_module.c optional compat_cloudabi64 compat/cloudabi64/cloudabi64_poll.c optional compat_cloudabi64 compat/cloudabi64/cloudabi64_sock.c optional compat_cloudabi64 compat/cloudabi64/cloudabi64_syscalls.c optional compat_cloudabi64 compat/cloudabi64/cloudabi64_sysent.c optional compat_cloudabi64 compat/cloudabi64/cloudabi64_thread.c optional compat_cloudabi64 compat/freebsd32/freebsd32_capability.c optional compat_freebsd32 compat/freebsd32/freebsd32_ioctl.c optional compat_freebsd32 compat/freebsd32/freebsd32_misc.c optional 
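#
# Note: entries in this list follow the files(5) grammar: a source path, a
# "standard" or "optional <option ...>" build condition, and optional clauses
# such as compile-with (a custom build rule), warning (a message attached to
# kernels that include the file, as on the dtrace.c entry above), dependency,
# and clean.  As a minimal sketch, a hypothetical driver "foo" would be
# wired in as:
#
#	dev/foo/foo.c	optional foo pci \
#		compile-with "${NORMAL_C} -I$S/dev/foo"
#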
compat/freebsd32/freebsd32_syscalls.c	optional compat_freebsd32
compat/freebsd32/freebsd32_sysent.c	optional compat_freebsd32
contrib/ck/src/ck_array.c	standard compile-with "${NORMAL_C} -I$S/contrib/ck/include"
contrib/ck/src/ck_barrier_centralized.c	standard compile-with "${NORMAL_C} -I$S/contrib/ck/include"
contrib/ck/src/ck_barrier_combining.c	standard compile-with "${NORMAL_C} -I$S/contrib/ck/include"
contrib/ck/src/ck_barrier_dissemination.c	standard compile-with "${NORMAL_C} -I$S/contrib/ck/include"
contrib/ck/src/ck_barrier_mcs.c	standard compile-with "${NORMAL_C} -I$S/contrib/ck/include"
contrib/ck/src/ck_barrier_tournament.c	standard compile-with "${NORMAL_C} -I$S/contrib/ck/include"
contrib/ck/src/ck_epoch.c	standard compile-with "${NORMAL_C} -I$S/contrib/ck/include"
contrib/ck/src/ck_hp.c	standard compile-with "${NORMAL_C} -I$S/contrib/ck/include"
contrib/ck/src/ck_hs.c	standard compile-with "${NORMAL_C} -I$S/contrib/ck/include"
contrib/ck/src/ck_ht.c	standard compile-with "${NORMAL_C} -I$S/contrib/ck/include"
contrib/ck/src/ck_rhs.c	standard compile-with "${NORMAL_C} -I$S/contrib/ck/include"
contrib/dev/acpica/common/ahids.c	optional acpi acpi_debug
contrib/dev/acpica/common/ahuuids.c	optional acpi acpi_debug
contrib/dev/acpica/components/debugger/dbcmds.c	optional acpi acpi_debug
contrib/dev/acpica/components/debugger/dbconvert.c	optional acpi acpi_debug
contrib/dev/acpica/components/debugger/dbdisply.c	optional acpi acpi_debug
contrib/dev/acpica/components/debugger/dbexec.c	optional acpi acpi_debug
contrib/dev/acpica/components/debugger/dbhistry.c	optional acpi acpi_debug
contrib/dev/acpica/components/debugger/dbinput.c	optional acpi acpi_debug
contrib/dev/acpica/components/debugger/dbmethod.c	optional acpi acpi_debug
contrib/dev/acpica/components/debugger/dbnames.c	optional acpi acpi_debug
contrib/dev/acpica/components/debugger/dbobject.c	optional acpi acpi_debug
contrib/dev/acpica/components/debugger/dbstats.c	optional acpi acpi_debug
contrib/dev/acpica/components/debugger/dbtest.c	optional acpi acpi_debug
contrib/dev/acpica/components/debugger/dbutils.c	optional acpi acpi_debug
contrib/dev/acpica/components/debugger/dbxface.c	optional acpi acpi_debug
contrib/dev/acpica/components/disassembler/dmbuffer.c	optional acpi acpi_debug
contrib/dev/acpica/components/disassembler/dmcstyle.c	optional acpi acpi_debug
contrib/dev/acpica/components/disassembler/dmdeferred.c	optional acpi acpi_debug
contrib/dev/acpica/components/disassembler/dmnames.c	optional acpi acpi_debug
contrib/dev/acpica/components/disassembler/dmopcode.c	optional acpi acpi_debug
contrib/dev/acpica/components/disassembler/dmresrc.c	optional acpi acpi_debug
contrib/dev/acpica/components/disassembler/dmresrcl.c	optional acpi acpi_debug
contrib/dev/acpica/components/disassembler/dmresrcl2.c	optional acpi acpi_debug
contrib/dev/acpica/components/disassembler/dmresrcs.c	optional acpi acpi_debug
contrib/dev/acpica/components/disassembler/dmutils.c	optional acpi acpi_debug
contrib/dev/acpica/components/disassembler/dmwalk.c	optional acpi acpi_debug
contrib/dev/acpica/components/dispatcher/dsargs.c	optional acpi
contrib/dev/acpica/components/dispatcher/dscontrol.c	optional acpi
contrib/dev/acpica/components/dispatcher/dsdebug.c	optional acpi
contrib/dev/acpica/components/dispatcher/dsfield.c	optional acpi
contrib/dev/acpica/components/dispatcher/dsinit.c	optional acpi
contrib/dev/acpica/components/dispatcher/dsmethod.c	optional acpi
contrib/dev/acpica/components/dispatcher/dsmthdat.c	optional acpi
contrib/dev/acpica/components/dispatcher/dsobject.c	optional acpi
contrib/dev/acpica/components/dispatcher/dsopcode.c	optional acpi
contrib/dev/acpica/components/dispatcher/dspkginit.c	optional acpi
contrib/dev/acpica/components/dispatcher/dsutils.c	optional acpi
contrib/dev/acpica/components/dispatcher/dswexec.c	optional acpi
contrib/dev/acpica/components/dispatcher/dswload.c	optional acpi
contrib/dev/acpica/components/dispatcher/dswload2.c	optional acpi
contrib/dev/acpica/components/dispatcher/dswscope.c	optional acpi
contrib/dev/acpica/components/dispatcher/dswstate.c	optional acpi
contrib/dev/acpica/components/events/evevent.c	optional acpi
contrib/dev/acpica/components/events/evglock.c	optional acpi
contrib/dev/acpica/components/events/evgpe.c	optional acpi
contrib/dev/acpica/components/events/evgpeblk.c	optional acpi
contrib/dev/acpica/components/events/evgpeinit.c	optional acpi
contrib/dev/acpica/components/events/evgpeutil.c	optional acpi
contrib/dev/acpica/components/events/evhandler.c	optional acpi
contrib/dev/acpica/components/events/evmisc.c	optional acpi
contrib/dev/acpica/components/events/evregion.c	optional acpi
contrib/dev/acpica/components/events/evrgnini.c	optional acpi
contrib/dev/acpica/components/events/evsci.c	optional acpi
contrib/dev/acpica/components/events/evxface.c	optional acpi
contrib/dev/acpica/components/events/evxfevnt.c	optional acpi
contrib/dev/acpica/components/events/evxfgpe.c	optional acpi
contrib/dev/acpica/components/events/evxfregn.c	optional acpi
contrib/dev/acpica/components/executer/exconcat.c	optional acpi
contrib/dev/acpica/components/executer/exconfig.c	optional acpi
contrib/dev/acpica/components/executer/exconvrt.c	optional acpi
contrib/dev/acpica/components/executer/excreate.c	optional acpi
contrib/dev/acpica/components/executer/exdebug.c	optional acpi
contrib/dev/acpica/components/executer/exdump.c	optional acpi
contrib/dev/acpica/components/executer/exfield.c	optional acpi
contrib/dev/acpica/components/executer/exfldio.c	optional acpi
contrib/dev/acpica/components/executer/exmisc.c	optional acpi
contrib/dev/acpica/components/executer/exmutex.c	optional acpi
contrib/dev/acpica/components/executer/exnames.c	optional acpi
contrib/dev/acpica/components/executer/exoparg1.c	optional acpi
contrib/dev/acpica/components/executer/exoparg2.c	optional acpi
contrib/dev/acpica/components/executer/exoparg3.c	optional acpi
contrib/dev/acpica/components/executer/exoparg6.c	optional acpi
contrib/dev/acpica/components/executer/exprep.c	optional acpi
contrib/dev/acpica/components/executer/exregion.c	optional acpi
contrib/dev/acpica/components/executer/exresnte.c	optional acpi
contrib/dev/acpica/components/executer/exresolv.c	optional acpi
contrib/dev/acpica/components/executer/exresop.c	optional acpi
contrib/dev/acpica/components/executer/exserial.c	optional acpi
contrib/dev/acpica/components/executer/exstore.c	optional acpi
contrib/dev/acpica/components/executer/exstoren.c	optional acpi
contrib/dev/acpica/components/executer/exstorob.c	optional acpi
contrib/dev/acpica/components/executer/exsystem.c	optional acpi
contrib/dev/acpica/components/executer/extrace.c	optional acpi
contrib/dev/acpica/components/executer/exutils.c	optional acpi
contrib/dev/acpica/components/hardware/hwacpi.c	optional acpi
contrib/dev/acpica/components/hardware/hwesleep.c	optional acpi
contrib/dev/acpica/components/hardware/hwgpe.c	optional acpi
contrib/dev/acpica/components/hardware/hwpci.c	optional acpi
contrib/dev/acpica/components/hardware/hwregs.c	optional acpi
contrib/dev/acpica/components/hardware/hwsleep.c	optional acpi
contrib/dev/acpica/components/hardware/hwtimer.c	optional acpi
contrib/dev/acpica/components/hardware/hwvalid.c	optional acpi
contrib/dev/acpica/components/hardware/hwxface.c	optional acpi
contrib/dev/acpica/components/hardware/hwxfsleep.c	optional acpi
contrib/dev/acpica/components/namespace/nsaccess.c	optional acpi
contrib/dev/acpica/components/namespace/nsalloc.c	optional acpi
contrib/dev/acpica/components/namespace/nsarguments.c	optional acpi
contrib/dev/acpica/components/namespace/nsconvert.c	optional acpi
contrib/dev/acpica/components/namespace/nsdump.c	optional acpi
contrib/dev/acpica/components/namespace/nseval.c	optional acpi
contrib/dev/acpica/components/namespace/nsinit.c	optional acpi
contrib/dev/acpica/components/namespace/nsload.c	optional acpi
contrib/dev/acpica/components/namespace/nsnames.c	optional acpi
contrib/dev/acpica/components/namespace/nsobject.c	optional acpi
contrib/dev/acpica/components/namespace/nsparse.c	optional acpi
contrib/dev/acpica/components/namespace/nspredef.c	optional acpi
contrib/dev/acpica/components/namespace/nsprepkg.c	optional acpi
contrib/dev/acpica/components/namespace/nsrepair.c	optional acpi
contrib/dev/acpica/components/namespace/nsrepair2.c	optional acpi
contrib/dev/acpica/components/namespace/nssearch.c	optional acpi
contrib/dev/acpica/components/namespace/nsutils.c	optional acpi
contrib/dev/acpica/components/namespace/nswalk.c	optional acpi
contrib/dev/acpica/components/namespace/nsxfeval.c	optional acpi
contrib/dev/acpica/components/namespace/nsxfname.c	optional acpi
contrib/dev/acpica/components/namespace/nsxfobj.c	optional acpi
contrib/dev/acpica/components/parser/psargs.c	optional acpi
contrib/dev/acpica/components/parser/psloop.c	optional acpi
contrib/dev/acpica/components/parser/psobject.c	optional acpi
contrib/dev/acpica/components/parser/psopcode.c	optional acpi
contrib/dev/acpica/components/parser/psopinfo.c	optional acpi
contrib/dev/acpica/components/parser/psparse.c	optional acpi
contrib/dev/acpica/components/parser/psscope.c	optional acpi
contrib/dev/acpica/components/parser/pstree.c	optional acpi
contrib/dev/acpica/components/parser/psutils.c	optional acpi
contrib/dev/acpica/components/parser/pswalk.c	optional acpi
contrib/dev/acpica/components/parser/psxface.c	optional acpi
contrib/dev/acpica/components/resources/rsaddr.c	optional acpi
contrib/dev/acpica/components/resources/rscalc.c	optional acpi
contrib/dev/acpica/components/resources/rscreate.c	optional acpi
contrib/dev/acpica/components/resources/rsdump.c	optional acpi acpi_debug
contrib/dev/acpica/components/resources/rsdumpinfo.c	optional acpi
contrib/dev/acpica/components/resources/rsinfo.c	optional acpi
contrib/dev/acpica/components/resources/rsio.c	optional acpi
contrib/dev/acpica/components/resources/rsirq.c	optional acpi
contrib/dev/acpica/components/resources/rslist.c	optional acpi
contrib/dev/acpica/components/resources/rsmemory.c	optional acpi
contrib/dev/acpica/components/resources/rsmisc.c	optional acpi
contrib/dev/acpica/components/resources/rsserial.c	optional acpi
contrib/dev/acpica/components/resources/rsutils.c	optional acpi
contrib/dev/acpica/components/resources/rsxface.c	optional acpi
contrib/dev/acpica/components/tables/tbdata.c	optional acpi
contrib/dev/acpica/components/tables/tbfadt.c	optional acpi
contrib/dev/acpica/components/tables/tbfind.c	optional acpi
contrib/dev/acpica/components/tables/tbinstal.c	optional acpi
contrib/dev/acpica/components/tables/tbprint.c	optional acpi
contrib/dev/acpica/components/tables/tbutils.c	optional acpi
contrib/dev/acpica/components/tables/tbxface.c	optional acpi
contrib/dev/acpica/components/tables/tbxfload.c	optional acpi
contrib/dev/acpica/components/tables/tbxfroot.c	optional acpi
contrib/dev/acpica/components/utilities/utaddress.c	optional acpi
contrib/dev/acpica/components/utilities/utalloc.c	optional acpi
contrib/dev/acpica/components/utilities/utascii.c	optional acpi
contrib/dev/acpica/components/utilities/utbuffer.c	optional acpi
contrib/dev/acpica/components/utilities/utcache.c	optional acpi
contrib/dev/acpica/components/utilities/utcopy.c	optional acpi
contrib/dev/acpica/components/utilities/utdebug.c	optional acpi
contrib/dev/acpica/components/utilities/utdecode.c	optional acpi
contrib/dev/acpica/components/utilities/utdelete.c	optional acpi
contrib/dev/acpica/components/utilities/uterror.c	optional acpi
contrib/dev/acpica/components/utilities/uteval.c	optional acpi
contrib/dev/acpica/components/utilities/utexcep.c	optional acpi
contrib/dev/acpica/components/utilities/utglobal.c	optional acpi
contrib/dev/acpica/components/utilities/uthex.c	optional acpi
contrib/dev/acpica/components/utilities/utids.c	optional acpi
contrib/dev/acpica/components/utilities/utinit.c	optional acpi
contrib/dev/acpica/components/utilities/utlock.c	optional acpi
contrib/dev/acpica/components/utilities/utmath.c	optional acpi
contrib/dev/acpica/components/utilities/utmisc.c	optional acpi
contrib/dev/acpica/components/utilities/utmutex.c	optional acpi
contrib/dev/acpica/components/utilities/utnonansi.c	optional acpi
contrib/dev/acpica/components/utilities/utobject.c	optional acpi
contrib/dev/acpica/components/utilities/utosi.c	optional acpi
contrib/dev/acpica/components/utilities/utownerid.c	optional acpi
contrib/dev/acpica/components/utilities/utpredef.c	optional acpi
contrib/dev/acpica/components/utilities/utresdecode.c	optional acpi acpi_debug
contrib/dev/acpica/components/utilities/utresrc.c	optional acpi
contrib/dev/acpica/components/utilities/utstate.c	optional acpi
contrib/dev/acpica/components/utilities/utstring.c	optional acpi
contrib/dev/acpica/components/utilities/utstrsuppt.c	optional acpi
contrib/dev/acpica/components/utilities/utstrtoul64.c	optional acpi
contrib/dev/acpica/components/utilities/utuuid.c	optional acpi acpi_debug
contrib/dev/acpica/components/utilities/utxface.c	optional acpi
contrib/dev/acpica/components/utilities/utxferror.c	optional acpi
contrib/dev/acpica/components/utilities/utxfinit.c	optional acpi
contrib/dev/acpica/os_specific/service_layers/osgendbg.c	optional acpi acpi_debug
contrib/ipfilter/netinet/fil.c	optional ipfilter inet \
	compile-with "${NORMAL_C} ${NO_WSELF_ASSIGN} -Wno-unused -I$S/contrib/ipfilter"
contrib/ipfilter/netinet/ip_auth.c	optional ipfilter inet \
	compile-with "${NORMAL_C} -Wno-unused -I$S/contrib/ipfilter"
contrib/ipfilter/netinet/ip_fil_freebsd.c	optional ipfilter inet \
	compile-with "${NORMAL_C} -Wno-unused -I$S/contrib/ipfilter"
contrib/ipfilter/netinet/ip_frag.c	optional ipfilter inet \
	compile-with "${NORMAL_C} -Wno-unused -I$S/contrib/ipfilter"
contrib/ipfilter/netinet/ip_log.c	optional ipfilter inet \
	compile-with "${NORMAL_C} -I$S/contrib/ipfilter"
contrib/ipfilter/netinet/ip_nat.c	optional ipfilter inet \
	compile-with "${NORMAL_C} -Wno-unused -I$S/contrib/ipfilter"
contrib/ipfilter/netinet/ip_proxy.c	optional ipfilter inet \
	compile-with "${NORMAL_C} ${NO_WSELF_ASSIGN} -Wno-unused -I$S/contrib/ipfilter"
contrib/ipfilter/netinet/ip_state.c	optional ipfilter inet \
	compile-with "${NORMAL_C} -Wno-unused -I$S/contrib/ipfilter"
contrib/ipfilter/netinet/ip_lookup.c	optional ipfilter inet \
	compile-with "${NORMAL_C} ${NO_WSELF_ASSIGN} -Wno-unused -Wno-error -I$S/contrib/ipfilter"
contrib/ipfilter/netinet/ip_pool.c	optional ipfilter inet \
	compile-with "${NORMAL_C} -Wno-unused -I$S/contrib/ipfilter"
contrib/ipfilter/netinet/ip_htable.c	optional ipfilter inet \
	compile-with "${NORMAL_C} -Wno-unused -I$S/contrib/ipfilter ${NO_WTAUTOLOGICAL_POINTER_COMPARE}"
contrib/ipfilter/netinet/ip_sync.c	optional ipfilter inet \
	compile-with "${NORMAL_C} -Wno-unused -I$S/contrib/ipfilter"
contrib/ipfilter/netinet/mlfk_ipl.c	optional ipfilter inet \
	compile-with "${NORMAL_C} -I$S/contrib/ipfilter"
contrib/ipfilter/netinet/ip_nat6.c	optional ipfilter inet \
	compile-with "${NORMAL_C} -Wno-unused -I$S/contrib/ipfilter"
contrib/ipfilter/netinet/ip_rules.c	optional ipfilter inet \
	compile-with "${NORMAL_C} -I$S/contrib/ipfilter"
contrib/ipfilter/netinet/ip_scan.c	optional ipfilter inet \
	compile-with "${NORMAL_C} -Wno-unused -I$S/contrib/ipfilter"
contrib/ipfilter/netinet/ip_dstlist.c	optional ipfilter inet \
	compile-with "${NORMAL_C} -Wno-unused -I$S/contrib/ipfilter"
contrib/ipfilter/netinet/radix_ipf.c	optional ipfilter inet \
	compile-with "${NORMAL_C} -I$S/contrib/ipfilter"
contrib/libfdt/fdt.c	optional fdt
contrib/libfdt/fdt_ro.c	optional fdt
contrib/libfdt/fdt_rw.c	optional fdt
contrib/libfdt/fdt_strerror.c	optional fdt
contrib/libfdt/fdt_sw.c	optional fdt
contrib/libfdt/fdt_wip.c	optional fdt
contrib/libnv/cnvlist.c	standard
contrib/libnv/dnvlist.c	standard
contrib/libnv/nvlist.c	standard
contrib/libnv/nvpair.c	standard
contrib/ngatm/netnatm/api/cc_conn.c	optional ngatm_ccatm \
	compile-with "${NORMAL_C_NOWERROR} -I$S/contrib/ngatm"
contrib/ngatm/netnatm/api/cc_data.c	optional ngatm_ccatm \
	compile-with "${NORMAL_C} -I$S/contrib/ngatm"
contrib/ngatm/netnatm/api/cc_dump.c	optional ngatm_ccatm \
	compile-with "${NORMAL_C} -I$S/contrib/ngatm"
contrib/ngatm/netnatm/api/cc_port.c	optional ngatm_ccatm \
	compile-with "${NORMAL_C} -I$S/contrib/ngatm"
contrib/ngatm/netnatm/api/cc_sig.c	optional ngatm_ccatm \
	compile-with "${NORMAL_C} -I$S/contrib/ngatm"
contrib/ngatm/netnatm/api/cc_user.c	optional ngatm_ccatm \
	compile-with "${NORMAL_C} -I$S/contrib/ngatm"
contrib/ngatm/netnatm/api/unisap.c	optional ngatm_ccatm \
	compile-with "${NORMAL_C} -I$S/contrib/ngatm"
contrib/ngatm/netnatm/misc/straddr.c	optional ngatm_atmbase \
	compile-with "${NORMAL_C} -I$S/contrib/ngatm"
contrib/ngatm/netnatm/misc/unimsg_common.c	optional ngatm_atmbase \
	compile-with "${NORMAL_C} -I$S/contrib/ngatm"
contrib/ngatm/netnatm/msg/traffic.c	optional ngatm_atmbase \
	compile-with "${NORMAL_C} -I$S/contrib/ngatm"
contrib/ngatm/netnatm/msg/uni_ie.c	optional ngatm_atmbase \
	compile-with "${NORMAL_C} -I$S/contrib/ngatm"
contrib/ngatm/netnatm/msg/uni_msg.c	optional ngatm_atmbase \
	compile-with "${NORMAL_C} -I$S/contrib/ngatm"
contrib/ngatm/netnatm/saal/saal_sscfu.c	optional ngatm_sscfu \
	compile-with "${NORMAL_C} -I$S/contrib/ngatm"
contrib/ngatm/netnatm/saal/saal_sscop.c	optional ngatm_sscop \
	compile-with "${NORMAL_C} -I$S/contrib/ngatm"
contrib/ngatm/netnatm/sig/sig_call.c	optional ngatm_uni \
	compile-with "${NORMAL_C} -I$S/contrib/ngatm"
contrib/ngatm/netnatm/sig/sig_coord.c	optional ngatm_uni \
	compile-with "${NORMAL_C} -I$S/contrib/ngatm"
contrib/ngatm/netnatm/sig/sig_party.c	optional ngatm_uni \
	compile-with "${NORMAL_C} -I$S/contrib/ngatm"
contrib/ngatm/netnatm/sig/sig_print.c	optional ngatm_uni \
	compile-with "${NORMAL_C} -I$S/contrib/ngatm"
contrib/ngatm/netnatm/sig/sig_reset.c	optional ngatm_uni \
	compile-with "${NORMAL_C} -I$S/contrib/ngatm"
contrib/ngatm/netnatm/sig/sig_uni.c	optional ngatm_uni \
	compile-with "${NORMAL_C} -I$S/contrib/ngatm"
contrib/ngatm/netnatm/sig/sig_unimsgcpy.c	optional ngatm_uni \
	compile-with "${NORMAL_C} -I$S/contrib/ngatm"
contrib/ngatm/netnatm/sig/sig_verify.c	optional ngatm_uni \
	compile-with "${NORMAL_C} -I$S/contrib/ngatm"
# Zstd
contrib/zstd/lib/freebsd/zstd_kmalloc.c	optional zstdio compile-with ${ZSTD_C}
contrib/zstd/lib/common/zstd_common.c	optional zstdio compile-with ${ZSTD_C}
contrib/zstd/lib/common/fse_decompress.c	optional zstdio compile-with ${ZSTD_C}
contrib/zstd/lib/common/entropy_common.c	optional zstdio compile-with ${ZSTD_C}
contrib/zstd/lib/common/error_private.c	optional zstdio compile-with ${ZSTD_C}
contrib/zstd/lib/common/xxhash.c	optional zstdio compile-with ${ZSTD_C}
contrib/zstd/lib/compress/zstd_compress.c	optional zstdio compile-with ${ZSTD_C}
contrib/zstd/lib/compress/fse_compress.c	optional zstdio compile-with ${ZSTD_C}
contrib/zstd/lib/compress/hist.c	optional zstdio compile-with ${ZSTD_C}
contrib/zstd/lib/compress/huf_compress.c	optional zstdio compile-with ${ZSTD_C}
contrib/zstd/lib/compress/zstd_double_fast.c	optional zstdio compile-with ${ZSTD_C}
contrib/zstd/lib/compress/zstd_fast.c	optional zstdio compile-with ${ZSTD_C}
contrib/zstd/lib/compress/zstd_lazy.c	optional zstdio compile-with ${ZSTD_C}
contrib/zstd/lib/compress/zstd_ldm.c	optional zstdio compile-with ${ZSTD_C}
contrib/zstd/lib/compress/zstd_opt.c	optional zstdio compile-with ${ZSTD_C}
contrib/zstd/lib/decompress/zstd_ddict.c	optional zstdio compile-with ${ZSTD_C}
contrib/zstd/lib/decompress/zstd_decompress.c	optional zstdio compile-with ${ZSTD_C}
contrib/zstd/lib/decompress/zstd_decompress_block.c	optional zstdio compile-with ${ZSTD_C}
contrib/zstd/lib/decompress/huf_decompress.c	optional zstdio compile-with ${ZSTD_C}
# Blake 2
contrib/libb2/blake2b-ref.c	optional crypto | ipsec | ipsec_support \
	compile-with "${NORMAL_C} -I$S/crypto/blake2 -Wno-cast-qual -DSUFFIX=_ref -Wno-unused-function"
contrib/libb2/blake2s-ref.c	optional crypto | ipsec | ipsec_support \
	compile-with "${NORMAL_C} -I$S/crypto/blake2 -Wno-cast-qual -DSUFFIX=_ref -Wno-unused-function"
crypto/blake2/blake2-sw.c	optional crypto | ipsec | ipsec_support \
	compile-with "${NORMAL_C} -I$S/crypto/blake2 -Wno-cast-qual"
crypto/blowfish/bf_ecb.c	optional ipsec | ipsec_support
crypto/blowfish/bf_skey.c	optional crypto | ipsec | ipsec_support
crypto/camellia/camellia.c	optional crypto | ipsec | ipsec_support
crypto/camellia/camellia-api.c	optional crypto | ipsec | ipsec_support
crypto/chacha20/chacha.c	optional crypto | ipsec | ipsec_support
crypto/chacha20/chacha-sw.c	optional crypto | ipsec | ipsec_support
crypto/des/des_ecb.c	optional crypto | ipsec | ipsec_support | netsmb
crypto/des/des_setkey.c	optional crypto | ipsec | ipsec_support | netsmb
crypto/rc4/rc4.c	optional netgraph_mppc_encryption | kgssapi
crypto/rijndael/rijndael-alg-fst.c	optional crypto | ekcd | geom_bde | \
	ipsec | ipsec_support | random !random_loadable | wlan_ccmp
crypto/rijndael/rijndael-api-fst.c	optional ekcd | geom_bde | random !random_loadable
crypto/rijndael/rijndael-api.c	optional crypto | ipsec | ipsec_support | \
	wlan_ccmp
crypto/sha1.c	optional carp | crypto | ipsec | \
	ipsec_support | netgraph_mppc_encryption | sctp
crypto/sha2/sha256c.c	optional crypto | ekcd | geom_bde | ipsec | \
	ipsec_support | random !random_loadable | sctp | zfs
crypto/sha2/sha512c.c	optional crypto | geom_bde | ipsec | \
	ipsec_support | zfs
crypto/skein/skein.c	optional crypto | zfs
crypto/skein/skein_block.c	optional crypto | zfs
crypto/siphash/siphash.c	optional inet | inet6
crypto/siphash/siphash_test.c	optional inet | inet6
ddb/db_access.c	optional ddb
ddb/db_break.c	optional ddb
ddb/db_capture.c	optional ddb
ddb/db_command.c	optional ddb
ddb/db_examine.c	optional ddb
ddb/db_expr.c	optional ddb
ddb/db_input.c	optional ddb
ddb/db_lex.c	optional ddb
ddb/db_main.c	optional ddb
ddb/db_output.c	optional ddb
ddb/db_print.c	optional ddb
ddb/db_ps.c	optional ddb
ddb/db_run.c	optional ddb
ddb/db_script.c	optional ddb
ddb/db_sym.c	optional ddb
ddb/db_thread.c	optional ddb
ddb/db_textdump.c	optional ddb
ddb/db_variables.c	optional ddb
ddb/db_watch.c	optional ddb
ddb/db_write_cmd.c	optional ddb
dev/aac/aac.c	optional aac
dev/aac/aac_cam.c	optional aacp aac
dev/aac/aac_debug.c	optional aac
dev/aac/aac_disk.c	optional aac
dev/aac/aac_linux.c	optional aac compat_linux
dev/aac/aac_pci.c	optional aac pci
dev/aacraid/aacraid.c	optional aacraid
dev/aacraid/aacraid_cam.c	optional aacraid scbus
dev/aacraid/aacraid_debug.c	optional aacraid
dev/aacraid/aacraid_linux.c	optional aacraid compat_linux
dev/aacraid/aacraid_pci.c	optional aacraid pci
dev/acpi_support/acpi_wmi.c	optional acpi_wmi acpi
dev/acpi_support/acpi_asus.c	optional acpi_asus acpi
dev/acpi_support/acpi_asus_wmi.c	optional acpi_asus_wmi acpi
dev/acpi_support/acpi_fujitsu.c	optional acpi_fujitsu acpi
dev/acpi_support/acpi_hp.c	optional acpi_hp acpi
dev/acpi_support/acpi_ibm.c	optional acpi_ibm acpi
dev/acpi_support/acpi_panasonic.c	optional acpi_panasonic acpi
dev/acpi_support/acpi_sony.c	optional acpi_sony acpi
dev/acpi_support/acpi_toshiba.c	optional acpi_toshiba acpi
dev/acpi_support/atk0110.c	optional aibs acpi
dev/acpica/Osd/OsdDebug.c	optional acpi
dev/acpica/Osd/OsdHardware.c	optional acpi
dev/acpica/Osd/OsdInterrupt.c	optional acpi
dev/acpica/Osd/OsdMemory.c	optional acpi
dev/acpica/Osd/OsdSchedule.c	optional acpi
dev/acpica/Osd/OsdStream.c	optional acpi
dev/acpica/Osd/OsdSynch.c	optional acpi
dev/acpica/Osd/OsdTable.c	optional acpi
dev/acpica/acpi.c	optional acpi
dev/acpica/acpi_acad.c	optional acpi
dev/acpica/acpi_battery.c	optional acpi
dev/acpica/acpi_button.c	optional acpi
dev/acpica/acpi_cmbat.c	optional acpi
dev/acpica/acpi_cpu.c	optional acpi
dev/acpica/acpi_ec.c	optional acpi
dev/acpica/acpi_isab.c	optional acpi isa
dev/acpica/acpi_lid.c	optional acpi
dev/acpica/acpi_package.c	optional acpi
dev/acpica/acpi_perf.c	optional acpi
dev/acpica/acpi_powerres.c	optional acpi
dev/acpica/acpi_quirk.c	optional acpi
dev/acpica/acpi_resource.c	optional acpi
dev/acpica/acpi_container.c	optional acpi
dev/acpica/acpi_smbat.c	optional acpi
dev/acpica/acpi_thermal.c	optional acpi
dev/acpica/acpi_throttle.c	optional acpi
dev/acpica/acpi_video.c	optional acpi_video acpi
dev/acpica/acpi_dock.c	optional acpi_dock acpi
dev/adlink/adlink.c	optional adlink
dev/ae/if_ae.c	optional ae pci
dev/age/if_age.c	optional age pci
dev/agp/agp.c	optional agp pci
dev/agp/agp_if.m	optional agp pci
dev/ahci/ahci.c	optional ahci
dev/ahci/ahciem.c	optional ahci
dev/ahci/ahci_pci.c	optional ahci pci
dev/aic7xxx/ahc_isa.c	optional ahc isa
dev/aic7xxx/ahc_pci.c	optional ahc pci \
	compile-with "${NORMAL_C} ${NO_WCONSTANT_CONVERSION}"
dev/aic7xxx/ahd_pci.c	optional ahd pci \
	compile-with "${NORMAL_C} ${NO_WCONSTANT_CONVERSION}"
dev/aic7xxx/aic7770.c	optional ahc
dev/aic7xxx/aic79xx.c	optional ahd pci
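#
# Note: in the dependency field, "|" separates alternatives while whitespace
# joins requirements: "optional crypto | ipsec | ipsec_support" above builds
# the file when any one of those options is configured, whereas a
# hypothetical sketch such as
#
#	dev/foo/foo_hash.c	optional foo | bar baz
#
# would build foo_hash.c for "device foo", or for "device bar" together with
# "device baz".
#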
dev/aic7xxx/aic79xx_osm.c	optional ahd pci
dev/aic7xxx/aic79xx_pci.c	optional ahd pci
dev/aic7xxx/aic79xx_reg_print.c	optional ahd pci ahd_reg_pretty_print
dev/aic7xxx/aic7xxx.c	optional ahc
dev/aic7xxx/aic7xxx_93cx6.c	optional ahc
dev/aic7xxx/aic7xxx_osm.c	optional ahc
dev/aic7xxx/aic7xxx_pci.c	optional ahc pci
dev/aic7xxx/aic7xxx_reg_print.c	optional ahc ahc_reg_pretty_print
dev/al_eth/al_eth.c	optional al_eth fdt \
	no-depend \
	compile-with "${CC} -c -o ${.TARGET} ${CFLAGS} -I$S/contrib/alpine-hal -I$S/contrib/alpine-hal/eth ${PROF} ${.IMPSRC}"
dev/al_eth/al_init_eth_lm.c	optional al_eth fdt \
	no-depend \
	compile-with "${CC} -c -o ${.TARGET} ${CFLAGS} -I$S/contrib/alpine-hal -I$S/contrib/alpine-hal/eth ${PROF} ${.IMPSRC}"
dev/al_eth/al_init_eth_kr.c	optional al_eth fdt \
	no-depend \
	compile-with "${CC} -c -o ${.TARGET} ${CFLAGS} -I$S/contrib/alpine-hal -I$S/contrib/alpine-hal/eth ${PROF} ${.IMPSRC}"
contrib/alpine-hal/al_hal_iofic.c	optional al_iofic \
	no-depend \
	compile-with "${CC} -c -o ${.TARGET} ${CFLAGS} -I$S/contrib/alpine-hal -I$S/contrib/alpine-hal/eth ${PROF} ${.IMPSRC}"
contrib/alpine-hal/al_hal_serdes_25g.c	optional al_serdes \
	no-depend \
	compile-with "${CC} -c -o ${.TARGET} ${CFLAGS} -I$S/contrib/alpine-hal -I$S/contrib/alpine-hal/eth ${PROF} ${.IMPSRC}"
contrib/alpine-hal/al_hal_serdes_hssp.c	optional al_serdes \
	no-depend \
	compile-with "${CC} -c -o ${.TARGET} ${CFLAGS} -I$S/contrib/alpine-hal -I$S/contrib/alpine-hal/eth ${PROF} ${.IMPSRC}"
contrib/alpine-hal/al_hal_udma_config.c	optional al_udma \
	no-depend \
	compile-with "${CC} -c -o ${.TARGET} ${CFLAGS} -I$S/contrib/alpine-hal -I$S/contrib/alpine-hal/eth ${PROF} ${.IMPSRC}"
contrib/alpine-hal/al_hal_udma_debug.c	optional al_udma \
	no-depend \
	compile-with "${CC} -c -o ${.TARGET} ${CFLAGS} -I$S/contrib/alpine-hal -I$S/contrib/alpine-hal/eth ${PROF} ${.IMPSRC}"
contrib/alpine-hal/al_hal_udma_iofic.c	optional al_udma \
	no-depend \
	compile-with "${CC} -c -o ${.TARGET} ${CFLAGS} -I$S/contrib/alpine-hal -I$S/contrib/alpine-hal/eth ${PROF} ${.IMPSRC}"
contrib/alpine-hal/al_hal_udma_main.c	optional al_udma \
	no-depend \
	compile-with "${CC} -c -o ${.TARGET} ${CFLAGS} -I$S/contrib/alpine-hal -I$S/contrib/alpine-hal/eth ${PROF} ${.IMPSRC}"
contrib/alpine-hal/al_serdes.c	optional al_serdes \
	no-depend \
	compile-with "${CC} -c -o ${.TARGET} ${CFLAGS} -I$S/contrib/alpine-hal -I$S/contrib/alpine-hal/eth ${PROF} ${.IMPSRC}"
contrib/alpine-hal/eth/al_hal_eth_kr.c	optional al_eth \
	no-depend \
	compile-with "${CC} -c -o ${.TARGET} ${CFLAGS} -I$S/contrib/alpine-hal -I$S/contrib/alpine-hal/eth ${PROF} ${.IMPSRC}"
contrib/alpine-hal/eth/al_hal_eth_main.c	optional al_eth \
	no-depend \
	compile-with "${CC} -c -o ${.TARGET} ${CFLAGS} -I$S/contrib/alpine-hal -I$S/contrib/alpine-hal/eth ${PROF} ${.IMPSRC}"
dev/alc/if_alc.c	optional alc pci
dev/ale/if_ale.c	optional ale pci
dev/alpm/alpm.c	optional alpm pci
dev/altera/avgen/altera_avgen.c	optional altera_avgen
dev/altera/avgen/altera_avgen_fdt.c	optional altera_avgen fdt
dev/altera/avgen/altera_avgen_nexus.c	optional altera_avgen
dev/altera/msgdma/msgdma.c	optional altera_msgdma xdma
dev/altera/sdcard/altera_sdcard.c	optional altera_sdcard
dev/altera/sdcard/altera_sdcard_disk.c	optional altera_sdcard
dev/altera/sdcard/altera_sdcard_io.c	optional altera_sdcard
dev/altera/sdcard/altera_sdcard_fdt.c	optional altera_sdcard fdt
dev/altera/sdcard/altera_sdcard_nexus.c	optional altera_sdcard
dev/altera/softdma/softdma.c	optional altera_softdma xdma fdt
dev/altera/pio/pio.c	optional altera_pio
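#
# Note: the al_eth/alpine-hal entries above spell out a full ${CC} command
# line instead of using ${NORMAL_C}, so extra include directories can be
# added, and mark the objects no-depend to skip dependency-file generation.
# The same shape, sketched for a hypothetical contrib HAL source:
#
#	contrib/foohal/foo_hal.c	optional foo \
#		no-depend \
#		compile-with "${CC} -c -o ${.TARGET} ${CFLAGS} -I$S/contrib/foohal ${PROF} ${.IMPSRC}"
#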
dev/altera/pio/pio_if.m	optional altera_pio
dev/amdpm/amdpm.c	optional amdpm pci | nfpm pci
dev/amdsmb/amdsmb.c	optional amdsmb pci
dev/amr/amr.c	optional amr
dev/amr/amr_cam.c	optional amrp amr
dev/amr/amr_disk.c	optional amr
dev/amr/amr_linux.c	optional amr compat_linux
dev/amr/amr_pci.c	optional amr pci
dev/an/if_an.c	optional an
dev/an/if_an_isa.c	optional an isa
dev/an/if_an_pccard.c	optional an pccard
dev/an/if_an_pci.c	optional an pci
#
dev/ata/ata_if.m	optional ata | atacore
dev/ata/ata-all.c	optional ata | atacore
dev/ata/ata-dma.c	optional ata | atacore
dev/ata/ata-lowlevel.c	optional ata | atacore
dev/ata/ata-sata.c	optional ata | atacore
dev/ata/ata-card.c	optional ata pccard | atapccard
dev/ata/ata-isa.c	optional ata isa | ataisa
dev/ata/ata-pci.c	optional ata pci | atapci
dev/ata/chipsets/ata-acard.c	optional ata pci | ataacard
dev/ata/chipsets/ata-acerlabs.c	optional ata pci | ataacerlabs
dev/ata/chipsets/ata-amd.c	optional ata pci | ataamd
dev/ata/chipsets/ata-ati.c	optional ata pci | ataati
dev/ata/chipsets/ata-cenatek.c	optional ata pci | atacenatek
dev/ata/chipsets/ata-cypress.c	optional ata pci | atacypress
dev/ata/chipsets/ata-cyrix.c	optional ata pci | atacyrix
dev/ata/chipsets/ata-highpoint.c	optional ata pci | atahighpoint
dev/ata/chipsets/ata-intel.c	optional ata pci | ataintel
dev/ata/chipsets/ata-ite.c	optional ata pci | ataite
dev/ata/chipsets/ata-jmicron.c	optional ata pci | atajmicron
dev/ata/chipsets/ata-marvell.c	optional ata pci | atamarvell
dev/ata/chipsets/ata-micron.c	optional ata pci | atamicron
dev/ata/chipsets/ata-national.c	optional ata pci | atanational
dev/ata/chipsets/ata-netcell.c	optional ata pci | atanetcell
dev/ata/chipsets/ata-nvidia.c	optional ata pci | atanvidia
dev/ata/chipsets/ata-promise.c	optional ata pci | atapromise
dev/ata/chipsets/ata-serverworks.c	optional ata pci | ataserverworks
dev/ata/chipsets/ata-siliconimage.c	optional ata pci | atasiliconimage | ataati
dev/ata/chipsets/ata-sis.c	optional ata pci | atasis
dev/ata/chipsets/ata-via.c	optional ata pci | atavia
#
dev/ath/if_ath_pci.c	optional ath_pci pci \
	compile-with "${NORMAL_C} -I$S/dev/ath"
#
dev/ath/if_ath_ahb.c	optional ath_ahb \
	compile-with "${NORMAL_C} -I$S/dev/ath"
#
dev/ath/if_ath.c	optional ath \
	compile-with "${NORMAL_C} -I$S/dev/ath"
dev/ath/if_ath_alq.c	optional ath \
	compile-with "${NORMAL_C} -I$S/dev/ath"
dev/ath/if_ath_beacon.c	optional ath \
	compile-with "${NORMAL_C} -I$S/dev/ath"
dev/ath/if_ath_btcoex.c	optional ath \
	compile-with "${NORMAL_C} -I$S/dev/ath"
dev/ath/if_ath_btcoex_mci.c	optional ath \
	compile-with "${NORMAL_C} -I$S/dev/ath"
dev/ath/if_ath_debug.c	optional ath \
	compile-with "${NORMAL_C} -I$S/dev/ath"
dev/ath/if_ath_descdma.c	optional ath \
	compile-with "${NORMAL_C} -I$S/dev/ath"
dev/ath/if_ath_keycache.c	optional ath \
	compile-with "${NORMAL_C} -I$S/dev/ath"
dev/ath/if_ath_ioctl.c	optional ath \
	compile-with "${NORMAL_C} -I$S/dev/ath"
dev/ath/if_ath_led.c	optional ath \
	compile-with "${NORMAL_C} -I$S/dev/ath"
dev/ath/if_ath_lna_div.c	optional ath \
	compile-with "${NORMAL_C} -I$S/dev/ath"
dev/ath/if_ath_tx.c	optional ath \
	compile-with "${NORMAL_C} -I$S/dev/ath"
dev/ath/if_ath_tx_edma.c	optional ath \
	compile-with "${NORMAL_C} -I$S/dev/ath"
dev/ath/if_ath_tx_ht.c	optional ath \
	compile-with "${NORMAL_C} -I$S/dev/ath"
dev/ath/if_ath_tdma.c	optional ath \
	compile-with "${NORMAL_C} -I$S/dev/ath"
dev/ath/if_ath_sysctl.c	optional ath \
	compile-with "${NORMAL_C} -I$S/dev/ath"
dev/ath/if_ath_rx.c	optional ath \
	compile-with "${NORMAL_C} -I$S/dev/ath"
dev/ath/if_ath_rx_edma.c	optional ath \
	compile-with "${NORMAL_C} -I$S/dev/ath"
dev/ath/if_ath_spectral.c	optional ath \
	compile-with "${NORMAL_C} -I$S/dev/ath"
dev/ath/ah_osdep.c	optional ath \
	compile-with "${NORMAL_C} -I$S/dev/ath"
#
dev/ath/ath_hal/ah.c	optional ath \
	compile-with "${NORMAL_C} -I$S/dev/ath"
dev/ath/ath_hal/ah_eeprom_v1.c	optional ath_hal | ath_ar5210 \
	compile-with "${NORMAL_C} -I$S/dev/ath"
dev/ath/ath_hal/ah_eeprom_v3.c	optional ath_hal | ath_ar5211 | ath_ar5212 \
	compile-with "${NORMAL_C} -I$S/dev/ath"
dev/ath/ath_hal/ah_eeprom_v14.c \
	optional ath_hal | ath_ar5416 | ath_ar9160 | ath_ar9280 \
	compile-with "${NORMAL_C} -I$S/dev/ath"
dev/ath/ath_hal/ah_eeprom_v4k.c \
	optional ath_hal | ath_ar9285 \
	compile-with "${NORMAL_C} -I$S/dev/ath"
dev/ath/ath_hal/ah_eeprom_9287.c \
	optional ath_hal | ath_ar9287 \
	compile-with "${NORMAL_C} -I$S/dev/ath"
dev/ath/ath_hal/ah_regdomain.c	optional ath \
	compile-with "${NORMAL_C} ${NO_WSHIFT_COUNT_NEGATIVE} ${NO_WSHIFT_COUNT_OVERFLOW} -I$S/dev/ath"
# ar5210
dev/ath/ath_hal/ar5210/ar5210_attach.c	optional ath_hal | ath_ar5210 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
dev/ath/ath_hal/ar5210/ar5210_beacon.c	optional ath_hal | ath_ar5210 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
dev/ath/ath_hal/ar5210/ar5210_interrupts.c	optional ath_hal | ath_ar5210 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
dev/ath/ath_hal/ar5210/ar5210_keycache.c	optional ath_hal | ath_ar5210 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
dev/ath/ath_hal/ar5210/ar5210_misc.c	optional ath_hal | ath_ar5210 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
dev/ath/ath_hal/ar5210/ar5210_phy.c	optional ath_hal | ath_ar5210 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
dev/ath/ath_hal/ar5210/ar5210_power.c	optional ath_hal | ath_ar5210 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
dev/ath/ath_hal/ar5210/ar5210_recv.c	optional ath_hal | ath_ar5210 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
dev/ath/ath_hal/ar5210/ar5210_reset.c	optional ath_hal | ath_ar5210 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
dev/ath/ath_hal/ar5210/ar5210_xmit.c	optional ath_hal | ath_ar5210 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
# ar5211
dev/ath/ath_hal/ar5211/ar5211_attach.c	optional ath_hal | ath_ar5211 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
dev/ath/ath_hal/ar5211/ar5211_beacon.c	optional ath_hal | ath_ar5211 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
dev/ath/ath_hal/ar5211/ar5211_interrupts.c	optional ath_hal | ath_ar5211 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
dev/ath/ath_hal/ar5211/ar5211_keycache.c	optional ath_hal | ath_ar5211 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
dev/ath/ath_hal/ar5211/ar5211_misc.c	optional ath_hal | ath_ar5211 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
dev/ath/ath_hal/ar5211/ar5211_phy.c	optional ath_hal | ath_ar5211 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
dev/ath/ath_hal/ar5211/ar5211_power.c	optional ath_hal | ath_ar5211 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
dev/ath/ath_hal/ar5211/ar5211_recv.c	optional ath_hal | ath_ar5211 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
dev/ath/ath_hal/ar5211/ar5211_reset.c	optional ath_hal | ath_ar5211 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
dev/ath/ath_hal/ar5211/ar5211_xmit.c	optional ath_hal | ath_ar5211 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
# ar5212
dev/ath/ath_hal/ar5212/ar5212_ani.c \
	optional ath_hal | ath_ar5212 | ath_ar5416 | ath_ar9160 | ath_ar9280 | \
	ath_ar9285 ath_ar9287 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
dev/ath/ath_hal/ar5212/ar5212_attach.c \
	optional ath_hal | ath_ar5212 | ath_ar5416 | ath_ar9160 | ath_ar9280 | \
	ath_ar9285 ath_ar9287 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
dev/ath/ath_hal/ar5212/ar5212_beacon.c \
	optional ath_hal | ath_ar5212 | ath_ar5416 | ath_ar9160 | ath_ar9280 | \
	ath_ar9285 ath_ar9287 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
dev/ath/ath_hal/ar5212/ar5212_eeprom.c \
	optional ath_hal | ath_ar5212 | ath_ar5416 | ath_ar9160 | ath_ar9280 | \
	ath_ar9285 ath_ar9287 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
dev/ath/ath_hal/ar5212/ar5212_gpio.c \
	optional ath_hal | ath_ar5212 | ath_ar5416 | ath_ar9160 | ath_ar9280 | \
	ath_ar9285 ath_ar9287 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
dev/ath/ath_hal/ar5212/ar5212_interrupts.c \
	optional ath_hal | ath_ar5212 | ath_ar5416 | ath_ar9160 | ath_ar9280 | \
	ath_ar9285 ath_ar9287 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
dev/ath/ath_hal/ar5212/ar5212_keycache.c \
	optional ath_hal | ath_ar5212 | ath_ar5416 | ath_ar9160 | ath_ar9280 | \
	ath_ar9285 ath_ar9287 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
dev/ath/ath_hal/ar5212/ar5212_misc.c \
	optional ath_hal | ath_ar5212 | ath_ar5416 | ath_ar9160 | ath_ar9280 | \
	ath_ar9285 ath_ar9287 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
dev/ath/ath_hal/ar5212/ar5212_phy.c \
	optional ath_hal | ath_ar5212 | ath_ar5416 | ath_ar9160 | ath_ar9280 | \
	ath_ar9285 ath_ar9287 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
dev/ath/ath_hal/ar5212/ar5212_power.c \
	optional ath_hal | ath_ar5212 | ath_ar5416 | ath_ar9160 | ath_ar9280 | \
	ath_ar9285 ath_ar9287 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
dev/ath/ath_hal/ar5212/ar5212_recv.c \
	optional ath_hal | ath_ar5212 | ath_ar5416 | ath_ar9160 | ath_ar9280 | \
	ath_ar9285 ath_ar9287 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
dev/ath/ath_hal/ar5212/ar5212_reset.c \
	optional ath_hal | ath_ar5212 | ath_ar5416 | ath_ar9160 | ath_ar9280 | \
	ath_ar9285 ath_ar9287 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
dev/ath/ath_hal/ar5212/ar5212_rfgain.c \
	optional ath_hal | ath_ar5212 | ath_ar5416 | ath_ar9160 | ath_ar9280 | \
	ath_ar9285 ath_ar9287 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
dev/ath/ath_hal/ar5212/ar5212_xmit.c \
	optional ath_hal | ath_ar5212 | ath_ar5416 | ath_ar9160 | ath_ar9280 | \
	ath_ar9285 ath_ar9287 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
# ar5416 (depends on ar5212)
dev/ath/ath_hal/ar5416/ar5416_ani.c \
	optional ath_hal | ath_ar5416 | ath_ar9160 | ath_ar9280 | ath_ar9285 | \
	ath_ar9287 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
dev/ath/ath_hal/ar5416/ar5416_attach.c \
	optional ath_hal | ath_ar5416 | ath_ar9160 | ath_ar9280 | ath_ar9285 | \
	ath_ar9287 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
dev/ath/ath_hal/ar5416/ar5416_beacon.c \
	optional ath_hal | ath_ar5416 | ath_ar9160 | ath_ar9280 | ath_ar9285 | \
	ath_ar9287 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
dev/ath/ath_hal/ar5416/ar5416_btcoex.c \
	optional ath_hal | ath_ar5416 | ath_ar9160 | ath_ar9280 | ath_ar9285 | \
	ath_ar9287 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
dev/ath/ath_hal/ar5416/ar5416_cal.c \
	optional ath_hal | ath_ar5416 | ath_ar9160 | ath_ar9280 | ath_ar9285 | \
	ath_ar9287 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
dev/ath/ath_hal/ar5416/ar5416_cal_iq.c \
	optional ath_hal | ath_ar5416 | ath_ar9160 | ath_ar9280 | ath_ar9285 | \
	ath_ar9287 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
dev/ath/ath_hal/ar5416/ar5416_cal_adcgain.c \
	optional ath_hal | ath_ar5416 | ath_ar9160 | ath_ar9280 | ath_ar9285 | \
	ath_ar9287 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
dev/ath/ath_hal/ar5416/ar5416_cal_adcdc.c \
	optional ath_hal | ath_ar5416 | ath_ar9160 | ath_ar9280 | ath_ar9285 | \
	ath_ar9287 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
dev/ath/ath_hal/ar5416/ar5416_eeprom.c \
	optional ath_hal | ath_ar5416 | ath_ar9160 | ath_ar9280 | ath_ar9285 | \
	ath_ar9287 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
dev/ath/ath_hal/ar5416/ar5416_gpio.c \
	optional ath_hal | ath_ar5416 | ath_ar9160 | ath_ar9280 | ath_ar9285 | \
	ath_ar9287 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
dev/ath/ath_hal/ar5416/ar5416_interrupts.c \
	optional ath_hal | ath_ar5416 | ath_ar9160 | ath_ar9280 | ath_ar9285 | \
	ath_ar9287 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
dev/ath/ath_hal/ar5416/ar5416_keycache.c \
	optional ath_hal | ath_ar5416 | ath_ar9160 | ath_ar9280 | ath_ar9285 | \
	ath_ar9287 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
dev/ath/ath_hal/ar5416/ar5416_misc.c \
	optional ath_hal | ath_ar5416 | ath_ar9160 | ath_ar9280 | ath_ar9285 | \
	ath_ar9287 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
dev/ath/ath_hal/ar5416/ar5416_phy.c \
	optional ath_hal | ath_ar5416 | ath_ar9160 | ath_ar9280 | ath_ar9285 | \
	ath_ar9287 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
dev/ath/ath_hal/ar5416/ar5416_power.c \
	optional ath_hal | ath_ar5416 | ath_ar9160 | ath_ar9280 | ath_ar9285 | \
	ath_ar9287 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
dev/ath/ath_hal/ar5416/ar5416_radar.c \
	optional ath_hal | ath_ar5416 | ath_ar9160 | ath_ar9280 | ath_ar9285 | \
	ath_ar9287 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
dev/ath/ath_hal/ar5416/ar5416_recv.c \
	optional ath_hal | ath_ar5416 | ath_ar9160 | ath_ar9280 | ath_ar9285 | \
	ath_ar9287 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
dev/ath/ath_hal/ar5416/ar5416_reset.c \
	optional ath_hal | ath_ar5416 | ath_ar9160 | ath_ar9280 | ath_ar9285 | \
	ath_ar9287 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
dev/ath/ath_hal/ar5416/ar5416_spectral.c \
	optional ath_hal | ath_ar5416 | ath_ar9160 | ath_ar9280 | ath_ar9285 | \
	ath_ar9287 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
dev/ath/ath_hal/ar5416/ar5416_xmit.c \
	optional ath_hal | ath_ar5416 | ath_ar9160 | ath_ar9280 | ath_ar9285 | \
	ath_ar9287 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
# ar9130 (depends upon ar5416) - also requires AH_SUPPORT_AR9130
#
# Since this is an embedded MAC SoC, there's no need to compile it into the
# default HAL.
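# A kernel configuration that wants this support therefore has to name the
# device and define the option itself; roughly:
#
#	device		ath_ar9130
#	options 	AH_SUPPORT_AR9130
#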
dev/ath/ath_hal/ar9001/ar9130_attach.c	optional ath_ar9130 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
dev/ath/ath_hal/ar9001/ar9130_phy.c	optional ath_ar9130 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
dev/ath/ath_hal/ar9001/ar9130_eeprom.c	optional ath_ar9130 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
# ar9160 (depends on ar5416)
dev/ath/ath_hal/ar9001/ar9160_attach.c	optional ath_hal | ath_ar9160 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
# ar9280 (depends on ar5416)
dev/ath/ath_hal/ar9002/ar9280_attach.c	optional ath_hal | ath_ar9280 | \
	ath_ar9285 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
dev/ath/ath_hal/ar9002/ar9280_olc.c	optional ath_hal | ath_ar9280 | \
	ath_ar9285 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
# ar9285 (depends on ar5416 and ar9280)
dev/ath/ath_hal/ar9002/ar9285_attach.c	optional ath_hal | ath_ar9285 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
dev/ath/ath_hal/ar9002/ar9285_btcoex.c	optional ath_hal | ath_ar9285 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
dev/ath/ath_hal/ar9002/ar9285_reset.c	optional ath_hal | ath_ar9285 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
dev/ath/ath_hal/ar9002/ar9285_cal.c	optional ath_hal | ath_ar9285 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
dev/ath/ath_hal/ar9002/ar9285_phy.c	optional ath_hal | ath_ar9285 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
dev/ath/ath_hal/ar9002/ar9285_diversity.c	optional ath_hal | ath_ar9285 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
# ar9287 (depends on ar5416)
dev/ath/ath_hal/ar9002/ar9287_attach.c	optional ath_hal | ath_ar9287 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
dev/ath/ath_hal/ar9002/ar9287_reset.c	optional ath_hal | ath_ar9287 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
dev/ath/ath_hal/ar9002/ar9287_cal.c	optional ath_hal | ath_ar9287 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
dev/ath/ath_hal/ar9002/ar9287_olc.c	optional ath_hal | ath_ar9287 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
# ar9300
contrib/dev/ath/ath_hal/ar9300/ar9300_ani.c	optional ath_hal | ath_ar9300 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal -I$S/contrib/dev/ath/ath_hal"
contrib/dev/ath/ath_hal/ar9300/ar9300_attach.c	optional ath_hal | ath_ar9300 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal -I$S/contrib/dev/ath/ath_hal"
contrib/dev/ath/ath_hal/ar9300/ar9300_beacon.c	optional ath_hal | ath_ar9300 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal -I$S/contrib/dev/ath/ath_hal"
contrib/dev/ath/ath_hal/ar9300/ar9300_eeprom.c	optional ath_hal | ath_ar9300 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal -I$S/contrib/dev/ath/ath_hal ${NO_WCONSTANT_CONVERSION}"
contrib/dev/ath/ath_hal/ar9300/ar9300_freebsd.c	optional ath_hal | ath_ar9300 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal -I$S/contrib/dev/ath/ath_hal"
contrib/dev/ath/ath_hal/ar9300/ar9300_gpio.c	optional ath_hal | ath_ar9300 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal -I$S/contrib/dev/ath/ath_hal"
contrib/dev/ath/ath_hal/ar9300/ar9300_interrupts.c	optional ath_hal | ath_ar9300 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal -I$S/contrib/dev/ath/ath_hal"
contrib/dev/ath/ath_hal/ar9300/ar9300_keycache.c	optional ath_hal | ath_ar9300 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal -I$S/contrib/dev/ath/ath_hal"
contrib/dev/ath/ath_hal/ar9300/ar9300_mci.c	optional ath_hal | ath_ar9300 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal -I$S/contrib/dev/ath/ath_hal"
contrib/dev/ath/ath_hal/ar9300/ar9300_misc.c	optional ath_hal | ath_ar9300 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal -I$S/contrib/dev/ath/ath_hal"
contrib/dev/ath/ath_hal/ar9300/ar9300_paprd.c	optional ath_hal | ath_ar9300 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal -I$S/contrib/dev/ath/ath_hal"
contrib/dev/ath/ath_hal/ar9300/ar9300_phy.c	optional ath_hal | ath_ar9300 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal -I$S/contrib/dev/ath/ath_hal"
contrib/dev/ath/ath_hal/ar9300/ar9300_power.c	optional ath_hal | ath_ar9300 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal -I$S/contrib/dev/ath/ath_hal"
contrib/dev/ath/ath_hal/ar9300/ar9300_radar.c	optional ath_hal | ath_ar9300 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal -I$S/contrib/dev/ath/ath_hal"
contrib/dev/ath/ath_hal/ar9300/ar9300_radio.c	optional ath_hal | ath_ar9300 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal -I$S/contrib/dev/ath/ath_hal"
contrib/dev/ath/ath_hal/ar9300/ar9300_recv.c	optional ath_hal | ath_ar9300 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal -I$S/contrib/dev/ath/ath_hal"
contrib/dev/ath/ath_hal/ar9300/ar9300_recv_ds.c	optional ath_hal | ath_ar9300 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal -I$S/contrib/dev/ath/ath_hal"
contrib/dev/ath/ath_hal/ar9300/ar9300_reset.c	optional ath_hal | ath_ar9300 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal -I$S/contrib/dev/ath/ath_hal ${NO_WSOMETIMES_UNINITIALIZED} -Wno-unused-function"
contrib/dev/ath/ath_hal/ar9300/ar9300_stub.c	optional ath_hal | ath_ar9300 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal -I$S/contrib/dev/ath/ath_hal"
contrib/dev/ath/ath_hal/ar9300/ar9300_stub_funcs.c	optional ath_hal | ath_ar9300 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal -I$S/contrib/dev/ath/ath_hal"
contrib/dev/ath/ath_hal/ar9300/ar9300_spectral.c	optional ath_hal | ath_ar9300 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal -I$S/contrib/dev/ath/ath_hal"
contrib/dev/ath/ath_hal/ar9300/ar9300_timer.c	optional ath_hal | ath_ar9300 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal -I$S/contrib/dev/ath/ath_hal"
contrib/dev/ath/ath_hal/ar9300/ar9300_xmit.c	optional ath_hal | ath_ar9300 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal -I$S/contrib/dev/ath/ath_hal"
contrib/dev/ath/ath_hal/ar9300/ar9300_xmit_ds.c	optional ath_hal | ath_ar9300 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal -I$S/contrib/dev/ath/ath_hal"
# rf backends
dev/ath/ath_hal/ar5212/ar2316.c	optional ath_rf2316 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
dev/ath/ath_hal/ar5212/ar2317.c	optional ath_rf2317 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
dev/ath/ath_hal/ar5212/ar2413.c	optional ath_hal | ath_rf2413 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
dev/ath/ath_hal/ar5212/ar2425.c	optional ath_hal | ath_rf2425 | ath_rf2417 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
dev/ath/ath_hal/ar5212/ar5111.c	optional ath_hal | ath_rf5111 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
dev/ath/ath_hal/ar5212/ar5112.c	optional ath_hal | ath_rf5112 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
dev/ath/ath_hal/ar5212/ar5413.c	optional ath_hal | ath_rf5413 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
dev/ath/ath_hal/ar5416/ar2133.c	optional ath_hal | ath_ar5416 | \
	ath_ar9130 | ath_ar9160 | ath_ar9280 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
dev/ath/ath_hal/ar9002/ar9280.c	optional ath_hal | ath_ar9280 | ath_ar9285 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
dev/ath/ath_hal/ar9002/ar9285.c	optional ath_hal | ath_ar9285 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
dev/ath/ath_hal/ar9002/ar9287.c	optional ath_hal | ath_ar9287 \
	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
# ath rate control algorithms
dev/ath/ath_rate/amrr/amrr.c	optional ath_rate_amrr \
	compile-with "${NORMAL_C} -I$S/dev/ath"
dev/ath/ath_rate/onoe/onoe.c	optional ath_rate_onoe \
	compile-with "${NORMAL_C} -I$S/dev/ath"
dev/ath/ath_rate/sample/sample.c	optional ath_rate_sample \
	compile-with "${NORMAL_C} -I$S/dev/ath"
# ath DFS modules
dev/ath/ath_dfs/null/dfs_null.c	optional ath \
	compile-with "${NORMAL_C} -I$S/dev/ath"
#
dev/bce/if_bce.c	optional bce
dev/bfe/if_bfe.c	optional bfe
dev/bge/if_bge.c	optional bge
dev/bhnd/bhnd.c	optional bhnd
dev/bhnd/bhnd_erom.c	optional bhnd
dev/bhnd/bhnd_erom_if.m	optional bhnd
dev/bhnd/bhnd_subr.c	optional bhnd
dev/bhnd/bhnd_bus_if.m	optional bhnd
dev/bhnd/bhndb/bhnd_bhndb.c	optional bhndb bhnd
dev/bhnd/bhndb/bhndb.c	optional bhndb bhnd
dev/bhnd/bhndb/bhndb_bus_if.m	optional bhndb bhnd
dev/bhnd/bhndb/bhndb_hwdata.c	optional bhndb bhnd
dev/bhnd/bhndb/bhndb_if.m	optional bhndb bhnd
dev/bhnd/bhndb/bhndb_pci.c	optional bhndb_pci bhndb bhnd pci
dev/bhnd/bhndb/bhndb_pci_hwdata.c	optional bhndb_pci bhndb bhnd pci
dev/bhnd/bhndb/bhndb_pci_sprom.c	optional bhndb_pci bhndb bhnd pci
dev/bhnd/bhndb/bhndb_subr.c	optional bhndb bhnd
dev/bhnd/bcma/bcma.c	optional bcma bhnd
dev/bhnd/bcma/bcma_bhndb.c	optional bcma bhnd bhndb
dev/bhnd/bcma/bcma_erom.c	optional bcma bhnd
dev/bhnd/bcma/bcma_subr.c	optional bcma bhnd
dev/bhnd/cores/chipc/bhnd_chipc_if.m	optional bhnd
dev/bhnd/cores/chipc/bhnd_sprom_chipc.c	optional bhnd
dev/bhnd/cores/chipc/bhnd_pmu_chipc.c	optional bhnd
dev/bhnd/cores/chipc/chipc.c	optional bhnd
dev/bhnd/cores/chipc/chipc_cfi.c	optional bhnd cfi
dev/bhnd/cores/chipc/chipc_gpio.c	optional bhnd gpio
dev/bhnd/cores/chipc/chipc_slicer.c	optional bhnd cfi | bhnd spibus
dev/bhnd/cores/chipc/chipc_spi.c	optional bhnd spibus
dev/bhnd/cores/chipc/chipc_subr.c	optional bhnd
dev/bhnd/cores/chipc/pwrctl/bhnd_pwrctl.c	optional bhnd
dev/bhnd/cores/chipc/pwrctl/bhnd_pwrctl_if.m	optional bhnd
dev/bhnd/cores/chipc/pwrctl/bhnd_pwrctl_hostb_if.m	optional bhnd
dev/bhnd/cores/chipc/pwrctl/bhnd_pwrctl_subr.c	optional bhnd
dev/bhnd/cores/pci/bhnd_pci.c	optional bhnd pci
dev/bhnd/cores/pci/bhnd_pci_hostb.c	optional bhndb bhnd pci
dev/bhnd/cores/pci/bhnd_pcib.c	optional bhnd_pcib bhnd pci
dev/bhnd/cores/pcie2/bhnd_pcie2.c	optional bhnd pci
dev/bhnd/cores/pcie2/bhnd_pcie2_hostb.c	optional bhndb bhnd pci
dev/bhnd/cores/pcie2/bhnd_pcie2b.c	optional bhnd_pcie2b bhnd pci
dev/bhnd/cores/pmu/bhnd_pmu.c	optional bhnd
dev/bhnd/cores/pmu/bhnd_pmu_core.c	optional bhnd
dev/bhnd/cores/pmu/bhnd_pmu_if.m	optional bhnd
dev/bhnd/cores/pmu/bhnd_pmu_subr.c	optional bhnd
dev/bhnd/nvram/bhnd_nvram_data.c	optional bhnd
dev/bhnd/nvram/bhnd_nvram_data_bcm.c	optional bhnd
dev/bhnd/nvram/bhnd_nvram_data_bcmraw.c	optional bhnd
dev/bhnd/nvram/bhnd_nvram_data_btxt.c	optional bhnd
dev/bhnd/nvram/bhnd_nvram_data_sprom.c	optional bhnd
optional bhnd dev/bhnd/nvram/bhnd_nvram_data_sprom_subr.c optional bhnd dev/bhnd/nvram/bhnd_nvram_data_tlv.c optional bhnd dev/bhnd/nvram/bhnd_nvram_if.m optional bhnd dev/bhnd/nvram/bhnd_nvram_io.c optional bhnd dev/bhnd/nvram/bhnd_nvram_iobuf.c optional bhnd dev/bhnd/nvram/bhnd_nvram_ioptr.c optional bhnd dev/bhnd/nvram/bhnd_nvram_iores.c optional bhnd dev/bhnd/nvram/bhnd_nvram_plist.c optional bhnd dev/bhnd/nvram/bhnd_nvram_store.c optional bhnd dev/bhnd/nvram/bhnd_nvram_store_subr.c optional bhnd dev/bhnd/nvram/bhnd_nvram_subr.c optional bhnd dev/bhnd/nvram/bhnd_nvram_value.c optional bhnd dev/bhnd/nvram/bhnd_nvram_value_fmts.c optional bhnd dev/bhnd/nvram/bhnd_nvram_value_prf.c optional bhnd dev/bhnd/nvram/bhnd_nvram_value_subr.c optional bhnd dev/bhnd/nvram/bhnd_sprom.c optional bhnd dev/bhnd/siba/siba.c optional siba bhnd dev/bhnd/siba/siba_bhndb.c optional siba bhnd bhndb dev/bhnd/siba/siba_erom.c optional siba bhnd dev/bhnd/siba/siba_subr.c optional siba bhnd # dev/bktr/bktr_audio.c optional bktr pci dev/bktr/bktr_card.c optional bktr pci dev/bktr/bktr_core.c optional bktr pci dev/bktr/bktr_i2c.c optional bktr pci smbus dev/bktr/bktr_os.c optional bktr pci dev/bktr/bktr_tuner.c optional bktr pci dev/bktr/msp34xx.c optional bktr pci dev/bnxt/bnxt_hwrm.c optional bnxt iflib pci dev/bnxt/bnxt_sysctl.c optional bnxt iflib pci dev/bnxt/bnxt_txrx.c optional bnxt iflib pci dev/bnxt/if_bnxt.c optional bnxt iflib pci dev/bwi/bwimac.c optional bwi dev/bwi/bwiphy.c optional bwi dev/bwi/bwirf.c optional bwi dev/bwi/if_bwi.c optional bwi dev/bwi/if_bwi_pci.c optional bwi pci dev/bwn/if_bwn.c optional bwn bhnd dev/bwn/if_bwn_pci.c optional bwn pci bhnd bhndb bhndb_pci dev/bwn/if_bwn_phy_common.c optional bwn bhnd dev/bwn/if_bwn_phy_g.c optional bwn bhnd dev/bwn/if_bwn_phy_lp.c optional bwn bhnd dev/bwn/if_bwn_phy_n.c optional bwn bhnd dev/bwn/if_bwn_util.c optional bwn bhnd dev/cardbus/cardbus.c optional cardbus dev/cardbus/cardbus_cis.c optional cardbus dev/cardbus/cardbus_device.c optional cardbus dev/cas/if_cas.c optional cas dev/cfi/cfi_bus_fdt.c optional cfi fdt dev/cfi/cfi_bus_nexus.c optional cfi dev/cfi/cfi_core.c optional cfi dev/cfi/cfi_dev.c optional cfi dev/cfi/cfi_disk.c optional cfid dev/chromebook_platform/chromebook_platform.c optional chromebook_platform dev/ciss/ciss.c optional ciss dev/cmx/cmx.c optional cmx dev/cmx/cmx_pccard.c optional cmx pccard dev/cpufreq/ichss.c optional cpufreq pci dev/cs/if_cs.c optional cs dev/cs/if_cs_isa.c optional cs isa dev/cs/if_cs_pccard.c optional cs pccard dev/cxgb/cxgb_main.c optional cxgb pci \ compile-with "${NORMAL_C} -I$S/dev/cxgb" dev/cxgb/cxgb_sge.c optional cxgb pci \ compile-with "${NORMAL_C} -I$S/dev/cxgb" dev/cxgb/common/cxgb_mc5.c optional cxgb pci \ compile-with "${NORMAL_C} -I$S/dev/cxgb" dev/cxgb/common/cxgb_vsc7323.c optional cxgb pci \ compile-with "${NORMAL_C} -I$S/dev/cxgb" dev/cxgb/common/cxgb_vsc8211.c optional cxgb pci \ compile-with "${NORMAL_C} -I$S/dev/cxgb" dev/cxgb/common/cxgb_ael1002.c optional cxgb pci \ compile-with "${NORMAL_C} -I$S/dev/cxgb" dev/cxgb/common/cxgb_aq100x.c optional cxgb pci \ compile-with "${NORMAL_C} -I$S/dev/cxgb" dev/cxgb/common/cxgb_mv88e1xxx.c optional cxgb pci \ compile-with "${NORMAL_C} -I$S/dev/cxgb" dev/cxgb/common/cxgb_xgmac.c optional cxgb pci \ compile-with "${NORMAL_C} -I$S/dev/cxgb" dev/cxgb/common/cxgb_t3_hw.c optional cxgb pci \ compile-with "${NORMAL_C} -I$S/dev/cxgb" dev/cxgb/common/cxgb_tn1010.c optional cxgb pci \ compile-with "${NORMAL_C} -I$S/dev/cxgb" 
dev/cxgb/sys/uipc_mvec.c	optional cxgb pci \
	compile-with "${NORMAL_C} -I$S/dev/cxgb"
dev/cxgb/cxgb_t3fw.c	optional cxgb cxgb_t3fw \
	compile-with "${NORMAL_C} -I$S/dev/cxgb"
dev/cxgbe/t4_clip.c	optional cxgbe pci \
	compile-with "${NORMAL_C} -I$S/dev/cxgbe"
dev/cxgbe/t4_filter.c	optional cxgbe pci \
	compile-with "${NORMAL_C} -I$S/dev/cxgbe"
dev/cxgbe/t4_if.m	optional cxgbe pci
dev/cxgbe/t4_iov.c	optional cxgbe pci \
	compile-with "${NORMAL_C} -I$S/dev/cxgbe"
dev/cxgbe/t4_mp_ring.c	optional cxgbe pci \
	compile-with "${NORMAL_C} -I$S/dev/cxgbe"
dev/cxgbe/t4_main.c	optional cxgbe pci \
	compile-with "${NORMAL_C} -I$S/dev/cxgbe"
dev/cxgbe/t4_netmap.c	optional cxgbe pci \
	compile-with "${NORMAL_C} -I$S/dev/cxgbe"
dev/cxgbe/t4_sched.c	optional cxgbe pci \
	compile-with "${NORMAL_C} -I$S/dev/cxgbe"
dev/cxgbe/t4_sge.c	optional cxgbe pci \
	compile-with "${NORMAL_C} -I$S/dev/cxgbe"
dev/cxgbe/t4_smt.c	optional cxgbe pci \
	compile-with "${NORMAL_C} -I$S/dev/cxgbe"
dev/cxgbe/t4_l2t.c	optional cxgbe pci \
	compile-with "${NORMAL_C} -I$S/dev/cxgbe"
dev/cxgbe/t4_tracer.c	optional cxgbe pci \
	compile-with "${NORMAL_C} -I$S/dev/cxgbe"
dev/cxgbe/t4_vf.c	optional cxgbev pci \
	compile-with "${NORMAL_C} -I$S/dev/cxgbe"
dev/cxgbe/common/t4_hw.c	optional cxgbe pci \
	compile-with "${NORMAL_C} -I$S/dev/cxgbe"
dev/cxgbe/common/t4vf_hw.c	optional cxgbev pci \
	compile-with "${NORMAL_C} -I$S/dev/cxgbe"
dev/cxgbe/cudbg/cudbg_common.c	optional cxgbe \
	compile-with "${NORMAL_C} -I$S/dev/cxgbe"
dev/cxgbe/cudbg/cudbg_flash_utils.c	optional cxgbe \
	compile-with "${NORMAL_C} -I$S/dev/cxgbe"
dev/cxgbe/cudbg/cudbg_lib.c	optional cxgbe \
	compile-with "${NORMAL_C} -I$S/dev/cxgbe"
dev/cxgbe/cudbg/cudbg_wtp.c	optional cxgbe \
	compile-with "${NORMAL_C} -I$S/dev/cxgbe"
dev/cxgbe/cudbg/fastlz.c	optional cxgbe \
	compile-with "${NORMAL_C} -I$S/dev/cxgbe"
dev/cxgbe/cudbg/fastlz_api.c	optional cxgbe \
	compile-with "${NORMAL_C} -I$S/dev/cxgbe"
t4fw_cfg.c	optional cxgbe \
	compile-with "${AWK} -f $S/tools/fw_stub.awk t4fw_cfg.fw:t4fw_cfg t4fw_cfg_uwire.fw:t4fw_cfg_uwire t4fw.fw:t4fw -mt4fw_cfg -c${.TARGET}" \
	no-implicit-rule before-depend local \
	clean "t4fw_cfg.c"
t4fw_cfg.fwo	optional cxgbe \
	dependency "t4fw_cfg.fw" \
	compile-with "${NORMAL_FWO}" \
	no-implicit-rule \
	clean "t4fw_cfg.fwo"
t4fw_cfg.fw	optional cxgbe \
	dependency "$S/dev/cxgbe/firmware/t4fw_cfg.txt" \
	compile-with "${CP} ${.ALLSRC} ${.TARGET}" \
	no-obj no-implicit-rule \
	clean "t4fw_cfg.fw"
t4fw_cfg_uwire.fwo	optional cxgbe \
	dependency "t4fw_cfg_uwire.fw" \
	compile-with "${NORMAL_FWO}" \
	no-implicit-rule \
	clean "t4fw_cfg_uwire.fwo"
t4fw_cfg_uwire.fw	optional cxgbe \
	dependency "$S/dev/cxgbe/firmware/t4fw_cfg_uwire.txt" \
	compile-with "${CP} ${.ALLSRC} ${.TARGET}" \
	no-obj no-implicit-rule \
	clean "t4fw_cfg_uwire.fw"
t4fw.fwo	optional cxgbe \
	dependency "t4fw.fw" \
	compile-with "${NORMAL_FWO}" \
	no-implicit-rule \
	clean "t4fw.fwo"
t4fw.fw	optional cxgbe \
	dependency "$S/dev/cxgbe/firmware/t4fw-1.22.0.3.bin.uu" \
	compile-with "${NORMAL_FW}" \
	no-obj no-implicit-rule \
	clean "t4fw.fw"
t5fw_cfg.c	optional cxgbe \
	compile-with "${AWK} -f $S/tools/fw_stub.awk t5fw_cfg.fw:t5fw_cfg t5fw_cfg_uwire.fw:t5fw_cfg_uwire t5fw.fw:t5fw -mt5fw_cfg -c${.TARGET}" \
	no-implicit-rule before-depend local \
	clean "t5fw_cfg.c"
t5fw_cfg.fwo	optional cxgbe \
	dependency "t5fw_cfg.fw" \
	compile-with "${NORMAL_FWO}" \
	no-implicit-rule \
	clean "t5fw_cfg.fwo"
t5fw_cfg.fw	optional cxgbe \
	dependency "$S/dev/cxgbe/firmware/t5fw_cfg.txt" \
	compile-with "${CP} ${.ALLSRC} ${.TARGET}" \
	no-obj no-implicit-rule \
	clean "t5fw_cfg.fw"
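# The firmware entries above and below all follow one files(5) pattern:
# an awk script ($S/tools/fw_stub.awk) generates a stub C module that
# registers the named images with firmware(9); each .fw target recovers
# the raw image (${CP} for the plain-text cxgbe configs, ${NORMAL_FW} to
# uudecode a .uu blob elsewhere); and each .fwo target turns the image
# into a linkable object via ${NORMAL_FWO}.  "no-implicit-rule", "no-obj",
# "before-depend", "local", "dependency" and "clean" keep these generated
# files out of the ordinary rules and remove them on cleanup.  A sketch of
# a minimal instance, using a hypothetical "foofw" image (commented out;
# not a real entry):
#foofw.c	optional foofw \
#	compile-with "${AWK} -f $S/tools/fw_stub.awk foo.fw:foofw -mfoofw -c${.TARGET}" \
#	no-implicit-rule before-depend local \
#	clean "foofw.c"
#foo.fwo	optional foofw \
#	dependency "foo.fw" \
#	compile-with "${NORMAL_FWO}" \
#	no-implicit-rule \
#	clean "foo.fwo"
#foo.fw	optional foofw \
#	dependency "$S/contrib/dev/foo/foo.fw.uu" \
#	compile-with "${NORMAL_FW}" \
#	no-obj no-implicit-rule \
#	clean "foo.fw"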
clean "t5fw_cfg.fw" t5fw_cfg_uwire.fwo optional cxgbe \ dependency "t5fw_cfg_uwire.fw" \ compile-with "${NORMAL_FWO}" \ no-implicit-rule \ clean "t5fw_cfg_uwire.fwo" t5fw_cfg_uwire.fw optional cxgbe \ dependency "$S/dev/cxgbe/firmware/t5fw_cfg_uwire.txt" \ compile-with "${CP} ${.ALLSRC} ${.TARGET}" \ no-obj no-implicit-rule \ clean "t5fw_cfg_uwire.fw" t5fw.fwo optional cxgbe \ dependency "t5fw.fw" \ compile-with "${NORMAL_FWO}" \ no-implicit-rule \ clean "t5fw.fwo" t5fw.fw optional cxgbe \ dependency "$S/dev/cxgbe/firmware/t5fw-1.22.0.3.bin.uu" \ compile-with "${NORMAL_FW}" \ no-obj no-implicit-rule \ clean "t5fw.fw" t6fw_cfg.c optional cxgbe \ compile-with "${AWK} -f $S/tools/fw_stub.awk t6fw_cfg.fw:t6fw_cfg t6fw_cfg_uwire.fw:t6fw_cfg_uwire t6fw.fw:t6fw -mt6fw_cfg -c${.TARGET}" \ no-implicit-rule before-depend local \ clean "t6fw_cfg.c" t6fw_cfg.fwo optional cxgbe \ dependency "t6fw_cfg.fw" \ compile-with "${NORMAL_FWO}" \ no-implicit-rule \ clean "t6fw_cfg.fwo" t6fw_cfg.fw optional cxgbe \ dependency "$S/dev/cxgbe/firmware/t6fw_cfg.txt" \ compile-with "${CP} ${.ALLSRC} ${.TARGET}" \ no-obj no-implicit-rule \ clean "t6fw_cfg.fw" t6fw_cfg_uwire.fwo optional cxgbe \ dependency "t6fw_cfg_uwire.fw" \ compile-with "${NORMAL_FWO}" \ no-implicit-rule \ clean "t6fw_cfg_uwire.fwo" t6fw_cfg_uwire.fw optional cxgbe \ dependency "$S/dev/cxgbe/firmware/t6fw_cfg_uwire.txt" \ compile-with "${CP} ${.ALLSRC} ${.TARGET}" \ no-obj no-implicit-rule \ clean "t6fw_cfg_uwire.fw" t6fw.fwo optional cxgbe \ dependency "t6fw.fw" \ compile-with "${NORMAL_FWO}" \ no-implicit-rule \ clean "t6fw.fwo" t6fw.fw optional cxgbe \ dependency "$S/dev/cxgbe/firmware/t6fw-1.22.0.3.bin.uu" \ compile-with "${NORMAL_FW}" \ no-obj no-implicit-rule \ clean "t6fw.fw" dev/cxgbe/crypto/t4_crypto.c optional ccr \ compile-with "${NORMAL_C} -I$S/dev/cxgbe" dev/cy/cy.c optional cy dev/cy/cy_isa.c optional cy isa dev/cy/cy_pci.c optional cy pci dev/cyapa/cyapa.c optional cyapa iicbus dev/dc/if_dc.c optional dc pci dev/dc/dcphy.c optional dc pci dev/dc/pnphy.c optional dc pci dev/dcons/dcons.c optional dcons dev/dcons/dcons_crom.c optional dcons_crom dev/dcons/dcons_os.c optional dcons dev/de/if_de.c optional de pci dev/dme/if_dme.c optional dme dev/drm/ati_pcigart.c optional drm dev/drm/drm_agpsupport.c optional drm dev/drm/drm_auth.c optional drm dev/drm/drm_bufs.c optional drm dev/drm/drm_context.c optional drm dev/drm/drm_dma.c optional drm dev/drm/drm_drawable.c optional drm dev/drm/drm_drv.c optional drm dev/drm/drm_fops.c optional drm dev/drm/drm_hashtab.c optional drm dev/drm/drm_ioctl.c optional drm dev/drm/drm_irq.c optional drm dev/drm/drm_lock.c optional drm dev/drm/drm_memory.c optional drm dev/drm/drm_mm.c optional drm dev/drm/drm_pci.c optional drm dev/drm/drm_scatter.c optional drm dev/drm/drm_sman.c optional drm dev/drm/drm_sysctl.c optional drm dev/drm/drm_vm.c optional drm dev/drm/mach64_dma.c optional mach64drm dev/drm/mach64_drv.c optional mach64drm dev/drm/mach64_irq.c optional mach64drm dev/drm/mach64_state.c optional mach64drm dev/drm/mga_dma.c optional mgadrm dev/drm/mga_drv.c optional mgadrm dev/drm/mga_irq.c optional mgadrm dev/drm/mga_state.c optional mgadrm dev/drm/mga_warp.c optional mgadrm dev/drm/r128_cce.c optional r128drm \ compile-with "${NORMAL_C} ${NO_WCONSTANT_CONVERSION}" dev/drm/r128_drv.c optional r128drm dev/drm/r128_irq.c optional r128drm dev/drm/r128_state.c optional r128drm dev/drm/savage_bci.c optional savagedrm dev/drm/savage_drv.c optional savagedrm dev/drm/savage_state.c optional savagedrm 
dev/drm/sis_drv.c	optional sisdrm
dev/drm/sis_ds.c	optional sisdrm
dev/drm/sis_mm.c	optional sisdrm
dev/drm/tdfx_drv.c	optional tdfxdrm
dev/drm/via_dma.c	optional viadrm
dev/drm/via_dmablit.c	optional viadrm
dev/drm/via_drv.c	optional viadrm
dev/drm/via_irq.c	optional viadrm
dev/drm/via_map.c	optional viadrm
dev/drm/via_mm.c	optional viadrm
dev/drm/via_verifier.c	optional viadrm
dev/drm/via_video.c	optional viadrm
dev/drm2/drm_agpsupport.c	optional drm2
dev/drm2/drm_auth.c	optional drm2
dev/drm2/drm_bufs.c	optional drm2
dev/drm2/drm_buffer.c	optional drm2
dev/drm2/drm_context.c	optional drm2
dev/drm2/drm_crtc.c	optional drm2
dev/drm2/drm_crtc_helper.c	optional drm2
dev/drm2/drm_dma.c	optional drm2
dev/drm2/drm_dp_helper.c	optional drm2
dev/drm2/drm_dp_iic_helper.c	optional drm2
dev/drm2/drm_drv.c	optional drm2
dev/drm2/drm_edid.c	optional drm2
dev/drm2/drm_fb_helper.c	optional drm2
dev/drm2/drm_fops.c	optional drm2
dev/drm2/drm_gem.c	optional drm2
dev/drm2/drm_gem_names.c	optional drm2
dev/drm2/drm_global.c	optional drm2
dev/drm2/drm_hashtab.c	optional drm2
dev/drm2/drm_ioctl.c	optional drm2
dev/drm2/drm_irq.c	optional drm2
dev/drm2/drm_linux_list_sort.c	optional drm2
dev/drm2/drm_lock.c	optional drm2
dev/drm2/drm_memory.c	optional drm2
dev/drm2/drm_mm.c	optional drm2
dev/drm2/drm_modes.c	optional drm2
dev/drm2/drm_pci.c	optional drm2
dev/drm2/drm_platform.c	optional drm2
dev/drm2/drm_scatter.c	optional drm2
dev/drm2/drm_stub.c	optional drm2
dev/drm2/drm_sysctl.c	optional drm2
dev/drm2/drm_vm.c	optional drm2
dev/drm2/drm_os_freebsd.c	optional drm2
dev/drm2/ttm/ttm_agp_backend.c	optional drm2
dev/drm2/ttm/ttm_lock.c	optional drm2
dev/drm2/ttm/ttm_object.c	optional drm2
dev/drm2/ttm/ttm_tt.c	optional drm2
dev/drm2/ttm/ttm_bo_util.c	optional drm2
dev/drm2/ttm/ttm_bo.c	optional drm2
dev/drm2/ttm/ttm_bo_manager.c	optional drm2
dev/drm2/ttm/ttm_execbuf_util.c	optional drm2
dev/drm2/ttm/ttm_memory.c	optional drm2
dev/drm2/ttm/ttm_page_alloc.c	optional drm2
dev/drm2/ttm/ttm_bo_vm.c	optional drm2
dev/drm2/ati_pcigart.c	optional drm2 agp pci
dev/ed/if_ed.c	optional ed
dev/ed/if_ed_novell.c	optional ed
dev/ed/if_ed_rtl80x9.c	optional ed
dev/ed/if_ed_pccard.c	optional ed pccard
dev/ed/if_ed_pci.c	optional ed pci
dev/efidev/efidev.c	optional efirt
dev/efidev/efirt.c	optional efirt
dev/efidev/efirtc.c	optional efirt
dev/e1000/if_em.c	optional em \
	compile-with "${NORMAL_C} -I$S/dev/e1000"
dev/e1000/em_txrx.c	optional em \
	compile-with "${NORMAL_C} -I$S/dev/e1000"
dev/e1000/igb_txrx.c	optional em \
	compile-with "${NORMAL_C} -I$S/dev/e1000"
dev/e1000/e1000_80003es2lan.c	optional em \
	compile-with "${NORMAL_C} -I$S/dev/e1000"
dev/e1000/e1000_82540.c	optional em \
	compile-with "${NORMAL_C} -I$S/dev/e1000"
dev/e1000/e1000_82541.c	optional em \
	compile-with "${NORMAL_C} -I$S/dev/e1000"
dev/e1000/e1000_82542.c	optional em \
	compile-with "${NORMAL_C} -I$S/dev/e1000"
dev/e1000/e1000_82543.c	optional em \
	compile-with "${NORMAL_C} -I$S/dev/e1000"
dev/e1000/e1000_82571.c	optional em \
	compile-with "${NORMAL_C} -I$S/dev/e1000"
dev/e1000/e1000_82575.c	optional em \
	compile-with "${NORMAL_C} -I$S/dev/e1000"
dev/e1000/e1000_ich8lan.c	optional em \
	compile-with "${NORMAL_C} -I$S/dev/e1000"
dev/e1000/e1000_i210.c	optional em \
	compile-with "${NORMAL_C} -I$S/dev/e1000"
dev/e1000/e1000_api.c	optional em \
	compile-with "${NORMAL_C} -I$S/dev/e1000"
dev/e1000/e1000_mac.c	optional em \
	compile-with "${NORMAL_C} -I$S/dev/e1000"
dev/e1000/e1000_manage.c	optional em \
	compile-with "${NORMAL_C} -I$S/dev/e1000"
dev/e1000/e1000_nvm.c	optional em \
	compile-with "${NORMAL_C} -I$S/dev/e1000"
dev/e1000/e1000_phy.c	optional em \
	compile-with "${NORMAL_C} -I$S/dev/e1000"
dev/e1000/e1000_vf.c	optional em \
	compile-with "${NORMAL_C} -I$S/dev/e1000"
dev/e1000/e1000_mbx.c	optional em \
	compile-with "${NORMAL_C} -I$S/dev/e1000"
dev/e1000/e1000_osdep.c	optional em \
	compile-with "${NORMAL_C} -I$S/dev/e1000"
dev/et/if_et.c	optional et
dev/ena/ena.c	optional ena \
	compile-with "${NORMAL_C} -I$S/contrib"
dev/ena/ena_sysctl.c	optional ena \
	compile-with "${NORMAL_C} -I$S/contrib"
contrib/ena-com/ena_com.c	optional ena
contrib/ena-com/ena_eth_com.c	optional ena
dev/ep/if_ep.c	optional ep
dev/ep/if_ep_isa.c	optional ep isa
dev/ep/if_ep_pccard.c	optional ep pccard
dev/esp/esp_pci.c	optional esp pci
dev/esp/ncr53c9x.c	optional esp
dev/etherswitch/arswitch/arswitch.c	optional arswitch
dev/etherswitch/arswitch/arswitch_reg.c	optional arswitch
dev/etherswitch/arswitch/arswitch_phy.c	optional arswitch
dev/etherswitch/arswitch/arswitch_8216.c	optional arswitch
dev/etherswitch/arswitch/arswitch_8226.c	optional arswitch
dev/etherswitch/arswitch/arswitch_8316.c	optional arswitch
dev/etherswitch/arswitch/arswitch_8327.c	optional arswitch
dev/etherswitch/arswitch/arswitch_7240.c	optional arswitch
dev/etherswitch/arswitch/arswitch_9340.c	optional arswitch
dev/etherswitch/arswitch/arswitch_vlans.c	optional arswitch
dev/etherswitch/etherswitch.c	optional etherswitch
dev/etherswitch/etherswitch_if.m	optional etherswitch
dev/etherswitch/ip17x/ip17x.c	optional ip17x
dev/etherswitch/ip17x/ip175c.c	optional ip17x
dev/etherswitch/ip17x/ip175d.c	optional ip17x
dev/etherswitch/ip17x/ip17x_phy.c	optional ip17x
dev/etherswitch/ip17x/ip17x_vlans.c	optional ip17x
dev/etherswitch/miiproxy.c	optional miiproxy
dev/etherswitch/rtl8366/rtl8366rb.c	optional rtl8366rb
dev/etherswitch/e6000sw/e6000sw.c	optional e6000sw
dev/etherswitch/e6000sw/e6060sw.c	optional e6060sw
dev/etherswitch/infineon/adm6996fc.c	optional adm6996fc
dev/etherswitch/micrel/ksz8995ma.c	optional ksz8995ma
dev/etherswitch/ukswitch/ukswitch.c	optional ukswitch
dev/evdev/cdev.c	optional evdev
dev/evdev/evdev.c	optional evdev
dev/evdev/evdev_mt.c	optional evdev
dev/evdev/evdev_utils.c	optional evdev
dev/evdev/uinput.c	optional evdev uinput
dev/ex/if_ex.c	optional ex
dev/ex/if_ex_isa.c	optional ex isa
dev/ex/if_ex_pccard.c	optional ex pccard
dev/exca/exca.c	optional cbb
dev/extres/clk/clk.c	optional ext_resources clk fdt
dev/extres/clk/clkdev_if.m	optional ext_resources clk fdt
dev/extres/clk/clknode_if.m	optional ext_resources clk fdt
dev/extres/clk/clk_bus.c	optional ext_resources clk fdt
dev/extres/clk/clk_div.c	optional ext_resources clk fdt
dev/extres/clk/clk_fixed.c	optional ext_resources clk fdt
dev/extres/clk/clk_gate.c	optional ext_resources clk fdt
dev/extres/clk/clk_mux.c	optional ext_resources clk fdt
dev/extres/phy/phy.c	optional ext_resources phy fdt
dev/extres/phy/phydev_if.m	optional ext_resources phy fdt
dev/extres/phy/phynode_if.m	optional ext_resources phy fdt
dev/extres/phy/phy_usb.c	optional ext_resources phy fdt
dev/extres/phy/phynode_usb_if.m	optional ext_resources phy fdt
dev/extres/hwreset/hwreset.c	optional ext_resources hwreset fdt
dev/extres/hwreset/hwreset_if.m	optional ext_resources hwreset fdt
dev/extres/nvmem/nvmem.c	optional ext_resources nvmem fdt
dev/extres/nvmem/nvmem_if.m	optional ext_resources nvmem fdt
dev/extres/regulator/regdev_if.m	optional ext_resources regulator fdt
dev/extres/regulator/regnode_if.m	optional ext_resources regulator fdt
dev/extres/regulator/regulator.c	optional ext_resources regulator fdt
dev/extres/regulator/regulator_bus.c	optional ext_resources regulator fdt
dev/extres/regulator/regulator_fixed.c	optional ext_resources regulator fdt
dev/extres/syscon/syscon.c	optional ext_resources syscon
dev/extres/syscon/syscon_generic.c	optional ext_resources syscon fdt
dev/extres/syscon/syscon_if.m	optional ext_resources syscon
dev/fb/fbd.c	optional fbd | vt
dev/fb/fb_if.m	standard
dev/fb/splash.c	optional sc splash
dev/fdt/fdt_clock.c	optional fdt fdt_clock
dev/fdt/fdt_clock_if.m	optional fdt fdt_clock
dev/fdt/fdt_common.c	optional fdt
dev/fdt/fdt_pinctrl.c	optional fdt fdt_pinctrl
dev/fdt/fdt_pinctrl_if.m	optional fdt fdt_pinctrl
dev/fdt/fdt_slicer.c	optional fdt cfi | fdt nand | fdt mx25l | fdt n25q
dev/fdt/fdt_static_dtb.S	optional fdt fdt_dtb_static \
	dependency "fdt_dtb_file"
dev/fdt/simplebus.c	optional fdt
dev/fdt/simple_mfd.c	optional fdt
dev/fe/if_fe.c	optional fe
dev/fe/if_fe_pccard.c	optional fe pccard
dev/filemon/filemon.c	optional filemon
dev/firewire/firewire.c	optional firewire
dev/firewire/fwcrom.c	optional firewire
dev/firewire/fwdev.c	optional firewire
dev/firewire/fwdma.c	optional firewire
dev/firewire/fwmem.c	optional firewire
dev/firewire/fwohci.c	optional firewire
dev/firewire/fwohci_pci.c	optional firewire pci
dev/firewire/if_fwe.c	optional fwe
dev/firewire/if_fwip.c	optional fwip
dev/firewire/sbp.c	optional sbp
dev/firewire/sbp_targ.c	optional sbp_targ
dev/flash/at45d.c	optional at45d
dev/flash/cqspi.c	optional cqspi fdt xdma
dev/flash/mx25l.c	optional mx25l
dev/flash/n25q.c	optional n25q fdt
dev/flash/qspi_if.m	optional cqspi fdt | n25q fdt
dev/fxp/if_fxp.c	optional fxp
dev/fxp/inphy.c	optional fxp
dev/gem/if_gem.c	optional gem
dev/gem/if_gem_pci.c	optional gem pci
dev/gem/if_gem_sbus.c	optional gem sbus
dev/gpio/gpiobacklight.c	optional gpiobacklight fdt
dev/gpio/gpiokeys.c	optional gpiokeys fdt
dev/gpio/gpiokeys_codes.c	optional gpiokeys fdt
dev/gpio/gpiobus.c	optional gpio \
	dependency "gpiobus_if.h"
dev/gpio/gpioc.c	optional gpio \
	dependency "gpio_if.h"
dev/gpio/gpioiic.c	optional gpioiic
dev/gpio/gpioled.c	optional gpioled !fdt
dev/gpio/gpioled_fdt.c	optional gpioled fdt
dev/gpio/gpiopower.c	optional gpiopower fdt
dev/gpio/gpioregulator.c	optional gpioregulator fdt ext_resources
dev/gpio/gpiospi.c	optional gpiospi
dev/gpio/gpioths.c	optional gpioths
dev/gpio/gpio_if.m	optional gpio
dev/gpio/gpiobus_if.m	optional gpio
dev/gpio/gpiopps.c	optional gpiopps
dev/gpio/ofw_gpiobus.c	optional fdt gpio
dev/hifn/hifn7751.c	optional hifn
dev/hme/if_hme.c	optional hme
dev/hme/if_hme_pci.c	optional hme pci
dev/hme/if_hme_sbus.c	optional hme sbus
dev/hptiop/hptiop.c	optional hptiop scbus
dev/hwpmc/hwpmc_logging.c	optional hwpmc
dev/hwpmc/hwpmc_mod.c	optional hwpmc
dev/hwpmc/hwpmc_soft.c	optional hwpmc
dev/ichiic/ig4_acpi.c	optional ig4 acpi iicbus
dev/ichiic/ig4_iic.c	optional ig4 iicbus
dev/ichiic/ig4_pci.c	optional ig4 pci iicbus
dev/ichsmb/ichsmb.c	optional ichsmb
dev/ichsmb/ichsmb_pci.c	optional ichsmb pci
dev/ida/ida.c	optional ida
dev/ida/ida_disk.c	optional ida
dev/ida/ida_pci.c	optional ida pci
dev/iicbus/ad7418.c	optional ad7418
dev/iicbus/ds1307.c	optional ds1307
dev/iicbus/ds13rtc.c	optional ds13rtc | ds133x | ds1374
dev/iicbus/ds1672.c	optional ds1672
dev/iicbus/ds3231.c	optional ds3231
dev/iicbus/rtc8583.c	optional rtc8583
dev/iicbus/syr827.c	optional ext_resources syr827
dev/iicbus/icee.c	optional icee
dev/iicbus/if_ic.c	optional ic
dev/iicbus/iic.c	optional iic
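# In the "optional" clauses throughout this file, whitespace-separated
# options are ANDed, "|" separates alternative conjunctions, and "!"
# negates: fdt_slicer.c above builds for "fdt cfi" or "fdt nand" or
# "fdt mx25l" or "fdt n25q", while gpioled.c builds only when gpioled is
# configured without fdt.  As an illustrative (hypothetical, commented
# out) example, an entry like
#dev/foo/foo_fdt.c	optional foo fdt | foo acpi
# would be compiled when foo is paired with either fdt or acpi.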
dev/iicbus/iic_recover_bus.c	optional iicbus
dev/iicbus/iicbb.c	optional iicbb
dev/iicbus/iicbb_if.m	optional iicbb
dev/iicbus/iicbus.c	optional iicbus
dev/iicbus/iicbus_if.m	optional iicbus
dev/iicbus/iiconf.c	optional iicbus
dev/iicbus/iicsmb.c	optional iicsmb \
	dependency "iicbus_if.h"
dev/iicbus/iicoc.c	optional iicoc
dev/iicbus/isl12xx.c	optional isl12xx
dev/iicbus/lm75.c	optional lm75
dev/iicbus/nxprtc.c	optional nxprtc | pcf8563
dev/iicbus/ofw_iicbus.c	optional fdt iicbus
dev/iicbus/s35390a.c	optional s35390a
dev/iir/iir.c	optional iir
dev/iir/iir_ctrl.c	optional iir
dev/iir/iir_pci.c	optional iir pci
dev/intpm/intpm.c	optional intpm pci
# XXX Work around clang warning, until maintainer approves fix.
dev/ips/ips.c	optional ips \
	compile-with "${NORMAL_C} ${NO_WSOMETIMES_UNINITIALIZED}"
dev/ips/ips_commands.c	optional ips
dev/ips/ips_disk.c	optional ips
dev/ips/ips_ioctl.c	optional ips
dev/ips/ips_pci.c	optional ips pci
dev/ipw/if_ipw.c	optional ipw
ipwbssfw.c	optional ipwbssfw | ipwfw \
	compile-with "${AWK} -f $S/tools/fw_stub.awk ipw_bss.fw:ipw_bss:130 -lintel_ipw -mipw_bss -c${.TARGET}" \
	no-implicit-rule before-depend local \
	clean "ipwbssfw.c"
ipw_bss.fwo	optional ipwbssfw | ipwfw \
	dependency "ipw_bss.fw" \
	compile-with "${NORMAL_FWO}" \
	no-implicit-rule \
	clean "ipw_bss.fwo"
ipw_bss.fw	optional ipwbssfw | ipwfw \
	dependency "$S/contrib/dev/ipw/ipw2100-1.3.fw.uu" \
	compile-with "${NORMAL_FW}" \
	no-obj no-implicit-rule \
	clean "ipw_bss.fw"
ipwibssfw.c	optional ipwibssfw | ipwfw \
	compile-with "${AWK} -f $S/tools/fw_stub.awk ipw_ibss.fw:ipw_ibss:130 -lintel_ipw -mipw_ibss -c${.TARGET}" \
	no-implicit-rule before-depend local \
	clean "ipwibssfw.c"
ipw_ibss.fwo	optional ipwibssfw | ipwfw \
	dependency "ipw_ibss.fw" \
	compile-with "${NORMAL_FWO}" \
	no-implicit-rule \
	clean "ipw_ibss.fwo"
ipw_ibss.fw	optional ipwibssfw | ipwfw \
	dependency "$S/contrib/dev/ipw/ipw2100-1.3-i.fw.uu" \
	compile-with "${NORMAL_FW}" \
	no-obj no-implicit-rule \
	clean "ipw_ibss.fw"
ipwmonitorfw.c	optional ipwmonitorfw | ipwfw \
	compile-with "${AWK} -f $S/tools/fw_stub.awk ipw_monitor.fw:ipw_monitor:130 -lintel_ipw -mipw_monitor -c${.TARGET}" \
	no-implicit-rule before-depend local \
	clean "ipwmonitorfw.c"
ipw_monitor.fwo	optional ipwmonitorfw | ipwfw \
	dependency "ipw_monitor.fw" \
	compile-with "${NORMAL_FWO}" \
	no-implicit-rule \
	clean "ipw_monitor.fwo"
ipw_monitor.fw	optional ipwmonitorfw | ipwfw \
	dependency "$S/contrib/dev/ipw/ipw2100-1.3-p.fw.uu" \
	compile-with "${NORMAL_FW}" \
	no-obj no-implicit-rule \
	clean "ipw_monitor.fw"
dev/iscsi/icl.c	optional iscsi
dev/iscsi/icl_conn_if.m	optional cfiscsi | iscsi
dev/iscsi/icl_soft.c	optional iscsi
dev/iscsi/icl_soft_proxy.c	optional iscsi
dev/iscsi/iscsi.c	optional iscsi scbus
dev/iscsi_initiator/iscsi.c	optional iscsi_initiator scbus
dev/iscsi_initiator/iscsi_subr.c	optional iscsi_initiator scbus
dev/iscsi_initiator/isc_cam.c	optional iscsi_initiator scbus
dev/iscsi_initiator/isc_soc.c	optional iscsi_initiator scbus
dev/iscsi_initiator/isc_sm.c	optional iscsi_initiator scbus
dev/iscsi_initiator/isc_subr.c	optional iscsi_initiator scbus
dev/ismt/ismt.c	optional ismt
dev/isl/isl.c	optional isl iicbus
dev/isp/isp.c	optional isp
dev/isp/isp_freebsd.c	optional isp
dev/isp/isp_library.c	optional isp
dev/isp/isp_pci.c	optional isp pci
dev/isp/isp_sbus.c	optional isp sbus
dev/isp/isp_target.c	optional isp
dev/ispfw/ispfw.c	optional ispfw
dev/iwi/if_iwi.c	optional iwi
iwibssfw.c	optional iwibssfw | iwifw \
	compile-with "${AWK} -f $S/tools/fw_stub.awk iwi_bss.fw:iwi_bss:300 -lintel_iwi -miwi_bss -c${.TARGET}" \
	no-implicit-rule before-depend local \
	clean "iwibssfw.c"
iwi_bss.fwo	optional iwibssfw | iwifw \
	dependency "iwi_bss.fw" \
	compile-with "${NORMAL_FWO}" \
	no-implicit-rule \
	clean "iwi_bss.fwo"
iwi_bss.fw	optional iwibssfw | iwifw \
	dependency "$S/contrib/dev/iwi/ipw2200-bss.fw.uu" \
	compile-with "${NORMAL_FW}" \
	no-obj no-implicit-rule \
	clean "iwi_bss.fw"
iwiibssfw.c	optional iwiibssfw | iwifw \
	compile-with "${AWK} -f $S/tools/fw_stub.awk iwi_ibss.fw:iwi_ibss:300 -lintel_iwi -miwi_ibss -c${.TARGET}" \
	no-implicit-rule before-depend local \
	clean "iwiibssfw.c"
iwi_ibss.fwo	optional iwiibssfw | iwifw \
	dependency "iwi_ibss.fw" \
	compile-with "${NORMAL_FWO}" \
	no-implicit-rule \
	clean "iwi_ibss.fwo"
iwi_ibss.fw	optional iwiibssfw | iwifw \
	dependency "$S/contrib/dev/iwi/ipw2200-ibss.fw.uu" \
	compile-with "${NORMAL_FW}" \
	no-obj no-implicit-rule \
	clean "iwi_ibss.fw"
iwimonitorfw.c	optional iwimonitorfw | iwifw \
	compile-with "${AWK} -f $S/tools/fw_stub.awk iwi_monitor.fw:iwi_monitor:300 -lintel_iwi -miwi_monitor -c${.TARGET}" \
	no-implicit-rule before-depend local \
	clean "iwimonitorfw.c"
iwi_monitor.fwo	optional iwimonitorfw | iwifw \
	dependency "iwi_monitor.fw" \
	compile-with "${NORMAL_FWO}" \
	no-implicit-rule \
	clean "iwi_monitor.fwo"
iwi_monitor.fw	optional iwimonitorfw | iwifw \
	dependency "$S/contrib/dev/iwi/ipw2200-sniffer.fw.uu" \
	compile-with "${NORMAL_FW}" \
	no-obj no-implicit-rule \
	clean "iwi_monitor.fw"
dev/iwm/if_iwm.c	optional iwm
dev/iwm/if_iwm_7000.c	optional iwm
dev/iwm/if_iwm_8000.c	optional iwm
dev/iwm/if_iwm_binding.c	optional iwm
dev/iwm/if_iwm_fw.c	optional iwm
dev/iwm/if_iwm_led.c	optional iwm
dev/iwm/if_iwm_mac_ctxt.c	optional iwm
dev/iwm/if_iwm_notif_wait.c	optional iwm
dev/iwm/if_iwm_pcie_trans.c	optional iwm
dev/iwm/if_iwm_phy_ctxt.c	optional iwm
dev/iwm/if_iwm_phy_db.c	optional iwm
dev/iwm/if_iwm_power.c	optional iwm
dev/iwm/if_iwm_scan.c	optional iwm
dev/iwm/if_iwm_sf.c	optional iwm
dev/iwm/if_iwm_sta.c	optional iwm
dev/iwm/if_iwm_time_event.c	optional iwm
dev/iwm/if_iwm_util.c	optional iwm
iwm3160fw.c	optional iwm3160fw | iwmfw \
	compile-with "${AWK} -f $S/tools/fw_stub.awk iwm3160.fw:iwm3160fw -miwm3160fw -c${.TARGET}" \
	no-implicit-rule before-depend local \
	clean "iwm3160fw.c"
iwm3160fw.fwo	optional iwm3160fw | iwmfw \
	dependency "iwm3160.fw" \
	compile-with "${NORMAL_FWO}" \
	no-implicit-rule \
	clean "iwm3160fw.fwo"
iwm3160.fw	optional iwm3160fw | iwmfw \
	dependency "$S/contrib/dev/iwm/iwm-3160-17.fw.uu" \
	compile-with "${NORMAL_FW}" \
	no-obj no-implicit-rule \
	clean "iwm3160.fw"
iwm3168fw.c	optional iwm3168fw | iwmfw \
	compile-with "${AWK} -f $S/tools/fw_stub.awk iwm3168.fw:iwm3168fw -miwm3168fw -c${.TARGET}" \
	no-implicit-rule before-depend local \
	clean "iwm3168fw.c"
iwm3168fw.fwo	optional iwm3168fw | iwmfw \
	dependency "iwm3168.fw" \
	compile-with "${NORMAL_FWO}" \
	no-implicit-rule \
	clean "iwm3168fw.fwo"
iwm3168.fw	optional iwm3168fw | iwmfw \
	dependency "$S/contrib/dev/iwm/iwm-3168-22.fw.uu" \
	compile-with "${NORMAL_FW}" \
	no-obj no-implicit-rule \
	clean "iwm3168.fw"
iwm7260fw.c	optional iwm7260fw | iwmfw \
	compile-with "${AWK} -f $S/tools/fw_stub.awk iwm7260.fw:iwm7260fw -miwm7260fw -c${.TARGET}" \
	no-implicit-rule before-depend local \
	clean "iwm7260fw.c"
iwm7260fw.fwo	optional iwm7260fw | iwmfw \
	dependency "iwm7260.fw" \
	compile-with "${NORMAL_FWO}" \
	no-implicit-rule \
	clean "iwm7260fw.fwo"
iwm7260.fw	optional iwm7260fw | iwmfw \
	dependency "$S/contrib/dev/iwm/iwm-7260-17.fw.uu" \
	compile-with "${NORMAL_FW}" \
	no-obj no-implicit-rule \
	clean "iwm7260.fw"
iwm7265fw.c	optional iwm7265fw | iwmfw \
	compile-with "${AWK} -f $S/tools/fw_stub.awk iwm7265.fw:iwm7265fw -miwm7265fw -c${.TARGET}" \
	no-implicit-rule before-depend local \
	clean "iwm7265fw.c"
iwm7265fw.fwo	optional iwm7265fw | iwmfw \
	dependency "iwm7265.fw" \
	compile-with "${NORMAL_FWO}" \
	no-implicit-rule \
	clean "iwm7265fw.fwo"
iwm7265.fw	optional iwm7265fw | iwmfw \
	dependency "$S/contrib/dev/iwm/iwm-7265-17.fw.uu" \
	compile-with "${NORMAL_FW}" \
	no-obj no-implicit-rule \
	clean "iwm7265.fw"
iwm7265Dfw.c	optional iwm7265Dfw | iwmfw \
	compile-with "${AWK} -f $S/tools/fw_stub.awk iwm7265D.fw:iwm7265Dfw -miwm7265Dfw -c${.TARGET}" \
	no-implicit-rule before-depend local \
	clean "iwm7265Dfw.c"
iwm7265Dfw.fwo	optional iwm7265Dfw | iwmfw \
	dependency "iwm7265D.fw" \
	compile-with "${NORMAL_FWO}" \
	no-implicit-rule \
	clean "iwm7265Dfw.fwo"
iwm7265D.fw	optional iwm7265Dfw | iwmfw \
	dependency "$S/contrib/dev/iwm/iwm-7265D-17.fw.uu" \
	compile-with "${NORMAL_FW}" \
	no-obj no-implicit-rule \
	clean "iwm7265D.fw"
iwm8000Cfw.c	optional iwm8000Cfw | iwmfw \
	compile-with "${AWK} -f $S/tools/fw_stub.awk iwm8000C.fw:iwm8000Cfw -miwm8000Cfw -c${.TARGET}" \
	no-implicit-rule before-depend local \
	clean "iwm8000Cfw.c"
iwm8000Cfw.fwo	optional iwm8000Cfw | iwmfw \
	dependency "iwm8000C.fw" \
	compile-with "${NORMAL_FWO}" \
	no-implicit-rule \
	clean "iwm8000Cfw.fwo"
iwm8000C.fw	optional iwm8000Cfw | iwmfw \
	dependency "$S/contrib/dev/iwm/iwm-8000C-16.fw.uu" \
	compile-with "${NORMAL_FW}" \
	no-obj no-implicit-rule \
	clean "iwm8000C.fw"
iwm8265.fw	optional iwm8265fw | iwmfw \
	dependency "$S/contrib/dev/iwm/iwm-8265-22.fw.uu" \
	compile-with "${NORMAL_FW}" \
	no-obj no-implicit-rule \
	clean "iwm8265.fw"
iwm8265fw.c	optional iwm8265fw | iwmfw \
	compile-with "${AWK} -f $S/tools/fw_stub.awk iwm8265.fw:iwm8265fw -miwm8265fw -c${.TARGET}" \
	no-implicit-rule before-depend local \
	clean "iwm8265fw.c"
iwm8265fw.fwo	optional iwm8265fw | iwmfw \
	dependency "iwm8265.fw" \
	compile-with "${NORMAL_FWO}" \
	no-implicit-rule \
	clean "iwm8265fw.fwo"
dev/iwn/if_iwn.c	optional iwn
iwn1000fw.c	optional iwn1000fw | iwnfw \
	compile-with "${AWK} -f $S/tools/fw_stub.awk iwn1000.fw:iwn1000fw -miwn1000fw -c${.TARGET}" \
	no-implicit-rule before-depend local \
	clean "iwn1000fw.c"
iwn1000fw.fwo	optional iwn1000fw | iwnfw \
	dependency "iwn1000.fw" \
	compile-with "${NORMAL_FWO}" \
	no-implicit-rule \
	clean "iwn1000fw.fwo"
iwn1000.fw	optional iwn1000fw | iwnfw \
	dependency "$S/contrib/dev/iwn/iwlwifi-1000-39.31.5.1.fw.uu" \
	compile-with "${NORMAL_FW}" \
	no-obj no-implicit-rule \
	clean "iwn1000.fw"
iwn100fw.c	optional iwn100fw | iwnfw \
	compile-with "${AWK} -f $S/tools/fw_stub.awk iwn100.fw:iwn100fw -miwn100fw -c${.TARGET}" \
	no-implicit-rule before-depend local \
	clean "iwn100fw.c"
iwn100fw.fwo	optional iwn100fw | iwnfw \
	dependency "iwn100.fw" \
	compile-with "${NORMAL_FWO}" \
	no-implicit-rule \
	clean "iwn100fw.fwo"
iwn100.fw	optional iwn100fw | iwnfw \
	dependency "$S/contrib/dev/iwn/iwlwifi-100-39.31.5.1.fw.uu" \
	compile-with "${NORMAL_FW}" \
	no-obj no-implicit-rule \
	clean "iwn100.fw"
iwn105fw.c	optional iwn105fw | iwnfw \
	compile-with "${AWK} -f $S/tools/fw_stub.awk iwn105.fw:iwn105fw -miwn105fw -c${.TARGET}" \
	no-implicit-rule before-depend local \
	clean "iwn105fw.c"
iwn105fw.fwo	optional iwn105fw | iwnfw \
	dependency "iwn105.fw" \
	compile-with "${NORMAL_FWO}" \
	no-implicit-rule \
	clean "iwn105fw.fwo"
iwn105.fw	optional iwn105fw | iwnfw \
	dependency "$S/contrib/dev/iwn/iwlwifi-105-6-18.168.6.1.fw.uu" \
	compile-with "${NORMAL_FW}" \
	no-obj no-implicit-rule \
	clean "iwn105.fw"
compile-with "${NORMAL_FW}" \ no-obj no-implicit-rule \ clean "iwn105.fw" iwn135fw.c optional iwn135fw | iwnfw \ compile-with "${AWK} -f $S/tools/fw_stub.awk iwn135.fw:iwn135fw -miwn135fw -c${.TARGET}" \ no-implicit-rule before-depend local \ clean "iwn135fw.c" iwn135fw.fwo optional iwn135fw | iwnfw \ dependency "iwn135.fw" \ compile-with "${NORMAL_FWO}" \ no-implicit-rule \ clean "iwn135fw.fwo" iwn135.fw optional iwn135fw | iwnfw \ dependency "$S/contrib/dev/iwn/iwlwifi-135-6-18.168.6.1.fw.uu" \ compile-with "${NORMAL_FW}" \ no-obj no-implicit-rule \ clean "iwn135.fw" iwn2000fw.c optional iwn2000fw | iwnfw \ compile-with "${AWK} -f $S/tools/fw_stub.awk iwn2000.fw:iwn2000fw -miwn2000fw -c${.TARGET}" \ no-implicit-rule before-depend local \ clean "iwn2000fw.c" iwn2000fw.fwo optional iwn2000fw | iwnfw \ dependency "iwn2000.fw" \ compile-with "${NORMAL_FWO}" \ no-implicit-rule \ clean "iwn2000fw.fwo" iwn2000.fw optional iwn2000fw | iwnfw \ dependency "$S/contrib/dev/iwn/iwlwifi-2000-18.168.6.1.fw.uu" \ compile-with "${NORMAL_FW}" \ no-obj no-implicit-rule \ clean "iwn2000.fw" iwn2030fw.c optional iwn2030fw | iwnfw \ compile-with "${AWK} -f $S/tools/fw_stub.awk iwn2030.fw:iwn2030fw -miwn2030fw -c${.TARGET}" \ no-implicit-rule before-depend local \ clean "iwn2030fw.c" iwn2030fw.fwo optional iwn2030fw | iwnfw \ dependency "iwn2030.fw" \ compile-with "${NORMAL_FWO}" \ no-implicit-rule \ clean "iwn2030fw.fwo" iwn2030.fw optional iwn2030fw | iwnfw \ dependency "$S/contrib/dev/iwn/iwnwifi-2030-18.168.6.1.fw.uu" \ compile-with "${NORMAL_FW}" \ no-obj no-implicit-rule \ clean "iwn2030.fw" iwn4965fw.c optional iwn4965fw | iwnfw \ compile-with "${AWK} -f $S/tools/fw_stub.awk iwn4965.fw:iwn4965fw -miwn4965fw -c${.TARGET}" \ no-implicit-rule before-depend local \ clean "iwn4965fw.c" iwn4965fw.fwo optional iwn4965fw | iwnfw \ dependency "iwn4965.fw" \ compile-with "${NORMAL_FWO}" \ no-implicit-rule \ clean "iwn4965fw.fwo" iwn4965.fw optional iwn4965fw | iwnfw \ dependency "$S/contrib/dev/iwn/iwlwifi-4965-228.61.2.24.fw.uu" \ compile-with "${NORMAL_FW}" \ no-obj no-implicit-rule \ clean "iwn4965.fw" iwn5000fw.c optional iwn5000fw | iwnfw \ compile-with "${AWK} -f $S/tools/fw_stub.awk iwn5000.fw:iwn5000fw -miwn5000fw -c${.TARGET}" \ no-implicit-rule before-depend local \ clean "iwn5000fw.c" iwn5000fw.fwo optional iwn5000fw | iwnfw \ dependency "iwn5000.fw" \ compile-with "${NORMAL_FWO}" \ no-implicit-rule \ clean "iwn5000fw.fwo" iwn5000.fw optional iwn5000fw | iwnfw \ dependency "$S/contrib/dev/iwn/iwlwifi-5000-8.83.5.1.fw.uu" \ compile-with "${NORMAL_FW}" \ no-obj no-implicit-rule \ clean "iwn5000.fw" iwn5150fw.c optional iwn5150fw | iwnfw \ compile-with "${AWK} -f $S/tools/fw_stub.awk iwn5150.fw:iwn5150fw -miwn5150fw -c${.TARGET}" \ no-implicit-rule before-depend local \ clean "iwn5150fw.c" iwn5150fw.fwo optional iwn5150fw | iwnfw \ dependency "iwn5150.fw" \ compile-with "${NORMAL_FWO}" \ no-implicit-rule \ clean "iwn5150fw.fwo" iwn5150.fw optional iwn5150fw | iwnfw \ dependency "$S/contrib/dev/iwn/iwlwifi-5150-8.24.2.2.fw.uu"\ compile-with "${NORMAL_FW}" \ no-obj no-implicit-rule \ clean "iwn5150.fw" iwn6000fw.c optional iwn6000fw | iwnfw \ compile-with "${AWK} -f $S/tools/fw_stub.awk iwn6000.fw:iwn6000fw -miwn6000fw -c${.TARGET}" \ no-implicit-rule before-depend local \ clean "iwn6000fw.c" iwn6000fw.fwo optional iwn6000fw | iwnfw \ dependency "iwn6000.fw" \ compile-with "${NORMAL_FWO}" \ no-implicit-rule \ clean "iwn6000fw.fwo" iwn6000.fw optional iwn6000fw | iwnfw \ dependency 
"$S/contrib/dev/iwn/iwlwifi-6000-9.221.4.1.fw.uu" \ compile-with "${NORMAL_FW}" \ no-obj no-implicit-rule \ clean "iwn6000.fw" iwn6000g2afw.c optional iwn6000g2afw | iwnfw \ compile-with "${AWK} -f $S/tools/fw_stub.awk iwn6000g2a.fw:iwn6000g2afw -miwn6000g2afw -c${.TARGET}" \ no-implicit-rule before-depend local \ clean "iwn6000g2afw.c" iwn6000g2afw.fwo optional iwn6000g2afw | iwnfw \ dependency "iwn6000g2a.fw" \ compile-with "${NORMAL_FWO}" \ no-implicit-rule \ clean "iwn6000g2afw.fwo" iwn6000g2a.fw optional iwn6000g2afw | iwnfw \ dependency "$S/contrib/dev/iwn/iwlwifi-6000g2a-18.168.6.1.fw.uu" \ compile-with "${NORMAL_FW}" \ no-obj no-implicit-rule \ clean "iwn6000g2a.fw" iwn6000g2bfw.c optional iwn6000g2bfw | iwnfw \ compile-with "${AWK} -f $S/tools/fw_stub.awk iwn6000g2b.fw:iwn6000g2bfw -miwn6000g2bfw -c${.TARGET}" \ no-implicit-rule before-depend local \ clean "iwn6000g2bfw.c" iwn6000g2bfw.fwo optional iwn6000g2bfw | iwnfw \ dependency "iwn6000g2b.fw" \ compile-with "${NORMAL_FWO}" \ no-implicit-rule \ clean "iwn6000g2bfw.fwo" iwn6000g2b.fw optional iwn6000g2bfw | iwnfw \ dependency "$S/contrib/dev/iwn/iwlwifi-6000g2b-18.168.6.1.fw.uu" \ compile-with "${NORMAL_FW}" \ no-obj no-implicit-rule \ clean "iwn6000g2b.fw" iwn6050fw.c optional iwn6050fw | iwnfw \ compile-with "${AWK} -f $S/tools/fw_stub.awk iwn6050.fw:iwn6050fw -miwn6050fw -c${.TARGET}" \ no-implicit-rule before-depend local \ clean "iwn6050fw.c" iwn6050fw.fwo optional iwn6050fw | iwnfw \ dependency "iwn6050.fw" \ compile-with "${NORMAL_FWO}" \ no-implicit-rule \ clean "iwn6050fw.fwo" iwn6050.fw optional iwn6050fw | iwnfw \ dependency "$S/contrib/dev/iwn/iwlwifi-6050-41.28.5.1.fw.uu" \ compile-with "${NORMAL_FW}" \ no-obj no-implicit-rule \ clean "iwn6050.fw" dev/ixgbe/if_ix.c optional ix inet \ compile-with "${NORMAL_C} -I$S/dev/ixgbe -DSMP" dev/ixgbe/if_ixv.c optional ixv inet \ compile-with "${NORMAL_C} -I$S/dev/ixgbe -DSMP" dev/ixgbe/if_bypass.c optional ix inet \ compile-with "${NORMAL_C} -I$S/dev/ixgbe" dev/ixgbe/if_fdir.c optional ix inet | ixv inet \ compile-with "${NORMAL_C} -I$S/dev/ixgbe" dev/ixgbe/if_sriov.c optional ix inet \ compile-with "${NORMAL_C} -I$S/dev/ixgbe" dev/ixgbe/ix_txrx.c optional ix inet | ixv inet \ compile-with "${NORMAL_C} -I$S/dev/ixgbe" dev/ixgbe/ixgbe_osdep.c optional ix inet | ixv inet \ compile-with "${NORMAL_C} -I$S/dev/ixgbe" dev/ixgbe/ixgbe_phy.c optional ix inet | ixv inet \ compile-with "${NORMAL_C} -I$S/dev/ixgbe" dev/ixgbe/ixgbe_api.c optional ix inet | ixv inet \ compile-with "${NORMAL_C} -I$S/dev/ixgbe" dev/ixgbe/ixgbe_common.c optional ix inet | ixv inet \ compile-with "${NORMAL_C} -I$S/dev/ixgbe" dev/ixgbe/ixgbe_mbx.c optional ix inet | ixv inet \ compile-with "${NORMAL_C} -I$S/dev/ixgbe" dev/ixgbe/ixgbe_vf.c optional ix inet | ixv inet \ compile-with "${NORMAL_C} -I$S/dev/ixgbe" dev/ixgbe/ixgbe_82598.c optional ix inet | ixv inet \ compile-with "${NORMAL_C} -I$S/dev/ixgbe" dev/ixgbe/ixgbe_82599.c optional ix inet | ixv inet \ compile-with "${NORMAL_C} -I$S/dev/ixgbe" dev/ixgbe/ixgbe_x540.c optional ix inet | ixv inet \ compile-with "${NORMAL_C} -I$S/dev/ixgbe" dev/ixgbe/ixgbe_x550.c optional ix inet | ixv inet \ compile-with "${NORMAL_C} -I$S/dev/ixgbe" dev/ixgbe/ixgbe_dcb.c optional ix inet | ixv inet \ compile-with "${NORMAL_C} -I$S/dev/ixgbe" dev/ixgbe/ixgbe_dcb_82598.c optional ix inet | ixv inet \ compile-with "${NORMAL_C} -I$S/dev/ixgbe" dev/ixgbe/ixgbe_dcb_82599.c optional ix inet | ixv inet \ compile-with "${NORMAL_C} -I$S/dev/ixgbe" dev/jedec_dimm/jedec_dimm.c optional 
dev/jme/if_jme.c	optional jme pci
dev/kbd/kbd.c	optional atkbd | pckbd | sc | ukbd | vt
dev/kbdmux/kbdmux.c	optional kbdmux
dev/ksyms/ksyms.c	optional ksyms
dev/le/am7990.c	optional le
dev/le/am79900.c	optional le
dev/le/if_le_pci.c	optional le pci
dev/le/lance.c	optional le
dev/led/led.c	standard
dev/lge/if_lge.c	optional lge
dev/liquidio/base/cn23xx_pf_device.c	optional lio \
	compile-with "${NORMAL_C} \
	-I$S/dev/liquidio -I$S/dev/liquidio/base -DSMP"
dev/liquidio/base/lio_console.c	optional lio \
	compile-with "${NORMAL_C} \
	-I$S/dev/liquidio -I$S/dev/liquidio/base -DSMP"
dev/liquidio/base/lio_ctrl.c	optional lio \
	compile-with "${NORMAL_C} \
	-I$S/dev/liquidio -I$S/dev/liquidio/base -DSMP"
dev/liquidio/base/lio_device.c	optional lio \
	compile-with "${NORMAL_C} \
	-I$S/dev/liquidio -I$S/dev/liquidio/base -DSMP"
dev/liquidio/base/lio_droq.c	optional lio \
	compile-with "${NORMAL_C} \
	-I$S/dev/liquidio -I$S/dev/liquidio/base -DSMP"
dev/liquidio/base/lio_mem_ops.c	optional lio \
	compile-with "${NORMAL_C} \
	-I$S/dev/liquidio -I$S/dev/liquidio/base -DSMP"
dev/liquidio/base/lio_request_manager.c	optional lio \
	compile-with "${NORMAL_C} \
	-I$S/dev/liquidio -I$S/dev/liquidio/base -DSMP"
dev/liquidio/base/lio_response_manager.c	optional lio \
	compile-with "${NORMAL_C} \
	-I$S/dev/liquidio -I$S/dev/liquidio/base -DSMP"
dev/liquidio/lio_core.c	optional lio \
	compile-with "${NORMAL_C} \
	-I$S/dev/liquidio -I$S/dev/liquidio/base -DSMP"
dev/liquidio/lio_ioctl.c	optional lio \
	compile-with "${NORMAL_C} \
	-I$S/dev/liquidio -I$S/dev/liquidio/base -DSMP"
dev/liquidio/lio_main.c	optional lio \
	compile-with "${NORMAL_C} \
	-I$S/dev/liquidio -I$S/dev/liquidio/base -DSMP"
dev/liquidio/lio_rss.c	optional lio \
	compile-with "${NORMAL_C} \
	-I$S/dev/liquidio -I$S/dev/liquidio/base -DSMP"
dev/liquidio/lio_rxtx.c	optional lio \
	compile-with "${NORMAL_C} \
	-I$S/dev/liquidio -I$S/dev/liquidio/base -DSMP"
dev/liquidio/lio_sysctl.c	optional lio \
	compile-with "${NORMAL_C} \
	-I$S/dev/liquidio -I$S/dev/liquidio/base -DSMP"
lio.c	optional lio \
	compile-with "${AWK} -f $S/tools/fw_stub.awk lio_23xx_nic.bin.fw:lio_23xx_nic.bin -mlio_23xx_nic.bin -c${.TARGET}" \
	no-implicit-rule before-depend local \
	clean "lio.c"
lio_23xx_nic.bin.fw.fwo	optional lio \
	dependency "lio_23xx_nic.bin.fw" \
	compile-with "${NORMAL_FWO}" \
	no-implicit-rule \
	clean "lio_23xx_nic.bin.fw.fwo"
lio_23xx_nic.bin.fw	optional lio \
	dependency "$S/contrib/dev/liquidio/lio_23xx_nic.bin.uu" \
	compile-with "${NORMAL_FW}" \
	no-obj no-implicit-rule \
	clean "lio_23xx_nic.bin.fw"
dev/malo/if_malo.c	optional malo
dev/malo/if_malohal.c	optional malo
dev/malo/if_malo_pci.c	optional malo pci
dev/mc146818/mc146818.c	optional mc146818
dev/md/md.c	optional md
dev/mdio/mdio_if.m	optional miiproxy | mdio
dev/mdio/mdio.c	optional miiproxy | mdio
dev/mem/memdev.c	optional mem
dev/mem/memutil.c	optional mem
dev/mfi/mfi.c	optional mfi
dev/mfi/mfi_debug.c	optional mfi
dev/mfi/mfi_pci.c	optional mfi pci
dev/mfi/mfi_disk.c	optional mfi
dev/mfi/mfi_syspd.c	optional mfi
dev/mfi/mfi_tbolt.c	optional mfi
dev/mfi/mfi_linux.c	optional mfi compat_linux
dev/mfi/mfi_cam.c	optional mfip scbus
dev/mii/acphy.c	optional miibus | acphy
dev/mii/amphy.c	optional miibus | amphy
dev/mii/atphy.c	optional miibus | atphy
dev/mii/axphy.c	optional miibus | axphy
dev/mii/bmtphy.c	optional miibus | bmtphy
dev/mii/brgphy.c	optional miibus | brgphy
dev/mii/ciphy.c	optional miibus | ciphy
dev/mii/e1000phy.c	optional miibus | e1000phy
dev/mii/gentbi.c	optional miibus | gentbi
dev/mii/icsphy.c	optional miibus | icsphy
dev/mii/ip1000phy.c	optional miibus | ip1000phy
dev/mii/jmphy.c	optional miibus | jmphy
dev/mii/lxtphy.c	optional miibus | lxtphy
dev/mii/micphy.c	optional miibus fdt | micphy fdt
dev/mii/mii.c	optional miibus | mii
dev/mii/mii_bitbang.c	optional miibus | mii_bitbang
dev/mii/mii_physubr.c	optional miibus | mii
dev/mii/mii_fdt.c	optional miibus fdt | mii fdt
dev/mii/miibus_if.m	optional miibus | mii
dev/mii/mlphy.c	optional miibus | mlphy
dev/mii/nsgphy.c	optional miibus | nsgphy
dev/mii/nsphy.c	optional miibus | nsphy
dev/mii/nsphyter.c	optional miibus | nsphyter
dev/mii/pnaphy.c	optional miibus | pnaphy
dev/mii/qsphy.c	optional miibus | qsphy
dev/mii/rdcphy.c	optional miibus | rdcphy
dev/mii/rgephy.c	optional miibus | rgephy
dev/mii/rlphy.c	optional miibus | rlphy
dev/mii/rlswitch.c	optional rlswitch
dev/mii/smcphy.c	optional miibus | smcphy
dev/mii/smscphy.c	optional miibus | smscphy
dev/mii/tdkphy.c	optional miibus | tdkphy
dev/mii/tlphy.c	optional miibus | tlphy
dev/mii/truephy.c	optional miibus | truephy
dev/mii/ukphy.c	optional miibus | mii
dev/mii/ukphy_subr.c	optional miibus | mii
dev/mii/vscphy.c	optional miibus | vscphy
dev/mii/xmphy.c	optional miibus | xmphy
dev/mk48txx/mk48txx.c	optional mk48txx
dev/mlx/mlx.c	optional mlx
dev/mlx/mlx_disk.c	optional mlx
dev/mlx/mlx_pci.c	optional mlx pci
dev/mly/mly.c	optional mly
dev/mmc/mmc_subr.c	optional mmc | mmcsd !mmccam
dev/mmc/mmc.c	optional mmc !mmccam
dev/mmc/mmcbr_if.m	standard
dev/mmc/mmcbus_if.m	standard
dev/mmc/mmcsd.c	optional mmcsd !mmccam
dev/mmcnull/mmcnull.c	optional mmcnull
dev/mn/if_mn.c	optional mn pci
dev/mpr/mpr.c	optional mpr
dev/mpr/mpr_config.c	optional mpr
# XXX Work around clang warning, until maintainer approves fix.
dev/mpr/mpr_mapping.c	optional mpr \
	compile-with "${NORMAL_C} ${NO_WSOMETIMES_UNINITIALIZED}"
dev/mpr/mpr_pci.c	optional mpr pci
dev/mpr/mpr_sas.c	optional mpr \
	compile-with "${NORMAL_C} ${NO_WUNNEEDED_INTERNAL_DECL}"
dev/mpr/mpr_sas_lsi.c	optional mpr
dev/mpr/mpr_table.c	optional mpr
dev/mpr/mpr_user.c	optional mpr
dev/mps/mps.c	optional mps
dev/mps/mps_config.c	optional mps
# XXX Work around clang warning, until maintainer approves fix.
dev/mps/mps_mapping.c	optional mps \
	compile-with "${NORMAL_C} ${NO_WSOMETIMES_UNINITIALIZED}"
dev/mps/mps_pci.c	optional mps pci
dev/mps/mps_sas.c	optional mps \
	compile-with "${NORMAL_C} ${NO_WUNNEEDED_INTERNAL_DECL}"
dev/mps/mps_sas_lsi.c	optional mps
dev/mps/mps_table.c	optional mps
dev/mps/mps_user.c	optional mps
dev/mpt/mpt.c	optional mpt
dev/mpt/mpt_cam.c	optional mpt
dev/mpt/mpt_debug.c	optional mpt
dev/mpt/mpt_pci.c	optional mpt pci
dev/mpt/mpt_raid.c	optional mpt
dev/mpt/mpt_user.c	optional mpt
dev/mrsas/mrsas.c	optional mrsas
dev/mrsas/mrsas_cam.c	optional mrsas
dev/mrsas/mrsas_ioctl.c	optional mrsas
dev/mrsas/mrsas_fp.c	optional mrsas
dev/msk/if_msk.c	optional msk
dev/mvs/mvs.c	optional mvs
dev/mvs/mvs_if.m	optional mvs
dev/mvs/mvs_pci.c	optional mvs pci
dev/mwl/if_mwl.c	optional mwl
dev/mwl/if_mwl_pci.c	optional mwl pci
dev/mwl/mwlhal.c	optional mwl
mwlfw.c	optional mwlfw \
	compile-with "${AWK} -f $S/tools/fw_stub.awk mw88W8363.fw:mw88W8363fw mwlboot.fw:mwlboot -mmwl -c${.TARGET}" \
	no-implicit-rule before-depend local \
	clean "mwlfw.c"
mw88W8363.fwo	optional mwlfw \
	dependency "mw88W8363.fw" \
	compile-with "${NORMAL_FWO}" \
	no-implicit-rule \
	clean "mw88W8363.fwo"
mw88W8363.fw	optional mwlfw \
	dependency "$S/contrib/dev/mwl/mw88W8363.fw.uu" \
	compile-with "${NORMAL_FW}" \
	no-obj no-implicit-rule \
	clean "mw88W8363.fw"
mwlboot.fwo	optional mwlfw \
	dependency "mwlboot.fw" \
	compile-with "${NORMAL_FWO}" \
	no-implicit-rule \
	clean "mwlboot.fwo"
mwlboot.fw	optional mwlfw \
	dependency "$S/contrib/dev/mwl/mwlboot.fw.uu" \
	compile-with "${NORMAL_FW}" \
	no-obj no-implicit-rule \
	clean "mwlboot.fw"
dev/mxge/if_mxge.c	optional mxge pci
dev/mxge/mxge_eth_z8e.c	optional mxge pci
dev/mxge/mxge_ethp_z8e.c	optional mxge pci
dev/mxge/mxge_rss_eth_z8e.c	optional mxge pci
dev/mxge/mxge_rss_ethp_z8e.c	optional mxge pci
dev/my/if_my.c	optional my
dev/nand/nand.c	optional nand
dev/nand/nand_bbt.c	optional nand
dev/nand/nand_cdev.c	optional nand
dev/nand/nand_generic.c	optional nand
dev/nand/nand_geom.c	optional nand
dev/nand/nand_id.c	optional nand
dev/nand/nandbus.c	optional nand
dev/nand/nandbus_if.m	optional nand
dev/nand/nand_if.m	optional nand
dev/nand/nandsim.c	optional nandsim nand
dev/nand/nandsim_chip.c	optional nandsim nand
dev/nand/nandsim_ctrl.c	optional nandsim nand
dev/nand/nandsim_log.c	optional nandsim nand
dev/nand/nandsim_swap.c	optional nandsim nand
dev/nand/nfc_if.m	optional nand
dev/netmap/if_ptnet.c	optional netmap inet
dev/netmap/netmap.c	optional netmap
dev/netmap/netmap_bdg.c	optional netmap
dev/netmap/netmap_freebsd.c	optional netmap
dev/netmap/netmap_generic.c	optional netmap
dev/netmap/netmap_kloop.c	optional netmap
dev/netmap/netmap_legacy.c	optional netmap
dev/netmap/netmap_mbq.c	optional netmap
dev/netmap/netmap_mem2.c	optional netmap
dev/netmap/netmap_monitor.c	optional netmap
dev/netmap/netmap_null.c	optional netmap
dev/netmap/netmap_offloadings.c	optional netmap
dev/netmap/netmap_pipe.c	optional netmap
dev/netmap/netmap_pt.c	optional netmap
dev/netmap/netmap_vale.c	optional netmap
# compile-with "${NORMAL_C} -Wconversion -Wextra"
dev/nfsmb/nfsmb.c	optional nfsmb pci
dev/nge/if_nge.c	optional nge
dev/nmdm/nmdm.c	optional nmdm
dev/nsp/nsp.c	optional nsp
dev/nsp/nsp_pccard.c	optional nsp pccard
dev/null/null.c	standard
dev/nvd/nvd.c	optional nvd nvme
dev/nvme/nvme.c	optional nvme
dev/nvme/nvme_ctrlr.c	optional nvme
dev/nvme/nvme_ctrlr_cmd.c	optional nvme
dev/nvme/nvme_ns.c	optional nvme
dev/nvme/nvme_ns_cmd.c	optional nvme
dev/nvme/nvme_qpair.c	optional nvme
dev/nvme/nvme_sim.c	optional nvme scbus
dev/nvme/nvme_sysctl.c	optional nvme
dev/nvme/nvme_test.c	optional nvme
dev/nvme/nvme_util.c	optional nvme
dev/oce/oce_hw.c	optional oce pci
dev/oce/oce_if.c	optional oce pci
dev/oce/oce_mbox.c	optional oce pci
dev/oce/oce_queue.c	optional oce pci
dev/oce/oce_sysctl.c	optional oce pci
dev/oce/oce_util.c	optional oce pci
dev/ocs_fc/ocs_pci.c	optional ocs_fc pci
dev/ocs_fc/ocs_ioctl.c	optional ocs_fc pci
dev/ocs_fc/ocs_os.c	optional ocs_fc pci
dev/ocs_fc/ocs_utils.c	optional ocs_fc pci
dev/ocs_fc/ocs_hw.c	optional ocs_fc pci
dev/ocs_fc/ocs_hw_queues.c	optional ocs_fc pci
dev/ocs_fc/sli4.c	optional ocs_fc pci
dev/ocs_fc/ocs_sm.c	optional ocs_fc pci
dev/ocs_fc/ocs_device.c	optional ocs_fc pci
dev/ocs_fc/ocs_xport.c	optional ocs_fc pci
dev/ocs_fc/ocs_domain.c	optional ocs_fc pci
dev/ocs_fc/ocs_sport.c	optional ocs_fc pci
dev/ocs_fc/ocs_els.c	optional ocs_fc pci
dev/ocs_fc/ocs_fabric.c	optional ocs_fc pci
dev/ocs_fc/ocs_io.c	optional ocs_fc pci
dev/ocs_fc/ocs_node.c	optional ocs_fc pci
dev/ocs_fc/ocs_scsi.c	optional ocs_fc pci
dev/ocs_fc/ocs_unsol.c	optional ocs_fc pci
dev/ocs_fc/ocs_ddump.c	optional ocs_fc pci
dev/ocs_fc/ocs_mgmt.c	optional ocs_fc pci
dev/ocs_fc/ocs_cam.c	optional ocs_fc pci
dev/ofw/ofw_bus_if.m	optional fdt
dev/ofw/ofw_bus_subr.c	optional fdt
dev/ofw/ofw_cpu.c	optional fdt
dev/ofw/ofw_fdt.c	optional fdt
dev/ofw/ofw_if.m	optional fdt
dev/ofw/ofw_subr.c	optional fdt
dev/ofw/ofwbus.c	optional fdt
dev/ofw/openfirm.c	optional fdt
dev/ofw/openfirmio.c	optional fdt
dev/ow/ow.c	optional ow \
	dependency "owll_if.h" \
	dependency "own_if.h"
dev/ow/owll_if.m	optional ow
dev/ow/own_if.m	optional ow
dev/ow/ow_temp.c	optional ow_temp
dev/ow/owc_gpiobus.c	optional owc gpio
dev/pbio/pbio.c	optional pbio isa
dev/pccard/card_if.m	standard
dev/pccard/pccard.c	optional pccard
dev/pccard/pccard_cis.c	optional pccard
dev/pccard/pccard_cis_quirks.c	optional pccard
dev/pccard/pccard_device.c	optional pccard
dev/pccard/power_if.m	standard
dev/pccbb/pccbb.c	optional cbb
dev/pccbb/pccbb_isa.c	optional cbb isa
dev/pccbb/pccbb_pci.c	optional cbb pci
dev/pcf/pcf.c	optional pcf
dev/pci/fixup_pci.c	optional pci
dev/pci/hostb_pci.c	optional pci
dev/pci/ignore_pci.c	optional pci
dev/pci/isa_pci.c	optional pci isa
dev/pci/pci.c	optional pci
dev/pci/pci_if.m	standard
dev/pci/pci_iov.c	optional pci pci_iov
dev/pci/pci_iov_if.m	standard
dev/pci/pci_iov_schema.c	optional pci pci_iov
dev/pci/pci_pci.c	optional pci
dev/pci/pci_subr.c	optional pci
dev/pci/pci_user.c	optional pci
dev/pci/pcib_if.m	standard
dev/pci/pcib_support.c	standard
dev/pci/vga_pci.c	optional pci
dev/pcn/if_pcn.c	optional pcn pci
dev/pms/freebsd/driver/ini/src/agtiapi.c	optional pmspcv \
	compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w"
dev/pms/RefTisa/sallsdk/spc/sadisc.c	optional pmspcv \
	compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w"
dev/pms/RefTisa/sallsdk/spc/mpi.c	optional pmspcv \
	compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w"
dev/pms/RefTisa/sallsdk/spc/saframe.c	optional pmspcv \
	compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w"
dev/pms/RefTisa/sallsdk/spc/sahw.c	optional pmspcv \
	compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w"
dev/pms/RefTisa/sallsdk/spc/sainit.c	optional pmspcv \
	compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w"
dev/pms/RefTisa/sallsdk/spc/saint.c	optional pmspcv \
	compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w"
dev/pms/RefTisa/sallsdk/spc/sampicmd.c	optional pmspcv \
	compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w"
dev/pms/RefTisa/sallsdk/spc/sampirsp.c	optional pmspcv \
	compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w"
dev/pms/RefTisa/sallsdk/spc/saphy.c	optional pmspcv \
	compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w"
dev/pms/RefTisa/sallsdk/spc/saport.c	optional pmspcv \
	compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w"
dev/pms/RefTisa/sallsdk/spc/sasata.c	optional pmspcv \
	compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w"
dev/pms/RefTisa/sallsdk/spc/sasmp.c	optional pmspcv \
	compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w"
dev/pms/RefTisa/sallsdk/spc/sassp.c	optional pmspcv \
	compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w"
dev/pms/RefTisa/sallsdk/spc/satimer.c	optional pmspcv \
	compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w"
dev/pms/RefTisa/sallsdk/spc/sautil.c	optional pmspcv \
	compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w"
dev/pms/RefTisa/sallsdk/spc/saioctlcmd.c	optional pmspcv \
	compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w"
dev/pms/RefTisa/sallsdk/spc/mpidebug.c	optional pmspcv \
	compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w"
dev/pms/RefTisa/discovery/dm/dminit.c	optional pmspcv \
	compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w"
dev/pms/RefTisa/discovery/dm/dmsmp.c	optional pmspcv \
	compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w"
dev/pms/RefTisa/discovery/dm/dmdisc.c	optional pmspcv \
	compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w"
dev/pms/RefTisa/discovery/dm/dmport.c	optional pmspcv \
	compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w"
dev/pms/RefTisa/discovery/dm/dmtimer.c	optional pmspcv \
	compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w"
dev/pms/RefTisa/discovery/dm/dmmisc.c	optional pmspcv \
	compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w"
dev/pms/RefTisa/sat/src/sminit.c	optional pmspcv \
	compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w"
dev/pms/RefTisa/sat/src/smmisc.c	optional pmspcv \
	compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w"
dev/pms/RefTisa/sat/src/smsat.c	optional pmspcv \
	compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w"
dev/pms/RefTisa/sat/src/smsatcb.c	optional pmspcv \
	compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w"
dev/pms/RefTisa/sat/src/smsathw.c	optional pmspcv \
	compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w"
dev/pms/RefTisa/sat/src/smtimer.c	optional pmspcv \
	compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w"
dev/pms/RefTisa/tisa/sassata/common/tdinit.c	optional pmspcv \
	compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w"
dev/pms/RefTisa/tisa/sassata/common/tdmisc.c	optional pmspcv \
	compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w"
dev/pms/RefTisa/tisa/sassata/common/tdesgl.c	optional pmspcv \
	compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w"
dev/pms/RefTisa/tisa/sassata/common/tdport.c	optional pmspcv \
	compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w"
dev/pms/RefTisa/tisa/sassata/common/tdint.c	optional pmspcv \
	compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w"
"${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w" dev/pms/RefTisa/tisa/sassata/common/tdioctl.c optional pmspcv \ compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w" dev/pms/RefTisa/tisa/sassata/common/tdhw.c optional pmspcv \ compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w" dev/pms/RefTisa/tisa/sassata/common/ossacmnapi.c optional pmspcv \ compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w" dev/pms/RefTisa/tisa/sassata/common/tddmcmnapi.c optional pmspcv \ compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w" dev/pms/RefTisa/tisa/sassata/common/tdsmcmnapi.c optional pmspcv \ compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w" dev/pms/RefTisa/tisa/sassata/common/tdtimers.c optional pmspcv \ compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w" dev/pms/RefTisa/tisa/sassata/sas/ini/itdio.c optional pmspcv \ compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w" dev/pms/RefTisa/tisa/sassata/sas/ini/itdcb.c optional pmspcv \ compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w" dev/pms/RefTisa/tisa/sassata/sas/ini/itdinit.c optional pmspcv \ compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w" dev/pms/RefTisa/tisa/sassata/sas/ini/itddisc.c optional pmspcv \ compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w" dev/pms/RefTisa/tisa/sassata/sata/host/sat.c optional pmspcv \ compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w" dev/pms/RefTisa/tisa/sassata/sata/host/ossasat.c optional pmspcv \ compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w" dev/pms/RefTisa/tisa/sassata/sata/host/sathw.c optional pmspcv \ compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w" dev/ppbus/if_plip.c optional plip dev/ppbus/immio.c optional vpo dev/ppbus/lpbb.c optional lpbb dev/ppbus/lpt.c optional lpt dev/ppbus/pcfclock.c optional pcfclock dev/ppbus/ppb_1284.c optional ppbus dev/ppbus/ppb_base.c optional ppbus dev/ppbus/ppb_msq.c optional ppbus dev/ppbus/ppbconf.c optional ppbus dev/ppbus/ppbus_if.m optional ppbus dev/ppbus/ppi.c optional ppi dev/ppbus/pps.c optional pps dev/ppbus/vpo.c optional vpo dev/ppbus/vpoio.c optional vpo dev/ppc/ppc.c optional ppc dev/ppc/ppc_acpi.c optional ppc acpi dev/ppc/ppc_isa.c optional ppc isa dev/ppc/ppc_pci.c optional ppc pci dev/ppc/ppc_puc.c optional ppc puc dev/proto/proto_bus_isa.c optional proto acpi | proto isa dev/proto/proto_bus_pci.c optional proto pci dev/proto/proto_busdma.c optional proto dev/proto/proto_core.c optional proto dev/pst/pst-iop.c optional pst dev/pst/pst-pci.c optional pst pci dev/pst/pst-raid.c optional pst dev/pty/pty.c optional pty dev/puc/puc.c optional puc dev/puc/puc_cfg.c optional puc dev/puc/puc_pccard.c optional puc pccard dev/puc/puc_pci.c optional puc pci dev/pwm/pwmc.c optional pwm dev/pwm/pwmbus.c optional pwm dev/pwm/pwm_if.m optional pwm dev/pwm/pwmbus_if.m optional pwm dev/pwm/ofw_pwm.c optional pwm fdt dev/quicc/quicc_core.c optional quicc dev/ral/rt2560.c optional ral dev/ral/rt2661.c optional ral dev/ral/rt2860.c optional ral dev/ral/if_ral_pci.c optional ral pci rt2561fw.c optional rt2561fw | ralfw \ compile-with "${AWK} -f $S/tools/fw_stub.awk rt2561.fw:rt2561fw -mrt2561 -c${.TARGET}" \ no-implicit-rule before-depend local \ clean "rt2561fw.c" rt2561fw.fwo optional rt2561fw | ralfw \ dependency "rt2561.fw" \ compile-with "${NORMAL_FWO}" \ no-implicit-rule \ clean 
"rt2561fw.fwo" rt2561.fw optional rt2561fw | ralfw \ dependency "$S/contrib/dev/ral/rt2561.fw.uu" \ compile-with "${NORMAL_FW}" \ no-obj no-implicit-rule \ clean "rt2561.fw" rt2561sfw.c optional rt2561sfw | ralfw \ compile-with "${AWK} -f $S/tools/fw_stub.awk rt2561s.fw:rt2561sfw -mrt2561s -c${.TARGET}" \ no-implicit-rule before-depend local \ clean "rt2561sfw.c" rt2561sfw.fwo optional rt2561sfw | ralfw \ dependency "rt2561s.fw" \ compile-with "${NORMAL_FWO}" \ no-implicit-rule \ clean "rt2561sfw.fwo" rt2561s.fw optional rt2561sfw | ralfw \ dependency "$S/contrib/dev/ral/rt2561s.fw.uu" \ compile-with "${NORMAL_FW}" \ no-obj no-implicit-rule \ clean "rt2561s.fw" rt2661fw.c optional rt2661fw | ralfw \ compile-with "${AWK} -f $S/tools/fw_stub.awk rt2661.fw:rt2661fw -mrt2661 -c${.TARGET}" \ no-implicit-rule before-depend local \ clean "rt2661fw.c" rt2661fw.fwo optional rt2661fw | ralfw \ dependency "rt2661.fw" \ compile-with "${NORMAL_FWO}" \ no-implicit-rule \ clean "rt2661fw.fwo" rt2661.fw optional rt2661fw | ralfw \ dependency "$S/contrib/dev/ral/rt2661.fw.uu" \ compile-with "${NORMAL_FW}" \ no-obj no-implicit-rule \ clean "rt2661.fw" rt2860fw.c optional rt2860fw | ralfw \ compile-with "${AWK} -f $S/tools/fw_stub.awk rt2860.fw:rt2860fw -mrt2860 -c${.TARGET}" \ no-implicit-rule before-depend local \ clean "rt2860fw.c" rt2860fw.fwo optional rt2860fw | ralfw \ dependency "rt2860.fw" \ compile-with "${NORMAL_FWO}" \ no-implicit-rule \ clean "rt2860fw.fwo" rt2860.fw optional rt2860fw | ralfw \ dependency "$S/contrib/dev/ral/rt2860.fw.uu" \ compile-with "${NORMAL_FW}" \ no-obj no-implicit-rule \ clean "rt2860.fw" dev/random/random_infra.c optional random dev/random/random_harvestq.c optional random dev/random/randomdev.c optional random dev/random/fortuna.c optional random !random_loadable dev/random/hash.c optional random dev/rc/rc.c optional rc dev/rccgpio/rccgpio.c optional rccgpio gpio dev/re/if_re.c optional re dev/rl/if_rl.c optional rl pci dev/rndtest/rndtest.c optional rndtest dev/rp/rp.c optional rp dev/rp/rp_isa.c optional rp isa dev/rp/rp_pci.c optional rp pci # dev/rtwn/if_rtwn.c optional rtwn dev/rtwn/if_rtwn_beacon.c optional rtwn dev/rtwn/if_rtwn_calib.c optional rtwn dev/rtwn/if_rtwn_cam.c optional rtwn dev/rtwn/if_rtwn_efuse.c optional rtwn dev/rtwn/if_rtwn_fw.c optional rtwn dev/rtwn/if_rtwn_rx.c optional rtwn dev/rtwn/if_rtwn_task.c optional rtwn dev/rtwn/if_rtwn_tx.c optional rtwn # dev/rtwn/pci/rtwn_pci_attach.c optional rtwn_pci pci dev/rtwn/pci/rtwn_pci_reg.c optional rtwn_pci pci dev/rtwn/pci/rtwn_pci_rx.c optional rtwn_pci pci dev/rtwn/pci/rtwn_pci_tx.c optional rtwn_pci pci # dev/rtwn/usb/rtwn_usb_attach.c optional rtwn_usb dev/rtwn/usb/rtwn_usb_ep.c optional rtwn_usb dev/rtwn/usb/rtwn_usb_reg.c optional rtwn_usb dev/rtwn/usb/rtwn_usb_rx.c optional rtwn_usb dev/rtwn/usb/rtwn_usb_tx.c optional rtwn_usb # RTL8188E dev/rtwn/rtl8188e/r88e_beacon.c optional rtwn dev/rtwn/rtl8188e/r88e_calib.c optional rtwn dev/rtwn/rtl8188e/r88e_chan.c optional rtwn dev/rtwn/rtl8188e/r88e_fw.c optional rtwn dev/rtwn/rtl8188e/r88e_init.c optional rtwn dev/rtwn/rtl8188e/r88e_led.c optional rtwn dev/rtwn/rtl8188e/r88e_tx.c optional rtwn dev/rtwn/rtl8188e/r88e_rf.c optional rtwn dev/rtwn/rtl8188e/r88e_rom.c optional rtwn dev/rtwn/rtl8188e/r88e_rx.c optional rtwn dev/rtwn/rtl8188e/pci/r88ee_attach.c optional rtwn_pci pci dev/rtwn/rtl8188e/pci/r88ee_init.c optional rtwn_pci pci dev/rtwn/rtl8188e/pci/r88ee_rx.c optional rtwn_pci pci dev/rtwn/rtl8188e/usb/r88eu_attach.c optional rtwn_usb 
dev/rtwn/rtl8188e/usb/r88eu_init.c optional rtwn_usb # RTL8192C dev/rtwn/rtl8192c/r92c_attach.c optional rtwn dev/rtwn/rtl8192c/r92c_beacon.c optional rtwn dev/rtwn/rtl8192c/r92c_calib.c optional rtwn dev/rtwn/rtl8192c/r92c_chan.c optional rtwn dev/rtwn/rtl8192c/r92c_fw.c optional rtwn dev/rtwn/rtl8192c/r92c_init.c optional rtwn dev/rtwn/rtl8192c/r92c_llt.c optional rtwn dev/rtwn/rtl8192c/r92c_rf.c optional rtwn dev/rtwn/rtl8192c/r92c_rom.c optional rtwn dev/rtwn/rtl8192c/r92c_rx.c optional rtwn dev/rtwn/rtl8192c/r92c_tx.c optional rtwn dev/rtwn/rtl8192c/pci/r92ce_attach.c optional rtwn_pci pci dev/rtwn/rtl8192c/pci/r92ce_calib.c optional rtwn_pci pci dev/rtwn/rtl8192c/pci/r92ce_fw.c optional rtwn_pci pci dev/rtwn/rtl8192c/pci/r92ce_init.c optional rtwn_pci pci dev/rtwn/rtl8192c/pci/r92ce_led.c optional rtwn_pci pci dev/rtwn/rtl8192c/pci/r92ce_rx.c optional rtwn_pci pci dev/rtwn/rtl8192c/pci/r92ce_tx.c optional rtwn_pci pci dev/rtwn/rtl8192c/usb/r92cu_attach.c optional rtwn_usb dev/rtwn/rtl8192c/usb/r92cu_init.c optional rtwn_usb dev/rtwn/rtl8192c/usb/r92cu_led.c optional rtwn_usb dev/rtwn/rtl8192c/usb/r92cu_rx.c optional rtwn_usb dev/rtwn/rtl8192c/usb/r92cu_tx.c optional rtwn_usb # RTL8192E dev/rtwn/rtl8192e/r92e_chan.c optional rtwn dev/rtwn/rtl8192e/r92e_fw.c optional rtwn dev/rtwn/rtl8192e/r92e_init.c optional rtwn dev/rtwn/rtl8192e/r92e_led.c optional rtwn dev/rtwn/rtl8192e/r92e_rf.c optional rtwn dev/rtwn/rtl8192e/r92e_rom.c optional rtwn dev/rtwn/rtl8192e/r92e_rx.c optional rtwn dev/rtwn/rtl8192e/usb/r92eu_attach.c optional rtwn_usb dev/rtwn/rtl8192e/usb/r92eu_init.c optional rtwn_usb # RTL8812A dev/rtwn/rtl8812a/r12a_beacon.c optional rtwn dev/rtwn/rtl8812a/r12a_calib.c optional rtwn dev/rtwn/rtl8812a/r12a_caps.c optional rtwn dev/rtwn/rtl8812a/r12a_chan.c optional rtwn dev/rtwn/rtl8812a/r12a_fw.c optional rtwn dev/rtwn/rtl8812a/r12a_init.c optional rtwn dev/rtwn/rtl8812a/r12a_led.c optional rtwn dev/rtwn/rtl8812a/r12a_rf.c optional rtwn dev/rtwn/rtl8812a/r12a_rom.c optional rtwn dev/rtwn/rtl8812a/r12a_rx.c optional rtwn dev/rtwn/rtl8812a/r12a_tx.c optional rtwn dev/rtwn/rtl8812a/usb/r12au_attach.c optional rtwn_usb dev/rtwn/rtl8812a/usb/r12au_init.c optional rtwn_usb dev/rtwn/rtl8812a/usb/r12au_rx.c optional rtwn_usb dev/rtwn/rtl8812a/usb/r12au_tx.c optional rtwn_usb # RTL8821A dev/rtwn/rtl8821a/r21a_beacon.c optional rtwn dev/rtwn/rtl8821a/r21a_calib.c optional rtwn dev/rtwn/rtl8821a/r21a_chan.c optional rtwn dev/rtwn/rtl8821a/r21a_fw.c optional rtwn dev/rtwn/rtl8821a/r21a_init.c optional rtwn dev/rtwn/rtl8821a/r21a_led.c optional rtwn dev/rtwn/rtl8821a/r21a_rom.c optional rtwn dev/rtwn/rtl8821a/r21a_rx.c optional rtwn dev/rtwn/rtl8821a/usb/r21au_attach.c optional rtwn_usb dev/rtwn/rtl8821a/usb/r21au_dfs.c optional rtwn_usb dev/rtwn/rtl8821a/usb/r21au_init.c optional rtwn_usb rtwn-rtl8188eefw.c optional rtwn-rtl8188eefw | rtwnfw \ compile-with "${AWK} -f $S/tools/fw_stub.awk rtwn-rtl8188eefw.fw:rtwn-rtl8188eefw:111 -mrtwn-rtl8188eefw -c${.TARGET}" \ no-implicit-rule before-depend local \ clean "rtwn-rtl8188eefw.c" rtwn-rtl8188eefw.fwo optional rtwn-rtl8188eefw | rtwnfw \ dependency "rtwn-rtl8188eefw.fw" \ compile-with "${NORMAL_FWO}" \ no-implicit-rule \ clean "rtwn-rtl8188eefw.fwo" rtwn-rtl8188eefw.fw optional rtwn-rtl8188eefw | rtwnfw \ dependency "$S/contrib/dev/rtwn/rtwn-rtl8188eefw.fw.uu" \ compile-with "${NORMAL_FW}" \ no-obj no-implicit-rule \ clean "rtwn-rtl8188eefw.fw" rtwn-rtl8188eufw.c optional rtwn-rtl8188eufw | rtwnfw \ compile-with "${AWK} -f $S/tools/fw_stub.awk 
rtwn-rtl8188eufw.fw:rtwn-rtl8188eufw:111 -mrtwn-rtl8188eufw -c${.TARGET}" \ no-implicit-rule before-depend local \ clean "rtwn-rtl8188eufw.c" rtwn-rtl8188eufw.fwo optional rtwn-rtl8188eufw | rtwnfw \ dependency "rtwn-rtl8188eufw.fw" \ compile-with "${NORMAL_FWO}" \ no-implicit-rule \ clean "rtwn-rtl8188eufw.fwo" rtwn-rtl8188eufw.fw optional rtwn-rtl8188eufw | rtwnfw \ dependency "$S/contrib/dev/rtwn/rtwn-rtl8188eufw.fw.uu" \ compile-with "${NORMAL_FW}" \ no-obj no-implicit-rule \ clean "rtwn-rtl8188eufw.fw" rtwn-rtl8192cfwE.c optional rtwn-rtl8192cfwE | rtwnfw \ compile-with "${AWK} -f $S/tools/fw_stub.awk rtwn-rtl8192cfwE.fw:rtwn-rtl8192cfwE:111 -mrtwn-rtl8192cfwE -c${.TARGET}" \ no-implicit-rule before-depend local \ clean "rtwn-rtl8192cfwE.c" rtwn-rtl8192cfwE.fwo optional rtwn-rtl8192cfwE | rtwnfw \ dependency "rtwn-rtl8192cfwE.fw" \ compile-with "${NORMAL_FWO}" \ no-implicit-rule \ clean "rtwn-rtl8192cfwE.fwo" rtwn-rtl8192cfwE.fw optional rtwn-rtl8192cfwE | rtwnfw \ dependency "$S/contrib/dev/rtwn/rtwn-rtl8192cfwE.fw.uu" \ compile-with "${NORMAL_FW}" \ no-obj no-implicit-rule \ clean "rtwn-rtl8192cfwE.fw" rtwn-rtl8192cfwE_B.c optional rtwn-rtl8192cfwE_B | rtwnfw \ compile-with "${AWK} -f $S/tools/fw_stub.awk rtwn-rtl8192cfwE_B.fw:rtwn-rtl8192cfwE_B:111 -mrtwn-rtl8192cfwE_B -c${.TARGET}" \ no-implicit-rule before-depend local \ clean "rtwn-rtl8192cfwE_B.c" rtwn-rtl8192cfwE_B.fwo optional rtwn-rtl8192cfwE_B | rtwnfw \ dependency "rtwn-rtl8192cfwE_B.fw" \ compile-with "${NORMAL_FWO}" \ no-implicit-rule \ clean "rtwn-rtl8192cfwE_B.fwo" rtwn-rtl8192cfwE_B.fw optional rtwn-rtl8192cfwE_B | rtwnfw \ dependency "$S/contrib/dev/rtwn/rtwn-rtl8192cfwE_B.fw.uu" \ compile-with "${NORMAL_FW}" \ no-obj no-implicit-rule \ clean "rtwn-rtl8192cfwE_B.fw" rtwn-rtl8192cfwT.c optional rtwn-rtl8192cfwT | rtwnfw \ compile-with "${AWK} -f $S/tools/fw_stub.awk rtwn-rtl8192cfwT.fw:rtwn-rtl8192cfwT:111 -mrtwn-rtl8192cfwT -c${.TARGET}" \ no-implicit-rule before-depend local \ clean "rtwn-rtl8192cfwT.c" rtwn-rtl8192cfwT.fwo optional rtwn-rtl8192cfwT | rtwnfw \ dependency "rtwn-rtl8192cfwT.fw" \ compile-with "${NORMAL_FWO}" \ no-implicit-rule \ clean "rtwn-rtl8192cfwT.fwo" rtwn-rtl8192cfwT.fw optional rtwn-rtl8192cfwT | rtwnfw \ dependency "$S/contrib/dev/rtwn/rtwn-rtl8192cfwT.fw.uu" \ compile-with "${NORMAL_FW}" \ no-obj no-implicit-rule \ clean "rtwn-rtl8192cfwT.fw" rtwn-rtl8192cfwU.c optional rtwn-rtl8192cfwU | rtwnfw \ compile-with "${AWK} -f $S/tools/fw_stub.awk rtwn-rtl8192cfwU.fw:rtwn-rtl8192cfwU:111 -mrtwn-rtl8192cfwU -c${.TARGET}" \ no-implicit-rule before-depend local \ clean "rtwn-rtl8192cfwU.c" rtwn-rtl8192cfwU.fwo optional rtwn-rtl8192cfwU | rtwnfw \ dependency "rtwn-rtl8192cfwU.fw" \ compile-with "${NORMAL_FWO}" \ no-implicit-rule \ clean "rtwn-rtl8192cfwU.fwo" rtwn-rtl8192cfwU.fw optional rtwn-rtl8192cfwU | rtwnfw \ dependency "$S/contrib/dev/rtwn/rtwn-rtl8192cfwU.fw.uu" \ compile-with "${NORMAL_FW}" \ no-obj no-implicit-rule \ clean "rtwn-rtl8192cfwU.fw" rtwn-rtl8192eufw.c optional rtwn-rtl8192eufw | rtwnfw \ compile-with "${AWK} -f $S/tools/fw_stub.awk rtwn-rtl8192eufw.fw:rtwn-rtl8192eufw:111 -mrtwn-rtl8192eufw -c${.TARGET}" \ no-implicit-rule before-depend local \ clean "rtwn-rtl8192eufw.c" rtwn-rtl8192eufw.fwo optional rtwn-rtl8192eufw | rtwnfw \ dependency "rtwn-rtl8192eufw.fw" \ compile-with "${NORMAL_FWO}" \ no-implicit-rule \ clean "rtwn-rtl8192eufw.fwo" rtwn-rtl8192eufw.fw optional rtwn-rtl8192eufw | rtwnfw \ dependency "$S/contrib/dev/rtwn/rtwn-rtl8192eufw.fw.uu" \ compile-with "${NORMAL_FW}" 
\ no-obj no-implicit-rule \ clean "rtwn-rtl8192eufw.fw" rtwn-rtl8812aufw.c optional rtwn-rtl8812aufw | rtwnfw \ compile-with "${AWK} -f $S/tools/fw_stub.awk rtwn-rtl8812aufw.fw:rtwn-rtl8812aufw:111 -mrtwn-rtl8812aufw -c${.TARGET}" \ no-implicit-rule before-depend local \ clean "rtwn-rtl8812aufw.c" rtwn-rtl8812aufw.fwo optional rtwn-rtl8812aufw | rtwnfw \ dependency "rtwn-rtl8812aufw.fw" \ compile-with "${NORMAL_FWO}" \ no-implicit-rule \ clean "rtwn-rtl8812aufw.fwo" rtwn-rtl8812aufw.fw optional rtwn-rtl8812aufw | rtwnfw \ dependency "$S/contrib/dev/rtwn/rtwn-rtl8812aufw.fw.uu" \ compile-with "${NORMAL_FW}" \ no-obj no-implicit-rule \ clean "rtwn-rtl8812aufw.fw" rtwn-rtl8821aufw.c optional rtwn-rtl8821aufw | rtwnfw \ compile-with "${AWK} -f $S/tools/fw_stub.awk rtwn-rtl8821aufw.fw:rtwn-rtl8821aufw:111 -mrtwn-rtl8821aufw -c${.TARGET}" \ no-implicit-rule before-depend local \ clean "rtwn-rtl8821aufw.c" rtwn-rtl8821aufw.fwo optional rtwn-rtl8821aufw | rtwnfw \ dependency "rtwn-rtl8821aufw.fw" \ compile-with "${NORMAL_FWO}" \ no-implicit-rule \ clean "rtwn-rtl8821aufw.fwo" rtwn-rtl8821aufw.fw optional rtwn-rtl8821aufw | rtwnfw \ dependency "$S/contrib/dev/rtwn/rtwn-rtl8821aufw.fw.uu" \ compile-with "${NORMAL_FW}" \ no-obj no-implicit-rule \ clean "rtwn-rtl8821aufw.fw" dev/safe/safe.c optional safe dev/scc/scc_if.m optional scc dev/scc/scc_bfe_ebus.c optional scc ebus dev/scc/scc_bfe_quicc.c optional scc quicc dev/scc/scc_bfe_sbus.c optional scc fhc | scc sbus dev/scc/scc_core.c optional scc dev/scc/scc_dev_quicc.c optional scc quicc dev/scc/scc_dev_sab82532.c optional scc dev/scc/scc_dev_z8530.c optional scc dev/sdhci/sdhci.c optional sdhci dev/sdhci/sdhci_fdt.c optional sdhci fdt dev/sdhci/sdhci_fdt_gpio.c optional sdhci fdt gpio dev/sdhci/sdhci_if.m optional sdhci dev/sdhci/sdhci_acpi.c optional sdhci acpi dev/sdhci/sdhci_pci.c optional sdhci pci dev/sf/if_sf.c optional sf pci dev/sge/if_sge.c optional sge pci dev/siis/siis.c optional siis pci dev/sis/if_sis.c optional sis pci dev/sk/if_sk.c optional sk pci dev/smbus/smb.c optional smb dev/smbus/smbconf.c optional smbus dev/smbus/smbus.c optional smbus dev/smbus/smbus_if.m optional smbus dev/smc/if_smc.c optional smc dev/smc/if_smc_fdt.c optional smc fdt dev/sn/if_sn.c optional sn dev/sn/if_sn_isa.c optional sn isa dev/sn/if_sn_pccard.c optional sn pccard dev/snp/snp.c optional snp dev/sound/clone.c optional sound dev/sound/unit.c optional sound dev/sound/isa/ad1816.c optional snd_ad1816 isa dev/sound/isa/ess.c optional snd_ess isa dev/sound/isa/gusc.c optional snd_gusc isa dev/sound/isa/mss.c optional snd_mss isa dev/sound/isa/sb16.c optional snd_sb16 isa dev/sound/isa/sb8.c optional snd_sb8 isa dev/sound/isa/sbc.c optional snd_sbc isa dev/sound/isa/sndbuf_dma.c optional sound isa dev/sound/pci/als4000.c optional snd_als4000 pci dev/sound/pci/atiixp.c optional snd_atiixp pci dev/sound/pci/cmi.c optional snd_cmi pci dev/sound/pci/cs4281.c optional snd_cs4281 pci dev/sound/pci/csa.c optional snd_csa pci dev/sound/pci/csapcm.c optional snd_csa pci dev/sound/pci/ds1.c optional snd_ds1 pci dev/sound/pci/emu10k1.c optional snd_emu10k1 pci dev/sound/pci/emu10kx.c optional snd_emu10kx pci dev/sound/pci/emu10kx-pcm.c optional snd_emu10kx pci dev/sound/pci/emu10kx-midi.c optional snd_emu10kx pci dev/sound/pci/envy24.c optional snd_envy24 pci dev/sound/pci/envy24ht.c optional snd_envy24ht pci dev/sound/pci/es137x.c optional snd_es137x pci dev/sound/pci/fm801.c optional snd_fm801 pci dev/sound/pci/ich.c optional snd_ich pci dev/sound/pci/maestro.c 
optional snd_maestro pci dev/sound/pci/maestro3.c optional snd_maestro3 pci dev/sound/pci/neomagic.c optional snd_neomagic pci dev/sound/pci/solo.c optional snd_solo pci dev/sound/pci/spicds.c optional snd_spicds pci dev/sound/pci/t4dwave.c optional snd_t4dwave pci dev/sound/pci/via8233.c optional snd_via8233 pci dev/sound/pci/via82c686.c optional snd_via82c686 pci dev/sound/pci/vibes.c optional snd_vibes pci dev/sound/pci/hda/hdaa.c optional snd_hda pci dev/sound/pci/hda/hdaa_patches.c optional snd_hda pci dev/sound/pci/hda/hdac.c optional snd_hda pci dev/sound/pci/hda/hdac_if.m optional snd_hda pci dev/sound/pci/hda/hdacc.c optional snd_hda pci dev/sound/pci/hdspe.c optional snd_hdspe pci dev/sound/pci/hdspe-pcm.c optional snd_hdspe pci dev/sound/pcm/ac97.c optional sound dev/sound/pcm/ac97_if.m optional sound dev/sound/pcm/ac97_patch.c optional sound dev/sound/pcm/buffer.c optional sound \ dependency "snd_fxdiv_gen.h" dev/sound/pcm/channel.c optional sound dev/sound/pcm/channel_if.m optional sound dev/sound/pcm/dsp.c optional sound dev/sound/pcm/feeder.c optional sound dev/sound/pcm/feeder_chain.c optional sound dev/sound/pcm/feeder_eq.c optional sound \ dependency "feeder_eq_gen.h" \ dependency "snd_fxdiv_gen.h" dev/sound/pcm/feeder_if.m optional sound dev/sound/pcm/feeder_format.c optional sound \ dependency "snd_fxdiv_gen.h" dev/sound/pcm/feeder_matrix.c optional sound \ dependency "snd_fxdiv_gen.h" dev/sound/pcm/feeder_mixer.c optional sound \ dependency "snd_fxdiv_gen.h" dev/sound/pcm/feeder_rate.c optional sound \ dependency "feeder_rate_gen.h" \ dependency "snd_fxdiv_gen.h" dev/sound/pcm/feeder_volume.c optional sound \ dependency "snd_fxdiv_gen.h" dev/sound/pcm/mixer.c optional sound dev/sound/pcm/mixer_if.m optional sound dev/sound/pcm/sndstat.c optional sound dev/sound/pcm/sound.c optional sound dev/sound/pcm/vchan.c optional sound dev/sound/usb/uaudio.c optional snd_uaudio usb dev/sound/usb/uaudio_pcm.c optional snd_uaudio usb dev/sound/midi/midi.c optional sound dev/sound/midi/mpu401.c optional sound dev/sound/midi/mpu_if.m optional sound dev/sound/midi/mpufoi_if.m optional sound dev/sound/midi/sequencer.c optional sound dev/sound/midi/synth_if.m optional sound dev/spibus/ofw_spibus.c optional fdt spibus dev/spibus/spibus.c optional spibus \ dependency "spibus_if.h" dev/spibus/spigen.c optional spigen dev/spibus/spibus_if.m optional spibus dev/ste/if_ste.c optional ste pci dev/stge/if_stge.c optional stge dev/sym/sym_hipd.c optional sym \ dependency "$S/dev/sym/sym_{conf,defs}.h" dev/syscons/blank/blank_saver.c optional blank_saver dev/syscons/daemon/daemon_saver.c optional daemon_saver dev/syscons/dragon/dragon_saver.c optional dragon_saver dev/syscons/fade/fade_saver.c optional fade_saver dev/syscons/fire/fire_saver.c optional fire_saver dev/syscons/green/green_saver.c optional green_saver dev/syscons/logo/logo.c optional logo_saver dev/syscons/logo/logo_saver.c optional logo_saver dev/syscons/rain/rain_saver.c optional rain_saver dev/syscons/schistory.c optional sc dev/syscons/scmouse.c optional sc dev/syscons/scterm.c optional sc dev/syscons/scvidctl.c optional sc dev/syscons/snake/snake_saver.c optional snake_saver dev/syscons/star/star_saver.c optional star_saver dev/syscons/syscons.c optional sc dev/syscons/sysmouse.c optional sc dev/syscons/warp/warp_saver.c optional warp_saver dev/tcp_log/tcp_log_dev.c optional tcp_blackbox inet | tcp_blackbox inet6 dev/tdfx/tdfx_linux.c optional tdfx_linux tdfx compat_linux dev/tdfx/tdfx_pci.c optional tdfx pci dev/ti/if_ti.c 
optional ti pci dev/tl/if_tl.c optional tl pci dev/trm/trm.c optional trm dev/twa/tw_cl_init.c optional twa \ compile-with "${NORMAL_C} -I$S/dev/twa" dev/twa/tw_cl_intr.c optional twa \ compile-with "${NORMAL_C} -I$S/dev/twa" dev/twa/tw_cl_io.c optional twa \ compile-with "${NORMAL_C} -I$S/dev/twa" dev/twa/tw_cl_misc.c optional twa \ compile-with "${NORMAL_C} -I$S/dev/twa" dev/twa/tw_osl_cam.c optional twa \ compile-with "${NORMAL_C} -I$S/dev/twa" dev/twa/tw_osl_freebsd.c optional twa \ compile-with "${NORMAL_C} -I$S/dev/twa" dev/twe/twe.c optional twe dev/twe/twe_freebsd.c optional twe dev/tws/tws.c optional tws dev/tws/tws_cam.c optional tws dev/tws/tws_hdm.c optional tws dev/tws/tws_services.c optional tws dev/tws/tws_user.c optional tws dev/tx/if_tx.c optional tx dev/txp/if_txp.c optional txp dev/uart/uart_bus_acpi.c optional uart acpi dev/uart/uart_bus_ebus.c optional uart ebus dev/uart/uart_bus_fdt.c optional uart fdt dev/uart/uart_bus_isa.c optional uart isa dev/uart/uart_bus_pccard.c optional uart pccard dev/uart/uart_bus_pci.c optional uart pci dev/uart/uart_bus_puc.c optional uart puc dev/uart/uart_bus_scc.c optional uart scc dev/uart/uart_core.c optional uart dev/uart/uart_dbg.c optional uart gdb dev/uart/uart_dev_msm.c optional uart uart_msm fdt dev/uart/uart_dev_mvebu.c optional uart uart_mvebu dev/uart/uart_dev_ns8250.c optional uart uart_ns8250 | uart uart_snps dev/uart/uart_dev_pl011.c optional uart pl011 dev/uart/uart_dev_quicc.c optional uart quicc dev/uart/uart_dev_sab82532.c optional uart uart_sab82532 dev/uart/uart_dev_sab82532.c optional uart scc dev/uart/uart_dev_snps.c optional uart uart_snps fdt dev/uart/uart_dev_z8530.c optional uart uart_z8530 dev/uart/uart_dev_z8530.c optional uart scc dev/uart/uart_if.m optional uart dev/uart/uart_subr.c optional uart dev/uart/uart_tty.c optional uart dev/ubsec/ubsec.c optional ubsec # # USB controller drivers # dev/usb/controller/musb_otg.c optional musb dev/usb/controller/dwc_otg.c optional dwcotg dev/usb/controller/dwc_otg_fdt.c optional dwcotg fdt dev/usb/controller/ehci.c optional ehci dev/usb/controller/ehci_msm.c optional ehci_msm fdt dev/usb/controller/ehci_pci.c optional ehci pci dev/usb/controller/ohci.c optional ohci dev/usb/controller/ohci_pci.c optional ohci pci dev/usb/controller/uhci.c optional uhci dev/usb/controller/uhci_pci.c optional uhci pci dev/usb/controller/xhci.c optional xhci dev/usb/controller/xhci_pci.c optional xhci pci dev/usb/controller/saf1761_otg.c optional saf1761otg dev/usb/controller/saf1761_otg_fdt.c optional saf1761otg fdt dev/usb/controller/uss820dci.c optional uss820dci dev/usb/controller/usb_controller.c optional usb # # USB storage drivers # dev/usb/storage/cfumass.c optional cfumass ctl dev/usb/storage/umass.c optional umass dev/usb/storage/urio.c optional urio dev/usb/storage/ustorage_fs.c optional usfs # # USB core # dev/usb/usb_busdma.c optional usb dev/usb/usb_core.c optional usb dev/usb/usb_debug.c optional usb dev/usb/usb_dev.c optional usb dev/usb/usb_device.c optional usb dev/usb/usb_dynamic.c optional usb dev/usb/usb_error.c optional usb dev/usb/usb_generic.c optional usb dev/usb/usb_handle_request.c optional usb dev/usb/usb_hid.c optional usb dev/usb/usb_hub.c optional usb dev/usb/usb_if.m optional usb dev/usb/usb_lookup.c optional usb dev/usb/usb_mbuf.c optional usb dev/usb/usb_msctest.c optional usb dev/usb/usb_parse.c optional usb dev/usb/usb_pf.c optional usb dev/usb/usb_process.c optional usb dev/usb/usb_request.c optional usb dev/usb/usb_transfer.c optional usb 
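#
# Note on the firmware stub entries above and below (rt2561*, rt2661*,
# rt2860*, rtwn-*, rsu-*, run, wpi): each firmware image is shipped as
# a uuencoded file under $S/contrib/dev/, and every image gets the same
# three generated targets:
#   - the .fw rule (${NORMAL_FW}) uudecodes the .fw.uu file into the
#     raw image and produces no object of its own (no-obj);
#   - the .fwo rule (${NORMAL_FWO}) wraps the raw image into a
#     linkable object;
#   - $S/tools/fw_stub.awk generates a small C stub from a
#     "file:imagename[:version]" spec (the trailing :111/:120/:153229
#     fields are version numbers) that registers the image with
#     firmware(9).
# A minimal sketch of how this is consumed (hypothetical kernel config
# fragment; the tokens match the optional keywords above):
#
#	device	ral		# the driver itself
#	device	rt2561fw	# statically embed the rt2561 image
#
# Without the second line the same image can be loaded as a module
# instead; either way the driver fetches it at runtime with
# firmware_get("rt2561fw") and releases it with firmware_put().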
dev/usb/usb_util.c optional usb # # USB network drivers # dev/usb/net/if_aue.c optional aue dev/usb/net/if_axe.c optional axe dev/usb/net/if_axge.c optional axge dev/usb/net/if_cdce.c optional cdce dev/usb/net/if_cue.c optional cue dev/usb/net/if_ipheth.c optional ipheth dev/usb/net/if_kue.c optional kue dev/usb/net/if_mos.c optional mos dev/usb/net/if_muge.c optional muge dev/usb/net/if_rue.c optional rue dev/usb/net/if_smsc.c optional smsc dev/usb/net/if_udav.c optional udav dev/usb/net/if_ure.c optional ure dev/usb/net/if_usie.c optional usie dev/usb/net/if_urndis.c optional urndis dev/usb/net/ruephy.c optional rue dev/usb/net/usb_ethernet.c optional uether | aue | axe | axge | cdce | \ cue | ipheth | kue | mos | rue | \ smsc | udav | ure | urndis | muge dev/usb/net/uhso.c optional uhso # # USB WLAN drivers # dev/usb/wlan/if_rsu.c optional rsu rsu-rtl8712fw.c optional rsu-rtl8712fw | rsufw \ compile-with "${AWK} -f $S/tools/fw_stub.awk rsu-rtl8712fw.fw:rsu-rtl8712fw:120 -mrsu-rtl8712fw -c${.TARGET}" \ no-implicit-rule before-depend local \ clean "rsu-rtl8712fw.c" rsu-rtl8712fw.fwo optional rsu-rtl8712fw | rsufw \ dependency "rsu-rtl8712fw.fw" \ compile-with "${NORMAL_FWO}" \ no-implicit-rule \ clean "rsu-rtl8712fw.fwo" rsu-rtl8712fw.fw optional rsu-rtl8712fw | rsufw \ dependency "$S/contrib/dev/rsu/rsu-rtl8712fw.fw.uu" \ compile-with "${NORMAL_FW}" \ no-obj no-implicit-rule \ clean "rsu-rtl8712fw.fw" dev/usb/wlan/if_rum.c optional rum dev/usb/wlan/if_run.c optional run runfw.c optional runfw \ compile-with "${AWK} -f $S/tools/fw_stub.awk run.fw:runfw -mrunfw -c${.TARGET}" \ no-implicit-rule before-depend local \ clean "runfw.c" runfw.fwo optional runfw \ dependency "run.fw" \ compile-with "${NORMAL_FWO}" \ no-implicit-rule \ clean "runfw.fwo" run.fw optional runfw \ dependency "$S/contrib/dev/run/rt2870.fw.uu" \ compile-with "${NORMAL_FW}" \ no-obj no-implicit-rule \ clean "run.fw" dev/usb/wlan/if_uath.c optional uath dev/usb/wlan/if_upgt.c optional upgt dev/usb/wlan/if_ural.c optional ural dev/usb/wlan/if_urtw.c optional urtw dev/usb/wlan/if_zyd.c optional zyd # # USB serial and parallel port drivers # dev/usb/serial/u3g.c optional u3g dev/usb/serial/uark.c optional uark dev/usb/serial/ubsa.c optional ubsa dev/usb/serial/ubser.c optional ubser dev/usb/serial/uchcom.c optional uchcom dev/usb/serial/ucycom.c optional ucycom dev/usb/serial/ufoma.c optional ufoma dev/usb/serial/uftdi.c optional uftdi dev/usb/serial/ugensa.c optional ugensa dev/usb/serial/uipaq.c optional uipaq dev/usb/serial/ulpt.c optional ulpt dev/usb/serial/umcs.c optional umcs dev/usb/serial/umct.c optional umct dev/usb/serial/umodem.c optional umodem dev/usb/serial/umoscom.c optional umoscom dev/usb/serial/uplcom.c optional uplcom dev/usb/serial/uslcom.c optional uslcom dev/usb/serial/uvisor.c optional uvisor dev/usb/serial/uvscom.c optional uvscom dev/usb/serial/usb_serial.c optional ucom | u3g | uark | ubsa | ubser | \ uchcom | ucycom | ufoma | uftdi | \ ugensa | uipaq | umcs | umct | \ umodem | umoscom | uplcom | usie | \ uslcom | uvisor | uvscom # # USB misc drivers # dev/usb/misc/ufm.c optional ufm dev/usb/misc/udbp.c optional udbp dev/usb/misc/ugold.c optional ugold dev/usb/misc/uled.c optional uled # # USB input drivers # dev/usb/input/atp.c optional atp dev/usb/input/uep.c optional uep dev/usb/input/uhid.c optional uhid dev/usb/input/uhid_snes.c optional uhid_snes dev/usb/input/ukbd.c optional ukbd dev/usb/input/ums.c optional ums dev/usb/input/wmt.c optional wmt dev/usb/input/wsp.c optional wsp # # USB
quirks # dev/usb/quirk/usb_quirk.c optional usb # # USB templates # dev/usb/template/usb_template.c optional usb_template dev/usb/template/usb_template_audio.c optional usb_template dev/usb/template/usb_template_cdce.c optional usb_template dev/usb/template/usb_template_kbd.c optional usb_template dev/usb/template/usb_template_modem.c optional usb_template dev/usb/template/usb_template_mouse.c optional usb_template dev/usb/template/usb_template_msc.c optional usb_template dev/usb/template/usb_template_mtp.c optional usb_template dev/usb/template/usb_template_phone.c optional usb_template dev/usb/template/usb_template_serialnet.c optional usb_template dev/usb/template/usb_template_midi.c optional usb_template dev/usb/template/usb_template_multi.c optional usb_template # # USB video drivers # dev/usb/video/udl.c optional udl # # USB END # dev/videomode/videomode.c optional videomode dev/videomode/edid.c optional videomode dev/videomode/pickmode.c optional videomode dev/videomode/vesagtf.c optional videomode dev/veriexec/verified_exec.c optional veriexec mac_veriexec dev/vge/if_vge.c optional vge dev/viapm/viapm.c optional viapm pci dev/virtio/virtio.c optional virtio dev/virtio/virtqueue.c optional virtio dev/virtio/virtio_bus_if.m optional virtio dev/virtio/virtio_if.m optional virtio dev/virtio/pci/virtio_pci.c optional virtio_pci dev/virtio/mmio/virtio_mmio.c optional virtio_mmio dev/virtio/mmio/virtio_mmio_acpi.c optional virtio_mmio acpi dev/virtio/mmio/virtio_mmio_fdt.c optional virtio_mmio fdt dev/virtio/mmio/virtio_mmio_if.m optional virtio_mmio dev/virtio/network/if_vtnet.c optional vtnet dev/virtio/block/virtio_blk.c optional virtio_blk dev/virtio/balloon/virtio_balloon.c optional virtio_balloon dev/virtio/scsi/virtio_scsi.c optional virtio_scsi dev/virtio/random/virtio_random.c optional virtio_random dev/virtio/console/virtio_console.c optional virtio_console dev/vkbd/vkbd.c optional vkbd dev/vr/if_vr.c optional vr pci dev/vt/colors/vt_termcolors.c optional vt dev/vt/font/vt_font_default.c optional vt dev/vt/font/vt_mouse_cursor.c optional vt dev/vt/hw/efifb/efifb.c optional vt_efifb dev/vt/hw/fb/vt_fb.c optional vt dev/vt/hw/vga/vt_vga.c optional vt vt_vga dev/vt/logo/logo_freebsd.c optional vt splash dev/vt/logo/logo_beastie.c optional vt splash dev/vt/vt_buf.c optional vt dev/vt/vt_consolectl.c optional vt dev/vt/vt_core.c optional vt dev/vt/vt_cpulogos.c optional vt splash dev/vt/vt_font.c optional vt dev/vt/vt_sysmouse.c optional vt dev/vte/if_vte.c optional vte pci dev/vx/if_vx.c optional vx dev/vx/if_vx_pci.c optional vx pci dev/watchdog/watchdog.c standard dev/wb/if_wb.c optional wb pci dev/wi/if_wi.c optional wi dev/wi/if_wi_pccard.c optional wi pccard dev/wi/if_wi_pci.c optional wi pci dev/wpi/if_wpi.c optional wpi pci wpifw.c optional wpifw \ compile-with "${AWK} -f $S/tools/fw_stub.awk wpi.fw:wpifw:153229 -mwpi -c${.TARGET}" \ no-implicit-rule before-depend local \ clean "wpifw.c" wpifw.fwo optional wpifw \ dependency "wpi.fw" \ compile-with "${NORMAL_FWO}" \ no-implicit-rule \ clean "wpifw.fwo" wpi.fw optional wpifw \ dependency "$S/contrib/dev/wpi/iwlwifi-3945-15.32.2.9.fw.uu" \ compile-with "${NORMAL_FW}" \ no-obj no-implicit-rule \ clean "wpi.fw" dev/xdma/controller/pl330.c optional xdma pl330 dev/xdma/xdma.c optional xdma dev/xdma/xdma_bank.c optional xdma dev/xdma/xdma_bio.c optional xdma dev/xdma/xdma_fdt_test.c optional xdma xdma_test fdt dev/xdma/xdma_if.m optional xdma dev/xdma/xdma_mbuf.c optional xdma dev/xdma/xdma_queue.c optional xdma dev/xdma/xdma_sg.c 
optional xdma dev/xdma/xdma_sglist.c optional xdma dev/xe/if_xe.c optional xe dev/xe/if_xe_pccard.c optional xe pccard dev/xen/balloon/balloon.c optional xenhvm dev/xen/blkfront/blkfront.c optional xenhvm dev/xen/blkback/blkback.c optional xenhvm dev/xen/console/xen_console.c optional xenhvm dev/xen/control/control.c optional xenhvm dev/xen/grant_table/grant_table.c optional xenhvm dev/xen/netback/netback.c optional xenhvm dev/xen/netfront/netfront.c optional xenhvm dev/xen/xenpci/xenpci.c optional xenpci dev/xen/timer/timer.c optional xenhvm dev/xen/pvcpu/pvcpu.c optional xenhvm dev/xen/xenstore/xenstore.c optional xenhvm dev/xen/xenstore/xenstore_dev.c optional xenhvm dev/xen/xenstore/xenstored_dev.c optional xenhvm dev/xen/evtchn/evtchn_dev.c optional xenhvm dev/xen/privcmd/privcmd.c optional xenhvm dev/xen/gntdev/gntdev.c optional xenhvm dev/xen/debug/debug.c optional xenhvm dev/xl/if_xl.c optional xl pci dev/xl/xlphy.c optional xl pci fs/autofs/autofs.c optional autofs fs/autofs/autofs_vfsops.c optional autofs fs/autofs/autofs_vnops.c optional autofs fs/deadfs/dead_vnops.c standard fs/devfs/devfs_devs.c standard fs/devfs/devfs_dir.c standard fs/devfs/devfs_rule.c standard fs/devfs/devfs_vfsops.c standard fs/devfs/devfs_vnops.c standard fs/fdescfs/fdesc_vfsops.c optional fdescfs fs/fdescfs/fdesc_vnops.c optional fdescfs fs/fifofs/fifo_vnops.c standard fs/cuse/cuse.c optional cuse fs/fuse/fuse_device.c optional fuse fs/fuse/fuse_file.c optional fuse fs/fuse/fuse_internal.c optional fuse fs/fuse/fuse_io.c optional fuse fs/fuse/fuse_ipc.c optional fuse fs/fuse/fuse_main.c optional fuse fs/fuse/fuse_node.c optional fuse fs/fuse/fuse_vfsops.c optional fuse fs/fuse/fuse_vnops.c optional fuse fs/msdosfs/msdosfs_conv.c optional msdosfs fs/msdosfs/msdosfs_denode.c optional msdosfs fs/msdosfs/msdosfs_fat.c optional msdosfs fs/msdosfs/msdosfs_iconv.c optional msdosfs_iconv fs/msdosfs/msdosfs_lookup.c optional msdosfs fs/msdosfs/msdosfs_vfsops.c optional msdosfs fs/msdosfs/msdosfs_vnops.c optional msdosfs fs/nandfs/bmap.c optional nandfs fs/nandfs/nandfs_alloc.c optional nandfs fs/nandfs/nandfs_bmap.c optional nandfs fs/nandfs/nandfs_buffer.c optional nandfs fs/nandfs/nandfs_cleaner.c optional nandfs fs/nandfs/nandfs_cpfile.c optional nandfs fs/nandfs/nandfs_dat.c optional nandfs fs/nandfs/nandfs_dir.c optional nandfs fs/nandfs/nandfs_ifile.c optional nandfs fs/nandfs/nandfs_segment.c optional nandfs fs/nandfs/nandfs_subr.c optional nandfs fs/nandfs/nandfs_sufile.c optional nandfs fs/nandfs/nandfs_vfsops.c optional nandfs fs/nandfs/nandfs_vnops.c optional nandfs fs/nfs/nfs_commonkrpc.c optional nfscl | nfsd fs/nfs/nfs_commonsubs.c optional nfscl | nfsd fs/nfs/nfs_commonport.c optional nfscl | nfsd fs/nfs/nfs_commonacl.c optional nfscl | nfsd fs/nfsclient/nfs_clcomsubs.c optional nfscl fs/nfsclient/nfs_clsubs.c optional nfscl fs/nfsclient/nfs_clstate.c optional nfscl fs/nfsclient/nfs_clkrpc.c optional nfscl fs/nfsclient/nfs_clrpcops.c optional nfscl fs/nfsclient/nfs_clvnops.c optional nfscl fs/nfsclient/nfs_clnode.c optional nfscl fs/nfsclient/nfs_clvfsops.c optional nfscl fs/nfsclient/nfs_clport.c optional nfscl fs/nfsclient/nfs_clbio.c optional nfscl fs/nfsclient/nfs_clnfsiod.c optional nfscl fs/nfsserver/nfs_fha_new.c optional nfsd inet fs/nfsserver/nfs_nfsdsocket.c optional nfsd inet fs/nfsserver/nfs_nfsdsubs.c optional nfsd inet fs/nfsserver/nfs_nfsdstate.c optional nfsd inet fs/nfsserver/nfs_nfsdkrpc.c optional nfsd inet fs/nfsserver/nfs_nfsdserv.c optional nfsd inet 
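#
# A quick reminder of the two entry classes visible above: files marked
# "standard" (e.g. fs/deadfs, fs/devfs, fs/fifofs) are compiled into
# every kernel, while "optional" files are included only when the named
# option or device appears in the kernel configuration; config(8)
# matches the tokens in this file against the lower-cased option and
# device names.  A minimal sketch (hypothetical kernel config fragment):
#
#	options 	FDESCFS		# pulls in fs/fdescfs/*.c
#	options 	NULLFS		# pulls in fs/nullfs/*.c
#
# Omitting an option simply drops every entry gated on its token.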
fs/nfsserver/nfs_nfsdport.c optional nfsd inet fs/nfsserver/nfs_nfsdcache.c optional nfsd inet fs/nullfs/null_subr.c optional nullfs fs/nullfs/null_vfsops.c optional nullfs fs/nullfs/null_vnops.c optional nullfs fs/procfs/procfs.c optional procfs fs/procfs/procfs_dbregs.c optional procfs fs/procfs/procfs_fpregs.c optional procfs fs/procfs/procfs_ioctl.c optional procfs fs/procfs/procfs_map.c optional procfs fs/procfs/procfs_mem.c optional procfs fs/procfs/procfs_note.c optional procfs fs/procfs/procfs_osrel.c optional procfs fs/procfs/procfs_regs.c optional procfs fs/procfs/procfs_rlimit.c optional procfs fs/procfs/procfs_status.c optional procfs fs/procfs/procfs_type.c optional procfs fs/pseudofs/pseudofs.c optional pseudofs fs/pseudofs/pseudofs_fileno.c optional pseudofs fs/pseudofs/pseudofs_vncache.c optional pseudofs fs/pseudofs/pseudofs_vnops.c optional pseudofs fs/smbfs/smbfs_io.c optional smbfs fs/smbfs/smbfs_node.c optional smbfs fs/smbfs/smbfs_smb.c optional smbfs fs/smbfs/smbfs_subr.c optional smbfs fs/smbfs/smbfs_vfsops.c optional smbfs fs/smbfs/smbfs_vnops.c optional smbfs fs/udf/osta.c optional udf fs/udf/udf_iconv.c optional udf_iconv fs/udf/udf_vfsops.c optional udf fs/udf/udf_vnops.c optional udf fs/unionfs/union_subr.c optional unionfs fs/unionfs/union_vfsops.c optional unionfs fs/unionfs/union_vnops.c optional unionfs fs/tmpfs/tmpfs_vnops.c optional tmpfs fs/tmpfs/tmpfs_fifoops.c optional tmpfs fs/tmpfs/tmpfs_vfsops.c optional tmpfs fs/tmpfs/tmpfs_subr.c optional tmpfs gdb/gdb_cons.c optional gdb gdb/gdb_main.c optional gdb gdb/gdb_packet.c optional gdb geom/bde/g_bde.c optional geom_bde geom/bde/g_bde_crypt.c optional geom_bde geom/bde/g_bde_lock.c optional geom_bde geom/bde/g_bde_work.c optional geom_bde geom/cache/g_cache.c optional geom_cache geom/concat/g_concat.c optional geom_concat geom/eli/g_eli.c optional geom_eli geom/eli/g_eli_crypto.c optional geom_eli geom/eli/g_eli_ctl.c optional geom_eli geom/eli/g_eli_hmac.c optional geom_eli geom/eli/g_eli_integrity.c optional geom_eli geom/eli/g_eli_key.c optional geom_eli geom/eli/g_eli_key_cache.c optional geom_eli geom/eli/g_eli_privacy.c optional geom_eli geom/eli/pkcs5v2.c optional geom_eli geom/gate/g_gate.c optional geom_gate geom/geom_bsd.c optional geom_bsd geom/geom_bsd_enc.c optional geom_bsd | geom_part_bsd geom/geom_ccd.c optional ccd | geom_ccd geom/geom_ctl.c standard geom/geom_dev.c standard geom/geom_disk.c standard geom/geom_dump.c standard geom/geom_event.c standard geom/geom_fox.c optional geom_fox geom/geom_flashmap.c optional fdt cfi | fdt nand | fdt mx25l | mmcsd | fdt n25q geom/geom_io.c standard geom/geom_kern.c standard geom/geom_map.c optional geom_map geom/geom_mbr.c optional geom_mbr geom/geom_mbr_enc.c optional geom_mbr geom/geom_redboot.c optional geom_redboot geom/geom_slice.c standard geom/geom_subr.c standard geom/geom_sunlabel.c optional geom_sunlabel geom/geom_sunlabel_enc.c optional geom_sunlabel geom/geom_vfs.c standard geom/geom_vol_ffs.c optional geom_vol geom/journal/g_journal.c optional geom_journal geom/journal/g_journal_ufs.c optional geom_journal geom/label/g_label.c optional geom_label | geom_label_gpt geom/label/g_label_ext2fs.c optional geom_label geom/label/g_label_iso9660.c optional geom_label geom/label/g_label_msdosfs.c optional geom_label geom/label/g_label_ntfs.c optional geom_label geom/label/g_label_reiserfs.c optional geom_label geom/label/g_label_ufs.c optional geom_label geom/label/g_label_gpt.c optional geom_label | geom_label_gpt 
geom/label/g_label_disk_ident.c optional geom_label geom/linux_lvm/g_linux_lvm.c optional geom_linux_lvm geom/mirror/g_mirror.c optional geom_mirror geom/mirror/g_mirror_ctl.c optional geom_mirror geom/mountver/g_mountver.c optional geom_mountver geom/multipath/g_multipath.c optional geom_multipath geom/nop/g_nop.c optional geom_nop geom/part/g_part.c standard geom/part/g_part_if.m standard geom/part/g_part_apm.c optional geom_part_apm geom/part/g_part_bsd.c optional geom_part_bsd geom/part/g_part_bsd64.c optional geom_part_bsd64 geom/part/g_part_ebr.c optional geom_part_ebr geom/part/g_part_gpt.c optional geom_part_gpt geom/part/g_part_ldm.c optional geom_part_ldm geom/part/g_part_mbr.c optional geom_part_mbr geom/part/g_part_vtoc8.c optional geom_part_vtoc8 geom/raid/g_raid.c optional geom_raid geom/raid/g_raid_ctl.c optional geom_raid geom/raid/g_raid_md_if.m optional geom_raid geom/raid/g_raid_tr_if.m optional geom_raid geom/raid/md_ddf.c optional geom_raid geom/raid/md_intel.c optional geom_raid geom/raid/md_jmicron.c optional geom_raid geom/raid/md_nvidia.c optional geom_raid geom/raid/md_promise.c optional geom_raid geom/raid/md_sii.c optional geom_raid geom/raid/tr_concat.c optional geom_raid geom/raid/tr_raid0.c optional geom_raid geom/raid/tr_raid1.c optional geom_raid geom/raid/tr_raid1e.c optional geom_raid geom/raid/tr_raid5.c optional geom_raid geom/raid3/g_raid3.c optional geom_raid3 geom/raid3/g_raid3_ctl.c optional geom_raid3 geom/shsec/g_shsec.c optional geom_shsec geom/stripe/g_stripe.c optional geom_stripe contrib/xz-embedded/freebsd/xz_malloc.c \ optional xz_embedded | geom_uzip \ compile-with "${NORMAL_C} -I$S/contrib/xz-embedded/freebsd/ -I$S/contrib/xz-embedded/linux/lib/xz/ -I$S/contrib/xz-embedded/linux/include/linux/" contrib/xz-embedded/linux/lib/xz/xz_crc32.c \ optional xz_embedded | geom_uzip \ compile-with "${NORMAL_C} -I$S/contrib/xz-embedded/freebsd/ -I$S/contrib/xz-embedded/linux/lib/xz/ -I$S/contrib/xz-embedded/linux/include/linux/" contrib/xz-embedded/linux/lib/xz/xz_dec_bcj.c \ optional xz_embedded | geom_uzip \ compile-with "${NORMAL_C} -I$S/contrib/xz-embedded/freebsd/ -I$S/contrib/xz-embedded/linux/lib/xz/ -I$S/contrib/xz-embedded/linux/include/linux/" contrib/xz-embedded/linux/lib/xz/xz_dec_lzma2.c \ optional xz_embedded | geom_uzip \ compile-with "${NORMAL_C} -I$S/contrib/xz-embedded/freebsd/ -I$S/contrib/xz-embedded/linux/lib/xz/ -I$S/contrib/xz-embedded/linux/include/linux/" contrib/xz-embedded/linux/lib/xz/xz_dec_stream.c \ optional xz_embedded | geom_uzip \ compile-with "${NORMAL_C} -I$S/contrib/xz-embedded/freebsd/ -I$S/contrib/xz-embedded/linux/lib/xz/ -I$S/contrib/xz-embedded/linux/include/linux/" geom/uzip/g_uzip.c optional geom_uzip geom/uzip/g_uzip_lzma.c optional geom_uzip geom/uzip/g_uzip_wrkthr.c optional geom_uzip geom/uzip/g_uzip_zlib.c optional geom_uzip geom/vinum/geom_vinum.c optional geom_vinum geom/vinum/geom_vinum_create.c optional geom_vinum geom/vinum/geom_vinum_drive.c optional geom_vinum geom/vinum/geom_vinum_plex.c optional geom_vinum geom/vinum/geom_vinum_volume.c optional geom_vinum geom/vinum/geom_vinum_subr.c optional geom_vinum geom/vinum/geom_vinum_raid5.c optional geom_vinum geom/vinum/geom_vinum_share.c optional geom_vinum geom/vinum/geom_vinum_list.c optional geom_vinum geom/vinum/geom_vinum_rm.c optional geom_vinum geom/vinum/geom_vinum_init.c optional geom_vinum geom/vinum/geom_vinum_state.c optional geom_vinum geom/vinum/geom_vinum_rename.c optional geom_vinum geom/vinum/geom_vinum_move.c optional geom_vinum 
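#
# The xz-embedded entries above show the general shape of a
# "compile-with" override: it replaces the default ${NORMAL_C} rule for
# that one file, here to add -I paths for headers that live outside the
# normal kernel include directories.  A sketch of the idiom (hypothetical
# path and option, for illustration only):
#
#	contrib/foo/foo_decode.c	optional foo \
#		compile-with "${NORMAL_C} -I$S/contrib/foo/include"
#
# Entries that only need an extra flag still spell out the full command
# by re-expanding ${NORMAL_C}; there is no way to merely append to the
# default rule.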
geom/vinum/geom_vinum_events.c optional geom_vinum geom/virstor/binstream.c optional geom_virstor geom/virstor/g_virstor.c optional geom_virstor geom/virstor/g_virstor_md.c optional geom_virstor geom/zero/g_zero.c optional geom_zero fs/ext2fs/ext2_acl.c optional ext2fs fs/ext2fs/ext2_alloc.c optional ext2fs fs/ext2fs/ext2_balloc.c optional ext2fs fs/ext2fs/ext2_bmap.c optional ext2fs fs/ext2fs/ext2_csum.c optional ext2fs fs/ext2fs/ext2_extattr.c optional ext2fs fs/ext2fs/ext2_extents.c optional ext2fs fs/ext2fs/ext2_inode.c optional ext2fs fs/ext2fs/ext2_inode_cnv.c optional ext2fs fs/ext2fs/ext2_hash.c optional ext2fs fs/ext2fs/ext2_htree.c optional ext2fs fs/ext2fs/ext2_lookup.c optional ext2fs fs/ext2fs/ext2_subr.c optional ext2fs fs/ext2fs/ext2_vfsops.c optional ext2fs fs/ext2fs/ext2_vnops.c optional ext2fs # isa/isa_if.m standard isa/isa_common.c optional isa isa/isahint.c optional isa isa/pnp.c optional isa isapnp isa/pnpparse.c optional isa isapnp fs/cd9660/cd9660_bmap.c optional cd9660 fs/cd9660/cd9660_lookup.c optional cd9660 fs/cd9660/cd9660_node.c optional cd9660 fs/cd9660/cd9660_rrip.c optional cd9660 fs/cd9660/cd9660_util.c optional cd9660 fs/cd9660/cd9660_vfsops.c optional cd9660 fs/cd9660/cd9660_vnops.c optional cd9660 fs/cd9660/cd9660_iconv.c optional cd9660_iconv kern/bus_if.m standard kern/clock_if.m standard kern/cpufreq_if.m standard kern/device_if.m standard kern/imgact_binmisc.c optional imagact_binmisc kern/imgact_elf.c standard kern/imgact_elf32.c optional compat_freebsd32 kern/imgact_shell.c standard kern/init_main.c standard kern/init_sysent.c standard kern/ksched.c optional _kposix_priority_scheduling kern/kern_acct.c standard kern/kern_alq.c optional alq kern/kern_clock.c standard kern/kern_condvar.c standard kern/kern_conf.c standard kern/kern_cons.c standard kern/kern_cpu.c standard kern/kern_cpuset.c standard kern/kern_context.c standard kern/kern_descrip.c standard kern/kern_dtrace.c optional kdtrace_hooks kern/kern_dump.c standard kern/kern_environment.c standard kern/kern_et.c standard kern/kern_event.c standard kern/kern_exec.c standard kern/kern_exit.c standard kern/kern_fail.c standard kern/kern_ffclock.c standard kern/kern_fork.c standard kern/kern_hhook.c standard kern/kern_idle.c standard kern/kern_intr.c standard kern/kern_jail.c standard kern/kern_kcov.c optional kcov \ compile-with "${NORMAL_C:N-fsanitize*}" kern/kern_khelp.c standard kern/kern_kthread.c standard kern/kern_ktr.c optional ktr kern/kern_ktrace.c standard kern/kern_linker.c standard kern/kern_lock.c standard kern/kern_lockf.c standard kern/kern_lockstat.c optional kdtrace_hooks kern/kern_loginclass.c standard kern/kern_malloc.c standard kern/kern_mbuf.c standard kern/kern_mib.c standard kern/kern_module.c standard kern/kern_mtxpool.c standard kern/kern_mutex.c standard kern/kern_ntptime.c standard kern/kern_osd.c standard kern/kern_physio.c standard kern/kern_pmc.c standard kern/kern_poll.c optional device_polling kern/kern_priv.c standard kern/kern_proc.c standard kern/kern_procctl.c standard kern/kern_prot.c standard kern/kern_racct.c standard kern/kern_rangelock.c standard kern/kern_rctl.c standard kern/kern_resource.c standard kern/kern_rmlock.c standard kern/kern_rwlock.c standard kern/kern_sdt.c optional kdtrace_hooks kern/kern_sema.c standard kern/kern_sendfile.c standard kern/kern_sharedpage.c standard kern/kern_shutdown.c standard kern/kern_sig.c standard kern/kern_switch.c standard kern/kern_sx.c standard kern/kern_synch.c standard kern/kern_syscalls.c standard 
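#
# The .m entries above (bus_if.m, clock_if.m, cpufreq_if.m, device_if.m)
# are not Objective-C: they are kobj(9) interface definitions.  At build
# time $S/tools/makeobjops.awk turns each foo_if.m into a generated
# foo_if.h and foo_if.c pair holding the method declarations and
# dispatch stubs.  A minimal sketch of the .m syntax (hypothetical
# interface, for illustration only):
#
#	INTERFACE foo;
#
#	METHOD int read {
#		device_t	dev;
#		int		reg;
#	};
#
# which generates a FOO_READ(dev, reg) wrapper that dispatches through
# the object's method table.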
kern/kern_sysctl.c standard kern/kern_tc.c standard kern/kern_thr.c standard kern/kern_thread.c standard kern/kern_time.c standard kern/kern_timeout.c standard kern/kern_tslog.c optional tslog kern/kern_ubsan.c optional kubsan kern/kern_umtx.c standard kern/kern_uuid.c standard kern/kern_xxx.c standard kern/link_elf.c standard kern/linker_if.m standard kern/md4c.c optional netsmb kern/md5c.c standard kern/p1003_1b.c standard kern/posix4_mib.c standard kern/sched_4bsd.c optional sched_4bsd kern/sched_ule.c optional sched_ule kern/serdev_if.m standard kern/stack_protector.c standard \ compile-with "${NORMAL_C:N-fstack-protector*}" kern/subr_acl_nfs4.c optional ufs_acl | zfs kern/subr_acl_posix1e.c optional ufs_acl kern/subr_autoconf.c standard kern/subr_blist.c standard kern/subr_boot.c standard kern/subr_bus.c standard kern/subr_bus_dma.c standard kern/subr_bufring.c standard kern/subr_capability.c standard kern/subr_clock.c standard kern/subr_compressor.c standard \ compile-with "${NORMAL_C} -I$S/contrib/zstd/lib/freebsd" kern/subr_coverage.c optional coverage \ compile-with "${NORMAL_C:N-fsanitize*}" kern/subr_counter.c standard kern/subr_devstat.c standard kern/subr_disk.c standard kern/subr_early.c standard kern/subr_epoch.c standard kern/subr_eventhandler.c standard kern/subr_fattime.c standard kern/subr_firmware.c optional firmware kern/subr_gtaskqueue.c standard kern/subr_hash.c standard kern/subr_hints.c standard kern/subr_inflate.c optional gzip kern/subr_kdb.c standard kern/subr_kobj.c standard kern/subr_lock.c standard kern/subr_log.c standard kern/subr_mchain.c optional libmchain kern/subr_module.c standard kern/subr_msgbuf.c standard kern/subr_param.c standard kern/subr_pcpu.c standard kern/subr_pctrie.c standard kern/subr_pidctrl.c standard kern/subr_power.c standard kern/subr_prf.c standard kern/subr_prof.c standard kern/subr_rman.c standard kern/subr_rtc.c standard kern/subr_sbuf.c standard kern/subr_scanf.c standard kern/subr_sglist.c standard kern/subr_sleepqueue.c standard kern/subr_smp.c standard kern/subr_stack.c optional ddb | stack | ktr kern/subr_taskqueue.c standard kern/subr_terminal.c optional vt kern/subr_trap.c standard kern/subr_turnstile.c standard kern/subr_uio.c standard kern/subr_unit.c standard kern/subr_vmem.c standard kern/subr_witness.c optional witness kern/sys_capability.c standard kern/sys_generic.c standard kern/sys_getrandom.c standard kern/sys_pipe.c standard kern/sys_procdesc.c standard kern/sys_process.c standard kern/sys_socket.c standard kern/syscalls.c standard kern/sysv_ipc.c standard kern/sysv_msg.c optional sysvmsg kern/sysv_sem.c optional sysvsem kern/sysv_shm.c optional sysvshm kern/tty.c standard kern/tty_compat.c optional compat_43tty kern/tty_info.c standard kern/tty_inq.c standard kern/tty_outq.c standard kern/tty_pts.c standard kern/tty_tty.c standard kern/tty_ttydisc.c standard kern/uipc_accf.c standard kern/uipc_debug.c optional ddb kern/uipc_domain.c standard kern/uipc_mbuf.c standard kern/uipc_mbuf2.c standard kern/uipc_mbufhash.c standard kern/uipc_mqueue.c optional p1003_1b_mqueue kern/uipc_sem.c optional p1003_1b_semaphores kern/uipc_shm.c standard kern/uipc_sockbuf.c standard kern/uipc_socket.c standard kern/uipc_syscalls.c standard kern/uipc_usrreq.c standard kern/vfs_acl.c standard kern/vfs_aio.c standard kern/vfs_bio.c standard kern/vfs_cache.c standard kern/vfs_cluster.c standard kern/vfs_default.c standard kern/vfs_export.c standard kern/vfs_extattr.c standard kern/vfs_hash.c standard kern/vfs_init.c standard 
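#
# Some entries above adjust flags with bmake variable modifiers rather
# than a hand-written command: ${NORMAL_C:N-fstack-protector*} expands
# the default rule with every word matching -fstack-protector* removed,
# so kern/stack_protector.c itself is never built with stack-protector
# instrumentation (its __stack_chk_fail handler must not recurse), and
# ${NORMAL_C:N-fsanitize*} keeps the kcov/coverage sources from
# instrumenting themselves.  Illustrative bmake fragment (not part of
# the tree):
#
#	CFLAGS=	-O2 -fstack-protector-strong -pipe
#	all:
#		@echo ${CFLAGS:N-fstack-protector*}	# prints: -O2 -pipe
#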
kern/vfs_lookup.c standard kern/vfs_mount.c standard kern/vfs_mountroot.c standard kern/vfs_subr.c standard kern/vfs_syscalls.c standard kern/vfs_vnops.c standard # # Kernel GSS-API # gssd.h optional kgssapi \ dependency "$S/kgssapi/gssd.x" \ compile-with "RPCGEN_CPP='${CPP}' rpcgen -hM $S/kgssapi/gssd.x | grep -v pthread.h > gssd.h" \ no-obj no-implicit-rule before-depend local \ clean "gssd.h" gssd_xdr.c optional kgssapi \ dependency "$S/kgssapi/gssd.x gssd.h" \ compile-with "RPCGEN_CPP='${CPP}' rpcgen -c $S/kgssapi/gssd.x -o gssd_xdr.c" \ no-implicit-rule before-depend local \ clean "gssd_xdr.c" gssd_clnt.c optional kgssapi \ dependency "$S/kgssapi/gssd.x gssd.h" \ compile-with "RPCGEN_CPP='${CPP}' rpcgen -lM $S/kgssapi/gssd.x | grep -v string.h > gssd_clnt.c" \ no-implicit-rule before-depend local \ clean "gssd_clnt.c" kgssapi/gss_accept_sec_context.c optional kgssapi kgssapi/gss_add_oid_set_member.c optional kgssapi kgssapi/gss_acquire_cred.c optional kgssapi kgssapi/gss_canonicalize_name.c optional kgssapi kgssapi/gss_create_empty_oid_set.c optional kgssapi kgssapi/gss_delete_sec_context.c optional kgssapi kgssapi/gss_display_status.c optional kgssapi kgssapi/gss_export_name.c optional kgssapi kgssapi/gss_get_mic.c optional kgssapi kgssapi/gss_init_sec_context.c optional kgssapi kgssapi/gss_impl.c optional kgssapi kgssapi/gss_import_name.c optional kgssapi kgssapi/gss_names.c optional kgssapi kgssapi/gss_pname_to_uid.c optional kgssapi kgssapi/gss_release_buffer.c optional kgssapi kgssapi/gss_release_cred.c optional kgssapi kgssapi/gss_release_name.c optional kgssapi kgssapi/gss_release_oid_set.c optional kgssapi kgssapi/gss_set_cred_option.c optional kgssapi kgssapi/gss_test_oid_set_member.c optional kgssapi kgssapi/gss_unwrap.c optional kgssapi kgssapi/gss_verify_mic.c optional kgssapi kgssapi/gss_wrap.c optional kgssapi kgssapi/gss_wrap_size_limit.c optional kgssapi kgssapi/gssd_prot.c optional kgssapi kgssapi/krb5/krb5_mech.c optional kgssapi kgssapi/krb5/kcrypto.c optional kgssapi kgssapi/krb5/kcrypto_aes.c optional kgssapi kgssapi/krb5/kcrypto_arcfour.c optional kgssapi kgssapi/krb5/kcrypto_des.c optional kgssapi kgssapi/krb5/kcrypto_des3.c optional kgssapi kgssapi/kgss_if.m optional kgssapi kgssapi/gsstest.c optional kgssapi_debug # These files in libkern/ are those needed by all architectures. Some # of the files in libkern/ are only needed on some architectures, e.g., # libkern/divdi3.c is needed by i386 but not alpha. Also, some of these # routines may be optimized for a particular platform. In either case, # the file should be moved to conf/files.<arch> from here.
# libkern/arc4random.c standard crypto/chacha20/chacha.c standard libkern/asprintf.c standard libkern/bcd.c standard libkern/bsearch.c standard libkern/crc32.c standard libkern/explicit_bzero.c standard libkern/fnmatch.c standard libkern/iconv.c optional libiconv libkern/iconv_converter_if.m optional libiconv libkern/iconv_ucs.c optional libiconv libkern/iconv_xlat.c optional libiconv libkern/iconv_xlat16.c optional libiconv libkern/inet_aton.c standard libkern/inet_ntoa.c standard libkern/inet_ntop.c standard libkern/inet_pton.c standard libkern/jenkins_hash.c standard libkern/murmur3_32.c standard libkern/mcount.c optional profiling-routine libkern/memcchr.c standard libkern/memchr.c standard libkern/memmem.c optional gdb libkern/qsort.c standard libkern/qsort_r.c standard libkern/random.c standard libkern/scanc.c standard libkern/strcasecmp.c standard libkern/strcat.c standard libkern/strchr.c standard libkern/strcmp.c standard libkern/strcpy.c standard libkern/strcspn.c standard libkern/strdup.c standard libkern/strndup.c standard libkern/strlcat.c standard libkern/strlcpy.c standard libkern/strlen.c standard libkern/strncat.c standard libkern/strncmp.c standard libkern/strncpy.c standard libkern/strnlen.c standard libkern/strrchr.c standard libkern/strsep.c standard libkern/strspn.c standard libkern/strstr.c standard libkern/strtol.c standard libkern/strtoq.c standard libkern/strtoul.c standard libkern/strtouq.c standard libkern/strvalid.c standard libkern/timingsafe_bcmp.c standard libkern/zlib.c optional crypto | geom_uzip | ipsec | \ ipsec_support | mxge | netgraph_deflate | ddb_ctf | gzio net/altq/altq_cbq.c optional altq net/altq/altq_codel.c optional altq net/altq/altq_hfsc.c optional altq net/altq/altq_fairq.c optional altq net/altq/altq_priq.c optional altq net/altq/altq_red.c optional altq net/altq/altq_rio.c optional altq net/altq/altq_rmclass.c optional altq net/altq/altq_subr.c optional altq net/bpf.c standard net/bpf_buffer.c optional bpf net/bpf_jitter.c optional bpf_jitter net/bpf_filter.c optional bpf | netgraph_bpf net/bpf_zerocopy.c optional bpf net/bridgestp.c optional bridge | if_bridge net/flowtable.c optional flowtable inet | flowtable inet6 net/ieee8023ad_lacp.c optional lagg net/if.c standard net/if_bridge.c optional bridge inet | if_bridge inet net/if_clone.c standard net/if_dead.c standard net/if_debug.c optional ddb net/if_disc.c optional disc net/if_edsc.c optional edsc net/if_enc.c optional enc inet | enc inet6 net/if_epair.c optional epair net/if_ethersubr.c optional ether net/if_fwsubr.c optional fwip net/if_gif.c optional gif inet | gif inet6 | \ netgraph_gif inet | netgraph_gif inet6 net/if_gre.c optional gre inet | gre inet6 net/if_ipsec.c optional inet ipsec | inet6 ipsec net/if_lagg.c optional lagg net/if_loop.c optional loop net/if_llatbl.c standard net/if_me.c optional me inet net/if_media.c standard net/if_mib.c standard net/if_spppfr.c optional sppp | netgraph_sppp net/if_spppsubr.c optional sppp | netgraph_sppp net/if_stf.c optional stf inet inet6 net/if_tun.c optional tun net/if_tap.c optional tap net/if_vlan.c optional vlan net/if_vxlan.c optional vxlan inet | vxlan inet6 net/ifdi_if.m optional ether pci iflib net/iflib.c optional ether pci iflib net/iflib_clone.c optional ether pci iflib net/mp_ring.c optional ether iflib net/mppcc.c optional netgraph_mppc_compression net/mppcd.c optional netgraph_mppc_compression net/netisr.c standard net/pfil.c optional ether | inet net/radix.c standard net/radix_mpath.c standard net/raw_cb.c standard 
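#
# Per the libkern note above: when an architecture wants an optimized
# machine-dependent version of one of these routines, the entry moves
# from this file into the per-architecture conf/files.<arch> list.  As
# a hedged illustration (the exact per-arch entries vary), the amd64
# list carries a line along the lines of
#
#	libkern/x86/crc32_sse42.c	standard
#
# supplying an SSE4.2-accelerated CRC32 next to the portable
# libkern/crc32.c listed above.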
net/raw_usrreq.c standard net/route.c standard net/rss_config.c optional inet rss | inet6 rss net/rtsock.c standard net/slcompress.c optional netgraph_vjc | sppp | \ netgraph_sppp net/toeplitz.c optional inet rss | inet6 rss net/vnet.c optional vimage net80211/ieee80211.c optional wlan net80211/ieee80211_acl.c optional wlan wlan_acl net80211/ieee80211_action.c optional wlan net80211/ieee80211_adhoc.c optional wlan \ compile-with "${NORMAL_C} -Wno-unused-function" net80211/ieee80211_ageq.c optional wlan net80211/ieee80211_amrr.c optional wlan | wlan_amrr net80211/ieee80211_crypto.c optional wlan \ compile-with "${NORMAL_C} -Wno-unused-function" net80211/ieee80211_crypto_ccmp.c optional wlan wlan_ccmp net80211/ieee80211_crypto_none.c optional wlan net80211/ieee80211_crypto_tkip.c optional wlan wlan_tkip net80211/ieee80211_crypto_wep.c optional wlan wlan_wep net80211/ieee80211_ddb.c optional wlan ddb net80211/ieee80211_dfs.c optional wlan net80211/ieee80211_freebsd.c optional wlan net80211/ieee80211_hostap.c optional wlan \ compile-with "${NORMAL_C} -Wno-unused-function" net80211/ieee80211_ht.c optional wlan net80211/ieee80211_hwmp.c optional wlan ieee80211_support_mesh net80211/ieee80211_input.c optional wlan net80211/ieee80211_ioctl.c optional wlan net80211/ieee80211_mesh.c optional wlan ieee80211_support_mesh \ compile-with "${NORMAL_C} -Wno-unused-function" net80211/ieee80211_monitor.c optional wlan net80211/ieee80211_node.c optional wlan net80211/ieee80211_output.c optional wlan net80211/ieee80211_phy.c optional wlan net80211/ieee80211_power.c optional wlan net80211/ieee80211_proto.c optional wlan net80211/ieee80211_radiotap.c optional wlan net80211/ieee80211_ratectl.c optional wlan net80211/ieee80211_ratectl_none.c optional wlan net80211/ieee80211_regdomain.c optional wlan net80211/ieee80211_rssadapt.c optional wlan wlan_rssadapt net80211/ieee80211_scan.c optional wlan net80211/ieee80211_scan_sta.c optional wlan net80211/ieee80211_sta.c optional wlan \ compile-with "${NORMAL_C} -Wno-unused-function" net80211/ieee80211_superg.c optional wlan ieee80211_support_superg net80211/ieee80211_scan_sw.c optional wlan net80211/ieee80211_tdma.c optional wlan ieee80211_support_tdma net80211/ieee80211_vht.c optional wlan net80211/ieee80211_wds.c optional wlan net80211/ieee80211_xauth.c optional wlan wlan_xauth net80211/ieee80211_alq.c optional wlan ieee80211_alq netgraph/atm/ccatm/ng_ccatm.c optional ngatm_ccatm \ compile-with "${NORMAL_C} -I$S/contrib/ngatm" netgraph/atm/ngatmbase.c optional ngatm_atmbase \ compile-with "${NORMAL_C} -I$S/contrib/ngatm" netgraph/atm/sscfu/ng_sscfu.c optional ngatm_sscfu \ compile-with "${NORMAL_C} -I$S/contrib/ngatm" netgraph/atm/sscop/ng_sscop.c optional ngatm_sscop \ compile-with "${NORMAL_C} -I$S/contrib/ngatm" netgraph/atm/uni/ng_uni.c optional ngatm_uni \ compile-with "${NORMAL_C} -I$S/contrib/ngatm" netgraph/bluetooth/common/ng_bluetooth.c optional netgraph_bluetooth netgraph/bluetooth/drivers/bt3c/ng_bt3c_pccard.c optional netgraph_bluetooth_bt3c netgraph/bluetooth/drivers/h4/ng_h4.c optional netgraph_bluetooth_h4 netgraph/bluetooth/drivers/ubt/ng_ubt.c optional netgraph_bluetooth_ubt usb netgraph/bluetooth/drivers/ubtbcmfw/ubtbcmfw.c optional netgraph_bluetooth_ubtbcmfw usb netgraph/bluetooth/hci/ng_hci_cmds.c optional netgraph_bluetooth_hci netgraph/bluetooth/hci/ng_hci_evnt.c optional netgraph_bluetooth_hci netgraph/bluetooth/hci/ng_hci_main.c optional netgraph_bluetooth_hci netgraph/bluetooth/hci/ng_hci_misc.c optional netgraph_bluetooth_hci 
netgraph/bluetooth/hci/ng_hci_ulpi.c optional netgraph_bluetooth_hci netgraph/bluetooth/l2cap/ng_l2cap_cmds.c optional netgraph_bluetooth_l2cap netgraph/bluetooth/l2cap/ng_l2cap_evnt.c optional netgraph_bluetooth_l2cap netgraph/bluetooth/l2cap/ng_l2cap_llpi.c optional netgraph_bluetooth_l2cap netgraph/bluetooth/l2cap/ng_l2cap_main.c optional netgraph_bluetooth_l2cap netgraph/bluetooth/l2cap/ng_l2cap_misc.c optional netgraph_bluetooth_l2cap netgraph/bluetooth/l2cap/ng_l2cap_ulpi.c optional netgraph_bluetooth_l2cap netgraph/bluetooth/socket/ng_btsocket.c optional netgraph_bluetooth_socket netgraph/bluetooth/socket/ng_btsocket_hci_raw.c optional netgraph_bluetooth_socket netgraph/bluetooth/socket/ng_btsocket_l2cap.c optional netgraph_bluetooth_socket netgraph/bluetooth/socket/ng_btsocket_l2cap_raw.c optional netgraph_bluetooth_socket netgraph/bluetooth/socket/ng_btsocket_rfcomm.c optional netgraph_bluetooth_socket netgraph/bluetooth/socket/ng_btsocket_sco.c optional netgraph_bluetooth_socket netgraph/netflow/netflow.c optional netgraph_netflow netgraph/netflow/netflow_v9.c optional netgraph_netflow netgraph/netflow/ng_netflow.c optional netgraph_netflow netgraph/ng_UI.c optional netgraph_UI netgraph/ng_async.c optional netgraph_async netgraph/ng_atmllc.c optional netgraph_atmllc netgraph/ng_base.c optional netgraph netgraph/ng_bpf.c optional netgraph_bpf netgraph/ng_bridge.c optional netgraph_bridge netgraph/ng_car.c optional netgraph_car netgraph/ng_checksum.c optional netgraph_checksum netgraph/ng_cisco.c optional netgraph_cisco netgraph/ng_deflate.c optional netgraph_deflate netgraph/ng_device.c optional netgraph_device netgraph/ng_echo.c optional netgraph_echo netgraph/ng_eiface.c optional netgraph_eiface netgraph/ng_ether.c optional netgraph_ether netgraph/ng_ether_echo.c optional netgraph_ether_echo netgraph/ng_frame_relay.c optional netgraph_frame_relay netgraph/ng_gif.c optional netgraph_gif inet6 | netgraph_gif inet netgraph/ng_gif_demux.c optional netgraph_gif_demux netgraph/ng_hole.c optional netgraph_hole netgraph/ng_iface.c optional netgraph_iface netgraph/ng_ip_input.c optional netgraph_ip_input netgraph/ng_ipfw.c optional netgraph_ipfw inet ipfirewall netgraph/ng_ksocket.c optional netgraph_ksocket netgraph/ng_l2tp.c optional netgraph_l2tp netgraph/ng_lmi.c optional netgraph_lmi netgraph/ng_mppc.c optional netgraph_mppc_compression | \ netgraph_mppc_encryption netgraph/ng_nat.c optional netgraph_nat inet libalias netgraph/ng_one2many.c optional netgraph_one2many netgraph/ng_parse.c optional netgraph netgraph/ng_patch.c optional netgraph_patch netgraph/ng_pipe.c optional netgraph_pipe netgraph/ng_ppp.c optional netgraph_ppp netgraph/ng_pppoe.c optional netgraph_pppoe netgraph/ng_pptpgre.c optional netgraph_pptpgre netgraph/ng_pred1.c optional netgraph_pred1 netgraph/ng_rfc1490.c optional netgraph_rfc1490 netgraph/ng_socket.c optional netgraph_socket netgraph/ng_split.c optional netgraph_split netgraph/ng_sppp.c optional netgraph_sppp netgraph/ng_tag.c optional netgraph_tag netgraph/ng_tcpmss.c optional netgraph_tcpmss netgraph/ng_tee.c optional netgraph_tee netgraph/ng_tty.c optional netgraph_tty netgraph/ng_vjc.c optional netgraph_vjc netgraph/ng_vlan.c optional netgraph_vlan netinet/accf_data.c optional accept_filter_data inet netinet/accf_dns.c optional accept_filter_dns inet netinet/accf_http.c optional accept_filter_http inet netinet/if_ether.c optional inet ether netinet/igmp.c optional inet netinet/in.c optional inet netinet/in_debug.c optional inet ddb 
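#
# Dependency syntax reminder for the protocol entries that follow: in
# an "optional" clause, whitespace-separated tokens are ANDed and "|"
# separates alternatives.  So "netinet/in_debug.c optional inet ddb"
# (above) needs both INET and DDB configured, while
# "netinet/in_kdtrace.c optional inet | inet6" (below) is built when
# either INET or INET6 is present.  A minimal sketch (hypothetical
# kernel config fragment):
#
#	options 	INET
#	options 	INET6
#	options 	MROUTING	# adds netinet/ip_mroute.c (mrouting inet)
#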
netinet/in_kdtrace.c optional inet | inet6 netinet/ip_carp.c optional inet carp | inet6 carp netinet/in_fib.c optional inet netinet/in_gif.c optional gif inet | netgraph_gif inet netinet/ip_gre.c optional gre inet netinet/ip_id.c optional inet netinet/in_jail.c optional inet netinet/in_mcast.c optional inet netinet/in_pcb.c optional inet | inet6 netinet/in_pcbgroup.c optional inet pcbgroup | inet6 pcbgroup netinet/in_prot.c optional inet | inet6 netinet/in_proto.c optional inet | inet6 netinet/in_rmx.c optional inet netinet/in_rss.c optional inet rss netinet/ip_divert.c optional inet ipdivert ipfirewall netinet/ip_ecn.c optional inet | inet6 netinet/ip_encap.c optional inet | inet6 netinet/ip_fastfwd.c optional inet netinet/ip_icmp.c optional inet | inet6 netinet/ip_input.c optional inet netinet/ip_mroute.c optional mrouting inet netinet/ip_options.c optional inet netinet/ip_output.c optional inet netinet/ip_reass.c optional inet netinet/raw_ip.c optional inet | inet6 netinet/cc/cc.c optional inet | inet6 netinet/cc/cc_newreno.c optional inet | inet6 netinet/sctp_asconf.c optional inet sctp | inet6 sctp netinet/sctp_auth.c optional inet sctp | inet6 sctp netinet/sctp_bsd_addr.c optional inet sctp | inet6 sctp netinet/sctp_cc_functions.c optional inet sctp | inet6 sctp netinet/sctp_crc32.c optional inet | inet6 netinet/sctp_indata.c optional inet sctp | inet6 sctp netinet/sctp_input.c optional inet sctp | inet6 sctp netinet/sctp_output.c optional inet sctp | inet6 sctp netinet/sctp_pcb.c optional inet sctp | inet6 sctp netinet/sctp_peeloff.c optional inet sctp | inet6 sctp netinet/sctp_ss_functions.c optional inet sctp | inet6 sctp netinet/sctp_syscalls.c optional inet sctp | inet6 sctp netinet/sctp_sysctl.c optional inet sctp | inet6 sctp netinet/sctp_timer.c optional inet sctp | inet6 sctp netinet/sctp_usrreq.c optional inet sctp | inet6 sctp netinet/sctputil.c optional inet sctp | inet6 sctp netinet/siftr.c optional inet siftr alq | inet6 siftr alq netinet/tcp_debug.c optional tcpdebug netinet/tcp_fastopen.c optional inet tcp_rfc7413 | inet6 tcp_rfc7413 netinet/tcp_hostcache.c optional inet | inet6 netinet/tcp_input.c optional inet | inet6 netinet/tcp_log_buf.c optional tcp_blackbox inet | tcp_blackbox inet6 netinet/tcp_lro.c optional inet | inet6 netinet/tcp_output.c optional inet | inet6 netinet/tcp_offload.c optional tcp_offload inet | tcp_offload inet6 netinet/tcp_hpts.c optional tcphpts inet | tcphpts inet6 netinet/tcp_pcap.c optional inet tcppcap | inet6 tcppcap netinet/tcp_reass.c optional inet | inet6 netinet/tcp_sack.c optional inet | inet6 netinet/tcp_subr.c optional inet | inet6 netinet/tcp_syncache.c optional inet | inet6 netinet/tcp_timer.c optional inet | inet6 netinet/tcp_timewait.c optional inet | inet6 netinet/tcp_usrreq.c optional inet | inet6 netinet/udp_usrreq.c optional inet | inet6 netinet/libalias/alias.c optional libalias inet | netgraph_nat inet netinet/libalias/alias_db.c optional libalias inet | netgraph_nat inet netinet/libalias/alias_mod.c optional libalias | netgraph_nat netinet/libalias/alias_proxy.c optional libalias inet | netgraph_nat inet netinet/libalias/alias_util.c optional libalias inet | netgraph_nat inet netinet/libalias/alias_sctp.c optional libalias inet | netgraph_nat inet netinet/netdump/netdump_client.c optional inet netdump netinet6/dest6.c optional inet6 netinet6/frag6.c optional inet6 netinet6/icmp6.c optional inet6 netinet6/in6.c optional inet6 netinet6/in6_cksum.c optional inet6 netinet6/in6_fib.c optional inet6 netinet6/in6_gif.c 
optional gif inet6 | netgraph_gif inet6 netinet6/in6_ifattach.c optional inet6 netinet6/in6_jail.c optional inet6 netinet6/in6_mcast.c optional inet6 netinet6/in6_pcb.c optional inet6 netinet6/in6_pcbgroup.c optional inet6 pcbgroup netinet6/in6_proto.c optional inet6 netinet6/in6_rmx.c optional inet6 netinet6/in6_rss.c optional inet6 rss netinet6/in6_src.c optional inet6 netinet6/ip6_fastfwd.c optional inet6 netinet6/ip6_forward.c optional inet6 netinet6/ip6_gre.c optional gre inet6 netinet6/ip6_id.c optional inet6 netinet6/ip6_input.c optional inet6 netinet6/ip6_mroute.c optional mrouting inet6 netinet6/ip6_output.c optional inet6 netinet6/mld6.c optional inet6 netinet6/nd6.c optional inet6 netinet6/nd6_nbr.c optional inet6 netinet6/nd6_rtr.c optional inet6 netinet6/raw_ip6.c optional inet6 netinet6/route6.c optional inet6 netinet6/scope6.c optional inet6 netinet6/sctp6_usrreq.c optional inet6 sctp netinet6/udp6_usrreq.c optional inet6 netipsec/ipsec.c optional ipsec inet | ipsec inet6 netipsec/ipsec_input.c optional ipsec inet | ipsec inet6 netipsec/ipsec_mbuf.c optional ipsec inet | ipsec inet6 netipsec/ipsec_mod.c optional ipsec inet | ipsec inet6 netipsec/ipsec_output.c optional ipsec inet | ipsec inet6 netipsec/ipsec_pcb.c optional ipsec inet | ipsec inet6 | \ ipsec_support inet | ipsec_support inet6 netipsec/key.c optional ipsec inet | ipsec inet6 | \ ipsec_support inet | ipsec_support inet6 netipsec/key_debug.c optional ipsec inet | ipsec inet6 | \ ipsec_support inet | ipsec_support inet6 netipsec/keysock.c optional ipsec inet | ipsec inet6 | \ ipsec_support inet | ipsec_support inet6 netipsec/subr_ipsec.c optional ipsec inet | ipsec inet6 | \ ipsec_support inet | ipsec_support inet6 netipsec/udpencap.c optional ipsec inet netipsec/xform_ah.c optional ipsec inet | ipsec inet6 netipsec/xform_esp.c optional ipsec inet | ipsec inet6 netipsec/xform_ipcomp.c optional ipsec inet | ipsec inet6 netipsec/xform_tcp.c optional ipsec inet tcp_signature | \ ipsec inet6 tcp_signature | ipsec_support inet tcp_signature | \ ipsec_support inet6 tcp_signature netpfil/ipfw/dn_aqm_codel.c optional inet dummynet netpfil/ipfw/dn_aqm_pie.c optional inet dummynet netpfil/ipfw/dn_heap.c optional inet dummynet netpfil/ipfw/dn_sched_fifo.c optional inet dummynet netpfil/ipfw/dn_sched_fq_codel.c optional inet dummynet netpfil/ipfw/dn_sched_fq_pie.c optional inet dummynet netpfil/ipfw/dn_sched_prio.c optional inet dummynet netpfil/ipfw/dn_sched_qfq.c optional inet dummynet netpfil/ipfw/dn_sched_rr.c optional inet dummynet netpfil/ipfw/dn_sched_wf2q.c optional inet dummynet netpfil/ipfw/ip_dummynet.c optional inet dummynet netpfil/ipfw/ip_dn_io.c optional inet dummynet netpfil/ipfw/ip_dn_glue.c optional inet dummynet netpfil/ipfw/ip_fw2.c optional inet ipfirewall netpfil/ipfw/ip_fw_bpf.c optional inet ipfirewall netpfil/ipfw/ip_fw_dynamic.c optional inet ipfirewall \ compile-with "${NORMAL_C} -I$S/contrib/ck/include" netpfil/ipfw/ip_fw_eaction.c optional inet ipfirewall netpfil/ipfw/ip_fw_log.c optional inet ipfirewall netpfil/ipfw/ip_fw_pfil.c optional inet ipfirewall netpfil/ipfw/ip_fw_sockopt.c optional inet ipfirewall netpfil/ipfw/ip_fw_table.c optional inet ipfirewall netpfil/ipfw/ip_fw_table_algo.c optional inet ipfirewall netpfil/ipfw/ip_fw_table_value.c optional inet ipfirewall netpfil/ipfw/ip_fw_iface.c optional inet ipfirewall netpfil/ipfw/ip_fw_nat.c optional inet ipfirewall_nat netpfil/ipfw/nat64/ip_fw_nat64.c optional inet inet6 ipfirewall \ ipfirewall_nat64 netpfil/ipfw/nat64/nat64lsn.c optional 
inet inet6 ipfirewall \ ipfirewall_nat64 netpfil/ipfw/nat64/nat64lsn_control.c optional inet inet6 ipfirewall \ ipfirewall_nat64 netpfil/ipfw/nat64/nat64stl.c optional inet inet6 ipfirewall \ ipfirewall_nat64 netpfil/ipfw/nat64/nat64stl_control.c optional inet inet6 ipfirewall \ ipfirewall_nat64 netpfil/ipfw/nat64/nat64_translate.c optional inet inet6 ipfirewall \ ipfirewall_nat64 netpfil/ipfw/nptv6/ip_fw_nptv6.c optional inet inet6 ipfirewall \ ipfirewall_nptv6 netpfil/ipfw/nptv6/nptv6.c optional inet inet6 ipfirewall \ ipfirewall_nptv6 netpfil/ipfw/pmod/ip_fw_pmod.c optional inet ipfirewall_pmod netpfil/ipfw/pmod/tcpmod.c optional inet ipfirewall_pmod netpfil/pf/if_pflog.c optional pflog pf inet netpfil/pf/if_pfsync.c optional pfsync pf inet netpfil/pf/pf.c optional pf inet netpfil/pf/pf_if.c optional pf inet netpfil/pf/pf_ioctl.c optional pf inet netpfil/pf/pf_lb.c optional pf inet netpfil/pf/pf_norm.c optional pf inet netpfil/pf/pf_osfp.c optional pf inet netpfil/pf/pf_ruleset.c optional pf inet netpfil/pf/pf_table.c optional pf inet netpfil/pf/in4_cksum.c optional pf inet netsmb/smb_conn.c optional netsmb netsmb/smb_crypt.c optional netsmb netsmb/smb_dev.c optional netsmb netsmb/smb_iod.c optional netsmb netsmb/smb_rq.c optional netsmb netsmb/smb_smb.c optional netsmb netsmb/smb_subr.c optional netsmb netsmb/smb_trantcp.c optional netsmb netsmb/smb_usr.c optional netsmb nfs/bootp_subr.c optional bootp nfscl nfs/krpc_subr.c optional bootp nfscl nfs/nfs_diskless.c optional nfscl nfs_root nfs/nfs_fha.c optional nfsd nfs/nfs_lock.c optional nfscl | nfslockd | nfsd nfs/nfs_nfssvc.c optional nfscl | nfsd nlm/nlm_advlock.c optional nfslockd | nfsd nlm/nlm_prot_clnt.c optional nfslockd | nfsd nlm/nlm_prot_impl.c optional nfslockd | nfsd nlm/nlm_prot_server.c optional nfslockd | nfsd nlm/nlm_prot_svc.c optional nfslockd | nfsd nlm/nlm_prot_xdr.c optional nfslockd | nfsd nlm/sm_inter_xdr.c optional nfslockd | nfsd # Linux Kernel Programming Interface compat/linuxkpi/common/src/linux_kmod.c optional compat_linuxkpi \ compile-with "${LINUXKPI_C}" compat/linuxkpi/common/src/linux_compat.c optional compat_linuxkpi \ compile-with "${LINUXKPI_C}" compat/linuxkpi/common/src/linux_current.c optional compat_linuxkpi \ compile-with "${LINUXKPI_C}" compat/linuxkpi/common/src/linux_hrtimer.c optional compat_linuxkpi \ compile-with "${LINUXKPI_C}" compat/linuxkpi/common/src/linux_kthread.c optional compat_linuxkpi \ compile-with "${LINUXKPI_C}" compat/linuxkpi/common/src/linux_lock.c optional compat_linuxkpi \ compile-with "${LINUXKPI_C}" compat/linuxkpi/common/src/linux_page.c optional compat_linuxkpi \ compile-with "${LINUXKPI_C}" compat/linuxkpi/common/src/linux_pci.c optional compat_linuxkpi pci \ compile-with "${LINUXKPI_C}" compat/linuxkpi/common/src/linux_tasklet.c optional compat_linuxkpi \ compile-with "${LINUXKPI_C}" compat/linuxkpi/common/src/linux_idr.c optional compat_linuxkpi \ compile-with "${LINUXKPI_C}" compat/linuxkpi/common/src/linux_radix.c optional compat_linuxkpi \ compile-with "${LINUXKPI_C}" compat/linuxkpi/common/src/linux_rcu.c optional compat_linuxkpi \ compile-with "${LINUXKPI_C} -I$S/contrib/ck/include" compat/linuxkpi/common/src/linux_schedule.c optional compat_linuxkpi \ compile-with "${LINUXKPI_C}" compat/linuxkpi/common/src/linux_slab.c optional compat_linuxkpi \ compile-with "${LINUXKPI_C}" compat/linuxkpi/common/src/linux_usb.c optional compat_linuxkpi usb \ compile-with "${LINUXKPI_C}" compat/linuxkpi/common/src/linux_work.c optional compat_linuxkpi \ compile-with 
"${LINUXKPI_C}" # OpenFabrics Enterprise Distribution (Infiniband) ofed/drivers/infiniband/core/ib_addr.c optional ofed \ compile-with "${OFED_C}" ofed/drivers/infiniband/core/ib_agent.c optional ofed \ compile-with "${OFED_C}" ofed/drivers/infiniband/core/ib_cache.c optional ofed \ compile-with "${OFED_C}" ofed/drivers/infiniband/core/ib_cm.c optional ofed \ compile-with "${OFED_C}" ofed/drivers/infiniband/core/ib_cma.c optional ofed \ compile-with "${OFED_C}" ofed/drivers/infiniband/core/ib_cq.c optional ofed \ compile-with "${OFED_C}" ofed/drivers/infiniband/core/ib_device.c optional ofed \ compile-with "${OFED_C}" ofed/drivers/infiniband/core/ib_fmr_pool.c optional ofed \ compile-with "${OFED_C}" ofed/drivers/infiniband/core/ib_iwcm.c optional ofed \ compile-with "${OFED_C}" ofed/drivers/infiniband/core/ib_iwpm_msg.c optional ofed \ compile-with "${OFED_C}" ofed/drivers/infiniband/core/ib_iwpm_util.c optional ofed \ compile-with "${OFED_C}" ofed/drivers/infiniband/core/ib_mad.c optional ofed \ compile-with "${OFED_C}" ofed/drivers/infiniband/core/ib_mad_rmpp.c optional ofed \ compile-with "${OFED_C}" ofed/drivers/infiniband/core/ib_multicast.c optional ofed \ compile-with "${OFED_C}" ofed/drivers/infiniband/core/ib_packer.c optional ofed \ compile-with "${OFED_C}" ofed/drivers/infiniband/core/ib_roce_gid_mgmt.c optional ofed \ compile-with "${OFED_C}" ofed/drivers/infiniband/core/ib_sa_query.c optional ofed \ compile-with "${OFED_C}" ofed/drivers/infiniband/core/ib_smi.c optional ofed \ compile-with "${OFED_C}" ofed/drivers/infiniband/core/ib_sysfs.c optional ofed \ compile-with "${OFED_C}" ofed/drivers/infiniband/core/ib_ucm.c optional ofed \ compile-with "${OFED_C}" ofed/drivers/infiniband/core/ib_ucma.c optional ofed \ compile-with "${OFED_C}" ofed/drivers/infiniband/core/ib_ud_header.c optional ofed \ compile-with "${OFED_C}" ofed/drivers/infiniband/core/ib_umem.c optional ofed \ compile-with "${OFED_C}" ofed/drivers/infiniband/core/ib_user_mad.c optional ofed \ compile-with "${OFED_C}" ofed/drivers/infiniband/core/ib_uverbs_cmd.c optional ofed \ compile-with "${OFED_C}" ofed/drivers/infiniband/core/ib_uverbs_main.c optional ofed \ compile-with "${OFED_C}" ofed/drivers/infiniband/core/ib_uverbs_marshall.c optional ofed \ compile-with "${OFED_C}" ofed/drivers/infiniband/core/ib_verbs.c optional ofed \ compile-with "${OFED_C}" ofed/drivers/infiniband/ulp/ipoib/ipoib_cm.c optional ipoib \ compile-with "${OFED_C} -I$S/ofed/drivers/infiniband/ulp/ipoib/" #ofed/drivers/infiniband/ulp/ipoib/ipoib_fs.c optional ipoib \ # compile-with "${OFED_C} -I$S/ofed/drivers/infiniband/ulp/ipoib/" ofed/drivers/infiniband/ulp/ipoib/ipoib_ib.c optional ipoib \ compile-with "${OFED_C} -I$S/ofed/drivers/infiniband/ulp/ipoib/" ofed/drivers/infiniband/ulp/ipoib/ipoib_main.c optional ipoib \ compile-with "${OFED_C} -I$S/ofed/drivers/infiniband/ulp/ipoib/" ofed/drivers/infiniband/ulp/ipoib/ipoib_multicast.c optional ipoib \ compile-with "${OFED_C} -I$S/ofed/drivers/infiniband/ulp/ipoib/" ofed/drivers/infiniband/ulp/ipoib/ipoib_verbs.c optional ipoib \ compile-with "${OFED_C} -I$S/ofed/drivers/infiniband/ulp/ipoib/" #ofed/drivers/infiniband/ulp/ipoib/ipoib_vlan.c optional ipoib \ # compile-with "${OFED_C} -I$S/ofed/drivers/infiniband/ulp/ipoib/" ofed/drivers/infiniband/ulp/sdp/sdp_bcopy.c optional sdp inet \ compile-with "${OFED_C} -I$S/ofed/drivers/infiniband/ulp/sdp/" ofed/drivers/infiniband/ulp/sdp/sdp_main.c optional sdp inet \ compile-with "${OFED_C} -I$S/ofed/drivers/infiniband/ulp/sdp/" 
ofed/drivers/infiniband/ulp/sdp/sdp_rx.c optional sdp inet \ compile-with "${OFED_C} -I$S/ofed/drivers/infiniband/ulp/sdp/" ofed/drivers/infiniband/ulp/sdp/sdp_cma.c optional sdp inet \ compile-with "${OFED_C} -I$S/ofed/drivers/infiniband/ulp/sdp/" ofed/drivers/infiniband/ulp/sdp/sdp_tx.c optional sdp inet \ compile-with "${OFED_C} -I$S/ofed/drivers/infiniband/ulp/sdp/" dev/mthca/mthca_allocator.c optional mthca pci ofed \ compile-with "${OFED_C}" dev/mthca/mthca_av.c optional mthca pci ofed \ compile-with "${OFED_C}" dev/mthca/mthca_catas.c optional mthca pci ofed \ compile-with "${OFED_C}" dev/mthca/mthca_cmd.c optional mthca pci ofed \ compile-with "${OFED_C}" dev/mthca/mthca_cq.c optional mthca pci ofed \ compile-with "${OFED_C}" dev/mthca/mthca_eq.c optional mthca pci ofed \ compile-with "${OFED_C}" dev/mthca/mthca_mad.c optional mthca pci ofed \ compile-with "${OFED_C}" dev/mthca/mthca_main.c optional mthca pci ofed \ compile-with "${OFED_C}" dev/mthca/mthca_mcg.c optional mthca pci ofed \ compile-with "${OFED_C}" dev/mthca/mthca_memfree.c optional mthca pci ofed \ compile-with "${OFED_C}" dev/mthca/mthca_mr.c optional mthca pci ofed \ compile-with "${OFED_C}" dev/mthca/mthca_pd.c optional mthca pci ofed \ compile-with "${OFED_C}" dev/mthca/mthca_profile.c optional mthca pci ofed \ compile-with "${OFED_C}" dev/mthca/mthca_provider.c optional mthca pci ofed \ compile-with "${OFED_C}" dev/mthca/mthca_qp.c optional mthca pci ofed \ compile-with "${OFED_C}" dev/mthca/mthca_reset.c optional mthca pci ofed \ compile-with "${OFED_C}" dev/mthca/mthca_srq.c optional mthca pci ofed \ compile-with "${OFED_C}" dev/mthca/mthca_uar.c optional mthca pci ofed \ compile-with "${OFED_C}" dev/mlx4/mlx4_ib/mlx4_ib_alias_GUID.c optional mlx4ib pci ofed \ compile-with "${OFED_C}" dev/mlx4/mlx4_ib/mlx4_ib_mcg.c optional mlx4ib pci ofed \ compile-with "${OFED_C}" dev/mlx4/mlx4_ib/mlx4_ib_sysfs.c optional mlx4ib pci ofed \ compile-with "${OFED_C}" dev/mlx4/mlx4_ib/mlx4_ib_cm.c optional mlx4ib pci ofed \ compile-with "${OFED_C}" dev/mlx4/mlx4_ib/mlx4_ib_ah.c optional mlx4ib pci ofed \ compile-with "${OFED_C}" dev/mlx4/mlx4_ib/mlx4_ib_cq.c optional mlx4ib pci ofed \ compile-with "${OFED_C}" dev/mlx4/mlx4_ib/mlx4_ib_doorbell.c optional mlx4ib pci ofed \ compile-with "${OFED_C}" dev/mlx4/mlx4_ib/mlx4_ib_mad.c optional mlx4ib pci ofed \ compile-with "${OFED_C}" dev/mlx4/mlx4_ib/mlx4_ib_main.c optional mlx4ib pci ofed \ compile-with "${OFED_C}" dev/mlx4/mlx4_ib/mlx4_ib_mr.c optional mlx4ib pci ofed \ compile-with "${OFED_C}" dev/mlx4/mlx4_ib/mlx4_ib_qp.c optional mlx4ib pci ofed \ compile-with "${OFED_C}" dev/mlx4/mlx4_ib/mlx4_ib_srq.c optional mlx4ib pci ofed \ compile-with "${OFED_C}" dev/mlx4/mlx4_ib/mlx4_ib_wc.c optional mlx4ib pci ofed \ compile-with "${OFED_C}" dev/mlx4/mlx4_core/mlx4_alloc.c optional mlx4 pci \ compile-with "${OFED_C}" dev/mlx4/mlx4_core/mlx4_catas.c optional mlx4 pci \ compile-with "${OFED_C}" dev/mlx4/mlx4_core/mlx4_cmd.c optional mlx4 pci \ compile-with "${OFED_C}" dev/mlx4/mlx4_core/mlx4_cq.c optional mlx4 pci \ compile-with "${OFED_C}" dev/mlx4/mlx4_core/mlx4_eq.c optional mlx4 pci \ compile-with "${OFED_C}" dev/mlx4/mlx4_core/mlx4_fw.c optional mlx4 pci \ compile-with "${OFED_C}" dev/mlx4/mlx4_core/mlx4_fw_qos.c optional mlx4 pci \ compile-with "${OFED_C}" dev/mlx4/mlx4_core/mlx4_icm.c optional mlx4 pci \ compile-with "${OFED_C}" dev/mlx4/mlx4_core/mlx4_intf.c optional mlx4 pci \ compile-with "${OFED_C}" dev/mlx4/mlx4_core/mlx4_main.c optional mlx4 pci \ compile-with "${OFED_C}" 
dev/mlx4/mlx4_core/mlx4_mcg.c optional mlx4 pci \ compile-with "${OFED_C}" dev/mlx4/mlx4_core/mlx4_mr.c optional mlx4 pci \ compile-with "${OFED_C}" dev/mlx4/mlx4_core/mlx4_pd.c optional mlx4 pci \ compile-with "${OFED_C}" dev/mlx4/mlx4_core/mlx4_port.c optional mlx4 pci \ compile-with "${OFED_C}" dev/mlx4/mlx4_core/mlx4_profile.c optional mlx4 pci \ compile-with "${OFED_C}" dev/mlx4/mlx4_core/mlx4_qp.c optional mlx4 pci \ compile-with "${OFED_C}" dev/mlx4/mlx4_core/mlx4_reset.c optional mlx4 pci \ compile-with "${OFED_C}" dev/mlx4/mlx4_core/mlx4_sense.c optional mlx4 pci \ compile-with "${OFED_C}" dev/mlx4/mlx4_core/mlx4_srq.c optional mlx4 pci \ compile-with "${OFED_C}" dev/mlx4/mlx4_core/mlx4_resource_tracker.c optional mlx4 pci \ compile-with "${OFED_C}" dev/mlx4/mlx4_en/mlx4_en_cq.c optional mlx4en pci inet inet6 \ compile-with "${OFED_C}" dev/mlx4/mlx4_en/mlx4_en_main.c optional mlx4en pci inet inet6 \ compile-with "${OFED_C}" dev/mlx4/mlx4_en/mlx4_en_netdev.c optional mlx4en pci inet inet6 \ compile-with "${OFED_C}" dev/mlx4/mlx4_en/mlx4_en_port.c optional mlx4en pci inet inet6 \ compile-with "${OFED_C}" dev/mlx4/mlx4_en/mlx4_en_resources.c optional mlx4en pci inet inet6 \ compile-with "${OFED_C}" dev/mlx4/mlx4_en/mlx4_en_rx.c optional mlx4en pci inet inet6 \ compile-with "${OFED_C}" dev/mlx4/mlx4_en/mlx4_en_tx.c optional mlx4en pci inet inet6 \ compile-with "${OFED_C}" dev/mlx5/mlx5_ib/mlx5_ib_ah.c optional mlx5ib pci ofed \ compile-with "${OFED_C}" dev/mlx5/mlx5_ib/mlx5_ib_cong.c optional mlx5ib pci ofed \ compile-with "${OFED_C}" dev/mlx5/mlx5_ib/mlx5_ib_cq.c optional mlx5ib pci ofed \ compile-with "${OFED_C}" dev/mlx5/mlx5_ib/mlx5_ib_doorbell.c optional mlx5ib pci ofed \ compile-with "${OFED_C}" dev/mlx5/mlx5_ib/mlx5_ib_gsi.c optional mlx5ib pci ofed \ compile-with "${OFED_C}" dev/mlx5/mlx5_ib/mlx5_ib_mad.c optional mlx5ib pci ofed \ compile-with "${OFED_C}" dev/mlx5/mlx5_ib/mlx5_ib_main.c optional mlx5ib pci ofed \ compile-with "${OFED_C}" dev/mlx5/mlx5_ib/mlx5_ib_mem.c optional mlx5ib pci ofed \ compile-with "${OFED_C}" dev/mlx5/mlx5_ib/mlx5_ib_mr.c optional mlx5ib pci ofed \ compile-with "${OFED_C}" dev/mlx5/mlx5_ib/mlx5_ib_qp.c optional mlx5ib pci ofed \ compile-with "${OFED_C}" dev/mlx5/mlx5_ib/mlx5_ib_srq.c optional mlx5ib pci ofed \ compile-with "${OFED_C}" dev/mlx5/mlx5_ib/mlx5_ib_virt.c optional mlx5ib pci ofed \ compile-with "${OFED_C}" dev/mlx5/mlx5_core/mlx5_alloc.c optional mlx5 pci \ compile-with "${OFED_C}" dev/mlx5/mlx5_core/mlx5_cmd.c optional mlx5 pci \ compile-with "${OFED_C}" dev/mlx5/mlx5_core/mlx5_cq.c optional mlx5 pci \ compile-with "${OFED_C}" dev/mlx5/mlx5_core/mlx5_diagnostics.c optional mlx5 pci \ compile-with "${OFED_C}" dev/mlx5/mlx5_core/mlx5_eq.c optional mlx5 pci \ compile-with "${OFED_C}" dev/mlx5/mlx5_core/mlx5_fs_cmd.c optional mlx5 pci \ compile-with "${OFED_C}" dev/mlx5/mlx5_core/mlx5_fs_tree.c optional mlx5 pci \ compile-with "${OFED_C}" dev/mlx5/mlx5_core/mlx5_fw.c optional mlx5 pci \ compile-with "${OFED_C}" dev/mlx5/mlx5_core/mlx5_fwdump.c optional mlx5 pci \ compile-with "${OFED_C}" dev/mlx5/mlx5_core/mlx5_fwdump_regmaps.c optional mlx5 pci \ compile-with "${OFED_C}" dev/mlx5/mlx5_core/mlx5_health.c optional mlx5 pci \ compile-with "${OFED_C}" dev/mlx5/mlx5_core/mlx5_mad.c optional mlx5 pci \ compile-with "${OFED_C}" dev/mlx5/mlx5_core/mlx5_main.c optional mlx5 pci \ compile-with "${OFED_C}" dev/mlx5/mlx5_core/mlx5_mcg.c optional mlx5 pci \ compile-with "${OFED_C}" dev/mlx5/mlx5_core/mlx5_mr.c optional mlx5 pci \ compile-with 
"${OFED_C}" dev/mlx5/mlx5_core/mlx5_pagealloc.c optional mlx5 pci \ compile-with "${OFED_C}" dev/mlx5/mlx5_core/mlx5_pd.c optional mlx5 pci \ compile-with "${OFED_C}" dev/mlx5/mlx5_core/mlx5_port.c optional mlx5 pci \ compile-with "${OFED_C}" dev/mlx5/mlx5_core/mlx5_qp.c optional mlx5 pci \ compile-with "${OFED_C}" dev/mlx5/mlx5_core/mlx5_rl.c optional mlx5 pci \ compile-with "${OFED_C}" dev/mlx5/mlx5_core/mlx5_srq.c optional mlx5 pci \ compile-with "${OFED_C}" dev/mlx5/mlx5_core/mlx5_transobj.c optional mlx5 pci \ compile-with "${OFED_C}" dev/mlx5/mlx5_core/mlx5_uar.c optional mlx5 pci \ compile-with "${OFED_C}" dev/mlx5/mlx5_core/mlx5_vport.c optional mlx5 pci \ compile-with "${OFED_C}" dev/mlx5/mlx5_core/mlx5_vsc.c optional mlx5 pci \ compile-with "${OFED_C}" dev/mlx5/mlx5_core/mlx5_wq.c optional mlx5 pci \ compile-with "${OFED_C}" dev/mlx5/mlx5_lib/mlx5_gid.c optional mlx5 pci \ compile-with "${OFED_C}" dev/mlx5/mlx5_en/mlx5_en_ethtool.c optional mlx5en pci inet inet6 \ compile-with "${OFED_C}" dev/mlx5/mlx5_en/mlx5_en_main.c optional mlx5en pci inet inet6 \ compile-with "${OFED_C}" dev/mlx5/mlx5_en/mlx5_en_tx.c optional mlx5en pci inet inet6 \ compile-with "${OFED_C}" dev/mlx5/mlx5_en/mlx5_en_flow_table.c optional mlx5en pci inet inet6 \ compile-with "${OFED_C}" dev/mlx5/mlx5_en/mlx5_en_rx.c optional mlx5en pci inet inet6 \ compile-with "${OFED_C}" dev/mlx5/mlx5_en/mlx5_en_rl.c optional mlx5en pci inet inet6 \ compile-with "${OFED_C}" dev/mlx5/mlx5_en/mlx5_en_txrx.c optional mlx5en pci inet inet6 \ compile-with "${OFED_C}" # crypto support opencrypto/cast.c optional crypto | ipsec | ipsec_support opencrypto/criov.c optional crypto | ipsec | ipsec_support opencrypto/crypto.c optional crypto | ipsec | ipsec_support opencrypto/cryptodev.c optional cryptodev opencrypto/cryptodev_if.m optional crypto | ipsec | ipsec_support opencrypto/cryptosoft.c optional crypto | ipsec | ipsec_support opencrypto/cryptodeflate.c optional crypto | ipsec | ipsec_support opencrypto/gmac.c optional crypto | ipsec | ipsec_support opencrypto/gfmult.c optional crypto | ipsec | ipsec_support opencrypto/rmd160.c optional crypto | ipsec | ipsec_support opencrypto/skipjack.c optional crypto | ipsec | ipsec_support opencrypto/xform.c optional crypto | ipsec | ipsec_support opencrypto/xform_poly1305.c optional crypto \ compile-with "${NORMAL_C} -I$S/contrib/libsodium/src/libsodium/include -I$S/crypto/libsodium" contrib/libsodium/src/libsodium/crypto_onetimeauth/poly1305/onetimeauth_poly1305.c \ optional crypto \ compile-with "${NORMAL_C} -I$S/contrib/libsodium/src/libsodium/include/sodium -I$S/crypto/libsodium" contrib/libsodium/src/libsodium/crypto_onetimeauth/poly1305/donna/poly1305_donna.c \ optional crypto \ compile-with "${NORMAL_C} -I$S/contrib/libsodium/src/libsodium/include/sodium -I$S/crypto/libsodium" contrib/libsodium/src/libsodium/crypto_verify/sodium/verify.c \ optional crypto \ compile-with "${NORMAL_C} -I$S/contrib/libsodium/src/libsodium/include/sodium -I$S/crypto/libsodium" crypto/libsodium/randombytes.c optional crypto \ compile-with "${NORMAL_C} -I$S/contrib/libsodium/src/libsodium/include -I$S/crypto/libsodium" crypto/libsodium/utils.c optional crypto \ compile-with "${NORMAL_C} -I$S/contrib/libsodium/src/libsodium/include -I$S/crypto/libsodium" +opencrypto/cbc_mac.c optional crypto +opencrypto/xform_cbc_mac.c optional crypto rpc/auth_none.c optional krpc | nfslockd | nfscl | nfsd rpc/auth_unix.c optional krpc | nfslockd | nfscl | nfsd rpc/authunix_prot.c optional krpc | nfslockd | nfscl | nfsd 
rpc/clnt_bck.c optional krpc | nfslockd | nfscl | nfsd rpc/clnt_dg.c optional krpc | nfslockd | nfscl | nfsd rpc/clnt_rc.c optional krpc | nfslockd | nfscl | nfsd rpc/clnt_vc.c optional krpc | nfslockd | nfscl | nfsd rpc/getnetconfig.c optional krpc | nfslockd | nfscl | nfsd rpc/replay.c optional krpc | nfslockd | nfscl | nfsd rpc/rpc_callmsg.c optional krpc | nfslockd | nfscl | nfsd rpc/rpc_generic.c optional krpc | nfslockd | nfscl | nfsd rpc/rpc_prot.c optional krpc | nfslockd | nfscl | nfsd rpc/rpcb_clnt.c optional krpc | nfslockd | nfscl | nfsd rpc/rpcb_prot.c optional krpc | nfslockd | nfscl | nfsd rpc/svc.c optional krpc | nfslockd | nfscl | nfsd rpc/svc_auth.c optional krpc | nfslockd | nfscl | nfsd rpc/svc_auth_unix.c optional krpc | nfslockd | nfscl | nfsd rpc/svc_dg.c optional krpc | nfslockd | nfscl | nfsd rpc/svc_generic.c optional krpc | nfslockd | nfscl | nfsd rpc/svc_vc.c optional krpc | nfslockd | nfscl | nfsd rpc/rpcsec_gss/rpcsec_gss.c optional krpc kgssapi | nfslockd kgssapi | nfscl kgssapi | nfsd kgssapi rpc/rpcsec_gss/rpcsec_gss_conf.c optional krpc kgssapi | nfslockd kgssapi | nfscl kgssapi | nfsd kgssapi rpc/rpcsec_gss/rpcsec_gss_misc.c optional krpc kgssapi | nfslockd kgssapi | nfscl kgssapi | nfsd kgssapi rpc/rpcsec_gss/rpcsec_gss_prot.c optional krpc kgssapi | nfslockd kgssapi | nfscl kgssapi | nfsd kgssapi rpc/rpcsec_gss/svc_rpcsec_gss.c optional krpc kgssapi | nfslockd kgssapi | nfscl kgssapi | nfsd kgssapi security/audit/audit.c optional audit security/audit/audit_arg.c optional audit security/audit/audit_bsm.c optional audit security/audit/audit_bsm_db.c optional audit security/audit/audit_bsm_klib.c optional audit security/audit/audit_dtrace.c optional dtaudit audit | dtraceall audit compile-with "${CDDL_C}" security/audit/audit_pipe.c optional audit security/audit/audit_syscalls.c standard security/audit/audit_trigger.c optional audit security/audit/audit_worker.c optional audit security/audit/bsm_domain.c optional audit security/audit/bsm_errno.c optional audit security/audit/bsm_fcntl.c optional audit security/audit/bsm_socket_type.c optional audit security/audit/bsm_token.c optional audit security/mac/mac_audit.c optional mac audit security/mac/mac_cred.c optional mac security/mac/mac_framework.c optional mac security/mac/mac_inet.c optional mac inet | mac inet6 security/mac/mac_inet6.c optional mac inet6 security/mac/mac_label.c optional mac security/mac/mac_net.c optional mac security/mac/mac_pipe.c optional mac security/mac/mac_posix_sem.c optional mac security/mac/mac_posix_shm.c optional mac security/mac/mac_priv.c optional mac security/mac/mac_process.c optional mac security/mac/mac_socket.c optional mac security/mac/mac_syscalls.c standard security/mac/mac_system.c optional mac security/mac/mac_sysv_msg.c optional mac security/mac/mac_sysv_sem.c optional mac security/mac/mac_sysv_shm.c optional mac security/mac/mac_vfs.c optional mac security/mac_biba/mac_biba.c optional mac_biba security/mac_bsdextended/mac_bsdextended.c optional mac_bsdextended security/mac_bsdextended/ugidfw_system.c optional mac_bsdextended security/mac_bsdextended/ugidfw_vnode.c optional mac_bsdextended security/mac_ifoff/mac_ifoff.c optional mac_ifoff security/mac_lomac/mac_lomac.c optional mac_lomac security/mac_mls/mac_mls.c optional mac_mls security/mac_none/mac_none.c optional mac_none security/mac_ntpd/mac_ntpd.c optional mac_ntpd security/mac_partition/mac_partition.c optional mac_partition security/mac_portacl/mac_portacl.c optional mac_portacl 
security/mac_seeotheruids/mac_seeotheruids.c optional mac_seeotheruids security/mac_stub/mac_stub.c optional mac_stub security/mac_test/mac_test.c optional mac_test security/mac_veriexec/mac_veriexec.c optional mac_veriexec security/mac_veriexec/veriexec_fingerprint.c optional mac_veriexec security/mac_veriexec/veriexec_metadata.c optional mac_veriexec security/mac_veriexec/mac_veriexec_rmd160.c optional mac_veriexec_rmd160 security/mac_veriexec/mac_veriexec_sha1.c optional mac_veriexec_sha1 security/mac_veriexec/mac_veriexec_sha256.c optional mac_veriexec_sha256 security/mac_veriexec/mac_veriexec_sha384.c optional mac_veriexec_sha384 security/mac_veriexec/mac_veriexec_sha512.c optional mac_veriexec_sha512 teken/teken.c optional sc | vt ufs/ffs/ffs_alloc.c optional ffs ufs/ffs/ffs_balloc.c optional ffs ufs/ffs/ffs_inode.c optional ffs ufs/ffs/ffs_snapshot.c optional ffs ufs/ffs/ffs_softdep.c optional ffs ufs/ffs/ffs_subr.c optional ffs | geom_label ufs/ffs/ffs_tables.c optional ffs | geom_label ufs/ffs/ffs_vfsops.c optional ffs ufs/ffs/ffs_vnops.c optional ffs ufs/ffs/ffs_rawread.c optional ffs directio ufs/ffs/ffs_suspend.c optional ffs ufs/ufs/ufs_acl.c optional ffs ufs/ufs/ufs_bmap.c optional ffs ufs/ufs/ufs_dirhash.c optional ffs ufs/ufs/ufs_extattr.c optional ffs ufs/ufs/ufs_gjournal.c optional ffs UFS_GJOURNAL ufs/ufs/ufs_inode.c optional ffs ufs/ufs/ufs_lookup.c optional ffs ufs/ufs/ufs_quota.c optional ffs ufs/ufs/ufs_vfsops.c optional ffs ufs/ufs/ufs_vnops.c optional ffs vm/default_pager.c standard vm/device_pager.c standard vm/phys_pager.c standard vm/redzone.c optional DEBUG_REDZONE vm/sg_pager.c standard vm/swap_pager.c standard vm/uma_core.c standard vm/uma_dbg.c standard vm/memguard.c optional DEBUG_MEMGUARD vm/vm_domainset.c standard vm/vm_fault.c standard vm/vm_glue.c standard vm/vm_init.c standard vm/vm_kern.c standard vm/vm_map.c standard vm/vm_meter.c standard vm/vm_mmap.c standard vm/vm_object.c standard vm/vm_page.c standard vm/vm_pageout.c standard vm/vm_pager.c standard vm/vm_phys.c standard vm/vm_radix.c standard vm/vm_reserv.c standard vm/vm_swapout.c optional !NO_SWAPPING vm/vm_swapout_dummy.c optional NO_SWAPPING vm/vm_unix.c standard vm/vnode_pager.c standard xen/features.c optional xenhvm xen/xenbus/xenbus_if.m optional xenhvm xen/xenbus/xenbus.c optional xenhvm xen/xenbus/xenbusb_if.m optional xenhvm xen/xenbus/xenbusb.c optional xenhvm xen/xenbus/xenbusb_front.c optional xenhvm xen/xenbus/xenbusb_back.c optional xenhvm xen/xenmem/xenmem_if.m optional xenhvm xdr/xdr.c optional krpc | nfslockd | nfscl | nfsd xdr/xdr_array.c optional krpc | nfslockd | nfscl | nfsd xdr/xdr_mbuf.c optional krpc | nfslockd | nfscl | nfsd xdr/xdr_mem.c optional krpc | nfslockd | nfscl | nfsd xdr/xdr_reference.c optional krpc | nfslockd | nfscl | nfsd xdr/xdr_sizeof.c optional krpc | nfslockd | nfscl | nfsd Index: projects/import-googletest-1.8.1/sys/conf/ldscript.riscv =================================================================== --- projects/import-googletest-1.8.1/sys/conf/ldscript.riscv (revision 344270) +++ projects/import-googletest-1.8.1/sys/conf/ldscript.riscv (revision 344271) @@ -1,135 +1,139 @@ /* $FreeBSD$ */ OUTPUT_ARCH(riscv) ENTRY(_start) SEARCH_DIR(/usr/lib); SECTIONS { /* Read-only sections, merged into text segment: */ . = kernbase; .text : AT(ADDR(.text) - kernbase) { *(.text) *(.stub) /* .gnu.warning sections are handled specially by elf32.em. 
*/ *(.gnu.warning) *(.gnu.linkonce.t*) } =0x9090 _etext = .; PROVIDE (etext = .); .fini : { *(.fini) } =0x9090 .rodata : { *(.rodata) *(.gnu.linkonce.r*) } .rodata1 : { *(.rodata1) } .interp : { *(.interp) } .hash : { *(.hash) } .dynsym : { *(.dynsym) } .dynstr : { *(.dynstr) } .gnu.version : { *(.gnu.version) } .gnu.version_d : { *(.gnu.version_d) } .gnu.version_r : { *(.gnu.version_r) } .rel.text : { *(.rel.text) *(.rel.gnu.linkonce.t*) } .rela.text : { *(.rela.text) *(.rela.gnu.linkonce.t*) } .rel.data : { *(.rel.data) *(.rel.gnu.linkonce.d*) } .rela.data : { *(.rela.data) *(.rela.gnu.linkonce.d*) } .rel.rodata : { *(.rel.rodata) *(.rel.gnu.linkonce.r*) } .rela.rodata : { *(.rela.rodata) *(.rela.gnu.linkonce.r*) } .rel.got : { *(.rel.got) } .rela.got : { *(.rela.got) } .rel.ctors : { *(.rel.ctors) } .rela.ctors : { *(.rela.ctors) } .rel.dtors : { *(.rel.dtors) } .rela.dtors : { *(.rela.dtors) } .rel.init : { *(.rel.init) } .rela.init : { *(.rela.init) } .rel.fini : { *(.rel.fini) } .rela.fini : { *(.rela.fini) } .rel.bss : { *(.rel.bss) } .rela.bss : { *(.rela.bss) } .rel.plt : { *(.rel.plt) } .rela.plt : { *(.rela.plt) } .init : { *(.init) } =0x9090 .plt : { *(.plt) } /* Adjust the address for the data segment. We want to adjust up to the same address within the page on the next page up. */ . = ALIGN(0x1000) + (. & (0x1000 - 1)) ; .data : { *(.data) *(.gnu.linkonce.d*) } .data1 : { *(.data1) } . = ALIGN(32 / 8); _start_ctors = .; PROVIDE (start_ctors = .); .ctors : { *(.ctors) } _stop_ctors = .; PROVIDE (stop_ctors = .); .dtors : { *(.dtors) } .got : { *(.got.plt) *(.got) } .dynamic : { *(.dynamic) } /* We want the small data sections together, so single-instruction offsets can access them all, and initialized data all before uninitialized, so we can shorten the on-disk segment size. */ . = ALIGN(8); .sdata : { *(.sdata) } _edata = .; PROVIDE (edata = .); + /* Ensure __bss_start is associated with the next section in case orphan + sections are placed directly after .sdata, as has been seen to happen with + LLD. */ + . = .; __bss_start = .; .sbss : { *(.sbss) *(.scommon) } .bss : { *(.dynbss) *(.bss) *(COMMON) } . = ALIGN(8); _end = . ; PROVIDE (end = .); /* Stabs debugging sections. */ .stab 0 : { *(.stab) } .stabstr 0 : { *(.stabstr) } .stab.excl 0 : { *(.stab.excl) } .stab.exclstr 0 : { *(.stab.exclstr) } .stab.index 0 : { *(.stab.index) } .stab.indexstr 0 : { *(.stab.indexstr) } .comment 0 : { *(.comment) } /* DWARF debug sections. Symbols in the DWARF debugging sections are relative to the beginning of the section so we begin them at 0. */ /* DWARF 1 */ .debug 0 : { *(.debug) } .line 0 : { *(.line) } /* GNU DWARF 1 extensions */ .debug_srcinfo 0 : { *(.debug_srcinfo) } .debug_sfnames 0 : { *(.debug_sfnames) } /* DWARF 1.1 and DWARF 2 */ .debug_aranges 0 : { *(.debug_aranges) } .debug_pubnames 0 : { *(.debug_pubnames) } /* DWARF 2 */ .debug_info 0 : { *(.debug_info) } .debug_abbrev 0 : { *(.debug_abbrev) } .debug_line 0 : { *(.debug_line) } .debug_frame 0 : { *(.debug_frame) } .debug_str 0 : { *(.debug_str) } .debug_loc 0 : { *(.debug_loc) } .debug_macinfo 0 : { *(.debug_macinfo) } /* SGI/MIPS DWARF 2 extensions */ .debug_weaknames 0 : { *(.debug_weaknames) } .debug_funcnames 0 : { *(.debug_funcnames) } .debug_typenames 0 : { *(.debug_typenames) } .debug_varnames 0 : { *(.debug_varnames) } /* These must appear regardless of . 
*/ } Index: projects/import-googletest-1.8.1/sys/contrib/libnv/nvpair.c =================================================================== --- projects/import-googletest-1.8.1/sys/contrib/libnv/nvpair.c (revision 344270) +++ projects/import-googletest-1.8.1/sys/contrib/libnv/nvpair.c (revision 344271) @@ -1,2142 +1,2134 @@ /*- * SPDX-License-Identifier: BSD-2-Clause-FreeBSD * * Copyright (c) 2009-2013 The FreeBSD Foundation * Copyright (c) 2013-2015 Mariusz Zaborski * All rights reserved. * * This software was developed by Pawel Jakub Dawidek under sponsorship from * the FreeBSD Foundation. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHORS AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHORS OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ #include __FBSDID("$FreeBSD$"); #include #include #include #ifdef _KERNEL #include #include #include #include #include #else #include #include #include #include #include #include #include #include #include "common_impl.h" #endif #ifdef HAVE_PJDLOG #include #endif #include #include "nv_impl.h" #include "nvlist_impl.h" #include "nvpair_impl.h" #ifndef HAVE_PJDLOG #ifdef _KERNEL #define PJDLOG_ASSERT(...) MPASS(__VA_ARGS__) #define PJDLOG_RASSERT(expr, ...) KASSERT(expr, (__VA_ARGS__)) #define PJDLOG_ABORT(...) panic(__VA_ARGS__) #else #include #define PJDLOG_ASSERT(...) assert(__VA_ARGS__) #define PJDLOG_RASSERT(expr, ...) assert(expr) #define PJDLOG_ABORT(...) abort() #endif #endif #define NVPAIR_MAGIC 0x6e7670 /* "nvp" */ struct nvpair { int nvp_magic; char *nvp_name; int nvp_type; uint64_t nvp_data; size_t nvp_datasize; size_t nvp_nitems; /* Used only for array types. 
*/ nvlist_t *nvp_list; TAILQ_ENTRY(nvpair) nvp_next; }; #define NVPAIR_ASSERT(nvp) do { \ PJDLOG_ASSERT((nvp) != NULL); \ PJDLOG_ASSERT((nvp)->nvp_magic == NVPAIR_MAGIC); \ } while (0) struct nvpair_header { uint8_t nvph_type; uint16_t nvph_namesize; uint64_t nvph_datasize; uint64_t nvph_nitems; } __packed; void nvpair_assert(const nvpair_t *nvp) { NVPAIR_ASSERT(nvp); } static nvpair_t * nvpair_allocv(const char *name, int type, uint64_t data, size_t datasize, size_t nitems) { nvpair_t *nvp; size_t namelen; PJDLOG_ASSERT(type >= NV_TYPE_FIRST && type <= NV_TYPE_LAST); namelen = strlen(name); if (namelen >= NV_NAME_MAX) { ERRNO_SET(ENAMETOOLONG); return (NULL); } nvp = nv_calloc(1, sizeof(*nvp) + namelen + 1); if (nvp != NULL) { nvp->nvp_name = (char *)(nvp + 1); memcpy(nvp->nvp_name, name, namelen); nvp->nvp_name[namelen] = '\0'; nvp->nvp_type = type; nvp->nvp_data = data; nvp->nvp_datasize = datasize; nvp->nvp_nitems = nitems; nvp->nvp_magic = NVPAIR_MAGIC; } return (nvp); } static int nvpair_append(nvpair_t *nvp, const void *value, size_t valsize, size_t datasize) { void *olddata, *data, *valp; size_t oldlen; oldlen = nvp->nvp_nitems * valsize; olddata = (void *)(uintptr_t)nvp->nvp_data; data = nv_realloc(olddata, oldlen + valsize); if (data == NULL) { ERRNO_SET(ENOMEM); return (-1); } valp = (unsigned char *)data + oldlen; memcpy(valp, value, valsize); nvp->nvp_data = (uint64_t)(uintptr_t)data; nvp->nvp_datasize += datasize; nvp->nvp_nitems++; return (0); } nvlist_t * nvpair_nvlist(const nvpair_t *nvp) { NVPAIR_ASSERT(nvp); return (nvp->nvp_list); } nvpair_t * nvpair_next(const nvpair_t *nvp) { NVPAIR_ASSERT(nvp); PJDLOG_ASSERT(nvp->nvp_list != NULL); return (TAILQ_NEXT(nvp, nvp_next)); } nvpair_t * nvpair_prev(const nvpair_t *nvp) { NVPAIR_ASSERT(nvp); PJDLOG_ASSERT(nvp->nvp_list != NULL); return (TAILQ_PREV(nvp, nvl_head, nvp_next)); } void nvpair_insert(struct nvl_head *head, nvpair_t *nvp, nvlist_t *nvl) { NVPAIR_ASSERT(nvp); PJDLOG_ASSERT(nvp->nvp_list == NULL); PJDLOG_ASSERT((nvlist_flags(nvl) & NV_FLAG_NO_UNIQUE) != 0 || !nvlist_exists(nvl, nvpair_name(nvp))); TAILQ_INSERT_TAIL(head, nvp, nvp_next); nvp->nvp_list = nvl; } static void nvpair_remove_nvlist(nvpair_t *nvp) { nvlist_t *nvl; /* XXX: DECONST is bad, mkay? */ nvl = __DECONST(nvlist_t *, nvpair_get_nvlist(nvp)); PJDLOG_ASSERT(nvl != NULL); nvlist_set_parent(nvl, NULL); } static void nvpair_remove_nvlist_array(nvpair_t *nvp) { nvlist_t **nvlarray; size_t count, i; /* XXX: DECONST is bad, mkay? 
*/ nvlarray = __DECONST(nvlist_t **, nvpair_get_nvlist_array(nvp, &count)); for (i = 0; i < count; i++) { - nvlist_t *nvl; - nvpair_t *nnvp; - - nvl = nvlarray[i]; - nnvp = nvlist_get_array_next_nvpair(nvl); - if (nnvp != NULL) { - nvpair_free_structure(nnvp); - } - nvlist_set_array_next(nvl, NULL); - nvlist_set_parent(nvl, NULL); + nvlist_set_array_next(nvlarray[i], NULL); + nvlist_set_parent(nvlarray[i], NULL); } } void nvpair_remove(struct nvl_head *head, nvpair_t *nvp, const nvlist_t *nvl) { NVPAIR_ASSERT(nvp); PJDLOG_ASSERT(nvp->nvp_list == nvl); if (nvpair_type(nvp) == NV_TYPE_NVLIST) nvpair_remove_nvlist(nvp); else if (nvpair_type(nvp) == NV_TYPE_NVLIST_ARRAY) nvpair_remove_nvlist_array(nvp); TAILQ_REMOVE(head, nvp, nvp_next); nvp->nvp_list = NULL; } nvpair_t * nvpair_clone(const nvpair_t *nvp) { nvpair_t *newnvp; const char *name; const void *data; size_t datasize; NVPAIR_ASSERT(nvp); name = nvpair_name(nvp); switch (nvpair_type(nvp)) { case NV_TYPE_NULL: newnvp = nvpair_create_null(name); break; case NV_TYPE_BOOL: newnvp = nvpair_create_bool(name, nvpair_get_bool(nvp)); break; case NV_TYPE_NUMBER: newnvp = nvpair_create_number(name, nvpair_get_number(nvp)); break; case NV_TYPE_STRING: newnvp = nvpair_create_string(name, nvpair_get_string(nvp)); break; case NV_TYPE_NVLIST: newnvp = nvpair_create_nvlist(name, nvpair_get_nvlist(nvp)); break; case NV_TYPE_BINARY: data = nvpair_get_binary(nvp, &datasize); newnvp = nvpair_create_binary(name, data, datasize); break; case NV_TYPE_BOOL_ARRAY: data = nvpair_get_bool_array(nvp, &datasize); newnvp = nvpair_create_bool_array(name, data, datasize); break; case NV_TYPE_NUMBER_ARRAY: data = nvpair_get_number_array(nvp, &datasize); newnvp = nvpair_create_number_array(name, data, datasize); break; case NV_TYPE_STRING_ARRAY: data = nvpair_get_string_array(nvp, &datasize); newnvp = nvpair_create_string_array(name, data, datasize); break; case NV_TYPE_NVLIST_ARRAY: data = nvpair_get_nvlist_array(nvp, &datasize); newnvp = nvpair_create_nvlist_array(name, data, datasize); break; #ifndef _KERNEL case NV_TYPE_DESCRIPTOR: newnvp = nvpair_create_descriptor(name, nvpair_get_descriptor(nvp)); break; case NV_TYPE_DESCRIPTOR_ARRAY: data = nvpair_get_descriptor_array(nvp, &datasize); newnvp = nvpair_create_descriptor_array(name, data, datasize); break; #endif default: PJDLOG_ABORT("Unknown type: %d.", nvpair_type(nvp)); } return (newnvp); } size_t nvpair_header_size(void) { return (sizeof(struct nvpair_header)); } size_t nvpair_size(const nvpair_t *nvp) { NVPAIR_ASSERT(nvp); return (nvp->nvp_datasize); } unsigned char * nvpair_pack_header(const nvpair_t *nvp, unsigned char *ptr, size_t *leftp) { struct nvpair_header nvphdr; size_t namesize; NVPAIR_ASSERT(nvp); nvphdr.nvph_type = nvp->nvp_type; namesize = strlen(nvp->nvp_name) + 1; PJDLOG_ASSERT(namesize > 0 && namesize <= UINT16_MAX); nvphdr.nvph_namesize = namesize; nvphdr.nvph_datasize = nvp->nvp_datasize; nvphdr.nvph_nitems = nvp->nvp_nitems; PJDLOG_ASSERT(*leftp >= sizeof(nvphdr)); memcpy(ptr, &nvphdr, sizeof(nvphdr)); ptr += sizeof(nvphdr); *leftp -= sizeof(nvphdr); PJDLOG_ASSERT(*leftp >= namesize); memcpy(ptr, nvp->nvp_name, namesize); ptr += namesize; *leftp -= namesize; return (ptr); } unsigned char * nvpair_pack_null(const nvpair_t *nvp, unsigned char *ptr, size_t *leftp __unused) { NVPAIR_ASSERT(nvp); PJDLOG_ASSERT(nvp->nvp_type == NV_TYPE_NULL); return (ptr); } unsigned char * nvpair_pack_bool(const nvpair_t *nvp, unsigned char *ptr, size_t *leftp) { uint8_t value; NVPAIR_ASSERT(nvp); 
PJDLOG_ASSERT(nvp->nvp_type == NV_TYPE_BOOL); value = (uint8_t)nvp->nvp_data; PJDLOG_ASSERT(*leftp >= sizeof(value)); memcpy(ptr, &value, sizeof(value)); ptr += sizeof(value); *leftp -= sizeof(value); return (ptr); } unsigned char * nvpair_pack_number(const nvpair_t *nvp, unsigned char *ptr, size_t *leftp) { uint64_t value; NVPAIR_ASSERT(nvp); PJDLOG_ASSERT(nvp->nvp_type == NV_TYPE_NUMBER); value = (uint64_t)nvp->nvp_data; PJDLOG_ASSERT(*leftp >= sizeof(value)); memcpy(ptr, &value, sizeof(value)); ptr += sizeof(value); *leftp -= sizeof(value); return (ptr); } unsigned char * nvpair_pack_string(const nvpair_t *nvp, unsigned char *ptr, size_t *leftp) { NVPAIR_ASSERT(nvp); PJDLOG_ASSERT(nvp->nvp_type == NV_TYPE_STRING); PJDLOG_ASSERT(*leftp >= nvp->nvp_datasize); memcpy(ptr, (const void *)(intptr_t)nvp->nvp_data, nvp->nvp_datasize); ptr += nvp->nvp_datasize; *leftp -= nvp->nvp_datasize; return (ptr); } unsigned char * nvpair_pack_nvlist_up(unsigned char *ptr, size_t *leftp) { struct nvpair_header nvphdr; size_t namesize; const char *name = ""; namesize = 1; nvphdr.nvph_type = NV_TYPE_NVLIST_UP; nvphdr.nvph_namesize = namesize; nvphdr.nvph_datasize = 0; nvphdr.nvph_nitems = 0; PJDLOG_ASSERT(*leftp >= sizeof(nvphdr)); memcpy(ptr, &nvphdr, sizeof(nvphdr)); ptr += sizeof(nvphdr); *leftp -= sizeof(nvphdr); PJDLOG_ASSERT(*leftp >= namesize); memcpy(ptr, name, namesize); ptr += namesize; *leftp -= namesize; return (ptr); } unsigned char * nvpair_pack_nvlist_array_next(unsigned char *ptr, size_t *leftp) { struct nvpair_header nvphdr; size_t namesize; const char *name = ""; namesize = 1; nvphdr.nvph_type = NV_TYPE_NVLIST_ARRAY_NEXT; nvphdr.nvph_namesize = namesize; nvphdr.nvph_datasize = 0; nvphdr.nvph_nitems = 0; PJDLOG_ASSERT(*leftp >= sizeof(nvphdr)); memcpy(ptr, &nvphdr, sizeof(nvphdr)); ptr += sizeof(nvphdr); *leftp -= sizeof(nvphdr); PJDLOG_ASSERT(*leftp >= namesize); memcpy(ptr, name, namesize); ptr += namesize; *leftp -= namesize; return (ptr); } #ifndef _KERNEL unsigned char * nvpair_pack_descriptor(const nvpair_t *nvp, unsigned char *ptr, int64_t *fdidxp, size_t *leftp) { int64_t value; NVPAIR_ASSERT(nvp); PJDLOG_ASSERT(nvp->nvp_type == NV_TYPE_DESCRIPTOR); value = (int64_t)nvp->nvp_data; if (value != -1) { /* * If there is a real descriptor here, we change its number * to its position in the array of descriptors sent via the control * message. */
PJDLOG_ASSERT(fdidxp != NULL); value = *fdidxp; (*fdidxp)++; } PJDLOG_ASSERT(*leftp >= sizeof(value)); memcpy(ptr, &value, sizeof(value)); ptr += sizeof(value); *leftp -= sizeof(value); return (ptr); } #endif unsigned char * nvpair_pack_binary(const nvpair_t *nvp, unsigned char *ptr, size_t *leftp) { NVPAIR_ASSERT(nvp); PJDLOG_ASSERT(nvp->nvp_type == NV_TYPE_BINARY); PJDLOG_ASSERT(*leftp >= nvp->nvp_datasize); memcpy(ptr, (const void *)(intptr_t)nvp->nvp_data, nvp->nvp_datasize); ptr += nvp->nvp_datasize; *leftp -= nvp->nvp_datasize; return (ptr); } unsigned char * nvpair_pack_bool_array(const nvpair_t *nvp, unsigned char *ptr, size_t *leftp) { NVPAIR_ASSERT(nvp); PJDLOG_ASSERT(nvp->nvp_type == NV_TYPE_BOOL_ARRAY); PJDLOG_ASSERT(*leftp >= nvp->nvp_datasize); memcpy(ptr, (const void *)(intptr_t)nvp->nvp_data, nvp->nvp_datasize); ptr += nvp->nvp_datasize; *leftp -= nvp->nvp_datasize; return (ptr); } unsigned char * nvpair_pack_number_array(const nvpair_t *nvp, unsigned char *ptr, size_t *leftp) { NVPAIR_ASSERT(nvp); PJDLOG_ASSERT(nvp->nvp_type == NV_TYPE_NUMBER_ARRAY); PJDLOG_ASSERT(*leftp >= nvp->nvp_datasize); memcpy(ptr, (const void *)(intptr_t)nvp->nvp_data, nvp->nvp_datasize); ptr += nvp->nvp_datasize; *leftp -= nvp->nvp_datasize; return (ptr); } unsigned char * nvpair_pack_string_array(const nvpair_t *nvp, unsigned char *ptr, size_t *leftp) { unsigned int ii; size_t size, len; const char * const *array; NVPAIR_ASSERT(nvp); PJDLOG_ASSERT(nvp->nvp_type == NV_TYPE_STRING_ARRAY); PJDLOG_ASSERT(*leftp >= nvp->nvp_datasize); size = 0; array = nvpair_get_string_array(nvp, NULL); PJDLOG_ASSERT(array != NULL); for (ii = 0; ii < nvp->nvp_nitems; ii++) { len = strlen(array[ii]) + 1; PJDLOG_ASSERT(*leftp >= len); memcpy(ptr, (const void *)array[ii], len); size += len; ptr += len; *leftp -= len; } PJDLOG_ASSERT(size == nvp->nvp_datasize); return (ptr); } #ifndef _KERNEL unsigned char * nvpair_pack_descriptor_array(const nvpair_t *nvp, unsigned char *ptr, int64_t *fdidxp, size_t *leftp) { int64_t value; const int *array; unsigned int ii; NVPAIR_ASSERT(nvp); PJDLOG_ASSERT(nvp->nvp_type == NV_TYPE_DESCRIPTOR_ARRAY); PJDLOG_ASSERT(*leftp >= nvp->nvp_datasize); array = nvpair_get_descriptor_array(nvp, NULL); PJDLOG_ASSERT(array != NULL); for (ii = 0; ii < nvp->nvp_nitems; ii++) { PJDLOG_ASSERT(*leftp >= sizeof(value)); value = array[ii]; if (value != -1) { /* * If there is a real descriptor here, we change its * number to its position in the array of descriptors sent * via the control message.
*/ PJDLOG_ASSERT(fdidxp != NULL); value = *fdidxp; (*fdidxp)++; } memcpy(ptr, &value, sizeof(value)); ptr += sizeof(value); *leftp -= sizeof(value); } return (ptr); } #endif void nvpair_init_datasize(nvpair_t *nvp) { NVPAIR_ASSERT(nvp); if (nvp->nvp_type == NV_TYPE_NVLIST) { if (nvp->nvp_data == 0) { nvp->nvp_datasize = 0; } else { nvp->nvp_datasize = nvlist_size((const nvlist_t *)(intptr_t)nvp->nvp_data); } } } const unsigned char * nvpair_unpack_header(bool isbe, nvpair_t *nvp, const unsigned char *ptr, size_t *leftp) { struct nvpair_header nvphdr; if (*leftp < sizeof(nvphdr)) goto fail; memcpy(&nvphdr, ptr, sizeof(nvphdr)); ptr += sizeof(nvphdr); *leftp -= sizeof(nvphdr); #if NV_TYPE_FIRST > 0 if (nvphdr.nvph_type < NV_TYPE_FIRST) goto fail; #endif if (nvphdr.nvph_type > NV_TYPE_LAST && nvphdr.nvph_type != NV_TYPE_NVLIST_UP && nvphdr.nvph_type != NV_TYPE_NVLIST_ARRAY_NEXT) { goto fail; } #if BYTE_ORDER == BIG_ENDIAN if (!isbe) { nvphdr.nvph_namesize = le16toh(nvphdr.nvph_namesize); nvphdr.nvph_datasize = le64toh(nvphdr.nvph_datasize); } #else if (isbe) { nvphdr.nvph_namesize = be16toh(nvphdr.nvph_namesize); nvphdr.nvph_datasize = be64toh(nvphdr.nvph_datasize); } #endif if (nvphdr.nvph_namesize > NV_NAME_MAX) goto fail; if (*leftp < nvphdr.nvph_namesize) goto fail; if (nvphdr.nvph_namesize < 1) goto fail; if (strnlen((const char *)ptr, nvphdr.nvph_namesize) != (size_t)(nvphdr.nvph_namesize - 1)) { goto fail; } memcpy(nvp->nvp_name, ptr, nvphdr.nvph_namesize); ptr += nvphdr.nvph_namesize; *leftp -= nvphdr.nvph_namesize; if (*leftp < nvphdr.nvph_datasize) goto fail; nvp->nvp_type = nvphdr.nvph_type; nvp->nvp_data = 0; nvp->nvp_datasize = nvphdr.nvph_datasize; nvp->nvp_nitems = nvphdr.nvph_nitems; return (ptr); fail: ERRNO_SET(EINVAL); return (NULL); } const unsigned char * nvpair_unpack_null(bool isbe __unused, nvpair_t *nvp, const unsigned char *ptr, size_t *leftp __unused) { PJDLOG_ASSERT(nvp->nvp_type == NV_TYPE_NULL); if (nvp->nvp_datasize != 0) { ERRNO_SET(EINVAL); return (NULL); } return (ptr); } const unsigned char * nvpair_unpack_bool(bool isbe __unused, nvpair_t *nvp, const unsigned char *ptr, size_t *leftp) { uint8_t value; PJDLOG_ASSERT(nvp->nvp_type == NV_TYPE_BOOL); if (nvp->nvp_datasize != sizeof(value)) { ERRNO_SET(EINVAL); return (NULL); } if (*leftp < sizeof(value)) { ERRNO_SET(EINVAL); return (NULL); } memcpy(&value, ptr, sizeof(value)); ptr += sizeof(value); *leftp -= sizeof(value); if (value != 0 && value != 1) { ERRNO_SET(EINVAL); return (NULL); } nvp->nvp_data = (uint64_t)value; return (ptr); } const unsigned char * nvpair_unpack_number(bool isbe, nvpair_t *nvp, const unsigned char *ptr, size_t *leftp) { PJDLOG_ASSERT(nvp->nvp_type == NV_TYPE_NUMBER); if (nvp->nvp_datasize != sizeof(uint64_t)) { ERRNO_SET(EINVAL); return (NULL); } if (*leftp < sizeof(uint64_t)) { ERRNO_SET(EINVAL); return (NULL); } if (isbe) nvp->nvp_data = be64dec(ptr); else nvp->nvp_data = le64dec(ptr); ptr += sizeof(uint64_t); *leftp -= sizeof(uint64_t); return (ptr); } const unsigned char * nvpair_unpack_string(bool isbe __unused, nvpair_t *nvp, const unsigned char *ptr, size_t *leftp) { PJDLOG_ASSERT(nvp->nvp_type == NV_TYPE_STRING); if (*leftp < nvp->nvp_datasize || nvp->nvp_datasize == 0) { ERRNO_SET(EINVAL); return (NULL); } if (strnlen((const char *)ptr, nvp->nvp_datasize) != nvp->nvp_datasize - 1) { ERRNO_SET(EINVAL); return (NULL); } nvp->nvp_data = (uint64_t)(uintptr_t)nv_strdup((const char *)ptr); if (nvp->nvp_data == 0) return (NULL); ptr += nvp->nvp_datasize; *leftp -= 
nvp->nvp_datasize; return (ptr); } const unsigned char * nvpair_unpack_nvlist(bool isbe __unused, nvpair_t *nvp, const unsigned char *ptr, size_t *leftp, size_t nfds, nvlist_t **child) { nvlist_t *value; PJDLOG_ASSERT(nvp->nvp_type == NV_TYPE_NVLIST); if (*leftp < nvp->nvp_datasize || nvp->nvp_datasize == 0) { ERRNO_SET(EINVAL); return (NULL); } value = nvlist_create(0); if (value == NULL) return (NULL); ptr = nvlist_unpack_header(value, ptr, nfds, NULL, leftp); if (ptr == NULL) return (NULL); nvp->nvp_data = (uint64_t)(uintptr_t)value; *child = value; return (ptr); } #ifndef _KERNEL const unsigned char * nvpair_unpack_descriptor(bool isbe, nvpair_t *nvp, const unsigned char *ptr, size_t *leftp, const int *fds, size_t nfds) { int64_t idx; PJDLOG_ASSERT(nvp->nvp_type == NV_TYPE_DESCRIPTOR); if (nvp->nvp_datasize != sizeof(idx)) { ERRNO_SET(EINVAL); return (NULL); } if (*leftp < sizeof(idx)) { ERRNO_SET(EINVAL); return (NULL); } if (isbe) idx = be64dec(ptr); else idx = le64dec(ptr); if (idx < 0) { ERRNO_SET(EINVAL); return (NULL); } if ((size_t)idx >= nfds) { ERRNO_SET(EINVAL); return (NULL); } nvp->nvp_data = (uint64_t)fds[idx]; ptr += sizeof(idx); *leftp -= sizeof(idx); return (ptr); } #endif const unsigned char * nvpair_unpack_binary(bool isbe __unused, nvpair_t *nvp, const unsigned char *ptr, size_t *leftp) { void *value; PJDLOG_ASSERT(nvp->nvp_type == NV_TYPE_BINARY); if (*leftp < nvp->nvp_datasize || nvp->nvp_datasize == 0) { ERRNO_SET(EINVAL); return (NULL); } value = nv_malloc(nvp->nvp_datasize); if (value == NULL) return (NULL); memcpy(value, ptr, nvp->nvp_datasize); ptr += nvp->nvp_datasize; *leftp -= nvp->nvp_datasize; nvp->nvp_data = (uint64_t)(uintptr_t)value; return (ptr); } const unsigned char * nvpair_unpack_bool_array(bool isbe __unused, nvpair_t *nvp, const unsigned char *ptr, size_t *leftp) { uint8_t *value; size_t size; unsigned int i; PJDLOG_ASSERT(nvp->nvp_type == NV_TYPE_BOOL_ARRAY); size = sizeof(*value) * nvp->nvp_nitems; if (nvp->nvp_datasize != size || *leftp < size || nvp->nvp_nitems == 0 || size < nvp->nvp_nitems) { ERRNO_SET(EINVAL); return (NULL); } value = nv_malloc(size); if (value == NULL) return (NULL); for (i = 0; i < nvp->nvp_nitems; i++) { value[i] = *(const uint8_t *)ptr; ptr += sizeof(*value); *leftp -= sizeof(*value); } nvp->nvp_data = (uint64_t)(uintptr_t)value; return (ptr); } const unsigned char * nvpair_unpack_number_array(bool isbe, nvpair_t *nvp, const unsigned char *ptr, size_t *leftp) { uint64_t *value; size_t size; unsigned int i; PJDLOG_ASSERT(nvp->nvp_type == NV_TYPE_NUMBER_ARRAY); size = sizeof(*value) * nvp->nvp_nitems; if (nvp->nvp_datasize != size || *leftp < size || nvp->nvp_nitems == 0 || size < nvp->nvp_nitems) { ERRNO_SET(EINVAL); return (NULL); } value = nv_malloc(size); if (value == NULL) return (NULL); for (i = 0; i < nvp->nvp_nitems; i++) { if (isbe) value[i] = be64dec(ptr); else value[i] = le64dec(ptr); ptr += sizeof(*value); *leftp -= sizeof(*value); } nvp->nvp_data = (uint64_t)(uintptr_t)value; return (ptr); } const unsigned char * nvpair_unpack_string_array(bool isbe __unused, nvpair_t *nvp, const unsigned char *ptr, size_t *leftp) { ssize_t size; size_t len; const char *tmp; char **value; unsigned int ii, j; PJDLOG_ASSERT(nvp->nvp_type == NV_TYPE_STRING_ARRAY); if (*leftp < nvp->nvp_datasize || nvp->nvp_datasize == 0 || nvp->nvp_nitems == 0) { ERRNO_SET(EINVAL); return (NULL); } size = nvp->nvp_datasize; tmp = (const char *)ptr; for (ii = 0; ii < nvp->nvp_nitems; ii++) { len = strnlen(tmp, size - 1) + 1; size -= len; if 
(size < 0) { ERRNO_SET(EINVAL); return (NULL); } tmp += len; } if (size != 0) { ERRNO_SET(EINVAL); return (NULL); } value = nv_malloc(sizeof(*value) * nvp->nvp_nitems); if (value == NULL) return (NULL); for (ii = 0; ii < nvp->nvp_nitems; ii++) { value[ii] = nv_strdup((const char *)ptr); if (value[ii] == NULL) goto out; len = strlen(value[ii]) + 1; ptr += len; *leftp -= len; } nvp->nvp_data = (uint64_t)(uintptr_t)value; return (ptr); out: for (j = 0; j < ii; j++) nv_free(value[j]); nv_free(value); return (NULL); } #ifndef _KERNEL const unsigned char * nvpair_unpack_descriptor_array(bool isbe, nvpair_t *nvp, const unsigned char *ptr, size_t *leftp, const int *fds, size_t nfds) { int64_t idx; size_t size; unsigned int ii; int *array; PJDLOG_ASSERT(nvp->nvp_type == NV_TYPE_DESCRIPTOR_ARRAY); size = sizeof(idx) * nvp->nvp_nitems; if (nvp->nvp_datasize != size || *leftp < size || nvp->nvp_nitems == 0 || size < nvp->nvp_nitems) { ERRNO_SET(EINVAL); return (NULL); } array = (int *)nv_malloc(size); if (array == NULL) return (NULL); for (ii = 0; ii < nvp->nvp_nitems; ii++) { if (isbe) idx = be64dec(ptr); else idx = le64dec(ptr); if (idx < 0) { ERRNO_SET(EINVAL); nv_free(array); return (NULL); } if ((size_t)idx >= nfds) { ERRNO_SET(EINVAL); nv_free(array); return (NULL); } array[ii] = (uint64_t)fds[idx]; ptr += sizeof(idx); *leftp -= sizeof(idx); } nvp->nvp_data = (uint64_t)(uintptr_t)array; return (ptr); } #endif const unsigned char * nvpair_unpack_nvlist_array(bool isbe __unused, nvpair_t *nvp, const unsigned char *ptr, size_t *leftp, nvlist_t **firstel) { nvlist_t **value; nvpair_t *tmpnvp; unsigned int ii, j; size_t sizeup; PJDLOG_ASSERT(nvp->nvp_type == NV_TYPE_NVLIST_ARRAY); sizeup = sizeof(struct nvpair_header) * nvp->nvp_nitems; if (nvp->nvp_nitems == 0 || sizeup < nvp->nvp_nitems || sizeup > *leftp) { ERRNO_SET(EINVAL); return (NULL); } value = nv_malloc(nvp->nvp_nitems * sizeof(*value)); if (value == NULL) return (NULL); for (ii = 0; ii < nvp->nvp_nitems; ii++) { value[ii] = nvlist_create(0); if (value[ii] == NULL) goto fail; if (ii > 0) { tmpnvp = nvpair_allocv(" ", NV_TYPE_NVLIST, (uint64_t)(uintptr_t)value[ii], 0, 0); if (tmpnvp == NULL) goto fail; nvlist_set_array_next(value[ii - 1], tmpnvp); } } nvlist_set_flags(value[nvp->nvp_nitems - 1], NV_FLAG_IN_ARRAY); nvp->nvp_data = (uint64_t)(uintptr_t)value; *firstel = value[0]; return (ptr); fail: ERRNO_SAVE(); for (j = 0; j <= ii; j++) nvlist_destroy(value[j]); nv_free(value); ERRNO_RESTORE(); return (NULL); } const unsigned char * nvpair_unpack(bool isbe, const unsigned char *ptr, size_t *leftp, nvpair_t **nvpp) { nvpair_t *nvp, *tmp; nvp = nv_calloc(1, sizeof(*nvp) + NV_NAME_MAX); if (nvp == NULL) return (NULL); nvp->nvp_name = (char *)(nvp + 1); ptr = nvpair_unpack_header(isbe, nvp, ptr, leftp); if (ptr == NULL) goto fail; tmp = nv_realloc(nvp, sizeof(*nvp) + strlen(nvp->nvp_name) + 1); if (tmp == NULL) goto fail; nvp = tmp; /* Update nvp_name after realloc(). */ nvp->nvp_name = (char *)(nvp + 1); nvp->nvp_data = 0x00; nvp->nvp_magic = NVPAIR_MAGIC; *nvpp = nvp; return (ptr); fail: nv_free(nvp); return (NULL); } int nvpair_type(const nvpair_t *nvp) { NVPAIR_ASSERT(nvp); return (nvp->nvp_type); } const char * nvpair_name(const nvpair_t *nvp) { NVPAIR_ASSERT(nvp); return (nvp->nvp_name); } nvpair_t * nvpair_create_stringf(const char *name, const char *valuefmt, ...) 
{ va_list valueap; nvpair_t *nvp; va_start(valueap, valuefmt); nvp = nvpair_create_stringv(name, valuefmt, valueap); va_end(valueap); return (nvp); } nvpair_t * nvpair_create_stringv(const char *name, const char *valuefmt, va_list valueap) { nvpair_t *nvp; char *str; int len; len = nv_vasprintf(&str, valuefmt, valueap); if (len < 0) return (NULL); nvp = nvpair_create_string(name, str); nv_free(str); return (nvp); } nvpair_t * nvpair_create_null(const char *name) { return (nvpair_allocv(name, NV_TYPE_NULL, 0, 0, 0)); } nvpair_t * nvpair_create_bool(const char *name, bool value) { return (nvpair_allocv(name, NV_TYPE_BOOL, value ? 1 : 0, sizeof(uint8_t), 0)); } nvpair_t * nvpair_create_number(const char *name, uint64_t value) { return (nvpair_allocv(name, NV_TYPE_NUMBER, value, sizeof(value), 0)); } nvpair_t * nvpair_create_string(const char *name, const char *value) { nvpair_t *nvp; size_t size; char *data; if (value == NULL) { ERRNO_SET(EINVAL); return (NULL); } data = nv_strdup(value); if (data == NULL) return (NULL); size = strlen(value) + 1; nvp = nvpair_allocv(name, NV_TYPE_STRING, (uint64_t)(uintptr_t)data, size, 0); if (nvp == NULL) nv_free(data); return (nvp); } nvpair_t * nvpair_create_nvlist(const char *name, const nvlist_t *value) { nvlist_t *nvl; nvpair_t *nvp; if (value == NULL) { ERRNO_SET(EINVAL); return (NULL); } nvl = nvlist_clone(value); if (nvl == NULL) return (NULL); nvp = nvpair_allocv(name, NV_TYPE_NVLIST, (uint64_t)(uintptr_t)nvl, 0, 0); if (nvp == NULL) nvlist_destroy(nvl); else nvlist_set_parent(nvl, nvp); return (nvp); } #ifndef _KERNEL nvpair_t * nvpair_create_descriptor(const char *name, int value) { nvpair_t *nvp; value = fcntl(value, F_DUPFD_CLOEXEC, 0); if (value < 0) return (NULL); nvp = nvpair_allocv(name, NV_TYPE_DESCRIPTOR, (uint64_t)value, sizeof(int64_t), 0); if (nvp == NULL) { ERRNO_SAVE(); close(value); ERRNO_RESTORE(); } return (nvp); } #endif nvpair_t * nvpair_create_binary(const char *name, const void *value, size_t size) { nvpair_t *nvp; void *data; if (value == NULL || size == 0) { ERRNO_SET(EINVAL); return (NULL); } data = nv_malloc(size); if (data == NULL) return (NULL); memcpy(data, value, size); nvp = nvpair_allocv(name, NV_TYPE_BINARY, (uint64_t)(uintptr_t)data, size, 0); if (nvp == NULL) nv_free(data); return (nvp); } nvpair_t * nvpair_create_bool_array(const char *name, const bool *value, size_t nitems) { nvpair_t *nvp; size_t size; void *data; if (value == NULL || nitems == 0) { ERRNO_SET(EINVAL); return (NULL); } size = sizeof(value[0]) * nitems; data = nv_malloc(size); if (data == NULL) return (NULL); memcpy(data, value, size); nvp = nvpair_allocv(name, NV_TYPE_BOOL_ARRAY, (uint64_t)(uintptr_t)data, size, nitems); if (nvp == NULL) { ERRNO_SAVE(); nv_free(data); ERRNO_RESTORE(); } return (nvp); } nvpair_t * nvpair_create_number_array(const char *name, const uint64_t *value, size_t nitems) { nvpair_t *nvp; size_t size; void *data; if (value == NULL || nitems == 0) { ERRNO_SET(EINVAL); return (NULL); } size = sizeof(value[0]) * nitems; data = nv_malloc(size); if (data == NULL) return (NULL); memcpy(data, value, size); nvp = nvpair_allocv(name, NV_TYPE_NUMBER_ARRAY, (uint64_t)(uintptr_t)data, size, nitems); if (nvp == NULL) { ERRNO_SAVE(); nv_free(data); ERRNO_RESTORE(); } return (nvp); } nvpair_t * nvpair_create_string_array(const char *name, const char * const *value, size_t nitems) { nvpair_t *nvp; unsigned int ii; size_t datasize, size; char **data; if (value == NULL || nitems == 0) { ERRNO_SET(EINVAL); return (NULL); } nvp = NULL; 
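/*
 * datasize accumulates the packed size of the array: each element
 * contributes strlen() + 1 bytes, the same accounting that
 * nvpair_pack_string_array() asserts against when serializing.
 */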
datasize = 0; data = nv_malloc(sizeof(value[0]) * nitems); if (data == NULL) return (NULL); for (ii = 0; ii < nitems; ii++) { if (value[ii] == NULL) { ERRNO_SET(EINVAL); goto fail; } size = strlen(value[ii]) + 1; datasize += size; data[ii] = nv_strdup(value[ii]); if (data[ii] == NULL) goto fail; } nvp = nvpair_allocv(name, NV_TYPE_STRING_ARRAY, (uint64_t)(uintptr_t)data, datasize, nitems); fail: if (nvp == NULL) { ERRNO_SAVE(); for (; ii > 0; ii--) nv_free(data[ii - 1]); nv_free(data); ERRNO_RESTORE(); } return (nvp); } nvpair_t * nvpair_create_nvlist_array(const char *name, const nvlist_t * const *value, size_t nitems) { unsigned int ii; nvlist_t **nvls; nvpair_t *parent; int flags; nvls = NULL; if (value == NULL || nitems == 0) { ERRNO_SET(EINVAL); return (NULL); } nvls = nv_malloc(sizeof(value[0]) * nitems); if (nvls == NULL) return (NULL); for (ii = 0; ii < nitems; ii++) { if (value[ii] == NULL) { ERRNO_SET(EINVAL); goto fail; } nvls[ii] = nvlist_clone(value[ii]); if (nvls[ii] == NULL) goto fail; if (ii > 0) { nvpair_t *nvp; nvp = nvpair_allocv(" ", NV_TYPE_NVLIST, (uint64_t)(uintptr_t)nvls[ii], 0, 0); if (nvp == NULL) { ERRNO_SAVE(); nvlist_destroy(nvls[ii]); ERRNO_RESTORE(); goto fail; } nvlist_set_array_next(nvls[ii - 1], nvp); } } flags = nvlist_flags(nvls[nitems - 1]) | NV_FLAG_IN_ARRAY; nvlist_set_flags(nvls[nitems - 1], flags); parent = nvpair_allocv(name, NV_TYPE_NVLIST_ARRAY, (uint64_t)(uintptr_t)nvls, 0, nitems); if (parent == NULL) goto fail; for (ii = 0; ii < nitems; ii++) nvlist_set_parent(nvls[ii], parent); return (parent); fail: ERRNO_SAVE(); for (; ii > 0; ii--) nvlist_destroy(nvls[ii - 1]); nv_free(nvls); ERRNO_RESTORE(); return (NULL); } #ifndef _KERNEL nvpair_t * nvpair_create_descriptor_array(const char *name, const int *value, size_t nitems) { unsigned int ii; nvpair_t *nvp; int *fds; if (value == NULL) { ERRNO_SET(EINVAL); return (NULL); } nvp = NULL; fds = nv_malloc(sizeof(value[0]) * nitems); if (fds == NULL) return (NULL); for (ii = 0; ii < nitems; ii++) { if (value[ii] == -1) { fds[ii] = -1; } else { fds[ii] = fcntl(value[ii], F_DUPFD_CLOEXEC, 0); if (fds[ii] == -1) goto fail; } } nvp = nvpair_allocv(name, NV_TYPE_DESCRIPTOR_ARRAY, (uint64_t)(uintptr_t)fds, sizeof(int64_t) * nitems, nitems); fail: if (nvp == NULL) { ERRNO_SAVE(); for (; ii > 0; ii--) { if (fds[ii - 1] != -1) close(fds[ii - 1]); } nv_free(fds); ERRNO_RESTORE(); } return (nvp); } #endif nvpair_t * nvpair_move_string(const char *name, char *value) { nvpair_t *nvp; if (value == NULL) { ERRNO_SET(EINVAL); return (NULL); } nvp = nvpair_allocv(name, NV_TYPE_STRING, (uint64_t)(uintptr_t)value, strlen(value) + 1, 0); if (nvp == NULL) { ERRNO_SAVE(); nv_free(value); ERRNO_RESTORE(); } return (nvp); } nvpair_t * nvpair_move_nvlist(const char *name, nvlist_t *value) { nvpair_t *nvp; if (value == NULL || nvlist_get_nvpair_parent(value) != NULL) { ERRNO_SET(EINVAL); return (NULL); } if (nvlist_error(value) != 0) { ERRNO_SET(nvlist_error(value)); nvlist_destroy(value); return (NULL); } nvp = nvpair_allocv(name, NV_TYPE_NVLIST, (uint64_t)(uintptr_t)value, 0, 0); if (nvp == NULL) nvlist_destroy(value); else nvlist_set_parent(value, nvp); return (nvp); } #ifndef _KERNEL nvpair_t * nvpair_move_descriptor(const char *name, int value) { nvpair_t *nvp; if (value < 0 || !fd_is_valid(value)) { ERRNO_SET(EBADF); return (NULL); } nvp = nvpair_allocv(name, NV_TYPE_DESCRIPTOR, (uint64_t)value, sizeof(int64_t), 0); if (nvp == NULL) { ERRNO_SAVE(); close(value); ERRNO_RESTORE(); } return (nvp); } #endif nvpair_t * 
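Note the two descriptor ownership models above: nvpair_create_descriptor() and nvpair_create_descriptor_array() duplicate each descriptor with F_DUPFD_CLOEXEC, so the caller keeps its own copy, while nvpair_move_descriptor() takes over the passed descriptor (and closes it on failure). A short sketch of the distinction through the public wrappers; the function and key names are illustrative:

#include <sys/nv.h>

#include <unistd.h>

/* Illustrative only: "add" dups the fd, "move" consumes it. */
static void
stash_fds(nvlist_t *nvl, int fd)
{
	int dupfd;

	nvlist_add_descriptor(nvl, "borrowed", fd);	/* fd still ours */

	dupfd = dup(fd);
	if (dupfd >= 0)
		nvlist_move_descriptor(nvl, "owned", dupfd); /* consumed */
}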
nvpair_move_binary(const char *name, void *value, size_t size) { nvpair_t *nvp; if (value == NULL || size == 0) { ERRNO_SET(EINVAL); return (NULL); } nvp = nvpair_allocv(name, NV_TYPE_BINARY, (uint64_t)(uintptr_t)value, size, 0); if (nvp == NULL) { ERRNO_SAVE(); nv_free(value); ERRNO_RESTORE(); } return (nvp); } nvpair_t * nvpair_move_bool_array(const char *name, bool *value, size_t nitems) { nvpair_t *nvp; if (value == NULL || nitems == 0) { ERRNO_SET(EINVAL); return (NULL); } nvp = nvpair_allocv(name, NV_TYPE_BOOL_ARRAY, (uint64_t)(uintptr_t)value, sizeof(value[0]) * nitems, nitems); if (nvp == NULL) { ERRNO_SAVE(); nv_free(value); ERRNO_RESTORE(); } return (nvp); } nvpair_t * nvpair_move_string_array(const char *name, char **value, size_t nitems) { nvpair_t *nvp; size_t i, size; if (value == NULL || nitems == 0) { ERRNO_SET(EINVAL); return (NULL); } size = 0; for (i = 0; i < nitems; i++) { if (value[i] == NULL) { ERRNO_SET(EINVAL); return (NULL); } size += strlen(value[i]) + 1; } nvp = nvpair_allocv(name, NV_TYPE_STRING_ARRAY, (uint64_t)(uintptr_t)value, size, nitems); if (nvp == NULL) { ERRNO_SAVE(); for (i = 0; i < nitems; i++) nv_free(value[i]); nv_free(value); ERRNO_RESTORE(); } return (nvp); } nvpair_t * nvpair_move_number_array(const char *name, uint64_t *value, size_t nitems) { nvpair_t *nvp; if (value == NULL || nitems == 0) { ERRNO_SET(EINVAL); return (NULL); } nvp = nvpair_allocv(name, NV_TYPE_NUMBER_ARRAY, (uint64_t)(uintptr_t)value, sizeof(value[0]) * nitems, nitems); if (nvp == NULL) { ERRNO_SAVE(); nv_free(value); ERRNO_RESTORE(); } return (nvp); } nvpair_t * nvpair_move_nvlist_array(const char *name, nvlist_t **value, size_t nitems) { nvpair_t *parent; unsigned int ii; int flags; if (value == NULL || nitems == 0) { ERRNO_SET(EINVAL); return (NULL); } for (ii = 0; ii < nitems; ii++) { if (value == NULL || nvlist_error(value[ii]) != 0 || nvlist_get_pararr(value[ii], NULL) != NULL) { ERRNO_SET(EINVAL); goto fail; } if (ii > 0) { nvpair_t *nvp; nvp = nvpair_allocv(" ", NV_TYPE_NVLIST, (uint64_t)(uintptr_t)value[ii], 0, 0); if (nvp == NULL) goto fail; nvlist_set_array_next(value[ii - 1], nvp); } } flags = nvlist_flags(value[nitems - 1]) | NV_FLAG_IN_ARRAY; nvlist_set_flags(value[nitems - 1], flags); parent = nvpair_allocv(name, NV_TYPE_NVLIST_ARRAY, (uint64_t)(uintptr_t)value, 0, nitems); if (parent == NULL) goto fail; for (ii = 0; ii < nitems; ii++) nvlist_set_parent(value[ii], parent); return (parent); fail: ERRNO_SAVE(); for (ii = 0; ii < nitems; ii++) { if (value[ii] != NULL && nvlist_get_pararr(value[ii], NULL) != NULL) { nvlist_destroy(value[ii]); } } nv_free(value); ERRNO_RESTORE(); return (NULL); } #ifndef _KERNEL nvpair_t * nvpair_move_descriptor_array(const char *name, int *value, size_t nitems) { nvpair_t *nvp; size_t i; if (value == NULL || nitems == 0) { ERRNO_SET(EINVAL); return (NULL); } for (i = 0; i < nitems; i++) { if (value[i] != -1 && !fd_is_valid(value[i])) { ERRNO_SET(EBADF); goto fail; } } nvp = nvpair_allocv(name, NV_TYPE_DESCRIPTOR_ARRAY, (uint64_t)(uintptr_t)value, sizeof(value[0]) * nitems, nitems); if (nvp == NULL) goto fail; return (nvp); fail: ERRNO_SAVE(); for (i = 0; i < nitems; i++) { if (fd_is_valid(value[i])) close(value[i]); } nv_free(value); ERRNO_RESTORE(); return (NULL); } #endif bool nvpair_get_bool(const nvpair_t *nvp) { NVPAIR_ASSERT(nvp); return (nvp->nvp_data == 1); } uint64_t nvpair_get_number(const nvpair_t *nvp) { NVPAIR_ASSERT(nvp); return (nvp->nvp_data); } const char * nvpair_get_string(const nvpair_t *nvp) { 
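The nvpair_move_*() routines above transfer ownership unconditionally: on success the pair owns the buffer, and on failure the buffer is freed (or the descriptors closed) before returning, so the caller must never free the value after the call, whatever the outcome. A sketch via the public move wrapper, with illustrative names:

#include <sys/nv.h>

#include <stdint.h>
#include <stdlib.h>

static void
stash_ids(nvlist_t *nvl, size_t n)
{
	uint64_t *arr;
	size_t i;

	arr = malloc(n * sizeof(*arr));
	if (arr == NULL)
		return;
	for (i = 0; i < n; i++)
		arr[i] = i;
	/* Ownership of arr passes to libnv here, success or failure. */
	nvlist_move_number_array(nvl, "ids", arr, n);
}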
NVPAIR_ASSERT(nvp); PJDLOG_ASSERT(nvp->nvp_type == NV_TYPE_STRING); return ((const char *)(intptr_t)nvp->nvp_data); } const nvlist_t * nvpair_get_nvlist(const nvpair_t *nvp) { NVPAIR_ASSERT(nvp); PJDLOG_ASSERT(nvp->nvp_type == NV_TYPE_NVLIST); return ((const nvlist_t *)(intptr_t)nvp->nvp_data); } #ifndef _KERNEL int nvpair_get_descriptor(const nvpair_t *nvp) { NVPAIR_ASSERT(nvp); PJDLOG_ASSERT(nvp->nvp_type == NV_TYPE_DESCRIPTOR); return ((int)nvp->nvp_data); } #endif const void * nvpair_get_binary(const nvpair_t *nvp, size_t *sizep) { NVPAIR_ASSERT(nvp); PJDLOG_ASSERT(nvp->nvp_type == NV_TYPE_BINARY); if (sizep != NULL) *sizep = nvp->nvp_datasize; return ((const void *)(intptr_t)nvp->nvp_data); } const bool * nvpair_get_bool_array(const nvpair_t *nvp, size_t *nitems) { NVPAIR_ASSERT(nvp); PJDLOG_ASSERT(nvp->nvp_type == NV_TYPE_BOOL_ARRAY); if (nitems != NULL) *nitems = nvp->nvp_nitems; return ((const bool *)(intptr_t)nvp->nvp_data); } const uint64_t * nvpair_get_number_array(const nvpair_t *nvp, size_t *nitems) { NVPAIR_ASSERT(nvp); PJDLOG_ASSERT(nvp->nvp_type == NV_TYPE_NUMBER_ARRAY); if (nitems != NULL) *nitems = nvp->nvp_nitems; return ((const uint64_t *)(intptr_t)nvp->nvp_data); } const char * const * nvpair_get_string_array(const nvpair_t *nvp, size_t *nitems) { NVPAIR_ASSERT(nvp); PJDLOG_ASSERT(nvp->nvp_type == NV_TYPE_STRING_ARRAY); if (nitems != NULL) *nitems = nvp->nvp_nitems; return ((const char * const *)(intptr_t)nvp->nvp_data); } const nvlist_t * const * nvpair_get_nvlist_array(const nvpair_t *nvp, size_t *nitems) { NVPAIR_ASSERT(nvp); PJDLOG_ASSERT(nvp->nvp_type == NV_TYPE_NVLIST_ARRAY); if (nitems != NULL) *nitems = nvp->nvp_nitems; return ((const nvlist_t * const *)((intptr_t)nvp->nvp_data)); } #ifndef _KERNEL const int * nvpair_get_descriptor_array(const nvpair_t *nvp, size_t *nitems) { NVPAIR_ASSERT(nvp); PJDLOG_ASSERT(nvp->nvp_type == NV_TYPE_DESCRIPTOR_ARRAY); if (nitems != NULL) *nitems = nvp->nvp_nitems; return ((const int *)(intptr_t)nvp->nvp_data); } #endif int nvpair_append_bool_array(nvpair_t *nvp, const bool value) { NVPAIR_ASSERT(nvp); PJDLOG_ASSERT(nvp->nvp_type == NV_TYPE_BOOL_ARRAY); return (nvpair_append(nvp, &value, sizeof(value), sizeof(value))); } int nvpair_append_number_array(nvpair_t *nvp, const uint64_t value) { NVPAIR_ASSERT(nvp); PJDLOG_ASSERT(nvp->nvp_type == NV_TYPE_NUMBER_ARRAY); return (nvpair_append(nvp, &value, sizeof(value), sizeof(value))); } int nvpair_append_string_array(nvpair_t *nvp, const char *value) { char *str; NVPAIR_ASSERT(nvp); PJDLOG_ASSERT(nvp->nvp_type == NV_TYPE_STRING_ARRAY); if (value == NULL) { ERRNO_SET(EINVAL); return (-1); } str = nv_strdup(value); if (str == NULL) { return (-1); } if (nvpair_append(nvp, &str, sizeof(str), strlen(str) + 1) == -1) { nv_free(str); return (-1); } return (0); } int nvpair_append_nvlist_array(nvpair_t *nvp, const nvlist_t *value) { nvpair_t *tmpnvp; nvlist_t *nvl, *prev; int flags; NVPAIR_ASSERT(nvp); PJDLOG_ASSERT(nvp->nvp_type == NV_TYPE_NVLIST_ARRAY); if (value == NULL || nvlist_error(value) != 0 || nvlist_get_pararr(value, NULL) != NULL) { ERRNO_SET(EINVAL); return (-1); } nvl = nvlist_clone(value); if (nvl == NULL) { return (-1); } flags = nvlist_flags(nvl) | NV_FLAG_IN_ARRAY; nvlist_set_flags(nvl, flags); tmpnvp = NULL; prev = NULL; if (nvp->nvp_nitems > 0) { nvlist_t **nvls = (void *)(uintptr_t)nvp->nvp_data; prev = nvls[nvp->nvp_nitems - 1]; PJDLOG_ASSERT(prev != NULL); tmpnvp = nvpair_allocv(" ", NV_TYPE_NVLIST, (uint64_t)(uintptr_t)nvl, 0, 0); if (tmpnvp == NULL) { goto fail; } } 
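The nvpair_append_*_array() routines grow an existing array pair one element at a time; the public nvlist_append_*_array() wrappers create the array on first use, which makes incremental construction straightforward. A minimal sketch (names illustrative):

#include <sys/nv.h>

#include <stddef.h>
#include <stdint.h>

static void
record_samples(nvlist_t *nvl, const uint64_t *samples, size_t n)
{
	size_t i;

	/* The first append creates "samples" as a one-element array. */
	for (i = 0; i < n; i++)
		nvlist_append_number_array(nvl, "samples", samples[i]);
}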
if (nvpair_append(nvp, &nvl, sizeof(nvl), 0) == -1) { goto fail; } if (tmpnvp) { NVPAIR_ASSERT(tmpnvp); nvlist_set_array_next(prev, tmpnvp); } nvlist_set_parent(nvl, nvp); return (0); fail: if (tmpnvp) { nvpair_free(tmpnvp); } nvlist_destroy(nvl); return (-1); } #ifndef _KERNEL int nvpair_append_descriptor_array(nvpair_t *nvp, const int value) { int fd; NVPAIR_ASSERT(nvp); PJDLOG_ASSERT(nvp->nvp_type == NV_TYPE_DESCRIPTOR_ARRAY); fd = fcntl(value, F_DUPFD_CLOEXEC, 0); if (fd == -1) { return (-1); } if (nvpair_append(nvp, &fd, sizeof(fd), sizeof(fd)) == -1) { close(fd); return (-1); } return (0); } #endif void nvpair_free(nvpair_t *nvp) { size_t i; NVPAIR_ASSERT(nvp); PJDLOG_ASSERT(nvp->nvp_list == NULL); nvp->nvp_magic = 0; switch (nvp->nvp_type) { #ifndef _KERNEL case NV_TYPE_DESCRIPTOR: close((int)nvp->nvp_data); break; case NV_TYPE_DESCRIPTOR_ARRAY: for (i = 0; i < nvp->nvp_nitems; i++) close(((int *)(intptr_t)nvp->nvp_data)[i]); nv_free((int *)(intptr_t)nvp->nvp_data); break; #endif case NV_TYPE_NVLIST: nvlist_destroy((nvlist_t *)(intptr_t)nvp->nvp_data); break; case NV_TYPE_STRING: nv_free((char *)(intptr_t)nvp->nvp_data); break; case NV_TYPE_BINARY: nv_free((void *)(intptr_t)nvp->nvp_data); break; case NV_TYPE_NVLIST_ARRAY: for (i = 0; i < nvp->nvp_nitems; i++) { nvlist_destroy( ((nvlist_t **)(intptr_t)nvp->nvp_data)[i]); } nv_free(((nvlist_t **)(intptr_t)nvp->nvp_data)); break; case NV_TYPE_NUMBER_ARRAY: nv_free((uint64_t *)(intptr_t)nvp->nvp_data); break; case NV_TYPE_BOOL_ARRAY: nv_free((bool *)(intptr_t)nvp->nvp_data); break; case NV_TYPE_STRING_ARRAY: for (i = 0; i < nvp->nvp_nitems; i++) nv_free(((char **)(intptr_t)nvp->nvp_data)[i]); nv_free((char **)(intptr_t)nvp->nvp_data); break; } nv_free(nvp); } void nvpair_free_structure(nvpair_t *nvp) { NVPAIR_ASSERT(nvp); PJDLOG_ASSERT(nvp->nvp_list == NULL); nvp->nvp_magic = 0; nv_free(nvp); } const char * nvpair_type_string(int type) { switch (type) { case NV_TYPE_NULL: return ("NULL"); case NV_TYPE_BOOL: return ("BOOL"); case NV_TYPE_NUMBER: return ("NUMBER"); case NV_TYPE_STRING: return ("STRING"); case NV_TYPE_NVLIST: return ("NVLIST"); case NV_TYPE_DESCRIPTOR: return ("DESCRIPTOR"); case NV_TYPE_BINARY: return ("BINARY"); case NV_TYPE_BOOL_ARRAY: return ("BOOL ARRAY"); case NV_TYPE_NUMBER_ARRAY: return ("NUMBER ARRAY"); case NV_TYPE_STRING_ARRAY: return ("STRING ARRAY"); case NV_TYPE_NVLIST_ARRAY: return ("NVLIST ARRAY"); case NV_TYPE_DESCRIPTOR_ARRAY: return ("DESCRIPTOR ARRAY"); default: return ("<UNKNOWN>"); } } Index: projects/import-googletest-1.8.1/sys/dev/ata/ata-all.h =================================================================== --- projects/import-googletest-1.8.1/sys/dev/ata/ata-all.h (revision 344270) +++ projects/import-googletest-1.8.1/sys/dev/ata/ata-all.h (revision 344271) @@ -1,593 +1,589 @@ /*- * SPDX-License-Identifier: BSD-2-Clause-FreeBSD * * Copyright (c) 1998 - 2008 Søren Schmidt * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer, * without modification, immediately at the beginning of the file. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution.
* * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. * IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. * * $FreeBSD$ */ #if 0 #define ATA_LEGACY_SUPPORT /* Enable obsolete features that break * some modern devices */ #endif /* ATA register defines */ #define ATA_DATA 0 /* (RW) data */ #define ATA_FEATURE 1 /* (W) feature */ #define ATA_F_DMA 0x01 /* enable DMA */ #define ATA_F_OVL 0x02 /* enable overlap */ #define ATA_COUNT 2 /* (W) sector count */ #define ATA_SECTOR 3 /* (RW) sector # */ #define ATA_CYL_LSB 4 /* (RW) cylinder# LSB */ #define ATA_CYL_MSB 5 /* (RW) cylinder# MSB */ #define ATA_DRIVE 6 /* (W) Sector/Drive/Head */ #define ATA_D_LBA 0x40 /* use LBA addressing */ #define ATA_D_IBM 0xa0 /* 512 byte sectors, ECC */ #define ATA_COMMAND 7 /* (W) command */ #define ATA_ERROR 8 /* (R) error */ #define ATA_E_ILI 0x01 /* illegal length */ #define ATA_E_NM 0x02 /* no media */ #define ATA_E_ABORT 0x04 /* command aborted */ #define ATA_E_MCR 0x08 /* media change request */ #define ATA_E_IDNF 0x10 /* ID not found */ #define ATA_E_MC 0x20 /* media changed */ #define ATA_E_UNC 0x40 /* uncorrectable data */ #define ATA_E_ICRC 0x80 /* UDMA crc error */ #define ATA_E_ATAPI_SENSE_MASK 0xf0 /* ATAPI sense key mask */ #define ATA_IREASON 9 /* (R) interrupt reason */ #define ATA_I_CMD 0x01 /* cmd (1) | data (0) */ #define ATA_I_IN 0x02 /* read (1) | write (0) */ #define ATA_I_RELEASE 0x04 /* released bus (1) */ #define ATA_I_TAGMASK 0xf8 /* tag mask */ #define ATA_STATUS 10 /* (R) status */ #define ATA_ALTSTAT 11 /* (R) alternate status */ #define ATA_S_ERROR 0x01 /* error */ #define ATA_S_INDEX 0x02 /* index */ #define ATA_S_CORR 0x04 /* data corrected */ #define ATA_S_DRQ 0x08 /* data request */ #define ATA_S_DSC 0x10 /* drive seek completed */ #define ATA_S_SERVICE 0x10 /* drive needs service */ #define ATA_S_DWF 0x20 /* drive write fault */ #define ATA_S_DMA 0x20 /* DMA ready */ #define ATA_S_READY 0x40 /* drive ready */ #define ATA_S_BUSY 0x80 /* busy */ #define ATA_CONTROL 12 /* (W) control */ #define ATA_CTLOFFSET 0x206 /* control register offset */ #define ATA_PCCARD_CTLOFFSET 0x0e /* do for PCCARD devices */ #define ATA_A_IDS 0x02 /* disable interrupts */ #define ATA_A_RESET 0x04 /* RESET controller */ #ifdef ATA_LEGACY_SUPPORT #define ATA_A_4BIT 0x08 /* 4 head bits: obsolete 1996 */ #else #define ATA_A_4BIT 0x00 #endif #define ATA_A_HOB 0x80 /* High Order Byte enable */ /* SATA register defines */ #define ATA_SSTATUS 13 #define ATA_SS_DET_MASK 0x0000000f #define ATA_SS_DET_NO_DEVICE 0x00000000 #define ATA_SS_DET_DEV_PRESENT 0x00000001 #define ATA_SS_DET_PHY_ONLINE 0x00000003 #define ATA_SS_DET_PHY_OFFLINE 0x00000004 #define ATA_SS_SPD_MASK 0x000000f0 #define ATA_SS_SPD_NO_SPEED 0x00000000 #define ATA_SS_SPD_GEN1 0x00000010 #define ATA_SS_SPD_GEN2 0x00000020 #define ATA_SS_SPD_GEN3 0x00000030 #define ATA_SS_IPM_MASK 0x00000f00 #define ATA_SS_IPM_NO_DEVICE 0x00000000 #define 
ATA_SS_IPM_ACTIVE 0x00000100 #define ATA_SS_IPM_PARTIAL 0x00000200 #define ATA_SS_IPM_SLUMBER 0x00000600 #define ATA_SERROR 14 #define ATA_SE_DATA_CORRECTED 0x00000001 #define ATA_SE_COMM_CORRECTED 0x00000002 #define ATA_SE_DATA_ERR 0x00000100 #define ATA_SE_COMM_ERR 0x00000200 #define ATA_SE_PROT_ERR 0x00000400 #define ATA_SE_HOST_ERR 0x00000800 #define ATA_SE_PHY_CHANGED 0x00010000 #define ATA_SE_PHY_IERROR 0x00020000 #define ATA_SE_COMM_WAKE 0x00040000 #define ATA_SE_DECODE_ERR 0x00080000 #define ATA_SE_PARITY_ERR 0x00100000 #define ATA_SE_CRC_ERR 0x00200000 #define ATA_SE_HANDSHAKE_ERR 0x00400000 #define ATA_SE_LINKSEQ_ERR 0x00800000 #define ATA_SE_TRANSPORT_ERR 0x01000000 #define ATA_SE_UNKNOWN_FIS 0x02000000 #define ATA_SCONTROL 15 #define ATA_SC_DET_MASK 0x0000000f #define ATA_SC_DET_IDLE 0x00000000 #define ATA_SC_DET_RESET 0x00000001 #define ATA_SC_DET_DISABLE 0x00000004 #define ATA_SC_SPD_MASK 0x000000f0 #define ATA_SC_SPD_NO_SPEED 0x00000000 #define ATA_SC_SPD_SPEED_GEN1 0x00000010 #define ATA_SC_SPD_SPEED_GEN2 0x00000020 #define ATA_SC_SPD_SPEED_GEN3 0x00000030 #define ATA_SC_IPM_MASK 0x00000f00 #define ATA_SC_IPM_NONE 0x00000000 #define ATA_SC_IPM_DIS_PARTIAL 0x00000100 #define ATA_SC_IPM_DIS_SLUMBER 0x00000200 #define ATA_SACTIVE 16 /* DMA register defines */ #define ATA_DMA_ENTRIES 256 #define ATA_DMA_EOT 0x80000000 #define ATA_BMCMD_PORT 17 #define ATA_BMCMD_START_STOP 0x01 #define ATA_BMCMD_WRITE_READ 0x08 #define ATA_BMDEVSPEC_0 18 #define ATA_BMSTAT_PORT 19 #define ATA_BMSTAT_ACTIVE 0x01 #define ATA_BMSTAT_ERROR 0x02 #define ATA_BMSTAT_INTERRUPT 0x04 #define ATA_BMSTAT_MASK 0x07 #define ATA_BMSTAT_DMA_MASTER 0x20 #define ATA_BMSTAT_DMA_SLAVE 0x40 #define ATA_BMSTAT_DMA_SIMPLEX 0x80 #define ATA_BMDEVSPEC_1 20 #define ATA_BMDTP_PORT 21 #define ATA_IDX_ADDR 22 #define ATA_IDX_DATA 23 #define ATA_MAX_RES 24 /* misc defines */ #define ATA_PRIMARY 0x1f0 #define ATA_SECONDARY 0x170 #define ATA_IOSIZE 0x08 #define ATA_CTLIOSIZE 0x01 #define ATA_BMIOSIZE 0x08 #define ATA_IOADDR_RID 0 #define ATA_CTLADDR_RID 1 #define ATA_BMADDR_RID 0x20 #define ATA_IRQ_RID 0 #define ATA_DEV(unit) ((unit > 0) ? 
0x10 : 0) #define ATA_CFA_MAGIC1 0x844A #define ATA_CFA_MAGIC2 0x848A #define ATA_CFA_MAGIC3 0x8400 #define ATAPI_MAGIC_LSB 0x14 #define ATAPI_MAGIC_MSB 0xeb #define ATAPI_P_READ (ATA_S_DRQ | ATA_I_IN) #define ATAPI_P_WRITE (ATA_S_DRQ) #define ATAPI_P_CMDOUT (ATA_S_DRQ | ATA_I_CMD) #define ATAPI_P_DONEDRQ (ATA_S_DRQ | ATA_I_CMD | ATA_I_IN) #define ATAPI_P_DONE (ATA_I_CMD | ATA_I_IN) #define ATAPI_P_ABORT 0 #define ATA_INTR_FLAGS (INTR_MPSAFE|INTR_TYPE_BIO|INTR_ENTROPY) #define ATA_OP_CONTINUES 0 #define ATA_OP_FINISHED 1 #define ATA_MAX_28BIT_LBA 268435455UL -#ifndef ATA_REQUEST_TIMEOUT -#define ATA_REQUEST_TIMEOUT 10 -#endif - /* structure used for composite atomic operations */ #define MAX_COMPOSITES 32 /* u_int32_t bits */ struct ata_composite { struct mtx lock; /* control lock */ u_int32_t rd_needed; /* needed read subdisks */ u_int32_t rd_done; /* done read subdisks */ u_int32_t wr_needed; /* needed write subdisks */ u_int32_t wr_depend; /* write depends on subdisks */ u_int32_t wr_done; /* done write subdisks */ struct ata_request *request[MAX_COMPOSITES]; u_int32_t residual; /* bytes still to transfer */ caddr_t data_1; caddr_t data_2; }; /* structure used to queue an ATA/ATAPI request */ struct ata_request { device_t dev; /* device handle */ device_t parent; /* channel handle */ int unit; /* physical unit */ union { struct { u_int8_t command; /* command reg */ u_int16_t feature; /* feature reg */ u_int16_t count; /* count reg */ u_int64_t lba; /* lba reg */ } ata; struct { u_int8_t ccb[16]; /* ATAPI command block */ struct atapi_sense sense; /* ATAPI request sense data */ u_int8_t saved_cmd; /* ATAPI saved command */ } atapi; } u; u_int32_t bytecount; /* bytes to transfer */ u_int32_t transfersize; /* bytes per transfer */ caddr_t data; /* pointer to data buf */ u_int32_t tag; /* HW tag of this request */ int flags; #define ATA_R_CONTROL 0x00000001 #define ATA_R_READ 0x00000002 #define ATA_R_WRITE 0x00000004 #define ATA_R_ATAPI 0x00000008 #define ATA_R_DMA 0x00000010 #define ATA_R_QUIET 0x00000020 #define ATA_R_TIMEOUT 0x00000040 #define ATA_R_48BIT 0x00000080 #define ATA_R_ORDERED 0x00000100 #define ATA_R_AT_HEAD 0x00000200 #define ATA_R_REQUEUE 0x00000400 #define ATA_R_THREAD 0x00000800 #define ATA_R_DIRECT 0x00001000 #define ATA_R_NEEDRESULT 0x00002000 #define ATA_R_DATA_IN_CCB 0x00004000 #define ATA_R_ATAPI16 0x00010000 #define ATA_R_ATAPI_INTR 0x00020000 #define ATA_R_DEBUG 0x10000000 #define ATA_R_DANGER1 0x20000000 #define ATA_R_DANGER2 0x40000000 struct ata_dmaslot *dma; /* DMA slot of this request */ u_int8_t status; /* ATA status */ u_int8_t error; /* ATA error */ u_int32_t donecount; /* bytes transferred */ int result; /* result error code */ void (*callback)(struct ata_request *request); struct sema done; /* request done sema */ int retries; /* retry count */ int timeout; /* timeout for this cmd */ struct callout callout; /* callout management */ struct task task; /* task management */ struct bio *bio; /* bio for this request */ int this; /* this request ID */ struct ata_composite *composite; /* for composite atomic ops */ void *driver; /* driver specific */ TAILQ_ENTRY(ata_request) chain; /* list management */ union ccb *ccb; }; /* define this for debugging request processing */ #if 0 #define ATA_DEBUG_RQ(request, string) \ { \ if (request->flags & ATA_R_DEBUG) \ device_printf(request->parent, "req=%p %s " string "\n", \ request, ata_cmd2str(request)); \ } #else #define ATA_DEBUG_RQ(request, string) #endif /* structure describing an ATA/ATAPI device */ struct ata_device
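For reference, ATA_MAX_28BIT_LBA above is 268435455 = 2^28 - 1, the largest sector number a 28-bit LBA command can address (with 512-byte sectors about 137 GB; anything larger needs the 48-bit path flagged by ATA_R_48BIT). As a worked example of the SATA SStatus fields defined earlier, a small decode helper; it assumes the ATA_SS_SPD_* macros above are in scope and is not part of ata(4) itself:

#include <stdint.h>

static const char *
ata_sstatus_speed(uint32_t sstatus)
{

	switch (sstatus & ATA_SS_SPD_MASK) {
	case ATA_SS_SPD_GEN1:
		return ("SATA 1.5Gb/s");
	case ATA_SS_SPD_GEN2:
		return ("SATA 3Gb/s");
	case ATA_SS_SPD_GEN3:
		return ("SATA 6Gb/s");
	default:
		return ("no negotiated speed");
	}
}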
{ device_t dev; /* device handle */ int unit; /* physical unit */ #define ATA_MASTER 0x00 #define ATA_SLAVE 0x01 #define ATA_PM 0x0f struct ata_params param; /* ata param structure */ int mode; /* current transfermode */ u_int32_t max_iosize; /* max IO size */ int spindown; /* idle spindown timeout */ struct callout spindown_timer; int spindown_state; int flags; #define ATA_D_USE_CHS 0x0001 #define ATA_D_MEDIA_CHANGED 0x0002 #define ATA_D_ENC_PRESENT 0x0004 }; /* structure for holding DMA Physical Region Descriptors (PRD) entries */ struct ata_dma_prdentry { u_int32_t addr; u_int32_t count; }; /* structure used by the setprd function */ struct ata_dmasetprd_args { void *dmatab; int nsegs; int error; }; struct ata_dmaslot { u_int8_t status; /* DMA status */ bus_dma_tag_t sg_tag; /* SG list DMA tag */ bus_dmamap_t sg_map; /* SG list DMA map */ void *sg; /* DMA transfer table */ bus_addr_t sg_bus; /* bus address of dmatab */ bus_dma_tag_t data_tag; /* data DMA tag */ bus_dmamap_t data_map; /* data DMA map */ }; /* structure holding DMA related information */ struct ata_dma { bus_dma_tag_t dmatag; /* parent DMA tag */ bus_dma_tag_t work_tag; /* workspace DMA tag */ bus_dmamap_t work_map; /* workspace DMA map */ u_int8_t *work; /* workspace */ bus_addr_t work_bus; /* bus address of dmatab */ #define ATA_DMA_SLOTS 1 int dma_slots; /* DMA slots allocated */ struct ata_dmaslot slot[ATA_DMA_SLOTS]; u_int32_t alignment; /* DMA SG list alignment */ u_int32_t boundary; /* DMA SG list boundary */ u_int32_t segsize; /* DMA SG list segment size */ u_int32_t max_iosize; /* DMA data max IO size */ u_int64_t max_address; /* highest DMA'able address */ int flags; #define ATA_DMA_ACTIVE 0x01 /* DMA transfer in progress */ void (*alloc)(device_t dev); void (*free)(device_t dev); void (*setprd)(void *xsc, bus_dma_segment_t *segs, int nsegs, int error); int (*load)(struct ata_request *request, void *addr, int *nsegs); int (*unload)(struct ata_request *request); int (*start)(struct ata_request *request); int (*stop)(struct ata_request *request); void (*reset)(device_t dev); }; /* structure holding lowlevel functions */ struct ata_lowlevel { u_int32_t (*softreset)(device_t dev, int pmport); int (*pm_read)(device_t dev, int port, int reg, u_int32_t *result); int (*pm_write)(device_t dev, int port, int reg, u_int32_t value); int (*status)(device_t dev); int (*begin_transaction)(struct ata_request *request); int (*end_transaction)(struct ata_request *request); int (*command)(struct ata_request *request); void (*tf_read)(struct ata_request *request); void (*tf_write)(struct ata_request *request); }; /* structure holding resources for an ATA channel */ struct ata_resource { struct resource *res; int offset; }; struct ata_cam_device { u_int revision; int mode; u_int bytecount; u_int atapi; u_int caps; }; /* structure describing an ATA channel */ struct ata_channel { device_t dev; /* device handle */ int unit; /* physical channel */ int attached; /* channel is attached */ struct ata_resource r_io[ATA_MAX_RES];/* I/O resources */ struct resource *r_irq; /* interrupt of this channel */ void *ih; /* interrupt handle */ struct ata_lowlevel hw; /* lowlevel HW functions */ struct ata_dma dma; /* DMA data / functions */ int flags; /* channel flags */ #define ATA_NO_SLAVE 0x01 #define ATA_USE_16BIT 0x02 #define ATA_ATAPI_DMA_RO 0x04 #define ATA_NO_48BIT_DMA 0x08 #define ATA_ALWAYS_DMASTAT 0x10 #define ATA_CHECKS_CABLE 0x20 #define ATA_NO_ATAPI_DMA 0x40 #define ATA_SATA 0x80 #define ATA_DMA_BEFORE_CMD 0x100 #define 
ATA_KNOWN_PRESENCE 0x200 #define ATA_STATUS_IS_LONG 0x400 #define ATA_PERIODIC_POLL 0x800 int pm_level; /* power management level */ int devices; /* what is present */ #define ATA_ATA_MASTER 0x00000001 #define ATA_ATA_SLAVE 0x00000002 #define ATA_PORTMULTIPLIER 0x00008000 #define ATA_ATAPI_MASTER 0x00010000 #define ATA_ATAPI_SLAVE 0x00020000 struct mtx state_mtx; /* state lock */ int state; /* ATA channel state */ #define ATA_IDLE 0x0000 #define ATA_ACTIVE 0x0001 #define ATA_STALL_QUEUE 0x0002 struct ata_request *running; /* currently running request */ struct task conntask; /* PHY events handling task */ struct cam_sim *sim; struct cam_path *path; struct ata_cam_device user[16]; /* User-specified settings */ struct ata_cam_device curr[16]; /* Current settings */ int requestsense; /* CCB waiting for SENSE. */ struct callout poll_callout; /* Periodic status poll. */ struct ata_request request; }; /* disk bay/enclosure related */ #define ATA_LED_OFF 0x00 #define ATA_LED_RED 0x01 #define ATA_LED_GREEN 0x02 #define ATA_LED_ORANGE 0x03 #define ATA_LED_MASK 0x03 /* externs */ extern int (*ata_raid_ioctl_func)(u_long cmd, caddr_t data); extern struct intr_config_hook *ata_delayed_attach; extern devclass_t ata_devclass; extern int ata_wc; extern int ata_setmax; extern int ata_dma_check_80pin; /* public prototypes */ /* ata-all.c: */ int ata_probe(device_t dev); int ata_attach(device_t dev); int ata_detach(device_t dev); int ata_reinit(device_t dev); int ata_suspend(device_t dev); int ata_resume(device_t dev); void ata_interrupt(void *data); int ata_getparam(struct ata_device *atadev, int init); void ata_default_registers(device_t dev); void ata_udelay(int interval); const char *ata_cmd2str(struct ata_request *request); const char *ata_mode2str(int mode); void ata_setmode(device_t dev); void ata_print_cable(device_t dev, u_int8_t *who); int ata_atapi(device_t dev, int target); void ata_timeout(struct ata_request *); /* ata-lowlevel.c: */ void ata_generic_hw(device_t dev); int ata_begin_transaction(struct ata_request *); int ata_end_transaction(struct ata_request *); void ata_generic_reset(device_t dev); int ata_generic_command(struct ata_request *request); /* ata-dma.c: */ void ata_dmainit(device_t); void ata_dmafini(device_t dev); /* ata-sata.c: */ void ata_sata_phy_check_events(device_t dev, int port); int ata_sata_scr_read(struct ata_channel *ch, int port, int reg, uint32_t *val); int ata_sata_scr_write(struct ata_channel *ch, int port, int reg, uint32_t val); int ata_sata_phy_reset(device_t dev, int port, int quick); int ata_sata_setmode(device_t dev, int target, int mode); int ata_sata_getrev(device_t dev, int target); int ata_request2fis_h2d(struct ata_request *request, u_int8_t *fis); void ata_pm_identify(device_t dev); MALLOC_DECLARE(M_ATA); /* misc newbus defines */ #define GRANDPARENT(dev) device_get_parent(device_get_parent(dev)) /* macros to hide busspace ugliness */ #define ATA_INB(res, offset) \ bus_read_1((res), (offset)) #define ATA_INW(res, offset) \ bus_read_2((res), (offset)) #define ATA_INW_STRM(res, offset) \ bus_read_stream_2((res), (offset)) #define ATA_INL(res, offset) \ bus_read_4((res), (offset)) #define ATA_INSW(res, offset, addr, count) \ bus_read_multi_2((res), (offset), (addr), (count)) #define ATA_INSW_STRM(res, offset, addr, count) \ bus_read_multi_stream_2((res), (offset), (addr), (count)) #define ATA_INSL(res, offset, addr, count) \ bus_read_multi_4((res), (offset), (addr), (count)) #define ATA_INSL_STRM(res, offset, addr, count) \ bus_read_multi_stream_4((res),
(offset), (addr), (count)) #define ATA_OUTB(res, offset, value) \ bus_write_1((res), (offset), (value)) #define ATA_OUTW(res, offset, value) \ bus_write_2((res), (offset), (value)) #define ATA_OUTW_STRM(res, offset, value) \ bus_write_stream_2((res), (offset), (value)) #define ATA_OUTL(res, offset, value) \ bus_write_4((res), (offset), (value)) #define ATA_OUTSW(res, offset, addr, count) \ bus_write_multi_2((res), (offset), (addr), (count)) #define ATA_OUTSW_STRM(res, offset, addr, count) \ bus_write_multi_stream_2((res), (offset), (addr), (count)) #define ATA_OUTSL(res, offset, addr, count) \ bus_write_multi_4((res), (offset), (addr), (count)) #define ATA_OUTSL_STRM(res, offset, addr, count) \ bus_write_multi_stream_4((res), (offset), (addr), (count)) #define ATA_IDX_INB(ch, idx) \ ATA_INB(ch->r_io[idx].res, ch->r_io[idx].offset) #define ATA_IDX_INW(ch, idx) \ ATA_INW(ch->r_io[idx].res, ch->r_io[idx].offset) #define ATA_IDX_INW_STRM(ch, idx) \ ATA_INW_STRM(ch->r_io[idx].res, ch->r_io[idx].offset) #define ATA_IDX_INL(ch, idx) \ ATA_INL(ch->r_io[idx].res, ch->r_io[idx].offset) #define ATA_IDX_INSW(ch, idx, addr, count) \ ATA_INSW(ch->r_io[idx].res, ch->r_io[idx].offset, addr, count) #define ATA_IDX_INSW_STRM(ch, idx, addr, count) \ ATA_INSW_STRM(ch->r_io[idx].res, ch->r_io[idx].offset, addr, count) #define ATA_IDX_INSL(ch, idx, addr, count) \ ATA_INSL(ch->r_io[idx].res, ch->r_io[idx].offset, addr, count) #define ATA_IDX_INSL_STRM(ch, idx, addr, count) \ ATA_INSL_STRM(ch->r_io[idx].res, ch->r_io[idx].offset, addr, count) #define ATA_IDX_OUTB(ch, idx, value) \ ATA_OUTB(ch->r_io[idx].res, ch->r_io[idx].offset, value) #define ATA_IDX_OUTW(ch, idx, value) \ ATA_OUTW(ch->r_io[idx].res, ch->r_io[idx].offset, value) #define ATA_IDX_OUTW_STRM(ch, idx, value) \ ATA_OUTW_STRM(ch->r_io[idx].res, ch->r_io[idx].offset, value) #define ATA_IDX_OUTL(ch, idx, value) \ ATA_OUTL(ch->r_io[idx].res, ch->r_io[idx].offset, value) #define ATA_IDX_OUTSW(ch, idx, addr, count) \ ATA_OUTSW(ch->r_io[idx].res, ch->r_io[idx].offset, addr, count) #define ATA_IDX_OUTSW_STRM(ch, idx, addr, count) \ ATA_OUTSW_STRM(ch->r_io[idx].res, ch->r_io[idx].offset, addr, count) #define ATA_IDX_OUTSL(ch, idx, addr, count) \ ATA_OUTSL(ch->r_io[idx].res, ch->r_io[idx].offset, addr, count) #define ATA_IDX_OUTSL_STRM(ch, idx, addr, count) \ ATA_OUTSL_STRM(ch->r_io[idx].res, ch->r_io[idx].offset, addr, count) Index: projects/import-googletest-1.8.1/sys/dev/ena/ena.c =================================================================== --- projects/import-googletest-1.8.1/sys/dev/ena/ena.c (revision 344270) +++ projects/import-googletest-1.8.1/sys/dev/ena/ena.c (revision 344271) @@ -1,3955 +1,3964 @@ /*- * BSD LICENSE * * Copyright (c) 2015-2017 Amazon.com, Inc. or its affiliates. * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ #include __FBSDID("$FreeBSD$"); #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include "ena.h" #include "ena_sysctl.h" /********************************************************* * Function prototypes *********************************************************/ static int ena_probe(device_t); static void ena_intr_msix_mgmnt(void *); static int ena_allocate_pci_resources(struct ena_adapter*); static void ena_free_pci_resources(struct ena_adapter *); static int ena_change_mtu(if_t, int); static inline void ena_alloc_counters(counter_u64_t *, int); static inline void ena_free_counters(counter_u64_t *, int); static inline void ena_reset_counters(counter_u64_t *, int); static void ena_init_io_rings_common(struct ena_adapter *, struct ena_ring *, uint16_t); static void ena_init_io_rings(struct ena_adapter *); static void ena_free_io_ring_resources(struct ena_adapter *, unsigned int); static void ena_free_all_io_rings_resources(struct ena_adapter *); static int ena_setup_tx_dma_tag(struct ena_adapter *); static int ena_free_tx_dma_tag(struct ena_adapter *); static int ena_setup_rx_dma_tag(struct ena_adapter *); static int ena_free_rx_dma_tag(struct ena_adapter *); static int ena_setup_tx_resources(struct ena_adapter *, int); static void ena_free_tx_resources(struct ena_adapter *, int); static int ena_setup_all_tx_resources(struct ena_adapter *); static void ena_free_all_tx_resources(struct ena_adapter *); static inline int validate_rx_req_id(struct ena_ring *, uint16_t); static int ena_setup_rx_resources(struct ena_adapter *, unsigned int); static void ena_free_rx_resources(struct ena_adapter *, unsigned int); static int ena_setup_all_rx_resources(struct ena_adapter *); static void ena_free_all_rx_resources(struct ena_adapter *); static inline int ena_alloc_rx_mbuf(struct ena_adapter *, struct ena_ring *, struct ena_rx_buffer *); static void ena_free_rx_mbuf(struct ena_adapter *, struct ena_ring *, struct ena_rx_buffer *); static int ena_refill_rx_bufs(struct ena_ring *, uint32_t); static void ena_free_rx_bufs(struct ena_adapter *, unsigned int); static void ena_refill_all_rx_bufs(struct ena_adapter *); static void ena_free_all_rx_bufs(struct ena_adapter *); static void ena_free_tx_bufs(struct ena_adapter *, unsigned int); static void ena_free_all_tx_bufs(struct ena_adapter *); static void ena_destroy_all_tx_queues(struct ena_adapter *); static void ena_destroy_all_rx_queues(struct ena_adapter *); static void ena_destroy_all_io_queues(struct ena_adapter *); static int ena_create_io_queues(struct ena_adapter *); static int ena_tx_cleanup(struct ena_ring *); static void ena_deferred_rx_cleanup(void *, int); static int ena_rx_cleanup(struct ena_ring *); static inline int 
validate_tx_req_id(struct ena_ring *, uint16_t); static void ena_rx_hash_mbuf(struct ena_ring *, struct ena_com_rx_ctx *, struct mbuf *); static struct mbuf* ena_rx_mbuf(struct ena_ring *, struct ena_com_rx_buf_info *, struct ena_com_rx_ctx *, uint16_t *); static inline void ena_rx_checksum(struct ena_ring *, struct ena_com_rx_ctx *, struct mbuf *); static void ena_handle_msix(void *); static int ena_enable_msix(struct ena_adapter *); static void ena_setup_mgmnt_intr(struct ena_adapter *); static void ena_setup_io_intr(struct ena_adapter *); static int ena_request_mgmnt_irq(struct ena_adapter *); static int ena_request_io_irq(struct ena_adapter *); static void ena_free_mgmnt_irq(struct ena_adapter *); static void ena_free_io_irq(struct ena_adapter *); static void ena_free_irqs(struct ena_adapter*); static void ena_disable_msix(struct ena_adapter *); static void ena_unmask_all_io_irqs(struct ena_adapter *); static int ena_rss_configure(struct ena_adapter *); static int ena_up_complete(struct ena_adapter *); static int ena_up(struct ena_adapter *); static void ena_down(struct ena_adapter *); static uint64_t ena_get_counter(if_t, ift_counter); static int ena_media_change(if_t); static void ena_media_status(if_t, struct ifmediareq *); static void ena_init(void *); static int ena_ioctl(if_t, u_long, caddr_t); static int ena_get_dev_offloads(struct ena_com_dev_get_features_ctx *); static void ena_update_host_info(struct ena_admin_host_info *, if_t); static void ena_update_hwassist(struct ena_adapter *); static int ena_setup_ifnet(device_t, struct ena_adapter *, struct ena_com_dev_get_features_ctx *); static void ena_tx_csum(struct ena_com_tx_ctx *, struct mbuf *); static int ena_check_and_collapse_mbuf(struct ena_ring *tx_ring, struct mbuf **mbuf); static int ena_xmit_mbuf(struct ena_ring *, struct mbuf **); static void ena_start_xmit(struct ena_ring *); static int ena_mq_start(if_t, struct mbuf *); static void ena_deferred_mq_start(void *, int); static void ena_qflush(if_t); static int ena_calc_io_queue_num(struct ena_adapter *, struct ena_com_dev_get_features_ctx *); static int ena_calc_queue_size(struct ena_adapter *, uint16_t *, uint16_t *, struct ena_com_dev_get_features_ctx *); static int ena_rss_init_default(struct ena_adapter *); static void ena_rss_init_default_deferred(void *); static void ena_config_host_info(struct ena_com_dev *); static int ena_attach(device_t); static int ena_detach(device_t); static int ena_device_init(struct ena_adapter *, device_t, struct ena_com_dev_get_features_ctx *, int *); static int ena_enable_msix_and_set_admin_interrupts(struct ena_adapter *, int); static void ena_update_on_link_change(void *, struct ena_admin_aenq_entry *); static void unimplemented_aenq_handler(void *, struct ena_admin_aenq_entry *); static void ena_timer_service(void *); static char ena_version[] = DEVICE_NAME DRV_MODULE_NAME " v" DRV_MODULE_VERSION; static SYSCTL_NODE(_hw, OID_AUTO, ena, CTLFLAG_RD, 0, "ENA driver parameters"); /* * Tuneable number of buffers in the buf-ring (drbr) */ static int ena_buf_ring_size = 4096; SYSCTL_INT(_hw_ena, OID_AUTO, buf_ring_size, CTLFLAG_RWTUN, &ena_buf_ring_size, 0, "Size of the bufring"); /* * Logging level for changing verbosity of the output */ int ena_log_level = ENA_ALERT | ENA_WARNING; SYSCTL_INT(_hw_ena, OID_AUTO, log_level, CTLFLAG_RWTUN, &ena_log_level, 0, "Logging level indicating verbosity of the logs"); static ena_vendor_info_t ena_vendor_info_array[] = { { PCI_VENDOR_ID_AMAZON, PCI_DEV_ID_ENA_PF, 0}, { PCI_VENDOR_ID_AMAZON, 
PCI_DEV_ID_ENA_LLQ_PF, 0}, { PCI_VENDOR_ID_AMAZON, PCI_DEV_ID_ENA_VF, 0}, { PCI_VENDOR_ID_AMAZON, PCI_DEV_ID_ENA_LLQ_VF, 0}, /* Last entry */ { 0, 0, 0 } }; /* * Contains pointers to event handlers, e.g. link state change. */ static struct ena_aenq_handlers aenq_handlers; void ena_dmamap_callback(void *arg, bus_dma_segment_t *segs, int nseg, int error) { if (error != 0) return; *(bus_addr_t *) arg = segs[0].ds_addr; } int ena_dma_alloc(device_t dmadev, bus_size_t size, ena_mem_handle_t *dma , int mapflags) { struct ena_adapter* adapter = device_get_softc(dmadev); uint32_t maxsize; uint64_t dma_space_addr; int error; maxsize = ((size - 1) / PAGE_SIZE + 1) * PAGE_SIZE; dma_space_addr = ENA_DMA_BIT_MASK(adapter->dma_width); if (unlikely(dma_space_addr == 0)) dma_space_addr = BUS_SPACE_MAXADDR; error = bus_dma_tag_create(bus_get_dma_tag(dmadev), /* parent */ 8, 0, /* alignment, bounds */ dma_space_addr, /* lowaddr of exclusion window */ BUS_SPACE_MAXADDR,/* highaddr of exclusion window */ NULL, NULL, /* filter, filterarg */ maxsize, /* maxsize */ 1, /* nsegments */ maxsize, /* maxsegsize */ BUS_DMA_ALLOCNOW, /* flags */ NULL, /* lockfunc */ NULL, /* lockarg */ &dma->tag); if (unlikely(error != 0)) { ena_trace(ENA_ALERT, "bus_dma_tag_create failed: %d\n", error); goto fail_tag; } error = bus_dmamem_alloc(dma->tag, (void**) &dma->vaddr, BUS_DMA_COHERENT | BUS_DMA_ZERO, &dma->map); if (unlikely(error != 0)) { ena_trace(ENA_ALERT, "bus_dmamem_alloc(%ju) failed: %d\n", (uintmax_t)size, error); goto fail_map_create; } dma->paddr = 0; error = bus_dmamap_load(dma->tag, dma->map, dma->vaddr, size, ena_dmamap_callback, &dma->paddr, mapflags); if (unlikely((error != 0) || (dma->paddr == 0))) { ena_trace(ENA_ALERT, ": bus_dmamap_load failed: %d\n", error); goto fail_map_load; } return (0); fail_map_load: bus_dmamem_free(dma->tag, dma->vaddr, dma->map); fail_map_create: bus_dma_tag_destroy(dma->tag); fail_tag: dma->tag = NULL; return (error); } static int ena_allocate_pci_resources(struct ena_adapter* adapter) { device_t pdev = adapter->pdev; int rid; rid = PCIR_BAR(ENA_REG_BAR); adapter->memory = NULL; adapter->registers = bus_alloc_resource_any(pdev, SYS_RES_MEMORY, &rid, RF_ACTIVE); if (unlikely(adapter->registers == NULL)) { device_printf(pdev, "Unable to allocate bus resource: " "registers\n"); return (ENXIO); } return (0); } static void ena_free_pci_resources(struct ena_adapter *adapter) { device_t pdev = adapter->pdev; if (adapter->memory != NULL) { bus_release_resource(pdev, SYS_RES_MEMORY, PCIR_BAR(ENA_MEM_BAR), adapter->memory); } if (adapter->registers != NULL) { bus_release_resource(pdev, SYS_RES_MEMORY, PCIR_BAR(ENA_REG_BAR), adapter->registers); } } static int ena_probe(device_t dev) { ena_vendor_info_t *ent; char adapter_name[60]; uint16_t pci_vendor_id = 0; uint16_t pci_device_id = 0; pci_vendor_id = pci_get_vendor(dev); pci_device_id = pci_get_device(dev); ent = ena_vendor_info_array; while (ent->vendor_id != 0) { if ((pci_vendor_id == ent->vendor_id) && (pci_device_id == ent->device_id)) { ena_trace(ENA_DBG, "vendor=%x device=%x ", pci_vendor_id, pci_device_id); sprintf(adapter_name, DEVICE_DESC); device_set_desc_copy(dev, adapter_name); return (BUS_PROBE_DEFAULT); } ent++; } return (ENXIO); } static int ena_change_mtu(if_t ifp, int new_mtu) { struct ena_adapter *adapter = if_getsoftc(ifp); int rc; if ((new_mtu > adapter->max_mtu) || (new_mtu < ENA_MIN_MTU)) { device_printf(adapter->pdev, "Invalid MTU setting. 
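ena_dma_alloc() above is a textbook example of the cascading-label unwind idiom used throughout FreeBSD drivers: each failure jumps to a label that releases only what has been acquired so far, falling through the earlier labels in reverse order of acquisition. A minimal sketch of the shape; step_*() and undo_*() are hypothetical placeholders, not driver functions:

static int step_tag(void), step_alloc(void), step_load(void);
static void undo_tag(void), undo_alloc(void);

static int
setup(void)
{

	if (step_tag() != 0)		/* cf. bus_dma_tag_create() */
		goto fail_tag;
	if (step_alloc() != 0)		/* cf. bus_dmamem_alloc() */
		goto fail_alloc;
	if (step_load() != 0)		/* cf. bus_dmamap_load() */
		goto fail_load;
	return (0);
fail_load:
	undo_alloc();			/* cf. bus_dmamem_free() */
fail_alloc:
	undo_tag();			/* cf. bus_dma_tag_destroy() */
fail_tag:
	return (-1);
}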
" "new_mtu: %d max mtu: %d min mtu: %d\n", new_mtu, adapter->max_mtu, ENA_MIN_MTU); return (EINVAL); } rc = ena_com_set_dev_mtu(adapter->ena_dev, new_mtu); if (likely(rc == 0)) { ena_trace(ENA_DBG, "set MTU to %d\n", new_mtu); if_setmtu(ifp, new_mtu); } else { device_printf(adapter->pdev, "Failed to set MTU to %d\n", new_mtu); } return (rc); } static inline void ena_alloc_counters(counter_u64_t *begin, int size) { counter_u64_t *end = (counter_u64_t *)((char *)begin + size); for (; begin < end; ++begin) *begin = counter_u64_alloc(M_WAITOK); } static inline void ena_free_counters(counter_u64_t *begin, int size) { counter_u64_t *end = (counter_u64_t *)((char *)begin + size); for (; begin < end; ++begin) counter_u64_free(*begin); } static inline void ena_reset_counters(counter_u64_t *begin, int size) { counter_u64_t *end = (counter_u64_t *)((char *)begin + size); for (; begin < end; ++begin) counter_u64_zero(*begin); } static void ena_init_io_rings_common(struct ena_adapter *adapter, struct ena_ring *ring, uint16_t qid) { ring->qid = qid; ring->adapter = adapter; ring->ena_dev = adapter->ena_dev; } static void ena_init_io_rings(struct ena_adapter *adapter) { struct ena_com_dev *ena_dev; struct ena_ring *txr, *rxr; struct ena_que *que; int i; ena_dev = adapter->ena_dev; for (i = 0; i < adapter->num_queues; i++) { txr = &adapter->tx_ring[i]; rxr = &adapter->rx_ring[i]; /* TX/RX common ring state */ ena_init_io_rings_common(adapter, txr, i); ena_init_io_rings_common(adapter, rxr, i); /* TX specific ring state */ txr->ring_size = adapter->tx_ring_size; txr->tx_max_header_size = ena_dev->tx_max_header_size; txr->tx_mem_queue_type = ena_dev->tx_mem_queue_type; txr->smoothed_interval = ena_com_get_nonadaptive_moderation_interval_tx(ena_dev); /* Allocate a buf ring */ txr->br = buf_ring_alloc(ena_buf_ring_size, M_DEVBUF, M_WAITOK, &txr->ring_mtx); /* Alloc TX statistics. */ ena_alloc_counters((counter_u64_t *)&txr->tx_stats, sizeof(txr->tx_stats)); /* RX specific ring state */ rxr->ring_size = adapter->rx_ring_size; rxr->smoothed_interval = ena_com_get_nonadaptive_moderation_interval_rx(ena_dev); /* Alloc RX statistics. 
*/ ena_alloc_counters((counter_u64_t *)&rxr->rx_stats, sizeof(rxr->rx_stats)); /* Initialize locks */ snprintf(txr->mtx_name, nitems(txr->mtx_name), "%s:tx(%d)", device_get_nameunit(adapter->pdev), i); snprintf(rxr->mtx_name, nitems(rxr->mtx_name), "%s:rx(%d)", device_get_nameunit(adapter->pdev), i); mtx_init(&txr->ring_mtx, txr->mtx_name, NULL, MTX_DEF); mtx_init(&rxr->ring_mtx, rxr->mtx_name, NULL, MTX_DEF); que = &adapter->que[i]; que->adapter = adapter; que->id = i; que->tx_ring = txr; que->rx_ring = rxr; txr->que = que; rxr->que = que; rxr->empty_rx_queue = 0; } } static void ena_free_io_ring_resources(struct ena_adapter *adapter, unsigned int qid) { struct ena_ring *txr = &adapter->tx_ring[qid]; struct ena_ring *rxr = &adapter->rx_ring[qid]; ena_free_counters((counter_u64_t *)&txr->tx_stats, sizeof(txr->tx_stats)); ena_free_counters((counter_u64_t *)&rxr->rx_stats, sizeof(rxr->rx_stats)); ENA_RING_MTX_LOCK(txr); drbr_free(txr->br, M_DEVBUF); ENA_RING_MTX_UNLOCK(txr); mtx_destroy(&txr->ring_mtx); mtx_destroy(&rxr->ring_mtx); } static void ena_free_all_io_rings_resources(struct ena_adapter *adapter) { int i; for (i = 0; i < adapter->num_queues; i++) ena_free_io_ring_resources(adapter, i); } static int ena_setup_tx_dma_tag(struct ena_adapter *adapter) { int ret; /* Create DMA tag for Tx buffers */ ret = bus_dma_tag_create(bus_get_dma_tag(adapter->pdev), 1, 0, /* alignment, bounds */ ENA_DMA_BIT_MASK(adapter->dma_width), /* lowaddr of excl window */ BUS_SPACE_MAXADDR, /* highaddr of excl window */ NULL, NULL, /* filter, filterarg */ ENA_TSO_MAXSIZE, /* maxsize */ adapter->max_tx_sgl_size - 1, /* nsegments */ ENA_TSO_MAXSIZE, /* maxsegsize */ 0, /* flags */ NULL, /* lockfunc */ NULL, /* lockfuncarg */ &adapter->tx_buf_tag); return (ret); } static int ena_free_tx_dma_tag(struct ena_adapter *adapter) { int ret; ret = bus_dma_tag_destroy(adapter->tx_buf_tag); if (likely(ret == 0)) adapter->tx_buf_tag = NULL; return (ret); } static int ena_setup_rx_dma_tag(struct ena_adapter *adapter) { int ret; /* Create DMA tag for Rx buffers*/ ret = bus_dma_tag_create(bus_get_dma_tag(adapter->pdev), /* parent */ 1, 0, /* alignment, bounds */ ENA_DMA_BIT_MASK(adapter->dma_width), /* lowaddr of excl window */ BUS_SPACE_MAXADDR, /* highaddr of excl window */ NULL, NULL, /* filter, filterarg */ MJUM16BYTES, /* maxsize */ adapter->max_rx_sgl_size, /* nsegments */ MJUM16BYTES, /* maxsegsize */ 0, /* flags */ NULL, /* lockfunc */ NULL, /* lockarg */ &adapter->rx_buf_tag); return (ret); } static int ena_free_rx_dma_tag(struct ena_adapter *adapter) { int ret; ret = bus_dma_tag_destroy(adapter->rx_buf_tag); if (likely(ret == 0)) adapter->rx_buf_tag = NULL; return (ret); } /** * ena_setup_tx_resources - allocate Tx resources (Descriptors) * @adapter: network interface device structure * @qid: queue index * * Returns 0 on success, otherwise on failure. 
**/ static int ena_setup_tx_resources(struct ena_adapter *adapter, int qid) { struct ena_que *que = &adapter->que[qid]; struct ena_ring *tx_ring = que->tx_ring; int size, i, err; #ifdef RSS cpuset_t cpu_mask; #endif size = sizeof(struct ena_tx_buffer) * tx_ring->ring_size; tx_ring->tx_buffer_info = malloc(size, M_DEVBUF, M_NOWAIT | M_ZERO); if (unlikely(tx_ring->tx_buffer_info == NULL)) return (ENOMEM); size = sizeof(uint16_t) * tx_ring->ring_size; tx_ring->free_tx_ids = malloc(size, M_DEVBUF, M_NOWAIT | M_ZERO); if (unlikely(tx_ring->free_tx_ids == NULL)) goto err_buf_info_free; /* Req id stack for TX OOO completions */ for (i = 0; i < tx_ring->ring_size; i++) tx_ring->free_tx_ids[i] = i; /* Reset TX statistics. */ ena_reset_counters((counter_u64_t *)&tx_ring->tx_stats, sizeof(tx_ring->tx_stats)); tx_ring->next_to_use = 0; tx_ring->next_to_clean = 0; /* Make sure that drbr is empty */ ENA_RING_MTX_LOCK(tx_ring); drbr_flush(adapter->ifp, tx_ring->br); ENA_RING_MTX_UNLOCK(tx_ring); /* ... and create the buffer DMA maps */ for (i = 0; i < tx_ring->ring_size; i++) { err = bus_dmamap_create(adapter->tx_buf_tag, 0, &tx_ring->tx_buffer_info[i].map); if (unlikely(err != 0)) { ena_trace(ENA_ALERT, "Unable to create Tx DMA map for buffer %d\n", i); goto err_buf_info_unmap; } } /* Allocate taskqueues */ TASK_INIT(&tx_ring->enqueue_task, 0, ena_deferred_mq_start, tx_ring); tx_ring->enqueue_tq = taskqueue_create_fast("ena_tx_enque", M_NOWAIT, taskqueue_thread_enqueue, &tx_ring->enqueue_tq); if (unlikely(tx_ring->enqueue_tq == NULL)) { ena_trace(ENA_ALERT, "Unable to create taskqueue for enqueue task\n"); i = tx_ring->ring_size; goto err_buf_info_unmap; } /* RSS set cpu for thread */ #ifdef RSS CPU_SETOF(que->cpu, &cpu_mask); taskqueue_start_threads_cpuset(&tx_ring->enqueue_tq, 1, PI_NET, &cpu_mask, "%s tx_ring enq (bucket %d)", device_get_nameunit(adapter->pdev), que->cpu); #else /* RSS */ taskqueue_start_threads(&tx_ring->enqueue_tq, 1, PI_NET, "%s txeq %d", device_get_nameunit(adapter->pdev), que->cpu); #endif /* RSS */ return (0); err_buf_info_unmap: while (i--) { bus_dmamap_destroy(adapter->tx_buf_tag, tx_ring->tx_buffer_info[i].map); } free(tx_ring->free_tx_ids, M_DEVBUF); tx_ring->free_tx_ids = NULL; err_buf_info_free: free(tx_ring->tx_buffer_info, M_DEVBUF); tx_ring->tx_buffer_info = NULL; return (ENOMEM); } /** * ena_free_tx_resources - Free Tx Resources per Queue * @adapter: network interface device structure * @qid: queue index * * Free all transmit software resources **/ static void ena_free_tx_resources(struct ena_adapter *adapter, int qid) { struct ena_ring *tx_ring = &adapter->tx_ring[qid]; while (taskqueue_cancel(tx_ring->enqueue_tq, &tx_ring->enqueue_task, NULL)) taskqueue_drain(tx_ring->enqueue_tq, &tx_ring->enqueue_task); taskqueue_free(tx_ring->enqueue_tq); ENA_RING_MTX_LOCK(tx_ring); /* Flush buffer ring, */ drbr_flush(adapter->ifp, tx_ring->br); /* Free buffer DMA maps, */ for (int i = 0; i < tx_ring->ring_size; i++) { m_freem(tx_ring->tx_buffer_info[i].mbuf); tx_ring->tx_buffer_info[i].mbuf = NULL; bus_dmamap_unload(adapter->tx_buf_tag, tx_ring->tx_buffer_info[i].map); bus_dmamap_destroy(adapter->tx_buf_tag, tx_ring->tx_buffer_info[i].map); } ENA_RING_MTX_UNLOCK(tx_ring); /* And free allocated memory. 
*/ free(tx_ring->tx_buffer_info, M_DEVBUF); tx_ring->tx_buffer_info = NULL; free(tx_ring->free_tx_ids, M_DEVBUF); tx_ring->free_tx_ids = NULL; } /** * ena_setup_all_tx_resources - allocate all queues Tx resources * @adapter: network interface device structure * * Returns 0 on success, otherwise on failure. **/ static int ena_setup_all_tx_resources(struct ena_adapter *adapter) { int i, rc; for (i = 0; i < adapter->num_queues; i++) { rc = ena_setup_tx_resources(adapter, i); if (rc != 0) { device_printf(adapter->pdev, "Allocation for Tx Queue %u failed\n", i); goto err_setup_tx; } } return (0); err_setup_tx: /* Rewind the index freeing the rings as we go */ while (i--) ena_free_tx_resources(adapter, i); return (rc); } /** * ena_free_all_tx_resources - Free Tx Resources for All Queues * @adapter: network interface device structure * * Free all transmit software resources **/ static void ena_free_all_tx_resources(struct ena_adapter *adapter) { int i; for (i = 0; i < adapter->num_queues; i++) ena_free_tx_resources(adapter, i); } static inline int validate_rx_req_id(struct ena_ring *rx_ring, uint16_t req_id) { if (likely(req_id < rx_ring->ring_size)) return (0); device_printf(rx_ring->adapter->pdev, "Invalid rx req_id: %hu\n", req_id); counter_u64_add(rx_ring->rx_stats.bad_req_id, 1); /* Trigger device reset */ rx_ring->adapter->reset_reason = ENA_REGS_RESET_INV_RX_REQ_ID; rx_ring->adapter->trigger_reset = true; return (EFAULT); } /** * ena_setup_rx_resources - allocate Rx resources (Descriptors) * @adapter: network interface device structure * @qid: queue index * * Returns 0 on success, otherwise on failure. **/ static int ena_setup_rx_resources(struct ena_adapter *adapter, unsigned int qid) { struct ena_que *que = &adapter->que[qid]; struct ena_ring *rx_ring = que->rx_ring; int size, err, i; #ifdef RSS cpuset_t cpu_mask; #endif size = sizeof(struct ena_rx_buffer) * rx_ring->ring_size; /* * Alloc extra element so in rx path * we can always prefetch rx_info + 1 */ size += sizeof(struct ena_rx_buffer); rx_ring->rx_buffer_info = malloc(size, M_DEVBUF, M_WAITOK | M_ZERO); size = sizeof(uint16_t) * rx_ring->ring_size; rx_ring->free_rx_ids = malloc(size, M_DEVBUF, M_WAITOK); for (i = 0; i < rx_ring->ring_size; i++) rx_ring->free_rx_ids[i] = i; /* Reset RX statistics. */ ena_reset_counters((counter_u64_t *)&rx_ring->rx_stats, sizeof(rx_ring->rx_stats)); rx_ring->next_to_clean = 0; rx_ring->next_to_use = 0; /* ... 
and create the buffer DMA maps */ for (i = 0; i < rx_ring->ring_size; i++) { err = bus_dmamap_create(adapter->rx_buf_tag, 0, &(rx_ring->rx_buffer_info[i].map)); if (err != 0) { ena_trace(ENA_ALERT, "Unable to create Rx DMA map for buffer %d\n", i); goto err_buf_info_unmap; } } /* Create LRO for the ring */ if ((adapter->ifp->if_capenable & IFCAP_LRO) != 0) { int err = tcp_lro_init(&rx_ring->lro); if (err != 0) { device_printf(adapter->pdev, "LRO[%d] Initialization failed!\n", qid); } else { ena_trace(ENA_INFO, "RX Soft LRO[%d] Initialized\n", qid); rx_ring->lro.ifp = adapter->ifp; } } /* Allocate taskqueues */ TASK_INIT(&rx_ring->cmpl_task, 0, ena_deferred_rx_cleanup, rx_ring); rx_ring->cmpl_tq = taskqueue_create_fast("ena RX completion", M_WAITOK, taskqueue_thread_enqueue, &rx_ring->cmpl_tq); /* RSS set cpu for thread */ #ifdef RSS CPU_SETOF(que->cpu, &cpu_mask); taskqueue_start_threads_cpuset(&rx_ring->cmpl_tq, 1, PI_NET, &cpu_mask, "%s rx_ring cmpl (bucket %d)", device_get_nameunit(adapter->pdev), que->cpu); #else taskqueue_start_threads(&rx_ring->cmpl_tq, 1, PI_NET, "%s rx_ring cmpl %d", device_get_nameunit(adapter->pdev), que->cpu); #endif return (0); err_buf_info_unmap: while (i--) { bus_dmamap_destroy(adapter->rx_buf_tag, rx_ring->rx_buffer_info[i].map); } free(rx_ring->free_rx_ids, M_DEVBUF); rx_ring->free_rx_ids = NULL; free(rx_ring->rx_buffer_info, M_DEVBUF); rx_ring->rx_buffer_info = NULL; return (ENOMEM); } /** * ena_free_rx_resources - Free Rx Resources * @adapter: network interface device structure * @qid: queue index * * Free all receive software resources **/ static void ena_free_rx_resources(struct ena_adapter *adapter, unsigned int qid) { struct ena_ring *rx_ring = &adapter->rx_ring[qid]; while (taskqueue_cancel(rx_ring->cmpl_tq, &rx_ring->cmpl_task, NULL) != 0) taskqueue_drain(rx_ring->cmpl_tq, &rx_ring->cmpl_task); taskqueue_free(rx_ring->cmpl_tq); /* Free buffer DMA maps, */ for (int i = 0; i < rx_ring->ring_size; i++) { m_freem(rx_ring->rx_buffer_info[i].mbuf); rx_ring->rx_buffer_info[i].mbuf = NULL; bus_dmamap_unload(adapter->rx_buf_tag, rx_ring->rx_buffer_info[i].map); bus_dmamap_destroy(adapter->rx_buf_tag, rx_ring->rx_buffer_info[i].map); } /* free LRO resources, */ tcp_lro_free(&rx_ring->lro); /* free allocated memory */ free(rx_ring->rx_buffer_info, M_DEVBUF); rx_ring->rx_buffer_info = NULL; free(rx_ring->free_rx_ids, M_DEVBUF); rx_ring->free_rx_ids = NULL; } /** * ena_setup_all_rx_resources - allocate all queues Rx resources * @adapter: network interface device structure * * Returns 0 on success, otherwise on failure. 
**/ static int ena_setup_all_rx_resources(struct ena_adapter *adapter) { int i, rc = 0; for (i = 0; i < adapter->num_queues; i++) { rc = ena_setup_rx_resources(adapter, i); if (rc != 0) { device_printf(adapter->pdev, "Allocation for Rx Queue %u failed\n", i); goto err_setup_rx; } } return (0); err_setup_rx: /* rewind the index freeing the rings as we go */ while (i--) ena_free_rx_resources(adapter, i); return (rc); } /** * ena_free_all_rx_resources - Free Rx resources for all queues * @adapter: network interface device structure * * Free all receive software resources **/ static void ena_free_all_rx_resources(struct ena_adapter *adapter) { int i; for (i = 0; i < adapter->num_queues; i++) ena_free_rx_resources(adapter, i); } static inline int ena_alloc_rx_mbuf(struct ena_adapter *adapter, struct ena_ring *rx_ring, struct ena_rx_buffer *rx_info) { struct ena_com_buf *ena_buf; bus_dma_segment_t segs[1]; int nsegs, error; int mlen; /* if previous allocated frag is not used */ if (unlikely(rx_info->mbuf != NULL)) return (0); /* Get mbuf using UMA allocator */ rx_info->mbuf = m_getjcl(M_NOWAIT, MT_DATA, M_PKTHDR, MJUM16BYTES); if (unlikely(rx_info->mbuf == NULL)) { counter_u64_add(rx_ring->rx_stats.mjum_alloc_fail, 1); rx_info->mbuf = m_getcl(M_NOWAIT, MT_DATA, M_PKTHDR); if (unlikely(rx_info->mbuf == NULL)) { counter_u64_add(rx_ring->rx_stats.mbuf_alloc_fail, 1); return (ENOMEM); } mlen = MCLBYTES; } else { mlen = MJUM16BYTES; } /* Set mbuf length*/ rx_info->mbuf->m_pkthdr.len = rx_info->mbuf->m_len = mlen; /* Map packets for DMA */ ena_trace(ENA_DBG | ENA_RSC | ENA_RXPTH, "Using tag %p for buffers' DMA mapping, mbuf %p len: %d", adapter->rx_buf_tag,rx_info->mbuf, rx_info->mbuf->m_len); error = bus_dmamap_load_mbuf_sg(adapter->rx_buf_tag, rx_info->map, rx_info->mbuf, segs, &nsegs, BUS_DMA_NOWAIT); if (unlikely((error != 0) || (nsegs != 1))) { ena_trace(ENA_WARNING, "failed to map mbuf, error: %d, " "nsegs: %d\n", error, nsegs); counter_u64_add(rx_ring->rx_stats.dma_mapping_err, 1); goto exit; } bus_dmamap_sync(adapter->rx_buf_tag, rx_info->map, BUS_DMASYNC_PREREAD); ena_buf = &rx_info->ena_buf; ena_buf->paddr = segs[0].ds_addr; ena_buf->len = mlen; ena_trace(ENA_DBG | ENA_RSC | ENA_RXPTH, "ALLOC RX BUF: mbuf %p, rx_info %p, len %d, paddr %#jx\n", rx_info->mbuf, rx_info,ena_buf->len, (uintmax_t)ena_buf->paddr); return (0); exit: m_freem(rx_info->mbuf); rx_info->mbuf = NULL; return (EFAULT); } static void ena_free_rx_mbuf(struct ena_adapter *adapter, struct ena_ring *rx_ring, struct ena_rx_buffer *rx_info) { if (rx_info->mbuf == NULL) { ena_trace(ENA_WARNING, "Trying to free unallocated buffer\n"); return; } bus_dmamap_unload(adapter->rx_buf_tag, rx_info->map); m_freem(rx_info->mbuf); rx_info->mbuf = NULL; } /** * ena_refill_rx_bufs - Refills ring with descriptors * @rx_ring: the ring which we want to feed with free descriptors * @num: number of descriptors to refill * Refills the ring with newly allocated DMA-mapped mbufs for receiving **/ static int ena_refill_rx_bufs(struct ena_ring *rx_ring, uint32_t num) { struct ena_adapter *adapter = rx_ring->adapter; uint16_t next_to_use, req_id; uint32_t i; int rc; ena_trace(ENA_DBG | ENA_RXPTH | ENA_RSC, "refill qid: %d", rx_ring->qid); next_to_use = rx_ring->next_to_use; for (i = 0; i < num; i++) { struct ena_rx_buffer *rx_info; ena_trace(ENA_DBG | ENA_RXPTH | ENA_RSC, "RX buffer - next to use: %d", next_to_use); req_id = rx_ring->free_rx_ids[next_to_use]; - rc = validate_rx_req_id(rx_ring, req_id); - if (unlikely(rc != 0)) - break; - rx_info = 
&rx_ring->rx_buffer_info[req_id]; rc = ena_alloc_rx_mbuf(adapter, rx_ring, rx_info); if (unlikely(rc != 0)) { ena_trace(ENA_WARNING, "failed to alloc buffer for rx queue %d\n", rx_ring->qid); break; } rc = ena_com_add_single_rx_desc(rx_ring->ena_com_io_sq, &rx_info->ena_buf, req_id); if (unlikely(rc != 0)) { ena_trace(ENA_WARNING, "failed to add buffer for rx queue %d\n", rx_ring->qid); break; } next_to_use = ENA_RX_RING_IDX_NEXT(next_to_use, rx_ring->ring_size); } if (unlikely(i < num)) { counter_u64_add(rx_ring->rx_stats.refil_partial, 1); ena_trace(ENA_WARNING, "refilled rx qid %d with only %d mbufs (from %d)\n", rx_ring->qid, i, num); } if (likely(i != 0)) { wmb(); ena_com_write_sq_doorbell(rx_ring->ena_com_io_sq); } rx_ring->next_to_use = next_to_use; return (i); } static void ena_free_rx_bufs(struct ena_adapter *adapter, unsigned int qid) { struct ena_ring *rx_ring = &adapter->rx_ring[qid]; unsigned int i; for (i = 0; i < rx_ring->ring_size; i++) { struct ena_rx_buffer *rx_info = &rx_ring->rx_buffer_info[i]; if (rx_info->mbuf != NULL) ena_free_rx_mbuf(adapter, rx_ring, rx_info); } } /** * ena_refill_all_rx_bufs - allocate all queues Rx buffers * @adapter: network interface device structure * */ static void ena_refill_all_rx_bufs(struct ena_adapter *adapter) { struct ena_ring *rx_ring; int i, rc, bufs_num; for (i = 0; i < adapter->num_queues; i++) { rx_ring = &adapter->rx_ring[i]; bufs_num = rx_ring->ring_size - 1; rc = ena_refill_rx_bufs(rx_ring, bufs_num); if (unlikely(rc != bufs_num)) ena_trace(ENA_WARNING, "refilling Queue %d failed. " "Allocated %d buffers from: %d\n", i, rc, bufs_num); } } static void ena_free_all_rx_bufs(struct ena_adapter *adapter) { int i; for (i = 0; i < adapter->num_queues; i++) ena_free_rx_bufs(adapter, i); } /** * ena_free_tx_bufs - Free Tx Buffers per Queue * @adapter: network interface device structure * @qid: queue index **/ static void ena_free_tx_bufs(struct ena_adapter *adapter, unsigned int qid) { bool print_once = true; struct ena_ring *tx_ring = &adapter->tx_ring[qid]; ENA_RING_MTX_LOCK(tx_ring); for (int i = 0; i < tx_ring->ring_size; i++) { struct ena_tx_buffer *tx_info = &tx_ring->tx_buffer_info[i]; if (tx_info->mbuf == NULL) continue; if (print_once) { device_printf(adapter->pdev, "free uncompleted tx mbuf qid %d idx 0x%x", qid, i); print_once = false; } else { ena_trace(ENA_DBG, "free uncompleted tx mbuf qid %d idx 0x%x", qid, i); } bus_dmamap_unload(adapter->tx_buf_tag, tx_info->map); m_free(tx_info->mbuf); tx_info->mbuf = NULL; } ENA_RING_MTX_UNLOCK(tx_ring); } static void ena_free_all_tx_bufs(struct ena_adapter *adapter) { for (int i = 0; i < adapter->num_queues; i++) ena_free_tx_bufs(adapter, i); } static void ena_destroy_all_tx_queues(struct ena_adapter *adapter) { uint16_t ena_qid; int i; for (i = 0; i < adapter->num_queues; i++) { ena_qid = ENA_IO_TXQ_IDX(i); ena_com_destroy_io_queue(adapter->ena_dev, ena_qid); } } static void ena_destroy_all_rx_queues(struct ena_adapter *adapter) { uint16_t ena_qid; int i; for (i = 0; i < adapter->num_queues; i++) { ena_qid = ENA_IO_RXQ_IDX(i); ena_com_destroy_io_queue(adapter->ena_dev, ena_qid); } } static void ena_destroy_all_io_queues(struct ena_adapter *adapter) { ena_destroy_all_tx_queues(adapter); ena_destroy_all_rx_queues(adapter); } static inline int validate_tx_req_id(struct ena_ring *tx_ring, uint16_t req_id) { struct ena_adapter *adapter = tx_ring->adapter; struct ena_tx_buffer *tx_info = NULL; if (likely(req_id < tx_ring->ring_size)) { tx_info = &tx_ring->tx_buffer_info[req_id]; if 
(tx_info->mbuf != NULL) return (0); } if (tx_info == NULL) device_printf(adapter->pdev, "Invalid req_id: %hu\n", req_id); else device_printf(adapter->pdev, "tx_info doesn't have valid mbuf\n"); counter_u64_add(tx_ring->tx_stats.bad_req_id, 1); return (EFAULT); } static int ena_create_io_queues(struct ena_adapter *adapter) { struct ena_com_dev *ena_dev = adapter->ena_dev; struct ena_com_create_io_ctx ctx; struct ena_ring *ring; uint16_t ena_qid; uint32_t msix_vector; int rc, i; /* Create TX queues */ for (i = 0; i < adapter->num_queues; i++) { msix_vector = ENA_IO_IRQ_IDX(i); ena_qid = ENA_IO_TXQ_IDX(i); ctx.mem_queue_type = ena_dev->tx_mem_queue_type; ctx.direction = ENA_COM_IO_QUEUE_DIRECTION_TX; ctx.queue_size = adapter->tx_ring_size; ctx.msix_vector = msix_vector; ctx.qid = ena_qid; rc = ena_com_create_io_queue(ena_dev, &ctx); if (rc != 0) { device_printf(adapter->pdev, "Failed to create io TX queue #%d rc: %d\n", i, rc); goto err_tx; } ring = &adapter->tx_ring[i]; rc = ena_com_get_io_handlers(ena_dev, ena_qid, &ring->ena_com_io_sq, &ring->ena_com_io_cq); if (rc != 0) { device_printf(adapter->pdev, "Failed to get TX queue handlers. TX queue num" " %d rc: %d\n", i, rc); ena_com_destroy_io_queue(ena_dev, ena_qid); goto err_tx; } } /* Create RX queues */ for (i = 0; i < adapter->num_queues; i++) { msix_vector = ENA_IO_IRQ_IDX(i); ena_qid = ENA_IO_RXQ_IDX(i); ctx.mem_queue_type = ENA_ADMIN_PLACEMENT_POLICY_HOST; ctx.direction = ENA_COM_IO_QUEUE_DIRECTION_RX; ctx.queue_size = adapter->rx_ring_size; ctx.msix_vector = msix_vector; ctx.qid = ena_qid; rc = ena_com_create_io_queue(ena_dev, &ctx); if (unlikely(rc != 0)) { device_printf(adapter->pdev, "Failed to create io RX queue[%d] rc: %d\n", i, rc); goto err_rx; } ring = &adapter->rx_ring[i]; rc = ena_com_get_io_handlers(ena_dev, ena_qid, &ring->ena_com_io_sq, &ring->ena_com_io_cq); if (unlikely(rc != 0)) { device_printf(adapter->pdev, "Failed to get RX queue handlers. RX queue num" " %d rc: %d\n", i, rc); ena_com_destroy_io_queue(ena_dev, ena_qid); goto err_rx; } } return (0); err_rx: while (i--) ena_com_destroy_io_queue(ena_dev, ENA_IO_RXQ_IDX(i)); i = adapter->num_queues; err_tx: while (i--) ena_com_destroy_io_queue(ena_dev, ENA_IO_TXQ_IDX(i)); return (ENXIO); } /** * ena_tx_cleanup - clear sent packets and corresponding descriptors * @tx_ring: ring for which we want to clean packets * * Once packets are sent, we ask the device in a loop for no longer used * descriptors. We find the related mbuf chain in a map (index in an array) * and free it, then update ring state. * This is performed in an "endless" loop, updating ring pointers every * TX_COMMIT. The first check of free descriptors is performed before the actual * loop, then repeated at the loop end.
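The doc comment above describes ena_tx_cleanup(), which follows: completions are acknowledged in batches of TX_COMMIT rather than one at a time, with a final ack for any remainder. A compilable userspace model of that commit cadence (all constants and the "completed" counter are invented; printf stands in for ena_com_comp_ack()):

#include <stdio.h>

#define RING_SIZE 64    /* power of two, as the driver's rings are */
#define TX_COMMIT 8
#define TX_BUDGET 32

static unsigned int completed = 20;     /* completions the "device" reports */

static int
tx_cleanup(void)
{
        unsigned int next_to_clean = 0;
        int commit = TX_COMMIT;
        int budget = TX_BUDGET;
        int total_done = 0;

        do {
                if (completed == 0)     /* no more finished descriptors */
                        break;
                completed--;
                total_done++;
                next_to_clean = (next_to_clean + 1) & (RING_SIZE - 1);
                if (--commit == 0) {
                        /* Publish ring state every TX_COMMIT packets. */
                        commit = TX_COMMIT;
                        printf("ack %d, next_to_clean=%u\n", total_done,
                            next_to_clean);
                }
        } while (--budget);

        if (commit != TX_COMMIT)        /* publish the remainder */
                printf("final ack, next_to_clean=%u\n", next_to_clean);
        return (TX_BUDGET - budget);
}

int
main(void)
{
        printf("cleaned %d packets\n", tx_cleanup());
        return (0);
}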
**/ static int ena_tx_cleanup(struct ena_ring *tx_ring) { struct ena_adapter *adapter; struct ena_com_io_cq* io_cq; uint16_t next_to_clean; uint16_t req_id; uint16_t ena_qid; unsigned int total_done = 0; int rc; int commit = TX_COMMIT; int budget = TX_BUDGET; int work_done; adapter = tx_ring->que->adapter; ena_qid = ENA_IO_TXQ_IDX(tx_ring->que->id); io_cq = &adapter->ena_dev->io_cq_queues[ena_qid]; next_to_clean = tx_ring->next_to_clean; do { struct ena_tx_buffer *tx_info; struct mbuf *mbuf; rc = ena_com_tx_comp_req_id_get(io_cq, &req_id); if (unlikely(rc != 0)) break; rc = validate_tx_req_id(tx_ring, req_id); if (unlikely(rc != 0)) break; tx_info = &tx_ring->tx_buffer_info[req_id]; mbuf = tx_info->mbuf; tx_info->mbuf = NULL; bintime_clear(&tx_info->timestamp); if (likely(tx_info->num_of_bufs != 0)) { /* Map is no longer required */ bus_dmamap_unload(adapter->tx_buf_tag, tx_info->map); } ena_trace(ENA_DBG | ENA_TXPTH, "tx: q %d mbuf %p completed", tx_ring->qid, mbuf); m_freem(mbuf); total_done += tx_info->tx_descs; tx_ring->free_tx_ids[next_to_clean] = req_id; next_to_clean = ENA_TX_RING_IDX_NEXT(next_to_clean, tx_ring->ring_size); if (unlikely(--commit == 0)) { commit = TX_COMMIT; /* update ring state every TX_COMMIT descriptor */ tx_ring->next_to_clean = next_to_clean; ena_com_comp_ack( &adapter->ena_dev->io_sq_queues[ena_qid], total_done); ena_com_update_dev_comp_head(io_cq); total_done = 0; } } while (likely(--budget)); work_done = TX_BUDGET - budget; ena_trace(ENA_DBG | ENA_TXPTH, "tx: q %d done. total pkts: %d", tx_ring->qid, work_done); /* If there is still something to commit update ring state */ if (likely(commit != TX_COMMIT)) { tx_ring->next_to_clean = next_to_clean; ena_com_comp_ack(&adapter->ena_dev->io_sq_queues[ena_qid], total_done); ena_com_update_dev_comp_head(io_cq); } taskqueue_enqueue(tx_ring->enqueue_tq, &tx_ring->enqueue_task); return (work_done); } static void ena_rx_hash_mbuf(struct ena_ring *rx_ring, struct ena_com_rx_ctx *ena_rx_ctx, struct mbuf *mbuf) { struct ena_adapter *adapter = rx_ring->adapter; if (likely(adapter->rss_support)) { mbuf->m_pkthdr.flowid = ena_rx_ctx->hash; if (ena_rx_ctx->frag && (ena_rx_ctx->l3_proto != ENA_ETH_IO_L3_PROTO_UNKNOWN)) { M_HASHTYPE_SET(mbuf, M_HASHTYPE_OPAQUE_HASH); return; } switch (ena_rx_ctx->l3_proto) { case ENA_ETH_IO_L3_PROTO_IPV4: switch (ena_rx_ctx->l4_proto) { case ENA_ETH_IO_L4_PROTO_TCP: M_HASHTYPE_SET(mbuf, M_HASHTYPE_RSS_TCP_IPV4); break; case ENA_ETH_IO_L4_PROTO_UDP: M_HASHTYPE_SET(mbuf, M_HASHTYPE_RSS_UDP_IPV4); break; default: M_HASHTYPE_SET(mbuf, M_HASHTYPE_RSS_IPV4); } break; case ENA_ETH_IO_L3_PROTO_IPV6: switch (ena_rx_ctx->l4_proto) { case ENA_ETH_IO_L4_PROTO_TCP: M_HASHTYPE_SET(mbuf, M_HASHTYPE_RSS_TCP_IPV6); break; case ENA_ETH_IO_L4_PROTO_UDP: M_HASHTYPE_SET(mbuf, M_HASHTYPE_RSS_UDP_IPV6); break; default: M_HASHTYPE_SET(mbuf, M_HASHTYPE_RSS_IPV6); } break; case ENA_ETH_IO_L3_PROTO_UNKNOWN: M_HASHTYPE_SET(mbuf, M_HASHTYPE_NONE); break; default: M_HASHTYPE_SET(mbuf, M_HASHTYPE_OPAQUE_HASH); } } else { mbuf->m_pkthdr.flowid = rx_ring->qid; M_HASHTYPE_SET(mbuf, M_HASHTYPE_NONE); } } /** * ena_rx_mbuf - assemble mbuf from descriptors * @rx_ring: ring for which we want to clean packets * @ena_bufs: buffer info * @ena_rx_ctx: metadata for this packet(s) * @next_to_clean: ring pointer, will be updated only upon success * **/ static struct mbuf* ena_rx_mbuf(struct ena_ring *rx_ring, struct ena_com_rx_buf_info *ena_bufs, struct ena_com_rx_ctx *ena_rx_ctx, uint16_t *next_to_clean) { struct mbuf *mbuf; struct 
ena_rx_buffer *rx_info; struct ena_adapter *adapter; unsigned int descs = ena_rx_ctx->descs; + int rc; uint16_t ntc, len, req_id, buf = 0; ntc = *next_to_clean; adapter = rx_ring->adapter; - rx_info = &rx_ring->rx_buffer_info[ntc]; + len = ena_bufs[buf].len; + req_id = ena_bufs[buf].req_id; + rc = validate_rx_req_id(rx_ring, req_id); + if (unlikely(rc != 0)) + return (NULL); + + rx_info = &rx_ring->rx_buffer_info[req_id]; if (unlikely(rx_info->mbuf == NULL)) { device_printf(adapter->pdev, "NULL mbuf in rx_info"); return (NULL); } - len = ena_bufs[buf].len; - req_id = ena_bufs[buf].req_id; - rx_info = &rx_ring->rx_buffer_info[req_id]; - ena_trace(ENA_DBG | ENA_RXPTH, "rx_info %p, mbuf %p, paddr %jx", rx_info, rx_info->mbuf, (uintmax_t)rx_info->ena_buf.paddr); mbuf = rx_info->mbuf; mbuf->m_flags |= M_PKTHDR; mbuf->m_pkthdr.len = len; mbuf->m_len = len; mbuf->m_pkthdr.rcvif = rx_ring->que->adapter->ifp; /* Fill mbuf with hash key and its interpretation for optimization */ ena_rx_hash_mbuf(rx_ring, ena_rx_ctx, mbuf); ena_trace(ENA_DBG | ENA_RXPTH, "rx mbuf 0x%p, flags=0x%x, len: %d", mbuf, mbuf->m_flags, mbuf->m_pkthdr.len); /* DMA address is not needed anymore, unmap it */ bus_dmamap_unload(rx_ring->adapter->rx_buf_tag, rx_info->map); rx_info->mbuf = NULL; rx_ring->free_rx_ids[ntc] = req_id; ntc = ENA_RX_RING_IDX_NEXT(ntc, rx_ring->ring_size); /* * While we have more than 1 descriptor for one rcvd packet, append * other mbufs to the main one */ while (--descs) { ++buf; len = ena_bufs[buf].len; req_id = ena_bufs[buf].req_id; + rc = validate_rx_req_id(rx_ring, req_id); + if (unlikely(rc != 0)) { + /* + * If the req_id is invalid, then the device will be + * reset. In that case we must free all mbufs that + * were already gathered. + */ + m_freem(mbuf); + return (NULL); + } rx_info = &rx_ring->rx_buffer_info[req_id]; if (unlikely(rx_info->mbuf == NULL)) { device_printf(adapter->pdev, "NULL mbuf in rx_info"); /* * If one of the required mbufs was not allocated yet, * we can break here. * All earlier used descriptors will be reallocated * later and unused mbufs can be reused. * The next_to_clean pointer will not be updated in case * of an error, so the caller should advance it manually * in error handling routine to keep it up to date * with hw ring. */ m_freem(mbuf); return (NULL); } if (unlikely(m_append(mbuf, len, rx_info->mbuf->m_data) == 0)) { counter_u64_add(rx_ring->rx_stats.mbuf_alloc_fail, 1); ena_trace(ENA_WARNING, "Failed to append Rx mbuf %p", mbuf); } ena_trace(ENA_DBG | ENA_RXPTH, "rx mbuf updated.
len %d", mbuf->m_pkthdr.len); /* Free already appended mbuf, it won't be useful anymore */ bus_dmamap_unload(rx_ring->adapter->rx_buf_tag, rx_info->map); m_freem(rx_info->mbuf); rx_info->mbuf = NULL; rx_ring->free_rx_ids[ntc] = req_id; ntc = ENA_RX_RING_IDX_NEXT(ntc, rx_ring->ring_size); } *next_to_clean = ntc; return (mbuf); } /** * ena_rx_checksum - indicate in mbuf if hw indicated a good cksum **/ static inline void ena_rx_checksum(struct ena_ring *rx_ring, struct ena_com_rx_ctx *ena_rx_ctx, struct mbuf *mbuf) { /* if IP and error */ if (unlikely((ena_rx_ctx->l3_proto == ENA_ETH_IO_L3_PROTO_IPV4) && ena_rx_ctx->l3_csum_err)) { /* ipv4 checksum error */ mbuf->m_pkthdr.csum_flags = 0; counter_u64_add(rx_ring->rx_stats.bad_csum, 1); ena_trace(ENA_DBG, "RX IPv4 header checksum error"); return; } /* if TCP/UDP */ if ((ena_rx_ctx->l4_proto == ENA_ETH_IO_L4_PROTO_TCP) || (ena_rx_ctx->l4_proto == ENA_ETH_IO_L4_PROTO_UDP)) { if (ena_rx_ctx->l4_csum_err) { /* TCP/UDP checksum error */ mbuf->m_pkthdr.csum_flags = 0; counter_u64_add(rx_ring->rx_stats.bad_csum, 1); ena_trace(ENA_DBG, "RX L4 checksum error"); } else { mbuf->m_pkthdr.csum_flags = CSUM_IP_CHECKED; mbuf->m_pkthdr.csum_flags |= CSUM_IP_VALID; } } } static void ena_deferred_rx_cleanup(void *arg, int pending) { struct ena_ring *rx_ring = arg; int budget = CLEAN_BUDGET; ENA_RING_MTX_LOCK(rx_ring); /* * If deferred task was executed, perform cleanup of all awaiting * descs (or until given budget is depleted to avoid infinite loop). */ while (likely(budget--)) { if (ena_rx_cleanup(rx_ring) == 0) break; } ENA_RING_MTX_UNLOCK(rx_ring); } /** * ena_rx_cleanup - handle rx irq * @arg: ring for which irq is being handled **/ static int ena_rx_cleanup(struct ena_ring *rx_ring) { struct ena_adapter *adapter; struct mbuf *mbuf; struct ena_com_rx_ctx ena_rx_ctx; struct ena_com_io_cq* io_cq; struct ena_com_io_sq* io_sq; if_t ifp; uint16_t ena_qid; uint16_t next_to_clean; uint32_t refill_required; uint32_t refill_threshold; uint32_t do_if_input = 0; unsigned int qid; int rc, i; int budget = RX_BUDGET; adapter = rx_ring->que->adapter; ifp = adapter->ifp; qid = rx_ring->que->id; ena_qid = ENA_IO_RXQ_IDX(qid); io_cq = &adapter->ena_dev->io_cq_queues[ena_qid]; io_sq = &adapter->ena_dev->io_sq_queues[ena_qid]; next_to_clean = rx_ring->next_to_clean; ena_trace(ENA_DBG, "rx: qid %d", qid); do { ena_rx_ctx.ena_bufs = rx_ring->ena_bufs; ena_rx_ctx.max_bufs = adapter->max_rx_sgl_size; ena_rx_ctx.descs = 0; rc = ena_com_rx_pkt(io_cq, io_sq, &ena_rx_ctx); if (unlikely(rc != 0)) goto error; if (unlikely(ena_rx_ctx.descs == 0)) break; ena_trace(ENA_DBG | ENA_RXPTH, "rx: q %d got packet from ena. 
" "descs #: %d l3 proto %d l4 proto %d hash: %x", rx_ring->qid, ena_rx_ctx.descs, ena_rx_ctx.l3_proto, ena_rx_ctx.l4_proto, ena_rx_ctx.hash); /* Receive mbuf from the ring */ mbuf = ena_rx_mbuf(rx_ring, rx_ring->ena_bufs, &ena_rx_ctx, &next_to_clean); /* Exit if we failed to retrieve a buffer */ if (unlikely(mbuf == NULL)) { for (i = 0; i < ena_rx_ctx.descs; ++i) { rx_ring->free_rx_ids[next_to_clean] = rx_ring->ena_bufs[i].req_id; next_to_clean = ENA_RX_RING_IDX_NEXT(next_to_clean, rx_ring->ring_size); } break; } if (((ifp->if_capenable & IFCAP_RXCSUM) != 0) || ((ifp->if_capenable & IFCAP_RXCSUM_IPV6) != 0)) { ena_rx_checksum(rx_ring, &ena_rx_ctx, mbuf); } counter_enter(); counter_u64_add_protected(rx_ring->rx_stats.bytes, mbuf->m_pkthdr.len); counter_u64_add_protected(adapter->hw_stats.rx_bytes, mbuf->m_pkthdr.len); counter_exit(); /* * LRO is only for IP/TCP packets and TCP checksum of the packet * should be computed by hardware. */ do_if_input = 1; if (((ifp->if_capenable & IFCAP_LRO) != 0) && ((mbuf->m_pkthdr.csum_flags & CSUM_IP_VALID) != 0) && (ena_rx_ctx.l4_proto == ENA_ETH_IO_L4_PROTO_TCP)) { /* * Send to the stack if: * - LRO not enabled, or * - no LRO resources, or * - lro enqueue fails */ if ((rx_ring->lro.lro_cnt != 0) && (tcp_lro_rx(&rx_ring->lro, mbuf, 0) == 0)) do_if_input = 0; } if (do_if_input != 0) { ena_trace(ENA_DBG | ENA_RXPTH, "calling if_input() with mbuf %p", mbuf); (*ifp->if_input)(ifp, mbuf); } counter_enter(); counter_u64_add_protected(rx_ring->rx_stats.cnt, 1); counter_u64_add_protected(adapter->hw_stats.rx_packets, 1); counter_exit(); } while (--budget); rx_ring->next_to_clean = next_to_clean; refill_required = ena_com_free_desc(io_sq); refill_threshold = rx_ring->ring_size / ENA_RX_REFILL_THRESH_DIVIDER; if (refill_required > refill_threshold) { ena_com_update_dev_comp_head(rx_ring->ena_com_io_cq); ena_refill_rx_bufs(rx_ring, refill_required); } tcp_lro_flush_all(&rx_ring->lro); return (RX_BUDGET - budget); error: counter_u64_add(rx_ring->rx_stats.bad_desc_num, 1); return (RX_BUDGET - budget); } /********************************************************************* * * MSIX & Interrupt Service routine * **********************************************************************/ /** * ena_handle_msix - MSIX Interrupt Handler for admin/async queue * @arg: interrupt number **/ static void ena_intr_msix_mgmnt(void *arg) { struct ena_adapter *adapter = (struct ena_adapter *)arg; ena_com_admin_q_comp_intr_handler(adapter->ena_dev); if (likely(adapter->running)) ena_com_aenq_intr_handler(adapter->ena_dev, arg); } /** * ena_handle_msix - MSIX Interrupt Handler for Tx/Rx * @arg: interrupt number **/ static void ena_handle_msix(void *arg) { struct ena_que *que = arg; struct ena_adapter *adapter = que->adapter; if_t ifp = adapter->ifp; struct ena_ring *tx_ring; struct ena_ring *rx_ring; struct ena_com_io_cq* io_cq; struct ena_eth_io_intr_reg intr_reg; int qid, ena_qid; int txc, rxc, i; if (unlikely((if_getdrvflags(ifp) & IFF_DRV_RUNNING) == 0)) return; ena_trace(ENA_DBG, "MSI-X TX/RX routine"); tx_ring = que->tx_ring; rx_ring = que->rx_ring; qid = que->id; ena_qid = ENA_IO_TXQ_IDX(qid); io_cq = &adapter->ena_dev->io_cq_queues[ena_qid]; for (i = 0; i < CLEAN_BUDGET; ++i) { /* * If lock cannot be acquired, then deferred cleanup task was * being executed and rx ring is being cleaned up in * another thread. 
*/ if (likely(ENA_RING_MTX_TRYLOCK(rx_ring) != 0)) { rxc = ena_rx_cleanup(rx_ring); ENA_RING_MTX_UNLOCK(rx_ring); } else { rxc = 0; } /* Protection from calling ena_tx_cleanup from ena_start_xmit */ ENA_RING_MTX_LOCK(tx_ring); txc = ena_tx_cleanup(tx_ring); ENA_RING_MTX_UNLOCK(tx_ring); if (unlikely((if_getdrvflags(ifp) & IFF_DRV_RUNNING) == 0)) return; if ((txc != TX_BUDGET) && (rxc != RX_BUDGET)) break; } /* Signal that work is done and unmask interrupt */ ena_com_update_intr_reg(&intr_reg, RX_IRQ_INTERVAL, TX_IRQ_INTERVAL, true); ena_com_unmask_intr(io_cq, &intr_reg); } static int ena_enable_msix(struct ena_adapter *adapter) { device_t dev = adapter->pdev; int msix_vecs, msix_req; int i, rc = 0; /* Reserve the max msix vectors we might need */ msix_vecs = ENA_MAX_MSIX_VEC(adapter->num_queues); adapter->msix_entries = malloc(msix_vecs * sizeof(struct msix_entry), M_DEVBUF, M_WAITOK | M_ZERO); ena_trace(ENA_DBG, "trying to enable MSI-X, vectors: %d", msix_vecs); for (i = 0; i < msix_vecs; i++) { adapter->msix_entries[i].entry = i; /* Vectors must start from 1 */ adapter->msix_entries[i].vector = i + 1; } msix_req = msix_vecs; rc = pci_alloc_msix(dev, &msix_vecs); if (unlikely(rc != 0)) { device_printf(dev, "Failed to enable MSIX, vectors %d rc %d\n", msix_vecs, rc); rc = ENOSPC; goto err_msix_free; } if (msix_vecs != msix_req) { device_printf(dev, "Enabled only %d MSI-x (out of %d), reduce " "the number of queues\n", msix_vecs, msix_req); adapter->num_queues = msix_vecs - ENA_ADMIN_MSIX_VEC; } adapter->msix_vecs = msix_vecs; adapter->msix_enabled = true; return (0); err_msix_free: free(adapter->msix_entries, M_DEVBUF); adapter->msix_entries = NULL; return (rc); } static void ena_setup_mgmnt_intr(struct ena_adapter *adapter) { snprintf(adapter->irq_tbl[ENA_MGMNT_IRQ_IDX].name, ENA_IRQNAME_SIZE, "ena-mgmnt@pci:%s", device_get_nameunit(adapter->pdev)); /* * Handler is NULL on purpose; it will be set * when mgmnt interrupt is acquired */ adapter->irq_tbl[ENA_MGMNT_IRQ_IDX].handler = NULL; adapter->irq_tbl[ENA_MGMNT_IRQ_IDX].data = adapter; adapter->irq_tbl[ENA_MGMNT_IRQ_IDX].vector = adapter->msix_entries[ENA_MGMNT_IRQ_IDX].vector; } static void ena_setup_io_intr(struct ena_adapter *adapter) { static int last_bind_cpu = -1; int irq_idx; for (int i = 0; i < adapter->num_queues; i++) { irq_idx = ENA_IO_IRQ_IDX(i); snprintf(adapter->irq_tbl[irq_idx].name, ENA_IRQNAME_SIZE, "%s-TxRx-%d", device_get_nameunit(adapter->pdev), i); adapter->irq_tbl[irq_idx].handler = ena_handle_msix; adapter->irq_tbl[irq_idx].data = &adapter->que[i]; adapter->irq_tbl[irq_idx].vector = adapter->msix_entries[irq_idx].vector; ena_trace(ENA_INFO | ENA_IOQ, "ena_setup_io_intr vector: %d\n", adapter->msix_entries[irq_idx].vector); #ifdef RSS adapter->que[i].cpu = adapter->irq_tbl[irq_idx].cpu = rss_getcpu(i % rss_getnumbuckets()); #else /* * We still want to bind rings to the corresponding cpu * using something similar to the RSS round-robin technique.
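The round-robin binding mentioned here can be modeled in a few lines: remember the last CPU handed out and advance through the CPU set, wrapping at the end. A userspace approximation (NCPU and the modulo arithmetic stand in for CPU_FIRST()/CPU_NEXT(); queue counts are invented):

#include <stdio.h>

#define NCPU    4       /* stand-in for the online CPU set */
#define NQUEUES 8

int
main(void)
{
        int last_bind_cpu = -1;
        int cpu[NQUEUES];

        for (int i = 0; i < NQUEUES; i++) {
                if (last_bind_cpu < 0)
                        last_bind_cpu = 0;              /* CPU_FIRST() */
                cpu[i] = last_bind_cpu;
                last_bind_cpu = (last_bind_cpu + 1) % NCPU; /* CPU_NEXT() */
                printf("queue %d -> cpu %d\n", i, cpu[i]);
        }
        return (0);
}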
*/ if (unlikely(last_bind_cpu < 0)) last_bind_cpu = CPU_FIRST(); adapter->que[i].cpu = adapter->irq_tbl[irq_idx].cpu = last_bind_cpu; last_bind_cpu = CPU_NEXT(last_bind_cpu); #endif } } static int ena_request_mgmnt_irq(struct ena_adapter *adapter) { struct ena_irq *irq; unsigned long flags; int rc, rcc; flags = RF_ACTIVE | RF_SHAREABLE; irq = &adapter->irq_tbl[ENA_MGMNT_IRQ_IDX]; irq->res = bus_alloc_resource_any(adapter->pdev, SYS_RES_IRQ, &irq->vector, flags); if (unlikely(irq->res == NULL)) { device_printf(adapter->pdev, "could not allocate " "irq vector: %d\n", irq->vector); return (ENXIO); } rc = bus_activate_resource(adapter->pdev, SYS_RES_IRQ, irq->vector, irq->res); if (unlikely(rc != 0)) { device_printf(adapter->pdev, "could not activate " "irq vector: %d\n", irq->vector); goto err_res_free; } rc = bus_setup_intr(adapter->pdev, irq->res, INTR_TYPE_NET | INTR_MPSAFE, NULL, ena_intr_msix_mgmnt, irq->data, &irq->cookie); if (unlikely(rc != 0)) { device_printf(adapter->pdev, "failed to register " "interrupt handler for irq %ju: %d\n", rman_get_start(irq->res), rc); goto err_res_free; } irq->requested = true; return (rc); err_res_free: ena_trace(ENA_INFO | ENA_ADMQ, "releasing resource for irq %d\n", irq->vector); rcc = bus_release_resource(adapter->pdev, SYS_RES_IRQ, irq->vector, irq->res); if (unlikely(rcc != 0)) device_printf(adapter->pdev, "dev has no parent while " "releasing res for irq: %d\n", irq->vector); irq->res = NULL; return (rc); } static int ena_request_io_irq(struct ena_adapter *adapter) { struct ena_irq *irq; unsigned long flags = 0; int rc = 0, i, rcc; if (unlikely(adapter->msix_enabled == 0)) { device_printf(adapter->pdev, "failed to request I/O IRQ: MSI-X is not enabled\n"); return (EINVAL); } else { flags = RF_ACTIVE | RF_SHAREABLE; } for (i = ENA_IO_IRQ_FIRST_IDX; i < adapter->msix_vecs; i++) { irq = &adapter->irq_tbl[i]; if (unlikely(irq->requested)) continue; irq->res = bus_alloc_resource_any(adapter->pdev, SYS_RES_IRQ, &irq->vector, flags); if (unlikely(irq->res == NULL)) { device_printf(adapter->pdev, "could not allocate " "irq vector: %d\n", irq->vector); goto err; } rc = bus_setup_intr(adapter->pdev, irq->res, INTR_TYPE_NET | INTR_MPSAFE, NULL, irq->handler, irq->data, &irq->cookie); if (unlikely(rc != 0)) { device_printf(adapter->pdev, "failed to register " "interrupt handler for irq %ju: %d\n", rman_get_start(irq->res), rc); goto err; } irq->requested = true; #ifdef RSS ena_trace(ENA_INFO, "queue %d - RSS bucket %d\n", i - ENA_IO_IRQ_FIRST_IDX, irq->cpu); #else ena_trace(ENA_INFO, "queue %d - cpu %d\n", i - ENA_IO_IRQ_FIRST_IDX, irq->cpu); #endif } return (rc); err: for (; i >= ENA_IO_IRQ_FIRST_IDX; i--) { irq = &adapter->irq_tbl[i]; rcc = 0; /* Once we have entered the err: section and irq->requested is true, we free both the intr and the resources */ if (irq->requested) rcc = bus_teardown_intr(adapter->pdev, irq->res, irq->cookie); if (unlikely(rcc != 0)) device_printf(adapter->pdev, "could not release" " irq: %d, error: %d\n", irq->vector, rcc); /* If we entered the err: section without irq->requested set, we know it was bus_alloc_resource_any() that needs cleanup, provided res is not NULL.
In case res is NULL no work is needed in this iteration */ rcc = 0; if (irq->res != NULL) { rcc = bus_release_resource(adapter->pdev, SYS_RES_IRQ, irq->vector, irq->res); } if (unlikely(rcc != 0)) device_printf(adapter->pdev, "dev has no parent while " "releasing res for irq: %d\n", irq->vector); irq->requested = false; irq->res = NULL; } return (rc); } static void ena_free_mgmnt_irq(struct ena_adapter *adapter) { struct ena_irq *irq; int rc; irq = &adapter->irq_tbl[ENA_MGMNT_IRQ_IDX]; if (irq->requested) { ena_trace(ENA_INFO | ENA_ADMQ, "tear down irq: %d\n", irq->vector); rc = bus_teardown_intr(adapter->pdev, irq->res, irq->cookie); if (unlikely(rc != 0)) device_printf(adapter->pdev, "failed to tear " "down irq: %d\n", irq->vector); irq->requested = 0; } if (irq->res != NULL) { ena_trace(ENA_INFO | ENA_ADMQ, "release resource irq: %d\n", irq->vector); rc = bus_release_resource(adapter->pdev, SYS_RES_IRQ, irq->vector, irq->res); irq->res = NULL; if (unlikely(rc != 0)) device_printf(adapter->pdev, "dev has no parent while " "releasing res for irq: %d\n", irq->vector); } } static void ena_free_io_irq(struct ena_adapter *adapter) { struct ena_irq *irq; int rc; for (int i = ENA_IO_IRQ_FIRST_IDX; i < adapter->msix_vecs; i++) { irq = &adapter->irq_tbl[i]; if (irq->requested) { ena_trace(ENA_INFO | ENA_IOQ, "tear down irq: %d\n", irq->vector); rc = bus_teardown_intr(adapter->pdev, irq->res, irq->cookie); if (unlikely(rc != 0)) { device_printf(adapter->pdev, "failed to tear " "down irq: %d\n", irq->vector); } irq->requested = 0; } if (irq->res != NULL) { ena_trace(ENA_INFO | ENA_IOQ, "release resource irq: %d\n", irq->vector); rc = bus_release_resource(adapter->pdev, SYS_RES_IRQ, irq->vector, irq->res); irq->res = NULL; if (unlikely(rc != 0)) { device_printf(adapter->pdev, "dev has no parent" " while releasing res for irq: %d\n", irq->vector); } } } } static void ena_free_irqs(struct ena_adapter* adapter) { ena_free_io_irq(adapter); ena_free_mgmnt_irq(adapter); ena_disable_msix(adapter); } static void ena_disable_msix(struct ena_adapter *adapter) { pci_release_msi(adapter->pdev); adapter->msix_vecs = 0; free(adapter->msix_entries, M_DEVBUF); adapter->msix_entries = NULL; } static void ena_unmask_all_io_irqs(struct ena_adapter *adapter) { struct ena_com_io_cq* io_cq; struct ena_eth_io_intr_reg intr_reg; uint16_t ena_qid; int i; /* Unmask interrupts for all queues */ for (i = 0; i < adapter->num_queues; i++) { ena_qid = ENA_IO_TXQ_IDX(i); io_cq = &adapter->ena_dev->io_cq_queues[ena_qid]; ena_com_update_intr_reg(&intr_reg, 0, 0, true); ena_com_unmask_intr(io_cq, &intr_reg); } } /* Configure the Rx forwarding */ static int ena_rss_configure(struct ena_adapter *adapter) { struct ena_com_dev *ena_dev = adapter->ena_dev; int rc; /* Set indirect table */ rc = ena_com_indirect_table_set(ena_dev); if (unlikely((rc != 0) && (rc != EOPNOTSUPP))) return (rc); /* Configure hash function (if supported) */ rc = ena_com_set_hash_function(ena_dev); if (unlikely((rc != 0) && (rc != EOPNOTSUPP))) return (rc); /* Configure hash inputs (if supported) */ rc = ena_com_set_hash_ctrl(ena_dev); if (unlikely((rc != 0) && (rc != EOPNOTSUPP))) return (rc); return (0); } static int ena_up_complete(struct ena_adapter *adapter) { int rc; if (likely(adapter->rss_support)) { rc = ena_rss_configure(adapter); if (rc != 0) return (rc); } rc = ena_change_mtu(adapter->ifp, adapter->ifp->if_mtu); if (unlikely(rc != 0)) return (rc); ena_refill_all_rx_bufs(adapter); ena_reset_counters((counter_u64_t *)&adapter->hw_stats,
sizeof(adapter->hw_stats)); return (0); } static int ena_up(struct ena_adapter *adapter) { int rc = 0; if (unlikely(device_is_attached(adapter->pdev) == 0)) { device_printf(adapter->pdev, "device is not attached!\n"); return (ENXIO); } if (unlikely(!adapter->running)) { device_printf(adapter->pdev, "device is not running!\n"); return (ENXIO); } if (!adapter->up) { device_printf(adapter->pdev, "device is going UP\n"); /* setup interrupts for IO queues */ ena_setup_io_intr(adapter); rc = ena_request_io_irq(adapter); if (unlikely(rc != 0)) { ena_trace(ENA_ALERT, "err_req_irq"); goto err_req_irq; } /* allocate transmit descriptors */ rc = ena_setup_all_tx_resources(adapter); if (unlikely(rc != 0)) { ena_trace(ENA_ALERT, "err_setup_tx"); goto err_setup_tx; } /* allocate receive descriptors */ rc = ena_setup_all_rx_resources(adapter); if (unlikely(rc != 0)) { ena_trace(ENA_ALERT, "err_setup_rx"); goto err_setup_rx; } /* create IO queues for Rx & Tx */ rc = ena_create_io_queues(adapter); if (unlikely(rc != 0)) { ena_trace(ENA_ALERT, "create IO queues failed"); goto err_io_que; } if (unlikely(adapter->link_status)) if_link_state_change(adapter->ifp, LINK_STATE_UP); rc = ena_up_complete(adapter); if (unlikely(rc != 0)) goto err_up_complete; counter_u64_add(adapter->dev_stats.interface_up, 1); ena_update_hwassist(adapter); if_setdrvflagbits(adapter->ifp, IFF_DRV_RUNNING, IFF_DRV_OACTIVE); callout_reset_sbt(&adapter->timer_service, SBT_1S, SBT_1S, ena_timer_service, (void *)adapter, 0); adapter->up = true; ena_unmask_all_io_irqs(adapter); } return (0); err_up_complete: ena_destroy_all_io_queues(adapter); err_io_que: ena_free_all_rx_resources(adapter); err_setup_rx: ena_free_all_tx_resources(adapter); err_setup_tx: ena_free_io_irq(adapter); err_req_irq: return (rc); } static uint64_t ena_get_counter(if_t ifp, ift_counter cnt) { struct ena_adapter *adapter; struct ena_hw_stats *stats; adapter = if_getsoftc(ifp); stats = &adapter->hw_stats; switch (cnt) { case IFCOUNTER_IPACKETS: return (counter_u64_fetch(stats->rx_packets)); case IFCOUNTER_OPACKETS: return (counter_u64_fetch(stats->tx_packets)); case IFCOUNTER_IBYTES: return (counter_u64_fetch(stats->rx_bytes)); case IFCOUNTER_OBYTES: return (counter_u64_fetch(stats->tx_bytes)); case IFCOUNTER_IQDROPS: return (counter_u64_fetch(stats->rx_drops)); default: return (if_get_counter_default(ifp, cnt)); } } static int ena_media_change(if_t ifp) { /* Media Change is not supported by firmware */ return (0); } static void ena_media_status(if_t ifp, struct ifmediareq *ifmr) { struct ena_adapter *adapter = if_getsoftc(ifp); ena_trace(ENA_DBG, "enter"); mtx_lock(&adapter->global_mtx); ifmr->ifm_status = IFM_AVALID; ifmr->ifm_active = IFM_ETHER; if (!adapter->link_status) { mtx_unlock(&adapter->global_mtx); ena_trace(ENA_INFO, "link_status = false"); return; } ifmr->ifm_status |= IFM_ACTIVE; ifmr->ifm_active |= IFM_10G_T | IFM_FDX; mtx_unlock(&adapter->global_mtx); } static void ena_init(void *arg) { struct ena_adapter *adapter = (struct ena_adapter *)arg; if (!adapter->up) { sx_xlock(&adapter->ioctl_sx); ena_up(adapter); sx_unlock(&adapter->ioctl_sx); } } static int ena_ioctl(if_t ifp, u_long command, caddr_t data) { struct ena_adapter *adapter; struct ifreq *ifr; int rc; adapter = ifp->if_softc; ifr = (struct ifreq *)data; /* * Acquire the lock to prevent the up and down routines from running * in parallel.
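The comment that closes just below states the key invariant of this section: ena_up() and ena_down() are only ever entered with the ioctl lock held, so bring-up and tear-down can never interleave. The pattern, reduced to portable C (a pthread mutex in place of the sx(9) lock; the state flag and the elided bodies are hypothetical):

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t ioctl_lock = PTHREAD_MUTEX_INITIALIZER;
static bool up;

static void
dev_up(void)
{
        pthread_mutex_lock(&ioctl_lock);
        if (!up) {
                /* ... allocate rings, request IRQs, start timers ... */
                up = true;
        }
        pthread_mutex_unlock(&ioctl_lock);
}

static void
dev_down(void)
{
        pthread_mutex_lock(&ioctl_lock);
        if (up) {
                /* ... quiesce and free in the reverse order ... */
                up = false;
        }
        pthread_mutex_unlock(&ioctl_lock);
}

int
main(void)
{
        dev_up();
        dev_down();
        printf("final state: %s\n", up ? "up" : "down");
        return (0);
}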
*/ rc = 0; switch (command) { case SIOCSIFMTU: if (ifp->if_mtu == ifr->ifr_mtu) break; sx_xlock(&adapter->ioctl_sx); ena_down(adapter); ena_change_mtu(ifp, ifr->ifr_mtu); rc = ena_up(adapter); sx_unlock(&adapter->ioctl_sx); break; case SIOCSIFFLAGS: if ((ifp->if_flags & IFF_UP) != 0) { if ((if_getdrvflags(ifp) & IFF_DRV_RUNNING) != 0) { if ((ifp->if_flags & (IFF_PROMISC | IFF_ALLMULTI)) != 0) { device_printf(adapter->pdev, "ioctl promisc/allmulti\n"); } } else { sx_xlock(&adapter->ioctl_sx); rc = ena_up(adapter); sx_unlock(&adapter->ioctl_sx); } } else { if ((if_getdrvflags(ifp) & IFF_DRV_RUNNING) != 0) { sx_xlock(&adapter->ioctl_sx); ena_down(adapter); sx_unlock(&adapter->ioctl_sx); } } break; case SIOCADDMULTI: case SIOCDELMULTI: break; case SIOCSIFMEDIA: case SIOCGIFMEDIA: rc = ifmedia_ioctl(ifp, ifr, &adapter->media, command); break; case SIOCSIFCAP: { int reinit = 0; if (ifr->ifr_reqcap != ifp->if_capenable) { ifp->if_capenable = ifr->ifr_reqcap; reinit = 1; } if ((reinit != 0) && ((if_getdrvflags(ifp) & IFF_DRV_RUNNING) != 0)) { sx_xlock(&adapter->ioctl_sx); ena_down(adapter); rc = ena_up(adapter); sx_unlock(&adapter->ioctl_sx); } } break; default: rc = ether_ioctl(ifp, command, data); break; } return (rc); } static int ena_get_dev_offloads(struct ena_com_dev_get_features_ctx *feat) { int caps = 0; if ((feat->offload.tx & (ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L4_IPV4_CSUM_FULL_MASK | ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L4_IPV4_CSUM_PART_MASK | ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L3_CSUM_IPV4_MASK)) != 0) caps |= IFCAP_TXCSUM; if ((feat->offload.tx & (ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L4_IPV6_CSUM_FULL_MASK | ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L4_IPV6_CSUM_PART_MASK)) != 0) caps |= IFCAP_TXCSUM_IPV6; if ((feat->offload.tx & ENA_ADMIN_FEATURE_OFFLOAD_DESC_TSO_IPV4_MASK) != 0) caps |= IFCAP_TSO4; if ((feat->offload.tx & ENA_ADMIN_FEATURE_OFFLOAD_DESC_TSO_IPV6_MASK) != 0) caps |= IFCAP_TSO6; if ((feat->offload.rx_supported & (ENA_ADMIN_FEATURE_OFFLOAD_DESC_RX_L4_IPV4_CSUM_MASK | ENA_ADMIN_FEATURE_OFFLOAD_DESC_RX_L3_CSUM_IPV4_MASK)) != 0) caps |= IFCAP_RXCSUM; if ((feat->offload.rx_supported & ENA_ADMIN_FEATURE_OFFLOAD_DESC_RX_L4_IPV6_CSUM_MASK) != 0) caps |= IFCAP_RXCSUM_IPV6; caps |= IFCAP_LRO | IFCAP_JUMBO_MTU; return (caps); } static void ena_update_host_info(struct ena_admin_host_info *host_info, if_t ifp) { host_info->supported_network_features[0] = (uint32_t)if_getcapabilities(ifp); } static void ena_update_hwassist(struct ena_adapter *adapter) { if_t ifp = adapter->ifp; uint32_t feat = adapter->tx_offload_cap; int cap = if_getcapenable(ifp); int flags = 0; if_clearhwassist(ifp); if ((cap & IFCAP_TXCSUM) != 0) { if ((feat & ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L3_CSUM_IPV4_MASK) != 0) flags |= CSUM_IP; if ((feat & (ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L4_IPV4_CSUM_FULL_MASK | ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L4_IPV4_CSUM_PART_MASK)) != 0) flags |= CSUM_IP_UDP | CSUM_IP_TCP; } if ((cap & IFCAP_TXCSUM_IPV6) != 0) flags |= CSUM_IP6_UDP | CSUM_IP6_TCP; if ((cap & IFCAP_TSO4) != 0) flags |= CSUM_IP_TSO; if ((cap & IFCAP_TSO6) != 0) flags |= CSUM_IP6_TSO; if_sethwassistbits(ifp, flags, 0); } static int ena_setup_ifnet(device_t pdev, struct ena_adapter *adapter, struct ena_com_dev_get_features_ctx *feat) { if_t ifp; int caps = 0; ifp = adapter->ifp = if_gethandle(IFT_ETHER); if (unlikely(ifp == NULL)) { ena_trace(ENA_ALERT, "can not allocate ifnet structure\n"); return (ENXIO); } if_initname(ifp, device_get_name(pdev), device_get_unit(pdev)); if_setdev(ifp, pdev); if_setsoftc(ifp, adapter); 
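ena_get_dev_offloads() above is a pure translation table from device feature masks to ifnet capability bits. Its shape, as a self-contained sketch with invented mask values (the real ENA_ADMIN_*_MASK constants and IFCAP_* bits differ):

#include <stdint.h>
#include <stdio.h>

/* Invented bit values, standing in for the ENA_ADMIN_*_MASK macros. */
#define FEAT_TX_L4V4_CSUM       0x01
#define FEAT_TX_TSO4            0x02
#define FEAT_RX_L4V4_CSUM       0x04

#define CAP_TXCSUM      0x01
#define CAP_TSO4        0x02
#define CAP_RXCSUM      0x04

static int
feat_to_caps(uint32_t tx_feat, uint32_t rx_feat)
{
        int caps = 0;

        if ((tx_feat & FEAT_TX_L4V4_CSUM) != 0)
                caps |= CAP_TXCSUM;
        if ((tx_feat & FEAT_TX_TSO4) != 0)
                caps |= CAP_TSO4;
        if ((rx_feat & FEAT_RX_L4V4_CSUM) != 0)
                caps |= CAP_RXCSUM;
        return (caps);
}

int
main(void)
{
        printf("caps: %#x\n", feat_to_caps(FEAT_TX_L4V4_CSUM | FEAT_TX_TSO4,
            FEAT_RX_L4V4_CSUM));
        return (0);
}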
if_setflags(ifp, IFF_BROADCAST | IFF_SIMPLEX | IFF_MULTICAST); if_setinitfn(ifp, ena_init); if_settransmitfn(ifp, ena_mq_start); if_setqflushfn(ifp, ena_qflush); if_setioctlfn(ifp, ena_ioctl); if_setgetcounterfn(ifp, ena_get_counter); if_setsendqlen(ifp, adapter->tx_ring_size); if_setsendqready(ifp); if_setmtu(ifp, ETHERMTU); if_setbaudrate(ifp, 0); /* Zeroize capabilities... */ if_setcapabilities(ifp, 0); if_setcapenable(ifp, 0); /* check hardware support */ caps = ena_get_dev_offloads(feat); /* ... and set them */ if_setcapabilitiesbit(ifp, caps, 0); /* TSO parameters */ ifp->if_hw_tsomax = ENA_TSO_MAXSIZE - (ETHER_HDR_LEN + ETHER_VLAN_ENCAP_LEN); ifp->if_hw_tsomaxsegcount = adapter->max_tx_sgl_size - 1; ifp->if_hw_tsomaxsegsize = ENA_TSO_MAXSIZE; if_setifheaderlen(ifp, sizeof(struct ether_vlan_header)); if_setcapenable(ifp, if_getcapabilities(ifp)); /* * Specify the media types supported by this adapter and register * callbacks to update media and link information */ ifmedia_init(&adapter->media, IFM_IMASK, ena_media_change, ena_media_status); ifmedia_add(&adapter->media, IFM_ETHER | IFM_AUTO, 0, NULL); ifmedia_set(&adapter->media, IFM_ETHER | IFM_AUTO); ether_ifattach(ifp, adapter->mac_addr); return (0); } static void ena_down(struct ena_adapter *adapter) { int rc; if (adapter->up) { device_printf(adapter->pdev, "device is going DOWN\n"); callout_drain(&adapter->timer_service); adapter->up = false; if_setdrvflagbits(adapter->ifp, IFF_DRV_OACTIVE, IFF_DRV_RUNNING); ena_free_io_irq(adapter); if (adapter->trigger_reset) { rc = ena_com_dev_reset(adapter->ena_dev, adapter->reset_reason); if (unlikely(rc != 0)) device_printf(adapter->pdev, "Device reset failed\n"); } ena_destroy_all_io_queues(adapter); ena_free_all_tx_bufs(adapter); ena_free_all_rx_bufs(adapter); ena_free_all_tx_resources(adapter); ena_free_all_rx_resources(adapter); counter_u64_add(adapter->dev_stats.interface_down, 1); } } static void ena_tx_csum(struct ena_com_tx_ctx *ena_tx_ctx, struct mbuf *mbuf) { struct ena_com_tx_meta *ena_meta; struct ether_vlan_header *eh; u32 mss; bool offload; uint16_t etype; int ehdrlen; struct ip *ip; int iphlen; struct tcphdr *th; offload = false; ena_meta = &ena_tx_ctx->ena_meta; mss = mbuf->m_pkthdr.tso_segsz; if (mss != 0) offload = true; if ((mbuf->m_pkthdr.csum_flags & CSUM_TSO) != 0) offload = true; if ((mbuf->m_pkthdr.csum_flags & CSUM_OFFLOAD) != 0) offload = true; if (!offload) { ena_tx_ctx->meta_valid = 0; return; } /* Determine where frame payload starts. 
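Determining where the payload starts means skipping an Ethernet header that may or may not carry a VLAN tag, which is exactly the two-way branch that follows. The same logic over a raw byte buffer (userspace sketch; handles a single optional tag only, constants mirror the usual header sizes):

#include <stdint.h>
#include <stdio.h>

#define ETHER_HDR_LEN           14
#define ETHER_VLAN_ENCAP_LEN    4
#define ETHERTYPE_VLAN          0x8100

/* Return the L3 payload offset and ethertype for one optional VLAN tag. */
static int
l3_offset(const uint8_t *frame, uint16_t *etype)
{
        uint16_t t = (uint16_t)(frame[12] << 8 | frame[13]);

        if (t == ETHERTYPE_VLAN) {
                *etype = (uint16_t)(frame[16] << 8 | frame[17]);
                return (ETHER_HDR_LEN + ETHER_VLAN_ENCAP_LEN);
        }
        *etype = t;
        return (ETHER_HDR_LEN);
}

int
main(void)
{
        uint8_t frame[18] = { 0 };
        uint16_t etype;

        frame[12] = 0x08;
        frame[13] = 0x00;       /* plain IPv4 frame, no tag */
        printf("payload at %d, ethertype %#x\n", l3_offset(frame, &etype),
            etype);
        return (0);
}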
*/ eh = mtod(mbuf, struct ether_vlan_header *); if (eh->evl_encap_proto == htons(ETHERTYPE_VLAN)) { etype = ntohs(eh->evl_proto); ehdrlen = ETHER_HDR_LEN + ETHER_VLAN_ENCAP_LEN; } else { etype = ntohs(eh->evl_encap_proto); ehdrlen = ETHER_HDR_LEN; } ip = (struct ip *)(mbuf->m_data + ehdrlen); iphlen = ip->ip_hl << 2; th = (struct tcphdr *)((caddr_t)ip + iphlen); if ((mbuf->m_pkthdr.csum_flags & CSUM_IP) != 0) { ena_tx_ctx->l3_csum_enable = 1; } if ((mbuf->m_pkthdr.csum_flags & CSUM_TSO) != 0) { ena_tx_ctx->tso_enable = 1; ena_meta->l4_hdr_len = (th->th_off); } switch (etype) { case ETHERTYPE_IP: ena_tx_ctx->l3_proto = ENA_ETH_IO_L3_PROTO_IPV4; if ((ip->ip_off & htons(IP_DF)) != 0) ena_tx_ctx->df = 1; break; case ETHERTYPE_IPV6: ena_tx_ctx->l3_proto = ENA_ETH_IO_L3_PROTO_IPV6; default: break; } if (ip->ip_p == IPPROTO_TCP) { ena_tx_ctx->l4_proto = ENA_ETH_IO_L4_PROTO_TCP; if ((mbuf->m_pkthdr.csum_flags & (CSUM_IP_TCP | CSUM_IP6_TCP)) != 0) ena_tx_ctx->l4_csum_enable = 1; else ena_tx_ctx->l4_csum_enable = 0; } else if (ip->ip_p == IPPROTO_UDP) { ena_tx_ctx->l4_proto = ENA_ETH_IO_L4_PROTO_UDP; if ((mbuf->m_pkthdr.csum_flags & (CSUM_IP_UDP | CSUM_IP6_UDP)) != 0) ena_tx_ctx->l4_csum_enable = 1; else ena_tx_ctx->l4_csum_enable = 0; } else { ena_tx_ctx->l4_proto = ENA_ETH_IO_L4_PROTO_UNKNOWN; ena_tx_ctx->l4_csum_enable = 0; } ena_meta->mss = mss; ena_meta->l3_hdr_len = iphlen; ena_meta->l3_hdr_offset = ehdrlen; ena_tx_ctx->meta_valid = 1; } static int ena_check_and_collapse_mbuf(struct ena_ring *tx_ring, struct mbuf **mbuf) { struct ena_adapter *adapter; struct mbuf *collapsed_mbuf; int num_frags; adapter = tx_ring->adapter; num_frags = ena_mbuf_count(*mbuf); /* One segment must be reserved for configuration descriptor. */ if (num_frags < adapter->max_tx_sgl_size) return (0); counter_u64_add(tx_ring->tx_stats.collapse, 1); collapsed_mbuf = m_collapse(*mbuf, M_NOWAIT, adapter->max_tx_sgl_size - 1); if (unlikely(collapsed_mbuf == NULL)) { counter_u64_add(tx_ring->tx_stats.collapse_err, 1); return (ENOMEM); } /* If the mbuf was collapsed successfully, the original mbuf is released. */ *mbuf = collapsed_mbuf; return (0); } static int ena_xmit_mbuf(struct ena_ring *tx_ring, struct mbuf **mbuf) { struct ena_adapter *adapter; struct ena_tx_buffer *tx_info; struct ena_com_tx_ctx ena_tx_ctx; struct ena_com_dev *ena_dev; struct ena_com_buf *ena_buf; struct ena_com_io_sq* io_sq; bus_dma_segment_t segs[ENA_BUS_DMA_SEGS]; void *push_hdr; uint16_t next_to_use; uint16_t req_id; uint16_t push_len; uint16_t ena_qid; uint32_t nsegs, header_len; int i, rc; int nb_hw_desc; ena_qid = ENA_IO_TXQ_IDX(tx_ring->que->id); adapter = tx_ring->que->adapter; ena_dev = adapter->ena_dev; io_sq = &ena_dev->io_sq_queues[ena_qid]; rc = ena_check_and_collapse_mbuf(tx_ring, mbuf); if (unlikely(rc != 0)) { ena_trace(ENA_WARNING, "Failed to collapse mbuf! err: %d", rc); return (rc); } next_to_use = tx_ring->next_to_use; req_id = tx_ring->free_tx_ids[next_to_use]; tx_info = &tx_ring->tx_buffer_info[req_id]; tx_info->mbuf = *mbuf; tx_info->num_of_bufs = 0; ena_buf = tx_info->bufs; ena_trace(ENA_DBG | ENA_TXPTH, "Tx: %d bytes", (*mbuf)->m_pkthdr.len); push_len = 0; /* * header_len is just a hint for the device. Because FreeBSD does not * give us information about the packet header length and it is not * guaranteed that all packet headers will be in the 1st mbuf, setting * header_len to 0 makes the device ignore this value and resolve the * header on its own.
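ena_check_and_collapse_mbuf() above pays for a defragmentation pass only when the chain would exceed the device's scatter-gather limit. A rough model with a linked buffer chain (plain C, not mbufs; MAX_SGL and the elided coalescing step are invented for illustration):

#include <stdio.h>

struct seg {
        struct seg *next;
};

#define MAX_SGL 3       /* hypothetical device limit */

static int
seg_count(struct seg *s)
{
        int n = 0;

        for (; s != NULL; s = s->next)
                n++;
        return (n);
}

/* A collapse is only attempted when the chain exceeds the HW limit. */
static int
check_chain(struct seg *s)
{
        if (seg_count(s) < MAX_SGL)
                return (0);     /* fits as-is */
        /* ... coalesce segments here, as m_collapse(9) would ... */
        return (1);
}

int
main(void)
{
        struct seg c = { NULL }, b = { &c }, a = { &b };

        printf("collapse needed: %d\n", check_chain(&a));
        return (0);
}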
*/ header_len = 0; push_hdr = NULL; rc = bus_dmamap_load_mbuf_sg(adapter->tx_buf_tag, tx_info->map, *mbuf, segs, &nsegs, BUS_DMA_NOWAIT); if (unlikely((rc != 0) || (nsegs == 0))) { ena_trace(ENA_WARNING, "dmamap load failed! err: %d nsegs: %d", rc, nsegs); counter_u64_add(tx_ring->tx_stats.dma_mapping_err, 1); tx_info->mbuf = NULL; if (rc == ENOMEM) return (ENA_COM_NO_MEM); else return (ENA_COM_INVAL); } for (i = 0; i < nsegs; i++) { ena_buf->len = segs[i].ds_len; ena_buf->paddr = segs[i].ds_addr; ena_buf++; } tx_info->num_of_bufs = nsegs; memset(&ena_tx_ctx, 0x0, sizeof(struct ena_com_tx_ctx)); ena_tx_ctx.ena_bufs = tx_info->bufs; ena_tx_ctx.push_header = push_hdr; ena_tx_ctx.num_bufs = tx_info->num_of_bufs; ena_tx_ctx.req_id = req_id; ena_tx_ctx.header_len = header_len; /* Set flags and meta data */ ena_tx_csum(&ena_tx_ctx, *mbuf); /* Prepare the packet's descriptors and send them to device */ rc = ena_com_prepare_tx(io_sq, &ena_tx_ctx, &nb_hw_desc); if (unlikely(rc != 0)) { ena_trace(ENA_DBG | ENA_TXPTH, "failed to prepare tx bufs\n"); counter_u64_add(tx_ring->tx_stats.prepare_ctx_err, 1); goto dma_error; } counter_enter(); counter_u64_add_protected(tx_ring->tx_stats.cnt, 1); counter_u64_add_protected(tx_ring->tx_stats.bytes, (*mbuf)->m_pkthdr.len); counter_u64_add_protected(adapter->hw_stats.tx_packets, 1); counter_u64_add_protected(adapter->hw_stats.tx_bytes, (*mbuf)->m_pkthdr.len); counter_exit(); tx_info->tx_descs = nb_hw_desc; getbinuptime(&tx_info->timestamp); tx_info->print_once = true; tx_ring->next_to_use = ENA_TX_RING_IDX_NEXT(next_to_use, tx_ring->ring_size); bus_dmamap_sync(adapter->tx_buf_tag, tx_info->map, BUS_DMASYNC_PREWRITE); return (0); dma_error: tx_info->mbuf = NULL; bus_dmamap_unload(adapter->tx_buf_tag, tx_info->map); return (rc); } static void ena_start_xmit(struct ena_ring *tx_ring) { struct mbuf *mbuf; struct ena_adapter *adapter = tx_ring->adapter; struct ena_com_io_sq* io_sq; int ena_qid; int acum_pkts = 0; int ret = 0; if (unlikely((if_getdrvflags(adapter->ifp) & IFF_DRV_RUNNING) == 0)) return; if (unlikely(!adapter->link_status)) return; ena_qid = ENA_IO_TXQ_IDX(tx_ring->que->id); io_sq = &adapter->ena_dev->io_sq_queues[ena_qid]; while ((mbuf = drbr_peek(adapter->ifp, tx_ring->br)) != NULL) { ena_trace(ENA_DBG | ENA_TXPTH, "\ndequeued mbuf %p with flags %#x and" " header csum flags %#jx", mbuf, mbuf->m_flags, (uint64_t)mbuf->m_pkthdr.csum_flags); if (unlikely(!ena_com_sq_have_enough_space(io_sq, ENA_TX_CLEANUP_THRESHOLD))) ena_tx_cleanup(tx_ring); if (unlikely((ret = ena_xmit_mbuf(tx_ring, &mbuf)) != 0)) { if (ret == ENA_COM_NO_MEM) { drbr_putback(adapter->ifp, tx_ring->br, mbuf); } else if (ret == ENA_COM_NO_SPACE) { drbr_putback(adapter->ifp, tx_ring->br, mbuf); } else { m_freem(mbuf); drbr_advance(adapter->ifp, tx_ring->br); } break; } drbr_advance(adapter->ifp, tx_ring->br); if (unlikely((if_getdrvflags(adapter->ifp) & IFF_DRV_RUNNING) == 0)) return; acum_pkts++; BPF_MTAP(adapter->ifp, mbuf); if (unlikely(acum_pkts == DB_THRESHOLD)) { acum_pkts = 0; wmb(); /* Trigger the dma engine */ ena_com_write_sq_doorbell(io_sq); counter_u64_add(tx_ring->tx_stats.doorbells, 1); } } if (likely(acum_pkts != 0)) { wmb(); /* Trigger the dma engine */ ena_com_write_sq_doorbell(io_sq); counter_u64_add(tx_ring->tx_stats.doorbells, 1); } if (!ena_com_sq_have_enough_space(io_sq, ENA_TX_CLEANUP_THRESHOLD)) ena_tx_cleanup(tx_ring); } static void ena_deferred_mq_start(void *arg, int pending) { struct ena_ring *tx_ring = (struct ena_ring *)arg; struct ifnet *ifp = 
tx_ring->adapter->ifp; while (!drbr_empty(ifp, tx_ring->br) && (if_getdrvflags(ifp) & IFF_DRV_RUNNING) != 0) { ENA_RING_MTX_LOCK(tx_ring); ena_start_xmit(tx_ring); ENA_RING_MTX_UNLOCK(tx_ring); } } static int ena_mq_start(if_t ifp, struct mbuf *m) { struct ena_adapter *adapter = ifp->if_softc; struct ena_ring *tx_ring; int ret, is_drbr_empty; uint32_t i; if (unlikely((if_getdrvflags(adapter->ifp) & IFF_DRV_RUNNING) == 0)) return (ENODEV); /* Which queue to use */ /* * If everything is set up correctly, it should be the * same bucket as the one the current CPU is in. * It should improve performance. */ if (M_HASHTYPE_GET(m) != M_HASHTYPE_NONE) { #ifdef RSS if (rss_hash2bucket(m->m_pkthdr.flowid, M_HASHTYPE_GET(m), &i) == 0) { i = i % adapter->num_queues; } else #endif { i = m->m_pkthdr.flowid % adapter->num_queues; } } else { i = curcpu % adapter->num_queues; } tx_ring = &adapter->tx_ring[i]; /* Check if drbr is empty before putting packet */ is_drbr_empty = drbr_empty(ifp, tx_ring->br); ret = drbr_enqueue(ifp, tx_ring->br, m); if (unlikely(ret != 0)) { taskqueue_enqueue(tx_ring->enqueue_tq, &tx_ring->enqueue_task); return (ret); } if ((is_drbr_empty != 0) && (ENA_RING_MTX_TRYLOCK(tx_ring) != 0)) { ena_start_xmit(tx_ring); ENA_RING_MTX_UNLOCK(tx_ring); } else { taskqueue_enqueue(tx_ring->enqueue_tq, &tx_ring->enqueue_task); } return (0); } static void ena_qflush(if_t ifp) { struct ena_adapter *adapter = ifp->if_softc; struct ena_ring *tx_ring = adapter->tx_ring; int i; for (i = 0; i < adapter->num_queues; ++i, ++tx_ring) if (!drbr_empty(ifp, tx_ring->br)) { ENA_RING_MTX_LOCK(tx_ring); drbr_flush(ifp, tx_ring->br); ENA_RING_MTX_UNLOCK(tx_ring); } if_qflush(ifp); } static int ena_calc_io_queue_num(struct ena_adapter *adapter, struct ena_com_dev_get_features_ctx *get_feat_ctx) { int io_sq_num, io_cq_num, io_queue_num; io_sq_num = get_feat_ctx->max_queues.max_sq_num; io_cq_num = get_feat_ctx->max_queues.max_cq_num; io_queue_num = min_t(int, mp_ncpus, ENA_MAX_NUM_IO_QUEUES); io_queue_num = min_t(int, io_queue_num, io_sq_num); io_queue_num = min_t(int, io_queue_num, io_cq_num); /* 1 IRQ for mgmnt and 1 IRQ for each TX/RX pair */ io_queue_num = min_t(int, io_queue_num, pci_msix_count(adapter->pdev) - 1); #ifdef RSS io_queue_num = min_t(int, io_queue_num, rss_getnumbuckets()); #endif return (io_queue_num); } static int ena_calc_queue_size(struct ena_adapter *adapter, uint16_t *max_tx_sgl_size, uint16_t *max_rx_sgl_size, struct ena_com_dev_get_features_ctx *feat) { uint32_t queue_size = ENA_DEFAULT_RING_SIZE; uint32_t v; uint32_t q; queue_size = min_t(uint32_t, queue_size, feat->max_queues.max_cq_depth); queue_size = min_t(uint32_t, queue_size, feat->max_queues.max_sq_depth); /* round down to the nearest power of 2 */ v = queue_size; while (v != 0) { if (powerof2(queue_size) != 0) break; v /= 2; q = rounddown2(queue_size, v); if (q != 0) { queue_size = q; break; } } if (unlikely(queue_size == 0)) { device_printf(adapter->pdev, "Invalid queue size\n"); return (ENA_COM_FAULT); } *max_tx_sgl_size = min_t(uint16_t, ENA_PKT_MAX_BUFS, feat->max_queues.max_packet_tx_descs); *max_rx_sgl_size = min_t(uint16_t, ENA_PKT_MAX_BUFS, feat->max_queues.max_packet_rx_descs); return (queue_size); } static int ena_rss_init_default(struct ena_adapter *adapter) { struct ena_com_dev *ena_dev = adapter->ena_dev; device_t dev = adapter->pdev; int qid, rc, i; rc = ena_com_rss_init(ena_dev, ENA_RX_RSS_TABLE_LOG_SIZE); if (unlikely(rc != 0)) { device_printf(dev, "Cannot init indirect table\n"); return (rc); } for (i = 0; i <
ENA_RX_RSS_TABLE_SIZE; i++) { #ifdef RSS qid = rss_get_indirection_to_bucket(i); qid = qid % adapter->num_queues; #else qid = i % adapter->num_queues; #endif rc = ena_com_indirect_table_fill_entry(ena_dev, i, ENA_IO_RXQ_IDX(qid)); if (unlikely((rc != 0) && (rc != EOPNOTSUPP))) { device_printf(dev, "Cannot fill indirect table\n"); goto err_rss_destroy; } } rc = ena_com_fill_hash_function(ena_dev, ENA_ADMIN_CRC32, NULL, ENA_HASH_KEY_SIZE, 0xFFFFFFFF); if (unlikely((rc != 0) && (rc != EOPNOTSUPP))) { device_printf(dev, "Cannot fill hash function\n"); goto err_rss_destroy; } rc = ena_com_set_default_hash_ctrl(ena_dev); if (unlikely((rc != 0) && (rc != EOPNOTSUPP))) { device_printf(dev, "Cannot fill hash control\n"); goto err_rss_destroy; } return (0); err_rss_destroy: ena_com_rss_destroy(ena_dev); return (rc); } static void ena_rss_init_default_deferred(void *arg) { struct ena_adapter *adapter; devclass_t dc; int max; int rc; dc = devclass_find("ena"); if (unlikely(dc == NULL)) { ena_trace(ENA_ALERT, "No devclass ena\n"); return; } max = devclass_get_maxunit(dc); while (max-- >= 0) { adapter = devclass_get_softc(dc, max); if (adapter != NULL) { rc = ena_rss_init_default(adapter); adapter->rss_support = true; if (unlikely(rc != 0)) { device_printf(adapter->pdev, "WARNING: RSS was not properly initialized," " it will affect bandwidth\n"); adapter->rss_support = false; } } } } SYSINIT(ena_rss_init, SI_SUB_KICK_SCHEDULER, SI_ORDER_SECOND, ena_rss_init_default_deferred, NULL); static void ena_config_host_info(struct ena_com_dev *ena_dev) { struct ena_admin_host_info *host_info; int rc; /* Allocate only the host info */ rc = ena_com_allocate_host_info(ena_dev); if (unlikely(rc != 0)) { ena_trace(ENA_ALERT, "Cannot allocate host info\n"); return; } host_info = ena_dev->host_attr.host_info; host_info->os_type = ENA_ADMIN_OS_FREEBSD; host_info->kernel_ver = osreldate; sprintf(host_info->kernel_ver_str, "%d", osreldate); host_info->os_dist = 0; strncpy(host_info->os_dist_str, osrelease, sizeof(host_info->os_dist_str) - 1); host_info->driver_version = (DRV_MODULE_VER_MAJOR) | (DRV_MODULE_VER_MINOR << ENA_ADMIN_HOST_INFO_MINOR_SHIFT) | (DRV_MODULE_VER_SUBMINOR << ENA_ADMIN_HOST_INFO_SUB_MINOR_SHIFT); rc = ena_com_set_host_attributes(ena_dev); if (unlikely(rc != 0)) { if (rc == EOPNOTSUPP) ena_trace(ENA_WARNING, "Cannot set host attributes\n"); else ena_trace(ENA_ALERT, "Cannot set host attributes\n"); goto err; } return; err: ena_com_delete_host_info(ena_dev); } static int ena_device_init(struct ena_adapter *adapter, device_t pdev, struct ena_com_dev_get_features_ctx *get_feat_ctx, int *wd_active) { struct ena_com_dev* ena_dev = adapter->ena_dev; bool readless_supported; uint32_t aenq_groups; int dma_width; int rc; rc = ena_com_mmio_reg_read_request_init(ena_dev); if (unlikely(rc != 0)) { device_printf(pdev, "failed to init mmio read less\n"); return (rc); } /* * The PCIe configuration space revision id indicates if mmio reg * read is disabled */ readless_supported = !(pci_get_revid(pdev) & ENA_MMIO_DISABLE_REG_READ); ena_com_set_mmio_read_mode(ena_dev, readless_supported); rc = ena_com_dev_reset(ena_dev, ENA_REGS_RESET_NORMAL); if (unlikely(rc != 0)) { device_printf(pdev, "Can not reset device\n"); goto err_mmio_read_less; } rc = ena_com_validate_version(ena_dev); if (unlikely(rc != 0)) { device_printf(pdev, "device version is too low\n"); goto err_mmio_read_less; } dma_width = ena_com_get_dma_width(ena_dev); if (unlikely(dma_width < 0)) { device_printf(pdev, "Invalid dma width value %d", dma_width); rc =
dma_width; goto err_mmio_read_less; } adapter->dma_width = dma_width; /* ENA admin level init */ rc = ena_com_admin_init(ena_dev, &aenq_handlers, true); if (unlikely(rc != 0)) { device_printf(pdev, "Can not initialize ena admin queue with device\n"); goto err_mmio_read_less; } /* * To enable the msix interrupts the driver needs to know the number * of queues. So the driver uses polling mode to retrieve this * information */ ena_com_set_admin_polling_mode(ena_dev, true); ena_config_host_info(ena_dev); /* Get Device Attributes */ rc = ena_com_get_dev_attr_feat(ena_dev, get_feat_ctx); if (unlikely(rc != 0)) { device_printf(pdev, "Cannot get attribute for ena device rc: %d\n", rc); goto err_admin_init; } aenq_groups = BIT(ENA_ADMIN_LINK_CHANGE) | BIT(ENA_ADMIN_KEEP_ALIVE); aenq_groups &= get_feat_ctx->aenq.supported_groups; rc = ena_com_set_aenq_config(ena_dev, aenq_groups); if (unlikely(rc != 0)) { device_printf(pdev, "Cannot configure aenq groups rc: %d\n", rc); goto err_admin_init; } *wd_active = !!(aenq_groups & BIT(ENA_ADMIN_KEEP_ALIVE)); return (0); err_admin_init: ena_com_delete_host_info(ena_dev); ena_com_admin_destroy(ena_dev); err_mmio_read_less: ena_com_mmio_reg_read_request_destroy(ena_dev); return (rc); } static int ena_enable_msix_and_set_admin_interrupts(struct ena_adapter *adapter, int io_vectors) { struct ena_com_dev *ena_dev = adapter->ena_dev; int rc; rc = ena_enable_msix(adapter); if (unlikely(rc != 0)) { device_printf(adapter->pdev, "Error with MSI-X enablement\n"); return (rc); } ena_setup_mgmnt_intr(adapter); rc = ena_request_mgmnt_irq(adapter); if (unlikely(rc != 0)) { device_printf(adapter->pdev, "Cannot setup mgmnt queue intr\n"); goto err_disable_msix; } ena_com_set_admin_polling_mode(ena_dev, false); ena_com_admin_aenq_enable(ena_dev); return (0); err_disable_msix: ena_disable_msix(adapter); return (rc); } /* Function called on ENA_ADMIN_KEEP_ALIVE event */ static void ena_keep_alive_wd(void *adapter_data, struct ena_admin_aenq_entry *aenq_e) { struct ena_adapter *adapter = (struct ena_adapter *)adapter_data; struct ena_admin_aenq_keep_alive_desc *desc; sbintime_t stime; uint64_t rx_drops; desc = (struct ena_admin_aenq_keep_alive_desc *)aenq_e; rx_drops = ((uint64_t)desc->rx_drops_high << 32) | desc->rx_drops_low; counter_u64_zero(adapter->hw_stats.rx_drops); counter_u64_add(adapter->hw_stats.rx_drops, rx_drops); stime = getsbinuptime(); atomic_store_rel_64(&adapter->keep_alive_timestamp, stime); } /* Check for keep alive expiration */ static void check_for_missing_keep_alive(struct ena_adapter *adapter) { sbintime_t timestamp, time; if (adapter->wd_active == 0) return; if (likely(adapter->keep_alive_timeout == 0)) return; timestamp = atomic_load_acq_64(&adapter->keep_alive_timestamp); time = getsbinuptime() - timestamp; if (unlikely(time > adapter->keep_alive_timeout)) { device_printf(adapter->pdev, "Keep alive watchdog timeout.\n"); counter_u64_add(adapter->dev_stats.wd_expired, 1); adapter->reset_reason = ENA_REGS_RESET_KEEP_ALIVE_TO; adapter->trigger_reset = true; } } /* Check if admin queue is enabled */ static void check_for_admin_com_state(struct ena_adapter *adapter) { if (unlikely(ena_com_get_admin_running_state(adapter->ena_dev) == false)) { device_printf(adapter->pdev, "ENA admin queue is not in running state!\n"); counter_u64_add(adapter->dev_stats.admin_q_pause, 1); adapter->reset_reason = ENA_REGS_RESET_ADMIN_TO; adapter->trigger_reset = true; } } static int check_missing_comp_in_queue(struct ena_adapter *adapter, struct ena_ring *tx_ring) { struct 
static int
check_missing_comp_in_queue(struct ena_adapter *adapter,
    struct ena_ring *tx_ring)
{
	struct bintime curtime, time;
	struct ena_tx_buffer *tx_buf;
	uint32_t missed_tx = 0;
	int i;

	getbinuptime(&curtime);
	for (i = 0; i < tx_ring->ring_size; i++) {
		tx_buf = &tx_ring->tx_buffer_info[i];

		if (bintime_isset(&tx_buf->timestamp) == 0)
			continue;

		time = curtime;
		bintime_sub(&time, &tx_buf->timestamp);

		/* Check again if packet is still waiting */
		if (unlikely(bttosbt(time) > adapter->missing_tx_timeout)) {
			if (!tx_buf->print_once)
				ena_trace(ENA_WARNING, "Found a Tx that wasn't "
				    "completed on time, qid %d, index %d.\n",
				    tx_ring->qid, i);

			tx_buf->print_once = true;
			missed_tx++;
			counter_u64_add(tx_ring->tx_stats.missing_tx_comp, 1);

			if (unlikely(missed_tx >
			    adapter->missing_tx_threshold)) {
				device_printf(adapter->pdev,
				    "The number of lost tx completions "
				    "is above the threshold (%d > %d). "
				    "Reset the device\n",
				    missed_tx, adapter->missing_tx_threshold);
				adapter->reset_reason =
				    ENA_REGS_RESET_MISS_TX_CMPL;
				adapter->trigger_reset = true;
				return (EIO);
			}
		}
	}

	return (0);
}

/*
 * Check for TX which were not completed on time.
 * Timeout is defined by "missing_tx_timeout".
 * Reset will be performed if the number of incomplete
 * transactions exceeds "missing_tx_threshold".
 */
static void
check_for_missing_tx_completions(struct ena_adapter *adapter)
{
	struct ena_ring *tx_ring;
	int i, budget, rc;

	/* Make sure the driver doesn't turn off the device in another process */
	rmb();

	if (!adapter->up)
		return;

	if (adapter->trigger_reset)
		return;

	if (adapter->missing_tx_timeout == 0)
		return;

	budget = adapter->missing_tx_max_queues;

	for (i = adapter->next_monitored_tx_qid; i < adapter->num_queues; i++) {
		tx_ring = &adapter->tx_ring[i];

		rc = check_missing_comp_in_queue(adapter, tx_ring);
		if (unlikely(rc != 0))
			return;

		budget--;
		if (budget == 0) {
			i++;
			break;
		}
	}

	adapter->next_monitored_tx_qid = i % adapter->num_queues;
}

/* trigger deferred rx cleanup after 2 consecutive detections */
#define EMPTY_RX_REFILL 2

/*
 * For the rare case where the device runs out of Rx descriptors and the
 * msix handler failed to refill new Rx descriptors (due to a lack of memory
 * for example).
 * This case will lead to a deadlock:
 * the device won't send interrupts since all the new Rx packets will be
 * dropped, and the msix handler won't allocate new Rx descriptors so the
 * device won't be able to send new packets.
 *
 * When such a situation is detected, execute the rx cleanup task in
 * another thread.
 */
static void
check_for_empty_rx_ring(struct ena_adapter *adapter)
{
	struct ena_ring *rx_ring;
	int i, refill_required;

	if (!adapter->up)
		return;

	if (adapter->trigger_reset)
		return;

	for (i = 0; i < adapter->num_queues; i++) {
		rx_ring = &adapter->rx_ring[i];

		refill_required = ena_com_free_desc(rx_ring->ena_com_io_sq);
		if (unlikely(refill_required == (rx_ring->ring_size - 1))) {
			rx_ring->empty_rx_queue++;

			if (rx_ring->empty_rx_queue >= EMPTY_RX_REFILL) {
				counter_u64_add(rx_ring->rx_stats.empty_rx_ring,
				    1);

				device_printf(adapter->pdev,
				    "trigger refill for ring %d\n", i);

				taskqueue_enqueue(rx_ring->cmpl_tq,
				    &rx_ring->cmpl_task);
				rx_ring->empty_rx_queue = 0;
			}
		} else {
			rx_ring->empty_rx_queue = 0;
		}
	}
}

static void
ena_timer_service(void *data)
{
	struct ena_adapter *adapter = (struct ena_adapter *)data;
	struct ena_admin_host_info *host_info =
	    adapter->ena_dev->host_attr.host_info;

	check_for_missing_keep_alive(adapter);

	check_for_admin_com_state(adapter);

	check_for_missing_tx_completions(adapter);

	check_for_empty_rx_ring(adapter);

	if (host_info != NULL)
		ena_update_host_info(host_info, adapter->ifp);

	if (unlikely(adapter->trigger_reset)) {
		device_printf(adapter->pdev, "Trigger reset is on\n");
		taskqueue_enqueue(adapter->reset_tq, &adapter->reset_task);
		return;
	}

	/*
	 * Schedule another timeout one second from now.
	 */
	callout_schedule_sbt(&adapter->timer_service, SBT_1S, SBT_1S, 0);
}

static void
ena_reset_task(void *arg, int pending)
{
	struct ena_com_dev_get_features_ctx get_feat_ctx;
	struct ena_adapter *adapter = (struct ena_adapter *)arg;
	struct ena_com_dev *ena_dev = adapter->ena_dev;
	bool dev_up;
	int rc;

	if (unlikely(!adapter->trigger_reset)) {
		device_printf(adapter->pdev,
		    "device reset scheduled but trigger_reset is off\n");
		return;
	}

	sx_xlock(&adapter->ioctl_sx);

	callout_drain(&adapter->timer_service);

	dev_up = adapter->up;

	ena_com_set_admin_running_state(ena_dev, false);
	ena_down(adapter);
	ena_free_mgmnt_irq(adapter);
	ena_disable_msix(adapter);
	ena_com_abort_admin_commands(ena_dev);
	ena_com_wait_for_abort_completion(ena_dev);
	ena_com_admin_destroy(ena_dev);
	ena_com_mmio_reg_read_request_destroy(ena_dev);

	adapter->reset_reason = ENA_REGS_RESET_NORMAL;
	adapter->trigger_reset = false;

	/* Finished destroy part. Restart the device */
	rc = ena_device_init(adapter, adapter->pdev, &get_feat_ctx,
	    &adapter->wd_active);
	if (unlikely(rc != 0)) {
		device_printf(adapter->pdev,
		    "ENA device init failed! (err: %d)\n", rc);
		goto err_dev_free;
	}

	rc = ena_enable_msix_and_set_admin_interrupts(adapter,
	    adapter->num_queues);
	if (unlikely(rc != 0)) {
		device_printf(adapter->pdev, "Enable MSI-X failed\n");
		goto err_com_free;
	}

	/* If the interface was up before the reset, bring it up */
	if (dev_up) {
		rc = ena_up(adapter);
		if (unlikely(rc != 0)) {
			device_printf(adapter->pdev,
			    "Failed to create I/O queues\n");
			goto err_msix_free;
		}
	}

	callout_reset_sbt(&adapter->timer_service, SBT_1S, SBT_1S,
	    ena_timer_service, (void *)adapter, 0);

	sx_unlock(&adapter->ioctl_sx);

	return;

err_msix_free:
	ena_free_mgmnt_irq(adapter);
	ena_disable_msix(adapter);
err_com_free:
	ena_com_admin_destroy(ena_dev);
err_dev_free:
	device_printf(adapter->pdev, "ENA reset failed!\n");
	adapter->running = false;
	sx_unlock(&adapter->ioctl_sx);
}
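/*
 * Editor's note: check_for_empty_rx_ring() above only escalates after
 * EMPTY_RX_REFILL consecutive sightings of an empty ring, so a transient
 * state never triggers the deferred refill.  A hedged sketch of that
 * debounce pattern in isolation (hypothetical names, not driver code):
 */
#include <stdbool.h>

#define SKETCH_EMPTY_SEEN_THRESHOLD 2

struct sketch_ring_monitor {
	int empty_seen;		/* consecutive times the ring looked empty */
};

/* Returns true when a deferred refill should be scheduled. */
static bool
sketch_ring_monitor_observe(struct sketch_ring_monitor *m, bool ring_empty)
{
	if (!ring_empty) {
		m->empty_seen = 0;	/* any progress resets the counter */
		return (false);
	}
	if (++m->empty_seen < SKETCH_EMPTY_SEEN_THRESHOLD)
		return (false);
	m->empty_seen = 0;		/* rearm after triggering */
	return (true);
}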
/**
 * ena_attach - Device Initialization Routine
 * @pdev: device information struct
 *
 * Returns 0 on success, or an error code on failure.
 *
 * ena_attach initializes an adapter identified by a device structure.
 * The OS initialization, configuring of the adapter private structure,
 * and a hardware reset occur.
 **/
static int
ena_attach(device_t pdev)
{
	struct ena_com_dev_get_features_ctx get_feat_ctx;
	static int version_printed;
	struct ena_adapter *adapter;
	struct ena_com_dev *ena_dev = NULL;
	uint16_t tx_sgl_size = 0;
	uint16_t rx_sgl_size = 0;
	int io_queue_num;
	int queue_size;
	int rc;

	adapter = device_get_softc(pdev);
	adapter->pdev = pdev;

	mtx_init(&adapter->global_mtx, "ENA global mtx", NULL, MTX_DEF);
	sx_init(&adapter->ioctl_sx, "ENA ioctl sx");

	/* Set up the timer service */
	callout_init_mtx(&adapter->timer_service, &adapter->global_mtx, 0);
	adapter->keep_alive_timeout = DEFAULT_KEEP_ALIVE_TO;
	adapter->missing_tx_timeout = DEFAULT_TX_CMP_TO;
	adapter->missing_tx_max_queues = DEFAULT_TX_MONITORED_QUEUES;
	adapter->missing_tx_threshold = DEFAULT_TX_CMP_THRESHOLD;

	if (version_printed++ == 0)
		device_printf(pdev, "%s\n", ena_version);

	rc = ena_allocate_pci_resources(adapter);
	if (unlikely(rc != 0)) {
		device_printf(pdev, "PCI resource allocation failed!\n");
		ena_free_pci_resources(adapter);
		return (rc);
	}

	/* Allocate memory for ena_dev structure */
	ena_dev = malloc(sizeof(struct ena_com_dev), M_DEVBUF,
	    M_WAITOK | M_ZERO);

	adapter->ena_dev = ena_dev;
	ena_dev->dmadev = pdev;
	ena_dev->bus = malloc(sizeof(struct ena_bus), M_DEVBUF,
	    M_WAITOK | M_ZERO);

	/* Store register resources */
	((struct ena_bus *)(ena_dev->bus))->reg_bar_t =
	    rman_get_bustag(adapter->registers);
	((struct ena_bus *)(ena_dev->bus))->reg_bar_h =
	    rman_get_bushandle(adapter->registers);

	if (unlikely(((struct ena_bus *)(ena_dev->bus))->reg_bar_h == 0)) {
		device_printf(pdev, "failed to map registers bar\n");
		rc = ENXIO;
		goto err_bus_free;
	}

	ena_dev->tx_mem_queue_type = ENA_ADMIN_PLACEMENT_POLICY_HOST;

	/* Device initialization */
	rc = ena_device_init(adapter, pdev, &get_feat_ctx, &adapter->wd_active);
	if (unlikely(rc != 0)) {
		device_printf(pdev, "ENA device init failed! (err: %d)\n", rc);
		rc = ENXIO;
		goto err_bus_free;
	}
	adapter->keep_alive_timestamp = getsbinuptime();

	adapter->tx_offload_cap = get_feat_ctx.offload.tx;

	/* Set for sure that interface is not up */
	adapter->up = false;

	memcpy(adapter->mac_addr, get_feat_ctx.dev_attr.mac_addr,
	    ETHER_ADDR_LEN);

	/* calculate IO queue number to create */
	io_queue_num = ena_calc_io_queue_num(adapter, &get_feat_ctx);

	ENA_ASSERT(io_queue_num > 0, "Invalid queue number: %d\n",
	    io_queue_num);
	adapter->num_queues = io_queue_num;

	adapter->max_mtu = get_feat_ctx.dev_attr.max_mtu;

	/* calculate ring sizes */
	queue_size = ena_calc_queue_size(adapter, &tx_sgl_size,
	    &rx_sgl_size, &get_feat_ctx);
	if (unlikely((queue_size <= 0) || (io_queue_num <= 0))) {
		rc = ENA_COM_FAULT;
		goto err_com_free;
	}

	adapter->reset_reason = ENA_REGS_RESET_NORMAL;

	adapter->tx_ring_size = queue_size;
	adapter->rx_ring_size = queue_size;

	adapter->max_tx_sgl_size = tx_sgl_size;
	adapter->max_rx_sgl_size = rx_sgl_size;

	/* set up dma tags for rx and tx buffers */
	rc = ena_setup_tx_dma_tag(adapter);
	if (unlikely(rc != 0)) {
		device_printf(pdev, "Failed to create TX DMA tag\n");
		goto err_com_free;
	}

	rc = ena_setup_rx_dma_tag(adapter);
	if (unlikely(rc != 0)) {
		device_printf(pdev, "Failed to create RX DMA tag\n");
		goto err_tx_tag_free;
	}

	/* initialize rings basic information */
	device_printf(pdev, "initialize %d io queues\n", io_queue_num);
	ena_init_io_rings(adapter);

	/* setup network interface */
	rc = ena_setup_ifnet(pdev, adapter, &get_feat_ctx);
	if (unlikely(rc != 0)) {
		device_printf(pdev, "Error with network interface setup\n");
		goto err_io_free;
	}

	rc = ena_enable_msix_and_set_admin_interrupts(adapter, io_queue_num);
	if (unlikely(rc != 0)) {
		device_printf(pdev,
		    "Failed to enable and set the admin interrupts\n");
		goto err_ifp_free;
	}

	/* Initialize reset task queue */
	TASK_INIT(&adapter->reset_task, 0, ena_reset_task, adapter);
	adapter->reset_tq = taskqueue_create("ena_reset_enqueue",
	    M_WAITOK | M_ZERO, taskqueue_thread_enqueue, &adapter->reset_tq);
	taskqueue_start_threads(&adapter->reset_tq, 1, PI_NET, "%s rstq",
	    device_get_nameunit(adapter->pdev));

	/* Initialize statistics */
	ena_alloc_counters((counter_u64_t *)&adapter->dev_stats,
	    sizeof(struct ena_stats_dev));
	ena_alloc_counters((counter_u64_t *)&adapter->hw_stats,
	    sizeof(struct ena_hw_stats));
	ena_sysctl_add_nodes(adapter);

	/* Tell the stack that the interface is not active */
	if_setdrvflagbits(adapter->ifp, IFF_DRV_OACTIVE, IFF_DRV_RUNNING);

	adapter->running = true;
	return (0);

err_ifp_free:
	if_detach(adapter->ifp);
	if_free(adapter->ifp);
err_io_free:
	ena_free_all_io_rings_resources(adapter);
	ena_free_rx_dma_tag(adapter);
err_tx_tag_free:
	ena_free_tx_dma_tag(adapter);
err_com_free:
	ena_com_admin_destroy(ena_dev);
	ena_com_delete_host_info(ena_dev);
	ena_com_mmio_reg_read_request_destroy(ena_dev);
err_bus_free:
	free(ena_dev->bus, M_DEVBUF);
	free(ena_dev, M_DEVBUF);
	ena_free_pci_resources(adapter);

	return (rc);
}
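/*
 * Editor's note: ena_attach() above uses the usual kernel error-unwind
 * idiom -- one goto label per acquired resource, executed in reverse
 * acquisition order.  A minimal sketch of the shape with hypothetical
 * resource helpers (trivial stubs keep the sketch compilable):
 */
static int  sketch_alloc_a(void) { return (0); }
static int  sketch_alloc_b(void) { return (0); }
static int  sketch_alloc_c(void) { return (0); }
static void sketch_free_a(void) { }
static void sketch_free_b(void) { }

static int
sketch_attach(void)
{
	int error;

	if ((error = sketch_alloc_a()) != 0)
		goto fail;
	if ((error = sketch_alloc_b()) != 0)
		goto free_a;
	if ((error = sketch_alloc_c()) != 0)
		goto free_b;
	return (0);

free_b:
	sketch_free_b();	/* undo b, then fall through to undo a */
free_a:
	sketch_free_a();
fail:
	return (error);
}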
/**
 * ena_detach - Device Removal Routine
 * @pdev: device information struct
 *
 * ena_detach is called by the device subsystem to alert the driver
 * that it should release a PCI device.
 **/
static int
ena_detach(device_t pdev)
{
	struct ena_adapter *adapter = device_get_softc(pdev);
	struct ena_com_dev *ena_dev = adapter->ena_dev;
	int rc;

	/* Make sure VLANS are not using driver */
	if (adapter->ifp->if_vlantrunk != NULL) {
		device_printf(adapter->pdev, "VLAN is in use, detach first\n");
		return (EBUSY);
	}

	/* Free reset task and callout */
	callout_drain(&adapter->timer_service);
	while (taskqueue_cancel(adapter->reset_tq, &adapter->reset_task, NULL))
		taskqueue_drain(adapter->reset_tq, &adapter->reset_task);
	taskqueue_free(adapter->reset_tq);

	sx_xlock(&adapter->ioctl_sx);
	ena_down(adapter);
	sx_unlock(&adapter->ioctl_sx);

	if (adapter->ifp != NULL) {
		ether_ifdetach(adapter->ifp);
		if_free(adapter->ifp);
	}

	ena_free_all_io_rings_resources(adapter);

	ena_free_counters((counter_u64_t *)&adapter->hw_stats,
	    sizeof(struct ena_hw_stats));
	ena_free_counters((counter_u64_t *)&adapter->dev_stats,
	    sizeof(struct ena_stats_dev));

	if (likely(adapter->rss_support))
		ena_com_rss_destroy(ena_dev);

	rc = ena_free_rx_dma_tag(adapter);
	if (unlikely(rc != 0))
		device_printf(adapter->pdev,
		    "Unmapped RX DMA tag associations\n");

	rc = ena_free_tx_dma_tag(adapter);
	if (unlikely(rc != 0))
		device_printf(adapter->pdev,
		    "Unmapped TX DMA tag associations\n");

	/* Reset the device only if the device is running. */
	if (adapter->running)
		ena_com_dev_reset(ena_dev, adapter->reset_reason);

	ena_com_delete_host_info(ena_dev);

	ena_free_irqs(adapter);

	ena_com_abort_admin_commands(ena_dev);

	ena_com_wait_for_abort_completion(ena_dev);

	ena_com_admin_destroy(ena_dev);

	ena_com_mmio_reg_read_request_destroy(ena_dev);

	ena_free_pci_resources(adapter);

	mtx_destroy(&adapter->global_mtx);
	sx_destroy(&adapter->ioctl_sx);

	if (ena_dev->bus != NULL)
		free(ena_dev->bus, M_DEVBUF);

	if (ena_dev != NULL)
		free(ena_dev, M_DEVBUF);

	return (bus_generic_detach(pdev));
}

/******************************************************************************
 ******************************** AENQ Handlers *******************************
 *****************************************************************************/
/**
 * ena_update_on_link_change:
 * Notify the network interface about the change in link status
 **/
static void
ena_update_on_link_change(void *adapter_data,
    struct ena_admin_aenq_entry *aenq_e)
{
	struct ena_adapter *adapter = (struct ena_adapter *)adapter_data;
	struct ena_admin_aenq_link_change_desc *aenq_desc;
	int status;
	if_t ifp;

	aenq_desc = (struct ena_admin_aenq_link_change_desc *)aenq_e;
	ifp = adapter->ifp;
	status = aenq_desc->flags &
	    ENA_ADMIN_AENQ_LINK_CHANGE_DESC_LINK_STATUS_MASK;

	if (status != 0) {
		device_printf(adapter->pdev, "link is UP\n");
		if_link_state_change(ifp, LINK_STATE_UP);
	} else if (status == 0) {
		device_printf(adapter->pdev, "link is DOWN\n");
		if_link_state_change(ifp, LINK_STATE_DOWN);
	} else {
		device_printf(adapter->pdev, "invalid value recvd\n");
		BUG();
	}

	adapter->link_status = status;
}

/**
 * This handler will be called for an unknown event group or for
 * unimplemented handlers
 **/
static void
unimplemented_aenq_handler(void *data, struct ena_admin_aenq_entry *aenq_e)
{
	return;
}

static struct ena_aenq_handlers aenq_handlers = {
	.handlers = {
		[ENA_ADMIN_LINK_CHANGE] = ena_update_on_link_change,
		[ENA_ADMIN_KEEP_ALIVE] = ena_keep_alive_wd,
	},
	.unimplemented_handler = unimplemented_aenq_handler
};

/*********************************************************************
 *  FreeBSD Device Interface Entry Points
 *********************************************************************/

static device_method_t ena_methods[] = {
	/* Device interface */
DEVMETHOD(device_probe, ena_probe), DEVMETHOD(device_attach, ena_attach), DEVMETHOD(device_detach, ena_detach), DEVMETHOD_END }; static driver_t ena_driver = { "ena", ena_methods, sizeof(struct ena_adapter), }; devclass_t ena_devclass; DRIVER_MODULE(ena, pci, ena_driver, ena_devclass, 0, 0); MODULE_PNP_INFO("U16:vendor;U16:device", pci, ena, ena_vendor_info_array, nitems(ena_vendor_info_array) - 1); MODULE_DEPEND(ena, pci, 1, 1, 1); MODULE_DEPEND(ena, ether, 1, 1, 1); /*********************************************************************/ Index: projects/import-googletest-1.8.1/sys/dev/ena/ena.h =================================================================== --- projects/import-googletest-1.8.1/sys/dev/ena/ena.h (revision 344270) +++ projects/import-googletest-1.8.1/sys/dev/ena/ena.h (revision 344271) @@ -1,404 +1,404 @@ /*- * BSD LICENSE * * Copyright (c) 2015-2017 Amazon.com, Inc. or its affiliates. * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. * * $FreeBSD$ * */ #ifndef ENA_H #define ENA_H #include #include "ena-com/ena_com.h" #include "ena-com/ena_eth_com.h" #define DRV_MODULE_VER_MAJOR 0 #define DRV_MODULE_VER_MINOR 8 -#define DRV_MODULE_VER_SUBMINOR 2 +#define DRV_MODULE_VER_SUBMINOR 3 #define DRV_MODULE_NAME "ena" #ifndef DRV_MODULE_VERSION #define DRV_MODULE_VERSION \ __XSTRING(DRV_MODULE_VER_MAJOR) "." \ __XSTRING(DRV_MODULE_VER_MINOR) "." \ __XSTRING(DRV_MODULE_VER_SUBMINOR) #endif #define DEVICE_NAME "Elastic Network Adapter (ENA)" #define DEVICE_DESC "ENA adapter" /* Calculate DMA mask - width for ena cannot exceed 48, so it is safe */ #define ENA_DMA_BIT_MASK(x) ((1ULL << (x)) - 1ULL) /* 1 for AENQ + ADMIN */ #define ENA_ADMIN_MSIX_VEC 1 #define ENA_MAX_MSIX_VEC(io_queues) (ENA_ADMIN_MSIX_VEC + (io_queues)) #define ENA_REG_BAR 0 #define ENA_MEM_BAR 2 #define ENA_BUS_DMA_SEGS 32 #define ENA_DEFAULT_RING_SIZE 1024 #define ENA_RX_REFILL_THRESH_DIVIDER 8 #define ENA_IRQNAME_SIZE 40 #define ENA_PKT_MAX_BUFS 19 #define ENA_RX_RSS_TABLE_LOG_SIZE 7 #define ENA_RX_RSS_TABLE_SIZE (1 << ENA_RX_RSS_TABLE_LOG_SIZE) #define ENA_HASH_KEY_SIZE 40 #define ENA_MAX_FRAME_LEN 10000 #define ENA_MIN_FRAME_LEN 60 #define ENA_TX_CLEANUP_THRESHOLD 128 #define DB_THRESHOLD 64 #define TX_COMMIT 32 /* * TX budget for cleaning. 
It should be half of the RX budget to reduce the amount
 * of TCP retransmissions.
 */
#define	TX_BUDGET	128

/* RX cleanup budget. -1 stands for infinity. */
#define	RX_BUDGET	256

/*
 * How many times we can repeat cleanup in the io irq handling routine if the
 * RX or TX budget was depleted.
 */
#define	CLEAN_BUDGET	8

#define	RX_IRQ_INTERVAL	20
#define	TX_IRQ_INTERVAL	50

#define	ENA_MIN_MTU		128

#define	ENA_TSO_MAXSIZE		65536

#define	ENA_MMIO_DISABLE_REG_READ	BIT(0)

#define	ENA_TX_RING_IDX_NEXT(idx, ring_size)	(((idx) + 1) & ((ring_size) - 1))

#define	ENA_RX_RING_IDX_NEXT(idx, ring_size)	(((idx) + 1) & ((ring_size) - 1))

#define	ENA_IO_TXQ_IDX(q)		(2 * (q))
#define	ENA_IO_RXQ_IDX(q)		(2 * (q) + 1)

#define	ENA_MGMNT_IRQ_IDX		0
#define	ENA_IO_IRQ_FIRST_IDX		1
#define	ENA_IO_IRQ_IDX(q)		(ENA_IO_IRQ_FIRST_IDX + (q))

/*
 * ENA device should send keep alive msg every 1 sec.
 * We wait for 6 sec just to be on the safe side.
 */
#define	DEFAULT_KEEP_ALIVE_TO		(SBT_1S * 6)

/* Time before concluding the transmitter is hung. */
#define	DEFAULT_TX_CMP_TO		(SBT_1S * 5)

/* Number of queues to check for missing tx completions per timer tick */
#define	DEFAULT_TX_MONITORED_QUEUES	(4)

/* Max number of timed-out packets before device reset */
#define	DEFAULT_TX_CMP_THRESHOLD	(128)

/*
 * Supported PCI vendor and device IDs
 */
#define	PCI_VENDOR_ID_AMAZON	0x1d0f
#define	PCI_DEV_ID_ENA_PF	0x0ec2
#define	PCI_DEV_ID_ENA_LLQ_PF	0x1ec2
#define	PCI_DEV_ID_ENA_VF	0xec20
#define	PCI_DEV_ID_ENA_LLQ_VF	0xec21

struct msix_entry {
	int entry;
	int vector;
};

typedef struct _ena_vendor_info_t {
	uint16_t vendor_id;
	uint16_t device_id;
	unsigned int index;
} ena_vendor_info_t;

struct ena_irq {
	/* Interrupt resources */
	struct resource *res;
	driver_intr_t *handler;
	void *data;
	void *cookie;
	unsigned int vector;
	bool requested;
	int cpu;
	char name[ENA_IRQNAME_SIZE];
};

struct ena_que {
	struct ena_adapter *adapter;
	struct ena_ring *tx_ring;
	struct ena_ring *rx_ring;
	uint32_t id;
	int cpu;
};

struct ena_tx_buffer {
	struct mbuf *mbuf;
	/*
	 * # of ena desc for this specific mbuf
	 * (includes data desc and metadata desc)
	 */
	unsigned int tx_descs;
	/* # of buffers used by this mbuf */
	unsigned int num_of_bufs;
	bus_dmamap_t map;

	/* Used to detect missing tx packets */
	struct bintime timestamp;
	bool print_once;

	struct ena_com_buf bufs[ENA_PKT_MAX_BUFS];
} __aligned(CACHE_LINE_SIZE);

struct ena_rx_buffer {
	struct mbuf *mbuf;
	bus_dmamap_t map;
	struct ena_com_buf ena_buf;
} __aligned(CACHE_LINE_SIZE);

struct ena_stats_tx {
	counter_u64_t cnt;
	counter_u64_t bytes;
	counter_u64_t prepare_ctx_err;
	counter_u64_t dma_mapping_err;
	counter_u64_t doorbells;
	counter_u64_t missing_tx_comp;
	counter_u64_t bad_req_id;
	counter_u64_t collapse;
	counter_u64_t collapse_err;
};

struct ena_stats_rx {
	counter_u64_t cnt;
	counter_u64_t bytes;
	counter_u64_t refil_partial;
	counter_u64_t bad_csum;
	counter_u64_t mjum_alloc_fail;
	counter_u64_t mbuf_alloc_fail;
	counter_u64_t dma_mapping_err;
	counter_u64_t bad_desc_num;
	counter_u64_t bad_req_id;
	counter_u64_t empty_rx_ring;
};
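/*
 * Editor's note: the ENA_*_RING_IDX_NEXT() macros above rely on the ring
 * size being a power of two, so the wrap-around is a cheap mask instead of
 * a modulo.  A hedged sketch of the same trick (hypothetical name, not part
 * of the driver):
 */
#include <stdint.h>

/* Equivalent to (idx + 1) % size, valid only when size is a power of 2. */
static inline uint16_t
sketch_ring_next(uint16_t idx, uint16_t size)
{
	return ((idx + 1) & (size - 1));
}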
struct ena_ring {
	/* Holds the empty requests for TX/RX out of order completions */
	union {
		uint16_t *free_tx_ids;
		uint16_t *free_rx_ids;
	};
	struct ena_com_dev *ena_dev;
	struct ena_adapter *adapter;
	struct ena_com_io_cq *ena_com_io_cq;
	struct ena_com_io_sq *ena_com_io_sq;

	uint16_t qid;

	/* Determines if device will use LLQ or normal mode for TX */
	enum ena_admin_placement_policy_type tx_mem_queue_type;
	/* The maximum length the driver can push to the device (For LLQ) */
	uint8_t tx_max_header_size;

	struct ena_com_rx_buf_info ena_bufs[ENA_PKT_MAX_BUFS];

	/*
	 * Fields used for Adaptive Interrupt Modulation - to be implemented
	 * in future releases
	 */
	uint32_t smoothed_interval;
	enum ena_intr_moder_level moder_tbl_idx;

	struct ena_que *que;
	struct lro_ctrl lro;

	uint16_t next_to_use;
	uint16_t next_to_clean;

	union {
		struct ena_tx_buffer *tx_buffer_info; /* context of tx packet */
		struct ena_rx_buffer *rx_buffer_info; /* context of rx packet */
	};
	int ring_size; /* number of tx/rx_buffer_info's entries */

	struct buf_ring *br; /* only for TX */

	struct mtx ring_mtx;
	char mtx_name[16];

	union {
		struct {
			struct task enqueue_task;
			struct taskqueue *enqueue_tq;
		};
		struct {
			struct task cmpl_task;
			struct taskqueue *cmpl_tq;
		};
	};

	union {
		struct ena_stats_tx tx_stats;
		struct ena_stats_rx rx_stats;
	};

	int empty_rx_queue;
} __aligned(CACHE_LINE_SIZE);

struct ena_stats_dev {
	counter_u64_t wd_expired;
	counter_u64_t interface_up;
	counter_u64_t interface_down;
	counter_u64_t admin_q_pause;
};

struct ena_hw_stats {
	counter_u64_t rx_packets;
	counter_u64_t tx_packets;
	counter_u64_t rx_bytes;
	counter_u64_t tx_bytes;
	counter_u64_t rx_drops;
};

/* Board specific private data structure */
struct ena_adapter {
	struct ena_com_dev *ena_dev;

	/* OS defined structs */
	if_t ifp;
	device_t pdev;
	struct ifmedia media;

	/* OS resources */
	struct resource *memory;
	struct resource *registers;

	struct mtx global_mtx;
	struct sx ioctl_sx;

	/* MSI-X */
	uint32_t msix_enabled;
	struct msix_entry *msix_entries;
	int msix_vecs;

	/* DMA tags used throughout the driver adapter for Tx and Rx */
	bus_dma_tag_t tx_buf_tag;
	bus_dma_tag_t rx_buf_tag;
	int dma_width;

	uint32_t max_mtu;

	uint16_t max_tx_sgl_size;
	uint16_t max_rx_sgl_size;

	uint32_t tx_offload_cap;

	/* Tx fast path data */
	int num_queues;

	unsigned int tx_ring_size;
	unsigned int rx_ring_size;

	/* RSS */
	uint8_t rss_ind_tbl[ENA_RX_RSS_TABLE_SIZE];
	bool rss_support;

	uint8_t mac_addr[ETHER_ADDR_LEN];
	/* mdio and phy */

	bool link_status;
	bool trigger_reset;
	bool up;
	bool running;

	/* Queue will represent one TX and one RX ring */
	struct ena_que que[ENA_MAX_NUM_IO_QUEUES]
	    __aligned(CACHE_LINE_SIZE);

	/* TX */
	struct ena_ring tx_ring[ENA_MAX_NUM_IO_QUEUES]
	    __aligned(CACHE_LINE_SIZE);

	/* RX */
	struct ena_ring rx_ring[ENA_MAX_NUM_IO_QUEUES]
	    __aligned(CACHE_LINE_SIZE);

	struct ena_irq irq_tbl[ENA_MAX_MSIX_VEC(ENA_MAX_NUM_IO_QUEUES)];

	/* Timer service */
	struct callout timer_service;
	sbintime_t keep_alive_timestamp;
	uint32_t next_monitored_tx_qid;
	struct task reset_task;
	struct taskqueue *reset_tq;
	int wd_active;
	sbintime_t keep_alive_timeout;
	sbintime_t missing_tx_timeout;
	uint32_t missing_tx_max_queues;
	uint32_t missing_tx_threshold;

	/* Statistics */
	struct ena_stats_dev dev_stats;
	struct ena_hw_stats hw_stats;

	enum ena_regs_reset_reason_types reset_reason;
};

#define	ENA_RING_MTX_LOCK(_ring)	mtx_lock(&(_ring)->ring_mtx)
#define	ENA_RING_MTX_TRYLOCK(_ring)	mtx_trylock(&(_ring)->ring_mtx)
#define	ENA_RING_MTX_UNLOCK(_ring)	mtx_unlock(&(_ring)->ring_mtx)

static inline int
ena_mbuf_count(struct mbuf *mbuf)
{
	int count = 1;

	while ((mbuf = mbuf->m_next) != NULL)
		++count;

	return count;
}

#endif /* !(ENA_H) */
Index: projects/import-googletest-1.8.1/sys/dev/ixl/if_ixl.c
===================================================================
--- projects/import-googletest-1.8.1/sys/dev/ixl/if_ixl.c	(revision 344270)
+++ projects/import-googletest-1.8.1/sys/dev/ixl/if_ixl.c	(revision 344271)
@@ -1,1697 +1,1697 @@
/******************************************************************************

  Copyright (c) 2013-2018, Intel Corporation All rights
reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. Neither the name of the Intel Corporation nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. ******************************************************************************/ /*$FreeBSD$*/ #include "ixl.h" #include "ixl_pf.h" #ifdef IXL_IW #include "ixl_iw.h" #include "ixl_iw_int.h" #endif #ifdef PCI_IOV #include "ixl_pf_iov.h" #endif /********************************************************************* * Driver version *********************************************************************/ #define IXL_DRIVER_VERSION_MAJOR 2 #define IXL_DRIVER_VERSION_MINOR 1 #define IXL_DRIVER_VERSION_BUILD 0 #define IXL_DRIVER_VERSION_STRING \ __XSTRING(IXL_DRIVER_VERSION_MAJOR) "." \ __XSTRING(IXL_DRIVER_VERSION_MINOR) "." 
\ __XSTRING(IXL_DRIVER_VERSION_BUILD) "-k" /********************************************************************* * PCI Device ID Table * * Used by probe to select devices to load on * * ( Vendor ID, Device ID, Branding String ) *********************************************************************/ static pci_vendor_info_t ixl_vendor_info_array[] = { PVIDV(I40E_INTEL_VENDOR_ID, I40E_DEV_ID_SFP_XL710, "Intel(R) Ethernet Controller X710 for 10GbE SFP+"), PVIDV(I40E_INTEL_VENDOR_ID, I40E_DEV_ID_KX_B, "Intel(R) Ethernet Controller XL710 for 40GbE backplane"), PVIDV(I40E_INTEL_VENDOR_ID, I40E_DEV_ID_KX_C, "Intel(R) Ethernet Controller X710 for 10GbE backplane"), PVIDV(I40E_INTEL_VENDOR_ID, I40E_DEV_ID_QSFP_A, "Intel(R) Ethernet Controller XL710 for 40GbE QSFP+"), PVIDV(I40E_INTEL_VENDOR_ID, I40E_DEV_ID_QSFP_B, "Intel(R) Ethernet Controller XL710 for 40GbE QSFP+"), PVIDV(I40E_INTEL_VENDOR_ID, I40E_DEV_ID_QSFP_C, "Intel(R) Ethernet Controller X710 for 10GbE QSFP+"), PVIDV(I40E_INTEL_VENDOR_ID, I40E_DEV_ID_10G_BASE_T, "Intel(R) Ethernet Controller X710 for 10GBASE-T"), PVIDV(I40E_INTEL_VENDOR_ID, I40E_DEV_ID_10G_BASE_T4, "Intel(R) Ethernet Controller X710/X557-AT 10GBASE-T"), PVIDV(I40E_INTEL_VENDOR_ID, I40E_DEV_ID_KX_X722, "Intel(R) Ethernet Connection X722 for 10GbE backplane"), PVIDV(I40E_INTEL_VENDOR_ID, I40E_DEV_ID_QSFP_X722, "Intel(R) Ethernet Connection X722 for 10GbE QSFP+"), PVIDV(I40E_INTEL_VENDOR_ID, I40E_DEV_ID_SFP_X722, "Intel(R) Ethernet Connection X722 for 10GbE SFP+"), PVIDV(I40E_INTEL_VENDOR_ID, I40E_DEV_ID_1G_BASE_T_X722, "Intel(R) Ethernet Connection X722 for 1GbE"), PVIDV(I40E_INTEL_VENDOR_ID, I40E_DEV_ID_10G_BASE_T_X722, "Intel(R) Ethernet Connection X722 for 10GBASE-T"), PVIDV(I40E_INTEL_VENDOR_ID, I40E_DEV_ID_SFP_I_X722, "Intel(R) Ethernet Connection X722 for 10GbE SFP+"), PVIDV(I40E_INTEL_VENDOR_ID, I40E_DEV_ID_25G_B, "Intel(R) Ethernet Controller XXV710 for 25GbE backplane"), PVIDV(I40E_INTEL_VENDOR_ID, I40E_DEV_ID_25G_SFP28, "Intel(R) Ethernet Controller XXV710 for 25GbE SFP28"), /* required last entry */ PVID_END }; /********************************************************************* * Function prototypes *********************************************************************/ /*** IFLIB interface ***/ static void *ixl_register(device_t dev); static int ixl_if_attach_pre(if_ctx_t ctx); static int ixl_if_attach_post(if_ctx_t ctx); static int ixl_if_detach(if_ctx_t ctx); static int ixl_if_shutdown(if_ctx_t ctx); static int ixl_if_suspend(if_ctx_t ctx); static int ixl_if_resume(if_ctx_t ctx); static int ixl_if_msix_intr_assign(if_ctx_t ctx, int msix); static void ixl_if_enable_intr(if_ctx_t ctx); static void ixl_if_disable_intr(if_ctx_t ctx); static int ixl_if_rx_queue_intr_enable(if_ctx_t ctx, uint16_t rxqid); static int ixl_if_tx_queue_intr_enable(if_ctx_t ctx, uint16_t txqid); static int ixl_if_tx_queues_alloc(if_ctx_t ctx, caddr_t *vaddrs, uint64_t *paddrs, int ntxqs, int ntxqsets); static int ixl_if_rx_queues_alloc(if_ctx_t ctx, caddr_t *vaddrs, uint64_t *paddrs, int nqs, int nqsets); static void ixl_if_queues_free(if_ctx_t ctx); static void ixl_if_update_admin_status(if_ctx_t ctx); static void ixl_if_multi_set(if_ctx_t ctx); static int ixl_if_mtu_set(if_ctx_t ctx, uint32_t mtu); static void ixl_if_media_status(if_ctx_t ctx, struct ifmediareq *ifmr); static int ixl_if_media_change(if_ctx_t ctx); static int ixl_if_promisc_set(if_ctx_t ctx, int flags); static void ixl_if_timer(if_ctx_t ctx, uint16_t qid); static void ixl_if_vlan_register(if_ctx_t ctx, u16 vtag); static 
void ixl_if_vlan_unregister(if_ctx_t ctx, u16 vtag); static uint64_t ixl_if_get_counter(if_ctx_t ctx, ift_counter cnt); static int ixl_if_i2c_req(if_ctx_t ctx, struct ifi2creq *req); static int ixl_if_priv_ioctl(if_ctx_t ctx, u_long command, caddr_t data); #ifdef PCI_IOV static void ixl_if_vflr_handle(if_ctx_t ctx); #endif /*** Other ***/ static int ixl_mc_filter_apply(void *arg, struct ifmultiaddr *ifma, int); static void ixl_save_pf_tunables(struct ixl_pf *); static int ixl_allocate_pci_resources(struct ixl_pf *); /********************************************************************* * FreeBSD Device Interface Entry Points *********************************************************************/ static device_method_t ixl_methods[] = { /* Device interface */ DEVMETHOD(device_register, ixl_register), DEVMETHOD(device_probe, iflib_device_probe), DEVMETHOD(device_attach, iflib_device_attach), DEVMETHOD(device_detach, iflib_device_detach), DEVMETHOD(device_shutdown, iflib_device_shutdown), #ifdef PCI_IOV DEVMETHOD(pci_iov_init, iflib_device_iov_init), DEVMETHOD(pci_iov_uninit, iflib_device_iov_uninit), DEVMETHOD(pci_iov_add_vf, iflib_device_iov_add_vf), #endif DEVMETHOD_END }; static driver_t ixl_driver = { "ixl", ixl_methods, sizeof(struct ixl_pf), }; devclass_t ixl_devclass; DRIVER_MODULE(ixl, pci, ixl_driver, ixl_devclass, 0, 0); IFLIB_PNP_INFO(pci, ixl, ixl_vendor_info_array); MODULE_VERSION(ixl, 3); MODULE_DEPEND(ixl, pci, 1, 1, 1); MODULE_DEPEND(ixl, ether, 1, 1, 1); MODULE_DEPEND(ixl, iflib, 1, 1, 1); static device_method_t ixl_if_methods[] = { DEVMETHOD(ifdi_attach_pre, ixl_if_attach_pre), DEVMETHOD(ifdi_attach_post, ixl_if_attach_post), DEVMETHOD(ifdi_detach, ixl_if_detach), DEVMETHOD(ifdi_shutdown, ixl_if_shutdown), DEVMETHOD(ifdi_suspend, ixl_if_suspend), DEVMETHOD(ifdi_resume, ixl_if_resume), DEVMETHOD(ifdi_init, ixl_if_init), DEVMETHOD(ifdi_stop, ixl_if_stop), DEVMETHOD(ifdi_msix_intr_assign, ixl_if_msix_intr_assign), DEVMETHOD(ifdi_intr_enable, ixl_if_enable_intr), DEVMETHOD(ifdi_intr_disable, ixl_if_disable_intr), DEVMETHOD(ifdi_rx_queue_intr_enable, ixl_if_rx_queue_intr_enable), DEVMETHOD(ifdi_tx_queue_intr_enable, ixl_if_tx_queue_intr_enable), DEVMETHOD(ifdi_tx_queues_alloc, ixl_if_tx_queues_alloc), DEVMETHOD(ifdi_rx_queues_alloc, ixl_if_rx_queues_alloc), DEVMETHOD(ifdi_queues_free, ixl_if_queues_free), DEVMETHOD(ifdi_update_admin_status, ixl_if_update_admin_status), DEVMETHOD(ifdi_multi_set, ixl_if_multi_set), DEVMETHOD(ifdi_mtu_set, ixl_if_mtu_set), DEVMETHOD(ifdi_media_status, ixl_if_media_status), DEVMETHOD(ifdi_media_change, ixl_if_media_change), DEVMETHOD(ifdi_promisc_set, ixl_if_promisc_set), DEVMETHOD(ifdi_timer, ixl_if_timer), DEVMETHOD(ifdi_vlan_register, ixl_if_vlan_register), DEVMETHOD(ifdi_vlan_unregister, ixl_if_vlan_unregister), DEVMETHOD(ifdi_get_counter, ixl_if_get_counter), DEVMETHOD(ifdi_i2c_req, ixl_if_i2c_req), DEVMETHOD(ifdi_priv_ioctl, ixl_if_priv_ioctl), #ifdef PCI_IOV DEVMETHOD(ifdi_iov_init, ixl_if_iov_init), DEVMETHOD(ifdi_iov_uninit, ixl_if_iov_uninit), DEVMETHOD(ifdi_iov_vf_add, ixl_if_iov_vf_add), DEVMETHOD(ifdi_vflr_handle, ixl_if_vflr_handle), #endif // ifdi_led_func // ifdi_debug DEVMETHOD_END }; static driver_t ixl_if_driver = { "ixl_if", ixl_if_methods, sizeof(struct ixl_pf) }; /* ** TUNEABLE PARAMETERS: */ static SYSCTL_NODE(_hw, OID_AUTO, ixl, CTLFLAG_RD, 0, "ixl driver parameters"); /* * Leave this on unless you need to send flow control * frames (or other control frames) from software */ static int ixl_enable_tx_fc_filter = 1; 
TUNABLE_INT("hw.ixl.enable_tx_fc_filter", &ixl_enable_tx_fc_filter); SYSCTL_INT(_hw_ixl, OID_AUTO, enable_tx_fc_filter, CTLFLAG_RDTUN, &ixl_enable_tx_fc_filter, 0, "Filter out packets with Ethertype 0x8808 from being sent out by non-HW sources"); static int ixl_i2c_access_method = 0; TUNABLE_INT("hw.ixl.i2c_access_method", &ixl_i2c_access_method); SYSCTL_INT(_hw_ixl, OID_AUTO, i2c_access_method, CTLFLAG_RDTUN, &ixl_i2c_access_method, 0, IXL_SYSCTL_HELP_I2C_METHOD); static int ixl_enable_vf_loopback = 1; TUNABLE_INT("hw.ixl.enable_vf_loopback", &ixl_enable_vf_loopback); SYSCTL_INT(_hw_ixl, OID_AUTO, enable_vf_loopback, CTLFLAG_RDTUN, &ixl_enable_vf_loopback, 0, IXL_SYSCTL_HELP_VF_LOOPBACK); /* * Different method for processing TX descriptor * completion. */ static int ixl_enable_head_writeback = 1; TUNABLE_INT("hw.ixl.enable_head_writeback", &ixl_enable_head_writeback); SYSCTL_INT(_hw_ixl, OID_AUTO, enable_head_writeback, CTLFLAG_RDTUN, &ixl_enable_head_writeback, 0, "For detecting last completed TX descriptor by hardware, use value written by HW instead of checking descriptors"); static int ixl_core_debug_mask = 0; TUNABLE_INT("hw.ixl.core_debug_mask", &ixl_core_debug_mask); SYSCTL_INT(_hw_ixl, OID_AUTO, core_debug_mask, CTLFLAG_RDTUN, &ixl_core_debug_mask, 0, "Display debug statements that are printed in non-shared code"); static int ixl_shared_debug_mask = 0; TUNABLE_INT("hw.ixl.shared_debug_mask", &ixl_shared_debug_mask); SYSCTL_INT(_hw_ixl, OID_AUTO, shared_debug_mask, CTLFLAG_RDTUN, &ixl_shared_debug_mask, 0, "Display debug statements that are printed in shared code"); #if 0 /* ** Controls for Interrupt Throttling ** - true/false for dynamic adjustment ** - default values for static ITR */ static int ixl_dynamic_rx_itr = 0; TUNABLE_INT("hw.ixl.dynamic_rx_itr", &ixl_dynamic_rx_itr); SYSCTL_INT(_hw_ixl, OID_AUTO, dynamic_rx_itr, CTLFLAG_RDTUN, &ixl_dynamic_rx_itr, 0, "Dynamic RX Interrupt Rate"); static int ixl_dynamic_tx_itr = 0; TUNABLE_INT("hw.ixl.dynamic_tx_itr", &ixl_dynamic_tx_itr); SYSCTL_INT(_hw_ixl, OID_AUTO, dynamic_tx_itr, CTLFLAG_RDTUN, &ixl_dynamic_tx_itr, 0, "Dynamic TX Interrupt Rate"); #endif static int ixl_rx_itr = IXL_ITR_8K; TUNABLE_INT("hw.ixl.rx_itr", &ixl_rx_itr); SYSCTL_INT(_hw_ixl, OID_AUTO, rx_itr, CTLFLAG_RDTUN, &ixl_rx_itr, 0, "RX Interrupt Rate"); static int ixl_tx_itr = IXL_ITR_4K; TUNABLE_INT("hw.ixl.tx_itr", &ixl_tx_itr); SYSCTL_INT(_hw_ixl, OID_AUTO, tx_itr, CTLFLAG_RDTUN, &ixl_tx_itr, 0, "TX Interrupt Rate"); #ifdef IXL_IW int ixl_enable_iwarp = 0; TUNABLE_INT("hw.ixl.enable_iwarp", &ixl_enable_iwarp); SYSCTL_INT(_hw_ixl, OID_AUTO, enable_iwarp, CTLFLAG_RDTUN, &ixl_enable_iwarp, 0, "iWARP enabled"); #if __FreeBSD_version < 1100000 int ixl_limit_iwarp_msix = 1; #else int ixl_limit_iwarp_msix = IXL_IW_MAX_MSIX; #endif TUNABLE_INT("hw.ixl.limit_iwarp_msix", &ixl_limit_iwarp_msix); SYSCTL_INT(_hw_ixl, OID_AUTO, limit_iwarp_msix, CTLFLAG_RDTUN, &ixl_limit_iwarp_msix, 0, "Limit MSI-X vectors assigned to iWARP"); #endif extern struct if_txrx ixl_txrx_hwb; extern struct if_txrx ixl_txrx_dwb; static struct if_shared_ctx ixl_sctx_init = { .isc_magic = IFLIB_MAGIC, .isc_q_align = PAGE_SIZE, .isc_tx_maxsize = IXL_TSO_SIZE + sizeof(struct ether_vlan_header), .isc_tx_maxsegsize = IXL_MAX_DMA_SEG_SIZE, .isc_tso_maxsize = IXL_TSO_SIZE + sizeof(struct ether_vlan_header), .isc_tso_maxsegsize = IXL_MAX_DMA_SEG_SIZE, .isc_rx_maxsize = 16384, .isc_rx_nsegments = IXL_MAX_RX_SEGS, .isc_rx_maxsegsize = IXL_MAX_DMA_SEG_SIZE, .isc_nfl = 1, .isc_ntxqs = 1, .isc_nrxqs = 1, 
.isc_admin_intrcnt = 1, .isc_vendor_info = ixl_vendor_info_array, .isc_driver_version = IXL_DRIVER_VERSION_STRING, .isc_driver = &ixl_if_driver, .isc_flags = IFLIB_NEED_SCRATCH | IFLIB_NEED_ZERO_CSUM | IFLIB_TSO_INIT_IP | IFLIB_ADMIN_ALWAYS_RUN, .isc_nrxd_min = {IXL_MIN_RING}, .isc_ntxd_min = {IXL_MIN_RING}, .isc_nrxd_max = {IXL_MAX_RING}, .isc_ntxd_max = {IXL_MAX_RING}, .isc_nrxd_default = {IXL_DEFAULT_RING}, .isc_ntxd_default = {IXL_DEFAULT_RING}, }; if_shared_ctx_t ixl_sctx = &ixl_sctx_init; /*** Functions ***/ static void * ixl_register(device_t dev) { return (ixl_sctx); } static int ixl_allocate_pci_resources(struct ixl_pf *pf) { device_t dev = iflib_get_dev(pf->vsi.ctx); struct i40e_hw *hw = &pf->hw; int rid; /* Map BAR0 */ rid = PCIR_BAR(0); pf->pci_mem = bus_alloc_resource_any(dev, SYS_RES_MEMORY, &rid, RF_ACTIVE); if (!(pf->pci_mem)) { device_printf(dev, "Unable to allocate bus resource: PCI memory\n"); return (ENXIO); } /* Save off the PCI information */ hw->vendor_id = pci_get_vendor(dev); hw->device_id = pci_get_device(dev); hw->revision_id = pci_read_config(dev, PCIR_REVID, 1); hw->subsystem_vendor_id = pci_read_config(dev, PCIR_SUBVEND_0, 2); hw->subsystem_device_id = pci_read_config(dev, PCIR_SUBDEV_0, 2); hw->bus.device = pci_get_slot(dev); hw->bus.func = pci_get_function(dev); /* Save off register access information */ pf->osdep.mem_bus_space_tag = rman_get_bustag(pf->pci_mem); pf->osdep.mem_bus_space_handle = rman_get_bushandle(pf->pci_mem); pf->osdep.mem_bus_space_size = rman_get_size(pf->pci_mem); pf->osdep.flush_reg = I40E_GLGEN_STAT; pf->osdep.dev = dev; pf->hw.hw_addr = (u8 *) &pf->osdep.mem_bus_space_handle; pf->hw.back = &pf->osdep; return (0); } static int ixl_if_attach_pre(if_ctx_t ctx) { device_t dev; struct ixl_pf *pf; struct i40e_hw *hw; struct ixl_vsi *vsi; if_softc_ctx_t scctx; struct i40e_filter_control_settings filter; enum i40e_status_code status; int error = 0; INIT_DBG_DEV(dev, "begin"); dev = iflib_get_dev(ctx); pf = iflib_get_softc(ctx); vsi = &pf->vsi; vsi->back = pf; pf->dev = dev; hw = &pf->hw; vsi->dev = dev; vsi->hw = &pf->hw; vsi->id = 0; vsi->num_vlans = 0; vsi->ctx = ctx; vsi->media = iflib_get_media(ctx); vsi->shared = scctx = iflib_get_softc_ctx(ctx); /* Save tunable values */ ixl_save_pf_tunables(pf); /* Do PCI setup - map BAR0, etc */ if (ixl_allocate_pci_resources(pf)) { device_printf(dev, "Allocation of PCI resources failed\n"); error = ENXIO; goto err_pci_res; } /* Establish a clean starting point */ i40e_clear_hw(hw); status = i40e_pf_reset(hw); if (status) { device_printf(dev, "PF reset failure %s\n", i40e_stat_str(hw, status)); error = EIO; goto err_out; } /* Initialize the shared code */ status = i40e_init_shared_code(hw); if (status) { device_printf(dev, "Unable to initialize shared code, error %s\n", i40e_stat_str(hw, status)); error = EIO; goto err_out; } /* Set up the admin queue */ hw->aq.num_arq_entries = IXL_AQ_LEN; hw->aq.num_asq_entries = IXL_AQ_LEN; hw->aq.arq_buf_size = IXL_AQ_BUF_SZ; hw->aq.asq_buf_size = IXL_AQ_BUF_SZ; status = i40e_init_adminq(hw); if (status != 0 && status != I40E_ERR_FIRMWARE_API_VERSION) { device_printf(dev, "Unable to initialize Admin Queue, error %s\n", i40e_stat_str(hw, status)); error = EIO; goto err_out; } ixl_print_nvm_version(pf); if (status == I40E_ERR_FIRMWARE_API_VERSION) { device_printf(dev, "The driver for the device stopped " "because the NVM image is newer than expected.\n"); device_printf(dev, "You must install the most recent version of " "the network driver.\n"); error = EIO; goto 
err_out; } if (hw->aq.api_maj_ver == I40E_FW_API_VERSION_MAJOR && hw->aq.api_min_ver > I40E_FW_MINOR_VERSION(hw)) { device_printf(dev, "The driver for the device detected " "a newer version of the NVM image than expected.\n"); device_printf(dev, "Please install the most recent version " "of the network driver.\n"); } else if (hw->aq.api_maj_ver == 1 && hw->aq.api_min_ver < 4) { device_printf(dev, "The driver for the device detected " "an older version of the NVM image than expected.\n"); device_printf(dev, "Please update the NVM image.\n"); } /* Clear PXE mode */ i40e_clear_pxe_mode(hw); /* Get capabilities from the device */ error = ixl_get_hw_capabilities(pf); if (error) { device_printf(dev, "get_hw_capabilities failed: %d\n", error); goto err_get_cap; } /* Set up host memory cache */ status = i40e_init_lan_hmc(hw, hw->func_caps.num_tx_qp, hw->func_caps.num_rx_qp, 0, 0); if (status) { device_printf(dev, "init_lan_hmc failed: %s\n", i40e_stat_str(hw, status)); goto err_get_cap; } status = i40e_configure_lan_hmc(hw, I40E_HMC_MODEL_DIRECT_ONLY); if (status) { device_printf(dev, "configure_lan_hmc failed: %s\n", i40e_stat_str(hw, status)); goto err_mac_hmc; } /* Disable LLDP from the firmware for certain NVM versions */ if (((pf->hw.aq.fw_maj_ver == 4) && (pf->hw.aq.fw_min_ver < 3)) || (pf->hw.aq.fw_maj_ver < 4)) { i40e_aq_stop_lldp(hw, TRUE, NULL); pf->state |= IXL_PF_STATE_FW_LLDP_DISABLED; } /* Get MAC addresses from hardware */ i40e_get_mac_addr(hw, hw->mac.addr); error = i40e_validate_mac_addr(hw->mac.addr); if (error) { device_printf(dev, "validate_mac_addr failed: %d\n", error); goto err_mac_hmc; } bcopy(hw->mac.addr, hw->mac.perm_addr, ETHER_ADDR_LEN); iflib_set_mac(ctx, hw->mac.addr); i40e_get_port_mac_addr(hw, hw->mac.port_addr); /* Set up the device filtering */ bzero(&filter, sizeof(filter)); filter.enable_ethtype = TRUE; filter.enable_macvlan = TRUE; filter.enable_fdir = FALSE; filter.hash_lut_size = I40E_HASH_LUT_SIZE_512; if (i40e_set_filter_control(hw, &filter)) device_printf(dev, "i40e_set_filter_control() failed\n"); /* Query device FW LLDP status */ ixl_get_fw_lldp_status(pf); /* Tell FW to apply DCB config on link up */ i40e_aq_set_dcb_parameters(hw, true, NULL); /* Fill out iflib parameters */ if (hw->mac.type == I40E_MAC_X722) scctx->isc_ntxqsets_max = scctx->isc_nrxqsets_max = 128; else scctx->isc_ntxqsets_max = scctx->isc_nrxqsets_max = 64; if (vsi->enable_head_writeback) { scctx->isc_txqsizes[0] = roundup2(scctx->isc_ntxd[0] * sizeof(struct i40e_tx_desc) + sizeof(u32), DBA_ALIGN); scctx->isc_txrx = &ixl_txrx_hwb; } else { scctx->isc_txqsizes[0] = roundup2(scctx->isc_ntxd[0] * sizeof(struct i40e_tx_desc), DBA_ALIGN); scctx->isc_txrx = &ixl_txrx_dwb; } scctx->isc_txrx->ift_legacy_intr = ixl_intr; scctx->isc_rxqsizes[0] = roundup2(scctx->isc_nrxd[0] * sizeof(union i40e_32byte_rx_desc), DBA_ALIGN); scctx->isc_msix_bar = PCIR_BAR(IXL_MSIX_BAR); scctx->isc_tx_nsegments = IXL_MAX_TX_SEGS; scctx->isc_tx_tso_segments_max = IXL_MAX_TSO_SEGS; scctx->isc_tx_tso_size_max = IXL_TSO_SIZE; scctx->isc_tx_tso_segsize_max = IXL_MAX_DMA_SEG_SIZE; scctx->isc_rss_table_size = pf->hw.func_caps.rss_table_size; scctx->isc_tx_csum_flags = CSUM_OFFLOAD; scctx->isc_capabilities = scctx->isc_capenable = IXL_CAPS; INIT_DBG_DEV(dev, "end"); return (0); err_mac_hmc: i40e_shutdown_lan_hmc(hw); err_get_cap: i40e_shutdown_adminq(hw); err_out: ixl_free_pci_resources(pf); err_pci_res: return (error); } static int ixl_if_attach_post(if_ctx_t ctx) { device_t dev; struct ixl_pf *pf; struct i40e_hw *hw; 
struct ixl_vsi *vsi; int error = 0; enum i40e_status_code status; INIT_DBG_DEV(dev, "begin"); dev = iflib_get_dev(ctx); pf = iflib_get_softc(ctx); vsi = &pf->vsi; vsi->ifp = iflib_get_ifp(ctx); hw = &pf->hw; /* Save off determined number of queues for interface */ vsi->num_rx_queues = vsi->shared->isc_nrxqsets; vsi->num_tx_queues = vsi->shared->isc_ntxqsets; /* Setup OS network interface / ifnet */ if (ixl_setup_interface(dev, pf)) { device_printf(dev, "interface setup failed!\n"); error = EIO; goto err; } /* Determine link state */ if (ixl_attach_get_link_status(pf)) { error = EINVAL; goto err; } error = ixl_switch_config(pf); if (error) { device_printf(dev, "Initial ixl_switch_config() failed: %d\n", error); goto err; } /* Add protocol filters to list */ ixl_init_filters(vsi); /* Init queue allocation manager */ error = ixl_pf_qmgr_init(&pf->qmgr, hw->func_caps.num_tx_qp); if (error) { device_printf(dev, "Failed to init queue manager for PF queues, error %d\n", error); goto err; } /* reserve a contiguous allocation for the PF's VSI */ error = ixl_pf_qmgr_alloc_contiguous(&pf->qmgr, max(vsi->num_rx_queues, vsi->num_tx_queues), &pf->qtag); if (error) { device_printf(dev, "Failed to reserve queues for PF LAN VSI, error %d\n", error); goto err; } device_printf(dev, "Allocating %d queues for PF LAN VSI; %d queues active\n", pf->qtag.num_allocated, pf->qtag.num_active); /* Limit PHY interrupts to link, autoneg, and modules failure */ status = i40e_aq_set_phy_int_mask(hw, IXL_DEFAULT_PHY_INT_MASK, NULL); if (status) { device_printf(dev, "i40e_aq_set_phy_mask() failed: err %s," " aq_err %s\n", i40e_stat_str(hw, status), i40e_aq_str(hw, hw->aq.asq_last_status)); goto err; } /* Get the bus configuration and set the shared code */ ixl_get_bus_info(pf); /* Keep admin queue interrupts active while driver is loaded */ if (vsi->shared->isc_intr == IFLIB_INTR_MSIX) { ixl_configure_intr0_msix(pf); ixl_enable_intr0(hw); } /* Set initial advertised speed sysctl value */ ixl_set_initial_advertised_speeds(pf); /* Initialize statistics & add sysctls */ ixl_add_device_sysctls(pf); ixl_pf_reset_stats(pf); ixl_update_stats_counters(pf); ixl_add_hw_stats(pf); hw->phy.get_link_info = true; i40e_get_link_status(hw, &pf->link_up); ixl_update_link_status(pf); #ifdef PCI_IOV ixl_initialize_sriov(pf); #endif #ifdef IXL_IW if (hw->func_caps.iwarp && ixl_enable_iwarp) { pf->iw_enabled = (pf->iw_msix > 0) ? 
true : false; if (pf->iw_enabled) { error = ixl_iw_pf_attach(pf); if (error) { device_printf(dev, "interfacing to iWARP driver failed: %d\n", error); goto err; } else device_printf(dev, "iWARP ready\n"); } else device_printf(dev, "iWARP disabled on this device " "(no MSI-X vectors)\n"); } else { pf->iw_enabled = false; device_printf(dev, "The device is not iWARP enabled\n"); } #endif INIT_DBG_DEV(dev, "end"); return (0); err: INIT_DEBUGOUT("end: error %d", error); /* ixl_if_detach() is called on error from this */ return (error); } /** * XXX: iflib always ignores the return value of detach() * -> This means that this isn't allowed to fail */ static int ixl_if_detach(if_ctx_t ctx) { struct ixl_pf *pf = iflib_get_softc(ctx); struct ixl_vsi *vsi = &pf->vsi; struct i40e_hw *hw = &pf->hw; device_t dev = pf->dev; enum i40e_status_code status; #ifdef IXL_IW int error; #endif INIT_DBG_DEV(dev, "begin"); #ifdef IXL_IW if (ixl_enable_iwarp && pf->iw_enabled) { error = ixl_iw_pf_detach(pf); if (error == EBUSY) { device_printf(dev, "iwarp in use; stop it first.\n"); //return (error); } } #endif /* Remove all previously allocated media types */ ifmedia_removeall(vsi->media); /* Shutdown LAN HMC */ if (hw->hmc.hmc_obj) { status = i40e_shutdown_lan_hmc(hw); if (status) device_printf(dev, "i40e_shutdown_lan_hmc() failed with status %s\n", i40e_stat_str(hw, status)); } /* Shutdown admin queue */ ixl_disable_intr0(hw); status = i40e_shutdown_adminq(hw); if (status) device_printf(dev, "i40e_shutdown_adminq() failed with status %s\n", i40e_stat_str(hw, status)); ixl_pf_qmgr_destroy(&pf->qmgr); ixl_free_pci_resources(pf); ixl_free_mac_filters(vsi); INIT_DBG_DEV(dev, "end"); return (0); } static int ixl_if_shutdown(if_ctx_t ctx) { int error = 0; INIT_DEBUGOUT("ixl_if_shutdown: begin"); /* TODO: Call ixl_if_stop()? */ /* TODO: Then setup low power mode */ return (error); } static int ixl_if_suspend(if_ctx_t ctx) { int error = 0; INIT_DEBUGOUT("ixl_if_suspend: begin"); /* TODO: Call ixl_if_stop()? */ /* TODO: Then setup low power mode */ return (error); } static int ixl_if_resume(if_ctx_t ctx) { struct ifnet *ifp = iflib_get_ifp(ctx); INIT_DEBUGOUT("ixl_if_resume: begin"); /* Read & clear wake-up registers */ /* Required after D3->D0 transition */ if (ifp->if_flags & IFF_UP) ixl_if_init(ctx); return (0); } void ixl_if_init(if_ctx_t ctx) { struct ixl_pf *pf = iflib_get_softc(ctx); struct ixl_vsi *vsi = &pf->vsi; struct i40e_hw *hw = &pf->hw; struct ifnet *ifp = iflib_get_ifp(ctx); device_t dev = iflib_get_dev(ctx); u8 tmpaddr[ETHER_ADDR_LEN]; int ret; /* * If the aq is dead here, it probably means something outside of the driver * did something to the adapter, like a PF reset. * So, rebuild the driver's state here if that occurs. */ if (!i40e_check_asq_alive(&pf->hw)) { device_printf(dev, "Admin Queue is down; resetting...\n"); ixl_teardown_hw_structs(pf); ixl_rebuild_hw_structs_after_reset(pf); } /* Get the latest mac address... User might use a LAA */ bcopy(IF_LLADDR(vsi->ifp), tmpaddr, ETH_ALEN); if (!cmp_etheraddr(hw->mac.addr, tmpaddr) && (i40e_validate_mac_addr(tmpaddr) == I40E_SUCCESS)) { ixl_del_filter(vsi, hw->mac.addr, IXL_VLAN_ANY); bcopy(tmpaddr, hw->mac.addr, ETH_ALEN); ret = i40e_aq_mac_address_write(hw, I40E_AQC_WRITE_TYPE_LAA_ONLY, hw->mac.addr, NULL); if (ret) { device_printf(dev, "LLA address change failed!!\n"); return; } ixl_add_filter(vsi, hw->mac.addr, IXL_VLAN_ANY); } iflib_set_mac(ctx, hw->mac.addr); /* Prepare the VSI: rings, hmc contexts, etc... 
*/
	if (ixl_initialize_vsi(vsi)) {
		device_printf(dev, "initialize vsi failed!!\n");
		return;
	}

	/* Reconfigure multicast filters in HW */
	ixl_if_multi_set(ctx);

	/* Set up RSS */
	ixl_config_rss(pf);

	/* Set up MSI-X routing and the ITR settings */
	if (vsi->shared->isc_intr == IFLIB_INTR_MSIX) {
		ixl_configure_queue_intr_msix(pf);
		ixl_configure_itr(pf);
	} else
		ixl_configure_legacy(pf);

	if (vsi->enable_head_writeback)
		ixl_init_tx_cidx(vsi);
	else
		ixl_init_tx_rsqs(vsi);

	ixl_enable_rings(vsi);

	i40e_aq_set_default_vsi(hw, vsi->seid, NULL);

	/* Re-add configure filters to HW */
	ixl_reconfigure_filters(vsi);

	/* Configure promiscuous mode */
	ixl_if_promisc_set(ctx, if_getflags(ifp));

#ifdef IXL_IW
	if (ixl_enable_iwarp && pf->iw_enabled) {
		ret = ixl_iw_pf_init(pf);
		if (ret)
			device_printf(dev,
			    "initialize iwarp failed, code %d\n", ret);
	}
#endif
}

void
ixl_if_stop(if_ctx_t ctx)
{
	struct ixl_pf *pf = iflib_get_softc(ctx);
	struct ixl_vsi *vsi = &pf->vsi;

	INIT_DEBUGOUT("ixl_if_stop: begin\n");

	// TODO: This may need to be reworked
#ifdef IXL_IW
	/* Stop iWARP device */
	if (ixl_enable_iwarp && pf->iw_enabled)
		ixl_iw_pf_stop(pf);
#endif

	ixl_disable_rings_intr(vsi);
	ixl_disable_rings(pf, vsi, &pf->qtag);
}

static int
ixl_if_msix_intr_assign(if_ctx_t ctx, int msix)
{
	struct ixl_pf *pf = iflib_get_softc(ctx);
	struct ixl_vsi *vsi = &pf->vsi;
	struct ixl_rx_queue *rx_que = vsi->rx_queues;
	struct ixl_tx_queue *tx_que = vsi->tx_queues;
	int err, i, rid, vector = 0;
	char buf[16];

	MPASS(vsi->shared->isc_nrxqsets > 0);
	MPASS(vsi->shared->isc_ntxqsets > 0);

	/* Admin Que must use vector 0 */
	rid = vector + 1;
	err = iflib_irq_alloc_generic(ctx, &vsi->irq, rid, IFLIB_INTR_ADMIN,
	    ixl_msix_adminq, pf, 0, "aq");
	if (err) {
		iflib_irq_free(ctx, &vsi->irq);
		device_printf(iflib_get_dev(ctx),
		    "Failed to register Admin Que handler");
		return (err);
	}
	/* Create soft IRQ for handling VFLRs */
-	iflib_softirq_alloc_generic(ctx, &pf->iov_irq, IFLIB_INTR_IOV, pf, 0, "iov");
+	iflib_softirq_alloc_generic(ctx, NULL, IFLIB_INTR_IOV, pf, 0, "iov");

	/* Now set up the stations */
	for (i = 0, vector = 1; i < vsi->shared->isc_nrxqsets; i++, vector++, rx_que++) {
		rid = vector + 1;

		snprintf(buf, sizeof(buf), "rxq%d", i);
		err = iflib_irq_alloc_generic(ctx, &rx_que->que_irq, rid,
		    IFLIB_INTR_RX, ixl_msix_que, rx_que, rx_que->rxr.me, buf);
		/*
		 * XXX: Does the driver work as expected if there are fewer
		 * num_rx_queues than what's expected in the iflib context?
		 */
		if (err) {
			device_printf(iflib_get_dev(ctx),
			    "Failed to allocate queue RX int vector %d, err: %d\n",
			    i, err);
			vsi->num_rx_queues = i + 1;
			goto fail;
		}
		rx_que->msix = vector;
	}

	bzero(buf, sizeof(buf));

	for (i = 0; i < vsi->shared->isc_ntxqsets; i++, tx_que++) {
		snprintf(buf, sizeof(buf), "txq%d", i);
		iflib_softirq_alloc_generic(ctx,
		    &vsi->rx_queues[i % vsi->shared->isc_nrxqsets].que_irq,
		    IFLIB_INTR_TX, tx_que, tx_que->txr.me, buf);

		/*
		 * TODO: Maybe call a strategy function for this to figure
		 * out which interrupts to map Tx queues to.  I don't know
		 * if there's an immediately better way than this other than
		 * a user-supplied map, though.
		 */
		tx_que->msix = (i % vsi->shared->isc_nrxqsets) + 1;
	}

	return (0);
fail:
	iflib_irq_free(ctx, &vsi->irq);
	rx_que = vsi->rx_queues;
	for (int i = 0; i < vsi->num_rx_queues; i++, rx_que++)
		iflib_irq_free(ctx, &rx_que->que_irq);
	return (err);
}
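/*
 * Editor's note: the TX-queue interrupt assignment above maps each TX queue
 * onto an RX queue's MSI-X vector round-robin, since vector 0 is reserved
 * for the admin queue.  A hedged sketch of that mapping in isolation
 * (hypothetical name, not part of the driver):
 */
/*
 * RX queue r owns MSI-X vector r + 1; TX queue t shares the vector of
 * RX queue (t % nrxqsets).
 */
static inline int
sketch_txq_to_vector(int txq, int nrxqsets)
{
	return ((txq % nrxqsets) + 1);
}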
/*
 * Enable all interrupts
 *
 * Called in:
 * iflib_init_locked, after ixl_if_init()
 */
static void
ixl_if_enable_intr(if_ctx_t ctx)
{
	struct ixl_pf *pf = iflib_get_softc(ctx);
	struct ixl_vsi *vsi = &pf->vsi;
	struct i40e_hw *hw = vsi->hw;
	struct ixl_rx_queue *que = vsi->rx_queues;

	ixl_enable_intr0(hw);
	/* Enable queue interrupts */
	for (int i = 0; i < vsi->num_rx_queues; i++, que++)
		/* TODO: Queue index parameter is probably wrong */
		ixl_enable_queue(hw, que->rxr.me);
}

/*
 * Disable queue interrupts
 *
 * Other interrupt causes need to remain active.
 */
static void
ixl_if_disable_intr(if_ctx_t ctx)
{
	struct ixl_pf *pf = iflib_get_softc(ctx);
	struct ixl_vsi *vsi = &pf->vsi;
	struct i40e_hw *hw = vsi->hw;
	struct ixl_rx_queue *rx_que = vsi->rx_queues;

	if (vsi->shared->isc_intr == IFLIB_INTR_MSIX) {
		for (int i = 0; i < vsi->num_rx_queues; i++, rx_que++)
			ixl_disable_queue(hw, rx_que->msix - 1);
	} else {
		// Set PFINT_LNKLST0 FIRSTQ_INDX to 0x7FF
		// stops queues from triggering interrupts
		wr32(hw, I40E_PFINT_LNKLST0, 0x7FF);
	}
}

static int
ixl_if_rx_queue_intr_enable(if_ctx_t ctx, uint16_t rxqid)
{
	struct ixl_pf *pf = iflib_get_softc(ctx);
	struct ixl_vsi *vsi = &pf->vsi;
	struct i40e_hw *hw = vsi->hw;
	struct ixl_rx_queue *rx_que = &vsi->rx_queues[rxqid];

	ixl_enable_queue(hw, rx_que->msix - 1);
	return (0);
}

static int
ixl_if_tx_queue_intr_enable(if_ctx_t ctx, uint16_t txqid)
{
	struct ixl_pf *pf = iflib_get_softc(ctx);
	struct ixl_vsi *vsi = &pf->vsi;
	struct i40e_hw *hw = vsi->hw;
	struct ixl_tx_queue *tx_que = &vsi->tx_queues[txqid];

	ixl_enable_queue(hw, tx_que->msix - 1);
	return (0);
}

static int
ixl_if_tx_queues_alloc(if_ctx_t ctx, caddr_t *vaddrs, uint64_t *paddrs,
    int ntxqs, int ntxqsets)
{
	struct ixl_pf *pf = iflib_get_softc(ctx);
	struct ixl_vsi *vsi = &pf->vsi;
	if_softc_ctx_t scctx = vsi->shared;
	struct ixl_tx_queue *que;
	int i, j, error = 0;

	MPASS(scctx->isc_ntxqsets > 0);
	MPASS(ntxqs == 1);
	MPASS(scctx->isc_ntxqsets == ntxqsets);

	/* Allocate queue structure memory */
	if (!(vsi->tx_queues = (struct ixl_tx_queue *)
	    malloc(sizeof(struct ixl_tx_queue) * ntxqsets, M_IXL,
	    M_NOWAIT | M_ZERO))) {
		device_printf(iflib_get_dev(ctx),
		    "Unable to allocate TX ring memory\n");
		return (ENOMEM);
	}

	for (i = 0, que = vsi->tx_queues; i < ntxqsets; i++, que++) {
		struct tx_ring *txr = &que->txr;

		txr->me = i;
		que->vsi = vsi;

		if (!vsi->enable_head_writeback) {
			/* Allocate report status array */
			if (!(txr->tx_rsq = malloc(sizeof(qidx_t) *
			    scctx->isc_ntxd[0], M_IXL, M_NOWAIT))) {
				device_printf(iflib_get_dev(ctx),
				    "failed to allocate tx_rsq memory\n");
				error = ENOMEM;
				goto fail;
			}
			/* Init report status array */
			for (j = 0; j < scctx->isc_ntxd[0]; j++)
				txr->tx_rsq[j] = QIDX_INVALID;
		}
		/* get the virtual and physical address of the hardware queues */
		txr->tail = I40E_QTX_TAIL(txr->me);
		txr->tx_base = (struct i40e_tx_desc *)vaddrs[i * ntxqs];
		txr->tx_paddr = paddrs[i * ntxqs];
		txr->que = que;
	}

	return (0);
fail:
	ixl_if_queues_free(ctx);
	return (error);
}

static int
ixl_if_rx_queues_alloc(if_ctx_t ctx, caddr_t *vaddrs, uint64_t *paddrs,
    int nrxqs, int nrxqsets)
{
	struct ixl_pf *pf = iflib_get_softc(ctx);
	struct ixl_vsi *vsi = &pf->vsi;
	struct ixl_rx_queue *que;
	int i, error = 0;

#ifdef INVARIANTS
	if_softc_ctx_t scctx = vsi->shared;
	MPASS(scctx->isc_nrxqsets > 0);
	MPASS(nrxqs == 1);
MPASS(scctx->isc_nrxqsets == nrxqsets); #endif /* Allocate queue structure memory */ if (!(vsi->rx_queues = (struct ixl_rx_queue *) malloc(sizeof(struct ixl_rx_queue) * nrxqsets, M_IXL, M_NOWAIT | M_ZERO))) { device_printf(iflib_get_dev(ctx), "Unable to allocate RX ring memory\n"); error = ENOMEM; goto fail; } for (i = 0, que = vsi->rx_queues; i < nrxqsets; i++, que++) { struct rx_ring *rxr = &que->rxr; rxr->me = i; que->vsi = vsi; /* get the virtual and physical address of the hardware queues */ rxr->tail = I40E_QRX_TAIL(rxr->me); rxr->rx_base = (union i40e_rx_desc *)vaddrs[i * nrxqs]; rxr->rx_paddr = paddrs[i * nrxqs]; rxr->que = que; } return (0); fail: ixl_if_queues_free(ctx); return (error); } static void ixl_if_queues_free(if_ctx_t ctx) { struct ixl_pf *pf = iflib_get_softc(ctx); struct ixl_vsi *vsi = &pf->vsi; if (!vsi->enable_head_writeback) { struct ixl_tx_queue *que; int i = 0; for (i = 0, que = vsi->tx_queues; i < vsi->num_tx_queues; i++, que++) { struct tx_ring *txr = &que->txr; if (txr->tx_rsq != NULL) { free(txr->tx_rsq, M_IXL); txr->tx_rsq = NULL; } } } if (vsi->tx_queues != NULL) { free(vsi->tx_queues, M_IXL); vsi->tx_queues = NULL; } if (vsi->rx_queues != NULL) { free(vsi->rx_queues, M_IXL); vsi->rx_queues = NULL; } } void ixl_update_link_status(struct ixl_pf *pf) { struct ixl_vsi *vsi = &pf->vsi; struct i40e_hw *hw = &pf->hw; u64 baudrate; if (pf->link_up) { if (vsi->link_active == FALSE) { vsi->link_active = TRUE; baudrate = ixl_max_aq_speed_to_value(hw->phy.link_info.link_speed); iflib_link_state_change(vsi->ctx, LINK_STATE_UP, baudrate); ixl_link_up_msg(pf); #ifdef PCI_IOV ixl_broadcast_link_state(pf); #endif } } else { /* Link down */ if (vsi->link_active == TRUE) { vsi->link_active = FALSE; iflib_link_state_change(vsi->ctx, LINK_STATE_DOWN, 0); #ifdef PCI_IOV ixl_broadcast_link_state(pf); #endif } } } static void ixl_handle_lan_overflow_event(struct ixl_pf *pf, struct i40e_arq_event_info *e) { device_t dev = pf->dev; u32 rxq_idx, qtx_ctl; rxq_idx = (e->desc.params.external.param0 & I40E_PRTDCB_RUPTQ_RXQNUM_MASK) >> I40E_PRTDCB_RUPTQ_RXQNUM_SHIFT; qtx_ctl = e->desc.params.external.param1; device_printf(dev, "LAN overflow event: global rxq_idx %d\n", rxq_idx); device_printf(dev, "LAN overflow event: QTX_CTL 0x%08x\n", qtx_ctl); } static int ixl_process_adminq(struct ixl_pf *pf, u16 *pending) { enum i40e_status_code status = I40E_SUCCESS; struct i40e_arq_event_info event; struct i40e_hw *hw = &pf->hw; device_t dev = pf->dev; u16 opcode; u32 loop = 0, reg; event.buf_len = IXL_AQ_BUF_SZ; event.msg_buf = malloc(event.buf_len, M_IXL, M_NOWAIT | M_ZERO); if (!event.msg_buf) { device_printf(dev, "%s: Unable to allocate memory for Admin" " Queue event!\n", __func__); return (ENOMEM); } /* clean and process any events */ do { status = i40e_clean_arq_element(hw, &event, pending); if (status) break; opcode = LE16_TO_CPU(event.desc.opcode); ixl_dbg(pf, IXL_DBG_AQ, "Admin Queue event: %#06x\n", opcode); switch (opcode) { case i40e_aqc_opc_get_link_status: ixl_link_event(pf, &event); break; case i40e_aqc_opc_send_msg_to_pf: #ifdef PCI_IOV ixl_handle_vf_msg(pf, &event); #endif break; /* * This should only occur on no-drop queues, which * aren't currently configured. 
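 * (Accordingly, ixl_handle_lan_overflow_event() above only logs the
 * global rxq index and QTX_CTL register contents; no recovery is
 * attempted.)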
*/ case i40e_aqc_opc_event_lan_overflow: ixl_handle_lan_overflow_event(pf, &event); break; default: break; } } while (*pending && (loop++ < IXL_ADM_LIMIT)); free(event.msg_buf, M_IXL); /* Re-enable admin queue interrupt cause */ reg = rd32(hw, I40E_PFINT_ICR0_ENA); reg |= I40E_PFINT_ICR0_ENA_ADMINQ_MASK; wr32(hw, I40E_PFINT_ICR0_ENA, reg); return (status); } static void ixl_if_update_admin_status(if_ctx_t ctx) { struct ixl_pf *pf = iflib_get_softc(ctx); struct i40e_hw *hw = &pf->hw; u16 pending; if (pf->state & IXL_PF_STATE_ADAPTER_RESETTING) ixl_handle_empr_reset(pf); if (pf->state & IXL_PF_STATE_MDD_PENDING) ixl_handle_mdd_event(pf); ixl_process_adminq(pf, &pending); ixl_update_link_status(pf); ixl_update_stats_counters(pf); /* * If there are still messages to process, reschedule ourselves. * Otherwise, re-enable our interrupt and go to sleep. */ if (pending > 0) iflib_admin_intr_deferred(ctx); else ixl_enable_intr0(hw); } static void ixl_if_multi_set(if_ctx_t ctx) { struct ixl_pf *pf = iflib_get_softc(ctx); struct ixl_vsi *vsi = &pf->vsi; struct i40e_hw *hw = vsi->hw; int mcnt = 0, flags; int del_mcnt; IOCTL_DEBUGOUT("ixl_if_multi_set: begin"); mcnt = if_multiaddr_count(iflib_get_ifp(ctx), MAX_MULTICAST_ADDR); /* Delete filters for removed multicast addresses */ del_mcnt = ixl_del_multi(vsi); vsi->num_macs -= del_mcnt; if (__predict_false(mcnt == MAX_MULTICAST_ADDR)) { i40e_aq_set_vsi_multicast_promiscuous(hw, vsi->seid, TRUE, NULL); return; } /* (re-)install filters for all mcast addresses */ /* XXX: This bypasses filter count tracking code! */ mcnt = if_multi_apply(iflib_get_ifp(ctx), ixl_mc_filter_apply, vsi); if (mcnt > 0) { vsi->num_macs += mcnt; flags = (IXL_FILTER_ADD | IXL_FILTER_USED | IXL_FILTER_MC); ixl_add_hw_filters(vsi, flags, mcnt); } ixl_dbg_filter(pf, "%s: filter mac total: %d\n", __func__, vsi->num_macs); IOCTL_DEBUGOUT("ixl_if_multi_set: end"); } static int ixl_if_mtu_set(if_ctx_t ctx, uint32_t mtu) { struct ixl_pf *pf = iflib_get_softc(ctx); struct ixl_vsi *vsi = &pf->vsi; IOCTL_DEBUGOUT("ioctl: SIOCSIFMTU (Set Interface MTU)"); if (mtu > IXL_MAX_FRAME - ETHER_HDR_LEN - ETHER_CRC_LEN - ETHER_VLAN_ENCAP_LEN) return (EINVAL); vsi->shared->isc_max_frame_size = mtu + ETHER_HDR_LEN + ETHER_CRC_LEN + ETHER_VLAN_ENCAP_LEN; return (0); } static void ixl_if_media_status(if_ctx_t ctx, struct ifmediareq *ifmr) { struct ixl_pf *pf = iflib_get_softc(ctx); struct i40e_hw *hw = &pf->hw; INIT_DEBUGOUT("ixl_media_status: begin"); ifmr->ifm_status = IFM_AVALID; ifmr->ifm_active = IFM_ETHER; if (!pf->link_up) { return; } ifmr->ifm_status |= IFM_ACTIVE; /* Hardware is always full-duplex */ ifmr->ifm_active |= IFM_FDX; switch (hw->phy.link_info.phy_type) { /* 100 M */ case I40E_PHY_TYPE_100BASE_TX: ifmr->ifm_active |= IFM_100_TX; break; /* 1 G */ case I40E_PHY_TYPE_1000BASE_T: ifmr->ifm_active |= IFM_1000_T; break; case I40E_PHY_TYPE_1000BASE_SX: ifmr->ifm_active |= IFM_1000_SX; break; case I40E_PHY_TYPE_1000BASE_LX: ifmr->ifm_active |= IFM_1000_LX; break; case I40E_PHY_TYPE_1000BASE_T_OPTICAL: ifmr->ifm_active |= IFM_1000_T; break; /* 10 G */ case I40E_PHY_TYPE_10GBASE_SFPP_CU: ifmr->ifm_active |= IFM_10G_TWINAX; break; case I40E_PHY_TYPE_10GBASE_SR: ifmr->ifm_active |= IFM_10G_SR; break; case I40E_PHY_TYPE_10GBASE_LR: ifmr->ifm_active |= IFM_10G_LR; break; case I40E_PHY_TYPE_10GBASE_T: ifmr->ifm_active |= IFM_10G_T; break; case I40E_PHY_TYPE_XAUI: case I40E_PHY_TYPE_XFI: ifmr->ifm_active |= IFM_10G_TWINAX; break; case I40E_PHY_TYPE_10GBASE_AOC: ifmr->ifm_active |= IFM_10G_AOC; break; 
/* 25 G */ case I40E_PHY_TYPE_25GBASE_KR: ifmr->ifm_active |= IFM_25G_KR; break; case I40E_PHY_TYPE_25GBASE_CR: ifmr->ifm_active |= IFM_25G_CR; break; case I40E_PHY_TYPE_25GBASE_SR: ifmr->ifm_active |= IFM_25G_SR; break; case I40E_PHY_TYPE_25GBASE_LR: ifmr->ifm_active |= IFM_25G_LR; break; case I40E_PHY_TYPE_25GBASE_AOC: ifmr->ifm_active |= IFM_25G_AOC; break; case I40E_PHY_TYPE_25GBASE_ACC: ifmr->ifm_active |= IFM_25G_ACC; break; /* 40 G */ case I40E_PHY_TYPE_40GBASE_CR4: case I40E_PHY_TYPE_40GBASE_CR4_CU: ifmr->ifm_active |= IFM_40G_CR4; break; case I40E_PHY_TYPE_40GBASE_SR4: ifmr->ifm_active |= IFM_40G_SR4; break; case I40E_PHY_TYPE_40GBASE_LR4: ifmr->ifm_active |= IFM_40G_LR4; break; case I40E_PHY_TYPE_XLAUI: ifmr->ifm_active |= IFM_OTHER; break; case I40E_PHY_TYPE_1000BASE_KX: ifmr->ifm_active |= IFM_1000_KX; break; case I40E_PHY_TYPE_SGMII: ifmr->ifm_active |= IFM_1000_SGMII; break; /* ERJ: What's the difference between these? */ case I40E_PHY_TYPE_10GBASE_CR1_CU: case I40E_PHY_TYPE_10GBASE_CR1: ifmr->ifm_active |= IFM_10G_CR1; break; case I40E_PHY_TYPE_10GBASE_KX4: ifmr->ifm_active |= IFM_10G_KX4; break; case I40E_PHY_TYPE_10GBASE_KR: ifmr->ifm_active |= IFM_10G_KR; break; case I40E_PHY_TYPE_SFI: ifmr->ifm_active |= IFM_10G_SFI; break; /* Our single 20G media type */ case I40E_PHY_TYPE_20GBASE_KR2: ifmr->ifm_active |= IFM_20G_KR2; break; case I40E_PHY_TYPE_40GBASE_KR4: ifmr->ifm_active |= IFM_40G_KR4; break; case I40E_PHY_TYPE_XLPPI: case I40E_PHY_TYPE_40GBASE_AOC: ifmr->ifm_active |= IFM_40G_XLPPI; break; /* Unknown to driver */ default: ifmr->ifm_active |= IFM_UNKNOWN; break; } /* Report flow control status as well */ if (hw->phy.link_info.an_info & I40E_AQ_LINK_PAUSE_TX) ifmr->ifm_active |= IFM_ETH_TXPAUSE; if (hw->phy.link_info.an_info & I40E_AQ_LINK_PAUSE_RX) ifmr->ifm_active |= IFM_ETH_RXPAUSE; } static int ixl_if_media_change(if_ctx_t ctx) { struct ifmedia *ifm = iflib_get_media(ctx); INIT_DEBUGOUT("ixl_media_change: begin"); if (IFM_TYPE(ifm->ifm_media) != IFM_ETHER) return (EINVAL); if_printf(iflib_get_ifp(ctx), "Media change is not supported.\n"); return (ENODEV); } static int ixl_if_promisc_set(if_ctx_t ctx, int flags) { struct ixl_pf *pf = iflib_get_softc(ctx); struct ixl_vsi *vsi = &pf->vsi; struct ifnet *ifp = iflib_get_ifp(ctx); struct i40e_hw *hw = vsi->hw; int err; bool uni = FALSE, multi = FALSE; if (flags & IFF_PROMISC) uni = multi = TRUE; else if (flags & IFF_ALLMULTI || if_multiaddr_count(ifp, MAX_MULTICAST_ADDR) == MAX_MULTICAST_ADDR) multi = TRUE; err = i40e_aq_set_vsi_unicast_promiscuous(hw, vsi->seid, uni, NULL, true); if (err) return (err); err = i40e_aq_set_vsi_multicast_promiscuous(hw, vsi->seid, multi, NULL); return (err); } static void ixl_if_timer(if_ctx_t ctx, uint16_t qid) { if (qid != 0) return; /* Fire off the adminq task */ iflib_admin_intr_deferred(ctx); } static void ixl_if_vlan_register(if_ctx_t ctx, u16 vtag) { struct ixl_pf *pf = iflib_get_softc(ctx); struct ixl_vsi *vsi = &pf->vsi; struct i40e_hw *hw = vsi->hw; if ((vtag == 0) || (vtag > 4095)) /* Invalid */ return; ++vsi->num_vlans; ixl_add_filter(vsi, hw->mac.addr, vtag); } static void ixl_if_vlan_unregister(if_ctx_t ctx, u16 vtag) { struct ixl_pf *pf = iflib_get_softc(ctx); struct ixl_vsi *vsi = &pf->vsi; struct i40e_hw *hw = vsi->hw; if ((vtag == 0) || (vtag > 4095)) /* Invalid */ return; --vsi->num_vlans; ixl_del_filter(vsi, hw->mac.addr, vtag); } static uint64_t ixl_if_get_counter(if_ctx_t ctx, ift_counter cnt) { struct ixl_pf *pf = iflib_get_softc(ctx); struct ixl_vsi *vsi = &pf->vsi; 
if_t ifp = iflib_get_ifp(ctx); switch (cnt) { case IFCOUNTER_IPACKETS: return (vsi->ipackets); case IFCOUNTER_IERRORS: return (vsi->ierrors); case IFCOUNTER_OPACKETS: return (vsi->opackets); case IFCOUNTER_OERRORS: return (vsi->oerrors); case IFCOUNTER_COLLISIONS: /* Collisions are by standard impossible in 40G/10G Ethernet */ return (0); case IFCOUNTER_IBYTES: return (vsi->ibytes); case IFCOUNTER_OBYTES: return (vsi->obytes); case IFCOUNTER_IMCASTS: return (vsi->imcasts); case IFCOUNTER_OMCASTS: return (vsi->omcasts); case IFCOUNTER_IQDROPS: return (vsi->iqdrops); case IFCOUNTER_OQDROPS: return (vsi->oqdrops); case IFCOUNTER_NOPROTO: return (vsi->noproto); default: return (if_get_counter_default(ifp, cnt)); } } #ifdef PCI_IOV static void ixl_if_vflr_handle(if_ctx_t ctx) { struct ixl_pf *pf = iflib_get_softc(ctx); ixl_handle_vflr(pf); } #endif static int ixl_if_i2c_req(if_ctx_t ctx, struct ifi2creq *req) { struct ixl_pf *pf = iflib_get_softc(ctx); if (pf->read_i2c_byte == NULL) return (EINVAL); for (int i = 0; i < req->len; i++) if (pf->read_i2c_byte(pf, req->offset + i, req->dev_addr, &req->data[i])) return (EIO); return (0); } static int ixl_if_priv_ioctl(if_ctx_t ctx, u_long command, caddr_t data) { struct ixl_pf *pf = iflib_get_softc(ctx); struct ifdrv *ifd = (struct ifdrv *)data; int error = 0; /* NVM update command */ if (ifd->ifd_cmd == I40E_NVM_ACCESS) error = ixl_handle_nvmupd_cmd(pf, ifd); else error = EINVAL; return (error); } static int ixl_mc_filter_apply(void *arg, struct ifmultiaddr *ifma, int count __unused) { struct ixl_vsi *vsi = arg; if (ifma->ifma_addr->sa_family != AF_LINK) return (0); ixl_add_mc_filter(vsi, (u8*)LLADDR((struct sockaddr_dl *) ifma->ifma_addr)); return (1); } /* * Sanity check and save off tunable values. */ static void ixl_save_pf_tunables(struct ixl_pf *pf) { device_t dev = pf->dev; /* Save tunable information */ pf->enable_tx_fc_filter = ixl_enable_tx_fc_filter; pf->dbg_mask = ixl_core_debug_mask; pf->hw.debug_mask = ixl_shared_debug_mask; pf->vsi.enable_head_writeback = !!(ixl_enable_head_writeback); pf->enable_vf_loopback = !!(ixl_enable_vf_loopback); #if 0 pf->dynamic_rx_itr = ixl_dynamic_rx_itr; pf->dynamic_tx_itr = ixl_dynamic_tx_itr; #endif if (ixl_i2c_access_method > 3 || ixl_i2c_access_method < 0) pf->i2c_access_method = 0; else pf->i2c_access_method = ixl_i2c_access_method; if (ixl_tx_itr < 0 || ixl_tx_itr > IXL_MAX_ITR) { device_printf(dev, "Invalid tx_itr value of %d set!\n", ixl_tx_itr); device_printf(dev, "tx_itr must be between %d and %d, " "inclusive\n", 0, IXL_MAX_ITR); device_printf(dev, "Using default value of %d instead\n", IXL_ITR_4K); pf->tx_itr = IXL_ITR_4K; } else pf->tx_itr = ixl_tx_itr; if (ixl_rx_itr < 0 || ixl_rx_itr > IXL_MAX_ITR) { device_printf(dev, "Invalid rx_itr value of %d set!\n", ixl_rx_itr); device_printf(dev, "rx_itr must be between %d and %d, " "inclusive\n", 0, IXL_MAX_ITR); device_printf(dev, "Using default value of %d instead\n", IXL_ITR_8K); pf->rx_itr = IXL_ITR_8K; } else pf->rx_itr = ixl_rx_itr; } Index: projects/import-googletest-1.8.1/sys/dev/ixl/ixl_pf.h =================================================================== --- projects/import-googletest-1.8.1/sys/dev/ixl/ixl_pf.h (revision 344270) +++ projects/import-googletest-1.8.1/sys/dev/ixl/ixl_pf.h (revision 344271) @@ -1,396 +1,395 @@ /****************************************************************************** Copyright (c) 2013-2018, Intel Corporation All rights reserved. 
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. Neither the name of the Intel Corporation nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. ******************************************************************************/ /*$FreeBSD$*/ #ifndef _IXL_PF_H_ #define _IXL_PF_H_ #include "ixl.h" #include "ixl_pf_qmgr.h" #define VF_FLAG_ENABLED 0x01 #define VF_FLAG_SET_MAC_CAP 0x02 #define VF_FLAG_VLAN_CAP 0x04 #define VF_FLAG_PROMISC_CAP 0x08 #define VF_FLAG_MAC_ANTI_SPOOF 0x10 #define IXL_ICR0_CRIT_ERR_MASK \ (I40E_PFINT_ICR0_PCI_EXCEPTION_MASK | \ I40E_PFINT_ICR0_ECC_ERR_MASK | \ I40E_PFINT_ICR0_PE_CRITERR_MASK) /* VF Interrupts */ #define IXL_VPINT_LNKLSTN_REG(hw, vector, vf_num) \ I40E_VPINT_LNKLSTN(((vector) - 1) + \ (((hw)->func_caps.num_msix_vectors_vf - 1) * (vf_num))) #define IXL_VFINT_DYN_CTLN_REG(hw, vector, vf_num) \ I40E_VFINT_DYN_CTLN(((vector) - 1) + \ (((hw)->func_caps.num_msix_vectors_vf - 1) * (vf_num))) /* Used in struct ixl_pf's state field */ enum ixl_pf_state { IXL_PF_STATE_ADAPTER_RESETTING = (1 << 0), IXL_PF_STATE_MDD_PENDING = (1 << 1), IXL_PF_STATE_PF_RESET_REQ = (1 << 2), IXL_PF_STATE_VF_RESET_REQ = (1 << 3), IXL_PF_STATE_PF_CRIT_ERR = (1 << 4), IXL_PF_STATE_CORE_RESET_REQ = (1 << 5), IXL_PF_STATE_GLOB_RESET_REQ = (1 << 6), IXL_PF_STATE_EMP_RESET_REQ = (1 << 7), IXL_PF_STATE_FW_LLDP_DISABLED = (1 << 8), }; struct ixl_vf { struct ixl_vsi vsi; u32 vf_flags; u32 num_mdd_events; u8 mac[ETHER_ADDR_LEN]; u16 vf_num; u32 version; struct ixl_pf_qtag qtag; struct sysctl_ctx_list ctx; }; /* Physical controller structure */ struct ixl_pf { struct ixl_vsi vsi; struct i40e_hw hw; struct i40e_osdep osdep; device_t dev; struct resource *pci_mem; #ifdef IXL_IW int iw_msix; bool iw_enabled; #endif u32 state; u8 supported_speeds; struct ixl_pf_qmgr qmgr; struct ixl_pf_qtag qtag; /* Tunable values */ bool enable_tx_fc_filter; int dynamic_rx_itr; int dynamic_tx_itr; int tx_itr; int rx_itr; int enable_vf_loopback; bool link_up; int advertised_speed; int fc; /* link flow ctrl setting */ enum ixl_dbg_mask dbg_mask; bool has_i2c; /* Misc stats maintained by the driver */ u64 admin_irq; /* Statistics from hw */ struct i40e_hw_port_stats stats; struct i40e_hw_port_stats stats_offsets; bool stat_offsets_loaded; /* I2C access methods */ u8 
i2c_access_method; s32 (*read_i2c_byte)(struct ixl_pf *pf, u8 byte_offset, u8 dev_addr, u8 *data); s32 (*write_i2c_byte)(struct ixl_pf *pf, u8 byte_offset, u8 dev_addr, u8 data); /* SR-IOV */ struct ixl_vf *vfs; int num_vfs; uint16_t veb_seid; - struct if_irq iov_irq; }; /* * Defines used for NVM update ioctls. * This value is used in the Solaris tool, too. */ #define I40E_NVM_ACCESS \ (((((((('E' << 4) + '1') << 4) + 'K') << 4) + 'G') << 4) | 5) #define IXL_DEFAULT_PHY_INT_MASK \ ((~(I40E_AQ_EVENT_LINK_UPDOWN | I40E_AQ_EVENT_MODULE_QUAL_FAIL \ | I40E_AQ_EVENT_MEDIA_NA)) & 0x3FF) /*** Sysctl help messages; displayed with "sysctl -d" ***/ #define IXL_SYSCTL_HELP_SET_ADVERTISE \ "\nControl advertised link speed.\n" \ "Flags:\n" \ "\t 0x1 - advertise 100M\n" \ "\t 0x2 - advertise 1G\n" \ "\t 0x4 - advertise 10G\n" \ "\t 0x8 - advertise 20G\n" \ "\t0x10 - advertise 25G\n" \ "\t0x20 - advertise 40G\n\n" \ "Set to 0 to disable link.\n" \ "Use \"sysctl -x\" to view flags properly." #define IXL_SYSCTL_HELP_SUPPORTED_SPEED \ "\nSupported link speeds.\n" \ "Flags:\n" \ "\t 0x1 - 100M\n" \ "\t 0x2 - 1G\n" \ "\t 0x4 - 10G\n" \ "\t 0x8 - 20G\n" \ "\t0x10 - 25G\n" \ "\t0x20 - 40G\n\n" \ "Use \"sysctl -x\" to view flags properly." #define IXL_SYSCTL_HELP_FC \ "\nSet flow control mode using the values below.\n" \ "\t0 - off\n" \ "\t1 - rx pause\n" \ "\t2 - tx pause\n" \ "\t3 - tx and rx pause" #define IXL_SYSCTL_HELP_LINK_STATUS \ "\nExecutes a \"Get Link Status\" command on the Admin Queue, and displays" \ " the response." #define IXL_SYSCTL_HELP_FW_LLDP \ "\nFW LLDP engine:\n" \ "\t0 - disable\n" \ "\t1 - enable\n" #define IXL_SYSCTL_HELP_READ_I2C \ "\nRead a byte from I2C bus\n" \ "Input: 32-bit value\n" \ "\tbits 0-7: device address (0xA0 or 0xA2)\n" \ "\tbits 8-15: offset (0-255)\n" \ "\tbits 16-31: unused\n" \ "Output: 8-bit value read" #define IXL_SYSCTL_HELP_WRITE_I2C \ "\nWrite a byte to the I2C bus\n" \ "Input: 32-bit value\n" \ "\tbits 0-7: device address (0xA0 or 0xA2)\n" \ "\tbits 8-15: offset (0-255)\n" \ "\tbits 16-23: value to write\n" \ "\tbits 24-31: unused\n" \ "Output: 8-bit value written" #define IXL_SYSCTL_HELP_I2C_METHOD \ "\nI2C access method that driver will use:\n" \ "\t0 - best available method\n" \ "\t1 - bit bang via I2CPARAMS register\n" \ "\t2 - register read/write via I2CCMD register\n" \ "\t3 - Use Admin Queue command (best)\n" \ "Using the Admin Queue is only supported on 710 devices with FW version 1.7 or higher" #define IXL_SYSCTL_HELP_VF_LOOPBACK \ "\nDetermines mode that embedded device switch will use when SR-IOV is initialized:\n" \ "\t0 - Disable (VEPA)\n" \ "\t1 - Enable (VEB)\n" \ "Enabling this will allow VFs in separate VMs to communicate over the hardware bridge." extern const char * const ixl_fc_string[6]; MALLOC_DECLARE(M_IXL); /*** Functions / Macros ***/ /* Adjust the level here to 10 or over to print stats messages */ #define I40E_VC_DEBUG(p, level, ...) \ do { \ if (level < 10) \ ixl_dbg(p, IXL_DBG_IOV_VC, ##__VA_ARGS__); \ } while (0) #define i40e_send_vf_nack(pf, vf, op, st) \ ixl_send_vf_nack_msg((pf), (vf), (op), (st), __FILE__, __LINE__) /* Debug printing */ #define ixl_dbg(pf, m, s, ...) ixl_debug_core(pf->dev, pf->dbg_mask, m, s, ##__VA_ARGS__) #define ixl_dbg_info(pf, s, ...) ixl_debug_core(pf->dev, pf->dbg_mask, IXL_DBG_INFO, s, ##__VA_ARGS__) #define ixl_dbg_filter(pf, s, ...) ixl_debug_core(pf->dev, pf->dbg_mask, IXL_DBG_FILTER, s, ##__VA_ARGS__) #define ixl_dbg_iov(pf, s, ...) 
ixl_debug_core(pf->dev, pf->dbg_mask, IXL_DBG_IOV, s, ##__VA_ARGS__) /* PF-only function declarations */ int ixl_setup_interface(device_t, struct ixl_pf *); void ixl_print_nvm_cmd(device_t, struct i40e_nvm_access *); char * ixl_aq_speed_to_str(enum i40e_aq_link_speed); void ixl_handle_que(void *context, int pending); void ixl_init(void *); void ixl_local_timer(void *); void ixl_register_vlan(void *, struct ifnet *, u16); void ixl_unregister_vlan(void *, struct ifnet *, u16); int ixl_intr(void *); int ixl_msix_que(void *); int ixl_msix_adminq(void *); void ixl_do_adminq(void *, int); int ixl_res_alloc_cmp(const void *, const void *); char * ixl_switch_res_type_string(u8); char * ixl_switch_element_string(struct sbuf *, struct i40e_aqc_switch_config_element_resp *); void ixl_add_sysctls_mac_stats(struct sysctl_ctx_list *, struct sysctl_oid_list *, struct i40e_hw_port_stats *); void ixl_media_status(struct ifnet *, struct ifmediareq *); int ixl_media_change(struct ifnet *); int ixl_ioctl(struct ifnet *, u_long, caddr_t); void ixl_enable_queue(struct i40e_hw *, int); void ixl_disable_queue(struct i40e_hw *, int); void ixl_enable_intr0(struct i40e_hw *); void ixl_disable_intr0(struct i40e_hw *); void ixl_nvm_version_str(struct i40e_hw *hw, struct sbuf *buf); void ixl_stat_update48(struct i40e_hw *, u32, u32, bool, u64 *, u64 *); void ixl_stat_update32(struct i40e_hw *, u32, bool, u64 *, u64 *); void ixl_stop(struct ixl_pf *); int ixl_get_hw_capabilities(struct ixl_pf *); void ixl_link_up_msg(struct ixl_pf *); void ixl_update_link_status(struct ixl_pf *); int ixl_setup_stations(struct ixl_pf *); int ixl_switch_config(struct ixl_pf *); void ixl_stop_locked(struct ixl_pf *); int ixl_teardown_hw_structs(struct ixl_pf *); int ixl_reset(struct ixl_pf *); void ixl_init_locked(struct ixl_pf *); void ixl_set_rss_key(struct ixl_pf *); void ixl_set_rss_pctypes(struct ixl_pf *); void ixl_set_rss_hlut(struct ixl_pf *); int ixl_setup_adminq_msix(struct ixl_pf *); int ixl_setup_adminq_tq(struct ixl_pf *); int ixl_teardown_adminq_msix(struct ixl_pf *); void ixl_configure_intr0_msix(struct ixl_pf *); void ixl_configure_queue_intr_msix(struct ixl_pf *); void ixl_free_adminq_tq(struct ixl_pf *); int ixl_setup_legacy(struct ixl_pf *); int ixl_init_msix(struct ixl_pf *); void ixl_configure_itr(struct ixl_pf *); void ixl_configure_legacy(struct ixl_pf *); void ixl_free_pci_resources(struct ixl_pf *); void ixl_link_event(struct ixl_pf *, struct i40e_arq_event_info *); void ixl_config_rss(struct ixl_pf *); int ixl_set_advertised_speeds(struct ixl_pf *, int, bool); void ixl_set_initial_advertised_speeds(struct ixl_pf *); void ixl_print_nvm_version(struct ixl_pf *pf); void ixl_add_device_sysctls(struct ixl_pf *); void ixl_handle_mdd_event(struct ixl_pf *); void ixl_add_hw_stats(struct ixl_pf *); void ixl_update_stats_counters(struct ixl_pf *); void ixl_pf_reset_stats(struct ixl_pf *); void ixl_get_bus_info(struct ixl_pf *pf); int ixl_aq_get_link_status(struct ixl_pf *, struct i40e_aqc_get_link_status *); int ixl_handle_nvmupd_cmd(struct ixl_pf *, struct ifdrv *); void ixl_handle_empr_reset(struct ixl_pf *); int ixl_prepare_for_reset(struct ixl_pf *pf, bool is_up); int ixl_rebuild_hw_structs_after_reset(struct ixl_pf *); void ixl_set_queue_rx_itr(struct ixl_rx_queue *); void ixl_set_queue_tx_itr(struct ixl_tx_queue *); void ixl_add_filter(struct ixl_vsi *, const u8 *, s16 vlan); void ixl_del_filter(struct ixl_vsi *, const u8 *, s16 vlan); void ixl_reconfigure_filters(struct ixl_vsi *vsi); int ixl_disable_rings(struct 
ixl_pf *, struct ixl_vsi *, struct ixl_pf_qtag *); int ixl_disable_tx_ring(struct ixl_pf *, struct ixl_pf_qtag *, u16); int ixl_disable_rx_ring(struct ixl_pf *, struct ixl_pf_qtag *, u16); int ixl_disable_ring(struct ixl_pf *pf, struct ixl_pf_qtag *, u16); int ixl_enable_rings(struct ixl_vsi *); int ixl_enable_tx_ring(struct ixl_pf *, struct ixl_pf_qtag *, u16); int ixl_enable_rx_ring(struct ixl_pf *, struct ixl_pf_qtag *, u16); int ixl_enable_ring(struct ixl_pf *pf, struct ixl_pf_qtag *, u16); void ixl_update_eth_stats(struct ixl_vsi *); void ixl_cap_txcsum_tso(struct ixl_vsi *, struct ifnet *, int); int ixl_initialize_vsi(struct ixl_vsi *); void ixl_add_ifmedia(struct ixl_vsi *, u64); int ixl_setup_queue_msix(struct ixl_vsi *); int ixl_setup_queue_tqs(struct ixl_vsi *); int ixl_teardown_queue_msix(struct ixl_vsi *); void ixl_free_queue_tqs(struct ixl_vsi *); void ixl_enable_intr(struct ixl_vsi *); void ixl_disable_rings_intr(struct ixl_vsi *); void ixl_set_promisc(struct ixl_vsi *); void ixl_add_multi(struct ixl_vsi *); int ixl_del_multi(struct ixl_vsi *); void ixl_setup_vlan_filters(struct ixl_vsi *); void ixl_init_filters(struct ixl_vsi *); void ixl_add_hw_filters(struct ixl_vsi *, int, int); void ixl_del_hw_filters(struct ixl_vsi *, int); void ixl_del_default_hw_filters(struct ixl_vsi *); struct ixl_mac_filter * ixl_find_filter(struct ixl_vsi *, const u8 *, s16); void ixl_add_mc_filter(struct ixl_vsi *, u8 *); void ixl_free_mac_filters(struct ixl_vsi *vsi); void ixl_update_vsi_stats(struct ixl_vsi *); void ixl_vsi_reset_stats(struct ixl_vsi *); void ixl_vsi_free_queues(struct ixl_vsi *vsi); void ixl_if_init(if_ctx_t ctx); void ixl_if_stop(if_ctx_t ctx); /* * I2C Function prototypes */ int ixl_find_i2c_interface(struct ixl_pf *); s32 ixl_read_i2c_byte_bb(struct ixl_pf *pf, u8 byte_offset, u8 dev_addr, u8 *data); s32 ixl_write_i2c_byte_bb(struct ixl_pf *pf, u8 byte_offset, u8 dev_addr, u8 data); s32 ixl_read_i2c_byte_reg(struct ixl_pf *pf, u8 byte_offset, u8 dev_addr, u8 *data); s32 ixl_write_i2c_byte_reg(struct ixl_pf *pf, u8 byte_offset, u8 dev_addr, u8 data); s32 ixl_read_i2c_byte_aq(struct ixl_pf *pf, u8 byte_offset, u8 dev_addr, u8 *data); s32 ixl_write_i2c_byte_aq(struct ixl_pf *pf, u8 byte_offset, u8 dev_addr, u8 data); int ixl_get_fw_lldp_status(struct ixl_pf *pf); int ixl_attach_get_link_status(struct ixl_pf *); u64 ixl_max_aq_speed_to_value(u8); #endif /* _IXL_PF_H_ */ Index: projects/import-googletest-1.8.1/sys/dev/netmap/netmap_freebsd.c =================================================================== --- projects/import-googletest-1.8.1/sys/dev/netmap/netmap_freebsd.c (revision 344270) +++ projects/import-googletest-1.8.1/sys/dev/netmap/netmap_freebsd.c (revision 344271) @@ -1,1603 +1,1618 @@ /*- * SPDX-License-Identifier: BSD-2-Clause-FreeBSD * * Copyright (C) 2013-2014 Universita` di Pisa. All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. 
* * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ /* $FreeBSD$ */ #include "opt_inet.h" #include "opt_inet6.h" #include #include #include #include #include /* POLLIN, POLLOUT */ #include /* types used in module initialization */ #include /* DEV_MODULE_ORDERED */ #include #include /* kern_ioctl() */ #include #include /* vtophys */ #include /* vtophys */ #include #include #include #include #include #include #include /* sockaddrs */ #include #include /* kthread_add() */ #include /* PROC_LOCK() */ #include /* RFNOWAIT */ #include /* sched_bind() */ #include /* mp_maxid */ #include /* taskqueue_enqueue(), taskqueue_create(), ... */ #include #include #include /* IFT_ETHER */ #include /* ether_ifdetach */ #include /* LLADDR */ #include /* bus_dmamap_* */ #include /* in6_cksum_pseudo() */ #include /* in_pseudo(), in_cksum_hdr() */ #include #include #include #include /* ======================== FREEBSD-SPECIFIC ROUTINES ================== */ static void nm_kqueue_notify(void *opaque, int pending) { struct nm_selinfo *si = opaque; /* We use a non-zero hint to distinguish this notification call * from the call done in kqueue_scan(), which uses hint=0. 
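 * (The exact value is arbitrary: netmap_knrw() only tests for
 * hint != 0 to tell the two callers apart.)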
*/ KNOTE_UNLOCKED(&si->si.si_note, /*hint=*/0x100); } int nm_os_selinfo_init(NM_SELINFO_T *si, const char *name) { int err; TASK_INIT(&si->ntfytask, 0, nm_kqueue_notify, si); si->ntfytq = taskqueue_create(name, M_NOWAIT, taskqueue_thread_enqueue, &si->ntfytq); if (si->ntfytq == NULL) return -ENOMEM; err = taskqueue_start_threads(&si->ntfytq, 1, PI_NET, "tq %s", name); if (err) { taskqueue_free(si->ntfytq); si->ntfytq = NULL; return err; } snprintf(si->mtxname, sizeof(si->mtxname), "nmkl%s", name); mtx_init(&si->m, si->mtxname, NULL, MTX_DEF); knlist_init_mtx(&si->si.si_note, &si->m); + si->kqueue_users = 0; return (0); } void nm_os_selinfo_uninit(NM_SELINFO_T *si) { if (si->ntfytq == NULL) { return; /* si was not initialized */ } taskqueue_drain(si->ntfytq, &si->ntfytask); taskqueue_free(si->ntfytq); si->ntfytq = NULL; knlist_delete(&si->si.si_note, curthread, /*islocked=*/0); knlist_destroy(&si->si.si_note); /* now we don't need the mutex anymore */ mtx_destroy(&si->m); } void * nm_os_malloc(size_t size) { return malloc(size, M_DEVBUF, M_NOWAIT | M_ZERO); } void * nm_os_realloc(void *addr, size_t new_size, size_t old_size __unused) { return realloc(addr, new_size, M_DEVBUF, M_NOWAIT | M_ZERO); } void nm_os_free(void *addr) { free(addr, M_DEVBUF); } void nm_os_ifnet_lock(void) { IFNET_RLOCK(); } void nm_os_ifnet_unlock(void) { IFNET_RUNLOCK(); } static int netmap_use_count = 0; void nm_os_get_module(void) { netmap_use_count++; } void nm_os_put_module(void) { netmap_use_count--; } static void netmap_ifnet_arrival_handler(void *arg __unused, struct ifnet *ifp) { netmap_undo_zombie(ifp); } static void netmap_ifnet_departure_handler(void *arg __unused, struct ifnet *ifp) { netmap_make_zombie(ifp); } static eventhandler_tag nm_ifnet_ah_tag; static eventhandler_tag nm_ifnet_dh_tag; int nm_os_ifnet_init(void) { nm_ifnet_ah_tag = EVENTHANDLER_REGISTER(ifnet_arrival_event, netmap_ifnet_arrival_handler, NULL, EVENTHANDLER_PRI_ANY); nm_ifnet_dh_tag = EVENTHANDLER_REGISTER(ifnet_departure_event, netmap_ifnet_departure_handler, NULL, EVENTHANDLER_PRI_ANY); return 0; } void nm_os_ifnet_fini(void) { EVENTHANDLER_DEREGISTER(ifnet_arrival_event, nm_ifnet_ah_tag); EVENTHANDLER_DEREGISTER(ifnet_departure_event, nm_ifnet_dh_tag); } unsigned nm_os_ifnet_mtu(struct ifnet *ifp) { #if __FreeBSD_version < 1100030 return ifp->if_data.ifi_mtu; #else /* __FreeBSD_version >= 1100030 */ return ifp->if_mtu; #endif } rawsum_t nm_os_csum_raw(uint8_t *data, size_t len, rawsum_t cur_sum) { /* TODO XXX please use the FreeBSD implementation for this. */ uint16_t *words = (uint16_t *)data; int nw = len / 2; int i; for (i = 0; i < nw; i++) cur_sum += be16toh(words[i]); if (len & 1) cur_sum += (data[len-1] << 8); return cur_sum; } /* Fold a raw checksum: 'cur_sum' is in host byte order, while the * return value is in network byte order. */ uint16_t nm_os_csum_fold(rawsum_t cur_sum) { /* TODO XXX please use the FreeBSD implementation for this. */ while (cur_sum >> 16) cur_sum = (cur_sum & 0xFFFF) + (cur_sum >> 16); return htobe16((~cur_sum) & 0xFFFF); } uint16_t nm_os_csum_ipv4(struct nm_iphdr *iph) { #if 0 return in_cksum_hdr((void *)iph); #else return nm_os_csum_fold(nm_os_csum_raw((uint8_t*)iph, sizeof(struct nm_iphdr), 0)); #endif } void nm_os_csum_tcpudp_ipv4(struct nm_iphdr *iph, void *data, size_t datalen, uint16_t *check) { #ifdef INET uint16_t pseudolen = datalen + iph->protocol; /* Compute and insert the pseudo-header checksum.
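 * (Adding the protocol number into 'pseudolen' follows the usual
 * in_pseudo() idiom: the third argument carries length plus protocol,
 * byte-swapped to network order.)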
*/ *check = in_pseudo(iph->saddr, iph->daddr, htobe16(pseudolen)); /* Compute the checksum on TCP/UDP header + payload * (includes the pseudo-header). */ *check = nm_os_csum_fold(nm_os_csum_raw(data, datalen, 0)); #else static int notsupported = 0; if (!notsupported) { notsupported = 1; nm_prerr("inet4 segmentation not supported"); } #endif } void nm_os_csum_tcpudp_ipv6(struct nm_ipv6hdr *ip6h, void *data, size_t datalen, uint16_t *check) { #ifdef INET6 *check = in6_cksum_pseudo((void*)ip6h, datalen, ip6h->nexthdr, 0); *check = nm_os_csum_fold(nm_os_csum_raw(data, datalen, 0)); #else static int notsupported = 0; if (!notsupported) { notsupported = 1; nm_prerr("inet6 segmentation not supported"); } #endif } /* on FreeBSD we send up one packet at a time */ void * nm_os_send_up(struct ifnet *ifp, struct mbuf *m, struct mbuf *prev) { NA(ifp)->if_input(ifp, m); return NULL; } int nm_os_mbuf_has_csum_offld(struct mbuf *m) { return m->m_pkthdr.csum_flags & (CSUM_TCP | CSUM_UDP | CSUM_SCTP | CSUM_TCP_IPV6 | CSUM_UDP_IPV6 | CSUM_SCTP_IPV6); } int nm_os_mbuf_has_seg_offld(struct mbuf *m) { return m->m_pkthdr.csum_flags & CSUM_TSO; } static void freebsd_generic_rx_handler(struct ifnet *ifp, struct mbuf *m) { int stolen; if (unlikely(!NM_NA_VALID(ifp))) { nm_prlim(1, "Warning: RX packet intercepted, but no" " emulated adapter"); return; } stolen = generic_rx_handler(ifp, m); if (!stolen) { struct netmap_generic_adapter *gna = (struct netmap_generic_adapter *)NA(ifp); gna->save_if_input(ifp, m); } } /* * Intercept the rx routine in the standard device driver. * Second argument is non-zero to intercept, 0 to restore */ int nm_os_catch_rx(struct netmap_generic_adapter *gna, int intercept) { struct netmap_adapter *na = &gna->up.up; struct ifnet *ifp = na->ifp; int ret = 0; nm_os_ifnet_lock(); if (intercept) { if (gna->save_if_input) { nm_prerr("RX on %s already intercepted", na->name); ret = EBUSY; /* already set */ goto out; } gna->save_if_input = ifp->if_input; ifp->if_input = freebsd_generic_rx_handler; } else { if (!gna->save_if_input) { nm_prerr("Failed to undo RX intercept on %s", na->name); ret = EINVAL; /* not saved */ goto out; } ifp->if_input = gna->save_if_input; gna->save_if_input = NULL; } out: nm_os_ifnet_unlock(); return ret; } /* * Intercept the packet steering routine in the tx path, * so that we can decide which queue is used for an mbuf. * Second argument is non-zero to intercept, 0 to restore. * On freebsd we just intercept if_transmit. */ int nm_os_catch_tx(struct netmap_generic_adapter *gna, int intercept) { struct netmap_adapter *na = &gna->up.up; struct ifnet *ifp = netmap_generic_getifp(gna); nm_os_ifnet_lock(); if (intercept) { na->if_transmit = ifp->if_transmit; ifp->if_transmit = netmap_transmit; } else { ifp->if_transmit = na->if_transmit; } nm_os_ifnet_unlock(); return 0; } /* * Transmit routine used by generic_netmap_txsync(). Returns 0 on success * and non-zero on error (which may be packet drops or other errors). * addr and len identify the netmap buffer, m is the (preallocated) * mbuf to use for transmissions. * * We should add a reference to the mbuf so the m_freem() at the end * of the transmission does not consume resources. 
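 * (That is what the SET_MBUF_REFCNT(m, 2) below achieves: one
 * reference is consumed by the driver's m_freem() after transmission,
 * while the second keeps the mbuf around so netmap can reuse it.)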
* * On FreeBSD, and on multiqueue cards, we can force the queue using * if (M_HASHTYPE_GET(m) != M_HASHTYPE_NONE) * i = m->m_pkthdr.flowid % adapter->num_queues; * else * i = curcpu % adapter->num_queues; * */ int nm_os_generic_xmit_frame(struct nm_os_gen_arg *a) { int ret; u_int len = a->len; struct ifnet *ifp = a->ifp; struct mbuf *m = a->m; #if __FreeBSD_version < 1100000 /* * Old FreeBSD versions. The mbuf has a cluster attached, * we need to copy from the cluster to the netmap buffer. */ if (MBUF_REFCNT(m) != 1) { nm_prerr("invalid refcnt %d for %p", MBUF_REFCNT(m), m); panic("in generic_xmit_frame"); } if (m->m_ext.ext_size < len) { nm_prlim(2, "size %d < len %d", m->m_ext.ext_size, len); len = m->m_ext.ext_size; } bcopy(a->addr, m->m_data, len); #else /* __FreeBSD_version >= 1100000 */ /* New FreeBSD versions. Link the external storage to * the netmap buffer, so that no copy is necessary. */ m->m_ext.ext_buf = m->m_data = a->addr; m->m_ext.ext_size = len; #endif /* __FreeBSD_version >= 1100000 */ m->m_len = m->m_pkthdr.len = len; /* mbuf refcnt is not contended, no need to use atomic * (a memory barrier is enough). */ SET_MBUF_REFCNT(m, 2); M_HASHTYPE_SET(m, M_HASHTYPE_OPAQUE); m->m_pkthdr.flowid = a->ring_nr; m->m_pkthdr.rcvif = ifp; /* used for tx notification */ ret = NA(ifp)->if_transmit(ifp, m); return ret ? -1 : 0; } #if __FreeBSD_version >= 1100005 struct netmap_adapter * netmap_getna(if_t ifp) { return (NA((struct ifnet *)ifp)); } #endif /* __FreeBSD_version >= 1100005 */ /* * The following two functions are empty until we have a generic * way to extract the info from the ifp */ int nm_os_generic_find_num_desc(struct ifnet *ifp, unsigned int *tx, unsigned int *rx) { return 0; } void nm_os_generic_find_num_queues(struct ifnet *ifp, u_int *txq, u_int *rxq) { unsigned num_rings = netmap_generic_rings ? netmap_generic_rings : 1; *txq = num_rings; *rxq = num_rings; } void nm_os_generic_set_features(struct netmap_generic_adapter *gna) { gna->rxsg = 1; /* Supported through m_copydata. */ gna->txqdisc = 0; /* Not supported. */ } void nm_os_mitigation_init(struct nm_generic_mit *mit, int idx, struct netmap_adapter *na) { mit->mit_pending = 0; mit->mit_ring_idx = idx; mit->mit_na = na; } void nm_os_mitigation_start(struct nm_generic_mit *mit) { } void nm_os_mitigation_restart(struct nm_generic_mit *mit) { } int nm_os_mitigation_active(struct nm_generic_mit *mit) { return 0; } void nm_os_mitigation_cleanup(struct nm_generic_mit *mit) { } static int nm_vi_dummy(struct ifnet *ifp, u_long cmd, caddr_t addr) { return EINVAL; } static void nm_vi_start(struct ifnet *ifp) { panic("nm_vi_start() must not be called"); } /* * Index manager of persistent virtual interfaces. * It is used to decide the lowest byte of the MAC address. * We use the same algorithm with management of bridge port index. */ #define NM_VI_MAX 255 static struct { uint8_t index[NM_VI_MAX]; /* XXX just for a reasonable number */ uint8_t active; struct mtx lock; } nm_vi_indices; void nm_os_vi_init_index(void) { int i; for (i = 0; i < NM_VI_MAX; i++) nm_vi_indices.index[i] = i; nm_vi_indices.active = 0; mtx_init(&nm_vi_indices.lock, "nm_vi_indices_lock", NULL, MTX_DEF); } /* return -1 if no index available */ static int nm_vi_get_index(void) { int ret; mtx_lock(&nm_vi_indices.lock); ret = nm_vi_indices.active == NM_VI_MAX ? 
-1 : nm_vi_indices.index[nm_vi_indices.active++]; mtx_unlock(&nm_vi_indices.lock); return ret; } static void nm_vi_free_index(uint8_t val) { int i, lim; mtx_lock(&nm_vi_indices.lock); lim = nm_vi_indices.active; for (i = 0; i < lim; i++) { if (nm_vi_indices.index[i] == val) { /* swap index[i] and index[lim-1] */ int tmp = nm_vi_indices.index[lim-1]; nm_vi_indices.index[lim-1] = val; nm_vi_indices.index[i] = tmp; nm_vi_indices.active--; break; } } if (lim == nm_vi_indices.active) nm_prerr("Index %u not found", val); mtx_unlock(&nm_vi_indices.lock); } #undef NM_VI_MAX /* * Implementation of a netmap-capable virtual interface that is * registered to the system. * It is based on if_tap.c and ip_fw_log.c in FreeBSD 9. * * Note: Linux sets refcount to 0 on allocation of net_device, * then increments it on registration to the system. * FreeBSD sets refcount to 1 on if_alloc(), and does not * increment this refcount on if_attach(). */ int nm_os_vi_persist(const char *name, struct ifnet **ret) { struct ifnet *ifp; u_short macaddr_hi; uint32_t macaddr_mid; u_char eaddr[6]; int unit = nm_vi_get_index(); /* just to decide MAC address */ if (unit < 0) return EBUSY; /* * We use the same MAC address generation method as tap, * except that the highest octet is 00:be instead of 00:bd */ macaddr_hi = htons(0x00be); /* XXX tap + 1 */ macaddr_mid = (uint32_t) ticks; bcopy(&macaddr_hi, eaddr, sizeof(short)); bcopy(&macaddr_mid, &eaddr[2], sizeof(uint32_t)); eaddr[5] = (uint8_t)unit; ifp = if_alloc(IFT_ETHER); if (ifp == NULL) { nm_prerr("if_alloc failed"); return ENOMEM; } if_initname(ifp, name, IF_DUNIT_NONE); ifp->if_mtu = 65536; ifp->if_flags = IFF_UP | IFF_SIMPLEX | IFF_MULTICAST; ifp->if_init = (void *)nm_vi_dummy; ifp->if_ioctl = nm_vi_dummy; ifp->if_start = nm_vi_start; ifp->if_mtu = ETHERMTU; IFQ_SET_MAXLEN(&ifp->if_snd, ifqmaxlen); ifp->if_capabilities |= IFCAP_LINKSTATE; ifp->if_capenable |= IFCAP_LINKSTATE; ether_ifattach(ifp, eaddr); *ret = ifp; return 0; } /* unregister from the system and drop the final refcount */ void nm_os_vi_detach(struct ifnet *ifp) { nm_vi_free_index(((char *)IF_LLADDR(ifp))[5]); ether_ifdetach(ifp); if_free(ifp); } #ifdef WITH_EXTMEM #include #include struct nm_os_extmem { vm_object_t obj; vm_offset_t kva; vm_offset_t size; uintptr_t scan; }; void nm_os_extmem_delete(struct nm_os_extmem *e) { nm_prinf("freeing %zx bytes", (size_t)e->size); vm_map_remove(kernel_map, e->kva, e->kva + e->size); nm_os_free(e); } char * nm_os_extmem_nextpage(struct nm_os_extmem *e) { char *rv = NULL; if (e->scan < e->kva + e->size) { rv = (char *)e->scan; e->scan += PAGE_SIZE; } return rv; } int nm_os_extmem_isequal(struct nm_os_extmem *e1, struct nm_os_extmem *e2) { return (e1->obj == e2->obj); } int nm_os_extmem_nr_pages(struct nm_os_extmem *e) { return e->size >> PAGE_SHIFT; } struct nm_os_extmem * nm_os_extmem_create(unsigned long p, struct nmreq_pools_info *pi, int *perror) { vm_map_t map; vm_map_entry_t entry; vm_object_t obj; vm_prot_t prot; vm_pindex_t index; boolean_t wired; struct nm_os_extmem *e = NULL; int rv, error = 0; e = nm_os_malloc(sizeof(*e)); if (e == NULL) { error = ENOMEM; goto out; } map = &curthread->td_proc->p_vmspace->vm_map; rv = vm_map_lookup(&map, p, VM_PROT_RW, &entry, &obj, &index, &prot, &wired); if (rv != KERN_SUCCESS) { nm_prerr("address %lx not found", p); goto out_free; } /* check that we are given the whole vm_object ? */ vm_map_lookup_done(map, entry); // XXX can we really use obj after releasing the map lock?
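	// Presumably this is safe in practice because the
	// vm_object_reference() below takes our own reference; the only
	// window is between vm_map_lookup_done() and that call.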
e->obj = obj; vm_object_reference(obj); /* wire the memory and add the vm_object to the kernel map, * to make sure that it is not freed even if the processes that * are mmap()ing it all exit */ e->kva = vm_map_min(kernel_map); e->size = obj->size << PAGE_SHIFT; rv = vm_map_find(kernel_map, obj, 0, &e->kva, e->size, 0, VMFS_OPTIMAL_SPACE, VM_PROT_READ | VM_PROT_WRITE, VM_PROT_READ | VM_PROT_WRITE, 0); if (rv != KERN_SUCCESS) { nm_prerr("vm_map_find(%zx) failed", (size_t)e->size); goto out_rel; } rv = vm_map_wire(kernel_map, e->kva, e->kva + e->size, VM_MAP_WIRE_SYSTEM | VM_MAP_WIRE_NOHOLES); if (rv != KERN_SUCCESS) { nm_prerr("vm_map_wire failed"); goto out_rem; } e->scan = e->kva; return e; out_rem: vm_map_remove(kernel_map, e->kva, e->kva + e->size); e->obj = NULL; out_rel: vm_object_deallocate(e->obj); out_free: nm_os_free(e); out: if (perror) *perror = error; return NULL; } #endif /* WITH_EXTMEM */ /* ================== PTNETMAP GUEST SUPPORT ==================== */ #ifdef WITH_PTNETMAP #include #include #include /* bus_dmamap_* */ #include #include #include /* * ptnetmap memory device (memdev) for FreeBSD guest, * used to expose host netmap memory to the guest through a PCI BAR. */ /* * ptnetmap memdev private data structure */ struct ptnetmap_memdev { device_t dev; struct resource *pci_io; struct resource *pci_mem; struct netmap_mem_d *nm_mem; }; static int ptn_memdev_probe(device_t); static int ptn_memdev_attach(device_t); static int ptn_memdev_detach(device_t); static int ptn_memdev_shutdown(device_t); static device_method_t ptn_memdev_methods[] = { DEVMETHOD(device_probe, ptn_memdev_probe), DEVMETHOD(device_attach, ptn_memdev_attach), DEVMETHOD(device_detach, ptn_memdev_detach), DEVMETHOD(device_shutdown, ptn_memdev_shutdown), DEVMETHOD_END }; static driver_t ptn_memdev_driver = { PTNETMAP_MEMDEV_NAME, ptn_memdev_methods, sizeof(struct ptnetmap_memdev), }; /* We use (SI_ORDER_MIDDLE+1) here, see DEV_MODULE_ORDERED() invocation * below. */ static devclass_t ptnetmap_devclass; DRIVER_MODULE_ORDERED(ptn_memdev, pci, ptn_memdev_driver, ptnetmap_devclass, NULL, NULL, SI_ORDER_MIDDLE + 1); /* * Map host netmap memory through PCI-BAR in the guest OS, * returning physical (nm_paddr) and virtual (nm_addr) addresses * of the netmap memory mapped in the guest. */ int nm_os_pt_memdev_iomap(struct ptnetmap_memdev *ptn_dev, vm_paddr_t *nm_paddr, void **nm_addr, uint64_t *mem_size) { int rid; nm_prinf("ptn_memdev_driver iomap"); rid = PCIR_BAR(PTNETMAP_MEM_PCI_BAR); *mem_size = bus_read_4(ptn_dev->pci_io, PTNET_MDEV_IO_MEMSIZE_HI); *mem_size = bus_read_4(ptn_dev->pci_io, PTNET_MDEV_IO_MEMSIZE_LO) | (*mem_size << 32); /* map memory allocator */ ptn_dev->pci_mem = bus_alloc_resource(ptn_dev->dev, SYS_RES_MEMORY, &rid, 0, ~0, *mem_size, RF_ACTIVE); if (ptn_dev->pci_mem == NULL) { *nm_paddr = 0; *nm_addr = NULL; return ENOMEM; } *nm_paddr = rman_get_start(ptn_dev->pci_mem); *nm_addr = rman_get_virtual(ptn_dev->pci_mem); nm_prinf("=== BAR %d start %lx len %lx mem_size %lx ===", PTNETMAP_MEM_PCI_BAR, (unsigned long)(*nm_paddr), (unsigned long)rman_get_size(ptn_dev->pci_mem), (unsigned long)*mem_size); return (0); } uint32_t nm_os_pt_memdev_ioread(struct ptnetmap_memdev *ptn_dev, unsigned int reg) { return bus_read_4(ptn_dev->pci_io, reg); } /* Unmap host netmap memory.
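 * This releases the PCI BAR mapping previously set up by
 * nm_os_pt_memdev_iomap().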
*/ void nm_os_pt_memdev_iounmap(struct ptnetmap_memdev *ptn_dev) { nm_prinf("ptn_memdev_driver iounmap"); if (ptn_dev->pci_mem) { bus_release_resource(ptn_dev->dev, SYS_RES_MEMORY, PCIR_BAR(PTNETMAP_MEM_PCI_BAR), ptn_dev->pci_mem); ptn_dev->pci_mem = NULL; } } /* Device identification routine, return BUS_PROBE_DEFAULT on success, * positive on failure */ static int ptn_memdev_probe(device_t dev) { char desc[256]; if (pci_get_vendor(dev) != PTNETMAP_PCI_VENDOR_ID) return (ENXIO); if (pci_get_device(dev) != PTNETMAP_PCI_DEVICE_ID) return (ENXIO); snprintf(desc, sizeof(desc), "%s PCI adapter", PTNETMAP_MEMDEV_NAME); device_set_desc_copy(dev, desc); return (BUS_PROBE_DEFAULT); } /* Device initialization routine. */ static int ptn_memdev_attach(device_t dev) { struct ptnetmap_memdev *ptn_dev; int rid; uint16_t mem_id; ptn_dev = device_get_softc(dev); ptn_dev->dev = dev; pci_enable_busmaster(dev); rid = PCIR_BAR(PTNETMAP_IO_PCI_BAR); ptn_dev->pci_io = bus_alloc_resource_any(dev, SYS_RES_IOPORT, &rid, RF_ACTIVE); if (ptn_dev->pci_io == NULL) { device_printf(dev, "cannot map I/O space\n"); return (ENXIO); } mem_id = bus_read_4(ptn_dev->pci_io, PTNET_MDEV_IO_MEMID); /* create guest allocator */ ptn_dev->nm_mem = netmap_mem_pt_guest_attach(ptn_dev, mem_id); if (ptn_dev->nm_mem == NULL) { ptn_memdev_detach(dev); return (ENOMEM); } netmap_mem_get(ptn_dev->nm_mem); nm_prinf("ptnetmap memdev attached, host memid: %u", mem_id); return (0); } /* Device removal routine. */ static int ptn_memdev_detach(device_t dev) { struct ptnetmap_memdev *ptn_dev; ptn_dev = device_get_softc(dev); if (ptn_dev->nm_mem) { nm_prinf("ptnetmap memdev detached, host memid %u", netmap_mem_get_id(ptn_dev->nm_mem)); netmap_mem_put(ptn_dev->nm_mem); ptn_dev->nm_mem = NULL; } if (ptn_dev->pci_mem) { bus_release_resource(dev, SYS_RES_MEMORY, PCIR_BAR(PTNETMAP_MEM_PCI_BAR), ptn_dev->pci_mem); ptn_dev->pci_mem = NULL; } if (ptn_dev->pci_io) { bus_release_resource(dev, SYS_RES_IOPORT, PCIR_BAR(PTNETMAP_IO_PCI_BAR), ptn_dev->pci_io); ptn_dev->pci_io = NULL; } return (0); } static int ptn_memdev_shutdown(device_t dev) { return bus_generic_shutdown(dev); } #endif /* WITH_PTNETMAP */ /* * In order to track whether pages are still mapped, we hook into * the standard cdev_pager and intercept the constructor and * destructor. 
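 * The constructor (netmap_dev_pager_ctor) takes an extra reference on
 * the cdev, and the destructor runs netmap_dtor() and drops that
 * reference, so an outstanding mmap() keeps the priv alive even after
 * the last close() on the file descriptor.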
*/ struct netmap_vm_handle_t { struct cdev *dev; struct netmap_priv_d *priv; }; static int netmap_dev_pager_ctor(void *handle, vm_ooffset_t size, vm_prot_t prot, vm_ooffset_t foff, struct ucred *cred, u_short *color) { struct netmap_vm_handle_t *vmh = handle; if (netmap_verbose) nm_prinf("handle %p size %jd prot %d foff %jd", handle, (intmax_t)size, prot, (intmax_t)foff); if (color) *color = 0; dev_ref(vmh->dev); return 0; } static void netmap_dev_pager_dtor(void *handle) { struct netmap_vm_handle_t *vmh = handle; struct cdev *dev = vmh->dev; struct netmap_priv_d *priv = vmh->priv; if (netmap_verbose) nm_prinf("handle %p", handle); netmap_dtor(priv); free(vmh, M_DEVBUF); dev_rel(dev); } static int netmap_dev_pager_fault(vm_object_t object, vm_ooffset_t offset, int prot, vm_page_t *mres) { struct netmap_vm_handle_t *vmh = object->handle; struct netmap_priv_d *priv = vmh->priv; struct netmap_adapter *na = priv->np_na; vm_paddr_t paddr; vm_page_t page; vm_memattr_t memattr; vm_pindex_t pidx; nm_prdis("object %p offset %jd prot %d mres %p", object, (intmax_t)offset, prot, mres); memattr = object->memattr; pidx = OFF_TO_IDX(offset); paddr = netmap_mem_ofstophys(na->nm_mem, offset); if (paddr == 0) return VM_PAGER_FAIL; if (((*mres)->flags & PG_FICTITIOUS) != 0) { /* * If the passed in result page is a fake page, update it with * the new physical address. */ page = *mres; vm_page_updatefake(page, paddr, memattr); } else { /* * Replace the passed-in reqpage with our own fake page and * free up all of the original pages. */ #ifndef VM_OBJECT_WUNLOCK /* FreeBSD < 10.x */ #define VM_OBJECT_WUNLOCK VM_OBJECT_UNLOCK #define VM_OBJECT_WLOCK VM_OBJECT_LOCK #endif /* VM_OBJECT_WUNLOCK */ VM_OBJECT_WUNLOCK(object); page = vm_page_getfake(paddr, memattr); VM_OBJECT_WLOCK(object); vm_page_lock(*mres); vm_page_free(*mres); vm_page_unlock(*mres); *mres = page; vm_page_insert(page, object, pidx); } page->valid = VM_PAGE_BITS_ALL; return (VM_PAGER_OK); } static struct cdev_pager_ops netmap_cdev_pager_ops = { .cdev_pg_ctor = netmap_dev_pager_ctor, .cdev_pg_dtor = netmap_dev_pager_dtor, .cdev_pg_fault = netmap_dev_pager_fault, }; static int netmap_mmap_single(struct cdev *cdev, vm_ooffset_t *foff, vm_size_t objsize, vm_object_t *objp, int prot) { int error; struct netmap_vm_handle_t *vmh; struct netmap_priv_d *priv; vm_object_t obj; if (netmap_verbose) nm_prinf("cdev %p foff %jd size %jd objp %p prot %d", cdev, (intmax_t )*foff, (intmax_t )objsize, objp, prot); vmh = malloc(sizeof(struct netmap_vm_handle_t), M_DEVBUF, M_NOWAIT | M_ZERO); if (vmh == NULL) return ENOMEM; vmh->dev = cdev; NMG_LOCK(); error = devfs_get_cdevpriv((void**)&priv); if (error) goto err_unlock; if (priv->np_nifp == NULL) { error = EINVAL; goto err_unlock; } vmh->priv = priv; priv->np_refs++; NMG_UNLOCK(); obj = cdev_pager_allocate(vmh, OBJT_DEVICE, &netmap_cdev_pager_ops, objsize, prot, *foff, NULL); if (obj == NULL) { nm_prerr("cdev_pager_allocate failed"); error = EINVAL; goto err_deref; } *objp = obj; return 0; err_deref: NMG_LOCK(); priv->np_refs--; err_unlock: NMG_UNLOCK(); // err: free(vmh, M_DEVBUF); return error; } /* * On FreeBSD the close routine is only called on the last close on * the device (/dev/netmap) so we cannot do anything useful. * To track close() on individual file descriptors we pass netmap_dtor() to * devfs_set_cdevpriv() on open(). The FreeBSD kernel will call the destructor * when the last fd pointing to the device is closed.
* * Note that FreeBSD does not even munmap() on close() so we also have * to track mmap() ourselves, and postpone the call to * netmap_dtor() until the process has no open fds and no active * memory maps on /dev/netmap, as in linux. */ static int netmap_close(struct cdev *dev, int fflag, int devtype, struct thread *td) { if (netmap_verbose) nm_prinf("dev %p fflag 0x%x devtype %d td %p", dev, fflag, devtype, td); return 0; } static int netmap_open(struct cdev *dev, int oflags, int devtype, struct thread *td) { struct netmap_priv_d *priv; int error; (void)dev; (void)oflags; (void)devtype; (void)td; NMG_LOCK(); priv = netmap_priv_new(); if (priv == NULL) { error = ENOMEM; goto out; } error = devfs_set_cdevpriv(priv, netmap_dtor); if (error) { netmap_priv_delete(priv); } out: NMG_UNLOCK(); return error; } /******************** kthread wrapper ****************/ #include u_int nm_os_ncpus(void) { return mp_maxid + 1; } struct nm_kctx_ctx { /* Userspace thread (kthread creator). */ struct thread *user_td; /* worker function and parameter */ nm_kctx_worker_fn_t worker_fn; void *worker_private; struct nm_kctx *nmk; /* integer to manage multiple worker contexts (e.g., RX or TX on ptnetmap) */ long type; }; struct nm_kctx { struct thread *worker; struct mtx worker_lock; struct nm_kctx_ctx worker_ctx; int run; /* used to stop kthread */ int attach_user; /* kthread attached to user_process */ int affinity; }; static void nm_kctx_worker(void *data) { struct nm_kctx *nmk = data; struct nm_kctx_ctx *ctx = &nmk->worker_ctx; if (nmk->affinity >= 0) { thread_lock(curthread); sched_bind(curthread, nmk->affinity); thread_unlock(curthread); } while (nmk->run) { /* * check if the parent process dies * (when kthread is attached to user process) */ if (ctx->user_td) { PROC_LOCK(curproc); thread_suspend_check(0); PROC_UNLOCK(curproc); } else { kthread_suspend_check(); } /* Continuously execute worker process. */ ctx->worker_fn(ctx->worker_private); /* worker body */ } kthread_exit(); } void nm_os_kctx_worker_setaff(struct nm_kctx *nmk, int affinity) { nmk->affinity = affinity; } struct nm_kctx * nm_os_kctx_create(struct nm_kctx_cfg *cfg, void *opaque) { struct nm_kctx *nmk = NULL; nmk = malloc(sizeof(*nmk), M_DEVBUF, M_NOWAIT | M_ZERO); if (!nmk) return NULL; mtx_init(&nmk->worker_lock, "nm_kthread lock", NULL, MTX_DEF); nmk->worker_ctx.worker_fn = cfg->worker_fn; nmk->worker_ctx.worker_private = cfg->worker_private; nmk->worker_ctx.type = cfg->type; nmk->affinity = -1; /* attach kthread to user process (ptnetmap) */ nmk->attach_user = cfg->attach_user; return nmk; } int nm_os_kctx_worker_start(struct nm_kctx *nmk) { struct proc *p = NULL; int error = 0; /* Temporarily disable this function as it is currently broken * and causes kernel crashes. The failure can be triggered by * the "vale_polling_enable_disable" test in ctrl-api-test.c.
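 * (Hence the unconditional EOPNOTSUPP return below; the remainder of
 * the function appears to be kept in place for when the underlying
 * crash is fixed.)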
*/ return EOPNOTSUPP; if (nmk->worker) return EBUSY; /* check if we want to attach kthread to user process */ if (nmk->attach_user) { nmk->worker_ctx.user_td = curthread; p = curthread->td_proc; } /* enable kthread main loop */ nmk->run = 1; /* create kthread */ if((error = kthread_add(nm_kctx_worker, nmk, p, &nmk->worker, RFNOWAIT /* to be checked */, 0, "nm-kthread-%ld", nmk->worker_ctx.type))) { goto err; } nm_prinf("nm_kthread started td %p", nmk->worker); return 0; err: nm_prerr("nm_kthread start failed err %d", error); nmk->worker = NULL; return error; } void nm_os_kctx_worker_stop(struct nm_kctx *nmk) { if (!nmk->worker) return; /* tell to kthread to exit from main loop */ nmk->run = 0; /* wake up kthread if it sleeps */ kthread_resume(nmk->worker); nmk->worker = NULL; } void nm_os_kctx_destroy(struct nm_kctx *nmk) { if (!nmk) return; if (nmk->worker) nm_os_kctx_worker_stop(nmk); free(nmk, M_DEVBUF); } /******************** kqueue support ****************/ /* * In addition to calling selwakeuppri(), nm_os_selwakeup() also * needs to call knote() to wake up kqueue listeners. * This operation is deferred to a taskqueue in order to avoid possible * lock order reversals; these may happen because knote() grabs a * private lock associated to the 'si' (see struct selinfo, * struct nm_selinfo, and nm_os_selinfo_init), and nm_os_selwakeup() * can be called while holding the lock associated to a different * 'si'. * When calling knote() we use a non-zero 'hint' argument to inform * the netmap_knrw() function that it is being called from * 'nm_os_selwakeup'; this is necessary because when netmap_knrw() is * called by the kevent subsystem (i.e. kevent_scan()) we also need to * call netmap_poll(). * * The netmap_kqfilter() function registers one or another f_event * depending on read or write mode. A pointer to the struct * 'netmap_priv_d' is stored into kn->kn_hook, so that it can later * be passed to netmap_poll(). We pass NULL as a third argument to * netmap_poll(), so that the latter only runs the txsync/rxsync * (if necessary), and skips the nm_os_selrecord() calls. */ void nm_os_selwakeup(struct nm_selinfo *si) { selwakeuppri(&si->si, PI_NET); - taskqueue_enqueue(si->ntfytq, &si->ntfytask); + if (si->kqueue_users > 0) { + taskqueue_enqueue(si->ntfytq, &si->ntfytask); + } } void nm_os_selrecord(struct thread *td, struct nm_selinfo *si) { selrecord(td, &si->si); } static void netmap_knrdetach(struct knote *kn) { struct netmap_priv_d *priv = (struct netmap_priv_d *)kn->kn_hook; - struct selinfo *si = &priv->np_si[NR_RX]->si; + struct nm_selinfo *si = priv->np_si[NR_RX]; - nm_prinf("remove selinfo %p", si); - knlist_remove(&si->si_note, kn, /*islocked=*/0); + knlist_remove(&si->si.si_note, kn, /*islocked=*/0); + NMG_LOCK(); + KASSERT(si->kqueue_users > 0, ("kqueue_user underflow on %s", + si->mtxname)); + si->kqueue_users--; + nm_prinf("kqueue users for %s: %d", si->mtxname, si->kqueue_users); + NMG_UNLOCK(); } static void netmap_knwdetach(struct knote *kn) { struct netmap_priv_d *priv = (struct netmap_priv_d *)kn->kn_hook; - struct selinfo *si = &priv->np_si[NR_TX]->si; + struct nm_selinfo *si = priv->np_si[NR_TX]; - nm_prinf("remove selinfo %p", si); - knlist_remove(&si->si_note, kn, /*islocked=*/0); + knlist_remove(&si->si.si_note, kn, /*islocked=*/0); + NMG_LOCK(); + si->kqueue_users--; + nm_prinf("kqueue users for %s: %d", si->mtxname, si->kqueue_users); + NMG_UNLOCK(); } /* * Callback triggered by netmap notifications (see netmap_notify()), * and by the application calling kevent(). 
In the former case we * just return 1 (events ready), since we are not able to do better. * In the latter case we use netmap_poll() to see which events are * ready. */ static int netmap_knrw(struct knote *kn, long hint, int events) { struct netmap_priv_d *priv; int revents; if (hint != 0) { /* Called from netmap_notify(), typically from a * thread different from the one issuing kevent(). * Assume we are ready. */ return 1; } /* Called from kevent(). */ priv = kn->kn_hook; revents = netmap_poll(priv, events, /*thread=*/NULL); return (events & revents) ? 1 : 0; } static int netmap_knread(struct knote *kn, long hint) { return netmap_knrw(kn, hint, POLLIN); } static int netmap_knwrite(struct knote *kn, long hint) { return netmap_knrw(kn, hint, POLLOUT); } static struct filterops netmap_rfiltops = { .f_isfd = 1, .f_detach = netmap_knrdetach, .f_event = netmap_knread, }; static struct filterops netmap_wfiltops = { .f_isfd = 1, .f_detach = netmap_knwdetach, .f_event = netmap_knwrite, }; /* * This is called when a thread invokes kevent() to record * a change in the configuration of the kqueue(). * The 'priv' is the one associated to the open netmap device. */ static int netmap_kqfilter(struct cdev *dev, struct knote *kn) { struct netmap_priv_d *priv; int error; struct netmap_adapter *na; struct nm_selinfo *si; int ev = kn->kn_filter; if (ev != EVFILT_READ && ev != EVFILT_WRITE) { nm_prerr("bad filter request %d", ev); return 1; } error = devfs_get_cdevpriv((void**)&priv); if (error) { nm_prerr("device not yet setup"); return 1; } na = priv->np_na; if (na == NULL) { nm_prerr("no netmap adapter for this file descriptor"); return 1; } /* the si is indicated in the priv */ si = priv->np_si[(ev == EVFILT_WRITE) ? NR_TX : NR_RX]; kn->kn_fop = (ev == EVFILT_WRITE) ? &netmap_wfiltops : &netmap_rfiltops; kn->kn_hook = priv; + NMG_LOCK(); + si->kqueue_users++; + nm_prinf("kqueue users for %s: %d", si->mtxname, si->kqueue_users); + NMG_UNLOCK(); knlist_add(&si->si.si_note, kn, /*islocked=*/0); return 0; } static int freebsd_netmap_poll(struct cdev *cdevi __unused, int events, struct thread *td) { struct netmap_priv_d *priv; if (devfs_get_cdevpriv((void **)&priv)) { return POLLERR; } return netmap_poll(priv, events, td); } static int freebsd_netmap_ioctl(struct cdev *dev __unused, u_long cmd, caddr_t data, int ffla __unused, struct thread *td) { int error; struct netmap_priv_d *priv; CURVNET_SET(TD_TO_VNET(td)); error = devfs_get_cdevpriv((void **)&priv); if (error) { /* XXX ENOENT should be impossible, since the priv * is now created in the open */ if (error == ENOENT) error = ENXIO; goto out; } error = netmap_ioctl(priv, cmd, data, td, /*nr_body_is_user=*/1); out: CURVNET_RESTORE(); return error; } void nm_os_onattach(struct ifnet *ifp) { ifp->if_capabilities |= IFCAP_NETMAP; } void nm_os_onenter(struct ifnet *ifp) { struct netmap_adapter *na = NA(ifp); na->if_transmit = ifp->if_transmit; ifp->if_transmit = netmap_transmit; ifp->if_capenable |= IFCAP_NETMAP; } void nm_os_onexit(struct ifnet *ifp) { struct netmap_adapter *na = NA(ifp); ifp->if_transmit = na->if_transmit; ifp->if_capenable &= ~IFCAP_NETMAP; } extern struct cdevsw netmap_cdevsw; /* XXX used in netmap.c, should go elsewhere */ struct cdevsw netmap_cdevsw = { .d_version = D_VERSION, .d_name = "netmap", .d_open = netmap_open, .d_mmap_single = netmap_mmap_single, .d_ioctl = freebsd_netmap_ioctl, .d_poll = freebsd_netmap_poll, .d_kqfilter = netmap_kqfilter, .d_close = netmap_close, }; /*--- end of kqueue support ----*/ /* * Kernel entry point. 
* * Initialize/finalize the module and return. * * Return 0 on success, errno on failure. */ static int netmap_loader(__unused struct module *module, int event, __unused void *arg) { int error = 0; switch (event) { case MOD_LOAD: error = netmap_init(); break; case MOD_UNLOAD: /* * if someone is still using netmap, * then the module cannot be unloaded. */ if (netmap_use_count) { nm_prerr("netmap module cannot be unloaded - netmap_use_count: %d", netmap_use_count); error = EBUSY; break; } netmap_fini(); break; default: error = EOPNOTSUPP; break; } return (error); } #ifdef DEV_MODULE_ORDERED /* * The netmap module contains three drivers: (i) the netmap character device * driver; (ii) the ptnetmap memdev PCI device driver, (iii) the ptnet PCI * device driver. The attach() routines of both (ii) and (iii) need the * lock of the global allocator, and such lock is initialized in netmap_init(), * which is part of (i). * Therefore, we make sure that (i) is loaded before (ii) and (iii), using * the 'order' parameter of driver declaration macros. For (i), we specify * SI_ORDER_MIDDLE, while higher orders are used with the DRIVER_MODULE_ORDERED * macros for (ii) and (iii). */ DEV_MODULE_ORDERED(netmap, netmap_loader, NULL, SI_ORDER_MIDDLE); #else /* !DEV_MODULE_ORDERED */ DEV_MODULE(netmap, netmap_loader, NULL); #endif /* DEV_MODULE_ORDERED */ MODULE_DEPEND(netmap, pci, 1, 1, 1); MODULE_VERSION(netmap, 1); /* reduce conditional code */ // linux API, use for the knlist in FreeBSD /* use a private mutex for the knlist */ Index: projects/import-googletest-1.8.1/sys/dev/netmap/netmap_kern.h =================================================================== --- projects/import-googletest-1.8.1/sys/dev/netmap/netmap_kern.h (revision 344270) +++ projects/import-googletest-1.8.1/sys/dev/netmap/netmap_kern.h (revision 344271) @@ -1,2405 +1,2408 @@ /*- * SPDX-License-Identifier: BSD-2-Clause-FreeBSD * * Copyright (C) 2011-2014 Matteo Landi, Luigi Rizzo * Copyright (C) 2013-2016 Universita` di Pisa * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ /* * $FreeBSD$ * * The header contains the definitions of constants and function * prototypes used only in kernelspace.
*/ #ifndef _NET_NETMAP_KERN_H_ #define _NET_NETMAP_KERN_H_ #if defined(linux) #if defined(CONFIG_NETMAP_EXTMEM) #define WITH_EXTMEM #endif #if defined(CONFIG_NETMAP_VALE) #define WITH_VALE #endif #if defined(CONFIG_NETMAP_PIPE) #define WITH_PIPES #endif #if defined(CONFIG_NETMAP_MONITOR) #define WITH_MONITOR #endif #if defined(CONFIG_NETMAP_GENERIC) #define WITH_GENERIC #endif #if defined(CONFIG_NETMAP_PTNETMAP) #define WITH_PTNETMAP #endif #if defined(CONFIG_NETMAP_SINK) #define WITH_SINK #endif #if defined(CONFIG_NETMAP_NULL) #define WITH_NMNULL #endif #elif defined (_WIN32) #define WITH_VALE // comment out to disable VALE support #define WITH_PIPES #define WITH_MONITOR #define WITH_GENERIC #define WITH_NMNULL #else /* neither linux nor windows */ #define WITH_VALE // comment out to disable VALE support #define WITH_PIPES #define WITH_MONITOR #define WITH_GENERIC #define WITH_PTNETMAP /* ptnetmap guest support */ #define WITH_EXTMEM #define WITH_NMNULL #endif #if defined(__FreeBSD__) #include <sys/selinfo.h> #define likely(x) __builtin_expect((long)!!(x), 1L) #define unlikely(x) __builtin_expect((long)!!(x), 0L) #define __user #define NM_LOCK_T struct mtx /* low level spinlock, used to protect queues */ #define NM_MTX_T struct sx /* OS-specific mutex (sleepable) */ #define NM_MTX_INIT(m) sx_init(&(m), #m) #define NM_MTX_DESTROY(m) sx_destroy(&(m)) #define NM_MTX_LOCK(m) sx_xlock(&(m)) #define NM_MTX_SPINLOCK(m) while (!sx_try_xlock(&(m))) ; #define NM_MTX_UNLOCK(m) sx_xunlock(&(m)) #define NM_MTX_ASSERT(m) sx_assert(&(m), SA_XLOCKED) #define NM_SELINFO_T struct nm_selinfo #define NM_SELRECORD_T struct thread #define MBUF_LEN(m) ((m)->m_pkthdr.len) #define MBUF_TXQ(m) ((m)->m_pkthdr.flowid) #define MBUF_TRANSMIT(na, ifp, m) ((na)->if_transmit(ifp, m)) #define GEN_TX_MBUF_IFP(m) ((m)->m_pkthdr.rcvif) #define NM_ATOMIC_T volatile int /* required by atomic/bitops.h */ /* atomic operations */ #include <machine/atomic.h> #define NM_ATOMIC_TEST_AND_SET(p) (!atomic_cmpset_acq_int((p), 0, 1)) #define NM_ATOMIC_CLEAR(p) atomic_store_rel_int((p), 0) #if __FreeBSD_version >= 1100030 #define WNA(_ifp) (_ifp)->if_netmap #else /* older FreeBSD */ #define WNA(_ifp) (_ifp)->if_pspare[0] #endif /* older FreeBSD */ #if __FreeBSD_version >= 1100005 struct netmap_adapter *netmap_getna(if_t ifp); #endif #if __FreeBSD_version >= 1100027 #define MBUF_REFCNT(m) ((m)->m_ext.ext_count) #define SET_MBUF_REFCNT(m, x) (m)->m_ext.ext_count = x #else #define MBUF_REFCNT(m) ((m)->m_ext.ref_cnt ? *((m)->m_ext.ref_cnt) : -1) #define SET_MBUF_REFCNT(m, x) *((m)->m_ext.ref_cnt) = x #endif #define MBUF_QUEUED(m) 1 struct nm_selinfo { + /* Support for select(2) and poll(2). */ struct selinfo si; + /* Support for kqueue(9). See comments in netmap_freebsd.c */ struct taskqueue *ntfytq; struct task ntfytask; struct mtx m; char mtxname[32]; + int kqueue_users; }; struct hrtimer { /* Not used in FreeBSD. */ }; #define NM_BNS_GET(b) #define NM_BNS_PUT(b) #elif defined (linux) #define NM_LOCK_T safe_spinlock_t // see bsd_glue.h #define NM_SELINFO_T wait_queue_head_t #define MBUF_LEN(m) ((m)->len) #define MBUF_TRANSMIT(na, ifp, m) \ ({ \ /* Avoid infinite recursion with generic. */ \ m->priority = NM_MAGIC_PRIORITY_TX; \ (((struct net_device_ops *)(na)->if_transmit)->ndo_start_xmit(m, ifp)); \ 0; \ }) /* See explanation in nm_os_generic_xmit_frame.
*/ #define GEN_TX_MBUF_IFP(m) ((struct ifnet *)skb_shinfo(m)->destructor_arg) #define NM_ATOMIC_T volatile long unsigned int #define NM_MTX_T struct mutex /* OS-specific sleepable lock */ #define NM_MTX_INIT(m) mutex_init(&(m)) #define NM_MTX_DESTROY(m) do { (void)(m); } while (0) #define NM_MTX_LOCK(m) mutex_lock(&(m)) #define NM_MTX_UNLOCK(m) mutex_unlock(&(m)) #define NM_MTX_ASSERT(m) mutex_is_locked(&(m)) #ifndef DEV_NETMAP #define DEV_NETMAP #endif /* DEV_NETMAP */ #elif defined (__APPLE__) #warning apple support is incomplete. #define likely(x) __builtin_expect(!!(x), 1) #define unlikely(x) __builtin_expect(!!(x), 0) #define NM_LOCK_T IOLock * #define NM_SELINFO_T struct selinfo #define MBUF_LEN(m) ((m)->m_pkthdr.len) #elif defined (_WIN32) #include "../../../WINDOWS/win_glue.h" #define NM_SELRECORD_T IO_STACK_LOCATION #define NM_SELINFO_T win_SELINFO // see win_glue.h #define NM_LOCK_T win_spinlock_t // see win_glue.h #define NM_MTX_T KGUARDED_MUTEX /* OS-specific mutex (sleepable) */ #define NM_MTX_INIT(m) KeInitializeGuardedMutex(&m); #define NM_MTX_DESTROY(m) do { (void)(m); } while (0) #define NM_MTX_LOCK(m) KeAcquireGuardedMutex(&(m)) #define NM_MTX_UNLOCK(m) KeReleaseGuardedMutex(&(m)) #define NM_MTX_ASSERT(m) assert(&m.Count>0) //These linknames are for the NDIS driver #define NETMAP_NDIS_LINKNAME_STRING L"\\DosDevices\\NMAPNDIS" #define NETMAP_NDIS_NTDEVICE_STRING L"\\Device\\NMAPNDIS" //Definition of internal driver-to-driver ioctl codes #define NETMAP_KERNEL_XCHANGE_POINTERS _IO('i', 180) #define NETMAP_KERNEL_SEND_SHUTDOWN_SIGNAL _IO_direct('i', 195) typedef struct hrtimer{ KTIMER timer; BOOLEAN active; KDPC deferred_proc; }; /* MSVC does not have likely/unlikely support */ #ifdef _MSC_VER #define likely(x) (x) #define unlikely(x) (x) #else #define likely(x) __builtin_expect((long)!!(x), 1L) #define unlikely(x) __builtin_expect((long)!!(x), 0L) #endif //_MSC_VER #else #error unsupported platform #endif /* end - platform-specific code */ #ifndef _WIN32 /* support for emulated sysctl */ #define SYSBEGIN(x) #define SYSEND #endif /* _WIN32 */ #define NM_ACCESS_ONCE(x) (*(volatile __typeof__(x) *)&(x)) #define NMG_LOCK_T NM_MTX_T #define NMG_LOCK_INIT() NM_MTX_INIT(netmap_global_lock) #define NMG_LOCK_DESTROY() NM_MTX_DESTROY(netmap_global_lock) #define NMG_LOCK() NM_MTX_LOCK(netmap_global_lock) #define NMG_UNLOCK() NM_MTX_UNLOCK(netmap_global_lock) #define NMG_LOCK_ASSERT() NM_MTX_ASSERT(netmap_global_lock) #if defined(__FreeBSD__) #define nm_prerr_int printf #define nm_prinf_int printf #elif defined (_WIN32) #define nm_prerr_int DbgPrint #define nm_prinf_int DbgPrint #elif defined(linux) #define nm_prerr_int(fmt, arg...) printk(KERN_ERR fmt, ##arg) #define nm_prinf_int(fmt, arg...) printk(KERN_INFO fmt, ##arg) #endif #define nm_prinf(format, ...) \ do { \ struct timeval __xxts; \ microtime(&__xxts); \ nm_prinf_int("%03d.%06d [%4d] %-25s " format "\n",\ (int)__xxts.tv_sec % 1000, (int)__xxts.tv_usec, \ __LINE__, __FUNCTION__, ##__VA_ARGS__); \ } while (0) #define nm_prerr(format, ...) \ do { \ struct timeval __xxts; \ microtime(&__xxts); \ nm_prerr_int("%03d.%06d [%4d] %-25s " format "\n",\ (int)__xxts.tv_sec % 1000, (int)__xxts.tv_usec, \ __LINE__, __FUNCTION__, ##__VA_ARGS__); \ } while (0) /* Disabled printf (used to be nm_prdis). */ #define nm_prdis(format, ...) /* Rate limited, lps indicates how many per second. */ #define nm_prlim(lps, format, ...) 
\ do { \ static int t0, __cnt; \ if (t0 != time_second) { \ t0 = time_second; \ __cnt = 0; \ } \ if (__cnt++ < lps) \ nm_prinf(format, ##__VA_ARGS__); \ } while (0) struct netmap_adapter; struct nm_bdg_fwd; struct nm_bridge; struct netmap_priv_d; struct nm_bdg_args; /* os-specific NM_SELINFO_T initialization/destruction functions */ int nm_os_selinfo_init(NM_SELINFO_T *, const char *name); void nm_os_selinfo_uninit(NM_SELINFO_T *); const char *nm_dump_buf(char *p, int len, int lim, char *dst); void nm_os_selwakeup(NM_SELINFO_T *si); void nm_os_selrecord(NM_SELRECORD_T *sr, NM_SELINFO_T *si); int nm_os_ifnet_init(void); void nm_os_ifnet_fini(void); void nm_os_ifnet_lock(void); void nm_os_ifnet_unlock(void); unsigned nm_os_ifnet_mtu(struct ifnet *ifp); void nm_os_get_module(void); void nm_os_put_module(void); void netmap_make_zombie(struct ifnet *); void netmap_undo_zombie(struct ifnet *); /* os independent alloc/realloc/free */ void *nm_os_malloc(size_t); void *nm_os_vmalloc(size_t); void *nm_os_realloc(void *, size_t new_size, size_t old_size); void nm_os_free(void *); void nm_os_vfree(void *); /* os specific attach/detach enter/exit-netmap-mode routines */ void nm_os_onattach(struct ifnet *); void nm_os_ondetach(struct ifnet *); void nm_os_onenter(struct ifnet *); void nm_os_onexit(struct ifnet *); /* passes a packet up to the host stack. * If the packet is sent (or dropped) immediately it returns NULL, * otherwise it links the packet to prev and returns m. * In this case, a final call with m=NULL and prev != NULL will send up * the entire chain to the host stack. */ void *nm_os_send_up(struct ifnet *, struct mbuf *m, struct mbuf *prev); int nm_os_mbuf_has_seg_offld(struct mbuf *m); int nm_os_mbuf_has_csum_offld(struct mbuf *m); #include "netmap_mbq.h" extern NMG_LOCK_T netmap_global_lock; enum txrx { NR_RX = 0, NR_TX = 1, NR_TXRX }; static __inline const char* nm_txrx2str(enum txrx t) { return (t == NR_RX ? "RX" : "TX"); } static __inline enum txrx nm_txrx_swap(enum txrx t) { return (t == NR_RX ? NR_TX : NR_RX); } #define for_rx_tx(t) for ((t) = 0; (t) < NR_TXRX; (t)++) #ifdef WITH_MONITOR struct netmap_zmon_list { struct netmap_kring *next; struct netmap_kring *prev; }; #endif /* WITH_MONITOR */ /* * private, kernel view of a ring. Keeps track of the status of * a ring across system calls. * * nr_hwcur index of the next buffer to refill. * It corresponds to ring->head * at the time the system call returns. * * nr_hwtail index of the first buffer owned by the kernel. * On RX, hwcur->hwtail are receive buffers * not yet released. hwcur is advanced following * ring->head, hwtail is advanced on incoming packets, * and a wakeup is generated when hwtail passes ring->cur * On TX, hwcur->rcur have been filled by the sender * but not sent yet to the NIC; rcur->hwtail are available * for new transmissions, and hwtail->hwcur-1 are pending * transmissions not yet acknowledged. * * The indexes in the NIC and netmap rings are offset by nkr_hwofs slots. * This is so that, on a reset, buffers owned by userspace are not * modified by the kernel. In particular: * RX rings: the next empty buffer (hwtail + hwofs) coincides with * the next empty buffer as known by the hardware (next_to_check or so). * TX rings: hwcur + hwofs coincides with next_to_send * * The following fields are used to implement lock-free copy of packets * from input to output ports in the VALE switch: * nkr_hwlease buffer after the last one being copied.
* A writer in nm_bdg_flush reserves N buffers * from nkr_hwlease, advances it, then does the * copy outside the lock. * In RX rings (used for VALE ports), * nkr_hwtail <= nkr_hwlease < nkr_hwcur+N-1 * In TX rings (used for NIC or host stack ports) * nkr_hwcur <= nkr_hwlease < nkr_hwtail * nkr_leases array of nkr_num_slots where writers can report * completion of their block. NR_NOSLOT (~0) indicates * that the writer has not finished yet * nkr_lease_idx index of the next free slot in nkr_leases, to be assigned * * The kring is manipulated by txsync/rxsync and the generic netmap functions. * * Concurrent rxsync or txsync on the same ring are prevented * by nm_kr_(try)lock(), which in turn uses nr_busy. This is all we need * for NIC rings, and for TX rings attached to the host stack. * * RX rings attached to the host stack use an mbq (rx_queue) on both * rxsync_from_host() and netmap_transmit(). The mbq is protected * by its internal lock. * * RX rings attached to the VALE switch are accessed by both senders * and receivers. They are protected through the q_lock on the RX ring. */ struct netmap_kring { struct netmap_ring *ring; uint32_t nr_hwcur; /* should be nr_hwhead */ uint32_t nr_hwtail; /* * Copies of values in user rings, so we do not need to look * at the ring (which could be modified). These are set in the * *sync_prologue()/finalize() routines. */ uint32_t rhead; uint32_t rcur; uint32_t rtail; uint32_t nr_kflags; /* private driver flags */ #define NKR_PENDINTR 0x1 // Pending interrupt. #define NKR_EXCLUSIVE 0x2 /* exclusive binding */ #define NKR_FORWARD 0x4 /* (host ring only) there are packets to forward */ #define NKR_NEEDRING 0x8 /* ring needed even if users==0 * (used internally by pipes and * by ptnetmap host ports) */ #define NKR_NOINTR 0x10 /* don't use interrupts on this ring */ #define NKR_FAKERING 0x20 /* don't allocate/free buffers */ uint32_t nr_mode; uint32_t nr_pending_mode; #define NKR_NETMAP_OFF 0x0 #define NKR_NETMAP_ON 0x1 uint32_t nkr_num_slots; /* * On a NIC reset, the NIC ring indexes may be reset but the * indexes in the netmap rings remain the same. nkr_hwofs * keeps track of the offset between the two. */ int32_t nkr_hwofs; /* last_reclaim is an opaque marker to help reduce the frequency * of operations such as reclaiming tx buffers. A possible use * is to set it to ticks and do the reclaim only once per tick. */ uint64_t last_reclaim; NM_SELINFO_T si; /* poll/select wait queue */ NM_LOCK_T q_lock; /* protects kring and ring. */ NM_ATOMIC_T nr_busy; /* prevent concurrent syscalls */ /* the adapter that owns this kring */ struct netmap_adapter *na; /* the adapter that wants to be notified when this kring has * new slots available. This is usually the same as the above, * but wrappers may let it point to themselves */ struct netmap_adapter *notify_na; /* The following fields are for VALE switch support */ struct nm_bdg_fwd *nkr_ft; uint32_t *nkr_leases; #define NR_NOSLOT ((uint32_t)~0) /* used in nkr_*lease* */ uint32_t nkr_hwlease; uint32_t nkr_lease_idx; /* while nkr_stopped is set, no new [tr]xsync operations can * be started on this kring. * This is used by netmap_disable_all_rings() * to find a synchronization point where critical data * structures pointed to by the kring can be added or removed */ volatile int nkr_stopped; /* Support for adapters without native netmap support. * On tx rings we preallocate an array of tx buffers * (same size as the netmap ring), on rx rings we * store incoming mbufs in a queue that is drained by * a rxsync.
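* As a rough sketch of the emulated tx path (a summary, not the * verbatim implementation): txsync copies each slot into one of the * preallocated tx_pool mbufs and hands it to the driver; one of the * in-flight mbufs is designated as the 'tx_event', so that its * destructor, invoked when the driver eventually frees it, wakes up * the netmap sender.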
*/ struct mbuf **tx_pool; struct mbuf *tx_event; /* TX event used as a notification */ NM_LOCK_T tx_event_lock; /* protects the tx_event mbuf */ struct mbq rx_queue; /* intercepted rx mbufs. */ uint32_t users; /* existing bindings for this ring */ uint32_t ring_id; /* kring identifier */ enum txrx tx; /* kind of ring (tx or rx) */ char name[64]; /* diagnostic */ /* [tx]sync callback for this kring. * The default nm_kring_create callback (netmap_krings_create) * sets the nm_sync callback of each hardware tx(rx) kring to * the corresponding nm_txsync(nm_rxsync) taken from the * netmap_adapter; moreover, it sets the sync callback * of the host tx(rx) ring to netmap_txsync_to_host * (netmap_rxsync_from_host). * * Overrides: the above configuration is not changed by * any of the nm_krings_create callbacks. */ int (*nm_sync)(struct netmap_kring *kring, int flags); int (*nm_notify)(struct netmap_kring *kring, int flags); #ifdef WITH_PIPES struct netmap_kring *pipe; /* if this is a pipe ring, * pointer to the other end */ uint32_t pipe_tail; /* hwtail updated by the other end */ #endif /* WITH_PIPES */ int (*save_notify)(struct netmap_kring *kring, int flags); #ifdef WITH_MONITOR /* array of krings that are monitoring this kring */ struct netmap_kring **monitors; uint32_t max_monitors; /* current size of the monitors array */ uint32_t n_monitors; /* next unused entry in the monitor array */ uint32_t mon_pos[NR_TXRX]; /* index of this ring in the monitored ring array */ uint32_t mon_tail; /* last seen slot on rx */ /* circular list of zero-copy monitors */ struct netmap_zmon_list zmon_list[NR_TXRX]; /* * Monitors work by intercepting the sync and notify callbacks of the * monitored krings. This is implemented by replacing the pointers * above and saving the previous ones in mon_* pointers below */ int (*mon_sync)(struct netmap_kring *kring, int flags); int (*mon_notify)(struct netmap_kring *kring, int flags); #endif } #ifdef _WIN32 __declspec(align(64)); #else __attribute__((__aligned__(64))); #endif /* return 1 iff the kring needs to be turned on */ static inline int nm_kring_pending_on(struct netmap_kring *kring) { return kring->nr_pending_mode == NKR_NETMAP_ON && kring->nr_mode == NKR_NETMAP_OFF; } /* return 1 iff the kring needs to be turned off */ static inline int nm_kring_pending_off(struct netmap_kring *kring) { return kring->nr_pending_mode == NKR_NETMAP_OFF && kring->nr_mode == NKR_NETMAP_ON; } /* return the next index, with wraparound */ static inline uint32_t nm_next(uint32_t i, uint32_t lim) { return unlikely (i == lim) ? 0 : i + 1; } /* return the previous index, with wraparound */ static inline uint32_t nm_prev(uint32_t i, uint32_t lim) { return unlikely (i == 0) ? lim : i - 1; } /* * * Here is the layout for the Rx and Tx rings. RxRING TxRING +-----------------+ +-----------------+ | | | | | free | | free | +-----------------+ +-----------------+ head->| owned by user |<-hwcur | not sent to nic |<-hwcur | | | yet | +-----------------+ | | cur->| available to | | | | user, not read | +-----------------+ | yet | cur->| (being | | | | prepared) | | | | | +-----------------+ + ------ + tail->| |<-hwtail | |<-hwlease | (being | ... | | ... | prepared) | ... | | ... +-----------------+ ... | | ... | |<-hwlease +-----------------+ | | tail->| |<-hwtail | | | | | | | | | | | | +-----------------+ +-----------------+ * The cur/tail (user view) and hwcur/hwtail (kernel view) * are used in the normal operation of the card. 
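* * For example (illustrative numbers only): on an RX ring with * nkr_num_slots == 8, nr_hwcur == 2 and nr_hwtail == 6, slots 2..5 hold * packets received for the user but not yet released, and * nm_kr_rxspace() (defined later in this file) returns 6 - 2 = 4.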
* * When a ring is the output of a switch port (Rx ring for * a VALE port, Tx ring for the host stack or NIC), slots * are reserved in blocks through 'hwlease' which points * to the next unused slot. * On an Rx ring, hwlease is always after hwtail, * and completions cause hwtail to advance. * On a Tx ring, hwlease is always between cur and hwtail, * and completions cause cur to advance. * * nm_kr_space() returns the maximum number of slots that * can be assigned. * nm_kr_lease() reserves the required number of buffers, * advances nkr_hwlease and also returns an entry in * a circular array where completions should be reported. */ struct lut_entry; #ifdef __FreeBSD__ #define plut_entry lut_entry #endif struct netmap_lut { struct lut_entry *lut; struct plut_entry *plut; uint32_t objtotal; /* max buffer index */ uint32_t objsize; /* buffer size */ }; struct netmap_vp_adapter; // forward struct nm_bridge; /* Struct to be filled by nm_config callbacks. */ struct nm_config_info { unsigned num_tx_rings; unsigned num_rx_rings; unsigned num_tx_descs; unsigned num_rx_descs; unsigned rx_buf_maxsize; }; /* * default type for the magic field. * May be overridden in glue code. */ #ifndef NM_OS_MAGIC #define NM_OS_MAGIC uint32_t #endif /* !NM_OS_MAGIC */ /* * The "struct netmap_adapter" extends the "struct adapter" * (or equivalent) device descriptor. * It contains all base fields needed to support netmap operation. * There are in fact different types of netmap adapters * (native, generic, VALE switch...) so a netmap_adapter is * just the first field in the derived type. */ struct netmap_adapter { /* * On linux we do not have a good way to tell if an interface * is netmap-capable. So we always use the following trick: * NA(ifp) points here, and the first entry (which hopefully * always exists and is at least 32 bits) contains a magic * value which we can use to detect that the interface is good. */ NM_OS_MAGIC magic; uint32_t na_flags; /* enabled, and other flags */ #define NAF_SKIP_INTR 1 /* use the regular interrupt handler. * useful during initialization */ #define NAF_SW_ONLY 2 /* forward packets only to sw adapter */ #define NAF_BDG_MAYSLEEP 4 /* the bridge is allowed to sleep when * forwarding packets coming from this * interface */ #define NAF_MEM_OWNER 8 /* the adapter uses its own memory area * that cannot be changed */ #define NAF_NATIVE 16 /* the adapter is native. * Virtual ports (non-persistent VALE ports, * pipes, monitors...) should never use * this flag. */ #define NAF_NETMAP_ON 32 /* netmap is active (either native or * emulated). Where possible (e.g. FreeBSD) * IFCAP_NETMAP also mirrors this flag. */ #define NAF_HOST_RINGS 64 /* the adapter supports the host rings */ #define NAF_FORCE_NATIVE 128 /* the adapter is always NATIVE */ /* free */ #define NAF_MOREFRAG 512 /* the adapter supports NS_MOREFRAG */ #define NAF_ZOMBIE (1U<<30) /* the nic driver has been unloaded */ #define NAF_BUSY (1U<<31) /* the adapter is used internally and * cannot be registered from userspace */ int active_fds; /* number of user-space descriptors using this interface, which is equal to the number of struct netmap_if objs in the mapped region.
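* A non-zero active_fds also makes the NETMAP_OWNED_BY_ANY() check * (defined further below) evaluate true, so the adapter cannot be * claimed for exclusive kernel use while user registrations exist.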
*/ u_int num_rx_rings; /* number of adapter receive rings */ u_int num_tx_rings; /* number of adapter transmit rings */ u_int num_host_rx_rings; /* number of host receive rings */ u_int num_host_tx_rings; /* number of host transmit rings */ u_int num_tx_desc; /* number of descriptors in each queue */ u_int num_rx_desc; /* tx_rings and rx_rings are private but allocated as a * contiguous chunk of memory. Each array has N+K entries, * N for the hardware rings and K for the host rings. */ struct netmap_kring **tx_rings; /* array of TX rings. */ struct netmap_kring **rx_rings; /* array of RX rings. */ void *tailroom; /* space below the rings array */ /* (used for leases) */ NM_SELINFO_T si[NR_TXRX]; /* global wait queues */ /* count users of the global wait queues */ int si_users[NR_TXRX]; void *pdev; /* used to store pci device */ /* copy of if_qflush and if_transmit pointers, to intercept * packets from the network stack when netmap is active. */ int (*if_transmit)(struct ifnet *, struct mbuf *); /* copy of if_input for netmap_send_up() */ void (*if_input)(struct ifnet *, struct mbuf *); /* Back reference to the parent ifnet struct. Used for * hardware ports (emulated netmap included). */ struct ifnet *ifp; /* adapter is ifp->if_softc */ /*---- callbacks for this netmap adapter -----*/ /* * nm_dtor() is the cleanup routine called when destroying * the adapter. * Called with NMG_LOCK held. * * nm_register() is called on NIOCREGIF and close() to enter * or exit netmap mode on the NIC * Called with NMG_LOCK held. * * nm_txsync() pushes packets to the underlying hw/switch * * nm_rxsync() collects packets from the underlying hw/switch * * nm_config() returns configuration information from the OS * Called with NMG_LOCK held. * * nm_krings_create() creates and initializes the tx_rings and * rx_rings arrays of kring structures. In particular, it * sets the nm_sync callbacks for each ring. * There is no need to also allocate the corresponding * netmap_rings, since netmap_mem_rings_create() will always * be called to provide the missing ones. * Called with NMG_LOCK held. * * nm_krings_delete() cleans up and deletes the tx_rings and rx_rings * arrays * Called with NMG_LOCK held. * * nm_notify() is used to act after data have become available * (or the stopped state of the ring has changed) * For hw devices this is typically a selwakeup(), * but for NIC/host ports attached to a switch (or vice-versa) * we also need to invoke the 'txsync' code downstream. * This callback pointer is actually used only to initialize * kring->nm_notify. * Return values are the same as for netmap_rx_irq(). */ void (*nm_dtor)(struct netmap_adapter *); int (*nm_register)(struct netmap_adapter *, int onoff); void (*nm_intr)(struct netmap_adapter *, int onoff); int (*nm_txsync)(struct netmap_kring *kring, int flags); int (*nm_rxsync)(struct netmap_kring *kring, int flags); int (*nm_notify)(struct netmap_kring *kring, int flags); #define NAF_FORCE_READ 1 #define NAF_FORCE_RECLAIM 2 #define NAF_CAN_FORWARD_DOWN 4 /* return configuration information */ int (*nm_config)(struct netmap_adapter *, struct nm_config_info *info); int (*nm_krings_create)(struct netmap_adapter *); void (*nm_krings_delete)(struct netmap_adapter *); /* * nm_bdg_attach() initializes the na_vp field to point * to an adapter that can be attached to a VALE switch. If the * current adapter is already a VALE port, na_vp is simply a cast; * otherwise, na_vp points to a netmap_bwrap_adapter.
* If applicable, this callback also initializes na_hostvp, * which can be used to connect the adapter host rings to the * switch. * Called with NMG_LOCK held. * * nm_bdg_ctl() is called on the actual attach/detach * to/from the switch, to perform adapter-specific * initializations * Called with NMG_LOCK held. */ int (*nm_bdg_attach)(const char *bdg_name, struct netmap_adapter *, struct nm_bridge *); int (*nm_bdg_ctl)(struct nmreq_header *, struct netmap_adapter *); /* adapter used to attach this adapter to a VALE switch (if any) */ struct netmap_vp_adapter *na_vp; /* adapter used to attach the host rings of this adapter * to a VALE switch (if any) */ struct netmap_vp_adapter *na_hostvp; /* standard refcount to control the lifetime of the adapter * (it should be equal to the lifetime of the corresponding ifp) */ int na_refcount; /* memory allocator (opaque) * We also cache a pointer to the lut_entry for translating * buffer addresses, the total number of buffers and the buffer size. */ struct netmap_mem_d *nm_mem; struct netmap_mem_d *nm_mem_prev; struct netmap_lut na_lut; /* additional information attached to this adapter * by other netmap subsystems. Currently used by * bwrap, LINUX/v1000 and ptnetmap */ void *na_private; /* array of pipes that have this adapter as a parent */ struct netmap_pipe_adapter **na_pipes; int na_next_pipe; /* next free slot in the array */ int na_max_pipes; /* size of the array */ /* Offset of ethernet header for each packet. */ u_int virt_hdr_len; /* Max number of bytes that the NIC can store in the buffer * referenced by each RX descriptor. This translates to the maximum * bytes that a single netmap slot can reference. Larger packets * require NS_MOREFRAG support. */ unsigned rx_buf_maxsize; char name[NETMAP_REQ_IFNAMSIZ]; /* used at least by pipes */ #ifdef WITH_MONITOR unsigned long monitor_id; /* debugging */ #endif }; static __inline u_int nma_get_ndesc(struct netmap_adapter *na, enum txrx t) { return (t == NR_TX ? na->num_tx_desc : na->num_rx_desc); } static __inline void nma_set_ndesc(struct netmap_adapter *na, enum txrx t, u_int v) { if (t == NR_TX) na->num_tx_desc = v; else na->num_rx_desc = v; } static __inline u_int nma_get_nrings(struct netmap_adapter *na, enum txrx t) { return (t == NR_TX ? na->num_tx_rings : na->num_rx_rings); } static __inline u_int nma_get_host_nrings(struct netmap_adapter *na, enum txrx t) { return (t == NR_TX ? na->num_host_tx_rings : na->num_host_rx_rings); } static __inline void nma_set_nrings(struct netmap_adapter *na, enum txrx t, u_int v) { if (t == NR_TX) na->num_tx_rings = v; else na->num_rx_rings = v; } static __inline void nma_set_host_nrings(struct netmap_adapter *na, enum txrx t, u_int v) { if (t == NR_TX) na->num_host_tx_rings = v; else na->num_host_rx_rings = v; } static __inline struct netmap_kring** NMR(struct netmap_adapter *na, enum txrx t) { return (t == NR_TX ? na->tx_rings : na->rx_rings); } int nma_intr_enable(struct netmap_adapter *na, int onoff); /* * If the NIC is owned by the kernel * (i.e., bridge), neither another bridge nor a user can use it; * if the NIC is owned by a user, only users can share it. * Evaluation must be done under NMG_LOCK().
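* For example, the hardware adapter behind a VALE switch port has * NAF_BUSY set, so NETMAP_OWNED_BY_KERN() is true and a direct user * registration on it is refused; conversely, active_fds > 0 marks * user ownership.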
*/ #define NETMAP_OWNED_BY_KERN(na) ((na)->na_flags & NAF_BUSY) #define NETMAP_OWNED_BY_ANY(na) \ (NETMAP_OWNED_BY_KERN(na) || ((na)->active_fds > 0)) /* * derived netmap adapters for various types of ports */ struct netmap_vp_adapter { /* VALE software port */ struct netmap_adapter up; /* * Bridge support: * * bdg_port is the port number used in the bridge; * na_bdg points to the bridge this NA is attached to. */ int bdg_port; struct nm_bridge *na_bdg; int retry; int autodelete; /* remove the ifp on last reference */ /* Maximum Frame Size, used in bdg_mismatch_datapath() */ u_int mfs; /* Last source MAC on this port */ uint64_t last_smac; }; struct netmap_hw_adapter { /* physical device */ struct netmap_adapter up; #ifdef linux struct net_device_ops nm_ndo; struct ethtool_ops nm_eto; #endif const struct ethtool_ops* save_ethtool; int (*nm_hw_register)(struct netmap_adapter *, int onoff); }; #ifdef WITH_GENERIC /* Mitigation support. */ struct nm_generic_mit { struct hrtimer mit_timer; int mit_pending; int mit_ring_idx; /* index of the ring being mitigated */ struct netmap_adapter *mit_na; /* backpointer */ }; struct netmap_generic_adapter { /* emulated device */ struct netmap_hw_adapter up; /* Pointer to a previously used netmap adapter. */ struct netmap_adapter *prev; /* Emulated netmap adapters support: * - save_if_input saves the if_input hook (FreeBSD); * - mit implements rx interrupt mitigation; */ void (*save_if_input)(struct ifnet *, struct mbuf *); struct nm_generic_mit *mit; #ifdef linux netdev_tx_t (*save_start_xmit)(struct mbuf *, struct ifnet *); #endif /* Is the adapter able to use multiple RX slots to scatter * each packet pushed up by the driver? */ int rxsg; /* Is the transmission path controlled by a netmap-aware * device queue (i.e. qdisc on linux)? */ int txqdisc; }; #endif /* WITH_GENERIC */ static __inline u_int netmap_real_rings(struct netmap_adapter *na, enum txrx t) { return nma_get_nrings(na, t) + !!(na->na_flags & NAF_HOST_RINGS) * nma_get_host_nrings(na, t); } /* account for fake rings */ static __inline u_int netmap_all_rings(struct netmap_adapter *na, enum txrx t) { return max(nma_get_nrings(na, t) + 1, netmap_real_rings(na, t)); } int netmap_default_bdg_attach(const char *name, struct netmap_adapter *na, struct nm_bridge *); struct nm_bdg_polling_state; /* * Bridge wrapper for non-VALE ports attached to a VALE switch. * * The real device must already have its own netmap adapter (hwna). * The bridge wrapper and the hwna adapter share the same set of * netmap rings and buffers, but they have two separate sets of * krings descriptors, with tx/rx meanings swapped: * * netmap * bwrap krings rings krings hwna * +------+ +------+ +-----+ +------+ +------+ * |tx_rings->| |\ /| |----| |<-tx_rings| * | | +------+ \ / +-----+ +------+ | | * | | X | | * | | / \ | | * | | +------+/ \+-----+ +------+ | | * |rx_rings->| | | |----| |<-rx_rings| * | | +------+ +-----+ +------+ | | * +------+ +------+ * * - packets coming from the bridge go to the bwrap rx rings, * which are also the hwna tx rings. The bwrap notify callback * will then complete the hwna tx (see netmap_bwrap_notify). * * - packets coming from the outside go to the hwna rx rings, * which are also the bwrap tx rings. The (overwritten) hwna * notify method will then complete the bridge tx * (see netmap_bwrap_intr_notify). * * The bridge wrapper may optionally connect the hwna 'host' rings * to the bridge.
This is done by using a second port in the * bridge and connecting it to the 'host' netmap_vp_adapter * contained in the netmap_bwrap_adapter. The bwrap host adapter * cross-links the hwna host rings in the same way as shown above. * * - packets coming from the bridge and directed to the host stack * are handled by the bwrap host notify callback * (see netmap_bwrap_host_notify) * * - packets coming from the host stack are still handled by the * overwritten hwna notify callback (netmap_bwrap_intr_notify), * but are diverted to the host adapter depending on the ring number. * */ struct netmap_bwrap_adapter { struct netmap_vp_adapter up; struct netmap_vp_adapter host; /* for host rings */ struct netmap_adapter *hwna; /* the underlying device */ /* * When we attach a physical interface to the bridge, we * allow the controlling process to terminate, so we need * a place to store the netmap_priv_d data structure. * This is only done when physical interfaces * are attached to a bridge. */ struct netmap_priv_d *na_kpriv; struct nm_bdg_polling_state *na_polling_state; /* we overwrite the hwna->na_vp pointer, so we save * here its original value, to be restored at detach */ struct netmap_vp_adapter *saved_na_vp; }; int nm_bdg_polling(struct nmreq_header *hdr); #ifdef WITH_VALE int netmap_vale_attach(struct nmreq_header *hdr, void *auth_token); int netmap_vale_detach(struct nmreq_header *hdr, void *auth_token); int netmap_vale_list(struct nmreq_header *hdr); int netmap_vi_create(struct nmreq_header *hdr, int); int nm_vi_create(struct nmreq_header *); int nm_vi_destroy(const char *name); #else /* !WITH_VALE */ #define netmap_vi_create(hdr, a) (EOPNOTSUPP) #endif /* WITH_VALE */ #ifdef WITH_PIPES #define NM_MAXPIPES 64 /* max number of pipes per adapter */ struct netmap_pipe_adapter { /* pipe identifier is up.name */ struct netmap_adapter up; #define NM_PIPE_ROLE_MASTER 0x1 #define NM_PIPE_ROLE_SLAVE 0x2 int role; /* either NM_PIPE_ROLE_MASTER or NM_PIPE_ROLE_SLAVE */ struct netmap_adapter *parent; /* adapter that owns the memory */ struct netmap_pipe_adapter *peer; /* the other end of the pipe */ int peer_ref; /* 1 iff we are holding a ref to the peer */ struct ifnet *parent_ifp; /* maybe null */ u_int parent_slot; /* index in the parent pipe array */ }; #endif /* WITH_PIPES */ #ifdef WITH_NMNULL struct netmap_null_adapter { struct netmap_adapter up; }; #endif /* WITH_NMNULL */ /* return slots reserved to rx clients; used in drivers */ static inline uint32_t nm_kr_rxspace(struct netmap_kring *k) { int space = k->nr_hwtail - k->nr_hwcur; if (space < 0) space += k->nkr_num_slots; nm_prdis("preserving %d rx slots %d -> %d", space, k->nr_hwcur, k->nr_hwtail); return space; } /* return slots reserved to tx clients */ #define nm_kr_txspace(_k) nm_kr_rxspace(_k) /* True if no space in the tx ring, only valid after txsync_prologue */ static inline int nm_kr_txempty(struct netmap_kring *kring) { return kring->rhead == kring->nr_hwtail; } /* True if no more completed slots in the rx ring, only valid after * rxsync_prologue */ #define nm_kr_rxempty(_k) nm_kr_txempty(_k) /* True if the application needs to wait for more space on the ring * (more received packets or more free tx slots). * Only valid after *xsync_prologue. */ static inline int nm_kr_wouldblock(struct netmap_kring *kring) { return kring->rcur == kring->nr_hwtail; } /* * protect against multiple threads using the same ring.
* also check that the ring has not been stopped or locked */ #define NM_KR_BUSY 1 /* some other thread is syncing the ring */ #define NM_KR_STOPPED 2 /* unbounded stop (ifconfig down or driver unload) */ #define NM_KR_LOCKED 3 /* bounded, brief stop for mutual exclusion */ /* release the previously acquired right to use the *sync() methods of the ring */ static __inline void nm_kr_put(struct netmap_kring *kr) { NM_ATOMIC_CLEAR(&kr->nr_busy); } /* true if the ifp that backed the adapter has disappeared (e.g., the * driver has been unloaded) */ static inline int nm_iszombie(struct netmap_adapter *na); /* try to obtain exclusive right to issue the *sync() operations on the ring. * The right is obtained and must be later relinquished via nm_kr_put() if and * only if nm_kr_tryget() returns 0. * If can_sleep is 1 there are only two other possible outcomes: * - the function returns NM_KR_BUSY * - the function returns NM_KR_STOPPED and sets the POLLERR bit in *perr * (if non-null) * In both cases the caller will typically skip the ring, possibly collecting * errors along the way. * If the calling context does not allow sleeping, the caller must pass 0 in can_sleep. * In the latter case, the function may also return NM_KR_LOCKED and leave *perr * untouched: ideally, the caller should try again at a later time. */ static __inline int nm_kr_tryget(struct netmap_kring *kr, int can_sleep, int *perr) { int busy = 1, stopped; /* check a first time without taking the lock * to avoid starvation for nm_kr_get() */ retry: stopped = kr->nkr_stopped; if (unlikely(stopped)) { goto stop; } busy = NM_ATOMIC_TEST_AND_SET(&kr->nr_busy); /* we should not return NM_KR_BUSY if the ring was * actually stopped, so check another time after * the barrier provided by the atomic operation */ stopped = kr->nkr_stopped; if (unlikely(stopped)) { goto stop; } if (unlikely(nm_iszombie(kr->na))) { stopped = NM_KR_STOPPED; goto stop; } return unlikely(busy) ? NM_KR_BUSY : 0; stop: if (!busy) nm_kr_put(kr); if (stopped == NM_KR_STOPPED) { /* if POLLERR is defined we want to use it to simplify netmap_poll(). * Otherwise, any non-zero value will do. */ #ifdef POLLERR #define NM_POLLERR POLLERR #else #define NM_POLLERR 1 #endif /* POLLERR */ if (perr) *perr |= NM_POLLERR; #undef NM_POLLERR } else if (can_sleep) { tsleep(kr, 0, "NM_KR_TRYGET", 4); goto retry; } return stopped; } /* put the ring in the 'stopped' state and wait for the current user (if any) to * notice. stopped must be either NM_KR_STOPPED or NM_KR_LOCKED */ static __inline void nm_kr_stop(struct netmap_kring *kr, int stopped) { kr->nkr_stopped = stopped; while (NM_ATOMIC_TEST_AND_SET(&kr->nr_busy)) tsleep(kr, 0, "NM_KR_GET", 4); } /* restart a ring after a stop */ static __inline void nm_kr_start(struct netmap_kring *kr) { kr->nkr_stopped = 0; nm_kr_put(kr); } /* * The following functions are used by individual drivers to * support netmap operation. * * netmap_attach() initializes a struct netmap_adapter, allocating the * struct netmap_ring's and the struct selinfo. * * netmap_detach() frees the memory allocated by netmap_attach(). * * netmap_transmit() replaces the if_transmit routine of the interface, * and is used to intercept packets coming from the stack. * * netmap_load_map/netmap_reload_map are helper routines to set/reset * the dmamap for a packet buffer * * netmap_reset() is a helper routine to be called in the hw driver * when reinitializing a ring. 
It should not be called by * virtual ports (vale, pipes, monitor) */ int netmap_attach(struct netmap_adapter *); int netmap_attach_ext(struct netmap_adapter *, size_t size, int override_reg); void netmap_detach(struct ifnet *); int netmap_transmit(struct ifnet *, struct mbuf *); struct netmap_slot *netmap_reset(struct netmap_adapter *na, enum txrx tx, u_int n, u_int new_cur); int netmap_ring_reinit(struct netmap_kring *); int netmap_rings_config_get(struct netmap_adapter *, struct nm_config_info *); /* Return codes for netmap_*x_irq. */ enum { /* Driver should do normal interrupt processing, e.g. because * the interface is not in netmap mode. */ NM_IRQ_PASS = 0, /* Port is in netmap mode, and the interrupt work has been * completed. The driver does not have to notify netmap * again before the next interrupt. */ NM_IRQ_COMPLETED = -1, /* Port is in netmap mode, but the interrupt work has not been * completed. The driver has to make sure netmap will be * notified again soon, even if no more interrupts come (e.g. * on Linux the driver should not call napi_complete()). */ NM_IRQ_RESCHED = -2, }; /* default functions to handle rx/tx interrupts */ int netmap_rx_irq(struct ifnet *, u_int, u_int *); #define netmap_tx_irq(_n, _q) netmap_rx_irq(_n, _q, NULL) int netmap_common_irq(struct netmap_adapter *, u_int, u_int *work_done); #ifdef WITH_VALE /* functions used by external modules to interface with VALE */ #define netmap_vp_to_ifp(_vp) ((_vp)->up.ifp) #define netmap_ifp_to_vp(_ifp) (NA(_ifp)->na_vp) #define netmap_ifp_to_host_vp(_ifp) (NA(_ifp)->na_hostvp) #define netmap_bdg_idx(_vp) ((_vp)->bdg_port) const char *netmap_bdg_name(struct netmap_vp_adapter *); #else /* !WITH_VALE */ #define netmap_vp_to_ifp(_vp) NULL #define netmap_ifp_to_vp(_ifp) NULL #define netmap_ifp_to_host_vp(_ifp) NULL #define netmap_bdg_idx(_vp) -1 #endif /* WITH_VALE */ static inline int nm_netmap_on(struct netmap_adapter *na) { return na && na->na_flags & NAF_NETMAP_ON; } static inline int nm_native_on(struct netmap_adapter *na) { return nm_netmap_on(na) && (na->na_flags & NAF_NATIVE); } static inline int nm_iszombie(struct netmap_adapter *na) { return na == NULL || (na->na_flags & NAF_ZOMBIE); } static inline void nm_update_hostrings_mode(struct netmap_adapter *na) { /* Process nr_mode and nr_pending_mode for host rings. */ na->tx_rings[na->num_tx_rings]->nr_mode = na->tx_rings[na->num_tx_rings]->nr_pending_mode; na->rx_rings[na->num_rx_rings]->nr_mode = na->rx_rings[na->num_rx_rings]->nr_pending_mode; } void nm_set_native_flags(struct netmap_adapter *); void nm_clear_native_flags(struct netmap_adapter *); void netmap_krings_mode_commit(struct netmap_adapter *na, int onoff); /* * nm_*sync_prologue() functions are used in ioctl/poll and ptnetmap * kthreads. * We need netmap_ring* parameter, because in ptnetmap it is decoupled * from host kring. * The user-space ring pointers (head/cur/tail) are shared through * CSB between host and guest. */ /* * validates parameters in the ring/kring, returns a value for head * If any error, returns ring_size to force a reinit. */ uint32_t nm_txsync_prologue(struct netmap_kring *, struct netmap_ring *); /* * validates parameters in the ring/kring, returns a value for head * If any error, returns ring_size lim to force a reinit. 
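* Both prologues are used the same way; a minimal sketch of the * calling pattern in the sync paths (not the verbatim code): * * if (nm_rxsync_prologue(kring, ring) >= kring->nkr_num_slots) * return netmap_ring_reinit(kring);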
*/ uint32_t nm_rxsync_prologue(struct netmap_kring *, struct netmap_ring *); /* check/fix address and len in tx rings */ #if 1 /* debug version */ #define NM_CHECK_ADDR_LEN(_na, _a, _l) do { \ if (_a == NETMAP_BUF_BASE(_na) || _l > NETMAP_BUF_SIZE(_na)) { \ nm_prlim(5, "bad addr/len ring %d slot %d idx %d len %d", \ kring->ring_id, nm_i, slot->buf_idx, len); \ if (_l > NETMAP_BUF_SIZE(_na)) \ _l = NETMAP_BUF_SIZE(_na); \ } } while (0) #else /* no debug version */ #define NM_CHECK_ADDR_LEN(_na, _a, _l) do { \ if (_l > NETMAP_BUF_SIZE(_na)) \ _l = NETMAP_BUF_SIZE(_na); \ } while (0) #endif /*---------------------------------------------------------------*/ /* * Support routines used by netmap subsystems * (native drivers, VALE, generic, pipes, monitors, ...) */ /* common routine for all functions that create a netmap adapter. It performs * two main tasks: * - if the na points to an ifp, mark the ifp as netmap capable * using na as its native adapter; * - provide defaults for the setup callbacks and the memory allocator */ int netmap_attach_common(struct netmap_adapter *); /* fill priv->np_[tr]xq{first,last} using the ringid and flags information * coming from a struct nmreq_register */ int netmap_interp_ringid(struct netmap_priv_d *priv, uint32_t nr_mode, uint16_t nr_ringid, uint64_t nr_flags); /* update the ring parameters (number and size of tx and rx rings). * It calls the nm_config callback, if available. */ int netmap_update_config(struct netmap_adapter *na); /* create and initialize the common fields of the krings array. * using the information that must be already available in the na. * tailroom can be used to request the allocation of additional * tailroom bytes after the krings array. This is used by * netmap_vp_adapter's (i.e., VALE ports) to make room for * leasing-related data structures */ int netmap_krings_create(struct netmap_adapter *na, u_int tailroom); /* deletes the kring array of the adapter. The array must have * been created using netmap_krings_create */ void netmap_krings_delete(struct netmap_adapter *na); int netmap_hw_krings_create(struct netmap_adapter *na); void netmap_hw_krings_delete(struct netmap_adapter *na); /* set the stopped/enabled status of ring * When stopping, they also wait for all current activity on the ring to * terminate. The status change is then notified using the na nm_notify * callback. */ void netmap_set_ring(struct netmap_adapter *, u_int ring_id, enum txrx, int stopped); /* set the stopped/enabled status of all rings of the adapter. 
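* A hardware driver typically brackets a device reset with the * convenience wrappers declared below; a minimal sketch, where * mydrv_reinit() stands for a hypothetical driver-specific routine: * * netmap_disable_all_rings(ifp); * mydrv_reinit(sc); * netmap_enable_all_rings(ifp);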
*/ void netmap_set_all_rings(struct netmap_adapter *, int stopped); /* convenience wrappers for netmap_set_all_rings */ void netmap_disable_all_rings(struct ifnet *); void netmap_enable_all_rings(struct ifnet *); int netmap_buf_size_validate(const struct netmap_adapter *na, unsigned mtu); int netmap_do_regif(struct netmap_priv_d *priv, struct netmap_adapter *na, uint32_t nr_mode, uint16_t nr_ringid, uint64_t nr_flags); void netmap_do_unregif(struct netmap_priv_d *priv); u_int nm_bound_var(u_int *v, u_int dflt, u_int lo, u_int hi, const char *msg); int netmap_get_na(struct nmreq_header *hdr, struct netmap_adapter **na, struct ifnet **ifp, struct netmap_mem_d *nmd, int create); void netmap_unget_na(struct netmap_adapter *na, struct ifnet *ifp); int netmap_get_hw_na(struct ifnet *ifp, struct netmap_mem_d *nmd, struct netmap_adapter **na); #ifdef WITH_VALE uint32_t netmap_vale_learning(struct nm_bdg_fwd *ft, uint8_t *dst_ring, struct netmap_vp_adapter *, void *private_data); /* these are redefined in case of no VALE support */ int netmap_get_vale_na(struct nmreq_header *hdr, struct netmap_adapter **na, struct netmap_mem_d *nmd, int create); void *netmap_vale_create(const char *bdg_name, int *return_status); int netmap_vale_destroy(const char *bdg_name, void *auth_token); #else /* !WITH_VALE */ #define netmap_bdg_learning(_1, _2, _3, _4) 0 #define netmap_get_vale_na(_1, _2, _3, _4) 0 #define netmap_bdg_create(_1, _2) NULL #define netmap_bdg_destroy(_1, _2) 0 #endif /* !WITH_VALE */ #ifdef WITH_PIPES /* max number of pipes per device */ #define NM_MAXPIPES 64 /* XXX this should probably be a sysctl */ void netmap_pipe_dealloc(struct netmap_adapter *); int netmap_get_pipe_na(struct nmreq_header *hdr, struct netmap_adapter **na, struct netmap_mem_d *nmd, int create); #else /* !WITH_PIPES */ #define NM_MAXPIPES 0 #define netmap_pipe_alloc(_1, _2) 0 #define netmap_pipe_dealloc(_1) #define netmap_get_pipe_na(hdr, _2, _3, _4) \ ((strchr(hdr->nr_name, '{') != NULL || strchr(hdr->nr_name, '}') != NULL) ? EOPNOTSUPP : 0) #endif #ifdef WITH_MONITOR int netmap_get_monitor_na(struct nmreq_header *hdr, struct netmap_adapter **na, struct netmap_mem_d *nmd, int create); void netmap_monitor_stop(struct netmap_adapter *na); #else #define netmap_get_monitor_na(hdr, _2, _3, _4) \ (((struct nmreq_register *)(uintptr_t)hdr->nr_body)->nr_flags & (NR_MONITOR_TX | NR_MONITOR_RX) ? EOPNOTSUPP : 0) #endif #ifdef WITH_NMNULL int netmap_get_null_na(struct nmreq_header *hdr, struct netmap_adapter **na, struct netmap_mem_d *nmd, int create); #else /* !WITH_NMNULL */ #define netmap_get_null_na(hdr, _2, _3, _4) \ (((struct nmreq_register *)(uintptr_t)hdr->nr_body)->nr_flags & (NR_MONITOR_TX | NR_MONITOR_RX) ? 
EOPNOTSUPP : 0) #endif /* WITH_NMNULL */ #ifdef CONFIG_NET_NS struct net *netmap_bns_get(void); void netmap_bns_put(struct net *); void netmap_bns_getbridges(struct nm_bridge **, u_int *); #else extern struct nm_bridge *nm_bridges; #define netmap_bns_get() #define netmap_bns_put(_1) #define netmap_bns_getbridges(b, n) \ do { *b = nm_bridges; *n = NM_BRIDGES; } while (0) #endif /* Various prototypes */ int netmap_poll(struct netmap_priv_d *, int events, NM_SELRECORD_T *td); int netmap_init(void); void netmap_fini(void); int netmap_get_memory(struct netmap_priv_d* p); void netmap_dtor(void *data); int netmap_ioctl(struct netmap_priv_d *priv, u_long cmd, caddr_t data, struct thread *, int nr_body_is_user); int netmap_ioctl_legacy(struct netmap_priv_d *priv, u_long cmd, caddr_t data, struct thread *td); size_t nmreq_size_by_type(uint16_t nr_reqtype); /* netmap_adapter creation/destruction */ // #define NM_DEBUG_PUTGET 1 #ifdef NM_DEBUG_PUTGET #define NM_DBG(f) __##f void __netmap_adapter_get(struct netmap_adapter *na); #define netmap_adapter_get(na) \ do { \ struct netmap_adapter *__na = na; \ nm_prinf("getting %p:%s (%d)", __na, (__na)->name, (__na)->na_refcount); \ __netmap_adapter_get(__na); \ } while (0) int __netmap_adapter_put(struct netmap_adapter *na); #define netmap_adapter_put(na) \ ({ \ struct netmap_adapter *__na = na; \ nm_prinf("putting %p:%s (%d)", __na, (__na)->name, (__na)->na_refcount); \ __netmap_adapter_put(__na); \ }) #else /* !NM_DEBUG_PUTGET */ #define NM_DBG(f) f void netmap_adapter_get(struct netmap_adapter *na); int netmap_adapter_put(struct netmap_adapter *na); #endif /* !NM_DEBUG_PUTGET */ /* * module variables */ #define NETMAP_BUF_BASE(_na) ((_na)->na_lut.lut[0].vaddr) #define NETMAP_BUF_SIZE(_na) ((_na)->na_lut.objsize) extern int netmap_no_pendintr; extern int netmap_mitigate; extern int netmap_verbose; #ifdef CONFIG_NETMAP_DEBUG extern int netmap_debug; /* for debugging */ #else /* !CONFIG_NETMAP_DEBUG */ #define netmap_debug (0) #endif /* !CONFIG_NETMAP_DEBUG */ enum { /* debug flags */ NM_DEBUG_ON = 1, /* generic debug messages */ NM_DEBUG_HOST = 0x2, /* debug host stack */ NM_DEBUG_RXSYNC = 0x10, /* debug on rxsync/txsync */ NM_DEBUG_TXSYNC = 0x20, NM_DEBUG_RXINTR = 0x100, /* debug on rx/tx intr (driver) */ NM_DEBUG_TXINTR = 0x200, NM_DEBUG_NIC_RXSYNC = 0x1000, /* debug on rx/tx intr (driver) */ NM_DEBUG_NIC_TXSYNC = 0x2000, NM_DEBUG_MEM = 0x4000, /* verbose memory allocations/deallocations */ NM_DEBUG_VALE = 0x8000, /* debug messages from memory allocators */ NM_DEBUG_BDG = NM_DEBUG_VALE, }; extern int netmap_txsync_retry; extern int netmap_flags; extern int netmap_generic_hwcsum; extern int netmap_generic_mit; extern int netmap_generic_ringsize; extern int netmap_generic_rings; #ifdef linux extern int netmap_generic_txqdisc; #endif /* * NA returns a pointer to the struct netmap adapter from the ifp. * WNA is os-specific and must be defined in glue code. */ #define NA(_ifp) ((struct netmap_adapter *)WNA(_ifp)) /* * we provide a default implementation of NM_ATTACH_NA/NM_DETACH_NA * based on the WNA field. * Glue code may override this by defining its own NM_ATTACH_NA */ #ifndef NM_ATTACH_NA /* * On old versions of FreeBSD, NA(ifp) is a pspare. On linux we * overload another pointer in the netdev. * * We check if NA(ifp) is set and its first element has a related * magic value. The capenable is within the struct netmap_adapter.
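* Concretely (see the macros below): NM_ATTACH_NA() stores * (pointer ^ NETMAP_MAGIC) in na->magic, and NM_NA_VALID() recomputes * pointer ^ magic and compares it with NETMAP_MAGIC, so a stale or * foreign pointer is very unlikely to validate.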
*/ #define NETMAP_MAGIC 0x52697a7a #define NM_NA_VALID(ifp) (NA(ifp) && \ ((uint32_t)(uintptr_t)NA(ifp) ^ NA(ifp)->magic) == NETMAP_MAGIC ) #define NM_ATTACH_NA(ifp, na) do { \ WNA(ifp) = na; \ if (NA(ifp)) \ NA(ifp)->magic = \ ((uint32_t)(uintptr_t)NA(ifp)) ^ NETMAP_MAGIC; \ } while(0) #define NM_RESTORE_NA(ifp, na) WNA(ifp) = na; #define NM_DETACH_NA(ifp) do { WNA(ifp) = NULL; } while (0) #define NM_NA_CLASH(ifp) (NA(ifp) && !NM_NA_VALID(ifp)) #endif /* !NM_ATTACH_NA */ #define NM_IS_NATIVE(ifp) (NM_NA_VALID(ifp) && NA(ifp)->nm_dtor == netmap_hw_dtor) #if defined(__FreeBSD__) /* Assigns the device IOMMU domain to an allocator. * Returns -ENOMEM in case the domain is different */ #define nm_iommu_group_id(dev) (0) /* Callback invoked by the dma machinery after a successful dmamap_load */ static void netmap_dmamap_cb(__unused void *arg, __unused bus_dma_segment_t * segs, __unused int nseg, __unused int error) { } /* bus_dmamap_load wrapper: call aforementioned function if map != NULL. * XXX can we do it without a callback ? */ static inline int netmap_load_map(struct netmap_adapter *na, bus_dma_tag_t tag, bus_dmamap_t map, void *buf) { if (map) bus_dmamap_load(tag, map, buf, NETMAP_BUF_SIZE(na), netmap_dmamap_cb, NULL, BUS_DMA_NOWAIT); return 0; } static inline void netmap_unload_map(struct netmap_adapter *na, bus_dma_tag_t tag, bus_dmamap_t map) { if (map) bus_dmamap_unload(tag, map); } #define netmap_sync_map(na, tag, map, sz, t) /* update the map when a buffer changes. */ static inline void netmap_reload_map(struct netmap_adapter *na, bus_dma_tag_t tag, bus_dmamap_t map, void *buf) { if (map) { bus_dmamap_unload(tag, map); bus_dmamap_load(tag, map, buf, NETMAP_BUF_SIZE(na), netmap_dmamap_cb, NULL, BUS_DMA_NOWAIT); } } #elif defined(_WIN32) #else /* linux */ int nm_iommu_group_id(bus_dma_tag_t dev); #include <linux/dma-mapping.h> /* * on linux we need * dma_map_single(&pdev->dev, virt_addr, len, direction) * dma_unmap_single(&adapter->pdev->dev, phys_addr, len, direction) */ #if 0 struct e1000_buffer *buffer_info = &tx_ring->buffer_info[l]; /* set time_stamp *before* dma to help avoid a possible race */ buffer_info->time_stamp = jiffies; buffer_info->mapped_as_page = false; buffer_info->length = len; //buffer_info->next_to_watch = l; /* reload dma map */ dma_unmap_single(&adapter->pdev->dev, buffer_info->dma, NETMAP_BUF_SIZE, DMA_TO_DEVICE); buffer_info->dma = dma_map_single(&adapter->pdev->dev, addr, NETMAP_BUF_SIZE, DMA_TO_DEVICE); if (dma_mapping_error(&adapter->pdev->dev, buffer_info->dma)) { nm_prerr("dma mapping error"); /* goto dma_error; See e1000_put_txbuf() */ /* XXX reset */ } tx_desc->buffer_addr = htole64(buffer_info->dma); //XXX #endif static inline int netmap_load_map(struct netmap_adapter *na, bus_dma_tag_t tag, bus_dmamap_t map, void *buf, u_int size) { if (map) { *map = dma_map_single(na->pdev, buf, size, DMA_BIDIRECTIONAL); if (dma_mapping_error(na->pdev, *map)) { *map = 0; return ENOMEM; } } return 0; } static inline void netmap_unload_map(struct netmap_adapter *na, bus_dma_tag_t tag, bus_dmamap_t map, u_int sz) { if (*map) { dma_unmap_single(na->pdev, *map, sz, DMA_BIDIRECTIONAL); } } #ifdef NETMAP_LINUX_HAVE_DMASYNC static inline void netmap_sync_map_cpu(struct netmap_adapter *na, bus_dma_tag_t tag, bus_dmamap_t map, u_int sz, enum txrx t) { if (*map) { dma_sync_single_for_cpu(na->pdev, *map, sz, (t == NR_TX ?
DMA_TO_DEVICE : DMA_FROM_DEVICE)); } } static inline void netmap_sync_map_dev(struct netmap_adapter *na, bus_dma_tag_t tag, bus_dmamap_t map, u_int sz, enum txrx t) { if (*map) { dma_sync_single_for_device(na->pdev, *map, sz, (t == NR_TX ? DMA_TO_DEVICE : DMA_FROM_DEVICE)); } } static inline void netmap_reload_map(struct netmap_adapter *na, bus_dma_tag_t tag, bus_dmamap_t map, void *buf) { u_int sz = NETMAP_BUF_SIZE(na); if (*map) { dma_unmap_single(na->pdev, *map, sz, DMA_BIDIRECTIONAL); } *map = dma_map_single(na->pdev, buf, sz, DMA_BIDIRECTIONAL); } #else /* !NETMAP_LINUX_HAVE_DMASYNC */ #define netmap_sync_map_cpu(na, tag, map, sz, t) #define netmap_sync_map_dev(na, tag, map, sz, t) #endif /* NETMAP_LINUX_HAVE_DMASYNC */ #endif /* linux */ /* * functions to map NIC to KRING indexes (n2k) and vice versa (k2n) */ static inline int netmap_idx_n2k(struct netmap_kring *kr, int idx) { int n = kr->nkr_num_slots; if (likely(kr->nkr_hwofs == 0)) { return idx; } idx += kr->nkr_hwofs; if (idx < 0) return idx + n; else if (idx < n) return idx; else return idx - n; } static inline int netmap_idx_k2n(struct netmap_kring *kr, int idx) { int n = kr->nkr_num_slots; if (likely(kr->nkr_hwofs == 0)) { return idx; } idx -= kr->nkr_hwofs; if (idx < 0) return idx + n; else if (idx < n) return idx; else return idx - n; } /* Entries of the look-up table. */ #ifdef __FreeBSD__ struct lut_entry { void *vaddr; /* virtual address. */ vm_paddr_t paddr; /* physical address. */ }; #else /* linux & _WIN32 */ /* dma-mapping in linux can assign a buffer a different address * depending on the device, so we need to have a separate * physical-address look-up table for each na. * We can still share the vaddrs, though, therefore we split * the lut_entry structure. */ struct lut_entry { void *vaddr; /* virtual address. */ }; struct plut_entry { vm_paddr_t paddr; /* physical address. */ }; #endif /* linux & _WIN32 */ struct netmap_obj_pool; /* * NMB returns the virtual address of a buffer (buffer 0 on bad index) * PNMB also fills the physical address */ static inline void * NMB(struct netmap_adapter *na, struct netmap_slot *slot) { struct lut_entry *lut = na->na_lut.lut; uint32_t i = slot->buf_idx; return (unlikely(i >= na->na_lut.objtotal)) ? lut[0].vaddr : lut[i].vaddr; } static inline void * PNMB(struct netmap_adapter *na, struct netmap_slot *slot, uint64_t *pp) { uint32_t i = slot->buf_idx; struct lut_entry *lut = na->na_lut.lut; struct plut_entry *plut = na->na_lut.plut; void *ret = (i >= na->na_lut.objtotal) ? lut[0].vaddr : lut[i].vaddr; #ifdef _WIN32 *pp = (i >= na->na_lut.objtotal) ? (uint64_t)plut[0].paddr.QuadPart : (uint64_t)plut[i].paddr.QuadPart; #else *pp = (i >= na->na_lut.objtotal) ? plut[0].paddr : plut[i].paddr; #endif return ret; } /* * Structure associated to each netmap file descriptor. * It is created on open and left unbound (np_nifp == NULL). * A successful NIOCREGIF will set np_nifp and the first few fields; * this is protected by a global lock (NMG_LOCK) due to low contention. * * np_refs counts the number of references to the structure: one for the fd, * plus (on FreeBSD) one for each active mmap which we track ourselves * (linux automatically tracks them, but FreeBSD does not). * np_refs is protected by NMG_LOCK. * * Read access to the structure is lock free, because np_nifp once set * can only go to 0 when nobody is using the entry anymore. Readers * must check that np_nifp != NULL before using the other fields. */ struct netmap_priv_d { struct netmap_if * volatile np_nifp; /* netmap if descriptor.
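 * An illustrative reader, following the rule stated above (a sketch,
 * not a function from this file):
 *
 *	struct netmap_if *nifp = priv->np_nifp;
 *	if (nifp == NULL)
 *		return (ENXIO);
 *	(only now is it safe to read the remaining np_* fields)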
*/ struct netmap_adapter *np_na; struct ifnet *np_ifp; uint32_t np_flags; /* from the ioctl */ u_int np_qfirst[NR_TXRX], np_qlast[NR_TXRX]; /* range of tx/rx rings to scan */ uint16_t np_txpoll; uint16_t np_kloop_state; /* use with NMG_LOCK held */ #define NM_SYNC_KLOOP_RUNNING (1 << 0) #define NM_SYNC_KLOOP_STOPPING (1 << 1) int np_sync_flags; /* to be passed to nm_sync */ int np_refs; /* use with NMG_LOCK held */ /* pointers to the selinfo to be used for selrecord. * Either the local or the global one depending on the * number of rings. */ NM_SELINFO_T *np_si[NR_TXRX]; /* In the optional CSB mode, the user must specify the start address * of two arrays of Communication Status Block (CSB) entries, for the * two directions (kernel read application write, and kernel write * application read). * The number of entries must agree with the number of rings bound to * the netmap file descriptor. The entries corresponding to the TX * rings are laid out before the ones corresponding to the RX rings. * * Array of CSB entries for application --> kernel communication * (N entries). */ struct nm_csb_atok *np_csb_atok_base; /* Array of CSB entries for kernel --> application communication * (N entries). */ struct nm_csb_ktoa *np_csb_ktoa_base; #ifdef linux struct file *np_filp; /* used by sync kloop */ #endif /* linux */ }; struct netmap_priv_d *netmap_priv_new(void); void netmap_priv_delete(struct netmap_priv_d *); static inline int nm_kring_pending(struct netmap_priv_d *np) { struct netmap_adapter *na = np->np_na; enum txrx t; int i; for_rx_tx(t) { for (i = np->np_qfirst[t]; i < np->np_qlast[t]; i++) { struct netmap_kring *kring = NMR(na, t)[i]; if (kring->nr_mode != kring->nr_pending_mode) { return 1; } } } return 0; } /* call with NMG_LOCK held */ static __inline int nm_si_user(struct netmap_priv_d *priv, enum txrx t) { return (priv->np_na != NULL && (priv->np_qlast[t] - priv->np_qfirst[t] > 1)); } #ifdef WITH_PIPES int netmap_pipe_txsync(struct netmap_kring *txkring, int flags); int netmap_pipe_rxsync(struct netmap_kring *rxkring, int flags); int netmap_pipe_krings_create_both(struct netmap_adapter *na, struct netmap_adapter *ona); void netmap_pipe_krings_delete_both(struct netmap_adapter *na, struct netmap_adapter *ona); int netmap_pipe_reg_both(struct netmap_adapter *na, struct netmap_adapter *ona); #endif /* WITH_PIPES */ #ifdef WITH_MONITOR struct netmap_monitor_adapter { struct netmap_adapter up; struct netmap_priv_d priv; uint32_t flags; }; #endif /* WITH_MONITOR */ #ifdef WITH_GENERIC /* * generic netmap emulation for devices that do not have * native netmap support. */ int generic_netmap_attach(struct ifnet *ifp); int generic_rx_handler(struct ifnet *ifp, struct mbuf *m); int nm_os_catch_rx(struct netmap_generic_adapter *gna, int intercept); int nm_os_catch_tx(struct netmap_generic_adapter *gna, int intercept); int na_is_generic(struct netmap_adapter *na); /* * the generic transmit routine is passed a structure to optionally * build a queue of descriptors, in an OS-specific way. * The payload is at addr, if non-null, and the routine should send or queue * the packet, returning 0 if successful, 1 on failure. * * At the end, if head is non-null, there will be an additional call * to the function with addr = NULL; this should tell the OS-specific * routine to send the queue and free any resources. Failure is ignored.
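 *
 * A minimal illustrative call sequence (hypothetical OS glue, not
 * taken from this header):
 *
 *	struct nm_os_gen_arg a = { .ifp = ifp, .m = m,
 *		.addr = buf, .len = len, .ring_nr = r };
 *	nm_os_generic_xmit_frame(&a);	(send or queue one packet)
 *	if (a.head != NULL) {
 *		a.addr = NULL;		(flush: send queue, free resources)
 *		nm_os_generic_xmit_frame(&a);
 *	}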
*/ struct nm_os_gen_arg { struct ifnet *ifp; void *m; /* os-specific mbuf-like object */ void *head, *tail; /* tailq, if the OS-specific routine needs to build one */ void *addr; /* payload of current packet */ u_int len; /* packet length */ u_int ring_nr; /* ring index */ u_int qevent; /* in txqdisc mode, place an event on this mbuf */ }; int nm_os_generic_xmit_frame(struct nm_os_gen_arg *); int nm_os_generic_find_num_desc(struct ifnet *ifp, u_int *tx, u_int *rx); void nm_os_generic_find_num_queues(struct ifnet *ifp, u_int *txq, u_int *rxq); void nm_os_generic_set_features(struct netmap_generic_adapter *gna); static inline struct ifnet* netmap_generic_getifp(struct netmap_generic_adapter *gna) { if (gna->prev) return gna->prev->ifp; return gna->up.up.ifp; } void netmap_generic_irq(struct netmap_adapter *na, u_int q, u_int *work_done); //#define RATE_GENERIC /* Enables communication statistics for generic. */ #ifdef RATE_GENERIC void generic_rate(int txp, int txs, int txi, int rxp, int rxs, int rxi); #else #define generic_rate(txp, txs, txi, rxp, rxs, rxi) #endif /* * netmap_mitigation API. This is used by the generic adapter * to reduce the number of interrupt requests/selwakeup * to clients on incoming packets. */ void nm_os_mitigation_init(struct nm_generic_mit *mit, int idx, struct netmap_adapter *na); void nm_os_mitigation_start(struct nm_generic_mit *mit); void nm_os_mitigation_restart(struct nm_generic_mit *mit); int nm_os_mitigation_active(struct nm_generic_mit *mit); void nm_os_mitigation_cleanup(struct nm_generic_mit *mit); #else /* !WITH_GENERIC */ #define generic_netmap_attach(ifp) (EOPNOTSUPP) #define na_is_generic(na) (0) #endif /* WITH_GENERIC */ /* Shared declarations for the VALE switch. */ /* * Each transmit queue accumulates a batch of packets into * a structure before forwarding. Packets to the same * destination are put in a list using ft_next as a link field. * ft_frags and ft_next are valid only on the first fragment. */ struct nm_bdg_fwd { /* forwarding entry for a bridge */ void *ft_buf; /* netmap or indirect buffer */ uint8_t ft_frags; /* how many fragments (only on 1st frag) */ uint16_t ft_offset; /* dst port (unused) */ uint16_t ft_flags; /* flags, e.g. indirect */ uint16_t ft_len; /* src fragment len */ uint16_t ft_next; /* next packet to same destination */ }; /* struct 'virtio_net_hdr' from linux. */ struct nm_vnet_hdr { #define VIRTIO_NET_HDR_F_NEEDS_CSUM 1 /* Use csum_start, csum_offset */ #define VIRTIO_NET_HDR_F_DATA_VALID 2 /* Csum is valid */ uint8_t flags; #define VIRTIO_NET_HDR_GSO_NONE 0 /* Not a GSO frame */ #define VIRTIO_NET_HDR_GSO_TCPV4 1 /* GSO frame, IPv4 TCP (TSO) */ #define VIRTIO_NET_HDR_GSO_UDP 3 /* GSO frame, IPv4 UDP (UFO) */ #define VIRTIO_NET_HDR_GSO_TCPV6 4 /* GSO frame, IPv6 TCP */ #define VIRTIO_NET_HDR_GSO_ECN 0x80 /* TCP has ECN set */ uint8_t gso_type; uint16_t hdr_len; uint16_t gso_size; uint16_t csum_start; uint16_t csum_offset; }; #define WORST_CASE_GSO_HEADER (14+40+60) /* IPv6 + TCP */ /* Private definitions for IPv4, IPv6, UDP and TCP headers. */ struct nm_iphdr { uint8_t version_ihl; uint8_t tos; uint16_t tot_len; uint16_t id; uint16_t frag_off; uint8_t ttl; uint8_t protocol; uint16_t check; uint32_t saddr; uint32_t daddr; /* The options start here.
*/ }; struct nm_tcphdr { uint16_t source; uint16_t dest; uint32_t seq; uint32_t ack_seq; uint8_t doff; /* Data offset + Reserved */ uint8_t flags; uint16_t window; uint16_t check; uint16_t urg_ptr; }; struct nm_udphdr { uint16_t source; uint16_t dest; uint16_t len; uint16_t check; }; struct nm_ipv6hdr { uint8_t priority_version; uint8_t flow_lbl[3]; uint16_t payload_len; uint8_t nexthdr; uint8_t hop_limit; uint8_t saddr[16]; uint8_t daddr[16]; }; /* Type used to store a checksum (in host byte order) that hasn't been * folded yet. */ #define rawsum_t uint32_t rawsum_t nm_os_csum_raw(uint8_t *data, size_t len, rawsum_t cur_sum); uint16_t nm_os_csum_ipv4(struct nm_iphdr *iph); void nm_os_csum_tcpudp_ipv4(struct nm_iphdr *iph, void *data, size_t datalen, uint16_t *check); void nm_os_csum_tcpudp_ipv6(struct nm_ipv6hdr *ip6h, void *data, size_t datalen, uint16_t *check); uint16_t nm_os_csum_fold(rawsum_t cur_sum); void bdg_mismatch_datapath(struct netmap_vp_adapter *na, struct netmap_vp_adapter *dst_na, const struct nm_bdg_fwd *ft_p, struct netmap_ring *dst_ring, u_int *j, u_int lim, u_int *howmany); /* persistent virtual port routines */ int nm_os_vi_persist(const char *, struct ifnet **); void nm_os_vi_detach(struct ifnet *); void nm_os_vi_init_index(void); /* * kernel thread routines */ struct nm_kctx; /* OS-specific kernel context - opaque */ typedef void (*nm_kctx_worker_fn_t)(void *data); /* kthread configuration */ struct nm_kctx_cfg { long type; /* kthread type/identifier */ nm_kctx_worker_fn_t worker_fn; /* worker function */ void *worker_private;/* worker parameter */ int attach_user; /* attach kthread to user process */ }; /* kthread configuration */ struct nm_kctx *nm_os_kctx_create(struct nm_kctx_cfg *cfg, void *opaque); int nm_os_kctx_worker_start(struct nm_kctx *); void nm_os_kctx_worker_stop(struct nm_kctx *); void nm_os_kctx_destroy(struct nm_kctx *); void nm_os_kctx_worker_setaff(struct nm_kctx *, int); u_int nm_os_ncpus(void); int netmap_sync_kloop(struct netmap_priv_d *priv, struct nmreq_header *hdr); int netmap_sync_kloop_stop(struct netmap_priv_d *priv); #ifdef WITH_PTNETMAP /* ptnetmap guest routines */ /* * ptnetmap_memdev routines used to talk with ptnetmap_memdev device driver */ struct ptnetmap_memdev; int nm_os_pt_memdev_iomap(struct ptnetmap_memdev *, vm_paddr_t *, void **, uint64_t *); void nm_os_pt_memdev_iounmap(struct ptnetmap_memdev *); uint32_t nm_os_pt_memdev_ioread(struct ptnetmap_memdev *, unsigned int); /* * netmap adapter for guest ptnetmap ports */ struct netmap_pt_guest_adapter { /* The netmap adapter to be used by netmap applications. * This field must be the first, to allow upcast. */ struct netmap_hw_adapter hwup; /* The netmap adapter to be used by the driver. */ struct netmap_hw_adapter dr; /* Reference counter to track users of backend netmap port: the * network stack and netmap clients. * Used to decide when we need (de)allocate krings/rings and * start (stop) ptnetmap kthreads. */ int backend_users; }; int netmap_pt_guest_attach(struct netmap_adapter *na, unsigned int nifp_offset, unsigned int memid); bool netmap_pt_guest_txsync(struct nm_csb_atok *atok, struct nm_csb_ktoa *ktoa, struct netmap_kring *kring, int flags); bool netmap_pt_guest_rxsync(struct nm_csb_atok *atok, struct nm_csb_ktoa *ktoa, struct netmap_kring *kring, int flags); int ptnet_nm_krings_create(struct netmap_adapter *na); void ptnet_nm_krings_delete(struct netmap_adapter *na); void ptnet_nm_dtor(struct netmap_adapter *na); /* Helper function wrapping nm_sync_kloop_appl_read(). 
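 * Illustrative use (a sketch, assuming a typical guest driver): the
 * txsync/rxsync path would call ptnet_sync_tail(ktoa, kring) after
 * notifying the host, so that the hwcur/hwtail values published by
 * the host kloop become visible in kring and ring->tail before
 * returning to the application.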
*/ static inline void ptnet_sync_tail(struct nm_csb_ktoa *ktoa, struct netmap_kring *kring) { struct netmap_ring *ring = kring->ring; /* Update hwcur and hwtail as known by the host. */ nm_sync_kloop_appl_read(ktoa, &kring->nr_hwtail, &kring->nr_hwcur); /* nm_sync_finalize */ ring->tail = kring->rtail = kring->nr_hwtail; } #endif /* WITH_PTNETMAP */ #ifdef __FreeBSD__ /* * FreeBSD mbuf allocator/deallocator in emulation mode: */ #if __FreeBSD_version < 1100000 /* * For older versions of FreeBSD: * * We allocate EXT_PACKET mbuf+clusters, but need to set M_NOFREE * so that the destructor, if invoked, will not free the packet. * In principle we should set the destructor only on demand, * but since there might be a race we better do it on allocation. * As a consequence, we also need to set the destructor or we * would leak buffers. */ /* mbuf destructor, also need to change the type to EXT_EXTREF, * add an M_NOFREE flag, and then clear the flag and * chain into uma_zfree(zone_pack, mf) * (or reinstall the buffer ?) */ #define SET_MBUF_DESTRUCTOR(m, fn) do { \ (m)->m_ext.ext_free = (void *)fn; \ (m)->m_ext.ext_type = EXT_EXTREF; \ } while (0) static int void_mbuf_dtor(struct mbuf *m, void *arg1, void *arg2) { /* restore original mbuf */ m->m_ext.ext_buf = m->m_data = m->m_ext.ext_arg1; m->m_ext.ext_arg1 = NULL; m->m_ext.ext_type = EXT_PACKET; m->m_ext.ext_free = NULL; if (MBUF_REFCNT(m) == 0) SET_MBUF_REFCNT(m, 1); uma_zfree(zone_pack, m); return 0; } static inline struct mbuf * nm_os_get_mbuf(struct ifnet *ifp, int len) { struct mbuf *m; (void)ifp; m = m_getcl(M_NOWAIT, MT_DATA, M_PKTHDR); if (m) { /* m_getcl() (mb_ctor_mbuf) has an assert that checks that * M_NOFREE flag is not specified as third argument, * so we have to set M_NOFREE after m_getcl(). */ m->m_flags |= M_NOFREE; m->m_ext.ext_arg1 = m->m_ext.ext_buf; // XXX save m->m_ext.ext_free = (void *)void_mbuf_dtor; m->m_ext.ext_type = EXT_EXTREF; nm_prdis(5, "create m %p refcnt %d", m, MBUF_REFCNT(m)); } return m; } #else /* __FreeBSD_version >= 1100000 */ /* * Newer versions of FreeBSD, using a straightforward scheme. * * We allocate mbufs with m_gethdr(), since the mbuf header is needed * by the driver. We also attach custom external storage, * which in this case is a netmap buffer. When calling m_extadd(), however, * we pass a NULL address, since the real address (and length) will be * filled in by nm_os_generic_xmit_frame() right before calling * if_transmit(). * * The dtor function does nothing; however, we need it since mb_free_ext() * has a KASSERT(), checking that the mbuf dtor function is not NULL. */ #if __FreeBSD_version <= 1200050 static void void_mbuf_dtor(struct mbuf *m, void *arg1, void *arg2) { } #else /* __FreeBSD_version >= 1200051 */ /* The arg1 and arg2 pointer arguments were removed by r324446, which * is included since version 1200051. */ static void void_mbuf_dtor(struct mbuf *m) { } #endif /* __FreeBSD_version >= 1200051 */ #define SET_MBUF_DESTRUCTOR(m, fn) do { \ (m)->m_ext.ext_free = (fn != NULL) ?
\ (void *)fn : (void *)void_mbuf_dtor; \ } while (0) static inline struct mbuf * nm_os_get_mbuf(struct ifnet *ifp, int len) { struct mbuf *m; (void)ifp; (void)len; m = m_gethdr(M_NOWAIT, MT_DATA); if (m == NULL) { return m; } m_extadd(m, NULL /* buf */, 0 /* size */, void_mbuf_dtor, NULL, NULL, 0, EXT_NET_DRV); return m; } #endif /* __FreeBSD_version >= 1100000 */ #endif /* __FreeBSD__ */ struct nmreq_option * nmreq_findoption(struct nmreq_option *, uint16_t); int nmreq_checkduplicate(struct nmreq_option *); int netmap_init_bridges(void); void netmap_uninit_bridges(void); /* Functions to read and write CSB fields from the kernel. */ #if defined (linux) #define CSB_READ(csb, field, r) (get_user(r, &csb->field)) #define CSB_WRITE(csb, field, v) (put_user(v, &csb->field)) #else /* ! linux */ #define CSB_READ(csb, field, r) (r = fuword32(&csb->field)) #define CSB_WRITE(csb, field, v) (suword32(&csb->field, v)) #endif /* ! linux */ #endif /* _NET_NETMAP_KERN_H_ */ Index: projects/import-googletest-1.8.1/sys/fs/fuse/fuse_internal.c =================================================================== --- projects/import-googletest-1.8.1/sys/fs/fuse/fuse_internal.c (revision 344270) +++ projects/import-googletest-1.8.1/sys/fs/fuse/fuse_internal.c (revision 344271) @@ -1,651 +1,640 @@ /*- * SPDX-License-Identifier: BSD-3-Clause * * Copyright (c) 2007-2009 Google Inc. and Amit Singh * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions are * met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * Redistributions in binary form must reproduce the above * copyright notice, this list of conditions and the following disclaimer * in the documentation and/or other materials provided with the * distribution. * * Neither the name of Google Inc. nor the names of its * contributors may be used to endorse or promote products derived from * this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. * * Copyright (C) 2005 Csaba Henk. * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. 
* * THIS SOFTWARE IS PROVIDED BY AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ #include __FBSDID("$FreeBSD$"); #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include "fuse.h" #include "fuse_file.h" #include "fuse_internal.h" #include "fuse_ipc.h" #include "fuse_node.h" #include "fuse_file.h" #include "fuse_param.h" #define FUSE_DEBUG_MODULE INTERNAL #include "fuse_debug.h" #ifdef ZERO_PAD_INCOMPLETE_BUFS static int isbzero(void *buf, size_t len); #endif /* access */ int fuse_internal_access(struct vnode *vp, mode_t mode, struct fuse_access_param *facp, struct thread *td, struct ucred *cred) { int err = 0; uint32_t mask = 0; int dataflags; int vtype; struct mount *mp; struct fuse_dispatcher fdi; struct fuse_access_in *fai; struct fuse_data *data; /* NOT YET DONE */ /* * If this vnop gives you trouble, just return 0 here for a lazy * kludge. */ /* return 0;*/ fuse_trace_printf_func(); mp = vnode_mount(vp); vtype = vnode_vtype(vp); data = fuse_get_mpdata(mp); dataflags = data->dataflags; if ((mode & VWRITE) && vfs_isrdonly(mp)) { return EACCES; } /* Unless explicitly permitted, deny everyone except the fs owner. */ if (vnode_isvroot(vp) && !(facp->facc_flags & FACCESS_NOCHECKSPY)) { if (!(dataflags & FSESS_DAEMON_CAN_SPY)) { int denied = fuse_match_cred(data->daemoncred, cred); if (denied) { return EPERM; } } facp->facc_flags |= FACCESS_NOCHECKSPY; } if (!(facp->facc_flags & FACCESS_DO_ACCESS)) { return 0; } if (((vtype == VREG) && (mode & VEXEC))) { #ifdef NEED_MOUNT_ARGUMENT_FOR_THIS /* Let the kernel handle this through open / close heuristics.*/ return ENOTSUP; #else /* Let the kernel handle this. */ return 0; #endif } if (!fsess_isimpl(mp, FUSE_ACCESS)) { /* Let the kernel handle this. */ return 0; } if (dataflags & FSESS_DEFAULT_PERMISSIONS) { /* Let the kernel handle this. 
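 * (With -o default_permissions the daemon has asked the kernel to
 * evaluate standard Unix permission bits against the cached
 * attributes itself, so no FUSE_ACCESS round trip is needed; e.g. a
 * mode-0600 file owned by another uid would be denied here without
 * consulting the daemon.)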
*/ return 0; } if ((mode & VADMIN) != 0) { err = priv_check_cred(cred, PRIV_VFS_ADMIN); if (err) { return err; } } if ((mode & (VWRITE | VAPPEND | VADMIN)) != 0) { mask |= W_OK; } if ((mode & VREAD) != 0) { mask |= R_OK; } if ((mode & VEXEC) != 0) { mask |= X_OK; } bzero(&fdi, sizeof(fdi)); fdisp_init(&fdi, sizeof(*fai)); fdisp_make_vp(&fdi, FUSE_ACCESS, vp, td, cred); fai = fdi.indata; fai->mask = F_OK; fai->mask |= mask; err = fdisp_wait_answ(&fdi); fdisp_destroy(&fdi); if (err == ENOSYS) { fsess_set_notimpl(mp, FUSE_ACCESS); err = 0; } return err; } /* fsync */ int fuse_internal_fsync_callback(struct fuse_ticket *tick, struct uio *uio) { fuse_trace_printf_func(); if (tick->tk_aw_ohead.error == ENOSYS) { fsess_set_notimpl(tick->tk_data->mp, fticket_opcode(tick)); } return 0; } int fuse_internal_fsync(struct vnode *vp, struct thread *td, struct ucred *cred, struct fuse_filehandle *fufh) { int op = FUSE_FSYNC; struct fuse_fsync_in *ffsi; struct fuse_dispatcher fdi; fuse_trace_printf_func(); if (vnode_isdir(vp)) { op = FUSE_FSYNCDIR; } fdisp_init(&fdi, sizeof(*ffsi)); fdisp_make_vp(&fdi, op, vp, td, cred); ffsi = fdi.indata; ffsi->fh = fufh->fh_id; ffsi->fsync_flags = 1; /* datasync */ fuse_insert_callback(fdi.tick, fuse_internal_fsync_callback); fuse_insert_message(fdi.tick); fdisp_destroy(&fdi); return 0; } /* readdir */ int fuse_internal_readdir(struct vnode *vp, struct uio *uio, struct fuse_filehandle *fufh, struct fuse_iov *cookediov) { int err = 0; struct fuse_dispatcher fdi; struct fuse_read_in *fri; if (uio_resid(uio) == 0) { return 0; } fdisp_init(&fdi, 0); /* * Note that we DO NOT have a UIO_SYSSPACE here (so no need for p2p * I/O). */ while (uio_resid(uio) > 0) { fdi.iosize = sizeof(*fri); fdisp_make_vp(&fdi, FUSE_READDIR, vp, NULL, NULL); fri = fdi.indata; fri->fh = fufh->fh_id; fri->offset = uio_offset(uio); fri->size = min(uio_resid(uio), FUSE_DEFAULT_IOSIZE); /* mp->max_read */ if ((err = fdisp_wait_answ(&fdi))) { break; } if ((err = fuse_internal_readdir_processdata(uio, fri->size, fdi.answ, fdi.iosize, cookediov))) { break; } } fdisp_destroy(&fdi); return ((err == -1) ? 0 : err); } int fuse_internal_readdir_processdata(struct uio *uio, size_t reqsize, void *buf, size_t bufsize, void *param) { int err = 0; int cou = 0; int bytesavail; size_t freclen; struct dirent *de; struct fuse_dirent *fudge; struct fuse_iov *cookediov = param; if (bufsize < FUSE_NAME_OFFSET) { return -1; } for (;;) { if (bufsize < FUSE_NAME_OFFSET) { err = -1; break; } fudge = (struct fuse_dirent *)buf; freclen = FUSE_DIRENT_SIZE(fudge); cou++; if (bufsize < freclen) { err = ((cou == 1) ? 
-1 : 0); break; } #ifdef ZERO_PAD_INCOMPLETE_BUFS if (isbzero(buf, FUSE_NAME_OFFSET)) { err = -1; break; } #endif if (!fudge->namelen || fudge->namelen > MAXNAMLEN) { err = EINVAL; break; } bytesavail = GENERIC_DIRSIZ((struct pseudo_dirent *) &fudge->namelen); if (bytesavail > uio_resid(uio)) { err = -1; break; } fiov_refresh(cookediov); fiov_adjust(cookediov, bytesavail); de = (struct dirent *)cookediov->base; de->d_fileno = fudge->ino; de->d_reclen = bytesavail; de->d_type = fudge->type; de->d_namlen = fudge->namelen; memcpy((char *)cookediov->base + sizeof(struct dirent) - MAXNAMLEN - 1, (char *)buf + FUSE_NAME_OFFSET, fudge->namelen); dirent_terminate(de); err = uiomove(cookediov->base, cookediov->len, uio); if (err) { break; } buf = (char *)buf + freclen; bufsize -= freclen; uio_setoffset(uio, fudge->off); } return err; } /* remove */ -#define INVALIDATE_CACHED_VATTRS_UPON_UNLINK 1 int fuse_internal_remove(struct vnode *dvp, struct vnode *vp, struct componentname *cnp, enum fuse_opcode op) { struct fuse_dispatcher fdi; + struct fuse_vnode_data *fvdat; + int err; - struct vattr *vap = VTOVA(vp); + err = 0; + fvdat = VTOFUD(vp); -#if INVALIDATE_CACHED_VATTRS_UPON_UNLINK - int need_invalidate = 0; - uint64_t target_nlink = 0; - -#endif - int err = 0; - debug_printf("dvp=%p, cnp=%p, op=%d\n", vp, cnp, op); fdisp_init(&fdi, cnp->cn_namelen + 1); fdisp_make_vp(&fdi, op, dvp, cnp->cn_thread, cnp->cn_cred); memcpy(fdi.indata, cnp->cn_nameptr, cnp->cn_namelen); ((char *)fdi.indata)[cnp->cn_namelen] = '\0'; -#if INVALIDATE_CACHED_VATTRS_UPON_UNLINK - if (vap->va_nlink > 1) { - need_invalidate = 1; - target_nlink = vap->va_nlink; - } -#endif - err = fdisp_wait_answ(&fdi); fdisp_destroy(&fdi); return err; } /* rename */ int fuse_internal_rename(struct vnode *fdvp, struct componentname *fcnp, struct vnode *tdvp, struct componentname *tcnp) { struct fuse_dispatcher fdi; struct fuse_rename_in *fri; int err = 0; fdisp_init(&fdi, sizeof(*fri) + fcnp->cn_namelen + tcnp->cn_namelen + 2); fdisp_make_vp(&fdi, FUSE_RENAME, fdvp, tcnp->cn_thread, tcnp->cn_cred); fri = fdi.indata; fri->newdir = VTOI(tdvp); memcpy((char *)fdi.indata + sizeof(*fri), fcnp->cn_nameptr, fcnp->cn_namelen); ((char *)fdi.indata)[sizeof(*fri) + fcnp->cn_namelen] = '\0'; memcpy((char *)fdi.indata + sizeof(*fri) + fcnp->cn_namelen + 1, tcnp->cn_nameptr, tcnp->cn_namelen); ((char *)fdi.indata)[sizeof(*fri) + fcnp->cn_namelen + tcnp->cn_namelen + 1] = '\0'; err = fdisp_wait_answ(&fdi); fdisp_destroy(&fdi); return err; } /* strategy */ /* entity creation */ void fuse_internal_newentry_makerequest(struct mount *mp, uint64_t dnid, struct componentname *cnp, enum fuse_opcode op, void *buf, size_t bufsize, struct fuse_dispatcher *fdip) { debug_printf("fdip=%p\n", fdip); fdip->iosize = bufsize + cnp->cn_namelen + 1; fdisp_make(fdip, op, mp, dnid, cnp->cn_thread, cnp->cn_cred); memcpy(fdip->indata, buf, bufsize); memcpy((char *)fdip->indata + bufsize, cnp->cn_nameptr, cnp->cn_namelen); ((char *)fdip->indata)[bufsize + cnp->cn_namelen] = '\0'; } int fuse_internal_newentry_core(struct vnode *dvp, struct vnode **vpp, struct componentname *cnp, enum vtype vtyp, struct fuse_dispatcher *fdip) { int err = 0; struct fuse_entry_out *feo; struct mount *mp = vnode_mount(dvp); if ((err = fdisp_wait_answ(fdip))) { return err; } feo = fdip->answ; if ((err = fuse_internal_checkentry(feo, vtyp))) { return err; } - err = fuse_vnode_get(mp, feo->nodeid, dvp, vpp, cnp, vtyp); + err = fuse_vnode_get(mp, feo, feo->nodeid, dvp, vpp, cnp, vtyp); if (err) { 
fuse_internal_forget_send(mp, cnp->cn_thread, cnp->cn_cred, feo->nodeid, 1); return err; } - cache_attrs(*vpp, feo); + cache_attrs(*vpp, feo, NULL); return err; } int fuse_internal_newentry(struct vnode *dvp, struct vnode **vpp, struct componentname *cnp, enum fuse_opcode op, void *buf, size_t bufsize, enum vtype vtype) { int err; struct fuse_dispatcher fdi; struct mount *mp = vnode_mount(dvp); fdisp_init(&fdi, 0); fuse_internal_newentry_makerequest(mp, VTOI(dvp), cnp, op, buf, bufsize, &fdi); err = fuse_internal_newentry_core(dvp, vpp, cnp, vtype, &fdi); fdisp_destroy(&fdi); return err; } /* entity destruction */ int fuse_internal_forget_callback(struct fuse_ticket *ftick, struct uio *uio) { fuse_internal_forget_send(ftick->tk_data->mp, curthread, NULL, ((struct fuse_in_header *)ftick->tk_ms_fiov.base)->nodeid, 1); return 0; } void fuse_internal_forget_send(struct mount *mp, struct thread *td, struct ucred *cred, uint64_t nodeid, uint64_t nlookup) { struct fuse_dispatcher fdi; struct fuse_forget_in *ffi; debug_printf("mp=%p, nodeid=%ju, nlookup=%ju\n", mp, (uintmax_t)nodeid, (uintmax_t)nlookup); /* * KASSERT(nlookup > 0, ("zero-times forget for vp #%llu", * (long long unsigned) nodeid)); */ fdisp_init(&fdi, sizeof(*ffi)); fdisp_make(&fdi, FUSE_FORGET, mp, nodeid, td, cred); ffi = fdi.indata; ffi->nlookup = nlookup; fuse_insert_message(fdi.tick); fdisp_destroy(&fdi); } void fuse_internal_vnode_disappear(struct vnode *vp) { struct fuse_vnode_data *fvdat = VTOFUD(vp); ASSERT_VOP_ELOCKED(vp, "fuse_internal_vnode_disappear"); fvdat->flag |= FN_REVOKED; + fvdat->valid_attr_cache = false; cache_purge(vp); } /* fuse start/stop */ int fuse_internal_init_callback(struct fuse_ticket *tick, struct uio *uio) { int err = 0; struct fuse_data *data = tick->tk_data; struct fuse_init_out *fiio; if ((err = tick->tk_aw_ohead.error)) { goto out; } if ((err = fticket_pull(tick, uio))) { goto out; } fiio = fticket_resp(tick)->base; /* XXX: Do we want to check anything further besides this? 
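 * For reference (an assumption inferred from the checks below, not
 * spelled out by the protocol headers here): a 7.5-or-newer daemon
 * replies with a full fuse_init_out, while older daemons send a
 * shorter structure, which is why max_write is only trusted when the
 * reply length matches sizeof(struct fuse_init_out).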
*/ if (fiio->major < 7) { debug_printf("userspace version too low\n"); err = EPROTONOSUPPORT; goto out; } data->fuse_libabi_major = fiio->major; data->fuse_libabi_minor = fiio->minor; if (fuse_libabi_geq(data, 7, 5)) { if (fticket_resp(tick)->len == sizeof(struct fuse_init_out)) { data->max_write = fiio->max_write; } else { err = EINVAL; } } else { /* Old fixed values */ data->max_write = 4096; } out: if (err) { fdata_set_dead(data); } FUSE_LOCK(); data->dataflags |= FSESS_INITED; wakeup(&data->ticketer); FUSE_UNLOCK(); return 0; } void fuse_internal_send_init(struct fuse_data *data, struct thread *td) { struct fuse_init_in *fiii; struct fuse_dispatcher fdi; fdisp_init(&fdi, sizeof(*fiii)); fdisp_make(&fdi, FUSE_INIT, data->mp, 0, td, NULL); fiii = fdi.indata; fiii->major = FUSE_KERNEL_VERSION; fiii->minor = FUSE_KERNEL_MINOR_VERSION; fiii->max_readahead = FUSE_DEFAULT_IOSIZE * 16; fiii->flags = 0; fuse_insert_callback(fdi.tick, fuse_internal_init_callback); fuse_insert_message(fdi.tick); fdisp_destroy(&fdi); } #ifdef ZERO_PAD_INCOMPLETE_BUFS static int isbzero(void *buf, size_t len) { int i; for (i = 0; i < len; i++) { if (((char *)buf)[i]) return (0); } return (1); } #endif Index: projects/import-googletest-1.8.1/sys/fs/fuse/fuse_internal.h =================================================================== --- projects/import-googletest-1.8.1/sys/fs/fuse/fuse_internal.h (revision 344270) +++ projects/import-googletest-1.8.1/sys/fs/fuse/fuse_internal.h (revision 344271) @@ -1,370 +1,398 @@ /*- * SPDX-License-Identifier: BSD-3-Clause * * Copyright (c) 2007-2009 Google Inc. and Amit Singh * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions are * met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * Redistributions in binary form must reproduce the above * copyright notice, this list of conditions and the following disclaimer * in the documentation and/or other materials provided with the * distribution. * * Neither the name of Google Inc. nor the names of its * contributors may be used to endorse or promote products derived from * this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. * * Copyright (C) 2005 Csaba Henk. * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2.
Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * $FreeBSD$ */ #ifndef _FUSE_INTERNAL_H_ #define _FUSE_INTERNAL_H_ #include #include #include #include #include "fuse_ipc.h" #include "fuse_node.h" static __inline int vfs_isrdonly(struct mount *mp) { return ((mp->mnt_flag & MNT_RDONLY) != 0 ? 1 : 0); } static __inline struct mount * vnode_mount(struct vnode *vp) { return (vp->v_mount); } static __inline int vnode_mountedhere(struct vnode *vp) { return (vp->v_mountedhere != NULL ? 1 : 0); } static __inline enum vtype vnode_vtype(struct vnode *vp) { return (vp->v_type); } static __inline int vnode_isvroot(struct vnode *vp) { return ((vp->v_vflag & VV_ROOT) != 0 ? 1 : 0); } static __inline int vnode_isreg(struct vnode *vp) { return (vp->v_type == VREG ? 1 : 0); } static __inline int vnode_isdir(struct vnode *vp) { return (vp->v_type == VDIR ? 1 : 0); } static __inline int vnode_islnk(struct vnode *vp) { return (vp->v_type == VLNK ? 1 : 0); } static __inline ssize_t uio_resid(struct uio *uio) { return (uio->uio_resid); } static __inline off_t uio_offset(struct uio *uio) { return (uio->uio_offset); } static __inline void uio_setoffset(struct uio *uio, off_t offset) { uio->uio_offset = offset; } static __inline void uio_setresid(struct uio *uio, ssize_t resid) { uio->uio_resid = resid; } /* miscellaneous */ static __inline__ int fuse_isdeadfs(struct vnode *vp) { struct fuse_data *data = fuse_get_mpdata(vnode_mount(vp)); return (data->dataflags & FSESS_DEAD); } static __inline__ int fuse_iosize(struct vnode *vp) { return vp->v_mount->mnt_stat.f_iosize; } /* access */ #define FVP_ACCESS_NOOP 0x01 #define FACCESS_VA_VALID 0x01 #define FACCESS_DO_ACCESS 0x02 #define FACCESS_STICKY 0x04 #define FACCESS_CHOWN 0x08 #define FACCESS_NOCHECKSPY 0x10 #define FACCESS_SETGID 0x12 #define FACCESS_XQUERIES FACCESS_STICKY | FACCESS_CHOWN | FACCESS_SETGID struct fuse_access_param { uid_t xuid; gid_t xgid; uint32_t facc_flags; }; static __inline int fuse_match_cred(struct ucred *basecred, struct ucred *usercred) { if (basecred->cr_uid == usercred->cr_uid && basecred->cr_uid == usercred->cr_ruid && basecred->cr_uid == usercred->cr_svuid && basecred->cr_groups[0] == usercred->cr_groups[0] && basecred->cr_groups[0] == usercred->cr_rgid && basecred->cr_groups[0] == usercred->cr_svgid) return 0; return EPERM; } int fuse_internal_access(struct vnode *vp, mode_t mode, struct fuse_access_param *facp, struct thread *td, struct ucred *cred); /* attributes */ +/* + * Cache FUSE attributes 'fat', with nominal expiration + * 'attr_valid'.'attr_valid_nsec', in attr cache associated with vnode 'vp'. 
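+ * (For example: attr_valid = 1 and attr_valid_nsec = 500000000 nominally
+ * permit caching for 1.5 seconds, while attr_valid == attr_valid_nsec == 0
+ * disables caching of this reply altogether.)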
+ * Optionally, if argument 'vap' is not NULL, store a copy of the converted + * attributes there as well. + * + * If the nominal attribute cache TTL is zero, do not cache on the 'vp' (but do + * return the result to the caller). + */ static __inline void -fuse_internal_attr_fat2vat(struct mount *mp, +fuse_internal_attr_fat2vat(struct vnode *vp, struct fuse_attr *fat, + uint64_t attr_valid, + uint32_t attr_valid_nsec, struct vattr *vap) { + struct mount *mp; + struct fuse_vnode_data *fvdat; + struct vattr *vp_cache_at; + + mp = vnode_mount(vp); + fvdat = VTOFUD(vp); + DEBUGX(FUSE_DEBUG_INTERNAL, "node #%ju, mode 0%o\n", (uintmax_t)fat->ino, fat->mode); + /* Honor explicit do-not-cache requests from user filesystems. */ + if (attr_valid == 0 && attr_valid_nsec == 0) + fvdat->valid_attr_cache = false; + else + fvdat->valid_attr_cache = true; + + vp_cache_at = VTOVA(vp); + + if (vap == NULL && vp_cache_at == NULL) + return; + + if (vap == NULL) + vap = vp_cache_at; + vattr_null(vap); vap->va_fsid = mp->mnt_stat.f_fsid.val[0]; vap->va_fileid = fat->ino; vap->va_mode = fat->mode & ~S_IFMT; vap->va_nlink = fat->nlink; vap->va_uid = fat->uid; vap->va_gid = fat->gid; vap->va_rdev = fat->rdev; vap->va_size = fat->size; vap->va_atime.tv_sec = fat->atime; /* XXX on some platforms cast from 64 bits to 32 */ vap->va_atime.tv_nsec = fat->atimensec; vap->va_mtime.tv_sec = fat->mtime; vap->va_mtime.tv_nsec = fat->mtimensec; vap->va_ctime.tv_sec = fat->ctime; vap->va_ctime.tv_nsec = fat->ctimensec; vap->va_blocksize = PAGE_SIZE; vap->va_type = IFTOVT(fat->mode); - -#if (S_BLKSIZE == 512) - /* Optimize this case */ - vap->va_bytes = fat->blocks << 9; -#else vap->va_bytes = fat->blocks * S_BLKSIZE; -#endif - vap->va_flags = 0; + + if (vap != vp_cache_at && vp_cache_at != NULL) + memcpy(vp_cache_at, vap, sizeof(*vap)); } -#define cache_attrs(vp, fuse_out) \ - fuse_internal_attr_fat2vat(vnode_mount(vp), &(fuse_out)->attr, \ - VTOVA(vp)); +#define cache_attrs(vp, fuse_out, vap_out) \ + fuse_internal_attr_fat2vat((vp), &(fuse_out)->attr, \ + (fuse_out)->attr_valid, (fuse_out)->attr_valid_nsec, (vap_out)) /* fsync */ int fuse_internal_fsync(struct vnode *vp, struct thread *td, struct ucred *cred, struct fuse_filehandle *fufh); int fuse_internal_fsync_callback(struct fuse_ticket *tick, struct uio *uio); /* readdir */ struct pseudo_dirent { uint32_t d_namlen; }; int fuse_internal_readdir(struct vnode *vp, struct uio *uio, struct fuse_filehandle *fufh, struct fuse_iov *cookediov); int fuse_internal_readdir_processdata(struct uio *uio, size_t reqsize, void *buf, size_t bufsize, void *param); /* remove */ int fuse_internal_remove(struct vnode *dvp, struct vnode *vp, struct componentname *cnp, enum fuse_opcode op); /* rename */ int fuse_internal_rename(struct vnode *fdvp, struct componentname *fcnp, struct vnode *tdvp, struct componentname *tcnp); /* revoke */ void fuse_internal_vnode_disappear(struct vnode *vp); /* strategy */ /* entity creation */ static __inline int fuse_internal_checkentry(struct fuse_entry_out *feo, enum vtype vtyp) { DEBUGX(FUSE_DEBUG_INTERNAL, "feo=%p, vtype=%d\n", feo, vtyp); if (vtyp != IFTOVT(feo->attr.mode)) { DEBUGX(FUSE_DEBUG_INTERNAL, "EINVAL -- %x != %x\n", vtyp, IFTOVT(feo->attr.mode)); return EINVAL; } if (feo->nodeid == FUSE_NULL_ID) { DEBUGX(FUSE_DEBUG_INTERNAL, "EINVAL -- feo->nodeid is NULL\n"); return EINVAL; } if (feo->nodeid == FUSE_ROOT_ID) { DEBUGX(FUSE_DEBUG_INTERNAL, "EINVAL -- feo->nodeid is FUSE_ROOT_ID\n"); return EINVAL; } return 0; } int fuse_internal_newentry(struct 
vnode *dvp, struct vnode **vpp, struct componentname *cnp, enum fuse_opcode op, void *buf, size_t bufsize, enum vtype vtyp); void fuse_internal_newentry_makerequest(struct mount *mp, uint64_t dnid, struct componentname *cnp, enum fuse_opcode op, void *buf, size_t bufsize, struct fuse_dispatcher *fdip); int fuse_internal_newentry_core(struct vnode *dvp, struct vnode **vpp, struct componentname *cnp, enum vtype vtyp, struct fuse_dispatcher *fdip); /* entity destruction */ int fuse_internal_forget_callback(struct fuse_ticket *tick, struct uio *uio); void fuse_internal_forget_send(struct mount *mp, struct thread *td, struct ucred *cred, uint64_t nodeid, uint64_t nlookup); /* fuse start/stop */ int fuse_internal_init_callback(struct fuse_ticket *tick, struct uio *uio); void fuse_internal_send_init(struct fuse_data *data, struct thread *td); #endif /* _FUSE_INTERNAL_H_ */ Index: projects/import-googletest-1.8.1/sys/fs/fuse/fuse_io.c =================================================================== --- projects/import-googletest-1.8.1/sys/fs/fuse/fuse_io.c (revision 344270) +++ projects/import-googletest-1.8.1/sys/fs/fuse/fuse_io.c (revision 344271) @@ -1,814 +1,824 @@ /*- * SPDX-License-Identifier: BSD-3-Clause * * Copyright (c) 2007-2009 Google Inc. * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions are * met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * Redistributions in binary form must reproduce the above * copyright notice, this list of conditions and the following disclaimer * in the documentation and/or other materials provided with the * distribution. * * Neither the name of Google Inc. nor the names of its * contributors may be used to endorse or promote products derived from * this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. * * Copyright (C) 2005 Csaba Henk. * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. 
IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ #include __FBSDID("$FreeBSD$"); #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include "fuse.h" #include "fuse_file.h" #include "fuse_node.h" #include "fuse_internal.h" #include "fuse_ipc.h" #include "fuse_io.h" #define FUSE_DEBUG_MODULE IO #include "fuse_debug.h" static int fuse_read_directbackend(struct vnode *vp, struct uio *uio, struct ucred *cred, struct fuse_filehandle *fufh); static int fuse_read_biobackend(struct vnode *vp, struct uio *uio, struct ucred *cred, struct fuse_filehandle *fufh); static int fuse_write_directbackend(struct vnode *vp, struct uio *uio, struct ucred *cred, struct fuse_filehandle *fufh, int ioflag); static int fuse_write_biobackend(struct vnode *vp, struct uio *uio, struct ucred *cred, struct fuse_filehandle *fufh, int ioflag); int fuse_io_dispatch(struct vnode *vp, struct uio *uio, int ioflag, struct ucred *cred) { struct fuse_filehandle *fufh; int err, directio; MPASS(vp->v_type == VREG || vp->v_type == VDIR); err = fuse_filehandle_getrw(vp, (uio->uio_rw == UIO_READ) ? FUFH_RDONLY : FUFH_WRONLY, &fufh); if (err) { printf("FUSE: io dispatch: filehandles are closed\n"); return err; } /* * Ideally, when the daemon asks for direct io at open time, the * standard file flag should be set according to this, so that would * just change the default mode, which later on could be changed via * fcntl(2). * But this doesn't work, the O_DIRECT flag gets cleared at some point * (don't know where). So to make any use of the Fuse direct_io option, * we hardwire it into the file's private data (similarly to Linux, * btw.). */ directio = (ioflag & IO_DIRECT) || !fsess_opt_datacache(vnode_mount(vp)); switch (uio->uio_rw) { case UIO_READ: if (directio) { FS_DEBUG("direct read of vnode %ju via file handle %ju\n", (uintmax_t)VTOILLU(vp), (uintmax_t)fufh->fh_id); err = fuse_read_directbackend(vp, uio, cred, fufh); } else { FS_DEBUG("buffered read of vnode %ju\n", (uintmax_t)VTOILLU(vp)); err = fuse_read_biobackend(vp, uio, cred, fufh); } break; case UIO_WRITE: - if (directio) { + /* + * Kludge: simulate write-through caching via write-around + * caching. Same effect, as far as never caching dirty data, + * but slightly pessimal in that newly written data is not + * cached. 
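+ * (Illustrative consequence, under this scheme: a write immediately
+ * followed by a read of the same range goes back to the daemon
+ * rather than being served from the buffer cache.)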
+ */ + if (directio || fuse_data_cache_mode == FUSE_CACHE_WT) { FS_DEBUG("direct write of vnode %ju via file handle %ju\n", (uintmax_t)VTOILLU(vp), (uintmax_t)fufh->fh_id); err = fuse_write_directbackend(vp, uio, cred, fufh, ioflag); } else { FS_DEBUG("buffered write of vnode %ju\n", (uintmax_t)VTOILLU(vp)); err = fuse_write_biobackend(vp, uio, cred, fufh, ioflag); } break; default: panic("uninterpreted mode passed to fuse_io_dispatch"); } return (err); } static int fuse_read_biobackend(struct vnode *vp, struct uio *uio, struct ucred *cred, struct fuse_filehandle *fufh) { struct buf *bp; daddr_t lbn; int bcount; int err = 0, n = 0, on = 0; off_t filesize; const int biosize = fuse_iosize(vp); FS_DEBUG("resid=%zx offset=%jx fsize=%jx\n", uio->uio_resid, uio->uio_offset, VTOFUD(vp)->filesize); if (uio->uio_resid == 0) return (0); if (uio->uio_offset < 0) return (EINVAL); bcount = MIN(MAXBSIZE, biosize); filesize = VTOFUD(vp)->filesize; do { if (fuse_isdeadfs(vp)) { err = ENXIO; break; } lbn = uio->uio_offset / biosize; on = uio->uio_offset & (biosize - 1); FS_DEBUG2G("biosize %d, lbn %d, on %d\n", biosize, (int)lbn, on); /* * Obtain the buffer cache block. Figure out the buffer size * when we are at EOF. If we are modifying the size of the * buffer based on an EOF condition we need to hold * nfs_rslock() through obtaining the buffer to prevent * a potential writer-appender from messing with n_size. * Otherwise we may accidentally truncate the buffer and * lose dirty data. * * Note that bcount is *not* DEV_BSIZE aligned. */ if ((off_t)lbn * biosize >= filesize) { bcount = 0; } else if ((off_t)(lbn + 1) * biosize > filesize) { bcount = filesize - (off_t)lbn *biosize; } bp = getblk(vp, lbn, bcount, PCATCH, 0, 0); if (!bp) return (EINTR); /* * If B_CACHE is not set, we must issue the read. If this * fails, we return an error. */ if ((bp->b_flags & B_CACHE) == 0) { bp->b_iocmd = BIO_READ; vfs_busy_pages(bp, 0); err = fuse_io_strategy(vp, bp); if (err) { brelse(bp); return (err); } } /* * on is the offset into the current bp. Figure out how many * bytes we can copy out of the bp. Note that bcount is * NOT DEV_BSIZE aligned. * * Then figure out how many bytes we can copy into the uio. */ n = 0; if (on < bcount) n = MIN((unsigned)(bcount - on), uio->uio_resid); if (n > 0) { FS_DEBUG2G("feeding buffeater with %d bytes of buffer %p," " saying %d was asked for\n", n, bp->b_data + on, n + (int)bp->b_resid); err = uiomove(bp->b_data + on, n, uio); } brelse(bp); FS_DEBUG2G("end of turn, err %d, uio->uio_resid %zd, n %d\n", err, uio->uio_resid, n); } while (err == 0 && uio->uio_resid > 0 && n > 0); return (err); } static int fuse_read_directbackend(struct vnode *vp, struct uio *uio, struct ucred *cred, struct fuse_filehandle *fufh) { struct fuse_dispatcher fdi; struct fuse_read_in *fri; int err = 0; if (uio->uio_resid == 0) return (0); fdisp_init(&fdi, 0); /* * XXX In "normal" case we use an intermediate kernel buffer for * transmitting data from daemon's context to ours. Eventually, we should * get rid of this. Anyway, if the target uio lives in sysspace (we are * called from pageops), and the input data doesn't need kernel-side * processing (we are not called from readdir) we can already invoke * an optimized, "peer-to-peer" I/O routine. 
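 * A sketch of that optimized path (hypothetical, not implemented
 * here): for a UIO_SYSSPACE target needing no kernel-side
 * processing, the daemon's reply could be copied straight into the
 * caller's buffer as it arrives, with no intermediate fdi.answ
 * staging buffer at all.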
*/ while (uio->uio_resid > 0) { fdi.iosize = sizeof(*fri); fdisp_make_vp(&fdi, FUSE_READ, vp, uio->uio_td, cred); fri = fdi.indata; fri->fh = fufh->fh_id; fri->offset = uio->uio_offset; fri->size = MIN(uio->uio_resid, fuse_get_mpdata(vp->v_mount)->max_read); FS_DEBUG2G("fri->fh %ju, fri->offset %ju, fri->size %ju\n", (uintmax_t)fri->fh, (uintmax_t)fri->offset, (uintmax_t)fri->size); if ((err = fdisp_wait_answ(&fdi))) goto out; FS_DEBUG2G("complete: got iosize=%d, requested fri.size=%zd; " "resid=%zd offset=%ju\n", fri->size, fdi.iosize, uio->uio_resid, (uintmax_t)uio->uio_offset); if ((err = uiomove(fdi.answ, MIN(fri->size, fdi.iosize), uio))) break; if (fdi.iosize < fri->size) break; } out: fdisp_destroy(&fdi); return (err); } static int fuse_write_directbackend(struct vnode *vp, struct uio *uio, struct ucred *cred, struct fuse_filehandle *fufh, int ioflag) { struct fuse_vnode_data *fvdat = VTOFUD(vp); struct fuse_write_in *fwi; struct fuse_dispatcher fdi; size_t chunksize; int diff; int err = 0; if (uio->uio_resid == 0) return (0); if (ioflag & IO_APPEND) uio_setoffset(uio, fvdat->filesize); fdisp_init(&fdi, 0); while (uio->uio_resid > 0) { chunksize = MIN(uio->uio_resid, fuse_get_mpdata(vp->v_mount)->max_write); fdi.iosize = sizeof(*fwi) + chunksize; fdisp_make_vp(&fdi, FUSE_WRITE, vp, uio->uio_td, cred); fwi = fdi.indata; fwi->fh = fufh->fh_id; fwi->offset = uio->uio_offset; fwi->size = chunksize; if ((err = uiomove((char *)fdi.indata + sizeof(*fwi), chunksize, uio))) break; if ((err = fdisp_wait_answ(&fdi))) break; diff = chunksize - ((struct fuse_write_out *)fdi.answ)->size; if (diff < 0) { err = EINVAL; break; } uio->uio_resid += diff; uio->uio_offset -= diff; - if (uio->uio_offset > fvdat->filesize) + if (uio->uio_offset > fvdat->filesize && + fuse_data_cache_mode != FUSE_CACHE_UC) { fuse_vnode_setsize(vp, cred, uio->uio_offset); + fvdat->flag &= ~FN_SIZECHANGE; + } } fdisp_destroy(&fdi); return (err); } static int fuse_write_biobackend(struct vnode *vp, struct uio *uio, struct ucred *cred, struct fuse_filehandle *fufh, int ioflag) { struct fuse_vnode_data *fvdat = VTOFUD(vp); struct buf *bp; daddr_t lbn; int bcount; int n, on, err = 0; const int biosize = fuse_iosize(vp); KASSERT(uio->uio_rw == UIO_WRITE, ("ncl_write mode")); FS_DEBUG("resid=%zx offset=%jx fsize=%jx\n", uio->uio_resid, uio->uio_offset, fvdat->filesize); if (vp->v_type != VREG) return (EIO); if (uio->uio_offset < 0) return (EINVAL); if (uio->uio_resid == 0) return (0); if (ioflag & IO_APPEND) uio_setoffset(uio, fvdat->filesize); /* * Find all of this file's B_NEEDCOMMIT buffers. If our writes * would exceed the local maximum per-file write commit size when * combined with those, we must decide whether to flush, * go synchronous, or return err. We don't bother checking * IO_UNIT -- we just make all writes atomic anyway, as there's * no point optimizing for something that really won't ever happen. */ do { if (fuse_isdeadfs(vp)) { err = ENXIO; break; } lbn = uio->uio_offset / biosize; on = uio->uio_offset & (biosize - 1); n = MIN((unsigned)(biosize - on), uio->uio_resid); FS_DEBUG2G("lbn %ju, on %d, n %d, uio offset %ju, uio resid %zd\n", (uintmax_t)lbn, on, n, (uintmax_t)uio->uio_offset, uio->uio_resid); again: /* * Handle direct append and file extension cases, calculate * unaligned buffer size. */ if (uio->uio_offset == fvdat->filesize && n) { /* * Get the buffer (in its pre-append state to maintain * B_CACHE if it was previously set). 
Resize the * nfsnode after we have locked the buffer to prevent * readers from reading garbage. */ bcount = on; FS_DEBUG("getting block from OS, bcount %d\n", bcount); bp = getblk(vp, lbn, bcount, PCATCH, 0, 0); if (bp != NULL) { long save; err = fuse_vnode_setsize(vp, cred, uio->uio_offset + n); if (err) { brelse(bp); break; } save = bp->b_flags & B_CACHE; bcount += n; allocbuf(bp, bcount); bp->b_flags |= save; } } else { /* * Obtain the locked cache block first, and then * adjust the file's size as appropriate. */ bcount = on + n; if ((off_t)lbn * biosize + bcount < fvdat->filesize) { if ((off_t)(lbn + 1) * biosize < fvdat->filesize) bcount = biosize; else bcount = fvdat->filesize - (off_t)lbn *biosize; } FS_DEBUG("getting block from OS, bcount %d\n", bcount); bp = getblk(vp, lbn, bcount, PCATCH, 0, 0); if (bp && uio->uio_offset + n > fvdat->filesize) { err = fuse_vnode_setsize(vp, cred, uio->uio_offset + n); if (err) { brelse(bp); break; } } } if (!bp) { err = EINTR; break; } /* * Issue a READ if B_CACHE is not set. In special-append * mode, B_CACHE is based on the buffer prior to the write * op and is typically set, avoiding the read. If a read * is required in special append mode, the server will * probably send us a short-read since we extended the file * on our end, resulting in b_resid == 0 and, thusly, * B_CACHE getting set. * * We can also avoid issuing the read if the write covers * the entire buffer. We have to make sure the buffer state * is reasonable in this case since we will not be initiating * I/O. See the comments in kern/vfs_bio.c's getblk() for * more information. * * B_CACHE may also be set due to the buffer being cached * normally. */ if (on == 0 && n == bcount) { bp->b_flags |= B_CACHE; bp->b_flags &= ~B_INVAL; bp->b_ioflags &= ~BIO_ERROR; } if ((bp->b_flags & B_CACHE) == 0) { bp->b_iocmd = BIO_READ; vfs_busy_pages(bp, 0); fuse_io_strategy(vp, bp); if ((err = bp->b_error)) { brelse(bp); break; } } if (bp->b_wcred == NOCRED) bp->b_wcred = crhold(cred); /* * If dirtyend exceeds file size, chop it down. This should * not normally occur but there is an append race where it * might occur XXX, so we log it. * * If the chopping creates a reverse-indexed or degenerate * situation with dirtyoff/end, we 0 both of them. */ if (bp->b_dirtyend > bcount) { FS_DEBUG("FUSE append race @%lx:%d\n", (long)bp->b_blkno * biosize, bp->b_dirtyend - bcount); bp->b_dirtyend = bcount; } if (bp->b_dirtyoff >= bp->b_dirtyend) bp->b_dirtyoff = bp->b_dirtyend = 0; /* * If the new write will leave a contiguous dirty * area, just update the b_dirtyoff and b_dirtyend, * otherwise force a write rpc of the old dirty area. * * While it is possible to merge discontiguous writes due to * our having a B_CACHE buffer ( and thus valid read data * for the hole), we don't because it could lead to * significant cache coherency problems with multiple clients, * especially if locking is implemented later on. * * as an optimization we could theoretically maintain * a linked list of discontinuous areas, but we would still * have to commit them separately so there isn't much * advantage to it except perhaps a bit of asynchronization. */ if (bp->b_dirtyend > 0 && (on > bp->b_dirtyend || (on + n) < bp->b_dirtyoff)) { /* * Yes, we mean it. Write out everything to "storage" * immediately, without hesitation. (Apart from other * reasons: the only way to know if a write is valid * is if it's actually written out.)
*/ bwrite(bp); if (bp->b_error == EINTR) { err = EINTR; break; } goto again; } err = uiomove((char *)bp->b_data + on, n, uio); /* * Since this block is being modified, it must be written * again and not just committed. Since write clustering does * not work for the stage 1 data write, only the stage 2 * commit rpc, we have to clear B_CLUSTEROK as well. */ bp->b_flags &= ~(B_NEEDCOMMIT | B_CLUSTEROK); if (err) { bp->b_ioflags |= BIO_ERROR; bp->b_error = err; brelse(bp); break; } /* * Only update dirtyoff/dirtyend if not a degenerate * condition. */ if (n) { if (bp->b_dirtyend > 0) { bp->b_dirtyoff = MIN(on, bp->b_dirtyoff); bp->b_dirtyend = MAX((on + n), bp->b_dirtyend); } else { bp->b_dirtyoff = on; bp->b_dirtyend = on + n; } vfs_bio_set_valid(bp, on, n); } err = bwrite(bp); if (err) break; } while (uio->uio_resid > 0 && n > 0); if (fuse_sync_resize && (fvdat->flag & FN_SIZECHANGE) != 0) fuse_vnode_savesize(vp, cred); return (err); } int fuse_io_strategy(struct vnode *vp, struct buf *bp) { struct fuse_filehandle *fufh; struct fuse_vnode_data *fvdat = VTOFUD(vp); struct ucred *cred; struct uio *uiop; struct uio uio; struct iovec io; int error = 0; const int biosize = fuse_iosize(vp); MPASS(vp->v_type == VREG || vp->v_type == VDIR); MPASS(bp->b_iocmd == BIO_READ || bp->b_iocmd == BIO_WRITE); FS_DEBUG("inode=%ju offset=%jd resid=%ld\n", (uintmax_t)VTOI(vp), (intmax_t)(((off_t)bp->b_blkno) * biosize), bp->b_bcount); error = fuse_filehandle_getrw(vp, (bp->b_iocmd == BIO_READ) ? FUFH_RDONLY : FUFH_WRONLY, &fufh); if (error) { printf("FUSE: strategy: filehandles are closed\n"); bp->b_ioflags |= BIO_ERROR; bp->b_error = error; return (error); } cred = bp->b_iocmd == BIO_READ ? bp->b_rcred : bp->b_wcred; uiop = &uio; uiop->uio_iov = &io; uiop->uio_iovcnt = 1; uiop->uio_segflg = UIO_SYSSPACE; uiop->uio_td = curthread; /* * clear BIO_ERROR and B_INVAL state prior to initiating the I/O. We * do this here so we do not have to do it in all the code that * calls us. */ bp->b_flags &= ~B_INVAL; bp->b_ioflags &= ~BIO_ERROR; KASSERT(!(bp->b_flags & B_DONE), ("fuse_io_strategy: bp %p already marked done", bp)); if (bp->b_iocmd == BIO_READ) { io.iov_len = uiop->uio_resid = bp->b_bcount; io.iov_base = bp->b_data; uiop->uio_rw = UIO_READ; uiop->uio_offset = ((off_t)bp->b_blkno) * biosize; error = fuse_read_directbackend(vp, uiop, cred, fufh); + /* XXXCEM: Potentially invalid access to cached_attrs here */ if ((!error && uiop->uio_resid) || (fsess_opt_brokenio(vnode_mount(vp)) && error == EIO && uiop->uio_offset < fvdat->filesize && fvdat->filesize > 0 && uiop->uio_offset >= fvdat->cached_attrs.va_size)) { /* * If we had a short read with no error, we must have * hit a file hole. We should zero-fill the remainder. * This can also occur if the server hits the file EOF. * * Holes used to be able to occur due to pending * writes, but that is not possible any longer. 
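* * For illustration (hypothetical sizes): with b_bcount = 8192 and * only 5000 bytes returned by the daemon, uio_resid is 3192, so * nread below is 5000 and the trailing 3192 bytes of the buffer * are cleared with bzero() before uio_resid is zeroed out.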
*/ int nread = bp->b_bcount - uiop->uio_resid; int left = uiop->uio_resid; if (error != 0) { printf("FUSE: Fix broken io: offset %ju, " " resid %zd, file size %ju/%ju\n", (uintmax_t)uiop->uio_offset, uiop->uio_resid, fvdat->filesize, fvdat->cached_attrs.va_size); error = 0; } if (left > 0) bzero((char *)bp->b_data + nread, left); uiop->uio_resid = 0; } if (error) { bp->b_ioflags |= BIO_ERROR; bp->b_error = error; } } else { /* * If we only need to commit, try to commit */ if (bp->b_flags & B_NEEDCOMMIT) { FS_DEBUG("write: B_NEEDCOMMIT flags set\n"); } /* * Setup for actual write */ if ((off_t)bp->b_blkno * biosize + bp->b_dirtyend > fvdat->filesize) bp->b_dirtyend = fvdat->filesize - (off_t)bp->b_blkno * biosize; if (bp->b_dirtyend > bp->b_dirtyoff) { io.iov_len = uiop->uio_resid = bp->b_dirtyend - bp->b_dirtyoff; uiop->uio_offset = (off_t)bp->b_blkno * biosize + bp->b_dirtyoff; io.iov_base = (char *)bp->b_data + bp->b_dirtyoff; uiop->uio_rw = UIO_WRITE; error = fuse_write_directbackend(vp, uiop, cred, fufh, 0); if (error == EINTR || error == ETIMEDOUT || (!error && (bp->b_flags & B_NEEDCOMMIT))) { bp->b_flags &= ~(B_INVAL | B_NOCACHE); if ((bp->b_flags & B_PAGING) == 0) { bdirty(bp); bp->b_flags &= ~B_DONE; } if ((error == EINTR || error == ETIMEDOUT) && (bp->b_flags & B_ASYNC) == 0) bp->b_flags |= B_EINTR; } else { if (error) { bp->b_ioflags |= BIO_ERROR; bp->b_flags |= B_INVAL; bp->b_error = error; } bp->b_dirtyoff = bp->b_dirtyend = 0; } } else { bp->b_resid = 0; bufdone(bp); return (0); } } bp->b_resid = uiop->uio_resid; bufdone(bp); return (error); } int fuse_io_flushbuf(struct vnode *vp, int waitfor, struct thread *td) { struct vop_fsync_args a = { .a_vp = vp, .a_waitfor = waitfor, .a_td = td, }; return (vop_stdfsync(&a)); } /* * Flush and invalidate all dirty buffers. If another process is already * doing the flush, just wait for completion. */ int fuse_io_invalbuf(struct vnode *vp, struct thread *td) { struct fuse_vnode_data *fvdat = VTOFUD(vp); int error = 0; if (vp->v_iflag & VI_DOOMED) return 0; ASSERT_VOP_ELOCKED(vp, "fuse_io_invalbuf"); while (fvdat->flag & FN_FLUSHINPROG) { struct proc *p = td->td_proc; if (vp->v_mount->mnt_kern_flag & MNTK_UNMOUNTF) return EIO; fvdat->flag |= FN_FLUSHWANT; tsleep(&fvdat->flag, PRIBIO + 2, "fusevinv", 2 * hz); error = 0; if (p != NULL) { PROC_LOCK(p); if (SIGNOTEMPTY(p->p_siglist) || SIGNOTEMPTY(td->td_siglist)) error = EINTR; PROC_UNLOCK(p); } if (error == EINTR) return EINTR; } fvdat->flag |= FN_FLUSHINPROG; if (vp->v_bufobj.bo_object != NULL) { VM_OBJECT_WLOCK(vp->v_bufobj.bo_object); vm_object_page_clean(vp->v_bufobj.bo_object, 0, 0, OBJPC_SYNC); VM_OBJECT_WUNLOCK(vp->v_bufobj.bo_object); } error = vinvalbuf(vp, V_SAVE, PCATCH, 0); while (error) { if (error == ERESTART || error == EINTR) { fvdat->flag &= ~FN_FLUSHINPROG; if (fvdat->flag & FN_FLUSHWANT) { fvdat->flag &= ~FN_FLUSHWANT; wakeup(&fvdat->flag); } return EINTR; } error = vinvalbuf(vp, V_SAVE, PCATCH, 0); } fvdat->flag &= ~FN_FLUSHINPROG; if (fvdat->flag & FN_FLUSHWANT) { fvdat->flag &= ~FN_FLUSHWANT; wakeup(&fvdat->flag); } return (error); } Index: projects/import-googletest-1.8.1/sys/fs/fuse/fuse_ipc.h =================================================================== --- projects/import-googletest-1.8.1/sys/fs/fuse/fuse_ipc.h (revision 344270) +++ projects/import-googletest-1.8.1/sys/fs/fuse/fuse_ipc.h (revision 344271) @@ -1,425 +1,431 @@ /*- * SPDX-License-Identifier: BSD-3-Clause * * Copyright (c) 2007-2009 Google Inc. and Amit Singh * All rights reserved. 
* * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions are * met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * Redistributions in binary form must reproduce the above * copyright notice, this list of conditions and the following disclaimer * in the documentation and/or other materials provided with the * distribution. * * Neither the name of Google Inc. nor the names of its * contributors may be used to endorse or promote products derived from * this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. * * Copyright (C) 2005 Csaba Henk. * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. 
* * $FreeBSD$ */ #ifndef _FUSE_IPC_H_ #define _FUSE_IPC_H_ #include #include struct fuse_iov { void *base; size_t len; size_t allocated_size; int credit; }; void fiov_init(struct fuse_iov *fiov, size_t size); void fiov_teardown(struct fuse_iov *fiov); void fiov_refresh(struct fuse_iov *fiov); void fiov_adjust(struct fuse_iov *fiov, size_t size); #define FUSE_DIMALLOC(fiov, spc1, spc2, amnt) \ do { \ fiov_adjust(fiov, (sizeof(*(spc1)) + (amnt))); \ (spc1) = (fiov)->base; \ (spc2) = (char *)(fiov)->base + (sizeof(*(spc1))); \ } while (0) #define FU_AT_LEAST(siz) max((siz), 160) #define FUSE_ASSERT_AW_DONE(ftick) \ KASSERT((ftick)->tk_aw_link.tqe_next == NULL && \ (ftick)->tk_aw_link.tqe_prev == NULL, \ ("FUSE: ticket still on answer delivery list %p", (ftick))) \ #define FUSE_ASSERT_MS_DONE(ftick) \ KASSERT((ftick)->tk_ms_link.stqe_next == NULL, \ ("FUSE: ticket still on message list %p", (ftick))) struct fuse_ticket; struct fuse_data; typedef int fuse_handler_t(struct fuse_ticket *ftick, struct uio *uio); struct fuse_ticket { /* fields giving the identity of the ticket */ uint64_t tk_unique; struct fuse_data *tk_data; int tk_flag; u_int tk_refcount; /* fields for initiating an upgoing message */ struct fuse_iov tk_ms_fiov; void *tk_ms_bufdata; size_t tk_ms_bufsize; enum { FT_M_FIOV, FT_M_BUF } tk_ms_type; STAILQ_ENTRY(fuse_ticket) tk_ms_link; /* fields for handling answers coming from userspace */ struct fuse_iov tk_aw_fiov; void *tk_aw_bufdata; size_t tk_aw_bufsize; enum { FT_A_FIOV, FT_A_BUF } tk_aw_type; struct fuse_out_header tk_aw_ohead; int tk_aw_errno; struct mtx tk_aw_mtx; fuse_handler_t *tk_aw_handler; TAILQ_ENTRY(fuse_ticket) tk_aw_link; }; #define FT_ANSW 0x01 /* request of ticket has already been answered */ #define FT_DIRTY 0x04 /* ticket has been used */ static __inline__ struct fuse_iov * fticket_resp(struct fuse_ticket *ftick) { return (&ftick->tk_aw_fiov); } static __inline__ int fticket_answered(struct fuse_ticket *ftick) { DEBUGX(FUSE_DEBUG_IPC, "-> ftick=%p\n", ftick); mtx_assert(&ftick->tk_aw_mtx, MA_OWNED); return (ftick->tk_flag & FT_ANSW); } static __inline__ void fticket_set_answered(struct fuse_ticket *ftick) { DEBUGX(FUSE_DEBUG_IPC, "-> ftick=%p\n", ftick); mtx_assert(&ftick->tk_aw_mtx, MA_OWNED); ftick->tk_flag |= FT_ANSW; } static __inline__ enum fuse_opcode fticket_opcode(struct fuse_ticket *ftick) { DEBUGX(FUSE_DEBUG_IPC, "-> ftick=%p\n", ftick); return (((struct fuse_in_header *)(ftick->tk_ms_fiov.base))->opcode); } int fticket_pull(struct fuse_ticket *ftick, struct uio *uio); enum mountpri { FM_NOMOUNTED, FM_PRIMARY, FM_SECONDARY }; /* * The data representing a FUSE session. 
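* Tickets queued on ms_head are waiting to be sent to the daemon, * while tickets on aw_head are waiting for an answer from it; the * ms_mtx and aw_mtx mutexes protect the respective queues (see * fuse_ms_push() and fuse_aw_push() below).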
*/ struct fuse_data { struct cdev *fdev; struct mount *mp; struct vnode *vroot; struct ucred *daemoncred; int dataflags; int ref; struct mtx ms_mtx; STAILQ_HEAD(, fuse_ticket) ms_head; struct mtx aw_mtx; TAILQ_HEAD(, fuse_ticket) aw_head; u_long ticketer; struct sx rename_lock; uint32_t fuse_libabi_major; uint32_t fuse_libabi_minor; uint32_t max_write; uint32_t max_read; uint32_t subtype; char volname[MAXPATHLEN]; struct selinfo ks_rsel; int daemon_timeout; uint64_t notimpl; }; #define FSESS_DEAD 0x0001 /* session is to be closed */ #define FSESS_UNUSED0 0x0002 /* unused */ #define FSESS_INITED 0x0004 /* session has been inited */ #define FSESS_DAEMON_CAN_SPY 0x0010 /* let non-owners access this fs */ /* (and being observed by the daemon) */ #define FSESS_PUSH_SYMLINKS_IN 0x0020 /* prefix absolute symlinks with mp */ #define FSESS_DEFAULT_PERMISSIONS 0x0040 /* kernel does permission checking */ #define FSESS_NO_ATTRCACHE 0x0080 /* no attribute caching */ #define FSESS_NO_READAHEAD 0x0100 /* no readaheads */ #define FSESS_NO_DATACACHE 0x0200 /* disable buffer cache */ #define FSESS_NO_NAMECACHE 0x0400 /* disable name cache */ #define FSESS_NO_MMAP 0x0800 /* disable mmap */ #define FSESS_BROKENIO 0x1000 /* fix broken io */ -extern int fuse_data_cache_enable; +enum fuse_data_cache_mode { + FUSE_CACHE_UC, + FUSE_CACHE_WT, + FUSE_CACHE_WB, +}; + +extern int fuse_data_cache_mode; extern int fuse_data_cache_invalidate; extern int fuse_mmap_enable; extern int fuse_sync_resize; extern int fuse_fix_broken_io; static __inline__ struct fuse_data * fuse_get_mpdata(struct mount *mp) { return mp->mnt_data; } static __inline int fsess_isimpl(struct mount *mp, int opcode) { struct fuse_data *data = fuse_get_mpdata(mp); return (data->notimpl & (1ULL << opcode)) == 0; } static __inline void fsess_set_notimpl(struct mount *mp, int opcode) { struct fuse_data *data = fuse_get_mpdata(mp); data->notimpl |= (1ULL << opcode); } static __inline int fsess_opt_datacache(struct mount *mp) { struct fuse_data *data = fuse_get_mpdata(mp); - return (fuse_data_cache_enable || + return (fuse_data_cache_mode != FUSE_CACHE_UC && (data->dataflags & FSESS_NO_DATACACHE) == 0); } static __inline int fsess_opt_mmap(struct mount *mp) { struct fuse_data *data = fuse_get_mpdata(mp); - if (!(fuse_mmap_enable && fuse_data_cache_enable)) + if (!fuse_mmap_enable || fuse_data_cache_mode == FUSE_CACHE_UC) return 0; return ((data->dataflags & (FSESS_NO_DATACACHE | FSESS_NO_MMAP)) == 0); } static __inline int fsess_opt_brokenio(struct mount *mp) { struct fuse_data *data = fuse_get_mpdata(mp); return (fuse_fix_broken_io || (data->dataflags & FSESS_BROKENIO)); } static __inline__ void fuse_ms_push(struct fuse_ticket *ftick) { DEBUGX(FUSE_DEBUG_IPC, "ftick=%p refcount=%d\n", ftick, ftick->tk_refcount + 1); mtx_assert(&ftick->tk_data->ms_mtx, MA_OWNED); refcount_acquire(&ftick->tk_refcount); STAILQ_INSERT_TAIL(&ftick->tk_data->ms_head, ftick, tk_ms_link); } static __inline__ struct fuse_ticket * fuse_ms_pop(struct fuse_data *data) { struct fuse_ticket *ftick = NULL; mtx_assert(&data->ms_mtx, MA_OWNED); if ((ftick = STAILQ_FIRST(&data->ms_head))) { STAILQ_REMOVE_HEAD(&data->ms_head, tk_ms_link); #ifdef INVARIANTS ftick->tk_ms_link.stqe_next = NULL; #endif } DEBUGX(FUSE_DEBUG_IPC, "ftick=%p refcount=%d\n", ftick, ftick ? 
ftick->tk_refcount : -1); return ftick; } static __inline__ void fuse_aw_push(struct fuse_ticket *ftick) { DEBUGX(FUSE_DEBUG_IPC, "ftick=%p refcount=%d\n", ftick, ftick->tk_refcount + 1); mtx_assert(&ftick->tk_data->aw_mtx, MA_OWNED); refcount_acquire(&ftick->tk_refcount); TAILQ_INSERT_TAIL(&ftick->tk_data->aw_head, ftick, tk_aw_link); } static __inline__ void fuse_aw_remove(struct fuse_ticket *ftick) { DEBUGX(FUSE_DEBUG_IPC, "ftick=%p refcount=%d\n", ftick, ftick->tk_refcount); mtx_assert(&ftick->tk_data->aw_mtx, MA_OWNED); TAILQ_REMOVE(&ftick->tk_data->aw_head, ftick, tk_aw_link); #ifdef INVARIANTS ftick->tk_aw_link.tqe_next = NULL; ftick->tk_aw_link.tqe_prev = NULL; #endif } static __inline__ struct fuse_ticket * fuse_aw_pop(struct fuse_data *data) { struct fuse_ticket *ftick = NULL; mtx_assert(&data->aw_mtx, MA_OWNED); if ((ftick = TAILQ_FIRST(&data->aw_head))) { fuse_aw_remove(ftick); } DEBUGX(FUSE_DEBUG_IPC, "ftick=%p refcount=%d\n", ftick, ftick ? ftick->tk_refcount : -1); return ftick; } struct fuse_ticket *fuse_ticket_fetch(struct fuse_data *data); int fuse_ticket_drop(struct fuse_ticket *ftick); void fuse_insert_callback(struct fuse_ticket *ftick, fuse_handler_t *handler); void fuse_insert_message(struct fuse_ticket *ftick); static __inline__ int fuse_libabi_geq(struct fuse_data *data, uint32_t abi_maj, uint32_t abi_min) { return (data->fuse_libabi_major > abi_maj || (data->fuse_libabi_major == abi_maj && data->fuse_libabi_minor >= abi_min)); } struct fuse_data *fdata_alloc(struct cdev *dev, struct ucred *cred); void fdata_trydestroy(struct fuse_data *data); void fdata_set_dead(struct fuse_data *data); static __inline__ int fdata_get_dead(struct fuse_data *data) { return (data->dataflags & FSESS_DEAD); } struct fuse_dispatcher { struct fuse_ticket *tick; struct fuse_in_header *finh; void *indata; size_t iosize; uint64_t nodeid; int answ_stat; void *answ; }; static __inline__ void fdisp_init(struct fuse_dispatcher *fdisp, size_t iosize) { DEBUGX(FUSE_DEBUG_IPC, "-> fdisp=%p, iosize=%zx\n", fdisp, iosize); fdisp->iosize = iosize; fdisp->tick = NULL; } static __inline__ void fdisp_destroy(struct fuse_dispatcher *fdisp) { DEBUGX(FUSE_DEBUG_IPC, "-> fdisp=%p, ftick=%p\n", fdisp, fdisp->tick); fuse_ticket_drop(fdisp->tick); #ifdef INVARIANTS fdisp->tick = NULL; #endif } void fdisp_make(struct fuse_dispatcher *fdip, enum fuse_opcode op, struct mount *mp, uint64_t nid, struct thread *td, struct ucred *cred); void fdisp_make_pid(struct fuse_dispatcher *fdip, enum fuse_opcode op, struct mount *mp, uint64_t nid, pid_t pid, struct ucred *cred); void fdisp_make_vp(struct fuse_dispatcher *fdip, enum fuse_opcode op, struct vnode *vp, struct thread *td, struct ucred *cred); int fdisp_wait_answ(struct fuse_dispatcher *fdip); static __inline__ int fdisp_simple_putget_vp(struct fuse_dispatcher *fdip, enum fuse_opcode op, struct vnode *vp, struct thread *td, struct ucred *cred) { DEBUGX(FUSE_DEBUG_IPC, "-> fdip=%p, opcode=%d, vp=%p\n", fdip, op, vp); fdisp_make_vp(fdip, op, vp, td, cred); return fdisp_wait_answ(fdip); } #endif /* _FUSE_IPC_H_ */ Index: projects/import-googletest-1.8.1/sys/fs/fuse/fuse_node.c =================================================================== --- projects/import-googletest-1.8.1/sys/fs/fuse/fuse_node.c (revision 344270) +++ projects/import-googletest-1.8.1/sys/fs/fuse/fuse_node.c (revision 344271) @@ -1,402 +1,432 @@ /*- * SPDX-License-Identifier: BSD-3-Clause * * Copyright (c) 2007-2009 Google Inc. and Amit Singh * All rights reserved. 
* * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions are * met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * Redistributions in binary form must reproduce the above * copyright notice, this list of conditions and the following disclaimer * in the documentation and/or other materials provided with the * distribution. * * Neither the name of Google Inc. nor the names of its * contributors may be used to endorse or promote products derived from * this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. * * Copyright (C) 2005 Csaba Henk. * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. 
*/ #include __FBSDID("$FreeBSD$"); #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include "fuse.h" #include "fuse_node.h" #include "fuse_internal.h" #include "fuse_io.h" #include "fuse_ipc.h" #define FUSE_DEBUG_MODULE VNOPS #include "fuse_debug.h" MALLOC_DEFINE(M_FUSEVN, "fuse_vnode", "fuse vnode private data"); +static int sysctl_fuse_cache_mode(SYSCTL_HANDLER_ARGS); + static int fuse_node_count = 0; SYSCTL_INT(_vfs_fuse, OID_AUTO, node_count, CTLFLAG_RD, &fuse_node_count, 0, "Count of FUSE vnodes"); -int fuse_data_cache_enable = 1; +int fuse_data_cache_mode = FUSE_CACHE_WT; -SYSCTL_INT(_vfs_fuse, OID_AUTO, data_cache_enable, CTLFLAG_RW, - &fuse_data_cache_enable, 0, - "enable caching of FUSE file data (including dirty data)"); +SYSCTL_PROC(_vfs_fuse, OID_AUTO, data_cache_mode, CTLTYPE_INT|CTLFLAG_RW, + &fuse_data_cache_mode, 0, sysctl_fuse_cache_mode, "I", + "Zero: disable caching of FUSE file data; One: write-through caching " + "(default); Two: write-back caching (generally unsafe)"); int fuse_data_cache_invalidate = 0; SYSCTL_INT(_vfs_fuse, OID_AUTO, data_cache_invalidate, CTLFLAG_RW, &fuse_data_cache_invalidate, 0, "If non-zero, discard cached clean file data when there are no active file" " users"); int fuse_mmap_enable = 1; SYSCTL_INT(_vfs_fuse, OID_AUTO, mmap_enable, CTLFLAG_RW, &fuse_mmap_enable, 0, - "If non-zero, and data_cache_enable is also non-zero, enable mmap(2) of " + "If non-zero, and data_cache_mode is also non-zero, enable mmap(2) of " "FUSE files"); int fuse_refresh_size = 0; SYSCTL_INT(_vfs_fuse, OID_AUTO, refresh_size, CTLFLAG_RW, &fuse_refresh_size, 0, "If non-zero, and no dirty file extension data is buffered, fetch file " "size before write operations"); int fuse_sync_resize = 1; SYSCTL_INT(_vfs_fuse, OID_AUTO, sync_resize, CTLFLAG_RW, &fuse_sync_resize, 0, "If a cached write extended a file, inform FUSE filesystem of the changed" "size immediately subsequent to the issued writes"); int fuse_fix_broken_io = 0; SYSCTL_INT(_vfs_fuse, OID_AUTO, fix_broken_io, CTLFLAG_RW, &fuse_fix_broken_io, 0, "If non-zero, print a diagnostic warning if a userspace filesystem returns" " EIO on reads of recently extended portions of files"); +static int +sysctl_fuse_cache_mode(SYSCTL_HANDLER_ARGS) +{ + int val, error; + + val = *(int *)arg1; + error = sysctl_handle_int(oidp, &val, 0, req); + if (error || !req->newptr) + return (error); + + switch (val) { + case FUSE_CACHE_UC: + case FUSE_CACHE_WT: + case FUSE_CACHE_WB: + *(int *)arg1 = val; + break; + default: + return (EDOM); + } + return (0); +} + static void fuse_vnode_init(struct vnode *vp, struct fuse_vnode_data *fvdat, uint64_t nodeid, enum vtype vtyp) { int i; fvdat->nid = nodeid; + vattr_null(&fvdat->cached_attrs); if (nodeid == FUSE_ROOT_ID) { vp->v_vflag |= VV_ROOT; } vp->v_type = vtyp; vp->v_data = fvdat; for (i = 0; i < FUFH_MAXTYPE; i++) fvdat->fufh[i].fh_type = FUFH_INVALID; atomic_add_acq_int(&fuse_node_count, 1); } void fuse_vnode_destroy(struct vnode *vp) { struct fuse_vnode_data *fvdat = vp->v_data; vp->v_data = NULL; free(fvdat, M_FUSEVN); atomic_subtract_acq_int(&fuse_node_count, 1); } static int fuse_vnode_cmp(struct vnode *vp, void *nidp) { return (VTOI(vp) != *((uint64_t *)nidp)); } static uint32_t __inline fuse_vnode_hash(uint64_t id) { return (fnv_32_buf(&id, sizeof(id), FNV1_32_INIT)); } static int fuse_vnode_alloc(struct mount *mp, struct 
thread *td, uint64_t nodeid, enum vtype vtyp, struct vnode **vpp) { struct fuse_vnode_data *fvdat; struct vnode *vp2; int err = 0; FS_DEBUG("been asked for vno #%ju\n", (uintmax_t)nodeid); if (vtyp == VNON) { return EINVAL; } *vpp = NULL; err = vfs_hash_get(mp, fuse_vnode_hash(nodeid), LK_EXCLUSIVE, td, vpp, fuse_vnode_cmp, &nodeid); if (err) return (err); if (*vpp) { MPASS((*vpp)->v_type == vtyp && (*vpp)->v_data != NULL); FS_DEBUG("vnode taken from hash\n"); return (0); } fvdat = malloc(sizeof(*fvdat), M_FUSEVN, M_WAITOK | M_ZERO); err = getnewvnode("fuse", mp, &fuse_vnops, vpp); if (err) { free(fvdat, M_FUSEVN); return (err); } lockmgr((*vpp)->v_vnlock, LK_EXCLUSIVE, NULL); fuse_vnode_init(*vpp, fvdat, nodeid, vtyp); err = insmntque(*vpp, mp); ASSERT_VOP_ELOCKED(*vpp, "fuse_vnode_alloc"); if (err) { free(fvdat, M_FUSEVN); *vpp = NULL; return (err); } err = vfs_hash_insert(*vpp, fuse_vnode_hash(nodeid), LK_EXCLUSIVE, td, &vp2, fuse_vnode_cmp, &nodeid); if (err) return (err); if (vp2 != NULL) { *vpp = vp2; return (0); } ASSERT_VOP_ELOCKED(*vpp, "fuse_vnode_alloc"); return (0); } int fuse_vnode_get(struct mount *mp, + struct fuse_entry_out *feo, uint64_t nodeid, struct vnode *dvp, struct vnode **vpp, struct componentname *cnp, enum vtype vtyp) { struct thread *td = (cnp != NULL ? cnp->cn_thread : curthread); int err = 0; debug_printf("dvp=%p\n", dvp); err = fuse_vnode_alloc(mp, td, nodeid, vtyp, vpp); if (err) { return err; } if (dvp != NULL) { MPASS((cnp->cn_flags & ISDOTDOT) == 0); MPASS(!(cnp->cn_namelen == 1 && cnp->cn_nameptr[0] == '.')); fuse_vnode_setparent(*vpp, dvp); } - if (dvp != NULL && cnp != NULL && (cnp->cn_flags & MAKEENTRY) != 0) { + if (dvp != NULL && cnp != NULL && (cnp->cn_flags & MAKEENTRY) != 0 && + feo != NULL && + (feo->entry_valid != 0 || feo->entry_valid_nsec != 0)) { ASSERT_VOP_LOCKED(*vpp, "fuse_vnode_get"); ASSERT_VOP_LOCKED(dvp, "fuse_vnode_get"); cache_enter(dvp, *vpp, cnp); } /* * In userland, libfuse uses cached lookups for dot and dotdot entries, * thus it does not really bump the nlookup counter for forget. * Follow the same semantic and avoid bumping it in order to keep * nlookup counters consistent. */ if (cnp == NULL || ((cnp->cn_flags & ISDOTDOT) == 0 && (cnp->cn_namelen != 1 || cnp->cn_nameptr[0] != '.'))) VTOFUD(*vpp)->nlookup++; return 0; } void fuse_vnode_open(struct vnode *vp, int32_t fuse_open_flags, struct thread *td) { /* * This function is called for every vnode open. * Merge fuse_open_flags; it may be 0. */ /* * Ideally speaking, direct I/O should be enabled per * file descriptor, but there is no way to provide for * that in this implementation. * * Nor is there an obvious reason why two different * fd's on the same vnode would want DIRECT_IO turned * on and off independently. The Linux-based * implementation, however, works on an fd rather than * an inode and does provide such a feature.
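* * As implemented below, the flag is tracked on the vnode itself * (FN_DIRECTIO), so whichever open the kernel sees last determines * the setting for the entire node.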
* * XXXIP: Handle fd based DIRECT_IO */ if (fuse_open_flags & FOPEN_DIRECT_IO) { ASSERT_VOP_ELOCKED(vp, __func__); VTOFUD(vp)->flag |= FN_DIRECTIO; fuse_io_invalbuf(vp, td); } else { if ((fuse_open_flags & FOPEN_KEEP_CACHE) == 0) fuse_io_invalbuf(vp, td); VTOFUD(vp)->flag &= ~FN_DIRECTIO; } if (vnode_vtype(vp) == VREG) { /* XXXIP prevent getattr, by using cached node size */ vnode_create_vobject(vp, 0, td); } } int fuse_vnode_savesize(struct vnode *vp, struct ucred *cred) { struct fuse_vnode_data *fvdat = VTOFUD(vp); struct thread *td = curthread; struct fuse_filehandle *fufh = NULL; struct fuse_dispatcher fdi; struct fuse_setattr_in *fsai; int err = 0; FS_DEBUG("inode=%ju size=%ju\n", (uintmax_t)VTOI(vp), (uintmax_t)fvdat->filesize); ASSERT_VOP_ELOCKED(vp, "fuse_io_extend"); if (fuse_isdeadfs(vp)) { return EBADF; } if (vnode_vtype(vp) == VDIR) { return EISDIR; } if (vfs_isrdonly(vnode_mount(vp))) { return EROFS; } if (cred == NULL) { cred = td->td_ucred; } fdisp_init(&fdi, sizeof(*fsai)); fdisp_make_vp(&fdi, FUSE_SETATTR, vp, td, cred); fsai = fdi.indata; fsai->valid = 0; /* Truncate to a new value. */ fsai->size = fvdat->filesize; fsai->valid |= FATTR_SIZE; fuse_filehandle_getrw(vp, FUFH_WRONLY, &fufh); if (fufh) { fsai->fh = fufh->fh_id; fsai->valid |= FATTR_FH; } err = fdisp_wait_answ(&fdi); fdisp_destroy(&fdi); if (err == 0) fvdat->flag &= ~FN_SIZECHANGE; return err; } void fuse_vnode_refreshsize(struct vnode *vp, struct ucred *cred) { struct fuse_vnode_data *fvdat = VTOFUD(vp); struct vattr va; if ((fvdat->flag & FN_SIZECHANGE) != 0 || + fuse_data_cache_mode == FUSE_CACHE_UC || (fuse_refresh_size == 0 && fvdat->filesize != 0)) return; VOP_GETATTR(vp, &va, cred); FS_DEBUG("refreshed file size: %jd\n", (intmax_t)VTOFUD(vp)->filesize); } int fuse_vnode_setsize(struct vnode *vp, struct ucred *cred, off_t newsize) { struct fuse_vnode_data *fvdat = VTOFUD(vp); off_t oldsize; int err = 0; FS_DEBUG("inode=%ju oldsize=%ju newsize=%ju\n", (uintmax_t)VTOI(vp), (uintmax_t)fvdat->filesize, (uintmax_t)newsize); ASSERT_VOP_ELOCKED(vp, "fuse_vnode_setsize"); oldsize = fvdat->filesize; fvdat->filesize = newsize; fvdat->flag |= FN_SIZECHANGE; if (newsize < oldsize) { err = vtruncbuf(vp, cred, newsize, fuse_iosize(vp)); } vnode_pager_setsize(vp, newsize); return err; } Index: projects/import-googletest-1.8.1/sys/fs/fuse/fuse_node.h =================================================================== --- projects/import-googletest-1.8.1/sys/fs/fuse/fuse_node.h (revision 344270) +++ projects/import-googletest-1.8.1/sys/fs/fuse/fuse_node.h (revision 344271) @@ -1,133 +1,137 @@ /*- * SPDX-License-Identifier: BSD-3-Clause * * Copyright (c) 2007-2009 Google Inc. and Amit Singh * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions are * met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * Redistributions in binary form must reproduce the above * copyright notice, this list of conditions and the following disclaimer * in the documentation and/or other materials provided with the * distribution. * * Neither the name of Google Inc. nor the names of its * contributors may be used to endorse or promote products derived from * this software without specific prior written permission. 
* * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. * * Copyright (C) 2005 Csaba Henk. * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * $FreeBSD$ */ #ifndef _FUSE_NODE_H_ #define _FUSE_NODE_H_ #include #include #include "fuse_file.h" #define FN_REVOKED 0x00000020 #define FN_FLUSHINPROG 0x00000040 #define FN_FLUSHWANT 0x00000080 #define FN_SIZECHANGE 0x00000100 #define FN_DIRECTIO 0x00000200 struct fuse_vnode_data { /** self **/ uint64_t nid; /** parent **/ /* XXXIP very likely to be stale, it's not updated in rename() */ uint64_t parent_nid; /** I/O **/ struct fuse_filehandle fufh[FUFH_MAXTYPE]; /** flags **/ uint32_t flag; /** meta **/ + bool valid_attr_cache; struct vattr cached_attrs; off_t filesize; uint64_t nlookup; enum vtype vtype; }; #define VTOFUD(vp) \ ((struct fuse_vnode_data *)((vp)->v_data)) #define VTOI(vp) (VTOFUD(vp)->nid) -#define VTOVA(vp) (&(VTOFUD(vp)->cached_attrs)) +#define VTOVA(vp) \ + (VTOFUD(vp)->valid_attr_cache ? \ + &(VTOFUD(vp)->cached_attrs) : NULL) #define VTOILLU(vp) ((uint64_t)(VTOFUD(vp) ? 
VTOI(vp) : 0)) #define FUSE_NULL_ID 0 extern struct vop_vector fuse_vnops; static __inline void fuse_vnode_setparent(struct vnode *vp, struct vnode *dvp) { if (dvp != NULL && vp->v_type == VDIR) { MPASS(dvp->v_type == VDIR); VTOFUD(vp)->parent_nid = VTOI(dvp); } } void fuse_vnode_destroy(struct vnode *vp); int fuse_vnode_get(struct mount *mp, + struct fuse_entry_out *feo, uint64_t nodeid, struct vnode *dvp, struct vnode **vpp, struct componentname *cnp, enum vtype vtyp); void fuse_vnode_open(struct vnode *vp, int32_t fuse_open_flags, struct thread *td); void fuse_vnode_refreshsize(struct vnode *vp, struct ucred *cred); int fuse_vnode_savesize(struct vnode *vp, struct ucred *cred); int fuse_vnode_setsize(struct vnode *vp, struct ucred *cred, off_t newsize); #endif /* _FUSE_NODE_H_ */ Index: projects/import-googletest-1.8.1/sys/fs/fuse/fuse_vfsops.c =================================================================== --- projects/import-googletest-1.8.1/sys/fs/fuse/fuse_vfsops.c (revision 344270) +++ projects/import-googletest-1.8.1/sys/fs/fuse/fuse_vfsops.c (revision 344271) @@ -1,532 +1,533 @@ /*- * SPDX-License-Identifier: BSD-3-Clause * * Copyright (c) 2007-2009 Google Inc. and Amit Singh * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions are * met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * Redistributions in binary form must reproduce the above * copyright notice, this list of conditions and the following disclaimer * in the documentation and/or other materials provided with the * distribution. * * Neither the name of Google Inc. nor the names of its * contributors may be used to endorse or promote products derived from * this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. * * Copyright (C) 2005 Csaba Henk. * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. 
IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ #include __FBSDID("$FreeBSD$"); #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include "fuse.h" #include "fuse_param.h" #include "fuse_node.h" #include "fuse_ipc.h" #include "fuse_internal.h" #include #include #define FUSE_DEBUG_MODULE VFSOPS #include "fuse_debug.h" /* This will do for privilege types for now */ #ifndef PRIV_VFS_FUSE_ALLOWOTHER #define PRIV_VFS_FUSE_ALLOWOTHER PRIV_VFS_MOUNT_NONUSER #endif #ifndef PRIV_VFS_FUSE_MOUNT_NONUSER #define PRIV_VFS_FUSE_MOUNT_NONUSER PRIV_VFS_MOUNT_NONUSER #endif #ifndef PRIV_VFS_FUSE_SYNC_UNMOUNT #define PRIV_VFS_FUSE_SYNC_UNMOUNT PRIV_VFS_MOUNT_NONUSER #endif static vfs_mount_t fuse_vfsop_mount; static vfs_unmount_t fuse_vfsop_unmount; static vfs_root_t fuse_vfsop_root; static vfs_statfs_t fuse_vfsop_statfs; struct vfsops fuse_vfsops = { .vfs_mount = fuse_vfsop_mount, .vfs_unmount = fuse_vfsop_unmount, .vfs_root = fuse_vfsop_root, .vfs_statfs = fuse_vfsop_statfs, }; SYSCTL_INT(_vfs_fuse, OID_AUTO, init_backgrounded, CTLFLAG_RD, SYSCTL_NULL_INT_PTR, 1, "indicate async handshake"); static int fuse_enforce_dev_perms = 0; SYSCTL_INT(_vfs_fuse, OID_AUTO, enforce_dev_perms, CTLFLAG_RW, &fuse_enforce_dev_perms, 0, "enforce fuse device permissions for secondary mounts"); static unsigned sync_unmount = 1; SYSCTL_UINT(_vfs_fuse, OID_AUTO, sync_unmount, CTLFLAG_RW, &sync_unmount, 0, "specify when to use synchronous unmount"); MALLOC_DEFINE(M_FUSEVFS, "fuse_filesystem", "buffer for fuse vfs layer"); static int fuse_getdevice(const char *fspec, struct thread *td, struct cdev **fdevp) { struct nameidata nd, *ndp = &nd; struct vnode *devvp; struct cdev *fdev; int err; /* * Not an update, or updating the name: look up the name * and verify that it refers to a sensible disk device. */ NDINIT(ndp, LOOKUP, FOLLOW, UIO_SYSSPACE, fspec, td); if ((err = namei(ndp)) != 0) return err; NDFREE(ndp, NDF_ONLY_PNBUF); devvp = ndp->ni_vp; if (devvp->v_type != VCHR) { vrele(devvp); return ENXIO; } fdev = devvp->v_rdev; dev_ref(fdev); if (fuse_enforce_dev_perms) { /* * Check if mounter can open the fuse device. * * This has significance only if we are doing a secondary mount * which doesn't involve actually opening fuse devices, but we * still want to enforce the permissions of the device (in * order to keep control over the circle of fuse users). * * (In case of primary mounts, we are either the superuser so * we can do anything anyway, or we can mount only if the * device is already opened by us, ie. we are permitted to open * the device.) 
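* * Concretely: with fuse_enforce_dev_perms set, a user who cannot * open the fuse device node for reading and writing fails the * VOP_ACCESS() check below and therefore cannot attach a secondary * mount to another user's session.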
*/ #if 0 #ifdef MAC err = mac_check_vnode_open(td->td_ucred, devvp, VREAD | VWRITE); if (!err) #endif #endif /* 0 */ err = VOP_ACCESS(devvp, VREAD | VWRITE, td->td_ucred, td); if (err) { vrele(devvp); dev_rel(fdev); return err; } } /* * according to coda code, no extra lock is needed -- * although in sys/vnode.h this field is marked "v" */ vrele(devvp); if (!fdev->si_devsw || strcmp("fuse", fdev->si_devsw->d_name)) { dev_rel(fdev); return ENXIO; } *fdevp = fdev; return 0; } #define FUSE_FLAGOPT(fnam, fval) do { \ vfs_flagopt(opts, #fnam, &mntopts, fval); \ vfs_flagopt(opts, "__" #fnam, &__mntopts, fval); \ } while (0) static int fuse_vfsop_mount(struct mount *mp) { int err; uint64_t mntopts, __mntopts; uint32_t max_read; int daemon_timeout; int fd; size_t len; struct cdev *fdev; struct fuse_data *data; struct thread *td; struct file *fp, *fptmp; char *fspec, *subtype; struct vfsoptlist *opts; subtype = NULL; max_read = ~0; err = 0; mntopts = 0; __mntopts = 0; td = curthread; fuse_trace_printf_vfsop(); if (mp->mnt_flag & MNT_UPDATE) return EOPNOTSUPP; MNT_ILOCK(mp); mp->mnt_flag |= MNT_SYNCHRONOUS; mp->mnt_data = NULL; MNT_IUNLOCK(mp); /* Get the new options passed to mount */ opts = mp->mnt_optnew; if (!opts) return EINVAL; /* `fspath' contains the mount point (eg. /mnt/fuse/sshfs); REQUIRED */ if (!vfs_getopts(opts, "fspath", &err)) return err; /* `from' contains the device name (eg. /dev/fuse0); REQUIRED */ fspec = vfs_getopts(opts, "from", &err); if (!fspec) return err; /* `fd' contains the filedescriptor for this session; REQUIRED */ if (vfs_scanopt(opts, "fd", "%d", &fd) != 1) return EINVAL; err = fuse_getdevice(fspec, td, &fdev); if (err != 0) return err; /* * With the help of underscored options the mount program * can inform us from the flags it sets by default */ FUSE_FLAGOPT(allow_other, FSESS_DAEMON_CAN_SPY); FUSE_FLAGOPT(push_symlinks_in, FSESS_PUSH_SYMLINKS_IN); FUSE_FLAGOPT(default_permissions, FSESS_DEFAULT_PERMISSIONS); FUSE_FLAGOPT(no_attrcache, FSESS_NO_ATTRCACHE); FUSE_FLAGOPT(no_readahed, FSESS_NO_READAHEAD); FUSE_FLAGOPT(no_datacache, FSESS_NO_DATACACHE); FUSE_FLAGOPT(no_namecache, FSESS_NO_NAMECACHE); FUSE_FLAGOPT(no_mmap, FSESS_NO_MMAP); FUSE_FLAGOPT(brokenio, FSESS_BROKENIO); (void)vfs_scanopt(opts, "max_read=", "%u", &max_read); if (vfs_scanopt(opts, "timeout=", "%u", &daemon_timeout) == 1) { if (daemon_timeout < FUSE_MIN_DAEMON_TIMEOUT) daemon_timeout = FUSE_MIN_DAEMON_TIMEOUT; else if (daemon_timeout > FUSE_MAX_DAEMON_TIMEOUT) daemon_timeout = FUSE_MAX_DAEMON_TIMEOUT; } else { daemon_timeout = FUSE_DEFAULT_DAEMON_TIMEOUT; } subtype = vfs_getopts(opts, "subtype=", &err); FS_DEBUG2G("mntopts 0x%jx\n", (uintmax_t)mntopts); err = fget(td, fd, &cap_read_rights, &fp); if (err != 0) { FS_DEBUG("invalid or not opened device: data=%p\n", data); goto out; } fptmp = td->td_fpop; td->td_fpop = fp; err = devfs_get_cdevpriv((void **)&data); td->td_fpop = fptmp; fdrop(fp, td); FUSE_LOCK(); if (err != 0 || data == NULL || data->mp != NULL) { FS_DEBUG("invalid or not opened device: data=%p data.mp=%p\n", data, data != NULL ? 
data->mp : NULL); err = ENXIO; FUSE_UNLOCK(); goto out; } if (fdata_get_dead(data)) { FS_DEBUG("device is dead during mount: data=%p\n", data); err = ENOTCONN; FUSE_UNLOCK(); goto out; } /* Sanity + permission checks */ if (!data->daemoncred) panic("fuse daemon found, but identity unknown"); if (mntopts & FSESS_DAEMON_CAN_SPY) err = priv_check(td, PRIV_VFS_FUSE_ALLOWOTHER); if (err == 0 && td->td_ucred->cr_uid != data->daemoncred->cr_uid) /* are we allowed to do the first mount? */ err = priv_check(td, PRIV_VFS_FUSE_MOUNT_NONUSER); if (err) { FUSE_UNLOCK(); goto out; } data->ref++; data->mp = mp; data->dataflags |= mntopts; data->max_read = max_read; data->daemon_timeout = daemon_timeout; FUSE_UNLOCK(); vfs_getnewfsid(mp); MNT_ILOCK(mp); mp->mnt_data = data; mp->mnt_flag |= MNT_LOCAL; mp->mnt_kern_flag |= MNTK_USES_BCACHE; MNT_IUNLOCK(mp); /* We need this here as this slot is used by getnewvnode() */ mp->mnt_stat.f_iosize = DFLTPHYS; if (subtype) { strlcat(mp->mnt_stat.f_fstypename, ".", MFSNAMELEN); strlcat(mp->mnt_stat.f_fstypename, subtype, MFSNAMELEN); } copystr(fspec, mp->mnt_stat.f_mntfromname, MNAMELEN - 1, &len); bzero(mp->mnt_stat.f_mntfromname + len, MNAMELEN - len); FS_DEBUG2G("mp %p: %s\n", mp, mp->mnt_stat.f_mntfromname); /* Now handshaking with daemon */ fuse_internal_send_init(data, td); out: if (err) { FUSE_LOCK(); if (data->mp == mp) { /* * Destroy device only if we acquired reference to * it */ FS_DEBUG("mount failed, destroy device: data=%p mp=%p" " err=%d\n", data, mp, err); data->mp = NULL; fdata_trydestroy(data); } FUSE_UNLOCK(); dev_rel(fdev); } return err; } static int fuse_vfsop_unmount(struct mount *mp, int mntflags) { int err = 0; int flags = 0; struct cdev *fdev; struct fuse_data *data; struct fuse_dispatcher fdi; struct thread *td = curthread; fuse_trace_printf_vfsop(); if (mntflags & MNT_FORCE) { flags |= FORCECLOSE; } data = fuse_get_mpdata(mp); if (!data) { panic("no private data for mount point?"); } /* There is 1 extra root vnode reference (mp->mnt_data). 
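* That reference is taken in fuse_vfsop_root() and is dropped here, * before vflush(), so that the root vnode does not appear busy * while the mount's vnodes are being flushed.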
*/ FUSE_LOCK(); if (data->vroot != NULL) { struct vnode *vroot = data->vroot; data->vroot = NULL; FUSE_UNLOCK(); vrele(vroot); } else FUSE_UNLOCK(); err = vflush(mp, 0, flags, td); if (err) { debug_printf("vflush failed"); return err; } if (fdata_get_dead(data)) { goto alreadydead; } fdisp_init(&fdi, 0); fdisp_make(&fdi, FUSE_DESTROY, mp, 0, td, NULL); err = fdisp_wait_answ(&fdi); fdisp_destroy(&fdi); fdata_set_dead(data); alreadydead: FUSE_LOCK(); data->mp = NULL; fdev = data->fdev; fdata_trydestroy(data); FUSE_UNLOCK(); MNT_ILOCK(mp); mp->mnt_data = NULL; mp->mnt_flag &= ~MNT_LOCAL; MNT_IUNLOCK(mp); dev_rel(fdev); return 0; } static int fuse_vfsop_root(struct mount *mp, int lkflags, struct vnode **vpp) { struct fuse_data *data = fuse_get_mpdata(mp); int err = 0; if (data->vroot != NULL) { err = vget(data->vroot, lkflags, curthread); if (err == 0) *vpp = data->vroot; } else { - err = fuse_vnode_get(mp, FUSE_ROOT_ID, NULL, vpp, NULL, VDIR); + err = fuse_vnode_get(mp, NULL, FUSE_ROOT_ID, NULL, vpp, NULL, + VDIR); if (err == 0) { FUSE_LOCK(); MPASS(data->vroot == NULL || data->vroot == *vpp); if (data->vroot == NULL) { FS_DEBUG("new root vnode\n"); data->vroot = *vpp; FUSE_UNLOCK(); vref(*vpp); } else if (data->vroot != *vpp) { FS_DEBUG("root vnode race\n"); FUSE_UNLOCK(); VOP_UNLOCK(*vpp, 0); vrele(*vpp); vrecycle(*vpp); *vpp = data->vroot; } else FUSE_UNLOCK(); } } return err; } static int fuse_vfsop_statfs(struct mount *mp, struct statfs *sbp) { struct fuse_dispatcher fdi; int err = 0; struct fuse_statfs_out *fsfo; struct fuse_data *data; FS_DEBUG2G("mp %p: %s\n", mp, mp->mnt_stat.f_mntfromname); data = fuse_get_mpdata(mp); if (!(data->dataflags & FSESS_INITED)) goto fake; fdisp_init(&fdi, 0); fdisp_make(&fdi, FUSE_STATFS, mp, FUSE_ROOT_ID, NULL, NULL); err = fdisp_wait_answ(&fdi); if (err) { fdisp_destroy(&fdi); if (err == ENOTCONN) { /* * We want to seem a legitimate fs even if the daemon * is stiff dead... (so that, e.g., we can still do path * based unmounting after the daemon dies). */ goto fake; } return err; } fsfo = fdi.answ; sbp->f_blocks = fsfo->st.blocks; sbp->f_bfree = fsfo->st.bfree; sbp->f_bavail = fsfo->st.bavail; sbp->f_files = fsfo->st.files; sbp->f_ffree = fsfo->st.ffree; /* cast from uint64_t to int64_t */ sbp->f_namemax = fsfo->st.namelen; sbp->f_bsize = fsfo->st.frsize; /* cast from uint32_t to uint64_t */ FS_DEBUG("fuse_statfs_out -- blocks: %llu, bfree: %llu, bavail: %llu, " "files: %llu, ffree: %llu, bsize: %i, namelen: %i\n", (unsigned long long)fsfo->st.blocks, (unsigned long long)fsfo->st.bfree, (unsigned long long)fsfo->st.bavail, (unsigned long long)fsfo->st.files, (unsigned long long)fsfo->st.ffree, fsfo->st.bsize, fsfo->st.namelen); fdisp_destroy(&fdi); return 0; fake: sbp->f_blocks = 0; sbp->f_bfree = 0; sbp->f_bavail = 0; sbp->f_files = 0; sbp->f_ffree = 0; sbp->f_namemax = 0; sbp->f_bsize = FUSE_DEFAULT_BLOCKSIZE; return 0; } Index: projects/import-googletest-1.8.1/sys/fs/fuse/fuse_vnops.c =================================================================== --- projects/import-googletest-1.8.1/sys/fs/fuse/fuse_vnops.c (revision 344270) +++ projects/import-googletest-1.8.1/sys/fs/fuse/fuse_vnops.c (revision 344271) @@ -1,2391 +1,2419 @@ /*- * SPDX-License-Identifier: BSD-3-Clause * * Copyright (c) 2007-2009 Google Inc. and Amit Singh * All rights reserved.
* * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions are * met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * Redistributions in binary form must reproduce the above * copyright notice, this list of conditions and the following disclaimer * in the documentation and/or other materials provided with the * distribution. * * Neither the name of Google Inc. nor the names of its * contributors may be used to endorse or promote products derived from * this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. * * Copyright (C) 2005 Csaba Henk. * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. 
*/ #include __FBSDID("$FreeBSD$"); #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include "fuse.h" #include "fuse_file.h" #include "fuse_internal.h" #include "fuse_ipc.h" #include "fuse_node.h" #include "fuse_param.h" #include "fuse_io.h" #include #define FUSE_DEBUG_MODULE VNOPS #include "fuse_debug.h" /* vnode ops */ static vop_access_t fuse_vnop_access; static vop_close_t fuse_vnop_close; static vop_create_t fuse_vnop_create; static vop_deleteextattr_t fuse_vnop_deleteextattr; static vop_fsync_t fuse_vnop_fsync; static vop_getattr_t fuse_vnop_getattr; static vop_getextattr_t fuse_vnop_getextattr; static vop_inactive_t fuse_vnop_inactive; static vop_link_t fuse_vnop_link; static vop_listextattr_t fuse_vnop_listextattr; static vop_lookup_t fuse_vnop_lookup; static vop_mkdir_t fuse_vnop_mkdir; static vop_mknod_t fuse_vnop_mknod; static vop_open_t fuse_vnop_open; static vop_pathconf_t fuse_vnop_pathconf; static vop_read_t fuse_vnop_read; static vop_readdir_t fuse_vnop_readdir; static vop_readlink_t fuse_vnop_readlink; static vop_reclaim_t fuse_vnop_reclaim; static vop_remove_t fuse_vnop_remove; static vop_rename_t fuse_vnop_rename; static vop_rmdir_t fuse_vnop_rmdir; static vop_setattr_t fuse_vnop_setattr; static vop_setextattr_t fuse_vnop_setextattr; static vop_strategy_t fuse_vnop_strategy; static vop_symlink_t fuse_vnop_symlink; static vop_write_t fuse_vnop_write; static vop_getpages_t fuse_vnop_getpages; static vop_putpages_t fuse_vnop_putpages; static vop_print_t fuse_vnop_print; struct vop_vector fuse_vnops = { .vop_default = &default_vnodeops, .vop_access = fuse_vnop_access, .vop_close = fuse_vnop_close, .vop_create = fuse_vnop_create, .vop_deleteextattr = fuse_vnop_deleteextattr, .vop_fsync = fuse_vnop_fsync, .vop_getattr = fuse_vnop_getattr, .vop_getextattr = fuse_vnop_getextattr, .vop_inactive = fuse_vnop_inactive, .vop_link = fuse_vnop_link, .vop_listextattr = fuse_vnop_listextattr, .vop_lookup = fuse_vnop_lookup, .vop_mkdir = fuse_vnop_mkdir, .vop_mknod = fuse_vnop_mknod, .vop_open = fuse_vnop_open, .vop_pathconf = fuse_vnop_pathconf, .vop_read = fuse_vnop_read, .vop_readdir = fuse_vnop_readdir, .vop_readlink = fuse_vnop_readlink, .vop_reclaim = fuse_vnop_reclaim, .vop_remove = fuse_vnop_remove, .vop_rename = fuse_vnop_rename, .vop_rmdir = fuse_vnop_rmdir, .vop_setattr = fuse_vnop_setattr, .vop_setextattr = fuse_vnop_setextattr, .vop_strategy = fuse_vnop_strategy, .vop_symlink = fuse_vnop_symlink, .vop_write = fuse_vnop_write, .vop_getpages = fuse_vnop_getpages, .vop_putpages = fuse_vnop_putpages, .vop_print = fuse_vnop_print, }; static u_long fuse_lookup_cache_hits = 0; SYSCTL_ULONG(_vfs_fuse, OID_AUTO, lookup_cache_hits, CTLFLAG_RD, &fuse_lookup_cache_hits, 0, ""); static u_long fuse_lookup_cache_misses = 0; SYSCTL_ULONG(_vfs_fuse, OID_AUTO, lookup_cache_misses, CTLFLAG_RD, &fuse_lookup_cache_misses, 0, ""); int fuse_lookup_cache_enable = 1; SYSCTL_INT(_vfs_fuse, OID_AUTO, lookup_cache_enable, CTLFLAG_RW, &fuse_lookup_cache_enable, 0, ""); /* * XXX: This feature is highly experimental and can bring to instabilities, * needs revisiting before to be enabled by default. 
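* When enabled, fuse_vnop_inactive() calls vrecycle() on vnodes * that carry the FN_REVOKED flag instead of leaving them to age * out of the vnode cache.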
*/ static int fuse_reclaim_revoked = 0; SYSCTL_INT(_vfs_fuse, OID_AUTO, reclaim_revoked, CTLFLAG_RW, &fuse_reclaim_revoked, 0, ""); uma_zone_t fuse_pbuf_zone; #define fuse_vm_page_lock(m) vm_page_lock((m)); #define fuse_vm_page_unlock(m) vm_page_unlock((m)); #define fuse_vm_page_lock_queues() ((void)0) #define fuse_vm_page_unlock_queues() ((void)0) /* struct vnop_access_args { struct vnode *a_vp; #if VOP_ACCESS_TAKES_ACCMODE_T accmode_t a_accmode; #else int a_mode; #endif struct ucred *a_cred; struct thread *a_td; }; */ static int fuse_vnop_access(struct vop_access_args *ap) { struct vnode *vp = ap->a_vp; int accmode = ap->a_accmode; struct ucred *cred = ap->a_cred; struct fuse_access_param facp; struct fuse_data *data = fuse_get_mpdata(vnode_mount(vp)); int err; FS_DEBUG2G("inode=%ju\n", (uintmax_t)VTOI(vp)); if (fuse_isdeadfs(vp)) { if (vnode_isvroot(vp)) { return 0; } return ENXIO; } if (!(data->dataflags & FSESS_INITED)) { if (vnode_isvroot(vp)) { if (priv_check_cred(cred, PRIV_VFS_ADMIN) || (fuse_match_cred(data->daemoncred, cred) == 0)) { return 0; } } return EBADF; } if (vnode_islnk(vp)) { return 0; } bzero(&facp, sizeof(facp)); err = fuse_internal_access(vp, accmode, &facp, ap->a_td, ap->a_cred); FS_DEBUG2G("err=%d accmode=0x%x\n", err, accmode); return err; } /* struct vnop_close_args { struct vnode *a_vp; int a_fflag; struct ucred *a_cred; struct thread *a_td; }; */ static int fuse_vnop_close(struct vop_close_args *ap) { struct vnode *vp = ap->a_vp; struct ucred *cred = ap->a_cred; int fflag = ap->a_fflag; fufh_type_t fufh_type; fuse_trace_printf_vnop(); if (fuse_isdeadfs(vp)) { return 0; } if (vnode_isdir(vp)) { if (fuse_filehandle_valid(vp, FUFH_RDONLY)) { fuse_filehandle_close(vp, FUFH_RDONLY, NULL, cred); } return 0; } if (fflag & IO_NDELAY) { return 0; } fufh_type = fuse_filehandle_xlate_from_fflags(fflag); if (!fuse_filehandle_valid(vp, fufh_type)) { int i; for (i = 0; i < FUFH_MAXTYPE; i++) if (fuse_filehandle_valid(vp, i)) break; if (i == FUFH_MAXTYPE) panic("FUSE: fufh type %d found to be invalid in close" " (fflag=0x%x)\n", fufh_type, fflag); } if ((VTOFUD(vp)->flag & FN_SIZECHANGE) != 0) { fuse_vnode_savesize(vp, cred); } return 0; } /* struct vnop_create_args { struct vnode *a_dvp; struct vnode **a_vpp; struct componentname *a_cnp; struct vattr *a_vap; }; */ static int fuse_vnop_create(struct vop_create_args *ap) { struct vnode *dvp = ap->a_dvp; struct vnode **vpp = ap->a_vpp; struct componentname *cnp = ap->a_cnp; struct vattr *vap = ap->a_vap; struct thread *td = cnp->cn_thread; struct ucred *cred = cnp->cn_cred; struct fuse_open_in *foi; struct fuse_entry_out *feo; struct fuse_dispatcher fdi; struct fuse_dispatcher *fdip = &fdi; int err; struct mount *mp = vnode_mount(dvp); uint64_t parentnid = VTOFUD(dvp)->nid; mode_t mode = MAKEIMODE(vap->va_type, vap->va_mode); uint64_t x_fh_id; uint32_t x_open_flags; fuse_trace_printf_vnop(); if (fuse_isdeadfs(dvp)) { return ENXIO; } bzero(&fdi, sizeof(fdi)); /* XXX: Will we ever want devices ? 
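* Only regular files are created through FUSE_CREATE for now; any * other va_type is rejected with EINVAL below. Directories and * special files arrive via VOP_MKDIR and VOP_MKNOD instead.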
*/ if ((vap->va_type != VREG)) { printf("fuse_vnop_create: unsupported va_type %d\n", vap->va_type); return (EINVAL); } debug_printf("parent nid = %ju, mode = %x\n", (uintmax_t)parentnid, mode); fdisp_init(fdip, sizeof(*foi) + cnp->cn_namelen + 1); if (!fsess_isimpl(mp, FUSE_CREATE)) { debug_printf("eh, daemon doesn't implement create?\n"); return (EINVAL); } fdisp_make(fdip, FUSE_CREATE, vnode_mount(dvp), parentnid, td, cred); foi = fdip->indata; foi->mode = mode; foi->flags = O_CREAT | O_RDWR; memcpy((char *)fdip->indata + sizeof(*foi), cnp->cn_nameptr, cnp->cn_namelen); ((char *)fdip->indata)[sizeof(*foi) + cnp->cn_namelen] = '\0'; err = fdisp_wait_answ(fdip); if (err) { if (err == ENOSYS) fsess_set_notimpl(mp, FUSE_CREATE); debug_printf("create: got err=%d from daemon\n", err); goto out; } feo = fdip->answ; if ((err = fuse_internal_checkentry(feo, VREG))) { goto out; } - err = fuse_vnode_get(mp, feo->nodeid, dvp, vpp, cnp, VREG); + err = fuse_vnode_get(mp, feo, feo->nodeid, dvp, vpp, cnp, VREG); if (err) { struct fuse_release_in *fri; uint64_t nodeid = feo->nodeid; uint64_t fh_id = ((struct fuse_open_out *)(feo + 1))->fh; fdisp_init(fdip, sizeof(*fri)); fdisp_make(fdip, FUSE_RELEASE, mp, nodeid, td, cred); fri = fdip->indata; fri->fh = fh_id; fri->flags = OFLAGS(mode); fuse_insert_callback(fdip->tick, fuse_internal_forget_callback); fuse_insert_message(fdip->tick); return err; } ASSERT_VOP_ELOCKED(*vpp, "fuse_vnop_create"); fdip->answ = feo + 1; x_fh_id = ((struct fuse_open_out *)(feo + 1))->fh; x_open_flags = ((struct fuse_open_out *)(feo + 1))->open_flags; fuse_filehandle_init(*vpp, FUFH_RDWR, NULL, x_fh_id); fuse_vnode_open(*vpp, x_open_flags, td); cache_purge_negative(dvp); out: fdisp_destroy(fdip); return err; } /* * Our vnop_fsync roughly corresponds to the FUSE_FSYNC method. The Linux * version of FUSE also has a FUSE_FLUSH method. * * On Linux, fsync() synchronizes a file's complete in-core state with that * on disk. The call is not supposed to return until the system has completed * that action or until an error is detected. * * Linux also has an fdatasync() call that is similar to fsync() but is not * required to update the metadata such as access time and modification time. */ /* struct vnop_fsync_args { struct vnodeop_desc *a_desc; struct vnode * a_vp; struct ucred * a_cred; int a_waitfor; struct thread * a_td; }; */ static int fuse_vnop_fsync(struct vop_fsync_args *ap) { struct vnode *vp = ap->a_vp; struct thread *td = ap->a_td; struct fuse_filehandle *fufh; struct fuse_vnode_data *fvdat = VTOFUD(vp); int type, err = 0; fuse_trace_printf_vnop(); if (fuse_isdeadfs(vp)) { return 0; } if ((err = vop_stdfsync(ap))) return err; if (!fsess_isimpl(vnode_mount(vp), (vnode_vtype(vp) == VDIR ? FUSE_FSYNCDIR : FUSE_FSYNC))) { goto out; } for (type = 0; type < FUFH_MAXTYPE; type++) { fufh = &(fvdat->fufh[type]); if (FUFH_IS_VALID(fufh)) { fuse_internal_fsync(vp, td, NULL, fufh); } } out: return 0; } /* struct vnop_getattr_args { struct vnode *a_vp; struct vattr *a_vap; struct ucred *a_cred; struct thread *a_td; }; */ static int fuse_vnop_getattr(struct vop_getattr_args *ap) { struct vnode *vp = ap->a_vp; struct vattr *vap = ap->a_vap; struct ucred *cred = ap->a_cred; struct thread *td = curthread; struct fuse_vnode_data *fvdat = VTOFUD(vp); int err = 0; int dataflags; struct fuse_dispatcher fdi; FS_DEBUG2G("inode=%ju\n", (uintmax_t)VTOI(vp)); dataflags = fuse_get_mpdata(vnode_mount(vp))->dataflags; /* Note that we are not bailing out on a dead file system just yet. 
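 * The root vnode must keep answering, with made-up attributes if need be
 * (see the "fake" label below), so that the mount point itself stays
 * stat()able and the file system can still be unmounted.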
*/ if (!(dataflags & FSESS_INITED)) { if (!vnode_isvroot(vp)) { fdata_set_dead(fuse_get_mpdata(vnode_mount(vp))); err = ENOTCONN; debug_printf("fuse_getattr b: returning ENOTCONN\n"); return err; } else { goto fake; } } fdisp_init(&fdi, 0); if ((err = fdisp_simple_putget_vp(&fdi, FUSE_GETATTR, vp, td, cred))) { if ((err == ENOTCONN) && vnode_isvroot(vp)) { /* see comment at similar place in fuse_statfs() */ fdisp_destroy(&fdi); goto fake; } if (err == ENOENT) { fuse_internal_vnode_disappear(vp); } goto out; } - cache_attrs(vp, (struct fuse_attr_out *)fdi.answ); - if (vap != VTOVA(vp)) { - memcpy(vap, VTOVA(vp), sizeof(*vap)); - } + + cache_attrs(vp, (struct fuse_attr_out *)fdi.answ, vap); if (vap->va_type != vnode_vtype(vp)) { fuse_internal_vnode_disappear(vp); err = ENOENT; goto out; } if ((fvdat->flag & FN_SIZECHANGE) != 0) vap->va_size = fvdat->filesize; if (vnode_isreg(vp) && (fvdat->flag & FN_SIZECHANGE) == 0) { /* * This is for those cases when the file size changed without us * knowing, and we want to catch up. */ off_t new_filesize = ((struct fuse_attr_out *) fdi.answ)->attr.size; if (fvdat->filesize != new_filesize) { fuse_vnode_setsize(vp, cred, new_filesize); + fvdat->flag &= ~FN_SIZECHANGE; } } debug_printf("fuse_getattr e: returning 0\n"); out: fdisp_destroy(&fdi); return err; fake: bzero(vap, sizeof(*vap)); vap->va_type = vnode_vtype(vp); return 0; } /* struct vnop_inactive_args { struct vnode *a_vp; struct thread *a_td; }; */ static int fuse_vnop_inactive(struct vop_inactive_args *ap) { struct vnode *vp = ap->a_vp; struct thread *td = ap->a_td; struct fuse_vnode_data *fvdat = VTOFUD(vp); struct fuse_filehandle *fufh = NULL; int type, need_flush = 1; FS_DEBUG("inode=%ju\n", (uintmax_t)VTOI(vp)); for (type = 0; type < FUFH_MAXTYPE; type++) { fufh = &(fvdat->fufh[type]); if (FUFH_IS_VALID(fufh)) { if (need_flush && vp->v_type == VREG) { if ((VTOFUD(vp)->flag & FN_SIZECHANGE) != 0) { fuse_vnode_savesize(vp, NULL); } if (fuse_data_cache_invalidate || (fvdat->flag & FN_REVOKED) != 0) fuse_io_invalbuf(vp, td); else fuse_io_flushbuf(vp, MNT_WAIT, td); need_flush = 0; } fuse_filehandle_close(vp, type, td, NULL); } } if ((fvdat->flag & FN_REVOKED) != 0 && fuse_reclaim_revoked) { vrecycle(vp); } return 0; } /* struct vnop_link_args { struct vnode *a_tdvp; struct vnode *a_vp; struct componentname *a_cnp; }; */ static int fuse_vnop_link(struct vop_link_args *ap) { struct vnode *vp = ap->a_vp; struct vnode *tdvp = ap->a_tdvp; struct componentname *cnp = ap->a_cnp; struct vattr *vap = VTOVA(vp); struct fuse_dispatcher fdi; struct fuse_entry_out *feo; struct fuse_link_in fli; int err; fuse_trace_printf_vnop(); if (fuse_isdeadfs(vp)) { return ENXIO; } if (vnode_mount(tdvp) != vnode_mount(vp)) { return EXDEV; } - if (vap->va_nlink >= FUSE_LINK_MAX) { + + /* + * This is a seatbelt check to protect naive userspace filesystems from + * themselves and the limitations of the FUSE IPC protocol. If a + * filesystem does not allow attribute caching, assume it is capable of + * validating that nlink does not overflow. 
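+	 * (When attribute caching is not in effect, VTOVA() can legitimately
+	 * return NULL here, hence the NULL check on vap below.)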
+ */ + if (vap != NULL && vap->va_nlink >= FUSE_LINK_MAX) return EMLINK; - } fli.oldnodeid = VTOI(vp); fdisp_init(&fdi, 0); fuse_internal_newentry_makerequest(vnode_mount(tdvp), VTOI(tdvp), cnp, FUSE_LINK, &fli, sizeof(fli), &fdi); if ((err = fdisp_wait_answ(&fdi))) { goto out; } feo = fdi.answ; err = fuse_internal_checkentry(feo, vnode_vtype(vp)); out: fdisp_destroy(&fdi); return err; } /* struct vnop_lookup_args { struct vnodeop_desc *a_desc; struct vnode *a_dvp; struct vnode **a_vpp; struct componentname *a_cnp; }; */ int fuse_vnop_lookup(struct vop_lookup_args *ap) { struct vnode *dvp = ap->a_dvp; struct vnode **vpp = ap->a_vpp; struct componentname *cnp = ap->a_cnp; struct thread *td = cnp->cn_thread; struct ucred *cred = cnp->cn_cred; int nameiop = cnp->cn_nameiop; int flags = cnp->cn_flags; int wantparent = flags & (LOCKPARENT | WANTPARENT); int islastcn = flags & ISLASTCN; struct mount *mp = vnode_mount(dvp); int err = 0; int lookup_err = 0; struct vnode *vp = NULL; struct fuse_dispatcher fdi; enum fuse_opcode op; uint64_t nid; struct fuse_access_param facp; FS_DEBUG2G("parent_inode=%ju - %*s\n", (uintmax_t)VTOI(dvp), (int)cnp->cn_namelen, cnp->cn_nameptr); if (fuse_isdeadfs(dvp)) { *vpp = NULL; return ENXIO; } if (!vnode_isdir(dvp)) { return ENOTDIR; } if (islastcn && vfs_isrdonly(mp) && (nameiop != LOOKUP)) { return EROFS; } /* * We do access check prior to doing anything else only in the case * when we are at fs root (we'd like to say, "we are at the first * component", but that's not exactly the same... nevermind). * See further comments at further access checks. */ bzero(&facp, sizeof(facp)); if (vnode_isvroot(dvp)) { /* early permission check hack */ if ((err = fuse_internal_access(dvp, VEXEC, &facp, td, cred))) { return err; } } if (flags & ISDOTDOT) { nid = VTOFUD(dvp)->parent_nid; if (nid == 0) { return ENOENT; } fdisp_init(&fdi, 0); op = FUSE_GETATTR; goto calldaemon; } else if (cnp->cn_namelen == 1 && *(cnp->cn_nameptr) == '.') { nid = VTOI(dvp); fdisp_init(&fdi, 0); op = FUSE_GETATTR; goto calldaemon; } else if (fuse_lookup_cache_enable) { err = cache_lookup(dvp, vpp, cnp, NULL, NULL); switch (err) { case -1: /* positive match */ atomic_add_acq_long(&fuse_lookup_cache_hits, 1); return 0; case 0: /* no match in cache */ atomic_add_acq_long(&fuse_lookup_cache_misses, 1); break; case ENOENT: /* negative match */ /* fall through */ default: return err; } } nid = VTOI(dvp); fdisp_init(&fdi, cnp->cn_namelen + 1); op = FUSE_LOOKUP; calldaemon: fdisp_make(&fdi, op, mp, nid, td, cred); if (op == FUSE_LOOKUP) { memcpy(fdi.indata, cnp->cn_nameptr, cnp->cn_namelen); ((char *)fdi.indata)[cnp->cn_namelen] = '\0'; } lookup_err = fdisp_wait_answ(&fdi); if ((op == FUSE_LOOKUP) && !lookup_err) { /* lookup call succeeded */ nid = ((struct fuse_entry_out *)fdi.answ)->nodeid; if (!nid) { /* * zero nodeid is the same as "not found", * but it's also cacheable (which we keep * keep on doing not as of writing this) */ lookup_err = ENOENT; } else if (nid == FUSE_ROOT_ID) { lookup_err = EINVAL; } } if (lookup_err && (!fdi.answ_stat || lookup_err != ENOENT || op != FUSE_LOOKUP)) { fdisp_destroy(&fdi); return lookup_err; } /* lookup_err, if non-zero, must be ENOENT at this point */ if (lookup_err) { if ((nameiop == CREATE || nameiop == RENAME) && islastcn /* && directory dvp has not been removed */ ) { if (vfs_isrdonly(mp)) { err = EROFS; goto out; } #if 0 /* THINK_ABOUT_THIS */ if ((err = fuse_internal_access(dvp, VWRITE, cred, td, &facp))) { goto out; } #endif /* * Possibly record the position of 
a slot in the * directory large enough for the new component name. * This can be recorded in the vnode private data for * dvp. Set the SAVENAME flag to hold onto the * pathname for use later in VOP_CREATE or VOP_RENAME. */ cnp->cn_flags |= SAVENAME; err = EJUSTRETURN; goto out; } /* Consider inserting name into cache. */ /* * No we can't use negative caching, as the fs * changes are out of our control. * False positives' falseness turns out just as things * go by, but false negatives' falseness doesn't. * (and aiding the caching mechanism with extra control * mechanisms comes quite close to beating the whole purpose * caching...) */ #if 0 if ((cnp->cn_flags & MAKEENTRY) != 0) { FS_DEBUG("inserting NULL into cache\n"); cache_enter(dvp, NULL, cnp); } #endif err = ENOENT; goto out; } else { /* !lookup_err */ struct fuse_entry_out *feo = NULL; struct fuse_attr *fattr = NULL; if (op == FUSE_GETATTR) { fattr = &((struct fuse_attr_out *)fdi.answ)->attr; } else { feo = (struct fuse_entry_out *)fdi.answ; fattr = &(feo->attr); } /* * If deleting, and at end of pathname, return parameters * which can be used to remove file. If the wantparent flag * isn't set, we return only the directory, otherwise we go on * and lock the inode, being careful with ".". */ if (nameiop == DELETE && islastcn) { /* * Check for write access on directory. */ facp.xuid = fattr->uid; facp.facc_flags |= FACCESS_STICKY; err = fuse_internal_access(dvp, VWRITE, &facp, td, cred); facp.facc_flags &= ~FACCESS_XQUERIES; if (err) { goto out; } if (nid == VTOI(dvp)) { vref(dvp); *vpp = dvp; } else { - err = fuse_vnode_get(dvp->v_mount, nid, dvp, - &vp, cnp, IFTOVT(fattr->mode)); + err = fuse_vnode_get(dvp->v_mount, feo, nid, + dvp, &vp, cnp, IFTOVT(fattr->mode)); if (err) goto out; *vpp = vp; } /* * Save the name for use in VOP_RMDIR and VOP_REMOVE * later. */ cnp->cn_flags |= SAVENAME; goto out; } /* * If rewriting (RENAME), return the inode and the * information required to rewrite the present directory * Must get inode of directory entry to verify it's a * regular file, or empty directory. */ if (nameiop == RENAME && wantparent && islastcn) { #if 0 /* THINK_ABOUT_THIS */ if ((err = fuse_internal_access(dvp, VWRITE, cred, td, &facp))) { goto out; } #endif /* * Check for "." */ if (nid == VTOI(dvp)) { err = EISDIR; goto out; } - err = fuse_vnode_get(vnode_mount(dvp), - nid, - dvp, - &vp, - cnp, - IFTOVT(fattr->mode)); + err = fuse_vnode_get(vnode_mount(dvp), feo, nid, dvp, + &vp, cnp, IFTOVT(fattr->mode)); if (err) { goto out; } *vpp = vp; /* * Save the name for use in VOP_RENAME later. */ cnp->cn_flags |= SAVENAME; goto out; } if (flags & ISDOTDOT) { struct mount *mp; int ltype; /* * Expanded copy of vn_vget_ino() so that * fuse_vnode_get() can be used. 
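 * The parent is unlocked before busying the mount and re-locked
 * afterwards to preserve the mount/vnode lock order, and VI_DOOMED is
 * re-checked after every re-lock because dvp may have been reclaimed
 * while it was unlocked.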
*/ mp = dvp->v_mount; ltype = VOP_ISLOCKED(dvp); err = vfs_busy(mp, MBF_NOWAIT); if (err != 0) { vfs_ref(mp); VOP_UNLOCK(dvp, 0); err = vfs_busy(mp, 0); vn_lock(dvp, ltype | LK_RETRY); vfs_rel(mp); if (err) goto out; if ((dvp->v_iflag & VI_DOOMED) != 0) { err = ENOENT; vfs_unbusy(mp); goto out; } } VOP_UNLOCK(dvp, 0); - err = fuse_vnode_get(vnode_mount(dvp), - nid, - NULL, - &vp, - cnp, - IFTOVT(fattr->mode)); + err = fuse_vnode_get(vnode_mount(dvp), feo, nid, NULL, + &vp, cnp, IFTOVT(fattr->mode)); vfs_unbusy(mp); vn_lock(dvp, ltype | LK_RETRY); if ((dvp->v_iflag & VI_DOOMED) != 0) { if (err == 0) vput(vp); err = ENOENT; } if (err) goto out; *vpp = vp; } else if (nid == VTOI(dvp)) { vref(dvp); *vpp = dvp; } else { - err = fuse_vnode_get(vnode_mount(dvp), - nid, - dvp, - &vp, - cnp, - IFTOVT(fattr->mode)); + struct fuse_vnode_data *fvdat; + + err = fuse_vnode_get(vnode_mount(dvp), feo, nid, dvp, + &vp, cnp, IFTOVT(fattr->mode)); if (err) { goto out; } fuse_vnode_setparent(vp, dvp); + + /* + * In the case where we are looking up a FUSE node + * represented by an existing cached vnode, and the + * true size reported by FUSE_LOOKUP doesn't match + * the vnode's cached size, fix the vnode cache to + * match the real object size. + * + * This can occur via FUSE distributed filesystems, + * irregular files, etc. + */ + fvdat = VTOFUD(vp); + if (vnode_isreg(vp) && + fattr->size != fvdat->filesize) { + /* + * The FN_SIZECHANGE flag reflects a dirty + * append. If userspace lets us know our cache + * is invalid, that write was lost. (Dirty + * writes that do not cause append are also + * lost, but we don't detect them here.) + * + * XXX: Maybe disable WB caching on this mount. + */ + if (fvdat->flag & FN_SIZECHANGE) + printf("%s: WB cache incoherent on " + "%s!\n", __func__, + vnode_mount(vp)->mnt_stat.f_mntonname); + + (void)fuse_vnode_setsize(vp, cred, fattr->size); + fvdat->flag &= ~FN_SIZECHANGE; + } *vpp = vp; } if (op == FUSE_GETATTR) { - cache_attrs(*vpp, (struct fuse_attr_out *)fdi.answ); + cache_attrs(*vpp, (struct fuse_attr_out *)fdi.answ, + NULL); } else { - cache_attrs(*vpp, (struct fuse_entry_out *)fdi.answ); + cache_attrs(*vpp, (struct fuse_entry_out *)fdi.answ, + NULL); } /* Insert name into cache if appropriate. */ /* * Nooo, caching is evil. With caching, we can't avoid stale * information taking over the playground (cached info is not * just positive/negative, it does have qualitative aspects, * too). And a (VOP/FUSE)_GETATTR is always thrown anyway, when * walking down along cached path components, and that's not * any cheaper than FUSE_LOOKUP. This might change with * implementing kernel side attr caching, but... In Linux, * lookup results are not cached, and the daemon is bombarded * with FUSE_LOOKUPS on and on. This shows that by design, the * daemon is expected to handle frequent lookup queries * efficiently, do its caching in userspace, and so on. * * So just leave the name cache alone. */ /* * Well, now I know, Linux caches lookups, but with a * timeout... So it's the same thing as attribute caching: * we can deal with it when implement timeouts. */ #if 0 if (cnp->cn_flags & MAKEENTRY) { cache_enter(dvp, *vpp, cnp); } #endif } out: if (!lookup_err) { /* No lookup error; need to clean up. */ if (err) { /* Found inode; exit with no vnode. 
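 * The lookup itself succeeded, so the daemon has bumped its lookup
 * count for this node; send a FUSE_FORGET to drop that reference
 * before bailing out.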
*/ if (op == FUSE_LOOKUP) { fuse_internal_forget_send(vnode_mount(dvp), td, cred, nid, 1); } fdisp_destroy(&fdi); return err; } else { #ifndef NO_EARLY_PERM_CHECK_HACK if (!islastcn) { /* * We have the attributes of the next item * *now*, and it's a fact, and we do not * have to do extra work for it (ie, beg the * daemon), and it neither depends on such * accidental things like attr caching. So * the big idea: check credentials *now*, * not at the beginning of the next call to * lookup. * * The first item of the lookup chain (fs root) * won't be checked then here, of course, as * its never "the next". But go and see that * the root is taken care about at the very * beginning of this function. * * Now, given we want to do the access check * this way, one might ask: so then why not * do the access check just after fetching * the inode and its attributes from the * daemon? Why bother with producing the * corresponding vnode at all if something * is not OK? We know what's the deal as * soon as we get those attrs... There is * one bit of info though not given us by * the daemon: whether his response is * authoritative or not... His response should * be ignored if something is mounted over * the dir in question. But that can be * known only by having the vnode... */ int tmpvtype = vnode_vtype(*vpp); bzero(&facp, sizeof(facp)); /*the early perm check hack */ facp.facc_flags |= FACCESS_VA_VALID; if ((tmpvtype != VDIR) && (tmpvtype != VLNK)) { err = ENOTDIR; } if (!err && !vnode_mountedhere(*vpp)) { err = fuse_internal_access(*vpp, VEXEC, &facp, td, cred); } if (err) { if (tmpvtype == VLNK) FS_DEBUG("weird, permission error with a symlink?\n"); vput(*vpp); *vpp = NULL; } } #endif } } fdisp_destroy(&fdi); return err; } /* struct vnop_mkdir_args { struct vnode *a_dvp; struct vnode **a_vpp; struct componentname *a_cnp; struct vattr *a_vap; }; */ static int fuse_vnop_mkdir(struct vop_mkdir_args *ap) { struct vnode *dvp = ap->a_dvp; struct vnode **vpp = ap->a_vpp; struct componentname *cnp = ap->a_cnp; struct vattr *vap = ap->a_vap; struct fuse_mkdir_in fmdi; fuse_trace_printf_vnop(); if (fuse_isdeadfs(dvp)) { return ENXIO; } fmdi.mode = MAKEIMODE(vap->va_type, vap->va_mode); return (fuse_internal_newentry(dvp, vpp, cnp, FUSE_MKDIR, &fmdi, sizeof(fmdi), VDIR)); } /* struct vnop_mknod_args { struct vnode *a_dvp; struct vnode **a_vpp; struct componentname *a_cnp; struct vattr *a_vap; }; */ static int fuse_vnop_mknod(struct vop_mknod_args *ap) { return (EINVAL); } /* struct vnop_open_args { struct vnode *a_vp; int a_mode; struct ucred *a_cred; struct thread *a_td; int a_fdidx; / struct file *a_fp; }; */ static int fuse_vnop_open(struct vop_open_args *ap) { struct vnode *vp = ap->a_vp; int mode = ap->a_mode; struct thread *td = ap->a_td; struct ucred *cred = ap->a_cred; fufh_type_t fufh_type; struct fuse_vnode_data *fvdat; int error, isdir = 0; int32_t fuse_open_flags; FS_DEBUG2G("inode=%ju mode=0x%x\n", (uintmax_t)VTOI(vp), mode); if (fuse_isdeadfs(vp)) { return ENXIO; } fvdat = VTOFUD(vp); if (vnode_isdir(vp)) { isdir = 1; } fuse_open_flags = 0; if (isdir) { fufh_type = FUFH_RDONLY; } else { fufh_type = fuse_filehandle_xlate_from_fflags(mode); /* * For WRONLY opens, force DIRECT_IO. This is necessary * since writing a partial block through the buffer cache * will result in a read of the block and that read won't * be allowed by the WRONLY open. 
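 * For example, a buffered write that dirties only part of a block
 * triggers a read of that block first, and the read would have to go
 * through this same write-only handle.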
*/ if (fufh_type == FUFH_WRONLY || (fvdat->flag & FN_DIRECTIO) != 0) fuse_open_flags = FOPEN_DIRECT_IO; } if (fuse_filehandle_validrw(vp, fufh_type) != FUFH_INVALID) { fuse_vnode_open(vp, fuse_open_flags, td); return 0; } error = fuse_filehandle_open(vp, fufh_type, NULL, td, cred); return error; } static int fuse_vnop_pathconf(struct vop_pathconf_args *ap) { switch (ap->a_name) { case _PC_FILESIZEBITS: *ap->a_retval = 64; return (0); case _PC_NAME_MAX: *ap->a_retval = NAME_MAX; return (0); case _PC_LINK_MAX: *ap->a_retval = MIN(LONG_MAX, FUSE_LINK_MAX); return (0); case _PC_SYMLINK_MAX: *ap->a_retval = MAXPATHLEN; return (0); case _PC_NO_TRUNC: *ap->a_retval = 1; return (0); default: return (vop_stdpathconf(ap)); } } /* struct vnop_read_args { struct vnode *a_vp; struct uio *a_uio; int a_ioflag; struct ucred *a_cred; }; */ static int fuse_vnop_read(struct vop_read_args *ap) { struct vnode *vp = ap->a_vp; struct uio *uio = ap->a_uio; int ioflag = ap->a_ioflag; struct ucred *cred = ap->a_cred; FS_DEBUG2G("inode=%ju offset=%jd resid=%zd\n", (uintmax_t)VTOI(vp), uio->uio_offset, uio->uio_resid); if (fuse_isdeadfs(vp)) { return ENXIO; } if (VTOFUD(vp)->flag & FN_DIRECTIO) { ioflag |= IO_DIRECT; } return fuse_io_dispatch(vp, uio, ioflag, cred); } /* struct vnop_readdir_args { struct vnode *a_vp; struct uio *a_uio; struct ucred *a_cred; int *a_eofflag; int *ncookies; u_long **a_cookies; }; */ static int fuse_vnop_readdir(struct vop_readdir_args *ap) { struct vnode *vp = ap->a_vp; struct uio *uio = ap->a_uio; struct ucred *cred = ap->a_cred; struct fuse_filehandle *fufh = NULL; struct fuse_iov cookediov; int err = 0; int freefufh = 0; FS_DEBUG2G("inode=%ju\n", (uintmax_t)VTOI(vp)); if (fuse_isdeadfs(vp)) { return ENXIO; } if ( /* XXXIP ((uio_iovcnt(uio) > 1)) || */ (uio_resid(uio) < sizeof(struct dirent))) { return EINVAL; } if (!fuse_filehandle_valid(vp, FUFH_RDONLY)) { FS_DEBUG("calling readdir() before open()"); err = fuse_filehandle_open(vp, FUFH_RDONLY, &fufh, NULL, cred); freefufh = 1; } else { err = fuse_filehandle_get(vp, FUFH_RDONLY, &fufh); } if (err) { return (err); } #define DIRCOOKEDSIZE FUSE_DIRENT_ALIGN(FUSE_NAME_OFFSET + MAXNAMLEN + 1) fiov_init(&cookediov, DIRCOOKEDSIZE); err = fuse_internal_readdir(vp, uio, fufh, &cookediov); fiov_teardown(&cookediov); if (freefufh) { fuse_filehandle_close(vp, FUFH_RDONLY, NULL, cred); } return err; } /* struct vnop_readlink_args { struct vnode *a_vp; struct uio *a_uio; struct ucred *a_cred; }; */ static int fuse_vnop_readlink(struct vop_readlink_args *ap) { struct vnode *vp = ap->a_vp; struct uio *uio = ap->a_uio; struct ucred *cred = ap->a_cred; struct fuse_dispatcher fdi; int err; FS_DEBUG2G("inode=%ju\n", (uintmax_t)VTOI(vp)); if (fuse_isdeadfs(vp)) { return ENXIO; } if (!vnode_islnk(vp)) { return EINVAL; } fdisp_init(&fdi, 0); err = fdisp_simple_putget_vp(&fdi, FUSE_READLINK, vp, curthread, cred); if (err) { goto out; } if (((char *)fdi.answ)[0] == '/' && fuse_get_mpdata(vnode_mount(vp))->dataflags & FSESS_PUSH_SYMLINKS_IN) { char *mpth = vnode_mount(vp)->mnt_stat.f_mntonname; err = uiomove(mpth, strlen(mpth), uio); } if (!err) { err = uiomove(fdi.answ, fdi.iosize, uio); } out: fdisp_destroy(&fdi); return err; } /* struct vnop_reclaim_args { struct vnode *a_vp; struct thread *a_td; }; */ static int fuse_vnop_reclaim(struct vop_reclaim_args *ap) { struct vnode *vp = ap->a_vp; struct thread *td = ap->a_td; struct fuse_vnode_data *fvdat = VTOFUD(vp); struct fuse_filehandle *fufh = NULL; int type; if (!fvdat) { panic("FUSE: no vnode data during 
recycling"); } FS_DEBUG("inode=%ju\n", (uintmax_t)VTOI(vp)); for (type = 0; type < FUFH_MAXTYPE; type++) { fufh = &(fvdat->fufh[type]); if (FUFH_IS_VALID(fufh)) { printf("FUSE: vnode being reclaimed but fufh (type=%d) is valid", type); fuse_filehandle_close(vp, type, td, NULL); } } if ((!fuse_isdeadfs(vp)) && (fvdat->nlookup)) { fuse_internal_forget_send(vnode_mount(vp), td, NULL, VTOI(vp), fvdat->nlookup); } fuse_vnode_setparent(vp, NULL); cache_purge(vp); vfs_hash_remove(vp); vnode_destroy_vobject(vp); fuse_vnode_destroy(vp); return 0; } /* struct vnop_remove_args { struct vnode *a_dvp; struct vnode *a_vp; struct componentname *a_cnp; }; */ static int fuse_vnop_remove(struct vop_remove_args *ap) { struct vnode *dvp = ap->a_dvp; struct vnode *vp = ap->a_vp; struct componentname *cnp = ap->a_cnp; int err; FS_DEBUG2G("inode=%ju name=%*s\n", (uintmax_t)VTOI(vp), (int)cnp->cn_namelen, cnp->cn_nameptr); if (fuse_isdeadfs(vp)) { return ENXIO; } if (vnode_isdir(vp)) { return EPERM; } cache_purge(vp); err = fuse_internal_remove(dvp, vp, cnp, FUSE_UNLINK); if (err == 0) fuse_internal_vnode_disappear(vp); return err; } /* struct vnop_rename_args { struct vnode *a_fdvp; struct vnode *a_fvp; struct componentname *a_fcnp; struct vnode *a_tdvp; struct vnode *a_tvp; struct componentname *a_tcnp; }; */ static int fuse_vnop_rename(struct vop_rename_args *ap) { struct vnode *fdvp = ap->a_fdvp; struct vnode *fvp = ap->a_fvp; struct componentname *fcnp = ap->a_fcnp; struct vnode *tdvp = ap->a_tdvp; struct vnode *tvp = ap->a_tvp; struct componentname *tcnp = ap->a_tcnp; struct fuse_data *data; int err = 0; FS_DEBUG2G("from: inode=%ju name=%*s -> to: inode=%ju name=%*s\n", (uintmax_t)VTOI(fvp), (int)fcnp->cn_namelen, fcnp->cn_nameptr, (uintmax_t)(tvp == NULL ? -1 : VTOI(tvp)), (int)tcnp->cn_namelen, tcnp->cn_nameptr); if (fuse_isdeadfs(fdvp)) { return ENXIO; } if (fvp->v_mount != tdvp->v_mount || (tvp && fvp->v_mount != tvp->v_mount)) { FS_DEBUG("cross-device rename: %s -> %s\n", fcnp->cn_nameptr, (tcnp != NULL ? tcnp->cn_nameptr : "(NULL)")); err = EXDEV; goto out; } cache_purge(fvp); /* * FUSE library is expected to check if target directory is not * under the source directory in the file system tree. * Linux performs this check at VFS level. 
*/ data = fuse_get_mpdata(vnode_mount(tdvp)); sx_xlock(&data->rename_lock); err = fuse_internal_rename(fdvp, fcnp, tdvp, tcnp); if (err == 0) { if (tdvp != fdvp) fuse_vnode_setparent(fvp, tdvp); if (tvp != NULL) fuse_vnode_setparent(tvp, NULL); } sx_unlock(&data->rename_lock); if (tvp != NULL && tvp != fvp) { cache_purge(tvp); } if (vnode_isdir(fvp)) { if ((tvp != NULL) && vnode_isdir(tvp)) { cache_purge(tdvp); } cache_purge(fdvp); } out: if (tdvp == tvp) { vrele(tdvp); } else { vput(tdvp); } if (tvp != NULL) { vput(tvp); } vrele(fdvp); vrele(fvp); return err; } /* struct vnop_rmdir_args { struct vnode *a_dvp; struct vnode *a_vp; struct componentname *a_cnp; } *ap; */ static int fuse_vnop_rmdir(struct vop_rmdir_args *ap) { struct vnode *dvp = ap->a_dvp; struct vnode *vp = ap->a_vp; int err; FS_DEBUG2G("inode=%ju\n", (uintmax_t)VTOI(vp)); if (fuse_isdeadfs(vp)) { return ENXIO; } if (VTOFUD(vp) == VTOFUD(dvp)) { return EINVAL; } err = fuse_internal_remove(dvp, vp, ap->a_cnp, FUSE_RMDIR); if (err == 0) fuse_internal_vnode_disappear(vp); return err; } /* struct vnop_setattr_args { struct vnode *a_vp; struct vattr *a_vap; struct ucred *a_cred; struct thread *a_td; }; */ static int fuse_vnop_setattr(struct vop_setattr_args *ap) { struct vnode *vp = ap->a_vp; struct vattr *vap = ap->a_vap; struct ucred *cred = ap->a_cred; struct thread *td = curthread; struct fuse_dispatcher fdi; struct fuse_setattr_in *fsai; struct fuse_access_param facp; int err = 0; enum vtype vtyp; int sizechanged = 0; uint64_t newsize = 0; FS_DEBUG2G("inode=%ju\n", (uintmax_t)VTOI(vp)); if (fuse_isdeadfs(vp)) { return ENXIO; } fdisp_init(&fdi, sizeof(*fsai)); fdisp_make_vp(&fdi, FUSE_SETATTR, vp, td, cred); fsai = fdi.indata; fsai->valid = 0; bzero(&facp, sizeof(facp)); facp.xuid = vap->va_uid; facp.xgid = vap->va_gid; if (vap->va_uid != (uid_t)VNOVAL) { facp.facc_flags |= FACCESS_CHOWN; fsai->uid = vap->va_uid; fsai->valid |= FATTR_UID; } if (vap->va_gid != (gid_t)VNOVAL) { facp.facc_flags |= FACCESS_CHOWN; fsai->gid = vap->va_gid; fsai->valid |= FATTR_GID; } if (vap->va_size != VNOVAL) { struct fuse_filehandle *fufh = NULL; /*Truncate to a new value. */ fsai->size = vap->va_size; sizechanged = 1; newsize = vap->va_size; fsai->valid |= FATTR_SIZE; fuse_filehandle_getrw(vp, FUFH_WRONLY, &fufh); if (fufh) { fsai->fh = fufh->fh_id; fsai->valid |= FATTR_FH; } } if (vap->va_atime.tv_sec != VNOVAL) { fsai->atime = vap->va_atime.tv_sec; fsai->atimensec = vap->va_atime.tv_nsec; fsai->valid |= FATTR_ATIME; } if (vap->va_mtime.tv_sec != VNOVAL) { fsai->mtime = vap->va_mtime.tv_sec; fsai->mtimensec = vap->va_mtime.tv_nsec; fsai->valid |= FATTR_MTIME; } if (vap->va_mode != (mode_t)VNOVAL) { fsai->mode = vap->va_mode & ALLPERMS; fsai->valid |= FATTR_MODE; } if (!fsai->valid) { goto out; } vtyp = vnode_vtype(vp); if (fsai->valid & FATTR_SIZE && vtyp == VDIR) { err = EISDIR; goto out; } if (vfs_isrdonly(vnode_mount(vp)) && (fsai->valid & ~FATTR_SIZE || vtyp == VREG)) { err = EROFS; goto out; } if (fsai->valid & ~FATTR_SIZE) { /*err = fuse_internal_access(vp, VADMIN, context, &facp); */ /*XXX */ err = 0; } facp.facc_flags &= ~FACCESS_XQUERIES; if (err && !(fsai->valid & ~(FATTR_ATIME | FATTR_MTIME)) && vap->va_vaflags & VA_UTIMES_NULL) { err = fuse_internal_access(vp, VWRITE, &facp, td, cred); } if (err) goto out; if ((err = fdisp_wait_answ(&fdi))) goto out; vtyp = IFTOVT(((struct fuse_attr_out *)fdi.answ)->attr.mode); if (vnode_vtype(vp) != vtyp) { if (vnode_vtype(vp) == VNON && vtyp != VNON) { debug_printf("FUSE: Dang! 
vnode_vtype is VNON and vtype isn't.\n"); } else { /* * STALE vnode, ditch * * The vnode has changed its type "behind our back". There's * nothing really we can do, so let us just force an internal * revocation and tell the caller to try again, if interested. */ fuse_internal_vnode_disappear(vp); err = EAGAIN; } } - if (!err && !sizechanged) { - cache_attrs(vp, (struct fuse_attr_out *)fdi.answ); - } + if (err == 0) + cache_attrs(vp, (struct fuse_attr_out *)fdi.answ, NULL); + out: fdisp_destroy(&fdi); if (!err && sizechanged) { fuse_vnode_setsize(vp, cred, newsize); VTOFUD(vp)->flag &= ~FN_SIZECHANGE; } return err; } /* struct vnop_strategy_args { struct vnode *a_vp; struct buf *a_bp; }; */ static int fuse_vnop_strategy(struct vop_strategy_args *ap) { struct vnode *vp = ap->a_vp; struct buf *bp = ap->a_bp; fuse_trace_printf_vnop(); if (!vp || fuse_isdeadfs(vp)) { bp->b_ioflags |= BIO_ERROR; bp->b_error = ENXIO; bufdone(bp); return ENXIO; } if (bp->b_iocmd == BIO_WRITE) fuse_vnode_refreshsize(vp, NOCRED); (void)fuse_io_strategy(vp, bp); /* * This is a dangerous function. If returns error, that might mean a * panic. We prefer pretty much anything over being forced to panic * by a malicious daemon (a demon?). So we just return 0 anyway. You * should never mind this: this function has its own error * propagation mechanism via the argument buffer, so * not-that-melodramatic residents of the call chain still will be * able to know what to do. */ return 0; } /* struct vnop_symlink_args { struct vnode *a_dvp; struct vnode **a_vpp; struct componentname *a_cnp; struct vattr *a_vap; char *a_target; }; */ static int fuse_vnop_symlink(struct vop_symlink_args *ap) { struct vnode *dvp = ap->a_dvp; struct vnode **vpp = ap->a_vpp; struct componentname *cnp = ap->a_cnp; const char *target = ap->a_target; struct fuse_dispatcher fdi; int err; size_t len; FS_DEBUG2G("inode=%ju name=%*s\n", (uintmax_t)VTOI(dvp), (int)cnp->cn_namelen, cnp->cn_nameptr); if (fuse_isdeadfs(dvp)) { return ENXIO; } /* * Unlike the other creator type calls, here we have to create a message * where the name of the new entry comes first, and the data describing * the entry comes second. * Hence we can't rely on our handy fuse_internal_newentry() routine, * but put together the message manually and just call the core part. 
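 * The message body built below is laid out as:
 *
 *   name (cn_namelen bytes) | '\0' | target (strlen(target) bytes) | '\0'
 *
 * with the target's terminating NUL accounted for in len.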
*/ len = strlen(target) + 1; fdisp_init(&fdi, len + cnp->cn_namelen + 1); fdisp_make_vp(&fdi, FUSE_SYMLINK, dvp, curthread, NULL); memcpy(fdi.indata, cnp->cn_nameptr, cnp->cn_namelen); ((char *)fdi.indata)[cnp->cn_namelen] = '\0'; memcpy((char *)fdi.indata + cnp->cn_namelen + 1, target, len); err = fuse_internal_newentry_core(dvp, vpp, cnp, VLNK, &fdi); fdisp_destroy(&fdi); return err; } /* struct vnop_write_args { struct vnode *a_vp; struct uio *a_uio; int a_ioflag; struct ucred *a_cred; }; */ static int fuse_vnop_write(struct vop_write_args *ap) { struct vnode *vp = ap->a_vp; struct uio *uio = ap->a_uio; int ioflag = ap->a_ioflag; struct ucred *cred = ap->a_cred; fuse_trace_printf_vnop(); if (fuse_isdeadfs(vp)) { return ENXIO; } fuse_vnode_refreshsize(vp, cred); if (VTOFUD(vp)->flag & FN_DIRECTIO) { ioflag |= IO_DIRECT; } return fuse_io_dispatch(vp, uio, ioflag, cred); } /* struct vnop_getpages_args { struct vnode *a_vp; vm_page_t *a_m; int a_count; int a_reqpage; }; */ static int fuse_vnop_getpages(struct vop_getpages_args *ap) { int i, error, nextoff, size, toff, count, npages; struct uio uio; struct iovec iov; vm_offset_t kva; struct buf *bp; struct vnode *vp; struct thread *td; struct ucred *cred; vm_page_t *pages; FS_DEBUG2G("heh\n"); vp = ap->a_vp; KASSERT(vp->v_object, ("objectless vp passed to getpages")); td = curthread; /* XXX */ cred = curthread->td_ucred; /* XXX */ pages = ap->a_m; npages = ap->a_count; if (!fsess_opt_mmap(vnode_mount(vp))) { FS_DEBUG("called on non-cacheable vnode??\n"); return (VM_PAGER_ERROR); } /* * If the last page is partially valid, just return it and allow * the pager to zero-out the blanks. Partially valid pages can * only occur at the file EOF. * * XXXGL: is that true for FUSE, which is a local filesystem, * but still somewhat disconnected from the kernel? */ VM_OBJECT_WLOCK(vp->v_object); if (pages[npages - 1]->valid != 0 && --npages == 0) goto out; VM_OBJECT_WUNLOCK(vp->v_object); /* * We use only the kva address for the buffer, but this is extremely * convenient and fast. */ bp = uma_zalloc(fuse_pbuf_zone, M_WAITOK); kva = (vm_offset_t)bp->b_data; pmap_qenter(kva, pages, npages); VM_CNT_INC(v_vnodein); VM_CNT_ADD(v_vnodepgsin, npages); count = npages << PAGE_SHIFT; iov.iov_base = (caddr_t)kva; iov.iov_len = count; uio.uio_iov = &iov; uio.uio_iovcnt = 1; uio.uio_offset = IDX_TO_OFF(pages[0]->pindex); uio.uio_resid = count; uio.uio_segflg = UIO_SYSSPACE; uio.uio_rw = UIO_READ; uio.uio_td = td; error = fuse_io_dispatch(vp, &uio, IO_DIRECT, cred); pmap_qremove(kva, npages); uma_zfree(fuse_pbuf_zone, bp); if (error && (uio.uio_resid == count)) { FS_DEBUG("error %d\n", error); return VM_PAGER_ERROR; } /* * Calculate the number of bytes read and validate only that number * of bytes. Note that due to pending writes, size may be 0. This * does not mean that the remaining data is invalid! */ size = count - uio.uio_resid; VM_OBJECT_WLOCK(vp->v_object); fuse_vm_page_lock_queues(); for (i = 0, toff = 0; i < npages; i++, toff = nextoff) { vm_page_t m; nextoff = toff + PAGE_SIZE; m = pages[i]; if (nextoff <= size) { /* * Read operation filled an entire page */ m->valid = VM_PAGE_BITS_ALL; KASSERT(m->dirty == 0, ("fuse_getpages: page %p is dirty", m)); } else if (size > toff) { /* * Read operation filled a partial page. */ m->valid = 0; vm_page_set_valid_range(m, 0, size - toff); KASSERT(m->dirty == 0, ("fuse_getpages: page %p is dirty", m)); } else { /* * Read operation was short. If no error occurred * we may have hit a zero-fill section. 
We simply * leave valid set to 0. */ ; } } fuse_vm_page_unlock_queues(); out: VM_OBJECT_WUNLOCK(vp->v_object); if (ap->a_rbehind) *ap->a_rbehind = 0; if (ap->a_rahead) *ap->a_rahead = 0; return (VM_PAGER_OK); } /* struct vnop_putpages_args { struct vnode *a_vp; vm_page_t *a_m; int a_count; int a_sync; int *a_rtvals; vm_ooffset_t a_offset; }; */ static int fuse_vnop_putpages(struct vop_putpages_args *ap) { struct uio uio; struct iovec iov; vm_offset_t kva; struct buf *bp; int i, error, npages, count; off_t offset; int *rtvals; struct vnode *vp; struct thread *td; struct ucred *cred; vm_page_t *pages; vm_ooffset_t fsize; FS_DEBUG2G("heh\n"); vp = ap->a_vp; KASSERT(vp->v_object, ("objectless vp passed to putpages")); fsize = vp->v_object->un_pager.vnp.vnp_size; td = curthread; /* XXX */ cred = curthread->td_ucred; /* XXX */ pages = ap->a_m; count = ap->a_count; rtvals = ap->a_rtvals; npages = btoc(count); offset = IDX_TO_OFF(pages[0]->pindex); if (!fsess_opt_mmap(vnode_mount(vp))) { FS_DEBUG("called on non-cacheable vnode??\n"); } for (i = 0; i < npages; i++) rtvals[i] = VM_PAGER_AGAIN; /* * When putting pages, do not extend file past EOF. */ if (offset + count > fsize) { count = fsize - offset; if (count < 0) count = 0; } /* * We use only the kva address for the buffer, but this is extremely * convenient and fast. */ bp = uma_zalloc(fuse_pbuf_zone, M_WAITOK); kva = (vm_offset_t)bp->b_data; pmap_qenter(kva, pages, npages); VM_CNT_INC(v_vnodeout); VM_CNT_ADD(v_vnodepgsout, count); iov.iov_base = (caddr_t)kva; iov.iov_len = count; uio.uio_iov = &iov; uio.uio_iovcnt = 1; uio.uio_offset = offset; uio.uio_resid = count; uio.uio_segflg = UIO_SYSSPACE; uio.uio_rw = UIO_WRITE; uio.uio_td = td; error = fuse_io_dispatch(vp, &uio, IO_DIRECT, cred); pmap_qremove(kva, npages); uma_zfree(fuse_pbuf_zone, bp); if (!error) { int nwritten = round_page(count - uio.uio_resid) / PAGE_SIZE; for (i = 0; i < nwritten; i++) { rtvals[i] = VM_PAGER_OK; VM_OBJECT_WLOCK(pages[i]->object); vm_page_undirty(pages[i]); VM_OBJECT_WUNLOCK(pages[i]->object); } } return rtvals[0]; } static const char extattr_namespace_separator = '.'; /* struct vop_getextattr_args { struct vop_generic_args a_gen; struct vnode *a_vp; int a_attrnamespace; const char *a_name; struct uio *a_uio; size_t *a_size; struct ucred *a_cred; struct thread *a_td; }; */ static int fuse_vnop_getextattr(struct vop_getextattr_args *ap) { struct vnode *vp = ap->a_vp; struct uio *uio = ap->a_uio; struct fuse_dispatcher fdi; struct fuse_getxattr_in *get_xattr_in; struct fuse_getxattr_out *get_xattr_out; struct mount *mp = vnode_mount(vp); struct thread *td = ap->a_td; struct ucred *cred = ap->a_cred; char *prefix; char *attr_str; size_t len; int err; fuse_trace_printf_vnop(); if (fuse_isdeadfs(vp)) return (ENXIO); /* Default to looking for user attributes. */ if (ap->a_attrnamespace == EXTATTR_NAMESPACE_SYSTEM) prefix = EXTATTR_NAMESPACE_SYSTEM_STRING; else prefix = EXTATTR_NAMESPACE_USER_STRING; len = strlen(prefix) + sizeof(extattr_namespace_separator) + strlen(ap->a_name) + 1; fdisp_init(&fdi, len + sizeof(*get_xattr_in)); fdisp_make_vp(&fdi, FUSE_GETXATTR, vp, td, cred); get_xattr_in = fdi.indata; /* * Check to see whether we're querying the available size or * issuing the actual request. If we pass in 0, we get back struct * fuse_getxattr_out. If we pass in a non-zero size, we get back * that much data, without the struct fuse_getxattr_out header. 
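 * For illustration (hypothetical sizes): querying an attribute whose
 * value is 16 bytes long with a size of 0 yields a fuse_getxattr_out
 * with size == 16; re-issuing the request with size == 16 then returns
 * the 16 value bytes themselves.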
 */
	if (uio == NULL)
		get_xattr_in->size = 0;
	else
		get_xattr_in->size = uio->uio_resid;

	attr_str = (char *)fdi.indata + sizeof(*get_xattr_in);
	snprintf(attr_str, len, "%s%c%s", prefix, extattr_namespace_separator,
	    ap->a_name);

	err = fdisp_wait_answ(&fdi);
	if (err != 0) {
		if (err == ENOSYS)
			fsess_set_notimpl(mp, FUSE_GETXATTR);
		debug_printf("getxattr: got err=%d from daemon\n", err);
		goto out;
	}

	get_xattr_out = fdi.answ;

	if (ap->a_size != NULL)
		*ap->a_size = get_xattr_out->size;

	if (uio != NULL)
		err = uiomove(fdi.answ, fdi.iosize, uio);

out:
	fdisp_destroy(&fdi);
	return (err);
}

/*
struct vop_setextattr_args {
	struct vop_generic_args a_gen;
	struct vnode *a_vp;
	int a_attrnamespace;
	const char *a_name;
	struct uio *a_uio;
	struct ucred *a_cred;
	struct thread *a_td;
};
*/
static int
fuse_vnop_setextattr(struct vop_setextattr_args *ap)
{
	struct vnode *vp = ap->a_vp;
	struct uio *uio = ap->a_uio;
	struct fuse_dispatcher fdi;
	struct fuse_setxattr_in *set_xattr_in;
	struct mount *mp = vnode_mount(vp);
	struct thread *td = ap->a_td;
	struct ucred *cred = ap->a_cred;
	char *prefix;
	size_t len;
	char *attr_str;
	int err;

	fuse_trace_printf_vnop();

	if (fuse_isdeadfs(vp))
		return (ENXIO);

	/* Default to looking for user attributes. */
	if (ap->a_attrnamespace == EXTATTR_NAMESPACE_SYSTEM)
		prefix = EXTATTR_NAMESPACE_SYSTEM_STRING;
	else
		prefix = EXTATTR_NAMESPACE_USER_STRING;

	len = strlen(prefix) + sizeof(extattr_namespace_separator) +
	    strlen(ap->a_name) + 1;

	fdisp_init(&fdi, len + sizeof(*set_xattr_in) + uio->uio_resid);
	fdisp_make_vp(&fdi, FUSE_SETXATTR, vp, td, cred);

	set_xattr_in = fdi.indata;
	set_xattr_in->size = uio->uio_resid;

	attr_str = (char *)fdi.indata + sizeof(*set_xattr_in);
	snprintf(attr_str, len, "%s%c%s", prefix, extattr_namespace_separator,
	    ap->a_name);

	err = uiomove((char *)fdi.indata + sizeof(*set_xattr_in) + len,
	    uio->uio_resid, uio);
	if (err != 0) {
		debug_printf("setxattr: got error %d from uiomove\n", err);
		goto out;
	}

	err = fdisp_wait_answ(&fdi);
	if (err != 0) {
		if (err == ENOSYS)
			fsess_set_notimpl(mp, FUSE_SETXATTR);
		debug_printf("setxattr: got err=%d from daemon\n", err);
		goto out;
	}

out:
	fdisp_destroy(&fdi);
	return (err);
}

/*
 * The Linux / FUSE extended attribute list is simply a collection of
 * NUL-terminated strings. The FreeBSD extended attribute list is a single
 * byte length followed by a non-NUL terminated string. So, this allows
 * conversion of the Linux / FUSE format to the FreeBSD format in place.
 * Linux attribute names are reported with the namespace as a prefix (e.g.
 * "user.attribute_name"), but in FreeBSD they are reported without the
 * namespace prefix (e.g. "attribute_name"). So, we're going from:
 *
 * user.attr_name1\0user.attr_name2\0
 *
 * to:
 *
 * <len>attr_name1<len>attr_name2
 *
 * Where "<len>" is a single byte number of characters in the attribute name.
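 *
 * For illustration (hypothetical attribute names), the Linux list
 *
 *   "user.md5\0user.tags\0"	(19 bytes)
 *
 * becomes the FreeBSD list
 *
 *   "\003md5\004tags"		(9 bytes)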
* * Args: * prefix - exattr namespace prefix string * list, list_len - input list with namespace prefixes * bsd_list, bsd_list_len - output list compatible with bsd vfs */ static int fuse_xattrlist_convert(char *prefix, const char *list, int list_len, char *bsd_list, int *bsd_list_len) { int len, pos, dist_to_next, prefix_len; pos = 0; *bsd_list_len = 0; prefix_len = strlen(prefix); while (pos < list_len && list[pos] != '\0') { dist_to_next = strlen(&list[pos]) + 1; if (bcmp(&list[pos], prefix, prefix_len) == 0 && list[pos + prefix_len] == extattr_namespace_separator) { len = dist_to_next - (prefix_len + sizeof(extattr_namespace_separator)) - 1; if (len >= EXTATTR_MAXNAMELEN) return (ENAMETOOLONG); bsd_list[*bsd_list_len] = len; memcpy(&bsd_list[*bsd_list_len + 1], &list[pos + prefix_len + sizeof(extattr_namespace_separator)], len); *bsd_list_len += len + 1; } pos += dist_to_next; } return (0); } /* struct vop_listextattr_args { struct vop_generic_args a_gen; struct vnode *a_vp; int a_attrnamespace; struct uio *a_uio; size_t *a_size; struct ucred *a_cred; struct thread *a_td; }; */ static int fuse_vnop_listextattr(struct vop_listextattr_args *ap) { struct vnode *vp = ap->a_vp; struct uio *uio = ap->a_uio; struct fuse_dispatcher fdi; struct fuse_listxattr_in *list_xattr_in; struct fuse_listxattr_out *list_xattr_out; struct mount *mp = vnode_mount(vp); struct thread *td = ap->a_td; struct ucred *cred = ap->a_cred; size_t len; char *prefix; char *attr_str; char *bsd_list = NULL; char *linux_list; int bsd_list_len; int linux_list_len; int err; fuse_trace_printf_vnop(); if (fuse_isdeadfs(vp)) return (ENXIO); /* * Add space for a NUL and the period separator if enabled. * Default to looking for user attributes. */ if (ap->a_attrnamespace == EXTATTR_NAMESPACE_SYSTEM) prefix = EXTATTR_NAMESPACE_SYSTEM_STRING; else prefix = EXTATTR_NAMESPACE_USER_STRING; len = strlen(prefix) + sizeof(extattr_namespace_separator) + 1; fdisp_init(&fdi, sizeof(*list_xattr_in) + len); fdisp_make_vp(&fdi, FUSE_LISTXATTR, vp, td, cred); /* * Retrieve Linux / FUSE compatible list size. */ list_xattr_in = fdi.indata; list_xattr_in->size = 0; attr_str = (char *)fdi.indata + sizeof(*list_xattr_in); snprintf(attr_str, len, "%s%c", prefix, extattr_namespace_separator); err = fdisp_wait_answ(&fdi); if (err != 0) { if (err == ENOSYS) fsess_set_notimpl(mp, FUSE_LISTXATTR); debug_printf("listextattr: got err=%d from daemon\n", err); goto out; } list_xattr_out = fdi.answ; linux_list_len = list_xattr_out->size; if (linux_list_len == 0) { if (ap->a_size != NULL) *ap->a_size = linux_list_len; goto out; } /* * Retrieve Linux / FUSE compatible list values. */ fdisp_make_vp(&fdi, FUSE_LISTXATTR, vp, td, cred); list_xattr_in = fdi.indata; list_xattr_in->size = linux_list_len + sizeof(*list_xattr_out); attr_str = (char *)fdi.indata + sizeof(*list_xattr_in); snprintf(attr_str, len, "%s%c", prefix, extattr_namespace_separator); err = fdisp_wait_answ(&fdi); if (err != 0) goto out; linux_list = fdi.answ; linux_list_len = fdi.iosize; /* * Retrieve the BSD compatible list values. * The Linux / FUSE attribute list format isn't the same * as FreeBSD's format. So we need to transform it into * FreeBSD's format before giving it to the user. 
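 * The converted list can never be longer than the original: each
 * "namespace." prefix and trailing NUL (six or more bytes with the
 * "user" prefix) is replaced by a single length byte, so a buffer of
 * linux_list_len bytes is always large enough.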
*/ bsd_list = malloc(linux_list_len, M_TEMP, M_WAITOK); err = fuse_xattrlist_convert(prefix, linux_list, linux_list_len, bsd_list, &bsd_list_len); if (err != 0) goto out; if (ap->a_size != NULL) *ap->a_size = bsd_list_len; if (uio != NULL) err = uiomove(bsd_list, bsd_list_len, uio); out: free(bsd_list, M_TEMP); fdisp_destroy(&fdi); return (err); } /* struct vop_deleteextattr_args { struct vop_generic_args a_gen; struct vnode *a_vp; int a_attrnamespace; const char *a_name; struct ucred *a_cred; struct thread *a_td; }; */ static int fuse_vnop_deleteextattr(struct vop_deleteextattr_args *ap) { struct vnode *vp = ap->a_vp; struct fuse_dispatcher fdi; struct mount *mp = vnode_mount(vp); struct thread *td = ap->a_td; struct ucred *cred = ap->a_cred; char *prefix; size_t len; char *attr_str; int err; fuse_trace_printf_vnop(); if (fuse_isdeadfs(vp)) return (ENXIO); /* Default to looking for user attributes. */ if (ap->a_attrnamespace == EXTATTR_NAMESPACE_SYSTEM) prefix = EXTATTR_NAMESPACE_SYSTEM_STRING; else prefix = EXTATTR_NAMESPACE_USER_STRING; len = strlen(prefix) + sizeof(extattr_namespace_separator) + strlen(ap->a_name) + 1; fdisp_init(&fdi, len); fdisp_make_vp(&fdi, FUSE_REMOVEXATTR, vp, td, cred); attr_str = fdi.indata; snprintf(attr_str, len, "%s%c%s", prefix, extattr_namespace_separator, ap->a_name); err = fdisp_wait_answ(&fdi); if (err != 0) { if (err == ENOSYS) fsess_set_notimpl(mp, FUSE_REMOVEXATTR); debug_printf("removexattr: got err=%d from daemon\n", err); } fdisp_destroy(&fdi); return (err); } /* struct vnop_print_args { struct vnode *a_vp; }; */ static int fuse_vnop_print(struct vop_print_args *ap) { struct fuse_vnode_data *fvdat = VTOFUD(ap->a_vp); printf("nodeid: %ju, parent nodeid: %ju, nlookup: %ju, flag: %#x\n", (uintmax_t)VTOILLU(ap->a_vp), (uintmax_t)fvdat->parent_nid, (uintmax_t)fvdat->nlookup, fvdat->flag); return 0; } Index: projects/import-googletest-1.8.1/sys/i386/include/cpufunc.h =================================================================== --- projects/import-googletest-1.8.1/sys/i386/include/cpufunc.h (revision 344270) +++ projects/import-googletest-1.8.1/sys/i386/include/cpufunc.h (revision 344271) @@ -1,782 +1,808 @@ /*- * SPDX-License-Identifier: BSD-3-Clause * * Copyright (c) 1993 The Regents of the University of California. * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * 3. Neither the name of the University nor the names of its contributors * may be used to endorse or promote products derived from this software * without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. 
IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * $FreeBSD$ */ /* * Functions to provide access to special i386 instructions. * This in included in sys/systm.h, and that file should be * used in preference to this. */ #ifndef _MACHINE_CPUFUNC_H_ #define _MACHINE_CPUFUNC_H_ #ifndef _SYS_CDEFS_H_ #error this file needs sys/cdefs.h as a prerequisite #endif struct region_descriptor; #define readb(va) (*(volatile uint8_t *) (va)) #define readw(va) (*(volatile uint16_t *) (va)) #define readl(va) (*(volatile uint32_t *) (va)) #define writeb(va, d) (*(volatile uint8_t *) (va) = (d)) #define writew(va, d) (*(volatile uint16_t *) (va) = (d)) #define writel(va, d) (*(volatile uint32_t *) (va) = (d)) #if defined(__GNUCLIKE_ASM) && defined(__CC_SUPPORTS___INLINE) static __inline void breakpoint(void) { __asm __volatile("int $3"); } static __inline __pure2 u_int bsfl(u_int mask) { u_int result; __asm("bsfl %1,%0" : "=r" (result) : "rm" (mask) : "cc"); return (result); } static __inline __pure2 u_int bsrl(u_int mask) { u_int result; __asm("bsrl %1,%0" : "=r" (result) : "rm" (mask) : "cc"); return (result); } static __inline void clflush(u_long addr) { __asm __volatile("clflush %0" : : "m" (*(char *)addr)); } static __inline void clflushopt(u_long addr) { __asm __volatile(".byte 0x66;clflush %0" : : "m" (*(char *)addr)); } static __inline void clts(void) { __asm __volatile("clts"); } static __inline void disable_intr(void) { __asm __volatile("cli" : : : "memory"); } +#ifdef _KERNEL static __inline void do_cpuid(u_int ax, u_int *p) { __asm __volatile("cpuid" - : "=a" (p[0]), "=b" (p[1]), "=c" (p[2]), "=d" (p[3]) - : "0" (ax)); + : "=a" (p[0]), "=b" (p[1]), "=c" (p[2]), "=d" (p[3]) + : "0" (ax)); } static __inline void cpuid_count(u_int ax, u_int cx, u_int *p) { __asm __volatile("cpuid" - : "=a" (p[0]), "=b" (p[1]), "=c" (p[2]), "=d" (p[3]) - : "0" (ax), "c" (cx)); + : "=a" (p[0]), "=b" (p[1]), "=c" (p[2]), "=d" (p[3]) + : "0" (ax), "c" (cx)); } +#else +static __inline void +do_cpuid(u_int ax, u_int *p) +{ + __asm __volatile( + "pushl\t%%ebx\n\t" + "cpuid\n\t" + "movl\t%%ebx,%1\n\t" + "popl\t%%ebx" + : "=a" (p[0]), "=DS" (p[1]), "=c" (p[2]), "=d" (p[3]) + : "0" (ax)); +} + +static __inline void +cpuid_count(u_int ax, u_int cx, u_int *p) +{ + __asm __volatile( + "pushl\t%%ebx\n\t" + "cpuid\n\t" + "movl\t%%ebx,%1\n\t" + "popl\t%%ebx" + : "=a" (p[0]), "=DS" (p[1]), "=c" (p[2]), "=d" (p[3]) + : "0" (ax), "c" (cx)); +} +#endif static __inline void enable_intr(void) { __asm __volatile("sti"); } static __inline void cpu_monitor(const void *addr, u_long extensions, u_int hints) { __asm __volatile("monitor" : : "a" (addr), "c" (extensions), "d" (hints)); } static __inline void cpu_mwait(u_long extensions, u_int hints) { __asm __volatile("mwait" : : "a" (hints), "c" (extensions)); } static __inline void lfence(void) { __asm __volatile("lfence" : : : "memory"); } static __inline void mfence(void) { __asm __volatile("mfence" : : : "memory"); } static __inline void sfence(void) { __asm __volatile("sfence" : : : "memory"); } #ifdef _KERNEL #define 
HAVE_INLINE_FFS static __inline __pure2 int ffs(int mask) { /* * Note that gcc-2's builtin ffs would be used if we didn't declare * this inline or turn off the builtin. The builtin is faster but * broken in gcc-2.4.5 and slower but working in gcc-2.5 and later * versions. */ return (mask == 0 ? mask : (int)bsfl((u_int)mask) + 1); } #define HAVE_INLINE_FFSL static __inline __pure2 int ffsl(long mask) { return (ffs((int)mask)); } #define HAVE_INLINE_FLS static __inline __pure2 int fls(int mask) { return (mask == 0 ? mask : (int)bsrl((u_int)mask) + 1); } #define HAVE_INLINE_FLSL static __inline __pure2 int flsl(long mask) { return (fls((int)mask)); } #endif /* _KERNEL */ static __inline void halt(void) { __asm __volatile("hlt"); } static __inline u_char inb(u_int port) { u_char data; __asm __volatile("inb %w1, %0" : "=a" (data) : "Nd" (port)); return (data); } static __inline u_int inl(u_int port) { u_int data; __asm __volatile("inl %w1, %0" : "=a" (data) : "Nd" (port)); return (data); } static __inline void insb(u_int port, void *addr, size_t count) { __asm __volatile("cld; rep; insb" : "+D" (addr), "+c" (count) : "d" (port) : "memory"); } static __inline void insw(u_int port, void *addr, size_t count) { __asm __volatile("cld; rep; insw" : "+D" (addr), "+c" (count) : "d" (port) : "memory"); } static __inline void insl(u_int port, void *addr, size_t count) { __asm __volatile("cld; rep; insl" : "+D" (addr), "+c" (count) : "d" (port) : "memory"); } static __inline void invd(void) { __asm __volatile("invd"); } static __inline u_short inw(u_int port) { u_short data; __asm __volatile("inw %w1, %0" : "=a" (data) : "Nd" (port)); return (data); } static __inline void outb(u_int port, u_char data) { __asm __volatile("outb %0, %w1" : : "a" (data), "Nd" (port)); } static __inline void outl(u_int port, u_int data) { __asm __volatile("outl %0, %w1" : : "a" (data), "Nd" (port)); } static __inline void outsb(u_int port, const void *addr, size_t count) { __asm __volatile("cld; rep; outsb" : "+S" (addr), "+c" (count) : "d" (port)); } static __inline void outsw(u_int port, const void *addr, size_t count) { __asm __volatile("cld; rep; outsw" : "+S" (addr), "+c" (count) : "d" (port)); } static __inline void outsl(u_int port, const void *addr, size_t count) { __asm __volatile("cld; rep; outsl" : "+S" (addr), "+c" (count) : "d" (port)); } static __inline void outw(u_int port, u_short data) { __asm __volatile("outw %0, %w1" : : "a" (data), "Nd" (port)); } static __inline void ia32_pause(void) { __asm __volatile("pause"); } static __inline u_int read_eflags(void) { u_int ef; __asm __volatile("pushfl; popl %0" : "=r" (ef)); return (ef); } static __inline uint64_t rdmsr(u_int msr) { uint64_t rv; __asm __volatile("rdmsr" : "=A" (rv) : "c" (msr)); return (rv); } static __inline uint32_t rdmsr32(u_int msr) { uint32_t low; __asm __volatile("rdmsr" : "=a" (low) : "c" (msr) : "edx"); return (low); } static __inline uint64_t rdpmc(u_int pmc) { uint64_t rv; __asm __volatile("rdpmc" : "=A" (rv) : "c" (pmc)); return (rv); } static __inline uint64_t rdtsc(void) { uint64_t rv; __asm __volatile("rdtsc" : "=A" (rv)); return (rv); } static __inline uint64_t rdtscp(void) { uint64_t rv; __asm __volatile("rdtscp" : "=A" (rv) : : "ecx"); return (rv); } static __inline uint32_t rdtsc32(void) { uint32_t rv; __asm __volatile("rdtsc" : "=a" (rv) : : "edx"); return (rv); } static __inline void wbinvd(void) { __asm __volatile("wbinvd"); } static __inline void write_eflags(u_int ef) { __asm __volatile("pushl %0; popfl" : : "r" (ef)); } static 
__inline void wrmsr(u_int msr, uint64_t newval) { __asm __volatile("wrmsr" : : "A" (newval), "c" (msr)); } static __inline void load_cr0(u_int data) { __asm __volatile("movl %0,%%cr0" : : "r" (data)); } static __inline u_int rcr0(void) { u_int data; __asm __volatile("movl %%cr0,%0" : "=r" (data)); return (data); } static __inline u_int rcr2(void) { u_int data; __asm __volatile("movl %%cr2,%0" : "=r" (data)); return (data); } static __inline void load_cr3(u_int data) { __asm __volatile("movl %0,%%cr3" : : "r" (data) : "memory"); } static __inline u_int rcr3(void) { u_int data; __asm __volatile("movl %%cr3,%0" : "=r" (data)); return (data); } static __inline void load_cr4(u_int data) { __asm __volatile("movl %0,%%cr4" : : "r" (data)); } static __inline u_int rcr4(void) { u_int data; __asm __volatile("movl %%cr4,%0" : "=r" (data)); return (data); } static __inline uint64_t rxcr(u_int reg) { u_int low, high; __asm __volatile("xgetbv" : "=a" (low), "=d" (high) : "c" (reg)); return (low | ((uint64_t)high << 32)); } static __inline void load_xcr(u_int reg, uint64_t val) { u_int low, high; low = val; high = val >> 32; __asm __volatile("xsetbv" : : "c" (reg), "a" (low), "d" (high)); } /* * Global TLB flush (except for thise for pages marked PG_G) */ static __inline void invltlb(void) { load_cr3(rcr3()); } /* * TLB flush for an individual page (even if it has PG_G). * Only works on 486+ CPUs (i386 does not have PG_G). */ static __inline void invlpg(u_int addr) { __asm __volatile("invlpg %0" : : "m" (*(char *)addr) : "memory"); } static __inline u_short rfs(void) { u_short sel; __asm __volatile("movw %%fs,%0" : "=rm" (sel)); return (sel); } static __inline uint64_t rgdt(void) { uint64_t gdtr; __asm __volatile("sgdt %0" : "=m" (gdtr)); return (gdtr); } static __inline u_short rgs(void) { u_short sel; __asm __volatile("movw %%gs,%0" : "=rm" (sel)); return (sel); } static __inline uint64_t ridt(void) { uint64_t idtr; __asm __volatile("sidt %0" : "=m" (idtr)); return (idtr); } static __inline u_short rldt(void) { u_short ldtr; __asm __volatile("sldt %0" : "=g" (ldtr)); return (ldtr); } static __inline u_short rss(void) { u_short sel; __asm __volatile("movw %%ss,%0" : "=rm" (sel)); return (sel); } static __inline u_short rtr(void) { u_short tr; __asm __volatile("str %0" : "=g" (tr)); return (tr); } static __inline void load_fs(u_short sel) { __asm __volatile("movw %0,%%fs" : : "rm" (sel)); } static __inline void load_gs(u_short sel) { __asm __volatile("movw %0,%%gs" : : "rm" (sel)); } static __inline void lidt(struct region_descriptor *addr) { __asm __volatile("lidt (%0)" : : "r" (addr)); } static __inline void lldt(u_short sel) { __asm __volatile("lldt %0" : : "r" (sel)); } static __inline void ltr(u_short sel) { __asm __volatile("ltr %0" : : "r" (sel)); } static __inline u_int rdr0(void) { u_int data; __asm __volatile("movl %%dr0,%0" : "=r" (data)); return (data); } static __inline void load_dr0(u_int dr0) { __asm __volatile("movl %0,%%dr0" : : "r" (dr0)); } static __inline u_int rdr1(void) { u_int data; __asm __volatile("movl %%dr1,%0" : "=r" (data)); return (data); } static __inline void load_dr1(u_int dr1) { __asm __volatile("movl %0,%%dr1" : : "r" (dr1)); } static __inline u_int rdr2(void) { u_int data; __asm __volatile("movl %%dr2,%0" : "=r" (data)); return (data); } static __inline void load_dr2(u_int dr2) { __asm __volatile("movl %0,%%dr2" : : "r" (dr2)); } static __inline u_int rdr3(void) { u_int data; __asm __volatile("movl %%dr3,%0" : "=r" (data)); return (data); } static __inline void 
load_dr3(u_int dr3) { __asm __volatile("movl %0,%%dr3" : : "r" (dr3)); } static __inline u_int rdr6(void) { u_int data; __asm __volatile("movl %%dr6,%0" : "=r" (data)); return (data); } static __inline void load_dr6(u_int dr6) { __asm __volatile("movl %0,%%dr6" : : "r" (dr6)); } static __inline u_int rdr7(void) { u_int data; __asm __volatile("movl %%dr7,%0" : "=r" (data)); return (data); } static __inline void load_dr7(u_int dr7) { __asm __volatile("movl %0,%%dr7" : : "r" (dr7)); } static __inline u_char read_cyrix_reg(u_char reg) { outb(0x22, reg); return inb(0x23); } static __inline void write_cyrix_reg(u_char reg, u_char data) { outb(0x22, reg); outb(0x23, data); } static __inline register_t intr_disable(void) { register_t eflags; eflags = read_eflags(); disable_intr(); return (eflags); } static __inline void intr_restore(register_t eflags) { write_eflags(eflags); } #else /* !(__GNUCLIKE_ASM && __CC_SUPPORTS___INLINE) */ int breakpoint(void); u_int bsfl(u_int mask); u_int bsrl(u_int mask); void clflush(u_long addr); void clts(void); void cpuid_count(u_int ax, u_int cx, u_int *p); void disable_intr(void); void do_cpuid(u_int ax, u_int *p); void enable_intr(void); void halt(void); void ia32_pause(void); u_char inb(u_int port); u_int inl(u_int port); void insb(u_int port, void *addr, size_t count); void insl(u_int port, void *addr, size_t count); void insw(u_int port, void *addr, size_t count); register_t intr_disable(void); void intr_restore(register_t ef); void invd(void); void invlpg(u_int addr); void invltlb(void); u_short inw(u_int port); void lidt(struct region_descriptor *addr); void lldt(u_short sel); void load_cr0(u_int cr0); void load_cr3(u_int cr3); void load_cr4(u_int cr4); void load_dr0(u_int dr0); void load_dr1(u_int dr1); void load_dr2(u_int dr2); void load_dr3(u_int dr3); void load_dr6(u_int dr6); void load_dr7(u_int dr7); void load_fs(u_short sel); void load_gs(u_short sel); void ltr(u_short sel); void outb(u_int port, u_char data); void outl(u_int port, u_int data); void outsb(u_int port, const void *addr, size_t count); void outsl(u_int port, const void *addr, size_t count); void outsw(u_int port, const void *addr, size_t count); void outw(u_int port, u_short data); u_int rcr0(void); u_int rcr2(void); u_int rcr3(void); u_int rcr4(void); uint64_t rdmsr(u_int msr); uint64_t rdpmc(u_int pmc); u_int rdr0(void); u_int rdr1(void); u_int rdr2(void); u_int rdr3(void); u_int rdr6(void); u_int rdr7(void); uint64_t rdtsc(void); u_char read_cyrix_reg(u_char reg); u_int read_eflags(void); u_int rfs(void); uint64_t rgdt(void); u_int rgs(void); uint64_t ridt(void); u_short rldt(void); u_short rtr(void); void wbinvd(void); void write_cyrix_reg(u_char reg, u_char data); void write_eflags(u_int ef); void wrmsr(u_int msr, uint64_t newval); #endif /* __GNUCLIKE_ASM && __CC_SUPPORTS___INLINE */ void reset_dbregs(void); #ifdef _KERNEL int rdmsr_safe(u_int msr, uint64_t *val); int wrmsr_safe(u_int msr, uint64_t newval); #endif #endif /* !_MACHINE_CPUFUNC_H_ */ Index: projects/import-googletest-1.8.1/sys/kern/kern_resource.c =================================================================== --- projects/import-googletest-1.8.1/sys/kern/kern_resource.c (revision 344270) +++ projects/import-googletest-1.8.1/sys/kern/kern_resource.c (revision 344271) @@ -1,1454 +1,1535 @@ /*- * SPDX-License-Identifier: BSD-3-Clause * * Copyright (c) 1982, 1986, 1991, 1993 * The Regents of the University of California. All rights reserved. * (c) UNIX System Laboratories, Inc. 
* All or some portions of this file are derived from material licensed * to the University of California by American Telephone and Telegraph * Co. or Unix System Laboratories, Inc. and are reproduced herein with * the permission of UNIX System Laboratories, Inc. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * 3. Neither the name of the University nor the names of its contributors * may be used to endorse or promote products derived from this software * without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * @(#)kern_resource.c 8.5 (Berkeley) 1/21/94 */ #include __FBSDID("$FreeBSD$"); #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include static MALLOC_DEFINE(M_PLIMIT, "plimit", "plimit structures"); static MALLOC_DEFINE(M_UIDINFO, "uidinfo", "uidinfo structures"); #define UIHASH(uid) (&uihashtbl[(uid) & uihash]) static struct rwlock uihashtbl_lock; static LIST_HEAD(uihashhead, uidinfo) *uihashtbl; static u_long uihash; /* size of hash table - 1 */ static void calcru1(struct proc *p, struct rusage_ext *ruxp, struct timeval *up, struct timeval *sp); static int donice(struct thread *td, struct proc *chgp, int n); static struct uidinfo *uilookup(uid_t uid); static void ruxagg_locked(struct rusage_ext *rux, struct thread *td); /* * Resource controls and accounting. 
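 * * The which/who pairs handled here follow the getpriority(2) and * setpriority(2) conventions. A minimal userland sketch (values are * illustrative only; a who of 0 names the caller): * * errno = 0; * lowest = getpriority(PRIO_USER, 0); * if (lowest == -1 && errno != 0) * only then did the call fail, since -1 alone is also a valid * nice value.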
*/ #ifndef _SYS_SYSPROTO_H_ struct getpriority_args { int which; int who; }; #endif int sys_getpriority(struct thread *td, struct getpriority_args *uap) { struct proc *p; struct pgrp *pg; int error, low; error = 0; low = PRIO_MAX + 1; switch (uap->which) { case PRIO_PROCESS: if (uap->who == 0) low = td->td_proc->p_nice; else { p = pfind(uap->who); if (p == NULL) break; if (p_cansee(td, p) == 0) low = p->p_nice; PROC_UNLOCK(p); } break; case PRIO_PGRP: sx_slock(&proctree_lock); if (uap->who == 0) { pg = td->td_proc->p_pgrp; PGRP_LOCK(pg); } else { pg = pgfind(uap->who); if (pg == NULL) { sx_sunlock(&proctree_lock); break; } } sx_sunlock(&proctree_lock); LIST_FOREACH(p, &pg->pg_members, p_pglist) { PROC_LOCK(p); if (p->p_state == PRS_NORMAL && p_cansee(td, p) == 0) { if (p->p_nice < low) low = p->p_nice; } PROC_UNLOCK(p); } PGRP_UNLOCK(pg); break; case PRIO_USER: if (uap->who == 0) uap->who = td->td_ucred->cr_uid; sx_slock(&allproc_lock); FOREACH_PROC_IN_SYSTEM(p) { PROC_LOCK(p); if (p->p_state == PRS_NORMAL && p_cansee(td, p) == 0 && p->p_ucred->cr_uid == uap->who) { if (p->p_nice < low) low = p->p_nice; } PROC_UNLOCK(p); } sx_sunlock(&allproc_lock); break; default: error = EINVAL; break; } if (low == PRIO_MAX + 1 && error == 0) error = ESRCH; td->td_retval[0] = low; return (error); } #ifndef _SYS_SYSPROTO_H_ struct setpriority_args { int which; int who; int prio; }; #endif int sys_setpriority(struct thread *td, struct setpriority_args *uap) { struct proc *curp, *p; struct pgrp *pg; int found = 0, error = 0; curp = td->td_proc; switch (uap->which) { case PRIO_PROCESS: if (uap->who == 0) { PROC_LOCK(curp); error = donice(td, curp, uap->prio); PROC_UNLOCK(curp); } else { p = pfind(uap->who); if (p == NULL) break; error = p_cansee(td, p); if (error == 0) error = donice(td, p, uap->prio); PROC_UNLOCK(p); } found++; break; case PRIO_PGRP: sx_slock(&proctree_lock); if (uap->who == 0) { pg = curp->p_pgrp; PGRP_LOCK(pg); } else { pg = pgfind(uap->who); if (pg == NULL) { sx_sunlock(&proctree_lock); break; } } sx_sunlock(&proctree_lock); LIST_FOREACH(p, &pg->pg_members, p_pglist) { PROC_LOCK(p); if (p->p_state == PRS_NORMAL && p_cansee(td, p) == 0) { error = donice(td, p, uap->prio); found++; } PROC_UNLOCK(p); } PGRP_UNLOCK(pg); break; case PRIO_USER: if (uap->who == 0) uap->who = td->td_ucred->cr_uid; sx_slock(&allproc_lock); FOREACH_PROC_IN_SYSTEM(p) { PROC_LOCK(p); if (p->p_state == PRS_NORMAL && p->p_ucred->cr_uid == uap->who && p_cansee(td, p) == 0) { error = donice(td, p, uap->prio); found++; } PROC_UNLOCK(p); } sx_sunlock(&allproc_lock); break; default: error = EINVAL; break; } if (found == 0 && error == 0) error = ESRCH; return (error); } /* * Set "nice" for a (whole) process. */ static int donice(struct thread *td, struct proc *p, int n) { int error; PROC_LOCK_ASSERT(p, MA_OWNED); if ((error = p_cansched(td, p))) return (error); if (n > PRIO_MAX) n = PRIO_MAX; if (n < PRIO_MIN) n = PRIO_MIN; if (n < p->p_nice && priv_check(td, PRIV_SCHED_SETPRIORITY) != 0) return (EACCES); sched_nice(p, n); return (0); } static int unprivileged_idprio; SYSCTL_INT(_security_bsd, OID_AUTO, unprivileged_idprio, CTLFLAG_RW, &unprivileged_idprio, 0, "Allow non-root users to set an idle priority"); /* * Set realtime priority for LWP. 
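 * * A minimal userland sketch of the interface this handler backs * (rtprio_thread(2); the values are illustrative, and an lwpid of 0 * names the calling thread, as the code below checks): * * struct rtprio rtp; * * rtp.type = RTP_PRIO_IDLE; * rtp.prio = RTP_PRIO_MAX; * if (rtprio_thread(RTP_SET, 0, &rtp) != 0) * the set failed; it needs privilege unless the * security.bsd.unprivileged_idprio sysctl is nonzero.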
*/ #ifndef _SYS_SYSPROTO_H_ struct rtprio_thread_args { int function; lwpid_t lwpid; struct rtprio *rtp; }; #endif int sys_rtprio_thread(struct thread *td, struct rtprio_thread_args *uap) { struct proc *p; struct rtprio rtp; struct thread *td1; int cierror, error; /* Perform copyin before acquiring locks if needed. */ if (uap->function == RTP_SET) cierror = copyin(uap->rtp, &rtp, sizeof(struct rtprio)); else cierror = 0; if (uap->lwpid == 0 || uap->lwpid == td->td_tid) { p = td->td_proc; td1 = td; PROC_LOCK(p); } else { /* Only look up thread in current process */ td1 = tdfind(uap->lwpid, curproc->p_pid); if (td1 == NULL) return (ESRCH); p = td1->td_proc; } switch (uap->function) { case RTP_LOOKUP: if ((error = p_cansee(td, p))) break; pri_to_rtp(td1, &rtp); PROC_UNLOCK(p); return (copyout(&rtp, uap->rtp, sizeof(struct rtprio))); case RTP_SET: if ((error = p_cansched(td, p)) || (error = cierror)) break; /* Disallow setting rtprio in most cases if not superuser. */ /* * Realtime priority has to be restricted for reasons which * should be obvious. However, for idleprio processes, there is * a potential for system deadlock if an idleprio process gains * a lock on a resource that other processes need (and the * idleprio process can't run due to a CPU-bound normal * process). Fix me! XXX * * This problem is not only related to idleprio processes. * A user level program can obtain a file lock and hold it * indefinitely. Additionally, without idleprio processes it is * still conceivable that a program with low priority will never * get to run. In short, allowing this feature might make it * easier to lock a resource indefinitely, but it is not the * only thing that makes it possible. */ if (RTP_PRIO_BASE(rtp.type) == RTP_PRIO_REALTIME || (RTP_PRIO_BASE(rtp.type) == RTP_PRIO_IDLE && unprivileged_idprio == 0)) { error = priv_check(td, PRIV_SCHED_RTPRIO); if (error) break; } error = rtp_to_pri(&rtp, td1); break; default: error = EINVAL; break; } PROC_UNLOCK(p); return (error); } /* * Set realtime priority. */ #ifndef _SYS_SYSPROTO_H_ struct rtprio_args { int function; pid_t pid; struct rtprio *rtp; }; #endif int sys_rtprio(struct thread *td, struct rtprio_args *uap) { struct proc *p; struct thread *tdp; struct rtprio rtp; int cierror, error; /* Perform copyin before acquiring locks if needed. */ if (uap->function == RTP_SET) cierror = copyin(uap->rtp, &rtp, sizeof(struct rtprio)); else cierror = 0; if (uap->pid == 0) { p = td->td_proc; PROC_LOCK(p); } else { p = pfind(uap->pid); if (p == NULL) return (ESRCH); } switch (uap->function) { case RTP_LOOKUP: if ((error = p_cansee(td, p))) break; /* * Return OUR priority if no pid specified, * or if one is, report the highest priority * in the process. There isn't much more you can do as * there is only room to return a single priority. * Note: specifying our own pid is not the same * as leaving it zero. */ if (uap->pid == 0) { pri_to_rtp(td, &rtp); } else { struct rtprio rtp2; rtp.type = RTP_PRIO_IDLE; rtp.prio = RTP_PRIO_MAX; FOREACH_THREAD_IN_PROC(p, tdp) { pri_to_rtp(tdp, &rtp2); if (rtp2.type < rtp.type || (rtp2.type == rtp.type && rtp2.prio < rtp.prio)) { rtp.type = rtp2.type; rtp.prio = rtp2.prio; } } } PROC_UNLOCK(p); return (copyout(&rtp, uap->rtp, sizeof(struct rtprio))); case RTP_SET: if ((error = p_cansched(td, p)) || (error = cierror)) break; /* * Disallow setting rtprio in most cases if not superuser. * See the comment in sys_rtprio_thread about idprio * threads holding a lock.
*/ if (RTP_PRIO_BASE(rtp.type) == RTP_PRIO_REALTIME || (RTP_PRIO_BASE(rtp.type) == RTP_PRIO_IDLE && !unprivileged_idprio)) { error = priv_check(td, PRIV_SCHED_RTPRIO); if (error) break; } /* * If we are setting our own priority, set just our * thread but if we are doing another process, * do all the threads on that process. If we * specify our own pid we do the latter. */ if (uap->pid == 0) { error = rtp_to_pri(&rtp, td); } else { FOREACH_THREAD_IN_PROC(p, td) { if ((error = rtp_to_pri(&rtp, td)) != 0) break; } } break; default: error = EINVAL; break; } PROC_UNLOCK(p); return (error); } int rtp_to_pri(struct rtprio *rtp, struct thread *td) { u_char newpri, oldclass, oldpri; switch (RTP_PRIO_BASE(rtp->type)) { case RTP_PRIO_REALTIME: if (rtp->prio > RTP_PRIO_MAX) return (EINVAL); newpri = PRI_MIN_REALTIME + rtp->prio; break; case RTP_PRIO_NORMAL: if (rtp->prio > (PRI_MAX_TIMESHARE - PRI_MIN_TIMESHARE)) return (EINVAL); newpri = PRI_MIN_TIMESHARE + rtp->prio; break; case RTP_PRIO_IDLE: if (rtp->prio > RTP_PRIO_MAX) return (EINVAL); newpri = PRI_MIN_IDLE + rtp->prio; break; default: return (EINVAL); } thread_lock(td); oldclass = td->td_pri_class; sched_class(td, rtp->type); /* XXX fix */ oldpri = td->td_user_pri; sched_user_prio(td, newpri); if (td->td_user_pri != oldpri && (oldclass != RTP_PRIO_NORMAL || td->td_pri_class != RTP_PRIO_NORMAL)) sched_prio(td, td->td_user_pri); if (TD_ON_UPILOCK(td) && oldpri != newpri) { critical_enter(); thread_unlock(td); umtx_pi_adjust(td, oldpri); critical_exit(); } else thread_unlock(td); return (0); } void pri_to_rtp(struct thread *td, struct rtprio *rtp) { thread_lock(td); switch (PRI_BASE(td->td_pri_class)) { case PRI_REALTIME: rtp->prio = td->td_base_user_pri - PRI_MIN_REALTIME; break; case PRI_TIMESHARE: rtp->prio = td->td_base_user_pri - PRI_MIN_TIMESHARE; break; case PRI_IDLE: rtp->prio = td->td_base_user_pri - PRI_MIN_IDLE; break; default: break; } rtp->type = td->td_pri_class; thread_unlock(td); } #if defined(COMPAT_43) #ifndef _SYS_SYSPROTO_H_ struct osetrlimit_args { u_int which; struct orlimit *rlp; }; #endif int osetrlimit(struct thread *td, struct osetrlimit_args *uap) { struct orlimit olim; struct rlimit lim; int error; if ((error = copyin(uap->rlp, &olim, sizeof(struct orlimit)))) return (error); lim.rlim_cur = olim.rlim_cur; lim.rlim_max = olim.rlim_max; error = kern_setrlimit(td, uap->which, &lim); return (error); } #ifndef _SYS_SYSPROTO_H_ struct ogetrlimit_args { u_int which; struct orlimit *rlp; }; #endif int ogetrlimit(struct thread *td, struct ogetrlimit_args *uap) { struct orlimit olim; struct rlimit rl; int error; if (uap->which >= RLIM_NLIMITS) return (EINVAL); lim_rlimit(td, uap->which, &rl); /* * XXX would be more correct to convert only RLIM_INFINITY to the * old RLIM_INFINITY and fail with EOVERFLOW for other larger * values. Most 64->32 and 32->16 conversions, including not * unimportant ones of uids are even more broken than what we * do here (they blindly truncate). We don't do this correctly * here since we have little experience with EOVERFLOW yet. * Elsewhere, getuid() can't fail... */ olim.rlim_cur = rl.rlim_cur > 0x7fffffff ? 0x7fffffff : rl.rlim_cur; olim.rlim_max = rl.rlim_max > 0x7fffffff ? 
0x7fffffff : rl.rlim_max; error = copyout(&olim, uap->rlp, sizeof(olim)); return (error); } #endif /* COMPAT_43 */ #ifndef _SYS_SYSPROTO_H_ struct __setrlimit_args { u_int which; struct rlimit *rlp; }; #endif int sys_setrlimit(struct thread *td, struct __setrlimit_args *uap) { struct rlimit alim; int error; if ((error = copyin(uap->rlp, &alim, sizeof(struct rlimit)))) return (error); error = kern_setrlimit(td, uap->which, &alim); return (error); } static void lim_cb(void *arg) { struct rlimit rlim; struct thread *td; struct proc *p; p = arg; PROC_LOCK_ASSERT(p, MA_OWNED); /* * Check if the process exceeds its cpu resource allocation. If * it reaches the max, arrange to kill the process in ast(). */ if (p->p_cpulimit == RLIM_INFINITY) return; PROC_STATLOCK(p); FOREACH_THREAD_IN_PROC(p, td) { ruxagg(p, td); } PROC_STATUNLOCK(p); if (p->p_rux.rux_runtime > p->p_cpulimit * cpu_tickrate()) { lim_rlimit_proc(p, RLIMIT_CPU, &rlim); if (p->p_rux.rux_runtime >= rlim.rlim_max * cpu_tickrate()) { killproc(p, "exceeded maximum CPU limit"); } else { if (p->p_cpulimit < rlim.rlim_max) p->p_cpulimit += 5; kern_psignal(p, SIGXCPU); } } if ((p->p_flag & P_WEXIT) == 0) callout_reset_sbt(&p->p_limco, SBT_1S, 0, lim_cb, p, C_PREL(1)); } int kern_setrlimit(struct thread *td, u_int which, struct rlimit *limp) { return (kern_proc_setrlimit(td, td->td_proc, which, limp)); } int kern_proc_setrlimit(struct thread *td, struct proc *p, u_int which, struct rlimit *limp) { struct plimit *newlim, *oldlim; struct rlimit *alimp; struct rlimit oldssiz; int error; if (which >= RLIM_NLIMITS) return (EINVAL); /* * Preserve historical bugs by treating negative limits as unsigned. */ if (limp->rlim_cur < 0) limp->rlim_cur = RLIM_INFINITY; if (limp->rlim_max < 0) limp->rlim_max = RLIM_INFINITY; oldssiz.rlim_cur = 0; newlim = lim_alloc(); PROC_LOCK(p); oldlim = p->p_limit; alimp = &oldlim->pl_rlimit[which]; if (limp->rlim_cur > alimp->rlim_max || limp->rlim_max > alimp->rlim_max) if ((error = priv_check(td, PRIV_PROC_SETRLIMIT))) { PROC_UNLOCK(p); lim_free(newlim); return (error); } if (limp->rlim_cur > limp->rlim_max) limp->rlim_cur = limp->rlim_max; lim_copy(newlim, oldlim); alimp = &newlim->pl_rlimit[which]; switch (which) { case RLIMIT_CPU: if (limp->rlim_cur != RLIM_INFINITY && p->p_cpulimit == RLIM_INFINITY) callout_reset_sbt(&p->p_limco, SBT_1S, 0, lim_cb, p, C_PREL(1)); p->p_cpulimit = limp->rlim_cur; break; case RLIMIT_DATA: if (limp->rlim_cur > maxdsiz) limp->rlim_cur = maxdsiz; if (limp->rlim_max > maxdsiz) limp->rlim_max = maxdsiz; break; case RLIMIT_STACK: if (limp->rlim_cur > maxssiz) limp->rlim_cur = maxssiz; if (limp->rlim_max > maxssiz) limp->rlim_max = maxssiz; oldssiz = *alimp; if (p->p_sysent->sv_fixlimit != NULL) p->p_sysent->sv_fixlimit(&oldssiz, RLIMIT_STACK); break; case RLIMIT_NOFILE: if (limp->rlim_cur > maxfilesperproc) limp->rlim_cur = maxfilesperproc; if (limp->rlim_max > maxfilesperproc) limp->rlim_max = maxfilesperproc; break; case RLIMIT_NPROC: if (limp->rlim_cur > maxprocperuid) limp->rlim_cur = maxprocperuid; if (limp->rlim_max > maxprocperuid) limp->rlim_max = maxprocperuid; if (limp->rlim_cur < 1) limp->rlim_cur = 1; if (limp->rlim_max < 1) limp->rlim_max = 1; break; } if (p->p_sysent->sv_fixlimit != NULL) p->p_sysent->sv_fixlimit(limp, which); *alimp = *limp; p->p_limit = newlim; PROC_UPDATE_COW(p); PROC_UNLOCK(p); lim_free(oldlim); if (which == RLIMIT_STACK && /* * Skip calls from exec_new_vmspace(), done when stack is * not mapped yet. 
*/ (td != curthread || (p->p_flag & P_INEXEC) == 0)) { /* * Stack is allocated to the max at exec time with only * "rlim_cur" bytes accessible. If stack limit is going * up make more accessible, if going down make inaccessible. */ if (limp->rlim_cur != oldssiz.rlim_cur) { vm_offset_t addr; vm_size_t size; vm_prot_t prot; if (limp->rlim_cur > oldssiz.rlim_cur) { prot = p->p_sysent->sv_stackprot; size = limp->rlim_cur - oldssiz.rlim_cur; addr = p->p_sysent->sv_usrstack - limp->rlim_cur; } else { prot = VM_PROT_NONE; size = oldssiz.rlim_cur - limp->rlim_cur; addr = p->p_sysent->sv_usrstack - oldssiz.rlim_cur; } addr = trunc_page(addr); size = round_page(size); (void)vm_map_protect(&p->p_vmspace->vm_map, addr, addr + size, prot, FALSE); } } return (0); } #ifndef _SYS_SYSPROTO_H_ struct __getrlimit_args { u_int which; struct rlimit *rlp; }; #endif /* ARGSUSED */ int sys_getrlimit(struct thread *td, struct __getrlimit_args *uap) { struct rlimit rlim; int error; if (uap->which >= RLIM_NLIMITS) return (EINVAL); lim_rlimit(td, uap->which, &rlim); error = copyout(&rlim, uap->rlp, sizeof(struct rlimit)); return (error); } /* * Transform the running time and tick information for children of proc p * into user and system time usage. */ void calccru(struct proc *p, struct timeval *up, struct timeval *sp) { PROC_LOCK_ASSERT(p, MA_OWNED); calcru1(p, &p->p_crux, up, sp); } /* * Transform the running time and tick information in proc p into user * and system time usage. If appropriate, include the current time slice * on this CPU. */ void calcru(struct proc *p, struct timeval *up, struct timeval *sp) { struct thread *td; uint64_t runtime, u; PROC_LOCK_ASSERT(p, MA_OWNED); PROC_STATLOCK_ASSERT(p, MA_OWNED); /* * If we are getting stats for the current process, then add in the * stats that this thread has accumulated in its current time slice. * We reset the thread and CPU state as if we had performed a context * switch right here. */ td = curthread; if (td->td_proc == p) { u = cpu_ticks(); runtime = u - PCPU_GET(switchtime); td->td_runtime += runtime; td->td_incruntime += runtime; PCPU_SET(switchtime, u); } /* Make sure the per-thread stats are current. */ FOREACH_THREAD_IN_PROC(p, td) { if (td->td_incruntime == 0) continue; ruxagg(p, td); } calcru1(p, &p->p_rux, up, sp); } /* Collect resource usage for a single thread. */ void rufetchtd(struct thread *td, struct rusage *ru) { struct proc *p; uint64_t runtime, u; p = td->td_proc; PROC_STATLOCK_ASSERT(p, MA_OWNED); THREAD_LOCK_ASSERT(td, MA_OWNED); /* * If we are getting stats for the current thread, then add in the * stats that this thread has accumulated in its current time slice. * We reset the thread and CPU state as if we had performed a context * switch right here. */ if (td == curthread) { u = cpu_ticks(); runtime = u - PCPU_GET(switchtime); td->td_runtime += runtime; td->td_incruntime += runtime; PCPU_SET(switchtime, u); } ruxagg(p, td); *ru = td->td_ru; calcru1(p, &td->td_rux, &ru->ru_utime, &ru->ru_stime); } +/* XXX: the MI version is too slow to use: */ +#ifndef __HAVE_INLINE_FLSLL +#define flsll(x) (fls((x) >> 32) != 0 ? fls((x) >> 32) + 32 : fls(x)) +#endif + static uint64_t mul64_by_fraction(uint64_t a, uint64_t b, uint64_t c) { + uint64_t acc, bh, bl; + int i, s, sa, sb; + /* - * Compute floor(a * (b / c)) without overflowing, (b / c) <= 1.0. + * Calculate (a * b) / c accurately enough without overflowing. c + * must be nonzero, and its top bit must be 0. a or b must be + * <= c, and the implementation is tuned for b <= c. 
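+ * + * (A worked instance, in the units described below: if c = tt is at + * most 1 << 26 stathz ticks, about 6 days of CPU time at stathz = + * 128, then c has at least n = 38 top zero bits, so the loop below + * takes at most 1 + 64 / 38 < 3 iterations and errs by at most 2.)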
+ * + * The comments about times are for use in calcru1() with units of + * microseconds for 'a' and stathz ticks at 128 Hz for b and c. + * + * Let n be the number of top zero bits in c. Each iteration + * either returns, or reduces b by right shifting it by at least n. + * The number of iterations is at most 1 + 64 / n, and the error is + * at most the number of iterations. + * + * It is very unusual to need even 2 iterations. Previous + * implementations overflowed essentially by returning early in the + * first iteration, with n = 38 giving overflow at 105+ hours and + * n = 32 giving overflow at 388+ days despite a more careful + * calculation. 388 days is a reasonable uptime, and the calculation + * needs to work for the uptime times the number of CPUs since 'a' + * is per-process. */ - return ((a / c) * b + (a % c) * (b / c) + (a % c) * (b % c) / c); + if (a >= (uint64_t)1 << 63) + return (0); /* Unsupported arg -- can't happen. */ + acc = 0; + for (i = 0; i < 128; i++) { + sa = flsll(a); + sb = flsll(b); + if (sa + sb <= 64) + /* Up to 105 hours on first iteration. */ + return (acc + (a * b) / c); + if (a >= c) { + /* + * This reduction is based on a = q * c + r, with the + * remainder r < c. 'a' may be large to start, and + * moving bits from b into 'a' at the end of the loop + * sets the top bit of 'a', so the reduction makes + * significant progress. + */ + acc += (a / c) * b; + a %= c; + sa = flsll(a); + if (sa + sb <= 64) + /* Up to 388 days on first iteration. */ + return (acc + (a * b) / c); + } + + /* + * This step writes a * b as a * ((bh << s) + bl) = + * a * (bh << s) + a * bl = (a << s) * bh + a * bl. The 2 + * additive terms are handled separately. Splitting in + * this way is linear except for rounding errors. + * + * s = 64 - sa is the maximum such that a << s fits in 64 + * bits. Since a < c and c has at least 1 zero top bit, + * sa < 64 and s > 0. Thus this step makes progress by + * reducing b (it increases 'a', but taking remainders on + * the next iteration completes the reduction). + * + * Finally, the choice for s is just what is needed to keep + * a * bl from overflowing, so we don't need complications + * like a recursive call mul64_by_fraction(a, bl, c) to + * handle the second additive term. + */ + s = 64 - sa; + bh = b >> s; + bl = b - (bh << s); + acc += (a * bl) / c; + a <<= s; + b = bh; + } + return (0); /* Algorithm failure -- can't happen. */ } static void calcru1(struct proc *p, struct rusage_ext *ruxp, struct timeval *up, struct timeval *sp) { /* {user, system, interrupt, total} {ticks, usec}: */ uint64_t ut, uu, st, su, it, tt, tu; ut = ruxp->rux_uticks; st = ruxp->rux_sticks; it = ruxp->rux_iticks; tt = ut + st + it; if (tt == 0) { /* Avoid divide by zero */ st = 1; tt = 1; } tu = cputick2usec(ruxp->rux_runtime); if ((int64_t)tu < 0) { /* XXX: this should be an assert /phk */ printf("calcru: negative runtime of %jd usec for pid %d (%s)\n", (intmax_t)tu, p->p_pid, p->p_comm); tu = ruxp->rux_tu; } + /* Subdivide tu. Avoid overflow in the multiplications. */ + if (__predict_true(tu <= ((uint64_t)1 << 38) && tt <= (1 << 26))) { + /* Up to 76 hours when stathz is 128. */ + uu = (tu * ut) / tt; + su = (tu * st) / tt; + } else { + uu = mul64_by_fraction(tu, ut, tt); + su = mul64_by_fraction(tu, st, tt); + } + if (tu >= ruxp->rux_tu) { /* * The normal case, time increased. * Enforce monotonicity of bucketed numbers.
*/ - uu = mul64_by_fraction(tu, ut, tt); if (uu < ruxp->rux_uu) uu = ruxp->rux_uu; - su = mul64_by_fraction(tu, st, tt); if (su < ruxp->rux_su) su = ruxp->rux_su; } else if (tu + 3 > ruxp->rux_tu || 101 * tu > 100 * ruxp->rux_tu) { /* * When we calibrate the cputicker, it is not uncommon to * see the presumably fixed frequency increase slightly over * time as a result of thermal stabilization and NTP * discipline (of the reference clock). We therefore ignore * a bit of backwards slop because we expect to catch up * shortly. We use a 3 microsecond limit to catch low * counts and a 1% limit for high counts. */ uu = ruxp->rux_uu; su = ruxp->rux_su; tu = ruxp->rux_tu; } else { /* tu < ruxp->rux_tu */ /* * What happened here was likely that a laptop, which ran at * a reduced clock frequency at boot, kicked into high gear. * The wisdom of spamming this message in that case is * dubious, but it might also be indicative of something * serious, so let's keep it and hope laptops can be made * more truthful about their CPU speed via ACPI. */ printf("calcru: runtime went backwards from %ju usec " "to %ju usec for pid %d (%s)\n", (uintmax_t)ruxp->rux_tu, (uintmax_t)tu, p->p_pid, p->p_comm); - uu = mul64_by_fraction(tu, ut, tt); - su = mul64_by_fraction(tu, st, tt); } ruxp->rux_uu = uu; ruxp->rux_su = su; ruxp->rux_tu = tu; up->tv_sec = uu / 1000000; up->tv_usec = uu % 1000000; sp->tv_sec = su / 1000000; sp->tv_usec = su % 1000000; } #ifndef _SYS_SYSPROTO_H_ struct getrusage_args { int who; struct rusage *rusage; }; #endif int sys_getrusage(struct thread *td, struct getrusage_args *uap) { struct rusage ru; int error; error = kern_getrusage(td, uap->who, &ru); if (error == 0) error = copyout(&ru, uap->rusage, sizeof(struct rusage)); return (error); } int kern_getrusage(struct thread *td, int who, struct rusage *rup) { struct proc *p; int error; error = 0; p = td->td_proc; PROC_LOCK(p); switch (who) { case RUSAGE_SELF: rufetchcalc(p, rup, &rup->ru_utime, &rup->ru_stime); break; case RUSAGE_CHILDREN: *rup = p->p_stats->p_cru; calccru(p, &rup->ru_utime, &rup->ru_stime); break; case RUSAGE_THREAD: PROC_STATLOCK(p); thread_lock(td); rufetchtd(td, rup); thread_unlock(td); PROC_STATUNLOCK(p); break; default: error = EINVAL; } PROC_UNLOCK(p); return (error); } void rucollect(struct rusage *ru, struct rusage *ru2) { long *ip, *ip2; int i; if (ru->ru_maxrss < ru2->ru_maxrss) ru->ru_maxrss = ru2->ru_maxrss; ip = &ru->ru_first; ip2 = &ru2->ru_first; for (i = &ru->ru_last - &ru->ru_first; i >= 0; i--) *ip++ += *ip2++; } void ruadd(struct rusage *ru, struct rusage_ext *rux, struct rusage *ru2, struct rusage_ext *rux2) { rux->rux_runtime += rux2->rux_runtime; rux->rux_uticks += rux2->rux_uticks; rux->rux_sticks += rux2->rux_sticks; rux->rux_iticks += rux2->rux_iticks; rux->rux_uu += rux2->rux_uu; rux->rux_su += rux2->rux_su; rux->rux_tu += rux2->rux_tu; rucollect(ru, ru2); } /* * Aggregate tick counts into the proc's rusage_ext.
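 * ruxagg() below moves td_incruntime and the td_uticks, td_sticks and * td_iticks counters from the thread into the rusage_ext totals and * then zeroes the per-thread values, so each tick is counted exactly * once.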
*/ static void ruxagg_locked(struct rusage_ext *rux, struct thread *td) { THREAD_LOCK_ASSERT(td, MA_OWNED); PROC_STATLOCK_ASSERT(td->td_proc, MA_OWNED); rux->rux_runtime += td->td_incruntime; rux->rux_uticks += td->td_uticks; rux->rux_sticks += td->td_sticks; rux->rux_iticks += td->td_iticks; } void ruxagg(struct proc *p, struct thread *td) { thread_lock(td); ruxagg_locked(&p->p_rux, td); ruxagg_locked(&td->td_rux, td); td->td_incruntime = 0; td->td_uticks = 0; td->td_iticks = 0; td->td_sticks = 0; thread_unlock(td); } /* * Update the rusage_ext structure and fetch a valid aggregate rusage * for proc p if storage for one is supplied. */ void rufetch(struct proc *p, struct rusage *ru) { struct thread *td; PROC_STATLOCK_ASSERT(p, MA_OWNED); *ru = p->p_ru; if (p->p_numthreads > 0) { FOREACH_THREAD_IN_PROC(p, td) { ruxagg(p, td); rucollect(ru, &td->td_ru); } } } /* * Atomically perform a rufetch and a calcru together. * Consumers can safely assume that calcru is executed only once * rufetch is completed. */ void rufetchcalc(struct proc *p, struct rusage *ru, struct timeval *up, struct timeval *sp) { PROC_STATLOCK(p); rufetch(p, ru); calcru(p, up, sp); PROC_STATUNLOCK(p); } /* * Allocate a new resource limits structure and initialize its * reference count and mutex pointer. */ struct plimit * lim_alloc() { struct plimit *limp; limp = malloc(sizeof(struct plimit), M_PLIMIT, M_WAITOK); refcount_init(&limp->pl_refcnt, 1); return (limp); } struct plimit * lim_hold(struct plimit *limp) { refcount_acquire(&limp->pl_refcnt); return (limp); } void lim_fork(struct proc *p1, struct proc *p2) { PROC_LOCK_ASSERT(p1, MA_OWNED); PROC_LOCK_ASSERT(p2, MA_OWNED); p2->p_limit = lim_hold(p1->p_limit); callout_init_mtx(&p2->p_limco, &p2->p_mtx, 0); if (p1->p_cpulimit != RLIM_INFINITY) callout_reset_sbt(&p2->p_limco, SBT_1S, 0, lim_cb, p2, C_PREL(1)); } void lim_free(struct plimit *limp) { if (refcount_release(&limp->pl_refcnt)) free((void *)limp, M_PLIMIT); } /* * Make a copy of the plimit structure. * We share these structures copy-on-write after fork. */ void lim_copy(struct plimit *dst, struct plimit *src) { KASSERT(dst->pl_refcnt <= 1, ("lim_copy to shared limit")); bcopy(src->pl_rlimit, dst->pl_rlimit, sizeof(src->pl_rlimit)); } /* * Return the hard limit for a particular system resource. The * which parameter specifies the index into the rlimit array. */ rlim_t lim_max(struct thread *td, int which) { struct rlimit rl; lim_rlimit(td, which, &rl); return (rl.rlim_max); } rlim_t lim_max_proc(struct proc *p, int which) { struct rlimit rl; lim_rlimit_proc(p, which, &rl); return (rl.rlim_max); } /* * Return the current (soft) limit for a particular system resource. * The which parameter specifies the index into the rlimit array. */ rlim_t (lim_cur)(struct thread *td, int which) { struct rlimit rl; lim_rlimit(td, which, &rl); return (rl.rlim_cur); } rlim_t lim_cur_proc(struct proc *p, int which) { struct rlimit rl; lim_rlimit_proc(p, which, &rl); return (rl.rlim_cur); } /* * Return a copy of the entire rlimit structure for the system limit * specified by 'which' in the rlimit structure pointed to by 'rlp'.
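 * * Typical in-kernel use, as lim_cur() above shows (a sketch): * * struct rlimit rl; * * lim_rlimit(curthread, RLIMIT_NOFILE, &rl); * rl.rlim_cur now holds the soft limit, rl.rlim_max the hard limit.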
*/ void lim_rlimit(struct thread *td, int which, struct rlimit *rlp) { struct proc *p = td->td_proc; MPASS(td == curthread); KASSERT(which >= 0 && which < RLIM_NLIMITS, ("request for invalid resource limit")); *rlp = td->td_limit->pl_rlimit[which]; if (p->p_sysent->sv_fixlimit != NULL) p->p_sysent->sv_fixlimit(rlp, which); } void lim_rlimit_proc(struct proc *p, int which, struct rlimit *rlp) { PROC_LOCK_ASSERT(p, MA_OWNED); KASSERT(which >= 0 && which < RLIM_NLIMITS, ("request for invalid resource limit")); *rlp = p->p_limit->pl_rlimit[which]; if (p->p_sysent->sv_fixlimit != NULL) p->p_sysent->sv_fixlimit(rlp, which); } void uihashinit() { uihashtbl = hashinit(maxproc / 16, M_UIDINFO, &uihash); rw_init(&uihashtbl_lock, "uidinfo hash"); } /* * Look up a uidinfo struct for the parameter uid. * uihashtbl_lock must be locked. * Increase refcount on uidinfo struct returned. */ static struct uidinfo * uilookup(uid_t uid) { struct uihashhead *uipp; struct uidinfo *uip; rw_assert(&uihashtbl_lock, RA_LOCKED); uipp = UIHASH(uid); LIST_FOREACH(uip, uipp, ui_hash) if (uip->ui_uid == uid) { uihold(uip); break; } return (uip); } /* * Find or allocate a struct uidinfo for a particular uid. * Returns with uidinfo struct referenced. * uifree() should be called on a struct uidinfo when released. */ struct uidinfo * uifind(uid_t uid) { struct uidinfo *new_uip, *uip; struct ucred *cred; cred = curthread->td_ucred; if (cred->cr_uidinfo->ui_uid == uid) { uip = cred->cr_uidinfo; uihold(uip); return (uip); } else if (cred->cr_ruidinfo->ui_uid == uid) { uip = cred->cr_ruidinfo; uihold(uip); return (uip); } rw_rlock(&uihashtbl_lock); uip = uilookup(uid); rw_runlock(&uihashtbl_lock); if (uip != NULL) return (uip); new_uip = malloc(sizeof(*new_uip), M_UIDINFO, M_WAITOK | M_ZERO); racct_create(&new_uip->ui_racct); refcount_init(&new_uip->ui_ref, 1); new_uip->ui_uid = uid; rw_wlock(&uihashtbl_lock); /* * There's a chance someone created our uidinfo while we * were in malloc and not holding the lock, so we have to * make sure we don't insert a duplicate uidinfo. */ if ((uip = uilookup(uid)) == NULL) { LIST_INSERT_HEAD(UIHASH(uid), new_uip, ui_hash); rw_wunlock(&uihashtbl_lock); uip = new_uip; } else { rw_wunlock(&uihashtbl_lock); racct_destroy(&new_uip->ui_racct); free(new_uip, M_UIDINFO); } return (uip); } /* * Place another refcount on a uidinfo struct. */ void uihold(struct uidinfo *uip) { refcount_acquire(&uip->ui_ref); } /*- * Since uidinfo structs have a long lifetime, we use an * opportunistic refcounting scheme to avoid locking the lookup hash * for each release. * * If the refcount hits 0, we need to free the structure, * which means we need to lock the hash. * Optimal case: * After locking the struct and lowering the refcount, if we find * that we don't need to free, simply unlock and return. * Suboptimal case: * If refcount lowering results in need to free, bump the count * back up, lose the lock and acquire the locks in the proper * order to try again. 
*/ void uifree(struct uidinfo *uip) { if (refcount_release_if_not_last(&uip->ui_ref)) return; rw_wlock(&uihashtbl_lock); if (refcount_release(&uip->ui_ref) == 0) { rw_wunlock(&uihashtbl_lock); return; } racct_destroy(&uip->ui_racct); LIST_REMOVE(uip, ui_hash); rw_wunlock(&uihashtbl_lock); if (uip->ui_sbsize != 0) printf("freeing uidinfo: uid = %d, sbsize = %ld\n", uip->ui_uid, uip->ui_sbsize); if (uip->ui_proccnt != 0) printf("freeing uidinfo: uid = %d, proccnt = %ld\n", uip->ui_uid, uip->ui_proccnt); if (uip->ui_vmsize != 0) printf("freeing uidinfo: uid = %d, swapuse = %lld\n", uip->ui_uid, (unsigned long long)uip->ui_vmsize); free(uip, M_UIDINFO); } #ifdef RACCT void ui_racct_foreach(void (*callback)(struct racct *racct, void *arg2, void *arg3), void (*pre)(void), void (*post)(void), void *arg2, void *arg3) { struct uidinfo *uip; struct uihashhead *uih; rw_rlock(&uihashtbl_lock); if (pre != NULL) (pre)(); for (uih = &uihashtbl[uihash]; uih >= uihashtbl; uih--) { LIST_FOREACH(uip, uih, ui_hash) { (callback)(uip->ui_racct, arg2, arg3); } } if (post != NULL) (post)(); rw_runlock(&uihashtbl_lock); } #endif static inline int chglimit(struct uidinfo *uip, long *limit, int diff, rlim_t max, const char *name) { long new; /* Don't allow them to exceed max, but allow subtraction. */ new = atomic_fetchadd_long(limit, (long)diff) + diff; if (diff > 0 && max != 0) { if (new < 0 || new > max) { atomic_subtract_long(limit, (long)diff); return (0); } } else if (new < 0) printf("negative %s for uid = %d\n", name, uip->ui_uid); return (1); } /* * Change the count associated with number of processes * a given user is using. When 'max' is 0, don't enforce a limit */ int chgproccnt(struct uidinfo *uip, int diff, rlim_t max) { return (chglimit(uip, &uip->ui_proccnt, diff, max, "proccnt")); } /* * Change the total socket buffer size a user has used. */ int chgsbsize(struct uidinfo *uip, u_int *hiwat, u_int to, rlim_t max) { int diff, rv; diff = to - *hiwat; if (diff > 0 && max == 0) { rv = 0; } else { rv = chglimit(uip, &uip->ui_sbsize, diff, max, "sbsize"); if (rv != 0) *hiwat = to; } return (rv); } /* * Change the count associated with number of pseudo-terminals * a given user is using. When 'max' is 0, don't enforce a limit */ int chgptscnt(struct uidinfo *uip, int diff, rlim_t max) { return (chglimit(uip, &uip->ui_ptscnt, diff, max, "ptscnt")); } int chgkqcnt(struct uidinfo *uip, int diff, rlim_t max) { return (chglimit(uip, &uip->ui_kqcnt, diff, max, "kqcnt")); } int chgumtxcnt(struct uidinfo *uip, int diff, rlim_t max) { return (chglimit(uip, &uip->ui_umtxcnt, diff, max, "umtxcnt")); } Index: projects/import-googletest-1.8.1/sys/kern/sys_pipe.c =================================================================== --- projects/import-googletest-1.8.1/sys/kern/sys_pipe.c (revision 344270) +++ projects/import-googletest-1.8.1/sys/kern/sys_pipe.c (revision 344271) @@ -1,1774 +1,1772 @@ /*- * SPDX-License-Identifier: BSD-4-Clause * * Copyright (c) 1996 John S. Dyson * Copyright (c) 2012 Giovanni Trematerra * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice immediately at the beginning of the file, without modification, * this list of conditions, and the following disclaimer. * 2. 
Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * 3. Absolutely no warranty of function or purpose is made by the author * John S. Dyson. * 4. Modifications may be freely made to this file if the above conditions * are met. */ /* * This file contains a high-performance replacement for the socket-based * pipes scheme originally used in FreeBSD/4.4Lite. It does not support * all features of sockets, but does do everything that pipes normally * do. */ /* * This code has two modes of operation, a small write mode and a large * write mode. The small write mode acts like conventional pipes with * a kernel buffer. If the buffer is less than PIPE_MINDIRECT, then the * "normal" pipe buffering is done. If the buffer is between PIPE_MINDIRECT * and PIPE_SIZE in size, the sending process pins the underlying pages in * memory, and the receiving process copies directly from these pinned pages * in the sending process. * * If the sending process receives a signal, it is possible that it will * go away, and certainly its address space can change, because control * is returned back to the user-mode side. In that case, the pipe code * arranges to copy the buffer supplied by the user process, to a pageable * kernel buffer, and the receiving process will grab the data from the * pageable kernel buffer. Since signals don't happen all that often, * the copy operation is normally eliminated. * * The constant PIPE_MINDIRECT is chosen to make sure that buffering will * happen for small transfers so that the system will not spend all of * its time context switching. * * In order to limit the resource use of pipes, two sysctls exist: * * kern.ipc.maxpipekva - This is a hard limit on the amount of pageable * address space available to us in pipe_map. This value is normally * autotuned, but may also be loader tuned. * * kern.ipc.pipekva - This read-only sysctl tracks the current amount of * memory in use by pipes. * * Based on how large pipekva is relative to maxpipekva, the following * will happen: * * 0% - 50%: * New pipes are given 16K of memory backing, pipes may dynamically * grow to as large as 64K where needed. * 50% - 75%: * New pipes are given 4K (or PAGE_SIZE) of memory backing, * existing pipes may NOT grow. * 75% - 100%: * New pipes are given 4K (or PAGE_SIZE) of memory backing, * existing pipes will be shrunk down to 4K whenever possible. * * Resizing may be disabled by setting kern.ipc.piperesizeallowed=0. If * that is set, the only resize that will occur is the 0 -> SMALL_PIPE_SIZE * resize which MUST occur for reverse-direction pipes when they are * first used. * * Additional information about the current state of pipes may be obtained * from kern.ipc.pipes, kern.ipc.pipefragretry, kern.ipc.pipeallocfail, * and kern.ipc.piperesizefail. * * Locking rules: There are two locks present here: A mutex, used via * PIPE_LOCK, and a flag, used via pipelock(). All locking is done via * the flag, as mutexes can not persist over uiomove. The mutex * exists only to guard access to the flag, and is not in itself a * locking mechanism. Also note that there is only a single mutex for * both directions of a pipe. * * As pipelock() may have to sleep before it can acquire the flag, it * is important to reread all data after a call to pipelock(); everything * in the structure may have changed. 
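 * * That rule gives the canonical usage pattern (a sketch; pipe_read() * below is the full version): * * PIPE_LOCK(cpipe); * error = pipelock(cpipe, 1); * if (error == 0) { * reread pipe state here before acting on it, then * pipeunlock(cpipe); * } * PIPE_UNLOCK(cpipe);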
*/ #include __FBSDID("$FreeBSD$"); #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include /* * Use this define if you want to disable *fancy* VM things. Expect an * approx 30% decrease in transfer rate. This could be useful for * NetBSD or OpenBSD. */ /* #define PIPE_NODIRECT */ #define PIPE_PEER(pipe) \ (((pipe)->pipe_state & PIPE_NAMED) ? (pipe) : ((pipe)->pipe_peer)) /* * interfaces to the outside world */ static fo_rdwr_t pipe_read; static fo_rdwr_t pipe_write; static fo_truncate_t pipe_truncate; static fo_ioctl_t pipe_ioctl; static fo_poll_t pipe_poll; static fo_kqfilter_t pipe_kqfilter; static fo_stat_t pipe_stat; static fo_close_t pipe_close; static fo_chmod_t pipe_chmod; static fo_chown_t pipe_chown; static fo_fill_kinfo_t pipe_fill_kinfo; struct fileops pipeops = { .fo_read = pipe_read, .fo_write = pipe_write, .fo_truncate = pipe_truncate, .fo_ioctl = pipe_ioctl, .fo_poll = pipe_poll, .fo_kqfilter = pipe_kqfilter, .fo_stat = pipe_stat, .fo_close = pipe_close, .fo_chmod = pipe_chmod, .fo_chown = pipe_chown, .fo_sendfile = invfo_sendfile, .fo_fill_kinfo = pipe_fill_kinfo, .fo_flags = DFLAG_PASSABLE }; static void filt_pipedetach(struct knote *kn); static void filt_pipedetach_notsup(struct knote *kn); static int filt_pipenotsup(struct knote *kn, long hint); static int filt_piperead(struct knote *kn, long hint); static int filt_pipewrite(struct knote *kn, long hint); static struct filterops pipe_nfiltops = { .f_isfd = 1, .f_detach = filt_pipedetach_notsup, .f_event = filt_pipenotsup }; static struct filterops pipe_rfiltops = { .f_isfd = 1, .f_detach = filt_pipedetach, .f_event = filt_piperead }; static struct filterops pipe_wfiltops = { .f_isfd = 1, .f_detach = filt_pipedetach, .f_event = filt_pipewrite }; /* * Default pipe buffer size(s), this can be kind-of large now because pipe * space is pageable. The pipe code will try to maintain locality of * reference for performance reasons, so small amounts of outstanding I/O * will not wipe the cache. 
*/ #define MINPIPESIZE (PIPE_SIZE/3) #define MAXPIPESIZE (2*PIPE_SIZE/3) static long amountpipekva; static int pipefragretry; static int pipeallocfail; static int piperesizefail; static int piperesizeallowed = 1; SYSCTL_LONG(_kern_ipc, OID_AUTO, maxpipekva, CTLFLAG_RDTUN | CTLFLAG_NOFETCH, &maxpipekva, 0, "Pipe KVA limit"); SYSCTL_LONG(_kern_ipc, OID_AUTO, pipekva, CTLFLAG_RD, &amountpipekva, 0, "Pipe KVA usage"); SYSCTL_INT(_kern_ipc, OID_AUTO, pipefragretry, CTLFLAG_RD, &pipefragretry, 0, "Pipe allocation retries due to fragmentation"); SYSCTL_INT(_kern_ipc, OID_AUTO, pipeallocfail, CTLFLAG_RD, &pipeallocfail, 0, "Pipe allocation failures"); SYSCTL_INT(_kern_ipc, OID_AUTO, piperesizefail, CTLFLAG_RD, &piperesizefail, 0, "Pipe resize failures"); SYSCTL_INT(_kern_ipc, OID_AUTO, piperesizeallowed, CTLFLAG_RW, &piperesizeallowed, 0, "Pipe resizing allowed"); static void pipeinit(void *dummy __unused); static void pipeclose(struct pipe *cpipe); static void pipe_free_kmem(struct pipe *cpipe); static void pipe_create(struct pipe *pipe, int backing); static void pipe_paircreate(struct thread *td, struct pipepair **p_pp); static __inline int pipelock(struct pipe *cpipe, int catch); static __inline void pipeunlock(struct pipe *cpipe); #ifndef PIPE_NODIRECT static int pipe_build_write_buffer(struct pipe *wpipe, struct uio *uio); static void pipe_destroy_write_buffer(struct pipe *wpipe); static int pipe_direct_write(struct pipe *wpipe, struct uio *uio); static void pipe_clone_write_buffer(struct pipe *wpipe); #endif static int pipespace(struct pipe *cpipe, int size); static int pipespace_new(struct pipe *cpipe, int size); static int pipe_zone_ctor(void *mem, int size, void *arg, int flags); static int pipe_zone_init(void *mem, int size, int flags); static void pipe_zone_fini(void *mem, int size); static uma_zone_t pipe_zone; static struct unrhdr64 pipeino_unr; static dev_t pipedev_ino; SYSINIT(vfs, SI_SUB_VFS, SI_ORDER_ANY, pipeinit, NULL); static void pipeinit(void *dummy __unused) { pipe_zone = uma_zcreate("pipe", sizeof(struct pipepair), pipe_zone_ctor, NULL, pipe_zone_init, pipe_zone_fini, UMA_ALIGN_PTR, 0); KASSERT(pipe_zone != NULL, ("pipe_zone not initialized")); new_unrhdr64(&pipeino_unr, 1); pipedev_ino = devfs_alloc_cdp_inode(); KASSERT(pipedev_ino > 0, ("pipe dev inode not initialized")); } static int pipe_zone_ctor(void *mem, int size, void *arg, int flags) { struct pipepair *pp; struct pipe *rpipe, *wpipe; KASSERT(size == sizeof(*pp), ("pipe_zone_ctor: wrong size")); pp = (struct pipepair *)mem; /* * We zero both pipe endpoints to make sure all the kmem pointers * are NULL, flag fields are zero'd, etc. We timestamp both * endpoints with the same time. */ rpipe = &pp->pp_rpipe; bzero(rpipe, sizeof(*rpipe)); vfs_timestamp(&rpipe->pipe_ctime); rpipe->pipe_atime = rpipe->pipe_mtime = rpipe->pipe_ctime; wpipe = &pp->pp_wpipe; bzero(wpipe, sizeof(*wpipe)); wpipe->pipe_ctime = rpipe->pipe_ctime; wpipe->pipe_atime = wpipe->pipe_mtime = rpipe->pipe_ctime; rpipe->pipe_peer = wpipe; rpipe->pipe_pair = pp; wpipe->pipe_peer = rpipe; wpipe->pipe_pair = pp; /* * Mark both endpoints as present; they will later get free'd * one at a time. When both are free'd, then the whole pair * is released. */ rpipe->pipe_present = PIPE_ACTIVE; wpipe->pipe_present = PIPE_ACTIVE; /* * Eventually, the MAC Framework may initialize the label * in ctor or init, but for now we do it elsewhere to avoid * blocking in ctor or init.
*/ pp->pp_label = NULL; return (0); } static int pipe_zone_init(void *mem, int size, int flags) { struct pipepair *pp; KASSERT(size == sizeof(*pp), ("pipe_zone_init: wrong size")); pp = (struct pipepair *)mem; mtx_init(&pp->pp_mtx, "pipe mutex", NULL, MTX_DEF | MTX_NEW); return (0); } static void pipe_zone_fini(void *mem, int size) { struct pipepair *pp; KASSERT(size == sizeof(*pp), ("pipe_zone_fini: wrong size")); pp = (struct pipepair *)mem; mtx_destroy(&pp->pp_mtx); } static void pipe_paircreate(struct thread *td, struct pipepair **p_pp) { struct pipepair *pp; struct pipe *rpipe, *wpipe; *p_pp = pp = uma_zalloc(pipe_zone, M_WAITOK); #ifdef MAC /* * The MAC label is shared between the connected endpoints. As a * result mac_pipe_init() and mac_pipe_create() are called once * for the pair, and not on the endpoints. */ mac_pipe_init(pp); mac_pipe_create(td->td_ucred, pp); #endif rpipe = &pp->pp_rpipe; wpipe = &pp->pp_wpipe; knlist_init_mtx(&rpipe->pipe_sel.si_note, PIPE_MTX(rpipe)); knlist_init_mtx(&wpipe->pipe_sel.si_note, PIPE_MTX(wpipe)); /* Only the forward direction pipe is backed by default */ pipe_create(rpipe, 1); pipe_create(wpipe, 0); rpipe->pipe_state |= PIPE_DIRECTOK; wpipe->pipe_state |= PIPE_DIRECTOK; } void pipe_named_ctor(struct pipe **ppipe, struct thread *td) { struct pipepair *pp; pipe_paircreate(td, &pp); pp->pp_rpipe.pipe_state |= PIPE_NAMED; *ppipe = &pp->pp_rpipe; } void pipe_dtor(struct pipe *dpipe) { struct pipe *peer; - ino_t ino; - ino = dpipe->pipe_ino; peer = (dpipe->pipe_state & PIPE_NAMED) != 0 ? dpipe->pipe_peer : NULL; funsetown(&dpipe->pipe_sigio); pipeclose(dpipe); if (peer != NULL) { funsetown(&peer->pipe_sigio); pipeclose(peer); } } /* * The pipe system call for the DTYPE_PIPE type of pipes. If we fail, let * the zone pick up the pieces via pipeclose(). */ int kern_pipe(struct thread *td, int fildes[2], int flags, struct filecaps *fcaps1, struct filecaps *fcaps2) { struct file *rf, *wf; struct pipe *rpipe, *wpipe; struct pipepair *pp; int fd, fflags, error; pipe_paircreate(td, &pp); rpipe = &pp->pp_rpipe; wpipe = &pp->pp_wpipe; error = falloc_caps(td, &rf, &fd, flags, fcaps1); if (error) { pipeclose(rpipe); pipeclose(wpipe); return (error); } /* An extra reference on `rf' has been held for us by falloc_caps(). */ fildes[0] = fd; fflags = FREAD | FWRITE; if ((flags & O_NONBLOCK) != 0) fflags |= FNONBLOCK; /* * Warning: once we've gotten past allocation of the fd for the * read-side, we can only drop the read side via fdrop() in order * to avoid races against processes which manage to dup() the read * side while we are blocked trying to allocate the write side. */ finit(rf, fflags, DTYPE_PIPE, rpipe, &pipeops); error = falloc_caps(td, &wf, &fd, flags, fcaps2); if (error) { fdclose(td, rf, fildes[0]); fdrop(rf, td); /* rpipe has been closed by fdrop(). */ pipeclose(wpipe); return (error); } /* An extra reference on `wf' has been held for us by falloc_caps(). 
*/ finit(wf, fflags, DTYPE_PIPE, wpipe, &pipeops); fdrop(wf, td); fildes[1] = fd; fdrop(rf, td); return (0); } #ifdef COMPAT_FREEBSD10 /* ARGSUSED */ int freebsd10_pipe(struct thread *td, struct freebsd10_pipe_args *uap __unused) { int error; int fildes[2]; error = kern_pipe(td, fildes, 0, NULL, NULL); if (error) return (error); td->td_retval[0] = fildes[0]; td->td_retval[1] = fildes[1]; return (0); } #endif int sys_pipe2(struct thread *td, struct pipe2_args *uap) { int error, fildes[2]; if (uap->flags & ~(O_CLOEXEC | O_NONBLOCK)) return (EINVAL); error = kern_pipe(td, fildes, uap->flags, NULL, NULL); if (error) return (error); error = copyout(fildes, uap->fildes, 2 * sizeof(int)); if (error) { (void)kern_close(td, fildes[0]); (void)kern_close(td, fildes[1]); } return (error); } /* * Allocate kva for pipe circular buffer, the space is pageable * This routine will 'realloc' the size of a pipe safely, if it fails * it will retain the old buffer. * If it fails it will return ENOMEM. */ static int pipespace_new(struct pipe *cpipe, int size) { caddr_t buffer; int error, cnt, firstseg; static int curfail = 0; static struct timeval lastfail; KASSERT(!mtx_owned(PIPE_MTX(cpipe)), ("pipespace: pipe mutex locked")); KASSERT(!(cpipe->pipe_state & PIPE_DIRECTW), ("pipespace: resize of direct writes not allowed")); retry: cnt = cpipe->pipe_buffer.cnt; if (cnt > size) size = cnt; size = round_page(size); buffer = (caddr_t) vm_map_min(pipe_map); error = vm_map_find(pipe_map, NULL, 0, (vm_offset_t *)&buffer, size, 0, VMFS_ANY_SPACE, VM_PROT_RW, VM_PROT_RW, 0); if (error != KERN_SUCCESS) { if ((cpipe->pipe_buffer.buffer == NULL) && (size > SMALL_PIPE_SIZE)) { size = SMALL_PIPE_SIZE; pipefragretry++; goto retry; } if (cpipe->pipe_buffer.buffer == NULL) { pipeallocfail++; if (ppsratecheck(&lastfail, &curfail, 1)) printf("kern.ipc.maxpipekva exceeded; see tuning(7)\n"); } else { piperesizefail++; } return (ENOMEM); } /* copy data, then free old resources if we're resizing */ if (cnt > 0) { if (cpipe->pipe_buffer.in <= cpipe->pipe_buffer.out) { firstseg = cpipe->pipe_buffer.size - cpipe->pipe_buffer.out; bcopy(&cpipe->pipe_buffer.buffer[cpipe->pipe_buffer.out], buffer, firstseg); if ((cnt - firstseg) > 0) bcopy(cpipe->pipe_buffer.buffer, &buffer[firstseg], cpipe->pipe_buffer.in); } else { bcopy(&cpipe->pipe_buffer.buffer[cpipe->pipe_buffer.out], buffer, cnt); } } pipe_free_kmem(cpipe); cpipe->pipe_buffer.buffer = buffer; cpipe->pipe_buffer.size = size; cpipe->pipe_buffer.in = cnt; cpipe->pipe_buffer.out = 0; cpipe->pipe_buffer.cnt = cnt; atomic_add_long(&amountpipekva, cpipe->pipe_buffer.size); return (0); } /* * Wrapper for pipespace_new() that performs locking assertions. */ static int pipespace(struct pipe *cpipe, int size) { KASSERT(cpipe->pipe_state & PIPE_LOCKFL, ("Unlocked pipe passed to pipespace")); return (pipespace_new(cpipe, size)); } /* * lock a pipe for I/O, blocking other access */ static __inline int pipelock(struct pipe *cpipe, int catch) { int error; PIPE_LOCK_ASSERT(cpipe, MA_OWNED); while (cpipe->pipe_state & PIPE_LOCKFL) { cpipe->pipe_state |= PIPE_LWANT; error = msleep(cpipe, PIPE_MTX(cpipe), catch ? 
(PRIBIO | PCATCH) : PRIBIO, "pipelk", 0); if (error != 0) return (error); } cpipe->pipe_state |= PIPE_LOCKFL; return (0); } /* * unlock a pipe I/O lock */ static __inline void pipeunlock(struct pipe *cpipe) { PIPE_LOCK_ASSERT(cpipe, MA_OWNED); KASSERT(cpipe->pipe_state & PIPE_LOCKFL, ("Unlocked pipe passed to pipeunlock")); cpipe->pipe_state &= ~PIPE_LOCKFL; if (cpipe->pipe_state & PIPE_LWANT) { cpipe->pipe_state &= ~PIPE_LWANT; wakeup(cpipe); } } void pipeselwakeup(struct pipe *cpipe) { PIPE_LOCK_ASSERT(cpipe, MA_OWNED); if (cpipe->pipe_state & PIPE_SEL) { selwakeuppri(&cpipe->pipe_sel, PSOCK); if (!SEL_WAITING(&cpipe->pipe_sel)) cpipe->pipe_state &= ~PIPE_SEL; } if ((cpipe->pipe_state & PIPE_ASYNC) && cpipe->pipe_sigio) pgsigio(&cpipe->pipe_sigio, SIGIO, 0); KNOTE_LOCKED(&cpipe->pipe_sel.si_note, 0); } /* * Initialize and allocate VM and memory for pipe. The structure * will start out zero'd from the ctor, so we just manage the kmem. */ static void pipe_create(struct pipe *pipe, int backing) { if (backing) { /* * Note that these functions can fail if pipe map is exhausted * (as a result of too many pipes created), but we ignore the * error as it is not fatal and could be provoked by * unprivileged users. The only consequence is worse performance * with given pipe. */ if (amountpipekva > maxpipekva / 2) (void)pipespace_new(pipe, SMALL_PIPE_SIZE); else (void)pipespace_new(pipe, PIPE_SIZE); } pipe->pipe_ino = alloc_unr64(&pipeino_unr); } /* ARGSUSED */ static int pipe_read(struct file *fp, struct uio *uio, struct ucred *active_cred, int flags, struct thread *td) { struct pipe *rpipe; int error; int nread = 0; int size; rpipe = fp->f_data; PIPE_LOCK(rpipe); ++rpipe->pipe_busy; error = pipelock(rpipe, 1); if (error) goto unlocked_error; #ifdef MAC error = mac_pipe_check_read(active_cred, rpipe->pipe_pair); if (error) goto locked_error; #endif if (amountpipekva > (3 * maxpipekva) / 4) { if (!(rpipe->pipe_state & PIPE_DIRECTW) && (rpipe->pipe_buffer.size > SMALL_PIPE_SIZE) && (rpipe->pipe_buffer.cnt <= SMALL_PIPE_SIZE) && (piperesizeallowed == 1)) { PIPE_UNLOCK(rpipe); pipespace(rpipe, SMALL_PIPE_SIZE); PIPE_LOCK(rpipe); } } while (uio->uio_resid) { /* * normal pipe buffer receive */ if (rpipe->pipe_buffer.cnt > 0) { size = rpipe->pipe_buffer.size - rpipe->pipe_buffer.out; if (size > rpipe->pipe_buffer.cnt) size = rpipe->pipe_buffer.cnt; if (size > uio->uio_resid) size = uio->uio_resid; PIPE_UNLOCK(rpipe); error = uiomove( &rpipe->pipe_buffer.buffer[rpipe->pipe_buffer.out], size, uio); PIPE_LOCK(rpipe); if (error) break; rpipe->pipe_buffer.out += size; if (rpipe->pipe_buffer.out >= rpipe->pipe_buffer.size) rpipe->pipe_buffer.out = 0; rpipe->pipe_buffer.cnt -= size; /* * If there is no more to read in the pipe, reset * its pointers to the beginning. This improves * cache hit stats. */ if (rpipe->pipe_buffer.cnt == 0) { rpipe->pipe_buffer.in = 0; rpipe->pipe_buffer.out = 0; } nread += size; #ifndef PIPE_NODIRECT /* * Direct copy, bypassing a kernel buffer. 
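 * (The writer wired these pages in pipe_build_write_buffer(), so * uiomove_fromphys() can copy straight from the sender's pages with * no intermediate kernel buffer.)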
*/ } else if ((size = rpipe->pipe_map.cnt) && (rpipe->pipe_state & PIPE_DIRECTW)) { if (size > uio->uio_resid) size = (u_int) uio->uio_resid; PIPE_UNLOCK(rpipe); error = uiomove_fromphys(rpipe->pipe_map.ms, rpipe->pipe_map.pos, size, uio); PIPE_LOCK(rpipe); if (error) break; nread += size; rpipe->pipe_map.pos += size; rpipe->pipe_map.cnt -= size; if (rpipe->pipe_map.cnt == 0) { rpipe->pipe_state &= ~(PIPE_DIRECTW|PIPE_WANTW); wakeup(rpipe); } #endif } else { /* * detect EOF condition * read returns 0 on EOF, no need to set error */ if (rpipe->pipe_state & PIPE_EOF) break; /* * If the "write-side" has been blocked, wake it up now. */ if (rpipe->pipe_state & PIPE_WANTW) { rpipe->pipe_state &= ~PIPE_WANTW; wakeup(rpipe); } /* * Break if some data was read. */ if (nread > 0) break; /* * Unlock the pipe buffer for our remaining processing. * We will either break out with an error or we will * sleep and relock to loop. */ pipeunlock(rpipe); /* * Handle non-blocking mode operation or * wait for more data. */ if (fp->f_flag & FNONBLOCK) { error = EAGAIN; } else { rpipe->pipe_state |= PIPE_WANTR; if ((error = msleep(rpipe, PIPE_MTX(rpipe), PRIBIO | PCATCH, "piperd", 0)) == 0) error = pipelock(rpipe, 1); } if (error) goto unlocked_error; } } #ifdef MAC locked_error: #endif pipeunlock(rpipe); /* XXX: should probably do this before getting any locks. */ if (error == 0) vfs_timestamp(&rpipe->pipe_atime); unlocked_error: --rpipe->pipe_busy; /* * PIPE_WANT processing only makes sense if pipe_busy is 0. */ if ((rpipe->pipe_busy == 0) && (rpipe->pipe_state & PIPE_WANT)) { rpipe->pipe_state &= ~(PIPE_WANT|PIPE_WANTW); wakeup(rpipe); } else if (rpipe->pipe_buffer.cnt < MINPIPESIZE) { /* * Handle write blocking hysteresis. */ if (rpipe->pipe_state & PIPE_WANTW) { rpipe->pipe_state &= ~PIPE_WANTW; wakeup(rpipe); } } if ((rpipe->pipe_buffer.size - rpipe->pipe_buffer.cnt) >= PIPE_BUF) pipeselwakeup(rpipe); PIPE_UNLOCK(rpipe); return (error); } #ifndef PIPE_NODIRECT /* * Map the sending processes' buffer into kernel space and wire it. * This is similar to a physical write operation. */ static int pipe_build_write_buffer(struct pipe *wpipe, struct uio *uio) { u_int size; int i; PIPE_LOCK_ASSERT(wpipe, MA_NOTOWNED); KASSERT(wpipe->pipe_state & PIPE_DIRECTW, ("Clone attempt on non-direct write pipe!")); if (uio->uio_iov->iov_len > wpipe->pipe_buffer.size) size = wpipe->pipe_buffer.size; else size = uio->uio_iov->iov_len; if ((i = vm_fault_quick_hold_pages(&curproc->p_vmspace->vm_map, (vm_offset_t)uio->uio_iov->iov_base, size, VM_PROT_READ, wpipe->pipe_map.ms, PIPENPAGES)) < 0) return (EFAULT); /* * set up the control block */ wpipe->pipe_map.npages = i; wpipe->pipe_map.pos = ((vm_offset_t) uio->uio_iov->iov_base) & PAGE_MASK; wpipe->pipe_map.cnt = size; /* * and update the uio data */ uio->uio_iov->iov_len -= size; uio->uio_iov->iov_base = (char *)uio->uio_iov->iov_base + size; if (uio->uio_iov->iov_len == 0) uio->uio_iov++; uio->uio_resid -= size; uio->uio_offset += size; return (0); } /* * unmap and unwire the process buffer */ static void pipe_destroy_write_buffer(struct pipe *wpipe) { PIPE_LOCK_ASSERT(wpipe, MA_OWNED); vm_page_unhold_pages(wpipe->pipe_map.ms, wpipe->pipe_map.npages); wpipe->pipe_map.npages = 0; } /* * In the case of a signal, the writing process might go away. This * code copies the data into the circular buffer so that the source * pages can be freed without loss of data. 
*/ static void pipe_clone_write_buffer(struct pipe *wpipe) { struct uio uio; struct iovec iov; int size; int pos; PIPE_LOCK_ASSERT(wpipe, MA_OWNED); size = wpipe->pipe_map.cnt; pos = wpipe->pipe_map.pos; wpipe->pipe_buffer.in = size; wpipe->pipe_buffer.out = 0; wpipe->pipe_buffer.cnt = size; wpipe->pipe_state &= ~PIPE_DIRECTW; PIPE_UNLOCK(wpipe); iov.iov_base = wpipe->pipe_buffer.buffer; iov.iov_len = size; uio.uio_iov = &iov; uio.uio_iovcnt = 1; uio.uio_offset = 0; uio.uio_resid = size; uio.uio_segflg = UIO_SYSSPACE; uio.uio_rw = UIO_READ; uio.uio_td = curthread; uiomove_fromphys(wpipe->pipe_map.ms, pos, size, &uio); PIPE_LOCK(wpipe); pipe_destroy_write_buffer(wpipe); } /* * This implements the pipe buffer write mechanism. Note that only * a direct write OR a normal pipe write can be pending at any given time. * If there are any characters in the pipe buffer, the direct write will * be deferred until the receiving process grabs all of the bytes from * the pipe buffer. Then the direct mapping write is set-up. */ static int pipe_direct_write(struct pipe *wpipe, struct uio *uio) { int error; retry: PIPE_LOCK_ASSERT(wpipe, MA_OWNED); error = pipelock(wpipe, 1); if (error != 0) goto error1; if ((wpipe->pipe_state & PIPE_EOF) != 0) { error = EPIPE; pipeunlock(wpipe); goto error1; } while (wpipe->pipe_state & PIPE_DIRECTW) { if (wpipe->pipe_state & PIPE_WANTR) { wpipe->pipe_state &= ~PIPE_WANTR; wakeup(wpipe); } pipeselwakeup(wpipe); wpipe->pipe_state |= PIPE_WANTW; pipeunlock(wpipe); error = msleep(wpipe, PIPE_MTX(wpipe), PRIBIO | PCATCH, "pipdww", 0); if (error) goto error1; else goto retry; } wpipe->pipe_map.cnt = 0; /* transfer not ready yet */ if (wpipe->pipe_buffer.cnt > 0) { if (wpipe->pipe_state & PIPE_WANTR) { wpipe->pipe_state &= ~PIPE_WANTR; wakeup(wpipe); } pipeselwakeup(wpipe); wpipe->pipe_state |= PIPE_WANTW; pipeunlock(wpipe); error = msleep(wpipe, PIPE_MTX(wpipe), PRIBIO | PCATCH, "pipdwc", 0); if (error) goto error1; else goto retry; } wpipe->pipe_state |= PIPE_DIRECTW; PIPE_UNLOCK(wpipe); error = pipe_build_write_buffer(wpipe, uio); PIPE_LOCK(wpipe); if (error) { wpipe->pipe_state &= ~PIPE_DIRECTW; pipeunlock(wpipe); goto error1; } error = 0; while (!error && (wpipe->pipe_state & PIPE_DIRECTW)) { if (wpipe->pipe_state & PIPE_EOF) { pipe_destroy_write_buffer(wpipe); pipeselwakeup(wpipe); pipeunlock(wpipe); error = EPIPE; goto error1; } if (wpipe->pipe_state & PIPE_WANTR) { wpipe->pipe_state &= ~PIPE_WANTR; wakeup(wpipe); } pipeselwakeup(wpipe); wpipe->pipe_state |= PIPE_WANTW; pipeunlock(wpipe); error = msleep(wpipe, PIPE_MTX(wpipe), PRIBIO | PCATCH, "pipdwt", 0); pipelock(wpipe, 0); } if (wpipe->pipe_state & PIPE_EOF) error = EPIPE; if (wpipe->pipe_state & PIPE_DIRECTW) { /* * this bit of trickery substitutes a kernel buffer for * the process that might be going away. */ pipe_clone_write_buffer(wpipe); } else { pipe_destroy_write_buffer(wpipe); } pipeunlock(wpipe); return (error); error1: wakeup(wpipe); return (error); } #endif static int pipe_write(struct file *fp, struct uio *uio, struct ucred *active_cred, int flags, struct thread *td) { int error = 0; int desiredsize; ssize_t orig_resid; struct pipe *wpipe, *rpipe; rpipe = fp->f_data; wpipe = PIPE_PEER(rpipe); PIPE_LOCK(rpipe); error = pipelock(wpipe, 1); if (error) { PIPE_UNLOCK(rpipe); return (error); } /* * detect loss of pipe read side, issue SIGPIPE if lost. 
*/ if (wpipe->pipe_present != PIPE_ACTIVE || (wpipe->pipe_state & PIPE_EOF)) { pipeunlock(wpipe); PIPE_UNLOCK(rpipe); return (EPIPE); } #ifdef MAC error = mac_pipe_check_write(active_cred, wpipe->pipe_pair); if (error) { pipeunlock(wpipe); PIPE_UNLOCK(rpipe); return (error); } #endif ++wpipe->pipe_busy; /* Choose a larger size if it's advantageous */ desiredsize = max(SMALL_PIPE_SIZE, wpipe->pipe_buffer.size); while (desiredsize < wpipe->pipe_buffer.cnt + uio->uio_resid) { if (piperesizeallowed != 1) break; if (amountpipekva > maxpipekva / 2) break; if (desiredsize == BIG_PIPE_SIZE) break; desiredsize = desiredsize * 2; } /* Choose a smaller size if we're in a OOM situation */ if ((amountpipekva > (3 * maxpipekva) / 4) && (wpipe->pipe_buffer.size > SMALL_PIPE_SIZE) && (wpipe->pipe_buffer.cnt <= SMALL_PIPE_SIZE) && (piperesizeallowed == 1)) desiredsize = SMALL_PIPE_SIZE; /* Resize if the above determined that a new size was necessary */ if ((desiredsize != wpipe->pipe_buffer.size) && ((wpipe->pipe_state & PIPE_DIRECTW) == 0)) { PIPE_UNLOCK(wpipe); pipespace(wpipe, desiredsize); PIPE_LOCK(wpipe); } if (wpipe->pipe_buffer.size == 0) { /* * This can only happen for reverse direction use of pipes * in a complete OOM situation. */ error = ENOMEM; --wpipe->pipe_busy; pipeunlock(wpipe); PIPE_UNLOCK(wpipe); return (error); } pipeunlock(wpipe); orig_resid = uio->uio_resid; while (uio->uio_resid) { int space; pipelock(wpipe, 0); if (wpipe->pipe_state & PIPE_EOF) { pipeunlock(wpipe); error = EPIPE; break; } #ifndef PIPE_NODIRECT /* * If the transfer is large, we can gain performance if * we do process-to-process copies directly. * If the write is non-blocking, we don't use the * direct write mechanism. * * The direct write mechanism will detect the reader going * away on us. */ if (uio->uio_segflg == UIO_USERSPACE && uio->uio_iov->iov_len >= PIPE_MINDIRECT && wpipe->pipe_buffer.size >= PIPE_MINDIRECT && (fp->f_flag & FNONBLOCK) == 0) { pipeunlock(wpipe); error = pipe_direct_write(wpipe, uio); if (error) break; continue; } #endif /* * Pipe buffered writes cannot be coincidental with * direct writes. We wait until the currently executing * direct write is completed before we start filling the * pipe buffer. We break out if a signal occurs or the * reader goes away. */ if (wpipe->pipe_state & PIPE_DIRECTW) { if (wpipe->pipe_state & PIPE_WANTR) { wpipe->pipe_state &= ~PIPE_WANTR; wakeup(wpipe); } pipeselwakeup(wpipe); wpipe->pipe_state |= PIPE_WANTW; pipeunlock(wpipe); error = msleep(wpipe, PIPE_MTX(rpipe), PRIBIO | PCATCH, "pipbww", 0); if (error) break; else continue; } space = wpipe->pipe_buffer.size - wpipe->pipe_buffer.cnt; /* Writes of size <= PIPE_BUF must be atomic. */ if ((space < uio->uio_resid) && (orig_resid <= PIPE_BUF)) space = 0; if (space > 0) { int size; /* Transfer size */ int segsize; /* first segment to transfer */ /* * Transfer size is minimum of uio transfer * and free space in pipe buffer. */ if (space > uio->uio_resid) size = uio->uio_resid; else size = space; /* * First segment to transfer is minimum of * transfer size and contiguous space in * pipe buffer. If first segment to transfer * is less than the transfer size, we've got * a wraparound in the buffer. 
*/ segsize = wpipe->pipe_buffer.size - wpipe->pipe_buffer.in; if (segsize > size) segsize = size; /* Transfer first segment */ PIPE_UNLOCK(rpipe); error = uiomove(&wpipe->pipe_buffer.buffer[wpipe->pipe_buffer.in], segsize, uio); PIPE_LOCK(rpipe); if (error == 0 && segsize < size) { KASSERT(wpipe->pipe_buffer.in + segsize == wpipe->pipe_buffer.size, ("Pipe buffer wraparound disappeared")); /* * Transfer remaining part now, to * support atomic writes. Wraparound * happened. */ PIPE_UNLOCK(rpipe); error = uiomove( &wpipe->pipe_buffer.buffer[0], size - segsize, uio); PIPE_LOCK(rpipe); } if (error == 0) { wpipe->pipe_buffer.in += size; if (wpipe->pipe_buffer.in >= wpipe->pipe_buffer.size) { KASSERT(wpipe->pipe_buffer.in == size - segsize + wpipe->pipe_buffer.size, ("Expected wraparound bad")); wpipe->pipe_buffer.in = size - segsize; } wpipe->pipe_buffer.cnt += size; KASSERT(wpipe->pipe_buffer.cnt <= wpipe->pipe_buffer.size, ("Pipe buffer overflow")); } pipeunlock(wpipe); if (error != 0) break; } else { /* * If the "read-side" has been blocked, wake it up now. */ if (wpipe->pipe_state & PIPE_WANTR) { wpipe->pipe_state &= ~PIPE_WANTR; wakeup(wpipe); } /* * don't block on non-blocking I/O */ if (fp->f_flag & FNONBLOCK) { error = EAGAIN; pipeunlock(wpipe); break; } /* * We have no more space and have something to offer, * wake up select/poll. */ pipeselwakeup(wpipe); wpipe->pipe_state |= PIPE_WANTW; pipeunlock(wpipe); error = msleep(wpipe, PIPE_MTX(rpipe), PRIBIO | PCATCH, "pipewr", 0); if (error != 0) break; } } pipelock(wpipe, 0); --wpipe->pipe_busy; if ((wpipe->pipe_busy == 0) && (wpipe->pipe_state & PIPE_WANT)) { wpipe->pipe_state &= ~(PIPE_WANT | PIPE_WANTR); wakeup(wpipe); } else if (wpipe->pipe_buffer.cnt > 0) { /* * If we have put any characters in the buffer, we wake up * the reader. */ if (wpipe->pipe_state & PIPE_WANTR) { wpipe->pipe_state &= ~PIPE_WANTR; wakeup(wpipe); } } /* * Don't return EPIPE if any byte was written. * EINTR and other interrupts are handled by generic I/O layer. * Do not pretend that I/O succeeded for obvious user error * like EFAULT. */ if (uio->uio_resid != orig_resid && error == EPIPE) error = 0; if (error == 0) vfs_timestamp(&wpipe->pipe_mtime); /* * We have something to offer, * wake up select/poll. */ if (wpipe->pipe_buffer.cnt) pipeselwakeup(wpipe); pipeunlock(wpipe); PIPE_UNLOCK(rpipe); return (error); } /* ARGSUSED */ static int pipe_truncate(struct file *fp, off_t length, struct ucred *active_cred, struct thread *td) { struct pipe *cpipe; int error; cpipe = fp->f_data; if (cpipe->pipe_state & PIPE_NAMED) error = vnops.fo_truncate(fp, length, active_cred, td); else error = invfo_truncate(fp, length, active_cred, td); return (error); } /* * we implement a very minimal set of ioctls for compatibility with sockets. 
*/ static int pipe_ioctl(struct file *fp, u_long cmd, void *data, struct ucred *active_cred, struct thread *td) { struct pipe *mpipe = fp->f_data; int error; PIPE_LOCK(mpipe); #ifdef MAC error = mac_pipe_check_ioctl(active_cred, mpipe->pipe_pair, cmd, data); if (error) { PIPE_UNLOCK(mpipe); return (error); } #endif error = 0; switch (cmd) { case FIONBIO: break; case FIOASYNC: if (*(int *)data) { mpipe->pipe_state |= PIPE_ASYNC; } else { mpipe->pipe_state &= ~PIPE_ASYNC; } break; case FIONREAD: if (!(fp->f_flag & FREAD)) { *(int *)data = 0; PIPE_UNLOCK(mpipe); return (0); } if (mpipe->pipe_state & PIPE_DIRECTW) *(int *)data = mpipe->pipe_map.cnt; else *(int *)data = mpipe->pipe_buffer.cnt; break; case FIOSETOWN: PIPE_UNLOCK(mpipe); error = fsetown(*(int *)data, &mpipe->pipe_sigio); goto out_unlocked; case FIOGETOWN: *(int *)data = fgetown(&mpipe->pipe_sigio); break; /* This is deprecated, FIOSETOWN should be used instead. */ case TIOCSPGRP: PIPE_UNLOCK(mpipe); error = fsetown(-(*(int *)data), &mpipe->pipe_sigio); goto out_unlocked; /* This is deprecated, FIOGETOWN should be used instead. */ case TIOCGPGRP: *(int *)data = -fgetown(&mpipe->pipe_sigio); break; default: error = ENOTTY; break; } PIPE_UNLOCK(mpipe); out_unlocked: return (error); } static int pipe_poll(struct file *fp, int events, struct ucred *active_cred, struct thread *td) { struct pipe *rpipe; struct pipe *wpipe; int levents, revents; #ifdef MAC int error; #endif revents = 0; rpipe = fp->f_data; wpipe = PIPE_PEER(rpipe); PIPE_LOCK(rpipe); #ifdef MAC error = mac_pipe_check_poll(active_cred, rpipe->pipe_pair); if (error) goto locked_error; #endif if (fp->f_flag & FREAD && events & (POLLIN | POLLRDNORM)) if ((rpipe->pipe_state & PIPE_DIRECTW) || (rpipe->pipe_buffer.cnt > 0)) revents |= events & (POLLIN | POLLRDNORM); if (fp->f_flag & FWRITE && events & (POLLOUT | POLLWRNORM)) if (wpipe->pipe_present != PIPE_ACTIVE || (wpipe->pipe_state & PIPE_EOF) || (((wpipe->pipe_state & PIPE_DIRECTW) == 0) && ((wpipe->pipe_buffer.size - wpipe->pipe_buffer.cnt) >= PIPE_BUF || wpipe->pipe_buffer.size == 0))) revents |= events & (POLLOUT | POLLWRNORM); levents = events & (POLLIN | POLLINIGNEOF | POLLPRI | POLLRDNORM | POLLRDBAND); if (rpipe->pipe_state & PIPE_NAMED && fp->f_flag & FREAD && levents && fp->f_seqcount == rpipe->pipe_wgen) events |= POLLINIGNEOF; if ((events & POLLINIGNEOF) == 0) { if (rpipe->pipe_state & PIPE_EOF) { revents |= (events & (POLLIN | POLLRDNORM)); if (wpipe->pipe_present != PIPE_ACTIVE || (wpipe->pipe_state & PIPE_EOF)) revents |= POLLHUP; } } if (revents == 0) { if (fp->f_flag & FREAD && events & (POLLIN | POLLRDNORM)) { selrecord(td, &rpipe->pipe_sel); if (SEL_WAITING(&rpipe->pipe_sel)) rpipe->pipe_state |= PIPE_SEL; } if (fp->f_flag & FWRITE && events & (POLLOUT | POLLWRNORM)) { selrecord(td, &wpipe->pipe_sel); if (SEL_WAITING(&wpipe->pipe_sel)) wpipe->pipe_state |= PIPE_SEL; } } #ifdef MAC locked_error: #endif PIPE_UNLOCK(rpipe); return (revents); } /* * We shouldn't need locks here as we're doing a read and this should * be a natural race. */ static int pipe_stat(struct file *fp, struct stat *ub, struct ucred *active_cred, struct thread *td) { struct pipe *pipe; #ifdef MAC int error; #endif pipe = fp->f_data; PIPE_LOCK(pipe); #ifdef MAC error = mac_pipe_check_stat(active_cred, pipe->pipe_pair); if (error) { PIPE_UNLOCK(pipe); return (error); } #endif /* For named pipes ask the underlying filesystem. 
*/ if (pipe->pipe_state & PIPE_NAMED) { PIPE_UNLOCK(pipe); return (vnops.fo_stat(fp, ub, active_cred, td)); } PIPE_UNLOCK(pipe); bzero(ub, sizeof(*ub)); ub->st_mode = S_IFIFO; ub->st_blksize = PAGE_SIZE; if (pipe->pipe_state & PIPE_DIRECTW) ub->st_size = pipe->pipe_map.cnt; else ub->st_size = pipe->pipe_buffer.cnt; ub->st_blocks = howmany(ub->st_size, ub->st_blksize); ub->st_atim = pipe->pipe_atime; ub->st_mtim = pipe->pipe_mtime; ub->st_ctim = pipe->pipe_ctime; ub->st_uid = fp->f_cred->cr_uid; ub->st_gid = fp->f_cred->cr_gid; ub->st_dev = pipedev_ino; ub->st_ino = pipe->pipe_ino; /* * Left as 0: st_nlink, st_rdev, st_flags, st_gen. */ return (0); } /* ARGSUSED */ static int pipe_close(struct file *fp, struct thread *td) { if (fp->f_vnode != NULL) return vnops.fo_close(fp, td); fp->f_ops = &badfileops; pipe_dtor(fp->f_data); fp->f_data = NULL; return (0); } static int pipe_chmod(struct file *fp, mode_t mode, struct ucred *active_cred, struct thread *td) { struct pipe *cpipe; int error; cpipe = fp->f_data; if (cpipe->pipe_state & PIPE_NAMED) error = vn_chmod(fp, mode, active_cred, td); else error = invfo_chmod(fp, mode, active_cred, td); return (error); } static int pipe_chown(struct file *fp, uid_t uid, gid_t gid, struct ucred *active_cred, struct thread *td) { struct pipe *cpipe; int error; cpipe = fp->f_data; if (cpipe->pipe_state & PIPE_NAMED) error = vn_chown(fp, uid, gid, active_cred, td); else error = invfo_chown(fp, uid, gid, active_cred, td); return (error); } static int pipe_fill_kinfo(struct file *fp, struct kinfo_file *kif, struct filedesc *fdp) { struct pipe *pi; if (fp->f_type == DTYPE_FIFO) return (vn_fill_kinfo(fp, kif, fdp)); kif->kf_type = KF_TYPE_PIPE; pi = fp->f_data; kif->kf_un.kf_pipe.kf_pipe_addr = (uintptr_t)pi; kif->kf_un.kf_pipe.kf_pipe_peer = (uintptr_t)pi->pipe_peer; kif->kf_un.kf_pipe.kf_pipe_buffer_cnt = pi->pipe_buffer.cnt; return (0); } static void pipe_free_kmem(struct pipe *cpipe) { KASSERT(!mtx_owned(PIPE_MTX(cpipe)), ("pipe_free_kmem: pipe mutex locked")); if (cpipe->pipe_buffer.buffer != NULL) { atomic_subtract_long(&amountpipekva, cpipe->pipe_buffer.size); vm_map_remove(pipe_map, (vm_offset_t)cpipe->pipe_buffer.buffer, (vm_offset_t)cpipe->pipe_buffer.buffer + cpipe->pipe_buffer.size); cpipe->pipe_buffer.buffer = NULL; } #ifndef PIPE_NODIRECT { cpipe->pipe_map.cnt = 0; cpipe->pipe_map.pos = 0; cpipe->pipe_map.npages = 0; } #endif } /* * shutdown the pipe */ static void pipeclose(struct pipe *cpipe) { struct pipepair *pp; struct pipe *ppipe; KASSERT(cpipe != NULL, ("pipeclose: cpipe == NULL")); PIPE_LOCK(cpipe); pipelock(cpipe, 0); pp = cpipe->pipe_pair; pipeselwakeup(cpipe); /* * If the other side is blocked, wake it up saying that * we want to close it down. */ cpipe->pipe_state |= PIPE_EOF; while (cpipe->pipe_busy) { wakeup(cpipe); cpipe->pipe_state |= PIPE_WANT; pipeunlock(cpipe); msleep(cpipe, PIPE_MTX(cpipe), PRIBIO, "pipecl", 0); pipelock(cpipe, 0); } /* * Disconnect from peer, if any. */ ppipe = cpipe->pipe_peer; if (ppipe->pipe_present == PIPE_ACTIVE) { pipeselwakeup(ppipe); ppipe->pipe_state |= PIPE_EOF; wakeup(ppipe); KNOTE_LOCKED(&ppipe->pipe_sel.si_note, 0); } /* * Mark this endpoint as free. Release kmem resources. We * don't mark this endpoint as unused until we've finished * doing that, or the pipe might disappear out from under * us. */ PIPE_UNLOCK(cpipe); pipe_free_kmem(cpipe); PIPE_LOCK(cpipe); cpipe->pipe_present = PIPE_CLOSING; pipeunlock(cpipe); /* * knlist_clear() may sleep dropping the PIPE_MTX. 
Set the * PIPE_FINALIZED, that allows other end to free the * pipe_pair, only after the knotes are completely dismantled. */ knlist_clear(&cpipe->pipe_sel.si_note, 1); cpipe->pipe_present = PIPE_FINALIZED; seldrain(&cpipe->pipe_sel); knlist_destroy(&cpipe->pipe_sel.si_note); /* * If both endpoints are now closed, release the memory for the * pipe pair. If not, unlock. */ if (ppipe->pipe_present == PIPE_FINALIZED) { PIPE_UNLOCK(cpipe); #ifdef MAC mac_pipe_destroy(pp); #endif uma_zfree(pipe_zone, cpipe->pipe_pair); } else PIPE_UNLOCK(cpipe); } /*ARGSUSED*/ static int pipe_kqfilter(struct file *fp, struct knote *kn) { struct pipe *cpipe; /* * If a filter is requested that is not supported by this file * descriptor, don't return an error, but also don't ever generate an * event. */ if ((kn->kn_filter == EVFILT_READ) && !(fp->f_flag & FREAD)) { kn->kn_fop = &pipe_nfiltops; return (0); } if ((kn->kn_filter == EVFILT_WRITE) && !(fp->f_flag & FWRITE)) { kn->kn_fop = &pipe_nfiltops; return (0); } cpipe = fp->f_data; PIPE_LOCK(cpipe); switch (kn->kn_filter) { case EVFILT_READ: kn->kn_fop = &pipe_rfiltops; break; case EVFILT_WRITE: kn->kn_fop = &pipe_wfiltops; if (cpipe->pipe_peer->pipe_present != PIPE_ACTIVE) { /* other end of pipe has been closed */ PIPE_UNLOCK(cpipe); return (EPIPE); } cpipe = PIPE_PEER(cpipe); break; default: PIPE_UNLOCK(cpipe); return (EINVAL); } kn->kn_hook = cpipe; knlist_add(&cpipe->pipe_sel.si_note, kn, 1); PIPE_UNLOCK(cpipe); return (0); } static void filt_pipedetach(struct knote *kn) { struct pipe *cpipe = kn->kn_hook; PIPE_LOCK(cpipe); knlist_remove(&cpipe->pipe_sel.si_note, kn, 1); PIPE_UNLOCK(cpipe); } /*ARGSUSED*/ static int filt_piperead(struct knote *kn, long hint) { struct pipe *rpipe = kn->kn_hook; struct pipe *wpipe = rpipe->pipe_peer; int ret; PIPE_LOCK_ASSERT(rpipe, MA_OWNED); kn->kn_data = rpipe->pipe_buffer.cnt; if ((kn->kn_data == 0) && (rpipe->pipe_state & PIPE_DIRECTW)) kn->kn_data = rpipe->pipe_map.cnt; if ((rpipe->pipe_state & PIPE_EOF) || wpipe->pipe_present != PIPE_ACTIVE || (wpipe->pipe_state & PIPE_EOF)) { kn->kn_flags |= EV_EOF; return (1); } ret = kn->kn_data > 0; return ret; } /*ARGSUSED*/ static int filt_pipewrite(struct knote *kn, long hint) { struct pipe *wpipe; wpipe = kn->kn_hook; PIPE_LOCK_ASSERT(wpipe, MA_OWNED); if (wpipe->pipe_present != PIPE_ACTIVE || (wpipe->pipe_state & PIPE_EOF)) { kn->kn_data = 0; kn->kn_flags |= EV_EOF; return (1); } kn->kn_data = (wpipe->pipe_buffer.size > 0) ? (wpipe->pipe_buffer.size - wpipe->pipe_buffer.cnt) : PIPE_BUF; if (wpipe->pipe_state & PIPE_DIRECTW) kn->kn_data = 0; return (kn->kn_data >= PIPE_BUF); } static void filt_pipedetach_notsup(struct knote *kn) { } static int filt_pipenotsup(struct knote *kn, long hint) { return (0); } Index: projects/import-googletest-1.8.1/sys/mips/mips/elf_machdep.c =================================================================== --- projects/import-googletest-1.8.1/sys/mips/mips/elf_machdep.c (revision 344270) +++ projects/import-googletest-1.8.1/sys/mips/mips/elf_machdep.c (revision 344271) @@ -1,552 +1,552 @@ /*- * SPDX-License-Identifier: BSD-2-Clause-FreeBSD * * Copyright 1996-1998 John D. Polstra. * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. 
Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. * IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. * * from: src/sys/i386/i386/elf_machdep.c,v 1.20 2004/08/11 02:35:05 marcel */ #include __FBSDID("$FreeBSD$"); #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #ifdef __mips_n64 struct sysentvec elf64_freebsd_sysvec = { .sv_size = SYS_MAXSYSCALL, .sv_table = sysent, .sv_errsize = 0, .sv_errtbl = NULL, .sv_transtrap = NULL, .sv_fixup = __elfN(freebsd_fixup), .sv_sendsig = sendsig, .sv_sigcode = sigcode, .sv_szsigcode = &szsigcode, .sv_name = "FreeBSD ELF64", .sv_coredump = __elfN(coredump), .sv_imgact_try = NULL, .sv_minsigstksz = MINSIGSTKSZ, .sv_pagesize = PAGE_SIZE, .sv_minuser = VM_MIN_ADDRESS, .sv_maxuser = VM_MAXUSER_ADDRESS, .sv_usrstack = USRSTACK, .sv_psstrings = PS_STRINGS, .sv_stackprot = VM_PROT_ALL, .sv_copyout_strings = exec_copyout_strings, .sv_setregs = exec_setregs, .sv_fixlimit = NULL, .sv_maxssiz = NULL, - .sv_flags = SV_ABI_FREEBSD | SV_LP64, + .sv_flags = SV_ABI_FREEBSD | SV_LP64 | SV_ASLR, .sv_set_syscall_retval = cpu_set_syscall_retval, .sv_fetch_syscall_args = cpu_fetch_syscall_args, .sv_syscallnames = syscallnames, .sv_schedtail = NULL, .sv_thread_detach = NULL, .sv_trap = NULL, }; static Elf64_Brandinfo freebsd_brand_info = { .brand = ELFOSABI_FREEBSD, .machine = EM_MIPS, .compat_3_brand = "FreeBSD", .emul_path = NULL, .interp_path = "/libexec/ld-elf.so.1", .sysvec = &elf64_freebsd_sysvec, .interp_newpath = NULL, .brand_note = &elf64_freebsd_brandnote, .flags = BI_CAN_EXEC_DYN | BI_BRAND_NOTE }; SYSINIT(elf64, SI_SUB_EXEC, SI_ORDER_ANY, (sysinit_cfunc_t) elf64_insert_brand_entry, &freebsd_brand_info); void elf64_dump_thread(struct thread *td __unused, void *dst __unused, size_t *off __unused) { } #else struct sysentvec elf32_freebsd_sysvec = { .sv_size = SYS_MAXSYSCALL, .sv_table = sysent, .sv_errsize = 0, .sv_errtbl = NULL, .sv_transtrap = NULL, .sv_fixup = __elfN(freebsd_fixup), .sv_sendsig = sendsig, .sv_sigcode = sigcode, .sv_szsigcode = &szsigcode, .sv_name = "FreeBSD ELF32", .sv_coredump = __elfN(coredump), .sv_imgact_try = NULL, .sv_minsigstksz = MINSIGSTKSZ, .sv_pagesize = PAGE_SIZE, .sv_minuser = VM_MIN_ADDRESS, .sv_maxuser = VM_MAXUSER_ADDRESS, .sv_usrstack = USRSTACK, .sv_psstrings = PS_STRINGS, .sv_stackprot = VM_PROT_ALL, .sv_copyout_strings = exec_copyout_strings, .sv_setregs = exec_setregs, .sv_fixlimit = NULL, .sv_maxssiz = NULL, - .sv_flags = SV_ABI_FREEBSD | SV_ILP32, + .sv_flags = SV_ABI_FREEBSD | SV_ILP32 | SV_ASLR, .sv_set_syscall_retval = cpu_set_syscall_retval, .sv_fetch_syscall_args = cpu_fetch_syscall_args, 
.sv_syscallnames = syscallnames, .sv_schedtail = NULL, .sv_thread_detach = NULL, .sv_trap = NULL, }; static Elf32_Brandinfo freebsd_brand_info = { .brand = ELFOSABI_FREEBSD, .machine = EM_MIPS, .compat_3_brand = "FreeBSD", .emul_path = NULL, .interp_path = "/libexec/ld-elf.so.1", .sysvec = &elf32_freebsd_sysvec, .interp_newpath = NULL, .brand_note = &elf32_freebsd_brandnote, .flags = BI_CAN_EXEC_DYN | BI_BRAND_NOTE }; SYSINIT(elf32, SI_SUB_EXEC, SI_ORDER_FIRST, (sysinit_cfunc_t) elf32_insert_brand_entry, &freebsd_brand_info); void elf32_dump_thread(struct thread *td __unused, void *dst __unused, size_t *off __unused) { } #endif /* * The following MIPS relocation code for tracking multiple * consecutive HI32/LO32 entries is because of the following: * * https://dmz-portal.mips.com/wiki/MIPS_relocation_types * * === * * + R_MIPS_HI16 * * An R_MIPS_HI16 must be followed eventually by an associated R_MIPS_LO16 * relocation record in the same SHT_REL section. The contents of the two * fields to be relocated are combined to form a full 32-bit addend AHL. * An R_MIPS_LO16 entry which does not immediately follow a R_MIPS_HI16 is * combined with the most recent one encountered, i.e. multiple R_MIPS_LO16 * entries may be associated with a single R_MIPS_HI16. Use of these * relocation types in a SHT_REL section is discouraged and may be * forbidden to avoid this complication. * * A GNU extension allows multiple R_MIPS_HI16 records to share the same * R_MIPS_LO16 relocation record(s). The association works like this within * a single relocation section: * * + From the beginning of the section moving to the end of the section, * until R_MIPS_LO16 is not found each found R_MIPS_HI16 relocation will * be associated with the first R_MIPS_LO16. * * + Until another R_MIPS_HI16 record is found all found R_MIPS_LO16 * relocations found are associated with the last R_MIPS_HI16. * * === * * This is so gcc can do dead code detection/removal without having to * generate HI/LO pairs even if one of them would be deleted. * * So, the summary is: * * + A HI16 entry must occur before any LO16 entries; * + Multiple consecutive HI16 RELA entries need to be buffered until the * first LO16 RELA entry occurs - and then all HI16 RELA relocations use * the offset in the LOW16 RELA for calculating their offsets; * + The last HI16 RELA entry before a LO16 RELA entry is used (the AHL) * for the first subsequent LO16 calculation; * + If multiple consecutive LO16 RELA entries occur, only the first * LO16 RELA entry triggers an update of buffered HI16 RELA entries; * any subsequent LO16 RELA entry before another HI16 RELA entry will * not cause any further updates to the HI16 RELA entries. * * Additionally, flush out any outstanding HI16 entries that don't have * a LO16 entry in case some garbage entries are left in the file. */ struct mips_tmp_reloc; struct mips_tmp_reloc { struct mips_tmp_reloc *next; Elf_Addr ahl; Elf32_Addr *where_hi16; }; static struct mips_tmp_reloc *ml = NULL; /* * Add a temporary relocation (ie, a HI16 reloc type.) */ static int mips_tmp_reloc_add(Elf_Addr ahl, Elf32_Addr *where_hi16) { struct mips_tmp_reloc *r; r = malloc(sizeof(struct mips_tmp_reloc), M_TEMP, M_NOWAIT); if (r == NULL) { printf("%s: failed to malloc\n", __func__); return (0); } r->ahl = ahl; r->where_hi16 = where_hi16; r->next = ml; ml = r; return (1); } /* * Flush the temporary relocation list. * * This should be done after a file is completely loaded * so no stale relocations exist to confuse the next * load. 
*/ static void mips_tmp_reloc_flush(void) { struct mips_tmp_reloc *r, *rn; r = ml; ml = NULL; while (r != NULL) { rn = r->next; free(r, M_TEMP); r = rn; } } /* * Get an entry from the reloc list; or NULL if we've run out. */ static struct mips_tmp_reloc * mips_tmp_reloc_get(void) { struct mips_tmp_reloc *r; r = ml; if (r == NULL) return (NULL); ml = ml->next; return (r); } /* * Free a relocation entry. */ static void mips_tmp_reloc_free(struct mips_tmp_reloc *r) { free(r, M_TEMP); } bool elf_is_ifunc_reloc(Elf_Size r_info __unused) { return (false); } /* Process one elf relocation with addend. */ static int elf_reloc_internal(linker_file_t lf, Elf_Addr relocbase, const void *data, int type, int local, elf_lookup_fn lookup) { Elf32_Addr *where = (Elf32_Addr *)NULL; Elf_Addr addr; Elf_Addr addend = (Elf_Addr)0; Elf_Word rtype = (Elf_Word)0, symidx; struct mips_tmp_reloc *r; const Elf_Rel *rel = NULL; const Elf_Rela *rela = NULL; int error; /* Store the last seen ahl from a HI16 for LO16 processing */ static Elf_Addr last_ahl; switch (type) { case ELF_RELOC_REL: rel = (const Elf_Rel *)data; where = (Elf32_Addr *) (relocbase + rel->r_offset); rtype = ELF_R_TYPE(rel->r_info); symidx = ELF_R_SYM(rel->r_info); switch (rtype) { case R_MIPS_64: addend = *(Elf64_Addr *)where; break; default: addend = *where; break; } break; case ELF_RELOC_RELA: rela = (const Elf_Rela *)data; where = (Elf32_Addr *) (relocbase + rela->r_offset); addend = rela->r_addend; rtype = ELF_R_TYPE(rela->r_info); symidx = ELF_R_SYM(rela->r_info); break; default: panic("unknown reloc type %d\n", type); } switch (rtype) { case R_MIPS_NONE: /* none */ break; case R_MIPS_32: /* S + A */ error = lookup(lf, symidx, 1, &addr); if (error != 0) return (-1); addr += addend; if (*where != addr) *where = (Elf32_Addr)addr; break; case R_MIPS_26: /* ((A << 2) | (P & 0xf0000000) + S) >> 2 */ error = lookup(lf, symidx, 1, &addr); if (error != 0) return (-1); addend &= 0x03ffffff; /* * Addendum for .rela R_MIPS_26 is not shifted right */ if (rela == NULL) addend <<= 2; addr += ((Elf_Addr)where & 0xf0000000) | addend; addr >>= 2; *where &= ~0x03ffffff; *where |= addr & 0x03ffffff; break; case R_MIPS_64: /* S + A */ error = lookup(lf, symidx, 1, &addr); if (error != 0) return (-1); addr += addend; if (*(Elf64_Addr*)where != addr) *(Elf64_Addr*)where = addr; break; /* * Handle the two GNU extension cases: * * + Multiple HI16s followed by a LO16, and * + A HI16 followed by multiple LO16s. * * The former is tricky - the HI16 relocations need * to be buffered until a LO16 occurs, at which point * each HI16 is replayed against the LO16 relocation entry * (with the relevant overflow information.) * * The latter should be easy to handle - when the * first LO16 is seen, write out and flush the * HI16 buffer. Any subsequent LO16 entries will * find a blank relocation buffer. * */ case R_MIPS_HI16: /* ((AHL + S) - ((short)(AHL + S)) >> 16 */ if (rela != NULL) { error = lookup(lf, symidx, 1, &addr); if (error != 0) return (-1); addr += addend; *where &= 0xffff0000; *where |= ((((long long) addr + 0x8000LL) >> 16) & 0xffff); } else { /* * Add a temporary relocation to the list; * will pop it off / free the list when * we've found a suitable HI16. */ if (mips_tmp_reloc_add(addend << 16, where) == 0) return (-1); /* * Track the last seen HI16 AHL for use by * the first LO16 AHL calculation. * * The assumption is any intermediary deleted * LO16's were optimised out, so the last * HI16 before the LO16 is the "true" relocation * entry to use for that LO16 write. 
*/ last_ahl = addend << 16; } break; case R_MIPS_LO16: /* AHL + S */ if (rela != NULL) { error = lookup(lf, symidx, 1, &addr); if (error != 0) return (-1); addr += addend; *where &= 0xffff0000; *where |= addr & 0xffff; } else { Elf_Addr tmp_ahl; Elf_Addr tmp_addend; tmp_ahl = last_ahl + (int16_t) addend; error = lookup(lf, symidx, 1, &addr); if (error != 0) return (-1); tmp_addend = addend & 0xffff0000; /* Use the last seen ahl for calculating the addend */ tmp_addend |= (uint16_t)(tmp_ahl + addr); *where = tmp_addend; /* * This logic implements the "we saw multiple HI16 * before a LO16" assignment /and/ "we saw multiple * LO16s". * * Multiple LO16s will be handled as a blank * relocation list. * * Multiple HI16s are iterated over here. */ while ((r = mips_tmp_reloc_get()) != NULL) { Elf_Addr rahl; /* * We have the ahl from the HI16 entry, so * offset it by the 16 bits of the low ahl. */ rahl = r->ahl; rahl += (int16_t) addend; tmp_addend = *(r->where_hi16); tmp_addend &= 0xffff0000; tmp_addend |= ((rahl + addr) - (int16_t)(rahl + addr)) >> 16; *(r->where_hi16) = tmp_addend; mips_tmp_reloc_free(r); } } break; case R_MIPS_HIGHER: /* %higher(A+S) */ error = lookup(lf, symidx, 1, &addr); if (error != 0) return (-1); addr += addend; *where &= 0xffff0000; *where |= (((long long)addr + 0x80008000LL) >> 32) & 0xffff; break; case R_MIPS_HIGHEST: /* %highest(A+S) */ error = lookup(lf, symidx, 1, &addr); if (error != 0) return (-1); addr += addend; *where &= 0xffff0000; *where |= (((long long)addr + 0x800080008000LL) >> 48) & 0xffff; break; default: printf("kldload: unexpected relocation type %d\n", rtype); return (-1); } return (0); } int elf_reloc(linker_file_t lf, Elf_Addr relocbase, const void *data, int type, elf_lookup_fn lookup) { return (elf_reloc_internal(lf, relocbase, data, type, 0, lookup)); } int elf_reloc_local(linker_file_t lf, Elf_Addr relocbase, const void *data, int type, elf_lookup_fn lookup) { return (elf_reloc_internal(lf, relocbase, data, type, 1, lookup)); } int elf_cpu_load_file(linker_file_t lf __unused) { /* * Sync the I and D caches to make sure our relocations are visible.
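* MIPS instruction fetch does not snoop stores made through the D-cache, so freshly relocated module text must be written back and the I-cache invalidated before it is executed.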
*/ mips_icache_sync_all(); /* Flush outstanding relocations */ mips_tmp_reloc_flush(); return (0); } int elf_cpu_unload_file(linker_file_t lf __unused) { return (0); } Index: projects/import-googletest-1.8.1/sys/modules/crypto/Makefile =================================================================== --- projects/import-googletest-1.8.1/sys/modules/crypto/Makefile (revision 344270) +++ projects/import-googletest-1.8.1/sys/modules/crypto/Makefile (revision 344271) @@ -1,72 +1,74 @@ # $FreeBSD$ LIBSODIUM=${SRCTOP}/sys/contrib/libsodium/src/libsodium .PATH: ${SRCTOP}/sys/opencrypto .PATH: ${SRCTOP}/sys/crypto .PATH: ${SRCTOP}/sys/crypto/blowfish .PATH: ${SRCTOP}/sys/crypto/camellia .PATH: ${SRCTOP}/sys/crypto/des .PATH: ${SRCTOP}/sys/crypto/rijndael .PATH: ${SRCTOP}/sys/crypto/sha2 .PATH: ${SRCTOP}/sys/crypto/siphash .PATH: ${SRCTOP}/sys/crypto/skein .PATH: ${SRCTOP}/sys/crypto/blake2 .PATH: ${SRCTOP}/sys/crypto/chacha20 .PATH: ${SRCTOP}/sys/contrib/libb2 .PATH: ${LIBSODIUM}/crypto_onetimeauth/poly1305 .PATH: ${LIBSODIUM}/crypto_onetimeauth/poly1305/donna .PATH: ${LIBSODIUM}/crypto_verify/sodium .PATH: ${SRCTOP}/sys/crypto/libsodium KMOD = crypto SRCS = crypto.c cryptodev_if.c SRCS += criov.c cryptosoft.c xform.c SRCS += cast.c cryptodeflate.c rmd160.c rijndael-alg-fst.c rijndael-api.c rijndael-api-fst.c SRCS += skipjack.c bf_enc.c bf_ecb.c bf_skey.c SRCS += camellia.c camellia-api.c SRCS += des_ecb.c des_enc.c des_setkey.c SRCS += sha1.c sha256c.c sha512c.c SRCS += skein.c skein_block.c # unroll the 256 and 512 loops, half unroll the 1024 CFLAGS+= -DSKEIN_LOOP=995 .if exists(${MACHINE_ARCH}/skein_block_asm.s) .PATH: ${SRCTOP}/sys/crypto/skein/${MACHINE_ARCH} SRCS += skein_block_asm.s CFLAGS += -DSKEIN_ASM -DSKEIN_USE_ASM=1792 # list of block functions to replace with assembly: 256+512+1024 = 1792 ACFLAGS += -DELF -Wa,--noexecstack # Fully unroll all loops in the assembly optimized version AFLAGS+= --defsym SKEIN_LOOP=0 .endif SRCS += siphash.c SRCS += gmac.c gfmult.c SRCS += blake2b-ref.c SRCS += blake2s-ref.c SRCS += blake2-sw.c CFLAGS.blake2b-ref.c += -I${SRCTOP}/sys/crypto/blake2 -DSUFFIX=_ref CFLAGS.blake2s-ref.c += -I${SRCTOP}/sys/crypto/blake2 -DSUFFIX=_ref CFLAGS.blake2-sw.c += -I${SRCTOP}/sys/crypto/blake2 CWARNFLAGS.blake2b-ref.c += -Wno-cast-qual -Wno-unused-function CWARNFLAGS.blake2s-ref.c += -Wno-cast-qual -Wno-unused-function SRCS += chacha.c SRCS += chacha-sw.c LIBSODIUM_INC=${LIBSODIUM}/include LIBSODIUM_COMPAT=${SRCTOP}/sys/crypto/libsodium SRCS += xform_poly1305.c CFLAGS.xform_poly1305.c += -I${LIBSODIUM_INC} -I${LIBSODIUM_COMPAT} SRCS += onetimeauth_poly1305.c CFLAGS.onetimeauth_poly1305.c += -I${LIBSODIUM_INC}/sodium -I${LIBSODIUM_COMPAT} SRCS += poly1305_donna.c CFLAGS.poly1305_donna.c += -I${LIBSODIUM_INC}/sodium -I${LIBSODIUM_COMPAT} SRCS += verify.c CFLAGS.verify.c += -I${LIBSODIUM_INC}/sodium -I${LIBSODIUM_COMPAT} SRCS += randombytes.c CFLAGS.randombytes.c += -I${LIBSODIUM_INC} -I${LIBSODIUM_COMPAT} SRCS += utils.c CFLAGS.utils.c += -I${LIBSODIUM_INC} -I${LIBSODIUM_COMPAT} SRCS += opt_param.h cryptodev_if.h bus_if.h device_if.h SRCS += opt_ddb.h +SRCS += cbc_mac.c +SRCS += xform_cbc_mac.c .include Index: projects/import-googletest-1.8.1/sys/net/if_lagg.c =================================================================== --- projects/import-googletest-1.8.1/sys/net/if_lagg.c (revision 344270) +++ projects/import-googletest-1.8.1/sys/net/if_lagg.c (revision 344271) @@ -1,2249 +1,2258 @@ /* $OpenBSD: if_trunk.c,v 1.30 2007/01/31 06:20:19 reyk Exp $ */ /* * Copyright 
(c) 2005, 2006 Reyk Floeter * Copyright (c) 2007 Andrew Thompson * Copyright (c) 2014, 2016 Marcelo Araujo * * Permission to use, copy, modify, and distribute this software for any * purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission notice appear in all copies. * * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. */ #include __FBSDID("$FreeBSD$"); #include "opt_inet.h" #include "opt_inet6.h" #include "opt_ratelimit.h" #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #if defined(INET) || defined(INET6) #include #include #endif #ifdef INET #include #include #endif #ifdef INET6 #include #include #include #endif #include #include #include #define LAGG_RLOCK() struct epoch_tracker lagg_et; epoch_enter_preempt(net_epoch_preempt, &lagg_et) #define LAGG_RUNLOCK() epoch_exit_preempt(net_epoch_preempt, &lagg_et) #define LAGG_RLOCK_ASSERT() MPASS(in_epoch(net_epoch_preempt)) #define LAGG_UNLOCK_ASSERT() MPASS(!in_epoch(net_epoch_preempt)) #define LAGG_SX_INIT(_sc) sx_init(&(_sc)->sc_sx, "if_lagg sx") #define LAGG_SX_DESTROY(_sc) sx_destroy(&(_sc)->sc_sx) #define LAGG_XLOCK(_sc) sx_xlock(&(_sc)->sc_sx) #define LAGG_XUNLOCK(_sc) sx_xunlock(&(_sc)->sc_sx) #define LAGG_SXLOCK_ASSERT(_sc) sx_assert(&(_sc)->sc_sx, SA_LOCKED) #define LAGG_XLOCK_ASSERT(_sc) sx_assert(&(_sc)->sc_sx, SA_XLOCKED) /* Special flags we should propagate to the lagg ports. 
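* Each entry pairs an interface flag with the function that toggles it on a member port (IFF_PROMISC via ifpromisc(), IFF_ALLMULTI via if_allmulti()); lagg_setflags() applies this table as ports are added and removed.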
*/ static struct { int flag; int (*func)(struct ifnet *, int); } lagg_pflags[] = { {IFF_PROMISC, ifpromisc}, {IFF_ALLMULTI, if_allmulti}, {0, NULL} }; VNET_DEFINE(SLIST_HEAD(__trhead, lagg_softc), lagg_list); /* list of laggs */ #define V_lagg_list VNET(lagg_list) VNET_DEFINE_STATIC(struct mtx, lagg_list_mtx); #define V_lagg_list_mtx VNET(lagg_list_mtx) #define LAGG_LIST_LOCK_INIT(x) mtx_init(&V_lagg_list_mtx, \ "if_lagg list", NULL, MTX_DEF) #define LAGG_LIST_LOCK_DESTROY(x) mtx_destroy(&V_lagg_list_mtx) #define LAGG_LIST_LOCK(x) mtx_lock(&V_lagg_list_mtx) #define LAGG_LIST_UNLOCK(x) mtx_unlock(&V_lagg_list_mtx) eventhandler_tag lagg_detach_cookie = NULL; static int lagg_clone_create(struct if_clone *, int, caddr_t); static void lagg_clone_destroy(struct ifnet *); VNET_DEFINE_STATIC(struct if_clone *, lagg_cloner); #define V_lagg_cloner VNET(lagg_cloner) static const char laggname[] = "lagg"; static void lagg_capabilities(struct lagg_softc *); static int lagg_port_create(struct lagg_softc *, struct ifnet *); static int lagg_port_destroy(struct lagg_port *, int); static struct mbuf *lagg_input(struct ifnet *, struct mbuf *); static void lagg_linkstate(struct lagg_softc *); static void lagg_port_state(struct ifnet *, int); static int lagg_port_ioctl(struct ifnet *, u_long, caddr_t); static int lagg_port_output(struct ifnet *, struct mbuf *, const struct sockaddr *, struct route *); static void lagg_port_ifdetach(void *arg __unused, struct ifnet *); #ifdef LAGG_PORT_STACKING static int lagg_port_checkstacking(struct lagg_softc *); #endif static void lagg_port2req(struct lagg_port *, struct lagg_reqport *); static void lagg_init(void *); static void lagg_stop(struct lagg_softc *); static int lagg_ioctl(struct ifnet *, u_long, caddr_t); #ifdef RATELIMIT static int lagg_snd_tag_alloc(struct ifnet *, union if_snd_tag_alloc_params *, struct m_snd_tag **); +static void lagg_snd_tag_free(struct m_snd_tag *); #endif static int lagg_setmulti(struct lagg_port *); static int lagg_clrmulti(struct lagg_port *); static int lagg_setcaps(struct lagg_port *, int cap); static int lagg_setflag(struct lagg_port *, int, int, int (*func)(struct ifnet *, int)); static int lagg_setflags(struct lagg_port *, int status); static uint64_t lagg_get_counter(struct ifnet *ifp, ift_counter cnt); static int lagg_transmit(struct ifnet *, struct mbuf *); static void lagg_qflush(struct ifnet *); static int lagg_media_change(struct ifnet *); static void lagg_media_status(struct ifnet *, struct ifmediareq *); static struct lagg_port *lagg_link_active(struct lagg_softc *, struct lagg_port *); /* Simple round robin */ static void lagg_rr_attach(struct lagg_softc *); static int lagg_rr_start(struct lagg_softc *, struct mbuf *); static struct mbuf *lagg_rr_input(struct lagg_softc *, struct lagg_port *, struct mbuf *); /* Active failover */ static int lagg_fail_start(struct lagg_softc *, struct mbuf *); static struct mbuf *lagg_fail_input(struct lagg_softc *, struct lagg_port *, struct mbuf *); /* Loadbalancing */ static void lagg_lb_attach(struct lagg_softc *); static void lagg_lb_detach(struct lagg_softc *); static int lagg_lb_port_create(struct lagg_port *); static void lagg_lb_port_destroy(struct lagg_port *); static int lagg_lb_start(struct lagg_softc *, struct mbuf *); static struct mbuf *lagg_lb_input(struct lagg_softc *, struct lagg_port *, struct mbuf *); static int lagg_lb_porttable(struct lagg_softc *, struct lagg_port *); /* Broadcast */ static int lagg_bcast_start(struct lagg_softc *, struct mbuf *); static struct mbuf 
*lagg_bcast_input(struct lagg_softc *, struct lagg_port *, struct mbuf *); /* 802.3ad LACP */ static void lagg_lacp_attach(struct lagg_softc *); static void lagg_lacp_detach(struct lagg_softc *); static int lagg_lacp_start(struct lagg_softc *, struct mbuf *); static struct mbuf *lagg_lacp_input(struct lagg_softc *, struct lagg_port *, struct mbuf *); static void lagg_lacp_lladdr(struct lagg_softc *); /* lagg protocol table */ static const struct lagg_proto { lagg_proto pr_num; void (*pr_attach)(struct lagg_softc *); void (*pr_detach)(struct lagg_softc *); int (*pr_start)(struct lagg_softc *, struct mbuf *); struct mbuf * (*pr_input)(struct lagg_softc *, struct lagg_port *, struct mbuf *); int (*pr_addport)(struct lagg_port *); void (*pr_delport)(struct lagg_port *); void (*pr_linkstate)(struct lagg_port *); void (*pr_init)(struct lagg_softc *); void (*pr_stop)(struct lagg_softc *); void (*pr_lladdr)(struct lagg_softc *); void (*pr_request)(struct lagg_softc *, void *); void (*pr_portreq)(struct lagg_port *, void *); } lagg_protos[] = { { .pr_num = LAGG_PROTO_NONE }, { .pr_num = LAGG_PROTO_ROUNDROBIN, .pr_attach = lagg_rr_attach, .pr_start = lagg_rr_start, .pr_input = lagg_rr_input, }, { .pr_num = LAGG_PROTO_FAILOVER, .pr_start = lagg_fail_start, .pr_input = lagg_fail_input, }, { .pr_num = LAGG_PROTO_LOADBALANCE, .pr_attach = lagg_lb_attach, .pr_detach = lagg_lb_detach, .pr_start = lagg_lb_start, .pr_input = lagg_lb_input, .pr_addport = lagg_lb_port_create, .pr_delport = lagg_lb_port_destroy, }, { .pr_num = LAGG_PROTO_LACP, .pr_attach = lagg_lacp_attach, .pr_detach = lagg_lacp_detach, .pr_start = lagg_lacp_start, .pr_input = lagg_lacp_input, .pr_addport = lacp_port_create, .pr_delport = lacp_port_destroy, .pr_linkstate = lacp_linkstate, .pr_init = lacp_init, .pr_stop = lacp_stop, .pr_lladdr = lagg_lacp_lladdr, .pr_request = lacp_req, .pr_portreq = lacp_portreq, }, { .pr_num = LAGG_PROTO_BROADCAST, .pr_start = lagg_bcast_start, .pr_input = lagg_bcast_input, }, }; SYSCTL_DECL(_net_link); SYSCTL_NODE(_net_link, OID_AUTO, lagg, CTLFLAG_RW, 0, "Link Aggregation"); /* Allow input on any failover links */ VNET_DEFINE_STATIC(int, lagg_failover_rx_all); #define V_lagg_failover_rx_all VNET(lagg_failover_rx_all) SYSCTL_INT(_net_link_lagg, OID_AUTO, failover_rx_all, CTLFLAG_RW | CTLFLAG_VNET, &VNET_NAME(lagg_failover_rx_all), 0, "Accept input from any interface in a failover lagg"); /* Default value for using flowid */ VNET_DEFINE_STATIC(int, def_use_flowid) = 0; #define V_def_use_flowid VNET(def_use_flowid) SYSCTL_INT(_net_link_lagg, OID_AUTO, default_use_flowid, CTLFLAG_RWTUN, &VNET_NAME(def_use_flowid), 0, "Default setting for using flow id for load sharing"); /* Default value for flowid shift */ VNET_DEFINE_STATIC(int, def_flowid_shift) = 16; #define V_def_flowid_shift VNET(def_flowid_shift) SYSCTL_INT(_net_link_lagg, OID_AUTO, default_flowid_shift, CTLFLAG_RWTUN, &VNET_NAME(def_flowid_shift), 0, "Default setting for flowid shift for load sharing"); static void vnet_lagg_init(const void *unused __unused) { LAGG_LIST_LOCK_INIT(); SLIST_INIT(&V_lagg_list); V_lagg_cloner = if_clone_simple(laggname, lagg_clone_create, lagg_clone_destroy, 0); } VNET_SYSINIT(vnet_lagg_init, SI_SUB_PROTO_IFATTACHDOMAIN, SI_ORDER_ANY, vnet_lagg_init, NULL); static void vnet_lagg_uninit(const void *unused __unused) { if_clone_detach(V_lagg_cloner); LAGG_LIST_LOCK_DESTROY(); } VNET_SYSUNINIT(vnet_lagg_uninit, SI_SUB_INIT_IF, SI_ORDER_ANY, vnet_lagg_uninit, NULL); static int lagg_modevent(module_t mod, int type, void *data) { 
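	/* On MOD_LOAD, hook the lagg_input_p and lagg_linkstate_p pointers consulted by the generic ethernet code and register the ifnet-departure handler; MOD_UNLOAD reverses this. */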
switch (type) { case MOD_LOAD: lagg_input_p = lagg_input; lagg_linkstate_p = lagg_port_state; lagg_detach_cookie = EVENTHANDLER_REGISTER( ifnet_departure_event, lagg_port_ifdetach, NULL, EVENTHANDLER_PRI_ANY); break; case MOD_UNLOAD: EVENTHANDLER_DEREGISTER(ifnet_departure_event, lagg_detach_cookie); lagg_input_p = NULL; lagg_linkstate_p = NULL; break; default: return (EOPNOTSUPP); } return (0); } static moduledata_t lagg_mod = { "if_lagg", lagg_modevent, 0 }; DECLARE_MODULE(if_lagg, lagg_mod, SI_SUB_PSEUDO, SI_ORDER_ANY); MODULE_VERSION(if_lagg, 1); static void lagg_proto_attach(struct lagg_softc *sc, lagg_proto pr) { LAGG_XLOCK_ASSERT(sc); KASSERT(sc->sc_proto == LAGG_PROTO_NONE, ("%s: sc %p has proto", __func__, sc)); if (sc->sc_ifflags & IFF_DEBUG) if_printf(sc->sc_ifp, "using proto %u\n", pr); if (lagg_protos[pr].pr_attach != NULL) lagg_protos[pr].pr_attach(sc); sc->sc_proto = pr; } static void lagg_proto_detach(struct lagg_softc *sc) { lagg_proto pr; LAGG_XLOCK_ASSERT(sc); pr = sc->sc_proto; sc->sc_proto = LAGG_PROTO_NONE; if (lagg_protos[pr].pr_detach != NULL) lagg_protos[pr].pr_detach(sc); } static int lagg_proto_start(struct lagg_softc *sc, struct mbuf *m) { return (lagg_protos[sc->sc_proto].pr_start(sc, m)); } static struct mbuf * lagg_proto_input(struct lagg_softc *sc, struct lagg_port *lp, struct mbuf *m) { return (lagg_protos[sc->sc_proto].pr_input(sc, lp, m)); } static int lagg_proto_addport(struct lagg_softc *sc, struct lagg_port *lp) { if (lagg_protos[sc->sc_proto].pr_addport == NULL) return (0); else return (lagg_protos[sc->sc_proto].pr_addport(lp)); } static void lagg_proto_delport(struct lagg_softc *sc, struct lagg_port *lp) { if (lagg_protos[sc->sc_proto].pr_delport != NULL) lagg_protos[sc->sc_proto].pr_delport(lp); } static void lagg_proto_linkstate(struct lagg_softc *sc, struct lagg_port *lp) { if (lagg_protos[sc->sc_proto].pr_linkstate != NULL) lagg_protos[sc->sc_proto].pr_linkstate(lp); } static void lagg_proto_init(struct lagg_softc *sc) { if (lagg_protos[sc->sc_proto].pr_init != NULL) lagg_protos[sc->sc_proto].pr_init(sc); } static void lagg_proto_stop(struct lagg_softc *sc) { if (lagg_protos[sc->sc_proto].pr_stop != NULL) lagg_protos[sc->sc_proto].pr_stop(sc); } static void lagg_proto_lladdr(struct lagg_softc *sc) { if (lagg_protos[sc->sc_proto].pr_lladdr != NULL) lagg_protos[sc->sc_proto].pr_lladdr(sc); } static void lagg_proto_request(struct lagg_softc *sc, void *v) { if (lagg_protos[sc->sc_proto].pr_request != NULL) lagg_protos[sc->sc_proto].pr_request(sc, v); } static void lagg_proto_portreq(struct lagg_softc *sc, struct lagg_port *lp, void *v) { if (lagg_protos[sc->sc_proto].pr_portreq != NULL) lagg_protos[sc->sc_proto].pr_portreq(lp, v); } /* * This routine is run via a vlan * config EVENT */ static void lagg_register_vlan(void *arg, struct ifnet *ifp, u_int16_t vtag) { struct lagg_softc *sc = ifp->if_softc; struct lagg_port *lp; if (ifp->if_softc != arg) /* Not our event */ return; LAGG_RLOCK(); CK_SLIST_FOREACH(lp, &sc->sc_ports, lp_entries) EVENTHANDLER_INVOKE(vlan_config, lp->lp_ifp, vtag); LAGG_RUNLOCK(); } /* * This routine is run via a vlan * unconfig EVENT */ static void lagg_unregister_vlan(void *arg, struct ifnet *ifp, u_int16_t vtag) { struct lagg_softc *sc = ifp->if_softc; struct lagg_port *lp; if (ifp->if_softc != arg) /* Not our event */ return; LAGG_RLOCK(); CK_SLIST_FOREACH(lp, &sc->sc_ports, lp_entries) EVENTHANDLER_INVOKE(vlan_unconfig, lp->lp_ifp, vtag); LAGG_RUNLOCK(); } static int lagg_clone_create(struct if_clone *ifc, int unit,
caddr_t params) { struct lagg_softc *sc; struct ifnet *ifp; static const u_char eaddr[6]; /* 00:00:00:00:00:00 */ sc = malloc(sizeof(*sc), M_DEVBUF, M_WAITOK|M_ZERO); ifp = sc->sc_ifp = if_alloc(IFT_ETHER); if (ifp == NULL) { free(sc, M_DEVBUF); return (ENOSPC); } LAGG_SX_INIT(sc); LAGG_XLOCK(sc); if (V_def_use_flowid) sc->sc_opts |= LAGG_OPT_USE_FLOWID; sc->flowid_shift = V_def_flowid_shift; /* Hash all layers by default */ sc->sc_flags = MBUF_HASHFLAG_L2|MBUF_HASHFLAG_L3|MBUF_HASHFLAG_L4; lagg_proto_attach(sc, LAGG_PROTO_DEFAULT); CK_SLIST_INIT(&sc->sc_ports); /* Initialise pseudo media types */ ifmedia_init(&sc->sc_media, 0, lagg_media_change, lagg_media_status); ifmedia_add(&sc->sc_media, IFM_ETHER | IFM_AUTO, 0, NULL); ifmedia_set(&sc->sc_media, IFM_ETHER | IFM_AUTO); if_initname(ifp, laggname, unit); ifp->if_softc = sc; ifp->if_transmit = lagg_transmit; ifp->if_qflush = lagg_qflush; ifp->if_init = lagg_init; ifp->if_ioctl = lagg_ioctl; ifp->if_get_counter = lagg_get_counter; ifp->if_flags = IFF_SIMPLEX | IFF_BROADCAST | IFF_MULTICAST; #ifdef RATELIMIT ifp->if_snd_tag_alloc = lagg_snd_tag_alloc; + ifp->if_snd_tag_free = lagg_snd_tag_free; #endif ifp->if_capenable = ifp->if_capabilities = IFCAP_HWSTATS; /* * Attach as an ordinary ethernet device, children will be attached * as special device IFT_IEEE8023ADLAG. */ ether_ifattach(ifp, eaddr); sc->vlan_attach = EVENTHANDLER_REGISTER(vlan_config, lagg_register_vlan, sc, EVENTHANDLER_PRI_FIRST); sc->vlan_detach = EVENTHANDLER_REGISTER(vlan_unconfig, lagg_unregister_vlan, sc, EVENTHANDLER_PRI_FIRST); /* Insert into the global list of laggs */ LAGG_LIST_LOCK(); SLIST_INSERT_HEAD(&V_lagg_list, sc, sc_entries); LAGG_LIST_UNLOCK(); LAGG_XUNLOCK(sc); return (0); } static void lagg_clone_destroy(struct ifnet *ifp) { struct lagg_softc *sc = (struct lagg_softc *)ifp->if_softc; struct lagg_port *lp; LAGG_XLOCK(sc); sc->sc_destroying = 1; lagg_stop(sc); ifp->if_flags &= ~IFF_UP; EVENTHANDLER_DEREGISTER(vlan_config, sc->vlan_attach); EVENTHANDLER_DEREGISTER(vlan_unconfig, sc->vlan_detach); /* Shutdown and remove lagg ports */ while ((lp = CK_SLIST_FIRST(&sc->sc_ports)) != NULL) lagg_port_destroy(lp, 1); /* Unhook the aggregation protocol */ lagg_proto_detach(sc); LAGG_XUNLOCK(sc); ifmedia_removeall(&sc->sc_media); ether_ifdetach(ifp); if_free(ifp); LAGG_LIST_LOCK(); SLIST_REMOVE(&V_lagg_list, sc, lagg_softc, sc_entries); LAGG_LIST_UNLOCK(); LAGG_SX_DESTROY(sc); free(sc, M_DEVBUF); } static void lagg_capabilities(struct lagg_softc *sc) { struct lagg_port *lp; int cap, ena, pena; uint64_t hwa; struct ifnet_hw_tsomax hw_tsomax; LAGG_XLOCK_ASSERT(sc); /* Get common enabled capabilities for the lagg ports */ ena = ~0; CK_SLIST_FOREACH(lp, &sc->sc_ports, lp_entries) ena &= lp->lp_ifp->if_capenable; ena = (ena == ~0 ? 0 : ena); /* * Apply common enabled capabilities back to the lagg ports. * May require several iterations if they are dependent. */ do { pena = ena; CK_SLIST_FOREACH(lp, &sc->sc_ports, lp_entries) { lagg_setcaps(lp, ena); ena &= lp->lp_ifp->if_capenable; } } while (pena != ena); /* Get other capabilities from the lagg ports */ cap = ~0; hwa = ~(uint64_t)0; memset(&hw_tsomax, 0, sizeof(hw_tsomax)); CK_SLIST_FOREACH(lp, &sc->sc_ports, lp_entries) { cap &= lp->lp_ifp->if_capabilities; hwa &= lp->lp_ifp->if_hwassist; if_hw_tsomax_common(lp->lp_ifp, &hw_tsomax); } cap = (cap == ~0 ? 0 : cap); hwa = (hwa == ~(uint64_t)0 ? 
0 : hwa); if (sc->sc_ifp->if_capabilities != cap || sc->sc_ifp->if_capenable != ena || sc->sc_ifp->if_hwassist != hwa || if_hw_tsomax_update(sc->sc_ifp, &hw_tsomax) != 0) { sc->sc_ifp->if_capabilities = cap; sc->sc_ifp->if_capenable = ena; sc->sc_ifp->if_hwassist = hwa; getmicrotime(&sc->sc_ifp->if_lastchange); if (sc->sc_ifflags & IFF_DEBUG) if_printf(sc->sc_ifp, "capabilities 0x%08x enabled 0x%08x\n", cap, ena); } } static int lagg_port_create(struct lagg_softc *sc, struct ifnet *ifp) { struct lagg_softc *sc_ptr; struct lagg_port *lp, *tlp; struct ifreq ifr; int error, i, oldmtu; uint64_t *pval; LAGG_XLOCK_ASSERT(sc); if (sc->sc_ifp == ifp) { if_printf(sc->sc_ifp, "cannot add a lagg to itself as a port\n"); return (EINVAL); } /* Limit the maximal number of lagg ports */ if (sc->sc_count >= LAGG_MAX_PORTS) return (ENOSPC); /* Check if port has already been associated to a lagg */ if (ifp->if_lagg != NULL) { /* Port is already in the current lagg? */ lp = (struct lagg_port *)ifp->if_lagg; if (lp->lp_softc == sc) return (EEXIST); return (EBUSY); } /* XXX Disallow non-ethernet interfaces (this should be any of 802) */ if (ifp->if_type != IFT_ETHER && ifp->if_type != IFT_L2VLAN) return (EPROTONOSUPPORT); /* Allow the first Ethernet member to define the MTU */ oldmtu = -1; if (CK_SLIST_EMPTY(&sc->sc_ports)) { sc->sc_ifp->if_mtu = ifp->if_mtu; } else if (sc->sc_ifp->if_mtu != ifp->if_mtu) { if (ifp->if_ioctl == NULL) { if_printf(sc->sc_ifp, "cannot change MTU for %s\n", ifp->if_xname); return (EINVAL); } oldmtu = ifp->if_mtu; strlcpy(ifr.ifr_name, ifp->if_xname, sizeof(ifr.ifr_name)); ifr.ifr_mtu = sc->sc_ifp->if_mtu; error = (*ifp->if_ioctl)(ifp, SIOCSIFMTU, (caddr_t)&ifr); if (error != 0) { if_printf(sc->sc_ifp, "invalid MTU for %s\n", ifp->if_xname); return (error); } ifr.ifr_mtu = oldmtu; } lp = malloc(sizeof(struct lagg_port), M_DEVBUF, M_WAITOK|M_ZERO); lp->lp_softc = sc; /* Check if port is a stacked lagg */ LAGG_LIST_LOCK(); SLIST_FOREACH(sc_ptr, &V_lagg_list, sc_entries) { if (ifp == sc_ptr->sc_ifp) { LAGG_LIST_UNLOCK(); free(lp, M_DEVBUF); if (oldmtu != -1) (*ifp->if_ioctl)(ifp, SIOCSIFMTU, (caddr_t)&ifr); return (EINVAL); /* XXX disable stacking for the moment, its untested */ #ifdef LAGG_PORT_STACKING lp->lp_flags |= LAGG_PORT_STACK; if (lagg_port_checkstacking(sc_ptr) >= LAGG_MAX_STACKING) { LAGG_LIST_UNLOCK(); free(lp, M_DEVBUF); if (oldmtu != -1) (*ifp->if_ioctl)(ifp, SIOCSIFMTU, (caddr_t)&ifr); return (E2BIG); } #endif } } LAGG_LIST_UNLOCK(); if_ref(ifp); lp->lp_ifp = ifp; bcopy(IF_LLADDR(ifp), lp->lp_lladdr, ETHER_ADDR_LEN); lp->lp_ifcapenable = ifp->if_capenable; if (CK_SLIST_EMPTY(&sc->sc_ports)) { bcopy(IF_LLADDR(ifp), IF_LLADDR(sc->sc_ifp), ETHER_ADDR_LEN); lagg_proto_lladdr(sc); EVENTHANDLER_INVOKE(iflladdr_event, sc->sc_ifp); } else { if_setlladdr(ifp, IF_LLADDR(sc->sc_ifp), ETHER_ADDR_LEN); } lagg_setflags(lp, 1); if (CK_SLIST_EMPTY(&sc->sc_ports)) sc->sc_primary = lp; /* Change the interface type */ lp->lp_iftype = ifp->if_type; ifp->if_type = IFT_IEEE8023ADLAG; ifp->if_lagg = lp; lp->lp_ioctl = ifp->if_ioctl; ifp->if_ioctl = lagg_port_ioctl; lp->lp_output = ifp->if_output; ifp->if_output = lagg_port_output; /* Read port counters */ pval = lp->port_counters.val; for (i = 0; i < IFCOUNTERS; i++, pval++) *pval = ifp->if_get_counter(ifp, i); /* * Insert into the list of ports. * Keep ports sorted by if_index. It is handy, when configuration * is predictable and `ifconfig laggN create ...` command * will lead to the same result each time. 
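* The loop below locates the last existing port whose if_index is smaller than the new port's and inserts after it, falling back to the list head when no such port exists.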
*/ LAGG_RLOCK(); CK_SLIST_FOREACH(tlp, &sc->sc_ports, lp_entries) { if (tlp->lp_ifp->if_index < ifp->if_index && ( CK_SLIST_NEXT(tlp, lp_entries) == NULL || ((struct lagg_port*)CK_SLIST_NEXT(tlp, lp_entries))->lp_ifp->if_index > ifp->if_index)) break; } LAGG_RUNLOCK(); if (tlp != NULL) CK_SLIST_INSERT_AFTER(tlp, lp, lp_entries); else CK_SLIST_INSERT_HEAD(&sc->sc_ports, lp, lp_entries); sc->sc_count++; lagg_setmulti(lp); if ((error = lagg_proto_addport(sc, lp)) != 0) { /* Remove the port, without calling pr_delport. */ lagg_port_destroy(lp, 0); if (oldmtu != -1) (*ifp->if_ioctl)(ifp, SIOCSIFMTU, (caddr_t)&ifr); return (error); } /* Update lagg capabilities */ lagg_capabilities(sc); lagg_linkstate(sc); return (0); } #ifdef LAGG_PORT_STACKING static int lagg_port_checkstacking(struct lagg_softc *sc) { struct lagg_softc *sc_ptr; struct lagg_port *lp; int m = 0; LAGG_SXLOCK_ASSERT(sc); CK_SLIST_FOREACH(lp, &sc->sc_ports, lp_entries) { if (lp->lp_flags & LAGG_PORT_STACK) { sc_ptr = (struct lagg_softc *)lp->lp_ifp->if_softc; m = MAX(m, lagg_port_checkstacking(sc_ptr)); } } return (m + 1); } #endif static void lagg_port_destroy_cb(epoch_context_t ec) { struct lagg_port *lp; struct ifnet *ifp; lp = __containerof(ec, struct lagg_port, lp_epoch_ctx); ifp = lp->lp_ifp; if_rele(ifp); free(lp, M_DEVBUF); } static int lagg_port_destroy(struct lagg_port *lp, int rundelport) { struct lagg_softc *sc = lp->lp_softc; struct lagg_port *lp_ptr, *lp0; struct ifnet *ifp = lp->lp_ifp; uint64_t *pval, vdiff; int i; LAGG_XLOCK_ASSERT(sc); if (rundelport) lagg_proto_delport(sc, lp); if (lp->lp_detaching == 0) lagg_clrmulti(lp); /* Restore interface */ ifp->if_type = lp->lp_iftype; ifp->if_ioctl = lp->lp_ioctl; ifp->if_output = lp->lp_output; ifp->if_lagg = NULL; /* Update detached port counters */ pval = lp->port_counters.val; for (i = 0; i < IFCOUNTERS; i++, pval++) { vdiff = ifp->if_get_counter(ifp, i) - *pval; sc->detached_counters.val[i] += vdiff; } /* Finally, remove the port from the lagg */ CK_SLIST_REMOVE(&sc->sc_ports, lp, lagg_port, lp_entries); sc->sc_count--; /* Update the primary interface */ if (lp == sc->sc_primary) { uint8_t lladdr[ETHER_ADDR_LEN]; if ((lp0 = CK_SLIST_FIRST(&sc->sc_ports)) == NULL) bzero(&lladdr, ETHER_ADDR_LEN); else bcopy(lp0->lp_lladdr, lladdr, ETHER_ADDR_LEN); sc->sc_primary = lp0; if (sc->sc_destroying == 0) { bcopy(lladdr, IF_LLADDR(sc->sc_ifp), ETHER_ADDR_LEN); lagg_proto_lladdr(sc); EVENTHANDLER_INVOKE(iflladdr_event, sc->sc_ifp); } /* * Update lladdr for each port (new primary needs update * as well, to switch from old lladdr to its 'real' one) */ CK_SLIST_FOREACH(lp_ptr, &sc->sc_ports, lp_entries) if_setlladdr(lp_ptr->lp_ifp, lladdr, ETHER_ADDR_LEN); } if (lp->lp_ifflags) if_printf(ifp, "%s: lp_ifflags unclean\n", __func__); if (lp->lp_detaching == 0) { lagg_setflags(lp, 0); lagg_setcaps(lp, lp->lp_ifcapenable); if_setlladdr(ifp, lp->lp_lladdr, ETHER_ADDR_LEN); } /* * free port and release its ifnet reference after a grace period has * elapsed.
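 */

/*
 * Illustrative note on the deferred free below (a sketch of the pattern,
 * not additional driver logic): readers walk sc_ports under the network
 * epoch, so the port memory cannot be reclaimed synchronously.  The
 * destroy path instead schedules lagg_port_destroy_cb() above, which runs
 * only once all in-flight epoch readers have drained:
 *
 *	epoch_call(net_epoch_preempt, &lp->lp_epoch_ctx, cb);
 *	cb:	if_rele(lp->lp_ifp);
 *		free(lp, M_DEVBUF);
 */

/*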
*/ epoch_call(net_epoch_preempt, &lp->lp_epoch_ctx, lagg_port_destroy_cb); /* Update lagg capabilities */ lagg_capabilities(sc); lagg_linkstate(sc); return (0); } static int lagg_port_ioctl(struct ifnet *ifp, u_long cmd, caddr_t data) { struct lagg_reqport *rp = (struct lagg_reqport *)data; struct lagg_softc *sc; struct lagg_port *lp = NULL; int error = 0; /* Should be checked by the caller */ if (ifp->if_type != IFT_IEEE8023ADLAG || (lp = ifp->if_lagg) == NULL || (sc = lp->lp_softc) == NULL) goto fallback; switch (cmd) { case SIOCGLAGGPORT: if (rp->rp_portname[0] == '\0' || ifunit(rp->rp_portname) != ifp) { error = EINVAL; break; } LAGG_RLOCK(); if ((lp = ifp->if_lagg) == NULL || lp->lp_softc != sc) { error = ENOENT; LAGG_RUNLOCK(); break; } lagg_port2req(lp, rp); LAGG_RUNLOCK(); break; case SIOCSIFCAP: if (lp->lp_ioctl == NULL) { error = EINVAL; break; } error = (*lp->lp_ioctl)(ifp, cmd, data); if (error) break; /* Update lagg interface capabilities */ LAGG_XLOCK(sc); lagg_capabilities(sc); LAGG_XUNLOCK(sc); VLAN_CAPABILITIES(sc->sc_ifp); break; case SIOCSIFMTU: /* Do not allow the MTU to be changed once joined */ error = EINVAL; break; default: goto fallback; } return (error); fallback: if (lp != NULL && lp->lp_ioctl != NULL) return ((*lp->lp_ioctl)(ifp, cmd, data)); return (EINVAL); } /* * Requests counter @cnt data. * * Counter value is calculated the following way: * 1) for each port, sum difference between current and "initial" measurements. * 2) add lagg logical interface counters. * 3) add data from detached_counters array. * * We also do the following things on ports attach/detach: * 1) On port attach we store all counters it has into port_counter array. * 2) On port detach we add the difference between "initial" and * current counters data to detached_counters array. */ static uint64_t lagg_get_counter(struct ifnet *ifp, ift_counter cnt) { struct lagg_softc *sc; struct lagg_port *lp; struct ifnet *lpifp; uint64_t newval, oldval, vsum; /* Revise this when we've got non-generic counters. */ KASSERT(cnt < IFCOUNTERS, ("%s: invalid cnt %d", __func__, cnt)); sc = (struct lagg_softc *)ifp->if_softc; vsum = 0; LAGG_RLOCK(); CK_SLIST_FOREACH(lp, &sc->sc_ports, lp_entries) { /* Saved attached value */ oldval = lp->port_counters.val[cnt]; /* current value */ lpifp = lp->lp_ifp; newval = lpifp->if_get_counter(lpifp, cnt); /* Calculate diff and save new */ vsum += newval - oldval; } LAGG_RUNLOCK(); /* * Add counter data which might be added by upper * layer protocols operating on logical interface. */ vsum += if_get_counter_default(ifp, cnt); /* * Add counter data from detached ports counters */ vsum += sc->detached_counters.val[cnt]; return (vsum); } /* * For direct output to child ports. */ static int lagg_port_output(struct ifnet *ifp, struct mbuf *m, const struct sockaddr *dst, struct route *ro) { struct lagg_port *lp = ifp->if_lagg; switch (dst->sa_family) { case pseudo_AF_HDRCMPLT: case AF_UNSPEC: return ((*lp->lp_output)(ifp, m, dst, ro)); } /* drop any other frames */ m_freem(m); return (ENETDOWN); } static void lagg_port_ifdetach(void *arg __unused, struct ifnet *ifp) { struct lagg_port *lp; struct lagg_softc *sc; if ((lp = ifp->if_lagg) == NULL) return; /* If the ifnet is just being renamed, don't do anything.
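 */

/*
 * Worked example for lagg_get_counter() above (illustrative, with assumed
 * numbers): port A joined with IFCOUNTER_OPACKETS at 100 and now reports
 * 150; port B joined at 0 and now reports 30; a third port sent 20
 * packets while it was attached and has since been removed, leaving 20 in
 * detached_counters.  The lagg then reports
 *
 *	(150 - 100) + (30 - 0) + 20 = 100
 *
 * plus whatever if_get_counter_default() accumulated on the logical
 * interface itself, so traffic a port carried before joining is never
 * attributed to the lagg.
 */

/*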
*/ if (ifp->if_flags & IFF_RENAMING) return; sc = lp->lp_softc; LAGG_XLOCK(sc); lp->lp_detaching = 1; lagg_port_destroy(lp, 1); LAGG_XUNLOCK(sc); VLAN_CAPABILITIES(sc->sc_ifp); } static void lagg_port2req(struct lagg_port *lp, struct lagg_reqport *rp) { struct lagg_softc *sc = lp->lp_softc; strlcpy(rp->rp_ifname, sc->sc_ifname, sizeof(rp->rp_ifname)); strlcpy(rp->rp_portname, lp->lp_ifp->if_xname, sizeof(rp->rp_portname)); rp->rp_prio = lp->lp_prio; rp->rp_flags = lp->lp_flags; lagg_proto_portreq(sc, lp, &rp->rp_psc); /* Add protocol specific flags */ switch (sc->sc_proto) { case LAGG_PROTO_FAILOVER: if (lp == sc->sc_primary) rp->rp_flags |= LAGG_PORT_MASTER; if (lp == lagg_link_active(sc, sc->sc_primary)) rp->rp_flags |= LAGG_PORT_ACTIVE; break; case LAGG_PROTO_ROUNDROBIN: case LAGG_PROTO_LOADBALANCE: case LAGG_PROTO_BROADCAST: if (LAGG_PORTACTIVE(lp)) rp->rp_flags |= LAGG_PORT_ACTIVE; break; case LAGG_PROTO_LACP: /* LACP has a different definition of active */ if (lacp_isactive(lp)) rp->rp_flags |= LAGG_PORT_ACTIVE; if (lacp_iscollecting(lp)) rp->rp_flags |= LAGG_PORT_COLLECTING; if (lacp_isdistributing(lp)) rp->rp_flags |= LAGG_PORT_DISTRIBUTING; break; } } static void lagg_init(void *xsc) { struct lagg_softc *sc = (struct lagg_softc *)xsc; struct ifnet *ifp = sc->sc_ifp; struct lagg_port *lp; LAGG_XLOCK(sc); if (ifp->if_drv_flags & IFF_DRV_RUNNING) { LAGG_XUNLOCK(sc); return; } ifp->if_drv_flags |= IFF_DRV_RUNNING; /* * Update the port lladdrs if needed. * This might be if_setlladdr() notification * that lladdr has been changed. */ CK_SLIST_FOREACH(lp, &sc->sc_ports, lp_entries) { if (memcmp(IF_LLADDR(ifp), IF_LLADDR(lp->lp_ifp), ETHER_ADDR_LEN) != 0) if_setlladdr(lp->lp_ifp, IF_LLADDR(ifp), ETHER_ADDR_LEN); } lagg_proto_init(sc); LAGG_XUNLOCK(sc); } static void lagg_stop(struct lagg_softc *sc) { struct ifnet *ifp = sc->sc_ifp; LAGG_XLOCK_ASSERT(sc); if ((ifp->if_drv_flags & IFF_DRV_RUNNING) == 0) return; ifp->if_drv_flags &= ~IFF_DRV_RUNNING; lagg_proto_stop(sc); } static int lagg_ioctl(struct ifnet *ifp, u_long cmd, caddr_t data) { struct lagg_softc *sc = (struct lagg_softc *)ifp->if_softc; struct lagg_reqall *ra = (struct lagg_reqall *)data; struct lagg_reqopts *ro = (struct lagg_reqopts *)data; struct lagg_reqport *rp = (struct lagg_reqport *)data, rpbuf; struct lagg_reqflags *rf = (struct lagg_reqflags *)data; struct ifreq *ifr = (struct ifreq *)data; struct lagg_port *lp; struct ifnet *tpif; struct thread *td = curthread; char *buf, *outbuf; int count, buflen, len, error = 0; bzero(&rpbuf, sizeof(rpbuf)); switch (cmd) { case SIOCGLAGG: LAGG_XLOCK(sc); buflen = sc->sc_count * sizeof(struct lagg_reqport); outbuf = malloc(buflen, M_TEMP, M_WAITOK | M_ZERO); ra->ra_proto = sc->sc_proto; lagg_proto_request(sc, &ra->ra_psc); count = 0; buf = outbuf; len = min(ra->ra_size, buflen); CK_SLIST_FOREACH(lp, &sc->sc_ports, lp_entries) { if (len < sizeof(rpbuf)) break; lagg_port2req(lp, &rpbuf); memcpy(buf, &rpbuf, sizeof(rpbuf)); count++; buf += sizeof(rpbuf); len -= sizeof(rpbuf); } LAGG_XUNLOCK(sc); ra->ra_ports = count; ra->ra_size = count * sizeof(rpbuf); error = copyout(outbuf, ra->ra_port, ra->ra_size); free(outbuf, M_TEMP); break; case SIOCSLAGG: error = priv_check(td, PRIV_NET_LAGG); if (error) break; if (ra->ra_proto >= LAGG_PROTO_MAX) { error = EPROTONOSUPPORT; break; } LAGG_XLOCK(sc); lagg_proto_detach(sc); LAGG_UNLOCK_ASSERT(); lagg_proto_attach(sc, ra->ra_proto); LAGG_XUNLOCK(sc); break; case SIOCGLAGGOPTS: LAGG_XLOCK(sc); ro->ro_opts = sc->sc_opts; if (sc->sc_proto == 
LAGG_PROTO_LACP) { struct lacp_softc *lsc; lsc = (struct lacp_softc *)sc->sc_psc; if (lsc->lsc_debug.lsc_tx_test != 0) ro->ro_opts |= LAGG_OPT_LACP_TXTEST; if (lsc->lsc_debug.lsc_rx_test != 0) ro->ro_opts |= LAGG_OPT_LACP_RXTEST; if (lsc->lsc_strict_mode != 0) ro->ro_opts |= LAGG_OPT_LACP_STRICT; if (lsc->lsc_fast_timeout != 0) ro->ro_opts |= LAGG_OPT_LACP_TIMEOUT; ro->ro_active = sc->sc_active; } else { ro->ro_active = 0; CK_SLIST_FOREACH(lp, &sc->sc_ports, lp_entries) ro->ro_active += LAGG_PORTACTIVE(lp); } ro->ro_bkt = sc->sc_bkt; ro->ro_flapping = sc->sc_flapping; ro->ro_flowid_shift = sc->flowid_shift; LAGG_XUNLOCK(sc); break; case SIOCSLAGGOPTS: if (sc->sc_proto == LAGG_PROTO_ROUNDROBIN) { if (ro->ro_bkt == 0) sc->sc_bkt = 1; // Minimum 1 packet per iface. else sc->sc_bkt = ro->ro_bkt; } error = priv_check(td, PRIV_NET_LAGG); if (error) break; if (ro->ro_opts == 0) break; /* * Set options. LACP options are stored in sc->sc_psc, * not in sc_opts. */ int valid, lacp; switch (ro->ro_opts) { case LAGG_OPT_USE_FLOWID: case -LAGG_OPT_USE_FLOWID: case LAGG_OPT_FLOWIDSHIFT: valid = 1; lacp = 0; break; case LAGG_OPT_LACP_TXTEST: case -LAGG_OPT_LACP_TXTEST: case LAGG_OPT_LACP_RXTEST: case -LAGG_OPT_LACP_RXTEST: case LAGG_OPT_LACP_STRICT: case -LAGG_OPT_LACP_STRICT: case LAGG_OPT_LACP_TIMEOUT: case -LAGG_OPT_LACP_TIMEOUT: valid = lacp = 1; break; default: valid = lacp = 0; break; } LAGG_XLOCK(sc); if (valid == 0 || (lacp == 1 && sc->sc_proto != LAGG_PROTO_LACP)) { /* Invalid combination of options specified. */ error = EINVAL; LAGG_XUNLOCK(sc); break; /* Return from SIOCSLAGGOPTS. */ } /* * Store new options into sc->sc_opts except for * FLOWIDSHIFT and LACP options. */ if (lacp == 0) { if (ro->ro_opts == LAGG_OPT_FLOWIDSHIFT) sc->flowid_shift = ro->ro_flowid_shift; else if (ro->ro_opts > 0) sc->sc_opts |= ro->ro_opts; else sc->sc_opts &= ~ro->ro_opts; } else { struct lacp_softc *lsc; struct lacp_port *lp; lsc = (struct lacp_softc *)sc->sc_psc; switch (ro->ro_opts) { case LAGG_OPT_LACP_TXTEST: lsc->lsc_debug.lsc_tx_test = 1; break; case -LAGG_OPT_LACP_TXTEST: lsc->lsc_debug.lsc_tx_test = 0; break; case LAGG_OPT_LACP_RXTEST: lsc->lsc_debug.lsc_rx_test = 1; break; case -LAGG_OPT_LACP_RXTEST: lsc->lsc_debug.lsc_rx_test = 0; break; case LAGG_OPT_LACP_STRICT: lsc->lsc_strict_mode = 1; break; case -LAGG_OPT_LACP_STRICT: lsc->lsc_strict_mode = 0; break; case LAGG_OPT_LACP_TIMEOUT: LACP_LOCK(lsc); LIST_FOREACH(lp, &lsc->lsc_ports, lp_next) lp->lp_state |= LACP_STATE_TIMEOUT; LACP_UNLOCK(lsc); lsc->lsc_fast_timeout = 1; break; case -LAGG_OPT_LACP_TIMEOUT: LACP_LOCK(lsc); LIST_FOREACH(lp, &lsc->lsc_ports, lp_next) lp->lp_state &= ~LACP_STATE_TIMEOUT; LACP_UNLOCK(lsc); lsc->lsc_fast_timeout = 0; break; } } LAGG_XUNLOCK(sc); break; case SIOCGLAGGFLAGS: rf->rf_flags = 0; LAGG_XLOCK(sc); if (sc->sc_flags & MBUF_HASHFLAG_L2) rf->rf_flags |= LAGG_F_HASHL2; if (sc->sc_flags & MBUF_HASHFLAG_L3) rf->rf_flags |= LAGG_F_HASHL3; if (sc->sc_flags & MBUF_HASHFLAG_L4) rf->rf_flags |= LAGG_F_HASHL4; LAGG_XUNLOCK(sc); break; case SIOCSLAGGHASH: error = priv_check(td, PRIV_NET_LAGG); if (error) break; if ((rf->rf_flags & LAGG_F_HASHMASK) == 0) { error = EINVAL; break; } LAGG_XLOCK(sc); sc->sc_flags = 0; if (rf->rf_flags & LAGG_F_HASHL2) sc->sc_flags |= MBUF_HASHFLAG_L2; if (rf->rf_flags & LAGG_F_HASHL3) sc->sc_flags |= MBUF_HASHFLAG_L3; if (rf->rf_flags & LAGG_F_HASHL4) sc->sc_flags |= MBUF_HASHFLAG_L4; LAGG_XUNLOCK(sc); break; case SIOCGLAGGPORT: if (rp->rp_portname[0] == '\0' || (tpif = ifunit_ref(rp->rp_portname)) == 
NULL) { error = EINVAL; break; } LAGG_RLOCK(); if ((lp = (struct lagg_port *)tpif->if_lagg) == NULL || lp->lp_softc != sc) { error = ENOENT; LAGG_RUNLOCK(); if_rele(tpif); break; } lagg_port2req(lp, rp); LAGG_RUNLOCK(); if_rele(tpif); break; case SIOCSLAGGPORT: error = priv_check(td, PRIV_NET_LAGG); if (error) break; if (rp->rp_portname[0] == '\0' || (tpif = ifunit_ref(rp->rp_portname)) == NULL) { error = EINVAL; break; } #ifdef INET6 /* * A laggport interface should not have an inet6 address * because two interfaces with a valid link-local * scope zone must not be merged in any form. This * restriction is needed to prevent violation of the * link-local scope zone. Attempts to add a laggport * interface which has inet6 addresses trigger * removal of all inet6 addresses on the member * interface. */ if (in6ifa_llaonifp(tpif)) { in6_ifdetach(tpif); if_printf(sc->sc_ifp, "IPv6 addresses on %s have been removed " "before adding it as a member to prevent " "IPv6 address scope violation.\n", tpif->if_xname); } #endif LAGG_XLOCK(sc); error = lagg_port_create(sc, tpif); LAGG_XUNLOCK(sc); if_rele(tpif); VLAN_CAPABILITIES(ifp); break; case SIOCSLAGGDELPORT: error = priv_check(td, PRIV_NET_LAGG); if (error) break; if (rp->rp_portname[0] == '\0' || (tpif = ifunit_ref(rp->rp_portname)) == NULL) { error = EINVAL; break; } LAGG_XLOCK(sc); if ((lp = (struct lagg_port *)tpif->if_lagg) == NULL || lp->lp_softc != sc) { error = ENOENT; LAGG_XUNLOCK(sc); if_rele(tpif); break; } error = lagg_port_destroy(lp, 1); LAGG_XUNLOCK(sc); if_rele(tpif); VLAN_CAPABILITIES(ifp); break; case SIOCSIFFLAGS: /* Set flags on ports too */ LAGG_XLOCK(sc); CK_SLIST_FOREACH(lp, &sc->sc_ports, lp_entries) { lagg_setflags(lp, 1); } if (!(ifp->if_flags & IFF_UP) && (ifp->if_drv_flags & IFF_DRV_RUNNING)) { /* * If interface is marked down and it is running, * then stop and disable it. */ lagg_stop(sc); LAGG_XUNLOCK(sc); } else if ((ifp->if_flags & IFF_UP) && !(ifp->if_drv_flags & IFF_DRV_RUNNING)) { /* * If interface is marked up and it is stopped, then * start it.
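 */

/*
 * Illustrative note on the SIOCSLAGGOPTS encoding handled earlier in this
 * function (assumed caller-side usage): a positive ro_opts value sets an
 * option and its negation clears it, one option per ioctl:
 *
 *	ro.ro_opts = LAGG_OPT_LACP_STRICT;	sets lsc_strict_mode
 *	ro.ro_opts = -LAGG_OPT_LACP_STRICT;	clears it
 *
 * LACP options are rejected unless the protocol is LACP, and
 * LAGG_OPT_FLOWIDSHIFT takes its argument from ro_flowid_shift rather
 * than from the option bit itself.
 */

/*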
*/ LAGG_XUNLOCK(sc); (*ifp->if_init)(sc); } else LAGG_XUNLOCK(sc); break; case SIOCADDMULTI: case SIOCDELMULTI: LAGG_XLOCK(sc); CK_SLIST_FOREACH(lp, &sc->sc_ports, lp_entries) { lagg_clrmulti(lp); lagg_setmulti(lp); } LAGG_XUNLOCK(sc); error = 0; break; case SIOCSIFMEDIA: case SIOCGIFMEDIA: error = ifmedia_ioctl(ifp, ifr, &sc->sc_media, cmd); break; case SIOCSIFCAP: LAGG_XLOCK(sc); CK_SLIST_FOREACH(lp, &sc->sc_ports, lp_entries) { if (lp->lp_ioctl != NULL) (*lp->lp_ioctl)(lp->lp_ifp, cmd, data); } lagg_capabilities(sc); LAGG_XUNLOCK(sc); VLAN_CAPABILITIES(ifp); error = 0; break; case SIOCSIFMTU: LAGG_XLOCK(sc); CK_SLIST_FOREACH(lp, &sc->sc_ports, lp_entries) { if (lp->lp_ioctl != NULL) error = (*lp->lp_ioctl)(lp->lp_ifp, cmd, data); else error = EINVAL; if (error != 0) { if_printf(ifp, "failed to change MTU to %d on port %s, " "reverting all ports to original MTU (%d)\n", ifr->ifr_mtu, lp->lp_ifp->if_xname, ifp->if_mtu); break; } } if (error == 0) { ifp->if_mtu = ifr->ifr_mtu; } else { /* set every port back to the original MTU */ ifr->ifr_mtu = ifp->if_mtu; CK_SLIST_FOREACH(lp, &sc->sc_ports, lp_entries) { if (lp->lp_ioctl != NULL) (*lp->lp_ioctl)(lp->lp_ifp, cmd, data); } } LAGG_XUNLOCK(sc); break; default: error = ether_ioctl(ifp, cmd, data); break; } return (error); } #ifdef RATELIMIT static int lagg_snd_tag_alloc(struct ifnet *ifp, union if_snd_tag_alloc_params *params, struct m_snd_tag **ppmt) { struct lagg_softc *sc = (struct lagg_softc *)ifp->if_softc; struct lagg_port *lp; struct lagg_lb *lb; uint32_t p; switch (sc->sc_proto) { case LAGG_PROTO_FAILOVER: lp = lagg_link_active(sc, sc->sc_primary); break; case LAGG_PROTO_LOADBALANCE: if ((sc->sc_opts & LAGG_OPT_USE_FLOWID) == 0 || params->hdr.flowtype == M_HASHTYPE_NONE) return (EOPNOTSUPP); p = params->hdr.flowid >> sc->flowid_shift; p %= sc->sc_count; lb = (struct lagg_lb *)sc->sc_psc; lp = lb->lb_ports[p]; lp = lagg_link_active(sc, lp); break; case LAGG_PROTO_LACP: if ((sc->sc_opts & LAGG_OPT_USE_FLOWID) == 0 || params->hdr.flowtype == M_HASHTYPE_NONE) return (EOPNOTSUPP); lp = lacp_select_tx_port_by_hash(sc, params->hdr.flowid); break; default: return (EOPNOTSUPP); } if (lp == NULL) return (EOPNOTSUPP); ifp = lp->lp_ifp; if (ifp == NULL || ifp->if_snd_tag_alloc == NULL || (ifp->if_capenable & IFCAP_TXRTLMT) == 0) return (EOPNOTSUPP); /* forward allocation request */ return (ifp->if_snd_tag_alloc(ifp, params, ppmt)); } + +static void +lagg_snd_tag_free(struct m_snd_tag *tag) +{ + tag->ifp->if_snd_tag_free(tag); +} + #endif static int lagg_setmulti(struct lagg_port *lp) { struct lagg_softc *sc = lp->lp_softc; struct ifnet *ifp = lp->lp_ifp; struct ifnet *scifp = sc->sc_ifp; struct lagg_mc *mc; struct ifmultiaddr *ifma; int error; IF_ADDR_WLOCK(scifp); CK_STAILQ_FOREACH(ifma, &scifp->if_multiaddrs, ifma_link) { if (ifma->ifma_addr->sa_family != AF_LINK) continue; mc = malloc(sizeof(struct lagg_mc), M_DEVBUF, M_NOWAIT); if (mc == NULL) { IF_ADDR_WUNLOCK(scifp); return (ENOMEM); } bcopy(ifma->ifma_addr, &mc->mc_addr, ifma->ifma_addr->sa_len); mc->mc_addr.sdl_index = ifp->if_index; mc->mc_ifma = NULL; SLIST_INSERT_HEAD(&lp->lp_mc_head, mc, mc_entries); } IF_ADDR_WUNLOCK(scifp); SLIST_FOREACH (mc, &lp->lp_mc_head, mc_entries) { error = if_addmulti(ifp, (struct sockaddr *)&mc->mc_addr, &mc->mc_ifma); if (error) return (error); } return (0); } static int lagg_clrmulti(struct lagg_port *lp) { struct lagg_mc *mc; LAGG_XLOCK_ASSERT(lp->lp_softc); while ((mc = SLIST_FIRST(&lp->lp_mc_head)) != NULL) { SLIST_REMOVE(&lp->lp_mc_head, mc, lagg_mc, 
mc_entries); if (mc->mc_ifma && lp->lp_detaching == 0) if_delmulti_ifma(mc->mc_ifma); free(mc, M_DEVBUF); } return (0); } static int lagg_setcaps(struct lagg_port *lp, int cap) { struct ifreq ifr; if (lp->lp_ifp->if_capenable == cap) return (0); if (lp->lp_ioctl == NULL) return (ENXIO); ifr.ifr_reqcap = cap; return ((*lp->lp_ioctl)(lp->lp_ifp, SIOCSIFCAP, (caddr_t)&ifr)); } /* Handle a ref counted flag that should be set on the lagg port as well */ static int lagg_setflag(struct lagg_port *lp, int flag, int status, int (*func)(struct ifnet *, int)) { struct lagg_softc *sc = lp->lp_softc; struct ifnet *scifp = sc->sc_ifp; struct ifnet *ifp = lp->lp_ifp; int error; LAGG_XLOCK_ASSERT(sc); status = status ? (scifp->if_flags & flag) : 0; /* Now "status" contains the flag value or 0 */ /* * See if recorded ports status is different from what * we want it to be. If it is, flip it. We record ports * status in lp_ifflags so that we won't clear ports flag * we haven't set. In fact, we don't clear or set ports * flags directly, but get or release references to them. * That's why we can be sure that recorded flags still are * in accord with actual ports flags. */ if (status != (lp->lp_ifflags & flag)) { error = (*func)(ifp, status); if (error) return (error); lp->lp_ifflags &= ~flag; lp->lp_ifflags |= status; } return (0); } /* * Handle IFF_* flags that require certain changes on the lagg port * if "status" is true, update the port's flags to match the lagg's * if "status" is false, forcibly clear the flags set on the port. */ static int lagg_setflags(struct lagg_port *lp, int status) { int error, i; for (i = 0; lagg_pflags[i].flag; i++) { error = lagg_setflag(lp, lagg_pflags[i].flag, status, lagg_pflags[i].func); if (error) return (error); } return (0); } static int lagg_transmit(struct ifnet *ifp, struct mbuf *m) { struct lagg_softc *sc = (struct lagg_softc *)ifp->if_softc; int error; LAGG_RLOCK(); /* We need a Tx algorithm and at least one port */ if (sc->sc_proto == LAGG_PROTO_NONE || sc->sc_count == 0) { LAGG_RUNLOCK(); m_freem(m); if_inc_counter(ifp, IFCOUNTER_OERRORS, 1); return (ENXIO); } ETHER_BPF_MTAP(ifp, m); error = lagg_proto_start(sc, m); LAGG_RUNLOCK(); if (error != 0) if_inc_counter(ifp, IFCOUNTER_OERRORS, 1); return (error); } /* * The ifp->if_qflush entry point for lagg(4) is a no-op.
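 */

/*
 * Illustrative example for lagg_setflag() above (a sketch, not extra
 * logic): the shadow copy in lp_ifflags ensures the lagg only releases
 * references it took itself.  When the lagg gains IFF_PROMISC,
 *
 *	lagg_setflag(lp, IFF_PROMISC, 1, ifpromisc)
 *
 * calls ifpromisc(port, 1) and records the flag in lp_ifflags; clearing
 * reverses both.  A port made promiscuous by some other consumer keeps
 * its own reference and is left untouched.
 */

/*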
*/ static void lagg_qflush(struct ifnet *ifp __unused) { } static struct mbuf * lagg_input(struct ifnet *ifp, struct mbuf *m) { struct lagg_port *lp = ifp->if_lagg; struct lagg_softc *sc = lp->lp_softc; struct ifnet *scifp = sc->sc_ifp; LAGG_RLOCK(); if ((scifp->if_drv_flags & IFF_DRV_RUNNING) == 0 || lp->lp_detaching != 0 || sc->sc_proto == LAGG_PROTO_NONE) { LAGG_RUNLOCK(); m_freem(m); return (NULL); } ETHER_BPF_MTAP(scifp, m); m = lagg_proto_input(sc, lp, m); if (m != NULL && (scifp->if_flags & IFF_MONITOR) != 0) { m_freem(m); m = NULL; } LAGG_RUNLOCK(); return (m); } static int lagg_media_change(struct ifnet *ifp) { struct lagg_softc *sc = (struct lagg_softc *)ifp->if_softc; if (sc->sc_ifflags & IFF_DEBUG) printf("%s\n", __func__); /* Ignore */ return (0); } static void lagg_media_status(struct ifnet *ifp, struct ifmediareq *imr) { struct lagg_softc *sc = (struct lagg_softc *)ifp->if_softc; struct lagg_port *lp; imr->ifm_status = IFM_AVALID; imr->ifm_active = IFM_ETHER | IFM_AUTO; LAGG_RLOCK(); CK_SLIST_FOREACH(lp, &sc->sc_ports, lp_entries) { if (LAGG_PORTACTIVE(lp)) imr->ifm_status |= IFM_ACTIVE; } LAGG_RUNLOCK(); } static void lagg_linkstate(struct lagg_softc *sc) { struct lagg_port *lp; int new_link = LINK_STATE_DOWN; uint64_t speed; LAGG_XLOCK_ASSERT(sc); /* LACP handles link state itself */ if (sc->sc_proto == LAGG_PROTO_LACP) return; /* Our link is considered up if at least one of our ports is active */ LAGG_RLOCK(); CK_SLIST_FOREACH(lp, &sc->sc_ports, lp_entries) { if (lp->lp_ifp->if_link_state == LINK_STATE_UP) { new_link = LINK_STATE_UP; break; } } LAGG_RUNLOCK(); if_link_state_change(sc->sc_ifp, new_link); /* Update if_baudrate to reflect the max possible speed */ switch (sc->sc_proto) { case LAGG_PROTO_FAILOVER: sc->sc_ifp->if_baudrate = sc->sc_primary != NULL ? sc->sc_primary->lp_ifp->if_baudrate : 0; break; case LAGG_PROTO_ROUNDROBIN: case LAGG_PROTO_LOADBALANCE: case LAGG_PROTO_BROADCAST: speed = 0; LAGG_RLOCK(); CK_SLIST_FOREACH(lp, &sc->sc_ports, lp_entries) speed += lp->lp_ifp->if_baudrate; LAGG_RUNLOCK(); sc->sc_ifp->if_baudrate = speed; break; case LAGG_PROTO_LACP: /* LACP updates if_baudrate itself */ break; } } static void lagg_port_state(struct ifnet *ifp, int state) { struct lagg_port *lp = (struct lagg_port *)ifp->if_lagg; struct lagg_softc *sc = NULL; if (lp != NULL) sc = lp->lp_softc; if (sc == NULL) return; LAGG_XLOCK(sc); lagg_linkstate(sc); lagg_proto_linkstate(sc, lp); LAGG_XUNLOCK(sc); } struct lagg_port * lagg_link_active(struct lagg_softc *sc, struct lagg_port *lp) { struct lagg_port *lp_next, *rval = NULL; struct epoch_tracker net_et; /* * Search a port which reports an active link state. 
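 */

/*
 * Worked example for lagg_linkstate() above (illustrative): with three
 * 1 Gb/s members, if_baudrate is advertised as
 *
 *	failover:				1 Gb/s (primary only)
 *	roundrobin/loadbalance/broadcast:	3 Gb/s (sum of members)
 *	lacp:					managed by the LACP code
 *
 * and the lagg link is reported up as soon as any one member is up.
 */

/*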
*/ if (lp == NULL) goto search; if (LAGG_PORTACTIVE(lp)) { rval = lp; goto found; } if ((lp_next = CK_SLIST_NEXT(lp, lp_entries)) != NULL && LAGG_PORTACTIVE(lp_next)) { rval = lp_next; goto found; } search: epoch_enter_preempt(net_epoch_preempt, &net_et); CK_SLIST_FOREACH(lp_next, &sc->sc_ports, lp_entries) { if (LAGG_PORTACTIVE(lp_next)) { epoch_exit_preempt(net_epoch_preempt, &net_et); return (lp_next); } } epoch_exit_preempt(net_epoch_preempt, &net_et); found: return (rval); } int lagg_enqueue(struct ifnet *ifp, struct mbuf *m) { return (ifp->if_transmit)(ifp, m); } /* * Simple round robin aggregation */ static void lagg_rr_attach(struct lagg_softc *sc) { sc->sc_seq = 0; sc->sc_bkt_count = sc->sc_bkt; } static int lagg_rr_start(struct lagg_softc *sc, struct mbuf *m) { struct lagg_port *lp; uint32_t p; if (sc->sc_bkt_count == 0 && sc->sc_bkt > 0) sc->sc_bkt_count = sc->sc_bkt; if (sc->sc_bkt > 0) { atomic_subtract_int(&sc->sc_bkt_count, 1); if (atomic_cmpset_int(&sc->sc_bkt_count, 0, sc->sc_bkt)) p = atomic_fetchadd_32(&sc->sc_seq, 1); else p = sc->sc_seq; } else p = atomic_fetchadd_32(&sc->sc_seq, 1); p %= sc->sc_count; lp = CK_SLIST_FIRST(&sc->sc_ports); while (p--) lp = CK_SLIST_NEXT(lp, lp_entries); /* * Check the port's link state. This will return the next active * port if the link is down or the port is NULL. */ if ((lp = lagg_link_active(sc, lp)) == NULL) { m_freem(m); return (ENETDOWN); } /* Send mbuf */ return (lagg_enqueue(lp->lp_ifp, m)); } static struct mbuf * lagg_rr_input(struct lagg_softc *sc, struct lagg_port *lp, struct mbuf *m) { struct ifnet *ifp = sc->sc_ifp; /* Just pass in the packet to our lagg device */ m->m_pkthdr.rcvif = ifp; return (m); } /* * Broadcast mode */ static int lagg_bcast_start(struct lagg_softc *sc, struct mbuf *m) { int active_ports = 0; int errors = 0; int ret; struct lagg_port *lp, *last = NULL; struct mbuf *m0; LAGG_RLOCK(); CK_SLIST_FOREACH(lp, &sc->sc_ports, lp_entries) { if (!LAGG_PORTACTIVE(lp)) continue; active_ports++; if (last != NULL) { m0 = m_copym(m, 0, M_COPYALL, M_NOWAIT); if (m0 == NULL) { ret = ENOBUFS; errors++; break; } ret = lagg_enqueue(last->lp_ifp, m0); if (ret != 0) errors++; } last = lp; } LAGG_RUNLOCK(); if (last == NULL) { m_freem(m); return (ENOENT); } if ((last = lagg_link_active(sc, last)) == NULL) { m_freem(m); return (ENETDOWN); } ret = lagg_enqueue(last->lp_ifp, m); if (ret != 0) errors++; if (errors == 0) return (ret); return (0); } static struct mbuf* lagg_bcast_input(struct lagg_softc *sc, struct lagg_port *lp, struct mbuf *m) { struct ifnet *ifp = sc->sc_ifp; /* Just pass in the packet to our lagg device */ m->m_pkthdr.rcvif = ifp; return (m); } /* * Active failover */ static int lagg_fail_start(struct lagg_softc *sc, struct mbuf *m) { struct lagg_port *lp; /* Use the master port if active or the next available port */ if ((lp = lagg_link_active(sc, sc->sc_primary)) == NULL) { m_freem(m); return (ENETDOWN); } /* Send mbuf */ return (lagg_enqueue(lp->lp_ifp, m)); } static struct mbuf * lagg_fail_input(struct lagg_softc *sc, struct lagg_port *lp, struct mbuf *m) { struct ifnet *ifp = sc->sc_ifp; struct lagg_port *tmp_tp; if (lp == sc->sc_primary || V_lagg_failover_rx_all) { m->m_pkthdr.rcvif = ifp; return (m); } if (!LAGG_PORTACTIVE(sc->sc_primary)) { tmp_tp = lagg_link_active(sc, sc->sc_primary); /* * If tmp_tp is null, we've received a packet when all * our links are down. Weird, but process it anyways. 
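 */

/*
 * Illustrative note on lagg_rr_start() above (assumed configuration):
 * sc_bkt, set via the SIOCSLAGGOPTS ro_bkt field, selects how many
 * consecutive packets go to one port before the rotation advances.  With
 * two ports and sc_bkt = 2 successive transmits select roughly
 *
 *	port0, port0, port1, port1, port0, ...
 *
 * while with sc_bkt unset, sc_seq advances on every packet, giving plain
 * per-packet round-robin.
 */

/*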
*/ if ((tmp_tp == NULL || tmp_tp == lp)) { m->m_pkthdr.rcvif = ifp; return (m); } } m_freem(m); return (NULL); } /* * Loadbalancing */ static void lagg_lb_attach(struct lagg_softc *sc) { struct lagg_port *lp; struct lagg_lb *lb; LAGG_XLOCK_ASSERT(sc); lb = malloc(sizeof(struct lagg_lb), M_DEVBUF, M_WAITOK | M_ZERO); lb->lb_key = m_ether_tcpip_hash_init(); sc->sc_psc = lb; CK_SLIST_FOREACH(lp, &sc->sc_ports, lp_entries) lagg_lb_port_create(lp); } static void lagg_lb_detach(struct lagg_softc *sc) { struct lagg_lb *lb; lb = (struct lagg_lb *)sc->sc_psc; if (lb != NULL) free(lb, M_DEVBUF); } static int lagg_lb_porttable(struct lagg_softc *sc, struct lagg_port *lp) { struct lagg_lb *lb = (struct lagg_lb *)sc->sc_psc; struct lagg_port *lp_next; int i = 0, rv; rv = 0; bzero(&lb->lb_ports, sizeof(lb->lb_ports)); LAGG_RLOCK(); CK_SLIST_FOREACH(lp_next, &sc->sc_ports, lp_entries) { if (lp_next == lp) continue; if (i >= LAGG_MAX_PORTS) { rv = EINVAL; break; } if (sc->sc_ifflags & IFF_DEBUG) printf("%s: port %s at index %d\n", sc->sc_ifname, lp_next->lp_ifp->if_xname, i); lb->lb_ports[i++] = lp_next; } LAGG_RUNLOCK(); return (rv); } static int lagg_lb_port_create(struct lagg_port *lp) { struct lagg_softc *sc = lp->lp_softc; return (lagg_lb_porttable(sc, NULL)); } static void lagg_lb_port_destroy(struct lagg_port *lp) { struct lagg_softc *sc = lp->lp_softc; lagg_lb_porttable(sc, lp); } static int lagg_lb_start(struct lagg_softc *sc, struct mbuf *m) { struct lagg_lb *lb = (struct lagg_lb *)sc->sc_psc; struct lagg_port *lp = NULL; uint32_t p = 0; if ((sc->sc_opts & LAGG_OPT_USE_FLOWID) && M_HASHTYPE_GET(m) != M_HASHTYPE_NONE) p = m->m_pkthdr.flowid >> sc->flowid_shift; else p = m_ether_tcpip_hash(sc->sc_flags, m, lb->lb_key); p %= sc->sc_count; lp = lb->lb_ports[p]; /* * Check the port's link state. This will return the next active * port if the link is down or the port is NULL. 
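 */

/*
 * Worked example for the port selection above (illustrative; the default
 * flowid_shift of 16 and three ports are assumed): a packet with flowid
 * 0x00abcdef selects
 *
 *	p = (0x00abcdef >> 16) % 3 = 0xab % 3 = 0
 *
 * i.e. lb_ports[0]; packets without a flowid fall back to
 * m_ether_tcpip_hash() over the configured L2/L3/L4 headers, so a given
 * flow keeps mapping to the same port while the port set is stable.
 */

/*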
*/ if ((lp = lagg_link_active(sc, lp)) == NULL) { m_freem(m); return (ENETDOWN); } /* Send mbuf */ return (lagg_enqueue(lp->lp_ifp, m)); } static struct mbuf * lagg_lb_input(struct lagg_softc *sc, struct lagg_port *lp, struct mbuf *m) { struct ifnet *ifp = sc->sc_ifp; /* Just pass in the packet to our lagg device */ m->m_pkthdr.rcvif = ifp; return (m); } /* * 802.3ad LACP */ static void lagg_lacp_attach(struct lagg_softc *sc) { struct lagg_port *lp; lacp_attach(sc); LAGG_XLOCK_ASSERT(sc); CK_SLIST_FOREACH(lp, &sc->sc_ports, lp_entries) lacp_port_create(lp); } static void lagg_lacp_detach(struct lagg_softc *sc) { struct lagg_port *lp; void *psc; LAGG_XLOCK_ASSERT(sc); CK_SLIST_FOREACH(lp, &sc->sc_ports, lp_entries) lacp_port_destroy(lp); psc = sc->sc_psc; sc->sc_psc = NULL; lacp_detach(psc); } static void lagg_lacp_lladdr(struct lagg_softc *sc) { struct lagg_port *lp; LAGG_SXLOCK_ASSERT(sc); /* purge all the lacp ports */ CK_SLIST_FOREACH(lp, &sc->sc_ports, lp_entries) lacp_port_destroy(lp); /* add them back in */ CK_SLIST_FOREACH(lp, &sc->sc_ports, lp_entries) lacp_port_create(lp); } static int lagg_lacp_start(struct lagg_softc *sc, struct mbuf *m) { struct lagg_port *lp; lp = lacp_select_tx_port(sc, m); if (lp == NULL) { m_freem(m); return (ENETDOWN); } /* Send mbuf */ return (lagg_enqueue(lp->lp_ifp, m)); } static struct mbuf * lagg_lacp_input(struct lagg_softc *sc, struct lagg_port *lp, struct mbuf *m) { struct ifnet *ifp = sc->sc_ifp; struct ether_header *eh; u_short etype; eh = mtod(m, struct ether_header *); etype = ntohs(eh->ether_type); /* Tap off LACP control messages */ if ((m->m_flags & M_VLANTAG) == 0 && etype == ETHERTYPE_SLOW) { m = lacp_input(lp, m); if (m == NULL) return (NULL); } /* * If the port is not collecting or not in the active aggregator then * free and return. */ if (lacp_iscollecting(lp) == 0 || lacp_isactive(lp) == 0) { m_freem(m); return (NULL); } m->m_pkthdr.rcvif = ifp; return (m); } Index: projects/import-googletest-1.8.1/sys/net/if_vlan.c =================================================================== --- projects/import-googletest-1.8.1/sys/net/if_vlan.c (revision 344270) +++ projects/import-googletest-1.8.1/sys/net/if_vlan.c (revision 344271) @@ -1,1937 +1,1945 @@ /*- * Copyright 1998 Massachusetts Institute of Technology * Copyright 2012 ADARA Networks, Inc. * Copyright 2017 Dell EMC Isilon * * Portions of this software were developed by Robert N. M. Watson under * contract to ADARA Networks, Inc. * * Permission to use, copy, modify, and distribute this software and * its documentation for any purpose and without fee is hereby * granted, provided that both the above copyright notice and this * permission notice appear in all copies, that both the above * copyright notice and this permission notice appear in all * supporting documentation, and that the name of M.I.T. not be used * in advertising or publicity pertaining to distribution of the * software without specific, written prior permission. M.I.T. makes * no representations about the suitability of this software for any * purpose. It is provided "as is" without express or implied * warranty. * * THIS SOFTWARE IS PROVIDED BY M.I.T. ``AS IS''. M.I.T. DISCLAIMS * ALL EXPRESS OR IMPLIED WARRANTIES WITH REGARD TO THIS SOFTWARE, * INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. IN NO EVENT * SHALL M.I.T. 
BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF * USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, * OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT * OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ /* * if_vlan.c - pseudo-device driver for IEEE 802.1Q virtual LANs. * This is sort of sneaky in the implementation, since * we need to pretend to be enough of an Ethernet implementation * to make arp work. The way we do this is by telling everyone * that we are an Ethernet, and then catch the packets that * ether_output() sends to us via if_transmit(), rewrite them for * use by the real outgoing interface, and ask it to send them. */ #include __FBSDID("$FreeBSD$"); #include "opt_inet.h" #include "opt_vlan.h" #include "opt_ratelimit.h" #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #ifdef INET #include #include #endif #define VLAN_DEF_HWIDTH 4 #define VLAN_IFFLAGS (IFF_BROADCAST | IFF_MULTICAST) #define UP_AND_RUNNING(ifp) \ ((ifp)->if_flags & IFF_UP && (ifp)->if_drv_flags & IFF_DRV_RUNNING) CK_SLIST_HEAD(ifvlanhead, ifvlan); struct ifvlantrunk { struct ifnet *parent; /* parent interface of this trunk */ struct mtx lock; #ifdef VLAN_ARRAY #define VLAN_ARRAY_SIZE (EVL_VLID_MASK + 1) struct ifvlan *vlans[VLAN_ARRAY_SIZE]; /* static table */ #else struct ifvlanhead *hash; /* dynamic hash-list table */ uint16_t hmask; uint16_t hwidth; #endif int refcnt; }; /* * This macro provides a facility to iterate over every vlan on a trunk with * the assumption that none will be added/removed during iteration. */ #ifdef VLAN_ARRAY #define VLAN_FOREACH(_ifv, _trunk) \ size_t _i; \ for (_i = 0; _i < VLAN_ARRAY_SIZE; _i++) \ if (((_ifv) = (_trunk)->vlans[_i]) != NULL) #else /* VLAN_ARRAY */ #define VLAN_FOREACH(_ifv, _trunk) \ struct ifvlan *_next; \ size_t _i; \ for (_i = 0; _i < (1 << (_trunk)->hwidth); _i++) \ CK_SLIST_FOREACH_SAFE((_ifv), &(_trunk)->hash[_i], ifv_list, _next) #endif /* VLAN_ARRAY */ /* * This macro provides a facility to iterate over every vlan on a trunk while * also modifying the number of vlans on the trunk. The iteration continues * until some condition is met or there are no more vlans on the trunk. */ #ifdef VLAN_ARRAY /* The VLAN_ARRAY case is simple -- just a for loop using the condition. */ #define VLAN_FOREACH_UNTIL_SAFE(_ifv, _trunk, _cond) \ size_t _i; \ for (_i = 0; !(_cond) && _i < VLAN_ARRAY_SIZE; _i++) \ if (((_ifv) = (_trunk)->vlans[_i])) #else /* VLAN_ARRAY */ /* * The hash table case is more complicated. We allow for the hash table to be * modified (i.e. vlans removed) while we are iterating over it. To allow for * this we must restart the iteration every time we "touch" something during * the iteration, since removal will resize the hash table and invalidate our * current position. If acting on the touched element causes the trunk to be * emptied, then iteration also stops. */ #define VLAN_FOREACH_UNTIL_SAFE(_ifv, _trunk, _cond) \ size_t _i; \ bool _touch = false; \ for (_i = 0; \ !(_cond) && _i < (1 << (_trunk)->hwidth); \ _i = (_touch && ((_trunk) != NULL) ? 
0 : _i + 1), _touch = false) \ if (((_ifv) = CK_SLIST_FIRST(&(_trunk)->hash[_i])) != NULL && \ (_touch = true)) #endif /* VLAN_ARRAY */ struct vlan_mc_entry { struct sockaddr_dl mc_addr; CK_SLIST_ENTRY(vlan_mc_entry) mc_entries; struct epoch_context mc_epoch_ctx; }; struct ifvlan { struct ifvlantrunk *ifv_trunk; struct ifnet *ifv_ifp; #define TRUNK(ifv) ((ifv)->ifv_trunk) #define PARENT(ifv) ((ifv)->ifv_trunk->parent) void *ifv_cookie; int ifv_pflags; /* special flags we have set on parent */ int ifv_capenable; int ifv_encaplen; /* encapsulation length */ int ifv_mtufudge; /* MTU fudged by this much */ int ifv_mintu; /* min transmission unit */ uint16_t ifv_proto; /* encapsulation ethertype */ uint16_t ifv_tag; /* tag to apply on packets leaving if */ uint16_t ifv_vid; /* VLAN ID */ uint8_t ifv_pcp; /* Priority Code Point (PCP). */ struct task lladdr_task; CK_SLIST_HEAD(, vlan_mc_entry) vlan_mc_listhead; #ifndef VLAN_ARRAY CK_SLIST_ENTRY(ifvlan) ifv_list; #endif }; /* Special flags we should propagate to parent. */ static struct { int flag; int (*func)(struct ifnet *, int); } vlan_pflags[] = { {IFF_PROMISC, ifpromisc}, {IFF_ALLMULTI, if_allmulti}, {0, NULL} }; extern int vlan_mtag_pcp; static const char vlanname[] = "vlan"; static MALLOC_DEFINE(M_VLAN, vlanname, "802.1Q Virtual LAN Interface"); static eventhandler_tag ifdetach_tag; static eventhandler_tag iflladdr_tag; /* * if_vlan uses two module-level synchronization primitives to allow concurrent * modification of vlan interfaces and (mostly) allow for vlans to be destroyed * while they are being used for tx/rx. To accomplish this in a way that has * acceptable performance and cooperation with other parts of the network stack * there is a non-sleepable epoch(9) and an sx(9). * * The performance-sensitive paths that warrant using the epoch(9) are * vlan_transmit and vlan_input. Both have to check for the vlan interface's * existence using if_vlantrunk, and being in the network tx/rx paths the use * of an epoch(9) gives a measurable improvement in performance. * * The reason for having an sx(9) is mostly because there are still areas that * must be sleepable and also have safe concurrent access to a vlan interface. * Since the sx(9) exists, it is used by default in most paths unless sleeping * is not permitted, or if it is not clear whether sleeping is permitted. * */ #define _VLAN_SX_ID ifv_sx static struct sx _VLAN_SX_ID; #define VLAN_LOCKING_INIT() \ sx_init(&_VLAN_SX_ID, "vlan_sx") #define VLAN_LOCKING_DESTROY() \ sx_destroy(&_VLAN_SX_ID) #define VLAN_SLOCK() sx_slock(&_VLAN_SX_ID) #define VLAN_SUNLOCK() sx_sunlock(&_VLAN_SX_ID) #define VLAN_XLOCK() sx_xlock(&_VLAN_SX_ID) #define VLAN_XUNLOCK() sx_xunlock(&_VLAN_SX_ID) #define VLAN_SLOCK_ASSERT() sx_assert(&_VLAN_SX_ID, SA_SLOCKED) #define VLAN_XLOCK_ASSERT() sx_assert(&_VLAN_SX_ID, SA_XLOCKED) #define VLAN_SXLOCK_ASSERT() sx_assert(&_VLAN_SX_ID, SA_LOCKED) /* * We also have a per-trunk mutex that should be acquired when changing * its state. */ #define TRUNK_LOCK_INIT(trunk) mtx_init(&(trunk)->lock, vlanname, NULL, MTX_DEF) #define TRUNK_LOCK_DESTROY(trunk) mtx_destroy(&(trunk)->lock) #define TRUNK_WLOCK(trunk) mtx_lock(&(trunk)->lock) #define TRUNK_WUNLOCK(trunk) mtx_unlock(&(trunk)->lock) #define TRUNK_LOCK_ASSERT(trunk) MPASS(in_epoch(net_epoch_preempt) || mtx_owned(&(trunk)->lock)) #define TRUNK_WLOCK_ASSERT(trunk) mtx_assert(&(trunk)->lock, MA_OWNED); /* * The VLAN_ARRAY substitutes the dynamic hash with a static array * with 4096 entries.
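 */

/*
 * Illustrative sketch of the resulting locking pattern (not additional
 * code; it mirrors vlan_devat() below): data-path lookups run under the
 * network epoch,
 *
 *	NET_EPOCH_ENTER(et);
 *	trunk = ifp->if_vlantrunk;
 *	if (trunk != NULL)
 *		ifv = vlan_gethash(trunk, vid);
 *	NET_EPOCH_EXIT(et);
 *
 * while configuration paths that may sleep (attach, detach, multicast
 * programming) take VLAN_XLOCK()/VLAN_XUNLOCK() on the global sx.
 */

/*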
In theory this can give a boost in processing, * however in practice it does not. Probably this is because the array * is too big to fit into CPU cache. */ #ifndef VLAN_ARRAY static void vlan_inithash(struct ifvlantrunk *trunk); static void vlan_freehash(struct ifvlantrunk *trunk); static int vlan_inshash(struct ifvlantrunk *trunk, struct ifvlan *ifv); static int vlan_remhash(struct ifvlantrunk *trunk, struct ifvlan *ifv); static void vlan_growhash(struct ifvlantrunk *trunk, int howmuch); static __inline struct ifvlan * vlan_gethash(struct ifvlantrunk *trunk, uint16_t vid); #endif static void trunk_destroy(struct ifvlantrunk *trunk); static void vlan_init(void *foo); static void vlan_input(struct ifnet *ifp, struct mbuf *m); static int vlan_ioctl(struct ifnet *ifp, u_long cmd, caddr_t addr); #ifdef RATELIMIT static int vlan_snd_tag_alloc(struct ifnet *, union if_snd_tag_alloc_params *, struct m_snd_tag **); +static void vlan_snd_tag_free(struct m_snd_tag *); #endif static void vlan_qflush(struct ifnet *ifp); static int vlan_setflag(struct ifnet *ifp, int flag, int status, int (*func)(struct ifnet *, int)); static int vlan_setflags(struct ifnet *ifp, int status); static int vlan_setmulti(struct ifnet *ifp); static int vlan_transmit(struct ifnet *ifp, struct mbuf *m); static void vlan_unconfig(struct ifnet *ifp); static void vlan_unconfig_locked(struct ifnet *ifp, int departing); static int vlan_config(struct ifvlan *ifv, struct ifnet *p, uint16_t tag); static void vlan_link_state(struct ifnet *ifp); static void vlan_capabilities(struct ifvlan *ifv); static void vlan_trunk_capabilities(struct ifnet *ifp); static struct ifnet *vlan_clone_match_ethervid(const char *, int *); static int vlan_clone_match(struct if_clone *, const char *); static int vlan_clone_create(struct if_clone *, char *, size_t, caddr_t); static int vlan_clone_destroy(struct if_clone *, struct ifnet *); static void vlan_ifdetach(void *arg, struct ifnet *ifp); static void vlan_iflladdr(void *arg, struct ifnet *ifp); static void vlan_lladdr_fn(void *arg, int pending); static struct if_clone *vlan_cloner; #ifdef VIMAGE VNET_DEFINE_STATIC(struct if_clone *, vlan_cloner); #define V_vlan_cloner VNET(vlan_cloner) #endif static void vlan_mc_free(struct epoch_context *ctx) { struct vlan_mc_entry *mc = __containerof(ctx, struct vlan_mc_entry, mc_epoch_ctx); free(mc, M_VLAN); } #ifndef VLAN_ARRAY #define HASH(n, m) ((((n) >> 8) ^ ((n) >> 4) ^ (n)) & (m)) static void vlan_inithash(struct ifvlantrunk *trunk) { int i, n; /* * The trunk must not be locked here since we call malloc(M_WAITOK). * It is OK in case this function is called before the trunk struct * gets hooked up and becomes visible from other threads. 
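 */

/*
 * Worked example for the HASH() macro above (illustrative): with the
 * default hwidth of 4 the table has 16 buckets and hmask is 0xf, so for
 * vid 100 (0x064)
 *
 *	HASH(0x064, 0xf) = ((0x064 >> 8) ^ (0x064 >> 4) ^ 0x064) & 0xf
 *			 = (0x0 ^ 0x6 ^ 0x64) & 0xf = 0x62 & 0xf = 2
 *
 * placing the vlan in bucket 2.  Folding in the higher bits keeps the
 * distribution reasonable when VIDs are allocated in clusters.
 */

/*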
*/ KASSERT(trunk->hwidth == 0 && trunk->hash == NULL, ("%s: hash already initialized", __func__)); trunk->hwidth = VLAN_DEF_HWIDTH; n = 1 << trunk->hwidth; trunk->hmask = n - 1; trunk->hash = malloc(sizeof(struct ifvlanhead) * n, M_VLAN, M_WAITOK); for (i = 0; i < n; i++) CK_SLIST_INIT(&trunk->hash[i]); } static void vlan_freehash(struct ifvlantrunk *trunk) { #ifdef INVARIANTS int i; KASSERT(trunk->hwidth > 0, ("%s: hwidth not positive", __func__)); for (i = 0; i < (1 << trunk->hwidth); i++) KASSERT(CK_SLIST_EMPTY(&trunk->hash[i]), ("%s: hash table not empty", __func__)); #endif free(trunk->hash, M_VLAN); trunk->hash = NULL; trunk->hwidth = trunk->hmask = 0; } static int vlan_inshash(struct ifvlantrunk *trunk, struct ifvlan *ifv) { int i, b; struct ifvlan *ifv2; VLAN_XLOCK_ASSERT(); KASSERT(trunk->hwidth > 0, ("%s: hwidth not positive", __func__)); b = 1 << trunk->hwidth; i = HASH(ifv->ifv_vid, trunk->hmask); CK_SLIST_FOREACH(ifv2, &trunk->hash[i], ifv_list) if (ifv->ifv_vid == ifv2->ifv_vid) return (EEXIST); /* * Grow the hash when the number of vlans exceeds half of the number of * hash buckets squared. This will make the average linked-list length * buckets/2. */ if (trunk->refcnt > (b * b) / 2) { vlan_growhash(trunk, 1); i = HASH(ifv->ifv_vid, trunk->hmask); } CK_SLIST_INSERT_HEAD(&trunk->hash[i], ifv, ifv_list); trunk->refcnt++; return (0); } static int vlan_remhash(struct ifvlantrunk *trunk, struct ifvlan *ifv) { int i, b; struct ifvlan *ifv2; VLAN_XLOCK_ASSERT(); KASSERT(trunk->hwidth > 0, ("%s: hwidth not positive", __func__)); b = 1 << trunk->hwidth; i = HASH(ifv->ifv_vid, trunk->hmask); CK_SLIST_FOREACH(ifv2, &trunk->hash[i], ifv_list) if (ifv2 == ifv) { trunk->refcnt--; CK_SLIST_REMOVE(&trunk->hash[i], ifv2, ifvlan, ifv_list); if (trunk->refcnt < (b * b) / 2) vlan_growhash(trunk, -1); return (0); } panic("%s: vlan not found\n", __func__); return (ENOENT); /*NOTREACHED*/ } /* * Grow the hash larger or smaller if memory permits. */ static void vlan_growhash(struct ifvlantrunk *trunk, int howmuch) { struct ifvlan *ifv; struct ifvlanhead *hash2; int hwidth2, i, j, n, n2; VLAN_XLOCK_ASSERT(); KASSERT(trunk->hwidth > 0, ("%s: hwidth not positive", __func__)); if (howmuch == 0) { /* Harmless yet obvious coding error */ printf("%s: howmuch is 0\n", __func__); return; } hwidth2 = trunk->hwidth + howmuch; n = 1 << trunk->hwidth; n2 = 1 << hwidth2; /* Do not shrink the table below the default */ if (hwidth2 < VLAN_DEF_HWIDTH) return; hash2 = malloc(sizeof(struct ifvlanhead) * n2, M_VLAN, M_WAITOK); if (hash2 == NULL) { printf("%s: out of memory -- hash size not changed\n", __func__); return; /* We can live with the old hash table */ } for (j = 0; j < n2; j++) CK_SLIST_INIT(&hash2[j]); for (i = 0; i < n; i++) while ((ifv = CK_SLIST_FIRST(&trunk->hash[i])) != NULL) { CK_SLIST_REMOVE(&trunk->hash[i], ifv, ifvlan, ifv_list); j = HASH(ifv->ifv_vid, n2 - 1); CK_SLIST_INSERT_HEAD(&hash2[j], ifv, ifv_list); } NET_EPOCH_WAIT(); free(trunk->hash, M_VLAN); trunk->hash = hash2; trunk->hwidth = hwidth2; trunk->hmask = n2 - 1; if (bootverbose) if_printf(trunk->parent, "VLAN hash table resized from %d to %d buckets\n", n, n2); } static __inline struct ifvlan * vlan_gethash(struct ifvlantrunk *trunk, uint16_t vid) { struct ifvlan *ifv; NET_EPOCH_ASSERT(); CK_SLIST_FOREACH(ifv, &trunk->hash[HASH(vid, trunk->hmask)], ifv_list) if (ifv->ifv_vid == vid) return (ifv); return (NULL); } #if 0 /* Debugging code to view the hashtables. 
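 */

/*
 * Illustrative sizing note (derived from vlan_inshash()/vlan_remhash()
 * above): with the default hwidth of 4 (16 buckets) the table grows once
 * more than 16 * 16 / 2 = 128 vlans are hashed, doubling to 32 buckets;
 * removals shrink it below the analogous threshold, but never under
 * VLAN_DEF_HWIDTH.  At the threshold the average chain length is
 * buckets / 2.
 */

/*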
*/ static void vlan_dumphash(struct ifvlantrunk *trunk) { int i; struct ifvlan *ifv; for (i = 0; i < (1 << trunk->hwidth); i++) { printf("%d: ", i); CK_SLIST_FOREACH(ifv, &trunk->hash[i], ifv_list) printf("%s ", ifv->ifv_ifp->if_xname); printf("\n"); } } #endif /* 0 */ #else static __inline struct ifvlan * vlan_gethash(struct ifvlantrunk *trunk, uint16_t vid) { return trunk->vlans[vid]; } static __inline int vlan_inshash(struct ifvlantrunk *trunk, struct ifvlan *ifv) { if (trunk->vlans[ifv->ifv_vid] != NULL) return EEXIST; trunk->vlans[ifv->ifv_vid] = ifv; trunk->refcnt++; return (0); } static __inline int vlan_remhash(struct ifvlantrunk *trunk, struct ifvlan *ifv) { trunk->vlans[ifv->ifv_vid] = NULL; trunk->refcnt--; return (0); } static __inline void vlan_freehash(struct ifvlantrunk *trunk) { } static __inline void vlan_inithash(struct ifvlantrunk *trunk) { } #endif /* !VLAN_ARRAY */ static void trunk_destroy(struct ifvlantrunk *trunk) { VLAN_XLOCK_ASSERT(); vlan_freehash(trunk); trunk->parent->if_vlantrunk = NULL; TRUNK_LOCK_DESTROY(trunk); if_rele(trunk->parent); free(trunk, M_VLAN); } /* * Program our multicast filter. What we're actually doing is * programming the multicast filter of the parent. This has the * side effect of causing the parent interface to receive multicast * traffic that it doesn't really want, which ends up being discarded * later by the upper protocol layers. Unfortunately, there's no way * to avoid this: there really is only one physical interface. */ static int vlan_setmulti(struct ifnet *ifp) { struct ifnet *ifp_p; struct ifmultiaddr *ifma; struct ifvlan *sc; struct vlan_mc_entry *mc; int error; VLAN_XLOCK_ASSERT(); /* Find the parent. */ sc = ifp->if_softc; ifp_p = PARENT(sc); CURVNET_SET_QUIET(ifp_p->if_vnet); /* First, remove any existing filter entries. */ while ((mc = CK_SLIST_FIRST(&sc->vlan_mc_listhead)) != NULL) { CK_SLIST_REMOVE_HEAD(&sc->vlan_mc_listhead, mc_entries); (void)if_delmulti(ifp_p, (struct sockaddr *)&mc->mc_addr); epoch_call(net_epoch_preempt, &mc->mc_epoch_ctx, vlan_mc_free); } /* Now program new ones. */ IF_ADDR_WLOCK(ifp); CK_STAILQ_FOREACH(ifma, &ifp->if_multiaddrs, ifma_link) { if (ifma->ifma_addr->sa_family != AF_LINK) continue; mc = malloc(sizeof(struct vlan_mc_entry), M_VLAN, M_NOWAIT); if (mc == NULL) { IF_ADDR_WUNLOCK(ifp); return (ENOMEM); } bcopy(ifma->ifma_addr, &mc->mc_addr, ifma->ifma_addr->sa_len); mc->mc_addr.sdl_index = ifp_p->if_index; CK_SLIST_INSERT_HEAD(&sc->vlan_mc_listhead, mc, mc_entries); } IF_ADDR_WUNLOCK(ifp); CK_SLIST_FOREACH (mc, &sc->vlan_mc_listhead, mc_entries) { error = if_addmulti(ifp_p, (struct sockaddr *)&mc->mc_addr, NULL); if (error) return (error); } CURVNET_RESTORE(); return (0); } /* * A handler for parent interface link layer address changes. * If the parent interface link layer address is changed we * should also change it on all children vlans. */ static void vlan_iflladdr(void *arg __unused, struct ifnet *ifp) { struct epoch_tracker et; struct ifvlan *ifv; struct ifnet *ifv_ifp; struct ifvlantrunk *trunk; struct sockaddr_dl *sdl; /* Need the epoch since this is run on taskqueue_swi. */ NET_EPOCH_ENTER(et); trunk = ifp->if_vlantrunk; if (trunk == NULL) { NET_EPOCH_EXIT(et); return; } /* * OK, it's a trunk. Loop over and change all vlan's lladdrs on it. * We need an exclusive lock here to prevent concurrent SIOCSIFLLADDR * ioctl calls on the parent garbling the lladdr of the child vlan. 
*/ TRUNK_WLOCK(trunk); VLAN_FOREACH(ifv, trunk) { /* * Copy the new lladdr into the ifv_ifp, enqueue a task * to actually call if_setlladdr. if_setlladdr needs to * be deferred to a taskqueue because it will call into * the if_vlan ioctl path and try to acquire the global * lock. */ ifv_ifp = ifv->ifv_ifp; bcopy(IF_LLADDR(ifp), IF_LLADDR(ifv_ifp), ifp->if_addrlen); sdl = (struct sockaddr_dl *)ifv_ifp->if_addr->ifa_addr; sdl->sdl_alen = ifp->if_addrlen; taskqueue_enqueue(taskqueue_thread, &ifv->lladdr_task); } TRUNK_WUNLOCK(trunk); NET_EPOCH_EXIT(et); } /* * A handler for network interface departure events. * Track departure of trunks here so that we don't access invalid * pointers or whatever if a trunk is ripped from under us, e.g., * by ejecting its hot-plug card. However, if an ifnet is simply * being renamed, then there's no need to tear down the state. */ static void vlan_ifdetach(void *arg __unused, struct ifnet *ifp) { struct ifvlan *ifv; struct ifvlantrunk *trunk; /* If the ifnet is just being renamed, don't do anything. */ if (ifp->if_flags & IFF_RENAMING) return; VLAN_XLOCK(); trunk = ifp->if_vlantrunk; if (trunk == NULL) { VLAN_XUNLOCK(); return; } /* * OK, it's a trunk. Loop over and detach all vlans on it. * Check trunk pointer after each vlan_unconfig() as it will * free it and set to NULL after the last vlan was detached. */ VLAN_FOREACH_UNTIL_SAFE(ifv, ifp->if_vlantrunk, ifp->if_vlantrunk == NULL) vlan_unconfig_locked(ifv->ifv_ifp, 1); /* Trunk should have been destroyed in vlan_unconfig(). */ KASSERT(ifp->if_vlantrunk == NULL, ("%s: purge failed", __func__)); VLAN_XUNLOCK(); } /* * Return the trunk device for a virtual interface. */ static struct ifnet * vlan_trunkdev(struct ifnet *ifp) { struct epoch_tracker et; struct ifvlan *ifv; if (ifp->if_type != IFT_L2VLAN) return (NULL); NET_EPOCH_ENTER(et); ifv = ifp->if_softc; ifp = NULL; if (ifv->ifv_trunk) ifp = PARENT(ifv); NET_EPOCH_EXIT(et); return (ifp); } /* * Return the 12-bit VLAN VID for this interface, for use by external * components such as Infiniband. * * XXXRW: Note that the function name here is historical; it should be named * vlan_vid(). */ static int vlan_tag(struct ifnet *ifp, uint16_t *vidp) { struct ifvlan *ifv; if (ifp->if_type != IFT_L2VLAN) return (EINVAL); ifv = ifp->if_softc; *vidp = ifv->ifv_vid; return (0); } static int vlan_pcp(struct ifnet *ifp, uint16_t *pcpp) { struct ifvlan *ifv; if (ifp->if_type != IFT_L2VLAN) return (EINVAL); ifv = ifp->if_softc; *pcpp = ifv->ifv_pcp; return (0); } /* * Return a driver specific cookie for this interface. Synchronization * with setcookie must be provided by the driver. */ static void * vlan_cookie(struct ifnet *ifp) { struct ifvlan *ifv; if (ifp->if_type != IFT_L2VLAN) return (NULL); ifv = ifp->if_softc; return (ifv->ifv_cookie); } /* * Store a cookie in our softc that drivers can use to store driver * private per-instance data in. */ static int vlan_setcookie(struct ifnet *ifp, void *cookie) { struct ifvlan *ifv; if (ifp->if_type != IFT_L2VLAN) return (EINVAL); ifv = ifp->if_softc; ifv->ifv_cookie = cookie; return (0); } /* * Return the vlan device present at the specific VID.
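 */

/*
 * Illustrative consumer-side usage (a sketch; assumes the VLAN_DEVAT()
 * and VLAN_TAG() wrappers from net/if_vlan_var.h):
 *
 *	struct ifnet *vifp = VLAN_DEVAT(parent, 100);
 *	uint16_t vid;
 *
 *	if (vifp != NULL && VLAN_TAG(vifp, &vid) == 0)
 *		printf("found %s, vid %u\n", vifp->if_xname, vid);
 *
 * Both wrappers resolve to the vlan_devat_p/vlan_tag_p hooks installed at
 * module load and return values cached in the ifvlan softc.
 */

/*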
*/ static struct ifnet * vlan_devat(struct ifnet *ifp, uint16_t vid) { struct epoch_tracker et; struct ifvlantrunk *trunk; struct ifvlan *ifv; NET_EPOCH_ENTER(et); trunk = ifp->if_vlantrunk; if (trunk == NULL) { NET_EPOCH_EXIT(et); return (NULL); } ifp = NULL; ifv = vlan_gethash(trunk, vid); if (ifv) ifp = ifv->ifv_ifp; NET_EPOCH_EXIT(et); return (ifp); } /* * Recalculate the cached VLAN tag exposed via the MIB. */ static void vlan_tag_recalculate(struct ifvlan *ifv) { ifv->ifv_tag = EVL_MAKETAG(ifv->ifv_vid, ifv->ifv_pcp, 0); } /* * VLAN support can be loaded as a module. The only place in the * system that's intimately aware of this is ether_input. We hook * into this code through vlan_input_p which is defined there and * set here. No one else in the system should be aware of this so * we use an explicit reference here. */ extern void (*vlan_input_p)(struct ifnet *, struct mbuf *); /* For if_link_state_change() eyes only... */ extern void (*vlan_link_state_p)(struct ifnet *); static int vlan_modevent(module_t mod, int type, void *data) { switch (type) { case MOD_LOAD: ifdetach_tag = EVENTHANDLER_REGISTER(ifnet_departure_event, vlan_ifdetach, NULL, EVENTHANDLER_PRI_ANY); if (ifdetach_tag == NULL) return (ENOMEM); iflladdr_tag = EVENTHANDLER_REGISTER(iflladdr_event, vlan_iflladdr, NULL, EVENTHANDLER_PRI_ANY); if (iflladdr_tag == NULL) return (ENOMEM); VLAN_LOCKING_INIT(); vlan_input_p = vlan_input; vlan_link_state_p = vlan_link_state; vlan_trunk_cap_p = vlan_trunk_capabilities; vlan_trunkdev_p = vlan_trunkdev; vlan_cookie_p = vlan_cookie; vlan_setcookie_p = vlan_setcookie; vlan_tag_p = vlan_tag; vlan_pcp_p = vlan_pcp; vlan_devat_p = vlan_devat; #ifndef VIMAGE vlan_cloner = if_clone_advanced(vlanname, 0, vlan_clone_match, vlan_clone_create, vlan_clone_destroy); #endif if (bootverbose) printf("vlan: initialized, using " #ifdef VLAN_ARRAY "full-size arrays" #else "hash tables with chaining" #endif "\n"); break; case MOD_UNLOAD: #ifndef VIMAGE if_clone_detach(vlan_cloner); #endif EVENTHANDLER_DEREGISTER(ifnet_departure_event, ifdetach_tag); EVENTHANDLER_DEREGISTER(iflladdr_event, iflladdr_tag); vlan_input_p = NULL; vlan_link_state_p = NULL; vlan_trunk_cap_p = NULL; vlan_trunkdev_p = NULL; vlan_tag_p = NULL; vlan_cookie_p = NULL; vlan_setcookie_p = NULL; vlan_devat_p = NULL; VLAN_LOCKING_DESTROY(); if (bootverbose) printf("vlan: unloaded\n"); break; default: return (EOPNOTSUPP); } return (0); } static moduledata_t vlan_mod = { "if_vlan", vlan_modevent, 0 }; DECLARE_MODULE(if_vlan, vlan_mod, SI_SUB_PSEUDO, SI_ORDER_ANY); MODULE_VERSION(if_vlan, 3); #ifdef VIMAGE static void vnet_vlan_init(const void *unused __unused) { vlan_cloner = if_clone_advanced(vlanname, 0, vlan_clone_match, vlan_clone_create, vlan_clone_destroy); V_vlan_cloner = vlan_cloner; } VNET_SYSINIT(vnet_vlan_init, SI_SUB_PROTO_IFATTACHDOMAIN, SI_ORDER_ANY, vnet_vlan_init, NULL); static void vnet_vlan_uninit(const void *unused __unused) { if_clone_detach(V_vlan_cloner); } VNET_SYSUNINIT(vnet_vlan_uninit, SI_SUB_INIT_IF, SI_ORDER_FIRST, vnet_vlan_uninit, NULL); #endif /* * Check for <etherif>.<vlan> style interface names. */ static struct ifnet * vlan_clone_match_ethervid(const char *name, int *vidp) { char ifname[IFNAMSIZ]; char *cp; struct ifnet *ifp; int vid; strlcpy(ifname, name, IFNAMSIZ); if ((cp = strchr(ifname, '.')) == NULL) return (NULL); *cp = '\0'; if ((ifp = ifunit_ref(ifname)) == NULL) return (NULL); /* Parse VID.
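 */

/*
 * Illustrative examples for the parser here (assumed names):
 *
 *	em0.100		parent em0, vid 100
 *	em0.		rejected (empty VID)
 *	em0.1x0		rejected (trailing non-digit)
 *
 * Plain vlanN names such as vlan5 carry no parent or VID; they are
 * matched by vlan_clone_match() below and configured separately.
 */

/*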
*/ if (*++cp == '\0') { if_rele(ifp); return (NULL); } vid = 0; for(; *cp >= '0' && *cp <= '9'; cp++) vid = (vid * 10) + (*cp - '0'); if (*cp != '\0') { if_rele(ifp); return (NULL); } if (vidp != NULL) *vidp = vid; return (ifp); } static int vlan_clone_match(struct if_clone *ifc, const char *name) { const char *cp; if (vlan_clone_match_ethervid(name, NULL) != NULL) return (1); if (strncmp(vlanname, name, strlen(vlanname)) != 0) return (0); for (cp = name + 4; *cp != '\0'; cp++) { if (*cp < '0' || *cp > '9') return (0); } return (1); } static int vlan_clone_create(struct if_clone *ifc, char *name, size_t len, caddr_t params) { char *dp; int wildcard; int unit; int error; int vid; struct ifvlan *ifv; struct ifnet *ifp; struct ifnet *p; struct ifaddr *ifa; struct sockaddr_dl *sdl; struct vlanreq vlr; static const u_char eaddr[ETHER_ADDR_LEN]; /* 00:00:00:00:00:00 */ /* * There are 3 (ugh) ways to specify the cloned device: * o pass a parameter block with the clone request. * o specify parameters in the text of the clone device name * o specify no parameters and get an unattached device that * must be configured separately. * The first technique is preferred; the latter two are * supported for backwards compatibility. * * XXXRW: Note historic use of the word "tag" here. New ioctls may be * called for. */ if (params) { error = copyin(params, &vlr, sizeof(vlr)); if (error) return error; p = ifunit_ref(vlr.vlr_parent); if (p == NULL) return (ENXIO); error = ifc_name2unit(name, &unit); if (error != 0) { if_rele(p); return (error); } vid = vlr.vlr_tag; wildcard = (unit < 0); } else if ((p = vlan_clone_match_ethervid(name, &vid)) != NULL) { unit = -1; wildcard = 0; } else { p = NULL; error = ifc_name2unit(name, &unit); if (error != 0) return (error); wildcard = (unit < 0); } error = ifc_alloc_unit(ifc, &unit); if (error != 0) { if (p != NULL) if_rele(p); return (error); } /* In the wildcard case, we need to update the name. */ if (wildcard) { for (dp = name; *dp != '\0'; dp++); if (snprintf(dp, len - (dp-name), "%d", unit) > len - (dp-name) - 1) { panic("%s: interface name too long", __func__); } } ifv = malloc(sizeof(struct ifvlan), M_VLAN, M_WAITOK | M_ZERO); ifp = ifv->ifv_ifp = if_alloc(IFT_ETHER); if (ifp == NULL) { ifc_free_unit(ifc, unit); free(ifv, M_VLAN); if (p != NULL) if_rele(p); return (ENOSPC); } CK_SLIST_INIT(&ifv->vlan_mc_listhead); ifp->if_softc = ifv; /* * Set the name manually rather than using if_initname because * we don't conform to the default naming convention for interfaces. */ strlcpy(ifp->if_xname, name, IFNAMSIZ); ifp->if_dname = vlanname; ifp->if_dunit = unit; ifp->if_init = vlan_init; ifp->if_transmit = vlan_transmit; ifp->if_qflush = vlan_qflush; ifp->if_ioctl = vlan_ioctl; #ifdef RATELIMIT ifp->if_snd_tag_alloc = vlan_snd_tag_alloc; + ifp->if_snd_tag_free = vlan_snd_tag_free; #endif ifp->if_flags = VLAN_IFFLAGS; ether_ifattach(ifp, eaddr); /* Now undo some of the damage... */ ifp->if_baudrate = 0; ifp->if_type = IFT_L2VLAN; ifp->if_hdrlen = ETHER_VLAN_ENCAP_LEN; ifa = ifp->if_addr; sdl = (struct sockaddr_dl *)ifa->ifa_addr; sdl->sdl_type = IFT_L2VLAN; if (p != NULL) { error = vlan_config(ifv, p, vid); if_rele(p); if (error != 0) { /* * Since we've partially failed, we need to back * out all the way, otherwise userland could get * confused. Thus, we destroy the interface. 
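 */

/*
 * Illustrative examples of the three creation styles described above
 * (assuming em0 as the parent):
 *
 *	ifconfig vlan0 create vlan 100 vlandev em0	parameter block
 *	ifconfig em0.100 create				name encodes both
 *	ifconfig vlan0 create				configure later
 *
 * The first two attach to the parent immediately; the last yields an
 * unattached vlan(4) interface to be configured with a later ioctl.
 */

/*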
*/ ether_ifdetach(ifp); vlan_unconfig(ifp); if_free(ifp); ifc_free_unit(ifc, unit); free(ifv, M_VLAN); return (error); } } return (0); } static int vlan_clone_destroy(struct if_clone *ifc, struct ifnet *ifp) { struct ifvlan *ifv = ifp->if_softc; int unit = ifp->if_dunit; ether_ifdetach(ifp); /* first, remove it from system-wide lists */ vlan_unconfig(ifp); /* now it can be unconfigured and freed */ /* * We should have the only reference to the ifv now, so we can now * drain any remaining lladdr task before freeing the ifnet and the * ifvlan. */ taskqueue_drain(taskqueue_thread, &ifv->lladdr_task); NET_EPOCH_WAIT(); if_free(ifp); free(ifv, M_VLAN); ifc_free_unit(ifc, unit); return (0); } /* * The ifp->if_init entry point for vlan(4) is a no-op. */ static void vlan_init(void *foo __unused) { } /* * The if_transmit method for vlan(4) interface. */ static int vlan_transmit(struct ifnet *ifp, struct mbuf *m) { struct epoch_tracker et; struct ifvlan *ifv; struct ifnet *p; int error, len, mcast; NET_EPOCH_ENTER(et); ifv = ifp->if_softc; if (TRUNK(ifv) == NULL) { if_inc_counter(ifp, IFCOUNTER_OERRORS, 1); NET_EPOCH_EXIT(et); m_freem(m); return (ENETDOWN); } p = PARENT(ifv); len = m->m_pkthdr.len; mcast = (m->m_flags & (M_MCAST | M_BCAST)) ? 1 : 0; BPF_MTAP(ifp, m); /* * Do not run parent's if_transmit() if the parent is not up, * or parent's driver will cause a system crash. */ if (!UP_AND_RUNNING(p)) { if_inc_counter(ifp, IFCOUNTER_OERRORS, 1); NET_EPOCH_EXIT(et); m_freem(m); return (ENETDOWN); } if (!ether_8021q_frame(&m, ifp, p, ifv->ifv_vid, ifv->ifv_pcp)) { if_inc_counter(ifp, IFCOUNTER_OERRORS, 1); NET_EPOCH_EXIT(et); return (0); } /* * Send it, precisely as ether_output() would have. */ error = (p->if_transmit)(p, m); if (error == 0) { if_inc_counter(ifp, IFCOUNTER_OPACKETS, 1); if_inc_counter(ifp, IFCOUNTER_OBYTES, len); if_inc_counter(ifp, IFCOUNTER_OMCASTS, mcast); } else if_inc_counter(ifp, IFCOUNTER_OERRORS, 1); NET_EPOCH_EXIT(et); return (error); } /* * The ifp->if_qflush entry point for vlan(4) is a no-op. */ static void vlan_qflush(struct ifnet *ifp __unused) { } static void vlan_input(struct ifnet *ifp, struct mbuf *m) { struct epoch_tracker et; struct ifvlantrunk *trunk; struct ifvlan *ifv; struct m_tag *mtag; uint16_t vid, tag; NET_EPOCH_ENTER(et); trunk = ifp->if_vlantrunk; if (trunk == NULL) { NET_EPOCH_EXIT(et); m_freem(m); return; } if (m->m_flags & M_VLANTAG) { /* * Packet is tagged, but m contains a normal * Ethernet frame; the tag is stored out-of-band. */ tag = m->m_pkthdr.ether_vtag; m->m_flags &= ~M_VLANTAG; } else { struct ether_vlan_header *evl; /* * Packet is tagged in-band as specified by 802.1q. */ switch (ifp->if_type) { case IFT_ETHER: if (m->m_len < sizeof(*evl) && (m = m_pullup(m, sizeof(*evl))) == NULL) { if_printf(ifp, "cannot pullup VLAN header\n"); NET_EPOCH_EXIT(et); return; } evl = mtod(m, struct ether_vlan_header *); tag = ntohs(evl->evl_tag); /* * Remove the 802.1q header by copying the Ethernet * addresses over it and adjusting the beginning of * the data in the mbuf. The encapsulated Ethernet * type field is already in place. 
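/*
 * The bcopy()/m_adj() pair that follows slides the 12 address bytes
 * (ETHER_HDR_LEN - ETHER_TYPE_LEN) forward over the 4-byte shim
 * (ETHER_VLAN_ENCAP_LEN) rather than moving the much larger payload
 * back. The same trick on a flat buffer, purely for illustration:
 */
#include <stdint.h>
#include <string.h>

static uint8_t *
strip_vlan_shim(uint8_t *frame)
{
	/* dst (6) + src (6) slide over TPID+TCI (4) */
	memmove(frame + 4, frame, 12);
	return (frame + 4);	/* frame is now 4 bytes shorter */
}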
*/ bcopy((char *)evl, (char *)evl + ETHER_VLAN_ENCAP_LEN, ETHER_HDR_LEN - ETHER_TYPE_LEN); m_adj(m, ETHER_VLAN_ENCAP_LEN); break; default: #ifdef INVARIANTS panic("%s: %s has unsupported if_type %u", __func__, ifp->if_xname, ifp->if_type); #endif if_inc_counter(ifp, IFCOUNTER_NOPROTO, 1); NET_EPOCH_EXIT(et); m_freem(m); return; } } vid = EVL_VLANOFTAG(tag); ifv = vlan_gethash(trunk, vid); if (ifv == NULL || !UP_AND_RUNNING(ifv->ifv_ifp)) { NET_EPOCH_EXIT(et); if_inc_counter(ifp, IFCOUNTER_NOPROTO, 1); m_freem(m); return; } if (vlan_mtag_pcp) { /* * While uncommon, it is possible that we will find a 802.1q * packet encapsulated inside another packet that also had an * 802.1q header. For example, ethernet tunneled over IPSEC * arriving over ethernet. In that case, we replace the * existing 802.1q PCP m_tag value. */ mtag = m_tag_locate(m, MTAG_8021Q, MTAG_8021Q_PCP_IN, NULL); if (mtag == NULL) { mtag = m_tag_alloc(MTAG_8021Q, MTAG_8021Q_PCP_IN, sizeof(uint8_t), M_NOWAIT); if (mtag == NULL) { if_inc_counter(ifp, IFCOUNTER_IERRORS, 1); NET_EPOCH_EXIT(et); m_freem(m); return; } m_tag_prepend(m, mtag); } *(uint8_t *)(mtag + 1) = EVL_PRIOFTAG(tag); } m->m_pkthdr.rcvif = ifv->ifv_ifp; if_inc_counter(ifv->ifv_ifp, IFCOUNTER_IPACKETS, 1); NET_EPOCH_EXIT(et); /* Pass it back through the parent's input routine. */ (*ifv->ifv_ifp->if_input)(ifv->ifv_ifp, m); } static void vlan_lladdr_fn(void *arg, int pending __unused) { struct ifvlan *ifv; struct ifnet *ifp; ifv = (struct ifvlan *)arg; ifp = ifv->ifv_ifp; CURVNET_SET(ifp->if_vnet); /* The ifv_ifp already has the lladdr copied in. */ if_setlladdr(ifp, IF_LLADDR(ifp), ifp->if_addrlen); CURVNET_RESTORE(); } static int vlan_config(struct ifvlan *ifv, struct ifnet *p, uint16_t vid) { struct epoch_tracker et; struct ifvlantrunk *trunk; struct ifnet *ifp; int error = 0; /* * We can handle non-ethernet hardware types as long as * they handle the tagging and headers themselves. */ if (p->if_type != IFT_ETHER && (p->if_capenable & IFCAP_VLAN_HWTAGGING) == 0) return (EPROTONOSUPPORT); if ((p->if_flags & VLAN_IFFLAGS) != VLAN_IFFLAGS) return (EPROTONOSUPPORT); /* * Don't let the caller set up a VLAN VID with * anything except VLID bits. * VID numbers 0x0 and 0xFFF are reserved. */ if (vid == 0 || vid == 0xFFF || (vid & ~EVL_VLID_MASK)) return (EINVAL); if (ifv->ifv_trunk) return (EBUSY); VLAN_XLOCK(); if (p->if_vlantrunk == NULL) { trunk = malloc(sizeof(struct ifvlantrunk), M_VLAN, M_WAITOK | M_ZERO); vlan_inithash(trunk); TRUNK_LOCK_INIT(trunk); TRUNK_WLOCK(trunk); p->if_vlantrunk = trunk; trunk->parent = p; if_ref(trunk->parent); TRUNK_WUNLOCK(trunk); } else { trunk = p->if_vlantrunk; } ifv->ifv_vid = vid; /* must set this before vlan_inshash() */ ifv->ifv_pcp = 0; /* Default: best effort delivery. */ vlan_tag_recalculate(ifv); error = vlan_inshash(trunk, ifv); if (error) goto done; ifv->ifv_proto = ETHERTYPE_VLAN; ifv->ifv_encaplen = ETHER_VLAN_ENCAP_LEN; ifv->ifv_mintu = ETHERMIN; ifv->ifv_pflags = 0; ifv->ifv_capenable = -1; /* * If the parent supports the VLAN_MTU capability, * i.e. can Tx/Rx larger than ETHER_MAX_LEN frames, * use it. */ if (p->if_capenable & IFCAP_VLAN_MTU) { /* * No need to fudge the MTU since the parent can * handle extended frames. */ ifv->ifv_mtufudge = 0; } else { /* * Fudge the MTU by the encapsulation size. This * makes us incompatible with strictly compliant * 802.1Q implementations, but allows us to use * the feature with other NetBSD implementations, * which might still be useful. 
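/*
 * On transmit, ether_8021q_frame() arranges the inverse: it either sets
 * M_VLANTAG and ether_vtag for hardware tag insertion, or encapsulates
 * in software. A rough flat-buffer illustration of the software path,
 * ignoring mbuf mechanics; insert_vlan_shim() is hypothetical and
 * assumes the buffer has 4 spare bytes beyond len.
 */
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>	/* htons */

static size_t
insert_vlan_shim(uint8_t *frame, size_t len, uint16_t tci)
{
	uint16_t tpid = htons(0x8100);	/* ETHERTYPE_VLAN */

	tci = htons(tci);
	/* open a 4-byte gap between the MAC addresses and the type */
	memmove(frame + 12 + 4, frame + 12, len - 12);
	memcpy(frame + 12, &tpid, sizeof(tpid));
	memcpy(frame + 14, &tci, sizeof(tci));
	return (len + 4);
}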
*/ ifv->ifv_mtufudge = ifv->ifv_encaplen; } ifv->ifv_trunk = trunk; ifp = ifv->ifv_ifp; /* * Initialize fields from our parent. This duplicates some * work with ether_ifattach() but allows for non-ethernet * interfaces to also work. */ ifp->if_mtu = p->if_mtu - ifv->ifv_mtufudge; ifp->if_baudrate = p->if_baudrate; ifp->if_output = p->if_output; ifp->if_input = p->if_input; ifp->if_resolvemulti = p->if_resolvemulti; ifp->if_addrlen = p->if_addrlen; ifp->if_broadcastaddr = p->if_broadcastaddr; ifp->if_pcp = ifv->ifv_pcp; /* * Copy only a selected subset of flags from the parent. * Other flags are none of our business. */ #define VLAN_COPY_FLAGS (IFF_SIMPLEX) ifp->if_flags &= ~VLAN_COPY_FLAGS; ifp->if_flags |= p->if_flags & VLAN_COPY_FLAGS; #undef VLAN_COPY_FLAGS ifp->if_link_state = p->if_link_state; NET_EPOCH_ENTER(et); vlan_capabilities(ifv); NET_EPOCH_EXIT(et); /* * Set up our interface address to reflect the underlying * physical interface's. */ bcopy(IF_LLADDR(p), IF_LLADDR(ifp), p->if_addrlen); ((struct sockaddr_dl *)ifp->if_addr->ifa_addr)->sdl_alen = p->if_addrlen; TASK_INIT(&ifv->lladdr_task, 0, vlan_lladdr_fn, ifv); /* We are ready for operation now. */ ifp->if_drv_flags |= IFF_DRV_RUNNING; /* Update flags on the parent, if necessary. */ vlan_setflags(ifp, 1); /* * Configure multicast addresses that may already be * joined on the vlan device. */ (void)vlan_setmulti(ifp); done: if (error == 0) EVENTHANDLER_INVOKE(vlan_config, p, ifv->ifv_vid); VLAN_XUNLOCK(); return (error); } static void vlan_unconfig(struct ifnet *ifp) { VLAN_XLOCK(); vlan_unconfig_locked(ifp, 0); VLAN_XUNLOCK(); } static void vlan_unconfig_locked(struct ifnet *ifp, int departing) { struct ifvlantrunk *trunk; struct vlan_mc_entry *mc; struct ifvlan *ifv; struct ifnet *parent; int error; VLAN_XLOCK_ASSERT(); ifv = ifp->if_softc; trunk = ifv->ifv_trunk; parent = NULL; if (trunk != NULL) { parent = trunk->parent; /* * Since the interface is being unconfigured, we need to * empty the list of multicast groups that we may have joined * while we were alive from the parent's list. */ while ((mc = CK_SLIST_FIRST(&ifv->vlan_mc_listhead)) != NULL) { /* * If the parent interface is being detached, * all its multicast addresses have already * been removed. Warn about errors if * if_delmulti() does fail, but don't abort as * all callers expect vlan destruction to * succeed. */ if (!departing) { error = if_delmulti(parent, (struct sockaddr *)&mc->mc_addr); if (error) if_printf(ifp, "Failed to delete multicast address from parent: %d\n", error); } CK_SLIST_REMOVE_HEAD(&ifv->vlan_mc_listhead, mc_entries); epoch_call(net_epoch_preempt, &mc->mc_epoch_ctx, vlan_mc_free); } vlan_setflags(ifp, 0); /* clear special flags on parent */ vlan_remhash(trunk, ifv); ifv->ifv_trunk = NULL; /* * Check if we were the last. */ if (trunk->refcnt == 0) { parent->if_vlantrunk = NULL; NET_EPOCH_WAIT(); trunk_destroy(trunk); } } /* Disconnect from parent. */ if (ifv->ifv_pflags) if_printf(ifp, "%s: ifv_pflags unclean\n", __func__); ifp->if_mtu = ETHERMTU; ifp->if_link_state = LINK_STATE_UNKNOWN; ifp->if_drv_flags &= ~IFF_DRV_RUNNING; /* * Only dispatch an event if vlan was * attached, otherwise there is nothing * to cleanup anyway. 
*/ if (parent != NULL) EVENTHANDLER_INVOKE(vlan_unconfig, parent, ifv->ifv_vid); } /* Handle a reference counted flag that should be set on the parent as well */ static int vlan_setflag(struct ifnet *ifp, int flag, int status, int (*func)(struct ifnet *, int)) { struct ifvlan *ifv; int error; VLAN_SXLOCK_ASSERT(); ifv = ifp->if_softc; status = status ? (ifp->if_flags & flag) : 0; /* Now "status" contains the flag value or 0 */ /* * See if recorded parent's status is different from what * we want it to be. If it is, flip it. We record parent's * status in ifv_pflags so that we won't clear parent's flag * we haven't set. In fact, we don't clear or set parent's * flags directly, but get or release references to them. * That's why we can be sure that recorded flags still are * in accord with actual parent's flags. */ if (status != (ifv->ifv_pflags & flag)) { error = (*func)(PARENT(ifv), status); if (error) return (error); ifv->ifv_pflags &= ~flag; ifv->ifv_pflags |= status; } return (0); } /* * Handle IFF_* flags that require certain changes on the parent: * if "status" is true, update parent's flags respective to our if_flags; * if "status" is false, forcedly clear the flags set on parent. */ static int vlan_setflags(struct ifnet *ifp, int status) { int error, i; for (i = 0; vlan_pflags[i].flag; i++) { error = vlan_setflag(ifp, vlan_pflags[i].flag, status, vlan_pflags[i].func); if (error) return (error); } return (0); } /* Inform all vlans that their parent has changed link state */ static void vlan_link_state(struct ifnet *ifp) { struct epoch_tracker et; struct ifvlantrunk *trunk; struct ifvlan *ifv; /* Called from a taskqueue_swi task, so we cannot sleep. */ NET_EPOCH_ENTER(et); trunk = ifp->if_vlantrunk; if (trunk == NULL) { NET_EPOCH_EXIT(et); return; } TRUNK_WLOCK(trunk); VLAN_FOREACH(ifv, trunk) { ifv->ifv_ifp->if_baudrate = trunk->parent->if_baudrate; if_link_state_change(ifv->ifv_ifp, trunk->parent->if_link_state); } TRUNK_WUNLOCK(trunk); NET_EPOCH_EXIT(et); } static void vlan_capabilities(struct ifvlan *ifv) { struct ifnet *p; struct ifnet *ifp; struct ifnet_hw_tsomax hw_tsomax; int cap = 0, ena = 0, mena; u_long hwa = 0; VLAN_SXLOCK_ASSERT(); NET_EPOCH_ASSERT(); p = PARENT(ifv); ifp = ifv->ifv_ifp; /* Mask parent interface enabled capabilities disabled by user. */ mena = p->if_capenable & ifv->ifv_capenable; /* * If the parent interface can do checksum offloading * on VLANs, then propagate its hardware-assisted * checksumming flags. Also assert that checksum * offloading requires hardware VLAN tagging. */ if (p->if_capabilities & IFCAP_VLAN_HWCSUM) cap |= p->if_capabilities & (IFCAP_HWCSUM | IFCAP_HWCSUM_IPV6); if (p->if_capenable & IFCAP_VLAN_HWCSUM && p->if_capenable & IFCAP_VLAN_HWTAGGING) { ena |= mena & (IFCAP_HWCSUM | IFCAP_HWCSUM_IPV6); if (ena & IFCAP_TXCSUM) hwa |= p->if_hwassist & (CSUM_IP | CSUM_TCP | CSUM_UDP | CSUM_SCTP); if (ena & IFCAP_TXCSUM_IPV6) hwa |= p->if_hwassist & (CSUM_TCP_IPV6 | CSUM_UDP_IPV6 | CSUM_SCTP_IPV6); } /* * If the parent interface can do TSO on VLANs then * propagate the hardware-assisted flag. TSO on VLANs * does not necessarily require hardware VLAN tagging. 
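/*
 * Condensed model of the propagation rules used throughout
 * vlan_capabilities(): a feature is advertised (cap) whenever the parent
 * could do it on VLANs, and enabled (ena) only when the parent has it
 * enabled and the user has not masked it off on the vlan interface.
 * Illustrative only; the real code also derives if_hwassist, and the
 * checksum case additionally requires IFCAP_VLAN_HWTAGGING.
 */
static void
vlan_cap_model(int p_cap, int p_ena, int v_ena, int vlan_bit, int feature,
    int *cap, int *ena)
{
	int mena = p_ena & v_ena;	/* parent-enabled minus user mask */

	*cap = (p_cap & vlan_bit) ? (p_cap & feature) : 0;
	*ena = (p_ena & vlan_bit) ? (mena & feature) : 0;
}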
*/ memset(&hw_tsomax, 0, sizeof(hw_tsomax)); if_hw_tsomax_common(p, &hw_tsomax); if_hw_tsomax_update(ifp, &hw_tsomax); if (p->if_capabilities & IFCAP_VLAN_HWTSO) cap |= p->if_capabilities & IFCAP_TSO; if (p->if_capenable & IFCAP_VLAN_HWTSO) { ena |= mena & IFCAP_TSO; if (ena & IFCAP_TSO) hwa |= p->if_hwassist & CSUM_TSO; } /* * If the parent interface can do LRO and checksum offloading on * VLANs, then guess it may do LRO on VLANs. A false positive here * costs nothing, while a false negative may lead to some confusion. */ if (p->if_capabilities & IFCAP_VLAN_HWCSUM) cap |= p->if_capabilities & IFCAP_LRO; if (p->if_capenable & IFCAP_VLAN_HWCSUM) ena |= p->if_capenable & IFCAP_LRO; /* * If the parent interface can offload TCP connections over VLANs then * propagate its TOE capability to the VLAN interface. * * All TOE drivers in the tree today can deal with VLANs. If this * changes then IFCAP_VLAN_TOE should be promoted to a full capability * with its own bit. */ #define IFCAP_VLAN_TOE IFCAP_TOE if (p->if_capabilities & IFCAP_VLAN_TOE) cap |= p->if_capabilities & IFCAP_TOE; if (p->if_capenable & IFCAP_VLAN_TOE) { TOEDEV(ifp) = TOEDEV(p); ena |= mena & IFCAP_TOE; } /* * If the parent interface supports dynamic link state, so does the * VLAN interface. */ cap |= (p->if_capabilities & IFCAP_LINKSTATE); ena |= (mena & IFCAP_LINKSTATE); #ifdef RATELIMIT /* * If the parent interface supports ratelimiting, so does the * VLAN interface. */ cap |= (p->if_capabilities & IFCAP_TXRTLMT); ena |= (mena & IFCAP_TXRTLMT); #endif ifp->if_capabilities = cap; ifp->if_capenable = ena; ifp->if_hwassist = hwa; } static void vlan_trunk_capabilities(struct ifnet *ifp) { struct epoch_tracker et; struct ifvlantrunk *trunk; struct ifvlan *ifv; VLAN_SLOCK(); trunk = ifp->if_vlantrunk; if (trunk == NULL) { VLAN_SUNLOCK(); return; } NET_EPOCH_ENTER(et); VLAN_FOREACH(ifv, trunk) { vlan_capabilities(ifv); } NET_EPOCH_EXIT(et); VLAN_SUNLOCK(); } static int vlan_ioctl(struct ifnet *ifp, u_long cmd, caddr_t data) { struct ifnet *p; struct ifreq *ifr; struct ifaddr *ifa; struct ifvlan *ifv; struct ifvlantrunk *trunk; struct vlanreq vlr; int error = 0; ifr = (struct ifreq *)data; ifa = (struct ifaddr *) data; ifv = ifp->if_softc; switch (cmd) { case SIOCSIFADDR: ifp->if_flags |= IFF_UP; #ifdef INET if (ifa->ifa_addr->sa_family == AF_INET) arp_ifinit(ifp, ifa); #endif break; case SIOCGIFADDR: bcopy(IF_LLADDR(ifp), &ifr->ifr_addr.sa_data[0], ifp->if_addrlen); break; case SIOCGIFMEDIA: VLAN_SLOCK(); if (TRUNK(ifv) != NULL) { p = PARENT(ifv); if_ref(p); error = (*p->if_ioctl)(p, SIOCGIFMEDIA, data); if_rele(p); /* Limit the result to the parent's current config. */ if (error == 0) { struct ifmediareq *ifmr; ifmr = (struct ifmediareq *)data; if (ifmr->ifm_count >= 1 && ifmr->ifm_ulist) { ifmr->ifm_count = 1; error = copyout(&ifmr->ifm_current, ifmr->ifm_ulist, sizeof(int)); } } } else { error = EINVAL; } VLAN_SUNLOCK(); break; case SIOCSIFMEDIA: error = EINVAL; break; case SIOCSIFMTU: /* * Set the interface MTU.
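/*
 * The admissible range enforced below reduces to
 * (mintu - fudge) <= mtu <= (parent_mtu - fudge); for a 1500-MTU parent
 * without IFCAP_VLAN_MTU, with ifv_mintu == ETHERMIN (46) and a 4-byte
 * fudge, that is 42..1496. Sketch of the same check, for illustration:
 */
static int
vlan_mtu_ok(int mtu, int parent_mtu, int mintu, int fudge)
{
	return (mtu >= mintu - fudge && mtu <= parent_mtu - fudge);
}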
*/ VLAN_SLOCK(); trunk = TRUNK(ifv); if (trunk != NULL) { TRUNK_WLOCK(trunk); if (ifr->ifr_mtu > (PARENT(ifv)->if_mtu - ifv->ifv_mtufudge) || ifr->ifr_mtu < (ifv->ifv_mintu - ifv->ifv_mtufudge)) error = EINVAL; else ifp->if_mtu = ifr->ifr_mtu; TRUNK_WUNLOCK(trunk); } else error = EINVAL; VLAN_SUNLOCK(); break; case SIOCSETVLAN: #ifdef VIMAGE /* * XXXRW/XXXBZ: The goal in these checks is to allow a VLAN * interface to be delegated to a jail without allowing the * jail to change what underlying interface/VID it is * associated with. We are not entirely convinced that this * is the right way to accomplish that policy goal. */ if (ifp->if_vnet != ifp->if_home_vnet) { error = EPERM; break; } #endif error = copyin(ifr_data_get_ptr(ifr), &vlr, sizeof(vlr)); if (error) break; if (vlr.vlr_parent[0] == '\0') { vlan_unconfig(ifp); break; } p = ifunit_ref(vlr.vlr_parent); if (p == NULL) { error = ENOENT; break; } error = vlan_config(ifv, p, vlr.vlr_tag); if_rele(p); break; case SIOCGETVLAN: #ifdef VIMAGE if (ifp->if_vnet != ifp->if_home_vnet) { error = EPERM; break; } #endif bzero(&vlr, sizeof(vlr)); VLAN_SLOCK(); if (TRUNK(ifv) != NULL) { strlcpy(vlr.vlr_parent, PARENT(ifv)->if_xname, sizeof(vlr.vlr_parent)); vlr.vlr_tag = ifv->ifv_vid; } VLAN_SUNLOCK(); error = copyout(&vlr, ifr_data_get_ptr(ifr), sizeof(vlr)); break; case SIOCSIFFLAGS: /* * We should propagate selected flags to the parent, * e.g., promiscuous mode. */ VLAN_XLOCK(); if (TRUNK(ifv) != NULL) error = vlan_setflags(ifp, 1); VLAN_XUNLOCK(); break; case SIOCADDMULTI: case SIOCDELMULTI: /* * If we don't have a parent, just remember the membership for * when we do. * * XXX We need the rmlock here to avoid sleeping while * holding in6_multi_mtx. */ VLAN_XLOCK(); trunk = TRUNK(ifv); if (trunk != NULL) error = vlan_setmulti(ifp); VLAN_XUNLOCK(); break; case SIOCGVLANPCP: #ifdef VIMAGE if (ifp->if_vnet != ifp->if_home_vnet) { error = EPERM; break; } #endif ifr->ifr_vlan_pcp = ifv->ifv_pcp; break; case SIOCSVLANPCP: #ifdef VIMAGE if (ifp->if_vnet != ifp->if_home_vnet) { error = EPERM; break; } #endif error = priv_check(curthread, PRIV_NET_SETVLANPCP); if (error) break; if (ifr->ifr_vlan_pcp > 7) { error = EINVAL; break; } ifv->ifv_pcp = ifr->ifr_vlan_pcp; ifp->if_pcp = ifv->ifv_pcp; vlan_tag_recalculate(ifv); /* broadcast event about PCP change */ EVENTHANDLER_INVOKE(ifnet_event, ifp, IFNET_EVENT_PCP); break; case SIOCSIFCAP: VLAN_SLOCK(); ifv->ifv_capenable = ifr->ifr_reqcap; trunk = TRUNK(ifv); if (trunk != NULL) { struct epoch_tracker et; NET_EPOCH_ENTER(et); vlan_capabilities(ifv); NET_EPOCH_EXIT(et); } VLAN_SUNLOCK(); break; default: error = EINVAL; break; } return (error); } #ifdef RATELIMIT static int vlan_snd_tag_alloc(struct ifnet *ifp, union if_snd_tag_alloc_params *params, struct m_snd_tag **ppmt) { /* get trunk device */ ifp = vlan_trunkdev(ifp); if (ifp == NULL || (ifp->if_capenable & IFCAP_TXRTLMT) == 0) return (EOPNOTSUPP); /* forward allocation request */ return (ifp->if_snd_tag_alloc(ifp, params, ppmt)); +} + +static void +vlan_snd_tag_free(struct m_snd_tag *tag) +{ + tag->ifp->if_snd_tag_free(tag); } #endif Index: projects/import-googletest-1.8.1/sys/net/iflib.c =================================================================== --- projects/import-googletest-1.8.1/sys/net/iflib.c (revision 344270) +++ projects/import-googletest-1.8.1/sys/net/iflib.c (revision 344271) @@ -1,6539 +1,6551 @@ /*- * Copyright (c) 2014-2018, Matthew Macy * All rights reserved. 
* * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions are met: * * 1. Redistributions of source code must retain the above copyright notice, * this list of conditions and the following disclaimer. * * 2. Neither the name of Matthew Macy nor the names of its * contributors may be used to endorse or promote products derived from * this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE * POSSIBILITY OF SUCH DAMAGE. */ #include __FBSDID("$FreeBSD$"); #include "opt_inet.h" #include "opt_inet6.h" #include "opt_acpi.h" #include "opt_sched.h" #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include "ifdi_if.h" #ifdef PCI_IOV #include #endif #include /* * enable accounting of every mbuf as it comes in to and goes out of * iflib's software descriptor references */ #define MEMORY_LOGGING 0 /* * Enable mbuf vectors for compressing long mbuf chains */ /* * NB: * - Prefetching in tx cleaning should perhaps be a tunable. The distance ahead * we prefetch needs to be determined by the time spent in m_free vis a vis * the cost of a prefetch. This will of course vary based on the workload: * - NFLX's m_free path is dominated by vm-based M_EXT manipulation which * is quite expensive, thus suggesting very little prefetch. * - small packet forwarding which is just returning a single mbuf to * UMA will typically be very fast vis a vis the cost of a memory * access. 
*/ /* * File organization: * - private structures * - iflib private utility functions * - ifnet functions * - vlan registry and other exported functions * - iflib public core functions * * */ MALLOC_DEFINE(M_IFLIB, "iflib", "ifnet library"); struct iflib_txq; typedef struct iflib_txq *iflib_txq_t; struct iflib_rxq; typedef struct iflib_rxq *iflib_rxq_t; struct iflib_fl; typedef struct iflib_fl *iflib_fl_t; struct iflib_ctx; static void iru_init(if_rxd_update_t iru, iflib_rxq_t rxq, uint8_t flid); static void iflib_timer(void *arg); typedef struct iflib_filter_info { driver_filter_t *ifi_filter; void *ifi_filter_arg; struct grouptask *ifi_task; void *ifi_ctx; } *iflib_filter_info_t; struct iflib_ctx { KOBJ_FIELDS; /* * Pointer to hardware driver's softc */ void *ifc_softc; device_t ifc_dev; if_t ifc_ifp; cpuset_t ifc_cpus; if_shared_ctx_t ifc_sctx; struct if_softc_ctx ifc_softc_ctx; struct sx ifc_ctx_sx; struct mtx ifc_state_mtx; iflib_txq_t ifc_txqs; iflib_rxq_t ifc_rxqs; uint32_t ifc_if_flags; uint32_t ifc_flags; uint32_t ifc_max_fl_buf_size; int ifc_link_state; int ifc_link_irq; int ifc_watchdog_events; struct cdev *ifc_led_dev; struct resource *ifc_msix_mem; struct if_irq ifc_legacy_irq; struct grouptask ifc_admin_task; struct grouptask ifc_vflr_task; struct iflib_filter_info ifc_filter_info; struct ifmedia ifc_media; struct sysctl_oid *ifc_sysctl_node; uint16_t ifc_sysctl_ntxqs; uint16_t ifc_sysctl_nrxqs; uint16_t ifc_sysctl_qs_eq_override; uint16_t ifc_sysctl_rx_budget; uint16_t ifc_sysctl_tx_abdicate; qidx_t ifc_sysctl_ntxds[8]; qidx_t ifc_sysctl_nrxds[8]; struct if_txrx ifc_txrx; #define isc_txd_encap ifc_txrx.ift_txd_encap #define isc_txd_flush ifc_txrx.ift_txd_flush #define isc_txd_credits_update ifc_txrx.ift_txd_credits_update #define isc_rxd_available ifc_txrx.ift_rxd_available #define isc_rxd_pkt_get ifc_txrx.ift_rxd_pkt_get #define isc_rxd_refill ifc_txrx.ift_rxd_refill #define isc_rxd_flush ifc_txrx.ift_rxd_flush #define isc_legacy_intr ifc_txrx.ift_legacy_intr eventhandler_tag ifc_vlan_attach_event; eventhandler_tag ifc_vlan_detach_event; uint8_t ifc_mac[ETHER_ADDR_LEN]; char ifc_mtx_name[16]; }; void * iflib_get_softc(if_ctx_t ctx) { return (ctx->ifc_softc); } device_t iflib_get_dev(if_ctx_t ctx) { return (ctx->ifc_dev); } if_t iflib_get_ifp(if_ctx_t ctx) { return (ctx->ifc_ifp); } struct ifmedia * iflib_get_media(if_ctx_t ctx) { return (&ctx->ifc_media); } uint32_t iflib_get_flags(if_ctx_t ctx) { return (ctx->ifc_flags); } void iflib_set_mac(if_ctx_t ctx, uint8_t mac[ETHER_ADDR_LEN]) { bcopy(mac, ctx->ifc_mac, ETHER_ADDR_LEN); } if_softc_ctx_t iflib_get_softc_ctx(if_ctx_t ctx) { return (&ctx->ifc_softc_ctx); } if_shared_ctx_t iflib_get_sctx(if_ctx_t ctx) { return (ctx->ifc_sctx); } #define IP_ALIGNED(m) ((((uintptr_t)(m)->m_data) & 0x3) == 0x2) #define CACHE_PTR_INCREMENT (CACHE_LINE_SIZE/sizeof(void*)) #define CACHE_PTR_NEXT(ptr) ((void *)(((uintptr_t)(ptr)+CACHE_LINE_SIZE-1) & ~(CACHE_LINE_SIZE-1))) #define LINK_ACTIVE(ctx) ((ctx)->ifc_link_state == LINK_STATE_UP) #define CTX_IS_VF(ctx) ((ctx)->ifc_sctx->isc_flags & IFLIB_IS_VF) typedef struct iflib_sw_rx_desc_array { bus_dmamap_t *ifsd_map; /* bus_dma maps for packet */ struct mbuf **ifsd_m; /* pkthdr mbufs */ caddr_t *ifsd_cl; /* direct cluster pointer for rx */ bus_addr_t *ifsd_ba; /* bus addr of cluster for rx */ } iflib_rxsd_array_t; typedef struct iflib_sw_tx_desc_array { bus_dmamap_t *ifsd_map; /* bus_dma maps for packet
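/*
 * Aside on CACHE_PTR_NEXT() above: it is the usual align-up idiom,
 * (x + align - 1) & ~(align - 1). Worked example with a 64-byte
 * cache line:
 *	0x1001 -> (0x1001 + 63) & ~63 == 0x1040
 *	0x1040 -> (0x1040 + 63) & ~63 == 0x1040 (already aligned)
 * The mask must be inverted; masking with (align - 1) alone would
 * yield only the offset within the line, not the next boundary.
 */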
*/ bus_dmamap_t *ifsd_tso_map; /* bus_dma maps for TSO packet */ struct mbuf **ifsd_m; /* pkthdr mbufs */ } if_txsd_vec_t; /* magic number that should be high enough for any hardware */ #define IFLIB_MAX_TX_SEGS 128 #define IFLIB_RX_COPY_THRESH 128 #define IFLIB_MAX_RX_REFRESH 32 /* The minimum descriptors per second before we start coalescing */ #define IFLIB_MIN_DESC_SEC 16384 #define IFLIB_DEFAULT_TX_UPDATE_FREQ 16 #define IFLIB_QUEUE_IDLE 0 #define IFLIB_QUEUE_HUNG 1 #define IFLIB_QUEUE_WORKING 2 /* maximum number of txqs that can share an rx interrupt */ #define IFLIB_MAX_TX_SHARED_INTR 4 /* this should really scale with ring size - this is a fairly arbitrary value */ #define TX_BATCH_SIZE 32 #define IFLIB_RESTART_BUDGET 8 #define CSUM_OFFLOAD (CSUM_IP_TSO|CSUM_IP6_TSO|CSUM_IP| \ CSUM_IP_UDP|CSUM_IP_TCP|CSUM_IP_SCTP| \ CSUM_IP6_UDP|CSUM_IP6_TCP|CSUM_IP6_SCTP) struct iflib_txq { qidx_t ift_in_use; qidx_t ift_cidx; qidx_t ift_cidx_processed; qidx_t ift_pidx; uint8_t ift_gen; uint8_t ift_br_offset; uint16_t ift_npending; uint16_t ift_db_pending; uint16_t ift_rs_pending; /* implicit pad */ uint8_t ift_txd_size[8]; uint64_t ift_processed; uint64_t ift_cleaned; uint64_t ift_cleaned_prev; #if MEMORY_LOGGING uint64_t ift_enqueued; uint64_t ift_dequeued; #endif uint64_t ift_no_tx_dma_setup; uint64_t ift_no_desc_avail; uint64_t ift_mbuf_defrag_failed; uint64_t ift_mbuf_defrag; uint64_t ift_map_failed; uint64_t ift_txd_encap_efbig; uint64_t ift_pullups; uint64_t ift_last_timer_tick; struct mtx ift_mtx; struct mtx ift_db_mtx; /* constant values */ if_ctx_t ift_ctx; struct ifmp_ring *ift_br; struct grouptask ift_task; qidx_t ift_size; uint16_t ift_id; struct callout ift_timer; if_txsd_vec_t ift_sds; uint8_t ift_qstatus; uint8_t ift_closed; uint8_t ift_update_freq; struct iflib_filter_info ift_filter_info; bus_dma_tag_t ift_buf_tag; bus_dma_tag_t ift_tso_buf_tag; iflib_dma_info_t ift_ifdi; #define MTX_NAME_LEN 16 char ift_mtx_name[MTX_NAME_LEN]; char ift_db_mtx_name[MTX_NAME_LEN]; bus_dma_segment_t ift_segs[IFLIB_MAX_TX_SEGS] __aligned(CACHE_LINE_SIZE); #ifdef IFLIB_DIAGNOSTICS uint64_t ift_cpu_exec_count[256]; #endif } __aligned(CACHE_LINE_SIZE); struct iflib_fl { qidx_t ifl_cidx; qidx_t ifl_pidx; qidx_t ifl_credits; uint8_t ifl_gen; uint8_t ifl_rxd_size; #if MEMORY_LOGGING uint64_t ifl_m_enqueued; uint64_t ifl_m_dequeued; uint64_t ifl_cl_enqueued; uint64_t ifl_cl_dequeued; #endif /* implicit pad */ bitstr_t *ifl_rx_bitmap; qidx_t ifl_fragidx; /* constant */ qidx_t ifl_size; uint16_t ifl_buf_size; uint16_t ifl_cltype; uma_zone_t ifl_zone; iflib_rxsd_array_t ifl_sds; iflib_rxq_t ifl_rxq; uint8_t ifl_id; bus_dma_tag_t ifl_buf_tag; iflib_dma_info_t ifl_ifdi; uint64_t ifl_bus_addrs[IFLIB_MAX_RX_REFRESH] __aligned(CACHE_LINE_SIZE); caddr_t ifl_vm_addrs[IFLIB_MAX_RX_REFRESH]; qidx_t ifl_rxd_idxs[IFLIB_MAX_RX_REFRESH]; } __aligned(CACHE_LINE_SIZE); static inline qidx_t get_inuse(int size, qidx_t cidx, qidx_t pidx, uint8_t gen) { qidx_t used; if (pidx > cidx) used = pidx - cidx; else if (pidx < cidx) used = size - cidx + pidx; else if (gen == 0 && pidx == cidx) used = 0; else if (gen == 1 && pidx == cidx) used = size; else panic("bad state"); return (used); } #define TXQ_AVAIL(txq) (txq->ift_size - get_inuse(txq->ift_size, txq->ift_cidx, txq->ift_pidx, txq->ift_gen)) #define IDXDIFF(head, tail, wrap) \ ((head) >= (tail) ? (head) - (tail) : (wrap) - (tail) + (head)) struct iflib_rxq { /* If there is a separate completion queue - * these are the cq cidx and pidx. Otherwise * these are unused. 
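/*
 * Worked example for get_inuse() above, with size 8: cidx 6 and
 * pidx 2 means the producer has wrapped, so used = 8 - 6 + 2 = 4;
 * when pidx == cidx the generation bit disambiguates empty
 * (gen 0 -> 0) from full (gen 1 -> 8). TXQ_AVAIL() is then simply
 * size minus that occupancy.
 */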
*/ qidx_t ifr_size; qidx_t ifr_cq_cidx; qidx_t ifr_cq_pidx; uint8_t ifr_cq_gen; uint8_t ifr_fl_offset; if_ctx_t ifr_ctx; iflib_fl_t ifr_fl; uint64_t ifr_rx_irq; uint16_t ifr_id; uint8_t ifr_lro_enabled; uint8_t ifr_nfl; uint8_t ifr_ntxqirq; uint8_t ifr_txqid[IFLIB_MAX_TX_SHARED_INTR]; struct lro_ctrl ifr_lc; struct grouptask ifr_task; struct iflib_filter_info ifr_filter_info; iflib_dma_info_t ifr_ifdi; /* dynamically allocate if any drivers need a value substantially larger than this */ struct if_rxd_frag ifr_frags[IFLIB_MAX_RX_SEGS] __aligned(CACHE_LINE_SIZE); #ifdef IFLIB_DIAGNOSTICS uint64_t ifr_cpu_exec_count[256]; #endif } __aligned(CACHE_LINE_SIZE); typedef struct if_rxsd { caddr_t *ifsd_cl; struct mbuf **ifsd_m; iflib_fl_t ifsd_fl; qidx_t ifsd_cidx; } *if_rxsd_t; /* multiple of word size */ #ifdef __LP64__ #define PKT_INFO_SIZE 6 #define RXD_INFO_SIZE 5 #define PKT_TYPE uint64_t #else #define PKT_INFO_SIZE 11 #define RXD_INFO_SIZE 8 #define PKT_TYPE uint32_t #endif #define PKT_LOOP_BOUND ((PKT_INFO_SIZE/3)*3) #define RXD_LOOP_BOUND ((RXD_INFO_SIZE/4)*4) typedef struct if_pkt_info_pad { PKT_TYPE pkt_val[PKT_INFO_SIZE]; } *if_pkt_info_pad_t; typedef struct if_rxd_info_pad { PKT_TYPE rxd_val[RXD_INFO_SIZE]; } *if_rxd_info_pad_t; CTASSERT(sizeof(struct if_pkt_info_pad) == sizeof(struct if_pkt_info)); CTASSERT(sizeof(struct if_rxd_info_pad) == sizeof(struct if_rxd_info)); static inline void pkt_info_zero(if_pkt_info_t pi) { if_pkt_info_pad_t pi_pad; pi_pad = (if_pkt_info_pad_t)pi; pi_pad->pkt_val[0] = 0; pi_pad->pkt_val[1] = 0; pi_pad->pkt_val[2] = 0; pi_pad->pkt_val[3] = 0; pi_pad->pkt_val[4] = 0; pi_pad->pkt_val[5] = 0; #ifndef __LP64__ pi_pad->pkt_val[6] = 0; pi_pad->pkt_val[7] = 0; pi_pad->pkt_val[8] = 0; pi_pad->pkt_val[9] = 0; pi_pad->pkt_val[10] = 0; #endif } static device_method_t iflib_pseudo_methods[] = { DEVMETHOD(device_attach, noop_attach), DEVMETHOD(device_detach, iflib_pseudo_detach), DEVMETHOD_END }; driver_t iflib_pseudodriver = { "iflib_pseudo", iflib_pseudo_methods, sizeof(struct iflib_ctx), }; static inline void rxd_info_zero(if_rxd_info_t ri) { if_rxd_info_pad_t ri_pad; int i; ri_pad = (if_rxd_info_pad_t)ri; for (i = 0; i < RXD_LOOP_BOUND; i += 4) { ri_pad->rxd_val[i] = 0; ri_pad->rxd_val[i+1] = 0; ri_pad->rxd_val[i+2] = 0; ri_pad->rxd_val[i+3] = 0; } #ifdef __LP64__ ri_pad->rxd_val[RXD_INFO_SIZE-1] = 0; #endif } /* * Only allow a single packet to take up most 1/nth of the tx ring */ #define MAX_SINGLE_PACKET_FRACTION 12 #define IF_BAD_DMA (bus_addr_t)-1 #define CTX_ACTIVE(ctx) ((if_getdrvflags((ctx)->ifc_ifp) & IFF_DRV_RUNNING)) #define CTX_LOCK_INIT(_sc) sx_init(&(_sc)->ifc_ctx_sx, "iflib ctx lock") #define CTX_LOCK(ctx) sx_xlock(&(ctx)->ifc_ctx_sx) #define CTX_UNLOCK(ctx) sx_xunlock(&(ctx)->ifc_ctx_sx) #define CTX_LOCK_DESTROY(ctx) sx_destroy(&(ctx)->ifc_ctx_sx) #define STATE_LOCK_INIT(_sc, _name) mtx_init(&(_sc)->ifc_state_mtx, _name, "iflib state lock", MTX_DEF) #define STATE_LOCK(ctx) mtx_lock(&(ctx)->ifc_state_mtx) #define STATE_UNLOCK(ctx) mtx_unlock(&(ctx)->ifc_state_mtx) #define STATE_LOCK_DESTROY(ctx) mtx_destroy(&(ctx)->ifc_state_mtx) #define CALLOUT_LOCK(txq) mtx_lock(&txq->ift_mtx) #define CALLOUT_UNLOCK(txq) mtx_unlock(&txq->ift_mtx) void iflib_set_detach(if_ctx_t ctx) { STATE_LOCK(ctx); ctx->ifc_flags |= IFC_IN_DETACH; STATE_UNLOCK(ctx); } /* Our boot-time initialization hook */ static int iflib_module_event_handler(module_t, int, void *); static moduledata_t iflib_moduledata = { "iflib", iflib_module_event_handler, NULL }; DECLARE_MODULE(iflib, 
iflib_moduledata, SI_SUB_INIT_IF, SI_ORDER_ANY); MODULE_VERSION(iflib, 1); MODULE_DEPEND(iflib, pci, 1, 1, 1); MODULE_DEPEND(iflib, ether, 1, 1, 1); TASKQGROUP_DEFINE(if_io_tqg, mp_ncpus, 1); TASKQGROUP_DEFINE(if_config_tqg, 1, 1); #ifndef IFLIB_DEBUG_COUNTERS #ifdef INVARIANTS #define IFLIB_DEBUG_COUNTERS 1 #else #define IFLIB_DEBUG_COUNTERS 0 #endif /* !INVARIANTS */ #endif static SYSCTL_NODE(_net, OID_AUTO, iflib, CTLFLAG_RD, 0, "iflib driver parameters"); /* * XXX need to ensure that this can't accidentally cause the head to be moved backwards */ static int iflib_min_tx_latency = 0; SYSCTL_INT(_net_iflib, OID_AUTO, min_tx_latency, CTLFLAG_RW, &iflib_min_tx_latency, 0, "minimize transmit latency at the possible expense of throughput"); static int iflib_no_tx_batch = 0; SYSCTL_INT(_net_iflib, OID_AUTO, no_tx_batch, CTLFLAG_RW, &iflib_no_tx_batch, 0, "minimize transmit latency at the possible expense of throughput"); #if IFLIB_DEBUG_COUNTERS static int iflib_tx_seen; static int iflib_tx_sent; static int iflib_tx_encap; static int iflib_rx_allocs; static int iflib_fl_refills; static int iflib_fl_refills_large; static int iflib_tx_frees; SYSCTL_INT(_net_iflib, OID_AUTO, tx_seen, CTLFLAG_RD, &iflib_tx_seen, 0, "# tx mbufs seen"); SYSCTL_INT(_net_iflib, OID_AUTO, tx_sent, CTLFLAG_RD, &iflib_tx_sent, 0, "# tx mbufs sent"); SYSCTL_INT(_net_iflib, OID_AUTO, tx_encap, CTLFLAG_RD, &iflib_tx_encap, 0, "# tx mbufs encapped"); SYSCTL_INT(_net_iflib, OID_AUTO, tx_frees, CTLFLAG_RD, &iflib_tx_frees, 0, "# tx frees"); SYSCTL_INT(_net_iflib, OID_AUTO, rx_allocs, CTLFLAG_RD, &iflib_rx_allocs, 0, "# rx allocations"); SYSCTL_INT(_net_iflib, OID_AUTO, fl_refills, CTLFLAG_RD, &iflib_fl_refills, 0, "# refills"); SYSCTL_INT(_net_iflib, OID_AUTO, fl_refills_large, CTLFLAG_RD, &iflib_fl_refills_large, 0, "# large refills"); static int iflib_txq_drain_flushing; static int iflib_txq_drain_oactive; static int iflib_txq_drain_notready; SYSCTL_INT(_net_iflib, OID_AUTO, txq_drain_flushing, CTLFLAG_RD, &iflib_txq_drain_flushing, 0, "# drain flushes"); SYSCTL_INT(_net_iflib, OID_AUTO, txq_drain_oactive, CTLFLAG_RD, &iflib_txq_drain_oactive, 0, "# drain oactives"); SYSCTL_INT(_net_iflib, OID_AUTO, txq_drain_notready, CTLFLAG_RD, &iflib_txq_drain_notready, 0, "# drain notready"); static int iflib_encap_load_mbuf_fail; static int iflib_encap_pad_mbuf_fail; static int iflib_encap_txq_avail_fail; static int iflib_encap_txd_encap_fail; SYSCTL_INT(_net_iflib, OID_AUTO, encap_load_mbuf_fail, CTLFLAG_RD, &iflib_encap_load_mbuf_fail, 0, "# busdma load failures"); SYSCTL_INT(_net_iflib, OID_AUTO, encap_pad_mbuf_fail, CTLFLAG_RD, &iflib_encap_pad_mbuf_fail, 0, "# runt frame pad failures"); SYSCTL_INT(_net_iflib, OID_AUTO, encap_txq_avail_fail, CTLFLAG_RD, &iflib_encap_txq_avail_fail, 0, "# txq avail failures"); SYSCTL_INT(_net_iflib, OID_AUTO, encap_txd_encap_fail, CTLFLAG_RD, &iflib_encap_txd_encap_fail, 0, "# driver encap failures"); static int iflib_task_fn_rxs; static int iflib_rx_intr_enables; static int iflib_fast_intrs; static int iflib_rx_unavail; static int iflib_rx_ctx_inactive; static int iflib_rx_if_input; static int iflib_rx_mbuf_null; static int iflib_rxd_flush; static int iflib_verbose_debug; SYSCTL_INT(_net_iflib, OID_AUTO, task_fn_rx, CTLFLAG_RD, &iflib_task_fn_rxs, 0, "# task_fn_rx calls"); SYSCTL_INT(_net_iflib, OID_AUTO, rx_intr_enables, CTLFLAG_RD, &iflib_rx_intr_enables, 0, "# rx intr enables"); SYSCTL_INT(_net_iflib, OID_AUTO, fast_intrs, CTLFLAG_RD, &iflib_fast_intrs, 0, "# fast_intr calls"); 
SYSCTL_INT(_net_iflib, OID_AUTO, rx_unavail, CTLFLAG_RD, &iflib_rx_unavail, 0, "# times rxeof called with no available data"); SYSCTL_INT(_net_iflib, OID_AUTO, rx_ctx_inactive, CTLFLAG_RD, &iflib_rx_ctx_inactive, 0, "# times rxeof called with inactive context"); SYSCTL_INT(_net_iflib, OID_AUTO, rx_if_input, CTLFLAG_RD, &iflib_rx_if_input, 0, "# times rxeof called if_input"); SYSCTL_INT(_net_iflib, OID_AUTO, rx_mbuf_null, CTLFLAG_RD, &iflib_rx_mbuf_null, 0, "# times rxeof got null mbuf"); SYSCTL_INT(_net_iflib, OID_AUTO, rxd_flush, CTLFLAG_RD, &iflib_rxd_flush, 0, "# times rxd_flush called"); SYSCTL_INT(_net_iflib, OID_AUTO, verbose_debug, CTLFLAG_RW, &iflib_verbose_debug, 0, "enable verbose debugging"); #define DBG_COUNTER_INC(name) atomic_add_int(&(iflib_ ## name), 1) static void iflib_debug_reset(void) { iflib_tx_seen = iflib_tx_sent = iflib_tx_encap = iflib_rx_allocs = iflib_fl_refills = iflib_fl_refills_large = iflib_tx_frees = iflib_txq_drain_flushing = iflib_txq_drain_oactive = iflib_txq_drain_notready = iflib_encap_load_mbuf_fail = iflib_encap_pad_mbuf_fail = iflib_encap_txq_avail_fail = iflib_encap_txd_encap_fail = iflib_task_fn_rxs = iflib_rx_intr_enables = iflib_fast_intrs = iflib_rx_unavail = iflib_rx_ctx_inactive = iflib_rx_if_input = iflib_rx_mbuf_null = iflib_rxd_flush = 0; } #else #define DBG_COUNTER_INC(name) static void iflib_debug_reset(void) {} #endif #define IFLIB_DEBUG 0 static void iflib_tx_structures_free(if_ctx_t ctx); static void iflib_rx_structures_free(if_ctx_t ctx); static int iflib_queues_alloc(if_ctx_t ctx); static int iflib_tx_credits_update(if_ctx_t ctx, iflib_txq_t txq); static int iflib_rxd_avail(if_ctx_t ctx, iflib_rxq_t rxq, qidx_t cidx, qidx_t budget); static int iflib_qset_structures_setup(if_ctx_t ctx); static int iflib_msix_init(if_ctx_t ctx); static int iflib_legacy_setup(if_ctx_t ctx, driver_filter_t filter, void *filterarg, int *rid, const char *str); static void iflib_txq_check_drain(iflib_txq_t txq, int budget); static uint32_t iflib_txq_can_drain(struct ifmp_ring *); #ifdef ALTQ static void iflib_altq_if_start(if_t ifp); static int iflib_altq_if_transmit(if_t ifp, struct mbuf *m); #endif static int iflib_register(if_ctx_t); static void iflib_init_locked(if_ctx_t ctx); static void iflib_add_device_sysctl_pre(if_ctx_t ctx); static void iflib_add_device_sysctl_post(if_ctx_t ctx); static void iflib_ifmp_purge(iflib_txq_t txq); static void _iflib_pre_assert(if_softc_ctx_t scctx); static void iflib_if_init_locked(if_ctx_t ctx); static void iflib_free_intr_mem(if_ctx_t ctx); #ifndef __NO_STRICT_ALIGNMENT static struct mbuf * iflib_fixup_rx(struct mbuf *m); #endif NETDUMP_DEFINE(iflib); #ifdef DEV_NETMAP #include #include #include MODULE_DEPEND(iflib, netmap, 1, 1, 1); static int netmap_fl_refill(iflib_rxq_t rxq, struct netmap_kring *kring, uint32_t nm_i, bool init); /* * device-specific sysctl variables: * * iflib_crcstrip: 0: keep CRC in rx frames (default), 1: strip it. * During regular operations the CRC is stripped, but on some * hardware reception of frames not multiple of 64 is slower, * so using crcstrip=0 helps in benchmarks. * * iflib_rx_miss, iflib_rx_miss_bufs: * count packets that might be missed due to lost interrupts. */ SYSCTL_DECL(_dev_netmap); /* * The xl driver by default strips CRCs and we do not override it. 
*/ int iflib_crcstrip = 1; SYSCTL_INT(_dev_netmap, OID_AUTO, iflib_crcstrip, CTLFLAG_RW, &iflib_crcstrip, 1, "strip CRC on rx frames"); int iflib_rx_miss, iflib_rx_miss_bufs; SYSCTL_INT(_dev_netmap, OID_AUTO, iflib_rx_miss, CTLFLAG_RW, &iflib_rx_miss, 0, "potentially missed rx intr"); SYSCTL_INT(_dev_netmap, OID_AUTO, iflib_rx_miss_bufs, CTLFLAG_RW, &iflib_rx_miss_bufs, 0, "potentially missed rx intr bufs"); /* * Register/unregister. We are already under netmap lock. * Only called on the first register or the last unregister. */ static int iflib_netmap_register(struct netmap_adapter *na, int onoff) { struct ifnet *ifp = na->ifp; if_ctx_t ctx = ifp->if_softc; int status; CTX_LOCK(ctx); IFDI_INTR_DISABLE(ctx); /* Tell the stack that the interface is no longer active */ ifp->if_drv_flags &= ~(IFF_DRV_RUNNING | IFF_DRV_OACTIVE); if (!CTX_IS_VF(ctx)) IFDI_CRCSTRIP_SET(ctx, onoff, iflib_crcstrip); /* enable or disable flags and callbacks in na and ifp */ if (onoff) { nm_set_native_flags(na); } else { nm_clear_native_flags(na); } iflib_stop(ctx); iflib_init_locked(ctx); IFDI_CRCSTRIP_SET(ctx, onoff, iflib_crcstrip); // XXX why twice ? status = ifp->if_drv_flags & IFF_DRV_RUNNING ? 0 : 1; if (status) nm_clear_native_flags(na); CTX_UNLOCK(ctx); return (status); } static int netmap_fl_refill(iflib_rxq_t rxq, struct netmap_kring *kring, uint32_t nm_i, bool init) { struct netmap_adapter *na = kring->na; u_int const lim = kring->nkr_num_slots - 1; u_int head = kring->rhead; struct netmap_ring *ring = kring->ring; bus_dmamap_t *map; struct if_rxd_update iru; if_ctx_t ctx = rxq->ifr_ctx; iflib_fl_t fl = &rxq->ifr_fl[0]; uint32_t refill_pidx, nic_i; #if IFLIB_DEBUG_COUNTERS int rf_count = 0; #endif if (nm_i == head && __predict_true(!init)) return 0; iru_init(&iru, rxq, 0 /* flid */); map = fl->ifl_sds.ifsd_map; refill_pidx = netmap_idx_k2n(kring, nm_i); /* * IMPORTANT: we must leave one free slot in the ring, * so move head back by one unit */ head = nm_prev(head, lim); nic_i = UINT_MAX; DBG_COUNTER_INC(fl_refills); while (nm_i != head) { #if IFLIB_DEBUG_COUNTERS if (++rf_count == 9) DBG_COUNTER_INC(fl_refills_large); #endif for (int tmp_pidx = 0; tmp_pidx < IFLIB_MAX_RX_REFRESH && nm_i != head; tmp_pidx++) { struct netmap_slot *slot = &ring->slot[nm_i]; void *addr = PNMB(na, slot, &fl->ifl_bus_addrs[tmp_pidx]); uint32_t nic_i_dma = refill_pidx; nic_i = netmap_idx_k2n(kring, nm_i); MPASS(tmp_pidx < IFLIB_MAX_RX_REFRESH); if (addr == NETMAP_BUF_BASE(na)) /* bad buf */ return netmap_ring_reinit(kring); fl->ifl_vm_addrs[tmp_pidx] = addr; if (__predict_false(init)) { netmap_load_map(na, fl->ifl_buf_tag, map[nic_i], addr); } else if (slot->flags & NS_BUF_CHANGED) { /* buffer has changed, reload map */ netmap_reload_map(na, fl->ifl_buf_tag, map[nic_i], addr); } slot->flags &= ~NS_BUF_CHANGED; nm_i = nm_next(nm_i, lim); fl->ifl_rxd_idxs[tmp_pidx] = nic_i = nm_next(nic_i, lim); if (nm_i != head && tmp_pidx < IFLIB_MAX_RX_REFRESH-1) continue; iru.iru_pidx = refill_pidx; iru.iru_count = tmp_pidx+1; ctx->isc_rxd_refill(ctx->ifc_softc, &iru); refill_pidx = nic_i; for (int n = 0; n < iru.iru_count; n++) { bus_dmamap_sync(fl->ifl_buf_tag, map[nic_i_dma], BUS_DMASYNC_PREREAD); /* XXX - change this to not use the netmap func*/ nic_i_dma = nm_next(nic_i_dma, lim); } } } kring->nr_hwcur = head; bus_dmamap_sync(fl->ifl_ifdi->idi_tag, fl->ifl_ifdi->idi_map, BUS_DMASYNC_PREREAD | BUS_DMASYNC_PREWRITE); if (__predict_true(nic_i != UINT_MAX)) { ctx->isc_rxd_flush(ctx->ifc_softc, rxq->ifr_id, fl->ifl_id, nic_i); 
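/*
 * Note on the refill loop above: netmap rings are never driven
 * completely full, hence head = nm_prev(head, lim) keeps one slot
 * free so that an empty ring (cur == tail) stays distinguishable
 * from a full one. nm_next()/nm_prev() are plain modular steps,
 * e.g. nm_next(i, lim) == (i == lim ? 0 : i + 1).
 */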
DBG_COUNTER_INC(rxd_flush); } return (0); } /* * Reconcile kernel and user view of the transmit ring. * * All information is in the kring. * Userspace wants to send packets up to the one before kring->rhead, * kernel knows kring->nr_hwcur is the first unsent packet. * * Here we push packets out (as many as possible), and possibly * reclaim buffers from previously completed transmission. * * The caller (netmap) guarantees that there is only one instance * running at any time. Any interference with other driver * methods should be handled by the individual drivers. */ static int iflib_netmap_txsync(struct netmap_kring *kring, int flags) { struct netmap_adapter *na = kring->na; struct ifnet *ifp = na->ifp; struct netmap_ring *ring = kring->ring; u_int nm_i; /* index into the netmap kring */ u_int nic_i; /* index into the NIC ring */ u_int n; u_int const lim = kring->nkr_num_slots - 1; u_int const head = kring->rhead; struct if_pkt_info pi; /* * interrupts on every tx packet are expensive so request * them every half ring, or where NS_REPORT is set */ u_int report_frequency = kring->nkr_num_slots >> 1; /* device-specific */ if_ctx_t ctx = ifp->if_softc; iflib_txq_t txq = &ctx->ifc_txqs[kring->ring_id]; bus_dmamap_sync(txq->ift_ifdi->idi_tag, txq->ift_ifdi->idi_map, BUS_DMASYNC_POSTREAD | BUS_DMASYNC_POSTWRITE); /* * First part: process new packets to send. * nm_i is the current index in the netmap kring, * nic_i is the corresponding index in the NIC ring. * * If we have packets to send (nm_i != head) * iterate over the netmap ring, fetch length and update * the corresponding slot in the NIC ring. Some drivers also * need to update the buffer's physical address in the NIC slot * even NS_BUF_CHANGED is not set (PNMB computes the addresses). * * The netmap_reload_map() calls is especially expensive, * even when (as in this case) the tag is 0, so do only * when the buffer has actually changed. * * If possible do not set the report/intr bit on all slots, * but only a few times per ring or when NS_REPORT is set. * * Finally, on 10G and faster drivers, it might be useful * to prefetch the next slot and txr entry. */ nm_i = kring->nr_hwcur; if (nm_i != head) { /* we have new packets to send */ pkt_info_zero(&pi); pi.ipi_segs = txq->ift_segs; pi.ipi_qsidx = kring->ring_id; nic_i = netmap_idx_k2n(kring, nm_i); __builtin_prefetch(&ring->slot[nm_i]); __builtin_prefetch(&txq->ift_sds.ifsd_m[nic_i]); __builtin_prefetch(&txq->ift_sds.ifsd_map[nic_i]); for (n = 0; nm_i != head; n++) { struct netmap_slot *slot = &ring->slot[nm_i]; u_int len = slot->len; uint64_t paddr; void *addr = PNMB(na, slot, &paddr); int flags = (slot->flags & NS_REPORT || nic_i == 0 || nic_i == report_frequency) ? IPI_TX_INTR : 0; /* device-specific */ pi.ipi_len = len; pi.ipi_segs[0].ds_addr = paddr; pi.ipi_segs[0].ds_len = len; pi.ipi_nsegs = 1; pi.ipi_ndescs = 0; pi.ipi_pidx = nic_i; pi.ipi_flags = flags; /* Fill the slot in the NIC ring. 
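/*
 * The flags computation above is cheap TX interrupt moderation:
 * IPI_TX_INTR is requested only for slots carrying NS_REPORT, or at
 * slot 0 and at report_frequency (half the ring), i.e. roughly twice
 * per ring traversal instead of once per packet.
 */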
*/ ctx->isc_txd_encap(ctx->ifc_softc, &pi); DBG_COUNTER_INC(tx_encap); /* prefetch for next round */ __builtin_prefetch(&ring->slot[nm_i + 1]); __builtin_prefetch(&txq->ift_sds.ifsd_m[nic_i + 1]); __builtin_prefetch(&txq->ift_sds.ifsd_map[nic_i + 1]); NM_CHECK_ADDR_LEN(na, addr, len); if (slot->flags & NS_BUF_CHANGED) { /* buffer has changed, reload map */ netmap_reload_map(na, txq->ift_buf_tag, txq->ift_sds.ifsd_map[nic_i], addr); } /* make sure changes to the buffer are synced */ bus_dmamap_sync(txq->ift_buf_tag, txq->ift_sds.ifsd_map[nic_i], BUS_DMASYNC_PREWRITE); slot->flags &= ~(NS_REPORT | NS_BUF_CHANGED); nm_i = nm_next(nm_i, lim); nic_i = nm_next(nic_i, lim); } kring->nr_hwcur = nm_i; /* synchronize the NIC ring */ bus_dmamap_sync(txq->ift_ifdi->idi_tag, txq->ift_ifdi->idi_map, BUS_DMASYNC_PREREAD | BUS_DMASYNC_PREWRITE); /* (re)start the tx unit up to slot nic_i (excluded) */ ctx->isc_txd_flush(ctx->ifc_softc, txq->ift_id, nic_i); } /* * Second part: reclaim buffers for completed transmissions. * * If there are unclaimed buffers, attempt to reclaim them. * If none are reclaimed, and TX IRQs are not in use, do an initial * minimal delay, then trigger the tx handler which will spin in the * group task queue. */ if (kring->nr_hwtail != nm_prev(kring->nr_hwcur, lim)) { if (iflib_tx_credits_update(ctx, txq)) { /* some tx completed, increment avail */ nic_i = txq->ift_cidx_processed; kring->nr_hwtail = nm_prev(netmap_idx_n2k(kring, nic_i), lim); } } if (!(ctx->ifc_flags & IFC_NETMAP_TX_IRQ)) if (kring->nr_hwtail != nm_prev(kring->nr_hwcur, lim)) { callout_reset_on(&txq->ift_timer, hz < 2000 ? 1 : hz / 1000, iflib_timer, txq, txq->ift_timer.c_cpu); } return (0); } /* * Reconcile kernel and user view of the receive ring. * Same as for the txsync, this routine must be efficient. * The caller guarantees a single invocations, but races against * the rest of the driver should be handled here. * * On call, kring->rhead is the first packet that userspace wants * to keep, and kring->rcur is the wakeup point. * The kernel has previously reported packets up to kring->rtail. * * If (flags & NAF_FORCE_READ) also check for incoming packets irrespective * of whether or not we received an interrupt. */ static int iflib_netmap_rxsync(struct netmap_kring *kring, int flags) { struct netmap_adapter *na = kring->na; struct netmap_ring *ring = kring->ring; iflib_fl_t fl; uint32_t nm_i; /* index into the netmap ring */ uint32_t nic_i; /* index into the NIC ring */ u_int i, n; u_int const lim = kring->nkr_num_slots - 1; u_int const head = kring->rhead; int force_update = (flags & NAF_FORCE_READ) || kring->nr_kflags & NKR_PENDINTR; struct if_rxd_info ri; struct ifnet *ifp = na->ifp; if_ctx_t ctx = ifp->if_softc; iflib_rxq_t rxq = &ctx->ifc_rxqs[kring->ring_id]; if (head > lim) return netmap_ring_reinit(kring); /* * XXX netmap_fl_refill() only ever (re)fills free list 0 so far. */ for (i = 0, fl = rxq->ifr_fl; i < rxq->ifr_nfl; i++, fl++) { bus_dmamap_sync(fl->ifl_ifdi->idi_tag, fl->ifl_ifdi->idi_map, BUS_DMASYNC_POSTREAD | BUS_DMASYNC_POSTWRITE); } /* * First part: import newly received packets. * * nm_i is the index of the next free slot in the netmap ring, * nic_i is the index of the next received packet in the NIC ring, * and they may differ in case if_init() has been called while * in netmap mode. 
For the receive ring we have * * nic_i = rxr->next_check; * nm_i = kring->nr_hwtail (previous) * and * nm_i == (nic_i + kring->nkr_hwofs) % ring_size * * rxr->next_check is set to 0 on a ring reinit */ if (netmap_no_pendintr || force_update) { int crclen = iflib_crcstrip ? 0 : 4; int error, avail; for (i = 0; i < rxq->ifr_nfl; i++) { fl = &rxq->ifr_fl[i]; nic_i = fl->ifl_cidx; nm_i = netmap_idx_n2k(kring, nic_i); avail = ctx->isc_rxd_available(ctx->ifc_softc, rxq->ifr_id, nic_i, USHRT_MAX); for (n = 0; avail > 0; n++, avail--) { rxd_info_zero(&ri); ri.iri_frags = rxq->ifr_frags; ri.iri_qsidx = kring->ring_id; ri.iri_ifp = ctx->ifc_ifp; ri.iri_cidx = nic_i; error = ctx->isc_rxd_pkt_get(ctx->ifc_softc, &ri); ring->slot[nm_i].len = error ? 0 : ri.iri_len - crclen; ring->slot[nm_i].flags = 0; bus_dmamap_sync(fl->ifl_buf_tag, fl->ifl_sds.ifsd_map[nic_i], BUS_DMASYNC_POSTREAD); nm_i = nm_next(nm_i, lim); nic_i = nm_next(nic_i, lim); } if (n) { /* update the state variables */ if (netmap_no_pendintr && !force_update) { /* diagnostics */ iflib_rx_miss ++; iflib_rx_miss_bufs += n; } fl->ifl_cidx = nic_i; kring->nr_hwtail = nm_i; } kring->nr_kflags &= ~NKR_PENDINTR; } } /* * Second part: skip past packets that userspace has released. * (kring->nr_hwcur to head excluded), * and make the buffers available for reception. * As usual nm_i is the index in the netmap ring, * nic_i is the index in the NIC ring, and * nm_i == (nic_i + kring->nkr_hwofs) % ring_size */ /* XXX not sure how this will work with multiple free lists */ nm_i = kring->nr_hwcur; return (netmap_fl_refill(rxq, kring, nm_i, false)); } static void iflib_netmap_intr(struct netmap_adapter *na, int onoff) { struct ifnet *ifp = na->ifp; if_ctx_t ctx = ifp->if_softc; CTX_LOCK(ctx); if (onoff) { IFDI_INTR_ENABLE(ctx); } else { IFDI_INTR_DISABLE(ctx); } CTX_UNLOCK(ctx); } static int iflib_netmap_attach(if_ctx_t ctx) { struct netmap_adapter na; if_softc_ctx_t scctx = &ctx->ifc_softc_ctx; bzero(&na, sizeof(na)); na.ifp = ctx->ifc_ifp; na.na_flags = NAF_BDG_MAYSLEEP; MPASS(ctx->ifc_softc_ctx.isc_ntxqsets); MPASS(ctx->ifc_softc_ctx.isc_nrxqsets); na.num_tx_desc = scctx->isc_ntxd[0]; na.num_rx_desc = scctx->isc_nrxd[0]; na.nm_txsync = iflib_netmap_txsync; na.nm_rxsync = iflib_netmap_rxsync; na.nm_register = iflib_netmap_register; na.nm_intr = iflib_netmap_intr; na.num_tx_rings = ctx->ifc_softc_ctx.isc_ntxqsets; na.num_rx_rings = ctx->ifc_softc_ctx.isc_nrxqsets; return (netmap_attach(&na)); } static void iflib_netmap_txq_init(if_ctx_t ctx, iflib_txq_t txq) { struct netmap_adapter *na = NA(ctx->ifc_ifp); struct netmap_slot *slot; slot = netmap_reset(na, NR_TX, txq->ift_id, 0); if (slot == NULL) return; for (int i = 0; i < ctx->ifc_softc_ctx.isc_ntxd[0]; i++) { /* * In netmap mode, set the map for the packet buffer. * NOTE: Some drivers (not this one) also need to set * the physical buffer address in the NIC ring. 
* netmap_idx_n2k() maps a nic index, i, into the corresponding * netmap slot index, si */ int si = netmap_idx_n2k(na->tx_rings[txq->ift_id], i); netmap_load_map(na, txq->ift_buf_tag, txq->ift_sds.ifsd_map[i], NMB(na, slot + si)); } } static void iflib_netmap_rxq_init(if_ctx_t ctx, iflib_rxq_t rxq) { struct netmap_adapter *na = NA(ctx->ifc_ifp); struct netmap_kring *kring = na->rx_rings[rxq->ifr_id]; struct netmap_slot *slot; uint32_t nm_i; slot = netmap_reset(na, NR_RX, rxq->ifr_id, 0); if (slot == NULL) return; nm_i = netmap_idx_n2k(kring, 0); netmap_fl_refill(rxq, kring, nm_i, true); } static void iflib_netmap_timer_adjust(if_ctx_t ctx, iflib_txq_t txq, uint32_t *reset_on) { struct netmap_kring *kring; uint16_t txqid; txqid = txq->ift_id; kring = NA(ctx->ifc_ifp)->tx_rings[txqid]; if (kring->nr_hwcur != nm_next(kring->nr_hwtail, kring->nkr_num_slots - 1)) { bus_dmamap_sync(txq->ift_ifdi->idi_tag, txq->ift_ifdi->idi_map, BUS_DMASYNC_POSTREAD); if (ctx->isc_txd_credits_update(ctx->ifc_softc, txqid, false)) netmap_tx_irq(ctx->ifc_ifp, txqid); if (!(ctx->ifc_flags & IFC_NETMAP_TX_IRQ)) { if (hz < 2000) *reset_on = 1; else *reset_on = hz / 1000; } } } #define iflib_netmap_detach(ifp) netmap_detach(ifp) #else #define iflib_netmap_txq_init(ctx, txq) #define iflib_netmap_rxq_init(ctx, rxq) #define iflib_netmap_detach(ifp) #define iflib_netmap_attach(ctx) (0) #define netmap_rx_irq(ifp, qid, budget) (0) #define netmap_tx_irq(ifp, qid) do {} while (0) #define iflib_netmap_timer_adjust(ctx, txq, reset_on) #endif #if defined(__i386__) || defined(__amd64__) static __inline void prefetch(void *x) { __asm volatile("prefetcht0 %0" :: "m" (*(unsigned long *)x)); } static __inline void prefetch2cachelines(void *x) { __asm volatile("prefetcht0 %0" :: "m" (*(unsigned long *)x)); #if (CACHE_LINE_SIZE < 128) __asm volatile("prefetcht0 %0" :: "m" (*(((unsigned long *)x)+CACHE_LINE_SIZE/(sizeof(unsigned long))))); #endif } #else #define prefetch(x) #define prefetch2cachelines(x) #endif static void iflib_gen_mac(if_ctx_t ctx) { struct thread *td; MD5_CTX mdctx; char uuid[HOSTUUIDLEN+1]; char buf[HOSTUUIDLEN+16]; uint8_t *mac; unsigned char digest[16]; td = curthread; mac = ctx->ifc_mac; uuid[HOSTUUIDLEN] = 0; bcopy(td->td_ucred->cr_prison->pr_hostuuid, uuid, HOSTUUIDLEN); snprintf(buf, HOSTUUIDLEN+16, "%s-%s", uuid, device_get_nameunit(ctx->ifc_dev)); /* * Generate a pseudo-random, deterministic MAC * address based on the UUID and unit number. * The FreeBSD Foundation OUI of 58-9C-FC is used. 
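/*
 * User-space sketch of the derivation above: hash the host UUID plus
 * the device name and keep three digest bytes under a fixed OUI. It
 * assumes libmd's MD5Init()/MD5Update()/MD5Final() (md5.h, link with
 * -lmd on FreeBSD); gen_mac() is illustrative, not the kernel code.
 */
#include <md5.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static void
gen_mac(const char *uuid, const char *nameunit, uint8_t mac[6])
{
	MD5_CTX ctx;
	unsigned char digest[16];
	char buf[256];

	snprintf(buf, sizeof(buf), "%s-%s", uuid, nameunit);
	MD5Init(&ctx);
	MD5Update(&ctx, (const unsigned char *)buf, strlen(buf));
	MD5Final(digest, &ctx);
	mac[0] = 0x58; mac[1] = 0x9c; mac[2] = 0xfc;	/* OUI 58-9C-FC */
	mac[3] = digest[0]; mac[4] = digest[1]; mac[5] = digest[2];
}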
*/ MD5Init(&mdctx); MD5Update(&mdctx, buf, strlen(buf)); MD5Final(digest, &mdctx); mac[0] = 0x58; mac[1] = 0x9C; mac[2] = 0xFC; mac[3] = digest[0]; mac[4] = digest[1]; mac[5] = digest[2]; } static void iru_init(if_rxd_update_t iru, iflib_rxq_t rxq, uint8_t flid) { iflib_fl_t fl; fl = &rxq->ifr_fl[flid]; iru->iru_paddrs = fl->ifl_bus_addrs; iru->iru_vaddrs = &fl->ifl_vm_addrs[0]; iru->iru_idxs = fl->ifl_rxd_idxs; iru->iru_qsidx = rxq->ifr_id; iru->iru_buf_size = fl->ifl_buf_size; iru->iru_flidx = fl->ifl_id; } static void _iflib_dmamap_cb(void *arg, bus_dma_segment_t *segs, int nseg, int err) { if (err) return; *(bus_addr_t *) arg = segs[0].ds_addr; } int iflib_dma_alloc_align(if_ctx_t ctx, int size, int align, iflib_dma_info_t dma, int mapflags) { int err; device_t dev = ctx->ifc_dev; err = bus_dma_tag_create(bus_get_dma_tag(dev), /* parent */ align, 0, /* alignment, bounds */ BUS_SPACE_MAXADDR, /* lowaddr */ BUS_SPACE_MAXADDR, /* highaddr */ NULL, NULL, /* filter, filterarg */ size, /* maxsize */ 1, /* nsegments */ size, /* maxsegsize */ BUS_DMA_ALLOCNOW, /* flags */ NULL, /* lockfunc */ NULL, /* lockarg */ &dma->idi_tag); if (err) { device_printf(dev, "%s: bus_dma_tag_create failed: %d\n", __func__, err); goto fail_0; } err = bus_dmamem_alloc(dma->idi_tag, (void**) &dma->idi_vaddr, BUS_DMA_NOWAIT | BUS_DMA_COHERENT | BUS_DMA_ZERO, &dma->idi_map); if (err) { device_printf(dev, "%s: bus_dmamem_alloc(%ju) failed: %d\n", __func__, (uintmax_t)size, err); goto fail_1; } dma->idi_paddr = IF_BAD_DMA; err = bus_dmamap_load(dma->idi_tag, dma->idi_map, dma->idi_vaddr, size, _iflib_dmamap_cb, &dma->idi_paddr, mapflags | BUS_DMA_NOWAIT); if (err || dma->idi_paddr == IF_BAD_DMA) { device_printf(dev, "%s: bus_dmamap_load failed: %d\n", __func__, err); goto fail_2; } dma->idi_size = size; return (0); fail_2: bus_dmamem_free(dma->idi_tag, dma->idi_vaddr, dma->idi_map); fail_1: bus_dma_tag_destroy(dma->idi_tag); fail_0: dma->idi_tag = NULL; return (err); } int iflib_dma_alloc(if_ctx_t ctx, int size, iflib_dma_info_t dma, int mapflags) { if_shared_ctx_t sctx = ctx->ifc_sctx; KASSERT(sctx->isc_q_align != 0, ("alignment value not initialized")); return (iflib_dma_alloc_align(ctx, size, sctx->isc_q_align, dma, mapflags)); } int iflib_dma_alloc_multi(if_ctx_t ctx, int *sizes, iflib_dma_info_t *dmalist, int mapflags, int count) { int i, err; iflib_dma_info_t *dmaiter; dmaiter = dmalist; for (i = 0; i < count; i++, dmaiter++) { if ((err = iflib_dma_alloc(ctx, sizes[i], *dmaiter, mapflags)) != 0) break; } if (err) iflib_dma_free_multi(dmalist, i); return (err); } void iflib_dma_free(iflib_dma_info_t dma) { if (dma->idi_tag == NULL) return; if (dma->idi_paddr != IF_BAD_DMA) { bus_dmamap_sync(dma->idi_tag, dma->idi_map, BUS_DMASYNC_POSTREAD | BUS_DMASYNC_POSTWRITE); bus_dmamap_unload(dma->idi_tag, dma->idi_map); dma->idi_paddr = IF_BAD_DMA; } if (dma->idi_vaddr != NULL) { bus_dmamem_free(dma->idi_tag, dma->idi_vaddr, dma->idi_map); dma->idi_vaddr = NULL; } bus_dma_tag_destroy(dma->idi_tag); dma->idi_tag = NULL; } void iflib_dma_free_multi(iflib_dma_info_t *dmalist, int count) { int i; iflib_dma_info_t *dmaiter = dmalist; for (i = 0; i < count; i++, dmaiter++) iflib_dma_free(*dmaiter); } #ifdef EARLY_AP_STARTUP static const int iflib_started = 1; #else /* * We used to abuse the smp_started flag to decide if the queues have been * fully initialized (by late taskqgroup_adjust() calls in a SYSINIT()). * That gave bad races, since the SYSINIT() runs strictly after smp_started * is set. 
Run a SYSINIT() strictly after that to just set a usable * completion flag. */ static int iflib_started; static void iflib_record_started(void *arg) { iflib_started = 1; } SYSINIT(iflib_record_started, SI_SUB_SMP + 1, SI_ORDER_FIRST, iflib_record_started, NULL); #endif static int iflib_fast_intr(void *arg) { iflib_filter_info_t info = arg; struct grouptask *gtask = info->ifi_task; + int result; + if (!iflib_started) - return (FILTER_HANDLED); + return (FILTER_STRAY); DBG_COUNTER_INC(fast_intrs); - if (info->ifi_filter != NULL && info->ifi_filter(info->ifi_filter_arg) == FILTER_HANDLED) - return (FILTER_HANDLED); + if (info->ifi_filter != NULL) { + result = info->ifi_filter(info->ifi_filter_arg); + if ((result & FILTER_SCHEDULE_THREAD) == 0) + return (result); + } GROUPTASK_ENQUEUE(gtask); return (FILTER_HANDLED); } static int iflib_fast_intr_rxtx(void *arg) { iflib_filter_info_t info = arg; struct grouptask *gtask = info->ifi_task; if_ctx_t ctx; iflib_rxq_t rxq = (iflib_rxq_t)info->ifi_ctx; iflib_txq_t txq; void *sc; - int i, cidx; + int i, cidx, result; qidx_t txqid; if (!iflib_started) - return (FILTER_HANDLED); + return (FILTER_STRAY); DBG_COUNTER_INC(fast_intrs); - if (info->ifi_filter != NULL && info->ifi_filter(info->ifi_filter_arg) == FILTER_HANDLED) - return (FILTER_HANDLED); + if (info->ifi_filter != NULL) { + result = info->ifi_filter(info->ifi_filter_arg); + if ((result & FILTER_SCHEDULE_THREAD) == 0) + return (result); + } ctx = rxq->ifr_ctx; sc = ctx->ifc_softc; MPASS(rxq->ifr_ntxqirq); for (i = 0; i < rxq->ifr_ntxqirq; i++) { txqid = rxq->ifr_txqid[i]; txq = &ctx->ifc_txqs[txqid]; bus_dmamap_sync(txq->ift_ifdi->idi_tag, txq->ift_ifdi->idi_map, BUS_DMASYNC_POSTREAD); if (!ctx->isc_txd_credits_update(sc, txqid, false)) { IFDI_TX_QUEUE_INTR_ENABLE(ctx, txqid); continue; } GROUPTASK_ENQUEUE(&txq->ift_task); } if (ctx->ifc_sctx->isc_flags & IFLIB_HAS_RXCQ) cidx = rxq->ifr_cq_cidx; else cidx = rxq->ifr_fl[0].ifl_cidx; if (iflib_rxd_avail(ctx, rxq, cidx, 1)) GROUPTASK_ENQUEUE(gtask); else { IFDI_RX_QUEUE_INTR_ENABLE(ctx, rxq->ifr_id); DBG_COUNTER_INC(rx_intr_enables); } return (FILTER_HANDLED); } static int iflib_fast_intr_ctx(void *arg) { iflib_filter_info_t info = arg; struct grouptask *gtask = info->ifi_task; + int result; if (!iflib_started) - return (FILTER_HANDLED); + return (FILTER_STRAY); DBG_COUNTER_INC(fast_intrs); - if (info->ifi_filter != NULL && info->ifi_filter(info->ifi_filter_arg) == FILTER_HANDLED) - return (FILTER_HANDLED); + if (info->ifi_filter != NULL) { + result = info->ifi_filter(info->ifi_filter_arg); + if ((result & FILTER_SCHEDULE_THREAD) == 0) + return (result); + } GROUPTASK_ENQUEUE(gtask); return (FILTER_HANDLED); } static int _iflib_irq_alloc(if_ctx_t ctx, if_irq_t irq, int rid, driver_filter_t filter, driver_intr_t handler, void *arg, const char *name) { int rc, flags; struct resource *res; void *tag = NULL; device_t dev = ctx->ifc_dev; flags = RF_ACTIVE; if (ctx->ifc_flags & IFC_LEGACY) flags |= RF_SHAREABLE; MPASS(rid < 512); irq->ii_rid = rid; res = bus_alloc_resource_any(dev, SYS_RES_IRQ, &irq->ii_rid, flags); if (res == NULL) { device_printf(dev, "failed to allocate IRQ for rid %d, name %s.\n", rid, name); return (ENOMEM); } irq->ii_res = res; KASSERT(filter == NULL || handler == NULL, ("filter and handler can't both be non-NULL")); rc = bus_setup_intr(dev, res, INTR_MPSAFE | INTR_TYPE_NET, filter, handler, arg, &tag); if (rc != 0) { device_printf(dev, "failed to setup interrupt for rid %d, name %s: %d\n", rid, name ? 
name : "unknown", rc);
		return (rc);
	} else if (name)
		bus_describe_intr(dev, res, tag, "%s", name);
	irq->ii_tag = tag;
	return (0);
}


/*********************************************************************
 *
 *  Allocate DMA resources for TX buffers as well as memory for the TX
 *  mbuf map.  TX DMA maps (non-TSO/TSO) and TX mbuf map are kept in an
 *  iflib_sw_tx_desc_array structure, storing all the information that
 *  is needed to transmit a packet on the wire.  This is called only
 *  once at attach; setup is done on every reset.
 *
 **********************************************************************/
static int
iflib_txsd_alloc(iflib_txq_t txq)
{
	if_ctx_t ctx = txq->ift_ctx;
	if_shared_ctx_t sctx = ctx->ifc_sctx;
	if_softc_ctx_t scctx = &ctx->ifc_softc_ctx;
	device_t dev = ctx->ifc_dev;
	bus_size_t tsomaxsize;
	int err, nsegments, ntsosegments;
	bool tso;

	nsegments = scctx->isc_tx_nsegments;
	ntsosegments = scctx->isc_tx_tso_segments_max;
	tsomaxsize = scctx->isc_tx_tso_size_max;
	if (if_getcapabilities(ctx->ifc_ifp) & IFCAP_VLAN_MTU)
		tsomaxsize += sizeof(struct ether_vlan_header);
	MPASS(scctx->isc_ntxd[0] > 0);
	MPASS(scctx->isc_ntxd[txq->ift_br_offset] > 0);
	MPASS(nsegments > 0);
	if (if_getcapabilities(ctx->ifc_ifp) & IFCAP_TSO) {
		MPASS(ntsosegments > 0);
		MPASS(sctx->isc_tso_maxsize >= tsomaxsize);
	}

	/*
	 * Set up DMA tags for TX buffers.
	 */
	if ((err = bus_dma_tag_create(bus_get_dma_tag(dev),
			       1, 0,			/* alignment, bounds */
			       BUS_SPACE_MAXADDR,	/* lowaddr */
			       BUS_SPACE_MAXADDR,	/* highaddr */
			       NULL, NULL,		/* filter, filterarg */
			       sctx->isc_tx_maxsize,	/* maxsize */
			       nsegments,		/* nsegments */
			       sctx->isc_tx_maxsegsize,	/* maxsegsize */
			       0,			/* flags */
			       NULL,			/* lockfunc */
			       NULL,			/* lockfuncarg */
			       &txq->ift_buf_tag))) {
		device_printf(dev, "Unable to allocate TX DMA tag: %d\n", err);
		device_printf(dev, "maxsize: %ju nsegments: %d maxsegsize: %ju\n",
		    (uintmax_t)sctx->isc_tx_maxsize, nsegments, (uintmax_t)sctx->isc_tx_maxsegsize);
		goto fail;
	}
	tso = (if_getcapabilities(ctx->ifc_ifp) & IFCAP_TSO) != 0;
	if (tso && (err = bus_dma_tag_create(bus_get_dma_tag(dev),
			       1, 0,			/* alignment, bounds */
			       BUS_SPACE_MAXADDR,	/* lowaddr */
			       BUS_SPACE_MAXADDR,	/* highaddr */
			       NULL, NULL,		/* filter, filterarg */
			       tsomaxsize,		/* maxsize */
			       ntsosegments,		/* nsegments */
			       sctx->isc_tso_maxsegsize,/* maxsegsize */
			       0,			/* flags */
			       NULL,			/* lockfunc */
			       NULL,			/* lockfuncarg */
			       &txq->ift_tso_buf_tag))) {
		device_printf(dev, "Unable to allocate TSO TX DMA tag: %d\n",
		    err);
		goto fail;
	}

	/* Allocate memory for the TX mbuf map. */
	if (!(txq->ift_sds.ifsd_m =
	    (struct mbuf **) malloc(sizeof(struct mbuf *) *
	    scctx->isc_ntxd[txq->ift_br_offset], M_IFLIB, M_NOWAIT | M_ZERO))) {
		device_printf(dev, "Unable to allocate TX mbuf map memory\n");
		err = ENOMEM;
		goto fail;
	}

	/*
	 * Create the DMA maps for TX buffers.
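	 * The per-slot software state is kept in parallel arrays indexed
	 * by the descriptor index, roughly:
	 *
	 *	ifsd_m[i]	- mbuf chain loaded into slot i
	 *	ifsd_map[i]	- DMA map for slot i
	 *	ifsd_tso_map[i]	- DMA map used instead when slot i carries
	 *			  a TSO packet (allocated only when TSO is
	 *			  supported)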
*/ if ((txq->ift_sds.ifsd_map = (bus_dmamap_t *)malloc( sizeof(bus_dmamap_t) * scctx->isc_ntxd[txq->ift_br_offset], M_IFLIB, M_NOWAIT | M_ZERO)) == NULL) { device_printf(dev, "Unable to allocate TX buffer DMA map memory\n"); err = ENOMEM; goto fail; } if (tso && (txq->ift_sds.ifsd_tso_map = (bus_dmamap_t *)malloc( sizeof(bus_dmamap_t) * scctx->isc_ntxd[txq->ift_br_offset], M_IFLIB, M_NOWAIT | M_ZERO)) == NULL) { device_printf(dev, "Unable to allocate TSO TX buffer map memory\n"); err = ENOMEM; goto fail; } for (int i = 0; i < scctx->isc_ntxd[txq->ift_br_offset]; i++) { err = bus_dmamap_create(txq->ift_buf_tag, 0, &txq->ift_sds.ifsd_map[i]); if (err != 0) { device_printf(dev, "Unable to create TX DMA map\n"); goto fail; } if (!tso) continue; err = bus_dmamap_create(txq->ift_tso_buf_tag, 0, &txq->ift_sds.ifsd_tso_map[i]); if (err != 0) { device_printf(dev, "Unable to create TSO TX DMA map\n"); goto fail; } } return (0); fail: /* We free all, it handles case where we are in the middle */ iflib_tx_structures_free(ctx); return (err); } static void iflib_txsd_destroy(if_ctx_t ctx, iflib_txq_t txq, int i) { bus_dmamap_t map; map = NULL; if (txq->ift_sds.ifsd_map != NULL) map = txq->ift_sds.ifsd_map[i]; if (map != NULL) { bus_dmamap_sync(txq->ift_buf_tag, map, BUS_DMASYNC_POSTWRITE); bus_dmamap_unload(txq->ift_buf_tag, map); bus_dmamap_destroy(txq->ift_buf_tag, map); txq->ift_sds.ifsd_map[i] = NULL; } map = NULL; if (txq->ift_sds.ifsd_tso_map != NULL) map = txq->ift_sds.ifsd_tso_map[i]; if (map != NULL) { bus_dmamap_sync(txq->ift_tso_buf_tag, map, BUS_DMASYNC_POSTWRITE); bus_dmamap_unload(txq->ift_tso_buf_tag, map); bus_dmamap_destroy(txq->ift_tso_buf_tag, map); txq->ift_sds.ifsd_tso_map[i] = NULL; } } static void iflib_txq_destroy(iflib_txq_t txq) { if_ctx_t ctx = txq->ift_ctx; for (int i = 0; i < txq->ift_size; i++) iflib_txsd_destroy(ctx, txq, i); if (txq->ift_sds.ifsd_map != NULL) { free(txq->ift_sds.ifsd_map, M_IFLIB); txq->ift_sds.ifsd_map = NULL; } if (txq->ift_sds.ifsd_tso_map != NULL) { free(txq->ift_sds.ifsd_tso_map, M_IFLIB); txq->ift_sds.ifsd_tso_map = NULL; } if (txq->ift_sds.ifsd_m != NULL) { free(txq->ift_sds.ifsd_m, M_IFLIB); txq->ift_sds.ifsd_m = NULL; } if (txq->ift_buf_tag != NULL) { bus_dma_tag_destroy(txq->ift_buf_tag); txq->ift_buf_tag = NULL; } if (txq->ift_tso_buf_tag != NULL) { bus_dma_tag_destroy(txq->ift_tso_buf_tag); txq->ift_tso_buf_tag = NULL; } } static void iflib_txsd_free(if_ctx_t ctx, iflib_txq_t txq, int i) { struct mbuf **mp; mp = &txq->ift_sds.ifsd_m[i]; if (*mp == NULL) return; if (txq->ift_sds.ifsd_map != NULL) { bus_dmamap_sync(txq->ift_buf_tag, txq->ift_sds.ifsd_map[i], BUS_DMASYNC_POSTWRITE); bus_dmamap_unload(txq->ift_buf_tag, txq->ift_sds.ifsd_map[i]); } if (txq->ift_sds.ifsd_tso_map != NULL) { bus_dmamap_sync(txq->ift_tso_buf_tag, txq->ift_sds.ifsd_tso_map[i], BUS_DMASYNC_POSTWRITE); bus_dmamap_unload(txq->ift_tso_buf_tag, txq->ift_sds.ifsd_tso_map[i]); } m_free(*mp); DBG_COUNTER_INC(tx_frees); *mp = NULL; } static int iflib_txq_setup(iflib_txq_t txq) { if_ctx_t ctx = txq->ift_ctx; if_softc_ctx_t scctx = &ctx->ifc_softc_ctx; if_shared_ctx_t sctx = ctx->ifc_sctx; iflib_dma_info_t di; int i; /* Set number of descriptors available */ txq->ift_qstatus = IFLIB_QUEUE_IDLE; /* XXX make configurable */ txq->ift_update_freq = IFLIB_DEFAULT_TX_UPDATE_FREQ; /* Reset indices */ txq->ift_cidx_processed = 0; txq->ift_pidx = txq->ift_cidx = txq->ift_npending = 0; txq->ift_size = scctx->isc_ntxd[txq->ift_br_offset]; for (i = 0, di = txq->ift_ifdi; i < sctx->isc_ntxqs; 
	    i++, di++)
		bzero((void *)di->idi_vaddr, di->idi_size);

	IFDI_TXQ_SETUP(ctx, txq->ift_id);
	for (i = 0, di = txq->ift_ifdi; i < sctx->isc_ntxqs; i++, di++)
		bus_dmamap_sync(di->idi_tag, di->idi_map,
		    BUS_DMASYNC_PREREAD | BUS_DMASYNC_PREWRITE);
	return (0);
}

/*********************************************************************
 *
 *  Allocate DMA resources for RX buffers as well as memory for the RX
 *  mbuf map, direct RX cluster pointer map and RX cluster bus address
 *  map.  RX DMA map, RX mbuf map, direct RX cluster pointer map and
 *  RX cluster map are kept in an iflib_sw_rx_desc_array structure.
 *  Since we use one entry in iflib_sw_rx_desc_array per received
 *  packet, the maximum number of entries we'll need is equal to the
 *  number of hardware receive descriptors that we've allocated.
 *
 **********************************************************************/
static int
iflib_rxsd_alloc(iflib_rxq_t rxq)
{
	if_ctx_t ctx = rxq->ifr_ctx;
	if_shared_ctx_t sctx = ctx->ifc_sctx;
	if_softc_ctx_t scctx = &ctx->ifc_softc_ctx;
	device_t dev = ctx->ifc_dev;
	iflib_fl_t fl;
	int err;

	MPASS(scctx->isc_nrxd[0] > 0);
	MPASS(scctx->isc_nrxd[rxq->ifr_fl_offset] > 0);

	fl = rxq->ifr_fl;
	for (int i = 0; i < rxq->ifr_nfl; i++, fl++) {
		fl->ifl_size = scctx->isc_nrxd[rxq->ifr_fl_offset]; /* this isn't necessarily the same */
		/* Set up DMA tag for RX buffers. */
		err = bus_dma_tag_create(bus_get_dma_tag(dev), /* parent */
					 1, 0,			/* alignment, bounds */
					 BUS_SPACE_MAXADDR,	/* lowaddr */
					 BUS_SPACE_MAXADDR,	/* highaddr */
					 NULL, NULL,		/* filter, filterarg */
					 sctx->isc_rx_maxsize,	/* maxsize */
					 sctx->isc_rx_nsegments, /* nsegments */
					 sctx->isc_rx_maxsegsize, /* maxsegsize */
					 0,			/* flags */
					 NULL,			/* lockfunc */
					 NULL,			/* lockarg */
					 &fl->ifl_buf_tag);
		if (err) {
			device_printf(dev,
			    "Unable to allocate RX DMA tag: %d\n", err);
			goto fail;
		}

		/* Allocate memory for the RX mbuf map. */
		if (!(fl->ifl_sds.ifsd_m =
		    (struct mbuf **) malloc(sizeof(struct mbuf *) *
		    scctx->isc_nrxd[rxq->ifr_fl_offset], M_IFLIB, M_NOWAIT | M_ZERO))) {
			device_printf(dev,
			    "Unable to allocate RX mbuf map memory\n");
			err = ENOMEM;
			goto fail;
		}

		/* Allocate memory for the direct RX cluster pointer map. */
		if (!(fl->ifl_sds.ifsd_cl =
		    (caddr_t *) malloc(sizeof(caddr_t) *
		    scctx->isc_nrxd[rxq->ifr_fl_offset], M_IFLIB, M_NOWAIT | M_ZERO))) {
			device_printf(dev,
			    "Unable to allocate RX cluster map memory\n");
			err = ENOMEM;
			goto fail;
		}

		/* Allocate memory for the RX cluster bus address map. */
		if (!(fl->ifl_sds.ifsd_ba =
		    (bus_addr_t *) malloc(sizeof(bus_addr_t) *
		    scctx->isc_nrxd[rxq->ifr_fl_offset], M_IFLIB, M_NOWAIT | M_ZERO))) {
			device_printf(dev,
			    "Unable to allocate RX bus address map memory\n");
			err = ENOMEM;
			goto fail;
		}

		/*
		 * Create the DMA maps for RX buffers.
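		 * As on the TX side, one map is created per hardware
		 * receive descriptor (fl->ifl_sds.ifsd_map[i]); each
		 * received packet consumes exactly one free-list entry,
		 * so the arrays above are sized to the descriptor count.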
		 */
		if (!(fl->ifl_sds.ifsd_map =
		    (bus_dmamap_t *) malloc(sizeof(bus_dmamap_t) *
		    scctx->isc_nrxd[rxq->ifr_fl_offset], M_IFLIB, M_NOWAIT | M_ZERO))) {
			device_printf(dev,
			    "Unable to allocate RX buffer DMA map memory\n");
			err = ENOMEM;
			goto fail;
		}
		for (int i = 0; i < scctx->isc_nrxd[rxq->ifr_fl_offset]; i++) {
			err = bus_dmamap_create(fl->ifl_buf_tag, 0,
			    &fl->ifl_sds.ifsd_map[i]);
			if (err != 0) {
				device_printf(dev,
				    "Unable to create RX buffer DMA map\n");
				goto fail;
			}
		}
	}
	return (0);

fail:
	iflib_rx_structures_free(ctx);
	return (err);
}


/*
 * Internal service routines
 */

struct rxq_refill_cb_arg {
	int               error;
	bus_dma_segment_t seg;
	int               nseg;
};

static void
_rxq_refill_cb(void *arg, bus_dma_segment_t *segs, int nseg, int error)
{
	struct rxq_refill_cb_arg *cb_arg = arg;

	cb_arg->error = error;
	cb_arg->seg = segs[0];
	cb_arg->nseg = nseg;
}

/**
 * _iflib_fl_refill - refill a free-buffer list
 * @ctx: the iflib context
 * @fl: the free list to refill
 * @count: the number of new buffers to allocate
 *
 * (Re)populate a free-buffer list with up to @count new packet buffers.
 * The caller must assure that @count does not exceed the free list's
 * capacity.
 */
static void
_iflib_fl_refill(if_ctx_t ctx, iflib_fl_t fl, int count)
{
	struct if_rxd_update iru;
	struct rxq_refill_cb_arg cb_arg;
	struct mbuf *m;
	caddr_t cl, *sd_cl;
	struct mbuf **sd_m;
	bus_dmamap_t *sd_map;
	bus_addr_t bus_addr, *sd_ba;
	int err, frag_idx, i, idx, n, pidx;
	qidx_t credits;

	sd_m = fl->ifl_sds.ifsd_m;
	sd_map = fl->ifl_sds.ifsd_map;
	sd_cl = fl->ifl_sds.ifsd_cl;
	sd_ba = fl->ifl_sds.ifsd_ba;
	pidx = fl->ifl_pidx;
	idx = pidx;
	frag_idx = fl->ifl_fragidx;
	credits = fl->ifl_credits;

	i = 0;
	n = count;
	MPASS(n > 0);
	MPASS(credits + n <= fl->ifl_size);

	if (pidx < fl->ifl_cidx)
		MPASS(pidx + n <= fl->ifl_cidx);
	if (pidx == fl->ifl_cidx && (credits < fl->ifl_size))
		MPASS(fl->ifl_gen == 0);
	if (pidx > fl->ifl_cidx)
		MPASS(n <= fl->ifl_size - pidx + fl->ifl_cidx);

	DBG_COUNTER_INC(fl_refills);
	if (n > 8)
		DBG_COUNTER_INC(fl_refills_large);
	iru_init(&iru, fl->ifl_rxq, fl->ifl_id);
	while (n--) {
		/*
		 * We allocate an uninitialized mbuf + cluster; the mbuf is
		 * initialized after rx.
		 *
		 * If the cluster is still set then we know a minimum sized
		 * packet was received
		 */
		bit_ffc_at(fl->ifl_rx_bitmap, frag_idx, fl->ifl_size,
		    &frag_idx);
		if (frag_idx < 0)
			bit_ffc(fl->ifl_rx_bitmap, fl->ifl_size, &frag_idx);
		MPASS(frag_idx >= 0);
		if ((cl = sd_cl[frag_idx]) == NULL) {
			if ((cl = m_cljget(NULL, M_NOWAIT, fl->ifl_buf_size)) == NULL)
				break;

			cb_arg.error = 0;
			MPASS(sd_map != NULL);
			err = bus_dmamap_load(fl->ifl_buf_tag, sd_map[frag_idx],
			    cl, fl->ifl_buf_size, _rxq_refill_cb, &cb_arg,
			    BUS_DMA_NOWAIT);
			if (err != 0 || cb_arg.error) {
				/*
				 * !zone_pack ?
*/ if (fl->ifl_zone == zone_pack) uma_zfree(fl->ifl_zone, cl); break; } sd_ba[frag_idx] = bus_addr = cb_arg.seg.ds_addr; sd_cl[frag_idx] = cl; #if MEMORY_LOGGING fl->ifl_cl_enqueued++; #endif } else { bus_addr = sd_ba[frag_idx]; } bus_dmamap_sync(fl->ifl_buf_tag, sd_map[frag_idx], BUS_DMASYNC_PREREAD); MPASS(sd_m[frag_idx] == NULL); if ((m = m_gethdr(M_NOWAIT, MT_NOINIT)) == NULL) { break; } sd_m[frag_idx] = m; bit_set(fl->ifl_rx_bitmap, frag_idx); #if MEMORY_LOGGING fl->ifl_m_enqueued++; #endif DBG_COUNTER_INC(rx_allocs); fl->ifl_rxd_idxs[i] = frag_idx; fl->ifl_bus_addrs[i] = bus_addr; fl->ifl_vm_addrs[i] = cl; credits++; i++; MPASS(credits <= fl->ifl_size); if (++idx == fl->ifl_size) { fl->ifl_gen = 1; idx = 0; } if (n == 0 || i == IFLIB_MAX_RX_REFRESH) { iru.iru_pidx = pidx; iru.iru_count = i; ctx->isc_rxd_refill(ctx->ifc_softc, &iru); i = 0; pidx = idx; fl->ifl_pidx = idx; fl->ifl_credits = credits; } } if (i) { iru.iru_pidx = pidx; iru.iru_count = i; ctx->isc_rxd_refill(ctx->ifc_softc, &iru); fl->ifl_pidx = idx; fl->ifl_credits = credits; } DBG_COUNTER_INC(rxd_flush); if (fl->ifl_pidx == 0) pidx = fl->ifl_size - 1; else pidx = fl->ifl_pidx - 1; bus_dmamap_sync(fl->ifl_ifdi->idi_tag, fl->ifl_ifdi->idi_map, BUS_DMASYNC_PREREAD | BUS_DMASYNC_PREWRITE); ctx->isc_rxd_flush(ctx->ifc_softc, fl->ifl_rxq->ifr_id, fl->ifl_id, pidx); fl->ifl_fragidx = frag_idx; } static __inline void __iflib_fl_refill_lt(if_ctx_t ctx, iflib_fl_t fl, int max) { /* we avoid allowing pidx to catch up with cidx as it confuses ixl */ int32_t reclaimable = fl->ifl_size - fl->ifl_credits - 1; #ifdef INVARIANTS int32_t delta = fl->ifl_size - get_inuse(fl->ifl_size, fl->ifl_cidx, fl->ifl_pidx, fl->ifl_gen) - 1; #endif MPASS(fl->ifl_credits <= fl->ifl_size); MPASS(reclaimable == delta); if (reclaimable > 0) _iflib_fl_refill(ctx, fl, min(max, reclaimable)); } uint8_t iflib_in_detach(if_ctx_t ctx) { bool in_detach; STATE_LOCK(ctx); in_detach = !!(ctx->ifc_flags & IFC_IN_DETACH); STATE_UNLOCK(ctx); return (in_detach); } static void iflib_fl_bufs_free(iflib_fl_t fl) { iflib_dma_info_t idi = fl->ifl_ifdi; bus_dmamap_t sd_map; uint32_t i; for (i = 0; i < fl->ifl_size; i++) { struct mbuf **sd_m = &fl->ifl_sds.ifsd_m[i]; caddr_t *sd_cl = &fl->ifl_sds.ifsd_cl[i]; if (*sd_cl != NULL) { sd_map = fl->ifl_sds.ifsd_map[i]; bus_dmamap_sync(fl->ifl_buf_tag, sd_map, BUS_DMASYNC_POSTREAD); bus_dmamap_unload(fl->ifl_buf_tag, sd_map); if (*sd_cl != NULL) uma_zfree(fl->ifl_zone, *sd_cl); // XXX: Should this get moved out? if (iflib_in_detach(fl->ifl_rxq->ifr_ctx)) bus_dmamap_destroy(fl->ifl_buf_tag, sd_map); if (*sd_m != NULL) { m_init(*sd_m, M_NOWAIT, MT_DATA, 0); uma_zfree(zone_mbuf, *sd_m); } } else { MPASS(*sd_cl == NULL); MPASS(*sd_m == NULL); } #if MEMORY_LOGGING fl->ifl_m_dequeued++; fl->ifl_cl_dequeued++; #endif *sd_cl = NULL; *sd_m = NULL; } #ifdef INVARIANTS for (i = 0; i < fl->ifl_size; i++) { MPASS(fl->ifl_sds.ifsd_cl[i] == NULL); MPASS(fl->ifl_sds.ifsd_m[i] == NULL); } #endif /* * Reset free list values */ fl->ifl_credits = fl->ifl_cidx = fl->ifl_pidx = fl->ifl_gen = fl->ifl_fragidx = 0; bzero(idi->idi_vaddr, idi->idi_size); } /********************************************************************* * * Initialize a receive ring and its buffers. 
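 * The free-list cluster size is derived from the maximum frame size,
 * e.g. (when jumbo clusters are available):
 *
 *	isc_max_frame_size <= 2048 -> MCLBYTES
 *	isc_max_frame_size <= 4096 -> MJUMPAGESIZE
 *	isc_max_frame_size <= 9216 -> MJUM9BYTES
 *	larger                     -> MJUM16BYTES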
 **********************************************************************/
static int
iflib_fl_setup(iflib_fl_t fl)
{
	iflib_rxq_t rxq = fl->ifl_rxq;
	if_ctx_t ctx = rxq->ifr_ctx;
	if_softc_ctx_t sctx = &ctx->ifc_softc_ctx;

	bit_nclear(fl->ifl_rx_bitmap, 0, fl->ifl_size - 1);
	/*
	** Free current RX buffer structs and their mbufs
	*/
	iflib_fl_bufs_free(fl);
	/* Now replenish the mbufs */
	MPASS(fl->ifl_credits == 0);
	/*
	 * XXX don't set the max_frame_size to larger
	 * than the hardware can handle
	 */
	if (sctx->isc_max_frame_size <= 2048)
		fl->ifl_buf_size = MCLBYTES;
#ifndef CONTIGMALLOC_WORKS
	else
		fl->ifl_buf_size = MJUMPAGESIZE;
#else
	else if (sctx->isc_max_frame_size <= 4096)
		fl->ifl_buf_size = MJUMPAGESIZE;
	else if (sctx->isc_max_frame_size <= 9216)
		fl->ifl_buf_size = MJUM9BYTES;
	else
		fl->ifl_buf_size = MJUM16BYTES;
#endif
	if (fl->ifl_buf_size > ctx->ifc_max_fl_buf_size)
		ctx->ifc_max_fl_buf_size = fl->ifl_buf_size;
	fl->ifl_cltype = m_gettype(fl->ifl_buf_size);
	fl->ifl_zone = m_getzone(fl->ifl_buf_size);

	/* avoid pre-allocating zillions of clusters to an idle card
	 * potentially speeding up attach
	 */
	_iflib_fl_refill(ctx, fl, min(128, fl->ifl_size));
	MPASS(min(128, fl->ifl_size) == fl->ifl_credits);
	if (min(128, fl->ifl_size) != fl->ifl_credits)
		return (ENOBUFS);
	/*
	 * handle failure
	 */
	MPASS(rxq != NULL);
	MPASS(fl->ifl_ifdi != NULL);
	bus_dmamap_sync(fl->ifl_ifdi->idi_tag, fl->ifl_ifdi->idi_map,
	    BUS_DMASYNC_PREREAD | BUS_DMASYNC_PREWRITE);
	return (0);
}

/*********************************************************************
 *
 *  Free receive ring data structures
 *
 **********************************************************************/
static void
iflib_rx_sds_free(iflib_rxq_t rxq)
{
	iflib_fl_t fl;
	int i, j;

	if (rxq->ifr_fl != NULL) {
		for (i = 0; i < rxq->ifr_nfl; i++) {
			fl = &rxq->ifr_fl[i];
			if (fl->ifl_buf_tag != NULL) {
				if (fl->ifl_sds.ifsd_map != NULL) {
					for (j = 0; j < fl->ifl_size; j++) {
						if (fl->ifl_sds.ifsd_map[j] == NULL)
							continue;
						bus_dmamap_sync(
						    fl->ifl_buf_tag,
						    fl->ifl_sds.ifsd_map[j],
						    BUS_DMASYNC_POSTREAD);
						bus_dmamap_unload(
						    fl->ifl_buf_tag,
						    fl->ifl_sds.ifsd_map[j]);
					}
				}
				bus_dma_tag_destroy(fl->ifl_buf_tag);
				fl->ifl_buf_tag = NULL;
			}
			free(fl->ifl_sds.ifsd_m, M_IFLIB);
			free(fl->ifl_sds.ifsd_cl, M_IFLIB);
			free(fl->ifl_sds.ifsd_ba, M_IFLIB);
			free(fl->ifl_sds.ifsd_map, M_IFLIB);
			fl->ifl_sds.ifsd_m = NULL;
			fl->ifl_sds.ifsd_cl = NULL;
			fl->ifl_sds.ifsd_ba = NULL;
			fl->ifl_sds.ifsd_map = NULL;
		}
		free(rxq->ifr_fl, M_IFLIB);
		rxq->ifr_fl = NULL;
		rxq->ifr_cq_gen = rxq->ifr_cq_cidx = rxq->ifr_cq_pidx = 0;
	}
}

/*
 * Machine-independent logic
 */
static void
iflib_timer(void *arg)
{
	iflib_txq_t txq = arg;
	if_ctx_t ctx = txq->ift_ctx;
	if_softc_ctx_t sctx = &ctx->ifc_softc_ctx;
	uint64_t this_tick = ticks;
	uint32_t reset_on = hz / 2;

	if (!(if_getdrvflags(ctx->ifc_ifp) & IFF_DRV_RUNNING))
		return;

	/*
	** Check on the state of the TX queue(s); this
	** can be done without the lock because it's RO
	** and the HUNG state will be static if set.
*/ if (this_tick - txq->ift_last_timer_tick >= hz / 2) { txq->ift_last_timer_tick = this_tick; IFDI_TIMER(ctx, txq->ift_id); if ((txq->ift_qstatus == IFLIB_QUEUE_HUNG) && ((txq->ift_cleaned_prev == txq->ift_cleaned) || (sctx->isc_pause_frames == 0))) goto hung; if (ifmp_ring_is_stalled(txq->ift_br)) txq->ift_qstatus = IFLIB_QUEUE_HUNG; txq->ift_cleaned_prev = txq->ift_cleaned; } #ifdef DEV_NETMAP if (if_getcapenable(ctx->ifc_ifp) & IFCAP_NETMAP) iflib_netmap_timer_adjust(ctx, txq, &reset_on); #endif /* handle any laggards */ if (txq->ift_db_pending) GROUPTASK_ENQUEUE(&txq->ift_task); sctx->isc_pause_frames = 0; if (if_getdrvflags(ctx->ifc_ifp) & IFF_DRV_RUNNING) callout_reset_on(&txq->ift_timer, reset_on, iflib_timer, txq, txq->ift_timer.c_cpu); return; hung: device_printf(ctx->ifc_dev, "TX(%d) desc avail = %d, pidx = %d\n", txq->ift_id, TXQ_AVAIL(txq), txq->ift_pidx); STATE_LOCK(ctx); if_setdrvflagbits(ctx->ifc_ifp, IFF_DRV_OACTIVE, IFF_DRV_RUNNING); ctx->ifc_flags |= (IFC_DO_WATCHDOG|IFC_DO_RESET); iflib_admin_intr_deferred(ctx); STATE_UNLOCK(ctx); } static void iflib_init_locked(if_ctx_t ctx) { if_softc_ctx_t sctx = &ctx->ifc_softc_ctx; if_softc_ctx_t scctx = &ctx->ifc_softc_ctx; if_t ifp = ctx->ifc_ifp; iflib_fl_t fl; iflib_txq_t txq; iflib_rxq_t rxq; int i, j, tx_ip_csum_flags, tx_ip6_csum_flags; if_setdrvflagbits(ifp, IFF_DRV_OACTIVE, IFF_DRV_RUNNING); IFDI_INTR_DISABLE(ctx); tx_ip_csum_flags = scctx->isc_tx_csum_flags & (CSUM_IP | CSUM_TCP | CSUM_UDP | CSUM_SCTP); tx_ip6_csum_flags = scctx->isc_tx_csum_flags & (CSUM_IP6_TCP | CSUM_IP6_UDP | CSUM_IP6_SCTP); /* Set hardware offload abilities */ if_clearhwassist(ifp); if (if_getcapenable(ifp) & IFCAP_TXCSUM) if_sethwassistbits(ifp, tx_ip_csum_flags, 0); if (if_getcapenable(ifp) & IFCAP_TXCSUM_IPV6) if_sethwassistbits(ifp, tx_ip6_csum_flags, 0); if (if_getcapenable(ifp) & IFCAP_TSO4) if_sethwassistbits(ifp, CSUM_IP_TSO, 0); if (if_getcapenable(ifp) & IFCAP_TSO6) if_sethwassistbits(ifp, CSUM_IP6_TSO, 0); for (i = 0, txq = ctx->ifc_txqs; i < sctx->isc_ntxqsets; i++, txq++) { CALLOUT_LOCK(txq); callout_stop(&txq->ift_timer); CALLOUT_UNLOCK(txq); iflib_netmap_txq_init(ctx, txq); } #ifdef INVARIANTS i = if_getdrvflags(ifp); #endif IFDI_INIT(ctx); MPASS(if_getdrvflags(ifp) == i); for (i = 0, rxq = ctx->ifc_rxqs; i < sctx->isc_nrxqsets; i++, rxq++) { /* XXX this should really be done on a per-queue basis */ if (if_getcapenable(ifp) & IFCAP_NETMAP) { MPASS(rxq->ifr_id == i); iflib_netmap_rxq_init(ctx, rxq); continue; } for (j = 0, fl = rxq->ifr_fl; j < rxq->ifr_nfl; j++, fl++) { if (iflib_fl_setup(fl)) { device_printf(ctx->ifc_dev, "freelist setup failed - check cluster settings\n"); goto done; } } } done: if_setdrvflagbits(ctx->ifc_ifp, IFF_DRV_RUNNING, IFF_DRV_OACTIVE); IFDI_INTR_ENABLE(ctx); txq = ctx->ifc_txqs; for (i = 0; i < sctx->isc_ntxqsets; i++, txq++) callout_reset_on(&txq->ift_timer, hz/2, iflib_timer, txq, txq->ift_timer.c_cpu); } static int iflib_media_change(if_t ifp) { if_ctx_t ctx = if_getsoftc(ifp); int err; CTX_LOCK(ctx); if ((err = IFDI_MEDIA_CHANGE(ctx)) == 0) iflib_init_locked(ctx); CTX_UNLOCK(ctx); return (err); } static void iflib_media_status(if_t ifp, struct ifmediareq *ifmr) { if_ctx_t ctx = if_getsoftc(ifp); CTX_LOCK(ctx); IFDI_UPDATE_ADMIN_STATUS(ctx); IFDI_MEDIA_STATUS(ctx, ifmr); CTX_UNLOCK(ctx); } void iflib_stop(if_ctx_t ctx) { iflib_txq_t txq = ctx->ifc_txqs; iflib_rxq_t rxq = ctx->ifc_rxqs; if_softc_ctx_t scctx = &ctx->ifc_softc_ctx; if_shared_ctx_t sctx = ctx->ifc_sctx; iflib_dma_info_t di; iflib_fl_t fl; int 
i, j; /* Tell the stack that the interface is no longer active */ if_setdrvflagbits(ctx->ifc_ifp, IFF_DRV_OACTIVE, IFF_DRV_RUNNING); IFDI_INTR_DISABLE(ctx); DELAY(1000); IFDI_STOP(ctx); DELAY(1000); iflib_debug_reset(); /* Wait for current tx queue users to exit to disarm watchdog timer. */ for (i = 0; i < scctx->isc_ntxqsets; i++, txq++) { /* make sure all transmitters have completed before proceeding XXX */ CALLOUT_LOCK(txq); callout_stop(&txq->ift_timer); CALLOUT_UNLOCK(txq); /* clean any enqueued buffers */ iflib_ifmp_purge(txq); /* Free any existing tx buffers. */ for (j = 0; j < txq->ift_size; j++) { iflib_txsd_free(ctx, txq, j); } txq->ift_processed = txq->ift_cleaned = txq->ift_cidx_processed = 0; txq->ift_in_use = txq->ift_gen = txq->ift_cidx = txq->ift_pidx = txq->ift_no_desc_avail = 0; txq->ift_closed = txq->ift_mbuf_defrag = txq->ift_mbuf_defrag_failed = 0; txq->ift_no_tx_dma_setup = txq->ift_txd_encap_efbig = txq->ift_map_failed = 0; txq->ift_pullups = 0; ifmp_ring_reset_stats(txq->ift_br); for (j = 0, di = txq->ift_ifdi; j < sctx->isc_ntxqs; j++, di++) bzero((void *)di->idi_vaddr, di->idi_size); } for (i = 0; i < scctx->isc_nrxqsets; i++, rxq++) { /* make sure all transmitters have completed before proceeding XXX */ rxq->ifr_cq_gen = rxq->ifr_cq_cidx = rxq->ifr_cq_pidx = 0; for (j = 0, di = rxq->ifr_ifdi; j < sctx->isc_nrxqs; j++, di++) bzero((void *)di->idi_vaddr, di->idi_size); /* also resets the free lists pidx/cidx */ for (j = 0, fl = rxq->ifr_fl; j < rxq->ifr_nfl; j++, fl++) iflib_fl_bufs_free(fl); } } static inline caddr_t calc_next_rxd(iflib_fl_t fl, int cidx) { qidx_t size; int nrxd; caddr_t start, end, cur, next; nrxd = fl->ifl_size; size = fl->ifl_rxd_size; start = fl->ifl_ifdi->idi_vaddr; if (__predict_false(size == 0)) return (start); cur = start + size*cidx; end = start + size*nrxd; next = CACHE_PTR_NEXT(cur); return (next < end ? 
next : start); } static inline void prefetch_pkts(iflib_fl_t fl, int cidx) { int nextptr; int nrxd = fl->ifl_size; caddr_t next_rxd; nextptr = (cidx + CACHE_PTR_INCREMENT) & (nrxd-1); prefetch(&fl->ifl_sds.ifsd_m[nextptr]); prefetch(&fl->ifl_sds.ifsd_cl[nextptr]); next_rxd = calc_next_rxd(fl, cidx); prefetch(next_rxd); prefetch(fl->ifl_sds.ifsd_m[(cidx + 1) & (nrxd-1)]); prefetch(fl->ifl_sds.ifsd_m[(cidx + 2) & (nrxd-1)]); prefetch(fl->ifl_sds.ifsd_m[(cidx + 3) & (nrxd-1)]); prefetch(fl->ifl_sds.ifsd_m[(cidx + 4) & (nrxd-1)]); prefetch(fl->ifl_sds.ifsd_cl[(cidx + 1) & (nrxd-1)]); prefetch(fl->ifl_sds.ifsd_cl[(cidx + 2) & (nrxd-1)]); prefetch(fl->ifl_sds.ifsd_cl[(cidx + 3) & (nrxd-1)]); prefetch(fl->ifl_sds.ifsd_cl[(cidx + 4) & (nrxd-1)]); } static void rxd_frag_to_sd(iflib_rxq_t rxq, if_rxd_frag_t irf, int unload, if_rxsd_t sd) { int flid, cidx; bus_dmamap_t map; iflib_fl_t fl; int next; map = NULL; flid = irf->irf_flid; cidx = irf->irf_idx; fl = &rxq->ifr_fl[flid]; sd->ifsd_fl = fl; sd->ifsd_cidx = cidx; sd->ifsd_m = &fl->ifl_sds.ifsd_m[cidx]; sd->ifsd_cl = &fl->ifl_sds.ifsd_cl[cidx]; fl->ifl_credits--; #if MEMORY_LOGGING fl->ifl_m_dequeued++; #endif if (rxq->ifr_ctx->ifc_flags & IFC_PREFETCH) prefetch_pkts(fl, cidx); next = (cidx + CACHE_PTR_INCREMENT) & (fl->ifl_size-1); prefetch(&fl->ifl_sds.ifsd_map[next]); map = fl->ifl_sds.ifsd_map[cidx]; next = (cidx + CACHE_LINE_SIZE) & (fl->ifl_size-1); /* not valid assert if bxe really does SGE from non-contiguous elements */ MPASS(fl->ifl_cidx == cidx); bus_dmamap_sync(fl->ifl_buf_tag, map, BUS_DMASYNC_POSTREAD); if (unload) bus_dmamap_unload(fl->ifl_buf_tag, map); fl->ifl_cidx = (fl->ifl_cidx + 1) & (fl->ifl_size-1); if (__predict_false(fl->ifl_cidx == 0)) fl->ifl_gen = 0; bit_clear(fl->ifl_rx_bitmap, cidx); } static struct mbuf * assemble_segments(iflib_rxq_t rxq, if_rxd_info_t ri, if_rxsd_t sd) { int i, padlen , flags; struct mbuf *m, *mh, *mt; caddr_t cl; i = 0; mh = NULL; do { rxd_frag_to_sd(rxq, &ri->iri_frags[i], TRUE, sd); MPASS(*sd->ifsd_cl != NULL); MPASS(*sd->ifsd_m != NULL); /* Don't include zero-length frags */ if (ri->iri_frags[i].irf_len == 0) { /* XXX we can save the cluster here, but not the mbuf */ m_init(*sd->ifsd_m, M_NOWAIT, MT_DATA, 0); m_free(*sd->ifsd_m); *sd->ifsd_m = NULL; continue; } m = *sd->ifsd_m; *sd->ifsd_m = NULL; if (mh == NULL) { flags = M_PKTHDR|M_EXT; mh = mt = m; padlen = ri->iri_pad; } else { flags = M_EXT; mt->m_next = m; mt = m; /* assuming padding is only on the first fragment */ padlen = 0; } cl = *sd->ifsd_cl; *sd->ifsd_cl = NULL; /* Can these two be made one ? */ m_init(m, M_NOWAIT, MT_DATA, flags); m_cljset(m, cl, sd->ifsd_fl->ifl_cltype); /* * These must follow m_init and m_cljset */ m->m_data += padlen; ri->iri_len -= padlen; m->m_len = ri->iri_frags[i].irf_len; } while (++i < ri->iri_nfrags); return (mh); } /* * Process one software descriptor */ static struct mbuf * iflib_rxd_pkt_get(iflib_rxq_t rxq, if_rxd_info_t ri) { struct if_rxsd sd; struct mbuf *m; /* should I merge this back in now that the two paths are basically duplicated? 
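 * (The two paths: a small, single-fragment packet is copied out of
 * its cluster into the mbuf itself, leaving the cluster in the free
 * list for reuse; anything larger is assembled zero-copy from the
 * clusters by assemble_segments().)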
*/ if (ri->iri_nfrags == 1 && ri->iri_frags[0].irf_len <= MIN(IFLIB_RX_COPY_THRESH, MHLEN)) { rxd_frag_to_sd(rxq, &ri->iri_frags[0], FALSE, &sd); m = *sd.ifsd_m; *sd.ifsd_m = NULL; m_init(m, M_NOWAIT, MT_DATA, M_PKTHDR); #ifndef __NO_STRICT_ALIGNMENT if (!IP_ALIGNED(m)) m->m_data += 2; #endif memcpy(m->m_data, *sd.ifsd_cl, ri->iri_len); m->m_len = ri->iri_frags[0].irf_len; } else { m = assemble_segments(rxq, ri, &sd); } m->m_pkthdr.len = ri->iri_len; m->m_pkthdr.rcvif = ri->iri_ifp; m->m_flags |= ri->iri_flags; m->m_pkthdr.ether_vtag = ri->iri_vtag; m->m_pkthdr.flowid = ri->iri_flowid; M_HASHTYPE_SET(m, ri->iri_rsstype); m->m_pkthdr.csum_flags = ri->iri_csum_flags; m->m_pkthdr.csum_data = ri->iri_csum_data; return (m); } #if defined(INET6) || defined(INET) static void iflib_get_ip_forwarding(struct lro_ctrl *lc, bool *v4, bool *v6) { CURVNET_SET(lc->ifp->if_vnet); #if defined(INET6) *v6 = VNET(ip6_forwarding); #endif #if defined(INET) *v4 = VNET(ipforwarding); #endif CURVNET_RESTORE(); } /* * Returns true if it's possible this packet could be LROed. * if it returns false, it is guaranteed that tcp_lro_rx() * would not return zero. */ static bool iflib_check_lro_possible(struct mbuf *m, bool v4_forwarding, bool v6_forwarding) { struct ether_header *eh; uint16_t eh_type; eh = mtod(m, struct ether_header *); eh_type = ntohs(eh->ether_type); switch (eh_type) { #if defined(INET6) case ETHERTYPE_IPV6: return !v6_forwarding; #endif #if defined (INET) case ETHERTYPE_IP: return !v4_forwarding; #endif } return false; } #else static void iflib_get_ip_forwarding(struct lro_ctrl *lc __unused, bool *v4 __unused, bool *v6 __unused) { } #endif static bool iflib_rxeof(iflib_rxq_t rxq, qidx_t budget) { if_ctx_t ctx = rxq->ifr_ctx; if_shared_ctx_t sctx = ctx->ifc_sctx; if_softc_ctx_t scctx = &ctx->ifc_softc_ctx; int avail, i; qidx_t *cidxp; struct if_rxd_info ri; int err, budget_left, rx_bytes, rx_pkts; iflib_fl_t fl; struct ifnet *ifp; int lro_enabled; bool v4_forwarding, v6_forwarding, lro_possible; /* * XXX early demux data packets so that if_input processing only handles * acks in interrupt context */ struct mbuf *m, *mh, *mt, *mf; lro_possible = v4_forwarding = v6_forwarding = false; ifp = ctx->ifc_ifp; mh = mt = NULL; MPASS(budget > 0); rx_pkts = rx_bytes = 0; if (sctx->isc_flags & IFLIB_HAS_RXCQ) cidxp = &rxq->ifr_cq_cidx; else cidxp = &rxq->ifr_fl[0].ifl_cidx; if ((avail = iflib_rxd_avail(ctx, rxq, *cidxp, budget)) == 0) { for (i = 0, fl = &rxq->ifr_fl[0]; i < sctx->isc_nfl; i++, fl++) __iflib_fl_refill_lt(ctx, fl, budget + 8); DBG_COUNTER_INC(rx_unavail); return (false); } for (budget_left = budget; budget_left > 0 && avail > 0;) { if (__predict_false(!CTX_ACTIVE(ctx))) { DBG_COUNTER_INC(rx_ctx_inactive); break; } /* * Reset client set fields to their default values */ rxd_info_zero(&ri); ri.iri_qsidx = rxq->ifr_id; ri.iri_cidx = *cidxp; ri.iri_ifp = ifp; ri.iri_frags = rxq->ifr_frags; err = ctx->isc_rxd_pkt_get(ctx->ifc_softc, &ri); if (err) goto err; if (sctx->isc_flags & IFLIB_HAS_RXCQ) { *cidxp = ri.iri_cidx; /* Update our consumer index */ /* XXX NB: shurd - check if this is still safe */ while (rxq->ifr_cq_cidx >= scctx->isc_nrxd[0]) { rxq->ifr_cq_cidx -= scctx->isc_nrxd[0]; rxq->ifr_cq_gen = 0; } /* was this only a completion queue message? 
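			 * (i.e. a completion-queue entry that carries no
			 * data fragments; there is no packet to construct,
			 * so just consume the entry and move on.)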
*/ if (__predict_false(ri.iri_nfrags == 0)) continue; } MPASS(ri.iri_nfrags != 0); MPASS(ri.iri_len != 0); /* will advance the cidx on the corresponding free lists */ m = iflib_rxd_pkt_get(rxq, &ri); avail--; budget_left--; if (avail == 0 && budget_left) avail = iflib_rxd_avail(ctx, rxq, *cidxp, budget_left); if (__predict_false(m == NULL)) { DBG_COUNTER_INC(rx_mbuf_null); continue; } /* imm_pkt: -- cxgb */ if (mh == NULL) mh = mt = m; else { mt->m_nextpkt = m; mt = m; } } /* make sure that we can refill faster than drain */ for (i = 0, fl = &rxq->ifr_fl[0]; i < sctx->isc_nfl; i++, fl++) __iflib_fl_refill_lt(ctx, fl, budget + 8); lro_enabled = (if_getcapenable(ifp) & IFCAP_LRO); if (lro_enabled) iflib_get_ip_forwarding(&rxq->ifr_lc, &v4_forwarding, &v6_forwarding); mt = mf = NULL; while (mh != NULL) { m = mh; mh = mh->m_nextpkt; m->m_nextpkt = NULL; #ifndef __NO_STRICT_ALIGNMENT if (!IP_ALIGNED(m) && (m = iflib_fixup_rx(m)) == NULL) continue; #endif rx_bytes += m->m_pkthdr.len; rx_pkts++; #if defined(INET6) || defined(INET) if (lro_enabled) { if (!lro_possible) { lro_possible = iflib_check_lro_possible(m, v4_forwarding, v6_forwarding); if (lro_possible && mf != NULL) { ifp->if_input(ifp, mf); DBG_COUNTER_INC(rx_if_input); mt = mf = NULL; } } if ((m->m_pkthdr.csum_flags & (CSUM_L4_CALC|CSUM_L4_VALID)) == (CSUM_L4_CALC|CSUM_L4_VALID)) { if (lro_possible && tcp_lro_rx(&rxq->ifr_lc, m, 0) == 0) continue; } } #endif if (lro_possible) { ifp->if_input(ifp, m); DBG_COUNTER_INC(rx_if_input); continue; } if (mf == NULL) mf = m; if (mt != NULL) mt->m_nextpkt = m; mt = m; } if (mf != NULL) { ifp->if_input(ifp, mf); DBG_COUNTER_INC(rx_if_input); } if_inc_counter(ifp, IFCOUNTER_IBYTES, rx_bytes); if_inc_counter(ifp, IFCOUNTER_IPACKETS, rx_pkts); /* * Flush any outstanding LRO work */ #if defined(INET6) || defined(INET) tcp_lro_flush_all(&rxq->ifr_lc); #endif if (avail) return true; return (iflib_rxd_avail(ctx, rxq, *cidxp, 1)); err: STATE_LOCK(ctx); ctx->ifc_flags |= IFC_DO_RESET; iflib_admin_intr_deferred(ctx); STATE_UNLOCK(ctx); return (false); } #define TXD_NOTIFY_COUNT(txq) (((txq)->ift_size / (txq)->ift_update_freq)-1) static inline qidx_t txq_max_db_deferred(iflib_txq_t txq, qidx_t in_use) { qidx_t notify_count = TXD_NOTIFY_COUNT(txq); qidx_t minthresh = txq->ift_size / 8; if (in_use > 4*minthresh) return (notify_count); if (in_use > 2*minthresh) return (notify_count >> 1); if (in_use > minthresh) return (notify_count >> 3); return (0); } static inline qidx_t txq_max_rs_deferred(iflib_txq_t txq) { qidx_t notify_count = TXD_NOTIFY_COUNT(txq); qidx_t minthresh = txq->ift_size / 8; if (txq->ift_in_use > 4*minthresh) return (notify_count); if (txq->ift_in_use > 2*minthresh) return (notify_count >> 1); if (txq->ift_in_use > minthresh) return (notify_count >> 2); return (2); } #define M_CSUM_FLAGS(m) ((m)->m_pkthdr.csum_flags) #define M_HAS_VLANTAG(m) (m->m_flags & M_VLANTAG) #define TXQ_MAX_DB_DEFERRED(txq, in_use) txq_max_db_deferred((txq), (in_use)) #define TXQ_MAX_RS_DEFERRED(txq) txq_max_rs_deferred(txq) #define TXQ_MAX_DB_CONSUMED(size) (size >> 4) /* forward compatibility for cxgb */ #define FIRST_QSET(ctx) 0 #define NTXQSETS(ctx) ((ctx)->ifc_softc_ctx.isc_ntxqsets) #define NRXQSETS(ctx) ((ctx)->ifc_softc_ctx.isc_nrxqsets) #define QIDX(ctx, m) ((((m)->m_pkthdr.flowid & ctx->ifc_softc_ctx.isc_rss_table_mask) % NTXQSETS(ctx)) + FIRST_QSET(ctx)) #define DESC_RECLAIMABLE(q) ((int)((q)->ift_processed - (q)->ift_cleaned - (q)->ift_ctx->ifc_softc_ctx.isc_tx_nsegments)) /* XXX we should be setting this 
to something other than zero */ #define RECLAIM_THRESH(ctx) ((ctx)->ifc_sctx->isc_tx_reclaim_thresh) #define MAX_TX_DESC(ctx) max((ctx)->ifc_softc_ctx.isc_tx_tso_segments_max, \ (ctx)->ifc_softc_ctx.isc_tx_nsegments) static inline bool iflib_txd_db_check(if_ctx_t ctx, iflib_txq_t txq, int ring, qidx_t in_use) { qidx_t dbval, max; bool rang; rang = false; max = TXQ_MAX_DB_DEFERRED(txq, in_use); if (ring || txq->ift_db_pending >= max) { dbval = txq->ift_npending ? txq->ift_npending : txq->ift_pidx; bus_dmamap_sync(txq->ift_ifdi->idi_tag, txq->ift_ifdi->idi_map, BUS_DMASYNC_PREREAD | BUS_DMASYNC_PREWRITE); ctx->isc_txd_flush(ctx->ifc_softc, txq->ift_id, dbval); txq->ift_db_pending = txq->ift_npending = 0; rang = true; } return (rang); } #ifdef PKT_DEBUG static void print_pkt(if_pkt_info_t pi) { printf("pi len: %d qsidx: %d nsegs: %d ndescs: %d flags: %x pidx: %d\n", pi->ipi_len, pi->ipi_qsidx, pi->ipi_nsegs, pi->ipi_ndescs, pi->ipi_flags, pi->ipi_pidx); printf("pi new_pidx: %d csum_flags: %lx tso_segsz: %d mflags: %x vtag: %d\n", pi->ipi_new_pidx, pi->ipi_csum_flags, pi->ipi_tso_segsz, pi->ipi_mflags, pi->ipi_vtag); printf("pi etype: %d ehdrlen: %d ip_hlen: %d ipproto: %d\n", pi->ipi_etype, pi->ipi_ehdrlen, pi->ipi_ip_hlen, pi->ipi_ipproto); } #endif #define IS_TSO4(pi) ((pi)->ipi_csum_flags & CSUM_IP_TSO) #define IS_TX_OFFLOAD4(pi) ((pi)->ipi_csum_flags & (CSUM_IP_TCP | CSUM_IP_TSO)) #define IS_TSO6(pi) ((pi)->ipi_csum_flags & CSUM_IP6_TSO) #define IS_TX_OFFLOAD6(pi) ((pi)->ipi_csum_flags & (CSUM_IP6_TCP | CSUM_IP6_TSO)) static int iflib_parse_header(iflib_txq_t txq, if_pkt_info_t pi, struct mbuf **mp) { if_shared_ctx_t sctx = txq->ift_ctx->ifc_sctx; struct ether_vlan_header *eh; struct mbuf *m; m = *mp; if ((sctx->isc_flags & IFLIB_NEED_SCRATCH) && M_WRITABLE(m) == 0) { if ((m = m_dup(m, M_NOWAIT)) == NULL) { return (ENOMEM); } else { m_freem(*mp); DBG_COUNTER_INC(tx_frees); *mp = m; } } /* * Determine where frame payload starts. * Jump over vlan headers if already present, * helpful for QinQ too. 
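	 * For example, an untagged frame has a 14-byte Ethernet header
	 * (ETHER_HDR_LEN); an 802.1Q tag adds 4 bytes
	 * (ETHER_VLAN_ENCAP_LEN), so ehdrlen becomes 18 and the L3
	 * header is parsed from that offset instead.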
*/ if (__predict_false(m->m_len < sizeof(*eh))) { txq->ift_pullups++; if (__predict_false((m = m_pullup(m, sizeof(*eh))) == NULL)) return (ENOMEM); } eh = mtod(m, struct ether_vlan_header *); if (eh->evl_encap_proto == htons(ETHERTYPE_VLAN)) { pi->ipi_etype = ntohs(eh->evl_proto); pi->ipi_ehdrlen = ETHER_HDR_LEN + ETHER_VLAN_ENCAP_LEN; } else { pi->ipi_etype = ntohs(eh->evl_encap_proto); pi->ipi_ehdrlen = ETHER_HDR_LEN; } switch (pi->ipi_etype) { #ifdef INET case ETHERTYPE_IP: { struct mbuf *n; struct ip *ip = NULL; struct tcphdr *th = NULL; int minthlen; minthlen = min(m->m_pkthdr.len, pi->ipi_ehdrlen + sizeof(*ip) + sizeof(*th)); if (__predict_false(m->m_len < minthlen)) { /* * if this code bloat is causing too much of a hit * move it to a separate function and mark it noinline */ if (m->m_len == pi->ipi_ehdrlen) { n = m->m_next; MPASS(n); if (n->m_len >= sizeof(*ip)) { ip = (struct ip *)n->m_data; if (n->m_len >= (ip->ip_hl << 2) + sizeof(*th)) th = (struct tcphdr *)((caddr_t)ip + (ip->ip_hl << 2)); } else { txq->ift_pullups++; if (__predict_false((m = m_pullup(m, minthlen)) == NULL)) return (ENOMEM); ip = (struct ip *)(m->m_data + pi->ipi_ehdrlen); } } else { txq->ift_pullups++; if (__predict_false((m = m_pullup(m, minthlen)) == NULL)) return (ENOMEM); ip = (struct ip *)(m->m_data + pi->ipi_ehdrlen); if (m->m_len >= (ip->ip_hl << 2) + sizeof(*th)) th = (struct tcphdr *)((caddr_t)ip + (ip->ip_hl << 2)); } } else { ip = (struct ip *)(m->m_data + pi->ipi_ehdrlen); if (m->m_len >= (ip->ip_hl << 2) + sizeof(*th)) th = (struct tcphdr *)((caddr_t)ip + (ip->ip_hl << 2)); } pi->ipi_ip_hlen = ip->ip_hl << 2; pi->ipi_ipproto = ip->ip_p; pi->ipi_flags |= IPI_TX_IPV4; /* TCP checksum offload may require TCP header length */ if (IS_TX_OFFLOAD4(pi)) { if (__predict_true(pi->ipi_ipproto == IPPROTO_TCP)) { if (__predict_false(th == NULL)) { txq->ift_pullups++; if (__predict_false((m = m_pullup(m, (ip->ip_hl << 2) + sizeof(*th))) == NULL)) return (ENOMEM); th = (struct tcphdr *)((caddr_t)ip + pi->ipi_ip_hlen); } pi->ipi_tcp_hflags = th->th_flags; pi->ipi_tcp_hlen = th->th_off << 2; pi->ipi_tcp_seq = th->th_seq; } if (IS_TSO4(pi)) { if (__predict_false(ip->ip_p != IPPROTO_TCP)) return (ENXIO); /* * TSO always requires hardware checksum offload. */ pi->ipi_csum_flags |= (CSUM_IP_TCP | CSUM_IP); th->th_sum = in_pseudo(ip->ip_src.s_addr, ip->ip_dst.s_addr, htons(IPPROTO_TCP)); pi->ipi_tso_segsz = m->m_pkthdr.tso_segsz; if (sctx->isc_flags & IFLIB_TSO_INIT_IP) { ip->ip_sum = 0; ip->ip_len = htons(pi->ipi_ip_hlen + pi->ipi_tcp_hlen + pi->ipi_tso_segsz); } } } if ((sctx->isc_flags & IFLIB_NEED_ZERO_CSUM) && (pi->ipi_csum_flags & CSUM_IP)) ip->ip_sum = 0; break; } #endif #ifdef INET6 case ETHERTYPE_IPV6: { struct ip6_hdr *ip6 = (struct ip6_hdr *)(m->m_data + pi->ipi_ehdrlen); struct tcphdr *th; pi->ipi_ip_hlen = sizeof(struct ip6_hdr); if (__predict_false(m->m_len < pi->ipi_ehdrlen + sizeof(struct ip6_hdr))) { txq->ift_pullups++; if (__predict_false((m = m_pullup(m, pi->ipi_ehdrlen + sizeof(struct ip6_hdr))) == NULL)) return (ENOMEM); } th = (struct tcphdr *)((caddr_t)ip6 + pi->ipi_ip_hlen); /* XXX-BZ this will go badly in case of ext hdrs. 
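		 * (Any extension header -- hop-by-hop options, routing,
		 * fragment -- would sit between the fixed 40-byte IPv6
		 * header and the TCP header, so the th pointer computed
		 * above would land inside the extension header rather
		 * than on the TCP header.)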
*/ pi->ipi_ipproto = ip6->ip6_nxt; pi->ipi_flags |= IPI_TX_IPV6; /* TCP checksum offload may require TCP header length */ if (IS_TX_OFFLOAD6(pi)) { if (pi->ipi_ipproto == IPPROTO_TCP) { if (__predict_false(m->m_len < pi->ipi_ehdrlen + sizeof(struct ip6_hdr) + sizeof(struct tcphdr))) { txq->ift_pullups++; if (__predict_false((m = m_pullup(m, pi->ipi_ehdrlen + sizeof(struct ip6_hdr) + sizeof(struct tcphdr))) == NULL)) return (ENOMEM); } pi->ipi_tcp_hflags = th->th_flags; pi->ipi_tcp_hlen = th->th_off << 2; pi->ipi_tcp_seq = th->th_seq; } if (IS_TSO6(pi)) { if (__predict_false(ip6->ip6_nxt != IPPROTO_TCP)) return (ENXIO); /* * TSO always requires hardware checksum offload. */ pi->ipi_csum_flags |= CSUM_IP6_TCP; th->th_sum = in6_cksum_pseudo(ip6, 0, IPPROTO_TCP, 0); pi->ipi_tso_segsz = m->m_pkthdr.tso_segsz; } } break; } #endif default: pi->ipi_csum_flags &= ~CSUM_OFFLOAD; pi->ipi_ip_hlen = 0; break; } *mp = m; return (0); } /* * If dodgy hardware rejects the scatter gather chain we've handed it * we'll need to remove the mbuf chain from ifsg_m[] before we can add the * m_defrag'd mbufs */ static __noinline struct mbuf * iflib_remove_mbuf(iflib_txq_t txq) { int ntxd, pidx; struct mbuf *m, **ifsd_m; ifsd_m = txq->ift_sds.ifsd_m; ntxd = txq->ift_size; pidx = txq->ift_pidx & (ntxd - 1); ifsd_m = txq->ift_sds.ifsd_m; m = ifsd_m[pidx]; ifsd_m[pidx] = NULL; bus_dmamap_unload(txq->ift_buf_tag, txq->ift_sds.ifsd_map[pidx]); if (txq->ift_sds.ifsd_tso_map != NULL) bus_dmamap_unload(txq->ift_tso_buf_tag, txq->ift_sds.ifsd_tso_map[pidx]); #if MEMORY_LOGGING txq->ift_dequeued++; #endif return (m); } static inline caddr_t calc_next_txd(iflib_txq_t txq, int cidx, uint8_t qid) { qidx_t size; int ntxd; caddr_t start, end, cur, next; ntxd = txq->ift_size; size = txq->ift_txd_size[qid]; start = txq->ift_ifdi[qid].idi_vaddr; if (__predict_false(size == 0)) return (start); cur = start + size*cidx; end = start + size*ntxd; next = CACHE_PTR_NEXT(cur); return (next < end ? next : start); } /* * Pad an mbuf to ensure a minimum ethernet frame size. 
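 * (For example: with the classic 64-byte Ethernet minimum, which
 * includes the 4-byte CRC, min_frame_size is 60 and a 42-byte ARP
 * request gets 18 bytes of zeros appended.)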
 * min_frame_size is the frame size (less CRC) to pad the mbuf to
 */
static __noinline int
iflib_ether_pad(device_t dev, struct mbuf **m_head, uint16_t min_frame_size)
{
	/*
	 * 18 is enough bytes to pad an ARP packet to 46 bytes, and
	 * an ARP message is the smallest common payload I can think of
	 */
	static char pad[18];	/* just zeros */
	int n;
	struct mbuf *new_head;

	if (!M_WRITABLE(*m_head)) {
		new_head = m_dup(*m_head, M_NOWAIT);
		if (new_head == NULL) {
			m_freem(*m_head);
			device_printf(dev, "cannot pad short frame, m_dup() failed\n");
			DBG_COUNTER_INC(encap_pad_mbuf_fail);
			DBG_COUNTER_INC(tx_frees);
			return ENOMEM;
		}
		m_freem(*m_head);
		*m_head = new_head;
	}

	for (n = min_frame_size - (*m_head)->m_pkthdr.len;
	     n > 0; n -= sizeof(pad))
		if (!m_append(*m_head, min(n, sizeof(pad)), pad))
			break;

	if (n > 0) {
		m_freem(*m_head);
		device_printf(dev, "cannot pad short frame\n");
		DBG_COUNTER_INC(encap_pad_mbuf_fail);
		DBG_COUNTER_INC(tx_frees);
		return (ENOBUFS);
	}

	return 0;
}

static int
iflib_encap(iflib_txq_t txq, struct mbuf **m_headp)
{
	if_ctx_t		ctx;
	if_shared_ctx_t		sctx;
	if_softc_ctx_t		scctx;
	bus_dma_tag_t		buf_tag;
	bus_dma_segment_t	*segs;
	struct mbuf		*m_head, **ifsd_m;
	void			*next_txd;
	bus_dmamap_t		map;
	struct if_pkt_info	pi;
	int remap = 0;
	int err, nsegs, ndesc, max_segs, pidx, cidx, next, ntxd;

	ctx = txq->ift_ctx;
	sctx = ctx->ifc_sctx;
	scctx = &ctx->ifc_softc_ctx;
	segs = txq->ift_segs;
	ntxd = txq->ift_size;
	m_head = *m_headp;
	map = NULL;

	/*
	 * If we're doing TSO the next descriptor to clean may be quite far ahead
	 */
	cidx = txq->ift_cidx;
	pidx = txq->ift_pidx;
	if (ctx->ifc_flags & IFC_PREFETCH) {
		next = (cidx + CACHE_PTR_INCREMENT) & (ntxd-1);
		if (!(ctx->ifc_flags & IFLIB_HAS_TXCQ)) {
			next_txd = calc_next_txd(txq, cidx, 0);
			prefetch(next_txd);
		}

		/* prefetch the next cache line of mbuf pointers and flags */
		prefetch(&txq->ift_sds.ifsd_m[next]);
		prefetch(&txq->ift_sds.ifsd_map[next]);
		next = (cidx + CACHE_LINE_SIZE) & (ntxd-1);
	}
	map = txq->ift_sds.ifsd_map[pidx];
	ifsd_m = txq->ift_sds.ifsd_m;

	if (m_head->m_pkthdr.csum_flags & CSUM_TSO) {
		buf_tag = txq->ift_tso_buf_tag;
		max_segs = scctx->isc_tx_tso_segments_max;
		map = txq->ift_sds.ifsd_tso_map[pidx];
		MPASS(buf_tag != NULL);
		MPASS(max_segs > 0);
	} else {
		buf_tag = txq->ift_buf_tag;
		max_segs = scctx->isc_tx_nsegments;
		map = txq->ift_sds.ifsd_map[pidx];
	}
	if ((sctx->isc_flags & IFLIB_NEED_ETHER_PAD) &&
	    __predict_false(m_head->m_pkthdr.len < scctx->isc_min_frame_size)) {
		err = iflib_ether_pad(ctx->ifc_dev, m_headp, scctx->isc_min_frame_size);
		if (err) {
			DBG_COUNTER_INC(encap_txd_encap_fail);
			return err;
		}
	}
	m_head = *m_headp;

	pkt_info_zero(&pi);
	pi.ipi_mflags = (m_head->m_flags & (M_VLANTAG|M_BCAST|M_MCAST));
	pi.ipi_pidx = pidx;
	pi.ipi_qsidx = txq->ift_id;
	pi.ipi_len = m_head->m_pkthdr.len;
	pi.ipi_csum_flags = m_head->m_pkthdr.csum_flags;
	pi.ipi_vtag = (m_head->m_flags & M_VLANTAG) ?
	    m_head->m_pkthdr.ether_vtag : 0;

	/* deliberate bitwise OR to make one condition */
	if (__predict_true((pi.ipi_csum_flags | pi.ipi_vtag))) {
		if (__predict_false((err =
		    iflib_parse_header(txq, &pi, m_headp)) != 0)) {
			DBG_COUNTER_INC(encap_txd_encap_fail);
			return (err);
		}
		m_head = *m_headp;
	}

retry:
	err = bus_dmamap_load_mbuf_sg(buf_tag, map, m_head, segs, &nsegs,
	    BUS_DMA_NOWAIT);
defrag:
	if (__predict_false(err)) {
		switch (err) {
		case EFBIG:
			/* try collapse once and defrag once */
			if (remap == 0) {
				m_head = m_collapse(*m_headp, M_NOWAIT, max_segs);
				/* try defrag if collapsing fails */
				if (m_head == NULL)
					remap++;
			}
			if (remap == 1) {
				txq->ift_mbuf_defrag++;
				m_head = m_defrag(*m_headp, M_NOWAIT);
			}
			remap++;
			if (__predict_false(m_head == NULL))
				goto defrag_failed;
			*m_headp = m_head;
			goto retry;
			break;
		case ENOMEM:
			txq->ift_no_tx_dma_setup++;
			break;
		default:
			txq->ift_no_tx_dma_setup++;
			m_freem(*m_headp);
			DBG_COUNTER_INC(tx_frees);
			*m_headp = NULL;
			break;
		}
		txq->ift_map_failed++;
		DBG_COUNTER_INC(encap_load_mbuf_fail);
		DBG_COUNTER_INC(encap_txd_encap_fail);
		return (err);
	}
	ifsd_m[pidx] = m_head;
	/*
	 * XXX assumes a 1 to 1 relationship between segments and
	 *     descriptors - this does not hold true on all drivers, e.g.
	 *     cxgb
	 */
	if (__predict_false(nsegs + 2 > TXQ_AVAIL(txq))) {
		txq->ift_no_desc_avail++;
		bus_dmamap_unload(buf_tag, map);
		DBG_COUNTER_INC(encap_txq_avail_fail);
		DBG_COUNTER_INC(encap_txd_encap_fail);
		if ((txq->ift_task.gt_task.ta_flags & TASK_ENQUEUED) == 0)
			GROUPTASK_ENQUEUE(&txq->ift_task);
		return (ENOBUFS);
	}
	/*
	 * On Intel cards we can greatly reduce the number of TX interrupts
	 * we see by only setting report status on every Nth descriptor.
	 * However, this also means that the driver will need to keep track
	 * of the descriptors that RS was set on to check them for the DD bit.
	 */
	txq->ift_rs_pending += nsegs + 1;
	if (txq->ift_rs_pending > TXQ_MAX_RS_DEFERRED(txq) ||
	     iflib_no_tx_batch || (TXQ_AVAIL(txq) - nsegs) <= MAX_TX_DESC(ctx) + 2) {
		pi.ipi_flags |= IPI_TX_INTR;
		txq->ift_rs_pending = 0;
	}

	pi.ipi_segs = segs;
	pi.ipi_nsegs = nsegs;

	MPASS(pidx >= 0 && pidx < txq->ift_size);
#ifdef PKT_DEBUG
	print_pkt(&pi);
#endif
	if ((err = ctx->isc_txd_encap(ctx->ifc_softc, &pi)) == 0) {
		bus_dmamap_sync(buf_tag, map, BUS_DMASYNC_PREWRITE);
		DBG_COUNTER_INC(tx_encap);
		MPASS(pi.ipi_new_pidx < txq->ift_size);

		ndesc = pi.ipi_new_pidx - pi.ipi_pidx;
		if (pi.ipi_new_pidx < pi.ipi_pidx) {
			ndesc += txq->ift_size;
			txq->ift_gen = 1;
		}
		/*
		 * drivers can need as many as
		 * two sentinels
		 */
		MPASS(ndesc <= pi.ipi_nsegs + 2);
		MPASS(pi.ipi_new_pidx != pidx);
		MPASS(ndesc > 0);
		txq->ift_in_use += ndesc;

		/*
		 * We update the last software descriptor again here because there may
		 * be a sentinel and/or there may be more mbufs than segments
		 */
		txq->ift_pidx = pi.ipi_new_pidx;
		txq->ift_npending += pi.ipi_ndescs;
	} else {
		*m_headp = m_head = iflib_remove_mbuf(txq);
		if (err == EFBIG) {
			txq->ift_txd_encap_efbig++;
			if (remap < 2) {
				remap = 1;
				goto defrag;
			}
		}
		goto defrag_failed;
	}
	/*
	 * err can't possibly be non-zero here, so we don't need to test it
	 * to see if we need to DBG_COUNTER_INC(encap_txd_encap_fail).
*/ return (err); defrag_failed: txq->ift_mbuf_defrag_failed++; txq->ift_map_failed++; m_freem(*m_headp); DBG_COUNTER_INC(tx_frees); *m_headp = NULL; DBG_COUNTER_INC(encap_txd_encap_fail); return (ENOMEM); } static void iflib_tx_desc_free(iflib_txq_t txq, int n) { uint32_t qsize, cidx, mask, gen; struct mbuf *m, **ifsd_m; bool do_prefetch; cidx = txq->ift_cidx; gen = txq->ift_gen; qsize = txq->ift_size; mask = qsize-1; ifsd_m = txq->ift_sds.ifsd_m; do_prefetch = (txq->ift_ctx->ifc_flags & IFC_PREFETCH); while (n-- > 0) { if (do_prefetch) { prefetch(ifsd_m[(cidx + 3) & mask]); prefetch(ifsd_m[(cidx + 4) & mask]); } if ((m = ifsd_m[cidx]) != NULL) { prefetch(&ifsd_m[(cidx + CACHE_PTR_INCREMENT) & mask]); if (m->m_pkthdr.csum_flags & CSUM_TSO) { bus_dmamap_sync(txq->ift_tso_buf_tag, txq->ift_sds.ifsd_tso_map[cidx], BUS_DMASYNC_POSTWRITE); bus_dmamap_unload(txq->ift_tso_buf_tag, txq->ift_sds.ifsd_tso_map[cidx]); } else { bus_dmamap_sync(txq->ift_buf_tag, txq->ift_sds.ifsd_map[cidx], BUS_DMASYNC_POSTWRITE); bus_dmamap_unload(txq->ift_buf_tag, txq->ift_sds.ifsd_map[cidx]); } /* XXX we don't support any drivers that batch packets yet */ MPASS(m->m_nextpkt == NULL); m_freem(m); ifsd_m[cidx] = NULL; #if MEMORY_LOGGING txq->ift_dequeued++; #endif DBG_COUNTER_INC(tx_frees); } if (__predict_false(++cidx == qsize)) { cidx = 0; gen = 0; } } txq->ift_cidx = cidx; txq->ift_gen = gen; } static __inline int iflib_completed_tx_reclaim(iflib_txq_t txq, int thresh) { int reclaim; if_ctx_t ctx = txq->ift_ctx; KASSERT(thresh >= 0, ("invalid threshold to reclaim")); MPASS(thresh /*+ MAX_TX_DESC(txq->ift_ctx) */ < txq->ift_size); /* * Need a rate-limiting check so that this isn't called every time */ iflib_tx_credits_update(ctx, txq); reclaim = DESC_RECLAIMABLE(txq); if (reclaim <= thresh /* + MAX_TX_DESC(txq->ift_ctx) */) { #ifdef INVARIANTS if (iflib_verbose_debug) { printf("%s processed=%ju cleaned=%ju tx_nsegments=%d reclaim=%d thresh=%d\n", __FUNCTION__, txq->ift_processed, txq->ift_cleaned, txq->ift_ctx->ifc_softc_ctx.isc_tx_nsegments, reclaim, thresh); } #endif return (0); } iflib_tx_desc_free(txq, reclaim); txq->ift_cleaned += reclaim; txq->ift_in_use -= reclaim; return (reclaim); } static struct mbuf ** _ring_peek_one(struct ifmp_ring *r, int cidx, int offset, int remaining) { int next, size; struct mbuf **items; size = r->size; next = (cidx + CACHE_PTR_INCREMENT) & (size-1); items = __DEVOLATILE(struct mbuf **, &r->items[0]); prefetch(items[(cidx + offset) & (size-1)]); if (remaining > 1) { prefetch2cachelines(&items[next]); prefetch2cachelines(items[(cidx + offset + 1) & (size-1)]); prefetch2cachelines(items[(cidx + offset + 2) & (size-1)]); prefetch2cachelines(items[(cidx + offset + 3) & (size-1)]); } return (__DEVOLATILE(struct mbuf **, &r->items[(cidx + offset) & (size-1)])); } static void iflib_txq_check_drain(iflib_txq_t txq, int budget) { ifmp_ring_check_drainage(txq->ift_br, budget); } static uint32_t iflib_txq_can_drain(struct ifmp_ring *r) { iflib_txq_t txq = r->cookie; if_ctx_t ctx = txq->ift_ctx; if (TXQ_AVAIL(txq) > MAX_TX_DESC(ctx) + 2) return (1); bus_dmamap_sync(txq->ift_ifdi->idi_tag, txq->ift_ifdi->idi_map, BUS_DMASYNC_POSTREAD); return (ctx->isc_txd_credits_update(ctx->ifc_softc, txq->ift_id, false)); } static uint32_t iflib_txq_drain(struct ifmp_ring *r, uint32_t cidx, uint32_t pidx) { iflib_txq_t txq = r->cookie; if_ctx_t ctx = txq->ift_ctx; struct ifnet *ifp = ctx->ifc_ifp; struct mbuf **mp, *m; int i, count, consumed, pkt_sent, bytes_sent, mcast_sent, avail; int reclaimed, err, 
in_use_prev, desc_used; bool do_prefetch, ring, rang; if (__predict_false(!(if_getdrvflags(ifp) & IFF_DRV_RUNNING) || !LINK_ACTIVE(ctx))) { DBG_COUNTER_INC(txq_drain_notready); return (0); } reclaimed = iflib_completed_tx_reclaim(txq, RECLAIM_THRESH(ctx)); rang = iflib_txd_db_check(ctx, txq, reclaimed, txq->ift_in_use); avail = IDXDIFF(pidx, cidx, r->size); if (__predict_false(ctx->ifc_flags & IFC_QFLUSH)) { DBG_COUNTER_INC(txq_drain_flushing); for (i = 0; i < avail; i++) { if (__predict_true(r->items[(cidx + i) & (r->size-1)] != (void *)txq)) m_free(r->items[(cidx + i) & (r->size-1)]); r->items[(cidx + i) & (r->size-1)] = NULL; } return (avail); } if (__predict_false(if_getdrvflags(ctx->ifc_ifp) & IFF_DRV_OACTIVE)) { txq->ift_qstatus = IFLIB_QUEUE_IDLE; CALLOUT_LOCK(txq); callout_stop(&txq->ift_timer); CALLOUT_UNLOCK(txq); DBG_COUNTER_INC(txq_drain_oactive); return (0); } if (reclaimed) txq->ift_qstatus = IFLIB_QUEUE_IDLE; consumed = mcast_sent = bytes_sent = pkt_sent = 0; count = MIN(avail, TX_BATCH_SIZE); #ifdef INVARIANTS if (iflib_verbose_debug) printf("%s avail=%d ifc_flags=%x txq_avail=%d ", __FUNCTION__, avail, ctx->ifc_flags, TXQ_AVAIL(txq)); #endif do_prefetch = (ctx->ifc_flags & IFC_PREFETCH); avail = TXQ_AVAIL(txq); err = 0; for (desc_used = i = 0; i < count && avail > MAX_TX_DESC(ctx) + 2; i++) { int rem = do_prefetch ? count - i : 0; mp = _ring_peek_one(r, cidx, i, rem); MPASS(mp != NULL && *mp != NULL); if (__predict_false(*mp == (struct mbuf *)txq)) { consumed++; reclaimed++; continue; } in_use_prev = txq->ift_in_use; err = iflib_encap(txq, mp); if (__predict_false(err)) { /* no room - bail out */ if (err == ENOBUFS) break; consumed++; /* we can't send this packet - skip it */ continue; } consumed++; pkt_sent++; m = *mp; DBG_COUNTER_INC(tx_sent); bytes_sent += m->m_pkthdr.len; mcast_sent += !!(m->m_flags & M_MCAST); avail = TXQ_AVAIL(txq); txq->ift_db_pending += (txq->ift_in_use - in_use_prev); desc_used += (txq->ift_in_use - in_use_prev); ETHER_BPF_MTAP(ifp, m); if (__predict_false(!(ifp->if_drv_flags & IFF_DRV_RUNNING))) break; rang = iflib_txd_db_check(ctx, txq, false, in_use_prev); } /* deliberate use of bitwise or to avoid gratuitous short-circuit */ ring = rang ? 
false : (iflib_min_tx_latency | err) || (TXQ_AVAIL(txq) < MAX_TX_DESC(ctx)); iflib_txd_db_check(ctx, txq, ring, txq->ift_in_use); if_inc_counter(ifp, IFCOUNTER_OBYTES, bytes_sent); if_inc_counter(ifp, IFCOUNTER_OPACKETS, pkt_sent); if (mcast_sent) if_inc_counter(ifp, IFCOUNTER_OMCASTS, mcast_sent); #ifdef INVARIANTS if (iflib_verbose_debug) printf("consumed=%d\n", consumed); #endif return (consumed); } static uint32_t iflib_txq_drain_always(struct ifmp_ring *r) { return (1); } static uint32_t iflib_txq_drain_free(struct ifmp_ring *r, uint32_t cidx, uint32_t pidx) { int i, avail; struct mbuf **mp; iflib_txq_t txq; txq = r->cookie; txq->ift_qstatus = IFLIB_QUEUE_IDLE; CALLOUT_LOCK(txq); callout_stop(&txq->ift_timer); CALLOUT_UNLOCK(txq); avail = IDXDIFF(pidx, cidx, r->size); for (i = 0; i < avail; i++) { mp = _ring_peek_one(r, cidx, i, avail - i); if (__predict_false(*mp == (struct mbuf *)txq)) continue; m_freem(*mp); DBG_COUNTER_INC(tx_frees); } MPASS(ifmp_ring_is_stalled(r) == 0); return (avail); } static void iflib_ifmp_purge(iflib_txq_t txq) { struct ifmp_ring *r; r = txq->ift_br; r->drain = iflib_txq_drain_free; r->can_drain = iflib_txq_drain_always; ifmp_ring_check_drainage(r, r->size); r->drain = iflib_txq_drain; r->can_drain = iflib_txq_can_drain; } static void _task_fn_tx(void *context) { iflib_txq_t txq = context; if_ctx_t ctx = txq->ift_ctx; #if defined(ALTQ) || defined(DEV_NETMAP) if_t ifp = ctx->ifc_ifp; #endif int abdicate = ctx->ifc_sysctl_tx_abdicate; #ifdef IFLIB_DIAGNOSTICS txq->ift_cpu_exec_count[curcpu]++; #endif if (!(if_getdrvflags(ctx->ifc_ifp) & IFF_DRV_RUNNING)) return; #ifdef DEV_NETMAP if (if_getcapenable(ifp) & IFCAP_NETMAP) { bus_dmamap_sync(txq->ift_ifdi->idi_tag, txq->ift_ifdi->idi_map, BUS_DMASYNC_POSTREAD); if (ctx->isc_txd_credits_update(ctx->ifc_softc, txq->ift_id, false)) netmap_tx_irq(ifp, txq->ift_id); IFDI_TX_QUEUE_INTR_ENABLE(ctx, txq->ift_id); return; } #endif #ifdef ALTQ if (ALTQ_IS_ENABLED(&ifp->if_snd)) iflib_altq_if_start(ifp); #endif if (txq->ift_db_pending) ifmp_ring_enqueue(txq->ift_br, (void **)&txq, 1, TX_BATCH_SIZE, abdicate); else if (!abdicate) ifmp_ring_check_drainage(txq->ift_br, TX_BATCH_SIZE); /* * When abdicating, we always need to check drainage, not just when we don't enqueue */ if (abdicate) ifmp_ring_check_drainage(txq->ift_br, TX_BATCH_SIZE); if (ctx->ifc_flags & IFC_LEGACY) IFDI_INTR_ENABLE(ctx); else { #ifdef INVARIANTS int rc = #endif IFDI_TX_QUEUE_INTR_ENABLE(ctx, txq->ift_id); KASSERT(rc != ENOTSUP, ("MSI-X support requires queue_intr_enable, but not implemented in driver")); } } static void _task_fn_rx(void *context) { iflib_rxq_t rxq = context; if_ctx_t ctx = rxq->ifr_ctx; bool more; uint16_t budget; #ifdef IFLIB_DIAGNOSTICS rxq->ifr_cpu_exec_count[curcpu]++; #endif DBG_COUNTER_INC(task_fn_rxs); if (__predict_false(!(if_getdrvflags(ctx->ifc_ifp) & IFF_DRV_RUNNING))) return; more = true; #ifdef DEV_NETMAP if (if_getcapenable(ctx->ifc_ifp) & IFCAP_NETMAP) { u_int work = 0; if (netmap_rx_irq(ctx->ifc_ifp, rxq->ifr_id, &work)) { more = false; } } #endif budget = ctx->ifc_sysctl_rx_budget; if (budget == 0) budget = 16; /* XXX */ if (more == false || (more = iflib_rxeof(rxq, budget)) == false) { if (ctx->ifc_flags & IFC_LEGACY) IFDI_INTR_ENABLE(ctx); else { #ifdef INVARIANTS int rc = #endif IFDI_RX_QUEUE_INTR_ENABLE(ctx, rxq->ifr_id); KASSERT(rc != ENOTSUP, ("MSI-X support requires queue_intr_enable, but not implemented in driver")); DBG_COUNTER_INC(rx_intr_enables); } } if (__predict_false(!(if_getdrvflags(ctx->ifc_ifp) & 
IFF_DRV_RUNNING))) return; if (more) GROUPTASK_ENQUEUE(&rxq->ifr_task); } static void _task_fn_admin(void *context) { if_ctx_t ctx = context; if_softc_ctx_t sctx = &ctx->ifc_softc_ctx; iflib_txq_t txq; int i; bool oactive, running, do_reset, do_watchdog, in_detach; uint32_t reset_on = hz / 2; STATE_LOCK(ctx); running = (if_getdrvflags(ctx->ifc_ifp) & IFF_DRV_RUNNING); oactive = (if_getdrvflags(ctx->ifc_ifp) & IFF_DRV_OACTIVE); do_reset = (ctx->ifc_flags & IFC_DO_RESET); do_watchdog = (ctx->ifc_flags & IFC_DO_WATCHDOG); in_detach = (ctx->ifc_flags & IFC_IN_DETACH); ctx->ifc_flags &= ~(IFC_DO_RESET|IFC_DO_WATCHDOG); STATE_UNLOCK(ctx); if ((!running && !oactive) && !(ctx->ifc_sctx->isc_flags & IFLIB_ADMIN_ALWAYS_RUN)) return; if (in_detach) return; CTX_LOCK(ctx); for (txq = ctx->ifc_txqs, i = 0; i < sctx->isc_ntxqsets; i++, txq++) { CALLOUT_LOCK(txq); callout_stop(&txq->ift_timer); CALLOUT_UNLOCK(txq); } if (do_watchdog) { ctx->ifc_watchdog_events++; IFDI_WATCHDOG_RESET(ctx); } IFDI_UPDATE_ADMIN_STATUS(ctx); for (txq = ctx->ifc_txqs, i = 0; i < sctx->isc_ntxqsets; i++, txq++) { #ifdef DEV_NETMAP reset_on = hz / 2; if (if_getcapenable(ctx->ifc_ifp) & IFCAP_NETMAP) iflib_netmap_timer_adjust(ctx, txq, &reset_on); #endif callout_reset_on(&txq->ift_timer, reset_on, iflib_timer, txq, txq->ift_timer.c_cpu); } IFDI_LINK_INTR_ENABLE(ctx); if (do_reset) iflib_if_init_locked(ctx); CTX_UNLOCK(ctx); if (LINK_ACTIVE(ctx) == 0) return; for (txq = ctx->ifc_txqs, i = 0; i < sctx->isc_ntxqsets; i++, txq++) iflib_txq_check_drain(txq, IFLIB_RESTART_BUDGET); } static void _task_fn_iov(void *context) { if_ctx_t ctx = context; if (!(if_getdrvflags(ctx->ifc_ifp) & IFF_DRV_RUNNING) && !(ctx->ifc_sctx->isc_flags & IFLIB_ADMIN_ALWAYS_RUN)) return; CTX_LOCK(ctx); IFDI_VFLR_HANDLE(ctx); CTX_UNLOCK(ctx); } static int iflib_sysctl_int_delay(SYSCTL_HANDLER_ARGS) { int err; if_int_delay_info_t info; if_ctx_t ctx; info = (if_int_delay_info_t)arg1; ctx = info->iidi_ctx; info->iidi_req = req; info->iidi_oidp = oidp; CTX_LOCK(ctx); err = IFDI_SYSCTL_INT_DELAY(ctx, info); CTX_UNLOCK(ctx); return (err); } /********************************************************************* * * IFNET FUNCTIONS * **********************************************************************/ static void iflib_if_init_locked(if_ctx_t ctx) { iflib_stop(ctx); iflib_init_locked(ctx); } static void iflib_if_init(void *arg) { if_ctx_t ctx = arg; CTX_LOCK(ctx); iflib_if_init_locked(ctx); CTX_UNLOCK(ctx); } static int iflib_if_transmit(if_t ifp, struct mbuf *m) { if_ctx_t ctx = if_getsoftc(ifp); iflib_txq_t txq; int err, qidx; int abdicate = ctx->ifc_sysctl_tx_abdicate; if (__predict_false((ifp->if_drv_flags & IFF_DRV_RUNNING) == 0 || !LINK_ACTIVE(ctx))) { DBG_COUNTER_INC(tx_frees); m_freem(m); return (ENOBUFS); } MPASS(m->m_nextpkt == NULL); /* ALTQ-enabled interfaces always use queue 0. */ qidx = 0; if ((NTXQSETS(ctx) > 1) && M_HASHTYPE_GET(m) && !ALTQ_IS_ENABLED(&ifp->if_snd)) qidx = QIDX(ctx, m); /* * XXX calculate buf_ring based on flowid (divvy up bits?) 
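* (Illustrative sketch only, not the in-tree QIDX() definition: one simple scheme would be qidx = m->m_pkthdr.flowid % NTXQSETS(ctx), i.e. modulo-spreading the RSS hash across the tx queue sets; divvying up bits would instead dedicate a fixed slice of the flowid bits to ring selection.)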
*/ txq = &ctx->ifc_txqs[qidx]; #ifdef DRIVER_BACKPRESSURE if (txq->ift_closed) { while (m != NULL) { next = m->m_nextpkt; m->m_nextpkt = NULL; m_freem(m); DBG_COUNTER_INC(tx_frees); m = next; } return (ENOBUFS); } #endif #ifdef notyet qidx = count = 0; mp = marr; next = m; do { count++; next = next->m_nextpkt; } while (next != NULL); if (count > nitems(marr)) if ((mp = malloc(count*sizeof(struct mbuf *), M_IFLIB, M_NOWAIT)) == NULL) { /* XXX check nextpkt */ m_freem(m); /* XXX simplify for now */ DBG_COUNTER_INC(tx_frees); return (ENOBUFS); } for (next = m, i = 0; next != NULL; i++) { mp[i] = next; next = next->m_nextpkt; mp[i]->m_nextpkt = NULL; } #endif DBG_COUNTER_INC(tx_seen); err = ifmp_ring_enqueue(txq->ift_br, (void **)&m, 1, TX_BATCH_SIZE, abdicate); if (abdicate) GROUPTASK_ENQUEUE(&txq->ift_task); if (err) { if (!abdicate) GROUPTASK_ENQUEUE(&txq->ift_task); /* support forthcoming later */ #ifdef DRIVER_BACKPRESSURE txq->ift_closed = TRUE; #endif ifmp_ring_check_drainage(txq->ift_br, TX_BATCH_SIZE); m_freem(m); DBG_COUNTER_INC(tx_frees); } return (err); } #ifdef ALTQ /* * The overall approach to integrating iflib with ALTQ is to continue to use * the iflib mp_ring machinery between the ALTQ queue(s) and the hardware * ring. Technically, when using ALTQ, queueing to an intermediate mp_ring * is redundant/unnecessary, but doing so minimizes the amount of * ALTQ-specific code required in iflib. It is assumed that the overhead of * redundantly queueing to an intermediate mp_ring is swamped by the * performance limitations inherent in using ALTQ. * * When ALTQ support is compiled in, all iflib drivers will use a transmit * routine, iflib_altq_if_transmit(), that checks if ALTQ is enabled for the * given interface. If ALTQ is enabled for an interface, then all * transmitted packets for that interface will be submitted to the ALTQ * subsystem via IFQ_ENQUEUE(). We don't use the legacy if_transmit() * implementation because it uses IFQ_HANDOFF(), which will duplicatively * update stats that the iflib machinery handles, and which is sensitive to * the disused IFF_DRV_OACTIVE flag. Additionally, iflib_altq_if_start() * will be installed as the start routine for use by ALTQ facilities that * need to trigger queue drains on a scheduled basis. * */ static void iflib_altq_if_start(if_t ifp) { struct ifaltq *ifq = &ifp->if_snd; struct mbuf *m; IFQ_LOCK(ifq); IFQ_DEQUEUE_NOLOCK(ifq, m); while (m != NULL) { iflib_if_transmit(ifp, m); IFQ_DEQUEUE_NOLOCK(ifq, m); } IFQ_UNLOCK(ifq); } static int iflib_altq_if_transmit(if_t ifp, struct mbuf *m) { int err; if (ALTQ_IS_ENABLED(&ifp->if_snd)) { IFQ_ENQUEUE(&ifp->if_snd, m, err); if (err == 0) iflib_altq_if_start(ifp); } else err = iflib_if_transmit(ifp, m); return (err); } #endif /* ALTQ */ static void iflib_if_qflush(if_t ifp) { if_ctx_t ctx = if_getsoftc(ifp); iflib_txq_t txq = ctx->ifc_txqs; int i; STATE_LOCK(ctx); ctx->ifc_flags |= IFC_QFLUSH; STATE_UNLOCK(ctx); for (i = 0; i < NTXQSETS(ctx); i++, txq++) while (!(ifmp_ring_is_idle(txq->ift_br) || ifmp_ring_is_stalled(txq->ift_br))) iflib_txq_check_drain(txq, 0); STATE_LOCK(ctx); ctx->ifc_flags &= ~IFC_QFLUSH; STATE_UNLOCK(ctx); /* * When ALTQ is enabled, this will also take care of purging the * ALTQ queue(s).
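* (Note: the stack's if_qflush() purges if_snd, where ALTQ keeps its queued packets, so no ALTQ-specific purge should be needed here; descriptive note only, no behavior change.)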
*/ if_qflush(ifp); } #define IFCAP_FLAGS (IFCAP_HWCSUM_IPV6 | IFCAP_HWCSUM | IFCAP_LRO | \ IFCAP_TSO | IFCAP_VLAN_HWTAGGING | IFCAP_HWSTATS | \ IFCAP_VLAN_MTU | IFCAP_VLAN_HWFILTER | \ IFCAP_VLAN_HWTSO | IFCAP_VLAN_HWCSUM) static int iflib_if_ioctl(if_t ifp, u_long command, caddr_t data) { if_ctx_t ctx = if_getsoftc(ifp); struct ifreq *ifr = (struct ifreq *)data; #if defined(INET) || defined(INET6) struct ifaddr *ifa = (struct ifaddr *)data; #endif bool avoid_reset = FALSE; int err = 0, reinit = 0, bits; switch (command) { case SIOCSIFADDR: #ifdef INET if (ifa->ifa_addr->sa_family == AF_INET) avoid_reset = TRUE; #endif #ifdef INET6 if (ifa->ifa_addr->sa_family == AF_INET6) avoid_reset = TRUE; #endif /* ** Calling init results in link renegotiation, ** so we avoid doing it when possible. */ if (avoid_reset) { if_setflagbits(ifp, IFF_UP,0); if (!(if_getdrvflags(ifp) & IFF_DRV_RUNNING)) reinit = 1; #ifdef INET if (!(if_getflags(ifp) & IFF_NOARP)) arp_ifinit(ifp, ifa); #endif } else err = ether_ioctl(ifp, command, data); break; case SIOCSIFMTU: CTX_LOCK(ctx); if (ifr->ifr_mtu == if_getmtu(ifp)) { CTX_UNLOCK(ctx); break; } bits = if_getdrvflags(ifp); /* stop the driver and free any clusters before proceeding */ iflib_stop(ctx); if ((err = IFDI_MTU_SET(ctx, ifr->ifr_mtu)) == 0) { STATE_LOCK(ctx); if (ifr->ifr_mtu > ctx->ifc_max_fl_buf_size) ctx->ifc_flags |= IFC_MULTISEG; else ctx->ifc_flags &= ~IFC_MULTISEG; STATE_UNLOCK(ctx); err = if_setmtu(ifp, ifr->ifr_mtu); } iflib_init_locked(ctx); STATE_LOCK(ctx); if_setdrvflags(ifp, bits); STATE_UNLOCK(ctx); CTX_UNLOCK(ctx); break; case SIOCSIFFLAGS: CTX_LOCK(ctx); if (if_getflags(ifp) & IFF_UP) { if (if_getdrvflags(ifp) & IFF_DRV_RUNNING) { if ((if_getflags(ifp) ^ ctx->ifc_if_flags) & (IFF_PROMISC | IFF_ALLMULTI)) { err = IFDI_PROMISC_SET(ctx, if_getflags(ifp)); } } else reinit = 1; } else if (if_getdrvflags(ifp) & IFF_DRV_RUNNING) { iflib_stop(ctx); } ctx->ifc_if_flags = if_getflags(ifp); CTX_UNLOCK(ctx); break; case SIOCADDMULTI: case SIOCDELMULTI: if (if_getdrvflags(ifp) & IFF_DRV_RUNNING) { CTX_LOCK(ctx); IFDI_INTR_DISABLE(ctx); IFDI_MULTI_SET(ctx); IFDI_INTR_ENABLE(ctx); CTX_UNLOCK(ctx); } break; case SIOCSIFMEDIA: CTX_LOCK(ctx); IFDI_MEDIA_SET(ctx); CTX_UNLOCK(ctx); /* falls thru */ case SIOCGIFMEDIA: case SIOCGIFXMEDIA: err = ifmedia_ioctl(ifp, ifr, &ctx->ifc_media, command); break; case SIOCGI2C: { struct ifi2creq i2c; err = copyin(ifr_data_get_ptr(ifr), &i2c, sizeof(i2c)); if (err != 0) break; if (i2c.dev_addr != 0xA0 && i2c.dev_addr != 0xA2) { err = EINVAL; break; } if (i2c.len > sizeof(i2c.data)) { err = EINVAL; break; } if ((err = IFDI_I2C_REQ(ctx, &i2c)) == 0) err = copyout(&i2c, ifr_data_get_ptr(ifr), sizeof(i2c)); break; } case SIOCSIFCAP: { int mask, setmask, oldmask; oldmask = if_getcapenable(ifp); mask = ifr->ifr_reqcap ^ oldmask; mask &= ctx->ifc_softc_ctx.isc_capabilities; setmask = 0; #ifdef TCP_OFFLOAD setmask |= mask & (IFCAP_TOE4|IFCAP_TOE6); #endif setmask |= (mask & IFCAP_FLAGS); setmask |= (mask & IFCAP_WOL); /* * If any RX csum has changed, change all the ones that * are supported by the driver. 
if (setmask & (IFCAP_RXCSUM | IFCAP_RXCSUM_IPV6)) { setmask |= ctx->ifc_softc_ctx.isc_capabilities & (IFCAP_RXCSUM | IFCAP_RXCSUM_IPV6); } /* * We want to ensure that traffic has stopped before we change any of the flags */ if (setmask) { CTX_LOCK(ctx); bits = if_getdrvflags(ifp); if (bits & IFF_DRV_RUNNING && setmask & ~IFCAP_WOL) iflib_stop(ctx); STATE_LOCK(ctx); if_togglecapenable(ifp, setmask); STATE_UNLOCK(ctx); if (bits & IFF_DRV_RUNNING && setmask & ~IFCAP_WOL) iflib_init_locked(ctx); STATE_LOCK(ctx); if_setdrvflags(ifp, bits); STATE_UNLOCK(ctx); CTX_UNLOCK(ctx); } if_vlancap(ifp); break; } case SIOCGPRIVATE_0: case SIOCSDRVSPEC: case SIOCGDRVSPEC: CTX_LOCK(ctx); err = IFDI_PRIV_IOCTL(ctx, command, data); CTX_UNLOCK(ctx); break; default: err = ether_ioctl(ifp, command, data); break; } if (reinit) iflib_if_init(ctx); return (err); } static uint64_t iflib_if_get_counter(if_t ifp, ift_counter cnt) { if_ctx_t ctx = if_getsoftc(ifp); return (IFDI_GET_COUNTER(ctx, cnt)); } /********************************************************************* * * OTHER FUNCTIONS EXPORTED TO THE STACK * **********************************************************************/ static void iflib_vlan_register(void *arg, if_t ifp, uint16_t vtag) { if_ctx_t ctx = if_getsoftc(ifp); if ((void *)ctx != arg) return; if ((vtag == 0) || (vtag > 4095)) return; CTX_LOCK(ctx); IFDI_VLAN_REGISTER(ctx, vtag); /* Re-init to load the changes */ if (if_getcapenable(ifp) & IFCAP_VLAN_HWFILTER) iflib_if_init_locked(ctx); CTX_UNLOCK(ctx); } static void iflib_vlan_unregister(void *arg, if_t ifp, uint16_t vtag) { if_ctx_t ctx = if_getsoftc(ifp); if ((void *)ctx != arg) return; if ((vtag == 0) || (vtag > 4095)) return; CTX_LOCK(ctx); IFDI_VLAN_UNREGISTER(ctx, vtag); /* Re-init to load the changes */ if (if_getcapenable(ifp) & IFCAP_VLAN_HWFILTER) iflib_if_init_locked(ctx); CTX_UNLOCK(ctx); } static void iflib_led_func(void *arg, int onoff) { if_ctx_t ctx = arg; CTX_LOCK(ctx); IFDI_LED_FUNC(ctx, onoff); CTX_UNLOCK(ctx); } /********************************************************************* * * BUS FUNCTION DEFINITIONS * **********************************************************************/ int iflib_device_probe(device_t dev) { pci_vendor_info_t *ent; uint16_t pci_vendor_id, pci_device_id; uint16_t pci_subvendor_id, pci_subdevice_id; uint16_t pci_rev_id; if_shared_ctx_t sctx; if ((sctx = DEVICE_REGISTER(dev)) == NULL || sctx->isc_magic != IFLIB_MAGIC) return (ENOTSUP); pci_vendor_id = pci_get_vendor(dev); pci_device_id = pci_get_device(dev); pci_subvendor_id = pci_get_subvendor(dev); pci_subdevice_id = pci_get_subdevice(dev); pci_rev_id = pci_get_revid(dev); if (sctx->isc_parse_devinfo != NULL) sctx->isc_parse_devinfo(&pci_device_id, &pci_subvendor_id, &pci_subdevice_id, &pci_rev_id); ent = sctx->isc_vendor_info; while (ent->pvi_vendor_id != 0) { if (pci_vendor_id != ent->pvi_vendor_id) { ent++; continue; } if ((pci_device_id == ent->pvi_device_id) && ((pci_subvendor_id == ent->pvi_subvendor_id) || (ent->pvi_subvendor_id == 0)) && ((pci_subdevice_id == ent->pvi_subdevice_id) || (ent->pvi_subdevice_id == 0)) && ((pci_rev_id == ent->pvi_rev_id) || (ent->pvi_rev_id == 0))) { device_set_desc_copy(dev, ent->pvi_name); /* this needs to be changed to zero if the bus probing code * ever stops re-probing on best match because the sctx * may have its values overwritten by register calls * in subsequent probes */ return (BUS_PROBE_DEFAULT); } ent++; } return (ENXIO); } static void iflib_reset_qvalues(if_ctx_t ctx) { if_softc_ctx_t scctx =
&ctx->ifc_softc_ctx; if_shared_ctx_t sctx = ctx->ifc_sctx; device_t dev = ctx->ifc_dev; int i; scctx->isc_txrx_budget_bytes_max = IFLIB_MAX_TX_BYTES; scctx->isc_tx_qdepth = IFLIB_DEFAULT_TX_QDEPTH; /* * XXX sanity check that ntxd & nrxd are a power of 2 */ if (ctx->ifc_sysctl_ntxqs != 0) scctx->isc_ntxqsets = ctx->ifc_sysctl_ntxqs; if (ctx->ifc_sysctl_nrxqs != 0) scctx->isc_nrxqsets = ctx->ifc_sysctl_nrxqs; for (i = 0; i < sctx->isc_ntxqs; i++) { if (ctx->ifc_sysctl_ntxds[i] != 0) scctx->isc_ntxd[i] = ctx->ifc_sysctl_ntxds[i]; else scctx->isc_ntxd[i] = sctx->isc_ntxd_default[i]; } for (i = 0; i < sctx->isc_nrxqs; i++) { if (ctx->ifc_sysctl_nrxds[i] != 0) scctx->isc_nrxd[i] = ctx->ifc_sysctl_nrxds[i]; else scctx->isc_nrxd[i] = sctx->isc_nrxd_default[i]; } for (i = 0; i < sctx->isc_nrxqs; i++) { if (scctx->isc_nrxd[i] < sctx->isc_nrxd_min[i]) { device_printf(dev, "nrxd%d: %d less than nrxd_min %d - resetting to min\n", i, scctx->isc_nrxd[i], sctx->isc_nrxd_min[i]); scctx->isc_nrxd[i] = sctx->isc_nrxd_min[i]; } if (scctx->isc_nrxd[i] > sctx->isc_nrxd_max[i]) { device_printf(dev, "nrxd%d: %d greater than nrxd_max %d - resetting to max\n", i, scctx->isc_nrxd[i], sctx->isc_nrxd_max[i]); scctx->isc_nrxd[i] = sctx->isc_nrxd_max[i]; } } for (i = 0; i < sctx->isc_ntxqs; i++) { if (scctx->isc_ntxd[i] < sctx->isc_ntxd_min[i]) { device_printf(dev, "ntxd%d: %d less than ntxd_min %d - resetting to min\n", i, scctx->isc_ntxd[i], sctx->isc_ntxd_min[i]); scctx->isc_ntxd[i] = sctx->isc_ntxd_min[i]; } if (scctx->isc_ntxd[i] > sctx->isc_ntxd_max[i]) { device_printf(dev, "ntxd%d: %d greater than ntxd_max %d - resetting to max\n", i, scctx->isc_ntxd[i], sctx->isc_ntxd_max[i]); scctx->isc_ntxd[i] = sctx->isc_ntxd_max[i]; } } } int iflib_device_register(device_t dev, void *sc, if_shared_ctx_t sctx, if_ctx_t *ctxp) { int err, rid, msix; if_ctx_t ctx; if_t ifp; if_softc_ctx_t scctx; int i; uint16_t main_txq; uint16_t main_rxq; ctx = malloc(sizeof(* ctx), M_IFLIB, M_WAITOK|M_ZERO); if (sc == NULL) { sc = malloc(sctx->isc_driver->size, M_IFLIB, M_WAITOK|M_ZERO); device_set_softc(dev, ctx); ctx->ifc_flags |= IFC_SC_ALLOCATED; } ctx->ifc_sctx = sctx; ctx->ifc_dev = dev; ctx->ifc_softc = sc; if ((err = iflib_register(ctx)) != 0) { device_printf(dev, "iflib_register failed %d\n", err); goto fail_ctx_free; } iflib_add_device_sysctl_pre(ctx); scctx = &ctx->ifc_softc_ctx; ifp = ctx->ifc_ifp; iflib_reset_qvalues(ctx); CTX_LOCK(ctx); if ((err = IFDI_ATTACH_PRE(ctx)) != 0) { device_printf(dev, "IFDI_ATTACH_PRE failed %d\n", err); goto fail_unlock; } _iflib_pre_assert(scctx); ctx->ifc_txrx = *scctx->isc_txrx; #ifdef INVARIANTS MPASS(scctx->isc_capabilities); if (scctx->isc_capabilities & IFCAP_TXCSUM) MPASS(scctx->isc_tx_csum_flags); #endif if_setcapabilities(ifp, scctx->isc_capabilities | IFCAP_HWSTATS); if_setcapenable(ifp, scctx->isc_capenable | IFCAP_HWSTATS); if (scctx->isc_ntxqsets == 0 || (scctx->isc_ntxqsets_max && scctx->isc_ntxqsets_max < scctx->isc_ntxqsets)) scctx->isc_ntxqsets = scctx->isc_ntxqsets_max; if (scctx->isc_nrxqsets == 0 || (scctx->isc_nrxqsets_max && scctx->isc_nrxqsets_max < scctx->isc_nrxqsets)) scctx->isc_nrxqsets = scctx->isc_nrxqsets_max; main_txq = (sctx->isc_flags & IFLIB_HAS_TXCQ) ? 1 : 0; main_rxq = (sctx->isc_flags & IFLIB_HAS_RXCQ) ? 
1 : 0; /* XXX change for per-queue sizes */ device_printf(dev, "Using %d tx descriptors and %d rx descriptors\n", scctx->isc_ntxd[main_txq], scctx->isc_nrxd[main_rxq]); for (i = 0; i < sctx->isc_nrxqs; i++) { if (!powerof2(scctx->isc_nrxd[i])) { /* round down instead? */ device_printf(dev, "# rx descriptors must be a power of 2\n"); err = EINVAL; goto fail_iflib_detach; } } for (i = 0; i < sctx->isc_ntxqs; i++) { if (!powerof2(scctx->isc_ntxd[i])) { device_printf(dev, "# tx descriptors must be a power of 2"); err = EINVAL; goto fail_iflib_detach; } } if (scctx->isc_tx_nsegments > scctx->isc_ntxd[main_txq] / MAX_SINGLE_PACKET_FRACTION) scctx->isc_tx_nsegments = max(1, scctx->isc_ntxd[main_txq] / MAX_SINGLE_PACKET_FRACTION); if (scctx->isc_tx_tso_segments_max > scctx->isc_ntxd[main_txq] / MAX_SINGLE_PACKET_FRACTION) scctx->isc_tx_tso_segments_max = max(1, scctx->isc_ntxd[main_txq] / MAX_SINGLE_PACKET_FRACTION); /* TSO parameters - dig these out of the data sheet - simply correspond to tag setup */ if (if_getcapabilities(ifp) & IFCAP_TSO) { /* * The stack can't handle a TSO size larger than IP_MAXPACKET, * but some MACs do. */ if_sethwtsomax(ifp, min(scctx->isc_tx_tso_size_max, IP_MAXPACKET)); /* * Take maximum number of m_pullup(9)'s in iflib_parse_header() * into account. In the worst case, each of these calls will * add another mbuf and, thus, the requirement for another DMA * segment. So for best performance, it doesn't make sense to * advertise a maximum of TSO segments that typically will * require defragmentation in iflib_encap(). */ if_sethwtsomaxsegcount(ifp, scctx->isc_tx_tso_segments_max - 3); if_sethwtsomaxsegsize(ifp, scctx->isc_tx_tso_segsize_max); } if (scctx->isc_rss_table_size == 0) scctx->isc_rss_table_size = 64; scctx->isc_rss_table_mask = scctx->isc_rss_table_size-1; GROUPTASK_INIT(&ctx->ifc_admin_task, 0, _task_fn_admin, ctx); /* XXX format name */ taskqgroup_attach(qgroup_if_config_tqg, &ctx->ifc_admin_task, ctx, NULL, NULL, "admin"); /* Set up cpu set. If it fails, use the set of all CPUs. */ if (bus_get_cpus(dev, INTR_CPUS, sizeof(ctx->ifc_cpus), &ctx->ifc_cpus) != 0) { device_printf(dev, "Unable to fetch CPU list\n"); CPU_COPY(&all_cpus, &ctx->ifc_cpus); } MPASS(CPU_COUNT(&ctx->ifc_cpus) > 0); /* ** Now set up MSI or MSI-X, should return us the number of supported ** vectors (will be 1 for a legacy interrupt and MSI). */ if (sctx->isc_flags & IFLIB_SKIP_MSIX) { msix = scctx->isc_vectors; } else if (scctx->isc_msix_bar != 0) /* * The simple fact that isc_msix_bar is not 0 does not mean * we have a good value there that is known to work. */ msix = iflib_msix_init(ctx); else { scctx->isc_vectors = 1; scctx->isc_ntxqsets = 1; scctx->isc_nrxqsets = 1; scctx->isc_intr = IFLIB_INTR_LEGACY; msix = 0; } /* Get memory for the station queues */ if ((err = iflib_queues_alloc(ctx))) { device_printf(dev, "Unable to allocate queue memory\n"); goto fail_intr_free; } if ((err = iflib_qset_structures_setup(ctx))) goto fail_queues; /* * Group taskqueues aren't properly set up until SMP is started, * so we disable interrupts until we can handle them post * SI_SUB_SMP. * * XXX: disabling interrupts doesn't actually work, at least for * the non-MSI case. When they occur before SI_SUB_SMP completes, * we do null handling and depend on this not causing too large an * interrupt storm.
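* (For orientation, the remainder of attach is: disable interrupts, assign MSI-X vectors or fall back to legacy/MSI setup, ether_ifattach(), then IFDI_ATTACH_POST(); the fail_* labels below unwind these steps in reverse.)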
*/ IFDI_INTR_DISABLE(ctx); if (msix > 1 && (err = IFDI_MSIX_INTR_ASSIGN(ctx, msix)) != 0) { device_printf(dev, "IFDI_MSIX_INTR_ASSIGN failed %d\n", err); goto fail_queues; } if (msix <= 1) { rid = 0; if (scctx->isc_intr == IFLIB_INTR_MSI) { MPASS(msix == 1); rid = 1; } if ((err = iflib_legacy_setup(ctx, ctx->isc_legacy_intr, ctx->ifc_softc, &rid, "irq0")) != 0) { device_printf(dev, "iflib_legacy_setup failed %d\n", err); goto fail_queues; } } ether_ifattach(ctx->ifc_ifp, ctx->ifc_mac); if ((err = IFDI_ATTACH_POST(ctx)) != 0) { device_printf(dev, "IFDI_ATTACH_POST failed %d\n", err); goto fail_detach; } /* * Tell the upper layer(s) if IFCAP_VLAN_MTU is supported. * This must appear after the call to ether_ifattach() because * ether_ifattach() sets if_hdrlen to the default value. */ if (if_getcapabilities(ifp) & IFCAP_VLAN_MTU) if_setifheaderlen(ifp, sizeof(struct ether_vlan_header)); if ((err = iflib_netmap_attach(ctx))) { device_printf(ctx->ifc_dev, "netmap attach failed: %d\n", err); goto fail_detach; } *ctxp = ctx; NETDUMP_SET(ctx->ifc_ifp, iflib); if_setgetcounterfn(ctx->ifc_ifp, iflib_if_get_counter); iflib_add_device_sysctl_post(ctx); ctx->ifc_flags |= IFC_INIT_DONE; CTX_UNLOCK(ctx); return (0); fail_detach: ether_ifdetach(ctx->ifc_ifp); fail_intr_free: iflib_free_intr_mem(ctx); fail_queues: iflib_tx_structures_free(ctx); iflib_rx_structures_free(ctx); fail_iflib_detach: IFDI_DETACH(ctx); fail_unlock: CTX_UNLOCK(ctx); fail_ctx_free: if (ctx->ifc_flags & IFC_SC_ALLOCATED) free(ctx->ifc_softc, M_IFLIB); free(ctx, M_IFLIB); return (err); } int iflib_pseudo_register(device_t dev, if_shared_ctx_t sctx, if_ctx_t *ctxp, struct iflib_cloneattach_ctx *clctx) { int err; if_ctx_t ctx; if_t ifp; if_softc_ctx_t scctx; int i; void *sc; uint16_t main_txq; uint16_t main_rxq; ctx = malloc(sizeof(*ctx), M_IFLIB, M_WAITOK|M_ZERO); sc = malloc(sctx->isc_driver->size, M_IFLIB, M_WAITOK|M_ZERO); ctx->ifc_flags |= IFC_SC_ALLOCATED; if (sctx->isc_flags & (IFLIB_PSEUDO|IFLIB_VIRTUAL)) ctx->ifc_flags |= IFC_PSEUDO; ctx->ifc_sctx = sctx; ctx->ifc_softc = sc; ctx->ifc_dev = dev; if ((err = iflib_register(ctx)) != 0) { device_printf(dev, "%s: iflib_register failed %d\n", __func__, err); goto fail_ctx_free; } iflib_add_device_sysctl_pre(ctx); scctx = &ctx->ifc_softc_ctx; ifp = ctx->ifc_ifp; /* * XXX sanity check that ntxd & nrxd are a power of 2 */ iflib_reset_qvalues(ctx); if ((err = IFDI_ATTACH_PRE(ctx)) != 0) { device_printf(dev, "IFDI_ATTACH_PRE failed %d\n", err); goto fail_ctx_free; } if (sctx->isc_flags & IFLIB_GEN_MAC) iflib_gen_mac(ctx); if ((err = IFDI_CLONEATTACH(ctx, clctx->cc_ifc, clctx->cc_name, clctx->cc_params)) != 0) { device_printf(dev, "IFDI_CLONEATTACH failed %d\n", err); goto fail_ctx_free; } ifmedia_add(&ctx->ifc_media, IFM_ETHER | IFM_1000_T | IFM_FDX, 0, NULL); ifmedia_add(&ctx->ifc_media, IFM_ETHER | IFM_AUTO, 0, NULL); ifmedia_set(&ctx->ifc_media, IFM_ETHER | IFM_AUTO); #ifdef INVARIANTS MPASS(scctx->isc_capabilities); if (scctx->isc_capabilities & IFCAP_TXCSUM) MPASS(scctx->isc_tx_csum_flags); #endif if_setcapabilities(ifp, scctx->isc_capabilities | IFCAP_HWSTATS | IFCAP_LINKSTATE); if_setcapenable(ifp, scctx->isc_capenable | IFCAP_HWSTATS | IFCAP_LINKSTATE); ifp->if_flags |= IFF_NOGROUP; if (sctx->isc_flags & IFLIB_PSEUDO) { ether_ifattach(ctx->ifc_ifp, ctx->ifc_mac); if ((err = IFDI_ATTACH_POST(ctx)) != 0) { device_printf(dev, "IFDI_ATTACH_POST failed %d\n", err); goto fail_detach; } *ctxp = ctx; /* * Tell the upper layer(s) if IFCAP_VLAN_MTU is supported. 
* This must appear after the call to ether_ifattach() because * ether_ifattach() sets if_hdrlen to the default value. */ if (if_getcapabilities(ifp) & IFCAP_VLAN_MTU) if_setifheaderlen(ifp, sizeof(struct ether_vlan_header)); if_setgetcounterfn(ctx->ifc_ifp, iflib_if_get_counter); iflib_add_device_sysctl_post(ctx); ctx->ifc_flags |= IFC_INIT_DONE; return (0); } _iflib_pre_assert(scctx); ctx->ifc_txrx = *scctx->isc_txrx; if (scctx->isc_ntxqsets == 0 || (scctx->isc_ntxqsets_max && scctx->isc_ntxqsets_max < scctx->isc_ntxqsets)) scctx->isc_ntxqsets = scctx->isc_ntxqsets_max; if (scctx->isc_nrxqsets == 0 || (scctx->isc_nrxqsets_max && scctx->isc_nrxqsets_max < scctx->isc_nrxqsets)) scctx->isc_nrxqsets = scctx->isc_nrxqsets_max; main_txq = (sctx->isc_flags & IFLIB_HAS_TXCQ) ? 1 : 0; main_rxq = (sctx->isc_flags & IFLIB_HAS_RXCQ) ? 1 : 0; /* XXX change for per-queue sizes */ device_printf(dev, "Using %d tx descriptors and %d rx descriptors\n", scctx->isc_ntxd[main_txq], scctx->isc_nrxd[main_rxq]); for (i = 0; i < sctx->isc_nrxqs; i++) { if (!powerof2(scctx->isc_nrxd[i])) { /* round down instead? */ device_printf(dev, "# rx descriptors must be a power of 2\n"); err = EINVAL; goto fail_iflib_detach; } } for (i = 0; i < sctx->isc_ntxqs; i++) { if (!powerof2(scctx->isc_ntxd[i])) { device_printf(dev, "# tx descriptors must be a power of 2"); err = EINVAL; goto fail_iflib_detach; } } if (scctx->isc_tx_nsegments > scctx->isc_ntxd[main_txq] / MAX_SINGLE_PACKET_FRACTION) scctx->isc_tx_nsegments = max(1, scctx->isc_ntxd[main_txq] / MAX_SINGLE_PACKET_FRACTION); if (scctx->isc_tx_tso_segments_max > scctx->isc_ntxd[main_txq] / MAX_SINGLE_PACKET_FRACTION) scctx->isc_tx_tso_segments_max = max(1, scctx->isc_ntxd[main_txq] / MAX_SINGLE_PACKET_FRACTION); /* TSO parameters - dig these out of the data sheet - simply correspond to tag setup */ if (if_getcapabilities(ifp) & IFCAP_TSO) { /* * The stack can't handle a TSO size larger than IP_MAXPACKET, * but some MACs do. */ if_sethwtsomax(ifp, min(scctx->isc_tx_tso_size_max, IP_MAXPACKET)); /* * Take maximum number of m_pullup(9)'s in iflib_parse_header() * into account. In the worst case, each of these calls will * add another mbuf and, thus, the requirement for another DMA * segment. So for best performance, it doesn't make sense to * advertise a maximum of TSO segments that typically will * require defragmentation in iflib_encap(). */ if_sethwtsomaxsegcount(ifp, scctx->isc_tx_tso_segments_max - 3); if_sethwtsomaxsegsize(ifp, scctx->isc_tx_tso_segsize_max); } if (scctx->isc_rss_table_size == 0) scctx->isc_rss_table_size = 64; scctx->isc_rss_table_mask = scctx->isc_rss_table_size-1; GROUPTASK_INIT(&ctx->ifc_admin_task, 0, _task_fn_admin, ctx); /* XXX format name */ taskqgroup_attach(qgroup_if_config_tqg, &ctx->ifc_admin_task, ctx, NULL, NULL, "admin"); /* XXX --- can support > 1 -- but keep it simple for now */ scctx->isc_intr = IFLIB_INTR_LEGACY; /* Get memory for the station queues */ if ((err = iflib_queues_alloc(ctx))) { device_printf(dev, "Unable to allocate queue memory\n"); goto fail_iflib_detach; } if ((err = iflib_qset_structures_setup(ctx))) { device_printf(dev, "qset structure setup failed %d\n", err); goto fail_queues; } /* * XXX What if anything do we want to do about interrupts? */ ether_ifattach(ctx->ifc_ifp, ctx->ifc_mac); if ((err = IFDI_ATTACH_POST(ctx)) != 0) { device_printf(dev, "IFDI_ATTACH_POST failed %d\n", err); goto fail_detach; } /* * Tell the upper layer(s) if IFCAP_VLAN_MTU is supported.
* This must appear after the call to ether_ifattach() because * ether_ifattach() sets if_hdrlen to the default value. */ if (if_getcapabilities(ifp) & IFCAP_VLAN_MTU) if_setifheaderlen(ifp, sizeof(struct ether_vlan_header)); /* XXX handle more than one queue */ for (i = 0; i < scctx->isc_nrxqsets; i++) IFDI_RX_CLSET(ctx, 0, i, ctx->ifc_rxqs[i].ifr_fl[0].ifl_sds.ifsd_cl); *ctxp = ctx; if_setgetcounterfn(ctx->ifc_ifp, iflib_if_get_counter); iflib_add_device_sysctl_post(ctx); ctx->ifc_flags |= IFC_INIT_DONE; return (0); fail_detach: ether_ifdetach(ctx->ifc_ifp); fail_queues: iflib_tx_structures_free(ctx); iflib_rx_structures_free(ctx); fail_iflib_detach: IFDI_DETACH(ctx); fail_ctx_free: free(ctx->ifc_softc, M_IFLIB); free(ctx, M_IFLIB); return (err); } int iflib_pseudo_deregister(if_ctx_t ctx) { if_t ifp = ctx->ifc_ifp; iflib_txq_t txq; iflib_rxq_t rxq; int i, j; struct taskqgroup *tqg; iflib_fl_t fl; /* Unregister VLAN events */ if (ctx->ifc_vlan_attach_event != NULL) EVENTHANDLER_DEREGISTER(vlan_config, ctx->ifc_vlan_attach_event); if (ctx->ifc_vlan_detach_event != NULL) EVENTHANDLER_DEREGISTER(vlan_unconfig, ctx->ifc_vlan_detach_event); ether_ifdetach(ifp); /* ether_ifdetach calls if_qflush - lock must be destroyed afterwards */ CTX_LOCK_DESTROY(ctx); /* XXX drain any dependent tasks */ tqg = qgroup_if_io_tqg; for (txq = ctx->ifc_txqs, i = 0; i < NTXQSETS(ctx); i++, txq++) { callout_drain(&txq->ift_timer); if (txq->ift_task.gt_uniq != NULL) taskqgroup_detach(tqg, &txq->ift_task); } for (i = 0, rxq = ctx->ifc_rxqs; i < NRXQSETS(ctx); i++, rxq++) { if (rxq->ifr_task.gt_uniq != NULL) taskqgroup_detach(tqg, &rxq->ifr_task); for (j = 0, fl = rxq->ifr_fl; j < rxq->ifr_nfl; j++, fl++) free(fl->ifl_rx_bitmap, M_IFLIB); } tqg = qgroup_if_config_tqg; if (ctx->ifc_admin_task.gt_uniq != NULL) taskqgroup_detach(tqg, &ctx->ifc_admin_task); if (ctx->ifc_vflr_task.gt_uniq != NULL) taskqgroup_detach(tqg, &ctx->ifc_vflr_task); if_free(ifp); iflib_tx_structures_free(ctx); iflib_rx_structures_free(ctx); if (ctx->ifc_flags & IFC_SC_ALLOCATED) free(ctx->ifc_softc, M_IFLIB); free(ctx, M_IFLIB); return (0); } int iflib_device_attach(device_t dev) { if_ctx_t ctx; if_shared_ctx_t sctx; if ((sctx = DEVICE_REGISTER(dev)) == NULL || sctx->isc_magic != IFLIB_MAGIC) return (ENOTSUP); pci_enable_busmaster(dev); return (iflib_device_register(dev, NULL, sctx, &ctx)); } int iflib_device_deregister(if_ctx_t ctx) { if_t ifp = ctx->ifc_ifp; iflib_txq_t txq; iflib_rxq_t rxq; device_t dev = ctx->ifc_dev; int i, j; struct taskqgroup *tqg; iflib_fl_t fl; /* Make sure VLANS are not using driver */ if (if_vlantrunkinuse(ifp)) { device_printf(dev, "Vlan in use, detach first\n"); return (EBUSY); } #ifdef PCI_IOV if (!CTX_IS_VF(ctx) && pci_iov_detach(dev) != 0) { device_printf(dev, "SR-IOV in use; detach first.\n"); return (EBUSY); } #endif STATE_LOCK(ctx); ctx->ifc_flags |= IFC_IN_DETACH; STATE_UNLOCK(ctx); CTX_LOCK(ctx); iflib_stop(ctx); CTX_UNLOCK(ctx); /* Unregister VLAN events */ if (ctx->ifc_vlan_attach_event != NULL) EVENTHANDLER_DEREGISTER(vlan_config, ctx->ifc_vlan_attach_event); if (ctx->ifc_vlan_detach_event != NULL) EVENTHANDLER_DEREGISTER(vlan_unconfig, ctx->ifc_vlan_detach_event); iflib_netmap_detach(ifp); ether_ifdetach(ifp); if (ctx->ifc_led_dev != NULL) led_destroy(ctx->ifc_led_dev); /* XXX drain any dependent tasks */ tqg = qgroup_if_io_tqg; for (txq = ctx->ifc_txqs, i = 0; i < NTXQSETS(ctx); i++, txq++) { callout_drain(&txq->ift_timer); if (txq->ift_task.gt_uniq != NULL) taskqgroup_detach(tqg, &txq->ift_task); } for (i
= 0, rxq = ctx->ifc_rxqs; i < NRXQSETS(ctx); i++, rxq++) { if (rxq->ifr_task.gt_uniq != NULL) taskqgroup_detach(tqg, &rxq->ifr_task); for (j = 0, fl = rxq->ifr_fl; j < rxq->ifr_nfl; j++, fl++) free(fl->ifl_rx_bitmap, M_IFLIB); } tqg = qgroup_if_config_tqg; if (ctx->ifc_admin_task.gt_uniq != NULL) taskqgroup_detach(tqg, &ctx->ifc_admin_task); if (ctx->ifc_vflr_task.gt_uniq != NULL) taskqgroup_detach(tqg, &ctx->ifc_vflr_task); CTX_LOCK(ctx); IFDI_DETACH(ctx); CTX_UNLOCK(ctx); /* ether_ifdetach calls if_qflush - lock must be destroyed afterwards */ CTX_LOCK_DESTROY(ctx); device_set_softc(ctx->ifc_dev, NULL); iflib_free_intr_mem(ctx); bus_generic_detach(dev); if_free(ifp); iflib_tx_structures_free(ctx); iflib_rx_structures_free(ctx); if (ctx->ifc_flags & IFC_SC_ALLOCATED) free(ctx->ifc_softc, M_IFLIB); STATE_LOCK_DESTROY(ctx); free(ctx, M_IFLIB); return (0); } static void iflib_free_intr_mem(if_ctx_t ctx) { if (ctx->ifc_softc_ctx.isc_intr != IFLIB_INTR_MSIX) { iflib_irq_free(ctx, &ctx->ifc_legacy_irq); } if (ctx->ifc_softc_ctx.isc_intr != IFLIB_INTR_LEGACY) { pci_release_msi(ctx->ifc_dev); } if (ctx->ifc_msix_mem != NULL) { bus_release_resource(ctx->ifc_dev, SYS_RES_MEMORY, rman_get_rid(ctx->ifc_msix_mem), ctx->ifc_msix_mem); ctx->ifc_msix_mem = NULL; } } int iflib_device_detach(device_t dev) { if_ctx_t ctx = device_get_softc(dev); return (iflib_device_deregister(ctx)); } int iflib_device_suspend(device_t dev) { if_ctx_t ctx = device_get_softc(dev); CTX_LOCK(ctx); IFDI_SUSPEND(ctx); CTX_UNLOCK(ctx); return bus_generic_suspend(dev); } int iflib_device_shutdown(device_t dev) { if_ctx_t ctx = device_get_softc(dev); CTX_LOCK(ctx); IFDI_SHUTDOWN(ctx); CTX_UNLOCK(ctx); return bus_generic_suspend(dev); } int iflib_device_resume(device_t dev) { if_ctx_t ctx = device_get_softc(dev); iflib_txq_t txq = ctx->ifc_txqs; CTX_LOCK(ctx); IFDI_RESUME(ctx); iflib_if_init_locked(ctx); CTX_UNLOCK(ctx); for (int i = 0; i < NTXQSETS(ctx); i++, txq++) iflib_txq_check_drain(txq, IFLIB_RESTART_BUDGET); return (bus_generic_resume(dev)); } int iflib_device_iov_init(device_t dev, uint16_t num_vfs, const nvlist_t *params) { int error; if_ctx_t ctx = device_get_softc(dev); CTX_LOCK(ctx); error = IFDI_IOV_INIT(ctx, num_vfs, params); CTX_UNLOCK(ctx); return (error); } void iflib_device_iov_uninit(device_t dev) { if_ctx_t ctx = device_get_softc(dev); CTX_LOCK(ctx); IFDI_IOV_UNINIT(ctx); CTX_UNLOCK(ctx); } int iflib_device_iov_add_vf(device_t dev, uint16_t vfnum, const nvlist_t *params) { int error; if_ctx_t ctx = device_get_softc(dev); CTX_LOCK(ctx); error = IFDI_IOV_VF_ADD(ctx, vfnum, params); CTX_UNLOCK(ctx); return (error); } /********************************************************************* * * MODULE FUNCTION DEFINITIONS * **********************************************************************/ /* * - Start a fast taskqueue thread for each core * - Start a taskqueue for control operations */ static int iflib_module_init(void) { return (0); } static int iflib_module_event_handler(module_t mod, int what, void *arg) { int err; switch (what) { case MOD_LOAD: if ((err = iflib_module_init()) != 0) return (err); break; case MOD_UNLOAD: return (EBUSY); default: return (EOPNOTSUPP); } return (0); } /********************************************************************* * * PUBLIC FUNCTION DEFINITIONS * ordered as in iflib.h * **********************************************************************/ static void _iflib_assert(if_shared_ctx_t sctx) { MPASS(sctx->isc_tx_maxsize); MPASS(sctx->isc_tx_maxsegsize);
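/* Sanity-check the driver-supplied shared context before it is used: the sizing fields above and below, and the per-qset descriptor min/max/default triples, must all be non-zero in a correctly initialized driver. */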
MPASS(sctx->isc_rx_maxsize); MPASS(sctx->isc_rx_nsegments); MPASS(sctx->isc_rx_maxsegsize); MPASS(sctx->isc_nrxd_min[0]); MPASS(sctx->isc_nrxd_max[0]); MPASS(sctx->isc_nrxd_default[0]); MPASS(sctx->isc_ntxd_min[0]); MPASS(sctx->isc_ntxd_max[0]); MPASS(sctx->isc_ntxd_default[0]); } static void _iflib_pre_assert(if_softc_ctx_t scctx) { MPASS(scctx->isc_txrx->ift_txd_encap); MPASS(scctx->isc_txrx->ift_txd_flush); MPASS(scctx->isc_txrx->ift_txd_credits_update); MPASS(scctx->isc_txrx->ift_rxd_available); MPASS(scctx->isc_txrx->ift_rxd_pkt_get); MPASS(scctx->isc_txrx->ift_rxd_refill); MPASS(scctx->isc_txrx->ift_rxd_flush); } static int iflib_register(if_ctx_t ctx) { if_shared_ctx_t sctx = ctx->ifc_sctx; driver_t *driver = sctx->isc_driver; device_t dev = ctx->ifc_dev; if_t ifp; _iflib_assert(sctx); CTX_LOCK_INIT(ctx); STATE_LOCK_INIT(ctx, device_get_nameunit(ctx->ifc_dev)); ifp = ctx->ifc_ifp = if_alloc(IFT_ETHER); if (ifp == NULL) { device_printf(dev, "can not allocate ifnet structure\n"); return (ENOMEM); } /* * Initialize our context's device specific methods */ kobj_init((kobj_t) ctx, (kobj_class_t) driver); kobj_class_compile((kobj_class_t) driver); driver->refs++; if_initname(ifp, device_get_name(dev), device_get_unit(dev)); if_setsoftc(ifp, ctx); if_setdev(ifp, dev); if_setinitfn(ifp, iflib_if_init); if_setioctlfn(ifp, iflib_if_ioctl); #ifdef ALTQ if_setstartfn(ifp, iflib_altq_if_start); if_settransmitfn(ifp, iflib_altq_if_transmit); if_setsendqready(ifp); #else if_settransmitfn(ifp, iflib_if_transmit); #endif if_setqflushfn(ifp, iflib_if_qflush); if_setflags(ifp, IFF_BROADCAST | IFF_SIMPLEX | IFF_MULTICAST); ctx->ifc_vlan_attach_event = EVENTHANDLER_REGISTER(vlan_config, iflib_vlan_register, ctx, EVENTHANDLER_PRI_FIRST); ctx->ifc_vlan_detach_event = EVENTHANDLER_REGISTER(vlan_unconfig, iflib_vlan_unregister, ctx, EVENTHANDLER_PRI_FIRST); ifmedia_init(&ctx->ifc_media, IFM_IMASK, iflib_media_change, iflib_media_status); return (0); } static int iflib_queues_alloc(if_ctx_t ctx) { if_shared_ctx_t sctx = ctx->ifc_sctx; if_softc_ctx_t scctx = &ctx->ifc_softc_ctx; device_t dev = ctx->ifc_dev; int nrxqsets = scctx->isc_nrxqsets; int ntxqsets = scctx->isc_ntxqsets; iflib_txq_t txq; iflib_rxq_t rxq; iflib_fl_t fl = NULL; int i, j, cpu, err, txconf, rxconf; iflib_dma_info_t ifdip; uint32_t *rxqsizes = scctx->isc_rxqsizes; uint32_t *txqsizes = scctx->isc_txqsizes; uint8_t nrxqs = sctx->isc_nrxqs; uint8_t ntxqs = sctx->isc_ntxqs; int nfree_lists = sctx->isc_nfl ? 
sctx->isc_nfl : 1; caddr_t *vaddrs; uint64_t *paddrs; KASSERT(ntxqs > 0, ("number of queues per qset must be at least 1")); KASSERT(nrxqs > 0, ("number of queues per qset must be at least 1")); /* Allocate the TX ring struct memory */ if (!(ctx->ifc_txqs = (iflib_txq_t) malloc(sizeof(struct iflib_txq) * ntxqsets, M_IFLIB, M_NOWAIT | M_ZERO))) { device_printf(dev, "Unable to allocate TX ring memory\n"); err = ENOMEM; goto fail; } /* Now allocate the RX */ if (!(ctx->ifc_rxqs = (iflib_rxq_t) malloc(sizeof(struct iflib_rxq) * nrxqsets, M_IFLIB, M_NOWAIT | M_ZERO))) { device_printf(dev, "Unable to allocate RX ring memory\n"); err = ENOMEM; goto rx_fail; } txq = ctx->ifc_txqs; rxq = ctx->ifc_rxqs; /* * XXX handle allocation failure */ for (txconf = i = 0, cpu = CPU_FIRST(); i < ntxqsets; i++, txconf++, txq++, cpu = CPU_NEXT(cpu)) { /* Set up some basics */ if ((ifdip = malloc(sizeof(struct iflib_dma_info) * ntxqs, M_IFLIB, M_NOWAIT | M_ZERO)) == NULL) { device_printf(dev, "Unable to allocate TX DMA info memory\n"); err = ENOMEM; goto err_tx_desc; } txq->ift_ifdi = ifdip; for (j = 0; j < ntxqs; j++, ifdip++) { if (iflib_dma_alloc(ctx, txqsizes[j], ifdip, 0)) { device_printf(dev, "Unable to allocate TX descriptors\n"); err = ENOMEM; goto err_tx_desc; } txq->ift_txd_size[j] = scctx->isc_txd_size[j]; bzero((void *)ifdip->idi_vaddr, txqsizes[j]); } txq->ift_ctx = ctx; txq->ift_id = i; if (sctx->isc_flags & IFLIB_HAS_TXCQ) { txq->ift_br_offset = 1; } else { txq->ift_br_offset = 0; } /* XXX fix this */ txq->ift_timer.c_cpu = cpu; if (iflib_txsd_alloc(txq)) { device_printf(dev, "Critical Failure setting up TX buffers\n"); err = ENOMEM; goto err_tx_desc; } /* Initialize the TX lock */ snprintf(txq->ift_mtx_name, MTX_NAME_LEN, "%s:tx(%d):callout", device_get_nameunit(dev), txq->ift_id); mtx_init(&txq->ift_mtx, txq->ift_mtx_name, NULL, MTX_DEF); callout_init_mtx(&txq->ift_timer, &txq->ift_mtx, 0); snprintf(txq->ift_db_mtx_name, MTX_NAME_LEN, "%s:tx(%d):db", device_get_nameunit(dev), txq->ift_id); err = ifmp_ring_alloc(&txq->ift_br, 2048, txq, iflib_txq_drain, iflib_txq_can_drain, M_IFLIB, M_WAITOK); if (err) { /* XXX free any allocated rings */ device_printf(dev, "Unable to allocate buf_ring\n"); goto err_tx_desc; } } for (rxconf = i = 0; i < nrxqsets; i++, rxconf++, rxq++) { /* Set up some basics */ if ((ifdip = malloc(sizeof(struct iflib_dma_info) * nrxqs, M_IFLIB, M_NOWAIT | M_ZERO)) == NULL) { device_printf(dev, "Unable to allocate RX DMA info memory\n"); err = ENOMEM; goto err_tx_desc; } rxq->ifr_ifdi = ifdip; /* XXX this needs to be changed if #rx queues != #tx queues */ rxq->ifr_ntxqirq = 1; rxq->ifr_txqid[0] = i; for (j = 0; j < nrxqs; j++, ifdip++) { if (iflib_dma_alloc(ctx, rxqsizes[j], ifdip, 0)) { device_printf(dev, "Unable to allocate RX descriptors\n"); err = ENOMEM; goto err_tx_desc; } bzero((void *)ifdip->idi_vaddr, rxqsizes[j]); } rxq->ifr_ctx = ctx; rxq->ifr_id = i; if (sctx->isc_flags & IFLIB_HAS_RXCQ) { rxq->ifr_fl_offset = 1; } else { rxq->ifr_fl_offset = 0; } rxq->ifr_nfl = nfree_lists; if (!(fl = (iflib_fl_t) malloc(sizeof(struct iflib_fl) * nfree_lists, M_IFLIB, M_NOWAIT | M_ZERO))) { device_printf(dev, "Unable to allocate free list memory\n"); err = ENOMEM; goto err_tx_desc; } rxq->ifr_fl = fl; for (j = 0; j < nfree_lists; j++) { fl[j].ifl_rxq = rxq; fl[j].ifl_id = j; fl[j].ifl_ifdi = &rxq->ifr_ifdi[j + rxq->ifr_fl_offset]; fl[j].ifl_rxd_size = scctx->isc_rxd_size[j]; } /* Allocate receive buffers for the ring */ if (iflib_rxsd_alloc(rxq)) { device_printf(dev, "Critical Failure 
setting up receive buffers\n"); err = ENOMEM; goto err_rx_desc; } for (j = 0, fl = rxq->ifr_fl; j < rxq->ifr_nfl; j++, fl++) fl->ifl_rx_bitmap = bit_alloc(fl->ifl_size, M_IFLIB, M_WAITOK); } /* TXQs */ vaddrs = malloc(sizeof(caddr_t)*ntxqsets*ntxqs, M_IFLIB, M_WAITOK); paddrs = malloc(sizeof(uint64_t)*ntxqsets*ntxqs, M_IFLIB, M_WAITOK); for (i = 0; i < ntxqsets; i++) { iflib_dma_info_t di = ctx->ifc_txqs[i].ift_ifdi; for (j = 0; j < ntxqs; j++, di++) { vaddrs[i*ntxqs + j] = di->idi_vaddr; paddrs[i*ntxqs + j] = di->idi_paddr; } } if ((err = IFDI_TX_QUEUES_ALLOC(ctx, vaddrs, paddrs, ntxqs, ntxqsets)) != 0) { device_printf(ctx->ifc_dev, "Unable to allocate device TX queue\n"); iflib_tx_structures_free(ctx); free(vaddrs, M_IFLIB); free(paddrs, M_IFLIB); goto err_rx_desc; } free(vaddrs, M_IFLIB); free(paddrs, M_IFLIB); /* RXQs */ vaddrs = malloc(sizeof(caddr_t)*nrxqsets*nrxqs, M_IFLIB, M_WAITOK); paddrs = malloc(sizeof(uint64_t)*nrxqsets*nrxqs, M_IFLIB, M_WAITOK); for (i = 0; i < nrxqsets; i++) { iflib_dma_info_t di = ctx->ifc_rxqs[i].ifr_ifdi; for (j = 0; j < nrxqs; j++, di++) { vaddrs[i*nrxqs + j] = di->idi_vaddr; paddrs[i*nrxqs + j] = di->idi_paddr; } } if ((err = IFDI_RX_QUEUES_ALLOC(ctx, vaddrs, paddrs, nrxqs, nrxqsets)) != 0) { device_printf(ctx->ifc_dev, "Unable to allocate device RX queue\n"); iflib_tx_structures_free(ctx); free(vaddrs, M_IFLIB); free(paddrs, M_IFLIB); goto err_rx_desc; } free(vaddrs, M_IFLIB); free(paddrs, M_IFLIB); return (0); /* XXX handle allocation failure changes */ err_rx_desc: err_tx_desc: rx_fail: if (ctx->ifc_rxqs != NULL) free(ctx->ifc_rxqs, M_IFLIB); ctx->ifc_rxqs = NULL; if (ctx->ifc_txqs != NULL) free(ctx->ifc_txqs, M_IFLIB); ctx->ifc_txqs = NULL; fail: return (err); } static int iflib_tx_structures_setup(if_ctx_t ctx) { iflib_txq_t txq = ctx->ifc_txqs; int i; for (i = 0; i < NTXQSETS(ctx); i++, txq++) iflib_txq_setup(txq); return (0); } static void iflib_tx_structures_free(if_ctx_t ctx) { iflib_txq_t txq = ctx->ifc_txqs; if_shared_ctx_t sctx = ctx->ifc_sctx; int i, j; for (i = 0; i < NTXQSETS(ctx); i++, txq++) { iflib_txq_destroy(txq); for (j = 0; j < sctx->isc_ntxqs; j++) iflib_dma_free(&txq->ift_ifdi[j]); } free(ctx->ifc_txqs, M_IFLIB); ctx->ifc_txqs = NULL; IFDI_QUEUES_FREE(ctx); } /********************************************************************* * * Initialize all receive rings. * **********************************************************************/ static int iflib_rx_structures_setup(if_ctx_t ctx) { iflib_rxq_t rxq = ctx->ifc_rxqs; int q; #if defined(INET6) || defined(INET) int i, err; #endif for (q = 0; q < ctx->ifc_softc_ctx.isc_nrxqsets; q++, rxq++) { #if defined(INET6) || defined(INET) tcp_lro_free(&rxq->ifr_lc); if ((err = tcp_lro_init_args(&rxq->ifr_lc, ctx->ifc_ifp, TCP_LRO_ENTRIES, min(1024, ctx->ifc_softc_ctx.isc_nrxd[rxq->ifr_fl_offset]))) != 0) { device_printf(ctx->ifc_dev, "LRO Initialization failed!\n"); goto fail; } rxq->ifr_lro_enabled = TRUE; #endif IFDI_RXQ_SETUP(ctx, rxq->ifr_id); } return (0); #if defined(INET6) || defined(INET) fail: /* * Free RX software descriptors allocated so far, we will only handle * the rings that completed, the failing case will have * cleaned up for itself. 'q' failed, so it's the terminus. */ rxq = ctx->ifc_rxqs; for (i = 0; i < q; ++i, rxq++) { iflib_rx_sds_free(rxq); rxq->ifr_cq_gen = rxq->ifr_cq_cidx = rxq->ifr_cq_pidx = 0; } return (err); #endif } /********************************************************************* * * Free all receive rings.
* **********************************************************************/ static void iflib_rx_structures_free(if_ctx_t ctx) { iflib_rxq_t rxq = ctx->ifc_rxqs; for (int i = 0; i < ctx->ifc_softc_ctx.isc_nrxqsets; i++, rxq++) { iflib_rx_sds_free(rxq); } free(ctx->ifc_rxqs, M_IFLIB); ctx->ifc_rxqs = NULL; } static int iflib_qset_structures_setup(if_ctx_t ctx) { int err; /* * It is expected that the caller takes care of freeing queues if this * fails. */ if ((err = iflib_tx_structures_setup(ctx)) != 0) { device_printf(ctx->ifc_dev, "iflib_tx_structures_setup failed: %d\n", err); return (err); } if ((err = iflib_rx_structures_setup(ctx)) != 0) device_printf(ctx->ifc_dev, "iflib_rx_structures_setup failed: %d\n", err); return (err); } int iflib_irq_alloc(if_ctx_t ctx, if_irq_t irq, int rid, driver_filter_t filter, void *filter_arg, driver_intr_t handler, void *arg, const char *name) { return (_iflib_irq_alloc(ctx, irq, rid, filter, handler, arg, name)); } #ifdef SMP static int find_nth(if_ctx_t ctx, int qid) { cpuset_t cpus; int i, cpuid, eqid, count; CPU_COPY(&ctx->ifc_cpus, &cpus); count = CPU_COUNT(&cpus); eqid = qid % count; /* clear up to the qid'th bit */ for (i = 0; i < eqid; i++) { cpuid = CPU_FFS(&cpus); MPASS(cpuid != 0); CPU_CLR(cpuid-1, &cpus); } cpuid = CPU_FFS(&cpus); MPASS(cpuid != 0); return (cpuid-1); } #ifdef SCHED_ULE extern struct cpu_group *cpu_top; /* CPU topology */ static int find_child_with_core(int cpu, struct cpu_group *grp) { int i; if (grp->cg_children == 0) return -1; MPASS(grp->cg_child); for (i = 0; i < grp->cg_children; i++) { if (CPU_ISSET(cpu, &grp->cg_child[i].cg_mask)) return i; } return -1; } /* * Find the nth "close" core to the specified core * "close" is defined as the deepest level that shares * at least an L2 cache. With threads, this will be * threads on the same core. If the shared cache is L3 * or higher, simply returns the same core. */ static int find_close_core(int cpu, int core_offset) { struct cpu_group *grp; int i; int fcpu; cpuset_t cs; grp = cpu_top; if (grp == NULL) return cpu; i = 0; while ((i = find_child_with_core(cpu, grp)) != -1) { /* If the child only has one cpu, don't descend */ if (grp->cg_child[i].cg_count <= 1) break; grp = &grp->cg_child[i]; } /* If they don't share at least an L2 cache, use the same CPU */ if (grp->cg_level > CG_SHARE_L2 || grp->cg_level == CG_SHARE_NONE) return cpu; /* Now pick one */ CPU_COPY(&grp->cg_mask, &cs); /* Add the selected CPU offset to core offset.
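* (Worked example on a hypothetical topology: with cg_mask = {4,5,6,7} and cpu = 5, the scan below clears CPU 4 and stops at cpu itself with i = 1, so 1 is folded into core_offset before the final modulo-cg_count pick.)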
*/ for (i = 0; (fcpu = CPU_FFS(&cs)) != 0; i++) { if (fcpu - 1 == cpu) break; CPU_CLR(fcpu - 1, &cs); } MPASS(fcpu); core_offset += i; CPU_COPY(&grp->cg_mask, &cs); for (i = core_offset % grp->cg_count; i > 0; i--) { MPASS(CPU_FFS(&cs)); CPU_CLR(CPU_FFS(&cs) - 1, &cs); } MPASS(CPU_FFS(&cs)); return CPU_FFS(&cs) - 1; } #else static int find_close_core(int cpu, int core_offset __unused) { return cpu; } #endif static int get_core_offset(if_ctx_t ctx, iflib_intr_type_t type, int qid) { switch (type) { case IFLIB_INTR_TX: /* TX queues get cores which share at least an L2 cache with the corresponding RX queue */ /* XXX handle multiple RX threads per core and more than two core per L2 group */ return qid / CPU_COUNT(&ctx->ifc_cpus) + 1; case IFLIB_INTR_RX: case IFLIB_INTR_RXTX: /* RX queues get the specified core */ return qid / CPU_COUNT(&ctx->ifc_cpus); default: return -1; } } #else #define get_core_offset(ctx, type, qid) CPU_FIRST() #define find_close_core(cpuid, tid) CPU_FIRST() #define find_nth(ctx, gid) CPU_FIRST() #endif /* Just to avoid copy/paste */ static inline int iflib_irq_set_affinity(if_ctx_t ctx, if_irq_t irq, iflib_intr_type_t type, int qid, struct grouptask *gtask, struct taskqgroup *tqg, void *uniq, const char *name) { device_t dev; int err, cpuid, tid; dev = ctx->ifc_dev; cpuid = find_nth(ctx, qid); tid = get_core_offset(ctx, type, qid); MPASS(tid >= 0); cpuid = find_close_core(cpuid, tid); err = taskqgroup_attach_cpu(tqg, gtask, uniq, cpuid, dev, irq->ii_res, name); if (err) { device_printf(dev, "taskqgroup_attach_cpu failed %d\n", err); return (err); } #ifdef notyet if (cpuid > ctx->ifc_cpuid_highest) ctx->ifc_cpuid_highest = cpuid; #endif return 0; } int iflib_irq_alloc_generic(if_ctx_t ctx, if_irq_t irq, int rid, iflib_intr_type_t type, driver_filter_t *filter, void *filter_arg, int qid, const char *name) { device_t dev; struct grouptask *gtask; struct taskqgroup *tqg; iflib_filter_info_t info; gtask_fn_t *fn; int tqrid, err; driver_filter_t *intr_fast; void *q; info = &ctx->ifc_filter_info; tqrid = rid; switch (type) { /* XXX merge tx/rx for netmap? 
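* (The switch below picks, per interrupt type, the queue object, filter info, grouptask, taskqgroup, and fast interrupt handler that _iflib_irq_alloc() then wires to the vector.)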
*/ case IFLIB_INTR_TX: q = &ctx->ifc_txqs[qid]; info = &ctx->ifc_txqs[qid].ift_filter_info; gtask = &ctx->ifc_txqs[qid].ift_task; tqg = qgroup_if_io_tqg; fn = _task_fn_tx; intr_fast = iflib_fast_intr; GROUPTASK_INIT(gtask, 0, fn, q); ctx->ifc_flags |= IFC_NETMAP_TX_IRQ; break; case IFLIB_INTR_RX: q = &ctx->ifc_rxqs[qid]; info = &ctx->ifc_rxqs[qid].ifr_filter_info; gtask = &ctx->ifc_rxqs[qid].ifr_task; tqg = qgroup_if_io_tqg; fn = _task_fn_rx; intr_fast = iflib_fast_intr; GROUPTASK_INIT(gtask, 0, fn, q); break; case IFLIB_INTR_RXTX: q = &ctx->ifc_rxqs[qid]; info = &ctx->ifc_rxqs[qid].ifr_filter_info; gtask = &ctx->ifc_rxqs[qid].ifr_task; tqg = qgroup_if_io_tqg; fn = _task_fn_rx; intr_fast = iflib_fast_intr_rxtx; GROUPTASK_INIT(gtask, 0, fn, q); break; case IFLIB_INTR_ADMIN: q = ctx; tqrid = -1; info = &ctx->ifc_filter_info; gtask = &ctx->ifc_admin_task; tqg = qgroup_if_config_tqg; fn = _task_fn_admin; intr_fast = iflib_fast_intr_ctx; break; default: panic("unknown net intr type"); } info->ifi_filter = filter; info->ifi_filter_arg = filter_arg; info->ifi_task = gtask; info->ifi_ctx = q; dev = ctx->ifc_dev; err = _iflib_irq_alloc(ctx, irq, rid, intr_fast, NULL, info, name); if (err != 0) { device_printf(dev, "_iflib_irq_alloc failed %d\n", err); return (err); } if (type == IFLIB_INTR_ADMIN) return (0); if (tqrid != -1) { err = iflib_irq_set_affinity(ctx, irq, type, qid, gtask, tqg, q, name); if (err) return (err); } else { taskqgroup_attach(tqg, gtask, q, dev, irq->ii_res, name); } return (0); } void iflib_softirq_alloc_generic(if_ctx_t ctx, if_irq_t irq, iflib_intr_type_t type, void *arg, int qid, const char *name) { struct grouptask *gtask; struct taskqgroup *tqg; gtask_fn_t *fn; void *q; int err; switch (type) { case IFLIB_INTR_TX: q = &ctx->ifc_txqs[qid]; gtask = &ctx->ifc_txqs[qid].ift_task; tqg = qgroup_if_io_tqg; fn = _task_fn_tx; break; case IFLIB_INTR_RX: q = &ctx->ifc_rxqs[qid]; gtask = &ctx->ifc_rxqs[qid].ifr_task; tqg = qgroup_if_io_tqg; fn = _task_fn_rx; break; case IFLIB_INTR_IOV: q = ctx; gtask = &ctx->ifc_vflr_task; tqg = qgroup_if_config_tqg; fn = _task_fn_iov; break; default: panic("unknown net intr type"); } GROUPTASK_INIT(gtask, 0, fn, q); if (irq != NULL) { err = iflib_irq_set_affinity(ctx, irq, type, qid, gtask, tqg, q, name); if (err) taskqgroup_attach(tqg, gtask, q, ctx->ifc_dev, irq->ii_res, name); } else { taskqgroup_attach(tqg, gtask, q, NULL, NULL, name); } } void iflib_irq_free(if_ctx_t ctx, if_irq_t irq) { if (irq->ii_tag) bus_teardown_intr(ctx->ifc_dev, irq->ii_res, irq->ii_tag); if (irq->ii_res) bus_release_resource(ctx->ifc_dev, SYS_RES_IRQ, rman_get_rid(irq->ii_res), irq->ii_res); } static int iflib_legacy_setup(if_ctx_t ctx, driver_filter_t filter, void *filter_arg, int *rid, const char *name) { iflib_txq_t txq = ctx->ifc_txqs; iflib_rxq_t rxq = ctx->ifc_rxqs; if_irq_t irq = &ctx->ifc_legacy_irq; iflib_filter_info_t info; device_t dev; struct grouptask *gtask; struct resource *res; struct taskqgroup *tqg; gtask_fn_t *fn; int tqrid; void *q; int err; q = &ctx->ifc_rxqs[0]; info = &rxq[0].ifr_filter_info; gtask = &rxq[0].ifr_task; tqg = qgroup_if_io_tqg; tqrid = irq->ii_rid = *rid; fn = _task_fn_rx; ctx->ifc_flags |= IFC_LEGACY; info->ifi_filter = filter; info->ifi_filter_arg = filter_arg; info->ifi_task = gtask; info->ifi_ctx = ctx; dev = ctx->ifc_dev; /* We allocate a single interrupt resource */ if ((err = _iflib_irq_alloc(ctx, irq, tqrid, iflib_fast_intr_ctx, NULL, info, name)) != 0) return (err); GROUPTASK_INIT(gtask, 0, fn, q); res = irq->ii_res; 
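/* Note that a single legacy/MSI vector services both directions: the rx grouptask initialized above and the tx grouptask attached below share the same interrupt resource. */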
taskqgroup_attach(tqg, gtask, q, dev, res, name); GROUPTASK_INIT(&txq->ift_task, 0, _task_fn_tx, txq); taskqgroup_attach(qgroup_if_io_tqg, &txq->ift_task, txq, dev, res, "tx"); return (0); } void iflib_led_create(if_ctx_t ctx) { ctx->ifc_led_dev = led_create(iflib_led_func, ctx, device_get_nameunit(ctx->ifc_dev)); } void iflib_tx_intr_deferred(if_ctx_t ctx, int txqid) { GROUPTASK_ENQUEUE(&ctx->ifc_txqs[txqid].ift_task); } void iflib_rx_intr_deferred(if_ctx_t ctx, int rxqid) { GROUPTASK_ENQUEUE(&ctx->ifc_rxqs[rxqid].ifr_task); } void iflib_admin_intr_deferred(if_ctx_t ctx) { #ifdef INVARIANTS struct grouptask *gtask; gtask = &ctx->ifc_admin_task; MPASS(gtask != NULL && gtask->gt_taskqueue != NULL); #endif GROUPTASK_ENQUEUE(&ctx->ifc_admin_task); } void iflib_iov_intr_deferred(if_ctx_t ctx) { GROUPTASK_ENQUEUE(&ctx->ifc_vflr_task); } void iflib_io_tqg_attach(struct grouptask *gt, void *uniq, int cpu, char *name) { taskqgroup_attach_cpu(qgroup_if_io_tqg, gt, uniq, cpu, NULL, NULL, name); } void iflib_config_gtask_init(void *ctx, struct grouptask *gtask, gtask_fn_t *fn, const char *name) { GROUPTASK_INIT(gtask, 0, fn, ctx); taskqgroup_attach(qgroup_if_config_tqg, gtask, gtask, NULL, NULL, name); } void iflib_config_gtask_deinit(struct grouptask *gtask) { taskqgroup_detach(qgroup_if_config_tqg, gtask); } void iflib_link_state_change(if_ctx_t ctx, int link_state, uint64_t baudrate) { if_t ifp = ctx->ifc_ifp; iflib_txq_t txq = ctx->ifc_txqs; if_setbaudrate(ifp, baudrate); if (baudrate >= IF_Gbps(10)) { STATE_LOCK(ctx); ctx->ifc_flags |= IFC_PREFETCH; STATE_UNLOCK(ctx); } /* If link down, disable watchdog */ if ((ctx->ifc_link_state == LINK_STATE_UP) && (link_state == LINK_STATE_DOWN)) { for (int i = 0; i < ctx->ifc_softc_ctx.isc_ntxqsets; i++, txq++) txq->ift_qstatus = IFLIB_QUEUE_IDLE; } ctx->ifc_link_state = link_state; if_link_state_change(ifp, link_state); } static int iflib_tx_credits_update(if_ctx_t ctx, iflib_txq_t txq) { int credits; #ifdef INVARIANTS int credits_pre = txq->ift_cidx_processed; #endif if (ctx->isc_txd_credits_update == NULL) return (0); bus_dmamap_sync(txq->ift_ifdi->idi_tag, txq->ift_ifdi->idi_map, BUS_DMASYNC_POSTREAD); if ((credits = ctx->isc_txd_credits_update(ctx->ifc_softc, txq->ift_id, true)) == 0) return (0); txq->ift_processed += credits; txq->ift_cidx_processed += credits; MPASS(credits_pre + credits == txq->ift_cidx_processed); if (txq->ift_cidx_processed >= txq->ift_size) txq->ift_cidx_processed -= txq->ift_size; return (credits); } static int iflib_rxd_avail(if_ctx_t ctx, iflib_rxq_t rxq, qidx_t cidx, qidx_t budget) { iflib_fl_t fl; u_int i; for (i = 0, fl = &rxq->ifr_fl[0]; i < rxq->ifr_nfl; i++, fl++) bus_dmamap_sync(fl->ifl_ifdi->idi_tag, fl->ifl_ifdi->idi_map, BUS_DMASYNC_POSTREAD | BUS_DMASYNC_POSTWRITE); return (ctx->isc_rxd_available(ctx->ifc_softc, rxq->ifr_id, cidx, budget)); } void iflib_add_int_delay_sysctl(if_ctx_t ctx, const char *name, const char *description, if_int_delay_info_t info, int offset, int value) { info->iidi_ctx = ctx; info->iidi_offset = offset; info->iidi_value = value; SYSCTL_ADD_PROC(device_get_sysctl_ctx(ctx->ifc_dev), SYSCTL_CHILDREN(device_get_sysctl_tree(ctx->ifc_dev)), OID_AUTO, name, CTLTYPE_INT|CTLFLAG_RW, info, 0, iflib_sysctl_int_delay, "I", description); } struct sx * iflib_ctx_lock_get(if_ctx_t ctx) { return (&ctx->ifc_ctx_sx); } static int iflib_msix_init(if_ctx_t ctx) { device_t dev = ctx->ifc_dev; if_shared_ctx_t sctx = ctx->ifc_sctx; if_softc_ctx_t scctx = &ctx->ifc_softc_ctx; int vectors, queues, rx_queues, 
tx_queues, queuemsgs, msgs; int iflib_num_tx_queues, iflib_num_rx_queues; int err, admincnt, bar; iflib_num_tx_queues = ctx->ifc_sysctl_ntxqs; iflib_num_rx_queues = ctx->ifc_sysctl_nrxqs; if (bootverbose) device_printf(dev, "msix_init qsets capped at %d\n", imax(scctx->isc_ntxqsets, scctx->isc_nrxqsets)); bar = ctx->ifc_softc_ctx.isc_msix_bar; admincnt = sctx->isc_admin_intrcnt; /* Override by tuneable */ if (scctx->isc_disable_msix) goto msi; /* First try MSI-X */ if ((msgs = pci_msix_count(dev)) == 0) { if (bootverbose) device_printf(dev, "MSI-X not supported or disabled\n"); goto msi; } /* * bar == -1 => "trust me I know what I'm doing" * Some drivers are for hardware that is so shoddily * documented that no one knows which bars are which * so the developer has to map all bars. This hack * allows shoddy garbage to use MSI-X in this framework. */ if (bar != -1) { ctx->ifc_msix_mem = bus_alloc_resource_any(dev, SYS_RES_MEMORY, &bar, RF_ACTIVE); if (ctx->ifc_msix_mem == NULL) { device_printf(dev, "Unable to map MSI-X table\n"); goto msi; } } #if IFLIB_DEBUG /* use only 1 qset in debug mode */ queuemsgs = min(msgs - admincnt, 1); #else queuemsgs = msgs - admincnt; #endif #ifdef RSS queues = imin(queuemsgs, rss_getnumbuckets()); #else queues = queuemsgs; #endif queues = imin(CPU_COUNT(&ctx->ifc_cpus), queues); if (bootverbose) device_printf(dev, "intr CPUs: %d queue msgs: %d admincnt: %d\n", CPU_COUNT(&ctx->ifc_cpus), queuemsgs, admincnt); #ifdef RSS /* If we're doing RSS, clamp at the number of RSS buckets */ if (queues > rss_getnumbuckets()) queues = rss_getnumbuckets(); #endif if (iflib_num_rx_queues > 0 && iflib_num_rx_queues < queuemsgs - admincnt) rx_queues = iflib_num_rx_queues; else rx_queues = queues; if (rx_queues > scctx->isc_nrxqsets) rx_queues = scctx->isc_nrxqsets; /* * We want this to be all logical CPUs by default */ if (iflib_num_tx_queues > 0 && iflib_num_tx_queues < queues) tx_queues = iflib_num_tx_queues; else tx_queues = mp_ncpus; if (tx_queues > scctx->isc_ntxqsets) tx_queues = scctx->isc_ntxqsets; if (ctx->ifc_sysctl_qs_eq_override == 0) { #ifdef INVARIANTS if (tx_queues != rx_queues) device_printf(dev, "queue equality override not set, capping rx_queues at %d and tx_queues at %d\n", min(rx_queues, tx_queues), min(rx_queues, tx_queues)); #endif tx_queues = min(rx_queues, tx_queues); rx_queues = min(rx_queues, tx_queues); } device_printf(dev, "Using %d rx queues %d tx queues\n", rx_queues, tx_queues); vectors = rx_queues + admincnt; if ((err = pci_alloc_msix(dev, &vectors)) == 0) { device_printf(dev, "Using MSI-X interrupts with %d vectors\n", vectors); scctx->isc_vectors = vectors; scctx->isc_nrxqsets = rx_queues; scctx->isc_ntxqsets = tx_queues; scctx->isc_intr = IFLIB_INTR_MSIX; return (vectors); } else { device_printf(dev, "failed to allocate %d MSI-X vectors, err: %d - using MSI\n", vectors, err); bus_release_resource(dev, SYS_RES_MEMORY, bar, ctx->ifc_msix_mem); ctx->ifc_msix_mem = NULL; } msi: vectors = pci_msi_count(dev); scctx->isc_nrxqsets = 1; scctx->isc_ntxqsets = 1; scctx->isc_vectors = vectors; if (vectors == 1 && pci_alloc_msi(dev, &vectors) == 0) { device_printf(dev,"Using an MSI interrupt\n"); scctx->isc_intr = IFLIB_INTR_MSI; } else { scctx->isc_vectors = 1; device_printf(dev,"Using a Legacy interrupt\n"); scctx->isc_intr = IFLIB_INTR_LEGACY; } return (vectors); } static const char *ring_states[] = { "IDLE", "BUSY", "STALLED", "ABDICATED" }; static int mp_ring_state_handler(SYSCTL_HANDLER_ARGS) { int rc; uint16_t *state = ((uint16_t *)oidp->oid_arg1); 
struct sbuf *sb; const char *ring_state = "UNKNOWN"; /* XXX needed ? */ rc = sysctl_wire_old_buffer(req, 0); MPASS(rc == 0); if (rc != 0) return (rc); sb = sbuf_new_for_sysctl(NULL, NULL, 80, req); MPASS(sb != NULL); if (sb == NULL) return (ENOMEM); if (state[3] <= 3) ring_state = ring_states[state[3]]; sbuf_printf(sb, "pidx_head: %04hd pidx_tail: %04hd cidx: %04hd state: %s", state[0], state[1], state[2], ring_state); rc = sbuf_finish(sb); sbuf_delete(sb); return(rc); } enum iflib_ndesc_handler { IFLIB_NTXD_HANDLER, IFLIB_NRXD_HANDLER, }; static int mp_ndesc_handler(SYSCTL_HANDLER_ARGS) { if_ctx_t ctx = (void *)arg1; enum iflib_ndesc_handler type = arg2; char buf[256] = {0}; qidx_t *ndesc; char *p, *next; int nqs, rc, i; MPASS(type == IFLIB_NTXD_HANDLER || type == IFLIB_NRXD_HANDLER); nqs = 8; switch(type) { case IFLIB_NTXD_HANDLER: ndesc = ctx->ifc_sysctl_ntxds; if (ctx->ifc_sctx) nqs = ctx->ifc_sctx->isc_ntxqs; break; case IFLIB_NRXD_HANDLER: ndesc = ctx->ifc_sysctl_nrxds; if (ctx->ifc_sctx) nqs = ctx->ifc_sctx->isc_nrxqs; break; default: panic("unhandled type"); } if (nqs == 0) nqs = 8; for (i=0; i<8; i++) { if (i >= nqs) break; if (i) strcat(buf, ","); sprintf(strchr(buf, 0), "%d", ndesc[i]); } rc = sysctl_handle_string(oidp, buf, sizeof(buf), req); if (rc || req->newptr == NULL) return rc; for (i = 0, next = buf, p = strsep(&next, " ,"); i < 8 && p; i++, p = strsep(&next, " ,")) { ndesc[i] = strtoul(p, NULL, 10); } return(rc); } #define NAME_BUFLEN 32 static void iflib_add_device_sysctl_pre(if_ctx_t ctx) { device_t dev = iflib_get_dev(ctx); struct sysctl_oid_list *child, *oid_list; struct sysctl_ctx_list *ctx_list; struct sysctl_oid *node; ctx_list = device_get_sysctl_ctx(dev); child = SYSCTL_CHILDREN(device_get_sysctl_tree(dev)); ctx->ifc_sysctl_node = node = SYSCTL_ADD_NODE(ctx_list, child, OID_AUTO, "iflib", CTLFLAG_RD, NULL, "IFLIB fields"); oid_list = SYSCTL_CHILDREN(node); SYSCTL_ADD_STRING(ctx_list, oid_list, OID_AUTO, "driver_version", CTLFLAG_RD, ctx->ifc_sctx->isc_driver_version, 0, "driver version"); SYSCTL_ADD_U16(ctx_list, oid_list, OID_AUTO, "override_ntxqs", CTLFLAG_RWTUN, &ctx->ifc_sysctl_ntxqs, 0, "# of txqs to use, 0 => use default #"); SYSCTL_ADD_U16(ctx_list, oid_list, OID_AUTO, "override_nrxqs", CTLFLAG_RWTUN, &ctx->ifc_sysctl_nrxqs, 0, "# of rxqs to use, 0 => use default #"); SYSCTL_ADD_U16(ctx_list, oid_list, OID_AUTO, "override_qs_enable", CTLFLAG_RWTUN, &ctx->ifc_sysctl_qs_eq_override, 0, "permit #txq != #rxq"); SYSCTL_ADD_INT(ctx_list, oid_list, OID_AUTO, "disable_msix", CTLFLAG_RWTUN, &ctx->ifc_softc_ctx.isc_disable_msix, 0, "disable MSI-X (default 0)"); SYSCTL_ADD_U16(ctx_list, oid_list, OID_AUTO, "rx_budget", CTLFLAG_RWTUN, &ctx->ifc_sysctl_rx_budget, 0, "set the rx budget"); SYSCTL_ADD_U16(ctx_list, oid_list, OID_AUTO, "tx_abdicate", CTLFLAG_RWTUN, &ctx->ifc_sysctl_tx_abdicate, 0, "cause tx to abdicate instead of running to completion"); /* XXX change for per-queue sizes */ SYSCTL_ADD_PROC(ctx_list, oid_list, OID_AUTO, "override_ntxds", CTLTYPE_STRING|CTLFLAG_RWTUN, ctx, IFLIB_NTXD_HANDLER, mp_ndesc_handler, "A", "list of # of tx descriptors to use, 0 = use default #"); SYSCTL_ADD_PROC(ctx_list, oid_list, OID_AUTO, "override_nrxds", CTLTYPE_STRING|CTLFLAG_RWTUN, ctx, IFLIB_NRXD_HANDLER, mp_ndesc_handler, "A", "list of # of rx descriptors to use, 0 = use default #"); } static void iflib_add_device_sysctl_post(if_ctx_t ctx) { if_shared_ctx_t sctx = ctx->ifc_sctx; if_softc_ctx_t scctx = &ctx->ifc_softc_ctx; device_t dev = iflib_get_dev(ctx); struct 
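/*
 * mp_ndesc_handler() above round-trips the per-queue descriptor
 * counts through a single comma-separated sysctl string using the
 * strsep(3)/strtoul(3) idiom.  A self-contained user-space sketch of
 * the parsing half (buffer contents assumed):
 *
 *	#include <stdio.h>
 *	#include <stdlib.h>
 *	#include <string.h>
 *
 *	int
 *	main(void)
 *	{
 *		char buf[] = "1024,2048,4096";
 *		char *next = buf, *p;
 *		unsigned long ndesc[8] = {0};
 *		int i;
 *
 *		for (i = 0; i < 8 &&
 *		    (p = strsep(&next, " ,")) != NULL; i++)
 *			ndesc[i] = strtoul(p, NULL, 10);
 *		printf("parsed %d values, first %lu\n", i, ndesc[0]);
 *		return (0);
 *	}
 *
 * Like the handler above, this accepts spaces or commas as
 * separators and ignores anything past the eighth entry.
 */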
sysctl_oid_list *child; struct sysctl_ctx_list *ctx_list; iflib_fl_t fl; iflib_txq_t txq; iflib_rxq_t rxq; int i, j; char namebuf[NAME_BUFLEN]; char *qfmt; struct sysctl_oid *queue_node, *fl_node, *node; struct sysctl_oid_list *queue_list, *fl_list; ctx_list = device_get_sysctl_ctx(dev); node = ctx->ifc_sysctl_node; child = SYSCTL_CHILDREN(node); if (scctx->isc_ntxqsets > 100) qfmt = "txq%03d"; else if (scctx->isc_ntxqsets > 10) qfmt = "txq%02d"; else qfmt = "txq%d"; for (i = 0, txq = ctx->ifc_txqs; i < scctx->isc_ntxqsets; i++, txq++) { snprintf(namebuf, NAME_BUFLEN, qfmt, i); queue_node = SYSCTL_ADD_NODE(ctx_list, child, OID_AUTO, namebuf, CTLFLAG_RD, NULL, "Queue Name"); queue_list = SYSCTL_CHILDREN(queue_node); #if MEMORY_LOGGING SYSCTL_ADD_QUAD(ctx_list, queue_list, OID_AUTO, "txq_dequeued", CTLFLAG_RD, &txq->ift_dequeued, "total mbufs freed"); SYSCTL_ADD_QUAD(ctx_list, queue_list, OID_AUTO, "txq_enqueued", CTLFLAG_RD, &txq->ift_enqueued, "total mbufs enqueued"); #endif SYSCTL_ADD_QUAD(ctx_list, queue_list, OID_AUTO, "mbuf_defrag", CTLFLAG_RD, &txq->ift_mbuf_defrag, "# of times m_defrag was called"); SYSCTL_ADD_QUAD(ctx_list, queue_list, OID_AUTO, "m_pullups", CTLFLAG_RD, &txq->ift_pullups, "# of times m_pullup was called"); SYSCTL_ADD_QUAD(ctx_list, queue_list, OID_AUTO, "mbuf_defrag_failed", CTLFLAG_RD, &txq->ift_mbuf_defrag_failed, "# of times m_defrag failed"); SYSCTL_ADD_QUAD(ctx_list, queue_list, OID_AUTO, "no_desc_avail", CTLFLAG_RD, &txq->ift_no_desc_avail, "# of times no descriptors were available"); SYSCTL_ADD_QUAD(ctx_list, queue_list, OID_AUTO, "tx_map_failed", CTLFLAG_RD, &txq->ift_map_failed, "# of times dma map failed"); SYSCTL_ADD_QUAD(ctx_list, queue_list, OID_AUTO, "txd_encap_efbig", CTLFLAG_RD, &txq->ift_txd_encap_efbig, "# of times txd_encap returned EFBIG"); SYSCTL_ADD_QUAD(ctx_list, queue_list, OID_AUTO, "no_tx_dma_setup", CTLFLAG_RD, &txq->ift_no_tx_dma_setup, "# of times map failed for other than EFBIG"); SYSCTL_ADD_U16(ctx_list, queue_list, OID_AUTO, "txq_pidx", CTLFLAG_RD, &txq->ift_pidx, 1, "Producer Index"); SYSCTL_ADD_U16(ctx_list, queue_list, OID_AUTO, "txq_cidx", CTLFLAG_RD, &txq->ift_cidx, 1, "Consumer Index"); SYSCTL_ADD_U16(ctx_list, queue_list, OID_AUTO, "txq_cidx_processed", CTLFLAG_RD, &txq->ift_cidx_processed, 1, "Consumer Index seen by credit update"); SYSCTL_ADD_U16(ctx_list, queue_list, OID_AUTO, "txq_in_use", CTLFLAG_RD, &txq->ift_in_use, 1, "descriptors in use"); SYSCTL_ADD_QUAD(ctx_list, queue_list, OID_AUTO, "txq_processed", CTLFLAG_RD, &txq->ift_processed, "descriptors processed for clean"); SYSCTL_ADD_QUAD(ctx_list, queue_list, OID_AUTO, "txq_cleaned", CTLFLAG_RD, &txq->ift_cleaned, "total cleaned"); SYSCTL_ADD_PROC(ctx_list, queue_list, OID_AUTO, "ring_state", CTLTYPE_STRING | CTLFLAG_RD, __DEVOLATILE(uint64_t *, &txq->ift_br->state), 0, mp_ring_state_handler, "A", "soft ring state"); SYSCTL_ADD_COUNTER_U64(ctx_list, queue_list, OID_AUTO, "r_enqueues", CTLFLAG_RD, &txq->ift_br->enqueues, "# of enqueues to the mp_ring for this queue"); SYSCTL_ADD_COUNTER_U64(ctx_list, queue_list, OID_AUTO, "r_drops", CTLFLAG_RD, &txq->ift_br->drops, "# of drops in the mp_ring for this queue"); SYSCTL_ADD_COUNTER_U64(ctx_list, queue_list, OID_AUTO, "r_starts", CTLFLAG_RD, &txq->ift_br->starts, "# of normal consumer starts in the mp_ring for this queue"); SYSCTL_ADD_COUNTER_U64(ctx_list, queue_list, OID_AUTO, "r_stalls", CTLFLAG_RD, &txq->ift_br->stalls, "# of consumer stalls in the mp_ring for this queue"); SYSCTL_ADD_COUNTER_U64(ctx_list, queue_list,
OID_AUTO, "r_restarts", CTLFLAG_RD, &txq->ift_br->restarts, "# of consumer restarts in the mp_ring for this queue"); SYSCTL_ADD_COUNTER_U64(ctx_list, queue_list, OID_AUTO, "r_abdications", CTLFLAG_RD, &txq->ift_br->abdications, "# of consumer abdications in the mp_ring for this queue"); } if (scctx->isc_nrxqsets > 100) qfmt = "rxq%03d"; else if (scctx->isc_nrxqsets > 10) qfmt = "rxq%02d"; else qfmt = "rxq%d"; for (i = 0, rxq = ctx->ifc_rxqs; i < scctx->isc_nrxqsets; i++, rxq++) { snprintf(namebuf, NAME_BUFLEN, qfmt, i); queue_node = SYSCTL_ADD_NODE(ctx_list, child, OID_AUTO, namebuf, CTLFLAG_RD, NULL, "Queue Name"); queue_list = SYSCTL_CHILDREN(queue_node); if (sctx->isc_flags & IFLIB_HAS_RXCQ) { SYSCTL_ADD_U16(ctx_list, queue_list, OID_AUTO, "rxq_cq_pidx", CTLFLAG_RD, &rxq->ifr_cq_pidx, 1, "Producer Index"); SYSCTL_ADD_U16(ctx_list, queue_list, OID_AUTO, "rxq_cq_cidx", CTLFLAG_RD, &rxq->ifr_cq_cidx, 1, "Consumer Index"); } for (j = 0, fl = rxq->ifr_fl; j < rxq->ifr_nfl; j++, fl++) { snprintf(namebuf, NAME_BUFLEN, "rxq_fl%d", j); fl_node = SYSCTL_ADD_NODE(ctx_list, queue_list, OID_AUTO, namebuf, CTLFLAG_RD, NULL, "freelist Name"); fl_list = SYSCTL_CHILDREN(fl_node); SYSCTL_ADD_U16(ctx_list, fl_list, OID_AUTO, "pidx", CTLFLAG_RD, &fl->ifl_pidx, 1, "Producer Index"); SYSCTL_ADD_U16(ctx_list, fl_list, OID_AUTO, "cidx", CTLFLAG_RD, &fl->ifl_cidx, 1, "Consumer Index"); SYSCTL_ADD_U16(ctx_list, fl_list, OID_AUTO, "credits", CTLFLAG_RD, &fl->ifl_credits, 1, "credits available"); #if MEMORY_LOGGING SYSCTL_ADD_QUAD(ctx_list, fl_list, OID_AUTO, "fl_m_enqueued", CTLFLAG_RD, &fl->ifl_m_enqueued, "mbufs allocated"); SYSCTL_ADD_QUAD(ctx_list, fl_list, OID_AUTO, "fl_m_dequeued", CTLFLAG_RD, &fl->ifl_m_dequeued, "mbufs freed"); SYSCTL_ADD_QUAD(ctx_list, fl_list, OID_AUTO, "fl_cl_enqueued", CTLFLAG_RD, &fl->ifl_cl_enqueued, "clusters allocated"); SYSCTL_ADD_QUAD(ctx_list, fl_list, OID_AUTO, "fl_cl_dequeued", CTLFLAG_RD, &fl->ifl_cl_dequeued, "clusters freed"); #endif } } } void iflib_request_reset(if_ctx_t ctx) { STATE_LOCK(ctx); ctx->ifc_flags |= IFC_DO_RESET; STATE_UNLOCK(ctx); } #ifndef __NO_STRICT_ALIGNMENT static struct mbuf * iflib_fixup_rx(struct mbuf *m) { struct mbuf *n; if (m->m_len <= (MCLBYTES - ETHER_HDR_LEN)) { bcopy(m->m_data, m->m_data + ETHER_HDR_LEN, m->m_len); m->m_data += ETHER_HDR_LEN; n = m; } else { MGETHDR(n, M_NOWAIT, MT_DATA); if (n == NULL) { m_freem(m); return (NULL); } bcopy(m->m_data, n->m_data, ETHER_HDR_LEN); m->m_data += ETHER_HDR_LEN; m->m_len -= ETHER_HDR_LEN; n->m_len = ETHER_HDR_LEN; M_MOVE_PKTHDR(n, m); n->m_next = m; } return (n); } #endif #ifdef NETDUMP static void iflib_netdump_init(struct ifnet *ifp, int *nrxr, int *ncl, int *clsize) { if_ctx_t ctx; ctx = if_getsoftc(ifp); CTX_LOCK(ctx); *nrxr = NRXQSETS(ctx); *ncl = ctx->ifc_rxqs[0].ifr_fl->ifl_size; *clsize = ctx->ifc_rxqs[0].ifr_fl->ifl_buf_size; CTX_UNLOCK(ctx); } static void iflib_netdump_event(struct ifnet *ifp, enum netdump_ev event) { if_ctx_t ctx; if_softc_ctx_t scctx; iflib_fl_t fl; iflib_rxq_t rxq; int i, j; ctx = if_getsoftc(ifp); scctx = &ctx->ifc_softc_ctx; switch (event) { case NETDUMP_START: for (i = 0; i < scctx->isc_nrxqsets; i++) { rxq = &ctx->ifc_rxqs[i]; for (j = 0; j < rxq->ifr_nfl; j++) { fl = rxq->ifr_fl; fl->ifl_zone = m_getzone(fl->ifl_buf_size); } } iflib_no_tx_batch = 1; break; default: break; } } static int iflib_netdump_transmit(struct ifnet *ifp, struct mbuf *m) { if_ctx_t ctx; iflib_txq_t txq; int error; ctx = if_getsoftc(ifp); if ((if_getdrvflags(ifp) & (IFF_DRV_RUNNING | 
IFF_DRV_OACTIVE)) != IFF_DRV_RUNNING) return (EBUSY); txq = &ctx->ifc_txqs[0]; error = iflib_encap(txq, &m); if (error == 0) (void)iflib_txd_db_check(ctx, txq, true, txq->ift_in_use); return (error); } static int iflib_netdump_poll(struct ifnet *ifp, int count) { if_ctx_t ctx; if_softc_ctx_t scctx; iflib_txq_t txq; int i; ctx = if_getsoftc(ifp); scctx = &ctx->ifc_softc_ctx; if ((if_getdrvflags(ifp) & (IFF_DRV_RUNNING | IFF_DRV_OACTIVE)) != IFF_DRV_RUNNING) return (EBUSY); txq = &ctx->ifc_txqs[0]; (void)iflib_completed_tx_reclaim(txq, RECLAIM_THRESH(ctx)); for (i = 0; i < scctx->isc_nrxqsets; i++) (void)iflib_rxeof(&ctx->ifc_rxqs[i], 16 /* XXX */); return (0); } #endif /* NETDUMP */ Index: projects/import-googletest-1.8.1/sys/netinet/in_pcb.c =================================================================== --- projects/import-googletest-1.8.1/sys/netinet/in_pcb.c (revision 344270) +++ projects/import-googletest-1.8.1/sys/netinet/in_pcb.c (revision 344271) @@ -1,3434 +1,3436 @@ /*- * SPDX-License-Identifier: BSD-3-Clause * * Copyright (c) 1982, 1986, 1991, 1993, 1995 * The Regents of the University of California. * Copyright (c) 2007-2009 Robert N. M. Watson * Copyright (c) 2010-2011 Juniper Networks, Inc. * All rights reserved. * * Portions of this software were developed by Robert N. M. Watson under * contract to Juniper Networks, Inc. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * 3. Neither the name of the University nor the names of its contributors * may be used to endorse or promote products derived from this software * without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. 
* * @(#)in_pcb.c 8.4 (Berkeley) 5/24/95 */ #include __FBSDID("$FreeBSD$"); #include "opt_ddb.h" #include "opt_ipsec.h" #include "opt_inet.h" #include "opt_inet6.h" #include "opt_ratelimit.h" #include "opt_pcbgroup.h" #include "opt_rss.h" #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #ifdef DDB #include #endif #include #include #include #include #include #include #include #include #if defined(INET) || defined(INET6) #include #include #include #include #ifdef TCPHPTS #include #endif #include #include #endif #ifdef INET #include #endif #ifdef INET6 #include #include #include #include #endif /* INET6 */ #include #include #define INPCBLBGROUP_SIZMIN 8 #define INPCBLBGROUP_SIZMAX 256 static struct callout ipport_tick_callout; /* * These configure the range of local port addresses assigned to * "unspecified" outgoing connections/packets/whatever. */ VNET_DEFINE(int, ipport_lowfirstauto) = IPPORT_RESERVED - 1; /* 1023 */ VNET_DEFINE(int, ipport_lowlastauto) = IPPORT_RESERVEDSTART; /* 600 */ VNET_DEFINE(int, ipport_firstauto) = IPPORT_EPHEMERALFIRST; /* 10000 */ VNET_DEFINE(int, ipport_lastauto) = IPPORT_EPHEMERALLAST; /* 65535 */ VNET_DEFINE(int, ipport_hifirstauto) = IPPORT_HIFIRSTAUTO; /* 49152 */ VNET_DEFINE(int, ipport_hilastauto) = IPPORT_HILASTAUTO; /* 65535 */ /* * Reserved ports accessible only to root. There are significant * security considerations that must be accounted for when changing these, * but the security benefits can be great. Please be careful. */ VNET_DEFINE(int, ipport_reservedhigh) = IPPORT_RESERVED - 1; /* 1023 */ VNET_DEFINE(int, ipport_reservedlow); /* Variables dealing with random ephemeral port allocation. 
*/ VNET_DEFINE(int, ipport_randomized) = 1; /* user controlled via sysctl */ VNET_DEFINE(int, ipport_randomcps) = 10; /* user controlled via sysctl */ VNET_DEFINE(int, ipport_randomtime) = 45; /* user controlled via sysctl */ VNET_DEFINE(int, ipport_stoprandom); /* toggled by ipport_tick */ VNET_DEFINE(int, ipport_tcpallocs); VNET_DEFINE_STATIC(int, ipport_tcplastcount); #define V_ipport_tcplastcount VNET(ipport_tcplastcount) static void in_pcbremlists(struct inpcb *inp); #ifdef INET static struct inpcb *in_pcblookup_hash_locked(struct inpcbinfo *pcbinfo, struct in_addr faddr, u_int fport_arg, struct in_addr laddr, u_int lport_arg, int lookupflags, struct ifnet *ifp); #define RANGECHK(var, min, max) \ if ((var) < (min)) { (var) = (min); } \ else if ((var) > (max)) { (var) = (max); } static int sysctl_net_ipport_check(SYSCTL_HANDLER_ARGS) { int error; error = sysctl_handle_int(oidp, arg1, arg2, req); if (error == 0) { RANGECHK(V_ipport_lowfirstauto, 1, IPPORT_RESERVED - 1); RANGECHK(V_ipport_lowlastauto, 1, IPPORT_RESERVED - 1); RANGECHK(V_ipport_firstauto, IPPORT_RESERVED, IPPORT_MAX); RANGECHK(V_ipport_lastauto, IPPORT_RESERVED, IPPORT_MAX); RANGECHK(V_ipport_hifirstauto, IPPORT_RESERVED, IPPORT_MAX); RANGECHK(V_ipport_hilastauto, IPPORT_RESERVED, IPPORT_MAX); } return (error); } #undef RANGECHK static SYSCTL_NODE(_net_inet_ip, IPPROTO_IP, portrange, CTLFLAG_RW, 0, "IP Ports"); SYSCTL_PROC(_net_inet_ip_portrange, OID_AUTO, lowfirst, CTLFLAG_VNET | CTLTYPE_INT | CTLFLAG_RW, &VNET_NAME(ipport_lowfirstauto), 0, &sysctl_net_ipport_check, "I", ""); SYSCTL_PROC(_net_inet_ip_portrange, OID_AUTO, lowlast, CTLFLAG_VNET | CTLTYPE_INT | CTLFLAG_RW, &VNET_NAME(ipport_lowlastauto), 0, &sysctl_net_ipport_check, "I", ""); SYSCTL_PROC(_net_inet_ip_portrange, OID_AUTO, first, CTLFLAG_VNET | CTLTYPE_INT | CTLFLAG_RW, &VNET_NAME(ipport_firstauto), 0, &sysctl_net_ipport_check, "I", ""); SYSCTL_PROC(_net_inet_ip_portrange, OID_AUTO, last, CTLFLAG_VNET | CTLTYPE_INT | CTLFLAG_RW, &VNET_NAME(ipport_lastauto), 0, &sysctl_net_ipport_check, "I", ""); SYSCTL_PROC(_net_inet_ip_portrange, OID_AUTO, hifirst, CTLFLAG_VNET | CTLTYPE_INT | CTLFLAG_RW, &VNET_NAME(ipport_hifirstauto), 0, &sysctl_net_ipport_check, "I", ""); SYSCTL_PROC(_net_inet_ip_portrange, OID_AUTO, hilast, CTLFLAG_VNET | CTLTYPE_INT | CTLFLAG_RW, &VNET_NAME(ipport_hilastauto), 0, &sysctl_net_ipport_check, "I", ""); SYSCTL_INT(_net_inet_ip_portrange, OID_AUTO, reservedhigh, CTLFLAG_VNET | CTLFLAG_RW | CTLFLAG_SECURE, &VNET_NAME(ipport_reservedhigh), 0, ""); SYSCTL_INT(_net_inet_ip_portrange, OID_AUTO, reservedlow, CTLFLAG_RW|CTLFLAG_SECURE, &VNET_NAME(ipport_reservedlow), 0, ""); SYSCTL_INT(_net_inet_ip_portrange, OID_AUTO, randomized, CTLFLAG_VNET | CTLFLAG_RW, &VNET_NAME(ipport_randomized), 0, "Enable random port allocation"); SYSCTL_INT(_net_inet_ip_portrange, OID_AUTO, randomcps, CTLFLAG_VNET | CTLFLAG_RW, &VNET_NAME(ipport_randomcps), 0, "Maximum number of random port " "allocations before switching to a sequential one"); SYSCTL_INT(_net_inet_ip_portrange, OID_AUTO, randomtime, CTLFLAG_VNET | CTLFLAG_RW, &VNET_NAME(ipport_randomtime), 0, "Minimum time to keep sequential port " "allocation before switching to a random one"); #endif /* INET */ /* * in_pcb.c: manage the Protocol Control Blocks. * * NOTE: It is assumed that most of these functions will be called with * the pcbinfo lock held, and often, the inpcb lock held, as these utility * functions often modify hash chains or addresses in pcbs.
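 *
 * The ipport_randomcps/ipport_randomtime pair defined above forms a
 * simple hysteresis: once more than randomcps TCP port allocations
 * happen within one second, randomization is switched off for
 * randomtime seconds.  A condensed sketch of what the per-second
 * ipport_tick() does with these counters (per-vnet iteration
 * omitted):
 *
 *	if (V_ipport_tcpallocs <=
 *	    V_ipport_tcplastcount + V_ipport_randomcps) {
 *		if (V_ipport_stoprandom > 0)
 *			V_ipport_stoprandom--;
 *	} else
 *		V_ipport_stoprandom = V_ipport_randomtime;
 *	V_ipport_tcplastcount = V_ipport_tcpallocs;
 *
 * So each tick either decays the stop-random countdown by one second
 * or re-arms it for another randomtime seconds.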
*/ static struct inpcblbgroup * in_pcblbgroup_alloc(struct inpcblbgrouphead *hdr, u_char vflag, uint16_t port, const union in_dependaddr *addr, int size) { struct inpcblbgroup *grp; size_t bytes; bytes = __offsetof(struct inpcblbgroup, il_inp[size]); grp = malloc(bytes, M_PCB, M_ZERO | M_NOWAIT); if (!grp) return (NULL); grp->il_vflag = vflag; grp->il_lport = port; grp->il_dependladdr = *addr; grp->il_inpsiz = size; CK_LIST_INSERT_HEAD(hdr, grp, il_list); return (grp); } static void in_pcblbgroup_free_deferred(epoch_context_t ctx) { struct inpcblbgroup *grp; grp = __containerof(ctx, struct inpcblbgroup, il_epoch_ctx); free(grp, M_PCB); } static void in_pcblbgroup_free(struct inpcblbgroup *grp) { CK_LIST_REMOVE(grp, il_list); epoch_call(net_epoch_preempt, &grp->il_epoch_ctx, in_pcblbgroup_free_deferred); } static struct inpcblbgroup * in_pcblbgroup_resize(struct inpcblbgrouphead *hdr, struct inpcblbgroup *old_grp, int size) { struct inpcblbgroup *grp; int i; grp = in_pcblbgroup_alloc(hdr, old_grp->il_vflag, old_grp->il_lport, &old_grp->il_dependladdr, size); if (grp == NULL) return (NULL); KASSERT(old_grp->il_inpcnt < grp->il_inpsiz, ("invalid new local group size %d and old local group count %d", grp->il_inpsiz, old_grp->il_inpcnt)); for (i = 0; i < old_grp->il_inpcnt; ++i) grp->il_inp[i] = old_grp->il_inp[i]; grp->il_inpcnt = old_grp->il_inpcnt; in_pcblbgroup_free(old_grp); return (grp); } /* * PCB at index 'i' is removed from the group. Pull up the ones below il_inp[i] * and shrink group if possible. */ static void in_pcblbgroup_reorder(struct inpcblbgrouphead *hdr, struct inpcblbgroup **grpp, int i) { struct inpcblbgroup *grp, *new_grp; grp = *grpp; for (; i + 1 < grp->il_inpcnt; ++i) grp->il_inp[i] = grp->il_inp[i + 1]; grp->il_inpcnt--; if (grp->il_inpsiz > INPCBLBGROUP_SIZMIN && grp->il_inpcnt <= grp->il_inpsiz / 4) { /* Shrink this group. */ new_grp = in_pcblbgroup_resize(hdr, grp, grp->il_inpsiz / 2); if (new_grp != NULL) *grpp = new_grp; } } /* * Add PCB to load balance group for SO_REUSEPORT_LB option. */ static int in_pcbinslbgrouphash(struct inpcb *inp) { const static struct timeval interval = { 60, 0 }; static struct timeval lastprint; struct inpcbinfo *pcbinfo; struct inpcblbgrouphead *hdr; struct inpcblbgroup *grp; uint32_t idx; pcbinfo = inp->inp_pcbinfo; INP_WLOCK_ASSERT(inp); INP_HASH_WLOCK_ASSERT(pcbinfo); /* * Don't allow jailed socket to join local group. */ if (inp->inp_socket != NULL && jailed(inp->inp_socket->so_cred)) return (0); #ifdef INET6 /* * Don't allow IPv4 mapped INET6 wild socket. */ if ((inp->inp_vflag & INP_IPV4) && inp->inp_laddr.s_addr == INADDR_ANY && INP_CHECK_SOCKAF(inp->inp_socket, AF_INET6)) { return (0); } #endif idx = INP_PCBPORTHASH(inp->inp_lport, pcbinfo->ipi_lbgrouphashmask); hdr = &pcbinfo->ipi_lbgrouphashbase[idx]; CK_LIST_FOREACH(grp, hdr, il_list) { if (grp->il_vflag == inp->inp_vflag && grp->il_lport == inp->inp_lport && memcmp(&grp->il_dependladdr, &inp->inp_inc.inc_ie.ie_dependladdr, sizeof(grp->il_dependladdr)) == 0) break; } if (grp == NULL) { /* Create new load balance group. */ grp = in_pcblbgroup_alloc(hdr, inp->inp_vflag, inp->inp_lport, &inp->inp_inc.inc_ie.ie_dependladdr, INPCBLBGROUP_SIZMIN); if (grp == NULL) return (ENOBUFS); } else if (grp->il_inpcnt == grp->il_inpsiz) { if (grp->il_inpsiz >= INPCBLBGROUP_SIZMAX) { if (ratecheck(&lastprint, &interval)) printf("lb group port %d, limit reached\n", ntohs(grp->il_lport)); return (0); } /* Expand this local group. 
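 *
 * Together with in_pcblbgroup_reorder() above, the group array uses
 * the usual amortized-resize policy: double when full (capped at
 * INPCBLBGROUP_SIZMAX), halve only when occupancy falls to a quarter.
 * The asymmetric shrink threshold keeps a group that oscillates
 * around a power-of-two size from resizing on every change.  The
 * policy in isolation, assuming the minimum size of 8
 * (INPCBLBGROUP_SIZMIN):
 *
 *	static int
 *	lbgroup_next_size_sketch(int size, int count)
 *	{
 *		if (count == size)
 *			return (size * 2);	(grow: array is full)
 *		if (size > 8 && count <= size / 4)
 *			return (size / 2);	(shrink: mostly empty)
 *		return (size);			(steady state)
 *	}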
*/ grp = in_pcblbgroup_resize(hdr, grp, grp->il_inpsiz * 2); if (grp == NULL) return (ENOBUFS); } KASSERT(grp->il_inpcnt < grp->il_inpsiz, ("invalid local group size %d and count %d", grp->il_inpsiz, grp->il_inpcnt)); grp->il_inp[grp->il_inpcnt] = inp; grp->il_inpcnt++; return (0); } /* * Remove PCB from load balance group. */ static void in_pcbremlbgrouphash(struct inpcb *inp) { struct inpcbinfo *pcbinfo; struct inpcblbgrouphead *hdr; struct inpcblbgroup *grp; int i; pcbinfo = inp->inp_pcbinfo; INP_WLOCK_ASSERT(inp); INP_HASH_WLOCK_ASSERT(pcbinfo); hdr = &pcbinfo->ipi_lbgrouphashbase[ INP_PCBPORTHASH(inp->inp_lport, pcbinfo->ipi_lbgrouphashmask)]; CK_LIST_FOREACH(grp, hdr, il_list) { for (i = 0; i < grp->il_inpcnt; ++i) { if (grp->il_inp[i] != inp) continue; if (grp->il_inpcnt == 1) { /* We are the last, free this local group. */ in_pcblbgroup_free(grp); } else { /* Pull up inpcbs, shrink group if possible. */ in_pcblbgroup_reorder(hdr, &grp, i); } return; } } } /* * Different protocols initialize their inpcbs differently - giving * different name to the lock. But they all are disposed the same. */ static void inpcb_fini(void *mem, int size) { struct inpcb *inp = mem; INP_LOCK_DESTROY(inp); } /* * Initialize an inpcbinfo -- we should be able to reduce the number of * arguments in time. */ void in_pcbinfo_init(struct inpcbinfo *pcbinfo, const char *name, struct inpcbhead *listhead, int hash_nelements, int porthash_nelements, char *inpcbzone_name, uma_init inpcbzone_init, u_int hashfields) { porthash_nelements = imin(porthash_nelements, IPPORT_MAX + 1); INP_INFO_LOCK_INIT(pcbinfo, name); INP_HASH_LOCK_INIT(pcbinfo, "pcbinfohash"); /* XXXRW: argument? */ INP_LIST_LOCK_INIT(pcbinfo, "pcbinfolist"); #ifdef VIMAGE pcbinfo->ipi_vnet = curvnet; #endif pcbinfo->ipi_listhead = listhead; CK_LIST_INIT(pcbinfo->ipi_listhead); pcbinfo->ipi_count = 0; pcbinfo->ipi_hashbase = hashinit(hash_nelements, M_PCB, &pcbinfo->ipi_hashmask); pcbinfo->ipi_porthashbase = hashinit(porthash_nelements, M_PCB, &pcbinfo->ipi_porthashmask); pcbinfo->ipi_lbgrouphashbase = hashinit(porthash_nelements, M_PCB, &pcbinfo->ipi_lbgrouphashmask); #ifdef PCBGROUP in_pcbgroup_init(pcbinfo, hashfields, hash_nelements); #endif pcbinfo->ipi_zone = uma_zcreate(inpcbzone_name, sizeof(struct inpcb), NULL, NULL, inpcbzone_init, inpcb_fini, UMA_ALIGN_PTR, 0); uma_zone_set_max(pcbinfo->ipi_zone, maxsockets); uma_zone_set_warning(pcbinfo->ipi_zone, "kern.ipc.maxsockets limit reached"); } /* * Destroy an inpcbinfo. */ void in_pcbinfo_destroy(struct inpcbinfo *pcbinfo) { KASSERT(pcbinfo->ipi_count == 0, ("%s: ipi_count = %u", __func__, pcbinfo->ipi_count)); hashdestroy(pcbinfo->ipi_hashbase, M_PCB, pcbinfo->ipi_hashmask); hashdestroy(pcbinfo->ipi_porthashbase, M_PCB, pcbinfo->ipi_porthashmask); hashdestroy(pcbinfo->ipi_lbgrouphashbase, M_PCB, pcbinfo->ipi_lbgrouphashmask); #ifdef PCBGROUP in_pcbgroup_destroy(pcbinfo); #endif uma_zdestroy(pcbinfo->ipi_zone); INP_LIST_LOCK_DESTROY(pcbinfo); INP_HASH_LOCK_DESTROY(pcbinfo); INP_INFO_LOCK_DESTROY(pcbinfo); } /* * Allocate a PCB and associate it with the socket. * On success return with the PCB locked. 
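 *
 * The three hash tables set up in in_pcbinfo_init() above (the
 * connection hash, the port hash and the lb-group hash) all come
 * from hashinit(9), which rounds the requested size to a power of
 * two and returns a mask rather than a length, so bucket selection
 * throughout this file is a single AND.  The idiom, with a
 * hypothetical 128-bucket table:
 *
 *	struct inpcbhead *table, *head;
 *	u_long mask;
 *
 *	table = hashinit(128, M_PCB, &mask);	(sets mask = 127)
 *	head = &table[hash & mask];		(no modulo needed)
 *
 * Because the mask is buckets - 1, hash & mask equals hash % buckets
 * for any hash value, at the cost of one AND instruction.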
*/ int in_pcballoc(struct socket *so, struct inpcbinfo *pcbinfo) { struct inpcb *inp; int error; #ifdef INVARIANTS if (pcbinfo == &V_tcbinfo) { INP_INFO_RLOCK_ASSERT(pcbinfo); } else { INP_INFO_WLOCK_ASSERT(pcbinfo); } #endif error = 0; inp = uma_zalloc(pcbinfo->ipi_zone, M_NOWAIT); if (inp == NULL) return (ENOBUFS); bzero(&inp->inp_start_zero, inp_zero_size); inp->inp_pcbinfo = pcbinfo; inp->inp_socket = so; inp->inp_cred = crhold(so->so_cred); inp->inp_inc.inc_fibnum = so->so_fibnum; #ifdef MAC error = mac_inpcb_init(inp, M_NOWAIT); if (error != 0) goto out; mac_inpcb_create(so, inp); #endif #if defined(IPSEC) || defined(IPSEC_SUPPORT) error = ipsec_init_pcbpolicy(inp); if (error != 0) { #ifdef MAC mac_inpcb_destroy(inp); #endif goto out; } #endif /*IPSEC*/ #ifdef INET6 if (INP_SOCKAF(so) == AF_INET6) { inp->inp_vflag |= INP_IPV6PROTO; if (V_ip6_v6only) inp->inp_flags |= IN6P_IPV6_V6ONLY; } #endif INP_WLOCK(inp); INP_LIST_WLOCK(pcbinfo); CK_LIST_INSERT_HEAD(pcbinfo->ipi_listhead, inp, inp_list); pcbinfo->ipi_count++; so->so_pcb = (caddr_t)inp; #ifdef INET6 if (V_ip6_auto_flowlabel) inp->inp_flags |= IN6P_AUTOFLOWLABEL; #endif inp->inp_gencnt = ++pcbinfo->ipi_gencnt; refcount_init(&inp->inp_refcount, 1); /* Reference from inpcbinfo */ /* * Routes in inpcb's can cache L2 as well; they are guaranteed * to be cleaned up. */ inp->inp_route.ro_flags = RT_LLE_CACHE; INP_LIST_WUNLOCK(pcbinfo); #if defined(IPSEC) || defined(IPSEC_SUPPORT) || defined(MAC) out: if (error != 0) { crfree(inp->inp_cred); uma_zfree(pcbinfo->ipi_zone, inp); } #endif return (error); } #ifdef INET int in_pcbbind(struct inpcb *inp, struct sockaddr *nam, struct ucred *cred) { int anonport, error; INP_WLOCK_ASSERT(inp); INP_HASH_WLOCK_ASSERT(inp->inp_pcbinfo); if (inp->inp_lport != 0 || inp->inp_laddr.s_addr != INADDR_ANY) return (EINVAL); anonport = nam == NULL || ((struct sockaddr_in *)nam)->sin_port == 0; error = in_pcbbind_setup(inp, nam, &inp->inp_laddr.s_addr, &inp->inp_lport, cred); if (error) return (error); if (in_pcbinshash(inp) != 0) { inp->inp_laddr.s_addr = INADDR_ANY; inp->inp_lport = 0; return (EAGAIN); } if (anonport) inp->inp_flags |= INP_ANONPORT; return (0); } #endif /* * Select a local port (number) to use. */ #if defined(INET) || defined(INET6) int in_pcb_lport(struct inpcb *inp, struct in_addr *laddrp, u_short *lportp, struct ucred *cred, int lookupflags) { struct inpcbinfo *pcbinfo; struct inpcb *tmpinp; unsigned short *lastport; int count, dorandom, error; u_short aux, first, last, lport; #ifdef INET struct in_addr laddr; #endif pcbinfo = inp->inp_pcbinfo; /* * Because no actual state changes occur here, a global write lock on * the pcbinfo isn't required. */ INP_LOCK_ASSERT(inp); INP_HASH_LOCK_ASSERT(pcbinfo); if (inp->inp_flags & INP_HIGHPORT) { first = V_ipport_hifirstauto; /* sysctl */ last = V_ipport_hilastauto; lastport = &pcbinfo->ipi_lasthi; } else if (inp->inp_flags & INP_LOWPORT) { error = priv_check_cred(cred, PRIV_NETINET_RESERVEDPORT); if (error) return (error); first = V_ipport_lowfirstauto; /* 1023 */ last = V_ipport_lowlastauto; /* 600 */ lastport = &pcbinfo->ipi_lastlow; } else { first = V_ipport_firstauto; /* sysctl */ last = V_ipport_lastauto; lastport = &pcbinfo->ipi_lastport; } /* * For UDP(-Lite), use random port allocation as long as the user * allows it. For TCP (and as of yet unknown) connections, * use random port allocation only if the user allows it AND * ipport_tick() allows it. 
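 *
 * Note also the normalization a few lines below: the "low" range is
 * configured high-to-low (1023 down to 600) while the other ranges
 * run upward, and rather than maintaining two search loops the code
 * swaps the bounds once so that first <= last always holds:
 *
 *	if (first > last) {	(e.g. first = 1023, last = 600)
 *		aux = first;
 *		first = last;	(first = 600)
 *		last = aux;	(last = 1023)
 *	}
 *
 * After this a single upward-walking wraparound loop serves all
 * three port ranges.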
if (V_ipport_randomized && (!V_ipport_stoprandom || pcbinfo == &V_udbinfo || pcbinfo == &V_ulitecbinfo)) dorandom = 1; else dorandom = 0; /* * It makes no sense to do random port allocation if * we have the only port available. */ if (first == last) dorandom = 0; /* Make sure to not include UDP(-Lite) packets in the count. */ if (pcbinfo != &V_udbinfo && pcbinfo != &V_ulitecbinfo) V_ipport_tcpallocs++; /* * Instead of having two loops further down counting up or down * make sure that first is always <= last and go with only one * code path implementing all logic. */ if (first > last) { aux = first; first = last; last = aux; } #ifdef INET /* Make the compiler happy. */ laddr.s_addr = 0; if ((inp->inp_vflag & (INP_IPV4|INP_IPV6)) == INP_IPV4) { KASSERT(laddrp != NULL, ("%s: laddrp NULL for v4 inp %p", __func__, inp)); laddr = *laddrp; } #endif tmpinp = NULL; /* Make compiler happy. */ lport = *lportp; if (dorandom) *lastport = first + (arc4random() % (last - first)); count = last - first; do { if (count-- < 0) /* completely used? */ return (EADDRNOTAVAIL); ++*lastport; if (*lastport < first || *lastport > last) *lastport = first; lport = htons(*lastport); #ifdef INET6 if ((inp->inp_vflag & INP_IPV6) != 0) tmpinp = in6_pcblookup_local(pcbinfo, &inp->in6p_laddr, lport, lookupflags, cred); #endif #if defined(INET) && defined(INET6) else #endif #ifdef INET tmpinp = in_pcblookup_local(pcbinfo, laddr, lport, lookupflags, cred); #endif } while (tmpinp != NULL); #ifdef INET if ((inp->inp_vflag & (INP_IPV4|INP_IPV6)) == INP_IPV4) laddrp->s_addr = laddr.s_addr; #endif *lportp = lport; return (0); } /* * Return cached socket options. */ int inp_so_options(const struct inpcb *inp) { int so_options; so_options = 0; if ((inp->inp_flags2 & INP_REUSEPORT_LB) != 0) so_options |= SO_REUSEPORT_LB; if ((inp->inp_flags2 & INP_REUSEPORT) != 0) so_options |= SO_REUSEPORT; if ((inp->inp_flags2 & INP_REUSEADDR) != 0) so_options |= SO_REUSEADDR; return (so_options); } #endif /* INET || INET6 */ /* * Check if a new BINDMULTI socket is allowed to be created. * * ni points to the new inp. * oi points to the existing inp. * * This checks whether the existing inp also has BINDMULTI and * whether the credentials match. */ int in_pcbbind_check_bindmulti(const struct inpcb *ni, const struct inpcb *oi) { /* Check permissions match */ if ((ni->inp_flags2 & INP_BINDMULTI) && (ni->inp_cred->cr_uid != oi->inp_cred->cr_uid)) return (0); /* Check the existing inp has BINDMULTI set */ if ((ni->inp_flags2 & INP_BINDMULTI) && ((oi->inp_flags2 & INP_BINDMULTI) == 0)) return (0); /* * We're okay - either INP_BINDMULTI isn't set on ni, or * it is and it matches the checks. */ return (1); } #ifdef INET /* * Set up a bind operation on a PCB, performing port allocation * as required, but do not actually modify the PCB. Callers can * either complete the bind by setting inp_laddr/inp_lport and * calling in_pcbinshash(), or they can just use the resulting * port and address to authorise the sending of a once-off packet. * * On error, the values of *laddrp and *lportp are not changed.
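 *
 * The candidate search in in_pcb_lport() above is a bounded
 * wraparound scan: it budgets last - first + 1 probes, wraps the
 * cursor back to first when it runs off the end, and succeeds on the
 * first port with no conflicting PCB.  Stripped of the INET/INET6
 * split, with in_use() standing in for the in_pcblookup_local() or
 * in6_pcblookup_local() probe:
 *
 *	count = last - first;
 *	do {
 *		if (count-- < 0)
 *			return (EADDRNOTAVAIL);	(range exhausted)
 *		++*lastport;
 *		if (*lastport < first || *lastport > last)
 *			*lastport = first;	(wrap around)
 *		lport = htons(*lastport);
 *	} while (in_use(lport));
 *
 * Randomization only moves the starting cursor; the scan itself is
 * always sequential from there.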
*/ int in_pcbbind_setup(struct inpcb *inp, struct sockaddr *nam, in_addr_t *laddrp, u_short *lportp, struct ucred *cred) { struct socket *so = inp->inp_socket; struct sockaddr_in *sin; struct inpcbinfo *pcbinfo = inp->inp_pcbinfo; struct in_addr laddr; u_short lport = 0; int lookupflags = 0, reuseport = (so->so_options & SO_REUSEPORT); int error; /* * XXX: Maybe we could let SO_REUSEPORT_LB set SO_REUSEPORT bit here * so that we don't have to add to the (already messy) code below. */ int reuseport_lb = (so->so_options & SO_REUSEPORT_LB); /* * No state changes, so read locks are sufficient here. */ INP_LOCK_ASSERT(inp); INP_HASH_LOCK_ASSERT(pcbinfo); if (CK_STAILQ_EMPTY(&V_in_ifaddrhead)) /* XXX broken! */ return (EADDRNOTAVAIL); laddr.s_addr = *laddrp; if (nam != NULL && laddr.s_addr != INADDR_ANY) return (EINVAL); if ((so->so_options & (SO_REUSEADDR|SO_REUSEPORT|SO_REUSEPORT_LB)) == 0) lookupflags = INPLOOKUP_WILDCARD; if (nam == NULL) { if ((error = prison_local_ip4(cred, &laddr)) != 0) return (error); } else { sin = (struct sockaddr_in *)nam; if (nam->sa_len != sizeof (*sin)) return (EINVAL); #ifdef notdef /* * We should check the family, but old programs * incorrectly fail to initialize it. */ if (sin->sin_family != AF_INET) return (EAFNOSUPPORT); #endif error = prison_local_ip4(cred, &sin->sin_addr); if (error) return (error); if (sin->sin_port != *lportp) { /* Don't allow the port to change. */ if (*lportp != 0) return (EINVAL); lport = sin->sin_port; } /* NB: lport is left as 0 if the port isn't being changed. */ if (IN_MULTICAST(ntohl(sin->sin_addr.s_addr))) { /* * Treat SO_REUSEADDR as SO_REUSEPORT for multicast; * allow complete duplication of binding if * SO_REUSEPORT is set, or if SO_REUSEADDR is set * and a multicast address is bound on both * new and duplicated sockets. */ if ((so->so_options & (SO_REUSEADDR|SO_REUSEPORT)) != 0) reuseport = SO_REUSEADDR|SO_REUSEPORT; /* * XXX: How to deal with SO_REUSEPORT_LB here? * Treat same as SO_REUSEPORT for now. */ if ((so->so_options & (SO_REUSEADDR|SO_REUSEPORT_LB)) != 0) reuseport_lb = SO_REUSEADDR|SO_REUSEPORT_LB; } else if (sin->sin_addr.s_addr != INADDR_ANY) { sin->sin_port = 0; /* yech... */ bzero(&sin->sin_zero, sizeof(sin->sin_zero)); /* * Is the address a local IP address? * If INP_BINDANY is set, then the socket may be bound * to any endpoint address, local or not. */ if ((inp->inp_flags & INP_BINDANY) == 0 && ifa_ifwithaddr_check((struct sockaddr *)sin) == 0) return (EADDRNOTAVAIL); } laddr = sin->sin_addr; if (lport) { struct inpcb *t; struct tcptw *tw; /* GROSS */ if (ntohs(lport) <= V_ipport_reservedhigh && ntohs(lport) >= V_ipport_reservedlow && priv_check_cred(cred, PRIV_NETINET_RESERVEDPORT)) return (EACCES); if (!IN_MULTICAST(ntohl(sin->sin_addr.s_addr)) && priv_check_cred(inp->inp_cred, PRIV_NETINET_REUSEPORT) != 0) { t = in_pcblookup_local(pcbinfo, sin->sin_addr, lport, INPLOOKUP_WILDCARD, cred); /* * XXX * This entire block sorely needs a rewrite. */ if (t && ((inp->inp_flags2 & INP_BINDMULTI) == 0) && ((t->inp_flags & INP_TIMEWAIT) == 0) && (so->so_type != SOCK_STREAM || ntohl(t->inp_faddr.s_addr) == INADDR_ANY) && (ntohl(sin->sin_addr.s_addr) != INADDR_ANY || ntohl(t->inp_laddr.s_addr) != INADDR_ANY || (t->inp_flags2 & INP_REUSEPORT) || (t->inp_flags2 & INP_REUSEPORT_LB) == 0) && (inp->inp_cred->cr_uid != t->inp_cred->cr_uid)) return (EADDRINUSE); /* * If the socket is a BINDMULTI socket, then * the credentials need to match and the * original socket also has to have been bound * with BINDMULTI. */ if (t && (! 
in_pcbbind_check_bindmulti(inp, t))) return (EADDRINUSE); } t = in_pcblookup_local(pcbinfo, sin->sin_addr, lport, lookupflags, cred); if (t && (t->inp_flags & INP_TIMEWAIT)) { /* * XXXRW: If an inpcb has had its timewait * state recycled, we treat the address as * being in use (for now). This is better * than a panic, but not desirable. */ tw = intotw(t); if (tw == NULL || ((reuseport & tw->tw_so_options) == 0 && (reuseport_lb & tw->tw_so_options) == 0)) { return (EADDRINUSE); } } else if (t && ((inp->inp_flags2 & INP_BINDMULTI) == 0) && (reuseport & inp_so_options(t)) == 0 && (reuseport_lb & inp_so_options(t)) == 0) { #ifdef INET6 if (ntohl(sin->sin_addr.s_addr) != INADDR_ANY || ntohl(t->inp_laddr.s_addr) != INADDR_ANY || (inp->inp_vflag & INP_IPV6PROTO) == 0 || (t->inp_vflag & INP_IPV6PROTO) == 0) #endif return (EADDRINUSE); if (t && (! in_pcbbind_check_bindmulti(inp, t))) return (EADDRINUSE); } } } if (*lportp != 0) lport = *lportp; if (lport == 0) { error = in_pcb_lport(inp, &laddr, &lport, cred, lookupflags); if (error != 0) return (error); } *laddrp = laddr.s_addr; *lportp = lport; return (0); } /* * Connect from a socket to a specified address. * Both address and port must be specified in argument sin. * If we don't have a local address for this socket yet, * then pick one. */ int in_pcbconnect_mbuf(struct inpcb *inp, struct sockaddr *nam, struct ucred *cred, struct mbuf *m) { u_short lport, fport; in_addr_t laddr, faddr; int anonport, error; INP_WLOCK_ASSERT(inp); INP_HASH_WLOCK_ASSERT(inp->inp_pcbinfo); lport = inp->inp_lport; laddr = inp->inp_laddr.s_addr; anonport = (lport == 0); error = in_pcbconnect_setup(inp, nam, &laddr, &lport, &faddr, &fport, NULL, cred); if (error) return (error); /* Do the initial binding of the local address if required. */ if (inp->inp_laddr.s_addr == INADDR_ANY && inp->inp_lport == 0) { inp->inp_lport = lport; inp->inp_laddr.s_addr = laddr; if (in_pcbinshash(inp) != 0) { inp->inp_laddr.s_addr = INADDR_ANY; inp->inp_lport = 0; return (EAGAIN); } } /* Commit the remaining changes. */ inp->inp_lport = lport; inp->inp_laddr.s_addr = laddr; inp->inp_faddr.s_addr = faddr; inp->inp_fport = fport; in_pcbrehash_mbuf(inp, m); if (anonport) inp->inp_flags |= INP_ANONPORT; return (0); } int in_pcbconnect(struct inpcb *inp, struct sockaddr *nam, struct ucred *cred) { return (in_pcbconnect_mbuf(inp, nam, cred, NULL)); } /* * Do proper source address selection on an unbound socket in case * of connect. Take jails into account as well. */ int in_pcbladdr(struct inpcb *inp, struct in_addr *faddr, struct in_addr *laddr, struct ucred *cred) { struct ifaddr *ifa; struct sockaddr *sa; struct sockaddr_in *sin; struct route sro; struct epoch_tracker et; int error; KASSERT(laddr != NULL, ("%s: laddr NULL", __func__)); /* * Bypass source address selection and use the primary jail IP * if requested. */ if (cred != NULL && !prison_saddrsel_ip4(cred, laddr)) return (0); error = 0; bzero(&sro, sizeof(sro)); sin = (struct sockaddr_in *)&sro.ro_dst; sin->sin_family = AF_INET; sin->sin_len = sizeof(struct sockaddr_in); sin->sin_addr.s_addr = faddr->s_addr; /* * If route is known our src addr is taken from the i/f, * else punt. * * Find out route to destination. */ if ((inp->inp_socket->so_options & SO_DONTROUTE) == 0) in_rtalloc_ign(&sro, 0, inp->inp_inc.inc_fibnum); /* * If we found a route, use the address corresponding to * the outgoing interface.
* * Otherwise assume faddr is reachable on a directly connected * network and try to find a corresponding interface to take * the source address from. */ NET_EPOCH_ENTER(et); if (sro.ro_rt == NULL || sro.ro_rt->rt_ifp == NULL) { struct in_ifaddr *ia; struct ifnet *ifp; ia = ifatoia(ifa_ifwithdstaddr((struct sockaddr *)sin, inp->inp_socket->so_fibnum)); if (ia == NULL) { ia = ifatoia(ifa_ifwithnet((struct sockaddr *)sin, 0, inp->inp_socket->so_fibnum)); } if (ia == NULL) { error = ENETUNREACH; goto done; } if (cred == NULL || !prison_flag(cred, PR_IP4)) { laddr->s_addr = ia->ia_addr.sin_addr.s_addr; goto done; } ifp = ia->ia_ifp; ia = NULL; CK_STAILQ_FOREACH(ifa, &ifp->if_addrhead, ifa_link) { sa = ifa->ifa_addr; if (sa->sa_family != AF_INET) continue; sin = (struct sockaddr_in *)sa; if (prison_check_ip4(cred, &sin->sin_addr) == 0) { ia = (struct in_ifaddr *)ifa; break; } } if (ia != NULL) { laddr->s_addr = ia->ia_addr.sin_addr.s_addr; goto done; } /* 3. As a last resort return the 'default' jail address. */ error = prison_get_ip4(cred, laddr); goto done; } /* * If the outgoing interface on the route found is not * a loopback interface, use the address from that interface. * In case of jails do those three steps: * 1. check if the interface address belongs to the jail. If so use it. * 2. check if we have any address on the outgoing interface * belonging to this jail. If so use it. * 3. as a last resort return the 'default' jail address. */ if ((sro.ro_rt->rt_ifp->if_flags & IFF_LOOPBACK) == 0) { struct in_ifaddr *ia; struct ifnet *ifp; /* If not jailed, use the default returned. */ if (cred == NULL || !prison_flag(cred, PR_IP4)) { ia = (struct in_ifaddr *)sro.ro_rt->rt_ifa; laddr->s_addr = ia->ia_addr.sin_addr.s_addr; goto done; } /* Jailed. */ /* 1. Check if the iface address belongs to the jail. */ sin = (struct sockaddr_in *)sro.ro_rt->rt_ifa->ifa_addr; if (prison_check_ip4(cred, &sin->sin_addr) == 0) { ia = (struct in_ifaddr *)sro.ro_rt->rt_ifa; laddr->s_addr = ia->ia_addr.sin_addr.s_addr; goto done; } /* * 2. Check if we have any address on the outgoing interface * belonging to this jail. */ ia = NULL; ifp = sro.ro_rt->rt_ifp; CK_STAILQ_FOREACH(ifa, &ifp->if_addrhead, ifa_link) { sa = ifa->ifa_addr; if (sa->sa_family != AF_INET) continue; sin = (struct sockaddr_in *)sa; if (prison_check_ip4(cred, &sin->sin_addr) == 0) { ia = (struct in_ifaddr *)ifa; break; } } if (ia != NULL) { laddr->s_addr = ia->ia_addr.sin_addr.s_addr; goto done; } /* 3. As a last resort return the 'default' jail address. */ error = prison_get_ip4(cred, laddr); goto done; } /* * The outgoing interface is marked with 'loopback net', so a route * to ourselves is here. * Try to find the interface of the destination address and then * take the address from there. That interface is not necessarily * a loopback interface. * In case of jails, check that it is an address of the jail * and if we cannot find, fall back to the 'default' jail address. 
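 *
 * All three branches of in_pcbladdr() (no usable route, route via a
 * non-loopback interface, route via a loopback interface) end in the
 * same jailed-socket cascade: first try the candidate address
 * itself, then any AF_INET address on the chosen interface that the
 * jail owns, then the jail's primary address.  A hypothetical helper
 * capturing the middle step:
 *
 *	static struct in_ifaddr *
 *	jail_addr_on_ifp_sketch(struct ifnet *ifp, struct ucred *cred)
 *	{
 *		struct ifaddr *ifa;
 *
 *		CK_STAILQ_FOREACH(ifa, &ifp->if_addrhead, ifa_link) {
 *			if (ifa->ifa_addr->sa_family != AF_INET)
 *				continue;
 *			if (prison_check_ip4(cred,
 *			    &((struct sockaddr_in *)
 *			    ifa->ifa_addr)->sin_addr) == 0)
 *				return ((struct in_ifaddr *)ifa);
 *		}
 *		return (NULL);
 *	}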
*/ if ((sro.ro_rt->rt_ifp->if_flags & IFF_LOOPBACK) != 0) { struct sockaddr_in sain; struct in_ifaddr *ia; bzero(&sain, sizeof(struct sockaddr_in)); sain.sin_family = AF_INET; sain.sin_len = sizeof(struct sockaddr_in); sain.sin_addr.s_addr = faddr->s_addr; ia = ifatoia(ifa_ifwithdstaddr(sintosa(&sain), inp->inp_socket->so_fibnum)); if (ia == NULL) ia = ifatoia(ifa_ifwithnet(sintosa(&sain), 0, inp->inp_socket->so_fibnum)); if (ia == NULL) ia = ifatoia(ifa_ifwithaddr(sintosa(&sain))); if (cred == NULL || !prison_flag(cred, PR_IP4)) { if (ia == NULL) { error = ENETUNREACH; goto done; } laddr->s_addr = ia->ia_addr.sin_addr.s_addr; goto done; } /* Jailed. */ if (ia != NULL) { struct ifnet *ifp; ifp = ia->ia_ifp; ia = NULL; CK_STAILQ_FOREACH(ifa, &ifp->if_addrhead, ifa_link) { sa = ifa->ifa_addr; if (sa->sa_family != AF_INET) continue; sin = (struct sockaddr_in *)sa; if (prison_check_ip4(cred, &sin->sin_addr) == 0) { ia = (struct in_ifaddr *)ifa; break; } } if (ia != NULL) { laddr->s_addr = ia->ia_addr.sin_addr.s_addr; goto done; } } /* 3. As a last resort return the 'default' jail address. */ error = prison_get_ip4(cred, laddr); goto done; } done: NET_EPOCH_EXIT(et); if (sro.ro_rt != NULL) RTFREE(sro.ro_rt); return (error); } /* * Set up for a connect from a socket to the specified address. * On entry, *laddrp and *lportp should contain the current local * address and port for the PCB; these are updated to the values * that should be placed in inp_laddr and inp_lport to complete * the connect. * * On success, *faddrp and *fportp will be set to the remote address * and port. These are not updated in the error case. * * If the operation fails because the connection already exists, * *oinpp will be set to the PCB of that connection so that the * caller can decide to override it. In all other cases, *oinpp * is set to NULL. */ int in_pcbconnect_setup(struct inpcb *inp, struct sockaddr *nam, in_addr_t *laddrp, u_short *lportp, in_addr_t *faddrp, u_short *fportp, struct inpcb **oinpp, struct ucred *cred) { struct rm_priotracker in_ifa_tracker; struct sockaddr_in *sin = (struct sockaddr_in *)nam; struct in_ifaddr *ia; struct inpcb *oinp; struct in_addr laddr, faddr; u_short lport, fport; int error; /* * Because a global state change doesn't actually occur here, a read * lock is sufficient. */ INP_LOCK_ASSERT(inp); INP_HASH_LOCK_ASSERT(inp->inp_pcbinfo); if (oinpp != NULL) *oinpp = NULL; if (nam->sa_len != sizeof (*sin)) return (EINVAL); if (sin->sin_family != AF_INET) return (EAFNOSUPPORT); if (sin->sin_port == 0) return (EADDRNOTAVAIL); laddr.s_addr = *laddrp; lport = *lportp; faddr = sin->sin_addr; fport = sin->sin_port; if (!CK_STAILQ_EMPTY(&V_in_ifaddrhead)) { /* * If the destination address is INADDR_ANY, * use the primary local address. * If the supplied address is INADDR_BROADCAST, * and the primary interface supports broadcast, * choose the broadcast address for that interface. 
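 *
 * In other words, two special destinations are rewritten before the
 * connection lookup proceeds, sketched here with hypothetical
 * primary_*() helpers standing in for the CK_STAILQ_FIRST() accesses
 * below:
 *
 *	if (faddr.s_addr == INADDR_ANY)
 *		faddr = primary_local_addr();	(first V_in_ifaddrhead
 *						 entry, jail-adjusted)
 *	else if (faddr.s_addr == INADDR_BROADCAST &&
 *	    (primary_if_flags() & IFF_BROADCAST))
 *		faddr = primary_broadcast_addr();
 *
 * Any other destination is used as given; both rewrites consult only
 * the head of the address list, which is why the whole block is
 * guarded by a CK_STAILQ_EMPTY() check.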
*/ if (faddr.s_addr == INADDR_ANY) { IN_IFADDR_RLOCK(&in_ifa_tracker); faddr = IA_SIN(CK_STAILQ_FIRST(&V_in_ifaddrhead))->sin_addr; IN_IFADDR_RUNLOCK(&in_ifa_tracker); if (cred != NULL && (error = prison_get_ip4(cred, &faddr)) != 0) return (error); } else if (faddr.s_addr == (u_long)INADDR_BROADCAST) { IN_IFADDR_RLOCK(&in_ifa_tracker); if (CK_STAILQ_FIRST(&V_in_ifaddrhead)->ia_ifp->if_flags & IFF_BROADCAST) faddr = satosin(&CK_STAILQ_FIRST( &V_in_ifaddrhead)->ia_broadaddr)->sin_addr; IN_IFADDR_RUNLOCK(&in_ifa_tracker); } } if (laddr.s_addr == INADDR_ANY) { error = in_pcbladdr(inp, &faddr, &laddr, cred); /* * If the destination address is multicast and an outgoing * interface has been set as a multicast option, prefer the * address of that interface as our source address. */ if (IN_MULTICAST(ntohl(faddr.s_addr)) && inp->inp_moptions != NULL) { struct ip_moptions *imo; struct ifnet *ifp; imo = inp->inp_moptions; if (imo->imo_multicast_ifp != NULL) { ifp = imo->imo_multicast_ifp; IN_IFADDR_RLOCK(&in_ifa_tracker); CK_STAILQ_FOREACH(ia, &V_in_ifaddrhead, ia_link) { if ((ia->ia_ifp == ifp) && (cred == NULL || prison_check_ip4(cred, &ia->ia_addr.sin_addr) == 0)) break; } if (ia == NULL) error = EADDRNOTAVAIL; else { laddr = ia->ia_addr.sin_addr; error = 0; } IN_IFADDR_RUNLOCK(&in_ifa_tracker); } } if (error) return (error); } oinp = in_pcblookup_hash_locked(inp->inp_pcbinfo, faddr, fport, laddr, lport, 0, NULL); if (oinp != NULL) { if (oinpp != NULL) *oinpp = oinp; return (EADDRINUSE); } if (lport == 0) { error = in_pcbbind_setup(inp, NULL, &laddr.s_addr, &lport, cred); if (error) return (error); } *laddrp = laddr.s_addr; *lportp = lport; *faddrp = faddr.s_addr; *fportp = fport; return (0); } void in_pcbdisconnect(struct inpcb *inp) { INP_WLOCK_ASSERT(inp); INP_HASH_WLOCK_ASSERT(inp->inp_pcbinfo); inp->inp_faddr.s_addr = INADDR_ANY; inp->inp_fport = 0; in_pcbrehash(inp); } #endif /* INET */ /* * in_pcbdetach() is responsible for disassociating a socket from an inpcb. * For most protocols, this will be invoked immediately prior to calling * in_pcbfree(). However, with TCP the inpcb may significantly outlive the * socket, in which case in_pcbfree() is deferred. */ void in_pcbdetach(struct inpcb *inp) { KASSERT(inp->inp_socket != NULL, ("%s: inp_socket == NULL", __func__)); #ifdef RATELIMIT if (inp->inp_snd_tag != NULL) in_pcbdetach_txrtlmt(inp); #endif inp->inp_socket->so_pcb = NULL; inp->inp_socket = NULL; } /* * in_pcbref() bumps the reference count on an inpcb in order to maintain * stability of an inpcb pointer despite the inpcb lock being released. This * is used in TCP when the inpcbinfo lock needs to be acquired or upgraded, * but where the inpcb lock may already be held, or when acquiring a reference * via a pcbgroup. * * in_pcbref() should be used only to provide brief memory stability, and * must always be followed by a call to INP_WLOCK() and in_pcbrele() to * garbage collect the inpcb if it has been in_pcbfree()'d from another * context. Until in_pcbrele() has returned that the inpcb is still valid, * lock and rele are the *only* safe operations that may be performed on the * inpcb. * * While the inpcb will not be freed, releasing the inpcb lock means that the * connection's state may change, so the caller should be careful to * revalidate any cached state on reacquiring the lock. Drop the reference * using in_pcbrele().
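 *
 * The contract above implies a canonical caller pattern (pin, drop
 * the lock, take the heavier lock, relock, revalidate), sketched
 * here for a hypothetical caller that needs the pcbinfo lock:
 *
 *	in_pcbref(inp);			(pin the memory)
 *	INP_WUNLOCK(inp);
 *	INP_INFO_WLOCK(pcbinfo);	(lock order: info before inpcb)
 *	INP_WLOCK(inp);
 *	if (in_pcbrele_wlocked(inp)) {
 *		(freed meanwhile; the inpcb lock is already dropped
 *		 and inp must not be touched again)
 *		INP_INFO_WUNLOCK(pcbinfo);
 *		return;
 *	}
 *	(still valid; revalidate any cached connection state here)
 *
 * A non-zero return from in_pcbrele_wlocked() means it already
 * unlocked, and possibly freed, the inpcb.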
*/ void in_pcbref(struct inpcb *inp) { KASSERT(inp->inp_refcount > 0, ("%s: refcount 0", __func__)); refcount_acquire(&inp->inp_refcount); } /* * Drop a refcount on an inpcb elevated using in_pcbref(); because a call to * in_pcbfree() may have been made between in_pcbref() and in_pcbrele(), we * return a flag indicating whether or not the inpcb remains valid. If it is * valid, we return with the inpcb lock held. * * Notice that, unlike in_pcbref(), the inpcb lock must be held to drop a * reference on an inpcb. Historically more work was done here (actually, in * in_pcbfree_internal()) but has been moved to in_pcbfree() to avoid the * need for the pcbinfo lock in in_pcbrele(). Deferring the free is entirely * about memory stability (and continued use of the write lock). */ int in_pcbrele_rlocked(struct inpcb *inp) { struct inpcbinfo *pcbinfo; KASSERT(inp->inp_refcount > 0, ("%s: refcount 0", __func__)); INP_RLOCK_ASSERT(inp); if (refcount_release(&inp->inp_refcount) == 0) { /* * If the inpcb has been freed, let the caller know, even if * this isn't the last reference. */ if (inp->inp_flags2 & INP_FREED) { INP_RUNLOCK(inp); return (1); } return (0); } KASSERT(inp->inp_socket == NULL, ("%s: inp_socket != NULL", __func__)); #ifdef TCPHPTS if (inp->inp_in_hpts || inp->inp_in_input) { struct tcp_hpts_entry *hpts; /* * We should not be on the hpts at * this point in any form. we must * get the lock to be sure. */ hpts = tcp_hpts_lock(inp); if (inp->inp_in_hpts) panic("Hpts:%p inp:%p at free still on hpts", hpts, inp); mtx_unlock(&hpts->p_mtx); hpts = tcp_input_lock(inp); if (inp->inp_in_input) panic("Hpts:%p inp:%p at free still on input hpts", hpts, inp); mtx_unlock(&hpts->p_mtx); } #endif INP_RUNLOCK(inp); pcbinfo = inp->inp_pcbinfo; uma_zfree(pcbinfo->ipi_zone, inp); return (1); } int in_pcbrele_wlocked(struct inpcb *inp) { struct inpcbinfo *pcbinfo; KASSERT(inp->inp_refcount > 0, ("%s: refcount 0", __func__)); INP_WLOCK_ASSERT(inp); if (refcount_release(&inp->inp_refcount) == 0) { /* * If the inpcb has been freed, let the caller know, even if * this isn't the last reference. */ if (inp->inp_flags2 & INP_FREED) { INP_WUNLOCK(inp); return (1); } return (0); } KASSERT(inp->inp_socket == NULL, ("%s: inp_socket != NULL", __func__)); #ifdef TCPHPTS if (inp->inp_in_hpts || inp->inp_in_input) { struct tcp_hpts_entry *hpts; /* * We should not be on the hpts at * this point in any form. we must * get the lock to be sure. */ hpts = tcp_hpts_lock(inp); if (inp->inp_in_hpts) panic("Hpts:%p inp:%p at free still on hpts", hpts, inp); mtx_unlock(&hpts->p_mtx); hpts = tcp_input_lock(inp); if (inp->inp_in_input) panic("Hpts:%p inp:%p at free still on input hpts", hpts, inp); mtx_unlock(&hpts->p_mtx); } #endif INP_WUNLOCK(inp); pcbinfo = inp->inp_pcbinfo; uma_zfree(pcbinfo->ipi_zone, inp); return (1); } /* * Temporary wrapper. 
*/ int in_pcbrele(struct inpcb *inp) { return (in_pcbrele_wlocked(inp)); } void in_pcblist_rele_rlocked(epoch_context_t ctx) { struct in_pcblist *il; struct inpcb *inp; struct inpcbinfo *pcbinfo; int i, n; il = __containerof(ctx, struct in_pcblist, il_epoch_ctx); pcbinfo = il->il_pcbinfo; n = il->il_count; INP_INFO_WLOCK(pcbinfo); for (i = 0; i < n; i++) { inp = il->il_inp_list[i]; INP_RLOCK(inp); if (!in_pcbrele_rlocked(inp)) INP_RUNLOCK(inp); } INP_INFO_WUNLOCK(pcbinfo); free(il, M_TEMP); } static void inpcbport_free(epoch_context_t ctx) { struct inpcbport *phd; phd = __containerof(ctx, struct inpcbport, phd_epoch_ctx); free(phd, M_PCB); } static void in_pcbfree_deferred(epoch_context_t ctx) { struct inpcb *inp; int released __unused; inp = __containerof(ctx, struct inpcb, inp_epoch_ctx); INP_WLOCK(inp); + CURVNET_SET(inp->inp_vnet); #ifdef INET struct ip_moptions *imo = inp->inp_moptions; inp->inp_moptions = NULL; #endif /* XXXRW: Do as much as possible here. */ #if defined(IPSEC) || defined(IPSEC_SUPPORT) if (inp->inp_sp != NULL) ipsec_delete_pcbpolicy(inp); #endif #ifdef INET6 struct ip6_moptions *im6o = NULL; if (inp->inp_vflag & INP_IPV6PROTO) { ip6_freepcbopts(inp->in6p_outputopts); im6o = inp->in6p_moptions; inp->in6p_moptions = NULL; } #endif if (inp->inp_options) (void)m_free(inp->inp_options); inp->inp_vflag = 0; crfree(inp->inp_cred); #ifdef MAC mac_inpcb_destroy(inp); #endif released = in_pcbrele_wlocked(inp); MPASS(released); #ifdef INET6 ip6_freemoptions(im6o); #endif #ifdef INET inp_freemoptions(imo); #endif + CURVNET_RESTORE(); } /* * Unconditionally schedule an inpcb to be freed by decrementing its * reference count, which should occur only after the inpcb has been detached * from its socket. If another thread holds a temporary reference (acquired * using in_pcbref()) then the free is deferred until that reference is * released using in_pcbrele(), but the inpcb is still unlocked. Almost all * work, including removal from global lists, is done in this context, where * the pcbinfo lock is held. */ void in_pcbfree(struct inpcb *inp) { struct inpcbinfo *pcbinfo = inp->inp_pcbinfo; KASSERT(inp->inp_socket == NULL, ("%s: inp_socket != NULL", __func__)); KASSERT((inp->inp_flags2 & INP_FREED) == 0, ("%s: called twice for pcb %p", __func__, inp)); if (inp->inp_flags2 & INP_FREED) { INP_WUNLOCK(inp); return; } #ifdef INVARIANTS if (pcbinfo == &V_tcbinfo) { INP_INFO_LOCK_ASSERT(pcbinfo); } else { INP_INFO_WLOCK_ASSERT(pcbinfo); } #endif INP_WLOCK_ASSERT(inp); INP_LIST_WLOCK(pcbinfo); in_pcbremlists(inp); INP_LIST_WUNLOCK(pcbinfo); RO_INVALIDATE_CACHE(&inp->inp_route); /* mark as destruction in progress */ inp->inp_flags2 |= INP_FREED; INP_WUNLOCK(inp); epoch_call(net_epoch_preempt, &inp->inp_epoch_ctx, in_pcbfree_deferred); } /* * in_pcbdrop() removes an inpcb from hashed lists, releasing its address and * port reservation, and preventing it from being returned by inpcb lookups. * * It is used by TCP to mark an inpcb as unused and avoid future packet * delivery or event notification when a socket remains open but TCP has * closed. This might occur as a result of a shutdown()-initiated TCP close * or a RST on the wire, and allows the port binding to be reused while still * maintaining the invariant that so_pcb always points to a valid inpcb until * in_pcbdetach(). * * XXXRW: Possibly in_pcbdrop() should also prevent future notifications by * in_pcbnotifyall() and in_pcbpurgeif0()? 
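 *
 * Note that in_pcbfree() above frees nothing directly: it marks the
 * inpcb INP_FREED and hands the teardown to in_pcbfree_deferred()
 * through epoch_call(9), so lockless readers inside the network
 * epoch can drain first.  The CURVNET_SET()/CURVNET_RESTORE() pair
 * added to in_pcbfree_deferred() by this revision exists because
 * epoch callbacks run from a context with no vnet of their own,
 * while teardown paths such as inp_freemoptions() may expect one.
 * The general shape of the pattern, for a hypothetical struct obj:
 *
 *	epoch_call(net_epoch_preempt, &o->obj_epoch_ctx, obj_free_cb);
 *
 *	static void
 *	obj_free_cb(epoch_context_t ctx)
 *	{
 *		struct obj *o = __containerof(ctx, struct obj,
 *		    obj_epoch_ctx);
 *
 *		CURVNET_SET(o->obj_vnet);	(restore the owning vnet)
 *		obj_teardown(o);		(hypothetical teardown)
 *		CURVNET_RESTORE();
 *	}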
*/ void in_pcbdrop(struct inpcb *inp) { INP_WLOCK_ASSERT(inp); #ifdef INVARIANTS if (inp->inp_socket != NULL && inp->inp_ppcb != NULL) MPASS(inp->inp_refcount > 1); #endif /* * XXXRW: Possibly we should protect the setting of INP_DROPPED with * the hash lock...? */ inp->inp_flags |= INP_DROPPED; if (inp->inp_flags & INP_INHASHLIST) { struct inpcbport *phd = inp->inp_phd; INP_HASH_WLOCK(inp->inp_pcbinfo); in_pcbremlbgrouphash(inp); CK_LIST_REMOVE(inp, inp_hash); CK_LIST_REMOVE(inp, inp_portlist); if (CK_LIST_FIRST(&phd->phd_pcblist) == NULL) { CK_LIST_REMOVE(phd, phd_hash); epoch_call(net_epoch_preempt, &phd->phd_epoch_ctx, inpcbport_free); } INP_HASH_WUNLOCK(inp->inp_pcbinfo); inp->inp_flags &= ~INP_INHASHLIST; #ifdef PCBGROUP in_pcbgroup_remove(inp); #endif } } #ifdef INET /* * Common routines to return the socket addresses associated with inpcbs. */ struct sockaddr * in_sockaddr(in_port_t port, struct in_addr *addr_p) { struct sockaddr_in *sin; sin = malloc(sizeof *sin, M_SONAME, M_WAITOK | M_ZERO); sin->sin_family = AF_INET; sin->sin_len = sizeof(*sin); sin->sin_addr = *addr_p; sin->sin_port = port; return (struct sockaddr *)sin; } int in_getsockaddr(struct socket *so, struct sockaddr **nam) { struct inpcb *inp; struct in_addr addr; in_port_t port; inp = sotoinpcb(so); KASSERT(inp != NULL, ("in_getsockaddr: inp == NULL")); INP_RLOCK(inp); port = inp->inp_lport; addr = inp->inp_laddr; INP_RUNLOCK(inp); *nam = in_sockaddr(port, &addr); return 0; } int in_getpeeraddr(struct socket *so, struct sockaddr **nam) { struct inpcb *inp; struct in_addr addr; in_port_t port; inp = sotoinpcb(so); KASSERT(inp != NULL, ("in_getpeeraddr: inp == NULL")); INP_RLOCK(inp); port = inp->inp_fport; addr = inp->inp_faddr; INP_RUNLOCK(inp); *nam = in_sockaddr(port, &addr); return 0; } void in_pcbnotifyall(struct inpcbinfo *pcbinfo, struct in_addr faddr, int errno, struct inpcb *(*notify)(struct inpcb *, int)) { struct inpcb *inp, *inp_temp; INP_INFO_WLOCK(pcbinfo); CK_LIST_FOREACH_SAFE(inp, pcbinfo->ipi_listhead, inp_list, inp_temp) { INP_WLOCK(inp); #ifdef INET6 if ((inp->inp_vflag & INP_IPV4) == 0) { INP_WUNLOCK(inp); continue; } #endif if (inp->inp_faddr.s_addr != faddr.s_addr || inp->inp_socket == NULL) { INP_WUNLOCK(inp); continue; } if ((*notify)(inp, errno)) INP_WUNLOCK(inp); } INP_INFO_WUNLOCK(pcbinfo); } void in_pcbpurgeif0(struct inpcbinfo *pcbinfo, struct ifnet *ifp) { struct inpcb *inp; struct ip_moptions *imo; int i, gap; INP_INFO_WLOCK(pcbinfo); CK_LIST_FOREACH(inp, pcbinfo->ipi_listhead, inp_list) { INP_WLOCK(inp); imo = inp->inp_moptions; if ((inp->inp_vflag & INP_IPV4) && imo != NULL) { /* * Unselect the outgoing interface if it is being * detached. */ if (imo->imo_multicast_ifp == ifp) imo->imo_multicast_ifp = NULL; /* * Drop multicast group membership if we joined * through the interface being detached. * * XXX This can all be deferred to an epoch_call */ for (i = 0, gap = 0; i < imo->imo_num_memberships; i++) { if (imo->imo_membership[i]->inm_ifp == ifp) { IN_MULTI_LOCK_ASSERT(); in_leavegroup_locked(imo->imo_membership[i], NULL); gap++; } else if (gap != 0) imo->imo_membership[i - gap] = imo->imo_membership[i]; } imo->imo_num_memberships -= gap; } INP_WUNLOCK(inp); } INP_INFO_WUNLOCK(pcbinfo); } /* * Lookup a PCB based on the local address and port. Caller must hold the * hash lock. No inpcb locks or references are acquired. 
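 *
 * The wildcard branch below is easiest to read as a cost function:
 * every wildcard component of a candidate adds 1 to its score, a
 * v6-mapped candidate adds INP_LOOKUP_MAPPED_PCB_COST (3) so that an
 * exact-family match always wins, and the candidate with the lowest
 * score is returned, with 0 ending the scan early.  Schematically:
 *
 *	wildcard = 0;
 *	if (v6_mapped(inp))		(hypothetical test)
 *		wildcard += 3;		(INP_LOOKUP_MAPPED_PCB_COST)
 *	if (inp->inp_faddr.s_addr != INADDR_ANY)
 *		wildcard++;
 *	if (exactly one of inp->inp_laddr, laddr is INADDR_ANY)
 *		wildcard++;
 *	if (wildcard < matchwild) {
 *		match = inp;
 *		matchwild = wildcard;
 *	}
 *
 * With matchwild starting at 3 + 3 when INET6 is compiled in, even a
 * fully wild v6-mapped PCB (score 5) remains eligible, but anything
 * more specific beats it.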
*/ #define INP_LOOKUP_MAPPED_PCB_COST 3 struct inpcb * in_pcblookup_local(struct inpcbinfo *pcbinfo, struct in_addr laddr, u_short lport, int lookupflags, struct ucred *cred) { struct inpcb *inp; #ifdef INET6 int matchwild = 3 + INP_LOOKUP_MAPPED_PCB_COST; #else int matchwild = 3; #endif int wildcard; KASSERT((lookupflags & ~(INPLOOKUP_WILDCARD)) == 0, ("%s: invalid lookup flags %d", __func__, lookupflags)); INP_HASH_LOCK_ASSERT(pcbinfo); if ((lookupflags & INPLOOKUP_WILDCARD) == 0) { struct inpcbhead *head; /* * Look for an unconnected (wildcard foreign addr) PCB that * matches the local address and port we're looking for. */ head = &pcbinfo->ipi_hashbase[INP_PCBHASH(INADDR_ANY, lport, 0, pcbinfo->ipi_hashmask)]; CK_LIST_FOREACH(inp, head, inp_hash) { #ifdef INET6 /* XXX inp locking */ if ((inp->inp_vflag & INP_IPV4) == 0) continue; #endif if (inp->inp_faddr.s_addr == INADDR_ANY && inp->inp_laddr.s_addr == laddr.s_addr && inp->inp_lport == lport) { /* * Found? */ if (cred == NULL || prison_equal_ip4(cred->cr_prison, inp->inp_cred->cr_prison)) return (inp); } } /* * Not found. */ return (NULL); } else { struct inpcbporthead *porthash; struct inpcbport *phd; struct inpcb *match = NULL; /* * Best fit PCB lookup. * * First see if this local port is in use by looking on the * port hash list. */ porthash = &pcbinfo->ipi_porthashbase[INP_PCBPORTHASH(lport, pcbinfo->ipi_porthashmask)]; CK_LIST_FOREACH(phd, porthash, phd_hash) { if (phd->phd_port == lport) break; } if (phd != NULL) { /* * Port is in use by one or more PCBs. Look for best * fit. */ CK_LIST_FOREACH(inp, &phd->phd_pcblist, inp_portlist) { wildcard = 0; if (cred != NULL && !prison_equal_ip4(inp->inp_cred->cr_prison, cred->cr_prison)) continue; #ifdef INET6 /* XXX inp locking */ if ((inp->inp_vflag & INP_IPV4) == 0) continue; /* * We never select the PCB that has * INP_IPV6 flag and is bound to :: if * we have another PCB which is bound * to 0.0.0.0. If a PCB has the * INP_IPV6 flag, then we set its cost * higher than IPv4 only PCBs. * * Note that the case only happens * when a socket is bound to ::, under * the condition that the use of the * mapped address is allowed. */ if ((inp->inp_vflag & INP_IPV6) != 0) wildcard += INP_LOOKUP_MAPPED_PCB_COST; #endif if (inp->inp_faddr.s_addr != INADDR_ANY) wildcard++; if (inp->inp_laddr.s_addr != INADDR_ANY) { if (laddr.s_addr == INADDR_ANY) wildcard++; else if (inp->inp_laddr.s_addr != laddr.s_addr) continue; } else { if (laddr.s_addr != INADDR_ANY) wildcard++; } if (wildcard < matchwild) { match = inp; matchwild = wildcard; if (matchwild == 0) break; } } } return (match); } } #undef INP_LOOKUP_MAPPED_PCB_COST static struct inpcb * in_pcblookup_lbgroup(const struct inpcbinfo *pcbinfo, const struct in_addr *laddr, uint16_t lport, const struct in_addr *faddr, uint16_t fport, int lookupflags) { struct inpcb *local_wild; const struct inpcblbgrouphead *hdr; struct inpcblbgroup *grp; uint32_t idx; INP_HASH_LOCK_ASSERT(pcbinfo); hdr = &pcbinfo->ipi_lbgrouphashbase[ INP_PCBPORTHASH(lport, pcbinfo->ipi_lbgrouphashmask)]; /* * Order of socket selection: * 1. non-wild. * 2. wild (if lookupflags contains INPLOOKUP_WILDCARD). 
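The wildcard branch of in_pcblookup_local() above is a best-fit search: every wildcard component of a candidate adds one to its cost (plus the fixed INP_LOOKUP_MAPPED_PCB_COST penalty for IPv6-mapped sockets), the lowest cost wins, and a cost of zero ends the scan early. Stripped of jails and the IPv6 penalty, the scoring loop reduces to this sketch (hypothetical cand type; 'any' stands in for INADDR_ANY):

struct cand { unsigned faddr, laddr; };

static struct cand *
best_fit(struct cand *tbl, int n, unsigned laddr, unsigned any)
{
        struct cand *match = NULL;
        int i, wildcard, matchwild = 3;

        for (i = 0; i < n; i++) {
                wildcard = 0;
                if (tbl[i].faddr != any)
                        wildcard++;             /* connected PCB */
                if (tbl[i].laddr != any) {
                        if (laddr == any)
                                wildcard++;
                        else if (tbl[i].laddr != laddr)
                                continue;       /* bound elsewhere: no match */
                } else if (laddr != any)
                        wildcard++;
                if (wildcard < matchwild) {
                        match = &tbl[i];
                        matchwild = wildcard;
                        if (matchwild == 0)
                                break;          /* exact match, stop early */
                }
        }
        return (match);
}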
* * NOTE: * - Load balanced group does not contain jailed sockets * - Load balanced group does not contain IPv4 mapped INET6 wild sockets */ local_wild = NULL; CK_LIST_FOREACH(grp, hdr, il_list) { #ifdef INET6 if (!(grp->il_vflag & INP_IPV4)) continue; #endif if (grp->il_lport != lport) continue; idx = INP_PCBLBGROUP_PKTHASH(faddr->s_addr, lport, fport) % grp->il_inpcnt; if (grp->il_laddr.s_addr == laddr->s_addr) return (grp->il_inp[idx]); if (grp->il_laddr.s_addr == INADDR_ANY && (lookupflags & INPLOOKUP_WILDCARD) != 0) local_wild = grp->il_inp[idx]; } return (local_wild); } #ifdef PCBGROUP /* * Lookup PCB in hash list, using pcbgroup tables. */ static struct inpcb * in_pcblookup_group(struct inpcbinfo *pcbinfo, struct inpcbgroup *pcbgroup, struct in_addr faddr, u_int fport_arg, struct in_addr laddr, u_int lport_arg, int lookupflags, struct ifnet *ifp) { struct inpcbhead *head; struct inpcb *inp, *tmpinp; u_short fport = fport_arg, lport = lport_arg; bool locked; /* * First look for an exact match. */ tmpinp = NULL; INP_GROUP_LOCK(pcbgroup); head = &pcbgroup->ipg_hashbase[INP_PCBHASH(faddr.s_addr, lport, fport, pcbgroup->ipg_hashmask)]; CK_LIST_FOREACH(inp, head, inp_pcbgrouphash) { #ifdef INET6 /* XXX inp locking */ if ((inp->inp_vflag & INP_IPV4) == 0) continue; #endif if (inp->inp_faddr.s_addr == faddr.s_addr && inp->inp_laddr.s_addr == laddr.s_addr && inp->inp_fport == fport && inp->inp_lport == lport) { /* * XXX We should be able to directly return * the inp here, without any checks. * Well unless both bound with SO_REUSEPORT? */ if (prison_flag(inp->inp_cred, PR_IP4)) goto found; if (tmpinp == NULL) tmpinp = inp; } } if (tmpinp != NULL) { inp = tmpinp; goto found; } #ifdef RSS /* * For incoming connections, we may wish to do a wildcard * match for an RSS-local socket. */ if ((lookupflags & INPLOOKUP_WILDCARD) != 0) { struct inpcb *local_wild = NULL, *local_exact = NULL; #ifdef INET6 struct inpcb *local_wild_mapped = NULL; #endif struct inpcb *jail_wild = NULL; struct inpcbhead *head; int injail; /* * Order of socket selection - we always prefer jails. * 1. jailed, non-wild. * 2. jailed, wild. * 3. non-jailed, non-wild. * 4. non-jailed, wild. */ head = &pcbgroup->ipg_hashbase[INP_PCBHASH(INADDR_ANY, lport, 0, pcbgroup->ipg_hashmask)]; CK_LIST_FOREACH(inp, head, inp_pcbgrouphash) { #ifdef INET6 /* XXX inp locking */ if ((inp->inp_vflag & INP_IPV4) == 0) continue; #endif if (inp->inp_faddr.s_addr != INADDR_ANY || inp->inp_lport != lport) continue; injail = prison_flag(inp->inp_cred, PR_IP4); if (injail) { if (prison_check_ip4(inp->inp_cred, &laddr) != 0) continue; } else { if (local_exact != NULL) continue; } if (inp->inp_laddr.s_addr == laddr.s_addr) { if (injail) goto found; else local_exact = inp; } else if (inp->inp_laddr.s_addr == INADDR_ANY) { #ifdef INET6 /* XXX inp locking, NULL check */ if (inp->inp_vflag & INP_IPV6PROTO) local_wild_mapped = inp; else #endif if (injail) jail_wild = inp; else local_wild = inp; } } /* LIST_FOREACH */ inp = jail_wild; if (inp == NULL) inp = local_exact; if (inp == NULL) inp = local_wild; #ifdef INET6 if (inp == NULL) inp = local_wild_mapped; #endif if (inp != NULL) goto found; } #endif /* * Then look for a wildcard match, if requested. */ if ((lookupflags & INPLOOKUP_WILDCARD) != 0) { struct inpcb *local_wild = NULL, *local_exact = NULL; #ifdef INET6 struct inpcb *local_wild_mapped = NULL; #endif struct inpcb *jail_wild = NULL; struct inpcbhead *head; int injail; /* * Order of socket selection - we always prefer jails. * 1. jailed, non-wild. * 2. 
jailed, wild. * 3. non-jailed, non-wild. * 4. non-jailed, wild. */ head = &pcbinfo->ipi_wildbase[INP_PCBHASH(INADDR_ANY, lport, 0, pcbinfo->ipi_wildmask)]; CK_LIST_FOREACH(inp, head, inp_pcbgroup_wild) { #ifdef INET6 /* XXX inp locking */ if ((inp->inp_vflag & INP_IPV4) == 0) continue; #endif if (inp->inp_faddr.s_addr != INADDR_ANY || inp->inp_lport != lport) continue; injail = prison_flag(inp->inp_cred, PR_IP4); if (injail) { if (prison_check_ip4(inp->inp_cred, &laddr) != 0) continue; } else { if (local_exact != NULL) continue; } if (inp->inp_laddr.s_addr == laddr.s_addr) { if (injail) goto found; else local_exact = inp; } else if (inp->inp_laddr.s_addr == INADDR_ANY) { #ifdef INET6 /* XXX inp locking, NULL check */ if (inp->inp_vflag & INP_IPV6PROTO) local_wild_mapped = inp; else #endif if (injail) jail_wild = inp; else local_wild = inp; } } /* LIST_FOREACH */ inp = jail_wild; if (inp == NULL) inp = local_exact; if (inp == NULL) inp = local_wild; #ifdef INET6 if (inp == NULL) inp = local_wild_mapped; #endif if (inp != NULL) goto found; } /* if (lookupflags & INPLOOKUP_WILDCARD) */ INP_GROUP_UNLOCK(pcbgroup); return (NULL); found: if (lookupflags & INPLOOKUP_WLOCKPCB) locked = INP_TRY_WLOCK(inp); else if (lookupflags & INPLOOKUP_RLOCKPCB) locked = INP_TRY_RLOCK(inp); else panic("%s: locking bug", __func__); if (__predict_false(locked && (inp->inp_flags2 & INP_FREED))) { if (lookupflags & INPLOOKUP_WLOCKPCB) INP_WUNLOCK(inp); else INP_RUNLOCK(inp); return (NULL); } else if (!locked) in_pcbref(inp); INP_GROUP_UNLOCK(pcbgroup); if (!locked) { if (lookupflags & INPLOOKUP_WLOCKPCB) { INP_WLOCK(inp); if (in_pcbrele_wlocked(inp)) return (NULL); } else { INP_RLOCK(inp); if (in_pcbrele_rlocked(inp)) return (NULL); } } #ifdef INVARIANTS if (lookupflags & INPLOOKUP_WLOCKPCB) INP_WLOCK_ASSERT(inp); else INP_RLOCK_ASSERT(inp); #endif return (inp); } #endif /* PCBGROUP */ /* * Lookup PCB in hash list, using pcbinfo tables. This variation assumes * that the caller has locked the hash list, and will not perform any further * locking or reference operations on either the hash list or the connection. */ static struct inpcb * in_pcblookup_hash_locked(struct inpcbinfo *pcbinfo, struct in_addr faddr, u_int fport_arg, struct in_addr laddr, u_int lport_arg, int lookupflags, struct ifnet *ifp) { struct inpcbhead *head; struct inpcb *inp, *tmpinp; u_short fport = fport_arg, lport = lport_arg; #ifdef INVARIANTS KASSERT((lookupflags & ~(INPLOOKUP_WILDCARD)) == 0, ("%s: invalid lookup flags %d", __func__, lookupflags)); if (!mtx_owned(&pcbinfo->ipi_hash_lock)) MPASS(in_epoch_verbose(net_epoch_preempt, 1)); #endif /* * First look for an exact match. */ tmpinp = NULL; head = &pcbinfo->ipi_hashbase[INP_PCBHASH(faddr.s_addr, lport, fport, pcbinfo->ipi_hashmask)]; CK_LIST_FOREACH(inp, head, inp_hash) { #ifdef INET6 /* XXX inp locking */ if ((inp->inp_vflag & INP_IPV4) == 0) continue; #endif if (inp->inp_faddr.s_addr == faddr.s_addr && inp->inp_laddr.s_addr == laddr.s_addr && inp->inp_fport == fport && inp->inp_lport == lport) { /* * XXX We should be able to directly return * the inp here, without any checks. * Well unless both bound with SO_REUSEPORT? */ if (prison_flag(inp->inp_cred, PR_IP4)) return (inp); if (tmpinp == NULL) tmpinp = inp; } } if (tmpinp != NULL) return (tmpinp); /* * Then look in lb group (for wildcard match). 
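The found: label above illustrates the lock-or-reference dance used when the inpcb lock cannot be taken while the group lock is still held: try-lock first, and on failure take a reference, drop the outer lock, block on the inner lock, then drop the reference and re-validate the object. A simplified userland analogue with pthreads; the kernel version additionally frees the object when the dropped reference was the last one, via in_pcbrele_wlocked():

#include <pthread.h>
#include <stdbool.h>

struct obj {
        pthread_mutex_t o_lock;
        int             o_refs;         /* protected by table_lock */
        bool            o_freed;        /* torn down while we waited? */
};

static pthread_mutex_t table_lock = PTHREAD_MUTEX_INITIALIZER;

/* Called with table_lock held and 'o' found in the table. */
static struct obj *
lock_found_obj(struct obj *o)
{
        if (pthread_mutex_trylock(&o->o_lock) == 0) {
                pthread_mutex_unlock(&table_lock);      /* fast path */
        } else {
                o->o_refs++;                            /* pin across unlock */
                pthread_mutex_unlock(&table_lock);
                pthread_mutex_lock(&o->o_lock);         /* may block */
                pthread_mutex_lock(&table_lock);
                o->o_refs--;                            /* unpin */
                pthread_mutex_unlock(&table_lock);
        }
        if (o->o_freed) {                               /* re-validate */
                pthread_mutex_unlock(&o->o_lock);
                return (NULL);
        }
        return (o);
}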
*/ if ((lookupflags & INPLOOKUP_WILDCARD) != 0) { inp = in_pcblookup_lbgroup(pcbinfo, &laddr, lport, &faddr, fport, lookupflags); if (inp != NULL) return (inp); } /* * Then look for a wildcard match, if requested. */ if ((lookupflags & INPLOOKUP_WILDCARD) != 0) { struct inpcb *local_wild = NULL, *local_exact = NULL; #ifdef INET6 struct inpcb *local_wild_mapped = NULL; #endif struct inpcb *jail_wild = NULL; int injail; /* * Order of socket selection - we always prefer jails. * 1. jailed, non-wild. * 2. jailed, wild. * 3. non-jailed, non-wild. * 4. non-jailed, wild. */ head = &pcbinfo->ipi_hashbase[INP_PCBHASH(INADDR_ANY, lport, 0, pcbinfo->ipi_hashmask)]; CK_LIST_FOREACH(inp, head, inp_hash) { #ifdef INET6 /* XXX inp locking */ if ((inp->inp_vflag & INP_IPV4) == 0) continue; #endif if (inp->inp_faddr.s_addr != INADDR_ANY || inp->inp_lport != lport) continue; injail = prison_flag(inp->inp_cred, PR_IP4); if (injail) { if (prison_check_ip4(inp->inp_cred, &laddr) != 0) continue; } else { if (local_exact != NULL) continue; } if (inp->inp_laddr.s_addr == laddr.s_addr) { if (injail) return (inp); else local_exact = inp; } else if (inp->inp_laddr.s_addr == INADDR_ANY) { #ifdef INET6 /* XXX inp locking, NULL check */ if (inp->inp_vflag & INP_IPV6PROTO) local_wild_mapped = inp; else #endif if (injail) jail_wild = inp; else local_wild = inp; } } /* LIST_FOREACH */ if (jail_wild != NULL) return (jail_wild); if (local_exact != NULL) return (local_exact); if (local_wild != NULL) return (local_wild); #ifdef INET6 if (local_wild_mapped != NULL) return (local_wild_mapped); #endif } /* if ((lookupflags & INPLOOKUP_WILDCARD) != 0) */ return (NULL); } /* * Lookup PCB in hash list, using pcbinfo tables. This variation locks the * hash list lock, and will return the inpcb locked (i.e., requires * INPLOOKUP_LOCKPCB). */ static struct inpcb * in_pcblookup_hash(struct inpcbinfo *pcbinfo, struct in_addr faddr, u_int fport, struct in_addr laddr, u_int lport, int lookupflags, struct ifnet *ifp) { struct inpcb *inp; INP_HASH_RLOCK(pcbinfo); inp = in_pcblookup_hash_locked(pcbinfo, faddr, fport, laddr, lport, (lookupflags & ~(INPLOOKUP_RLOCKPCB | INPLOOKUP_WLOCKPCB)), ifp); if (inp != NULL) { if (lookupflags & INPLOOKUP_WLOCKPCB) { INP_WLOCK(inp); if (__predict_false(inp->inp_flags2 & INP_FREED)) { INP_WUNLOCK(inp); inp = NULL; } } else if (lookupflags & INPLOOKUP_RLOCKPCB) { INP_RLOCK(inp); if (__predict_false(inp->inp_flags2 & INP_FREED)) { INP_RUNLOCK(inp); inp = NULL; } } else panic("%s: locking bug", __func__); #ifdef INVARIANTS if (inp != NULL) { if (lookupflags & INPLOOKUP_WLOCKPCB) INP_WLOCK_ASSERT(inp); else INP_RLOCK_ASSERT(inp); } #endif } INP_HASH_RUNLOCK(pcbinfo); return (inp); } /* * Public inpcb lookup routines, accepting a 4-tuple, and optionally, an mbuf * from which a pre-calculated hash value may be extracted. * * Possibly more of this logic should be in in_pcbgroup.c. */ struct inpcb * in_pcblookup(struct inpcbinfo *pcbinfo, struct in_addr faddr, u_int fport, struct in_addr laddr, u_int lport, int lookupflags, struct ifnet *ifp) { #if defined(PCBGROUP) && !defined(RSS) struct inpcbgroup *pcbgroup; #endif KASSERT((lookupflags & ~INPLOOKUP_MASK) == 0, ("%s: invalid lookup flags %d", __func__, lookupflags)); KASSERT((lookupflags & (INPLOOKUP_RLOCKPCB | INPLOOKUP_WLOCKPCB)) != 0, ("%s: LOCKPCB not set", __func__)); /* * When not using RSS, use connection groups in preference to the * reservation table when looking up 4-tuples. 
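The load-balance group lookup called above (in_pcblookup_lbgroup(), defined earlier) is stateless: the 4-tuple hash taken modulo the group's member count selects the socket, so every segment of a given flow lands on the same listener without any per-flow state. A toy version; the mixing function here is a placeholder for INP_PCBLBGROUP_PKTHASH(), not the kernel's hash:

static unsigned
pkt_hash(unsigned faddr, unsigned short lport, unsigned short fport)
{
        /* placeholder mix, illustrative only */
        return (faddr ^ ((unsigned)lport << 16) ^ fport);
}

static int
lb_pick_member(unsigned faddr, unsigned short lport, unsigned short fport,
    int inpcnt)
{
        /* same flow => same index, as long as inpcnt is stable */
        return (pkt_hash(faddr, lport, fport) % inpcnt);
}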
When using RSS, just * use the reservation table, due to the cost of the Toeplitz hash * in software. * * XXXRW: This policy belongs in the pcbgroup code, as in principle * we could be doing RSS with a non-Toeplitz hash that is affordable * in software. */ #if defined(PCBGROUP) && !defined(RSS) if (in_pcbgroup_enabled(pcbinfo)) { pcbgroup = in_pcbgroup_bytuple(pcbinfo, laddr, lport, faddr, fport); return (in_pcblookup_group(pcbinfo, pcbgroup, faddr, fport, laddr, lport, lookupflags, ifp)); } #endif return (in_pcblookup_hash(pcbinfo, faddr, fport, laddr, lport, lookupflags, ifp)); } struct inpcb * in_pcblookup_mbuf(struct inpcbinfo *pcbinfo, struct in_addr faddr, u_int fport, struct in_addr laddr, u_int lport, int lookupflags, struct ifnet *ifp, struct mbuf *m) { #ifdef PCBGROUP struct inpcbgroup *pcbgroup; #endif KASSERT((lookupflags & ~INPLOOKUP_MASK) == 0, ("%s: invalid lookup flags %d", __func__, lookupflags)); KASSERT((lookupflags & (INPLOOKUP_RLOCKPCB | INPLOOKUP_WLOCKPCB)) != 0, ("%s: LOCKPCB not set", __func__)); #ifdef PCBGROUP /* * If we can use a hardware-generated hash to look up the connection * group, use that connection group to find the inpcb. Otherwise * fall back on a software hash -- or the reservation table if we're * using RSS. * * XXXRW: As above, that policy belongs in the pcbgroup code. */ if (in_pcbgroup_enabled(pcbinfo) && !(M_HASHTYPE_TEST(m, M_HASHTYPE_NONE))) { pcbgroup = in_pcbgroup_byhash(pcbinfo, M_HASHTYPE_GET(m), m->m_pkthdr.flowid); if (pcbgroup != NULL) return (in_pcblookup_group(pcbinfo, pcbgroup, faddr, fport, laddr, lport, lookupflags, ifp)); #ifndef RSS pcbgroup = in_pcbgroup_bytuple(pcbinfo, laddr, lport, faddr, fport); return (in_pcblookup_group(pcbinfo, pcbgroup, faddr, fport, laddr, lport, lookupflags, ifp)); #endif } #endif return (in_pcblookup_hash(pcbinfo, faddr, fport, laddr, lport, lookupflags, ifp)); } #endif /* INET */ /* * Insert PCB onto various hash lists. */ static int in_pcbinshash_internal(struct inpcb *inp, int do_pcbgroup_update) { struct inpcbhead *pcbhash; struct inpcbporthead *pcbporthash; struct inpcbinfo *pcbinfo = inp->inp_pcbinfo; struct inpcbport *phd; u_int32_t hashkey_faddr; int so_options; INP_WLOCK_ASSERT(inp); INP_HASH_WLOCK_ASSERT(pcbinfo); KASSERT((inp->inp_flags & INP_INHASHLIST) == 0, ("in_pcbinshash: INP_INHASHLIST")); #ifdef INET6 if (inp->inp_vflag & INP_IPV6) hashkey_faddr = INP6_PCBHASHKEY(&inp->in6p_faddr); else #endif hashkey_faddr = inp->inp_faddr.s_addr; pcbhash = &pcbinfo->ipi_hashbase[INP_PCBHASH(hashkey_faddr, inp->inp_lport, inp->inp_fport, pcbinfo->ipi_hashmask)]; pcbporthash = &pcbinfo->ipi_porthashbase[ INP_PCBPORTHASH(inp->inp_lport, pcbinfo->ipi_porthashmask)]; /* * Add entry to load balance group. * Only do this if SO_REUSEPORT_LB is set. */ so_options = inp_so_options(inp); if (so_options & SO_REUSEPORT_LB) { int ret = in_pcbinslbgrouphash(inp); if (ret) { /* pcb lb group malloc fail (ret=ENOBUFS). */ return (ret); } } /* * Go through port list and look for a head for this lport. */ CK_LIST_FOREACH(phd, pcbporthash, phd_hash) { if (phd->phd_port == inp->inp_lport) break; } /* * If none exists, malloc one and tack it on. 
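in_pcblookup_mbuf() above prefers a hardware-computed flow hash carried on the mbuf and only falls back to hashing the 4-tuple in software when none is present. The guard reduces to this sketch, with simplified, hypothetical types:

struct pkt {
        int      p_hashtype;    /* 0 == no hardware hash */
        unsigned p_flowid;      /* NIC-supplied RSS hash, if any */
};

static unsigned
flow_hash(const struct pkt *p, unsigned faddr, unsigned short fport,
    unsigned short lport,
    unsigned (*soft)(unsigned, unsigned short, unsigned short))
{
        if (p->p_hashtype != 0)
                return (p->p_flowid);           /* trust the NIC */
        return (soft(faddr, fport, lport));     /* software fallback */
}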
*/ if (phd == NULL) { phd = malloc(sizeof(struct inpcbport), M_PCB, M_NOWAIT); if (phd == NULL) { return (ENOBUFS); /* XXX */ } bzero(&phd->phd_epoch_ctx, sizeof(struct epoch_context)); phd->phd_port = inp->inp_lport; CK_LIST_INIT(&phd->phd_pcblist); CK_LIST_INSERT_HEAD(pcbporthash, phd, phd_hash); } inp->inp_phd = phd; CK_LIST_INSERT_HEAD(&phd->phd_pcblist, inp, inp_portlist); CK_LIST_INSERT_HEAD(pcbhash, inp, inp_hash); inp->inp_flags |= INP_INHASHLIST; #ifdef PCBGROUP if (do_pcbgroup_update) in_pcbgroup_update(inp); #endif return (0); } /* * For now, there are two public interfaces to insert an inpcb into the hash * lists -- one that does update pcbgroups, and one that doesn't. The latter * is used only in the TCP syncache, where in_pcbinshash is called before the * full 4-tuple is set for the inpcb, and we don't want to install in the * pcbgroup until later. * * XXXRW: This seems like a misfeature. in_pcbinshash should always update * connection groups, and partially initialised inpcbs should not be exposed * to either reservation hash tables or pcbgroups. */ int in_pcbinshash(struct inpcb *inp) { return (in_pcbinshash_internal(inp, 1)); } int in_pcbinshash_nopcbgroup(struct inpcb *inp) { return (in_pcbinshash_internal(inp, 0)); } /* * Move PCB to the proper hash bucket when { faddr, fport } have been * changed. NOTE: This does not handle the case of the lport changing (the * hashed port list would have to be updated as well), so the lport must * not change after in_pcbinshash() has been called. */ void in_pcbrehash_mbuf(struct inpcb *inp, struct mbuf *m) { struct inpcbinfo *pcbinfo = inp->inp_pcbinfo; struct inpcbhead *head; u_int32_t hashkey_faddr; INP_WLOCK_ASSERT(inp); INP_HASH_WLOCK_ASSERT(pcbinfo); KASSERT(inp->inp_flags & INP_INHASHLIST, ("in_pcbrehash: !INP_INHASHLIST")); #ifdef INET6 if (inp->inp_vflag & INP_IPV6) hashkey_faddr = INP6_PCBHASHKEY(&inp->in6p_faddr); else #endif hashkey_faddr = inp->inp_faddr.s_addr; head = &pcbinfo->ipi_hashbase[INP_PCBHASH(hashkey_faddr, inp->inp_lport, inp->inp_fport, pcbinfo->ipi_hashmask)]; CK_LIST_REMOVE(inp, inp_hash); CK_LIST_INSERT_HEAD(head, inp, inp_hash); #ifdef PCBGROUP if (m != NULL) in_pcbgroup_update_mbuf(inp, m); else in_pcbgroup_update(inp); #endif } void in_pcbrehash(struct inpcb *inp) { in_pcbrehash_mbuf(inp, NULL); } /* * Remove PCB from various lists. */ static void in_pcbremlists(struct inpcb *inp) { struct inpcbinfo *pcbinfo = inp->inp_pcbinfo; #ifdef INVARIANTS if (pcbinfo == &V_tcbinfo) { INP_INFO_RLOCK_ASSERT(pcbinfo); } else { INP_INFO_WLOCK_ASSERT(pcbinfo); } #endif INP_WLOCK_ASSERT(inp); INP_LIST_WLOCK_ASSERT(pcbinfo); inp->inp_gencnt = ++pcbinfo->ipi_gencnt; if (inp->inp_flags & INP_INHASHLIST) { struct inpcbport *phd = inp->inp_phd; INP_HASH_WLOCK(pcbinfo); /* XXX: Only do if SO_REUSEPORT_LB set? */ in_pcbremlbgrouphash(inp); CK_LIST_REMOVE(inp, inp_hash); CK_LIST_REMOVE(inp, inp_portlist); if (CK_LIST_FIRST(&phd->phd_pcblist) == NULL) { CK_LIST_REMOVE(phd, phd_hash); epoch_call(net_epoch_preempt, &phd->phd_epoch_ctx, inpcbport_free); } INP_HASH_WUNLOCK(pcbinfo); inp->inp_flags &= ~INP_INHASHLIST; } CK_LIST_REMOVE(inp, inp_list); pcbinfo->ipi_count--; #ifdef PCBGROUP in_pcbgroup_remove(inp); #endif } /* * Check for alternatives when higher level complains * about service problems. For now, invalidate cached * routing information. If the route was created dynamically * (by a redirect), time to try a default gateway again. 
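The port-list handling in in_pcbinshash_internal() above is a find-or-create over an intrusive list: scan the bucket for a head matching the port, and allocate one only when the scan comes up empty. The same shape over a plain singly-linked list (illustrative sketch):

#include <stdlib.h>
#include <string.h>

struct porthead {
        unsigned short   ph_port;
        struct porthead *ph_next;
};

static struct porthead *
port_find_or_create(struct porthead **bucket, unsigned short port)
{
        struct porthead *ph;

        for (ph = *bucket; ph != NULL; ph = ph->ph_next)
                if (ph->ph_port == port)
                        return (ph);            /* port already has a head */

        ph = malloc(sizeof(*ph));
        if (ph == NULL)
                return (NULL);                  /* caller maps to ENOBUFS */
        memset(ph, 0, sizeof(*ph));
        ph->ph_port = port;
        ph->ph_next = *bucket;                  /* insert at bucket head */
        *bucket = ph;
        return (ph);
}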
*/ void in_losing(struct inpcb *inp) { RO_INVALIDATE_CACHE(&inp->inp_route); return; } /* * A set label operation has occurred at the socket layer, propagate the * label change into the in_pcb for the socket. */ void in_pcbsosetlabel(struct socket *so) { #ifdef MAC struct inpcb *inp; inp = sotoinpcb(so); KASSERT(inp != NULL, ("in_pcbsosetlabel: so->so_pcb == NULL")); INP_WLOCK(inp); SOCK_LOCK(so); mac_inpcb_sosetlabel(so, inp); SOCK_UNLOCK(so); INP_WUNLOCK(inp); #endif } /* * ipport_tick runs once per second, determining if random port allocation * should be continued. If more than ipport_randomcps ports have been * allocated in the last second, then we return to sequential port * allocation. We return to random allocation only once we drop below * ipport_randomcps for at least ipport_randomtime seconds. */ static void ipport_tick(void *xtp) { VNET_ITERATOR_DECL(vnet_iter); VNET_LIST_RLOCK_NOSLEEP(); VNET_FOREACH(vnet_iter) { CURVNET_SET(vnet_iter); /* XXX appease INVARIANTS here */ if (V_ipport_tcpallocs <= V_ipport_tcplastcount + V_ipport_randomcps) { if (V_ipport_stoprandom > 0) V_ipport_stoprandom--; } else V_ipport_stoprandom = V_ipport_randomtime; V_ipport_tcplastcount = V_ipport_tcpallocs; CURVNET_RESTORE(); } VNET_LIST_RUNLOCK_NOSLEEP(); callout_reset(&ipport_tick_callout, hz, ipport_tick, NULL); } static void ip_fini(void *xtp) { callout_stop(&ipport_tick_callout); } /* * The ipport_callout should start running at about the time we attach the * inet or inet6 domains. */ static void ipport_tick_init(const void *unused __unused) { /* Start ipport_tick. */ callout_init(&ipport_tick_callout, 1); callout_reset(&ipport_tick_callout, 1, ipport_tick, NULL); EVENTHANDLER_REGISTER(shutdown_pre_sync, ip_fini, NULL, SHUTDOWN_PRI_DEFAULT); } SYSINIT(ipport_tick_init, SI_SUB_PROTO_DOMAIN, SI_ORDER_MIDDLE, ipport_tick_init, NULL); void inp_wlock(struct inpcb *inp) { INP_WLOCK(inp); } void inp_wunlock(struct inpcb *inp) { INP_WUNLOCK(inp); } void inp_rlock(struct inpcb *inp) { INP_RLOCK(inp); } void inp_runlock(struct inpcb *inp) { INP_RUNLOCK(inp); } #ifdef INVARIANT_SUPPORT void inp_lock_assert(struct inpcb *inp) { INP_WLOCK_ASSERT(inp); } void inp_unlock_assert(struct inpcb *inp) { INP_UNLOCK_ASSERT(inp); } #endif void inp_apply_all(void (*func)(struct inpcb *, void *), void *arg) { struct inpcb *inp; INP_INFO_WLOCK(&V_tcbinfo); CK_LIST_FOREACH(inp, V_tcbinfo.ipi_listhead, inp_list) { INP_WLOCK(inp); func(inp, arg); INP_WUNLOCK(inp); } INP_INFO_WUNLOCK(&V_tcbinfo); } struct socket * inp_inpcbtosocket(struct inpcb *inp) { INP_WLOCK_ASSERT(inp); return (inp->inp_socket); } struct tcpcb * inp_inpcbtotcpcb(struct inpcb *inp) { INP_WLOCK_ASSERT(inp); return ((struct tcpcb *)inp->inp_ppcb); } int inp_ip_tos_get(const struct inpcb *inp) { return (inp->inp_ip_tos); } void inp_ip_tos_set(struct inpcb *inp, int val) { inp->inp_ip_tos = val; } void inp_4tuple_get(struct inpcb *inp, uint32_t *laddr, uint16_t *lp, uint32_t *faddr, uint16_t *fp) { INP_LOCK_ASSERT(inp); *laddr = inp->inp_laddr.s_addr; *faddr = inp->inp_faddr.s_addr; *lp = inp->inp_lport; *fp = inp->inp_fport; } struct inpcb * so_sotoinpcb(struct socket *so) { return (sotoinpcb(so)); } struct tcpcb * so_sototcpcb(struct socket *so) { return (sototcpcb(so)); } /* * Create an external-format (``xinpcb'') structure using the information in * the kernel-format in_pcb structure pointed to by inp. 
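ipport_tick() above implements a simple hysteresis: if more than ipport_randomcps ports were allocated in the past second, randomized allocation is suspended for ipport_randomtime seconds, and each quiet second decrements the hold-down. The per-vnet decision, extracted into a standalone sketch:

/*
 * Run once per second; 'allocs' is the cumulative allocation counter.
 * Returns nonzero while randomized port allocation remains enabled.
 */
static int
port_random_tick(unsigned allocs, unsigned *lastcount, unsigned randomcps,
    int randomtime, int *stoprandom)
{
        if (allocs <= *lastcount + randomcps) {
                if (*stoprandom > 0)
                        (*stoprandom)--;        /* a quiet second passed */
        } else
                *stoprandom = randomtime;       /* too hot: re-arm hold-down */
        *lastcount = allocs;
        return (*stoprandom == 0);
}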
This is done to * reduce the spew of irrelevant information over this interface, to isolate * user code from changes in the kernel structure, and potentially to provide * information-hiding if we decide that some of this information should be * hidden from users. */ void in_pcbtoxinpcb(const struct inpcb *inp, struct xinpcb *xi) { bzero(xi, sizeof(*xi)); xi->xi_len = sizeof(struct xinpcb); if (inp->inp_socket) sotoxsocket(inp->inp_socket, &xi->xi_socket); bcopy(&inp->inp_inc, &xi->inp_inc, sizeof(struct in_conninfo)); xi->inp_gencnt = inp->inp_gencnt; xi->inp_ppcb = (uintptr_t)inp->inp_ppcb; xi->inp_flow = inp->inp_flow; xi->inp_flowid = inp->inp_flowid; xi->inp_flowtype = inp->inp_flowtype; xi->inp_flags = inp->inp_flags; xi->inp_flags2 = inp->inp_flags2; xi->inp_rss_listen_bucket = inp->inp_rss_listen_bucket; xi->in6p_cksum = inp->in6p_cksum; xi->in6p_hops = inp->in6p_hops; xi->inp_ip_tos = inp->inp_ip_tos; xi->inp_vflag = inp->inp_vflag; xi->inp_ip_ttl = inp->inp_ip_ttl; xi->inp_ip_p = inp->inp_ip_p; xi->inp_ip_minttl = inp->inp_ip_minttl; } #ifdef DDB static void db_print_indent(int indent) { int i; for (i = 0; i < indent; i++) db_printf(" "); } static void db_print_inconninfo(struct in_conninfo *inc, const char *name, int indent) { char faddr_str[48], laddr_str[48]; db_print_indent(indent); db_printf("%s at %p\n", name, inc); indent += 2; #ifdef INET6 if (inc->inc_flags & INC_ISIPV6) { /* IPv6. */ ip6_sprintf(laddr_str, &inc->inc6_laddr); ip6_sprintf(faddr_str, &inc->inc6_faddr); } else #endif { /* IPv4. */ inet_ntoa_r(inc->inc_laddr, laddr_str); inet_ntoa_r(inc->inc_faddr, faddr_str); } db_print_indent(indent); db_printf("inc_laddr %s inc_lport %u\n", laddr_str, ntohs(inc->inc_lport)); db_print_indent(indent); db_printf("inc_faddr %s inc_fport %u\n", faddr_str, ntohs(inc->inc_fport)); } static void db_print_inpflags(int inp_flags) { int comma; comma = 0; if (inp_flags & INP_RECVOPTS) { db_printf("%sINP_RECVOPTS", comma ? ", " : ""); comma = 1; } if (inp_flags & INP_RECVRETOPTS) { db_printf("%sINP_RECVRETOPTS", comma ? ", " : ""); comma = 1; } if (inp_flags & INP_RECVDSTADDR) { db_printf("%sINP_RECVDSTADDR", comma ? ", " : ""); comma = 1; } if (inp_flags & INP_ORIGDSTADDR) { db_printf("%sINP_ORIGDSTADDR", comma ? ", " : ""); comma = 1; } if (inp_flags & INP_HDRINCL) { db_printf("%sINP_HDRINCL", comma ? ", " : ""); comma = 1; } if (inp_flags & INP_HIGHPORT) { db_printf("%sINP_HIGHPORT", comma ? ", " : ""); comma = 1; } if (inp_flags & INP_LOWPORT) { db_printf("%sINP_LOWPORT", comma ? ", " : ""); comma = 1; } if (inp_flags & INP_ANONPORT) { db_printf("%sINP_ANONPORT", comma ? ", " : ""); comma = 1; } if (inp_flags & INP_RECVIF) { db_printf("%sINP_RECVIF", comma ? ", " : ""); comma = 1; } if (inp_flags & INP_MTUDISC) { db_printf("%sINP_MTUDISC", comma ? ", " : ""); comma = 1; } if (inp_flags & INP_RECVTTL) { db_printf("%sINP_RECVTTL", comma ? ", " : ""); comma = 1; } if (inp_flags & INP_DONTFRAG) { db_printf("%sINP_DONTFRAG", comma ? ", " : ""); comma = 1; } if (inp_flags & INP_RECVTOS) { db_printf("%sINP_RECVTOS", comma ? ", " : ""); comma = 1; } if (inp_flags & IN6P_IPV6_V6ONLY) { db_printf("%sIN6P_IPV6_V6ONLY", comma ? ", " : ""); comma = 1; } if (inp_flags & IN6P_PKTINFO) { db_printf("%sIN6P_PKTINFO", comma ? ", " : ""); comma = 1; } if (inp_flags & IN6P_HOPLIMIT) { db_printf("%sIN6P_HOPLIMIT", comma ? ", " : ""); comma = 1; } if (inp_flags & IN6P_HOPOPTS) { db_printf("%sIN6P_HOPOPTS", comma ? 
", " : ""); comma = 1; } if (inp_flags & IN6P_DSTOPTS) { db_printf("%sIN6P_DSTOPTS", comma ? ", " : ""); comma = 1; } if (inp_flags & IN6P_RTHDR) { db_printf("%sIN6P_RTHDR", comma ? ", " : ""); comma = 1; } if (inp_flags & IN6P_RTHDRDSTOPTS) { db_printf("%sIN6P_RTHDRDSTOPTS", comma ? ", " : ""); comma = 1; } if (inp_flags & IN6P_TCLASS) { db_printf("%sIN6P_TCLASS", comma ? ", " : ""); comma = 1; } if (inp_flags & IN6P_AUTOFLOWLABEL) { db_printf("%sIN6P_AUTOFLOWLABEL", comma ? ", " : ""); comma = 1; } if (inp_flags & INP_TIMEWAIT) { db_printf("%sINP_TIMEWAIT", comma ? ", " : ""); comma = 1; } if (inp_flags & INP_ONESBCAST) { db_printf("%sINP_ONESBCAST", comma ? ", " : ""); comma = 1; } if (inp_flags & INP_DROPPED) { db_printf("%sINP_DROPPED", comma ? ", " : ""); comma = 1; } if (inp_flags & INP_SOCKREF) { db_printf("%sINP_SOCKREF", comma ? ", " : ""); comma = 1; } if (inp_flags & IN6P_RFC2292) { db_printf("%sIN6P_RFC2292", comma ? ", " : ""); comma = 1; } if (inp_flags & IN6P_MTU) { db_printf("IN6P_MTU%s", comma ? ", " : ""); comma = 1; } } static void db_print_inpvflag(u_char inp_vflag) { int comma; comma = 0; if (inp_vflag & INP_IPV4) { db_printf("%sINP_IPV4", comma ? ", " : ""); comma = 1; } if (inp_vflag & INP_IPV6) { db_printf("%sINP_IPV6", comma ? ", " : ""); comma = 1; } if (inp_vflag & INP_IPV6PROTO) { db_printf("%sINP_IPV6PROTO", comma ? ", " : ""); comma = 1; } } static void db_print_inpcb(struct inpcb *inp, const char *name, int indent) { db_print_indent(indent); db_printf("%s at %p\n", name, inp); indent += 2; db_print_indent(indent); db_printf("inp_flow: 0x%x\n", inp->inp_flow); db_print_inconninfo(&inp->inp_inc, "inp_conninfo", indent); db_print_indent(indent); db_printf("inp_ppcb: %p inp_pcbinfo: %p inp_socket: %p\n", inp->inp_ppcb, inp->inp_pcbinfo, inp->inp_socket); db_print_indent(indent); db_printf("inp_label: %p inp_flags: 0x%x (", inp->inp_label, inp->inp_flags); db_print_inpflags(inp->inp_flags); db_printf(")\n"); db_print_indent(indent); db_printf("inp_sp: %p inp_vflag: 0x%x (", inp->inp_sp, inp->inp_vflag); db_print_inpvflag(inp->inp_vflag); db_printf(")\n"); db_print_indent(indent); db_printf("inp_ip_ttl: %d inp_ip_p: %d inp_ip_minttl: %d\n", inp->inp_ip_ttl, inp->inp_ip_p, inp->inp_ip_minttl); db_print_indent(indent); #ifdef INET6 if (inp->inp_vflag & INP_IPV6) { db_printf("in6p_options: %p in6p_outputopts: %p " "in6p_moptions: %p\n", inp->in6p_options, inp->in6p_outputopts, inp->in6p_moptions); db_printf("in6p_icmp6filt: %p in6p_cksum %d " "in6p_hops %u\n", inp->in6p_icmp6filt, inp->in6p_cksum, inp->in6p_hops); } else #endif { db_printf("inp_ip_tos: %d inp_ip_options: %p " "inp_ip_moptions: %p\n", inp->inp_ip_tos, inp->inp_options, inp->inp_moptions); } db_print_indent(indent); db_printf("inp_phd: %p inp_gencnt: %ju\n", inp->inp_phd, (uintmax_t)inp->inp_gencnt); } DB_SHOW_COMMAND(inpcb, db_show_inpcb) { struct inpcb *inp; if (!have_addr) { db_printf("usage: show inpcb \n"); return; } inp = (struct inpcb *)addr; db_print_inpcb(inp, "inpcb", 0); } #endif /* DDB */ #ifdef RATELIMIT /* * Modify TX rate limit based on the existing "inp->inp_snd_tag", * if any. 
*/ int in_pcbmodify_txrtlmt(struct inpcb *inp, uint32_t max_pacing_rate) { union if_snd_tag_modify_params params = { .rate_limit.max_rate = max_pacing_rate, }; struct m_snd_tag *mst; struct ifnet *ifp; int error; mst = inp->inp_snd_tag; if (mst == NULL) return (EINVAL); ifp = mst->ifp; if (ifp == NULL) return (EINVAL); if (ifp->if_snd_tag_modify == NULL) { error = EOPNOTSUPP; } else { error = ifp->if_snd_tag_modify(mst, ¶ms); } return (error); } /* * Query existing TX rate limit based on the existing * "inp->inp_snd_tag", if any. */ int in_pcbquery_txrtlmt(struct inpcb *inp, uint32_t *p_max_pacing_rate) { union if_snd_tag_query_params params = { }; struct m_snd_tag *mst; struct ifnet *ifp; int error; mst = inp->inp_snd_tag; if (mst == NULL) return (EINVAL); ifp = mst->ifp; if (ifp == NULL) return (EINVAL); if (ifp->if_snd_tag_query == NULL) { error = EOPNOTSUPP; } else { error = ifp->if_snd_tag_query(mst, ¶ms); if (error == 0 && p_max_pacing_rate != NULL) *p_max_pacing_rate = params.rate_limit.max_rate; } return (error); } /* * Query existing TX queue level based on the existing * "inp->inp_snd_tag", if any. */ int in_pcbquery_txrlevel(struct inpcb *inp, uint32_t *p_txqueue_level) { union if_snd_tag_query_params params = { }; struct m_snd_tag *mst; struct ifnet *ifp; int error; mst = inp->inp_snd_tag; if (mst == NULL) return (EINVAL); ifp = mst->ifp; if (ifp == NULL) return (EINVAL); if (ifp->if_snd_tag_query == NULL) return (EOPNOTSUPP); error = ifp->if_snd_tag_query(mst, ¶ms); if (error == 0 && p_txqueue_level != NULL) *p_txqueue_level = params.rate_limit.queue_level; return (error); } /* * Allocate a new TX rate limit send tag from the network interface * given by the "ifp" argument and save it in "inp->inp_snd_tag": */ int in_pcbattach_txrtlmt(struct inpcb *inp, struct ifnet *ifp, uint32_t flowtype, uint32_t flowid, uint32_t max_pacing_rate) { union if_snd_tag_alloc_params params = { .rate_limit.hdr.type = (max_pacing_rate == -1U) ? IF_SND_TAG_TYPE_UNLIMITED : IF_SND_TAG_TYPE_RATE_LIMIT, .rate_limit.hdr.flowid = flowid, .rate_limit.hdr.flowtype = flowtype, .rate_limit.max_rate = max_pacing_rate, }; int error; INP_WLOCK_ASSERT(inp); if (inp->inp_snd_tag != NULL) return (EINVAL); if (ifp->if_snd_tag_alloc == NULL) { error = EOPNOTSUPP; } else { error = ifp->if_snd_tag_alloc(ifp, ¶ms, &inp->inp_snd_tag); /* * At success increment the refcount on * the send tag's network interface: */ if (error == 0) if_ref(inp->inp_snd_tag->ifp); } return (error); } /* * Free an existing TX rate limit tag based on the "inp->inp_snd_tag", * if any: */ void in_pcbdetach_txrtlmt(struct inpcb *inp) { struct m_snd_tag *mst; struct ifnet *ifp; INP_WLOCK_ASSERT(inp); mst = inp->inp_snd_tag; inp->inp_snd_tag = NULL; if (mst == NULL) return; ifp = mst->ifp; if (ifp == NULL) return; /* * If the device was detached while we still had reference(s) * on the ifp, we assume if_snd_tag_free() was replaced with * stubs. */ ifp->if_snd_tag_free(mst); /* release reference count on network interface */ if_rele(ifp); } /* * This function should be called when the INP_RATE_LIMIT_CHANGED flag * is set in the fast path and will attach/detach/modify the TX rate * limit send tag based on the socket's so_max_pacing_rate value. 
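The comment above describes in_pcboutput_txrtlmt(), which follows: the attach/detach/modify choice is a small decision table over the requested rate, whether a send tag is already attached, whether the interface advertises IFCAP_TXRTLMT, and whether the mbuf carries an RSS hash yet. Expressed as a standalone function with symbolic actions -- a sketch of the control flow, not the kernel API:

enum action { DO_NOTHING, DO_DETACH, DO_WAIT_HASH, DO_ATTACH, DO_MODIFY };

static enum action
rtlmt_decide(unsigned rate, int have_tag, int ifcap_rtlmt, int have_hash)
{
        if (rate == 0 && !have_tag)
                return (DO_NOTHING);    /* nothing asked, nothing attached */
        if (!ifcap_rtlmt)               /* interface cannot pace */
                return (have_tag ? DO_DETACH : DO_NOTHING);
        if (!have_tag)                  /* need a valid RSS hash to attach */
                return (have_hash ? DO_ATTACH : DO_WAIT_HASH);
        return (DO_MODIFY);             /* tag exists: adjust its rate */
}

In the real function, DO_WAIT_HASH corresponds to returning EAGAIN and leaving INP_RATE_LIMIT_CHANGED set so that a later transmit retries the attach.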
*/ void in_pcboutput_txrtlmt(struct inpcb *inp, struct ifnet *ifp, struct mbuf *mb) { struct socket *socket; uint32_t max_pacing_rate; bool did_upgrade; int error; if (inp == NULL) return; socket = inp->inp_socket; if (socket == NULL) return; if (!INP_WLOCKED(inp)) { /* * NOTE: If the write locking fails, we need to bail * out and use the non-ratelimited ring for the * transmit until there is a new chance to get the * write lock. */ if (!INP_TRY_UPGRADE(inp)) return; did_upgrade = 1; } else { did_upgrade = 0; } /* * NOTE: The so_max_pacing_rate value is read unlocked, * because atomic updates are not required since the variable * is checked at every mbuf we send. It is assumed that the * variable read itself will be atomic. */ max_pacing_rate = socket->so_max_pacing_rate; /* * NOTE: When attaching to a network interface a reference is * made to ensure the network interface doesn't go away until * all ratelimit connections are gone. The network interface * pointers compared below represent valid network interfaces, * except when comparing towards NULL. */ if (max_pacing_rate == 0 && inp->inp_snd_tag == NULL) { error = 0; } else if (!(ifp->if_capenable & IFCAP_TXRTLMT)) { if (inp->inp_snd_tag != NULL) in_pcbdetach_txrtlmt(inp); error = 0; } else if (inp->inp_snd_tag == NULL) { /* * In order to utilize packet pacing with RSS, we need * to wait until there is a valid RSS hash before we * can proceed: */ if (M_HASHTYPE_GET(mb) == M_HASHTYPE_NONE) { error = EAGAIN; } else { error = in_pcbattach_txrtlmt(inp, ifp, M_HASHTYPE_GET(mb), mb->m_pkthdr.flowid, max_pacing_rate); } } else { error = in_pcbmodify_txrtlmt(inp, max_pacing_rate); } if (error == 0 || error == EOPNOTSUPP) inp->inp_flags2 &= ~INP_RATE_LIMIT_CHANGED; if (did_upgrade) INP_DOWNGRADE(inp); } /* * Track route changes for TX rate limiting. */ void in_pcboutput_eagain(struct inpcb *inp) { struct socket *socket; bool did_upgrade; if (inp == NULL) return; socket = inp->inp_socket; if (socket == NULL) return; if (inp->inp_snd_tag == NULL) return; if (!INP_WLOCKED(inp)) { /* * NOTE: If the write locking fails, we need to bail * out and use the non-ratelimited ring for the * transmit until there is a new chance to get the * write lock. */ if (!INP_TRY_UPGRADE(inp)) return; did_upgrade = 1; } else { did_upgrade = 0; } /* detach rate limiting */ in_pcbdetach_txrtlmt(inp); /* make sure new mbuf send tag allocation is made */ inp->inp_flags2 |= INP_RATE_LIMIT_CHANGED; if (did_upgrade) INP_DOWNGRADE(inp); } #endif /* RATELIMIT */ Index: projects/import-googletest-1.8.1/sys/netinet/tcp_timewait.c =================================================================== --- projects/import-googletest-1.8.1/sys/netinet/tcp_timewait.c (revision 344270) +++ projects/import-googletest-1.8.1/sys/netinet/tcp_timewait.c (revision 344271) @@ -1,769 +1,769 @@ /*- * SPDX-License-Identifier: BSD-3-Clause * * Copyright (c) 1982, 1986, 1988, 1990, 1993, 1995 * The Regents of the University of California. All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * 3. 
Neither the name of the University nor the names of its contributors * may be used to endorse or promote products derived from this software * without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * @(#)tcp_subr.c 8.2 (Berkeley) 5/24/95 */ #include __FBSDID("$FreeBSD$"); #include "opt_inet.h" #include "opt_inet6.h" #include "opt_tcpdebug.h" #include #include #include #include #include #include #include #include #include #include #include #ifndef INVARIANTS #include #endif #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #ifdef INET6 #include #include #include #include #include #endif #include #include #include #include #include #ifdef INET6 #include #endif #include #ifdef TCPDEBUG #include #endif #ifdef INET6 #include #endif #include #include VNET_DEFINE_STATIC(uma_zone_t, tcptw_zone); #define V_tcptw_zone VNET(tcptw_zone) static int maxtcptw; /* * The timed wait queue contains references to each of the TCP sessions * currently in the TIME_WAIT state. The queue pointers, including the * queue pointers in each tcptw structure, are protected using the global * timewait lock, which must be held over queue iteration and modification. * * Rules on tcptw usage: * - a inpcb is always freed _after_ its tcptw * - a tcptw relies on its inpcb reference counting for memory stability * - a tcptw is dereferenceable only while its inpcb is locked */ VNET_DEFINE_STATIC(TAILQ_HEAD(, tcptw), twq_2msl); #define V_twq_2msl VNET(twq_2msl) /* Global timewait lock */ VNET_DEFINE_STATIC(struct rwlock, tw_lock); #define V_tw_lock VNET(tw_lock) #define TW_LOCK_INIT(tw, d) rw_init_flags(&(tw), (d), 0) #define TW_LOCK_DESTROY(tw) rw_destroy(&(tw)) #define TW_RLOCK(tw) rw_rlock(&(tw)) #define TW_WLOCK(tw) rw_wlock(&(tw)) #define TW_RUNLOCK(tw) rw_runlock(&(tw)) #define TW_WUNLOCK(tw) rw_wunlock(&(tw)) #define TW_LOCK_ASSERT(tw) rw_assert(&(tw), RA_LOCKED) #define TW_RLOCK_ASSERT(tw) rw_assert(&(tw), RA_RLOCKED) #define TW_WLOCK_ASSERT(tw) rw_assert(&(tw), RA_WLOCKED) #define TW_UNLOCK_ASSERT(tw) rw_assert(&(tw), RA_UNLOCKED) static void tcp_tw_2msl_reset(struct tcptw *, int); static void tcp_tw_2msl_stop(struct tcptw *, int); static int tcp_twrespond(struct tcptw *, int); static int tcptw_auto_size(void) { int halfrange; /* * Max out at half the ephemeral port range so that TIME_WAIT * sockets don't tie up too many ephemeral ports. */ if (V_ipport_lastauto > V_ipport_firstauto) halfrange = (V_ipport_lastauto - V_ipport_firstauto) / 2; else halfrange = (V_ipport_firstauto - V_ipport_lastauto) / 2; /* Protect against goofy port ranges smaller than 32. 
*/ return (imin(imax(halfrange, 32), maxsockets / 5)); } static int sysctl_maxtcptw(SYSCTL_HANDLER_ARGS) { int error, new; if (maxtcptw == 0) new = tcptw_auto_size(); else new = maxtcptw; error = sysctl_handle_int(oidp, &new, 0, req); if (error == 0 && req->newptr) if (new >= 32) { maxtcptw = new; uma_zone_set_max(V_tcptw_zone, maxtcptw); } return (error); } SYSCTL_PROC(_net_inet_tcp, OID_AUTO, maxtcptw, CTLTYPE_INT|CTLFLAG_RW, &maxtcptw, 0, sysctl_maxtcptw, "IU", "Maximum number of compressed TCP TIME_WAIT entries"); VNET_DEFINE_STATIC(int, nolocaltimewait) = 0; #define V_nolocaltimewait VNET(nolocaltimewait) SYSCTL_INT(_net_inet_tcp, OID_AUTO, nolocaltimewait, CTLFLAG_VNET | CTLFLAG_RW, &VNET_NAME(nolocaltimewait), 0, "Do not create compressed TCP TIME_WAIT entries for local connections"); void tcp_tw_zone_change(void) { if (maxtcptw == 0) uma_zone_set_max(V_tcptw_zone, tcptw_auto_size()); } void tcp_tw_init(void) { V_tcptw_zone = uma_zcreate("tcptw", sizeof(struct tcptw), NULL, NULL, NULL, NULL, UMA_ALIGN_PTR, 0); TUNABLE_INT_FETCH("net.inet.tcp.maxtcptw", &maxtcptw); if (maxtcptw == 0) uma_zone_set_max(V_tcptw_zone, tcptw_auto_size()); else uma_zone_set_max(V_tcptw_zone, maxtcptw); TAILQ_INIT(&V_twq_2msl); TW_LOCK_INIT(V_tw_lock, "tcptw"); } #ifdef VIMAGE void tcp_tw_destroy(void) { struct tcptw *tw; struct epoch_tracker et; INP_INFO_RLOCK_ET(&V_tcbinfo, et); while ((tw = TAILQ_FIRST(&V_twq_2msl)) != NULL) tcp_twclose(tw, 0); INP_INFO_RUNLOCK_ET(&V_tcbinfo, et); TW_LOCK_DESTROY(V_tw_lock); uma_zdestroy(V_tcptw_zone); } #endif /* * Move a TCP connection into TIME_WAIT state. * tcbinfo is locked. * inp is locked, and is unlocked before returning. */ void tcp_twstart(struct tcpcb *tp) { struct tcptw twlocal, *tw; struct inpcb *inp = tp->t_inpcb; struct socket *so; uint32_t recwin; bool acknow, local; #ifdef INET6 bool isipv6 = inp->inp_inc.inc_flags & INC_ISIPV6; #endif INP_INFO_RLOCK_ASSERT(&V_tcbinfo); INP_WLOCK_ASSERT(inp); /* A dropped inp should never transition to TIME_WAIT state. */ KASSERT((inp->inp_flags & INP_DROPPED) == 0, ("tcp_twstart: " "(inp->inp_flags & INP_DROPPED) != 0")); if (V_nolocaltimewait) { #ifdef INET6 if (isipv6) local = in6_localaddr(&inp->in6p_faddr); else #endif #ifdef INET local = in_localip(inp->inp_faddr); #else local = false; #endif } else local = false; /* * For use only by DTrace. We do not reference the state * after this point so modifying it in place is not a problem. */ tcp_state_change(tp, TCPS_TIME_WAIT); if (local) tw = &twlocal; else tw = uma_zalloc(V_tcptw_zone, M_NOWAIT); if (tw == NULL) { /* * Reached limit on total number of TIMEWAIT connections * allowed. Remove a connection from TIMEWAIT queue in LRU * fashion to make room for this connection. * * XXX: Check if it possible to always have enough room * in advance based on guarantees provided by uma_zalloc(). */ tw = tcp_tw_2msl_scan(1); if (tw == NULL) { tp = tcp_close(tp); if (tp != NULL) INP_WUNLOCK(inp); return; } } /* * For !local case the tcptw will hold a reference on its inpcb * until tcp_twclose is called. */ tw->tw_inpcb = inp; /* * Recover last window size sent. 
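tcptw_auto_size() above caps the compressed TIME_WAIT zone at half the ephemeral port range, clamped to no less than 32 and no more than a fifth of maxsockets. The same arithmetic as a standalone sketch:

static int
imax2(int a, int b) { return (a > b ? a : b); }
static int
imin2(int a, int b) { return (a < b ? a : b); }

static int
tw_auto_size(int port_first, int port_last, int maxsockets)
{
        int halfrange;

        /* half the ephemeral range, whichever way the bounds run */
        if (port_last > port_first)
                halfrange = (port_last - port_first) / 2;
        else
                halfrange = (port_first - port_last) / 2;
        /* never below 32, never above a fifth of all sockets */
        return (imin2(imax2(halfrange, 32), maxsockets / 5));
}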
*/ so = inp->inp_socket; recwin = lmin(lmax(sbspace(&so->so_rcv), 0), (long)TCP_MAXWIN << tp->rcv_scale); if (recwin < (so->so_rcv.sb_hiwat / 4) && recwin < tp->t_maxseg) recwin = 0; if (SEQ_GT(tp->rcv_adv, tp->rcv_nxt) && recwin < (tp->rcv_adv - tp->rcv_nxt)) recwin = (tp->rcv_adv - tp->rcv_nxt); - tw->last_win = htons((u_short)(recwin >> tp->rcv_scale)); + tw->last_win = (u_short)(recwin >> tp->rcv_scale); /* * Set t_recent if timestamps are used on the connection. */ if ((tp->t_flags & (TF_REQ_TSTMP|TF_RCVD_TSTMP|TF_NOOPT)) == (TF_REQ_TSTMP|TF_RCVD_TSTMP)) { tw->t_recent = tp->ts_recent; tw->ts_offset = tp->ts_offset; } else { tw->t_recent = 0; tw->ts_offset = 0; } tw->snd_nxt = tp->snd_nxt; tw->rcv_nxt = tp->rcv_nxt; tw->iss = tp->iss; tw->irs = tp->irs; tw->t_starttime = tp->t_starttime; tw->tw_time = 0; /* XXX * If this code will * be used for fin-wait-2 state also, then we may need * a ts_recent from the last segment. */ acknow = tp->t_flags & TF_ACKNOW; /* * First, discard tcpcb state, which includes stopping its timers and * freeing it. tcp_discardcb() used to also release the inpcb, but * that work is now done in the caller. * * Note: soisdisconnected() call used to be made in tcp_discardcb(), * and might not be needed here any longer. */ tcp_discardcb(tp); soisdisconnected(so); tw->tw_so_options = so->so_options; inp->inp_flags |= INP_TIMEWAIT; if (acknow) tcp_twrespond(tw, TH_ACK); if (local) in_pcbdrop(inp); else { in_pcbref(inp); /* Reference from tw */ tw->tw_cred = crhold(so->so_cred); inp->inp_ppcb = tw; TCPSTATES_INC(TCPS_TIME_WAIT); tcp_tw_2msl_reset(tw, 0); } /* * If the inpcb owns the sole reference to the socket, then we can * detach and free the socket as it is not needed in time wait. */ if (inp->inp_flags & INP_SOCKREF) { KASSERT(so->so_state & SS_PROTOREF, ("tcp_twstart: !SS_PROTOREF")); inp->inp_flags &= ~INP_SOCKREF; INP_WUNLOCK(inp); SOCK_LOCK(so); so->so_state &= ~SS_PROTOREF; sofree(so); } else INP_WUNLOCK(inp); } /* * Returns 1 if the TIME_WAIT state was killed and we should start over, * looking for a pcb in the listen state. Returns 0 otherwise. */ int tcp_twcheck(struct inpcb *inp, struct tcpopt *to __unused, struct tcphdr *th, struct mbuf *m, int tlen) { struct tcptw *tw; int thflags; tcp_seq seq; INP_INFO_RLOCK_ASSERT(&V_tcbinfo); INP_WLOCK_ASSERT(inp); /* * XXXRW: Time wait state for inpcb has been recycled, but inpcb is * still present. This is undesirable, but temporarily necessary * until we work out how to handle inpcb's who's timewait state has * been removed. */ tw = intotw(inp); if (tw == NULL) goto drop; thflags = th->th_flags; /* * NOTE: for FIN_WAIT_2 (to be added later), * must validate sequence number before accepting RST */ /* * If the segment contains RST: * Drop the segment - see Stevens, vol. 2, p. 964 and * RFC 1337. */ if (thflags & TH_RST) goto drop; #if 0 /* PAWS not needed at the moment */ /* * RFC 1323 PAWS: If we have a timestamp reply on this segment * and it's less than ts_recent, drop it. */ if ((to.to_flags & TOF_TS) != 0 && tp->ts_recent && TSTMP_LT(to.to_tsval, tp->ts_recent)) { if ((thflags & TH_ACK) == 0) goto drop; goto ack; } /* * ts_recent is never updated because we never accept new segments. */ #endif /* * If a new connection request is received * while in TIME_WAIT, drop the old connection * and start over if the sequence numbers * are above the previous ones. */ if ((thflags & TH_SYN) && SEQ_GT(th->th_seq, tw->rcv_nxt)) { tcp_twclose(tw, 0); return (1); } /* * Drop the segment if it does not contain an ACK. 
*/ if ((thflags & TH_ACK) == 0) goto drop; /* * Reset the 2MSL timer if this is a duplicate FIN. */ if (thflags & TH_FIN) { seq = th->th_seq + tlen + (thflags & TH_SYN ? 1 : 0); if (seq + 1 == tw->rcv_nxt) tcp_tw_2msl_reset(tw, 1); } /* * Acknowledge the segment if it has data or is not a duplicate ACK. */ if (thflags != TH_ACK || tlen != 0 || th->th_seq != tw->rcv_nxt || th->th_ack != tw->snd_nxt) { TCP_PROBE5(receive, NULL, NULL, m, NULL, th); tcp_twrespond(tw, TH_ACK); goto dropnoprobe; } drop: TCP_PROBE5(receive, NULL, NULL, m, NULL, th); dropnoprobe: INP_WUNLOCK(inp); m_freem(m); return (0); } void tcp_twclose(struct tcptw *tw, int reuse) { struct socket *so; struct inpcb *inp; /* * At this point, we are in one of two situations: * * (1) We have no socket, just an inpcb<->twtcp pair. We can free * all state. * * (2) We have a socket -- if we own a reference, release it and * notify the socket layer. */ inp = tw->tw_inpcb; KASSERT((inp->inp_flags & INP_TIMEWAIT), ("tcp_twclose: !timewait")); KASSERT(intotw(inp) == tw, ("tcp_twclose: inp_ppcb != tw")); INP_INFO_RLOCK_ASSERT(&V_tcbinfo); /* in_pcbfree() */ INP_WLOCK_ASSERT(inp); tcp_tw_2msl_stop(tw, reuse); inp->inp_ppcb = NULL; in_pcbdrop(inp); so = inp->inp_socket; if (so != NULL) { /* * If there's a socket, handle two cases: first, we own a * strong reference, which we will now release, or we don't * in which case another reference exists (XXXRW: think * about this more), and we don't need to take action. */ if (inp->inp_flags & INP_SOCKREF) { inp->inp_flags &= ~INP_SOCKREF; INP_WUNLOCK(inp); SOCK_LOCK(so); KASSERT(so->so_state & SS_PROTOREF, ("tcp_twclose: INP_SOCKREF && !SS_PROTOREF")); so->so_state &= ~SS_PROTOREF; sofree(so); } else { /* * If we don't own the only reference, the socket and * inpcb need to be left around to be handled by * tcp_usr_detach() later. */ INP_WUNLOCK(inp); } } else { /* * The socket has been already cleaned-up for us, only free the * inpcb. */ in_pcbfree(inp); } TCPSTAT_INC(tcps_closed); } static int tcp_twrespond(struct tcptw *tw, int flags) { struct inpcb *inp = tw->tw_inpcb; #if defined(INET6) || defined(INET) struct tcphdr *th = NULL; #endif struct mbuf *m; #ifdef INET struct ip *ip = NULL; #endif u_int hdrlen, optlen; int error = 0; /* Keep compiler happy */ struct tcpopt to; #ifdef INET6 struct ip6_hdr *ip6 = NULL; int isipv6 = inp->inp_inc.inc_flags & INC_ISIPV6; #endif hdrlen = 0; /* Keep compiler happy */ INP_WLOCK_ASSERT(inp); m = m_gethdr(M_NOWAIT, MT_DATA); if (m == NULL) return (ENOBUFS); m->m_data += max_linkhdr; #ifdef MAC mac_inpcb_create_mbuf(inp, m); #endif #ifdef INET6 if (isipv6) { hdrlen = sizeof(struct ip6_hdr) + sizeof(struct tcphdr); ip6 = mtod(m, struct ip6_hdr *); th = (struct tcphdr *)(ip6 + 1); tcpip_fillheaders(inp, ip6, th); } #endif #if defined(INET6) && defined(INET) else #endif #ifdef INET { hdrlen = sizeof(struct tcpiphdr); ip = mtod(m, struct ip *); th = (struct tcphdr *)(ip + 1); tcpip_fillheaders(inp, ip, th); } #endif to.to_flags = 0; /* * Send a timestamp and echo-reply if both our side and our peer * have sent timestamps in our SYN's and this is not a RST. 
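tcp_twcheck() above re-ACKs anything that is not an exact duplicate of what the peer already holds: any payload, any flag combination other than a bare ACK, or sequence/ack numbers disagreeing with the recorded rcv_nxt/snd_nxt. As a predicate (sketch):

#define TH_ACK  0x10                    /* as in <netinet/tcp.h> */

/* Nonzero when a TIME_WAIT endpoint should respond with an ACK. */
static int
tw_should_ack(int thflags, int tlen, unsigned seq, unsigned ack,
    unsigned rcv_nxt, unsigned snd_nxt)
{
        return (thflags != TH_ACK || tlen != 0 || seq != rcv_nxt ||
            ack != snd_nxt);
}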
*/ if (tw->t_recent && flags == TH_ACK) { to.to_flags |= TOF_TS; to.to_tsval = tcp_ts_getticks() + tw->ts_offset; to.to_tsecr = tw->t_recent; } optlen = tcp_addoptions(&to, (u_char *)(th + 1)); m->m_len = hdrlen + optlen; m->m_pkthdr.len = m->m_len; KASSERT(max_linkhdr + m->m_len <= MHLEN, ("tcptw: mbuf too small")); th->th_seq = htonl(tw->snd_nxt); th->th_ack = htonl(tw->rcv_nxt); th->th_off = (sizeof(struct tcphdr) + optlen) >> 2; th->th_flags = flags; th->th_win = htons(tw->last_win); m->m_pkthdr.csum_data = offsetof(struct tcphdr, th_sum); #ifdef INET6 if (isipv6) { m->m_pkthdr.csum_flags = CSUM_TCP_IPV6; th->th_sum = in6_cksum_pseudo(ip6, sizeof(struct tcphdr) + optlen, IPPROTO_TCP, 0); ip6->ip6_hlim = in6_selecthlim(inp, NULL); TCP_PROBE5(send, NULL, NULL, ip6, NULL, th); error = ip6_output(m, inp->in6p_outputopts, NULL, (tw->tw_so_options & SO_DONTROUTE), NULL, NULL, inp); } #endif #if defined(INET6) && defined(INET) else #endif #ifdef INET { m->m_pkthdr.csum_flags = CSUM_TCP; th->th_sum = in_pseudo(ip->ip_src.s_addr, ip->ip_dst.s_addr, htons(sizeof(struct tcphdr) + optlen + IPPROTO_TCP)); ip->ip_len = htons(m->m_pkthdr.len); if (V_path_mtu_discovery) ip->ip_off |= htons(IP_DF); TCP_PROBE5(send, NULL, NULL, ip, NULL, th); error = ip_output(m, inp->inp_options, NULL, ((tw->tw_so_options & SO_DONTROUTE) ? IP_ROUTETOIF : 0), NULL, inp); } #endif if (flags & TH_ACK) TCPSTAT_INC(tcps_sndacks); else TCPSTAT_INC(tcps_sndctrl); TCPSTAT_INC(tcps_sndtotal); return (error); } static void tcp_tw_2msl_reset(struct tcptw *tw, int rearm) { INP_INFO_RLOCK_ASSERT(&V_tcbinfo); INP_WLOCK_ASSERT(tw->tw_inpcb); TW_WLOCK(V_tw_lock); if (rearm) TAILQ_REMOVE(&V_twq_2msl, tw, tw_2msl); tw->tw_time = ticks + 2 * tcp_msl; TAILQ_INSERT_TAIL(&V_twq_2msl, tw, tw_2msl); TW_WUNLOCK(V_tw_lock); } static void tcp_tw_2msl_stop(struct tcptw *tw, int reuse) { struct ucred *cred; struct inpcb *inp; int released __unused; INP_INFO_RLOCK_ASSERT(&V_tcbinfo); TW_WLOCK(V_tw_lock); inp = tw->tw_inpcb; tw->tw_inpcb = NULL; TAILQ_REMOVE(&V_twq_2msl, tw, tw_2msl); cred = tw->tw_cred; tw->tw_cred = NULL; TW_WUNLOCK(V_tw_lock); if (cred != NULL) crfree(cred); released = in_pcbrele_wlocked(inp); KASSERT(!released, ("%s: inp should not be released here", __func__)); if (!reuse) uma_zfree(V_tcptw_zone, tw); TCPSTATES_DEC(TCPS_TIME_WAIT); } struct tcptw * tcp_tw_2msl_scan(int reuse) { struct tcptw *tw; struct inpcb *inp; struct epoch_tracker et; #ifdef INVARIANTS if (reuse) { /* * Exclusive pcbinfo lock is not required in reuse case even if * two inpcb locks can be acquired simultaneously: * - the inpcb transitioning to TIME_WAIT state in * tcp_tw_start(), * - the inpcb closed by tcp_twclose(). * * It is because only inpcbs in FIN_WAIT2 or CLOSING states can * transition in TIME_WAIT state. Then a pcbcb cannot be in * TIME_WAIT list and transitioning to TIME_WAIT state at same * time. */ INP_INFO_RLOCK_ASSERT(&V_tcbinfo); } #endif for (;;) { TW_RLOCK(V_tw_lock); tw = TAILQ_FIRST(&V_twq_2msl); if (tw == NULL || (!reuse && (tw->tw_time - ticks) > 0)) { TW_RUNLOCK(V_tw_lock); break; } KASSERT(tw->tw_inpcb != NULL, ("%s: tw->tw_inpcb == NULL", __func__)); inp = tw->tw_inpcb; in_pcbref(inp); TW_RUNLOCK(V_tw_lock); INP_INFO_RLOCK_ET(&V_tcbinfo, et); INP_WLOCK(inp); tw = intotw(inp); if (in_pcbrele_wlocked(inp)) { if (__predict_true(tw == NULL)) { INP_INFO_RUNLOCK_ET(&V_tcbinfo, et); continue; } else { /* This should not happen as in TIMEWAIT * state the inp should not be destroyed * before its tcptw. If INVARIANTS is * defined panic. 
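Note that th_win is byte-swapped exactly once here at transmit time; this is why the change in tcp_twstart() above now stores tw->last_win in host order. With the old htons() at store time the value was swapped twice, so on little-endian machines the advertised window went out on the wire in host order. A small demonstration that a double swap is the identity:

#include <arpa/inet.h>
#include <assert.h>

int
main(void)
{
        unsigned short win = 0x1234;
        unsigned short once = htons(win);       /* correct wire value */
        unsigned short twice = htons(htons(win));

        assert(twice == win);   /* double swap == no swap at all */
        (void)once;
        return (0);
}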
*/ #ifdef INVARIANTS panic("%s: Panic before an infinite " "loop: INP_TIMEWAIT && (INP_FREED " "|| inp last reference) && tw != " "NULL", __func__); #else log(LOG_ERR, "%s: Avoid an infinite " "loop: INP_TIMEWAIT && (INP_FREED " "|| inp last reference) && tw != " "NULL", __func__); #endif INP_INFO_RUNLOCK_ET(&V_tcbinfo, et); break; } } if (tw == NULL) { /* tcp_twclose() has already been called */ INP_WUNLOCK(inp); INP_INFO_RUNLOCK_ET(&V_tcbinfo, et); continue; } tcp_twclose(tw, reuse); INP_INFO_RUNLOCK_ET(&V_tcbinfo, et); if (reuse) return tw; } return NULL; } Index: projects/import-googletest-1.8.1/sys/opencrypto/cbc_mac.c =================================================================== --- projects/import-googletest-1.8.1/sys/opencrypto/cbc_mac.c (nonexistent) +++ projects/import-googletest-1.8.1/sys/opencrypto/cbc_mac.c (revision 344271) @@ -0,0 +1,252 @@ +/* + * Copyright (c) 2018-2019 iXsystems Inc. All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * 1. Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * 2. Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in the + * documentation and/or other materials provided with the distribution. + * + * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR + * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES + * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. + * IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, + * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT + * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF + * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#include +__FBSDID("$FreeBSD$"); + +#include +#include +#include +#include +#include +#include + +/* + * Given two CCM_CBC_BLOCK_LEN blocks, xor + * them into dst, and then encrypt dst. + */ +static void +xor_and_encrypt(struct aes_cbc_mac_ctx *ctx, + const uint8_t *src, uint8_t *dst) +{ + const uint64_t *b1; + uint64_t *b2; + uint64_t temp_block[CCM_CBC_BLOCK_LEN/sizeof(uint64_t)]; + + b1 = (const uint64_t*)src; + b2 = (uint64_t*)dst; + + for (size_t count = 0; + count < CCM_CBC_BLOCK_LEN/sizeof(uint64_t); + count++) { + temp_block[count] = b1[count] ^ b2[count]; + } + rijndaelEncrypt(ctx->keysched, ctx->rounds, (void*)temp_block, dst); +} + +void +AES_CBC_MAC_Init(struct aes_cbc_mac_ctx *ctx) +{ + bzero(ctx, sizeof(*ctx)); +} + +void +AES_CBC_MAC_Setkey(struct aes_cbc_mac_ctx *ctx, const uint8_t *key, uint16_t klen) +{ + ctx->rounds = rijndaelKeySetupEnc(ctx->keysched, key, klen * 8); +} + +/* + * This is called to set the nonce, aka IV. + * Before this call, the authDataLength and cryptDataLength fields + * MUST have been set. Sadly, there's no way to return an error. + * + * The CBC-MAC algorithm requires that the first block contain the + * nonce, as well as information about the sizes and lengths involved. 
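xor_and_encrypt() above is the entire CBC-MAC round function: XOR the next block into the running value and push the result through the block cipher. With the cipher abstracted to a callback, a whole MAC over block-aligned data is just the following sketch (padding and AES specifics omitted):

#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define BLK 16

typedef void (*blkcipher_t)(const void *key, const uint8_t in[BLK],
    uint8_t out[BLK]);

/* CBC-MAC over nblocks full blocks; 'mac' carries the chaining value. */
static void
cbc_mac(blkcipher_t enc, const void *key, const uint8_t *msg,
    size_t nblocks, uint8_t mac[BLK])
{
        uint8_t tmp[BLK];
        size_t b, i;

        memset(mac, 0, BLK);                    /* zero IV */
        for (b = 0; b < nblocks; b++) {
                for (i = 0; i < BLK; i++)
                        tmp[i] = mac[i] ^ msg[b * BLK + i];
                enc(key, tmp, mac);             /* chain through the cipher */
        }
}

Raw CBC-MAC like this is only safe for fixed-length messages; CCM sidesteps the length-extension problem by encoding the message and nonce lengths into the first block (B0), which is exactly what AES_CBC_MAC_Reinit() below constructs.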
+ */ +void +AES_CBC_MAC_Reinit(struct aes_cbc_mac_ctx *ctx, const uint8_t *nonce, uint16_t nonceLen) +{ + uint8_t b0[CCM_CBC_BLOCK_LEN]; + uint8_t *bp = b0, flags = 0; + uint8_t L = 0; + uint64_t dataLength = ctx->cryptDataLength; + + KASSERT(ctx->authDataLength != 0 || ctx->cryptDataLength != 0, + ("Auth Data and Data lengths cannot both be 0")); + + KASSERT(nonceLen >= 7 && nonceLen <= 13, + ("nonceLen must be between 7 and 13 bytes")); + + ctx->nonce = nonce; + ctx->nonceLength = nonceLen; + + ctx->authDataCount = 0; + ctx->blockIndex = 0; + explicit_bzero(ctx->staging_block, sizeof(ctx->staging_block)); + + /* + * Need to determine the L field value. This is the number of + * bytes needed to specify the length of the message; the length + * is whatever is left in the 16 bytes after specifying flags and + * the nonce. + */ + L = 15 - nonceLen; + + flags = ((ctx->authDataLength > 0) << 6) + + (((AES_CBC_MAC_HASH_LEN - 2) / 2) << 3) + + L - 1; + /* + * Now we need to set up the first block, which has flags, nonce, + * and the message length. + */ + b0[0] = flags; + bcopy(nonce, b0 + 1, nonceLen); + bp = b0 + 1 + nonceLen; + + /* Copy the L [i.e., 15 - nonceLen] bytes of cryptDataLength, big-endian */ + for (uint8_t *dst = b0 + sizeof(b0) - 1; dst >= bp; dst--) { + *dst = dataLength; + dataLength >>= 8; + } + /* Now need to encrypt b0 */ + rijndaelEncrypt(ctx->keysched, ctx->rounds, b0, ctx->block); + /* If there is auth data, we need to set up the staging block */ + if (ctx->authDataLength) { + if (ctx->authDataLength < ((1<<16) - (1<<8))) { + uint16_t sizeVal = htobe16(ctx->authDataLength); + bcopy(&sizeVal, ctx->staging_block, sizeof(sizeVal)); + ctx->blockIndex = sizeof(sizeVal); + } else if (ctx->authDataLength < (1ULL<<32)) { + uint32_t sizeVal = htobe32(ctx->authDataLength); + ctx->staging_block[0] = 0xff; + ctx->staging_block[1] = 0xfe; + bcopy(&sizeVal, ctx->staging_block+2, sizeof(sizeVal)); + ctx->blockIndex = 2 + sizeof(sizeVal); + } else { + uint64_t sizeVal = htobe64(ctx->authDataLength); + ctx->staging_block[0] = 0xff; + ctx->staging_block[1] = 0xff; + bcopy(&sizeVal, ctx->staging_block+2, sizeof(sizeVal)); + ctx->blockIndex = 2 + sizeof(sizeVal); + } + } +} + +int +AES_CBC_MAC_Update(struct aes_cbc_mac_ctx *ctx, const uint8_t *data, + uint16_t length) +{ + size_t copy_amt; + + /* + * This will be called in one of two phases: + * (1) Applying authentication data, or + * (2) Applying the payload data. + * + * Because CBC-MAC puts the authentication data size before the + * data, subsequent calls won't be block-size-aligned, which + * complicates things a fair bit. + * + * The payload data doesn't have that problem. + */ + + if (ctx->authDataCount < ctx->authDataLength) { + /* + * We need to process data as authentication data. + * Since we may be out of sync, we may also need + * to pad out the staging block. + */ + const uint8_t *ptr = data; + while (length > 0) { + + copy_amt = MIN(length, + sizeof(ctx->staging_block) - ctx->blockIndex); + + bcopy(ptr, ctx->staging_block + ctx->blockIndex, + copy_amt); + ptr += copy_amt; + length -= copy_amt; + ctx->authDataCount += copy_amt; + ctx->blockIndex += copy_amt; + ctx->blockIndex %= sizeof(ctx->staging_block); + if (ctx->authDataCount == ctx->authDataLength) + length = 0; + if (ctx->blockIndex == 0 || + ctx->authDataCount >= ctx->authDataLength) { + /* + * We're done with this block, so we + * xor staging_block with block, and then + * encrypt it.
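
The staging-block prefix set up in AES_CBC_MAC_Reinit() above follows the AAD length encoding of RFC 3610 section 2.2. A hedged userland restatement of just that encoding; encode_aad_len() is a name made up for the sketch:

#include <stdint.h>
#include <stdio.h>

/* Encode l(a) per RFC 3610: 2, 6, or 10 bytes depending on magnitude. */
static size_t
encode_aad_len(uint8_t out[10], uint64_t alen)
{
	if (alen < 0xff00) {			/* (1 << 16) - (1 << 8) */
		out[0] = alen >> 8;
		out[1] = alen;
		return (2);
	} else if (alen < (1ULL << 32)) {
		out[0] = 0xff;
		out[1] = 0xfe;
		for (int i = 0; i < 4; i++)
			out[2 + i] = alen >> (24 - 8 * i);
		return (6);
	} else {
		out[0] = 0xff;
		out[1] = 0xff;
		for (int i = 0; i < 8; i++)
			out[2 + i] = alen >> (56 - 8 * i);
		return (10);
	}
}

int
main(void)
{
	uint8_t buf[10];
	size_t n = encode_aad_len(buf, 70000);	/* takes the 0xff 0xfe form */

	for (size_t i = 0; i < n; i++)
		printf("%02x ", buf[i]);
	printf("\n");
	return (0);
}
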
+ */ + xor_and_encrypt(ctx, ctx->staging_block, ctx->block); + bzero(ctx->staging_block, sizeof(ctx->staging_block)); + ctx->blockIndex = 0; + } + } + return (0); + } + /* + * If we're here, then we're encoding payload data. + * This is marginally easier, except that _Update can + * be called with non-aligned update lengths. As a result, + * we still need to use the staging block. + */ + KASSERT((length + ctx->cryptDataCount) <= ctx->cryptDataLength, + ("More encryption data than allowed")); + + while (length) { + uint8_t *ptr; + + copy_amt = MIN(sizeof(ctx->staging_block) - ctx->blockIndex, + length); + ptr = ctx->staging_block + ctx->blockIndex; + bcopy(data, ptr, copy_amt); + data += copy_amt; + ctx->blockIndex += copy_amt; + ctx->cryptDataCount += copy_amt; + length -= copy_amt; + if (ctx->blockIndex == sizeof(ctx->staging_block)) { + /* We've got a full block */ + xor_and_encrypt(ctx, ctx->staging_block, ctx->block); + ctx->blockIndex = 0; + bzero(ctx->staging_block, sizeof(ctx->staging_block)); + } + } + return (0); +} + +void +AES_CBC_MAC_Final(uint8_t *buf, struct aes_cbc_mac_ctx *ctx) +{ + uint8_t s0[CCM_CBC_BLOCK_LEN]; + + /* + * We first need to check to see if we've got any data + * left over to encrypt. + */ + if (ctx->blockIndex != 0) { + xor_and_encrypt(ctx, ctx->staging_block, ctx->block); + ctx->cryptDataCount += ctx->blockIndex; + ctx->blockIndex = 0; + explicit_bzero(ctx->staging_block, sizeof(ctx->staging_block)); + } + bzero(s0, sizeof(s0)); + s0[0] = (15 - ctx->nonceLength) - 1; + bcopy(ctx->nonce, s0 + 1, ctx->nonceLength); + rijndaelEncrypt(ctx->keysched, ctx->rounds, s0, s0); + for (size_t indx = 0; indx < AES_CBC_MAC_HASH_LEN; indx++) + buf[indx] = ctx->block[indx] ^ s0[indx]; + explicit_bzero(s0, sizeof(s0)); +} Property changes on: projects/import-googletest-1.8.1/sys/opencrypto/cbc_mac.c ___________________________________________________________________ Added: svn:eol-style ## -0,0 +1 ## +native \ No newline at end of property Added: svn:keywords ## -0,0 +1 ## +FreeBSD=%H \ No newline at end of property Added: svn:mime-type ## -0,0 +1 ## +text/plain \ No newline at end of property Index: projects/import-googletest-1.8.1/sys/opencrypto/cbc_mac.h =================================================================== --- projects/import-googletest-1.8.1/sys/opencrypto/cbc_mac.h (nonexistent) +++ projects/import-googletest-1.8.1/sys/opencrypto/cbc_mac.h (revision 344271) @@ -0,0 +1,67 @@ +/* + * Copyright (c) 2014 The FreeBSD Foundation + * Copyright (c) 2018, iXsystems Inc. + * All rights reserved. + * + * This software was developed by Sean Eric Fagan, with lots of references + * to existing AES-CCM (gmac) code. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * 1. Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * 2. Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in the + * documentation and/or other materials provided with the distribution. + * + * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND + * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE + * ARE DISCLAIMED. 
IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE + * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL + * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS + * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) + * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT + * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY + * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF + * SUCH DAMAGE. + * + * $FreeBSD$ + * + */ + +#ifndef _CBC_CCM_H +# define _CBC_CCM_H + +# include +# include + +# define CCM_CBC_BLOCK_LEN 16 /* 128 bits */ +# define CCM_CBC_MAX_DIGEST_LEN 16 +# define CCM_CBC_MIN_DIGEST_LEN 4 + +/* + * This is the authentication context structure; + * the encryption one is similar. + */ +struct aes_cbc_mac_ctx { + uint64_t authDataLength, authDataCount; + uint64_t cryptDataLength, cryptDataCount; + int blockIndex; + uint8_t staging_block[CCM_CBC_BLOCK_LEN]; + uint8_t block[CCM_CBC_BLOCK_LEN]; + const uint8_t *nonce; + int nonceLength; /* This one is in bytes, not bits! */ + /* AES state data */ + int rounds; + uint32_t keysched[4*(RIJNDAEL_MAXNR+1)]; +}; + +void AES_CBC_MAC_Init(struct aes_cbc_mac_ctx *); +void AES_CBC_MAC_Setkey(struct aes_cbc_mac_ctx *, const uint8_t *, uint16_t); +void AES_CBC_MAC_Reinit(struct aes_cbc_mac_ctx *, const uint8_t *, uint16_t); +int AES_CBC_MAC_Update(struct aes_cbc_mac_ctx *, const uint8_t *, uint16_t); +void AES_CBC_MAC_Final(uint8_t *, struct aes_cbc_mac_ctx *); + +#endif /* _CBC_CCM_H */ Property changes on: projects/import-googletest-1.8.1/sys/opencrypto/cbc_mac.h ___________________________________________________________________ Added: svn:eol-style ## -0,0 +1 ## +native \ No newline at end of property Added: svn:keywords ## -0,0 +1 ## +FreeBSD=%H \ No newline at end of property Added: svn:mime-type ## -0,0 +1 ## +text/plain \ No newline at end of property Index: projects/import-googletest-1.8.1/sys/opencrypto/cryptodev.c =================================================================== --- projects/import-googletest-1.8.1/sys/opencrypto/cryptodev.c (revision 344270) +++ projects/import-googletest-1.8.1/sys/opencrypto/cryptodev.c (revision 344271) @@ -1,1488 +1,1511 @@ /* $OpenBSD: cryptodev.c,v 1.52 2002/06/19 07:22:46 deraadt Exp $ */ /*- * Copyright (c) 2001 Theo de Raadt * Copyright (c) 2002-2006 Sam Leffler, Errno Consulting * Copyright (c) 2014 The FreeBSD Foundation * All rights reserved. * * Portions of this software were developed by John-Mark Gurney * under sponsorship of the FreeBSD Foundation and * Rubicon Communications, LLC (Netgate). * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * 3. The name of the author may not be used to endorse or promote products * derived from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
* IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. * * Effort sponsored in part by the Defense Advanced Research Projects * Agency (DARPA) and Air Force Research Laboratory, Air Force * Materiel Command, USAF, under agreement number F30602-01-2-0537. */ #include __FBSDID("$FreeBSD$"); #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include SDT_PROVIDER_DECLARE(opencrypto); SDT_PROBE_DEFINE1(opencrypto, dev, ioctl, error, "int"/*line number*/); #ifdef COMPAT_FREEBSD32 #include #include struct session_op32 { u_int32_t cipher; u_int32_t mac; u_int32_t keylen; u_int32_t key; int mackeylen; u_int32_t mackey; u_int32_t ses; }; struct session2_op32 { u_int32_t cipher; u_int32_t mac; u_int32_t keylen; u_int32_t key; int mackeylen; u_int32_t mackey; u_int32_t ses; int crid; int pad[4]; }; struct crypt_op32 { u_int32_t ses; u_int16_t op; u_int16_t flags; u_int len; u_int32_t src, dst; u_int32_t mac; u_int32_t iv; }; struct crparam32 { u_int32_t crp_p; u_int crp_nbits; }; struct crypt_kop32 { u_int crk_op; u_int crk_status; u_short crk_iparams; u_short crk_oparams; u_int crk_crid; struct crparam32 crk_param[CRK_MAXPARAM]; }; struct cryptotstat32 { struct timespec32 acc; struct timespec32 min; struct timespec32 max; u_int32_t count; }; struct cryptostats32 { u_int32_t cs_ops; u_int32_t cs_errs; u_int32_t cs_kops; u_int32_t cs_kerrs; u_int32_t cs_intrs; u_int32_t cs_rets; u_int32_t cs_blocks; u_int32_t cs_kblocks; struct cryptotstat32 cs_invoke; struct cryptotstat32 cs_done; struct cryptotstat32 cs_cb; struct cryptotstat32 cs_finis; }; #define CIOCGSESSION32 _IOWR('c', 101, struct session_op32) #define CIOCCRYPT32 _IOWR('c', 103, struct crypt_op32) #define CIOCKEY32 _IOWR('c', 104, struct crypt_kop32) #define CIOCGSESSION232 _IOWR('c', 106, struct session2_op32) #define CIOCKEY232 _IOWR('c', 107, struct crypt_kop32) static void session_op_from_32(const struct session_op32 *from, struct session_op *to) { CP(*from, *to, cipher); CP(*from, *to, mac); CP(*from, *to, keylen); PTRIN_CP(*from, *to, key); CP(*from, *to, mackeylen); PTRIN_CP(*from, *to, mackey); CP(*from, *to, ses); } static void session2_op_from_32(const struct session2_op32 *from, struct session2_op *to) { session_op_from_32((const struct session_op32 *)from, (struct session_op *)to); CP(*from, *to, crid); } static void session_op_to_32(const struct session_op *from, struct session_op32 *to) { CP(*from, *to, cipher); CP(*from, *to, mac); CP(*from, *to, keylen); PTROUT_CP(*from, *to, key); CP(*from, *to, mackeylen); PTROUT_CP(*from, *to, mackey); CP(*from, *to, ses); } static void session2_op_to_32(const struct session2_op *from, struct session2_op32 *to) { session_op_to_32((const struct session_op *)from, (struct session_op32 *)to); CP(*from, *to, crid); } static void crypt_op_from_32(const struct crypt_op32 *from, struct crypt_op *to) { CP(*from, *to, ses); CP(*from, *to, op); CP(*from, *to, flags); CP(*from, *to, len); PTRIN_CP(*from, *to, src); PTRIN_CP(*from, *to, 
dst); PTRIN_CP(*from, *to, mac); PTRIN_CP(*from, *to, iv); } static void crypt_op_to_32(const struct crypt_op *from, struct crypt_op32 *to) { CP(*from, *to, ses); CP(*from, *to, op); CP(*from, *to, flags); CP(*from, *to, len); PTROUT_CP(*from, *to, src); PTROUT_CP(*from, *to, dst); PTROUT_CP(*from, *to, mac); PTROUT_CP(*from, *to, iv); } static void crparam_from_32(const struct crparam32 *from, struct crparam *to) { PTRIN_CP(*from, *to, crp_p); CP(*from, *to, crp_nbits); } static void crparam_to_32(const struct crparam *from, struct crparam32 *to) { PTROUT_CP(*from, *to, crp_p); CP(*from, *to, crp_nbits); } static void crypt_kop_from_32(const struct crypt_kop32 *from, struct crypt_kop *to) { int i; CP(*from, *to, crk_op); CP(*from, *to, crk_status); CP(*from, *to, crk_iparams); CP(*from, *to, crk_oparams); CP(*from, *to, crk_crid); for (i = 0; i < CRK_MAXPARAM; i++) crparam_from_32(&from->crk_param[i], &to->crk_param[i]); } static void crypt_kop_to_32(const struct crypt_kop *from, struct crypt_kop32 *to) { int i; CP(*from, *to, crk_op); CP(*from, *to, crk_status); CP(*from, *to, crk_iparams); CP(*from, *to, crk_oparams); CP(*from, *to, crk_crid); for (i = 0; i < CRK_MAXPARAM; i++) crparam_to_32(&from->crk_param[i], &to->crk_param[i]); } #endif struct csession { TAILQ_ENTRY(csession) next; crypto_session_t cses; u_int32_t ses; struct mtx lock; /* for op submission */ u_int32_t cipher; struct enc_xform *txform; u_int32_t mac; struct auth_hash *thash; caddr_t key; int keylen; caddr_t mackey; int mackeylen; }; struct cryptop_data { struct csession *cse; struct iovec iovec[1]; struct uio uio; bool done; }; struct fcrypt { TAILQ_HEAD(csessionlist, csession) csessions; int sesn; }; static int cryptof_ioctl(struct file *, u_long, void *, struct ucred *, struct thread *); static int cryptof_stat(struct file *, struct stat *, struct ucred *, struct thread *); static int cryptof_close(struct file *, struct thread *); static int cryptof_fill_kinfo(struct file *, struct kinfo_file *, struct filedesc *); static struct fileops cryptofops = { .fo_read = invfo_rdwr, .fo_write = invfo_rdwr, .fo_truncate = invfo_truncate, .fo_ioctl = cryptof_ioctl, .fo_poll = invfo_poll, .fo_kqfilter = invfo_kqfilter, .fo_stat = cryptof_stat, .fo_close = cryptof_close, .fo_chmod = invfo_chmod, .fo_chown = invfo_chown, .fo_sendfile = invfo_sendfile, .fo_fill_kinfo = cryptof_fill_kinfo, }; static struct csession *csefind(struct fcrypt *, u_int); static int csedelete(struct fcrypt *, struct csession *); static struct csession *cseadd(struct fcrypt *, struct csession *); static struct csession *csecreate(struct fcrypt *, crypto_session_t, caddr_t, u_int64_t, caddr_t, u_int64_t, u_int32_t, u_int32_t, struct enc_xform *, struct auth_hash *); static void csefree(struct csession *); static int cryptodev_op(struct csession *, struct crypt_op *, struct ucred *, struct thread *td); static int cryptodev_aead(struct csession *, struct crypt_aead *, struct ucred *, struct thread *); static int cryptodev_key(struct crypt_kop *); static int cryptodev_find(struct crypt_find_op *); /* * Check a crypto identifier to see if it requested * a software device/driver. This can be done either * by device name/class or through search constraints. 
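
checkforsoftware() below enforces a small policy on the crid a caller passes in. A userland model of just the flag-handling branch, assuming crypto_devallowsoft is off; the flag values mirror cryptodev.h, and the concrete driver-id capability check (crypto_getcaps()) is deliberately omitted here:

#include <errno.h>
#include <stdio.h>

#define CRYPTOCAP_F_HARDWARE 0x01000000
#define CRYPTOCAP_F_SOFTWARE 0x02000000

/*
 * With devallowsoft off, a software-only request fails and
 * hardware|software is quietly downgraded to hardware only.
 */
static int
check_crid(int *cridp, int devallowsoft)
{
	int crid = *cridp;

	if (!devallowsoft && (crid & CRYPTOCAP_F_SOFTWARE)) {
		if (crid & CRYPTOCAP_F_HARDWARE) {
			*cridp = CRYPTOCAP_F_HARDWARE;
			return (0);
		}
		return (EINVAL);
	}
	return (0);
}

int
main(void)
{
	int crid = CRYPTOCAP_F_SOFTWARE | CRYPTOCAP_F_HARDWARE;

	if (check_crid(&crid, 0) == 0)
		printf("using crid 0x%08x\n", crid);
	return (0);
}
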
*/ static int checkforsoftware(int *cridp) { int crid; crid = *cridp; if (!crypto_devallowsoft) { if (crid & CRYPTOCAP_F_SOFTWARE) { if (crid & CRYPTOCAP_F_HARDWARE) { *cridp = CRYPTOCAP_F_HARDWARE; return 0; } return EINVAL; } if ((crid & CRYPTOCAP_F_HARDWARE) == 0 && (crypto_getcaps(crid) & CRYPTOCAP_F_HARDWARE) == 0) return EINVAL; } return 0; } /* ARGSUSED */ static int cryptof_ioctl( struct file *fp, u_long cmd, void *data, struct ucred *active_cred, struct thread *td) { #define SES2(p) ((struct session2_op *)p) struct cryptoini cria, crie; struct fcrypt *fcr = fp->f_data; struct csession *cse; struct session_op *sop; struct crypt_op *cop; struct crypt_aead *caead; struct enc_xform *txform = NULL; struct auth_hash *thash = NULL; struct crypt_kop *kop; crypto_session_t cses; u_int32_t ses; int error = 0, crid; #ifdef COMPAT_FREEBSD32 struct session2_op sopc; struct crypt_op copc; struct crypt_kop kopc; #endif switch (cmd) { case CIOCGSESSION: case CIOCGSESSION2: #ifdef COMPAT_FREEBSD32 case CIOCGSESSION32: case CIOCGSESSION232: if (cmd == CIOCGSESSION32) { session_op_from_32(data, (struct session_op *)&sopc); sop = (struct session_op *)&sopc; } else if (cmd == CIOCGSESSION232) { session2_op_from_32(data, &sopc); sop = (struct session_op *)&sopc; } else #endif sop = (struct session_op *)data; switch (sop->cipher) { case 0: break; case CRYPTO_DES_CBC: txform = &enc_xform_des; break; case CRYPTO_3DES_CBC: txform = &enc_xform_3des; break; case CRYPTO_BLF_CBC: txform = &enc_xform_blf; break; case CRYPTO_CAST_CBC: txform = &enc_xform_cast5; break; case CRYPTO_SKIPJACK_CBC: txform = &enc_xform_skipjack; break; case CRYPTO_AES_CBC: txform = &enc_xform_rijndael128; break; case CRYPTO_AES_XTS: txform = &enc_xform_aes_xts; break; case CRYPTO_NULL_CBC: txform = &enc_xform_null; break; case CRYPTO_ARC4: txform = &enc_xform_arc4; break; case CRYPTO_CAMELLIA_CBC: txform = &enc_xform_camellia; break; case CRYPTO_AES_ICM: txform = &enc_xform_aes_icm; break; case CRYPTO_AES_NIST_GCM_16: txform = &enc_xform_aes_nist_gcm; break; case CRYPTO_CHACHA20: txform = &enc_xform_chacha20; break; + case CRYPTO_AES_CCM_16: + txform = &enc_xform_ccm; + break; default: CRYPTDEB("invalid cipher"); SDT_PROBE1(opencrypto, dev, ioctl, error, __LINE__); return (EINVAL); } switch (sop->mac) { case 0: break; case CRYPTO_MD5_HMAC: thash = &auth_hash_hmac_md5; break; case CRYPTO_POLY1305: thash = &auth_hash_poly1305; break; case CRYPTO_SHA1_HMAC: thash = &auth_hash_hmac_sha1; break; case CRYPTO_SHA2_224_HMAC: thash = &auth_hash_hmac_sha2_224; break; case CRYPTO_SHA2_256_HMAC: thash = &auth_hash_hmac_sha2_256; break; case CRYPTO_SHA2_384_HMAC: thash = &auth_hash_hmac_sha2_384; break; case CRYPTO_SHA2_512_HMAC: thash = &auth_hash_hmac_sha2_512; break; case CRYPTO_RIPEMD160_HMAC: thash = &auth_hash_hmac_ripemd_160; break; case CRYPTO_AES_128_NIST_GMAC: thash = &auth_hash_nist_gmac_aes_128; break; case CRYPTO_AES_192_NIST_GMAC: thash = &auth_hash_nist_gmac_aes_192; break; case CRYPTO_AES_256_NIST_GMAC: thash = &auth_hash_nist_gmac_aes_256; break; + case CRYPTO_AES_CCM_CBC_MAC: + switch (sop->keylen) { + case 16: + thash = &auth_hash_ccm_cbc_mac_128; + break; + case 24: + thash = &auth_hash_ccm_cbc_mac_192; + break; + case 32: + thash = &auth_hash_ccm_cbc_mac_256; + break; + default: + CRYPTDEB("Invalid CBC MAC key size %d", + sop->keylen); + SDT_PROBE1(opencrypto, dev, ioctl, + error, __LINE__); + return (EINVAL); + } + break; #ifdef notdef case CRYPTO_MD5: thash = &auth_hash_md5; break; #endif case CRYPTO_SHA1: thash = 
&auth_hash_sha1; break; case CRYPTO_SHA2_224: thash = &auth_hash_sha2_224; break; case CRYPTO_SHA2_256: thash = &auth_hash_sha2_256; break; case CRYPTO_SHA2_384: thash = &auth_hash_sha2_384; break; case CRYPTO_SHA2_512: thash = &auth_hash_sha2_512; break; case CRYPTO_NULL_HMAC: thash = &auth_hash_null; break; case CRYPTO_BLAKE2B: thash = &auth_hash_blake2b; break; case CRYPTO_BLAKE2S: thash = &auth_hash_blake2s; break; default: CRYPTDEB("invalid mac"); SDT_PROBE1(opencrypto, dev, ioctl, error, __LINE__); return (EINVAL); } bzero(&crie, sizeof(crie)); bzero(&cria, sizeof(cria)); if (txform) { crie.cri_alg = txform->type; crie.cri_klen = sop->keylen * 8; if (sop->keylen > txform->maxkey || sop->keylen < txform->minkey) { CRYPTDEB("invalid cipher parameters"); error = EINVAL; SDT_PROBE1(opencrypto, dev, ioctl, error, __LINE__); goto bail; } crie.cri_key = malloc(crie.cri_klen / 8, M_XDATA, M_WAITOK); if ((error = copyin(sop->key, crie.cri_key, crie.cri_klen / 8))) { CRYPTDEB("invalid key"); SDT_PROBE1(opencrypto, dev, ioctl, error, __LINE__); goto bail; } if (thash) crie.cri_next = &cria; } if (thash) { cria.cri_alg = thash->type; cria.cri_klen = sop->mackeylen * 8; if (thash->keysize != 0 && sop->mackeylen > thash->keysize) { CRYPTDEB("invalid mac key length"); error = EINVAL; SDT_PROBE1(opencrypto, dev, ioctl, error, __LINE__); goto bail; } if (cria.cri_klen) { cria.cri_key = malloc(cria.cri_klen / 8, M_XDATA, M_WAITOK); if ((error = copyin(sop->mackey, cria.cri_key, cria.cri_klen / 8))) { CRYPTDEB("invalid mac key"); SDT_PROBE1(opencrypto, dev, ioctl, error, __LINE__); goto bail; } } } /* NB: CIOCGSESSION2 has the crid */ if (cmd == CIOCGSESSION2 #ifdef COMPAT_FREEBSD32 || cmd == CIOCGSESSION232 #endif ) { crid = SES2(sop)->crid; error = checkforsoftware(&crid); if (error) { CRYPTDEB("checkforsoftware"); SDT_PROBE1(opencrypto, dev, ioctl, error, __LINE__); goto bail; } } else crid = CRYPTOCAP_F_HARDWARE; error = crypto_newsession(&cses, (txform ? 
&crie : &cria), crid); if (error) { CRYPTDEB("crypto_newsession"); SDT_PROBE1(opencrypto, dev, ioctl, error, __LINE__); goto bail; } cse = csecreate(fcr, cses, crie.cri_key, crie.cri_klen, cria.cri_key, cria.cri_klen, sop->cipher, sop->mac, txform, thash); if (cse == NULL) { crypto_freesession(cses); error = EINVAL; SDT_PROBE1(opencrypto, dev, ioctl, error, __LINE__); CRYPTDEB("csecreate"); goto bail; } sop->ses = cse->ses; if (cmd == CIOCGSESSION2 #ifdef COMPAT_FREEBSD32 || cmd == CIOCGSESSION232 #endif ) { /* return hardware/driver id */ SES2(sop)->crid = crypto_ses2hid(cse->cses); } bail: if (error) { if (crie.cri_key) free(crie.cri_key, M_XDATA); if (cria.cri_key) free(cria.cri_key, M_XDATA); } #ifdef COMPAT_FREEBSD32 else { if (cmd == CIOCGSESSION32) session_op_to_32(sop, data); else if (cmd == CIOCGSESSION232) session2_op_to_32((struct session2_op *)sop, data); } #endif break; case CIOCFSESSION: ses = *(u_int32_t *)data; cse = csefind(fcr, ses); if (cse == NULL) { SDT_PROBE1(opencrypto, dev, ioctl, error, __LINE__); return (EINVAL); } csedelete(fcr, cse); csefree(cse); break; case CIOCCRYPT: #ifdef COMPAT_FREEBSD32 case CIOCCRYPT32: if (cmd == CIOCCRYPT32) { cop = &copc; crypt_op_from_32(data, cop); } else #endif cop = (struct crypt_op *)data; cse = csefind(fcr, cop->ses); if (cse == NULL) { SDT_PROBE1(opencrypto, dev, ioctl, error, __LINE__); return (EINVAL); } error = cryptodev_op(cse, cop, active_cred, td); #ifdef COMPAT_FREEBSD32 if (error == 0 && cmd == CIOCCRYPT32) crypt_op_to_32(cop, data); #endif break; case CIOCKEY: case CIOCKEY2: #ifdef COMPAT_FREEBSD32 case CIOCKEY32: case CIOCKEY232: #endif if (!crypto_userasymcrypto) { SDT_PROBE1(opencrypto, dev, ioctl, error, __LINE__); return (EPERM); /* XXX compat? */ } #ifdef COMPAT_FREEBSD32 if (cmd == CIOCKEY32 || cmd == CIOCKEY232) { kop = &kopc; crypt_kop_from_32(data, kop); } else #endif kop = (struct crypt_kop *)data; if (cmd == CIOCKEY #ifdef COMPAT_FREEBSD32 || cmd == CIOCKEY32 #endif ) { /* NB: crypto core enforces s/w driver use */ kop->crk_crid = CRYPTOCAP_F_HARDWARE | CRYPTOCAP_F_SOFTWARE; } mtx_lock(&Giant); error = cryptodev_key(kop); mtx_unlock(&Giant); #ifdef COMPAT_FREEBSD32 if (cmd == CIOCKEY32 || cmd == CIOCKEY232) crypt_kop_to_32(kop, data); #endif break; case CIOCASYMFEAT: if (!crypto_userasymcrypto) { /* * NB: if user asym crypto operations are * not permitted return "no algorithms" * so well-behaved applications will just * fallback to doing them in software. 
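
The CIOCASYMFEAT behavior described above can be probed from userland. A minimal sketch, assuming a FreeBSD system with /dev/crypto present; a result of 0 is the "no algorithms" answer that should push an application to its software fallback:

#include <sys/ioctl.h>
#include <crypto/cryptodev.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int
main(void)
{
	int fd, feat = 0;

	fd = open("/dev/crypto", O_RDWR);
	if (fd < 0) {
		perror("open(/dev/crypto)");
		return (1);
	}
	if (ioctl(fd, CIOCASYMFEAT, &feat) < 0)
		perror("CIOCASYMFEAT");
	else if (feat == 0)
		printf("no kernel asym algorithms; fall back to software\n");
	else
		printf("asym feature mask: 0x%x\n", feat);
	close(fd);
	return (0);
}
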
*/ *(int *)data = 0; } else { error = crypto_getfeat((int *)data); if (error) SDT_PROBE1(opencrypto, dev, ioctl, error, __LINE__); } break; case CIOCFINDDEV: error = cryptodev_find((struct crypt_find_op *)data); break; case CIOCCRYPTAEAD: caead = (struct crypt_aead *)data; cse = csefind(fcr, caead->ses); if (cse == NULL) { SDT_PROBE1(opencrypto, dev, ioctl, error, __LINE__); return (EINVAL); } error = cryptodev_aead(cse, caead, active_cred, td); break; default: error = EINVAL; SDT_PROBE1(opencrypto, dev, ioctl, error, __LINE__); break; } return (error); #undef SES2 } static int cryptodev_cb(struct cryptop *); static struct cryptop_data * cod_alloc(struct csession *cse, size_t len, struct thread *td) { struct cryptop_data *cod; struct uio *uio; cod = malloc(sizeof(struct cryptop_data), M_XDATA, M_WAITOK | M_ZERO); cod->cse = cse; uio = &cod->uio; uio->uio_iov = cod->iovec; uio->uio_iovcnt = 1; uio->uio_resid = len; uio->uio_segflg = UIO_SYSSPACE; uio->uio_rw = UIO_WRITE; uio->uio_td = td; uio->uio_iov[0].iov_len = len; uio->uio_iov[0].iov_base = malloc(len, M_XDATA, M_WAITOK); return (cod); } static void cod_free(struct cryptop_data *cod) { free(cod->uio.uio_iov[0].iov_base, M_XDATA); free(cod, M_XDATA); } static int cryptodev_op( struct csession *cse, struct crypt_op *cop, struct ucred *active_cred, struct thread *td) { struct cryptop_data *cod = NULL; struct cryptop *crp = NULL; struct cryptodesc *crde = NULL, *crda = NULL; int error; if (cop->len > 256*1024-4) { SDT_PROBE1(opencrypto, dev, ioctl, error, __LINE__); return (E2BIG); } if (cse->txform) { if (cop->len == 0 || (cop->len % cse->txform->blocksize) != 0) { SDT_PROBE1(opencrypto, dev, ioctl, error, __LINE__); return (EINVAL); } } if (cse->thash) cod = cod_alloc(cse, cop->len + cse->thash->hashsize, td); else cod = cod_alloc(cse, cop->len, td); crp = crypto_getreq((cse->txform != NULL) + (cse->thash != NULL)); if (crp == NULL) { SDT_PROBE1(opencrypto, dev, ioctl, error, __LINE__); error = ENOMEM; goto bail; } if (cse->thash && cse->txform) { if (cop->flags & COP_F_CIPHER_FIRST) { crde = crp->crp_desc; crda = crde->crd_next; } else { crda = crp->crp_desc; crde = crda->crd_next; } } else if (cse->thash) { crda = crp->crp_desc; } else if (cse->txform) { crde = crp->crp_desc; } else { SDT_PROBE1(opencrypto, dev, ioctl, error, __LINE__); error = EINVAL; goto bail; } if ((error = copyin(cop->src, cod->uio.uio_iov[0].iov_base, cop->len))) { SDT_PROBE1(opencrypto, dev, ioctl, error, __LINE__); goto bail; } if (crda) { crda->crd_skip = 0; crda->crd_len = cop->len; crda->crd_inject = cop->len; crda->crd_alg = cse->mac; crda->crd_key = cse->mackey; crda->crd_klen = cse->mackeylen * 8; } if (crde) { if (cop->op == COP_ENCRYPT) crde->crd_flags |= CRD_F_ENCRYPT; else crde->crd_flags &= ~CRD_F_ENCRYPT; crde->crd_len = cop->len; crde->crd_inject = 0; crde->crd_alg = cse->cipher; crde->crd_key = cse->key; crde->crd_klen = cse->keylen * 8; } crp->crp_ilen = cop->len; crp->crp_flags = CRYPTO_F_IOV | CRYPTO_F_CBIMM | (cop->flags & COP_F_BATCH); crp->crp_uio = &cod->uio; crp->crp_callback = cryptodev_cb; crp->crp_session = cse->cses; crp->crp_opaque = cod; if (cop->iv) { if (crde == NULL) { SDT_PROBE1(opencrypto, dev, ioctl, error, __LINE__); error = EINVAL; goto bail; } if (cse->cipher == CRYPTO_ARC4) { /* XXX use flag? 
*/ SDT_PROBE1(opencrypto, dev, ioctl, error, __LINE__); error = EINVAL; goto bail; } if ((error = copyin(cop->iv, crde->crd_iv, cse->txform->ivsize))) { SDT_PROBE1(opencrypto, dev, ioctl, error, __LINE__); goto bail; } crde->crd_flags |= CRD_F_IV_EXPLICIT | CRD_F_IV_PRESENT; crde->crd_skip = 0; } else if (cse->cipher == CRYPTO_ARC4) { /* XXX use flag? */ crde->crd_skip = 0; } else if (crde) { crde->crd_flags |= CRD_F_IV_PRESENT; crde->crd_skip = cse->txform->ivsize; crde->crd_len -= cse->txform->ivsize; } if (cop->mac && crda == NULL) { SDT_PROBE1(opencrypto, dev, ioctl, error, __LINE__); error = EINVAL; goto bail; } again: /* * Let the dispatch run unlocked, then, interlock against the * callback before checking if the operation completed and going * to sleep. This insures drivers don't inherit our lock which * results in a lock order reversal between crypto_dispatch forced * entry and the crypto_done callback into us. */ error = crypto_dispatch(crp); if (error != 0) { SDT_PROBE1(opencrypto, dev, ioctl, error, __LINE__); goto bail; } mtx_lock(&cse->lock); while (!cod->done) mtx_sleep(cod, &cse->lock, PWAIT, "crydev", 0); mtx_unlock(&cse->lock); if (crp->crp_etype == EAGAIN) { crp->crp_etype = 0; crp->crp_flags &= ~CRYPTO_F_DONE; cod->done = false; goto again; } if (crp->crp_etype != 0) { SDT_PROBE1(opencrypto, dev, ioctl, error, __LINE__); error = crp->crp_etype; goto bail; } if (cop->dst && (error = copyout(cod->uio.uio_iov[0].iov_base, cop->dst, cop->len))) { SDT_PROBE1(opencrypto, dev, ioctl, error, __LINE__); goto bail; } if (cop->mac && (error = copyout((caddr_t)cod->uio.uio_iov[0].iov_base + cop->len, cop->mac, cse->thash->hashsize))) { SDT_PROBE1(opencrypto, dev, ioctl, error, __LINE__); goto bail; } bail: if (crp) crypto_freereq(crp); if (cod) cod_free(cod); return (error); } static int cryptodev_aead( struct csession *cse, struct crypt_aead *caead, struct ucred *active_cred, struct thread *td) { struct cryptop_data *cod = NULL; struct cryptop *crp = NULL; struct cryptodesc *crde = NULL, *crda = NULL; int error; if (caead->len > 256*1024-4 || caead->aadlen > 256*1024-4) { SDT_PROBE1(opencrypto, dev, ioctl, error, __LINE__); return (E2BIG); } if (cse->txform == NULL || cse->thash == NULL || caead->tag == NULL || (caead->len % cse->txform->blocksize) != 0) { SDT_PROBE1(opencrypto, dev, ioctl, error, __LINE__); return (EINVAL); } cod = cod_alloc(cse, caead->aadlen + caead->len + cse->thash->hashsize, td); crp = crypto_getreq(2); if (crp == NULL) { error = ENOMEM; SDT_PROBE1(opencrypto, dev, ioctl, error, __LINE__); goto bail; } if (caead->flags & COP_F_CIPHER_FIRST) { crde = crp->crp_desc; crda = crde->crd_next; } else { crda = crp->crp_desc; crde = crda->crd_next; } if ((error = copyin(caead->aad, cod->uio.uio_iov[0].iov_base, caead->aadlen))) { SDT_PROBE1(opencrypto, dev, ioctl, error, __LINE__); goto bail; } if ((error = copyin(caead->src, (char *)cod->uio.uio_iov[0].iov_base + caead->aadlen, caead->len))) { SDT_PROBE1(opencrypto, dev, ioctl, error, __LINE__); goto bail; } /* - * For GCM, crd_len covers only the AAD. For other ciphers + * For GCM/CCM, crd_len covers only the AAD. For other ciphers * chained with an HMAC, crd_len covers both the AAD and the * cipher text. 
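
The descriptor offsets that cryptodev_aead() computes over its single staging iovec, laid out as [ aad | payload | tag ], can be summarized in isolation. A hedged sketch of that arithmetic; struct desc and aead_layout() are invented for the example, and aead_mode selects the GCM/CCM rule just described:

#include <stdio.h>

struct desc {
	int skip, len, inject;
};

/*
 * Offsets over the staging buffer [ aad | payload | tag ]; with
 * aead_mode set, the auth descriptor covers only the AAD.
 */
static void
aead_layout(int aadlen, int len, int aead_mode, struct desc *crda,
    struct desc *crde)
{
	crda->skip = 0;
	crda->len = aead_mode ? aadlen : aadlen + len;
	crda->inject = aadlen + len;	/* the tag lands after the payload */

	crde->skip = aadlen;		/* the cipher starts past the AAD */
	crde->len = len;
	crde->inject = aadlen;
}

int
main(void)
{
	struct desc a, e;

	aead_layout(16, 64, 1, &a, &e);
	printf("auth: skip %d len %d inject %d\n", a.skip, a.len, a.inject);
	printf("enc:  skip %d len %d inject %d\n", e.skip, e.len, e.inject);
	return (0);
}
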
*/ crda->crd_skip = 0; - if (cse->cipher == CRYPTO_AES_NIST_GCM_16) + if (cse->cipher == CRYPTO_AES_NIST_GCM_16 || + cse->cipher == CRYPTO_AES_CCM_16) crda->crd_len = caead->aadlen; else crda->crd_len = caead->aadlen + caead->len; crda->crd_inject = caead->aadlen + caead->len; crda->crd_alg = cse->mac; crda->crd_key = cse->mackey; crda->crd_klen = cse->mackeylen * 8; if (caead->op == COP_ENCRYPT) crde->crd_flags |= CRD_F_ENCRYPT; else crde->crd_flags &= ~CRD_F_ENCRYPT; crde->crd_skip = caead->aadlen; crde->crd_len = caead->len; crde->crd_inject = caead->aadlen; crde->crd_alg = cse->cipher; crde->crd_key = cse->key; crde->crd_klen = cse->keylen * 8; crp->crp_ilen = caead->aadlen + caead->len; crp->crp_flags = CRYPTO_F_IOV | CRYPTO_F_CBIMM | (caead->flags & COP_F_BATCH); crp->crp_uio = &cod->uio; crp->crp_callback = cryptodev_cb; crp->crp_session = cse->cses; crp->crp_opaque = cod; if (caead->iv) { if (caead->ivlen > sizeof(crde->crd_iv)) { error = EINVAL; SDT_PROBE1(opencrypto, dev, ioctl, error, __LINE__); goto bail; } if ((error = copyin(caead->iv, crde->crd_iv, caead->ivlen))) { SDT_PROBE1(opencrypto, dev, ioctl, error, __LINE__); goto bail; } crde->crd_flags |= CRD_F_IV_EXPLICIT | CRD_F_IV_PRESENT; } else { crde->crd_flags |= CRD_F_IV_PRESENT; crde->crd_skip += cse->txform->ivsize; crde->crd_len -= cse->txform->ivsize; } if ((error = copyin(caead->tag, (caddr_t)cod->uio.uio_iov[0].iov_base + caead->len + caead->aadlen, cse->thash->hashsize))) { SDT_PROBE1(opencrypto, dev, ioctl, error, __LINE__); goto bail; } again: /* * Let the dispatch run unlocked, then, interlock against the * callback before checking if the operation completed and going * to sleep. This insures drivers don't inherit our lock which * results in a lock order reversal between crypto_dispatch forced * entry and the crypto_done callback into us. */ error = crypto_dispatch(crp); if (error != 0) { SDT_PROBE1(opencrypto, dev, ioctl, error, __LINE__); goto bail; } mtx_lock(&cse->lock); while (!cod->done) mtx_sleep(cod, &cse->lock, PWAIT, "crydev", 0); mtx_unlock(&cse->lock); if (crp->crp_etype == EAGAIN) { crp->crp_etype = 0; crp->crp_flags &= ~CRYPTO_F_DONE; cod->done = false; goto again; } if (crp->crp_etype != 0) { error = crp->crp_etype; SDT_PROBE1(opencrypto, dev, ioctl, error, __LINE__); goto bail; } if (caead->dst && (error = copyout( (caddr_t)cod->uio.uio_iov[0].iov_base + caead->aadlen, caead->dst, caead->len))) { SDT_PROBE1(opencrypto, dev, ioctl, error, __LINE__); goto bail; } if ((error = copyout((caddr_t)cod->uio.uio_iov[0].iov_base + caead->aadlen + caead->len, caead->tag, cse->thash->hashsize))) { SDT_PROBE1(opencrypto, dev, ioctl, error, __LINE__); goto bail; } bail: crypto_freereq(crp); if (cod) cod_free(cod); return (error); } static int cryptodev_cb(struct cryptop *crp) { struct cryptop_data *cod = crp->crp_opaque; /* * Lock to ensure the wakeup() is not missed by the loops * waiting on cod->done in cryptodev_op() and * cryptodev_aead(). 
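
The handshake between the dispatching thread and cryptodev_cb() has a direct userland analogue. A minimal pthreads sketch of the same pattern, with the thread creation standing in for crypto_dispatch() and the done flag for cod->done (compile with -lpthread):

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cv = PTHREAD_COND_INITIALIZER;
static int done;

/* Plays the role of cryptodev_cb(): set done under the lock, then wake. */
static void *
callback(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&lock);
	done = 1;
	pthread_mutex_unlock(&lock);
	pthread_cond_signal(&cv);
	return (NULL);
}

int
main(void)
{
	pthread_t t;

	/* The "dispatch" runs with no lock held, as in cryptodev_op(). */
	pthread_create(&t, NULL, callback, NULL);
	pthread_mutex_lock(&lock);
	while (!done)		/* mirrors the while (!cod->done) loop */
		pthread_cond_wait(&cv, &lock);
	pthread_mutex_unlock(&lock);
	pthread_join(t, NULL);
	printf("operation complete\n");
	return (0);
}
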
*/ mtx_lock(&cod->cse->lock); cod->done = true; mtx_unlock(&cod->cse->lock); wakeup(cod); return (0); } static int cryptodevkey_cb(void *op) { struct cryptkop *krp = (struct cryptkop *) op; wakeup_one(krp); return (0); } static int cryptodev_key(struct crypt_kop *kop) { struct cryptkop *krp = NULL; int error = EINVAL; int in, out, size, i; if (kop->crk_iparams + kop->crk_oparams > CRK_MAXPARAM) { SDT_PROBE1(opencrypto, dev, ioctl, error, __LINE__); return (EFBIG); } in = kop->crk_iparams; out = kop->crk_oparams; switch (kop->crk_op) { case CRK_MOD_EXP: if (in == 3 && out == 1) break; SDT_PROBE1(opencrypto, dev, ioctl, error, __LINE__); return (EINVAL); case CRK_MOD_EXP_CRT: if (in == 6 && out == 1) break; SDT_PROBE1(opencrypto, dev, ioctl, error, __LINE__); return (EINVAL); case CRK_DSA_SIGN: if (in == 5 && out == 2) break; SDT_PROBE1(opencrypto, dev, ioctl, error, __LINE__); return (EINVAL); case CRK_DSA_VERIFY: if (in == 7 && out == 0) break; SDT_PROBE1(opencrypto, dev, ioctl, error, __LINE__); return (EINVAL); case CRK_DH_COMPUTE_KEY: if (in == 3 && out == 1) break; SDT_PROBE1(opencrypto, dev, ioctl, error, __LINE__); return (EINVAL); default: SDT_PROBE1(opencrypto, dev, ioctl, error, __LINE__); return (EINVAL); } krp = (struct cryptkop *)malloc(sizeof *krp, M_XDATA, M_WAITOK|M_ZERO); if (!krp) { SDT_PROBE1(opencrypto, dev, ioctl, error, __LINE__); return (ENOMEM); } krp->krp_op = kop->crk_op; krp->krp_status = kop->crk_status; krp->krp_iparams = kop->crk_iparams; krp->krp_oparams = kop->crk_oparams; krp->krp_crid = kop->crk_crid; krp->krp_status = 0; krp->krp_callback = (int (*) (struct cryptkop *)) cryptodevkey_cb; for (i = 0; i < CRK_MAXPARAM; i++) { if (kop->crk_param[i].crp_nbits > 65536) { /* Limit is the same as in OpenBSD */ SDT_PROBE1(opencrypto, dev, ioctl, error, __LINE__); goto fail; } krp->krp_param[i].crp_nbits = kop->crk_param[i].crp_nbits; } for (i = 0; i < krp->krp_iparams + krp->krp_oparams; i++) { size = (krp->krp_param[i].crp_nbits + 7) / 8; if (size == 0) continue; krp->krp_param[i].crp_p = malloc(size, M_XDATA, M_WAITOK); if (i >= krp->krp_iparams) continue; error = copyin(kop->crk_param[i].crp_p, krp->krp_param[i].crp_p, size); if (error) { SDT_PROBE1(opencrypto, dev, ioctl, error, __LINE__); goto fail; } } error = crypto_kdispatch(krp); if (error) { SDT_PROBE1(opencrypto, dev, ioctl, error, __LINE__); goto fail; } error = tsleep(krp, PSOCK, "crydev", 0); if (error) { /* XXX can this happen? if so, how do we recover? 
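
cryptodev_key() above sizes each bignum parameter from its bit count and caps it at 65536 bits. A small sketch of those two rules; crparam_size() is a name invented for the example:

#include <stdio.h>

/* Sizing and limit rules applied to each crparam in cryptodev_key(). */
static int
crparam_size(unsigned int nbits)
{
	if (nbits > 65536)	/* same cap as the kernel (and OpenBSD) */
		return (-1);
	return ((int)((nbits + 7) / 8));
}

int
main(void)
{
	printf("1024-bit modulus -> %d bytes\n", crparam_size(1024));
	printf("17-bit value -> %d bytes\n", crparam_size(17));
	printf("oversized -> %d\n", crparam_size(70000));
	return (0);
}
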
*/ SDT_PROBE1(opencrypto, dev, ioctl, error, __LINE__); goto fail; } kop->crk_crid = krp->krp_crid; /* device that did the work */ if (krp->krp_status != 0) { error = krp->krp_status; SDT_PROBE1(opencrypto, dev, ioctl, error, __LINE__); goto fail; } for (i = krp->krp_iparams; i < krp->krp_iparams + krp->krp_oparams; i++) { size = (krp->krp_param[i].crp_nbits + 7) / 8; if (size == 0) continue; error = copyout(krp->krp_param[i].crp_p, kop->crk_param[i].crp_p, size); if (error) { SDT_PROBE1(opencrypto, dev, ioctl, error, __LINE__); goto fail; } } fail: if (krp) { kop->crk_status = krp->krp_status; for (i = 0; i < CRK_MAXPARAM; i++) { if (krp->krp_param[i].crp_p) free(krp->krp_param[i].crp_p, M_XDATA); } free(krp, M_XDATA); } return (error); } static int cryptodev_find(struct crypt_find_op *find) { device_t dev; size_t fnlen = sizeof find->name; if (find->crid != -1) { dev = crypto_find_device_byhid(find->crid); if (dev == NULL) return (ENOENT); strncpy(find->name, device_get_nameunit(dev), fnlen); find->name[fnlen - 1] = '\x0'; } else { find->name[fnlen - 1] = '\x0'; find->crid = crypto_find_driver(find->name); if (find->crid == -1) return (ENOENT); } return (0); } /* ARGSUSED */ static int cryptof_stat( struct file *fp, struct stat *sb, struct ucred *active_cred, struct thread *td) { return (EOPNOTSUPP); } /* ARGSUSED */ static int cryptof_close(struct file *fp, struct thread *td) { struct fcrypt *fcr = fp->f_data; struct csession *cse; while ((cse = TAILQ_FIRST(&fcr->csessions))) { TAILQ_REMOVE(&fcr->csessions, cse, next); csefree(cse); } free(fcr, M_XDATA); fp->f_data = NULL; return 0; } static int cryptof_fill_kinfo(struct file *fp, struct kinfo_file *kif, struct filedesc *fdp) { kif->kf_type = KF_TYPE_CRYPTO; return (0); } static struct csession * csefind(struct fcrypt *fcr, u_int ses) { struct csession *cse; TAILQ_FOREACH(cse, &fcr->csessions, next) if (cse->ses == ses) return (cse); return (NULL); } static int csedelete(struct fcrypt *fcr, struct csession *cse_del) { struct csession *cse; TAILQ_FOREACH(cse, &fcr->csessions, next) { if (cse == cse_del) { TAILQ_REMOVE(&fcr->csessions, cse, next); return (1); } } return (0); } static struct csession * cseadd(struct fcrypt *fcr, struct csession *cse) { TAILQ_INSERT_TAIL(&fcr->csessions, cse, next); cse->ses = fcr->sesn++; return (cse); } struct csession * csecreate(struct fcrypt *fcr, crypto_session_t cses, caddr_t key, u_int64_t keylen, caddr_t mackey, u_int64_t mackeylen, u_int32_t cipher, u_int32_t mac, struct enc_xform *txform, struct auth_hash *thash) { struct csession *cse; cse = malloc(sizeof(struct csession), M_XDATA, M_NOWAIT | M_ZERO); if (cse == NULL) return NULL; mtx_init(&cse->lock, "cryptodev", "crypto session lock", MTX_DEF); cse->key = key; cse->keylen = keylen/8; cse->mackey = mackey; cse->mackeylen = mackeylen/8; cse->cses = cses; cse->cipher = cipher; cse->mac = mac; cse->txform = txform; cse->thash = thash; cseadd(fcr, cse); return (cse); } static void csefree(struct csession *cse) { crypto_freesession(cse->cses); mtx_destroy(&cse->lock); if (cse->key) free(cse->key, M_XDATA); if (cse->mackey) free(cse->mackey, M_XDATA); free(cse, M_XDATA); } static int cryptoopen(struct cdev *dev, int oflags, int devtype, struct thread *td) { return (0); } static int cryptoread(struct cdev *dev, struct uio *uio, int ioflag) { return (EIO); } static int cryptowrite(struct cdev *dev, struct uio *uio, int ioflag) { return (EIO); } static int cryptoioctl(struct cdev *dev, u_long cmd, caddr_t data, int flag, struct thread *td) { struct file 
*f; struct fcrypt *fcr; int fd, error; switch (cmd) { case CRIOGET: fcr = malloc(sizeof(struct fcrypt), M_XDATA, M_WAITOK); TAILQ_INIT(&fcr->csessions); fcr->sesn = 0; error = falloc(td, &f, &fd, 0); if (error) { free(fcr, M_XDATA); return (error); } /* falloc automatically provides an extra reference to 'f'. */ finit(f, FREAD | FWRITE, DTYPE_CRYPTO, fcr, &cryptofops); *(u_int32_t *)data = fd; fdrop(f, td); break; case CRIOFINDDEV: error = cryptodev_find((struct crypt_find_op *)data); break; case CRIOASYMFEAT: error = crypto_getfeat((int *)data); break; default: error = EINVAL; break; } return (error); } static struct cdevsw crypto_cdevsw = { .d_version = D_VERSION, .d_flags = D_NEEDGIANT, .d_open = cryptoopen, .d_read = cryptoread, .d_write = cryptowrite, .d_ioctl = cryptoioctl, .d_name = "crypto", }; static struct cdev *crypto_dev; /* * Initialization code, both for static and dynamic loading. */ static int cryptodev_modevent(module_t mod, int type, void *unused) { switch (type) { case MOD_LOAD: if (bootverbose) printf("crypto: \n"); crypto_dev = make_dev(&crypto_cdevsw, 0, UID_ROOT, GID_WHEEL, 0666, "crypto"); return 0; case MOD_UNLOAD: /*XXX disallow if active sessions */ destroy_dev(crypto_dev); return 0; } return EINVAL; } static moduledata_t cryptodev_mod = { "cryptodev", cryptodev_modevent, 0 }; MODULE_VERSION(cryptodev, 1); DECLARE_MODULE(cryptodev, cryptodev_mod, SI_SUB_PSEUDO, SI_ORDER_ANY); MODULE_DEPEND(cryptodev, crypto, 1, 1, 1); MODULE_DEPEND(cryptodev, zlib, 1, 1, 1); Index: projects/import-googletest-1.8.1/sys/opencrypto/cryptodev.h =================================================================== --- projects/import-googletest-1.8.1/sys/opencrypto/cryptodev.h (revision 344270) +++ projects/import-googletest-1.8.1/sys/opencrypto/cryptodev.h (revision 344271) @@ -1,570 +1,577 @@ /* $FreeBSD$ */ /* $OpenBSD: cryptodev.h,v 1.31 2002/06/11 11:14:29 beck Exp $ */ /*- * The author of this code is Angelos D. Keromytis (angelos@cis.upenn.edu) * Copyright (c) 2002-2006 Sam Leffler, Errno Consulting * * This code was written by Angelos D. Keromytis in Athens, Greece, in * February 2000. Network Security Technologies Inc. (NSTI) kindly * supported the development of this code. * * Copyright (c) 2000 Angelos D. Keromytis * * Permission to use, copy, and modify this software with or without fee * is hereby granted, provided that this entire notice is included in * all source code copies of any software which is or includes a copy or * modification of this software. * * THIS SOFTWARE IS BEING PROVIDED "AS IS", WITHOUT ANY EXPRESS OR * IMPLIED WARRANTY. IN PARTICULAR, NONE OF THE AUTHORS MAKES ANY * REPRESENTATION OR WARRANTY OF ANY KIND CONCERNING THE * MERCHANTABILITY OF THIS SOFTWARE OR ITS FITNESS FOR ANY PARTICULAR * PURPOSE. * * Copyright (c) 2001 Theo de Raadt * Copyright (c) 2014 The FreeBSD Foundation * All rights reserved. * * Portions of this software were developed by John-Mark Gurney * under sponsorship of the FreeBSD Foundation and * Rubicon Communications, LLC (Netgate). * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * 3. 
The name of the author may not be used to endorse or promote products * derived from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. * IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. * * Effort sponsored in part by the Defense Advanced Research Projects * Agency (DARPA) and Air Force Research Laboratory, Air Force * Materiel Command, USAF, under agreement number F30602-01-2-0537. * */ #ifndef _CRYPTO_CRYPTO_H_ #define _CRYPTO_CRYPTO_H_ #include #ifdef _KERNEL #include #include #endif /* Some initial values */ #define CRYPTO_DRIVERS_INITIAL 4 #define CRYPTO_SW_SESSIONS 32 /* Hash values */ #define NULL_HASH_LEN 16 #define MD5_HASH_LEN 16 #define SHA1_HASH_LEN 20 #define RIPEMD160_HASH_LEN 20 #define SHA2_224_HASH_LEN 28 #define SHA2_256_HASH_LEN 32 #define SHA2_384_HASH_LEN 48 #define SHA2_512_HASH_LEN 64 #define MD5_KPDK_HASH_LEN 16 #define SHA1_KPDK_HASH_LEN 20 #define AES_GMAC_HASH_LEN 16 #define POLY1305_HASH_LEN 16 +#define AES_CBC_MAC_HASH_LEN 16 /* Maximum hash algorithm result length */ #define HASH_MAX_LEN SHA2_512_HASH_LEN /* Keep this updated */ #define MD5_BLOCK_LEN 64 #define SHA1_BLOCK_LEN 64 #define RIPEMD160_BLOCK_LEN 64 #define SHA2_224_BLOCK_LEN 64 #define SHA2_256_BLOCK_LEN 64 #define SHA2_384_BLOCK_LEN 128 #define SHA2_512_BLOCK_LEN 128 /* HMAC values */ #define NULL_HMAC_BLOCK_LEN 64 /* Maximum HMAC block length */ #define HMAC_MAX_BLOCK_LEN SHA2_512_BLOCK_LEN /* Keep this updated */ #define HMAC_IPAD_VAL 0x36 #define HMAC_OPAD_VAL 0x5C /* HMAC Key Length */ #define AES_128_GMAC_KEY_LEN 16 #define AES_192_GMAC_KEY_LEN 24 #define AES_256_GMAC_KEY_LEN 32 +#define AES_128_CBC_MAC_KEY_LEN 16 +#define AES_192_CBC_MAC_KEY_LEN 24 +#define AES_256_CBC_MAC_KEY_LEN 32 #define POLY1305_KEY_LEN 32 /* Encryption algorithm block sizes */ #define NULL_BLOCK_LEN 4 /* IPsec to maintain alignment */ #define DES_BLOCK_LEN 8 #define DES3_BLOCK_LEN 8 #define BLOWFISH_BLOCK_LEN 8 #define SKIPJACK_BLOCK_LEN 8 #define CAST128_BLOCK_LEN 8 #define RIJNDAEL128_BLOCK_LEN 16 #define AES_BLOCK_LEN 16 #define AES_ICM_BLOCK_LEN 1 #define ARC4_BLOCK_LEN 1 #define CAMELLIA_BLOCK_LEN 16 #define CHACHA20_NATIVE_BLOCK_LEN 64 #define EALG_MAX_BLOCK_LEN CHACHA20_NATIVE_BLOCK_LEN /* Keep this updated */ /* IV Lengths */ #define ARC4_IV_LEN 1 #define AES_GCM_IV_LEN 12 +#define AES_CCM_IV_LEN 12 #define AES_XTS_IV_LEN 8 #define AES_XTS_ALPHA 0x87 /* GF(2^128) generator polynomial */ /* Min and Max Encryption Key Sizes */ #define NULL_MIN_KEY 0 #define NULL_MAX_KEY 256 /* 2048 bits, max key */ #define DES_MIN_KEY 8 #define DES_MAX_KEY DES_MIN_KEY #define TRIPLE_DES_MIN_KEY 24 #define TRIPLE_DES_MAX_KEY TRIPLE_DES_MIN_KEY #define BLOWFISH_MIN_KEY 5 #define BLOWFISH_MAX_KEY 56 /* 448 bits, max key */ #define CAST_MIN_KEY 5 #define CAST_MAX_KEY 16 #define SKIPJACK_MIN_KEY 10 #define SKIPJACK_MAX_KEY SKIPJACK_MIN_KEY #define RIJNDAEL_MIN_KEY 16 
#define RIJNDAEL_MAX_KEY 32 #define AES_MIN_KEY RIJNDAEL_MIN_KEY #define AES_MAX_KEY RIJNDAEL_MAX_KEY #define AES_XTS_MIN_KEY (2 * AES_MIN_KEY) #define AES_XTS_MAX_KEY (2 * AES_MAX_KEY) #define ARC4_MIN_KEY 1 #define ARC4_MAX_KEY 32 #define CAMELLIA_MIN_KEY 8 #define CAMELLIA_MAX_KEY 32 /* Maximum hash algorithm result length */ #define AALG_MAX_RESULT_LEN 64 /* Keep this updated */ #define CRYPTO_ALGORITHM_MIN 1 #define CRYPTO_DES_CBC 1 #define CRYPTO_3DES_CBC 2 #define CRYPTO_BLF_CBC 3 #define CRYPTO_CAST_CBC 4 #define CRYPTO_SKIPJACK_CBC 5 #define CRYPTO_MD5_HMAC 6 #define CRYPTO_SHA1_HMAC 7 #define CRYPTO_RIPEMD160_HMAC 8 #define CRYPTO_MD5_KPDK 9 #define CRYPTO_SHA1_KPDK 10 #define CRYPTO_RIJNDAEL128_CBC 11 /* 128 bit blocksize */ #define CRYPTO_AES_CBC 11 /* 128 bit blocksize -- the same as above */ #define CRYPTO_ARC4 12 #define CRYPTO_MD5 13 #define CRYPTO_SHA1 14 #define CRYPTO_NULL_HMAC 15 #define CRYPTO_NULL_CBC 16 #define CRYPTO_DEFLATE_COMP 17 /* Deflate compression algorithm */ #define CRYPTO_SHA2_256_HMAC 18 #define CRYPTO_SHA2_384_HMAC 19 #define CRYPTO_SHA2_512_HMAC 20 #define CRYPTO_CAMELLIA_CBC 21 #define CRYPTO_AES_XTS 22 #define CRYPTO_AES_ICM 23 /* commonly known as CTR mode */ #define CRYPTO_AES_NIST_GMAC 24 /* cipher side */ #define CRYPTO_AES_NIST_GCM_16 25 /* 16 byte ICV */ #define CRYPTO_AES_128_NIST_GMAC 26 /* auth side */ #define CRYPTO_AES_192_NIST_GMAC 27 /* auth side */ #define CRYPTO_AES_256_NIST_GMAC 28 /* auth side */ #define CRYPTO_BLAKE2B 29 /* Blake2b hash */ #define CRYPTO_BLAKE2S 30 /* Blake2s hash */ #define CRYPTO_CHACHA20 31 /* Chacha20 stream cipher */ #define CRYPTO_SHA2_224_HMAC 32 #define CRYPTO_RIPEMD160 33 #define CRYPTO_SHA2_224 34 #define CRYPTO_SHA2_256 35 #define CRYPTO_SHA2_384 36 #define CRYPTO_SHA2_512 37 #define CRYPTO_POLY1305 38 -#define CRYPTO_ALGORITHM_MAX 38 /* Keep updated - see below */ +#define CRYPTO_AES_CCM_CBC_MAC 39 /* auth side */ +#define CRYPTO_AES_CCM_16 40 /* cipher side */ +#define CRYPTO_ALGORITHM_MAX 40 /* Keep updated - see below */ #define CRYPTO_ALGO_VALID(x) ((x) >= CRYPTO_ALGORITHM_MIN && \ (x) <= CRYPTO_ALGORITHM_MAX) /* Algorithm flags */ #define CRYPTO_ALG_FLAG_SUPPORTED 0x01 /* Algorithm is supported */ #define CRYPTO_ALG_FLAG_RNG_ENABLE 0x02 /* Has HW RNG for DH/DSA */ #define CRYPTO_ALG_FLAG_DSA_SHA 0x04 /* Can do SHA on msg */ /* * Crypto driver/device flags. They can set in the crid * parameter when creating a session or submitting a key * op to affect the device/driver assigned. If neither * of these are specified then the crid is assumed to hold * the driver id of an existing (and suitable) device that * must be used to satisfy the request. */ #define CRYPTO_FLAG_HARDWARE 0x01000000 /* hardware accelerated */ #define CRYPTO_FLAG_SOFTWARE 0x02000000 /* software implementation */ /* NB: deprecated */ struct session_op { u_int32_t cipher; /* ie. CRYPTO_DES_CBC */ u_int32_t mac; /* ie. CRYPTO_MD5_HMAC */ u_int32_t keylen; /* cipher key */ c_caddr_t key; int mackeylen; /* mac key */ c_caddr_t mackey; u_int32_t ses; /* returns: session # */ }; /* * session and crypt _op structs are used by userspace programs to interact * with /dev/crypto. Confusingly, the internal kernel interface is named * "cryptop" (no underscore). */ struct session2_op { u_int32_t cipher; /* ie. CRYPTO_DES_CBC */ u_int32_t mac; /* ie. 
CRYPTO_MD5_HMAC */ u_int32_t keylen; /* cipher key */ c_caddr_t key; int mackeylen; /* mac key */ c_caddr_t mackey; u_int32_t ses; /* returns: session # */ int crid; /* driver id + flags (rw) */ int pad[4]; /* for future expansion */ }; struct crypt_op { u_int32_t ses; u_int16_t op; /* i.e. COP_ENCRYPT */ #define COP_ENCRYPT 1 #define COP_DECRYPT 2 u_int16_t flags; #define COP_F_CIPHER_FIRST 0x0001 /* Cipher before MAC. */ #define COP_F_BATCH 0x0008 /* Batch op if possible */ u_int len; c_caddr_t src; /* become iov[] inside kernel */ caddr_t dst; caddr_t mac; /* must be big enough for chosen MAC */ c_caddr_t iv; }; /* op and flags the same as crypt_op */ struct crypt_aead { u_int32_t ses; u_int16_t op; /* i.e. COP_ENCRYPT */ u_int16_t flags; u_int len; u_int aadlen; u_int ivlen; c_caddr_t src; /* become iov[] inside kernel */ caddr_t dst; c_caddr_t aad; /* additional authenticated data */ caddr_t tag; /* must fit for chosen TAG length */ c_caddr_t iv; }; /* * Parameters for looking up a crypto driver/device by * device name or by id. The latter are returned for * created sessions (crid) and completed key operations. */ struct crypt_find_op { int crid; /* driver id + flags */ char name[32]; /* device/driver name */ }; /* bignum parameter, in packed bytes, ... */ struct crparam { caddr_t crp_p; u_int crp_nbits; }; #define CRK_MAXPARAM 8 struct crypt_kop { u_int crk_op; /* ie. CRK_MOD_EXP or other */ u_int crk_status; /* return status */ u_short crk_iparams; /* # of input parameters */ u_short crk_oparams; /* # of output parameters */ u_int crk_crid; /* NB: only used by CIOCKEY2 (rw) */ struct crparam crk_param[CRK_MAXPARAM]; }; #define CRK_ALGORITM_MIN 0 #define CRK_MOD_EXP 0 #define CRK_MOD_EXP_CRT 1 #define CRK_DSA_SIGN 2 #define CRK_DSA_VERIFY 3 #define CRK_DH_COMPUTE_KEY 4 #define CRK_ALGORITHM_MAX 4 /* Keep updated - see below */ #define CRF_MOD_EXP (1 << CRK_MOD_EXP) #define CRF_MOD_EXP_CRT (1 << CRK_MOD_EXP_CRT) #define CRF_DSA_SIGN (1 << CRK_DSA_SIGN) #define CRF_DSA_VERIFY (1 << CRK_DSA_VERIFY) #define CRF_DH_COMPUTE_KEY (1 << CRK_DH_COMPUTE_KEY) /* * done against open of /dev/crypto, to get a cloned descriptor. * Please use F_SETFD against the cloned descriptor. 
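
Putting the descriptor-cloning comment above into practice: a hedged end-to-end userland sketch that clones a descriptor with CRIOGET, sets close-on-exec as recommended, and creates a session for the AES-CCM algorithm pair added by this change. The key bytes are illustrative, CCM reuses one key for both the cipher and MAC halves here, and session creation can still fail depending on the available drivers and the crypto sysctls:

#include <sys/ioctl.h>
#include <crypto/cryptodev.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int
main(void)
{
	struct session2_op ses = { 0 };
	unsigned char key[16] = "0123456789abcde";	/* 15 chars + NUL */
	int cfd, fd;

	if ((cfd = open("/dev/crypto", O_RDWR)) < 0) {
		perror("open(/dev/crypto)");
		return (1);
	}
	if (ioctl(cfd, CRIOGET, &fd) < 0) {
		perror("CRIOGET");
		return (1);
	}
	fcntl(fd, F_SETFD, FD_CLOEXEC);		/* per the comment above */

	ses.cipher = CRYPTO_AES_CCM_16;
	ses.mac = CRYPTO_AES_CCM_CBC_MAC;
	ses.key = (const char *)key;
	ses.keylen = sizeof(key);	/* 16/24/32; also picks the MAC width */
	ses.mackey = (const char *)key;
	ses.mackeylen = sizeof(key);
	ses.crid = CRYPTO_FLAG_HARDWARE | CRYPTO_FLAG_SOFTWARE;

	if (ioctl(fd, CIOCGSESSION2, &ses) < 0) {
		perror("CIOCGSESSION2");
		return (1);
	}
	printf("session %u on crid 0x%x\n", ses.ses, ses.crid);

	ioctl(fd, CIOCFSESSION, &ses.ses);
	close(fd);
	close(cfd);
	return (0);
}
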
*/ #define CRIOGET _IOWR('c', 100, u_int32_t) #define CRIOASYMFEAT CIOCASYMFEAT #define CRIOFINDDEV CIOCFINDDEV /* the following are done against the cloned descriptor */ #define CIOCGSESSION _IOWR('c', 101, struct session_op) #define CIOCFSESSION _IOW('c', 102, u_int32_t) #define CIOCCRYPT _IOWR('c', 103, struct crypt_op) #define CIOCKEY _IOWR('c', 104, struct crypt_kop) #define CIOCASYMFEAT _IOR('c', 105, u_int32_t) #define CIOCGSESSION2 _IOWR('c', 106, struct session2_op) #define CIOCKEY2 _IOWR('c', 107, struct crypt_kop) #define CIOCFINDDEV _IOWR('c', 108, struct crypt_find_op) #define CIOCCRYPTAEAD _IOWR('c', 109, struct crypt_aead) struct cryptotstat { struct timespec acc; /* total accumulated time */ struct timespec min; /* min time */ struct timespec max; /* max time */ u_int32_t count; /* number of observations */ }; struct cryptostats { u_int32_t cs_ops; /* symmetric crypto ops submitted */ u_int32_t cs_errs; /* symmetric crypto ops that failed */ u_int32_t cs_kops; /* asymetric/key ops submitted */ u_int32_t cs_kerrs; /* asymetric/key ops that failed */ u_int32_t cs_intrs; /* crypto swi thread activations */ u_int32_t cs_rets; /* crypto return thread activations */ u_int32_t cs_blocks; /* symmetric op driver block */ u_int32_t cs_kblocks; /* symmetric op driver block */ /* * When CRYPTO_TIMING is defined at compile time and the * sysctl debug.crypto is set to 1, the crypto system will * accumulate statistics about how long it takes to process * crypto requests at various points during processing. */ struct cryptotstat cs_invoke; /* crypto_dipsatch -> crypto_invoke */ struct cryptotstat cs_done; /* crypto_invoke -> crypto_done */ struct cryptotstat cs_cb; /* crypto_done -> callback */ struct cryptotstat cs_finis; /* callback -> callback return */ }; #ifdef _KERNEL #if 0 #define CRYPTDEB(s, ...) do { \ printf("%s:%d: " s "\n", __FILE__, __LINE__, ## __VA_ARGS__); \ } while (0) #else #define CRYPTDEB(...) do { } while (0) #endif /* Standard initialization structure beginning */ struct cryptoini { int cri_alg; /* Algorithm to use */ int cri_klen; /* Key length, in bits */ int cri_mlen; /* Number of bytes we want from the entire hash. 0 means all. */ caddr_t cri_key; /* key to use */ u_int8_t cri_iv[EALG_MAX_BLOCK_LEN]; /* IV to use */ struct cryptoini *cri_next; }; /* Describe boundaries of a single crypto operation */ struct cryptodesc { int crd_skip; /* How many bytes to ignore from start */ int crd_len; /* How many bytes to process */ int crd_inject; /* Where to inject results, if applicable */ int crd_flags; #define CRD_F_ENCRYPT 0x01 /* Set when doing encryption */ #define CRD_F_IV_PRESENT 0x02 /* When encrypting, IV is already in place, so don't copy. */ #define CRD_F_IV_EXPLICIT 0x04 /* IV explicitly provided */ #define CRD_F_DSA_SHA_NEEDED 0x08 /* Compute SHA-1 of buffer for DSA */ #define CRD_F_COMP 0x0f /* Set when doing compression */ #define CRD_F_KEY_EXPLICIT 0x10 /* Key explicitly provided */ struct cryptoini CRD_INI; /* Initialization/context data */ #define crd_esn CRD_INI.cri_esn #define crd_iv CRD_INI.cri_iv #define crd_key CRD_INI.cri_key #define crd_alg CRD_INI.cri_alg #define crd_klen CRD_INI.cri_klen struct cryptodesc *crd_next; }; /* Structure describing complete operation */ struct cryptop { TAILQ_ENTRY(cryptop) crp_next; struct task crp_task; crypto_session_t crp_session; /* Session */ int crp_ilen; /* Input data total length */ int crp_olen; /* Result total length */ int crp_etype; /* * Error type (zero means no error). 
* All error codes except EAGAIN * indicate possible data corruption (as in, * the data have been touched). On all * errors, the crp_session may have changed * (reset to a new one), so the caller * should always check and use the new * value on future requests. */ int crp_flags; #define CRYPTO_F_IMBUF 0x0001 /* Input/output are mbuf chains */ #define CRYPTO_F_IOV 0x0002 /* Input/output are uio */ #define CRYPTO_F_BATCH 0x0008 /* Batch op if possible */ #define CRYPTO_F_CBIMM 0x0010 /* Do callback immediately */ #define CRYPTO_F_DONE 0x0020 /* Operation completed */ #define CRYPTO_F_CBIFSYNC 0x0040 /* Do CBIMM if op is synchronous */ #define CRYPTO_F_ASYNC 0x0080 /* Dispatch crypto jobs on several threads * if op is synchronous */ #define CRYPTO_F_ASYNC_KEEPORDER 0x0100 /* * Dispatch the crypto jobs in the same * order they are submitted. Applied only * if the CRYPTO_F_ASYNC flag is set */ union { caddr_t crp_buf; /* Data to be processed */ struct mbuf *crp_mbuf; struct uio *crp_uio; }; void * crp_opaque; /* Opaque pointer, passed along */ struct cryptodesc *crp_desc; /* Linked list of processing descriptors */ int (*crp_callback)(struct cryptop *); /* Callback function */ struct bintime crp_tstamp; /* performance time stamp */ uint32_t crp_seq; /* used for ordered dispatch */ uint32_t crp_retw_id; /* * the return worker to be used, * used for ordered dispatch */ }; #define CRYPTOP_ASYNC(crp) \ (((crp)->crp_flags & CRYPTO_F_ASYNC) && \ crypto_ses2caps((crp)->crp_session) & CRYPTOCAP_F_SYNC) #define CRYPTOP_ASYNC_KEEPORDER(crp) \ (CRYPTOP_ASYNC(crp) && \ (crp)->crp_flags & CRYPTO_F_ASYNC_KEEPORDER) #define CRYPTO_BUF_CONTIG 0x0 #define CRYPTO_BUF_IOV 0x1 #define CRYPTO_BUF_MBUF 0x2 #define CRYPTO_OP_DECRYPT 0x0 #define CRYPTO_OP_ENCRYPT 0x1 /* * Hints passed to process methods. */ #define CRYPTO_HINT_MORE 0x1 /* more ops coming shortly */ struct cryptkop { TAILQ_ENTRY(cryptkop) krp_next; u_int krp_op; /* ie. CRK_MOD_EXP or other */ u_int krp_status; /* return status */ u_short krp_iparams; /* # of input parameters */ u_short krp_oparams; /* # of output parameters */ u_int krp_crid; /* desired device, etc.
*/ u_int32_t krp_hid; struct crparam krp_param[CRK_MAXPARAM]; /* kvm */ int (*krp_callback)(struct cryptkop *); }; uint32_t crypto_ses2hid(crypto_session_t crypto_session); uint32_t crypto_ses2caps(crypto_session_t crypto_session); void *crypto_get_driver_session(crypto_session_t crypto_session); MALLOC_DECLARE(M_CRYPTO_DATA); extern int crypto_newsession(crypto_session_t *cses, struct cryptoini *cri, int hard); extern void crypto_freesession(crypto_session_t cses); #define CRYPTOCAP_F_HARDWARE CRYPTO_FLAG_HARDWARE #define CRYPTOCAP_F_SOFTWARE CRYPTO_FLAG_SOFTWARE #define CRYPTOCAP_F_SYNC 0x04000000 /* operates synchronously */ extern int32_t crypto_get_driverid(device_t dev, size_t session_size, int flags); extern int crypto_find_driver(const char *); extern device_t crypto_find_device_byhid(int hid); extern int crypto_getcaps(int hid); extern int crypto_register(u_int32_t driverid, int alg, u_int16_t maxoplen, u_int32_t flags); extern int crypto_kregister(u_int32_t, int, u_int32_t); extern int crypto_unregister(u_int32_t driverid, int alg); extern int crypto_unregister_all(u_int32_t driverid); extern int crypto_dispatch(struct cryptop *crp); extern int crypto_kdispatch(struct cryptkop *); #define CRYPTO_SYMQ 0x1 #define CRYPTO_ASYMQ 0x2 extern int crypto_unblock(u_int32_t, int); extern void crypto_done(struct cryptop *crp); extern void crypto_kdone(struct cryptkop *); extern int crypto_getfeat(int *); extern void crypto_freereq(struct cryptop *crp); extern struct cryptop *crypto_getreq(int num); extern int crypto_usercrypto; /* userland may do crypto requests */ extern int crypto_userasymcrypto; /* userland may do asym crypto reqs */ extern int crypto_devallowsoft; /* only use hardware crypto */ /* * Crypto-related utility routines used mainly by drivers. * * XXX these don't really belong here; but for now they're * kept apart from the rest of the system. */ struct uio; extern void cuio_copydata(struct uio* uio, int off, int len, caddr_t cp); extern void cuio_copyback(struct uio* uio, int off, int len, c_caddr_t cp); extern int cuio_getptr(struct uio *uio, int loc, int *off); extern int cuio_apply(struct uio *uio, int off, int len, int (*f)(void *, void *, u_int), void *arg); struct mbuf; struct iovec; extern int crypto_mbuftoiov(struct mbuf *mbuf, struct iovec **iovptr, int *cnt, int *allocated); extern void crypto_copyback(int flags, caddr_t buf, int off, int size, c_caddr_t in); extern void crypto_copydata(int flags, caddr_t buf, int off, int size, caddr_t out); extern int crypto_apply(int flags, caddr_t buf, int off, int len, int (*f)(void *, void *, u_int), void *arg); extern void *crypto_contiguous_subsegment(int, void *, size_t, size_t); #endif /* _KERNEL */ #endif /* _CRYPTO_CRYPTO_H_ */ Index: projects/import-googletest-1.8.1/sys/opencrypto/cryptosoft.c =================================================================== --- projects/import-googletest-1.8.1/sys/opencrypto/cryptosoft.c (revision 344270) +++ projects/import-googletest-1.8.1/sys/opencrypto/cryptosoft.c (revision 344271) @@ -1,1338 +1,1418 @@ /* $OpenBSD: cryptosoft.c,v 1.35 2002/04/26 08:43:50 deraadt Exp $ */ /*- * The author of this code is Angelos D. Keromytis (angelos@cis.upenn.edu) * Copyright (c) 2002-2006 Sam Leffler, Errno Consulting * * This code was written by Angelos D. Keromytis in Athens, Greece, in * February 2000. Network Security Technologies Inc. (NSTI) kindly * supported the development of this code. * * Copyright (c) 2000, 2001 Angelos D. 
Keromytis * Copyright (c) 2014 The FreeBSD Foundation * All rights reserved. * * Portions of this software were developed by John-Mark Gurney * under sponsorship of the FreeBSD Foundation and * Rubicon Communications, LLC (Netgate). * * Permission to use, copy, and modify this software with or without fee * is hereby granted, provided that this entire notice is included in * all source code copies of any software which is or includes a copy or * modification of this software. * * THIS SOFTWARE IS BEING PROVIDED "AS IS", WITHOUT ANY EXPRESS OR * IMPLIED WARRANTY. IN PARTICULAR, NONE OF THE AUTHORS MAKES ANY * REPRESENTATION OR WARRANTY OF ANY KIND CONCERNING THE * MERCHANTABILITY OF THIS SOFTWARE OR ITS FITNESS FOR ANY PARTICULAR * PURPOSE. */ #include __FBSDID("$FreeBSD$"); #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include "cryptodev_if.h" +_Static_assert(AES_CCM_IV_LEN == AES_GCM_IV_LEN, + "AES_GCM_IV_LEN must currently be the same as AES_CCM_IV_LEN"); + static int32_t swcr_id; u_int8_t hmac_ipad_buffer[HMAC_MAX_BLOCK_LEN]; u_int8_t hmac_opad_buffer[HMAC_MAX_BLOCK_LEN]; static int swcr_encdec(struct cryptodesc *, struct swcr_data *, caddr_t, int); static int swcr_authcompute(struct cryptodesc *, struct swcr_data *, caddr_t, int); static int swcr_authenc(struct cryptop *crp); static int swcr_compdec(struct cryptodesc *, struct swcr_data *, caddr_t, int); static void swcr_freesession(device_t dev, crypto_session_t cses); /* * Apply a symmetric encryption/decryption algorithm. */ static int swcr_encdec(struct cryptodesc *crd, struct swcr_data *sw, caddr_t buf, int flags) { unsigned char iv[EALG_MAX_BLOCK_LEN], blk[EALG_MAX_BLOCK_LEN]; unsigned char *ivp, *nivp, iv2[EALG_MAX_BLOCK_LEN]; struct enc_xform *exf; int i, j, k, blks, ind, count, ivlen; struct uio *uio, uiolcl; struct iovec iovlcl[4]; struct iovec *iov; int iovcnt, iovalloc; int error; error = 0; exf = sw->sw_exf; blks = exf->blocksize; ivlen = exf->ivsize; /* Check for non-padded data */ if (crd->crd_len % blks) return EINVAL; if (crd->crd_alg == CRYPTO_AES_ICM && (crd->crd_flags & CRD_F_IV_EXPLICIT) == 0) return (EINVAL); /* Initialize the IV */ if (crd->crd_flags & CRD_F_ENCRYPT) { /* IV explicitly provided ? */ if (crd->crd_flags & CRD_F_IV_EXPLICIT) bcopy(crd->crd_iv, iv, ivlen); else arc4rand(iv, ivlen, 0); /* Do we need to write the IV */ if (!(crd->crd_flags & CRD_F_IV_PRESENT)) crypto_copyback(flags, buf, crd->crd_inject, ivlen, iv); } else { /* Decryption */ /* IV explicitly provided ? 
*/ if (crd->crd_flags & CRD_F_IV_EXPLICIT) bcopy(crd->crd_iv, iv, ivlen); else { /* Get IV off buf */ crypto_copydata(flags, buf, crd->crd_inject, ivlen, iv); } } if (crd->crd_flags & CRD_F_KEY_EXPLICIT) { int error; if (sw->sw_kschedule) exf->zerokey(&(sw->sw_kschedule)); error = exf->setkey(&sw->sw_kschedule, crd->crd_key, crd->crd_klen / 8); if (error) return (error); } iov = iovlcl; iovcnt = nitems(iovlcl); iovalloc = 0; uio = &uiolcl; if ((flags & CRYPTO_F_IMBUF) != 0) { error = crypto_mbuftoiov((struct mbuf *)buf, &iov, &iovcnt, &iovalloc); if (error) return (error); uio->uio_iov = iov; uio->uio_iovcnt = iovcnt; } else if ((flags & CRYPTO_F_IOV) != 0) uio = (struct uio *)buf; else { iov[0].iov_base = buf; iov[0].iov_len = crd->crd_skip + crd->crd_len; uio->uio_iov = iov; uio->uio_iovcnt = 1; } ivp = iv; if (exf->reinit) { /* * xforms that provide a reinit method perform all IV * handling themselves. */ exf->reinit(sw->sw_kschedule, iv); } count = crd->crd_skip; ind = cuio_getptr(uio, count, &k); if (ind == -1) { error = EINVAL; goto out; } i = crd->crd_len; while (i > 0) { /* * If there's insufficient data at the end of * an iovec, we have to do some copying. */ if (uio->uio_iov[ind].iov_len < k + blks && uio->uio_iov[ind].iov_len != k) { cuio_copydata(uio, count, blks, blk); /* Actual encryption/decryption */ if (exf->reinit) { if (crd->crd_flags & CRD_F_ENCRYPT) { exf->encrypt(sw->sw_kschedule, blk); } else { exf->decrypt(sw->sw_kschedule, blk); } } else if (crd->crd_flags & CRD_F_ENCRYPT) { /* XOR with previous block */ for (j = 0; j < blks; j++) blk[j] ^= ivp[j]; exf->encrypt(sw->sw_kschedule, blk); /* * Keep encrypted block for XOR'ing * with next block */ bcopy(blk, iv, blks); ivp = iv; } else { /* decrypt */ /* * Keep encrypted block for XOR'ing * with next block */ nivp = (ivp == iv) ? iv2 : iv; bcopy(blk, nivp, blks); exf->decrypt(sw->sw_kschedule, blk); /* XOR with previous block */ for (j = 0; j < blks; j++) blk[j] ^= ivp[j]; ivp = nivp; } /* Copy back decrypted block */ cuio_copyback(uio, count, blks, blk); count += blks; /* Advance pointer */ ind = cuio_getptr(uio, count, &k); if (ind == -1) { error = EINVAL; goto out; } i -= blks; /* Could be done... */ if (i == 0) break; } while (uio->uio_iov[ind].iov_len >= k + blks && i > 0) { uint8_t *idat; size_t nb, rem; nb = blks; rem = MIN((size_t)i, uio->uio_iov[ind].iov_len - (size_t)k); idat = (uint8_t *)uio->uio_iov[ind].iov_base + k; if (exf->reinit) { if ((crd->crd_flags & CRD_F_ENCRYPT) != 0 && exf->encrypt_multi == NULL) exf->encrypt(sw->sw_kschedule, idat); else if ((crd->crd_flags & CRD_F_ENCRYPT) != 0) { nb = rounddown(rem, blks); exf->encrypt_multi(sw->sw_kschedule, idat, nb); } else if (exf->decrypt_multi == NULL) exf->decrypt(sw->sw_kschedule, idat); else { nb = rounddown(rem, blks); exf->decrypt_multi(sw->sw_kschedule, idat, nb); } } else if (crd->crd_flags & CRD_F_ENCRYPT) { /* XOR with previous block/IV */ for (j = 0; j < blks; j++) idat[j] ^= ivp[j]; exf->encrypt(sw->sw_kschedule, idat); ivp = idat; } else { /* decrypt */ /* * Keep encrypted block to be used * in next block's processing. */ nivp = (ivp == iv) ? iv2 : iv; bcopy(idat, nivp, blks); exf->decrypt(sw->sw_kschedule, idat); /* XOR with previous block/IV */ for (j = 0; j < blks; j++) idat[j] ^= ivp[j]; ivp = nivp; } count += nb; k += nb; i -= nb; } /* * Advance to the next iov if the end of the current iov * is aligned with the end of a cipher block. 
* Note that the code is equivalent to calling: * ind = cuio_getptr(uio, count, &k); */ if (i > 0 && k == uio->uio_iov[ind].iov_len) { k = 0; ind++; if (ind >= uio->uio_iovcnt) { error = EINVAL; goto out; } } } out: if (iovalloc) free(iov, M_CRYPTO_DATA); return (error); } static int __result_use_check swcr_authprepare(struct auth_hash *axf, struct swcr_data *sw, u_char *key, int klen) { int k; klen /= 8; switch (axf->type) { case CRYPTO_MD5_HMAC: case CRYPTO_SHA1_HMAC: case CRYPTO_SHA2_224_HMAC: case CRYPTO_SHA2_256_HMAC: case CRYPTO_SHA2_384_HMAC: case CRYPTO_SHA2_512_HMAC: case CRYPTO_NULL_HMAC: case CRYPTO_RIPEMD160_HMAC: for (k = 0; k < klen; k++) key[k] ^= HMAC_IPAD_VAL; axf->Init(sw->sw_ictx); axf->Update(sw->sw_ictx, key, klen); axf->Update(sw->sw_ictx, hmac_ipad_buffer, axf->blocksize - klen); for (k = 0; k < klen; k++) key[k] ^= (HMAC_IPAD_VAL ^ HMAC_OPAD_VAL); axf->Init(sw->sw_octx); axf->Update(sw->sw_octx, key, klen); axf->Update(sw->sw_octx, hmac_opad_buffer, axf->blocksize - klen); for (k = 0; k < klen; k++) key[k] ^= HMAC_OPAD_VAL; break; case CRYPTO_MD5_KPDK: case CRYPTO_SHA1_KPDK: { /* * We need a buffer that can hold an md5 and a sha1 result * just to throw it away. * What we do here is the initial part of: * ALGO( key, keyfill, .. ) * adding the key to sw_ictx and abusing Final() to get the * "keyfill" padding. * In addition we abuse the sw_octx to save the key to have * it to be able to append it at the end in swcr_authcompute(). */ u_char buf[SHA1_RESULTLEN]; sw->sw_klen = klen; bcopy(key, sw->sw_octx, klen); axf->Init(sw->sw_ictx); axf->Update(sw->sw_ictx, key, klen); axf->Final(buf, sw->sw_ictx); break; } case CRYPTO_POLY1305: if (klen != POLY1305_KEY_LEN) { CRYPTDEB("bad poly1305 key size %d", klen); return EINVAL; } /* FALLTHROUGH */ case CRYPTO_BLAKE2B: case CRYPTO_BLAKE2S: axf->Setkey(sw->sw_ictx, key, klen); axf->Init(sw->sw_ictx); break; default: printf("%s: CRD_F_KEY_EXPLICIT flag given, but algorithm %d " "doesn't use keys.\n", __func__, axf->type); return EINVAL; } return 0; } /* * Compute keyed-hash authenticator. */ static int swcr_authcompute(struct cryptodesc *crd, struct swcr_data *sw, caddr_t buf, int flags) { unsigned char aalg[HASH_MAX_LEN]; struct auth_hash *axf; union authctx ctx; int err; if (sw->sw_ictx == 0) return EINVAL; axf = sw->sw_axf; if (crd->crd_flags & CRD_F_KEY_EXPLICIT) { err = swcr_authprepare(axf, sw, crd->crd_key, crd->crd_klen); if (err != 0) return err; } bcopy(sw->sw_ictx, &ctx, axf->ctxsize); err = crypto_apply(flags, buf, crd->crd_skip, crd->crd_len, (int (*)(void *, void *, unsigned int))axf->Update, (caddr_t)&ctx); if (err) return err; switch (sw->sw_alg) { case CRYPTO_SHA1: case CRYPTO_SHA2_224: case CRYPTO_SHA2_256: case CRYPTO_SHA2_384: case CRYPTO_SHA2_512: axf->Final(aalg, &ctx); break; case CRYPTO_MD5_HMAC: case CRYPTO_SHA1_HMAC: case CRYPTO_SHA2_224_HMAC: case CRYPTO_SHA2_256_HMAC: case CRYPTO_SHA2_384_HMAC: case CRYPTO_SHA2_512_HMAC: case CRYPTO_RIPEMD160_HMAC: if (sw->sw_octx == NULL) return EINVAL; axf->Final(aalg, &ctx); bcopy(sw->sw_octx, &ctx, axf->ctxsize); axf->Update(&ctx, aalg, axf->hashsize); axf->Final(aalg, &ctx); break; case CRYPTO_MD5_KPDK: case CRYPTO_SHA1_KPDK: /* If we have no key saved, return error. */ if (sw->sw_octx == NULL) return EINVAL; /* * Add the trailing copy of the key (see comment in * swcr_authprepare()) after the data: * ALGO( .., key, algofill ) * and let Final() do the proper, natural "algofill" * padding. 
*/ axf->Update(&ctx, sw->sw_octx, sw->sw_klen); axf->Final(aalg, &ctx); break; case CRYPTO_BLAKE2B: case CRYPTO_BLAKE2S: case CRYPTO_NULL_HMAC: case CRYPTO_POLY1305: axf->Final(aalg, &ctx); break; } /* Inject the authentication data */ crypto_copyback(flags, buf, crd->crd_inject, sw->sw_mlen == 0 ? axf->hashsize : sw->sw_mlen, aalg); return 0; } CTASSERT(INT_MAX <= (1ll<<39) - 256); /* GCM: plain text < 2^39-256 */ CTASSERT(INT_MAX <= (uint64_t)-1); /* GCM: associated data <= 2^64-1 */ /* * Apply a combined encryption-authentication transformation */ static int swcr_authenc(struct cryptop *crp) { uint32_t blkbuf[howmany(EALG_MAX_BLOCK_LEN, sizeof(uint32_t))]; u_char *blk = (u_char *)blkbuf; u_char aalg[AALG_MAX_RESULT_LEN]; u_char uaalg[AALG_MAX_RESULT_LEN]; u_char iv[EALG_MAX_BLOCK_LEN]; union authctx ctx; struct swcr_session *ses; struct cryptodesc *crd, *crda = NULL, *crde = NULL; struct swcr_data *sw, *swa, *swe = NULL; struct auth_hash *axf = NULL; struct enc_xform *exf = NULL; caddr_t buf = (caddr_t)crp->crp_buf; uint32_t *blkp; int aadlen, blksz, i, ivlen, len, iskip, oskip, r; + int isccm = 0; ivlen = blksz = iskip = oskip = 0; ses = crypto_get_driver_session(crp->crp_session); for (crd = crp->crp_desc; crd; crd = crd->crd_next) { for (i = 0; i < nitems(ses->swcr_algorithms) && ses->swcr_algorithms[i].sw_alg != crd->crd_alg; i++) ; if (i == nitems(ses->swcr_algorithms)) return (EINVAL); sw = &ses->swcr_algorithms[i]; switch (sw->sw_alg) { + case CRYPTO_AES_CCM_16: case CRYPTO_AES_NIST_GCM_16: case CRYPTO_AES_NIST_GMAC: swe = sw; crde = crd; exf = swe->sw_exf; - ivlen = 12; + /* AES_CCM_IV_LEN and AES_GCM_IV_LEN are both 12 */ + ivlen = AES_CCM_IV_LEN; break; + case CRYPTO_AES_CCM_CBC_MAC: + isccm = 1; + /* FALLTHROUGH */ case CRYPTO_AES_128_NIST_GMAC: case CRYPTO_AES_192_NIST_GMAC: case CRYPTO_AES_256_NIST_GMAC: swa = sw; crda = crd; axf = swa->sw_axf; if (swa->sw_ictx == 0) return (EINVAL); bcopy(swa->sw_ictx, &ctx, axf->ctxsize); blksz = axf->blocksize; break; default: return (EINVAL); } } if (crde == NULL || crda == NULL) return (EINVAL); + /* + * We need to make sure that the auth algorithm matches the + * encr algorithm. Specifically, AES-GCM must go with + * AES NIST GMAC, and AES-CCM must go with CBC-MAC. + */ + if (crde->crd_alg == CRYPTO_AES_NIST_GCM_16) { + switch (crda->crd_alg) { + case CRYPTO_AES_128_NIST_GMAC: + case CRYPTO_AES_192_NIST_GMAC: + case CRYPTO_AES_256_NIST_GMAC: + break; /* Good! */ + default: + return (EINVAL); /* Not good! */ + } + } else if (crde->crd_alg == CRYPTO_AES_CCM_16 && + crda->crd_alg != CRYPTO_AES_CCM_CBC_MAC) + return (EINVAL); - if (crde->crd_alg == CRYPTO_AES_NIST_GCM_16 && + if ((crde->crd_alg == CRYPTO_AES_NIST_GCM_16 || + crde->crd_alg == CRYPTO_AES_CCM_16) && (crde->crd_flags & CRD_F_IV_EXPLICIT) == 0) return (EINVAL); if (crde->crd_klen != crda->crd_klen) return (EINVAL); /* Initialize the IV */ if (crde->crd_flags & CRD_F_ENCRYPT) { /* IV explicitly provided ? */ if (crde->crd_flags & CRD_F_IV_EXPLICIT) bcopy(crde->crd_iv, iv, ivlen); else arc4rand(iv, ivlen, 0); /* Do we need to write the IV */ if (!(crde->crd_flags & CRD_F_IV_PRESENT)) crypto_copyback(crp->crp_flags, buf, crde->crd_inject, ivlen, iv); } else { /* Decryption */ /* IV explicitly provided ?
*/ if (crde->crd_flags & CRD_F_IV_EXPLICIT) bcopy(crde->crd_iv, iv, ivlen); else { /* Get IV off buf */ crypto_copydata(crp->crp_flags, buf, crde->crd_inject, ivlen, iv); } } + if (swa->sw_alg == CRYPTO_AES_CCM_CBC_MAC) { + /* + * AES CCM-CBC needs to know the lengths of + * both the auth data and the payload data before + * doing the auth computation. + */ + ctx.aes_cbc_mac_ctx.authDataLength = crda->crd_len; + ctx.aes_cbc_mac_ctx.cryptDataLength = crde->crd_len; + } /* Supply MAC with IV */ if (axf->Reinit) axf->Reinit(&ctx, iv, ivlen); /* Supply MAC with AAD */ aadlen = crda->crd_len; for (i = iskip; i < crda->crd_len; i += blksz) { len = MIN(crda->crd_len - i, blksz - oskip); crypto_copydata(crp->crp_flags, buf, crda->crd_skip + i, len, blk + oskip); bzero(blk + len + oskip, blksz - len - oskip); axf->Update(&ctx, blk, blksz); oskip = 0; /* reset initial output offset */ } if (exf->reinit) exf->reinit(swe->sw_kschedule, iv); /* Do encryption/decryption with MAC */ for (i = 0; i < crde->crd_len; i += len) { if (exf->encrypt_multi != NULL) { len = rounddown(crde->crd_len - i, blksz); if (len == 0) len = blksz; else len = MIN(len, sizeof(blkbuf)); } else len = blksz; len = MIN(crde->crd_len - i, len); if (len < blksz) bzero(blk, blksz); crypto_copydata(crp->crp_flags, buf, crde->crd_skip + i, len, blk); + /* + * One of the problems with CCM+CBC is that the authentication + * is done on the unencrypted data. As a result, we have + * to do the authentication update at different times, + * depending on whether it's CCM or not. + */ if (crde->crd_flags & CRD_F_ENCRYPT) { + if (isccm) + axf->Update(&ctx, blk, len); if (exf->encrypt_multi != NULL) exf->encrypt_multi(swe->sw_kschedule, blk, len); else exf->encrypt(swe->sw_kschedule, blk); - axf->Update(&ctx, blk, len); + if (!isccm) + axf->Update(&ctx, blk, len); crypto_copyback(crp->crp_flags, buf, crde->crd_skip + i, len, blk); } else { + if (isccm) { + KASSERT(exf->encrypt_multi == NULL, + ("assume CCM is single-block only")); + exf->decrypt(swe->sw_kschedule, blk); + } axf->Update(&ctx, blk, len); } } /* Do any required special finalization */ switch (crda->crd_alg) { case CRYPTO_AES_128_NIST_GMAC: case CRYPTO_AES_192_NIST_GMAC: case CRYPTO_AES_256_NIST_GMAC: /* length block */ bzero(blk, blksz); blkp = (uint32_t *)blk + 1; *blkp = htobe32(aadlen * 8); blkp = (uint32_t *)blk + 3; *blkp = htobe32(crde->crd_len * 8); axf->Update(&ctx, blk, blksz); break; } /* Finalize MAC */ axf->Final(aalg, &ctx); /* Validate tag */ if (!(crde->crd_flags & CRD_F_ENCRYPT)) { crypto_copydata(crp->crp_flags, buf, crda->crd_inject, axf->hashsize, uaalg); r = timingsafe_bcmp(aalg, uaalg, axf->hashsize); if (r == 0) { /* tag matches, decrypt data */ + if (isccm) { + KASSERT(exf->reinit != NULL, + ("AES-CCM reinit function must be set")); + exf->reinit(swe->sw_kschedule, iv); + } for (i = 0; i < crde->crd_len; i += blksz) { len = MIN(crde->crd_len - i, blksz); if (len < blksz) bzero(blk, blksz); crypto_copydata(crp->crp_flags, buf, crde->crd_skip + i, len, blk); exf->decrypt(swe->sw_kschedule, blk); crypto_copyback(crp->crp_flags, buf, crde->crd_skip + i, len, blk); } } else return (EBADMSG); } else { /* Inject the authentication data */ crypto_copyback(crp->crp_flags, buf, crda->crd_inject, axf->hashsize, aalg); } return (0); }
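For illustration, the AES-CCM path above is reached from userland through the CIOCCRYPTAEAD ioctl declared in cryptodev.h earlier in this patch. The following is a minimal hypothetical sketch, not part of this change: it pairs CRYPTO_AES_CCM_16 with CRYPTO_AES_CCM_CBC_MAC as swcr_authenc() requires, supplies the same 16-byte key for cipher and MAC (the crd_klen check above demands equal key lengths), and assumes a 16-byte tag and the 12-byte nonce noted in the diff; error handling is abbreviated.

#include <sys/types.h>
#include <sys/ioctl.h>
#include <crypto/cryptodev.h>
#include <fcntl.h>
#include <string.h>
#include <err.h>

int
main(void)
{
	struct session2_op sop;
	struct crypt_aead cao;
	u_char key[16] = "0123456789abcdef";	/* demo key only */
	u_char iv[12];				/* AES_CCM_IV_LEN bytes */
	u_char aad[8] = "hdrbytes";		/* additional authenticated data */
	u_char buf[64];				/* plaintext in, ciphertext out */
	u_char tag[16];				/* assumed 16-byte CBC-MAC tag */
	int fd, cfd;

	memset(iv, 0, sizeof(iv));		/* never reuse a nonce in practice */
	memset(buf, 0, sizeof(buf));

	if ((fd = open("/dev/crypto", O_RDWR)) < 0)
		err(1, "open");
	/* Clone a per-process descriptor, as the crypto.h comment asks. */
	if (ioctl(fd, CRIOGET, &cfd) < 0)
		err(1, "CRIOGET");

	memset(&sop, 0, sizeof(sop));
	sop.cipher = CRYPTO_AES_CCM_16;
	sop.mac = CRYPTO_AES_CCM_CBC_MAC;
	sop.keylen = sizeof(key);		/* must equal mackeylen */
	sop.key = (c_caddr_t)key;
	sop.mackeylen = sizeof(key);
	sop.mackey = (c_caddr_t)key;
	if (ioctl(cfd, CIOCGSESSION2, &sop) < 0)
		err(1, "CIOCGSESSION2");

	memset(&cao, 0, sizeof(cao));
	cao.ses = sop.ses;
	cao.op = COP_ENCRYPT;
	cao.len = sizeof(buf);
	cao.aadlen = sizeof(aad);
	cao.ivlen = sizeof(iv);			/* CCM requires an explicit IV */
	cao.src = (c_caddr_t)buf;
	cao.dst = (caddr_t)buf;
	cao.aad = (c_caddr_t)aad;
	cao.tag = (caddr_t)tag;
	cao.iv = (c_caddr_t)iv;
	if (ioctl(cfd, CIOCCRYPTAEAD, &cao) < 0)
		err(1, "CIOCCRYPTAEAD");

	ioctl(cfd, CIOCFSESSION, &sop.ses);
	return (0);
}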
/* * Apply a compression/decompression algorithm */ static int swcr_compdec(struct cryptodesc *crd, struct swcr_data *sw, caddr_t buf, int flags) { u_int8_t *data, *out; struct comp_algo *cxf; int adj; u_int32_t result; cxf = sw->sw_cxf; /* We must handle the whole buffer of data at once; if not * all of the data is in the mbuf, we must first copy it into * a contiguous buffer. */ data = malloc(crd->crd_len, M_CRYPTO_DATA, M_NOWAIT); if (data == NULL) return (EINVAL); crypto_copydata(flags, buf, crd->crd_skip, crd->crd_len, data); if (crd->crd_flags & CRD_F_COMP) result = cxf->compress(data, crd->crd_len, &out); else result = cxf->decompress(data, crd->crd_len, &out); free(data, M_CRYPTO_DATA); if (result == 0) return EINVAL; /* Copy back the (de)compressed data. m_copyback extends the * mbuf as necessary. */ sw->sw_size = result; /* Check the compressed size when doing compression */ if (crd->crd_flags & CRD_F_COMP) { if (result >= crd->crd_len) { /* Compression was useless, we lost time */ free(out, M_CRYPTO_DATA); return 0; } } crypto_copyback(flags, buf, crd->crd_skip, result, out); if (result < crd->crd_len) { adj = result - crd->crd_len; if (flags & CRYPTO_F_IMBUF) { adj = result - crd->crd_len; m_adj((struct mbuf *)buf, adj); } else if (flags & CRYPTO_F_IOV) { struct uio *uio = (struct uio *)buf; int ind; adj = crd->crd_len - result; ind = uio->uio_iovcnt - 1; while (adj > 0 && ind >= 0) { if (adj < uio->uio_iov[ind].iov_len) { uio->uio_iov[ind].iov_len -= adj; break; } adj -= uio->uio_iov[ind].iov_len; uio->uio_iov[ind].iov_len = 0; ind--; uio->uio_iovcnt--; } } } free(out, M_CRYPTO_DATA); return 0; } /* * Generate a new software session. */ static int swcr_newsession(device_t dev, crypto_session_t cses, struct cryptoini *cri) { struct swcr_session *ses; struct swcr_data *swd; struct auth_hash *axf; struct enc_xform *txf; struct comp_algo *cxf; size_t i; int len; int error; if (cses == NULL || cri == NULL) return EINVAL; ses = crypto_get_driver_session(cses); mtx_init(&ses->swcr_lock, "swcr session lock", NULL, MTX_DEF); for (i = 0; cri != NULL && i < nitems(ses->swcr_algorithms); i++) { swd = &ses->swcr_algorithms[i]; switch (cri->cri_alg) { case CRYPTO_DES_CBC: txf = &enc_xform_des; goto enccommon; case CRYPTO_3DES_CBC: txf = &enc_xform_3des; goto enccommon; case CRYPTO_BLF_CBC: txf = &enc_xform_blf; goto enccommon; case CRYPTO_CAST_CBC: txf = &enc_xform_cast5; goto enccommon; case CRYPTO_SKIPJACK_CBC: txf = &enc_xform_skipjack; goto enccommon; case CRYPTO_RIJNDAEL128_CBC: txf = &enc_xform_rijndael128; goto enccommon; case CRYPTO_AES_XTS: txf = &enc_xform_aes_xts; goto enccommon; case CRYPTO_AES_ICM: txf = &enc_xform_aes_icm; goto enccommon; case CRYPTO_AES_NIST_GCM_16: txf = &enc_xform_aes_nist_gcm; goto enccommon; + case CRYPTO_AES_CCM_16: + txf = &enc_xform_ccm; + goto enccommon; case CRYPTO_AES_NIST_GMAC: txf = &enc_xform_aes_nist_gmac; swd->sw_exf = txf; break; case CRYPTO_CAMELLIA_CBC: txf = &enc_xform_camellia; goto enccommon; case CRYPTO_NULL_CBC: txf = &enc_xform_null; goto enccommon; case CRYPTO_CHACHA20: txf = &enc_xform_chacha20; goto enccommon; enccommon: if (cri->cri_key != NULL) { error = txf->setkey(&swd->sw_kschedule, cri->cri_key, cri->cri_klen / 8); if (error) { swcr_freesession(dev, cses); return error; } } swd->sw_exf = txf; break; case CRYPTO_MD5_HMAC: axf = &auth_hash_hmac_md5; goto authcommon; case CRYPTO_SHA1_HMAC: axf = &auth_hash_hmac_sha1; goto authcommon; case CRYPTO_SHA2_224_HMAC: axf = &auth_hash_hmac_sha2_224; goto authcommon; case CRYPTO_SHA2_256_HMAC: axf = &auth_hash_hmac_sha2_256; goto authcommon; case CRYPTO_SHA2_384_HMAC: axf = &auth_hash_hmac_sha2_384; goto authcommon; case CRYPTO_SHA2_512_HMAC: axf = &auth_hash_hmac_sha2_512; goto authcommon; case CRYPTO_NULL_HMAC: axf =
&auth_hash_null; goto authcommon; case CRYPTO_RIPEMD160_HMAC: axf = &auth_hash_hmac_ripemd_160; authcommon: swd->sw_ictx = malloc(axf->ctxsize, M_CRYPTO_DATA, M_NOWAIT); if (swd->sw_ictx == NULL) { swcr_freesession(dev, cses); return ENOBUFS; } swd->sw_octx = malloc(axf->ctxsize, M_CRYPTO_DATA, M_NOWAIT); if (swd->sw_octx == NULL) { swcr_freesession(dev, cses); return ENOBUFS; } if (cri->cri_key != NULL) { error = swcr_authprepare(axf, swd, cri->cri_key, cri->cri_klen); if (error != 0) { swcr_freesession(dev, cses); return error; } } swd->sw_mlen = cri->cri_mlen; swd->sw_axf = axf; break; case CRYPTO_MD5_KPDK: axf = &auth_hash_key_md5; goto auth2common; case CRYPTO_SHA1_KPDK: axf = &auth_hash_key_sha1; auth2common: swd->sw_ictx = malloc(axf->ctxsize, M_CRYPTO_DATA, M_NOWAIT); if (swd->sw_ictx == NULL) { swcr_freesession(dev, cses); return ENOBUFS; } swd->sw_octx = malloc(cri->cri_klen / 8, M_CRYPTO_DATA, M_NOWAIT); if (swd->sw_octx == NULL) { swcr_freesession(dev, cses); return ENOBUFS; } /* Store the key so we can "append" it to the payload */ if (cri->cri_key != NULL) { error = swcr_authprepare(axf, swd, cri->cri_key, cri->cri_klen); if (error != 0) { swcr_freesession(dev, cses); return error; } } swd->sw_mlen = cri->cri_mlen; swd->sw_axf = axf; break; #ifdef notdef case CRYPTO_MD5: axf = &auth_hash_md5; goto auth3common; #endif case CRYPTO_SHA1: axf = &auth_hash_sha1; goto auth3common; case CRYPTO_SHA2_224: axf = &auth_hash_sha2_224; goto auth3common; case CRYPTO_SHA2_256: axf = &auth_hash_sha2_256; goto auth3common; case CRYPTO_SHA2_384: axf = &auth_hash_sha2_384; goto auth3common; case CRYPTO_SHA2_512: axf = &auth_hash_sha2_512; auth3common: swd->sw_ictx = malloc(axf->ctxsize, M_CRYPTO_DATA, M_NOWAIT); if (swd->sw_ictx == NULL) { swcr_freesession(dev, cses); return ENOBUFS; } axf->Init(swd->sw_ictx); swd->sw_mlen = cri->cri_mlen; swd->sw_axf = axf; break; + case CRYPTO_AES_CCM_CBC_MAC: + switch (cri->cri_klen) { + case 128: + axf = &auth_hash_ccm_cbc_mac_128; + break; + case 192: + axf = &auth_hash_ccm_cbc_mac_192; + break; + case 256: + axf = &auth_hash_ccm_cbc_mac_256; + break; + default: + swcr_freesession(dev, cses); + return EINVAL; + } + goto auth4common; case CRYPTO_AES_128_NIST_GMAC: axf = &auth_hash_nist_gmac_aes_128; goto auth4common; case CRYPTO_AES_192_NIST_GMAC: axf = &auth_hash_nist_gmac_aes_192; goto auth4common; case CRYPTO_AES_256_NIST_GMAC: axf = &auth_hash_nist_gmac_aes_256; auth4common: len = cri->cri_klen / 8; if (len != 16 && len != 24 && len != 32) { swcr_freesession(dev, cses); return EINVAL; } swd->sw_ictx = malloc(axf->ctxsize, M_CRYPTO_DATA, M_NOWAIT); if (swd->sw_ictx == NULL) { swcr_freesession(dev, cses); return ENOBUFS; } axf->Init(swd->sw_ictx); axf->Setkey(swd->sw_ictx, cri->cri_key, len); swd->sw_axf = axf; break; case CRYPTO_BLAKE2B: axf = &auth_hash_blake2b; goto auth5common; case CRYPTO_BLAKE2S: axf = &auth_hash_blake2s; goto auth5common; case CRYPTO_POLY1305: axf = &auth_hash_poly1305; auth5common: swd->sw_ictx = malloc(axf->ctxsize, M_CRYPTO_DATA, M_NOWAIT); if (swd->sw_ictx == NULL) { swcr_freesession(dev, cses); return ENOBUFS; } axf->Setkey(swd->sw_ictx, cri->cri_key, cri->cri_klen / 8); axf->Init(swd->sw_ictx); swd->sw_axf = axf; break; case CRYPTO_DEFLATE_COMP: cxf = &comp_algo_deflate; swd->sw_cxf = cxf; break; default: swcr_freesession(dev, cses); return EINVAL; } swd->sw_alg = cri->cri_alg; cri = cri->cri_next; ses->swcr_nalgs++; } if (cri != NULL) { CRYPTDEB("Bogus session request for three or more algorithms"); return EINVAL; } return 
0; } static void swcr_freesession(device_t dev, crypto_session_t cses) { struct swcr_session *ses; struct swcr_data *swd; struct enc_xform *txf; struct auth_hash *axf; size_t i; ses = crypto_get_driver_session(cses); mtx_destroy(&ses->swcr_lock); for (i = 0; i < nitems(ses->swcr_algorithms); i++) { swd = &ses->swcr_algorithms[i]; switch (swd->sw_alg) { case CRYPTO_DES_CBC: case CRYPTO_3DES_CBC: case CRYPTO_BLF_CBC: case CRYPTO_CAST_CBC: case CRYPTO_SKIPJACK_CBC: case CRYPTO_RIJNDAEL128_CBC: case CRYPTO_AES_XTS: case CRYPTO_AES_ICM: case CRYPTO_AES_NIST_GCM_16: case CRYPTO_AES_NIST_GMAC: case CRYPTO_CAMELLIA_CBC: case CRYPTO_NULL_CBC: case CRYPTO_CHACHA20: + case CRYPTO_AES_CCM_16: txf = swd->sw_exf; if (swd->sw_kschedule) txf->zerokey(&(swd->sw_kschedule)); break; case CRYPTO_MD5_HMAC: case CRYPTO_SHA1_HMAC: case CRYPTO_SHA2_224_HMAC: case CRYPTO_SHA2_256_HMAC: case CRYPTO_SHA2_384_HMAC: case CRYPTO_SHA2_512_HMAC: case CRYPTO_RIPEMD160_HMAC: case CRYPTO_NULL_HMAC: + case CRYPTO_AES_CCM_CBC_MAC: axf = swd->sw_axf; if (swd->sw_ictx) { bzero(swd->sw_ictx, axf->ctxsize); free(swd->sw_ictx, M_CRYPTO_DATA); } if (swd->sw_octx) { bzero(swd->sw_octx, axf->ctxsize); free(swd->sw_octx, M_CRYPTO_DATA); } break; case CRYPTO_MD5_KPDK: case CRYPTO_SHA1_KPDK: axf = swd->sw_axf; if (swd->sw_ictx) { bzero(swd->sw_ictx, axf->ctxsize); free(swd->sw_ictx, M_CRYPTO_DATA); } if (swd->sw_octx) { bzero(swd->sw_octx, swd->sw_klen); free(swd->sw_octx, M_CRYPTO_DATA); } break; case CRYPTO_BLAKE2B: case CRYPTO_BLAKE2S: case CRYPTO_MD5: case CRYPTO_POLY1305: case CRYPTO_SHA1: case CRYPTO_SHA2_224: case CRYPTO_SHA2_256: case CRYPTO_SHA2_384: case CRYPTO_SHA2_512: case CRYPTO_AES_128_NIST_GMAC: case CRYPTO_AES_192_NIST_GMAC: case CRYPTO_AES_256_NIST_GMAC: axf = swd->sw_axf; if (swd->sw_ictx) { explicit_bzero(swd->sw_ictx, axf->ctxsize); free(swd->sw_ictx, M_CRYPTO_DATA); } break; case CRYPTO_DEFLATE_COMP: /* Nothing to do */ break; } } } /* * Process a software request. */ static int swcr_process(device_t dev, struct cryptop *crp, int hint) { struct swcr_session *ses = NULL; struct cryptodesc *crd; struct swcr_data *sw; size_t i; /* Sanity check */ if (crp == NULL) return EINVAL; if (crp->crp_desc == NULL || crp->crp_buf == NULL) { crp->crp_etype = EINVAL; goto done; } ses = crypto_get_driver_session(crp->crp_session); mtx_lock(&ses->swcr_lock); /* Go through crypto descriptors, processing as we go */ for (crd = crp->crp_desc; crd; crd = crd->crd_next) { /* * Find the crypto context. * * XXX Note that the logic here prevents us from having * XXX the same algorithm multiple times in a session * XXX (or rather, we can but it won't give us the right * XXX results). To do that, we'd need some way of differentiating * XXX between the various instances of an algorithm (so we can * XXX locate the correct crypto context). */ for (i = 0; i < nitems(ses->swcr_algorithms) && ses->swcr_algorithms[i].sw_alg != crd->crd_alg; i++) ; /* No such context ? 
*/ if (i == nitems(ses->swcr_algorithms)) { crp->crp_etype = EINVAL; goto done; } sw = &ses->swcr_algorithms[i]; switch (sw->sw_alg) { case CRYPTO_DES_CBC: case CRYPTO_3DES_CBC: case CRYPTO_BLF_CBC: case CRYPTO_CAST_CBC: case CRYPTO_SKIPJACK_CBC: case CRYPTO_RIJNDAEL128_CBC: case CRYPTO_AES_XTS: case CRYPTO_AES_ICM: case CRYPTO_CAMELLIA_CBC: case CRYPTO_CHACHA20: if ((crp->crp_etype = swcr_encdec(crd, sw, crp->crp_buf, crp->crp_flags)) != 0) goto done; break; case CRYPTO_NULL_CBC: crp->crp_etype = 0; break; case CRYPTO_MD5_HMAC: case CRYPTO_SHA1_HMAC: case CRYPTO_SHA2_224_HMAC: case CRYPTO_SHA2_256_HMAC: case CRYPTO_SHA2_384_HMAC: case CRYPTO_SHA2_512_HMAC: case CRYPTO_RIPEMD160_HMAC: case CRYPTO_NULL_HMAC: case CRYPTO_MD5_KPDK: case CRYPTO_SHA1_KPDK: case CRYPTO_MD5: case CRYPTO_SHA1: case CRYPTO_SHA2_224: case CRYPTO_SHA2_256: case CRYPTO_SHA2_384: case CRYPTO_SHA2_512: case CRYPTO_BLAKE2B: case CRYPTO_BLAKE2S: case CRYPTO_POLY1305: if ((crp->crp_etype = swcr_authcompute(crd, sw, crp->crp_buf, crp->crp_flags)) != 0) goto done; break; case CRYPTO_AES_NIST_GCM_16: case CRYPTO_AES_NIST_GMAC: case CRYPTO_AES_128_NIST_GMAC: case CRYPTO_AES_192_NIST_GMAC: case CRYPTO_AES_256_NIST_GMAC: + case CRYPTO_AES_CCM_16: + case CRYPTO_AES_CCM_CBC_MAC: crp->crp_etype = swcr_authenc(crp); goto done; case CRYPTO_DEFLATE_COMP: if ((crp->crp_etype = swcr_compdec(crd, sw, crp->crp_buf, crp->crp_flags)) != 0) goto done; else crp->crp_olen = (int)sw->sw_size; break; default: /* Unknown/unsupported algorithm */ crp->crp_etype = EINVAL; goto done; } } done: if (ses) mtx_unlock(&ses->swcr_lock); crypto_done(crp); return 0; } static void swcr_identify(driver_t *drv, device_t parent) { /* NB: order 10 is so we get attached after h/w devices */ if (device_find_child(parent, "cryptosoft", -1) == NULL && BUS_ADD_CHILD(parent, 10, "cryptosoft", 0) == 0) panic("cryptosoft: could not attach"); } static int swcr_probe(device_t dev) { device_set_desc(dev, "software crypto"); return (BUS_PROBE_NOWILDCARD); } static int swcr_attach(device_t dev) { memset(hmac_ipad_buffer, HMAC_IPAD_VAL, HMAC_MAX_BLOCK_LEN); memset(hmac_opad_buffer, HMAC_OPAD_VAL, HMAC_MAX_BLOCK_LEN); swcr_id = crypto_get_driverid(dev, sizeof(struct swcr_session), CRYPTOCAP_F_SOFTWARE | CRYPTOCAP_F_SYNC); if (swcr_id < 0) { device_printf(dev, "cannot initialize!"); return ENOMEM; } #define REGISTER(alg) \ crypto_register(swcr_id, alg, 0,0) REGISTER(CRYPTO_DES_CBC); REGISTER(CRYPTO_3DES_CBC); REGISTER(CRYPTO_BLF_CBC); REGISTER(CRYPTO_CAST_CBC); REGISTER(CRYPTO_SKIPJACK_CBC); REGISTER(CRYPTO_NULL_CBC); REGISTER(CRYPTO_MD5_HMAC); REGISTER(CRYPTO_SHA1_HMAC); REGISTER(CRYPTO_SHA2_224_HMAC); REGISTER(CRYPTO_SHA2_256_HMAC); REGISTER(CRYPTO_SHA2_384_HMAC); REGISTER(CRYPTO_SHA2_512_HMAC); REGISTER(CRYPTO_RIPEMD160_HMAC); REGISTER(CRYPTO_NULL_HMAC); REGISTER(CRYPTO_MD5_KPDK); REGISTER(CRYPTO_SHA1_KPDK); REGISTER(CRYPTO_MD5); REGISTER(CRYPTO_SHA1); REGISTER(CRYPTO_SHA2_224); REGISTER(CRYPTO_SHA2_256); REGISTER(CRYPTO_SHA2_384); REGISTER(CRYPTO_SHA2_512); REGISTER(CRYPTO_RIJNDAEL128_CBC); REGISTER(CRYPTO_AES_XTS); REGISTER(CRYPTO_AES_ICM); REGISTER(CRYPTO_AES_NIST_GCM_16); REGISTER(CRYPTO_AES_NIST_GMAC); REGISTER(CRYPTO_AES_128_NIST_GMAC); REGISTER(CRYPTO_AES_192_NIST_GMAC); REGISTER(CRYPTO_AES_256_NIST_GMAC); REGISTER(CRYPTO_CAMELLIA_CBC); REGISTER(CRYPTO_DEFLATE_COMP); REGISTER(CRYPTO_BLAKE2B); REGISTER(CRYPTO_BLAKE2S); REGISTER(CRYPTO_CHACHA20); + REGISTER(CRYPTO_AES_CCM_16); + REGISTER(CRYPTO_AES_CCM_CBC_MAC); REGISTER(CRYPTO_POLY1305); #undef REGISTER return 0; } 
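Since swcr_attach() above registers the two new CCM algorithm identifiers under the cryptosoft device, a caller that wants to pin a session to this software driver can resolve the driver id by name and pass it in the crid field of struct session2_op. A small hypothetical userland helper follows (same headers as the sketch after swcr_authenc(); it assumes the convention that a negative crid in crypt_find_op requests a lookup by name):

static int
find_cryptosoft(int cfd)
{
	struct crypt_find_op fop;

	memset(&fop, 0, sizeof(fop));
	fop.crid = -1;		/* assumed: negative crid = look up by name */
	strlcpy(fop.name, "cryptosoft", sizeof(fop.name));
	/* cfd is a descriptor cloned from /dev/crypto with CRIOGET */
	if (ioctl(cfd, CIOCFINDDEV, &fop) < 0)
		err(1, "CIOCFINDDEV");
	return (fop.crid);	/* suitable for session2_op.crid */
}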
static int swcr_detach(device_t dev) { crypto_unregister_all(swcr_id); return 0; } static device_method_t swcr_methods[] = { DEVMETHOD(device_identify, swcr_identify), DEVMETHOD(device_probe, swcr_probe), DEVMETHOD(device_attach, swcr_attach), DEVMETHOD(device_detach, swcr_detach), DEVMETHOD(cryptodev_newsession, swcr_newsession), DEVMETHOD(cryptodev_freesession,swcr_freesession), DEVMETHOD(cryptodev_process, swcr_process), {0, 0}, }; static driver_t swcr_driver = { "cryptosoft", swcr_methods, 0, /* NB: no softc */ }; static devclass_t swcr_devclass; /* * NB: We explicitly reference the crypto module so we * get the necessary ordering when built as a loadable * module. This is required because we bundle the crypto * module code together with the cryptosoft driver (otherwise * normal module dependencies would handle things). */ extern int crypto_modevent(struct module *, int, void *); /* XXX where to attach */ DRIVER_MODULE(cryptosoft, nexus, swcr_driver, swcr_devclass, crypto_modevent,0); MODULE_VERSION(cryptosoft, 1); MODULE_DEPEND(cryptosoft, crypto, 1, 1, 1); Index: projects/import-googletest-1.8.1/sys/opencrypto/xform_aes_icm.c =================================================================== --- projects/import-googletest-1.8.1/sys/opencrypto/xform_aes_icm.c (revision 344270) +++ projects/import-googletest-1.8.1/sys/opencrypto/xform_aes_icm.c (revision 344271) @@ -1,152 +1,180 @@ /* $OpenBSD: xform.c,v 1.16 2001/08/28 12:20:43 ben Exp $ */ /*- * The authors of this code are John Ioannidis (ji@tla.org), * Angelos D. Keromytis (kermit@csd.uch.gr), * Niels Provos (provos@physnet.uni-hamburg.de) and * Damien Miller (djm@mindrot.org). * * This code was written by John Ioannidis for BSD/OS in Athens, Greece, * in November 1995. * * Ported to OpenBSD and NetBSD, with additional transforms, in December 1996, * by Angelos D. Keromytis. * * Additional transforms and features in 1997 and 1998 by Angelos D. Keromytis * and Niels Provos. * * Additional features in 1999 by Angelos D. Keromytis. * * AES XTS implementation in 2008 by Damien Miller * * Copyright (C) 1995, 1996, 1997, 1998, 1999 by John Ioannidis, * Angelos D. Keromytis and Niels Provos. * * Copyright (C) 2001, Angelos D. Keromytis. * * Copyright (C) 2008, Damien Miller * Copyright (c) 2014 The FreeBSD Foundation * All rights reserved. * * Portions of this software were developed by John-Mark Gurney * under sponsorship of the FreeBSD Foundation and * Rubicon Communications, LLC (Netgate). * * Permission to use, copy, and modify this software with or without fee * is hereby granted, provided that this entire notice is included in * all copies of any software which is or includes a copy or * modification of this software. * You may use this code under the GNU public license if you so wish. Please * contribute changes back to the authors under this freer than GPL license * so that we may further the use of strong encryption without limitations to * all. * * THIS SOFTWARE IS BEING PROVIDED "AS IS", WITHOUT ANY EXPRESS OR * IMPLIED WARRANTY. IN PARTICULAR, NONE OF THE AUTHORS MAKES ANY * REPRESENTATION OR WARRANTY OF ANY KIND CONCERNING THE * MERCHANTABILITY OF THIS SOFTWARE OR ITS FITNESS FOR ANY PARTICULAR * PURPOSE. 
*/ #include __FBSDID("$FreeBSD$"); #include static int aes_icm_setkey(u_int8_t **, u_int8_t *, int); static void aes_icm_crypt(caddr_t, u_int8_t *); static void aes_icm_zerokey(u_int8_t **); static void aes_icm_reinit(caddr_t, u_int8_t *); static void aes_gcm_reinit(caddr_t, u_int8_t *); +static void aes_ccm_reinit(caddr_t, u_int8_t *); /* Encryption instances */ struct enc_xform enc_xform_aes_icm = { CRYPTO_AES_ICM, "AES-ICM", AES_BLOCK_LEN, AES_BLOCK_LEN, AES_MIN_KEY, AES_MAX_KEY, aes_icm_crypt, aes_icm_crypt, aes_icm_setkey, aes_icm_zerokey, aes_icm_reinit, }; struct enc_xform enc_xform_aes_nist_gcm = { CRYPTO_AES_NIST_GCM_16, "AES-GCM", AES_ICM_BLOCK_LEN, AES_GCM_IV_LEN, AES_MIN_KEY, AES_MAX_KEY, aes_icm_crypt, aes_icm_crypt, aes_icm_setkey, aes_icm_zerokey, aes_gcm_reinit, }; +struct enc_xform enc_xform_ccm = { + .type = CRYPTO_AES_CCM_16, + .name = "AES-CCM", + .blocksize = AES_ICM_BLOCK_LEN, .ivsize = AES_CCM_IV_LEN, + .minkey = AES_MIN_KEY, .maxkey = AES_MAX_KEY, + .encrypt = aes_icm_crypt, + .decrypt = aes_icm_crypt, + .setkey = aes_icm_setkey, + .zerokey = aes_icm_zerokey, + .reinit = aes_ccm_reinit, +}; + /* * Encryption wrapper routines. */ static void aes_icm_reinit(caddr_t key, u_int8_t *iv) { struct aes_icm_ctx *ctx; ctx = (struct aes_icm_ctx *)key; bcopy(iv, ctx->ac_block, AESICM_BLOCKSIZE); } static void aes_gcm_reinit(caddr_t key, u_int8_t *iv) { struct aes_icm_ctx *ctx; aes_icm_reinit(key, iv); ctx = (struct aes_icm_ctx *)key; /* GCM starts with 2 as counter 1 is used for final xor of tag. */ bzero(&ctx->ac_block[AESICM_BLOCKSIZE - 4], 4); ctx->ac_block[AESICM_BLOCKSIZE - 1] = 2; +} + +static void +aes_ccm_reinit(caddr_t key, u_int8_t *iv) +{ + struct aes_icm_ctx *ctx; + + ctx = (struct aes_icm_ctx*)key; + + /* CCM has flags, then the IV, then the counter, which starts at 1 */ + bzero(ctx->ac_block, sizeof(ctx->ac_block)); + /* 3 bytes for length field; this gives a nonce of 12 bytes */ + ctx->ac_block[0] = (15 - AES_CCM_IV_LEN) - 1; + bcopy(iv, ctx->ac_block+1, AES_CCM_IV_LEN); + ctx->ac_block[AESICM_BLOCKSIZE - 1] = 1; } static void aes_icm_crypt(caddr_t key, u_int8_t *data) { struct aes_icm_ctx *ctx; u_int8_t keystream[AESICM_BLOCKSIZE]; int i; ctx = (struct aes_icm_ctx *)key; rijndaelEncrypt(ctx->ac_ek, ctx->ac_nr, ctx->ac_block, keystream); for (i = 0; i < AESICM_BLOCKSIZE; i++) data[i] ^= keystream[i]; explicit_bzero(keystream, sizeof(keystream)); /* increment counter */ for (i = AESICM_BLOCKSIZE - 1; i >= 0; i--) if (++ctx->ac_block[i]) /* continue on overflow */ break; } static int aes_icm_setkey(u_int8_t **sched, u_int8_t *key, int len) { struct aes_icm_ctx *ctx; if (len != 16 && len != 24 && len != 32) return EINVAL; *sched = KMALLOC(sizeof(struct aes_icm_ctx), M_CRYPTO_DATA, M_NOWAIT | M_ZERO); if (*sched == NULL) return ENOMEM; ctx = (struct aes_icm_ctx *)*sched; ctx->ac_nr = rijndaelKeySetupEnc(ctx->ac_ek, (u_char *)key, len * 8); return 0; } static void aes_icm_zerokey(u_int8_t **sched) { bzero(*sched, sizeof(struct aes_icm_ctx)); KFREE(*sched, M_CRYPTO_DATA); *sched = NULL; } Index: projects/import-googletest-1.8.1/sys/opencrypto/xform_auth.h =================================================================== --- projects/import-googletest-1.8.1/sys/opencrypto/xform_auth.h (revision 344270) +++ projects/import-googletest-1.8.1/sys/opencrypto/xform_auth.h (revision 344271) @@ -1,100 +1,105 @@ /* $FreeBSD$ */ /* $OpenBSD: xform.h,v 1.8 2001/08/28 12:20:43 ben Exp $ */ /*- * The author of this code is Angelos D. 
Keromytis (angelos@cis.upenn.edu) * * This code was written by Angelos D. Keromytis in Athens, Greece, in * February 2000. Network Security Technologies Inc. (NSTI) kindly * supported the development of this code. * * Copyright (c) 2000 Angelos D. Keromytis * Copyright (c) 2014 The FreeBSD Foundation * All rights reserved. * * Portions of this software were developed by John-Mark Gurney * under sponsorship of the FreeBSD Foundation and * Rubicon Communications, LLC (Netgate). * * Permission to use, copy, and modify this software without fee * is hereby granted, provided that this entire notice is included in * all source code copies of any software which is or includes a copy or * modification of this software. * * THIS SOFTWARE IS BEING PROVIDED "AS IS", WITHOUT ANY EXPRESS OR * IMPLIED WARRANTY. IN PARTICULAR, NONE OF THE AUTHORS MAKES ANY * REPRESENTATION OR WARRANTY OF ANY KIND CONCERNING THE * MERCHANTABILITY OF THIS SOFTWARE OR ITS FITNESS FOR ANY PARTICULAR * PURPOSE. */ #ifndef _CRYPTO_XFORM_AUTH_H_ #define _CRYPTO_XFORM_AUTH_H_ #include #include #include #include #include #include #include #include #include #include +#include #include #include /* XXX use a define common with other hash stuff ! */ #define AH_ALEN_MAX 64 /* max authenticator hash length */ /* Declarations */ struct auth_hash { int type; char *name; u_int16_t keysize; u_int16_t hashsize; u_int16_t ctxsize; u_int16_t blocksize; void (*Init) (void *); void (*Setkey) (void *, const u_int8_t *, u_int16_t); void (*Reinit) (void *, const u_int8_t *, u_int16_t); int (*Update) (void *, const u_int8_t *, u_int16_t); void (*Final) (u_int8_t *, void *); }; extern struct auth_hash auth_hash_null; extern struct auth_hash auth_hash_key_md5; extern struct auth_hash auth_hash_key_sha1; extern struct auth_hash auth_hash_hmac_md5; extern struct auth_hash auth_hash_hmac_sha1; extern struct auth_hash auth_hash_hmac_ripemd_160; extern struct auth_hash auth_hash_hmac_sha2_224; extern struct auth_hash auth_hash_hmac_sha2_256; extern struct auth_hash auth_hash_hmac_sha2_384; extern struct auth_hash auth_hash_hmac_sha2_512; extern struct auth_hash auth_hash_sha1; extern struct auth_hash auth_hash_sha2_224; extern struct auth_hash auth_hash_sha2_256; extern struct auth_hash auth_hash_sha2_384; extern struct auth_hash auth_hash_sha2_512; extern struct auth_hash auth_hash_nist_gmac_aes_128; extern struct auth_hash auth_hash_nist_gmac_aes_192; extern struct auth_hash auth_hash_nist_gmac_aes_256; extern struct auth_hash auth_hash_blake2b; extern struct auth_hash auth_hash_blake2s; extern struct auth_hash auth_hash_poly1305; +extern struct auth_hash auth_hash_ccm_cbc_mac_128; +extern struct auth_hash auth_hash_ccm_cbc_mac_192; +extern struct auth_hash auth_hash_ccm_cbc_mac_256; union authctx { MD5_CTX md5ctx; SHA1_CTX sha1ctx; RMD160_CTX rmd160ctx; SHA224_CTX sha224ctx; SHA256_CTX sha256ctx; SHA384_CTX sha384ctx; SHA512_CTX sha512ctx; struct aes_gmac_ctx aes_gmac_ctx; + struct aes_cbc_mac_ctx aes_cbc_mac_ctx; }; #endif /* _CRYPTO_XFORM_AUTH_H_ */ Index: projects/import-googletest-1.8.1/sys/opencrypto/xform_cbc_mac.c =================================================================== --- projects/import-googletest-1.8.1/sys/opencrypto/xform_cbc_mac.c (nonexistent) +++ projects/import-googletest-1.8.1/sys/opencrypto/xform_cbc_mac.c (revision 344271) @@ -0,0 +1,55 @@ +#include +__FBSDID("$FreeBSD$"); + +#include +#include + +/* Authentication instances */ +struct auth_hash auth_hash_ccm_cbc_mac_128 = { + .type = CRYPTO_AES_CCM_CBC_MAC, + .name = 
"CBC-CCM-AES-128", + .keysize = AES_128_CBC_MAC_KEY_LEN, + .hashsize = AES_CBC_MAC_HASH_LEN, + .ctxsize = sizeof(struct aes_cbc_mac_ctx), + .blocksize = CCM_CBC_BLOCK_LEN, + .Init = (void (*)(void *)) AES_CBC_MAC_Init, + .Setkey = + (void (*)(void *, const u_int8_t *, u_int16_t))AES_CBC_MAC_Setkey, + .Reinit = + (void (*)(void *, const u_int8_t *, u_int16_t)) AES_CBC_MAC_Reinit, + .Update = + (int (*)(void *, const u_int8_t *, u_int16_t)) AES_CBC_MAC_Update, + .Final = (void (*)(u_int8_t *, void *)) AES_CBC_MAC_Final, +}; +struct auth_hash auth_hash_ccm_cbc_mac_192 = { + .type = CRYPTO_AES_CCM_CBC_MAC, + .name = "CBC-CCM-AES-192", + .keysize = AES_192_CBC_MAC_KEY_LEN, + .hashsize = AES_CBC_MAC_HASH_LEN, + .ctxsize = sizeof(struct aes_cbc_mac_ctx), + .blocksize = CCM_CBC_BLOCK_LEN, + .Init = (void (*)(void *)) AES_CBC_MAC_Init, + .Setkey = + (void (*)(void *, const u_int8_t *, u_int16_t)) AES_CBC_MAC_Setkey, + .Reinit = + (void (*)(void *, const u_int8_t *, u_int16_t)) AES_CBC_MAC_Reinit, + .Update = + (int (*)(void *, const u_int8_t *, u_int16_t)) AES_CBC_MAC_Update, + .Final = (void (*)(u_int8_t *, void *)) AES_CBC_MAC_Final, +}; +struct auth_hash auth_hash_ccm_cbc_mac_256 = { + .type = CRYPTO_AES_CCM_CBC_MAC, + .name = "CBC-CCM-AES-256", + .keysize = AES_256_CBC_MAC_KEY_LEN, + .hashsize = AES_CBC_MAC_HASH_LEN, + .ctxsize = sizeof(struct aes_cbc_mac_ctx), + .blocksize = CCM_CBC_BLOCK_LEN, + .Init = (void (*)(void *)) AES_CBC_MAC_Init, + .Setkey = + (void (*)(void *, const u_int8_t *, u_int16_t)) AES_CBC_MAC_Setkey, + .Reinit = + (void (*)(void *, const u_int8_t *, u_int16_t)) AES_CBC_MAC_Reinit, + .Update = + (int (*)(void *, const u_int8_t *, u_int16_t)) AES_CBC_MAC_Update, + .Final = (void (*)(u_int8_t *, void *)) AES_CBC_MAC_Final, +}; Property changes on: projects/import-googletest-1.8.1/sys/opencrypto/xform_cbc_mac.c ___________________________________________________________________ Added: svn:eol-style ## -0,0 +1 ## +native \ No newline at end of property Added: svn:keywords ## -0,0 +1 ## +FreeBSD=%H \ No newline at end of property Added: svn:mime-type ## -0,0 +1 ## +text/plain \ No newline at end of property Index: projects/import-googletest-1.8.1/sys/opencrypto/xform_enc.h =================================================================== --- projects/import-googletest-1.8.1/sys/opencrypto/xform_enc.h (revision 344270) +++ projects/import-googletest-1.8.1/sys/opencrypto/xform_enc.h (revision 344271) @@ -1,101 +1,102 @@ /* $FreeBSD$ */ /* $OpenBSD: xform.h,v 1.8 2001/08/28 12:20:43 ben Exp $ */ /*- * The author of this code is Angelos D. Keromytis (angelos@cis.upenn.edu) * * This code was written by Angelos D. Keromytis in Athens, Greece, in * February 2000. Network Security Technologies Inc. (NSTI) kindly * supported the development of this code. * * Copyright (c) 2000 Angelos D. Keromytis * Copyright (c) 2014 The FreeBSD Foundation * All rights reserved. * * Portions of this software were developed by John-Mark Gurney * under sponsorship of the FreeBSD Foundation and * Rubicon Communications, LLC (Netgate). * * Permission to use, copy, and modify this software without fee * is hereby granted, provided that this entire notice is included in * all source code copies of any software which is or includes a copy or * modification of this software. * * THIS SOFTWARE IS BEING PROVIDED "AS IS", WITHOUT ANY EXPRESS OR * IMPLIED WARRANTY. 
IN PARTICULAR, NONE OF THE AUTHORS MAKES ANY * REPRESENTATION OR WARRANTY OF ANY KIND CONCERNING THE * MERCHANTABILITY OF THIS SOFTWARE OR ITS FITNESS FOR ANY PARTICULAR * PURPOSE. */ #ifndef _CRYPTO_XFORM_ENC_H_ #define _CRYPTO_XFORM_ENC_H_ #include #include #include #include #include #include #include #include #include #include #define AESICM_BLOCKSIZE AES_BLOCK_LEN #define AES_XTS_BLOCKSIZE 16 #define AES_XTS_IVSIZE 8 #define AES_XTS_ALPHA 0x87 /* GF(2^128) generator polynomial */ /* Declarations */ struct enc_xform { int type; char *name; u_int16_t blocksize; /* Required input block size -- 1 for stream ciphers. */ u_int16_t ivsize; u_int16_t minkey, maxkey; void (*encrypt) (caddr_t, u_int8_t *); void (*decrypt) (caddr_t, u_int8_t *); int (*setkey) (u_int8_t **, u_int8_t *, int len); void (*zerokey) (u_int8_t **); void (*reinit) (caddr_t, u_int8_t *); /* * Encrypt/decrypt 1+ blocks of input -- total size is 'len' bytes. * Len is guaranteed to be a multiple of the defined 'blocksize'. * Optional interface -- most useful for stream ciphers with a small * blocksize (1). */ void (*encrypt_multi) (void *, uint8_t *, size_t len); void (*decrypt_multi) (void *, uint8_t *, size_t len); }; extern struct enc_xform enc_xform_null; extern struct enc_xform enc_xform_des; extern struct enc_xform enc_xform_3des; extern struct enc_xform enc_xform_blf; extern struct enc_xform enc_xform_cast5; extern struct enc_xform enc_xform_skipjack; extern struct enc_xform enc_xform_rijndael128; extern struct enc_xform enc_xform_aes_icm; extern struct enc_xform enc_xform_aes_nist_gcm; extern struct enc_xform enc_xform_aes_nist_gmac; extern struct enc_xform enc_xform_aes_xts; extern struct enc_xform enc_xform_arc4; extern struct enc_xform enc_xform_camellia; extern struct enc_xform enc_xform_chacha20; +extern struct enc_xform enc_xform_ccm; struct aes_icm_ctx { u_int32_t ac_ek[4*(RIJNDAEL_MAXNR + 1)]; /* ac_block is initialized to IV */ u_int8_t ac_block[AESICM_BLOCKSIZE]; int ac_nr; }; struct aes_xts_ctx { rijndael_ctx key1; rijndael_ctx key2; u_int8_t tweak[AES_XTS_BLOCKSIZE]; }; #endif /* _CRYPTO_XFORM_ENC_H_ */ Index: projects/import-googletest-1.8.1/sys/powerpc/booke/pmap.c =================================================================== --- projects/import-googletest-1.8.1/sys/powerpc/booke/pmap.c (revision 344270) +++ projects/import-googletest-1.8.1/sys/powerpc/booke/pmap.c (revision 344271) @@ -1,4456 +1,4497 @@ /*- * SPDX-License-Identifier: BSD-2-Clause-FreeBSD * * Copyright (C) 2007-2009 Semihalf, Rafal Jaworowski * Copyright (C) 2006 Semihalf, Marian Balakowicz * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN * NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED * TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. * * Some hw specific parts of this pmap were derived or influenced * by NetBSD's ibm4xx pmap module. More generic code is shared with * a few other pmap modules from the FreeBSD tree. */ /* * VM layout notes: * * Kernel and user threads run within one common virtual address space * defined by AS=0. * * 32-bit pmap: * Virtual address space layout: * ----------------------------- * 0x0000_0000 - 0x7fff_ffff : user process * 0x8000_0000 - 0xbfff_ffff : pmap_mapdev()-ed area (PCI/PCIE etc.) * 0xc000_0000 - 0xc0ff_ffff : kernel reserved * 0xc000_0000 - data_end : kernel code+data, env, metadata etc. * 0xc100_0000 - 0xffff_ffff : KVA * 0xc100_0000 - 0xc100_3fff : reserved for page zero/copy * 0xc100_4000 - 0xc200_3fff : reserved for ptbl bufs * 0xc200_4000 - 0xc200_8fff : guard page + kstack0 * 0xc200_9000 - 0xfeef_ffff : actual free KVA space * * 64-bit pmap: * Virtual address space layout: * ----------------------------- * 0x0000_0000_0000_0000 - 0xbfff_ffff_ffff_ffff : user process * 0x0000_0000_0000_0000 - 0x8fff_ffff_ffff_ffff : text, data, heap, maps, libraries * 0x9000_0000_0000_0000 - 0xafff_ffff_ffff_ffff : mmio region * 0xb000_0000_0000_0000 - 0xbfff_ffff_ffff_ffff : stack * 0xc000_0000_0000_0000 - 0xcfff_ffff_ffff_ffff : kernel reserved * 0xc000_0000_0000_0000 - endkernel-1 : kernel code & data * endkernel - msgbufp-1 : flat device tree * msgbufp - ptbl_bufs-1 : message buffer * ptbl_bufs - kernel_pdir-1 : kernel page tables * kernel_pdir - kernel_pp2d-1 : kernel page directory * kernel_pp2d - . : kernel pointers to page directory * pmap_zero_copy_min - crashdumpmap-1 : reserved for page zero/copy * crashdumpmap - ptbl_buf_pool_vabase-1 : reserved for ptbl bufs * ptbl_buf_pool_vabase - virtual_avail-1 : user page directories and page tables * virtual_avail - 0xcfff_ffff_ffff_ffff : actual free KVA space * 0xd000_0000_0000_0000 - 0xdfff_ffff_ffff_ffff : coprocessor region * 0xe000_0000_0000_0000 - 0xefff_ffff_ffff_ffff : mmio region * 0xf000_0000_0000_0000 - 0xffff_ffff_ffff_ffff : direct map * 0xf000_0000_0000_0000 - +Maxmem : physmem map * - 0xffff_ffff_ffff_ffff : device direct map */ #include __FBSDID("$FreeBSD$"); #include "opt_ddb.h" #include "opt_kstack_pages.h" #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include "mmu_if.h" #define SPARSE_MAPDEV #ifdef DEBUG #define debugf(fmt, args...) printf(fmt, ##args) #else #define debugf(fmt, args...) #endif #ifdef __powerpc64__ #define PRI0ptrX "016lx" #else #define PRI0ptrX "08x" #endif #define TODO panic("%s: not implemented", __func__); extern unsigned char _etext[]; extern unsigned char _end[]; extern uint32_t *bootinfo; vm_paddr_t kernload; vm_offset_t kernstart; vm_size_t kernsize; /* Message buffer and tables. 
*/ static vm_offset_t data_start; static vm_size_t data_end; /* Phys/avail memory regions. */ static struct mem_region *availmem_regions; static int availmem_regions_sz; static struct mem_region *physmem_regions; static int physmem_regions_sz; /* Reserved KVA space and mutex for mmu_booke_zero_page. */ static vm_offset_t zero_page_va; static struct mtx zero_page_mutex; static struct mtx tlbivax_mutex; /* Reserved KVA space and mutex for mmu_booke_copy_page. */ static vm_offset_t copy_page_src_va; static vm_offset_t copy_page_dst_va; static struct mtx copy_page_mutex; /**************************************************************************/ /* PMAP */ /**************************************************************************/ static int mmu_booke_enter_locked(mmu_t, pmap_t, vm_offset_t, vm_page_t, vm_prot_t, u_int flags, int8_t psind); unsigned int kptbl_min; /* Index of the first kernel ptbl. */ unsigned int kernel_ptbls; /* Number of KVA ptbls. */ #ifdef __powerpc64__ unsigned int kernel_pdirs; #endif /* * If user pmap is processed with mmu_booke_remove and the resident count * drops to 0, there are no more pages to remove, so we need not continue. */ #define PMAP_REMOVE_DONE(pmap) \ ((pmap) != kernel_pmap && (pmap)->pm_stats.resident_count == 0) #if defined(COMPAT_FREEBSD32) || !defined(__powerpc64__) extern int elf32_nxstack; #endif /**************************************************************************/ /* TLB and TID handling */ /**************************************************************************/ /* Translation ID busy table */ static volatile pmap_t tidbusy[MAXCPU][TID_MAX + 1]; /* * TLB0 capabilities (entry, way numbers etc.). These can vary between e500 * core revisions and should be read from h/w registers during early config. 
 */
uint32_t tlb0_entries;
uint32_t tlb0_ways;
uint32_t tlb0_entries_per_way;
uint32_t tlb1_entries;

#define TLB0_ENTRIES		(tlb0_entries)
#define TLB0_WAYS		(tlb0_ways)
#define TLB0_ENTRIES_PER_WAY	(tlb0_entries_per_way)

#define TLB1_ENTRIES		(tlb1_entries)

static vm_offset_t tlb1_map_base = VM_MAXUSER_ADDRESS + PAGE_SIZE;

static tlbtid_t tid_alloc(struct pmap *);
static void tid_flush(tlbtid_t tid);

#ifdef DDB
#ifdef __powerpc64__
static void tlb_print_entry(int, uint32_t, uint64_t, uint32_t, uint32_t);
#else
static void tlb_print_entry(int, uint32_t, uint32_t, uint32_t, uint32_t);
#endif
#endif

static void tlb1_read_entry(tlb_entry_t *, unsigned int);
static void tlb1_write_entry(tlb_entry_t *, unsigned int);
static int tlb1_iomapped(int, vm_paddr_t, vm_size_t, vm_offset_t *);
static vm_size_t tlb1_mapin_region(vm_offset_t, vm_paddr_t, vm_size_t);

static vm_size_t tsize2size(unsigned int);
static unsigned int size2tsize(vm_size_t);
static unsigned int ilog2(unsigned long);

static void set_mas4_defaults(void);

static inline void tlb0_flush_entry(vm_offset_t);
static inline unsigned int tlb0_tableidx(vm_offset_t, unsigned int);

/**************************************************************************/
/* Page table management */
/**************************************************************************/

static struct rwlock_padalign pvh_global_lock;

/* Data for the pv entry allocation mechanism */
static uma_zone_t pvzone;
static int pv_entry_count = 0, pv_entry_max = 0, pv_entry_high_water = 0;

#define PV_ENTRY_ZONE_MIN	2048	/* min pv entries in uma zone */

#ifndef PMAP_SHPGPERPROC
#define PMAP_SHPGPERPROC	200
#endif

static void ptbl_init(void);
static struct ptbl_buf *ptbl_buf_alloc(void);
static void ptbl_buf_free(struct ptbl_buf *);
static void ptbl_free_pmap_ptbl(pmap_t, pte_t *);

#ifdef __powerpc64__
static pte_t *ptbl_alloc(mmu_t, pmap_t, pte_t **, unsigned int, boolean_t);
static void ptbl_free(mmu_t, pmap_t, pte_t **, unsigned int);
static void ptbl_hold(mmu_t, pmap_t, pte_t **, unsigned int);
static int ptbl_unhold(mmu_t, pmap_t, vm_offset_t);
#else
static pte_t *ptbl_alloc(mmu_t, pmap_t, unsigned int, boolean_t);
static void ptbl_free(mmu_t, pmap_t, unsigned int);
static void ptbl_hold(mmu_t, pmap_t, unsigned int);
static int ptbl_unhold(mmu_t, pmap_t, unsigned int);
#endif

static vm_paddr_t pte_vatopa(mmu_t, pmap_t, vm_offset_t);
static int pte_enter(mmu_t, pmap_t, vm_page_t, vm_offset_t, uint32_t, boolean_t);
static int pte_remove(mmu_t, pmap_t, vm_offset_t, uint8_t);
static pte_t *pte_find(mmu_t, pmap_t, vm_offset_t);
static void kernel_pte_alloc(vm_offset_t, vm_offset_t, vm_offset_t);

static pv_entry_t pv_alloc(void);
static void pv_free(pv_entry_t);
static void pv_insert(pmap_t, vm_offset_t, vm_page_t);
static void pv_remove(pmap_t, vm_offset_t, vm_page_t);

static void booke_pmap_init_qpages(void);

/* Number of kva ptbl buffers, each covering one ptbl (PTBL_PAGES). */
#ifdef __powerpc64__
#define PTBL_BUFS		(16UL * 16 * 16)
#else
#define PTBL_BUFS		(128 * 16)
#endif

struct ptbl_buf {
	TAILQ_ENTRY(ptbl_buf) link;	/* list link */
	vm_offset_t kva;		/* va of mapping */
};

/* ptbl free list and a lock used for access synchronization. */
static TAILQ_HEAD(, ptbl_buf) ptbl_buf_freelist;
static struct mtx ptbl_buf_freelist_lock;

/* Base address of kva space allocated for ptbl bufs. */
static vm_offset_t ptbl_buf_pool_vabase;

/*
 * Pointer to ptbl_buf structures.
*/ static struct ptbl_buf *ptbl_bufs; #ifdef SMP extern tlb_entry_t __boot_tlb1[]; void pmap_bootstrap_ap(volatile uint32_t *); #endif /* * Kernel MMU interface */ static void mmu_booke_clear_modify(mmu_t, vm_page_t); static void mmu_booke_copy(mmu_t, pmap_t, pmap_t, vm_offset_t, vm_size_t, vm_offset_t); static void mmu_booke_copy_page(mmu_t, vm_page_t, vm_page_t); static void mmu_booke_copy_pages(mmu_t, vm_page_t *, vm_offset_t, vm_page_t *, vm_offset_t, int); static int mmu_booke_enter(mmu_t, pmap_t, vm_offset_t, vm_page_t, vm_prot_t, u_int flags, int8_t psind); static void mmu_booke_enter_object(mmu_t, pmap_t, vm_offset_t, vm_offset_t, vm_page_t, vm_prot_t); static void mmu_booke_enter_quick(mmu_t, pmap_t, vm_offset_t, vm_page_t, vm_prot_t); static vm_paddr_t mmu_booke_extract(mmu_t, pmap_t, vm_offset_t); static vm_page_t mmu_booke_extract_and_hold(mmu_t, pmap_t, vm_offset_t, vm_prot_t); static void mmu_booke_init(mmu_t); static boolean_t mmu_booke_is_modified(mmu_t, vm_page_t); static boolean_t mmu_booke_is_prefaultable(mmu_t, pmap_t, vm_offset_t); static boolean_t mmu_booke_is_referenced(mmu_t, vm_page_t); static int mmu_booke_ts_referenced(mmu_t, vm_page_t); static vm_offset_t mmu_booke_map(mmu_t, vm_offset_t *, vm_paddr_t, vm_paddr_t, int); static int mmu_booke_mincore(mmu_t, pmap_t, vm_offset_t, vm_paddr_t *); static void mmu_booke_object_init_pt(mmu_t, pmap_t, vm_offset_t, vm_object_t, vm_pindex_t, vm_size_t); static boolean_t mmu_booke_page_exists_quick(mmu_t, pmap_t, vm_page_t); static void mmu_booke_page_init(mmu_t, vm_page_t); static int mmu_booke_page_wired_mappings(mmu_t, vm_page_t); static void mmu_booke_pinit(mmu_t, pmap_t); static void mmu_booke_pinit0(mmu_t, pmap_t); static void mmu_booke_protect(mmu_t, pmap_t, vm_offset_t, vm_offset_t, vm_prot_t); static void mmu_booke_qenter(mmu_t, vm_offset_t, vm_page_t *, int); static void mmu_booke_qremove(mmu_t, vm_offset_t, int); static void mmu_booke_release(mmu_t, pmap_t); static void mmu_booke_remove(mmu_t, pmap_t, vm_offset_t, vm_offset_t); static void mmu_booke_remove_all(mmu_t, vm_page_t); static void mmu_booke_remove_write(mmu_t, vm_page_t); static void mmu_booke_unwire(mmu_t, pmap_t, vm_offset_t, vm_offset_t); static void mmu_booke_zero_page(mmu_t, vm_page_t); static void mmu_booke_zero_page_area(mmu_t, vm_page_t, int, int); static void mmu_booke_activate(mmu_t, struct thread *); static void mmu_booke_deactivate(mmu_t, struct thread *); static void mmu_booke_bootstrap(mmu_t, vm_offset_t, vm_offset_t); static void *mmu_booke_mapdev(mmu_t, vm_paddr_t, vm_size_t); static void *mmu_booke_mapdev_attr(mmu_t, vm_paddr_t, vm_size_t, vm_memattr_t); static void mmu_booke_unmapdev(mmu_t, vm_offset_t, vm_size_t); static vm_paddr_t mmu_booke_kextract(mmu_t, vm_offset_t); static void mmu_booke_kenter(mmu_t, vm_offset_t, vm_paddr_t); static void mmu_booke_kenter_attr(mmu_t, vm_offset_t, vm_paddr_t, vm_memattr_t); static void mmu_booke_kremove(mmu_t, vm_offset_t); static boolean_t mmu_booke_dev_direct_mapped(mmu_t, vm_paddr_t, vm_size_t); static void mmu_booke_sync_icache(mmu_t, pmap_t, vm_offset_t, vm_size_t); static void mmu_booke_dumpsys_map(mmu_t, vm_paddr_t pa, size_t, void **); static void mmu_booke_dumpsys_unmap(mmu_t, vm_paddr_t pa, size_t, void *); static void mmu_booke_scan_init(mmu_t); static vm_offset_t mmu_booke_quick_enter_page(mmu_t mmu, vm_page_t m); static void mmu_booke_quick_remove_page(mmu_t mmu, vm_offset_t addr); static int mmu_booke_change_attr(mmu_t mmu, vm_offset_t addr, vm_size_t sz, vm_memattr_t mode); static 
int mmu_booke_map_user_ptr(mmu_t mmu, pmap_t pm, volatile const void *uaddr, void **kaddr, size_t ulen, size_t *klen); static int mmu_booke_decode_kernel_ptr(mmu_t mmu, vm_offset_t addr, int *is_user, vm_offset_t *decoded_addr); static mmu_method_t mmu_booke_methods[] = { /* pmap dispatcher interface */ MMUMETHOD(mmu_clear_modify, mmu_booke_clear_modify), MMUMETHOD(mmu_copy, mmu_booke_copy), MMUMETHOD(mmu_copy_page, mmu_booke_copy_page), MMUMETHOD(mmu_copy_pages, mmu_booke_copy_pages), MMUMETHOD(mmu_enter, mmu_booke_enter), MMUMETHOD(mmu_enter_object, mmu_booke_enter_object), MMUMETHOD(mmu_enter_quick, mmu_booke_enter_quick), MMUMETHOD(mmu_extract, mmu_booke_extract), MMUMETHOD(mmu_extract_and_hold, mmu_booke_extract_and_hold), MMUMETHOD(mmu_init, mmu_booke_init), MMUMETHOD(mmu_is_modified, mmu_booke_is_modified), MMUMETHOD(mmu_is_prefaultable, mmu_booke_is_prefaultable), MMUMETHOD(mmu_is_referenced, mmu_booke_is_referenced), MMUMETHOD(mmu_ts_referenced, mmu_booke_ts_referenced), MMUMETHOD(mmu_map, mmu_booke_map), MMUMETHOD(mmu_mincore, mmu_booke_mincore), MMUMETHOD(mmu_object_init_pt, mmu_booke_object_init_pt), MMUMETHOD(mmu_page_exists_quick,mmu_booke_page_exists_quick), MMUMETHOD(mmu_page_init, mmu_booke_page_init), MMUMETHOD(mmu_page_wired_mappings, mmu_booke_page_wired_mappings), MMUMETHOD(mmu_pinit, mmu_booke_pinit), MMUMETHOD(mmu_pinit0, mmu_booke_pinit0), MMUMETHOD(mmu_protect, mmu_booke_protect), MMUMETHOD(mmu_qenter, mmu_booke_qenter), MMUMETHOD(mmu_qremove, mmu_booke_qremove), MMUMETHOD(mmu_release, mmu_booke_release), MMUMETHOD(mmu_remove, mmu_booke_remove), MMUMETHOD(mmu_remove_all, mmu_booke_remove_all), MMUMETHOD(mmu_remove_write, mmu_booke_remove_write), MMUMETHOD(mmu_sync_icache, mmu_booke_sync_icache), MMUMETHOD(mmu_unwire, mmu_booke_unwire), MMUMETHOD(mmu_zero_page, mmu_booke_zero_page), MMUMETHOD(mmu_zero_page_area, mmu_booke_zero_page_area), MMUMETHOD(mmu_activate, mmu_booke_activate), MMUMETHOD(mmu_deactivate, mmu_booke_deactivate), MMUMETHOD(mmu_quick_enter_page, mmu_booke_quick_enter_page), MMUMETHOD(mmu_quick_remove_page, mmu_booke_quick_remove_page), /* Internal interfaces */ MMUMETHOD(mmu_bootstrap, mmu_booke_bootstrap), MMUMETHOD(mmu_dev_direct_mapped,mmu_booke_dev_direct_mapped), MMUMETHOD(mmu_mapdev, mmu_booke_mapdev), MMUMETHOD(mmu_mapdev_attr, mmu_booke_mapdev_attr), MMUMETHOD(mmu_kenter, mmu_booke_kenter), MMUMETHOD(mmu_kenter_attr, mmu_booke_kenter_attr), MMUMETHOD(mmu_kextract, mmu_booke_kextract), MMUMETHOD(mmu_kremove, mmu_booke_kremove), MMUMETHOD(mmu_unmapdev, mmu_booke_unmapdev), MMUMETHOD(mmu_change_attr, mmu_booke_change_attr), MMUMETHOD(mmu_map_user_ptr, mmu_booke_map_user_ptr), MMUMETHOD(mmu_decode_kernel_ptr, mmu_booke_decode_kernel_ptr), /* dumpsys() support */ MMUMETHOD(mmu_dumpsys_map, mmu_booke_dumpsys_map), MMUMETHOD(mmu_dumpsys_unmap, mmu_booke_dumpsys_unmap), MMUMETHOD(mmu_scan_init, mmu_booke_scan_init), { 0, 0 } }; MMU_DEF(booke_mmu, MMU_TYPE_BOOKE, mmu_booke_methods, 0); static __inline uint32_t tlb_calc_wimg(vm_paddr_t pa, vm_memattr_t ma) { uint32_t attrib; int i; if (ma != VM_MEMATTR_DEFAULT) { switch (ma) { case VM_MEMATTR_UNCACHEABLE: return (MAS2_I | MAS2_G); case VM_MEMATTR_WRITE_COMBINING: case VM_MEMATTR_WRITE_BACK: case VM_MEMATTR_PREFETCHABLE: return (MAS2_I); case VM_MEMATTR_WRITE_THROUGH: return (MAS2_W | MAS2_M); case VM_MEMATTR_CACHEABLE: return (MAS2_M); } } /* * Assume the page is cache inhibited and access is guarded unless * it's in our available memory array. 
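 * In other words, an address that falls inside one of the physmem_regions[]
 * entries gets the cacheable _TLB_ENTRY_MEM attributes, while everything
 * else (device space) keeps the guarded _TLB_ENTRY_IO ones.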
 */
	attrib = _TLB_ENTRY_IO;
	for (i = 0; i < physmem_regions_sz; i++) {
		if ((pa >= physmem_regions[i].mr_start) &&
		    (pa < (physmem_regions[i].mr_start +
		    physmem_regions[i].mr_size))) {
			attrib = _TLB_ENTRY_MEM;
			break;
		}
	}

	return (attrib);
}

static inline void
tlb_miss_lock(void)
{
#ifdef SMP
	struct pcpu *pc;

	if (!smp_started)
		return;

	STAILQ_FOREACH(pc, &cpuhead, pc_allcpu) {
		if (pc != pcpup) {
			CTR3(KTR_PMAP, "%s: tlb miss LOCK of CPU=%d, "
			    "tlb_lock=%p", __func__, pc->pc_cpuid,
			    pc->pc_booke.tlb_lock);

			KASSERT((pc->pc_cpuid != PCPU_GET(cpuid)),
			    ("tlb_miss_lock: tried to lock self"));

			tlb_lock(pc->pc_booke.tlb_lock);

			CTR1(KTR_PMAP, "%s: locked", __func__);
		}
	}
#endif
}

static inline void
tlb_miss_unlock(void)
{
#ifdef SMP
	struct pcpu *pc;

	if (!smp_started)
		return;

	STAILQ_FOREACH(pc, &cpuhead, pc_allcpu) {
		if (pc != pcpup) {
			CTR2(KTR_PMAP, "%s: tlb miss UNLOCK of CPU=%d",
			    __func__, pc->pc_cpuid);

			tlb_unlock(pc->pc_booke.tlb_lock);

			CTR1(KTR_PMAP, "%s: unlocked", __func__);
		}
	}
#endif
}

/* Read TLB0 configuration (entry and way counts) from h/w. */
static __inline void
tlb0_get_tlbconf(void)
{
	uint32_t tlb0_cfg;

	tlb0_cfg = mfspr(SPR_TLB0CFG);
	tlb0_entries = tlb0_cfg & TLBCFG_NENTRY_MASK;
	tlb0_ways = (tlb0_cfg & TLBCFG_ASSOC_MASK) >> TLBCFG_ASSOC_SHIFT;
	tlb0_entries_per_way = tlb0_entries / tlb0_ways;
}

/* Read TLB1 configuration (number of entries) from h/w. */
static __inline void
tlb1_get_tlbconf(void)
{
	uint32_t tlb1_cfg;

	tlb1_cfg = mfspr(SPR_TLB1CFG);
	tlb1_entries = tlb1_cfg & TLBCFG_NENTRY_MASK;
}

/**************************************************************************/
/* Page table related */
/**************************************************************************/

#ifdef __powerpc64__
/* Initialize pool of kva ptbl buffers. */
static void
ptbl_init(void)
{
	int i;

	mtx_init(&ptbl_buf_freelist_lock, "ptbl bufs lock", NULL, MTX_DEF);
	TAILQ_INIT(&ptbl_buf_freelist);

	for (i = 0; i < PTBL_BUFS; i++) {
		ptbl_bufs[i].kva = ptbl_buf_pool_vabase +
		    i * MAX(PTBL_PAGES, PDIR_PAGES) * PAGE_SIZE;
		TAILQ_INSERT_TAIL(&ptbl_buf_freelist, &ptbl_bufs[i], link);
	}
}

/* Get a ptbl_buf from the freelist. */
static struct ptbl_buf *
ptbl_buf_alloc(void)
{
	struct ptbl_buf *buf;

	mtx_lock(&ptbl_buf_freelist_lock);
	buf = TAILQ_FIRST(&ptbl_buf_freelist);
	if (buf != NULL)
		TAILQ_REMOVE(&ptbl_buf_freelist, buf, link);
	mtx_unlock(&ptbl_buf_freelist_lock);

	return (buf);
}

/* Return ptbl buf to free pool. */
static void
ptbl_buf_free(struct ptbl_buf *buf)
{
	mtx_lock(&ptbl_buf_freelist_lock);
	TAILQ_INSERT_TAIL(&ptbl_buf_freelist, buf, link);
	mtx_unlock(&ptbl_buf_freelist_lock);
}

/*
 * Search the pmap's list of allocated ptbl bufs for the buf backing the
 * given ptbl and return it to the free pool.
 */
static void
ptbl_free_pmap_ptbl(pmap_t pmap, pte_t * ptbl)
{
	struct ptbl_buf *pbuf;

	TAILQ_FOREACH(pbuf, &pmap->pm_ptbl_list, link) {
		if (pbuf->kva == (vm_offset_t) ptbl) {
			/* Remove from pmap ptbl buf list. */
			TAILQ_REMOVE(&pmap->pm_ptbl_list, pbuf, link);

			/* Free corresponding ptbl buf. */
			ptbl_buf_free(pbuf);
			break;
		}
	}
}

/* Get a pointer to a PTE in a page table. */
static __inline pte_t *
pte_find(mmu_t mmu, pmap_t pmap, vm_offset_t va)
{
	pte_t **pdir;
	pte_t *ptbl;

	KASSERT((pmap != NULL), ("pte_find: invalid pmap"));

	pdir = pmap->pm_pp2d[PP2D_IDX(va)];
	if (!pdir)
		return NULL;
	ptbl = pdir[PDIR_IDX(va)];

	return ((ptbl != NULL) ?
	    &ptbl[PTBL_IDX(va)] : NULL);
}

/*
 * Search the pmap's list of allocated pdir bufs for the buf backing the
 * given pdir and return it to the free pool.
 */
static void
ptbl_free_pmap_pdir(mmu_t mmu, pmap_t pmap, pte_t ** pdir)
{
	struct ptbl_buf *pbuf;

	TAILQ_FOREACH(pbuf, &pmap->pm_pdir_list, link) {
		if (pbuf->kva == (vm_offset_t) pdir) {
			/* Remove from pmap ptbl buf list. */
			TAILQ_REMOVE(&pmap->pm_pdir_list, pbuf, link);

			/* Free corresponding pdir buf. */
			ptbl_buf_free(pbuf);
			break;
		}
	}
}

/* Free pdir pages and invalidate pdir entry. */
static void
pdir_free(mmu_t mmu, pmap_t pmap, unsigned int pp2d_idx)
{
	pte_t **pdir;
	vm_paddr_t pa;
	vm_offset_t va;
	vm_page_t m;
	int i;

	pdir = pmap->pm_pp2d[pp2d_idx];

	KASSERT((pdir != NULL), ("pdir_free: null pdir"));

	pmap->pm_pp2d[pp2d_idx] = NULL;

	for (i = 0; i < PDIR_PAGES; i++) {
		va = ((vm_offset_t) pdir + (i * PAGE_SIZE));
		pa = pte_vatopa(mmu, kernel_pmap, va);
		m = PHYS_TO_VM_PAGE(pa);
		vm_page_free_zero(m);
		vm_wire_sub(1);
		pmap_kremove(va);
	}

	ptbl_free_pmap_pdir(mmu, pmap, pdir);
}

/*
 * Decrement pdir pages hold count and attempt to free pdir pages. Called
 * when removing directory entry from pdir.
 *
 * Return 1 if pdir pages were freed.
 */
static int
pdir_unhold(mmu_t mmu, pmap_t pmap, u_int pp2d_idx)
{
	pte_t **pdir;
	vm_paddr_t pa;
	vm_page_t m;
	int i;

	KASSERT((pmap != kernel_pmap),
	    ("pdir_unhold: unholding kernel pdir!"));

	pdir = pmap->pm_pp2d[pp2d_idx];

	KASSERT(((vm_offset_t) pdir >= VM_MIN_KERNEL_ADDRESS),
	    ("pdir_unhold: non kva pdir"));

	/* decrement hold count */
	for (i = 0; i < PDIR_PAGES; i++) {
		pa = pte_vatopa(mmu, kernel_pmap,
		    (vm_offset_t) pdir + (i * PAGE_SIZE));
		m = PHYS_TO_VM_PAGE(pa);
		m->wire_count--;
	}

	/*
	 * Free pdir pages if there are no dir entries in this pdir.
	 * wire_count has the same value for all ptbl pages, so check the
	 * last page.
	 */
	if (m->wire_count == 0) {
		pdir_free(mmu, pmap, pp2d_idx);
		return (1);
	}

	return (0);
}

/*
 * Increment hold count for pdir pages. This routine is used when a new ptbl
 * entry is being inserted into the pdir.
 */
static void
pdir_hold(mmu_t mmu, pmap_t pmap, pte_t ** pdir)
{
	vm_paddr_t pa;
	vm_page_t m;
	int i;

	KASSERT((pmap != kernel_pmap),
	    ("pdir_hold: holding kernel pdir!"));

	KASSERT((pdir != NULL), ("pdir_hold: null pdir"));

	for (i = 0; i < PDIR_PAGES; i++) {
		pa = pte_vatopa(mmu, kernel_pmap,
		    (vm_offset_t) pdir + (i * PAGE_SIZE));
		m = PHYS_TO_VM_PAGE(pa);
		m->wire_count++;
	}
}

/* Allocate page table. */
static pte_t *
ptbl_alloc(mmu_t mmu, pmap_t pmap, pte_t ** pdir, unsigned int pdir_idx,
    boolean_t nosleep)
{
	vm_page_t mtbl [PTBL_PAGES];
	vm_page_t m;
	struct ptbl_buf *pbuf;
	unsigned int pidx;
	pte_t *ptbl;
	int i, j;
	int req;

	KASSERT((pdir[pdir_idx] == NULL),
	    ("%s: valid ptbl entry exists!", __func__));

	pbuf = ptbl_buf_alloc();
	if (pbuf == NULL)
		panic("%s: couldn't alloc kernel virtual memory", __func__);

	ptbl = (pte_t *) pbuf->kva;

	for (i = 0; i < PTBL_PAGES; i++) {
		pidx = (PTBL_PAGES * pdir_idx) + i;
		req = VM_ALLOC_NOOBJ | VM_ALLOC_WIRED;
		while ((m = vm_page_alloc(NULL, pidx, req)) == NULL) {
			PMAP_UNLOCK(pmap);
			rw_wunlock(&pvh_global_lock);
			if (nosleep) {
				ptbl_free_pmap_ptbl(pmap, ptbl);
				for (j = 0; j < i; j++)
					vm_page_free(mtbl[j]);
				vm_wire_sub(i);
				return (NULL);
			}
			vm_wait(NULL);
			rw_wlock(&pvh_global_lock);
			PMAP_LOCK(pmap);
		}
		mtbl[i] = m;
	}

	/* Map the allocated pages into kernel_pmap. */
	mmu_booke_qenter(mmu, (vm_offset_t) ptbl, mtbl, PTBL_PAGES);
	/* Zero whole ptbl. */
	bzero((caddr_t) ptbl, PTBL_PAGES * PAGE_SIZE);

	/*
	 * Add pbuf to the pmap ptbl bufs list.
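	 * ptbl_free_pmap_ptbl() searches this list later to locate the buf
	 * and return it to the free pool when the ptbl is released.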
*/ TAILQ_INSERT_TAIL(&pmap->pm_ptbl_list, pbuf, link); return (ptbl); } /* Free ptbl pages and invalidate pdir entry. */ static void ptbl_free(mmu_t mmu, pmap_t pmap, pte_t ** pdir, unsigned int pdir_idx) { pte_t *ptbl; vm_paddr_t pa; vm_offset_t va; vm_page_t m; int i; ptbl = pdir[pdir_idx]; KASSERT((ptbl != NULL), ("ptbl_free: null ptbl")); pdir[pdir_idx] = NULL; for (i = 0; i < PTBL_PAGES; i++) { va = ((vm_offset_t) ptbl + (i * PAGE_SIZE)); pa = pte_vatopa(mmu, kernel_pmap, va); m = PHYS_TO_VM_PAGE(pa); vm_page_free_zero(m); vm_wire_sub(1); pmap_kremove(va); } ptbl_free_pmap_ptbl(pmap, ptbl); } /* * Decrement ptbl pages hold count and attempt to free ptbl pages. Called * when removing pte entry from ptbl. * * Return 1 if ptbl pages were freed. */ static int ptbl_unhold(mmu_t mmu, pmap_t pmap, vm_offset_t va) { pte_t *ptbl; vm_paddr_t pa; vm_page_t m; u_int pp2d_idx; pte_t **pdir; u_int pdir_idx; int i; pp2d_idx = PP2D_IDX(va); pdir_idx = PDIR_IDX(va); KASSERT((pmap != kernel_pmap), ("ptbl_unhold: unholding kernel ptbl!")); pdir = pmap->pm_pp2d[pp2d_idx]; ptbl = pdir[pdir_idx]; KASSERT(((vm_offset_t) ptbl >= VM_MIN_KERNEL_ADDRESS), ("ptbl_unhold: non kva ptbl")); /* decrement hold count */ for (i = 0; i < PTBL_PAGES; i++) { pa = pte_vatopa(mmu, kernel_pmap, (vm_offset_t) ptbl + (i * PAGE_SIZE)); m = PHYS_TO_VM_PAGE(pa); m->wire_count--; } /* * Free ptbl pages if there are no pte entries in this ptbl. * wire_count has the same value for all ptbl pages, so check the * last page. */ if (m->wire_count == 0) { /* A pair of indirect entries might point to this ptbl page */ #if 0 tlb_flush_entry(pmap, va & ~((2UL * PAGE_SIZE_1M) - 1), TLB_SIZE_1M, MAS6_SIND); tlb_flush_entry(pmap, (va & ~((2UL * PAGE_SIZE_1M) - 1)) | PAGE_SIZE_1M, TLB_SIZE_1M, MAS6_SIND); #endif ptbl_free(mmu, pmap, pdir, pdir_idx); pdir_unhold(mmu, pmap, pp2d_idx); return (1); } return (0); } /* * Increment hold count for ptbl pages. This routine is used when new pte * entry is being inserted into ptbl. */ static void ptbl_hold(mmu_t mmu, pmap_t pmap, pte_t ** pdir, unsigned int pdir_idx) { vm_paddr_t pa; pte_t *ptbl; vm_page_t m; int i; KASSERT((pmap != kernel_pmap), ("ptbl_hold: holding kernel ptbl!")); ptbl = pdir[pdir_idx]; KASSERT((ptbl != NULL), ("ptbl_hold: null ptbl")); for (i = 0; i < PTBL_PAGES; i++) { pa = pte_vatopa(mmu, kernel_pmap, (vm_offset_t) ptbl + (i * PAGE_SIZE)); m = PHYS_TO_VM_PAGE(pa); m->wire_count++; } } #else /* Initialize pool of kva ptbl buffers. */ static void ptbl_init(void) { int i; CTR3(KTR_PMAP, "%s: s (ptbl_bufs = 0x%08x size 0x%08x)", __func__, (uint32_t)ptbl_bufs, sizeof(struct ptbl_buf) * PTBL_BUFS); CTR3(KTR_PMAP, "%s: s (ptbl_buf_pool_vabase = 0x%08x size = 0x%08x)", __func__, ptbl_buf_pool_vabase, PTBL_BUFS * PTBL_PAGES * PAGE_SIZE); mtx_init(&ptbl_buf_freelist_lock, "ptbl bufs lock", NULL, MTX_DEF); TAILQ_INIT(&ptbl_buf_freelist); for (i = 0; i < PTBL_BUFS; i++) { ptbl_bufs[i].kva = ptbl_buf_pool_vabase + i * PTBL_PAGES * PAGE_SIZE; TAILQ_INSERT_TAIL(&ptbl_buf_freelist, &ptbl_bufs[i], link); } } /* Get a ptbl_buf from the freelist. */ static struct ptbl_buf * ptbl_buf_alloc(void) { struct ptbl_buf *buf; mtx_lock(&ptbl_buf_freelist_lock); buf = TAILQ_FIRST(&ptbl_buf_freelist); if (buf != NULL) TAILQ_REMOVE(&ptbl_buf_freelist, buf, link); mtx_unlock(&ptbl_buf_freelist_lock); CTR2(KTR_PMAP, "%s: buf = %p", __func__, buf); return (buf); } /* Return ptbl buff to free pool. 
 */
static void
ptbl_buf_free(struct ptbl_buf *buf)
{
	CTR2(KTR_PMAP, "%s: buf = %p", __func__, buf);

	mtx_lock(&ptbl_buf_freelist_lock);
	TAILQ_INSERT_TAIL(&ptbl_buf_freelist, buf, link);
	mtx_unlock(&ptbl_buf_freelist_lock);
}

/*
 * Search the pmap's list of allocated ptbl bufs for the buf backing the
 * given ptbl and return it to the free pool.
 */
static void
ptbl_free_pmap_ptbl(pmap_t pmap, pte_t *ptbl)
{
	struct ptbl_buf *pbuf;

	CTR2(KTR_PMAP, "%s: ptbl = %p", __func__, ptbl);

	PMAP_LOCK_ASSERT(pmap, MA_OWNED);

	TAILQ_FOREACH(pbuf, &pmap->pm_ptbl_list, link)
		if (pbuf->kva == (vm_offset_t)ptbl) {
			/* Remove from pmap ptbl buf list. */
			TAILQ_REMOVE(&pmap->pm_ptbl_list, pbuf, link);

			/* Free corresponding ptbl buf. */
			ptbl_buf_free(pbuf);
			break;
		}
}

/* Allocate page table. */
static pte_t *
ptbl_alloc(mmu_t mmu, pmap_t pmap, unsigned int pdir_idx, boolean_t nosleep)
{
	vm_page_t mtbl[PTBL_PAGES];
	vm_page_t m;
	struct ptbl_buf *pbuf;
	unsigned int pidx;
	pte_t *ptbl;
	int i, j;

	CTR4(KTR_PMAP, "%s: pmap = %p su = %d pdir_idx = %d", __func__, pmap,
	    (pmap == kernel_pmap), pdir_idx);

	KASSERT((pdir_idx <= (VM_MAXUSER_ADDRESS / PDIR_SIZE)),
	    ("ptbl_alloc: invalid pdir_idx"));
	KASSERT((pmap->pm_pdir[pdir_idx] == NULL),
	    ("pte_alloc: valid ptbl entry exists!"));

	pbuf = ptbl_buf_alloc();
	if (pbuf == NULL)
		panic("pte_alloc: couldn't alloc kernel virtual memory");

	ptbl = (pte_t *)pbuf->kva;

	CTR2(KTR_PMAP, "%s: ptbl kva = %p", __func__, ptbl);

	for (i = 0; i < PTBL_PAGES; i++) {
		pidx = (PTBL_PAGES * pdir_idx) + i;
		while ((m = vm_page_alloc(NULL, pidx,
		    VM_ALLOC_NOOBJ | VM_ALLOC_WIRED)) == NULL) {
			PMAP_UNLOCK(pmap);
			rw_wunlock(&pvh_global_lock);
			if (nosleep) {
				ptbl_free_pmap_ptbl(pmap, ptbl);
				for (j = 0; j < i; j++)
					vm_page_free(mtbl[j]);
				vm_wire_sub(i);
				return (NULL);
			}
			vm_wait(NULL);
			rw_wlock(&pvh_global_lock);
			PMAP_LOCK(pmap);
		}
		mtbl[i] = m;
	}

	/* Map allocated pages into kernel_pmap. */
	mmu_booke_qenter(mmu, (vm_offset_t)ptbl, mtbl, PTBL_PAGES);

	/* Zero whole ptbl. */
	bzero((caddr_t)ptbl, PTBL_PAGES * PAGE_SIZE);

	/* Add pbuf to the pmap ptbl bufs list. */
	TAILQ_INSERT_TAIL(&pmap->pm_ptbl_list, pbuf, link);

	return (ptbl);
}

/* Free ptbl pages and invalidate pdir entry. */
static void
ptbl_free(mmu_t mmu, pmap_t pmap, unsigned int pdir_idx)
{
	pte_t *ptbl;
	vm_paddr_t pa;
	vm_offset_t va;
	vm_page_t m;
	int i;

	CTR4(KTR_PMAP, "%s: pmap = %p su = %d pdir_idx = %d", __func__, pmap,
	    (pmap == kernel_pmap), pdir_idx);

	KASSERT((pdir_idx <= (VM_MAXUSER_ADDRESS / PDIR_SIZE)),
	    ("ptbl_free: invalid pdir_idx"));

	ptbl = pmap->pm_pdir[pdir_idx];

	CTR2(KTR_PMAP, "%s: ptbl = %p", __func__, ptbl);

	KASSERT((ptbl != NULL), ("ptbl_free: null ptbl"));

	/*
	 * Invalidate the pdir entry as soon as possible, so that other CPUs
	 * don't attempt to look up the page tables we are releasing.
	 */
	mtx_lock_spin(&tlbivax_mutex);
	tlb_miss_lock();

	pmap->pm_pdir[pdir_idx] = NULL;

	tlb_miss_unlock();
	mtx_unlock_spin(&tlbivax_mutex);

	for (i = 0; i < PTBL_PAGES; i++) {
		va = ((vm_offset_t)ptbl + (i * PAGE_SIZE));
		pa = pte_vatopa(mmu, kernel_pmap, va);
		m = PHYS_TO_VM_PAGE(pa);
		vm_page_free_zero(m);
		vm_wire_sub(1);
		mmu_booke_kremove(mmu, va);
	}

	ptbl_free_pmap_ptbl(pmap, ptbl);
}

/*
 * Decrement ptbl pages hold count and attempt to free ptbl pages.
 * Called when removing pte entry from ptbl.
 *
 * Return 1 if ptbl pages were freed.
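 * The hold count is kept in the wire_count of each ptbl page and mirrors
 * the number of valid pte entries in the ptbl.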
 */
static int
ptbl_unhold(mmu_t mmu, pmap_t pmap, unsigned int pdir_idx)
{
	pte_t *ptbl;
	vm_paddr_t pa;
	vm_page_t m;
	int i;

	CTR4(KTR_PMAP, "%s: pmap = %p su = %d pdir_idx = %d", __func__, pmap,
	    (pmap == kernel_pmap), pdir_idx);

	KASSERT((pdir_idx <= (VM_MAXUSER_ADDRESS / PDIR_SIZE)),
	    ("ptbl_unhold: invalid pdir_idx"));
	KASSERT((pmap != kernel_pmap),
	    ("ptbl_unhold: unholding kernel ptbl!"));

	ptbl = pmap->pm_pdir[pdir_idx];

	//debugf("ptbl_unhold: ptbl = 0x%08x\n", (u_int32_t)ptbl);
	KASSERT(((vm_offset_t)ptbl >= VM_MIN_KERNEL_ADDRESS),
	    ("ptbl_unhold: non kva ptbl"));

	/* decrement hold count */
	for (i = 0; i < PTBL_PAGES; i++) {
		pa = pte_vatopa(mmu, kernel_pmap,
		    (vm_offset_t)ptbl + (i * PAGE_SIZE));
		m = PHYS_TO_VM_PAGE(pa);
		m->wire_count--;
	}

	/*
	 * Free ptbl pages if there are no pte entries in this ptbl.
	 * wire_count has the same value for all ptbl pages, so check the last
	 * page.
	 */
	if (m->wire_count == 0) {
		ptbl_free(mmu, pmap, pdir_idx);

		//debugf("ptbl_unhold: e (freed ptbl)\n");
		return (1);
	}

	return (0);
}

/*
 * Increment hold count for ptbl pages. This routine is used when a new pte
 * entry is being inserted into the ptbl.
 */
static void
ptbl_hold(mmu_t mmu, pmap_t pmap, unsigned int pdir_idx)
{
	vm_paddr_t pa;
	pte_t *ptbl;
	vm_page_t m;
	int i;

	CTR3(KTR_PMAP, "%s: pmap = %p pdir_idx = %d", __func__, pmap,
	    pdir_idx);

	KASSERT((pdir_idx <= (VM_MAXUSER_ADDRESS / PDIR_SIZE)),
	    ("ptbl_hold: invalid pdir_idx"));
	KASSERT((pmap != kernel_pmap),
	    ("ptbl_hold: holding kernel ptbl!"));

	ptbl = pmap->pm_pdir[pdir_idx];

	KASSERT((ptbl != NULL), ("ptbl_hold: null ptbl"));

	for (i = 0; i < PTBL_PAGES; i++) {
		pa = pte_vatopa(mmu, kernel_pmap,
		    (vm_offset_t)ptbl + (i * PAGE_SIZE));
		m = PHYS_TO_VM_PAGE(pa);
		m->wire_count++;
	}
}
#endif

/* Allocate pv_entry structure. */
static pv_entry_t
pv_alloc(void)
{
	pv_entry_t pv;

	pv_entry_count++;
	if (pv_entry_count > pv_entry_high_water)
		pagedaemon_wakeup(0); /* XXX powerpc NUMA */
	pv = uma_zalloc(pvzone, M_NOWAIT);

	return (pv);
}

/* Free pv_entry structure. */
static __inline void
pv_free(pv_entry_t pve)
{
	pv_entry_count--;
	uma_zfree(pvzone, pve);
}

/* Allocate and initialize pv_entry structure. */
static void
pv_insert(pmap_t pmap, vm_offset_t va, vm_page_t m)
{
	pv_entry_t pve;

	//int su = (pmap == kernel_pmap);
	//debugf("pv_insert: s (su = %d pmap = 0x%08x va = 0x%08x m = 0x%08x)\n", su,
	//	(u_int32_t)pmap, va, (u_int32_t)m);

	pve = pv_alloc();
	if (pve == NULL)
		panic("pv_insert: no pv entries!");

	pve->pv_pmap = pmap;
	pve->pv_va = va;

	/* add to pv_list */
	PMAP_LOCK_ASSERT(pmap, MA_OWNED);
	rw_assert(&pvh_global_lock, RA_WLOCKED);

	TAILQ_INSERT_TAIL(&m->md.pv_list, pve, pv_link);

	//debugf("pv_insert: e\n");
}

/* Destroy pv entry. */
static void
pv_remove(pmap_t pmap, vm_offset_t va, vm_page_t m)
{
	pv_entry_t pve;

	//int su = (pmap == kernel_pmap);
	//debugf("pv_remove: s (su = %d pmap = 0x%08x va = 0x%08x)\n", su, (u_int32_t)pmap, va);

	PMAP_LOCK_ASSERT(pmap, MA_OWNED);
	rw_assert(&pvh_global_lock, RA_WLOCKED);

	/* find pv entry */
	TAILQ_FOREACH(pve, &m->md.pv_list, pv_link) {
		if ((pmap == pve->pv_pmap) && (va == pve->pv_va)) {
			/* remove from pv_list */
			TAILQ_REMOVE(&m->md.pv_list, pve, pv_link);
			if (TAILQ_EMPTY(&m->md.pv_list))
				vm_page_aflag_clear(m, PGA_WRITEABLE);

			/* free pv entry struct */
			pv_free(pve);
			break;
		}
	}

	//debugf("pv_remove: e\n");
}

#ifdef __powerpc64__
/*
 * Clean pte entry, try to free page table page if requested.
 *
 * Return 1 if ptbl pages were freed, otherwise return 0.
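 * The free attempt is only made when the caller passes PTBL_UNHOLD in
 * 'flags'; see ptbl_unhold().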
*/ static int pte_remove(mmu_t mmu, pmap_t pmap, vm_offset_t va, u_int8_t flags) { vm_page_t m; pte_t *pte; pte = pte_find(mmu, pmap, va); KASSERT(pte != NULL, ("%s: NULL pte", __func__)); if (!PTE_ISVALID(pte)) return (0); /* Get vm_page_t for mapped pte. */ m = PHYS_TO_VM_PAGE(PTE_PA(pte)); if (PTE_ISWIRED(pte)) pmap->pm_stats.wired_count--; /* Handle managed entry. */ if (PTE_ISMANAGED(pte)) { /* Handle modified pages. */ if (PTE_ISMODIFIED(pte)) vm_page_dirty(m); /* Referenced pages. */ if (PTE_ISREFERENCED(pte)) vm_page_aflag_set(m, PGA_REFERENCED); /* Remove pv_entry from pv_list. */ pv_remove(pmap, va, m); } else if (m->md.pv_tracked) { pv_remove(pmap, va, m); if (TAILQ_EMPTY(&m->md.pv_list)) m->md.pv_tracked = false; } mtx_lock_spin(&tlbivax_mutex); tlb_miss_lock(); tlb0_flush_entry(va); *pte = 0; tlb_miss_unlock(); mtx_unlock_spin(&tlbivax_mutex); pmap->pm_stats.resident_count--; if (flags & PTBL_UNHOLD) { return (ptbl_unhold(mmu, pmap, va)); } return (0); } /* * allocate a page of pointers to page directories, do not preallocate the * page tables */ static pte_t ** pdir_alloc(mmu_t mmu, pmap_t pmap, unsigned int pp2d_idx, bool nosleep) { vm_page_t mtbl [PDIR_PAGES]; vm_page_t m; struct ptbl_buf *pbuf; pte_t **pdir; unsigned int pidx; int i; int req; pbuf = ptbl_buf_alloc(); if (pbuf == NULL) panic("%s: couldn't alloc kernel virtual memory", __func__); /* Allocate pdir pages, this will sleep! */ for (i = 0; i < PDIR_PAGES; i++) { pidx = (PDIR_PAGES * pp2d_idx) + i; req = VM_ALLOC_NOOBJ | VM_ALLOC_WIRED; while ((m = vm_page_alloc(NULL, pidx, req)) == NULL) { PMAP_UNLOCK(pmap); vm_wait(NULL); PMAP_LOCK(pmap); } mtbl[i] = m; } /* Mapin allocated pages into kernel_pmap. */ pdir = (pte_t **) pbuf->kva; pmap_qenter((vm_offset_t) pdir, mtbl, PDIR_PAGES); /* Zero whole pdir. */ bzero((caddr_t) pdir, PDIR_PAGES * PAGE_SIZE); /* Add pdir to the pmap pdir bufs list. */ TAILQ_INSERT_TAIL(&pmap->pm_pdir_list, pbuf, link); return pdir; } /* * Insert PTE for a given page and virtual address. */ static int pte_enter(mmu_t mmu, pmap_t pmap, vm_page_t m, vm_offset_t va, uint32_t flags, boolean_t nosleep) { unsigned int pp2d_idx = PP2D_IDX(va); unsigned int pdir_idx = PDIR_IDX(va); unsigned int ptbl_idx = PTBL_IDX(va); pte_t *ptbl, *pte; pte_t **pdir; /* Get the page directory pointer. */ pdir = pmap->pm_pp2d[pp2d_idx]; if (pdir == NULL) pdir = pdir_alloc(mmu, pmap, pp2d_idx, nosleep); /* Get the page table pointer. */ ptbl = pdir[pdir_idx]; if (ptbl == NULL) { /* Allocate page table pages. */ ptbl = ptbl_alloc(mmu, pmap, pdir, pdir_idx, nosleep); if (ptbl == NULL) { KASSERT(nosleep, ("nosleep and NULL ptbl")); return (ENOMEM); } } else { /* * Check if there is valid mapping for requested va, if there * is, remove it. */ pte = &pdir[pdir_idx][ptbl_idx]; if (PTE_ISVALID(pte)) { pte_remove(mmu, pmap, va, PTBL_HOLD); } else { /* * pte is not used, increment hold count for ptbl * pages. */ if (pmap != kernel_pmap) ptbl_hold(mmu, pmap, pdir, pdir_idx); } } if (pdir[pdir_idx] == NULL) { if (pmap != kernel_pmap && pmap->pm_pp2d[pp2d_idx] != NULL) pdir_hold(mmu, pmap, pdir); pdir[pdir_idx] = ptbl; } if (pmap->pm_pp2d[pp2d_idx] == NULL) pmap->pm_pp2d[pp2d_idx] = pdir; /* * Insert pv_entry into pv_list for mapped page if part of managed * memory. */ if ((m->oflags & VPO_UNMANAGED) == 0) { flags |= PTE_MANAGED; /* Create and insert pv entry. 
*/ pv_insert(pmap, va, m); } mtx_lock_spin(&tlbivax_mutex); tlb_miss_lock(); tlb0_flush_entry(va); pmap->pm_stats.resident_count++; pte = &pdir[pdir_idx][ptbl_idx]; *pte = PTE_RPN_FROM_PA(VM_PAGE_TO_PHYS(m)); *pte |= (PTE_VALID | flags); tlb_miss_unlock(); mtx_unlock_spin(&tlbivax_mutex); return (0); } /* Return the pa for the given pmap/va. */ static vm_paddr_t pte_vatopa(mmu_t mmu, pmap_t pmap, vm_offset_t va) { vm_paddr_t pa = 0; pte_t *pte; pte = pte_find(mmu, pmap, va); if ((pte != NULL) && PTE_ISVALID(pte)) pa = (PTE_PA(pte) | (va & PTE_PA_MASK)); return (pa); } /* allocate pte entries to manage (addr & mask) to (addr & mask) + size */ static void kernel_pte_alloc(vm_offset_t data_end, vm_offset_t addr, vm_offset_t pdir) { int i, j; vm_offset_t va; pte_t *pte; va = addr; /* Initialize kernel pdir */ for (i = 0; i < kernel_pdirs; i++) { kernel_pmap->pm_pp2d[i + PP2D_IDX(va)] = (pte_t **)(pdir + (i * PAGE_SIZE * PDIR_PAGES)); for (j = PDIR_IDX(va + (i * PAGE_SIZE * PDIR_NENTRIES * PTBL_NENTRIES)); j < PDIR_NENTRIES; j++) { kernel_pmap->pm_pp2d[i + PP2D_IDX(va)][j] = (pte_t *)(pdir + (kernel_pdirs * PAGE_SIZE * PDIR_PAGES) + (((i * PDIR_NENTRIES) + j) * PAGE_SIZE * PTBL_PAGES)); } } /* * Fill in PTEs covering kernel code and data. They are not required * for address translation, as this area is covered by static TLB1 * entries, but for pte_vatopa() to work correctly with kernel area * addresses. */ for (va = addr; va < data_end; va += PAGE_SIZE) { pte = &(kernel_pmap->pm_pp2d[PP2D_IDX(va)][PDIR_IDX(va)][PTBL_IDX(va)]); *pte = PTE_RPN_FROM_PA(kernload + (va - kernstart)); *pte |= PTE_M | PTE_SR | PTE_SW | PTE_SX | PTE_WIRED | PTE_VALID | PTE_PS_4KB; } } #else /* * Clean pte entry, try to free page table page if requested. * * Return 1 if ptbl pages were freed, otherwise return 0. */ static int pte_remove(mmu_t mmu, pmap_t pmap, vm_offset_t va, uint8_t flags) { unsigned int pdir_idx = PDIR_IDX(va); unsigned int ptbl_idx = PTBL_IDX(va); vm_page_t m; pte_t *ptbl; pte_t *pte; //int su = (pmap == kernel_pmap); //debugf("pte_remove: s (su = %d pmap = 0x%08x va = 0x%08x flags = %d)\n", // su, (u_int32_t)pmap, va, flags); ptbl = pmap->pm_pdir[pdir_idx]; KASSERT(ptbl, ("pte_remove: null ptbl")); pte = &ptbl[ptbl_idx]; if (pte == NULL || !PTE_ISVALID(pte)) return (0); if (PTE_ISWIRED(pte)) pmap->pm_stats.wired_count--; /* Get vm_page_t for mapped pte. */ m = PHYS_TO_VM_PAGE(PTE_PA(pte)); /* Handle managed entry. */ if (PTE_ISMANAGED(pte)) { if (PTE_ISMODIFIED(pte)) vm_page_dirty(m); if (PTE_ISREFERENCED(pte)) vm_page_aflag_set(m, PGA_REFERENCED); pv_remove(pmap, va, m); } else if (m->md.pv_tracked) { /* * Always pv_insert()/pv_remove() on MPC85XX, in case DPAA is * used. This is needed by the NCSW support code for fast * VA<->PA translation. */ pv_remove(pmap, va, m); if (TAILQ_EMPTY(&m->md.pv_list)) m->md.pv_tracked = false; } mtx_lock_spin(&tlbivax_mutex); tlb_miss_lock(); tlb0_flush_entry(va); *pte = 0; tlb_miss_unlock(); mtx_unlock_spin(&tlbivax_mutex); pmap->pm_stats.resident_count--; if (flags & PTBL_UNHOLD) { //debugf("pte_remove: e (unhold)\n"); return (ptbl_unhold(mmu, pmap, pdir_idx)); } //debugf("pte_remove: e\n"); return (0); } /* * Insert PTE for a given page and virtual address. 
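 * Returns 0 on success, or ENOMEM if a required page table could not be
 * allocated and 'nosleep' was requested.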
*/ static int pte_enter(mmu_t mmu, pmap_t pmap, vm_page_t m, vm_offset_t va, uint32_t flags, boolean_t nosleep) { unsigned int pdir_idx = PDIR_IDX(va); unsigned int ptbl_idx = PTBL_IDX(va); pte_t *ptbl, *pte; CTR4(KTR_PMAP, "%s: su = %d pmap = %p va = %p", __func__, pmap == kernel_pmap, pmap, va); /* Get the page table pointer. */ ptbl = pmap->pm_pdir[pdir_idx]; if (ptbl == NULL) { /* Allocate page table pages. */ ptbl = ptbl_alloc(mmu, pmap, pdir_idx, nosleep); if (ptbl == NULL) { KASSERT(nosleep, ("nosleep and NULL ptbl")); return (ENOMEM); } } else { /* * Check if there is valid mapping for requested * va, if there is, remove it. */ pte = &pmap->pm_pdir[pdir_idx][ptbl_idx]; if (PTE_ISVALID(pte)) { pte_remove(mmu, pmap, va, PTBL_HOLD); } else { /* * pte is not used, increment hold count * for ptbl pages. */ if (pmap != kernel_pmap) ptbl_hold(mmu, pmap, pdir_idx); } } /* * Insert pv_entry into pv_list for mapped page if part of managed * memory. */ if ((m->oflags & VPO_UNMANAGED) == 0) { flags |= PTE_MANAGED; /* Create and insert pv entry. */ pv_insert(pmap, va, m); } pmap->pm_stats.resident_count++; mtx_lock_spin(&tlbivax_mutex); tlb_miss_lock(); tlb0_flush_entry(va); if (pmap->pm_pdir[pdir_idx] == NULL) { /* * If we just allocated a new page table, hook it in * the pdir. */ pmap->pm_pdir[pdir_idx] = ptbl; } pte = &(pmap->pm_pdir[pdir_idx][ptbl_idx]); *pte = PTE_RPN_FROM_PA(VM_PAGE_TO_PHYS(m)); *pte |= (PTE_VALID | flags | PTE_PS_4KB); /* 4KB pages only */ tlb_miss_unlock(); mtx_unlock_spin(&tlbivax_mutex); return (0); } /* Return the pa for the given pmap/va. */ static vm_paddr_t pte_vatopa(mmu_t mmu, pmap_t pmap, vm_offset_t va) { vm_paddr_t pa = 0; pte_t *pte; pte = pte_find(mmu, pmap, va); if ((pte != NULL) && PTE_ISVALID(pte)) pa = (PTE_PA(pte) | (va & PTE_PA_MASK)); return (pa); } /* Get a pointer to a PTE in a page table. */ static pte_t * pte_find(mmu_t mmu, pmap_t pmap, vm_offset_t va) { unsigned int pdir_idx = PDIR_IDX(va); unsigned int ptbl_idx = PTBL_IDX(va); KASSERT((pmap != NULL), ("pte_find: invalid pmap")); if (pmap->pm_pdir[pdir_idx]) return (&(pmap->pm_pdir[pdir_idx][ptbl_idx])); return (NULL); } /* Set up kernel page tables. */ static void kernel_pte_alloc(vm_offset_t data_end, vm_offset_t addr, vm_offset_t pdir) { int i; vm_offset_t va; pte_t *pte; /* Initialize kernel pdir */ for (i = 0; i < kernel_ptbls; i++) kernel_pmap->pm_pdir[kptbl_min + i] = (pte_t *)(pdir + (i * PAGE_SIZE * PTBL_PAGES)); /* * Fill in PTEs covering kernel code and data. They are not required * for address translation, as this area is covered by static TLB1 * entries, but for pte_vatopa() to work correctly with kernel area * addresses. */ for (va = addr; va < data_end; va += PAGE_SIZE) { pte = &(kernel_pmap->pm_pdir[PDIR_IDX(va)][PTBL_IDX(va)]); *pte = PTE_RPN_FROM_PA(kernload + (va - kernstart)); *pte |= PTE_M | PTE_SR | PTE_SW | PTE_SX | PTE_WIRED | PTE_VALID | PTE_PS_4KB; } } #endif /**************************************************************************/ /* PMAP related */ /**************************************************************************/ /* * This is called during booke_init, before the system is really initialized. 
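 * It places the early boot structures (dpcpu area, message buffer,
 * ptbl bufs and kernel page tables) directly after the kernel image,
 * builds phys_avail[] from the firmware-provided memory regions, and
 * sets up the statically allocated kernel pmap and thread0's kstack.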
 */
static void
mmu_booke_bootstrap(mmu_t mmu, vm_offset_t start, vm_offset_t kernelend)
{
	vm_paddr_t phys_kernelend;
	struct mem_region *mp, *mp1;
	int cnt, i, j;
	vm_paddr_t s, e, sz;
	vm_paddr_t physsz, hwphyssz;
	u_int phys_avail_count;
	vm_size_t kstack0_sz;
	vm_offset_t kernel_pdir, kstack0;
	vm_paddr_t kstack0_phys;
	void *dpcpu;

	debugf("mmu_booke_bootstrap: entered\n");

	/* Set interesting system properties */
#ifdef __powerpc64__
	hw_direct_map = 1;
#else
	hw_direct_map = 0;
#endif
#if defined(COMPAT_FREEBSD32) || !defined(__powerpc64__)
	elf32_nxstack = 1;
#endif

	/* Initialize invalidation mutex */
	mtx_init(&tlbivax_mutex, "tlbivax", NULL, MTX_SPIN);

	/* Read TLB0 size and associativity. */
	tlb0_get_tlbconf();

	/*
	 * Align kernel start and end address (kernel image).
	 * Note that kernel end does not necessarily relate to kernsize.
	 * kernsize is the size of the kernel that is actually mapped.
	 */
	kernstart = trunc_page(start);
	data_start = round_page(kernelend);
	data_end = data_start;

	/* Allocate the dynamic per-cpu area. */
	dpcpu = (void *)data_end;
	data_end += DPCPU_SIZE;

	/* Allocate space for the message buffer. */
	msgbufp = (struct msgbuf *)data_end;
	data_end += msgbufsize;
	debugf(" msgbufp at 0x%"PRI0ptrX" end = 0x%"PRI0ptrX"\n",
	    (uintptr_t)msgbufp, data_end);

	data_end = round_page(data_end);

	/* Allocate space for ptbl_bufs. */
	ptbl_bufs = (struct ptbl_buf *)data_end;
	data_end += sizeof(struct ptbl_buf) * PTBL_BUFS;
	debugf(" ptbl_bufs at 0x%"PRI0ptrX" end = 0x%"PRI0ptrX"\n",
	    (uintptr_t)ptbl_bufs, data_end);

	data_end = round_page(data_end);

	/* Allocate PTE tables for kernel KVA. */
	kernel_pdir = data_end;
	kernel_ptbls = howmany(VM_MAX_KERNEL_ADDRESS -
	    VM_MIN_KERNEL_ADDRESS, PDIR_SIZE);
#ifdef __powerpc64__
	kernel_pdirs = howmany(kernel_ptbls, PDIR_NENTRIES);
	data_end += kernel_pdirs * PDIR_PAGES * PAGE_SIZE;
#endif
	data_end += kernel_ptbls * PTBL_PAGES * PAGE_SIZE;
	debugf(" kernel ptbls: %d\n", kernel_ptbls);
	debugf(" kernel pdir at 0x%"PRI0ptrX" end = 0x%"PRI0ptrX"\n",
	    kernel_pdir, data_end);

	debugf(" data_end: 0x%"PRI0ptrX"\n", data_end);
	if (data_end - kernstart > kernsize) {
		kernsize += tlb1_mapin_region(kernstart + kernsize,
		    kernload + kernsize, (data_end - kernstart) - kernsize);
	}
	data_end = kernstart + kernsize;
	debugf(" updated data_end: 0x%"PRI0ptrX"\n", data_end);

	/*
	 * Clear the structures - note we can only do it safely after the
	 * possible additional TLB1 translations are in place (above) so that
	 * all range up to the currently calculated 'data_end' is covered.
	 */
	dpcpu_init(dpcpu, 0);
	memset((void *)ptbl_bufs, 0, sizeof(struct ptbl_buf) * PTBL_BUFS);
#ifdef __powerpc64__
	memset((void *)kernel_pdir, 0,
	    kernel_pdirs * PDIR_PAGES * PAGE_SIZE +
	    kernel_ptbls * PTBL_PAGES * PAGE_SIZE);
#else
	memset((void *)kernel_pdir, 0, kernel_ptbls * PTBL_PAGES * PAGE_SIZE);
#endif

	/*******************************************************/
	/* Set the start and end of kva. */
	/*******************************************************/
	virtual_avail = round_page(data_end);
	virtual_end = VM_MAX_KERNEL_ADDRESS;

	/* Allocate KVA space for page zero/copy operations. */
	zero_page_va = virtual_avail;
	virtual_avail += PAGE_SIZE;
	copy_page_src_va = virtual_avail;
	virtual_avail += PAGE_SIZE;
	copy_page_dst_va = virtual_avail;
	virtual_avail += PAGE_SIZE;
	debugf("zero_page_va = 0x%"PRI0ptrX"\n", zero_page_va);
	debugf("copy_page_src_va = 0x%"PRI0ptrX"\n", copy_page_src_va);
	debugf("copy_page_dst_va = 0x%"PRI0ptrX"\n", copy_page_dst_va);

	/*
	 * Initialize page zero/copy mutexes.
 */
	mtx_init(&zero_page_mutex, "mmu_booke_zero_page", NULL, MTX_DEF);
	mtx_init(&copy_page_mutex, "mmu_booke_copy_page", NULL, MTX_DEF);

	/* Allocate KVA space for ptbl bufs. */
	ptbl_buf_pool_vabase = virtual_avail;
	virtual_avail += PTBL_BUFS * PTBL_PAGES * PAGE_SIZE;
	debugf("ptbl_buf_pool_vabase = 0x%"PRI0ptrX" end = 0x%"PRI0ptrX"\n",
	    ptbl_buf_pool_vabase, virtual_avail);

	/* Calculate corresponding physical addresses for the kernel region. */
	phys_kernelend = kernload + kernsize;
	debugf("kernel image and allocated data:\n");
	debugf(" kernload = 0x%09llx\n", (uint64_t)kernload);
	debugf(" kernstart = 0x%"PRI0ptrX"\n", kernstart);
	debugf(" kernsize = 0x%"PRI0ptrX"\n", kernsize);

	if (sizeof(phys_avail) / sizeof(phys_avail[0]) < availmem_regions_sz)
		panic("mmu_booke_bootstrap: phys_avail too small");

	/*
	 * Remove kernel physical address range from avail regions list. Page
	 * align all regions. Non-page aligned memory isn't very interesting
	 * to us. Also, sort the entries for ascending addresses.
	 */

	/* Retrieve phys/avail mem regions */
	mem_regions(&physmem_regions, &physmem_regions_sz,
	    &availmem_regions, &availmem_regions_sz);
	sz = 0;
	cnt = availmem_regions_sz;
	debugf("processing avail regions:\n");
	for (mp = availmem_regions; mp->mr_size; mp++) {
		s = mp->mr_start;
		e = mp->mr_start + mp->mr_size;
		debugf(" %09jx-%09jx -> ", (uintmax_t)s, (uintmax_t)e);
		/* Check whether this region holds all of the kernel. */
		if (s < kernload && e > phys_kernelend) {
			availmem_regions[cnt].mr_start = phys_kernelend;
			availmem_regions[cnt++].mr_size = e - phys_kernelend;
			e = kernload;
		}
		/* Check whether this region starts within the kernel. */
		if (s >= kernload && s < phys_kernelend) {
			if (e <= phys_kernelend)
				goto empty;
			s = phys_kernelend;
		}
		/* Now check whether this region ends within the kernel. */
		if (e > kernload && e <= phys_kernelend) {
			if (s >= kernload)
				goto empty;
			e = kernload;
		}
		/* Now page align the start and size of the region. */
		s = round_page(s);
		e = trunc_page(e);
		if (e < s)
			e = s;
		sz = e - s;
		debugf("%09jx-%09jx = %jx\n",
		    (uintmax_t)s, (uintmax_t)e, (uintmax_t)sz);

		/* Check whether some memory is left here. */
		if (sz == 0) {
		empty:
			memmove(mp, mp + 1,
			    (cnt - (mp - availmem_regions)) * sizeof(*mp));
			cnt--;
			mp--;
			continue;
		}

		/*
		 * Do an insertion sort.
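		 * This keeps availmem_regions sorted by ascending start
		 * address, which the phys_avail[] fill loop below relies on.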
*/ for (mp1 = availmem_regions; mp1 < mp; mp1++) if (s < mp1->mr_start) break; if (mp1 < mp) { memmove(mp1 + 1, mp1, (char *)mp - (char *)mp1); mp1->mr_start = s; mp1->mr_size = sz; } else { mp->mr_start = s; mp->mr_size = sz; } } availmem_regions_sz = cnt; /*******************************************************/ /* Steal physical memory for kernel stack from the end */ /* of the first avail region */ /*******************************************************/ kstack0_sz = kstack_pages * PAGE_SIZE; kstack0_phys = availmem_regions[0].mr_start + availmem_regions[0].mr_size; kstack0_phys -= kstack0_sz; availmem_regions[0].mr_size -= kstack0_sz; /*******************************************************/ /* Fill in phys_avail table, based on availmem_regions */ /*******************************************************/ phys_avail_count = 0; physsz = 0; hwphyssz = 0; TUNABLE_ULONG_FETCH("hw.physmem", (u_long *) &hwphyssz); debugf("fill in phys_avail:\n"); for (i = 0, j = 0; i < availmem_regions_sz; i++, j += 2) { debugf(" region: 0x%jx - 0x%jx (0x%jx)\n", (uintmax_t)availmem_regions[i].mr_start, (uintmax_t)availmem_regions[i].mr_start + availmem_regions[i].mr_size, (uintmax_t)availmem_regions[i].mr_size); if (hwphyssz != 0 && (physsz + availmem_regions[i].mr_size) >= hwphyssz) { debugf(" hw.physmem adjust\n"); if (physsz < hwphyssz) { phys_avail[j] = availmem_regions[i].mr_start; phys_avail[j + 1] = availmem_regions[i].mr_start + hwphyssz - physsz; physsz = hwphyssz; phys_avail_count++; } break; } phys_avail[j] = availmem_regions[i].mr_start; phys_avail[j + 1] = availmem_regions[i].mr_start + availmem_regions[i].mr_size; phys_avail_count++; physsz += availmem_regions[i].mr_size; } physmem = btoc(physsz); /* Calculate the last available physical address. */ for (i = 0; phys_avail[i + 2] != 0; i += 2) ; Maxmem = powerpc_btop(phys_avail[i + 1]); debugf("Maxmem = 0x%08lx\n", Maxmem); debugf("phys_avail_count = %d\n", phys_avail_count); debugf("physsz = 0x%09jx physmem = %jd (0x%09jx)\n", (uintmax_t)physsz, (uintmax_t)physmem, (uintmax_t)physmem); #ifdef __powerpc64__ /* * Map the physical memory contiguously in TLB1. * Round so it fits into a single mapping. */ tlb1_mapin_region(DMAP_BASE_ADDRESS, 0, phys_avail[i + 1]); #endif /*******************************************************/ /* Initialize (statically allocated) kernel pmap. */ /*******************************************************/ PMAP_LOCK_INIT(kernel_pmap); #ifndef __powerpc64__ kptbl_min = VM_MIN_KERNEL_ADDRESS / PDIR_SIZE; #endif debugf("kernel_pmap = 0x%"PRI0ptrX"\n", (uintptr_t)kernel_pmap); kernel_pte_alloc(virtual_avail, kernstart, kernel_pdir); for (i = 0; i < MAXCPU; i++) { kernel_pmap->pm_tid[i] = TID_KERNEL; /* Initialize each CPU's tidbusy entry 0 with kernel_pmap */ tidbusy[i][TID_KERNEL] = kernel_pmap; } /* Mark kernel_pmap active on all CPUs */ CPU_FILL(&kernel_pmap->pm_active); /* * Initialize the global pv list lock. 
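 * It protects the pv lists hanging off every managed page; pv_insert()
 * and pv_remove() assert it is held for writing.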
*/ rw_init(&pvh_global_lock, "pmap pv global"); /*******************************************************/ /* Final setup */ /*******************************************************/ /* Enter kstack0 into kernel map, provide guard page */ kstack0 = virtual_avail + KSTACK_GUARD_PAGES * PAGE_SIZE; thread0.td_kstack = kstack0; thread0.td_kstack_pages = kstack_pages; debugf("kstack_sz = 0x%08x\n", kstack0_sz); debugf("kstack0_phys at 0x%09llx - 0x%09llx\n", kstack0_phys, kstack0_phys + kstack0_sz); debugf("kstack0 at 0x%"PRI0ptrX" - 0x%"PRI0ptrX"\n", kstack0, kstack0 + kstack0_sz); virtual_avail += KSTACK_GUARD_PAGES * PAGE_SIZE + kstack0_sz; for (i = 0; i < kstack_pages; i++) { mmu_booke_kenter(mmu, kstack0, kstack0_phys); kstack0 += PAGE_SIZE; kstack0_phys += PAGE_SIZE; } pmap_bootstrapped = 1; debugf("virtual_avail = %"PRI0ptrX"\n", virtual_avail); debugf("virtual_end = %"PRI0ptrX"\n", virtual_end); debugf("mmu_booke_bootstrap: exit\n"); } #ifdef SMP void tlb1_ap_prep(void) { tlb_entry_t *e, tmp; unsigned int i; /* Prepare TLB1 image for AP processors */ e = __boot_tlb1; for (i = 0; i < TLB1_ENTRIES; i++) { tlb1_read_entry(&tmp, i); if ((tmp.mas1 & MAS1_VALID) && (tmp.mas2 & _TLB_ENTRY_SHARED)) memcpy(e++, &tmp, sizeof(tmp)); } } void pmap_bootstrap_ap(volatile uint32_t *trcp __unused) { int i; /* * Finish TLB1 configuration: the BSP already set up its TLB1 and we * have the snapshot of its contents in the s/w __boot_tlb1[] table * created by tlb1_ap_prep(), so use these values directly to * (re)program AP's TLB1 hardware. * * Start at index 1 because index 0 has the kernel map. */ for (i = 1; i < TLB1_ENTRIES; i++) { if (__boot_tlb1[i].mas1 & MAS1_VALID) tlb1_write_entry(&__boot_tlb1[i], i); } set_mas4_defaults(); } #endif static void booke_pmap_init_qpages(void) { struct pcpu *pc; int i; CPU_FOREACH(i) { pc = pcpu_find(i); pc->pc_qmap_addr = kva_alloc(PAGE_SIZE); if (pc->pc_qmap_addr == 0) panic("pmap_init_qpages: unable to allocate KVA"); } } SYSINIT(qpages_init, SI_SUB_CPU, SI_ORDER_ANY, booke_pmap_init_qpages, NULL); /* * Get the physical page address for the given pmap/virtual address. */ static vm_paddr_t mmu_booke_extract(mmu_t mmu, pmap_t pmap, vm_offset_t va) { vm_paddr_t pa; PMAP_LOCK(pmap); pa = pte_vatopa(mmu, pmap, va); PMAP_UNLOCK(pmap); return (pa); } /* * Extract the physical page address associated with the given * kernel virtual address. */ static vm_paddr_t mmu_booke_kextract(mmu_t mmu, vm_offset_t va) { tlb_entry_t e; vm_paddr_t p = 0; int i; if (va >= VM_MIN_KERNEL_ADDRESS && va <= VM_MAX_KERNEL_ADDRESS) p = pte_vatopa(mmu, kernel_pmap, va); if (p == 0) { /* Check TLB1 mappings */ for (i = 0; i < TLB1_ENTRIES; i++) { tlb1_read_entry(&e, i); if (!(e.mas1 & MAS1_VALID)) continue; if (va >= e.virt && va < e.virt + e.size) return (e.phys + (va - e.virt)); } } return (p); } /* * Initialize the pmap module. * Called by vm_init, to initialize any structures that the pmap * system needs to map virtual memory. */ static void mmu_booke_init(mmu_t mmu) { int shpgperproc = PMAP_SHPGPERPROC; /* * Initialize the address space (zone) for the pv entries. Set a * high water mark so that the system can recover from excessive * numbers of pv entries. 
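 * pv_alloc() wakes up the pagedaemon once pv_entry_count crosses
 * pv_entry_high_water.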
*/ pvzone = uma_zcreate("PV ENTRY", sizeof(struct pv_entry), NULL, NULL, NULL, NULL, UMA_ALIGN_PTR, UMA_ZONE_VM | UMA_ZONE_NOFREE); TUNABLE_INT_FETCH("vm.pmap.shpgperproc", &shpgperproc); pv_entry_max = shpgperproc * maxproc + vm_cnt.v_page_count; TUNABLE_INT_FETCH("vm.pmap.pv_entries", &pv_entry_max); pv_entry_high_water = 9 * (pv_entry_max / 10); uma_zone_reserve_kva(pvzone, pv_entry_max); /* Pre-fill pvzone with initial number of pv entries. */ uma_prealloc(pvzone, PV_ENTRY_ZONE_MIN); /* Initialize ptbl allocation. */ ptbl_init(); } /* * Map a list of wired pages into kernel virtual address space. This is * intended for temporary mappings which do not need page modification or * references recorded. Existing mappings in the region are overwritten. */ static void mmu_booke_qenter(mmu_t mmu, vm_offset_t sva, vm_page_t *m, int count) { vm_offset_t va; va = sva; while (count-- > 0) { mmu_booke_kenter(mmu, va, VM_PAGE_TO_PHYS(*m)); va += PAGE_SIZE; m++; } } /* * Remove page mappings from kernel virtual address space. Intended for * temporary mappings entered by mmu_booke_qenter. */ static void mmu_booke_qremove(mmu_t mmu, vm_offset_t sva, int count) { vm_offset_t va; va = sva; while (count-- > 0) { mmu_booke_kremove(mmu, va); va += PAGE_SIZE; } } /* * Map a wired page into kernel virtual address space. */ static void mmu_booke_kenter(mmu_t mmu, vm_offset_t va, vm_paddr_t pa) { mmu_booke_kenter_attr(mmu, va, pa, VM_MEMATTR_DEFAULT); } static void mmu_booke_kenter_attr(mmu_t mmu, vm_offset_t va, vm_paddr_t pa, vm_memattr_t ma) { uint32_t flags; pte_t *pte; KASSERT(((va >= VM_MIN_KERNEL_ADDRESS) && (va <= VM_MAX_KERNEL_ADDRESS)), ("mmu_booke_kenter: invalid va")); flags = PTE_SR | PTE_SW | PTE_SX | PTE_WIRED | PTE_VALID; flags |= tlb_calc_wimg(pa, ma) << PTE_MAS2_SHIFT; flags |= PTE_PS_4KB; pte = pte_find(mmu, kernel_pmap, va); KASSERT((pte != NULL), ("mmu_booke_kenter: invalid va. NULL PTE")); mtx_lock_spin(&tlbivax_mutex); tlb_miss_lock(); if (PTE_ISVALID(pte)) { CTR1(KTR_PMAP, "%s: replacing entry!", __func__); /* Flush entry from TLB0 */ tlb0_flush_entry(va); } *pte = PTE_RPN_FROM_PA(pa) | flags; //debugf("mmu_booke_kenter: pdir_idx = %d ptbl_idx = %d va=0x%08x " // "pa=0x%08x rpn=0x%08x flags=0x%08x\n", // pdir_idx, ptbl_idx, va, pa, pte->rpn, pte->flags); /* Flush the real memory from the instruction cache. */ if ((flags & (PTE_I | PTE_G)) == 0) __syncicache((void *)va, PAGE_SIZE); tlb_miss_unlock(); mtx_unlock_spin(&tlbivax_mutex); } /* * Remove a page from kernel page table. */ static void mmu_booke_kremove(mmu_t mmu, vm_offset_t va) { pte_t *pte; CTR2(KTR_PMAP,"%s: s (va = 0x%"PRI0ptrX")\n", __func__, va); KASSERT(((va >= VM_MIN_KERNEL_ADDRESS) && (va <= VM_MAX_KERNEL_ADDRESS)), ("mmu_booke_kremove: invalid va")); pte = pte_find(mmu, kernel_pmap, va); if (!PTE_ISVALID(pte)) { CTR1(KTR_PMAP, "%s: invalid pte", __func__); return; } mtx_lock_spin(&tlbivax_mutex); tlb_miss_lock(); /* Invalidate entry in TLB0, update PTE. */ tlb0_flush_entry(va); *pte = 0; tlb_miss_unlock(); mtx_unlock_spin(&tlbivax_mutex); } /* * Provide a kernel pointer corresponding to a given userland pointer. * The returned pointer is valid until the next time this function is * called in this thread. This is used internally in copyin/copyout. 
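 * On Book-E, user and kernel share the single AS=0 address space, so a
 * valid user address is directly addressable from the kernel and only a
 * range check is needed before handing the pointer back.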
 */
static int
mmu_booke_map_user_ptr(mmu_t mmu, pmap_t pm, volatile const void *uaddr,
    void **kaddr, size_t ulen, size_t *klen)
{
	if ((uintptr_t)uaddr + ulen > VM_MAXUSER_ADDRESS + PAGE_SIZE)
		return (EFAULT);

	*kaddr = (void *)(uintptr_t)uaddr;
	if (klen)
		*klen = ulen;

	return (0);
}

/*
 * Figure out where a given kernel pointer (usually in a fault) points
 * to from the VM's perspective, potentially remapping into userland's
 * address space.
 */
static int
mmu_booke_decode_kernel_ptr(mmu_t mmu, vm_offset_t addr, int *is_user,
    vm_offset_t *decoded_addr)
{
	if (addr < VM_MAXUSER_ADDRESS)
		*is_user = 1;
	else
		*is_user = 0;

	*decoded_addr = addr;
	return (0);
}

/*
 * Initialize pmap associated with process 0.
 */
static void
mmu_booke_pinit0(mmu_t mmu, pmap_t pmap)
{
	PMAP_LOCK_INIT(pmap);
	mmu_booke_pinit(mmu, pmap);
	PCPU_SET(curpmap, pmap);
}

/*
 * Initialize a preallocated and zeroed pmap structure,
 * such as one in a vmspace structure.
 */
static void
mmu_booke_pinit(mmu_t mmu, pmap_t pmap)
{
	int i;

	CTR4(KTR_PMAP, "%s: pmap = %p, proc %d '%s'", __func__, pmap,
	    curthread->td_proc->p_pid, curthread->td_proc->p_comm);

	KASSERT((pmap != kernel_pmap), ("pmap_pinit: initializing kernel_pmap"));

	for (i = 0; i < MAXCPU; i++)
		pmap->pm_tid[i] = TID_NONE;
	CPU_ZERO(&pmap->pm_active);
	bzero(&pmap->pm_stats, sizeof(pmap->pm_stats));
#ifdef __powerpc64__
	bzero(&pmap->pm_pp2d, sizeof(pte_t **) * PP2D_NENTRIES);
	TAILQ_INIT(&pmap->pm_pdir_list);
#else
	bzero(&pmap->pm_pdir, sizeof(pte_t *) * PDIR_NENTRIES);
#endif
	TAILQ_INIT(&pmap->pm_ptbl_list);
}

/*
 * Release any resources held by the given physical map.
 * Called when a pmap initialized by mmu_booke_pinit is being released.
 * Should only be called if the map contains no valid mappings.
 */
static void
mmu_booke_release(mmu_t mmu, pmap_t pmap)
{
	KASSERT(pmap->pm_stats.resident_count == 0,
	    ("pmap_release: pmap resident count %ld != 0",
	    pmap->pm_stats.resident_count));
}

/*
 * Insert the given physical page at the specified virtual address in the
 * target physical map with the protection requested. If specified the page
 * will be wired down.
 */
static int
mmu_booke_enter(mmu_t mmu, pmap_t pmap, vm_offset_t va, vm_page_t m,
    vm_prot_t prot, u_int flags, int8_t psind)
{
	int error;

	rw_wlock(&pvh_global_lock);
	PMAP_LOCK(pmap);
	error = mmu_booke_enter_locked(mmu, pmap, va, m, prot, flags, psind);
	PMAP_UNLOCK(pmap);
	rw_wunlock(&pvh_global_lock);

	return (error);
}

static int
mmu_booke_enter_locked(mmu_t mmu, pmap_t pmap, vm_offset_t va, vm_page_t m,
    vm_prot_t prot, u_int pmap_flags, int8_t psind __unused)
{
	pte_t *pte;
	vm_paddr_t pa;
	uint32_t flags;
	int error, su, sync;

	pa = VM_PAGE_TO_PHYS(m);
	su = (pmap == kernel_pmap);
	sync = 0;

	//debugf("mmu_booke_enter_locked: s (pmap=0x%08x su=%d tid=%d m=0x%08x va=0x%08x "
	//		"pa=0x%08x prot=0x%08x flags=%#x)\n",
	//		(u_int32_t)pmap, su, pmap->pm_tid,
	//		(u_int32_t)m, va, pa, prot, flags);

	if (su) {
		KASSERT(((va >= virtual_avail) &&
		    (va <= VM_MAX_KERNEL_ADDRESS)),
		    ("mmu_booke_enter_locked: kernel pmap, non kernel va"));
	} else {
		KASSERT((va <= VM_MAXUSER_ADDRESS),
		    ("mmu_booke_enter_locked: user pmap, non user va"));
	}
	if ((m->oflags & VPO_UNMANAGED) == 0 && !vm_page_xbusied(m))
		VM_OBJECT_ASSERT_LOCKED(m->object);

	PMAP_LOCK_ASSERT(pmap, MA_OWNED);

	/*
	 * If there is an existing mapping, and the physical address has not
	 * changed, it must be a protection or wiring change.
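	 * In that case the PTE flags are recalculated and updated in place,
	 * instead of removing and re-entering the mapping.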
	 */
	if (((pte = pte_find(mmu, pmap, va)) != NULL) &&
	    (PTE_ISVALID(pte)) && (PTE_PA(pte) == pa)) {

		/*
		 * Before actually updating pte->flags we calculate and
		 * prepare its new value in a helper var.
		 */
		flags = *pte;
		flags &= ~(PTE_UW | PTE_UX | PTE_SW | PTE_SX | PTE_MODIFIED);

		/* Wiring change, just update stats. */
		if ((pmap_flags & PMAP_ENTER_WIRED) != 0) {
			if (!PTE_ISWIRED(pte)) {
				flags |= PTE_WIRED;
				pmap->pm_stats.wired_count++;
			}
		} else {
			if (PTE_ISWIRED(pte)) {
				flags &= ~PTE_WIRED;
				pmap->pm_stats.wired_count--;
			}
		}

		if (prot & VM_PROT_WRITE) {
			/* Add write permissions. */
			flags |= PTE_SW;
			if (!su)
				flags |= PTE_UW;

			if ((flags & PTE_MANAGED) != 0)
				vm_page_aflag_set(m, PGA_WRITEABLE);
		} else {
			/* Handle modified pages, sense modify status. */

			/*
			 * The PTE_MODIFIED flag could be set by underlying
			 * TLB misses since we last read it (above), possibly
			 * other CPUs could update it so we check in the PTE
			 * directly rather than rely on that saved local flags
			 * copy.
			 */
			if (PTE_ISMODIFIED(pte))
				vm_page_dirty(m);
		}

		if (prot & VM_PROT_EXECUTE) {
			flags |= PTE_SX;
			if (!su)
				flags |= PTE_UX;

			/*
			 * Check existing flags for execute permissions: if we
			 * are turning execute permissions on, icache should
			 * be flushed.
			 */
			if ((*pte & (PTE_UX | PTE_SX)) == 0)
				sync++;
		}

		flags &= ~PTE_REFERENCED;

		/*
		 * The new flags value is all calculated -- only now actually
		 * update the PTE.
		 */
		mtx_lock_spin(&tlbivax_mutex);
		tlb_miss_lock();

		tlb0_flush_entry(va);
		*pte &= ~PTE_FLAGS_MASK;
		*pte |= flags;

		tlb_miss_unlock();
		mtx_unlock_spin(&tlbivax_mutex);

	} else {
		/*
		 * If there is an existing mapping, but it's for a different
		 * physical address, pte_enter() will delete the old mapping.
		 */
		//if ((pte != NULL) && PTE_ISVALID(pte))
		//	debugf("mmu_booke_enter_locked: replace\n");
		//else
		//	debugf("mmu_booke_enter_locked: new\n");

		/* Now set up the flags and install the new mapping. */
		flags = (PTE_SR | PTE_VALID);
		flags |= PTE_M;

		if (!su)
			flags |= PTE_UR;

		if (prot & VM_PROT_WRITE) {
			flags |= PTE_SW;
			if (!su)
				flags |= PTE_UW;

			if ((m->oflags & VPO_UNMANAGED) == 0)
				vm_page_aflag_set(m, PGA_WRITEABLE);
		}

		if (prot & VM_PROT_EXECUTE) {
			flags |= PTE_SX;
			if (!su)
				flags |= PTE_UX;
		}

		/* If it's wired, update stats. */
		if ((pmap_flags & PMAP_ENTER_WIRED) != 0)
			flags |= PTE_WIRED;

		error = pte_enter(mmu, pmap, m, va, flags,
		    (pmap_flags & PMAP_ENTER_NOSLEEP) != 0);
		if (error != 0)
			return (KERN_RESOURCE_SHORTAGE);

		if ((pmap_flags & PMAP_ENTER_WIRED) != 0)
			pmap->pm_stats.wired_count++;

		/* Flush the real memory from the instruction cache. */
		if (prot & VM_PROT_EXECUTE)
			sync++;
	}

	if (sync && (su || pmap == PCPU_GET(curpmap))) {
		__syncicache((void *)va, PAGE_SIZE);
		sync = 0;
	}

	return (KERN_SUCCESS);
}

/*
 * Maps a sequence of resident pages belonging to the same object.
 * The sequence begins with the given page m_start. This page is
 * mapped at the given virtual address start. Each subsequent page is
 * mapped at a virtual address that is offset from start by the same
 * amount as the page is offset from m_start within the object. The
 * last page in the sequence is the page with the largest offset from
 * m_start that can be mapped at a virtual address less than the given
 * virtual address end. Not every virtual page between start and end
 * is mapped; only those for which a resident page exists with the
 * corresponding offset from m_start are mapped.
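 * Each page is entered read and/or execute only, via
 * mmu_booke_enter_locked() with PMAP_ENTER_NOSLEEP.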
*/ static void mmu_booke_enter_object(mmu_t mmu, pmap_t pmap, vm_offset_t start, vm_offset_t end, vm_page_t m_start, vm_prot_t prot) { vm_page_t m; vm_pindex_t diff, psize; VM_OBJECT_ASSERT_LOCKED(m_start->object); psize = atop(end - start); m = m_start; rw_wlock(&pvh_global_lock); PMAP_LOCK(pmap); while (m != NULL && (diff = m->pindex - m_start->pindex) < psize) { mmu_booke_enter_locked(mmu, pmap, start + ptoa(diff), m, prot & (VM_PROT_READ | VM_PROT_EXECUTE), PMAP_ENTER_NOSLEEP, 0); m = TAILQ_NEXT(m, listq); } rw_wunlock(&pvh_global_lock); PMAP_UNLOCK(pmap); } static void mmu_booke_enter_quick(mmu_t mmu, pmap_t pmap, vm_offset_t va, vm_page_t m, vm_prot_t prot) { rw_wlock(&pvh_global_lock); PMAP_LOCK(pmap); mmu_booke_enter_locked(mmu, pmap, va, m, prot & (VM_PROT_READ | VM_PROT_EXECUTE), PMAP_ENTER_NOSLEEP, 0); rw_wunlock(&pvh_global_lock); PMAP_UNLOCK(pmap); } /* * Remove the given range of addresses from the specified map. * * It is assumed that the start and end are properly rounded to the page size. */ static void mmu_booke_remove(mmu_t mmu, pmap_t pmap, vm_offset_t va, vm_offset_t endva) { pte_t *pte; uint8_t hold_flag; int su = (pmap == kernel_pmap); //debugf("mmu_booke_remove: s (su = %d pmap=0x%08x tid=%d va=0x%08x endva=0x%08x)\n", // su, (u_int32_t)pmap, pmap->pm_tid, va, endva); if (su) { KASSERT(((va >= virtual_avail) && (va <= VM_MAX_KERNEL_ADDRESS)), ("mmu_booke_remove: kernel pmap, non kernel va")); } else { KASSERT((va <= VM_MAXUSER_ADDRESS), ("mmu_booke_remove: user pmap, non user va")); } if (PMAP_REMOVE_DONE(pmap)) { //debugf("mmu_booke_remove: e (empty)\n"); return; } hold_flag = PTBL_HOLD_FLAG(pmap); //debugf("mmu_booke_remove: hold_flag = %d\n", hold_flag); rw_wlock(&pvh_global_lock); PMAP_LOCK(pmap); for (; va < endva; va += PAGE_SIZE) { pte = pte_find(mmu, pmap, va); if ((pte != NULL) && PTE_ISVALID(pte)) pte_remove(mmu, pmap, va, hold_flag); } PMAP_UNLOCK(pmap); rw_wunlock(&pvh_global_lock); //debugf("mmu_booke_remove: e\n"); } /* * Remove physical page from all pmaps in which it resides. */ static void mmu_booke_remove_all(mmu_t mmu, vm_page_t m) { pv_entry_t pv, pvn; uint8_t hold_flag; rw_wlock(&pvh_global_lock); for (pv = TAILQ_FIRST(&m->md.pv_list); pv != NULL; pv = pvn) { pvn = TAILQ_NEXT(pv, pv_link); PMAP_LOCK(pv->pv_pmap); hold_flag = PTBL_HOLD_FLAG(pv->pv_pmap); pte_remove(mmu, pv->pv_pmap, pv->pv_va, hold_flag); PMAP_UNLOCK(pv->pv_pmap); } vm_page_aflag_clear(m, PGA_WRITEABLE); rw_wunlock(&pvh_global_lock); } /* * Map a range of physical addresses into kernel virtual address space. */ static vm_offset_t mmu_booke_map(mmu_t mmu, vm_offset_t *virt, vm_paddr_t pa_start, vm_paddr_t pa_end, int prot) { vm_offset_t sva = *virt; vm_offset_t va = sva; //debugf("mmu_booke_map: s (sva = 0x%08x pa_start = 0x%08x pa_end = 0x%08x)\n", // sva, pa_start, pa_end); while (pa_start < pa_end) { mmu_booke_kenter(mmu, va, pa_start); va += PAGE_SIZE; pa_start += PAGE_SIZE; } *virt = va; //debugf("mmu_booke_map: e (va = 0x%08x)\n", va); return (sva); } /* * The pmap must be activated before its address space can be accessed in any * way.
*/ static void mmu_booke_activate(mmu_t mmu, struct thread *td) { pmap_t pmap; u_int cpuid; pmap = &td->td_proc->p_vmspace->vm_pmap; CTR5(KTR_PMAP, "%s: s (td = %p, proc = '%s', id = %d, pmap = 0x%"PRI0ptrX")", __func__, td, td->td_proc->p_comm, td->td_proc->p_pid, pmap); KASSERT((pmap != kernel_pmap), ("mmu_booke_activate: kernel_pmap!")); sched_pin(); cpuid = PCPU_GET(cpuid); CPU_SET_ATOMIC(cpuid, &pmap->pm_active); PCPU_SET(curpmap, pmap); if (pmap->pm_tid[cpuid] == TID_NONE) tid_alloc(pmap); /* Load PID0 register with pmap tid value. */ mtspr(SPR_PID0, pmap->pm_tid[cpuid]); __asm __volatile("isync"); mtspr(SPR_DBCR0, td->td_pcb->pcb_cpu.booke.dbcr0); sched_unpin(); CTR3(KTR_PMAP, "%s: e (tid = %d for '%s')", __func__, pmap->pm_tid[PCPU_GET(cpuid)], td->td_proc->p_comm); } /* * Deactivate the specified process's address space. */ static void mmu_booke_deactivate(mmu_t mmu, struct thread *td) { pmap_t pmap; pmap = &td->td_proc->p_vmspace->vm_pmap; CTR5(KTR_PMAP, "%s: td=%p, proc = '%s', id = %d, pmap = 0x%"PRI0ptrX, __func__, td, td->td_proc->p_comm, td->td_proc->p_pid, pmap); td->td_pcb->pcb_cpu.booke.dbcr0 = mfspr(SPR_DBCR0); CPU_CLR_ATOMIC(PCPU_GET(cpuid), &pmap->pm_active); PCPU_SET(curpmap, NULL); } /* * Copy the range specified by src_addr/len * from the source map to the range dst_addr/len * in the destination map. * * This routine is only advisory and need not do anything. */ static void mmu_booke_copy(mmu_t mmu, pmap_t dst_pmap, pmap_t src_pmap, vm_offset_t dst_addr, vm_size_t len, vm_offset_t src_addr) { } /* * Set the physical protection on the specified range of this map as requested. */ static void mmu_booke_protect(mmu_t mmu, pmap_t pmap, vm_offset_t sva, vm_offset_t eva, vm_prot_t prot) { vm_offset_t va; vm_page_t m; pte_t *pte; if ((prot & VM_PROT_READ) == VM_PROT_NONE) { mmu_booke_remove(mmu, pmap, sva, eva); return; } if (prot & VM_PROT_WRITE) return; PMAP_LOCK(pmap); for (va = sva; va < eva; va += PAGE_SIZE) { if ((pte = pte_find(mmu, pmap, va)) != NULL) { if (PTE_ISVALID(pte)) { m = PHYS_TO_VM_PAGE(PTE_PA(pte)); mtx_lock_spin(&tlbivax_mutex); tlb_miss_lock(); /* Handle modified pages. */ if (PTE_ISMODIFIED(pte) && PTE_ISMANAGED(pte)) vm_page_dirty(m); tlb0_flush_entry(va); *pte &= ~(PTE_UW | PTE_SW | PTE_MODIFIED); tlb_miss_unlock(); mtx_unlock_spin(&tlbivax_mutex); } } } PMAP_UNLOCK(pmap); } /* * Clear the write and modified bits in each of the given page's mappings. */ static void mmu_booke_remove_write(mmu_t mmu, vm_page_t m) { pv_entry_t pv; pte_t *pte; KASSERT((m->oflags & VPO_UNMANAGED) == 0, ("mmu_booke_remove_write: page %p is not managed", m)); /* * If the page is not exclusive busied, then PGA_WRITEABLE cannot be * set by another thread while the object is locked. Thus, * if PGA_WRITEABLE is clear, no page table entries need updating. */ VM_OBJECT_ASSERT_WLOCKED(m->object); if (!vm_page_xbusied(m) && (m->aflags & PGA_WRITEABLE) == 0) return; rw_wlock(&pvh_global_lock); TAILQ_FOREACH(pv, &m->md.pv_list, pv_link) { PMAP_LOCK(pv->pv_pmap); if ((pte = pte_find(mmu, pv->pv_pmap, pv->pv_va)) != NULL) { if (PTE_ISVALID(pte)) { m = PHYS_TO_VM_PAGE(PTE_PA(pte)); mtx_lock_spin(&tlbivax_mutex); tlb_miss_lock(); /* Handle modified pages. */ if (PTE_ISMODIFIED(pte)) vm_page_dirty(m); /* Flush mapping from TLB0. 
*/ *pte &= ~(PTE_UW | PTE_SW | PTE_MODIFIED); tlb_miss_unlock(); mtx_unlock_spin(&tlbivax_mutex); } } PMAP_UNLOCK(pv->pv_pmap); } vm_page_aflag_clear(m, PGA_WRITEABLE); rw_wunlock(&pvh_global_lock); } static void mmu_booke_sync_icache(mmu_t mmu, pmap_t pm, vm_offset_t va, vm_size_t sz) { pte_t *pte; pmap_t pmap; vm_page_t m; vm_offset_t addr; vm_paddr_t pa = 0; int active, valid; va = trunc_page(va); sz = round_page(sz); rw_wlock(&pvh_global_lock); pmap = PCPU_GET(curpmap); active = (pm == kernel_pmap || pm == pmap) ? 1 : 0; while (sz > 0) { PMAP_LOCK(pm); pte = pte_find(mmu, pm, va); valid = (pte != NULL && PTE_ISVALID(pte)) ? 1 : 0; if (valid) pa = PTE_PA(pte); PMAP_UNLOCK(pm); if (valid) { if (!active) { /* Create a mapping in the active pmap. */ addr = 0; m = PHYS_TO_VM_PAGE(pa); PMAP_LOCK(pmap); pte_enter(mmu, pmap, m, addr, PTE_SR | PTE_VALID | PTE_UR, FALSE); __syncicache((void *)addr, PAGE_SIZE); pte_remove(mmu, pmap, addr, PTBL_UNHOLD); PMAP_UNLOCK(pmap); } else __syncicache((void *)va, PAGE_SIZE); } va += PAGE_SIZE; sz -= PAGE_SIZE; } rw_wunlock(&pvh_global_lock); } /* * Atomically extract and hold the physical page with the given * pmap and virtual address pair if that mapping permits the given * protection. */ static vm_page_t mmu_booke_extract_and_hold(mmu_t mmu, pmap_t pmap, vm_offset_t va, vm_prot_t prot) { pte_t *pte; vm_page_t m; uint32_t pte_wbit; vm_paddr_t pa; m = NULL; pa = 0; PMAP_LOCK(pmap); retry: pte = pte_find(mmu, pmap, va); if ((pte != NULL) && PTE_ISVALID(pte)) { if (pmap == kernel_pmap) pte_wbit = PTE_SW; else pte_wbit = PTE_UW; if ((*pte & pte_wbit) || ((prot & VM_PROT_WRITE) == 0)) { if (vm_page_pa_tryrelock(pmap, PTE_PA(pte), &pa)) goto retry; m = PHYS_TO_VM_PAGE(PTE_PA(pte)); vm_page_hold(m); } } PA_UNLOCK_COND(pa); PMAP_UNLOCK(pmap); return (m); } /* * Initialize a vm_page's machine-dependent fields. */ static void mmu_booke_page_init(mmu_t mmu, vm_page_t m) { m->md.pv_tracked = 0; TAILQ_INIT(&m->md.pv_list); } /* * mmu_booke_zero_page_area zeros the specified hardware page by * mapping it into virtual memory and using bzero to clear * its contents. * * off and size must reside within a single page. */ static void mmu_booke_zero_page_area(mmu_t mmu, vm_page_t m, int off, int size) { vm_offset_t va; /* XXX KASSERT off and size are within a single page? */ - mtx_lock(&zero_page_mutex); - va = zero_page_va; + if (hw_direct_map) { + va = PHYS_TO_DMAP(VM_PAGE_TO_PHYS(m)); + bzero((caddr_t)va + off, size); + } else { + mtx_lock(&zero_page_mutex); + va = zero_page_va; - mmu_booke_kenter(mmu, va, VM_PAGE_TO_PHYS(m)); - bzero((caddr_t)va + off, size); - mmu_booke_kremove(mmu, va); + mmu_booke_kenter(mmu, va, VM_PAGE_TO_PHYS(m)); + bzero((caddr_t)va + off, size); + mmu_booke_kremove(mmu, va); - mtx_unlock(&zero_page_mutex); + mtx_unlock(&zero_page_mutex); + } } /* * mmu_booke_zero_page zeros the specified hardware page. 
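/*
 * Illustrative sketch (not part of this file): the change above to
 * mmu_booke_zero_page_area() (and to the zero/copy routines that follow)
 * is one pattern: when hw_direct_map is set, PHYS_TO_DMAP() is a constant
 * offset translation, so the temporary kenter/kremove mapping and its
 * mutex can be skipped.  A standalone toy model with an assumed base
 * address (all sketch_* names are hypothetical):
 */
#include <assert.h>
#include <stdint.h>

#define	SKETCH_DMAP_BASE	0xc000000000000000UL	/* assumed base */

static inline uint64_t
sketch_phys_to_dmap(uint64_t pa)
{

	return (SKETCH_DMAP_BASE + pa);
}

static inline uint64_t
sketch_dmap_to_phys(uint64_t va)
{

	assert(va >= SKETCH_DMAP_BASE);
	return (va - SKETCH_DMAP_BASE);
}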
*/ static void mmu_booke_zero_page(mmu_t mmu, vm_page_t m) { vm_offset_t off, va; - mtx_lock(&zero_page_mutex); - va = zero_page_va; + if (hw_direct_map) { + va = PHYS_TO_DMAP(VM_PAGE_TO_PHYS(m)); + } else { + va = zero_page_va; + mtx_lock(&zero_page_mutex); - mmu_booke_kenter(mmu, va, VM_PAGE_TO_PHYS(m)); + mmu_booke_kenter(mmu, va, VM_PAGE_TO_PHYS(m)); + } + for (off = 0; off < PAGE_SIZE; off += cacheline_size) __asm __volatile("dcbz 0,%0" :: "r"(va + off)); - mmu_booke_kremove(mmu, va); - mtx_unlock(&zero_page_mutex); + if (!hw_direct_map) { + mmu_booke_kremove(mmu, va); + + mtx_unlock(&zero_page_mutex); + } } /* * mmu_booke_copy_page copies the specified (machine independent) page by * mapping the page into virtual memory and using memcopy to copy the page, * one machine dependent page at a time. */ static void mmu_booke_copy_page(mmu_t mmu, vm_page_t sm, vm_page_t dm) { vm_offset_t sva, dva; sva = copy_page_src_va; dva = copy_page_dst_va; - mtx_lock(©_page_mutex); - mmu_booke_kenter(mmu, sva, VM_PAGE_TO_PHYS(sm)); - mmu_booke_kenter(mmu, dva, VM_PAGE_TO_PHYS(dm)); + if (hw_direct_map) { + sva = PHYS_TO_DMAP(VM_PAGE_TO_PHYS(sm)); + dva = PHYS_TO_DMAP(VM_PAGE_TO_PHYS(dm)); + } else { + mtx_lock(©_page_mutex); + mmu_booke_kenter(mmu, sva, VM_PAGE_TO_PHYS(sm)); + mmu_booke_kenter(mmu, dva, VM_PAGE_TO_PHYS(dm)); + } memcpy((caddr_t)dva, (caddr_t)sva, PAGE_SIZE); - mmu_booke_kremove(mmu, dva); - mmu_booke_kremove(mmu, sva); - mtx_unlock(©_page_mutex); + if (!hw_direct_map) { + mmu_booke_kremove(mmu, dva); + mmu_booke_kremove(mmu, sva); + mtx_unlock(©_page_mutex); + } } static inline void mmu_booke_copy_pages(mmu_t mmu, vm_page_t *ma, vm_offset_t a_offset, vm_page_t *mb, vm_offset_t b_offset, int xfersize) { void *a_cp, *b_cp; vm_offset_t a_pg_offset, b_pg_offset; int cnt; - mtx_lock(©_page_mutex); - while (xfersize > 0) { - a_pg_offset = a_offset & PAGE_MASK; - cnt = min(xfersize, PAGE_SIZE - a_pg_offset); - mmu_booke_kenter(mmu, copy_page_src_va, - VM_PAGE_TO_PHYS(ma[a_offset >> PAGE_SHIFT])); - a_cp = (char *)copy_page_src_va + a_pg_offset; - b_pg_offset = b_offset & PAGE_MASK; - cnt = min(cnt, PAGE_SIZE - b_pg_offset); - mmu_booke_kenter(mmu, copy_page_dst_va, - VM_PAGE_TO_PHYS(mb[b_offset >> PAGE_SHIFT])); - b_cp = (char *)copy_page_dst_va + b_pg_offset; - bcopy(a_cp, b_cp, cnt); - mmu_booke_kremove(mmu, copy_page_dst_va); - mmu_booke_kremove(mmu, copy_page_src_va); - a_offset += cnt; - b_offset += cnt; - xfersize -= cnt; + if (hw_direct_map) { + a_cp = (caddr_t)((uintptr_t)PHYS_TO_DMAP(VM_PAGE_TO_PHYS(*ma)) + + a_offset); + b_cp = (caddr_t)((uintptr_t)PHYS_TO_DMAP(VM_PAGE_TO_PHYS(*mb)) + + b_offset); + bcopy(a_cp, b_cp, xfersize); + } else { + mtx_lock(©_page_mutex); + while (xfersize > 0) { + a_pg_offset = a_offset & PAGE_MASK; + cnt = min(xfersize, PAGE_SIZE - a_pg_offset); + mmu_booke_kenter(mmu, copy_page_src_va, + VM_PAGE_TO_PHYS(ma[a_offset >> PAGE_SHIFT])); + a_cp = (char *)copy_page_src_va + a_pg_offset; + b_pg_offset = b_offset & PAGE_MASK; + cnt = min(cnt, PAGE_SIZE - b_pg_offset); + mmu_booke_kenter(mmu, copy_page_dst_va, + VM_PAGE_TO_PHYS(mb[b_offset >> PAGE_SHIFT])); + b_cp = (char *)copy_page_dst_va + b_pg_offset; + bcopy(a_cp, b_cp, cnt); + mmu_booke_kremove(mmu, copy_page_dst_va); + mmu_booke_kremove(mmu, copy_page_src_va); + a_offset += cnt; + b_offset += cnt; + xfersize -= cnt; + } + mtx_unlock(©_page_mutex); } - mtx_unlock(©_page_mutex); } static vm_offset_t mmu_booke_quick_enter_page(mmu_t mmu, vm_page_t m) { vm_paddr_t paddr; vm_offset_t qaddr; uint32_t flags; 
pte_t *pte; paddr = VM_PAGE_TO_PHYS(m); + if (hw_direct_map) + return (PHYS_TO_DMAP(paddr)); + flags = PTE_SR | PTE_SW | PTE_SX | PTE_WIRED | PTE_VALID; flags |= tlb_calc_wimg(paddr, pmap_page_get_memattr(m)) << PTE_MAS2_SHIFT; flags |= PTE_PS_4KB; critical_enter(); qaddr = PCPU_GET(qmap_addr); pte = pte_find(mmu, kernel_pmap, qaddr); KASSERT(*pte == 0, ("mmu_booke_quick_enter_page: PTE busy")); /* * XXX: tlbivax is broadcast to other cores, but qaddr should * not be present in other TLBs. Is there a better instruction * sequence to use? Or just forget it & use mmu_booke_kenter()... */ __asm __volatile("tlbivax 0, %0" :: "r"(qaddr & MAS2_EPN_MASK)); __asm __volatile("isync; msync"); *pte = PTE_RPN_FROM_PA(paddr) | flags; /* Flush the real memory from the instruction cache. */ if ((flags & (PTE_I | PTE_G)) == 0) __syncicache((void *)qaddr, PAGE_SIZE); return (qaddr); } static void mmu_booke_quick_remove_page(mmu_t mmu, vm_offset_t addr) { pte_t *pte; + if (hw_direct_map) + return; + pte = pte_find(mmu, kernel_pmap, addr); KASSERT(PCPU_GET(qmap_addr) == addr, ("mmu_booke_quick_remove_page: invalid address")); KASSERT(*pte != 0, ("mmu_booke_quick_remove_page: PTE not in use")); *pte = 0; critical_exit(); } /* * Return whether or not the specified physical page was modified * in any of physical maps. */ static boolean_t mmu_booke_is_modified(mmu_t mmu, vm_page_t m) { pte_t *pte; pv_entry_t pv; boolean_t rv; KASSERT((m->oflags & VPO_UNMANAGED) == 0, ("mmu_booke_is_modified: page %p is not managed", m)); rv = FALSE; /* * If the page is not exclusive busied, then PGA_WRITEABLE cannot be * concurrently set while the object is locked. Thus, if PGA_WRITEABLE * is clear, no PTEs can be modified. */ VM_OBJECT_ASSERT_WLOCKED(m->object); if (!vm_page_xbusied(m) && (m->aflags & PGA_WRITEABLE) == 0) return (rv); rw_wlock(&pvh_global_lock); TAILQ_FOREACH(pv, &m->md.pv_list, pv_link) { PMAP_LOCK(pv->pv_pmap); if ((pte = pte_find(mmu, pv->pv_pmap, pv->pv_va)) != NULL && PTE_ISVALID(pte)) { if (PTE_ISMODIFIED(pte)) rv = TRUE; } PMAP_UNLOCK(pv->pv_pmap); if (rv) break; } rw_wunlock(&pvh_global_lock); return (rv); } /* * Return whether or not the specified virtual address is eligible * for prefault. */ static boolean_t mmu_booke_is_prefaultable(mmu_t mmu, pmap_t pmap, vm_offset_t addr) { return (FALSE); } /* * Return whether or not the specified physical page was referenced * in any physical maps. */ static boolean_t mmu_booke_is_referenced(mmu_t mmu, vm_page_t m) { pte_t *pte; pv_entry_t pv; boolean_t rv; KASSERT((m->oflags & VPO_UNMANAGED) == 0, ("mmu_booke_is_referenced: page %p is not managed", m)); rv = FALSE; rw_wlock(&pvh_global_lock); TAILQ_FOREACH(pv, &m->md.pv_list, pv_link) { PMAP_LOCK(pv->pv_pmap); if ((pte = pte_find(mmu, pv->pv_pmap, pv->pv_va)) != NULL && PTE_ISVALID(pte)) { if (PTE_ISREFERENCED(pte)) rv = TRUE; } PMAP_UNLOCK(pv->pv_pmap); if (rv) break; } rw_wunlock(&pvh_global_lock); return (rv); } /* * Clear the modify bits on the specified physical page. */ static void mmu_booke_clear_modify(mmu_t mmu, vm_page_t m) { pte_t *pte; pv_entry_t pv; KASSERT((m->oflags & VPO_UNMANAGED) == 0, ("mmu_booke_clear_modify: page %p is not managed", m)); VM_OBJECT_ASSERT_WLOCKED(m->object); KASSERT(!vm_page_xbusied(m), ("mmu_booke_clear_modify: page %p is exclusive busied", m)); /* * If the page is not PG_AWRITEABLE, then no PTEs can be modified. * If the object containing the page is locked and the page is not * exclusive busied, then PG_AWRITEABLE cannot be concurrently set. 
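/*
 * Illustrative sketch (not part of this file): mmu_booke_quick_enter_page()
 * and mmu_booke_quick_remove_page() above form a strict bracket around one
 * per-CPU scratch page: quick_enter pins the thread with critical_enter(),
 * so a caller holds at most one quick mapping at a time and must not sleep.
 * Shape of a typical caller (sketch_use() is hypothetical; this relies on
 * the surrounding kernel context and is not standalone):
 */
static void
sketch_quick_touch(mmu_t mmu, vm_page_t m)
{
	vm_offset_t va;

	va = mmu_booke_quick_enter_page(mmu, m);
	sketch_use((void *)va);			/* no sleeping in here */
	mmu_booke_quick_remove_page(mmu, va);	/* must pass the same va */
}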
*/ if ((m->aflags & PGA_WRITEABLE) == 0) return; rw_wlock(&pvh_global_lock); TAILQ_FOREACH(pv, &m->md.pv_list, pv_link) { PMAP_LOCK(pv->pv_pmap); if ((pte = pte_find(mmu, pv->pv_pmap, pv->pv_va)) != NULL && PTE_ISVALID(pte)) { mtx_lock_spin(&tlbivax_mutex); tlb_miss_lock(); if (*pte & (PTE_SW | PTE_UW | PTE_MODIFIED)) { tlb0_flush_entry(pv->pv_va); *pte &= ~(PTE_SW | PTE_UW | PTE_MODIFIED | PTE_REFERENCED); } tlb_miss_unlock(); mtx_unlock_spin(&tlbivax_mutex); } PMAP_UNLOCK(pv->pv_pmap); } rw_wunlock(&pvh_global_lock); } /* * Return a count of reference bits for a page, clearing those bits. * It is not necessary for every reference bit to be cleared, but it * is necessary that 0 only be returned when there are truly no * reference bits set. * * As an optimization, update the page's dirty field if a modified bit is * found while counting reference bits. This opportunistic update can be * performed at low cost and can eliminate the need for some future calls * to pmap_is_modified(). However, since this function stops after * finding PMAP_TS_REFERENCED_MAX reference bits, it may not detect some * dirty pages. Those dirty pages will only be detected by a future call * to pmap_is_modified(). */ static int mmu_booke_ts_referenced(mmu_t mmu, vm_page_t m) { pte_t *pte; pv_entry_t pv; int count; KASSERT((m->oflags & VPO_UNMANAGED) == 0, ("mmu_booke_ts_referenced: page %p is not managed", m)); count = 0; rw_wlock(&pvh_global_lock); TAILQ_FOREACH(pv, &m->md.pv_list, pv_link) { PMAP_LOCK(pv->pv_pmap); if ((pte = pte_find(mmu, pv->pv_pmap, pv->pv_va)) != NULL && PTE_ISVALID(pte)) { if (PTE_ISMODIFIED(pte)) vm_page_dirty(m); if (PTE_ISREFERENCED(pte)) { mtx_lock_spin(&tlbivax_mutex); tlb_miss_lock(); tlb0_flush_entry(pv->pv_va); *pte &= ~PTE_REFERENCED; tlb_miss_unlock(); mtx_unlock_spin(&tlbivax_mutex); if (++count >= PMAP_TS_REFERENCED_MAX) { PMAP_UNLOCK(pv->pv_pmap); break; } } } PMAP_UNLOCK(pv->pv_pmap); } rw_wunlock(&pvh_global_lock); return (count); } /* * Clear the wired attribute from the mappings for the specified range of * addresses in the given pmap. Every valid mapping within that range must * have the wired attribute set. In contrast, invalid mappings cannot have * the wired attribute set, so they are ignored. * * The wired attribute of the page table entry is not a hardware feature, so * there is no need to invalidate any TLB entries. */ static void mmu_booke_unwire(mmu_t mmu, pmap_t pmap, vm_offset_t sva, vm_offset_t eva) { vm_offset_t va; pte_t *pte; PMAP_LOCK(pmap); for (va = sva; va < eva; va += PAGE_SIZE) { if ((pte = pte_find(mmu, pmap, va)) != NULL && PTE_ISVALID(pte)) { if (!PTE_ISWIRED(pte)) panic("mmu_booke_unwire: pte %p isn't wired", pte); *pte &= ~PTE_WIRED; pmap->pm_stats.wired_count--; } } PMAP_UNLOCK(pmap); } /* * Return true if the pmap's pv is one of the first 16 pvs linked to from this * page. This count may be changed upwards or downwards in the future; it is * only necessary that true be returned for a small subset of pmaps for proper * page aging. 
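/*
 * Illustrative sketch (not part of this file): the contract of
 * mmu_booke_ts_referenced() above, modeled standalone -- count and clear
 * reference bits, stopping early once a cap is reached; 0 must mean no
 * reference bits were set at all (the cap value here is assumed):
 */
#include <stdint.h>

#define	SKETCH_TS_REFERENCED_MAX	5	/* assumed cap */

static int
sketch_ts_referenced(uint8_t *refbits, int nmappings)
{
	int count, i;

	count = 0;
	for (i = 0; i < nmappings; i++) {
		if (refbits[i] != 0) {
			refbits[i] = 0;		/* clear as we count */
			if (++count >= SKETCH_TS_REFERENCED_MAX)
				break;		/* cap reached, stop early */
		}
	}
	return (count);
}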
*/ static boolean_t mmu_booke_page_exists_quick(mmu_t mmu, pmap_t pmap, vm_page_t m) { pv_entry_t pv; int loops; boolean_t rv; KASSERT((m->oflags & VPO_UNMANAGED) == 0, ("mmu_booke_page_exists_quick: page %p is not managed", m)); loops = 0; rv = FALSE; rw_wlock(&pvh_global_lock); TAILQ_FOREACH(pv, &m->md.pv_list, pv_link) { if (pv->pv_pmap == pmap) { rv = TRUE; break; } if (++loops >= 16) break; } rw_wunlock(&pvh_global_lock); return (rv); } /* * Return the number of managed mappings to the given physical page that are * wired. */ static int mmu_booke_page_wired_mappings(mmu_t mmu, vm_page_t m) { pv_entry_t pv; pte_t *pte; int count = 0; if ((m->oflags & VPO_UNMANAGED) != 0) return (count); rw_wlock(&pvh_global_lock); TAILQ_FOREACH(pv, &m->md.pv_list, pv_link) { PMAP_LOCK(pv->pv_pmap); if ((pte = pte_find(mmu, pv->pv_pmap, pv->pv_va)) != NULL) if (PTE_ISVALID(pte) && PTE_ISWIRED(pte)) count++; PMAP_UNLOCK(pv->pv_pmap); } rw_wunlock(&pvh_global_lock); return (count); } static int mmu_booke_dev_direct_mapped(mmu_t mmu, vm_paddr_t pa, vm_size_t size) { int i; vm_offset_t va; /* * This currently does not work for entries that * overlap TLB1 entries. */ for (i = 0; i < TLB1_ENTRIES; i ++) { if (tlb1_iomapped(i, pa, size, &va) == 0) return (0); } return (EFAULT); } void mmu_booke_dumpsys_map(mmu_t mmu, vm_paddr_t pa, size_t sz, void **va) { vm_paddr_t ppa; vm_offset_t ofs; vm_size_t gran; /* Minidumps are based on virtual memory addresses. */ if (do_minidump) { *va = (void *)(vm_offset_t)pa; return; } /* Raw physical memory dumps don't have a virtual address. */ /* We always map a 256MB page at 256M. */ gran = 256 * 1024 * 1024; ppa = rounddown2(pa, gran); ofs = pa - ppa; *va = (void *)gran; tlb1_set_entry((vm_offset_t)va, ppa, gran, _TLB_ENTRY_IO); if (sz > (gran - ofs)) tlb1_set_entry((vm_offset_t)(va + gran), ppa + gran, gran, _TLB_ENTRY_IO); } void mmu_booke_dumpsys_unmap(mmu_t mmu, vm_paddr_t pa, size_t sz, void *va) { vm_paddr_t ppa; vm_offset_t ofs; vm_size_t gran; tlb_entry_t e; int i; /* Minidumps are based on virtual memory addresses. */ /* Nothing to do... */ if (do_minidump) return; for (i = 0; i < TLB1_ENTRIES; i++) { tlb1_read_entry(&e, i); if (!(e.mas1 & MAS1_VALID)) break; } /* Raw physical memory dumps don't have a virtual address. */ i--; e.mas1 = 0; e.mas2 = 0; e.mas3 = 0; tlb1_write_entry(&e, i); gran = 256 * 1024 * 1024; ppa = rounddown2(pa, gran); ofs = pa - ppa; if (sz > (gran - ofs)) { i--; e.mas1 = 0; e.mas2 = 0; e.mas3 = 0; tlb1_write_entry(&e, i); } } extern struct dump_pa dump_map[PHYS_AVAIL_SZ + 1]; void mmu_booke_scan_init(mmu_t mmu) { vm_offset_t va; pte_t *pte; int i; if (!do_minidump) { /* Initialize phys. segments for dumpsys(). */ memset(&dump_map, 0, sizeof(dump_map)); mem_regions(&physmem_regions, &physmem_regions_sz, &availmem_regions, &availmem_regions_sz); for (i = 0; i < physmem_regions_sz; i++) { dump_map[i].pa_start = physmem_regions[i].mr_start; dump_map[i].pa_size = physmem_regions[i].mr_size; } return; } /* Virtual segments for minidumps: */ memset(&dump_map, 0, sizeof(dump_map)); /* 1st: kernel .data and .bss. */ dump_map[0].pa_start = trunc_page((uintptr_t)_etext); dump_map[0].pa_size = round_page((uintptr_t)_end) - dump_map[0].pa_start; /* 2nd: msgbuf and tables (see pmap_bootstrap()). */ dump_map[1].pa_start = data_start; dump_map[1].pa_size = data_end - data_start; /* 3rd: kernel VM. */ va = dump_map[1].pa_start + dump_map[1].pa_size; /* Find start of next chunk (from va). */ while (va < virtual_end) { /* Don't dump the buffer cache. 
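/*
 * Illustrative sketch (not part of this file): mmu_booke_dumpsys_map()
 * above wires 256 MB TLB1 windows and adds a second window only when the
 * requested run extends past the end of the first.  A standalone model of
 * that arithmetic (sketch_* names are hypothetical):
 */
#include <stdint.h>

#define	SKETCH_GRAN	(256UL * 1024 * 1024)	/* window granularity */

static int
sketch_dump_windows_needed(uint64_t pa, uint64_t sz)
{
	uint64_t ppa, ofs;

	ppa = pa & ~(SKETCH_GRAN - 1);	/* rounddown2(pa, gran) */
	ofs = pa - ppa;
	return ((sz > SKETCH_GRAN - ofs) ? 2 : 1);
}
/* Example: pa = 0x0ff00000, sz = 0x00200000 -> 2 windows. */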
*/ if (va >= kmi.buffer_sva && va < kmi.buffer_eva) { va = kmi.buffer_eva; continue; } pte = pte_find(mmu, kernel_pmap, va); if (pte != NULL && PTE_ISVALID(pte)) break; va += PAGE_SIZE; } if (va < virtual_end) { dump_map[2].pa_start = va; va += PAGE_SIZE; /* Find last page in chunk. */ while (va < virtual_end) { /* Don't run into the buffer cache. */ if (va == kmi.buffer_sva) break; pte = pte_find(mmu, kernel_pmap, va); if (pte == NULL || !PTE_ISVALID(pte)) break; va += PAGE_SIZE; } dump_map[2].pa_size = va - dump_map[2].pa_start; } } /* * Map a set of physical memory pages into the kernel virtual address space. * Return a pointer to where it is mapped. This routine is intended to be used * for mapping device memory, NOT real memory. */ static void * mmu_booke_mapdev(mmu_t mmu, vm_paddr_t pa, vm_size_t size) { return (mmu_booke_mapdev_attr(mmu, pa, size, VM_MEMATTR_DEFAULT)); } static void * mmu_booke_mapdev_attr(mmu_t mmu, vm_paddr_t pa, vm_size_t size, vm_memattr_t ma) { tlb_entry_t e; void *res; uintptr_t va, tmpva; vm_size_t sz; int i; /* * Check if this is premapped in TLB1. Note: this should probably also * check whether a sequence of TLB1 entries exist that match the * requirement, but now only checks the easy case. */ for (i = 0; i < TLB1_ENTRIES; i++) { tlb1_read_entry(&e, i); if (!(e.mas1 & MAS1_VALID)) continue; if (pa >= e.phys && (pa + size) <= (e.phys + e.size) && (ma == VM_MEMATTR_DEFAULT || tlb_calc_wimg(pa, ma) == (e.mas2 & (MAS2_WIMGE_MASK & ~_TLB_ENTRY_SHARED)))) return (void *)(e.virt + (vm_offset_t)(pa - e.phys)); } size = roundup(size, PAGE_SIZE); /* * The device mapping area is between VM_MAXUSER_ADDRESS and * VM_MIN_KERNEL_ADDRESS. This gives 1GB of device addressing. */ #ifdef SPARSE_MAPDEV /* * With a sparse mapdev, align to the largest starting region. This * could feasibly be optimized for a 'best-fit' alignment, but that * calculation could be very costly. * Align to the smaller of: * - first set bit in overlap of (pa & size mask) * - largest size envelope * * It's possible the device mapping may start at a PA that's not larger * than the size mask, so we need to offset in to maximize the TLB entry * range and minimize the number of used TLB entries. */ do { tmpva = tlb1_map_base; sz = ffsl(((1 << flsl(size-1)) - 1) & pa); sz = sz ? min(roundup(sz + 3, 4), flsl(size) - 1) : flsl(size) - 1; va = roundup(tlb1_map_base, 1 << sz) | (((1 << sz) - 1) & pa); #ifdef __powerpc64__ } while (!atomic_cmpset_long(&tlb1_map_base, tmpva, va + size)); #else } while (!atomic_cmpset_int(&tlb1_map_base, tmpva, va + size)); #endif #else #ifdef __powerpc64__ va = atomic_fetchadd_long(&tlb1_map_base, size); #else va = atomic_fetchadd_int(&tlb1_map_base, size); #endif #endif res = (void *)va; do { sz = 1 << (ilog2(size) & ~1); /* Align size to PA */ if (pa % sz != 0) { do { sz >>= 2; } while (pa % sz != 0); } /* Now align from there to VA */ if (va % sz != 0) { do { sz >>= 2; } while (va % sz != 0); } if (bootverbose) printf("Wiring VA=%lx to PA=%jx (size=%lx)\n", va, (uintmax_t)pa, sz); if (tlb1_set_entry(va, pa, sz, _TLB_ENTRY_SHARED | tlb_calc_wimg(pa, ma)) < 0) return (NULL); size -= sz; pa += sz; va += sz; } while (size > 0); return (res); } /* * 'Unmap' a range mapped by mmu_booke_mapdev(). */ static void mmu_booke_unmapdev(mmu_t mmu, vm_offset_t va, vm_size_t size) { #ifdef SUPPORTS_SHRINKING_TLB1 vm_offset_t base, offset; /* * Unmap only if this is inside kernel virtual space. 
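/*
 * Illustrative sketch (not part of this file): in mmu_booke_mapdev_attr()
 * above, each wired TLB1 entry is the largest power-of-4 size that fits the
 * remaining length and that both the PA and the VA are aligned to.  A
 * standalone model of one selection step (assumes size != 0; sketch_* names
 * are hypothetical):
 */
#include <stdint.h>

static uint64_t
sketch_tlb1_chunk(uint64_t pa, uint64_t va, uint64_t size)
{
	uint64_t sz;
	int log2sz;

	log2sz = 63 - __builtin_clzll(size);	/* ilog2(size) */
	sz = UINT64_C(1) << (log2sz & ~1);	/* largest power of 4 <= size */
	while (pa % sz != 0)			/* shrink until PA-aligned... */
		sz >>= 2;
	while (va % sz != 0)			/* ...and VA-aligned */
		sz >>= 2;
	return (sz);
}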
*/ if ((va >= VM_MIN_KERNEL_ADDRESS) && (va <= VM_MAX_KERNEL_ADDRESS)) { base = trunc_page(va); offset = va & PAGE_MASK; size = roundup(offset + size, PAGE_SIZE); kva_free(base, size); } #endif } /* * mmu_booke_object_init_pt preloads the ptes for a given object into the * specified pmap. This eliminates the blast of soft faults on process startup * and immediately after an mmap. */ static void mmu_booke_object_init_pt(mmu_t mmu, pmap_t pmap, vm_offset_t addr, vm_object_t object, vm_pindex_t pindex, vm_size_t size) { VM_OBJECT_ASSERT_WLOCKED(object); KASSERT(object->type == OBJT_DEVICE || object->type == OBJT_SG, ("mmu_booke_object_init_pt: non-device object")); } /* * Perform the pmap work for mincore. */ static int mmu_booke_mincore(mmu_t mmu, pmap_t pmap, vm_offset_t addr, vm_paddr_t *locked_pa) { /* XXX: this should be implemented at some point */ return (0); } static int mmu_booke_change_attr(mmu_t mmu, vm_offset_t addr, vm_size_t sz, vm_memattr_t mode) { vm_offset_t va; pte_t *pte; int i, j; tlb_entry_t e; /* Check TLB1 mappings */ for (i = 0; i < TLB1_ENTRIES; i++) { tlb1_read_entry(&e, i); if (!(e.mas1 & MAS1_VALID)) continue; if (addr >= e.virt && addr < e.virt + e.size) break; } if (i < TLB1_ENTRIES) { /* Only allow full mappings to be modified for now. */ /* Validate the range. */ for (j = i, va = addr; va < addr + sz; va += e.size, j++) { tlb1_read_entry(&e, j); if (va != e.virt || (sz - (va - addr) < e.size)) return (EINVAL); } for (va = addr; va < addr + sz; va += e.size, i++) { tlb1_read_entry(&e, i); e.mas2 &= ~MAS2_WIMGE_MASK; e.mas2 |= tlb_calc_wimg(e.phys, mode); /* * Write it out to the TLB. Should really re-sync with other * cores. */ tlb1_write_entry(&e, i); } return (0); } /* Not in TLB1, try through pmap */ /* First validate the range. */ for (va = addr; va < addr + sz; va += PAGE_SIZE) { pte = pte_find(mmu, kernel_pmap, va); if (pte == NULL || !PTE_ISVALID(pte)) return (EINVAL); } mtx_lock_spin(&tlbivax_mutex); tlb_miss_lock(); for (va = addr; va < addr + sz; va += PAGE_SIZE) { pte = pte_find(mmu, kernel_pmap, va); *pte &= ~(PTE_MAS2_MASK << PTE_MAS2_SHIFT); *pte |= tlb_calc_wimg(PTE_PA(pte), mode) << PTE_MAS2_SHIFT; tlb0_flush_entry(va); } tlb_miss_unlock(); mtx_unlock_spin(&tlbivax_mutex); return (0); } /**************************************************************************/ /* TID handling */ /**************************************************************************/ /* * Allocate a TID. If necessary, steal one from someone else. * The new TID is flushed from the TLB before returning. */ static tlbtid_t tid_alloc(pmap_t pmap) { tlbtid_t tid; int thiscpu; KASSERT((pmap != kernel_pmap), ("tid_alloc: kernel pmap")); CTR2(KTR_PMAP, "%s: s (pmap = %p)", __func__, pmap); thiscpu = PCPU_GET(cpuid); tid = PCPU_GET(booke.tid_next); if (tid > TID_MAX) tid = TID_MIN; PCPU_SET(booke.tid_next, tid + 1); /* If we are stealing TID then clear the relevant pmap's field */ if (tidbusy[thiscpu][tid] != NULL) { CTR2(KTR_PMAP, "%s: warning: stealing tid %d", __func__, tid); tidbusy[thiscpu][tid]->pm_tid[thiscpu] = TID_NONE; /* Flush all entries from TLB0 matching this TID. 
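/*
 * Illustrative sketch (not part of this file): tid_alloc() above is a
 * per-CPU round-robin cursor over [TID_MIN, TID_MAX]; a TID that is still
 * busy is simply stolen after the old owner's translations are flushed.
 * A standalone toy model (the TID range is assumed; sketch_* names are
 * hypothetical):
 */
#include <stddef.h>

#define	SKETCH_TID_MIN	2	/* assumed: low values reserved */
#define	SKETCH_TID_MAX	255

static unsigned sketch_tid_next = SKETCH_TID_MIN;
static void *sketch_tidbusy[SKETCH_TID_MAX + 1];

static unsigned
sketch_tid_alloc(void *owner)
{
	unsigned tid;

	tid = sketch_tid_next;
	if (tid > SKETCH_TID_MAX)	/* wrap the cursor */
		tid = SKETCH_TID_MIN;
	sketch_tid_next = tid + 1;
	/* Stealing: a real implementation flushes the old owner's TLB here. */
	sketch_tidbusy[tid] = owner;
	return (tid);
}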
*/ tid_flush(tid); } tidbusy[thiscpu][tid] = pmap; pmap->pm_tid[thiscpu] = tid; __asm __volatile("msync; isync"); CTR3(KTR_PMAP, "%s: e (%02d next = %02d)", __func__, tid, PCPU_GET(booke.tid_next)); return (tid); } /**************************************************************************/ /* TLB0 handling */ /**************************************************************************/ /* Convert TLB0 va and way number to tlb0[] table index. */ static inline unsigned int tlb0_tableidx(vm_offset_t va, unsigned int way) { unsigned int idx; idx = (way * TLB0_ENTRIES_PER_WAY); idx += (va & MAS2_TLB0_ENTRY_IDX_MASK) >> MAS2_TLB0_ENTRY_IDX_SHIFT; return (idx); } /* * Invalidate TLB0 entry. */ static inline void tlb0_flush_entry(vm_offset_t va) { CTR2(KTR_PMAP, "%s: s va=0x%08x", __func__, va); mtx_assert(&tlbivax_mutex, MA_OWNED); __asm __volatile("tlbivax 0, %0" :: "r"(va & MAS2_EPN_MASK)); __asm __volatile("isync; msync"); __asm __volatile("tlbsync; msync"); CTR1(KTR_PMAP, "%s: e", __func__); } /**************************************************************************/ /* TLB1 handling */ /**************************************************************************/ /* * TLB1 mapping notes: * * TLB1[0] Kernel text and data. * TLB1[1-15] Additional kernel text and data mappings (if required), PCI * windows, other devices mappings. */ /* * Read an entry from given TLB1 slot. */ void tlb1_read_entry(tlb_entry_t *entry, unsigned int slot) { register_t msr; uint32_t mas0; KASSERT((entry != NULL), ("%s(): Entry is NULL!", __func__)); msr = mfmsr(); __asm __volatile("wrteei 0"); mas0 = MAS0_TLBSEL(1) | MAS0_ESEL(slot); mtspr(SPR_MAS0, mas0); __asm __volatile("isync; tlbre"); entry->mas1 = mfspr(SPR_MAS1); entry->mas2 = mfspr(SPR_MAS2); entry->mas3 = mfspr(SPR_MAS3); switch ((mfpvr() >> 16) & 0xFFFF) { case FSL_E500v2: case FSL_E500mc: case FSL_E5500: case FSL_E6500: entry->mas7 = mfspr(SPR_MAS7); break; default: entry->mas7 = 0; break; } mtmsr(msr); entry->virt = entry->mas2 & MAS2_EPN_MASK; entry->phys = ((vm_paddr_t)(entry->mas7 & MAS7_RPN) << 32) | (entry->mas3 & MAS3_RPN); entry->size = tsize2size((entry->mas1 & MAS1_TSIZE_MASK) >> MAS1_TSIZE_SHIFT); } struct tlbwrite_args { tlb_entry_t *e; unsigned int idx; }; static void tlb1_write_entry_int(void *arg) { struct tlbwrite_args *args = arg; uint32_t mas0; /* Select entry */ mas0 = MAS0_TLBSEL(1) | MAS0_ESEL(args->idx); mtspr(SPR_MAS0, mas0); - __asm __volatile("isync"); mtspr(SPR_MAS1, args->e->mas1); - __asm __volatile("isync"); mtspr(SPR_MAS2, args->e->mas2); - __asm __volatile("isync"); mtspr(SPR_MAS3, args->e->mas3); - __asm __volatile("isync"); switch ((mfpvr() >> 16) & 0xFFFF) { case FSL_E500mc: case FSL_E5500: case FSL_E6500: mtspr(SPR_MAS8, 0); - __asm __volatile("isync"); /* FALLTHROUGH */ case FSL_E500v2: mtspr(SPR_MAS7, args->e->mas7); - __asm __volatile("isync"); break; default: break; } - __asm __volatile("tlbwe; isync; msync"); + __asm __volatile("isync; tlbwe; isync; msync"); } static void tlb1_write_entry_sync(void *arg) { /* Empty synchronization point for smp_rendezvous(). */ } /* * Write given entry to TLB1 hardware. 
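/*
 * Illustrative sketch (not part of this file): tlb1_read_entry() above
 * reads MAS7 (the high physical-address bits) only on core families known
 * to implement it.  The same PVR dispatch, refactored as a predicate (uses
 * the FSL_* constants from this file, so it is not standalone):
 */
static int
sketch_core_has_mas7(uint32_t pvr)
{

	switch ((pvr >> 16) & 0xFFFF) {
	case FSL_E500v2:
	case FSL_E500mc:
	case FSL_E5500:
	case FSL_E6500:
		return (1);
	default:
		return (0);
	}
}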
*/ static void tlb1_write_entry(tlb_entry_t *e, unsigned int idx) { struct tlbwrite_args args; args.e = e; args.idx = idx; #ifdef SMP if ((e->mas2 & _TLB_ENTRY_SHARED) && smp_started) { mb(); smp_rendezvous(tlb1_write_entry_sync, tlb1_write_entry_int, tlb1_write_entry_sync, &args); } else #endif { register_t msr; msr = mfmsr(); __asm __volatile("wrteei 0"); tlb1_write_entry_int(&args); mtmsr(msr); } } /* * Return the largest uint value log such that 2^log <= num. */ static unsigned int ilog2(unsigned long num) { long lz; #ifdef __powerpc64__ __asm ("cntlzd %0, %1" : "=r" (lz) : "r" (num)); return (63 - lz); #else __asm ("cntlzw %0, %1" : "=r" (lz) : "r" (num)); return (31 - lz); #endif } /* * Convert TLB TSIZE value to mapped region size. */ static vm_size_t tsize2size(unsigned int tsize) { /* * size = 4^tsize KB * size = 4^tsize * 2^10 = 2^(2 * tsize + 10) */ return ((1 << (2 * tsize)) * 1024); } /* * Convert region size (must be power of 4) to TLB TSIZE value. */ static unsigned int size2tsize(vm_size_t size) { return (ilog2(size) / 2 - 5); } /* * Register permanent kernel mapping in TLB1. * * Entries are created starting from index 0 (current free entry is * kept in tlb1_idx) and are not supposed to be invalidated. */ int tlb1_set_entry(vm_offset_t va, vm_paddr_t pa, vm_size_t size, uint32_t flags) { tlb_entry_t e; uint32_t ts, tid; int tsize, index; for (index = 0; index < TLB1_ENTRIES; index++) { tlb1_read_entry(&e, index); if ((e.mas1 & MAS1_VALID) == 0) break; /* Check if we're just updating the flags, and update them. */ if (e.phys == pa && e.virt == va && e.size == size) { e.mas2 = (va & MAS2_EPN_MASK) | flags; tlb1_write_entry(&e, index); return (0); } } if (index >= TLB1_ENTRIES) { printf("tlb1_set_entry: TLB1 full!\n"); return (-1); } /* Convert size to TSIZE */ tsize = size2tsize(size); tid = (TID_KERNEL << MAS1_TID_SHIFT) & MAS1_TID_MASK; /* XXX TS is hard coded to 0 for now as we only use single address space */ ts = (0 << MAS1_TS_SHIFT) & MAS1_TS_MASK; e.phys = pa; e.virt = va; e.size = size; e.mas1 = MAS1_VALID | MAS1_IPROT | ts | tid; e.mas1 |= ((tsize << MAS1_TSIZE_SHIFT) & MAS1_TSIZE_MASK); e.mas2 = (va & MAS2_EPN_MASK) | flags; /* Set supervisor RWX permission bits */ e.mas3 = (pa & MAS3_RPN) | MAS3_SR | MAS3_SW | MAS3_SX; e.mas7 = (pa >> 32) & MAS7_RPN; tlb1_write_entry(&e, index); /* * XXX in general TLB1 updates should be propagated between CPUs, since * the current design assumes the same TLB1 set-up on all cores. */ return (0); } /* * Map a contiguous RAM region into TLB1, using at most * KERNEL_REGION_MAX_TLB_ENTRIES entries. * * If necessary, round up the last entry size and return the total size * used by all allocated entries. */ vm_size_t tlb1_mapin_region(vm_offset_t va, vm_paddr_t pa, vm_size_t size) { vm_size_t pgs[KERNEL_REGION_MAX_TLB_ENTRIES]; vm_size_t mapped, pgsz, base, mask; int idx, nents; /* Round up to the next 1M */ size = roundup2(size, 1 << 20); mapped = 0; idx = 0; base = va; pgsz = 64*1024*1024; while (mapped < size) { while (mapped < size && idx < KERNEL_REGION_MAX_TLB_ENTRIES) { while (pgsz > (size - mapped)) pgsz >>= 2; pgs[idx++] = pgsz; mapped += pgsz; } /* We under-map. Correct for this. */ if (mapped < size) { while (pgs[idx - 1] == pgsz) { idx--; mapped -= pgsz; } /* XXX We may increase beyond our starting point.
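/*
 * Illustrative sketch (not part of this file): a standalone check of the
 * TSIZE encoding implemented by tsize2size()/size2tsize() above, i.e.
 * size = 2^(2 * tsize + 10) bytes (sketch_* names are hypothetical):
 */
#include <stdint.h>

static uint64_t
sketch_tsize2size(unsigned int tsize)
{

	return (UINT64_C(1) << (2 * tsize + 10));
}

static unsigned int
sketch_size2tsize(uint64_t size)
{

	return ((unsigned int)(63 - __builtin_clzll(size)) / 2 - 5);
}
/* Examples: tsize 1 -> 4 KB, tsize 5 -> 1 MB; both round-trip. */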
*/ pgsz <<= 2; pgs[idx++] = pgsz; mapped += pgsz; } } nents = idx; mask = pgs[0] - 1; /* Align address to the boundary */ if (va & mask) { va = (va + mask) & ~mask; pa = (pa + mask) & ~mask; } for (idx = 0; idx < nents; idx++) { pgsz = pgs[idx]; debugf("%u: %llx -> %jx, size=%jx\n", idx, pa, (uintmax_t)va, (uintmax_t)pgsz); tlb1_set_entry(va, pa, pgsz, _TLB_ENTRY_SHARED | _TLB_ENTRY_MEM); pa += pgsz; va += pgsz; } mapped = (va - base); if (bootverbose) printf("mapped size 0x%"PRIxPTR" (wasted space 0x%"PRIxPTR")\n", mapped, mapped - size); return (mapped); } /* * TLB1 initialization routine, to be called after the very first * assembler level setup done in locore.S. */ void tlb1_init() { uint32_t mas0, mas1, mas2, mas3, mas7; uint32_t tsz; tlb1_get_tlbconf(); mas0 = MAS0_TLBSEL(1) | MAS0_ESEL(0); mtspr(SPR_MAS0, mas0); __asm __volatile("isync; tlbre"); mas1 = mfspr(SPR_MAS1); mas2 = mfspr(SPR_MAS2); mas3 = mfspr(SPR_MAS3); mas7 = mfspr(SPR_MAS7); kernload = ((vm_paddr_t)(mas7 & MAS7_RPN) << 32) | (mas3 & MAS3_RPN); tsz = (mas1 & MAS1_TSIZE_MASK) >> MAS1_TSIZE_SHIFT; kernsize += (tsz > 0) ? tsize2size(tsz) : 0; /* Setup TLB miss defaults */ set_mas4_defaults(); } /* * pmap_early_io_unmap() should be used in close conjunction with * pmap_early_io_map(), as in the following snippet: * * x = pmap_early_io_map(...); * * pmap_early_io_unmap(x, size); * * while avoiding further allocations in between. */ void pmap_early_io_unmap(vm_offset_t va, vm_size_t size) { int i; tlb_entry_t e; vm_size_t isize; size = roundup(size, PAGE_SIZE); isize = size; for (i = 0; i < TLB1_ENTRIES && size > 0; i++) { tlb1_read_entry(&e, i); if (!(e.mas1 & MAS1_VALID)) continue; if (va <= e.virt && (va + isize) >= (e.virt + e.size)) { size -= e.size; e.mas1 &= ~MAS1_VALID; tlb1_write_entry(&e, i); } } if (tlb1_map_base == va + isize) tlb1_map_base -= isize; } vm_offset_t pmap_early_io_map(vm_paddr_t pa, vm_size_t size) { vm_paddr_t pa_base; vm_offset_t va, sz; int i; tlb_entry_t e; KASSERT(!pmap_bootstrapped, ("Do not use after PMAP is up!")); for (i = 0; i < TLB1_ENTRIES; i++) { tlb1_read_entry(&e, i); if (!(e.mas1 & MAS1_VALID)) continue; if (pa >= e.phys && (pa + size) <= (e.phys + e.size)) return (e.virt + (pa - e.phys)); } pa_base = rounddown(pa, PAGE_SIZE); size = roundup(size + (pa - pa_base), PAGE_SIZE); tlb1_map_base = roundup2(tlb1_map_base, 1 << (ilog2(size) & ~1)); va = tlb1_map_base + (pa - pa_base); do { sz = 1 << (ilog2(size) & ~1); tlb1_set_entry(tlb1_map_base, pa_base, sz, _TLB_ENTRY_SHARED | _TLB_ENTRY_IO); size -= sz; pa_base += sz; tlb1_map_base += sz; } while (size > 0); return (va); } void pmap_track_page(pmap_t pmap, vm_offset_t va) { vm_paddr_t pa; vm_page_t page; struct pv_entry *pve; va = trunc_page(va); pa = pmap_kextract(va); page = PHYS_TO_VM_PAGE(pa); rw_wlock(&pvh_global_lock); PMAP_LOCK(pmap); TAILQ_FOREACH(pve, &page->md.pv_list, pv_link) { if ((pmap == pve->pv_pmap) && (va == pve->pv_va)) { goto out; } } page->md.pv_tracked = true; pv_insert(pmap, va, page); out: PMAP_UNLOCK(pmap); rw_wunlock(&pvh_global_lock); } /* * Setup MAS4 defaults. * These values are loaded to MAS0-2 on a TLB miss. */ static void set_mas4_defaults(void) { uint32_t mas4; /* Defaults: TLB0, PID0, TSIZED=4K */ mas4 = MAS4_TLBSELD0; mas4 |= (TLB_SIZE_4K << MAS4_TSIZED_SHIFT) & MAS4_TSIZED_MASK; #ifdef SMP mas4 |= MAS4_MD; #endif mtspr(SPR_MAS4, mas4); __asm __volatile("isync"); } /* * Return 0 if the physical IO range is encompassed by one of the * TLB1 entries, otherwise return the related error code.
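/*
 * Illustrative sketch (not part of this file): tlb1_read_entry() earlier
 * and tlb1_iomapped() below both rebuild an entry's physical address the
 * same way: the RPN bits of MAS7 supply bits 32 and up, the RPN bits of
 * MAS3 the low 32 bits.  Standalone model with assumed mask values (the
 * real masks are MAS3_RPN/MAS7_RPN in this file):
 */
#include <stdint.h>

#define	SKETCH_MAS3_RPN	0xfffff000u	/* assumed: PA bits 12..31 */
#define	SKETCH_MAS7_RPN	0x0000000fu	/* assumed: PA bits 32..35 */

static uint64_t
sketch_entry_pa(uint32_t mas3, uint32_t mas7)
{

	return (((uint64_t)(mas7 & SKETCH_MAS7_RPN) << 32) |
	    (mas3 & SKETCH_MAS3_RPN));
}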
*/ static int tlb1_iomapped(int i, vm_paddr_t pa, vm_size_t size, vm_offset_t *va) { uint32_t prot; vm_paddr_t pa_start; vm_paddr_t pa_end; unsigned int entry_tsize; vm_size_t entry_size; tlb_entry_t e; *va = (vm_offset_t)NULL; tlb1_read_entry(&e, i); /* Skip invalid entries */ if (!(e.mas1 & MAS1_VALID)) return (EINVAL); /* * The entry must be cache-inhibited, guarded, and r/w * so it can function as an i/o page */ prot = e.mas2 & (MAS2_I | MAS2_G); if (prot != (MAS2_I | MAS2_G)) return (EPERM); prot = e.mas3 & (MAS3_SR | MAS3_SW); if (prot != (MAS3_SR | MAS3_SW)) return (EPERM); /* The address should be within the entry range. */ entry_tsize = (e.mas1 & MAS1_TSIZE_MASK) >> MAS1_TSIZE_SHIFT; KASSERT((entry_tsize), ("tlb1_iomapped: invalid entry tsize")); entry_size = tsize2size(entry_tsize); pa_start = (((vm_paddr_t)e.mas7 & MAS7_RPN) << 32) | (e.mas3 & MAS3_RPN); pa_end = pa_start + entry_size; if ((pa < pa_start) || ((pa + size) > pa_end)) return (ERANGE); /* Return virtual address of this mapping. */ *va = (e.mas2 & MAS2_EPN_MASK) + (pa - pa_start); return (0); } /* * Invalidate all TLB0 entries which match the given TID. Note this is * dedicated for cases when invalidations should NOT be propagated to other * CPUs. */ static void tid_flush(tlbtid_t tid) { register_t msr; uint32_t mas0, mas1, mas2; int entry, way; /* Don't evict kernel translations */ if (tid == TID_KERNEL) return; msr = mfmsr(); __asm __volatile("wrteei 0"); + /* + * Newer (e500mc and later) have tlbilx, which doesn't broadcast, so use + * it for PID invalidation. + */ + switch ((mfpvr() >> 16) & 0xffff) { + case FSL_E500mc: + case FSL_E5500: + case FSL_E6500: + mtspr(SPR_MAS6, tid << MAS6_SPID0_SHIFT); + /* tlbilxpid */ + __asm __volatile("isync; .long 0x7c000024; isync; msync"); + mtmsr(msr); + return; + } + for (way = 0; way < TLB0_WAYS; way++) for (entry = 0; entry < TLB0_ENTRIES_PER_WAY; entry++) { mas0 = MAS0_TLBSEL(0) | MAS0_ESEL(way); mtspr(SPR_MAS0, mas0); - __asm __volatile("isync"); mas2 = entry << MAS2_TLB0_ENTRY_IDX_SHIFT; mtspr(SPR_MAS2, mas2); __asm __volatile("isync; tlbre"); mas1 = mfspr(SPR_MAS1); if (!(mas1 & MAS1_VALID)) continue; if (((mas1 & MAS1_TID_MASK) >> MAS1_TID_SHIFT) != tid) continue; mas1 &= ~MAS1_VALID; mtspr(SPR_MAS1, mas1); __asm __volatile("isync; tlbwe; isync; msync"); } mtmsr(msr); } #ifdef DDB /* Print out contents of the MAS registers for each TLB0 entry */ static void #ifdef __powerpc64__ tlb_print_entry(int i, uint32_t mas1, uint64_t mas2, uint32_t mas3, #else tlb_print_entry(int i, uint32_t mas1, uint32_t mas2, uint32_t mas3, #endif uint32_t mas7) { int as; char desc[3]; tlbtid_t tid; vm_size_t size; unsigned int tsize; desc[2] = '\0'; if (mas1 & MAS1_VALID) desc[0] = 'V'; else desc[0] = ' '; if (mas1 & MAS1_IPROT) desc[1] = 'P'; else desc[1] = ' '; as = (mas1 & MAS1_TS_MASK) ? 
1 : 0; tid = MAS1_GETTID(mas1); tsize = (mas1 & MAS1_TSIZE_MASK) >> MAS1_TSIZE_SHIFT; size = 0; if (tsize) size = tsize2size(tsize); printf("%3d: (%s) [AS=%d] " "sz = 0x%08x tsz = %d tid = %d mas1 = 0x%08x " "mas2(va) = 0x%"PRI0ptrX" mas3(pa) = 0x%08x mas7 = 0x%08x\n", i, desc, as, size, tsize, tid, mas1, mas2, mas3, mas7); } DB_SHOW_COMMAND(tlb0, tlb0_print_tlbentries) { uint32_t mas0, mas1, mas3, mas7; #ifdef __powerpc64__ uint64_t mas2; #else uint32_t mas2; #endif int entryidx, way, idx; printf("TLB0 entries:\n"); for (way = 0; way < TLB0_WAYS; way ++) for (entryidx = 0; entryidx < TLB0_ENTRIES_PER_WAY; entryidx++) { mas0 = MAS0_TLBSEL(0) | MAS0_ESEL(way); mtspr(SPR_MAS0, mas0); - __asm __volatile("isync"); mas2 = entryidx << MAS2_TLB0_ENTRY_IDX_SHIFT; mtspr(SPR_MAS2, mas2); __asm __volatile("isync; tlbre"); mas1 = mfspr(SPR_MAS1); mas2 = mfspr(SPR_MAS2); mas3 = mfspr(SPR_MAS3); mas7 = mfspr(SPR_MAS7); idx = tlb0_tableidx(mas2, way); tlb_print_entry(idx, mas1, mas2, mas3, mas7); } } /* * Print out contents of the MAS registers for each TLB1 entry */ DB_SHOW_COMMAND(tlb1, tlb1_print_tlbentries) { uint32_t mas0, mas1, mas3, mas7; #ifdef __powerpc64__ uint64_t mas2; #else uint32_t mas2; #endif int i; printf("TLB1 entries:\n"); for (i = 0; i < TLB1_ENTRIES; i++) { mas0 = MAS0_TLBSEL(1) | MAS0_ESEL(i); mtspr(SPR_MAS0, mas0); __asm __volatile("isync; tlbre"); mas1 = mfspr(SPR_MAS1); mas2 = mfspr(SPR_MAS2); mas3 = mfspr(SPR_MAS3); mas7 = mfspr(SPR_MAS7); tlb_print_entry(i, mas1, mas2, mas3, mas7); } } #endif Index: projects/import-googletest-1.8.1/sys/powerpc/powerpc/elf32_machdep.c =================================================================== --- projects/import-googletest-1.8.1/sys/powerpc/powerpc/elf32_machdep.c (revision 344270) +++ projects/import-googletest-1.8.1/sys/powerpc/powerpc/elf32_machdep.c (revision 344271) @@ -1,396 +1,396 @@ /*- * SPDX-License-Identifier: BSD-2-Clause-FreeBSD * * Copyright 1996-1998 John D. Polstra. * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. * IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
* * $FreeBSD$ */ #include #include #include #define __ELF_WORD_SIZE 32 #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #ifdef __powerpc64__ #include #include extern const char *freebsd32_syscallnames[]; static void ppc32_fixlimit(struct rlimit *rl, int which); static SYSCTL_NODE(_compat, OID_AUTO, ppc32, CTLFLAG_RW, 0, "32-bit mode"); #define PPC32_MAXDSIZ (1024*1024*1024) static u_long ppc32_maxdsiz = PPC32_MAXDSIZ; SYSCTL_ULONG(_compat_ppc32, OID_AUTO, maxdsiz, CTLFLAG_RWTUN, &ppc32_maxdsiz, 0, ""); #define PPC32_MAXSSIZ (64*1024*1024) u_long ppc32_maxssiz = PPC32_MAXSSIZ; SYSCTL_ULONG(_compat_ppc32, OID_AUTO, maxssiz, CTLFLAG_RWTUN, &ppc32_maxssiz, 0, ""); #endif struct sysentvec elf32_freebsd_sysvec = { .sv_size = SYS_MAXSYSCALL, #ifdef __powerpc64__ .sv_table = freebsd32_sysent, #else .sv_table = sysent, #endif .sv_errsize = 0, .sv_errtbl = NULL, .sv_transtrap = NULL, .sv_fixup = __elfN(freebsd_fixup), .sv_sendsig = sendsig, .sv_sigcode = sigcode32, .sv_szsigcode = &szsigcode32, .sv_name = "FreeBSD ELF32", .sv_coredump = __elfN(coredump), .sv_imgact_try = NULL, .sv_minsigstksz = MINSIGSTKSZ, .sv_pagesize = PAGE_SIZE, .sv_minuser = VM_MIN_ADDRESS, .sv_stackprot = VM_PROT_ALL, #ifdef __powerpc64__ .sv_maxuser = VM_MAXUSER_ADDRESS, .sv_usrstack = FREEBSD32_USRSTACK, .sv_psstrings = FREEBSD32_PS_STRINGS, .sv_copyout_strings = freebsd32_copyout_strings, .sv_setregs = ppc32_setregs, .sv_syscallnames = freebsd32_syscallnames, .sv_fixlimit = ppc32_fixlimit, #else .sv_maxuser = VM_MAXUSER_ADDRESS, .sv_usrstack = USRSTACK, .sv_psstrings = PS_STRINGS, .sv_copyout_strings = exec_copyout_strings, .sv_setregs = exec_setregs, .sv_syscallnames = syscallnames, .sv_fixlimit = NULL, #endif .sv_maxssiz = NULL, - .sv_flags = SV_ABI_FREEBSD | SV_ILP32 | SV_SHP, + .sv_flags = SV_ABI_FREEBSD | SV_ILP32 | SV_SHP | SV_ASLR, .sv_set_syscall_retval = cpu_set_syscall_retval, .sv_fetch_syscall_args = cpu_fetch_syscall_args, .sv_shared_page_base = FREEBSD32_SHAREDPAGE, .sv_shared_page_len = PAGE_SIZE, .sv_schedtail = NULL, .sv_thread_detach = NULL, .sv_trap = NULL, .sv_hwcap = &cpu_features, .sv_hwcap2 = &cpu_features2, }; INIT_SYSENTVEC(elf32_sysvec, &elf32_freebsd_sysvec); static Elf32_Brandinfo freebsd_brand_info = { .brand = ELFOSABI_FREEBSD, .machine = EM_PPC, .compat_3_brand = "FreeBSD", .emul_path = NULL, .interp_path = "/libexec/ld-elf.so.1", .sysvec = &elf32_freebsd_sysvec, #ifdef __powerpc64__ .interp_newpath = "/libexec/ld-elf32.so.1", #else .interp_newpath = NULL, #endif .brand_note = &elf32_freebsd_brandnote, .flags = BI_CAN_EXEC_DYN | BI_BRAND_NOTE }; SYSINIT(elf32, SI_SUB_EXEC, SI_ORDER_FIRST, (sysinit_cfunc_t) elf32_insert_brand_entry, &freebsd_brand_info); static Elf32_Brandinfo freebsd_brand_oinfo = { .brand = ELFOSABI_FREEBSD, .machine = EM_PPC, .compat_3_brand = "FreeBSD", .emul_path = NULL, .interp_path = "/usr/libexec/ld-elf.so.1", .sysvec = &elf32_freebsd_sysvec, .interp_newpath = NULL, .brand_note = &elf32_freebsd_brandnote, .flags = BI_CAN_EXEC_DYN | BI_BRAND_NOTE }; SYSINIT(oelf32, SI_SUB_EXEC, SI_ORDER_ANY, (sysinit_cfunc_t) elf32_insert_brand_entry, &freebsd_brand_oinfo); void elf_reloc_self(Elf_Dyn *dynp, Elf_Addr relocbase); void elf32_dump_thread(struct thread *td, void *dst, size_t *off) { size_t len; struct pcb *pcb; uint64_t vshr[32]; uint64_t *vsr_dw1; int vsr_idx; len = 0; pcb = td->td_pcb; if (pcb->pcb_flags & PCB_VEC) { 
save_vec_nodrop(td); if (dst != NULL) { len += elf32_populate_note(NT_PPC_VMX, &pcb->pcb_vec, (char *)dst + len, sizeof(pcb->pcb_vec), NULL); } else len += elf32_populate_note(NT_PPC_VMX, NULL, NULL, sizeof(pcb->pcb_vec), NULL); } if (pcb->pcb_flags & PCB_VSX) { save_fpu_nodrop(td); if (dst != NULL) { /* * Doubleword 0 of VSR0-VSR31 overlap with FPR0-FPR31 and * VSR32-VSR63 overlap with VR0-VR31, so we only copy * the non-overlapping data, which is doubleword 1 of VSR0-VSR31. */ for (vsr_idx = 0; vsr_idx < nitems(vshr); vsr_idx++) { vsr_dw1 = (uint64_t *)&pcb->pcb_fpu.fpr[vsr_idx].vsr[2]; vshr[vsr_idx] = *vsr_dw1; } len += elf32_populate_note(NT_PPC_VSX, vshr, (char *)dst + len, sizeof(vshr), NULL); } else len += elf32_populate_note(NT_PPC_VSX, NULL, NULL, sizeof(vshr), NULL); } *off = len; } #ifndef __powerpc64__ bool elf_is_ifunc_reloc(Elf_Size r_info __unused) { return (false); } /* Process one elf relocation with addend. */ static int elf_reloc_internal(linker_file_t lf, Elf_Addr relocbase, const void *data, int type, int local, elf_lookup_fn lookup) { Elf_Addr *where; Elf_Half *hwhere; Elf_Addr addr; Elf_Addr addend; Elf_Word rtype, symidx; const Elf_Rela *rela; int error; switch (type) { case ELF_RELOC_REL: panic("PPC only supports RELA relocations"); break; case ELF_RELOC_RELA: rela = (const Elf_Rela *)data; where = (Elf_Addr *) ((uintptr_t)relocbase + rela->r_offset); hwhere = (Elf_Half *) ((uintptr_t)relocbase + rela->r_offset); addend = rela->r_addend; rtype = ELF_R_TYPE(rela->r_info); symidx = ELF_R_SYM(rela->r_info); break; default: panic("elf_reloc: unknown relocation mode %d\n", type); } switch (rtype) { case R_PPC_NONE: break; case R_PPC_ADDR32: /* word32 S + A */ error = lookup(lf, symidx, 1, &addr); if (error != 0) return -1; *where = elf_relocaddr(lf, addr + addend); break; case R_PPC_ADDR16_LO: /* #lo(S) */ error = lookup(lf, symidx, 1, &addr); if (error != 0) return -1; /* * addend values are sometimes relative to sections * (i.e. .rodata) in rela, where in reality they * are relative to relocbase. Detect this condition. */ if (addr > relocbase && addr <= (relocbase + addend)) addr = relocbase; addr = elf_relocaddr(lf, addr + addend); *hwhere = addr & 0xffff; break; case R_PPC_ADDR16_HA: /* #ha(S) */ error = lookup(lf, symidx, 1, &addr); if (error != 0) return -1; /* * addend values are sometimes relative to sections * (i.e. .rodata) in rela, where in reality they * are relative to relocbase. Detect this condition. */ if (addr > relocbase && addr <= (relocbase + addend)) addr = relocbase; addr = elf_relocaddr(lf, addr + addend); *hwhere = ((addr >> 16) + ((addr & 0x8000) ? 
1 : 0)) & 0xffff; break; case R_PPC_RELATIVE: /* word32 B + A */ *where = elf_relocaddr(lf, relocbase + addend); break; default: printf("kldload: unexpected relocation type %d\n", (int) rtype); return -1; } return(0); } void elf_reloc_self(Elf_Dyn *dynp, Elf_Addr relocbase) { Elf_Rela *rela = NULL, *relalim; Elf_Addr relasz = 0; Elf_Addr *where; /* * Extract the rela/relasz values from the dynamic section */ for (; dynp->d_tag != DT_NULL; dynp++) { switch (dynp->d_tag) { case DT_RELA: rela = (Elf_Rela *)(relocbase+dynp->d_un.d_ptr); break; case DT_RELASZ: relasz = dynp->d_un.d_val; break; } } /* * Relocate these values */ relalim = (Elf_Rela *)((caddr_t)rela + relasz); for (; rela < relalim; rela++) { if (ELF_R_TYPE(rela->r_info) != R_PPC_RELATIVE) continue; where = (Elf_Addr *)(relocbase + rela->r_offset); *where = (Elf_Addr)(relocbase + rela->r_addend); } } int elf_reloc(linker_file_t lf, Elf_Addr relocbase, const void *data, int type, elf_lookup_fn lookup) { return (elf_reloc_internal(lf, relocbase, data, type, 0, lookup)); } int elf_reloc_local(linker_file_t lf, Elf_Addr relocbase, const void *data, int type, elf_lookup_fn lookup) { return (elf_reloc_internal(lf, relocbase, data, type, 1, lookup)); } int elf_cpu_load_file(linker_file_t lf) { /* Only sync the cache for non-kernel modules */ if (lf->id != 1) __syncicache(lf->address, lf->size); return (0); } int elf_cpu_unload_file(linker_file_t lf __unused) { return (0); } #endif #ifdef __powerpc64__ static void ppc32_fixlimit(struct rlimit *rl, int which) { switch (which) { case RLIMIT_DATA: if (ppc32_maxdsiz != 0) { if (rl->rlim_cur > ppc32_maxdsiz) rl->rlim_cur = ppc32_maxdsiz; if (rl->rlim_max > ppc32_maxdsiz) rl->rlim_max = ppc32_maxdsiz; } break; case RLIMIT_STACK: if (ppc32_maxssiz != 0) { if (rl->rlim_cur > ppc32_maxssiz) rl->rlim_cur = ppc32_maxssiz; if (rl->rlim_max > ppc32_maxssiz) rl->rlim_max = ppc32_maxssiz; } break; } } #endif Index: projects/import-googletest-1.8.1/sys/powerpc/powerpc/elf64_machdep.c =================================================================== --- projects/import-googletest-1.8.1/sys/powerpc/powerpc/elf64_machdep.c (revision 344270) +++ projects/import-googletest-1.8.1/sys/powerpc/powerpc/elf64_machdep.c (revision 344271) @@ -1,412 +1,412 @@ /*- * SPDX-License-Identifier: BSD-2-Clause-FreeBSD * * Copyright 1996-1998 John D. Polstra. * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
* IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. * * $FreeBSD$ */ #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include static void exec_setregs_funcdesc(struct thread *td, struct image_params *imgp, u_long stack); struct sysentvec elf64_freebsd_sysvec_v1 = { .sv_size = SYS_MAXSYSCALL, .sv_table = sysent, .sv_errsize = 0, .sv_errtbl = NULL, .sv_transtrap = NULL, .sv_fixup = __elfN(freebsd_fixup), .sv_sendsig = sendsig, .sv_sigcode = sigcode64, .sv_szsigcode = &szsigcode64, .sv_name = "FreeBSD ELF64", .sv_coredump = __elfN(coredump), .sv_imgact_try = NULL, .sv_minsigstksz = MINSIGSTKSZ, .sv_pagesize = PAGE_SIZE, .sv_minuser = VM_MIN_ADDRESS, .sv_maxuser = VM_MAXUSER_ADDRESS, .sv_usrstack = USRSTACK, .sv_psstrings = PS_STRINGS, .sv_stackprot = VM_PROT_ALL, .sv_copyout_strings = exec_copyout_strings, .sv_setregs = exec_setregs_funcdesc, .sv_fixlimit = NULL, .sv_maxssiz = NULL, - .sv_flags = SV_ABI_FREEBSD | SV_LP64 | SV_SHP, + .sv_flags = SV_ABI_FREEBSD | SV_LP64 | SV_SHP | SV_ASLR, .sv_set_syscall_retval = cpu_set_syscall_retval, .sv_fetch_syscall_args = cpu_fetch_syscall_args, .sv_syscallnames = syscallnames, .sv_shared_page_base = SHAREDPAGE, .sv_shared_page_len = PAGE_SIZE, .sv_schedtail = NULL, .sv_thread_detach = NULL, .sv_trap = NULL, .sv_hwcap = &cpu_features, .sv_hwcap2 = &cpu_features2, }; INIT_SYSENTVEC(elf64_sysvec_v1, &elf64_freebsd_sysvec_v1); struct sysentvec elf64_freebsd_sysvec_v2 = { .sv_size = SYS_MAXSYSCALL, .sv_table = sysent, .sv_errsize = 0, .sv_errtbl = NULL, .sv_transtrap = NULL, .sv_fixup = __elfN(freebsd_fixup), .sv_sendsig = sendsig, .sv_sigcode = sigcode64_elfv2, .sv_szsigcode = &szsigcode64_elfv2, .sv_name = "FreeBSD ELF64 V2", .sv_coredump = __elfN(coredump), .sv_imgact_try = NULL, .sv_minsigstksz = MINSIGSTKSZ, .sv_pagesize = PAGE_SIZE, .sv_minuser = VM_MIN_ADDRESS, .sv_maxuser = VM_MAXUSER_ADDRESS, .sv_usrstack = USRSTACK, .sv_psstrings = PS_STRINGS, .sv_stackprot = VM_PROT_ALL, .sv_copyout_strings = exec_copyout_strings, .sv_setregs = exec_setregs, .sv_fixlimit = NULL, .sv_maxssiz = NULL, .sv_flags = SV_ABI_FREEBSD | SV_LP64 | SV_SHP, .sv_set_syscall_retval = cpu_set_syscall_retval, .sv_fetch_syscall_args = cpu_fetch_syscall_args, .sv_syscallnames = syscallnames, .sv_shared_page_base = SHAREDPAGE, .sv_shared_page_len = PAGE_SIZE, .sv_schedtail = NULL, .sv_thread_detach = NULL, .sv_trap = NULL, .sv_hwcap = &cpu_features, .sv_hwcap2 = &cpu_features2, }; INIT_SYSENTVEC(elf64_sysvec_v2, &elf64_freebsd_sysvec_v2); static boolean_t ppc64_elfv1_header_match(struct image_params *params); static boolean_t ppc64_elfv2_header_match(struct image_params *params); static Elf64_Brandinfo freebsd_brand_info_elfv1 = { .brand = ELFOSABI_FREEBSD, .machine = EM_PPC64, .compat_3_brand = "FreeBSD", .emul_path = NULL, .interp_path = "/libexec/ld-elf.so.1", .sysvec = &elf64_freebsd_sysvec_v1, .interp_newpath = NULL, .brand_note = &elf64_freebsd_brandnote, .flags = BI_CAN_EXEC_DYN | BI_BRAND_NOTE, 
.header_supported = &ppc64_elfv1_header_match }; SYSINIT(elf64v1, SI_SUB_EXEC, SI_ORDER_ANY, (sysinit_cfunc_t) elf64_insert_brand_entry, &freebsd_brand_info_elfv1); static Elf64_Brandinfo freebsd_brand_info_elfv2 = { .brand = ELFOSABI_FREEBSD, .machine = EM_PPC64, .compat_3_brand = "FreeBSD", .emul_path = NULL, .interp_path = "/libexec/ld-elf.so.1", .sysvec = &elf64_freebsd_sysvec_v2, .interp_newpath = NULL, .brand_note = &elf64_freebsd_brandnote, .flags = BI_CAN_EXEC_DYN | BI_BRAND_NOTE, .header_supported = &ppc64_elfv2_header_match }; SYSINIT(elf64v2, SI_SUB_EXEC, SI_ORDER_ANY, (sysinit_cfunc_t) elf64_insert_brand_entry, &freebsd_brand_info_elfv2); static Elf64_Brandinfo freebsd_brand_oinfo = { .brand = ELFOSABI_FREEBSD, .machine = EM_PPC64, .compat_3_brand = "FreeBSD", .emul_path = NULL, .interp_path = "/usr/libexec/ld-elf.so.1", .sysvec = &elf64_freebsd_sysvec_v1, .interp_newpath = NULL, .brand_note = &elf64_freebsd_brandnote, .flags = BI_CAN_EXEC_DYN | BI_BRAND_NOTE, .header_supported = &ppc64_elfv1_header_match }; SYSINIT(oelf64, SI_SUB_EXEC, SI_ORDER_ANY, (sysinit_cfunc_t) elf64_insert_brand_entry, &freebsd_brand_oinfo); void elf_reloc_self(Elf_Dyn *dynp, Elf_Addr relocbase); static boolean_t ppc64_elfv1_header_match(struct image_params *params) { const Elf64_Ehdr *hdr = (const Elf64_Ehdr *)params->image_header; int abi = (hdr->e_flags & 3); return (abi == 0 || abi == 1); } static boolean_t ppc64_elfv2_header_match(struct image_params *params) { const Elf64_Ehdr *hdr = (const Elf64_Ehdr *)params->image_header; int abi = (hdr->e_flags & 3); return (abi == 2); } static void exec_setregs_funcdesc(struct thread *td, struct image_params *imgp, u_long stack) { struct trapframe *tf; register_t entry_desc[3]; tf = trapframe(td); exec_setregs(td, imgp, stack); /* * For 64-bit ELFv1, we need to disentangle the function * descriptor * * 0. entry point * 1. TOC value (r2) * 2. Environment pointer (r11) */ (void)copyin((void *)imgp->entry_addr, entry_desc, sizeof(entry_desc)); tf->srr0 = entry_desc[0] + imgp->reloc_base; tf->fixreg[2] = entry_desc[1] + imgp->reloc_base; tf->fixreg[11] = entry_desc[2] + imgp->reloc_base; } void elf64_dump_thread(struct thread *td, void *dst, size_t *off) { size_t len; struct pcb *pcb; uint64_t vshr[32]; uint64_t *vsr_dw1; int vsr_idx; len = 0; pcb = td->td_pcb; if (pcb->pcb_flags & PCB_VEC) { save_vec_nodrop(td); if (dst != NULL) { len += elf64_populate_note(NT_PPC_VMX, &pcb->pcb_vec, (char *)dst + len, sizeof(pcb->pcb_vec), NULL); } else len += elf64_populate_note(NT_PPC_VMX, NULL, NULL, sizeof(pcb->pcb_vec), NULL); } if (pcb->pcb_flags & PCB_VSX) { save_fpu_nodrop(td); if (dst != NULL) { /* * Doubleword 0 of VSR0-VSR31 overlap with FPR0-FPR31 and * VSR32-VSR63 overlap with VR0-VR31, so we only copy * the non-overlapping data, which is doubleword 1 of VSR0-VSR31. */ for (vsr_idx = 0; vsr_idx < nitems(vshr); vsr_idx++) { vsr_dw1 = (uint64_t *)&pcb->pcb_fpu.fpr[vsr_idx].vsr[2]; vshr[vsr_idx] = *vsr_dw1; } len += elf64_populate_note(NT_PPC_VSX, vshr, (char *)dst + len, sizeof(vshr), NULL); } else len += elf64_populate_note(NT_PPC_VSX, NULL, NULL, sizeof(vshr), NULL); } *off = len; } bool elf_is_ifunc_reloc(Elf_Size r_info __unused) { return (false); } /* Process one elf relocation with addend. 
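* With RELA relocations the addend is carried in the record itself (r_addend) rather than in the data being patched, so for example R_PPC64_ADDR64 stores S + A at r_offset and R_PPC_RELATIVE stores B + A; PPC does not use REL entries, and elf_reloc_internal() below panics if it sees one.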
*/ static int elf_reloc_internal(linker_file_t lf, Elf_Addr relocbase, const void *data, int type, int local, elf_lookup_fn lookup) { Elf_Addr *where; Elf_Addr addr; Elf_Addr addend; Elf_Word rtype, symidx; const Elf_Rela *rela; int error; switch (type) { case ELF_RELOC_REL: panic("PPC only supports RELA relocations"); break; case ELF_RELOC_RELA: rela = (const Elf_Rela *)data; where = (Elf_Addr *) (relocbase + rela->r_offset); addend = rela->r_addend; rtype = ELF_R_TYPE(rela->r_info); symidx = ELF_R_SYM(rela->r_info); break; default: panic("elf_reloc: unknown relocation mode %d\n", type); } switch (rtype) { case R_PPC_NONE: break; case R_PPC64_ADDR64: /* doubleword64 S + A */ error = lookup(lf, symidx, 1, &addr); if (error != 0) return -1; addr += addend; *where = addr; break; case R_PPC_RELATIVE: /* doubleword64 B + A */ *where = elf_relocaddr(lf, relocbase + addend); break; case R_PPC_JMP_SLOT: /* function descriptor copy */ lookup(lf, symidx, 1, &addr); #if !defined(_CALL_ELF) || _CALL_ELF == 1 memcpy(where, (Elf_Addr *)addr, 3*sizeof(Elf_Addr)); #else *where = addr; #endif __asm __volatile("dcbst 0,%0; sync" :: "r"(where) : "memory"); break; default: printf("kldload: unexpected relocation type %d\n", (int) rtype); return -1; } return(0); } void elf_reloc_self(Elf_Dyn *dynp, Elf_Addr relocbase) { Elf_Rela *rela = NULL, *relalim; Elf_Addr relasz = 0; Elf_Addr *where; /* * Extract the rela/relasz values from the dynamic section */ for (; dynp->d_tag != DT_NULL; dynp++) { switch (dynp->d_tag) { case DT_RELA: rela = (Elf_Rela *)(relocbase+dynp->d_un.d_ptr); break; case DT_RELASZ: relasz = dynp->d_un.d_val; break; } } /* * Relocate these values */ relalim = (Elf_Rela *)((caddr_t)rela + relasz); for (; rela < relalim; rela++) { if (ELF_R_TYPE(rela->r_info) != R_PPC_RELATIVE) continue; where = (Elf_Addr *)(relocbase + rela->r_offset); *where = (Elf_Addr)(relocbase + rela->r_addend); } } int elf_reloc(linker_file_t lf, Elf_Addr relocbase, const void *data, int type, elf_lookup_fn lookup) { return (elf_reloc_internal(lf, relocbase, data, type, 0, lookup)); } int elf_reloc_local(linker_file_t lf, Elf_Addr relocbase, const void *data, int type, elf_lookup_fn lookup) { return (elf_reloc_internal(lf, relocbase, data, type, 1, lookup)); } int elf_cpu_load_file(linker_file_t lf) { /* Only sync the cache for non-kernel modules */ if (lf->id != 1) __syncicache(lf->address, lf->size); return (0); } int elf_cpu_unload_file(linker_file_t lf __unused) { return (0); } Index: projects/import-googletest-1.8.1/sys/powerpc/powerpc/exec_machdep.c =================================================================== --- projects/import-googletest-1.8.1/sys/powerpc/powerpc/exec_machdep.c (revision 344270) +++ projects/import-googletest-1.8.1/sys/powerpc/powerpc/exec_machdep.c (revision 344271) @@ -1,1126 +1,1130 @@ /*- * SPDX-License-Identifier: BSD-4-Clause AND BSD-2-Clause-FreeBSD * * Copyright (C) 1995, 1996 Wolfgang Solfrank. * Copyright (C) 1995, 1996 TooLs GmbH. * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * 3. 
All advertising materials mentioning features or use of this software * must display the following acknowledgement: * This product includes software developed by TooLs GmbH. * 4. The name of TooLs GmbH may not be used to endorse or promote products * derived from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY TOOLS GMBH ``AS IS'' AND ANY EXPRESS OR * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. * IN NO EVENT SHALL TOOLS GMBH BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; * OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR * OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF * ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /*- * Copyright (C) 2001 Benno Rice * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY Benno Rice ``AS IS'' AND ANY EXPRESS OR * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. * IN NO EVENT SHALL TOOLS GMBH BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; * OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR * OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF * ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
* $NetBSD: machdep.c,v 1.74.2.1 2000/11/01 16:13:48 tv Exp $ */ #include __FBSDID("$FreeBSD$"); #include "opt_fpu_emu.h" #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #ifdef FPU_EMU #include #endif #ifdef COMPAT_FREEBSD32 #include #include #include typedef struct __ucontext32 { sigset_t uc_sigmask; mcontext32_t uc_mcontext; uint32_t uc_link; struct sigaltstack32 uc_stack; uint32_t uc_flags; uint32_t __spare__[4]; } ucontext32_t; struct sigframe32 { ucontext32_t sf_uc; struct siginfo32 sf_si; }; static int grab_mcontext32(struct thread *td, mcontext32_t *, int flags); #endif static int grab_mcontext(struct thread *, mcontext_t *, int); #ifdef __powerpc64__ extern struct sysentvec elf64_freebsd_sysvec_v2; #endif void sendsig(sig_t catcher, ksiginfo_t *ksi, sigset_t *mask) { struct trapframe *tf; struct sigacts *psp; struct sigframe sf; struct thread *td; struct proc *p; #ifdef COMPAT_FREEBSD32 struct siginfo32 siginfo32; struct sigframe32 sf32; #endif size_t sfpsize; caddr_t sfp, usfp; int oonstack, rndfsize; int sig; int code; td = curthread; p = td->td_proc; PROC_LOCK_ASSERT(p, MA_OWNED); psp = p->p_sigacts; mtx_assert(&psp->ps_mtx, MA_OWNED); tf = td->td_frame; oonstack = sigonstack(tf->fixreg[1]); /* * Fill siginfo structure. */ ksi->ksi_info.si_signo = ksi->ksi_signo; ksi->ksi_info.si_addr = (void *)((tf->exc == EXC_DSI || tf->exc == EXC_DSE) ? tf->dar : tf->srr0); #ifdef COMPAT_FREEBSD32 if (SV_PROC_FLAG(p, SV_ILP32)) { siginfo_to_siginfo32(&ksi->ksi_info, &siginfo32); sig = siginfo32.si_signo; code = siginfo32.si_code; sfp = (caddr_t)&sf32; sfpsize = sizeof(sf32); rndfsize = roundup(sizeof(sf32), 16); /* * Save user context */ memset(&sf32, 0, sizeof(sf32)); grab_mcontext32(td, &sf32.sf_uc.uc_mcontext, 0); sf32.sf_uc.uc_sigmask = *mask; sf32.sf_uc.uc_stack.ss_sp = (uintptr_t)td->td_sigstk.ss_sp; sf32.sf_uc.uc_stack.ss_size = (uint32_t)td->td_sigstk.ss_size; sf32.sf_uc.uc_stack.ss_flags = (td->td_pflags & TDP_ALTSTACK) ? ((oonstack) ? SS_ONSTACK : 0) : SS_DISABLE; sf32.sf_uc.uc_mcontext.mc_onstack = (oonstack) ? 1 : 0; } else { #endif sig = ksi->ksi_signo; code = ksi->ksi_code; sfp = (caddr_t)&sf; sfpsize = sizeof(sf); #ifdef __powerpc64__ /* * 64-bit PPC defines a 288 byte scratch region * below the stack. */ rndfsize = 288 + roundup(sizeof(sf), 48); #else rndfsize = roundup(sizeof(sf), 16); #endif /* * Save user context */ memset(&sf, 0, sizeof(sf)); grab_mcontext(td, &sf.sf_uc.uc_mcontext, 0); sf.sf_uc.uc_sigmask = *mask; sf.sf_uc.uc_stack = td->td_sigstk; sf.sf_uc.uc_stack.ss_flags = (td->td_pflags & TDP_ALTSTACK) ? ((oonstack) ? SS_ONSTACK : 0) : SS_DISABLE; sf.sf_uc.uc_mcontext.mc_onstack = (oonstack) ? 1 : 0; #ifdef COMPAT_FREEBSD32 } #endif CTR4(KTR_SIG, "sendsig: td=%p (%s) catcher=%p sig=%d", td, p->p_comm, catcher, sig); /* * Allocate and validate space for the signal handler context. */ if ((td->td_pflags & TDP_ALTSTACK) != 0 && !oonstack && SIGISMEMBER(psp->ps_sigonstack, sig)) { usfp = (void *)(((uintptr_t)td->td_sigstk.ss_sp + td->td_sigstk.ss_size - rndfsize) & ~0xFul); } else { usfp = (void *)((tf->fixreg[1] - rndfsize) & ~0xFul); } /* * Save the floating-point state, if necessary, then copy it. */ /* XXX */ /* * Set up the registers to return to sigcode. 
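* srr0 is later pointed at the sigcode trampoline in the shared page; the trampoline dispatches to the handler through lr via blrl and then invokes sigreturn(2) to restore the interrupted context.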
* * r1/sp - sigframe ptr * lr - sig function, dispatched to by blrl in trampoline * r3 - sig number * r4 - SIGINFO ? &siginfo : exception code * r5 - user context * srr0 - trampoline function addr */ tf->lr = (register_t)catcher; tf->fixreg[1] = (register_t)usfp; tf->fixreg[FIRSTARG] = sig; #ifdef COMPAT_FREEBSD32 tf->fixreg[FIRSTARG+2] = (register_t)usfp + ((SV_PROC_FLAG(p, SV_ILP32)) ? offsetof(struct sigframe32, sf_uc) : offsetof(struct sigframe, sf_uc)); #else tf->fixreg[FIRSTARG+2] = (register_t)usfp + offsetof(struct sigframe, sf_uc); #endif if (SIGISMEMBER(psp->ps_siginfo, sig)) { /* * Signal handler installed with SA_SIGINFO. */ #ifdef COMPAT_FREEBSD32 if (SV_PROC_FLAG(p, SV_ILP32)) { sf32.sf_si = siginfo32; tf->fixreg[FIRSTARG+1] = (register_t)usfp + offsetof(struct sigframe32, sf_si); sf32.sf_si = siginfo32; } else { #endif tf->fixreg[FIRSTARG+1] = (register_t)usfp + offsetof(struct sigframe, sf_si); sf.sf_si = ksi->ksi_info; #ifdef COMPAT_FREEBSD32 } #endif } else { /* Old FreeBSD-style arguments. */ tf->fixreg[FIRSTARG+1] = code; tf->fixreg[FIRSTARG+3] = (tf->exc == EXC_DSI) ? tf->dar : tf->srr0; } mtx_unlock(&psp->ps_mtx); PROC_UNLOCK(p); tf->srr0 = (register_t)p->p_sysent->sv_sigcode_base; /* * copy the frame out to userland. */ if (copyout(sfp, usfp, sfpsize) != 0) { /* * Process has trashed its stack. Kill it. */ CTR2(KTR_SIG, "sendsig: sigexit td=%p sfp=%p", td, sfp); PROC_LOCK(p); sigexit(td, SIGILL); } CTR3(KTR_SIG, "sendsig: return td=%p pc=%#x sp=%#x", td, tf->srr0, tf->fixreg[1]); PROC_LOCK(p); mtx_lock(&psp->ps_mtx); } int sys_sigreturn(struct thread *td, struct sigreturn_args *uap) { ucontext_t uc; int error; CTR2(KTR_SIG, "sigreturn: td=%p ucp=%p", td, uap->sigcntxp); if (copyin(uap->sigcntxp, &uc, sizeof(uc)) != 0) { CTR1(KTR_SIG, "sigreturn: efault td=%p", td); return (EFAULT); } error = set_mcontext(td, &uc.uc_mcontext); if (error != 0) return (error); kern_sigprocmask(td, SIG_SETMASK, &uc.uc_sigmask, NULL, 0); CTR3(KTR_SIG, "sigreturn: return td=%p pc=%#x sp=%#x", td, uc.uc_mcontext.mc_srr0, uc.uc_mcontext.mc_gpr[1]); return (EJUSTRETURN); } #ifdef COMPAT_FREEBSD4 int freebsd4_sigreturn(struct thread *td, struct freebsd4_sigreturn_args *uap) { return sys_sigreturn(td, (struct sigreturn_args *)uap); } #endif /* * Construct a PCB from a trapframe. This is called from kdb_trap() where * we want to start a backtrace from the function that caused us to enter * the debugger. We have the context in the trapframe, but base the trace * on the PCB. The PCB doesn't have to be perfect, as long as it contains * enough for a backtrace. */ void makectx(struct trapframe *tf, struct pcb *pcb) { pcb->pcb_lr = tf->srr0; pcb->pcb_sp = tf->fixreg[1]; } /* * get_mcontext/sendsig helper routine that doesn't touch the * proc lock */ static int grab_mcontext(struct thread *td, mcontext_t *mcp, int flags) { struct pcb *pcb; int i; pcb = td->td_pcb; memset(mcp, 0, sizeof(mcontext_t)); mcp->mc_vers = _MC_VERSION; mcp->mc_flags = 0; memcpy(&mcp->mc_frame, td->td_frame, sizeof(struct trapframe)); if (flags & GET_MC_CLEAR_RET) { mcp->mc_gpr[3] = 0; mcp->mc_gpr[4] = 0; } /* * This assumes that floating-point context is *not* lazy, * so if the thread has used FP there would have been a * FP-unavailable exception that would have set things up * correctly. 
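* In other words, PCB_FPU set means the live FP registers may be newer than the copy in the PCB, which is why save_fpu() is called below before the register contents are copied out.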
*/ if (pcb->pcb_flags & PCB_FPREGS) { if (pcb->pcb_flags & PCB_FPU) { KASSERT(td == curthread, ("get_mcontext: fp save not curthread")); critical_enter(); save_fpu(td); critical_exit(); } mcp->mc_flags |= _MC_FP_VALID; memcpy(&mcp->mc_fpscr, &pcb->pcb_fpu.fpscr, sizeof(double)); for (i = 0; i < 32; i++) memcpy(&mcp->mc_fpreg[i], &pcb->pcb_fpu.fpr[i].fpr, sizeof(double)); } if (pcb->pcb_flags & PCB_VSX) { for (i = 0; i < 32; i++) memcpy(&mcp->mc_vsxfpreg[i], &pcb->pcb_fpu.fpr[i].vsr[2], sizeof(double)); } /* * Repeat for Altivec context */ if (pcb->pcb_flags & PCB_VEC) { KASSERT(td == curthread, ("get_mcontext: fp save not curthread")); critical_enter(); save_vec(td); critical_exit(); mcp->mc_flags |= _MC_AV_VALID; mcp->mc_vscr = pcb->pcb_vec.vscr; mcp->mc_vrsave = pcb->pcb_vec.vrsave; memcpy(mcp->mc_avec, pcb->pcb_vec.vr, sizeof(mcp->mc_avec)); } mcp->mc_len = sizeof(*mcp); return (0); } int get_mcontext(struct thread *td, mcontext_t *mcp, int flags) { int error; error = grab_mcontext(td, mcp, flags); if (error == 0) { PROC_LOCK(curthread->td_proc); mcp->mc_onstack = sigonstack(td->td_frame->fixreg[1]); PROC_UNLOCK(curthread->td_proc); } return (error); } int set_mcontext(struct thread *td, mcontext_t *mcp) { struct pcb *pcb; struct trapframe *tf; register_t tls; int i; pcb = td->td_pcb; tf = td->td_frame; if (mcp->mc_vers != _MC_VERSION || mcp->mc_len != sizeof(*mcp)) return (EINVAL); /* * Don't let the user set privileged MSR bits */ if ((mcp->mc_srr1 & psl_userstatic) != (tf->srr1 & psl_userstatic)) { return (EINVAL); } /* Copy trapframe, preserving TLS pointer across context change */ if (SV_PROC_FLAG(td->td_proc, SV_LP64)) tls = tf->fixreg[13]; else tls = tf->fixreg[2]; memcpy(tf, mcp->mc_frame, sizeof(mcp->mc_frame)); if (SV_PROC_FLAG(td->td_proc, SV_LP64)) tf->fixreg[13] = tls; else tf->fixreg[2] = tls; + /* Disable FPU */ + tf->srr1 &= ~PSL_FP; + pcb->pcb_flags &= ~PCB_FPU; + if (mcp->mc_flags & _MC_FP_VALID) { /* enable_fpu() will happen lazily on a fault */ pcb->pcb_flags |= PCB_FPREGS; memcpy(&pcb->pcb_fpu.fpscr, &mcp->mc_fpscr, sizeof(double)); bzero(pcb->pcb_fpu.fpr, sizeof(pcb->pcb_fpu.fpr)); for (i = 0; i < 32; i++) { memcpy(&pcb->pcb_fpu.fpr[i].fpr, &mcp->mc_fpreg[i], sizeof(double)); memcpy(&pcb->pcb_fpu.fpr[i].vsr[2], &mcp->mc_vsxfpreg[i], sizeof(double)); } } if (mcp->mc_flags & _MC_AV_VALID) { if ((pcb->pcb_flags & PCB_VEC) != PCB_VEC) { critical_enter(); enable_vec(td); critical_exit(); } pcb->pcb_vec.vscr = mcp->mc_vscr; pcb->pcb_vec.vrsave = mcp->mc_vrsave; memcpy(pcb->pcb_vec.vr, mcp->mc_avec, sizeof(mcp->mc_avec)); } return (0); } /* * Set up registers on exec. */ void exec_setregs(struct thread *td, struct image_params *imgp, u_long stack) { struct trapframe *tf; register_t argc; tf = trapframe(td); bzero(tf, sizeof *tf); #ifdef __powerpc64__ tf->fixreg[1] = -roundup(-stack + 48, 16); #else tf->fixreg[1] = -roundup(-stack + 8, 16); #endif /* * Set up arguments for _start(): * _start(argc, argv, envp, obj, cleanup, ps_strings); * * Notes: * - obj and cleanup are the auxiliary and termination * vectors. They are fixed up by ld.elf_so. * - ps_strings is a NetBSD extension, and will be * ignored by executables which are strictly * compliant with the SVR4 ABI.
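* * The stack at this point holds, one register_t per slot: argc at * "stack", then argv[0..argc-1] followed by a NULL, then the envp * entries; hence r4 = stack + sizeof(register_t) for argv and * r5 = stack + (2 + argc)*sizeof(register_t) for envp, below.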
*/ /* Collect argc from the user stack */ argc = fuword((void *)stack); tf->fixreg[3] = argc; tf->fixreg[4] = stack + sizeof(register_t); tf->fixreg[5] = stack + (2 + argc)*sizeof(register_t); tf->fixreg[6] = 0; /* auxiliary vector */ tf->fixreg[7] = 0; /* termination vector */ tf->fixreg[8] = (register_t)imgp->ps_strings; /* NetBSD extension */ tf->srr0 = imgp->entry_addr; #ifdef __powerpc64__ tf->fixreg[12] = imgp->entry_addr; #endif tf->srr1 = psl_userset | PSL_FE_DFLT; td->td_pcb->pcb_flags = 0; } #ifdef COMPAT_FREEBSD32 void ppc32_setregs(struct thread *td, struct image_params *imgp, u_long stack) { struct trapframe *tf; uint32_t argc; tf = trapframe(td); bzero(tf, sizeof *tf); tf->fixreg[1] = -roundup(-stack + 8, 16); argc = fuword32((void *)stack); tf->fixreg[3] = argc; tf->fixreg[4] = stack + sizeof(uint32_t); tf->fixreg[5] = stack + (2 + argc)*sizeof(uint32_t); tf->fixreg[6] = 0; /* auxiliary vector */ tf->fixreg[7] = 0; /* termination vector */ tf->fixreg[8] = (register_t)imgp->ps_strings; /* NetBSD extension */ tf->srr0 = imgp->entry_addr; tf->srr1 = psl_userset32 | PSL_FE_DFLT; td->td_pcb->pcb_flags = 0; } #endif int fill_regs(struct thread *td, struct reg *regs) { struct trapframe *tf; tf = td->td_frame; memcpy(regs, tf, sizeof(struct reg)); return (0); } int fill_dbregs(struct thread *td, struct dbreg *dbregs) { /* No debug registers on PowerPC */ return (ENOSYS); } int fill_fpregs(struct thread *td, struct fpreg *fpregs) { struct pcb *pcb; int i; pcb = td->td_pcb; if ((pcb->pcb_flags & PCB_FPREGS) == 0) memset(fpregs, 0, sizeof(struct fpreg)); else { memcpy(&fpregs->fpscr, &pcb->pcb_fpu.fpscr, sizeof(double)); for (i = 0; i < 32; i++) memcpy(&fpregs->fpreg[i], &pcb->pcb_fpu.fpr[i].fpr, sizeof(double)); } return (0); } int set_regs(struct thread *td, struct reg *regs) { struct trapframe *tf; tf = td->td_frame; memcpy(tf, regs, sizeof(struct reg)); return (0); } int set_dbregs(struct thread *td, struct dbreg *dbregs) { /* No debug registers on PowerPC */ return (ENOSYS); } int set_fpregs(struct thread *td, struct fpreg *fpregs) { struct pcb *pcb; int i; pcb = td->td_pcb; pcb->pcb_flags |= PCB_FPREGS; memcpy(&pcb->pcb_fpu.fpscr, &fpregs->fpscr, sizeof(double)); for (i = 0; i < 32; i++) { memcpy(&pcb->pcb_fpu.fpr[i].fpr, &fpregs->fpreg[i], sizeof(double)); } return (0); } #ifdef COMPAT_FREEBSD32 int set_regs32(struct thread *td, struct reg32 *regs) { struct trapframe *tf; int i; tf = td->td_frame; for (i = 0; i < 32; i++) tf->fixreg[i] = regs->fixreg[i]; tf->lr = regs->lr; tf->cr = regs->cr; tf->xer = regs->xer; tf->ctr = regs->ctr; tf->srr0 = regs->pc; return (0); } int fill_regs32(struct thread *td, struct reg32 *regs) { struct trapframe *tf; int i; tf = td->td_frame; for (i = 0; i < 32; i++) regs->fixreg[i] = tf->fixreg[i]; regs->lr = tf->lr; regs->cr = tf->cr; regs->xer = tf->xer; regs->ctr = tf->ctr; regs->pc = tf->srr0; return (0); } static int grab_mcontext32(struct thread *td, mcontext32_t *mcp, int flags) { mcontext_t mcp64; int i, error; error = grab_mcontext(td, &mcp64, flags); if (error != 0) return (error); mcp->mc_vers = mcp64.mc_vers; mcp->mc_flags = mcp64.mc_flags; mcp->mc_onstack = mcp64.mc_onstack; mcp->mc_len = mcp64.mc_len; memcpy(mcp->mc_avec,mcp64.mc_avec,sizeof(mcp64.mc_avec)); memcpy(mcp->mc_av,mcp64.mc_av,sizeof(mcp64.mc_av)); for (i = 0; i < 42; i++) mcp->mc_frame[i] = mcp64.mc_frame[i]; memcpy(mcp->mc_fpreg,mcp64.mc_fpreg,sizeof(mcp64.mc_fpreg)); memcpy(mcp->mc_vsxfpreg,mcp64.mc_vsxfpreg,sizeof(mcp64.mc_vsxfpreg)); return (0); } static int
get_mcontext32(struct thread *td, mcontext32_t *mcp, int flags) { int error; error = grab_mcontext32(td, mcp, flags); if (error == 0) { PROC_LOCK(curthread->td_proc); mcp->mc_onstack = sigonstack(td->td_frame->fixreg[1]); PROC_UNLOCK(curthread->td_proc); } return (error); } static int set_mcontext32(struct thread *td, mcontext32_t *mcp) { mcontext_t mcp64; int i, error; mcp64.mc_vers = mcp->mc_vers; mcp64.mc_flags = mcp->mc_flags; mcp64.mc_onstack = mcp->mc_onstack; mcp64.mc_len = mcp->mc_len; memcpy(mcp64.mc_avec,mcp->mc_avec,sizeof(mcp64.mc_avec)); memcpy(mcp64.mc_av,mcp->mc_av,sizeof(mcp64.mc_av)); for (i = 0; i < 42; i++) mcp64.mc_frame[i] = mcp->mc_frame[i]; mcp64.mc_srr1 |= (td->td_frame->srr1 & 0xFFFFFFFF00000000ULL); memcpy(mcp64.mc_fpreg,mcp->mc_fpreg,sizeof(mcp64.mc_fpreg)); memcpy(mcp64.mc_vsxfpreg,mcp->mc_vsxfpreg,sizeof(mcp64.mc_vsxfpreg)); error = set_mcontext(td, &mcp64); return (error); } #endif #ifdef COMPAT_FREEBSD32 int freebsd32_sigreturn(struct thread *td, struct freebsd32_sigreturn_args *uap) { ucontext32_t uc; int error; CTR2(KTR_SIG, "sigreturn: td=%p ucp=%p", td, uap->sigcntxp); if (copyin(uap->sigcntxp, &uc, sizeof(uc)) != 0) { CTR1(KTR_SIG, "sigreturn: efault td=%p", td); return (EFAULT); } error = set_mcontext32(td, &uc.uc_mcontext); if (error != 0) return (error); kern_sigprocmask(td, SIG_SETMASK, &uc.uc_sigmask, NULL, 0); CTR3(KTR_SIG, "sigreturn: return td=%p pc=%#x sp=%#x", td, uc.uc_mcontext.mc_srr0, uc.uc_mcontext.mc_gpr[1]); return (EJUSTRETURN); } /* * The first two fields of a ucontext_t are the signal mask and the machine * context. The next field is uc_link; we want to avoid destroying the link * when copying out contexts. */ #define UC32_COPY_SIZE offsetof(ucontext32_t, uc_link) int freebsd32_getcontext(struct thread *td, struct freebsd32_getcontext_args *uap) { ucontext32_t uc; int ret; if (uap->ucp == NULL) ret = EINVAL; else { bzero(&uc, sizeof(uc)); get_mcontext32(td, &uc.uc_mcontext, GET_MC_CLEAR_RET); PROC_LOCK(td->td_proc); uc.uc_sigmask = td->td_sigmask; PROC_UNLOCK(td->td_proc); ret = copyout(&uc, uap->ucp, UC32_COPY_SIZE); } return (ret); } int freebsd32_setcontext(struct thread *td, struct freebsd32_setcontext_args *uap) { ucontext32_t uc; int ret; if (uap->ucp == NULL) ret = EINVAL; else { ret = copyin(uap->ucp, &uc, UC32_COPY_SIZE); if (ret == 0) { ret = set_mcontext32(td, &uc.uc_mcontext); if (ret == 0) { kern_sigprocmask(td, SIG_SETMASK, &uc.uc_sigmask, NULL, 0); } } } return (ret == 0 ? EJUSTRETURN : ret); } int freebsd32_swapcontext(struct thread *td, struct freebsd32_swapcontext_args *uap) { ucontext32_t uc; int ret; if (uap->oucp == NULL || uap->ucp == NULL) ret = EINVAL; else { bzero(&uc, sizeof(uc)); get_mcontext32(td, &uc.uc_mcontext, GET_MC_CLEAR_RET); PROC_LOCK(td->td_proc); uc.uc_sigmask = td->td_sigmask; PROC_UNLOCK(td->td_proc); ret = copyout(&uc, uap->oucp, UC32_COPY_SIZE); if (ret == 0) { ret = copyin(uap->ucp, &uc, UC32_COPY_SIZE); if (ret == 0) { ret = set_mcontext32(td, &uc.uc_mcontext); if (ret == 0) { kern_sigprocmask(td, SIG_SETMASK, &uc.uc_sigmask, NULL, 0); } } } } return (ret == 0 ? 
EJUSTRETURN : ret); } #endif void cpu_set_syscall_retval(struct thread *td, int error) { struct proc *p; struct trapframe *tf; int fixup; if (error == EJUSTRETURN) return; p = td->td_proc; tf = td->td_frame; if (tf->fixreg[0] == SYS___syscall && (SV_PROC_FLAG(p, SV_ILP32))) { int code = tf->fixreg[FIRSTARG + 1]; fixup = ( #if defined(COMPAT_FREEBSD6) && defined(SYS_freebsd6_lseek) code != SYS_freebsd6_lseek && #endif code != SYS_lseek) ? 1 : 0; } else fixup = 0; switch (error) { case 0: if (fixup) { /* * 64-bit return, 32-bit syscall. Fixup byte order */ tf->fixreg[FIRSTARG] = 0; tf->fixreg[FIRSTARG + 1] = td->td_retval[0]; } else { tf->fixreg[FIRSTARG] = td->td_retval[0]; tf->fixreg[FIRSTARG + 1] = td->td_retval[1]; } tf->cr &= ~0x10000000; /* Unset summary overflow */ break; case ERESTART: /* * Set user's pc back to redo the system call. */ tf->srr0 -= 4; break; default: tf->fixreg[FIRSTARG] = SV_ABI_ERRNO(p, error); tf->cr |= 0x10000000; /* Set summary overflow */ break; } } /* * Threading functions */ void cpu_thread_exit(struct thread *td) { } void cpu_thread_clean(struct thread *td) { } void cpu_thread_alloc(struct thread *td) { struct pcb *pcb; pcb = (struct pcb *)((td->td_kstack + td->td_kstack_pages * PAGE_SIZE - sizeof(struct pcb)) & ~0x2fUL); td->td_pcb = pcb; td->td_frame = (struct trapframe *)pcb - 1; } void cpu_thread_free(struct thread *td) { } int cpu_set_user_tls(struct thread *td, void *tls_base) { if (SV_PROC_FLAG(td->td_proc, SV_LP64)) td->td_frame->fixreg[13] = (register_t)tls_base + 0x7010; else td->td_frame->fixreg[2] = (register_t)tls_base + 0x7008; return (0); } void cpu_copy_thread(struct thread *td, struct thread *td0) { struct pcb *pcb2; struct trapframe *tf; struct callframe *cf; pcb2 = td->td_pcb; /* Copy the upcall pcb */ bcopy(td0->td_pcb, pcb2, sizeof(*pcb2)); /* Create a stack for the new thread */ tf = td->td_frame; bcopy(td0->td_frame, tf, sizeof(struct trapframe)); tf->fixreg[FIRSTARG] = 0; tf->fixreg[FIRSTARG + 1] = 0; tf->cr &= ~0x10000000; /* Set registers for trampoline to user mode. */ cf = (struct callframe *)tf - 1; memset(cf, 0, sizeof(struct callframe)); cf->cf_func = (register_t)fork_return; cf->cf_arg0 = (register_t)td; cf->cf_arg1 = (register_t)tf; pcb2->pcb_sp = (register_t)cf; #if defined(__powerpc64__) && (!defined(_CALL_ELF) || _CALL_ELF == 1) pcb2->pcb_lr = ((register_t *)fork_trampoline)[0]; pcb2->pcb_toc = ((register_t *)fork_trampoline)[1]; #else pcb2->pcb_lr = (register_t)fork_trampoline; pcb2->pcb_context[0] = pcb2->pcb_lr; #endif pcb2->pcb_cpu.aim.usr_vsid = 0; #ifdef __SPE__ pcb2->pcb_vec.vscr = SPEFSCR_FINVE | SPEFSCR_FDBZE | SPEFSCR_FUNFE | SPEFSCR_FOVFE; #endif /* Setup to release spin count in fork_exit(). 
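* The new thread starts as if inside one spinlock_enter() section * with interrupts disabled; the matching release on the fork_exit() * path drops the count and restores the saved MSR (psl_kernset), * re-enabling interrupts.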
*/ td->td_md.md_spinlock_count = 1; td->td_md.md_saved_msr = psl_kernset; } void cpu_set_upcall(struct thread *td, void (*entry)(void *), void *arg, stack_t *stack) { struct trapframe *tf; uintptr_t sp; tf = td->td_frame; /* align stack and alloc space for frame ptr and saved LR */ #ifdef __powerpc64__ sp = ((uintptr_t)stack->ss_sp + stack->ss_size - 48) & ~0x1f; #else sp = ((uintptr_t)stack->ss_sp + stack->ss_size - 8) & ~0x1f; #endif bzero(tf, sizeof(struct trapframe)); tf->fixreg[1] = (register_t)sp; tf->fixreg[3] = (register_t)arg; if (SV_PROC_FLAG(td->td_proc, SV_ILP32)) { tf->srr0 = (register_t)entry; #ifdef __powerpc64__ tf->srr1 = psl_userset32 | PSL_FE_DFLT; #else tf->srr1 = psl_userset | PSL_FE_DFLT; #endif } else { #ifdef __powerpc64__ if (td->td_proc->p_sysent == &elf64_freebsd_sysvec_v2) { tf->srr0 = (register_t)entry; /* ELFv2 ABI requires that the global entry point be in r12. */ tf->fixreg[12] = (register_t)entry; } else { register_t entry_desc[3]; (void)copyin((void *)entry, entry_desc, sizeof(entry_desc)); tf->srr0 = entry_desc[0]; tf->fixreg[2] = entry_desc[1]; tf->fixreg[11] = entry_desc[2]; } tf->srr1 = psl_userset | PSL_FE_DFLT; #endif } td->td_pcb->pcb_flags = 0; #ifdef __SPE__ td->td_pcb->pcb_vec.vscr = SPEFSCR_FINVE | SPEFSCR_FDBZE | SPEFSCR_FUNFE | SPEFSCR_FOVFE; #endif td->td_retval[0] = (register_t)entry; td->td_retval[1] = 0; } static int emulate_mfspr(int spr, int reg, struct trapframe *frame){ struct thread *td; td = curthread; if (spr == SPR_DSCR) { // If DSCR was never set, get the default DSCR if ((td->td_pcb->pcb_flags & PCB_CDSCR) == 0) td->td_pcb->pcb_dscr = mfspr(SPR_DSCR); frame->fixreg[reg] = td->td_pcb->pcb_dscr; frame->srr0 += 4; return 0; } else return SIGILL; } static int emulate_mtspr(int spr, int reg, struct trapframe *frame){ struct thread *td; td = curthread; if (spr == SPR_DSCR) { td->td_pcb->pcb_flags |= PCB_CDSCR; td->td_pcb->pcb_dscr = frame->fixreg[reg]; frame->srr0 += 4; return 0; } else return SIGILL; } #define XFX 0xFC0007FF int ppc_instr_emulate(struct trapframe *frame, struct pcb *pcb) { uint32_t instr; int reg, sig; int rs, spr; instr = fuword32((void *)frame->srr0); sig = SIGILL; if ((instr & 0xfc1fffff) == 0x7c1f42a6) { /* mfpvr */ reg = (instr & ~0xfc1fffff) >> 21; frame->fixreg[reg] = mfpvr(); frame->srr0 += 4; return (0); } else if ((instr & XFX) == 0x7c0002a6) { /* mfspr */ rs = (instr & 0x3e00000) >> 21; spr = (instr & 0x1ff800) >> 16; return emulate_mfspr(spr, rs, frame); } else if ((instr & XFX) == 0x7c0003a6) { /* mtspr */ rs = (instr & 0x3e00000) >> 21; spr = (instr & 0x1ff800) >> 16; return emulate_mtspr(spr, rs, frame); } else if ((instr & 0xfc000ffe) == 0x7c0004ac) { /* various sync */ powerpc_sync(); /* Do a heavy-weight sync */ frame->srr0 += 4; return (0); } #ifdef FPU_EMU if (!(pcb->pcb_flags & PCB_FPREGS)) { bzero(&pcb->pcb_fpu, sizeof(pcb->pcb_fpu)); pcb->pcb_flags |= PCB_FPREGS; } sig = fpu_emulate(frame, &pcb->pcb_fpu); #endif if (sig == SIGILL) { if (pcb->pcb_lastill != frame->srr0) { /* Allow a second chance, in case of cache sync issues. 
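* The faulting instruction may have been stored recently, with the * icache still holding a stale copy; pmap_sync_icache() below makes * the icache coherent and the instruction is retried once. * pcb_lastill keeps a repeated fault at the same address from * looping forever.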
*/ sig = 0; pmap_sync_icache(PCPU_GET(curpmap), frame->srr0, 4); pcb->pcb_lastill = frame->srr0; } } return (sig); } Index: projects/import-googletest-1.8.1/sys/riscv/include/param.h =================================================================== --- projects/import-googletest-1.8.1/sys/riscv/include/param.h (revision 344270) +++ projects/import-googletest-1.8.1/sys/riscv/include/param.h (revision 344271) @@ -1,108 +1,108 @@ /*- * Copyright (c) 1990 The Regents of the University of California. * All rights reserved. * * This code is derived from software contributed to Berkeley by * William Jolitz. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * from: @(#)param.h 5.8 (Berkeley) 6/28/91 * $FreeBSD$ */ #ifndef _MACHINE_PARAM_H_ #define _MACHINE_PARAM_H_ /* * Machine dependent constants for RISC-V. */ #include #define STACKALIGNBYTES (16 - 1) #define STACKALIGN(p) ((uint64_t)(p) & ~STACKALIGNBYTES) #ifndef MACHINE #define MACHINE "riscv" #endif #ifndef MACHINE_ARCH #define MACHINE_ARCH "riscv64" #endif #if defined(SMP) || defined(KLD_MODULE) #ifndef MAXCPU #define MAXCPU 16 #endif #else #define MAXCPU 1 #endif /* SMP || KLD_MODULE */ #ifndef MAXMEMDOM #define MAXMEMDOM 1 #endif #define ALIGNBYTES _ALIGNBYTES #define ALIGN(p) _ALIGN(p) /* * ALIGNED_POINTER is a boolean macro that checks whether an address * is valid to fetch data elements of type t from on this architecture. * This does not reflect the optimal alignment, just the possibility * (within reasonable limits). */ #define ALIGNED_POINTER(p, t) ((((u_long)(p)) & (sizeof(t) - 1)) == 0) /* * CACHE_LINE_SIZE is the compile-time maximum cache line size for an * architecture. It should be used with appropriate caution. 
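* In particular it is an upper bound intended for alignment and * padding decisions (64 bytes here, via CACHE_LINE_SHIFT below), not * a guarantee about what any given implementation actually uses.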
*/ #define CACHE_LINE_SHIFT 6 #define CACHE_LINE_SIZE (1 << CACHE_LINE_SHIFT) #define PAGE_SHIFT 12 #define PAGE_SIZE (1 << PAGE_SHIFT) /* Page size */ #define PAGE_MASK (PAGE_SIZE - 1) -#define MAXPAGESIZES 1 /* maximum number of supported page sizes */ +#define MAXPAGESIZES 3 /* maximum number of supported page sizes */ #ifndef KSTACK_PAGES #define KSTACK_PAGES 4 /* pages of kernel stack (with pcb) */ #endif #define KSTACK_GUARD_PAGES 1 /* pages of kstack guard; 0 disables */ #define PCPU_PAGES 1 /* * Mach derived conversion macros */ #define round_page(x) (((unsigned long)(x) + PAGE_MASK) & ~PAGE_MASK) #define trunc_page(x) ((unsigned long)(x) & ~PAGE_MASK) #define atop(x) ((unsigned long)(x) >> PAGE_SHIFT) #define ptoa(x) ((unsigned long)(x) << PAGE_SHIFT) #define riscv_btop(x) ((unsigned long)(x) >> PAGE_SHIFT) #define riscv_ptob(x) ((unsigned long)(x) << PAGE_SHIFT) #define pgtok(x) ((unsigned long)(x) * (PAGE_SIZE / 1024)) #endif /* !_MACHINE_PARAM_H_ */ Index: projects/import-googletest-1.8.1/sys/riscv/include/pcb.h =================================================================== --- projects/import-googletest-1.8.1/sys/riscv/include/pcb.h (revision 344270) +++ projects/import-googletest-1.8.1/sys/riscv/include/pcb.h (revision 344271) @@ -1,69 +1,68 @@ /*- * Copyright (c) 2015-2016 Ruslan Bukin * All rights reserved. * * Portions of this software were developed by SRI International and the * University of Cambridge Computer Laboratory under DARPA/AFRL contract * FA8750-10-C-0237 ("CTSRD"), as part of the DARPA CRASH research programme. * * Portions of this software were developed by the University of Cambridge * Computer Laboratory as part of the CTSRD Project, with support from the * UK Higher Education Innovation Fund (HEIF). * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. 
* * $FreeBSD$ */ #ifndef _MACHINE_PCB_H_ #define _MACHINE_PCB_H_ #ifndef LOCORE struct trapframe; struct pcb { uint64_t pcb_ra; /* Return address */ uint64_t pcb_sp; /* Stack pointer */ uint64_t pcb_gp; /* Global pointer */ uint64_t pcb_tp; /* Thread pointer */ uint64_t pcb_t[7]; /* Temporary registers */ uint64_t pcb_s[12]; /* Saved registers */ uint64_t pcb_a[8]; /* Argument registers */ uint64_t pcb_x[32][2]; /* Floating point registers */ uint64_t pcb_fcsr; /* Floating point control reg */ uint64_t pcb_fpflags; /* Floating point flags */ #define PCB_FP_STARTED 0x1 #define PCB_FP_USERMASK 0x1 uint64_t pcb_sepc; /* Supervisor exception pc */ - vm_offset_t pcb_l1addr; /* L1 page tables base address */ vm_offset_t pcb_onfault; /* Copyinout fault handler */ }; #ifdef _KERNEL void makectx(struct trapframe *tf, struct pcb *pcb); int savectx(struct pcb *pcb) __returns_twice; #endif #endif /* !LOCORE */ #endif /* !_MACHINE_PCB_H_ */ Index: projects/import-googletest-1.8.1/sys/riscv/include/pcpu.h =================================================================== --- projects/import-googletest-1.8.1/sys/riscv/include/pcpu.h (revision 344270) +++ projects/import-googletest-1.8.1/sys/riscv/include/pcpu.h (revision 344271) @@ -1,87 +1,88 @@ /*- * Copyright (c) 1999 Luoqi Chen * Copyright (c) 2015-2016 Ruslan Bukin * All rights reserved. * * Portions of this software were developed by SRI International and the * University of Cambridge Computer Laboratory under DARPA/AFRL contract * FA8750-10-C-0237 ("CTSRD"), as part of the DARPA CRASH research programme. * * Portions of this software were developed by the University of Cambridge * Computer Laboratory as part of the CTSRD Project, with support from the * UK Higher Education Innovation Fund (HEIF). * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. 
* * from: FreeBSD: src/sys/i386/include/globaldata.h,v 1.27 2001/04/27 * $FreeBSD$ */ #ifndef _MACHINE_PCPU_H_ #define _MACHINE_PCPU_H_ #include #include #define ALT_STACK_SIZE 128 #define PCPU_MD_FIELDS \ + struct pmap *pc_curpmap; /* Currently active pmap */ \ uint32_t pc_pending_ipis; /* IPIs pending to this CPU */ \ char __pad[61] #ifdef _KERNEL struct pcb; struct pcpu; extern struct pcpu *pcpup; static inline struct pcpu * get_pcpu(void) { struct pcpu *pcpu; __asm __volatile("mv %0, gp" : "=&r"(pcpu)); return (pcpu); } static inline struct thread * get_curthread(void) { struct thread *td; __asm __volatile("ld %0, 0(gp)" : "=&r"(td)); return (td); } #define curthread get_curthread() #define PCPU_GET(member) (get_pcpu()->pc_ ## member) #define PCPU_ADD(member, value) (get_pcpu()->pc_ ## member += (value)) #define PCPU_INC(member) PCPU_ADD(member, 1) #define PCPU_PTR(member) (&get_pcpu()->pc_ ## member) #define PCPU_SET(member,value) (get_pcpu()->pc_ ## member = (value)) #endif /* _KERNEL */ #endif /* !_MACHINE_PCPU_H_ */ Index: projects/import-googletest-1.8.1/sys/riscv/include/pmap.h =================================================================== --- projects/import-googletest-1.8.1/sys/riscv/include/pmap.h (revision 344270) +++ projects/import-googletest-1.8.1/sys/riscv/include/pmap.h (revision 344271) @@ -1,162 +1,173 @@ /*- * Copyright (c) 1991 Regents of the University of California. * All rights reserved. * * This code is derived from software contributed to Berkeley by * the Systems Programming Group of the University of Utah Computer * Science Department and William Jolitz of UUNET Technologies Inc. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * 3. Neither the name of the University nor the names of its contributors * may be used to endorse or promote products derived from this software * without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. 
* * $FreeBSD$ */ #ifndef _MACHINE_PMAP_H_ #define _MACHINE_PMAP_H_ #include #ifndef LOCORE #include +#include #include #include +#include + #ifdef _KERNEL #define vtophys(va) pmap_kextract((vm_offset_t)(va)) #endif #define pmap_page_get_memattr(m) ((m)->md.pv_memattr) #define pmap_page_is_write_mapped(m) (((m)->aflags & PGA_WRITEABLE) != 0) void pmap_page_set_memattr(vm_page_t m, vm_memattr_t ma); /* * Pmap stuff */ struct md_page { TAILQ_HEAD(,pv_entry) pv_list; int pv_gen; vm_memattr_t pv_memattr; }; /* * This structure is used to hold a virtual<->physical address * association and is used mostly by bootstrap code */ struct pv_addr { SLIST_ENTRY(pv_addr) pv_list; vm_offset_t pv_va; vm_paddr_t pv_pa; }; struct pmap { struct mtx pm_mtx; struct pmap_statistics pm_stats; /* pmap statistics */ pd_entry_t *pm_l1; + u_long pm_satp; /* value for SATP register */ + cpuset_t pm_active; /* active on cpus */ TAILQ_HEAD(,pv_chunk) pm_pvchunk; /* list of mappings in pmap */ LIST_ENTRY(pmap) pm_list; /* List of all pmaps */ + struct vm_radix pm_root; }; typedef struct pv_entry { vm_offset_t pv_va; /* virtual address for mapping */ TAILQ_ENTRY(pv_entry) pv_next; } *pv_entry_t; /* * pv_entries are allocated in chunks per-process. This avoids the * need to track per-pmap assignments. */ #define _NPCM 3 #define _NPCPV 168 struct pv_chunk { struct pmap * pc_pmap; TAILQ_ENTRY(pv_chunk) pc_list; uint64_t pc_map[_NPCM]; /* bitmap; 1 = free */ TAILQ_ENTRY(pv_chunk) pc_lru; struct pv_entry pc_pventry[_NPCPV]; }; typedef struct pmap *pmap_t; #ifdef _KERNEL extern struct pmap kernel_pmap_store; #define kernel_pmap (&kernel_pmap_store) #define pmap_kernel() kernel_pmap #define PMAP_ASSERT_LOCKED(pmap) \ mtx_assert(&(pmap)->pm_mtx, MA_OWNED) #define PMAP_LOCK(pmap) mtx_lock(&(pmap)->pm_mtx) #define PMAP_LOCK_ASSERT(pmap, type) \ mtx_assert(&(pmap)->pm_mtx, (type)) #define PMAP_LOCK_DESTROY(pmap) mtx_destroy(&(pmap)->pm_mtx) #define PMAP_LOCK_INIT(pmap) mtx_init(&(pmap)->pm_mtx, "pmap", \ NULL, MTX_DEF | MTX_DUPOK) #define PMAP_OWNED(pmap) mtx_owned(&(pmap)->pm_mtx) #define PMAP_MTX(pmap) (&(pmap)->pm_mtx) #define PMAP_TRYLOCK(pmap) mtx_trylock(&(pmap)->pm_mtx) #define PMAP_UNLOCK(pmap) mtx_unlock(&(pmap)->pm_mtx) #define PHYS_AVAIL_SIZE 10 extern vm_paddr_t phys_avail[]; extern vm_paddr_t dump_avail[]; extern vm_offset_t virtual_avail; extern vm_offset_t virtual_end; /* * Macros to test if a mapping is mappable with an L1 Section mapping * or an L2 Large Page mapping.
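* A range qualifies for an L1 mapping when the virtual and physical * addresses are both 1GiB-aligned (no bits of L1_OFFSET set) and the * size is at least L1_SIZE.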
*/ #define L1_MAPPABLE_P(va, pa, size) \ ((((va) | (pa)) & L1_OFFSET) == 0 && (size) >= L1_SIZE) +struct thread; + +void pmap_activate_boot(pmap_t); +void pmap_activate_sw(struct thread *); void pmap_bootstrap(vm_offset_t, vm_paddr_t, vm_size_t); void pmap_kenter_device(vm_offset_t, vm_size_t, vm_paddr_t); vm_paddr_t pmap_kextract(vm_offset_t va); void pmap_kremove(vm_offset_t); void pmap_kremove_device(vm_offset_t, vm_size_t); +bool pmap_ps_enabled(pmap_t); void *pmap_mapdev(vm_offset_t, vm_size_t); void *pmap_mapbios(vm_paddr_t, vm_size_t); void pmap_unmapdev(vm_offset_t, vm_size_t); void pmap_unmapbios(vm_offset_t, vm_size_t); boolean_t pmap_map_io_transient(vm_page_t *, vm_offset_t *, int, boolean_t); void pmap_unmap_io_transient(vm_page_t *, vm_offset_t *, int, boolean_t); bool pmap_get_tables(pmap_t, vm_offset_t, pd_entry_t **, pd_entry_t **, pt_entry_t **); #define pmap_page_is_mapped(m) (!TAILQ_EMPTY(&(m)->md.pv_list)) int pmap_fault_fixup(pmap_t, vm_offset_t, vm_prot_t); #endif /* _KERNEL */ #endif /* !LOCORE */ #endif /* !_MACHINE_PMAP_H_ */ Index: projects/import-googletest-1.8.1/sys/riscv/include/pte.h =================================================================== --- projects/import-googletest-1.8.1/sys/riscv/include/pte.h (revision 344270) +++ projects/import-googletest-1.8.1/sys/riscv/include/pte.h (revision 344271) @@ -1,91 +1,94 @@ /*- * Copyright (c) 2014 Andrew Turner * Copyright (c) 2015-2018 Ruslan Bukin * All rights reserved. * * Portions of this software were developed by SRI International and the * University of Cambridge Computer Laboratory under DARPA/AFRL contract * FA8750-10-C-0237 ("CTSRD"), as part of the DARPA CRASH research programme. * * Portions of this software were developed by the University of Cambridge * Computer Laboratory as part of the CTSRD Project, with support from the * UK Higher Education Innovation Fund (HEIF). * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. 
* * $FreeBSD$ */ #ifndef _MACHINE_PTE_H_ #define _MACHINE_PTE_H_ #ifndef LOCORE typedef uint64_t pd_entry_t; /* page directory entry */ typedef uint64_t pt_entry_t; /* page table entry */ typedef uint64_t pn_t; /* page number */ #endif /* Level 0 table, 512GiB per entry */ #define L0_SHIFT 39 /* Level 1 table, 1GiB per entry */ #define L1_SHIFT 30 #define L1_SIZE (1 << L1_SHIFT) #define L1_OFFSET (L1_SIZE - 1) /* Level 2 table, 2MiB per entry */ #define L2_SHIFT 21 #define L2_SIZE (1 << L2_SHIFT) #define L2_OFFSET (L2_SIZE - 1) /* Level 3 table, 4KiB per entry */ #define L3_SHIFT 12 #define L3_SIZE (1 << L3_SHIFT) #define L3_OFFSET (L3_SIZE - 1) -#define Ln_ENTRIES (1 << 9) +#define Ln_ENTRIES_SHIFT 9 +#define Ln_ENTRIES (1 << Ln_ENTRIES_SHIFT) #define Ln_ADDR_MASK (Ln_ENTRIES - 1) /* Bits 9:8 are reserved for software */ #define PTE_SW_MANAGED (1 << 9) #define PTE_SW_WIRED (1 << 8) #define PTE_D (1 << 7) /* Dirty */ #define PTE_A (1 << 6) /* Accessed */ #define PTE_G (1 << 5) /* Global */ #define PTE_U (1 << 4) /* User */ #define PTE_X (1 << 3) /* Execute */ #define PTE_W (1 << 2) /* Write */ #define PTE_R (1 << 1) /* Read */ #define PTE_V (1 << 0) /* Valid */ #define PTE_RWX (PTE_R | PTE_W | PTE_X) #define PTE_RX (PTE_R | PTE_X) #define PTE_KERN (PTE_V | PTE_R | PTE_W | PTE_A | PTE_D) +#define PTE_PROMOTE (PTE_V | PTE_RWX | PTE_D | PTE_A | PTE_G | PTE_U | \ + PTE_SW_MANAGED | PTE_SW_WIRED) #define PTE_PPN0_S 10 #define PTE_PPN1_S 19 #define PTE_PPN2_S 28 #define PTE_PPN3_S 37 #define PTE_SIZE 8 #endif /* !_MACHINE_PTE_H_ */ /* End of pte.h */ Index: projects/import-googletest-1.8.1/sys/riscv/include/vmparam.h =================================================================== --- projects/import-googletest-1.8.1/sys/riscv/include/vmparam.h (revision 344270) +++ projects/import-googletest-1.8.1/sys/riscv/include/vmparam.h (revision 344271) @@ -1,240 +1,240 @@ /*- * Copyright (c) 1990 The Regents of the University of California. * All rights reserved. * Copyright (c) 1994 John S. Dyson * All rights reserved. * * This code is derived from software contributed to Berkeley by * William Jolitz. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * 3. Neither the name of the University nor the names of its contributors * may be used to endorse or promote products derived from this software * without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. 
IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * from: @(#)vmparam.h 5.9 (Berkeley) 5/12/91 * from: FreeBSD: src/sys/i386/include/vmparam.h,v 1.33 2000/03/30 * $FreeBSD$ */ #ifndef _MACHINE_VMPARAM_H_ #define _MACHINE_VMPARAM_H_ /* * Virtual memory related constants, all in bytes */ #ifndef MAXTSIZ #define MAXTSIZ (1*1024*1024*1024) /* max text size */ #endif #ifndef DFLDSIZ #define DFLDSIZ (128*1024*1024) /* initial data size limit */ #endif #ifndef MAXDSIZ #define MAXDSIZ (1*1024*1024*1024) /* max data size */ #endif #ifndef DFLSSIZ #define DFLSSIZ (128*1024*1024) /* initial stack size limit */ #endif #ifndef MAXSSIZ #define MAXSSIZ (1*1024*1024*1024) /* max stack size */ #endif #ifndef SGROWSIZ #define SGROWSIZ (128*1024) /* amount to grow stack */ #endif /* * The physical address space is sparsely populated. */ #define VM_PHYSSEG_SPARSE /* * The number of PHYSSEG entries. */ #define VM_PHYSSEG_MAX 64 /* * Create two free page pools: VM_FREEPOOL_DEFAULT is the default pool * from which physical pages are allocated and VM_FREEPOOL_DIRECT is * the pool from which physical pages for small UMA objects are * allocated. */ #define VM_NFREEPOOL 2 #define VM_FREEPOOL_DEFAULT 0 #define VM_FREEPOOL_DIRECT 1 /* * Create one free page list: VM_FREELIST_DEFAULT is for all physical * pages. */ #define VM_NFREELIST 1 #define VM_FREELIST_DEFAULT 0 /* * An allocation size of 16MB is supported in order to optimize the * use of the direct map by UMA. Specifically, a cache line contains * at most four TTEs, collectively mapping 16MB of physical memory. * By reducing the number of distinct 16MB "pages" that are used by UMA, * the physical memory allocator reduces the likelihood of both 4MB * page TLB misses and cache misses caused by 4MB page TLB misses. */ #define VM_NFREEORDER 12 /* - * Disable superpage reservations. + * Enable superpage reservations: 1 level. */ #ifndef VM_NRESERVLEVEL -#define VM_NRESERVLEVEL 0 +#define VM_NRESERVLEVEL 1 #endif /* * Level 0 reservations consist of 512 pages. */ #ifndef VM_LEVEL_0_ORDER #define VM_LEVEL_0_ORDER 9 #endif /** * Address space layout. * * RISC-V implements multiple paging modes with different virtual address space * sizes: SV32, SV39 and SV48. SV39 permits a virtual address space size of * 512GB and uses a three-level page table. Since this is large enough for most * purposes, we currently use SV39 for both userland and the kernel, avoiding * the extra translation step required by SV48. * * The address space is split into two regions at each end of the 64-bit address * space: * * 0x0000000000000000 - 0x0000003fffffffff 256GB user map * 0x0000004000000000 - 0xffffffbfffffffff unmappable * 0xffffffc000000000 - 0xffffffc7ffffffff 32GB kernel map * 0xffffffc800000000 - 0xffffffcfffffffff 32GB unused * 0xffffffd000000000 - 0xffffffefffffffff 128GB direct map * 0xfffffff000000000 - 0xffffffffffffffff 64GB unused * * The kernel is loaded at the beginning of the kernel map. 
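* * The large unmappable hole is inherent to SV39: virtual addresses * are 39 bits sign-extended to 64 bits, so the only valid addresses * are the bottom 256GB (upper bits all zero) and the top 256GB * (upper bits all one).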
* * We define some interesting address constants: * * VM_MIN_ADDRESS and VM_MAX_ADDRESS define the start and end of the entire * 64 bit address space, mostly just for convenience. * * VM_MIN_KERNEL_ADDRESS and VM_MAX_KERNEL_ADDRESS define the start and end of * mappable kernel virtual address space. * * VM_MIN_USER_ADDRESS and VM_MAX_USER_ADDRESS define the start and end of the * user address space. */ #define VM_MIN_ADDRESS (0x0000000000000000UL) #define VM_MAX_ADDRESS (0xffffffffffffffffUL) #define VM_MIN_KERNEL_ADDRESS (0xffffffc000000000UL) #define VM_MAX_KERNEL_ADDRESS (0xffffffc800000000UL) #define DMAP_MIN_ADDRESS (0xffffffd000000000UL) #define DMAP_MAX_ADDRESS (0xfffffff000000000UL) #define DMAP_MIN_PHYSADDR (dmap_phys_base) #define DMAP_MAX_PHYSADDR (dmap_phys_max) /* True if pa is in the dmap range */ #define PHYS_IN_DMAP(pa) ((pa) >= DMAP_MIN_PHYSADDR && \ (pa) < DMAP_MAX_PHYSADDR) /* True if va is in the dmap range */ #define VIRT_IN_DMAP(va) ((va) >= DMAP_MIN_ADDRESS && \ (va) < (dmap_max_addr)) #define PMAP_HAS_DMAP 1 #define PHYS_TO_DMAP(pa) \ ({ \ KASSERT(PHYS_IN_DMAP(pa), \ ("%s: PA out of range, PA: 0x%lx", __func__, \ (vm_paddr_t)(pa))); \ ((pa) - dmap_phys_base) + DMAP_MIN_ADDRESS; \ }) #define DMAP_TO_PHYS(va) \ ({ \ KASSERT(VIRT_IN_DMAP(va), \ ("%s: VA out of range, VA: 0x%lx", __func__, \ (vm_offset_t)(va))); \ ((va) - DMAP_MIN_ADDRESS) + dmap_phys_base; \ }) #define VM_MIN_USER_ADDRESS (0x0000000000000000UL) #define VM_MAX_USER_ADDRESS (0x0000004000000000UL) #define VM_MINUSER_ADDRESS (VM_MIN_USER_ADDRESS) #define VM_MAXUSER_ADDRESS (VM_MAX_USER_ADDRESS) #define KERNBASE (VM_MIN_KERNEL_ADDRESS) #define SHAREDPAGE (VM_MAXUSER_ADDRESS - PAGE_SIZE) #define USRSTACK SHAREDPAGE #define KERNENTRY (0) /* * How many physical pages per kmem arena virtual page. */ #ifndef VM_KMEM_SIZE_SCALE #define VM_KMEM_SIZE_SCALE (3) #endif /* * Optional floor (in bytes) on the size of the kmem arena. */ #ifndef VM_KMEM_SIZE_MIN #define VM_KMEM_SIZE_MIN (16 * 1024 * 1024) #endif /* * Optional ceiling (in bytes) on the size of the kmem arena: 60% of the * kernel map. */ #ifndef VM_KMEM_SIZE_MAX #define VM_KMEM_SIZE_MAX ((VM_MAX_KERNEL_ADDRESS - \ VM_MIN_KERNEL_ADDRESS + 1) * 3 / 5) #endif /* * Initial pagein size of beginning of executable file. */ #ifndef VM_INITIAL_PAGEIN #define VM_INITIAL_PAGEIN 16 #endif #define UMA_MD_SMALL_ALLOC #ifndef LOCORE extern vm_paddr_t dmap_phys_base; extern vm_paddr_t dmap_phys_max; extern vm_offset_t dmap_max_addr; extern u_int tsb_kernel_ldd_phys; extern vm_offset_t vm_max_kernel_address; extern vm_offset_t init_pt_va; #endif #define ZERO_REGION_SIZE (64 * 1024) /* 64KB */ #define DEVMAP_MAX_VADDR VM_MAX_KERNEL_ADDRESS #endif /* !_MACHINE_VMPARAM_H_ */ Index: projects/import-googletest-1.8.1/sys/riscv/riscv/elf_machdep.c =================================================================== --- projects/import-googletest-1.8.1/sys/riscv/riscv/elf_machdep.c (revision 344270) +++ projects/import-googletest-1.8.1/sys/riscv/riscv/elf_machdep.c (revision 344271) @@ -1,522 +1,522 @@ /*- * Copyright 1996-1998 John D. Polstra. * Copyright (c) 2015 Ruslan Bukin * Copyright (c) 2016 Yukishige Shibata * All rights reserved. * * Portions of this software were developed by SRI International and the * University of Cambridge Computer Laboratory under DARPA/AFRL contract * FA8750-10-C-0237 ("CTSRD"), as part of the DARPA CRASH research programme. 
* * Portions of this software were developed by the University of Cambridge * Computer Laboratory as part of the CTSRD Project, with support from the * UK Higher Education Innovation Fund (HEIF). * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ #include __FBSDID("$FreeBSD$"); #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include struct sysentvec elf64_freebsd_sysvec = { .sv_size = SYS_MAXSYSCALL, .sv_table = sysent, .sv_errsize = 0, .sv_errtbl = NULL, .sv_transtrap = NULL, .sv_fixup = __elfN(freebsd_fixup), .sv_sendsig = sendsig, .sv_sigcode = sigcode, .sv_szsigcode = &szsigcode, .sv_name = "FreeBSD ELF64", .sv_coredump = __elfN(coredump), .sv_imgact_try = NULL, .sv_minsigstksz = MINSIGSTKSZ, .sv_pagesize = PAGE_SIZE, .sv_minuser = VM_MIN_ADDRESS, .sv_maxuser = VM_MAXUSER_ADDRESS, .sv_usrstack = USRSTACK, .sv_psstrings = PS_STRINGS, .sv_stackprot = VM_PROT_READ | VM_PROT_WRITE, .sv_copyout_strings = exec_copyout_strings, .sv_setregs = exec_setregs, .sv_fixlimit = NULL, .sv_maxssiz = NULL, - .sv_flags = SV_ABI_FREEBSD | SV_LP64 | SV_SHP, + .sv_flags = SV_ABI_FREEBSD | SV_LP64 | SV_SHP | SV_ASLR, .sv_set_syscall_retval = cpu_set_syscall_retval, .sv_fetch_syscall_args = cpu_fetch_syscall_args, .sv_syscallnames = syscallnames, .sv_shared_page_base = SHAREDPAGE, .sv_shared_page_len = PAGE_SIZE, .sv_schedtail = NULL, .sv_thread_detach = NULL, .sv_trap = NULL, }; INIT_SYSENTVEC(elf64_sysvec, &elf64_freebsd_sysvec); static Elf64_Brandinfo freebsd_brand_info = { .brand = ELFOSABI_FREEBSD, .machine = EM_RISCV, .compat_3_brand = "FreeBSD", .emul_path = NULL, .interp_path = "/libexec/ld-elf.so.1", .sysvec = &elf64_freebsd_sysvec, .interp_newpath = NULL, .brand_note = &elf64_freebsd_brandnote, .flags = BI_CAN_EXEC_DYN | BI_BRAND_NOTE }; SYSINIT(elf64, SI_SUB_EXEC, SI_ORDER_FIRST, (sysinit_cfunc_t) elf64_insert_brand_entry, &freebsd_brand_info); static int debug_kld; SYSCTL_INT(_kern, OID_AUTO, debug_kld, CTLFLAG_RW, &debug_kld, 0, "Activate debug prints in elf_reloc_internal()"); struct type2str_ent { int type; const char *str; }; void elf64_dump_thread(struct thread *td, void *dst, size_t *off) { } /* * The following four functions are used to manipulate bits in a 32-bit integer value.
* FIXME: implemented for ease of understanding rather than for speed. */ static uint32_t gen_bitmask(int msb, int lsb) { uint32_t mask; if (msb == sizeof(mask) * 8 - 1) mask = ~0; else mask = (1U << (msb + 1)) - 1; if (lsb > 0) mask &= ~((1U << lsb) - 1); return (mask); } static uint32_t extract_bits(uint32_t x, int msb, int lsb) { uint32_t mask; mask = gen_bitmask(msb, lsb); x &= mask; x >>= lsb; return (x); } static uint32_t insert_bits(uint32_t d, uint32_t s, int msb, int lsb) { uint32_t mask; mask = gen_bitmask(msb, lsb); d &= ~mask; s <<= lsb; s &= mask; return (d | s); } static uint32_t insert_imm(uint32_t insn, uint32_t imm, int imm_msb, int imm_lsb, int insn_lsb) { int insn_msb; uint32_t v; v = extract_bits(imm, imm_msb, imm_lsb); insn_msb = (imm_msb - imm_lsb) + insn_lsb; return (insert_bits(insn, v, insn_msb, insn_lsb)); } /* * The RISC-V ISA is designed so that all immediate values are * sign-extended. * An immediate value is sometimes generated at runtime by adding a * 12-bit signed integer and a 20-bit signed integer. This requires the * 20-bit immediate value to be adjusted if the MSB of the 12-bit immediate * value is asserted (the sign-extended value is treated as a negative * value). * * For example, 0x123800 can be calculated by adding the upper 20 bits of * 0x124000 and a sign-extended 12-bit immediate whose bit pattern is * 0x800 as follows: * 0x123800 * = 0x123000 + 0x800 * = (0x123000 + 0x1000) + (-0x1000 + 0x800) * = (0x123000 + 0x1000) + (0xff...ff800) * = 0x124000 + sign-extension(0x800) */ static uint32_t calc_hi20_imm(uint32_t value) { /* * There is an arithmetic hack that can remove the conditional * statement, but it is implemented in a straightforward way here. */ if ((value & 0x800) != 0) value += 0x1000; return (value & ~0xfff); } static const struct type2str_ent t2s[] = { { R_RISCV_NONE, "R_RISCV_NONE" }, { R_RISCV_64, "R_RISCV_64" }, { R_RISCV_JUMP_SLOT, "R_RISCV_JUMP_SLOT" }, { R_RISCV_RELATIVE, "R_RISCV_RELATIVE" }, { R_RISCV_JAL, "R_RISCV_JAL" }, { R_RISCV_CALL, "R_RISCV_CALL" }, { R_RISCV_PCREL_HI20, "R_RISCV_PCREL_HI20" }, { R_RISCV_PCREL_LO12_I, "R_RISCV_PCREL_LO12_I" }, { R_RISCV_PCREL_LO12_S, "R_RISCV_PCREL_LO12_S" }, { R_RISCV_HI20, "R_RISCV_HI20" }, { R_RISCV_LO12_I, "R_RISCV_LO12_I" }, { R_RISCV_LO12_S, "R_RISCV_LO12_S" }, }; static const char * reloctype_to_str(int type) { int i; for (i = 0; i < sizeof(t2s) / sizeof(t2s[0]); ++i) { if (type == t2s[i].type) return t2s[i].str; } return "*unknown*"; } bool elf_is_ifunc_reloc(Elf_Size r_info __unused) { return (false); } /* * Currently, kernel loadable modules for RISC-V are compiled with the * -fPIC option (see also the additional CFLAGS definition for RISCV in * sys/conf/kmod.mk). * Only R_RISCV_64, R_RISCV_JUMP_SLOT and R_RISCV_RELATIVE are emitted in * the module. Other relocations will be processed when kernel loadable * modules are built as non-PIC. * * FIXME: only RISCV64 is supported.
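 *
 * (Aside — a hedged, standalone sanity check of the HI20/LO12 arithmetic
 * described above; it mirrors calc_hi20_imm() but is not part of this
 * file.)
 */
#if 0	/* illustration only */
#include <assert.h>
#include <stdint.h>

int
main(void)
{
	uint32_t v = 0x123800;
	/* Same computation as calc_hi20_imm() above. */
	uint32_t hi = ((v & 0x800) != 0 ? v + 0x1000 : v) & ~0xfffU;
	/* Sign-extend the low 12 bits (arithmetic right shift assumed). */
	int32_t lo = (int32_t)(v << 20) >> 20;

	/* 0x124000 + (-0x800) == 0x123800 */
	assert(hi + (uint32_t)lo == v);
	return (0);
}
#endif
/*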
*/ static int elf_reloc_internal(linker_file_t lf, Elf_Addr relocbase, const void *data, int type, int local, elf_lookup_fn lookup) { Elf_Size rtype, symidx; const Elf_Rela *rela; Elf_Addr val, addr; Elf64_Addr *where; Elf_Addr addend; uint32_t before32_1; uint32_t before32; uint64_t before64; uint32_t* insn32p; uint32_t imm20; int error; switch (type) { case ELF_RELOC_RELA: rela = (const Elf_Rela *)data; where = (Elf_Addr *)(relocbase + rela->r_offset); insn32p = (uint32_t*)where; addend = rela->r_addend; rtype = ELF_R_TYPE(rela->r_info); symidx = ELF_R_SYM(rela->r_info); break; default: printf("%s:%d unknown reloc type %d\n", __FUNCTION__, __LINE__, type); return -1; } switch (rtype) { case R_RISCV_NONE: break; case R_RISCV_64: case R_RISCV_JUMP_SLOT: error = lookup(lf, symidx, 1, &addr); if (error != 0) return -1; val = addr; before64 = *where; if (*where != val) *where = val; if (debug_kld) printf("%p %c %-24s %016lx -> %016lx\n", where, (local? 'l': 'g'), reloctype_to_str(rtype), before64, *where); break; case R_RISCV_RELATIVE: before64 = *where; *where = elf_relocaddr(lf, relocbase + addend); if (debug_kld) printf("%p %c %-24s %016lx -> %016lx\n", where, (local? 'l': 'g'), reloctype_to_str(rtype), before64, *where); break; case R_RISCV_JAL: error = lookup(lf, symidx, 1, &addr); if (error != 0) return -1; val = addr - (Elf_Addr)where; if ((val <= -(1UL << 20) || (1UL << 20) <= val)) { printf("kldload: huge offset against R_RISCV_JAL\n"); return -1; } before32 = *insn32p; *insn32p = insert_imm(*insn32p, val, 20, 20, 31); *insn32p = insert_imm(*insn32p, val, 10, 1, 21); *insn32p = insert_imm(*insn32p, val, 11, 11, 20); *insn32p = insert_imm(*insn32p, val, 19, 12, 12); if (debug_kld) printf("%p %c %-24s %08x -> %08x\n", where, (local? 'l': 'g'), reloctype_to_str(rtype), before32, *insn32p); break; case R_RISCV_CALL: /* * R_RISCV_CALL relocates 8-byte region that consists * of the sequence of AUIPC and JALR. */ /* calculate and check the pc relative offset. */ error = lookup(lf, symidx, 1, &addr); if (error != 0) return -1; val = addr - (Elf_Addr)where; if ((val <= -(1UL << 32) || (1UL << 32) <= val)) { printf("kldload: huge offset against R_RISCV_CALL\n"); return -1; } /* Relocate AUIPC. */ before32 = insn32p[0]; imm20 = calc_hi20_imm(val); insn32p[0] = insert_imm(insn32p[0], imm20, 31, 12, 12); /* Relocate JALR. */ before32_1 = insn32p[1]; insn32p[1] = insert_imm(insn32p[1], val, 11, 0, 20); if (debug_kld) printf("%p %c %-24s %08x %08x -> %08x %08x\n", where, (local? 'l': 'g'), reloctype_to_str(rtype), before32, insn32p[0], before32_1, insn32p[1]); break; case R_RISCV_PCREL_HI20: val = addr - (Elf_Addr)where; insn32p = (uint32_t*)where; before32 = *insn32p; imm20 = calc_hi20_imm(val); *insn32p = insert_imm(*insn32p, imm20, 31, 12, 12); if (debug_kld) printf("%p %c %-24s %08x -> %08x\n", where, (local? 'l': 'g'), reloctype_to_str(rtype), before32, *insn32p); break; case R_RISCV_PCREL_LO12_I: val = addr - (Elf_Addr)where; insn32p = (uint32_t*)where; before32 = *insn32p; *insn32p = insert_imm(*insn32p, addr, 11, 0, 20); if (debug_kld) printf("%p %c %-24s %08x -> %08x\n", where, (local? 'l': 'g'), reloctype_to_str(rtype), before32, *insn32p); break; case R_RISCV_PCREL_LO12_S: val = addr - (Elf_Addr)where; insn32p = (uint32_t*)where; before32 = *insn32p; *insn32p = insert_imm(*insn32p, addr, 11, 5, 25); *insn32p = insert_imm(*insn32p, addr, 4, 0, 7); if (debug_kld) printf("%p %c %-24s %08x -> %08x\n", where, (local? 
'l': 'g'), reloctype_to_str(rtype), before32, *insn32p); break; case R_RISCV_HI20: error = lookup(lf, symidx, 1, &addr); if (error != 0) return -1; insn32p = (uint32_t*)where; before32 = *insn32p; imm20 = calc_hi20_imm(val); *insn32p = insert_imm(*insn32p, imm20, 31, 12, 12); if (debug_kld) printf("%p %c %-24s %08x -> %08x\n", where, (local? 'l': 'g'), reloctype_to_str(rtype), before32, *insn32p); break; case R_RISCV_LO12_I: error = lookup(lf, symidx, 1, &addr); if (error != 0) return -1; val = addr; insn32p = (uint32_t*)where; before32 = *insn32p; *insn32p = insert_imm(*insn32p, addr, 11, 0, 20); if (debug_kld) printf("%p %c %-24s %08x -> %08x\n", where, (local? 'l': 'g'), reloctype_to_str(rtype), before32, *insn32p); break; case R_RISCV_LO12_S: error = lookup(lf, symidx, 1, &addr); if (error != 0) return -1; val = addr; insn32p = (uint32_t*)where; before32 = *insn32p; *insn32p = insert_imm(*insn32p, addr, 11, 5, 25); *insn32p = insert_imm(*insn32p, addr, 4, 0, 7); if (debug_kld) printf("%p %c %-24s %08x -> %08x\n", where, (local? 'l': 'g'), reloctype_to_str(rtype), before32, *insn32p); break; default: printf("kldload: unexpected relocation type %ld\n", rtype); return (-1); } return (0); } int elf_reloc(linker_file_t lf, Elf_Addr relocbase, const void *data, int type, elf_lookup_fn lookup) { return (elf_reloc_internal(lf, relocbase, data, type, 0, lookup)); } int elf_reloc_local(linker_file_t lf, Elf_Addr relocbase, const void *data, int type, elf_lookup_fn lookup) { return (elf_reloc_internal(lf, relocbase, data, type, 1, lookup)); } int elf_cpu_load_file(linker_file_t lf __unused) { return (0); } int elf_cpu_unload_file(linker_file_t lf __unused) { return (0); } Index: projects/import-googletest-1.8.1/sys/riscv/riscv/genassym.c =================================================================== --- projects/import-googletest-1.8.1/sys/riscv/riscv/genassym.c (revision 344270) +++ projects/import-googletest-1.8.1/sys/riscv/riscv/genassym.c (revision 344271) @@ -1,101 +1,100 @@ /*- * Copyright (c) 2015-2016 Ruslan Bukin * All rights reserved. * * Portions of this software were developed by SRI International and the * University of Cambridge Computer Laboratory under DARPA/AFRL contract * FA8750-10-C-0237 ("CTSRD"), as part of the DARPA CRASH research programme. * * Portions of this software were developed by the University of Cambridge * Computer Laboratory as part of the CTSRD Project, with support from the * UK Higher Education Innovation Fund (HEIF). * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. 
IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ #include __FBSDID("$FreeBSD$"); #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include ASSYM(KERNBASE, KERNBASE); ASSYM(VM_MAXUSER_ADDRESS, VM_MAXUSER_ADDRESS); ASSYM(VM_MAX_KERNEL_ADDRESS, VM_MAX_KERNEL_ADDRESS); ASSYM(TDF_ASTPENDING, TDF_ASTPENDING); ASSYM(TDF_NEEDRESCHED, TDF_NEEDRESCHED); ASSYM(PCB_ONFAULT, offsetof(struct pcb, pcb_onfault)); -ASSYM(PCB_L1ADDR, offsetof(struct pcb, pcb_l1addr)); ASSYM(PCB_SIZE, sizeof(struct pcb)); ASSYM(PCB_RA, offsetof(struct pcb, pcb_ra)); ASSYM(PCB_SP, offsetof(struct pcb, pcb_sp)); ASSYM(PCB_GP, offsetof(struct pcb, pcb_gp)); ASSYM(PCB_TP, offsetof(struct pcb, pcb_tp)); ASSYM(PCB_T, offsetof(struct pcb, pcb_t)); ASSYM(PCB_S, offsetof(struct pcb, pcb_s)); ASSYM(PCB_A, offsetof(struct pcb, pcb_a)); ASSYM(PCB_X, offsetof(struct pcb, pcb_x)); ASSYM(PCB_FCSR, offsetof(struct pcb, pcb_fcsr)); ASSYM(SF_UC, offsetof(struct sigframe, sf_uc)); ASSYM(PC_CURPCB, offsetof(struct pcpu, pc_curpcb)); ASSYM(PC_CURTHREAD, offsetof(struct pcpu, pc_curthread)); ASSYM(TD_PCB, offsetof(struct thread, td_pcb)); ASSYM(TD_FLAGS, offsetof(struct thread, td_flags)); ASSYM(TD_PROC, offsetof(struct thread, td_proc)); ASSYM(TD_FRAME, offsetof(struct thread, td_frame)); ASSYM(TD_MD, offsetof(struct thread, td_md)); ASSYM(TD_LOCK, offsetof(struct thread, td_lock)); ASSYM(TF_SIZE, sizeof(struct trapframe)); ASSYM(TF_RA, offsetof(struct trapframe, tf_ra)); ASSYM(TF_SP, offsetof(struct trapframe, tf_sp)); ASSYM(TF_GP, offsetof(struct trapframe, tf_gp)); ASSYM(TF_TP, offsetof(struct trapframe, tf_tp)); ASSYM(TF_T, offsetof(struct trapframe, tf_t)); ASSYM(TF_S, offsetof(struct trapframe, tf_s)); ASSYM(TF_A, offsetof(struct trapframe, tf_a)); ASSYM(TF_SEPC, offsetof(struct trapframe, tf_sepc)); ASSYM(TF_STVAL, offsetof(struct trapframe, tf_stval)); ASSYM(TF_SCAUSE, offsetof(struct trapframe, tf_scause)); ASSYM(TF_SSTATUS, offsetof(struct trapframe, tf_sstatus)); Index: projects/import-googletest-1.8.1/sys/riscv/riscv/machdep.c =================================================================== --- projects/import-googletest-1.8.1/sys/riscv/riscv/machdep.c (revision 344270) +++ projects/import-googletest-1.8.1/sys/riscv/riscv/machdep.c (revision 344271) @@ -1,895 +1,891 @@ /*- * Copyright (c) 2014 Andrew Turner * Copyright (c) 2015-2017 Ruslan Bukin * All rights reserved. * * Portions of this software were developed by SRI International and the * University of Cambridge Computer Laboratory under DARPA/AFRL contract * FA8750-10-C-0237 ("CTSRD"), as part of the DARPA CRASH research programme. * * Portions of this software were developed by the University of Cambridge * Computer Laboratory as part of the CTSRD Project, with support from the * UK Higher Education Innovation Fund (HEIF). * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. 
Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ #include "opt_platform.h" #include __FBSDID("$FreeBSD$"); #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #ifdef FPE #include #endif #ifdef FDT #include #include #endif struct pcpu __pcpu[MAXCPU]; static struct trapframe proc0_tf; vm_paddr_t phys_avail[PHYS_AVAIL_SIZE + 2]; vm_paddr_t dump_avail[PHYS_AVAIL_SIZE + 2]; int early_boot = 1; int cold = 1; long realmem = 0; long Maxmem = 0; #define DTB_SIZE_MAX (1024 * 1024) #define PHYSMAP_SIZE (2 * (VM_PHYSSEG_MAX - 1)) vm_paddr_t physmap[PHYSMAP_SIZE]; u_int physmap_idx; struct kva_md_info kmi; int64_t dcache_line_size; /* The minimum D cache line size */ int64_t icache_line_size; /* The minimum I cache line size */ int64_t idcache_line_size; /* The minimum cache line size */ extern int *end; extern int *initstack_end; struct pcpu *pcpup; uintptr_t mcall_trap(uintptr_t mcause, uintptr_t* regs); uintptr_t mcall_trap(uintptr_t mcause, uintptr_t* regs) { return (0); } static void cpu_startup(void *dummy) { identify_cpu(); vm_ksubmap_init(&kmi); bufinit(); vm_pager_bufferinit(); } SYSINIT(cpu, SI_SUB_CPU, SI_ORDER_FIRST, cpu_startup, NULL); int cpu_idle_wakeup(int cpu) { return (0); } int fill_regs(struct thread *td, struct reg *regs) { struct trapframe *frame; frame = td->td_frame; regs->sepc = frame->tf_sepc; regs->sstatus = frame->tf_sstatus; regs->ra = frame->tf_ra; regs->sp = frame->tf_sp; regs->gp = frame->tf_gp; regs->tp = frame->tf_tp; memcpy(regs->t, frame->tf_t, sizeof(regs->t)); memcpy(regs->s, frame->tf_s, sizeof(regs->s)); memcpy(regs->a, frame->tf_a, sizeof(regs->a)); return (0); } int set_regs(struct thread *td, struct reg *regs) { struct trapframe *frame; frame = td->td_frame; frame->tf_sepc = regs->sepc; frame->tf_ra = regs->ra; frame->tf_sp = regs->sp; frame->tf_gp = regs->gp; frame->tf_tp = regs->tp; memcpy(frame->tf_t, regs->t, sizeof(frame->tf_t)); memcpy(frame->tf_s, regs->s, sizeof(frame->tf_s)); memcpy(frame->tf_a, regs->a, sizeof(frame->tf_a)); return (0); } int fill_fpregs(struct thread *td, struct fpreg *regs) { #ifdef FPE struct pcb *pcb; pcb = td->td_pcb; if ((pcb->pcb_fpflags & PCB_FP_STARTED) != 0) { /* * If we 
have just been running FPE instructions we will * need to save the state to memcpy it below. */ if (td == curthread) fpe_state_save(td); memcpy(regs->fp_x, pcb->pcb_x, sizeof(regs->fp_x)); regs->fp_fcsr = pcb->pcb_fcsr; } else #endif memset(regs, 0, sizeof(*regs)); return (0); } int set_fpregs(struct thread *td, struct fpreg *regs) { #ifdef FPE struct trapframe *frame; struct pcb *pcb; frame = td->td_frame; pcb = td->td_pcb; memcpy(pcb->pcb_x, regs->fp_x, sizeof(regs->fp_x)); pcb->pcb_fcsr = regs->fp_fcsr; pcb->pcb_fpflags |= PCB_FP_STARTED; frame->tf_sstatus &= ~SSTATUS_FS_MASK; frame->tf_sstatus |= SSTATUS_FS_CLEAN; #endif return (0); } int fill_dbregs(struct thread *td, struct dbreg *regs) { panic("fill_dbregs"); } int set_dbregs(struct thread *td, struct dbreg *regs) { panic("set_dbregs"); } int ptrace_set_pc(struct thread *td, u_long addr) { td->td_frame->tf_sepc = addr; return (0); } int ptrace_single_step(struct thread *td) { /* TODO; */ return (EOPNOTSUPP); } int ptrace_clear_single_step(struct thread *td) { /* TODO; */ return (EOPNOTSUPP); } void exec_setregs(struct thread *td, struct image_params *imgp, u_long stack) { struct trapframe *tf; struct pcb *pcb; tf = td->td_frame; pcb = td->td_pcb; memset(tf, 0, sizeof(struct trapframe)); tf->tf_a[0] = stack; tf->tf_sp = STACKALIGN(stack); tf->tf_ra = imgp->entry_addr; tf->tf_sepc = imgp->entry_addr; pcb->pcb_fpflags &= ~PCB_FP_STARTED; } /* Sanity check these are the same size, they will be memcpy'd to and fro */ CTASSERT(sizeof(((struct trapframe *)0)->tf_a) == sizeof((struct gpregs *)0)->gp_a); CTASSERT(sizeof(((struct trapframe *)0)->tf_s) == sizeof((struct gpregs *)0)->gp_s); CTASSERT(sizeof(((struct trapframe *)0)->tf_t) == sizeof((struct gpregs *)0)->gp_t); CTASSERT(sizeof(((struct trapframe *)0)->tf_a) == sizeof((struct reg *)0)->a); CTASSERT(sizeof(((struct trapframe *)0)->tf_s) == sizeof((struct reg *)0)->s); CTASSERT(sizeof(((struct trapframe *)0)->tf_t) == sizeof((struct reg *)0)->t); /* Support for FDT configurations only. */ CTASSERT(FDT); int get_mcontext(struct thread *td, mcontext_t *mcp, int clear_ret) { struct trapframe *tf = td->td_frame; memcpy(mcp->mc_gpregs.gp_t, tf->tf_t, sizeof(mcp->mc_gpregs.gp_t)); memcpy(mcp->mc_gpregs.gp_s, tf->tf_s, sizeof(mcp->mc_gpregs.gp_s)); memcpy(mcp->mc_gpregs.gp_a, tf->tf_a, sizeof(mcp->mc_gpregs.gp_a)); if (clear_ret & GET_MC_CLEAR_RET) { mcp->mc_gpregs.gp_a[0] = 0; mcp->mc_gpregs.gp_t[0] = 0; /* clear syscall error */ } mcp->mc_gpregs.gp_ra = tf->tf_ra; mcp->mc_gpregs.gp_sp = tf->tf_sp; mcp->mc_gpregs.gp_gp = tf->tf_gp; mcp->mc_gpregs.gp_tp = tf->tf_tp; mcp->mc_gpregs.gp_sepc = tf->tf_sepc; mcp->mc_gpregs.gp_sstatus = tf->tf_sstatus; return (0); } int set_mcontext(struct thread *td, mcontext_t *mcp) { struct trapframe *tf; tf = td->td_frame; memcpy(tf->tf_t, mcp->mc_gpregs.gp_t, sizeof(tf->tf_t)); memcpy(tf->tf_s, mcp->mc_gpregs.gp_s, sizeof(tf->tf_s)); memcpy(tf->tf_a, mcp->mc_gpregs.gp_a, sizeof(tf->tf_a)); tf->tf_ra = mcp->mc_gpregs.gp_ra; tf->tf_sp = mcp->mc_gpregs.gp_sp; tf->tf_gp = mcp->mc_gpregs.gp_gp; tf->tf_sepc = mcp->mc_gpregs.gp_sepc; tf->tf_sstatus = mcp->mc_gpregs.gp_sstatus; return (0); } static void get_fpcontext(struct thread *td, mcontext_t *mcp) { #ifdef FPE struct pcb *curpcb; critical_enter(); curpcb = curthread->td_pcb; KASSERT(td->td_pcb == curpcb, ("Invalid fpe pcb")); if ((curpcb->pcb_fpflags & PCB_FP_STARTED) != 0) { /* * If we have just been running FPE instructions we will * need to save the state to memcpy it below. 
*/ fpe_state_save(td); KASSERT((curpcb->pcb_fpflags & ~PCB_FP_USERMASK) == 0, ("Non-userspace FPE flags set in get_fpcontext")); memcpy(mcp->mc_fpregs.fp_x, curpcb->pcb_x, sizeof(mcp->mc_fpregs)); mcp->mc_fpregs.fp_fcsr = curpcb->pcb_fcsr; mcp->mc_fpregs.fp_flags = curpcb->pcb_fpflags; mcp->mc_flags |= _MC_FP_VALID; } critical_exit(); #endif } static void set_fpcontext(struct thread *td, mcontext_t *mcp) { #ifdef FPE struct pcb *curpcb; critical_enter(); if ((mcp->mc_flags & _MC_FP_VALID) != 0) { curpcb = curthread->td_pcb; /* FPE usage is enabled, override registers. */ memcpy(curpcb->pcb_x, mcp->mc_fpregs.fp_x, sizeof(mcp->mc_fpregs)); curpcb->pcb_fcsr = mcp->mc_fpregs.fp_fcsr; curpcb->pcb_fpflags = mcp->mc_fpregs.fp_flags & PCB_FP_USERMASK; } critical_exit(); #endif } void cpu_idle(int busy) { spinlock_enter(); if (!busy) cpu_idleclock(); if (!sched_runnable()) __asm __volatile( "fence \n" "wfi \n"); if (!busy) cpu_activeclock(); spinlock_exit(); } void cpu_halt(void) { intr_disable(); for (;;) __asm __volatile("wfi"); } /* * Flush the D-cache for non-DMA I/O so that the I-cache can * be made coherent later. */ void cpu_flush_dcache(void *ptr, size_t len) { /* TBD */ } /* Get current clock frequency for the given CPU ID. */ int cpu_est_clockrate(int cpu_id, uint64_t *rate) { panic("cpu_est_clockrate"); } void cpu_pcpu_init(struct pcpu *pcpu, int cpuid, size_t size) { } void spinlock_enter(void) { struct thread *td; td = curthread; if (td->td_md.md_spinlock_count == 0) { td->td_md.md_spinlock_count = 1; td->td_md.md_saved_sstatus_ie = intr_disable(); } else td->td_md.md_spinlock_count++; critical_enter(); } void spinlock_exit(void) { struct thread *td; register_t sstatus_ie; td = curthread; critical_exit(); sstatus_ie = td->td_md.md_saved_sstatus_ie; td->td_md.md_spinlock_count--; if (td->td_md.md_spinlock_count == 0) intr_restore(sstatus_ie); } #ifndef _SYS_SYSPROTO_H_ struct sigreturn_args { ucontext_t *ucp; }; #endif int sys_sigreturn(struct thread *td, struct sigreturn_args *uap) { uint64_t sstatus; ucontext_t uc; int error; if (uap == NULL) return (EFAULT); if (copyin(uap->sigcntxp, &uc, sizeof(uc))) return (EFAULT); /* * Make sure the processor mode has not been tampered with and * interrupts have not been disabled. * Supervisor interrupts in user mode are always enabled. */ sstatus = uc.uc_mcontext.mc_gpregs.gp_sstatus; if ((sstatus & SSTATUS_SPP) != 0) return (EINVAL); error = set_mcontext(td, &uc.uc_mcontext); if (error != 0) return (error); set_fpcontext(td, &uc.uc_mcontext); /* Restore signal mask. */ kern_sigprocmask(td, SIG_SETMASK, &uc.uc_sigmask, NULL, 0); return (EJUSTRETURN); } /* * Construct a PCB from a trapframe. This is called from kdb_trap() where * we want to start a backtrace from the function that caused us to enter * the debugger. We have the context in the trapframe, but base the trace * on the PCB. The PCB doesn't have to be perfect, as long as it contains * enough for a backtrace. 
*/ void makectx(struct trapframe *tf, struct pcb *pcb) { memcpy(pcb->pcb_t, tf->tf_t, sizeof(tf->tf_t)); memcpy(pcb->pcb_s, tf->tf_s, sizeof(tf->tf_s)); memcpy(pcb->pcb_a, tf->tf_a, sizeof(tf->tf_a)); pcb->pcb_ra = tf->tf_ra; pcb->pcb_sp = tf->tf_sp; pcb->pcb_gp = tf->tf_gp; pcb->pcb_tp = tf->tf_tp; pcb->pcb_sepc = tf->tf_sepc; } void sendsig(sig_t catcher, ksiginfo_t *ksi, sigset_t *mask) { struct sigframe *fp, frame; struct sysentvec *sysent; struct trapframe *tf; struct sigacts *psp; struct thread *td; struct proc *p; int onstack; int sig; td = curthread; p = td->td_proc; PROC_LOCK_ASSERT(p, MA_OWNED); sig = ksi->ksi_signo; psp = p->p_sigacts; mtx_assert(&psp->ps_mtx, MA_OWNED); tf = td->td_frame; onstack = sigonstack(tf->tf_sp); CTR4(KTR_SIG, "sendsig: td=%p (%s) catcher=%p sig=%d", td, p->p_comm, catcher, sig); /* Allocate and validate space for the signal handler context. */ if ((td->td_pflags & TDP_ALTSTACK) != 0 && !onstack && SIGISMEMBER(psp->ps_sigonstack, sig)) { fp = (struct sigframe *)((uintptr_t)td->td_sigstk.ss_sp + td->td_sigstk.ss_size); } else { fp = (struct sigframe *)td->td_frame->tf_sp; } /* Make room, keeping the stack aligned */ fp--; fp = (struct sigframe *)STACKALIGN(fp); /* Fill in the frame to copy out */ bzero(&frame, sizeof(frame)); get_mcontext(td, &frame.sf_uc.uc_mcontext, 0); get_fpcontext(td, &frame.sf_uc.uc_mcontext); frame.sf_si = ksi->ksi_info; frame.sf_uc.uc_sigmask = *mask; frame.sf_uc.uc_stack = td->td_sigstk; frame.sf_uc.uc_stack.ss_flags = (td->td_pflags & TDP_ALTSTACK) != 0 ? (onstack ? SS_ONSTACK : 0) : SS_DISABLE; mtx_unlock(&psp->ps_mtx); PROC_UNLOCK(td->td_proc); /* Copy the sigframe out to the user's stack. */ if (copyout(&frame, fp, sizeof(*fp)) != 0) { /* Process has trashed its stack. Kill it. */ CTR2(KTR_SIG, "sendsig: sigexit td=%p fp=%p", td, fp); PROC_LOCK(p); sigexit(td, SIGILL); } tf->tf_a[0] = sig; tf->tf_a[1] = (register_t)&fp->sf_si; tf->tf_a[2] = (register_t)&fp->sf_uc; tf->tf_sepc = (register_t)catcher; tf->tf_sp = (register_t)fp; sysent = p->p_sysent; if (sysent->sv_sigcode_base != 0) tf->tf_ra = (register_t)sysent->sv_sigcode_base; else tf->tf_ra = (register_t)(sysent->sv_psstrings - *(sysent->sv_szsigcode)); CTR3(KTR_SIG, "sendsig: return td=%p pc=%#x sp=%#x", td, tf->tf_sepc, tf->tf_sp); PROC_LOCK(p); mtx_lock(&psp->ps_mtx); } static void init_proc0(vm_offset_t kstack) { pcpup = &__pcpu[0]; proc_linkup0(&proc0, &thread0); thread0.td_kstack = kstack; thread0.td_pcb = (struct pcb *)(thread0.td_kstack) - 1; thread0.td_pcb->pcb_fpflags = 0; thread0.td_frame = &proc0_tf; pcpup->pc_curpcb = thread0.td_pcb; } static int add_physmap_entry(uint64_t base, uint64_t length, vm_paddr_t *physmap, u_int *physmap_idxp) { u_int i, insert_idx, _physmap_idx; _physmap_idx = *physmap_idxp; if (length == 0) return (1); /* * Find insertion point while checking for overlap. Start off by * assuming the new entry will be added to the end. */ insert_idx = _physmap_idx; for (i = 0; i <= _physmap_idx; i += 2) { if (base < physmap[i + 1]) { if (base + length <= physmap[i]) { insert_idx = i; break; } if (boothowto & RB_VERBOSE) printf( "Overlapping memory regions, ignoring second region\n"); return (1); } } /* See if we can prepend to the next entry. */ if (insert_idx <= _physmap_idx && base + length == physmap[insert_idx]) { physmap[insert_idx] = base; return (1); } /* See if we can append to the previous entry. 
*/ if (insert_idx > 0 && base == physmap[insert_idx - 1]) { physmap[insert_idx - 1] += length; return (1); } _physmap_idx += 2; *physmap_idxp = _physmap_idx; if (_physmap_idx == PHYSMAP_SIZE) { printf( "Too many segments in the physical address map, giving up\n"); return (0); } /* * Move the last 'N' entries down to make room for the new * entry if needed. */ for (i = _physmap_idx; i > insert_idx; i -= 2) { physmap[i] = physmap[i - 2]; physmap[i + 1] = physmap[i - 1]; } /* Insert the new entry. */ physmap[insert_idx] = base; physmap[insert_idx + 1] = base + length; printf("physmap[%d] = 0x%016lx\n", insert_idx, base); printf("physmap[%d] = 0x%016lx\n", insert_idx + 1, base + length); return (1); } #ifdef FDT static void try_load_dtb(caddr_t kmdp, vm_offset_t dtbp) { #if defined(FDT_DTB_STATIC) dtbp = (vm_offset_t)&fdt_static_dtb; #endif if (dtbp == (vm_offset_t)NULL) { printf("ERROR loading DTB\n"); return; } if (OF_install(OFW_FDT, 0) == FALSE) panic("Cannot install FDT"); if (OF_init((void *)dtbp) != 0) panic("OF_init failed with the found device tree"); } #endif static void cache_setup(void) { /* TODO */ } /* * Fake up a boot descriptor table. * RISCVTODO: This needs to be done via loader (when it's available). */ vm_offset_t fake_preload_metadata(struct riscv_bootparams *rvbp __unused) { static uint32_t fake_preload[35]; #ifdef DDB vm_offset_t zstart = 0, zend = 0; #endif vm_offset_t lastaddr; int i; i = 0; fake_preload[i++] = MODINFO_NAME; fake_preload[i++] = strlen("kernel") + 1; strcpy((char*)&fake_preload[i++], "kernel"); i += 1; fake_preload[i++] = MODINFO_TYPE; fake_preload[i++] = strlen("elf64 kernel") + 1; strcpy((char*)&fake_preload[i++], "elf64 kernel"); i += 3; fake_preload[i++] = MODINFO_ADDR; fake_preload[i++] = sizeof(vm_offset_t); *(vm_offset_t *)&fake_preload[i++] = (vm_offset_t)(KERNBASE + KERNENTRY); i += 1; fake_preload[i++] = MODINFO_SIZE; fake_preload[i++] = sizeof(vm_offset_t); fake_preload[i++] = (vm_offset_t)&end - (vm_offset_t)(KERNBASE + KERNENTRY); i += 1; #ifdef DDB #if 0 /* RISCVTODO */ if (*(uint32_t *)KERNVIRTADDR == MAGIC_TRAMP_NUMBER) { fake_preload[i++] = MODINFO_METADATA|MODINFOMD_SSYM; fake_preload[i++] = sizeof(vm_offset_t); fake_preload[i++] = *(uint32_t *)(KERNVIRTADDR + 4); fake_preload[i++] = MODINFO_METADATA|MODINFOMD_ESYM; fake_preload[i++] = sizeof(vm_offset_t); fake_preload[i++] = *(uint32_t *)(KERNVIRTADDR + 8); lastaddr = *(uint32_t *)(KERNVIRTADDR + 8); zend = lastaddr; zstart = *(uint32_t *)(KERNVIRTADDR + 4); db_fetch_ksymtab(zstart, zend); } else #endif #endif lastaddr = (vm_offset_t)&end; fake_preload[i++] = 0; fake_preload[i] = 0; preload_metadata = (void *)fake_preload; return (lastaddr); } void initriscv(struct riscv_bootparams *rvbp) { struct mem_region mem_regions[FDT_MEM_REGIONS]; vm_offset_t rstart, rend; vm_offset_t s, e; int mem_regions_sz; vm_offset_t lastaddr; vm_size_t kernlen; caddr_t kmdp; int i; /* Set the module data location */ lastaddr = fake_preload_metadata(rvbp); /* Find the kernel address */ kmdp = preload_search_by_type("elf kernel"); if (kmdp == NULL) kmdp = preload_search_by_type("elf64 kernel"); boothowto = RB_VERBOSE | RB_SINGLE; boothowto = RB_VERBOSE; kern_envp = NULL; #ifdef FDT try_load_dtb(kmdp, rvbp->dtbp_virt); #endif /* Load the physical memory ranges */ physmap_idx = 0; #ifdef FDT /* Grab physical memory regions information from device tree. 
*/ if (fdt_get_mem_regions(mem_regions, &mem_regions_sz, NULL) != 0) panic("Cannot get physical memory regions"); s = rvbp->dtbp_phys; e = s + DTB_SIZE_MAX; for (i = 0; i < mem_regions_sz; i++) { rstart = mem_regions[i].mr_start; rend = (mem_regions[i].mr_start + mem_regions[i].mr_size); if ((rstart < s) && (rend > e)) { /* Exclude DTB region. */ add_physmap_entry(rstart, (s - rstart), physmap, &physmap_idx); add_physmap_entry(e, (rend - e), physmap, &physmap_idx); } else { add_physmap_entry(mem_regions[i].mr_start, mem_regions[i].mr_size, physmap, &physmap_idx); } } #endif /* Set the pcpu data, this is needed by pmap_bootstrap */ pcpup = &__pcpu[0]; pcpu_init(pcpup, 0, sizeof(struct pcpu)); /* Set the pcpu pointer */ __asm __volatile("mv gp, %0" :: "r"(pcpup)); PCPU_SET(curthread, &thread0); /* Do basic tuning, hz etc */ init_param1(); cache_setup(); /* Bootstrap enough of pmap to enter the kernel proper */ kernlen = (lastaddr - KERNBASE); pmap_bootstrap(rvbp->kern_l1pt, mem_regions[0].mr_start, kernlen); cninit(); init_proc0(rvbp->kern_stack); - /* set page table base register for thread0 */ - thread0.td_pcb->pcb_l1addr = \ - (rvbp->kern_l1pt - KERNBASE + rvbp->kern_phys); - msgbufinit(msgbufp, msgbufsize); mutex_init(); init_param2(physmem); kdb_init(); early_boot = 0; } #undef bzero void bzero(void *buf, size_t len) { uint8_t *p; p = buf; while(len-- > 0) *p++ = 0; } Index: projects/import-googletest-1.8.1/sys/riscv/riscv/mp_machdep.c =================================================================== --- projects/import-googletest-1.8.1/sys/riscv/riscv/mp_machdep.c (revision 344270) +++ projects/import-googletest-1.8.1/sys/riscv/riscv/mp_machdep.c (revision 344271) @@ -1,456 +1,460 @@ /*- * Copyright (c) 2015 The FreeBSD Foundation * Copyright (c) 2016 Ruslan Bukin * All rights reserved. * * Portions of this software were developed by Andrew Turner under * sponsorship from the FreeBSD Foundation. * * Portions of this software were developed by SRI International and the * University of Cambridge Computer Laboratory under DARPA/AFRL contract * FA8750-10-C-0237 ("CTSRD"), as part of the DARPA CRASH research programme. * * Portions of this software were developed by the University of Cambridge * Computer Laboratory as part of the CTSRD Project, with support from the * UK Higher Education Innovation Fund (HEIF). * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. 
IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ #include "opt_kstack_pages.h" #include "opt_platform.h" #include __FBSDID("$FreeBSD$"); #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include +#include #include #include #include #ifdef FDT #include #include #endif boolean_t ofw_cpu_reg(phandle_t node, u_int, cell_t *); extern struct pcpu __pcpu[]; uint32_t __riscv_boot_ap[MAXCPU]; static enum { CPUS_UNKNOWN, #ifdef FDT CPUS_FDT, #endif } cpu_enum_method; static device_identify_t riscv64_cpu_identify; static device_probe_t riscv64_cpu_probe; static device_attach_t riscv64_cpu_attach; static int ipi_handler(void *); struct mtx ap_boot_mtx; struct pcb stoppcbs[MAXCPU]; #ifdef INVARIANTS static uint32_t cpu_reg[MAXCPU][2]; #endif static device_t cpu_list[MAXCPU]; void mpentry(unsigned long cpuid); void init_secondary(uint64_t); uint8_t secondary_stacks[MAXCPU - 1][PAGE_SIZE * KSTACK_PAGES] __aligned(16); /* Set to 1 once we're ready to let the APs out of the pen. */ volatile int aps_ready = 0; /* Temporary variables for init_secondary() */ void *dpcpu[MAXCPU - 1]; static device_method_t riscv64_cpu_methods[] = { /* Device interface */ DEVMETHOD(device_identify, riscv64_cpu_identify), DEVMETHOD(device_probe, riscv64_cpu_probe), DEVMETHOD(device_attach, riscv64_cpu_attach), DEVMETHOD_END }; static devclass_t riscv64_cpu_devclass; static driver_t riscv64_cpu_driver = { "riscv64_cpu", riscv64_cpu_methods, 0 }; DRIVER_MODULE(riscv64_cpu, cpu, riscv64_cpu_driver, riscv64_cpu_devclass, 0, 0); static void riscv64_cpu_identify(driver_t *driver, device_t parent) { if (device_find_child(parent, "riscv64_cpu", -1) != NULL) return; if (BUS_ADD_CHILD(parent, 0, "riscv64_cpu", -1) == NULL) device_printf(parent, "add child failed\n"); } static int riscv64_cpu_probe(device_t dev) { u_int cpuid; cpuid = device_get_unit(dev); if (cpuid >= MAXCPU || cpuid > mp_maxid) return (EINVAL); device_quiet(dev); return (0); } static int riscv64_cpu_attach(device_t dev) { const uint32_t *reg; size_t reg_size; u_int cpuid; int i; cpuid = device_get_unit(dev); if (cpuid >= MAXCPU || cpuid > mp_maxid) return (EINVAL); KASSERT(cpu_list[cpuid] == NULL, ("Already have cpu %u", cpuid)); reg = cpu_get_cpuid(dev, ®_size); if (reg == NULL) return (EINVAL); if (bootverbose) { device_printf(dev, "register <"); for (i = 0; i < reg_size; i++) printf("%s%x", (i == 0) ? 
"" : " ", reg[i]); printf(">\n"); } /* Set the device to start it later */ cpu_list[cpuid] = dev; return (0); } static void release_aps(void *dummy __unused) { u_long mask; int cpu, i; if (mp_ncpus == 1) return; /* Setup the IPI handler */ riscv_setup_ipihandler(ipi_handler); atomic_store_rel_int(&aps_ready, 1); /* Wake up the other CPUs */ mask = 0; for (i = 1; i < mp_ncpus; i++) mask |= (1 << i); sbi_send_ipi(&mask); printf("Release APs\n"); for (i = 0; i < 2000; i++) { if (smp_started) { for (cpu = 0; cpu <= mp_maxid; cpu++) { if (CPU_ABSENT(cpu)) continue; } return; } DELAY(1000); } printf("APs not started\n"); } SYSINIT(start_aps, SI_SUB_SMP, SI_ORDER_FIRST, release_aps, NULL); void init_secondary(uint64_t cpu) { struct pcpu *pcpup; /* Setup the pcpu pointer */ pcpup = &__pcpu[cpu]; __asm __volatile("mv gp, %0" :: "r"(pcpup)); /* Workaround: make sure wfi doesn't halt the hart */ csr_set(sie, SIE_SSIE); csr_set(sip, SIE_SSIE); /* Spin until the BSP releases the APs */ while (!aps_ready) __asm __volatile("wfi"); /* Initialize curthread */ KASSERT(PCPU_GET(idlethread) != NULL, ("no idle thread")); pcpup->pc_curthread = pcpup->pc_idlethread; pcpup->pc_curpcb = pcpup->pc_idlethread->td_pcb; /* * Identify current CPU. This is necessary to setup * affinity registers and to provide support for * runtime chip identification. */ identify_cpu(); /* Enable software interrupts */ riscv_unmask_ipi(); /* Start per-CPU event timers. */ cpu_initclocks_ap(); /* Enable external (PLIC) interrupts */ csr_set(sie, SIE_SEIE); + + /* Activate process 0's pmap. */ + pmap_activate_boot(vmspace_pmap(proc0.p_vmspace)); mtx_lock_spin(&ap_boot_mtx); atomic_add_rel_32(&smp_cpus, 1); if (smp_cpus == mp_ncpus) { /* enable IPI's, tlb shootdown, freezes etc */ atomic_store_rel_int(&smp_started, 1); } mtx_unlock_spin(&ap_boot_mtx); /* Enter the scheduler */ sched_throw(NULL); panic("scheduler returned us to init_secondary"); /* NOTREACHED */ } static int ipi_handler(void *arg) { u_int ipi_bitmap; u_int cpu, ipi; int bit; sbi_clear_ipi(); cpu = PCPU_GET(cpuid); mb(); ipi_bitmap = atomic_readandclear_int(PCPU_PTR(pending_ipis)); if (ipi_bitmap == 0) return (FILTER_HANDLED); while ((bit = ffs(ipi_bitmap))) { bit = (bit - 1); ipi = (1 << bit); ipi_bitmap &= ~ipi; mb(); switch (ipi) { case IPI_AST: CTR0(KTR_SMP, "IPI_AST"); break; case IPI_PREEMPT: CTR1(KTR_SMP, "%s: IPI_PREEMPT", __func__); sched_preempt(curthread); break; case IPI_RENDEZVOUS: CTR0(KTR_SMP, "IPI_RENDEZVOUS"); smp_rendezvous_action(); break; case IPI_STOP: case IPI_STOP_HARD: CTR0(KTR_SMP, (ipi == IPI_STOP) ? "IPI_STOP" : "IPI_STOP_HARD"); savectx(&stoppcbs[cpu]); /* Indicate we are stopped */ CPU_SET_ATOMIC(cpu, &stopped_cpus); /* Wait for restart */ while (!CPU_ISSET(cpu, &started_cpus)) cpu_spinwait(); CPU_CLR_ATOMIC(cpu, &started_cpus); CPU_CLR_ATOMIC(cpu, &stopped_cpus); CTR0(KTR_SMP, "IPI_STOP (restart)"); /* * The kernel debugger might have set a breakpoint, * so flush the instruction cache. 
*/ fence_i(); break; case IPI_HARDCLOCK: CTR1(KTR_SMP, "%s: IPI_HARDCLOCK", __func__); hardclockintr(); break; default: panic("Unknown IPI %#0x on cpu %d", ipi, curcpu); } } return (FILTER_HANDLED); } struct cpu_group * cpu_topo(void) { return (smp_topo_none()); } /* Determine if we are running on an MP machine */ int cpu_mp_probe(void) { return (mp_ncpus > 1); } #ifdef FDT static boolean_t cpu_init_fdt(u_int id, phandle_t node, u_int addr_size, pcell_t *reg) { uint64_t target_cpu; struct pcpu *pcpup; /* Check that we are able to start this cpu */ if (id > mp_maxid) return (0); KASSERT(id < MAXCPU, ("Too many CPUs")); KASSERT(addr_size == 1 || addr_size == 2, ("Invalid register size")); #ifdef INVARIANTS cpu_reg[id][0] = reg[0]; if (addr_size == 2) cpu_reg[id][1] = reg[1]; #endif target_cpu = reg[0]; if (addr_size == 2) { target_cpu <<= 32; target_cpu |= reg[1]; } pcpup = &__pcpu[id]; /* We are already running on cpu 0 */ if (id == 0) { return (1); } pcpu_init(pcpup, id, sizeof(struct pcpu)); dpcpu[id - 1] = (void *)kmem_malloc(DPCPU_SIZE, M_WAITOK | M_ZERO); dpcpu_init(dpcpu[id - 1], id); printf("Starting CPU %u (%lx)\n", id, target_cpu); __riscv_boot_ap[id] = 1; CPU_SET(id, &all_cpus); return (1); } #endif /* Initialize and fire up non-boot processors */ void cpu_mp_start(void) { mtx_init(&ap_boot_mtx, "ap boot", NULL, MTX_SPIN); CPU_SET(0, &all_cpus); switch(cpu_enum_method) { #ifdef FDT case CPUS_FDT: ofw_cpu_early_foreach(cpu_init_fdt, true); break; #endif case CPUS_UNKNOWN: break; } } /* Introduce the rest of the cores to the world */ void cpu_mp_announce(void) { } void cpu_mp_setmaxid(void) { #ifdef FDT int cores; cores = ofw_cpu_early_foreach(NULL, false); if (cores > 0) { cores = MIN(cores, MAXCPU); if (bootverbose) printf("Found %d CPUs in the device tree\n", cores); mp_ncpus = cores; mp_maxid = cores - 1; cpu_enum_method = CPUS_FDT; return; } #endif if (bootverbose) printf("No CPU data, limiting to 1 core\n"); mp_ncpus = 1; mp_maxid = 0; } Index: projects/import-googletest-1.8.1/sys/riscv/riscv/pmap.c =================================================================== --- projects/import-googletest-1.8.1/sys/riscv/riscv/pmap.c (revision 344270) +++ projects/import-googletest-1.8.1/sys/riscv/riscv/pmap.c (revision 344271) @@ -1,3327 +1,4440 @@ /*- * SPDX-License-Identifier: BSD-4-Clause * * Copyright (c) 1991 Regents of the University of California. * All rights reserved. * Copyright (c) 1994 John S. Dyson * All rights reserved. * Copyright (c) 1994 David Greenman * All rights reserved. * Copyright (c) 2003 Peter Wemm * All rights reserved. * Copyright (c) 2005-2010 Alan L. Cox * All rights reserved. * Copyright (c) 2014 Andrew Turner * All rights reserved. * Copyright (c) 2014 The FreeBSD Foundation * All rights reserved. * Copyright (c) 2015-2018 Ruslan Bukin * All rights reserved. * * This code is derived from software contributed to Berkeley by * the Systems Programming Group of the University of Utah Computer * Science Department and William Jolitz of UUNET Technologies Inc. * * Portions of this software were developed by Andrew Turner under * sponsorship from The FreeBSD Foundation. * * Portions of this software were developed by SRI International and the * University of Cambridge Computer Laboratory under DARPA/AFRL contract * FA8750-10-C-0237 ("CTSRD"), as part of the DARPA CRASH research programme. * * Portions of this software were developed by the University of Cambridge * Computer Laboratory as part of the CTSRD Project, with support from the * UK Higher Education Innovation Fund (HEIF).
* * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * 3. All advertising materials mentioning features or use of this software * must display the following acknowledgement: * This product includes software developed by the University of * California, Berkeley and its contributors. * 4. Neither the name of the University nor the names of its contributors * may be used to endorse or promote products derived from this software * without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * from: @(#)pmap.c 7.7 (Berkeley) 5/12/91 */ /*- * Copyright (c) 2003 Networks Associates Technology, Inc. * All rights reserved. * * This software was developed for the FreeBSD Project by Jake Burkholder, * Safeport Network Services, and Network Associates Laboratories, the * Security Research Division of Network Associates, Inc. under * DARPA/SPAWAR contract N66001-01-C-8035 ("CBOSS"), as part of the DARPA * CHATS research program. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ #include __FBSDID("$FreeBSD$"); /* * Manages physical address maps. 
* * Since the information managed by this module is * also stored by the logical address mapping module, * this module may throw away valid virtual-to-physical * mappings at almost any time. However, invalidations * of virtual-to-physical mappings must be done as * requested. * * In order to cope with hardware architectures which * make virtual-to-physical map invalidates expensive, * this module may delay invalidate or reduced protection * operations until such time as they are actually * necessary. This module is given full information as * to which processors are currently using which maps, * and to when physical maps must be made correct. */ #include -#include #include +#include +#include +#include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include +#include #include #include #include #include #include #include #include -#define NPDEPG (PAGE_SIZE/(sizeof (pd_entry_t))) -#define NUPDE (NPDEPG * NPDEPG) -#define NUSERPGTBLS (NUPDE + NPDEPG) +#define NUL1E (Ln_ENTRIES * Ln_ENTRIES) +#define NUL2E (Ln_ENTRIES * NUL1E) #if !defined(DIAGNOSTIC) #ifdef __GNUC_GNU_INLINE__ #define PMAP_INLINE __attribute__((__gnu_inline__)) inline #else #define PMAP_INLINE extern inline #endif #else #define PMAP_INLINE #endif #ifdef PV_STATS #define PV_STAT(x) do { x ; } while (0) #else #define PV_STAT(x) do { } while (0) #endif #define pmap_l2_pindex(v) ((v) >> L2_SHIFT) +#define pa_to_pvh(pa) (&pv_table[pa_index(pa)]) #define NPV_LIST_LOCKS MAXCPU #define PHYS_TO_PV_LIST_LOCK(pa) \ - (&pv_list_locks[pa_index(pa) % NPV_LIST_LOCKS]) + (&pv_list_locks[pmap_l2_pindex(pa) % NPV_LIST_LOCKS]) #define CHANGE_PV_LIST_LOCK_TO_PHYS(lockp, pa) do { \ struct rwlock **_lockp = (lockp); \ struct rwlock *_new_lock; \ \ _new_lock = PHYS_TO_PV_LIST_LOCK(pa); \ if (_new_lock != *_lockp) { \ if (*_lockp != NULL) \ rw_wunlock(*_lockp); \ *_lockp = _new_lock; \ rw_wlock(*_lockp); \ } \ } while (0) #define CHANGE_PV_LIST_LOCK_TO_VM_PAGE(lockp, m) \ CHANGE_PV_LIST_LOCK_TO_PHYS(lockp, VM_PAGE_TO_PHYS(m)) #define RELEASE_PV_LIST_LOCK(lockp) do { \ struct rwlock **_lockp = (lockp); \ \ if (*_lockp != NULL) { \ rw_wunlock(*_lockp); \ *_lockp = NULL; \ } \ } while (0) #define VM_PAGE_TO_PV_LIST_LOCK(m) \ PHYS_TO_PV_LIST_LOCK(VM_PAGE_TO_PHYS(m)) /* The list of all the user pmaps */ LIST_HEAD(pmaplist, pmap); static struct pmaplist allpmaps = LIST_HEAD_INITIALIZER(); struct pmap kernel_pmap_store; vm_offset_t virtual_avail; /* VA of first avail page (after kernel bss) */ vm_offset_t virtual_end; /* VA of last avail page (end of kernel AS) */ vm_offset_t kernel_vm_end = 0; vm_paddr_t dmap_phys_base; /* The start of the dmap region */ vm_paddr_t dmap_phys_max; /* The limit of the dmap region */ vm_offset_t dmap_max_addr; /* The virtual address limit of the dmap */ /* This code assumes all L1 DMAP entries will be used */ CTASSERT((DMAP_MIN_ADDRESS & ~L1_OFFSET) == DMAP_MIN_ADDRESS); CTASSERT((DMAP_MAX_ADDRESS & ~L1_OFFSET) == DMAP_MAX_ADDRESS); static struct rwlock_padalign pvh_global_lock; static struct mtx_padalign allpmaps_lock; +static SYSCTL_NODE(_vm, OID_AUTO, pmap, CTLFLAG_RD, 0, + "VM/pmap parameters"); + +static int superpages_enabled = 1; +SYSCTL_INT(_vm_pmap, OID_AUTO, superpages_enabled, + CTLFLAG_RDTUN, &superpages_enabled, 0, + "Enable support for transparent superpages"); + +static SYSCTL_NODE(_vm_pmap, OID_AUTO, l2, CTLFLAG_RD, 0, + "2MB page mapping counters"); + 
+static u_long pmap_l2_demotions; +SYSCTL_ULONG(_vm_pmap_l2, OID_AUTO, demotions, CTLFLAG_RD, + &pmap_l2_demotions, 0, + "2MB page demotions"); + +static u_long pmap_l2_mappings; +SYSCTL_ULONG(_vm_pmap_l2, OID_AUTO, mappings, CTLFLAG_RD, + &pmap_l2_mappings, 0, + "2MB page mappings"); + +static u_long pmap_l2_p_failures; +SYSCTL_ULONG(_vm_pmap_l2, OID_AUTO, p_failures, CTLFLAG_RD, + &pmap_l2_p_failures, 0, + "2MB page promotion failures"); + +static u_long pmap_l2_promotions; +SYSCTL_ULONG(_vm_pmap_l2, OID_AUTO, promotions, CTLFLAG_RD, + &pmap_l2_promotions, 0, + "2MB page promotions"); + /* * Data for the pv entry allocation mechanism */ static TAILQ_HEAD(pch, pv_chunk) pv_chunks = TAILQ_HEAD_INITIALIZER(pv_chunks); static struct mtx pv_chunks_mutex; static struct rwlock pv_list_locks[NPV_LIST_LOCKS]; +static struct md_page *pv_table; +static struct md_page pv_dummy; +/* + * Internal flags for pmap_enter()'s helper functions. + */ +#define PMAP_ENTER_NORECLAIM 0x1000000 /* Don't reclaim PV entries. */ +#define PMAP_ENTER_NOREPLACE 0x2000000 /* Don't replace mappings. */ + static void free_pv_chunk(struct pv_chunk *pc); static void free_pv_entry(pmap_t pmap, pv_entry_t pv); static pv_entry_t get_pv_entry(pmap_t pmap, struct rwlock **lockp); static vm_page_t reclaim_pv_chunk(pmap_t locked_pmap, struct rwlock **lockp); static void pmap_pvh_free(struct md_page *pvh, pmap_t pmap, vm_offset_t va); static pv_entry_t pmap_pvh_remove(struct md_page *pvh, pmap_t pmap, vm_offset_t va); +static bool pmap_demote_l2(pmap_t pmap, pd_entry_t *l2, vm_offset_t va); +static bool pmap_demote_l2_locked(pmap_t pmap, pd_entry_t *l2, + vm_offset_t va, struct rwlock **lockp); +static int pmap_enter_l2(pmap_t pmap, vm_offset_t va, pd_entry_t new_l2, + u_int flags, vm_page_t m, struct rwlock **lockp); static vm_page_t pmap_enter_quick_locked(pmap_t pmap, vm_offset_t va, vm_page_t m, vm_prot_t prot, vm_page_t mpte, struct rwlock **lockp); static int pmap_remove_l3(pmap_t pmap, pt_entry_t *l3, vm_offset_t sva, pd_entry_t ptepde, struct spglist *free, struct rwlock **lockp); static boolean_t pmap_try_insert_pv_entry(pmap_t pmap, vm_offset_t va, vm_page_t m, struct rwlock **lockp); static vm_page_t _pmap_alloc_l3(pmap_t pmap, vm_pindex_t ptepindex, struct rwlock **lockp); -static void _pmap_unwire_l3(pmap_t pmap, vm_offset_t va, vm_page_t m, +static void _pmap_unwire_ptp(pmap_t pmap, vm_offset_t va, vm_page_t m, struct spglist *free); -static int pmap_unuse_l3(pmap_t, vm_offset_t, pd_entry_t, struct spglist *); +static int pmap_unuse_pt(pmap_t, vm_offset_t, pd_entry_t, struct spglist *); #define pmap_clear(pte) pmap_store(pte, 0) #define pmap_clear_bits(pte, bits) atomic_clear_64(pte, bits) #define pmap_load_store(pte, entry) atomic_swap_64(pte, entry) #define pmap_load_clear(pte) pmap_load_store(pte, 0) #define pmap_load(pte) atomic_load_64(pte) #define pmap_store(pte, entry) atomic_store_64(pte, entry) #define pmap_store_bits(pte, bits) atomic_set_64(pte, bits) /********************/ /* Inline functions */ /********************/ static __inline void pagecopy(void *s, void *d) { memcpy(d, s, PAGE_SIZE); } static __inline void pagezero(void *p) { bzero(p, PAGE_SIZE); } #define pmap_l1_index(va) (((va) >> L1_SHIFT) & Ln_ADDR_MASK) #define pmap_l2_index(va) (((va) >> L2_SHIFT) & Ln_ADDR_MASK) #define pmap_l3_index(va) (((va) >> L3_SHIFT) & Ln_ADDR_MASK) #define PTE_TO_PHYS(pte) ((pte >> PTE_PPN0_S) * PAGE_SIZE) static __inline pd_entry_t * pmap_l1(pmap_t pmap, vm_offset_t va) { return (&pmap->pm_l1[pmap_l1_index(va)]); } 
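/*
 * (Aside — a hedged, standalone illustration of the Sv39 index split
 * performed by the pmap_l[123]_index() macros above; the 30/21/12 shifts
 * and the 9-bit index mask are assumptions matching Sv39, not definitions
 * taken from this file.)
 */
#if 0	/* illustration only */
#include <stdint.h>
#include <stdio.h>

int
main(void)
{
	uint64_t va = 0xffffffc000200000UL;

	/* Prints "l1 256 l2 1 l3 0" for the VA above. */
	printf("l1 %u l2 %u l3 %u\n",
	    (unsigned)((va >> 30) & 0x1ff),
	    (unsigned)((va >> 21) & 0x1ff),
	    (unsigned)((va >> 12) & 0x1ff));
	return (0);
}
#endif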
static __inline pd_entry_t * pmap_l1_to_l2(pd_entry_t *l1, vm_offset_t va) { vm_paddr_t phys; pd_entry_t *l2; phys = PTE_TO_PHYS(pmap_load(l1)); l2 = (pd_entry_t *)PHYS_TO_DMAP(phys); return (&l2[pmap_l2_index(va)]); } static __inline pd_entry_t * pmap_l2(pmap_t pmap, vm_offset_t va) { pd_entry_t *l1; l1 = pmap_l1(pmap, va); if ((pmap_load(l1) & PTE_V) == 0) return (NULL); if ((pmap_load(l1) & PTE_RX) != 0) return (NULL); return (pmap_l1_to_l2(l1, va)); } static __inline pt_entry_t * pmap_l2_to_l3(pd_entry_t *l2, vm_offset_t va) { vm_paddr_t phys; pt_entry_t *l3; phys = PTE_TO_PHYS(pmap_load(l2)); l3 = (pd_entry_t *)PHYS_TO_DMAP(phys); return (&l3[pmap_l3_index(va)]); } static __inline pt_entry_t * pmap_l3(pmap_t pmap, vm_offset_t va) { pd_entry_t *l2; l2 = pmap_l2(pmap, va); if (l2 == NULL) return (NULL); if ((pmap_load(l2) & PTE_V) == 0) return (NULL); if ((pmap_load(l2) & PTE_RX) != 0) return (NULL); return (pmap_l2_to_l3(l2, va)); } static __inline void pmap_resident_count_inc(pmap_t pmap, int count) { PMAP_LOCK_ASSERT(pmap, MA_OWNED); pmap->pm_stats.resident_count += count; } static __inline void pmap_resident_count_dec(pmap_t pmap, int count) { PMAP_LOCK_ASSERT(pmap, MA_OWNED); KASSERT(pmap->pm_stats.resident_count >= count, ("pmap %p resident count underflow %ld %d", pmap, pmap->pm_stats.resident_count, count)); pmap->pm_stats.resident_count -= count; } static void pmap_distribute_l1(struct pmap *pmap, vm_pindex_t l1index, pt_entry_t entry) { struct pmap *user_pmap; pd_entry_t *l1; /* Distribute new kernel L1 entry to all the user pmaps */ if (pmap != kernel_pmap) return; mtx_lock(&allpmaps_lock); LIST_FOREACH(user_pmap, &allpmaps, pm_list) { l1 = &user_pmap->pm_l1[l1index]; pmap_store(l1, entry); } mtx_unlock(&allpmaps_lock); } static pt_entry_t * pmap_early_page_idx(vm_offset_t l1pt, vm_offset_t va, u_int *l1_slot, u_int *l2_slot) { pt_entry_t *l2; pd_entry_t *l1; l1 = (pd_entry_t *)l1pt; *l1_slot = (va >> L1_SHIFT) & Ln_ADDR_MASK; /* Check locore has used a table L1 map */ KASSERT((l1[*l1_slot] & PTE_RX) == 0, ("Invalid bootstrap L1 table")); /* Find the address of the L2 table */ l2 = (pt_entry_t *)init_pt_va; *l2_slot = pmap_l2_index(va); return (l2); } static vm_paddr_t pmap_early_vtophys(vm_offset_t l1pt, vm_offset_t va) { u_int l1_slot, l2_slot; pt_entry_t *l2; u_int ret; l2 = pmap_early_page_idx(l1pt, va, &l1_slot, &l2_slot); /* Check locore has used L2 superpages */ KASSERT((l2[l2_slot] & PTE_RX) != 0, ("Invalid bootstrap L2 table")); /* L2 is superpages */ ret = (l2[l2_slot] >> PTE_PPN1_S) << L2_SHIFT; ret += (va & L2_OFFSET); return (ret); } static void pmap_bootstrap_dmap(vm_offset_t kern_l1, vm_paddr_t min_pa, vm_paddr_t max_pa) { vm_offset_t va; vm_paddr_t pa; pd_entry_t *l1; u_int l1_slot; pt_entry_t entry; pn_t pn; pa = dmap_phys_base = min_pa & ~L1_OFFSET; va = DMAP_MIN_ADDRESS; l1 = (pd_entry_t *)kern_l1; l1_slot = pmap_l1_index(DMAP_MIN_ADDRESS); for (; va < DMAP_MAX_ADDRESS && pa < max_pa; pa += L1_SIZE, va += L1_SIZE, l1_slot++) { KASSERT(l1_slot < Ln_ENTRIES, ("Invalid L1 index")); /* superpages */ pn = (pa / PAGE_SIZE); entry = PTE_KERN; entry |= (pn << PTE_PPN0_S); pmap_store(&l1[l1_slot], entry); } /* Set the upper limit of the DMAP region */ dmap_phys_max = pa; dmap_max_addr = va; sfence_vma(); } static vm_offset_t pmap_bootstrap_l3(vm_offset_t l1pt, vm_offset_t va, vm_offset_t l3_start) { vm_offset_t l3pt; pt_entry_t entry; pd_entry_t *l2; vm_paddr_t pa; u_int l2_slot; pn_t pn; KASSERT((va & L2_OFFSET) == 0, ("Invalid virtual address")); l2 = 
pmap_l2(kernel_pmap, va); l2 = (pd_entry_t *)((uintptr_t)l2 & ~(PAGE_SIZE - 1)); l2_slot = pmap_l2_index(va); l3pt = l3_start; for (; va < VM_MAX_KERNEL_ADDRESS; l2_slot++, va += L2_SIZE) { KASSERT(l2_slot < Ln_ENTRIES, ("Invalid L2 index")); pa = pmap_early_vtophys(l1pt, l3pt); pn = (pa / PAGE_SIZE); entry = (PTE_V); entry |= (pn << PTE_PPN0_S); pmap_store(&l2[l2_slot], entry); l3pt += PAGE_SIZE; } /* Clean the L2 page table */ memset((void *)l3_start, 0, l3pt - l3_start); return (l3pt); } /* * Bootstrap the system enough to run with virtual memory. */ void pmap_bootstrap(vm_offset_t l1pt, vm_paddr_t kernstart, vm_size_t kernlen) { u_int l1_slot, l2_slot, avail_slot, map_slot; vm_offset_t freemempos; vm_offset_t dpcpu, msgbufpv; vm_paddr_t end, max_pa, min_pa, pa, start; int i; printf("pmap_bootstrap %lx %lx %lx\n", l1pt, kernstart, kernlen); printf("%lx\n", l1pt); printf("%lx\n", (KERNBASE >> L1_SHIFT) & Ln_ADDR_MASK); /* Set this early so we can use the pagetable walking functions */ kernel_pmap_store.pm_l1 = (pd_entry_t *)l1pt; PMAP_LOCK_INIT(kernel_pmap); rw_init(&pvh_global_lock, "pmap pv global"); + CPU_FILL(&kernel_pmap->pm_active); + /* Assume the address we were loaded to is a valid physical address. */ min_pa = max_pa = kernstart; /* * Find the minimum physical address. physmap is sorted, * but may contain empty ranges. */ for (i = 0; i < physmap_idx * 2; i += 2) { if (physmap[i] == physmap[i + 1]) continue; if (physmap[i] <= min_pa) min_pa = physmap[i]; if (physmap[i + 1] > max_pa) max_pa = physmap[i + 1]; } printf("physmap_idx %lx\n", physmap_idx); printf("min_pa %lx\n", min_pa); printf("max_pa %lx\n", max_pa); /* Create a direct map region early so we can use it for pa -> va */ pmap_bootstrap_dmap(l1pt, min_pa, max_pa); /* * Read the page table to find out what is already mapped. * This assumes we have mapped a block of memory from KERNBASE * using a single L1 entry. */ (void)pmap_early_page_idx(l1pt, KERNBASE, &l1_slot, &l2_slot); /* Sanity check the index, KERNBASE should be the first VA */ KASSERT(l2_slot == 0, ("The L2 index is non-zero")); freemempos = roundup2(KERNBASE + kernlen, PAGE_SIZE); /* Create the l3 tables for the early devmap */ freemempos = pmap_bootstrap_l3(l1pt, VM_MAX_KERNEL_ADDRESS - L2_SIZE, freemempos); sfence_vma(); #define alloc_pages(var, np) \ (var) = freemempos; \ freemempos += (np * PAGE_SIZE); \ memset((char *)(var), 0, ((np) * PAGE_SIZE)); /* Allocate dynamic per-cpu area. */ alloc_pages(dpcpu, DPCPU_SIZE / PAGE_SIZE); dpcpu_init((void *)dpcpu, 0); /* Allocate memory for the msgbuf, e.g. for /sbin/dmesg */ alloc_pages(msgbufpv, round_page(msgbufsize) / PAGE_SIZE); msgbufp = (void *)msgbufpv; virtual_avail = roundup2(freemempos, L2_SIZE); virtual_end = VM_MAX_KERNEL_ADDRESS - L2_SIZE; kernel_vm_end = virtual_avail; pa = pmap_early_vtophys(l1pt, freemempos); /* Initialize phys_avail. 
*/ for (avail_slot = map_slot = physmem = 0; map_slot < physmap_idx * 2; map_slot += 2) { start = physmap[map_slot]; end = physmap[map_slot + 1]; if (start == end) continue; if (start >= kernstart && end <= pa) continue; if (start < kernstart && end > kernstart) end = kernstart; else if (start < pa && end > pa) start = pa; phys_avail[avail_slot] = start; phys_avail[avail_slot + 1] = end; physmem += (end - start) >> PAGE_SHIFT; avail_slot += 2; if (end != physmap[map_slot + 1] && end > pa) { phys_avail[avail_slot] = pa; phys_avail[avail_slot + 1] = physmap[map_slot + 1]; physmem += (physmap[map_slot + 1] - pa) >> PAGE_SHIFT; avail_slot += 2; } } phys_avail[avail_slot] = 0; phys_avail[avail_slot + 1] = 0; /* * Maxmem isn't the "maximum memory", it's one larger than the * highest page of the physical address space. It should be * called something like "Maxphyspage". */ Maxmem = atop(phys_avail[avail_slot - 1]); } /* * Initialize a vm_page's machine-dependent fields. */ void pmap_page_init(vm_page_t m) { TAILQ_INIT(&m->md.pv_list); m->md.pv_memattr = VM_MEMATTR_WRITE_BACK; } /* * Initialize the pmap module. * Called by vm_init, to initialize any structures that the pmap * system needs to map virtual memory. */ void pmap_init(void) { - int i; + vm_size_t s; + int i, pv_npg; /* * Initialize the pv chunk and pmap list mutexes. */ mtx_init(&pv_chunks_mutex, "pmap pv chunk list", NULL, MTX_DEF); mtx_init(&allpmaps_lock, "allpmaps", NULL, MTX_DEF); /* * Initialize the pool of pv list locks. */ for (i = 0; i < NPV_LIST_LOCKS; i++) rw_init(&pv_list_locks[i], "pmap pv list"); + + /* + * Calculate the size of the pv head table for superpages. + */ + pv_npg = howmany(vm_phys_segs[vm_phys_nsegs - 1].end, L2_SIZE); + + /* + * Allocate memory for the pv head table for superpages. + */ + s = (vm_size_t)(pv_npg * sizeof(struct md_page)); + s = round_page(s); + pv_table = (struct md_page *)kmem_malloc(s, M_WAITOK | M_ZERO); + for (i = 0; i < pv_npg; i++) + TAILQ_INIT(&pv_table[i].pv_list); + TAILQ_INIT(&pv_dummy.pv_list); + + if (superpages_enabled) + pagesizes[1] = L2_SIZE; } #ifdef SMP /* * For SMP, these functions have to use IPIs for coherence. * * In general, the calling thread uses a plain fence to order the * writes to the page tables before invoking an SBI callback to invoke * sfence_vma() on remote CPUs. - * - * Since the riscv pmap does not yet have a pm_active field, IPIs are - * sent to all CPUs in the system. */ static void pmap_invalidate_page(pmap_t pmap, vm_offset_t va) { cpuset_t mask; sched_pin(); - mask = all_cpus; + mask = pmap->pm_active; CPU_CLR(PCPU_GET(cpuid), &mask); fence(); - sbi_remote_sfence_vma(mask.__bits, va, 1); + if (!CPU_EMPTY(&mask) && smp_started) + sbi_remote_sfence_vma(mask.__bits, va, 1); sfence_vma_page(va); sched_unpin(); } static void pmap_invalidate_range(pmap_t pmap, vm_offset_t sva, vm_offset_t eva) { cpuset_t mask; sched_pin(); - mask = all_cpus; + mask = pmap->pm_active; CPU_CLR(PCPU_GET(cpuid), &mask); fence(); - sbi_remote_sfence_vma(mask.__bits, sva, eva - sva + 1); + if (!CPU_EMPTY(&mask) && smp_started) + sbi_remote_sfence_vma(mask.__bits, sva, eva - sva + 1); /* * Might consider a loop of sfence_vma_page() for a small * number of pages in the future. */ sfence_vma(); sched_unpin(); } static void pmap_invalidate_all(pmap_t pmap) { cpuset_t mask; sched_pin(); - mask = all_cpus; + mask = pmap->pm_active; CPU_CLR(PCPU_GET(cpuid), &mask); - fence(); /* * XXX: The SBI doc doesn't detail how to specify x0 as the * address to perform a global fence. 
BBL currently treats * all sfence_vma requests as global however. */ - sbi_remote_sfence_vma(mask.__bits, 0, 0); + fence(); + if (!CPU_EMPTY(&mask) && smp_started) + sbi_remote_sfence_vma(mask.__bits, 0, 0); sfence_vma(); sched_unpin(); } #else /* * Normal, non-SMP, invalidation functions. * We inline these within pmap.c for speed. */ static __inline void pmap_invalidate_page(pmap_t pmap, vm_offset_t va) { sfence_vma_page(va); } static __inline void pmap_invalidate_range(pmap_t pmap, vm_offset_t sva, vm_offset_t eva) { /* * Might consider a loop of sfence_vma_page() for a small * number of pages in the future. */ sfence_vma(); } static __inline void pmap_invalidate_all(pmap_t pmap) { sfence_vma(); } #endif /* * Routine: pmap_extract * Function: * Extract the physical page address associated * with the given map/virtual_address pair. */ vm_paddr_t pmap_extract(pmap_t pmap, vm_offset_t va) { pd_entry_t *l2p, l2; pt_entry_t *l3p, l3; vm_paddr_t pa; pa = 0; PMAP_LOCK(pmap); /* * Start with the l2 table. We are unable to allocate * pages in the l1 table. */ l2p = pmap_l2(pmap, va); if (l2p != NULL) { l2 = pmap_load(l2p); if ((l2 & PTE_RX) == 0) { l3p = pmap_l2_to_l3(l2p, va); if (l3p != NULL) { l3 = pmap_load(l3p); pa = PTE_TO_PHYS(l3); pa |= (va & L3_OFFSET); } } else { /* L2 is superpages */ pa = (l2 >> PTE_PPN1_S) << L2_SHIFT; pa |= (va & L2_OFFSET); } } PMAP_UNLOCK(pmap); return (pa); } /* * Routine: pmap_extract_and_hold * Function: * Atomically extract and hold the physical page * with the given pmap and virtual address pair * if that mapping permits the given protection. */ vm_page_t pmap_extract_and_hold(pmap_t pmap, vm_offset_t va, vm_prot_t prot) { pt_entry_t *l3p, l3; vm_paddr_t phys; vm_paddr_t pa; vm_page_t m; pa = 0; m = NULL; PMAP_LOCK(pmap); retry: l3p = pmap_l3(pmap, va); if (l3p != NULL && (l3 = pmap_load(l3p)) != 0) { if ((l3 & PTE_W) != 0 || (prot & VM_PROT_WRITE) == 0) { phys = PTE_TO_PHYS(l3); if (vm_page_pa_tryrelock(pmap, phys, &pa)) goto retry; m = PHYS_TO_VM_PAGE(phys); vm_page_hold(m); } } PA_UNLOCK_COND(pa); PMAP_UNLOCK(pmap); return (m); } vm_paddr_t pmap_kextract(vm_offset_t va) { pd_entry_t *l2; pt_entry_t *l3; vm_paddr_t pa; if (va >= DMAP_MIN_ADDRESS && va < DMAP_MAX_ADDRESS) { pa = DMAP_TO_PHYS(va); } else { l2 = pmap_l2(kernel_pmap, va); if (l2 == NULL) panic("pmap_kextract: No l2"); if ((pmap_load(l2) & PTE_RX) != 0) { /* superpages */ pa = (pmap_load(l2) >> PTE_PPN1_S) << L2_SHIFT; pa |= (va & L2_OFFSET); return (pa); } l3 = pmap_l2_to_l3(l2, va); if (l3 == NULL) panic("pmap_kextract: No l3..."); pa = PTE_TO_PHYS(pmap_load(l3)); pa |= (va & PAGE_MASK); } return (pa); } /*************************************************** * Low level mapping routines..... ***************************************************/ void pmap_kenter_device(vm_offset_t sva, vm_size_t size, vm_paddr_t pa) { pt_entry_t entry; pt_entry_t *l3; vm_offset_t va; pn_t pn; KASSERT((pa & L3_OFFSET) == 0, ("pmap_kenter_device: Invalid physical address")); KASSERT((sva & L3_OFFSET) == 0, ("pmap_kenter_device: Invalid virtual address")); KASSERT((size & PAGE_MASK) == 0, ("pmap_kenter_device: Mapping is not page-sized")); va = sva; while (size != 0) { l3 = pmap_l3(kernel_pmap, va); KASSERT(l3 != NULL, ("Invalid page table, va: 0x%lx", va)); pn = (pa / PAGE_SIZE); entry = PTE_KERN; entry |= (pn << PTE_PPN0_S); pmap_store(l3, entry); va += PAGE_SIZE; pa += PAGE_SIZE; size -= PAGE_SIZE; } pmap_invalidate_range(kernel_pmap, sva, va); } /* * Remove a page from the kernel pagetables.
* Note: not SMP coherent. */ PMAP_INLINE void pmap_kremove(vm_offset_t va) { pt_entry_t *l3; l3 = pmap_l3(kernel_pmap, va); KASSERT(l3 != NULL, ("pmap_kremove: Invalid address")); pmap_clear(l3); sfence_vma(); } void pmap_kremove_device(vm_offset_t sva, vm_size_t size) { pt_entry_t *l3; vm_offset_t va; KASSERT((sva & L3_OFFSET) == 0, ("pmap_kremove_device: Invalid virtual address")); KASSERT((size & PAGE_MASK) == 0, ("pmap_kremove_device: Mapping is not page-sized")); va = sva; while (size != 0) { l3 = pmap_l3(kernel_pmap, va); KASSERT(l3 != NULL, ("Invalid page table, va: 0x%lx", va)); pmap_clear(l3); va += PAGE_SIZE; size -= PAGE_SIZE; } pmap_invalidate_range(kernel_pmap, sva, va); } /* * Used to map a range of physical addresses into kernel * virtual address space. * * The value passed in '*virt' is a suggested virtual address for * the mapping. Architectures which can support a direct-mapped * physical to virtual region can return the appropriate address * within that region, leaving '*virt' unchanged. Other * architectures should map the pages starting at '*virt' and * update '*virt' with the first usable address after the mapped * region. */ vm_offset_t pmap_map(vm_offset_t *virt, vm_paddr_t start, vm_paddr_t end, int prot) { return PHYS_TO_DMAP(start); } /* * Add a list of wired pages to the kva. * This routine is only used for temporary * kernel mappings that do not need to have * page modification or references recorded. * Note that old mappings are simply written * over. The page *must* be wired. * Note: SMP coherent. Uses a ranged shootdown IPI. */ void pmap_qenter(vm_offset_t sva, vm_page_t *ma, int count) { pt_entry_t *l3, pa; vm_offset_t va; vm_page_t m; pt_entry_t entry; pn_t pn; int i; va = sva; for (i = 0; i < count; i++) { m = ma[i]; pa = VM_PAGE_TO_PHYS(m); pn = (pa / PAGE_SIZE); l3 = pmap_l3(kernel_pmap, va); entry = PTE_KERN; entry |= (pn << PTE_PPN0_S); pmap_store(l3, entry); va += L3_SIZE; } pmap_invalidate_range(kernel_pmap, sva, va); } /* * This routine tears out page mappings from the * kernel -- it is meant only for temporary mappings. * Note: SMP coherent. Uses a ranged shootdown IPI. */ void pmap_qremove(vm_offset_t sva, int count) { pt_entry_t *l3; vm_offset_t va; KASSERT(sva >= VM_MIN_KERNEL_ADDRESS, ("usermode va %lx", sva)); for (va = sva; count-- > 0; va += PAGE_SIZE) { l3 = pmap_l3(kernel_pmap, va); KASSERT(l3 != NULL, ("pmap_qremove: Invalid address")); pmap_clear(l3); } pmap_invalidate_range(kernel_pmap, sva, va); } +bool +pmap_ps_enabled(pmap_t pmap __unused) +{ + + return (superpages_enabled); +} + /*************************************************** * Page table page management routines..... ***************************************************/ /* * Schedule the specified unused page table page to be freed. Specifically, * add the page to the specified list of pages that will be released to the * physical memory manager after the TLB has been updated. */ static __inline void pmap_add_delayed_free_list(vm_page_t m, struct spglist *free, boolean_t set_PG_ZERO) { if (set_PG_ZERO) m->flags |= PG_ZERO; else m->flags &= ~PG_ZERO; SLIST_INSERT_HEAD(free, m, plinks.s.ss); } + +/* + * Inserts the specified page table page into the specified pmap's collection + * of idle page table pages. Each of a pmap's page table pages is responsible + * for mapping a distinct range of virtual addresses. The pmap's collection is + * ordered by this virtual address range.
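+ * (For example, the L3 page table page backing the 2MB region [k * L2_SIZE, + * (k + 1) * L2_SIZE) is keyed by pindex k, i.e., pmap_l2_pindex(va).)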
+ */ +static __inline int +pmap_insert_pt_page(pmap_t pmap, vm_page_t ml3) +{ + + PMAP_LOCK_ASSERT(pmap, MA_OWNED); + return (vm_radix_insert(&pmap->pm_root, ml3)); +} + +/* + * Removes the page table page mapping the specified virtual address from the + * specified pmap's collection of idle page table pages, and returns it. + * Otherwise, returns NULL if there is no page table page corresponding to the + * specified virtual address. + */ +static __inline vm_page_t +pmap_remove_pt_page(pmap_t pmap, vm_offset_t va) +{ + + PMAP_LOCK_ASSERT(pmap, MA_OWNED); + return (vm_radix_remove(&pmap->pm_root, pmap_l2_pindex(va))); +} /* * Decrements a page table page's wire count, which is used to record the * number of valid page table entries within the page. If the wire count * drops to zero, then the page table page is unmapped. Returns TRUE if the * page table page was unmapped and FALSE otherwise. */ static inline boolean_t -pmap_unwire_l3(pmap_t pmap, vm_offset_t va, vm_page_t m, struct spglist *free) +pmap_unwire_ptp(pmap_t pmap, vm_offset_t va, vm_page_t m, struct spglist *free) { --m->wire_count; if (m->wire_count == 0) { - _pmap_unwire_l3(pmap, va, m, free); + _pmap_unwire_ptp(pmap, va, m, free); return (TRUE); } else { return (FALSE); } } static void -_pmap_unwire_l3(pmap_t pmap, vm_offset_t va, vm_page_t m, struct spglist *free) +_pmap_unwire_ptp(pmap_t pmap, vm_offset_t va, vm_page_t m, struct spglist *free) { vm_paddr_t phys; PMAP_LOCK_ASSERT(pmap, MA_OWNED); - /* - * unmap the page table page - */ - if (m->pindex >= NUPDE) { - /* PD page */ + if (m->pindex >= NUL1E) { pd_entry_t *l1; l1 = pmap_l1(pmap, va); pmap_clear(l1); pmap_distribute_l1(pmap, pmap_l1_index(va), 0); } else { - /* PTE page */ pd_entry_t *l2; l2 = pmap_l2(pmap, va); pmap_clear(l2); } pmap_resident_count_dec(pmap, 1); - if (m->pindex < NUPDE) { + if (m->pindex < NUL1E) { pd_entry_t *l1; - /* We just released a PT, unhold the matching PD */ vm_page_t pdpg; l1 = pmap_l1(pmap, va); phys = PTE_TO_PHYS(pmap_load(l1)); pdpg = PHYS_TO_VM_PAGE(phys); - pmap_unwire_l3(pmap, va, pdpg, free); + pmap_unwire_ptp(pmap, va, pdpg, free); } pmap_invalidate_page(pmap, va); vm_wire_sub(1); /* * Put page on a list so that it is released after * *ALL* TLB shootdown is done */ pmap_add_delayed_free_list(m, free, TRUE); } /* - * After removing an l3 entry, this routine is used to + * After removing a page table entry, this routine is used to * conditionally free the page, and manage the hold/wire counts. 
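 * A nonzero return value indicates that the containing page table page * became empty and was released to the "free" list.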
*/ static int -pmap_unuse_l3(pmap_t pmap, vm_offset_t va, pd_entry_t ptepde, +pmap_unuse_pt(pmap_t pmap, vm_offset_t va, pd_entry_t ptepde, struct spglist *free) { - vm_paddr_t phys; vm_page_t mpte; if (va >= VM_MAXUSER_ADDRESS) return (0); KASSERT(ptepde != 0, ("pmap_unuse_pt: ptepde != 0")); - - phys = PTE_TO_PHYS(ptepde); - - mpte = PHYS_TO_VM_PAGE(phys); - return (pmap_unwire_l3(pmap, va, mpte, free)); + mpte = PHYS_TO_VM_PAGE(PTE_TO_PHYS(ptepde)); + return (pmap_unwire_ptp(pmap, va, mpte, free)); } void pmap_pinit0(pmap_t pmap) { PMAP_LOCK_INIT(pmap); bzero(&pmap->pm_stats, sizeof(pmap->pm_stats)); pmap->pm_l1 = kernel_pmap->pm_l1; + pmap->pm_satp = SATP_MODE_SV39 | (vtophys(pmap->pm_l1) >> PAGE_SHIFT); + CPU_ZERO(&pmap->pm_active); + pmap_activate_boot(pmap); } int pmap_pinit(pmap_t pmap) { vm_paddr_t l1phys; vm_page_t l1pt; /* * allocate the l1 page */ while ((l1pt = vm_page_alloc(NULL, 0xdeadbeef, VM_ALLOC_NORMAL | VM_ALLOC_NOOBJ | VM_ALLOC_WIRED | VM_ALLOC_ZERO)) == NULL) vm_wait(NULL); l1phys = VM_PAGE_TO_PHYS(l1pt); pmap->pm_l1 = (pd_entry_t *)PHYS_TO_DMAP(l1phys); + pmap->pm_satp = SATP_MODE_SV39 | (l1phys >> PAGE_SHIFT); if ((l1pt->flags & PG_ZERO) == 0) pagezero(pmap->pm_l1); bzero(&pmap->pm_stats, sizeof(pmap->pm_stats)); + CPU_ZERO(&pmap->pm_active); + /* Install kernel pagetables */ memcpy(pmap->pm_l1, kernel_pmap->pm_l1, PAGE_SIZE); /* Add to the list of all user pmaps */ mtx_lock(&allpmaps_lock); LIST_INSERT_HEAD(&allpmaps, pmap, pm_list); mtx_unlock(&allpmaps_lock); + vm_radix_init(&pmap->pm_root); + return (1); } /* * This routine is called if the desired page table page does not exist. * * If page table page allocation fails, this routine may sleep before * returning NULL. It sleeps only if a lock pointer was given. * * Note: If a page allocation fails at page table level two or three, * one or two pages may be held during the wait, only to be released * afterwards. This conservative approach is easily argued to avoid * race conditions. */ static vm_page_t _pmap_alloc_l3(pmap_t pmap, vm_pindex_t ptepindex, struct rwlock **lockp) { vm_page_t m, /*pdppg, */pdpg; pt_entry_t entry; vm_paddr_t phys; pn_t pn; PMAP_LOCK_ASSERT(pmap, MA_OWNED); /* * Allocate a page table page. */ if ((m = vm_page_alloc(NULL, ptepindex, VM_ALLOC_NOOBJ | VM_ALLOC_WIRED | VM_ALLOC_ZERO)) == NULL) { if (lockp != NULL) { RELEASE_PV_LIST_LOCK(lockp); PMAP_UNLOCK(pmap); rw_runlock(&pvh_global_lock); vm_wait(NULL); rw_rlock(&pvh_global_lock); PMAP_LOCK(pmap); } /* * Indicate the need to retry. While waiting, the page table * page may have been allocated. */ return (NULL); } if ((m->flags & PG_ZERO) == 0) pmap_zero_page(m); /* * Map the pagetable page into the process address space, if * it isn't already there. 
*/ - if (ptepindex >= NUPDE) { + if (ptepindex >= NUL1E) { pd_entry_t *l1; vm_pindex_t l1index; - l1index = ptepindex - NUPDE; + l1index = ptepindex - NUL1E; l1 = &pmap->pm_l1[l1index]; pn = (VM_PAGE_TO_PHYS(m) / PAGE_SIZE); entry = (PTE_V); entry |= (pn << PTE_PPN0_S); pmap_store(l1, entry); pmap_distribute_l1(pmap, l1index, entry); } else { vm_pindex_t l1index; pd_entry_t *l1, *l2; l1index = ptepindex >> (L1_SHIFT - L2_SHIFT); l1 = &pmap->pm_l1[l1index]; if (pmap_load(l1) == 0) { /* recurse for allocating page dir */ - if (_pmap_alloc_l3(pmap, NUPDE + l1index, + if (_pmap_alloc_l3(pmap, NUL1E + l1index, lockp) == NULL) { vm_page_unwire_noq(m); vm_page_free_zero(m); return (NULL); } } else { phys = PTE_TO_PHYS(pmap_load(l1)); pdpg = PHYS_TO_VM_PAGE(phys); pdpg->wire_count++; } phys = PTE_TO_PHYS(pmap_load(l1)); l2 = (pd_entry_t *)PHYS_TO_DMAP(phys); l2 = &l2[ptepindex & Ln_ADDR_MASK]; pn = (VM_PAGE_TO_PHYS(m) / PAGE_SIZE); entry = (PTE_V); entry |= (pn << PTE_PPN0_S); pmap_store(l2, entry); } pmap_resident_count_inc(pmap, 1); return (m); } static vm_page_t +pmap_alloc_l2(pmap_t pmap, vm_offset_t va, struct rwlock **lockp) +{ + pd_entry_t *l1; + vm_page_t l2pg; + vm_pindex_t l2pindex; + +retry: + l1 = pmap_l1(pmap, va); + if (l1 != NULL && (pmap_load(l1) & PTE_RWX) == 0) { + /* Add a reference to the L2 page. */ + l2pg = PHYS_TO_VM_PAGE(PTE_TO_PHYS(pmap_load(l1))); + l2pg->wire_count++; + } else { + /* Allocate a L2 page. */ + l2pindex = pmap_l2_pindex(va) >> Ln_ENTRIES_SHIFT; + l2pg = _pmap_alloc_l3(pmap, NUL2E + l2pindex, lockp); + if (l2pg == NULL && lockp != NULL) + goto retry; + } + return (l2pg); +} + +static vm_page_t pmap_alloc_l3(pmap_t pmap, vm_offset_t va, struct rwlock **lockp) { vm_pindex_t ptepindex; pd_entry_t *l2; vm_paddr_t phys; vm_page_t m; /* * Calculate pagetable page index */ ptepindex = pmap_l2_pindex(va); retry: /* * Get the page directory entry */ l2 = pmap_l2(pmap, va); /* * If the page table page is mapped, we just increment the * hold count, and activate it. */ if (l2 != NULL && pmap_load(l2) != 0) { phys = PTE_TO_PHYS(pmap_load(l2)); m = PHYS_TO_VM_PAGE(phys); m->wire_count++; } else { /* * Here if the pte page isn't mapped, or if it has been * deallocated. */ m = _pmap_alloc_l3(pmap, ptepindex, lockp); if (m == NULL && lockp != NULL) goto retry; } return (m); } /*************************************************** * Pmap allocation/deallocation routines. ***************************************************/ /* * Release any resources held by the given physical map. * Called when a pmap initialized by pmap_pinit is being released. * Should only be called if the map contains no valid mappings. 
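 * The pmap must also be inactive on all CPUs; the KASSERT on pm_active * below enforces this.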
*/ void pmap_release(pmap_t pmap) { vm_page_t m; KASSERT(pmap->pm_stats.resident_count == 0, ("pmap_release: pmap resident count %ld != 0", pmap->pm_stats.resident_count)); + KASSERT(CPU_EMPTY(&pmap->pm_active), + ("releasing active pmap %p", pmap)); mtx_lock(&allpmaps_lock); LIST_REMOVE(pmap, pm_list); mtx_unlock(&allpmaps_lock); m = PHYS_TO_VM_PAGE(DMAP_TO_PHYS((vm_offset_t)pmap->pm_l1)); vm_page_unwire_noq(m); vm_page_free(m); } #if 0 static int kvm_size(SYSCTL_HANDLER_ARGS) { unsigned long ksize = VM_MAX_KERNEL_ADDRESS - VM_MIN_KERNEL_ADDRESS; return sysctl_handle_long(oidp, &ksize, 0, req); } SYSCTL_PROC(_vm, OID_AUTO, kvm_size, CTLTYPE_LONG|CTLFLAG_RD, 0, 0, kvm_size, "LU", "Size of KVM"); static int kvm_free(SYSCTL_HANDLER_ARGS) { unsigned long kfree = VM_MAX_KERNEL_ADDRESS - kernel_vm_end; return sysctl_handle_long(oidp, &kfree, 0, req); } SYSCTL_PROC(_vm, OID_AUTO, kvm_free, CTLTYPE_LONG|CTLFLAG_RD, 0, 0, kvm_free, "LU", "Amount of KVM free"); #endif /* 0 */ /* * grow the number of kernel page table entries, if needed */ void pmap_growkernel(vm_offset_t addr) { vm_paddr_t paddr; vm_page_t nkpg; pd_entry_t *l1, *l2; pt_entry_t entry; pn_t pn; mtx_assert(&kernel_map->system_mtx, MA_OWNED); addr = roundup2(addr, L2_SIZE); if (addr - 1 >= vm_map_max(kernel_map)) addr = vm_map_max(kernel_map); while (kernel_vm_end < addr) { l1 = pmap_l1(kernel_pmap, kernel_vm_end); if (pmap_load(l1) == 0) { /* We need a new PDP entry */ nkpg = vm_page_alloc(NULL, kernel_vm_end >> L1_SHIFT, VM_ALLOC_INTERRUPT | VM_ALLOC_NOOBJ | VM_ALLOC_WIRED | VM_ALLOC_ZERO); if (nkpg == NULL) panic("pmap_growkernel: no memory to grow kernel"); if ((nkpg->flags & PG_ZERO) == 0) pmap_zero_page(nkpg); paddr = VM_PAGE_TO_PHYS(nkpg); pn = (paddr / PAGE_SIZE); entry = (PTE_V); entry |= (pn << PTE_PPN0_S); pmap_store(l1, entry); pmap_distribute_l1(kernel_pmap, pmap_l1_index(kernel_vm_end), entry); continue; /* try again */ } l2 = pmap_l1_to_l2(l1, kernel_vm_end); if ((pmap_load(l2) & PTE_V) != 0 && (pmap_load(l2) & PTE_RWX) == 0) { kernel_vm_end = (kernel_vm_end + L2_SIZE) & ~L2_OFFSET; if (kernel_vm_end - 1 >= vm_map_max(kernel_map)) { kernel_vm_end = vm_map_max(kernel_map); break; } continue; } nkpg = vm_page_alloc(NULL, kernel_vm_end >> L2_SHIFT, VM_ALLOC_INTERRUPT | VM_ALLOC_NOOBJ | VM_ALLOC_WIRED | VM_ALLOC_ZERO); if (nkpg == NULL) panic("pmap_growkernel: no memory to grow kernel"); if ((nkpg->flags & PG_ZERO) == 0) { pmap_zero_page(nkpg); } paddr = VM_PAGE_TO_PHYS(nkpg); pn = (paddr / PAGE_SIZE); entry = (PTE_V); entry |= (pn << PTE_PPN0_S); pmap_store(l2, entry); pmap_invalidate_page(kernel_pmap, kernel_vm_end); kernel_vm_end = (kernel_vm_end + L2_SIZE) & ~L2_OFFSET; if (kernel_vm_end - 1 >= vm_map_max(kernel_map)) { kernel_vm_end = vm_map_max(kernel_map); break; } } } /*************************************************** * page management routines. 
***************************************************/ CTASSERT(sizeof(struct pv_chunk) == PAGE_SIZE); CTASSERT(_NPCM == 3); CTASSERT(_NPCPV == 168); static __inline struct pv_chunk * pv_to_chunk(pv_entry_t pv) { return ((struct pv_chunk *)((uintptr_t)pv & ~(uintptr_t)PAGE_MASK)); } #define PV_PMAP(pv) (pv_to_chunk(pv)->pc_pmap) #define PC_FREE0 0xfffffffffffffffful #define PC_FREE1 0xfffffffffffffffful #define PC_FREE2 0x000000fffffffffful static const uint64_t pc_freemask[_NPCM] = { PC_FREE0, PC_FREE1, PC_FREE2 }; #if 0 #ifdef PV_STATS static int pc_chunk_count, pc_chunk_allocs, pc_chunk_frees, pc_chunk_tryfail; SYSCTL_INT(_vm_pmap, OID_AUTO, pc_chunk_count, CTLFLAG_RD, &pc_chunk_count, 0, "Current number of pv entry chunks"); SYSCTL_INT(_vm_pmap, OID_AUTO, pc_chunk_allocs, CTLFLAG_RD, &pc_chunk_allocs, 0, "Current number of pv entry chunks allocated"); SYSCTL_INT(_vm_pmap, OID_AUTO, pc_chunk_frees, CTLFLAG_RD, &pc_chunk_frees, 0, "Current number of pv entry chunks frees"); SYSCTL_INT(_vm_pmap, OID_AUTO, pc_chunk_tryfail, CTLFLAG_RD, &pc_chunk_tryfail, 0, "Number of times tried to get a chunk page but failed."); static long pv_entry_frees, pv_entry_allocs, pv_entry_count; static int pv_entry_spare; SYSCTL_LONG(_vm_pmap, OID_AUTO, pv_entry_frees, CTLFLAG_RD, &pv_entry_frees, 0, "Current number of pv entry frees"); SYSCTL_LONG(_vm_pmap, OID_AUTO, pv_entry_allocs, CTLFLAG_RD, &pv_entry_allocs, 0, "Current number of pv entry allocs"); SYSCTL_LONG(_vm_pmap, OID_AUTO, pv_entry_count, CTLFLAG_RD, &pv_entry_count, 0, "Current number of pv entries"); SYSCTL_INT(_vm_pmap, OID_AUTO, pv_entry_spare, CTLFLAG_RD, &pv_entry_spare, 0, "Current number of spare pv entries"); #endif #endif /* 0 */ /* * We are in a serious low memory condition. Resort to * drastic measures to free some pages so we can allocate * another pv entry chunk. * * Returns NULL if PV entries were reclaimed from the specified pmap. * * We do not, however, unmap 2mpages because subsequent accesses will * allocate per-page pv entries until repromotion occurs, thereby * exacerbating the shortage of free pv entries. */ static vm_page_t reclaim_pv_chunk(pmap_t locked_pmap, struct rwlock **lockp) { panic("RISCVTODO: reclaim_pv_chunk"); } /* * free the pv_entry back to the free list */ static void free_pv_entry(pmap_t pmap, pv_entry_t pv) { struct pv_chunk *pc; int idx, field, bit; rw_assert(&pvh_global_lock, RA_LOCKED); PMAP_LOCK_ASSERT(pmap, MA_OWNED); PV_STAT(atomic_add_long(&pv_entry_frees, 1)); PV_STAT(atomic_add_int(&pv_entry_spare, 1)); PV_STAT(atomic_subtract_long(&pv_entry_count, 1)); pc = pv_to_chunk(pv); idx = pv - &pc->pc_pventry[0]; field = idx / 64; bit = idx % 64; pc->pc_map[field] |= 1ul << bit; if (pc->pc_map[0] != PC_FREE0 || pc->pc_map[1] != PC_FREE1 || pc->pc_map[2] != PC_FREE2) { /* 98% of the time, pc is already at the head of the list. 
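 * Keeping chunks with free slots at the head lets get_pv_entry() find a * free entry with a single TAILQ_FIRST() lookup in the common case.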
*/ if (__predict_false(pc != TAILQ_FIRST(&pmap->pm_pvchunk))) { TAILQ_REMOVE(&pmap->pm_pvchunk, pc, pc_list); TAILQ_INSERT_HEAD(&pmap->pm_pvchunk, pc, pc_list); } return; } TAILQ_REMOVE(&pmap->pm_pvchunk, pc, pc_list); free_pv_chunk(pc); } static void free_pv_chunk(struct pv_chunk *pc) { vm_page_t m; mtx_lock(&pv_chunks_mutex); TAILQ_REMOVE(&pv_chunks, pc, pc_lru); mtx_unlock(&pv_chunks_mutex); PV_STAT(atomic_subtract_int(&pv_entry_spare, _NPCPV)); PV_STAT(atomic_subtract_int(&pc_chunk_count, 1)); PV_STAT(atomic_add_int(&pc_chunk_frees, 1)); /* entire chunk is free, return it */ m = PHYS_TO_VM_PAGE(DMAP_TO_PHYS((vm_offset_t)pc)); #if 0 /* TODO: For minidump */ dump_drop_page(m->phys_addr); #endif vm_page_unwire(m, PQ_NONE); vm_page_free(m); } /* * Returns a new PV entry, allocating a new PV chunk from the system when * needed. If this PV chunk allocation fails and a PV list lock pointer was * given, a PV chunk is reclaimed from an arbitrary pmap. Otherwise, NULL is * returned. * * The given PV list lock may be released. */ static pv_entry_t get_pv_entry(pmap_t pmap, struct rwlock **lockp) { int bit, field; pv_entry_t pv; struct pv_chunk *pc; vm_page_t m; rw_assert(&pvh_global_lock, RA_LOCKED); PMAP_LOCK_ASSERT(pmap, MA_OWNED); PV_STAT(atomic_add_long(&pv_entry_allocs, 1)); retry: pc = TAILQ_FIRST(&pmap->pm_pvchunk); if (pc != NULL) { for (field = 0; field < _NPCM; field++) { if (pc->pc_map[field]) { bit = ffsl(pc->pc_map[field]) - 1; break; } } if (field < _NPCM) { pv = &pc->pc_pventry[field * 64 + bit]; pc->pc_map[field] &= ~(1ul << bit); /* If this was the last item, move it to tail */ if (pc->pc_map[0] == 0 && pc->pc_map[1] == 0 && pc->pc_map[2] == 0) { TAILQ_REMOVE(&pmap->pm_pvchunk, pc, pc_list); TAILQ_INSERT_TAIL(&pmap->pm_pvchunk, pc, pc_list); } PV_STAT(atomic_add_long(&pv_entry_count, 1)); PV_STAT(atomic_subtract_int(&pv_entry_spare, 1)); return (pv); } } /* No free items, allocate another chunk */ m = vm_page_alloc(NULL, 0, VM_ALLOC_NORMAL | VM_ALLOC_NOOBJ | VM_ALLOC_WIRED); if (m == NULL) { if (lockp == NULL) { PV_STAT(pc_chunk_tryfail++); return (NULL); } m = reclaim_pv_chunk(pmap, lockp); if (m == NULL) goto retry; } PV_STAT(atomic_add_int(&pc_chunk_count, 1)); PV_STAT(atomic_add_int(&pc_chunk_allocs, 1)); #if 0 /* TODO: This is for minidump */ dump_add_page(m->phys_addr); #endif pc = (void *)PHYS_TO_DMAP(m->phys_addr); pc->pc_pmap = pmap; pc->pc_map[0] = PC_FREE0 & ~1ul; /* preallocated bit 0 */ pc->pc_map[1] = PC_FREE1; pc->pc_map[2] = PC_FREE2; mtx_lock(&pv_chunks_mutex); TAILQ_INSERT_TAIL(&pv_chunks, pc, pc_lru); mtx_unlock(&pv_chunks_mutex); pv = &pc->pc_pventry[0]; TAILQ_INSERT_HEAD(&pmap->pm_pvchunk, pc, pc_list); PV_STAT(atomic_add_long(&pv_entry_count, 1)); PV_STAT(atomic_add_int(&pv_entry_spare, _NPCPV - 1)); return (pv); } /* + * Ensure that the number of spare PV entries in the specified pmap meets or + * exceeds the given count, "needed". + * + * The given PV list lock may be released. + */ +static void +reserve_pv_entries(pmap_t pmap, int needed, struct rwlock **lockp) +{ + struct pch new_tail; + struct pv_chunk *pc; + vm_page_t m; + int avail, free; + bool reclaimed; + + rw_assert(&pvh_global_lock, RA_LOCKED); + PMAP_LOCK_ASSERT(pmap, MA_OWNED); + KASSERT(lockp != NULL, ("reserve_pv_entries: lockp is NULL")); + + /* + * Newly allocated PV chunks must be stored in a private list until + * the required number of PV chunks have been allocated. Otherwise, + * reclaim_pv_chunk() could recycle one of these chunks. 
In + * contrast, these chunks must be added to the pmap upon allocation. + */ + TAILQ_INIT(&new_tail); +retry: + avail = 0; + TAILQ_FOREACH(pc, &pmap->pm_pvchunk, pc_list) { + bit_count((bitstr_t *)pc->pc_map, 0, + sizeof(pc->pc_map) * NBBY, &free); + if (free == 0) + break; + avail += free; + if (avail >= needed) + break; + } + for (reclaimed = false; avail < needed; avail += _NPCPV) { + m = vm_page_alloc(NULL, 0, VM_ALLOC_NORMAL | VM_ALLOC_NOOBJ | + VM_ALLOC_WIRED); + if (m == NULL) { + m = reclaim_pv_chunk(pmap, lockp); + if (m == NULL) + goto retry; + reclaimed = true; + } + /* XXX PV STATS */ +#if 0 + dump_add_page(m->phys_addr); +#endif + pc = (void *)PHYS_TO_DMAP(m->phys_addr); + pc->pc_pmap = pmap; + pc->pc_map[0] = PC_FREE0; + pc->pc_map[1] = PC_FREE1; + pc->pc_map[2] = PC_FREE2; + TAILQ_INSERT_HEAD(&pmap->pm_pvchunk, pc, pc_list); + TAILQ_INSERT_TAIL(&new_tail, pc, pc_lru); + + /* + * The reclaim might have freed a chunk from the current pmap. + * If that chunk contained available entries, we need to + * re-count the number of available entries. + */ + if (reclaimed) + goto retry; + } + if (!TAILQ_EMPTY(&new_tail)) { + mtx_lock(&pv_chunks_mutex); + TAILQ_CONCAT(&pv_chunks, &new_tail, pc_lru); + mtx_unlock(&pv_chunks_mutex); + } +} + +/* * First find and then remove the pv entry for the specified pmap and virtual * address from the specified pv list. Returns the pv entry if found and NULL * otherwise. This operation can be performed on pv lists for either 4KB or * 2MB page mappings. */ static __inline pv_entry_t pmap_pvh_remove(struct md_page *pvh, pmap_t pmap, vm_offset_t va) { pv_entry_t pv; rw_assert(&pvh_global_lock, RA_LOCKED); TAILQ_FOREACH(pv, &pvh->pv_list, pv_next) { if (pmap == PV_PMAP(pv) && va == pv->pv_va) { TAILQ_REMOVE(&pvh->pv_list, pv, pv_next); pvh->pv_gen++; break; } } return (pv); } /* * First find and then destroy the pv entry for the specified pmap and virtual * address. This operation can be performed on pv lists for either 4KB or 2MB * page mappings. */ static void pmap_pvh_free(struct md_page *pvh, pmap_t pmap, vm_offset_t va) { pv_entry_t pv; pv = pmap_pvh_remove(pvh, pmap, va); - KASSERT(pv != NULL, ("pmap_pvh_free: pv not found")); + KASSERT(pv != NULL, ("pmap_pvh_free: pv not found for %#lx", va)); free_pv_entry(pmap, pv); } /* * Conditionally create the PV entry for a 4KB page mapping if the required * memory can be allocated without resorting to reclamation. */ static boolean_t pmap_try_insert_pv_entry(pmap_t pmap, vm_offset_t va, vm_page_t m, struct rwlock **lockp) { pv_entry_t pv; rw_assert(&pvh_global_lock, RA_LOCKED); PMAP_LOCK_ASSERT(pmap, MA_OWNED); /* Pass NULL instead of the lock pointer to disable reclamation. */ if ((pv = get_pv_entry(pmap, NULL)) != NULL) { pv->pv_va = va; CHANGE_PV_LIST_LOCK_TO_VM_PAGE(lockp, m); TAILQ_INSERT_TAIL(&m->md.pv_list, pv, pv_next); m->md.pv_gen++; return (TRUE); } else return (FALSE); } /* + * After demotion from a 2MB page mapping to 512 4KB page mappings, + * destroy the pv entry for the 2MB page mapping and reinstantiate the pv + * entries for each of the 4KB page mappings. 
+ */ +static void __unused +pmap_pv_demote_l2(pmap_t pmap, vm_offset_t va, vm_paddr_t pa, + struct rwlock **lockp) +{ + struct md_page *pvh; + struct pv_chunk *pc; + pv_entry_t pv; + vm_page_t m; + vm_offset_t va_last; + int bit, field; + + rw_assert(&pvh_global_lock, RA_LOCKED); + PMAP_LOCK_ASSERT(pmap, MA_OWNED); + CHANGE_PV_LIST_LOCK_TO_PHYS(lockp, pa); + + /* + * Transfer the 2mpage's pv entry for this mapping to the first + * page's pv list. Once this transfer begins, the pv list lock + * must not be released until the last pv entry is reinstantiated. + */ + pvh = pa_to_pvh(pa); + va &= ~L2_OFFSET; + pv = pmap_pvh_remove(pvh, pmap, va); + KASSERT(pv != NULL, ("pmap_pv_demote_l2: pv not found")); + m = PHYS_TO_VM_PAGE(pa); + TAILQ_INSERT_TAIL(&m->md.pv_list, pv, pv_next); + m->md.pv_gen++; + /* Instantiate the remaining 511 pv entries. */ + va_last = va + L2_SIZE - PAGE_SIZE; + for (;;) { + pc = TAILQ_FIRST(&pmap->pm_pvchunk); + KASSERT(pc->pc_map[0] != 0 || pc->pc_map[1] != 0 || + pc->pc_map[2] != 0, ("pmap_pv_demote_l2: missing spare")); + for (field = 0; field < _NPCM; field++) { + while (pc->pc_map[field] != 0) { + bit = ffsl(pc->pc_map[field]) - 1; + pc->pc_map[field] &= ~(1ul << bit); + pv = &pc->pc_pventry[field * 64 + bit]; + va += PAGE_SIZE; + pv->pv_va = va; + m++; + KASSERT((m->oflags & VPO_UNMANAGED) == 0, + ("pmap_pv_demote_l2: page %p is not managed", m)); + TAILQ_INSERT_TAIL(&m->md.pv_list, pv, pv_next); + m->md.pv_gen++; + if (va == va_last) + goto out; + } + } + TAILQ_REMOVE(&pmap->pm_pvchunk, pc, pc_list); + TAILQ_INSERT_TAIL(&pmap->pm_pvchunk, pc, pc_list); + } +out: + if (pc->pc_map[0] == 0 && pc->pc_map[1] == 0 && pc->pc_map[2] == 0) { + TAILQ_REMOVE(&pmap->pm_pvchunk, pc, pc_list); + TAILQ_INSERT_TAIL(&pmap->pm_pvchunk, pc, pc_list); + } + /* XXX PV stats */ +} + +#if VM_NRESERVLEVEL > 0 +static void +pmap_pv_promote_l2(pmap_t pmap, vm_offset_t va, vm_paddr_t pa, + struct rwlock **lockp) +{ + struct md_page *pvh; + pv_entry_t pv; + vm_page_t m; + vm_offset_t va_last; + + rw_assert(&pvh_global_lock, RA_LOCKED); + KASSERT((va & L2_OFFSET) == 0, + ("pmap_pv_promote_l2: misaligned va %#lx", va)); + + CHANGE_PV_LIST_LOCK_TO_PHYS(lockp, pa); + + m = PHYS_TO_VM_PAGE(pa); + pv = pmap_pvh_remove(&m->md, pmap, va); + KASSERT(pv != NULL, ("pmap_pv_promote_l2: pv for %#lx not found", va)); + pvh = pa_to_pvh(pa); + TAILQ_INSERT_TAIL(&pvh->pv_list, pv, pv_next); + pvh->pv_gen++; + + va_last = va + L2_SIZE - PAGE_SIZE; + do { + m++; + va += PAGE_SIZE; + pmap_pvh_free(&m->md, pmap, va); + } while (va < va_last); +} +#endif /* VM_NRESERVLEVEL > 0 */ + +/* + * Create the PV entry for a 2MB page mapping. Always returns true unless the + * flag PMAP_ENTER_NORECLAIM is specified. If that flag is specified, returns + * false if the PV entry cannot be allocated without resorting to reclamation. + */ +static bool +pmap_pv_insert_l2(pmap_t pmap, vm_offset_t va, pd_entry_t l2e, u_int flags, + struct rwlock **lockp) +{ + struct md_page *pvh; + pv_entry_t pv; + vm_paddr_t pa; + + PMAP_LOCK_ASSERT(pmap, MA_OWNED); + /* Pass NULL instead of the lock pointer to disable reclamation. */ + if ((pv = get_pv_entry(pmap, (flags & PMAP_ENTER_NORECLAIM) != 0 ? 
+ NULL : lockp)) == NULL) + return (false); + pv->pv_va = va; + pa = PTE_TO_PHYS(l2e); + CHANGE_PV_LIST_LOCK_TO_PHYS(lockp, pa); + pvh = pa_to_pvh(pa); + TAILQ_INSERT_TAIL(&pvh->pv_list, pv, pv_next); + pvh->pv_gen++; + return (true); +} + +static void +pmap_remove_kernel_l2(pmap_t pmap, pt_entry_t *l2, vm_offset_t va) +{ + pt_entry_t newl2, oldl2; + vm_page_t ml3; + vm_paddr_t ml3pa; + + KASSERT(!VIRT_IN_DMAP(va), ("removing direct mapping of %#lx", va)); + KASSERT(pmap == kernel_pmap, ("pmap %p is not kernel_pmap", pmap)); + PMAP_LOCK_ASSERT(pmap, MA_OWNED); + + ml3 = pmap_remove_pt_page(pmap, va); + if (ml3 == NULL) + panic("pmap_remove_kernel_l2: Missing pt page"); + + ml3pa = VM_PAGE_TO_PHYS(ml3); + newl2 = ml3pa | PTE_V; + + /* + * Initialize the page table page. + */ + pagezero((void *)PHYS_TO_DMAP(ml3pa)); + + /* + * Demote the mapping. + */ + oldl2 = pmap_load_store(l2, newl2); + KASSERT(oldl2 == 0, ("%s: found existing mapping at %p: %#lx", + __func__, l2, oldl2)); +} + +/* + * pmap_remove_l2: Do the things to unmap a level 2 superpage. + */ +static int +pmap_remove_l2(pmap_t pmap, pt_entry_t *l2, vm_offset_t sva, + pd_entry_t l1e, struct spglist *free, struct rwlock **lockp) +{ + struct md_page *pvh; + pt_entry_t oldl2; + vm_offset_t eva, va; + vm_page_t m, ml3; + + PMAP_LOCK_ASSERT(pmap, MA_OWNED); + KASSERT((sva & L2_OFFSET) == 0, ("pmap_remove_l2: sva is not aligned")); + oldl2 = pmap_load_clear(l2); + KASSERT((oldl2 & PTE_RWX) != 0, + ("pmap_remove_l2: L2e %lx is not a superpage mapping", oldl2)); + + /* + * The sfence.vma documentation states that it is sufficient to specify + * a single address within a superpage mapping. However, since we do + * not perform any invalidation upon promotion, TLBs may still be + * caching 4KB mappings within the superpage, so we must invalidate the + * entire range. 
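+ * (Hence the full-range pmap_invalidate_range() call over [sva, sva + + * L2_SIZE) below, rather than a single-page invalidation.)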
+ */ + pmap_invalidate_range(pmap, sva, sva + L2_SIZE); + if ((oldl2 & PTE_SW_WIRED) != 0) + pmap->pm_stats.wired_count -= L2_SIZE / PAGE_SIZE; + pmap_resident_count_dec(pmap, L2_SIZE / PAGE_SIZE); + if ((oldl2 & PTE_SW_MANAGED) != 0) { + CHANGE_PV_LIST_LOCK_TO_PHYS(lockp, PTE_TO_PHYS(oldl2)); + pvh = pa_to_pvh(PTE_TO_PHYS(oldl2)); + pmap_pvh_free(pvh, pmap, sva); + eva = sva + L2_SIZE; + for (va = sva, m = PHYS_TO_VM_PAGE(PTE_TO_PHYS(oldl2)); + va < eva; va += PAGE_SIZE, m++) { + if ((oldl2 & PTE_D) != 0) + vm_page_dirty(m); + if ((oldl2 & PTE_A) != 0) + vm_page_aflag_set(m, PGA_REFERENCED); + if (TAILQ_EMPTY(&m->md.pv_list) && + TAILQ_EMPTY(&pvh->pv_list)) + vm_page_aflag_clear(m, PGA_WRITEABLE); + } + } + if (pmap == kernel_pmap) { + pmap_remove_kernel_l2(pmap, l2, sva); + } else { + ml3 = pmap_remove_pt_page(pmap, sva); + if (ml3 != NULL) { + pmap_resident_count_dec(pmap, 1); + KASSERT(ml3->wire_count == Ln_ENTRIES, + ("pmap_remove_l2: l3 page wire count error")); + ml3->wire_count = 1; + vm_page_unwire_noq(ml3); + pmap_add_delayed_free_list(ml3, free, FALSE); + } + } + return (pmap_unuse_pt(pmap, sva, l1e, free)); +} + +/* * pmap_remove_l3: do the things to unmap a page in a process */ static int pmap_remove_l3(pmap_t pmap, pt_entry_t *l3, vm_offset_t va, pd_entry_t l2e, struct spglist *free, struct rwlock **lockp) { pt_entry_t old_l3; vm_paddr_t phys; vm_page_t m; PMAP_LOCK_ASSERT(pmap, MA_OWNED); old_l3 = pmap_load_clear(l3); pmap_invalidate_page(pmap, va); if (old_l3 & PTE_SW_WIRED) pmap->pm_stats.wired_count -= 1; pmap_resident_count_dec(pmap, 1); if (old_l3 & PTE_SW_MANAGED) { phys = PTE_TO_PHYS(old_l3); m = PHYS_TO_VM_PAGE(phys); if ((old_l3 & PTE_D) != 0) vm_page_dirty(m); if (old_l3 & PTE_A) vm_page_aflag_set(m, PGA_REFERENCED); CHANGE_PV_LIST_LOCK_TO_VM_PAGE(lockp, m); pmap_pvh_free(&m->md, pmap, va); } - return (pmap_unuse_l3(pmap, va, l2e, free)); + return (pmap_unuse_pt(pmap, va, l2e, free)); } /* * Remove the given range of addresses from the specified map. * * It is assumed that the start and end are properly * rounded to the page size. */ void pmap_remove(pmap_t pmap, vm_offset_t sva, vm_offset_t eva) { + struct spglist free; struct rwlock *lock; vm_offset_t va, va_next; - pd_entry_t *l1, *l2; - pt_entry_t l3_pte, *l3; - struct spglist free; + pd_entry_t *l1, *l2, l2e; + pt_entry_t *l3; /* * Perform an unsynchronized read. This is, however, safe. */ if (pmap->pm_stats.resident_count == 0) return; SLIST_INIT(&free); rw_rlock(&pvh_global_lock); PMAP_LOCK(pmap); lock = NULL; for (; sva < eva; sva = va_next) { if (pmap->pm_stats.resident_count == 0) break; l1 = pmap_l1(pmap, sva); if (pmap_load(l1) == 0) { va_next = (sva + L1_SIZE) & ~L1_OFFSET; if (va_next < sva) va_next = eva; continue; } /* * Calculate index for next page table. */ va_next = (sva + L2_SIZE) & ~L2_OFFSET; if (va_next < sva) va_next = eva; l2 = pmap_l1_to_l2(l1, sva); if (l2 == NULL) continue; - - l3_pte = pmap_load(l2); - - /* - * Weed out invalid mappings. - */ - if (l3_pte == 0) + if ((l2e = pmap_load(l2)) == 0) continue; - if ((pmap_load(l2) & PTE_RX) != 0) - continue; + if ((l2e & PTE_RWX) != 0) { + if (sva + L2_SIZE == va_next && eva >= va_next) { + (void)pmap_remove_l2(pmap, l2, sva, + pmap_load(l1), &free, &lock); + continue; + } else if (!pmap_demote_l2_locked(pmap, l2, sva, + &lock)) { + /* + * The large page mapping was destroyed. 
+ */ + continue; + } + l2e = pmap_load(l2); + } /* * Limit our scan to either the end of the va represented * by the current page table page, or to the end of the * range being removed. */ if (va_next > eva) va_next = eva; va = va_next; for (l3 = pmap_l2_to_l3(l2, sva); sva != va_next; l3++, sva += L3_SIZE) { - if (l3 == NULL) - panic("l3 == NULL"); if (pmap_load(l3) == 0) { if (va != va_next) { pmap_invalidate_range(pmap, va, sva); va = va_next; } continue; } if (va == va_next) va = sva; - if (pmap_remove_l3(pmap, l3, sva, l3_pte, &free, - &lock)) { + if (pmap_remove_l3(pmap, l3, sva, l2e, &free, &lock)) { sva += L3_SIZE; break; } } if (va != va_next) pmap_invalidate_range(pmap, va, sva); } if (lock != NULL) rw_wunlock(lock); - rw_runlock(&pvh_global_lock); + rw_runlock(&pvh_global_lock); PMAP_UNLOCK(pmap); vm_page_free_pages_toq(&free, false); } /* * Routine: pmap_remove_all * Function: * Removes this physical page from * all physical maps in which it resides. * Reflects back modify bits to the pager. * * Notes: * Original versions of this routine were very * inefficient because they iteratively called * pmap_remove (slow...) */ void pmap_remove_all(vm_page_t m) { - pv_entry_t pv; - pmap_t pmap; - pt_entry_t *l3, tl3; - pd_entry_t *l2, tl2; struct spglist free; + struct md_page *pvh; + pmap_t pmap; + pt_entry_t *l3, l3e; + pd_entry_t *l2, l2e; + pv_entry_t pv; + vm_offset_t va; KASSERT((m->oflags & VPO_UNMANAGED) == 0, ("pmap_remove_all: page %p is not managed", m)); SLIST_INIT(&free); + pvh = (m->flags & PG_FICTITIOUS) != 0 ? &pv_dummy : + pa_to_pvh(VM_PAGE_TO_PHYS(m)); + rw_wlock(&pvh_global_lock); + while ((pv = TAILQ_FIRST(&pvh->pv_list)) != NULL) { + pmap = PV_PMAP(pv); + PMAP_LOCK(pmap); + va = pv->pv_va; + l2 = pmap_l2(pmap, va); + (void)pmap_demote_l2(pmap, l2, va); + PMAP_UNLOCK(pmap); + } while ((pv = TAILQ_FIRST(&m->md.pv_list)) != NULL) { pmap = PV_PMAP(pv); PMAP_LOCK(pmap); pmap_resident_count_dec(pmap, 1); l2 = pmap_l2(pmap, pv->pv_va); KASSERT(l2 != NULL, ("pmap_remove_all: no l2 table found")); - tl2 = pmap_load(l2); + l2e = pmap_load(l2); - KASSERT((tl2 & PTE_RX) == 0, - ("pmap_remove_all: found a table when expecting " - "a block in %p's pv list", m)); + KASSERT((l2e & PTE_RX) == 0, + ("pmap_remove_all: found a superpage in %p's pv list", m)); l3 = pmap_l2_to_l3(l2, pv->pv_va); - tl3 = pmap_load_clear(l3); + l3e = pmap_load_clear(l3); pmap_invalidate_page(pmap, pv->pv_va); - if (tl3 & PTE_SW_WIRED) + if (l3e & PTE_SW_WIRED) pmap->pm_stats.wired_count--; - if ((tl3 & PTE_A) != 0) + if ((l3e & PTE_A) != 0) vm_page_aflag_set(m, PGA_REFERENCED); /* * Update the vm_page_t clean and reference bits. */ - if ((tl3 & PTE_D) != 0) + if ((l3e & PTE_D) != 0) vm_page_dirty(m); - pmap_unuse_l3(pmap, pv->pv_va, pmap_load(l2), &free); + pmap_unuse_pt(pmap, pv->pv_va, pmap_load(l2), &free); TAILQ_REMOVE(&m->md.pv_list, pv, pv_next); m->md.pv_gen++; free_pv_entry(pmap, pv); PMAP_UNLOCK(pmap); } vm_page_aflag_clear(m, PGA_WRITEABLE); rw_wunlock(&pvh_global_lock); vm_page_free_pages_toq(&free, false); } /* * Set the physical protection on the * specified range of this map as requested. 
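 * Protection here is only ever reduced: dropping VM_PROT_WRITE clears PTE_W * and PTE_D from each entry, so a subsequent write traps and re-dirties the * page, and dropping VM_PROT_EXECUTE clears PTE_X.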
*/ void pmap_protect(pmap_t pmap, vm_offset_t sva, vm_offset_t eva, vm_prot_t prot) { - pd_entry_t *l1, *l2; + pd_entry_t *l1, *l2, l2e; pt_entry_t *l3, l3e, mask; vm_page_t m; - vm_offset_t va_next; + vm_paddr_t pa; + vm_offset_t va, va_next; + bool anychanged, pv_lists_locked; if ((prot & VM_PROT_READ) == VM_PROT_NONE) { pmap_remove(pmap, sva, eva); return; } if ((prot & (VM_PROT_WRITE | VM_PROT_EXECUTE)) == (VM_PROT_WRITE | VM_PROT_EXECUTE)) return; + anychanged = false; + pv_lists_locked = false; mask = 0; if ((prot & VM_PROT_WRITE) == 0) mask |= PTE_W | PTE_D; if ((prot & VM_PROT_EXECUTE) == 0) mask |= PTE_X; - +resume: PMAP_LOCK(pmap); for (; sva < eva; sva = va_next) { l1 = pmap_l1(pmap, sva); if (pmap_load(l1) == 0) { va_next = (sva + L1_SIZE) & ~L1_OFFSET; if (va_next < sva) va_next = eva; continue; } va_next = (sva + L2_SIZE) & ~L2_OFFSET; if (va_next < sva) va_next = eva; l2 = pmap_l1_to_l2(l1, sva); - if (l2 == NULL || pmap_load(l2) == 0) + if (l2 == NULL || (l2e = pmap_load(l2)) == 0) continue; - if ((pmap_load(l2) & PTE_RX) != 0) - continue; + if ((l2e & PTE_RWX) != 0) { + if (sva + L2_SIZE == va_next && eva >= va_next) { +retryl2: + if ((l2e & (PTE_SW_MANAGED | PTE_D)) == + (PTE_SW_MANAGED | PTE_D)) { + pa = PTE_TO_PHYS(l2e); + for (va = sva, m = PHYS_TO_VM_PAGE(pa); + va < va_next; m++, va += PAGE_SIZE) + vm_page_dirty(m); + } + if (!atomic_fcmpset_long(l2, &l2e, l2e & ~mask)) + goto retryl2; + anychanged = true; + } else { + if (!pv_lists_locked) { + pv_lists_locked = true; + if (!rw_try_rlock(&pvh_global_lock)) { + if (anychanged) + pmap_invalidate_all( + pmap); + PMAP_UNLOCK(pmap); + rw_rlock(&pvh_global_lock); + goto resume; + } + } + if (!pmap_demote_l2(pmap, l2, sva)) { + /* + * The large page mapping was destroyed. + */ + continue; + } + } + } if (va_next > eva) va_next = eva; for (l3 = pmap_l2_to_l3(l2, sva); sva != va_next; l3++, sva += L3_SIZE) { l3e = pmap_load(l3); -retry: +retryl3: if ((l3e & PTE_V) == 0) continue; if ((prot & VM_PROT_WRITE) == 0 && (l3e & (PTE_SW_MANAGED | PTE_D)) == (PTE_SW_MANAGED | PTE_D)) { m = PHYS_TO_VM_PAGE(PTE_TO_PHYS(l3e)); vm_page_dirty(m); } if (!atomic_fcmpset_long(l3, &l3e, l3e & ~mask)) - goto retry; - /* XXX: Use pmap_invalidate_range */ - pmap_invalidate_page(pmap, sva); + goto retryl3; + anychanged = true; } } + if (anychanged) + pmap_invalidate_all(pmap); + if (pv_lists_locked) + rw_runlock(&pvh_global_lock); PMAP_UNLOCK(pmap); } int pmap_fault_fixup(pmap_t pmap, vm_offset_t va, vm_prot_t ftype) { - pt_entry_t orig_l3; - pt_entry_t new_l3; - pt_entry_t *l3; + pd_entry_t *l2, l2e; + pt_entry_t bits, *pte, oldpte; int rv; rv = 0; PMAP_LOCK(pmap); - - l3 = pmap_l3(pmap, va); - if (l3 == NULL) + l2 = pmap_l2(pmap, va); + if (l2 == NULL || ((l2e = pmap_load(l2)) & PTE_V) == 0) goto done; + if ((l2e & PTE_RWX) == 0) { + pte = pmap_l2_to_l3(l2, va); + if (pte == NULL || ((oldpte = pmap_load(pte)) & PTE_V) == 0) + goto done; + } else { + pte = l2; + oldpte = l2e; + } - orig_l3 = pmap_load(l3); - if ((orig_l3 & PTE_V) == 0 || - (ftype == VM_PROT_WRITE && (orig_l3 & PTE_W) == 0) || - (ftype == VM_PROT_EXECUTE && (orig_l3 & PTE_X) == 0) || - (ftype == VM_PROT_READ && (orig_l3 & PTE_R) == 0)) + if ((pmap != kernel_pmap && (oldpte & PTE_U) == 0) || + (ftype == VM_PROT_WRITE && (oldpte & PTE_W) == 0) || + (ftype == VM_PROT_EXECUTE && (oldpte & PTE_X) == 0) || + (ftype == VM_PROT_READ && (oldpte & PTE_R) == 0)) goto done; - new_l3 = orig_l3 | PTE_A; + bits = PTE_A; if (ftype == VM_PROT_WRITE) - new_l3 |= PTE_D; + bits |= PTE_D; - if
(orig_l3 != new_l3) { - pmap_store(l3, new_l3); - pmap_invalidate_page(pmap, va); - rv = 1; - goto done; - } - - /* - * XXX: This case should never happen since it means - * the PTE shouldn't have resulted in a fault. + /* + * Spurious faults can occur if the implementation caches invalid + * entries in the TLB, or if simultaneous accesses on multiple CPUs + * race with each other. */ - + if ((oldpte & bits) != bits) + pmap_store_bits(pte, bits); + sfence_vma(); + rv = 1; done: PMAP_UNLOCK(pmap); + return (rv); +} +static bool +pmap_demote_l2(pmap_t pmap, pd_entry_t *l2, vm_offset_t va) +{ + struct rwlock *lock; + bool rv; + + lock = NULL; + rv = pmap_demote_l2_locked(pmap, l2, va, &lock); + if (lock != NULL) + rw_wunlock(lock); return (rv); } /* + * Tries to demote a 2MB page mapping. If demotion fails, the 2MB page + * mapping is invalidated. + */ +static bool +pmap_demote_l2_locked(pmap_t pmap, pd_entry_t *l2, vm_offset_t va, + struct rwlock **lockp) +{ + struct spglist free; + vm_page_t mpte; + pd_entry_t newl2, oldl2; + pt_entry_t *firstl3, newl3; + vm_paddr_t mptepa; + int i; + + PMAP_LOCK_ASSERT(pmap, MA_OWNED); + + oldl2 = pmap_load(l2); + KASSERT((oldl2 & PTE_RWX) != 0, + ("pmap_demote_l2_locked: oldl2 is not a leaf entry")); + if ((oldl2 & PTE_A) == 0 || (mpte = pmap_remove_pt_page(pmap, va)) == + NULL) { + if ((oldl2 & PTE_A) == 0 || (mpte = vm_page_alloc(NULL, + pmap_l2_pindex(va), (VIRT_IN_DMAP(va) ? VM_ALLOC_INTERRUPT : + VM_ALLOC_NORMAL) | VM_ALLOC_NOOBJ | VM_ALLOC_WIRED)) == + NULL) { + SLIST_INIT(&free); + (void)pmap_remove_l2(pmap, l2, va & ~L2_OFFSET, + pmap_load(pmap_l1(pmap, va)), &free, lockp); + vm_page_free_pages_toq(&free, true); + CTR2(KTR_PMAP, "pmap_demote_l2_locked: " + "failure for va %#lx in pmap %p", va, pmap); + return (false); + } + if (va < VM_MAXUSER_ADDRESS) + pmap_resident_count_inc(pmap, 1); + } + mptepa = VM_PAGE_TO_PHYS(mpte); + firstl3 = (pt_entry_t *)PHYS_TO_DMAP(mptepa); + newl2 = ((mptepa / PAGE_SIZE) << PTE_PPN0_S) | PTE_V; + KASSERT((oldl2 & PTE_A) != 0, + ("pmap_demote_l2_locked: oldl2 is missing PTE_A")); + KASSERT((oldl2 & (PTE_D | PTE_W)) != PTE_W, + ("pmap_demote_l2_locked: oldl2 is missing PTE_D")); + newl3 = oldl2; + + /* + * If the page table page is new, initialize it. + */ + if (mpte->wire_count == 1) { + mpte->wire_count = Ln_ENTRIES; + for (i = 0; i < Ln_ENTRIES; i++) + pmap_store(firstl3 + i, newl3 + (i << PTE_PPN0_S)); + } + KASSERT(PTE_TO_PHYS(pmap_load(firstl3)) == PTE_TO_PHYS(newl3), + ("pmap_demote_l2_locked: firstl3 and newl3 map different physical " + "addresses")); + + /* + * If the mapping has changed attributes, update the page table + * entries. + */ + if ((pmap_load(firstl3) & PTE_PROMOTE) != (newl3 & PTE_PROMOTE)) + for (i = 0; i < Ln_ENTRIES; i++) + pmap_store(firstl3 + i, newl3 + (i << PTE_PPN0_S)); + + /* + * The spare PV entries must be reserved prior to demoting the + * mapping, that is, prior to changing the L2 entry. Otherwise, the + * state of the L2 entry and the PV lists will be inconsistent, which + * can result in reclaim_pv_chunk() attempting to remove a PV entry from + * the wrong PV list and pmap_pv_demote_l2() failing to find the + * expected PV entry for the 2MB page mapping that is being demoted. + */ + if ((oldl2 & PTE_SW_MANAGED) != 0) + reserve_pv_entries(pmap, Ln_ENTRIES - 1, lockp); + + /* + * Demote the mapping. + */ + pmap_store(l2, newl2); + + /* + * Demote the PV entry. 
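+ * (pmap_pv_demote_l2() consumes the PV entries reserved above, replacing + * the single 2MB PV entry with one entry per 4KB page.)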
+ */ + if ((oldl2 & PTE_SW_MANAGED) != 0) + pmap_pv_demote_l2(pmap, va, PTE_TO_PHYS(oldl2), lockp); + + atomic_add_long(&pmap_l2_demotions, 1); + CTR2(KTR_PMAP, "pmap_demote_l2_locked: success for va %#lx in pmap %p", + va, pmap); + return (true); +} + +#if VM_NRESERVLEVEL > 0 +static void +pmap_promote_l2(pmap_t pmap, pd_entry_t *l2, vm_offset_t va, + struct rwlock **lockp) +{ + pt_entry_t *firstl3, *l3; + vm_paddr_t pa; + vm_page_t ml3; + + PMAP_LOCK_ASSERT(pmap, MA_OWNED); + + va &= ~L2_OFFSET; + KASSERT((pmap_load(l2) & PTE_RWX) == 0, + ("pmap_promote_l2: invalid l2 entry %p", l2)); + + firstl3 = (pt_entry_t *)PHYS_TO_DMAP(PTE_TO_PHYS(pmap_load(l2))); + pa = PTE_TO_PHYS(pmap_load(firstl3)); + if ((pa & L2_OFFSET) != 0) { + CTR2(KTR_PMAP, "pmap_promote_l2: failure for va %#lx pmap %p", + va, pmap); + atomic_add_long(&pmap_l2_p_failures, 1); + return; + } + + pa += PAGE_SIZE; + for (l3 = firstl3 + 1; l3 < firstl3 + Ln_ENTRIES; l3++) { + if (PTE_TO_PHYS(pmap_load(l3)) != pa) { + CTR2(KTR_PMAP, + "pmap_promote_l2: failure for va %#lx pmap %p", + va, pmap); + atomic_add_long(&pmap_l2_p_failures, 1); + return; + } + if ((pmap_load(l3) & PTE_PROMOTE) != + (pmap_load(firstl3) & PTE_PROMOTE)) { + CTR2(KTR_PMAP, + "pmap_promote_l2: failure for va %#lx pmap %p", + va, pmap); + atomic_add_long(&pmap_l2_p_failures, 1); + return; + } + pa += PAGE_SIZE; + } + + ml3 = PHYS_TO_VM_PAGE(PTE_TO_PHYS(pmap_load(l2))); + KASSERT(ml3->pindex == pmap_l2_pindex(va), + ("pmap_promote_l2: page table page's pindex is wrong")); + if (pmap_insert_pt_page(pmap, ml3)) { + CTR2(KTR_PMAP, "pmap_promote_l2: failure for va %#lx pmap %p", + va, pmap); + atomic_add_long(&pmap_l2_p_failures, 1); + return; + } + + if ((pmap_load(firstl3) & PTE_SW_MANAGED) != 0) + pmap_pv_promote_l2(pmap, va, PTE_TO_PHYS(pmap_load(firstl3)), + lockp); + + pmap_store(l2, pmap_load(firstl3)); + + atomic_add_long(&pmap_l2_promotions, 1); + CTR2(KTR_PMAP, "pmap_promote_l2: success for va %#lx in pmap %p", va, + pmap); +} +#endif + +/* * Insert the given physical page (p) at * the specified virtual address (v) in the * target physical map with the protection requested. * * If specified, the page will be wired down, meaning * that the related pte can not be reclaimed. * * NB: This is the only routine which MAY NOT lazy-evaluate * or lose information. That is, this routine must actually * insert this page into the given map NOW. */ int pmap_enter(pmap_t pmap, vm_offset_t va, vm_page_t m, vm_prot_t prot, - u_int flags, int8_t psind __unused) + u_int flags, int8_t psind) { struct rwlock *lock; - pd_entry_t *l1, *l2; + pd_entry_t *l1, *l2, l2e; pt_entry_t new_l3, orig_l3; pt_entry_t *l3; pv_entry_t pv; vm_paddr_t opa, pa, l2_pa, l3_pa; vm_page_t mpte, om, l2_m, l3_m; - boolean_t nosleep; pt_entry_t entry; - pn_t l2_pn; - pn_t l3_pn; - pn_t pn; + pn_t l2_pn, l3_pn, pn; + int rv; + bool nosleep; va = trunc_page(va); if ((m->oflags & VPO_UNMANAGED) == 0 && !vm_page_xbusied(m)) VM_OBJECT_ASSERT_LOCKED(m->object); pa = VM_PAGE_TO_PHYS(m); pn = (pa / PAGE_SIZE); new_l3 = PTE_V | PTE_R | PTE_A; if (prot & VM_PROT_EXECUTE) new_l3 |= PTE_X; if (flags & VM_PROT_WRITE) new_l3 |= PTE_D; if (prot & VM_PROT_WRITE) new_l3 |= PTE_W; - if ((va >> 63) == 0) + if (va < VM_MAX_USER_ADDRESS) new_l3 |= PTE_U; new_l3 |= (pn << PTE_PPN0_S); if ((flags & PMAP_ENTER_WIRED) != 0) new_l3 |= PTE_SW_WIRED; /* * Set modified bit gratuitously for writeable mappings if * the page is unmanaged. We do not want to take a fault * to do the dirty bit accounting for these mappings. 
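 * Managed mappings that are not created by a write access start with PTE_D * clear and rely on pmap_fault_fixup() to set it on the first write.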
*/ if ((m->oflags & VPO_UNMANAGED) != 0) { if (prot & VM_PROT_WRITE) new_l3 |= PTE_D; } else new_l3 |= PTE_SW_MANAGED; CTR2(KTR_PMAP, "pmap_enter: %.16lx -> %.16lx", va, pa); - mpte = NULL; - lock = NULL; + mpte = NULL; rw_rlock(&pvh_global_lock); PMAP_LOCK(pmap); + if (psind == 1) { + /* Assert the required virtual and physical alignment. */ + KASSERT((va & L2_OFFSET) == 0, + ("pmap_enter: va %#lx unaligned", va)); + KASSERT(m->psind > 0, ("pmap_enter: m->psind < psind")); + rv = pmap_enter_l2(pmap, va, new_l3, flags, m, &lock); + goto out; + } - if (va < VM_MAXUSER_ADDRESS) { + l2 = pmap_l2(pmap, va); + if (l2 != NULL && ((l2e = pmap_load(l2)) & PTE_V) != 0 && + ((l2e & PTE_RWX) == 0 || pmap_demote_l2_locked(pmap, l2, + va, &lock))) { + l3 = pmap_l2_to_l3(l2, va); + if (va < VM_MAXUSER_ADDRESS) { + mpte = PHYS_TO_VM_PAGE(PTE_TO_PHYS(pmap_load(l2))); + mpte->wire_count++; + } + } else if (va < VM_MAXUSER_ADDRESS) { nosleep = (flags & PMAP_ENTER_NOSLEEP) != 0; mpte = pmap_alloc_l3(pmap, va, nosleep ? NULL : &lock); if (mpte == NULL && nosleep) { CTR0(KTR_PMAP, "pmap_enter: mpte == NULL"); if (lock != NULL) rw_wunlock(lock); rw_runlock(&pvh_global_lock); PMAP_UNLOCK(pmap); return (KERN_RESOURCE_SHORTAGE); } l3 = pmap_l3(pmap, va); } else { l3 = pmap_l3(pmap, va); /* TODO: This is not optimal, but should mostly work */ if (l3 == NULL) { - l2 = pmap_l2(pmap, va); if (l2 == NULL) { l2_m = vm_page_alloc(NULL, 0, VM_ALLOC_NORMAL | VM_ALLOC_NOOBJ | VM_ALLOC_WIRED | VM_ALLOC_ZERO); if (l2_m == NULL) panic("pmap_enter: l2 pte_m == NULL"); if ((l2_m->flags & PG_ZERO) == 0) pmap_zero_page(l2_m); l2_pa = VM_PAGE_TO_PHYS(l2_m); l2_pn = (l2_pa / PAGE_SIZE); l1 = pmap_l1(pmap, va); entry = (PTE_V); entry |= (l2_pn << PTE_PPN0_S); pmap_store(l1, entry); pmap_distribute_l1(pmap, pmap_l1_index(va), entry); l2 = pmap_l1_to_l2(l1, va); } - KASSERT(l2 != NULL, - ("No l2 table after allocating one")); - l3_m = vm_page_alloc(NULL, 0, VM_ALLOC_NORMAL | VM_ALLOC_NOOBJ | VM_ALLOC_WIRED | VM_ALLOC_ZERO); if (l3_m == NULL) panic("pmap_enter: l3 pte_m == NULL"); if ((l3_m->flags & PG_ZERO) == 0) pmap_zero_page(l3_m); l3_pa = VM_PAGE_TO_PHYS(l3_m); l3_pn = (l3_pa / PAGE_SIZE); entry = (PTE_V); entry |= (l3_pn << PTE_PPN0_S); pmap_store(l2, entry); l3 = pmap_l2_to_l3(l2, va); } pmap_invalidate_page(pmap, va); } orig_l3 = pmap_load(l3); opa = PTE_TO_PHYS(orig_l3); pv = NULL; /* * Is the specified virtual address already mapped? */ if ((orig_l3 & PTE_V) != 0) { /* * Wiring change, just update stats. We don't worry about * wiring PT pages as they remain resident as long as there * are valid mappings in them. Hence, if a user page is wired, * the PT page will be also. */ if ((flags & PMAP_ENTER_WIRED) != 0 && (orig_l3 & PTE_SW_WIRED) == 0) pmap->pm_stats.wired_count++; else if ((flags & PMAP_ENTER_WIRED) == 0 && (orig_l3 & PTE_SW_WIRED) != 0) pmap->pm_stats.wired_count--; /* * Remove the extra PT page reference. */ if (mpte != NULL) { mpte->wire_count--; KASSERT(mpte->wire_count > 0, ("pmap_enter: missing reference to page table page," " va: 0x%lx", va)); } /* * Has the physical page changed? */ if (opa == pa) { /* * No, might be a protection or wiring change. */ if ((orig_l3 & PTE_SW_MANAGED) != 0 && (new_l3 & PTE_W) != 0) vm_page_aflag_set(m, PGA_WRITEABLE); goto validate; } /* * The physical page has changed. Temporarily invalidate * the mapping. This ensures that all threads sharing the * pmap keep a consistent view of the mapping, which is * necessary for the correct handling of COW faults. 
It * also permits reuse of the old mapping's PV entry, * avoiding an allocation. * * For consistency, handle unmanaged mappings the same way. */ orig_l3 = pmap_load_clear(l3); KASSERT(PTE_TO_PHYS(orig_l3) == opa, ("pmap_enter: unexpected pa update for %#lx", va)); if ((orig_l3 & PTE_SW_MANAGED) != 0) { om = PHYS_TO_VM_PAGE(opa); /* * The pmap lock is sufficient to synchronize with * concurrent calls to pmap_page_test_mappings() and * pmap_ts_referenced(). */ if ((orig_l3 & PTE_D) != 0) vm_page_dirty(om); if ((orig_l3 & PTE_A) != 0) vm_page_aflag_set(om, PGA_REFERENCED); CHANGE_PV_LIST_LOCK_TO_PHYS(&lock, opa); pv = pmap_pvh_remove(&om->md, pmap, va); + KASSERT(pv != NULL, + ("pmap_enter: no PV entry for %#lx", va)); if ((new_l3 & PTE_SW_MANAGED) == 0) free_pv_entry(pmap, pv); if ((om->aflags & PGA_WRITEABLE) != 0 && TAILQ_EMPTY(&om->md.pv_list)) vm_page_aflag_clear(om, PGA_WRITEABLE); } pmap_invalidate_page(pmap, va); orig_l3 = 0; } else { /* * Increment the counters. */ if ((new_l3 & PTE_SW_WIRED) != 0) pmap->pm_stats.wired_count++; pmap_resident_count_inc(pmap, 1); } /* * Enter on the PV list if part of our managed memory. */ if ((new_l3 & PTE_SW_MANAGED) != 0) { if (pv == NULL) { pv = get_pv_entry(pmap, &lock); pv->pv_va = va; } CHANGE_PV_LIST_LOCK_TO_PHYS(&lock, pa); TAILQ_INSERT_TAIL(&m->md.pv_list, pv, pv_next); m->md.pv_gen++; if ((new_l3 & PTE_W) != 0) vm_page_aflag_set(m, PGA_WRITEABLE); } validate: /* * Sync the i-cache on all harts before updating the PTE * if the new PTE is executable. */ if (prot & VM_PROT_EXECUTE) pmap_sync_icache(pmap, va, PAGE_SIZE); /* * Update the L3 entry. */ if (orig_l3 != 0) { orig_l3 = pmap_load_store(l3, new_l3); pmap_invalidate_page(pmap, va); KASSERT(PTE_TO_PHYS(orig_l3) == pa, ("pmap_enter: invalid update")); if ((orig_l3 & (PTE_D | PTE_SW_MANAGED)) == (PTE_D | PTE_SW_MANAGED)) vm_page_dirty(m); } else { pmap_store(l3, new_l3); } +#if VM_NRESERVLEVEL > 0 + if (mpte != NULL && mpte->wire_count == Ln_ENTRIES && + pmap_ps_enabled(pmap) && + (m->flags & PG_FICTITIOUS) == 0 && + vm_reserv_level_iffullpop(m) == 0) + pmap_promote_l2(pmap, l2, va, &lock); +#endif + + rv = KERN_SUCCESS; +out: if (lock != NULL) rw_wunlock(lock); rw_runlock(&pvh_global_lock); PMAP_UNLOCK(pmap); + return (rv); +} + +/* + * Tries to create a read- and/or execute-only 2MB page mapping. Returns true + * if successful. Returns false if (1) a page table page cannot be allocated + * without sleeping, (2) a mapping already exists at the specified virtual + * address, or (3) a PV entry cannot be allocated without reclaiming another + * PV entry. + */ +static bool +pmap_enter_2mpage(pmap_t pmap, vm_offset_t va, vm_page_t m, vm_prot_t prot, + struct rwlock **lockp) +{ + pd_entry_t new_l2; + pn_t pn; + + PMAP_LOCK_ASSERT(pmap, MA_OWNED); + + pn = VM_PAGE_TO_PHYS(m) / PAGE_SIZE; + new_l2 = (pd_entry_t)((pn << PTE_PPN0_S) | PTE_R | PTE_V); + if ((m->oflags & VPO_UNMANAGED) == 0) + new_l2 |= PTE_SW_MANAGED; + if ((prot & VM_PROT_EXECUTE) != 0) + new_l2 |= PTE_X; + if (va < VM_MAXUSER_ADDRESS) + new_l2 |= PTE_U; + return (pmap_enter_l2(pmap, va, new_l2, PMAP_ENTER_NOSLEEP | + PMAP_ENTER_NOREPLACE | PMAP_ENTER_NORECLAIM, NULL, lockp) == + KERN_SUCCESS); +} + +/* + * Tries to create the specified 2MB page mapping. Returns KERN_SUCCESS if + * the mapping was created, and either KERN_FAILURE or KERN_RESOURCE_SHORTAGE + * otherwise. Returns KERN_FAILURE if PMAP_ENTER_NOREPLACE was specified and + * a mapping already exists at the specified virtual address. 
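The return-value contract continues just below; in practice, a caller treats anything other than KERN_SUCCESS as a cue to fall back to 4KB mappings. A hedged sketch of such a caller, where try_enter_2m() and enter_4k_pages() are hypothetical stand-ins and the KERN_* constants mirror FreeBSD's vm_param.h values:

#define	KERN_SUCCESS		0
#define	KERN_FAILURE		5
#define	KERN_RESOURCE_SHORTAGE	6

/* Hypothetical stand-ins, not kernel API. */
static int try_enter_2m(unsigned long va) { (void)va; return (KERN_FAILURE); }
static int enter_4k_pages(unsigned long va) { (void)va; return (0); }

static int
enter_with_fallback(unsigned long va)
{
	switch (try_enter_2m(va)) {
	case KERN_SUCCESS:
		return (0);	/* one 2MB mapping was installed */
	case KERN_FAILURE:	/* NOREPLACE: already mapped there */
	case KERN_RESOURCE_SHORTAGE:	/* NOSLEEP/NORECLAIM failed */
	default:
		return (enter_4k_pages(va));	/* map 512 4KB pages */
	}
}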
Returns + KERN_RESOURCE_SHORTAGE if PMAP_ENTER_NOSLEEP was specified and a page table + * page allocation failed. Returns KERN_RESOURCE_SHORTAGE if + * PMAP_ENTER_NORECLAIM was specified and a PV entry allocation failed. + * + * The parameter "m" is only used when creating a managed, writeable mapping. + */ +static int +pmap_enter_l2(pmap_t pmap, vm_offset_t va, pd_entry_t new_l2, u_int flags, + vm_page_t m, struct rwlock **lockp) +{ + struct spglist free; + pd_entry_t *l2, *l3, oldl2; + vm_offset_t sva; + vm_page_t l2pg, mt; + + PMAP_LOCK_ASSERT(pmap, MA_OWNED); + + if ((l2pg = pmap_alloc_l2(pmap, va, (flags & PMAP_ENTER_NOSLEEP) != 0 ? + NULL : lockp)) == NULL) { + CTR2(KTR_PMAP, "pmap_enter_l2: failure for va %#lx in pmap %p", + va, pmap); + return (KERN_RESOURCE_SHORTAGE); + } + + l2 = (pd_entry_t *)PHYS_TO_DMAP(VM_PAGE_TO_PHYS(l2pg)); + l2 = &l2[pmap_l2_index(va)]; + if ((oldl2 = pmap_load(l2)) != 0) { + KASSERT(l2pg->wire_count > 1, + ("pmap_enter_l2: l2pg's wire count is too low")); + if ((flags & PMAP_ENTER_NOREPLACE) != 0) { + l2pg->wire_count--; + CTR2(KTR_PMAP, + "pmap_enter_l2: failure for va %#lx in pmap %p", + va, pmap); + return (KERN_FAILURE); + } + SLIST_INIT(&free); + if ((oldl2 & PTE_RWX) != 0) + (void)pmap_remove_l2(pmap, l2, va, + pmap_load(pmap_l1(pmap, va)), &free, lockp); + else + for (sva = va; sva < va + L2_SIZE; sva += PAGE_SIZE) { + l3 = pmap_l2_to_l3(l2, sva); + if ((pmap_load(l3) & PTE_V) != 0 && + pmap_remove_l3(pmap, l3, sva, oldl2, &free, + lockp) != 0) + break; + } + vm_page_free_pages_toq(&free, true); + if (va >= VM_MAXUSER_ADDRESS) { + mt = PHYS_TO_VM_PAGE(PTE_TO_PHYS(pmap_load(l2))); + if (pmap_insert_pt_page(pmap, mt)) { + /* + * XXX Currently, this can't happen because + * we do not perform pmap_enter(psind == 1) + * on the kernel pmap. + */ + panic("pmap_enter_l2: trie insert failed"); + } + } else + KASSERT(pmap_load(l2) == 0, + ("pmap_enter_l2: non-zero L2 entry %p", l2)); + } + + if ((new_l2 & PTE_SW_MANAGED) != 0) { + /* + * Abort this mapping if its PV entry could not be created. + */ + if (!pmap_pv_insert_l2(pmap, va, new_l2, flags, lockp)) { + SLIST_INIT(&free); + if (pmap_unwire_ptp(pmap, va, l2pg, &free)) { + /* + * Although "va" is not mapped, paging-structure + * caches could nonetheless have entries that + * refer to the freed page table pages. + * Invalidate those entries. + */ + pmap_invalidate_page(pmap, va); + vm_page_free_pages_toq(&free, true); + } + CTR2(KTR_PMAP, + "pmap_enter_l2: failure for va %#lx in pmap %p", + va, pmap); + return (KERN_RESOURCE_SHORTAGE); + } + if ((new_l2 & PTE_W) != 0) + for (mt = m; mt < &m[L2_SIZE / PAGE_SIZE]; mt++) + vm_page_aflag_set(mt, PGA_WRITEABLE); + } + + /* + * Increment counters. + */ + if ((new_l2 & PTE_SW_WIRED) != 0) + pmap->pm_stats.wired_count += L2_SIZE / PAGE_SIZE; + pmap->pm_stats.resident_count += L2_SIZE / PAGE_SIZE; + + /* + * Map the superpage. + */ + pmap_store(l2, new_l2); + + atomic_add_long(&pmap_l2_mappings, 1); + CTR2(KTR_PMAP, "pmap_enter_l2: success for va %#lx in pmap %p", + va, pmap); + return (KERN_SUCCESS); } /* * Maps a sequence of resident pages belonging to the same object. * The sequence begins with the given page m_start. This page is * mapped at the given virtual address start. Each subsequent page is * mapped at a virtual address that is offset from start by the same * amount as the page is offset from m_start within the object.
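In the loop shown a little further below, a 2MB mapping is used opportunistically whenever the virtual address is superpage-aligned, a full superpage still fits before 'end', and the page starts a fully populated 2MB physical run (m->psind == 1). That predicate, modeled as standalone C with the constants defined locally for illustration:

#include <stdbool.h>
#include <stdint.h>

#define	L2_SHIFT	21			/* 2MB superpages (Sv39) */
#define	L2_SIZE		(1UL << L2_SHIFT)
#define	L2_OFFSET	(L2_SIZE - 1)

/* psind == 1 models a page that begins a fully populated 2MB run. */
static bool
can_map_superpage(uint64_t va, uint64_t end, int psind, bool ps_enabled)
{
	return ((va & L2_OFFSET) == 0 &&	/* 2MB-aligned VA */
	    va + L2_SIZE <= end &&		/* fits in the range */
	    psind == 1 &&			/* 2MB physical run */
	    ps_enabled);			/* superpages enabled */
}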
The * last page in the sequence is the page with the largest offset from * m_start that can be mapped at a virtual address less than the given * virtual address end. Not every virtual page between start and end * is mapped; only those for which a resident page exists with the * corresponding offset from m_start are mapped. */ void pmap_enter_object(pmap_t pmap, vm_offset_t start, vm_offset_t end, vm_page_t m_start, vm_prot_t prot) { struct rwlock *lock; vm_offset_t va; vm_page_t m, mpte; vm_pindex_t diff, psize; VM_OBJECT_ASSERT_LOCKED(m_start->object); psize = atop(end - start); mpte = NULL; m = m_start; lock = NULL; rw_rlock(&pvh_global_lock); PMAP_LOCK(pmap); while (m != NULL && (diff = m->pindex - m_start->pindex) < psize) { va = start + ptoa(diff); - mpte = pmap_enter_quick_locked(pmap, va, m, prot, mpte, &lock); + if ((va & L2_OFFSET) == 0 && va + L2_SIZE <= end && + m->psind == 1 && pmap_ps_enabled(pmap) && + pmap_enter_2mpage(pmap, va, m, prot, &lock)) + m = &m[L2_SIZE / PAGE_SIZE - 1]; + else + mpte = pmap_enter_quick_locked(pmap, va, m, prot, mpte, + &lock); m = TAILQ_NEXT(m, listq); } if (lock != NULL) rw_wunlock(lock); rw_runlock(&pvh_global_lock); PMAP_UNLOCK(pmap); } /* * this code makes some *MAJOR* assumptions: * 1. Current pmap & pmap exists. * 2. Not wired. * 3. Read access. * 4. No page table pages. * but is *MUCH* faster than pmap_enter... */ void pmap_enter_quick(pmap_t pmap, vm_offset_t va, vm_page_t m, vm_prot_t prot) { struct rwlock *lock; lock = NULL; rw_rlock(&pvh_global_lock); PMAP_LOCK(pmap); (void)pmap_enter_quick_locked(pmap, va, m, prot, NULL, &lock); if (lock != NULL) rw_wunlock(lock); rw_runlock(&pvh_global_lock); PMAP_UNLOCK(pmap); } static vm_page_t pmap_enter_quick_locked(pmap_t pmap, vm_offset_t va, vm_page_t m, vm_prot_t prot, vm_page_t mpte, struct rwlock **lockp) { struct spglist free; vm_paddr_t phys; pd_entry_t *l2; pt_entry_t *l3, newl3; KASSERT(va < kmi.clean_sva || va >= kmi.clean_eva || (m->oflags & VPO_UNMANAGED) != 0, ("pmap_enter_quick_locked: managed mapping within the clean submap")); rw_assert(&pvh_global_lock, RA_LOCKED); PMAP_LOCK_ASSERT(pmap, MA_OWNED); CTR2(KTR_PMAP, "pmap_enter_quick_locked: %p %lx", pmap, va); /* * In the case that a page table page is not * resident, we are creating it here. */ if (va < VM_MAXUSER_ADDRESS) { vm_pindex_t l2pindex; /* * Calculate pagetable page index */ l2pindex = pmap_l2_pindex(va); if (mpte && (mpte->pindex == l2pindex)) { mpte->wire_count++; } else { /* * Get the l2 entry */ l2 = pmap_l2(pmap, va); /* * If the page table page is mapped, we just increment * the hold count, and activate it. Otherwise, we * attempt to allocate a page table page. If this * attempt fails, we don't retry. Instead, we give up. */ if (l2 != NULL && pmap_load(l2) != 0) { phys = PTE_TO_PHYS(pmap_load(l2)); mpte = PHYS_TO_VM_PAGE(phys); mpte->wire_count++; } else { /* * Pass NULL instead of the PV list lock * pointer, because we don't intend to sleep. */ mpte = _pmap_alloc_l3(pmap, l2pindex, NULL); if (mpte == NULL) return (mpte); } } l3 = (pt_entry_t *)PHYS_TO_DMAP(VM_PAGE_TO_PHYS(mpte)); l3 = &l3[pmap_l3_index(va)]; } else { mpte = NULL; l3 = pmap_l3(kernel_pmap, va); } if (l3 == NULL) panic("pmap_enter_quick_locked: No l3"); if (pmap_load(l3) != 0) { if (mpte != NULL) { mpte->wire_count--; mpte = NULL; } return (mpte); } /* * Enter on the PV list if part of our managed memory. 
*/ if ((m->oflags & VPO_UNMANAGED) == 0 && !pmap_try_insert_pv_entry(pmap, va, m, lockp)) { if (mpte != NULL) { SLIST_INIT(&free); - if (pmap_unwire_l3(pmap, va, mpte, &free)) { + if (pmap_unwire_ptp(pmap, va, mpte, &free)) { pmap_invalidate_page(pmap, va); vm_page_free_pages_toq(&free, false); } mpte = NULL; } return (mpte); } /* * Increment counters */ pmap_resident_count_inc(pmap, 1); newl3 = ((VM_PAGE_TO_PHYS(m) / PAGE_SIZE) << PTE_PPN0_S) | PTE_V | PTE_R; if ((prot & VM_PROT_EXECUTE) != 0) newl3 |= PTE_X; if ((m->oflags & VPO_UNMANAGED) == 0) newl3 |= PTE_SW_MANAGED; if (va < VM_MAX_USER_ADDRESS) newl3 |= PTE_U; /* * Sync the i-cache on all harts before updating the PTE * if the new PTE is executable. */ if (prot & VM_PROT_EXECUTE) pmap_sync_icache(pmap, va, PAGE_SIZE); pmap_store(l3, newl3); pmap_invalidate_page(pmap, va); return (mpte); } /* * This code maps large physical mmap regions into the * processor address space. Note that some shortcuts * are taken, but the code works. */ void pmap_object_init_pt(pmap_t pmap, vm_offset_t addr, vm_object_t object, vm_pindex_t pindex, vm_size_t size) { VM_OBJECT_ASSERT_WLOCKED(object); KASSERT(object->type == OBJT_DEVICE || object->type == OBJT_SG, ("pmap_object_init_pt: non-device object")); } /* * Clear the wired attribute from the mappings for the specified range of * addresses in the given pmap. Every valid mapping within that range * must have the wired attribute set. In contrast, invalid mappings * cannot have the wired attribute set, so they are ignored. * * The wired attribute of the page table entry is not a hardware feature, * so there is no need to invalidate any TLB entries. */ void pmap_unwire(pmap_t pmap, vm_offset_t sva, vm_offset_t eva) { vm_offset_t va_next; - pd_entry_t *l1, *l2; - pt_entry_t *l3; - boolean_t pv_lists_locked; + pd_entry_t *l1, *l2, l2e; + pt_entry_t *l3, l3e; + bool pv_lists_locked; - pv_lists_locked = FALSE; + pv_lists_locked = false; +retry: PMAP_LOCK(pmap); for (; sva < eva; sva = va_next) { l1 = pmap_l1(pmap, sva); if (pmap_load(l1) == 0) { va_next = (sva + L1_SIZE) & ~L1_OFFSET; if (va_next < sva) va_next = eva; continue; } va_next = (sva + L2_SIZE) & ~L2_OFFSET; if (va_next < sva) va_next = eva; l2 = pmap_l1_to_l2(l1, sva); - if (pmap_load(l2) == 0) + if ((l2e = pmap_load(l2)) == 0) continue; + if ((l2e & PTE_RWX) != 0) { + if (sva + L2_SIZE == va_next && eva >= va_next) { + if ((l2e & PTE_SW_WIRED) == 0) + panic("pmap_unwire: l2 %#jx is missing " + "PTE_SW_WIRED", (uintmax_t)l2e); + pmap_clear_bits(l2, PTE_SW_WIRED); + continue; + } else { + if (!pv_lists_locked) { + pv_lists_locked = true; + if (!rw_try_rlock(&pvh_global_lock)) { + PMAP_UNLOCK(pmap); + rw_rlock(&pvh_global_lock); + /* Repeat sva. */ + goto retry; + } + } + if (!pmap_demote_l2(pmap, l2, sva)) + panic("pmap_unwire: demotion failed"); + } + } if (va_next > eva) va_next = eva; for (l3 = pmap_l2_to_l3(l2, sva); sva != va_next; l3++, sva += L3_SIZE) { - if (pmap_load(l3) == 0) + if ((l3e = pmap_load(l3)) == 0) continue; - if ((pmap_load(l3) & PTE_SW_WIRED) == 0) + if ((l3e & PTE_SW_WIRED) == 0) panic("pmap_unwire: l3 %#jx is missing " - "PTE_SW_WIRED", (uintmax_t)*l3); + "PTE_SW_WIRED", (uintmax_t)l3e); /* * PG_W must be cleared atomically. Although the pmap * lock synchronizes access to PG_W, another processor * could be setting PG_M and/or PG_A concurrently. 
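As the surrounding comment notes, the wired bit has to be cleared with an atomic AND so that a concurrent update of the accessed/dirty bits in the same PTE is not lost; that is what the pmap_clear_bits() call below expresses. The same operation in portable C11 (bit positions follow FreeBSD's riscv pte.h, shown here only for illustration):

#include <stdatomic.h>
#include <stdint.h>

#define	PTE_A		(1UL << 6)	/* may be set concurrently */
#define	PTE_D		(1UL << 7)	/* may be set concurrently */
#define	PTE_SW_WIRED	(1UL << 9)	/* software-defined RSW bit */

/* Atomically clear bits without losing concurrent updates to others. */
static void
pte_clear_bits(_Atomic uint64_t *ptep, uint64_t bits)
{
	atomic_fetch_and(ptep, ~bits);
}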
*/ - atomic_clear_long(l3, PTE_SW_WIRED); + pmap_clear_bits(l3, PTE_SW_WIRED); pmap->pm_stats.wired_count--; } } if (pv_lists_locked) rw_runlock(&pvh_global_lock); PMAP_UNLOCK(pmap); } /* * Copy the range specified by src_addr/len * from the source map to the range dst_addr/len * in the destination map. * * This routine is only advisory and need not do anything. */ void pmap_copy(pmap_t dst_pmap, pmap_t src_pmap, vm_offset_t dst_addr, vm_size_t len, vm_offset_t src_addr) { } /* * pmap_zero_page zeros the specified hardware page by mapping * the page into KVM and using bzero to clear its contents. */ void pmap_zero_page(vm_page_t m) { vm_offset_t va = PHYS_TO_DMAP(VM_PAGE_TO_PHYS(m)); pagezero((void *)va); } /* * pmap_zero_page_area zeros the specified hardware page by mapping * the page into KVM and using bzero to clear its contents. * * off and size may not cover an area beyond a single hardware page. */ void pmap_zero_page_area(vm_page_t m, int off, int size) { vm_offset_t va = PHYS_TO_DMAP(VM_PAGE_TO_PHYS(m)); if (off == 0 && size == PAGE_SIZE) pagezero((void *)va); else bzero((char *)va + off, size); } /* * pmap_copy_page copies the specified (machine independent) * page by mapping the page into virtual memory and using * bcopy to copy the page, one machine dependent page at a * time. */ void pmap_copy_page(vm_page_t msrc, vm_page_t mdst) { vm_offset_t src = PHYS_TO_DMAP(VM_PAGE_TO_PHYS(msrc)); vm_offset_t dst = PHYS_TO_DMAP(VM_PAGE_TO_PHYS(mdst)); pagecopy((void *)src, (void *)dst); } int unmapped_buf_allowed = 1; void pmap_copy_pages(vm_page_t ma[], vm_offset_t a_offset, vm_page_t mb[], vm_offset_t b_offset, int xfersize) { void *a_cp, *b_cp; vm_page_t m_a, m_b; vm_paddr_t p_a, p_b; vm_offset_t a_pg_offset, b_pg_offset; int cnt; while (xfersize > 0) { a_pg_offset = a_offset & PAGE_MASK; m_a = ma[a_offset >> PAGE_SHIFT]; p_a = m_a->phys_addr; b_pg_offset = b_offset & PAGE_MASK; m_b = mb[b_offset >> PAGE_SHIFT]; p_b = m_b->phys_addr; cnt = min(xfersize, PAGE_SIZE - a_pg_offset); cnt = min(cnt, PAGE_SIZE - b_pg_offset); if (__predict_false(!PHYS_IN_DMAP(p_a))) { panic("!DMAP a %lx", p_a); } else { a_cp = (char *)PHYS_TO_DMAP(p_a) + a_pg_offset; } if (__predict_false(!PHYS_IN_DMAP(p_b))) { panic("!DMAP b %lx", p_b); } else { b_cp = (char *)PHYS_TO_DMAP(p_b) + b_pg_offset; } bcopy(a_cp, b_cp, cnt); a_offset += cnt; b_offset += cnt; xfersize -= cnt; } } vm_offset_t pmap_quick_enter_page(vm_page_t m) { return (PHYS_TO_DMAP(VM_PAGE_TO_PHYS(m))); } void pmap_quick_remove_page(vm_offset_t addr) { } /* * Returns true if the pmap's pv is one of the first * 16 pvs linked to from this page. This count may * be changed upwards or downwards in the future; it * is only necessary that true be returned for a small * subset of pmaps for proper page aging. 
*/ boolean_t pmap_page_exists_quick(pmap_t pmap, vm_page_t m) { + struct md_page *pvh; struct rwlock *lock; pv_entry_t pv; int loops = 0; boolean_t rv; KASSERT((m->oflags & VPO_UNMANAGED) == 0, ("pmap_page_exists_quick: page %p is not managed", m)); rv = FALSE; rw_rlock(&pvh_global_lock); lock = VM_PAGE_TO_PV_LIST_LOCK(m); rw_rlock(lock); TAILQ_FOREACH(pv, &m->md.pv_list, pv_next) { if (PV_PMAP(pv) == pmap) { rv = TRUE; break; } loops++; if (loops >= 16) break; } + if (!rv && loops < 16 && (m->flags & PG_FICTITIOUS) == 0) { + pvh = pa_to_pvh(VM_PAGE_TO_PHYS(m)); + TAILQ_FOREACH(pv, &pvh->pv_list, pv_next) { + if (PV_PMAP(pv) == pmap) { + rv = TRUE; + break; + } + loops++; + if (loops >= 16) + break; + } + } rw_runlock(lock); rw_runlock(&pvh_global_lock); return (rv); } /* * pmap_page_wired_mappings: * * Return the number of managed mappings to the given physical page * that are wired. */ int pmap_page_wired_mappings(vm_page_t m) { + struct md_page *pvh; struct rwlock *lock; pmap_t pmap; + pd_entry_t *l2; pt_entry_t *l3; pv_entry_t pv; - int count, md_gen; + int count, md_gen, pvh_gen; if ((m->oflags & VPO_UNMANAGED) != 0) return (0); rw_rlock(&pvh_global_lock); lock = VM_PAGE_TO_PV_LIST_LOCK(m); rw_rlock(lock); restart: count = 0; TAILQ_FOREACH(pv, &m->md.pv_list, pv_next) { pmap = PV_PMAP(pv); if (!PMAP_TRYLOCK(pmap)) { md_gen = m->md.pv_gen; rw_runlock(lock); PMAP_LOCK(pmap); rw_rlock(lock); if (md_gen != m->md.pv_gen) { PMAP_UNLOCK(pmap); goto restart; } } l3 = pmap_l3(pmap, pv->pv_va); if ((pmap_load(l3) & PTE_SW_WIRED) != 0) count++; PMAP_UNLOCK(pmap); } + if ((m->flags & PG_FICTITIOUS) == 0) { + pvh = pa_to_pvh(VM_PAGE_TO_PHYS(m)); + TAILQ_FOREACH(pv, &pvh->pv_list, pv_next) { + pmap = PV_PMAP(pv); + if (!PMAP_TRYLOCK(pmap)) { + md_gen = m->md.pv_gen; + pvh_gen = pvh->pv_gen; + rw_runlock(lock); + PMAP_LOCK(pmap); + rw_rlock(lock); + if (md_gen != m->md.pv_gen || + pvh_gen != pvh->pv_gen) { + PMAP_UNLOCK(pmap); + goto restart; + } + } + l2 = pmap_l2(pmap, pv->pv_va); + if ((pmap_load(l2) & PTE_SW_WIRED) != 0) + count++; + PMAP_UNLOCK(pmap); + } + } rw_runlock(lock); rw_runlock(&pvh_global_lock); return (count); } +static void +pmap_remove_pages_pv(pmap_t pmap, vm_page_t m, pv_entry_t pv, + struct spglist *free, bool superpage) +{ + struct md_page *pvh; + vm_page_t mpte, mt; + + if (superpage) { + pmap_resident_count_dec(pmap, Ln_ENTRIES); + pvh = pa_to_pvh(m->phys_addr); + TAILQ_REMOVE(&pvh->pv_list, pv, pv_next); + pvh->pv_gen++; + if (TAILQ_EMPTY(&pvh->pv_list)) { + for (mt = m; mt < &m[Ln_ENTRIES]; mt++) + if (TAILQ_EMPTY(&mt->md.pv_list) && + (mt->aflags & PGA_WRITEABLE) != 0) + vm_page_aflag_clear(mt, PGA_WRITEABLE); + } + mpte = pmap_remove_pt_page(pmap, pv->pv_va); + if (mpte != NULL) { + pmap_resident_count_dec(pmap, 1); + KASSERT(mpte->wire_count == Ln_ENTRIES, + ("pmap_remove_pages: pte page wire count error")); + mpte->wire_count = 0; + pmap_add_delayed_free_list(mpte, free, FALSE); + } + } else { + pmap_resident_count_dec(pmap, 1); + TAILQ_REMOVE(&m->md.pv_list, pv, pv_next); + m->md.pv_gen++; + if (TAILQ_EMPTY(&m->md.pv_list) && + (m->aflags & PGA_WRITEABLE) != 0) { + pvh = pa_to_pvh(m->phys_addr); + if (TAILQ_EMPTY(&pvh->pv_list)) + vm_page_aflag_clear(m, PGA_WRITEABLE); + } + } +} + /* * Destroy all managed, non-wired mappings in the given user-space * pmap. This pmap cannot be active on any processor besides the * caller. * * This function cannot be applied to the kernel pmap. Moreover, it * is not intended for general use. 
It is only to be used during * process termination. Consequently, it can be implemented in ways * that make it faster than pmap_remove(). First, it can more quickly * destroy mappings by iterating over the pmap's collection of PV * entries, rather than searching the page table. Second, it doesn't * have to test and clear the page table entries atomically, because * no processor is currently accessing the user address space. In * particular, a page table entry's dirty bit won't change state once * this function starts. */ void pmap_remove_pages(pmap_t pmap) { - pd_entry_t ptepde, *l2; - pt_entry_t *l3, tl3; struct spglist free; - vm_page_t m; + pd_entry_t ptepde; + pt_entry_t *pte, tpte; + vm_page_t m, mt; pv_entry_t pv; struct pv_chunk *pc, *npc; struct rwlock *lock; int64_t bit; uint64_t inuse, bitmask; int allfree, field, freed, idx; - vm_paddr_t pa; + bool superpage; lock = NULL; SLIST_INIT(&free); rw_rlock(&pvh_global_lock); PMAP_LOCK(pmap); TAILQ_FOREACH_SAFE(pc, &pmap->pm_pvchunk, pc_list, npc) { allfree = 1; freed = 0; for (field = 0; field < _NPCM; field++) { inuse = ~pc->pc_map[field] & pc_freemask[field]; while (inuse != 0) { bit = ffsl(inuse) - 1; bitmask = 1UL << bit; idx = field * 64 + bit; pv = &pc->pc_pventry[idx]; inuse &= ~bitmask; - l2 = pmap_l2(pmap, pv->pv_va); - ptepde = pmap_load(l2); - l3 = pmap_l2_to_l3(l2, pv->pv_va); - tl3 = pmap_load(l3); + pte = pmap_l1(pmap, pv->pv_va); + ptepde = pmap_load(pte); + pte = pmap_l1_to_l2(pte, pv->pv_va); + tpte = pmap_load(pte); + if ((tpte & PTE_RWX) != 0) { + superpage = true; + } else { + ptepde = tpte; + pte = pmap_l2_to_l3(pte, pv->pv_va); + tpte = pmap_load(pte); + superpage = false; + } /* * We cannot remove wired pages from a * process' mapping at this time. */ - if (tl3 & PTE_SW_WIRED) { + if (tpte & PTE_SW_WIRED) { allfree = 0; continue; } - pa = PTE_TO_PHYS(tl3); - m = PHYS_TO_VM_PAGE(pa); - KASSERT(m->phys_addr == pa, - ("vm_page_t %p phys_addr mismatch %016jx %016jx", - m, (uintmax_t)m->phys_addr, - (uintmax_t)tl3)); - + m = PHYS_TO_VM_PAGE(PTE_TO_PHYS(tpte)); KASSERT((m->flags & PG_FICTITIOUS) != 0 || m < &vm_page_array[vm_page_array_size], - ("pmap_remove_pages: bad l3 %#jx", - (uintmax_t)tl3)); + ("pmap_remove_pages: bad pte %#jx", + (uintmax_t)tpte)); - pmap_clear(l3); + pmap_clear(pte); /* * Update the vm_page_t clean/reference bits. */ - if ((tl3 & PTE_D) != 0) - vm_page_dirty(m); + if ((tpte & (PTE_D | PTE_W)) == + (PTE_D | PTE_W)) { + if (superpage) + for (mt = m; + mt < &m[Ln_ENTRIES]; mt++) + vm_page_dirty(mt); + else + vm_page_dirty(m); + } CHANGE_PV_LIST_LOCK_TO_VM_PAGE(&lock, m); /* Mark free */ pc->pc_map[field] |= bitmask; - pmap_resident_count_dec(pmap, 1); - TAILQ_REMOVE(&m->md.pv_list, pv, pv_next); - m->md.pv_gen++; - if (TAILQ_EMPTY(&m->md.pv_list) && - (m->aflags & PGA_WRITEABLE) != 0) - vm_page_aflag_clear(m, PGA_WRITEABLE); - - pmap_unuse_l3(pmap, pv->pv_va, ptepde, &free); + pmap_remove_pages_pv(pmap, m, pv, &free, + superpage); + pmap_unuse_pt(pmap, pv->pv_va, ptepde, &free); freed++; } } PV_STAT(atomic_add_long(&pv_entry_frees, freed)); PV_STAT(atomic_add_int(&pv_entry_spare, freed)); PV_STAT(atomic_subtract_long(&pv_entry_count, freed)); if (allfree) { TAILQ_REMOVE(&pmap->pm_pvchunk, pc, pc_list); free_pv_chunk(pc); } } if (lock != NULL) rw_wunlock(lock); pmap_invalidate_all(pmap); rw_runlock(&pvh_global_lock); PMAP_UNLOCK(pmap); vm_page_free_pages_toq(&free, false); } -/* - * This is used to check if a page has been accessed or modified. 
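In the reworked pmap_remove_pages() above, the walk starts at the L1 table and classifies each L2 entry: a valid entry with any of R/W/X set is itself a 2MB leaf (a superpage), while a valid entry with R, W and X all clear points at an L3 page table. A standalone model of that classification, with the bit layout per the RISC-V privileged spec and names local to the example:

#include <stdbool.h>
#include <stdint.h>

#define	PTE_V	(1UL << 0)
#define	PTE_R	(1UL << 1)
#define	PTE_W	(1UL << 2)
#define	PTE_X	(1UL << 3)
#define	PTE_RWX	(PTE_R | PTE_W | PTE_X)

/*
 * A valid entry with R/W/X clear is a pointer to the next level;
 * a valid entry with any of R/W/X set is a leaf at this level.
 */
static bool
l2_is_superpage(uint64_t l2e)
{
	return ((l2e & PTE_V) != 0 && (l2e & PTE_RWX) != 0);
}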
As we - * don't have a bit to see if it has been modified we have to assume it - * has been if the page is read/write. - */ -static boolean_t +static bool pmap_page_test_mappings(vm_page_t m, boolean_t accessed, boolean_t modified) { + struct md_page *pvh; struct rwlock *lock; + pd_entry_t *l2; + pt_entry_t *l3, mask; pv_entry_t pv; - pt_entry_t *l3, mask, value; pmap_t pmap; - int md_gen; - boolean_t rv; + int md_gen, pvh_gen; + bool rv; + mask = 0; + if (modified) + mask |= PTE_D; + if (accessed) + mask |= PTE_A; + rv = FALSE; rw_rlock(&pvh_global_lock); lock = VM_PAGE_TO_PV_LIST_LOCK(m); rw_rlock(lock); restart: TAILQ_FOREACH(pv, &m->md.pv_list, pv_next) { pmap = PV_PMAP(pv); if (!PMAP_TRYLOCK(pmap)) { md_gen = m->md.pv_gen; rw_runlock(lock); PMAP_LOCK(pmap); rw_rlock(lock); if (md_gen != m->md.pv_gen) { PMAP_UNLOCK(pmap); goto restart; } } l3 = pmap_l3(pmap, pv->pv_va); - mask = 0; - value = 0; - if (modified) { - mask |= PTE_D; - value |= PTE_D; - } - if (accessed) { - mask |= PTE_A; - value |= PTE_A; - } - -#if 0 - if (modified) { - mask |= ATTR_AP_RW_BIT; - value |= ATTR_AP(ATTR_AP_RW); - } - if (accessed) { - mask |= ATTR_AF | ATTR_DESCR_MASK; - value |= ATTR_AF | L3_PAGE; - } -#endif - - rv = (pmap_load(l3) & mask) == value; + rv = (pmap_load(l3) & mask) == mask; PMAP_UNLOCK(pmap); if (rv) goto out; } + if ((m->flags & PG_FICTITIOUS) == 0) { + pvh = pa_to_pvh(VM_PAGE_TO_PHYS(m)); + TAILQ_FOREACH(pv, &pvh->pv_list, pv_next) { + pmap = PV_PMAP(pv); + if (!PMAP_TRYLOCK(pmap)) { + md_gen = m->md.pv_gen; + pvh_gen = pvh->pv_gen; + rw_runlock(lock); + PMAP_LOCK(pmap); + rw_rlock(lock); + if (md_gen != m->md.pv_gen || + pvh_gen != pvh->pv_gen) { + PMAP_UNLOCK(pmap); + goto restart; + } + } + l2 = pmap_l2(pmap, pv->pv_va); + rv = (pmap_load(l2) & mask) == mask; + PMAP_UNLOCK(pmap); + if (rv) + goto out; + } + } out: rw_runlock(lock); rw_runlock(&pvh_global_lock); return (rv); } /* * pmap_is_modified: * * Return whether or not the specified physical page was modified * in any physical maps. */ boolean_t pmap_is_modified(vm_page_t m) { KASSERT((m->oflags & VPO_UNMANAGED) == 0, ("pmap_is_modified: page %p is not managed", m)); /* * If the page is not exclusive busied, then PGA_WRITEABLE cannot be * concurrently set while the object is locked. Thus, if PGA_WRITEABLE * is clear, no PTEs can have PG_M set. */ VM_OBJECT_ASSERT_WLOCKED(m->object); if (!vm_page_xbusied(m) && (m->aflags & PGA_WRITEABLE) == 0) return (FALSE); return (pmap_page_test_mappings(m, FALSE, TRUE)); } /* * pmap_is_prefaultable: * * Return whether or not the specified virtual address is eligible * for prefault. */ boolean_t pmap_is_prefaultable(pmap_t pmap, vm_offset_t addr) { pt_entry_t *l3; boolean_t rv; rv = FALSE; PMAP_LOCK(pmap); l3 = pmap_l3(pmap, addr); if (l3 != NULL && pmap_load(l3) != 0) { rv = TRUE; } PMAP_UNLOCK(pmap); return (rv); } /* * pmap_is_referenced: * * Return whether or not the specified physical page was referenced * in any physical maps. */ boolean_t pmap_is_referenced(vm_page_t m) { KASSERT((m->oflags & VPO_UNMANAGED) == 0, ("pmap_is_referenced: page %p is not managed", m)); return (pmap_page_test_mappings(m, TRUE, FALSE)); } /* * Clear the write and modified bits in each of the given page's mappings. 
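The rewritten pmap_page_test_mappings() above builds a single mask up front (PTE_D when testing for 'modified', PTE_A when testing for 'accessed') and then requires every requested bit to be set, which is what lets one loop serve both queries. A compact model, with bit positions shown for illustration:

#include <stdbool.h>
#include <stdint.h>

#define	PTE_A	(1UL << 6)
#define	PTE_D	(1UL << 7)

/* Callers are expected to request at least one of the two bits. */
static bool
pte_test(uint64_t pte, bool accessed, bool modified)
{
	uint64_t mask = 0;

	if (modified)
		mask |= PTE_D;
	if (accessed)
		mask |= PTE_A;
	return ((pte & mask) == mask);	/* all requested bits set? */
}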
*/ void pmap_remove_write(vm_page_t m) { - pmap_t pmap; + struct md_page *pvh; struct rwlock *lock; - pv_entry_t pv; - pt_entry_t *l3, oldl3; - pt_entry_t newl3; - int md_gen; + pmap_t pmap; + pd_entry_t *l2; + pt_entry_t *l3, oldl3, newl3; + pv_entry_t next_pv, pv; + vm_offset_t va; + int md_gen, pvh_gen; KASSERT((m->oflags & VPO_UNMANAGED) == 0, ("pmap_remove_write: page %p is not managed", m)); /* * If the page is not exclusive busied, then PGA_WRITEABLE cannot be * set by another thread while the object is locked. Thus, * if PGA_WRITEABLE is clear, no page table entries need updating. */ VM_OBJECT_ASSERT_WLOCKED(m->object); if (!vm_page_xbusied(m) && (m->aflags & PGA_WRITEABLE) == 0) return; - rw_rlock(&pvh_global_lock); lock = VM_PAGE_TO_PV_LIST_LOCK(m); + pvh = (m->flags & PG_FICTITIOUS) != 0 ? &pv_dummy : + pa_to_pvh(VM_PAGE_TO_PHYS(m)); + rw_rlock(&pvh_global_lock); retry_pv_loop: rw_wlock(lock); + TAILQ_FOREACH_SAFE(pv, &pvh->pv_list, pv_next, next_pv) { + pmap = PV_PMAP(pv); + if (!PMAP_TRYLOCK(pmap)) { + pvh_gen = pvh->pv_gen; + rw_wunlock(lock); + PMAP_LOCK(pmap); + rw_wlock(lock); + if (pvh_gen != pvh->pv_gen) { + PMAP_UNLOCK(pmap); + rw_wunlock(lock); + goto retry_pv_loop; + } + } + va = pv->pv_va; + l2 = pmap_l2(pmap, va); + if ((pmap_load(l2) & PTE_W) != 0) + (void)pmap_demote_l2_locked(pmap, l2, va, &lock); + KASSERT(lock == VM_PAGE_TO_PV_LIST_LOCK(m), + ("inconsistent pv lock %p %p for page %p", + lock, VM_PAGE_TO_PV_LIST_LOCK(m), m)); + PMAP_UNLOCK(pmap); + } TAILQ_FOREACH(pv, &m->md.pv_list, pv_next) { pmap = PV_PMAP(pv); if (!PMAP_TRYLOCK(pmap)) { + pvh_gen = pvh->pv_gen; md_gen = m->md.pv_gen; rw_wunlock(lock); PMAP_LOCK(pmap); rw_wlock(lock); - if (md_gen != m->md.pv_gen) { + if (pvh_gen != pvh->pv_gen || md_gen != m->md.pv_gen) { PMAP_UNLOCK(pmap); rw_wunlock(lock); goto retry_pv_loop; } } l3 = pmap_l3(pmap, pv->pv_va); oldl3 = pmap_load(l3); retry: if ((oldl3 & PTE_W) != 0) { newl3 = oldl3 & ~(PTE_D | PTE_W); if (!atomic_fcmpset_long(l3, &oldl3, newl3)) goto retry; if ((oldl3 & PTE_D) != 0) vm_page_dirty(m); pmap_invalidate_page(pmap, pv->pv_va); } PMAP_UNLOCK(pmap); } rw_wunlock(lock); vm_page_aflag_clear(m, PGA_WRITEABLE); rw_runlock(&pvh_global_lock); } -static __inline boolean_t -safe_to_clear_referenced(pmap_t pmap, pt_entry_t pte) -{ - - return (FALSE); -} - /* * pmap_ts_referenced: * * Return a count of reference bits for a page, clearing those bits. * It is not necessary for every reference bit to be cleared, but it * is necessary that 0 only be returned when there are truly no * reference bits set. * * As an optimization, update the page's dirty field if a modified bit is * found while counting reference bits. This opportunistic update can be * performed at low cost and can eliminate the need for some future calls * to pmap_is_modified(). However, since this function stops after * finding PMAP_TS_REFERENCED_MAX reference bits, it may not detect some * dirty pages. Those dirty pages will only be detected by a future call * to pmap_is_modified(). 
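pmap_remove_write() above revokes write access with a compare-and-swap retry loop, so a concurrent update of the A/D bits between the load and the store simply forces a retry instead of being lost. The same pattern in portable C11 (bit positions illustrative):

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define	PTE_W	(1UL << 2)
#define	PTE_D	(1UL << 7)

/* Returns true if the mapping was dirty when write access was revoked. */
static bool
pte_remove_write(_Atomic uint64_t *ptep)
{
	uint64_t oldpte, newpte;

	oldpte = atomic_load(ptep);
	do {
		if ((oldpte & PTE_W) == 0)
			return (false);		/* already read-only */
		newpte = oldpte & ~(PTE_W | PTE_D);
		/* On failure, oldpte is refreshed and the loop retries. */
	} while (!atomic_compare_exchange_weak(ptep, &oldpte, newpte));
	return ((oldpte & PTE_D) != 0);	/* caller calls vm_page_dirty() */
}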
*/ int pmap_ts_referenced(vm_page_t m) { + struct spglist free; + struct md_page *pvh; + struct rwlock *lock; pv_entry_t pv, pvf; pmap_t pmap; - struct rwlock *lock; - pd_entry_t *l2; - pt_entry_t *l3, old_l3; + pd_entry_t *l2, l2e; + pt_entry_t *l3, l3e; vm_paddr_t pa; - int cleared, md_gen, not_cleared; - struct spglist free; + vm_offset_t va; + int md_gen, pvh_gen, ret; KASSERT((m->oflags & VPO_UNMANAGED) == 0, ("pmap_ts_referenced: page %p is not managed", m)); SLIST_INIT(&free); - cleared = 0; + ret = 0; pa = VM_PAGE_TO_PHYS(m); + pvh = (m->flags & PG_FICTITIOUS) != 0 ? &pv_dummy : pa_to_pvh(pa); + lock = PHYS_TO_PV_LIST_LOCK(pa); rw_rlock(&pvh_global_lock); rw_wlock(lock); retry: - not_cleared = 0; + if ((pvf = TAILQ_FIRST(&pvh->pv_list)) == NULL) + goto small_mappings; + pv = pvf; + do { + pmap = PV_PMAP(pv); + if (!PMAP_TRYLOCK(pmap)) { + pvh_gen = pvh->pv_gen; + rw_wunlock(lock); + PMAP_LOCK(pmap); + rw_wlock(lock); + if (pvh_gen != pvh->pv_gen) { + PMAP_UNLOCK(pmap); + goto retry; + } + } + va = pv->pv_va; + l2 = pmap_l2(pmap, va); + l2e = pmap_load(l2); + if ((l2e & (PTE_W | PTE_D)) == (PTE_W | PTE_D)) { + /* + * Although l2e is mapping a 2MB page, because + * this function is called at a 4KB page granularity, + * we only update the 4KB page under test. + */ + vm_page_dirty(m); + } + if ((l2e & PTE_A) != 0) { + /* + * Since this reference bit is shared by 512 4KB + * pages, it should not be cleared every time it is + * tested. Apply a simple "hash" function on the + * physical page number, the virtual superpage number, + * and the pmap address to select one 4KB page out of + * the 512 on which testing the reference bit will + * result in clearing that reference bit. This + * function is designed to avoid the selection of the + * same 4KB page for every 2MB page mapping. + * + * On demotion, a mapping that hasn't been referenced + * is simply destroyed. To avoid the possibility of a + * subsequent page fault on a demoted wired mapping, + * always leave its reference bit set. Moreover, + * since the superpage is wired, the current state of + * its reference bit won't affect page replacement. + */ + if ((((pa >> PAGE_SHIFT) ^ (pv->pv_va >> L2_SHIFT) ^ + (uintptr_t)pmap) & (Ln_ENTRIES - 1)) == 0 && + (l2e & PTE_SW_WIRED) == 0) { + pmap_clear_bits(l2, PTE_A); + pmap_invalidate_page(pmap, va); + } + ret++; + } + PMAP_UNLOCK(pmap); + /* Rotate the PV list if it has more than one entry. 
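The "hash" described above elects exactly one of the 512 4KB pages backing a 2MB mapping as the page whose test also clears the shared PTE_A bit; folding in the pmap address keeps different address spaces from always electing the same page. A standalone sketch with a worked check (constants per Sv39; the zero pmap value is hypothetical):

#include <stdint.h>
#include <stdio.h>

#define	PAGE_SHIFT	12
#define	L2_SHIFT	21
#define	Ln_ENTRIES	512

/* Nonzero when this (pa, va, pmap) triple is the elected 4KB page. */
static int
elected_to_clear(uint64_t pa, uint64_t va, uintptr_t pmap)
{
	return ((((pa >> PAGE_SHIFT) ^ (va >> L2_SHIFT) ^ pmap) &
	    (Ln_ENTRIES - 1)) == 0);
}

int
main(void)
{
	uintptr_t pmap = 0;	/* hypothetical pmap address */
	uint64_t pa;
	int hits = 0;

	/* Scan the 512 4KB pages of one 2MB run mapped at va 0x200000. */
	for (pa = 0x200000; pa < 0x400000; pa += 4096)
		hits += elected_to_clear(pa, 0x200000, pmap);
	printf("%d\n", hits);	/* prints 1 */
	return (0);
}

For any fixed va and pmap, exactly one physical page in the run satisfies the predicate, so successive calls spread the cleared reference bit across mappings instead of always hitting the same page.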
*/ + if (pv != NULL && TAILQ_NEXT(pv, pv_next) != NULL) { + TAILQ_REMOVE(&pvh->pv_list, pv, pv_next); + TAILQ_INSERT_TAIL(&pvh->pv_list, pv, pv_next); + pvh->pv_gen++; + } + if (ret >= PMAP_TS_REFERENCED_MAX) + goto out; + } while ((pv = TAILQ_FIRST(&pvh->pv_list)) != pvf); +small_mappings: if ((pvf = TAILQ_FIRST(&m->md.pv_list)) == NULL) goto out; pv = pvf; do { - if (pvf == NULL) - pvf = pv; pmap = PV_PMAP(pv); if (!PMAP_TRYLOCK(pmap)) { + pvh_gen = pvh->pv_gen; md_gen = m->md.pv_gen; rw_wunlock(lock); PMAP_LOCK(pmap); rw_wlock(lock); - if (md_gen != m->md.pv_gen) { + if (pvh_gen != pvh->pv_gen || md_gen != m->md.pv_gen) { PMAP_UNLOCK(pmap); goto retry; } } l2 = pmap_l2(pmap, pv->pv_va); KASSERT((pmap_load(l2) & PTE_RX) == 0, ("pmap_ts_referenced: found an invalid l2 table")); l3 = pmap_l2_to_l3(l2, pv->pv_va); - old_l3 = pmap_load(l3); - if ((old_l3 & PTE_D) != 0) + l3e = pmap_load(l3); + if ((l3e & PTE_D) != 0) vm_page_dirty(m); - if ((old_l3 & PTE_A) != 0) { - if (safe_to_clear_referenced(pmap, old_l3)) { + if ((l3e & PTE_A) != 0) { + if ((l3e & PTE_SW_WIRED) == 0) { /* - * TODO: We don't handle the access flag - * at all. We need to be able to set it in - * the exception handler. - */ - panic("RISCVTODO: safe_to_clear_referenced\n"); - } else if ((old_l3 & PTE_SW_WIRED) == 0) { - /* * Wired pages cannot be paged out so * doing accessed bit emulation for * them is wasted effort. We do the * hard work for unwired pages only. */ - pmap_remove_l3(pmap, l3, pv->pv_va, - pmap_load(l2), &free, &lock); + pmap_clear_bits(l3, PTE_A); pmap_invalidate_page(pmap, pv->pv_va); - cleared++; - if (pvf == pv) - pvf = NULL; - pv = NULL; - KASSERT(lock == VM_PAGE_TO_PV_LIST_LOCK(m), - ("inconsistent pv lock %p %p for page %p", - lock, VM_PAGE_TO_PV_LIST_LOCK(m), m)); - } else - not_cleared++; + } + ret++; } PMAP_UNLOCK(pmap); /* Rotate the PV list if it has more than one entry. */ if (pv != NULL && TAILQ_NEXT(pv, pv_next) != NULL) { TAILQ_REMOVE(&m->md.pv_list, pv, pv_next); TAILQ_INSERT_TAIL(&m->md.pv_list, pv, pv_next); m->md.pv_gen++; } - } while ((pv = TAILQ_FIRST(&m->md.pv_list)) != pvf && cleared + - not_cleared < PMAP_TS_REFERENCED_MAX); + } while ((pv = TAILQ_FIRST(&m->md.pv_list)) != pvf && ret < + PMAP_TS_REFERENCED_MAX); out: rw_wunlock(lock); rw_runlock(&pvh_global_lock); vm_page_free_pages_toq(&free, false); - return (cleared + not_cleared); + return (ret); } /* * Apply the given advice to the specified range of addresses within the * given pmap. Depending on the advice, clear the referenced and/or * modified flags in each mapping and set the mapped page's dirty field. */ void pmap_advise(pmap_t pmap, vm_offset_t sva, vm_offset_t eva, int advice) { } /* * Clear the modify bits on the specified physical page. */ void pmap_clear_modify(vm_page_t m) { + struct md_page *pvh; + struct rwlock *lock; + pmap_t pmap; + pv_entry_t next_pv, pv; + pd_entry_t *l2, oldl2; + pt_entry_t *l3, oldl3; + vm_offset_t va; + int md_gen, pvh_gen; KASSERT((m->oflags & VPO_UNMANAGED) == 0, ("pmap_clear_modify: page %p is not managed", m)); VM_OBJECT_ASSERT_WLOCKED(m->object); KASSERT(!vm_page_xbusied(m), ("pmap_clear_modify: page %p is exclusive busied", m)); /* * If the page is not PGA_WRITEABLE, then no PTEs can have PG_M set. * If the object containing the page is locked and the page is not * exclusive busied, then PGA_WRITEABLE cannot be concurrently set. 
*/ if ((m->aflags & PGA_WRITEABLE) == 0) return; - - /* RISCVTODO: We lack support for tracking if a page is modified */ + pvh = (m->flags & PG_FICTITIOUS) != 0 ? &pv_dummy : + pa_to_pvh(VM_PAGE_TO_PHYS(m)); + lock = VM_PAGE_TO_PV_LIST_LOCK(m); + rw_rlock(&pvh_global_lock); + rw_wlock(lock); +restart: + TAILQ_FOREACH_SAFE(pv, &pvh->pv_list, pv_next, next_pv) { + pmap = PV_PMAP(pv); + if (!PMAP_TRYLOCK(pmap)) { + pvh_gen = pvh->pv_gen; + rw_wunlock(lock); + PMAP_LOCK(pmap); + rw_wlock(lock); + if (pvh_gen != pvh->pv_gen) { + PMAP_UNLOCK(pmap); + goto restart; + } + } + va = pv->pv_va; + l2 = pmap_l2(pmap, va); + oldl2 = pmap_load(l2); + if ((oldl2 & PTE_W) != 0) { + if (pmap_demote_l2_locked(pmap, l2, va, &lock)) { + if ((oldl2 & PTE_SW_WIRED) == 0) { + /* + * Write protect the mapping to a + * single page so that a subsequent + * write access may repromote. + */ + va += VM_PAGE_TO_PHYS(m) - + PTE_TO_PHYS(oldl2); + l3 = pmap_l2_to_l3(l2, va); + oldl3 = pmap_load(l3); + if ((oldl3 & PTE_V) != 0) { + while (!atomic_fcmpset_long(l3, + &oldl3, oldl3 & ~(PTE_D | + PTE_W))) + cpu_spinwait(); + vm_page_dirty(m); + pmap_invalidate_page(pmap, va); + } + } + } + } + PMAP_UNLOCK(pmap); + } + TAILQ_FOREACH(pv, &m->md.pv_list, pv_next) { + pmap = PV_PMAP(pv); + if (!PMAP_TRYLOCK(pmap)) { + md_gen = m->md.pv_gen; + pvh_gen = pvh->pv_gen; + rw_wunlock(lock); + PMAP_LOCK(pmap); + rw_wlock(lock); + if (pvh_gen != pvh->pv_gen || md_gen != m->md.pv_gen) { + PMAP_UNLOCK(pmap); + goto restart; + } + } + l2 = pmap_l2(pmap, pv->pv_va); + KASSERT((pmap_load(l2) & PTE_RWX) == 0, + ("pmap_clear_modify: found a 2mpage in page %p's pv list", + m)); + l3 = pmap_l2_to_l3(l2, pv->pv_va); + if ((pmap_load(l3) & (PTE_D | PTE_W)) == (PTE_D | PTE_W)) { + pmap_clear_bits(l3, PTE_D); + pmap_invalidate_page(pmap, pv->pv_va); + } + PMAP_UNLOCK(pmap); + } + rw_wunlock(lock); + rw_runlock(&pvh_global_lock); } void * pmap_mapbios(vm_paddr_t pa, vm_size_t size) { return ((void *)PHYS_TO_DMAP(pa)); } void pmap_unmapbios(vm_paddr_t pa, vm_size_t size) { } /* * Sets the memory attribute for the specified page. */ void pmap_page_set_memattr(vm_page_t m, vm_memattr_t ma) { m->md.pv_memattr = ma; /* * RISCVTODO: Implement the below (from the amd64 pmap) * If "m" is a normal page, update its direct mapping. This update * can be relied upon to perform any cache operations that are * required for data coherence. 
*/ if ((m->flags & PG_FICTITIOUS) == 0 && PHYS_IN_DMAP(VM_PAGE_TO_PHYS(m))) panic("RISCVTODO: pmap_page_set_memattr"); } /* * perform the pmap work for mincore */ int pmap_mincore(pmap_t pmap, vm_offset_t addr, vm_paddr_t *locked_pa) { pt_entry_t *l2, *l3, tpte; vm_paddr_t pa; int val; bool managed; PMAP_LOCK(pmap); retry: managed = false; val = 0; l2 = pmap_l2(pmap, addr); if (l2 != NULL && ((tpte = pmap_load(l2)) & PTE_V) != 0) { - if ((tpte & (PTE_R | PTE_W | PTE_X)) != 0) { + if ((tpte & PTE_RWX) != 0) { pa = PTE_TO_PHYS(tpte) | (addr & L2_OFFSET); val = MINCORE_INCORE | MINCORE_SUPER; } else { l3 = pmap_l2_to_l3(l2, addr); tpte = pmap_load(l3); if ((tpte & PTE_V) == 0) goto done; pa = PTE_TO_PHYS(tpte) | (addr & L3_OFFSET); val = MINCORE_INCORE; } if ((tpte & PTE_D) != 0) val |= MINCORE_MODIFIED | MINCORE_MODIFIED_OTHER; if ((tpte & PTE_A) != 0) val |= MINCORE_REFERENCED | MINCORE_REFERENCED_OTHER; managed = (tpte & PTE_SW_MANAGED) == PTE_SW_MANAGED; } done: if ((val & (MINCORE_MODIFIED_OTHER | MINCORE_REFERENCED_OTHER)) != (MINCORE_MODIFIED_OTHER | MINCORE_REFERENCED_OTHER) && managed) { /* Ensure that "PHYS_TO_VM_PAGE(pa)->object" doesn't change. */ if (vm_page_pa_tryrelock(pmap, pa, locked_pa)) goto retry; } else PA_UNLOCK_COND(*locked_pa); PMAP_UNLOCK(pmap); return (val); } void -pmap_activate(struct thread *td) +pmap_activate_sw(struct thread *td) { - pmap_t pmap; - uint64_t reg; + pmap_t oldpmap, pmap; + u_int cpu; - critical_enter(); + oldpmap = PCPU_GET(curpmap); pmap = vmspace_pmap(td->td_proc->p_vmspace); - td->td_pcb->pcb_l1addr = vtophys(pmap->pm_l1); + if (pmap == oldpmap) + return; + load_satp(pmap->pm_satp); - reg = SATP_MODE_SV39; - reg |= (td->td_pcb->pcb_l1addr >> PAGE_SHIFT); - load_satp(reg); + cpu = PCPU_GET(cpuid); +#ifdef SMP + CPU_SET_ATOMIC(cpu, &pmap->pm_active); + CPU_CLR_ATOMIC(cpu, &oldpmap->pm_active); +#else + CPU_SET(cpu, &pmap->pm_active); + CPU_CLR(cpu, &oldpmap->pm_active); +#endif + PCPU_SET(curpmap, pmap); - pmap_invalidate_all(pmap); + sfence_vma(); +} + +void +pmap_activate(struct thread *td) +{ + + critical_enter(); + pmap_activate_sw(td); critical_exit(); } void -pmap_sync_icache(pmap_t pm, vm_offset_t va, vm_size_t sz) +pmap_activate_boot(pmap_t pmap) { + u_int cpu; + + cpu = PCPU_GET(cpuid); +#ifdef SMP + CPU_SET_ATOMIC(cpu, &pmap->pm_active); +#else + CPU_SET(cpu, &pmap->pm_active); +#endif + PCPU_SET(curpmap, pmap); +} + +void +pmap_sync_icache(pmap_t pmap, vm_offset_t va, vm_size_t sz) +{ cpuset_t mask; /* * From the RISC-V User-Level ISA V2.2: * * "To make a store to instruction memory visible to all * RISC-V harts, the writing hart has to execute a data FENCE * before requesting that all remote RISC-V harts execute a * FENCE.I." */ sched_pin(); mask = all_cpus; CPU_CLR(PCPU_GET(cpuid), &mask); fence(); - sbi_remote_fence_i(mask.__bits); + if (!CPU_EMPTY(&mask) && smp_started) + sbi_remote_fence_i(mask.__bits); sched_unpin(); } /* * Increase the starting virtual address of the given mapping if a * different alignment might result in more superpage mappings. 
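pmap_activate_sw() above switches address spaces by loading a precomputed pm_satp instead of reassembling the CSR value on every context switch, as the removed pmap_activate() body used to do. For orientation, an Sv39 satp packs the mode into bits 63:60, an optional ASID into bits 59:44, and the root page table's PPN into bits 43:0. A hedged helper that assembles such a value (field layout per the RISC-V privileged spec; names local to the example):

#include <stdint.h>

#define	SATP_MODE_SV39	(8UL << 60)	/* bits 63:60: translation mode */
#define	SATP_ASID_SHIFT	44		/* bits 59:44: ASID */
#define	PAGE_SHIFT	12

/* Assemble a satp value from the root page table's physical address. */
static uint64_t
make_satp(uint64_t l1_phys, uint16_t asid)
{
	return (SATP_MODE_SV39 |
	    ((uint64_t)asid << SATP_ASID_SHIFT) |
	    (l1_phys >> PAGE_SHIFT));	/* PPN of the L1 table */
}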
*/ void pmap_align_superpage(vm_object_t object, vm_ooffset_t offset, vm_offset_t *addr, vm_size_t size) { + vm_offset_t superpage_offset; + + if (size < L2_SIZE) + return; + if (object != NULL && (object->flags & OBJ_COLORED) != 0) + offset += ptoa(object->pg_color); + superpage_offset = offset & L2_OFFSET; + if (size - ((L2_SIZE - superpage_offset) & L2_OFFSET) < L2_SIZE || + (*addr & L2_OFFSET) == superpage_offset) + return; + if ((*addr & L2_OFFSET) < superpage_offset) + *addr = (*addr & ~L2_OFFSET) + superpage_offset; + else + *addr = ((*addr + L2_OFFSET) & ~L2_OFFSET) + superpage_offset; } /** * Get the kernel virtual address of a set of physical pages. If there are * physical addresses not covered by the DMAP perform a transient mapping * that will be removed when calling pmap_unmap_io_transient. * * \param page The pages the caller wishes to obtain the virtual * address on the kernel memory map. * \param vaddr On return contains the kernel virtual memory address * of the pages passed in the page parameter. * \param count Number of pages passed in. * \param can_fault TRUE if the thread using the mapped pages can take * page faults, FALSE otherwise. * * \returns TRUE if the caller must call pmap_unmap_io_transient when * finished or FALSE otherwise. * */ boolean_t pmap_map_io_transient(vm_page_t page[], vm_offset_t vaddr[], int count, boolean_t can_fault) { vm_paddr_t paddr; boolean_t needs_mapping; int error, i; /* * Allocate any KVA space that we need, this is done in a separate * loop to prevent calling vmem_alloc while pinned. */ needs_mapping = FALSE; for (i = 0; i < count; i++) { paddr = VM_PAGE_TO_PHYS(page[i]); if (__predict_false(paddr >= DMAP_MAX_PHYSADDR)) { error = vmem_alloc(kernel_arena, PAGE_SIZE, M_BESTFIT | M_WAITOK, &vaddr[i]); KASSERT(error == 0, ("vmem_alloc failed: %d", error)); needs_mapping = TRUE; } else { vaddr[i] = PHYS_TO_DMAP(paddr); } } /* Exit early if everything is covered by the DMAP */ if (!needs_mapping) return (FALSE); if (!can_fault) sched_pin(); for (i = 0; i < count; i++) { paddr = VM_PAGE_TO_PHYS(page[i]); if (paddr >= DMAP_MAX_PHYSADDR) { panic( "pmap_map_io_transient: TODO: Map out of DMAP data"); } } return (needs_mapping); } void pmap_unmap_io_transient(vm_page_t page[], vm_offset_t vaddr[], int count, boolean_t can_fault) { vm_paddr_t paddr; int i; if (!can_fault) sched_unpin(); for (i = 0; i < count; i++) { paddr = VM_PAGE_TO_PHYS(page[i]); if (paddr >= DMAP_MAX_PHYSADDR) { panic("RISCVTODO: pmap_unmap_io_transient: Unmap data"); } } } boolean_t pmap_is_valid_memattr(pmap_t pmap __unused, vm_memattr_t mode) { return (mode >= VM_MEMATTR_DEVICE && mode <= VM_MEMATTR_WRITE_BACK); } Index: projects/import-googletest-1.8.1/sys/riscv/riscv/swtch.S =================================================================== --- projects/import-googletest-1.8.1/sys/riscv/riscv/swtch.S (revision 344270) +++ projects/import-googletest-1.8.1/sys/riscv/riscv/swtch.S (revision 344271) @@ -1,494 +1,476 @@ /*- * Copyright (c) 2015-2017 Ruslan Bukin * All rights reserved. * * Portions of this software were developed by SRI International and the * University of Cambridge Computer Laboratory under DARPA/AFRL contract * FA8750-10-C-0237 ("CTSRD"), as part of the DARPA CRASH research programme. * * Portions of this software were developed by the University of Cambridge * Computer Laboratory as part of the CTSRD Project, with support from the * UK Higher Education Innovation Fund (HEIF). 
* * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ #include "assym.inc" #include "opt_sched.h" #include #include #include #include __FBSDID("$FreeBSD$"); #ifdef FPE .macro __fpe_state_save p /* * Enable FPE usage in supervisor mode, * so we can access registers. */ li t0, SSTATUS_FS_INITIAL csrs sstatus, t0 /* Store registers */ frcsr t0 sd t0, (PCB_FCSR)(\p) fsd f0, (PCB_X + 0 * 16)(\p) fsd f1, (PCB_X + 1 * 16)(\p) fsd f2, (PCB_X + 2 * 16)(\p) fsd f3, (PCB_X + 3 * 16)(\p) fsd f4, (PCB_X + 4 * 16)(\p) fsd f5, (PCB_X + 5 * 16)(\p) fsd f6, (PCB_X + 6 * 16)(\p) fsd f7, (PCB_X + 7 * 16)(\p) fsd f8, (PCB_X + 8 * 16)(\p) fsd f9, (PCB_X + 9 * 16)(\p) fsd f10, (PCB_X + 10 * 16)(\p) fsd f11, (PCB_X + 11 * 16)(\p) fsd f12, (PCB_X + 12 * 16)(\p) fsd f13, (PCB_X + 13 * 16)(\p) fsd f14, (PCB_X + 14 * 16)(\p) fsd f15, (PCB_X + 15 * 16)(\p) fsd f16, (PCB_X + 16 * 16)(\p) fsd f17, (PCB_X + 17 * 16)(\p) fsd f18, (PCB_X + 18 * 16)(\p) fsd f19, (PCB_X + 19 * 16)(\p) fsd f20, (PCB_X + 20 * 16)(\p) fsd f21, (PCB_X + 21 * 16)(\p) fsd f22, (PCB_X + 22 * 16)(\p) fsd f23, (PCB_X + 23 * 16)(\p) fsd f24, (PCB_X + 24 * 16)(\p) fsd f25, (PCB_X + 25 * 16)(\p) fsd f26, (PCB_X + 26 * 16)(\p) fsd f27, (PCB_X + 27 * 16)(\p) fsd f28, (PCB_X + 28 * 16)(\p) fsd f29, (PCB_X + 29 * 16)(\p) fsd f30, (PCB_X + 30 * 16)(\p) fsd f31, (PCB_X + 31 * 16)(\p) /* Disable FPE usage in supervisor mode. */ li t0, SSTATUS_FS_MASK csrc sstatus, t0 .endm .macro __fpe_state_load p /* * Enable FPE usage in supervisor mode, * so we can access registers. 
*/ li t0, SSTATUS_FS_INITIAL csrs sstatus, t0 /* Restore registers */ ld t0, (PCB_FCSR)(\p) fscsr t0 fld f0, (PCB_X + 0 * 16)(\p) fld f1, (PCB_X + 1 * 16)(\p) fld f2, (PCB_X + 2 * 16)(\p) fld f3, (PCB_X + 3 * 16)(\p) fld f4, (PCB_X + 4 * 16)(\p) fld f5, (PCB_X + 5 * 16)(\p) fld f6, (PCB_X + 6 * 16)(\p) fld f7, (PCB_X + 7 * 16)(\p) fld f8, (PCB_X + 8 * 16)(\p) fld f9, (PCB_X + 9 * 16)(\p) fld f10, (PCB_X + 10 * 16)(\p) fld f11, (PCB_X + 11 * 16)(\p) fld f12, (PCB_X + 12 * 16)(\p) fld f13, (PCB_X + 13 * 16)(\p) fld f14, (PCB_X + 14 * 16)(\p) fld f15, (PCB_X + 15 * 16)(\p) fld f16, (PCB_X + 16 * 16)(\p) fld f17, (PCB_X + 17 * 16)(\p) fld f18, (PCB_X + 18 * 16)(\p) fld f19, (PCB_X + 19 * 16)(\p) fld f20, (PCB_X + 20 * 16)(\p) fld f21, (PCB_X + 21 * 16)(\p) fld f22, (PCB_X + 22 * 16)(\p) fld f23, (PCB_X + 23 * 16)(\p) fld f24, (PCB_X + 24 * 16)(\p) fld f25, (PCB_X + 25 * 16)(\p) fld f26, (PCB_X + 26 * 16)(\p) fld f27, (PCB_X + 27 * 16)(\p) fld f28, (PCB_X + 28 * 16)(\p) fld f29, (PCB_X + 29 * 16)(\p) fld f30, (PCB_X + 30 * 16)(\p) fld f31, (PCB_X + 31 * 16)(\p) /* Disable FPE usage in supervisor mode. */ li t0, SSTATUS_FS_MASK csrc sstatus, t0 .endm /* * void * fpe_state_save(struct thread *td) */ ENTRY(fpe_state_save) /* Get pointer to PCB */ ld a0, TD_PCB(a0) __fpe_state_save a0 ret END(fpe_state_save) #endif /* FPE */ /* * void * fpe_state_clear(void) */ ENTRY(fpe_state_clear) /* * Enable FPE usage in supervisor mode, * so we can access registers. */ li t0, SSTATUS_FS_INITIAL csrs sstatus, t0 fscsr zero fcvt.d.l f0, zero fcvt.d.l f1, zero fcvt.d.l f2, zero fcvt.d.l f3, zero fcvt.d.l f4, zero fcvt.d.l f5, zero fcvt.d.l f6, zero fcvt.d.l f7, zero fcvt.d.l f8, zero fcvt.d.l f9, zero fcvt.d.l f10, zero fcvt.d.l f11, zero fcvt.d.l f12, zero fcvt.d.l f13, zero fcvt.d.l f14, zero fcvt.d.l f15, zero fcvt.d.l f16, zero fcvt.d.l f17, zero fcvt.d.l f18, zero fcvt.d.l f19, zero fcvt.d.l f20, zero fcvt.d.l f21, zero fcvt.d.l f22, zero fcvt.d.l f23, zero fcvt.d.l f24, zero fcvt.d.l f25, zero fcvt.d.l f26, zero fcvt.d.l f27, zero fcvt.d.l f28, zero fcvt.d.l f29, zero fcvt.d.l f30, zero fcvt.d.l f31, zero /* Disable FPE usage in supervisor mode. */ li t0, SSTATUS_FS_MASK csrc sstatus, t0 ret END(fpe_state_clear) /* - * void cpu_throw(struct thread *old, struct thread *new) + * void cpu_throw(struct thread *old __unused, struct thread *new) */ ENTRY(cpu_throw) + /* Activate the new thread's pmap. */ + mv s0, a1 + mv a0, a1 + call _C_LABEL(pmap_activate_sw) + mv a0, s0 + /* Store the new curthread */ - sd a1, PC_CURTHREAD(gp) + sd a0, PC_CURTHREAD(gp) /* And the new pcb */ - ld x13, TD_PCB(a1) + ld x13, TD_PCB(a0) sd x13, PC_CURPCB(gp) - sfence.vma - - /* Switch to the new pmap */ - ld t0, PCB_L1ADDR(x13) - srli t0, t0, PAGE_SHIFT - li t1, SATP_MODE_SV39 - or t0, t0, t1 - csrw satp, t0 - - /* TODO: Invalidate the TLB */ - - sfence.vma - /* Load registers */ ld ra, (PCB_RA)(x13) ld sp, (PCB_SP)(x13) ld tp, (PCB_TP)(x13) /* s[0-11] */ ld s0, (PCB_S + 0 * 8)(x13) ld s1, (PCB_S + 1 * 8)(x13) ld s2, (PCB_S + 2 * 8)(x13) ld s3, (PCB_S + 3 * 8)(x13) ld s4, (PCB_S + 4 * 8)(x13) ld s5, (PCB_S + 5 * 8)(x13) ld s6, (PCB_S + 6 * 8)(x13) ld s7, (PCB_S + 7 * 8)(x13) ld s8, (PCB_S + 8 * 8)(x13) ld s9, (PCB_S + 9 * 8)(x13) ld s10, (PCB_S + 10 * 8)(x13) ld s11, (PCB_S + 11 * 8)(x13) #ifdef FPE /* Is FPE enabled for new thread? */ - ld t0, TD_FRAME(a1) + ld t0, TD_FRAME(a0) ld t1, (TF_SSTATUS)(t0) li t2, SSTATUS_FS_MASK and t3, t1, t2 beqz t3, 1f /* No, skip. */ /* Restore registers. 
*/ __fpe_state_load x13 1: #endif ret END(cpu_throw) /* * void cpu_switch(struct thread *old, struct thread *new, struct mtx *mtx) * * a0 = old * a1 = new * a2 = mtx * x3 to x7, x16 and x17 are caller saved */ ENTRY(cpu_switch) /* Store the new curthread */ sd a1, PC_CURTHREAD(gp) /* And the new pcb */ ld x13, TD_PCB(a1) sd x13, PC_CURPCB(gp) /* Save the old context. */ ld x13, TD_PCB(a0) /* Store ra, sp and the callee-saved registers */ sd ra, (PCB_RA)(x13) sd sp, (PCB_SP)(x13) sd tp, (PCB_TP)(x13) /* s[0-11] */ sd s0, (PCB_S + 0 * 8)(x13) sd s1, (PCB_S + 1 * 8)(x13) sd s2, (PCB_S + 2 * 8)(x13) sd s3, (PCB_S + 3 * 8)(x13) sd s4, (PCB_S + 4 * 8)(x13) sd s5, (PCB_S + 5 * 8)(x13) sd s6, (PCB_S + 6 * 8)(x13) sd s7, (PCB_S + 7 * 8)(x13) sd s8, (PCB_S + 8 * 8)(x13) sd s9, (PCB_S + 9 * 8)(x13) sd s10, (PCB_S + 10 * 8)(x13) sd s11, (PCB_S + 11 * 8)(x13) #ifdef FPE /* * Is FPE enabled and is it in dirty state * for the old thread? */ ld t0, TD_FRAME(a0) ld t1, (TF_SSTATUS)(t0) li t2, SSTATUS_FS_MASK and t3, t1, t2 li t2, SSTATUS_FS_DIRTY bne t3, t2, 1f /* No, skip. */ /* Yes, mark FPE state clean and save registers. */ li t2, ~SSTATUS_FS_MASK and t3, t1, t2 li t2, SSTATUS_FS_CLEAN or t3, t3, t2 sd t3, (TF_SSTATUS)(t0) __fpe_state_save x13 1: #endif - /* - * Restore the saved context. - */ - ld x13, TD_PCB(a1) + /* Activate the new thread's pmap */ + mv s0, a0 + mv s1, a1 + mv s2, a2 + mv a0, a1 + call _C_LABEL(pmap_activate_sw) + mv a1, s1 - /* - * TODO: We may need to flush the cache here if switching - * to a user process. - */ - - sfence.vma - - /* Switch to the new pmap */ - ld t0, PCB_L1ADDR(x13) - srli t0, t0, PAGE_SHIFT - li t1, SATP_MODE_SV39 - or t0, t0, t1 - csrw satp, t0 - - /* TODO: Invalidate the TLB */ - - sfence.vma - /* Release the old thread */ - sd a2, TD_LOCK(a0) + sd s2, TD_LOCK(s0) #if defined(SCHED_ULE) && defined(SMP) /* Spin if TD_LOCK points to a blocked_lock */ - la a2, _C_LABEL(blocked_lock) + la s2, _C_LABEL(blocked_lock) 1: ld t0, TD_LOCK(a1) - beq t0, a2, 1b + beq t0, s2, 1b #endif + /* + * Restore the saved context. + */ + ld x13, TD_PCB(a1) /* Restore the registers */ ld tp, (PCB_TP)(x13) ld ra, (PCB_RA)(x13) ld sp, (PCB_SP)(x13) /* s[0-11] */ ld s0, (PCB_S + 0 * 8)(x13) ld s1, (PCB_S + 1 * 8)(x13) ld s2, (PCB_S + 2 * 8)(x13) ld s3, (PCB_S + 3 * 8)(x13) ld s4, (PCB_S + 4 * 8)(x13) ld s5, (PCB_S + 5 * 8)(x13) ld s6, (PCB_S + 6 * 8)(x13) ld s7, (PCB_S + 7 * 8)(x13) ld s8, (PCB_S + 8 * 8)(x13) ld s9, (PCB_S + 9 * 8)(x13) ld s10, (PCB_S + 10 * 8)(x13) ld s11, (PCB_S + 11 * 8)(x13) #ifdef FPE /* Is FPE enabled for new thread? */ ld t0, TD_FRAME(a1) ld t1, (TF_SSTATUS)(t0) li t2, SSTATUS_FS_MASK and t3, t1, t2 beqz t3, 1f /* No, skip. */ /* Restore registers. 
*/ __fpe_state_load x13 1: #endif ret .Lcpu_switch_panic_str: .asciz "cpu_switch: %p\0" END(cpu_switch) /* * fork_exit(void (*callout)(void *, struct trapframe *), void *arg, * struct trapframe *frame) */ ENTRY(fork_trampoline) mv a0, s0 mv a1, s1 mv a2, sp call _C_LABEL(fork_exit) /* Restore sstatus */ ld t0, (TF_SSTATUS)(sp) /* Ensure interrupts disabled */ li t1, ~SSTATUS_SIE and t0, t0, t1 csrw sstatus, t0 /* Restore exception program counter */ ld t0, (TF_SEPC)(sp) csrw sepc, t0 /* Restore the registers */ ld t0, (TF_T + 0 * 8)(sp) ld t1, (TF_T + 1 * 8)(sp) ld t2, (TF_T + 2 * 8)(sp) ld t3, (TF_T + 3 * 8)(sp) ld t4, (TF_T + 4 * 8)(sp) ld t5, (TF_T + 5 * 8)(sp) ld t6, (TF_T + 6 * 8)(sp) ld s0, (TF_S + 0 * 8)(sp) ld s1, (TF_S + 1 * 8)(sp) ld s2, (TF_S + 2 * 8)(sp) ld s3, (TF_S + 3 * 8)(sp) ld s4, (TF_S + 4 * 8)(sp) ld s5, (TF_S + 5 * 8)(sp) ld s6, (TF_S + 6 * 8)(sp) ld s7, (TF_S + 7 * 8)(sp) ld s8, (TF_S + 8 * 8)(sp) ld s9, (TF_S + 9 * 8)(sp) ld s10, (TF_S + 10 * 8)(sp) ld s11, (TF_S + 11 * 8)(sp) ld a0, (TF_A + 0 * 8)(sp) ld a1, (TF_A + 1 * 8)(sp) ld a2, (TF_A + 2 * 8)(sp) ld a3, (TF_A + 3 * 8)(sp) ld a4, (TF_A + 4 * 8)(sp) ld a5, (TF_A + 5 * 8)(sp) ld a6, (TF_A + 6 * 8)(sp) ld a7, (TF_A + 7 * 8)(sp) /* Load user ra and sp */ ld ra, (TF_RA)(sp) /* * Store our pcpup on stack, we will load it back * on kernel mode trap. */ sd gp, (TF_SIZE)(sp) ld gp, (TF_GP)(sp) /* Save kernel stack so we can use it doing a user trap */ addi sp, sp, TF_SIZE csrw sscratch, sp /* Load user stack */ ld sp, (TF_SP - TF_SIZE)(sp) sret END(fork_trampoline) ENTRY(savectx) /* Store ra, sp and the callee-saved registers */ sd ra, (PCB_RA)(a0) sd sp, (PCB_SP)(a0) sd tp, (PCB_TP)(a0) /* s[0-11] */ sd s0, (PCB_S + 0 * 8)(a0) sd s1, (PCB_S + 1 * 8)(a0) sd s2, (PCB_S + 2 * 8)(a0) sd s3, (PCB_S + 3 * 8)(a0) sd s4, (PCB_S + 4 * 8)(a0) sd s5, (PCB_S + 5 * 8)(a0) sd s6, (PCB_S + 6 * 8)(a0) sd s7, (PCB_S + 7 * 8)(a0) sd s8, (PCB_S + 8 * 8)(a0) sd s9, (PCB_S + 9 * 8)(a0) sd s10, (PCB_S + 10 * 8)(a0) sd s11, (PCB_S + 11 * 8)(a0) #ifdef FPE __fpe_state_save a0 #endif ret END(savectx) Index: projects/import-googletest-1.8.1/sys/riscv/riscv/vm_machdep.c =================================================================== --- projects/import-googletest-1.8.1/sys/riscv/riscv/vm_machdep.c (revision 344270) +++ projects/import-googletest-1.8.1/sys/riscv/riscv/vm_machdep.c (revision 344271) @@ -1,275 +1,272 @@ /*- * Copyright (c) 2015-2018 Ruslan Bukin * All rights reserved. * * Portions of this software were developed by SRI International and the * University of Cambridge Computer Laboratory under DARPA/AFRL contract * FA8750-10-C-0237 ("CTSRD"), as part of the DARPA CRASH research programme. * * Portions of this software were developed by the University of Cambridge * Computer Laboratory as part of the CTSRD Project, with support from the * UK Higher Education Innovation Fund (HEIF). * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. 
* * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ #include __FBSDID("$FreeBSD$"); #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #if __riscv_xlen == 64 #define TP_OFFSET 16 /* sizeof(struct tcb) */ #endif /* * Finish a fork operation, with process p2 nearly set up. * Copy and update the pcb, set up the stack so that the child is * ready to run and return to user mode. */ void cpu_fork(struct thread *td1, struct proc *p2, struct thread *td2, int flags) { struct pcb *pcb2; struct trapframe *tf; register_t val; if ((flags & RFPROC) == 0) return; if (td1 == curthread) { /* * Save the tp. This is normally done in cpu_switch(), * but if userland changes tp and then forks, that save * may not have happened. */ __asm __volatile("mv %0, tp" : "=&r"(val)); td1->td_pcb->pcb_tp = val; /* RISCVTODO: save the FPU state here */ } pcb2 = (struct pcb *)(td2->td_kstack + td2->td_kstack_pages * PAGE_SIZE) - 1; td2->td_pcb = pcb2; bcopy(td1->td_pcb, pcb2, sizeof(*pcb2)); - td2->td_pcb->pcb_l1addr = - vtophys(vmspace_pmap(td2->td_proc->p_vmspace)->pm_l1); - tf = (struct trapframe *)STACKALIGN((struct trapframe *)pcb2 - 1); bcopy(td1->td_frame, tf, sizeof(*tf)); /* Clear syscall error flag */ tf->tf_t[0] = 0; /* Arguments for child */ tf->tf_a[0] = 0; tf->tf_a[1] = 0; tf->tf_sstatus |= (SSTATUS_SPIE); /* Enable interrupts. */ tf->tf_sstatus &= ~(SSTATUS_SPP); /* User mode. */ td2->td_frame = tf; /* Set the return value registers for fork() */ td2->td_pcb->pcb_s[0] = (uintptr_t)fork_return; td2->td_pcb->pcb_s[1] = (uintptr_t)td2; td2->td_pcb->pcb_ra = (uintptr_t)fork_trampoline; td2->td_pcb->pcb_sp = (uintptr_t)td2->td_frame; /* Setup to release spin count in fork_exit(). */ td2->td_md.md_spinlock_count = 1; td2->td_md.md_saved_sstatus_ie = (SSTATUS_SIE); } void cpu_reset(void) { sbi_shutdown(); while(1); } void cpu_thread_swapin(struct thread *td) { } void cpu_thread_swapout(struct thread *td) { } void cpu_set_syscall_retval(struct thread *td, int error) { struct trapframe *frame; frame = td->td_frame; switch (error) { case 0: frame->tf_a[0] = td->td_retval[0]; frame->tf_a[1] = td->td_retval[1]; frame->tf_t[0] = 0; /* syscall succeeded */ break; case ERESTART: frame->tf_sepc -= 4; /* prev instruction */ break; case EJUSTRETURN: break; default: frame->tf_a[0] = error; frame->tf_t[0] = 1; /* syscall error */ break; } } /* * Initialize machine state, mostly pcb and trap frame for a new * thread, about to return to userspace. Put enough state in the new * thread's PCB to get it to go back to the fork_return(), which * finalizes the thread state and handles peculiarities of the first * return to userspace for the new thread.
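*/

/*
 * Editorial sketch, not part of the change: how the state staged by
 * cpu_fork() above is consumed on the child's first run.  cpu_switch()
 * reloads s0, s1, ra and sp from the new pcb, so its "return" lands in
 * fork_trampoline (see swtch.S above), which forwards the staged values
 * to fork_exit().  Approximately, in C:
 */
static void
example_first_return(struct pcb *pcb)
{
	void (*callout)(void *, struct trapframe *);

	callout = (void (*)(void *, struct trapframe *))pcb->pcb_s[0];
	/* a0 <- s0 (fork_return), a1 <- s1 (the new thread), a2 <- sp. */
	fork_exit(callout, (void *)pcb->pcb_s[1],
	    (struct trapframe *)pcb->pcb_sp);
	/* fork_exit() does not return; the trampoline sret's to user mode. */
}

/*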
*/ void cpu_copy_thread(struct thread *td, struct thread *td0) { bcopy(td0->td_frame, td->td_frame, sizeof(struct trapframe)); bcopy(td0->td_pcb, td->td_pcb, sizeof(struct pcb)); td->td_pcb->pcb_s[0] = (uintptr_t)fork_return; td->td_pcb->pcb_s[1] = (uintptr_t)td; td->td_pcb->pcb_ra = (uintptr_t)fork_trampoline; td->td_pcb->pcb_sp = (uintptr_t)td->td_frame; /* Setup to release spin count in fork_exit(). */ td->td_md.md_spinlock_count = 1; td->td_md.md_saved_sstatus_ie = (SSTATUS_SIE); } /* * Set the machine state for performing an upcall that starts * the entry function with the given argument. */ void cpu_set_upcall(struct thread *td, void (*entry)(void *), void *arg, stack_t *stack) { struct trapframe *tf; tf = td->td_frame; tf->tf_sp = STACKALIGN((uintptr_t)stack->ss_sp + stack->ss_size); tf->tf_sepc = (register_t)entry; tf->tf_a[0] = (register_t)arg; } int cpu_set_user_tls(struct thread *td, void *tls_base) { struct pcb *pcb; if ((uintptr_t)tls_base >= VM_MAXUSER_ADDRESS) return (EINVAL); pcb = td->td_pcb; pcb->pcb_tp = (register_t)tls_base + TP_OFFSET; if (td == curthread) __asm __volatile("mv tp, %0" :: "r"(pcb->pcb_tp)); return (0); } void cpu_thread_exit(struct thread *td) { } void cpu_thread_alloc(struct thread *td) { td->td_pcb = (struct pcb *)(td->td_kstack + td->td_kstack_pages * PAGE_SIZE) - 1; td->td_frame = (struct trapframe *)STACKALIGN( (caddr_t)td->td_pcb - 8 - sizeof(struct trapframe)); } void cpu_thread_free(struct thread *td) { } void cpu_thread_clean(struct thread *td) { } /* * Intercept the return address from a freshly forked process that has NOT * been scheduled yet. * * This is needed to make kernel threads stay in kernel mode. */ void cpu_fork_kthread_handler(struct thread *td, void (*func)(void *), void *arg) { td->td_pcb->pcb_s[0] = (uintptr_t)func; td->td_pcb->pcb_s[1] = (uintptr_t)arg; td->td_pcb->pcb_ra = (uintptr_t)fork_trampoline; td->td_pcb->pcb_sp = (uintptr_t)td->td_frame; } void cpu_exit(struct thread *td) { } void swi_vm(void *v) { /* Nothing to do here - busdma bounce buffers are not implemented. */ } Index: projects/import-googletest-1.8.1/sys/sparc64/sparc64/elf_machdep.c =================================================================== --- projects/import-googletest-1.8.1/sys/sparc64/sparc64/elf_machdep.c (revision 344270) +++ projects/import-googletest-1.8.1/sys/sparc64/sparc64/elf_machdep.c (revision 344271) @@ -1,431 +1,431 @@ /*- * SPDX-License-Identifier: BSD-2-Clause-NetBSD * * Copyright (c) 2001 Jake Burkholder. * Copyright (c) 2000 Eduardo Horvath. * Copyright (c) 1999 The NetBSD Foundation, Inc. * All rights reserved. * * This code is derived from software contributed to The NetBSD Foundation * by Paul Kranenburg. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE NETBSD FOUNDATION, INC. AND CONTRIBUTORS * ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED * TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR * PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE FOUNDATION OR CONTRIBUTORS * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE * POSSIBILITY OF SUCH DAMAGE. * * from: NetBSD: mdreloc.c,v 1.42 2008/04/28 20:23:04 martin Exp */ #include __FBSDID("$FreeBSD$"); #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include "linker_if.h" static struct sysentvec elf64_freebsd_sysvec = { .sv_size = SYS_MAXSYSCALL, .sv_table = sysent, .sv_errsize = 0, .sv_errtbl = NULL, .sv_transtrap = NULL, .sv_fixup = __elfN(freebsd_fixup), .sv_sendsig = sendsig, .sv_sigcode = NULL, .sv_szsigcode = NULL, .sv_name = "FreeBSD ELF64", .sv_coredump = __elfN(coredump), .sv_imgact_try = NULL, .sv_minsigstksz = MINSIGSTKSZ, .sv_pagesize = PAGE_SIZE, .sv_minuser = VM_MIN_ADDRESS, .sv_maxuser = VM_MAXUSER_ADDRESS, .sv_usrstack = USRSTACK, .sv_psstrings = PS_STRINGS, .sv_stackprot = VM_PROT_READ | VM_PROT_WRITE, .sv_copyout_strings = exec_copyout_strings, .sv_setregs = exec_setregs, .sv_fixlimit = NULL, .sv_maxssiz = NULL, - .sv_flags = SV_ABI_FREEBSD | SV_LP64, + .sv_flags = SV_ABI_FREEBSD | SV_LP64 | SV_ASLR, .sv_set_syscall_retval = cpu_set_syscall_retval, .sv_fetch_syscall_args = cpu_fetch_syscall_args, .sv_syscallnames = syscallnames, .sv_schedtail = NULL, .sv_thread_detach = NULL, .sv_trap = NULL, }; static Elf64_Brandinfo freebsd_brand_info = { .brand = ELFOSABI_FREEBSD, .machine = EM_SPARCV9, .compat_3_brand = "FreeBSD", .emul_path = NULL, .interp_path = "/libexec/ld-elf.so.1", .sysvec = &elf64_freebsd_sysvec, .interp_newpath = NULL, .brand_note = &elf64_freebsd_brandnote, .flags = BI_CAN_EXEC_DYN | BI_BRAND_NOTE }; SYSINIT(elf64, SI_SUB_EXEC, SI_ORDER_FIRST, (sysinit_cfunc_t)elf64_insert_brand_entry, &freebsd_brand_info); static Elf64_Brandinfo freebsd_brand_oinfo = { .brand = ELFOSABI_FREEBSD, .machine = EM_SPARCV9, .compat_3_brand = "FreeBSD", .emul_path = NULL, .interp_path = "/usr/libexec/ld-elf.so.1", .sysvec = &elf64_freebsd_sysvec, .interp_newpath = NULL, .brand_note = &elf64_freebsd_brandnote, .flags = BI_CAN_EXEC_DYN | BI_BRAND_NOTE }; SYSINIT(oelf64, SI_SUB_EXEC, SI_ORDER_ANY, (sysinit_cfunc_t)elf64_insert_brand_entry, &freebsd_brand_oinfo); void elf64_dump_thread(struct thread *td __unused, void *dst __unused, size_t *off __unused) { } /* * The following table holds for each relocation type: * - the width in bits of the memory location the relocation * applies to (not currently used) * - the number of bits the relocation value must be shifted to the * right (i.e. discard least significant bits) to fit into * the appropriate field in the instruction word. 
* - flags indicating whether * * the relocation involves a symbol * * the relocation is relative to the current position * * the relocation is for a GOT entry * * the relocation is relative to the load address * */ #define _RF_S 0x80000000 /* Resolve symbol */ #define _RF_A 0x40000000 /* Use addend */ #define _RF_P 0x20000000 /* Location relative */ #define _RF_G 0x10000000 /* GOT offset */ #define _RF_B 0x08000000 /* Load address relative */ #define _RF_U 0x04000000 /* Unaligned */ #define _RF_X 0x02000000 /* Bare symbols, needs proc */ #define _RF_D 0x01000000 /* Use dynamic TLS offset */ #define _RF_O 0x00800000 /* Use static TLS offset */ #define _RF_I 0x00400000 /* Use TLS object ID */ #define _RF_SZ(s) (((s) & 0xff) << 8) /* memory target size */ #define _RF_RS(s) ( (s) & 0xff) /* right shift */ static const int reloc_target_flags[] = { 0, /* NONE */ _RF_S|_RF_A| _RF_SZ(8) | _RF_RS(0), /* 8 */ _RF_S|_RF_A| _RF_SZ(16) | _RF_RS(0), /* 16 */ _RF_S|_RF_A| _RF_SZ(32) | _RF_RS(0), /* 32 */ _RF_S|_RF_A|_RF_P| _RF_SZ(8) | _RF_RS(0), /* DISP_8 */ _RF_S|_RF_A|_RF_P| _RF_SZ(16) | _RF_RS(0), /* DISP_16 */ _RF_S|_RF_A|_RF_P| _RF_SZ(32) | _RF_RS(0), /* DISP_32 */ _RF_S|_RF_A|_RF_P| _RF_SZ(32) | _RF_RS(2), /* WDISP_30 */ _RF_S|_RF_A|_RF_P| _RF_SZ(32) | _RF_RS(2), /* WDISP_22 */ _RF_S|_RF_A|_RF_X| _RF_SZ(32) | _RF_RS(10), /* HI22 */ _RF_S|_RF_A|_RF_X| _RF_SZ(32) | _RF_RS(0), /* 22 */ _RF_S|_RF_A|_RF_X| _RF_SZ(32) | _RF_RS(0), /* 13 */ _RF_S|_RF_A|_RF_X| _RF_SZ(32) | _RF_RS(0), /* LO10 */ _RF_G| _RF_SZ(32) | _RF_RS(0), /* GOT10 */ _RF_G| _RF_SZ(32) | _RF_RS(0), /* GOT13 */ _RF_G| _RF_SZ(32) | _RF_RS(10), /* GOT22 */ _RF_S|_RF_A|_RF_P| _RF_SZ(32) | _RF_RS(0), /* PC10 */ _RF_S|_RF_A|_RF_P| _RF_SZ(32) | _RF_RS(10), /* PC22 */ _RF_A|_RF_P| _RF_SZ(32) | _RF_RS(2), /* WPLT30 */ _RF_SZ(32) | _RF_RS(0), /* COPY */ _RF_S|_RF_A| _RF_SZ(64) | _RF_RS(0), /* GLOB_DAT */ _RF_SZ(32) | _RF_RS(0), /* JMP_SLOT */ _RF_A| _RF_B| _RF_SZ(64) | _RF_RS(0), /* RELATIVE */ _RF_S|_RF_A| _RF_U| _RF_SZ(32) | _RF_RS(0), /* UA_32 */ _RF_A| _RF_SZ(32) | _RF_RS(0), /* PLT32 */ _RF_A| _RF_SZ(32) | _RF_RS(10), /* HIPLT22 */ _RF_A| _RF_SZ(32) | _RF_RS(0), /* LOPLT10 */ _RF_A|_RF_P| _RF_SZ(32) | _RF_RS(0), /* PCPLT32 */ _RF_A|_RF_P| _RF_SZ(32) | _RF_RS(10), /* PCPLT22 */ _RF_A|_RF_P| _RF_SZ(32) | _RF_RS(0), /* PCPLT10 */ _RF_S|_RF_A|_RF_X| _RF_SZ(32) | _RF_RS(0), /* 10 */ _RF_S|_RF_A|_RF_X| _RF_SZ(32) | _RF_RS(0), /* 11 */ _RF_S|_RF_A|_RF_X| _RF_SZ(64) | _RF_RS(0), /* 64 */ _RF_S|_RF_A|/*extra*/ _RF_SZ(32) | _RF_RS(0), /* OLO10 */ _RF_S|_RF_A|_RF_X| _RF_SZ(32) | _RF_RS(42), /* HH22 */ _RF_S|_RF_A|_RF_X| _RF_SZ(32) | _RF_RS(32), /* HM10 */ _RF_S|_RF_A|_RF_X| _RF_SZ(32) | _RF_RS(10), /* LM22 */ _RF_S|_RF_A|_RF_P| _RF_SZ(32) | _RF_RS(42), /* PC_HH22 */ _RF_S|_RF_A|_RF_P| _RF_SZ(32) | _RF_RS(32), /* PC_HM10 */ _RF_S|_RF_A|_RF_P| _RF_SZ(32) | _RF_RS(10), /* PC_LM22 */ _RF_S|_RF_A|_RF_P| _RF_SZ(32) | _RF_RS(2), /* WDISP16 */ _RF_S|_RF_A|_RF_P| _RF_SZ(32) | _RF_RS(2), /* WDISP19 */ _RF_S|_RF_A| _RF_SZ(32) | _RF_RS(0), /* GLOB_JMP */ _RF_S|_RF_A|_RF_X| _RF_SZ(32) | _RF_RS(0), /* 7 */ _RF_S|_RF_A|_RF_X| _RF_SZ(32) | _RF_RS(0), /* 5 */ _RF_S|_RF_A|_RF_X| _RF_SZ(32) | _RF_RS(0), /* 6 */ _RF_S|_RF_A|_RF_P| _RF_SZ(64) | _RF_RS(0), /* DISP64 */ _RF_A| _RF_SZ(64) | _RF_RS(0), /* PLT64 */ _RF_S|_RF_A|_RF_X| _RF_SZ(32) | _RF_RS(10), /* HIX22 */ _RF_S|_RF_A|_RF_X| _RF_SZ(32) | _RF_RS(0), /* LOX10 */ _RF_S|_RF_A|_RF_X| _RF_SZ(32) | _RF_RS(22), /* H44 */ _RF_S|_RF_A|_RF_X| _RF_SZ(32) | _RF_RS(12), /* M44 */ _RF_S|_RF_A|_RF_X| _RF_SZ(32) | _RF_RS(0), /* L44 
*/ _RF_S|_RF_A| _RF_SZ(64) | _RF_RS(0), /* REGISTER */ _RF_S|_RF_A| _RF_U| _RF_SZ(64) | _RF_RS(0), /* UA64 */ _RF_S|_RF_A| _RF_U| _RF_SZ(16) | _RF_RS(0), /* UA16 */ #if 0 /* TLS */ _RF_S|_RF_A| _RF_SZ(32) | _RF_RS(10), /* GD_HI22 */ _RF_S|_RF_A| _RF_SZ(32) | _RF_RS(0), /* GD_LO10 */ 0, /* GD_ADD */ _RF_A|_RF_P| _RF_SZ(32) | _RF_RS(2), /* GD_CALL */ _RF_S|_RF_A| _RF_SZ(32) | _RF_RS(10), /* LDM_HI22 */ _RF_S|_RF_A| _RF_SZ(32) | _RF_RS(0), /* LDM_LO10 */ 0, /* LDM_ADD */ _RF_A|_RF_P| _RF_SZ(32) | _RF_RS(2), /* LDM_CALL */ _RF_S|_RF_A| _RF_SZ(32) | _RF_RS(10), /* LDO_HIX22 */ _RF_S|_RF_A| _RF_SZ(32) | _RF_RS(0), /* LDO_LOX10 */ 0, /* LDO_ADD */ _RF_S|_RF_A| _RF_SZ(32) | _RF_RS(10), /* IE_HI22 */ _RF_S|_RF_A| _RF_SZ(32) | _RF_RS(0), /* IE_LO10 */ 0, /* IE_LD */ 0, /* IE_LDX */ 0, /* IE_ADD */ _RF_S|_RF_A| _RF_O| _RF_SZ(32) | _RF_RS(10), /* LE_HIX22 */ _RF_S|_RF_A| _RF_O| _RF_SZ(32) | _RF_RS(0), /* LE_LOX10 */ _RF_S| _RF_I| _RF_SZ(32) | _RF_RS(0), /* DTPMOD32 */ _RF_S| _RF_I| _RF_SZ(64) | _RF_RS(0), /* DTPMOD64 */ _RF_S|_RF_A| _RF_D| _RF_SZ(32) | _RF_RS(0), /* DTPOFF32 */ _RF_S|_RF_A| _RF_D| _RF_SZ(64) | _RF_RS(0), /* DTPOFF64 */ _RF_S|_RF_A| _RF_O| _RF_SZ(32) | _RF_RS(0), /* TPOFF32 */ _RF_S|_RF_A| _RF_O| _RF_SZ(64) | _RF_RS(0) /* TPOFF64 */ #endif }; #if 0 static const char *const reloc_names[] = { "NONE", "8", "16", "32", "DISP_8", "DISP_16", "DISP_32", "WDISP_30", "WDISP_22", "HI22", "22", "13", "LO10", "GOT10", "GOT13", "GOT22", "PC10", "PC22", "WPLT30", "COPY", "GLOB_DAT", "JMP_SLOT", "RELATIVE", "UA_32", "PLT32", "HIPLT22", "LOPLT10", "LOPLT10", "PCPLT22", "PCPLT32", "10", "11", "64", "OLO10", "HH22", "HM10", "LM22", "PC_HH22", "PC_HM10", "PC_LM22", "WDISP16", "WDISP19", "GLOB_JMP", "7", "5", "6", "DISP64", "PLT64", "HIX22", "LOX10", "H44", "M44", "L44", "REGISTER", "UA64", "UA16", "GD_HI22", "GD_LO10", "GD_ADD", "GD_CALL", "LDM_HI22", "LDMO10", "LDM_ADD", "LDM_CALL", "LDO_HIX22", "LDO_LOX10", "LDO_ADD", "IE_HI22", "IE_LO10", "IE_LD", "IE_LDX", "IE_ADD", "LE_HIX22", "LE_LOX10", "DTPMOD32", "DTPMOD64", "DTPOFF32", "DTPOFF64", "TPOFF32", "TPOFF64" }; #endif #define RELOC_RESOLVE_SYMBOL(t) ((reloc_target_flags[t] & _RF_S) != 0) #define RELOC_PC_RELATIVE(t) ((reloc_target_flags[t] & _RF_P) != 0) #define RELOC_BASE_RELATIVE(t) ((reloc_target_flags[t] & _RF_B) != 0) #define RELOC_UNALIGNED(t) ((reloc_target_flags[t] & _RF_U) != 0) #define RELOC_USE_ADDEND(t) ((reloc_target_flags[t] & _RF_A) != 0) #define RELOC_BARE_SYMBOL(t) ((reloc_target_flags[t] & _RF_X) != 0) #define RELOC_USE_TLS_DOFF(t) ((reloc_target_flags[t] & _RF_D) != 0) #define RELOC_USE_TLS_OFF(t) ((reloc_target_flags[t] & _RF_O) != 0) #define RELOC_USE_TLS_ID(t) ((reloc_target_flags[t] & _RF_I) != 0) #define RELOC_TARGET_SIZE(t) ((reloc_target_flags[t] >> 8) & 0xff) #define RELOC_VALUE_RIGHTSHIFT(t) (reloc_target_flags[t] & 0xff) static const long reloc_target_bitmask[] = { #define _BM(x) (~(-(1ULL << (x)))) 0, /* NONE */ _BM(8), _BM(16), _BM(32), /* 8, 16, 32 */ _BM(8), _BM(16), _BM(32), /* DISP8, DISP16, DISP32 */ _BM(30), _BM(22), /* WDISP30, WDISP22 */ _BM(22), _BM(22), /* HI22, 22 */ _BM(13), _BM(10), /* 13, LO10 */ _BM(10), _BM(13), _BM(22), /* GOT10, GOT13, GOT22 */ _BM(10), _BM(22), /* PC10, PC22 */ _BM(30), 0, /* WPLT30, COPY */ _BM(32), _BM(32), _BM(32), /* GLOB_DAT, JMP_SLOT, RELATIVE */ _BM(32), _BM(32), /* UA32, PLT32 */ _BM(22), _BM(10), /* HIPLT22, LOPLT10 */ _BM(32), _BM(22), _BM(10), /* PCPLT32, PCPLT22, PCPLT10 */ _BM(10), _BM(11), -1, /* 10, 11, 64 */ _BM(13), _BM(22), /* OLO10, HH22 */ _BM(10), _BM(22), /* HM10, 
LM22 */ _BM(22), _BM(10), _BM(22), /* PC_HH22, PC_HM10, PC_LM22 */ _BM(16), _BM(19), /* WDISP16, WDISP19 */ -1, /* GLOB_JMP */ _BM(7), _BM(5), _BM(6), /* 7, 5, 6 */ -1, -1, /* DISP64, PLT64 */ _BM(22), _BM(13), /* HIX22, LOX10 */ _BM(22), _BM(10), _BM(13), /* H44, M44, L44 */ -1, -1, _BM(16), /* REGISTER, UA64, UA16 */ #if 0 _BM(22), _BM(10), 0, _BM(30), /* GD_HI22, GD_LO10, GD_ADD, GD_CALL */ _BM(22), _BM(10), 0, /* LDM_HI22, LDMO10, LDM_ADD */ _BM(30), /* LDM_CALL */ _BM(22), _BM(10), 0, /* LDO_HIX22, LDO_LOX10, LDO_ADD */ _BM(22), _BM(10), 0, 0, /* IE_HI22, IE_LO10, IE_LD, IE_LDX */ 0, /* IE_ADD */ _BM(22), _BM(13), /* LE_HIX22, LE_LOX10 */ _BM(32), -1, /* DTPMOD32, DTPMOD64 */ _BM(32), -1, /* DTPOFF32, DTPOFF64 */ _BM(32), -1 /* TPOFF32, TPOFF64 */ #endif #undef _BM }; #define RELOC_VALUE_BITMASK(t) (reloc_target_bitmask[t]) bool elf_is_ifunc_reloc(Elf_Size r_info __unused) { return (false); } int elf_reloc_local(linker_file_t lf, Elf_Addr relocbase, const void *data, int type, elf_lookup_fn lookup __unused) { const Elf_Rela *rela; Elf_Addr *where; if (type != ELF_RELOC_RELA) return (-1); rela = (const Elf_Rela *)data; if (ELF64_R_TYPE_ID(rela->r_info) != R_SPARC_RELATIVE) return (-1); where = (Elf_Addr *)(relocbase + rela->r_offset); *where = elf_relocaddr(lf, rela->r_addend + relocbase); return (0); } /* Process one elf relocation with addend. */ int elf_reloc(linker_file_t lf, Elf_Addr relocbase, const void *data, int type, elf_lookup_fn lookup) { const Elf_Rela *rela; Elf_Word *where32; Elf_Addr *where; Elf_Size rtype, symidx; Elf_Addr value; Elf_Addr mask; Elf_Addr addr; int error; if (type != ELF_RELOC_RELA) return (-1); rela = (const Elf_Rela *)data; where = (Elf_Addr *)(relocbase + rela->r_offset); where32 = (Elf_Word *)where; rtype = ELF64_R_TYPE_ID(rela->r_info); symidx = ELF_R_SYM(rela->r_info); if (rtype == R_SPARC_NONE || rtype == R_SPARC_RELATIVE) return (0); if (rtype == R_SPARC_JMP_SLOT || rtype == R_SPARC_COPY || rtype >= nitems(reloc_target_bitmask)) { printf("kldload: unexpected relocation type %ld\n", rtype); return (-1); } if (RELOC_UNALIGNED(rtype)) { printf("kldload: unaligned relocation type %ld\n", rtype); return (-1); } value = rela->r_addend; if (RELOC_RESOLVE_SYMBOL(rtype)) { error = lookup(lf, symidx, 1, &addr); if (error != 0) return (-1); value += addr; if (RELOC_BARE_SYMBOL(rtype)) value = elf_relocaddr(lf, value); } if (rtype == R_SPARC_OLO10) value = (value & 0x3ff) + ELF64_R_TYPE_DATA(rela->r_info); if (rtype == R_SPARC_HIX22) value ^= 0xffffffffffffffff; if (RELOC_PC_RELATIVE(rtype)) value -= (Elf_Addr)where; if (RELOC_BASE_RELATIVE(rtype)) value = elf_relocaddr(lf, value + relocbase); mask = RELOC_VALUE_BITMASK(rtype); value >>= RELOC_VALUE_RIGHTSHIFT(rtype); value &= mask; if (rtype == R_SPARC_LOX10) value |= 0x1c00; if (RELOC_TARGET_SIZE(rtype) > 32) { *where &= ~mask; *where |= value; } else { *where32 &= ~mask; *where32 |= value; } return (0); } int elf_cpu_load_file(linker_file_t lf __unused) { return (0); } int elf_cpu_unload_file(linker_file_t lf __unused) { return (0); } Index: projects/import-googletest-1.8.1/sys/vm/vm_fault.c =================================================================== --- projects/import-googletest-1.8.1/sys/vm/vm_fault.c (revision 344270) +++ projects/import-googletest-1.8.1/sys/vm/vm_fault.c (revision 344271) @@ -1,1822 +1,1824 @@ /*- * SPDX-License-Identifier: (BSD-4-Clause AND MIT-CMU) * * Copyright (c) 1991, 1993 * The Regents of the University of California. All rights reserved. * Copyright (c) 1994 John S. 
Dyson * All rights reserved. * Copyright (c) 1994 David Greenman * All rights reserved. * * * This code is derived from software contributed to Berkeley by * The Mach Operating System project at Carnegie-Mellon University. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * 3. All advertising materials mentioning features or use of this software * must display the following acknowledgement: * This product includes software developed by the University of * California, Berkeley and its contributors. * 4. Neither the name of the University nor the names of its contributors * may be used to endorse or promote products derived from this software * without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * from: @(#)vm_fault.c 8.4 (Berkeley) 1/12/94 * * * Copyright (c) 1987, 1990 Carnegie-Mellon University. * All rights reserved. * * Authors: Avadis Tevanian, Jr., Michael Wayne Young * * Permission to use, copy, modify and distribute this software and * its documentation is hereby granted, provided that both the copyright * notice and this permission notice appear in all copies of the * software, derivative works or modified versions, and any portions * thereof, and that both notices appear in supporting documentation. * * CARNEGIE MELLON ALLOWS FREE USE OF THIS SOFTWARE IN ITS "AS IS" * CONDITION. CARNEGIE MELLON DISCLAIMS ANY LIABILITY OF ANY KIND * FOR ANY DAMAGES WHATSOEVER RESULTING FROM THE USE OF THIS SOFTWARE. * * Carnegie Mellon requests users of this software to return to * * Software Distribution Coordinator or Software.Distribution@CS.CMU.EDU * School of Computer Science * Carnegie Mellon University * Pittsburgh PA 15213-3890 * * any improvements or extensions that they make and grant Carnegie the * rights to redistribute these changes. */ /* * Page fault handling module. 
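*/

/*
 * Editorial sketch, not part of the change: the retry discipline this
 * module uses throughout.  vm_fault_hold() samples the map's timestamp
 * right after vm_map_lookup(); whenever every lock had to be dropped
 * (to sleep, to call the pager, and so on), the timestamp is re-checked
 * and a stale lookup sends the fault back to RetryFault.  Standalone toy
 * version with an invented struct:
 */
struct toy_map { unsigned int timestamp; };

static int
example_lookup_still_valid(const struct toy_map *map, unsigned int generation)
{
	/* Mirrors the fs.map->timestamp != fs.map_generation checks below. */
	return (map->timestamp == generation);
}

/*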
*/ #include __FBSDID("$FreeBSD$"); #include "opt_ktrace.h" #include "opt_vm.h" #include #include #include #include #include #include #include #include #include #include #include #include #ifdef KTRACE #include #endif #include #include #include #include #include #include #include #include #include #include #include #define PFBAK 4 #define PFFOR 4 #define VM_FAULT_READ_DEFAULT (1 + VM_FAULT_READ_AHEAD_INIT) #define VM_FAULT_READ_MAX (1 + VM_FAULT_READ_AHEAD_MAX) #define VM_FAULT_DONTNEED_MIN 1048576 struct faultstate { vm_page_t m; vm_object_t object; vm_pindex_t pindex; vm_page_t first_m; vm_object_t first_object; vm_pindex_t first_pindex; vm_map_t map; vm_map_entry_t entry; int map_generation; bool lookup_still_valid; struct vnode *vp; }; static void vm_fault_dontneed(const struct faultstate *fs, vm_offset_t vaddr, int ahead); static void vm_fault_prefault(const struct faultstate *fs, vm_offset_t addra, int backward, int forward, bool obj_locked); static inline void release_page(struct faultstate *fs) { vm_page_xunbusy(fs->m); vm_page_lock(fs->m); vm_page_deactivate(fs->m); vm_page_unlock(fs->m); fs->m = NULL; } static inline void unlock_map(struct faultstate *fs) { if (fs->lookup_still_valid) { vm_map_lookup_done(fs->map, fs->entry); fs->lookup_still_valid = false; } } static void unlock_vp(struct faultstate *fs) { if (fs->vp != NULL) { vput(fs->vp); fs->vp = NULL; } } static void unlock_and_deallocate(struct faultstate *fs) { vm_object_pip_wakeup(fs->object); VM_OBJECT_WUNLOCK(fs->object); if (fs->object != fs->first_object) { VM_OBJECT_WLOCK(fs->first_object); vm_page_lock(fs->first_m); vm_page_free(fs->first_m); vm_page_unlock(fs->first_m); vm_object_pip_wakeup(fs->first_object); VM_OBJECT_WUNLOCK(fs->first_object); fs->first_m = NULL; } vm_object_deallocate(fs->first_object); unlock_map(fs); unlock_vp(fs); } static void vm_fault_dirty(vm_map_entry_t entry, vm_page_t m, vm_prot_t prot, vm_prot_t fault_type, int fault_flags, bool set_wd) { bool need_dirty; if (((prot & VM_PROT_WRITE) == 0 && (fault_flags & VM_FAULT_DIRTY) == 0) || (m->oflags & VPO_UNMANAGED) != 0) return; VM_OBJECT_ASSERT_LOCKED(m->object); need_dirty = ((fault_type & VM_PROT_WRITE) != 0 && (fault_flags & VM_FAULT_WIRE) == 0) || (fault_flags & VM_FAULT_DIRTY) != 0; if (set_wd) vm_object_set_writeable_dirty(m->object); else /* * If two callers of vm_fault_dirty() with set_wd == * FALSE, one for the map entry with MAP_ENTRY_NOSYNC * flag set, other with flag clear, race, it is * possible for the no-NOSYNC thread to see m->dirty * != 0 and not clear VPO_NOSYNC. Take vm_page lock * around manipulation of VPO_NOSYNC and * vm_page_dirty() call, to avoid the race and keep * m->oflags consistent. */ vm_page_lock(m); /* * If this is a NOSYNC mmap we do not want to set VPO_NOSYNC * if the page is already dirty to prevent data written with * the expectation of being synced from not being synced. * Likewise if this entry does not request NOSYNC then make * sure the page isn't marked NOSYNC. Applications sharing * data should use the same flags to avoid ping ponging. */ if ((entry->eflags & MAP_ENTRY_NOSYNC) != 0) { if (m->dirty == 0) { m->oflags |= VPO_NOSYNC; } } else { m->oflags &= ~VPO_NOSYNC; } /* * If the fault is a write, we know that this page is being * written NOW so dirty it explicitly to save on * pmap_is_modified() calls later. * * Also, since the page is now dirty, we can possibly tell * the pager to release any swap backing the page. Calling * the pager requires a write lock on the object. 
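*/

/*
 * Editorial sketch, not part of the change: the dirtying decision that
 * vm_fault_dirty() above implements, restated as a standalone predicate
 * using the flag names from this file.  A page is dirtied when the fault
 * actually writes it (wiring alone does not), or when the caller asked
 * for it explicitly with VM_FAULT_DIRTY.
 */
static bool
example_need_dirty(vm_prot_t fault_type, int fault_flags)
{
	if ((fault_flags & VM_FAULT_DIRTY) != 0)
		return (true);
	return ((fault_type & VM_PROT_WRITE) != 0 &&
	    (fault_flags & VM_FAULT_WIRE) == 0);
}

/*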
*/ if (need_dirty) vm_page_dirty(m); if (!set_wd) vm_page_unlock(m); else if (need_dirty) vm_pager_page_unswapped(m); } static void vm_fault_fill_hold(vm_page_t *m_hold, vm_page_t m) { if (m_hold != NULL) { *m_hold = m; vm_page_lock(m); vm_page_hold(m); vm_page_unlock(m); } } /* * Unlocks fs.first_object and fs.map on success. */ static int vm_fault_soft_fast(struct faultstate *fs, vm_offset_t vaddr, vm_prot_t prot, int fault_type, int fault_flags, boolean_t wired, vm_page_t *m_hold) { vm_page_t m, m_map; #if (defined(__aarch64__) || defined(__amd64__) || (defined(__arm__) && \ - __ARM_ARCH >= 6) || defined(__i386__)) && VM_NRESERVLEVEL > 0 + __ARM_ARCH >= 6) || defined(__i386__) || defined(__riscv)) && \ + VM_NRESERVLEVEL > 0 vm_page_t m_super; int flags; #endif int psind, rv; MPASS(fs->vp == NULL); m = vm_page_lookup(fs->first_object, fs->first_pindex); /* A busy page can be mapped for read|execute access. */ if (m == NULL || ((prot & VM_PROT_WRITE) != 0 && vm_page_busied(m)) || m->valid != VM_PAGE_BITS_ALL) return (KERN_FAILURE); m_map = m; psind = 0; #if (defined(__aarch64__) || defined(__amd64__) || (defined(__arm__) && \ - __ARM_ARCH >= 6) || defined(__i386__)) && VM_NRESERVLEVEL > 0 + __ARM_ARCH >= 6) || defined(__i386__) || defined(__riscv)) && \ + VM_NRESERVLEVEL > 0 if ((m->flags & PG_FICTITIOUS) == 0 && (m_super = vm_reserv_to_superpage(m)) != NULL && rounddown2(vaddr, pagesizes[m_super->psind]) >= fs->entry->start && roundup2(vaddr + 1, pagesizes[m_super->psind]) <= fs->entry->end && (vaddr & (pagesizes[m_super->psind] - 1)) == (VM_PAGE_TO_PHYS(m) & (pagesizes[m_super->psind] - 1)) && pmap_ps_enabled(fs->map->pmap)) { flags = PS_ALL_VALID; if ((prot & VM_PROT_WRITE) != 0) { /* * Create a superpage mapping allowing write access * only if none of the constituent pages are busy and * all of them are already dirty (except possibly for * the page that was faulted on). */ flags |= PS_NONE_BUSY; if ((fs->first_object->flags & OBJ_UNMANAGED) == 0) flags |= PS_ALL_DIRTY; } if (vm_page_ps_test(m_super, flags, m)) { m_map = m_super; psind = m_super->psind; vaddr = rounddown2(vaddr, pagesizes[psind]); /* Preset the modified bit for dirty superpages. */ if ((flags & PS_ALL_DIRTY) != 0) fault_type |= VM_PROT_WRITE; } } #endif rv = pmap_enter(fs->map->pmap, vaddr, m_map, prot, fault_type | PMAP_ENTER_NOSLEEP | (wired ? PMAP_ENTER_WIRED : 0), psind); if (rv != KERN_SUCCESS) return (rv); vm_fault_fill_hold(m_hold, m); vm_fault_dirty(fs->entry, m, prot, fault_type, fault_flags, false); if (psind == 0 && !wired) vm_fault_prefault(fs, vaddr, PFBAK, PFFOR, true); VM_OBJECT_RUNLOCK(fs->first_object); vm_map_lookup_done(fs->map, fs->entry); curthread->td_ru.ru_minflt++; return (KERN_SUCCESS); } static void vm_fault_restore_map_lock(struct faultstate *fs) { VM_OBJECT_ASSERT_WLOCKED(fs->first_object); MPASS(fs->first_object->paging_in_progress > 0); if (!vm_map_trylock_read(fs->map)) { VM_OBJECT_WUNLOCK(fs->first_object); vm_map_lock_read(fs->map); VM_OBJECT_WLOCK(fs->first_object); } fs->lookup_still_valid = true; } static void vm_fault_populate_check_page(vm_page_t m) { /* * Check each page to ensure that the pager is obeying the * interface: the page must be installed in the object, fully * valid, and exclusively busied. 
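*/

/*
 * Editorial sketch, not part of the change: the eligibility test used by
 * vm_fault_soft_fast() above (now compiled on riscv as well), isolated.
 * A superpage mapping is attempted only when the whole large page lies
 * inside the map entry and the virtual and physical addresses are
 * congruent modulo the superpage size.
 */
static bool
example_superpage_fits(vm_offset_t vaddr, vm_offset_t start, vm_offset_t end,
    vm_paddr_t pa, vm_size_t pagesize)
{
	return (rounddown2(vaddr, pagesize) >= start &&
	    roundup2(vaddr + 1, pagesize) <= end &&
	    (vaddr & (pagesize - 1)) == (pa & (pagesize - 1)));
}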
*/ MPASS(m != NULL); MPASS(m->valid == VM_PAGE_BITS_ALL); MPASS(vm_page_xbusied(m)); } static void vm_fault_populate_cleanup(vm_object_t object, vm_pindex_t first, vm_pindex_t last) { vm_page_t m; vm_pindex_t pidx; VM_OBJECT_ASSERT_WLOCKED(object); MPASS(first <= last); for (pidx = first, m = vm_page_lookup(object, pidx); pidx <= last; pidx++, m = vm_page_next(m)) { vm_fault_populate_check_page(m); vm_page_lock(m); vm_page_deactivate(m); vm_page_unlock(m); vm_page_xunbusy(m); } } static int vm_fault_populate(struct faultstate *fs, vm_prot_t prot, int fault_type, int fault_flags, boolean_t wired, vm_page_t *m_hold) { struct mtx *m_mtx; vm_offset_t vaddr; vm_page_t m; vm_pindex_t map_first, map_last, pager_first, pager_last, pidx; int i, npages, psind, rv; MPASS(fs->object == fs->first_object); VM_OBJECT_ASSERT_WLOCKED(fs->first_object); MPASS(fs->first_object->paging_in_progress > 0); MPASS(fs->first_object->backing_object == NULL); MPASS(fs->lookup_still_valid); pager_first = OFF_TO_IDX(fs->entry->offset); pager_last = pager_first + atop(fs->entry->end - fs->entry->start) - 1; unlock_map(fs); unlock_vp(fs); /* * Call the pager (driver) populate() method. * * There is no guarantee that the method will be called again * if the current fault is for read, and a future fault is * for write. Report the entry's maximum allowed protection * to the driver. */ rv = vm_pager_populate(fs->first_object, fs->first_pindex, fault_type, fs->entry->max_protection, &pager_first, &pager_last); VM_OBJECT_ASSERT_WLOCKED(fs->first_object); if (rv == VM_PAGER_BAD) { /* * VM_PAGER_BAD is the backdoor for a pager to request * normal fault handling. */ vm_fault_restore_map_lock(fs); if (fs->map->timestamp != fs->map_generation) return (KERN_RESOURCE_SHORTAGE); /* RetryFault */ return (KERN_NOT_RECEIVER); } if (rv != VM_PAGER_OK) return (KERN_FAILURE); /* AKA SIGSEGV */ /* Ensure that the driver is obeying the interface. */ MPASS(pager_first <= pager_last); MPASS(fs->first_pindex <= pager_last); MPASS(fs->first_pindex >= pager_first); MPASS(pager_last < fs->first_object->size); vm_fault_restore_map_lock(fs); if (fs->map->timestamp != fs->map_generation) { vm_fault_populate_cleanup(fs->first_object, pager_first, pager_last); return (KERN_RESOURCE_SHORTAGE); /* RetryFault */ } /* * The map is unchanged after our last unlock. Process the fault. * * The range [pager_first, pager_last] that is given to the * pager is only a hint. The pager may populate any range * within the object that includes the requested page index. * In case the pager expanded the range, clip it to fit into * the map entry. 
*/ map_first = OFF_TO_IDX(fs->entry->offset); if (map_first > pager_first) { vm_fault_populate_cleanup(fs->first_object, pager_first, map_first - 1); pager_first = map_first; } map_last = map_first + atop(fs->entry->end - fs->entry->start) - 1; if (map_last < pager_last) { vm_fault_populate_cleanup(fs->first_object, map_last + 1, pager_last); pager_last = map_last; } for (pidx = pager_first, m = vm_page_lookup(fs->first_object, pidx); pidx <= pager_last; pidx += npages, m = vm_page_next(&m[npages - 1])) { vaddr = fs->entry->start + IDX_TO_OFF(pidx) - fs->entry->offset; #if defined(__aarch64__) || defined(__amd64__) || (defined(__arm__) && \ - __ARM_ARCH >= 6) || defined(__i386__) + __ARM_ARCH >= 6) || defined(__i386__) || defined(__riscv) psind = m->psind; if (psind > 0 && ((vaddr & (pagesizes[psind] - 1)) != 0 || pidx + OFF_TO_IDX(pagesizes[psind]) - 1 > pager_last || !pmap_ps_enabled(fs->map->pmap))) psind = 0; #else psind = 0; #endif npages = atop(pagesizes[psind]); for (i = 0; i < npages; i++) { vm_fault_populate_check_page(&m[i]); vm_fault_dirty(fs->entry, &m[i], prot, fault_type, fault_flags, true); } VM_OBJECT_WUNLOCK(fs->first_object); pmap_enter(fs->map->pmap, vaddr, m, prot, fault_type | (wired ? PMAP_ENTER_WIRED : 0), psind); VM_OBJECT_WLOCK(fs->first_object); m_mtx = NULL; for (i = 0; i < npages; i++) { vm_page_change_lock(&m[i], &m_mtx); if ((fault_flags & VM_FAULT_WIRE) != 0) vm_page_wire(&m[i]); else vm_page_activate(&m[i]); if (m_hold != NULL && m[i].pindex == fs->first_pindex) { *m_hold = &m[i]; vm_page_hold(&m[i]); } vm_page_xunbusy_maybelocked(&m[i]); } if (m_mtx != NULL) mtx_unlock(m_mtx); } curthread->td_ru.ru_majflt++; return (KERN_SUCCESS); } /* * vm_fault: * * Handle a page fault occurring at the given address, * requiring the given permissions, in the map specified. * If successful, the page is inserted into the * associated physical map. * * NOTE: the given address should be truncated to the * proper page address. * * KERN_SUCCESS is returned if the page fault is handled; otherwise, * a standard error specifying why the fault is fatal is returned. * * The map in question must be referenced, and remains so. * Caller may hold no locks. */ int vm_fault(vm_map_t map, vm_offset_t vaddr, vm_prot_t fault_type, int fault_flags) { struct thread *td; int result; td = curthread; if ((td->td_pflags & TDP_NOFAULTING) != 0) return (KERN_PROTECTION_FAILURE); #ifdef KTRACE if (map != kernel_map && KTRPOINT(td, KTR_FAULT)) ktrfault(vaddr, fault_type); #endif result = vm_fault_hold(map, trunc_page(vaddr), fault_type, fault_flags, NULL); #ifdef KTRACE if (map != kernel_map && KTRPOINT(td, KTR_FAULTEND)) ktrfaultend(result); #endif return (result); } int vm_fault_hold(vm_map_t map, vm_offset_t vaddr, vm_prot_t fault_type, int fault_flags, vm_page_t *m_hold) { struct faultstate fs; struct vnode *vp; struct domainset *dset; vm_object_t next_object, retry_object; vm_offset_t e_end, e_start; vm_pindex_t retry_pindex; vm_prot_t prot, retry_prot; int ahead, alloc_req, behind, cluster_offset, error, era, faultcount; int locked, nera, result, rv; u_char behavior; boolean_t wired; /* Passed by reference. */ bool dead, hardfault, is_first_object_locked; VM_CNT_INC(v_vm_faults); fs.vp = NULL; faultcount = 0; nera = -1; hardfault = false; RetryFault:; /* * Find the backing store object and offset into it to begin the * search. 
*/ fs.map = map; result = vm_map_lookup(&fs.map, vaddr, fault_type | VM_PROT_FAULT_LOOKUP, &fs.entry, &fs.first_object, &fs.first_pindex, &prot, &wired); if (result != KERN_SUCCESS) { unlock_vp(&fs); return (result); } fs.map_generation = fs.map->timestamp; if (fs.entry->eflags & MAP_ENTRY_NOFAULT) { panic("%s: fault on nofault entry, addr: %#lx", __func__, (u_long)vaddr); } if (fs.entry->eflags & MAP_ENTRY_IN_TRANSITION && fs.entry->wiring_thread != curthread) { vm_map_unlock_read(fs.map); vm_map_lock(fs.map); if (vm_map_lookup_entry(fs.map, vaddr, &fs.entry) && (fs.entry->eflags & MAP_ENTRY_IN_TRANSITION)) { unlock_vp(&fs); fs.entry->eflags |= MAP_ENTRY_NEEDS_WAKEUP; vm_map_unlock_and_wait(fs.map, 0); } else vm_map_unlock(fs.map); goto RetryFault; } MPASS((fs.entry->eflags & MAP_ENTRY_GUARD) == 0); if (wired) fault_type = prot | (fault_type & VM_PROT_COPY); else KASSERT((fault_flags & VM_FAULT_WIRE) == 0, ("!wired && VM_FAULT_WIRE")); /* * Try to avoid lock contention on the top-level object through * special-case handling of some types of page faults, specifically, * those that are both (1) mapping an existing page from the top- * level object and (2) not having to mark that object as containing * dirty pages. Under these conditions, a read lock on the top-level * object suffices, allowing multiple page faults of a similar type to * run in parallel on the same top-level object. */ if (fs.vp == NULL /* avoid locked vnode leak */ && (fault_flags & (VM_FAULT_WIRE | VM_FAULT_DIRTY)) == 0 && /* avoid calling vm_object_set_writeable_dirty() */ ((prot & VM_PROT_WRITE) == 0 || (fs.first_object->type != OBJT_VNODE && (fs.first_object->flags & OBJ_TMPFS_NODE) == 0) || (fs.first_object->flags & OBJ_MIGHTBEDIRTY) != 0)) { VM_OBJECT_RLOCK(fs.first_object); if ((prot & VM_PROT_WRITE) == 0 || (fs.first_object->type != OBJT_VNODE && (fs.first_object->flags & OBJ_TMPFS_NODE) == 0) || (fs.first_object->flags & OBJ_MIGHTBEDIRTY) != 0) { rv = vm_fault_soft_fast(&fs, vaddr, prot, fault_type, fault_flags, wired, m_hold); if (rv == KERN_SUCCESS) return (rv); } if (!VM_OBJECT_TRYUPGRADE(fs.first_object)) { VM_OBJECT_RUNLOCK(fs.first_object); VM_OBJECT_WLOCK(fs.first_object); } } else { VM_OBJECT_WLOCK(fs.first_object); } /* * Make a reference to this object to prevent its disposal while we * are messing with it. Once we have the reference, the map is free * to be diddled. Since objects reference their shadows (and copies), * they will stay around as well. * * Bump the paging-in-progress count to prevent size changes (e.g. * truncation operations) during I/O. */ vm_object_reference_locked(fs.first_object); vm_object_pip_add(fs.first_object, 1); fs.lookup_still_valid = true; fs.first_m = NULL; /* * Search for the page at object/offset. */ fs.object = fs.first_object; fs.pindex = fs.first_pindex; while (TRUE) { /* * If the object is marked for imminent termination, * we retry here, since the collapse pass has raced * with us. Otherwise, if we see terminally dead * object, return fail. */ if ((fs.object->flags & OBJ_DEAD) != 0) { dead = fs.object->type == OBJT_DEAD; unlock_and_deallocate(&fs); if (dead) return (KERN_PROTECTION_FAILURE); pause("vmf_de", 1); goto RetryFault; } /* * See if page is resident */ fs.m = vm_page_lookup(fs.object, fs.pindex); if (fs.m != NULL) { /* * Wait/Retry if the page is busy. 
We have to do this * if the page is either exclusive or shared busy * because the vm_pager may be using read busy for * pageouts (and even pageins if it is the vnode * pager), and we could end up trying to pagein and * pageout the same page simultaneously. * * We can theoretically allow the busy case on a read * fault if the page is marked valid, but since such * pages are typically already pmap'd, putting that * special case in might be more effort than it is * worth. We cannot under any circumstances mess * around with a shared busied page except, perhaps, * to pmap it. */ if (vm_page_busied(fs.m)) { /* * Reference the page before unlocking and * sleeping so that the page daemon is less * likely to reclaim it. */ vm_page_aflag_set(fs.m, PGA_REFERENCED); if (fs.object != fs.first_object) { if (!VM_OBJECT_TRYWLOCK( fs.first_object)) { VM_OBJECT_WUNLOCK(fs.object); VM_OBJECT_WLOCK(fs.first_object); VM_OBJECT_WLOCK(fs.object); } vm_page_lock(fs.first_m); vm_page_free(fs.first_m); vm_page_unlock(fs.first_m); vm_object_pip_wakeup(fs.first_object); VM_OBJECT_WUNLOCK(fs.first_object); fs.first_m = NULL; } unlock_map(&fs); if (fs.m == vm_page_lookup(fs.object, fs.pindex)) { vm_page_sleep_if_busy(fs.m, "vmpfw"); } vm_object_pip_wakeup(fs.object); VM_OBJECT_WUNLOCK(fs.object); VM_CNT_INC(v_intrans); vm_object_deallocate(fs.first_object); goto RetryFault; } /* * Mark page busy for other processes, and the * pagedaemon. If it still isn't completely valid * (readable), jump to readrest, else break out (we * found the page). */ vm_page_xbusy(fs.m); if (fs.m->valid != VM_PAGE_BITS_ALL) goto readrest; break; /* break to PAGE HAS BEEN FOUND */ } KASSERT(fs.m == NULL, ("fs.m should be NULL, not %p", fs.m)); /* * Page is not resident. If the pager might contain the page * or this is the beginning of the search, allocate a new * page. (Default objects are zero-fill, so there is no real * pager for them.) */ if (fs.object->type != OBJT_DEFAULT || fs.object == fs.first_object) { if (fs.pindex >= fs.object->size) { unlock_and_deallocate(&fs); return (KERN_PROTECTION_FAILURE); } if (fs.object == fs.first_object && (fs.first_object->flags & OBJ_POPULATE) != 0 && fs.first_object->shadow_count == 0) { rv = vm_fault_populate(&fs, prot, fault_type, fault_flags, wired, m_hold); switch (rv) { case KERN_SUCCESS: case KERN_FAILURE: unlock_and_deallocate(&fs); return (rv); case KERN_RESOURCE_SHORTAGE: unlock_and_deallocate(&fs); goto RetryFault; case KERN_NOT_RECEIVER: /* * Pager's populate() method * returned VM_PAGER_BAD. */ break; default: panic("inconsistent return codes"); } } /* * Allocate a new page for this object/offset pair. * * Unlocked read of the p_flag is harmless. At * worst, the P_KILLED might not be observed * there, and allocation can fail, causing * restart and new reading of the p_flag. */ dset = fs.object->domain.dr_policy; if (dset == NULL) dset = curthread->td_domain.dr_policy; if (!vm_page_count_severe_set(&dset->ds_mask) || P_KILLED(curproc)) { #if VM_NRESERVLEVEL > 0 vm_object_color(fs.object, atop(vaddr) - fs.pindex); #endif alloc_req = P_KILLED(curproc) ? VM_ALLOC_SYSTEM : VM_ALLOC_NORMAL; if (fs.object->type != OBJT_VNODE && fs.object->backing_object == NULL) alloc_req |= VM_ALLOC_ZERO; fs.m = vm_page_alloc(fs.object, fs.pindex, alloc_req); } if (fs.m == NULL) { unlock_and_deallocate(&fs); vm_waitpfault(dset); goto RetryFault; } } readrest: /* * At this point, we have either allocated a new page or found * an existing page that is only partially valid.
* * We hold a reference on the current object and the page is * exclusive busied. */ /* * If the pager for the current object might have the page, * then determine the number of additional pages to read and * potentially reprioritize previously read pages for earlier * reclamation. These operations should only be performed * once per page fault. Even if the current pager doesn't * have the page, the number of additional pages to read will * apply to subsequent objects in the shadow chain. */ if (fs.object->type != OBJT_DEFAULT && nera == -1 && !P_KILLED(curproc)) { KASSERT(fs.lookup_still_valid, ("map unlocked")); era = fs.entry->read_ahead; behavior = vm_map_entry_behavior(fs.entry); if (behavior == MAP_ENTRY_BEHAV_RANDOM) { nera = 0; } else if (behavior == MAP_ENTRY_BEHAV_SEQUENTIAL) { nera = VM_FAULT_READ_AHEAD_MAX; if (vaddr == fs.entry->next_read) vm_fault_dontneed(&fs, vaddr, nera); } else if (vaddr == fs.entry->next_read) { /* * This is a sequential fault. Arithmetically * increase the requested number of pages in * the read-ahead window. The requested * number of pages is "# of sequential faults * x (read ahead min + 1) + read ahead min" */ nera = VM_FAULT_READ_AHEAD_MIN; if (era > 0) { nera += era + 1; if (nera > VM_FAULT_READ_AHEAD_MAX) nera = VM_FAULT_READ_AHEAD_MAX; } if (era == VM_FAULT_READ_AHEAD_MAX) vm_fault_dontneed(&fs, vaddr, nera); } else { /* * This is a non-sequential fault. */ nera = 0; } if (era != nera) { /* * A read lock on the map suffices to update * the read ahead count safely. */ fs.entry->read_ahead = nera; } /* * Prepare for unlocking the map. Save the map * entry's start and end addresses, which are used to * optimize the size of the pager operation below. * Even if the map entry's addresses change after * unlocking the map, using the saved addresses is * safe. */ e_start = fs.entry->start; e_end = fs.entry->end; } /* * Call the pager to retrieve the page if there is a chance * that the pager has it, and potentially retrieve additional * pages at the same time. */ if (fs.object->type != OBJT_DEFAULT) { /* * Release the map lock before locking the vnode or * sleeping in the pager. (If the current object has * a shadow, then an earlier iteration of this loop * may have already unlocked the map.) */ unlock_map(&fs); if (fs.object->type == OBJT_VNODE && (vp = fs.object->handle) != fs.vp) { /* * Perform an unlock in case the desired vnode * changed while the map was unlocked during a * retry. */ unlock_vp(&fs); locked = VOP_ISLOCKED(vp); if (locked != LK_EXCLUSIVE) locked = LK_SHARED; /* * We must not sleep acquiring the vnode lock * while we have the page exclusive busied or * the object's paging-in-progress count * incremented. Otherwise, we could deadlock. */ error = vget(vp, locked | LK_CANRECURSE | LK_NOWAIT, curthread); if (error != 0) { vhold(vp); release_page(&fs); unlock_and_deallocate(&fs); error = vget(vp, locked | LK_RETRY | LK_CANRECURSE, curthread); vdrop(vp); fs.vp = vp; KASSERT(error == 0, ("vm_fault: vget failed")); goto RetryFault; } fs.vp = vp; } KASSERT(fs.vp == NULL || !fs.map->system_map, ("vm_fault: vnode-backed object mapped by system map")); /* * Page in the requested page and hint the pager, * that it may bring up surrounding pages. */ if (nera == -1 || behavior == MAP_ENTRY_BEHAV_RANDOM || P_KILLED(curproc)) { behind = 0; ahead = 0; } else { /* Is this a sequential fault? 
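*/

/*
 * Editorial sketch, not part of the change: the read-ahead window growth
 * described above, isolated.  The bounds are taken as parameters here
 * because VM_FAULT_READ_AHEAD_MIN/MAX are defined elsewhere (vm_map.h).
 */
static int
example_next_read_ahead(int era, int ra_min, int ra_max)
{
	int nera;

	nera = ra_min;
	if (era > 0) {
		nera += era + 1;
		if (nera > ra_max)
			nera = ra_max;
	}
	return (nera);
}

/*
 * Successive sequential faults thus yield windows of ra_min, then
 * 2 * ra_min + 1, then 3 * ra_min + 2, and so on until clamped at
 * ra_max, i.e. the "# of sequential faults x (read ahead min + 1) +
 * read ahead min" progression quoted above.
 *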
*/ if (nera > 0) { behind = 0; ahead = nera; } else { /* * Request a cluster of pages that is * aligned to a VM_FAULT_READ_DEFAULT * page offset boundary within the * object. Alignment to a page offset * boundary is more likely to coincide * with the underlying file system * block than alignment to a virtual * address boundary. */ cluster_offset = fs.pindex % VM_FAULT_READ_DEFAULT; behind = ulmin(cluster_offset, atop(vaddr - e_start)); ahead = VM_FAULT_READ_DEFAULT - 1 - cluster_offset; } ahead = ulmin(ahead, atop(e_end - vaddr) - 1); } rv = vm_pager_get_pages(fs.object, &fs.m, 1, &behind, &ahead); if (rv == VM_PAGER_OK) { faultcount = behind + 1 + ahead; hardfault = true; break; /* break to PAGE HAS BEEN FOUND */ } if (rv == VM_PAGER_ERROR) printf("vm_fault: pager read error, pid %d (%s)\n", curproc->p_pid, curproc->p_comm); /* * If an I/O error occurred or the requested page was * outside the range of the pager, clean up and return * an error. */ if (rv == VM_PAGER_ERROR || rv == VM_PAGER_BAD) { vm_page_lock(fs.m); if (fs.m->wire_count == 0) vm_page_free(fs.m); else vm_page_xunbusy_maybelocked(fs.m); vm_page_unlock(fs.m); fs.m = NULL; unlock_and_deallocate(&fs); return (rv == VM_PAGER_ERROR ? KERN_FAILURE : KERN_PROTECTION_FAILURE); } /* * The requested page does not exist at this object/ * offset. Remove the invalid page from the object, * waking up anyone waiting for it, and continue on to * the next object. However, if this is the top-level * object, we must leave the busy page in place to * prevent another process from rushing past us, and * inserting the page in that object at the same time * that we are. */ if (fs.object != fs.first_object) { vm_page_lock(fs.m); if (fs.m->wire_count == 0) vm_page_free(fs.m); else vm_page_xunbusy_maybelocked(fs.m); vm_page_unlock(fs.m); fs.m = NULL; } } /* * We get here if the object has default pager (or unwiring) * or the pager doesn't have the page. */ if (fs.object == fs.first_object) fs.first_m = fs.m; /* * Move on to the next object. Lock the next object before * unlocking the current one. */ next_object = fs.object->backing_object; if (next_object == NULL) { /* * If there's no object left, fill the page in the top * object with zeros. */ if (fs.object != fs.first_object) { vm_object_pip_wakeup(fs.object); VM_OBJECT_WUNLOCK(fs.object); fs.object = fs.first_object; fs.pindex = fs.first_pindex; fs.m = fs.first_m; VM_OBJECT_WLOCK(fs.object); } fs.first_m = NULL; /* * Zero the page if necessary and mark it valid. */ if ((fs.m->flags & PG_ZERO) == 0) { pmap_zero_page(fs.m); } else { VM_CNT_INC(v_ozfod); } VM_CNT_INC(v_zfod); fs.m->valid = VM_PAGE_BITS_ALL; /* Don't try to prefault neighboring pages. */ faultcount = 1; break; /* break to PAGE HAS BEEN FOUND */ } else { KASSERT(fs.object != next_object, ("object loop %p", next_object)); VM_OBJECT_WLOCK(next_object); vm_object_pip_add(next_object, 1); if (fs.object != fs.first_object) vm_object_pip_wakeup(fs.object); fs.pindex += OFF_TO_IDX(fs.object->backing_object_offset); VM_OBJECT_WUNLOCK(fs.object); fs.object = next_object; } } vm_page_assert_xbusied(fs.m); /* * PAGE HAS BEEN FOUND. [Loop invariant still holds -- the object lock * is held.] */ /* * If the page is being written, but isn't already owned by the * top-level object, we have to copy it into a new page owned by the * top-level object. */ if (fs.object != fs.first_object) { /* * We only really need to copy if we want to write it. 
*/ if ((fault_type & (VM_PROT_COPY | VM_PROT_WRITE)) != 0) { /* * This allows pages to be virtually copied from a * backing_object into the first_object, where the * backing object has no other refs to it, and cannot * gain any more refs. Instead of a bcopy, we just * move the page from the backing object to the * first object. Note that we must mark the page * dirty in the first object so that it will go out * to swap when needed. */ is_first_object_locked = false; if ( /* * Only one shadow object */ (fs.object->shadow_count == 1) && /* * No COW refs, except us */ (fs.object->ref_count == 1) && /* * No one else can look this object up */ (fs.object->handle == NULL) && /* * No other ways to look the object up */ ((fs.object->type == OBJT_DEFAULT) || (fs.object->type == OBJT_SWAP)) && (is_first_object_locked = VM_OBJECT_TRYWLOCK(fs.first_object)) && /* * We don't chase down the shadow chain */ fs.object == fs.first_object->backing_object) { vm_page_lock(fs.m); vm_page_dequeue(fs.m); vm_page_remove(fs.m); vm_page_unlock(fs.m); vm_page_lock(fs.first_m); vm_page_replace_checked(fs.m, fs.first_object, fs.first_pindex, fs.first_m); vm_page_free(fs.first_m); vm_page_unlock(fs.first_m); vm_page_dirty(fs.m); #if VM_NRESERVLEVEL > 0 /* * Rename the reservation. */ vm_reserv_rename(fs.m, fs.first_object, fs.object, OFF_TO_IDX( fs.first_object->backing_object_offset)); #endif /* * Removing the page from the backing object * unbusied it. */ vm_page_xbusy(fs.m); fs.first_m = fs.m; fs.m = NULL; VM_CNT_INC(v_cow_optim); } else { /* * Oh, well, let's copy it. */ pmap_copy_page(fs.m, fs.first_m); fs.first_m->valid = VM_PAGE_BITS_ALL; if (wired && (fault_flags & VM_FAULT_WIRE) == 0) { vm_page_lock(fs.first_m); vm_page_wire(fs.first_m); vm_page_unlock(fs.first_m); vm_page_lock(fs.m); vm_page_unwire(fs.m, PQ_INACTIVE); vm_page_unlock(fs.m); } /* * We no longer need the old page or object. */ release_page(&fs); } /* * fs.object != fs.first_object due to above * conditional */ vm_object_pip_wakeup(fs.object); VM_OBJECT_WUNLOCK(fs.object); /* * We only try to prefault read-only mappings to the * neighboring pages when this copy-on-write fault is * a hard fault. In other cases, trying to prefault * is typically wasted effort. */ if (faultcount == 0) faultcount = 1; /* * Only use the new page below... */ fs.object = fs.first_object; fs.pindex = fs.first_pindex; fs.m = fs.first_m; if (!is_first_object_locked) VM_OBJECT_WLOCK(fs.object); VM_CNT_INC(v_cow_faults); curthread->td_cow++; } else { prot &= ~VM_PROT_WRITE; } } /* * We must verify that the maps have not changed since our last * lookup. */ if (!fs.lookup_still_valid) { if (!vm_map_trylock_read(fs.map)) { release_page(&fs); unlock_and_deallocate(&fs); goto RetryFault; } fs.lookup_still_valid = true; if (fs.map->timestamp != fs.map_generation) { result = vm_map_lookup_locked(&fs.map, vaddr, fault_type, &fs.entry, &retry_object, &retry_pindex, &retry_prot, &wired); /* * If we don't need the page any longer, put it on the inactive * list (the easiest thing to do here). If no one needs it, * pageout will grab it eventually. */ if (result != KERN_SUCCESS) { release_page(&fs); unlock_and_deallocate(&fs); /* * If retry of map lookup would have blocked then * retry fault from start.
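*/

/*
 * Editorial sketch, not part of the change: the guard for the COW
 * page-move optimization above, gathered into one predicate (the real
 * code also try-locks fs.first_object in the same expression, which is
 * omitted here).  Moving rather than copying the page is only safe when
 * nothing else can reach it through the backing object.
 */
static bool
example_can_move_cow_page(vm_object_t object, vm_object_t first_object)
{
	return (object->shadow_count == 1 && object->ref_count == 1 &&
	    object->handle == NULL &&
	    (object->type == OBJT_DEFAULT || object->type == OBJT_SWAP) &&
	    object == first_object->backing_object);
}

/*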
*/ if (result == KERN_FAILURE) goto RetryFault; return (result); } if ((retry_object != fs.first_object) || (retry_pindex != fs.first_pindex)) { release_page(&fs); unlock_and_deallocate(&fs); goto RetryFault; } /* * Check whether the protection has changed or the object has * been copied while we left the map unlocked. Changing from * read to write permission is OK - we leave the page * write-protected, and catch the write fault. Changing from * write to read permission means that we can't mark the page * write-enabled after all. */ prot &= retry_prot; fault_type &= retry_prot; if (prot == 0) { release_page(&fs); unlock_and_deallocate(&fs); goto RetryFault; } /* Reassert because wired may have changed. */ KASSERT(wired || (fault_flags & VM_FAULT_WIRE) == 0, ("!wired && VM_FAULT_WIRE")); } } /* * If the page was filled by a pager, save the virtual address that * should be faulted on next under a sequential access pattern to the * map entry. A read lock on the map suffices to update this address * safely. */ if (hardfault) fs.entry->next_read = vaddr + ptoa(ahead) + PAGE_SIZE; vm_fault_dirty(fs.entry, fs.m, prot, fault_type, fault_flags, true); vm_page_assert_xbusied(fs.m); /* * Page must be completely valid or it is not fit to * map into user space. vm_pager_get_pages() ensures this. */ KASSERT(fs.m->valid == VM_PAGE_BITS_ALL, ("vm_fault: page %p partially invalid", fs.m)); VM_OBJECT_WUNLOCK(fs.object); /* * Put this page into the physical map. We had to do the unlock above * because pmap_enter() may sleep. We don't put the page * back on the active queue until later so that the pageout daemon * won't find it (yet). */ pmap_enter(fs.map->pmap, vaddr, fs.m, prot, fault_type | (wired ? PMAP_ENTER_WIRED : 0), 0); if (faultcount != 1 && (fault_flags & VM_FAULT_WIRE) == 0 && wired == 0) vm_fault_prefault(&fs, vaddr, faultcount > 0 ? behind : PFBAK, faultcount > 0 ? ahead : PFFOR, false); VM_OBJECT_WLOCK(fs.object); vm_page_lock(fs.m); /* * If the page is not wired down, then put it where the pageout daemon * can find it. */ if ((fault_flags & VM_FAULT_WIRE) != 0) vm_page_wire(fs.m); else vm_page_activate(fs.m); if (m_hold != NULL) { *m_hold = fs.m; vm_page_hold(fs.m); } vm_page_unlock(fs.m); vm_page_xunbusy(fs.m); /* * Unlock everything, and return */ unlock_and_deallocate(&fs); if (hardfault) { VM_CNT_INC(v_io_faults); curthread->td_ru.ru_majflt++; #ifdef RACCT if (racct_enable && fs.object->type == OBJT_VNODE) { PROC_LOCK(curproc); if ((fault_type & (VM_PROT_COPY | VM_PROT_WRITE)) != 0) { racct_add_force(curproc, RACCT_WRITEBPS, PAGE_SIZE + behind * PAGE_SIZE); racct_add_force(curproc, RACCT_WRITEIOPS, 1); } else { racct_add_force(curproc, RACCT_READBPS, PAGE_SIZE + ahead * PAGE_SIZE); racct_add_force(curproc, RACCT_READIOPS, 1); } PROC_UNLOCK(curproc); } #endif } else curthread->td_ru.ru_minflt++; return (KERN_SUCCESS); } /* * Speed up the reclamation of pages that precede the faulting pindex within * the first object of the shadow chain. Essentially, perform the equivalent * to madvise(..., MADV_DONTNEED) on a large cluster of pages that precedes * the faulting pindex by the cluster size when the pages read by vm_fault() * cross a cluster-size boundary. The cluster size is the greater of the * smallest superpage size and VM_FAULT_DONTNEED_MIN. * * When "fs->first_object" is a shadow object, the pages in the backing object * that precede the faulting pindex are deactivated by vm_fault(). So, this * function must only be concerned with pages in the first object. 
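*/

/*
 * Editorial sketch, not part of the change: the cluster arithmetic used
 * by vm_fault_dontneed() below.  With VM_FAULT_DONTNEED_MIN = 1048576
 * (defined above) and 2 MB superpages (a typical pagesizes[1] value,
 * assumed here for illustration), the cluster size is 2 MB, so the
 * advice fires only when the pages read for a fault cross a 2 MB
 * boundary.
 */
static vm_offset_t
example_dontneed_cluster_end(vm_offset_t vaddr, vm_size_t superpage_size)
{
	vm_size_t size;

	size = VM_FAULT_DONTNEED_MIN;
	if (superpage_size > size)
		size = superpage_size;
	/* "end" in the code below: the cluster boundary preceding vaddr. */
	return (rounddown2(vaddr, size));
}

/*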
*/ static void vm_fault_dontneed(const struct faultstate *fs, vm_offset_t vaddr, int ahead) { vm_map_entry_t entry; vm_object_t first_object, object; vm_offset_t end, start; vm_page_t m, m_next; vm_pindex_t pend, pstart; vm_size_t size; object = fs->object; VM_OBJECT_ASSERT_WLOCKED(object); first_object = fs->first_object; if (first_object != object) { if (!VM_OBJECT_TRYWLOCK(first_object)) { VM_OBJECT_WUNLOCK(object); VM_OBJECT_WLOCK(first_object); VM_OBJECT_WLOCK(object); } } /* Neither fictitious nor unmanaged pages can be reclaimed. */ if ((first_object->flags & (OBJ_FICTITIOUS | OBJ_UNMANAGED)) == 0) { size = VM_FAULT_DONTNEED_MIN; if (MAXPAGESIZES > 1 && size < pagesizes[1]) size = pagesizes[1]; end = rounddown2(vaddr, size); if (vaddr - end >= size - PAGE_SIZE - ptoa(ahead) && (entry = fs->entry)->start < end) { if (end - entry->start < size) start = entry->start; else start = end - size; pmap_advise(fs->map->pmap, start, end, MADV_DONTNEED); pstart = OFF_TO_IDX(entry->offset) + atop(start - entry->start); m_next = vm_page_find_least(first_object, pstart); pend = OFF_TO_IDX(entry->offset) + atop(end - entry->start); while ((m = m_next) != NULL && m->pindex < pend) { m_next = TAILQ_NEXT(m, listq); if (m->valid != VM_PAGE_BITS_ALL || vm_page_busied(m)) continue; /* * Don't clear PGA_REFERENCED, since it would * likely represent a reference by a different * process. * * Typically, at this point, prefetched pages * are still in the inactive queue. Only * pages that triggered page faults are in the * active queue. */ vm_page_lock(m); if (!vm_page_inactive(m)) vm_page_deactivate(m); vm_page_unlock(m); } } } if (first_object != object) VM_OBJECT_WUNLOCK(first_object); } /* * vm_fault_prefault provides a quick way of clustering * pagefaults into a processes address space. It is a "cousin" * of vm_map_pmap_enter, except it runs at page fault time instead * of mmap time. */ static void vm_fault_prefault(const struct faultstate *fs, vm_offset_t addra, int backward, int forward, bool obj_locked) { pmap_t pmap; vm_map_entry_t entry; vm_object_t backing_object, lobject; vm_offset_t addr, starta; vm_pindex_t pindex; vm_page_t m; int i; pmap = fs->map->pmap; if (pmap != vmspace_pmap(curthread->td_proc->p_vmspace)) return; entry = fs->entry; if (addra < backward * PAGE_SIZE) { starta = entry->start; } else { starta = addra - backward * PAGE_SIZE; if (starta < entry->start) starta = entry->start; } /* * Generate the sequence of virtual addresses that are candidates for * prefaulting in an outward spiral from the faulting virtual address, * "addra". Specifically, the sequence is "addra - PAGE_SIZE", "addra * + PAGE_SIZE", "addra - 2 * PAGE_SIZE", "addra + 2 * PAGE_SIZE", ... * If the candidate address doesn't have a backing physical page, then * the loop immediately terminates. */ for (i = 0; i < 2 * imax(backward, forward); i++) { addr = addra + ((i >> 1) + 1) * ((i & 1) == 0 ? 
-PAGE_SIZE : PAGE_SIZE); if (addr > addra + forward * PAGE_SIZE) addr = 0; if (addr < starta || addr >= entry->end) continue; if (!pmap_is_prefaultable(pmap, addr)) continue; pindex = ((addr - entry->start) + entry->offset) >> PAGE_SHIFT; lobject = entry->object.vm_object; if (!obj_locked) VM_OBJECT_RLOCK(lobject); while ((m = vm_page_lookup(lobject, pindex)) == NULL && lobject->type == OBJT_DEFAULT && (backing_object = lobject->backing_object) != NULL) { KASSERT((lobject->backing_object_offset & PAGE_MASK) == 0, ("vm_fault_prefault: unaligned object offset")); pindex += lobject->backing_object_offset >> PAGE_SHIFT; VM_OBJECT_RLOCK(backing_object); if (!obj_locked || lobject != entry->object.vm_object) VM_OBJECT_RUNLOCK(lobject); lobject = backing_object; } if (m == NULL) { if (!obj_locked || lobject != entry->object.vm_object) VM_OBJECT_RUNLOCK(lobject); break; } if (m->valid == VM_PAGE_BITS_ALL && (m->flags & PG_FICTITIOUS) == 0) pmap_enter_quick(pmap, addr, m, entry->protection); if (!obj_locked || lobject != entry->object.vm_object) VM_OBJECT_RUNLOCK(lobject); } } /* * Hold each of the physical pages that are mapped by the specified range of * virtual addresses, ["addr", "addr" + "len"), if those mappings are valid * and allow the specified types of access, "prot". If all of the implied * pages are successfully held, then the number of held pages is returned * together with pointers to those pages in the array "ma". However, if any * of the pages cannot be held, -1 is returned. */ int vm_fault_quick_hold_pages(vm_map_t map, vm_offset_t addr, vm_size_t len, vm_prot_t prot, vm_page_t *ma, int max_count) { vm_offset_t end, va; vm_page_t *mp; int count; boolean_t pmap_failed; if (len == 0) return (0); end = round_page(addr + len); addr = trunc_page(addr); /* * Check for illegal addresses. */ if (addr < vm_map_min(map) || addr > end || end > vm_map_max(map)) return (-1); if (atop(end - addr) > max_count) panic("vm_fault_quick_hold_pages: count > max_count"); count = atop(end - addr); /* * Most likely, the physical pages are resident in the pmap, so it is * faster to try pmap_extract_and_hold() first. */ pmap_failed = FALSE; for (mp = ma, va = addr; va < end; mp++, va += PAGE_SIZE) { *mp = pmap_extract_and_hold(map->pmap, va, prot); if (*mp == NULL) pmap_failed = TRUE; else if ((prot & VM_PROT_WRITE) != 0 && (*mp)->dirty != VM_PAGE_BITS_ALL) { /* * Explicitly dirty the physical page. Otherwise, the * caller's changes may go unnoticed because they are * performed through an unmanaged mapping or by a DMA * operation. * * The object lock is not held here. * See vm_page_clear_dirty_mask(). */ vm_page_dirty(*mp); } } if (pmap_failed) { /* * One or more pages could not be held by the pmap. Either no * page was mapped at the specified virtual address or that * mapping had insufficient permissions. Attempt to fault in * and hold these pages. * * If vm_fault_disable_pagefaults() was called, * i.e., TDP_NOFAULTING is set, we must not sleep nor * acquire MD VM locks, which means we must not call * vm_fault_hold(). Some (out of tree) callers mark * too wide a code area with vm_fault_disable_pagefaults() * already, use the VM_PROT_QUICK_NOFAULT flag to request * the proper behaviour explicitly. 
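 *
 * For reference, a minimal sketch of a typical caller (error
 * handling elided; "uaddr" and "len" are assumed inputs) that pins
 * a user buffer, e.g. for unmanaged DMA, and later releases it:
 *
 *	vm_page_t ma[MAXPHYS / PAGE_SIZE];
 *	int cnt;
 *
 *	cnt = vm_fault_quick_hold_pages(&curproc->p_vmspace->vm_map,
 *	    uaddr, len, VM_PROT_READ | VM_PROT_WRITE, ma, nitems(ma));
 *	if (cnt == -1)
 *		return (EFAULT);
 *	... perform the I/O on the held pages ...
 *	vm_page_unhold_pages(ma, cnt);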
*/ if ((prot & VM_PROT_QUICK_NOFAULT) != 0 && (curthread->td_pflags & TDP_NOFAULTING) != 0) goto error; for (mp = ma, va = addr; va < end; mp++, va += PAGE_SIZE) if (*mp == NULL && vm_fault_hold(map, va, prot, VM_FAULT_NORMAL, mp) != KERN_SUCCESS) goto error; } return (count); error: for (mp = ma; mp < ma + count; mp++) if (*mp != NULL) { vm_page_lock(*mp); vm_page_unhold(*mp); vm_page_unlock(*mp); } return (-1); } /* * Routine: * vm_fault_copy_entry * Function: * Create new shadow object backing dst_entry with private copy of * all underlying pages. When src_entry is equal to dst_entry, * function implements COW for wired-down map entry. Otherwise, * it forks wired entry into dst_map. * * In/out conditions: * The source and destination maps must be locked for write. * The source map entry must be wired down (or be a sharing map * entry corresponding to a main map entry that is wired down). */ void vm_fault_copy_entry(vm_map_t dst_map, vm_map_t src_map, vm_map_entry_t dst_entry, vm_map_entry_t src_entry, vm_ooffset_t *fork_charge) { vm_object_t backing_object, dst_object, object, src_object; vm_pindex_t dst_pindex, pindex, src_pindex; vm_prot_t access, prot; vm_offset_t vaddr; vm_page_t dst_m; vm_page_t src_m; boolean_t upgrade; #ifdef lint src_map++; #endif /* lint */ upgrade = src_entry == dst_entry; access = prot = dst_entry->protection; src_object = src_entry->object.vm_object; src_pindex = OFF_TO_IDX(src_entry->offset); if (upgrade && (dst_entry->eflags & MAP_ENTRY_NEEDS_COPY) == 0) { dst_object = src_object; vm_object_reference(dst_object); } else { /* * Create the top-level object for the destination entry. (Doesn't * actually shadow anything - we copy the pages directly.) */ dst_object = vm_object_allocate(OBJT_DEFAULT, atop(dst_entry->end - dst_entry->start)); #if VM_NRESERVLEVEL > 0 dst_object->flags |= OBJ_COLORED; dst_object->pg_color = atop(dst_entry->start); #endif dst_object->domain = src_object->domain; dst_object->charge = dst_entry->end - dst_entry->start; } VM_OBJECT_WLOCK(dst_object); KASSERT(upgrade || dst_entry->object.vm_object == NULL, ("vm_fault_copy_entry: vm_object not NULL")); if (src_object != dst_object) { dst_entry->object.vm_object = dst_object; dst_entry->offset = 0; } if (fork_charge != NULL) { KASSERT(dst_entry->cred == NULL, ("vm_fault_copy_entry: leaked swp charge")); dst_object->cred = curthread->td_ucred; crhold(dst_object->cred); *fork_charge += dst_object->charge; } else if ((dst_object->type == OBJT_DEFAULT || dst_object->type == OBJT_SWAP) && dst_object->cred == NULL) { KASSERT(dst_entry->cred != NULL, ("no cred for entry %p", dst_entry)); dst_object->cred = dst_entry->cred; dst_entry->cred = NULL; } /* * If not an upgrade, then enter the mappings in the pmap as * read and/or execute accesses. Otherwise, enter them as * write accesses. * * A writeable large page mapping is only created if all of * the constituent small page mappings are modified. Marking * PTEs as modified on inception allows promotion to happen * without taking potentially large number of soft faults. */ if (!upgrade) access &= ~VM_PROT_WRITE; /* * Loop through all of the virtual pages within the entry's * range, copying each page from the source object to the * destination object. Since the source is wired, those pages * must exist. In contrast, the destination is pageable. * Since the destination object doesn't share any backing storage * with the source object, all of its pages must be dirtied, * regardless of whether they can be written. 
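 *
 * (The reason every page must be dirtied: the pages are filled by
 * pmap_copy_page() below rather than through write faults, so the
 * normal modified-bit tracking never marks them; without the
 * explicit dirtying they could be reclaimed before ever reaching
 * swap, losing the copied data.)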
*/ for (vaddr = dst_entry->start, dst_pindex = 0; vaddr < dst_entry->end; vaddr += PAGE_SIZE, dst_pindex++) { again: /* * Find the page in the source object, and copy it in. * Because the source is wired down, the page will be * in memory. */ if (src_object != dst_object) VM_OBJECT_RLOCK(src_object); object = src_object; pindex = src_pindex + dst_pindex; while ((src_m = vm_page_lookup(object, pindex)) == NULL && (backing_object = object->backing_object) != NULL) { /* * Unless the source mapping is read-only or * it is presently being upgraded from * read-only, the first object in the shadow * chain should provide all of the pages. In * other words, this loop body should never be * executed when the source mapping is already * read/write. */ KASSERT((src_entry->protection & VM_PROT_WRITE) == 0 || upgrade, ("vm_fault_copy_entry: main object missing page")); VM_OBJECT_RLOCK(backing_object); pindex += OFF_TO_IDX(object->backing_object_offset); if (object != dst_object) VM_OBJECT_RUNLOCK(object); object = backing_object; } KASSERT(src_m != NULL, ("vm_fault_copy_entry: page missing")); if (object != dst_object) { /* * Allocate a page in the destination object. */ dst_m = vm_page_alloc(dst_object, (src_object == dst_object ? src_pindex : 0) + dst_pindex, VM_ALLOC_NORMAL); if (dst_m == NULL) { VM_OBJECT_WUNLOCK(dst_object); VM_OBJECT_RUNLOCK(object); vm_wait(dst_object); VM_OBJECT_WLOCK(dst_object); goto again; } pmap_copy_page(src_m, dst_m); VM_OBJECT_RUNLOCK(object); dst_m->valid = VM_PAGE_BITS_ALL; dst_m->dirty = VM_PAGE_BITS_ALL; } else { dst_m = src_m; if (vm_page_sleep_if_busy(dst_m, "fltupg")) goto again; if (dst_m->pindex >= dst_object->size) /* * We are upgrading. Index can occur * out of bounds if the object type is * vnode and the file was truncated. */ break; vm_page_xbusy(dst_m); KASSERT(dst_m->valid == VM_PAGE_BITS_ALL, ("invalid dst page %p", dst_m)); } VM_OBJECT_WUNLOCK(dst_object); /* * Enter it in the pmap. If a wired, copy-on-write * mapping is being replaced by a write-enabled * mapping, then wire that new mapping. */ pmap_enter(dst_map->pmap, vaddr, dst_m, prot, access | (upgrade ? PMAP_ENTER_WIRED : 0), 0); /* * Mark it no longer busy, and put it on the active list. */ VM_OBJECT_WLOCK(dst_object); if (upgrade) { if (src_m != dst_m) { vm_page_lock(src_m); vm_page_unwire(src_m, PQ_INACTIVE); vm_page_unlock(src_m); vm_page_lock(dst_m); vm_page_wire(dst_m); vm_page_unlock(dst_m); } else { KASSERT(dst_m->wire_count > 0, ("dst_m %p is not wired", dst_m)); } } else { vm_page_lock(dst_m); vm_page_activate(dst_m); vm_page_unlock(dst_m); } vm_page_xunbusy(dst_m); } VM_OBJECT_WUNLOCK(dst_object); if (upgrade) { dst_entry->eflags &= ~(MAP_ENTRY_COW | MAP_ENTRY_NEEDS_COPY); vm_object_deallocate(src_object); } } /* * Block entry into the machine-independent layer's page fault handler by * the calling thread. Subsequent calls to vm_fault() by that thread will * return KERN_PROTECTION_FAILURE. Enable machine-dependent handling of * spurious page faults. 
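 *
 * A minimal usage sketch (illustrative; it assumes a nofault copy
 * primitive such as copyin_nofault() is available to the caller):
 *
 *	int error, save;
 *
 *	save = vm_fault_disable_pagefaults();
 *	error = copyin_nofault(uaddr, kaddr, len);
 *	vm_fault_enable_pagefaults(save);
 *	if (error != 0)
 *		... fall back to a path that is allowed to fault ...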
*/ int vm_fault_disable_pagefaults(void) { return (curthread_pflags_set(TDP_NOFAULTING | TDP_RESETSPUR)); } void vm_fault_enable_pagefaults(int save) { curthread_pflags_restore(save); } Index: projects/import-googletest-1.8.1/sys/vm/vm_map.c =================================================================== --- projects/import-googletest-1.8.1/sys/vm/vm_map.c (revision 344270) +++ projects/import-googletest-1.8.1/sys/vm/vm_map.c (revision 344271) @@ -1,4555 +1,4570 @@ /*- * SPDX-License-Identifier: (BSD-3-Clause AND MIT-CMU) * * Copyright (c) 1991, 1993 * The Regents of the University of California. All rights reserved. * * This code is derived from software contributed to Berkeley by * The Mach Operating System project at Carnegie-Mellon University. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * 3. Neither the name of the University nor the names of its contributors * may be used to endorse or promote products derived from this software * without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * from: @(#)vm_map.c 8.3 (Berkeley) 1/12/94 * * * Copyright (c) 1987, 1990 Carnegie-Mellon University. * All rights reserved. * * Authors: Avadis Tevanian, Jr., Michael Wayne Young * * Permission to use, copy, modify and distribute this software and * its documentation is hereby granted, provided that both the copyright * notice and this permission notice appear in all copies of the * software, derivative works or modified versions, and any portions * thereof, and that both notices appear in supporting documentation. * * CARNEGIE MELLON ALLOWS FREE USE OF THIS SOFTWARE IN ITS "AS IS" * CONDITION. CARNEGIE MELLON DISCLAIMS ANY LIABILITY OF ANY KIND * FOR ANY DAMAGES WHATSOEVER RESULTING FROM THE USE OF THIS SOFTWARE. * * Carnegie Mellon requests users of this software to return to * * Software Distribution Coordinator or Software.Distribution@CS.CMU.EDU * School of Computer Science * Carnegie Mellon University * Pittsburgh PA 15213-3890 * * any improvements or extensions that they make and grant Carnegie the * rights to redistribute these changes. */ /* * Virtual memory mapping module. 
*/ #include __FBSDID("$FreeBSD$"); #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include /* * Virtual memory maps provide for the mapping, protection, * and sharing of virtual memory objects. In addition, * this module provides for an efficient virtual copy of * memory from one map to another. * * Synchronization is required prior to most operations. * * Maps consist of an ordered doubly-linked list of simple * entries; a self-adjusting binary search tree of these * entries is used to speed up lookups. * * Since portions of maps are specified by start/end addresses, * which may not align with existing map entries, all * routines merely "clip" entries to these start/end values. * [That is, an entry is split into two, bordering at a * start or end value.] Note that these clippings may not * always be necessary (as the two resulting entries are then * not changed); however, the clipping is done for convenience. * * As mentioned above, virtual copy operations are performed * by copying VM object references from one map to * another, and then marking both regions as copy-on-write. */ static struct mtx map_sleep_mtx; static uma_zone_t mapentzone; static uma_zone_t kmapentzone; static uma_zone_t mapzone; static uma_zone_t vmspace_zone; static int vmspace_zinit(void *mem, int size, int flags); static int vm_map_zinit(void *mem, int size, int flags); static void _vm_map_init(vm_map_t map, pmap_t pmap, vm_offset_t min, vm_offset_t max); static int vm_map_alignspace(vm_map_t map, vm_object_t object, vm_ooffset_t offset, vm_offset_t *addr, vm_size_t length, vm_offset_t max_addr, vm_offset_t alignment); static void vm_map_entry_deallocate(vm_map_entry_t entry, boolean_t system_map); static void vm_map_entry_dispose(vm_map_t map, vm_map_entry_t entry); static void vm_map_entry_unwire(vm_map_t map, vm_map_entry_t entry); static int vm_map_growstack(vm_map_t map, vm_offset_t addr, vm_map_entry_t gap_entry); static void vm_map_pmap_enter(vm_map_t map, vm_offset_t addr, vm_prot_t prot, vm_object_t object, vm_pindex_t pindex, vm_size_t size, int flags); #ifdef INVARIANTS static void vm_map_zdtor(void *mem, int size, void *arg); static void vmspace_zdtor(void *mem, int size, void *arg); #endif static int vm_map_stack_locked(vm_map_t map, vm_offset_t addrbos, vm_size_t max_ssize, vm_size_t growsize, vm_prot_t prot, vm_prot_t max, int cow); static void vm_map_wire_entry_failure(vm_map_t map, vm_map_entry_t entry, vm_offset_t failed_addr); #define ENTRY_CHARGED(e) ((e)->cred != NULL || \ ((e)->object.vm_object != NULL && (e)->object.vm_object->cred != NULL && \ !((e)->eflags & MAP_ENTRY_NEEDS_COPY))) /* * PROC_VMSPACE_{UN,}LOCK() can be a noop as long as vmspaces are type * stable. */ #define PROC_VMSPACE_LOCK(p) do { } while (0) #define PROC_VMSPACE_UNLOCK(p) do { } while (0) /* * VM_MAP_RANGE_CHECK: [ internal use only ] * * Asserts that the starting and ending region * addresses fall within the valid range of the map. */ #define VM_MAP_RANGE_CHECK(map, start, end) \ { \ if (start < vm_map_min(map)) \ start = vm_map_min(map); \ if (end > vm_map_max(map)) \ end = vm_map_max(map); \ if (start > end) \ start = end; \ } /* * vm_map_startup: * * Initialize the vm_map module. Must be called before * any other vm_map routines.
* * Map and entry structures are allocated from the general * purpose memory pool with some exceptions: * * - The kernel map and kmem submap are allocated statically. * - Kernel map entries are allocated out of a static pool. * * These restrictions are necessary since malloc() uses the * maps and requires map entries. */ void vm_map_startup(void) { mtx_init(&map_sleep_mtx, "vm map sleep mutex", NULL, MTX_DEF); mapzone = uma_zcreate("MAP", sizeof(struct vm_map), NULL, #ifdef INVARIANTS vm_map_zdtor, #else NULL, #endif vm_map_zinit, NULL, UMA_ALIGN_PTR, UMA_ZONE_NOFREE); uma_prealloc(mapzone, MAX_KMAP); kmapentzone = uma_zcreate("KMAP ENTRY", sizeof(struct vm_map_entry), NULL, NULL, NULL, NULL, UMA_ALIGN_PTR, UMA_ZONE_MTXCLASS | UMA_ZONE_VM); mapentzone = uma_zcreate("MAP ENTRY", sizeof(struct vm_map_entry), NULL, NULL, NULL, NULL, UMA_ALIGN_PTR, 0); vmspace_zone = uma_zcreate("VMSPACE", sizeof(struct vmspace), NULL, #ifdef INVARIANTS vmspace_zdtor, #else NULL, #endif vmspace_zinit, NULL, UMA_ALIGN_PTR, UMA_ZONE_NOFREE); } static int vmspace_zinit(void *mem, int size, int flags) { struct vmspace *vm; vm = (struct vmspace *)mem; vm->vm_map.pmap = NULL; (void)vm_map_zinit(&vm->vm_map, sizeof(vm->vm_map), flags); PMAP_LOCK_INIT(vmspace_pmap(vm)); return (0); } static int vm_map_zinit(void *mem, int size, int flags) { vm_map_t map; map = (vm_map_t)mem; memset(map, 0, sizeof(*map)); mtx_init(&map->system_mtx, "vm map (system)", NULL, MTX_DEF | MTX_DUPOK); sx_init(&map->lock, "vm map (user)"); return (0); } #ifdef INVARIANTS static void vmspace_zdtor(void *mem, int size, void *arg) { struct vmspace *vm; vm = (struct vmspace *)mem; vm_map_zdtor(&vm->vm_map, sizeof(vm->vm_map), arg); } static void vm_map_zdtor(void *mem, int size, void *arg) { vm_map_t map; map = (vm_map_t)mem; KASSERT(map->nentries == 0, ("map %p nentries == %d on free.", map, map->nentries)); KASSERT(map->size == 0, ("map %p size == %lu on free.", map, (unsigned long)map->size)); } #endif /* INVARIANTS */ /* * Allocate a vmspace structure, including a vm_map and pmap, * and initialize those structures. The refcnt is set to 1. * * If 'pinit' is NULL then the embedded pmap is initialized via pmap_pinit(). */ struct vmspace * vmspace_alloc(vm_offset_t min, vm_offset_t max, pmap_pinit_t pinit) { struct vmspace *vm; vm = uma_zalloc(vmspace_zone, M_WAITOK); KASSERT(vm->vm_map.pmap == NULL, ("vm_map.pmap must be NULL")); if (!pinit(vmspace_pmap(vm))) { uma_zfree(vmspace_zone, vm); return (NULL); } CTR1(KTR_VM, "vmspace_alloc: %p", vm); _vm_map_init(&vm->vm_map, vmspace_pmap(vm), min, max); vm->vm_refcnt = 1; vm->vm_shm = NULL; vm->vm_swrss = 0; vm->vm_tsize = 0; vm->vm_dsize = 0; vm->vm_ssize = 0; vm->vm_taddr = 0; vm->vm_daddr = 0; vm->vm_maxsaddr = 0; return (vm); } #ifdef RACCT static void vmspace_container_reset(struct proc *p) { PROC_LOCK(p); racct_set(p, RACCT_DATA, 0); racct_set(p, RACCT_STACK, 0); racct_set(p, RACCT_RSS, 0); racct_set(p, RACCT_MEMLOCK, 0); racct_set(p, RACCT_VMEM, 0); PROC_UNLOCK(p); } #endif static inline void vmspace_dofree(struct vmspace *vm) { CTR1(KTR_VM, "vmspace_free: %p", vm); /* * Make sure any SysV shm is freed, it might not have been in * exit1(). */ shmexit(vm); /* * Lock the map, to wait out all other references to it. * Delete all of the mappings and pages they hold, then call * the pmap module to reclaim anything left. 
*/ (void)vm_map_remove(&vm->vm_map, vm_map_min(&vm->vm_map), vm_map_max(&vm->vm_map)); pmap_release(vmspace_pmap(vm)); vm->vm_map.pmap = NULL; uma_zfree(vmspace_zone, vm); } void vmspace_free(struct vmspace *vm) { WITNESS_WARN(WARN_GIANTOK | WARN_SLEEPOK, NULL, "vmspace_free() called"); if (vm->vm_refcnt == 0) panic("vmspace_free: attempt to free already freed vmspace"); if (atomic_fetchadd_int(&vm->vm_refcnt, -1) == 1) vmspace_dofree(vm); } void vmspace_exitfree(struct proc *p) { struct vmspace *vm; PROC_VMSPACE_LOCK(p); vm = p->p_vmspace; p->p_vmspace = NULL; PROC_VMSPACE_UNLOCK(p); KASSERT(vm == &vmspace0, ("vmspace_exitfree: wrong vmspace")); vmspace_free(vm); } void vmspace_exit(struct thread *td) { int refcnt; struct vmspace *vm; struct proc *p; /* * Release user portion of address space. * This releases references to vnodes, * which could cause I/O if the file has been unlinked. * Need to do this early enough that we can still sleep. * * The last exiting process to reach this point releases as * much of the environment as it can. vmspace_dofree() is the * slower fallback in case another process had a temporary * reference to the vmspace. */ p = td->td_proc; vm = p->p_vmspace; atomic_add_int(&vmspace0.vm_refcnt, 1); refcnt = vm->vm_refcnt; do { if (refcnt > 1 && p->p_vmspace != &vmspace0) { /* Switch now since other proc might free vmspace */ PROC_VMSPACE_LOCK(p); p->p_vmspace = &vmspace0; PROC_VMSPACE_UNLOCK(p); pmap_activate(td); } } while (!atomic_fcmpset_int(&vm->vm_refcnt, &refcnt, refcnt - 1)); if (refcnt == 1) { if (p->p_vmspace != vm) { /* vmspace not yet freed, switch back */ PROC_VMSPACE_LOCK(p); p->p_vmspace = vm; PROC_VMSPACE_UNLOCK(p); pmap_activate(td); } pmap_remove_pages(vmspace_pmap(vm)); /* Switch now since this proc will free vmspace */ PROC_VMSPACE_LOCK(p); p->p_vmspace = &vmspace0; PROC_VMSPACE_UNLOCK(p); pmap_activate(td); vmspace_dofree(vm); } #ifdef RACCT if (racct_enable) vmspace_container_reset(p); #endif } /* Acquire reference to vmspace owned by another process. */ struct vmspace * vmspace_acquire_ref(struct proc *p) { struct vmspace *vm; int refcnt; PROC_VMSPACE_LOCK(p); vm = p->p_vmspace; if (vm == NULL) { PROC_VMSPACE_UNLOCK(p); return (NULL); } refcnt = vm->vm_refcnt; do { if (refcnt <= 0) { /* Avoid 0->1 transition */ PROC_VMSPACE_UNLOCK(p); return (NULL); } } while (!atomic_fcmpset_int(&vm->vm_refcnt, &refcnt, refcnt + 1)); if (vm != p->p_vmspace) { PROC_VMSPACE_UNLOCK(p); vmspace_free(vm); return (NULL); } PROC_VMSPACE_UNLOCK(p); return (vm); } /* * Switch between vmspaces in an AIO kernel process. * * The AIO kernel processes switch to and from a user process's * vmspace while performing an I/O operation on behalf of a user * process. The new vmspace is either the vmspace of a user process * obtained from an active AIO request or the initial vmspace of the * AIO kernel process (when it is idling). Because user processes * will block to drain any active AIO requests before proceeding in * exit() or execve(), the vmspace reference count for these vmspaces * can never be 0. This allows for a much simpler implementation than * the loop in vmspace_acquire_ref() above. Similarly, AIO kernel * processes hold an extra reference on their initial vmspace for the * life of the process so that this guarantee is true for any vmspace * passed as 'newvm'. */ void vmspace_switch_aio(struct vmspace *newvm) { struct vmspace *oldvm; /* XXX: Need some way to assert that this is an aio daemon. 
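 *
 * A sketch of the intended call pattern from an AIO daemon (the
 * "job" naming is illustrative, not from this file):
 *
 *	myvm = curproc->p_vmspace;
 *	vmspace_switch_aio(job->uservm);
 *	... perform the I/O in the user process's vmspace ...
 *	vmspace_switch_aio(myvm);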
*/ KASSERT(newvm->vm_refcnt > 0, ("vmspace_switch_aio: newvm unreferenced")); oldvm = curproc->p_vmspace; if (oldvm == newvm) return; /* * Point to the new address space and refer to it. */ curproc->p_vmspace = newvm; atomic_add_int(&newvm->vm_refcnt, 1); /* Activate the new mapping. */ pmap_activate(curthread); /* Remove the daemon's reference to the old address space. */ KASSERT(oldvm->vm_refcnt > 1, ("vmspace_switch_aio: oldvm dropping last reference")); vmspace_free(oldvm); } void _vm_map_lock(vm_map_t map, const char *file, int line) { if (map->system_map) mtx_lock_flags_(&map->system_mtx, 0, file, line); else sx_xlock_(&map->lock, file, line); map->timestamp++; } static void vm_map_process_deferred(void) { struct thread *td; vm_map_entry_t entry, next; vm_object_t object; td = curthread; entry = td->td_map_def_user; td->td_map_def_user = NULL; while (entry != NULL) { next = entry->next; if ((entry->eflags & MAP_ENTRY_VN_WRITECNT) != 0) { /* * Decrement the object's writemappings and * possibly the vnode's v_writecount. */ KASSERT((entry->eflags & MAP_ENTRY_IS_SUB_MAP) == 0, ("Submap with writecount")); object = entry->object.vm_object; KASSERT(object != NULL, ("No object for writecount")); vnode_pager_release_writecount(object, entry->start, entry->end); } vm_map_entry_deallocate(entry, FALSE); entry = next; } } void _vm_map_unlock(vm_map_t map, const char *file, int line) { if (map->system_map) mtx_unlock_flags_(&map->system_mtx, 0, file, line); else { sx_xunlock_(&map->lock, file, line); vm_map_process_deferred(); } } void _vm_map_lock_read(vm_map_t map, const char *file, int line) { if (map->system_map) mtx_lock_flags_(&map->system_mtx, 0, file, line); else sx_slock_(&map->lock, file, line); } void _vm_map_unlock_read(vm_map_t map, const char *file, int line) { if (map->system_map) mtx_unlock_flags_(&map->system_mtx, 0, file, line); else { sx_sunlock_(&map->lock, file, line); vm_map_process_deferred(); } } int _vm_map_trylock(vm_map_t map, const char *file, int line) { int error; error = map->system_map ? !mtx_trylock_flags_(&map->system_mtx, 0, file, line) : !sx_try_xlock_(&map->lock, file, line); if (error == 0) map->timestamp++; return (error == 0); } int _vm_map_trylock_read(vm_map_t map, const char *file, int line) { int error; error = map->system_map ? !mtx_trylock_flags_(&map->system_mtx, 0, file, line) : !sx_try_slock_(&map->lock, file, line); return (error == 0); } /* * _vm_map_lock_upgrade: [ internal use only ] * * Tries to upgrade a read (shared) lock on the specified map to a write * (exclusive) lock. Returns the value "0" if the upgrade succeeds and a * non-zero value if the upgrade fails. If the upgrade fails, the map is * returned without a read or write lock held. * * Requires that the map be read locked. */ int _vm_map_lock_upgrade(vm_map_t map, const char *file, int line) { unsigned int last_timestamp; if (map->system_map) { mtx_assert_(&map->system_mtx, MA_OWNED, file, line); } else { if (!sx_try_upgrade_(&map->lock, file, line)) { last_timestamp = map->timestamp; sx_sunlock_(&map->lock, file, line); vm_map_process_deferred(); /* * If the map's timestamp does not change while the * map is unlocked, then the upgrade succeeds. 
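 *
 * The usual caller pattern is a sketch like the following, assuming
 * that the lookup preceding the upgrade must be redone whenever the
 * upgrade is lost:
 *
 *	if (vm_map_lock_upgrade(map) != 0) {
 *		vm_map_lock_read(map);
 *		goto retry_lookup;
 *	}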
*/ sx_xlock_(&map->lock, file, line); if (last_timestamp != map->timestamp) { sx_xunlock_(&map->lock, file, line); return (1); } } } map->timestamp++; return (0); } void _vm_map_lock_downgrade(vm_map_t map, const char *file, int line) { if (map->system_map) { mtx_assert_(&map->system_mtx, MA_OWNED, file, line); } else sx_downgrade_(&map->lock, file, line); } /* * vm_map_locked: * * Returns a non-zero value if the caller holds a write (exclusive) lock * on the specified map and the value "0" otherwise. */ int vm_map_locked(vm_map_t map) { if (map->system_map) return (mtx_owned(&map->system_mtx)); else return (sx_xlocked(&map->lock)); } #ifdef INVARIANTS static void _vm_map_assert_locked(vm_map_t map, const char *file, int line) { if (map->system_map) mtx_assert_(&map->system_mtx, MA_OWNED, file, line); else sx_assert_(&map->lock, SA_XLOCKED, file, line); } #define VM_MAP_ASSERT_LOCKED(map) \ _vm_map_assert_locked(map, LOCK_FILE, LOCK_LINE) #else #define VM_MAP_ASSERT_LOCKED(map) #endif /* * _vm_map_unlock_and_wait: * * Atomically releases the lock on the specified map and puts the calling * thread to sleep. The calling thread will remain asleep until either * vm_map_wakeup() is performed on the map or the specified timeout is * exceeded. * * WARNING! This function does not perform deferred deallocations of * objects and map entries. Therefore, the calling thread is expected to * reacquire the map lock after reawakening and later perform an ordinary * unlock operation, such as vm_map_unlock(), before completing its * operation on the map. */ int _vm_map_unlock_and_wait(vm_map_t map, int timo, const char *file, int line) { mtx_lock(&map_sleep_mtx); if (map->system_map) mtx_unlock_flags_(&map->system_mtx, 0, file, line); else sx_xunlock_(&map->lock, file, line); return (msleep(&map->root, &map_sleep_mtx, PDROP | PVM, "vmmaps", timo)); } /* * vm_map_wakeup: * * Awaken any threads that have slept on the map using * vm_map_unlock_and_wait(). */ void vm_map_wakeup(vm_map_t map) { /* * Acquire and release map_sleep_mtx to prevent a wakeup() * from being performed (and lost) between the map unlock * and the msleep() in _vm_map_unlock_and_wait(). */ mtx_lock(&map_sleep_mtx); mtx_unlock(&map_sleep_mtx); wakeup(&map->root); } void vm_map_busy(vm_map_t map) { VM_MAP_ASSERT_LOCKED(map); map->busy++; } void vm_map_unbusy(vm_map_t map) { VM_MAP_ASSERT_LOCKED(map); KASSERT(map->busy, ("vm_map_unbusy: not busy")); if (--map->busy == 0 && (map->flags & MAP_BUSY_WAKEUP)) { vm_map_modflags(map, 0, MAP_BUSY_WAKEUP); wakeup(&map->busy); } } void vm_map_wait_busy(vm_map_t map) { VM_MAP_ASSERT_LOCKED(map); while (map->busy) { vm_map_modflags(map, MAP_BUSY_WAKEUP, 0); if (map->system_map) msleep(&map->busy, &map->system_mtx, 0, "mbusy", 0); else sx_sleep(&map->busy, &map->lock, 0, "mbusy", 0); } map->timestamp++; } long vmspace_resident_count(struct vmspace *vmspace) { return pmap_resident_count(vmspace_pmap(vmspace)); } /* * vm_map_create: * * Creates and returns a new empty VM map with * the given physical map structure, and having * the given lower and upper address bounds. */ vm_map_t vm_map_create(pmap_t pmap, vm_offset_t min, vm_offset_t max) { vm_map_t result; result = uma_zalloc(mapzone, M_WAITOK); CTR1(KTR_VM, "vm_map_create: %p", result); _vm_map_init(result, pmap, min, max); return (result); } /* * Initialize an existing vm_map structure * such as that in the vmspace structure. 
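 *
 * (The explicit lock initialization in vm_map_init() below serves
 * maps whose storage did not come from mapzone or vmspace_zone;
 * for zone-allocated maps, vm_map_zinit() has already set up the
 * mutex and the sx lock.)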
*/ static void _vm_map_init(vm_map_t map, pmap_t pmap, vm_offset_t min, vm_offset_t max) { map->header.next = map->header.prev = &map->header; map->header.eflags = MAP_ENTRY_HEADER; map->needs_wakeup = FALSE; map->system_map = 0; map->pmap = pmap; map->header.end = min; map->header.start = max; map->flags = 0; map->root = NULL; map->timestamp = 0; map->busy = 0; map->anon_loc = 0; } void vm_map_init(vm_map_t map, pmap_t pmap, vm_offset_t min, vm_offset_t max) { _vm_map_init(map, pmap, min, max); mtx_init(&map->system_mtx, "system map", NULL, MTX_DEF | MTX_DUPOK); sx_init(&map->lock, "user map"); } /* * vm_map_entry_dispose: [ internal use only ] * * Inverse of vm_map_entry_create. */ static void vm_map_entry_dispose(vm_map_t map, vm_map_entry_t entry) { uma_zfree(map->system_map ? kmapentzone : mapentzone, entry); } /* * vm_map_entry_create: [ internal use only ] * * Allocates a VM map entry for insertion. * No entry fields are filled in. */ static vm_map_entry_t vm_map_entry_create(vm_map_t map) { vm_map_entry_t new_entry; if (map->system_map) new_entry = uma_zalloc(kmapentzone, M_NOWAIT); else new_entry = uma_zalloc(mapentzone, M_WAITOK); if (new_entry == NULL) panic("vm_map_entry_create: kernel resources exhausted"); return (new_entry); } /* * vm_map_entry_set_behavior: * * Set the expected access behavior, either normal, random, or * sequential. */ static inline void vm_map_entry_set_behavior(vm_map_entry_t entry, u_char behavior) { entry->eflags = (entry->eflags & ~MAP_ENTRY_BEHAV_MASK) | (behavior & MAP_ENTRY_BEHAV_MASK); } /* * vm_map_entry_set_max_free: * * Set the max_free field in a vm_map_entry. */ static inline void vm_map_entry_set_max_free(vm_map_entry_t entry) { entry->max_free = entry->adj_free; if (entry->left != NULL && entry->left->max_free > entry->max_free) entry->max_free = entry->left->max_free; if (entry->right != NULL && entry->right->max_free > entry->max_free) entry->max_free = entry->right->max_free; } /* * vm_map_entry_splay: * * The Sleator and Tarjan top-down splay algorithm with the * following variation. Max_free must be computed bottom-up, so * on the downward pass, maintain the left and right spines in * reverse order. Then, make a second pass up each side to fix * the pointers and compute max_free. The time bound is O(log n) * amortized. * * The new root is the vm_map_entry containing "addr", or else an * adjacent entry (lower or higher) if addr is not in the tree. * * The map must be locked, and leaves it so. * * Returns: the new root. */ static vm_map_entry_t vm_map_entry_splay(vm_offset_t addr, vm_map_entry_t root) { vm_map_entry_t llist, rlist; vm_map_entry_t ltree, rtree; vm_map_entry_t y; /* Special case of empty tree. */ if (root == NULL) return (root); /* * Pass One: Splay down the tree until we find addr or a NULL * pointer where addr would go. llist and rlist are the two * sides in reverse order (bottom-up), with llist linked by * the right pointer and rlist linked by the left pointer in * the vm_map_entry. Wait until Pass Two to set max_free on * the two spines. */ llist = NULL; rlist = NULL; for (;;) { /* root is never NULL in here. */ if (addr < root->start) { y = root->left; if (y == NULL) break; if (addr < y->start && y->left != NULL) { /* Rotate right and put y on rlist. */ root->left = y->right; y->right = root; vm_map_entry_set_max_free(root); root = y->left; y->left = rlist; rlist = y; } else { /* Put root on rlist. 
*/ root->left = rlist; rlist = root; root = y; } } else if (addr >= root->end) { y = root->right; if (y == NULL) break; if (addr >= y->end && y->right != NULL) { /* Rotate left and put y on llist. */ root->right = y->left; y->left = root; vm_map_entry_set_max_free(root); root = y->right; y->right = llist; llist = y; } else { /* Put root on llist. */ root->right = llist; llist = root; root = y; } } else break; } /* * Pass Two: Walk back up the two spines, flip the pointers * and set max_free. The subtrees of the root go at the * bottom of llist and rlist. */ ltree = root->left; while (llist != NULL) { y = llist->right; llist->right = ltree; vm_map_entry_set_max_free(llist); ltree = llist; llist = y; } rtree = root->right; while (rlist != NULL) { y = rlist->left; rlist->left = rtree; vm_map_entry_set_max_free(rlist); rtree = rlist; rlist = y; } /* * Final assembly: add ltree and rtree as subtrees of root. */ root->left = ltree; root->right = rtree; vm_map_entry_set_max_free(root); return (root); } /* * vm_map_entry_{un,}link: * * Insert/remove entries from maps. */ static void vm_map_entry_link(vm_map_t map, vm_map_entry_t after_where, vm_map_entry_t entry) { CTR4(KTR_VM, "vm_map_entry_link: map %p, nentries %d, entry %p, after %p", map, map->nentries, entry, after_where); VM_MAP_ASSERT_LOCKED(map); KASSERT(after_where->end <= entry->start, ("vm_map_entry_link: prev end %jx new start %jx overlap", (uintmax_t)after_where->end, (uintmax_t)entry->start)); KASSERT(entry->end <= after_where->next->start, ("vm_map_entry_link: new end %jx next start %jx overlap", (uintmax_t)entry->end, (uintmax_t)after_where->next->start)); map->nentries++; entry->prev = after_where; entry->next = after_where->next; entry->next->prev = entry; after_where->next = entry; if (after_where != &map->header) { if (after_where != map->root) vm_map_entry_splay(after_where->start, map->root); entry->right = after_where->right; entry->left = after_where; after_where->right = NULL; after_where->adj_free = entry->start - after_where->end; vm_map_entry_set_max_free(after_where); } else { entry->right = map->root; entry->left = NULL; } entry->adj_free = entry->next->start - entry->end; vm_map_entry_set_max_free(entry); map->root = entry; } static void vm_map_entry_unlink(vm_map_t map, vm_map_entry_t entry) { vm_map_entry_t next, prev, root; VM_MAP_ASSERT_LOCKED(map); if (entry != map->root) vm_map_entry_splay(entry->start, map->root); if (entry->left == NULL) root = entry->right; else { root = vm_map_entry_splay(entry->start, entry->left); root->right = entry->right; root->adj_free = entry->next->start - root->end; vm_map_entry_set_max_free(root); } map->root = root; prev = entry->prev; next = entry->next; next->prev = prev; prev->next = next; map->nentries--; CTR3(KTR_VM, "vm_map_entry_unlink: map %p, nentries %d, entry %p", map, map->nentries, entry); } /* * vm_map_entry_resize_free: * * Recompute the amount of free space following a vm_map_entry * and propagate that value up the tree. Call this function after * resizing a map entry in-place, that is, without a call to * vm_map_entry_link() or _unlink(). * * The map must be locked, and leaves it so. */ static void vm_map_entry_resize_free(vm_map_t map, vm_map_entry_t entry) { /* * Using splay trees without parent pointers, propagating * max_free up the tree is done by moving the entry to the * root and making the change there. 
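 *
 * A small worked example (addresses purely illustrative): for
 * entries A = [4K, 8K) and B = [16K, 20K) whose successor starts
 * at 32K, A->adj_free = 16K - 8K = 8K and B->adj_free =
 * 32K - 20K = 12K; the max_free of a subtree containing both is
 * 12K, so a request for more than 12K can skip that entire subtree
 * without visiting either entry.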
*/ if (entry != map->root) map->root = vm_map_entry_splay(entry->start, map->root); entry->adj_free = entry->next->start - entry->end; vm_map_entry_set_max_free(entry); } /* * vm_map_lookup_entry: [ internal use only ] * * Finds the map entry containing (or * immediately preceding) the specified address * in the given map; the entry is returned * in the "entry" parameter. The boolean * result indicates whether the address is * actually contained in the map. */ boolean_t vm_map_lookup_entry( vm_map_t map, vm_offset_t address, vm_map_entry_t *entry) /* OUT */ { vm_map_entry_t cur; boolean_t locked; /* * If the map is empty, then the map entry immediately preceding * "address" is the map's header. */ cur = map->root; if (cur == NULL) *entry = &map->header; else if (address >= cur->start && cur->end > address) { *entry = cur; return (TRUE); } else if ((locked = vm_map_locked(map)) || sx_try_upgrade(&map->lock)) { /* * Splay requires a write lock on the map. However, it only * restructures the binary search tree; it does not otherwise * change the map. Thus, the map's timestamp need not change * on a temporary upgrade. */ map->root = cur = vm_map_entry_splay(address, cur); if (!locked) sx_downgrade(&map->lock); /* * If "address" is contained within a map entry, the new root * is that map entry. Otherwise, the new root is a map entry * immediately before or after "address". */ if (address >= cur->start) { *entry = cur; if (cur->end > address) return (TRUE); } else *entry = cur->prev; } else /* * Since the map is only locked for read access, perform a * standard binary search tree lookup for "address". */ for (;;) { if (address < cur->start) { if (cur->left == NULL) { *entry = cur->prev; break; } cur = cur->left; } else if (cur->end > address) { *entry = cur; return (TRUE); } else { if (cur->right == NULL) { *entry = cur; break; } cur = cur->right; } } return (FALSE); } /* * vm_map_insert: * * Inserts the given whole VM object into the target * map at the specified address range. The object's * size should match that of the address range. * * Requires that the map be locked, and leaves it so. * * If object is non-NULL, ref count must be bumped by caller * prior to making call to account for the new entry. */ int vm_map_insert(vm_map_t map, vm_object_t object, vm_ooffset_t offset, vm_offset_t start, vm_offset_t end, vm_prot_t prot, vm_prot_t max, int cow) { vm_map_entry_t new_entry, prev_entry, temp_entry; struct ucred *cred; vm_eflags_t protoeflags; vm_inherit_t inheritance; VM_MAP_ASSERT_LOCKED(map); KASSERT(object != kernel_object || (cow & MAP_COPY_ON_WRITE) == 0, ("vm_map_insert: kernel object and COW")); KASSERT(object == NULL || (cow & MAP_NOFAULT) == 0, ("vm_map_insert: paradoxical MAP_NOFAULT request")); KASSERT((prot & ~max) == 0, ("prot %#x is not subset of max_prot %#x", prot, max)); /* * Check that the start and end points are not bogus. */ if (start < vm_map_min(map) || end > vm_map_max(map) || start >= end) return (KERN_INVALID_ADDRESS); /* * Find the entry prior to the proposed starting address; if it's part * of an existing entry, this range is bogus. */ if (vm_map_lookup_entry(map, start, &temp_entry)) return (KERN_NO_SPACE); prev_entry = temp_entry; /* * Assert that the next entry doesn't overlap the end point. 
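 *
 * A worked example (illustrative addresses): with an existing
 * entry at [8K, 12K), a request to insert [4K, 10K) passes the
 * lookup test above, since 4K lies in free space, but fails here
 * because the next entry starts at 8K, below the requested end of
 * 10K, and KERN_NO_SPACE is returned.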
*/ if (prev_entry->next->start < end) return (KERN_NO_SPACE); if ((cow & MAP_CREATE_GUARD) != 0 && (object != NULL || max != VM_PROT_NONE)) return (KERN_INVALID_ARGUMENT); protoeflags = 0; if (cow & MAP_COPY_ON_WRITE) protoeflags |= MAP_ENTRY_COW | MAP_ENTRY_NEEDS_COPY; if (cow & MAP_NOFAULT) protoeflags |= MAP_ENTRY_NOFAULT; if (cow & MAP_DISABLE_SYNCER) protoeflags |= MAP_ENTRY_NOSYNC; if (cow & MAP_DISABLE_COREDUMP) protoeflags |= MAP_ENTRY_NOCOREDUMP; if (cow & MAP_STACK_GROWS_DOWN) protoeflags |= MAP_ENTRY_GROWS_DOWN; if (cow & MAP_STACK_GROWS_UP) protoeflags |= MAP_ENTRY_GROWS_UP; if (cow & MAP_VN_WRITECOUNT) protoeflags |= MAP_ENTRY_VN_WRITECNT; if ((cow & MAP_CREATE_GUARD) != 0) protoeflags |= MAP_ENTRY_GUARD; if ((cow & MAP_CREATE_STACK_GAP_DN) != 0) protoeflags |= MAP_ENTRY_STACK_GAP_DN; if ((cow & MAP_CREATE_STACK_GAP_UP) != 0) protoeflags |= MAP_ENTRY_STACK_GAP_UP; if (cow & MAP_INHERIT_SHARE) inheritance = VM_INHERIT_SHARE; else inheritance = VM_INHERIT_DEFAULT; cred = NULL; if ((cow & (MAP_ACC_NO_CHARGE | MAP_NOFAULT | MAP_CREATE_GUARD)) != 0) goto charged; if ((cow & MAP_ACC_CHARGED) || ((prot & VM_PROT_WRITE) && ((protoeflags & MAP_ENTRY_NEEDS_COPY) || object == NULL))) { if (!(cow & MAP_ACC_CHARGED) && !swap_reserve(end - start)) return (KERN_RESOURCE_SHORTAGE); KASSERT(object == NULL || (protoeflags & MAP_ENTRY_NEEDS_COPY) != 0 || object->cred == NULL, ("overcommit: vm_map_insert o %p", object)); cred = curthread->td_ucred; } charged: /* Expand the kernel pmap, if necessary. */ if (map == kernel_map && end > kernel_vm_end) pmap_growkernel(end); if (object != NULL) { /* * OBJ_ONEMAPPING must be cleared unless this mapping * is trivially proven to be the only mapping for any * of the object's pages. (Object granularity * reference counting is insufficient to recognize * aliases with precision.) */ VM_OBJECT_WLOCK(object); if (object->ref_count > 1 || object->shadow_count != 0) vm_object_clear_flag(object, OBJ_ONEMAPPING); VM_OBJECT_WUNLOCK(object); } else if ((prev_entry->eflags & ~MAP_ENTRY_USER_WIRED) == protoeflags && (cow & (MAP_STACK_GROWS_DOWN | MAP_STACK_GROWS_UP)) == 0 && prev_entry->end == start && (prev_entry->cred == cred || (prev_entry->object.vm_object != NULL && prev_entry->object.vm_object->cred == cred)) && vm_object_coalesce(prev_entry->object.vm_object, prev_entry->offset, (vm_size_t)(prev_entry->end - prev_entry->start), (vm_size_t)(end - prev_entry->end), cred != NULL && (protoeflags & MAP_ENTRY_NEEDS_COPY) == 0)) { /* * We were able to extend the object. Determine if we * can extend the previous map entry to include the * new range as well. */ if (prev_entry->inheritance == inheritance && prev_entry->protection == prot && prev_entry->max_protection == max && prev_entry->wired_count == 0) { KASSERT((prev_entry->eflags & MAP_ENTRY_USER_WIRED) == 0, ("prev_entry %p has incoherent wiring", prev_entry)); if ((prev_entry->eflags & MAP_ENTRY_GUARD) == 0) map->size += end - prev_entry->end; prev_entry->end = end; vm_map_entry_resize_free(map, prev_entry); vm_map_simplify_entry(map, prev_entry); return (KERN_SUCCESS); } /* * If we can extend the object but cannot extend the * map entry, we have to create a new map entry. We * must bump the ref count on the extended object to * account for it. object may be NULL. 
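 *
 * (This arises, for example, when the previous entry abuts the new
 * range and shares its anonymous backing object but was mapped
 * with a different protection: vm_object_coalesce() can still grow
 * the object, while the protection mismatch fails the
 * entry-extension test above, so a second entry referencing the
 * same object must be created.)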
*/ object = prev_entry->object.vm_object; offset = prev_entry->offset + (prev_entry->end - prev_entry->start); vm_object_reference(object); if (cred != NULL && object != NULL && object->cred != NULL && !(prev_entry->eflags & MAP_ENTRY_NEEDS_COPY)) { /* Object already accounts for this uid. */ cred = NULL; } } if (cred != NULL) crhold(cred); /* * Create a new entry */ new_entry = vm_map_entry_create(map); new_entry->start = start; new_entry->end = end; new_entry->cred = NULL; new_entry->eflags = protoeflags; new_entry->object.vm_object = object; new_entry->offset = offset; new_entry->inheritance = inheritance; new_entry->protection = prot; new_entry->max_protection = max; new_entry->wired_count = 0; new_entry->wiring_thread = NULL; new_entry->read_ahead = VM_FAULT_READ_AHEAD_INIT; new_entry->next_read = start; KASSERT(cred == NULL || !ENTRY_CHARGED(new_entry), ("overcommit: vm_map_insert leaks vm_map %p", new_entry)); new_entry->cred = cred; /* * Insert the new entry into the list */ vm_map_entry_link(map, prev_entry, new_entry); if ((new_entry->eflags & MAP_ENTRY_GUARD) == 0) map->size += new_entry->end - new_entry->start; /* * Try to coalesce the new entry with both the previous and next * entries in the list. Previously, we only attempted to coalesce * with the previous entry when object is NULL. Here, we handle the * other cases, which are less common. */ vm_map_simplify_entry(map, new_entry); if ((cow & (MAP_PREFAULT | MAP_PREFAULT_PARTIAL)) != 0) { vm_map_pmap_enter(map, start, prot, object, OFF_TO_IDX(offset), end - start, cow & MAP_PREFAULT_PARTIAL); } return (KERN_SUCCESS); } /* * vm_map_findspace: * * Find the first fit (lowest VM address) for "length" free bytes * beginning at address >= start in the given map. * * In a vm_map_entry, "adj_free" is the amount of free space * adjacent (higher address) to this entry, and "max_free" is the * maximum amount of contiguous free space in its subtree. This * allows finding a free region in one path down the tree, so * O(log n) amortized with splay trees. * * The map must be locked, and leaves it so. * * Returns: 0 on success, and starting address in *addr, * 1 if insufficient space. */ int vm_map_findspace(vm_map_t map, vm_offset_t start, vm_size_t length, vm_offset_t *addr) /* OUT */ { vm_map_entry_t entry; vm_offset_t st; /* * Request must fit within min/max VM address and must avoid * address wrap. */ start = MAX(start, vm_map_min(map)); if (start + length > vm_map_max(map) || start + length < start) return (1); /* Empty tree means wide open address space. */ if (map->root == NULL) { *addr = start; return (0); } /* * After splay, if start comes before root node, then there * must be a gap from start to the root. */ map->root = vm_map_entry_splay(start, map->root); if (start + length <= map->root->start) { *addr = start; return (0); } /* * Root is the last node that might begin its gap before * start, and this is the last comparison where address * wrap might be a problem. */ st = (start > map->root->end) ? start : map->root->end; if (length <= map->root->end + map->root->adj_free - st) { *addr = st; return (0); } /* With max_free, can immediately tell if no solution. */ entry = map->root->right; if (entry == NULL || length > entry->max_free) return (1); /* * Search the right subtree in the order: left subtree, root, * right subtree (first fit). The previous splay implies that * all regions in the right subtree have addresses > start. 
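 *
 * A worked traversal (illustrative): for a request of length 10K,
 * descend left whenever left->max_free >= 10K, accept the first
 * entry whose own adj_free >= 10K and return entry->end as the
 * start address, and otherwise descend right; every step discards
 * a subtree that provably contains no fit, which bounds the search
 * to a single root-to-leaf path.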
*/ while (entry != NULL) { if (entry->left != NULL && entry->left->max_free >= length) entry = entry->left; else if (entry->adj_free >= length) { *addr = entry->end; return (0); } else entry = entry->right; } /* Can't get here, so panic if we do. */ panic("vm_map_findspace: max_free corrupt"); } int vm_map_fixed(vm_map_t map, vm_object_t object, vm_ooffset_t offset, vm_offset_t start, vm_size_t length, vm_prot_t prot, vm_prot_t max, int cow) { vm_offset_t end; int result; end = start + length; KASSERT((cow & (MAP_STACK_GROWS_DOWN | MAP_STACK_GROWS_UP)) == 0 || object == NULL, ("vm_map_fixed: non-NULL backing object for stack")); vm_map_lock(map); VM_MAP_RANGE_CHECK(map, start, end); if ((cow & MAP_CHECK_EXCL) == 0) vm_map_delete(map, start, end); if ((cow & (MAP_STACK_GROWS_DOWN | MAP_STACK_GROWS_UP)) != 0) { result = vm_map_stack_locked(map, start, length, sgrowsiz, prot, max, cow); } else { result = vm_map_insert(map, object, offset, start, end, prot, max, cow); } vm_map_unlock(map); return (result); } static const int aslr_pages_rnd_64[2] = {0x1000, 0x10}; static const int aslr_pages_rnd_32[2] = {0x100, 0x4}; static int cluster_anon = 1; SYSCTL_INT(_vm, OID_AUTO, cluster_anon, CTLFLAG_RW, &cluster_anon, 0, - "Cluster anonymous mappings"); + "Cluster anonymous mappings: 0 = no, 1 = yes if no hint, 2 = always"); +static bool +clustering_anon_allowed(vm_offset_t addr) +{ + + switch (cluster_anon) { + case 0: + return (false); + case 1: + return (addr == 0); + case 2: + default: + return (true); + } +} + static long aslr_restarts; SYSCTL_LONG(_vm, OID_AUTO, aslr_restarts, CTLFLAG_RD, &aslr_restarts, 0, "Number of aslr failures"); #define MAP_32BIT_MAX_ADDR ((vm_offset_t)1 << 31) /* * Searches for the specified amount of free space in the given map with the * specified alignment. Performs an address-ordered, first-fit search from * the given address "*addr", with an optional upper bound "max_addr". If the * parameter "alignment" is zero, then the alignment is computed from the * given (object, offset) pair so as to enable the greatest possible use of * superpage mappings. Returns KERN_SUCCESS and the address of the free space * in "*addr" if successful. Otherwise, returns KERN_NO_SPACE. * * The map must be locked. Initially, there must be at least "length" bytes * of free space at the given address. */ static int vm_map_alignspace(vm_map_t map, vm_object_t object, vm_ooffset_t offset, vm_offset_t *addr, vm_size_t length, vm_offset_t max_addr, vm_offset_t alignment) { vm_offset_t aligned_addr, free_addr; VM_MAP_ASSERT_LOCKED(map); free_addr = *addr; KASSERT(!vm_map_findspace(map, free_addr, length, addr) && free_addr == *addr, ("caller provided insufficient free space")); for (;;) { /* * At the start of every iteration, the free space at address * "*addr" is at least "length" bytes. */ if (alignment == 0) pmap_align_superpage(object, offset, addr, length); else if ((*addr & (alignment - 1)) != 0) { *addr &= ~(alignment - 1); *addr += alignment; } aligned_addr = *addr; if (aligned_addr == free_addr) { /* * Alignment did not change "*addr", so "*addr" must * still provide sufficient free space. */ return (KERN_SUCCESS); } /* * Test for address wrap on "*addr". A wrapped "*addr" could * be a valid address, in which case vm_map_findspace() cannot * be relied upon to fail. 
*/ if (aligned_addr < free_addr || vm_map_findspace(map, aligned_addr, length, addr) || (max_addr != 0 && *addr + length > max_addr)) return (KERN_NO_SPACE); free_addr = *addr; if (free_addr == aligned_addr) { /* * If a successful call to vm_map_findspace() did not * change "*addr", then "*addr" must still be aligned * and provide sufficient free space. */ return (KERN_SUCCESS); } } } /* * vm_map_find finds an unallocated region in the target address * map with the given length. The search is defined to be * first-fit from the specified address; the region found is * returned in the same parameter. * * If object is non-NULL, ref count must be bumped by caller * prior to making call to account for the new entry. */ int vm_map_find(vm_map_t map, vm_object_t object, vm_ooffset_t offset, vm_offset_t *addr, /* IN/OUT */ vm_size_t length, vm_offset_t max_addr, int find_space, vm_prot_t prot, vm_prot_t max, int cow) { vm_offset_t alignment, curr_min_addr, min_addr; int gap, pidx, rv, try; bool cluster, en_aslr, update_anon; KASSERT((cow & (MAP_STACK_GROWS_DOWN | MAP_STACK_GROWS_UP)) == 0 || object == NULL, ("vm_map_find: non-NULL backing object for stack")); MPASS((cow & MAP_REMAP) == 0 || (find_space == VMFS_NO_SPACE && (cow & (MAP_STACK_GROWS_DOWN | MAP_STACK_GROWS_UP)) == 0)); if (find_space == VMFS_OPTIMAL_SPACE && (object == NULL || (object->flags & OBJ_COLORED) == 0)) find_space = VMFS_ANY_SPACE; if (find_space >> 8 != 0) { KASSERT((find_space & 0xff) == 0, ("bad VMFS flags")); alignment = (vm_offset_t)1 << (find_space >> 8); } else alignment = 0; en_aslr = (map->flags & MAP_ASLR) != 0; - update_anon = cluster = cluster_anon != 0 && + update_anon = cluster = clustering_anon_allowed(*addr) && (map->flags & MAP_IS_SUB_MAP) == 0 && max_addr == 0 && find_space != VMFS_NO_SPACE && object == NULL && (cow & (MAP_INHERIT_SHARE | MAP_STACK_GROWS_UP | MAP_STACK_GROWS_DOWN)) == 0 && prot != PROT_NONE; curr_min_addr = min_addr = *addr; if (en_aslr && min_addr == 0 && !cluster && find_space != VMFS_NO_SPACE && (map->flags & MAP_ASLR_IGNSTART) != 0) curr_min_addr = min_addr = vm_map_min(map); try = 0; vm_map_lock(map); if (cluster) { curr_min_addr = map->anon_loc; if (curr_min_addr == 0) cluster = false; } if (find_space != VMFS_NO_SPACE) { KASSERT(find_space == VMFS_ANY_SPACE || find_space == VMFS_OPTIMAL_SPACE || find_space == VMFS_SUPER_SPACE || alignment != 0, ("unexpected VMFS flag")); again: /* * When creating an anonymous mapping, try clustering * with an existing anonymous mapping first. * * We make up to two attempts to find address space * for a given find_space value. The first attempt may * apply randomization or may cluster with an existing * anonymous mapping. If this first attempt fails, * perform a first-fit search of the available address * space. * * If all tries failed, and find_space is * VMFS_OPTIMAL_SPACE, fallback to VMFS_ANY_SPACE. * Again enable clustering and randomization. */ try++; MPASS(try <= 2); if (try == 2) { /* * Second try: we failed either to find a * suitable region for randomizing the * allocation, or to cluster with an existing * mapping. Retry with free run. */ curr_min_addr = (map->flags & MAP_ASLR_IGNSTART) != 0 ? vm_map_min(map) : min_addr; atomic_add_long(&aslr_restarts, 1); } if (try == 1 && en_aslr && !cluster) { /* * Find space for allocation, including * gap needed for later randomization. */ pidx = MAXPAGESIZES > 1 && pagesizes[1] != 0 && (find_space == VMFS_SUPER_SPACE || find_space == VMFS_OPTIMAL_SPACE) ? 
1 : 0; gap = vm_map_max(map) > MAP_32BIT_MAX_ADDR && (max_addr == 0 || max_addr > MAP_32BIT_MAX_ADDR) ? aslr_pages_rnd_64[pidx] : aslr_pages_rnd_32[pidx]; if (vm_map_findspace(map, curr_min_addr, length + gap * pagesizes[pidx], addr) || (max_addr != 0 && *addr + length > max_addr)) goto again; /* And randomize the start address. */ *addr += (arc4random() % gap) * pagesizes[pidx]; } else if (vm_map_findspace(map, curr_min_addr, length, addr) || (max_addr != 0 && *addr + length > max_addr)) { if (cluster) { cluster = false; MPASS(try == 1); goto again; } rv = KERN_NO_SPACE; goto done; } if (find_space != VMFS_ANY_SPACE && (rv = vm_map_alignspace(map, object, offset, addr, length, max_addr, alignment)) != KERN_SUCCESS) { if (find_space == VMFS_OPTIMAL_SPACE) { find_space = VMFS_ANY_SPACE; curr_min_addr = min_addr; cluster = update_anon; try = 0; goto again; } goto done; } } else if ((cow & MAP_REMAP) != 0) { if (*addr < vm_map_min(map) || *addr + length > vm_map_max(map) || *addr + length <= length) { rv = KERN_INVALID_ADDRESS; goto done; } vm_map_delete(map, *addr, *addr + length); } if ((cow & (MAP_STACK_GROWS_DOWN | MAP_STACK_GROWS_UP)) != 0) { rv = vm_map_stack_locked(map, *addr, length, sgrowsiz, prot, max, cow); } else { rv = vm_map_insert(map, object, offset, *addr, *addr + length, prot, max, cow); } if (rv == KERN_SUCCESS && update_anon) map->anon_loc = *addr + length; done: vm_map_unlock(map); return (rv); } /* * vm_map_find_min() is a variant of vm_map_find() that takes an * additional parameter (min_addr) and treats the given address * (*addr) differently. Specifically, it treats *addr as a hint * and not as the minimum address where the mapping is created. * * This function works in two phases. First, it tries to * allocate above the hint. If that fails and the hint is * greater than min_addr, it performs a second pass, replacing * the hint with min_addr as the minimum address for the * allocation. */ int vm_map_find_min(vm_map_t map, vm_object_t object, vm_ooffset_t offset, vm_offset_t *addr, vm_size_t length, vm_offset_t min_addr, vm_offset_t max_addr, int find_space, vm_prot_t prot, vm_prot_t max, int cow) { vm_offset_t hint; int rv; hint = *addr; for (;;) { rv = vm_map_find(map, object, offset, addr, length, max_addr, find_space, prot, max, cow); if (rv == KERN_SUCCESS || min_addr >= hint) return (rv); *addr = hint = min_addr; } } /* * A map entry with any of the following flags set must not be merged with * another entry. */ #define MAP_ENTRY_NOMERGE_MASK (MAP_ENTRY_GROWS_DOWN | MAP_ENTRY_GROWS_UP | \ MAP_ENTRY_IN_TRANSITION | MAP_ENTRY_IS_SUB_MAP) static bool vm_map_mergeable_neighbors(vm_map_entry_t prev, vm_map_entry_t entry) { KASSERT((prev->eflags & MAP_ENTRY_NOMERGE_MASK) == 0 || (entry->eflags & MAP_ENTRY_NOMERGE_MASK) == 0, ("vm_map_mergeable_neighbors: neither %p nor %p are mergeable", prev, entry)); return (prev->end == entry->start && prev->object.vm_object == entry->object.vm_object && (prev->object.vm_object == NULL || prev->offset + (prev->end - prev->start) == entry->offset) && prev->eflags == entry->eflags && prev->protection == entry->protection && prev->max_protection == entry->max_protection && prev->inheritance == entry->inheritance && prev->wired_count == entry->wired_count && prev->cred == entry->cred); } static void vm_map_merged_neighbor_dispose(vm_map_t map, vm_map_entry_t entry) { /* * If the backing object is a vnode object, vm_object_deallocate() * calls vrele(). 
However, vrele() does not lock the vnode because * the vnode has additional references. Thus, the map lock can be * kept without causing a lock-order reversal with the vnode lock. * * Since we count the number of virtual page mappings in * object->un_pager.vnp.writemappings, the writemappings value * should not be adjusted when the entry is disposed of. */ if (entry->object.vm_object != NULL) vm_object_deallocate(entry->object.vm_object); if (entry->cred != NULL) crfree(entry->cred); vm_map_entry_dispose(map, entry); } /* * vm_map_simplify_entry: * * Simplify the given map entry by merging with either neighbor. This * routine also has the ability to merge with both neighbors. * * The map must be locked. * * This routine guarantees that the passed entry remains valid (though * possibly extended). When merging, this routine may delete one or * both neighbors. */ void vm_map_simplify_entry(vm_map_t map, vm_map_entry_t entry) { vm_map_entry_t next, prev; if ((entry->eflags & MAP_ENTRY_NOMERGE_MASK) != 0) return; prev = entry->prev; if (vm_map_mergeable_neighbors(prev, entry)) { vm_map_entry_unlink(map, prev); entry->start = prev->start; entry->offset = prev->offset; if (entry->prev != &map->header) vm_map_entry_resize_free(map, entry->prev); vm_map_merged_neighbor_dispose(map, prev); } next = entry->next; if (vm_map_mergeable_neighbors(entry, next)) { vm_map_entry_unlink(map, next); entry->end = next->end; vm_map_entry_resize_free(map, entry); vm_map_merged_neighbor_dispose(map, next); } } /* * vm_map_clip_start: [ internal use only ] * * Asserts that the given entry begins at or after * the specified address; if necessary, * it splits the entry into two. */ #define vm_map_clip_start(map, entry, startaddr) \ { \ if (startaddr > entry->start) \ _vm_map_clip_start(map, entry, startaddr); \ } /* * This routine is called only when it is known that * the entry must be split. */ static void _vm_map_clip_start(vm_map_t map, vm_map_entry_t entry, vm_offset_t start) { vm_map_entry_t new_entry; VM_MAP_ASSERT_LOCKED(map); KASSERT(entry->end > start && entry->start < start, ("_vm_map_clip_start: invalid clip of entry %p", entry)); /* * Split off the front portion -- note that we must insert the new * entry BEFORE this one, so that this entry has the specified * starting address. */ vm_map_simplify_entry(map, entry); /* * If there is no object backing this entry, we might as well create * one now. If we defer it, an object can get created after the map * is clipped, and individual objects will be created for the split-up * map. This is a bit of a hack, but is also about the best place to * put this improvement. 
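 *
 * [Editor's illustration, not part of the original source: stepping back
 * to vm_map_find()/vm_map_alignspace() above, an explicit alignment is
 * encoded in the upper bits of find_space (alignment = 1 << (find_space
 * >> 8)).  A sketch, assuming userland reaches that path through
 * mmap(2)'s MAP_ALIGNED(n) flag:]
 *
 *	#include <sys/mman.h>
 *	#include <stdio.h>
 *
 *	int
 *	main(void)
 *	{
 *		void *p;
 *
 *		// Ask for a 2MB-aligned anonymous mapping (1 << 21).
 *		p = mmap(NULL, 4 * 1024 * 1024, PROT_READ | PROT_WRITE,
 *		    MAP_ANON | MAP_PRIVATE | MAP_ALIGNED(21), -1, 0);
 *		if (p == MAP_FAILED)
 *			return (1);
 *		printf("%p\n", p);
 *		return (0);
 *	}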
*/ if (entry->object.vm_object == NULL && !map->system_map && (entry->eflags & MAP_ENTRY_GUARD) == 0) { vm_object_t object; object = vm_object_allocate(OBJT_DEFAULT, atop(entry->end - entry->start)); entry->object.vm_object = object; entry->offset = 0; if (entry->cred != NULL) { object->cred = entry->cred; object->charge = entry->end - entry->start; entry->cred = NULL; } } else if (entry->object.vm_object != NULL && ((entry->eflags & MAP_ENTRY_NEEDS_COPY) == 0) && entry->cred != NULL) { VM_OBJECT_WLOCK(entry->object.vm_object); KASSERT(entry->object.vm_object->cred == NULL, ("OVERCOMMIT: vm_entry_clip_start: both cred e %p", entry)); entry->object.vm_object->cred = entry->cred; entry->object.vm_object->charge = entry->end - entry->start; VM_OBJECT_WUNLOCK(entry->object.vm_object); entry->cred = NULL; } new_entry = vm_map_entry_create(map); *new_entry = *entry; new_entry->end = start; entry->offset += (start - entry->start); entry->start = start; if (new_entry->cred != NULL) crhold(entry->cred); vm_map_entry_link(map, entry->prev, new_entry); if ((entry->eflags & MAP_ENTRY_IS_SUB_MAP) == 0) { vm_object_reference(new_entry->object.vm_object); /* * The object->un_pager.vnp.writemappings for the * object of MAP_ENTRY_VN_WRITECNT type entry shall be * kept as is here. The virtual pages are * re-distributed among the clipped entries, so the sum is * left the same. */ } } /* * vm_map_clip_end: [ internal use only ] * * Asserts that the given entry ends at or before * the specified address; if necessary, * it splits the entry into two. */ #define vm_map_clip_end(map, entry, endaddr) \ { \ if ((endaddr) < (entry->end)) \ _vm_map_clip_end((map), (entry), (endaddr)); \ } /* * This routine is called only when it is known that * the entry must be split. */ static void _vm_map_clip_end(vm_map_t map, vm_map_entry_t entry, vm_offset_t end) { vm_map_entry_t new_entry; VM_MAP_ASSERT_LOCKED(map); KASSERT(entry->start < end && entry->end > end, ("_vm_map_clip_end: invalid clip of entry %p", entry)); /* * If there is no object backing this entry, we might as well create * one now. If we defer it, an object can get created after the map * is clipped, and individual objects will be created for the split-up * map. This is a bit of a hack, but is also about the best place to * put this improvement. 
 */
	if (entry->object.vm_object == NULL && !map->system_map &&
	    (entry->eflags & MAP_ENTRY_GUARD) == 0) {
		vm_object_t object;

		object = vm_object_allocate(OBJT_DEFAULT,
		    atop(entry->end - entry->start));
		entry->object.vm_object = object;
		entry->offset = 0;
		if (entry->cred != NULL) {
			object->cred = entry->cred;
			object->charge = entry->end - entry->start;
			entry->cred = NULL;
		}
	} else if (entry->object.vm_object != NULL &&
	    ((entry->eflags & MAP_ENTRY_NEEDS_COPY) == 0) &&
	    entry->cred != NULL) {
		VM_OBJECT_WLOCK(entry->object.vm_object);
		KASSERT(entry->object.vm_object->cred == NULL,
		    ("OVERCOMMIT: vm_entry_clip_end: both cred e %p", entry));
		entry->object.vm_object->cred = entry->cred;
		entry->object.vm_object->charge = entry->end - entry->start;
		VM_OBJECT_WUNLOCK(entry->object.vm_object);
		entry->cred = NULL;
	}

	/*
	 * Create a new entry and insert it AFTER the specified entry.
	 */
	new_entry = vm_map_entry_create(map);
	*new_entry = *entry;

	new_entry->start = entry->end = end;
	new_entry->offset += (end - entry->start);
	if (new_entry->cred != NULL)
		crhold(entry->cred);

	vm_map_entry_link(map, entry, new_entry);

	if ((entry->eflags & MAP_ENTRY_IS_SUB_MAP) == 0) {
		vm_object_reference(new_entry->object.vm_object);
	}
}

/*
 *	vm_map_submap:		[ kernel use only ]
 *
 *	Mark the given range as handled by a subordinate map.
 *
 *	This range must have been created with vm_map_find,
 *	and no other operations may have been performed on this
 *	range prior to calling vm_map_submap.
 *
 *	Only a limited number of operations can be performed
 *	within this range after calling vm_map_submap:
 *		vm_fault
 *	[Don't try vm_map_copy!]
 *
 *	To remove a submapping, one must first remove the
 *	range from the superior map, and then destroy the
 *	submap (if desired).  [Better yet, don't try it.]
 */
int
vm_map_submap(
	vm_map_t map,
	vm_offset_t start,
	vm_offset_t end,
	vm_map_t submap)
{
	vm_map_entry_t entry;
	int result;

	result = KERN_INVALID_ARGUMENT;

	vm_map_lock(submap);
	submap->flags |= MAP_IS_SUB_MAP;
	vm_map_unlock(submap);

	vm_map_lock(map);

	VM_MAP_RANGE_CHECK(map, start, end);

	if (vm_map_lookup_entry(map, start, &entry)) {
		vm_map_clip_start(map, entry, start);
	} else
		entry = entry->next;

	vm_map_clip_end(map, entry, end);

	if ((entry->start == start) && (entry->end == end) &&
	    ((entry->eflags & MAP_ENTRY_COW) == 0) &&
	    (entry->object.vm_object == NULL)) {
		entry->object.sub_map = submap;
		entry->eflags |= MAP_ENTRY_IS_SUB_MAP;
		result = KERN_SUCCESS;
	}
	vm_map_unlock(map);

	if (result != KERN_SUCCESS) {
		vm_map_lock(submap);
		submap->flags &= ~MAP_IS_SUB_MAP;
		vm_map_unlock(submap);
	}
	return (result);
}

/*
 * The maximum number of pages to map if MAP_PREFAULT_PARTIAL is specified
 */
#define	MAX_INIT_PT	96

/*
 *	vm_map_pmap_enter:
 *
 *	Preload the specified map's pmap with mappings to the specified
 *	object's memory-resident pages.  No further physical pages are
 *	allocated, and no further virtual pages are retrieved from secondary
 *	storage.  If the specified flags include MAP_PREFAULT_PARTIAL, then a
 *	limited number of page mappings are created at the low-end of the
 *	specified address range.  (For this purpose, a superpage mapping
 *	counts as one page mapping.)  Otherwise, all resident pages within
 *	the specified address range are mapped.
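 *
 *	[Editor's illustration, not part of the original source: a minimal
 *	userspace sketch of one path into this function.  madvise(2) with
 *	MADV_WILLNEED reaches vm_map_pmap_enter() with MAP_PREFAULT_MADVISE
 *	(see vm_map_madvise() below); the file name used here is arbitrary.]
 *
 *		#include <sys/mman.h>
 *		#include <sys/stat.h>
 *		#include <fcntl.h>
 *
 *		int
 *		main(void)
 *		{
 *			struct stat sb;
 *			void *p;
 *			int fd;
 *
 *			fd = open("/bin/sh", O_RDONLY);
 *			if (fd == -1 || fstat(fd, &sb) == -1)
 *				return (1);
 *			p = mmap(NULL, (size_t)sb.st_size, PROT_READ,
 *			    MAP_PRIVATE, fd, 0);
 *			if (p == MAP_FAILED)
 *				return (1);
 *			// Premap resident pages, avoiding later soft faults.
 *			madvise(p, (size_t)sb.st_size, MADV_WILLNEED);
 *			return (0);
 *		}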
*/ static void vm_map_pmap_enter(vm_map_t map, vm_offset_t addr, vm_prot_t prot, vm_object_t object, vm_pindex_t pindex, vm_size_t size, int flags) { vm_offset_t start; vm_page_t p, p_start; vm_pindex_t mask, psize, threshold, tmpidx; if ((prot & (VM_PROT_READ | VM_PROT_EXECUTE)) == 0 || object == NULL) return; VM_OBJECT_RLOCK(object); if (object->type == OBJT_DEVICE || object->type == OBJT_SG) { VM_OBJECT_RUNLOCK(object); VM_OBJECT_WLOCK(object); if (object->type == OBJT_DEVICE || object->type == OBJT_SG) { pmap_object_init_pt(map->pmap, addr, object, pindex, size); VM_OBJECT_WUNLOCK(object); return; } VM_OBJECT_LOCK_DOWNGRADE(object); } psize = atop(size); if (psize + pindex > object->size) { if (object->size < pindex) { VM_OBJECT_RUNLOCK(object); return; } psize = object->size - pindex; } start = 0; p_start = NULL; threshold = MAX_INIT_PT; p = vm_page_find_least(object, pindex); /* * Assert: the variable p is either (1) the page with the * least pindex greater than or equal to the parameter pindex * or (2) NULL. */ for (; p != NULL && (tmpidx = p->pindex - pindex) < psize; p = TAILQ_NEXT(p, listq)) { /* * don't allow an madvise to blow away our really * free pages allocating pv entries. */ if (((flags & MAP_PREFAULT_MADVISE) != 0 && vm_page_count_severe()) || ((flags & MAP_PREFAULT_PARTIAL) != 0 && tmpidx >= threshold)) { psize = tmpidx; break; } if (p->valid == VM_PAGE_BITS_ALL) { if (p_start == NULL) { start = addr + ptoa(tmpidx); p_start = p; } /* Jump ahead if a superpage mapping is possible. */ if (p->psind > 0 && ((addr + ptoa(tmpidx)) & (pagesizes[p->psind] - 1)) == 0) { mask = atop(pagesizes[p->psind]) - 1; if (tmpidx + mask < psize && vm_page_ps_test(p, PS_ALL_VALID, NULL)) { p += mask; threshold += mask; } } } else if (p_start != NULL) { pmap_enter_object(map->pmap, start, addr + ptoa(tmpidx), p_start, prot); p_start = NULL; } } if (p_start != NULL) pmap_enter_object(map->pmap, start, addr + ptoa(psize), p_start, prot); VM_OBJECT_RUNLOCK(object); } /* * vm_map_protect: * * Sets the protection of the specified address * region in the target map. If "set_max" is * specified, the maximum protection is to be set; * otherwise, only the current protection is affected. */ int vm_map_protect(vm_map_t map, vm_offset_t start, vm_offset_t end, vm_prot_t new_prot, boolean_t set_max) { vm_map_entry_t current, entry; vm_object_t obj; struct ucred *cred; vm_prot_t old_prot; if (start == end) return (KERN_SUCCESS); vm_map_lock(map); /* * Ensure that we are not concurrently wiring pages. vm_map_wire() may * need to fault pages into the map and will drop the map lock while * doing so, and the VM object may end up in an inconsistent state if we * update the protection on the map entry in between faults. */ vm_map_wait_busy(map); VM_MAP_RANGE_CHECK(map, start, end); if (vm_map_lookup_entry(map, start, &entry)) { vm_map_clip_start(map, entry, start); } else { entry = entry->next; } /* * Make a first pass to check for protection violations. */ for (current = entry; current->start < end; current = current->next) { if ((current->eflags & MAP_ENTRY_GUARD) != 0) continue; if (current->eflags & MAP_ENTRY_IS_SUB_MAP) { vm_map_unlock(map); return (KERN_INVALID_ARGUMENT); } if ((new_prot & current->max_protection) != new_prot) { vm_map_unlock(map); return (KERN_PROTECTION_FAILURE); } } /* * Do an accounting pass for private read-only mappings that * now will do cow due to allowed write (e.g. 
 * debugger sets breakpoint on text segment)
	 */
	for (current = entry; current->start < end;
	    current = current->next) {

		vm_map_clip_end(map, current, end);

		if (set_max ||
		    ((new_prot & ~(current->protection)) & VM_PROT_WRITE) == 0 ||
		    ENTRY_CHARGED(current) ||
		    (current->eflags & MAP_ENTRY_GUARD) != 0) {
			continue;
		}

		cred = curthread->td_ucred;
		obj = current->object.vm_object;

		if (obj == NULL || (current->eflags & MAP_ENTRY_NEEDS_COPY)) {
			if (!swap_reserve(current->end - current->start)) {
				vm_map_unlock(map);
				return (KERN_RESOURCE_SHORTAGE);
			}
			crhold(cred);
			current->cred = cred;
			continue;
		}

		VM_OBJECT_WLOCK(obj);
		if (obj->type != OBJT_DEFAULT && obj->type != OBJT_SWAP) {
			VM_OBJECT_WUNLOCK(obj);
			continue;
		}

		/*
		 * Charge for the whole object allocation now, since
		 * we cannot distinguish between non-charged and
		 * charged clipped mapping of the same object later.
		 */
		KASSERT(obj->charge == 0,
		    ("vm_map_protect: object %p overcharged (entry %p)",
		    obj, current));
		if (!swap_reserve(ptoa(obj->size))) {
			VM_OBJECT_WUNLOCK(obj);
			vm_map_unlock(map);
			return (KERN_RESOURCE_SHORTAGE);
		}

		crhold(cred);
		obj->cred = cred;
		obj->charge = ptoa(obj->size);
		VM_OBJECT_WUNLOCK(obj);
	}

	/*
	 * Go back and fix up protections.  [Note that clipping is not
	 * necessary the second time.]
	 */
	for (current = entry; current->start < end; current = current->next) {
		if ((current->eflags & MAP_ENTRY_GUARD) != 0)
			continue;

		old_prot = current->protection;

		if (set_max)
			current->protection =
			    (current->max_protection = new_prot) &
			    old_prot;
		else
			current->protection = new_prot;

		/*
		 * For user wired map entries, the normal lazy evaluation of
		 * write access upgrades through soft page faults is
		 * undesirable.  Instead, immediately copy any pages that are
		 * copy-on-write and enable write access in the physical map.
		 */
		if ((current->eflags & MAP_ENTRY_USER_WIRED) != 0 &&
		    (current->protection & VM_PROT_WRITE) != 0 &&
		    (old_prot & VM_PROT_WRITE) == 0)
			vm_fault_copy_entry(map, map, current, current, NULL);

		/*
		 * When restricting access, update the physical map.  Worry
		 * about copy-on-write here.
		 */
		if ((old_prot & ~current->protection) != 0) {
#define MASK(entry)	(((entry)->eflags & MAP_ENTRY_COW) ? ~VM_PROT_WRITE : \
							VM_PROT_ALL)
			pmap_protect(map->pmap, current->start,
			    current->end,
			    current->protection & MASK(current));
#undef	MASK
		}
		vm_map_simplify_entry(map, current);
	}
	vm_map_unlock(map);
	return (KERN_SUCCESS);
}

/*
 *	vm_map_madvise:
 *
 *	This routine traverses a process's map, handling the madvise
 *	system call.  Advisories are classified as either those affecting
 *	the vm_map_entry structure, or those affecting the underlying
 *	objects.
 */
int
vm_map_madvise(
	vm_map_t map,
	vm_offset_t start,
	vm_offset_t end,
	int behav)
{
	vm_map_entry_t current, entry;
	bool modify_map;

	/*
	 * Some madvise calls directly modify the vm_map_entry, in which case
	 * we need to use an exclusive lock on the map and we need to perform
	 * various clipping operations.  Otherwise we only need a read-lock
	 * on the map.
	 */
	switch (behav) {
	case MADV_NORMAL:
	case MADV_SEQUENTIAL:
	case MADV_RANDOM:
	case MADV_NOSYNC:
	case MADV_AUTOSYNC:
	case MADV_NOCORE:
	case MADV_CORE:
		if (start == end)
			return (0);
		modify_map = true;
		vm_map_lock(map);
		break;
	case MADV_WILLNEED:
	case MADV_DONTNEED:
	case MADV_FREE:
		if (start == end)
			return (0);
		modify_map = false;
		vm_map_lock_read(map);
		break;
	default:
		return (EINVAL);
	}

	/*
	 * Locate starting entry and clip if necessary.
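 *
 *	[Editor's illustration, not part of the original source: a minimal
 *	userspace sketch of the debugger-style case that the accounting
 *	pass in vm_map_protect() above pays for; the file name is
 *	arbitrary.]
 *
 *		#include <sys/mman.h>
 *		#include <sys/stat.h>
 *		#include <fcntl.h>
 *
 *		int
 *		main(void)
 *		{
 *			struct stat sb;
 *			char *p;
 *			int fd;
 *
 *			fd = open("/bin/sh", O_RDONLY);
 *			if (fd == -1 || fstat(fd, &sb) == -1)
 *				return (1);
 *			p = mmap(NULL, (size_t)sb.st_size, PROT_READ,
 *			    MAP_PRIVATE, fd, 0);
 *			if (p == MAP_FAILED)
 *				return (1);
 *			// Write-enabling a private read-only mapping makes
 *			// vm_map_protect() reserve swap for future copies.
 *			if (mprotect(p, (size_t)sb.st_size,
 *			    PROT_READ | PROT_WRITE) == -1)
 *				return (1);
 *			p[0] ^= 1;	// COW fault: modifies only our copy
 *			return (0);
 *		}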
*/ VM_MAP_RANGE_CHECK(map, start, end); if (vm_map_lookup_entry(map, start, &entry)) { if (modify_map) vm_map_clip_start(map, entry, start); } else { entry = entry->next; } if (modify_map) { /* * madvise behaviors that are implemented in the vm_map_entry. * * We clip the vm_map_entry so that behavioral changes are * limited to the specified address range. */ for (current = entry; current->start < end; current = current->next) { if (current->eflags & MAP_ENTRY_IS_SUB_MAP) continue; vm_map_clip_end(map, current, end); switch (behav) { case MADV_NORMAL: vm_map_entry_set_behavior(current, MAP_ENTRY_BEHAV_NORMAL); break; case MADV_SEQUENTIAL: vm_map_entry_set_behavior(current, MAP_ENTRY_BEHAV_SEQUENTIAL); break; case MADV_RANDOM: vm_map_entry_set_behavior(current, MAP_ENTRY_BEHAV_RANDOM); break; case MADV_NOSYNC: current->eflags |= MAP_ENTRY_NOSYNC; break; case MADV_AUTOSYNC: current->eflags &= ~MAP_ENTRY_NOSYNC; break; case MADV_NOCORE: current->eflags |= MAP_ENTRY_NOCOREDUMP; break; case MADV_CORE: current->eflags &= ~MAP_ENTRY_NOCOREDUMP; break; default: break; } vm_map_simplify_entry(map, current); } vm_map_unlock(map); } else { vm_pindex_t pstart, pend; /* * madvise behaviors that are implemented in the underlying * vm_object. * * Since we don't clip the vm_map_entry, we have to clip * the vm_object pindex and count. */ for (current = entry; current->start < end; current = current->next) { vm_offset_t useEnd, useStart; if (current->eflags & MAP_ENTRY_IS_SUB_MAP) continue; pstart = OFF_TO_IDX(current->offset); pend = pstart + atop(current->end - current->start); useStart = current->start; useEnd = current->end; if (current->start < start) { pstart += atop(start - current->start); useStart = start; } if (current->end > end) { pend -= atop(current->end - end); useEnd = end; } if (pstart >= pend) continue; /* * Perform the pmap_advise() before clearing * PGA_REFERENCED in vm_page_advise(). Otherwise, a * concurrent pmap operation, such as pmap_remove(), * could clear a reference in the pmap and set * PGA_REFERENCED on the page before the pmap_advise() * had completed. Consequently, the page would appear * referenced based upon an old reference that * occurred before this pmap_advise() ran. */ if (behav == MADV_DONTNEED || behav == MADV_FREE) pmap_advise(map->pmap, useStart, useEnd, behav); vm_object_madvise(current->object.vm_object, pstart, pend, behav); /* * Pre-populate paging structures in the * WILLNEED case. For wired entries, the * paging structures are already populated. */ if (behav == MADV_WILLNEED && current->wired_count == 0) { vm_map_pmap_enter(map, useStart, current->protection, current->object.vm_object, pstart, ptoa(pend - pstart), MAP_PREFAULT_MADVISE ); } } vm_map_unlock_read(map); } return (0); } /* * vm_map_inherit: * * Sets the inheritance of the specified address * range in the target map. Inheritance * affects how the map will be shared with * child maps at the time of vmspace_fork. 
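 *
 *	[Editor's illustration, not part of the original source: userland
 *	drives this through minherit(2).  A sketch of INHERIT_ZERO, which
 *	vmspace_fork() below implements as VM_INHERIT_ZERO:]
 *
 *		#include <sys/mman.h>
 *		#include <unistd.h>
 *
 *		int
 *		main(void)
 *		{
 *			char *p;
 *
 *			p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
 *			    MAP_ANON | MAP_PRIVATE, -1, 0);
 *			if (p == MAP_FAILED)
 *				return (1);
 *			p[0] = 42;
 *			if (minherit(p, 4096, INHERIT_ZERO) == -1)
 *				return (1);
 *			if (fork() == 0)
 *				_exit(p[0]);	// child sees 0, not 42
 *			return (0);
 *		}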
*/ int vm_map_inherit(vm_map_t map, vm_offset_t start, vm_offset_t end, vm_inherit_t new_inheritance) { vm_map_entry_t entry; vm_map_entry_t temp_entry; switch (new_inheritance) { case VM_INHERIT_NONE: case VM_INHERIT_COPY: case VM_INHERIT_SHARE: case VM_INHERIT_ZERO: break; default: return (KERN_INVALID_ARGUMENT); } if (start == end) return (KERN_SUCCESS); vm_map_lock(map); VM_MAP_RANGE_CHECK(map, start, end); if (vm_map_lookup_entry(map, start, &temp_entry)) { entry = temp_entry; vm_map_clip_start(map, entry, start); } else entry = temp_entry->next; while (entry->start < end) { vm_map_clip_end(map, entry, end); if ((entry->eflags & MAP_ENTRY_GUARD) == 0 || new_inheritance != VM_INHERIT_ZERO) entry->inheritance = new_inheritance; vm_map_simplify_entry(map, entry); entry = entry->next; } vm_map_unlock(map); return (KERN_SUCCESS); } /* * vm_map_unwire: * * Implements both kernel and user unwiring. */ int vm_map_unwire(vm_map_t map, vm_offset_t start, vm_offset_t end, int flags) { vm_map_entry_t entry, first_entry, tmp_entry; vm_offset_t saved_start; unsigned int last_timestamp; int rv; boolean_t need_wakeup, result, user_unwire; if (start == end) return (KERN_SUCCESS); user_unwire = (flags & VM_MAP_WIRE_USER) ? TRUE : FALSE; vm_map_lock(map); VM_MAP_RANGE_CHECK(map, start, end); if (!vm_map_lookup_entry(map, start, &first_entry)) { if (flags & VM_MAP_WIRE_HOLESOK) first_entry = first_entry->next; else { vm_map_unlock(map); return (KERN_INVALID_ADDRESS); } } last_timestamp = map->timestamp; entry = first_entry; while (entry->start < end) { if (entry->eflags & MAP_ENTRY_IN_TRANSITION) { /* * We have not yet clipped the entry. */ saved_start = (start >= entry->start) ? start : entry->start; entry->eflags |= MAP_ENTRY_NEEDS_WAKEUP; if (vm_map_unlock_and_wait(map, 0)) { /* * Allow interruption of user unwiring? */ } vm_map_lock(map); if (last_timestamp+1 != map->timestamp) { /* * Look again for the entry because the map was * modified while it was unlocked. * Specifically, the entry may have been * clipped, merged, or deleted. */ if (!vm_map_lookup_entry(map, saved_start, &tmp_entry)) { if (flags & VM_MAP_WIRE_HOLESOK) tmp_entry = tmp_entry->next; else { if (saved_start == start) { /* * First_entry has been deleted. */ vm_map_unlock(map); return (KERN_INVALID_ADDRESS); } end = saved_start; rv = KERN_INVALID_ADDRESS; goto done; } } if (entry == first_entry) first_entry = tmp_entry; else first_entry = NULL; entry = tmp_entry; } last_timestamp = map->timestamp; continue; } vm_map_clip_start(map, entry, start); vm_map_clip_end(map, entry, end); /* * Mark the entry in case the map lock is released. (See * above.) */ KASSERT((entry->eflags & MAP_ENTRY_IN_TRANSITION) == 0 && entry->wiring_thread == NULL, ("owned map entry %p", entry)); entry->eflags |= MAP_ENTRY_IN_TRANSITION; entry->wiring_thread = curthread; /* * Check the map for holes in the specified region. * If VM_MAP_WIRE_HOLESOK was specified, skip this check. */ if (((flags & VM_MAP_WIRE_HOLESOK) == 0) && (entry->end < end && entry->next->start > entry->end)) { end = entry->end; rv = KERN_INVALID_ADDRESS; goto done; } /* * If system unwiring, require that the entry is system wired. 
*/ if (!user_unwire && vm_map_entry_system_wired_count(entry) == 0) { end = entry->end; rv = KERN_INVALID_ARGUMENT; goto done; } entry = entry->next; } rv = KERN_SUCCESS; done: need_wakeup = FALSE; if (first_entry == NULL) { result = vm_map_lookup_entry(map, start, &first_entry); if (!result && (flags & VM_MAP_WIRE_HOLESOK)) first_entry = first_entry->next; else KASSERT(result, ("vm_map_unwire: lookup failed")); } for (entry = first_entry; entry->start < end; entry = entry->next) { /* * If VM_MAP_WIRE_HOLESOK was specified, an empty * space in the unwired region could have been mapped * while the map lock was dropped for draining * MAP_ENTRY_IN_TRANSITION. Moreover, another thread * could be simultaneously wiring this new mapping * entry. Detect these cases and skip any entries * marked as in transition by us. */ if ((entry->eflags & MAP_ENTRY_IN_TRANSITION) == 0 || entry->wiring_thread != curthread) { KASSERT((flags & VM_MAP_WIRE_HOLESOK) != 0, ("vm_map_unwire: !HOLESOK and new/changed entry")); continue; } if (rv == KERN_SUCCESS && (!user_unwire || (entry->eflags & MAP_ENTRY_USER_WIRED))) { if (user_unwire) entry->eflags &= ~MAP_ENTRY_USER_WIRED; if (entry->wired_count == 1) vm_map_entry_unwire(map, entry); else entry->wired_count--; } KASSERT((entry->eflags & MAP_ENTRY_IN_TRANSITION) != 0, ("vm_map_unwire: in-transition flag missing %p", entry)); KASSERT(entry->wiring_thread == curthread, ("vm_map_unwire: alien wire %p", entry)); entry->eflags &= ~MAP_ENTRY_IN_TRANSITION; entry->wiring_thread = NULL; if (entry->eflags & MAP_ENTRY_NEEDS_WAKEUP) { entry->eflags &= ~MAP_ENTRY_NEEDS_WAKEUP; need_wakeup = TRUE; } vm_map_simplify_entry(map, entry); } vm_map_unlock(map); if (need_wakeup) vm_map_wakeup(map); return (rv); } /* * vm_map_wire_entry_failure: * * Handle a wiring failure on the given entry. * * The map should be locked. */ static void vm_map_wire_entry_failure(vm_map_t map, vm_map_entry_t entry, vm_offset_t failed_addr) { VM_MAP_ASSERT_LOCKED(map); KASSERT((entry->eflags & MAP_ENTRY_IN_TRANSITION) != 0 && entry->wired_count == 1, ("vm_map_wire_entry_failure: entry %p isn't being wired", entry)); KASSERT(failed_addr < entry->end, ("vm_map_wire_entry_failure: entry %p was fully wired", entry)); /* * If any pages at the start of this entry were successfully wired, * then unwire them. */ if (failed_addr > entry->start) { pmap_unwire(map->pmap, entry->start, failed_addr); vm_object_unwire(entry->object.vm_object, entry->offset, failed_addr - entry->start, PQ_ACTIVE); } /* * Assign an out-of-range value to represent the failure to wire this * entry. */ entry->wired_count = -1; } /* * vm_map_wire: * * Implements both kernel and user wiring. */ int vm_map_wire(vm_map_t map, vm_offset_t start, vm_offset_t end, int flags) { vm_map_entry_t entry, first_entry, tmp_entry; vm_offset_t faddr, saved_end, saved_start; unsigned int last_timestamp; int rv; boolean_t need_wakeup, result, user_wire; vm_prot_t prot; if (start == end) return (KERN_SUCCESS); prot = 0; if (flags & VM_MAP_WIRE_WRITE) prot |= VM_PROT_WRITE; user_wire = (flags & VM_MAP_WIRE_USER) ? TRUE : FALSE; vm_map_lock(map); VM_MAP_RANGE_CHECK(map, start, end); if (!vm_map_lookup_entry(map, start, &first_entry)) { if (flags & VM_MAP_WIRE_HOLESOK) first_entry = first_entry->next; else { vm_map_unlock(map); return (KERN_INVALID_ADDRESS); } } last_timestamp = map->timestamp; entry = first_entry; while (entry->start < end) { if (entry->eflags & MAP_ENTRY_IN_TRANSITION) { /* * We have not yet clipped the entry. 
*/ saved_start = (start >= entry->start) ? start : entry->start; entry->eflags |= MAP_ENTRY_NEEDS_WAKEUP; if (vm_map_unlock_and_wait(map, 0)) { /* * Allow interruption of user wiring? */ } vm_map_lock(map); if (last_timestamp + 1 != map->timestamp) { /* * Look again for the entry because the map was * modified while it was unlocked. * Specifically, the entry may have been * clipped, merged, or deleted. */ if (!vm_map_lookup_entry(map, saved_start, &tmp_entry)) { if (flags & VM_MAP_WIRE_HOLESOK) tmp_entry = tmp_entry->next; else { if (saved_start == start) { /* * first_entry has been deleted. */ vm_map_unlock(map); return (KERN_INVALID_ADDRESS); } end = saved_start; rv = KERN_INVALID_ADDRESS; goto done; } } if (entry == first_entry) first_entry = tmp_entry; else first_entry = NULL; entry = tmp_entry; } last_timestamp = map->timestamp; continue; } vm_map_clip_start(map, entry, start); vm_map_clip_end(map, entry, end); /* * Mark the entry in case the map lock is released. (See * above.) */ KASSERT((entry->eflags & MAP_ENTRY_IN_TRANSITION) == 0 && entry->wiring_thread == NULL, ("owned map entry %p", entry)); entry->eflags |= MAP_ENTRY_IN_TRANSITION; entry->wiring_thread = curthread; if ((entry->protection & (VM_PROT_READ | VM_PROT_EXECUTE)) == 0 || (entry->protection & prot) != prot) { entry->eflags |= MAP_ENTRY_WIRE_SKIPPED; if ((flags & VM_MAP_WIRE_HOLESOK) == 0) { end = entry->end; rv = KERN_INVALID_ADDRESS; goto done; } goto next_entry; } if (entry->wired_count == 0) { entry->wired_count++; saved_start = entry->start; saved_end = entry->end; /* * Release the map lock, relying on the in-transition * mark. Mark the map busy for fork. */ vm_map_busy(map); vm_map_unlock(map); faddr = saved_start; do { /* * Simulate a fault to get the page and enter * it into the physical map. */ if ((rv = vm_fault(map, faddr, VM_PROT_NONE, VM_FAULT_WIRE)) != KERN_SUCCESS) break; } while ((faddr += PAGE_SIZE) < saved_end); vm_map_lock(map); vm_map_unbusy(map); if (last_timestamp + 1 != map->timestamp) { /* * Look again for the entry because the map was * modified while it was unlocked. The entry * may have been clipped, but NOT merged or * deleted. */ result = vm_map_lookup_entry(map, saved_start, &tmp_entry); KASSERT(result, ("vm_map_wire: lookup failed")); if (entry == first_entry) first_entry = tmp_entry; else first_entry = NULL; entry = tmp_entry; while (entry->end < saved_end) { /* * In case of failure, handle entries * that were not fully wired here; * fully wired entries are handled * later. */ if (rv != KERN_SUCCESS && faddr < entry->end) vm_map_wire_entry_failure(map, entry, faddr); entry = entry->next; } } last_timestamp = map->timestamp; if (rv != KERN_SUCCESS) { vm_map_wire_entry_failure(map, entry, faddr); end = entry->end; goto done; } } else if (!user_wire || (entry->eflags & MAP_ENTRY_USER_WIRED) == 0) { entry->wired_count++; } /* * Check the map for holes in the specified region. * If VM_MAP_WIRE_HOLESOK was specified, skip this check. 
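 *
 * [Editor's illustration, not part of the original source: vm_map_wire()
 * is the backend of mlock(2) and mlockall(2); MCL_FUTURE additionally
 * sets MAP_WIREFUTURE on the map.  A minimal sketch:]
 *
 *	#include <sys/mman.h>
 *	#include <string.h>
 *
 *	int
 *	main(void)
 *	{
 *		size_t len = 64 * 1024;
 *		char *p;
 *
 *		p = mmap(NULL, len, PROT_READ | PROT_WRITE,
 *		    MAP_ANON | MAP_PRIVATE, -1, 0);
 *		if (p == MAP_FAILED)
 *			return (1);
 *		// vm_map_wire() faults in and wires every page.
 *		if (mlock(p, len) == -1)
 *			return (1);
 *		memset(p, 0, len);		// no page-outs possible now
 *		return (munlock(p, len) == -1);	// vm_map_unwire() path
 *	}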
*/ next_entry: if ((flags & VM_MAP_WIRE_HOLESOK) == 0 && entry->end < end && entry->next->start > entry->end) { end = entry->end; rv = KERN_INVALID_ADDRESS; goto done; } entry = entry->next; } rv = KERN_SUCCESS; done: need_wakeup = FALSE; if (first_entry == NULL) { result = vm_map_lookup_entry(map, start, &first_entry); if (!result && (flags & VM_MAP_WIRE_HOLESOK)) first_entry = first_entry->next; else KASSERT(result, ("vm_map_wire: lookup failed")); } for (entry = first_entry; entry->start < end; entry = entry->next) { /* * If VM_MAP_WIRE_HOLESOK was specified, an empty * space in the unwired region could have been mapped * while the map lock was dropped for faulting in the * pages or draining MAP_ENTRY_IN_TRANSITION. * Moreover, another thread could be simultaneously * wiring this new mapping entry. Detect these cases * and skip any entries marked as in transition not by us. */ if ((entry->eflags & MAP_ENTRY_IN_TRANSITION) == 0 || entry->wiring_thread != curthread) { KASSERT((flags & VM_MAP_WIRE_HOLESOK) != 0, ("vm_map_wire: !HOLESOK and new/changed entry")); continue; } if ((entry->eflags & MAP_ENTRY_WIRE_SKIPPED) != 0) goto next_entry_done; if (rv == KERN_SUCCESS) { if (user_wire) entry->eflags |= MAP_ENTRY_USER_WIRED; } else if (entry->wired_count == -1) { /* * Wiring failed on this entry. Thus, unwiring is * unnecessary. */ entry->wired_count = 0; } else if (!user_wire || (entry->eflags & MAP_ENTRY_USER_WIRED) == 0) { /* * Undo the wiring. Wiring succeeded on this entry * but failed on a later entry. */ if (entry->wired_count == 1) vm_map_entry_unwire(map, entry); else entry->wired_count--; } next_entry_done: KASSERT((entry->eflags & MAP_ENTRY_IN_TRANSITION) != 0, ("vm_map_wire: in-transition flag missing %p", entry)); KASSERT(entry->wiring_thread == curthread, ("vm_map_wire: alien wire %p", entry)); entry->eflags &= ~(MAP_ENTRY_IN_TRANSITION | MAP_ENTRY_WIRE_SKIPPED); entry->wiring_thread = NULL; if (entry->eflags & MAP_ENTRY_NEEDS_WAKEUP) { entry->eflags &= ~MAP_ENTRY_NEEDS_WAKEUP; need_wakeup = TRUE; } vm_map_simplify_entry(map, entry); } vm_map_unlock(map); if (need_wakeup) vm_map_wakeup(map); return (rv); } /* * vm_map_sync * * Push any dirty cached pages in the address range to their pager. * If syncio is TRUE, dirty pages are written synchronously. * If invalidate is TRUE, any cached pages are freed as well. * * If the size of the region from start to end is zero, we are * supposed to flush all modified pages within the region containing * start. Unfortunately, a region can be split or coalesced with * neighboring regions, making it difficult to determine what the * original region was. Therefore, we approximate this requirement by * flushing the current region containing start. * * Returns an error if any part of the specified range is not mapped. */ int vm_map_sync( vm_map_t map, vm_offset_t start, vm_offset_t end, boolean_t syncio, boolean_t invalidate) { vm_map_entry_t current; vm_map_entry_t entry; vm_size_t size; vm_object_t object; vm_ooffset_t offset; unsigned int last_timestamp; boolean_t failed; vm_map_lock_read(map); VM_MAP_RANGE_CHECK(map, start, end); if (!vm_map_lookup_entry(map, start, &entry)) { vm_map_unlock_read(map); return (KERN_INVALID_ADDRESS); } else if (start == end) { start = entry->start; end = entry->end; } /* * Make a first pass to check for user-wired memory and holes. 
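 *
 * [Editor's illustration, not part of the original source: vm_map_sync()
 * is reached from msync(2); MS_SYNC selects synchronous writes (syncio)
 * and MS_INVALIDATE requests freeing cached pages, which the first pass
 * refuses on user-wired entries.  The path used below is arbitrary.]
 *
 *	#include <sys/mman.h>
 *	#include <fcntl.h>
 *	#include <unistd.h>
 *
 *	int
 *	main(void)
 *	{
 *		char *p;
 *		int fd;
 *
 *		fd = open("/tmp/msync.demo", O_RDWR | O_CREAT, 0600);
 *		if (fd == -1 || ftruncate(fd, 4096) == -1)
 *			return (1);
 *		p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED,
 *		    fd, 0);
 *		if (p == MAP_FAILED)
 *			return (1);
 *		p[0] = 'x';
 *		// Push the dirty page to the vnode pager synchronously.
 *		return (msync(p, 4096, MS_SYNC) == -1);
 *	}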
 */
	for (current = entry; current->start < end; current = current->next) {
		if (invalidate && (current->eflags & MAP_ENTRY_USER_WIRED)) {
			vm_map_unlock_read(map);
			return (KERN_INVALID_ARGUMENT);
		}
		if (end > current->end &&
		    current->end != current->next->start) {
			vm_map_unlock_read(map);
			return (KERN_INVALID_ADDRESS);
		}
	}

	if (invalidate)
		pmap_remove(map->pmap, start, end);
	failed = FALSE;

	/*
	 * Make a second pass, cleaning/uncaching pages from the indicated
	 * objects as we go.
	 */
	for (current = entry; current->start < end;) {
		offset = current->offset + (start - current->start);
		size = (end <= current->end ? end : current->end) - start;
		if (current->eflags & MAP_ENTRY_IS_SUB_MAP) {
			vm_map_t smap;
			vm_map_entry_t tentry;
			vm_size_t tsize;

			smap = current->object.sub_map;
			vm_map_lock_read(smap);
			(void) vm_map_lookup_entry(smap, offset, &tentry);
			tsize = tentry->end - offset;
			if (tsize < size)
				size = tsize;
			object = tentry->object.vm_object;
			offset = tentry->offset + (offset - tentry->start);
			vm_map_unlock_read(smap);
		} else {
			object = current->object.vm_object;
		}
		vm_object_reference(object);
		last_timestamp = map->timestamp;
		vm_map_unlock_read(map);
		if (!vm_object_sync(object, offset, size, syncio, invalidate))
			failed = TRUE;
		start += size;
		vm_object_deallocate(object);
		vm_map_lock_read(map);
		if (last_timestamp == map->timestamp ||
		    !vm_map_lookup_entry(map, start, &current))
			current = current->next;
	}

	vm_map_unlock_read(map);
	return (failed ? KERN_FAILURE : KERN_SUCCESS);
}

/*
 *	vm_map_entry_unwire:	[ internal use only ]
 *
 *	Make the region specified by this entry pageable.
 *
 *	The map in question should be locked.
 *	[This is the reason for this routine's existence.]
 */
static void
vm_map_entry_unwire(vm_map_t map, vm_map_entry_t entry)
{

	VM_MAP_ASSERT_LOCKED(map);
	KASSERT(entry->wired_count > 0,
	    ("vm_map_entry_unwire: entry %p isn't wired", entry));
	pmap_unwire(map->pmap, entry->start, entry->end);
	vm_object_unwire(entry->object.vm_object, entry->offset, entry->end -
	    entry->start, PQ_ACTIVE);
	entry->wired_count = 0;
}

static void
vm_map_entry_deallocate(vm_map_entry_t entry, boolean_t system_map)
{

	if ((entry->eflags & MAP_ENTRY_IS_SUB_MAP) == 0)
		vm_object_deallocate(entry->object.vm_object);
	uma_zfree(system_map ? kmapentzone : mapentzone, entry);
}

/*
 *	vm_map_entry_delete:	[ internal use only ]
 *
 *	Deallocate the given entry from the target map.
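 *
 *	[Editor's illustration, not part of the original source:
 *	vm_map_delete() below is the backend of munmap(2).  Unmapping the
 *	middle of a region exercises the clipping routines above:]
 *
 *		#include <sys/mman.h>
 *
 *		int
 *		main(void)
 *		{
 *			size_t pg = 4096;
 *			char *p;
 *
 *			p = mmap(NULL, 3 * pg, PROT_READ | PROT_WRITE,
 *			    MAP_ANON | MAP_PRIVATE, -1, 0);
 *			if (p == MAP_FAILED)
 *				return (1);
 *			// Punch a hole: the single entry is clipped and
 *			// only the middle page is deleted.
 *			if (munmap(p + pg, pg) == -1)
 *				return (1);
 *			p[0] = 1;		// first page still mapped
 *			p[2 * pg] = 1;		// last page still mapped
 *			return (0);
 *		}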
*/ static void vm_map_entry_delete(vm_map_t map, vm_map_entry_t entry) { vm_object_t object; vm_pindex_t offidxstart, offidxend, count, size1; vm_size_t size; vm_map_entry_unlink(map, entry); object = entry->object.vm_object; if ((entry->eflags & MAP_ENTRY_GUARD) != 0) { MPASS(entry->cred == NULL); MPASS((entry->eflags & MAP_ENTRY_IS_SUB_MAP) == 0); MPASS(object == NULL); vm_map_entry_deallocate(entry, map->system_map); return; } size = entry->end - entry->start; map->size -= size; if (entry->cred != NULL) { swap_release_by_cred(size, entry->cred); crfree(entry->cred); } if ((entry->eflags & MAP_ENTRY_IS_SUB_MAP) == 0 && (object != NULL)) { KASSERT(entry->cred == NULL || object->cred == NULL || (entry->eflags & MAP_ENTRY_NEEDS_COPY), ("OVERCOMMIT vm_map_entry_delete: both cred %p", entry)); count = atop(size); offidxstart = OFF_TO_IDX(entry->offset); offidxend = offidxstart + count; VM_OBJECT_WLOCK(object); if (object->ref_count != 1 && ((object->flags & (OBJ_NOSPLIT | OBJ_ONEMAPPING)) == OBJ_ONEMAPPING || object == kernel_object)) { vm_object_collapse(object); /* * The option OBJPR_NOTMAPPED can be passed here * because vm_map_delete() already performed * pmap_remove() on the only mapping to this range * of pages. */ vm_object_page_remove(object, offidxstart, offidxend, OBJPR_NOTMAPPED); if (object->type == OBJT_SWAP) swap_pager_freespace(object, offidxstart, count); if (offidxend >= object->size && offidxstart < object->size) { size1 = object->size; object->size = offidxstart; if (object->cred != NULL) { size1 -= object->size; KASSERT(object->charge >= ptoa(size1), ("object %p charge < 0", object)); swap_release_by_cred(ptoa(size1), object->cred); object->charge -= ptoa(size1); } } } VM_OBJECT_WUNLOCK(object); } else entry->object.vm_object = NULL; if (map->system_map) vm_map_entry_deallocate(entry, TRUE); else { entry->next = curthread->td_map_def_user; curthread->td_map_def_user = entry; } } /* * vm_map_delete: [ internal use only ] * * Deallocates the given address range from the target * map. */ int vm_map_delete(vm_map_t map, vm_offset_t start, vm_offset_t end) { vm_map_entry_t entry; vm_map_entry_t first_entry; VM_MAP_ASSERT_LOCKED(map); if (start == end) return (KERN_SUCCESS); /* * Find the start of the region, and clip it */ if (!vm_map_lookup_entry(map, start, &first_entry)) entry = first_entry->next; else { entry = first_entry; vm_map_clip_start(map, entry, start); } /* * Step through all entries in this region */ while (entry->start < end) { vm_map_entry_t next; /* * Wait for wiring or unwiring of an entry to complete. * Also wait for any system wirings to disappear on * user maps. */ if ((entry->eflags & MAP_ENTRY_IN_TRANSITION) != 0 || (vm_map_pmap(map) != kernel_pmap && vm_map_entry_system_wired_count(entry) != 0)) { unsigned int last_timestamp; vm_offset_t saved_start; vm_map_entry_t tmp_entry; saved_start = entry->start; entry->eflags |= MAP_ENTRY_NEEDS_WAKEUP; last_timestamp = map->timestamp; (void) vm_map_unlock_and_wait(map, 0); vm_map_lock(map); if (last_timestamp + 1 != map->timestamp) { /* * Look again for the entry because the map was * modified while it was unlocked. * Specifically, the entry may have been * clipped, merged, or deleted. 
*/ if (!vm_map_lookup_entry(map, saved_start, &tmp_entry)) entry = tmp_entry->next; else { entry = tmp_entry; vm_map_clip_start(map, entry, saved_start); } } continue; } vm_map_clip_end(map, entry, end); next = entry->next; /* * Unwire before removing addresses from the pmap; otherwise, * unwiring will put the entries back in the pmap. */ if (entry->wired_count != 0) vm_map_entry_unwire(map, entry); /* * Remove mappings for the pages, but only if the * mappings could exist. For instance, it does not * make sense to call pmap_remove() for guard entries. */ if ((entry->eflags & MAP_ENTRY_IS_SUB_MAP) != 0 || entry->object.vm_object != NULL) pmap_remove(map->pmap, entry->start, entry->end); if (entry->end == map->anon_loc) map->anon_loc = entry->start; /* * Delete the entry only after removing all pmap * entries pointing to its pages. (Otherwise, its * page frames may be reallocated, and any modify bits * will be set in the wrong object!) */ vm_map_entry_delete(map, entry); entry = next; } return (KERN_SUCCESS); } /* * vm_map_remove: * * Remove the given address range from the target map. * This is the exported form of vm_map_delete. */ int vm_map_remove(vm_map_t map, vm_offset_t start, vm_offset_t end) { int result; vm_map_lock(map); VM_MAP_RANGE_CHECK(map, start, end); result = vm_map_delete(map, start, end); vm_map_unlock(map); return (result); } /* * vm_map_check_protection: * * Assert that the target map allows the specified privilege on the * entire address region given. The entire region must be allocated. * * WARNING! This code does not and should not check whether the * contents of the region is accessible. For example a smaller file * might be mapped into a larger address space. * * NOTE! This code is also called by munmap(). * * The map must be locked. A read lock is sufficient. */ boolean_t vm_map_check_protection(vm_map_t map, vm_offset_t start, vm_offset_t end, vm_prot_t protection) { vm_map_entry_t entry; vm_map_entry_t tmp_entry; if (!vm_map_lookup_entry(map, start, &tmp_entry)) return (FALSE); entry = tmp_entry; while (start < end) { /* * No holes allowed! */ if (start < entry->start) return (FALSE); /* * Check protection associated with entry. */ if ((entry->protection & protection) != protection) return (FALSE); /* go to next entry */ start = entry->end; entry = entry->next; } return (TRUE); } /* * vm_map_copy_entry: * * Copies the contents of the source entry to the destination * entry. The entries *must* be aligned properly. */ static void vm_map_copy_entry( vm_map_t src_map, vm_map_t dst_map, vm_map_entry_t src_entry, vm_map_entry_t dst_entry, vm_ooffset_t *fork_charge) { vm_object_t src_object; vm_map_entry_t fake_entry; vm_offset_t size; struct ucred *cred; int charged; VM_MAP_ASSERT_LOCKED(dst_map); if ((dst_entry->eflags|src_entry->eflags) & MAP_ENTRY_IS_SUB_MAP) return; if (src_entry->wired_count == 0 || (src_entry->protection & VM_PROT_WRITE) == 0) { /* * If the source entry is marked needs_copy, it is already * write-protected. */ if ((src_entry->eflags & MAP_ENTRY_NEEDS_COPY) == 0 && (src_entry->protection & VM_PROT_WRITE) != 0) { pmap_protect(src_map->pmap, src_entry->start, src_entry->end, src_entry->protection & ~VM_PROT_WRITE); } /* * Make a copy of the object. 
*/ size = src_entry->end - src_entry->start; if ((src_object = src_entry->object.vm_object) != NULL) { VM_OBJECT_WLOCK(src_object); charged = ENTRY_CHARGED(src_entry); if (src_object->handle == NULL && (src_object->type == OBJT_DEFAULT || src_object->type == OBJT_SWAP)) { vm_object_collapse(src_object); if ((src_object->flags & (OBJ_NOSPLIT | OBJ_ONEMAPPING)) == OBJ_ONEMAPPING) { vm_object_split(src_entry); src_object = src_entry->object.vm_object; } } vm_object_reference_locked(src_object); vm_object_clear_flag(src_object, OBJ_ONEMAPPING); if (src_entry->cred != NULL && !(src_entry->eflags & MAP_ENTRY_NEEDS_COPY)) { KASSERT(src_object->cred == NULL, ("OVERCOMMIT: vm_map_copy_entry: cred %p", src_object)); src_object->cred = src_entry->cred; src_object->charge = size; } VM_OBJECT_WUNLOCK(src_object); dst_entry->object.vm_object = src_object; if (charged) { cred = curthread->td_ucred; crhold(cred); dst_entry->cred = cred; *fork_charge += size; if (!(src_entry->eflags & MAP_ENTRY_NEEDS_COPY)) { crhold(cred); src_entry->cred = cred; *fork_charge += size; } } src_entry->eflags |= MAP_ENTRY_COW | MAP_ENTRY_NEEDS_COPY; dst_entry->eflags |= MAP_ENTRY_COW | MAP_ENTRY_NEEDS_COPY; dst_entry->offset = src_entry->offset; if (src_entry->eflags & MAP_ENTRY_VN_WRITECNT) { /* * MAP_ENTRY_VN_WRITECNT cannot * indicate write reference from * src_entry, since the entry is * marked as needs copy. Allocate a * fake entry that is used to * decrement object->un_pager.vnp.writecount * at the appropriate time. Attach * fake_entry to the deferred list. */ fake_entry = vm_map_entry_create(dst_map); fake_entry->eflags = MAP_ENTRY_VN_WRITECNT; src_entry->eflags &= ~MAP_ENTRY_VN_WRITECNT; vm_object_reference(src_object); fake_entry->object.vm_object = src_object; fake_entry->start = src_entry->start; fake_entry->end = src_entry->end; fake_entry->next = curthread->td_map_def_user; curthread->td_map_def_user = fake_entry; } pmap_copy(dst_map->pmap, src_map->pmap, dst_entry->start, dst_entry->end - dst_entry->start, src_entry->start); } else { dst_entry->object.vm_object = NULL; dst_entry->offset = 0; if (src_entry->cred != NULL) { dst_entry->cred = curthread->td_ucred; crhold(dst_entry->cred); *fork_charge += size; } } } else { /* * We don't want to make writeable wired pages copy-on-write. * Immediately copy these pages into the new map by simulating * page faults. The new pages are pageable. */ vm_fault_copy_entry(dst_map, src_map, dst_entry, src_entry, fork_charge); } } /* * vmspace_map_entry_forked: * Update the newly-forked vmspace each time a map entry is inherited * or copied. The values for vm_dsize and vm_tsize are approximate * (and mostly-obsolete ideas in the face of mmap(2) et al.) 
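 *
 * [Editor's illustration, not part of the original source: the COW
 * machinery set up by vm_map_copy_entry() above is what a forked child
 * observes.  A minimal sketch:]
 *
 *	#include <sys/types.h>
 *	#include <sys/mman.h>
 *	#include <sys/wait.h>
 *	#include <unistd.h>
 *
 *	int
 *	main(void)
 *	{
 *		char *p;
 *		int status;
 *
 *		p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
 *		    MAP_ANON | MAP_PRIVATE, -1, 0);
 *		if (p == MAP_FAILED)
 *			return (1);
 *		p[0] = 1;
 *		if (fork() == 0) {
 *			p[0] = 2;	// faults a private copy in the child
 *			_exit(0);
 *		}
 *		wait(&status);
 *		return (p[0] != 1);	// parent still sees 1
 *	}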
*/ static void vmspace_map_entry_forked(const struct vmspace *vm1, struct vmspace *vm2, vm_map_entry_t entry) { vm_size_t entrysize; vm_offset_t newend; if ((entry->eflags & MAP_ENTRY_GUARD) != 0) return; entrysize = entry->end - entry->start; vm2->vm_map.size += entrysize; if (entry->eflags & (MAP_ENTRY_GROWS_DOWN | MAP_ENTRY_GROWS_UP)) { vm2->vm_ssize += btoc(entrysize); } else if (entry->start >= (vm_offset_t)vm1->vm_daddr && entry->start < (vm_offset_t)vm1->vm_daddr + ctob(vm1->vm_dsize)) { newend = MIN(entry->end, (vm_offset_t)vm1->vm_daddr + ctob(vm1->vm_dsize)); vm2->vm_dsize += btoc(newend - entry->start); } else if (entry->start >= (vm_offset_t)vm1->vm_taddr && entry->start < (vm_offset_t)vm1->vm_taddr + ctob(vm1->vm_tsize)) { newend = MIN(entry->end, (vm_offset_t)vm1->vm_taddr + ctob(vm1->vm_tsize)); vm2->vm_tsize += btoc(newend - entry->start); } } /* * vmspace_fork: * Create a new process vmspace structure and vm_map * based on those of an existing process. The new map * is based on the old map, according to the inheritance * values on the regions in that map. * * XXX It might be worth coalescing the entries added to the new vmspace. * * The source map must not be locked. */ struct vmspace * vmspace_fork(struct vmspace *vm1, vm_ooffset_t *fork_charge) { struct vmspace *vm2; vm_map_t new_map, old_map; vm_map_entry_t new_entry, old_entry; vm_object_t object; int locked; vm_inherit_t inh; old_map = &vm1->vm_map; /* Copy immutable fields of vm1 to vm2. */ vm2 = vmspace_alloc(vm_map_min(old_map), vm_map_max(old_map), pmap_pinit); if (vm2 == NULL) return (NULL); vm2->vm_taddr = vm1->vm_taddr; vm2->vm_daddr = vm1->vm_daddr; vm2->vm_maxsaddr = vm1->vm_maxsaddr; vm_map_lock(old_map); if (old_map->busy) vm_map_wait_busy(old_map); new_map = &vm2->vm_map; locked = vm_map_trylock(new_map); /* trylock to silence WITNESS */ KASSERT(locked, ("vmspace_fork: lock failed")); new_map->anon_loc = old_map->anon_loc; old_entry = old_map->header.next; while (old_entry != &old_map->header) { if (old_entry->eflags & MAP_ENTRY_IS_SUB_MAP) panic("vm_map_fork: encountered a submap"); inh = old_entry->inheritance; if ((old_entry->eflags & MAP_ENTRY_GUARD) != 0 && inh != VM_INHERIT_NONE) inh = VM_INHERIT_COPY; switch (inh) { case VM_INHERIT_NONE: break; case VM_INHERIT_SHARE: /* * Clone the entry, creating the shared object if necessary. */ object = old_entry->object.vm_object; if (object == NULL) { object = vm_object_allocate(OBJT_DEFAULT, atop(old_entry->end - old_entry->start)); old_entry->object.vm_object = object; old_entry->offset = 0; if (old_entry->cred != NULL) { object->cred = old_entry->cred; object->charge = old_entry->end - old_entry->start; old_entry->cred = NULL; } } /* * Add the reference before calling vm_object_shadow * to insure that a shadow object is created. */ vm_object_reference(object); if (old_entry->eflags & MAP_ENTRY_NEEDS_COPY) { vm_object_shadow(&old_entry->object.vm_object, &old_entry->offset, old_entry->end - old_entry->start); old_entry->eflags &= ~MAP_ENTRY_NEEDS_COPY; /* Transfer the second reference too. */ vm_object_reference( old_entry->object.vm_object); /* * As in vm_map_simplify_entry(), the * vnode lock will not be acquired in * this call to vm_object_deallocate(). 
*/ vm_object_deallocate(object); object = old_entry->object.vm_object; } VM_OBJECT_WLOCK(object); vm_object_clear_flag(object, OBJ_ONEMAPPING); if (old_entry->cred != NULL) { KASSERT(object->cred == NULL, ("vmspace_fork both cred")); object->cred = old_entry->cred; object->charge = old_entry->end - old_entry->start; old_entry->cred = NULL; } /* * Assert the correct state of the vnode * v_writecount while the object is locked, to * not relock it later for the assertion * correctness. */ if (old_entry->eflags & MAP_ENTRY_VN_WRITECNT && object->type == OBJT_VNODE) { KASSERT(((struct vnode *)object->handle)-> v_writecount > 0, ("vmspace_fork: v_writecount %p", object)); KASSERT(object->un_pager.vnp.writemappings > 0, ("vmspace_fork: vnp.writecount %p", object)); } VM_OBJECT_WUNLOCK(object); /* * Clone the entry, referencing the shared object. */ new_entry = vm_map_entry_create(new_map); *new_entry = *old_entry; new_entry->eflags &= ~(MAP_ENTRY_USER_WIRED | MAP_ENTRY_IN_TRANSITION); new_entry->wiring_thread = NULL; new_entry->wired_count = 0; if (new_entry->eflags & MAP_ENTRY_VN_WRITECNT) { vnode_pager_update_writecount(object, new_entry->start, new_entry->end); } /* * Insert the entry into the new map -- we know we're * inserting at the end of the new map. */ vm_map_entry_link(new_map, new_map->header.prev, new_entry); vmspace_map_entry_forked(vm1, vm2, new_entry); /* * Update the physical map */ pmap_copy(new_map->pmap, old_map->pmap, new_entry->start, (old_entry->end - old_entry->start), old_entry->start); break; case VM_INHERIT_COPY: /* * Clone the entry and link into the map. */ new_entry = vm_map_entry_create(new_map); *new_entry = *old_entry; /* * Copied entry is COW over the old object. */ new_entry->eflags &= ~(MAP_ENTRY_USER_WIRED | MAP_ENTRY_IN_TRANSITION | MAP_ENTRY_VN_WRITECNT); new_entry->wiring_thread = NULL; new_entry->wired_count = 0; new_entry->object.vm_object = NULL; new_entry->cred = NULL; vm_map_entry_link(new_map, new_map->header.prev, new_entry); vmspace_map_entry_forked(vm1, vm2, new_entry); vm_map_copy_entry(old_map, new_map, old_entry, new_entry, fork_charge); break; case VM_INHERIT_ZERO: /* * Create a new anonymous mapping entry modelled from * the old one. */ new_entry = vm_map_entry_create(new_map); memset(new_entry, 0, sizeof(*new_entry)); new_entry->start = old_entry->start; new_entry->end = old_entry->end; new_entry->eflags = old_entry->eflags & ~(MAP_ENTRY_USER_WIRED | MAP_ENTRY_IN_TRANSITION | MAP_ENTRY_VN_WRITECNT); new_entry->protection = old_entry->protection; new_entry->max_protection = old_entry->max_protection; new_entry->inheritance = VM_INHERIT_ZERO; vm_map_entry_link(new_map, new_map->header.prev, new_entry); vmspace_map_entry_forked(vm1, vm2, new_entry); new_entry->cred = curthread->td_ucred; crhold(new_entry->cred); *fork_charge += (new_entry->end - new_entry->start); break; } old_entry = old_entry->next; } /* * Use inlined vm_map_unlock() to postpone handling the deferred * map entries, which cannot be done until both old_map and * new_map locks are released. */ sx_xunlock(&old_map->lock); sx_xunlock(&new_map->lock); vm_map_process_deferred(); return (vm2); } /* * Create a process's stack for exec_new_vmspace(). This function is never * asked to wire the newly created stack. */ int vm_map_stack(vm_map_t map, vm_offset_t addrbos, vm_size_t max_ssize, vm_prot_t prot, vm_prot_t max, int cow) { vm_size_t growsize, init_ssize; rlim_t vmemlim; int rv; MPASS((map->flags & MAP_WIREFUTURE) == 0); growsize = sgrowsiz; init_ssize = (max_ssize < growsize) ? 
max_ssize : growsize; vm_map_lock(map); vmemlim = lim_cur(curthread, RLIMIT_VMEM); /* If we would blow our VMEM resource limit, no go */ if (map->size + init_ssize > vmemlim) { rv = KERN_NO_SPACE; goto out; } rv = vm_map_stack_locked(map, addrbos, max_ssize, growsize, prot, max, cow); out: vm_map_unlock(map); return (rv); } static int stack_guard_page = 1; SYSCTL_INT(_security_bsd, OID_AUTO, stack_guard_page, CTLFLAG_RWTUN, &stack_guard_page, 0, "Specifies the number of guard pages for a stack that grows"); static int vm_map_stack_locked(vm_map_t map, vm_offset_t addrbos, vm_size_t max_ssize, vm_size_t growsize, vm_prot_t prot, vm_prot_t max, int cow) { vm_map_entry_t new_entry, prev_entry; vm_offset_t bot, gap_bot, gap_top, top; vm_size_t init_ssize, sgp; int orient, rv; /* * The stack orientation is piggybacked with the cow argument. * Extract it into orient and mask the cow argument so that we * don't pass it around further. */ orient = cow & (MAP_STACK_GROWS_DOWN | MAP_STACK_GROWS_UP); KASSERT(orient != 0, ("No stack grow direction")); KASSERT(orient != (MAP_STACK_GROWS_DOWN | MAP_STACK_GROWS_UP), ("bi-dir stack")); if (addrbos < vm_map_min(map) || addrbos + max_ssize > vm_map_max(map) || addrbos + max_ssize <= addrbos) return (KERN_INVALID_ADDRESS); sgp = (vm_size_t)stack_guard_page * PAGE_SIZE; if (sgp >= max_ssize) return (KERN_INVALID_ARGUMENT); init_ssize = growsize; if (max_ssize < init_ssize + sgp) init_ssize = max_ssize - sgp; /* If addr is already mapped, no go */ if (vm_map_lookup_entry(map, addrbos, &prev_entry)) return (KERN_NO_SPACE); /* * If we can't accommodate max_ssize in the current mapping, no go. */ if (prev_entry->next->start < addrbos + max_ssize) return (KERN_NO_SPACE); /* * We initially map a stack of only init_ssize. We will grow as * needed later. Depending on the orientation of the stack (i.e. * the grow direction) we either map at the top of the range, the * bottom of the range or in the middle. * * Note: we would normally expect prot and max to be VM_PROT_ALL, * and cow to be 0. Possibly we should eliminate these as input * parameters, and just pass these values here in the insert call. */ if (orient == MAP_STACK_GROWS_DOWN) { bot = addrbos + max_ssize - init_ssize; top = bot + init_ssize; gap_bot = addrbos; gap_top = bot; } else /* if (orient == MAP_STACK_GROWS_UP) */ { bot = addrbos; top = bot + init_ssize; gap_bot = top; gap_top = addrbos + max_ssize; } rv = vm_map_insert(map, NULL, 0, bot, top, prot, max, cow); if (rv != KERN_SUCCESS) return (rv); new_entry = prev_entry->next; KASSERT(new_entry->end == top || new_entry->start == bot, ("Bad entry start/end for new stack entry")); KASSERT((orient & MAP_STACK_GROWS_DOWN) == 0 || (new_entry->eflags & MAP_ENTRY_GROWS_DOWN) != 0, ("new entry lacks MAP_ENTRY_GROWS_DOWN")); KASSERT((orient & MAP_STACK_GROWS_UP) == 0 || (new_entry->eflags & MAP_ENTRY_GROWS_UP) != 0, ("new entry lacks MAP_ENTRY_GROWS_UP")); rv = vm_map_insert(map, NULL, 0, gap_bot, gap_top, VM_PROT_NONE, VM_PROT_NONE, MAP_CREATE_GUARD | (orient == MAP_STACK_GROWS_DOWN ? MAP_CREATE_STACK_GAP_DN : MAP_CREATE_STACK_GAP_UP)); if (rv != KERN_SUCCESS) (void)vm_map_delete(map, bot, top); return (rv); } /* * Attempts to grow a vm stack entry. Returns KERN_SUCCESS if we * successfully grow the stack. 
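 *
 * [Editor's illustration, not part of the original source: assuming the
 * usual path, a MAP_STACK request from mmap(2) reaches vm_map_stack()
 * above, and later faults below the mapped top grow the entry through
 * this function.]
 *
 *	#include <sys/mman.h>
 *
 *	int
 *	main(void)
 *	{
 *		size_t sz = 1024 * 1024;
 *		char *p;
 *
 *		p = mmap(NULL, sz, PROT_READ | PROT_WRITE,
 *		    MAP_ANON | MAP_PRIVATE | MAP_STACK, -1, 0);
 *		if (p == MAP_FAILED)
 *			return (1);
 *		// Only about sgrowsiz bytes at the top are mapped at
 *		// first; this touch is within the initial mapping.
 *		p[sz - 1] = 1;
 *		return (0);
 *	}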
*/ static int vm_map_growstack(vm_map_t map, vm_offset_t addr, vm_map_entry_t gap_entry) { vm_map_entry_t stack_entry; struct proc *p; struct vmspace *vm; struct ucred *cred; vm_offset_t gap_end, gap_start, grow_start; size_t grow_amount, guard, max_grow; rlim_t lmemlim, stacklim, vmemlim; int rv, rv1; bool gap_deleted, grow_down, is_procstack; #ifdef notyet uint64_t limit; #endif #ifdef RACCT int error; #endif p = curproc; vm = p->p_vmspace; /* * Disallow stack growth when the access is performed by a * debugger or AIO daemon. The reason is that the wrong * resource limits are applied. */ if (map != &p->p_vmspace->vm_map || p->p_textvp == NULL) return (KERN_FAILURE); MPASS(!map->system_map); guard = stack_guard_page * PAGE_SIZE; lmemlim = lim_cur(curthread, RLIMIT_MEMLOCK); stacklim = lim_cur(curthread, RLIMIT_STACK); vmemlim = lim_cur(curthread, RLIMIT_VMEM); retry: /* If addr is not in a hole for a stack grow area, no need to grow. */ if (gap_entry == NULL && !vm_map_lookup_entry(map, addr, &gap_entry)) return (KERN_FAILURE); if ((gap_entry->eflags & MAP_ENTRY_GUARD) == 0) return (KERN_SUCCESS); if ((gap_entry->eflags & MAP_ENTRY_STACK_GAP_DN) != 0) { stack_entry = gap_entry->next; if ((stack_entry->eflags & MAP_ENTRY_GROWS_DOWN) == 0 || stack_entry->start != gap_entry->end) return (KERN_FAILURE); grow_amount = round_page(stack_entry->start - addr); grow_down = true; } else if ((gap_entry->eflags & MAP_ENTRY_STACK_GAP_UP) != 0) { stack_entry = gap_entry->prev; if ((stack_entry->eflags & MAP_ENTRY_GROWS_UP) == 0 || stack_entry->end != gap_entry->start) return (KERN_FAILURE); grow_amount = round_page(addr + 1 - stack_entry->end); grow_down = false; } else { return (KERN_FAILURE); } max_grow = gap_entry->end - gap_entry->start; if (guard > max_grow) return (KERN_NO_SPACE); max_grow -= guard; if (grow_amount > max_grow) return (KERN_NO_SPACE); /* * If this is the main process stack, see if we're over the stack * limit. 
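 *
 * [Editor's illustration, not part of the original source: the limits
 * consulted here are visible from userland; security.bsd.stack_guard_page
 * is the sysctl declared above.]
 *
 *	#include <sys/types.h>
 *	#include <sys/resource.h>
 *	#include <sys/sysctl.h>
 *	#include <stdint.h>
 *	#include <stdio.h>
 *
 *	int
 *	main(void)
 *	{
 *		struct rlimit rl;
 *		size_t glen;
 *		int guard;
 *
 *		glen = sizeof(guard);
 *		if (getrlimit(RLIMIT_STACK, &rl) == -1 ||
 *		    sysctlbyname("security.bsd.stack_guard_page", &guard,
 *		    &glen, NULL, 0) == -1)
 *			return (1);
 *		printf("stack limit %ju, guard pages %d\n",
 *		    (uintmax_t)rl.rlim_cur, guard);
 *		return (0);
 *	}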
*/ is_procstack = addr >= (vm_offset_t)vm->vm_maxsaddr && addr < (vm_offset_t)p->p_sysent->sv_usrstack; if (is_procstack && (ctob(vm->vm_ssize) + grow_amount > stacklim)) return (KERN_NO_SPACE); #ifdef RACCT if (racct_enable) { PROC_LOCK(p); if (is_procstack && racct_set(p, RACCT_STACK, ctob(vm->vm_ssize) + grow_amount)) { PROC_UNLOCK(p); return (KERN_NO_SPACE); } PROC_UNLOCK(p); } #endif grow_amount = roundup(grow_amount, sgrowsiz); if (grow_amount > max_grow) grow_amount = max_grow; if (is_procstack && (ctob(vm->vm_ssize) + grow_amount > stacklim)) { grow_amount = trunc_page((vm_size_t)stacklim) - ctob(vm->vm_ssize); } #ifdef notyet PROC_LOCK(p); limit = racct_get_available(p, RACCT_STACK); PROC_UNLOCK(p); if (is_procstack && (ctob(vm->vm_ssize) + grow_amount > limit)) grow_amount = limit - ctob(vm->vm_ssize); #endif if (!old_mlock && (map->flags & MAP_WIREFUTURE) != 0) { if (ptoa(pmap_wired_count(map->pmap)) + grow_amount > lmemlim) { rv = KERN_NO_SPACE; goto out; } #ifdef RACCT if (racct_enable) { PROC_LOCK(p); if (racct_set(p, RACCT_MEMLOCK, ptoa(pmap_wired_count(map->pmap)) + grow_amount)) { PROC_UNLOCK(p); rv = KERN_NO_SPACE; goto out; } PROC_UNLOCK(p); } #endif } /* If we would blow our VMEM resource limit, no go */ if (map->size + grow_amount > vmemlim) { rv = KERN_NO_SPACE; goto out; } #ifdef RACCT if (racct_enable) { PROC_LOCK(p); if (racct_set(p, RACCT_VMEM, map->size + grow_amount)) { PROC_UNLOCK(p); rv = KERN_NO_SPACE; goto out; } PROC_UNLOCK(p); } #endif if (vm_map_lock_upgrade(map)) { gap_entry = NULL; vm_map_lock_read(map); goto retry; } if (grow_down) { grow_start = gap_entry->end - grow_amount; if (gap_entry->start + grow_amount == gap_entry->end) { gap_start = gap_entry->start; gap_end = gap_entry->end; vm_map_entry_delete(map, gap_entry); gap_deleted = true; } else { MPASS(gap_entry->start < gap_entry->end - grow_amount); gap_entry->end -= grow_amount; vm_map_entry_resize_free(map, gap_entry); gap_deleted = false; } rv = vm_map_insert(map, NULL, 0, grow_start, grow_start + grow_amount, stack_entry->protection, stack_entry->max_protection, MAP_STACK_GROWS_DOWN); if (rv != KERN_SUCCESS) { if (gap_deleted) { rv1 = vm_map_insert(map, NULL, 0, gap_start, gap_end, VM_PROT_NONE, VM_PROT_NONE, MAP_CREATE_GUARD | MAP_CREATE_STACK_GAP_DN); MPASS(rv1 == KERN_SUCCESS); } else { gap_entry->end += grow_amount; vm_map_entry_resize_free(map, gap_entry); } } } else { grow_start = stack_entry->end; cred = stack_entry->cred; if (cred == NULL && stack_entry->object.vm_object != NULL) cred = stack_entry->object.vm_object->cred; if (cred != NULL && !swap_reserve_by_cred(grow_amount, cred)) rv = KERN_NO_SPACE; /* Grow the underlying object if applicable. */ else if (stack_entry->object.vm_object == NULL || vm_object_coalesce(stack_entry->object.vm_object, stack_entry->offset, (vm_size_t)(stack_entry->end - stack_entry->start), (vm_size_t)grow_amount, cred != NULL)) { if (gap_entry->start + grow_amount == gap_entry->end) vm_map_entry_delete(map, gap_entry); else gap_entry->start += grow_amount; stack_entry->end += grow_amount; map->size += grow_amount; vm_map_entry_resize_free(map, stack_entry); rv = KERN_SUCCESS; } else rv = KERN_FAILURE; } if (rv == KERN_SUCCESS && is_procstack) vm->vm_ssize += btoc(grow_amount); /* * Heed the MAP_WIREFUTURE flag if it was set for this process. 
*/ if (rv == KERN_SUCCESS && (map->flags & MAP_WIREFUTURE) != 0) { vm_map_unlock(map); vm_map_wire(map, grow_start, grow_start + grow_amount, VM_MAP_WIRE_USER | VM_MAP_WIRE_NOHOLES); vm_map_lock_read(map); } else vm_map_lock_downgrade(map); out: #ifdef RACCT if (racct_enable && rv != KERN_SUCCESS) { PROC_LOCK(p); error = racct_set(p, RACCT_VMEM, map->size); KASSERT(error == 0, ("decreasing RACCT_VMEM failed")); if (!old_mlock) { error = racct_set(p, RACCT_MEMLOCK, ptoa(pmap_wired_count(map->pmap))); KASSERT(error == 0, ("decreasing RACCT_MEMLOCK failed")); } error = racct_set(p, RACCT_STACK, ctob(vm->vm_ssize)); KASSERT(error == 0, ("decreasing RACCT_STACK failed")); PROC_UNLOCK(p); } #endif return (rv); } /* * Unshare the specified VM space for exec. If other processes are * mapped to it, then create a new one. The new vmspace is empty. */ int vmspace_exec(struct proc *p, vm_offset_t minuser, vm_offset_t maxuser) { struct vmspace *oldvmspace = p->p_vmspace; struct vmspace *newvmspace; KASSERT((curthread->td_pflags & TDP_EXECVMSPC) == 0, ("vmspace_exec recursed")); newvmspace = vmspace_alloc(minuser, maxuser, pmap_pinit); if (newvmspace == NULL) return (ENOMEM); newvmspace->vm_swrss = oldvmspace->vm_swrss; /* * This code is written like this for prototype purposes. The * goal is to avoid running down the vmspace here, but to let the * other processes that are still using the vmspace run it down * eventually. Even though there is little or no chance of blocking * here, it is a good idea to keep this form for future mods. */ PROC_VMSPACE_LOCK(p); p->p_vmspace = newvmspace; PROC_VMSPACE_UNLOCK(p); if (p == curthread->td_proc) pmap_activate(curthread); curthread->td_pflags |= TDP_EXECVMSPC; return (0); } /* * Unshare the specified VM space for forcing COW. This * is called by rfork, for the (RFMEM|RFPROC) == 0 case. */ int vmspace_unshare(struct proc *p) { struct vmspace *oldvmspace = p->p_vmspace; struct vmspace *newvmspace; vm_ooffset_t fork_charge; if (oldvmspace->vm_refcnt == 1) return (0); fork_charge = 0; newvmspace = vmspace_fork(oldvmspace, &fork_charge); if (newvmspace == NULL) return (ENOMEM); if (!swap_reserve_by_cred(fork_charge, p->p_ucred)) { vmspace_free(newvmspace); return (ENOMEM); } PROC_VMSPACE_LOCK(p); p->p_vmspace = newvmspace; PROC_VMSPACE_UNLOCK(p); if (p == curthread->td_proc) pmap_activate(curthread); vmspace_free(oldvmspace); return (0); } /* * vm_map_lookup: * * Finds the VM object, offset, and * protection for a given virtual address in the * specified map, assuming a page fault of the * type specified. * * Leaves the map in question locked for read; return * values are guaranteed until a vm_map_lookup_done * call is performed. Note that the map argument * is in/out; the returned map must be used in * the call to vm_map_lookup_done. * * A handle (out_entry) is returned for use in * vm_map_lookup_done, to make that fast. * * If a lookup is requested with "write protection" * specified, the map may be changed to perform virtual * copying operations, although the data referenced will * remain the same.
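 *
 * A minimal caller sketch (illustrative only; vm_fault() is the
 * canonical consumer of this interface):
 *
 *	rv = vm_map_lookup(&map, vaddr, VM_PROT_READ, &entry,
 *	    &object, &pindex, &prot, &wired);
 *	if (rv != KERN_SUCCESS)
 *		return (rv);
 *	... operate on the page at pindex within object ...
 *	vm_map_lookup_done(map, entry);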
*/ int vm_map_lookup(vm_map_t *var_map, /* IN/OUT */ vm_offset_t vaddr, vm_prot_t fault_typea, vm_map_entry_t *out_entry, /* OUT */ vm_object_t *object, /* OUT */ vm_pindex_t *pindex, /* OUT */ vm_prot_t *out_prot, /* OUT */ boolean_t *wired) /* OUT */ { vm_map_entry_t entry; vm_map_t map = *var_map; vm_prot_t prot; vm_prot_t fault_type = fault_typea; vm_object_t eobject; vm_size_t size; struct ucred *cred; RetryLookup: vm_map_lock_read(map); RetryLookupLocked: /* * Lookup the faulting address. */ if (!vm_map_lookup_entry(map, vaddr, out_entry)) { vm_map_unlock_read(map); return (KERN_INVALID_ADDRESS); } entry = *out_entry; /* * Handle submaps. */ if (entry->eflags & MAP_ENTRY_IS_SUB_MAP) { vm_map_t old_map = map; *var_map = map = entry->object.sub_map; vm_map_unlock_read(old_map); goto RetryLookup; } /* * Check whether this task is allowed to have this page. */ prot = entry->protection; if ((fault_typea & VM_PROT_FAULT_LOOKUP) != 0) { fault_typea &= ~VM_PROT_FAULT_LOOKUP; if (prot == VM_PROT_NONE && map != kernel_map && (entry->eflags & MAP_ENTRY_GUARD) != 0 && (entry->eflags & (MAP_ENTRY_STACK_GAP_DN | MAP_ENTRY_STACK_GAP_UP)) != 0 && vm_map_growstack(map, vaddr, entry) == KERN_SUCCESS) goto RetryLookupLocked; } fault_type &= VM_PROT_READ | VM_PROT_WRITE | VM_PROT_EXECUTE; if ((fault_type & prot) != fault_type || prot == VM_PROT_NONE) { vm_map_unlock_read(map); return (KERN_PROTECTION_FAILURE); } KASSERT((prot & VM_PROT_WRITE) == 0 || (entry->eflags & (MAP_ENTRY_USER_WIRED | MAP_ENTRY_NEEDS_COPY)) != (MAP_ENTRY_USER_WIRED | MAP_ENTRY_NEEDS_COPY), ("entry %p flags %x", entry, entry->eflags)); if ((fault_typea & VM_PROT_COPY) != 0 && (entry->max_protection & VM_PROT_WRITE) == 0 && (entry->eflags & MAP_ENTRY_COW) == 0) { vm_map_unlock_read(map); return (KERN_PROTECTION_FAILURE); } /* * If this page is not pageable, we have to get it for all possible * accesses. */ *wired = (entry->wired_count != 0); if (*wired) fault_type = entry->protection; size = entry->end - entry->start; /* * If the entry was copy-on-write, we either ... */ if (entry->eflags & MAP_ENTRY_NEEDS_COPY) { /* * If we want to write the page, we may as well handle that * now since we've got the map locked. * * If we don't need to write the page, we just demote the * permissions allowed. */ if ((fault_type & VM_PROT_WRITE) != 0 || (fault_typea & VM_PROT_COPY) != 0) { /* * Make a new object, and place it in the object * chain. Note that no new references have appeared * -- one just moved from the map to the new * object. */ if (vm_map_lock_upgrade(map)) goto RetryLookup; if (entry->cred == NULL) { /* * The debugger owner is charged for * the memory. */ cred = curthread->td_ucred; crhold(cred); if (!swap_reserve_by_cred(size, cred)) { crfree(cred); vm_map_unlock(map); return (KERN_RESOURCE_SHORTAGE); } entry->cred = cred; } vm_object_shadow(&entry->object.vm_object, &entry->offset, size); entry->eflags &= ~MAP_ENTRY_NEEDS_COPY; eobject = entry->object.vm_object; if (eobject->cred != NULL) { /* * The object was not shadowed. */ swap_release_by_cred(size, entry->cred); crfree(entry->cred); entry->cred = NULL; } else if (entry->cred != NULL) { VM_OBJECT_WLOCK(eobject); eobject->cred = entry->cred; eobject->charge = size; VM_OBJECT_WUNLOCK(eobject); entry->cred = NULL; } vm_map_lock_downgrade(map); } else { /* * We're attempting to read a copy-on-write page -- * don't allow writes. */ prot &= ~VM_PROT_WRITE; } } /* * Create an object if necessary. 
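 * (A first fault against an anonymous mapping that was entered with a
 * NULL object lands here; the OBJT_DEFAULT object allocated below is
 * later converted to OBJT_SWAP if one of its pages ever has to be
 * swapped out.)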
*/ if (entry->object.vm_object == NULL && !map->system_map) { if (vm_map_lock_upgrade(map)) goto RetryLookup; entry->object.vm_object = vm_object_allocate(OBJT_DEFAULT, atop(size)); entry->offset = 0; if (entry->cred != NULL) { VM_OBJECT_WLOCK(entry->object.vm_object); entry->object.vm_object->cred = entry->cred; entry->object.vm_object->charge = size; VM_OBJECT_WUNLOCK(entry->object.vm_object); entry->cred = NULL; } vm_map_lock_downgrade(map); } /* * Return the object/offset from this entry. If the entry was * copy-on-write or empty, it has been fixed up. */ *pindex = OFF_TO_IDX((vaddr - entry->start) + entry->offset); *object = entry->object.vm_object; *out_prot = prot; return (KERN_SUCCESS); } /* * vm_map_lookup_locked: * * Lookup the faulting address. A version of vm_map_lookup that returns * KERN_FAILURE instead of blocking on map lock or memory allocation. */ int vm_map_lookup_locked(vm_map_t *var_map, /* IN/OUT */ vm_offset_t vaddr, vm_prot_t fault_typea, vm_map_entry_t *out_entry, /* OUT */ vm_object_t *object, /* OUT */ vm_pindex_t *pindex, /* OUT */ vm_prot_t *out_prot, /* OUT */ boolean_t *wired) /* OUT */ { vm_map_entry_t entry; vm_map_t map = *var_map; vm_prot_t prot; vm_prot_t fault_type = fault_typea; /* * Lookup the faulting address. */ if (!vm_map_lookup_entry(map, vaddr, out_entry)) return (KERN_INVALID_ADDRESS); entry = *out_entry; /* * Fail if the entry refers to a submap. */ if (entry->eflags & MAP_ENTRY_IS_SUB_MAP) return (KERN_FAILURE); /* * Check whether this task is allowed to have this page. */ prot = entry->protection; fault_type &= VM_PROT_READ | VM_PROT_WRITE | VM_PROT_EXECUTE; if ((fault_type & prot) != fault_type) return (KERN_PROTECTION_FAILURE); /* * If this page is not pageable, we have to get it for all possible * accesses. */ *wired = (entry->wired_count != 0); if (*wired) fault_type = entry->protection; if (entry->eflags & MAP_ENTRY_NEEDS_COPY) { /* * Fail if the entry was copy-on-write for a write fault. */ if (fault_type & VM_PROT_WRITE) return (KERN_FAILURE); /* * We're attempting to read a copy-on-write page -- * don't allow writes. */ prot &= ~VM_PROT_WRITE; } /* * Fail if an object should be created. */ if (entry->object.vm_object == NULL && !map->system_map) return (KERN_FAILURE); /* * Return the object/offset from this entry. If the entry was * copy-on-write or empty, it has been fixed up. */ *pindex = OFF_TO_IDX((vaddr - entry->start) + entry->offset); *object = entry->object.vm_object; *out_prot = prot; return (KERN_SUCCESS); } /* * vm_map_lookup_done: * * Releases locks acquired by a vm_map_lookup * (according to the handle returned by that lookup). 
*/ void vm_map_lookup_done(vm_map_t map, vm_map_entry_t entry) { /* * Unlock the main-level map */ vm_map_unlock_read(map); } vm_offset_t vm_map_max_KBI(const struct vm_map *map) { return (vm_map_max(map)); } vm_offset_t vm_map_min_KBI(const struct vm_map *map) { return (vm_map_min(map)); } pmap_t vm_map_pmap_KBI(vm_map_t map) { return (map->pmap); } #include "opt_ddb.h" #ifdef DDB #include <sys/kernel.h> #include <ddb/ddb.h> static void vm_map_print(vm_map_t map) { vm_map_entry_t entry; db_iprintf("Task map %p: pmap=%p, nentries=%d, version=%u\n", (void *)map, (void *)map->pmap, map->nentries, map->timestamp); db_indent += 2; for (entry = map->header.next; entry != &map->header; entry = entry->next) { db_iprintf("map entry %p: start=%p, end=%p, eflags=%#x, \n", (void *)entry, (void *)entry->start, (void *)entry->end, entry->eflags); { static char *inheritance_name[4] = {"share", "copy", "none", "donate_copy"}; db_iprintf(" prot=%x/%x/%s", entry->protection, entry->max_protection, inheritance_name[(int)(unsigned char)entry->inheritance]); if (entry->wired_count != 0) db_printf(", wired"); } if (entry->eflags & MAP_ENTRY_IS_SUB_MAP) { db_printf(", share=%p, offset=0x%jx\n", (void *)entry->object.sub_map, (uintmax_t)entry->offset); if ((entry->prev == &map->header) || (entry->prev->object.sub_map != entry->object.sub_map)) { db_indent += 2; vm_map_print((vm_map_t)entry->object.sub_map); db_indent -= 2; } } else { if (entry->cred != NULL) db_printf(", ruid %d", entry->cred->cr_ruid); db_printf(", object=%p, offset=0x%jx", (void *)entry->object.vm_object, (uintmax_t)entry->offset); if (entry->object.vm_object && entry->object.vm_object->cred) db_printf(", obj ruid %d charge %jx", entry->object.vm_object->cred->cr_ruid, (uintmax_t)entry->object.vm_object->charge); if (entry->eflags & MAP_ENTRY_COW) db_printf(", copy (%s)", (entry->eflags & MAP_ENTRY_NEEDS_COPY) ? "needed" : "done"); db_printf("\n"); if ((entry->prev == &map->header) || (entry->prev->object.vm_object != entry->object.vm_object)) { db_indent += 2; vm_object_print((db_expr_t)(intptr_t) entry->object.vm_object, 0, 0, (char *)0); db_indent -= 2; } } } db_indent -= 2; } DB_SHOW_COMMAND(map, map) { if (!have_addr) { db_printf("usage: show map <addr>\n"); return; } vm_map_print((vm_map_t)addr); } DB_SHOW_COMMAND(procvm, procvm) { struct proc *p; if (have_addr) { p = db_lookup_proc(addr); } else { p = curproc; } db_printf("p = %p, vmspace = %p, map = %p, pmap = %p\n", (void *)p, (void *)p->p_vmspace, (void *)&p->p_vmspace->vm_map, (void *)vmspace_pmap(p->p_vmspace)); vm_map_print((vm_map_t)&p->p_vmspace->vm_map); } #endif /* DDB */ Index: projects/import-googletest-1.8.1/sys/vm/vm_pageout.c =================================================================== --- projects/import-googletest-1.8.1/sys/vm/vm_pageout.c (revision 344270) +++ projects/import-googletest-1.8.1/sys/vm/vm_pageout.c (revision 344271) @@ -1,2131 +1,2108 @@ /*- * SPDX-License-Identifier: (BSD-4-Clause AND MIT-CMU) * * Copyright (c) 1991 Regents of the University of California. * All rights reserved. * Copyright (c) 1994 John S. Dyson * All rights reserved. * Copyright (c) 1994 David Greenman * All rights reserved. * Copyright (c) 2005 Yahoo! Technologies Norway AS * All rights reserved. * * This code is derived from software contributed to Berkeley by * The Mach Operating System project at Carnegie-Mellon University. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1.
Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * 3. All advertising materials mentioning features or use of this software * must display the following acknowledgement: * This product includes software developed by the University of * California, Berkeley and its contributors. * 4. Neither the name of the University nor the names of its contributors * may be used to endorse or promote products derived from this software * without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * from: @(#)vm_pageout.c 7.4 (Berkeley) 5/7/91 * * * Copyright (c) 1987, 1990 Carnegie-Mellon University. * All rights reserved. * * Authors: Avadis Tevanian, Jr., Michael Wayne Young * * Permission to use, copy, modify and distribute this software and * its documentation is hereby granted, provided that both the copyright * notice and this permission notice appear in all copies of the * software, derivative works or modified versions, and any portions * thereof, and that both notices appear in supporting documentation. * * CARNEGIE MELLON ALLOWS FREE USE OF THIS SOFTWARE IN ITS "AS IS" * CONDITION. CARNEGIE MELLON DISCLAIMS ANY LIABILITY OF ANY KIND * FOR ANY DAMAGES WHATSOEVER RESULTING FROM THE USE OF THIS SOFTWARE. * * Carnegie Mellon requests users of this software to return to * * Software Distribution Coordinator or Software.Distribution@CS.CMU.EDU * School of Computer Science * Carnegie Mellon University * Pittsburgh PA 15213-3890 * * any improvements or extensions that they make and grant Carnegie the * rights to redistribute these changes. */ /* * The proverbial page-out daemon. 
*/ #include __FBSDID("$FreeBSD$"); #include "opt_vm.h" #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include /* * System initialization */ /* the kernel process "vm_pageout"*/ static void vm_pageout(void); static void vm_pageout_init(void); static int vm_pageout_clean(vm_page_t m, int *numpagedout); static int vm_pageout_cluster(vm_page_t m); static void vm_pageout_mightbe_oom(struct vm_domain *vmd, int page_shortage, int starting_page_shortage); SYSINIT(pagedaemon_init, SI_SUB_KTHREAD_PAGE, SI_ORDER_FIRST, vm_pageout_init, NULL); struct proc *pageproc; static struct kproc_desc page_kp = { "pagedaemon", vm_pageout, &pageproc }; SYSINIT(pagedaemon, SI_SUB_KTHREAD_PAGE, SI_ORDER_SECOND, kproc_start, &page_kp); SDT_PROVIDER_DEFINE(vm); SDT_PROBE_DEFINE(vm, , , vm__lowmem_scan); /* Pagedaemon activity rates, in subdivisions of one second. */ #define VM_LAUNDER_RATE 10 #define VM_INACT_SCAN_RATE 10 static int vm_pageout_oom_seq = 12; static int vm_pageout_update_period; static int disable_swap_pageouts; static int lowmem_period = 10; static int swapdev_enabled; static int vm_panic_on_oom = 0; SYSCTL_INT(_vm, OID_AUTO, panic_on_oom, CTLFLAG_RWTUN, &vm_panic_on_oom, 0, "panic on out of memory instead of killing the largest process"); SYSCTL_INT(_vm, OID_AUTO, pageout_update_period, CTLFLAG_RWTUN, &vm_pageout_update_period, 0, "Maximum active LRU update period"); SYSCTL_INT(_vm, OID_AUTO, lowmem_period, CTLFLAG_RWTUN, &lowmem_period, 0, "Low memory callback period"); SYSCTL_INT(_vm, OID_AUTO, disable_swapspace_pageouts, CTLFLAG_RWTUN, &disable_swap_pageouts, 0, "Disallow swapout of dirty pages"); static int pageout_lock_miss; SYSCTL_INT(_vm, OID_AUTO, pageout_lock_miss, CTLFLAG_RD, &pageout_lock_miss, 0, "vget() lock misses during pageout"); SYSCTL_INT(_vm, OID_AUTO, pageout_oom_seq, CTLFLAG_RWTUN, &vm_pageout_oom_seq, 0, "back-to-back calls to oom detector to start OOM"); static int act_scan_laundry_weight = 3; SYSCTL_INT(_vm, OID_AUTO, act_scan_laundry_weight, CTLFLAG_RWTUN, &act_scan_laundry_weight, 0, "weight given to clean vs. 
dirty pages in active queue scans"); static u_int vm_background_launder_rate = 4096; SYSCTL_UINT(_vm, OID_AUTO, background_launder_rate, CTLFLAG_RWTUN, &vm_background_launder_rate, 0, "background laundering rate, in kilobytes per second"); static u_int vm_background_launder_max = 20 * 1024; SYSCTL_UINT(_vm, OID_AUTO, background_launder_max, CTLFLAG_RWTUN, &vm_background_launder_max, 0, "background laundering cap, in kilobytes"); int vm_pageout_page_count = 32; int vm_page_max_wired; /* XXX max # of wired pages system-wide */ SYSCTL_INT(_vm, OID_AUTO, max_wired, CTLFLAG_RW, &vm_page_max_wired, 0, "System-wide limit to wired page count"); static u_int isqrt(u_int num); static int vm_pageout_launder(struct vm_domain *vmd, int launder, bool in_shortfall); static void vm_pageout_laundry_worker(void *arg); struct scan_state { struct vm_batchqueue bq; struct vm_pagequeue *pq; vm_page_t marker; int maxscan; int scanned; }; static void vm_pageout_init_scan(struct scan_state *ss, struct vm_pagequeue *pq, vm_page_t marker, vm_page_t after, int maxscan) { vm_pagequeue_assert_locked(pq); KASSERT((marker->aflags & PGA_ENQUEUED) == 0, ("marker %p already enqueued", marker)); if (after == NULL) TAILQ_INSERT_HEAD(&pq->pq_pl, marker, plinks.q); else TAILQ_INSERT_AFTER(&pq->pq_pl, after, marker, plinks.q); vm_page_aflag_set(marker, PGA_ENQUEUED); vm_batchqueue_init(&ss->bq); ss->pq = pq; ss->marker = marker; ss->maxscan = maxscan; ss->scanned = 0; vm_pagequeue_unlock(pq); } static void vm_pageout_end_scan(struct scan_state *ss) { struct vm_pagequeue *pq; pq = ss->pq; vm_pagequeue_assert_locked(pq); KASSERT((ss->marker->aflags & PGA_ENQUEUED) != 0, ("marker %p not enqueued", ss->marker)); TAILQ_REMOVE(&pq->pq_pl, ss->marker, plinks.q); vm_page_aflag_clear(ss->marker, PGA_ENQUEUED); pq->pq_pdpages += ss->scanned; } /* * Add a small number of queued pages to a batch queue for later processing * without the corresponding queue lock held. The caller must have enqueued a * marker page at the desired start point for the scan. Pages will be * physically dequeued if the caller so requests. Otherwise, the returned * batch may contain marker pages, and it is up to the caller to handle them. * * When processing the batch queue, vm_page_queue() must be used to * determine whether the page has been logically dequeued by another thread. * Once this check is performed, the page lock guarantees that the page will * not be disassociated from the queue. 
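 *
 * A minimal consumer sketch (illustrative; it mirrors the scan loops
 * later in this file):
 *
 *	while ((m = vm_pageout_next(&ss, dequeue)) != NULL) {
 *		vm_page_change_lock(m, &mtx);
 *		if (vm_page_queue(m) != queue)
 *			continue;	(page was logically dequeued)
 *		... examine the page ...
 *	}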
*/ static __always_inline void vm_pageout_collect_batch(struct scan_state *ss, const bool dequeue) { struct vm_pagequeue *pq; vm_page_t m, marker; marker = ss->marker; pq = ss->pq; KASSERT((marker->aflags & PGA_ENQUEUED) != 0, ("marker %p not enqueued", ss->marker)); vm_pagequeue_lock(pq); for (m = TAILQ_NEXT(marker, plinks.q); m != NULL && ss->scanned < ss->maxscan && ss->bq.bq_cnt < VM_BATCHQUEUE_SIZE; m = TAILQ_NEXT(m, plinks.q), ss->scanned++) { if ((m->flags & PG_MARKER) == 0) { KASSERT((m->aflags & PGA_ENQUEUED) != 0, ("page %p not enqueued", m)); KASSERT((m->flags & PG_FICTITIOUS) == 0, ("Fictitious page %p cannot be in page queue", m)); KASSERT((m->oflags & VPO_UNMANAGED) == 0, ("Unmanaged page %p cannot be in page queue", m)); } else if (dequeue) continue; (void)vm_batchqueue_insert(&ss->bq, m); if (dequeue) { TAILQ_REMOVE(&pq->pq_pl, m, plinks.q); vm_page_aflag_clear(m, PGA_ENQUEUED); } } TAILQ_REMOVE(&pq->pq_pl, marker, plinks.q); if (__predict_true(m != NULL)) TAILQ_INSERT_BEFORE(m, marker, plinks.q); else TAILQ_INSERT_TAIL(&pq->pq_pl, marker, plinks.q); if (dequeue) vm_pagequeue_cnt_add(pq, -ss->bq.bq_cnt); vm_pagequeue_unlock(pq); } /* Return the next page to be scanned, or NULL if the scan is complete. */ static __always_inline vm_page_t vm_pageout_next(struct scan_state *ss, const bool dequeue) { if (ss->bq.bq_cnt == 0) vm_pageout_collect_batch(ss, dequeue); return (vm_batchqueue_pop(&ss->bq)); } /* * Scan for pages at adjacent offsets within the given page's object that are * eligible for laundering, form a cluster of these pages and the given page, * and launder that cluster. */ static int vm_pageout_cluster(vm_page_t m) { vm_object_t object; vm_page_t mc[2 * vm_pageout_page_count], p, pb, ps; vm_pindex_t pindex; int ib, is, page_base, pageout_count; vm_page_assert_locked(m); object = m->object; VM_OBJECT_ASSERT_WLOCKED(object); pindex = m->pindex; vm_page_assert_unbusied(m); KASSERT(!vm_page_held(m), ("page %p is held", m)); pmap_remove_write(m); vm_page_unlock(m); mc[vm_pageout_page_count] = pb = ps = m; pageout_count = 1; page_base = vm_pageout_page_count; ib = 1; is = 1; /* * We can cluster only if the page is not clean, busy, or held, and * the page is in the laundry queue. * * During heavy mmap/modification loads the pageout * daemon can really fragment the underlying file * due to flushing pages out of order and not trying to * align the clusters (which leaves sporadic out-of-order * holes). To solve this problem we do the reverse scan * first and attempt to align our cluster, then do a * forward scan if room remains. */ more: while (ib != 0 && pageout_count < vm_pageout_page_count) { if (ib > pindex) { ib = 0; break; } if ((p = vm_page_prev(pb)) == NULL || vm_page_busied(p)) { ib = 0; break; } vm_page_test_dirty(p); if (p->dirty == 0) { ib = 0; break; } vm_page_lock(p); if (vm_page_held(p) || !vm_page_in_laundry(p)) { vm_page_unlock(p); ib = 0; break; } pmap_remove_write(p); vm_page_unlock(p); mc[--page_base] = pb = p; ++pageout_count; ++ib; /* * We are at an alignment boundary. Stop here, and switch * directions. Do not clear ib. 
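 * (For example, with the default vm_pageout_page_count of 32 and a
 * dirty page at pindex 37, the reverse scan collects pages 36 down to
 * 32 and stops at the 32-page boundary; the forward scan then
 * continues from pindex 38 to fill out the cluster.)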
*/ if ((pindex - (ib - 1)) % vm_pageout_page_count == 0) break; } while (pageout_count < vm_pageout_page_count && pindex + is < object->size) { if ((p = vm_page_next(ps)) == NULL || vm_page_busied(p)) break; vm_page_test_dirty(p); if (p->dirty == 0) break; vm_page_lock(p); if (vm_page_held(p) || !vm_page_in_laundry(p)) { vm_page_unlock(p); break; } pmap_remove_write(p); vm_page_unlock(p); mc[page_base + pageout_count] = ps = p; ++pageout_count; ++is; } /* * If we exhausted our forward scan, continue with the reverse scan * when possible, even past an alignment boundary. This catches * boundary conditions. */ if (ib != 0 && pageout_count < vm_pageout_page_count) goto more; return (vm_pageout_flush(&mc[page_base], pageout_count, VM_PAGER_PUT_NOREUSE, 0, NULL, NULL)); } /* * vm_pageout_flush() - launder the given pages * * The given pages are laundered. Note that we set up for the start of * I/O (i.e., busy the page), mark it read-only, and bump the object * reference count all in here rather than in the parent. If we want * the parent to do more sophisticated things we may have to change * the ordering. * * Returned runlen is the count of pages between mreq and the first * page after mreq with status VM_PAGER_AGAIN. * *eio is set to TRUE if the pager returned VM_PAGER_ERROR or * VM_PAGER_FAIL for any page in that run. */ int vm_pageout_flush(vm_page_t *mc, int count, int flags, int mreq, int *prunlen, boolean_t *eio) { vm_object_t object = mc[0]->object; int pageout_status[count]; int numpagedout = 0; int i, runlen; VM_OBJECT_ASSERT_WLOCKED(object); /* * Initiate I/O. Mark the pages busy and verify that they're valid * and read-only. * * We do not have to fix up the clean/dirty bits here... we can * allow the pager to do it after the I/O completes. * * NOTE! mc[i]->dirty may be partial or fragmented due to an * edge case with file fragments. */ for (i = 0; i < count; i++) { KASSERT(mc[i]->valid == VM_PAGE_BITS_ALL, ("vm_pageout_flush: partially invalid page %p index %d/%d", mc[i], i, count)); KASSERT((mc[i]->aflags & PGA_WRITEABLE) == 0, ("vm_pageout_flush: writeable page %p", mc[i])); vm_page_sbusy(mc[i]); } vm_object_pip_add(object, count); vm_pager_put_pages(object, mc, count, flags, pageout_status); runlen = count - mreq; if (eio != NULL) *eio = FALSE; for (i = 0; i < count; i++) { vm_page_t mt = mc[i]; KASSERT(pageout_status[i] == VM_PAGER_PEND || !pmap_page_is_write_mapped(mt), ("vm_pageout_flush: page %p is not write protected", mt)); switch (pageout_status[i]) { case VM_PAGER_OK: vm_page_lock(mt); if (vm_page_in_laundry(mt)) vm_page_deactivate_noreuse(mt); vm_page_unlock(mt); /* FALLTHROUGH */ case VM_PAGER_PEND: numpagedout++; break; case VM_PAGER_BAD: /* * The page is outside the object's range. We pretend * that the page out worked and clean the page, so the * changes will be lost if the page is reclaimed by * the page daemon. */ vm_page_undirty(mt); vm_page_lock(mt); if (vm_page_in_laundry(mt)) vm_page_deactivate_noreuse(mt); vm_page_unlock(mt); break; case VM_PAGER_ERROR: case VM_PAGER_FAIL: /* * If the page couldn't be paged out to swap because the * pager wasn't able to find space, place the page in * the PQ_UNSWAPPABLE holding queue. This is an * optimization that prevents the page daemon from * wasting CPU cycles on pages that cannot be reclaimed * because no swap device is configured. * * Otherwise, reactivate the page so that it doesn't * clog the laundry and inactive queues. (We will try * paging it out again later.)
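 *
 * (PQ_UNSWAPPABLE is not a dead end: the swapon event handler sets
 * swapdev_enabled, and the next pass of vm_pageout_launder() starts
 * its scan from PQ_UNSWAPPABLE, giving these pages another chance
 * once a swap device has been configured.)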
*/ vm_page_lock(mt); if (object->type == OBJT_SWAP && pageout_status[i] == VM_PAGER_FAIL) { vm_page_unswappable(mt); numpagedout++; } else vm_page_activate(mt); vm_page_unlock(mt); if (eio != NULL && i >= mreq && i - mreq < runlen) *eio = TRUE; break; case VM_PAGER_AGAIN: if (i >= mreq && i - mreq < runlen) runlen = i - mreq; break; } /* * If the operation is still going, leave the page busy to * block all other accesses. Also, leave the paging in * progress indicator set so that we don't attempt an object * collapse. */ if (pageout_status[i] != VM_PAGER_PEND) { vm_object_pip_wakeup(object); vm_page_sunbusy(mt); } } if (prunlen != NULL) *prunlen = runlen; return (numpagedout); } static void vm_pageout_swapon(void *arg __unused, struct swdevt *sp __unused) { atomic_store_rel_int(&swapdev_enabled, 1); } static void vm_pageout_swapoff(void *arg __unused, struct swdevt *sp __unused) { if (swap_pager_nswapdev() == 1) atomic_store_rel_int(&swapdev_enabled, 0); } /* * Attempt to acquire all of the necessary locks to launder a page and * then call through the clustering layer to PUTPAGES. Wait a short * time for a vnode lock. * * Requires the page and object lock on entry, releases both before return. * Returns 0 on success and an errno otherwise. */ static int vm_pageout_clean(vm_page_t m, int *numpagedout) { struct vnode *vp; struct mount *mp; vm_object_t object; vm_pindex_t pindex; int error, lockmode; vm_page_assert_locked(m); object = m->object; VM_OBJECT_ASSERT_WLOCKED(object); error = 0; vp = NULL; mp = NULL; /* * The object is already known NOT to be dead. It * is possible for the vget() to block the whole * pageout daemon, but the new low-memory handling * code should prevent it. * * We can't wait forever for the vnode lock, we might * deadlock due to a vn_read() getting stuck in * vm_wait while holding this vnode. We skip the * vnode if we can't get it in a reasonable amount * of time. */ if (object->type == OBJT_VNODE) { vm_page_unlock(m); vp = object->handle; if (vp->v_type == VREG && vn_start_write(vp, &mp, V_NOWAIT) != 0) { mp = NULL; error = EDEADLK; goto unlock_all; } KASSERT(mp != NULL, ("vp %p with NULL v_mount", vp)); vm_object_reference_locked(object); pindex = m->pindex; VM_OBJECT_WUNLOCK(object); lockmode = MNT_SHARED_WRITES(vp->v_mount) ? LK_SHARED : LK_EXCLUSIVE; if (vget(vp, lockmode | LK_TIMELOCK, curthread)) { vp = NULL; error = EDEADLK; goto unlock_mp; } VM_OBJECT_WLOCK(object); /* * Ensure that the object and vnode were not disassociated * while locks were dropped. */ if (vp->v_object != object) { error = ENOENT; goto unlock_all; } vm_page_lock(m); /* * While the object and page were unlocked, the page * may have been: * (1) moved to a different queue, * (2) reallocated to a different object, * (3) reallocated to a different offset, or * (4) cleaned. */ if (!vm_page_in_laundry(m) || m->object != object || m->pindex != pindex || m->dirty == 0) { vm_page_unlock(m); error = ENXIO; goto unlock_all; } /* * The page may have been busied or referenced while the object * and page locks were released. */ if (vm_page_busied(m) || vm_page_held(m)) { vm_page_unlock(m); error = EBUSY; goto unlock_all; } } /* * If a page is dirty, then it is either being washed * (but not yet cleaned) or it is still in the * laundry. If it is still in the laundry, then we * start the cleaning operation. 
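 *
 * (The distinct errno values matter to the caller:
 * vm_pageout_launder() counts an EDEADLK return as a vnode lock miss
 * and a skipped vnode, while the other errors simply leave the page
 * to be revisited by a later scan.)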
*/ if ((*numpagedout = vm_pageout_cluster(m)) == 0) error = EIO; unlock_all: VM_OBJECT_WUNLOCK(object); unlock_mp: vm_page_lock_assert(m, MA_NOTOWNED); if (mp != NULL) { if (vp != NULL) vput(vp); vm_object_deallocate(object); vn_finished_write(mp); } return (error); } /* * Attempt to launder the specified number of pages. * * Returns the number of pages successfully laundered. */ static int vm_pageout_launder(struct vm_domain *vmd, int launder, bool in_shortfall) { struct scan_state ss; struct vm_pagequeue *pq; struct mtx *mtx; vm_object_t object; vm_page_t m, marker; int act_delta, error, numpagedout, queue, starting_target; int vnodes_skipped; - bool obj_locked, pageout_ok; + bool pageout_ok; mtx = NULL; - obj_locked = false; object = NULL; starting_target = launder; vnodes_skipped = 0; /* * Scan the laundry queues for pages eligible to be laundered. We stop * once the target number of dirty pages have been laundered, or once * we've reached the end of the queue. A single iteration of this loop * may cause more than one page to be laundered because of clustering. * * As an optimization, we avoid laundering from PQ_UNSWAPPABLE when no * swap devices are configured. */ if (atomic_load_acq_int(&swapdev_enabled)) queue = PQ_UNSWAPPABLE; else queue = PQ_LAUNDRY; scan: marker = &vmd->vmd_markers[queue]; pq = &vmd->vmd_pagequeues[queue]; vm_pagequeue_lock(pq); vm_pageout_init_scan(&ss, pq, marker, NULL, pq->pq_cnt); while (launder > 0 && (m = vm_pageout_next(&ss, false)) != NULL) { if (__predict_false((m->flags & PG_MARKER) != 0)) continue; vm_page_change_lock(m, &mtx); recheck: /* * The page may have been disassociated from the queue * while locks were dropped. */ if (vm_page_queue(m) != queue) continue; /* * A requeue was requested, so this page gets a second * chance. */ if ((m->aflags & PGA_REQUEUE) != 0) { vm_page_requeue(m); continue; } /* * Held pages are essentially stuck in the queue. * * Wired pages may not be freed. Complete their removal * from the queue now to avoid needless revisits during * future scans. */ if (m->hold_count != 0) continue; if (m->wire_count != 0) { vm_page_dequeue_deferred(m); continue; } if (object != m->object) { - if (obj_locked) { + if (object != NULL) VM_OBJECT_WUNLOCK(object); - obj_locked = false; - } object = m->object; - } - if (!obj_locked) { if (!VM_OBJECT_TRYWLOCK(object)) { mtx_unlock(mtx); /* Depends on type-stability. */ VM_OBJECT_WLOCK(object); - obj_locked = true; mtx_lock(mtx); goto recheck; - } else - obj_locked = true; + } } if (vm_page_busied(m)) continue; /* * Invalid pages can be easily freed. They cannot be * mapped; vm_page_free() asserts this. */ if (m->valid == 0) goto free_page; /* * If the page has been referenced and the object is not dead, * reactivate or requeue the page depending on whether the * object is mapped. * * Test PGA_REFERENCED after calling pmap_ts_referenced() so * that a reference from a concurrently destroyed mapping is * observed here and now. */ if (object->ref_count != 0) act_delta = pmap_ts_referenced(m); else { KASSERT(!pmap_page_is_mapped(m), ("page %p is mapped", m)); act_delta = 0; } if ((m->aflags & PGA_REFERENCED) != 0) { vm_page_aflag_clear(m, PGA_REFERENCED); act_delta++; } if (act_delta != 0) { if (object->ref_count != 0) { VM_CNT_INC(v_reactivated); vm_page_activate(m); /* * Increase the activation count if the page * was referenced while in the laundry queue. * This makes it less likely that the page will * be returned prematurely to the inactive * queue. 
*/ m->act_count += act_delta + ACT_ADVANCE; /* * If this was a background laundering, count * activated pages towards our target. The * purpose of background laundering is to ensure * that pages are eventually cycled through the * laundry queue, and an activation is a valid * way out. */ if (!in_shortfall) launder--; continue; } else if ((object->flags & OBJ_DEAD) == 0) { vm_page_requeue(m); continue; } } /* * If the page appears to be clean at the machine-independent * layer, then remove all of its mappings from the pmap in * anticipation of freeing it. If, however, any of the page's * mappings allow write access, then the page may still be * modified until the last of those mappings are removed. */ if (object->ref_count != 0) { vm_page_test_dirty(m); if (m->dirty == 0) pmap_remove_all(m); } /* * Clean pages are freed, and dirty pages are paged out unless * they belong to a dead object. Requeueing dirty pages from * dead objects is pointless, as they are being paged out and * freed by the thread that destroyed the object. */ if (m->dirty == 0) { free_page: vm_page_free(m); VM_CNT_INC(v_dfree); } else if ((object->flags & OBJ_DEAD) == 0) { if (object->type != OBJT_SWAP && object->type != OBJT_DEFAULT) pageout_ok = true; else if (disable_swap_pageouts) pageout_ok = false; else pageout_ok = true; if (!pageout_ok) { vm_page_requeue(m); continue; } /* * Form a cluster with adjacent, dirty pages from the * same object, and page out that entire cluster. * * The adjacent, dirty pages must also be in the * laundry. However, their mappings are not checked * for new references. Consequently, a recently * referenced page may be paged out. However, that * page will not be prematurely reclaimed. After page * out, the page will be placed in the inactive queue, * where any new references will be detected and the * page reactivated. */ error = vm_pageout_clean(m, &numpagedout); if (error == 0) { launder -= numpagedout; ss.scanned += numpagedout; } else if (error == EDEADLK) { pageout_lock_miss++; vnodes_skipped++; } mtx = NULL; - obj_locked = false; + object = NULL; } } - if (mtx != NULL) { + if (mtx != NULL) mtx_unlock(mtx); - mtx = NULL; - } - if (obj_locked) { + if (object != NULL) VM_OBJECT_WUNLOCK(object); - obj_locked = false; - } vm_pagequeue_lock(pq); vm_pageout_end_scan(&ss); vm_pagequeue_unlock(pq); if (launder > 0 && queue == PQ_UNSWAPPABLE) { queue = PQ_LAUNDRY; goto scan; } /* * Wakeup the sync daemon if we skipped a vnode in a writeable object * and we didn't launder enough pages. */ if (vnodes_skipped > 0 && launder > 0) (void)speedup_syncer(); return (starting_target - launder); } /* * Compute the integer square root. */ static u_int isqrt(u_int num) { u_int bit, root, tmp; bit = 1u << ((NBBY * sizeof(u_int)) - 2); while (bit > num) bit >>= 2; root = 0; while (bit != 0) { tmp = root + bit; root >>= 1; if (num >= tmp) { num -= tmp; root += bit; } bit >>= 2; } return (root); } /* * Perform the work of the laundry thread: periodically wake up and determine * whether any pages need to be laundered. If so, determine the number of pages * that need to be laundered, and launder them. 
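 *
 * As a rough illustration of the pacing, assuming hz == 1000: with
 * VM_LAUNDER_RATE == 10 the worker pauses hz / VM_LAUNDER_RATE = 100
 * ticks (about 100 ms) between laundering passes, so a shortfall
 * target is issued as a series of sub-second bursts of I/O rather
 * than all at once.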
*/ static void vm_pageout_laundry_worker(void *arg) { struct vm_domain *vmd; struct vm_pagequeue *pq; uint64_t nclean, ndirty, nfreed; int domain, last_target, launder, shortfall, shortfall_cycle, target; bool in_shortfall; domain = (uintptr_t)arg; vmd = VM_DOMAIN(domain); pq = &vmd->vmd_pagequeues[PQ_LAUNDRY]; KASSERT(vmd->vmd_segs != 0, ("domain without segments")); shortfall = 0; in_shortfall = false; shortfall_cycle = 0; last_target = target = 0; nfreed = 0; /* * Calls to these handlers are serialized by the swap syscall lock. */ (void)EVENTHANDLER_REGISTER(swapon, vm_pageout_swapon, vmd, EVENTHANDLER_PRI_ANY); (void)EVENTHANDLER_REGISTER(swapoff, vm_pageout_swapoff, vmd, EVENTHANDLER_PRI_ANY); /* * The pageout laundry worker is never done, so loop forever. */ for (;;) { KASSERT(target >= 0, ("negative target %d", target)); KASSERT(shortfall_cycle >= 0, ("negative cycle %d", shortfall_cycle)); launder = 0; /* * First determine whether we need to launder pages to meet a * shortage of free pages. */ if (shortfall > 0) { in_shortfall = true; shortfall_cycle = VM_LAUNDER_RATE / VM_INACT_SCAN_RATE; target = shortfall; } else if (!in_shortfall) goto trybackground; else if (shortfall_cycle == 0 || vm_laundry_target(vmd) <= 0) { /* * We recently entered shortfall and began laundering * pages. If we have completed that laundering run * (and we are no longer in shortfall) or we have met * our laundry target through other activity, then we * can stop laundering pages. */ in_shortfall = false; target = 0; goto trybackground; } launder = target / shortfall_cycle--; goto dolaundry; /* * There's no immediate need to launder any pages; see if we * meet the conditions to perform background laundering: * * 1. The ratio of dirty to clean inactive pages exceeds the * background laundering threshold, or * 2. we haven't yet reached the target of the current * background laundering run. * * The background laundering threshold is not a constant. * Instead, it is a slowly growing function of the number of * clean pages freed by the page daemon since the last * background laundering. Thus, as the ratio of dirty to * clean inactive pages grows, the amount of memory pressure * required to trigger laundering decreases. We ensure * that the threshold is non-zero after an inactive queue * scan, even if that scan failed to free a single clean page. */ trybackground: nclean = vmd->vmd_free_count + vmd->vmd_pagequeues[PQ_INACTIVE].pq_cnt; ndirty = vmd->vmd_pagequeues[PQ_LAUNDRY].pq_cnt; if (target == 0 && ndirty * isqrt(howmany(nfreed + 1, vmd->vmd_free_target - vmd->vmd_free_min)) >= nclean) { target = vmd->vmd_background_launder_target; } /* * We have a non-zero background laundering target. If we've * laundered up to our maximum without observing a page daemon * request, just stop. This is a safety belt that ensures we * don't launder an excessive amount if memory pressure is low * and the ratio of dirty to clean pages is large. Otherwise, * proceed at the background laundering rate. */ if (target > 0) { if (nfreed > 0) { nfreed = 0; last_target = target; } else if (last_target - target >= vm_background_launder_max * PAGE_SIZE / 1024) { target = 0; } launder = vm_background_launder_rate * PAGE_SIZE / 1024; launder /= VM_LAUNDER_RATE; if (launder > target) launder = target; } dolaundry: if (launder > 0) { /* * Because of I/O clustering, the number of laundered * pages could exceed "target" by the maximum size of * a cluster minus one. 
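 * (With the default vm_pageout_page_count of 32, a request to launder
 * a single page may thus complete up to 31 additional pages, which is
 * why the subtraction below is clamped with min().)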
*/ target -= min(vm_pageout_launder(vmd, launder, in_shortfall), target); pause("laundp", hz / VM_LAUNDER_RATE); } /* * If we're not currently laundering pages and the page daemon * hasn't posted a new request, sleep until the page daemon * kicks us. */ vm_pagequeue_lock(pq); if (target == 0 && vmd->vmd_laundry_request == VM_LAUNDRY_IDLE) (void)mtx_sleep(&vmd->vmd_laundry_request, vm_pagequeue_lockptr(pq), PVM, "launds", 0); /* * If the pagedaemon has indicated that it's in shortfall, start * a shortfall laundering unless we're already in the middle of * one. This may preempt a background laundering. */ if (vmd->vmd_laundry_request == VM_LAUNDRY_SHORTFALL && (!in_shortfall || shortfall_cycle == 0)) { shortfall = vm_laundry_target(vmd) + vmd->vmd_pageout_deficit; target = 0; } else shortfall = 0; if (target == 0) vmd->vmd_laundry_request = VM_LAUNDRY_IDLE; nfreed += vmd->vmd_clean_pages_freed; vmd->vmd_clean_pages_freed = 0; vm_pagequeue_unlock(pq); } } /* * Compute the number of pages we want to try to move from the * active queue to either the inactive or laundry queue. * * When scanning active pages during a shortage, we make clean pages * count more heavily towards the page shortage than dirty pages. * This is because dirty pages must be laundered before they can be * reused and thus have less utility when attempting to quickly * alleviate a free page shortage. However, this weighting also * causes the scan to deactivate dirty pages more aggressively, * improving the effectiveness of clustering. */ static int vm_pageout_active_target(struct vm_domain *vmd) { int shortage; shortage = vmd->vmd_inactive_target + vm_paging_target(vmd) - (vmd->vmd_pagequeues[PQ_INACTIVE].pq_cnt + vmd->vmd_pagequeues[PQ_LAUNDRY].pq_cnt / act_scan_laundry_weight); shortage *= act_scan_laundry_weight; return (shortage); } /* * Scan the active queue. If there is no shortage of inactive pages, scan a * small portion of the queue in order to maintain quasi-LRU. */ static void vm_pageout_scan_active(struct vm_domain *vmd, int page_shortage) { struct scan_state ss; struct mtx *mtx; vm_page_t m, marker; struct vm_pagequeue *pq; long min_scan; int act_delta, max_scan, scan_tick; marker = &vmd->vmd_markers[PQ_ACTIVE]; pq = &vmd->vmd_pagequeues[PQ_ACTIVE]; vm_pagequeue_lock(pq); /* * If we're just idle polling attempt to visit every * active page within 'update_period' seconds. */ scan_tick = ticks; if (vm_pageout_update_period != 0) { min_scan = pq->pq_cnt; min_scan *= scan_tick - vmd->vmd_last_active_scan; min_scan /= hz * vm_pageout_update_period; } else min_scan = 0; if (min_scan > 0 || (page_shortage > 0 && pq->pq_cnt > 0)) vmd->vmd_last_active_scan = scan_tick; /* * Scan the active queue for pages that can be deactivated. Update * the per-page activity counter and use it to identify deactivation * candidates. Held pages may be deactivated. * * To avoid requeuing each page that remains in the active queue, we * implement the CLOCK algorithm. To keep the implementation of the * enqueue operation consistent for all page queues, we use two hands, * represented by marker pages. Scans begin at the first hand, which * precedes the second hand in the queue. When the two hands meet, * they are moved back to the head and tail of the queue, respectively, * and scanning resumes. */ max_scan = page_shortage > 0 ? 
pq->pq_cnt : min_scan; mtx = NULL; act_scan: vm_pageout_init_scan(&ss, pq, marker, &vmd->vmd_clock[0], max_scan); while ((m = vm_pageout_next(&ss, false)) != NULL) { if (__predict_false(m == &vmd->vmd_clock[1])) { vm_pagequeue_lock(pq); TAILQ_REMOVE(&pq->pq_pl, &vmd->vmd_clock[0], plinks.q); TAILQ_REMOVE(&pq->pq_pl, &vmd->vmd_clock[1], plinks.q); TAILQ_INSERT_HEAD(&pq->pq_pl, &vmd->vmd_clock[0], plinks.q); TAILQ_INSERT_TAIL(&pq->pq_pl, &vmd->vmd_clock[1], plinks.q); max_scan -= ss.scanned; vm_pageout_end_scan(&ss); goto act_scan; } if (__predict_false((m->flags & PG_MARKER) != 0)) continue; vm_page_change_lock(m, &mtx); /* * The page may have been disassociated from the queue * while locks were dropped. */ if (vm_page_queue(m) != PQ_ACTIVE) continue; /* * Wired pages are dequeued lazily. */ if (m->wire_count != 0) { vm_page_dequeue_deferred(m); continue; } /* * Check to see "how much" the page has been used. * * Test PGA_REFERENCED after calling pmap_ts_referenced() so * that a reference from a concurrently destroyed mapping is * observed here and now. * * Perform an unsynchronized object ref count check. While * the page lock ensures that the page is not reallocated to * another object, in particular, one with unmanaged mappings * that cannot support pmap_ts_referenced(), two races are, * nonetheless, possible: * 1) The count was transitioning to zero, but we saw a non- * zero value. pmap_ts_referenced() will return zero * because the page is not mapped. * 2) The count was transitioning to one, but we saw zero. * This race delays the detection of a new reference. At * worst, we will deactivate and reactivate the page. */ if (m->object->ref_count != 0) act_delta = pmap_ts_referenced(m); else act_delta = 0; if ((m->aflags & PGA_REFERENCED) != 0) { vm_page_aflag_clear(m, PGA_REFERENCED); act_delta++; } /* * Advance or decay the act_count based on recent usage. */ if (act_delta != 0) { m->act_count += ACT_ADVANCE + act_delta; if (m->act_count > ACT_MAX) m->act_count = ACT_MAX; } else m->act_count -= min(m->act_count, ACT_DECLINE); if (m->act_count == 0) { /* * When not short for inactive pages, let dirty pages go * through the inactive queue before moving to the * laundry queues. This gives them some extra time to * be reactivated, potentially avoiding an expensive * pageout. However, during a page shortage, the * inactive queue is necessarily small, and so dirty * pages would only spend a trivial amount of time in * the inactive queue. Therefore, we might as well * place them directly in the laundry queue to reduce * queuing overhead. */ if (page_shortage <= 0) vm_page_deactivate(m); else { /* * Calling vm_page_test_dirty() here would * require acquisition of the object's write * lock. However, during a page shortage, * directing dirty pages into the laundry * queue is only an optimization and not a * requirement. Therefore, we simply rely on * the opportunistic updates to the page's * dirty field by the pmap. 
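 *
 * (Scoring example, with the default act_scan_laundry_weight of 3:
 * deactivating a clean page below reduces page_shortage by 3, while
 * sending a dirty page to the laundry reduces it by only 1, since a
 * dirty page must still be laundered before it can be freed.)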
*/ if (m->dirty == 0) { vm_page_deactivate(m); page_shortage -= act_scan_laundry_weight; } else { vm_page_launder(m); page_shortage--; } } } } if (mtx != NULL) { mtx_unlock(mtx); mtx = NULL; } vm_pagequeue_lock(pq); TAILQ_REMOVE(&pq->pq_pl, &vmd->vmd_clock[0], plinks.q); TAILQ_INSERT_AFTER(&pq->pq_pl, marker, &vmd->vmd_clock[0], plinks.q); vm_pageout_end_scan(&ss); vm_pagequeue_unlock(pq); } static int vm_pageout_reinsert_inactive_page(struct scan_state *ss, vm_page_t m) { struct vm_domain *vmd; if (m->queue != PQ_INACTIVE || (m->aflags & PGA_ENQUEUED) != 0) return (0); vm_page_aflag_set(m, PGA_ENQUEUED); if ((m->aflags & PGA_REQUEUE_HEAD) != 0) { vmd = vm_pagequeue_domain(m); TAILQ_INSERT_BEFORE(&vmd->vmd_inacthead, m, plinks.q); vm_page_aflag_clear(m, PGA_REQUEUE | PGA_REQUEUE_HEAD); } else if ((m->aflags & PGA_REQUEUE) != 0) { TAILQ_INSERT_TAIL(&ss->pq->pq_pl, m, plinks.q); vm_page_aflag_clear(m, PGA_REQUEUE | PGA_REQUEUE_HEAD); } else TAILQ_INSERT_BEFORE(ss->marker, m, plinks.q); return (1); } /* * Re-add stuck pages to the inactive queue. We will examine them again * during the next scan. If the queue state of a page has changed since * it was physically removed from the page queue in * vm_pageout_collect_batch(), don't do anything with that page. */ static void vm_pageout_reinsert_inactive(struct scan_state *ss, struct vm_batchqueue *bq, vm_page_t m) { struct vm_pagequeue *pq; int delta; delta = 0; pq = ss->pq; if (m != NULL) { if (vm_batchqueue_insert(bq, m)) return; vm_pagequeue_lock(pq); delta += vm_pageout_reinsert_inactive_page(ss, m); } else vm_pagequeue_lock(pq); while ((m = vm_batchqueue_pop(bq)) != NULL) delta += vm_pageout_reinsert_inactive_page(ss, m); vm_pagequeue_cnt_add(pq, delta); vm_pagequeue_unlock(pq); vm_batchqueue_init(bq); } /* * Attempt to reclaim the requested number of pages from the inactive queue. * Returns true if the shortage was addressed. */ static int vm_pageout_scan_inactive(struct vm_domain *vmd, int shortage, int *addl_shortage) { struct scan_state ss; struct vm_batchqueue rq; struct mtx *mtx; vm_page_t m, marker; struct vm_pagequeue *pq; vm_object_t object; int act_delta, addl_page_shortage, deficit, page_shortage; int starting_page_shortage; - bool obj_locked; /* * The addl_page_shortage is an estimate of the number of temporarily * stuck pages in the inactive queue. In other words, the * number of pages from the inactive count that should be * discounted in setting the target for the active queue scan. */ addl_page_shortage = 0; /* * vmd_pageout_deficit counts the number of pages requested in * allocations that failed because of a free page shortage. We assume * that the allocations will be reattempted and thus include the deficit * in our scan target. */ deficit = atomic_readandclear_int(&vmd->vmd_pageout_deficit); starting_page_shortage = page_shortage = shortage + deficit; mtx = NULL; - obj_locked = false; object = NULL; vm_batchqueue_init(&rq); /* * Start scanning the inactive queue for pages that we can free. The * scan will stop when we reach the target or we have scanned the * entire queue. (Note that m->act_count is not used to make * decisions for the inactive queue, only for the active queue.) 
*/ marker = &vmd->vmd_markers[PQ_INACTIVE]; pq = &vmd->vmd_pagequeues[PQ_INACTIVE]; vm_pagequeue_lock(pq); vm_pageout_init_scan(&ss, pq, marker, NULL, pq->pq_cnt); while (page_shortage > 0 && (m = vm_pageout_next(&ss, true)) != NULL) { KASSERT((m->flags & PG_MARKER) == 0, ("marker page %p was dequeued", m)); vm_page_change_lock(m, &mtx); recheck: /* * The page may have been disassociated from the queue * while locks were dropped. */ if (vm_page_queue(m) != PQ_INACTIVE) { addl_page_shortage++; continue; } /* * The page was re-enqueued after the page queue lock was * dropped, or a requeue was requested. This page gets a second * chance. */ if ((m->aflags & (PGA_ENQUEUED | PGA_REQUEUE | PGA_REQUEUE_HEAD)) != 0) goto reinsert; /* * Held pages are essentially stuck in the queue. So, * they ought to be discounted from the inactive count. * See the description of addl_page_shortage above. * * Wired pages may not be freed. Complete their removal * from the queue now to avoid needless revisits during * future scans. */ if (m->hold_count != 0) { addl_page_shortage++; goto reinsert; } if (m->wire_count != 0) { vm_page_dequeue_deferred(m); continue; } if (object != m->object) { - if (obj_locked) { + if (object != NULL) VM_OBJECT_WUNLOCK(object); - obj_locked = false; - } object = m->object; - } - if (!obj_locked) { if (!VM_OBJECT_TRYWLOCK(object)) { mtx_unlock(mtx); /* Depends on type-stability. */ VM_OBJECT_WLOCK(object); - obj_locked = true; mtx_lock(mtx); goto recheck; - } else - obj_locked = true; + } } if (vm_page_busied(m)) { /* * Don't mess with busy pages. Leave them at * the front of the queue. Most likely, they * are being paged out and will leave the * queue shortly after the scan finishes. So, * they ought to be discounted from the * inactive count. */ addl_page_shortage++; goto reinsert; } /* * Invalid pages can be easily freed. They cannot be * mapped, vm_page_free() asserts this. */ if (m->valid == 0) goto free_page; /* * If the page has been referenced and the object is not dead, * reactivate or requeue the page depending on whether the * object is mapped. * * Test PGA_REFERENCED after calling pmap_ts_referenced() so * that a reference from a concurrently destroyed mapping is * observed here and now. */ if (object->ref_count != 0) act_delta = pmap_ts_referenced(m); else { KASSERT(!pmap_page_is_mapped(m), ("page %p is mapped", m)); act_delta = 0; } if ((m->aflags & PGA_REFERENCED) != 0) { vm_page_aflag_clear(m, PGA_REFERENCED); act_delta++; } if (act_delta != 0) { if (object->ref_count != 0) { VM_CNT_INC(v_reactivated); vm_page_activate(m); /* * Increase the activation count if the page * was referenced while in the inactive queue. * This makes it less likely that the page will * be returned prematurely to the inactive * queue. */ m->act_count += act_delta + ACT_ADVANCE; continue; } else if ((object->flags & OBJ_DEAD) == 0) { vm_page_aflag_set(m, PGA_REQUEUE); goto reinsert; } } /* * If the page appears to be clean at the machine-independent * layer, then remove all of its mappings from the pmap in * anticipation of freeing it. If, however, any of the page's * mappings allow write access, then the page may still be * modified until the last of those mappings are removed. */ if (object->ref_count != 0) { vm_page_test_dirty(m); if (m->dirty == 0) pmap_remove_all(m); } /* * Clean pages can be freed, but dirty pages must be sent back * to the laundry, unless they belong to a dead object. 
* Requeueing dirty pages from dead objects is pointless, as * they are being paged out and freed by the thread that * destroyed the object. */ if (m->dirty == 0) { free_page: /* * Because we dequeued the page and have already * checked for concurrent dequeue and enqueue * requests, we can safely disassociate the page * from the inactive queue. */ KASSERT((m->aflags & PGA_QUEUE_STATE_MASK) == 0, ("page %p has queue state", m)); m->queue = PQ_NONE; vm_page_free(m); page_shortage--; } else if ((object->flags & OBJ_DEAD) == 0) vm_page_launder(m); continue; reinsert: vm_pageout_reinsert_inactive(&ss, &rq, m); } - if (mtx != NULL) { + if (mtx != NULL) mtx_unlock(mtx); - mtx = NULL; - } - if (obj_locked) { + if (object != NULL) VM_OBJECT_WUNLOCK(object); - obj_locked = false; - } vm_pageout_reinsert_inactive(&ss, &rq, NULL); vm_pageout_reinsert_inactive(&ss, &ss.bq, NULL); vm_pagequeue_lock(pq); vm_pageout_end_scan(&ss); vm_pagequeue_unlock(pq); VM_CNT_ADD(v_dfree, starting_page_shortage - page_shortage); /* * Wake up the laundry thread so that it can perform any needed * laundering. If we didn't meet our target, we're in shortfall and * need to launder more aggressively. If PQ_LAUNDRY is empty and no * swap devices are configured, the laundry thread has no work to do, so * don't bother waking it up. * * The laundry thread uses the number of inactive queue scans elapsed * since the last laundering to determine whether to launder again, so * keep count. */ if (starting_page_shortage > 0) { pq = &vmd->vmd_pagequeues[PQ_LAUNDRY]; vm_pagequeue_lock(pq); if (vmd->vmd_laundry_request == VM_LAUNDRY_IDLE && (pq->pq_cnt > 0 || atomic_load_acq_int(&swapdev_enabled))) { if (page_shortage > 0) { vmd->vmd_laundry_request = VM_LAUNDRY_SHORTFALL; VM_CNT_INC(v_pdshortfalls); } else if (vmd->vmd_laundry_request != VM_LAUNDRY_SHORTFALL) vmd->vmd_laundry_request = VM_LAUNDRY_BACKGROUND; wakeup(&vmd->vmd_laundry_request); } vmd->vmd_clean_pages_freed += starting_page_shortage - page_shortage; vm_pagequeue_unlock(pq); } /* * Wake up the swapout daemon if we didn't free the targeted number of * pages. */ if (page_shortage > 0) vm_swapout_run(); /* * If the inactive queue scan fails repeatedly to meet its * target, kill the largest process. */ vm_pageout_mightbe_oom(vmd, page_shortage, starting_page_shortage); /* * Reclaim pages by swapping out idle processes, if configured to do so. */ vm_swapout_run_idle(); /* * See the description of addl_page_shortage above. */ *addl_shortage = addl_page_shortage + deficit; return (page_shortage <= 0); } static int vm_pageout_oom_vote; /* * The pagedaemon threads randomly select one to perform the * OOM. Trying to kill processes before all pagedaemons * have failed to reach the free target is premature. */ static void vm_pageout_mightbe_oom(struct vm_domain *vmd, int page_shortage, int starting_page_shortage) { int old_vote; if (starting_page_shortage <= 0 || starting_page_shortage != page_shortage) vmd->vmd_oom_seq = 0; else vmd->vmd_oom_seq++; if (vmd->vmd_oom_seq < vm_pageout_oom_seq) { if (vmd->vmd_oom) { vmd->vmd_oom = FALSE; atomic_subtract_int(&vm_pageout_oom_vote, 1); } return; } /* * Do not follow the call sequence until the OOM condition is * cleared. */ vmd->vmd_oom_seq = 0; if (vmd->vmd_oom) return; vmd->vmd_oom = TRUE; old_vote = atomic_fetchadd_int(&vm_pageout_oom_vote, 1); if (old_vote != vm_ndomains - 1) return; /* * The current pagedaemon thread is the last in the quorum to * start OOM. Initiate the selection and signaling of the * victim.
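 *
 * (Quorum example: on a system with vm_ndomains == 2, each domain's
 * pagedaemon must first accumulate vm_pageout_oom_seq (default 12)
 * consecutive failed scans and cast its vote; only the second voter,
 * observing old_vote == vm_ndomains - 1, goes on to call
 * vm_pageout_oom().)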
*/ vm_pageout_oom(VM_OOM_MEM); /* * After one round of OOM terror, recall our vote. On the * next pass, current pagedaemon would vote again if the low * memory condition is still there, due to vmd_oom being * false. */ vmd->vmd_oom = FALSE; atomic_subtract_int(&vm_pageout_oom_vote, 1); } /* * The OOM killer is the page daemon's action of last resort when * memory allocation requests have been stalled for a prolonged period * of time because it cannot reclaim memory. This function computes * the approximate number of physical pages that could be reclaimed if * the specified address space is destroyed. * * Private, anonymous memory owned by the address space is the * principal resource that we expect to recover after an OOM kill. * Since the physical pages mapped by the address space's COW entries * are typically shared pages, they are unlikely to be released and so * they are not counted. * * To get to the point where the page daemon runs the OOM killer, its * efforts to write-back vnode-backed pages may have stalled. This * could be caused by a memory allocation deadlock in the write path * that might be resolved by an OOM kill. Therefore, physical pages * belonging to vnode-backed objects are counted, because they might * be freed without being written out first if the address space holds * the last reference to an unlinked vnode. * * Similarly, physical pages belonging to OBJT_PHYS objects are * counted because the address space might hold the last reference to * the object. */ static long vm_pageout_oom_pagecount(struct vmspace *vmspace) { vm_map_t map; vm_map_entry_t entry; vm_object_t obj; long res; map = &vmspace->vm_map; KASSERT(!map->system_map, ("system map")); sx_assert(&map->lock, SA_LOCKED); res = 0; for (entry = map->header.next; entry != &map->header; entry = entry->next) { if ((entry->eflags & MAP_ENTRY_IS_SUB_MAP) != 0) continue; obj = entry->object.vm_object; if (obj == NULL) continue; if ((entry->eflags & MAP_ENTRY_NEEDS_COPY) != 0 && obj->ref_count != 1) continue; switch (obj->type) { case OBJT_DEFAULT: case OBJT_SWAP: case OBJT_PHYS: case OBJT_VNODE: res += obj->resident_page_count; break; } } return (res); } void vm_pageout_oom(int shortage) { struct proc *p, *bigproc; vm_offset_t size, bigsize; struct thread *td; struct vmspace *vm; bool breakout; /* * We keep the process bigproc locked once we find it to keep anyone * from messing with it; however, there is a possibility of * deadlock if process B is bigproc and one of its child processes * attempts to propagate a signal to B while we are waiting for A's * lock while walking this list. To avoid this, we don't block on * the process lock but just skip a process if it is already locked. */ bigproc = NULL; bigsize = 0; sx_slock(&allproc_lock); FOREACH_PROC_IN_SYSTEM(p) { PROC_LOCK(p); /* * If this is a system, protected or killed process, skip it. */ if (p->p_state != PRS_NORMAL || (p->p_flag & (P_INEXEC | P_PROTECTED | P_SYSTEM | P_WEXIT)) != 0 || p->p_pid == 1 || P_KILLED(p) || (p->p_pid < 48 && swap_pager_avail != 0)) { PROC_UNLOCK(p); continue; } /* * If the process is in a non-running type state, * don't touch it. Check all the threads individually. 
*/ breakout = false; FOREACH_THREAD_IN_PROC(p, td) { thread_lock(td); if (!TD_ON_RUNQ(td) && !TD_IS_RUNNING(td) && !TD_IS_SLEEPING(td) && !TD_IS_SUSPENDED(td) && !TD_IS_SWAPPED(td)) { thread_unlock(td); breakout = true; break; } thread_unlock(td); } if (breakout) { PROC_UNLOCK(p); continue; } /* * get the process size */ vm = vmspace_acquire_ref(p); if (vm == NULL) { PROC_UNLOCK(p); continue; } _PHOLD_LITE(p); PROC_UNLOCK(p); sx_sunlock(&allproc_lock); if (!vm_map_trylock_read(&vm->vm_map)) { vmspace_free(vm); sx_slock(&allproc_lock); PRELE(p); continue; } size = vmspace_swap_count(vm); if (shortage == VM_OOM_MEM) size += vm_pageout_oom_pagecount(vm); vm_map_unlock_read(&vm->vm_map); vmspace_free(vm); sx_slock(&allproc_lock); /* * If this process is bigger than the biggest one, * remember it. */ if (size > bigsize) { if (bigproc != NULL) PRELE(bigproc); bigproc = p; bigsize = size; } else { PRELE(p); } } sx_sunlock(&allproc_lock); if (bigproc != NULL) { if (vm_panic_on_oom != 0) panic("out of swap space"); PROC_LOCK(bigproc); killproc(bigproc, "out of swap space"); sched_nice(bigproc, PRIO_MIN); _PRELE(bigproc); PROC_UNLOCK(bigproc); } } static bool vm_pageout_lowmem(void) { static int lowmem_ticks = 0; int last; last = atomic_load_int(&lowmem_ticks); while ((u_int)(ticks - last) / hz >= lowmem_period) { if (atomic_fcmpset_int(&lowmem_ticks, &last, ticks) == 0) continue; /* * Decrease registered cache sizes. */ SDT_PROBE0(vm, , , vm__lowmem_scan); EVENTHANDLER_INVOKE(vm_lowmem, VM_LOW_PAGES); /* * We do this explicitly after the caches have been * drained above. */ uma_reclaim(); return (true); } return (false); } static void vm_pageout_worker(void *arg) { struct vm_domain *vmd; u_int ofree; int addl_shortage, domain, shortage; bool target_met; domain = (uintptr_t)arg; vmd = VM_DOMAIN(domain); shortage = 0; target_met = true; /* * XXXKIB It could be useful to bind pageout daemon threads to * the cores belonging to the domain, from which vm_page_array * is allocated. */ KASSERT(vmd->vmd_segs != 0, ("domain without segments")); vmd->vmd_last_active_scan = ticks; /* * The pageout daemon worker is never done, so loop forever. */ while (TRUE) { vm_domain_pageout_lock(vmd); /* * We need to clear wanted before we check the limits. This * prevents races with wakers who will check wanted after they * reach the limit. */ atomic_store_int(&vmd->vmd_pageout_wanted, 0); /* * Might the page daemon need to run again? */ if (vm_paging_needed(vmd, vmd->vmd_free_count)) { /* * Yes. If the scan failed to produce enough free * pages, sleep uninterruptibly for some time in the * hope that the laundry thread will clean some pages. */ vm_domain_pageout_unlock(vmd); if (!target_met) pause("pwait", hz / VM_INACT_SCAN_RATE); } else { /* * No, sleep until the next wakeup or until pages * need to have their reference stats updated. */ if (mtx_sleep(&vmd->vmd_pageout_wanted, vm_domain_pageout_lockptr(vmd), PDROP | PVM, "psleep", hz / VM_INACT_SCAN_RATE) == 0) VM_CNT_INC(v_pdwakeups); } /* Prevent spurious wakeups by ensuring that wanted is set. */ atomic_store_int(&vmd->vmd_pageout_wanted, 1); /* * Use the controller to calculate how many pages to free in * this interval, and scan the inactive queue. If the lowmem * handlers appear to have freed up some pages, subtract the * difference from the inactive queue scan target. 
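 *
 * For example, if pidctrl_daemon() asks for 1000 pages and the
 * vm_lowmem handlers free 300 of them (vmd_free_count grows by 300
 * across the vm_pageout_lowmem() call), only the remaining 700 are
 * requested from the inactive scan.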
*/ shortage = pidctrl_daemon(&vmd->vmd_pid, vmd->vmd_free_count); if (shortage > 0) { ofree = vmd->vmd_free_count; if (vm_pageout_lowmem() && vmd->vmd_free_count > ofree) shortage -= min(vmd->vmd_free_count - ofree, (u_int)shortage); target_met = vm_pageout_scan_inactive(vmd, shortage, &addl_shortage); } else addl_shortage = 0; /* * Scan the active queue. A positive value for shortage * indicates that we must aggressively deactivate pages to avoid * a shortfall. */ shortage = vm_pageout_active_target(vmd) + addl_shortage; vm_pageout_scan_active(vmd, shortage); } } /* * vm_pageout_init initialises basic pageout daemon settings. */ static void vm_pageout_init_domain(int domain) { struct vm_domain *vmd; struct sysctl_oid *oid; vmd = VM_DOMAIN(domain); vmd->vmd_interrupt_free_min = 2; /* * v_free_reserved needs to include enough for the largest * swap pager structures plus enough for any pv_entry structs * when paging. */ if (vmd->vmd_page_count > 1024) vmd->vmd_free_min = 4 + (vmd->vmd_page_count - 1024) / 200; else vmd->vmd_free_min = 4; vmd->vmd_pageout_free_min = (2*MAXBSIZE)/PAGE_SIZE + vmd->vmd_interrupt_free_min; vmd->vmd_free_reserved = vm_pageout_page_count + vmd->vmd_pageout_free_min + (vmd->vmd_page_count / 768); vmd->vmd_free_severe = vmd->vmd_free_min / 2; vmd->vmd_free_target = 4 * vmd->vmd_free_min + vmd->vmd_free_reserved; vmd->vmd_free_min += vmd->vmd_free_reserved; vmd->vmd_free_severe += vmd->vmd_free_reserved; vmd->vmd_inactive_target = (3 * vmd->vmd_free_target) / 2; if (vmd->vmd_inactive_target > vmd->vmd_free_count / 3) vmd->vmd_inactive_target = vmd->vmd_free_count / 3; /* * Set the default wakeup threshold to be 10% below the paging * target. This keeps the steady state out of shortfall. */ vmd->vmd_pageout_wakeup_thresh = (vmd->vmd_free_target / 10) * 9; /* * Target amount of memory to move out of the laundry queue during a * background laundering. This is proportional to the amount of system * memory. */ vmd->vmd_background_launder_target = (vmd->vmd_free_target - vmd->vmd_free_min) / 10; /* Initialize the pageout daemon pid controller. */ pidctrl_init(&vmd->vmd_pid, hz / VM_INACT_SCAN_RATE, vmd->vmd_free_target, PIDCTRL_BOUND, PIDCTRL_KPD, PIDCTRL_KID, PIDCTRL_KDD); oid = SYSCTL_ADD_NODE(NULL, SYSCTL_CHILDREN(vmd->vmd_oid), OID_AUTO, "pidctrl", CTLFLAG_RD, NULL, ""); pidctrl_init_sysctl(&vmd->vmd_pid, SYSCTL_CHILDREN(oid)); } static void vm_pageout_init(void) { u_int freecount; int i; /* * Initialize some paging parameters. */ if (vm_cnt.v_page_count < 2000) vm_pageout_page_count = 8; freecount = 0; for (i = 0; i < vm_ndomains; i++) { struct vm_domain *vmd; vm_pageout_init_domain(i); vmd = VM_DOMAIN(i); vm_cnt.v_free_reserved += vmd->vmd_free_reserved; vm_cnt.v_free_target += vmd->vmd_free_target; vm_cnt.v_free_min += vmd->vmd_free_min; vm_cnt.v_inactive_target += vmd->vmd_inactive_target; vm_cnt.v_pageout_free_min += vmd->vmd_pageout_free_min; vm_cnt.v_interrupt_free_min += vmd->vmd_interrupt_free_min; vm_cnt.v_free_severe += vmd->vmd_free_severe; freecount += vmd->vmd_free_count; } /* * Set interval in seconds for active scan. We want to visit each * page at least once every ten minutes. This is to prevent worst * case paging behaviors with stale active LRU. */ if (vm_pageout_update_period == 0) vm_pageout_update_period = 600; if (vm_page_max_wired == 0) vm_page_max_wired = freecount / 3; } /* * vm_pageout is the high level pageout daemon. 
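 *
 * It starts one pageout worker and one laundry thread for each
 * non-empty domain, plus the uma_reclaim helper; the first non-empty
 * domain is serviced by this bootstrapping thread itself rather than
 * by a newly created kthread.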
*/ static void vm_pageout(void) { struct proc *p; struct thread *td; int error, first, i; p = curproc; td = curthread; swap_pager_swap_init(); for (first = -1, i = 0; i < vm_ndomains; i++) { if (VM_DOMAIN_EMPTY(i)) { if (bootverbose) printf("domain %d empty; skipping pageout\n", i); continue; } if (first == -1) first = i; else { error = kthread_add(vm_pageout_worker, (void *)(uintptr_t)i, p, NULL, 0, 0, "dom%d", i); if (error != 0) panic("starting pageout for domain %d: %d\n", i, error); } error = kthread_add(vm_pageout_laundry_worker, (void *)(uintptr_t)i, p, NULL, 0, 0, "laundry: dom%d", i); if (error != 0) panic("starting laundry for domain %d: %d", i, error); } error = kthread_add(uma_reclaim_worker, NULL, p, NULL, 0, 0, "uma"); if (error != 0) panic("starting uma_reclaim helper, error %d\n", error); snprintf(td->td_name, sizeof(td->td_name), "dom%d", first); vm_pageout_worker((void *)(uintptr_t)first); } /* * Perform an advisory wakeup of the page daemon. */ void pagedaemon_wakeup(int domain) { struct vm_domain *vmd; vmd = VM_DOMAIN(domain); vm_domain_pageout_assert_unlocked(vmd); if (curproc == pageproc) return; if (atomic_fetchadd_int(&vmd->vmd_pageout_wanted, 1) == 0) { vm_domain_pageout_lock(vmd); atomic_store_int(&vmd->vmd_pageout_wanted, 1); wakeup(&vmd->vmd_pageout_wanted); vm_domain_pageout_unlock(vmd); } } Index: projects/import-googletest-1.8.1/sys/vm/vnode_pager.c =================================================================== --- projects/import-googletest-1.8.1/sys/vm/vnode_pager.c (revision 344270) +++ projects/import-googletest-1.8.1/sys/vm/vnode_pager.c (revision 344271) @@ -1,1567 +1,1577 @@ /*- * SPDX-License-Identifier: BSD-4-Clause * * Copyright (c) 1990 University of Utah. * Copyright (c) 1991 The Regents of the University of California. * All rights reserved. * Copyright (c) 1993, 1994 John S. Dyson * Copyright (c) 1995, David Greenman * * This code is derived from software contributed to Berkeley by * the Systems Programming Group of the University of Utah Computer * Science Department. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * 3. All advertising materials mentioning features or use of this software * must display the following acknowledgement: * This product includes software developed by the University of * California, Berkeley and its contributors. * 4. Neither the name of the University nor the names of its contributors * may be used to endorse or promote products derived from this software * without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. 
IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * from: @(#)vnode_pager.c 7.5 (Berkeley) 4/20/91 */ /* * Page to/from files (vnodes). */ /* * TODO: * Implement VOP_GETPAGES/PUTPAGES interface for filesystems. Will * greatly re-simplify the vnode_pager. */ #include __FBSDID("$FreeBSD$"); #include "opt_vm.h" #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include static int vnode_pager_addr(struct vnode *vp, vm_ooffset_t address, daddr_t *rtaddress, int *run); static int vnode_pager_input_smlfs(vm_object_t object, vm_page_t m); static int vnode_pager_input_old(vm_object_t object, vm_page_t m); static void vnode_pager_dealloc(vm_object_t); static int vnode_pager_getpages(vm_object_t, vm_page_t *, int, int *, int *); static int vnode_pager_getpages_async(vm_object_t, vm_page_t *, int, int *, int *, vop_getpages_iodone_t, void *); static void vnode_pager_putpages(vm_object_t, vm_page_t *, int, int, int *); static boolean_t vnode_pager_haspage(vm_object_t, vm_pindex_t, int *, int *); static vm_object_t vnode_pager_alloc(void *, vm_ooffset_t, vm_prot_t, vm_ooffset_t, struct ucred *cred); static int vnode_pager_generic_getpages_done(struct buf *); static void vnode_pager_generic_getpages_done_async(struct buf *); struct pagerops vnodepagerops = { .pgo_alloc = vnode_pager_alloc, .pgo_dealloc = vnode_pager_dealloc, .pgo_getpages = vnode_pager_getpages, .pgo_getpages_async = vnode_pager_getpages_async, .pgo_putpages = vnode_pager_putpages, .pgo_haspage = vnode_pager_haspage, }; static struct domainset *vnode_domainset = NULL; SYSCTL_PROC(_debug, OID_AUTO, vnode_domainset, CTLTYPE_STRING | CTLFLAG_RW, &vnode_domainset, 0, sysctl_handle_domainset, "A", "Default vnode NUMA policy"); +static int nvnpbufs; +SYSCTL_INT(_vm, OID_AUTO, vnode_pbufs, CTLFLAG_RDTUN | CTLFLAG_NOFETCH, + &nvnpbufs, 0, "number of physical buffers allocated for vnode pager"); + static uma_zone_t vnode_pbuf_zone; static void vnode_pager_init(void *dummy) { - vnode_pbuf_zone = pbuf_zsecond_create("vnpbuf", nswbuf * 8); +#ifdef __LP64__ + nvnpbufs = nswbuf * 2; +#else + nvnpbufs = nswbuf / 2; +#endif + TUNABLE_INT_FETCH("vm.vnode_pbufs", &nvnpbufs); + vnode_pbuf_zone = pbuf_zsecond_create("vnpbuf", nvnpbufs); } SYSINIT(vnode_pager, SI_SUB_CPU, SI_ORDER_ANY, vnode_pager_init, NULL); /* Create the VM system backing object for this vnode */ int vnode_create_vobject(struct vnode *vp, off_t isize, struct thread *td) { vm_object_t object; vm_ooffset_t size = isize; struct vattr va; if (!vn_isdisk(vp, NULL) && vn_canvmio(vp) == FALSE) return (0); while ((object = vp->v_object) != NULL) { VM_OBJECT_WLOCK(object); if (!(object->flags & OBJ_DEAD)) { VM_OBJECT_WUNLOCK(object); return (0); } VOP_UNLOCK(vp, 0); vm_object_set_flag(object, OBJ_DISCONNECTWNT); VM_OBJECT_SLEEP(object, object, PDROP | PVM, "vodead", 0); vn_lock(vp, LK_EXCLUSIVE | LK_RETRY); } if (size == 0) { if (vn_isdisk(vp, NULL)) { size = 
IDX_TO_OFF(INT_MAX); } else { if (VOP_GETATTR(vp, &va, td->td_ucred)) return (0); size = va.va_size; } } object = vnode_pager_alloc(vp, size, 0, 0, td->td_ucred); /* * Dereference the reference we just created. This assumes * that the object is associated with the vp. */ VM_OBJECT_WLOCK(object); object->ref_count--; VM_OBJECT_WUNLOCK(object); vrele(vp); KASSERT(vp->v_object != NULL, ("vnode_create_vobject: NULL object")); return (0); } void vnode_destroy_vobject(struct vnode *vp) { struct vm_object *obj; obj = vp->v_object; if (obj == NULL) return; ASSERT_VOP_ELOCKED(vp, "vnode_destroy_vobject"); VM_OBJECT_WLOCK(obj); umtx_shm_object_terminated(obj); if (obj->ref_count == 0) { /* * don't double-terminate the object */ if ((obj->flags & OBJ_DEAD) == 0) { vm_object_terminate(obj); } else { /* * Waiters were already handled during object * termination. The exclusive vnode lock hopefully * prevented new waiters from referencing the dying * object. */ KASSERT((obj->flags & OBJ_DISCONNECTWNT) == 0, ("OBJ_DISCONNECTWNT set obj %p flags %x", obj, obj->flags)); vp->v_object = NULL; VM_OBJECT_WUNLOCK(obj); } } else { /* * Woe to the process that tries to page now :-). */ vm_pager_deallocate(obj); VM_OBJECT_WUNLOCK(obj); } KASSERT(vp->v_object == NULL, ("vp %p obj %p", vp, vp->v_object)); } /* * Allocate (or lookup) pager for a vnode. * Handle is a vnode pointer. * * MPSAFE */ vm_object_t vnode_pager_alloc(void *handle, vm_ooffset_t size, vm_prot_t prot, vm_ooffset_t offset, struct ucred *cred) { vm_object_t object; struct vnode *vp; /* * Pageout to vnode, no can do yet. */ if (handle == NULL) return (NULL); vp = (struct vnode *) handle; /* * If the object is being terminated, wait for it to * go away. */ retry: while ((object = vp->v_object) != NULL) { VM_OBJECT_WLOCK(object); if ((object->flags & OBJ_DEAD) == 0) break; vm_object_set_flag(object, OBJ_DISCONNECTWNT); VM_OBJECT_SLEEP(object, object, PDROP | PVM, "vadead", 0); } KASSERT(vp->v_usecount != 0, ("vnode_pager_alloc: no vnode reference")); if (object == NULL) { /* * Add an object of the appropriate size */ object = vm_object_allocate(OBJT_VNODE, OFF_TO_IDX(round_page(size))); object->un_pager.vnp.vnp_size = size; object->un_pager.vnp.writemappings = 0; object->domain.dr_policy = vnode_domainset; object->handle = handle; VI_LOCK(vp); if (vp->v_object != NULL) { /* * Object has been created while we were sleeping */ VI_UNLOCK(vp); VM_OBJECT_WLOCK(object); KASSERT(object->ref_count == 1, ("leaked ref %p %d", object, object->ref_count)); object->type = OBJT_DEAD; object->ref_count = 0; VM_OBJECT_WUNLOCK(object); vm_object_destroy(object); goto retry; } vp->v_object = object; VI_UNLOCK(vp); } else { object->ref_count++; #if VM_NRESERVLEVEL > 0 vm_object_color(object, 0); #endif VM_OBJECT_WUNLOCK(object); } vrefact(vp); return (object); } /* * The object must be locked. 
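 *
 * That is, callers enter vnode_pager_dealloc() with the object write
 * lock held; the routine itself temporarily drops that lock around
 * the final vunref() calls and reacquires it before returning.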
*/ static void vnode_pager_dealloc(vm_object_t object) { struct vnode *vp; int refs; vp = object->handle; if (vp == NULL) panic("vnode_pager_dealloc: pager already dealloced"); VM_OBJECT_ASSERT_WLOCKED(object); vm_object_pip_wait(object, "vnpdea"); refs = object->ref_count; object->handle = NULL; object->type = OBJT_DEAD; if (object->flags & OBJ_DISCONNECTWNT) { vm_object_clear_flag(object, OBJ_DISCONNECTWNT); wakeup(object); } ASSERT_VOP_ELOCKED(vp, "vnode_pager_dealloc"); if (object->un_pager.vnp.writemappings > 0) { object->un_pager.vnp.writemappings = 0; VOP_ADD_WRITECOUNT(vp, -1); CTR3(KTR_VFS, "%s: vp %p v_writecount decreased to %d", __func__, vp, vp->v_writecount); } vp->v_object = NULL; VOP_UNSET_TEXT(vp); VM_OBJECT_WUNLOCK(object); while (refs-- > 0) vunref(vp); VM_OBJECT_WLOCK(object); } static boolean_t vnode_pager_haspage(vm_object_t object, vm_pindex_t pindex, int *before, int *after) { struct vnode *vp = object->handle; daddr_t bn; int err; daddr_t reqblock; int poff; int bsize; int pagesperblock, blocksperpage; VM_OBJECT_ASSERT_WLOCKED(object); /* * If no vp or vp is doomed or marked transparent to VM, we do not * have the page. */ if (vp == NULL || vp->v_iflag & VI_DOOMED) return FALSE; /* * If the offset is beyond end of file we do * not have the page. */ if (IDX_TO_OFF(pindex) >= object->un_pager.vnp.vnp_size) return FALSE; bsize = vp->v_mount->mnt_stat.f_iosize; pagesperblock = bsize / PAGE_SIZE; blocksperpage = 0; if (pagesperblock > 0) { reqblock = pindex / pagesperblock; } else { blocksperpage = (PAGE_SIZE / bsize); reqblock = pindex * blocksperpage; } VM_OBJECT_WUNLOCK(object); err = VOP_BMAP(vp, reqblock, NULL, &bn, after, before); VM_OBJECT_WLOCK(object); if (err) return TRUE; if (bn == -1) return FALSE; if (pagesperblock > 0) { poff = pindex - (reqblock * pagesperblock); if (before) { *before *= pagesperblock; *before += poff; } if (after) { /* * The BMAP vop can report a partial block in the * 'after', but must not report blocks after EOF. * Assert the latter, and truncate 'after' in case * of the former. */ KASSERT((reqblock + *after) * pagesperblock < roundup2(object->size, pagesperblock), ("%s: reqblock %jd after %d size %ju", __func__, (intmax_t )reqblock, *after, (uintmax_t )object->size)); *after *= pagesperblock; *after += pagesperblock - (poff + 1); if (pindex + *after >= object->size) *after = object->size - 1 - pindex; } } else { if (before) { *before /= blocksperpage; } if (after) { *after /= blocksperpage; } } return TRUE; } /* * Lets the VM system know about a change in size for a file. * We adjust our own internal size and flush any cached pages in * the associated object that are affected by the size change. * * Note: this routine may be invoked as a result of a pager put * operation (possibly at object termination time), so we must be careful. */ void vnode_pager_setsize(struct vnode *vp, vm_ooffset_t nsize) { vm_object_t object; vm_page_t m; vm_pindex_t nobjsize; if ((object = vp->v_object) == NULL) return; /* ASSERT_VOP_ELOCKED(vp, "vnode_pager_setsize and not locked vnode"); */ VM_OBJECT_WLOCK(object); if (object->type == OBJT_DEAD) { VM_OBJECT_WUNLOCK(object); return; } KASSERT(object->type == OBJT_VNODE, ("not vnode-backed object %p", object)); if (nsize == object->un_pager.vnp.vnp_size) { /* * Hasn't changed size */ VM_OBJECT_WUNLOCK(object); return; } nobjsize = OFF_TO_IDX(nsize + PAGE_MASK); if (nsize < object->un_pager.vnp.vnp_size) { /* * File has shrunk. Toss any cached pages beyond the new EOF. 
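 *
 * For example, truncating to nsize == 10000 with 4 KB pages removes
 * the pages at index 3 and up, then zeroes bytes 1808..4095 of the
 * page at index 2, marks that range valid, and clears the dirty bits
 * only from the next DEV_BSIZE boundary (2048) onward, so that a
 * partially zeroed block stays dirty.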
*/ if (nobjsize < object->size) vm_object_page_remove(object, nobjsize, object->size, 0); /* * this gets rid of garbage at the end of a page that is now * only partially backed by the vnode. * * XXX for some reason (I don't know yet), if we take a * completely invalid page and mark it partially valid * it can screw up NFS reads, so we don't allow the case. */ if ((nsize & PAGE_MASK) && (m = vm_page_lookup(object, OFF_TO_IDX(nsize))) != NULL && m->valid != 0) { int base = (int)nsize & PAGE_MASK; int size = PAGE_SIZE - base; /* * Clear out partial-page garbage in case * the page has been mapped. */ pmap_zero_page_area(m, base, size); /* * Update the valid bits to reflect the blocks that * have been zeroed. Some of these valid bits may * have already been set. */ vm_page_set_valid_range(m, base, size); /* * Round "base" to the next block boundary so that the * dirty bit for a partially zeroed block is not * cleared. */ base = roundup2(base, DEV_BSIZE); /* * Clear out partial-page dirty bits. * * note that we do not clear out the valid * bits. This would prevent bogus_page * replacement from working properly. */ vm_page_clear_dirty(m, base, PAGE_SIZE - base); } } object->un_pager.vnp.vnp_size = nsize; object->size = nobjsize; VM_OBJECT_WUNLOCK(object); } /* * calculate the linear (byte) disk address of specified virtual * file address */ static int vnode_pager_addr(struct vnode *vp, vm_ooffset_t address, daddr_t *rtaddress, int *run) { int bsize; int err; daddr_t vblock; daddr_t voffset; if (address < 0) return -1; if (vp->v_iflag & VI_DOOMED) return -1; bsize = vp->v_mount->mnt_stat.f_iosize; vblock = address / bsize; voffset = address % bsize; err = VOP_BMAP(vp, vblock, NULL, rtaddress, run, NULL); if (err == 0) { if (*rtaddress != -1) *rtaddress += voffset / DEV_BSIZE; if (run) { *run += 1; *run *= bsize/PAGE_SIZE; *run -= voffset/PAGE_SIZE; } } return (err); } /* * small block filesystem vnode pager input */ static int vnode_pager_input_smlfs(vm_object_t object, vm_page_t m) { struct vnode *vp; struct bufobj *bo; struct buf *bp; struct sf_buf *sf; daddr_t fileaddr; vm_offset_t bsize; vm_page_bits_t bits; int error, i; error = 0; vp = object->handle; if (vp->v_iflag & VI_DOOMED) return VM_PAGER_BAD; bsize = vp->v_mount->mnt_stat.f_iosize; VOP_BMAP(vp, 0, &bo, 0, NULL, NULL); sf = sf_buf_alloc(m, 0); for (i = 0; i < PAGE_SIZE / bsize; i++) { vm_ooffset_t address; bits = vm_page_bits(i * bsize, bsize); if (m->valid & bits) continue; address = IDX_TO_OFF(m->pindex) + i * bsize; if (address >= object->un_pager.vnp.vnp_size) { fileaddr = -1; } else { error = vnode_pager_addr(vp, address, &fileaddr, NULL); if (error) break; } if (fileaddr != -1) { bp = uma_zalloc(vnode_pbuf_zone, M_WAITOK); /* build a minimal buffer header */ bp->b_iocmd = BIO_READ; bp->b_iodone = bdone; KASSERT(bp->b_rcred == NOCRED, ("leaking read ucred")); KASSERT(bp->b_wcred == NOCRED, ("leaking write ucred")); bp->b_rcred = crhold(curthread->td_ucred); bp->b_wcred = crhold(curthread->td_ucred); bp->b_data = (caddr_t)sf_buf_kva(sf) + i * bsize; bp->b_blkno = fileaddr; pbgetbo(bo, bp); bp->b_vp = vp; bp->b_bcount = bsize; bp->b_bufsize = bsize; bp->b_runningbufspace = bp->b_bufsize; atomic_add_long(&runningbufspace, bp->b_runningbufspace); /* do the input */ bp->b_iooffset = dbtob(bp->b_blkno); bstrategy(bp); bwait(bp, PVM, "vnsrd"); if ((bp->b_ioflags & BIO_ERROR) != 0) error = EIO; /* * free the buffer header back to the swap buffer pool */ bp->b_vp = NULL; pbrelbo(bp); uma_zfree(vnode_pbuf_zone, bp); if (error) break; } 
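		/*
		 * A hole: VOP_BMAP() reported no backing block
		 * (fileaddr == -1), so the range reads as zeroes
		 * and no disk I/O is needed.
		 */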
else bzero((caddr_t)sf_buf_kva(sf) + i * bsize, bsize); KASSERT((m->dirty & bits) == 0, ("vnode_pager_input_smlfs: page %p is dirty", m)); VM_OBJECT_WLOCK(object); m->valid |= bits; VM_OBJECT_WUNLOCK(object); } sf_buf_free(sf); if (error) { return VM_PAGER_ERROR; } return VM_PAGER_OK; } /* * old style vnode pager input routine */ static int vnode_pager_input_old(vm_object_t object, vm_page_t m) { struct uio auio; struct iovec aiov; int error; int size; struct sf_buf *sf; struct vnode *vp; VM_OBJECT_ASSERT_WLOCKED(object); error = 0; /* * Return failure if beyond current EOF */ if (IDX_TO_OFF(m->pindex) >= object->un_pager.vnp.vnp_size) { return VM_PAGER_BAD; } else { size = PAGE_SIZE; if (IDX_TO_OFF(m->pindex) + size > object->un_pager.vnp.vnp_size) size = object->un_pager.vnp.vnp_size - IDX_TO_OFF(m->pindex); vp = object->handle; VM_OBJECT_WUNLOCK(object); /* * Allocate a kernel virtual address and initialize so that * we can use VOP_READ/WRITE routines. */ sf = sf_buf_alloc(m, 0); aiov.iov_base = (caddr_t)sf_buf_kva(sf); aiov.iov_len = size; auio.uio_iov = &aiov; auio.uio_iovcnt = 1; auio.uio_offset = IDX_TO_OFF(m->pindex); auio.uio_segflg = UIO_SYSSPACE; auio.uio_rw = UIO_READ; auio.uio_resid = size; auio.uio_td = curthread; error = VOP_READ(vp, &auio, 0, curthread->td_ucred); if (!error) { int count = size - auio.uio_resid; if (count == 0) error = EINVAL; else if (count != PAGE_SIZE) bzero((caddr_t)sf_buf_kva(sf) + count, PAGE_SIZE - count); } sf_buf_free(sf); VM_OBJECT_WLOCK(object); } KASSERT(m->dirty == 0, ("vnode_pager_input_old: page %p is dirty", m)); if (!error) m->valid = VM_PAGE_BITS_ALL; return error ? VM_PAGER_ERROR : VM_PAGER_OK; } /* * generic vnode pager input routine */ /* * Local media VFS's that do not implement their own VOP_GETPAGES * should have their VOP_GETPAGES call vnode_pager_generic_getpages() * to implement the previous behaviour. * * All other FS's should use the bypass to get to the local media * backing vp's VOP_GETPAGES. */ static int vnode_pager_getpages(vm_object_t object, vm_page_t *m, int count, int *rbehind, int *rahead) { struct vnode *vp; int rtval; vp = object->handle; VM_OBJECT_WUNLOCK(object); rtval = VOP_GETPAGES(vp, m, count, rbehind, rahead); KASSERT(rtval != EOPNOTSUPP, ("vnode_pager: FS getpages not implemented\n")); VM_OBJECT_WLOCK(object); return rtval; } static int vnode_pager_getpages_async(vm_object_t object, vm_page_t *m, int count, int *rbehind, int *rahead, vop_getpages_iodone_t iodone, void *arg) { struct vnode *vp; int rtval; vp = object->handle; VM_OBJECT_WUNLOCK(object); rtval = VOP_GETPAGES_ASYNC(vp, m, count, rbehind, rahead, iodone, arg); KASSERT(rtval != EOPNOTSUPP, ("vnode_pager: FS getpages_async not implemented\n")); VM_OBJECT_WLOCK(object); return (rtval); } /* * The implementation of VOP_GETPAGES() and VOP_GETPAGES_ASYNC() for * local filesystems, where partially valid pages can only occur at * the end of file. */ int vnode_pager_local_getpages(struct vop_getpages_args *ap) { return (vnode_pager_generic_getpages(ap->a_vp, ap->a_m, ap->a_count, ap->a_rbehind, ap->a_rahead, NULL, NULL)); } int vnode_pager_local_getpages_async(struct vop_getpages_async_args *ap) { return (vnode_pager_generic_getpages(ap->a_vp, ap->a_m, ap->a_count, ap->a_rbehind, ap->a_rahead, ap->a_iodone, ap->a_arg)); } /* * This is now called from local media FS's to operate against their * own vnodes if they fail to implement VOP_GETPAGES.
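 *
 * As an illustrative sketch (the "foofs" name is hypothetical), a
 * local filesystem opts in by pointing its vop vector at the
 * vnode_pager_local_getpages() helpers defined above:
 *
 *	struct vop_vector foofs_vnodeops = {
 *		.vop_default		= &default_vnodeops,
 *		.vop_getpages		= vnode_pager_local_getpages,
 *		.vop_getpages_async	= vnode_pager_local_getpages_async,
 *		...
 *	};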
*/ int vnode_pager_generic_getpages(struct vnode *vp, vm_page_t *m, int count, int *a_rbehind, int *a_rahead, vop_getpages_iodone_t iodone, void *arg) { vm_object_t object; struct bufobj *bo; struct buf *bp; off_t foff; #ifdef INVARIANTS off_t blkno0; #endif int bsize, pagesperblock; int error, before, after, rbehind, rahead, poff, i; int bytecount, secmask; KASSERT(vp->v_type != VCHR && vp->v_type != VBLK, ("%s does not support devices", __func__)); if (vp->v_iflag & VI_DOOMED) return (VM_PAGER_BAD); object = vp->v_object; foff = IDX_TO_OFF(m[0]->pindex); bsize = vp->v_mount->mnt_stat.f_iosize; pagesperblock = bsize / PAGE_SIZE; KASSERT(foff < object->un_pager.vnp.vnp_size, ("%s: page %p offset beyond vp %p size", __func__, m[0], vp)); KASSERT(count <= nitems(bp->b_pages), ("%s: requested %d pages", __func__, count)); /* * The last page has valid blocks. Invalid part can only * exist at the end of file, and the page is made fully valid * by zeroing in vm_pager_get_pages(). */ if (m[count - 1]->valid != 0 && --count == 0) { if (iodone != NULL) iodone(arg, m, 1, 0); return (VM_PAGER_OK); } bp = uma_zalloc(vnode_pbuf_zone, M_WAITOK); /* * Get the underlying device blocks for the file with VOP_BMAP(). * If the file system doesn't support VOP_BMAP, use the old way of * getting pages via VOP_READ. */ error = VOP_BMAP(vp, foff / bsize, &bo, &bp->b_blkno, &after, &before); if (error == EOPNOTSUPP) { uma_zfree(vnode_pbuf_zone, bp); VM_OBJECT_WLOCK(object); for (i = 0; i < count; i++) { VM_CNT_INC(v_vnodein); VM_CNT_INC(v_vnodepgsin); error = vnode_pager_input_old(object, m[i]); if (error) break; } VM_OBJECT_WUNLOCK(object); return (error); } else if (error != 0) { uma_zfree(vnode_pbuf_zone, bp); return (VM_PAGER_ERROR); } /* * If the file system supports BMAP, but blocksize is smaller * than a page size, then use special small filesystem code. */ if (pagesperblock == 0) { uma_zfree(vnode_pbuf_zone, bp); for (i = 0; i < count; i++) { VM_CNT_INC(v_vnodein); VM_CNT_INC(v_vnodepgsin); error = vnode_pager_input_smlfs(object, m[i]); if (error) break; } return (error); } /* * A sparse file can be encountered only for a single page request, * which may not be preceded by a call to vm_pager_haspage(). */ if (bp->b_blkno == -1) { KASSERT(count == 1, ("%s: array[%d] request to a sparse file %p", __func__, count, vp)); uma_zfree(vnode_pbuf_zone, bp); pmap_zero_page(m[0]); KASSERT(m[0]->dirty == 0, ("%s: page %p is dirty", __func__, m[0])); VM_OBJECT_WLOCK(object); m[0]->valid = VM_PAGE_BITS_ALL; VM_OBJECT_WUNLOCK(object); return (VM_PAGER_OK); } #ifdef INVARIANTS blkno0 = bp->b_blkno; #endif bp->b_blkno += (foff % bsize) / DEV_BSIZE; /* Recalculate blocks available after/before to pages. */ poff = (foff % bsize) / PAGE_SIZE; before *= pagesperblock; before += poff; after *= pagesperblock; after += pagesperblock - (poff + 1); if (m[0]->pindex + after >= object->size) after = object->size - 1 - m[0]->pindex; KASSERT(count <= after + 1, ("%s: %d pages asked, can do only %d", __func__, count, after + 1)); after -= count - 1; /* Trim requested rbehind/rahead to possible values. */ rbehind = a_rbehind ? *a_rbehind : 0; rahead = a_rahead ? *a_rahead : 0; rbehind = min(rbehind, before); rbehind = min(rbehind, m[0]->pindex); rahead = min(rahead, after); rahead = min(rahead, object->size - m[count - 1]->pindex); /* * Check that the total amount of pages fits into the buf. Trim * rbehind and rahead evenly if not.
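 *
 * For example, assuming a 32-entry b_pages[] array, count == 8 with
 * rbehind == 20 and rahead == 20 overflows by 16 pages; trim is then
 * 17 and (in the rbehind != before case) each side loses
 * 17 * 20 / 40 == 8 pages, leaving 12 + 12 + 8 == 32, which fits.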
*/ if (rbehind + rahead + count > nitems(bp->b_pages)) { int trim, sum; trim = rbehind + rahead + count - nitems(bp->b_pages) + 1; sum = rbehind + rahead; if (rbehind == before) { /* Roundup rbehind trim to block size. */ rbehind -= roundup(trim * rbehind / sum, pagesperblock); if (rbehind < 0) rbehind = 0; } else rbehind -= trim * rbehind / sum; rahead -= trim * rahead / sum; } KASSERT(rbehind + rahead + count <= nitems(bp->b_pages), ("%s: behind %d ahead %d count %d", __func__, rbehind, rahead, count)); /* * Fill in the bp->b_pages[] array with requested and optional * read behind or read ahead pages. Read behind pages are looked * up in a backward direction, down to a first cached page. Same * for read ahead pages, but there is no need to shift the array * in case of encountering a cached page. */ i = bp->b_npages = 0; if (rbehind) { vm_pindex_t startpindex, tpindex; vm_page_t p; VM_OBJECT_WLOCK(object); startpindex = m[0]->pindex - rbehind; if ((p = TAILQ_PREV(m[0], pglist, listq)) != NULL && p->pindex >= startpindex) startpindex = p->pindex + 1; /* tpindex is unsigned; beware of numeric underflow. */ for (tpindex = m[0]->pindex - 1; tpindex >= startpindex && tpindex < m[0]->pindex; tpindex--, i++) { p = vm_page_alloc(object, tpindex, VM_ALLOC_NORMAL); if (p == NULL) { /* Shift the array. */ for (int j = 0; j < i; j++) bp->b_pages[j] = bp->b_pages[j + tpindex + 1 - startpindex]; break; } bp->b_pages[tpindex - startpindex] = p; } bp->b_pgbefore = i; bp->b_npages += i; bp->b_blkno -= IDX_TO_OFF(i) / DEV_BSIZE; } else bp->b_pgbefore = 0; /* Requested pages. */ for (int j = 0; j < count; j++, i++) bp->b_pages[i] = m[j]; bp->b_npages += count; if (rahead) { vm_pindex_t endpindex, tpindex; vm_page_t p; if (!VM_OBJECT_WOWNED(object)) VM_OBJECT_WLOCK(object); endpindex = m[count - 1]->pindex + rahead + 1; if ((p = TAILQ_NEXT(m[count - 1], listq)) != NULL && p->pindex < endpindex) endpindex = p->pindex; if (endpindex > object->size) endpindex = object->size; for (tpindex = m[count - 1]->pindex + 1; tpindex < endpindex; i++, tpindex++) { p = vm_page_alloc(object, tpindex, VM_ALLOC_NORMAL); if (p == NULL) break; bp->b_pages[i] = p; } bp->b_pgafter = i - bp->b_npages; bp->b_npages = i; } else bp->b_pgafter = 0; if (VM_OBJECT_WOWNED(object)) VM_OBJECT_WUNLOCK(object); /* Report back actual behind/ahead read. */ if (a_rbehind) *a_rbehind = bp->b_pgbefore; if (a_rahead) *a_rahead = bp->b_pgafter; #ifdef INVARIANTS KASSERT(bp->b_npages <= nitems(bp->b_pages), ("%s: buf %p overflowed", __func__, bp)); for (int j = 1, prev = 0; j < bp->b_npages; j++) { if (bp->b_pages[j] == bogus_page) continue; KASSERT(bp->b_pages[j]->pindex - bp->b_pages[prev]->pindex == j - prev, ("%s: pages array not consecutive, bp %p", __func__, bp)); prev = j; } #endif /* * Recalculate first offset and bytecount with regards to read behind. * Truncate bytecount to vnode real size and round up physical size * for real devices. */ foff = IDX_TO_OFF(bp->b_pages[0]->pindex); bytecount = bp->b_npages << PAGE_SHIFT; if ((foff + bytecount) > object->un_pager.vnp.vnp_size) bytecount = object->un_pager.vnp.vnp_size - foff; secmask = bo->bo_bsize - 1; KASSERT(secmask < PAGE_SIZE && secmask > 0, ("%s: sector size %d too large", __func__, secmask + 1)); bytecount = (bytecount + secmask) & ~secmask; /* * And map the pages to be read into the kva, if the filesystem * requires mapped buffers. 
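 *
 * Filesystems that set MNTK_UNMAPPED_BUFS (on machines where
 * unmapped_buf_allowed is true) skip the pmap_qenter()/pmap_qremove()
 * round trip entirely; the buffer is issued with b_data pointing at
 * unmapped_buf.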
*/ if ((vp->v_mount->mnt_kern_flag & MNTK_UNMAPPED_BUFS) != 0 && unmapped_buf_allowed) { bp->b_data = unmapped_buf; bp->b_offset = 0; } else { bp->b_data = bp->b_kvabase; pmap_qenter((vm_offset_t)bp->b_data, bp->b_pages, bp->b_npages); } /* Build a minimal buffer header. */ bp->b_iocmd = BIO_READ; KASSERT(bp->b_rcred == NOCRED, ("leaking read ucred")); KASSERT(bp->b_wcred == NOCRED, ("leaking write ucred")); bp->b_rcred = crhold(curthread->td_ucred); bp->b_wcred = crhold(curthread->td_ucred); pbgetbo(bo, bp); bp->b_vp = vp; bp->b_bcount = bp->b_bufsize = bp->b_runningbufspace = bytecount; bp->b_iooffset = dbtob(bp->b_blkno); KASSERT(IDX_TO_OFF(m[0]->pindex - bp->b_pages[0]->pindex) == (blkno0 - bp->b_blkno) * DEV_BSIZE + IDX_TO_OFF(m[0]->pindex) % bsize, ("wrong offsets bsize %d m[0] %ju b_pages[0] %ju " "blkno0 %ju b_blkno %ju", bsize, (uintmax_t)m[0]->pindex, (uintmax_t)bp->b_pages[0]->pindex, (uintmax_t)blkno0, (uintmax_t)bp->b_blkno)); atomic_add_long(&runningbufspace, bp->b_runningbufspace); VM_CNT_INC(v_vnodein); VM_CNT_ADD(v_vnodepgsin, bp->b_npages); if (iodone != NULL) { /* async */ bp->b_pgiodone = iodone; bp->b_caller1 = arg; bp->b_iodone = vnode_pager_generic_getpages_done_async; bp->b_flags |= B_ASYNC; BUF_KERNPROC(bp); bstrategy(bp); return (VM_PAGER_OK); } else { bp->b_iodone = bdone; bstrategy(bp); bwait(bp, PVM, "vnread"); error = vnode_pager_generic_getpages_done(bp); for (i = 0; i < bp->b_npages; i++) bp->b_pages[i] = NULL; bp->b_vp = NULL; pbrelbo(bp); uma_zfree(vnode_pbuf_zone, bp); return (error != 0 ? VM_PAGER_ERROR : VM_PAGER_OK); } } static void vnode_pager_generic_getpages_done_async(struct buf *bp) { int error; error = vnode_pager_generic_getpages_done(bp); /* Run the iodone upon the requested range. */ bp->b_pgiodone(bp->b_caller1, bp->b_pages + bp->b_pgbefore, bp->b_npages - bp->b_pgbefore - bp->b_pgafter, error); for (int i = 0; i < bp->b_npages; i++) bp->b_pages[i] = NULL; bp->b_vp = NULL; pbrelbo(bp); uma_zfree(vnode_pbuf_zone, bp); } static int vnode_pager_generic_getpages_done(struct buf *bp) { vm_object_t object; off_t tfoff, nextoff; int i, error; error = (bp->b_ioflags & BIO_ERROR) != 0 ? EIO : 0; object = bp->b_vp->v_object; if (error == 0 && bp->b_bcount != bp->b_npages * PAGE_SIZE) { if (!buf_mapped(bp)) { bp->b_data = bp->b_kvabase; pmap_qenter((vm_offset_t)bp->b_data, bp->b_pages, bp->b_npages); } bzero(bp->b_data + bp->b_bcount, PAGE_SIZE * bp->b_npages - bp->b_bcount); } if (buf_mapped(bp)) { pmap_qremove((vm_offset_t)bp->b_data, bp->b_npages); bp->b_data = unmapped_buf; } VM_OBJECT_WLOCK(object); for (i = 0, tfoff = IDX_TO_OFF(bp->b_pages[0]->pindex); i < bp->b_npages; i++, tfoff = nextoff) { vm_page_t mt; nextoff = tfoff + PAGE_SIZE; mt = bp->b_pages[i]; if (nextoff <= object->un_pager.vnp.vnp_size) { /* * Read filled up entire page. */ mt->valid = VM_PAGE_BITS_ALL; KASSERT(mt->dirty == 0, ("%s: page %p is dirty", __func__, mt)); KASSERT(!pmap_page_is_mapped(mt), ("%s: page %p is mapped", __func__, mt)); } else { /* * Read did not fill up entire page. * * Currently we do not set the entire page valid, * we just try to clear the piece that we couldn't * read. 
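			 *
			 * For example, with vnp_size == 10000 and 4 KB
			 * pages, the page at index 2 has only bytes
			 * 0..1807 backed by the file; just that range is
			 * marked valid below, and the KASSERT checks that
			 * the freshly read range carries no dirty bits.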
vm_page_set_valid_range(mt, 0, object->un_pager.vnp.vnp_size - tfoff); KASSERT((mt->dirty & vm_page_bits(0, object->un_pager.vnp.vnp_size - tfoff)) == 0, ("%s: page %p is dirty", __func__, mt)); } if (i < bp->b_pgbefore || i >= bp->b_npages - bp->b_pgafter) vm_page_readahead_finish(mt); } VM_OBJECT_WUNLOCK(object); if (error != 0) printf("%s: I/O read error %d\n", __func__, error); return (error); } /* * EOPNOTSUPP is no longer legal. For local media VFS's that do not * implement their own VOP_PUTPAGES, their VOP_PUTPAGES should call * vnode_pager_generic_putpages() to implement the previous behaviour. * * All other FS's should use the bypass to get to the local media * backing vp's VOP_PUTPAGES. */ static void vnode_pager_putpages(vm_object_t object, vm_page_t *m, int count, int flags, int *rtvals) { int rtval; struct vnode *vp; int bytes = count * PAGE_SIZE; /* * Force synchronous operation if we are extremely low on memory * to prevent a low-memory deadlock. VOP operations often need to * allocate more memory to initiate the I/O (i.e., do a BMAP * operation). The swapper handles the case by limiting the amount * of asynchronous I/O, but that sort of solution doesn't scale well * for the vnode pager without a lot of work. * * Also, the backing vnode's iodone routine may not wake the pageout * daemon up. This should probably be addressed XXX. */ if (vm_page_count_min()) flags |= VM_PAGER_PUT_SYNC; /* * Call device-specific putpages function */ vp = object->handle; VM_OBJECT_WUNLOCK(object); rtval = VOP_PUTPAGES(vp, m, bytes, flags, rtvals); KASSERT(rtval != EOPNOTSUPP, ("vnode_pager: stale FS putpages\n")); VM_OBJECT_WLOCK(object); } static int vn_off2bidx(vm_ooffset_t offset) { return ((offset & PAGE_MASK) / DEV_BSIZE); } static bool vn_dirty_blk(vm_page_t m, vm_ooffset_t offset) { KASSERT(IDX_TO_OFF(m->pindex) <= offset && offset < IDX_TO_OFF(m->pindex + 1), ("page %p pidx %ju offset %ju", m, (uintmax_t)m->pindex, (uintmax_t)offset)); return ((m->dirty & ((vm_page_bits_t)1 << vn_off2bidx(offset))) != 0); } /* * This is now called from local media FS's to operate against their * own vnodes if they fail to implement VOP_PUTPAGES. * * This is typically called indirectly via the pageout daemon and * clustering has already typically occurred, so in general we ask the * underlying filesystem to write the data out asynchronously rather * than delayed. */ int vnode_pager_generic_putpages(struct vnode *vp, vm_page_t *ma, int bytecount, int flags, int *rtvals) { vm_object_t object; vm_page_t m; vm_ooffset_t maxblksz, next_offset, poffset, prev_offset; struct uio auio; struct iovec aiov; off_t prev_resid, wrsz; int count, error, i, maxsize, ncount, pgoff, ppscheck; bool in_hole; static struct timeval lastfail; static int curfail; object = vp->v_object; count = bytecount / PAGE_SIZE; for (i = 0; i < count; i++) rtvals[i] = VM_PAGER_ERROR; if ((int64_t)ma[0]->pindex < 0) { printf("vnode_pager_generic_putpages: " "attempt to write meta-data 0x%jx(%lx)\n", (uintmax_t)ma[0]->pindex, (u_long)ma[0]->dirty); rtvals[0] = VM_PAGER_BAD; return (VM_PAGER_BAD); } maxsize = count * PAGE_SIZE; ncount = count; poffset = IDX_TO_OFF(ma[0]->pindex); /* * If the page-aligned write is larger than the actual file we * have to invalidate pages occurring beyond the file EOF. However, * there is an edge case where a file may not be page-aligned where * the last page is partially invalid.
In this case the filesystem * may not properly clear the dirty bits for the entire page (which * could be VM_PAGE_BITS_ALL due to the page having been mmap()d). * With the page locked we are free to fix-up the dirty bits here. * * We do not under any circumstances truncate the valid bits, as * this will screw up bogus page replacement. */ VM_OBJECT_RLOCK(object); if (maxsize + poffset > object->un_pager.vnp.vnp_size) { if (!VM_OBJECT_TRYUPGRADE(object)) { VM_OBJECT_RUNLOCK(object); VM_OBJECT_WLOCK(object); if (maxsize + poffset <= object->un_pager.vnp.vnp_size) goto downgrade; } if (object->un_pager.vnp.vnp_size > poffset) { maxsize = object->un_pager.vnp.vnp_size - poffset; ncount = btoc(maxsize); if ((pgoff = (int)maxsize & PAGE_MASK) != 0) { pgoff = roundup2(pgoff, DEV_BSIZE); /* * If the object is locked and the following * conditions hold, then the page's dirty * field cannot be concurrently changed by a * pmap operation. */ m = ma[ncount - 1]; vm_page_assert_sbusied(m); KASSERT(!pmap_page_is_write_mapped(m), ("vnode_pager_generic_putpages: page %p is not read-only", m)); MPASS(m->dirty != 0); vm_page_clear_dirty(m, pgoff, PAGE_SIZE - pgoff); } } else { maxsize = 0; ncount = 0; } for (i = ncount; i < count; i++) rtvals[i] = VM_PAGER_BAD; downgrade: VM_OBJECT_LOCK_DOWNGRADE(object); } auio.uio_iov = &aiov; auio.uio_segflg = UIO_NOCOPY; auio.uio_rw = UIO_WRITE; auio.uio_td = NULL; maxblksz = roundup2(poffset + maxsize, DEV_BSIZE); for (prev_offset = poffset; prev_offset < maxblksz;) { /* Skip clean blocks. */ for (in_hole = true; in_hole && prev_offset < maxblksz;) { m = ma[OFF_TO_IDX(prev_offset - poffset)]; for (i = vn_off2bidx(prev_offset); i < sizeof(vm_page_bits_t) * NBBY && prev_offset < maxblksz; i++) { if (vn_dirty_blk(m, prev_offset)) { in_hole = false; break; } prev_offset += DEV_BSIZE; } } if (in_hole) goto write_done; /* Find longest run of dirty blocks. */ for (next_offset = prev_offset; next_offset < maxblksz;) { m = ma[OFF_TO_IDX(next_offset - poffset)]; for (i = vn_off2bidx(next_offset); i < sizeof(vm_page_bits_t) * NBBY && next_offset < maxblksz; i++) { if (!vn_dirty_blk(m, next_offset)) goto start_write; next_offset += DEV_BSIZE; } } start_write: if (next_offset > poffset + maxsize) next_offset = poffset + maxsize; /* * Getting here requires finding a dirty block in the * 'skip clean blocks' loop. */ MPASS(prev_offset < next_offset); VM_OBJECT_RUNLOCK(object); aiov.iov_base = NULL; auio.uio_iovcnt = 1; auio.uio_offset = prev_offset; prev_resid = auio.uio_resid = aiov.iov_len = next_offset - prev_offset; error = VOP_WRITE(vp, &auio, vnode_pager_putpages_ioflags(flags), curthread->td_ucred); wrsz = prev_resid - auio.uio_resid; if (wrsz == 0) { if (ppsratecheck(&lastfail, &curfail, 1) != 0) { vn_printf(vp, "vnode_pager_putpages: " "zero-length write at %ju resid %zd\n", auio.uio_offset, auio.uio_resid); } VM_OBJECT_RLOCK(object); break; } /* Adjust the starting offset for next iteration. */ prev_offset += wrsz; MPASS(auio.uio_offset == prev_offset); ppscheck = 0; if (error != 0 && (ppscheck = ppsratecheck(&lastfail, &curfail, 1)) != 0) vn_printf(vp, "vnode_pager_putpages: I/O error %d\n", error); if (auio.uio_resid != 0 && (ppscheck != 0 || ppsratecheck(&lastfail, &curfail, 1) != 0)) vn_printf(vp, "vnode_pager_putpages: residual I/O %zd " "at %ju\n", auio.uio_resid, (uintmax_t)ma[0]->pindex); VM_OBJECT_RLOCK(object); if (error != 0 || auio.uio_resid != 0) break; } write_done: /* Mark completely processed pages. 
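	 *
	 * For example, if the write loop above pushed prev_offset three
	 * whole pages past poffset before stopping, rtvals[0] through
	 * rtvals[2] become VM_PAGER_OK here.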
*/ for (i = 0; i < OFF_TO_IDX(prev_offset - poffset); i++) rtvals[i] = VM_PAGER_OK; /* Mark partial EOF page. */ if (prev_offset == poffset + maxsize && (prev_offset & PAGE_MASK) != 0) rtvals[i++] = VM_PAGER_OK; /* Unwritten pages in range, free bonus if the page is clean. */ for (; i < ncount; i++) rtvals[i] = ma[i]->dirty == 0 ? VM_PAGER_OK : VM_PAGER_ERROR; VM_OBJECT_RUNLOCK(object); VM_CNT_ADD(v_vnodepgsout, i); VM_CNT_INC(v_vnodeout); return (rtvals[0]); } int vnode_pager_putpages_ioflags(int pager_flags) { int ioflags; /* * Pageouts are already clustered, use IO_ASYNC to force a * bawrite() rather than a bdwrite() to prevent paging I/O * from saturating the buffer cache. Dummy-up the sequential * heuristic to cause large ranges to cluster. If neither * IO_SYNC nor IO_ASYNC is set, the system decides how to * cluster. */ ioflags = IO_VMIO; if ((pager_flags & (VM_PAGER_PUT_SYNC | VM_PAGER_PUT_INVAL)) != 0) ioflags |= IO_SYNC; else if ((pager_flags & VM_PAGER_CLUSTER_OK) == 0) ioflags |= IO_ASYNC; ioflags |= (pager_flags & VM_PAGER_PUT_INVAL) != 0 ? IO_INVAL : 0; ioflags |= (pager_flags & VM_PAGER_PUT_NOREUSE) != 0 ? IO_NOREUSE : 0; ioflags |= IO_SEQMAX << IO_SEQSHIFT; return (ioflags); } /* * vnode_pager_undirty_pages(). * * A helper to mark pages as clean after a pageout that was possibly * done with a short write. The lpos argument specifies the page run * length in bytes, and the written argument specifies how many bytes * were actually written. eof is the offset past the last valid byte * in the vnode, using the absolute file position of the first byte in * the run as the base from which it is computed. */ void vnode_pager_undirty_pages(vm_page_t *ma, int *rtvals, int written, off_t eof, int lpos) { vm_object_t obj; int i, pos, pos_devb; if (written == 0 && eof >= lpos) return; obj = ma[0]->object; VM_OBJECT_WLOCK(obj); for (i = 0, pos = 0; pos < written; i++, pos += PAGE_SIZE) { if (pos < trunc_page(written)) { rtvals[i] = VM_PAGER_OK; vm_page_undirty(ma[i]); } else { /* Partially written page. */ rtvals[i] = VM_PAGER_AGAIN; vm_page_clear_dirty(ma[i], 0, written & PAGE_MASK); } } if (eof >= lpos) /* avoid truncation */ goto done; for (pos = eof, i = OFF_TO_IDX(trunc_page(pos)); pos < lpos; i++) { if (pos != trunc_page(pos)) { /* * The page contains the last valid byte in * the vnode, mark the rest of the page as * clean, potentially making the whole page * clean. */ pos_devb = roundup2(pos & PAGE_MASK, DEV_BSIZE); vm_page_clear_dirty(ma[i], pos_devb, PAGE_SIZE - pos_devb); /* * If the page was cleaned, report the pageout * on it as successful. msync() no longer * needs to write out the page, endlessly * creating write requests and dirty buffers.
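			 *
			 * For example, if eof falls at byte 1000 of this
			 * page, bytes roundup2(1000, DEV_BSIZE) == 1024
			 * through 4095 are undirtied; if that leaves the
			 * page fully clean, it is reported as VM_PAGER_OK.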
*/ if (ma[i]->dirty == 0) rtvals[i] = VM_PAGER_OK; pos = round_page(pos); } else { /* vm_pageout_flush() clears dirty */ rtvals[i] = VM_PAGER_BAD; pos += PAGE_SIZE; } } done: VM_OBJECT_WUNLOCK(obj); } void vnode_pager_update_writecount(vm_object_t object, vm_offset_t start, vm_offset_t end) { struct vnode *vp; vm_ooffset_t old_wm; VM_OBJECT_WLOCK(object); if (object->type != OBJT_VNODE) { VM_OBJECT_WUNLOCK(object); return; } old_wm = object->un_pager.vnp.writemappings; object->un_pager.vnp.writemappings += (vm_ooffset_t)end - start; vp = object->handle; if (old_wm == 0 && object->un_pager.vnp.writemappings != 0) { ASSERT_VOP_ELOCKED(vp, "v_writecount inc"); VOP_ADD_WRITECOUNT(vp, 1); CTR3(KTR_VFS, "%s: vp %p v_writecount increased to %d", __func__, vp, vp->v_writecount); } else if (old_wm != 0 && object->un_pager.vnp.writemappings == 0) { ASSERT_VOP_ELOCKED(vp, "v_writecount dec"); VOP_ADD_WRITECOUNT(vp, -1); CTR3(KTR_VFS, "%s: vp %p v_writecount decreased to %d", __func__, vp, vp->v_writecount); } VM_OBJECT_WUNLOCK(object); } void vnode_pager_release_writecount(vm_object_t object, vm_offset_t start, vm_offset_t end) { struct vnode *vp; struct mount *mp; vm_offset_t inc; VM_OBJECT_WLOCK(object); /* * First, recheck the object type to account for the race when * the vnode is reclaimed. */ if (object->type != OBJT_VNODE) { VM_OBJECT_WUNLOCK(object); return; } /* * Optimize for the case when writemappings is not going to * zero. */ inc = end - start; if (object->un_pager.vnp.writemappings != inc) { object->un_pager.vnp.writemappings -= inc; VM_OBJECT_WUNLOCK(object); return; } vp = object->handle; vhold(vp); VM_OBJECT_WUNLOCK(object); mp = NULL; vn_start_write(vp, &mp, V_WAIT); vn_lock(vp, LK_EXCLUSIVE | LK_RETRY); /* * Decrement the object's writemappings, by swapping the start * and end arguments for vnode_pager_update_writecount(). If * there was not a race with vnode reclamation, then the * vnode's v_writecount is decremented. */ vnode_pager_update_writecount(object, end, start); VOP_UNLOCK(vp, 0); vdrop(vp); if (mp != NULL) vn_finished_write(mp); } Index: projects/import-googletest-1.8.1/tools/build/mk/OptionalObsoleteFiles.inc =================================================================== --- projects/import-googletest-1.8.1/tools/build/mk/OptionalObsoleteFiles.inc (revision 344270) +++ projects/import-googletest-1.8.1/tools/build/mk/OptionalObsoleteFiles.inc (revision 344271) @@ -1,10252 +1,10779 @@ # # $FreeBSD$ # # This file adds support for the WITHOUT_* and WITH_* knobs in src.conf(5) to # the check-old and delete-old* targets.
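#
# For example (any WITHOUT_* knob behaves the same way), after building
# a world with WITHOUT_ACCT=yes in src.conf(5), running:
#
#	make check-old
#	make delete-old delete-old-libs
#
# reports and then removes the stale accounting binaries and manual
# pages listed below.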
# .if ${MK_ACCT} == no OLD_FILES+=etc/rc.d/accounting OLD_FILES+=etc/periodic/daily/310.accounting OLD_FILES+=usr/sbin/accton OLD_FILES+=usr/sbin/sa OLD_FILES+=usr/share/man/man8/accton.8.gz OLD_FILES+=usr/share/man/man8/sa.8.gz OLD_FILES+=usr/tests/usr.sbin/sa/Kyuafile OLD_FILES+=usr/tests/usr.sbin/sa/legacy_test OLD_FILES+=usr/tests/usr.sbin/sa/v1-amd64-sav.in OLD_FILES+=usr/tests/usr.sbin/sa/v1-amd64-sav.out OLD_FILES+=usr/tests/usr.sbin/sa/v1-amd64-u.out OLD_FILES+=usr/tests/usr.sbin/sa/v1-amd64-usr.in OLD_FILES+=usr/tests/usr.sbin/sa/v1-amd64-usr.out OLD_FILES+=usr/tests/usr.sbin/sa/v1-i386-sav.in OLD_FILES+=usr/tests/usr.sbin/sa/v1-i386-sav.out OLD_FILES+=usr/tests/usr.sbin/sa/v1-i386-u.out OLD_FILES+=usr/tests/usr.sbin/sa/v1-i386-usr.in OLD_FILES+=usr/tests/usr.sbin/sa/v1-i386-usr.out OLD_FILES+=usr/tests/usr.sbin/sa/v1-sparc64-sav.in OLD_FILES+=usr/tests/usr.sbin/sa/v1-sparc64-sav.out OLD_FILES+=usr/tests/usr.sbin/sa/v1-sparc64-u.out OLD_FILES+=usr/tests/usr.sbin/sa/v1-sparc64-usr.in OLD_FILES+=usr/tests/usr.sbin/sa/v1-sparc64-usr.out OLD_FILES+=usr/tests/usr.sbin/sa/v2-amd64-sav.in OLD_FILES+=usr/tests/usr.sbin/sa/v2-amd64-u.out OLD_FILES+=usr/tests/usr.sbin/sa/v2-amd64-usr.in OLD_FILES+=usr/tests/usr.sbin/sa/v2-i386-sav.in OLD_FILES+=usr/tests/usr.sbin/sa/v2-i386-u.out OLD_FILES+=usr/tests/usr.sbin/sa/v2-i386-usr.in OLD_FILES+=usr/tests/usr.sbin/sa/v2-sparc64-sav.in OLD_FILES+=usr/tests/usr.sbin/sa/v2-sparc64-u.out OLD_FILES+=usr/tests/usr.sbin/sa/v2-sparc64-usr.in OLD_DIRS+=usr/tests/usr.sbin/sa .endif .if ${MK_ACPI} == no OLD_FILES+=etc/devd/asus.conf OLD_FILES+=etc/rc.d/power_profile OLD_FILES+=usr/sbin/acpiconf OLD_FILES+=usr/sbin/acpidb OLD_FILES+=usr/sbin/acpidump OLD_FILES+=usr/sbin/iasl OLD_FILES+=usr/share/man/man8/acpiconf.8.gz OLD_FILES+=usr/share/man/man8/acpidb.8.gz OLD_FILES+=usr/share/man/man8/acpidump.8.gz OLD_FILES+=usr/share/man/man8/iasl.8.gz .endif +.if ${MK_ACPI} == no && ${MK_APM} == no +OLD_FILES+=etc/rc.d/powerd +.endif + .if ${MK_AMD} == no OLD_FILES+=etc/amd.map OLD_FILES+=etc/newsyslog.conf.d/amd.conf OLD_FILES+=etc/rc.d/amd OLD_FILES+=usr/bin/pawd OLD_FILES+=usr/sbin/amd OLD_FILES+=usr/sbin/amq OLD_FILES+=usr/sbin/fixmount OLD_FILES+=usr/sbin/fsinfo OLD_FILES+=usr/sbin/hlfsd OLD_FILES+=usr/sbin/mk-amd-map OLD_FILES+=usr/sbin/wire-test OLD_FILES+=usr/share/examples/etc/amd.map OLD_FILES+=usr/share/man/man1/pawd.1.gz OLD_FILES+=usr/share/man/man5/amd.conf.5.gz OLD_FILES+=usr/share/man/man8/amd.8.gz OLD_FILES+=usr/share/man/man8/amq.8.gz OLD_FILES+=usr/share/man/man8/fixmount.8.gz OLD_FILES+=usr/share/man/man8/fsinfo.8.gz OLD_FILES+=usr/share/man/man8/hlfsd.8.gz OLD_FILES+=usr/share/man/man8/mk-amd-map.8.gz OLD_FILES+=usr/share/man/man8/wire-test.8.gz .endif .if ${MK_APM} == no OLD_FILES+=etc/rc.d/apm OLD_FILES+=etc/rc.d/apmd OLD_FILES+=etc/apmd.conf OLD_FILES+=usr/sbin/apm OLD_FILES+=usr/share/examples/etc/apmd.conf OLD_FILES+=usr/share/man/man8/amd64/apm.8.gz OLD_FILES+=usr/share/man/man8/amd64/apmconf.8.gz .endif .if ${MK_AT} == no OLD_FILES+=etc/pam.d/atrun OLD_FILES+=usr/bin/at OLD_FILES+=usr/bin/atq OLD_FILES+=usr/bin/atrm OLD_FILES+=usr/bin/batch OLD_FILES+=usr/libexec/atrun OLD_FILES+=usr/share/man/man1/at.1.gz OLD_FILES+=usr/share/man/man1/atq.1.gz OLD_FILES+=usr/share/man/man1/atrm.1.gz OLD_FILES+=usr/share/man/man1/batch.1.gz OLD_FILES+=usr/share/man/man8/atrun.8.gz .endif .if ${MK_ATM} == no OLD_FILES+=usr/bin/sscop OLD_FILES+=usr/include/netnatm/addr.h OLD_FILES+=usr/include/netnatm/api/atmapi.h OLD_FILES+=usr/include/netnatm/api/ccatm.h 
OLD_FILES+=usr/include/netnatm/api/unisap.h OLD_DIRS+=usr/include/netnatm/api OLD_FILES+=usr/include/netnatm/msg/uni_config.h OLD_FILES+=usr/include/netnatm/msg/uni_hdr.h OLD_FILES+=usr/include/netnatm/msg/uni_ie.h OLD_FILES+=usr/include/netnatm/msg/uni_msg.h OLD_FILES+=usr/include/netnatm/msg/unimsglib.h OLD_FILES+=usr/include/netnatm/msg/uniprint.h OLD_FILES+=usr/include/netnatm/msg/unistruct.h OLD_DIRS+=usr/include/netnatm/msg OLD_FILES+=usr/include/netnatm/saal/sscfu.h OLD_FILES+=usr/include/netnatm/saal/sscfudef.h OLD_FILES+=usr/include/netnatm/saal/sscop.h OLD_FILES+=usr/include/netnatm/saal/sscopdef.h OLD_DIRS+=usr/include/netnatm/saal OLD_FILES+=usr/include/netnatm/sig/uni.h OLD_FILES+=usr/include/netnatm/sig/unidef.h OLD_FILES+=usr/include/netnatm/sig/unisig.h OLD_DIRS+=usr/include/netnatm/sig OLD_FILES+=usr/include/netnatm/unimsg.h OLD_FILES+=usr/lib/libngatm.a OLD_FILES+=usr/lib/libngatm.so OLD_LIBS+=usr/lib/libngatm.so.4 OLD_FILES+=usr/lib/libngatm_p.a .if ${TARGET_ARCH} == "amd64" || ${TARGET_ARCH} == "powerpc64" OLD_FILES+=usr/lib32/libngatm.a OLD_FILES+=usr/lib32/libngatm.so OLD_LIBS+=usr/lib32/libngatm.so.4 OLD_FILES+=usr/lib32/libngatm_p.a .endif OLD_FILES+=usr/share/man/man1/sscop.1.gz OLD_FILES+=usr/share/man/man3/libngatm.3.gz OLD_FILES+=usr/share/man/man3/uniaddr.3.gz OLD_FILES+=usr/share/man/man3/unifunc.3.gz OLD_FILES+=usr/share/man/man3/unimsg.3.gz OLD_FILES+=usr/share/man/man3/unisap.3.gz OLD_FILES+=usr/share/man/man3/unistruct.3.gz .endif .if ${MK_AUDIT} == no OLD_FILES+=etc/rc.d/auditd OLD_FILES+=etc/rc.d/auditdistd OLD_FILES+=usr/sbin/audit OLD_FILES+=usr/sbin/auditd OLD_FILES+=usr/sbin/auditdistd OLD_FILES+=usr/sbin/auditreduce OLD_FILES+=usr/sbin/praudit OLD_FILES+=usr/share/man/man1/auditreduce.1.gz OLD_FILES+=usr/share/man/man1/praudit.1.gz OLD_FILES+=usr/share/man/man5/auditdistd.conf.5.gz OLD_FILES+=usr/share/man/man8/audit.8.gz OLD_FILES+=usr/share/man/man8/auditd.8.gz OLD_FILES+=usr/share/man/man8/auditdistd.8.gz OLD_FILES+=usr/tests/sys/audit/process-control OLD_FILES+=usr/tests/sys/audit/open OLD_FILES+=usr/tests/sys/audit/network OLD_FILES+=usr/tests/sys/audit/miscellaneous OLD_FILES+=usr/tests/sys/audit/Kyuafile OLD_FILES+=usr/tests/sys/audit/ioctl OLD_FILES+=usr/tests/sys/audit/inter-process OLD_FILES+=usr/tests/sys/audit/file-write OLD_FILES+=usr/tests/sys/audit/file-read OLD_FILES+=usr/tests/sys/audit/file-delete OLD_FILES+=usr/tests/sys/audit/file-create OLD_FILES+=usr/tests/sys/audit/file-close OLD_FILES+=usr/tests/sys/audit/file-attribute-modify OLD_FILES+=usr/tests/sys/audit/file-attribute-access OLD_FILES+=usr/tests/sys/audit/administrative OLD_DIRS+=usr/tests/sys/audit .endif .if ${MK_AUTHPF} == no OLD_FILES+=usr/sbin/authpf OLD_FILES+=usr/sbin/authpf-noip OLD_FILES+=usr/share/man/man8/authpf.8.gz OLD_FILES+=usr/share/man/man8/authpf-noip.8.gz .endif .if ${MK_AUTOFS} == no OLD_FILES+=etc/autofs/include_ldap OLD_FILES+=etc/autofs/special_hosts OLD_FILES+=etc/autofs/special_media OLD_FILES+=etc/autofs/special_noauto OLD_FILES+=etc/autofs/special_null OLD_FILES+=etc/auto_master OLD_FILES+=etc/rc.d/automount OLD_FILES+=etc/rc.d/automountd OLD_FILES+=etc/rc.d/autounmountd OLD_FILES+=usr/sbin/automount OLD_FILES+=usr/sbin/automountd OLD_FILES+=usr/sbin/autounmountd OLD_FILES+=usr/share/man/man5/autofs.5.gz OLD_FILES+=usr/share/man/man5/auto_master.5.gz OLD_FILES+=usr/share/man/man8/automount.8.gz OLD_FILES+=usr/share/man/man8/automountd.8.gz OLD_FILES+=usr/share/man/man8/autounmountd.8.gz OLD_DIRS+=etc/autofs .endif .if ${MK_BHYVE} == no 
OLD_FILES+=usr/lib/libvmmapi.a OLD_FILES+=usr/lib/libvmmapi.so OLD_LIBS+=usr/lib/libvmmapi.so.5 OLD_FILES+=usr/include/vmmapi.h OLD_FILES+=usr/sbin/bhyve OLD_FILES+=usr/sbin/bhyvectl OLD_FILES+=usr/sbin/bhyveload OLD_FILES+=usr/share/examples/bhyve/vmrun.sh OLD_FILES+=usr/share/man/man8/bhyve.8.gz OLD_FILES+=usr/share/man/man8/bhyveload.8.gz OLD_DIRS+=usr/share/examples/bhyve .endif .if ${MK_BINUTILS} == no OLD_FILES+=usr/bin/as .if ${MK_LLD_IS_LD} == no OLD_FILES+=usr/bin/ld OLD_FILES+=usr/share/man/man1/ld.1.gz .endif OLD_FILES+=usr/bin/objdump OLD_FILES+=usr/libdata/ldscripts/armelf_fbsd.x OLD_FILES+=usr/libdata/ldscripts/armelf_fbsd.xbn OLD_FILES+=usr/libdata/ldscripts/armelf_fbsd.xc OLD_FILES+=usr/libdata/ldscripts/armelf_fbsd.xd OLD_FILES+=usr/libdata/ldscripts/armelf_fbsd.xdc OLD_FILES+=usr/libdata/ldscripts/armelf_fbsd.xdw OLD_FILES+=usr/libdata/ldscripts/armelf_fbsd.xn OLD_FILES+=usr/libdata/ldscripts/armelf_fbsd.xr OLD_FILES+=usr/libdata/ldscripts/armelf_fbsd.xs OLD_FILES+=usr/libdata/ldscripts/armelf_fbsd.xsc OLD_FILES+=usr/libdata/ldscripts/armelf_fbsd.xsw OLD_FILES+=usr/libdata/ldscripts/armelf_fbsd.xu OLD_FILES+=usr/libdata/ldscripts/armelf_fbsd.xw OLD_FILES+=usr/libdata/ldscripts/armelfb_fbsd.x OLD_FILES+=usr/libdata/ldscripts/armelfb_fbsd.xbn OLD_FILES+=usr/libdata/ldscripts/armelfb_fbsd.xc OLD_FILES+=usr/libdata/ldscripts/armelfb_fbsd.xd OLD_FILES+=usr/libdata/ldscripts/armelfb_fbsd.xdc OLD_FILES+=usr/libdata/ldscripts/armelfb_fbsd.xdw OLD_FILES+=usr/libdata/ldscripts/armelfb_fbsd.xn OLD_FILES+=usr/libdata/ldscripts/armelfb_fbsd.xr OLD_FILES+=usr/libdata/ldscripts/armelfb_fbsd.xs OLD_FILES+=usr/libdata/ldscripts/armelfb_fbsd.xsc OLD_FILES+=usr/libdata/ldscripts/armelfb_fbsd.xsw OLD_FILES+=usr/libdata/ldscripts/armelfb_fbsd.xu OLD_FILES+=usr/libdata/ldscripts/armelfb_fbsd.xw OLD_FILES+=usr/libdata/ldscripts/elf32_sparc.x OLD_FILES+=usr/libdata/ldscripts/elf32_sparc.xbn OLD_FILES+=usr/libdata/ldscripts/elf32_sparc.xc OLD_FILES+=usr/libdata/ldscripts/elf32_sparc.xd OLD_FILES+=usr/libdata/ldscripts/elf32_sparc.xdc OLD_FILES+=usr/libdata/ldscripts/elf32_sparc.xdw OLD_FILES+=usr/libdata/ldscripts/elf32_sparc.xn OLD_FILES+=usr/libdata/ldscripts/elf32_sparc.xr OLD_FILES+=usr/libdata/ldscripts/elf32_sparc.xs OLD_FILES+=usr/libdata/ldscripts/elf32_sparc.xsc OLD_FILES+=usr/libdata/ldscripts/elf32_sparc.xsw OLD_FILES+=usr/libdata/ldscripts/elf32_sparc.xu OLD_FILES+=usr/libdata/ldscripts/elf32_sparc.xw OLD_FILES+=usr/libdata/ldscripts/elf32btsmip_fbsd.x OLD_FILES+=usr/libdata/ldscripts/elf32btsmip_fbsd.xbn OLD_FILES+=usr/libdata/ldscripts/elf32btsmip_fbsd.xc OLD_FILES+=usr/libdata/ldscripts/elf32btsmip_fbsd.xd OLD_FILES+=usr/libdata/ldscripts/elf32btsmip_fbsd.xdc OLD_FILES+=usr/libdata/ldscripts/elf32btsmip_fbsd.xdw OLD_FILES+=usr/libdata/ldscripts/elf32btsmip_fbsd.xn OLD_FILES+=usr/libdata/ldscripts/elf32btsmip_fbsd.xr OLD_FILES+=usr/libdata/ldscripts/elf32btsmip_fbsd.xs OLD_FILES+=usr/libdata/ldscripts/elf32btsmip_fbsd.xsc OLD_FILES+=usr/libdata/ldscripts/elf32btsmip_fbsd.xsw OLD_FILES+=usr/libdata/ldscripts/elf32btsmip_fbsd.xu OLD_FILES+=usr/libdata/ldscripts/elf32btsmip_fbsd.xw OLD_FILES+=usr/libdata/ldscripts/elf32btsmipn32_fbsd.x OLD_FILES+=usr/libdata/ldscripts/elf32btsmipn32_fbsd.xbn OLD_FILES+=usr/libdata/ldscripts/elf32btsmipn32_fbsd.xc OLD_FILES+=usr/libdata/ldscripts/elf32btsmipn32_fbsd.xd OLD_FILES+=usr/libdata/ldscripts/elf32btsmipn32_fbsd.xdc OLD_FILES+=usr/libdata/ldscripts/elf32btsmipn32_fbsd.xdw OLD_FILES+=usr/libdata/ldscripts/elf32btsmipn32_fbsd.xn 
OLD_FILES+=usr/libdata/ldscripts/elf32btsmipn32_fbsd.xr OLD_FILES+=usr/libdata/ldscripts/elf32btsmipn32_fbsd.xs OLD_FILES+=usr/libdata/ldscripts/elf32btsmipn32_fbsd.xsc OLD_FILES+=usr/libdata/ldscripts/elf32btsmipn32_fbsd.xsw OLD_FILES+=usr/libdata/ldscripts/elf32btsmipn32_fbsd.xu OLD_FILES+=usr/libdata/ldscripts/elf32btsmipn32_fbsd.xw OLD_FILES+=usr/libdata/ldscripts/elf32ltsmip_fbsd.x OLD_FILES+=usr/libdata/ldscripts/elf32ltsmip_fbsd.xbn OLD_FILES+=usr/libdata/ldscripts/elf32ltsmip_fbsd.xc OLD_FILES+=usr/libdata/ldscripts/elf32ltsmip_fbsd.xd OLD_FILES+=usr/libdata/ldscripts/elf32ltsmip_fbsd.xdc OLD_FILES+=usr/libdata/ldscripts/elf32ltsmip_fbsd.xdw OLD_FILES+=usr/libdata/ldscripts/elf32ltsmip_fbsd.xn OLD_FILES+=usr/libdata/ldscripts/elf32ltsmip_fbsd.xr OLD_FILES+=usr/libdata/ldscripts/elf32ltsmip_fbsd.xs OLD_FILES+=usr/libdata/ldscripts/elf32ltsmip_fbsd.xsc OLD_FILES+=usr/libdata/ldscripts/elf32ltsmip_fbsd.xsw OLD_FILES+=usr/libdata/ldscripts/elf32ltsmip_fbsd.xu OLD_FILES+=usr/libdata/ldscripts/elf32ltsmip_fbsd.xw OLD_FILES+=usr/libdata/ldscripts/elf32ltsmipn32_fbsd.x OLD_FILES+=usr/libdata/ldscripts/elf32ltsmipn32_fbsd.xbn OLD_FILES+=usr/libdata/ldscripts/elf32ltsmipn32_fbsd.xc OLD_FILES+=usr/libdata/ldscripts/elf32ltsmipn32_fbsd.xd OLD_FILES+=usr/libdata/ldscripts/elf32ltsmipn32_fbsd.xdc OLD_FILES+=usr/libdata/ldscripts/elf32ltsmipn32_fbsd.xdw OLD_FILES+=usr/libdata/ldscripts/elf32ltsmipn32_fbsd.xn OLD_FILES+=usr/libdata/ldscripts/elf32ltsmipn32_fbsd.xr OLD_FILES+=usr/libdata/ldscripts/elf32ltsmipn32_fbsd.xs OLD_FILES+=usr/libdata/ldscripts/elf32ltsmipn32_fbsd.xsc OLD_FILES+=usr/libdata/ldscripts/elf32ltsmipn32_fbsd.xsw OLD_FILES+=usr/libdata/ldscripts/elf32ltsmipn32_fbsd.xu OLD_FILES+=usr/libdata/ldscripts/elf32ltsmipn32_fbsd.xw OLD_FILES+=usr/libdata/ldscripts/elf32ppc_fbsd.x OLD_FILES+=usr/libdata/ldscripts/elf32ppc_fbsd.xbn OLD_FILES+=usr/libdata/ldscripts/elf32ppc_fbsd.xc OLD_FILES+=usr/libdata/ldscripts/elf32ppc_fbsd.xd OLD_FILES+=usr/libdata/ldscripts/elf32ppc_fbsd.xdc OLD_FILES+=usr/libdata/ldscripts/elf32ppc_fbsd.xdw OLD_FILES+=usr/libdata/ldscripts/elf32ppc_fbsd.xn OLD_FILES+=usr/libdata/ldscripts/elf32ppc_fbsd.xr OLD_FILES+=usr/libdata/ldscripts/elf32ppc_fbsd.xs OLD_FILES+=usr/libdata/ldscripts/elf32ppc_fbsd.xsc OLD_FILES+=usr/libdata/ldscripts/elf32ppc_fbsd.xsw OLD_FILES+=usr/libdata/ldscripts/elf32ppc_fbsd.xu OLD_FILES+=usr/libdata/ldscripts/elf32ppc_fbsd.xw OLD_FILES+=usr/libdata/ldscripts/elf64_sparc.x OLD_FILES+=usr/libdata/ldscripts/elf64_sparc.xbn OLD_FILES+=usr/libdata/ldscripts/elf64_sparc.xc OLD_FILES+=usr/libdata/ldscripts/elf64_sparc.xd OLD_FILES+=usr/libdata/ldscripts/elf64_sparc.xdc OLD_FILES+=usr/libdata/ldscripts/elf64_sparc.xdw OLD_FILES+=usr/libdata/ldscripts/elf64_sparc.xn OLD_FILES+=usr/libdata/ldscripts/elf64_sparc.xr OLD_FILES+=usr/libdata/ldscripts/elf64_sparc.xs OLD_FILES+=usr/libdata/ldscripts/elf64_sparc.xsc OLD_FILES+=usr/libdata/ldscripts/elf64_sparc.xsw OLD_FILES+=usr/libdata/ldscripts/elf64_sparc.xu OLD_FILES+=usr/libdata/ldscripts/elf64_sparc.xw OLD_FILES+=usr/libdata/ldscripts/elf64_sparc_fbsd.x OLD_FILES+=usr/libdata/ldscripts/elf64_sparc_fbsd.xbn OLD_FILES+=usr/libdata/ldscripts/elf64_sparc_fbsd.xc OLD_FILES+=usr/libdata/ldscripts/elf64_sparc_fbsd.xd OLD_FILES+=usr/libdata/ldscripts/elf64_sparc_fbsd.xdc OLD_FILES+=usr/libdata/ldscripts/elf64_sparc_fbsd.xdw OLD_FILES+=usr/libdata/ldscripts/elf64_sparc_fbsd.xn OLD_FILES+=usr/libdata/ldscripts/elf64_sparc_fbsd.xr OLD_FILES+=usr/libdata/ldscripts/elf64_sparc_fbsd.xs 
OLD_FILES+=usr/libdata/ldscripts/elf64_sparc_fbsd.xsc OLD_FILES+=usr/libdata/ldscripts/elf64_sparc_fbsd.xsw OLD_FILES+=usr/libdata/ldscripts/elf64_sparc_fbsd.xu OLD_FILES+=usr/libdata/ldscripts/elf64_sparc_fbsd.xw OLD_FILES+=usr/libdata/ldscripts/elf64btsmip_fbsd.x OLD_FILES+=usr/libdata/ldscripts/elf64btsmip_fbsd.xbn OLD_FILES+=usr/libdata/ldscripts/elf64btsmip_fbsd.xc OLD_FILES+=usr/libdata/ldscripts/elf64btsmip_fbsd.xd OLD_FILES+=usr/libdata/ldscripts/elf64btsmip_fbsd.xdc OLD_FILES+=usr/libdata/ldscripts/elf64btsmip_fbsd.xdw OLD_FILES+=usr/libdata/ldscripts/elf64btsmip_fbsd.xn OLD_FILES+=usr/libdata/ldscripts/elf64btsmip_fbsd.xr OLD_FILES+=usr/libdata/ldscripts/elf64btsmip_fbsd.xs OLD_FILES+=usr/libdata/ldscripts/elf64btsmip_fbsd.xsc OLD_FILES+=usr/libdata/ldscripts/elf64btsmip_fbsd.xsw OLD_FILES+=usr/libdata/ldscripts/elf64btsmip_fbsd.xu OLD_FILES+=usr/libdata/ldscripts/elf64btsmip_fbsd.xw OLD_FILES+=usr/libdata/ldscripts/elf64ltsmip_fbsd.x OLD_FILES+=usr/libdata/ldscripts/elf64ltsmip_fbsd.xbn OLD_FILES+=usr/libdata/ldscripts/elf64ltsmip_fbsd.xc OLD_FILES+=usr/libdata/ldscripts/elf64ltsmip_fbsd.xd OLD_FILES+=usr/libdata/ldscripts/elf64ltsmip_fbsd.xdc OLD_FILES+=usr/libdata/ldscripts/elf64ltsmip_fbsd.xdw OLD_FILES+=usr/libdata/ldscripts/elf64ltsmip_fbsd.xn OLD_FILES+=usr/libdata/ldscripts/elf64ltsmip_fbsd.xr OLD_FILES+=usr/libdata/ldscripts/elf64ltsmip_fbsd.xs OLD_FILES+=usr/libdata/ldscripts/elf64ltsmip_fbsd.xsc OLD_FILES+=usr/libdata/ldscripts/elf64ltsmip_fbsd.xsw OLD_FILES+=usr/libdata/ldscripts/elf64ltsmip_fbsd.xu OLD_FILES+=usr/libdata/ldscripts/elf64ltsmip_fbsd.xw OLD_FILES+=usr/libdata/ldscripts/elf64ppc_fbsd.x OLD_FILES+=usr/libdata/ldscripts/elf64ppc_fbsd.xbn OLD_FILES+=usr/libdata/ldscripts/elf64ppc_fbsd.xc OLD_FILES+=usr/libdata/ldscripts/elf64ppc_fbsd.xd OLD_FILES+=usr/libdata/ldscripts/elf64ppc_fbsd.xdc OLD_FILES+=usr/libdata/ldscripts/elf64ppc_fbsd.xdw OLD_FILES+=usr/libdata/ldscripts/elf64ppc_fbsd.xn OLD_FILES+=usr/libdata/ldscripts/elf64ppc_fbsd.xr OLD_FILES+=usr/libdata/ldscripts/elf64ppc_fbsd.xs OLD_FILES+=usr/libdata/ldscripts/elf64ppc_fbsd.xsc OLD_FILES+=usr/libdata/ldscripts/elf64ppc_fbsd.xsw OLD_FILES+=usr/libdata/ldscripts/elf64ppc_fbsd.xu OLD_FILES+=usr/libdata/ldscripts/elf64ppc_fbsd.xw OLD_FILES+=usr/libdata/ldscripts/elf_i386_fbsd.x OLD_FILES+=usr/libdata/ldscripts/elf_i386_fbsd.xbn OLD_FILES+=usr/libdata/ldscripts/elf_i386_fbsd.xc OLD_FILES+=usr/libdata/ldscripts/elf_i386_fbsd.xd OLD_FILES+=usr/libdata/ldscripts/elf_i386_fbsd.xdc OLD_FILES+=usr/libdata/ldscripts/elf_i386_fbsd.xdw OLD_FILES+=usr/libdata/ldscripts/elf_i386_fbsd.xn OLD_FILES+=usr/libdata/ldscripts/elf_i386_fbsd.xr OLD_FILES+=usr/libdata/ldscripts/elf_i386_fbsd.xs OLD_FILES+=usr/libdata/ldscripts/elf_i386_fbsd.xsc OLD_FILES+=usr/libdata/ldscripts/elf_i386_fbsd.xsw OLD_FILES+=usr/libdata/ldscripts/elf_i386_fbsd.xu OLD_FILES+=usr/libdata/ldscripts/elf_i386_fbsd.xw OLD_FILES+=usr/libdata/ldscripts/elf_x86_64_fbsd.x OLD_FILES+=usr/libdata/ldscripts/elf_x86_64_fbsd.xbn OLD_FILES+=usr/libdata/ldscripts/elf_x86_64_fbsd.xc OLD_FILES+=usr/libdata/ldscripts/elf_x86_64_fbsd.xd OLD_FILES+=usr/libdata/ldscripts/elf_x86_64_fbsd.xdc OLD_FILES+=usr/libdata/ldscripts/elf_x86_64_fbsd.xdw OLD_FILES+=usr/libdata/ldscripts/elf_x86_64_fbsd.xn OLD_FILES+=usr/libdata/ldscripts/elf_x86_64_fbsd.xr OLD_FILES+=usr/libdata/ldscripts/elf_x86_64_fbsd.xs OLD_FILES+=usr/libdata/ldscripts/elf_x86_64_fbsd.xsc OLD_FILES+=usr/libdata/ldscripts/elf_x86_64_fbsd.xsw OLD_FILES+=usr/libdata/ldscripts/elf_x86_64_fbsd.xu 
OLD_FILES+=usr/libdata/ldscripts/elf_x86_64_fbsd.xw OLD_FILES+=usr/share/man/man1/as.1.gz OLD_FILES+=usr/share/man/man1/objdump.1.gz OLD_FILES+=usr/share/man/man7/as.7.gz OLD_FILES+=usr/share/man/man7/ld.7.gz OLD_FILES+=usr/share/man/man7/ldint.7.gz OLD_FILES+=usr/share/man/man7/binutils.7.gz .endif .if ${MK_BINUTILS} == no || ${MK_LLD_IS_LD} == yes OLD_FILES+=usr/bin/ld.bfd .endif .if ${MK_BLACKLIST} == no +OLD_FILES+=etc/blacklistd.conf OLD_FILES+=etc/rc.d/blacklistd OLD_FILES+=usr/include/blacklist.h OLD_FILES+=usr/lib/libblacklist.a OLD_FILES+=usr/lib/libblacklist_p.a OLD_FILES+=usr/lib/libblacklist.so OLD_LIBS+=usr/lib/libblacklist.so.0 OLD_FILES+=usr/libexec/blacklistd-helper OLD_FILES+=usr/sbin/blacklistctl OLD_FILES+=usr/sbin/blacklistd OLD_FILES+=usr/share/man/man3/blacklist.3.gz OLD_FILES+=usr/share/man/man3/blacklist_close.3.gz OLD_FILES+=usr/share/man/man3/blacklist_open.3.gz OLD_FILES+=usr/share/man/man3/blacklist_r.3.gz OLD_FILES+=usr/share/man/man3/blacklist_sa.3.gz OLD_FILES+=usr/share/man/man3/blacklist_sa_r.3.gz OLD_FILES+=usr/share/man/man5/blacklistd.conf.5.gz OLD_FILES+=usr/share/man/man8/blacklistctl.8.gz OLD_FILES+=usr/share/man/man8/blacklistd.8.gz .endif .if ${MK_BLUETOOTH} == no OLD_FILES+=etc/bluetooth/hcsecd.conf OLD_FILES+=etc/bluetooth/hosts OLD_FILES+=etc/bluetooth/protocols OLD_FILES+=etc/defaults/bluetooth.device.conf OLD_DIRS+=etc/bluetooth OLD_FILES+=etc/rc.d/bluetooth OLD_FILES+=etc/rc.d/bthidd OLD_FILES+=etc/rc.d/hcsecd OLD_FILES+=etc/rc.d/rfcomm_pppd_server OLD_FILES+=etc/rc.d/sdpd OLD_FILES+=etc/rc.d/ubthidhci OLD_FILES+=usr/bin/bthost OLD_FILES+=usr/bin/btsockstat OLD_FILES+=usr/bin/rfcomm_sppd OLD_FILES+=usr/include/bluetooth.h OLD_FILES+=usr/include/netgraph/bluetooth/include/ng_bluetooth.h OLD_FILES+=usr/include/netgraph/bluetooth/include/ng_bt3c.h OLD_FILES+=usr/include/netgraph/bluetooth/include/ng_btsocket.h OLD_FILES+=usr/include/netgraph/bluetooth/include/ng_btsocket_hci_raw.h OLD_FILES+=usr/include/netgraph/bluetooth/include/ng_btsocket_l2cap.h OLD_FILES+=usr/include/netgraph/bluetooth/include/ng_btsocket_rfcomm.h OLD_FILES+=usr/include/netgraph/bluetooth/include/ng_btsocket_sco.h OLD_FILES+=usr/include/netgraph/bluetooth/include/ng_h4.h OLD_FILES+=usr/include/netgraph/bluetooth/include/ng_hci.h OLD_FILES+=usr/include/netgraph/bluetooth/include/ng_l2cap.h OLD_FILES+=usr/include/netgraph/bluetooth/include/ng_ubt.h OLD_DIRS+=usr/include/netgraph/bluetooth/include OLD_DIRS+=usr/include/netgraph/bluetooth OLD_FILES+=usr/include/sdp.h OLD_FILES+=usr/lib/libbluetooth.a OLD_FILES+=usr/lib/libbluetooth.so OLD_LIBS+=usr/lib/libbluetooth.so.4 OLD_FILES+=usr/lib/libbluetooth_p.a OLD_FILES+=usr/lib/libsdp.a OLD_FILES+=usr/lib/libsdp.so OLD_LIBS+=usr/lib/libsdp.so.4 OLD_FILES+=usr/lib/libsdp_p.a .if ${TARGET_ARCH} == "amd64" || ${TARGET_ARCH} == "powerpc64" OLD_FILES+=usr/lib32/libbluetooth.a OLD_FILES+=usr/lib32/libbluetooth.so OLD_LIBS+=usr/lib32/libbluetooth.so.4 OLD_FILES+=usr/lib32/libbluetooth_p.a OLD_FILES+=usr/lib32/libsdp.a OLD_FILES+=usr/lib32/libsdp.so OLD_LIBS+=usr/lib32/libsdp.so.4 OLD_FILES+=usr/lib32/libsdp_p.a .endif OLD_FILES+=usr/sbin/ath3kfw OLD_FILES+=usr/sbin/bcmfw OLD_FILES+=usr/sbin/bluetooth-config OLD_FILES+=usr/sbin/bt3cfw OLD_FILES+=usr/sbin/bthidcontrol OLD_FILES+=usr/sbin/bthidd OLD_FILES+=usr/sbin/btpand OLD_FILES+=usr/sbin/hccontrol OLD_FILES+=usr/sbin/hcsecd OLD_FILES+=usr/sbin/hcseriald OLD_FILES+=usr/sbin/l2control OLD_FILES+=usr/sbin/l2ping OLD_FILES+=usr/sbin/rfcomm_pppd OLD_FILES+=usr/sbin/sdpcontrol 
OLD_FILES+=usr/sbin/sdpd OLD_FILES+=usr/share/examples/etc/defaults/bluetooth.device.conf OLD_FILES+=usr/share/man/man1/bthost.1.gz OLD_FILES+=usr/share/man/man1/btsockstat.1.gz OLD_FILES+=usr/share/man/man1/rfcomm_sppd.1.gz OLD_FILES+=usr/share/man/man3/SDP_GET128.3.gz OLD_FILES+=usr/share/man/man3/SDP_GET16.3.gz OLD_FILES+=usr/share/man/man3/SDP_GET32.3.gz OLD_FILES+=usr/share/man/man3/SDP_GET64.3.gz OLD_FILES+=usr/share/man/man3/SDP_GET8.3.gz OLD_FILES+=usr/share/man/man3/SDP_PUT128.3.gz OLD_FILES+=usr/share/man/man3/SDP_PUT16.3.gz OLD_FILES+=usr/share/man/man3/SDP_PUT32.3.gz OLD_FILES+=usr/share/man/man3/SDP_PUT64.3.gz OLD_FILES+=usr/share/man/man3/SDP_PUT8.3.gz OLD_FILES+=usr/share/man/man3/bdaddr_any.3.gz OLD_FILES+=usr/share/man/man3/bdaddr_copy.3.gz OLD_FILES+=usr/share/man/man3/bdaddr_same.3.gz OLD_FILES+=usr/share/man/man3/bluetooth.3.gz OLD_FILES+=usr/share/man/man3/bt_aton.3.gz OLD_FILES+=usr/share/man/man3/bt_devaddr.3.gz OLD_FILES+=usr/share/man/man3/bt_devclose.3.gz OLD_FILES+=usr/share/man/man3/bt_devenum.3.gz OLD_FILES+=usr/share/man/man3/bt_devfilter.3.gz OLD_FILES+=usr/share/man/man3/bt_devfilter_evt_clr.3.gz OLD_FILES+=usr/share/man/man3/bt_devfilter_evt_set.3.gz OLD_FILES+=usr/share/man/man3/bt_devfilter_evt_tst.3.gz OLD_FILES+=usr/share/man/man3/bt_devfilter_pkt_clr.3.gz OLD_FILES+=usr/share/man/man3/bt_devfilter_pkt_set.3.gz OLD_FILES+=usr/share/man/man3/bt_devfilter_pkt_tst.3.gz OLD_FILES+=usr/share/man/man3/bt_devinfo.3.gz OLD_FILES+=usr/share/man/man3/bt_devinquiry.3.gz OLD_FILES+=usr/share/man/man3/bt_devname.3.gz OLD_FILES+=usr/share/man/man3/bt_devopen.3.gz OLD_FILES+=usr/share/man/man3/bt_devreq.3.gz OLD_FILES+=usr/share/man/man3/bt_devsend.3.gz OLD_FILES+=usr/share/man/man3/bt_endhostent.3.gz OLD_FILES+=usr/share/man/man3/bt_endprotoent.3.gz OLD_FILES+=usr/share/man/man3/bt_gethostbyaddr.3.gz OLD_FILES+=usr/share/man/man3/bt_gethostbyname.3.gz OLD_FILES+=usr/share/man/man3/bt_gethostent.3.gz OLD_FILES+=usr/share/man/man3/bt_getprotobyname.3.gz OLD_FILES+=usr/share/man/man3/bt_getprotobynumber.3.gz OLD_FILES+=usr/share/man/man3/bt_getprotoent.3.gz OLD_FILES+=usr/share/man/man3/bt_ntoa.3.gz OLD_FILES+=usr/share/man/man3/bt_sethostent.3.gz OLD_FILES+=usr/share/man/man3/bt_setprotoent.3.gz OLD_FILES+=usr/share/man/man3/sdp.3.gz OLD_FILES+=usr/share/man/man3/sdp_attr2desc.3.gz OLD_FILES+=usr/share/man/man3/sdp_change_service.3.gz OLD_FILES+=usr/share/man/man3/sdp_close.3.gz OLD_FILES+=usr/share/man/man3/sdp_error.3.gz OLD_FILES+=usr/share/man/man3/sdp_open.3.gz OLD_FILES+=usr/share/man/man3/sdp_open_local.3.gz OLD_FILES+=usr/share/man/man3/sdp_register_service.3.gz OLD_FILES+=usr/share/man/man3/sdp_search.3.gz OLD_FILES+=usr/share/man/man3/sdp_unregister_service.3.gz OLD_FILES+=usr/share/man/man3/sdp_uuid2desc.3.gz OLD_FILES+=usr/share/man/man4/ng_bluetooth.4.gz OLD_FILES+=usr/share/man/man5/bluetooth.device.conf.5.gz OLD_FILES+=usr/share/man/man5/bluetooth.hosts.5.gz OLD_FILES+=usr/share/man/man5/bluetooth.protocols.5.gz OLD_FILES+=usr/share/man/man5/hcsecd.conf.5.gz OLD_FILES+=usr/share/man/man8/ath3kfw.8.gz OLD_FILES+=usr/share/man/man8/bcmfw.8.gz OLD_FILES+=usr/share/man/man8/bluetooth-config.8.gz OLD_FILES+=usr/share/man/man8/bt3cfw.8.gz OLD_FILES+=usr/share/man/man8/bthidcontrol.8.gz OLD_FILES+=usr/share/man/man8/bthidd.8.gz OLD_FILES+=usr/share/man/man8/btpand.8.gz OLD_FILES+=usr/share/man/man8/hccontrol.8.gz OLD_FILES+=usr/share/man/man8/hcsecd.8.gz OLD_FILES+=usr/share/man/man8/hcseriald.8.gz OLD_FILES+=usr/share/man/man8/l2control.8.gz 
OLD_FILES+=usr/share/man/man8/l2ping.8.gz OLD_FILES+=usr/share/man/man8/rfcomm_pppd.8.gz OLD_FILES+=usr/share/man/man8/sdpcontrol.8.gz OLD_FILES+=usr/share/man/man8/sdpd.8.gz .endif .if ${MK_BOOT} == no OLD_FILES+=boot/beastie.4th OLD_FILES+=boot/boot OLD_FILES+=boot/boot0 OLD_FILES+=boot/boot0sio OLD_FILES+=boot/boot1 OLD_FILES+=boot/boot1.efi OLD_FILES+=boot/boot1.efifat OLD_FILES+=boot/boot2 OLD_FILES+=boot/brand.4th OLD_FILES+=boot/cdboot OLD_FILES+=boot/check-password.4th OLD_FILES+=boot/color.4th OLD_FILES+=boot/defaults/loader.conf OLD_FILES+=boot/delay.4th OLD_FILES+=boot/device.hints OLD_FILES+=boot/frames.4th OLD_FILES+=boot/gptboot OLD_FILES+=boot/gptzfsboot OLD_FILES+=boot/loader OLD_FILES+=boot/loader.4th OLD_FILES+=boot/loader.efi OLD_FILES+=boot/loader.help OLD_FILES+=boot/loader.rc OLD_FILES+=boot/mbr OLD_FILES+=boot/menu-commands.4th OLD_FILES+=boot/menu.4th OLD_FILES+=boot/menu.rc OLD_FILES+=boot/menusets.4th OLD_FILES+=boot/pcibios.4th OLD_FILES+=boot/pmbr OLD_FILES+=boot/pxeboot OLD_FILES+=boot/screen.4th OLD_FILES+=boot/shortcuts.4th OLD_FILES+=boot/support.4th OLD_FILES+=boot/userboot.so OLD_FILES+=boot/version.4th OLD_FILES+=boot/zfsboot OLD_FILES+=boot/zfsloader OLD_FILES+=usr/lib/kgzldr.o OLD_FILES+=usr/share/man/man5/loader.conf.5.gz OLD_FILES+=usr/share/man/man8/beastie.4th.8.gz OLD_FILES+=usr/share/man/man8/brand.4th.8.gz OLD_FILES+=usr/share/man/man8/check-password.4th.8.gz OLD_FILES+=usr/share/man/man8/color.4th.8.gz OLD_FILES+=usr/share/man/man8/delay.4th.8.gz OLD_FILES+=usr/share/man/man8/gptboot.8.gz OLD_FILES+=usr/share/man/man8/gptzfsboot.8.gz OLD_FILES+=usr/share/man/man8/loader.4th.8.gz OLD_FILES+=usr/share/man/man8/loader.8.gz OLD_FILES+=usr/share/man/man8/menu.4th.8.gz OLD_FILES+=usr/share/man/man8/menusets.4th.8.gz OLD_FILES+=usr/share/man/man8/pxeboot.8.gz OLD_FILES+=usr/share/man/man8/version.4th.8.gz OLD_FILES+=usr/share/man/man8/zfsboot.8.gz OLD_FILES+=usr/share/man/man8/zfsloader.8.gz .endif .if ${MK_BOOTPARAMD} == no +OLD_FILES+=etc/rc.d/bootparams OLD_FILES+=usr/sbin/bootparamd OLD_FILES+=usr/share/man/man5/bootparams.5.gz OLD_FILES+=usr/share/man/man8/bootparamd.8.gz OLD_FILES+=usr/sbin/callbootd .endif .if ${MK_BOOTPD} == no OLD_FILES+=usr/libexec/bootpd OLD_FILES+=usr/share/man/man5/bootptab.5.gz OLD_FILES+=usr/share/man/man8/bootpd.8.gz OLD_FILES+=usr/libexec/bootpgw OLD_FILES+=usr/sbin/bootpef OLD_FILES+=usr/share/man/man8/bootpef.8.gz OLD_FILES+=usr/sbin/bootptest OLD_FILES+=usr/share/man/man8/bootptest.8.gz .endif .if ${MK_BSD_CPIO} == no OLD_FILES+=usr/bin/bsdcpio OLD_FILES+=usr/bin/cpio OLD_FILES+=usr/share/man/man1/bsdcpio.1.gz OLD_FILES+=usr/share/man/man1/cpio.1.gz .endif .if ${MK_BSDINSTALL} == no OLD_FILES+=usr/libexec/bsdinstall/adduser OLD_FILES+=usr/libexec/bsdinstall/auto OLD_FILES+=usr/libexec/bsdinstall/autopart OLD_FILES+=usr/libexec/bsdinstall/bootconfig OLD_FILES+=usr/libexec/bsdinstall/checksum OLD_FILES+=usr/libexec/bsdinstall/config OLD_FILES+=usr/libexec/bsdinstall/distextract OLD_FILES+=usr/libexec/bsdinstall/distfetch OLD_FILES+=usr/libexec/bsdinstall/docsinstall OLD_FILES+=usr/libexec/bsdinstall/entropy OLD_FILES+=usr/libexec/bsdinstall/hardening OLD_FILES+=usr/libexec/bsdinstall/hostname OLD_FILES+=usr/libexec/bsdinstall/jail OLD_FILES+=usr/libexec/bsdinstall/keymap OLD_FILES+=usr/libexec/bsdinstall/mirrorselect OLD_FILES+=usr/libexec/bsdinstall/mount OLD_FILES+=usr/libexec/bsdinstall/netconfig OLD_FILES+=usr/libexec/bsdinstall/netconfig_ipv4 OLD_FILES+=usr/libexec/bsdinstall/netconfig_ipv6 
OLD_FILES+=usr/libexec/bsdinstall/partedit OLD_FILES+=usr/libexec/bsdinstall/rootpass OLD_FILES+=usr/libexec/bsdinstall/script OLD_FILES+=usr/libexec/bsdinstall/scriptedpart OLD_FILES+=usr/libexec/bsdinstall/services OLD_FILES+=usr/libexec/bsdinstall/time OLD_FILES+=usr/libexec/bsdinstall/umount OLD_FILES+=usr/libexec/bsdinstall/wlanconfig OLD_FILES+=usr/libexec/bsdinstall/zfsboot OLD_FILES+=usr/sbin/bsdinstall OLD_FILES+=usr/share/man/man8/bsdinstall.8.gz OLD_FILES+=usr/share/man/man8/sade.8.gz OLD_DIRS+=usr/libexec/bsdinstall .endif .if ${MK_BSNMP} == no OLD_FILES+=etc/snmpd.config OLD_FILES+=etc/rc.d/bsnmpd OLD_FILES+=usr/bin/bsnmpget OLD_FILES+=usr/bin/bsnmpset OLD_FILES+=usr/bin/bsnmpwalk OLD_FILES+=usr/include/bsnmp/asn1.h OLD_FILES+=usr/include/bsnmp/bridge_snmp.h OLD_FILES+=usr/include/bsnmp/snmp.h OLD_FILES+=usr/include/bsnmp/snmp_mibII.h OLD_FILES+=usr/include/bsnmp/snmp_netgraph.h OLD_FILES+=usr/include/bsnmp/snmpagent.h OLD_FILES+=usr/include/bsnmp/snmpclient.h OLD_FILES+=usr/include/bsnmp/snmpmod.h OLD_FILES+=usr/lib/libbsnmp.a OLD_FILES+=usr/lib/libbsnmp.so OLD_LIBS+=usr/lib/libbsnmp.so.6 OLD_FILES+=usr/lib/libbsnmp_p.a OLD_FILES+=usr/lib/libbsnmptools.a OLD_FILES+=usr/lib/libbsnmptools.so OLD_LIBS+=usr/lib/libbsnmptools.so.0 OLD_FILES+=usr/lib/libbsnmptools_p.a OLD_FILES+=usr/lib/snmp_bridge.so OLD_LIBS+=usr/lib/snmp_bridge.so.6 OLD_FILES+=usr/lib/snmp_hast.so OLD_LIBS+=usr/lib/snmp_hast.so.6 OLD_FILES+=usr/lib/snmp_hostres.so OLD_LIBS+=usr/lib/snmp_hostres.so.6 OLD_FILES+=usr/lib/snmp_lm75.so OLD_LIBS+=usr/lib/snmp_lm75.so.6 OLD_FILES+=usr/lib/snmp_mibII.so OLD_LIBS+=usr/lib/snmp_mibII.so.6 OLD_FILES+=usr/lib/snmp_netgraph.so OLD_LIBS+=usr/lib/snmp_netgraph.so.6 OLD_FILES+=usr/lib/snmp_pf.so OLD_LIBS+=usr/lib/snmp_pf.so.6 OLD_FILES+=usr/lib/snmp_target.so OLD_LIBS+=usr/lib/snmp_target.so.6 OLD_FILES+=usr/lib/snmp_usm.so OLD_LIBS+=usr/lib/snmp_usm.so.6 OLD_FILES+=usr/lib/snmp_vacm.so OLD_LIBS+=usr/lib/snmp_vacm.so.6 OLD_FILES+=usr/lib/snmp_wlan.so OLD_LIBS+=usr/lib/snmp_wlan.so.6 OLD_FILES+=usr/lib32/libbsnmp.a OLD_FILES+=usr/lib32/libbsnmp.so OLD_LIBS+=usr/lib32/libbsnmp.so.6 OLD_FILES+=usr/lib32/libbsnmp_p.a OLD_FILES+=usr/sbin/bsnmpd OLD_FILES+=usr/sbin/gensnmptree OLD_FILES+=usr/share/examples/etc/snmpd.config OLD_FILES+=usr/share/man/man1/bsnmpd.1.gz OLD_FILES+=usr/share/man/man1/bsnmpget.1.gz OLD_FILES+=usr/share/man/man1/bsnmpset.1.gz OLD_FILES+=usr/share/man/man1/bsnmpwalk.1.gz OLD_FILES+=usr/share/man/man1/gensnmptree.1.gz # lib/libbsnmp/libbsnmp OLD_FILES+=usr/share/man/man3/TRUTH_GET.3.gz OLD_FILES+=usr/share/man/man3/TRUTH_MK.3.gz OLD_FILES+=usr/share/man/man3/TRUTH_OK.3.gz OLD_FILES+=usr/share/man/man3/asn1.3.gz OLD_FILES+=usr/share/man/man3/asn_append_oid.3.gz OLD_FILES+=usr/share/man/man3/asn_commit_header.3.gz OLD_FILES+=usr/share/man/man3/asn_compare_oid.3.gz OLD_FILES+=usr/share/man/man3/asn_get_counter64_raw.3.gz OLD_FILES+=usr/share/man/man3/asn_get_header.3.gz OLD_FILES+=usr/share/man/man3/asn_get_integer.3.gz OLD_FILES+=usr/share/man/man3/asn_get_integer_raw.3.gz OLD_FILES+=usr/share/man/man3/asn_get_ipaddress.3.gz OLD_FILES+=usr/share/man/man3/asn_get_ipaddress_raw.3.gz OLD_FILES+=usr/share/man/man3/asn_get_null.3.gz OLD_FILES+=usr/share/man/man3/asn_get_null_raw.3.gz OLD_FILES+=usr/share/man/man3/asn_get_objid.3.gz OLD_FILES+=usr/share/man/man3/asn_get_objid_raw.3.gz OLD_FILES+=usr/share/man/man3/asn_get_octetstring.3.gz OLD_FILES+=usr/share/man/man3/asn_get_octetstring_raw.3.gz OLD_FILES+=usr/share/man/man3/asn_get_sequence.3.gz 
OLD_FILES+=usr/share/man/man3/asn_get_timeticks.3.gz OLD_FILES+=usr/share/man/man3/asn_get_uint32_raw.3.gz OLD_FILES+=usr/share/man/man3/asn_is_suboid.3.gz OLD_FILES+=usr/share/man/man3/asn_oid2str.3.gz OLD_FILES+=usr/share/man/man3/asn_oid2str_r.3.gz OLD_FILES+=usr/share/man/man3/asn_put_counter64.3.gz OLD_FILES+=usr/share/man/man3/asn_put_exception.3.gz OLD_FILES+=usr/share/man/man3/asn_put_header.3.gz OLD_FILES+=usr/share/man/man3/asn_put_integer.3.gz OLD_FILES+=usr/share/man/man3/asn_put_ipaddress.3.gz OLD_FILES+=usr/share/man/man3/asn_put_null.3.gz OLD_FILES+=usr/share/man/man3/asn_put_objid.3.gz OLD_FILES+=usr/share/man/man3/asn_put_octetstring.3.gz OLD_FILES+=usr/share/man/man3/asn_put_temp_header.3.gz OLD_FILES+=usr/share/man/man3/asn_put_timeticks.3.gz OLD_FILES+=usr/share/man/man3/asn_put_uint32.3.gz OLD_FILES+=usr/share/man/man3/asn_skip.3.gz OLD_FILES+=usr/share/man/man3/asn_slice_oid.3.gz OLD_FILES+=usr/share/man/man3/snmp_add_binding.3.gz OLD_FILES+=usr/share/man/man3/snmp_calc_keychange.3.gz OLD_FILES+=usr/share/man/man3/snmp_client.3.gz OLD_FILES+=usr/share/man/man3/snmp_client_init.3.gz OLD_FILES+=usr/share/man/man3/snmp_client_set_host.3.gz OLD_FILES+=usr/share/man/man3/snmp_client_set_port.3.gz OLD_FILES+=usr/share/man/man3/snmp_close.3.gz OLD_FILES+=usr/share/man/man3/snmp_debug.3.gz OLD_FILES+=usr/share/man/man3/snmp_dep_commit.3.gz OLD_FILES+=usr/share/man/man3/snmp_dep_finish.3.gz OLD_FILES+=usr/share/man/man3/snmp_dep_lookup.3.gz OLD_FILES+=usr/share/man/man3/snmp_dep_rollback.3.gz OLD_FILES+=usr/share/man/man3/snmp_depop_t.3.gz OLD_FILES+=usr/share/man/man3/snmp_dialog.3.gz OLD_FILES+=usr/share/man/man3/snmp_discover_engine.3.gz OLD_FILES+=usr/share/man/man3/snmp_get.3.gz OLD_FILES+=usr/share/man/man3/snmp_get_local_keys.3.gz OLD_FILES+=usr/share/man/man3/snmp_getbulk.3.gz OLD_FILES+=usr/share/man/man3/snmp_getnext.3.gz OLD_FILES+=usr/share/man/man3/snmp_init_context.3.gz OLD_FILES+=usr/share/man/man3/snmp_make_errresp.3.gz OLD_FILES+=usr/share/man/man3/snmp_oid_append.3.gz OLD_FILES+=usr/share/man/man3/snmp_op_t.3.gz OLD_FILES+=usr/share/man/man3/snmp_open.3.gz OLD_FILES+=usr/share/man/man3/snmp_parse_server.3.gz OLD_FILES+=usr/share/man/man3/snmp_passwd_to_keys.3.gz OLD_FILES+=usr/share/man/man3/snmp_pdu_check.3.gz OLD_FILES+=usr/share/man/man3/snmp_pdu_create.3.gz OLD_FILES+=usr/share/man/man3/snmp_pdu_decode.3.gz OLD_FILES+=usr/share/man/man3/snmp_pdu_decode_header.3.gz OLD_FILES+=usr/share/man/man3/snmp_pdu_decode_scoped.3.gz OLD_FILES+=usr/share/man/man3/snmp_pdu_decode_secmode.3.gz OLD_FILES+=usr/share/man/man3/snmp_pdu_dump.3.gz OLD_FILES+=usr/share/man/man3/snmp_pdu_encode.3.gz OLD_FILES+=usr/share/man/man3/snmp_pdu_free.3.gz OLD_FILES+=usr/share/man/man3/snmp_pdu_init_secparams.3.gz OLD_FILES+=usr/share/man/man3/snmp_pdu_send.3.gz OLD_FILES+=usr/share/man/man3/snmp_receive.3.gz OLD_FILES+=usr/share/man/man3/snmp_send_cb_f.3.gz OLD_FILES+=usr/share/man/man3/snmp_set.3.gz OLD_FILES+=usr/share/man/man3/snmp_table_cb_f.3.gz OLD_FILES+=usr/share/man/man3/snmp_table_fetch.3.gz OLD_FILES+=usr/share/man/man3/snmp_table_fetch_async.3.gz OLD_FILES+=usr/share/man/man3/snmp_timeout_cb_f.3.gz OLD_FILES+=usr/share/man/man3/snmp_timeout_start_f.3.gz OLD_FILES+=usr/share/man/man3/snmp_timeout_stop_f.3.gz OLD_FILES+=usr/share/man/man3/snmp_trace.3.gz OLD_FILES+=usr/share/man/man3/snmp_value_copy.3.gz OLD_FILES+=usr/share/man/man3/snmp_value_free.3.gz OLD_FILES+=usr/share/man/man3/snmp_value_parse.3.gz OLD_FILES+=usr/share/man/man3/tree_size.3.gz # usr.sbin/bsnmpd/bsnmpd 
OLD_FILES+=usr/share/man/man3/FIND_OBJECT_INT.3.gz
OLD_FILES+=usr/share/man/man3/FIND_OBJECT_INT_LINK.3.gz
OLD_FILES+=usr/share/man/man3/FIND_OBJECT_INT_LINK_INDEX.3.gz
OLD_FILES+=usr/share/man/man3/FIND_OBJECT_OID.3.gz
OLD_FILES+=usr/share/man/man3/FIND_OBJECT_OID_LINK.3.gz
OLD_FILES+=usr/share/man/man3/FIND_OBJECT_OID_LINK_INDEX.3.gz
OLD_FILES+=usr/share/man/man3/INSERT_OBJECT_INT.3.gz
OLD_FILES+=usr/share/man/man3/INSERT_OBJECT_INT_LINK.3.gz
OLD_FILES+=usr/share/man/man3/INSERT_OBJECT_INT_LINK_INDEX.3.gz
OLD_FILES+=usr/share/man/man3/INSERT_OBJECT_OID.3.gz
OLD_FILES+=usr/share/man/man3/INSERT_OBJECT_OID_LINK.3.gz
OLD_FILES+=usr/share/man/man3/INSERT_OBJECT_OID_LINK_INDEX.3.gz
OLD_FILES+=usr/share/man/man3/NEXT_OBJECT_INT.3.gz
OLD_FILES+=usr/share/man/man3/NEXT_OBJECT_INT_LINK.3.gz
OLD_FILES+=usr/share/man/man3/NEXT_OBJECT_INT_LINK_INDEX.3.gz
OLD_FILES+=usr/share/man/man3/NEXT_OBJECT_OID.3.gz
OLD_FILES+=usr/share/man/man3/NEXT_OBJECT_OID_LINK.3.gz
OLD_FILES+=usr/share/man/man3/NEXT_OBJECT_OID_LINK_INDEX.3.gz
OLD_FILES+=usr/share/man/man3/bsnmpagent.3.gz
OLD_FILES+=usr/share/man/man3/bsnmpclient.3.gz
OLD_FILES+=usr/share/man/man3/bsnmpd_get_target_stats.3.gz
OLD_FILES+=usr/share/man/man3/bsnmpd_get_usm_stats.3.gz
OLD_FILES+=usr/share/man/man3/bsnmpd_reset_usm_stats.3.gz
OLD_FILES+=usr/share/man/man3/bsnmplib.3.gz
OLD_FILES+=usr/share/man/man3/buf_alloc.3.gz
OLD_FILES+=usr/share/man/man3/buf_size.3.gz
OLD_FILES+=usr/share/man/man3/comm_define.3.gz
OLD_FILES+=usr/share/man/man3/community.3.gz
OLD_FILES+=usr/share/man/man3/fd_deselect.3.gz
OLD_FILES+=usr/share/man/man3/fd_resume.3.gz
OLD_FILES+=usr/share/man/man3/fd_select.3.gz
OLD_FILES+=usr/share/man/man3/fd_suspend.3.gz
OLD_FILES+=usr/share/man/man3/get_ticks.3.gz
OLD_FILES+=usr/share/man/man3/index_append.3.gz
OLD_FILES+=usr/share/man/man3/index_append_off.3.gz
OLD_FILES+=usr/share/man/man3/index_compare.3.gz
OLD_FILES+=usr/share/man/man3/index_compare_off.3.gz
OLD_FILES+=usr/share/man/man3/index_decode.3.gz
OLD_FILES+=usr/share/man/man3/ip_commit.3.gz
OLD_FILES+=usr/share/man/man3/ip_get.3.gz
OLD_FILES+=usr/share/man/man3/ip_rollback.3.gz
OLD_FILES+=usr/share/man/man3/ip_save.3.gz
OLD_FILES+=usr/share/man/man3/or_register.3.gz
OLD_FILES+=usr/share/man/man3/or_unregister.3.gz
OLD_FILES+=usr/share/man/man3/oid_commit.3.gz
OLD_FILES+=usr/share/man/man3/oid_get.3.gz
OLD_FILES+=usr/share/man/man3/oid_rollback.3.gz
OLD_FILES+=usr/share/man/man3/oid_save.3.gz
OLD_FILES+=usr/share/man/man3/oid_usmNotInTimeWindows.3.gz
OLD_FILES+=usr/share/man/man3/oid_usmUnknownEngineIDs.3.gz
OLD_FILES+=usr/share/man/man3/oid_zeroDotZero.3.gz
OLD_FILES+=usr/share/man/man3/reqid_allocate.3.gz
OLD_FILES+=usr/share/man/man3/reqid_base.3.gz
OLD_FILES+=usr/share/man/man3/reqid_istype.3.gz
OLD_FILES+=usr/share/man/man3/reqid_next.3.gz
OLD_FILES+=usr/share/man/man3/reqid_type.3.gz
OLD_FILES+=usr/share/man/man3/snmp_bridge.3.gz
OLD_FILES+=usr/share/man/man3/snmp_hast.3.gz
OLD_FILES+=usr/share/man/man3/snmp_hostres.3.gz
OLD_FILES+=usr/share/man/man3/snmp_input_finish.3.gz
OLD_FILES+=usr/share/man/man3/snmp_input_start.3.gz
OLD_FILES+=usr/share/man/man3/snmp_lm75.3.gz
OLD_FILES+=usr/share/man/man3/snmp_mibII.3.gz
OLD_FILES+=usr/share/man/man3/snmp_netgraph.3.gz
OLD_FILES+=usr/share/man/man3/snmp_output.3.gz
OLD_FILES+=usr/share/man/man3/snmp_pdu_auth_access.3.gz
OLD_FILES+=usr/share/man/man3/snmp_send_port.3.gz
OLD_FILES+=usr/share/man/man3/snmp_send_trap.3.gz
OLD_FILES+=usr/share/man/man3/snmp_target.3.gz
OLD_FILES+=usr/share/man/man3/snmp_usm.3.gz OLD_FILES+=usr/share/man/man3/snmp_vacm.3.gz OLD_FILES+=usr/share/man/man3/snmp_wlan.3.gz OLD_FILES+=usr/share/man/man3/snmpd_target_stat.3.gz OLD_FILES+=usr/share/man/man3/snmpd_usmstats.3.gz OLD_FILES+=usr/share/man/man3/snmpmod.3.gz OLD_FILES+=usr/share/man/man3/start_tick.3.gz OLD_FILES+=usr/share/man/man3/string_commit.3.gz OLD_FILES+=usr/share/man/man3/string_free.3.gz OLD_FILES+=usr/share/man/man3/string_get.3.gz OLD_FILES+=usr/share/man/man3/string_get_max.3.gz OLD_FILES+=usr/share/man/man3/string_rollback.3.gz OLD_FILES+=usr/share/man/man3/string_save.3.gz OLD_FILES+=usr/share/man/man3/systemg.3.gz OLD_FILES+=usr/share/man/man3/this_tick.3.gz OLD_FILES+=usr/share/man/man3/timer_start.3.gz OLD_FILES+=usr/share/man/man3/timer_start_repeat.3.gz OLD_FILES+=usr/share/man/man3/timer_stop.3.gz OLD_FILES+=usr/share/man/man3/target_activate_address.3.gz OLD_FILES+=usr/share/man/man3/target_address.3.gz OLD_FILES+=usr/share/man/man3/target_delete_address.3.gz OLD_FILES+=usr/share/man/man3/target_delete_notify.3.gz OLD_FILES+=usr/share/man/man3/target_delete_param.3.gz OLD_FILES+=usr/share/man/man3/target_first_address.3.gz OLD_FILES+=usr/share/man/man3/target_first_notify.3.gz OLD_FILES+=usr/share/man/man3/target_first_param.3.gz OLD_FILES+=usr/share/man/man3/target_flush_all.3.gz OLD_FILES+=usr/share/man/man3/target_next_address.3.gz OLD_FILES+=usr/share/man/man3/target_next_notify.3.gz OLD_FILES+=usr/share/man/man3/target_next_param.3.gz OLD_FILES+=usr/share/man/man3/target_new_address.3.gz OLD_FILES+=usr/share/man/man3/target_new_notify.3.gz OLD_FILES+=usr/share/man/man3/target_new_param.3.gz OLD_FILES+=usr/share/man/man3/target_notify.3.gz OLD_FILES+=usr/share/man/man3/target_param.3.gz OLD_FILES+=usr/share/man/man3/usm_delete_user.3.gz OLD_FILES+=usr/share/man/man3/usm_find_user.3.gz OLD_FILES+=usr/share/man/man3/usm_first_user.3.gz OLD_FILES+=usr/share/man/man3/usm_flush_users.3.gz OLD_FILES+=usr/share/man/man3/usm_next_user.3.gz OLD_FILES+=usr/share/man/man3/usm_new_user.3.gz OLD_FILES+=usr/share/man/man3/usm_user.3.gz OLD_FILES+=usr/share/snmp/defs/bridge_tree.def OLD_FILES+=usr/share/snmp/defs/hast_tree.def OLD_FILES+=usr/share/snmp/defs/hostres_tree.def OLD_FILES+=usr/share/snmp/defs/lm75_tree.def OLD_FILES+=usr/share/snmp/defs/mibII_tree.def OLD_FILES+=usr/share/snmp/defs/netgraph_tree.def OLD_FILES+=usr/share/snmp/defs/pf_tree.def OLD_FILES+=usr/share/snmp/defs/target_tree.def OLD_FILES+=usr/share/snmp/defs/tree.def OLD_FILES+=usr/share/snmp/defs/usm_tree.def OLD_FILES+=usr/share/snmp/defs/vacm_tree.def OLD_FILES+=usr/share/snmp/defs/wlan_tree.def OLD_FILES+=usr/share/snmp/mibs/BEGEMOT-ATM-FREEBSD-MIB.txt OLD_FILES+=usr/share/snmp/mibs/BEGEMOT-ATM.txt OLD_FILES+=usr/share/snmp/mibs/BEGEMOT-BRIDGE-MIB.txt OLD_FILES+=usr/share/snmp/mibs/BEGEMOT-HAST-MIB.txt OLD_FILES+=usr/share/snmp/mibs/BEGEMOT-HOSTRES-MIB.txt OLD_FILES+=usr/share/snmp/mibs/BEGEMOT-IP-MIB.txt OLD_FILES+=usr/share/snmp/mibs/BEGEMOT-LM75-MIB.txt OLD_FILES+=usr/share/snmp/mibs/BEGEMOT-MIB.txt OLD_FILES+=usr/share/snmp/mibs/BEGEMOT-MIB2-MIB.txt OLD_FILES+=usr/share/snmp/mibs/BEGEMOT-NETGRAPH.txt OLD_FILES+=usr/share/snmp/mibs/BEGEMOT-PF-MIB.txt OLD_FILES+=usr/share/snmp/mibs/BEGEMOT-SNMPD.txt OLD_FILES+=usr/share/snmp/mibs/BEGEMOT-WIRELESS-MIB.txt OLD_FILES+=usr/share/snmp/mibs/BRIDGE-MIB.txt OLD_FILES+=usr/share/snmp/mibs/FOKUS-MIB.txt OLD_FILES+=usr/share/snmp/mibs/FREEBSD-MIB.txt OLD_FILES+=usr/share/snmp/mibs/RSTP-MIB.txt OLD_DIRS+=usr/include/bsnmp 
OLD_DIRS+=usr/share/snmp OLD_DIRS+=usr/share/snmp/defs OLD_DIRS+=usr/share/snmp/mibs .endif .if ${MK_CALENDAR} == no OLD_FILES+=etc/periodic/daily/300.calendar OLD_FILES+=usr/bin/calendar OLD_FILES+=usr/share/calendar/calendar.all OLD_FILES+=usr/share/calendar/calendar.australia OLD_FILES+=usr/share/calendar/calendar.birthday OLD_FILES+=usr/share/calendar/calendar.brazilian OLD_FILES+=usr/share/calendar/calendar.christian OLD_FILES+=usr/share/calendar/calendar.computer OLD_FILES+=usr/share/calendar/calendar.croatian OLD_FILES+=usr/share/calendar/calendar.dutch OLD_FILES+=usr/share/calendar/calendar.freebsd OLD_FILES+=usr/share/calendar/calendar.french OLD_FILES+=usr/share/calendar/calendar.german OLD_FILES+=usr/share/calendar/calendar.history OLD_FILES+=usr/share/calendar/calendar.holiday OLD_FILES+=usr/share/calendar/calendar.hungarian OLD_FILES+=usr/share/calendar/calendar.judaic OLD_FILES+=usr/share/calendar/calendar.lotr OLD_FILES+=usr/share/calendar/calendar.music OLD_FILES+=usr/share/calendar/calendar.newzealand OLD_FILES+=usr/share/calendar/calendar.russian OLD_FILES+=usr/share/calendar/calendar.southafrica OLD_FILES+=usr/share/calendar/calendar.ukrainian OLD_FILES+=usr/share/calendar/calendar.usholiday OLD_FILES+=usr/share/calendar/calendar.world OLD_FILES+=usr/share/calendar/de_AT.ISO_8859-15/calendar.feiertag OLD_DIRS+=usr/share/calendar/de_AT.ISO_8859-15 OLD_FILES+=usr/share/calendar/de_DE.ISO8859-1/calendar.all OLD_FILES+=usr/share/calendar/de_DE.ISO8859-1/calendar.feiertag OLD_FILES+=usr/share/calendar/de_DE.ISO8859-1/calendar.geschichte OLD_FILES+=usr/share/calendar/de_DE.ISO8859-1/calendar.kirche OLD_FILES+=usr/share/calendar/de_DE.ISO8859-1/calendar.literatur OLD_FILES+=usr/share/calendar/de_DE.ISO8859-1/calendar.musik OLD_FILES+=usr/share/calendar/de_DE.ISO8859-1/calendar.wissenschaft OLD_DIRS+=usr/share/calendar/de_DE.ISO8859-1 OLD_FILES+=usr/share/calendar/de_DE.ISO8859-15 OLD_FILES+=usr/share/calendar/fr_FR.ISO8859-1/calendar.all OLD_FILES+=usr/share/calendar/fr_FR.ISO8859-1/calendar.fetes OLD_FILES+=usr/share/calendar/fr_FR.ISO8859-1/calendar.french OLD_FILES+=usr/share/calendar/fr_FR.ISO8859-1/calendar.jferies OLD_FILES+=usr/share/calendar/fr_FR.ISO8859-1/calendar.proverbes OLD_DIRS+=usr/share/calendar/fr_FR.ISO8859-1 OLD_FILES+=usr/share/calendar/fr_FR.ISO8859-15 OLD_FILES+=usr/share/calendar/hr_HR.ISO8859-2/calendar.all OLD_FILES+=usr/share/calendar/hr_HR.ISO8859-2/calendar.praznici OLD_DIRS+=usr/share/calendar/hr_HR.ISO8859-2 OLD_FILES+=usr/share/calendar/hu_HU.ISO8859-2/calendar.all OLD_FILES+=usr/share/calendar/hu_HU.ISO8859-2/calendar.nevnapok OLD_FILES+=usr/share/calendar/hu_HU.ISO8859-2/calendar.unnepek OLD_DIRS+=usr/share/calendar/hu_HU.ISO8859-2 OLD_FILES+=usr/share/calendar/pt_BR.ISO8859-1/calendar.all OLD_FILES+=usr/share/calendar/pt_BR.ISO8859-1/calendar.commemorative OLD_FILES+=usr/share/calendar/pt_BR.ISO8859-1/calendar.holidays OLD_FILES+=usr/share/calendar/pt_BR.ISO8859-1/calendar.mcommemorative OLD_DIRS+=usr/share/calendar/pt_BR.ISO8859-1 OLD_FILES+=usr/share/calendar/pt_BR.UTF-8/calendar.all OLD_FILES+=usr/share/calendar/pt_BR.UTF-8/calendar.commemorative OLD_FILES+=usr/share/calendar/pt_BR.UTF-8/calendar.holidays OLD_FILES+=usr/share/calendar/pt_BR.UTF-8/calendar.mcommemorative OLD_DIRS+=usr/share/calendar/pt_BR.UTF-8 OLD_FILES+=usr/share/calendar/ru_RU.KOI8-R/calendar.all OLD_FILES+=usr/share/calendar/ru_RU.KOI8-R/calendar.common OLD_FILES+=usr/share/calendar/ru_RU.KOI8-R/calendar.holiday 
OLD_FILES+=usr/share/calendar/ru_RU.KOI8-R/calendar.military OLD_FILES+=usr/share/calendar/ru_RU.KOI8-R/calendar.orthodox OLD_FILES+=usr/share/calendar/ru_RU.KOI8-R/calendar.pagan OLD_DIRS+=usr/share/calendar/ru_RU.KOI8-R OLD_FILES+=usr/share/calendar/ru_RU.UTF-8/calendar.all OLD_FILES+=usr/share/calendar/ru_RU.UTF-8/calendar.common OLD_FILES+=usr/share/calendar/ru_RU.UTF-8/calendar.holiday OLD_FILES+=usr/share/calendar/ru_RU.UTF-8/calendar.military OLD_FILES+=usr/share/calendar/ru_RU.UTF-8/calendar.orthodox OLD_FILES+=usr/share/calendar/ru_RU.UTF-8/calendar.pagan OLD_DIRS+=usr/share/calendar/ru_RU.UTF-8 OLD_FILES+=usr/share/calendar/uk_UA.KOI8-U/calendar.all OLD_FILES+=usr/share/calendar/uk_UA.KOI8-U/calendar.holiday OLD_FILES+=usr/share/calendar/uk_UA.KOI8-U/calendar.misc OLD_FILES+=usr/share/calendar/uk_UA.KOI8-U/calendar.orthodox OLD_DIRS+=usr/share/calendar/uk_UA.KOI8-U OLD_DIRS+=usr/share/calendar OLD_FILES+=usr/share/man/man1/calendar.1.gz OLD_FILES+=usr/tests/usr.bin/calendar/Kyuafile OLD_FILES+=usr/tests/usr.bin/calendar/calendar.calibrate OLD_FILES+=usr/tests/usr.bin/calendar/legacy_test OLD_FILES+=usr/tests/usr.bin/calendar/regress.a1.out OLD_FILES+=usr/tests/usr.bin/calendar/regress.a2.out OLD_FILES+=usr/tests/usr.bin/calendar/regress.a3.out OLD_FILES+=usr/tests/usr.bin/calendar/regress.a4.out OLD_FILES+=usr/tests/usr.bin/calendar/regress.a5.out OLD_FILES+=usr/tests/usr.bin/calendar/regress.b1.out OLD_FILES+=usr/tests/usr.bin/calendar/regress.b2.out OLD_FILES+=usr/tests/usr.bin/calendar/regress.b3.out OLD_FILES+=usr/tests/usr.bin/calendar/regress.b4.out OLD_FILES+=usr/tests/usr.bin/calendar/regress.b5.out OLD_FILES+=usr/tests/usr.bin/calendar/regress.s1.out OLD_FILES+=usr/tests/usr.bin/calendar/regress.s2.out OLD_FILES+=usr/tests/usr.bin/calendar/regress.s3.out OLD_FILES+=usr/tests/usr.bin/calendar/regress.s4.out OLD_FILES+=usr/tests/usr.bin/calendar/regress.sh OLD_FILES+=usr/tests/usr.bin/calendar/regress.w0-1.out OLD_FILES+=usr/tests/usr.bin/calendar/regress.w0-2.out OLD_FILES+=usr/tests/usr.bin/calendar/regress.w0-3.out OLD_FILES+=usr/tests/usr.bin/calendar/regress.w0-4.out OLD_FILES+=usr/tests/usr.bin/calendar/regress.w0-5.out OLD_FILES+=usr/tests/usr.bin/calendar/regress.w0-6.out OLD_FILES+=usr/tests/usr.bin/calendar/regress.w0-7.out OLD_FILES+=usr/tests/usr.bin/calendar/regress.wn-1.out OLD_FILES+=usr/tests/usr.bin/calendar/regress.wn-2.out OLD_FILES+=usr/tests/usr.bin/calendar/regress.wn-3.out OLD_FILES+=usr/tests/usr.bin/calendar/regress.wn-4.out OLD_FILES+=usr/tests/usr.bin/calendar/regress.wn-5.out OLD_FILES+=usr/tests/usr.bin/calendar/regress.wn-6.out OLD_FILES+=usr/tests/usr.bin/calendar/regress.wn-7.out OLD_DIRS+=usr/tests/usr.bin/calendar .endif .if ${MK_CASPER} == no OLD_FILES+=etc/casper/system.dns OLD_FILES+=etc/casper/system.grp OLD_FILES+=etc/casper/system.pwd OLD_FILES+=etc/casper/system.random OLD_FILES+=etc/casper/system.sysctl +OLD_DIRS+=etc/casper OLD_FILES+=etc/rc.d/casperd OLD_LIBS+=lib/libcapsicum.so.0 OLD_LIBS+=lib/libcasper.so.0 OLD_FILES+=libexec/casper/dns OLD_FILES+=libexec/casper/grp OLD_FILES+=libexec/casper/pwd OLD_FILES+=libexec/casper/random OLD_FILES+=libexec/casper/sysctl OLD_FILES+=sbin/casper OLD_FILES+=sbin/casperd OLD_FILES+=usr/include/libcapsicum.h OLD_FILES+=usr/include/libcapsicum_dns.h OLD_FILES+=usr/include/libcapsicum_grp.h OLD_FILES+=usr/include/libcapsicum_pwd.h OLD_FILES+=usr/include/libcapsicum_random.h OLD_FILES+=usr/include/libcapsicum_service.h OLD_FILES+=usr/include/libcapsicum_sysctl.h OLD_FILES+=usr/include/libcasper.h 
OLD_FILES+=usr/lib/libcapsicum.a
OLD_FILES+=usr/lib/libcapsicum.so
OLD_FILES+=usr/lib/libcapsicum_p.a
OLD_FILES+=usr/lib/libcasper.a
OLD_FILES+=usr/lib/libcasper.so
OLD_FILES+=usr/lib/libcasper_p.a
OLD_FILES+=usr/lib32/libcapsicum.a
OLD_FILES+=usr/lib32/libcapsicum.so
OLD_LIBS+=usr/lib32/libcapsicum.so.0
OLD_FILES+=usr/lib32/libcapsicum_p.a
OLD_FILES+=usr/lib32/libcasper.a
OLD_FILES+=usr/lib32/libcasper.so
OLD_LIBS+=usr/lib32/libcasper.so.0
OLD_FILES+=usr/lib32/libcasper_p.a
OLD_FILES+=usr/share/man/man3/cap_clone.3.gz
OLD_FILES+=usr/share/man/man3/cap_close.3.gz
OLD_FILES+=usr/share/man/man3/cap_init.3.gz
OLD_FILES+=usr/share/man/man3/cap_limit_get.3.gz
OLD_FILES+=usr/share/man/man3/cap_limit_set.3.gz
OLD_FILES+=usr/share/man/man3/cap_recv_nvlist.3.gz
OLD_FILES+=usr/share/man/man3/cap_send_nvlist.3.gz
OLD_FILES+=usr/share/man/man3/cap_service_open.3.gz
OLD_FILES+=usr/share/man/man3/cap_sock.3.gz
OLD_FILES+=usr/share/man/man3/cap_unwrap.3.gz
OLD_FILES+=usr/share/man/man3/cap_wrap.3.gz
OLD_FILES+=usr/share/man/man3/cap_xfer_nvlist.3.gz
OLD_FILES+=usr/share/man/man3/libcapsicum.3.gz
OLD_FILES+=usr/share/man/man8/casperd.8.gz
.endif
.if ${MK_CCD} == no
OLD_FILES+=etc/rc.d/ccd
OLD_FILES+=rescue/ccdconfig
OLD_FILES+=sbin/ccdconfig
OLD_FILES+=usr/share/man/man4/ccd.4.gz
OLD_FILES+=usr/share/man/man8/ccdconfig.8.gz
.endif
.if ${MK_CDDL} == no
OLD_LIBS+=lib/libavl.so.2
OLD_LIBS+=lib/libctf.so.2
OLD_LIBS+=lib/libdtrace.so.2
OLD_LIBS+=lib/libnvpair.so.2
OLD_LIBS+=lib/libumem.so.2
OLD_LIBS+=lib/libuutil.so.2
OLD_FILES+=usr/bin/ctfconvert
OLD_FILES+=usr/bin/ctfdump
OLD_FILES+=usr/bin/ctfmerge
OLD_FILES+=usr/lib/dtrace/drti.o
OLD_FILES+=usr/lib/dtrace/errno.d
OLD_FILES+=usr/lib/dtrace/io.d
OLD_FILES+=usr/lib/dtrace/ip.d
OLD_FILES+=usr/lib/dtrace/psinfo.d
.if ${TARGET_ARCH} == "amd64" || ${TARGET_ARCH} == "i386"
OLD_FILES+=usr/lib/dtrace/regs_x86.d
.endif
OLD_FILES+=usr/lib/dtrace/signal.d
OLD_FILES+=usr/lib/dtrace/tcp.d
OLD_FILES+=usr/lib/dtrace/udp.d
OLD_FILES+=usr/lib/dtrace/unistd.d
OLD_FILES+=usr/lib/libavl.a
OLD_FILES+=usr/lib/libavl.so
OLD_FILES+=usr/lib/libavl_p.a
OLD_FILES+=usr/lib/libctf.a
OLD_FILES+=usr/lib/libctf.so
OLD_FILES+=usr/lib/libctf_p.a
OLD_FILES+=usr/lib/libdtrace.a
OLD_FILES+=usr/lib/libdtrace.so
OLD_FILES+=usr/lib/libdtrace_p.a
OLD_FILES+=usr/lib/libnvpair.a
OLD_FILES+=usr/lib/libnvpair.so
OLD_FILES+=usr/lib/libnvpair_p.a
OLD_FILES+=usr/lib/libumem.a
OLD_FILES+=usr/lib/libumem.so
OLD_FILES+=usr/lib/libumem_p.a
OLD_FILES+=usr/lib/libuutil.a
OLD_FILES+=usr/lib/libuutil.so
OLD_FILES+=usr/lib/libuutil_p.a
.if ${TARGET_ARCH} == "amd64" || ${TARGET_ARCH} == "powerpc64"
OLD_FILES+=usr/lib32/dtrace/drti.o
OLD_FILES+=usr/lib32/libavl.a
OLD_FILES+=usr/lib32/libavl.so
OLD_LIBS+=usr/lib32/libavl.so.2
OLD_FILES+=usr/lib32/libavl_p.a
OLD_FILES+=usr/lib32/libctf.a
OLD_FILES+=usr/lib32/libctf.so
OLD_LIBS+=usr/lib32/libctf.so.2
OLD_FILES+=usr/lib32/libctf_p.a
OLD_FILES+=usr/lib32/libdtrace.a
OLD_FILES+=usr/lib32/libdtrace.so
OLD_LIBS+=usr/lib32/libdtrace.so.2
OLD_FILES+=usr/lib32/libdtrace_p.a
OLD_FILES+=usr/lib32/libnvpair.a
OLD_FILES+=usr/lib32/libnvpair.so
OLD_LIBS+=usr/lib32/libnvpair.so.2
OLD_FILES+=usr/lib32/libnvpair_p.a
OLD_FILES+=usr/lib32/libumem.a
OLD_FILES+=usr/lib32/libumem.so
OLD_LIBS+=usr/lib32/libumem.so.2
OLD_FILES+=usr/lib32/libumem_p.a
OLD_FILES+=usr/lib32/libuutil.a
OLD_FILES+=usr/lib32/libuutil.so
OLD_LIBS+=usr/lib32/libuutil.so.2
OLD_FILES+=usr/lib32/libuutil_p.a
.endif
OLD_FILES+=usr/sbin/dtrace
OLD_FILES+=usr/sbin/lockstat
OLD_FILES+=usr/sbin/plockstat
OLD_FILES+=usr/share/man/man1/dtrace.1.gz
OLD_FILES+=usr/share/man/man1/dtruss.1.gz
OLD_FILES+=usr/share/man/man1/lockstat.1.gz
OLD_FILES+=usr/share/man/man1/plockstat.1.gz
OLD_FILES+=usr/share/dtrace/disklatency
OLD_FILES+=usr/share/dtrace/disklatencycmd
OLD_FILES+=usr/share/dtrace/hotopen
OLD_FILES+=usr/share/dtrace/nfsclienttime
OLD_FILES+=usr/share/dtrace/toolkit/execsnoop
OLD_FILES+=usr/share/dtrace/toolkit/hotkernel
OLD_FILES+=usr/share/dtrace/toolkit/hotuser
OLD_FILES+=usr/share/dtrace/toolkit/opensnoop
OLD_FILES+=usr/share/dtrace/toolkit/procsystime
OLD_FILES+=usr/share/dtrace/tcpconn
OLD_FILES+=usr/share/dtrace/tcpstate
OLD_FILES+=usr/share/dtrace/tcptrack
OLD_FILES+=usr/share/dtrace/udptrack
OLD_DIRS+=usr/lib/dtrace
OLD_DIRS+=usr/lib32/dtrace
OLD_DIRS+=usr/share/dtrace/toolkit
OLD_DIRS+=usr/share/dtrace
.endif
.if ${MK_ZFS} == no
OLD_FILES+=boot/gptzfsboot
OLD_FILES+=boot/zfsboot
OLD_FILES+=boot/zfsloader
OLD_FILES+=etc/rc.d/zfs
OLD_FILES+=etc/rc.d/zfsd
OLD_FILES+=etc/rc.d/zfsbe
OLD_FILES+=etc/rc.d/zvol
OLD_FILES+=etc/devd/zfs.conf
OLD_FILES+=etc/periodic/daily/404.status-zfs
OLD_FILES+=etc/periodic/daily/800.scrub-zfs
+OLD_FILES+=etc/zfs/exports
+OLD_DIRS+=etc/zfs
OLD_LIBS+=lib/libzfs.so.2
OLD_LIBS+=lib/libzfs.so.3
OLD_LIBS+=lib/libzfs_core.so.2
OLD_LIBS+=lib/libzpool.so.2
OLD_FILES+=rescue/zdb
OLD_FILES+=rescue/zfs
OLD_FILES+=rescue/zpool
OLD_FILES+=sbin/bectl
OLD_FILES+=sbin/zfs
OLD_FILES+=sbin/zpool
OLD_FILES+=sbin/zfsbootcfg
OLD_FILES+=usr/bin/zinject
OLD_FILES+=usr/bin/zstreamdump
OLD_FILES+=usr/bin/ztest
OLD_FILES+=usr/lib/libbe.a
OLD_FILES+=usr/lib/libbe_p.a
OLD_FILES+=usr/lib/libbe.so
OLD_LIBS+=lib/libbe.so.1
OLD_FILES+=usr/lib/libzfs.a
OLD_FILES+=usr/lib/libzfs.so
OLD_FILES+=usr/lib/libzfs_core.a
OLD_FILES+=usr/lib/libzfs_core.so
OLD_FILES+=usr/lib/libzfs_core_p.a
OLD_FILES+=usr/lib/libzfs_p.a
OLD_FILES+=usr/lib/libzpool.a
OLD_FILES+=usr/lib/libzpool.so
OLD_LIBS+=usr/lib/libzpool.so.2
.if ${TARGET_ARCH} == "amd64" || ${TARGET_ARCH} == "powerpc64"
OLD_FILES+=usr/lib32/libzfs.a
OLD_FILES+=usr/lib32/libzfs.so
OLD_LIBS+=usr/lib32/libzfs.so.2
OLD_LIBS+=usr/lib32/libzfs.so.3
OLD_FILES+=usr/lib32/libzfs_core.a
OLD_FILES+=usr/lib32/libzfs_core.so
OLD_LIBS+=usr/lib32/libzfs_core.so.2
OLD_FILES+=usr/lib32/libzfs_core_p.a
OLD_FILES+=usr/lib32/libzfs_p.a
OLD_FILES+=usr/lib32/libzpool.a
OLD_FILES+=usr/lib32/libzpool.so
OLD_LIBS+=usr/lib32/libzpool.so.2
.endif
OLD_FILES+=usr/sbin/zfsd
OLD_FILES+=usr/sbin/zhack
OLD_FILES+=usr/sbin/zdb
OLD_FILES+=usr/share/man/man3/libbe.3.gz
OLD_FILES+=usr/share/man/man7/zpool-features.7.gz
OLD_FILES+=usr/share/man/man8/bectl.8.gz
OLD_FILES+=usr/share/man/man8/gptzfsboot.8.gz
OLD_FILES+=usr/share/man/man8/zdb.8.gz
OLD_FILES+=usr/share/man/man8/zfs-program.8.gz
OLD_FILES+=usr/share/man/man8/zfs.8.gz
OLD_FILES+=usr/share/man/man8/zfsboot.8.gz
OLD_FILES+=usr/share/man/man8/zfsbootcfg.8.gz
OLD_FILES+=usr/share/man/man8/zfsd.8.gz
OLD_FILES+=usr/share/man/man8/zfsloader.8.gz
OLD_FILES+=usr/share/man/man8/zpool.8.gz
.endif
.if ${MK_CLANG} == no
OLD_FILES+=usr/bin/clang
OLD_FILES+=usr/bin/clang++
OLD_FILES+=usr/bin/clang-cpp
OLD_FILES+=usr/bin/clang-tblgen
OLD_FILES+=usr/bin/llvm-objdump
OLD_FILES+=usr/bin/llvm-tblgen
OLD_FILES+=usr/lib/clang/7.0.1/include/sanitizer/allocator_interface.h
OLD_FILES+=usr/lib/clang/7.0.1/include/sanitizer/asan_interface.h
OLD_FILES+=usr/lib/clang/7.0.1/include/sanitizer/common_interface_defs.h
OLD_FILES+=usr/lib/clang/7.0.1/include/sanitizer/coverage_interface.h OLD_FILES+=usr/lib/clang/7.0.1/include/sanitizer/dfsan_interface.h OLD_FILES+=usr/lib/clang/7.0.1/include/sanitizer/esan_interface.h OLD_FILES+=usr/lib/clang/7.0.1/include/sanitizer/hwasan_interface.h OLD_FILES+=usr/lib/clang/7.0.1/include/sanitizer/linux_syscall_hooks.h OLD_FILES+=usr/lib/clang/7.0.1/include/sanitizer/lsan_interface.h OLD_FILES+=usr/lib/clang/7.0.1/include/sanitizer/msan_interface.h OLD_FILES+=usr/lib/clang/7.0.1/include/sanitizer/netbsd_syscall_hooks.h OLD_FILES+=usr/lib/clang/7.0.1/include/sanitizer/scudo_interface.h OLD_FILES+=usr/lib/clang/7.0.1/include/sanitizer/tsan_interface.h OLD_FILES+=usr/lib/clang/7.0.1/include/sanitizer/tsan_interface_atomic.h OLD_DIRS+=usr/lib/clang/7.0.1/include/sanitizer OLD_FILES+=usr/lib/clang/7.0.1/include/__clang_cuda_builtin_vars.h OLD_FILES+=usr/lib/clang/7.0.1/include/__clang_cuda_cmath.h OLD_FILES+=usr/lib/clang/7.0.1/include/__clang_cuda_complex_builtins.h OLD_FILES+=usr/lib/clang/7.0.1/include/__clang_cuda_device_functions.h OLD_FILES+=usr/lib/clang/7.0.1/include/__clang_cuda_intrinsics.h OLD_FILES+=usr/lib/clang/7.0.1/include/__clang_cuda_libdevice_declares.h OLD_FILES+=usr/lib/clang/7.0.1/include/__clang_cuda_math_forward_declares.h OLD_FILES+=usr/lib/clang/7.0.1/include/__clang_cuda_runtime_wrapper.h OLD_FILES+=usr/lib/clang/7.0.1/include/__stddef_max_align_t.h OLD_FILES+=usr/lib/clang/7.0.1/include/__wmmintrin_aes.h OLD_FILES+=usr/lib/clang/7.0.1/include/__wmmintrin_pclmul.h OLD_FILES+=usr/lib/clang/7.0.1/include/adxintrin.h OLD_FILES+=usr/lib/clang/7.0.1/include/altivec.h OLD_FILES+=usr/lib/clang/7.0.1/include/ammintrin.h OLD_FILES+=usr/lib/clang/7.0.1/include/arm64intr.h OLD_FILES+=usr/lib/clang/7.0.1/include/arm_acle.h OLD_FILES+=usr/lib/clang/7.0.1/include/arm_fp16.h OLD_FILES+=usr/lib/clang/7.0.1/include/arm_neon.h OLD_FILES+=usr/lib/clang/7.0.1/include/armintr.h OLD_FILES+=usr/lib/clang/7.0.1/include/avx2intrin.h OLD_FILES+=usr/lib/clang/7.0.1/include/avx512bitalgintrin.h OLD_FILES+=usr/lib/clang/7.0.1/include/avx512bwintrin.h OLD_FILES+=usr/lib/clang/7.0.1/include/avx512cdintrin.h OLD_FILES+=usr/lib/clang/7.0.1/include/avx512dqintrin.h OLD_FILES+=usr/lib/clang/7.0.1/include/avx512erintrin.h OLD_FILES+=usr/lib/clang/7.0.1/include/avx512fintrin.h OLD_FILES+=usr/lib/clang/7.0.1/include/avx512ifmaintrin.h OLD_FILES+=usr/lib/clang/7.0.1/include/avx512ifmavlintrin.h OLD_FILES+=usr/lib/clang/7.0.1/include/avx512pfintrin.h OLD_FILES+=usr/lib/clang/7.0.1/include/avx512vbmi2intrin.h OLD_FILES+=usr/lib/clang/7.0.1/include/avx512vbmiintrin.h OLD_FILES+=usr/lib/clang/7.0.1/include/avx512vbmivlintrin.h OLD_FILES+=usr/lib/clang/7.0.1/include/avx512vlbitalgintrin.h OLD_FILES+=usr/lib/clang/7.0.1/include/avx512vlbwintrin.h OLD_FILES+=usr/lib/clang/7.0.1/include/avx512vlcdintrin.h OLD_FILES+=usr/lib/clang/7.0.1/include/avx512vldqintrin.h OLD_FILES+=usr/lib/clang/7.0.1/include/avx512vlintrin.h OLD_FILES+=usr/lib/clang/7.0.1/include/avx512vlvbmi2intrin.h OLD_FILES+=usr/lib/clang/7.0.1/include/avx512vlvnniintrin.h OLD_FILES+=usr/lib/clang/7.0.1/include/avx512vnniintrin.h OLD_FILES+=usr/lib/clang/7.0.1/include/avx512vpopcntdqintrin.h OLD_FILES+=usr/lib/clang/7.0.1/include/avx512vpopcntdqvlintrin.h OLD_FILES+=usr/lib/clang/7.0.1/include/avxintrin.h OLD_FILES+=usr/lib/clang/7.0.1/include/bmi2intrin.h OLD_FILES+=usr/lib/clang/7.0.1/include/bmiintrin.h OLD_FILES+=usr/lib/clang/7.0.1/include/cetintrin.h OLD_FILES+=usr/lib/clang/7.0.1/include/cldemoteintrin.h 
OLD_FILES+=usr/lib/clang/7.0.1/include/clflushoptintrin.h OLD_FILES+=usr/lib/clang/7.0.1/include/clwbintrin.h OLD_FILES+=usr/lib/clang/7.0.1/include/clzerointrin.h OLD_FILES+=usr/lib/clang/7.0.1/include/cpuid.h OLD_FILES+=usr/lib/clang/7.0.1/include/emmintrin.h OLD_FILES+=usr/lib/clang/7.0.1/include/f16cintrin.h OLD_FILES+=usr/lib/clang/7.0.1/include/fma4intrin.h OLD_FILES+=usr/lib/clang/7.0.1/include/fmaintrin.h OLD_FILES+=usr/lib/clang/7.0.1/include/fxsrintrin.h OLD_FILES+=usr/lib/clang/7.0.1/include/gfniintrin.h OLD_FILES+=usr/lib/clang/7.0.1/include/htmintrin.h OLD_FILES+=usr/lib/clang/7.0.1/include/htmxlintrin.h OLD_FILES+=usr/lib/clang/7.0.1/include/ia32intrin.h OLD_FILES+=usr/lib/clang/7.0.1/include/immintrin.h OLD_FILES+=usr/lib/clang/7.0.1/include/invpcidintrin.h OLD_FILES+=usr/lib/clang/7.0.1/include/lwpintrin.h OLD_FILES+=usr/lib/clang/7.0.1/include/lzcntintrin.h OLD_FILES+=usr/lib/clang/7.0.1/include/mm3dnow.h OLD_FILES+=usr/lib/clang/7.0.1/include/mm_malloc.h OLD_FILES+=usr/lib/clang/7.0.1/include/mmintrin.h OLD_FILES+=usr/lib/clang/7.0.1/include/module.modulemap OLD_FILES+=usr/lib/clang/7.0.1/include/movdirintrin.h OLD_FILES+=usr/lib/clang/7.0.1/include/msa.h OLD_FILES+=usr/lib/clang/7.0.1/include/mwaitxintrin.h OLD_FILES+=usr/lib/clang/7.0.1/include/nmmintrin.h OLD_FILES+=usr/lib/clang/7.0.1/include/opencl-c.h OLD_FILES+=usr/lib/clang/7.0.1/include/pconfigintrin.h OLD_FILES+=usr/lib/clang/7.0.1/include/pkuintrin.h OLD_FILES+=usr/lib/clang/7.0.1/include/pmmintrin.h OLD_FILES+=usr/lib/clang/7.0.1/include/popcntintrin.h OLD_FILES+=usr/lib/clang/7.0.1/include/prfchwintrin.h OLD_FILES+=usr/lib/clang/7.0.1/include/ptwriteintrin.h OLD_FILES+=usr/lib/clang/7.0.1/include/rdseedintrin.h OLD_FILES+=usr/lib/clang/7.0.1/include/rtmintrin.h OLD_FILES+=usr/lib/clang/7.0.1/include/s390intrin.h OLD_FILES+=usr/lib/clang/7.0.1/include/sgxintrin.h OLD_FILES+=usr/lib/clang/7.0.1/include/shaintrin.h OLD_FILES+=usr/lib/clang/7.0.1/include/smmintrin.h OLD_FILES+=usr/lib/clang/7.0.1/include/tbmintrin.h OLD_FILES+=usr/lib/clang/7.0.1/include/tmmintrin.h OLD_FILES+=usr/lib/clang/7.0.1/include/vadefs.h OLD_FILES+=usr/lib/clang/7.0.1/include/vaesintrin.h OLD_FILES+=usr/lib/clang/7.0.1/include/vecintrin.h OLD_FILES+=usr/lib/clang/7.0.1/include/vpclmulqdqintrin.h OLD_FILES+=usr/lib/clang/7.0.1/include/waitpkgintrin.h OLD_FILES+=usr/lib/clang/7.0.1/include/wbnoinvdintrin.h OLD_FILES+=usr/lib/clang/7.0.1/include/wmmintrin.h OLD_FILES+=usr/lib/clang/7.0.1/include/x86intrin.h OLD_FILES+=usr/lib/clang/7.0.1/include/xmmintrin.h OLD_FILES+=usr/lib/clang/7.0.1/include/xopintrin.h OLD_FILES+=usr/lib/clang/7.0.1/include/xsavecintrin.h OLD_FILES+=usr/lib/clang/7.0.1/include/xsaveintrin.h OLD_FILES+=usr/lib/clang/7.0.1/include/xsaveoptintrin.h OLD_FILES+=usr/lib/clang/7.0.1/include/xsavesintrin.h OLD_FILES+=usr/lib/clang/7.0.1/include/xtestintrin.h OLD_DIRS+=usr/lib/clang/7.0.1/include OLD_FILES+=usr/lib/clang/7.0.1/lib/freebsd/libclang_rt.asan-i386.a OLD_FILES+=usr/lib/clang/7.0.1/lib/freebsd/libclang_rt.asan-i386.so OLD_FILES+=usr/lib/clang/7.0.1/lib/freebsd/libclang_rt.asan-preinit-i386.a OLD_FILES+=usr/lib/clang/7.0.1/lib/freebsd/libclang_rt.asan-preinit-x86_64.a OLD_FILES+=usr/lib/clang/7.0.1/lib/freebsd/libclang_rt.asan-x86_64.a OLD_FILES+=usr/lib/clang/7.0.1/lib/freebsd/libclang_rt.asan-x86_64.so OLD_FILES+=usr/lib/clang/7.0.1/lib/freebsd/libclang_rt.asan_cxx-i386.a OLD_FILES+=usr/lib/clang/7.0.1/lib/freebsd/libclang_rt.asan_cxx-x86_64.a OLD_FILES+=usr/lib/clang/7.0.1/lib/freebsd/libclang_rt.msan-i386.a 
OLD_FILES+=usr/lib/clang/7.0.1/lib/freebsd/libclang_rt.msan-x86_64.a
OLD_FILES+=usr/lib/clang/7.0.1/lib/freebsd/libclang_rt.msan_cxx-i386.a
OLD_FILES+=usr/lib/clang/7.0.1/lib/freebsd/libclang_rt.msan_cxx-x86_64.a
OLD_FILES+=usr/lib/clang/7.0.1/lib/freebsd/libclang_rt.profile-arm.a
OLD_FILES+=usr/lib/clang/7.0.1/lib/freebsd/libclang_rt.profile-armhf.a
OLD_FILES+=usr/lib/clang/7.0.1/lib/freebsd/libclang_rt.profile-i386.a
OLD_FILES+=usr/lib/clang/7.0.1/lib/freebsd/libclang_rt.profile-x86_64.a
OLD_FILES+=usr/lib/clang/7.0.1/lib/freebsd/libclang_rt.safestack-i386.a
OLD_FILES+=usr/lib/clang/7.0.1/lib/freebsd/libclang_rt.safestack-x86_64.a
OLD_FILES+=usr/lib/clang/7.0.1/lib/freebsd/libclang_rt.stats-i386.a
OLD_FILES+=usr/lib/clang/7.0.1/lib/freebsd/libclang_rt.stats-x86_64.a
OLD_FILES+=usr/lib/clang/7.0.1/lib/freebsd/libclang_rt.stats_client-i386.a
OLD_FILES+=usr/lib/clang/7.0.1/lib/freebsd/libclang_rt.stats_client-x86_64.a
OLD_FILES+=usr/lib/clang/7.0.1/lib/freebsd/libclang_rt.tsan-x86_64.a
OLD_FILES+=usr/lib/clang/7.0.1/lib/freebsd/libclang_rt.tsan_cxx-x86_64.a
OLD_FILES+=usr/lib/clang/7.0.1/lib/freebsd/libclang_rt.ubsan_minimal-i386.a
OLD_FILES+=usr/lib/clang/7.0.1/lib/freebsd/libclang_rt.ubsan_minimal-x86_64.a
OLD_FILES+=usr/lib/clang/7.0.1/lib/freebsd/libclang_rt.ubsan_standalone-i386.a
OLD_FILES+=usr/lib/clang/7.0.1/lib/freebsd/libclang_rt.ubsan_standalone-x86_64.a
OLD_FILES+=usr/lib/clang/7.0.1/lib/freebsd/libclang_rt.ubsan_standalone_cxx-i386.a
OLD_FILES+=usr/lib/clang/7.0.1/lib/freebsd/libclang_rt.ubsan_standalone_cxx-x86_64.a
OLD_DIRS+=usr/lib/clang/7.0.1/lib/freebsd
OLD_DIRS+=usr/lib/clang/7.0.1/lib
OLD_DIRS+=usr/lib/clang/7.0.1
OLD_DIRS+=usr/lib/clang
OLD_FILES+=usr/share/doc/llvm/clang/LICENSE.TXT
OLD_DIRS+=usr/share/doc/llvm/clang
OLD_FILES+=usr/share/doc/llvm/COPYRIGHT.regex
OLD_FILES+=usr/share/doc/llvm/LICENSE.TXT
OLD_DIRS+=usr/share/doc/llvm
OLD_FILES+=usr/share/man/man1/clang.1.gz
OLD_FILES+=usr/share/man/man1/clang++.1.gz
OLD_FILES+=usr/share/man/man1/clang-cpp.1.gz
OLD_FILES+=usr/share/man/man1/llvm-tblgen.1.gz
.endif
.if ${MK_CLANG_EXTRAS} == no
OLD_FILES+=usr/bin/bugpoint
OLD_FILES+=usr/bin/clang-format
OLD_FILES+=usr/bin/llc
OLD_FILES+=usr/bin/lli
OLD_FILES+=usr/bin/llvm-ar
OLD_FILES+=usr/bin/llvm-as
OLD_FILES+=usr/bin/llvm-bcanalyzer
OLD_FILES+=usr/bin/llvm-cxxdump
OLD_FILES+=usr/bin/llvm-cxxfilt
OLD_FILES+=usr/bin/llvm-diff
OLD_FILES+=usr/bin/llvm-dis
OLD_FILES+=usr/bin/llvm-dwarfdump
OLD_FILES+=usr/bin/llvm-extract
OLD_FILES+=usr/bin/llvm-link
OLD_FILES+=usr/bin/llvm-lto
OLD_FILES+=usr/bin/llvm-lto2
OLD_FILES+=usr/bin/llvm-mc
OLD_FILES+=usr/bin/llvm-mca
OLD_FILES+=usr/bin/llvm-modextract
OLD_FILES+=usr/bin/llvm-nm
OLD_FILES+=usr/bin/llvm-objcopy
OLD_FILES+=usr/bin/llvm-pdbutil
OLD_FILES+=usr/bin/llvm-ranlib
OLD_FILES+=usr/bin/llvm-rtdyld
OLD_FILES+=usr/bin/llvm-symbolizer
OLD_FILES+=usr/bin/llvm-xray
OLD_FILES+=usr/bin/opt
OLD_FILES+=usr/share/man/man1/bugpoint.1.gz
OLD_FILES+=usr/share/man/man1/llc.1.gz
OLD_FILES+=usr/share/man/man1/lli.1.gz
OLD_FILES+=usr/share/man/man1/llvm-ar.1.gz
OLD_FILES+=usr/share/man/man1/llvm-as.1.gz
OLD_FILES+=usr/share/man/man1/llvm-bcanalyzer.1.gz
OLD_FILES+=usr/share/man/man1/llvm-diff.1.gz
OLD_FILES+=usr/share/man/man1/llvm-dis.1.gz
OLD_FILES+=usr/share/man/man1/llvm-dwarfdump.1.gz
OLD_FILES+=usr/share/man/man1/llvm-extract.1.gz
OLD_FILES+=usr/share/man/man1/llvm-link.1.gz
OLD_FILES+=usr/share/man/man1/llvm-nm.1.gz
OLD_FILES+=usr/share/man/man1/llvm-pdbutil.1.gz
OLD_FILES+=usr/share/man/man1/llvm-symbolizer.1.gz
OLD_FILES+=usr/share/man/man1/opt.1.gz
.endif
.if ${MK_CPP} == no
OLD_FILES+=usr/bin/cpp
OLD_FILES+=usr/share/man/man1/cpp.1.gz
.endif
.if ${MK_CUSE} == no
OLD_FILES+=usr/include/fs/cuse/cuse_defs.h
OLD_FILES+=usr/include/fs/cuse/cuse_ioctl.h
OLD_FILES+=usr/include/cuse.h
OLD_FILES+=usr/lib/libcuse.a
OLD_LIBS+=usr/lib/libcuse.so.1
OLD_FILES+=usr/lib/libcuse_p.a
OLD_FILES+=usr/share/man/man3/cuse.3.gz
OLD_FILES+=usr/share/man/man3/cuse_alloc_unit_number.3.gz
OLD_FILES+=usr/share/man/man3/cuse_alloc_unit_number_by_id.3.gz
OLD_FILES+=usr/share/man/man3/cuse_copy_in.3.gz
OLD_FILES+=usr/share/man/man3/cuse_copy_out.3.gz
OLD_FILES+=usr/share/man/man3/cuse_dev_create.3.gz
OLD_FILES+=usr/share/man/man3/cuse_dev_destroy.3.gz
OLD_FILES+=usr/share/man/man3/cuse_dev_get_current.3.gz
OLD_FILES+=usr/share/man/man3/cuse_dev_get_per_file_handle.3.gz
OLD_FILES+=usr/share/man/man3/cuse_dev_get_priv0.3.gz
OLD_FILES+=usr/share/man/man3/cuse_dev_get_priv1.3.gz
OLD_FILES+=usr/share/man/man3/cuse_dev_set_per_file_handle.3.gz
OLD_FILES+=usr/share/man/man3/cuse_dev_set_priv0.3.gz
OLD_FILES+=usr/share/man/man3/cuse_dev_set_priv1.3.gz
OLD_FILES+=usr/share/man/man3/cuse_free_unit_number.3.gz
OLD_FILES+=usr/share/man/man3/cuse_free_unit_number_by_id.3.gz
OLD_FILES+=usr/share/man/man3/cuse_get_local.3.gz
OLD_FILES+=usr/share/man/man3/cuse_got_peer_signal.3.gz
OLD_FILES+=usr/share/man/man3/cuse_init.3.gz
OLD_FILES+=usr/share/man/man3/cuse_is_vmalloc_addr.3.gz
OLD_FILES+=usr/share/man/man3/cuse_poll_wakeup.3.gz
OLD_FILES+=usr/share/man/man3/cuse_set_local.3.gz
OLD_FILES+=usr/share/man/man3/cuse_uninit.3.gz
OLD_FILES+=usr/share/man/man3/cuse_vmalloc.3.gz
OLD_FILES+=usr/share/man/man3/cuse_vmfree.3.gz
OLD_FILES+=usr/share/man/man3/cuse_vmoffset.3.gz
OLD_FILES+=usr/share/man/man3/cuse_wait_and_process.3.gz
OLD_DIRS+=usr/include/fs/cuse
.endif
# devd(8) not listed here on purpose
.if ${MK_CXX} == no
OLD_FILES+=usr/bin/CC
OLD_FILES+=usr/bin/c++
OLD_FILES+=usr/bin/g++
OLD_FILES+=usr/libexec/cc1plus
.endif
.if ${MK_DEBUG_FILES} == no
# Standalone debug files under /usr/lib/debug (other than those for the
# boot blocks) are enumerated at run time with find(1), so that whatever
# debug files a previous build installed are scheduled for removal.
.if exists(${DESTDIR}/usr/lib/debug)
DEBUG_DIRS!=find ${DESTDIR}/usr/lib/debug -mindepth 1 \
	-type d \! -path "${DESTDIR}/usr/lib/debug/boot/*" \
	| sed -e 's,^${DESTDIR}/,,'; echo
DEBUG_FILES!=find ${DESTDIR}/usr/lib/debug \
	\! -type d \! -path "${DESTDIR}/usr/lib/debug/boot/*" \! -name "lib*.so*" \
	| sed -e 's,^${DESTDIR}/,,'; echo
DEBUG_LIBS!=find ${DESTDIR}/usr/lib/debug \! \
-type d -name "lib*.so*" \ | sed -e 's,^${DESTDIR}/,,'; echo OLD_DIRS+=${DEBUG_DIRS} OLD_FILES+=${DEBUG_FILES} OLD_LIBS+=${DEBUG_LIBS} .endif .endif .if ${MK_DIALOG} == no OLD_FILES+=usr/bin/dialog OLD_FILES+=usr/bin/dpv OLD_FILES+=usr/lib/libdialog.a OLD_FILES+=usr/lib/libdialog.so OLD_FILES+=usr/lib/libdialog.so.8 OLD_FILES+=usr/lib/libdialog_p.a OLD_FILES+=usr/lib/libdpv.a OLD_FILES+=usr/lib/libdpv.so OLD_FILES+=usr/lib/libdpv.so.1 OLD_FILES+=usr/lib/libdpv_p.a OLD_FILES+=usr/sbin/bsdconfig OLD_FILES+=usr/share/man/man1/dialog.1.gz OLD_FILES+=usr/share/man/man1/dpv.1.gz OLD_FILES+=usr/share/man/man3/dialog.3.gz OLD_FILES+=usr/share/man/man3/dpv.3.gz OLD_FILES+=usr/share/man/man8/bsdconfig.8.gz OLD_DIRS+=usr/share/bsdconfig OLD_DIRS+=usr/share/bsdconfig/media OLD_DIRS+=usr/share/bsdconfig/networking OLD_DIRS+=usr/share/bsdconfig/packages OLD_DIRS+=usr/share/bsdconfig/password OLD_DIRS+=usr/share/bsdconfig/startup OLD_DIRS+=usr/share/bsdconfig/timezone OLD_DIRS+=usr/share/bsdconfig/usermgmt .endif .if ${MK_EFI} == no OLD_FILES+=usr/sbin/efibootmgr OLD_FILES+=usr/sbin/efidp OLD_FILES+=usr/sbin/efivar OLD_FILES+=usr/sbin/uefisign OLD_FILES+=usr/share/examples/uefisign/uefikeys .endif .if ${MK_FMTREE} == no OLD_FILES+=usr/sbin/fmtree OLD_FILES+=usr/share/man/man8/fmtree.8.gz .endif .if ${MK_FTP} == no OLD_FILES+=etc/ftpusers OLD_FILES+=etc/newsyslog.conf.d/ftp.conf OLD_FILES+=etc/pam.d/ftp OLD_FILES+=etc/pam.d/ftpd OLD_FILES+=etc/rc.d/ftpd OLD_FILES+=etc/syslog.d/ftp.conf OLD_FILES+=usr/bin/ftp OLD_FILES+=usr/bin/gate-ftp OLD_FILES+=usr/bin/pftp OLD_FILES+=usr/libexec/ftpd OLD_FILES+=usr/share/man/man1/ftp.1.gz OLD_FILES+=usr/share/man/man1/gate-ftp.1.gz OLD_FILES+=usr/share/man/man1/pftp.1.gz OLD_FILES+=usr/share/man/man5/ftpchroot.5.gz OLD_FILES+=usr/share/man/man8/ftpd.8.gz .endif .if ${MK_GNUCXX} == no OLD_FILES+=usr/bin/g++ OLD_FILES+=usr/include/c++/4.2/algorithm OLD_FILES+=usr/include/c++/4.2/backward/algo.h OLD_FILES+=usr/include/c++/4.2/backward/algobase.h OLD_FILES+=usr/include/c++/4.2/backward/alloc.h OLD_FILES+=usr/include/c++/4.2/backward/backward_warning.h OLD_FILES+=usr/include/c++/4.2/backward/bvector.h OLD_FILES+=usr/include/c++/4.2/backward/complex.h OLD_FILES+=usr/include/c++/4.2/backward/defalloc.h OLD_FILES+=usr/include/c++/4.2/backward/deque.h OLD_FILES+=usr/include/c++/4.2/backward/fstream.h OLD_FILES+=usr/include/c++/4.2/backward/function.h OLD_FILES+=usr/include/c++/4.2/backward/hash_map.h OLD_FILES+=usr/include/c++/4.2/backward/hash_set.h OLD_FILES+=usr/include/c++/4.2/backward/hashtable.h OLD_FILES+=usr/include/c++/4.2/backward/heap.h OLD_FILES+=usr/include/c++/4.2/backward/iomanip.h OLD_FILES+=usr/include/c++/4.2/backward/iostream.h OLD_FILES+=usr/include/c++/4.2/backward/istream.h OLD_FILES+=usr/include/c++/4.2/backward/iterator.h OLD_FILES+=usr/include/c++/4.2/backward/list.h OLD_FILES+=usr/include/c++/4.2/backward/map.h OLD_FILES+=usr/include/c++/4.2/backward/multimap.h OLD_FILES+=usr/include/c++/4.2/backward/multiset.h OLD_FILES+=usr/include/c++/4.2/backward/new.h OLD_FILES+=usr/include/c++/4.2/backward/ostream.h OLD_FILES+=usr/include/c++/4.2/backward/pair.h OLD_FILES+=usr/include/c++/4.2/backward/queue.h OLD_FILES+=usr/include/c++/4.2/backward/rope.h OLD_FILES+=usr/include/c++/4.2/backward/set.h OLD_FILES+=usr/include/c++/4.2/backward/slist.h OLD_FILES+=usr/include/c++/4.2/backward/stack.h OLD_FILES+=usr/include/c++/4.2/backward/stream.h OLD_FILES+=usr/include/c++/4.2/backward/streambuf.h OLD_FILES+=usr/include/c++/4.2/backward/strstream 
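# The MK_DEBUG_FILES block above is the one part of this list that is
# computed at parse time rather than enumerated: BSD make's != assignment
# runs a shell command while the makefile is being read and captures its
# output, with newlines collapsed to spaces.  A minimal sketch of the same
# idiom (hypothetical variable name and find expression, not part of this
# file):
#
#	STALE_DEBUG!=	find ${DESTDIR}/usr/lib/debug \! -type d \
#			| sed -e 's,^${DESTDIR}/,,'; echo
#	OLD_FILES+=${STALE_DEBUG}
#
# The trailing "; echo" keeps the command's exit status zero (bmake warns
# when a != command exits non-zero), and the sed strips ${DESTDIR} so the
# computed entries use the same root-relative form as the literal
# OLD_FILES paths.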
OLD_FILES+=usr/include/c++/4.2/backward/tempbuf.h OLD_FILES+=usr/include/c++/4.2/backward/tree.h OLD_FILES+=usr/include/c++/4.2/backward/vector.h OLD_FILES+=usr/include/c++/4.2/bits/allocator.h OLD_FILES+=usr/include/c++/4.2/bits/atomic_word.h OLD_FILES+=usr/include/c++/4.2/bits/basic_file.h OLD_FILES+=usr/include/c++/4.2/bits/basic_ios.h OLD_FILES+=usr/include/c++/4.2/bits/basic_ios.tcc OLD_FILES+=usr/include/c++/4.2/bits/basic_string.h OLD_FILES+=usr/include/c++/4.2/bits/basic_string.tcc OLD_FILES+=usr/include/c++/4.2/bits/boost_concept_check.h OLD_FILES+=usr/include/c++/4.2/bits/c++allocator.h OLD_FILES+=usr/include/c++/4.2/bits/c++config.h OLD_FILES+=usr/include/c++/4.2/bits/c++io.h OLD_FILES+=usr/include/c++/4.2/bits/c++locale.h OLD_FILES+=usr/include/c++/4.2/bits/c++locale_internal.h OLD_FILES+=usr/include/c++/4.2/bits/char_traits.h OLD_FILES+=usr/include/c++/4.2/bits/cmath.tcc OLD_FILES+=usr/include/c++/4.2/bits/codecvt.h OLD_FILES+=usr/include/c++/4.2/bits/compatibility.h OLD_FILES+=usr/include/c++/4.2/bits/concept_check.h OLD_FILES+=usr/include/c++/4.2/bits/cpp_type_traits.h OLD_FILES+=usr/include/c++/4.2/bits/cpu_defines.h OLD_FILES+=usr/include/c++/4.2/bits/ctype_base.h OLD_FILES+=usr/include/c++/4.2/bits/ctype_inline.h OLD_FILES+=usr/include/c++/4.2/bits/ctype_noninline.h OLD_FILES+=usr/include/c++/4.2/bits/cxxabi_tweaks.h OLD_FILES+=usr/include/c++/4.2/bits/deque.tcc OLD_FILES+=usr/include/c++/4.2/bits/fstream.tcc OLD_FILES+=usr/include/c++/4.2/bits/functexcept.h OLD_FILES+=usr/include/c++/4.2/bits/gslice.h OLD_FILES+=usr/include/c++/4.2/bits/gslice_array.h OLD_FILES+=usr/include/c++/4.2/bits/gthr-default.h OLD_FILES+=usr/include/c++/4.2/bits/gthr-posix.h OLD_FILES+=usr/include/c++/4.2/bits/gthr-single.h OLD_FILES+=usr/include/c++/4.2/bits/gthr-tpf.h OLD_FILES+=usr/include/c++/4.2/bits/gthr.h OLD_FILES+=usr/include/c++/4.2/bits/indirect_array.h OLD_FILES+=usr/include/c++/4.2/bits/ios_base.h OLD_FILES+=usr/include/c++/4.2/bits/istream.tcc OLD_FILES+=usr/include/c++/4.2/bits/list.tcc OLD_FILES+=usr/include/c++/4.2/bits/locale_classes.h OLD_FILES+=usr/include/c++/4.2/bits/locale_facets.h OLD_FILES+=usr/include/c++/4.2/bits/locale_facets.tcc OLD_FILES+=usr/include/c++/4.2/bits/localefwd.h OLD_FILES+=usr/include/c++/4.2/bits/mask_array.h OLD_FILES+=usr/include/c++/4.2/bits/messages_members.h OLD_FILES+=usr/include/c++/4.2/bits/os_defines.h OLD_FILES+=usr/include/c++/4.2/bits/ostream.tcc OLD_FILES+=usr/include/c++/4.2/bits/ostream_insert.h OLD_FILES+=usr/include/c++/4.2/bits/postypes.h OLD_FILES+=usr/include/c++/4.2/bits/slice_array.h OLD_FILES+=usr/include/c++/4.2/bits/sstream.tcc OLD_FILES+=usr/include/c++/4.2/bits/stl_algo.h OLD_FILES+=usr/include/c++/4.2/bits/stl_algobase.h OLD_FILES+=usr/include/c++/4.2/bits/stl_bvector.h OLD_FILES+=usr/include/c++/4.2/bits/stl_construct.h OLD_FILES+=usr/include/c++/4.2/bits/stl_deque.h OLD_FILES+=usr/include/c++/4.2/bits/stl_function.h OLD_FILES+=usr/include/c++/4.2/bits/stl_heap.h OLD_FILES+=usr/include/c++/4.2/bits/stl_iterator.h OLD_FILES+=usr/include/c++/4.2/bits/stl_iterator_base_funcs.h OLD_FILES+=usr/include/c++/4.2/bits/stl_iterator_base_types.h OLD_FILES+=usr/include/c++/4.2/bits/stl_list.h OLD_FILES+=usr/include/c++/4.2/bits/stl_map.h OLD_FILES+=usr/include/c++/4.2/bits/stl_multimap.h OLD_FILES+=usr/include/c++/4.2/bits/stl_multiset.h OLD_FILES+=usr/include/c++/4.2/bits/stl_numeric.h OLD_FILES+=usr/include/c++/4.2/bits/stl_pair.h OLD_FILES+=usr/include/c++/4.2/bits/stl_queue.h 
OLD_FILES+=usr/include/c++/4.2/bits/stl_raw_storage_iter.h OLD_FILES+=usr/include/c++/4.2/bits/stl_relops.h OLD_FILES+=usr/include/c++/4.2/bits/stl_set.h OLD_FILES+=usr/include/c++/4.2/bits/stl_stack.h OLD_FILES+=usr/include/c++/4.2/bits/stl_tempbuf.h OLD_FILES+=usr/include/c++/4.2/bits/stl_tree.h OLD_FILES+=usr/include/c++/4.2/bits/stl_uninitialized.h OLD_FILES+=usr/include/c++/4.2/bits/stl_vector.h OLD_FILES+=usr/include/c++/4.2/bits/stream_iterator.h OLD_FILES+=usr/include/c++/4.2/bits/streambuf.tcc OLD_FILES+=usr/include/c++/4.2/bits/streambuf_iterator.h OLD_FILES+=usr/include/c++/4.2/bits/stringfwd.h OLD_FILES+=usr/include/c++/4.2/bits/time_members.h OLD_FILES+=usr/include/c++/4.2/bits/valarray_after.h OLD_FILES+=usr/include/c++/4.2/bits/valarray_array.h OLD_FILES+=usr/include/c++/4.2/bits/valarray_array.tcc OLD_FILES+=usr/include/c++/4.2/bits/valarray_before.h OLD_FILES+=usr/include/c++/4.2/bits/vector.tcc OLD_FILES+=usr/include/c++/4.2/bitset OLD_FILES+=usr/include/c++/4.2/cassert OLD_FILES+=usr/include/c++/4.2/cctype OLD_FILES+=usr/include/c++/4.2/cerrno OLD_FILES+=usr/include/c++/4.2/cfloat OLD_FILES+=usr/include/c++/4.2/ciso646 OLD_FILES+=usr/include/c++/4.2/climits OLD_FILES+=usr/include/c++/4.2/clocale OLD_FILES+=usr/include/c++/4.2/cmath OLD_FILES+=usr/include/c++/4.2/complex OLD_FILES+=usr/include/c++/4.2/csetjmp OLD_FILES+=usr/include/c++/4.2/csignal OLD_FILES+=usr/include/c++/4.2/cstdarg OLD_FILES+=usr/include/c++/4.2/cstddef OLD_FILES+=usr/include/c++/4.2/cstdio OLD_FILES+=usr/include/c++/4.2/cstdlib OLD_FILES+=usr/include/c++/4.2/cstring OLD_FILES+=usr/include/c++/4.2/ctime OLD_FILES+=usr/include/c++/4.2/cwchar OLD_FILES+=usr/include/c++/4.2/cwctype OLD_FILES+=usr/include/c++/4.2/cxxabi.h OLD_FILES+=usr/include/c++/4.2/debug/bitset OLD_FILES+=usr/include/c++/4.2/debug/debug.h OLD_FILES+=usr/include/c++/4.2/debug/deque OLD_FILES+=usr/include/c++/4.2/debug/formatter.h OLD_FILES+=usr/include/c++/4.2/debug/functions.h OLD_FILES+=usr/include/c++/4.2/debug/hash_map OLD_FILES+=usr/include/c++/4.2/debug/hash_map.h OLD_FILES+=usr/include/c++/4.2/debug/hash_multimap.h OLD_FILES+=usr/include/c++/4.2/debug/hash_multiset.h OLD_FILES+=usr/include/c++/4.2/debug/hash_set OLD_FILES+=usr/include/c++/4.2/debug/hash_set.h OLD_FILES+=usr/include/c++/4.2/debug/list OLD_FILES+=usr/include/c++/4.2/debug/macros.h OLD_FILES+=usr/include/c++/4.2/debug/map OLD_FILES+=usr/include/c++/4.2/debug/map.h OLD_FILES+=usr/include/c++/4.2/debug/multimap.h OLD_FILES+=usr/include/c++/4.2/debug/multiset.h OLD_FILES+=usr/include/c++/4.2/debug/safe_base.h OLD_FILES+=usr/include/c++/4.2/debug/safe_iterator.h OLD_FILES+=usr/include/c++/4.2/debug/safe_iterator.tcc OLD_FILES+=usr/include/c++/4.2/debug/safe_sequence.h OLD_FILES+=usr/include/c++/4.2/debug/set OLD_FILES+=usr/include/c++/4.2/debug/set.h OLD_FILES+=usr/include/c++/4.2/debug/string OLD_FILES+=usr/include/c++/4.2/debug/vector OLD_FILES+=usr/include/c++/4.2/deque OLD_FILES+=usr/include/c++/4.2/exception OLD_FILES+=usr/include/c++/4.2/exception_defines.h OLD_FILES+=usr/include/c++/4.2/ext/algorithm OLD_FILES+=usr/include/c++/4.2/ext/array_allocator.h OLD_FILES+=usr/include/c++/4.2/ext/atomicity.h OLD_FILES+=usr/include/c++/4.2/ext/bitmap_allocator.h OLD_FILES+=usr/include/c++/4.2/ext/codecvt_specializations.h OLD_FILES+=usr/include/c++/4.2/ext/concurrence.h OLD_FILES+=usr/include/c++/4.2/ext/debug_allocator.h OLD_FILES+=usr/include/c++/4.2/ext/functional OLD_FILES+=usr/include/c++/4.2/ext/hash_fun.h OLD_FILES+=usr/include/c++/4.2/ext/hash_map 
OLD_FILES+=usr/include/c++/4.2/ext/hash_set OLD_FILES+=usr/include/c++/4.2/ext/hashtable.h OLD_FILES+=usr/include/c++/4.2/ext/iterator OLD_FILES+=usr/include/c++/4.2/ext/malloc_allocator.h OLD_FILES+=usr/include/c++/4.2/ext/memory OLD_FILES+=usr/include/c++/4.2/ext/mt_allocator.h OLD_FILES+=usr/include/c++/4.2/ext/new_allocator.h OLD_FILES+=usr/include/c++/4.2/ext/numeric OLD_FILES+=usr/include/c++/4.2/ext/numeric_traits.h OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/assoc_container.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/basic_tree_policy/basic_tree_policy_base.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/basic_tree_policy/null_node_metadata.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/basic_tree_policy/traits.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/basic_types.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/bin_search_tree_/bin_search_tree_.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/bin_search_tree_/cond_dtor_entry_dealtor.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/bin_search_tree_/cond_key_dtor_entry_dealtor.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/bin_search_tree_/constructors_destructor_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/bin_search_tree_/debug_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/bin_search_tree_/erase_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/bin_search_tree_/find_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/bin_search_tree_/info_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/bin_search_tree_/insert_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/bin_search_tree_/iterators_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/bin_search_tree_/node_iterators.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/bin_search_tree_/point_iterators.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/bin_search_tree_/policy_access_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/bin_search_tree_/r_erase_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/bin_search_tree_/rotate_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/bin_search_tree_/split_join_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/bin_search_tree_/traits.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/binary_heap_/binary_heap_.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/binary_heap_/const_iterator.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/binary_heap_/const_point_iterator.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/binary_heap_/constructors_destructor_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/binary_heap_/debug_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/binary_heap_/entry_cmp.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/binary_heap_/entry_pred.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/binary_heap_/erase_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/binary_heap_/find_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/binary_heap_/info_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/binary_heap_/insert_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/binary_heap_/iterators_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/binary_heap_/policy_access_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/binary_heap_/resize_policy.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/binary_heap_/split_join_fn_imps.hpp 
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/binary_heap_/trace_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/binomial_heap_/binomial_heap_.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/binomial_heap_/constructors_destructor_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/binomial_heap_/debug_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/binomial_heap_base_/binomial_heap_base_.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/binomial_heap_base_/constructors_destructor_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/binomial_heap_base_/debug_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/binomial_heap_base_/erase_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/binomial_heap_base_/find_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/binomial_heap_base_/insert_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/binomial_heap_base_/split_join_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/cc_hash_table_map_/cc_ht_map_.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/cc_hash_table_map_/cmp_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/cc_hash_table_map_/cond_key_dtor_entry_dealtor.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/cc_hash_table_map_/constructor_destructor_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/cc_hash_table_map_/constructor_destructor_no_store_hash_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/cc_hash_table_map_/constructor_destructor_store_hash_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/cc_hash_table_map_/debug_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/cc_hash_table_map_/debug_no_store_hash_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/cc_hash_table_map_/debug_store_hash_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/cc_hash_table_map_/entry_list_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/cc_hash_table_map_/erase_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/cc_hash_table_map_/erase_no_store_hash_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/cc_hash_table_map_/erase_store_hash_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/cc_hash_table_map_/find_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/cc_hash_table_map_/find_store_hash_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/cc_hash_table_map_/info_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/cc_hash_table_map_/insert_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/cc_hash_table_map_/insert_no_store_hash_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/cc_hash_table_map_/insert_store_hash_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/cc_hash_table_map_/iterators_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/cc_hash_table_map_/policy_access_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/cc_hash_table_map_/resize_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/cc_hash_table_map_/resize_no_store_hash_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/cc_hash_table_map_/resize_store_hash_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/cc_hash_table_map_/size_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/cc_hash_table_map_/standard_policies.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/cc_hash_table_map_/trace_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/cond_dealtor.hpp 
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/constructors_destructor_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/container_base_dispatch.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/eq_fn/eq_by_less.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/eq_fn/hash_eq_fn.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/gp_hash_table_map_/constructor_destructor_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/gp_hash_table_map_/constructor_destructor_no_store_hash_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/gp_hash_table_map_/constructor_destructor_store_hash_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/gp_hash_table_map_/debug_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/gp_hash_table_map_/debug_no_store_hash_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/gp_hash_table_map_/debug_store_hash_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/gp_hash_table_map_/erase_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/gp_hash_table_map_/erase_no_store_hash_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/gp_hash_table_map_/erase_store_hash_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/gp_hash_table_map_/find_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/gp_hash_table_map_/find_no_store_hash_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/gp_hash_table_map_/find_store_hash_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/gp_hash_table_map_/gp_ht_map_.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/gp_hash_table_map_/info_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/gp_hash_table_map_/insert_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/gp_hash_table_map_/insert_no_store_hash_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/gp_hash_table_map_/insert_store_hash_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/gp_hash_table_map_/iterator_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/gp_hash_table_map_/policy_access_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/gp_hash_table_map_/resize_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/gp_hash_table_map_/resize_no_store_hash_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/gp_hash_table_map_/resize_store_hash_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/gp_hash_table_map_/standard_policies.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/gp_hash_table_map_/trace_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/hash_fn/direct_mask_range_hashing_imp.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/hash_fn/direct_mod_range_hashing_imp.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/hash_fn/linear_probe_fn_imp.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/hash_fn/mask_based_range_hashing.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/hash_fn/mod_based_range_hashing.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/hash_fn/probe_fn_base.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/hash_fn/quadratic_probe_fn_imp.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/hash_fn/ranged_hash_fn.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/hash_fn/ranged_probe_fn.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/hash_fn/sample_probe_fn.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/hash_fn/sample_range_hashing.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/hash_fn/sample_ranged_hash_fn.hpp 
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/hash_fn/sample_ranged_probe_fn.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/left_child_next_sibling_heap_/const_iterator.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/left_child_next_sibling_heap_/const_point_iterator.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/left_child_next_sibling_heap_/constructors_destructor_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/left_child_next_sibling_heap_/debug_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/left_child_next_sibling_heap_/erase_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/left_child_next_sibling_heap_/info_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/left_child_next_sibling_heap_/insert_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/left_child_next_sibling_heap_/iterators_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/left_child_next_sibling_heap_/left_child_next_sibling_heap_.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/left_child_next_sibling_heap_/node.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/left_child_next_sibling_heap_/null_metadata.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/left_child_next_sibling_heap_/policy_access_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/left_child_next_sibling_heap_/trace_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/list_update_map_/constructor_destructor_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/list_update_map_/debug_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/list_update_map_/entry_metadata_base.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/list_update_map_/erase_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/list_update_map_/find_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/list_update_map_/info_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/list_update_map_/insert_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/list_update_map_/iterators_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/list_update_map_/lu_map_.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/list_update_map_/trace_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/list_update_policy/counter_lu_metadata.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/list_update_policy/counter_lu_policy_imp.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/list_update_policy/mtf_lu_policy_imp.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/list_update_policy/sample_update_policy.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/map_debug_base.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/ov_tree_map_/cond_dtor.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/ov_tree_map_/constructors_destructor_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/ov_tree_map_/debug_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/ov_tree_map_/erase_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/ov_tree_map_/info_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/ov_tree_map_/insert_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/ov_tree_map_/iterators_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/ov_tree_map_/node_iterators.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/ov_tree_map_/ov_tree_map_.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/ov_tree_map_/policy_access_fn_imps.hpp 
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/ov_tree_map_/split_join_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/ov_tree_map_/traits.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/pairing_heap_/constructors_destructor_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/pairing_heap_/debug_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/pairing_heap_/erase_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/pairing_heap_/find_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/pairing_heap_/insert_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/pairing_heap_/pairing_heap_.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/pairing_heap_/split_join_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/pat_trie_/child_iterator.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/pat_trie_/cond_dtor_entry_dealtor.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/pat_trie_/const_child_iterator.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/pat_trie_/constructors_destructor_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/pat_trie_/debug_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/pat_trie_/erase_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/pat_trie_/find_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/pat_trie_/head.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/pat_trie_/info_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/pat_trie_/insert_join_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/pat_trie_/internal_node.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/pat_trie_/iterators_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/pat_trie_/leaf.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/pat_trie_/node_base.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/pat_trie_/node_iterators.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/pat_trie_/node_metadata_base.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/pat_trie_/pat_trie_.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/pat_trie_/point_iterators.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/pat_trie_/policy_access_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/pat_trie_/r_erase_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/pat_trie_/rotate_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/pat_trie_/split_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/pat_trie_/split_join_branch_bag.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/pat_trie_/synth_e_access_traits.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/pat_trie_/trace_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/pat_trie_/traits.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/pat_trie_/update_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/priority_queue_base_dispatch.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/rb_tree_map_/constructors_destructor_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/rb_tree_map_/debug_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/rb_tree_map_/erase_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/rb_tree_map_/find_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/rb_tree_map_/info_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/rb_tree_map_/insert_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/rb_tree_map_/node.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/rb_tree_map_/rb_tree_.hpp 
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/rb_tree_map_/split_join_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/rb_tree_map_/traits.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/rc_binomial_heap_/constructors_destructor_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/rc_binomial_heap_/debug_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/rc_binomial_heap_/erase_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/rc_binomial_heap_/insert_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/rc_binomial_heap_/rc.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/rc_binomial_heap_/rc_binomial_heap_.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/rc_binomial_heap_/split_join_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/rc_binomial_heap_/trace_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/resize_policy/cc_hash_max_collision_check_resize_trigger_imp.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/resize_policy/hash_exponential_size_policy_imp.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/resize_policy/hash_load_check_resize_trigger_imp.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/resize_policy/hash_load_check_resize_trigger_size_base.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/resize_policy/hash_prime_size_policy_imp.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/resize_policy/hash_standard_resize_policy_imp.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/resize_policy/sample_resize_policy.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/resize_policy/sample_resize_trigger.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/resize_policy/sample_size_policy.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/splay_tree_/constructors_destructor_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/splay_tree_/debug_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/splay_tree_/erase_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/splay_tree_/find_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/splay_tree_/info_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/splay_tree_/insert_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/splay_tree_/node.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/splay_tree_/splay_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/splay_tree_/splay_tree_.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/splay_tree_/split_join_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/splay_tree_/traits.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/standard_policies.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/thin_heap_/constructors_destructor_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/thin_heap_/debug_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/thin_heap_/erase_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/thin_heap_/find_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/thin_heap_/insert_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/thin_heap_/split_join_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/thin_heap_/thin_heap_.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/thin_heap_/trace_fn_imps.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/tree_policy/node_metadata_selector.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/tree_policy/null_node_update_imp.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/tree_policy/order_statistics_imp.hpp 
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/tree_policy/sample_tree_node_update.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/tree_trace_base.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/trie_policy/node_metadata_selector.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/trie_policy/null_node_update_imp.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/trie_policy/order_statistics_imp.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/trie_policy/prefix_search_node_update_imp.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/trie_policy/sample_trie_e_access_traits.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/trie_policy/sample_trie_node_update.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/trie_policy/string_trie_e_access_traits_imp.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/trie_policy/trie_policy_base.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/type_utils.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/types_traits.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/unordered_iterator/const_iterator.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/unordered_iterator/const_point_iterator.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/unordered_iterator/iterator.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/unordered_iterator/point_iterator.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/exception.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/hash_policy.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/list_update_policy.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/priority_queue.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/tag_and_trait.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/tree_policy.hpp OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/trie_policy.hpp OLD_FILES+=usr/include/c++/4.2/ext/pod_char_traits.h OLD_FILES+=usr/include/c++/4.2/ext/pool_allocator.h OLD_FILES+=usr/include/c++/4.2/ext/rb_tree OLD_FILES+=usr/include/c++/4.2/ext/rc_string_base.h OLD_FILES+=usr/include/c++/4.2/ext/rope OLD_FILES+=usr/include/c++/4.2/ext/ropeimpl.h OLD_FILES+=usr/include/c++/4.2/ext/slist OLD_FILES+=usr/include/c++/4.2/ext/sso_string_base.h OLD_FILES+=usr/include/c++/4.2/ext/stdio_filebuf.h OLD_FILES+=usr/include/c++/4.2/ext/stdio_sync_filebuf.h OLD_FILES+=usr/include/c++/4.2/ext/throw_allocator.h OLD_FILES+=usr/include/c++/4.2/ext/type_traits.h OLD_FILES+=usr/include/c++/4.2/ext/typelist.h OLD_FILES+=usr/include/c++/4.2/ext/vstring.h OLD_FILES+=usr/include/c++/4.2/ext/vstring.tcc OLD_FILES+=usr/include/c++/4.2/ext/vstring_fwd.h OLD_FILES+=usr/include/c++/4.2/ext/vstring_util.h OLD_FILES+=usr/include/c++/4.2/fstream OLD_FILES+=usr/include/c++/4.2/functional OLD_FILES+=usr/include/c++/4.2/iomanip OLD_FILES+=usr/include/c++/4.2/ios OLD_FILES+=usr/include/c++/4.2/iosfwd OLD_FILES+=usr/include/c++/4.2/iostream OLD_FILES+=usr/include/c++/4.2/istream OLD_FILES+=usr/include/c++/4.2/iterator OLD_FILES+=usr/include/c++/4.2/limits OLD_FILES+=usr/include/c++/4.2/list OLD_FILES+=usr/include/c++/4.2/locale OLD_FILES+=usr/include/c++/4.2/map OLD_FILES+=usr/include/c++/4.2/memory OLD_FILES+=usr/include/c++/4.2/new OLD_FILES+=usr/include/c++/4.2/numeric OLD_FILES+=usr/include/c++/4.2/ostream OLD_FILES+=usr/include/c++/4.2/queue OLD_FILES+=usr/include/c++/4.2/set OLD_FILES+=usr/include/c++/4.2/sstream OLD_FILES+=usr/include/c++/4.2/stack OLD_FILES+=usr/include/c++/4.2/stdexcept OLD_FILES+=usr/include/c++/4.2/streambuf OLD_FILES+=usr/include/c++/4.2/string OLD_FILES+=usr/include/c++/4.2/tr1/array OLD_FILES+=usr/include/c++/4.2/tr1/bind_iterate.h 
OLD_FILES+=usr/include/c++/4.2/tr1/bind_repeat.h OLD_FILES+=usr/include/c++/4.2/tr1/boost_shared_ptr.h OLD_FILES+=usr/include/c++/4.2/tr1/cctype OLD_FILES+=usr/include/c++/4.2/tr1/cfenv OLD_FILES+=usr/include/c++/4.2/tr1/cfloat OLD_FILES+=usr/include/c++/4.2/tr1/cinttypes OLD_FILES+=usr/include/c++/4.2/tr1/climits OLD_FILES+=usr/include/c++/4.2/tr1/cmath OLD_FILES+=usr/include/c++/4.2/tr1/common.h OLD_FILES+=usr/include/c++/4.2/tr1/complex OLD_FILES+=usr/include/c++/4.2/tr1/cstdarg OLD_FILES+=usr/include/c++/4.2/tr1/cstdbool OLD_FILES+=usr/include/c++/4.2/tr1/cstdint OLD_FILES+=usr/include/c++/4.2/tr1/cstdio OLD_FILES+=usr/include/c++/4.2/tr1/cstdlib OLD_FILES+=usr/include/c++/4.2/tr1/ctgmath OLD_FILES+=usr/include/c++/4.2/tr1/ctime OLD_FILES+=usr/include/c++/4.2/tr1/ctype.h OLD_FILES+=usr/include/c++/4.2/tr1/cwchar OLD_FILES+=usr/include/c++/4.2/tr1/cwctype OLD_FILES+=usr/include/c++/4.2/tr1/fenv.h OLD_FILES+=usr/include/c++/4.2/tr1/float.h OLD_FILES+=usr/include/c++/4.2/tr1/functional OLD_FILES+=usr/include/c++/4.2/tr1/functional_hash.h OLD_FILES+=usr/include/c++/4.2/tr1/functional_iterate.h OLD_FILES+=usr/include/c++/4.2/tr1/hashtable OLD_FILES+=usr/include/c++/4.2/tr1/hashtable_policy.h OLD_FILES+=usr/include/c++/4.2/tr1/inttypes.h OLD_FILES+=usr/include/c++/4.2/tr1/limits.h OLD_FILES+=usr/include/c++/4.2/tr1/math.h OLD_FILES+=usr/include/c++/4.2/tr1/memory OLD_FILES+=usr/include/c++/4.2/tr1/mu_iterate.h OLD_FILES+=usr/include/c++/4.2/tr1/random OLD_FILES+=usr/include/c++/4.2/tr1/random.tcc OLD_FILES+=usr/include/c++/4.2/tr1/ref_fwd.h OLD_FILES+=usr/include/c++/4.2/tr1/ref_wrap_iterate.h OLD_FILES+=usr/include/c++/4.2/tr1/repeat.h OLD_FILES+=usr/include/c++/4.2/tr1/stdarg.h OLD_FILES+=usr/include/c++/4.2/tr1/stdbool.h OLD_FILES+=usr/include/c++/4.2/tr1/stdint.h OLD_FILES+=usr/include/c++/4.2/tr1/stdio.h OLD_FILES+=usr/include/c++/4.2/tr1/stdlib.h OLD_FILES+=usr/include/c++/4.2/tr1/tgmath.h OLD_FILES+=usr/include/c++/4.2/tr1/tuple OLD_FILES+=usr/include/c++/4.2/tr1/tuple_defs.h OLD_FILES+=usr/include/c++/4.2/tr1/tuple_iterate.h OLD_FILES+=usr/include/c++/4.2/tr1/type_traits OLD_FILES+=usr/include/c++/4.2/tr1/type_traits_fwd.h OLD_FILES+=usr/include/c++/4.2/tr1/unordered_map OLD_FILES+=usr/include/c++/4.2/tr1/unordered_set OLD_FILES+=usr/include/c++/4.2/tr1/utility OLD_FILES+=usr/include/c++/4.2/tr1/wchar.h OLD_FILES+=usr/include/c++/4.2/tr1/wctype.h OLD_FILES+=usr/include/c++/4.2/typeinfo OLD_FILES+=usr/include/c++/4.2/utility OLD_FILES+=usr/include/c++/4.2/valarray OLD_FILES+=usr/include/c++/4.2/vector OLD_FILES+=usr/lib/libstdc++.a OLD_FILES+=usr/lib/libstdc++.so OLD_LIBS+=usr/lib/libstdc++.so.6 OLD_FILES+=usr/lib/libstdc++_p.a OLD_FILES+=usr/lib/libsupc++.a OLD_FILES+=usr/lib/libsupc++.so OLD_LIBS+=usr/lib/libsupc++.so.1 OLD_FILES+=usr/lib/libsupc++_p.a .if ${TARGET_ARCH} == "amd64" || ${TARGET_ARCH} == "powerpc64" OLD_FILES+=usr/lib32/libstdc++.a OLD_FILES+=usr/lib32/libstdc++.so OLD_LIBS+=usr/lib32/libstdc++.so.6 OLD_FILES+=usr/lib32/libstdc++_p.a OLD_FILES+=usr/lib32/libsupc++.a OLD_FILES+=usr/lib32/libsupc++.so OLD_LIBS+=usr/lib32/libsupc++.so.1 OLD_FILES+=usr/lib32/libsupc++_p.a .endif OLD_FILES+=usr/libexec/cc1plus .endif .if ${MK_DICT} == no OLD_FILES+=usr/share/dict/README OLD_FILES+=usr/share/dict/freebsd OLD_FILES+=usr/share/dict/propernames OLD_FILES+=usr/share/dict/web2 OLD_FILES+=usr/share/dict/web2a OLD_FILES+=usr/share/dict/words OLD_DIRS+=usr/share/dict .endif .if ${MK_DMAGENT} == no OLD_FILES+=etc/dma/dma.conf +OLD_DIRS+=etc/dma OLD_FILES+=usr/libexec/dma 
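# The dma entries above pair the obsolete config file (etc/dma/dma.conf)
# with the directory this revision adds as OLD_DIRS+=etc/dma; without the
# directory entry, "make delete-old" would unlink the file but leave an
# empty etc/dma behind.  A short usage sketch of how these lists are
# consumed (check-old, delete-old and delete-old-libs are the standard
# FreeBSD build targets; the WITHOUT_DMAGENT knob is just the example
# relevant here):
#
#	cd /usr/src
#	make -DWITHOUT_DMAGENT check-old	# list what would be removed
#	make -DWITHOUT_DMAGENT delete-old	# remove obsolete files and dirs
#	make -DWITHOUT_DMAGENT delete-old-libs	# remove obsolete shared libs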
OLD_FILES+=usr/libexec/dma-mbox-create OLD_FILES+=usr/share/man/man8/dma.8.gz OLD_FILES+=usr/share/examples/dma/mailer.conf .endif .if ${MK_EE} == no OLD_FILES+=usr/bin/edit OLD_FILES+=usr/bin/ee OLD_FILES+=usr/bin/ree OLD_FILES+=usr/share/man/man1/edit.1.gz OLD_FILES+=usr/share/man/man1/ee.1.gz OLD_FILES+=usr/share/man/man1/ree.1.gz OLD_FILES+=usr/share/nls/C/ee.cat OLD_FILES+=usr/share/nls/de_DE.ISO8859-1/ee.cat OLD_FILES+=usr/share/nls/fr_FR.ISO8859-1/ee.cat OLD_FILES+=usr/share/nls/hu_HU.ISO8859-2/ee.cat OLD_FILES+=usr/share/nls/pl_PL.ISO8859-2/ee.cat OLD_FILES+=usr/share/nls/pt_BR.ISO8859-1/ee.cat OLD_FILES+=usr/share/nls/ru_RU.KOI8-R/ee.cat OLD_FILES+=usr/share/nls/uk_UA.KOI8-U/ee.cat .endif .if ${MK_EXAMPLES} == no OLD_FILES+=usr/share/examples/BSD_daemon/FreeBSD.pfa OLD_FILES+=usr/share/examples/BSD_daemon/README OLD_FILES+=usr/share/examples/BSD_daemon/beastie.eps OLD_FILES+=usr/share/examples/BSD_daemon/beastie.fig OLD_FILES+=usr/share/examples/BSD_daemon/eps.patch OLD_FILES+=usr/share/examples/BSD_daemon/poster.sh OLD_FILES+=usr/share/examples/FreeBSD_version/FreeBSD_version.c OLD_FILES+=usr/share/examples/FreeBSD_version/Makefile OLD_FILES+=usr/share/examples/FreeBSD_version/README OLD_FILES+=usr/share/examples/IPv6/USAGE OLD_FILES+=usr/share/examples/bhyve/vmrun.sh OLD_FILES+=usr/share/examples/bootforth/README OLD_FILES+=usr/share/examples/bootforth/boot.4th OLD_FILES+=usr/share/examples/bootforth/frames.4th OLD_FILES+=usr/share/examples/bootforth/loader.rc OLD_FILES+=usr/share/examples/bootforth/menu.4th OLD_FILES+=usr/share/examples/bootforth/menuconf.4th OLD_FILES+=usr/share/examples/bootforth/screen.4th OLD_FILES+=usr/share/examples/bsdconfig/add_some_packages.sh OLD_FILES+=usr/share/examples/bsdconfig/browse_packages_http.sh OLD_FILES+=usr/share/examples/bsdconfig/bsdconfigrc OLD_FILES+=usr/share/examples/csh/dot.cshrc OLD_FILES+=usr/share/examples/diskless/ME OLD_FILES+=usr/share/examples/diskless/README.BOOTP OLD_FILES+=usr/share/examples/diskless/README.TEMPLATING OLD_FILES+=usr/share/examples/diskless/clone_root OLD_FILES+=usr/share/examples/dma/mailer.conf OLD_FILES+=usr/share/examples/drivers/README OLD_FILES+=usr/share/examples/drivers/make_device_driver.sh OLD_FILES+=usr/share/examples/drivers/make_pseudo_driver.sh OLD_FILES+=usr/share/examples/dwatch/profile_template OLD_FILES+=usr/share/examples/etc/README.examples OLD_FILES+=usr/share/examples/etc/bsd-style-copyright OLD_FILES+=usr/share/examples/etc/group OLD_FILES+=usr/share/examples/etc/login.access OLD_FILES+=usr/share/examples/etc/make.conf OLD_FILES+=usr/share/examples/etc/rc.bsdextended OLD_FILES+=usr/share/examples/etc/rc.firewall OLD_FILES+=usr/share/examples/etc/rc.sendmail OLD_FILES+=usr/share/examples/etc/termcap.small OLD_FILES+=usr/share/examples/etc/wpa_supplicant.conf OLD_FILES+=usr/share/examples/find_interface/Makefile OLD_FILES+=usr/share/examples/find_interface/README OLD_FILES+=usr/share/examples/find_interface/find_interface.c OLD_FILES+=usr/share/examples/hast/ucarp.sh OLD_FILES+=usr/share/examples/hast/ucarp_down.sh OLD_FILES+=usr/share/examples/hast/ucarp_up.sh OLD_FILES+=usr/share/examples/hast/vip-down.sh OLD_FILES+=usr/share/examples/hast/vip-up.sh OLD_FILES+=usr/share/examples/hostapd/hostapd.conf OLD_FILES+=usr/share/examples/hostapd/hostapd.eap_user OLD_FILES+=usr/share/examples/hostapd/hostapd.wpa_psk OLD_FILES+=usr/share/examples/indent/indent.pro OLD_FILES+=usr/share/examples/ipfilter/BASIC.NAT OLD_FILES+=usr/share/examples/ipfilter/BASIC_1.FW 
OLD_FILES+=usr/share/examples/ipfilter/BASIC_2.FW OLD_FILES+=usr/share/examples/ipfilter/README OLD_FILES+=usr/share/examples/ipfilter/example.1 OLD_FILES+=usr/share/examples/ipfilter/example.10 OLD_FILES+=usr/share/examples/ipfilter/example.11 OLD_FILES+=usr/share/examples/ipfilter/example.12 OLD_FILES+=usr/share/examples/ipfilter/example.13 OLD_FILES+=usr/share/examples/ipfilter/example.14 OLD_FILES+=usr/share/examples/ipfilter/example.2 OLD_FILES+=usr/share/examples/ipfilter/example.3 OLD_FILES+=usr/share/examples/ipfilter/example.4 OLD_FILES+=usr/share/examples/ipfilter/example.5 OLD_FILES+=usr/share/examples/ipfilter/example.6 OLD_FILES+=usr/share/examples/ipfilter/example.7 OLD_FILES+=usr/share/examples/ipfilter/example.8 OLD_FILES+=usr/share/examples/ipfilter/example.9 OLD_FILES+=usr/share/examples/ipfilter/example.sr OLD_FILES+=usr/share/examples/ipfilter/examples.txt OLD_FILES+=usr/share/examples/ipfilter/firewall OLD_FILES+=usr/share/examples/ipfilter/firewall.1 OLD_FILES+=usr/share/examples/ipfilter/firewall.2 OLD_FILES+=usr/share/examples/ipfilter/ftp-proxy OLD_FILES+=usr/share/examples/ipfilter/ftppxy OLD_FILES+=usr/share/examples/ipfilter/ipf-howto.txt OLD_FILES+=usr/share/examples/ipfilter/ipf.conf.permissive OLD_FILES+=usr/share/examples/ipfilter/ipf.conf.restrictive OLD_FILES+=usr/share/examples/ipfilter/ipf.conf.sample OLD_FILES+=usr/share/examples/ipfilter/ipnat.conf.sample OLD_FILES+=usr/share/examples/ipfilter/mkfilters OLD_FILES+=usr/share/examples/ipfilter/nat-setup OLD_FILES+=usr/share/examples/ipfilter/nat.eg OLD_FILES+=usr/share/examples/ipfilter/rules.txt OLD_FILES+=usr/share/examples/ipfilter/server OLD_FILES+=usr/share/examples/ipfilter/tcpstate OLD_FILES+=usr/share/examples/ipfw/change_rules.sh OLD_FILES+=usr/share/examples/jails/README OLD_FILES+=usr/share/examples/jails/VIMAGE OLD_FILES+=usr/share/examples/jails/jail.xxx.conf OLD_FILES+=usr/share/examples/jails/jib OLD_FILES+=usr/share/examples/jails/jng OLD_FILES+=usr/share/examples/jails/rc.conf.jails OLD_FILES+=usr/share/examples/jails/rcjail.xxx.conf OLD_FILES+=usr/share/examples/kld/Makefile OLD_FILES+=usr/share/examples/kld/cdev/Makefile OLD_FILES+=usr/share/examples/kld/cdev/README OLD_FILES+=usr/share/examples/kld/cdev/module/Makefile OLD_FILES+=usr/share/examples/kld/cdev/module/cdev.c OLD_FILES+=usr/share/examples/kld/cdev/module/cdev.h OLD_FILES+=usr/share/examples/kld/cdev/module/cdevmod.c OLD_FILES+=usr/share/examples/kld/cdev/test/Makefile OLD_FILES+=usr/share/examples/kld/cdev/test/testcdev.c OLD_FILES+=usr/share/examples/kld/dyn_sysctl/Makefile OLD_FILES+=usr/share/examples/kld/dyn_sysctl/README OLD_FILES+=usr/share/examples/kld/dyn_sysctl/dyn_sysctl.c OLD_FILES+=usr/share/examples/kld/firmware/Makefile OLD_FILES+=usr/share/examples/kld/firmware/README OLD_FILES+=usr/share/examples/kld/firmware/fwconsumer/Makefile OLD_FILES+=usr/share/examples/kld/firmware/fwconsumer/fw_consumer.c OLD_FILES+=usr/share/examples/kld/firmware/fwimage/Makefile OLD_FILES+=usr/share/examples/kld/firmware/fwimage/firmware.img.uu OLD_FILES+=usr/share/examples/kld/khelp/Makefile OLD_FILES+=usr/share/examples/kld/khelp/README OLD_FILES+=usr/share/examples/kld/khelp/h_example.c OLD_FILES+=usr/share/examples/kld/syscall/Makefile OLD_FILES+=usr/share/examples/kld/syscall/module/Makefile OLD_FILES+=usr/share/examples/kld/syscall/module/syscall.c OLD_FILES+=usr/share/examples/kld/syscall/test/Makefile OLD_FILES+=usr/share/examples/kld/syscall/test/call.c OLD_FILES+=usr/share/examples/libusb20/Makefile 
OLD_FILES+=usr/share/examples/libusb20/README OLD_FILES+=usr/share/examples/libusb20/bulk.c OLD_FILES+=usr/share/examples/libusb20/control.c OLD_FILES+=usr/share/examples/libusb20/util.c OLD_FILES+=usr/share/examples/libusb20/util.h OLD_FILES+=usr/share/examples/libvgl/Makefile OLD_FILES+=usr/share/examples/libvgl/demo.c OLD_FILES+=usr/share/examples/mdoc/POSIX-copyright OLD_FILES+=usr/share/examples/mdoc/deshallify.sh OLD_FILES+=usr/share/examples/mdoc/example.1 OLD_FILES+=usr/share/examples/mdoc/example.3 OLD_FILES+=usr/share/examples/mdoc/example.4 OLD_FILES+=usr/share/examples/mdoc/example.9 OLD_FILES+=usr/share/examples/netgraph/ether.bridge OLD_FILES+=usr/share/examples/netgraph/frame_relay OLD_FILES+=usr/share/examples/netgraph/ngctl OLD_FILES+=usr/share/examples/netgraph/raw OLD_FILES+=usr/share/examples/netgraph/udp.tunnel OLD_FILES+=usr/share/examples/netgraph/virtual.chain OLD_FILES+=usr/share/examples/netgraph/virtual.lan OLD_FILES+=usr/share/examples/pc-sysinstall/README OLD_FILES+=usr/share/examples/pc-sysinstall/pc-autoinstall.conf OLD_FILES+=usr/share/examples/pc-sysinstall/pcinstall.cfg.fbsd-netinstall OLD_FILES+=usr/share/examples/pc-sysinstall/pcinstall.cfg.geli OLD_FILES+=usr/share/examples/pc-sysinstall/pcinstall.cfg.gmirror OLD_FILES+=usr/share/examples/pc-sysinstall/pcinstall.cfg.netinstall OLD_FILES+=usr/share/examples/pc-sysinstall/pcinstall.cfg.restore OLD_FILES+=usr/share/examples/pc-sysinstall/pcinstall.cfg.rsync OLD_FILES+=usr/share/examples/pc-sysinstall/pcinstall.cfg.upgrade OLD_FILES+=usr/share/examples/pc-sysinstall/pcinstall.cfg.zfs OLD_FILES+=usr/share/examples/perfmon/Makefile OLD_FILES+=usr/share/examples/perfmon/README OLD_FILES+=usr/share/examples/perfmon/perfmon.c OLD_FILES+=usr/share/examples/pf/ackpri OLD_FILES+=usr/share/examples/pf/faq-example1 OLD_FILES+=usr/share/examples/pf/faq-example2 OLD_FILES+=usr/share/examples/pf/faq-example3 OLD_FILES+=usr/share/examples/pf/pf.conf OLD_FILES+=usr/share/examples/pf/queue1 OLD_FILES+=usr/share/examples/pf/queue2 OLD_FILES+=usr/share/examples/pf/queue3 OLD_FILES+=usr/share/examples/pf/queue4 OLD_FILES+=usr/share/examples/pf/spamd OLD_FILES+=usr/share/examples/ppi/Makefile OLD_FILES+=usr/share/examples/ppi/ppilcd.c OLD_FILES+=usr/share/examples/ppp/chap-auth OLD_FILES+=usr/share/examples/ppp/login-auth OLD_FILES+=usr/share/examples/ppp/ppp.conf.sample OLD_FILES+=usr/share/examples/ppp/ppp.conf.span-isp OLD_FILES+=usr/share/examples/ppp/ppp.conf.span-isp.working OLD_FILES+=usr/share/examples/ppp/ppp.linkdown.sample OLD_FILES+=usr/share/examples/ppp/ppp.linkdown.span-isp OLD_FILES+=usr/share/examples/ppp/ppp.linkdown.span-isp.working OLD_FILES+=usr/share/examples/ppp/ppp.linkup.sample OLD_FILES+=usr/share/examples/ppp/ppp.linkup.span-isp OLD_FILES+=usr/share/examples/ppp/ppp.linkup.span-isp.working OLD_FILES+=usr/share/examples/ppp/ppp.secret.sample OLD_FILES+=usr/share/examples/ppp/ppp.secret.span-isp OLD_FILES+=usr/share/examples/ppp/ppp.secret.span-isp.working OLD_FILES+=usr/share/examples/printing/diablo-if-net OLD_FILES+=usr/share/examples/printing/hpdf OLD_FILES+=usr/share/examples/printing/hpif OLD_FILES+=usr/share/examples/printing/hpof OLD_FILES+=usr/share/examples/printing/hprf OLD_FILES+=usr/share/examples/printing/hpvf OLD_FILES+=usr/share/examples/printing/if-simple OLD_FILES+=usr/share/examples/printing/if-simpleX OLD_FILES+=usr/share/examples/printing/ifhp OLD_FILES+=usr/share/examples/printing/make-ps-header OLD_FILES+=usr/share/examples/printing/netprint 
OLD_FILES+=usr/share/examples/printing/psdf OLD_FILES+=usr/share/examples/printing/psdfX OLD_FILES+=usr/share/examples/printing/psif OLD_FILES+=usr/share/examples/printing/pstf OLD_FILES+=usr/share/examples/printing/pstfX OLD_FILES+=usr/share/examples/scsi_target/Makefile OLD_FILES+=usr/share/examples/scsi_target/scsi_cmds.c OLD_FILES+=usr/share/examples/scsi_target/scsi_target.8 OLD_FILES+=usr/share/examples/scsi_target/scsi_target.c OLD_FILES+=usr/share/examples/scsi_target/scsi_target.h OLD_FILES+=usr/share/examples/ses/Makefile OLD_FILES+=usr/share/examples/ses/Makefile.inc OLD_FILES+=usr/share/examples/ses/getencstat/Makefile OLD_FILES+=usr/share/examples/ses/getencstat/getencstat.0 OLD_FILES+=usr/share/examples/ses/sesd/Makefile OLD_FILES+=usr/share/examples/ses/sesd/sesd.0 OLD_FILES+=usr/share/examples/ses/setencstat/Makefile OLD_FILES+=usr/share/examples/ses/setencstat/setencstat.0 OLD_FILES+=usr/share/examples/ses/setobjstat/Makefile OLD_FILES+=usr/share/examples/ses/setobjstat/setobjstat.0 OLD_FILES+=usr/share/examples/ses/srcs/chpmon.c OLD_FILES+=usr/share/examples/ses/srcs/eltsub.c OLD_FILES+=usr/share/examples/ses/srcs/eltsub.h OLD_FILES+=usr/share/examples/ses/srcs/getencstat.c OLD_FILES+=usr/share/examples/ses/srcs/getnobj.c OLD_FILES+=usr/share/examples/ses/srcs/getobjmap.c OLD_FILES+=usr/share/examples/ses/srcs/getobjstat.c OLD_FILES+=usr/share/examples/ses/srcs/inienc.c OLD_FILES+=usr/share/examples/ses/srcs/sesd.c OLD_FILES+=usr/share/examples/ses/srcs/setencstat.c OLD_FILES+=usr/share/examples/ses/srcs/setobjstat.c OLD_FILES+=usr/share/examples/smbfs/dot.nsmbrc OLD_FILES+=usr/share/examples/smbfs/print/lj6l OLD_FILES+=usr/share/examples/smbfs/print/ljspool OLD_FILES+=usr/share/examples/smbfs/print/printcap.sample OLD_FILES+=usr/share/examples/smbfs/print/tolj OLD_FILES+=usr/share/examples/sunrpc/Makefile OLD_FILES+=usr/share/examples/sunrpc/dir/Makefile OLD_FILES+=usr/share/examples/sunrpc/dir/dir.x OLD_FILES+=usr/share/examples/sunrpc/dir/dir_proc.c OLD_FILES+=usr/share/examples/sunrpc/dir/rls.c OLD_FILES+=usr/share/examples/sunrpc/msg/Makefile OLD_FILES+=usr/share/examples/sunrpc/msg/msg.x OLD_FILES+=usr/share/examples/sunrpc/msg/msg_proc.c OLD_FILES+=usr/share/examples/sunrpc/msg/printmsg.c OLD_FILES+=usr/share/examples/sunrpc/msg/rprintmsg.c OLD_FILES+=usr/share/examples/sunrpc/sort/Makefile OLD_FILES+=usr/share/examples/sunrpc/sort/rsort.c OLD_FILES+=usr/share/examples/sunrpc/sort/sort.x OLD_FILES+=usr/share/examples/sunrpc/sort/sort_proc.c OLD_FILES+=usr/share/examples/tcsh/complete.tcsh OLD_FILES+=usr/share/examples/tcsh/csh-mode.el OLD_FILES+=usr/share/examples/uefisign/uefikeys OLD_FILES+=usr/share/examples/ypldap/ypldap.conf OLD_DIRS+=usr/share/examples OLD_DIRS+=usr/share/examples/BSD_daemon OLD_DIRS+=usr/share/examples/FreeBSD_version OLD_DIRS+=usr/share/examples/IPv6 OLD_DIRS+=usr/share/examples/bhyve OLD_DIRS+=usr/share/examples/bootforth OLD_DIRS+=usr/share/examples/bsdconfig OLD_DIRS+=usr/share/examples/csh OLD_DIRS+=usr/share/examples/diskless OLD_DIRS+=usr/share/examples/dma OLD_DIRS+=usr/share/examples/drivers OLD_DIRS+=usr/share/examples/dwatch OLD_DIRS+=usr/share/examples/etc OLD_DIRS+=usr/share/examples/etc/defaults OLD_DIRS+=usr/share/examples/find_interface OLD_DIRS+=usr/share/examples/hast OLD_DIRS+=usr/share/examples/ibcs2 OLD_DIRS+=usr/share/examples/hostapd OLD_DIRS+=usr/share/examples/indent OLD_DIRS+=usr/share/examples/ipfilter OLD_DIRS+=usr/share/examples/ipfw OLD_DIRS+=usr/share/examples/jails OLD_DIRS+=usr/share/examples/kld 
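# Unlike the deepest-first ordering used for the clang directories earlier
# in this file, the MK_EXAMPLES block lists OLD_DIRS+=usr/share/examples
# before its subdirectories.  rmdir(1) cannot remove the still-populated
# parent on the pass that empties the children, so with this ordering the
# top-level directory is presumably only removed on a later
# "make delete-old" run, once everything beneath it is gone.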
OLD_DIRS+=usr/share/examples/kld/cdev OLD_DIRS+=usr/share/examples/kld/cdev/module OLD_DIRS+=usr/share/examples/kld/cdev/test OLD_DIRS+=usr/share/examples/kld/dyn_sysctl OLD_DIRS+=usr/share/examples/kld/firmware OLD_DIRS+=usr/share/examples/kld/firmware/fwconsumer OLD_DIRS+=usr/share/examples/kld/firmware/fwimage OLD_DIRS+=usr/share/examples/kld/khelp OLD_DIRS+=usr/share/examples/kld/syscall OLD_DIRS+=usr/share/examples/kld/syscall/module OLD_DIRS+=usr/share/examples/kld/syscall/test OLD_DIRS+=usr/share/examples/libusb20 OLD_DIRS+=usr/share/examples/libvgl OLD_DIRS+=usr/share/examples/mdoc OLD_DIRS+=usr/share/examples/netgraph OLD_DIRS+=usr/share/examples/pc-sysinstall OLD_DIRS+=usr/share/examples/perfmon OLD_DIRS+=usr/share/examples/pf OLD_DIRS+=usr/share/examples/ppi OLD_DIRS+=usr/share/examples/ppp OLD_DIRS+=usr/share/examples/printing OLD_DIRS+=usr/share/examples/scsi_target OLD_DIRS+=usr/share/examples/ses OLD_DIRS+=usr/share/examples/ses/getencstat OLD_DIRS+=usr/share/examples/ses/sesd OLD_DIRS+=usr/share/examples/ses/setencstat OLD_DIRS+=usr/share/examples/ses/setobjstat OLD_DIRS+=usr/share/examples/ses/srcs OLD_DIRS+=usr/share/examples/smbfs OLD_DIRS+=usr/share/examples/smbfs/print OLD_DIRS+=usr/share/examples/sunrpc OLD_DIRS+=usr/share/examples/sunrpc/dir OLD_DIRS+=usr/share/examples/sunrpc/msg OLD_DIRS+=usr/share/examples/sunrpc/sort OLD_DIRS+=usr/share/examples/tcsh OLD_DIRS+=usr/share/examples/uefisign OLD_DIRS+=usr/share/examples/ypldap .endif .if ${MK_FINGER} == no OLD_FILES+=usr/bin/finger OLD_FILES+=usr/share/man/man1/finger.1.gz OLD_FILES+=usr/share/man/man5/finger.conf.5.gz OLD_FILES+=usr/libexec/fingerd OLD_FILES+=usr/share/man/man8/fingerd.8.gz .endif .if ${MK_FLOPPY} == no OLD_FILES+=usr/sbin/fdcontrol OLD_FILES+=usr/sbin/fdformat OLD_FILES+=usr/sbin/fdread OLD_FILES+=usr/sbin/fdwrite OLD_FILES+=usr/share/man/man1/fdformat.1.gz OLD_FILES+=usr/share/man/man1/fdread.1.gz OLD_FILES+=usr/share/man/man1/fdwrite.1.gz OLD_FILES+=usr/share/man/man8/fdcontrol.8.gz .endif .if ${MK_FORTH} == no OLD_FILES+=usr/share/man/man8/beastie.4th.8.gz OLD_FILES+=usr/share/man/man8/brand.4th.8.gz OLD_FILES+=usr/share/man/man8/check-password.4th.8.gz OLD_FILES+=usr/share/man/man8/color.4th.8.gz OLD_FILES+=usr/share/man/man8/delay.4th.8.gz OLD_FILES+=usr/share/man/man8/loader.4th.8.gz OLD_FILES+=usr/share/man/man8/menu.4th.8.gz OLD_FILES+=usr/share/man/man8/menusets.4th.8.gz OLD_FILES+=usr/share/man/man8/version.4th.8.gz .endif .if ${MK_FREEBSD_UPDATE} == no OLD_FILES+=etc/freebsd-update.conf OLD_FILES+=usr/sbin/freebsd-update OLD_FILES+=usr/share/examples/etc/freebsd-update.conf OLD_FILES+=usr/share/man/man5/freebsd-update.conf.5.gz OLD_FILES+=usr/share/man/man8/freebsd-update.8.gz .endif .if ${MK_GAMES} == no OLD_FILES+=usr/bin/caesar OLD_FILES+=usr/bin/factor OLD_FILES+=usr/bin/fortune OLD_FILES+=usr/bin/grdc OLD_FILES+=usr/bin/morse OLD_FILES+=usr/bin/number OLD_FILES+=usr/bin/pom OLD_FILES+=usr/bin/primes OLD_FILES+=usr/bin/random OLD_FILES+=usr/bin/rot13 OLD_FILES+=usr/bin/strfile OLD_FILES+=usr/bin/unstr OLD_FILES+=usr/share/games/fortune/fortunes OLD_FILES+=usr/share/games/fortune/fortunes.dat OLD_FILES+=usr/share/games/fortune/freebsd-tips OLD_FILES+=usr/share/games/fortune/freebsd-tips.dat OLD_FILES+=usr/share/games/fortune/gerrold.limerick OLD_FILES+=usr/share/games/fortune/gerrold.limerick.dat OLD_FILES+=usr/share/games/fortune/limerick OLD_FILES+=usr/share/games/fortune/limerick.dat OLD_FILES+=usr/share/games/fortune/murphy OLD_FILES+=usr/share/games/fortune/murphy-o 
OLD_FILES+=usr/share/games/fortune/murphy-o.dat OLD_FILES+=usr/share/games/fortune/murphy.dat OLD_FILES+=usr/share/games/fortune/startrek OLD_FILES+=usr/share/games/fortune/startrek.dat OLD_FILES+=usr/share/games/fortune/zippy OLD_FILES+=usr/share/games/fortune/zippy.dat OLD_DIRS+=usr/share/games/fortune OLD_DIRS+=usr/share/games OLD_FILES+=usr/share/man/man6/caesar.6.gz OLD_FILES+=usr/share/man/man6/factor.6.gz OLD_FILES+=usr/share/man/man6/fortune.6.gz OLD_FILES+=usr/share/man/man6/grdc.6.gz OLD_FILES+=usr/share/man/man6/morse.6.gz OLD_FILES+=usr/share/man/man6/number.6.gz OLD_FILES+=usr/share/man/man6/pom.6.gz OLD_FILES+=usr/share/man/man6/primes.6.gz OLD_FILES+=usr/share/man/man6/random.6.gz OLD_FILES+=usr/share/man/man6/rot13.6.gz OLD_FILES+=usr/share/man/man8/strfile.8.gz OLD_FILES+=usr/share/man/man8/unstr.8.gz .endif .if ${MK_GCC} == no OLD_FILES+=usr/bin/g++ OLD_FILES+=usr/bin/gcc OLD_FILES+=usr/bin/gcpp OLD_FILES+=usr/bin/gperf .if ${TARGET_ARCH} == "amd64" || ${TARGET_ARCH} == "i386" OLD_FILES+=usr/include/gcc/4.2/__wmmintrin_aes.h OLD_FILES+=usr/include/gcc/4.2/__wmmintrin_pclmul.h OLD_FILES+=usr/include/gcc/4.2/ammintrin.h OLD_FILES+=usr/include/gcc/4.2/emmintrin.h OLD_FILES+=usr/include/gcc/4.2/mm3dnow.h OLD_FILES+=usr/include/gcc/4.2/mm_malloc.h OLD_FILES+=usr/include/gcc/4.2/mmintrin.h OLD_FILES+=usr/include/gcc/4.2/pmmintrin.h OLD_FILES+=usr/include/gcc/4.2/tmmintrin.h OLD_FILES+=usr/include/gcc/4.2/wmmintrin.h OLD_FILES+=usr/include/gcc/4.2/xmmintrin.h .elif ${TARGET_ARCH} == "arm" OLD_FILES+=usr/include/gcc/4.2/mmintrin.h .elif ${TARGET_ARCH} == "powerpc" || ${TARGET_ARCH} == "powerpc64" OLD_FILES+=usr/include/gcc/4.2/altivec.h OLD_FILES+=usr/include/gcc/4.2/ppc-asm.h OLD_FILES+=usr/include/gcc/4.2/spe.h .endif OLD_FILES+=usr/include/omp.h OLD_FILES+=usr/lib/libgcov.a OLD_FILES+=usr/lib/libgomp.a OLD_FILES+=usr/lib/libgomp.so OLD_LIBS+=usr/lib/libgomp.so.1 OLD_FILES+=usr/lib/libgomp_p.a OLD_FILES+=usr/lib32/libgcov.a OLD_FILES+=usr/lib32/libgomp.a OLD_FILES+=usr/lib32/libgomp.so OLD_LIBS+=usr/lib32/libgomp.so.1 OLD_FILES+=usr/lib32/libgomp_p.a OLD_FILES+=usr/libexec/cc1 OLD_FILES+=usr/libexec/cc1plus OLD_FILES+=usr/share/man/man1/g++.1.gz OLD_FILES+=usr/share/man/man1/gcc.1.gz OLD_FILES+=usr/share/man/man1/gcpp.1.gz OLD_FILES+=usr/share/man/man1/gperf.1.gz OLD_FILES+=usr/share/man/man1/gperf.7.gz .endif .if (${MK_GCOV} == no || ${MK_GCC} == no) && ${MK_LLVM_COV} == no OLD_FILES+=usr/bin/gcov OLD_FILES+=usr/share/man/man1/gcov.1.gz .endif .if ${MK_LLVM_COV} == no OLD_FILES+=usr/bin/llvm-cov OLD_FILES+=usr/bin/llvm-profdata OLD_FILES+=usr/share/man/man1/llvm-cov.1.gz .endif .if ${MK_GDB} == no || ${MK_GDB_LIBEXEC} == yes OLD_FILES+=usr/bin/gdb OLD_FILES+=usr/bin/gdbserver OLD_FILES+=usr/bin/kgdb OLD_FILES+=usr/share/man/man1/gdb.1.gz OLD_FILES+=usr/share/man/man1/gdbserver.1.gz OLD_FILES+=usr/share/man/man1/kgdb.1.gz .endif .if ${MK_GDB} == no || ${MK_GDB_LIBEXEC} == no OLD_FILES+=usr/libexec/gdb OLD_FILES+=usr/libexec/kgdb .endif .if ${MK_GPIO} == no OLD_FILES+=usr/include/libgpio.h OLD_FILES+=usr/lib/libgpio.a OLD_FILES+=usr/lib/libgpio.so OLD_LIBS+=usr/lib/libgpio.so.0 OLD_FILES+=usr/lib/libgpio_p.a OLD_FILES+=usr/lib32/libgpio.a OLD_FILES+=usr/lib32/libgpio.so OLD_LIBS+=usr/lib32/libgpio.so.0 OLD_FILES+=usr/lib32/libgpio_p.a OLD_FILES+=usr/sbin/gpioctl OLD_FILES+=usr/share/man/man3/gpio.3.gz OLD_FILES+=usr/share/man/man3/gpio_close.3.gz OLD_FILES+=usr/share/man/man3/gpio_open.3.gz OLD_FILES+=usr/share/man/man3/gpio_open_device.3.gz 
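# The compound gcov test above combines three knobs on purpose: usr/bin/gcov
# can be supplied either by the GCC toolchain (MK_GCC together with MK_GCOV)
# or by LLVM (MK_LLVM_COV), so the binary is only obsolete when neither
# provider is built:
#
#	.if (${MK_GCOV} == no || ${MK_GCC} == no) && ${MK_LLVM_COV} == no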
OLD_FILES+=usr/share/man/man3/gpio_pin_config.3.gz OLD_FILES+=usr/share/man/man3/gpio_pin_get.3.gz OLD_FILES+=usr/share/man/man3/gpio_pin_high.3.gz OLD_FILES+=usr/share/man/man3/gpio_pin_input.3.gz OLD_FILES+=usr/share/man/man3/gpio_pin_invin.3.gz OLD_FILES+=usr/share/man/man3/gpio_pin_invout.3.gz OLD_FILES+=usr/share/man/man3/gpio_pin_list.3.gz OLD_FILES+=usr/share/man/man3/gpio_pin_low.3.gz OLD_FILES+=usr/share/man/man3/gpio_pin_opendrain.3.gz OLD_FILES+=usr/share/man/man3/gpio_pin_output.3.gz OLD_FILES+=usr/share/man/man3/gpio_pin_pulldown.3.gz OLD_FILES+=usr/share/man/man3/gpio_pin_pullup.3.gz OLD_FILES+=usr/share/man/man3/gpio_pin_pulsate.3.gz OLD_FILES+=usr/share/man/man3/gpio_pin_pushpull.3.gz OLD_FILES+=usr/share/man/man3/gpio_pin_set.3.gz OLD_FILES+=usr/share/man/man3/gpio_pin_set_flags.3.gz OLD_FILES+=usr/share/man/man3/gpio_pin_tristate.3.gz OLD_FILES+=usr/share/man/man8/gpioctl.8.gz .endif .if ${MK_GNU_DIFF} == no OLD_FILES+=usr/bin/diff3 OLD_FILES+=usr/share/man/man1/diff3.1.gz .endif .if ${MK_GNU_GREP} == no OLD_FILES+=usr/bin/gnugrep OLD_FILES+=usr/share/man/man1/gnugrep.1.gz .if ${MK_BSD_GREP} == no OLD_FILES+=usr/bin/bzgrep OLD_FILES+=usr/bin/bzegrep OLD_FILES+=usr/bin/bzfgrep OLD_FILES+=usr/bin/egrep OLD_FILES+=usr/bin/fgrep OLD_FILES+=usr/bin/grep OLD_FILES+=usr/bin/zegrep OLD_FILES+=usr/bin/zfgrep OLD_FILES+=usr/bin/zgrep OLD_FILES+=usr/share/man/man1/bzegrep.1.gz OLD_FILES+=usr/share/man/man1/bzfgrep.1.gz OLD_FILES+=usr/share/man/man1/bzgrep.1.gz OLD_FILES+=usr/share/man/man1/egrep.1.gz OLD_FILES+=usr/share/man/man1/fgrep.1.gz OLD_FILES+=usr/share/man/man1/grep.1.gz OLD_FILES+=usr/share/man/man1/zegrep.1.gz OLD_FILES+=usr/share/man/man1/zfgrep.1.gz OLD_FILES+=usr/share/man/man1/zgrep.1.gz .endif .endif .if ${MK_GSSAPI} == no OLD_FILES+=usr/include/gssapi/gssapi.h OLD_DIRS+=usr/include/gssapi OLD_FILES+=usr/include/gssapi.h OLD_FILES+=usr/lib/libgssapi.a OLD_FILES+=usr/lib/libgssapi.so OLD_LIBS+=usr/lib/libgssapi.so.10 OLD_FILES+=usr/lib/libgssapi_p.a OLD_FILES+=usr/lib/librpcsec_gss.a OLD_FILES+=usr/lib/librpcsec_gss.so OLD_LIBS+=usr/lib/librpcsec_gss.so.1 .if ${TARGET_ARCH} == "amd64" || ${TARGET_ARCH} == "powerpc64" OLD_FILES+=usr/lib32/libgssapi.a OLD_FILES+=usr/lib32/libgssapi.so OLD_LIBS+=usr/lib32/libgssapi.so.10 OLD_FILES+=usr/lib32/libgssapi_p.a OLD_FILES+=usr/lib32/librpcsec_gss.a OLD_FILES+=usr/lib32/librpcsec_gss.so OLD_LIBS+=usr/lib32/librpcsec_gss.so.1 .endif OLD_FILES+=usr/sbin/gssd OLD_FILES+=usr/share/man/man3/gss_accept_sec_context.3.gz OLD_FILES+=usr/share/man/man3/gss_acquire_cred.3.gz OLD_FILES+=usr/share/man/man3/gss_add_cred.3.gz OLD_FILES+=usr/share/man/man3/gss_add_oid_set_member.3.gz OLD_FILES+=usr/share/man/man3/gss_canonicalize_name.3.gz OLD_FILES+=usr/share/man/man3/gss_compare_name.3.gz OLD_FILES+=usr/share/man/man3/gss_context_time.3.gz OLD_FILES+=usr/share/man/man3/gss_create_empty_oid_set.3.gz OLD_FILES+=usr/share/man/man3/gss_delete_sec_context.3.gz OLD_FILES+=usr/share/man/man3/gss_display_name.3.gz OLD_FILES+=usr/share/man/man3/gss_display_status.3.gz OLD_FILES+=usr/share/man/man3/gss_duplicate_name.3.gz OLD_FILES+=usr/share/man/man3/gss_export_name.3.gz OLD_FILES+=usr/share/man/man3/gss_export_sec_context.3.gz OLD_FILES+=usr/share/man/man3/gss_get_mic.3.gz OLD_FILES+=usr/share/man/man3/gss_import_name.3.gz OLD_FILES+=usr/share/man/man3/gss_import_sec_context.3.gz OLD_FILES+=usr/share/man/man3/gss_indicate_mechs.3.gz OLD_FILES+=usr/share/man/man3/gss_init_sec_context.3.gz OLD_FILES+=usr/share/man/man3/gss_inquire_context.3.gz 
OLD_FILES+=usr/share/man/man3/gss_inquire_cred.3.gz OLD_FILES+=usr/share/man/man3/gss_inquire_cred_by_mech.3.gz OLD_FILES+=usr/share/man/man3/gss_inquire_mechs_for_name.3.gz OLD_FILES+=usr/share/man/man3/gss_inquire_names_for_mech.3.gz OLD_FILES+=usr/share/man/man3/gss_process_context_token.3.gz OLD_FILES+=usr/share/man/man3/gss_release_buffer.3.gz OLD_FILES+=usr/share/man/man3/gss_release_cred.3.gz OLD_FILES+=usr/share/man/man3/gss_release_name.3.gz OLD_FILES+=usr/share/man/man3/gss_release_oid_set.3.gz OLD_FILES+=usr/share/man/man3/gss_seal.3.gz OLD_FILES+=usr/share/man/man3/gss_sign.3.gz OLD_FILES+=usr/share/man/man3/gss_test_oid_set_member.3.gz OLD_FILES+=usr/share/man/man3/gss_unseal.3.gz OLD_FILES+=usr/share/man/man3/gss_unwrap.3.gz OLD_FILES+=usr/share/man/man3/gss_verify.3.gz OLD_FILES+=usr/share/man/man3/gss_verify_mic.3.gz OLD_FILES+=usr/share/man/man3/gss_wrap.3.gz OLD_FILES+=usr/share/man/man3/gss_wrap_size_limit.3.gz OLD_FILES+=usr/share/man/man3/gssapi.3.gz OLD_FILES+=usr/share/man/man3/rpc_gss_get_error.3.gz OLD_FILES+=usr/share/man/man3/rpc_gss_get_mech_info.3.gz OLD_FILES+=usr/share/man/man3/rpc_gss_get_mechanisms.3.gz OLD_FILES+=usr/share/man/man3/rpc_gss_get_principal_name.3.gz OLD_FILES+=usr/share/man/man3/rpc_gss_get_versions.3.gz OLD_FILES+=usr/share/man/man3/rpc_gss_getcred.3.gz OLD_FILES+=usr/share/man/man3/rpc_gss_is_installed.3.gz OLD_FILES+=usr/share/man/man3/rpc_gss_max_data_length.3.gz OLD_FILES+=usr/share/man/man3/rpc_gss_mech_to_oid.3.gz OLD_FILES+=usr/share/man/man3/rpc_gss_oid_to_mech.3.gz OLD_FILES+=usr/share/man/man3/rpc_gss_qop_to_num.3.gz OLD_FILES+=usr/share/man/man3/rpc_gss_seccreate.3.gz OLD_FILES+=usr/share/man/man3/rpc_gss_set_callback.3.gz OLD_FILES+=usr/share/man/man3/rpc_gss_set_defaults.3.gz OLD_FILES+=usr/share/man/man3/rpc_gss_set_svc_name.3.gz OLD_FILES+=usr/share/man/man3/rpc_gss_svc_max_data_length.3.gz OLD_FILES+=usr/share/man/man3/rpcsec_gss.3.gz OLD_FILES+=usr/share/man/man5/mech.5.gz OLD_FILES+=usr/share/man/man5/qop.5.gz OLD_FILES+=usr/share/man/man8/gssd.8.gz .endif .if ${MK_HAST} == no +OLD_FILES+=etc/rc.d/hastd OLD_FILES+=sbin/hastctl OLD_FILES+=sbin/hastd OLD_FILES+=usr/share/examples/hast/ucarp.sh OLD_FILES+=usr/share/examples/hast/ucarp_down.sh OLD_FILES+=usr/share/examples/hast/ucarp_up.sh OLD_FILES+=usr/share/examples/hast/vip-down.sh OLD_FILES+=usr/share/examples/hast/vip-up.sh OLD_FILES+=usr/share/man/man5/hast.conf.5.gz OLD_FILES+=usr/share/man/man8/hastctl.8.gz OLD_FILES+=usr/share/man/man8/hastd.8.gz OLD_DIRS+=usr/share/examples/hast # bsnmp OLD_FILES+=usr/lib/snmp_hast.so OLD_LIBS+=usr/lib/snmp_hast.so.6 OLD_FILES+=usr/share/man/man3/snmp_hast.3.gz OLD_FILES+=usr/share/snmp/defs/hast_tree.def OLD_FILES+=usr/share/snmp/mibs/BEGEMOT-HAST-MIB.txt .endif .if ${MK_HESIOD} == no OLD_FILES+=usr/bin/hesinfo OLD_FILES+=usr/include/hesiod.h OLD_FILES+=usr/share/man/man1/hesinfo.1.gz OLD_FILES+=usr/share/man/man3/hesiod.3.gz OLD_FILES+=usr/share/man/man5/hesiod.conf.5.gz .endif .if ${MK_HTML} == no OLD_FILES+=usr/share/doc/ncurses/hackguide.html OLD_FILES+=usr/share/doc/ncurses/ncurses-intro.html OLD_DIRS+=usr/share/doc/ncurses OLD_FILES+=usr/share/doc/ntp/accopt.html OLD_FILES+=usr/share/doc/ntp/assoc.html OLD_FILES+=usr/share/doc/ntp/audio.html OLD_FILES+=usr/share/doc/ntp/authopt.html OLD_FILES+=usr/share/doc/ntp/build.html OLD_FILES+=usr/share/doc/ntp/clockopt.html OLD_FILES+=usr/share/doc/ntp/config.html OLD_FILES+=usr/share/doc/ntp/confopt.html OLD_FILES+=usr/share/doc/ntp/copyright.html 
OLD_FILES+=usr/share/doc/ntp/debug.html OLD_FILES+=usr/share/doc/ntp/driver1.html OLD_FILES+=usr/share/doc/ntp/driver10.html OLD_FILES+=usr/share/doc/ntp/driver11.html OLD_FILES+=usr/share/doc/ntp/driver12.html OLD_FILES+=usr/share/doc/ntp/driver16.html OLD_FILES+=usr/share/doc/ntp/driver18.html OLD_FILES+=usr/share/doc/ntp/driver19.html OLD_FILES+=usr/share/doc/ntp/driver2.html OLD_FILES+=usr/share/doc/ntp/driver20.html OLD_FILES+=usr/share/doc/ntp/driver22.html OLD_FILES+=usr/share/doc/ntp/driver26.html OLD_FILES+=usr/share/doc/ntp/driver27.html OLD_FILES+=usr/share/doc/ntp/driver28.html OLD_FILES+=usr/share/doc/ntp/driver29.html OLD_FILES+=usr/share/doc/ntp/driver3.html OLD_FILES+=usr/share/doc/ntp/driver30.html OLD_FILES+=usr/share/doc/ntp/driver32.html OLD_FILES+=usr/share/doc/ntp/driver33.html OLD_FILES+=usr/share/doc/ntp/driver34.html OLD_FILES+=usr/share/doc/ntp/driver35.html OLD_FILES+=usr/share/doc/ntp/driver36.html OLD_FILES+=usr/share/doc/ntp/driver37.html OLD_FILES+=usr/share/doc/ntp/driver4.html OLD_FILES+=usr/share/doc/ntp/driver5.html OLD_FILES+=usr/share/doc/ntp/driver6.html OLD_FILES+=usr/share/doc/ntp/driver7.html OLD_FILES+=usr/share/doc/ntp/driver8.html OLD_FILES+=usr/share/doc/ntp/driver9.html OLD_FILES+=usr/share/doc/ntp/extern.html OLD_FILES+=usr/share/doc/ntp/hints.html OLD_FILES+=usr/share/doc/ntp/howto.html OLD_FILES+=usr/share/doc/ntp/index.html OLD_FILES+=usr/share/doc/ntp/kern.html OLD_FILES+=usr/share/doc/ntp/ldisc.html OLD_FILES+=usr/share/doc/ntp/measure.html OLD_FILES+=usr/share/doc/ntp/miscopt.html OLD_FILES+=usr/share/doc/ntp/monopt.html OLD_FILES+=usr/share/doc/ntp/mx4200data.html OLD_FILES+=usr/share/doc/ntp/notes.html OLD_FILES+=usr/share/doc/ntp/ntpd.html OLD_FILES+=usr/share/doc/ntp/ntpdate.html OLD_FILES+=usr/share/doc/ntp/ntpdc.html OLD_FILES+=usr/share/doc/ntp/ntpq.html OLD_FILES+=usr/share/doc/ntp/ntptime.html OLD_FILES+=usr/share/doc/ntp/ntptrace.html OLD_FILES+=usr/share/doc/ntp/parsedata.html OLD_FILES+=usr/share/doc/ntp/parsenew.html OLD_FILES+=usr/share/doc/ntp/patches.html OLD_FILES+=usr/share/doc/ntp/porting.html OLD_FILES+=usr/share/doc/ntp/pps.html OLD_FILES+=usr/share/doc/ntp/prefer.html OLD_FILES+=usr/share/doc/ntp/quick.html OLD_FILES+=usr/share/doc/ntp/rdebug.html OLD_FILES+=usr/share/doc/ntp/refclock.html OLD_FILES+=usr/share/doc/ntp/release.html OLD_FILES+=usr/share/doc/ntp/tickadj.html .endif .if ${MK_ICONV} == no OLD_FILES+=usr/bin/iconv OLD_FILES+=usr/bin/mkcsmapper OLD_FILES+=usr/bin/mkesdb OLD_FILES+=usr/include/_libiconv_compat.h OLD_FILES+=usr/include/iconv.h OLD_FILES+=usr/share/man/man1/iconv.1.gz OLD_FILES+=usr/share/man/man1/mkcsmapper.1.gz OLD_FILES+=usr/share/man/man1/mkesdb.1.gz OLD_FILES+=usr/share/man/man3/__iconv.3.gz OLD_FILES+=usr/share/man/man3/__iconv_free_list.3.gz OLD_FILES+=usr/share/man/man3/__iconv_get_list.3.gz OLD_FILES+=usr/share/man/man3/iconv.3.gz OLD_FILES+=usr/share/man/man3/iconv_canonicalize.3.gz OLD_FILES+=usr/share/man/man3/iconv_close.3.gz OLD_FILES+=usr/share/man/man3/iconv_open.3.gz OLD_FILES+=usr/share/man/man3/iconv_open_into.3.gz OLD_FILES+=usr/share/man/man3/iconvctl.3.gz OLD_FILES+=usr/share/man/man3/iconvlist.3.gz OLD_DIRS+=usr/share/i18n OLD_DIRS+=usr/share/i18n/esdb OLD_DIRS+=usr/share/i18n/esdb/ISO-2022 OLD_DIRS+=usr/share/i18n/esdb/BIG5 OLD_DIRS+=usr/share/i18n/esdb/MISC OLD_DIRS+=usr/share/i18n/esdb/TCVN OLD_DIRS+=usr/share/i18n/esdb/EBCDIC OLD_DIRS+=usr/share/i18n/esdb/ISO-8859 OLD_DIRS+=usr/share/i18n/esdb/GEORGIAN OLD_DIRS+=usr/share/i18n/esdb/AST 
OLD_DIRS+=usr/share/i18n/esdb/KAZAKH OLD_DIRS+=usr/share/i18n/esdb/APPLE OLD_DIRS+=usr/share/i18n/esdb/EUC OLD_DIRS+=usr/share/i18n/esdb/CP OLD_DIRS+=usr/share/i18n/esdb/DEC OLD_DIRS+=usr/share/i18n/esdb/UTF OLD_DIRS+=usr/share/i18n/esdb/GB OLD_DIRS+=usr/share/i18n/esdb/ISO646 OLD_DIRS+=usr/share/i18n/esdb/KOI OLD_DIRS+=usr/share/i18n/csmapper OLD_DIRS+=usr/share/i18n/csmapper/KAZAKH OLD_DIRS+=usr/share/i18n/csmapper/CNS OLD_DIRS+=usr/share/i18n/csmapper/BIG5 OLD_DIRS+=usr/share/i18n/csmapper/JIS OLD_DIRS+=usr/share/i18n/csmapper/KOI OLD_DIRS+=usr/share/i18n/csmapper/TCVN OLD_DIRS+=usr/share/i18n/csmapper/MISC OLD_DIRS+=usr/share/i18n/csmapper/EBCDIC OLD_DIRS+=usr/share/i18n/csmapper/ISO646 OLD_DIRS+=usr/share/i18n/csmapper/CP OLD_DIRS+=usr/share/i18n/csmapper/GEORGIAN OLD_DIRS+=usr/share/i18n/csmapper/ISO-8859 OLD_DIRS+=usr/share/i18n/csmapper/AST OLD_DIRS+=usr/share/i18n/csmapper/APPLE OLD_DIRS+=usr/share/i18n/csmapper/KS OLD_DIRS+=usr/share/i18n/csmapper/GB .endif .if ${MK_INET6} == no OLD_FILES+=sbin/ping6 OLD_FILES+=sbin/rtsol OLD_FILES+=usr/sbin/ip6addrctl OLD_FILES+=usr/sbin/mld6query OLD_FILES+=usr/sbin/ndp OLD_FILES+=usr/sbin/rip6query OLD_FILES+=usr/sbin/route6d OLD_FILES+=usr/sbin/rrenumd OLD_FILES+=usr/sbin/rtadvctl OLD_FILES+=usr/sbin/rtadvd OLD_FILES+=usr/sbin/rtsold OLD_FILES+=usr/sbin/traceroute6 OLD_FILES+=usr/share/doc/IPv6/IMPLEMENTATION OLD_FILES+=usr/share/man/man5/rrenumd.conf.5.gz OLD_FILES+=usr/share/man/man5/rtadvd.conf.5.gz OLD_FILES+=usr/share/man/man8/ip6addrctl.8.gz OLD_FILES+=usr/share/man/man8/mld6query.8.gz OLD_FILES+=usr/share/man/man8/ndp.8.gz OLD_FILES+=usr/share/man/man8/ping6.8.gz OLD_FILES+=usr/share/man/man8/rip6query.8.gz OLD_FILES+=usr/share/man/man8/route6d.8.gz OLD_FILES+=usr/share/man/man8/rrenumd.8.gz OLD_FILES+=usr/share/man/man8/rtadvctl.8.gz OLD_FILES+=usr/share/man/man8/rtadvd.8.gz OLD_FILES+=usr/share/man/man8/rtsol.8.gz OLD_FILES+=usr/share/man/man8/rtsold.8.gz OLD_FILES+=usr/share/man/man8/traceroute6.8.gz .endif .if ${MK_INET6_SUPPORT} == no OLD_FILES+=rescue/ping6 OLD_FILES+=rescue/rtsol .endif .if ${MK_INETD} == no +OLD_FILES+=etc/inetd.conf OLD_FILES+=etc/rc.d/inetd OLD_FILES+=usr/sbin/inetd OLD_FILES+=usr/share/man/man5/inetd.conf.5.gz OLD_FILES+=usr/share/man/man8/inetd.8.gz .endif .if ${MK_IPFILTER} == no OLD_FILES+=etc/periodic/security/510.ipfdenied OLD_FILES+=etc/periodic/security/610.ipf6denied OLD_FILES+=rescue/ipf OLD_FILES+=sbin/ipf OLD_FILES+=sbin/ipfs OLD_FILES+=sbin/ipfstat OLD_FILES+=sbin/ipftest OLD_FILES+=sbin/ipmon OLD_FILES+=sbin/ipnat OLD_FILES+=sbin/ippool OLD_FILES+=sbin/ipresend OLD_FILES+=usr/include/netinet/ip_auth.h OLD_FILES+=usr/include/netinet/ip_compat.h OLD_FILES+=usr/include/netinet/ip_fil.h OLD_FILES+=usr/include/netinet/ip_frag.h OLD_FILES+=usr/include/netinet/ip_htable.h OLD_FILES+=usr/include/netinet/ip_lookup.h OLD_FILES+=usr/include/netinet/ip_nat.h OLD_FILES+=usr/include/netinet/ip_pool.h OLD_FILES+=usr/include/netinet/ip_proxy.h OLD_FILES+=usr/include/netinet/ip_rules.h OLD_FILES+=usr/include/netinet/ip_scan.h OLD_FILES+=usr/include/netinet/ip_state.h OLD_FILES+=usr/include/netinet/ip_sync.h OLD_FILES+=usr/include/netinet/ipl.h OLD_FILES+=usr/share/examples/ipfilter/README OLD_FILES+=usr/share/examples/ipfilter/BASIC.NAT OLD_FILES+=usr/share/examples/ipfilter/BASIC_1.FW OLD_FILES+=usr/share/examples/ipfilter/BASIC_2.FW OLD_FILES+=usr/share/examples/ipfilter/example.1 OLD_FILES+=usr/share/examples/ipfilter/example.2 OLD_FILES+=usr/share/examples/ipfilter/example.3 
OLD_FILES+=usr/share/examples/ipfilter/example.4 OLD_FILES+=usr/share/examples/ipfilter/example.5 OLD_FILES+=usr/share/examples/ipfilter/example.6 OLD_FILES+=usr/share/examples/ipfilter/example.7 OLD_FILES+=usr/share/examples/ipfilter/example.8 OLD_FILES+=usr/share/examples/ipfilter/example.9 OLD_FILES+=usr/share/examples/ipfilter/example.10 OLD_FILES+=usr/share/examples/ipfilter/example.11 OLD_FILES+=usr/share/examples/ipfilter/example.12 OLD_FILES+=usr/share/examples/ipfilter/example.13 OLD_FILES+=usr/share/examples/ipfilter/example.sr OLD_FILES+=usr/share/examples/ipfilter/firewall OLD_FILES+=usr/share/examples/ipfilter/ftp-proxy OLD_FILES+=usr/share/examples/ipfilter/ftppxy OLD_FILES+=usr/share/examples/ipfilter/nat-setup OLD_FILES+=usr/share/examples/ipfilter/nat.eg OLD_FILES+=usr/share/examples/ipfilter/server OLD_FILES+=usr/share/examples/ipfilter/tcpstate OLD_FILES+=usr/share/examples/ipfilter/example.14 OLD_FILES+=usr/share/examples/ipfilter/firewall.1 OLD_FILES+=usr/share/examples/ipfilter/firewall.2 OLD_FILES+=usr/share/examples/ipfilter/ipf.conf.permissive OLD_FILES+=usr/share/examples/ipfilter/ipf.conf.restrictive OLD_FILES+=usr/share/examples/ipfilter/ipf.conf.sample OLD_FILES+=usr/share/examples/ipfilter/ipnat.conf.sample OLD_FILES+=usr/share/examples/ipfilter/ipf-howto.txt OLD_FILES+=usr/share/examples/ipfilter/examples.txt OLD_FILES+=usr/share/examples/ipfilter/rules.txt OLD_FILES+=usr/share/examples/ipfilter/mkfilters OLD_DIRS+=usr/share/examples/ipfilter OLD_FILES+=usr/share/man/man1/ipftest.1.gz OLD_FILES+=usr/share/man/man1/ipresend.1.gz OLD_FILES+=usr/share/man/man4/ipf.4.gz OLD_FILES+=usr/share/man/man4/ipl.4.gz OLD_FILES+=usr/share/man/man4/ipfilter.4.gz OLD_FILES+=usr/share/man/man4/ipnat.4.gz OLD_FILES+=usr/share/man/man5/ipf.5.gz OLD_FILES+=usr/share/man/man5/ipf.conf.5.gz OLD_FILES+=usr/share/man/man5/ipf6.conf.5.gz OLD_FILES+=usr/share/man/man5/ipnat.5.gz OLD_FILES+=usr/share/man/man5/ipnat.conf.5.gz OLD_FILES+=usr/share/man/man5/ippool.5.gz OLD_FILES+=usr/share/man/man8/ipf.8.gz OLD_FILES+=usr/share/man/man8/ipfs.8.gz OLD_FILES+=usr/share/man/man8/ipfstat.8.gz OLD_FILES+=usr/share/man/man8/ipmon.8.gz OLD_FILES+=usr/share/man/man8/ipnat.8.gz OLD_FILES+=usr/share/man/man8/ippool.8.gz .endif .if ${MK_IPFW} == no +OLD_FILES+=etc/rc.d/ipfw OLD_FILES+=etc/periodic/security/500.ipfwdenied OLD_FILES+=etc/periodic/security/550.ipfwlimit OLD_FILES+=sbin/ipfw OLD_FILES+=sbin/natd OLD_FILES+=usr/sbin/ipfwpcap OLD_FILES+=usr/share/man/man8/ipfw.8.gz OLD_FILES+=usr/share/man/man8/ipfwpcap.8.gz OLD_FILES+=usr/share/man/man8/natd.8.gz .endif .if ${MK_ISCSI} == no OLD_FILES+=etc/rc.d/iscsictl OLD_FILES+=etc/rc.d/iscsid OLD_FILES+=rescue/iscsictl OLD_FILES+=rescue/iscsid OLD_FILES+=sbin/iscontrol OLD_FILES+=usr/bin/iscsictl OLD_FILES+=usr/sbin/iscsid OLD_FILES+=usr/share/man/man4/iscsi.4.gz OLD_FILES+=usr/share/man/man4/iscsi_initiator.4.gz OLD_FILES+=usr/share/man/man5/iscsi.conf.5.gz OLD_FILES+=usr/share/man/man8/iscontrol.8.gz OLD_FILES+=usr/share/man/man8/iscsictl.8.gz OLD_FILES+=usr/share/man/man8/iscsid.8.gz .endif .if ${MK_JAIL} == no OLD_FILES+=etc/rc.d/jail OLD_FILES+=usr/sbin/jail OLD_FILES+=usr/sbin/jexec OLD_FILES+=usr/sbin/jls OLD_FILES+=usr/share/man/man5/jail.conf.5.gz OLD_FILES+=usr/share/man/man8/jail.8.gz OLD_FILES+=usr/share/man/man8/jexec.8.gz OLD_FILES+=usr/share/man/man8/jls.8.gz .endif .if ${MK_KDUMP} == no OLD_FILES+=usr/bin/kdump OLD_FILES+=usr/bin/truss OLD_FILES+=usr/share/man/man1/kdump.1.gz OLD_FILES+=usr/share/man/man1/truss.1.gz .endif .if 
${MK_KERBEROS} == no OLD_FILES+=etc/rc.d/ipropd_master OLD_FILES+=etc/rc.d/ipropd_slave OLD_FILES+=usr/bin/asn1_compile OLD_FILES+=usr/bin/compile_et OLD_FILES+=usr/bin/hxtool OLD_FILES+=usr/bin/kadmin OLD_FILES+=usr/bin/kcc OLD_FILES+=usr/bin/kdestroy OLD_FILES+=usr/bin/kf OLD_FILES+=usr/bin/kgetcred OLD_FILES+=usr/bin/kinit OLD_FILES+=usr/bin/klist OLD_FILES+=usr/bin/kpasswd OLD_FILES+=usr/bin/krb5-config OLD_FILES+=usr/bin/ksu OLD_FILES+=usr/bin/kswitch OLD_FILES+=usr/bin/make-roken OLD_FILES+=usr/bin/slc OLD_FILES+=usr/bin/string2key OLD_FILES+=usr/bin/verify_krb5_conf OLD_FILES+=usr/include/asn1-common.h OLD_FILES+=usr/include/asn1_err.h OLD_FILES+=usr/include/base64.h OLD_FILES+=usr/include/cms_asn1.h OLD_FILES+=usr/include/crmf_asn1.h OLD_FILES+=usr/include/der-private.h OLD_FILES+=usr/include/der-protos.h OLD_FILES+=usr/include/der.h OLD_FILES+=usr/include/digest_asn1.h OLD_FILES+=usr/include/getarg.h OLD_FILES+=usr/include/gssapi/gssapi_krb5.h OLD_FILES+=usr/include/hdb-protos.h OLD_FILES+=usr/include/hdb.h OLD_FILES+=usr/include/hdb_asn1.h OLD_FILES+=usr/include/hdb_err.h OLD_FILES+=usr/include/heim_asn1.h OLD_FILES+=usr/include/heim_err.h OLD_FILES+=usr/include/heim_threads.h OLD_FILES+=usr/include/heimbase.h OLD_FILES+=usr/include/heimntlm-protos.h OLD_FILES+=usr/include/heimntlm.h OLD_FILES+=usr/include/hex.h OLD_FILES+=usr/include/hx509-private.h OLD_FILES+=usr/include/hx509-protos.h OLD_FILES+=usr/include/hx509.h OLD_FILES+=usr/include/hx509_err.h OLD_FILES+=usr/include/k524_err.h OLD_FILES+=usr/include/kadm5/admin.h OLD_FILES+=usr/include/kadm5/kadm5-private.h OLD_FILES+=usr/include/kadm5/kadm5-protos.h OLD_FILES+=usr/include/kadm5/kadm5-pwcheck.h OLD_FILES+=usr/include/kadm5/kadm5_err.h OLD_FILES+=usr/include/kadm5/private.h OLD_DIRS+=usr/include/kadm5 OLD_FILES+=usr/include/kafs.h OLD_FILES+=usr/include/kdc-protos.h OLD_FILES+=usr/include/kdc.h OLD_FILES+=usr/include/krb5-private.h OLD_FILES+=usr/include/krb5-protos.h OLD_FILES+=usr/include/krb5-types.h OLD_FILES+=usr/include/krb5.h OLD_FILES+=usr/include/krb5/ccache_plugin.h OLD_FILES+=usr/include/krb5/locate_plugin.h OLD_FILES+=usr/include/krb5/send_to_kdc_plugin.h OLD_FILES+=usr/include/krb5/windc_plugin.h OLD_DIRS+=usr/include/krb5 OLD_FILES+=usr/include/krb5_asn1.h OLD_FILES+=usr/include/krb5_ccapi.h OLD_FILES+=usr/include/krb5_err.h OLD_FILES+=usr/include/kx509_asn1.h OLD_FILES+=usr/include/ntlm_err.h OLD_FILES+=usr/include/ocsp_asn1.h OLD_FILES+=usr/include/parse_bytes.h OLD_FILES+=usr/include/parse_time.h OLD_FILES+=usr/include/parse_units.h OLD_FILES+=usr/include/pkcs10_asn1.h OLD_FILES+=usr/include/pkcs12_asn1.h OLD_FILES+=usr/include/pkcs8_asn1.h OLD_FILES+=usr/include/pkcs9_asn1.h OLD_FILES+=usr/include/pkinit_asn1.h OLD_FILES+=usr/include/resolve.h OLD_FILES+=usr/include/rfc2459_asn1.h OLD_FILES+=usr/include/roken-common.h OLD_FILES+=usr/include/rtbl.h OLD_FILES+=usr/include/wind.h OLD_FILES+=usr/include/wind_err.h OLD_FILES+=usr/include/xdbm.h OLD_FILES+=usr/lib/libasn1.a OLD_FILES+=usr/lib/libasn1.so OLD_LIBS+=usr/lib/libasn1.so.11 OLD_FILES+=usr/lib/libasn1_p.a OLD_FILES+=usr/lib/libcom_err.a OLD_FILES+=usr/lib/libcom_err.so OLD_LIBS+=usr/lib/libcom_err.so.5 OLD_FILES+=usr/lib/libcom_err_p.a OLD_FILES+=usr/lib/libgssapi_krb5.a OLD_FILES+=usr/lib/libgssapi_krb5.so OLD_LIBS+=usr/lib/libgssapi_krb5.so.10 OLD_FILES+=usr/lib/libgssapi_krb5_p.a OLD_FILES+=usr/lib/libgssapi_ntlm.a OLD_FILES+=usr/lib/libgssapi_ntlm.so OLD_LIBS+=usr/lib/libgssapi_ntlm.so.10 OLD_FILES+=usr/lib/libgssapi_ntlm_p.a 
OLD_FILES+=usr/lib/libgssapi_spnego.a OLD_FILES+=usr/lib/libgssapi_spnego.so OLD_LIBS+=usr/lib/libgssapi_spnego.so.10 OLD_FILES+=usr/lib/libgssapi_spnego_p.a OLD_FILES+=usr/lib/libhdb.a OLD_FILES+=usr/lib/libhdb.so OLD_LIBS+=usr/lib/libhdb.so.11 OLD_FILES+=usr/lib/libhdb_p.a OLD_FILES+=usr/lib/libheimbase.a OLD_FILES+=usr/lib/libheimbase.so OLD_LIBS+=usr/lib/libheimbase.so.11 OLD_FILES+=usr/lib/libheimbase_p.a OLD_FILES+=usr/lib/libheimntlm.a OLD_FILES+=usr/lib/libheimntlm.so OLD_LIBS+=usr/lib/libheimntlm.so.11 OLD_FILES+=usr/lib/libheimntlm_p.a OLD_FILES+=usr/lib/libheimsqlite.a OLD_FILES+=usr/lib/libheimsqlite.so OLD_LIBS+=usr/lib/libheimsqlite.so.11 OLD_FILES+=usr/lib/libheimsqlite_p.a OLD_FILES+=usr/lib/libhx509.a OLD_FILES+=usr/lib/libhx509.so OLD_LIBS+=usr/lib/libhx509.so.11 OLD_FILES+=usr/lib/libhx509_p.a OLD_FILES+=usr/lib/libkadm5clnt.a OLD_FILES+=usr/lib/libkadm5clnt.so OLD_LIBS+=usr/lib/libkadm5clnt.so.11 OLD_FILES+=usr/lib/libkadm5clnt_p.a OLD_FILES+=usr/lib/libkadm5srv.a OLD_FILES+=usr/lib/libkadm5srv.so OLD_LIBS+=usr/lib/libkadm5srv.so.11 OLD_FILES+=usr/lib/libkadm5srv_p.a OLD_FILES+=usr/lib/libkafs5.a OLD_FILES+=usr/lib/libkafs5.so OLD_LIBS+=usr/lib/libkafs5.so.11 OLD_FILES+=usr/lib/libkafs5_p.a OLD_FILES+=usr/lib/libkdc.a OLD_FILES+=usr/lib/libkdc.so OLD_LIBS+=usr/lib/libkdc.so.11 OLD_FILES+=usr/lib/libkdc_p.a OLD_FILES+=usr/lib/libkrb5.a OLD_FILES+=usr/lib/libkrb5.so OLD_LIBS+=usr/lib/libkrb5.so.11 OLD_FILES+=usr/lib/libkrb5_p.a OLD_FILES+=usr/lib/libroken.a OLD_FILES+=usr/lib/libroken.so OLD_LIBS+=usr/lib/libroken.so.11 OLD_FILES+=usr/lib/libroken_p.a OLD_FILES+=usr/lib/libwind.a OLD_FILES+=usr/lib/libwind.so OLD_LIBS+=usr/lib/libwind.so.11 OLD_FILES+=usr/lib/libwind_p.a OLD_FILES+=usr/lib/pam_krb5.so OLD_LIBS+=usr/lib/pam_krb5.so.6 OLD_FILES+=usr/lib/pam_ksu.so OLD_LIBS+=usr/lib/pam_ksu.so.6 OLD_FILES+=usr/lib/private/libheimipcc.a OLD_FILES+=usr/lib/private/libheimipcc.so OLD_LIBS+=usr/lib/private/libheimipcc.so.11 OLD_FILES+=usr/lib/private/libheimipcc_p.a OLD_FILES+=usr/lib/private/libheimipcs.a OLD_FILES+=usr/lib/private/libheimipcs.so OLD_LIBS+=usr/lib/private/libheimipcs.so.11 OLD_FILES+=usr/lib/private/libheimipcs_p.a .if ${TARGET_ARCH} == "amd64" || ${TARGET_ARCH} == "powerpc64" OLD_FILES+=usr/lib32/libasn1.a OLD_FILES+=usr/lib32/libasn1.so OLD_LIBS+=usr/lib32/libasn1.so.11 OLD_FILES+=usr/lib32/libasn1_p.a OLD_FILES+=usr/lib32/libgssapi_krb5.a OLD_FILES+=usr/lib32/libgssapi_krb5.so OLD_LIBS+=usr/lib32/libgssapi_krb5.so.10 OLD_FILES+=usr/lib32/libgssapi_krb5_p.a OLD_FILES+=usr/lib32/libgssapi_ntlm.a OLD_FILES+=usr/lib32/libgssapi_ntlm.so OLD_LIBS+=usr/lib32/libgssapi_ntlm.so.10 OLD_FILES+=usr/lib32/libgssapi_ntlm_p.a OLD_FILES+=usr/lib32/libgssapi_spnego.a OLD_FILES+=usr/lib32/libgssapi_spnego.so OLD_LIBS+=usr/lib32/libgssapi_spnego.so.10 OLD_FILES+=usr/lib32/libgssapi_spnego_p.a OLD_FILES+=usr/lib32/libhdb.a OLD_FILES+=usr/lib32/libhdb.so OLD_LIBS+=usr/lib32/libhdb.so.11 OLD_FILES+=usr/lib32/libhdb_p.a OLD_FILES+=usr/lib32/libheimbase.a OLD_FILES+=usr/lib32/libheimbase.so OLD_LIBS+=usr/lib32/libheimbase.so.11 OLD_FILES+=usr/lib32/libheimbase_p.a OLD_FILES+=usr/lib32/libheimntlm.a OLD_FILES+=usr/lib32/libheimntlm.so OLD_LIBS+=usr/lib32/libheimntlm.so.11 OLD_FILES+=usr/lib32/libheimntlm_p.a OLD_FILES+=usr/lib32/libheimsqlite.a OLD_FILES+=usr/lib32/libheimsqlite.so OLD_LIBS+=usr/lib32/libheimsqlite.so.11 OLD_FILES+=usr/lib32/libheimsqlite_p.a OLD_FILES+=usr/lib32/libhx509.a OLD_FILES+=usr/lib32/libhx509.so OLD_LIBS+=usr/lib32/libhx509.so.11 
OLD_FILES+=usr/lib32/libhx509_p.a OLD_FILES+=usr/lib32/libkadm5clnt.a OLD_FILES+=usr/lib32/libkadm5clnt.so OLD_LIBS+=usr/lib32/libkadm5clnt.so.11 OLD_FILES+=usr/lib32/libkadm5clnt_p.a OLD_FILES+=usr/lib32/libkadm5srv.a OLD_FILES+=usr/lib32/libkadm5srv.so OLD_LIBS+=usr/lib32/libkadm5srv.so.11 OLD_FILES+=usr/lib32/libkadm5srv_p.a OLD_FILES+=usr/lib32/libkafs5.a OLD_FILES+=usr/lib32/libkafs5.so OLD_LIBS+=usr/lib32/libkafs5.so.11 OLD_FILES+=usr/lib32/libkafs5_p.a OLD_FILES+=usr/lib32/libkdc.a OLD_FILES+=usr/lib32/libkdc.so OLD_LIBS+=usr/lib32/libkdc.so.11 OLD_FILES+=usr/lib32/libkdc_p.a OLD_FILES+=usr/lib32/libkrb5.a OLD_FILES+=usr/lib32/libkrb5.so OLD_LIBS+=usr/lib32/libkrb5.so.11 OLD_FILES+=usr/lib32/libkrb5_p.a OLD_FILES+=usr/lib32/libroken.a OLD_FILES+=usr/lib32/libroken.so OLD_LIBS+=usr/lib32/libroken.so.11 OLD_FILES+=usr/lib32/libroken_p.a OLD_FILES+=usr/lib32/libwind.a OLD_FILES+=usr/lib32/libwind.so OLD_LIBS+=usr/lib32/libwind.so.11 OLD_FILES+=usr/lib32/libwind_p.a OLD_FILES+=usr/lib32/pam_krb5.so OLD_LIBS+=usr/lib32/pam_krb5.so.6 OLD_FILES+=usr/lib32/pam_ksu.so OLD_LIBS+=usr/lib32/pam_ksu.so.6 OLD_FILES+=usr/lib32/private/libheimipcc.a OLD_FILES+=usr/lib32/private/libheimipcc.so OLD_LIBS+=usr/lib32/private/libheimipcc.so.11 OLD_FILES+=usr/lib32/private/libheimipcc_p.a OLD_FILES+=usr/lib32/private/libheimipcs.a OLD_FILES+=usr/lib32/private/libheimipcs.so OLD_LIBS+=usr/lib32/private/libheimipcs.so.11 OLD_FILES+=usr/lib32/private/libheimipcs_p.a .endif OLD_FILES+=usr/libexec/digest-service OLD_FILES+=usr/libexec/hprop OLD_FILES+=usr/libexec/hpropd OLD_FILES+=usr/libexec/ipropd-master OLD_FILES+=usr/libexec/ipropd-slave OLD_FILES+=usr/libexec/kadmind OLD_FILES+=usr/libexec/kcm OLD_FILES+=usr/libexec/kdc OLD_FILES+=usr/libexec/kdigest OLD_FILES+=usr/libexec/kfd OLD_FILES+=usr/libexec/kimpersonate OLD_FILES+=usr/libexec/kpasswdd OLD_FILES+=usr/sbin/kstash OLD_FILES+=usr/sbin/ktutil OLD_FILES+=usr/sbin/iprop-log OLD_FILES+=usr/share/man/man1/kdestroy.1.gz OLD_FILES+=usr/share/man/man1/kf.1.gz OLD_FILES+=usr/share/man/man1/kinit.1.gz OLD_FILES+=usr/share/man/man1/klist.1.gz OLD_FILES+=usr/share/man/man1/kpasswd.1.gz OLD_FILES+=usr/share/man/man1/krb5-config.1.gz OLD_FILES+=usr/share/man/man1/kswitch.1.gz OLD_FILES+=usr/share/man/man3/HDB.3.gz OLD_FILES+=usr/share/man/man3/hdb__del.3.gz OLD_FILES+=usr/share/man/man3/hdb__get.3.gz OLD_FILES+=usr/share/man/man3/hdb__put.3.gz OLD_FILES+=usr/share/man/man3/hdb_auth_status.3.gz OLD_FILES+=usr/share/man/man3/hdb_check_constrained_delegation.3.gz OLD_FILES+=usr/share/man/man3/hdb_check_pkinit_ms_upn_match.3.gz OLD_FILES+=usr/share/man/man3/hdb_check_s4u2self.3.gz OLD_FILES+=usr/share/man/man3/hdb_close.3.gz OLD_FILES+=usr/share/man/man3/hdb_destroy.3.gz OLD_FILES+=usr/share/man/man3/hdb_entry_ex.3.gz OLD_FILES+=usr/share/man/man3/hdb_fetch_kvno.3.gz OLD_FILES+=usr/share/man/man3/hdb_firstkey.3.gz OLD_FILES+=usr/share/man/man3/hdb_free.3.gz OLD_FILES+=usr/share/man/man3/hdb_get_realms.3.gz OLD_FILES+=usr/share/man/man3/hdb_lock.3.gz OLD_FILES+=usr/share/man/man3/hdb_name.3.gz OLD_FILES+=usr/share/man/man3/hdb_nextkey.3.gz OLD_FILES+=usr/share/man/man3/hdb_open.3.gz OLD_FILES+=usr/share/man/man3/hdb_password.3.gz OLD_FILES+=usr/share/man/man3/hdb_remove.3.gz OLD_FILES+=usr/share/man/man3/hdb_rename.3.gz OLD_FILES+=usr/share/man/man3/hdb_store.3.gz OLD_FILES+=usr/share/man/man3/hdb_unlock.3.gz OLD_FILES+=usr/share/man/man3/heim_ntlm_build_ntlm1_master.3.gz OLD_FILES+=usr/share/man/man3/heim_ntlm_build_ntlm2_master.3.gz 
OLD_FILES+=usr/share/man/man3/heim_ntlm_calculate_lm2.3.gz OLD_FILES+=usr/share/man/man3/heim_ntlm_calculate_ntlm1.3.gz OLD_FILES+=usr/share/man/man3/heim_ntlm_calculate_ntlm2.3.gz OLD_FILES+=usr/share/man/man3/heim_ntlm_decode_targetinfo.3.gz OLD_FILES+=usr/share/man/man3/heim_ntlm_encode_targetinfo.3.gz OLD_FILES+=usr/share/man/man3/heim_ntlm_encode_type1.3.gz OLD_FILES+=usr/share/man/man3/heim_ntlm_encode_type2.3.gz OLD_FILES+=usr/share/man/man3/heim_ntlm_encode_type3.3.gz OLD_FILES+=usr/share/man/man3/heim_ntlm_free_buf.3.gz OLD_FILES+=usr/share/man/man3/heim_ntlm_free_targetinfo.3.gz OLD_FILES+=usr/share/man/man3/heim_ntlm_free_type1.3.gz OLD_FILES+=usr/share/man/man3/heim_ntlm_free_type2.3.gz OLD_FILES+=usr/share/man/man3/heim_ntlm_free_type3.3.gz OLD_FILES+=usr/share/man/man3/heim_ntlm_keyex_unwrap.3.gz OLD_FILES+=usr/share/man/man3/heim_ntlm_nt_key.3.gz OLD_FILES+=usr/share/man/man3/heim_ntlm_ntlmv2_key.3.gz OLD_FILES+=usr/share/man/man3/heim_ntlm_verify_ntlm2.3.gz OLD_FILES+=usr/share/man/man3/hx509.3.gz OLD_FILES+=usr/share/man/man3/hx509_bitstring_print.3.gz OLD_FILES+=usr/share/man/man3/hx509_ca.3.gz OLD_FILES+=usr/share/man/man3/hx509_ca_sign.3.gz OLD_FILES+=usr/share/man/man3/hx509_ca_sign_self.3.gz OLD_FILES+=usr/share/man/man3/hx509_ca_tbs_add_crl_dp_uri.3.gz OLD_FILES+=usr/share/man/man3/hx509_ca_tbs_add_eku.3.gz OLD_FILES+=usr/share/man/man3/hx509_ca_tbs_add_san_hostname.3.gz OLD_FILES+=usr/share/man/man3/hx509_ca_tbs_add_san_jid.3.gz OLD_FILES+=usr/share/man/man3/hx509_ca_tbs_add_san_ms_upn.3.gz OLD_FILES+=usr/share/man/man3/hx509_ca_tbs_add_san_otherName.3.gz OLD_FILES+=usr/share/man/man3/hx509_ca_tbs_add_san_pkinit.3.gz OLD_FILES+=usr/share/man/man3/hx509_ca_tbs_add_san_rfc822name.3.gz OLD_FILES+=usr/share/man/man3/hx509_ca_tbs_free.3.gz OLD_FILES+=usr/share/man/man3/hx509_ca_tbs_init.3.gz OLD_FILES+=usr/share/man/man3/hx509_ca_tbs_set_ca.3.gz OLD_FILES+=usr/share/man/man3/hx509_ca_tbs_set_domaincontroller.3.gz OLD_FILES+=usr/share/man/man3/hx509_ca_tbs_set_notAfter.3.gz OLD_FILES+=usr/share/man/man3/hx509_ca_tbs_set_notAfter_lifetime.3.gz OLD_FILES+=usr/share/man/man3/hx509_ca_tbs_set_notBefore.3.gz OLD_FILES+=usr/share/man/man3/hx509_ca_tbs_set_proxy.3.gz OLD_FILES+=usr/share/man/man3/hx509_ca_tbs_set_serialnumber.3.gz OLD_FILES+=usr/share/man/man3/hx509_ca_tbs_set_spki.3.gz OLD_FILES+=usr/share/man/man3/hx509_ca_tbs_set_subject.3.gz OLD_FILES+=usr/share/man/man3/hx509_ca_tbs_set_template.3.gz OLD_FILES+=usr/share/man/man3/hx509_ca_tbs_set_unique.3.gz OLD_FILES+=usr/share/man/man3/hx509_ca_tbs_subject_expand.3.gz OLD_FILES+=usr/share/man/man3/hx509_ca_tbs_template_units.3.gz OLD_FILES+=usr/share/man/man3/hx509_cert.3.gz OLD_FILES+=usr/share/man/man3/hx509_cert_binary.3.gz OLD_FILES+=usr/share/man/man3/hx509_cert_check_eku.3.gz OLD_FILES+=usr/share/man/man3/hx509_cert_cmp.3.gz OLD_FILES+=usr/share/man/man3/hx509_cert_find_subjectAltName_otherName.3.gz OLD_FILES+=usr/share/man/man3/hx509_cert_free.3.gz OLD_FILES+=usr/share/man/man3/hx509_cert_get_SPKI.3.gz OLD_FILES+=usr/share/man/man3/hx509_cert_get_SPKI_AlgorithmIdentifier.3.gz OLD_FILES+=usr/share/man/man3/hx509_cert_get_attribute.3.gz OLD_FILES+=usr/share/man/man3/hx509_cert_get_base_subject.3.gz OLD_FILES+=usr/share/man/man3/hx509_cert_get_friendly_name.3.gz OLD_FILES+=usr/share/man/man3/hx509_cert_get_issuer.3.gz OLD_FILES+=usr/share/man/man3/hx509_cert_get_issuer_unique_id.3.gz OLD_FILES+=usr/share/man/man3/hx509_cert_get_notAfter.3.gz OLD_FILES+=usr/share/man/man3/hx509_cert_get_notBefore.3.gz 
OLD_FILES+=usr/share/man/man3/hx509_cert_get_serialnumber.3.gz OLD_FILES+=usr/share/man/man3/hx509_cert_get_subject.3.gz OLD_FILES+=usr/share/man/man3/hx509_cert_get_subject_unique_id.3.gz OLD_FILES+=usr/share/man/man3/hx509_cert_init.3.gz OLD_FILES+=usr/share/man/man3/hx509_cert_init_data.3.gz OLD_FILES+=usr/share/man/man3/hx509_cert_keyusage_print.3.gz OLD_FILES+=usr/share/man/man3/hx509_cert_ref.3.gz OLD_FILES+=usr/share/man/man3/hx509_cert_set_friendly_name.3.gz OLD_FILES+=usr/share/man/man3/hx509_certs_add.3.gz OLD_FILES+=usr/share/man/man3/hx509_certs_append.3.gz OLD_FILES+=usr/share/man/man3/hx509_certs_end_seq.3.gz OLD_FILES+=usr/share/man/man3/hx509_certs_filter.3.gz OLD_FILES+=usr/share/man/man3/hx509_certs_find.3.gz OLD_FILES+=usr/share/man/man3/hx509_certs_free.3.gz OLD_FILES+=usr/share/man/man3/hx509_certs_info.3.gz OLD_FILES+=usr/share/man/man3/hx509_certs_init.3.gz OLD_FILES+=usr/share/man/man3/hx509_certs_iter_f.3.gz OLD_FILES+=usr/share/man/man3/hx509_certs_merge.3.gz OLD_FILES+=usr/share/man/man3/hx509_certs_next_cert.3.gz OLD_FILES+=usr/share/man/man3/hx509_certs_start_seq.3.gz OLD_FILES+=usr/share/man/man3/hx509_certs_store.3.gz OLD_FILES+=usr/share/man/man3/hx509_ci_print_names.3.gz OLD_FILES+=usr/share/man/man3/hx509_clear_error_string.3.gz OLD_FILES+=usr/share/man/man3/hx509_cms.3.gz OLD_FILES+=usr/share/man/man3/hx509_cms_create_signed_1.3.gz OLD_FILES+=usr/share/man/man3/hx509_cms_envelope_1.3.gz OLD_FILES+=usr/share/man/man3/hx509_cms_unenvelope.3.gz OLD_FILES+=usr/share/man/man3/hx509_cms_unwrap_ContentInfo.3.gz OLD_FILES+=usr/share/man/man3/hx509_cms_verify_signed.3.gz OLD_FILES+=usr/share/man/man3/hx509_cms_wrap_ContentInfo.3.gz OLD_FILES+=usr/share/man/man3/hx509_context_free.3.gz OLD_FILES+=usr/share/man/man3/hx509_context_init.3.gz OLD_FILES+=usr/share/man/man3/hx509_context_set_missing_revoke.3.gz OLD_FILES+=usr/share/man/man3/hx509_crl_add_revoked_certs.3.gz OLD_FILES+=usr/share/man/man3/hx509_crl_alloc.3.gz OLD_FILES+=usr/share/man/man3/hx509_crl_free.3.gz OLD_FILES+=usr/share/man/man3/hx509_crl_lifetime.3.gz OLD_FILES+=usr/share/man/man3/hx509_crl_sign.3.gz OLD_FILES+=usr/share/man/man3/hx509_crypto.3.gz OLD_FILES+=usr/share/man/man3/hx509_env.3.gz OLD_FILES+=usr/share/man/man3/hx509_env_add.3.gz OLD_FILES+=usr/share/man/man3/hx509_env_add_binding.3.gz OLD_FILES+=usr/share/man/man3/hx509_env_find.3.gz OLD_FILES+=usr/share/man/man3/hx509_env_find_binding.3.gz OLD_FILES+=usr/share/man/man3/hx509_env_free.3.gz OLD_FILES+=usr/share/man/man3/hx509_env_lfind.3.gz OLD_FILES+=usr/share/man/man3/hx509_err.3.gz OLD_FILES+=usr/share/man/man3/hx509_error.3.gz OLD_FILES+=usr/share/man/man3/hx509_free_error_string.3.gz OLD_FILES+=usr/share/man/man3/hx509_free_octet_string_list.3.gz OLD_FILES+=usr/share/man/man3/hx509_general_name_unparse.3.gz OLD_FILES+=usr/share/man/man3/hx509_get_error_string.3.gz OLD_FILES+=usr/share/man/man3/hx509_get_one_cert.3.gz OLD_FILES+=usr/share/man/man3/hx509_keyset.3.gz OLD_FILES+=usr/share/man/man3/hx509_lock.3.gz OLD_FILES+=usr/share/man/man3/hx509_misc.3.gz OLD_FILES+=usr/share/man/man3/hx509_name.3.gz OLD_FILES+=usr/share/man/man3/hx509_name_binary.3.gz OLD_FILES+=usr/share/man/man3/hx509_name_cmp.3.gz OLD_FILES+=usr/share/man/man3/hx509_name_copy.3.gz OLD_FILES+=usr/share/man/man3/hx509_name_expand.3.gz OLD_FILES+=usr/share/man/man3/hx509_name_free.3.gz OLD_FILES+=usr/share/man/man3/hx509_name_is_null_p.3.gz OLD_FILES+=usr/share/man/man3/hx509_name_to_Name.3.gz OLD_FILES+=usr/share/man/man3/hx509_name_to_string.3.gz 
OLD_FILES+=usr/share/man/man3/hx509_ocsp_request.3.gz OLD_FILES+=usr/share/man/man3/hx509_ocsp_verify.3.gz OLD_FILES+=usr/share/man/man3/hx509_oid_print.3.gz OLD_FILES+=usr/share/man/man3/hx509_oid_sprint.3.gz OLD_FILES+=usr/share/man/man3/hx509_parse_name.3.gz OLD_FILES+=usr/share/man/man3/hx509_peer.3.gz OLD_FILES+=usr/share/man/man3/hx509_peer_info_add_cms_alg.3.gz OLD_FILES+=usr/share/man/man3/hx509_peer_info_alloc.3.gz OLD_FILES+=usr/share/man/man3/hx509_peer_info_free.3.gz OLD_FILES+=usr/share/man/man3/hx509_peer_info_set_cert.3.gz OLD_FILES+=usr/share/man/man3/hx509_peer_info_set_cms_algs.3.gz OLD_FILES+=usr/share/man/man3/hx509_print.3.gz OLD_FILES+=usr/share/man/man3/hx509_print_cert.3.gz OLD_FILES+=usr/share/man/man3/hx509_print_stdout.3.gz OLD_FILES+=usr/share/man/man3/hx509_query.3.gz OLD_FILES+=usr/share/man/man3/hx509_query_alloc.3.gz OLD_FILES+=usr/share/man/man3/hx509_query_free.3.gz OLD_FILES+=usr/share/man/man3/hx509_query_match_cmp_func.3.gz OLD_FILES+=usr/share/man/man3/hx509_query_match_eku.3.gz OLD_FILES+=usr/share/man/man3/hx509_query_match_friendly_name.3.gz OLD_FILES+=usr/share/man/man3/hx509_query_match_issuer_serial.3.gz OLD_FILES+=usr/share/man/man3/hx509_query_match_option.3.gz OLD_FILES+=usr/share/man/man3/hx509_query_statistic_file.3.gz OLD_FILES+=usr/share/man/man3/hx509_query_unparse_stats.3.gz OLD_FILES+=usr/share/man/man3/hx509_revoke.3.gz OLD_FILES+=usr/share/man/man3/hx509_revoke_add_crl.3.gz OLD_FILES+=usr/share/man/man3/hx509_revoke_add_ocsp.3.gz OLD_FILES+=usr/share/man/man3/hx509_revoke_free.3.gz OLD_FILES+=usr/share/man/man3/hx509_revoke_init.3.gz OLD_FILES+=usr/share/man/man3/hx509_revoke_ocsp_print.3.gz OLD_FILES+=usr/share/man/man3/hx509_revoke_verify.3.gz OLD_FILES+=usr/share/man/man3/hx509_set_error_string.3.gz OLD_FILES+=usr/share/man/man3/hx509_set_error_stringv.3.gz OLD_FILES+=usr/share/man/man3/hx509_unparse_der_name.3.gz OLD_FILES+=usr/share/man/man3/hx509_validate_cert.3.gz OLD_FILES+=usr/share/man/man3/hx509_validate_ctx_add_flags.3.gz OLD_FILES+=usr/share/man/man3/hx509_validate_ctx_free.3.gz OLD_FILES+=usr/share/man/man3/hx509_validate_ctx_init.3.gz OLD_FILES+=usr/share/man/man3/hx509_validate_ctx_set_print.3.gz OLD_FILES+=usr/share/man/man3/hx509_verify.3.gz OLD_FILES+=usr/share/man/man3/hx509_verify_attach_anchors.3.gz OLD_FILES+=usr/share/man/man3/hx509_verify_attach_revoke.3.gz OLD_FILES+=usr/share/man/man3/hx509_verify_ctx_f_allow_default_trustanchors.3.gz OLD_FILES+=usr/share/man/man3/hx509_verify_destroy_ctx.3.gz OLD_FILES+=usr/share/man/man3/hx509_verify_hostname.3.gz OLD_FILES+=usr/share/man/man3/hx509_verify_init_ctx.3.gz OLD_FILES+=usr/share/man/man3/hx509_verify_path.3.gz OLD_FILES+=usr/share/man/man3/hx509_verify_set_max_depth.3.gz OLD_FILES+=usr/share/man/man3/hx509_verify_set_proxy_certificate.3.gz OLD_FILES+=usr/share/man/man3/hx509_verify_set_strict_rfc3280_verification.3.gz OLD_FILES+=usr/share/man/man3/hx509_verify_set_time.3.gz OLD_FILES+=usr/share/man/man3/hx509_verify_signature.3.gz OLD_FILES+=usr/share/man/man3/hx509_xfree.3.gz OLD_FILES+=usr/share/man/man3/k_afs_cell_of_file.3.gz OLD_FILES+=usr/share/man/man3/k_hasafs.3.gz OLD_FILES+=usr/share/man/man3/k_pioctl.3.gz OLD_FILES+=usr/share/man/man3/k_setpag.3.gz OLD_FILES+=usr/share/man/man3/k_unlog.3.gz OLD_FILES+=usr/share/man/man3/kadm5_pwcheck.3.gz OLD_FILES+=usr/share/man/man3/kafs.3.gz OLD_FILES+=usr/share/man/man3/kafs5.3.gz OLD_FILES+=usr/share/man/man3/kafs_set_verbose.3.gz OLD_FILES+=usr/share/man/man3/kafs_settoken.3.gz 
OLD_FILES+=usr/share/man/man3/kafs_settoken5.3.gz OLD_FILES+=usr/share/man/man3/kafs_settoken_rxkad.3.gz OLD_FILES+=usr/share/man/man3/krb5.3.gz OLD_FILES+=usr/share/man/man3/krb524_convert_creds_kdc.3.gz OLD_FILES+=usr/share/man/man3/krb524_convert_creds_kdc_ccache.3.gz OLD_FILES+=usr/share/man/man3/krb5_425_conv_principal.3.gz OLD_FILES+=usr/share/man/man3/krb5_425_conv_principal_ext.3.gz OLD_FILES+=usr/share/man/man3/krb5_524_conv_principal.3.gz OLD_FILES+=usr/share/man/man3/krb5_acc_ops.3.gz OLD_FILES+=usr/share/man/man3/krb5_acl_match_file.3.gz OLD_FILES+=usr/share/man/man3/krb5_acl_match_string.3.gz OLD_FILES+=usr/share/man/man3/krb5_add_et_list.3.gz OLD_FILES+=usr/share/man/man3/krb5_add_extra_addresses.3.gz OLD_FILES+=usr/share/man/man3/krb5_add_ignore_addresses.3.gz OLD_FILES+=usr/share/man/man3/krb5_addlog_dest.3.gz OLD_FILES+=usr/share/man/man3/krb5_addlog_func.3.gz OLD_FILES+=usr/share/man/man3/krb5_addr2sockaddr.3.gz OLD_FILES+=usr/share/man/man3/krb5_address.3.gz OLD_FILES+=usr/share/man/man3/krb5_address_compare.3.gz OLD_FILES+=usr/share/man/man3/krb5_address_order.3.gz OLD_FILES+=usr/share/man/man3/krb5_address_prefixlen_boundary.3.gz OLD_FILES+=usr/share/man/man3/krb5_address_search.3.gz OLD_FILES+=usr/share/man/man3/krb5_afslog.3.gz OLD_FILES+=usr/share/man/man3/krb5_afslog_uid.3.gz OLD_FILES+=usr/share/man/man3/krb5_allow_weak_crypto.3.gz OLD_FILES+=usr/share/man/man3/krb5_aname_to_localname.3.gz OLD_FILES+=usr/share/man/man3/krb5_anyaddr.3.gz OLD_FILES+=usr/share/man/man3/krb5_appdefault.3.gz OLD_FILES+=usr/share/man/man3/krb5_appdefault_boolean.3.gz OLD_FILES+=usr/share/man/man3/krb5_appdefault_string.3.gz OLD_FILES+=usr/share/man/man3/krb5_appdefault_time.3.gz OLD_FILES+=usr/share/man/man3/krb5_append_addresses.3.gz OLD_FILES+=usr/share/man/man3/krb5_auth.3.gz OLD_FILES+=usr/share/man/man3/krb5_auth_con_free.3.gz OLD_FILES+=usr/share/man/man3/krb5_auth_con_genaddrs.3.gz OLD_FILES+=usr/share/man/man3/krb5_auth_con_getaddrs.3.gz OLD_FILES+=usr/share/man/man3/krb5_auth_con_getflags.3.gz OLD_FILES+=usr/share/man/man3/krb5_auth_con_getkey.3.gz OLD_FILES+=usr/share/man/man3/krb5_auth_con_getlocalsubkey.3.gz OLD_FILES+=usr/share/man/man3/krb5_auth_con_getrcache.3.gz OLD_FILES+=usr/share/man/man3/krb5_auth_con_getremotesubkey.3.gz OLD_FILES+=usr/share/man/man3/krb5_auth_con_getuserkey.3.gz OLD_FILES+=usr/share/man/man3/krb5_auth_con_init.3.gz OLD_FILES+=usr/share/man/man3/krb5_auth_con_initivector.3.gz OLD_FILES+=usr/share/man/man3/krb5_auth_con_setaddrs.3.gz OLD_FILES+=usr/share/man/man3/krb5_auth_con_setaddrs_from_fd.3.gz OLD_FILES+=usr/share/man/man3/krb5_auth_con_setflags.3.gz OLD_FILES+=usr/share/man/man3/krb5_auth_con_setivector.3.gz OLD_FILES+=usr/share/man/man3/krb5_auth_con_setkey.3.gz OLD_FILES+=usr/share/man/man3/krb5_auth_con_setlocalsubkey.3.gz OLD_FILES+=usr/share/man/man3/krb5_auth_con_setrcache.3.gz OLD_FILES+=usr/share/man/man3/krb5_auth_con_setremotesubkey.3.gz OLD_FILES+=usr/share/man/man3/krb5_auth_con_setuserkey.3.gz OLD_FILES+=usr/share/man/man3/krb5_auth_context.3.gz OLD_FILES+=usr/share/man/man3/krb5_auth_getauthenticator.3.gz OLD_FILES+=usr/share/man/man3/krb5_auth_getcksumtype.3.gz OLD_FILES+=usr/share/man/man3/krb5_auth_getkeytype.3.gz OLD_FILES+=usr/share/man/man3/krb5_auth_getlocalseqnumber.3.gz OLD_FILES+=usr/share/man/man3/krb5_auth_getremoteseqnumber.3.gz OLD_FILES+=usr/share/man/man3/krb5_auth_setcksumtype.3.gz OLD_FILES+=usr/share/man/man3/krb5_auth_setkeytype.3.gz OLD_FILES+=usr/share/man/man3/krb5_auth_setlocalseqnumber.3.gz 
OLD_FILES+=usr/share/man/man3/krb5_auth_setremoteseqnumber.3.gz OLD_FILES+=usr/share/man/man3/krb5_build_principal.3.gz OLD_FILES+=usr/share/man/man3/krb5_build_principal_ext.3.gz OLD_FILES+=usr/share/man/man3/krb5_build_principal_va.3.gz OLD_FILES+=usr/share/man/man3/krb5_build_principal_va_ext.3.gz OLD_FILES+=usr/share/man/man3/krb5_c_enctype_compare.3.gz OLD_FILES+=usr/share/man/man3/krb5_c_make_checksum.3.gz OLD_FILES+=usr/share/man/man3/krb5_cc_cache_end_seq_get.3.gz OLD_FILES+=usr/share/man/man3/krb5_cc_cache_get_first.3.gz OLD_FILES+=usr/share/man/man3/krb5_cc_cache_match.3.gz OLD_FILES+=usr/share/man/man3/krb5_cc_cache_next.3.gz OLD_FILES+=usr/share/man/man3/krb5_cc_clear_mcred.3.gz OLD_FILES+=usr/share/man/man3/krb5_cc_close.3.gz OLD_FILES+=usr/share/man/man3/krb5_cc_copy_cache.3.gz OLD_FILES+=usr/share/man/man3/krb5_cc_copy_creds.3.gz OLD_FILES+=usr/share/man/man3/krb5_cc_copy_match_f.3.gz OLD_FILES+=usr/share/man/man3/krb5_cc_default.3.gz OLD_FILES+=usr/share/man/man3/krb5_cc_default_name.3.gz OLD_FILES+=usr/share/man/man3/krb5_cc_destroy.3.gz OLD_FILES+=usr/share/man/man3/krb5_cc_end_seq_get.3.gz OLD_FILES+=usr/share/man/man3/krb5_cc_gen_new.3.gz OLD_FILES+=usr/share/man/man3/krb5_cc_get_config.3.gz OLD_FILES+=usr/share/man/man3/krb5_cc_get_flags.3.gz OLD_FILES+=usr/share/man/man3/krb5_cc_get_friendly_name.3.gz OLD_FILES+=usr/share/man/man3/krb5_cc_get_full_name.3.gz OLD_FILES+=usr/share/man/man3/krb5_cc_get_kdc_offset.3.gz OLD_FILES+=usr/share/man/man3/krb5_cc_get_lifetime.3.gz OLD_FILES+=usr/share/man/man3/krb5_cc_get_name.3.gz OLD_FILES+=usr/share/man/man3/krb5_cc_get_ops.3.gz OLD_FILES+=usr/share/man/man3/krb5_cc_get_prefix_ops.3.gz OLD_FILES+=usr/share/man/man3/krb5_cc_get_principal.3.gz OLD_FILES+=usr/share/man/man3/krb5_cc_get_type.3.gz OLD_FILES+=usr/share/man/man3/krb5_cc_get_version.3.gz OLD_FILES+=usr/share/man/man3/krb5_cc_initialize.3.gz OLD_FILES+=usr/share/man/man3/krb5_cc_last_change_time.3.gz OLD_FILES+=usr/share/man/man3/krb5_cc_move.3.gz OLD_FILES+=usr/share/man/man3/krb5_cc_new_unique.3.gz OLD_FILES+=usr/share/man/man3/krb5_cc_next_cred.3.gz OLD_FILES+=usr/share/man/man3/krb5_cc_register.3.gz OLD_FILES+=usr/share/man/man3/krb5_cc_remove_cred.3.gz OLD_FILES+=usr/share/man/man3/krb5_cc_resolve.3.gz OLD_FILES+=usr/share/man/man3/krb5_cc_retrieve_cred.3.gz OLD_FILES+=usr/share/man/man3/krb5_cc_set_config.3.gz OLD_FILES+=usr/share/man/man3/krb5_cc_set_default_name.3.gz OLD_FILES+=usr/share/man/man3/krb5_cc_set_flags.3.gz OLD_FILES+=usr/share/man/man3/krb5_cc_set_friendly_name.3.gz OLD_FILES+=usr/share/man/man3/krb5_cc_set_kdc_offset.3.gz OLD_FILES+=usr/share/man/man3/krb5_cc_start_seq_get.3.gz OLD_FILES+=usr/share/man/man3/krb5_cc_store_cred.3.gz OLD_FILES+=usr/share/man/man3/krb5_cc_support_switch.3.gz OLD_FILES+=usr/share/man/man3/krb5_cc_switch.3.gz OLD_FILES+=usr/share/man/man3/krb5_ccache.3.gz OLD_FILES+=usr/share/man/man3/krb5_ccache_intro.3.gz OLD_FILES+=usr/share/man/man3/krb5_cccol_cursor_free.3.gz OLD_FILES+=usr/share/man/man3/krb5_cccol_cursor_new.3.gz OLD_FILES+=usr/share/man/man3/krb5_cccol_cursor_next.3.gz OLD_FILES+=usr/share/man/man3/krb5_cccol_last_change_time.3.gz OLD_FILES+=usr/share/man/man3/krb5_change_password.3.gz OLD_FILES+=usr/share/man/man3/krb5_check_transited.3.gz OLD_FILES+=usr/share/man/man3/krb5_checksum_is_collision_proof.3.gz OLD_FILES+=usr/share/man/man3/krb5_checksum_is_keyed.3.gz OLD_FILES+=usr/share/man/man3/krb5_checksumsize.3.gz OLD_FILES+=usr/share/man/man3/krb5_cksumtype_to_enctype.3.gz 
OLD_FILES+=usr/share/man/man3/krb5_clear_error_message.3.gz OLD_FILES+=usr/share/man/man3/krb5_clear_error_string.3.gz OLD_FILES+=usr/share/man/man3/krb5_closelog.3.gz OLD_FILES+=usr/share/man/man3/krb5_compare_creds.3.gz OLD_FILES+=usr/share/man/man3/krb5_config_file_free.3.gz OLD_FILES+=usr/share/man/man3/krb5_config_free_strings.3.gz OLD_FILES+=usr/share/man/man3/krb5_config_get_bool.3.gz OLD_FILES+=usr/share/man/man3/krb5_config_get_bool_default.3.gz OLD_FILES+=usr/share/man/man3/krb5_config_get_list.3.gz OLD_FILES+=usr/share/man/man3/krb5_config_get_string.3.gz OLD_FILES+=usr/share/man/man3/krb5_config_get_string_default.3.gz OLD_FILES+=usr/share/man/man3/krb5_config_get_strings.3.gz OLD_FILES+=usr/share/man/man3/krb5_config_get_time.3.gz OLD_FILES+=usr/share/man/man3/krb5_config_get_time_default.3.gz OLD_FILES+=usr/share/man/man3/krb5_config_parse_file_multi.3.gz OLD_FILES+=usr/share/man/man3/krb5_config_parse_string_multi.3.gz OLD_FILES+=usr/share/man/man3/krb5_config_vget_bool.3.gz OLD_FILES+=usr/share/man/man3/krb5_config_vget_bool_default.3.gz OLD_FILES+=usr/share/man/man3/krb5_config_vget_list.3.gz OLD_FILES+=usr/share/man/man3/krb5_config_vget_string.3.gz OLD_FILES+=usr/share/man/man3/krb5_config_vget_string_default.3.gz OLD_FILES+=usr/share/man/man3/krb5_config_vget_strings.3.gz OLD_FILES+=usr/share/man/man3/krb5_config_vget_time.3.gz OLD_FILES+=usr/share/man/man3/krb5_config_vget_time_default.3.gz OLD_FILES+=usr/share/man/man3/krb5_copy_address.3.gz OLD_FILES+=usr/share/man/man3/krb5_copy_addresses.3.gz OLD_FILES+=usr/share/man/man3/krb5_copy_context.3.gz OLD_FILES+=usr/share/man/man3/krb5_copy_creds.3.gz OLD_FILES+=usr/share/man/man3/krb5_copy_creds_contents.3.gz OLD_FILES+=usr/share/man/man3/krb5_copy_data.3.gz OLD_FILES+=usr/share/man/man3/krb5_copy_host_realm.3.gz OLD_FILES+=usr/share/man/man3/krb5_copy_keyblock.3.gz OLD_FILES+=usr/share/man/man3/krb5_copy_keyblock_contents.3.gz OLD_FILES+=usr/share/man/man3/krb5_copy_principal.3.gz OLD_FILES+=usr/share/man/man3/krb5_copy_ticket.3.gz OLD_FILES+=usr/share/man/man3/krb5_create_checksum.3.gz OLD_FILES+=usr/share/man/man3/krb5_create_checksum_iov.3.gz OLD_FILES+=usr/share/man/man3/krb5_credential.3.gz OLD_FILES+=usr/share/man/man3/krb5_creds.3.gz OLD_FILES+=usr/share/man/man3/krb5_creds_get_ticket_flags.3.gz OLD_FILES+=usr/share/man/man3/krb5_crypto.3.gz OLD_FILES+=usr/share/man/man3/krb5_crypto_destroy.3.gz OLD_FILES+=usr/share/man/man3/krb5_crypto_fx_cf2.3.gz OLD_FILES+=usr/share/man/man3/krb5_crypto_getblocksize.3.gz OLD_FILES+=usr/share/man/man3/krb5_crypto_getconfoundersize.3.gz OLD_FILES+=usr/share/man/man3/krb5_crypto_getenctype.3.gz OLD_FILES+=usr/share/man/man3/krb5_crypto_getpadsize.3.gz OLD_FILES+=usr/share/man/man3/krb5_crypto_init.3.gz OLD_FILES+=usr/share/man/man3/krb5_crypto_iov.3.gz OLD_FILES+=usr/share/man/man3/krb5_data_alloc.3.gz OLD_FILES+=usr/share/man/man3/krb5_data_cmp.3.gz OLD_FILES+=usr/share/man/man3/krb5_data_copy.3.gz OLD_FILES+=usr/share/man/man3/krb5_data_ct_cmp.3.gz OLD_FILES+=usr/share/man/man3/krb5_data_free.3.gz OLD_FILES+=usr/share/man/man3/krb5_data_realloc.3.gz OLD_FILES+=usr/share/man/man3/krb5_data_zero.3.gz OLD_FILES+=usr/share/man/man3/krb5_decrypt.3.gz OLD_FILES+=usr/share/man/man3/krb5_decrypt_EncryptedData.3.gz OLD_FILES+=usr/share/man/man3/krb5_decrypt_iov_ivec.3.gz OLD_FILES+=usr/share/man/man3/krb5_deprecated.3.gz OLD_FILES+=usr/share/man/man3/krb5_digest.3.gz OLD_FILES+=usr/share/man/man3/krb5_digest_probe.3.gz OLD_FILES+=usr/share/man/man3/krb5_eai_to_heim_errno.3.gz 
OLD_FILES+=usr/share/man/man3/krb5_encrypt.3.gz OLD_FILES+=usr/share/man/man3/krb5_encrypt_EncryptedData.3.gz OLD_FILES+=usr/share/man/man3/krb5_encrypt_iov_ivec.3.gz OLD_FILES+=usr/share/man/man3/krb5_enctype_disable.3.gz OLD_FILES+=usr/share/man/man3/krb5_enctype_enable.3.gz OLD_FILES+=usr/share/man/man3/krb5_enctype_valid.3.gz OLD_FILES+=usr/share/man/man3/krb5_enctypes_compatible_keys.3.gz OLD_FILES+=usr/share/man/man3/krb5_error.3.gz OLD_FILES+=usr/share/man/man3/krb5_expand_hostname.3.gz OLD_FILES+=usr/share/man/man3/krb5_expand_hostname_realms.3.gz OLD_FILES+=usr/share/man/man3/krb5_fcc_ops.3.gz OLD_FILES+=usr/share/man/man3/krb5_fileformats.3.gz OLD_FILES+=usr/share/man/man3/krb5_find_padata.3.gz OLD_FILES+=usr/share/man/man3/krb5_free_address.3.gz OLD_FILES+=usr/share/man/man3/krb5_free_addresses.3.gz OLD_FILES+=usr/share/man/man3/krb5_free_config_files.3.gz OLD_FILES+=usr/share/man/man3/krb5_free_context.3.gz OLD_FILES+=usr/share/man/man3/krb5_free_cred_contents.3.gz OLD_FILES+=usr/share/man/man3/krb5_free_creds.3.gz OLD_FILES+=usr/share/man/man3/krb5_free_creds_contents.3.gz OLD_FILES+=usr/share/man/man3/krb5_free_data.3.gz OLD_FILES+=usr/share/man/man3/krb5_free_data_contents.3.gz OLD_FILES+=usr/share/man/man3/krb5_free_error_string.3.gz OLD_FILES+=usr/share/man/man3/krb5_free_host_realm.3.gz OLD_FILES+=usr/share/man/man3/krb5_free_keyblock.3.gz OLD_FILES+=usr/share/man/man3/krb5_free_keyblock_contents.3.gz OLD_FILES+=usr/share/man/man3/krb5_free_krbhst.3.gz OLD_FILES+=usr/share/man/man3/krb5_free_principal.3.gz OLD_FILES+=usr/share/man/man3/krb5_free_ticket.3.gz OLD_FILES+=usr/share/man/man3/krb5_free_unparsed_name.3.gz OLD_FILES+=usr/share/man/man3/krb5_fwd_tgt_creds.3.gz OLD_FILES+=usr/share/man/man3/krb5_generate_random_block.3.gz OLD_FILES+=usr/share/man/man3/krb5_generate_subkey.3.gz OLD_FILES+=usr/share/man/man3/krb5_generate_subkey_extended.3.gz OLD_FILES+=usr/share/man/man3/krb5_get_all_client_addrs.3.gz OLD_FILES+=usr/share/man/man3/krb5_get_all_server_addrs.3.gz OLD_FILES+=usr/share/man/man3/krb5_get_cred_from_kdc.3.gz OLD_FILES+=usr/share/man/man3/krb5_get_cred_from_kdc_opt.3.gz OLD_FILES+=usr/share/man/man3/krb5_get_credentials.3.gz OLD_FILES+=usr/share/man/man3/krb5_get_creds.3.gz OLD_FILES+=usr/share/man/man3/krb5_get_default_config_files.3.gz OLD_FILES+=usr/share/man/man3/krb5_get_default_in_tkt_etypes.3.gz OLD_FILES+=usr/share/man/man3/krb5_get_default_principal.3.gz OLD_FILES+=usr/share/man/man3/krb5_get_default_realm.3.gz OLD_FILES+=usr/share/man/man3/krb5_get_default_realms.3.gz OLD_FILES+=usr/share/man/man3/krb5_get_dns_canonicalize_hostname.3.gz OLD_FILES+=usr/share/man/man3/krb5_get_extra_addresses.3.gz OLD_FILES+=usr/share/man/man3/krb5_get_fcache_version.3.gz OLD_FILES+=usr/share/man/man3/krb5_get_forwarded_creds.3.gz OLD_FILES+=usr/share/man/man3/krb5_get_host_realm.3.gz OLD_FILES+=usr/share/man/man3/krb5_get_ignore_addresses.3.gz OLD_FILES+=usr/share/man/man3/krb5_get_in_cred.3.gz OLD_FILES+=usr/share/man/man3/krb5_get_in_tkt_with_keytab.3.gz OLD_FILES+=usr/share/man/man3/krb5_get_in_tkt_with_password.3.gz OLD_FILES+=usr/share/man/man3/krb5_get_in_tkt_with_skey.3.gz OLD_FILES+=usr/share/man/man3/krb5_get_init_creds.3.gz OLD_FILES+=usr/share/man/man3/krb5_get_init_creds_keyblock.3.gz OLD_FILES+=usr/share/man/man3/krb5_get_init_creds_keytab.3.gz OLD_FILES+=usr/share/man/man3/krb5_get_init_creds_opt_alloc.3.gz OLD_FILES+=usr/share/man/man3/krb5_get_init_creds_opt_free.3.gz OLD_FILES+=usr/share/man/man3/krb5_get_init_creds_opt_get_error.3.gz 
OLD_FILES+=usr/share/man/man3/krb5_get_init_creds_opt_init.3.gz OLD_FILES+=usr/share/man/man3/krb5_get_init_creds_password.3.gz OLD_FILES+=usr/share/man/man3/krb5_get_kdc_sec_offset.3.gz OLD_FILES+=usr/share/man/man3/krb5_get_krb524hst.3.gz OLD_FILES+=usr/share/man/man3/krb5_get_krb_admin_hst.3.gz OLD_FILES+=usr/share/man/man3/krb5_get_krb_changepw_hst.3.gz OLD_FILES+=usr/share/man/man3/krb5_get_krbhst.3.gz OLD_FILES+=usr/share/man/man3/krb5_get_max_time_skew.3.gz OLD_FILES+=usr/share/man/man3/krb5_get_use_admin_kdc.3.gz OLD_FILES+=usr/share/man/man3/krb5_get_validated_creds.3.gz OLD_FILES+=usr/share/man/man3/krb5_getportbyname.3.gz OLD_FILES+=usr/share/man/man3/krb5_h_addr2addr.3.gz OLD_FILES+=usr/share/man/man3/krb5_h_addr2sockaddr.3.gz OLD_FILES+=usr/share/man/man3/krb5_h_errno_to_heim_errno.3.gz OLD_FILES+=usr/share/man/man3/krb5_init_context.3.gz OLD_FILES+=usr/share/man/man3/krb5_init_creds_free.3.gz OLD_FILES+=usr/share/man/man3/krb5_init_creds_get.3.gz OLD_FILES+=usr/share/man/man3/krb5_init_creds_get_error.3.gz OLD_FILES+=usr/share/man/man3/krb5_init_creds_init.3.gz OLD_FILES+=usr/share/man/man3/krb5_init_creds_intro.3.gz OLD_FILES+=usr/share/man/man3/krb5_init_creds_set_keytab.3.gz OLD_FILES+=usr/share/man/man3/krb5_init_creds_set_password.3.gz OLD_FILES+=usr/share/man/man3/krb5_init_creds_set_service.3.gz OLD_FILES+=usr/share/man/man3/krb5_init_creds_step.3.gz OLD_FILES+=usr/share/man/man3/krb5_init_ets.3.gz OLD_FILES+=usr/share/man/man3/krb5_initlog.3.gz OLD_FILES+=usr/share/man/man3/krb5_introduction.3.gz OLD_FILES+=usr/share/man/man3/krb5_is_config_principal.3.gz OLD_FILES+=usr/share/man/man3/krb5_is_thread_safe.3.gz OLD_FILES+=usr/share/man/man3/krb5_kerberos_enctypes.3.gz OLD_FILES+=usr/share/man/man3/krb5_keyblock_get_enctype.3.gz OLD_FILES+=usr/share/man/man3/krb5_keyblock_init.3.gz OLD_FILES+=usr/share/man/man3/krb5_keyblock_zero.3.gz OLD_FILES+=usr/share/man/man3/krb5_keytab.3.gz OLD_FILES+=usr/share/man/man3/krb5_keytab_intro.3.gz OLD_FILES+=usr/share/man/man3/krb5_keytab_key_proc.3.gz OLD_FILES+=usr/share/man/man3/krb5_keytype_to_enctypes.3.gz OLD_FILES+=usr/share/man/man3/krb5_keytype_to_enctypes_default.3.gz OLD_FILES+=usr/share/man/man3/krb5_keytype_to_string.3.gz OLD_FILES+=usr/share/man/man3/krb5_krbhst_format_string.3.gz OLD_FILES+=usr/share/man/man3/krb5_krbhst_free.3.gz OLD_FILES+=usr/share/man/man3/krb5_krbhst_get_addrinfo.3.gz OLD_FILES+=usr/share/man/man3/krb5_krbhst_init.3.gz OLD_FILES+=usr/share/man/man3/krb5_krbhst_next.3.gz OLD_FILES+=usr/share/man/man3/krb5_krbhst_next_as_string.3.gz OLD_FILES+=usr/share/man/man3/krb5_krbhst_reset.3.gz OLD_FILES+=usr/share/man/man3/krb5_kt_add_entry.3.gz OLD_FILES+=usr/share/man/man3/krb5_kt_close.3.gz OLD_FILES+=usr/share/man/man3/krb5_kt_compare.3.gz OLD_FILES+=usr/share/man/man3/krb5_kt_copy_entry_contents.3.gz OLD_FILES+=usr/share/man/man3/krb5_kt_default.3.gz OLD_FILES+=usr/share/man/man3/krb5_kt_default_modify_name.3.gz OLD_FILES+=usr/share/man/man3/krb5_kt_default_name.3.gz OLD_FILES+=usr/share/man/man3/krb5_kt_destroy.3.gz OLD_FILES+=usr/share/man/man3/krb5_kt_end_seq_get.3.gz OLD_FILES+=usr/share/man/man3/krb5_kt_free_entry.3.gz OLD_FILES+=usr/share/man/man3/krb5_kt_get_entry.3.gz OLD_FILES+=usr/share/man/man3/krb5_kt_get_full_name.3.gz OLD_FILES+=usr/share/man/man3/krb5_kt_get_name.3.gz OLD_FILES+=usr/share/man/man3/krb5_kt_get_type.3.gz OLD_FILES+=usr/share/man/man3/krb5_kt_have_content.3.gz OLD_FILES+=usr/share/man/man3/krb5_kt_next_entry.3.gz OLD_FILES+=usr/share/man/man3/krb5_kt_read_service_key.3.gz 
OLD_FILES+=usr/share/man/man3/krb5_kt_register.3.gz OLD_FILES+=usr/share/man/man3/krb5_kt_remove_entry.3.gz OLD_FILES+=usr/share/man/man3/krb5_kt_resolve.3.gz OLD_FILES+=usr/share/man/man3/krb5_kt_start_seq_get.3.gz OLD_FILES+=usr/share/man/man3/krb5_kuserok.3.gz OLD_FILES+=usr/share/man/man3/krb5_log.3.gz OLD_FILES+=usr/share/man/man3/krb5_log_msg.3.gz OLD_FILES+=usr/share/man/man3/krb5_make_addrport.3.gz OLD_FILES+=usr/share/man/man3/krb5_make_principal.3.gz OLD_FILES+=usr/share/man/man3/krb5_max_sockaddr_size.3.gz OLD_FILES+=usr/share/man/man3/krb5_mcc_ops.3.gz OLD_FILES+=usr/share/man/man3/krb5_mk_req.3.gz OLD_FILES+=usr/share/man/man3/krb5_mk_safe.3.gz OLD_FILES+=usr/share/man/man3/krb5_openlog.3.gz OLD_FILES+=usr/share/man/man3/krb5_pac.3.gz OLD_FILES+=usr/share/man/man3/krb5_pac_get_buffer.3.gz OLD_FILES+=usr/share/man/man3/krb5_pac_verify.3.gz OLD_FILES+=usr/share/man/man3/krb5_parse_address.3.gz OLD_FILES+=usr/share/man/man3/krb5_parse_name.3.gz OLD_FILES+=usr/share/man/man3/krb5_parse_name_flags.3.gz OLD_FILES+=usr/share/man/man3/krb5_parse_nametype.3.gz OLD_FILES+=usr/share/man/man3/krb5_password_key_proc.3.gz OLD_FILES+=usr/share/man/man3/krb5_plugin_register.3.gz OLD_FILES+=usr/share/man/man3/krb5_prepend_config_files_default.3.gz OLD_FILES+=usr/share/man/man3/krb5_princ_realm.3.gz OLD_FILES+=usr/share/man/man3/krb5_princ_set_realm.3.gz OLD_FILES+=usr/share/man/man3/krb5_principal.3.gz OLD_FILES+=usr/share/man/man3/krb5_principal_compare.3.gz OLD_FILES+=usr/share/man/man3/krb5_principal_compare_any_realm.3.gz OLD_FILES+=usr/share/man/man3/krb5_principal_get_comp_string.3.gz OLD_FILES+=usr/share/man/man3/krb5_principal_get_num_comp.3.gz OLD_FILES+=usr/share/man/man3/krb5_principal_get_realm.3.gz OLD_FILES+=usr/share/man/man3/krb5_principal_get_type.3.gz OLD_FILES+=usr/share/man/man3/krb5_principal_intro.3.gz OLD_FILES+=usr/share/man/man3/krb5_principal_is_krbtgt.3.gz OLD_FILES+=usr/share/man/man3/krb5_principal_match.3.gz OLD_FILES+=usr/share/man/man3/krb5_principal_set_realm.3.gz OLD_FILES+=usr/share/man/man3/krb5_principal_set_type.3.gz OLD_FILES+=usr/share/man/man3/krb5_print_address.3.gz OLD_FILES+=usr/share/man/man3/krb5_random_to_key.3.gz OLD_FILES+=usr/share/man/man3/krb5_rcache.3.gz OLD_FILES+=usr/share/man/man3/krb5_rd_error.3.gz OLD_FILES+=usr/share/man/man3/krb5_rd_req_ctx.3.gz OLD_FILES+=usr/share/man/man3/krb5_rd_req_in_ctx_alloc.3.gz OLD_FILES+=usr/share/man/man3/krb5_rd_req_in_set_keytab.3.gz OLD_FILES+=usr/share/man/man3/krb5_rd_req_in_set_pac_check.3.gz OLD_FILES+=usr/share/man/man3/krb5_rd_req_out_ctx_free.3.gz OLD_FILES+=usr/share/man/man3/krb5_rd_req_out_get_server.3.gz OLD_FILES+=usr/share/man/man3/krb5_rd_safe.3.gz OLD_FILES+=usr/share/man/man3/krb5_realm_compare.3.gz OLD_FILES+=usr/share/man/man3/krb5_ret_address.3.gz OLD_FILES+=usr/share/man/man3/krb5_ret_addrs.3.gz OLD_FILES+=usr/share/man/man3/krb5_ret_authdata.3.gz OLD_FILES+=usr/share/man/man3/krb5_ret_creds.3.gz OLD_FILES+=usr/share/man/man3/krb5_ret_creds_tag.3.gz OLD_FILES+=usr/share/man/man3/krb5_ret_data.3.gz OLD_FILES+=usr/share/man/man3/krb5_ret_int16.3.gz OLD_FILES+=usr/share/man/man3/krb5_ret_int32.3.gz OLD_FILES+=usr/share/man/man3/krb5_ret_int8.3.gz OLD_FILES+=usr/share/man/man3/krb5_ret_keyblock.3.gz OLD_FILES+=usr/share/man/man3/krb5_ret_principal.3.gz OLD_FILES+=usr/share/man/man3/krb5_ret_string.3.gz OLD_FILES+=usr/share/man/man3/krb5_ret_stringz.3.gz OLD_FILES+=usr/share/man/man3/krb5_ret_times.3.gz OLD_FILES+=usr/share/man/man3/krb5_ret_uint16.3.gz 
OLD_FILES+=usr/share/man/man3/krb5_ret_uint32.3.gz OLD_FILES+=usr/share/man/man3/krb5_ret_uint8.3.gz OLD_FILES+=usr/share/man/man3/krb5_set_config_files.3.gz OLD_FILES+=usr/share/man/man3/krb5_set_default_in_tkt_etypes.3.gz OLD_FILES+=usr/share/man/man3/krb5_set_default_realm.3.gz OLD_FILES+=usr/share/man/man3/krb5_set_dns_canonicalize_hostname.3.gz OLD_FILES+=usr/share/man/man3/krb5_set_error_message.3.gz OLD_FILES+=usr/share/man/man3/krb5_set_error_string.3.gz OLD_FILES+=usr/share/man/man3/krb5_set_extra_addresses.3.gz OLD_FILES+=usr/share/man/man3/krb5_set_fcache_version.3.gz OLD_FILES+=usr/share/man/man3/krb5_set_home_dir_access.3.gz OLD_FILES+=usr/share/man/man3/krb5_set_ignore_addresses.3.gz OLD_FILES+=usr/share/man/man3/krb5_set_kdc_sec_offset.3.gz OLD_FILES+=usr/share/man/man3/krb5_set_max_time_skew.3.gz OLD_FILES+=usr/share/man/man3/krb5_set_password.3.gz OLD_FILES+=usr/share/man/man3/krb5_set_real_time.3.gz OLD_FILES+=usr/share/man/man3/krb5_set_use_admin_kdc.3.gz OLD_FILES+=usr/share/man/man3/krb5_sname_to_principal.3.gz OLD_FILES+=usr/share/man/man3/krb5_sock_to_principal.3.gz OLD_FILES+=usr/share/man/man3/krb5_sockaddr2address.3.gz OLD_FILES+=usr/share/man/man3/krb5_sockaddr2port.3.gz OLD_FILES+=usr/share/man/man3/krb5_sockaddr_uninteresting.3.gz OLD_FILES+=usr/share/man/man3/krb5_storage.3.gz OLD_FILES+=usr/share/man/man3/krb5_storage_clear_flags.3.gz OLD_FILES+=usr/share/man/man3/krb5_storage_emem.3.gz OLD_FILES+=usr/share/man/man3/krb5_storage_free.3.gz OLD_FILES+=usr/share/man/man3/krb5_storage_from_data.3.gz OLD_FILES+=usr/share/man/man3/krb5_storage_from_fd.3.gz OLD_FILES+=usr/share/man/man3/krb5_storage_from_mem.3.gz OLD_FILES+=usr/share/man/man3/krb5_storage_from_readonly_mem.3.gz OLD_FILES+=usr/share/man/man3/krb5_storage_get_byteorder.3.gz OLD_FILES+=usr/share/man/man3/krb5_storage_get_eof_code.3.gz OLD_FILES+=usr/share/man/man3/krb5_storage_is_flags.3.gz OLD_FILES+=usr/share/man/man3/krb5_storage_read.3.gz OLD_FILES+=usr/share/man/man3/krb5_storage_seek.3.gz OLD_FILES+=usr/share/man/man3/krb5_storage_set_byteorder.3.gz OLD_FILES+=usr/share/man/man3/krb5_storage_set_eof_code.3.gz OLD_FILES+=usr/share/man/man3/krb5_storage_set_flags.3.gz OLD_FILES+=usr/share/man/man3/krb5_storage_set_max_alloc.3.gz OLD_FILES+=usr/share/man/man3/krb5_storage_to_data.3.gz OLD_FILES+=usr/share/man/man3/krb5_storage_truncate.3.gz OLD_FILES+=usr/share/man/man3/krb5_storage_write.3.gz OLD_FILES+=usr/share/man/man3/krb5_store_address.3.gz OLD_FILES+=usr/share/man/man3/krb5_store_addrs.3.gz OLD_FILES+=usr/share/man/man3/krb5_store_authdata.3.gz OLD_FILES+=usr/share/man/man3/krb5_store_creds.3.gz OLD_FILES+=usr/share/man/man3/krb5_store_creds_tag.3.gz OLD_FILES+=usr/share/man/man3/krb5_store_data.3.gz OLD_FILES+=usr/share/man/man3/krb5_store_int16.3.gz OLD_FILES+=usr/share/man/man3/krb5_store_int32.3.gz OLD_FILES+=usr/share/man/man3/krb5_store_int8.3.gz OLD_FILES+=usr/share/man/man3/krb5_store_keyblock.3.gz OLD_FILES+=usr/share/man/man3/krb5_store_principal.3.gz OLD_FILES+=usr/share/man/man3/krb5_store_string.3.gz OLD_FILES+=usr/share/man/man3/krb5_store_stringz.3.gz OLD_FILES+=usr/share/man/man3/krb5_store_times.3.gz OLD_FILES+=usr/share/man/man3/krb5_store_uint16.3.gz OLD_FILES+=usr/share/man/man3/krb5_store_uint32.3.gz OLD_FILES+=usr/share/man/man3/krb5_store_uint8.3.gz OLD_FILES+=usr/share/man/man3/krb5_string_to_key.3.gz OLD_FILES+=usr/share/man/man3/krb5_string_to_keytype.3.gz OLD_FILES+=usr/share/man/man3/krb5_support.3.gz OLD_FILES+=usr/share/man/man3/krb5_ticket.3.gz 
OLD_FILES+=usr/share/man/man3/krb5_ticket_get_authorization_data_type.3.gz OLD_FILES+=usr/share/man/man3/krb5_ticket_get_client.3.gz OLD_FILES+=usr/share/man/man3/krb5_ticket_get_endtime.3.gz OLD_FILES+=usr/share/man/man3/krb5_ticket_get_flags.3.gz OLD_FILES+=usr/share/man/man3/krb5_ticket_get_server.3.gz OLD_FILES+=usr/share/man/man3/krb5_timeofday.3.gz OLD_FILES+=usr/share/man/man3/krb5_unparse_name.3.gz OLD_FILES+=usr/share/man/man3/krb5_unparse_name_fixed.3.gz OLD_FILES+=usr/share/man/man3/krb5_unparse_name_fixed_flags.3.gz OLD_FILES+=usr/share/man/man3/krb5_unparse_name_fixed_short.3.gz OLD_FILES+=usr/share/man/man3/krb5_unparse_name_flags.3.gz OLD_FILES+=usr/share/man/man3/krb5_unparse_name_short.3.gz OLD_FILES+=usr/share/man/man3/krb5_us_timeofday.3.gz OLD_FILES+=usr/share/man/man3/krb5_v4compat.3.gz OLD_FILES+=usr/share/man/man3/krb5_verify_checksum.3.gz OLD_FILES+=usr/share/man/man3/krb5_verify_checksum_iov.3.gz OLD_FILES+=usr/share/man/man3/krb5_verify_init_creds.3.gz OLD_FILES+=usr/share/man/man3/krb5_verify_opt_init.3.gz OLD_FILES+=usr/share/man/man3/krb5_verify_opt_set_flags.3.gz OLD_FILES+=usr/share/man/man3/krb5_verify_opt_set_keytab.3.gz OLD_FILES+=usr/share/man/man3/krb5_verify_opt_set_secure.3.gz OLD_FILES+=usr/share/man/man3/krb5_verify_opt_set_service.3.gz OLD_FILES+=usr/share/man/man3/krb5_verify_user.3.gz OLD_FILES+=usr/share/man/man3/krb5_verify_user_lrealm.3.gz OLD_FILES+=usr/share/man/man3/krb5_verify_user_opt.3.gz OLD_FILES+=usr/share/man/man3/krb5_vlog.3.gz OLD_FILES+=usr/share/man/man3/krb5_vlog_msg.3.gz OLD_FILES+=usr/share/man/man3/krb5_vset_error_string.3.gz OLD_FILES+=usr/share/man/man3/krb5_vwarn.3.gz OLD_FILES+=usr/share/man/man3/krb_afslog.3.gz OLD_FILES+=usr/share/man/man3/krb_afslog_uid.3.gz OLD_FILES+=usr/share/man/man3/ntlm_buf.3.gz OLD_FILES+=usr/share/man/man3/ntlm_core.3.gz OLD_FILES+=usr/share/man/man3/ntlm_type1.3.gz OLD_FILES+=usr/share/man/man3/ntlm_type2.3.gz OLD_FILES+=usr/share/man/man3/ntlm_type3.3.gz OLD_FILES+=usr/share/man/man5/krb5.conf.5.gz OLD_FILES+=usr/share/man/man8/hprop.8.gz OLD_FILES+=usr/share/man/man8/hpropd.8.gz OLD_FILES+=usr/share/man/man8/iprop-log.8.gz OLD_FILES+=usr/share/man/man8/iprop.8.gz OLD_FILES+=usr/share/man/man8/kadmin.8.gz OLD_FILES+=usr/share/man/man8/kadmind.8.gz OLD_FILES+=usr/share/man/man8/kcm.8.gz OLD_FILES+=usr/share/man/man8/kdc.8.gz OLD_FILES+=usr/share/man/man8/kdigest.8.gz OLD_FILES+=usr/share/man/man8/kerberos.8.gz OLD_FILES+=usr/share/man/man8/kimpersonate.8.gz OLD_FILES+=usr/share/man/man8/kpasswdd.8.gz OLD_FILES+=usr/share/man/man8/kstash.8.gz OLD_FILES+=usr/share/man/man8/ktutil.8.gz OLD_FILES+=usr/share/man/man8/pam_krb5.8.gz OLD_FILES+=usr/share/man/man8/pam_ksu.8.gz OLD_FILES+=usr/share/man/man8/string2key.8.gz OLD_FILES+=usr/share/man/man8/verify_krb5_conf.8.gz .endif .if ${MK_KERBEROS_SUPPORT} == no OLD_FILES+=usr/bin/compile_et OLD_FILES+=usr/include/com_err.h OLD_FILES+=usr/include/com_right.h OLD_FILES+=usr/lib/libcom_err.a OLD_FILES+=usr/lib/libcom_err.so OLD_LIBS+=usr/lib/libcom_err.so.5 OLD_FILES+=usr/lib/libcom_err_p.a OLD_FILES+=usr/lib32/libcom_err.a OLD_FILES+=usr/lib32/libcom_err.so OLD_LIBS+=usr/lib32/libcom_err.so.5 OLD_FILES+=usr/lib32/libcom_err_p.a OLD_FILES+=usr/share/man/man1/compile_et.1.gz OLD_FILES+=usr/share/man/man3/com_err.3.gz .endif .if ${MK_LDNS} == no OLD_FILES+=usr/lib/private/libldns.a OLD_FILES+=usr/lib/private/libldns.so OLD_LIBS+=usr/lib/private/libldns.so.5 OLD_FILES+=usr/lib/private/libldns_p.a .if ${TARGET_ARCH} == "amd64" || ${TARGET_ARCH} == "powerpc64" 
OLD_FILES+=usr/lib32/private/libldns.a OLD_FILES+=usr/lib32/private/libldns.so OLD_LIBS+=usr/lib32/private/libldns.so.5 OLD_FILES+=usr/lib32/private/libldns_p.a .endif .endif .if ${MK_LDNS_UTILS} == no OLD_FILES+=usr/bin/drill OLD_FILES+=usr/share/man/man1/drill.1.gz OLD_FILES+=usr/bin/host OLD_FILES+=usr/share/man/man1/host.1.gz .endif .if ${MK_LEGACY_CONSOLE} == no +OLD_FILES+=etc/rc.d/moused +OLD_FILES+=etc/rc.d/syscons OLD_FILES+=usr/sbin/kbdcontrol OLD_FILES+=usr/sbin/kbdmap OLD_FILES+=usr/sbin/moused OLD_FILES+=usr/sbin/vidcontrol OLD_FILES+=usr/sbin/vidfont OLD_FILES+=usr/share/man/man1/kbdcontrol.1.gz OLD_FILES+=usr/share/man/man1/kbdmap.1.gz OLD_FILES+=usr/share/man/man1/vidcontrol.1.gz OLD_FILES+=usr/share/man/man1/vidfont.1.gz OLD_FILES+=usr/share/man/man5/kbdmap.5.gz OLD_FILES+=usr/share/man/man5/keymap.5.gz OLD_FILES+=usr/share/man/man8/moused.8.gz .endif .if ${MK_LIB32} == no OLD_FILES+=etc/mtree/BSD.lib32.dist OLD_FILES+=libexec/ld-elf32.so.1 . if exists(${DESTDIR}/usr/lib32) LIB32_DIRS!=find ${DESTDIR}/usr/lib32 -type d \ | sed -e 's,^${DESTDIR}/,,'; echo LIB32_FILES!=find ${DESTDIR}/usr/lib32 \! -type d \ \! -name "lib*.so*" | sed -e 's,^${DESTDIR}/,,'; echo LIB32_LIBS!=find ${DESTDIR}/usr/lib32 \! -type d \ -name "lib*.so*" | sed -e 's,^${DESTDIR}/,,'; echo OLD_DIRS+=${LIB32_DIRS} OLD_FILES+=${LIB32_FILES} OLD_LIBS+=${LIB32_LIBS} . endif . if ${MK_DEBUG_FILES} == no . if exists(${DESTDIR}/usr/lib/debug/usr/lib32) DEBUG_LIB32_DIRS!=find ${DESTDIR}/usr/lib/debug/usr/lib32 -type d \ | sed -e 's,^${DESTDIR}/,,'; echo DEBUG_LIB32_FILES!=find ${DESTDIR}/usr/lib/debug/usr/lib32 \! -type d \ \! -name "lib*.so*" | sed -e 's,^${DESTDIR}/,,'; echo DEBUG_LIB32_LIBS!=find ${DESTDIR}/usr/lib/debug/usr/lib32 \! -type d \ -name "lib*.so*" | sed -e 's,^${DESTDIR}/,,'; echo OLD_DIRS+=${DEBUG_LIB32_DIRS} OLD_FILES+=${DEBUG_LIB32_FILES} OLD_LIBS+=${DEBUG_LIB32_LIBS} . endif . 
endif .endif .if ${MK_LIBCPLUSPLUS} == no OLD_LIBS+=lib/libcxxrt.so.1 OLD_FILES+=usr/lib/libc++.a OLD_FILES+=usr/lib/libc++_p.a OLD_FILES+=usr/lib/libc++experimental.a OLD_FILES+=usr/lib/libc++fs.a OLD_FILES+=usr/lib/libc++.so OLD_LIBS+=usr/lib/libc++.so.1 OLD_FILES+=usr/lib/libcxxrt.a OLD_FILES+=usr/lib/libcxxrt.so OLD_FILES+=usr/lib/libcxxrt_p.a OLD_FILES+=usr/include/c++/v1/__bit_reference OLD_FILES+=usr/include/c++/v1/__bsd_locale_defaults.h OLD_FILES+=usr/include/c++/v1/__bsd_locale_fallbacks.h OLD_FILES+=usr/include/c++/v1/__config OLD_FILES+=usr/include/c++/v1/__debug OLD_FILES+=usr/include/c++/v1/__errc OLD_FILES+=usr/include/c++/v1/__functional_03 OLD_FILES+=usr/include/c++/v1/__functional_base OLD_FILES+=usr/include/c++/v1/__functional_base_03 OLD_FILES+=usr/include/c++/v1/__hash_table OLD_FILES+=usr/include/c++/v1/__libcpp_version OLD_FILES+=usr/include/c++/v1/__locale OLD_FILES+=usr/include/c++/v1/__mutex_base OLD_FILES+=usr/include/c++/v1/__node_handle OLD_FILES+=usr/include/c++/v1/__nullptr OLD_FILES+=usr/include/c++/v1/__split_buffer OLD_FILES+=usr/include/c++/v1/__sso_allocator OLD_FILES+=usr/include/c++/v1/__std_stream OLD_FILES+=usr/include/c++/v1/__string OLD_FILES+=usr/include/c++/v1/__threading_support OLD_FILES+=usr/include/c++/v1/__tree OLD_FILES+=usr/include/c++/v1/__tuple OLD_FILES+=usr/include/c++/v1/__undef_macros OLD_FILES+=usr/include/c++/v1/algorithm OLD_FILES+=usr/include/c++/v1/any OLD_FILES+=usr/include/c++/v1/array OLD_FILES+=usr/include/c++/v1/atomic OLD_FILES+=usr/include/c++/v1/bitset OLD_FILES+=usr/include/c++/v1/cassert OLD_FILES+=usr/include/c++/v1/ccomplex OLD_FILES+=usr/include/c++/v1/cctype OLD_FILES+=usr/include/c++/v1/cerrno OLD_FILES+=usr/include/c++/v1/cfenv OLD_FILES+=usr/include/c++/v1/cfloat OLD_FILES+=usr/include/c++/v1/charconv OLD_FILES+=usr/include/c++/v1/chrono OLD_FILES+=usr/include/c++/v1/cinttypes OLD_FILES+=usr/include/c++/v1/ciso646 OLD_FILES+=usr/include/c++/v1/climits OLD_FILES+=usr/include/c++/v1/clocale OLD_FILES+=usr/include/c++/v1/cmath OLD_FILES+=usr/include/c++/v1/codecvt OLD_FILES+=usr/include/c++/v1/compare OLD_FILES+=usr/include/c++/v1/complex OLD_FILES+=usr/include/c++/v1/complex.h OLD_FILES+=usr/include/c++/v1/condition_variable OLD_FILES+=usr/include/c++/v1/csetjmp OLD_FILES+=usr/include/c++/v1/csignal OLD_FILES+=usr/include/c++/v1/cstdarg OLD_FILES+=usr/include/c++/v1/cstdbool OLD_FILES+=usr/include/c++/v1/cstddef OLD_FILES+=usr/include/c++/v1/cstdint OLD_FILES+=usr/include/c++/v1/cstdio OLD_FILES+=usr/include/c++/v1/cstdlib OLD_FILES+=usr/include/c++/v1/cstring OLD_FILES+=usr/include/c++/v1/ctgmath OLD_FILES+=usr/include/c++/v1/ctime OLD_FILES+=usr/include/c++/v1/ctype.h OLD_FILES+=usr/include/c++/v1/cwchar OLD_FILES+=usr/include/c++/v1/cwctype OLD_FILES+=usr/include/c++/v1/cxxabi.h OLD_FILES+=usr/include/c++/v1/deque OLD_FILES+=usr/include/c++/v1/errno.h OLD_FILES+=usr/include/c++/v1/exception OLD_FILES+=usr/include/c++/v1/experimental/__config OLD_FILES+=usr/include/c++/v1/experimental/__memory OLD_FILES+=usr/include/c++/v1/experimental/algorithm OLD_FILES+=usr/include/c++/v1/experimental/any OLD_FILES+=usr/include/c++/v1/experimental/chrono OLD_FILES+=usr/include/c++/v1/experimental/coroutine OLD_FILES+=usr/include/c++/v1/experimental/deque OLD_FILES+=usr/include/c++/v1/experimental/dynarray OLD_FILES+=usr/include/c++/v1/experimental/filesystem OLD_FILES+=usr/include/c++/v1/experimental/forward_list OLD_FILES+=usr/include/c++/v1/experimental/functional OLD_FILES+=usr/include/c++/v1/experimental/iterator 
OLD_FILES+=usr/include/c++/v1/experimental/list OLD_FILES+=usr/include/c++/v1/experimental/map OLD_FILES+=usr/include/c++/v1/experimental/memory_resource OLD_FILES+=usr/include/c++/v1/experimental/numeric OLD_FILES+=usr/include/c++/v1/experimental/optional OLD_FILES+=usr/include/c++/v1/experimental/propagate_const OLD_FILES+=usr/include/c++/v1/experimental/ratio OLD_FILES+=usr/include/c++/v1/experimental/regex OLD_FILES+=usr/include/c++/v1/experimental/set OLD_FILES+=usr/include/c++/v1/experimental/simd OLD_FILES+=usr/include/c++/v1/experimental/string OLD_FILES+=usr/include/c++/v1/experimental/string_view OLD_FILES+=usr/include/c++/v1/experimental/system_error OLD_FILES+=usr/include/c++/v1/experimental/tuple OLD_FILES+=usr/include/c++/v1/experimental/type_traits OLD_FILES+=usr/include/c++/v1/experimental/unordered_map OLD_FILES+=usr/include/c++/v1/experimental/unordered_set OLD_FILES+=usr/include/c++/v1/experimental/utility OLD_FILES+=usr/include/c++/v1/experimental/vector OLD_FILES+=usr/include/c++/v1/ext/__hash OLD_FILES+=usr/include/c++/v1/ext/hash_map OLD_FILES+=usr/include/c++/v1/ext/hash_set OLD_FILES+=usr/include/c++/v1/filesystem OLD_FILES+=usr/include/c++/v1/float.h OLD_FILES+=usr/include/c++/v1/forward_list OLD_FILES+=usr/include/c++/v1/fstream OLD_FILES+=usr/include/c++/v1/functional OLD_FILES+=usr/include/c++/v1/future OLD_FILES+=usr/include/c++/v1/initializer_list OLD_FILES+=usr/include/c++/v1/inttypes.h OLD_FILES+=usr/include/c++/v1/iomanip OLD_FILES+=usr/include/c++/v1/ios OLD_FILES+=usr/include/c++/v1/iosfwd OLD_FILES+=usr/include/c++/v1/iostream OLD_FILES+=usr/include/c++/v1/istream OLD_FILES+=usr/include/c++/v1/iterator OLD_FILES+=usr/include/c++/v1/limits OLD_FILES+=usr/include/c++/v1/limits.h OLD_FILES+=usr/include/c++/v1/list OLD_FILES+=usr/include/c++/v1/locale OLD_FILES+=usr/include/c++/v1/locale.h OLD_FILES+=usr/include/c++/v1/map OLD_FILES+=usr/include/c++/v1/math.h OLD_FILES+=usr/include/c++/v1/memory OLD_FILES+=usr/include/c++/v1/mutex OLD_FILES+=usr/include/c++/v1/new OLD_FILES+=usr/include/c++/v1/numeric OLD_FILES+=usr/include/c++/v1/numeric OLD_FILES+=usr/include/c++/v1/optional OLD_FILES+=usr/include/c++/v1/ostream OLD_FILES+=usr/include/c++/v1/queue OLD_FILES+=usr/include/c++/v1/random OLD_FILES+=usr/include/c++/v1/ratio OLD_FILES+=usr/include/c++/v1/regex OLD_FILES+=usr/include/c++/v1/scoped_allocator OLD_FILES+=usr/include/c++/v1/set OLD_FILES+=usr/include/c++/v1/setjmp.h OLD_FILES+=usr/include/c++/v1/shared_mutex OLD_FILES+=usr/include/c++/v1/span OLD_FILES+=usr/include/c++/v1/sstream OLD_FILES+=usr/include/c++/v1/stack OLD_FILES+=usr/include/c++/v1/stdbool.h OLD_FILES+=usr/include/c++/v1/stddef.h OLD_FILES+=usr/include/c++/v1/stdexcept OLD_FILES+=usr/include/c++/v1/stdint.h OLD_FILES+=usr/include/c++/v1/stdio.h OLD_FILES+=usr/include/c++/v1/stdlib.h OLD_FILES+=usr/include/c++/v1/streambuf OLD_FILES+=usr/include/c++/v1/string OLD_FILES+=usr/include/c++/v1/string.h OLD_FILES+=usr/include/c++/v1/string_view OLD_FILES+=usr/include/c++/v1/strstream OLD_FILES+=usr/include/c++/v1/system_error OLD_FILES+=usr/include/c++/v1/tgmath.h OLD_FILES+=usr/include/c++/v1/thread OLD_FILES+=usr/include/c++/v1/version OLD_FILES+=usr/include/c++/v1/tr1/__bit_reference OLD_FILES+=usr/include/c++/v1/tr1/__bsd_locale_defaults.h OLD_FILES+=usr/include/c++/v1/tr1/__bsd_locale_fallbacks.h OLD_FILES+=usr/include/c++/v1/tr1/__config OLD_FILES+=usr/include/c++/v1/tr1/__debug OLD_FILES+=usr/include/c++/v1/tr1/__functional_03 OLD_FILES+=usr/include/c++/v1/tr1/__functional_base 
OLD_FILES+=usr/include/c++/v1/tr1/__functional_base_03 OLD_FILES+=usr/include/c++/v1/tr1/__hash_table OLD_FILES+=usr/include/c++/v1/tr1/__libcpp_version OLD_FILES+=usr/include/c++/v1/tr1/__locale OLD_FILES+=usr/include/c++/v1/tr1/__mutex_base OLD_FILES+=usr/include/c++/v1/tr1/__nullptr OLD_FILES+=usr/include/c++/v1/tr1/__split_buffer OLD_FILES+=usr/include/c++/v1/tr1/__sso_allocator OLD_FILES+=usr/include/c++/v1/tr1/__std_stream OLD_FILES+=usr/include/c++/v1/tr1/__string OLD_FILES+=usr/include/c++/v1/tr1/__threading_support OLD_FILES+=usr/include/c++/v1/tr1/__tree OLD_FILES+=usr/include/c++/v1/tr1/__tuple OLD_FILES+=usr/include/c++/v1/tr1/__undef_macros OLD_FILES+=usr/include/c++/v1/tr1/algorithm OLD_FILES+=usr/include/c++/v1/tr1/any OLD_FILES+=usr/include/c++/v1/tr1/array OLD_FILES+=usr/include/c++/v1/tr1/atomic OLD_FILES+=usr/include/c++/v1/tr1/bitset OLD_FILES+=usr/include/c++/v1/tr1/cassert OLD_FILES+=usr/include/c++/v1/tr1/ccomplex OLD_FILES+=usr/include/c++/v1/tr1/cctype OLD_FILES+=usr/include/c++/v1/tr1/cerrno OLD_FILES+=usr/include/c++/v1/tr1/cfenv OLD_FILES+=usr/include/c++/v1/tr1/cfloat OLD_FILES+=usr/include/c++/v1/tr1/chrono OLD_FILES+=usr/include/c++/v1/tr1/cinttypes OLD_FILES+=usr/include/c++/v1/tr1/ciso646 OLD_FILES+=usr/include/c++/v1/tr1/climits OLD_FILES+=usr/include/c++/v1/tr1/clocale OLD_FILES+=usr/include/c++/v1/tr1/cmath OLD_FILES+=usr/include/c++/v1/tr1/codecvt OLD_FILES+=usr/include/c++/v1/tr1/complex OLD_FILES+=usr/include/c++/v1/tr1/complex.h OLD_FILES+=usr/include/c++/v1/tr1/condition_variable OLD_FILES+=usr/include/c++/v1/tr1/csetjmp OLD_FILES+=usr/include/c++/v1/tr1/csignal OLD_FILES+=usr/include/c++/v1/tr1/cstdarg OLD_FILES+=usr/include/c++/v1/tr1/cstdbool OLD_FILES+=usr/include/c++/v1/tr1/cstddef OLD_FILES+=usr/include/c++/v1/tr1/cstdint OLD_FILES+=usr/include/c++/v1/tr1/cstdio OLD_FILES+=usr/include/c++/v1/tr1/cstdlib OLD_FILES+=usr/include/c++/v1/tr1/cstring OLD_FILES+=usr/include/c++/v1/tr1/ctgmath OLD_FILES+=usr/include/c++/v1/tr1/ctime OLD_FILES+=usr/include/c++/v1/tr1/ctype.h OLD_FILES+=usr/include/c++/v1/tr1/cwchar OLD_FILES+=usr/include/c++/v1/tr1/cwctype OLD_FILES+=usr/include/c++/v1/tr1/deque OLD_FILES+=usr/include/c++/v1/tr1/errno.h OLD_FILES+=usr/include/c++/v1/tr1/exception OLD_FILES+=usr/include/c++/v1/tr1/float.h OLD_FILES+=usr/include/c++/v1/tr1/forward_list OLD_FILES+=usr/include/c++/v1/tr1/fstream OLD_FILES+=usr/include/c++/v1/tr1/functional OLD_FILES+=usr/include/c++/v1/tr1/future OLD_FILES+=usr/include/c++/v1/tr1/initializer_list OLD_FILES+=usr/include/c++/v1/tr1/inttypes.h OLD_FILES+=usr/include/c++/v1/tr1/iomanip OLD_FILES+=usr/include/c++/v1/tr1/ios OLD_FILES+=usr/include/c++/v1/tr1/iosfwd OLD_FILES+=usr/include/c++/v1/tr1/iostream OLD_FILES+=usr/include/c++/v1/tr1/istream OLD_FILES+=usr/include/c++/v1/tr1/iterator OLD_FILES+=usr/include/c++/v1/tr1/limits OLD_FILES+=usr/include/c++/v1/tr1/limits.h OLD_FILES+=usr/include/c++/v1/tr1/list OLD_FILES+=usr/include/c++/v1/tr1/locale OLD_FILES+=usr/include/c++/v1/tr1/locale.h OLD_FILES+=usr/include/c++/v1/tr1/map OLD_FILES+=usr/include/c++/v1/tr1/math.h OLD_FILES+=usr/include/c++/v1/tr1/memory OLD_FILES+=usr/include/c++/v1/tr1/mutex OLD_FILES+=usr/include/c++/v1/tr1/new OLD_FILES+=usr/include/c++/v1/tr1/numeric OLD_FILES+=usr/include/c++/v1/tr1/optional OLD_FILES+=usr/include/c++/v1/tr1/ostream OLD_FILES+=usr/include/c++/v1/tr1/queue OLD_FILES+=usr/include/c++/v1/tr1/random OLD_FILES+=usr/include/c++/v1/tr1/ratio OLD_FILES+=usr/include/c++/v1/tr1/regex
OLD_FILES+=usr/include/c++/v1/tr1/scoped_allocator OLD_FILES+=usr/include/c++/v1/tr1/set OLD_FILES+=usr/include/c++/v1/tr1/setjmp.h OLD_FILES+=usr/include/c++/v1/tr1/shared_mutex OLD_FILES+=usr/include/c++/v1/tr1/sstream OLD_FILES+=usr/include/c++/v1/tr1/stack OLD_FILES+=usr/include/c++/v1/tr1/stdbool.h OLD_FILES+=usr/include/c++/v1/tr1/stddef.h OLD_FILES+=usr/include/c++/v1/tr1/stdexcept OLD_FILES+=usr/include/c++/v1/tr1/stdint.h OLD_FILES+=usr/include/c++/v1/tr1/stdio.h OLD_FILES+=usr/include/c++/v1/tr1/stdlib.h OLD_FILES+=usr/include/c++/v1/tr1/streambuf OLD_FILES+=usr/include/c++/v1/tr1/string OLD_FILES+=usr/include/c++/v1/tr1/string.h OLD_FILES+=usr/include/c++/v1/tr1/string_view OLD_FILES+=usr/include/c++/v1/tr1/strstream OLD_FILES+=usr/include/c++/v1/tr1/system_error OLD_FILES+=usr/include/c++/v1/tr1/tgmath.h OLD_FILES+=usr/include/c++/v1/tr1/thread OLD_FILES+=usr/include/c++/v1/tr1/tuple OLD_FILES+=usr/include/c++/v1/tr1/type_traits OLD_FILES+=usr/include/c++/v1/tr1/typeindex OLD_FILES+=usr/include/c++/v1/tr1/typeinfo OLD_FILES+=usr/include/c++/v1/tr1/unordered_map OLD_FILES+=usr/include/c++/v1/tr1/unordered_set OLD_FILES+=usr/include/c++/v1/tr1/utility OLD_FILES+=usr/include/c++/v1/tr1/valarray OLD_FILES+=usr/include/c++/v1/tr1/variant OLD_FILES+=usr/include/c++/v1/tr1/vector OLD_FILES+=usr/include/c++/v1/tr1/wchar.h OLD_FILES+=usr/include/c++/v1/tr1/wctype.h OLD_FILES+=usr/include/c++/v1/tuple OLD_FILES+=usr/include/c++/v1/type_traits OLD_FILES+=usr/include/c++/v1/typeindex OLD_FILES+=usr/include/c++/v1/typeinfo OLD_FILES+=usr/include/c++/v1/unordered_map OLD_FILES+=usr/include/c++/v1/unordered_set OLD_FILES+=usr/include/c++/v1/unwind-arm.h OLD_FILES+=usr/include/c++/v1/unwind-itanium.h OLD_FILES+=usr/include/c++/v1/unwind.h OLD_FILES+=usr/include/c++/v1/utility OLD_FILES+=usr/include/c++/v1/valarray OLD_FILES+=usr/include/c++/v1/variant OLD_FILES+=usr/include/c++/v1/vector OLD_FILES+=usr/include/c++/v1/wchar.h OLD_FILES+=usr/include/c++/v1/wctype.h OLD_FILES+=usr/lib32/libc++.a OLD_FILES+=usr/lib32/libc++.so OLD_LIBS+=usr/lib32/libc++.so.1 OLD_FILES+=usr/lib32/libc++_p.a OLD_FILES+=usr/lib32/libc++experimental.a OLD_FILES+=usr/lib32/libc++fs.a OLD_FILES+=usr/lib32/libcxxrt.a OLD_FILES+=usr/lib32/libcxxrt.so OLD_LIBS+=usr/lib32/libcxxrt.so.1 OLD_FILES+=usr/lib32/libcxxrt_p.a OLD_DIRS+=usr/include/c++/v1/tr1 OLD_DIRS+=usr/include/c++/v1/experimental OLD_DIRS+=usr/include/c++/v1/ext OLD_DIRS+=usr/include/c++/v1 .endif .if ${MK_LIBTHR} == no OLD_LIBS+=lib/libthr.so.3 OLD_FILES+=usr/lib/libthr.a OLD_FILES+=usr/lib/libthr_p.a OLD_FILES+=usr/share/man/man3/libthr.3.gz .endif .if ${MK_LLD} == no OLD_FILES+=usr/bin/ld.lld .endif .if ${MK_LLDB} == no OLD_FILES+=usr/bin/lldb OLD_FILES+=usr/share/man/man1/lldb.1.gz .endif .if ${MK_LOCALES} == no OLD_DIRS+=usr/share/locale OLD_DIRS+=usr/share/locale/af_ZA.ISO8859-15 OLD_FILES+=usr/share/locale/af_ZA.ISO8859-15/LC_COLLATE OLD_FILES+=usr/share/locale/af_ZA.ISO8859-15/LC_CTYPE OLD_FILES+=usr/share/locale/af_ZA.ISO8859-15/LC_MESSAGES OLD_FILES+=usr/share/locale/af_ZA.ISO8859-15/LC_MONETARY OLD_FILES+=usr/share/locale/af_ZA.ISO8859-15/LC_NUMERIC OLD_FILES+=usr/share/locale/af_ZA.ISO8859-15/LC_TIME OLD_DIRS+=usr/share/locale/af_ZA.ISO8859-1 OLD_FILES+=usr/share/locale/af_ZA.ISO8859-1/LC_COLLATE OLD_FILES+=usr/share/locale/af_ZA.ISO8859-1/LC_CTYPE OLD_FILES+=usr/share/locale/af_ZA.ISO8859-1/LC_MESSAGES OLD_FILES+=usr/share/locale/af_ZA.ISO8859-1/LC_MONETARY OLD_FILES+=usr/share/locale/af_ZA.ISO8859-1/LC_NUMERIC 
OLD_FILES+=usr/share/locale/af_ZA.ISO8859-1/LC_TIME OLD_DIRS+=usr/share/locale/af_ZA.UTF-8 OLD_FILES+=usr/share/locale/af_ZA.UTF-8/LC_COLLATE OLD_FILES+=usr/share/locale/af_ZA.UTF-8/LC_CTYPE OLD_FILES+=usr/share/locale/af_ZA.UTF-8/LC_MESSAGES OLD_FILES+=usr/share/locale/af_ZA.UTF-8/LC_MONETARY OLD_FILES+=usr/share/locale/af_ZA.UTF-8/LC_NUMERIC OLD_FILES+=usr/share/locale/af_ZA.UTF-8/LC_TIME OLD_DIRS+=usr/share/locale/am_ET.UTF-8 OLD_FILES+=usr/share/locale/am_ET.UTF-8/LC_COLLATE OLD_FILES+=usr/share/locale/am_ET.UTF-8/LC_CTYPE OLD_FILES+=usr/share/locale/am_ET.UTF-8/LC_MESSAGES OLD_FILES+=usr/share/locale/am_ET.UTF-8/LC_MONETARY OLD_FILES+=usr/share/locale/am_ET.UTF-8/LC_NUMERIC OLD_FILES+=usr/share/locale/am_ET.UTF-8/LC_TIME OLD_DIRS+=usr/share/locale/ar_AE.UTF-8 OLD_FILES+=usr/share/locale/ar_AE.UTF-8/LC_COLLATE OLD_FILES+=usr/share/locale/ar_AE.UTF-8/LC_CTYPE OLD_FILES+=usr/share/locale/ar_AE.UTF-8/LC_MESSAGES OLD_FILES+=usr/share/locale/ar_AE.UTF-8/LC_MONETARY OLD_FILES+=usr/share/locale/ar_AE.UTF-8/LC_NUMERIC OLD_FILES+=usr/share/locale/ar_AE.UTF-8/LC_TIME OLD_DIRS+=usr/share/locale/ar_EG.UTF-8 OLD_FILES+=usr/share/locale/ar_EG.UTF-8/LC_COLLATE OLD_FILES+=usr/share/locale/ar_EG.UTF-8/LC_CTYPE OLD_FILES+=usr/share/locale/ar_EG.UTF-8/LC_MESSAGES OLD_FILES+=usr/share/locale/ar_EG.UTF-8/LC_MONETARY OLD_FILES+=usr/share/locale/ar_EG.UTF-8/LC_NUMERIC OLD_FILES+=usr/share/locale/ar_EG.UTF-8/LC_TIME OLD_DIRS+=usr/share/locale/ar_JO.UTF-8 OLD_FILES+=usr/share/locale/ar_JO.UTF-8/LC_COLLATE OLD_FILES+=usr/share/locale/ar_JO.UTF-8/LC_CTYPE OLD_FILES+=usr/share/locale/ar_JO.UTF-8/LC_MESSAGES OLD_FILES+=usr/share/locale/ar_JO.UTF-8/LC_MONETARY OLD_FILES+=usr/share/locale/ar_JO.UTF-8/LC_NUMERIC OLD_FILES+=usr/share/locale/ar_JO.UTF-8/LC_TIME OLD_DIRS+=usr/share/locale/ar_MA.UTF-8 OLD_FILES+=usr/share/locale/ar_MA.UTF-8/LC_COLLATE OLD_FILES+=usr/share/locale/ar_MA.UTF-8/LC_CTYPE OLD_FILES+=usr/share/locale/ar_MA.UTF-8/LC_MESSAGES OLD_FILES+=usr/share/locale/ar_MA.UTF-8/LC_MONETARY OLD_FILES+=usr/share/locale/ar_MA.UTF-8/LC_NUMERIC OLD_FILES+=usr/share/locale/ar_MA.UTF-8/LC_TIME OLD_DIRS+=usr/share/locale/ar_QA.UTF-8 OLD_FILES+=usr/share/locale/ar_QA.UTF-8/LC_COLLATE OLD_FILES+=usr/share/locale/ar_QA.UTF-8/LC_CTYPE OLD_FILES+=usr/share/locale/ar_QA.UTF-8/LC_MESSAGES OLD_FILES+=usr/share/locale/ar_QA.UTF-8/LC_MONETARY OLD_FILES+=usr/share/locale/ar_QA.UTF-8/LC_NUMERIC OLD_FILES+=usr/share/locale/ar_QA.UTF-8/LC_TIME OLD_DIRS+=usr/share/locale/ar_SA.UTF-8 OLD_FILES+=usr/share/locale/ar_SA.UTF-8/LC_COLLATE OLD_FILES+=usr/share/locale/ar_SA.UTF-8/LC_CTYPE OLD_FILES+=usr/share/locale/ar_SA.UTF-8/LC_MESSAGES OLD_FILES+=usr/share/locale/ar_SA.UTF-8/LC_MONETARY OLD_FILES+=usr/share/locale/ar_SA.UTF-8/LC_NUMERIC OLD_FILES+=usr/share/locale/ar_SA.UTF-8/LC_TIME OLD_DIRS+=usr/share/locale/be_BY.CP1131 OLD_FILES+=usr/share/locale/be_BY.CP1131/LC_COLLATE OLD_FILES+=usr/share/locale/be_BY.CP1131/LC_CTYPE OLD_FILES+=usr/share/locale/be_BY.CP1131/LC_MESSAGES OLD_FILES+=usr/share/locale/be_BY.CP1131/LC_MONETARY OLD_FILES+=usr/share/locale/be_BY.CP1131/LC_NUMERIC OLD_FILES+=usr/share/locale/be_BY.CP1131/LC_TIME OLD_DIRS+=usr/share/locale/be_BY.CP1251 OLD_FILES+=usr/share/locale/be_BY.CP1251/LC_COLLATE OLD_FILES+=usr/share/locale/be_BY.CP1251/LC_CTYPE OLD_FILES+=usr/share/locale/be_BY.CP1251/LC_MESSAGES OLD_FILES+=usr/share/locale/be_BY.CP1251/LC_MONETARY OLD_FILES+=usr/share/locale/be_BY.CP1251/LC_NUMERIC OLD_FILES+=usr/share/locale/be_BY.CP1251/LC_TIME OLD_DIRS+=usr/share/locale/be_BY.ISO8859-5 
OLD_FILES+=usr/share/locale/be_BY.ISO8859-5/LC_COLLATE OLD_FILES+=usr/share/locale/be_BY.ISO8859-5/LC_CTYPE OLD_FILES+=usr/share/locale/be_BY.ISO8859-5/LC_MESSAGES OLD_FILES+=usr/share/locale/be_BY.ISO8859-5/LC_MONETARY OLD_FILES+=usr/share/locale/be_BY.ISO8859-5/LC_NUMERIC OLD_FILES+=usr/share/locale/be_BY.ISO8859-5/LC_TIME OLD_DIRS+=usr/share/locale/be_BY.UTF-8 OLD_FILES+=usr/share/locale/be_BY.UTF-8/LC_COLLATE OLD_FILES+=usr/share/locale/be_BY.UTF-8/LC_CTYPE OLD_FILES+=usr/share/locale/be_BY.UTF-8/LC_MESSAGES OLD_FILES+=usr/share/locale/be_BY.UTF-8/LC_MONETARY OLD_FILES+=usr/share/locale/be_BY.UTF-8/LC_NUMERIC OLD_FILES+=usr/share/locale/be_BY.UTF-8/LC_TIME OLD_DIRS+=usr/share/locale/bg_BG.CP1251 OLD_FILES+=usr/share/locale/bg_BG.CP1251/LC_COLLATE OLD_FILES+=usr/share/locale/bg_BG.CP1251/LC_CTYPE OLD_FILES+=usr/share/locale/bg_BG.CP1251/LC_MESSAGES OLD_FILES+=usr/share/locale/bg_BG.CP1251/LC_MONETARY OLD_FILES+=usr/share/locale/bg_BG.CP1251/LC_NUMERIC OLD_FILES+=usr/share/locale/bg_BG.CP1251/LC_TIME OLD_DIRS+=usr/share/locale/bg_BG.UTF-8 OLD_FILES+=usr/share/locale/bg_BG.UTF-8/LC_COLLATE OLD_FILES+=usr/share/locale/bg_BG.UTF-8/LC_CTYPE OLD_FILES+=usr/share/locale/bg_BG.UTF-8/LC_MESSAGES OLD_FILES+=usr/share/locale/bg_BG.UTF-8/LC_MONETARY OLD_FILES+=usr/share/locale/bg_BG.UTF-8/LC_NUMERIC OLD_FILES+=usr/share/locale/bg_BG.UTF-8/LC_TIME OLD_DIRS+=usr/share/locale/ca_AD.ISO8859-1 OLD_FILES+=usr/share/locale/ca_AD.ISO8859-1/LC_COLLATE OLD_FILES+=usr/share/locale/ca_AD.ISO8859-1/LC_CTYPE OLD_FILES+=usr/share/locale/ca_AD.ISO8859-1/LC_MESSAGES OLD_FILES+=usr/share/locale/ca_AD.ISO8859-1/LC_MONETARY OLD_FILES+=usr/share/locale/ca_AD.ISO8859-1/LC_NUMERIC OLD_FILES+=usr/share/locale/ca_AD.ISO8859-1/LC_TIME OLD_DIRS+=usr/share/locale/ca_AD.ISO8859-15 OLD_FILES+=usr/share/locale/ca_AD.ISO8859-15/LC_COLLATE OLD_FILES+=usr/share/locale/ca_AD.ISO8859-15/LC_CTYPE OLD_FILES+=usr/share/locale/ca_AD.ISO8859-15/LC_MESSAGES OLD_FILES+=usr/share/locale/ca_AD.ISO8859-15/LC_MONETARY OLD_FILES+=usr/share/locale/ca_AD.ISO8859-15/LC_NUMERIC OLD_FILES+=usr/share/locale/ca_AD.ISO8859-15/LC_TIME OLD_DIRS+=usr/share/locale/ca_AD.UTF-8 OLD_FILES+=usr/share/locale/ca_AD.UTF-8/LC_COLLATE OLD_FILES+=usr/share/locale/ca_AD.UTF-8/LC_CTYPE OLD_FILES+=usr/share/locale/ca_AD.UTF-8/LC_MESSAGES OLD_FILES+=usr/share/locale/ca_AD.UTF-8/LC_MONETARY OLD_FILES+=usr/share/locale/ca_AD.UTF-8/LC_NUMERIC OLD_FILES+=usr/share/locale/ca_AD.UTF-8/LC_TIME OLD_DIRS+=usr/share/locale/ca_ES.ISO8859-1 OLD_FILES+=usr/share/locale/ca_ES.ISO8859-1/LC_COLLATE OLD_FILES+=usr/share/locale/ca_ES.ISO8859-1/LC_CTYPE OLD_FILES+=usr/share/locale/ca_ES.ISO8859-1/LC_MESSAGES OLD_FILES+=usr/share/locale/ca_ES.ISO8859-1/LC_MONETARY OLD_FILES+=usr/share/locale/ca_ES.ISO8859-1/LC_NUMERIC OLD_FILES+=usr/share/locale/ca_ES.ISO8859-1/LC_TIME OLD_DIRS+=usr/share/locale/ca_ES.ISO8859-15 OLD_FILES+=usr/share/locale/ca_ES.ISO8859-15/LC_COLLATE OLD_FILES+=usr/share/locale/ca_ES.ISO8859-15/LC_CTYPE OLD_FILES+=usr/share/locale/ca_ES.ISO8859-15/LC_MESSAGES OLD_FILES+=usr/share/locale/ca_ES.ISO8859-15/LC_MONETARY OLD_FILES+=usr/share/locale/ca_ES.ISO8859-15/LC_NUMERIC OLD_FILES+=usr/share/locale/ca_ES.ISO8859-15/LC_TIME OLD_DIRS+=usr/share/locale/ca_ES.UTF-8 OLD_FILES+=usr/share/locale/ca_ES.UTF-8/LC_COLLATE OLD_FILES+=usr/share/locale/ca_ES.UTF-8/LC_CTYPE OLD_FILES+=usr/share/locale/ca_ES.UTF-8/LC_MESSAGES OLD_FILES+=usr/share/locale/ca_ES.UTF-8/LC_MONETARY OLD_FILES+=usr/share/locale/ca_ES.UTF-8/LC_NUMERIC OLD_FILES+=usr/share/locale/ca_ES.UTF-8/LC_TIME 
OLD_DIRS+=usr/share/locale/ca_FR.ISO8859-1 OLD_FILES+=usr/share/locale/ca_FR.ISO8859-1/LC_COLLATE OLD_FILES+=usr/share/locale/ca_FR.ISO8859-1/LC_CTYPE OLD_FILES+=usr/share/locale/ca_FR.ISO8859-1/LC_MESSAGES OLD_FILES+=usr/share/locale/ca_FR.ISO8859-1/LC_MONETARY OLD_FILES+=usr/share/locale/ca_FR.ISO8859-1/LC_NUMERIC OLD_FILES+=usr/share/locale/ca_FR.ISO8859-1/LC_TIME OLD_DIRS+=usr/share/locale/ca_FR.ISO8859-15 OLD_FILES+=usr/share/locale/ca_FR.ISO8859-15/LC_COLLATE OLD_FILES+=usr/share/locale/ca_FR.ISO8859-15/LC_CTYPE OLD_FILES+=usr/share/locale/ca_FR.ISO8859-15/LC_MESSAGES OLD_FILES+=usr/share/locale/ca_FR.ISO8859-15/LC_MONETARY OLD_FILES+=usr/share/locale/ca_FR.ISO8859-15/LC_NUMERIC OLD_FILES+=usr/share/locale/ca_FR.ISO8859-15/LC_TIME OLD_DIRS+=usr/share/locale/ca_FR.UTF-8 OLD_FILES+=usr/share/locale/ca_FR.UTF-8/LC_COLLATE OLD_FILES+=usr/share/locale/ca_FR.UTF-8/LC_CTYPE OLD_FILES+=usr/share/locale/ca_FR.UTF-8/LC_MESSAGES OLD_FILES+=usr/share/locale/ca_FR.UTF-8/LC_MONETARY OLD_FILES+=usr/share/locale/ca_FR.UTF-8/LC_NUMERIC OLD_FILES+=usr/share/locale/ca_FR.UTF-8/LC_TIME OLD_DIRS+=usr/share/locale/ca_IT.ISO8859-1 OLD_FILES+=usr/share/locale/ca_IT.ISO8859-1/LC_COLLATE OLD_FILES+=usr/share/locale/ca_IT.ISO8859-1/LC_CTYPE OLD_FILES+=usr/share/locale/ca_IT.ISO8859-1/LC_MESSAGES OLD_FILES+=usr/share/locale/ca_IT.ISO8859-1/LC_MONETARY OLD_FILES+=usr/share/locale/ca_IT.ISO8859-1/LC_NUMERIC OLD_FILES+=usr/share/locale/ca_IT.ISO8859-1/LC_TIME OLD_DIRS+=usr/share/locale/ca_IT.ISO8859-15 OLD_FILES+=usr/share/locale/ca_IT.ISO8859-15/LC_COLLATE OLD_FILES+=usr/share/locale/ca_IT.ISO8859-15/LC_CTYPE OLD_FILES+=usr/share/locale/ca_IT.ISO8859-15/LC_MESSAGES OLD_FILES+=usr/share/locale/ca_IT.ISO8859-15/LC_MONETARY OLD_FILES+=usr/share/locale/ca_IT.ISO8859-15/LC_NUMERIC OLD_FILES+=usr/share/locale/ca_IT.ISO8859-15/LC_TIME OLD_DIRS+=usr/share/locale/ca_IT.UTF-8 OLD_FILES+=usr/share/locale/ca_IT.UTF-8/LC_COLLATE OLD_FILES+=usr/share/locale/ca_IT.UTF-8/LC_CTYPE OLD_FILES+=usr/share/locale/ca_IT.UTF-8/LC_MESSAGES OLD_FILES+=usr/share/locale/ca_IT.UTF-8/LC_MONETARY OLD_FILES+=usr/share/locale/ca_IT.UTF-8/LC_NUMERIC OLD_FILES+=usr/share/locale/ca_IT.UTF-8/LC_TIME OLD_DIRS+=usr/share/locale/cs_CZ.ISO8859-2 OLD_FILES+=usr/share/locale/cs_CZ.ISO8859-2/LC_COLLATE OLD_FILES+=usr/share/locale/cs_CZ.ISO8859-2/LC_CTYPE OLD_FILES+=usr/share/locale/cs_CZ.ISO8859-2/LC_MESSAGES OLD_FILES+=usr/share/locale/cs_CZ.ISO8859-2/LC_MONETARY OLD_FILES+=usr/share/locale/cs_CZ.ISO8859-2/LC_NUMERIC OLD_FILES+=usr/share/locale/cs_CZ.ISO8859-2/LC_TIME OLD_DIRS+=usr/share/locale/cs_CZ.UTF-8 OLD_FILES+=usr/share/locale/cs_CZ.UTF-8/LC_COLLATE OLD_FILES+=usr/share/locale/cs_CZ.UTF-8/LC_CTYPE OLD_FILES+=usr/share/locale/cs_CZ.UTF-8/LC_MESSAGES OLD_FILES+=usr/share/locale/cs_CZ.UTF-8/LC_MONETARY OLD_FILES+=usr/share/locale/cs_CZ.UTF-8/LC_NUMERIC OLD_FILES+=usr/share/locale/cs_CZ.UTF-8/LC_TIME OLD_DIRS+=usr/share/locale/da_DK.ISO8859-1 OLD_FILES+=usr/share/locale/da_DK.ISO8859-1/LC_COLLATE OLD_FILES+=usr/share/locale/da_DK.ISO8859-1/LC_CTYPE OLD_FILES+=usr/share/locale/da_DK.ISO8859-1/LC_MESSAGES OLD_FILES+=usr/share/locale/da_DK.ISO8859-1/LC_MONETARY OLD_FILES+=usr/share/locale/da_DK.ISO8859-1/LC_NUMERIC OLD_FILES+=usr/share/locale/da_DK.ISO8859-1/LC_TIME OLD_DIRS+=usr/share/locale/da_DK.ISO8859-15 OLD_FILES+=usr/share/locale/da_DK.ISO8859-15/LC_COLLATE OLD_FILES+=usr/share/locale/da_DK.ISO8859-15/LC_CTYPE OLD_FILES+=usr/share/locale/da_DK.ISO8859-15/LC_MESSAGES OLD_FILES+=usr/share/locale/da_DK.ISO8859-15/LC_MONETARY 
OLD_FILES+=usr/share/locale/da_DK.ISO8859-15/LC_NUMERIC OLD_FILES+=usr/share/locale/da_DK.ISO8859-15/LC_TIME OLD_DIRS+=usr/share/locale/da_DK.UTF-8 OLD_FILES+=usr/share/locale/da_DK.UTF-8/LC_COLLATE OLD_FILES+=usr/share/locale/da_DK.UTF-8/LC_CTYPE OLD_FILES+=usr/share/locale/da_DK.UTF-8/LC_MESSAGES OLD_FILES+=usr/share/locale/da_DK.UTF-8/LC_MONETARY OLD_FILES+=usr/share/locale/da_DK.UTF-8/LC_NUMERIC OLD_FILES+=usr/share/locale/da_DK.UTF-8/LC_TIME OLD_DIRS+=usr/share/locale/de_AT.ISO8859-1 OLD_FILES+=usr/share/locale/de_AT.ISO8859-1/LC_COLLATE OLD_FILES+=usr/share/locale/de_AT.ISO8859-1/LC_CTYPE OLD_FILES+=usr/share/locale/de_AT.ISO8859-1/LC_MESSAGES OLD_FILES+=usr/share/locale/de_AT.ISO8859-1/LC_MONETARY OLD_FILES+=usr/share/locale/de_AT.ISO8859-1/LC_NUMERIC OLD_FILES+=usr/share/locale/de_AT.ISO8859-1/LC_TIME OLD_DIRS+=usr/share/locale/de_AT.ISO8859-15 OLD_FILES+=usr/share/locale/de_AT.ISO8859-15/LC_COLLATE OLD_FILES+=usr/share/locale/de_AT.ISO8859-15/LC_CTYPE OLD_FILES+=usr/share/locale/de_AT.ISO8859-15/LC_MESSAGES OLD_FILES+=usr/share/locale/de_AT.ISO8859-15/LC_MONETARY OLD_FILES+=usr/share/locale/de_AT.ISO8859-15/LC_NUMERIC OLD_FILES+=usr/share/locale/de_AT.ISO8859-15/LC_TIME OLD_DIRS+=usr/share/locale/de_AT.UTF-8 OLD_FILES+=usr/share/locale/de_AT.UTF-8/LC_COLLATE OLD_FILES+=usr/share/locale/de_AT.UTF-8/LC_CTYPE OLD_FILES+=usr/share/locale/de_AT.UTF-8/LC_MESSAGES OLD_FILES+=usr/share/locale/de_AT.UTF-8/LC_MONETARY OLD_FILES+=usr/share/locale/de_AT.UTF-8/LC_NUMERIC OLD_FILES+=usr/share/locale/de_AT.UTF-8/LC_TIME OLD_DIRS+=usr/share/locale/de_CH.ISO8859-1 OLD_FILES+=usr/share/locale/de_CH.ISO8859-1/LC_COLLATE OLD_FILES+=usr/share/locale/de_CH.ISO8859-1/LC_CTYPE OLD_FILES+=usr/share/locale/de_CH.ISO8859-1/LC_MESSAGES OLD_FILES+=usr/share/locale/de_CH.ISO8859-1/LC_MONETARY OLD_FILES+=usr/share/locale/de_CH.ISO8859-1/LC_NUMERIC OLD_FILES+=usr/share/locale/de_CH.ISO8859-1/LC_TIME OLD_DIRS+=usr/share/locale/de_CH.ISO8859-15 OLD_FILES+=usr/share/locale/de_CH.ISO8859-15/LC_COLLATE OLD_FILES+=usr/share/locale/de_CH.ISO8859-15/LC_CTYPE OLD_FILES+=usr/share/locale/de_CH.ISO8859-15/LC_MESSAGES OLD_FILES+=usr/share/locale/de_CH.ISO8859-15/LC_MONETARY OLD_FILES+=usr/share/locale/de_CH.ISO8859-15/LC_NUMERIC OLD_FILES+=usr/share/locale/de_CH.ISO8859-15/LC_TIME OLD_DIRS+=usr/share/locale/de_CH.UTF-8 OLD_FILES+=usr/share/locale/de_CH.UTF-8/LC_COLLATE OLD_FILES+=usr/share/locale/de_CH.UTF-8/LC_CTYPE OLD_FILES+=usr/share/locale/de_CH.UTF-8/LC_MESSAGES OLD_FILES+=usr/share/locale/de_CH.UTF-8/LC_MONETARY OLD_FILES+=usr/share/locale/de_CH.UTF-8/LC_NUMERIC OLD_FILES+=usr/share/locale/de_CH.UTF-8/LC_TIME OLD_DIRS+=usr/share/locale/de_DE.ISO8859-1 OLD_FILES+=usr/share/locale/de_DE.ISO8859-1/LC_COLLATE OLD_FILES+=usr/share/locale/de_DE.ISO8859-1/LC_CTYPE OLD_FILES+=usr/share/locale/de_DE.ISO8859-1/LC_MESSAGES OLD_FILES+=usr/share/locale/de_DE.ISO8859-1/LC_MONETARY OLD_FILES+=usr/share/locale/de_DE.ISO8859-1/LC_NUMERIC OLD_FILES+=usr/share/locale/de_DE.ISO8859-1/LC_TIME OLD_DIRS+=usr/share/locale/de_DE.ISO8859-15 OLD_FILES+=usr/share/locale/de_DE.ISO8859-15/LC_COLLATE OLD_FILES+=usr/share/locale/de_DE.ISO8859-15/LC_CTYPE OLD_FILES+=usr/share/locale/de_DE.ISO8859-15/LC_MESSAGES OLD_FILES+=usr/share/locale/de_DE.ISO8859-15/LC_MONETARY OLD_FILES+=usr/share/locale/de_DE.ISO8859-15/LC_NUMERIC OLD_FILES+=usr/share/locale/de_DE.ISO8859-15/LC_TIME OLD_DIRS+=usr/share/locale/de_DE.UTF-8 OLD_FILES+=usr/share/locale/de_DE.UTF-8/LC_COLLATE OLD_FILES+=usr/share/locale/de_DE.UTF-8/LC_CTYPE 
OLD_FILES+=usr/share/locale/de_DE.UTF-8/LC_MESSAGES OLD_FILES+=usr/share/locale/de_DE.UTF-8/LC_MONETARY OLD_FILES+=usr/share/locale/de_DE.UTF-8/LC_NUMERIC OLD_FILES+=usr/share/locale/de_DE.UTF-8/LC_TIME OLD_DIRS+=usr/share/locale/el_GR.ISO8859-7 OLD_FILES+=usr/share/locale/el_GR.ISO8859-7/LC_COLLATE OLD_FILES+=usr/share/locale/el_GR.ISO8859-7/LC_CTYPE OLD_FILES+=usr/share/locale/el_GR.ISO8859-7/LC_MESSAGES OLD_FILES+=usr/share/locale/el_GR.ISO8859-7/LC_MONETARY OLD_FILES+=usr/share/locale/el_GR.ISO8859-7/LC_NUMERIC OLD_FILES+=usr/share/locale/el_GR.ISO8859-7/LC_TIME OLD_DIRS+=usr/share/locale/el_GR.UTF-8 OLD_FILES+=usr/share/locale/el_GR.UTF-8/LC_COLLATE OLD_FILES+=usr/share/locale/el_GR.UTF-8/LC_CTYPE OLD_FILES+=usr/share/locale/el_GR.UTF-8/LC_MESSAGES OLD_FILES+=usr/share/locale/el_GR.UTF-8/LC_MONETARY OLD_FILES+=usr/share/locale/el_GR.UTF-8/LC_NUMERIC OLD_FILES+=usr/share/locale/el_GR.UTF-8/LC_TIME OLD_DIRS+=usr/share/locale/en_AU.ISO8859-1 OLD_FILES+=usr/share/locale/en_AU.ISO8859-1/LC_COLLATE OLD_FILES+=usr/share/locale/en_AU.ISO8859-1/LC_CTYPE OLD_FILES+=usr/share/locale/en_AU.ISO8859-1/LC_MESSAGES OLD_FILES+=usr/share/locale/en_AU.ISO8859-1/LC_MONETARY OLD_FILES+=usr/share/locale/en_AU.ISO8859-1/LC_NUMERIC OLD_FILES+=usr/share/locale/en_AU.ISO8859-1/LC_TIME OLD_DIRS+=usr/share/locale/en_AU.ISO8859-15 OLD_FILES+=usr/share/locale/en_AU.ISO8859-15/LC_COLLATE OLD_FILES+=usr/share/locale/en_AU.ISO8859-15/LC_CTYPE OLD_FILES+=usr/share/locale/en_AU.ISO8859-15/LC_MESSAGES OLD_FILES+=usr/share/locale/en_AU.ISO8859-15/LC_MONETARY OLD_FILES+=usr/share/locale/en_AU.ISO8859-15/LC_NUMERIC OLD_FILES+=usr/share/locale/en_AU.ISO8859-15/LC_TIME OLD_DIRS+=usr/share/locale/en_AU.US-ASCII OLD_FILES+=usr/share/locale/en_AU.US-ASCII/LC_COLLATE OLD_FILES+=usr/share/locale/en_AU.US-ASCII/LC_CTYPE OLD_FILES+=usr/share/locale/en_AU.US-ASCII/LC_MESSAGES OLD_FILES+=usr/share/locale/en_AU.US-ASCII/LC_MONETARY OLD_FILES+=usr/share/locale/en_AU.US-ASCII/LC_NUMERIC OLD_FILES+=usr/share/locale/en_AU.US-ASCII/LC_TIME OLD_DIRS+=usr/share/locale/en_AU.UTF-8 OLD_FILES+=usr/share/locale/en_AU.UTF-8/LC_COLLATE OLD_FILES+=usr/share/locale/en_AU.UTF-8/LC_CTYPE OLD_FILES+=usr/share/locale/en_AU.UTF-8/LC_MESSAGES OLD_FILES+=usr/share/locale/en_AU.UTF-8/LC_MONETARY OLD_FILES+=usr/share/locale/en_AU.UTF-8/LC_NUMERIC OLD_FILES+=usr/share/locale/en_AU.UTF-8/LC_TIME OLD_DIRS+=usr/share/locale/en_CA.ISO8859-1 OLD_FILES+=usr/share/locale/en_CA.ISO8859-1/LC_COLLATE OLD_FILES+=usr/share/locale/en_CA.ISO8859-1/LC_CTYPE OLD_FILES+=usr/share/locale/en_CA.ISO8859-1/LC_MESSAGES OLD_FILES+=usr/share/locale/en_CA.ISO8859-1/LC_MONETARY OLD_FILES+=usr/share/locale/en_CA.ISO8859-1/LC_NUMERIC OLD_FILES+=usr/share/locale/en_CA.ISO8859-1/LC_TIME OLD_DIRS+=usr/share/locale/en_CA.ISO8859-15 OLD_FILES+=usr/share/locale/en_CA.ISO8859-15/LC_COLLATE OLD_FILES+=usr/share/locale/en_CA.ISO8859-15/LC_CTYPE OLD_FILES+=usr/share/locale/en_CA.ISO8859-15/LC_MESSAGES OLD_FILES+=usr/share/locale/en_CA.ISO8859-15/LC_MONETARY OLD_FILES+=usr/share/locale/en_CA.ISO8859-15/LC_NUMERIC OLD_FILES+=usr/share/locale/en_CA.ISO8859-15/LC_TIME OLD_DIRS+=usr/share/locale/en_CA.US-ASCII OLD_FILES+=usr/share/locale/en_CA.US-ASCII/LC_COLLATE OLD_FILES+=usr/share/locale/en_CA.US-ASCII/LC_CTYPE OLD_FILES+=usr/share/locale/en_CA.US-ASCII/LC_MESSAGES OLD_FILES+=usr/share/locale/en_CA.US-ASCII/LC_MONETARY OLD_FILES+=usr/share/locale/en_CA.US-ASCII/LC_NUMERIC OLD_FILES+=usr/share/locale/en_CA.US-ASCII/LC_TIME OLD_DIRS+=usr/share/locale/en_CA.UTF-8 
OLD_FILES+=usr/share/locale/en_CA.UTF-8/LC_COLLATE OLD_FILES+=usr/share/locale/en_CA.UTF-8/LC_CTYPE OLD_FILES+=usr/share/locale/en_CA.UTF-8/LC_MESSAGES OLD_FILES+=usr/share/locale/en_CA.UTF-8/LC_MONETARY OLD_FILES+=usr/share/locale/en_CA.UTF-8/LC_NUMERIC OLD_FILES+=usr/share/locale/en_CA.UTF-8/LC_TIME OLD_DIRS+=usr/share/locale/en_GB.ISO8859-1 OLD_FILES+=usr/share/locale/en_GB.ISO8859-1/LC_COLLATE OLD_FILES+=usr/share/locale/en_GB.ISO8859-1/LC_CTYPE OLD_FILES+=usr/share/locale/en_GB.ISO8859-1/LC_MESSAGES OLD_FILES+=usr/share/locale/en_GB.ISO8859-1/LC_MONETARY OLD_FILES+=usr/share/locale/en_GB.ISO8859-1/LC_NUMERIC OLD_FILES+=usr/share/locale/en_GB.ISO8859-1/LC_TIME OLD_DIRS+=usr/share/locale/en_GB.ISO8859-15 OLD_FILES+=usr/share/locale/en_GB.ISO8859-15/LC_COLLATE OLD_FILES+=usr/share/locale/en_GB.ISO8859-15/LC_CTYPE OLD_FILES+=usr/share/locale/en_GB.ISO8859-15/LC_MESSAGES OLD_FILES+=usr/share/locale/en_GB.ISO8859-15/LC_MONETARY OLD_FILES+=usr/share/locale/en_GB.ISO8859-15/LC_NUMERIC OLD_FILES+=usr/share/locale/en_GB.ISO8859-15/LC_TIME OLD_DIRS+=usr/share/locale/en_GB.US-ASCII OLD_FILES+=usr/share/locale/en_GB.US-ASCII/LC_COLLATE OLD_FILES+=usr/share/locale/en_GB.US-ASCII/LC_CTYPE OLD_FILES+=usr/share/locale/en_GB.US-ASCII/LC_MESSAGES OLD_FILES+=usr/share/locale/en_GB.US-ASCII/LC_MONETARY OLD_FILES+=usr/share/locale/en_GB.US-ASCII/LC_NUMERIC OLD_FILES+=usr/share/locale/en_GB.US-ASCII/LC_TIME OLD_DIRS+=usr/share/locale/en_GB.UTF-8 OLD_FILES+=usr/share/locale/en_GB.UTF-8/LC_COLLATE OLD_FILES+=usr/share/locale/en_GB.UTF-8/LC_CTYPE OLD_FILES+=usr/share/locale/en_GB.UTF-8/LC_MESSAGES OLD_FILES+=usr/share/locale/en_GB.UTF-8/LC_MONETARY OLD_FILES+=usr/share/locale/en_GB.UTF-8/LC_NUMERIC OLD_FILES+=usr/share/locale/en_GB.UTF-8/LC_TIME OLD_DIRS+=usr/share/locale/en_HK.ISO8859-1 OLD_FILES+=usr/share/locale/en_HK.ISO8859-1/LC_COLLATE OLD_FILES+=usr/share/locale/en_HK.ISO8859-1/LC_CTYPE OLD_FILES+=usr/share/locale/en_HK.ISO8859-1/LC_MESSAGES OLD_FILES+=usr/share/locale/en_HK.ISO8859-1/LC_MONETARY OLD_FILES+=usr/share/locale/en_HK.ISO8859-1/LC_NUMERIC OLD_FILES+=usr/share/locale/en_HK.ISO8859-1/LC_TIME OLD_DIRS+=usr/share/locale/en_HK.UTF-8 OLD_FILES+=usr/share/locale/en_HK.UTF-8/LC_COLLATE OLD_FILES+=usr/share/locale/en_HK.UTF-8/LC_CTYPE OLD_FILES+=usr/share/locale/en_HK.UTF-8/LC_MESSAGES OLD_FILES+=usr/share/locale/en_HK.UTF-8/LC_MONETARY OLD_FILES+=usr/share/locale/en_HK.UTF-8/LC_NUMERIC OLD_FILES+=usr/share/locale/en_HK.UTF-8/LC_TIME OLD_DIRS+=usr/share/locale/en_IE.ISO8859-1 OLD_FILES+=usr/share/locale/en_IE.ISO8859-1/LC_COLLATE OLD_FILES+=usr/share/locale/en_IE.ISO8859-1/LC_CTYPE OLD_FILES+=usr/share/locale/en_IE.ISO8859-1/LC_MESSAGES OLD_FILES+=usr/share/locale/en_IE.ISO8859-1/LC_MONETARY OLD_FILES+=usr/share/locale/en_IE.ISO8859-1/LC_NUMERIC OLD_FILES+=usr/share/locale/en_IE.ISO8859-1/LC_TIME OLD_DIRS+=usr/share/locale/en_IE.ISO8859-15 OLD_FILES+=usr/share/locale/en_IE.ISO8859-15/LC_COLLATE OLD_FILES+=usr/share/locale/en_IE.ISO8859-15/LC_CTYPE OLD_FILES+=usr/share/locale/en_IE.ISO8859-15/LC_MESSAGES OLD_FILES+=usr/share/locale/en_IE.ISO8859-15/LC_MONETARY OLD_FILES+=usr/share/locale/en_IE.ISO8859-15/LC_NUMERIC OLD_FILES+=usr/share/locale/en_IE.ISO8859-15/LC_TIME OLD_DIRS+=usr/share/locale/en_IE.UTF-8 OLD_FILES+=usr/share/locale/en_IE.UTF-8/LC_COLLATE OLD_FILES+=usr/share/locale/en_IE.UTF-8/LC_CTYPE OLD_FILES+=usr/share/locale/en_IE.UTF-8/LC_MESSAGES OLD_FILES+=usr/share/locale/en_IE.UTF-8/LC_MONETARY OLD_FILES+=usr/share/locale/en_IE.UTF-8/LC_NUMERIC 
OLD_FILES+=usr/share/locale/en_IE.UTF-8/LC_TIME OLD_DIRS+=usr/share/locale/en_NZ.ISO8859-1 OLD_FILES+=usr/share/locale/en_NZ.ISO8859-1/LC_COLLATE OLD_FILES+=usr/share/locale/en_NZ.ISO8859-1/LC_CTYPE OLD_FILES+=usr/share/locale/en_NZ.ISO8859-1/LC_MESSAGES OLD_FILES+=usr/share/locale/en_NZ.ISO8859-1/LC_MONETARY OLD_FILES+=usr/share/locale/en_NZ.ISO8859-1/LC_NUMERIC OLD_FILES+=usr/share/locale/en_NZ.ISO8859-1/LC_TIME OLD_DIRS+=usr/share/locale/en_NZ.ISO8859-15 OLD_FILES+=usr/share/locale/en_NZ.ISO8859-15/LC_COLLATE OLD_FILES+=usr/share/locale/en_NZ.ISO8859-15/LC_CTYPE OLD_FILES+=usr/share/locale/en_NZ.ISO8859-15/LC_MESSAGES OLD_FILES+=usr/share/locale/en_NZ.ISO8859-15/LC_MONETARY OLD_FILES+=usr/share/locale/en_NZ.ISO8859-15/LC_NUMERIC OLD_FILES+=usr/share/locale/en_NZ.ISO8859-15/LC_TIME OLD_DIRS+=usr/share/locale/en_NZ.US-ASCII OLD_FILES+=usr/share/locale/en_NZ.US-ASCII/LC_COLLATE OLD_FILES+=usr/share/locale/en_NZ.US-ASCII/LC_CTYPE OLD_FILES+=usr/share/locale/en_NZ.US-ASCII/LC_MESSAGES OLD_FILES+=usr/share/locale/en_NZ.US-ASCII/LC_MONETARY OLD_FILES+=usr/share/locale/en_NZ.US-ASCII/LC_NUMERIC OLD_FILES+=usr/share/locale/en_NZ.US-ASCII/LC_TIME OLD_DIRS+=usr/share/locale/en_NZ.UTF-8 OLD_FILES+=usr/share/locale/en_NZ.UTF-8/LC_COLLATE OLD_FILES+=usr/share/locale/en_NZ.UTF-8/LC_CTYPE OLD_FILES+=usr/share/locale/en_NZ.UTF-8/LC_MESSAGES OLD_FILES+=usr/share/locale/en_NZ.UTF-8/LC_MONETARY OLD_FILES+=usr/share/locale/en_NZ.UTF-8/LC_NUMERIC OLD_FILES+=usr/share/locale/en_NZ.UTF-8/LC_TIME OLD_DIRS+=usr/share/locale/en_PH.UTF-8 OLD_FILES+=usr/share/locale/en_PH.UTF-8/LC_COLLATE OLD_FILES+=usr/share/locale/en_PH.UTF-8/LC_CTYPE OLD_FILES+=usr/share/locale/en_PH.UTF-8/LC_MESSAGES OLD_FILES+=usr/share/locale/en_PH.UTF-8/LC_MONETARY OLD_FILES+=usr/share/locale/en_PH.UTF-8/LC_NUMERIC OLD_FILES+=usr/share/locale/en_PH.UTF-8/LC_TIME OLD_DIRS+=usr/share/locale/en_SG.ISO8859-1 OLD_FILES+=usr/share/locale/en_SG.ISO8859-1/LC_COLLATE OLD_FILES+=usr/share/locale/en_SG.ISO8859-1/LC_CTYPE OLD_FILES+=usr/share/locale/en_SG.ISO8859-1/LC_MESSAGES OLD_FILES+=usr/share/locale/en_SG.ISO8859-1/LC_MONETARY OLD_FILES+=usr/share/locale/en_SG.ISO8859-1/LC_NUMERIC OLD_FILES+=usr/share/locale/en_SG.ISO8859-1/LC_TIME OLD_DIRS+=usr/share/locale/en_SG.UTF-8 OLD_FILES+=usr/share/locale/en_SG.UTF-8/LC_COLLATE OLD_FILES+=usr/share/locale/en_SG.UTF-8/LC_CTYPE OLD_FILES+=usr/share/locale/en_SG.UTF-8/LC_MESSAGES OLD_FILES+=usr/share/locale/en_SG.UTF-8/LC_MONETARY OLD_FILES+=usr/share/locale/en_SG.UTF-8/LC_NUMERIC OLD_FILES+=usr/share/locale/en_SG.UTF-8/LC_TIME OLD_DIRS+=usr/share/locale/en_US.ISO8859-1 OLD_FILES+=usr/share/locale/en_US.ISO8859-1/LC_COLLATE OLD_FILES+=usr/share/locale/en_US.ISO8859-1/LC_CTYPE OLD_FILES+=usr/share/locale/en_US.ISO8859-1/LC_MESSAGES OLD_FILES+=usr/share/locale/en_US.ISO8859-1/LC_MONETARY OLD_FILES+=usr/share/locale/en_US.ISO8859-1/LC_NUMERIC OLD_FILES+=usr/share/locale/en_US.ISO8859-1/LC_TIME OLD_DIRS+=usr/share/locale/en_US.ISO8859-15 OLD_FILES+=usr/share/locale/en_US.ISO8859-15/LC_COLLATE OLD_FILES+=usr/share/locale/en_US.ISO8859-15/LC_CTYPE OLD_FILES+=usr/share/locale/en_US.ISO8859-15/LC_MESSAGES OLD_FILES+=usr/share/locale/en_US.ISO8859-15/LC_MONETARY OLD_FILES+=usr/share/locale/en_US.ISO8859-15/LC_NUMERIC OLD_FILES+=usr/share/locale/en_US.ISO8859-15/LC_TIME OLD_DIRS+=usr/share/locale/en_US.US-ASCII OLD_FILES+=usr/share/locale/en_US.US-ASCII/LC_COLLATE OLD_FILES+=usr/share/locale/en_US.US-ASCII/LC_CTYPE OLD_FILES+=usr/share/locale/en_US.US-ASCII/LC_MESSAGES 
OLD_FILES+=usr/share/locale/en_US.US-ASCII/LC_MONETARY OLD_FILES+=usr/share/locale/en_US.US-ASCII/LC_NUMERIC OLD_FILES+=usr/share/locale/en_US.US-ASCII/LC_TIME OLD_DIRS+=usr/share/locale/en_US.UTF-8 OLD_FILES+=usr/share/locale/en_US.UTF-8/LC_COLLATE OLD_FILES+=usr/share/locale/en_US.UTF-8/LC_CTYPE OLD_FILES+=usr/share/locale/en_US.UTF-8/LC_MESSAGES OLD_FILES+=usr/share/locale/en_US.UTF-8/LC_MONETARY OLD_FILES+=usr/share/locale/en_US.UTF-8/LC_NUMERIC OLD_FILES+=usr/share/locale/en_US.UTF-8/LC_TIME OLD_DIRS+=usr/share/locale/en_ZA.ISO8859-1 OLD_FILES+=usr/share/locale/en_ZA.ISO8859-1/LC_COLLATE OLD_FILES+=usr/share/locale/en_ZA.ISO8859-1/LC_CTYPE OLD_FILES+=usr/share/locale/en_ZA.ISO8859-1/LC_MESSAGES OLD_FILES+=usr/share/locale/en_ZA.ISO8859-1/LC_MONETARY OLD_FILES+=usr/share/locale/en_ZA.ISO8859-1/LC_NUMERIC OLD_FILES+=usr/share/locale/en_ZA.ISO8859-1/LC_TIME OLD_DIRS+=usr/share/locale/en_ZA.ISO8859-15 OLD_FILES+=usr/share/locale/en_ZA.ISO8859-15/LC_COLLATE OLD_FILES+=usr/share/locale/en_ZA.ISO8859-15/LC_CTYPE OLD_FILES+=usr/share/locale/en_ZA.ISO8859-15/LC_MESSAGES OLD_FILES+=usr/share/locale/en_ZA.ISO8859-15/LC_MONETARY OLD_FILES+=usr/share/locale/en_ZA.ISO8859-15/LC_NUMERIC OLD_FILES+=usr/share/locale/en_ZA.ISO8859-15/LC_TIME OLD_DIRS+=usr/share/locale/en_ZA.US-ASCII OLD_FILES+=usr/share/locale/en_ZA.US-ASCII/LC_COLLATE OLD_FILES+=usr/share/locale/en_ZA.US-ASCII/LC_CTYPE OLD_FILES+=usr/share/locale/en_ZA.US-ASCII/LC_MESSAGES OLD_FILES+=usr/share/locale/en_ZA.US-ASCII/LC_MONETARY OLD_FILES+=usr/share/locale/en_ZA.US-ASCII/LC_NUMERIC OLD_FILES+=usr/share/locale/en_ZA.US-ASCII/LC_TIME OLD_DIRS+=usr/share/locale/en_ZA.UTF-8 OLD_FILES+=usr/share/locale/en_ZA.UTF-8/LC_COLLATE OLD_FILES+=usr/share/locale/en_ZA.UTF-8/LC_CTYPE OLD_FILES+=usr/share/locale/en_ZA.UTF-8/LC_MESSAGES OLD_FILES+=usr/share/locale/en_ZA.UTF-8/LC_MONETARY OLD_FILES+=usr/share/locale/en_ZA.UTF-8/LC_NUMERIC OLD_FILES+=usr/share/locale/en_ZA.UTF-8/LC_TIME OLD_DIRS+=usr/share/locale/es_AR.ISO8859-1 OLD_FILES+=usr/share/locale/es_AR.ISO8859-1/LC_COLLATE OLD_FILES+=usr/share/locale/es_AR.ISO8859-1/LC_CTYPE OLD_FILES+=usr/share/locale/es_AR.ISO8859-1/LC_MESSAGES OLD_FILES+=usr/share/locale/es_AR.ISO8859-1/LC_MONETARY OLD_FILES+=usr/share/locale/es_AR.ISO8859-1/LC_NUMERIC OLD_FILES+=usr/share/locale/es_AR.ISO8859-1/LC_TIME OLD_DIRS+=usr/share/locale/es_AR.UTF-8 OLD_FILES+=usr/share/locale/es_AR.UTF-8/LC_COLLATE OLD_FILES+=usr/share/locale/es_AR.UTF-8/LC_CTYPE OLD_FILES+=usr/share/locale/es_AR.UTF-8/LC_MESSAGES OLD_FILES+=usr/share/locale/es_AR.UTF-8/LC_MONETARY OLD_FILES+=usr/share/locale/es_AR.UTF-8/LC_NUMERIC OLD_FILES+=usr/share/locale/es_AR.UTF-8/LC_TIME OLD_DIRS+=usr/share/locale/es_CR.UTF-8 OLD_FILES+=usr/share/locale/es_CR.UTF-8/LC_COLLATE OLD_FILES+=usr/share/locale/es_CR.UTF-8/LC_CTYPE OLD_FILES+=usr/share/locale/es_CR.UTF-8/LC_MESSAGES OLD_FILES+=usr/share/locale/es_CR.UTF-8/LC_MONETARY OLD_FILES+=usr/share/locale/es_CR.UTF-8/LC_NUMERIC OLD_FILES+=usr/share/locale/es_CR.UTF-8/LC_TIME OLD_DIRS+=usr/share/locale/es_ES.ISO8859-1 OLD_FILES+=usr/share/locale/es_ES.ISO8859-1/LC_COLLATE OLD_FILES+=usr/share/locale/es_ES.ISO8859-1/LC_CTYPE OLD_FILES+=usr/share/locale/es_ES.ISO8859-1/LC_MESSAGES OLD_FILES+=usr/share/locale/es_ES.ISO8859-1/LC_MONETARY OLD_FILES+=usr/share/locale/es_ES.ISO8859-1/LC_NUMERIC OLD_FILES+=usr/share/locale/es_ES.ISO8859-1/LC_TIME OLD_DIRS+=usr/share/locale/es_ES.ISO8859-15 OLD_FILES+=usr/share/locale/es_ES.ISO8859-15/LC_COLLATE OLD_FILES+=usr/share/locale/es_ES.ISO8859-15/LC_CTYPE 
OLD_FILES+=usr/share/locale/es_ES.ISO8859-15/LC_MESSAGES OLD_FILES+=usr/share/locale/es_ES.ISO8859-15/LC_MONETARY OLD_FILES+=usr/share/locale/es_ES.ISO8859-15/LC_NUMERIC OLD_FILES+=usr/share/locale/es_ES.ISO8859-15/LC_TIME OLD_DIRS+=usr/share/locale/es_ES.UTF-8 OLD_FILES+=usr/share/locale/es_ES.UTF-8/LC_COLLATE OLD_FILES+=usr/share/locale/es_ES.UTF-8/LC_CTYPE OLD_FILES+=usr/share/locale/es_ES.UTF-8/LC_MESSAGES OLD_FILES+=usr/share/locale/es_ES.UTF-8/LC_MONETARY OLD_FILES+=usr/share/locale/es_ES.UTF-8/LC_NUMERIC OLD_FILES+=usr/share/locale/es_ES.UTF-8/LC_TIME OLD_DIRS+=usr/share/locale/es_MX.ISO8859-1 OLD_FILES+=usr/share/locale/es_MX.ISO8859-1/LC_COLLATE OLD_FILES+=usr/share/locale/es_MX.ISO8859-1/LC_CTYPE OLD_FILES+=usr/share/locale/es_MX.ISO8859-1/LC_MESSAGES OLD_FILES+=usr/share/locale/es_MX.ISO8859-1/LC_MONETARY OLD_FILES+=usr/share/locale/es_MX.ISO8859-1/LC_NUMERIC OLD_FILES+=usr/share/locale/es_MX.ISO8859-1/LC_TIME OLD_DIRS+=usr/share/locale/es_MX.UTF-8 OLD_FILES+=usr/share/locale/es_MX.UTF-8/LC_COLLATE OLD_FILES+=usr/share/locale/es_MX.UTF-8/LC_CTYPE OLD_FILES+=usr/share/locale/es_MX.UTF-8/LC_MESSAGES OLD_FILES+=usr/share/locale/es_MX.UTF-8/LC_MONETARY OLD_FILES+=usr/share/locale/es_MX.UTF-8/LC_NUMERIC OLD_FILES+=usr/share/locale/es_MX.UTF-8/LC_TIME OLD_DIRS+=usr/share/locale/et_EE.ISO8859-1 OLD_FILES+=usr/share/locale/et_EE.ISO8859-1/LC_COLLATE OLD_FILES+=usr/share/locale/et_EE.ISO8859-1/LC_CTYPE OLD_FILES+=usr/share/locale/et_EE.ISO8859-1/LC_MESSAGES OLD_FILES+=usr/share/locale/et_EE.ISO8859-1/LC_MONETARY OLD_FILES+=usr/share/locale/et_EE.ISO8859-1/LC_NUMERIC OLD_FILES+=usr/share/locale/et_EE.ISO8859-1/LC_TIME OLD_DIRS+=usr/share/locale/et_EE.ISO8859-15 OLD_FILES+=usr/share/locale/et_EE.ISO8859-15/LC_COLLATE OLD_FILES+=usr/share/locale/et_EE.ISO8859-15/LC_CTYPE OLD_FILES+=usr/share/locale/et_EE.ISO8859-15/LC_MESSAGES OLD_FILES+=usr/share/locale/et_EE.ISO8859-15/LC_MONETARY OLD_FILES+=usr/share/locale/et_EE.ISO8859-15/LC_NUMERIC OLD_FILES+=usr/share/locale/et_EE.ISO8859-15/LC_TIME OLD_DIRS+=usr/share/locale/et_EE.UTF-8 OLD_FILES+=usr/share/locale/et_EE.UTF-8/LC_COLLATE OLD_FILES+=usr/share/locale/et_EE.UTF-8/LC_CTYPE OLD_FILES+=usr/share/locale/et_EE.UTF-8/LC_MESSAGES OLD_FILES+=usr/share/locale/et_EE.UTF-8/LC_MONETARY OLD_FILES+=usr/share/locale/et_EE.UTF-8/LC_NUMERIC OLD_FILES+=usr/share/locale/et_EE.UTF-8/LC_TIME OLD_DIRS+=usr/share/locale/eu_ES.ISO8859-1 OLD_FILES+=usr/share/locale/eu_ES.ISO8859-1/LC_COLLATE OLD_FILES+=usr/share/locale/eu_ES.ISO8859-1/LC_CTYPE OLD_FILES+=usr/share/locale/eu_ES.ISO8859-1/LC_MESSAGES OLD_FILES+=usr/share/locale/eu_ES.ISO8859-1/LC_MONETARY OLD_FILES+=usr/share/locale/eu_ES.ISO8859-1/LC_NUMERIC OLD_FILES+=usr/share/locale/eu_ES.ISO8859-1/LC_TIME OLD_DIRS+=usr/share/locale/eu_ES.ISO8859-15 OLD_FILES+=usr/share/locale/eu_ES.ISO8859-15/LC_COLLATE OLD_FILES+=usr/share/locale/eu_ES.ISO8859-15/LC_CTYPE OLD_FILES+=usr/share/locale/eu_ES.ISO8859-15/LC_MESSAGES OLD_FILES+=usr/share/locale/eu_ES.ISO8859-15/LC_MONETARY OLD_FILES+=usr/share/locale/eu_ES.ISO8859-15/LC_NUMERIC OLD_FILES+=usr/share/locale/eu_ES.ISO8859-15/LC_TIME OLD_DIRS+=usr/share/locale/eu_ES.UTF-8 OLD_FILES+=usr/share/locale/eu_ES.UTF-8/LC_COLLATE OLD_FILES+=usr/share/locale/eu_ES.UTF-8/LC_CTYPE OLD_FILES+=usr/share/locale/eu_ES.UTF-8/LC_MESSAGES OLD_FILES+=usr/share/locale/eu_ES.UTF-8/LC_MONETARY OLD_FILES+=usr/share/locale/eu_ES.UTF-8/LC_NUMERIC OLD_FILES+=usr/share/locale/eu_ES.UTF-8/LC_TIME OLD_DIRS+=usr/share/locale/fi_FI.ISO8859-1 OLD_FILES+=usr/share/locale/fi_FI.ISO8859-1/LC_COLLATE 
OLD_FILES+=usr/share/locale/fi_FI.ISO8859-1/LC_CTYPE OLD_FILES+=usr/share/locale/fi_FI.ISO8859-1/LC_MESSAGES OLD_FILES+=usr/share/locale/fi_FI.ISO8859-1/LC_MONETARY OLD_FILES+=usr/share/locale/fi_FI.ISO8859-1/LC_NUMERIC OLD_FILES+=usr/share/locale/fi_FI.ISO8859-1/LC_TIME OLD_DIRS+=usr/share/locale/fi_FI.ISO8859-15 OLD_FILES+=usr/share/locale/fi_FI.ISO8859-15/LC_COLLATE OLD_FILES+=usr/share/locale/fi_FI.ISO8859-15/LC_CTYPE OLD_FILES+=usr/share/locale/fi_FI.ISO8859-15/LC_MESSAGES OLD_FILES+=usr/share/locale/fi_FI.ISO8859-15/LC_MONETARY OLD_FILES+=usr/share/locale/fi_FI.ISO8859-15/LC_NUMERIC OLD_FILES+=usr/share/locale/fi_FI.ISO8859-15/LC_TIME OLD_DIRS+=usr/share/locale/fi_FI.UTF-8 OLD_FILES+=usr/share/locale/fi_FI.UTF-8/LC_COLLATE OLD_FILES+=usr/share/locale/fi_FI.UTF-8/LC_CTYPE OLD_FILES+=usr/share/locale/fi_FI.UTF-8/LC_MESSAGES OLD_FILES+=usr/share/locale/fi_FI.UTF-8/LC_MONETARY OLD_FILES+=usr/share/locale/fi_FI.UTF-8/LC_NUMERIC OLD_FILES+=usr/share/locale/fi_FI.UTF-8/LC_TIME OLD_DIRS+=usr/share/locale/fr_BE.ISO8859-1 OLD_FILES+=usr/share/locale/fr_BE.ISO8859-1/LC_COLLATE OLD_FILES+=usr/share/locale/fr_BE.ISO8859-1/LC_CTYPE OLD_FILES+=usr/share/locale/fr_BE.ISO8859-1/LC_MESSAGES OLD_FILES+=usr/share/locale/fr_BE.ISO8859-1/LC_MONETARY OLD_FILES+=usr/share/locale/fr_BE.ISO8859-1/LC_NUMERIC OLD_FILES+=usr/share/locale/fr_BE.ISO8859-1/LC_TIME OLD_DIRS+=usr/share/locale/fr_BE.ISO8859-15 OLD_FILES+=usr/share/locale/fr_BE.ISO8859-15/LC_COLLATE OLD_FILES+=usr/share/locale/fr_BE.ISO8859-15/LC_CTYPE OLD_FILES+=usr/share/locale/fr_BE.ISO8859-15/LC_MESSAGES OLD_FILES+=usr/share/locale/fr_BE.ISO8859-15/LC_MONETARY OLD_FILES+=usr/share/locale/fr_BE.ISO8859-15/LC_NUMERIC OLD_FILES+=usr/share/locale/fr_BE.ISO8859-15/LC_TIME OLD_DIRS+=usr/share/locale/fr_BE.UTF-8 OLD_FILES+=usr/share/locale/fr_BE.UTF-8/LC_COLLATE OLD_FILES+=usr/share/locale/fr_BE.UTF-8/LC_CTYPE OLD_FILES+=usr/share/locale/fr_BE.UTF-8/LC_MESSAGES OLD_FILES+=usr/share/locale/fr_BE.UTF-8/LC_MONETARY OLD_FILES+=usr/share/locale/fr_BE.UTF-8/LC_NUMERIC OLD_FILES+=usr/share/locale/fr_BE.UTF-8/LC_TIME OLD_DIRS+=usr/share/locale/fr_CA.ISO8859-1 OLD_FILES+=usr/share/locale/fr_CA.ISO8859-1/LC_COLLATE OLD_FILES+=usr/share/locale/fr_CA.ISO8859-1/LC_CTYPE OLD_FILES+=usr/share/locale/fr_CA.ISO8859-1/LC_MESSAGES OLD_FILES+=usr/share/locale/fr_CA.ISO8859-1/LC_MONETARY OLD_FILES+=usr/share/locale/fr_CA.ISO8859-1/LC_NUMERIC OLD_FILES+=usr/share/locale/fr_CA.ISO8859-1/LC_TIME OLD_DIRS+=usr/share/locale/fr_CA.ISO8859-15 OLD_FILES+=usr/share/locale/fr_CA.ISO8859-15/LC_COLLATE OLD_FILES+=usr/share/locale/fr_CA.ISO8859-15/LC_CTYPE OLD_FILES+=usr/share/locale/fr_CA.ISO8859-15/LC_MESSAGES OLD_FILES+=usr/share/locale/fr_CA.ISO8859-15/LC_MONETARY OLD_FILES+=usr/share/locale/fr_CA.ISO8859-15/LC_NUMERIC OLD_FILES+=usr/share/locale/fr_CA.ISO8859-15/LC_TIME OLD_DIRS+=usr/share/locale/fr_CA.UTF-8 OLD_FILES+=usr/share/locale/fr_CA.UTF-8/LC_COLLATE OLD_FILES+=usr/share/locale/fr_CA.UTF-8/LC_CTYPE OLD_FILES+=usr/share/locale/fr_CA.UTF-8/LC_MESSAGES OLD_FILES+=usr/share/locale/fr_CA.UTF-8/LC_MONETARY OLD_FILES+=usr/share/locale/fr_CA.UTF-8/LC_NUMERIC OLD_FILES+=usr/share/locale/fr_CA.UTF-8/LC_TIME OLD_DIRS+=usr/share/locale/fr_CH.ISO8859-1 OLD_FILES+=usr/share/locale/fr_CH.ISO8859-1/LC_COLLATE OLD_FILES+=usr/share/locale/fr_CH.ISO8859-1/LC_CTYPE OLD_FILES+=usr/share/locale/fr_CH.ISO8859-1/LC_MESSAGES OLD_FILES+=usr/share/locale/fr_CH.ISO8859-1/LC_MONETARY OLD_FILES+=usr/share/locale/fr_CH.ISO8859-1/LC_NUMERIC OLD_FILES+=usr/share/locale/fr_CH.ISO8859-1/LC_TIME 
OLD_DIRS+=usr/share/locale/fr_CH.ISO8859-15
OLD_FILES+=usr/share/locale/fr_CH.ISO8859-15/LC_COLLATE
OLD_FILES+=usr/share/locale/fr_CH.ISO8859-15/LC_CTYPE
OLD_FILES+=usr/share/locale/fr_CH.ISO8859-15/LC_MESSAGES
OLD_FILES+=usr/share/locale/fr_CH.ISO8859-15/LC_MONETARY
OLD_FILES+=usr/share/locale/fr_CH.ISO8859-15/LC_NUMERIC
OLD_FILES+=usr/share/locale/fr_CH.ISO8859-15/LC_TIME
OLD_DIRS+=usr/share/locale/fr_CH.UTF-8
OLD_FILES+=usr/share/locale/fr_CH.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/fr_CH.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/fr_CH.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/fr_CH.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/fr_CH.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/fr_CH.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/fr_FR.ISO8859-1
OLD_FILES+=usr/share/locale/fr_FR.ISO8859-1/LC_COLLATE
OLD_FILES+=usr/share/locale/fr_FR.ISO8859-1/LC_CTYPE
OLD_FILES+=usr/share/locale/fr_FR.ISO8859-1/LC_MESSAGES
OLD_FILES+=usr/share/locale/fr_FR.ISO8859-1/LC_MONETARY
OLD_FILES+=usr/share/locale/fr_FR.ISO8859-1/LC_NUMERIC
OLD_FILES+=usr/share/locale/fr_FR.ISO8859-1/LC_TIME
OLD_DIRS+=usr/share/locale/fr_FR.ISO8859-15
OLD_FILES+=usr/share/locale/fr_FR.ISO8859-15/LC_COLLATE
OLD_FILES+=usr/share/locale/fr_FR.ISO8859-15/LC_CTYPE
OLD_FILES+=usr/share/locale/fr_FR.ISO8859-15/LC_MESSAGES
OLD_FILES+=usr/share/locale/fr_FR.ISO8859-15/LC_MONETARY
OLD_FILES+=usr/share/locale/fr_FR.ISO8859-15/LC_NUMERIC
OLD_FILES+=usr/share/locale/fr_FR.ISO8859-15/LC_TIME
OLD_DIRS+=usr/share/locale/fr_FR.UTF-8
OLD_FILES+=usr/share/locale/fr_FR.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/fr_FR.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/fr_FR.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/fr_FR.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/fr_FR.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/fr_FR.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/he_IL.UTF-8
OLD_FILES+=usr/share/locale/he_IL.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/he_IL.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/he_IL.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/he_IL.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/he_IL.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/he_IL.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/hi_IN.ISCII-DEV
OLD_FILES+=usr/share/locale/hi_IN.ISCII-DEV/LC_COLLATE
OLD_FILES+=usr/share/locale/hi_IN.ISCII-DEV/LC_CTYPE
OLD_FILES+=usr/share/locale/hi_IN.ISCII-DEV/LC_MESSAGES
OLD_FILES+=usr/share/locale/hi_IN.ISCII-DEV/LC_MONETARY
OLD_FILES+=usr/share/locale/hi_IN.ISCII-DEV/LC_NUMERIC
OLD_FILES+=usr/share/locale/hi_IN.ISCII-DEV/LC_TIME
OLD_DIRS+=usr/share/locale/hi_IN.UTF-8
OLD_FILES+=usr/share/locale/hi_IN.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/hi_IN.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/hi_IN.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/hi_IN.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/hi_IN.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/hi_IN.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/hr_HR.ISO8859-2
OLD_FILES+=usr/share/locale/hr_HR.ISO8859-2/LC_COLLATE
OLD_FILES+=usr/share/locale/hr_HR.ISO8859-2/LC_CTYPE
OLD_FILES+=usr/share/locale/hr_HR.ISO8859-2/LC_MESSAGES
OLD_FILES+=usr/share/locale/hr_HR.ISO8859-2/LC_MONETARY
OLD_FILES+=usr/share/locale/hr_HR.ISO8859-2/LC_NUMERIC
OLD_FILES+=usr/share/locale/hr_HR.ISO8859-2/LC_TIME
OLD_DIRS+=usr/share/locale/hr_HR.UTF-8
OLD_FILES+=usr/share/locale/hr_HR.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/hr_HR.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/hr_HR.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/hr_HR.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/hr_HR.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/hr_HR.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/hu_HU.ISO8859-2
OLD_FILES+=usr/share/locale/hu_HU.ISO8859-2/LC_COLLATE
OLD_FILES+=usr/share/locale/hu_HU.ISO8859-2/LC_CTYPE
OLD_FILES+=usr/share/locale/hu_HU.ISO8859-2/LC_MESSAGES
OLD_FILES+=usr/share/locale/hu_HU.ISO8859-2/LC_MONETARY
OLD_FILES+=usr/share/locale/hu_HU.ISO8859-2/LC_NUMERIC
OLD_FILES+=usr/share/locale/hu_HU.ISO8859-2/LC_TIME
OLD_DIRS+=usr/share/locale/hu_HU.UTF-8
OLD_FILES+=usr/share/locale/hu_HU.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/hu_HU.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/hu_HU.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/hu_HU.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/hu_HU.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/hu_HU.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/hy_AM.ARMSCII-8
OLD_FILES+=usr/share/locale/hy_AM.ARMSCII-8/LC_COLLATE
OLD_FILES+=usr/share/locale/hy_AM.ARMSCII-8/LC_CTYPE
OLD_FILES+=usr/share/locale/hy_AM.ARMSCII-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/hy_AM.ARMSCII-8/LC_MONETARY
OLD_FILES+=usr/share/locale/hy_AM.ARMSCII-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/hy_AM.ARMSCII-8/LC_TIME
OLD_DIRS+=usr/share/locale/hy_AM.UTF-8
OLD_FILES+=usr/share/locale/hy_AM.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/hy_AM.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/hy_AM.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/hy_AM.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/hy_AM.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/hy_AM.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/is_IS.ISO8859-1
OLD_FILES+=usr/share/locale/is_IS.ISO8859-1/LC_COLLATE
OLD_FILES+=usr/share/locale/is_IS.ISO8859-1/LC_CTYPE
OLD_FILES+=usr/share/locale/is_IS.ISO8859-1/LC_MESSAGES
OLD_FILES+=usr/share/locale/is_IS.ISO8859-1/LC_MONETARY
OLD_FILES+=usr/share/locale/is_IS.ISO8859-1/LC_NUMERIC
OLD_FILES+=usr/share/locale/is_IS.ISO8859-1/LC_TIME
OLD_DIRS+=usr/share/locale/is_IS.ISO8859-15
OLD_FILES+=usr/share/locale/is_IS.ISO8859-15/LC_COLLATE
OLD_FILES+=usr/share/locale/is_IS.ISO8859-15/LC_CTYPE
OLD_FILES+=usr/share/locale/is_IS.ISO8859-15/LC_MESSAGES
OLD_FILES+=usr/share/locale/is_IS.ISO8859-15/LC_MONETARY
OLD_FILES+=usr/share/locale/is_IS.ISO8859-15/LC_NUMERIC
OLD_FILES+=usr/share/locale/is_IS.ISO8859-15/LC_TIME
OLD_DIRS+=usr/share/locale/is_IS.UTF-8
OLD_FILES+=usr/share/locale/is_IS.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/is_IS.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/is_IS.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/is_IS.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/is_IS.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/is_IS.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/it_CH.ISO8859-1
OLD_FILES+=usr/share/locale/it_CH.ISO8859-1/LC_COLLATE
OLD_FILES+=usr/share/locale/it_CH.ISO8859-1/LC_CTYPE
OLD_FILES+=usr/share/locale/it_CH.ISO8859-1/LC_MESSAGES
OLD_FILES+=usr/share/locale/it_CH.ISO8859-1/LC_MONETARY
OLD_FILES+=usr/share/locale/it_CH.ISO8859-1/LC_NUMERIC
OLD_FILES+=usr/share/locale/it_CH.ISO8859-1/LC_TIME
OLD_DIRS+=usr/share/locale/it_CH.ISO8859-15
OLD_FILES+=usr/share/locale/it_CH.ISO8859-15/LC_COLLATE
OLD_FILES+=usr/share/locale/it_CH.ISO8859-15/LC_CTYPE
OLD_FILES+=usr/share/locale/it_CH.ISO8859-15/LC_MESSAGES
OLD_FILES+=usr/share/locale/it_CH.ISO8859-15/LC_MONETARY
OLD_FILES+=usr/share/locale/it_CH.ISO8859-15/LC_NUMERIC
OLD_FILES+=usr/share/locale/it_CH.ISO8859-15/LC_TIME
OLD_DIRS+=usr/share/locale/it_CH.UTF-8
OLD_FILES+=usr/share/locale/it_CH.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/it_CH.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/it_CH.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/it_CH.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/it_CH.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/it_CH.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/it_IT.ISO8859-1
OLD_FILES+=usr/share/locale/it_IT.ISO8859-1/LC_COLLATE
OLD_FILES+=usr/share/locale/it_IT.ISO8859-1/LC_CTYPE
OLD_FILES+=usr/share/locale/it_IT.ISO8859-1/LC_MESSAGES
OLD_FILES+=usr/share/locale/it_IT.ISO8859-1/LC_MONETARY
OLD_FILES+=usr/share/locale/it_IT.ISO8859-1/LC_NUMERIC
OLD_FILES+=usr/share/locale/it_IT.ISO8859-1/LC_TIME
OLD_DIRS+=usr/share/locale/it_IT.ISO8859-15
OLD_FILES+=usr/share/locale/it_IT.ISO8859-15/LC_COLLATE
OLD_FILES+=usr/share/locale/it_IT.ISO8859-15/LC_CTYPE
OLD_FILES+=usr/share/locale/it_IT.ISO8859-15/LC_MESSAGES
OLD_FILES+=usr/share/locale/it_IT.ISO8859-15/LC_MONETARY
OLD_FILES+=usr/share/locale/it_IT.ISO8859-15/LC_NUMERIC
OLD_FILES+=usr/share/locale/it_IT.ISO8859-15/LC_TIME
OLD_DIRS+=usr/share/locale/it_IT.UTF-8
OLD_FILES+=usr/share/locale/it_IT.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/it_IT.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/it_IT.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/it_IT.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/it_IT.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/it_IT.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/ja_JP.eucJP
OLD_FILES+=usr/share/locale/ja_JP.eucJP/LC_COLLATE
OLD_FILES+=usr/share/locale/ja_JP.eucJP/LC_CTYPE
OLD_FILES+=usr/share/locale/ja_JP.eucJP/LC_MESSAGES
OLD_FILES+=usr/share/locale/ja_JP.eucJP/LC_MONETARY
OLD_FILES+=usr/share/locale/ja_JP.eucJP/LC_NUMERIC
OLD_FILES+=usr/share/locale/ja_JP.eucJP/LC_TIME
OLD_DIRS+=usr/share/locale/ja_JP.SJIS
OLD_FILES+=usr/share/locale/ja_JP.SJIS/LC_COLLATE
OLD_FILES+=usr/share/locale/ja_JP.SJIS/LC_CTYPE
OLD_FILES+=usr/share/locale/ja_JP.SJIS/LC_MESSAGES
OLD_FILES+=usr/share/locale/ja_JP.SJIS/LC_MONETARY
OLD_FILES+=usr/share/locale/ja_JP.SJIS/LC_NUMERIC
OLD_FILES+=usr/share/locale/ja_JP.SJIS/LC_TIME
OLD_DIRS+=usr/share/locale/ja_JP.UTF-8
OLD_FILES+=usr/share/locale/ja_JP.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/ja_JP.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/ja_JP.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/ja_JP.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/ja_JP.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/ja_JP.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/kk_KZ.UTF-8
OLD_FILES+=usr/share/locale/kk_KZ.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/kk_KZ.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/kk_KZ.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/kk_KZ.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/kk_KZ.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/kk_KZ.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/ko_KR.CP949
OLD_FILES+=usr/share/locale/ko_KR.CP949/LC_COLLATE
OLD_FILES+=usr/share/locale/ko_KR.CP949/LC_CTYPE
OLD_FILES+=usr/share/locale/ko_KR.CP949/LC_MESSAGES
OLD_FILES+=usr/share/locale/ko_KR.CP949/LC_MONETARY
OLD_FILES+=usr/share/locale/ko_KR.CP949/LC_NUMERIC
OLD_FILES+=usr/share/locale/ko_KR.CP949/LC_TIME
OLD_DIRS+=usr/share/locale/ko_KR.eucKR
OLD_FILES+=usr/share/locale/ko_KR.eucKR/LC_COLLATE
OLD_FILES+=usr/share/locale/ko_KR.eucKR/LC_CTYPE
OLD_FILES+=usr/share/locale/ko_KR.eucKR/LC_MESSAGES
OLD_FILES+=usr/share/locale/ko_KR.eucKR/LC_MONETARY
OLD_FILES+=usr/share/locale/ko_KR.eucKR/LC_NUMERIC
OLD_FILES+=usr/share/locale/ko_KR.eucKR/LC_TIME
OLD_DIRS+=usr/share/locale/ko_KR.UTF-8
OLD_FILES+=usr/share/locale/ko_KR.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/ko_KR.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/ko_KR.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/ko_KR.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/ko_KR.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/ko_KR.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/lt_LT.ISO8859-13
OLD_FILES+=usr/share/locale/lt_LT.ISO8859-13/LC_COLLATE
OLD_FILES+=usr/share/locale/lt_LT.ISO8859-13/LC_CTYPE
OLD_FILES+=usr/share/locale/lt_LT.ISO8859-13/LC_MESSAGES
OLD_FILES+=usr/share/locale/lt_LT.ISO8859-13/LC_MONETARY
OLD_FILES+=usr/share/locale/lt_LT.ISO8859-13/LC_NUMERIC
OLD_FILES+=usr/share/locale/lt_LT.ISO8859-13/LC_TIME
OLD_DIRS+=usr/share/locale/lt_LT.UTF-8
OLD_FILES+=usr/share/locale/lt_LT.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/lt_LT.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/lt_LT.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/lt_LT.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/lt_LT.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/lt_LT.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/lv_LV.ISO8859-13
OLD_FILES+=usr/share/locale/lv_LV.ISO8859-13/LC_COLLATE
OLD_FILES+=usr/share/locale/lv_LV.ISO8859-13/LC_CTYPE
OLD_FILES+=usr/share/locale/lv_LV.ISO8859-13/LC_MESSAGES
OLD_FILES+=usr/share/locale/lv_LV.ISO8859-13/LC_MONETARY
OLD_FILES+=usr/share/locale/lv_LV.ISO8859-13/LC_NUMERIC
OLD_FILES+=usr/share/locale/lv_LV.ISO8859-13/LC_TIME
OLD_DIRS+=usr/share/locale/lv_LV.UTF-8
OLD_FILES+=usr/share/locale/lv_LV.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/lv_LV.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/lv_LV.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/lv_LV.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/lv_LV.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/lv_LV.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/mn_MN.UTF-8
OLD_FILES+=usr/share/locale/mn_MN.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/mn_MN.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/mn_MN.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/mn_MN.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/mn_MN.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/mn_MN.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/nb_NO.ISO8859-1
OLD_FILES+=usr/share/locale/nb_NO.ISO8859-1/LC_COLLATE
OLD_FILES+=usr/share/locale/nb_NO.ISO8859-1/LC_CTYPE
OLD_FILES+=usr/share/locale/nb_NO.ISO8859-1/LC_MESSAGES
OLD_FILES+=usr/share/locale/nb_NO.ISO8859-1/LC_MONETARY
OLD_FILES+=usr/share/locale/nb_NO.ISO8859-1/LC_NUMERIC
OLD_FILES+=usr/share/locale/nb_NO.ISO8859-1/LC_TIME
OLD_DIRS+=usr/share/locale/nb_NO.ISO8859-15
OLD_FILES+=usr/share/locale/nb_NO.ISO8859-15/LC_COLLATE
OLD_FILES+=usr/share/locale/nb_NO.ISO8859-15/LC_CTYPE
OLD_FILES+=usr/share/locale/nb_NO.ISO8859-15/LC_MESSAGES
OLD_FILES+=usr/share/locale/nb_NO.ISO8859-15/LC_MONETARY
OLD_FILES+=usr/share/locale/nb_NO.ISO8859-15/LC_NUMERIC
OLD_FILES+=usr/share/locale/nb_NO.ISO8859-15/LC_TIME
OLD_DIRS+=usr/share/locale/nb_NO.UTF-8
OLD_FILES+=usr/share/locale/nb_NO.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/nb_NO.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/nb_NO.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/nb_NO.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/nb_NO.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/nb_NO.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/nl_BE.ISO8859-1
OLD_FILES+=usr/share/locale/nl_BE.ISO8859-1/LC_COLLATE
OLD_FILES+=usr/share/locale/nl_BE.ISO8859-1/LC_CTYPE
OLD_FILES+=usr/share/locale/nl_BE.ISO8859-1/LC_MESSAGES
OLD_FILES+=usr/share/locale/nl_BE.ISO8859-1/LC_MONETARY
OLD_FILES+=usr/share/locale/nl_BE.ISO8859-1/LC_NUMERIC
OLD_FILES+=usr/share/locale/nl_BE.ISO8859-1/LC_TIME
OLD_DIRS+=usr/share/locale/nl_BE.ISO8859-15
OLD_FILES+=usr/share/locale/nl_BE.ISO8859-15/LC_COLLATE
OLD_FILES+=usr/share/locale/nl_BE.ISO8859-15/LC_CTYPE
OLD_FILES+=usr/share/locale/nl_BE.ISO8859-15/LC_MESSAGES
OLD_FILES+=usr/share/locale/nl_BE.ISO8859-15/LC_MONETARY
OLD_FILES+=usr/share/locale/nl_BE.ISO8859-15/LC_NUMERIC
OLD_FILES+=usr/share/locale/nl_BE.ISO8859-15/LC_TIME
OLD_DIRS+=usr/share/locale/nl_BE.UTF-8
OLD_FILES+=usr/share/locale/nl_BE.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/nl_BE.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/nl_BE.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/nl_BE.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/nl_BE.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/nl_BE.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/nl_NL.ISO8859-1
OLD_FILES+=usr/share/locale/nl_NL.ISO8859-1/LC_COLLATE
OLD_FILES+=usr/share/locale/nl_NL.ISO8859-1/LC_CTYPE
OLD_FILES+=usr/share/locale/nl_NL.ISO8859-1/LC_MESSAGES
OLD_FILES+=usr/share/locale/nl_NL.ISO8859-1/LC_MONETARY
OLD_FILES+=usr/share/locale/nl_NL.ISO8859-1/LC_NUMERIC
OLD_FILES+=usr/share/locale/nl_NL.ISO8859-1/LC_TIME
OLD_DIRS+=usr/share/locale/nl_NL.ISO8859-15
OLD_FILES+=usr/share/locale/nl_NL.ISO8859-15/LC_COLLATE
OLD_FILES+=usr/share/locale/nl_NL.ISO8859-15/LC_CTYPE
OLD_FILES+=usr/share/locale/nl_NL.ISO8859-15/LC_MESSAGES
OLD_FILES+=usr/share/locale/nl_NL.ISO8859-15/LC_MONETARY
OLD_FILES+=usr/share/locale/nl_NL.ISO8859-15/LC_NUMERIC
OLD_FILES+=usr/share/locale/nl_NL.ISO8859-15/LC_TIME
OLD_DIRS+=usr/share/locale/nl_NL.UTF-8
OLD_FILES+=usr/share/locale/nl_NL.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/nl_NL.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/nl_NL.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/nl_NL.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/nl_NL.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/nl_NL.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/nn_NO.ISO8859-1
OLD_FILES+=usr/share/locale/nn_NO.ISO8859-1/LC_COLLATE
OLD_FILES+=usr/share/locale/nn_NO.ISO8859-1/LC_CTYPE
OLD_FILES+=usr/share/locale/nn_NO.ISO8859-1/LC_MESSAGES
OLD_FILES+=usr/share/locale/nn_NO.ISO8859-1/LC_MONETARY
OLD_FILES+=usr/share/locale/nn_NO.ISO8859-1/LC_NUMERIC
OLD_FILES+=usr/share/locale/nn_NO.ISO8859-1/LC_TIME
OLD_DIRS+=usr/share/locale/nn_NO.ISO8859-15
OLD_FILES+=usr/share/locale/nn_NO.ISO8859-15/LC_COLLATE
OLD_FILES+=usr/share/locale/nn_NO.ISO8859-15/LC_CTYPE
OLD_FILES+=usr/share/locale/nn_NO.ISO8859-15/LC_MESSAGES
OLD_FILES+=usr/share/locale/nn_NO.ISO8859-15/LC_MONETARY
OLD_FILES+=usr/share/locale/nn_NO.ISO8859-15/LC_NUMERIC
OLD_FILES+=usr/share/locale/nn_NO.ISO8859-15/LC_TIME
OLD_DIRS+=usr/share/locale/nn_NO.UTF-8
OLD_FILES+=usr/share/locale/nn_NO.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/nn_NO.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/nn_NO.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/nn_NO.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/nn_NO.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/nn_NO.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/pl_PL.ISO8859-2
OLD_FILES+=usr/share/locale/pl_PL.ISO8859-2/LC_COLLATE
OLD_FILES+=usr/share/locale/pl_PL.ISO8859-2/LC_CTYPE
OLD_FILES+=usr/share/locale/pl_PL.ISO8859-2/LC_MESSAGES
OLD_FILES+=usr/share/locale/pl_PL.ISO8859-2/LC_MONETARY
OLD_FILES+=usr/share/locale/pl_PL.ISO8859-2/LC_NUMERIC
OLD_FILES+=usr/share/locale/pl_PL.ISO8859-2/LC_TIME
OLD_DIRS+=usr/share/locale/pl_PL.UTF-8
OLD_FILES+=usr/share/locale/pl_PL.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/pl_PL.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/pl_PL.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/pl_PL.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/pl_PL.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/pl_PL.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/pt_BR.ISO8859-1
OLD_FILES+=usr/share/locale/pt_BR.ISO8859-1/LC_COLLATE
OLD_FILES+=usr/share/locale/pt_BR.ISO8859-1/LC_CTYPE
OLD_FILES+=usr/share/locale/pt_BR.ISO8859-1/LC_MESSAGES
OLD_FILES+=usr/share/locale/pt_BR.ISO8859-1/LC_MONETARY
OLD_FILES+=usr/share/locale/pt_BR.ISO8859-1/LC_NUMERIC
OLD_FILES+=usr/share/locale/pt_BR.ISO8859-1/LC_TIME
OLD_DIRS+=usr/share/locale/pt_BR.UTF-8
OLD_FILES+=usr/share/locale/pt_BR.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/pt_BR.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/pt_BR.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/pt_BR.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/pt_BR.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/pt_BR.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/pt_PT.ISO8859-1
OLD_FILES+=usr/share/locale/pt_PT.ISO8859-1/LC_COLLATE
OLD_FILES+=usr/share/locale/pt_PT.ISO8859-1/LC_CTYPE
OLD_FILES+=usr/share/locale/pt_PT.ISO8859-1/LC_MESSAGES
OLD_FILES+=usr/share/locale/pt_PT.ISO8859-1/LC_MONETARY
OLD_FILES+=usr/share/locale/pt_PT.ISO8859-1/LC_NUMERIC
OLD_FILES+=usr/share/locale/pt_PT.ISO8859-1/LC_TIME
OLD_DIRS+=usr/share/locale/pt_PT.ISO8859-15
OLD_FILES+=usr/share/locale/pt_PT.ISO8859-15/LC_COLLATE
OLD_FILES+=usr/share/locale/pt_PT.ISO8859-15/LC_CTYPE
OLD_FILES+=usr/share/locale/pt_PT.ISO8859-15/LC_MESSAGES
OLD_FILES+=usr/share/locale/pt_PT.ISO8859-15/LC_MONETARY
OLD_FILES+=usr/share/locale/pt_PT.ISO8859-15/LC_NUMERIC
OLD_FILES+=usr/share/locale/pt_PT.ISO8859-15/LC_TIME
OLD_DIRS+=usr/share/locale/pt_PT.UTF-8
OLD_FILES+=usr/share/locale/pt_PT.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/pt_PT.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/pt_PT.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/pt_PT.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/pt_PT.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/pt_PT.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/ro_RO.ISO8859-2
OLD_FILES+=usr/share/locale/ro_RO.ISO8859-2/LC_COLLATE
OLD_FILES+=usr/share/locale/ro_RO.ISO8859-2/LC_CTYPE
OLD_FILES+=usr/share/locale/ro_RO.ISO8859-2/LC_MESSAGES
OLD_FILES+=usr/share/locale/ro_RO.ISO8859-2/LC_MONETARY
OLD_FILES+=usr/share/locale/ro_RO.ISO8859-2/LC_NUMERIC
OLD_FILES+=usr/share/locale/ro_RO.ISO8859-2/LC_TIME
OLD_DIRS+=usr/share/locale/ro_RO.UTF-8
OLD_FILES+=usr/share/locale/ro_RO.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/ro_RO.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/ro_RO.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/ro_RO.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/ro_RO.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/ro_RO.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/ru_RU.CP1251
OLD_FILES+=usr/share/locale/ru_RU.CP1251/LC_COLLATE
OLD_FILES+=usr/share/locale/ru_RU.CP1251/LC_CTYPE
OLD_FILES+=usr/share/locale/ru_RU.CP1251/LC_MESSAGES
OLD_FILES+=usr/share/locale/ru_RU.CP1251/LC_MONETARY
OLD_FILES+=usr/share/locale/ru_RU.CP1251/LC_NUMERIC
OLD_FILES+=usr/share/locale/ru_RU.CP1251/LC_TIME
OLD_DIRS+=usr/share/locale/ru_RU.CP866
OLD_FILES+=usr/share/locale/ru_RU.CP866/LC_COLLATE
OLD_FILES+=usr/share/locale/ru_RU.CP866/LC_CTYPE
OLD_FILES+=usr/share/locale/ru_RU.CP866/LC_MESSAGES
OLD_FILES+=usr/share/locale/ru_RU.CP866/LC_MONETARY
OLD_FILES+=usr/share/locale/ru_RU.CP866/LC_NUMERIC
OLD_FILES+=usr/share/locale/ru_RU.CP866/LC_TIME
OLD_DIRS+=usr/share/locale/ru_RU.ISO8859-5
OLD_FILES+=usr/share/locale/ru_RU.ISO8859-5/LC_COLLATE
OLD_FILES+=usr/share/locale/ru_RU.ISO8859-5/LC_CTYPE
OLD_FILES+=usr/share/locale/ru_RU.ISO8859-5/LC_MESSAGES
OLD_FILES+=usr/share/locale/ru_RU.ISO8859-5/LC_MONETARY
OLD_FILES+=usr/share/locale/ru_RU.ISO8859-5/LC_NUMERIC
OLD_FILES+=usr/share/locale/ru_RU.ISO8859-5/LC_TIME
OLD_DIRS+=usr/share/locale/ru_RU.KOI8-R
OLD_FILES+=usr/share/locale/ru_RU.KOI8-R/LC_COLLATE
OLD_FILES+=usr/share/locale/ru_RU.KOI8-R/LC_CTYPE
OLD_FILES+=usr/share/locale/ru_RU.KOI8-R/LC_MESSAGES
OLD_FILES+=usr/share/locale/ru_RU.KOI8-R/LC_MONETARY
OLD_FILES+=usr/share/locale/ru_RU.KOI8-R/LC_NUMERIC
OLD_FILES+=usr/share/locale/ru_RU.KOI8-R/LC_TIME
OLD_DIRS+=usr/share/locale/ru_RU.UTF-8
OLD_FILES+=usr/share/locale/ru_RU.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/ru_RU.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/ru_RU.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/ru_RU.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/ru_RU.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/ru_RU.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/se_FI.UTF-8
OLD_FILES+=usr/share/locale/se_FI.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/se_FI.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/se_FI.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/se_FI.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/se_FI.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/se_FI.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/se_NO.UTF-8
OLD_FILES+=usr/share/locale/se_NO.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/se_NO.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/se_NO.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/se_NO.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/se_NO.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/se_NO.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/sk_SK.ISO8859-2
OLD_FILES+=usr/share/locale/sk_SK.ISO8859-2/LC_COLLATE
OLD_FILES+=usr/share/locale/sk_SK.ISO8859-2/LC_CTYPE
OLD_FILES+=usr/share/locale/sk_SK.ISO8859-2/LC_MESSAGES
OLD_FILES+=usr/share/locale/sk_SK.ISO8859-2/LC_MONETARY
OLD_FILES+=usr/share/locale/sk_SK.ISO8859-2/LC_NUMERIC
OLD_FILES+=usr/share/locale/sk_SK.ISO8859-2/LC_TIME
OLD_DIRS+=usr/share/locale/sk_SK.UTF-8
OLD_FILES+=usr/share/locale/sk_SK.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/sk_SK.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/sk_SK.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/sk_SK.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/sk_SK.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/sk_SK.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/sl_SI.ISO8859-2
OLD_FILES+=usr/share/locale/sl_SI.ISO8859-2/LC_COLLATE
OLD_FILES+=usr/share/locale/sl_SI.ISO8859-2/LC_CTYPE
OLD_FILES+=usr/share/locale/sl_SI.ISO8859-2/LC_MESSAGES
OLD_FILES+=usr/share/locale/sl_SI.ISO8859-2/LC_MONETARY
OLD_FILES+=usr/share/locale/sl_SI.ISO8859-2/LC_NUMERIC
OLD_FILES+=usr/share/locale/sl_SI.ISO8859-2/LC_TIME
OLD_DIRS+=usr/share/locale/sl_SI.UTF-8
OLD_FILES+=usr/share/locale/sl_SI.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/sl_SI.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/sl_SI.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/sl_SI.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/sl_SI.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/sl_SI.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/sr_RS.ISO8859-5
OLD_FILES+=usr/share/locale/sr_RS.ISO8859-5/LC_COLLATE
OLD_FILES+=usr/share/locale/sr_RS.ISO8859-5/LC_CTYPE
OLD_FILES+=usr/share/locale/sr_RS.ISO8859-5/LC_MESSAGES
OLD_FILES+=usr/share/locale/sr_RS.ISO8859-5/LC_MONETARY
OLD_FILES+=usr/share/locale/sr_RS.ISO8859-5/LC_NUMERIC
OLD_FILES+=usr/share/locale/sr_RS.ISO8859-5/LC_TIME
OLD_DIRS+=usr/share/locale/sr_RS.UTF-8
OLD_FILES+=usr/share/locale/sr_RS.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/sr_RS.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/sr_RS.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/sr_RS.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/sr_RS.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/sr_RS.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/sr_RS.ISO8859-2
OLD_FILES+=usr/share/locale/sr_RS.ISO8859-2/LC_COLLATE
OLD_FILES+=usr/share/locale/sr_RS.ISO8859-2/LC_CTYPE
OLD_FILES+=usr/share/locale/sr_RS.ISO8859-2/LC_MESSAGES
OLD_FILES+=usr/share/locale/sr_RS.ISO8859-2/LC_MONETARY
OLD_FILES+=usr/share/locale/sr_RS.ISO8859-2/LC_NUMERIC
OLD_FILES+=usr/share/locale/sr_RS.ISO8859-2/LC_TIME
OLD_DIRS+=usr/share/locale/sr_RS.UTF-8@latin
OLD_FILES+=usr/share/locale/sr_RS.UTF-8@latin/LC_COLLATE
OLD_FILES+=usr/share/locale/sr_RS.UTF-8@latin/LC_CTYPE
OLD_FILES+=usr/share/locale/sr_RS.UTF-8@latin/LC_MESSAGES
OLD_FILES+=usr/share/locale/sr_RS.UTF-8@latin/LC_MONETARY
OLD_FILES+=usr/share/locale/sr_RS.UTF-8@latin/LC_NUMERIC
OLD_FILES+=usr/share/locale/sr_RS.UTF-8@latin/LC_TIME
OLD_DIRS+=usr/share/locale/sv_FI.ISO8859-1
OLD_FILES+=usr/share/locale/sv_FI.ISO8859-1/LC_COLLATE
OLD_FILES+=usr/share/locale/sv_FI.ISO8859-1/LC_CTYPE
OLD_FILES+=usr/share/locale/sv_FI.ISO8859-1/LC_MESSAGES
OLD_FILES+=usr/share/locale/sv_FI.ISO8859-1/LC_MONETARY
OLD_FILES+=usr/share/locale/sv_FI.ISO8859-1/LC_NUMERIC
OLD_FILES+=usr/share/locale/sv_FI.ISO8859-1/LC_TIME
OLD_DIRS+=usr/share/locale/sv_FI.ISO8859-15
OLD_FILES+=usr/share/locale/sv_FI.ISO8859-15/LC_COLLATE
OLD_FILES+=usr/share/locale/sv_FI.ISO8859-15/LC_CTYPE
OLD_FILES+=usr/share/locale/sv_FI.ISO8859-15/LC_MESSAGES
OLD_FILES+=usr/share/locale/sv_FI.ISO8859-15/LC_MONETARY
OLD_FILES+=usr/share/locale/sv_FI.ISO8859-15/LC_NUMERIC
OLD_FILES+=usr/share/locale/sv_FI.ISO8859-15/LC_TIME
OLD_DIRS+=usr/share/locale/sv_FI.UTF-8
OLD_FILES+=usr/share/locale/sv_FI.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/sv_FI.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/sv_FI.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/sv_FI.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/sv_FI.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/sv_FI.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/sv_SE.ISO8859-1
OLD_FILES+=usr/share/locale/sv_SE.ISO8859-1/LC_COLLATE
OLD_FILES+=usr/share/locale/sv_SE.ISO8859-1/LC_CTYPE
OLD_FILES+=usr/share/locale/sv_SE.ISO8859-1/LC_MESSAGES
OLD_FILES+=usr/share/locale/sv_SE.ISO8859-1/LC_MONETARY
OLD_FILES+=usr/share/locale/sv_SE.ISO8859-1/LC_NUMERIC
OLD_FILES+=usr/share/locale/sv_SE.ISO8859-1/LC_TIME
OLD_DIRS+=usr/share/locale/sv_SE.ISO8859-15
OLD_FILES+=usr/share/locale/sv_SE.ISO8859-15/LC_COLLATE
OLD_FILES+=usr/share/locale/sv_SE.ISO8859-15/LC_CTYPE
OLD_FILES+=usr/share/locale/sv_SE.ISO8859-15/LC_MESSAGES
OLD_FILES+=usr/share/locale/sv_SE.ISO8859-15/LC_MONETARY
OLD_FILES+=usr/share/locale/sv_SE.ISO8859-15/LC_NUMERIC
OLD_FILES+=usr/share/locale/sv_SE.ISO8859-15/LC_TIME
OLD_DIRS+=usr/share/locale/sv_SE.UTF-8
OLD_FILES+=usr/share/locale/sv_SE.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/sv_SE.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/sv_SE.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/sv_SE.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/sv_SE.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/sv_SE.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/tr_TR.ISO8859-9
OLD_FILES+=usr/share/locale/tr_TR.ISO8859-9/LC_COLLATE
OLD_FILES+=usr/share/locale/tr_TR.ISO8859-9/LC_CTYPE
OLD_FILES+=usr/share/locale/tr_TR.ISO8859-9/LC_MESSAGES
OLD_FILES+=usr/share/locale/tr_TR.ISO8859-9/LC_MONETARY
OLD_FILES+=usr/share/locale/tr_TR.ISO8859-9/LC_NUMERIC
OLD_FILES+=usr/share/locale/tr_TR.ISO8859-9/LC_TIME
OLD_DIRS+=usr/share/locale/tr_TR.UTF-8
OLD_FILES+=usr/share/locale/tr_TR.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/tr_TR.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/tr_TR.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/tr_TR.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/tr_TR.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/tr_TR.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/uk_UA.CP1251
OLD_FILES+=usr/share/locale/uk_UA.CP1251/LC_COLLATE
OLD_FILES+=usr/share/locale/uk_UA.CP1251/LC_CTYPE
OLD_FILES+=usr/share/locale/uk_UA.CP1251/LC_MESSAGES
OLD_FILES+=usr/share/locale/uk_UA.CP1251/LC_MONETARY
OLD_FILES+=usr/share/locale/uk_UA.CP1251/LC_NUMERIC
OLD_FILES+=usr/share/locale/uk_UA.CP1251/LC_TIME
OLD_DIRS+=usr/share/locale/uk_UA.ISO8859-5
OLD_FILES+=usr/share/locale/uk_UA.ISO8859-5/LC_COLLATE
OLD_FILES+=usr/share/locale/uk_UA.ISO8859-5/LC_CTYPE
OLD_FILES+=usr/share/locale/uk_UA.ISO8859-5/LC_MESSAGES
OLD_FILES+=usr/share/locale/uk_UA.ISO8859-5/LC_MONETARY
OLD_FILES+=usr/share/locale/uk_UA.ISO8859-5/LC_NUMERIC
OLD_FILES+=usr/share/locale/uk_UA.ISO8859-5/LC_TIME
OLD_DIRS+=usr/share/locale/uk_UA.KOI8-U
OLD_FILES+=usr/share/locale/uk_UA.KOI8-U/LC_COLLATE
OLD_FILES+=usr/share/locale/uk_UA.KOI8-U/LC_CTYPE
OLD_FILES+=usr/share/locale/uk_UA.KOI8-U/LC_MESSAGES
OLD_FILES+=usr/share/locale/uk_UA.KOI8-U/LC_MONETARY
OLD_FILES+=usr/share/locale/uk_UA.KOI8-U/LC_NUMERIC
OLD_FILES+=usr/share/locale/uk_UA.KOI8-U/LC_TIME
OLD_DIRS+=usr/share/locale/uk_UA.UTF-8
OLD_FILES+=usr/share/locale/uk_UA.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/uk_UA.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/uk_UA.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/uk_UA.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/uk_UA.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/uk_UA.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/zh_CN.eucCN
OLD_FILES+=usr/share/locale/zh_CN.eucCN/LC_COLLATE
OLD_FILES+=usr/share/locale/zh_CN.eucCN/LC_CTYPE
OLD_FILES+=usr/share/locale/zh_CN.eucCN/LC_MESSAGES
OLD_FILES+=usr/share/locale/zh_CN.eucCN/LC_MONETARY
OLD_FILES+=usr/share/locale/zh_CN.eucCN/LC_NUMERIC
OLD_FILES+=usr/share/locale/zh_CN.eucCN/LC_TIME
OLD_DIRS+=usr/share/locale/zh_CN.GB18030
OLD_FILES+=usr/share/locale/zh_CN.GB18030/LC_COLLATE
OLD_FILES+=usr/share/locale/zh_CN.GB18030/LC_CTYPE
OLD_FILES+=usr/share/locale/zh_CN.GB18030/LC_MESSAGES
OLD_FILES+=usr/share/locale/zh_CN.GB18030/LC_MONETARY
OLD_FILES+=usr/share/locale/zh_CN.GB18030/LC_NUMERIC
OLD_FILES+=usr/share/locale/zh_CN.GB18030/LC_TIME
OLD_DIRS+=usr/share/locale/zh_CN.GB2312
OLD_FILES+=usr/share/locale/zh_CN.GB2312/LC_COLLATE
OLD_FILES+=usr/share/locale/zh_CN.GB2312/LC_CTYPE
OLD_FILES+=usr/share/locale/zh_CN.GB2312/LC_MESSAGES
OLD_FILES+=usr/share/locale/zh_CN.GB2312/LC_MONETARY
OLD_FILES+=usr/share/locale/zh_CN.GB2312/LC_NUMERIC
OLD_FILES+=usr/share/locale/zh_CN.GB2312/LC_TIME
OLD_DIRS+=usr/share/locale/zh_CN.GBK
OLD_FILES+=usr/share/locale/zh_CN.GBK/LC_COLLATE
OLD_FILES+=usr/share/locale/zh_CN.GBK/LC_CTYPE
OLD_FILES+=usr/share/locale/zh_CN.GBK/LC_MESSAGES
OLD_FILES+=usr/share/locale/zh_CN.GBK/LC_MONETARY
OLD_FILES+=usr/share/locale/zh_CN.GBK/LC_NUMERIC
OLD_FILES+=usr/share/locale/zh_CN.GBK/LC_TIME
OLD_DIRS+=usr/share/locale/zh_CN.UTF-8
OLD_FILES+=usr/share/locale/zh_CN.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/zh_CN.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/zh_CN.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/zh_CN.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/zh_CN.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/zh_CN.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/zh_HK.UTF-8
OLD_FILES+=usr/share/locale/zh_HK.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/zh_HK.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/zh_HK.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/zh_HK.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/zh_HK.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/zh_HK.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/zh_TW.Big5
OLD_FILES+=usr/share/locale/zh_TW.Big5/LC_COLLATE
OLD_FILES+=usr/share/locale/zh_TW.Big5/LC_CTYPE
OLD_FILES+=usr/share/locale/zh_TW.Big5/LC_MESSAGES
OLD_FILES+=usr/share/locale/zh_TW.Big5/LC_MONETARY
OLD_FILES+=usr/share/locale/zh_TW.Big5/LC_NUMERIC
OLD_FILES+=usr/share/locale/zh_TW.Big5/LC_TIME
OLD_DIRS+=usr/share/locale/zh_TW.UTF-8
OLD_FILES+=usr/share/locale/zh_TW.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/zh_TW.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/zh_TW.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/zh_TW.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/zh_TW.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/zh_TW.UTF-8/LC_TIME
.endif
.if ${MK_LOCATE} == no
OLD_FILES+=etc/locate.rc
OLD_FILES+=etc/periodic/weekly/310.locate
OLD_FILES+=usr/bin/locate
OLD_FILES+=usr/libexec/locate.bigram
OLD_FILES+=usr/libexec/locate.code
OLD_FILES+=usr/libexec/locate.concatdb
OLD_FILES+=usr/libexec/locate.mklocatedb
OLD_FILES+=usr/libexec/locate.updatedb
OLD_FILES+=usr/share/man/man1/locate.1.gz
OLD_FILES+=usr/share/man/man8/locate.updatedb.8.gz
OLD_FILES+=usr/share/man/man8/updatedb.8.gz
.endif
.if ${MK_LPR} == no
OLD_FILES+=etc/hosts.lpd
OLD_FILES+=etc/printcap
OLD_FILES+=etc/newsyslog.conf.d/lpr.conf
OLD_FILES+=etc/rc.d/lpd
OLD_FILES+=etc/syslog.d/lpr.conf
OLD_FILES+=usr/bin/lp
OLD_FILES+=usr/bin/lpq
OLD_FILES+=usr/bin/lpr
OLD_FILES+=usr/bin/lprm
OLD_FILES+=usr/libexec/lpr/ru/bjc-240.sh.sample
OLD_FILES+=usr/libexec/lpr/ru/koi2alt
OLD_FILES+=usr/libexec/lpr/ru/koi2855
OLD_DIRS+=usr/libexec/lpr/ru
OLD_FILES+=usr/libexec/lpr/lpf
OLD_DIRS+=usr/libexec/lpr
OLD_FILES+=usr/sbin/chkprintcap
OLD_FILES+=usr/sbin/lpc
OLD_FILES+=usr/sbin/lpd
OLD_FILES+=usr/sbin/lptest
OLD_FILES+=usr/sbin/pac
OLD_FILES+=usr/share/doc/smm/07.lpd/paper.ascii.gz
OLD_DIRS+=usr/share/doc/smm/07.lpd
OLD_FILES+=usr/share/examples/etc/hosts.lpd
OLD_FILES+=usr/share/examples/etc/printcap
OLD_FILES+=usr/share/man/man1/lp.1.gz
OLD_FILES+=usr/share/man/man1/lpq.1.gz
OLD_FILES+=usr/share/man/man1/lpr.1.gz
OLD_FILES+=usr/share/man/man1/lprm.1.gz
OLD_FILES+=usr/share/man/man1/lptest.1.gz
OLD_FILES+=usr/share/man/man5/printcap.5.gz
OLD_FILES+=usr/share/man/man8/chkprintcap.8.gz
OLD_FILES+=usr/share/man/man8/lpc.8.gz
OLD_FILES+=usr/share/man/man8/lpd.8.gz
OLD_FILES+=usr/share/man/man8/pac.8.gz
.endif
.if ${MK_MAIL} == no
OLD_FILES+=etc/aliases
OLD_FILES+=etc/mail.rc
OLD_FILES+=etc/mail/aliases
OLD_FILES+=etc/mail/mailer.conf
OLD_FILES+=etc/periodic/daily/130.clean-msgs
+OLD_FILES+=etc/rc.d/othermta
OLD_FILES+=usr/bin/Mail
OLD_FILES+=usr/bin/biff
OLD_FILES+=usr/bin/from
OLD_FILES+=usr/bin/mail
OLD_FILES+=usr/bin/mailx
OLD_FILES+=usr/bin/msgs
OLD_FILES+=usr/libexec/comsat
OLD_FILES+=usr/share/examples/etc/mail.rc
OLD_FILES+=usr/share/man/man1/Mail.1.gz
OLD_FILES+=usr/share/man/man1/biff.1.gz
OLD_FILES+=usr/share/man/man1/from.1.gz
OLD_FILES+=usr/share/man/man1/mail.1.gz
OLD_FILES+=usr/share/man/man1/mailx.1.gz
OLD_FILES+=usr/share/man/man1/msgs.1.gz
OLD_FILES+=usr/share/man/man8/comsat.8.gz
OLD_FILES+=usr/share/misc/mail.help
OLD_FILES+=usr/share/misc/mail.tildehelp
.endif
.if ${MK_MAILWRAPPER} == no
OLD_FILES+=etc/mail/mailer.conf
# Don't remove, for no mailwrapper case:
# /usr/sbin/sendmail -> /usr/sbin/mailwrapper
# /usr/sbin/mailwrapper -> /usr/libexec/sendmail/sendmail
#OLD_FILES+=usr/sbin/mailwrapper
OLD_FILES+=usr/share/man/man8/mailwrapper.8.gz
.endif
.if ${MK_MAKE} == no
OLD_FILES+=usr/bin/make
OLD_FILES+=usr/share/man/man1/make.1.gz
OLD_FILES+=usr/share/mk/atf.test.mk
OLD_FILES+=usr/share/mk/bsd.README
OLD_FILES+=usr/share/mk/bsd.arch.inc.mk
OLD_FILES+=usr/share/mk/bsd.compiler.mk
OLD_FILES+=usr/share/mk/bsd.cpu.mk
OLD_FILES+=usr/share/mk/bsd.crunchgen.mk
OLD_FILES+=usr/share/mk/bsd.dep.mk
OLD_FILES+=usr/share/mk/bsd.doc.mk
OLD_FILES+=usr/share/mk/bsd.dtb.mk
OLD_FILES+=usr/share/mk/bsd.endian.mk
OLD_FILES+=usr/share/mk/bsd.files.mk
OLD_FILES+=usr/share/mk/bsd.incs.mk
OLD_FILES+=usr/share/mk/bsd.info.mk
OLD_FILES+=usr/share/mk/bsd.init.mk
OLD_FILES+=usr/share/mk/bsd.kmod.mk
OLD_FILES+=usr/share/mk/bsd.lib.mk
OLD_FILES+=usr/share/mk/bsd.libnames.mk
OLD_FILES+=usr/share/mk/bsd.links.mk
OLD_FILES+=usr/share/mk/bsd.man.mk
OLD_FILES+=usr/share/mk/bsd.mkopt.mk
OLD_FILES+=usr/share/mk/bsd.nls.mk
OLD_FILES+=usr/share/mk/bsd.obj.mk
OLD_FILES+=usr/share/mk/bsd.opts.mk
OLD_FILES+=usr/share/mk/bsd.own.mk
OLD_FILES+=usr/share/mk/bsd.port.mk
OLD_FILES+=usr/share/mk/bsd.port.options.mk
OLD_FILES+=usr/share/mk/bsd.port.post.mk
OLD_FILES+=usr/share/mk/bsd.port.pre.mk
OLD_FILES+=usr/share/mk/bsd.port.subdir.mk
OLD_FILES+=usr/share/mk/bsd.prog.mk
OLD_FILES+=usr/share/mk/bsd.progs.mk
OLD_FILES+=usr/share/mk/bsd.snmpmod.mk
OLD_FILES+=usr/share/mk/bsd.subdir.mk
OLD_FILES+=usr/share/mk/bsd.symver.mk
OLD_FILES+=usr/share/mk/bsd.sys.mk
OLD_FILES+=usr/share/mk/bsd.test.mk
OLD_FILES+=usr/share/mk/plain.test.mk
OLD_FILES+=usr/share/mk/suite.test.mk
OLD_FILES+=usr/share/mk/sys.mk
OLD_FILES+=usr/share/mk/tap.test.mk
OLD_FILES+=usr/share/mk/version_gen.awk
OLD_FILES+=usr/tests/usr.bin/bmake/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/archives/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd/Makefile.test
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd/expected.status.1
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd/expected.status.2
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd/expected.status.3
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd/expected.status.4
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd/expected.status.5
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd/expected.status.6
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd/expected.status.7
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd/expected.stderr.2
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd/expected.stderr.3
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd/expected.stderr.4
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd/expected.stderr.5
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd/expected.stderr.6
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd/expected.stderr.7
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd/expected.stdout.2
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd/expected.stdout.3
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd/expected.stdout.4
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd/expected.stdout.5
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd/expected.stdout.6
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd/expected.stdout.7
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd/legacy_test
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd/libtest.a
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd_mod/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd_mod/Makefile.test
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd_mod/expected.status.1
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd_mod/expected.status.2
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd_mod/expected.status.3
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd_mod/expected.status.4
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd_mod/expected.status.5
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd_mod/expected.status.6
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd_mod/expected.status.7
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd_mod/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd_mod/expected.stderr.2
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd_mod/expected.stderr.3
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd_mod/expected.stderr.4
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd_mod/expected.stderr.5
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd_mod/expected.stderr.6
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd_mod/expected.stderr.7
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd_mod/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd_mod/expected.stdout.2
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd_mod/expected.stdout.3
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd_mod/expected.stdout.4
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd_mod/expected.stdout.5
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd_mod/expected.stdout.6
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd_mod/expected.stdout.7
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd_mod/legacy_test
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd_mod/libtest.a
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_oldbsd/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_oldbsd/Makefile.test
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_oldbsd/expected.status.1
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_oldbsd/expected.status.2
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_oldbsd/expected.status.3
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_oldbsd/expected.status.4
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_oldbsd/expected.status.5
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_oldbsd/expected.status.6
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_oldbsd/expected.status.7
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_oldbsd/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_oldbsd/expected.stderr.2
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_oldbsd/expected.stderr.3
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_oldbsd/expected.stderr.4
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_oldbsd/expected.stderr.5
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_oldbsd/expected.stderr.6
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_oldbsd/expected.stderr.7
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_oldbsd/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_oldbsd/expected.stdout.2
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_oldbsd/expected.stdout.3
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_oldbsd/expected.stdout.4
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_oldbsd/expected.stdout.5
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_oldbsd/expected.stdout.6
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_oldbsd/expected.stdout.7
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_oldbsd/legacy_test
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_oldbsd/libtest.a
OLD_FILES+=usr/tests/usr.bin/bmake/basic/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/basic/t0/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/basic/t0/expected.status.1
OLD_FILES+=usr/tests/usr.bin/bmake/basic/t0/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/bmake/basic/t0/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/bmake/basic/t0/legacy_test
OLD_FILES+=usr/tests/usr.bin/bmake/basic/t1/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/basic/t1/Makefile.test
OLD_FILES+=usr/tests/usr.bin/bmake/basic/t1/expected.status.1
OLD_FILES+=usr/tests/usr.bin/bmake/basic/t1/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/bmake/basic/t1/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/bmake/basic/t1/legacy_test
OLD_FILES+=usr/tests/usr.bin/bmake/basic/t2/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/basic/t2/Makefile.test
OLD_FILES+=usr/tests/usr.bin/bmake/basic/t2/expected.status.1
OLD_FILES+=usr/tests/usr.bin/bmake/basic/t2/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/bmake/basic/t2/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/bmake/basic/t2/legacy_test
OLD_FILES+=usr/tests/usr.bin/bmake/basic/t3/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/basic/t3/expected.status.1
OLD_FILES+=usr/tests/usr.bin/bmake/basic/t3/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/bmake/basic/t3/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/bmake/basic/t3/legacy_test
OLD_FILES+=usr/tests/usr.bin/bmake/common.sh
OLD_FILES+=usr/tests/usr.bin/bmake/execution/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/execution/ellipsis/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/execution/ellipsis/Makefile.test
OLD_FILES+=usr/tests/usr.bin/bmake/execution/ellipsis/expected.status.1
OLD_FILES+=usr/tests/usr.bin/bmake/execution/ellipsis/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/bmake/execution/ellipsis/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/bmake/execution/ellipsis/legacy_test
OLD_FILES+=usr/tests/usr.bin/bmake/execution/empty/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/execution/empty/Makefile.test
OLD_FILES+=usr/tests/usr.bin/bmake/execution/empty/expected.status.1
OLD_FILES+=usr/tests/usr.bin/bmake/execution/empty/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/bmake/execution/empty/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/bmake/execution/empty/legacy_test
OLD_FILES+=usr/tests/usr.bin/bmake/execution/joberr/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/execution/joberr/Makefile.test
OLD_FILES+=usr/tests/usr.bin/bmake/execution/joberr/expected.status.1
OLD_FILES+=usr/tests/usr.bin/bmake/execution/joberr/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/bmake/execution/joberr/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/bmake/execution/joberr/legacy_test
OLD_FILES+=usr/tests/usr.bin/bmake/execution/plus/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/execution/plus/Makefile.test
OLD_FILES+=usr/tests/usr.bin/bmake/execution/plus/expected.status.1
OLD_FILES+=usr/tests/usr.bin/bmake/execution/plus/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/bmake/execution/plus/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/bmake/execution/plus/legacy_test
OLD_FILES+=usr/tests/usr.bin/bmake/shell/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/shell/builtin/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/shell/builtin/Makefile.test
OLD_FILES+=usr/tests/usr.bin/bmake/shell/builtin/expected.status.1
OLD_FILES+=usr/tests/usr.bin/bmake/shell/builtin/expected.status.2
OLD_FILES+=usr/tests/usr.bin/bmake/shell/builtin/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/bmake/shell/builtin/expected.stderr.2
OLD_FILES+=usr/tests/usr.bin/bmake/shell/builtin/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/bmake/shell/builtin/expected.stdout.2
OLD_FILES+=usr/tests/usr.bin/bmake/shell/builtin/legacy_test
OLD_FILES+=usr/tests/usr.bin/bmake/shell/builtin/sh
OLD_FILES+=usr/tests/usr.bin/bmake/shell/meta/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/shell/meta/Makefile.test
OLD_FILES+=usr/tests/usr.bin/bmake/shell/meta/expected.status.1
OLD_FILES+=usr/tests/usr.bin/bmake/shell/meta/expected.status.2
OLD_FILES+=usr/tests/usr.bin/bmake/shell/meta/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/bmake/shell/meta/expected.stderr.2
OLD_FILES+=usr/tests/usr.bin/bmake/shell/meta/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/bmake/shell/meta/expected.stdout.2
OLD_FILES+=usr/tests/usr.bin/bmake/shell/meta/legacy_test
OLD_FILES+=usr/tests/usr.bin/bmake/shell/meta/sh
OLD_FILES+=usr/tests/usr.bin/bmake/shell/path/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/shell/path/Makefile.test
OLD_FILES+=usr/tests/usr.bin/bmake/shell/path/expected.status.1
OLD_FILES+=usr/tests/usr.bin/bmake/shell/path/expected.status.2
OLD_FILES+=usr/tests/usr.bin/bmake/shell/path/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/bmake/shell/path/expected.stderr.2
OLD_FILES+=usr/tests/usr.bin/bmake/shell/path/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/bmake/shell/path/expected.stdout.2
OLD_FILES+=usr/tests/usr.bin/bmake/shell/path/legacy_test
OLD_FILES+=usr/tests/usr.bin/bmake/shell/path/sh
OLD_FILES+=usr/tests/usr.bin/bmake/shell/path_select/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/shell/path_select/Makefile.test
OLD_FILES+=usr/tests/usr.bin/bmake/shell/path_select/expected.status.1
OLD_FILES+=usr/tests/usr.bin/bmake/shell/path_select/expected.status.2
OLD_FILES+=usr/tests/usr.bin/bmake/shell/path_select/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/bmake/shell/path_select/expected.stderr.2
OLD_FILES+=usr/tests/usr.bin/bmake/shell/path_select/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/bmake/shell/path_select/expected.stdout.2
OLD_FILES+=usr/tests/usr.bin/bmake/shell/path_select/legacy_test
OLD_FILES+=usr/tests/usr.bin/bmake/shell/path_select/shell
OLD_FILES+=usr/tests/usr.bin/bmake/shell/replace/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/shell/replace/Makefile.test
OLD_FILES+=usr/tests/usr.bin/bmake/shell/replace/expected.status.1
OLD_FILES+=usr/tests/usr.bin/bmake/shell/replace/expected.status.2
OLD_FILES+=usr/tests/usr.bin/bmake/shell/replace/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/bmake/shell/replace/expected.stderr.2
OLD_FILES+=usr/tests/usr.bin/bmake/shell/replace/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/bmake/shell/replace/expected.stdout.2
OLD_FILES+=usr/tests/usr.bin/bmake/shell/replace/legacy_test
OLD_FILES+=usr/tests/usr.bin/bmake/shell/replace/shell
OLD_FILES+=usr/tests/usr.bin/bmake/shell/select/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/shell/select/Makefile.test
OLD_FILES+=usr/tests/usr.bin/bmake/shell/select/expected.status.1
OLD_FILES+=usr/tests/usr.bin/bmake/shell/select/expected.status.2
OLD_FILES+=usr/tests/usr.bin/bmake/shell/select/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/bmake/shell/select/expected.stderr.2
OLD_FILES+=usr/tests/usr.bin/bmake/shell/select/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/bmake/shell/select/expected.stdout.2
OLD_FILES+=usr/tests/usr.bin/bmake/shell/select/legacy_test
OLD_FILES+=usr/tests/usr.bin/bmake/suffixes/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/suffixes/basic/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/suffixes/basic/Makefile.test
OLD_FILES+=usr/tests/usr.bin/bmake/suffixes/basic/TEST1.a
OLD_FILES+=usr/tests/usr.bin/bmake/suffixes/basic/expected.status.1
OLD_FILES+=usr/tests/usr.bin/bmake/suffixes/basic/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/bmake/suffixes/basic/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/bmake/suffixes/basic/legacy_test
OLD_FILES+=usr/tests/usr.bin/bmake/suffixes/src_wild1/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/suffixes/src_wild1/Makefile.test
OLD_FILES+=usr/tests/usr.bin/bmake/suffixes/src_wild1/TEST1.a
OLD_FILES+=usr/tests/usr.bin/bmake/suffixes/src_wild1/TEST2.a
OLD_FILES+=usr/tests/usr.bin/bmake/suffixes/src_wild1/expected.status.1
OLD_FILES+=usr/tests/usr.bin/bmake/suffixes/src_wild1/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/bmake/suffixes/src_wild1/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/bmake/suffixes/src_wild1/legacy_test
OLD_FILES+=usr/tests/usr.bin/bmake/suffixes/src_wild2/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/suffixes/src_wild2/Makefile.test
OLD_FILES+=usr/tests/usr.bin/bmake/suffixes/src_wild2/TEST1.a
OLD_FILES+=usr/tests/usr.bin/bmake/suffixes/src_wild2/TEST2.a
OLD_FILES+=usr/tests/usr.bin/bmake/suffixes/src_wild2/expected.status.1
OLD_FILES+=usr/tests/usr.bin/bmake/suffixes/src_wild2/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/bmake/suffixes/src_wild2/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/bmake/suffixes/src_wild2/legacy_test
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/directive-t0/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/directive-t0/Makefile.test
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/directive-t0/expected.status.1
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/directive-t0/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/directive-t0/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/directive-t0/legacy_test
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/enl/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/enl/Makefile.test
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/enl/expected.status.1
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/enl/expected.status.2
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/enl/expected.status.3
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/enl/expected.status.4
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/enl/expected.status.5
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/enl/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/enl/expected.stderr.2
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/enl/expected.stderr.3
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/enl/expected.stderr.4
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/enl/expected.stderr.5
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/enl/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/enl/expected.stdout.2
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/enl/expected.stdout.3
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/enl/expected.stdout.4
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/enl/expected.stdout.5
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/enl/legacy_test
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/funny-targets/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/funny-targets/Makefile.test
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/funny-targets/expected.status.1
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/funny-targets/expected.status.2
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/funny-targets/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/funny-targets/expected.stderr.2
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/funny-targets/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/funny-targets/expected.stdout.2
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/funny-targets/legacy_test
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/semi/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/semi/Makefile.test
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/semi/expected.status.1
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/semi/expected.status.2
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/semi/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/semi/expected.stderr.2
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/semi/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/semi/expected.stdout.2
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/semi/legacy_test
OLD_FILES+=usr/tests/usr.bin/bmake/sysmk/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/sysmk/t0/2/1/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/sysmk/t0/2/1/Makefile.test
OLD_FILES+=usr/tests/usr.bin/bmake/sysmk/t0/2/1/expected.status.1
OLD_FILES+=usr/tests/usr.bin/bmake/sysmk/t0/2/1/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/bmake/sysmk/t0/2/1/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/bmake/sysmk/t0/2/1/legacy_test
OLD_FILES+=usr/tests/usr.bin/bmake/sysmk/t0/2/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/sysmk/t0/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/sysmk/t0/mk/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/sysmk/t0/mk/sys.mk
OLD_FILES+=usr/tests/usr.bin/bmake/sysmk/t1/2/1/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/sysmk/t1/2/1/cleanup
OLD_FILES+=usr/tests/usr.bin/bmake/sysmk/t1/2/1/expected.status.1
OLD_FILES+=usr/tests/usr.bin/bmake/sysmk/t1/2/1/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/bmake/sysmk/t1/2/1/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/bmake/sysmk/t1/2/1/legacy_test
OLD_FILES+=usr/tests/usr.bin/bmake/sysmk/t1/2/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/sysmk/t1/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/sysmk/t1/mk/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/sysmk/t1/mk/sys.mk
OLD_FILES+=usr/tests/usr.bin/bmake/sysmk/t2/2/1/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/sysmk/t2/2/1/cleanup
OLD_FILES+=usr/tests/usr.bin/bmake/sysmk/t2/2/1/expected.status.1
OLD_FILES+=usr/tests/usr.bin/bmake/sysmk/t2/2/1/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/bmake/sysmk/t2/2/1/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/bmake/sysmk/t2/2/1/legacy_test
OLD_FILES+=usr/tests/usr.bin/bmake/sysmk/t2/2/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/sysmk/t2/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/sysmk/t2/mk/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/sysmk/t2/mk/sys.mk
OLD_FILES+=usr/tests/usr.bin/bmake/test-new.mk
OLD_FILES+=usr/tests/usr.bin/bmake/variables/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/variables/modifier_M/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/variables/modifier_M/Makefile.test
OLD_FILES+=usr/tests/usr.bin/bmake/variables/modifier_M/expected.status.1
OLD_FILES+=usr/tests/usr.bin/bmake/variables/modifier_M/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/bmake/variables/modifier_M/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/bmake/variables/modifier_M/legacy_test
OLD_FILES+=usr/tests/usr.bin/bmake/variables/modifier_t/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/variables/modifier_t/Makefile.test
OLD_FILES+=usr/tests/usr.bin/bmake/variables/modifier_t/expected.status.1
OLD_FILES+=usr/tests/usr.bin/bmake/variables/modifier_t/expected.status.2
OLD_FILES+=usr/tests/usr.bin/bmake/variables/modifier_t/expected.status.3
OLD_FILES+=usr/tests/usr.bin/bmake/variables/modifier_t/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/bmake/variables/modifier_t/expected.stderr.2
OLD_FILES+=usr/tests/usr.bin/bmake/variables/modifier_t/expected.stderr.3
OLD_FILES+=usr/tests/usr.bin/bmake/variables/modifier_t/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/bmake/variables/modifier_t/expected.stdout.2
OLD_FILES+=usr/tests/usr.bin/bmake/variables/modifier_t/expected.stdout.3
OLD_FILES+=usr/tests/usr.bin/bmake/variables/modifier_t/legacy_test
OLD_FILES+=usr/tests/usr.bin/bmake/variables/opt_V/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/variables/opt_V/Makefile.test
OLD_FILES+=usr/tests/usr.bin/bmake/variables/opt_V/expected.status.1
OLD_FILES+=usr/tests/usr.bin/bmake/variables/opt_V/expected.status.2
OLD_FILES+=usr/tests/usr.bin/bmake/variables/opt_V/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/bmake/variables/opt_V/expected.stderr.2
OLD_FILES+=usr/tests/usr.bin/bmake/variables/opt_V/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/bmake/variables/opt_V/expected.stdout.2
OLD_FILES+=usr/tests/usr.bin/bmake/variables/opt_V/legacy_test
OLD_FILES+=usr/tests/usr.bin/bmake/variables/t0/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/variables/t0/Makefile.test
OLD_FILES+=usr/tests/usr.bin/bmake/variables/t0/expected.status.1
OLD_FILES+=usr/tests/usr.bin/bmake/variables/t0/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/bmake/variables/t0/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/bmake/variables/t0/legacy_test
.endif
.if ${MK_MAN} == no
MAN_FILES!=find ${DESTDIR}/usr/share/man ${DESTDIR}/usr/share/openssl/man -type f | sed -e 's,^${DESTDIR}/,,'; echo
OLD_FILES+=${MAN_FILES}
MAN_DIRS!=find ${DESTDIR}/usr/share/man ${DESTDIR}/usr/share/openssl/man -type d | sed -e 's,^${DESTDIR}/,,'; echo
OLD_DIRS+=${MAN_DIRS}
.endif
.if ${MK_MAN_UTILS} == no
OLD_FILES+=etc/periodic/weekly/320.whatis
OLD_FILES+=usr/bin/apropos
OLD_FILES+=usr/bin/makewhatis
OLD_FILES+=usr/bin/man
OLD_FILES+=usr/bin/manpath
OLD_FILES+=usr/bin/whatis
OLD_FILES+=usr/libexec/makewhatis.local
OLD_FILES+=usr/sbin/manctl
OLD_FILES+=usr/share/man/man1/apropos.1.gz
OLD_FILES+=usr/share/man/man1/makewhatis.1.gz
OLD_FILES+=usr/share/man/man1/man.1.gz
OLD_FILES+=usr/share/man/man1/manpath.1.gz
OLD_FILES+=usr/share/man/man1/whatis.1.gz
OLD_FILES+=usr/share/man/man5/man.conf.5.gz
OLD_FILES+=usr/share/man/man8/makewhatis.local.8.gz
OLD_FILES+=usr/share/man/man8/manctl.8.gz
OLD_FILES+=usr/share/man/whatis
OLD_FILES+=usr/share/openssl/man/whatis
.endif
.if ${MK_NDIS} == no
OLD_FILES+=usr/sbin/ndiscvt
OLD_FILES+=usr/sbin/ndisgen
OLD_FILES+=usr/share/man/man8/ndiscvt.8.gz
OLD_FILES+=usr/share/man/man8/ndisgen.8.gz
OLD_FILES+=usr/share/misc/windrv_stub.c
.endif
.if ${MK_NETCAT} == no
OLD_FILES+=rescue/nc
OLD_FILES+=usr/bin/nc
OLD_FILES+=usr/share/man/man1/nc.1.gz
.endif
.if ${MK_NETGRAPH} == no
OLD_FILES+=usr/include/netgraph.h
OLD_FILES+=usr/lib/libnetgraph.a
OLD_FILES+=usr/lib/libnetgraph.so
OLD_LIBS+=usr/lib/libnetgraph.so.4
OLD_FILES+=usr/lib/libnetgraph_p.a
OLD_FILES+=usr/lib32/libnetgraph.a
OLD_FILES+=usr/lib32/libnetgraph.so
OLD_LIBS+=usr/lib32/libnetgraph.so.4
OLD_FILES+=usr/lib32/libnetgraph_p.a
OLD_FILES+=usr/libexec/pppoed
OLD_FILES+=usr/sbin/flowctl
OLD_FILES+=usr/sbin/lmcconfig
OLD_FILES+=usr/sbin/ngctl
OLD_FILES+=usr/sbin/nghook
OLD_FILES+=usr/share/man/man3/NgAllocRecvAsciiMsg.3.gz
OLD_FILES+=usr/share/man/man3/NgAllocRecvData.3.gz
OLD_FILES+=usr/share/man/man3/NgAllocRecvMsg.3.gz
OLD_FILES+=usr/share/man/man3/NgMkSockNode.3.gz
OLD_FILES+=usr/share/man/man3/NgNameNode.3.gz
OLD_FILES+=usr/share/man/man3/NgRecvAsciiMsg.3.gz
OLD_FILES+=usr/share/man/man3/NgRecvData.3.gz
OLD_FILES+=usr/share/man/man3/NgRecvMsg.3.gz
OLD_FILES+=usr/share/man/man3/NgSendAsciiMsg.3.gz
OLD_FILES+=usr/share/man/man3/NgSendData.3.gz
OLD_FILES+=usr/share/man/man3/NgSendMsg.3.gz
OLD_FILES+=usr/share/man/man3/NgSendReplyMsg.3.gz OLD_FILES+=usr/share/man/man3/NgSetDebug.3.gz OLD_FILES+=usr/share/man/man3/NgSetErrLog.3.gz OLD_FILES+=usr/share/man/man3/netgraph.3.gz OLD_FILES+=usr/share/man/man8/flowctl.8.gz OLD_FILES+=usr/share/man/man8/lmcconfig.8.gz OLD_FILES+=usr/share/man/man8/ngctl.8.gz OLD_FILES+=usr/share/man/man8/nghook.8.gz OLD_FILES+=usr/share/man/man8/pppoed.8.gz .endif +.if ${MK_IPFW} == no || ${MK_NETGRAPH} == no +OLD_FILES+=etc/rc.d/ipfw_netflow +.endif + .if ${MK_NETGRAPH_SUPPORT} == no OLD_FILES+=usr/include/bsnmp/snmp_netgraph.h OLD_FILES+=usr/lib/snmp_netgraph.so OLD_LIBS+=usr/lib/snmp_netgraph.so.6 OLD_FILES+=usr/share/man/man3/snmp_netgraph.3.gz OLD_FILES+=usr/share/snmp/defs/netgraph_tree.def OLD_FILES+=usr/share/snmp/mibs/BEGEMOT-NETGRAPH.txt .endif .if ${MK_NIS} == no OLD_FILES+=etc/rc.d/ypbind OLD_FILES+=etc/rc.d/ypldap OLD_FILES+=etc/rc.d/yppasswdd OLD_FILES+=etc/rc.d/ypserv OLD_FILES+=etc/rc.d/ypset OLD_FILES+=etc/rc.d/ypupdated OLD_FILES+=etc/rc.d/ypxfrd OLD_FILES+=usr/bin/ypcat OLD_FILES+=usr/bin/ypchfn OLD_FILES+=usr/bin/ypchpass OLD_FILES+=usr/bin/ypchsh OLD_FILES+=usr/bin/ypmatch OLD_FILES+=usr/bin/yppasswd OLD_FILES+=usr/bin/ypwhich OLD_FILES+=usr/include/ypclnt.h OLD_FILES+=usr/lib/libypclnt.a OLD_FILES+=usr/lib/libypclnt.so OLD_LIBS+=usr/lib/libypclnt.so.4 OLD_FILES+=usr/lib/libypclnt_p.a .if ${TARGET_ARCH} == "amd64" || ${TARGET_ARCH} == "powerpc64" OLD_FILES+=usr/lib32/libypclnt.a OLD_FILES+=usr/lib32/libypclnt.so OLD_LIBS+=usr/lib32/libypclnt.so.4 OLD_FILES+=usr/lib32/libypclnt_p.a .endif OLD_FILES+=usr/libexec/mknetid OLD_FILES+=usr/libexec/yppwupdate OLD_FILES+=usr/libexec/ypxfr OLD_FILES+=usr/sbin/rpc.yppasswdd OLD_FILES+=usr/sbin/rpc.ypupdated OLD_FILES+=usr/sbin/rpc.ypxfrd OLD_FILES+=usr/sbin/yp_mkdb OLD_FILES+=usr/sbin/ypbind OLD_FILES+=usr/sbin/ypinit OLD_FILES+=usr/sbin/ypldap OLD_FILES+=usr/sbin/yppoll OLD_FILES+=usr/sbin/yppush OLD_FILES+=usr/sbin/ypserv OLD_FILES+=usr/sbin/ypset OLD_FILES+=usr/share/man/man1/ypcat.1.gz OLD_FILES+=usr/share/man/man1/ypchfn.1.gz OLD_FILES+=usr/share/man/man1/ypchpass.1.gz OLD_FILES+=usr/share/man/man1/ypchsh.1.gz OLD_FILES+=usr/share/man/man1/ypmatch.1.gz OLD_FILES+=usr/share/man/man1/yppasswd.1.gz OLD_FILES+=usr/share/man/man1/ypwhich.1.gz OLD_FILES+=usr/share/man/man5/netid.5.gz OLD_FILES+=usr/share/man/man5/ypldap.conf.5.gz OLD_FILES+=usr/share/man/man8/mknetid.8.gz OLD_FILES+=usr/share/man/man8/rpc.yppasswdd.8.gz OLD_FILES+=usr/share/man/man8/rpc.ypxfrd.8.gz OLD_FILES+=usr/share/man/man8/NIS.8.gz OLD_FILES+=usr/share/man/man8/YP.8.gz OLD_FILES+=usr/share/man/man8/yp.8.gz OLD_FILES+=usr/share/man/man8/nis.8.gz OLD_FILES+=usr/share/man/man8/yp_mkdb.8.gz OLD_FILES+=usr/share/man/man8/ypbind.8.gz OLD_FILES+=usr/share/man/man8/ypinit.8.gz OLD_FILES+=usr/share/man/man8/ypldap.8.gz OLD_FILES+=usr/share/man/man8/yppoll.8.gz OLD_FILES+=usr/share/man/man8/yppush.8.gz OLD_FILES+=usr/share/man/man8/ypserv.8.gz OLD_FILES+=usr/share/man/man8/ypset.8.gz OLD_FILES+=usr/share/man/man8/ypxfr.8.gz OLD_FILES+=var/yp/Makefile OLD_FILES+=var/yp/Makefile.dist OLD_DIRS+=var/yp .endif .if ${MK_NLS} == no OLD_DIRS+=usr/share/nls/ OLD_DIRS+=usr/share/nls/C OLD_FILES+=usr/share/mk/bsd.nls.mk OLD_FILES+=usr/share/nls/C/ee.cat OLD_DIRS+=usr/share/nls/af_ZA.ISO8859-1 OLD_DIRS+=usr/share/nls/af_ZA.ISO8859-15 OLD_DIRS+=usr/share/nls/af_ZA.UTF-8 OLD_DIRS+=usr/share/nls/am_ET.UTF-8 OLD_DIRS+=usr/share/nls/be_BY.CP1131 OLD_DIRS+=usr/share/nls/be_BY.CP1251 OLD_DIRS+=usr/share/nls/be_BY.ISO8859-5 
OLD_DIRS+=usr/share/nls/be_BY.UTF-8 OLD_FILES+=usr/share/nls/be_BY.UTF-8/libc.cat OLD_DIRS+=usr/share/nls/bg_BG.CP1251 OLD_DIRS+=usr/share/nls/bg_BG.UTF-8 OLD_DIRS+=usr/share/nls/ca_ES.ISO8859-1 OLD_FILES+=usr/share/nls/ca_ES.ISO8859-1/libc.cat OLD_DIRS+=usr/share/nls/ca_ES.ISO8859-15 OLD_DIRS+=usr/share/nls/ca_ES.UTF-8 OLD_DIRS+=usr/share/nls/cs_CZ.ISO8859-2 OLD_DIRS+=usr/share/nls/cs_CZ.UTF-8 OLD_DIRS+=usr/share/nls/da_DK.ISO8859-1 OLD_DIRS+=usr/share/nls/da_DK.ISO8859-15 OLD_DIRS+=usr/share/nls/da_DK.UTF-8 OLD_DIRS+=usr/share/nls/de_AT.ISO8859-1 OLD_FILES+=usr/share/nls/de_AT.ISO8859-1/ee.cat OLD_FILES+=usr/share/nls/de_AT.ISO8859-1/tcsh.cat OLD_DIRS+=usr/share/nls/de_AT.ISO8859-15 OLD_FILES+=usr/share/nls/de_AT.ISO8859-15/ee.cat OLD_FILES+=usr/share/nls/de_AT.ISO8859-15/tcsh.cat OLD_DIRS+=usr/share/nls/de_AT.UTF-8 OLD_FILES+=usr/share/nls/de_AT.UTF-8/tcsh.cat OLD_DIRS+=usr/share/nls/de_CH.ISO8859-1 OLD_FILES+=usr/share/nls/de_CH.ISO8859-1/ee.cat OLD_FILES+=usr/share/nls/de_CH.ISO8859-1/tcsh.cat OLD_DIRS+=usr/share/nls/de_CH.ISO8859-15 OLD_FILES+=usr/share/nls/de_CH.ISO8859-15/ee.cat OLD_FILES+=usr/share/nls/de_CH.ISO8859-15/tcsh.cat OLD_DIRS+=usr/share/nls/de_CH.UTF-8 OLD_FILES+=usr/share/nls/de_CH.UTF-8/tcsh.cat OLD_DIRS+=usr/share/nls/de_DE.ISO8859-1 OLD_FILES+=usr/share/nls/de_DE.ISO8859-1/ee.cat OLD_FILES+=usr/share/nls/de_DE.ISO8859-1/libc.cat OLD_FILES+=usr/share/nls/de_DE.ISO8859-1/tcsh.cat OLD_DIRS+=usr/share/nls/de_DE.ISO8859-15 OLD_FILES+=usr/share/nls/de_DE.ISO8859-15/ee.cat OLD_FILES+=usr/share/nls/de_DE.ISO8859-15/tcsh.cat OLD_DIRS+=usr/share/nls/de_DE.UTF-8 OLD_FILES+=usr/share/nls/de_DE.UTF-8/tcsh.cat OLD_DIRS+=usr/share/nls/el_GR.ISO8859-7 OLD_FILES+=usr/share/nls/el_GR.ISO8859-7/libc.cat OLD_FILES+=usr/share/nls/el_GR.ISO8859-7/tcsh.cat OLD_DIRS+=usr/share/nls/el_GR.UTF-8 OLD_FILES+=usr/share/nls/el_GR.UTF-8/tcsh.cat OLD_DIRS+=usr/share/nls/en_AU.ISO8859-1 OLD_DIRS+=usr/share/nls/en_AU.ISO8859-15 OLD_DIRS+=usr/share/nls/en_AU.US-ASCII OLD_DIRS+=usr/share/nls/en_AU.UTF-8 OLD_DIRS+=usr/share/nls/en_CA.ISO8859-1 OLD_FILES+=usr/share/nls/en_US.ISO8859-1/ee.cat OLD_DIRS+=usr/share/nls/en_CA.ISO8859-15 OLD_DIRS+=usr/share/nls/en_CA.US-ASCII OLD_DIRS+=usr/share/nls/en_CA.UTF-8 OLD_DIRS+=usr/share/nls/en_GB.ISO8859-1 OLD_DIRS+=usr/share/nls/en_GB.ISO8859-15 OLD_DIRS+=usr/share/nls/en_GB.US-ASCII OLD_DIRS+=usr/share/nls/en_GB.UTF-8 OLD_DIRS+=usr/share/nls/en_IE.UTF-8 OLD_DIRS+=usr/share/nls/en_NZ.ISO8859-1 OLD_DIRS+=usr/share/nls/en_NZ.ISO8859-15 OLD_DIRS+=usr/share/nls/en_NZ.US-ASCII OLD_DIRS+=usr/share/nls/en_NZ.UTF-8 OLD_DIRS+=usr/share/nls/en_US.ISO8859-1 OLD_DIRS+=usr/share/nls/en_US.ISO8859-15 OLD_FILES+=usr/share/nls/en_US.ISO8859-15/ee.cat OLD_DIRS+=usr/share/nls/en_US.UTF-8 OLD_DIRS+=usr/share/nls/es_ES.UTF-8 OLD_FILES+=usr/share/nls/es_ES.ISO8859-1/grep.cat OLD_FILES+=usr/share/nls/es_ES.ISO8859-1/libc.cat OLD_FILES+=usr/share/nls/es_ES.ISO8859-1/tcsh.cat OLD_DIRS+=usr/share/nls/es_ES.ISO8859-1 OLD_DIRS+=usr/share/nls/es_ES.ISO8859-15 OLD_FILES+=usr/share/nls/es_ES.ISO8859-15/tcsh.cat OLD_FILES+=usr/share/nls/es_ES.UTF-8/tcsh.cat OLD_DIRS+=usr/share/nls/et_EE.ISO8859-15 OLD_FILES+=usr/share/nls/et_EE.ISO8859-15/tcsh.cat OLD_DIRS+=usr/share/nls/et_EE.UTF-8 OLD_FILES+=usr/share/nls/et_EE.UTF-8/tcsh.cat OLD_DIRS+=usr/share/nls/fi_FI.ISO8859-1 OLD_FILES+=usr/share/nls/fi_FI.ISO8859-1/libc.cat OLD_FILES+=usr/share/nls/fi_FI.ISO8859-1/tcsh.cat OLD_DIRS+=usr/share/nls/fi_FI.ISO8859-15 OLD_FILES+=usr/share/nls/fi_FI.ISO8859-15/tcsh.cat OLD_DIRS+=usr/share/nls/fi_FI.UTF-8 
OLD_FILES+=usr/share/nls/fi_FI.UTF-8/tcsh.cat OLD_DIRS+=usr/share/nls/fr_BE.ISO8859-1 OLD_FILES+=usr/share/nls/fr_BE.ISO8859-1/ee.cat OLD_FILES+=usr/share/nls/fr_BE.ISO8859-1/tcsh.cat OLD_DIRS+=usr/share/nls/fr_BE.ISO8859-15 OLD_FILES+=usr/share/nls/fr_BE.ISO8859-15/ee.cat OLD_FILES+=usr/share/nls/fr_BE.ISO8859-15/tcsh.cat OLD_DIRS+=usr/share/nls/fr_BE.UTF-8 OLD_FILES+=usr/share/nls/fr_BE.UTF-8/tcsh.cat OLD_DIRS+=usr/share/nls/fr_CA.ISO8859-1 OLD_FILES+=usr/share/nls/fr_CA.ISO8859-1/ee.cat OLD_FILES+=usr/share/nls/fr_CA.ISO8859-1/tcsh.cat OLD_DIRS+=usr/share/nls/fr_CA.ISO8859-15 OLD_FILES+=usr/share/nls/fr_CA.ISO8859-15/ee.cat OLD_FILES+=usr/share/nls/fr_CA.ISO8859-15/tcsh.cat OLD_DIRS+=usr/share/nls/fr_CA.UTF-8 OLD_FILES+=usr/share/nls/fr_CA.UTF-8/tcsh.cat OLD_DIRS+=usr/share/nls/fr_CH.ISO8859-1 OLD_FILES+=usr/share/nls/fr_CH.ISO8859-1/ee.cat OLD_FILES+=usr/share/nls/fr_CH.ISO8859-1/tcsh.cat OLD_DIRS+=usr/share/nls/fr_CH.ISO8859-15 OLD_FILES+=usr/share/nls/fr_CH.ISO8859-15/ee.cat OLD_FILES+=usr/share/nls/fr_CH.ISO8859-15/tcsh.cat OLD_DIRS+=usr/share/nls/fr_CH.UTF-8 OLD_FILES+=usr/share/nls/fr_CH.UTF-8/tcsh.cat OLD_DIRS+=usr/share/nls/fr_FR.ISO8859-1 OLD_FILES+=usr/share/nls/fr_FR.ISO8859-1/ee.cat OLD_FILES+=usr/share/nls/fr_FR.ISO8859-1/libc.cat OLD_FILES+=usr/share/nls/fr_FR.ISO8859-1/tcsh.cat OLD_DIRS+=usr/share/nls/fr_FR.ISO8859-15 OLD_FILES+=usr/share/nls/fr_FR.ISO8859-15/ee.cat OLD_FILES+=usr/share/nls/fr_FR.ISO8859-15/tcsh.cat OLD_DIRS+=usr/share/nls/fr_FR.UTF-8 OLD_FILES+=usr/share/nls/fr_FR.UTF-8/tcsh.cat OLD_DIRS+=usr/share/nls/gl_ES.ISO8859-1 OLD_FILES+=usr/share/nls/gl_ES.ISO8859-1/grep.cat OLD_FILES+=usr/share/nls/gl_ES.ISO8859-1/libc.cat OLD_DIRS+=usr/share/nls/he_IL.UTF-8 OLD_DIRS+=usr/share/nls/hi_IN.ISCII-DEV OLD_DIRS+=usr/share/nls/hr_HR.ISO8859-2 OLD_DIRS+=usr/share/nls/hu_HU.ISO8859-2 OLD_FILES+=usr/share/nls/hu_HU.ISO8859-2/ee.cat OLD_FILES+=usr/share/nls/hu_HU.ISO8859-2/grep.cat OLD_FILES+=usr/share/nls/hu_HU.ISO8859-2/libc.cat OLD_FILES+=usr/share/nls/hu_HU.ISO8859-2/sort.cat OLD_DIRS+=usr/share/nls/hr_HR.UTF-8 OLD_DIRS+=usr/share/nls/hu_HU.UTF-8 OLD_DIRS+=usr/share/nls/hy_AM.ARMSCII-8 OLD_DIRS+=usr/share/nls/hy_AM.UTF-8 OLD_DIRS+=usr/share/nls/is_IS.ISO8859-1 OLD_DIRS+=usr/share/nls/is_IS.ISO8859-15 OLD_DIRS+=usr/share/nls/is_IS.UTF-8 OLD_DIRS+=usr/share/nls/it_CH.ISO8859-1 OLD_FILES+=usr/share/nls/it_CH.ISO8859-1/tcsh.cat OLD_DIRS+=usr/share/nls/it_CH.ISO8859-15 OLD_FILES+=usr/share/nls/it_CH.ISO8859-15/tcsh.cat OLD_DIRS+=usr/share/nls/it_CH.UTF-8 OLD_FILES+=usr/share/nls/it_CH.UTF-8/tcsh.cat OLD_DIRS+=usr/share/nls/it_IT.ISO8859-1 OLD_FILES+=usr/share/nls/it_IT.ISO8859-1/tcsh.cat OLD_DIRS+=usr/share/nls/it_IT.ISO8859-15 OLD_FILES+=usr/share/nls/it_IT.ISO8859-15/libc.cat OLD_FILES+=usr/share/nls/it_IT.ISO8859-15/tcsh.cat OLD_DIRS+=usr/share/nls/it_IT.UTF-8 OLD_FILES+=usr/share/nls/it_IT.UTF-8/tcsh.cat OLD_DIRS+=usr/share/nls/ja_JP.SJIS OLD_FILES+=usr/share/nls/ja_JP.SJIS/grep.cat OLD_FILES+=usr/share/nls/ja_JP.SJIS/tcsh.cat OLD_DIRS+=usr/share/nls/ja_JP.UTF-8 OLD_FILES+=usr/share/nls/ja_JP.UTF-8/grep.cat OLD_FILES+=usr/share/nls/ja_JP.UTF-8/libc.cat OLD_FILES+=usr/share/nls/ja_JP.UTF-8/tcsh.cat OLD_DIRS+=usr/share/nls/ja_JP.eucJP OLD_FILES+=usr/share/nls/ja_JP.eucJP/grep.cat OLD_FILES+=usr/share/nls/ja_JP.eucJP/libc.cat OLD_FILES+=usr/share/nls/ja_JP.eucJP/tcsh.cat OLD_DIRS+=usr/share/nls/kk_KZ.PT154 OLD_DIRS+=usr/share/nls/kk_KZ.UTF-8 OLD_DIRS+=usr/share/nls/ko_KR.CP949 OLD_DIRS+=usr/share/nls/ko_KR.UTF-8 OLD_FILES+=usr/share/nls/ko_KR.UTF-8/libc.cat 
OLD_DIRS+=usr/share/nls/ko_KR.eucKR OLD_FILES+=usr/share/nls/ko_KR.eucKR/libc.cat OLD_DIRS+=usr/share/nls/lv_LV.UTF-8 OLD_DIRS+=usr/share/nls/lt_LT.ISO8859-13 OLD_DIRS+=usr/share/nls/lt_LT.UTF-8 OLD_DIRS+=usr/share/nls/lv_LV.ISO8859-13 OLD_DIRS+=usr/share/nls/mn_MN.UTF-8 OLD_FILES+=usr/share/nls/mn_MN.UTF-8/libc.cat OLD_DIRS+=usr/share/nls/nl_BE.ISO8859-1 OLD_DIRS+=usr/share/nls/nl_BE.ISO8859-15 OLD_DIRS+=usr/share/nls/nl_BE.UTF-8 OLD_DIRS+=usr/share/nls/no_NO.ISO8859-1 OLD_FILES+=usr/share/nls/nl_NL.ISO8859-1/libc.cat OLD_DIRS+=usr/share/nls/nl_NL.ISO8859-15 OLD_DIRS+=usr/share/nls/nl_NL.ISO8859-1 OLD_FILES+=usr/share/nls/no_NO.ISO8859-1/libc.cat OLD_DIRS+=usr/share/nls/no_NO.ISO8859-15 OLD_DIRS+=usr/share/nls/nl_NL.UTF-8 OLD_DIRS+=usr/share/nls/no_NO.UTF-8 OLD_DIRS+=usr/share/nls/pl_PL.ISO8859-2 OLD_FILES+=usr/share/nls/pl_PL.ISO8859-2/ee.cat OLD_FILES+=usr/share/nls/pl_PL.ISO8859-2/libc.cat OLD_DIRS+=usr/share/nls/pl_PL.UTF-8 OLD_DIRS+=usr/share/nls/pt_BR.ISO8859-1 OLD_DIRS+=usr/share/nls/pt_BR.UTF-8 OLD_DIRS+=usr/share/nls/pt_PT.ISO8859-1 OLD_FILES+=usr/share/nls/pt_BR.ISO8859-1/ee.cat OLD_FILES+=usr/share/nls/pt_BR.ISO8859-1/grep.cat OLD_FILES+=usr/share/nls/pt_BR.ISO8859-1/libc.cat OLD_FILES+=usr/share/nls/pt_PT.ISO8859-1/ee.cat OLD_DIRS+=usr/share/nls/pt_PT.ISO8859-15 OLD_DIRS+=usr/share/nls/pt_PT.UTF-8 OLD_DIRS+=usr/share/nls/ro_RO.ISO8859-2 OLD_DIRS+=usr/share/nls/ro_RO.UTF-8 OLD_DIRS+=usr/share/nls/ru_RU.CP1251 OLD_FILES+=usr/share/nls/ru_RU.CP1251/tcsh.cat OLD_DIRS+=usr/share/nls/ru_RU.CP866 OLD_FILES+=usr/share/nls/ru_RU.CP866/tcsh.cat OLD_DIRS+=usr/share/nls/ru_RU.ISO8859-5 OLD_FILES+=usr/share/nls/ru_RU.ISO8859-5/tcsh.cat OLD_DIRS+=usr/share/nls/ru_RU.KOI8-R OLD_FILES+=usr/share/nls/ru_RU.KOI8-R/ee.cat OLD_FILES+=usr/share/nls/ru_RU.KOI8-R/grep.cat OLD_FILES+=usr/share/nls/ru_RU.KOI8-R/libc.cat OLD_FILES+=usr/share/nls/ru_RU.KOI8-R/tcsh.cat OLD_DIRS+=usr/share/nls/ru_RU.UTF-8 OLD_FILES+=usr/share/nls/ru_RU.UTF-8/tcsh.cat OLD_DIRS+=usr/share/nls/sk_SK.ISO8859-2 OLD_FILES+=usr/share/nls/sk_SK.ISO8859-2/libc.cat OLD_DIRS+=usr/share/nls/sk_SK.UTF-8 OLD_DIRS+=usr/share/nls/sl_SI.ISO8859-2 OLD_DIRS+=usr/share/nls/sl_SI.UTF-8 OLD_DIRS+=usr/share/nls/sr_YU.ISO8859-2 OLD_DIRS+=usr/share/nls/sr_YU.ISO8859-5 OLD_DIRS+=usr/share/nls/sr_YU.UTF-8 OLD_DIRS+=usr/share/nls/sv_SE.ISO8859-1 OLD_FILES+=usr/share/nls/sv_SE.ISO8859-1/libc.cat OLD_DIRS+=usr/share/nls/sv_SE.ISO8859-15 OLD_DIRS+=usr/share/nls/sv_SE.UTF-8 OLD_DIRS+=usr/share/nls/tr_TR.ISO8859-9 OLD_DIRS+=usr/share/nls/tr_TR.UTF-8 OLD_DIRS+=usr/share/nls/uk_UA.ISO8859-5 OLD_FILES+=usr/share/nls/uk_UA.ISO8859-5/tcsh.cat OLD_DIRS+=usr/share/nls/uk_UA.KOI8-U OLD_FILES+=usr/share/nls/uk_UA.KOI8-U/ee.cat OLD_FILES+=usr/share/nls/uk_UA.KOI8-U/tcsh.cat OLD_DIRS+=usr/share/nls/uk_UA.UTF-8 OLD_FILES+=usr/share/nls/uk_UA.UTF-8/grep.cat OLD_FILES+=usr/share/nls/uk_UA.UTF-8/libc.cat OLD_FILES+=usr/share/nls/uk_UA.UTF-8/tcsh.cat OLD_DIRS+=usr/share/nls/zh_CN.GB18030 OLD_FILES+=usr/share/nls/zh_CN.GB18030/libc.cat OLD_DIRS+=usr/share/nls/zh_CN.GBK OLD_DIRS+=usr/share/nls/zh_CN.GB2312 OLD_FILES+=usr/share/nls/zh_CN.GB2312/libc.cat OLD_DIRS+=usr/share/nls/zh_CN.UTF-8 OLD_FILES+=usr/share/nls/zh_CN.UTF-8/grep.cat OLD_FILES+=usr/share/nls/zh_CN.UTF-8/libc.cat OLD_DIRS+=usr/share/nls/zh_CN.eucCN OLD_DIRS+=usr/share/nls/zh_HK.UTF-8 OLD_DIRS+=usr/share/nls/zh_TW.UTF-8 OLD_FILES+=usr/tests/bin/sh/builtins/locale1.0 .endif .if ${MK_NLS_CATALOGS} == no OLD_FILES+=usr/share/nls/de_AT.UTF-8/tcsh.cat OLD_FILES+=usr/share/nls/de_CH.UTF-8/tcsh.cat 
OLD_FILES+=usr/share/nls/de_DE.UTF-8/tcsh.cat OLD_FILES+=usr/share/nls/el_GR.UTF-8/tcsh.cat OLD_FILES+=usr/share/nls/es_ES.UTF-8/tcsh.cat OLD_FILES+=usr/share/nls/et_EE.UTF-8/tcsh.cat OLD_FILES+=usr/share/nls/fi_FI.UTF-8/tcsh.cat OLD_FILES+=usr/share/nls/fr_BE.UTF-8/tcsh.cat OLD_FILES+=usr/share/nls/fr_CA.UTF-8/tcsh.cat OLD_FILES+=usr/share/nls/fr_CH.UTF-8/tcsh.cat OLD_FILES+=usr/share/nls/fr_FR.UTF-8/tcsh.cat OLD_FILES+=usr/share/nls/it_CH.UTF-8/tcsh.cat OLD_FILES+=usr/share/nls/it_IT.UTF-8/tcsh.cat OLD_FILES+=usr/share/nls/ja_JP.SJIS/tcsh.cat OLD_FILES+=usr/share/nls/ja_JP.UTF-8/tcsh.cat OLD_FILES+=usr/share/nls/ru_RU.CP1251/tcsh.cat OLD_FILES+=usr/share/nls/ru_RU.CP866/tcsh.cat OLD_FILES+=usr/share/nls/ru_RU.ISO8859-5/tcsh.cat OLD_FILES+=usr/share/nls/ru_RU.UTF-8/tcsh.cat OLD_FILES+=usr/share/nls/uk_UA.ISO8859-5/tcsh.cat OLD_FILES+=usr/share/nls/uk_UA.UTF-8/tcsh.cat .endif .if ${MK_NS_CACHING} == no OLD_FILES+=etc/nscd.conf OLD_FILES+=etc/rc.d/nscd OLD_FILES+=usr/sbin/nscd OLD_FILES+=usr/share/examples/etc/nscd.conf OLD_FILES+=usr/share/man/man5/nscd.conf.5.gz OLD_FILES+=usr/share/man/man8/nscd.8.gz .endif .if ${MK_NTP} == no +OLD_FILES+=etc/ntp/leap-seconds +OLD_DIRS+=etc/ntp OLD_FILES+=etc/ntp.conf OLD_FILES+=etc/periodic/daily/480.status-ntpd +OLD_FILES+=etc/periodic/daily/480.leapfile-ntpd +OLD_FILES+=etc/rc.d/ntpd OLD_FILES+=usr/bin/ntpq OLD_FILES+=usr/sbin/ntp-keygen OLD_FILES+=usr/sbin/ntpd OLD_FILES+=usr/sbin/ntpdate OLD_FILES+=usr/sbin/ntpdc OLD_FILES+=usr/sbin/ntptime OLD_FILES+=usr/sbin/sntp OLD_FILES+=usr/share/doc/ntp/access.html OLD_FILES+=usr/share/doc/ntp/accopt.html OLD_FILES+=usr/share/doc/ntp/assoc.html OLD_FILES+=usr/share/doc/ntp/audio.html OLD_FILES+=usr/share/doc/ntp/authentic.html OLD_FILES+=usr/share/doc/ntp/authopt.html OLD_FILES+=usr/share/doc/ntp/autokey.html OLD_FILES+=usr/share/doc/ntp/bugs.html OLD_FILES+=usr/share/doc/ntp/build.html OLD_FILES+=usr/share/doc/ntp/clock.html OLD_FILES+=usr/share/doc/ntp/clockopt.html OLD_FILES+=usr/share/doc/ntp/cluster.html OLD_FILES+=usr/share/doc/ntp/comdex.html OLD_FILES+=usr/share/doc/ntp/config.html OLD_FILES+=usr/share/doc/ntp/confopt.html OLD_FILES+=usr/share/doc/ntp/copyright.html OLD_FILES+=usr/share/doc/ntp/debug.html OLD_FILES+=usr/share/doc/ntp/decode.html OLD_FILES+=usr/share/doc/ntp/discipline.html OLD_FILES+=usr/share/doc/ntp/discover.html OLD_FILES+=usr/share/doc/ntp/driver1.html OLD_FILES+=usr/share/doc/ntp/driver10.html OLD_FILES+=usr/share/doc/ntp/driver11.html OLD_FILES+=usr/share/doc/ntp/driver12.html OLD_FILES+=usr/share/doc/ntp/driver16.html OLD_FILES+=usr/share/doc/ntp/driver18.html OLD_FILES+=usr/share/doc/ntp/driver19.html OLD_FILES+=usr/share/doc/ntp/driver2.html OLD_FILES+=usr/share/doc/ntp/driver20.html OLD_FILES+=usr/share/doc/ntp/driver22.html OLD_FILES+=usr/share/doc/ntp/driver26.html OLD_FILES+=usr/share/doc/ntp/driver27.html OLD_FILES+=usr/share/doc/ntp/driver28.html OLD_FILES+=usr/share/doc/ntp/driver29.html OLD_FILES+=usr/share/doc/ntp/driver3.html OLD_FILES+=usr/share/doc/ntp/driver30.html OLD_FILES+=usr/share/doc/ntp/driver32.html OLD_FILES+=usr/share/doc/ntp/driver33.html OLD_FILES+=usr/share/doc/ntp/driver34.html OLD_FILES+=usr/share/doc/ntp/driver35.html OLD_FILES+=usr/share/doc/ntp/driver36.html OLD_FILES+=usr/share/doc/ntp/driver37.html OLD_FILES+=usr/share/doc/ntp/driver4.html OLD_FILES+=usr/share/doc/ntp/driver5.html OLD_FILES+=usr/share/doc/ntp/driver6.html OLD_FILES+=usr/share/doc/ntp/driver7.html OLD_FILES+=usr/share/doc/ntp/driver8.html 
OLD_FILES+=usr/share/doc/ntp/driver9.html OLD_FILES+=usr/share/doc/ntp/drivers/driver1.html OLD_FILES+=usr/share/doc/ntp/drivers/driver10.html OLD_FILES+=usr/share/doc/ntp/drivers/driver11.html OLD_FILES+=usr/share/doc/ntp/drivers/driver12.html OLD_FILES+=usr/share/doc/ntp/drivers/driver16.html OLD_FILES+=usr/share/doc/ntp/drivers/driver18.html OLD_FILES+=usr/share/doc/ntp/drivers/driver19.html OLD_FILES+=usr/share/doc/ntp/drivers/driver20.html OLD_FILES+=usr/share/doc/ntp/drivers/driver22.html OLD_FILES+=usr/share/doc/ntp/drivers/driver26.html OLD_FILES+=usr/share/doc/ntp/drivers/driver27.html OLD_FILES+=usr/share/doc/ntp/drivers/driver28.html OLD_FILES+=usr/share/doc/ntp/drivers/driver29.html OLD_FILES+=usr/share/doc/ntp/drivers/driver3.html OLD_FILES+=usr/share/doc/ntp/drivers/driver30.html OLD_FILES+=usr/share/doc/ntp/drivers/driver31.html OLD_FILES+=usr/share/doc/ntp/drivers/driver32.html OLD_FILES+=usr/share/doc/ntp/drivers/driver33.html OLD_FILES+=usr/share/doc/ntp/drivers/driver34.html OLD_FILES+=usr/share/doc/ntp/drivers/driver35.html OLD_FILES+=usr/share/doc/ntp/drivers/driver36.html OLD_FILES+=usr/share/doc/ntp/drivers/driver37.html OLD_FILES+=usr/share/doc/ntp/drivers/driver38.html OLD_FILES+=usr/share/doc/ntp/drivers/driver39.html OLD_FILES+=usr/share/doc/ntp/drivers/driver4.html OLD_FILES+=usr/share/doc/ntp/drivers/driver40.html OLD_FILES+=usr/share/doc/ntp/drivers/driver42.html OLD_FILES+=usr/share/doc/ntp/drivers/driver43.html OLD_FILES+=usr/share/doc/ntp/drivers/driver44.html OLD_FILES+=usr/share/doc/ntp/drivers/driver45.html OLD_FILES+=usr/share/doc/ntp/drivers/driver46.html OLD_FILES+=usr/share/doc/ntp/drivers/driver5.html OLD_FILES+=usr/share/doc/ntp/drivers/driver6.html OLD_FILES+=usr/share/doc/ntp/drivers/driver7.html OLD_FILES+=usr/share/doc/ntp/drivers/driver8.html OLD_FILES+=usr/share/doc/ntp/drivers/driver9.html OLD_FILES+=usr/share/doc/ntp/drivers/icons/home.gif OLD_FILES+=usr/share/doc/ntp/drivers/icons/mail2.gif OLD_FILES+=usr/share/doc/ntp/drivers/mx4200data.html OLD_FILES+=usr/share/doc/ntp/drivers/oncore-shmem.html OLD_FILES+=usr/share/doc/ntp/drivers/scripts/footer.txt OLD_FILES+=usr/share/doc/ntp/drivers/scripts/style.css OLD_FILES+=usr/share/doc/ntp/drivers/tf582_4.html OLD_FILES+=usr/share/doc/ntp/extern.html OLD_FILES+=usr/share/doc/ntp/filter.html OLD_FILES+=usr/share/doc/ntp/hints.html OLD_FILES+=usr/share/doc/ntp/hints/a-ux OLD_FILES+=usr/share/doc/ntp/hints/aix OLD_FILES+=usr/share/doc/ntp/hints/bsdi OLD_FILES+=usr/share/doc/ntp/hints/changes OLD_FILES+=usr/share/doc/ntp/hints/decosf1 OLD_FILES+=usr/share/doc/ntp/hints/decosf2 OLD_FILES+=usr/share/doc/ntp/hints/freebsd OLD_FILES+=usr/share/doc/ntp/hints/hpux OLD_FILES+=usr/share/doc/ntp/hints/linux OLD_FILES+=usr/share/doc/ntp/hints/mpeix OLD_FILES+=usr/share/doc/ntp/hints/notes-xntp-v3 OLD_FILES+=usr/share/doc/ntp/hints/parse OLD_FILES+=usr/share/doc/ntp/hints/refclocks OLD_FILES+=usr/share/doc/ntp/hints/rs6000 OLD_FILES+=usr/share/doc/ntp/hints/sco.html OLD_FILES+=usr/share/doc/ntp/hints/sgi OLD_FILES+=usr/share/doc/ntp/hints/solaris-dosynctodr.html OLD_FILES+=usr/share/doc/ntp/hints/solaris.html OLD_FILES+=usr/share/doc/ntp/hints/solaris.xtra.4023118 OLD_FILES+=usr/share/doc/ntp/hints/solaris.xtra.4095849 OLD_FILES+=usr/share/doc/ntp/hints/solaris.xtra.S99ntpd OLD_FILES+=usr/share/doc/ntp/hints/solaris.xtra.patchfreq OLD_FILES+=usr/share/doc/ntp/hints/sun4 OLD_FILES+=usr/share/doc/ntp/hints/svr4-dell OLD_FILES+=usr/share/doc/ntp/hints/svr4_package OLD_FILES+=usr/share/doc/ntp/hints/todo 
OLD_FILES+=usr/share/doc/ntp/hints/vxworks.html OLD_FILES+=usr/share/doc/ntp/hints/winnt.html OLD_FILES+=usr/share/doc/ntp/history.html OLD_FILES+=usr/share/doc/ntp/howto.html OLD_FILES+=usr/share/doc/ntp/huffpuff.html OLD_FILES+=usr/share/doc/ntp/icons/home.gif OLD_FILES+=usr/share/doc/ntp/icons/mail2.gif OLD_FILES+=usr/share/doc/ntp/icons/sitemap.png OLD_FILES+=usr/share/doc/ntp/index.html OLD_FILES+=usr/share/doc/ntp/kern.html OLD_FILES+=usr/share/doc/ntp/kernpps.html OLD_FILES+=usr/share/doc/ntp/keygen.html OLD_FILES+=usr/share/doc/ntp/ldisc.html OLD_FILES+=usr/share/doc/ntp/leap.html OLD_FILES+=usr/share/doc/ntp/measure.html OLD_FILES+=usr/share/doc/ntp/miscopt.html OLD_FILES+=usr/share/doc/ntp/monopt.html OLD_FILES+=usr/share/doc/ntp/msyslog.html OLD_FILES+=usr/share/doc/ntp/mx4200data.html OLD_FILES+=usr/share/doc/ntp/notes.html OLD_FILES+=usr/share/doc/ntp/ntp-keygen.html OLD_FILES+=usr/share/doc/ntp/ntp-wait.html OLD_FILES+=usr/share/doc/ntp/ntp.conf.html OLD_FILES+=usr/share/doc/ntp/ntp.keys.html OLD_FILES+=usr/share/doc/ntp/ntp_conf.html OLD_FILES+=usr/share/doc/ntp/ntpd.html OLD_FILES+=usr/share/doc/ntp/ntpdate.html OLD_FILES+=usr/share/doc/ntp/ntpdc.html OLD_FILES+=usr/share/doc/ntp/ntpdsim.html OLD_FILES+=usr/share/doc/ntp/ntpdsim_new.html OLD_FILES+=usr/share/doc/ntp/ntpq.html OLD_FILES+=usr/share/doc/ntp/ntpsnmpd.html OLD_FILES+=usr/share/doc/ntp/ntptime.html OLD_FILES+=usr/share/doc/ntp/ntptrace.html OLD_FILES+=usr/share/doc/ntp/orphan.html OLD_FILES+=usr/share/doc/ntp/parsedata.html OLD_FILES+=usr/share/doc/ntp/parsenew.html OLD_FILES+=usr/share/doc/ntp/patches.html OLD_FILES+=usr/share/doc/ntp/pic/9400n.jpg OLD_FILES+=usr/share/doc/ntp/pic/alice11.gif OLD_FILES+=usr/share/doc/ntp/pic/alice13.gif OLD_FILES+=usr/share/doc/ntp/pic/alice15.gif OLD_FILES+=usr/share/doc/ntp/pic/alice23.gif OLD_FILES+=usr/share/doc/ntp/pic/alice31.gif OLD_FILES+=usr/share/doc/ntp/pic/alice32.gif OLD_FILES+=usr/share/doc/ntp/pic/alice35.gif OLD_FILES+=usr/share/doc/ntp/pic/alice38.gif OLD_FILES+=usr/share/doc/ntp/pic/alice44.gif OLD_FILES+=usr/share/doc/ntp/pic/alice47.gif OLD_FILES+=usr/share/doc/ntp/pic/alice51.gif OLD_FILES+=usr/share/doc/ntp/pic/alice61.gif OLD_FILES+=usr/share/doc/ntp/pic/barnstable.gif OLD_FILES+=usr/share/doc/ntp/pic/beaver.gif OLD_FILES+=usr/share/doc/ntp/pic/boom3.gif OLD_FILES+=usr/share/doc/ntp/pic/boom3a.gif OLD_FILES+=usr/share/doc/ntp/pic/boom4.gif OLD_FILES+=usr/share/doc/ntp/pic/broad.gif OLD_FILES+=usr/share/doc/ntp/pic/bustardfly.gif OLD_FILES+=usr/share/doc/ntp/pic/c51.jpg OLD_FILES+=usr/share/doc/ntp/pic/description.jpg OLD_FILES+=usr/share/doc/ntp/pic/discipline.gif OLD_FILES+=usr/share/doc/ntp/pic/dogsnake.gif OLD_FILES+=usr/share/doc/ntp/pic/driver29.gif OLD_FILES+=usr/share/doc/ntp/pic/driver43_1.gif OLD_FILES+=usr/share/doc/ntp/pic/driver43_2.jpg OLD_FILES+=usr/share/doc/ntp/pic/fg6021.gif OLD_FILES+=usr/share/doc/ntp/pic/fg6039.jpg OLD_FILES+=usr/share/doc/ntp/pic/fig_3_1.gif OLD_FILES+=usr/share/doc/ntp/pic/flatheads.gif OLD_FILES+=usr/share/doc/ntp/pic/flt1.gif OLD_FILES+=usr/share/doc/ntp/pic/flt2.gif OLD_FILES+=usr/share/doc/ntp/pic/flt3.gif OLD_FILES+=usr/share/doc/ntp/pic/flt4.gif OLD_FILES+=usr/share/doc/ntp/pic/flt5.gif OLD_FILES+=usr/share/doc/ntp/pic/flt6.gif OLD_FILES+=usr/share/doc/ntp/pic/flt7.gif OLD_FILES+=usr/share/doc/ntp/pic/flt8.gif OLD_FILES+=usr/share/doc/ntp/pic/flt9.gif OLD_FILES+=usr/share/doc/ntp/pic/freq1211.gif OLD_FILES+=usr/share/doc/ntp/pic/gadget.jpg OLD_FILES+=usr/share/doc/ntp/pic/gps167.jpg 
OLD_FILES+=usr/share/doc/ntp/pic/group.gif OLD_FILES+=usr/share/doc/ntp/pic/hornraba.gif OLD_FILES+=usr/share/doc/ntp/pic/igclock.gif OLD_FILES+=usr/share/doc/ntp/pic/neoclock4x.gif OLD_FILES+=usr/share/doc/ntp/pic/offset1211.gif OLD_FILES+=usr/share/doc/ntp/pic/oncore_evalbig.gif OLD_FILES+=usr/share/doc/ntp/pic/oncore_remoteant.jpg OLD_FILES+=usr/share/doc/ntp/pic/oncore_utplusbig.gif OLD_FILES+=usr/share/doc/ntp/pic/oz2.gif OLD_FILES+=usr/share/doc/ntp/pic/panda.gif OLD_FILES+=usr/share/doc/ntp/pic/pd_om006.gif OLD_FILES+=usr/share/doc/ntp/pic/pd_om011.gif OLD_FILES+=usr/share/doc/ntp/pic/peer.gif OLD_FILES+=usr/share/doc/ntp/pic/pogo.gif OLD_FILES+=usr/share/doc/ntp/pic/pogo1a.gif OLD_FILES+=usr/share/doc/ntp/pic/pogo3a.gif OLD_FILES+=usr/share/doc/ntp/pic/pogo4.gif OLD_FILES+=usr/share/doc/ntp/pic/pogo5.gif OLD_FILES+=usr/share/doc/ntp/pic/pogo6.gif OLD_FILES+=usr/share/doc/ntp/pic/pogo7.gif OLD_FILES+=usr/share/doc/ntp/pic/pogo8.gif OLD_FILES+=usr/share/doc/ntp/pic/pzf509.jpg OLD_FILES+=usr/share/doc/ntp/pic/pzf511.jpg OLD_FILES+=usr/share/doc/ntp/pic/rabbit.gif OLD_FILES+=usr/share/doc/ntp/pic/radio2.jpg OLD_FILES+=usr/share/doc/ntp/pic/sheepb.jpg OLD_FILES+=usr/share/doc/ntp/pic/stack1a.jpg OLD_FILES+=usr/share/doc/ntp/pic/stats.gif OLD_FILES+=usr/share/doc/ntp/pic/sx5.gif OLD_FILES+=usr/share/doc/ntp/pic/thunderbolt.jpg OLD_FILES+=usr/share/doc/ntp/pic/time1.gif OLD_FILES+=usr/share/doc/ntp/pic/tonea.gif OLD_FILES+=usr/share/doc/ntp/pic/tribeb.gif OLD_FILES+=usr/share/doc/ntp/pic/wingdorothy.gif OLD_FILES+=usr/share/doc/ntp/poll.html OLD_FILES+=usr/share/doc/ntp/porting.html OLD_FILES+=usr/share/doc/ntp/pps.html OLD_FILES+=usr/share/doc/ntp/prefer.html OLD_FILES+=usr/share/doc/ntp/quick.html OLD_FILES+=usr/share/doc/ntp/rate.html OLD_FILES+=usr/share/doc/ntp/rdebug.html OLD_FILES+=usr/share/doc/ntp/refclock.html OLD_FILES+=usr/share/doc/ntp/release.html OLD_FILES+=usr/share/doc/ntp/scripts/accopt.txt OLD_FILES+=usr/share/doc/ntp/scripts/audio.txt OLD_FILES+=usr/share/doc/ntp/scripts/authopt.txt OLD_FILES+=usr/share/doc/ntp/scripts/clockopt.txt OLD_FILES+=usr/share/doc/ntp/scripts/command.txt OLD_FILES+=usr/share/doc/ntp/scripts/config.txt OLD_FILES+=usr/share/doc/ntp/scripts/confopt.txt OLD_FILES+=usr/share/doc/ntp/scripts/external.txt OLD_FILES+=usr/share/doc/ntp/scripts/footer.txt OLD_FILES+=usr/share/doc/ntp/scripts/hand.txt OLD_FILES+=usr/share/doc/ntp/scripts/install.txt OLD_FILES+=usr/share/doc/ntp/scripts/manual.txt OLD_FILES+=usr/share/doc/ntp/scripts/misc.txt OLD_FILES+=usr/share/doc/ntp/scripts/miscopt.txt OLD_FILES+=usr/share/doc/ntp/scripts/monopt.txt OLD_FILES+=usr/share/doc/ntp/scripts/refclock.txt OLD_FILES+=usr/share/doc/ntp/scripts/special.txt OLD_FILES+=usr/share/doc/ntp/scripts/style.css OLD_FILES+=usr/share/doc/ntp/select.html OLD_FILES+=usr/share/doc/ntp/sitemap.html OLD_FILES+=usr/share/doc/ntp/sntp.html OLD_FILES+=usr/share/doc/ntp/stats.html OLD_FILES+=usr/share/doc/ntp/tickadj.html OLD_FILES+=usr/share/doc/ntp/warp.html OLD_FILES+=usr/share/doc/ntp/xleave.html OLD_DIRS+=usr/share/doc/ntp/drivers OLD_DIRS+=usr/share/doc/ntp/drivers/scripts OLD_DIRS+=usr/share/doc/ntp/drivers/icons OLD_DIRS+=usr/share/doc/ntp/hints OLD_DIRS+=usr/share/doc/ntp/icons OLD_DIRS+=usr/share/doc/ntp/pic OLD_DIRS+=usr/share/doc/ntp/scripts OLD_DIRS+=usr/share/doc/ntp OLD_FILES+=usr/share/examples/etc/ntp.conf OLD_FILES+=usr/share/man/man1/sntp.1.gz OLD_FILES+=usr/share/man/man5/ntp.conf.5.gz OLD_FILES+=usr/share/man/man5/ntp.keys.5.gz OLD_FILES+=usr/share/man/man8/ntp-keygen.8.gz 
OLD_FILES+=usr/share/man/man8/ntpd.8.gz OLD_FILES+=usr/share/man/man8/ntpdate.8.gz OLD_FILES+=usr/share/man/man8/ntpdc.8.gz OLD_FILES+=usr/share/man/man8/ntpq.8.gz OLD_FILES+=usr/share/man/man8/ntptime.8.gz .endif +.if ${MK_OFED} == no +OLD_FILES+=etc/newsyslog.conf.d/opensm.conf +OLD_FILES+=etc/rc.d/opensm +OLD_FILES+=usr/bin/ibstat +OLD_FILES+=usr/bin/ibv_asyncwatch +OLD_FILES+=usr/bin/ibv_devices +OLD_FILES+=usr/bin/ibv_devinfo +OLD_FILES+=usr/bin/ibv_rc_pingpong +OLD_FILES+=usr/bin/ibv_srq_pingpong +OLD_FILES+=usr/bin/ibv_uc_pingpong +OLD_FILES+=usr/bin/ibv_ud_pingpong +OLD_FILES+=usr/bin/mckey +OLD_FILES+=usr/bin/rping +OLD_FILES+=usr/bin/ucmatose +OLD_FILES+=usr/bin/udaddy +OLD_FILES+=usr/include/infiniband/marshall.h +OLD_FILES+=usr/include/infiniband/kern-abi.h +OLD_FILES+=usr/include/infiniband/umad_sm.h +OLD_FILES+=usr/include/infiniband/umad.h +OLD_FILES+=usr/include/infiniband/arch.h +OLD_FILES+=usr/include/infiniband/verbs.h +OLD_FILES+=usr/include/infiniband/ib.h +OLD_FILES+=usr/include/infiniband/cm.h +OLD_FILES+=usr/include/infiniband/opcode.h +OLD_FILES+=usr/include/infiniband/ibnetdisc.h +OLD_FILES+=usr/include/infiniband/driver.h +OLD_FILES+=usr/include/infiniband/mad_osd.h +OLD_FILES+=usr/include/infiniband/umad_types.h +OLD_FILES+=usr/include/infiniband/umad_cm.h +OLD_FILES+=usr/include/infiniband/cm_abi.h +OLD_FILES+=usr/include/infiniband/sa-kern-abi.h +OLD_FILES+=usr/include/infiniband/ibnetdisc_osd.h +OLD_FILES+=usr/include/infiniband/opensm/osm_event_plugin.h +OLD_FILES+=usr/include/infiniband/opensm/osm_console_io.h +OLD_FILES+=usr/include/infiniband/opensm/osm_ucast_cache.h +OLD_FILES+=usr/include/infiniband/opensm/osm_port.h +OLD_FILES+=usr/include/infiniband/opensm/osm_path.h +OLD_FILES+=usr/include/infiniband/opensm/osm_mtree.h +OLD_FILES+=usr/include/infiniband/opensm/osm_log.h +OLD_FILES+=usr/include/infiniband/opensm/osm_mcm_port.h +OLD_FILES+=usr/include/infiniband/opensm/osm_subnet.h +OLD_FILES+=usr/include/infiniband/opensm/osm_pkey.h +OLD_FILES+=usr/include/infiniband/opensm/osm_remote_sm.h +OLD_FILES+=usr/include/infiniband/opensm/osm_qos_policy.h +OLD_FILES+=usr/include/infiniband/opensm/osm_sm.h +OLD_FILES+=usr/include/infiniband/opensm/osm_node.h +OLD_FILES+=usr/include/infiniband/opensm/osm_mcast_mgr.h +OLD_FILES+=usr/include/infiniband/opensm/osm_madw.h +OLD_FILES+=usr/include/infiniband/opensm/osm_lid_mgr.h +OLD_FILES+=usr/include/infiniband/opensm/osm_congestion_control.h +OLD_FILES+=usr/include/infiniband/opensm/osm_port_profile.h +OLD_FILES+=usr/include/infiniband/opensm/osm_perfmgr.h +OLD_FILES+=usr/include/infiniband/opensm/osm_service.h +OLD_FILES+=usr/include/infiniband/opensm/osm_base.h +OLD_FILES+=usr/include/infiniband/opensm/osm_vl15intf.h +OLD_FILES+=usr/include/infiniband/opensm/st.h +OLD_FILES+=usr/include/infiniband/opensm/osm_attrib_req.h +OLD_FILES+=usr/include/infiniband/opensm/osm_ucast_mgr.h +OLD_FILES+=usr/include/infiniband/opensm/osm_db.h +OLD_FILES+=usr/include/infiniband/opensm/osm_sa_mad_ctrl.h +OLD_FILES+=usr/include/infiniband/opensm/osm_db_pack.h +OLD_FILES+=usr/include/infiniband/opensm/osm_opensm.h +OLD_FILES+=usr/include/infiniband/opensm/osm_mesh.h +OLD_FILES+=usr/include/infiniband/opensm/osm_mcast_tbl.h +OLD_FILES+=usr/include/infiniband/opensm/osm_sm_mad_ctrl.h +OLD_FILES+=usr/include/infiniband/opensm/osm_stats.h +OLD_FILES+=usr/include/infiniband/opensm/osm_mad_pool.h +OLD_FILES+=usr/include/infiniband/opensm/osm_switch.h +OLD_FILES+=usr/include/infiniband/opensm/osm_ucast_lash.h 
+OLD_FILES+=usr/include/infiniband/opensm/osm_errors.h +OLD_FILES+=usr/include/infiniband/opensm/osm_partition.h +OLD_FILES+=usr/include/infiniband/opensm/osm_prefix_route.h +OLD_FILES+=usr/include/infiniband/opensm/osm_helper.h +OLD_FILES+=usr/include/infiniband/opensm/osm_version.h +OLD_FILES+=usr/include/infiniband/opensm/osm_sa.h +OLD_FILES+=usr/include/infiniband/opensm/osm_config.h +OLD_FILES+=usr/include/infiniband/opensm/osm_multicast.h +OLD_FILES+=usr/include/infiniband/opensm/osm_file_ids.h +OLD_FILES+=usr/include/infiniband/opensm/osm_perfmgr_db.h +OLD_FILES+=usr/include/infiniband/opensm/osm_console.h +OLD_FILES+=usr/include/infiniband/opensm/osm_msgdef.h +OLD_FILES+=usr/include/infiniband/opensm/osm_router.h +OLD_FILES+=usr/include/infiniband/opensm/osm_guid.h +OLD_FILES+=usr/include/infiniband/opensm/osm_inform.h +OLD_DIRS+=usr/include/infiniband/opensm +OLD_FILES+=usr/include/infiniband/iba/ib_types.h +OLD_FILES+=usr/include/infiniband/iba/ib_cm_types.h +OLD_DIRS+=usr/include/infiniband/iba +OLD_FILES+=usr/include/infiniband/umad_str.h +OLD_FILES+=usr/include/infiniband/udma_barrier.h +OLD_FILES+=usr/include/infiniband/umad_sa.h +OLD_FILES+=usr/include/infiniband/mad.h +OLD_FILES+=usr/include/infiniband/sa.h +OLD_FILES+=usr/include/infiniband/byteorder.h +OLD_FILES+=usr/include/infiniband/types.h +OLD_FILES+=usr/include/infiniband/byteswap.h +OLD_FILES+=usr/include/infiniband/vendor/osm_pkt_randomizer.h +OLD_FILES+=usr/include/infiniband/vendor/osm_vendor_mlx_rmpp_ctx.h +OLD_FILES+=usr/include/infiniband/vendor/osm_vendor_mtl_hca_guid.h +OLD_FILES+=usr/include/infiniband/vendor/osm_vendor_mlx_txn.h +OLD_FILES+=usr/include/infiniband/vendor/osm_vendor_mlx.h +OLD_FILES+=usr/include/infiniband/vendor/osm_vendor_mlx_svc.h +OLD_FILES+=usr/include/infiniband/vendor/osm_vendor_test.h +OLD_FILES+=usr/include/infiniband/vendor/osm_vendor_mlx_inout.h +OLD_FILES+=usr/include/infiniband/vendor/osm_mtl_bind.h +OLD_FILES+=usr/include/infiniband/vendor/osm_vendor_mlx_hca.h +OLD_FILES+=usr/include/infiniband/vendor/osm_vendor_sa_api.h +OLD_FILES+=usr/include/infiniband/vendor/osm_vendor_mlx_sender.h +OLD_FILES+=usr/include/infiniband/vendor/osm_vendor.h +OLD_FILES+=usr/include/infiniband/vendor/osm_umadt.h +OLD_FILES+=usr/include/infiniband/vendor/osm_vendor_mtl_transaction_mgr.h +OLD_FILES+=usr/include/infiniband/vendor/osm_vendor_mlx_defs.h +OLD_FILES+=usr/include/infiniband/vendor/osm_vendor_mlx_dispatcher.h +OLD_FILES+=usr/include/infiniband/vendor/osm_vendor_api.h +OLD_FILES+=usr/include/infiniband/vendor/osm_vendor_mtl.h +OLD_FILES+=usr/include/infiniband/vendor/osm_vendor_mlx_transport.h +OLD_FILES+=usr/include/infiniband/vendor/osm_vendor_al.h +OLD_FILES+=usr/include/infiniband/vendor/osm_vendor_mlx_sar.h +OLD_FILES+=usr/include/infiniband/vendor/osm_vendor_umadt.h +OLD_FILES+=usr/include/infiniband/vendor/osm_ts_useraccess.h +OLD_FILES+=usr/include/infiniband/vendor/osm_vendor_ts.h +OLD_FILES+=usr/include/infiniband/vendor/osm_vendor_mlx_transport_anafa.h +OLD_FILES+=usr/include/infiniband/vendor/osm_vendor_ibumad.h +OLD_DIRS+=usr/include/infiniband/vendor +OLD_FILES+=usr/include/infiniband/endian.h +OLD_FILES+=usr/include/infiniband/complib/cl_byteswap.h +OLD_FILES+=usr/include/infiniband/complib/cl_types.h +OLD_FILES+=usr/include/infiniband/complib/cl_map.h +OLD_FILES+=usr/include/infiniband/complib/cl_packon.h +OLD_FILES+=usr/include/infiniband/complib/cl_timer.h +OLD_FILES+=usr/include/infiniband/complib/cl_thread_osd.h +OLD_FILES+=usr/include/infiniband/complib/cl_thread.h 
+OLD_FILES+=usr/include/infiniband/complib/cl_event.h +OLD_FILES+=usr/include/infiniband/complib/cl_byteswap_osd.h +OLD_FILES+=usr/include/infiniband/complib/cl_passivelock.h +OLD_FILES+=usr/include/infiniband/complib/cl_vector.h +OLD_FILES+=usr/include/infiniband/complib/cl_nodenamemap.h +OLD_FILES+=usr/include/infiniband/complib/cl_event_wheel.h +OLD_FILES+=usr/include/infiniband/complib/cl_log.h +OLD_FILES+=usr/include/infiniband/complib/cl_fleximap.h +OLD_FILES+=usr/include/infiniband/complib/cl_qlist.h +OLD_FILES+=usr/include/infiniband/complib/cl_timer_osd.h +OLD_FILES+=usr/include/infiniband/complib/cl_pool.h +OLD_FILES+=usr/include/infiniband/complib/cl_debug.h +OLD_FILES+=usr/include/infiniband/complib/cl_types_osd.h +OLD_FILES+=usr/include/infiniband/complib/cl_dispatcher.h +OLD_FILES+=usr/include/infiniband/complib/cl_ptr_vector.h +OLD_FILES+=usr/include/infiniband/complib/cl_atomic_osd.h +OLD_FILES+=usr/include/infiniband/complib/cl_qmap.h +OLD_FILES+=usr/include/infiniband/complib/cl_spinlock_osd.h +OLD_FILES+=usr/include/infiniband/complib/cl_qcomppool.h +OLD_FILES+=usr/include/infiniband/complib/cl_threadpool.h +OLD_FILES+=usr/include/infiniband/complib/cl_list.h +OLD_FILES+=usr/include/infiniband/complib/cl_debug_osd.h +OLD_FILES+=usr/include/infiniband/complib/cl_packoff.h +OLD_FILES+=usr/include/infiniband/complib/cl_qpool.h +OLD_FILES+=usr/include/infiniband/complib/cl_spinlock.h +OLD_FILES+=usr/include/infiniband/complib/cl_event_osd.h +OLD_FILES+=usr/include/infiniband/complib/cl_atomic.h +OLD_FILES+=usr/include/infiniband/complib/cl_math.h +OLD_FILES+=usr/include/infiniband/complib/cl_comppool.h +OLD_DIRS+=usr/include/infiniband/complib +OLD_DIRS+=usr/include/infiniband +OLD_FILES+=usr/lib/libcxgb4.a +OLD_FILES+=usr/lib/libcxgb4.so +OLD_LIBS+=usr/lib/libcxgb4.so.1 +OLD_FILES+=usr/lib/libibcm.a +OLD_FILES+=usr/lib/libibcm.so +OLD_LIBS+=usr/lib/libibcm.so.1 +OLD_FILES+=usr/lib/libibmad.a +OLD_FILES+=usr/lib/libibmad.so +OLD_LIBS+=usr/lib/libibmad.so.5 +OLD_FILES+=usr/lib/libibnetdisc.a +OLD_FILES+=usr/lib/libibnetdisc.so +OLD_LIBS+=usr/lib/libibnetdisc.so.5 +OLD_FILES+=usr/lib/libibumad.a +OLD_FILES+=usr/lib/libibumad.so +OLD_LIBS+=usr/lib/libibumad.so.1 +OLD_FILES+=usr/lib/libibverbs.a +OLD_FILES+=usr/lib/libibverbs.so +OLD_LIBS+=usr/lib/libibverbs.so.1 +OLD_FILES+=usr/lib/libmlx4.a +OLD_FILES+=usr/lib/libmlx4.so +OLD_LIBS+=usr/lib/libmlx4.so.1 +OLD_FILES+=usr/lib/libmlx5.a +OLD_FILES+=usr/lib/libmlx5.so +OLD_LIBS+=usr/lib/libmlx5.so.1 +OLD_FILES+=usr/lib/libopensm.a +OLD_FILES+=usr/lib/libopensm.so +OLD_LIBS+=usr/lib/libopensm.so.5 +OLD_FILES+=usr/lib/libosmcomp.a +OLD_FILES+=usr/lib/libosmcomp.so +OLD_LIBS+=usr/lib/libosmcomp.so.3 +OLD_FILES+=usr/lib/libosmvendor.a +OLD_FILES+=usr/lib/libosmvendor.so +OLD_LIBS+=usr/lib/libosmvendor.so.4 +OLD_FILES+=usr/lib/librdmacm.a +OLD_FILES+=usr/lib/librdmacm.so +OLD_LIBS+=usr/lib/librdmacm.so.1 +.if ${TARGET_ARCH} == "amd64" || ${TARGET_ARCH} == "powerpc64" +OLD_FILES+=usr/lib32/libcxgb4.a +OLD_FILES+=usr/lib32/libcxgb4.so +OLD_LIBS+=usr/lib32/libcxgb4.so.1 +OLD_FILES+=usr/lib32/libibcm.a +OLD_FILES+=usr/lib32/libibcm.so +OLD_LIBS+=usr/lib32/libibcm.so.1 +OLD_FILES+=usr/lib32/libibmad.a +OLD_FILES+=usr/lib32/libibmad.so +OLD_LIBS+=usr/lib32/libibmad.so.5 +OLD_FILES+=usr/lib32/libibnetdisc.a +OLD_FILES+=usr/lib32/libibnetdisc.so +OLD_LIBS+=usr/lib32/libibnetdisc.so.5 +OLD_FILES+=usr/lib32/libibumad.a +OLD_FILES+=usr/lib32/libibumad.so +OLD_LIBS+=usr/lib32/libibumad.so.1 +OLD_FILES+=usr/lib32/libibverbs.a 
+OLD_FILES+=usr/lib32/libibverbs.so +OLD_LIBS+=usr/lib32/libibverbs.so.1 +OLD_FILES+=usr/lib32/libmlx4.a +OLD_FILES+=usr/lib32/libmlx4.so +OLD_LIBS+=usr/lib32/libmlx4.so.1 +OLD_FILES+=usr/lib32/libmlx5.a +OLD_FILES+=usr/lib32/libmlx5.so +OLD_LIBS+=usr/lib32/libmlx5.so.1 +OLD_FILES+=usr/lib32/libopensm.a +OLD_FILES+=usr/lib32/libopensm.so +OLD_LIBS+=usr/lib32/libopensm.so.5 +OLD_FILES+=usr/lib32/libosmcomp.a +OLD_FILES+=usr/lib32/libosmcomp.so +OLD_LIBS+=usr/lib32/libosmcomp.so.3 +OLD_FILES+=usr/lib32/libosmvendor.a +OLD_FILES+=usr/lib32/libosmvendor.so +OLD_LIBS+=usr/lib32/libosmvendor.so.4 +OLD_FILES+=usr/lib32/librdmacm.a +OLD_FILES+=usr/lib32/librdmacm.so +OLD_LIBS+=usr/lib32/librdmacm.so.1 +.endif +OLD_FILES+=usr/share/man/man1/ibv_asyncwatch.1.gz +OLD_FILES+=usr/share/man/man1/ibv_devices.1.gz +OLD_FILES+=usr/share/man/man1/ibv_devinfo.1.gz +OLD_FILES+=usr/share/man/man1/ibv_rc_pingpong.1.gz +OLD_FILES+=usr/share/man/man1/ibv_srq_pingpong.1.gz +OLD_FILES+=usr/share/man/man1/ibv_uc_pingpong.1.gz +OLD_FILES+=usr/share/man/man1/ibv_ud_pingpong.1.gz +OLD_FILES+=usr/share/man/man1/mckey.1.gz +OLD_FILES+=usr/share/man/man1/rping.1.gz +OLD_FILES+=usr/share/man/man1/ucmatose.1.gz +OLD_FILES+=usr/share/man/man1/udaddy.1.gz +OLD_FILES+=usr/share/man/man3/ibnd_debug.3.gz +OLD_FILES+=usr/share/man/man3/ibnd_destroy_fabric.3.gz +OLD_FILES+=usr/share/man/man3/ibnd_discover_fabric.3.gz +OLD_FILES+=usr/share/man/man3/ibnd_find_node_dr.3.gz +OLD_FILES+=usr/share/man/man3/ibnd_find_node_guid.3.gz +OLD_FILES+=usr/share/man/man3/ibnd_iter_nodes.3.gz +OLD_FILES+=usr/share/man/man3/ibnd_iter_nodes_type.3.gz +OLD_FILES+=usr/share/man/man3/ibnd_show_progress.3.gz +OLD_FILES+=usr/share/man/man3/ibv_alloc_mw.3.gz +OLD_FILES+=usr/share/man/man3/ibv_alloc_pd.3.gz +OLD_FILES+=usr/share/man/man3/ibv_attach_mcast.3.gz +OLD_FILES+=usr/share/man/man3/ibv_bind_mw.3.gz +OLD_FILES+=usr/share/man/man3/ibv_create_ah.3.gz +OLD_FILES+=usr/share/man/man3/ibv_create_ah_from_wc.3.gz +OLD_FILES+=usr/share/man/man3/ibv_create_comp_channel.3.gz +OLD_FILES+=usr/share/man/man3/ibv_create_cq.3.gz +OLD_FILES+=usr/share/man/man3/ibv_create_cq_ex.3.gz +OLD_FILES+=usr/share/man/man3/ibv_create_flow.3.gz +OLD_FILES+=usr/share/man/man3/ibv_create_qp.3.gz +OLD_FILES+=usr/share/man/man3/ibv_create_qp_ex.3.gz +OLD_FILES+=usr/share/man/man3/ibv_create_rwq_ind_table.3.gz +OLD_FILES+=usr/share/man/man3/ibv_create_srq.3.gz +OLD_FILES+=usr/share/man/man3/ibv_create_srq_ex.3.gz +OLD_FILES+=usr/share/man/man3/ibv_create_wq.3.gz +OLD_FILES+=usr/share/man/man3/ibv_event_type_str.3.gz +OLD_FILES+=usr/share/man/man3/ibv_fork_init.3.gz +OLD_FILES+=usr/share/man/man3/ibv_get_async_event.3.gz +OLD_FILES+=usr/share/man/man3/ibv_get_cq_event.3.gz +OLD_FILES+=usr/share/man/man3/ibv_get_device_guid.3.gz +OLD_FILES+=usr/share/man/man3/ibv_get_device_list.3.gz +OLD_FILES+=usr/share/man/man3/ibv_get_device_name.3.gz +OLD_FILES+=usr/share/man/man3/ibv_get_srq_num.3.gz +OLD_FILES+=usr/share/man/man3/ibv_inc_rkey.3.gz +OLD_FILES+=usr/share/man/man3/ibv_modify_qp.3.gz +OLD_FILES+=usr/share/man/man3/ibv_modify_srq.3.gz +OLD_FILES+=usr/share/man/man3/ibv_modify_wq.3.gz +OLD_FILES+=usr/share/man/man3/ibv_open_device.3.gz +OLD_FILES+=usr/share/man/man3/ibv_open_qp.3.gz +OLD_FILES+=usr/share/man/man3/ibv_open_xrcd.3.gz +OLD_FILES+=usr/share/man/man3/ibv_poll_cq.3.gz +OLD_FILES+=usr/share/man/man3/ibv_post_recv.3.gz +OLD_FILES+=usr/share/man/man3/ibv_post_send.3.gz +OLD_FILES+=usr/share/man/man3/ibv_post_srq_recv.3.gz +OLD_FILES+=usr/share/man/man3/ibv_query_device.3.gz 
+OLD_FILES+=usr/share/man/man3/ibv_query_device_ex.3.gz +OLD_FILES+=usr/share/man/man3/ibv_query_gid.3.gz +OLD_FILES+=usr/share/man/man3/ibv_query_pkey.3.gz +OLD_FILES+=usr/share/man/man3/ibv_query_port.3.gz +OLD_FILES+=usr/share/man/man3/ibv_query_qp.3.gz +OLD_FILES+=usr/share/man/man3/ibv_query_rt_values_ex.3.gz +OLD_FILES+=usr/share/man/man3/ibv_query_srq.3.gz +OLD_FILES+=usr/share/man/man3/ibv_rate_to_mbps.3.gz +OLD_FILES+=usr/share/man/man3/ibv_rate_to_mult.3.gz +OLD_FILES+=usr/share/man/man3/ibv_reg_mr.3.gz +OLD_FILES+=usr/share/man/man3/ibv_req_notify_cq.3.gz +OLD_FILES+=usr/share/man/man3/ibv_rereg_mr.3.gz +OLD_FILES+=usr/share/man/man3/ibv_resize_cq.3.gz +OLD_FILES+=usr/share/man/man3/rdma_accept.3.gz +OLD_FILES+=usr/share/man/man3/rdma_ack_cm_event.3.gz +OLD_FILES+=usr/share/man/man3/rdma_bind_addr.3.gz +OLD_FILES+=usr/share/man/man3/rdma_connect.3.gz +OLD_FILES+=usr/share/man/man3/rdma_create_ep.3.gz +OLD_FILES+=usr/share/man/man3/rdma_create_event_channel.3.gz +OLD_FILES+=usr/share/man/man3/rdma_create_id.3.gz +OLD_FILES+=usr/share/man/man3/rdma_create_qp.3.gz +OLD_FILES+=usr/share/man/man3/rdma_create_srq.3.gz +OLD_FILES+=usr/share/man/man3/rdma_dereg_mr.3.gz +OLD_FILES+=usr/share/man/man3/rdma_destroy_ep.3.gz +OLD_FILES+=usr/share/man/man3/rdma_destroy_event_channel.3.gz +OLD_FILES+=usr/share/man/man3/rdma_destroy_id.3.gz +OLD_FILES+=usr/share/man/man3/rdma_destroy_qp.3.gz +OLD_FILES+=usr/share/man/man3/rdma_destroy_srq.3.gz +OLD_FILES+=usr/share/man/man3/rdma_disconnect.3.gz +OLD_FILES+=usr/share/man/man3/rdma_event_str.3.gz +OLD_FILES+=usr/share/man/man3/rdma_free_devices.3.gz +OLD_FILES+=usr/share/man/man3/rdma_get_cm_event.3.gz +OLD_FILES+=usr/share/man/man3/rdma_get_devices.3.gz +OLD_FILES+=usr/share/man/man3/rdma_get_dst_port.3.gz +OLD_FILES+=usr/share/man/man3/rdma_get_local_addr.3.gz +OLD_FILES+=usr/share/man/man3/rdma_get_peer_addr.3.gz +OLD_FILES+=usr/share/man/man3/rdma_get_recv_comp.3.gz +OLD_FILES+=usr/share/man/man3/rdma_get_request.3.gz +OLD_FILES+=usr/share/man/man3/rdma_get_send_comp.3.gz +OLD_FILES+=usr/share/man/man3/rdma_get_src_port.3.gz +OLD_FILES+=usr/share/man/man3/rdma_getaddrinfo.3.gz +OLD_FILES+=usr/share/man/man3/rdma_join_multicast.3.gz +OLD_FILES+=usr/share/man/man3/rdma_leave_multicast.3.gz +OLD_FILES+=usr/share/man/man3/rdma_listen.3.gz +OLD_FILES+=usr/share/man/man3/rdma_migrate_id.3.gz +OLD_FILES+=usr/share/man/man3/rdma_notify.3.gz +OLD_FILES+=usr/share/man/man3/rdma_post_read.3.gz +OLD_FILES+=usr/share/man/man3/rdma_post_readv.3.gz +OLD_FILES+=usr/share/man/man3/rdma_post_recv.3.gz +OLD_FILES+=usr/share/man/man3/rdma_post_recvv.3.gz +OLD_FILES+=usr/share/man/man3/rdma_post_send.3.gz +OLD_FILES+=usr/share/man/man3/rdma_post_sendv.3.gz +OLD_FILES+=usr/share/man/man3/rdma_post_ud_send.3.gz +OLD_FILES+=usr/share/man/man3/rdma_post_write.3.gz +OLD_FILES+=usr/share/man/man3/rdma_post_writev.3.gz +OLD_FILES+=usr/share/man/man3/rdma_reg_msgs.3.gz +OLD_FILES+=usr/share/man/man3/rdma_reg_read.3.gz +OLD_FILES+=usr/share/man/man3/rdma_reg_write.3.gz +OLD_FILES+=usr/share/man/man3/rdma_reject.3.gz +OLD_FILES+=usr/share/man/man3/rdma_resolve_addr.3.gz +OLD_FILES+=usr/share/man/man3/rdma_resolve_route.3.gz +OLD_FILES+=usr/share/man/man3/rdma_set_option.3.gz +OLD_FILES+=usr/share/man/man4/mlx4ib.4.gz +OLD_FILES+=usr/share/man/man4/mlx5ib.4.gz +OLD_FILES+=usr/share/man/man8/ibstat.8.gz +.endif + +.if ${MK_OFED_EXTRA} == no +OLD_FILES+=usr/bin/dump_fts +OLD_FILES+=usr/bin/ibaddr +OLD_FILES+=usr/bin/ibcacheedit +OLD_FILES+=usr/bin/ibccconfig 
+OLD_FILES+=usr/bin/ibccquery +OLD_FILES+=usr/bin/iblinkinfo +OLD_FILES+=usr/bin/ibmirror +OLD_FILES+=usr/bin/ibnetdiscover +OLD_FILES+=usr/bin/ibping +OLD_FILES+=usr/bin/ibportstate +OLD_FILES+=usr/bin/ibqueryerrors +OLD_FILES+=usr/bin/ibroute +OLD_FILES+=usr/bin/ibsysstat +OLD_FILES+=usr/bin/ibtracert +OLD_FILES+=usr/bin/opensm +OLD_FILES+=usr/bin/perfquery +OLD_FILES+=usr/bin/saquery +OLD_FILES+=usr/bin/sminfo +OLD_FILES+=usr/bin/smpdump +OLD_FILES+=usr/bin/smpquery +OLD_FILES+=usr/bin/vendstat +OLD_FILES+=usr/share/man/man8/dump_fts.8.gz +OLD_FILES+=usr/share/man/man8/ibaddr.8.gz +OLD_FILES+=usr/share/man/man8/ibcacheedit.8.gz +OLD_FILES+=usr/share/man/man8/ibccconfig.8.gz +OLD_FILES+=usr/share/man/man8/ibccquery.8.gz +OLD_FILES+=usr/share/man/man8/iblinkinfo.8.gz +OLD_FILES+=usr/share/man/man8/ibnetdiscover.8.gz +OLD_FILES+=usr/share/man/man8/ibping.8.gz +OLD_FILES+=usr/share/man/man8/ibportstate.8.gz +OLD_FILES+=usr/share/man/man8/ibqueryerrors.8.gz +OLD_FILES+=usr/share/man/man8/ibroute.8.gz +OLD_FILES+=usr/share/man/man8/ibsysstat.8.gz +OLD_FILES+=usr/share/man/man8/ibtracert.8.gz +OLD_FILES+=usr/share/man/man8/opensm.8.gz +OLD_FILES+=usr/share/man/man8/perfquery.8.gz +OLD_FILES+=usr/share/man/man8/saquery.8.gz +OLD_FILES+=usr/share/man/man8/sminfo.8.gz +OLD_FILES+=usr/share/man/man8/smpdump.8.gz +OLD_FILES+=usr/share/man/man8/smpquery.8.gz +OLD_FILES+=usr/share/man/man8/vendstat.8.gz +.endif + .if ${MK_OPENSSH} == no OLD_FILES+=etc/rc.d/sshd OLD_FILES+=etc/ssh/moduli OLD_FILES+=etc/ssh/ssh_config OLD_FILES+=etc/ssh/sshd_config +OLD_DIRS+=etc/ssh OLD_FILES+=usr/bin/scp OLD_FILES+=usr/bin/sftp OLD_FILES+=usr/bin/slogin OLD_FILES+=usr/bin/ssh OLD_FILES+=usr/bin/ssh-add OLD_FILES+=usr/bin/ssh-agent OLD_FILES+=usr/bin/ssh-copy-id OLD_FILES+=usr/bin/ssh-keygen OLD_FILES+=usr/bin/ssh-keyscan OLD_FILES+=usr/lib/pam_ssh.so OLD_LIBS+=usr/lib/pam_ssh.so.6 OLD_FILES+=usr/lib/private/libssh.a OLD_FILES+=usr/lib/private/libssh.so OLD_LIBS+=usr/lib/private/libssh.so.5 OLD_FILES+=usr/lib/private/libssh_p.a .if ${TARGET_ARCH} == "amd64" || ${TARGET_ARCH} == "powerpc64" OLD_FILES+=usr/lib32/pam_ssh.so OLD_LIBS+=usr/lib32/pam_ssh.so.6 OLD_FILES+=usr/lib32/private/libssh.a OLD_FILES+=usr/lib32/private/libssh.so OLD_LIBS+=usr/lib32/private/libssh.so.5 OLD_FILES+=usr/lib32/private/libssh_p.a .endif OLD_FILES+=usr/libexec/sftp-server OLD_FILES+=usr/libexec/ssh-keysign OLD_FILES+=usr/libexec/ssh-pkcs11-helper OLD_FILES+=usr/sbin/sshd OLD_FILES+=usr/share/man/man1/scp.1.gz OLD_FILES+=usr/share/man/man1/sftp.1.gz OLD_FILES+=usr/share/man/man1/slogin.1.gz OLD_FILES+=usr/share/man/man1/ssh-add.1.gz OLD_FILES+=usr/share/man/man1/ssh-agent.1.gz OLD_FILES+=usr/share/man/man1/ssh-copy-id.1.gz OLD_FILES+=usr/share/man/man1/ssh-keygen.1.gz OLD_FILES+=usr/share/man/man1/ssh-keyscan.1.gz OLD_FILES+=usr/share/man/man1/ssh.1.gz OLD_FILES+=usr/share/man/man5/ssh_config.5.gz OLD_FILES+=usr/share/man/man5/sshd_config.5.gz OLD_FILES+=usr/share/man/man8/pam_ssh.8.gz OLD_FILES+=usr/share/man/man8/sftp-server.8.gz OLD_FILES+=usr/share/man/man8/ssh-keysign.8.gz OLD_FILES+=usr/share/man/man8/ssh-pkcs11-helper.8.gz OLD_FILES+=usr/share/man/man8/sshd.8.gz .endif .if ${MK_OPENSSL} == no OLD_FILES+=etc/rc.d/keyserv .endif .if ${MK_PC_SYSINSTALL} == no # backend-partmanager OLD_FILES+=usr/share/pc-sysinstall/backend-partmanager/create-part.sh OLD_FILES+=usr/share/pc-sysinstall/backend-partmanager/delete-part.sh # backend-query OLD_FILES+=usr/share/pc-sysinstall/backend-query/detect-emulation.sh 
OLD_FILES+=usr/share/pc-sysinstall/backend-query/detect-laptop.sh OLD_FILES+=usr/share/pc-sysinstall/backend-query/detect-nics.sh OLD_FILES+=usr/share/pc-sysinstall/backend-query/disk-info.sh OLD_FILES+=usr/share/pc-sysinstall/backend-query/disk-list.sh OLD_FILES+=usr/share/pc-sysinstall/backend-query/disk-part.sh OLD_FILES+=usr/share/pc-sysinstall/backend-query/enable-net.sh OLD_FILES+=usr/share/pc-sysinstall/backend-query/get-packages.sh OLD_FILES+=usr/share/pc-sysinstall/backend-query/list-components.sh OLD_FILES+=usr/share/pc-sysinstall/backend-query/list-config.sh OLD_FILES+=usr/share/pc-sysinstall/backend-query/list-mirrors.sh OLD_FILES+=usr/share/pc-sysinstall/backend-query/list-packages.sh OLD_FILES+=usr/share/pc-sysinstall/backend-query/list-rsync-backups.sh OLD_FILES+=usr/share/pc-sysinstall/backend-query/list-tzones.sh OLD_FILES+=usr/share/pc-sysinstall/backend-query/query-langs.sh OLD_FILES+=usr/share/pc-sysinstall/backend-query/send-logs.sh OLD_FILES+=usr/share/pc-sysinstall/backend-query/setup-ssh-keys.sh OLD_FILES+=usr/share/pc-sysinstall/backend-query/set-mirror.sh OLD_FILES+=usr/share/pc-sysinstall/backend-query/sys-mem.sh OLD_FILES+=usr/share/pc-sysinstall/backend-query/test-live.sh OLD_FILES+=usr/share/pc-sysinstall/backend-query/test-netup.sh OLD_FILES+=usr/share/pc-sysinstall/backend-query/update-part-list.sh OLD_FILES+=usr/share/pc-sysinstall/backend-query/xkeyboard-layouts.sh OLD_FILES+=usr/share/pc-sysinstall/backend-query/xkeyboard-models.sh OLD_FILES+=usr/share/pc-sysinstall/backend-query/xkeyboard-variants.sh # backend OLD_FILES+=usr/share/pc-sysinstall/backend/functions-bsdlabel.sh OLD_FILES+=usr/share/pc-sysinstall/backend/functions-cleanup.sh OLD_FILES+=usr/share/pc-sysinstall/backend/functions-disk.sh OLD_FILES+=usr/share/pc-sysinstall/backend/functions-extractimage.sh OLD_FILES+=usr/share/pc-sysinstall/backend/functions-ftp.sh OLD_FILES+=usr/share/pc-sysinstall/backend/functions-installcomponents.sh OLD_FILES+=usr/share/pc-sysinstall/backend/functions-installpackages.sh OLD_FILES+=usr/share/pc-sysinstall/backend/functions-localize.sh OLD_FILES+=usr/share/pc-sysinstall/backend/functions-mountdisk.sh OLD_FILES+=usr/share/pc-sysinstall/backend/functions-mountoptical.sh OLD_FILES+=usr/share/pc-sysinstall/backend/functions-networking.sh OLD_FILES+=usr/share/pc-sysinstall/backend/functions-newfs.sh OLD_FILES+=usr/share/pc-sysinstall/backend/functions-parse.sh OLD_FILES+=usr/share/pc-sysinstall/backend/functions-packages.sh OLD_FILES+=usr/share/pc-sysinstall/backend/functions-runcommands.sh OLD_FILES+=usr/share/pc-sysinstall/backend/functions-unmount.sh OLD_FILES+=usr/share/pc-sysinstall/backend/functions-upgrade.sh OLD_FILES+=usr/share/pc-sysinstall/backend/functions-users.sh OLD_FILES+=usr/share/pc-sysinstall/backend/functions.sh OLD_FILES+=usr/share/pc-sysinstall/backend/installimage.sh OLD_FILES+=usr/share/pc-sysinstall/backend/parseconfig.sh OLD_FILES+=usr/share/pc-sysinstall/backend/startautoinstall.sh # conf OLD_FILES+=usr/share/pc-sysinstall/conf/avail-langs OLD_FILES+=usr/share/pc-sysinstall/conf/exclude-from-upgrade OLD_FILES+=usr/share/pc-sysinstall/conf/license/bsd-en.txt OLD_FILES+=usr/share/pc-sysinstall/conf/license/intel-en.txt OLD_FILES+=usr/share/pc-sysinstall/conf/license/nvidia-en.txt OLD_FILES+=usr/share/pc-sysinstall/conf/pc-sysinstall.conf # doc OLD_FILES+=usr/share/pc-sysinstall/doc/help-disk-list OLD_FILES+=usr/share/pc-sysinstall/doc/help-disk-size OLD_FILES+=usr/share/pc-sysinstall/doc/help-index 
OLD_FILES+=usr/share/pc-sysinstall/doc/help-start-autoinstall # examples OLD_FILES+=usr/share/examples/pc-sysinstall/README OLD_FILES+=usr/share/examples/pc-sysinstall/pc-autoinstall.conf OLD_FILES+=usr/share/examples/pc-sysinstall/pcinstall.cfg.fbsd-netinstall OLD_FILES+=usr/share/examples/pc-sysinstall/pcinstall.cfg.geli OLD_FILES+=usr/share/examples/pc-sysinstall/pcinstall.cfg.gmirror OLD_FILES+=usr/share/examples/pc-sysinstall/pcinstall.cfg.netinstall OLD_FILES+=usr/share/examples/pc-sysinstall/pcinstall.cfg.restore OLD_FILES+=usr/share/examples/pc-sysinstall/pcinstall.cfg.rsync OLD_FILES+=usr/share/examples/pc-sysinstall/pcinstall.cfg.upgrade OLD_FILES+=usr/share/examples/pc-sysinstall/pcinstall.cfg.zfs # pc-sysinstall OLD_FILES+=usr/sbin/pc-sysinstall OLD_FILES+=usr/share/man/man8/pc-sysinstall.8.gz OLD_DIRS+=usr/share/pc-sysinstall/backend OLD_DIRS+=usr/share/pc-sysinstall/backend-partmanager OLD_DIRS+=usr/share/pc-sysinstall/backend-query OLD_DIRS+=usr/share/pc-sysinstall/conf/license OLD_DIRS+=usr/share/pc-sysinstall/conf OLD_DIRS+=usr/share/pc-sysinstall/doc OLD_DIRS+=usr/share/pc-sysinstall OLD_DIRS+=usr/share/examples/pc-sysinstall .endif .if ${MK_PF} == no OLD_FILES+=etc/newsyslog.conf.d/pf.conf OLD_FILES+=etc/periodic/security/520.pfdenied OLD_FILES+=etc/pf.os OLD_FILES+=etc/rc.d/ftp-proxy OLD_FILES+=sbin/pfctl OLD_FILES+=sbin/pflogd OLD_FILES+=usr/include/netpfil/pf/pf.h OLD_FILES+=usr/include/netpfil/pf/pf_altq.h OLD_FILES+=usr/include/netpfil/pf/pf_mtag.h OLD_FILES+=usr/lib/snmp_pf.so OLD_LIBS+=usr/lib/snmp_pf.so.6 OLD_FILES+=usr/libexec/tftp-proxy OLD_FILES+=usr/sbin/ftp-proxy OLD_FILES+=usr/share/examples/etc/pf.os OLD_FILES+=usr/share/examples/pf/ackpri OLD_FILES+=usr/share/examples/pf/faq-example1 OLD_FILES+=usr/share/examples/pf/faq-example2 OLD_FILES+=usr/share/examples/pf/faq-example3 OLD_FILES+=usr/share/examples/pf/pf.conf OLD_FILES+=usr/share/examples/pf/queue1 OLD_FILES+=usr/share/examples/pf/queue2 OLD_FILES+=usr/share/examples/pf/queue3 OLD_FILES+=usr/share/examples/pf/queue4 OLD_FILES+=usr/share/examples/pf/spamd OLD_DIRS+=usr/share/examples/pf OLD_FILES+=usr/share/man/man4/pf.4.gz OLD_FILES+=usr/share/man/man4/pflog.4.gz OLD_FILES+=usr/share/man/man4/pfsync.4.gz OLD_FILES+=usr/share/man/man5/pf.conf.5.gz OLD_FILES+=usr/share/man/man5/pf.os.5.gz OLD_FILES+=usr/share/man/man8/ftp-proxy.8.gz OLD_FILES+=usr/share/man/man8/pfctl.8.gz OLD_FILES+=usr/share/man/man8/pflogd.8.gz OLD_FILES+=usr/share/man/man8/tftp-proxy.8.gz OLD_FILES+=usr/share/snmp/defs/pf_tree.def OLD_FILES+=usr/share/snmp/mibs/BEGEMOT-PF-MIB.txt .endif .if ${MK_PKGBOOTSTRAP} == no OLD_FILES+=usr/sbin/pkg OLD_FILES+=usr/share/man/man7/pkg.7.gz .endif .if ${MK_PMC} == no OLD_FILES+=usr/bin/pmcstudy +.if ${TARGET_ARCH} == "amd64" +OLD_FILES+=usr/include/libipt/pt_last_ip.h +OLD_FILES+=usr/include/libipt/intel-pt.h +OLD_FILES+=usr/include/libipt/pt_time.h +OLD_FILES+=usr/include/libipt/pt_cpu.h +OLD_FILES+=usr/include/libipt/pt_compiler.h +OLD_DIRS+=usr/include/libipt +.endif +.if ${TARGET_ARCH} == "aarch64" +OLD_FILES+=usr/include/opencsd/c_api/opencsd_c_api.h +OLD_FILES+=usr/include/opencsd/c_api/ocsd_c_api_cust_impl.h +OLD_FILES+=usr/include/opencsd/c_api/ocsd_c_api_types.h +OLD_FILES+=usr/include/opencsd/c_api/ocsd_c_api_cust_fact.h +OLD_FILES+=usr/include/opencsd/c_api/ocsd_c_api_custom.h +OLD_DIRS+=usr/include/opencsd/c_api +OLD_FILES+=usr/include/opencsd/ocsd_if_types.h +OLD_FILES+=usr/include/opencsd/ptm/trc_dcd_mngr_ptm.h +OLD_FILES+=usr/include/opencsd/ptm/trc_pkt_proc_ptm.h 
+OLD_FILES+=usr/include/opencsd/ptm/trc_cmp_cfg_ptm.h +OLD_FILES+=usr/include/opencsd/ptm/ptm_decoder.h +OLD_FILES+=usr/include/opencsd/ptm/trc_pkt_elem_ptm.h +OLD_FILES+=usr/include/opencsd/ptm/trc_pkt_decode_ptm.h +OLD_FILES+=usr/include/opencsd/ptm/trc_pkt_types_ptm.h +OLD_DIRS+=usr/include/opencsd/ptm +OLD_FILES+=usr/include/opencsd/trc_gen_elem_types.h +OLD_FILES+=usr/include/opencsd/etmv4/trc_pkt_proc_etmv4.h +OLD_FILES+=usr/include/opencsd/etmv4/trc_etmv4_stack_elem.h +OLD_FILES+=usr/include/opencsd/etmv4/etmv4_decoder.h +OLD_FILES+=usr/include/opencsd/etmv4/trc_pkt_elem_etmv4i.h +OLD_FILES+=usr/include/opencsd/etmv4/trc_dcd_mngr_etmv4i.h +OLD_FILES+=usr/include/opencsd/etmv4/trc_pkt_types_etmv4.h +OLD_FILES+=usr/include/opencsd/etmv4/trc_pkt_elem_etmv4d.h +OLD_FILES+=usr/include/opencsd/etmv4/trc_pkt_decode_etmv4i.h +OLD_FILES+=usr/include/opencsd/etmv4/trc_cmp_cfg_etmv4.h +OLD_DIRS+=usr/include/opencsd/etmv4 +OLD_FILES+=usr/include/opencsd/etmv3/trc_pkt_decode_etmv3.h +OLD_FILES+=usr/include/opencsd/etmv3/trc_cmp_cfg_etmv3.h +OLD_FILES+=usr/include/opencsd/etmv3/etmv3_decoder.h +OLD_FILES+=usr/include/opencsd/etmv3/trc_pkt_proc_etmv3.h +OLD_FILES+=usr/include/opencsd/etmv3/trc_pkt_elem_etmv3.h +OLD_FILES+=usr/include/opencsd/etmv3/trc_pkt_types_etmv3.h +OLD_FILES+=usr/include/opencsd/etmv3/trc_dcd_mngr_etmv3.h +OLD_DIRS+=usr/include/opencsd/etmv3 +OLD_FILES+=usr/include/opencsd/trc_pkt_types.h +OLD_FILES+=usr/include/opencsd/stm/trc_pkt_proc_stm.h +OLD_FILES+=usr/include/opencsd/stm/trc_pkt_types_stm.h +OLD_FILES+=usr/include/opencsd/stm/stm_decoder.h +OLD_FILES+=usr/include/opencsd/stm/trc_dcd_mngr_stm.h +OLD_FILES+=usr/include/opencsd/stm/trc_cmp_cfg_stm.h +OLD_FILES+=usr/include/opencsd/stm/trc_pkt_elem_stm.h +OLD_FILES+=usr/include/opencsd/stm/trc_pkt_decode_stm.h +OLD_DIRS+=usr/include/opencsd/stm +OLD_DIRS+=usr/include/opencsd +.endif OLD_FILES+=usr/include/pmc.h OLD_FILES+=usr/include/pmclog.h +OLD_FILES+=usr/include/pmcformat.h +OLD_FILES+=usr/include/libpmcstat.h +.if ${TARGET_ARCH} == "amd64" +OLD_FILES+=usr/lib/libipt.a +OLD_FILES+=usr/lib/libipt.so +OLD_LIBS+=usr/lib/libipt.so.0 +OLD_FILES+=usr/lib/libipt_p.a +.endif +.if ${TARGET_ARCH} == "aarch64" +OLD_FILES+=usr/lib/libopencsd.a +OLD_FILES+=usr/lib/libopencsd.so +OLD_LIBS+=usr/lib/libopencsd.so.0 +OLD_FILES+=usr/lib/libopencsd_p.a +.endif OLD_FILES+=usr/lib/libpmc.a OLD_FILES+=usr/lib/libpmc.so OLD_LIBS+=usr/lib/libpmc.so.5 OLD_FILES+=usr/lib/libpmc_p.a OLD_FILES+=usr/lib32/libpmc.a OLD_FILES+=usr/lib32/libpmc.so OLD_LIBS+=usr/lib32/libpmc.so.5 OLD_FILES+=usr/lib32/libpmc_p.a +OLD_FILES+=usr/sbin/pmc OLD_FILES+=usr/sbin/pmcannotate OLD_FILES+=usr/sbin/pmccontrol OLD_FILES+=usr/sbin/pmcstat -OLD_FILES+=usr/share/man/man1/pmcstudy.1.gz OLD_FILES+=usr/share/man/man3/pmc.3.gz OLD_FILES+=usr/share/man/man3/pmc.atom.3.gz OLD_FILES+=usr/share/man/man3/pmc.atomsilvermont.3.gz OLD_FILES+=usr/share/man/man3/pmc.core.3.gz OLD_FILES+=usr/share/man/man3/pmc.core2.3.gz OLD_FILES+=usr/share/man/man3/pmc.corei7.3.gz OLD_FILES+=usr/share/man/man3/pmc.corei7uc.3.gz OLD_FILES+=usr/share/man/man3/pmc.haswell.3.gz OLD_FILES+=usr/share/man/man3/pmc.haswelluc.3.gz +OLD_FILES+=usr/share/man/man3/pmc.haswellxeon.3.gz OLD_FILES+=usr/share/man/man3/pmc.iaf.3.gz OLD_FILES+=usr/share/man/man3/pmc.ivybridge.3.gz OLD_FILES+=usr/share/man/man3/pmc.ivybridgexeon.3.gz OLD_FILES+=usr/share/man/man3/pmc.k7.3.gz OLD_FILES+=usr/share/man/man3/pmc.k8.3.gz OLD_FILES+=usr/share/man/man3/pmc.mips24k.3.gz OLD_FILES+=usr/share/man/man3/pmc.octeon.3.gz 
OLD_FILES+=usr/share/man/man3/pmc.p4.3.gz OLD_FILES+=usr/share/man/man3/pmc.p5.3.gz OLD_FILES+=usr/share/man/man3/pmc.p6.3.gz OLD_FILES+=usr/share/man/man3/pmc.sandybridge.3.gz OLD_FILES+=usr/share/man/man3/pmc.sandybridgeuc.3.gz OLD_FILES+=usr/share/man/man3/pmc.sandybridgexeon.3.gz OLD_FILES+=usr/share/man/man3/pmc.soft.3.gz OLD_FILES+=usr/share/man/man3/pmc.tsc.3.gz OLD_FILES+=usr/share/man/man3/pmc.ucf.3.gz OLD_FILES+=usr/share/man/man3/pmc.westmere.3.gz OLD_FILES+=usr/share/man/man3/pmc.westmereuc.3.gz OLD_FILES+=usr/share/man/man3/pmc.xscale.3.gz OLD_FILES+=usr/share/man/man3/pmc_allocate.3.gz OLD_FILES+=usr/share/man/man3/pmc_attach.3.gz OLD_FILES+=usr/share/man/man3/pmc_capabilities.3.gz OLD_FILES+=usr/share/man/man3/pmc_configure_logfile.3.gz OLD_FILES+=usr/share/man/man3/pmc_cpuinfo.3.gz OLD_FILES+=usr/share/man/man3/pmc_detach.3.gz OLD_FILES+=usr/share/man/man3/pmc_disable.3.gz OLD_FILES+=usr/share/man/man3/pmc_enable.3.gz OLD_FILES+=usr/share/man/man3/pmc_event_names_of_class.3.gz OLD_FILES+=usr/share/man/man3/pmc_flush_logfile.3.gz OLD_FILES+=usr/share/man/man3/pmc_get_driver_stats.3.gz OLD_FILES+=usr/share/man/man3/pmc_get_msr.3.gz OLD_FILES+=usr/share/man/man3/pmc_init.3.gz OLD_FILES+=usr/share/man/man3/pmc_name_of_capability.3.gz OLD_FILES+=usr/share/man/man3/pmc_name_of_class.3.gz OLD_FILES+=usr/share/man/man3/pmc_name_of_cputype.3.gz OLD_FILES+=usr/share/man/man3/pmc_name_of_disposition.3.gz OLD_FILES+=usr/share/man/man3/pmc_name_of_event.3.gz OLD_FILES+=usr/share/man/man3/pmc_name_of_mode.3.gz OLD_FILES+=usr/share/man/man3/pmc_name_of_state.3.gz OLD_FILES+=usr/share/man/man3/pmc_ncpu.3.gz OLD_FILES+=usr/share/man/man3/pmc_npmc.3.gz OLD_FILES+=usr/share/man/man3/pmc_pmcinfo.3.gz OLD_FILES+=usr/share/man/man3/pmc_read.3.gz OLD_FILES+=usr/share/man/man3/pmc_release.3.gz OLD_FILES+=usr/share/man/man3/pmc_rw.3.gz OLD_FILES+=usr/share/man/man3/pmc_set.3.gz OLD_FILES+=usr/share/man/man3/pmc_start.3.gz OLD_FILES+=usr/share/man/man3/pmc_stop.3.gz OLD_FILES+=usr/share/man/man3/pmc_width.3.gz OLD_FILES+=usr/share/man/man3/pmc_write.3.gz OLD_FILES+=usr/share/man/man3/pmc_writelog.3.gz OLD_FILES+=usr/share/man/man3/pmclog.3.gz OLD_FILES+=usr/share/man/man3/pmclog_close.3.gz OLD_FILES+=usr/share/man/man3/pmclog_feed.3.gz OLD_FILES+=usr/share/man/man3/pmclog_open.3.gz OLD_FILES+=usr/share/man/man3/pmclog_read.3.gz OLD_FILES+=usr/share/man/man8/pmcannotate.8.gz OLD_FILES+=usr/share/man/man8/pmccontrol.8.gz OLD_FILES+=usr/share/man/man8/pmcstat.8.gz +OLD_FILES+=usr/share/man/man8/pmcstudy.8.gz .endif .if ${MK_PORTSNAP} == no OLD_FILES+=etc/portsnap.conf OLD_FILES+=usr/libexec/make_index OLD_FILES+=usr/libexec/phttpget OLD_FILES+=usr/sbin/portsnap OLD_FILES+=usr/share/examples/etc/portsnap.conf OLD_FILES+=usr/share/man/man8/phttpget.8.gz OLD_FILES+=usr/share/man/man8/portsnap.8.gz .endif .if ${MK_PPP} == no OLD_FILES+=etc/newsyslog.conf.d/ppp.conf OLD_FILES+=etc/ppp/ppp.conf OLD_FILES+=etc/syslog.d/ppp.conf OLD_DIRS+=etc/ppp OLD_FILES+=usr/sbin/ppp OLD_FILES+=usr/sbin/pppctl OLD_FILES+=usr/share/man/man8/ppp.8.gz OLD_FILES+=usr/share/man/man8/pppctl.8.gz .endif .if ${MK_PROFILE} == no OLD_FILES+=usr/lib/lib80211_p.a OLD_FILES+=usr/lib/libBlocksRuntime_p.a OLD_FILES+=usr/lib/libalias_cuseeme_p.a OLD_FILES+=usr/lib/libalias_dummy_p.a OLD_FILES+=usr/lib/libalias_ftp_p.a OLD_FILES+=usr/lib/libalias_irc_p.a OLD_FILES+=usr/lib/libalias_nbt_p.a OLD_FILES+=usr/lib/libalias_p.a OLD_FILES+=usr/lib/libalias_pptp_p.a OLD_FILES+=usr/lib/libalias_skinny_p.a OLD_FILES+=usr/lib/libalias_smedia_p.a 
OLD_FILES+=usr/lib/libarchive_p.a OLD_FILES+=usr/lib/libasn1_p.a OLD_FILES+=usr/lib/libauditd_p.a OLD_FILES+=usr/lib/libavl_p.a OLD_FILES+=usr/lib/libbe_p.a OLD_FILES+=usr/lib/libbegemot_p.a OLD_FILES+=usr/lib/libblacklist_p.a OLD_FILES+=usr/lib/libbluetooth_p.a OLD_FILES+=usr/lib/libbsdxml_p.a OLD_FILES+=usr/lib/libbsm_p.a OLD_FILES+=usr/lib/libbsnmp_p.a OLD_FILES+=usr/lib/libbz2_p.a OLD_FILES+=usr/lib/libc++_p.a OLD_FILES+=usr/lib/libc_p.a OLD_FILES+=usr/lib/libcalendar_p.a OLD_FILES+=usr/lib/libcam_p.a OLD_FILES+=usr/lib/libcom_err_p.a OLD_FILES+=usr/lib/libcompat_p.a OLD_FILES+=usr/lib/libcompiler_rt_p.a OLD_FILES+=usr/lib/libcrypt_p.a OLD_FILES+=usr/lib/libcrypto_p.a OLD_FILES+=usr/lib/libctf_p.a OLD_FILES+=usr/lib/libcurses_p.a OLD_FILES+=usr/lib/libcursesw_p.a OLD_FILES+=usr/lib/libcuse_p.a OLD_FILES+=usr/lib/libcxxrt_p.a OLD_FILES+=usr/lib/libdevctl_p.a OLD_FILES+=usr/lib/libdevinfo_p.a OLD_FILES+=usr/lib/libdevstat_p.a OLD_FILES+=usr/lib/libdialog_p.a OLD_FILES+=usr/lib/libdl_p.a OLD_FILES+=usr/lib/libdpv_p.a OLD_FILES+=usr/lib/libdtrace_p.a OLD_FILES+=usr/lib/libdwarf_p.a OLD_FILES+=usr/lib/libedit_p.a OLD_FILES+=usr/lib/libefivar_p.a OLD_FILES+=usr/lib/libelf_p.a OLD_FILES+=usr/lib/libexecinfo_p.a OLD_FILES+=usr/lib/libfetch_p.a OLD_FILES+=usr/lib/libfigpar_p.a OLD_FILES+=usr/lib/libfl_p.a OLD_FILES+=usr/lib/libform_p.a OLD_FILES+=usr/lib/libformw_p.a OLD_FILES+=usr/lib/libgcc_eh_p.a OLD_FILES+=usr/lib/libgcc_p.a OLD_FILES+=usr/lib/libgeom_p.a OLD_FILES+=usr/lib/libgnuregex_p.a OLD_FILES+=usr/lib/libgpio_p.a OLD_FILES+=usr/lib/libgssapi_krb5_p.a OLD_FILES+=usr/lib/libgssapi_ntlm_p.a OLD_FILES+=usr/lib/libgssapi_p.a OLD_FILES+=usr/lib/libgssapi_spnego_p.a OLD_FILES+=usr/lib/libhdb_p.a OLD_FILES+=usr/lib/libheimbase_p.a OLD_FILES+=usr/lib/libheimntlm_p.a OLD_FILES+=usr/lib/libheimsqlite_p.a OLD_FILES+=usr/lib/libhistory_p.a OLD_FILES+=usr/lib/libhx509_p.a OLD_FILES+=usr/lib/libipsec_p.a OLD_FILES+=usr/lib/libipt_p.a OLD_FILES+=usr/lib/libjail_p.a OLD_FILES+=usr/lib/libkadm5clnt_p.a OLD_FILES+=usr/lib/libkadm5srv_p.a OLD_FILES+=usr/lib/libkafs5_p.a OLD_FILES+=usr/lib/libkdc_p.a OLD_FILES+=usr/lib/libkiconv_p.a OLD_FILES+=usr/lib/libkrb5_p.a OLD_FILES+=usr/lib/libkvm_p.a OLD_FILES+=usr/lib/libl_p.a OLD_FILES+=usr/lib/libln_p.a OLD_FILES+=usr/lib/liblzma_p.a OLD_FILES+=usr/lib/libm_p.a OLD_FILES+=usr/lib/libmagic_p.a OLD_FILES+=usr/lib/libmd_p.a OLD_FILES+=usr/lib/libmemstat_p.a OLD_FILES+=usr/lib/libmenu_p.a OLD_FILES+=usr/lib/libmenuw_p.a OLD_FILES+=usr/lib/libmilter_p.a OLD_FILES+=usr/lib/libmp_p.a OLD_FILES+=usr/lib/libmt_p.a OLD_FILES+=usr/lib/libncurses_p.a OLD_FILES+=usr/lib/libncursesw_p.a OLD_FILES+=usr/lib/libnetgraph_p.a OLD_FILES+=usr/lib/libngatm_p.a OLD_FILES+=usr/lib/libnv_p.a OLD_FILES+=usr/lib/libnvpair_p.a OLD_FILES+=usr/lib/libopencsd_p.a OLD_FILES+=usr/lib/libopie_p.a OLD_FILES+=usr/lib/libpanel_p.a OLD_FILES+=usr/lib/libpanelw_p.a OLD_FILES+=usr/lib/libpathconv_p.a OLD_FILES+=usr/lib/libpcap_p.a OLD_FILES+=usr/lib/libpjdlog_p.a OLD_FILES+=usr/lib/libpmc_p.a OLD_FILES+=usr/lib/libprivatebsdstat_p.a OLD_FILES+=usr/lib/libprivatedevdctl_p.a OLD_FILES+=usr/lib/libprivateevent_p.a OLD_FILES+=usr/lib/libprivateheimipcc_p.a OLD_FILES+=usr/lib/libprivateheimipcs_p.a OLD_FILES+=usr/lib/libprivateifconfig_p.a OLD_FILES+=usr/lib/libprivateldns_p.a OLD_FILES+=usr/lib/libprivatesqlite3_p.a OLD_FILES+=usr/lib/libprivatessh_p.a OLD_FILES+=usr/lib/libprivateucl_p.a OLD_FILES+=usr/lib/libprivateunbound_p.a OLD_FILES+=usr/lib/libprivatezstd_p.a OLD_FILES+=usr/lib/libproc_p.a 
OLD_FILES+=usr/lib/libprocstat_p.a OLD_FILES+=usr/lib/libpthread_p.a OLD_FILES+=usr/lib/libradius_p.a OLD_FILES+=usr/lib/libregex_p.a OLD_FILES+=usr/lib/libroken_p.a OLD_FILES+=usr/lib/librpcsvc_p.a OLD_FILES+=usr/lib/librss_p.a OLD_FILES+=usr/lib/librt_p.a OLD_FILES+=usr/lib/librtld_db_p.a OLD_FILES+=usr/lib/libsbuf_p.a OLD_FILES+=usr/lib/libsdp_p.a OLD_FILES+=usr/lib/libsmb_p.a OLD_FILES+=usr/lib/libssl_p.a OLD_FILES+=usr/lib/libstdbuf_p.a OLD_FILES+=usr/lib/libstdc++_p.a OLD_FILES+=usr/lib/libstdthreads_p.a OLD_FILES+=usr/lib/libsupc++_p.a OLD_FILES+=usr/lib/libsysdecode_p.a OLD_FILES+=usr/lib/libtacplus_p.a OLD_FILES+=usr/lib/libtermcap_p.a OLD_FILES+=usr/lib/libtermcapw_p.a OLD_FILES+=usr/lib/libtermlib_p.a OLD_FILES+=usr/lib/libtermlibw_p.a OLD_FILES+=usr/lib/libthr_p.a OLD_FILES+=usr/lib/libthread_db_p.a OLD_FILES+=usr/lib/libtinfo_p.a OLD_FILES+=usr/lib/libtinfow_p.a OLD_FILES+=usr/lib/libufs_p.a OLD_FILES+=usr/lib/libugidfw_p.a OLD_FILES+=usr/lib/libulog_p.a OLD_FILES+=usr/lib/libumem_p.a OLD_FILES+=usr/lib/libusb_p.a OLD_FILES+=usr/lib/libusbhid_p.a OLD_FILES+=usr/lib/libutempter_p.a OLD_FILES+=usr/lib/libutil_p.a OLD_FILES+=usr/lib/libuutil_p.a OLD_FILES+=usr/lib/libvgl_p.a OLD_FILES+=usr/lib/libvmmapi_p.a OLD_FILES+=usr/lib/libwind_p.a OLD_FILES+=usr/lib/libwrap_p.a OLD_FILES+=usr/lib/libxo_p.a OLD_FILES+=usr/lib/liby_p.a OLD_FILES+=usr/lib/libypclnt_p.a OLD_FILES+=usr/lib/libz_p.a OLD_FILES+=usr/lib/libzfs_core_p.a OLD_FILES+=usr/lib/libzfs_p.a OLD_FILES+=usr/lib/private/libldns_p.a OLD_FILES+=usr/lib/private/libssh_p.a .endif .if ${MK_QUOTAS} == no OLD_FILES+=sbin/quotacheck OLD_FILES+=usr/bin/quota OLD_FILES+=usr/sbin/edquota OLD_FILES+=usr/sbin/quotaoff OLD_FILES+=usr/sbin/quotaon OLD_FILES+=usr/sbin/repquota OLD_FILES+=usr/share/man/man1/quota.1.gz OLD_FILES+=usr/share/man/man8/edquota.8.gz OLD_FILES+=usr/share/man/man8/quotacheck.8.gz OLD_FILES+=usr/share/man/man8/quotaoff.8.gz OLD_FILES+=usr/share/man/man8/quotaon.8.gz OLD_FILES+=usr/share/man/man8/repquota.8.gz .endif .if ${MK_RADIUS_SUPPORT} == no OLD_FILES+=usr/lib/libradius.a OLD_FILES+=usr/lib/libradius.so OLD_LIBS+=usr/lib/libradius.so.4 OLD_FILES+=usr/lib/libradius_p.a OLD_FILES+=usr/lib/pam_radius.so OLD_LIBS+=usr/lib/pam_radius.so.6 OLD_FILES+=usr/include/radlib.h OLD_FILES+=usr/include/radlib_vs.h OLD_FILES+=usr/share/man/man3/libradius.3.gz OLD_FILES+=usr/share/man/man3/rad_acct_open.3.gz OLD_FILES+=usr/share/man/man3/rad_add_server.3.gz OLD_FILES+=usr/share/man/man3/rad_add_server_ex.3.gz OLD_FILES+=usr/share/man/man3/rad_auth_open.3.gz OLD_FILES+=usr/share/man/man3/rad_bind_to.3.gz OLD_FILES+=usr/share/man/man3/rad_close.3.gz OLD_FILES+=usr/share/man/man3/rad_config.3.gz OLD_FILES+=usr/share/man/man3/rad_continue_send_request.3.gz OLD_FILES+=usr/share/man/man3/rad_create_request.3.gz OLD_FILES+=usr/share/man/man3/rad_create_response.3.gz OLD_FILES+=usr/share/man/man3/rad_cvt_addr.3.gz OLD_FILES+=usr/share/man/man3/rad_cvt_int.3.gz OLD_FILES+=usr/share/man/man3/rad_cvt_string.3.gz OLD_FILES+=usr/share/man/man3/rad_demangle.3.gz OLD_FILES+=usr/share/man/man3/rad_demangle_mppe_key.3.gz OLD_FILES+=usr/share/man/man3/rad_get_attr.3.gz OLD_FILES+=usr/share/man/man3/rad_get_vendor_attr.3.gz OLD_FILES+=usr/share/man/man3/rad_init_send_request.3.gz OLD_FILES+=usr/share/man/man3/rad_put_addr.3.gz OLD_FILES+=usr/share/man/man3/rad_put_attr.3.gz OLD_FILES+=usr/share/man/man3/rad_put_int.3.gz OLD_FILES+=usr/share/man/man3/rad_put_message_authentic.3.gz OLD_FILES+=usr/share/man/man3/rad_put_string.3.gz 
OLD_FILES+=usr/share/man/man3/rad_put_vendor_addr.3.gz
OLD_FILES+=usr/share/man/man3/rad_put_vendor_attr.3.gz
OLD_FILES+=usr/share/man/man3/rad_put_vendor_int.3.gz
OLD_FILES+=usr/share/man/man3/rad_put_vendor_string.3.gz
OLD_FILES+=usr/share/man/man3/rad_receive_request.3.gz
OLD_FILES+=usr/share/man/man3/rad_request_authenticator.3.gz
OLD_FILES+=usr/share/man/man3/rad_send_request.3.gz
OLD_FILES+=usr/share/man/man3/rad_send_response.3.gz
OLD_FILES+=usr/share/man/man3/rad_server_open.3.gz
OLD_FILES+=usr/share/man/man3/rad_server_secret.3.gz
OLD_FILES+=usr/share/man/man3/rad_strerror.3.gz
OLD_FILES+=usr/share/man/man5/radius.conf.5.gz
OLD_FILES+=usr/share/man/man8/pam_radius.8.gz
.endif
.if ${MK_RBOOTD} == no
OLD_FILES+=usr/libexec/rbootd
OLD_FILES+=usr/share/man/man8/rbootd.8.gz
.endif
.if ${MK_RESCUE} == no
. if exists(${DESTDIR}${TESTSBASE})
RESCUE_DIRS!=find ${DESTDIR}/rescue -type d 2>/dev/null | sed -e 's,^${DESTDIR}/,,'; echo
OLD_DIRS+=${RESCUE_DIRS}
RESCUE_FILES!=find ${DESTDIR}/rescue \! -type d 2>/dev/null | sed -e 's,^${DESTDIR}/,,'; echo
OLD_FILES+=${RESCUE_FILES}
. endif
.endif
.if ${MK_ROUTED} == no
+OLD_FILES+=etc/rc.d/routed
OLD_FILES+=rescue/routed
OLD_FILES+=rescue/rtquery
OLD_FILES+=sbin/routed
OLD_FILES+=sbin/rtquery
OLD_FILES+=usr/share/man/man8/routed.8.gz
OLD_FILES+=usr/share/man/man8/rtquery.8.gz
.endif
.if ${MK_SENDMAIL} == no
OLD_FILES+=etc/mtree/BSD.sendmail.dist
OLD_FILES+=etc/newsyslog.conf.d/sendmail.conf
OLD_FILES+=etc/periodic/daily/150.clean-hoststat
OLD_FILES+=etc/periodic/daily/440.status-mailq
OLD_FILES+=etc/periodic/daily/460.status-mail-rejects
OLD_FILES+=etc/periodic/daily/500.queuerun
+OLD_FILES+=etc/rc.d/sendmail
OLD_FILES+=bin/rmail
OLD_FILES+=usr/bin/vacation
OLD_FILES+=usr/include/libmilter/mfapi.h
OLD_FILES+=usr/include/libmilter/mfdef.h
OLD_DIRS+=usr/include/libmilter
OLD_FILES+=usr/lib/libmilter.a
OLD_FILES+=usr/lib/libmilter.so
OLD_LIBS+=usr/lib/libmilter.so.5
OLD_FILES+=usr/lib/libmilter_p.a
.if ${TARGET_ARCH} == "amd64" || ${TARGET_ARCH} == "powerpc64"
OLD_FILES+=usr/lib32/libmilter.a
OLD_FILES+=usr/lib32/libmilter.so
OLD_LIBS+=usr/lib32/libmilter.so.5
OLD_FILES+=usr/lib32/libmilter_p.a
.endif
OLD_FILES+=usr/libexec/mail.local
OLD_FILES+=usr/libexec/sendmail/sendmail
OLD_FILES+=usr/libexec/smrsh
OLD_FILES+=usr/sbin/editmap
OLD_FILES+=usr/sbin/mailstats
OLD_FILES+=usr/sbin/makemap
OLD_FILES+=usr/sbin/praliases
OLD_FILES+=usr/share/doc/smm/08.sendmailop/paper.ascii.gz
OLD_DIRS+=usr/share/doc/smm/08.sendmailop
OLD_FILES+=usr/share/man/man1/mailq.1.gz
OLD_FILES+=usr/share/man/man1/newaliases.1.gz
OLD_FILES+=usr/share/man/man1/vacation.1.gz
OLD_FILES+=usr/share/man/man5/aliases.5.gz
OLD_FILES+=usr/share/man/man8/editmap.8.gz
OLD_FILES+=usr/share/man/man8/hoststat.8.gz
OLD_FILES+=usr/share/man/man8/mail.local.8.gz
OLD_FILES+=usr/share/man/man8/mailstats.8.gz
OLD_FILES+=usr/share/man/man8/makemap.8.gz
OLD_FILES+=usr/share/man/man8/praliases.8.gz
OLD_FILES+=usr/share/man/man8/purgestat.8.gz
OLD_FILES+=usr/share/man/man8/rmail.8.gz
OLD_FILES+=usr/share/man/man8/sendmail.8.gz
OLD_FILES+=usr/share/man/man8/smrsh.8.gz
OLD_FILES+=usr/share/sendmail/cf/README
OLD_FILES+=usr/share/sendmail/cf/cf/Makefile
OLD_FILES+=usr/share/sendmail/cf/cf/README
OLD_FILES+=usr/share/sendmail/cf/cf/chez.cs.mc
OLD_FILES+=usr/share/sendmail/cf/cf/clientproto.mc
OLD_FILES+=usr/share/sendmail/cf/cf/cs-hpux10.mc
OLD_FILES+=usr/share/sendmail/cf/cf/cs-hpux9.mc
OLD_FILES+=usr/share/sendmail/cf/cf/cs-osf1.mc
OLD_FILES+=usr/share/sendmail/cf/cf/cs-solaris2.mc
OLD_FILES+=usr/share/sendmail/cf/cf/cs-sunos4.1.mc OLD_FILES+=usr/share/sendmail/cf/cf/cs-ultrix4.mc OLD_FILES+=usr/share/sendmail/cf/cf/cyrusproto.mc OLD_FILES+=usr/share/sendmail/cf/cf/generic-bsd4.4.mc OLD_FILES+=usr/share/sendmail/cf/cf/generic-hpux10.mc OLD_FILES+=usr/share/sendmail/cf/cf/generic-hpux9.mc OLD_FILES+=usr/share/sendmail/cf/cf/generic-linux.mc OLD_FILES+=usr/share/sendmail/cf/cf/generic-mpeix.mc OLD_FILES+=usr/share/sendmail/cf/cf/generic-nextstep3.3.mc OLD_FILES+=usr/share/sendmail/cf/cf/generic-osf1.mc OLD_FILES+=usr/share/sendmail/cf/cf/generic-solaris.mc OLD_FILES+=usr/share/sendmail/cf/cf/generic-sunos4.1.mc OLD_FILES+=usr/share/sendmail/cf/cf/generic-ultrix4.mc OLD_FILES+=usr/share/sendmail/cf/cf/huginn.cs.mc OLD_FILES+=usr/share/sendmail/cf/cf/knecht.mc OLD_FILES+=usr/share/sendmail/cf/cf/mail.cs.mc OLD_FILES+=usr/share/sendmail/cf/cf/mail.eecs.mc OLD_FILES+=usr/share/sendmail/cf/cf/mailspool.cs.mc OLD_FILES+=usr/share/sendmail/cf/cf/python.cs.mc OLD_FILES+=usr/share/sendmail/cf/cf/s2k-osf1.mc OLD_FILES+=usr/share/sendmail/cf/cf/s2k-ultrix4.mc OLD_FILES+=usr/share/sendmail/cf/cf/submit.cf OLD_FILES+=usr/share/sendmail/cf/cf/submit.mc OLD_FILES+=usr/share/sendmail/cf/cf/tcpproto.mc OLD_FILES+=usr/share/sendmail/cf/cf/ucbarpa.mc OLD_FILES+=usr/share/sendmail/cf/cf/ucbvax.mc OLD_FILES+=usr/share/sendmail/cf/cf/uucpproto.mc OLD_FILES+=usr/share/sendmail/cf/cf/vangogh.cs.mc OLD_DIRS+=usr/share/sendmail/cf/cf OLD_FILES+=usr/share/sendmail/cf/domain/Berkeley.EDU.m4 OLD_FILES+=usr/share/sendmail/cf/domain/CS.Berkeley.EDU.m4 OLD_FILES+=usr/share/sendmail/cf/domain/EECS.Berkeley.EDU.m4 OLD_FILES+=usr/share/sendmail/cf/domain/S2K.Berkeley.EDU.m4 OLD_FILES+=usr/share/sendmail/cf/domain/berkeley-only.m4 OLD_FILES+=usr/share/sendmail/cf/domain/generic.m4 OLD_DIRS+=usr/share/sendmail/cf/domain OLD_FILES+=usr/share/sendmail/cf/feature/accept_unqualified_senders.m4 OLD_FILES+=usr/share/sendmail/cf/feature/accept_unresolvable_domains.m4 OLD_FILES+=usr/share/sendmail/cf/feature/access_db.m4 OLD_FILES+=usr/share/sendmail/cf/feature/allmasquerade.m4 OLD_FILES+=usr/share/sendmail/cf/feature/always_add_domain.m4 OLD_FILES+=usr/share/sendmail/cf/feature/authinfo.m4 OLD_FILES+=usr/share/sendmail/cf/feature/badmx.m4 OLD_FILES+=usr/share/sendmail/cf/feature/bcc.m4 OLD_FILES+=usr/share/sendmail/cf/feature/bestmx_is_local.m4 OLD_FILES+=usr/share/sendmail/cf/feature/bitdomain.m4 OLD_FILES+=usr/share/sendmail/cf/feature/blacklist_recipients.m4 OLD_FILES+=usr/share/sendmail/cf/feature/block_bad_helo.m4 OLD_FILES+=usr/share/sendmail/cf/feature/compat_check.m4 OLD_FILES+=usr/share/sendmail/cf/feature/conncontrol.m4 OLD_FILES+=usr/share/sendmail/cf/feature/delay_checks.m4 OLD_FILES+=usr/share/sendmail/cf/feature/dnsbl.m4 OLD_FILES+=usr/share/sendmail/cf/feature/domaintable.m4 OLD_FILES+=usr/share/sendmail/cf/feature/enhdnsbl.m4 OLD_FILES+=usr/share/sendmail/cf/feature/generics_entire_domain.m4 OLD_FILES+=usr/share/sendmail/cf/feature/genericstable.m4 OLD_FILES+=usr/share/sendmail/cf/feature/greet_pause.m4 OLD_FILES+=usr/share/sendmail/cf/feature/ldap_routing.m4 OLD_FILES+=usr/share/sendmail/cf/feature/limited_masquerade.m4 OLD_FILES+=usr/share/sendmail/cf/feature/local_lmtp.m4 OLD_FILES+=usr/share/sendmail/cf/feature/local_no_masquerade.m4 OLD_FILES+=usr/share/sendmail/cf/feature/local_procmail.m4 OLD_FILES+=usr/share/sendmail/cf/feature/lookupdotdomain.m4 OLD_FILES+=usr/share/sendmail/cf/feature/loose_relay_check.m4 OLD_FILES+=usr/share/sendmail/cf/feature/mailertable.m4 
OLD_FILES+=usr/share/sendmail/cf/feature/masquerade_entire_domain.m4 OLD_FILES+=usr/share/sendmail/cf/feature/masquerade_envelope.m4 OLD_FILES+=usr/share/sendmail/cf/feature/msp.m4 OLD_FILES+=usr/share/sendmail/cf/feature/mtamark.m4 OLD_FILES+=usr/share/sendmail/cf/feature/no_default_msa.m4 OLD_FILES+=usr/share/sendmail/cf/feature/nocanonify.m4 OLD_FILES+=usr/share/sendmail/cf/feature/nopercenthack.m4 OLD_FILES+=usr/share/sendmail/cf/feature/notsticky.m4 OLD_FILES+=usr/share/sendmail/cf/feature/nouucp.m4 OLD_FILES+=usr/share/sendmail/cf/feature/nullclient.m4 OLD_FILES+=usr/share/sendmail/cf/feature/prefixmod.m4 OLD_FILES+=usr/share/sendmail/cf/feature/preserve_local_plus_detail.m4 OLD_FILES+=usr/share/sendmail/cf/feature/preserve_luser_host.m4 OLD_FILES+=usr/share/sendmail/cf/feature/promiscuous_relay.m4 OLD_FILES+=usr/share/sendmail/cf/feature/queuegroup.m4 OLD_FILES+=usr/share/sendmail/cf/feature/ratecontrol.m4 OLD_FILES+=usr/share/sendmail/cf/feature/redirect.m4 OLD_FILES+=usr/share/sendmail/cf/feature/relay_based_on_MX.m4 OLD_FILES+=usr/share/sendmail/cf/feature/relay_entire_domain.m4 OLD_FILES+=usr/share/sendmail/cf/feature/relay_hosts_only.m4 OLD_FILES+=usr/share/sendmail/cf/feature/relay_local_from.m4 OLD_FILES+=usr/share/sendmail/cf/feature/relay_mail_from.m4 OLD_FILES+=usr/share/sendmail/cf/feature/require_rdns.m4 OLD_FILES+=usr/share/sendmail/cf/feature/smrsh.m4 OLD_FILES+=usr/share/sendmail/cf/feature/stickyhost.m4 OLD_FILES+=usr/share/sendmail/cf/feature/tls_session_features.m4 OLD_FILES+=usr/share/sendmail/cf/feature/use_client_ptr.m4 OLD_FILES+=usr/share/sendmail/cf/feature/use_ct_file.m4 OLD_FILES+=usr/share/sendmail/cf/feature/use_cw_file.m4 OLD_FILES+=usr/share/sendmail/cf/feature/uucpdomain.m4 OLD_FILES+=usr/share/sendmail/cf/feature/virtuser_entire_domain.m4 OLD_FILES+=usr/share/sendmail/cf/feature/virtusertable.m4 OLD_DIRS+=usr/share/sendmail/cf/feature OLD_FILES+=usr/share/sendmail/cf/hack/cssubdomain.m4 OLD_FILES+=usr/share/sendmail/cf/hack/xconnect.m4 OLD_DIRS+=usr/share/sendmail/cf/hack OLD_FILES+=usr/share/sendmail/cf/m4/cf.m4 OLD_FILES+=usr/share/sendmail/cf/m4/cfhead.m4 OLD_FILES+=usr/share/sendmail/cf/m4/proto.m4 OLD_FILES+=usr/share/sendmail/cf/m4/version.m4 OLD_DIRS+=usr/share/sendmail/cf/m4 OLD_FILES+=usr/share/sendmail/cf/mailer/cyrus.m4 OLD_FILES+=usr/share/sendmail/cf/mailer/cyrusv2.m4 OLD_FILES+=usr/share/sendmail/cf/mailer/fax.m4 OLD_FILES+=usr/share/sendmail/cf/mailer/local.m4 OLD_FILES+=usr/share/sendmail/cf/mailer/mail11.m4 OLD_FILES+=usr/share/sendmail/cf/mailer/phquery.m4 OLD_FILES+=usr/share/sendmail/cf/mailer/pop.m4 OLD_FILES+=usr/share/sendmail/cf/mailer/procmail.m4 OLD_FILES+=usr/share/sendmail/cf/mailer/qpage.m4 OLD_FILES+=usr/share/sendmail/cf/mailer/smtp.m4 OLD_FILES+=usr/share/sendmail/cf/mailer/usenet.m4 OLD_FILES+=usr/share/sendmail/cf/mailer/uucp.m4 OLD_DIRS+=usr/share/sendmail/cf/mailer OLD_FILES+=usr/share/sendmail/cf/ostype/a-ux.m4 OLD_FILES+=usr/share/sendmail/cf/ostype/aix3.m4 OLD_FILES+=usr/share/sendmail/cf/ostype/aix4.m4 OLD_FILES+=usr/share/sendmail/cf/ostype/aix5.m4 OLD_FILES+=usr/share/sendmail/cf/ostype/altos.m4 OLD_FILES+=usr/share/sendmail/cf/ostype/amdahl-uts.m4 OLD_FILES+=usr/share/sendmail/cf/ostype/bsd4.3.m4 OLD_FILES+=usr/share/sendmail/cf/ostype/bsd4.4.m4 OLD_FILES+=usr/share/sendmail/cf/ostype/bsdi.m4 OLD_FILES+=usr/share/sendmail/cf/ostype/bsdi1.0.m4 OLD_FILES+=usr/share/sendmail/cf/ostype/bsdi2.0.m4 OLD_FILES+=usr/share/sendmail/cf/ostype/darwin.m4 OLD_FILES+=usr/share/sendmail/cf/ostype/dgux.m4 
OLD_FILES+=usr/share/sendmail/cf/ostype/domainos.m4 OLD_FILES+=usr/share/sendmail/cf/ostype/dragonfly.m4 OLD_FILES+=usr/share/sendmail/cf/ostype/dynix3.2.m4 OLD_FILES+=usr/share/sendmail/cf/ostype/freebsd4.m4 OLD_FILES+=usr/share/sendmail/cf/ostype/freebsd5.m4 OLD_FILES+=usr/share/sendmail/cf/ostype/freebsd6.m4 OLD_FILES+=usr/share/sendmail/cf/ostype/gnu.m4 OLD_FILES+=usr/share/sendmail/cf/ostype/hpux10.m4 OLD_FILES+=usr/share/sendmail/cf/ostype/hpux11.m4 OLD_FILES+=usr/share/sendmail/cf/ostype/hpux9.m4 OLD_FILES+=usr/share/sendmail/cf/ostype/irix4.m4 OLD_FILES+=usr/share/sendmail/cf/ostype/irix5.m4 OLD_FILES+=usr/share/sendmail/cf/ostype/irix6.m4 OLD_FILES+=usr/share/sendmail/cf/ostype/isc4.1.m4 OLD_FILES+=usr/share/sendmail/cf/ostype/linux.m4 OLD_FILES+=usr/share/sendmail/cf/ostype/maxion.m4 OLD_FILES+=usr/share/sendmail/cf/ostype/mklinux.m4 OLD_FILES+=usr/share/sendmail/cf/ostype/mpeix.m4 OLD_FILES+=usr/share/sendmail/cf/ostype/nextstep.m4 OLD_FILES+=usr/share/sendmail/cf/ostype/openbsd.m4 OLD_FILES+=usr/share/sendmail/cf/ostype/osf1.m4 OLD_FILES+=usr/share/sendmail/cf/ostype/powerux.m4 OLD_FILES+=usr/share/sendmail/cf/ostype/ptx2.m4 OLD_FILES+=usr/share/sendmail/cf/ostype/qnx.m4 OLD_FILES+=usr/share/sendmail/cf/ostype/riscos4.5.m4 OLD_FILES+=usr/share/sendmail/cf/ostype/sco-uw-2.1.m4 OLD_FILES+=usr/share/sendmail/cf/ostype/sco3.2.m4 OLD_FILES+=usr/share/sendmail/cf/ostype/sinix.m4 OLD_FILES+=usr/share/sendmail/cf/ostype/solaris11.m4 OLD_FILES+=usr/share/sendmail/cf/ostype/solaris2.m4 OLD_FILES+=usr/share/sendmail/cf/ostype/solaris2.ml.m4 OLD_FILES+=usr/share/sendmail/cf/ostype/solaris2.pre5.m4 OLD_FILES+=usr/share/sendmail/cf/ostype/solaris8.m4 OLD_FILES+=usr/share/sendmail/cf/ostype/sunos3.5.m4 OLD_FILES+=usr/share/sendmail/cf/ostype/sunos4.1.m4 OLD_FILES+=usr/share/sendmail/cf/ostype/svr4.m4 OLD_FILES+=usr/share/sendmail/cf/ostype/ultrix4.m4 OLD_FILES+=usr/share/sendmail/cf/ostype/unicos.m4 OLD_FILES+=usr/share/sendmail/cf/ostype/unicosmk.m4 OLD_FILES+=usr/share/sendmail/cf/ostype/unicosmp.m4 OLD_FILES+=usr/share/sendmail/cf/ostype/unixware7.m4 OLD_FILES+=usr/share/sendmail/cf/ostype/unknown.m4 OLD_FILES+=usr/share/sendmail/cf/ostype/uxpds.m4 OLD_DIRS+=usr/share/sendmail/cf/ostype OLD_FILES+=usr/share/sendmail/cf/sendmail.schema OLD_FILES+=usr/share/sendmail/cf/sh/makeinfo.sh OLD_DIRS+=usr/share/sendmail/cf/sh OLD_FILES+=usr/share/sendmail/cf/siteconfig/uucp.cogsci.m4 OLD_FILES+=usr/share/sendmail/cf/siteconfig/uucp.old.arpa.m4 OLD_FILES+=usr/share/sendmail/cf/siteconfig/uucp.ucbarpa.m4 OLD_FILES+=usr/share/sendmail/cf/siteconfig/uucp.ucbvax.m4 OLD_DIRS+=usr/share/sendmail/cf/siteconfig OLD_DIRS+=usr/share/sendmail/cf OLD_DIRS+=usr/share/sendmail OLD_DIRS+=var/spool/clientmqueue .endif .if ${MK_SERVICESDB} == no OLD_FILES+=var/db/services.db .endif .if ${MK_SHAREDOCS} == no OLD_FILES+=usr/share/doc/pjdfstest/README OLD_DIRS+=usr/share/doc/pjdfstest .endif .if ${MK_SSP} == no OLD_LIBS+=lib/libssp.so.0 OLD_FILES+=usr/include/ssp/ssp.h OLD_FILES+=usr/include/ssp/stdio.h OLD_FILES+=usr/include/ssp/string.h OLD_FILES+=usr/include/ssp/unistd.h OLD_FILES+=usr/lib/libssp.a OLD_FILES+=usr/lib/libssp.so OLD_FILES+=usr/lib/libssp_nonshared.a OLD_FILES+=usr/lib32/libssp.a OLD_FILES+=usr/lib32/libssp.so OLD_LIBS+=usr/lib32/libssp.so.0 OLD_FILES+=usr/lib32/libssp_nonshared.a OLD_FILES+=usr/tests/lib/libc/ssp/Kyuafile OLD_FILES+=usr/tests/lib/libc/ssp/h_fgets OLD_FILES+=usr/tests/lib/libc/ssp/h_getcwd OLD_FILES+=usr/tests/lib/libc/ssp/h_gets OLD_FILES+=usr/tests/lib/libc/ssp/h_memcpy 
OLD_FILES+=usr/tests/lib/libc/ssp/h_memmove OLD_FILES+=usr/tests/lib/libc/ssp/h_memset OLD_FILES+=usr/tests/lib/libc/ssp/h_read OLD_FILES+=usr/tests/lib/libc/ssp/h_readlink OLD_FILES+=usr/tests/lib/libc/ssp/h_snprintf OLD_FILES+=usr/tests/lib/libc/ssp/h_sprintf OLD_FILES+=usr/tests/lib/libc/ssp/h_stpcpy OLD_FILES+=usr/tests/lib/libc/ssp/h_stpncpy OLD_FILES+=usr/tests/lib/libc/ssp/h_strcat OLD_FILES+=usr/tests/lib/libc/ssp/h_strcpy OLD_FILES+=usr/tests/lib/libc/ssp/h_strncat OLD_FILES+=usr/tests/lib/libc/ssp/h_strncpy OLD_FILES+=usr/tests/lib/libc/ssp/h_vsnprintf OLD_FILES+=usr/tests/lib/libc/ssp/h_vsprintf OLD_FILES+=usr/tests/lib/libc/ssp/ssp_test .endif .if ${MK_SYSCONS} == no OLD_FILES+=usr/share/syscons/fonts/INDEX.fonts OLD_FILES+=usr/share/syscons/fonts/armscii8-8x14.fnt OLD_FILES+=usr/share/syscons/fonts/armscii8-8x16.fnt OLD_FILES+=usr/share/syscons/fonts/armscii8-8x8.fnt OLD_FILES+=usr/share/syscons/fonts/cp1251-8x14.fnt OLD_FILES+=usr/share/syscons/fonts/cp1251-8x16.fnt OLD_FILES+=usr/share/syscons/fonts/cp1251-8x8.fnt OLD_FILES+=usr/share/syscons/fonts/cp437-8x14.fnt OLD_FILES+=usr/share/syscons/fonts/cp437-8x16.fnt OLD_FILES+=usr/share/syscons/fonts/cp437-8x8.fnt OLD_FILES+=usr/share/syscons/fonts/cp437-thin-8x16.fnt OLD_FILES+=usr/share/syscons/fonts/cp437-thin-8x8.fnt OLD_FILES+=usr/share/syscons/fonts/cp850-8x14.fnt OLD_FILES+=usr/share/syscons/fonts/cp850-8x16.fnt OLD_FILES+=usr/share/syscons/fonts/cp850-8x8.fnt OLD_FILES+=usr/share/syscons/fonts/cp850-thin-8x16.fnt OLD_FILES+=usr/share/syscons/fonts/cp850-thin-8x8.fnt OLD_FILES+=usr/share/syscons/fonts/cp865-8x14.fnt OLD_FILES+=usr/share/syscons/fonts/cp865-8x16.fnt OLD_FILES+=usr/share/syscons/fonts/cp865-8x8.fnt OLD_FILES+=usr/share/syscons/fonts/cp865-thin-8x16.fnt OLD_FILES+=usr/share/syscons/fonts/cp865-thin-8x8.fnt OLD_FILES+=usr/share/syscons/fonts/cp866-8x14.fnt OLD_FILES+=usr/share/syscons/fonts/cp866-8x16.fnt OLD_FILES+=usr/share/syscons/fonts/cp866-8x8.fnt OLD_FILES+=usr/share/syscons/fonts/cp866b-8x16.fnt OLD_FILES+=usr/share/syscons/fonts/cp866c-8x16.fnt OLD_FILES+=usr/share/syscons/fonts/cp866u-8x14.fnt OLD_FILES+=usr/share/syscons/fonts/cp866u-8x16.fnt OLD_FILES+=usr/share/syscons/fonts/cp866u-8x8.fnt OLD_FILES+=usr/share/syscons/fonts/haik8-8x14.fnt OLD_FILES+=usr/share/syscons/fonts/haik8-8x16.fnt OLD_FILES+=usr/share/syscons/fonts/haik8-8x8.fnt OLD_FILES+=usr/share/syscons/fonts/iso-8x14.fnt OLD_FILES+=usr/share/syscons/fonts/iso-8x16.fnt OLD_FILES+=usr/share/syscons/fonts/iso-8x8.fnt OLD_FILES+=usr/share/syscons/fonts/iso-thin-8x16.fnt OLD_FILES+=usr/share/syscons/fonts/iso02-8x14.fnt OLD_FILES+=usr/share/syscons/fonts/iso02-8x16.fnt OLD_FILES+=usr/share/syscons/fonts/iso02-8x8.fnt OLD_FILES+=usr/share/syscons/fonts/iso04-8x14.fnt OLD_FILES+=usr/share/syscons/fonts/iso04-8x16.fnt OLD_FILES+=usr/share/syscons/fonts/iso04-8x8.fnt OLD_FILES+=usr/share/syscons/fonts/iso04-vga9-8x14.fnt OLD_FILES+=usr/share/syscons/fonts/iso04-vga9-8x16.fnt OLD_FILES+=usr/share/syscons/fonts/iso04-vga9-8x8.fnt OLD_FILES+=usr/share/syscons/fonts/iso04-vga9-wide-8x16.fnt OLD_FILES+=usr/share/syscons/fonts/iso04-wide-8x16.fnt OLD_FILES+=usr/share/syscons/fonts/iso05-8x14.fnt OLD_FILES+=usr/share/syscons/fonts/iso05-8x16.fnt OLD_FILES+=usr/share/syscons/fonts/iso05-8x8.fnt OLD_FILES+=usr/share/syscons/fonts/iso07-8x14.fnt OLD_FILES+=usr/share/syscons/fonts/iso07-8x16.fnt OLD_FILES+=usr/share/syscons/fonts/iso07-8x8.fnt OLD_FILES+=usr/share/syscons/fonts/iso08-8x14.fnt OLD_FILES+=usr/share/syscons/fonts/iso08-8x16.fnt 
OLD_FILES+=usr/share/syscons/fonts/iso08-8x8.fnt OLD_FILES+=usr/share/syscons/fonts/iso09-8x16.fnt OLD_FILES+=usr/share/syscons/fonts/iso15-8x14.fnt OLD_FILES+=usr/share/syscons/fonts/iso15-8x16.fnt OLD_FILES+=usr/share/syscons/fonts/iso15-8x8.fnt OLD_FILES+=usr/share/syscons/fonts/iso15-thin-8x16.fnt OLD_FILES+=usr/share/syscons/fonts/koi8-r-8x14.fnt OLD_FILES+=usr/share/syscons/fonts/koi8-r-8x16.fnt OLD_FILES+=usr/share/syscons/fonts/koi8-r-8x8.fnt OLD_FILES+=usr/share/syscons/fonts/koi8-rb-8x16.fnt OLD_FILES+=usr/share/syscons/fonts/koi8-rc-8x16.fnt OLD_FILES+=usr/share/syscons/fonts/koi8-u-8x14.fnt OLD_FILES+=usr/share/syscons/fonts/koi8-u-8x16.fnt OLD_FILES+=usr/share/syscons/fonts/koi8-u-8x8.fnt OLD_FILES+=usr/share/syscons/fonts/swiss-1131-8x16.fnt OLD_FILES+=usr/share/syscons/fonts/swiss-1251-8x16.fnt OLD_FILES+=usr/share/syscons/fonts/swiss-8x14.fnt OLD_FILES+=usr/share/syscons/fonts/swiss-8x16.fnt OLD_FILES+=usr/share/syscons/fonts/swiss-8x8.fnt OLD_FILES+=usr/share/syscons/keymaps/INDEX.keymaps OLD_FILES+=usr/share/syscons/keymaps/be.iso.acc.kbd OLD_FILES+=usr/share/syscons/keymaps/be.iso.kbd OLD_FILES+=usr/share/syscons/keymaps/bg.bds.ctrlcaps.kbd OLD_FILES+=usr/share/syscons/keymaps/bg.phonetic.ctrlcaps.kbd OLD_FILES+=usr/share/syscons/keymaps/br275.cp850.kbd OLD_FILES+=usr/share/syscons/keymaps/br275.iso.acc.kbd OLD_FILES+=usr/share/syscons/keymaps/br275.iso.kbd OLD_FILES+=usr/share/syscons/keymaps/by.cp1131.kbd OLD_FILES+=usr/share/syscons/keymaps/by.cp1251.kbd OLD_FILES+=usr/share/syscons/keymaps/by.iso5.kbd OLD_FILES+=usr/share/syscons/keymaps/ce.iso2.kbd OLD_FILES+=usr/share/syscons/keymaps/colemak.iso15.acc.kbd OLD_FILES+=usr/share/syscons/keymaps/cs.latin2.qwertz.kbd OLD_FILES+=usr/share/syscons/keymaps/cz.iso2.kbd OLD_FILES+=usr/share/syscons/keymaps/danish.cp865.kbd OLD_FILES+=usr/share/syscons/keymaps/danish.iso.acc.kbd OLD_FILES+=usr/share/syscons/keymaps/danish.iso.kbd OLD_FILES+=usr/share/syscons/keymaps/danish.iso.macbook.kbd OLD_FILES+=usr/share/syscons/keymaps/dutch.iso.acc.kbd OLD_FILES+=usr/share/syscons/keymaps/eee_nordic.kbd OLD_FILES+=usr/share/syscons/keymaps/el.iso07.kbd OLD_FILES+=usr/share/syscons/keymaps/estonian.cp850.kbd OLD_FILES+=usr/share/syscons/keymaps/estonian.iso.kbd OLD_FILES+=usr/share/syscons/keymaps/estonian.iso15.kbd OLD_FILES+=usr/share/syscons/keymaps/finnish.cp850.kbd OLD_FILES+=usr/share/syscons/keymaps/finnish.iso.kbd OLD_FILES+=usr/share/syscons/keymaps/fr.dvorak.acc.kbd OLD_FILES+=usr/share/syscons/keymaps/fr.dvorak.kbd OLD_FILES+=usr/share/syscons/keymaps/fr.iso.acc.kbd OLD_FILES+=usr/share/syscons/keymaps/fr.iso.kbd OLD_FILES+=usr/share/syscons/keymaps/fr.macbook.acc.kbd OLD_FILES+=usr/share/syscons/keymaps/fr_CA.iso.acc.kbd OLD_FILES+=usr/share/syscons/keymaps/german.cp850.kbd OLD_FILES+=usr/share/syscons/keymaps/german.iso.acc.kbd OLD_FILES+=usr/share/syscons/keymaps/german.iso.kbd OLD_FILES+=usr/share/syscons/keymaps/gr.elot.acc.kbd OLD_FILES+=usr/share/syscons/keymaps/gr.us101.acc.kbd OLD_FILES+=usr/share/syscons/keymaps/hr.iso.kbd OLD_FILES+=usr/share/syscons/keymaps/hu.iso2.101keys.kbd OLD_FILES+=usr/share/syscons/keymaps/hu.iso2.102keys.kbd OLD_FILES+=usr/share/syscons/keymaps/hy.armscii-8.kbd OLD_FILES+=usr/share/syscons/keymaps/icelandic.iso.acc.kbd OLD_FILES+=usr/share/syscons/keymaps/icelandic.iso.kbd OLD_FILES+=usr/share/syscons/keymaps/it.iso.kbd OLD_FILES+=usr/share/syscons/keymaps/iw.iso8.kbd OLD_FILES+=usr/share/syscons/keymaps/jp.106.kbd OLD_FILES+=usr/share/syscons/keymaps/jp.106x.kbd 
OLD_FILES+=usr/share/syscons/keymaps/kk.pt154.io.kbd OLD_FILES+=usr/share/syscons/keymaps/kk.pt154.kst.kbd OLD_FILES+=usr/share/syscons/keymaps/latinamerican.iso.acc.kbd OLD_FILES+=usr/share/syscons/keymaps/latinamerican.kbd OLD_FILES+=usr/share/syscons/keymaps/lt.iso4.kbd OLD_FILES+=usr/share/syscons/keymaps/norwegian.dvorak.kbd OLD_FILES+=usr/share/syscons/keymaps/norwegian.iso.kbd OLD_FILES+=usr/share/syscons/keymaps/pl_PL.ISO8859-2.kbd OLD_FILES+=usr/share/syscons/keymaps/pl_PL.dvorak.kbd OLD_FILES+=usr/share/syscons/keymaps/pt.iso.acc.kbd OLD_FILES+=usr/share/syscons/keymaps/pt.iso.kbd OLD_FILES+=usr/share/syscons/keymaps/ru.cp866.kbd OLD_FILES+=usr/share/syscons/keymaps/ru.iso5.kbd OLD_FILES+=usr/share/syscons/keymaps/ru.koi8-r.kbd OLD_FILES+=usr/share/syscons/keymaps/ru.koi8-r.shift.kbd OLD_FILES+=usr/share/syscons/keymaps/ru.koi8-r.win.kbd OLD_FILES+=usr/share/syscons/keymaps/si.iso.kbd OLD_FILES+=usr/share/syscons/keymaps/sk.iso2.kbd OLD_FILES+=usr/share/syscons/keymaps/spanish.dvorak.kbd OLD_FILES+=usr/share/syscons/keymaps/spanish.iso.acc.kbd OLD_FILES+=usr/share/syscons/keymaps/spanish.iso.kbd OLD_FILES+=usr/share/syscons/keymaps/spanish.iso15.acc.kbd OLD_FILES+=usr/share/syscons/keymaps/swedish.cp850.kbd OLD_FILES+=usr/share/syscons/keymaps/swedish.iso.kbd OLD_FILES+=usr/share/syscons/keymaps/swissfrench.cp850.kbd OLD_FILES+=usr/share/syscons/keymaps/swissfrench.iso.acc.kbd OLD_FILES+=usr/share/syscons/keymaps/swissfrench.iso.kbd OLD_FILES+=usr/share/syscons/keymaps/swissgerman.cp850.kbd OLD_FILES+=usr/share/syscons/keymaps/swissgerman.iso.acc.kbd OLD_FILES+=usr/share/syscons/keymaps/swissgerman.iso.kbd OLD_FILES+=usr/share/syscons/keymaps/swissgerman.macbook.acc.kbd OLD_FILES+=usr/share/syscons/keymaps/tr.iso9.q.kbd OLD_FILES+=usr/share/syscons/keymaps/ua.iso5.kbd OLD_FILES+=usr/share/syscons/keymaps/ua.koi8-u.kbd OLD_FILES+=usr/share/syscons/keymaps/ua.koi8-u.shift.alt.kbd OLD_FILES+=usr/share/syscons/keymaps/uk.cp850-ctrl.kbd OLD_FILES+=usr/share/syscons/keymaps/uk.cp850.kbd OLD_FILES+=usr/share/syscons/keymaps/uk.dvorak.kbd OLD_FILES+=usr/share/syscons/keymaps/uk.iso-ctrl.kbd OLD_FILES+=usr/share/syscons/keymaps/uk.iso.kbd OLD_FILES+=usr/share/syscons/keymaps/us.dvorak.kbd OLD_FILES+=usr/share/syscons/keymaps/us.dvorakl.kbd OLD_FILES+=usr/share/syscons/keymaps/us.dvorakp.kbd OLD_FILES+=usr/share/syscons/keymaps/us.dvorakr.kbd OLD_FILES+=usr/share/syscons/keymaps/us.dvorakx.kbd OLD_FILES+=usr/share/syscons/keymaps/us.emacs.kbd OLD_FILES+=usr/share/syscons/keymaps/us.iso.acc.kbd OLD_FILES+=usr/share/syscons/keymaps/us.iso.kbd OLD_FILES+=usr/share/syscons/keymaps/us.pc-ctrl.kbd OLD_FILES+=usr/share/syscons/keymaps/us.unix.kbd OLD_FILES+=usr/share/syscons/scrnmaps/armscii8-2haik8.scm OLD_FILES+=usr/share/syscons/scrnmaps/iso-8859-1_to_cp437.scm OLD_FILES+=usr/share/syscons/scrnmaps/iso-8859-4_for_vga9.scm OLD_FILES+=usr/share/syscons/scrnmaps/iso-8859-7_to_cp437.scm OLD_FILES+=usr/share/syscons/scrnmaps/koi8-r2cp866.scm OLD_FILES+=usr/share/syscons/scrnmaps/koi8-u2cp866u.scm OLD_FILES+=usr/share/syscons/scrnmaps/us-ascii_to_cp437.scm OLD_DIRS+=usr/share/syscons/fonts OLD_DIRS+=usr/share/syscons/scrnmaps OLD_DIRS+=usr/share/syscons/keymaps OLD_DIRS+=usr/share/syscons .endif .if ${MK_TALK} == no OLD_FILES+=usr/bin/talk OLD_FILES+=usr/libexec/ntalkd OLD_FILES+=usr/share/man/man1/talk.1.gz OLD_FILES+=usr/share/man/man8/talkd.8.gz .endif .if ${MK_TCSH} == no OLD_FILES+=.cshrc OLD_FILES+=etc/csh.cshrc OLD_FILES+=etc/csh.login OLD_FILES+=etc/csh.logout OLD_FILES+=bin/csh 
OLD_FILES+=bin/tcsh OLD_FILES+=rescue/csh OLD_FILES+=rescue/tcsh OLD_FILES+=root/.cshrc OLD_FILES+=root/.login OLD_FILES+=usr/share/examples/etc/csh.cshrc OLD_FILES+=usr/share/examples/etc/csh.login OLD_FILES+=usr/share/examples/etc/csh.logout OLD_FILES+=usr/share/examples/tcsh/complete.tcsh OLD_FILES+=usr/share/examples/tcsh/csh-mode.el OLD_DIRS+=usr/share/examples/tcsh OLD_FILES+=usr/share/man/man1/csh.1.gz OLD_FILES+=usr/share/man/man1/tcsh.1.gz OLD_FILES+=usr/share/nls/de_AT.ISO8859-1/tcsh.cat OLD_FILES+=usr/share/nls/de_AT.ISO8859-15/tcsh.cat OLD_FILES+=usr/share/nls/de_AT.UTF-8/tcsh.cat OLD_FILES+=usr/share/nls/de_CH.ISO8859-1/tcsh.cat OLD_FILES+=usr/share/nls/de_CH.ISO8859-15/tcsh.cat OLD_FILES+=usr/share/nls/de_CH.UTF-8/tcsh.cat OLD_FILES+=usr/share/nls/de_DE.ISO8859-1/tcsh.cat OLD_FILES+=usr/share/nls/de_DE.ISO8859-15/tcsh.cat OLD_FILES+=usr/share/nls/de_DE.UTF-8/tcsh.cat OLD_FILES+=usr/share/nls/el_GR.ISO8859-7/tcsh.cat OLD_FILES+=usr/share/nls/el_GR.UTF-8/tcsh.cat OLD_FILES+=usr/share/nls/es_ES.ISO8859-1/tcsh.cat OLD_FILES+=usr/share/nls/es_ES.ISO8859-15/tcsh.cat OLD_FILES+=usr/share/nls/es_ES.UTF-8/tcsh.cat OLD_FILES+=usr/share/nls/et_EE.ISO8859-15/tcsh.cat OLD_FILES+=usr/share/nls/et_EE.UTF-8/tcsh.cat OLD_FILES+=usr/share/nls/fi_FI.ISO8859-1/tcsh.cat OLD_FILES+=usr/share/nls/fi_FI.ISO8859-15/tcsh.cat OLD_FILES+=usr/share/nls/fi_FI.UTF-8/tcsh.cat OLD_FILES+=usr/share/nls/fr_BE.ISO8859-1/tcsh.cat OLD_FILES+=usr/share/nls/fr_BE.ISO8859-15/tcsh.cat OLD_FILES+=usr/share/nls/fr_BE.UTF-8/tcsh.cat OLD_FILES+=usr/share/nls/fr_CA.ISO8859-1/tcsh.cat OLD_FILES+=usr/share/nls/fr_CA.ISO8859-15/tcsh.cat OLD_FILES+=usr/share/nls/fr_CA.UTF-8/tcsh.cat OLD_FILES+=usr/share/nls/fr_CH.ISO8859-1/tcsh.cat OLD_FILES+=usr/share/nls/fr_CH.ISO8859-15/tcsh.cat OLD_FILES+=usr/share/nls/fr_CH.UTF-8/tcsh.cat OLD_FILES+=usr/share/nls/fr_FR.ISO8859-1/tcsh.cat OLD_FILES+=usr/share/nls/fr_FR.ISO8859-15/tcsh.cat OLD_FILES+=usr/share/nls/fr_FR.UTF-8/tcsh.cat OLD_FILES+=usr/share/nls/it_CH.ISO8859-1/tcsh.cat OLD_FILES+=usr/share/nls/it_CH.ISO8859-15/tcsh.cat OLD_FILES+=usr/share/nls/it_CH.UTF-8/tcsh.cat OLD_FILES+=usr/share/nls/it_IT.ISO8859-1/tcsh.cat OLD_FILES+=usr/share/nls/it_IT.ISO8859-15/tcsh.cat OLD_FILES+=usr/share/nls/it_IT.UTF-8/tcsh.cat OLD_FILES+=usr/share/nls/ja_JP.SJIS/tcsh.cat OLD_FILES+=usr/share/nls/ja_JP.UTF-8/tcsh.cat OLD_FILES+=usr/share/nls/ja_JP.eucJP/tcsh.cat OLD_FILES+=usr/share/nls/ru_RU.CP1251/tcsh.cat OLD_FILES+=usr/share/nls/ru_RU.CP866/tcsh.cat OLD_FILES+=usr/share/nls/ru_RU.ISO8859-5/tcsh.cat OLD_FILES+=usr/share/nls/ru_RU.KOI8-R/tcsh.cat OLD_FILES+=usr/share/nls/ru_RU.UTF-8/tcsh.cat OLD_FILES+=usr/share/nls/uk_UA.ISO8859-5/tcsh.cat OLD_FILES+=usr/share/nls/uk_UA.KOI8-U/tcsh.cat OLD_FILES+=usr/share/nls/uk_UA.UTF-8/tcsh.cat .endif .if ${MK_TELNET} == no OLD_FILES+=etc/pam.d/telnetd OLD_FILES+=usr/bin/telnet OLD_FILES+=usr/libexec/telnetd OLD_FILES+=usr/share/man/man1/telnet.1.gz OLD_FILES+=usr/share/man/man8/telnetd.8.gz .endif .if ${MK_TESTS} == yes OLD_FILES+=usr/bin/atf-sh OLD_FILES+=usr/include/atf-c++/config.hpp OLD_FILES+=usr/include/atf-c/config.h OLD_LIBS+=usr/lib/libatf-c++.a OLD_LIBS+=usr/lib/libatf-c++.so OLD_LIBS+=usr/lib/libatf-c++.so.1 OLD_LIBS+=usr/lib/libatf-c++.so.2 OLD_LIBS+=usr/lib/libatf-c++_p.a OLD_LIBS+=usr/lib/libatf-c.a OLD_LIBS+=usr/lib/libatf-c.so OLD_LIBS+=usr/lib/libatf-c.so.1 OLD_LIBS+=usr/lib/libatf-c_p.a OLD_LIBS+=usr/lib/private/libatf-c.so.0 OLD_LIBS+=usr/lib/private/libatf-c++.so.1 .if ${TARGET_ARCH} == "amd64" || ${TARGET_ARCH} == "powerpc64" 
OLD_LIBS+=usr/lib32/libatf-c++.a OLD_LIBS+=usr/lib32/libatf-c++.so OLD_LIBS+=usr/lib32/libatf-c++.so.1 OLD_LIBS+=usr/lib32/libatf-c++.so.2 OLD_LIBS+=usr/lib32/libatf-c++_p.a OLD_LIBS+=usr/lib32/libatf-c.a OLD_LIBS+=usr/lib32/libatf-c.so OLD_LIBS+=usr/lib32/libatf-c.so.1 OLD_LIBS+=usr/lib32/libatf-c_p.a OLD_LIBS+=usr/lib32/private/libatf-c.so.0 OLD_LIBS+=usr/lib32/private/libatf-c++.so.1 .endif OLD_FILES+=usr/libdata/pkgconfig/atf-c++.pc OLD_FILES+=usr/libdata/pkgconfig/atf-c.pc OLD_FILES+=usr/libdata/pkgconfig/atf-sh.pc OLD_FILES+=usr/share/aclocal/atf-c++.m4 OLD_FILES+=usr/share/aclocal/atf-c.m4 OLD_FILES+=usr/share/aclocal/atf-common.m4 OLD_FILES+=usr/share/aclocal/atf-sh.m4 OLD_DIRS+=usr/share/aclocal OLD_DIRS+=usr/tests/bin/chown OLD_FILES+=usr/tests/bin/chown/Kyuafile OLD_FILES+=usr/tests/bin/chown/chown-f_test OLD_FILES+=usr/tests/bin/chown/units_basics OLD_FILES+=usr/tests/bin/date/legacy_test OLD_FILES+=usr/tests/bin/sh/legacy_test OLD_FILES+=usr/tests/usr.bin/atf/Kyuafile OLD_FILES+=usr/tests/usr.bin/atf/atf-sh/Kyuafile OLD_FILES+=usr/tests/usr.bin/atf/atf-sh/atf_check_test OLD_FILES+=usr/tests/usr.bin/atf/atf-sh/config_test OLD_FILES+=usr/tests/usr.bin/atf/atf-sh/integration_test OLD_FILES+=usr/tests/usr.bin/atf/atf-sh/misc_helpers OLD_FILES+=usr/tests/usr.bin/atf/atf-sh/normalize_test OLD_FILES+=usr/tests/usr.bin/atf/atf-sh/tc_test OLD_FILES+=usr/tests/usr.bin/atf/atf-sh/tp_test OLD_DIRS+=usr/tests/usr.bin/atf/atf-sh OLD_DIRS+=usr/tests/usr.bin/atf OLD_FILES+=usr/tests/lib/atf/libatf-c/test_helpers_test OLD_FILES+=usr/tests/lib/atf/test-programs/fork_test OLD_FILES+=usr/tests/lib/atf/libatf-c++/application_test OLD_FILES+=usr/tests/lib/atf/libatf-c++/config_test OLD_FILES+=usr/tests/lib/atf/libatf-c++/detail/expand_test OLD_FILES+=usr/tests/lib/atf/libatf-c++/detail/parser_test OLD_FILES+=usr/tests/lib/atf/libatf-c++/detail/sanity_test OLD_FILES+=usr/tests/lib/atf/libatf-c++/detail/ui_test OLD_FILES+=usr/tests/lib/atf/libatf-c++/env_test OLD_FILES+=usr/tests/lib/atf/libatf-c++/exceptions_test OLD_FILES+=usr/tests/lib/atf/libatf-c++/expand_test OLD_FILES+=usr/tests/lib/atf/libatf-c++/fs_test OLD_FILES+=usr/tests/lib/atf/libatf-c++/parser_test OLD_FILES+=usr/tests/lib/atf/libatf-c++/process_test OLD_FILES+=usr/tests/lib/atf/libatf-c++/sanity_test OLD_FILES+=usr/tests/lib/atf/libatf-c++/pkg_config_test OLD_FILES+=usr/tests/lib/atf/libatf-c++/text_test OLD_FILES+=usr/tests/lib/atf/libatf-c++/ui_test OLD_FILES+=usr/tests/lib/atf/libatf-c/config_test OLD_FILES+=usr/tests/lib/atf/libatf-c/dynstr_test OLD_FILES+=usr/tests/lib/atf/libatf-c/env_test OLD_FILES+=usr/tests/lib/atf/libatf-c/fs_test OLD_FILES+=usr/tests/lib/atf/libatf-c/list_test OLD_FILES+=usr/tests/lib/atf/libatf-c/map_test OLD_FILES+=usr/tests/lib/atf/libatf-c/pkg_config_test OLD_FILES+=usr/tests/lib/atf/libatf-c/process_helpers OLD_FILES+=usr/tests/lib/atf/libatf-c/process_test OLD_FILES+=usr/tests/lib/atf/libatf-c/sanity_test OLD_FILES+=usr/tests/lib/atf/libatf-c/text_test OLD_FILES+=usr/tests/lib/atf/libatf-c/user_test .if ${MK_MAKE} == yes OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd/legacy_test OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd/Kyuafile OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd/Makefile.test OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd/expected.status.1 OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd/expected.status.2 OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd/expected.status.3 OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd/expected.status.4 
OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd/expected.status.5 OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd/expected.status.6 OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd/expected.status.7 OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd/expected.stderr.1 OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd/expected.stderr.2 OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd/expected.stderr.3 OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd/expected.stderr.4 OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd/expected.stderr.5 OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd/expected.stderr.6 OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd/expected.stderr.7 OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd/expected.stdout.1 OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd/expected.stdout.2 OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd/expected.stdout.3 OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd/expected.stdout.4 OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd/expected.stdout.5 OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd/expected.stdout.6 OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd/expected.stdout.7 OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd/libtest.a OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd_mod/legacy_test OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd_mod/Kyuafile OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd_mod/Makefile.test OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd_mod/expected.status.1 OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd_mod/expected.status.2 OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd_mod/expected.status.3 OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd_mod/expected.status.4 OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd_mod/expected.status.5 OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd_mod/expected.status.6 OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd_mod/expected.status.7 OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd_mod/expected.stderr.1 OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd_mod/expected.stderr.2 OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd_mod/expected.stderr.3 OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd_mod/expected.stderr.4 OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd_mod/expected.stderr.5 OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd_mod/expected.stderr.6 OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd_mod/expected.stderr.7 OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd_mod/expected.stdout.1 OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd_mod/expected.stdout.2 OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd_mod/expected.stdout.3 OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd_mod/expected.stdout.4 OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd_mod/expected.stdout.5 OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd_mod/expected.stdout.6 OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd_mod/expected.stdout.7 OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd_mod/libtest.a OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_oldbsd/legacy_test OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_oldbsd/Kyuafile OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_oldbsd/Makefile.test OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_oldbsd/expected.status.1 OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_oldbsd/expected.status.2 OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_oldbsd/expected.status.3 
OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_oldbsd/expected.status.4 OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_oldbsd/expected.status.5 OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_oldbsd/expected.status.6 OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_oldbsd/expected.status.7 OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_oldbsd/expected.stderr.1 OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_oldbsd/expected.stderr.2 OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_oldbsd/expected.stderr.3 OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_oldbsd/expected.stderr.4 OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_oldbsd/expected.stderr.5 OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_oldbsd/expected.stderr.6 OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_oldbsd/expected.stderr.7 OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_oldbsd/expected.stdout.1 OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_oldbsd/expected.stdout.2 OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_oldbsd/expected.stdout.3 OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_oldbsd/expected.stdout.4 OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_oldbsd/expected.stdout.5 OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_oldbsd/expected.stdout.6 OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_oldbsd/expected.stdout.7 OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_oldbsd/libtest.a OLD_FILES+=usr/tests/usr.bin/make/archives/Kyuafile OLD_FILES+=usr/tests/usr.bin/make/basic/t0/legacy_test OLD_FILES+=usr/tests/usr.bin/make/basic/t0/Kyuafile OLD_FILES+=usr/tests/usr.bin/make/basic/t0/expected.status.1 OLD_FILES+=usr/tests/usr.bin/make/basic/t0/expected.stderr.1 OLD_FILES+=usr/tests/usr.bin/make/basic/t0/expected.stdout.1 OLD_FILES+=usr/tests/usr.bin/make/basic/t1/legacy_test OLD_FILES+=usr/tests/usr.bin/make/basic/t1/Kyuafile OLD_FILES+=usr/tests/usr.bin/make/basic/t1/Makefile.test OLD_FILES+=usr/tests/usr.bin/make/basic/t1/expected.status.1 OLD_FILES+=usr/tests/usr.bin/make/basic/t1/expected.stderr.1 OLD_FILES+=usr/tests/usr.bin/make/basic/t1/expected.stdout.1 OLD_FILES+=usr/tests/usr.bin/make/basic/t2/legacy_test OLD_FILES+=usr/tests/usr.bin/make/basic/t2/Kyuafile OLD_FILES+=usr/tests/usr.bin/make/basic/t2/Makefile.test OLD_FILES+=usr/tests/usr.bin/make/basic/t2/expected.status.1 OLD_FILES+=usr/tests/usr.bin/make/basic/t2/expected.stderr.1 OLD_FILES+=usr/tests/usr.bin/make/basic/t2/expected.stdout.1 OLD_FILES+=usr/tests/usr.bin/make/basic/t3/legacy_test OLD_FILES+=usr/tests/usr.bin/make/basic/t3/Kyuafile OLD_FILES+=usr/tests/usr.bin/make/basic/t3/expected.status.1 OLD_FILES+=usr/tests/usr.bin/make/basic/t3/expected.stderr.1 OLD_FILES+=usr/tests/usr.bin/make/basic/t3/expected.stdout.1 OLD_FILES+=usr/tests/usr.bin/make/basic/Kyuafile OLD_FILES+=usr/tests/usr.bin/make/execution/ellipsis/legacy_test OLD_FILES+=usr/tests/usr.bin/make/execution/ellipsis/Kyuafile OLD_FILES+=usr/tests/usr.bin/make/execution/ellipsis/Makefile.test OLD_FILES+=usr/tests/usr.bin/make/execution/ellipsis/expected.status.1 OLD_FILES+=usr/tests/usr.bin/make/execution/ellipsis/expected.stderr.1 OLD_FILES+=usr/tests/usr.bin/make/execution/ellipsis/expected.stdout.1 OLD_FILES+=usr/tests/usr.bin/make/execution/empty/legacy_test OLD_FILES+=usr/tests/usr.bin/make/execution/empty/Kyuafile OLD_FILES+=usr/tests/usr.bin/make/execution/empty/Makefile.test OLD_FILES+=usr/tests/usr.bin/make/execution/empty/expected.status.1 OLD_FILES+=usr/tests/usr.bin/make/execution/empty/expected.stderr.1 OLD_FILES+=usr/tests/usr.bin/make/execution/empty/expected.stdout.1 
OLD_FILES+=usr/tests/usr.bin/make/execution/joberr/legacy_test OLD_FILES+=usr/tests/usr.bin/make/execution/joberr/Kyuafile OLD_FILES+=usr/tests/usr.bin/make/execution/joberr/Makefile.test OLD_FILES+=usr/tests/usr.bin/make/execution/joberr/expected.status.1 OLD_FILES+=usr/tests/usr.bin/make/execution/joberr/expected.stderr.1 OLD_FILES+=usr/tests/usr.bin/make/execution/joberr/expected.stdout.1 OLD_FILES+=usr/tests/usr.bin/make/execution/plus/legacy_test OLD_FILES+=usr/tests/usr.bin/make/execution/plus/Kyuafile OLD_FILES+=usr/tests/usr.bin/make/execution/plus/Makefile.test OLD_FILES+=usr/tests/usr.bin/make/execution/plus/expected.status.1 OLD_FILES+=usr/tests/usr.bin/make/execution/plus/expected.stderr.1 OLD_FILES+=usr/tests/usr.bin/make/execution/plus/expected.stdout.1 OLD_FILES+=usr/tests/usr.bin/make/execution/Kyuafile OLD_FILES+=usr/tests/usr.bin/make/shell/builtin/legacy_test OLD_FILES+=usr/tests/usr.bin/make/shell/builtin/Kyuafile OLD_FILES+=usr/tests/usr.bin/make/shell/builtin/Makefile.test OLD_FILES+=usr/tests/usr.bin/make/shell/builtin/expected.status.1 OLD_FILES+=usr/tests/usr.bin/make/shell/builtin/expected.status.2 OLD_FILES+=usr/tests/usr.bin/make/shell/builtin/expected.stderr.1 OLD_FILES+=usr/tests/usr.bin/make/shell/builtin/expected.stderr.2 OLD_FILES+=usr/tests/usr.bin/make/shell/builtin/expected.stdout.1 OLD_FILES+=usr/tests/usr.bin/make/shell/builtin/expected.stdout.2 OLD_FILES+=usr/tests/usr.bin/make/shell/builtin/sh OLD_FILES+=usr/tests/usr.bin/make/shell/meta/legacy_test OLD_FILES+=usr/tests/usr.bin/make/shell/meta/Kyuafile OLD_FILES+=usr/tests/usr.bin/make/shell/meta/Makefile.test OLD_FILES+=usr/tests/usr.bin/make/shell/meta/expected.status.1 OLD_FILES+=usr/tests/usr.bin/make/shell/meta/expected.status.2 OLD_FILES+=usr/tests/usr.bin/make/shell/meta/expected.stderr.1 OLD_FILES+=usr/tests/usr.bin/make/shell/meta/expected.stderr.2 OLD_FILES+=usr/tests/usr.bin/make/shell/meta/expected.stdout.1 OLD_FILES+=usr/tests/usr.bin/make/shell/meta/expected.stdout.2 OLD_FILES+=usr/tests/usr.bin/make/shell/meta/sh OLD_FILES+=usr/tests/usr.bin/make/shell/path/legacy_test OLD_FILES+=usr/tests/usr.bin/make/shell/path/Kyuafile OLD_FILES+=usr/tests/usr.bin/make/shell/path/Makefile.test OLD_FILES+=usr/tests/usr.bin/make/shell/path/expected.status.1 OLD_FILES+=usr/tests/usr.bin/make/shell/path/expected.status.2 OLD_FILES+=usr/tests/usr.bin/make/shell/path/expected.stderr.1 OLD_FILES+=usr/tests/usr.bin/make/shell/path/expected.stderr.2 OLD_FILES+=usr/tests/usr.bin/make/shell/path/expected.stdout.1 OLD_FILES+=usr/tests/usr.bin/make/shell/path/expected.stdout.2 OLD_FILES+=usr/tests/usr.bin/make/shell/path/sh OLD_FILES+=usr/tests/usr.bin/make/shell/path_select/legacy_test OLD_FILES+=usr/tests/usr.bin/make/shell/path_select/Kyuafile OLD_FILES+=usr/tests/usr.bin/make/shell/path_select/Makefile.test OLD_FILES+=usr/tests/usr.bin/make/shell/path_select/expected.status.1 OLD_FILES+=usr/tests/usr.bin/make/shell/path_select/expected.status.2 OLD_FILES+=usr/tests/usr.bin/make/shell/path_select/expected.stderr.1 OLD_FILES+=usr/tests/usr.bin/make/shell/path_select/expected.stderr.2 OLD_FILES+=usr/tests/usr.bin/make/shell/path_select/expected.stdout.1 OLD_FILES+=usr/tests/usr.bin/make/shell/path_select/expected.stdout.2 OLD_FILES+=usr/tests/usr.bin/make/shell/path_select/shell OLD_FILES+=usr/tests/usr.bin/make/shell/replace/legacy_test OLD_FILES+=usr/tests/usr.bin/make/shell/replace/Kyuafile OLD_FILES+=usr/tests/usr.bin/make/shell/replace/Makefile.test 
OLD_FILES+=usr/tests/usr.bin/make/shell/replace/expected.status.1 OLD_FILES+=usr/tests/usr.bin/make/shell/replace/expected.status.2 OLD_FILES+=usr/tests/usr.bin/make/shell/replace/expected.stderr.1 OLD_FILES+=usr/tests/usr.bin/make/shell/replace/expected.stderr.2 OLD_FILES+=usr/tests/usr.bin/make/shell/replace/expected.stdout.1 OLD_FILES+=usr/tests/usr.bin/make/shell/replace/expected.stdout.2 OLD_FILES+=usr/tests/usr.bin/make/shell/replace/shell OLD_FILES+=usr/tests/usr.bin/make/shell/select/legacy_test OLD_FILES+=usr/tests/usr.bin/make/shell/select/Kyuafile OLD_FILES+=usr/tests/usr.bin/make/shell/select/Makefile.test OLD_FILES+=usr/tests/usr.bin/make/shell/select/expected.status.1 OLD_FILES+=usr/tests/usr.bin/make/shell/select/expected.status.2 OLD_FILES+=usr/tests/usr.bin/make/shell/select/expected.stderr.1 OLD_FILES+=usr/tests/usr.bin/make/shell/select/expected.stderr.2 OLD_FILES+=usr/tests/usr.bin/make/shell/select/expected.stdout.1 OLD_FILES+=usr/tests/usr.bin/make/shell/select/expected.stdout.2 OLD_FILES+=usr/tests/usr.bin/make/shell/Kyuafile OLD_FILES+=usr/tests/usr.bin/make/suffixes/basic/legacy_test OLD_FILES+=usr/tests/usr.bin/make/suffixes/basic/Kyuafile OLD_FILES+=usr/tests/usr.bin/make/suffixes/basic/Makefile.test OLD_FILES+=usr/tests/usr.bin/make/suffixes/basic/TEST1.a OLD_FILES+=usr/tests/usr.bin/make/suffixes/basic/expected.status.1 OLD_FILES+=usr/tests/usr.bin/make/suffixes/basic/expected.stderr.1 OLD_FILES+=usr/tests/usr.bin/make/suffixes/basic/expected.stdout.1 OLD_FILES+=usr/tests/usr.bin/make/suffixes/src_wild1/legacy_test OLD_FILES+=usr/tests/usr.bin/make/suffixes/src_wild1/Kyuafile OLD_FILES+=usr/tests/usr.bin/make/suffixes/src_wild1/Makefile.test OLD_FILES+=usr/tests/usr.bin/make/suffixes/src_wild1/TEST1.a OLD_FILES+=usr/tests/usr.bin/make/suffixes/src_wild1/TEST2.a OLD_FILES+=usr/tests/usr.bin/make/suffixes/src_wild1/expected.status.1 OLD_FILES+=usr/tests/usr.bin/make/suffixes/src_wild1/expected.stderr.1 OLD_FILES+=usr/tests/usr.bin/make/suffixes/src_wild1/expected.stdout.1 OLD_FILES+=usr/tests/usr.bin/make/suffixes/src_wild2/legacy_test OLD_FILES+=usr/tests/usr.bin/make/suffixes/src_wild2/Kyuafile OLD_FILES+=usr/tests/usr.bin/make/suffixes/src_wild2/Makefile.test OLD_FILES+=usr/tests/usr.bin/make/suffixes/src_wild2/TEST1.a OLD_FILES+=usr/tests/usr.bin/make/suffixes/src_wild2/TEST2.a OLD_FILES+=usr/tests/usr.bin/make/suffixes/src_wild2/expected.status.1 OLD_FILES+=usr/tests/usr.bin/make/suffixes/src_wild2/expected.stderr.1 OLD_FILES+=usr/tests/usr.bin/make/suffixes/src_wild2/expected.stdout.1 OLD_FILES+=usr/tests/usr.bin/make/suffixes/Kyuafile OLD_FILES+=usr/tests/usr.bin/make/syntax/directive-t0/legacy_test OLD_FILES+=usr/tests/usr.bin/make/syntax/directive-t0/Kyuafile OLD_FILES+=usr/tests/usr.bin/make/syntax/directive-t0/Makefile.test OLD_FILES+=usr/tests/usr.bin/make/syntax/directive-t0/expected.status.1 OLD_FILES+=usr/tests/usr.bin/make/syntax/directive-t0/expected.stderr.1 OLD_FILES+=usr/tests/usr.bin/make/syntax/directive-t0/expected.stdout.1 OLD_FILES+=usr/tests/usr.bin/make/syntax/enl/legacy_test OLD_FILES+=usr/tests/usr.bin/make/syntax/enl/Kyuafile OLD_FILES+=usr/tests/usr.bin/make/syntax/enl/Makefile.test OLD_FILES+=usr/tests/usr.bin/make/syntax/enl/expected.status.1 OLD_FILES+=usr/tests/usr.bin/make/syntax/enl/expected.status.2 OLD_FILES+=usr/tests/usr.bin/make/syntax/enl/expected.status.3 OLD_FILES+=usr/tests/usr.bin/make/syntax/enl/expected.status.4 OLD_FILES+=usr/tests/usr.bin/make/syntax/enl/expected.status.5 
OLD_FILES+=usr/tests/usr.bin/make/syntax/enl/expected.stderr.1 OLD_FILES+=usr/tests/usr.bin/make/syntax/enl/expected.stderr.2 OLD_FILES+=usr/tests/usr.bin/make/syntax/enl/expected.stderr.3 OLD_FILES+=usr/tests/usr.bin/make/syntax/enl/expected.stderr.4 OLD_FILES+=usr/tests/usr.bin/make/syntax/enl/expected.stderr.5 OLD_FILES+=usr/tests/usr.bin/make/syntax/enl/expected.stdout.1 OLD_FILES+=usr/tests/usr.bin/make/syntax/enl/expected.stdout.2 OLD_FILES+=usr/tests/usr.bin/make/syntax/enl/expected.stdout.3 OLD_FILES+=usr/tests/usr.bin/make/syntax/enl/expected.stdout.4 OLD_FILES+=usr/tests/usr.bin/make/syntax/enl/expected.stdout.5 OLD_FILES+=usr/tests/usr.bin/make/syntax/funny-targets/legacy_test OLD_FILES+=usr/tests/usr.bin/make/syntax/funny-targets/Kyuafile OLD_FILES+=usr/tests/usr.bin/make/syntax/funny-targets/Makefile.test OLD_FILES+=usr/tests/usr.bin/make/syntax/funny-targets/expected.status.1 OLD_FILES+=usr/tests/usr.bin/make/syntax/funny-targets/expected.status.2 OLD_FILES+=usr/tests/usr.bin/make/syntax/funny-targets/expected.stderr.1 OLD_FILES+=usr/tests/usr.bin/make/syntax/funny-targets/expected.stderr.2 OLD_FILES+=usr/tests/usr.bin/make/syntax/funny-targets/expected.stdout.1 OLD_FILES+=usr/tests/usr.bin/make/syntax/funny-targets/expected.stdout.2 OLD_FILES+=usr/tests/usr.bin/make/syntax/semi/legacy_test OLD_FILES+=usr/tests/usr.bin/make/syntax/semi/Kyuafile OLD_FILES+=usr/tests/usr.bin/make/syntax/semi/Makefile.test OLD_FILES+=usr/tests/usr.bin/make/syntax/semi/expected.status.1 OLD_FILES+=usr/tests/usr.bin/make/syntax/semi/expected.status.2 OLD_FILES+=usr/tests/usr.bin/make/syntax/semi/expected.stderr.1 OLD_FILES+=usr/tests/usr.bin/make/syntax/semi/expected.stderr.2 OLD_FILES+=usr/tests/usr.bin/make/syntax/semi/expected.stdout.1 OLD_FILES+=usr/tests/usr.bin/make/syntax/semi/expected.stdout.2 OLD_FILES+=usr/tests/usr.bin/make/syntax/Kyuafile OLD_FILES+=usr/tests/usr.bin/make/sysmk/t0/2/1/legacy_test OLD_FILES+=usr/tests/usr.bin/make/sysmk/t0/2/1/Kyuafile OLD_FILES+=usr/tests/usr.bin/make/sysmk/t0/2/1/Makefile.test OLD_FILES+=usr/tests/usr.bin/make/sysmk/t0/2/1/expected.status.1 OLD_FILES+=usr/tests/usr.bin/make/sysmk/t0/2/1/expected.stderr.1 OLD_FILES+=usr/tests/usr.bin/make/sysmk/t0/2/1/expected.stdout.1 OLD_FILES+=usr/tests/usr.bin/make/sysmk/t0/2/Kyuafile OLD_FILES+=usr/tests/usr.bin/make/sysmk/t0/mk/sys.mk OLD_FILES+=usr/tests/usr.bin/make/sysmk/t0/mk/Kyuafile OLD_FILES+=usr/tests/usr.bin/make/sysmk/t0/Kyuafile OLD_FILES+=usr/tests/usr.bin/make/sysmk/t1/2/1/legacy_test OLD_FILES+=usr/tests/usr.bin/make/sysmk/t1/2/1/Kyuafile OLD_FILES+=usr/tests/usr.bin/make/sysmk/t1/2/1/cleanup OLD_FILES+=usr/tests/usr.bin/make/sysmk/t1/2/1/expected.status.1 OLD_FILES+=usr/tests/usr.bin/make/sysmk/t1/2/1/expected.stderr.1 OLD_FILES+=usr/tests/usr.bin/make/sysmk/t1/2/1/expected.stdout.1 OLD_FILES+=usr/tests/usr.bin/make/sysmk/t1/2/Kyuafile OLD_FILES+=usr/tests/usr.bin/make/sysmk/t1/mk/sys.mk OLD_FILES+=usr/tests/usr.bin/make/sysmk/t1/mk/Kyuafile OLD_FILES+=usr/tests/usr.bin/make/sysmk/t1/Kyuafile OLD_FILES+=usr/tests/usr.bin/make/sysmk/t2/2/1/legacy_test OLD_FILES+=usr/tests/usr.bin/make/sysmk/t2/2/1/Kyuafile OLD_FILES+=usr/tests/usr.bin/make/sysmk/t2/2/1/cleanup OLD_FILES+=usr/tests/usr.bin/make/sysmk/t2/2/1/expected.status.1 OLD_FILES+=usr/tests/usr.bin/make/sysmk/t2/2/1/expected.stderr.1 OLD_FILES+=usr/tests/usr.bin/make/sysmk/t2/2/1/expected.stdout.1 OLD_FILES+=usr/tests/usr.bin/make/sysmk/t2/2/Kyuafile OLD_FILES+=usr/tests/usr.bin/make/sysmk/t2/mk/sys.mk 
OLD_FILES+=usr/tests/usr.bin/make/sysmk/t2/mk/Kyuafile OLD_FILES+=usr/tests/usr.bin/make/sysmk/t2/Kyuafile OLD_FILES+=usr/tests/usr.bin/make/sysmk/Kyuafile OLD_FILES+=usr/tests/usr.bin/make/variables/modifier_M/legacy_test OLD_FILES+=usr/tests/usr.bin/make/variables/modifier_M/Kyuafile OLD_FILES+=usr/tests/usr.bin/make/variables/modifier_M/Makefile.test OLD_FILES+=usr/tests/usr.bin/make/variables/modifier_M/expected.status.1 OLD_FILES+=usr/tests/usr.bin/make/variables/modifier_M/expected.stderr.1 OLD_FILES+=usr/tests/usr.bin/make/variables/modifier_M/expected.stdout.1 OLD_FILES+=usr/tests/usr.bin/make/variables/modifier_t/legacy_test OLD_FILES+=usr/tests/usr.bin/make/variables/modifier_t/Kyuafile OLD_FILES+=usr/tests/usr.bin/make/variables/modifier_t/Makefile.test OLD_FILES+=usr/tests/usr.bin/make/variables/modifier_t/expected.status.1 OLD_FILES+=usr/tests/usr.bin/make/variables/modifier_t/expected.status.2 OLD_FILES+=usr/tests/usr.bin/make/variables/modifier_t/expected.status.3 OLD_FILES+=usr/tests/usr.bin/make/variables/modifier_t/expected.stderr.1 OLD_FILES+=usr/tests/usr.bin/make/variables/modifier_t/expected.stderr.2 OLD_FILES+=usr/tests/usr.bin/make/variables/modifier_t/expected.stderr.3 OLD_FILES+=usr/tests/usr.bin/make/variables/modifier_t/expected.stdout.1 OLD_FILES+=usr/tests/usr.bin/make/variables/modifier_t/expected.stdout.2 OLD_FILES+=usr/tests/usr.bin/make/variables/modifier_t/expected.stdout.3 OLD_FILES+=usr/tests/usr.bin/make/variables/opt_V/legacy_test OLD_FILES+=usr/tests/usr.bin/make/variables/opt_V/Kyuafile OLD_FILES+=usr/tests/usr.bin/make/variables/opt_V/Makefile.test OLD_FILES+=usr/tests/usr.bin/make/variables/opt_V/expected.status.1 OLD_FILES+=usr/tests/usr.bin/make/variables/opt_V/expected.status.2 OLD_FILES+=usr/tests/usr.bin/make/variables/opt_V/expected.stderr.1 OLD_FILES+=usr/tests/usr.bin/make/variables/opt_V/expected.stderr.2 OLD_FILES+=usr/tests/usr.bin/make/variables/opt_V/expected.stdout.1 OLD_FILES+=usr/tests/usr.bin/make/variables/opt_V/expected.stdout.2 OLD_FILES+=usr/tests/usr.bin/make/variables/t0/legacy_test OLD_FILES+=usr/tests/usr.bin/make/variables/t0/Kyuafile OLD_FILES+=usr/tests/usr.bin/make/variables/t0/Makefile.test OLD_FILES+=usr/tests/usr.bin/make/variables/t0/expected.status.1 OLD_FILES+=usr/tests/usr.bin/make/variables/t0/expected.stderr.1 OLD_FILES+=usr/tests/usr.bin/make/variables/t0/expected.stdout.1 OLD_FILES+=usr/tests/usr.bin/make/variables/Kyuafile OLD_FILES+=usr/tests/usr.bin/make/Kyuafile OLD_FILES+=usr/tests/usr.bin/make/common.sh OLD_FILES+=usr/tests/usr.bin/make/test-new.mk OLD_DIRS+=usr/tests/usr.bin/make/variables/t0 OLD_DIRS+=usr/tests/usr.bin/make/variables/opt_V OLD_DIRS+=usr/tests/usr.bin/make/variables/modifier_t OLD_DIRS+=usr/tests/usr.bin/make/variables/modifier_M OLD_DIRS+=usr/tests/usr.bin/make/variables OLD_DIRS+=usr/tests/usr.bin/make/sysmk/t2/mk OLD_DIRS+=usr/tests/usr.bin/make/sysmk/t2/2/1 OLD_DIRS+=usr/tests/usr.bin/make/sysmk/t2/2 OLD_DIRS+=usr/tests/usr.bin/make/sysmk/t2 OLD_DIRS+=usr/tests/usr.bin/make/sysmk/t1/mk OLD_DIRS+=usr/tests/usr.bin/make/sysmk/t1/2/1 OLD_DIRS+=usr/tests/usr.bin/make/sysmk/t1/2 OLD_DIRS+=usr/tests/usr.bin/make/sysmk/t1 OLD_DIRS+=usr/tests/usr.bin/make/sysmk/t0/mk OLD_DIRS+=usr/tests/usr.bin/make/sysmk/t0/2/1 OLD_DIRS+=usr/tests/usr.bin/make/sysmk/t0/2 OLD_DIRS+=usr/tests/usr.bin/make/sysmk/t0 OLD_DIRS+=usr/tests/usr.bin/make/sysmk OLD_DIRS+=usr/tests/usr.bin/make/syntax/semi OLD_DIRS+=usr/tests/usr.bin/make/syntax/funny-targets OLD_DIRS+=usr/tests/usr.bin/make/syntax/enl 
OLD_DIRS+=usr/tests/usr.bin/make/syntax/directive-t0 OLD_DIRS+=usr/tests/usr.bin/make/syntax OLD_DIRS+=usr/tests/usr.bin/make/suffixes/src_wild2 OLD_DIRS+=usr/tests/usr.bin/make/suffixes/src_wild1 OLD_DIRS+=usr/tests/usr.bin/make/suffixes/basic OLD_DIRS+=usr/tests/usr.bin/make/suffixes OLD_DIRS+=usr/tests/usr.bin/make/shell/select OLD_DIRS+=usr/tests/usr.bin/make/shell/replace OLD_DIRS+=usr/tests/usr.bin/make/shell/path_select OLD_DIRS+=usr/tests/usr.bin/make/shell/path OLD_DIRS+=usr/tests/usr.bin/make/shell/meta OLD_DIRS+=usr/tests/usr.bin/make/shell/builtin OLD_DIRS+=usr/tests/usr.bin/make/shell OLD_DIRS+=usr/tests/usr.bin/make/execution/plus OLD_DIRS+=usr/tests/usr.bin/make/execution/joberr OLD_DIRS+=usr/tests/usr.bin/make/execution/empty OLD_DIRS+=usr/tests/usr.bin/make/execution/ellipsis OLD_DIRS+=usr/tests/usr.bin/make/execution OLD_DIRS+=usr/tests/usr.bin/make/basic/t3 OLD_DIRS+=usr/tests/usr.bin/make/basic/t2 OLD_DIRS+=usr/tests/usr.bin/make/basic/t1 OLD_DIRS+=usr/tests/usr.bin/make/basic/t0 OLD_DIRS+=usr/tests/usr.bin/make/basic OLD_DIRS+=usr/tests/usr.bin/make/archives/fmt_oldbsd OLD_DIRS+=usr/tests/usr.bin/make/archives/fmt_44bsd_mod OLD_DIRS+=usr/tests/usr.bin/make/archives/fmt_44bsd OLD_DIRS+=usr/tests/usr.bin/make/archives OLD_DIRS+=usr/tests/usr.bin/make OLD_FILES+=usr/tests/usr.bin/yacc/legacy_test OLD_FILES+=usr/tests/usr.bin/yacc/regress.00.out OLD_FILES+=usr/tests/usr.bin/yacc/regress.01.out OLD_FILES+=usr/tests/usr.bin/yacc/regress.02.out OLD_FILES+=usr/tests/usr.bin/yacc/regress.03.out OLD_FILES+=usr/tests/usr.bin/yacc/regress.04.out OLD_FILES+=usr/tests/usr.bin/yacc/regress.05.out OLD_FILES+=usr/tests/usr.bin/yacc/regress.06.out OLD_FILES+=usr/tests/usr.bin/yacc/regress.07.out OLD_FILES+=usr/tests/usr.bin/yacc/regress.08.out OLD_FILES+=usr/tests/usr.bin/yacc/regress.09.out OLD_FILES+=usr/tests/usr.bin/yacc/regress.10.out OLD_FILES+=usr/tests/usr.bin/yacc/regress.11.out OLD_FILES+=usr/tests/usr.bin/yacc/regress.12.out OLD_FILES+=usr/tests/usr.bin/yacc/regress.13.out OLD_FILES+=usr/tests/usr.bin/yacc/regress.14.out OLD_FILES+=usr/tests/usr.bin/yacc/regress.sh OLD_FILES+=usr/tests/usr.bin/yacc/undefined.y .endif .else # ATF libraries. 
OLD_FILES+=etc/mtree/BSD.tests.dist OLD_FILES+=usr/bin/atf-sh OLD_DIRS+=usr/include/atf-c OLD_FILES+=usr/include/atf-c/build.h OLD_FILES+=usr/include/atf-c/check.h OLD_FILES+=usr/include/atf-c/config.h OLD_FILES+=usr/include/atf-c/defs.h OLD_FILES+=usr/include/atf-c/error.h OLD_FILES+=usr/include/atf-c/error_fwd.h OLD_FILES+=usr/include/atf-c/macros.h OLD_FILES+=usr/include/atf-c/tc.h OLD_FILES+=usr/include/atf-c/tp.h OLD_FILES+=usr/include/atf-c/utils.h OLD_FILES+=usr/include/atf-c.h OLD_DIRS+=usr/include/atf-c++ OLD_FILES+=usr/include/atf-c++/build.hpp OLD_FILES+=usr/include/atf-c++/check.hpp OLD_FILES+=usr/include/atf-c++/config.hpp OLD_FILES+=usr/include/atf-c++/macros.hpp OLD_FILES+=usr/include/atf-c++/tests.hpp OLD_FILES+=usr/include/atf-c++/utils.hpp OLD_FILES+=usr/include/atf-c++.hpp OLD_FILES+=usr/lib/libatf-c_p.a OLD_FILES+=usr/lib/libatf-c.so.1 OLD_FILES+=usr/lib/libatf-c.so OLD_FILES+=usr/lib/libatf-c++.a OLD_FILES+=usr/lib/libatf-c++_p.a OLD_FILES+=usr/lib/libatf-c++.so.1 OLD_FILES+=usr/lib/libatf-c++.so OLD_FILES+=usr/lib/libatf-c.a OLD_FILES+=usr/libexec/atf-check OLD_FILES+=usr/libexec/atf-sh OLD_DIRS+=usr/share/atf OLD_FILES+=usr/share/atf/libatf-sh.subr OLD_DIRS+=usr/share/doc/atf OLD_FILES+=usr/share/doc/atf/AUTHORS OLD_FILES+=usr/share/doc/atf/COPYING OLD_FILES+=usr/share/doc/atf/NEWS OLD_FILES+=usr/share/doc/atf/README OLD_FILES+=usr/share/doc/pjdfstest/README OLD_FILES+=usr/share/man/man1/atf-check.1.gz OLD_FILES+=usr/share/man/man1/atf-sh.1.gz OLD_FILES+=usr/share/man/man1/atf-test-program.1.gz OLD_FILES+=usr/share/man/man3/atf-c-api.3.gz OLD_FILES+=usr/share/man/man3/atf-c++-api.3.gz OLD_FILES+=usr/share/man/man3/atf-sh-api.3.gz OLD_FILES+=usr/share/man/man3/atf-sh.3.gz OLD_FILES+=usr/share/man/man4/atf-test-case.4.gz OLD_FILES+=usr/share/man/man7/atf.7.gz OLD_FILES+=usr/share/mk/atf.test.mk OLD_FILES+=usr/share/mk/plain.test.mk OLD_FILES+=usr/share/mk/suite.test.mk OLD_FILES+=usr/share/mk/tap.test.mk # Test suite. . if exists(${DESTDIR}${TESTSBASE}) TESTS_DIRS!=find ${DESTDIR}${TESTSBASE} -type d | sed -e 's,^${DESTDIR}/,,'; echo OLD_DIRS+=${TESTS_DIRS} TESTS_FILES!=find ${DESTDIR}${TESTSBASE} \! -type d | sed -e 's,^${DESTDIR}/,,'; echo OLD_FILES+=${TESTS_FILES} . endif .endif # Test suite. 
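# Note on the test-suite block above: rather than enumerating every installed
# test file, it computes the obsolete list at parse time.  The bmake "!="
# assignment runs its right-hand side through the shell while the makefile is
# parsed (newlines in the output become spaces, so the result is a usable make
# list); the sed strips the ${DESTDIR} prefix so entries are relative like the
# static lists, and the trailing "; echo" appears to be there to guarantee the
# pipeline exits 0 and produces output even when find matches nothing.  The
# same idiom recurs for ${DESTDIR}/usr/share/vi in the MK_VI block further
# down.  A minimal sketch of the pattern, using a hypothetical TREE path for
# illustration only (not part of this file):
#
# . if exists(${DESTDIR}/TREE)
# TREE_FILES!=find ${DESTDIR}/TREE \! -type d | sed -e 's,^${DESTDIR}/,,'; echo
# OLD_FILES+=${TREE_FILES}
# . endif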
.if ${MK_TESTS_SUPPORT} == no OLD_FILES+=usr/include/atf-c++.hpp OLD_FILES+=usr/include/atf-c++/build.hpp OLD_FILES+=usr/include/atf-c++/check.hpp OLD_FILES+=usr/include/atf-c++/macros.hpp OLD_FILES+=usr/include/atf-c++/tests.hpp OLD_FILES+=usr/include/atf-c++/utils.hpp OLD_FILES+=usr/include/atf-c.h OLD_FILES+=usr/include/atf-c/build.h OLD_FILES+=usr/include/atf-c/check.h OLD_FILES+=usr/include/atf-c/defs.h OLD_FILES+=usr/include/atf-c/error.h OLD_FILES+=usr/include/atf-c/error_fwd.h OLD_FILES+=usr/include/atf-c/macros.h OLD_FILES+=usr/include/atf-c/tc.h OLD_FILES+=usr/include/atf-c/tp.h OLD_FILES+=usr/include/atf-c/utils.h OLD_LIBS+=usr/lib/private/libatf-c++.so.2 OLD_LIBS+=usr/lib/private/libatf-c.so.1 OLD_FILES+=usr/share/man/man3/atf-c++.3.gz OLD_FILES+=usr/share/man/man3/atf-c-api++.3.gz OLD_FILES+=usr/share/man/man3/atf-c-api.3.gz OLD_FILES+=usr/share/man/man3/atf-c.3.gz OLD_FILES+=usr/tests/lib/atf/Kyuafile OLD_FILES+=usr/tests/lib/atf/libatf-c++/Kyuafile OLD_FILES+=usr/tests/lib/atf/libatf-c++/atf_c++_test OLD_FILES+=usr/tests/lib/atf/libatf-c++/build_test OLD_FILES+=usr/tests/lib/atf/libatf-c++/check_test OLD_FILES+=usr/tests/lib/atf/libatf-c++/detail/Kyuafile OLD_FILES+=usr/tests/lib/atf/libatf-c++/detail/application_test OLD_FILES+=usr/tests/lib/atf/libatf-c++/detail/env_test OLD_FILES+=usr/tests/lib/atf/libatf-c++/detail/exceptions_test OLD_FILES+=usr/tests/lib/atf/libatf-c++/detail/fs_test OLD_FILES+=usr/tests/lib/atf/libatf-c++/detail/process_test OLD_FILES+=usr/tests/lib/atf/libatf-c++/detail/text_test OLD_FILES+=usr/tests/lib/atf/libatf-c++/detail/version_helper OLD_FILES+=usr/tests/lib/atf/libatf-c++/macros_hpp_test.cpp OLD_FILES+=usr/tests/lib/atf/libatf-c++/macros_test OLD_FILES+=usr/tests/lib/atf/libatf-c++/tests_test OLD_FILES+=usr/tests/lib/atf/libatf-c++/unused_test.cpp OLD_FILES+=usr/tests/lib/atf/libatf-c++/utils_test OLD_FILES+=usr/tests/lib/atf/libatf-c/Kyuafile OLD_FILES+=usr/tests/lib/atf/libatf-c/atf_c_test OLD_FILES+=usr/tests/lib/atf/libatf-c/build_test OLD_FILES+=usr/tests/lib/atf/libatf-c/check_test OLD_FILES+=usr/tests/lib/atf/libatf-c/detail/Kyuafile OLD_FILES+=usr/tests/lib/atf/libatf-c/detail/dynstr_test OLD_FILES+=usr/tests/lib/atf/libatf-c/detail/env_test OLD_FILES+=usr/tests/lib/atf/libatf-c/detail/fs_test OLD_FILES+=usr/tests/lib/atf/libatf-c/detail/list_test OLD_FILES+=usr/tests/lib/atf/libatf-c/detail/map_test OLD_FILES+=usr/tests/lib/atf/libatf-c/detail/process_helpers OLD_FILES+=usr/tests/lib/atf/libatf-c/detail/process_test OLD_FILES+=usr/tests/lib/atf/libatf-c/detail/sanity_test OLD_FILES+=usr/tests/lib/atf/libatf-c/detail/text_test OLD_FILES+=usr/tests/lib/atf/libatf-c/detail/user_test OLD_FILES+=usr/tests/lib/atf/libatf-c/detail/version_helper OLD_FILES+=usr/tests/lib/atf/libatf-c/error_test OLD_FILES+=usr/tests/lib/atf/libatf-c/macros_h_test.c OLD_FILES+=usr/tests/lib/atf/libatf-c/macros_test OLD_FILES+=usr/tests/lib/atf/libatf-c/tc_test OLD_FILES+=usr/tests/lib/atf/libatf-c/tp_test OLD_FILES+=usr/tests/lib/atf/libatf-c/unused_test.c OLD_FILES+=usr/tests/lib/atf/libatf-c/utils_test OLD_FILES+=usr/tests/lib/atf/test-programs/Kyuafile OLD_FILES+=usr/tests/lib/atf/test-programs/c_helpers OLD_FILES+=usr/tests/lib/atf/test-programs/config_test OLD_FILES+=usr/tests/lib/atf/test-programs/cpp_helpers OLD_FILES+=usr/tests/lib/atf/test-programs/expect_test OLD_FILES+=usr/tests/lib/atf/test-programs/meta_data_test OLD_FILES+=usr/tests/lib/atf/test-programs/result_test OLD_FILES+=usr/tests/lib/atf/test-programs/sh_helpers 
OLD_FILES+=usr/tests/lib/atf/test-programs/srcdir_test
.endif

.if ${MK_TEXTPROC} == no
OLD_FILES+=usr/bin/checknr
OLD_FILES+=usr/bin/colcrt
OLD_FILES+=usr/bin/ul
OLD_FILES+=usr/share/man/man1/checknr.1.gz
OLD_FILES+=usr/share/man/man1/colcrt.1.gz
OLD_FILES+=usr/share/man/man1/ul.1.gz
.endif

.if ${MK_TFTP} == no
OLD_FILES+=usr/bin/tftp
OLD_FILES+=usr/libexec/tftpd
OLD_FILES+=usr/share/man/man1/tftp.1.gz
OLD_FILES+=usr/share/man/man8/tftpd.8.gz
.endif

.if ${MK_TOOLCHAIN} == no
OLD_FILES+=usr/bin/addr2line
OLD_FILES+=usr/bin/as
OLD_FILES+=usr/bin/byacc
OLD_FILES+=usr/bin/cc
OLD_FILES+=usr/bin/c89
OLD_FILES+=usr/bin/c++
OLD_FILES+=usr/bin/c++filt
OLD_FILES+=usr/bin/ld
OLD_FILES+=usr/bin/ld.bfd
OLD_FILES+=usr/bin/nm
OLD_FILES+=usr/bin/objcopy
OLD_FILES+=usr/bin/readelf
OLD_FILES+=usr/bin/size
OLD_FILES+=usr/bin/strip
OLD_FILES+=usr/bin/yacc
OLD_FILES+=usr/share/man/man1/addr2line.1.gz
OLD_FILES+=usr/share/man/man1/c++filt.1.gz
OLD_FILES+=usr/share/man/man1/nm.1.gz
OLD_FILES+=usr/share/man/man1/readelf.1.gz
OLD_FILES+=usr/share/man/man1/size.1.gz
OLD_FILES+=usr/share/man/man1/strip.1.gz
OLD_FILES+=usr/share/man/man1/objcopy.1.gz
# lib/libelf
OLD_FILES+=usr/share/man/man3/elf.3.gz
OLD_FILES+=usr/share/man/man3/elf_begin.3.gz
OLD_FILES+=usr/share/man/man3/elf_cntl.3.gz
OLD_FILES+=usr/share/man/man3/elf_end.3.gz
OLD_FILES+=usr/share/man/man3/elf_errmsg.3.gz
OLD_FILES+=usr/share/man/man3/elf_fill.3.gz
OLD_FILES+=usr/share/man/man3/elf_flagdata.3.gz
OLD_FILES+=usr/share/man/man3/elf_getarhdr.3.gz
OLD_FILES+=usr/share/man/man3/elf_getarsym.3.gz
OLD_FILES+=usr/share/man/man3/elf_getbase.3.gz
OLD_FILES+=usr/share/man/man3/elf_getdata.3.gz
OLD_FILES+=usr/share/man/man3/elf_getident.3.gz
OLD_FILES+=usr/share/man/man3/elf_getscn.3.gz
OLD_FILES+=usr/share/man/man3/elf_getphdrnum.3.gz
OLD_FILES+=usr/share/man/man3/elf_getphnum.3.gz
OLD_FILES+=usr/share/man/man3/elf_getshdrnum.3.gz
OLD_FILES+=usr/share/man/man3/elf_getshnum.3.gz
OLD_FILES+=usr/share/man/man3/elf_getshdrstrndx.3.gz
OLD_FILES+=usr/share/man/man3/elf_getshstrndx.3.gz
OLD_FILES+=usr/share/man/man3/elf_hash.3.gz
OLD_FILES+=usr/share/man/man3/elf_kind.3.gz
OLD_FILES+=usr/share/man/man3/elf_memory.3.gz
OLD_FILES+=usr/share/man/man3/elf_next.3.gz
OLD_FILES+=usr/share/man/man3/elf_open.3.gz
OLD_FILES+=usr/share/man/man3/elf_rawfile.3.gz
OLD_FILES+=usr/share/man/man3/elf_rand.3.gz
OLD_FILES+=usr/share/man/man3/elf_strptr.3.gz
OLD_FILES+=usr/share/man/man3/elf_update.3.gz
OLD_FILES+=usr/share/man/man3/elf_version.3.gz
OLD_FILES+=usr/share/man/man3/gelf.3.gz
OLD_FILES+=usr/share/man/man3/gelf_checksum.3.gz
OLD_FILES+=usr/share/man/man3/gelf_fsize.3.gz
OLD_FILES+=usr/share/man/man3/gelf_getcap.3.gz
OLD_FILES+=usr/share/man/man3/gelf_getclass.3.gz
OLD_FILES+=usr/share/man/man3/gelf_getdyn.3.gz
OLD_FILES+=usr/share/man/man3/gelf_getehdr.3.gz
OLD_FILES+=usr/share/man/man3/gelf_getmove.3.gz
OLD_FILES+=usr/share/man/man3/gelf_getphdr.3.gz
OLD_FILES+=usr/share/man/man3/gelf_getrel.3.gz
OLD_FILES+=usr/share/man/man3/gelf_getrela.3.gz
OLD_FILES+=usr/share/man/man3/gelf_getshdr.3.gz
OLD_FILES+=usr/share/man/man3/gelf_getsym.3.gz
OLD_FILES+=usr/share/man/man3/gelf_getsyminfo.3.gz
OLD_FILES+=usr/share/man/man3/gelf_getsymshndx.3.gz
OLD_FILES+=usr/share/man/man3/gelf_newehdr.3.gz
OLD_FILES+=usr/share/man/man3/gelf_newphdr.3.gz
OLD_FILES+=usr/share/man/man3/gelf_update_ehdr.3.gz
OLD_FILES+=usr/share/man/man3/gelf_xlatetof.3.gz
# lib/libelftc
OLD_FILES+=usr/share/man/man3/elftc.3.gz
OLD_FILES+=usr/share/man/man3/elftc_bfd_find_target.3.gz
OLD_FILES+=usr/share/man/man3/elftc_copyfile.3.gz OLD_FILES+=usr/share/man/man3/elftc_demangle.3.gz OLD_FILES+=usr/share/man/man3/elftc_reloc_type_str.3.gz OLD_FILES+=usr/share/man/man3/elftc_set_timestamps.3.gz OLD_FILES+=usr/share/man/man3/elftc_timestamp.3.gz OLD_FILES+=usr/share/man/man3/elftc_string_table_create.3.gz OLD_FILES+=usr/share/man/man3/elftc_version.3.gz OLD_FILES+=usr/tests/usr.bin/yacc/Kyuafile OLD_FILES+=usr/tests/usr.bin/yacc/btyacc_calc1.y OLD_FILES+=usr/tests/usr.bin/yacc/btyacc_demo.y OLD_FILES+=usr/tests/usr.bin/yacc/calc.y OLD_FILES+=usr/tests/usr.bin/yacc/calc1.y OLD_FILES+=usr/tests/usr.bin/yacc/calc2.y OLD_FILES+=usr/tests/usr.bin/yacc/calc3.y OLD_FILES+=usr/tests/usr.bin/yacc/code_calc.y OLD_FILES+=usr/tests/usr.bin/yacc/code_debug.y OLD_FILES+=usr/tests/usr.bin/yacc/code_error.y OLD_FILES+=usr/tests/usr.bin/yacc/empty.y OLD_FILES+=usr/tests/usr.bin/yacc/err_inherit1.y OLD_FILES+=usr/tests/usr.bin/yacc/err_inherit2.y OLD_FILES+=usr/tests/usr.bin/yacc/err_inherit3.y OLD_FILES+=usr/tests/usr.bin/yacc/err_inherit4.y OLD_FILES+=usr/tests/usr.bin/yacc/err_inherit5.y OLD_FILES+=usr/tests/usr.bin/yacc/err_syntax1.y OLD_FILES+=usr/tests/usr.bin/yacc/err_syntax10.y OLD_FILES+=usr/tests/usr.bin/yacc/err_syntax11.y OLD_FILES+=usr/tests/usr.bin/yacc/err_syntax12.y OLD_FILES+=usr/tests/usr.bin/yacc/err_syntax13.y OLD_FILES+=usr/tests/usr.bin/yacc/err_syntax14.y OLD_FILES+=usr/tests/usr.bin/yacc/err_syntax15.y OLD_FILES+=usr/tests/usr.bin/yacc/err_syntax16.y OLD_FILES+=usr/tests/usr.bin/yacc/err_syntax17.y OLD_FILES+=usr/tests/usr.bin/yacc/err_syntax18.y OLD_FILES+=usr/tests/usr.bin/yacc/err_syntax19.y OLD_FILES+=usr/tests/usr.bin/yacc/err_syntax2.y OLD_FILES+=usr/tests/usr.bin/yacc/err_syntax20.y OLD_FILES+=usr/tests/usr.bin/yacc/err_syntax21.y OLD_FILES+=usr/tests/usr.bin/yacc/err_syntax22.y OLD_FILES+=usr/tests/usr.bin/yacc/err_syntax23.y OLD_FILES+=usr/tests/usr.bin/yacc/err_syntax24.y OLD_FILES+=usr/tests/usr.bin/yacc/err_syntax25.y OLD_FILES+=usr/tests/usr.bin/yacc/err_syntax26.y OLD_FILES+=usr/tests/usr.bin/yacc/err_syntax27.y OLD_FILES+=usr/tests/usr.bin/yacc/err_syntax3.y OLD_FILES+=usr/tests/usr.bin/yacc/err_syntax4.y OLD_FILES+=usr/tests/usr.bin/yacc/err_syntax5.y OLD_FILES+=usr/tests/usr.bin/yacc/err_syntax6.y OLD_FILES+=usr/tests/usr.bin/yacc/err_syntax7.y OLD_FILES+=usr/tests/usr.bin/yacc/err_syntax7a.y OLD_FILES+=usr/tests/usr.bin/yacc/err_syntax7b.y OLD_FILES+=usr/tests/usr.bin/yacc/err_syntax8.y OLD_FILES+=usr/tests/usr.bin/yacc/err_syntax8a.y OLD_FILES+=usr/tests/usr.bin/yacc/err_syntax9.y OLD_FILES+=usr/tests/usr.bin/yacc/error.y OLD_FILES+=usr/tests/usr.bin/yacc/grammar.y OLD_FILES+=usr/tests/usr.bin/yacc/inherit0.y OLD_FILES+=usr/tests/usr.bin/yacc/inherit1.y OLD_FILES+=usr/tests/usr.bin/yacc/inherit2.y OLD_FILES+=usr/tests/usr.bin/yacc/ok_syntax1.y OLD_FILES+=usr/tests/usr.bin/yacc/pure_calc.y OLD_FILES+=usr/tests/usr.bin/yacc/pure_error.y OLD_FILES+=usr/tests/usr.bin/yacc/quote_calc.y OLD_FILES+=usr/tests/usr.bin/yacc/quote_calc2.y OLD_FILES+=usr/tests/usr.bin/yacc/quote_calc3.y OLD_FILES+=usr/tests/usr.bin/yacc/quote_calc4.y OLD_FILES+=usr/tests/usr.bin/yacc/run_test OLD_FILES+=usr/tests/usr.bin/yacc/varsyntax_calc1.y OLD_FILES+=usr/tests/usr.bin/yacc/yacc/big_b.error OLD_FILES+=usr/tests/usr.bin/yacc/yacc/big_b.output OLD_FILES+=usr/tests/usr.bin/yacc/yacc/big_l.error OLD_FILES+=usr/tests/usr.bin/yacc/yacc/big_l.output OLD_FILES+=usr/tests/usr.bin/yacc/yacc/calc.error OLD_FILES+=usr/tests/usr.bin/yacc/yacc/calc.output 
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/calc.tab.c OLD_FILES+=usr/tests/usr.bin/yacc/yacc/calc.tab.h OLD_FILES+=usr/tests/usr.bin/yacc/yacc/calc1.error OLD_FILES+=usr/tests/usr.bin/yacc/yacc/calc1.output OLD_FILES+=usr/tests/usr.bin/yacc/yacc/calc1.tab.c OLD_FILES+=usr/tests/usr.bin/yacc/yacc/calc1.tab.h OLD_FILES+=usr/tests/usr.bin/yacc/yacc/calc2.error OLD_FILES+=usr/tests/usr.bin/yacc/yacc/calc2.output OLD_FILES+=usr/tests/usr.bin/yacc/yacc/calc2.tab.c OLD_FILES+=usr/tests/usr.bin/yacc/yacc/calc2.tab.h OLD_FILES+=usr/tests/usr.bin/yacc/yacc/calc3.error OLD_FILES+=usr/tests/usr.bin/yacc/yacc/calc3.output OLD_FILES+=usr/tests/usr.bin/yacc/yacc/calc3.tab.c OLD_FILES+=usr/tests/usr.bin/yacc/yacc/calc3.tab.h OLD_FILES+=usr/tests/usr.bin/yacc/yacc/code_calc.code.c OLD_FILES+=usr/tests/usr.bin/yacc/yacc/code_calc.error OLD_FILES+=usr/tests/usr.bin/yacc/yacc/code_calc.output OLD_FILES+=usr/tests/usr.bin/yacc/yacc/code_calc.tab.c OLD_FILES+=usr/tests/usr.bin/yacc/yacc/code_calc.tab.h OLD_FILES+=usr/tests/usr.bin/yacc/yacc/code_error.code.c OLD_FILES+=usr/tests/usr.bin/yacc/yacc/code_error.error OLD_FILES+=usr/tests/usr.bin/yacc/yacc/code_error.output OLD_FILES+=usr/tests/usr.bin/yacc/yacc/code_error.tab.c OLD_FILES+=usr/tests/usr.bin/yacc/yacc/code_error.tab.h OLD_FILES+=usr/tests/usr.bin/yacc/yacc/empty.error OLD_FILES+=usr/tests/usr.bin/yacc/yacc/empty.output OLD_FILES+=usr/tests/usr.bin/yacc/yacc/empty.tab.c OLD_FILES+=usr/tests/usr.bin/yacc/yacc/empty.tab.h OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax1.error OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax1.output OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax1.tab.c OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax1.tab.h OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax10.error OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax10.output OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax10.tab.c OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax10.tab.h OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax11.error OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax11.output OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax11.tab.c OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax11.tab.h OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax12.error OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax12.output OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax12.tab.c OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax12.tab.h OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax13.error OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax13.output OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax13.tab.c OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax13.tab.h OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax14.error OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax14.output OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax14.tab.c OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax14.tab.h OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax15.error OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax15.output OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax15.tab.c OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax15.tab.h OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax16.error OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax16.output OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax16.tab.c OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax16.tab.h OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax17.error OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax17.output OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax17.tab.c 
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax17.tab.h OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax18.error OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax18.output OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax18.tab.c OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax18.tab.h OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax19.error OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax19.output OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax19.tab.c OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax19.tab.h OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax2.error OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax2.output OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax2.tab.c OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax2.tab.h OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax20.error OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax20.output OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax20.tab.c OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax20.tab.h OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax21.error OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax21.output OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax21.tab.c OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax21.tab.h OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax22.error OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax22.output OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax22.tab.c OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax22.tab.h OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax23.error OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax23.output OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax23.tab.c OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax23.tab.h OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax24.error OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax24.output OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax24.tab.c OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax24.tab.h OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax25.error OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax25.output OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax25.tab.c OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax25.tab.h OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax26.error OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax26.output OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax26.tab.c OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax26.tab.h OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax27.error OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax27.output OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax27.tab.c OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax27.tab.h OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax3.error OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax3.output OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax3.tab.c OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax3.tab.h OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax4.error OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax4.output OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax4.tab.c OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax4.tab.h OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax5.error OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax5.output OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax5.tab.c OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax5.tab.h OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax6.error OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax6.output OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax6.tab.c OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax6.tab.h 
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax7.error OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax7.output OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax7.tab.c OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax7.tab.h OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax7a.error OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax7a.output OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax7a.tab.c OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax7a.tab.h OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax7b.error OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax7b.output OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax7b.tab.c OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax7b.tab.h OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax8.error OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax8.output OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax8.tab.c OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax8.tab.h OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax8a.error OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax8a.output OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax8a.tab.c OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax8a.tab.h OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax9.error OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax9.output OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax9.tab.c OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax9.tab.h OLD_FILES+=usr/tests/usr.bin/yacc/yacc/error.error OLD_FILES+=usr/tests/usr.bin/yacc/yacc/error.output OLD_FILES+=usr/tests/usr.bin/yacc/yacc/error.tab.c OLD_FILES+=usr/tests/usr.bin/yacc/yacc/error.tab.h OLD_FILES+=usr/tests/usr.bin/yacc/yacc/grammar.dot OLD_FILES+=usr/tests/usr.bin/yacc/yacc/grammar.error OLD_FILES+=usr/tests/usr.bin/yacc/yacc/grammar.output OLD_FILES+=usr/tests/usr.bin/yacc/yacc/grammar.tab.c OLD_FILES+=usr/tests/usr.bin/yacc/yacc/grammar.tab.h OLD_FILES+=usr/tests/usr.bin/yacc/yacc/help.error OLD_FILES+=usr/tests/usr.bin/yacc/yacc/help.output OLD_FILES+=usr/tests/usr.bin/yacc/yacc/no_b_opt.error OLD_FILES+=usr/tests/usr.bin/yacc/yacc/no_b_opt.output OLD_FILES+=usr/tests/usr.bin/yacc/yacc/no_b_opt1.error OLD_FILES+=usr/tests/usr.bin/yacc/yacc/no_b_opt1.output OLD_FILES+=usr/tests/usr.bin/yacc/yacc/no_code_c.error OLD_FILES+=usr/tests/usr.bin/yacc/yacc/no_code_c.output OLD_FILES+=usr/tests/usr.bin/yacc/yacc/no_defines.error OLD_FILES+=usr/tests/usr.bin/yacc/yacc/no_defines.output OLD_FILES+=usr/tests/usr.bin/yacc/yacc/no_graph.error OLD_FILES+=usr/tests/usr.bin/yacc/yacc/no_graph.output OLD_FILES+=usr/tests/usr.bin/yacc/yacc/no_include.error OLD_FILES+=usr/tests/usr.bin/yacc/yacc/no_include.output OLD_FILES+=usr/tests/usr.bin/yacc/yacc/no_opts.error OLD_FILES+=usr/tests/usr.bin/yacc/yacc/no_opts.output OLD_FILES+=usr/tests/usr.bin/yacc/yacc/no_output.error OLD_FILES+=usr/tests/usr.bin/yacc/yacc/no_output.output OLD_FILES+=usr/tests/usr.bin/yacc/yacc/no_output1.error OLD_FILES+=usr/tests/usr.bin/yacc/yacc/no_output1.output OLD_FILES+=usr/tests/usr.bin/yacc/yacc/no_output2.error OLD_FILES+=usr/tests/usr.bin/yacc/yacc/no_output2.output OLD_FILES+=usr/tests/usr.bin/yacc/yacc/no_p_opt.error OLD_FILES+=usr/tests/usr.bin/yacc/yacc/no_p_opt.output OLD_FILES+=usr/tests/usr.bin/yacc/yacc/no_p_opt1.error OLD_FILES+=usr/tests/usr.bin/yacc/yacc/no_p_opt1.output OLD_FILES+=usr/tests/usr.bin/yacc/yacc/no_verbose.error OLD_FILES+=usr/tests/usr.bin/yacc/yacc/no_verbose.output OLD_FILES+=usr/tests/usr.bin/yacc/yacc/nostdin.error OLD_FILES+=usr/tests/usr.bin/yacc/yacc/nostdin.output 
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/ok_syntax1.error OLD_FILES+=usr/tests/usr.bin/yacc/yacc/ok_syntax1.output OLD_FILES+=usr/tests/usr.bin/yacc/yacc/ok_syntax1.tab.c OLD_FILES+=usr/tests/usr.bin/yacc/yacc/ok_syntax1.tab.h OLD_FILES+=usr/tests/usr.bin/yacc/yacc/pure_calc.error OLD_FILES+=usr/tests/usr.bin/yacc/yacc/pure_calc.output OLD_FILES+=usr/tests/usr.bin/yacc/yacc/pure_calc.tab.c OLD_FILES+=usr/tests/usr.bin/yacc/yacc/pure_calc.tab.h OLD_FILES+=usr/tests/usr.bin/yacc/yacc/pure_error.error OLD_FILES+=usr/tests/usr.bin/yacc/yacc/pure_error.output OLD_FILES+=usr/tests/usr.bin/yacc/yacc/pure_error.tab.c OLD_FILES+=usr/tests/usr.bin/yacc/yacc/pure_error.tab.h OLD_FILES+=usr/tests/usr.bin/yacc/yacc/quote_calc-s.error OLD_FILES+=usr/tests/usr.bin/yacc/yacc/quote_calc-s.output OLD_FILES+=usr/tests/usr.bin/yacc/yacc/quote_calc-s.tab.c OLD_FILES+=usr/tests/usr.bin/yacc/yacc/quote_calc-s.tab.h OLD_FILES+=usr/tests/usr.bin/yacc/yacc/quote_calc.error OLD_FILES+=usr/tests/usr.bin/yacc/yacc/quote_calc.output OLD_FILES+=usr/tests/usr.bin/yacc/yacc/quote_calc.tab.c OLD_FILES+=usr/tests/usr.bin/yacc/yacc/quote_calc.tab.h OLD_FILES+=usr/tests/usr.bin/yacc/yacc/quote_calc2-s.error OLD_FILES+=usr/tests/usr.bin/yacc/yacc/quote_calc2-s.output OLD_FILES+=usr/tests/usr.bin/yacc/yacc/quote_calc2-s.tab.c OLD_FILES+=usr/tests/usr.bin/yacc/yacc/quote_calc2-s.tab.h OLD_FILES+=usr/tests/usr.bin/yacc/yacc/quote_calc2.error OLD_FILES+=usr/tests/usr.bin/yacc/yacc/quote_calc2.output OLD_FILES+=usr/tests/usr.bin/yacc/yacc/quote_calc2.tab.c OLD_FILES+=usr/tests/usr.bin/yacc/yacc/quote_calc2.tab.h OLD_FILES+=usr/tests/usr.bin/yacc/yacc/quote_calc3-s.error OLD_FILES+=usr/tests/usr.bin/yacc/yacc/quote_calc3-s.output OLD_FILES+=usr/tests/usr.bin/yacc/yacc/quote_calc3-s.tab.c OLD_FILES+=usr/tests/usr.bin/yacc/yacc/quote_calc3-s.tab.h OLD_FILES+=usr/tests/usr.bin/yacc/yacc/quote_calc3.error OLD_FILES+=usr/tests/usr.bin/yacc/yacc/quote_calc3.output OLD_FILES+=usr/tests/usr.bin/yacc/yacc/quote_calc3.tab.c OLD_FILES+=usr/tests/usr.bin/yacc/yacc/quote_calc3.tab.h OLD_FILES+=usr/tests/usr.bin/yacc/yacc/quote_calc4-s.error OLD_FILES+=usr/tests/usr.bin/yacc/yacc/quote_calc4-s.output OLD_FILES+=usr/tests/usr.bin/yacc/yacc/quote_calc4-s.tab.c OLD_FILES+=usr/tests/usr.bin/yacc/yacc/quote_calc4-s.tab.h OLD_FILES+=usr/tests/usr.bin/yacc/yacc/quote_calc4.error OLD_FILES+=usr/tests/usr.bin/yacc/yacc/quote_calc4.output OLD_FILES+=usr/tests/usr.bin/yacc/yacc/quote_calc4.tab.c OLD_FILES+=usr/tests/usr.bin/yacc/yacc/quote_calc4.tab.h OLD_FILES+=usr/tests/usr.bin/yacc/yacc/rename_debug.c OLD_FILES+=usr/tests/usr.bin/yacc/yacc/rename_debug.error OLD_FILES+=usr/tests/usr.bin/yacc/yacc/rename_debug.h OLD_FILES+=usr/tests/usr.bin/yacc/yacc/rename_debug.i OLD_FILES+=usr/tests/usr.bin/yacc/yacc/rename_debug.output OLD_FILES+=usr/tests/usr.bin/yacc/yacc/varsyntax_calc1.error OLD_FILES+=usr/tests/usr.bin/yacc/yacc/varsyntax_calc1.output OLD_FILES+=usr/tests/usr.bin/yacc/yacc/varsyntax_calc1.tab.c OLD_FILES+=usr/tests/usr.bin/yacc/yacc/varsyntax_calc1.tab.h OLD_FILES+=usr/tests/usr.bin/yacc/yacc_tests OLD_DIRS+=usr/tests/usr.bin/yacc .endif .if ${MK_UNBOUND} == no OLD_FILES+=etc/rc.d/local_unbound OLD_FILES+=etc/unbound OLD_FILES+=usr/lib/private/libunbound.a OLD_FILES+=usr/lib/private/libunbound.so OLD_LIBS+=usr/lib/private/libunbound.so.5 OLD_FILES+=usr/lib/private/libunbound_p.a .if ${TARGET_ARCH} == "amd64" || ${TARGET_ARCH} == "powerpc64" OLD_FILES+=usr/lib32/private/libunbound.a OLD_FILES+=usr/lib32/private/libunbound.so 
OLD_LIBS+=usr/lib32/private/libunbound.so.5
OLD_FILES+=usr/lib32/private/libunbound_p.a
.endif
OLD_FILES+=usr/share/man/man5/local-unbound.conf.5.gz
OLD_FILES+=usr/share/man/man8/local-unbound-anchor.8.gz
OLD_FILES+=usr/share/man/man8/local-unbound-checkconf.8.gz
OLD_FILES+=usr/share/man/man8/local-unbound-control.8.gz
OLD_FILES+=usr/share/man/man8/local-unbound.8.gz
OLD_FILES+=usr/sbin/local-unbound-setup
OLD_FILES+=usr/sbin/local-unbound
OLD_FILES+=usr/sbin/local-unbound-anchor
OLD_FILES+=usr/sbin/local-unbound-checkconf
OLD_FILES+=usr/sbin/local-unbound-control
.endif

.if ${MK_USB} == no
OLD_FILES+=etc/devd/uath.conf
OLD_FILES+=etc/devd/uauth.conf
OLD_FILES+=etc/devd/ulpt.conf
OLD_FILES+=etc/devd/usb.conf
OLD_FILES+=usr/bin/usbhidaction
OLD_FILES+=usr/bin/usbhidctl
OLD_FILES+=usr/include/libusb.h
OLD_FILES+=usr/include/libusb20.h
OLD_FILES+=usr/include/libusb20_desc.h
OLD_FILES+=usr/include/usb.h
OLD_FILES+=usr/include/usbhid.h
OLD_FILES+=usr/lib/libusb.a
OLD_FILES+=usr/lib/libusb.so
OLD_LIBS+=usr/lib/libusb.so.3
OLD_FILES+=usr/lib/libusb_p.a
OLD_FILES+=usr/lib/libusbhid.a
OLD_FILES+=usr/lib/libusbhid.so
OLD_LIBS+=usr/lib/libusbhid.so.4
OLD_FILES+=usr/lib/libusbhid_p.a
OLD_FILES+=usr/lib32/libusb.a
OLD_FILES+=usr/lib32/libusb.so
OLD_LIBS+=usr/lib32/libusb.so.3
OLD_FILES+=usr/lib32/libusb_p.a
OLD_FILES+=usr/lib32/libusbhid.a
OLD_FILES+=usr/lib32/libusbhid.so
OLD_LIBS+=usr/lib32/libusbhid.so.4
OLD_FILES+=usr/lib32/libusbhid_p.a
OLD_FILES+=usr/libdata/pkgconfig/libusb-0.1.pc
OLD_FILES+=usr/libdata/pkgconfig/libusb-1.0.pc
OLD_FILES+=usr/libdata/pkgconfig/libusb-2.0.pc
OLD_FILES+=usr/sbin/uathload
OLD_FILES+=usr/sbin/uhsoctl
OLD_FILES+=usr/sbin/usbconfig
OLD_FILES+=usr/sbin/usbdump
OLD_FILES+=usr/share/examples/libusb20/Makefile
OLD_FILES+=usr/share/examples/libusb20/README
OLD_FILES+=usr/share/examples/libusb20/bulk.c
OLD_FILES+=usr/share/examples/libusb20/control.c
OLD_FILES+=usr/share/examples/libusb20/util.c
OLD_FILES+=usr/share/examples/libusb20/util.h
OLD_DIRS+=usr/share/examples/libusb20
OLD_FILES+=usr/share/firmware/ar5523.bin
OLD_FILES+=usr/share/man/man1/uhsoctl.1.gz
OLD_FILES+=usr/share/man/man1/usbhidaction.1.gz
OLD_FILES+=usr/share/man/man1/usbhidctl.1.gz
OLD_FILES+=usr/share/man/man3/hid_dispose_report_desc.3.gz
OLD_FILES+=usr/share/man/man3/hid_end_parse.3.gz
OLD_FILES+=usr/share/man/man3/hid_get_data.3.gz
OLD_FILES+=usr/share/man/man3/hid_get_item.3.gz
OLD_FILES+=usr/share/man/man3/hid_get_report_desc.3.gz
OLD_FILES+=usr/share/man/man3/hid_init.3.gz
OLD_FILES+=usr/share/man/man3/hid_locate.3.gz
OLD_FILES+=usr/share/man/man3/hid_report_size.3.gz
OLD_FILES+=usr/share/man/man3/hid_set_data.3.gz
OLD_FILES+=usr/share/man/man3/hid_start_parse.3.gz
OLD_FILES+=usr/share/man/man3/hid_usage_in_page.3.gz
OLD_FILES+=usr/share/man/man3/hid_usage_page.3.gz
OLD_FILES+=usr/share/man/man3/libusb.3.gz
OLD_FILES+=usr/share/man/man3/libusb20.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_be_add_dev_quirk.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_be_alloc_default.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_be_dequeue_device.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_be_device_foreach.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_be_enqueue_device.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_be_free.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_be_get_dev_quirk.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_be_get_quirk_name.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_be_get_template.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_be_remove_dev_quirk.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_be_set_template.3.gz OLD_FILES+=usr/share/man/man3/libusb20_desc_foreach.3.gz OLD_FILES+=usr/share/man/man3/libusb20_dev_alloc.3.gz OLD_FILES+=usr/share/man/man3/libusb20_dev_alloc_config.3.gz OLD_FILES+=usr/share/man/man3/libusb20_dev_check_connected.3.gz OLD_FILES+=usr/share/man/man3/libusb20_dev_close.3.gz OLD_FILES+=usr/share/man/man3/libusb20_dev_detach_kernel_driver.3.gz OLD_FILES+=usr/share/man/man3/libusb20_dev_free.3.gz OLD_FILES+=usr/share/man/man3/libusb20_dev_get_address.3.gz OLD_FILES+=usr/share/man/man3/libusb20_dev_get_backend_name.3.gz OLD_FILES+=usr/share/man/man3/libusb20_dev_get_bus_number.3.gz OLD_FILES+=usr/share/man/man3/libusb20_dev_get_config_index.3.gz OLD_FILES+=usr/share/man/man3/libusb20_dev_get_debug.3.gz OLD_FILES+=usr/share/man/man3/libusb20_dev_get_desc.3.gz OLD_FILES+=usr/share/man/man3/libusb20_dev_get_device_desc.3.gz OLD_FILES+=usr/share/man/man3/libusb20_dev_get_fd.3.gz OLD_FILES+=usr/share/man/man3/libusb20_dev_get_iface_desc.3.gz OLD_FILES+=usr/share/man/man3/libusb20_dev_get_info.3.gz OLD_FILES+=usr/share/man/man3/libusb20_dev_get_mode.3.gz OLD_FILES+=usr/share/man/man3/libusb20_dev_get_parent_address.3.gz OLD_FILES+=usr/share/man/man3/libusb20_dev_get_parent_port.3.gz OLD_FILES+=usr/share/man/man3/libusb20_dev_get_port_path.3.gz OLD_FILES+=usr/share/man/man3/libusb20_dev_get_power_mode.3.gz OLD_FILES+=usr/share/man/man3/libusb20_dev_get_power_usage.3.gz OLD_FILES+=usr/share/man/man3/libusb20_dev_get_speed.3.gz OLD_FILES+=usr/share/man/man3/libusb20_dev_kernel_driver_active.3.gz OLD_FILES+=usr/share/man/man3/libusb20_dev_open.3.gz OLD_FILES+=usr/share/man/man3/libusb20_dev_process.3.gz OLD_FILES+=usr/share/man/man3/libusb20_dev_req_string_simple_sync.3.gz OLD_FILES+=usr/share/man/man3/libusb20_dev_req_string_sync.3.gz OLD_FILES+=usr/share/man/man3/libusb20_dev_request_sync.3.gz OLD_FILES+=usr/share/man/man3/libusb20_dev_reset.3.gz OLD_FILES+=usr/share/man/man3/libusb20_dev_set_alt_index.3.gz OLD_FILES+=usr/share/man/man3/libusb20_dev_set_config_index.3.gz OLD_FILES+=usr/share/man/man3/libusb20_dev_set_debug.3.gz OLD_FILES+=usr/share/man/man3/libusb20_dev_set_power_mode.3.gz OLD_FILES+=usr/share/man/man3/libusb20_dev_wait_process.3.gz OLD_FILES+=usr/share/man/man3/libusb20_error_name.3.gz OLD_FILES+=usr/share/man/man3/libusb20_me_decode.3.gz OLD_FILES+=usr/share/man/man3/libusb20_me_encode.3.gz OLD_FILES+=usr/share/man/man3/libusb20_me_get_1.3.gz OLD_FILES+=usr/share/man/man3/libusb20_me_get_2.3.gz OLD_FILES+=usr/share/man/man3/libusb20_strerror.3.gz OLD_FILES+=usr/share/man/man3/libusb20_tr_bulk_intr_sync.3.gz OLD_FILES+=usr/share/man/man3/libusb20_tr_callback_wrapper.3.gz OLD_FILES+=usr/share/man/man3/libusb20_tr_clear_stall_sync.3.gz OLD_FILES+=usr/share/man/man3/libusb20_tr_close.3.gz OLD_FILES+=usr/share/man/man3/libusb20_tr_drain.3.gz OLD_FILES+=usr/share/man/man3/libusb20_tr_get_actual_frames.3.gz OLD_FILES+=usr/share/man/man3/libusb20_tr_get_actual_length.3.gz OLD_FILES+=usr/share/man/man3/libusb20_tr_get_length.3.gz OLD_FILES+=usr/share/man/man3/libusb20_tr_get_max_frames.3.gz OLD_FILES+=usr/share/man/man3/libusb20_tr_get_max_packet_length.3.gz OLD_FILES+=usr/share/man/man3/libusb20_tr_get_max_total_length.3.gz OLD_FILES+=usr/share/man/man3/libusb20_tr_get_pointer.3.gz OLD_FILES+=usr/share/man/man3/libusb20_tr_get_priv_sc0.3.gz OLD_FILES+=usr/share/man/man3/libusb20_tr_get_priv_sc1.3.gz OLD_FILES+=usr/share/man/man3/libusb20_tr_get_status.3.gz 
OLD_FILES+=usr/share/man/man3/libusb20_tr_get_time_complete.3.gz OLD_FILES+=usr/share/man/man3/libusb20_tr_open.3.gz OLD_FILES+=usr/share/man/man3/libusb20_tr_pending.3.gz OLD_FILES+=usr/share/man/man3/libusb20_tr_set_buffer.3.gz OLD_FILES+=usr/share/man/man3/libusb20_tr_set_callback.3.gz OLD_FILES+=usr/share/man/man3/libusb20_tr_set_flags.3.gz OLD_FILES+=usr/share/man/man3/libusb20_tr_set_length.3.gz OLD_FILES+=usr/share/man/man3/libusb20_tr_set_priv_sc0.3.gz OLD_FILES+=usr/share/man/man3/libusb20_tr_set_priv_sc1.3.gz OLD_FILES+=usr/share/man/man3/libusb20_tr_set_timeout.3.gz OLD_FILES+=usr/share/man/man3/libusb20_tr_set_total_frames.3.gz OLD_FILES+=usr/share/man/man3/libusb20_tr_setup_bulk.3.gz OLD_FILES+=usr/share/man/man3/libusb20_tr_setup_control.3.gz OLD_FILES+=usr/share/man/man3/libusb20_tr_setup_intr.3.gz OLD_FILES+=usr/share/man/man3/libusb20_tr_setup_isoc.3.gz OLD_FILES+=usr/share/man/man3/libusb20_tr_start.3.gz OLD_FILES+=usr/share/man/man3/libusb20_tr_stop.3.gz OLD_FILES+=usr/share/man/man3/libusb20_tr_submit.3.gz OLD_FILES+=usr/share/man/man3/libusb_alloc_transfer.3.gz OLD_FILES+=usr/share/man/man3/libusb_attach_kernel_driver.3.gz OLD_FILES+=usr/share/man/man3/libusb_bulk_transfer.3.gz OLD_FILES+=usr/share/man/man3/libusb_cancel_transfer.3.gz OLD_FILES+=usr/share/man/man3/libusb_check_connected.3.gz OLD_FILES+=usr/share/man/man3/libusb_claim_interface.3.gz OLD_FILES+=usr/share/man/man3/libusb_clear_halt.3.gz OLD_FILES+=usr/share/man/man3/libusb_close.3.gz OLD_FILES+=usr/share/man/man3/libusb_control_transfer.3.gz OLD_FILES+=usr/share/man/man3/libusb_detach_kernel_driver.3.gz OLD_FILES+=usr/share/man/man3/libusb_detach_kernel_driver_np.3.gz OLD_FILES+=usr/share/man/man3/libusb_error_name.3.gz OLD_FILES+=usr/share/man/man3/libusb_event_handler_active.3.gz OLD_FILES+=usr/share/man/man3/libusb_event_handling_ok.3.gz OLD_FILES+=usr/share/man/man3/libusb_exit.3.gz OLD_FILES+=usr/share/man/man3/libusb_free_bos_descriptor.3.gz OLD_FILES+=usr/share/man/man3/libusb_free_config_descriptor.3.gz OLD_FILES+=usr/share/man/man3/libusb_free_device_list.3.gz OLD_FILES+=usr/share/man/man3/libusb_free_ss_endpoint_comp.3.gz OLD_FILES+=usr/share/man/man3/libusb_free_transfer.3.gz OLD_FILES+=usr/share/man/man3/libusb_get_active_config_descriptor.3.gz OLD_FILES+=usr/share/man/man3/libusb_get_bus_number.3.gz OLD_FILES+=usr/share/man/man3/libusb_get_config_descriptor.3.gz OLD_FILES+=usr/share/man/man3/libusb_get_config_descriptor_by_value.3.gz OLD_FILES+=usr/share/man/man3/libusb_get_configuration.3.gz OLD_FILES+=usr/share/man/man3/libusb_get_device.3.gz OLD_FILES+=usr/share/man/man3/libusb_get_device_address.3.gz OLD_FILES+=usr/share/man/man3/libusb_get_device_descriptor.3.gz OLD_FILES+=usr/share/man/man3/libusb_get_device_list.3.gz OLD_FILES+=usr/share/man/man3/libusb_get_device_speed.3.gz OLD_FILES+=usr/share/man/man3/libusb_get_driver.3.gz OLD_FILES+=usr/share/man/man3/libusb_get_driver_np.3.gz OLD_FILES+=usr/share/man/man3/libusb_get_max_iso_packet_size.3.gz OLD_FILES+=usr/share/man/man3/libusb_get_max_packet_size.3.gz OLD_FILES+=usr/share/man/man3/libusb_get_next_timeout.3.gz OLD_FILES+=usr/share/man/man3/libusb_get_pollfds.3.gz OLD_FILES+=usr/share/man/man3/libusb_get_string_descriptor.3.gz OLD_FILES+=usr/share/man/man3/libusb_get_string_descriptor_ascii.3.gz OLD_FILES+=usr/share/man/man3/libusb_handle_events.3.gz OLD_FILES+=usr/share/man/man3/libusb_handle_events_completed.3.gz OLD_FILES+=usr/share/man/man3/libusb_handle_events_locked.3.gz 
OLD_FILES+=usr/share/man/man3/libusb_handle_events_timeout.3.gz OLD_FILES+=usr/share/man/man3/libusb_handle_events_timeout_completed.3.gz OLD_FILES+=usr/share/man/man3/libusb_init.3.gz OLD_FILES+=usr/share/man/man3/libusb_interrupt_transfer.3.gz OLD_FILES+=usr/share/man/man3/libusb_kernel_driver_active.3.gz OLD_FILES+=usr/share/man/man3/libusb_lock_event_waiters.3.gz OLD_FILES+=usr/share/man/man3/libusb_lock_events.3.gz OLD_FILES+=usr/share/man/man3/libusb_open.3.gz OLD_FILES+=usr/share/man/man3/libusb_open_device_with_vid_pid.3.gz OLD_FILES+=usr/share/man/man3/libusb_parse_bos_descriptor.3.gz OLD_FILES+=usr/share/man/man3/libusb_parse_ss_endpoint_comp.3.gz OLD_FILES+=usr/share/man/man3/libusb_ref_device.3.gz OLD_FILES+=usr/share/man/man3/libusb_release_interface.3.gz OLD_FILES+=usr/share/man/man3/libusb_reset_device.3.gz OLD_FILES+=usr/share/man/man3/libusb_set_configuration.3.gz OLD_FILES+=usr/share/man/man3/libusb_set_debug.3.gz OLD_FILES+=usr/share/man/man3/libusb_set_interface_alt_setting.3.gz OLD_FILES+=usr/share/man/man3/libusb_set_pollfd_notifiers.3.gz OLD_FILES+=usr/share/man/man3/libusb_strerror.3.gz OLD_FILES+=usr/share/man/man3/libusb_submit_transfer.3.gz OLD_FILES+=usr/share/man/man3/libusb_try_lock_events.3.gz OLD_FILES+=usr/share/man/man3/libusb_unlock_event_waiters.3.gz OLD_FILES+=usr/share/man/man3/libusb_unlock_events.3.gz OLD_FILES+=usr/share/man/man3/libusb_unref_device.3.gz OLD_FILES+=usr/share/man/man3/libusb_wait_for_event.3.gz OLD_FILES+=usr/share/man/man3/libusbhid.3.gz OLD_FILES+=usr/share/man/man3/usb.3.gz OLD_FILES+=usr/share/man/man3/usb_bulk_read.3.gz OLD_FILES+=usr/share/man/man3/usb_bulk_write.3.gz OLD_FILES+=usr/share/man/man3/usb_check_connected.3.gz OLD_FILES+=usr/share/man/man3/usb_claim_interface.3.gz OLD_FILES+=usr/share/man/man3/usb_clear_halt.3.gz OLD_FILES+=usr/share/man/man3/usb_close.3.gz OLD_FILES+=usr/share/man/man3/usb_control_msg.3.gz OLD_FILES+=usr/share/man/man3/usb_destroy_configuration.3.gz OLD_FILES+=usr/share/man/man3/usb_device.3.gz OLD_FILES+=usr/share/man/man3/usb_fetch_and_parse_descriptors.3.gz OLD_FILES+=usr/share/man/man3/usb_find_busses.3.gz OLD_FILES+=usr/share/man/man3/usb_find_devices.3.gz OLD_FILES+=usr/share/man/man3/usb_get_busses.3.gz OLD_FILES+=usr/share/man/man3/usb_get_descriptor.3.gz OLD_FILES+=usr/share/man/man3/usb_get_descriptor_by_endpoint.3.gz OLD_FILES+=usr/share/man/man3/usb_get_string.3.gz OLD_FILES+=usr/share/man/man3/usb_get_string_simple.3.gz OLD_FILES+=usr/share/man/man3/usb_init.3.gz OLD_FILES+=usr/share/man/man3/usb_interrupt_read.3.gz OLD_FILES+=usr/share/man/man3/usb_interrupt_write.3.gz OLD_FILES+=usr/share/man/man3/usb_open.3.gz OLD_FILES+=usr/share/man/man3/usb_parse_configuration.3.gz OLD_FILES+=usr/share/man/man3/usb_parse_descriptor.3.gz OLD_FILES+=usr/share/man/man3/usb_release_interface.3.gz OLD_FILES+=usr/share/man/man3/usb_reset.3.gz OLD_FILES+=usr/share/man/man3/usb_resetep.3.gz OLD_FILES+=usr/share/man/man3/usb_set_altinterface.3.gz OLD_FILES+=usr/share/man/man3/usb_set_configuration.3.gz OLD_FILES+=usr/share/man/man3/usb_set_debug.3.gz OLD_FILES+=usr/share/man/man3/usb_strerror.3.gz OLD_FILES+=usr/share/man/man3/usbhid.3.gz OLD_FILES+=usr/share/man/man4/if_otus.4.gz OLD_FILES+=usr/share/man/man4/if_rsu.4.gz OLD_FILES+=usr/share/man/man4/if_rtwn_usb.4.gz OLD_FILES+=usr/share/man/man4/if_rum.4.gz OLD_FILES+=usr/share/man/man4/if_run.4.gz OLD_FILES+=usr/share/man/man4/if_zyd.4.gz OLD_FILES+=usr/share/man/man4/otus.4.gz OLD_FILES+=usr/share/man/man4/otusfw.4.gz 
OLD_FILES+=usr/share/man/man4/rsu.4.gz
OLD_FILES+=usr/share/man/man4/rsufw.4.gz
OLD_FILES+=usr/share/man/man4/rtwn_usb.4.gz
OLD_FILES+=usr/share/man/man4/rum.4.gz
OLD_FILES+=usr/share/man/man4/run.4.gz
OLD_FILES+=usr/share/man/man4/runfw.4.gz
OLD_FILES+=usr/share/man/man4/u3g.4.gz
OLD_FILES+=usr/share/man/man4/u3gstub.4.gz
OLD_FILES+=usr/share/man/man4/uark.4.gz
OLD_FILES+=usr/share/man/man4/uath.4.gz
OLD_FILES+=usr/share/man/man4/ubsa.4.gz
OLD_FILES+=usr/share/man/man4/ubser.4.gz
OLD_FILES+=usr/share/man/man4/ubtbcmfw.4.gz
OLD_FILES+=usr/share/man/man4/uchcom.4.gz
OLD_FILES+=usr/share/man/man4/ucom.4.gz
OLD_FILES+=usr/share/man/man4/ucycom.4.gz
OLD_FILES+=usr/share/man/man4/udav.4.gz
OLD_FILES+=usr/share/man/man4/udbp.4.gz
OLD_FILES+=usr/share/man/man4/uep.4.gz
OLD_FILES+=usr/share/man/man4/ufm.4.gz
OLD_FILES+=usr/share/man/man4/ufoma.4.gz
OLD_FILES+=usr/share/man/man4/uftdi.4.gz
OLD_FILES+=usr/share/man/man4/ugen.4.gz
OLD_FILES+=usr/share/man/man4/uhci.4.gz
OLD_FILES+=usr/share/man/man4/uhid.4.gz
OLD_FILES+=usr/share/man/man4/uhso.4.gz
OLD_FILES+=usr/share/man/man4/uipaq.4.gz
OLD_FILES+=usr/share/man/man4/ukbd.4.gz
OLD_FILES+=usr/share/man/man4/uled.4.gz
OLD_FILES+=usr/share/man/man4/ulpt.4.gz
OLD_FILES+=usr/share/man/man4/umass.4.gz
OLD_FILES+=usr/share/man/man4/umcs.4.gz
OLD_FILES+=usr/share/man/man4/umct.4.gz
OLD_FILES+=usr/share/man/man4/umodem.4.gz
OLD_FILES+=usr/share/man/man4/umoscom.4.gz
OLD_FILES+=usr/share/man/man4/ums.4.gz
OLD_FILES+=usr/share/man/man4/upgt.4.gz
OLD_FILES+=usr/share/man/man4/uplcom.4.gz
OLD_FILES+=usr/share/man/man4/ural.4.gz
OLD_FILES+=usr/share/man/man4/urio.4.gz
OLD_FILES+=usr/share/man/man4/urndis.4.gz
OLD_FILES+=usr/share/man/man4/urtw.4.gz
OLD_FILES+=usr/share/man/man4/usb.4.gz
OLD_FILES+=usr/share/man/man4/usb_quirk.4.gz
OLD_FILES+=usr/share/man/man4/usb_template.4.gz
OLD_FILES+=usr/share/man/man4/usfs.4.gz
OLD_FILES+=usr/share/man/man4/uslcom.4.gz
OLD_FILES+=usr/share/man/man4/uvisor.4.gz
OLD_FILES+=usr/share/man/man4/uvscom.4.gz
OLD_FILES+=usr/share/man/man4/zyd.4.gz
OLD_FILES+=usr/share/man/man8/uathload.8.gz
OLD_FILES+=usr/share/man/man8/usbconfig.8.gz
OLD_FILES+=usr/share/man/man8/usbdump.8.gz
OLD_FILES+=usr/share/man/man9/usb_fifo_alloc_buffer.9.gz
OLD_FILES+=usr/share/man/man9/usb_fifo_attach.9.gz
OLD_FILES+=usr/share/man/man9/usb_fifo_detach.9.gz
OLD_FILES+=usr/share/man/man9/usb_fifo_free_buffer.9.gz
OLD_FILES+=usr/share/man/man9/usb_fifo_get_data.9.gz
OLD_FILES+=usr/share/man/man9/usb_fifo_get_data_buffer.9.gz
OLD_FILES+=usr/share/man/man9/usb_fifo_get_data_error.9.gz
OLD_FILES+=usr/share/man/man9/usb_fifo_get_data_linear.9.gz
OLD_FILES+=usr/share/man/man9/usb_fifo_put_bytes_max.9.gz
OLD_FILES+=usr/share/man/man9/usb_fifo_put_data.9.gz
OLD_FILES+=usr/share/man/man9/usb_fifo_put_data_buffer.9.gz
OLD_FILES+=usr/share/man/man9/usb_fifo_put_data_error.9.gz
OLD_FILES+=usr/share/man/man9/usb_fifo_put_data_linear.9.gz
OLD_FILES+=usr/share/man/man9/usb_fifo_reset.9.gz
OLD_FILES+=usr/share/man/man9/usb_fifo_softc.9.gz
OLD_FILES+=usr/share/man/man9/usb_fifo_wakeup.9.gz
OLD_FILES+=usr/share/man/man9/usbd_do_request.9.gz
OLD_FILES+=usr/share/man/man9/usbd_do_request_flags.9.gz
OLD_FILES+=usr/share/man/man9/usbd_errstr.9.gz
OLD_FILES+=usr/share/man/man9/usbd_lookup_id_by_info.9.gz
OLD_FILES+=usr/share/man/man9/usbd_lookup_id_by_uaa.9.gz
OLD_FILES+=usr/share/man/man9/usbd_transfer_clear_stall.9.gz
OLD_FILES+=usr/share/man/man9/usbd_transfer_drain.9.gz OLD_FILES+=usr/share/man/man9/usbd_transfer_pending.9.gz OLD_FILES+=usr/share/man/man9/usbd_transfer_poll.9.gz OLD_FILES+=usr/share/man/man9/usbd_transfer_setup.9.gz OLD_FILES+=usr/share/man/man9/usbd_transfer_start.9.gz OLD_FILES+=usr/share/man/man9/usbd_transfer_stop.9.gz OLD_FILES+=usr/share/man/man9/usbd_transfer_submit.9.gz OLD_FILES+=usr/share/man/man9/usbd_transfer_unsetup.9.gz OLD_FILES+=usr/share/man/man9/usbd_xfer_clr_flag.9.gz OLD_FILES+=usr/share/man/man9/usbd_xfer_frame_data.9.gz OLD_FILES+=usr/share/man/man9/usbd_xfer_frame_len.9.gz OLD_FILES+=usr/share/man/man9/usbd_xfer_get_frame.9.gz OLD_FILES+=usr/share/man/man9/usbd_xfer_get_priv.9.gz OLD_FILES+=usr/share/man/man9/usbd_xfer_is_stalled.9.gz OLD_FILES+=usr/share/man/man9/usbd_xfer_max_framelen.9.gz OLD_FILES+=usr/share/man/man9/usbd_xfer_max_frames.9.gz OLD_FILES+=usr/share/man/man9/usbd_xfer_max_len.9.gz OLD_FILES+=usr/share/man/man9/usbd_xfer_set_flag.9.gz OLD_FILES+=usr/share/man/man9/usbd_xfer_set_frame_data.9.gz OLD_FILES+=usr/share/man/man9/usbd_xfer_set_frame_len.9.gz OLD_FILES+=usr/share/man/man9/usbd_xfer_set_frame_offset.9.gz OLD_FILES+=usr/share/man/man9/usbd_xfer_set_frames.9.gz OLD_FILES+=usr/share/man/man9/usbd_xfer_set_interval.9.gz OLD_FILES+=usr/share/man/man9/usbd_xfer_set_priv.9.gz OLD_FILES+=usr/share/man/man9/usbd_xfer_set_stall.9.gz OLD_FILES+=usr/share/man/man9/usbd_xfer_set_timeout.9.gz OLD_FILES+=usr/share/man/man9/usbd_xfer_softc.9.gz OLD_FILES+=usr/share/man/man9/usbd_xfer_state.9.gz OLD_FILES+=usr/share/man/man9/usbd_xfer_status.9.gz OLD_FILES+=usr/share/man/man9/usbdi.9.gz OLD_FILES+=usr/share/misc/usb_hid_usages OLD_FILES+=usr/share/misc/usbdevs .endif .if ${MK_UTMPX} == no OLD_FILES+=etc/periodic/monthly/200.accounting +OLD_FILES+=etc/rc.d/utx OLD_FILES+=usr/bin/last OLD_FILES+=usr/bin/users OLD_FILES+=usr/bin/who OLD_FILES+=usr/sbin/ac OLD_FILES+=usr/sbin/lastlogin OLD_FILES+=usr/sbin/utx OLD_FILES+=usr/share/man/man1/last.1.gz OLD_FILES+=usr/share/man/man1/users.1.gz OLD_FILES+=usr/share/man/man1/who.1.gz OLD_FILES+=usr/share/man/man8/ac.8.gz OLD_FILES+=usr/share/man/man8/lastlogin.8.gz OLD_FILES+=usr/share/man/man8/utx.8.gz +.endif + +.if ${MK_VI} == no +OLD_FILES+=etc/rc.d/virecover +OLD_FILES+=rescue/ex +OLD_FILES+=rescue/vi +OLD_FILES+=usr/bin/ex +OLD_FILES+=usr/bin/nex +OLD_FILES+=usr/bin/nvi +OLD_FILES+=usr/bin/nview +OLD_FILES+=usr/bin/vi +OLD_FILES+=usr/bin/view +OLD_FILES+=usr/share/man/man1/ex.1.gz +OLD_FILES+=usr/share/man/man1/nex.1.gz +OLD_FILES+=usr/share/man/man1/nvi.1.gz +OLD_FILES+=usr/share/man/man1/nview.1.gz +OLD_FILES+=usr/share/man/man1/vi.1.gz +OLD_FILES+=usr/share/man/man1/view.1.gz +. if exists(${DESTDIR}/usr/share/vi) +VI_DIRS!=find ${DESTDIR}/usr/share/vi -type d \ + | sed -e 's,^${DESTDIR}/,,'; echo +VI_FILES!=find ${DESTDIR}/usr/share/vi \! -type d \ + | sed -e 's,^${DESTDIR}/,,'; echo +OLD_DIRS+=${VI_DIRS} +OLD_FILES+=${VI_FILES} +. 
endif .endif .if ${MK_WIRELESS} == no OLD_FILES+=etc/regdomain.xml OLD_FILES+=etc/rc.d/hostapd OLD_FILES+=etc/rc.d/wpa_supplicant OLD_FILES+=usr/sbin/ancontrol OLD_FILES+=usr/sbin/hostapd OLD_FILES+=usr/sbin/hostapd_cli OLD_FILES+=usr/sbin/ndis_events OLD_FILES+=usr/sbin/wlandebug OLD_FILES+=usr/sbin/wpa_cli OLD_FILES+=usr/sbin/wpa_passphrase OLD_FILES+=usr/sbin/wpa_supplicant OLD_FILES+=usr/share/examples/etc/regdomain.xml OLD_FILES+=usr/share/examples/etc/wpa_supplicant.conf OLD_FILES+=usr/share/examples/hostapd/hostapd.conf OLD_FILES+=usr/share/examples/hostapd/hostapd.eap_user OLD_FILES+=usr/share/examples/hostapd/hostapd.wpa_psk OLD_DIRS+=usr/share/examples/hostapd OLD_FILES+=usr/share/man/man5/hostapd.conf.5.gz OLD_FILES+=usr/share/man/man5/wpa_supplicant.conf.5.gz OLD_FILES+=usr/share/man/man8/ancontrol.8.gz OLD_FILES+=usr/share/man/man8/hostapd.8.gz OLD_FILES+=usr/share/man/man8/hostapd_cli.8.gz OLD_FILES+=usr/share/man/man8/ndis_events.8.gz OLD_FILES+=usr/share/man/man8/wlandebug.8.gz OLD_FILES+=usr/share/man/man8/wpa_cli.8.gz OLD_FILES+=usr/share/man/man8/wpa_passphrase.8.gz OLD_FILES+=usr/share/man/man8/wpa_supplicant.8.gz OLD_FILES+=usr/lib/snmp_wlan.so OLD_LIBS+=usr/lib/snmp_wlan.so.6 # bsnmp module OLD_FILES+=usr/share/man/man3/snmp_wlan.3.gz OLD_FILES+=usr/share/snmp/defs/wlan_tree.def OLD_FILES+=usr/share/snmp/mibs/BEGEMOT-WIRELESS-MIB.txt .endif .if ${MK_SVNLITE} == no || ${MK_SVN} == yes OLD_FILES+=usr/bin/svnlite OLD_FILES+=usr/bin/svnliteadmin OLD_FILES+=usr/bin/svnlitebench OLD_FILES+=usr/bin/svnlitedumpfilter OLD_FILES+=usr/bin/svnlitefsfs OLD_FILES+=usr/bin/svnlitelook OLD_FILES+=usr/bin/svnlitemucc OLD_FILES+=usr/bin/svnliterdump OLD_FILES+=usr/bin/svnliteserve OLD_FILES+=usr/bin/svnlitesync OLD_FILES+=usr/bin/svnliteversion OLD_FILES+=usr/share/man/man1/svnlite.1.gz .endif .if ${MK_SVN} == no OLD_FILES+=usr/bin/svn OLD_FILES+=usr/bin/svnadmin OLD_FILES+=usr/bin/svnbench OLD_FILES+=usr/bin/svndumpfilter OLD_FILES+=usr/bin/svnfsfs OLD_FILES+=usr/bin/svnlook OLD_FILES+=usr/bin/svnmucc OLD_FILES+=usr/bin/svnrdump OLD_FILES+=usr/bin/svnserve OLD_FILES+=usr/bin/svnsync OLD_FILES+=usr/bin/svnversion .endif .if ${MK_HYPERV} == no OLD_FILES+=etc/devd/hyperv.conf OLD_FILES+=usr/libexec/hyperv/hv_set_ifconfig OLD_FILES+=usr/libexec/hyperv/hv_get_dns_info OLD_FILES+=usr/libexec/hyperv/hv_get_dhcp_info OLD_FILES+=usr/sbin/hv_kvp_daemon OLD_FILES+=usr/sbin/hv_vss_daemon OLD_FILES+=usr/share/man/man8/hv_kvp_daemon.8.gz .endif .if ${MK_ZONEINFO} == no OLD_FILES+=usr/share/zoneinfo/Africa/Abidjan OLD_FILES+=usr/share/zoneinfo/Africa/Accra OLD_FILES+=usr/share/zoneinfo/Africa/Addis_Ababa OLD_FILES+=usr/share/zoneinfo/Africa/Algiers OLD_FILES+=usr/share/zoneinfo/Africa/Asmara OLD_FILES+=usr/share/zoneinfo/Africa/Bamako OLD_FILES+=usr/share/zoneinfo/Africa/Bangui OLD_FILES+=usr/share/zoneinfo/Africa/Banjul OLD_FILES+=usr/share/zoneinfo/Africa/Bissau OLD_FILES+=usr/share/zoneinfo/Africa/Blantyre OLD_FILES+=usr/share/zoneinfo/Africa/Brazzaville OLD_FILES+=usr/share/zoneinfo/Africa/Bujumbura OLD_FILES+=usr/share/zoneinfo/Africa/Cairo OLD_FILES+=usr/share/zoneinfo/Africa/Casablanca OLD_FILES+=usr/share/zoneinfo/Africa/Ceuta OLD_FILES+=usr/share/zoneinfo/Africa/Conakry OLD_FILES+=usr/share/zoneinfo/Africa/Dakar OLD_FILES+=usr/share/zoneinfo/Africa/Dar_es_Salaam OLD_FILES+=usr/share/zoneinfo/Africa/Djibouti OLD_FILES+=usr/share/zoneinfo/Africa/Douala OLD_FILES+=usr/share/zoneinfo/Africa/El_Aaiun OLD_FILES+=usr/share/zoneinfo/Africa/Freetown OLD_FILES+=usr/share/zoneinfo/Africa/Gaborone 
OLD_FILES+=usr/share/zoneinfo/Africa/Harare OLD_FILES+=usr/share/zoneinfo/Africa/Johannesburg OLD_FILES+=usr/share/zoneinfo/Africa/Juba OLD_FILES+=usr/share/zoneinfo/Africa/Kampala OLD_FILES+=usr/share/zoneinfo/Africa/Khartoum OLD_FILES+=usr/share/zoneinfo/Africa/Kigali OLD_FILES+=usr/share/zoneinfo/Africa/Kinshasa OLD_FILES+=usr/share/zoneinfo/Africa/Lagos OLD_FILES+=usr/share/zoneinfo/Africa/Libreville OLD_FILES+=usr/share/zoneinfo/Africa/Lome OLD_FILES+=usr/share/zoneinfo/Africa/Luanda OLD_FILES+=usr/share/zoneinfo/Africa/Lubumbashi OLD_FILES+=usr/share/zoneinfo/Africa/Lusaka OLD_FILES+=usr/share/zoneinfo/Africa/Malabo OLD_FILES+=usr/share/zoneinfo/Africa/Maputo OLD_FILES+=usr/share/zoneinfo/Africa/Maseru OLD_FILES+=usr/share/zoneinfo/Africa/Mbabane OLD_FILES+=usr/share/zoneinfo/Africa/Mogadishu OLD_FILES+=usr/share/zoneinfo/Africa/Monrovia OLD_FILES+=usr/share/zoneinfo/Africa/Nairobi OLD_FILES+=usr/share/zoneinfo/Africa/Ndjamena OLD_FILES+=usr/share/zoneinfo/Africa/Niamey OLD_FILES+=usr/share/zoneinfo/Africa/Nouakchott OLD_FILES+=usr/share/zoneinfo/Africa/Ouagadougou OLD_FILES+=usr/share/zoneinfo/Africa/Porto-Novo OLD_FILES+=usr/share/zoneinfo/Africa/Sao_Tome OLD_FILES+=usr/share/zoneinfo/Africa/Tripoli OLD_FILES+=usr/share/zoneinfo/Africa/Tunis OLD_FILES+=usr/share/zoneinfo/Africa/Windhoek OLD_FILES+=usr/share/zoneinfo/America/Adak OLD_FILES+=usr/share/zoneinfo/America/Anchorage OLD_FILES+=usr/share/zoneinfo/America/Anguilla OLD_FILES+=usr/share/zoneinfo/America/Antigua OLD_FILES+=usr/share/zoneinfo/America/Araguaina OLD_FILES+=usr/share/zoneinfo/America/Argentina/Buenos_Aires OLD_FILES+=usr/share/zoneinfo/America/Argentina/Catamarca OLD_FILES+=usr/share/zoneinfo/America/Argentina/Cordoba OLD_FILES+=usr/share/zoneinfo/America/Argentina/Jujuy OLD_FILES+=usr/share/zoneinfo/America/Argentina/La_Rioja OLD_FILES+=usr/share/zoneinfo/America/Argentina/Mendoza OLD_FILES+=usr/share/zoneinfo/America/Argentina/Rio_Gallegos OLD_FILES+=usr/share/zoneinfo/America/Argentina/Salta OLD_FILES+=usr/share/zoneinfo/America/Argentina/San_Juan OLD_FILES+=usr/share/zoneinfo/America/Argentina/San_Luis OLD_FILES+=usr/share/zoneinfo/America/Argentina/Tucuman OLD_FILES+=usr/share/zoneinfo/America/Argentina/Ushuaia OLD_FILES+=usr/share/zoneinfo/America/Aruba OLD_FILES+=usr/share/zoneinfo/America/Asuncion OLD_FILES+=usr/share/zoneinfo/America/Atikokan OLD_FILES+=usr/share/zoneinfo/America/Bahia OLD_FILES+=usr/share/zoneinfo/America/Bahia_Banderas OLD_FILES+=usr/share/zoneinfo/America/Barbados OLD_FILES+=usr/share/zoneinfo/America/Belem OLD_FILES+=usr/share/zoneinfo/America/Belize OLD_FILES+=usr/share/zoneinfo/America/Blanc-Sablon OLD_FILES+=usr/share/zoneinfo/America/Boa_Vista OLD_FILES+=usr/share/zoneinfo/America/Bogota OLD_FILES+=usr/share/zoneinfo/America/Boise OLD_FILES+=usr/share/zoneinfo/America/Cambridge_Bay OLD_FILES+=usr/share/zoneinfo/America/Campo_Grande OLD_FILES+=usr/share/zoneinfo/America/Cancun OLD_FILES+=usr/share/zoneinfo/America/Caracas OLD_FILES+=usr/share/zoneinfo/America/Cayenne OLD_FILES+=usr/share/zoneinfo/America/Cayman OLD_FILES+=usr/share/zoneinfo/America/Chicago OLD_FILES+=usr/share/zoneinfo/America/Chihuahua OLD_FILES+=usr/share/zoneinfo/America/Costa_Rica OLD_FILES+=usr/share/zoneinfo/America/Creston OLD_FILES+=usr/share/zoneinfo/America/Cuiaba OLD_FILES+=usr/share/zoneinfo/America/Curacao OLD_FILES+=usr/share/zoneinfo/America/Danmarkshavn OLD_FILES+=usr/share/zoneinfo/America/Dawson OLD_FILES+=usr/share/zoneinfo/America/Dawson_Creek OLD_FILES+=usr/share/zoneinfo/America/Denver 
OLD_FILES+=usr/share/zoneinfo/America/Detroit OLD_FILES+=usr/share/zoneinfo/America/Dominica OLD_FILES+=usr/share/zoneinfo/America/Edmonton OLD_FILES+=usr/share/zoneinfo/America/Eirunepe OLD_FILES+=usr/share/zoneinfo/America/El_Salvador OLD_FILES+=usr/share/zoneinfo/America/Fortaleza OLD_FILES+=usr/share/zoneinfo/America/Glace_Bay OLD_FILES+=usr/share/zoneinfo/America/Godthab OLD_FILES+=usr/share/zoneinfo/America/Goose_Bay OLD_FILES+=usr/share/zoneinfo/America/Grand_Turk OLD_FILES+=usr/share/zoneinfo/America/Grenada OLD_FILES+=usr/share/zoneinfo/America/Guadeloupe OLD_FILES+=usr/share/zoneinfo/America/Guatemala OLD_FILES+=usr/share/zoneinfo/America/Guayaquil OLD_FILES+=usr/share/zoneinfo/America/Guyana OLD_FILES+=usr/share/zoneinfo/America/Halifax OLD_FILES+=usr/share/zoneinfo/America/Havana OLD_FILES+=usr/share/zoneinfo/America/Hermosillo OLD_FILES+=usr/share/zoneinfo/America/Indiana/Indianapolis OLD_FILES+=usr/share/zoneinfo/America/Indiana/Knox OLD_FILES+=usr/share/zoneinfo/America/Indiana/Marengo OLD_FILES+=usr/share/zoneinfo/America/Indiana/Petersburg OLD_FILES+=usr/share/zoneinfo/America/Indiana/Tell_City OLD_FILES+=usr/share/zoneinfo/America/Indiana/Vevay OLD_FILES+=usr/share/zoneinfo/America/Indiana/Vincennes OLD_FILES+=usr/share/zoneinfo/America/Indiana/Winamac OLD_FILES+=usr/share/zoneinfo/America/Inuvik OLD_FILES+=usr/share/zoneinfo/America/Iqaluit OLD_FILES+=usr/share/zoneinfo/America/Jamaica OLD_FILES+=usr/share/zoneinfo/America/Juneau OLD_FILES+=usr/share/zoneinfo/America/Kentucky/Louisville OLD_FILES+=usr/share/zoneinfo/America/Kentucky/Monticello OLD_FILES+=usr/share/zoneinfo/America/Kralendijk OLD_FILES+=usr/share/zoneinfo/America/La_Paz OLD_FILES+=usr/share/zoneinfo/America/Lima OLD_FILES+=usr/share/zoneinfo/America/Los_Angeles OLD_FILES+=usr/share/zoneinfo/America/Lower_Princes OLD_FILES+=usr/share/zoneinfo/America/Maceio OLD_FILES+=usr/share/zoneinfo/America/Managua OLD_FILES+=usr/share/zoneinfo/America/Manaus OLD_FILES+=usr/share/zoneinfo/America/Marigot OLD_FILES+=usr/share/zoneinfo/America/Martinique OLD_FILES+=usr/share/zoneinfo/America/Matamoros OLD_FILES+=usr/share/zoneinfo/America/Mazatlan OLD_FILES+=usr/share/zoneinfo/America/Menominee OLD_FILES+=usr/share/zoneinfo/America/Merida OLD_FILES+=usr/share/zoneinfo/America/Metlakatla OLD_FILES+=usr/share/zoneinfo/America/Mexico_City OLD_FILES+=usr/share/zoneinfo/America/Miquelon OLD_FILES+=usr/share/zoneinfo/America/Moncton OLD_FILES+=usr/share/zoneinfo/America/Monterrey OLD_FILES+=usr/share/zoneinfo/America/Montevideo OLD_FILES+=usr/share/zoneinfo/America/Montreal OLD_FILES+=usr/share/zoneinfo/America/Montserrat OLD_FILES+=usr/share/zoneinfo/America/Nassau OLD_FILES+=usr/share/zoneinfo/America/New_York OLD_FILES+=usr/share/zoneinfo/America/Nipigon OLD_FILES+=usr/share/zoneinfo/America/Nome OLD_FILES+=usr/share/zoneinfo/America/Noronha OLD_FILES+=usr/share/zoneinfo/America/North_Dakota/Beulah OLD_FILES+=usr/share/zoneinfo/America/North_Dakota/Center OLD_FILES+=usr/share/zoneinfo/America/North_Dakota/New_Salem OLD_FILES+=usr/share/zoneinfo/America/Ojinaga OLD_FILES+=usr/share/zoneinfo/America/Panama OLD_FILES+=usr/share/zoneinfo/America/Pangnirtung OLD_FILES+=usr/share/zoneinfo/America/Paramaribo OLD_FILES+=usr/share/zoneinfo/America/Phoenix OLD_FILES+=usr/share/zoneinfo/America/Port-au-Prince OLD_FILES+=usr/share/zoneinfo/America/Port_of_Spain OLD_FILES+=usr/share/zoneinfo/America/Porto_Velho OLD_FILES+=usr/share/zoneinfo/America/Puerto_Rico OLD_FILES+=usr/share/zoneinfo/America/Rainy_River 
OLD_FILES+=usr/share/zoneinfo/America/Rankin_Inlet OLD_FILES+=usr/share/zoneinfo/America/Recife OLD_FILES+=usr/share/zoneinfo/America/Regina OLD_FILES+=usr/share/zoneinfo/America/Resolute OLD_FILES+=usr/share/zoneinfo/America/Rio_Branco OLD_FILES+=usr/share/zoneinfo/America/Santa_Isabel OLD_FILES+=usr/share/zoneinfo/America/Santarem OLD_FILES+=usr/share/zoneinfo/America/Santiago OLD_FILES+=usr/share/zoneinfo/America/Santo_Domingo OLD_FILES+=usr/share/zoneinfo/America/Sao_Paulo OLD_FILES+=usr/share/zoneinfo/America/Scoresbysund OLD_FILES+=usr/share/zoneinfo/America/Sitka OLD_FILES+=usr/share/zoneinfo/America/St_Barthelemy OLD_FILES+=usr/share/zoneinfo/America/St_Johns OLD_FILES+=usr/share/zoneinfo/America/St_Kitts OLD_FILES+=usr/share/zoneinfo/America/St_Lucia OLD_FILES+=usr/share/zoneinfo/America/St_Thomas OLD_FILES+=usr/share/zoneinfo/America/St_Vincent OLD_FILES+=usr/share/zoneinfo/America/Swift_Current OLD_FILES+=usr/share/zoneinfo/America/Tegucigalpa OLD_FILES+=usr/share/zoneinfo/America/Thule OLD_FILES+=usr/share/zoneinfo/America/Thunder_Bay OLD_FILES+=usr/share/zoneinfo/America/Tijuana OLD_FILES+=usr/share/zoneinfo/America/Toronto OLD_FILES+=usr/share/zoneinfo/America/Tortola OLD_FILES+=usr/share/zoneinfo/America/Vancouver OLD_FILES+=usr/share/zoneinfo/America/Whitehorse OLD_FILES+=usr/share/zoneinfo/America/Winnipeg OLD_FILES+=usr/share/zoneinfo/America/Yakutat OLD_FILES+=usr/share/zoneinfo/America/Yellowknife OLD_FILES+=usr/share/zoneinfo/Antarctica/Casey OLD_FILES+=usr/share/zoneinfo/Antarctica/Davis OLD_FILES+=usr/share/zoneinfo/Antarctica/DumontDUrville OLD_FILES+=usr/share/zoneinfo/Antarctica/Macquarie OLD_FILES+=usr/share/zoneinfo/Antarctica/Mawson OLD_FILES+=usr/share/zoneinfo/Antarctica/McMurdo OLD_FILES+=usr/share/zoneinfo/Antarctica/Palmer OLD_FILES+=usr/share/zoneinfo/Antarctica/Rothera OLD_FILES+=usr/share/zoneinfo/Antarctica/Syowa OLD_FILES+=usr/share/zoneinfo/Antarctica/Troll OLD_FILES+=usr/share/zoneinfo/Antarctica/Vostok OLD_FILES+=usr/share/zoneinfo/Arctic/Longyearbyen OLD_FILES+=usr/share/zoneinfo/Asia/Aden OLD_FILES+=usr/share/zoneinfo/Asia/Almaty OLD_FILES+=usr/share/zoneinfo/Asia/Amman OLD_FILES+=usr/share/zoneinfo/Asia/Anadyr OLD_FILES+=usr/share/zoneinfo/Asia/Aqtau OLD_FILES+=usr/share/zoneinfo/Asia/Aqtobe OLD_FILES+=usr/share/zoneinfo/Asia/Ashgabat OLD_FILES+=usr/share/zoneinfo/Asia/Baghdad OLD_FILES+=usr/share/zoneinfo/Asia/Bahrain OLD_FILES+=usr/share/zoneinfo/Asia/Baku OLD_FILES+=usr/share/zoneinfo/Asia/Bangkok OLD_FILES+=usr/share/zoneinfo/Asia/Beirut OLD_FILES+=usr/share/zoneinfo/Asia/Bishkek OLD_FILES+=usr/share/zoneinfo/Asia/Brunei OLD_FILES+=usr/share/zoneinfo/Asia/Chita OLD_FILES+=usr/share/zoneinfo/Asia/Choibalsan OLD_FILES+=usr/share/zoneinfo/Asia/Colombo OLD_FILES+=usr/share/zoneinfo/Asia/Damascus OLD_FILES+=usr/share/zoneinfo/Asia/Dhaka OLD_FILES+=usr/share/zoneinfo/Asia/Dili OLD_FILES+=usr/share/zoneinfo/Asia/Dubai OLD_FILES+=usr/share/zoneinfo/Asia/Dushanbe OLD_FILES+=usr/share/zoneinfo/Asia/Gaza OLD_FILES+=usr/share/zoneinfo/Asia/Hebron OLD_FILES+=usr/share/zoneinfo/Asia/Ho_Chi_Minh OLD_FILES+=usr/share/zoneinfo/Asia/Hong_Kong OLD_FILES+=usr/share/zoneinfo/Asia/Hovd OLD_FILES+=usr/share/zoneinfo/Asia/Irkutsk OLD_FILES+=usr/share/zoneinfo/Asia/Istanbul OLD_FILES+=usr/share/zoneinfo/Asia/Jakarta OLD_FILES+=usr/share/zoneinfo/Asia/Jayapura OLD_FILES+=usr/share/zoneinfo/Asia/Jerusalem OLD_FILES+=usr/share/zoneinfo/Asia/Kabul OLD_FILES+=usr/share/zoneinfo/Asia/Kamchatka OLD_FILES+=usr/share/zoneinfo/Asia/Karachi 
OLD_FILES+=usr/share/zoneinfo/Asia/Kathmandu OLD_FILES+=usr/share/zoneinfo/Asia/Khandyga OLD_FILES+=usr/share/zoneinfo/Asia/Kolkata OLD_FILES+=usr/share/zoneinfo/Asia/Krasnoyarsk OLD_FILES+=usr/share/zoneinfo/Asia/Kuala_Lumpur OLD_FILES+=usr/share/zoneinfo/Asia/Kuching OLD_FILES+=usr/share/zoneinfo/Asia/Kuwait OLD_FILES+=usr/share/zoneinfo/Asia/Macau OLD_FILES+=usr/share/zoneinfo/Asia/Magadan OLD_FILES+=usr/share/zoneinfo/Asia/Makassar OLD_FILES+=usr/share/zoneinfo/Asia/Manila OLD_FILES+=usr/share/zoneinfo/Asia/Muscat OLD_FILES+=usr/share/zoneinfo/Asia/Nicosia OLD_FILES+=usr/share/zoneinfo/Asia/Novokuznetsk OLD_FILES+=usr/share/zoneinfo/Asia/Novosibirsk OLD_FILES+=usr/share/zoneinfo/Asia/Omsk OLD_FILES+=usr/share/zoneinfo/Asia/Oral OLD_FILES+=usr/share/zoneinfo/Asia/Phnom_Penh OLD_FILES+=usr/share/zoneinfo/Asia/Pontianak OLD_FILES+=usr/share/zoneinfo/Asia/Pyongyang OLD_FILES+=usr/share/zoneinfo/Asia/Qatar OLD_FILES+=usr/share/zoneinfo/Asia/Qyzylorda OLD_FILES+=usr/share/zoneinfo/Asia/Rangoon OLD_FILES+=usr/share/zoneinfo/Asia/Riyadh OLD_FILES+=usr/share/zoneinfo/Asia/Sakhalin OLD_FILES+=usr/share/zoneinfo/Asia/Samarkand OLD_FILES+=usr/share/zoneinfo/Asia/Seoul OLD_FILES+=usr/share/zoneinfo/Asia/Shanghai OLD_FILES+=usr/share/zoneinfo/Asia/Singapore OLD_FILES+=usr/share/zoneinfo/Asia/Srednekolymsk OLD_FILES+=usr/share/zoneinfo/Asia/Taipei OLD_FILES+=usr/share/zoneinfo/Asia/Tashkent OLD_FILES+=usr/share/zoneinfo/Asia/Tbilisi OLD_FILES+=usr/share/zoneinfo/Asia/Tehran OLD_FILES+=usr/share/zoneinfo/Asia/Thimphu OLD_FILES+=usr/share/zoneinfo/Asia/Tokyo OLD_FILES+=usr/share/zoneinfo/Asia/Ulaanbaatar OLD_FILES+=usr/share/zoneinfo/Asia/Urumqi OLD_FILES+=usr/share/zoneinfo/Asia/Ust-Nera OLD_FILES+=usr/share/zoneinfo/Asia/Vientiane OLD_FILES+=usr/share/zoneinfo/Asia/Vladivostok OLD_FILES+=usr/share/zoneinfo/Asia/Yakutsk OLD_FILES+=usr/share/zoneinfo/Asia/Yekaterinburg OLD_FILES+=usr/share/zoneinfo/Asia/Yerevan OLD_FILES+=usr/share/zoneinfo/Atlantic/Azores OLD_FILES+=usr/share/zoneinfo/Atlantic/Bermuda OLD_FILES+=usr/share/zoneinfo/Atlantic/Canary OLD_FILES+=usr/share/zoneinfo/Atlantic/Cape_Verde OLD_FILES+=usr/share/zoneinfo/Atlantic/Faroe OLD_FILES+=usr/share/zoneinfo/Atlantic/Madeira OLD_FILES+=usr/share/zoneinfo/Atlantic/Reykjavik OLD_FILES+=usr/share/zoneinfo/Atlantic/South_Georgia OLD_FILES+=usr/share/zoneinfo/Atlantic/St_Helena OLD_FILES+=usr/share/zoneinfo/Atlantic/Stanley OLD_FILES+=usr/share/zoneinfo/Australia/Adelaide OLD_FILES+=usr/share/zoneinfo/Australia/Brisbane OLD_FILES+=usr/share/zoneinfo/Australia/Broken_Hill OLD_FILES+=usr/share/zoneinfo/Australia/Currie OLD_FILES+=usr/share/zoneinfo/Australia/Darwin OLD_FILES+=usr/share/zoneinfo/Australia/Eucla OLD_FILES+=usr/share/zoneinfo/Australia/Hobart OLD_FILES+=usr/share/zoneinfo/Australia/Lindeman OLD_FILES+=usr/share/zoneinfo/Australia/Lord_Howe OLD_FILES+=usr/share/zoneinfo/Australia/Melbourne OLD_FILES+=usr/share/zoneinfo/Australia/Perth OLD_FILES+=usr/share/zoneinfo/Australia/Sydney OLD_FILES+=usr/share/zoneinfo/CET OLD_FILES+=usr/share/zoneinfo/CST6CDT OLD_FILES+=usr/share/zoneinfo/EET OLD_FILES+=usr/share/zoneinfo/EST OLD_FILES+=usr/share/zoneinfo/EST5EDT OLD_FILES+=usr/share/zoneinfo/Etc/GMT OLD_FILES+=usr/share/zoneinfo/Etc/GMT+0 OLD_FILES+=usr/share/zoneinfo/Etc/GMT+1 OLD_FILES+=usr/share/zoneinfo/Etc/GMT+10 OLD_FILES+=usr/share/zoneinfo/Etc/GMT+11 OLD_FILES+=usr/share/zoneinfo/Etc/GMT+12 OLD_FILES+=usr/share/zoneinfo/Etc/GMT+2 OLD_FILES+=usr/share/zoneinfo/Etc/GMT+3 OLD_FILES+=usr/share/zoneinfo/Etc/GMT+4 
OLD_FILES+=usr/share/zoneinfo/Etc/GMT+5 OLD_FILES+=usr/share/zoneinfo/Etc/GMT+6 OLD_FILES+=usr/share/zoneinfo/Etc/GMT+7 OLD_FILES+=usr/share/zoneinfo/Etc/GMT+8 OLD_FILES+=usr/share/zoneinfo/Etc/GMT+9 OLD_FILES+=usr/share/zoneinfo/Etc/GMT-0 OLD_FILES+=usr/share/zoneinfo/Etc/GMT-1 OLD_FILES+=usr/share/zoneinfo/Etc/GMT-10 OLD_FILES+=usr/share/zoneinfo/Etc/GMT-11 OLD_FILES+=usr/share/zoneinfo/Etc/GMT-12 OLD_FILES+=usr/share/zoneinfo/Etc/GMT-13 OLD_FILES+=usr/share/zoneinfo/Etc/GMT-14 OLD_FILES+=usr/share/zoneinfo/Etc/GMT-2 OLD_FILES+=usr/share/zoneinfo/Etc/GMT-3 OLD_FILES+=usr/share/zoneinfo/Etc/GMT-4 OLD_FILES+=usr/share/zoneinfo/Etc/GMT-5 OLD_FILES+=usr/share/zoneinfo/Etc/GMT-6 OLD_FILES+=usr/share/zoneinfo/Etc/GMT-7 OLD_FILES+=usr/share/zoneinfo/Etc/GMT-8 OLD_FILES+=usr/share/zoneinfo/Etc/GMT-9 OLD_FILES+=usr/share/zoneinfo/Etc/GMT0 OLD_FILES+=usr/share/zoneinfo/Etc/Greenwich OLD_FILES+=usr/share/zoneinfo/Etc/UCT OLD_FILES+=usr/share/zoneinfo/Etc/UTC OLD_FILES+=usr/share/zoneinfo/Etc/Universal OLD_FILES+=usr/share/zoneinfo/Etc/Zulu OLD_FILES+=usr/share/zoneinfo/Europe/Amsterdam OLD_FILES+=usr/share/zoneinfo/Europe/Andorra OLD_FILES+=usr/share/zoneinfo/Europe/Athens OLD_FILES+=usr/share/zoneinfo/Europe/Belgrade OLD_FILES+=usr/share/zoneinfo/Europe/Berlin OLD_FILES+=usr/share/zoneinfo/Europe/Bratislava OLD_FILES+=usr/share/zoneinfo/Europe/Brussels OLD_FILES+=usr/share/zoneinfo/Europe/Bucharest OLD_FILES+=usr/share/zoneinfo/Europe/Budapest OLD_FILES+=usr/share/zoneinfo/Europe/Busingen OLD_FILES+=usr/share/zoneinfo/Europe/Chisinau OLD_FILES+=usr/share/zoneinfo/Europe/Copenhagen OLD_FILES+=usr/share/zoneinfo/Europe/Dublin OLD_FILES+=usr/share/zoneinfo/Europe/Gibraltar OLD_FILES+=usr/share/zoneinfo/Europe/Guernsey OLD_FILES+=usr/share/zoneinfo/Europe/Helsinki OLD_FILES+=usr/share/zoneinfo/Europe/Isle_of_Man OLD_FILES+=usr/share/zoneinfo/Europe/Istanbul OLD_FILES+=usr/share/zoneinfo/Europe/Jersey OLD_FILES+=usr/share/zoneinfo/Europe/Kaliningrad OLD_FILES+=usr/share/zoneinfo/Europe/Kiev OLD_FILES+=usr/share/zoneinfo/Europe/Lisbon OLD_FILES+=usr/share/zoneinfo/Europe/Ljubljana OLD_FILES+=usr/share/zoneinfo/Europe/London OLD_FILES+=usr/share/zoneinfo/Europe/Luxembourg OLD_FILES+=usr/share/zoneinfo/Europe/Madrid OLD_FILES+=usr/share/zoneinfo/Europe/Malta OLD_FILES+=usr/share/zoneinfo/Europe/Mariehamn OLD_FILES+=usr/share/zoneinfo/Europe/Minsk OLD_FILES+=usr/share/zoneinfo/Europe/Monaco OLD_FILES+=usr/share/zoneinfo/Europe/Moscow OLD_FILES+=usr/share/zoneinfo/Europe/Nicosia OLD_FILES+=usr/share/zoneinfo/Europe/Oslo OLD_FILES+=usr/share/zoneinfo/Europe/Paris OLD_FILES+=usr/share/zoneinfo/Europe/Podgorica OLD_FILES+=usr/share/zoneinfo/Europe/Prague OLD_FILES+=usr/share/zoneinfo/Europe/Riga OLD_FILES+=usr/share/zoneinfo/Europe/Rome OLD_FILES+=usr/share/zoneinfo/Europe/Samara OLD_FILES+=usr/share/zoneinfo/Europe/San_Marino OLD_FILES+=usr/share/zoneinfo/Europe/Sarajevo OLD_FILES+=usr/share/zoneinfo/Europe/Simferopol OLD_FILES+=usr/share/zoneinfo/Europe/Skopje OLD_FILES+=usr/share/zoneinfo/Europe/Sofia OLD_FILES+=usr/share/zoneinfo/Europe/Stockholm OLD_FILES+=usr/share/zoneinfo/Europe/Tallinn OLD_FILES+=usr/share/zoneinfo/Europe/Tirane OLD_FILES+=usr/share/zoneinfo/Europe/Uzhgorod OLD_FILES+=usr/share/zoneinfo/Europe/Vaduz OLD_FILES+=usr/share/zoneinfo/Europe/Vatican OLD_FILES+=usr/share/zoneinfo/Europe/Vienna OLD_FILES+=usr/share/zoneinfo/Europe/Vilnius OLD_FILES+=usr/share/zoneinfo/Europe/Volgograd OLD_FILES+=usr/share/zoneinfo/Europe/Warsaw OLD_FILES+=usr/share/zoneinfo/Europe/Zagreb 
OLD_FILES+=usr/share/zoneinfo/Europe/Zaporozhye OLD_FILES+=usr/share/zoneinfo/Europe/Zurich OLD_FILES+=usr/share/zoneinfo/Factory OLD_FILES+=usr/share/zoneinfo/HST OLD_FILES+=usr/share/zoneinfo/Indian/Antananarivo OLD_FILES+=usr/share/zoneinfo/Indian/Chagos OLD_FILES+=usr/share/zoneinfo/Indian/Christmas OLD_FILES+=usr/share/zoneinfo/Indian/Cocos OLD_FILES+=usr/share/zoneinfo/Indian/Comoro OLD_FILES+=usr/share/zoneinfo/Indian/Kerguelen OLD_FILES+=usr/share/zoneinfo/Indian/Mahe OLD_FILES+=usr/share/zoneinfo/Indian/Maldives OLD_FILES+=usr/share/zoneinfo/Indian/Mauritius OLD_FILES+=usr/share/zoneinfo/Indian/Mayotte OLD_FILES+=usr/share/zoneinfo/Indian/Reunion OLD_FILES+=usr/share/zoneinfo/MET OLD_FILES+=usr/share/zoneinfo/MST OLD_FILES+=usr/share/zoneinfo/MST7MDT OLD_FILES+=usr/share/zoneinfo/PST8PDT OLD_FILES+=usr/share/zoneinfo/Pacific/Apia OLD_FILES+=usr/share/zoneinfo/Pacific/Auckland OLD_FILES+=usr/share/zoneinfo/Pacific/Bougainville OLD_FILES+=usr/share/zoneinfo/Pacific/Chatham OLD_FILES+=usr/share/zoneinfo/Pacific/Chuuk OLD_FILES+=usr/share/zoneinfo/Pacific/Easter OLD_FILES+=usr/share/zoneinfo/Pacific/Efate OLD_FILES+=usr/share/zoneinfo/Pacific/Enderbury OLD_FILES+=usr/share/zoneinfo/Pacific/Fakaofo OLD_FILES+=usr/share/zoneinfo/Pacific/Fiji OLD_FILES+=usr/share/zoneinfo/Pacific/Funafuti OLD_FILES+=usr/share/zoneinfo/Pacific/Galapagos OLD_FILES+=usr/share/zoneinfo/Pacific/Gambier OLD_FILES+=usr/share/zoneinfo/Pacific/Guadalcanal OLD_FILES+=usr/share/zoneinfo/Pacific/Guam OLD_FILES+=usr/share/zoneinfo/Pacific/Honolulu OLD_FILES+=usr/share/zoneinfo/Pacific/Johnston OLD_FILES+=usr/share/zoneinfo/Pacific/Kiritimati OLD_FILES+=usr/share/zoneinfo/Pacific/Kosrae OLD_FILES+=usr/share/zoneinfo/Pacific/Kwajalein OLD_FILES+=usr/share/zoneinfo/Pacific/Majuro OLD_FILES+=usr/share/zoneinfo/Pacific/Marquesas OLD_FILES+=usr/share/zoneinfo/Pacific/Midway OLD_FILES+=usr/share/zoneinfo/Pacific/Nauru OLD_FILES+=usr/share/zoneinfo/Pacific/Niue OLD_FILES+=usr/share/zoneinfo/Pacific/Norfolk OLD_FILES+=usr/share/zoneinfo/Pacific/Noumea OLD_FILES+=usr/share/zoneinfo/Pacific/Pago_Pago OLD_FILES+=usr/share/zoneinfo/Pacific/Palau OLD_FILES+=usr/share/zoneinfo/Pacific/Pitcairn OLD_FILES+=usr/share/zoneinfo/Pacific/Pohnpei OLD_FILES+=usr/share/zoneinfo/Pacific/Port_Moresby OLD_FILES+=usr/share/zoneinfo/Pacific/Rarotonga OLD_FILES+=usr/share/zoneinfo/Pacific/Saipan OLD_FILES+=usr/share/zoneinfo/Pacific/Tahiti OLD_FILES+=usr/share/zoneinfo/Pacific/Tarawa OLD_FILES+=usr/share/zoneinfo/Pacific/Tongatapu OLD_FILES+=usr/share/zoneinfo/Pacific/Wake OLD_FILES+=usr/share/zoneinfo/Pacific/Wallis OLD_FILES+=usr/share/zoneinfo/UTC OLD_FILES+=usr/share/zoneinfo/WET OLD_FILES+=usr/share/zoneinfo/posixrules OLD_FILES+=usr/share/zoneinfo/zone.tab .endif Index: projects/import-googletest-1.8.1/tools/build/options/WITHOUT_PIE =================================================================== --- projects/import-googletest-1.8.1/tools/build/options/WITHOUT_PIE (nonexistent) +++ projects/import-googletest-1.8.1/tools/build/options/WITHOUT_PIE (revision 344271) @@ -0,0 +1,3 @@ +.\" $FreeBSD$ +Do not build dynamically linked binaries as +Position-Independent Executable (PIE). 
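(These option description files are assembled into src.conf(5); assuming the standard src.conf knob handling, a user would enable the feature by adding a line such as WITH_PIE=yes to /etc/src.conf before rebuilding world.)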
Property changes on: projects/import-googletest-1.8.1/tools/build/options/WITHOUT_PIE ___________________________________________________________________ Added: svn:eol-style ## -0,0 +1 ## +native \ No newline at end of property Added: svn:keywords ## -0,0 +1 ## +FreeBSD=%H \ No newline at end of property Added: svn:mime-type ## -0,0 +1 ## +text/plain \ No newline at end of property Index: projects/import-googletest-1.8.1/tools/build/options/WITH_PIE =================================================================== --- projects/import-googletest-1.8.1/tools/build/options/WITH_PIE (nonexistent) +++ projects/import-googletest-1.8.1/tools/build/options/WITH_PIE (revision 344271) @@ -0,0 +1,3 @@ +.\" $FreeBSD$ +Build dynamically linked binaries as +Position-Independent Executable (PIE). Property changes on: projects/import-googletest-1.8.1/tools/build/options/WITH_PIE ___________________________________________________________________ Added: svn:eol-style ## -0,0 +1 ## +native \ No newline at end of property Added: svn:keywords ## -0,0 +1 ## +FreeBSD=%H \ No newline at end of property Added: svn:mime-type ## -0,0 +1 ## +text/plain \ No newline at end of property Index: projects/import-googletest-1.8.1/tools/tools/crypto/cryptocheck.c =================================================================== --- projects/import-googletest-1.8.1/tools/tools/crypto/cryptocheck.c (revision 344270) +++ projects/import-googletest-1.8.1/tools/tools/crypto/cryptocheck.c (revision 344271) @@ -1,1352 +1,1576 @@ /*- * Copyright (c) 2017 Chelsio Communications, Inc. * All rights reserved. * Written by: John Baldwin * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ /*- * Copyright (c) 2004 Sam Leffler, Errno Consulting * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer, * without modification. * 2. Redistributions in binary form must reproduce at minimum a disclaimer * similar to the "NO WARRANTY" disclaimer below ("Disclaimer") and any * redistribution must be conditioned upon including a substantially * similar Disclaimer requirement for further binary redistribution. * 3. 
Neither the names of the above-listed copyright holders nor the names * of any contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * NO WARRANTY * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF NONINFRINGEMENT, MERCHANTIBILITY * AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL * THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE LIABLE FOR SPECIAL, EXEMPLARY, * OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER * IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF * THE POSSIBILITY OF SUCH DAMAGES. * * $FreeBSD$ */ /* * A different tool for checking hardware crypto support. Whereas * cryptotest is focused on simple performance numbers, this tool is * focused on correctness. For each crypto operation, it performs the * operation once in software via OpenSSL and a second time via * OpenCrypto and compares the results. * * cryptocheck [-vz] [-A aad length] [-a algorithm] [-d dev] [size ...] * * Options: * -v Verbose. * -z Run all algorithms on a variety of buffer sizes. * * Supported algorithms: * all Run all tests * hmac Run all hmac tests * blkcipher Run all block cipher tests * authenc Run all authenticated encryption tests * aead Run all authenticated encryption with associated data * tests * * HMACs: * sha1 sha1 hmac * sha256 256-bit sha2 hmac * sha384 384-bit sha2 hmac * sha512 512-bit sha2 hmac * blake2b Blake2-B * blake2s Blake2-S * * Block Ciphers: * aes-cbc 128-bit aes cbc * aes-cbc192 192-bit aes cbc * aes-cbc256 256-bit aes cbc * aes-ctr 128-bit aes ctr * aes-ctr192 192-bit aes ctr * aes-ctr256 256-bit aes ctr * aes-xts 128-bit aes xts * aes-xts256 256-bit aes xts * chacha20 * * Authenticated Encryption: * + * * Authenticated Encryption with Associated Data: * aes-gcm 128-bit aes gcm * aes-gcm192 192-bit aes gcm * aes-gcm256 256-bit aes gcm + * aes-ccm 128-bit aes ccm + * aes-ccm192 192-bit aes ccm + * aes-ccm256 256-bit aes ccm */ #include <sys/param.h> #include <assert.h> #include <err.h> #include <fcntl.h> #include <libutil.h> #include <stdbool.h> #include <stdio.h> #include <stdlib.h> #include <string.h> #include <unistd.h> #include <openssl/err.h> #include <openssl/hmac.h> #include <crypto/cryptodev.h> /* XXX: Temporary hack */ #ifndef COP_F_CIPHER_FIRST #define COP_F_CIPHER_FIRST 0x0001 /* Cipher before MAC.
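That is, for authenc sessions the kernel runs the cipher before computing the MAC (encrypt-then-MAC); ocf_authenc() below sets the flag only on the encrypt half of each test. The #ifndef guard merely lets the tool build against cryptodev.h headers that do not define the flag yet. Illustrative usage of the authenc path: cryptocheck -a aes-cbc+sha256hmac 256.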
*/ #endif struct alg { const char *name; int cipher; int mac; - enum { T_HASH, T_HMAC, T_BLKCIPHER, T_AUTHENC, T_GCM } type; + enum { T_HASH, T_HMAC, T_BLKCIPHER, T_AUTHENC, T_GCM, T_CCM } type; const EVP_CIPHER *(*evp_cipher)(void); const EVP_MD *(*evp_md)(void); } algs[] = { { .name = "sha1", .mac = CRYPTO_SHA1, .type = T_HASH, .evp_md = EVP_sha1 }, { .name = "sha224", .mac = CRYPTO_SHA2_224, .type = T_HASH, .evp_md = EVP_sha224 }, { .name = "sha256", .mac = CRYPTO_SHA2_256, .type = T_HASH, .evp_md = EVP_sha256 }, { .name = "sha384", .mac = CRYPTO_SHA2_384, .type = T_HASH, .evp_md = EVP_sha384 }, { .name = "sha512", .mac = CRYPTO_SHA2_512, .type = T_HASH, .evp_md = EVP_sha512 }, { .name = "sha1hmac", .mac = CRYPTO_SHA1_HMAC, .type = T_HMAC, .evp_md = EVP_sha1 }, { .name = "sha224hmac", .mac = CRYPTO_SHA2_224_HMAC, .type = T_HMAC, .evp_md = EVP_sha224 }, { .name = "sha256hmac", .mac = CRYPTO_SHA2_256_HMAC, .type = T_HMAC, .evp_md = EVP_sha256 }, { .name = "sha384hmac", .mac = CRYPTO_SHA2_384_HMAC, .type = T_HMAC, .evp_md = EVP_sha384 }, { .name = "sha512hmac", .mac = CRYPTO_SHA2_512_HMAC, .type = T_HMAC, .evp_md = EVP_sha512 }, { .name = "blake2b", .mac = CRYPTO_BLAKE2B, .type = T_HASH, .evp_md = EVP_blake2b512 }, { .name = "blake2s", .mac = CRYPTO_BLAKE2S, .type = T_HASH, .evp_md = EVP_blake2s256 }, { .name = "aes-cbc", .cipher = CRYPTO_AES_CBC, .type = T_BLKCIPHER, .evp_cipher = EVP_aes_128_cbc }, { .name = "aes-cbc192", .cipher = CRYPTO_AES_CBC, .type = T_BLKCIPHER, .evp_cipher = EVP_aes_192_cbc }, { .name = "aes-cbc256", .cipher = CRYPTO_AES_CBC, .type = T_BLKCIPHER, .evp_cipher = EVP_aes_256_cbc }, { .name = "aes-ctr", .cipher = CRYPTO_AES_ICM, .type = T_BLKCIPHER, .evp_cipher = EVP_aes_128_ctr }, { .name = "aes-ctr192", .cipher = CRYPTO_AES_ICM, .type = T_BLKCIPHER, .evp_cipher = EVP_aes_192_ctr }, { .name = "aes-ctr256", .cipher = CRYPTO_AES_ICM, .type = T_BLKCIPHER, .evp_cipher = EVP_aes_256_ctr }, { .name = "aes-xts", .cipher = CRYPTO_AES_XTS, .type = T_BLKCIPHER, .evp_cipher = EVP_aes_128_xts }, { .name = "aes-xts256", .cipher = CRYPTO_AES_XTS, .type = T_BLKCIPHER, .evp_cipher = EVP_aes_256_xts }, { .name = "chacha20", .cipher = CRYPTO_CHACHA20, .type = T_BLKCIPHER, .evp_cipher = EVP_chacha20 }, { .name = "aes-gcm", .cipher = CRYPTO_AES_NIST_GCM_16, .mac = CRYPTO_AES_128_NIST_GMAC, .type = T_GCM, .evp_cipher = EVP_aes_128_gcm }, { .name = "aes-gcm192", .cipher = CRYPTO_AES_NIST_GCM_16, .mac = CRYPTO_AES_192_NIST_GMAC, .type = T_GCM, .evp_cipher = EVP_aes_192_gcm }, { .name = "aes-gcm256", .cipher = CRYPTO_AES_NIST_GCM_16, .mac = CRYPTO_AES_256_NIST_GMAC, .type = T_GCM, .evp_cipher = EVP_aes_256_gcm }, + { .name = "aes-ccm", .cipher = CRYPTO_AES_CCM_16, + .mac = CRYPTO_AES_CCM_CBC_MAC, .type = T_CCM, + .evp_cipher = EVP_aes_128_ccm }, + { .name = "aes-ccm192", .cipher = CRYPTO_AES_CCM_16, + .mac = CRYPTO_AES_CCM_CBC_MAC, .type = T_CCM, + .evp_cipher = EVP_aes_192_ccm }, + { .name = "aes-ccm256", .cipher = CRYPTO_AES_CCM_16, + .mac = CRYPTO_AES_CCM_CBC_MAC, .type = T_CCM, + .evp_cipher = EVP_aes_256_ccm }, }; static bool verbose; static int crid; static size_t aad_len; static void usage(void) { fprintf(stderr, "usage: cryptocheck [-z] [-a algorithm] [-d dev] [size ...]\n"); exit(1); } static struct alg * find_alg(const char *name) { u_int i; for (i = 0; i < nitems(algs); i++) if (strcasecmp(algs[i].name, name) == 0) return (&algs[i]); return (NULL); } static struct alg * build_authenc(struct alg *cipher, struct alg *hmac) { static struct alg authenc; char *name; 
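/* Compose a block cipher and an HMAC into a synthetic authenc entry; the static struct is reused on every call, and the asprintf()'d name is freed by the caller (see run_authenc_tests()). */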
assert(cipher->type == T_BLKCIPHER); assert(hmac->type == T_HMAC); memset(&authenc, 0, sizeof(authenc)); asprintf(&name, "%s+%s", cipher->name, hmac->name); authenc.name = name; authenc.cipher = cipher->cipher; authenc.mac = hmac->mac; authenc.type = T_AUTHENC; authenc.evp_cipher = cipher->evp_cipher; authenc.evp_md = hmac->evp_md; return (&authenc); } static struct alg * build_authenc_name(const char *name) { struct alg *cipher, *hmac; const char *hmac_name; char *cp, *cipher_name; cp = strchr(name, '+'); cipher_name = strndup(name, cp - name); hmac_name = cp + 1; cipher = find_alg(cipher_name); free(cipher_name); if (cipher == NULL) errx(1, "Invalid cipher %s", cipher_name); hmac = find_alg(hmac_name); if (hmac == NULL) errx(1, "Invalid hash %s", hmac_name); return (build_authenc(cipher, hmac)); } static int devcrypto(void) { static int fd = -1; if (fd < 0) { fd = open("/dev/crypto", O_RDWR | O_CLOEXEC, 0); if (fd < 0) err(1, "/dev/crypto"); } return (fd); } static int crlookup(const char *devname) { struct crypt_find_op find; if (strncmp(devname, "soft", 4) == 0) return CRYPTO_FLAG_SOFTWARE; find.crid = -1; strlcpy(find.name, devname, sizeof(find.name)); if (ioctl(devcrypto(), CIOCFINDDEV, &find) == -1) err(1, "ioctl(CIOCFINDDEV)"); return (find.crid); } const char * crfind(int crid) { static struct crypt_find_op find; if (crid == CRYPTO_FLAG_SOFTWARE) return ("soft"); else if (crid == CRYPTO_FLAG_HARDWARE) return ("unknown"); bzero(&find, sizeof(find)); find.crid = crid; if (ioctl(devcrypto(), CRIOFINDDEV, &find) == -1) err(1, "ioctl(CIOCFINDDEV): crid %d", crid); return (find.name); } static int crget(void) { int fd; if (ioctl(devcrypto(), CRIOGET, &fd) == -1) err(1, "ioctl(CRIOGET)"); if (fcntl(fd, F_SETFD, 1) == -1) err(1, "fcntl(F_SETFD) (crget)"); return fd; } static char rdigit(void) { const char a[] = { 0x10,0x54,0x11,0x48,0x45,0x12,0x4f,0x13,0x49,0x53,0x14,0x41, 0x15,0x16,0x4e,0x55,0x54,0x17,0x18,0x4a,0x4f,0x42,0x19,0x01 }; return 0x20+a[random()%nitems(a)]; } static char * alloc_buffer(size_t len) { char *buf; size_t i; buf = malloc(len); for (i = 0; i < len; i++) buf[i] = rdigit(); return (buf); } static char * generate_iv(size_t len, struct alg *alg) { char *iv; iv = alloc_buffer(len); switch (alg->cipher) { case CRYPTO_AES_ICM: /* Clear the low 32 bits of the IV to hold the counter. */ iv[len - 4] = 0; iv[len - 3] = 0; iv[len - 2] = 0; iv[len - 1] = 0; break; case CRYPTO_AES_XTS: /* * Clear the low 64-bits to only store a 64-bit block * number. 
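A zero starting block number also cannot wrap within a test buffer, so the OpenSSL and OCF sides cannot disagree on carry semantics; the same reasoning applies to the 32-bit ICM counter above.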
*/ iv[len - 8] = 0; iv[len - 7] = 0; iv[len - 6] = 0; iv[len - 5] = 0; iv[len - 4] = 0; iv[len - 3] = 0; iv[len - 2] = 0; iv[len - 1] = 0; break; } return (iv); } static bool ocf_hash(struct alg *alg, const char *buffer, size_t size, char *digest, int *cridp) { struct session2_op sop; struct crypt_op cop; int fd; memset(&sop, 0, sizeof(sop)); memset(&cop, 0, sizeof(cop)); sop.crid = crid; sop.mac = alg->mac; fd = crget(); if (ioctl(fd, CIOCGSESSION2, &sop) < 0) { warn("cryptodev %s HASH not supported for device %s", alg->name, crfind(crid)); close(fd); return (false); } cop.ses = sop.ses; cop.op = 0; cop.len = size; cop.src = (char *)buffer; cop.dst = NULL; cop.mac = digest; cop.iv = NULL; if (ioctl(fd, CIOCCRYPT, &cop) < 0) { warn("cryptodev %s (%zu) HASH failed for device %s", alg->name, size, crfind(crid)); close(fd); return (false); } if (ioctl(fd, CIOCFSESSION, &sop.ses) < 0) warn("ioctl(CIOCFSESSION)"); close(fd); *cridp = sop.crid; return (true); } static void openssl_hash(struct alg *alg, const EVP_MD *md, const void *buffer, size_t size, void *digest_out, unsigned *digest_sz_out) { EVP_MD_CTX *mdctx; const char *errs; int rc; errs = ""; mdctx = EVP_MD_CTX_create(); if (mdctx == NULL) goto err_out; rc = EVP_DigestInit_ex(mdctx, md, NULL); if (rc != 1) goto err_out; rc = EVP_DigestUpdate(mdctx, buffer, size); if (rc != 1) goto err_out; rc = EVP_DigestFinal_ex(mdctx, digest_out, digest_sz_out); if (rc != 1) goto err_out; EVP_MD_CTX_destroy(mdctx); return; err_out: errx(1, "OpenSSL %s HASH failed%s: %s", alg->name, errs, ERR_error_string(ERR_get_error(), NULL)); } static void run_hash_test(struct alg *alg, size_t size) { const EVP_MD *md; char *buffer; u_int digest_len; int crid; char control_digest[EVP_MAX_MD_SIZE], test_digest[EVP_MAX_MD_SIZE]; memset(control_digest, 0x3c, sizeof(control_digest)); memset(test_digest, 0x3c, sizeof(test_digest)); md = alg->evp_md(); assert(EVP_MD_size(md) <= sizeof(control_digest)); buffer = alloc_buffer(size); /* OpenSSL HASH. */ digest_len = sizeof(control_digest); openssl_hash(alg, md, buffer, size, control_digest, &digest_len); /* cryptodev HASH. 
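Recompute the digest via /dev/crypto and compare it with the OpenSSL control below; a mismatch confined to the bytes past EVP_MD_size() is reported separately, since it suggests the driver wrote past the end of the digest.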
*/ if (!ocf_hash(alg, buffer, size, test_digest, &crid)) goto out; if (memcmp(control_digest, test_digest, sizeof(control_digest)) != 0) { if (memcmp(control_digest, test_digest, EVP_MD_size(md)) == 0) printf("%s (%zu) mismatch in trailer:\n", alg->name, size); else printf("%s (%zu) mismatch:\n", alg->name, size); printf("control:\n"); hexdump(control_digest, sizeof(control_digest), NULL, 0); printf("test (cryptodev device %s):\n", crfind(crid)); hexdump(test_digest, sizeof(test_digest), NULL, 0); goto out; } if (verbose) printf("%s (%zu) matched (cryptodev device %s)\n", alg->name, size, crfind(crid)); out: free(buffer); } static bool ocf_hmac(struct alg *alg, const char *buffer, size_t size, const char *key, size_t key_len, char *digest, int *cridp) { struct session2_op sop; struct crypt_op cop; int fd; memset(&sop, 0, sizeof(sop)); memset(&cop, 0, sizeof(cop)); sop.crid = crid; sop.mackeylen = key_len; sop.mackey = (char *)key; sop.mac = alg->mac; fd = crget(); if (ioctl(fd, CIOCGSESSION2, &sop) < 0) { warn("cryptodev %s HMAC not supported for device %s", alg->name, crfind(crid)); close(fd); return (false); } cop.ses = sop.ses; cop.op = 0; cop.len = size; cop.src = (char *)buffer; cop.dst = NULL; cop.mac = digest; cop.iv = NULL; if (ioctl(fd, CIOCCRYPT, &cop) < 0) { warn("cryptodev %s (%zu) HMAC failed for device %s", alg->name, size, crfind(crid)); close(fd); return (false); } if (ioctl(fd, CIOCFSESSION, &sop.ses) < 0) warn("ioctl(CIOCFSESSION)"); close(fd); *cridp = sop.crid; return (true); } static void run_hmac_test(struct alg *alg, size_t size) { const EVP_MD *md; char *key, *buffer; u_int key_len, digest_len; int crid; char control_digest[EVP_MAX_MD_SIZE], test_digest[EVP_MAX_MD_SIZE]; memset(control_digest, 0x3c, sizeof(control_digest)); memset(test_digest, 0x3c, sizeof(test_digest)); md = alg->evp_md(); key_len = EVP_MD_size(md); assert(EVP_MD_size(md) <= sizeof(control_digest)); key = alloc_buffer(key_len); buffer = alloc_buffer(size); /* OpenSSL HMAC. */ digest_len = sizeof(control_digest); if (HMAC(md, key, key_len, (u_char *)buffer, size, (u_char *)control_digest, &digest_len) == NULL) errx(1, "OpenSSL %s (%zu) HMAC failed: %s", alg->name, size, ERR_error_string(ERR_get_error(), NULL)); /* cryptodev HMAC. 
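Same comparison as the plain hash case, except the session also carries the MAC key, whose length is tied to the digest size above.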
*/ if (!ocf_hmac(alg, buffer, size, key, key_len, test_digest, &crid)) goto out; if (memcmp(control_digest, test_digest, sizeof(control_digest)) != 0) { if (memcmp(control_digest, test_digest, EVP_MD_size(md)) == 0) printf("%s (%zu) mismatch in trailer:\n", alg->name, size); else printf("%s (%zu) mismatch:\n", alg->name, size); printf("control:\n"); hexdump(control_digest, sizeof(control_digest), NULL, 0); printf("test (cryptodev device %s):\n", crfind(crid)); hexdump(test_digest, sizeof(test_digest), NULL, 0); goto out; } if (verbose) printf("%s (%zu) matched (cryptodev device %s)\n", alg->name, size, crfind(crid)); out: free(buffer); free(key); } static void openssl_cipher(struct alg *alg, const EVP_CIPHER *cipher, const char *key, const char *iv, const char *input, char *output, size_t size, int enc) { EVP_CIPHER_CTX *ctx; int outl, total; ctx = EVP_CIPHER_CTX_new(); if (ctx == NULL) errx(1, "OpenSSL %s (%zu) ctx new failed: %s", alg->name, size, ERR_error_string(ERR_get_error(), NULL)); if (EVP_CipherInit_ex(ctx, cipher, NULL, (const u_char *)key, (const u_char *)iv, enc) != 1) errx(1, "OpenSSL %s (%zu) ctx init failed: %s", alg->name, size, ERR_error_string(ERR_get_error(), NULL)); EVP_CIPHER_CTX_set_padding(ctx, 0); if (EVP_CipherUpdate(ctx, (u_char *)output, &outl, (const u_char *)input, size) != 1) errx(1, "OpenSSL %s (%zu) cipher update failed: %s", alg->name, size, ERR_error_string(ERR_get_error(), NULL)); total = outl; if (EVP_CipherFinal_ex(ctx, (u_char *)output + outl, &outl) != 1) errx(1, "OpenSSL %s (%zu) cipher final failed: %s", alg->name, size, ERR_error_string(ERR_get_error(), NULL)); total += outl; if (total != size) errx(1, "OpenSSL %s (%zu) cipher size mismatch: %d", alg->name, size, total); EVP_CIPHER_CTX_free(ctx); } static bool ocf_cipher(struct alg *alg, const char *key, size_t key_len, const char *iv, const char *input, char *output, size_t size, int enc, int *cridp) { struct session2_op sop; struct crypt_op cop; int fd; memset(&sop, 0, sizeof(sop)); memset(&cop, 0, sizeof(cop)); sop.crid = crid; sop.keylen = key_len; sop.key = (char *)key; sop.cipher = alg->cipher; fd = crget(); if (ioctl(fd, CIOCGSESSION2, &sop) < 0) { warn("cryptodev %s block cipher not supported for device %s", alg->name, crfind(crid)); close(fd); return (false); } cop.ses = sop.ses; cop.op = enc ? COP_ENCRYPT : COP_DECRYPT; cop.len = size; cop.src = (char *)input; cop.dst = output; cop.mac = NULL; cop.iv = (char *)iv; if (ioctl(fd, CIOCCRYPT, &cop) < 0) { warn("cryptodev %s (%zu) block cipher failed for device %s", alg->name, size, crfind(crid)); close(fd); return (false); } if (ioctl(fd, CIOCFSESSION, &sop.ses) < 0) warn("ioctl(CIOCFSESSION)"); close(fd); *cridp = sop.crid; return (true); } static void run_blkcipher_test(struct alg *alg, size_t size) { const EVP_CIPHER *cipher; char *buffer, *cleartext, *ciphertext; char *iv, *key; u_int iv_len, key_len; int crid; cipher = alg->evp_cipher(); if (size % EVP_CIPHER_block_size(cipher) != 0) { if (verbose) printf( "%s (%zu): invalid buffer size (block size %d)\n", alg->name, size, EVP_CIPHER_block_size(cipher)); return; } key_len = EVP_CIPHER_key_length(cipher); iv_len = EVP_CIPHER_iv_length(cipher); key = alloc_buffer(key_len); iv = generate_iv(iv_len, alg); cleartext = alloc_buffer(size); buffer = malloc(size); ciphertext = malloc(size); /* OpenSSL cipher. 
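Round-trip the buffer through OpenSSL first: encrypt, check that the ciphertext actually changed, then decrypt and require an exact match, so the control implementation is sanity-checked before being compared against OCF.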
*/ openssl_cipher(alg, cipher, key, iv, cleartext, ciphertext, size, 1); if (size > 0 && memcmp(cleartext, ciphertext, size) == 0) errx(1, "OpenSSL %s (%zu): cipher text unchanged", alg->name, size); openssl_cipher(alg, cipher, key, iv, ciphertext, buffer, size, 0); if (memcmp(cleartext, buffer, size) != 0) { printf("OpenSSL %s (%zu): cipher mismatch:", alg->name, size); printf("original:\n"); hexdump(cleartext, size, NULL, 0); printf("decrypted:\n"); hexdump(buffer, size, NULL, 0); exit(1); } /* OCF encrypt. */ if (!ocf_cipher(alg, key, key_len, iv, cleartext, buffer, size, 1, &crid)) goto out; if (memcmp(ciphertext, buffer, size) != 0) { printf("%s (%zu) encryption mismatch:\n", alg->name, size); printf("control:\n"); hexdump(ciphertext, size, NULL, 0); printf("test (cryptodev device %s):\n", crfind(crid)); hexdump(buffer, size, NULL, 0); goto out; } /* OCF decrypt. */ if (!ocf_cipher(alg, key, key_len, iv, ciphertext, buffer, size, 0, &crid)) goto out; if (memcmp(cleartext, buffer, size) != 0) { printf("%s (%zu) decryption mismatch:\n", alg->name, size); printf("control:\n"); hexdump(cleartext, size, NULL, 0); printf("test (cryptodev device %s):\n", crfind(crid)); hexdump(buffer, size, NULL, 0); goto out; } if (verbose) printf("%s (%zu) matched (cryptodev device %s)\n", alg->name, size, crfind(crid)); out: free(ciphertext); free(buffer); free(cleartext); free(iv); free(key); } static bool ocf_authenc(struct alg *alg, const char *cipher_key, size_t cipher_key_len, const char *iv, size_t iv_len, const char *auth_key, size_t auth_key_len, const char *aad, size_t aad_len, const char *input, char *output, size_t size, char *digest, int enc, int *cridp) { struct session2_op sop; int fd; memset(&sop, 0, sizeof(sop)); sop.crid = crid; sop.keylen = cipher_key_len; sop.key = (char *)cipher_key; sop.cipher = alg->cipher; sop.mackeylen = auth_key_len; sop.mackey = (char *)auth_key; sop.mac = alg->mac; fd = crget(); if (ioctl(fd, CIOCGSESSION2, &sop) < 0) { warn("cryptodev %s AUTHENC not supported for device %s", alg->name, crfind(crid)); close(fd); return (false); } if (aad_len != 0) { struct crypt_aead caead; memset(&caead, 0, sizeof(caead)); caead.ses = sop.ses; caead.op = enc ? COP_ENCRYPT : COP_DECRYPT; caead.flags = enc ? COP_F_CIPHER_FIRST : 0; caead.len = size; caead.aadlen = aad_len; caead.ivlen = iv_len; caead.src = (char *)input; caead.dst = output; caead.aad = (char *)aad; caead.tag = digest; caead.iv = (char *)iv; if (ioctl(fd, CIOCCRYPTAEAD, &caead) < 0) { warn("cryptodev %s (%zu) failed for device %s", alg->name, size, crfind(crid)); close(fd); return (false); } } else { struct crypt_op cop; memset(&cop, 0, sizeof(cop)); cop.ses = sop.ses; cop.op = enc ? COP_ENCRYPT : COP_DECRYPT; cop.flags = enc ? 
COP_F_CIPHER_FIRST : 0; cop.len = size; cop.src = (char *)input; cop.dst = output; cop.mac = digest; cop.iv = (char *)iv; if (ioctl(fd, CIOCCRYPT, &cop) < 0) { warn("cryptodev %s (%zu) AUTHENC failed for device %s", alg->name, size, crfind(crid)); close(fd); return (false); } } if (ioctl(fd, CIOCFSESSION, &sop.ses) < 0) warn("ioctl(CIOCFSESSION)"); close(fd); *cridp = sop.crid; return (true); } static void run_authenc_test(struct alg *alg, size_t size) { const EVP_CIPHER *cipher; const EVP_MD *md; char *aad, *buffer, *cleartext, *ciphertext; char *iv, *auth_key, *cipher_key; u_int iv_len, auth_key_len, cipher_key_len, digest_len; int crid; char control_digest[EVP_MAX_MD_SIZE], test_digest[EVP_MAX_MD_SIZE]; cipher = alg->evp_cipher(); if (size % EVP_CIPHER_block_size(cipher) != 0) { if (verbose) printf( "%s (%zu): invalid buffer size (block size %d)\n", alg->name, size, EVP_CIPHER_block_size(cipher)); return; } memset(control_digest, 0x3c, sizeof(control_digest)); memset(test_digest, 0x3c, sizeof(test_digest)); md = alg->evp_md(); cipher_key_len = EVP_CIPHER_key_length(cipher); iv_len = EVP_CIPHER_iv_length(cipher); auth_key_len = EVP_MD_size(md); cipher_key = alloc_buffer(cipher_key_len); iv = generate_iv(iv_len, alg); auth_key = alloc_buffer(auth_key_len); cleartext = alloc_buffer(aad_len + size); buffer = malloc(aad_len + size); ciphertext = malloc(aad_len + size); /* OpenSSL encrypt + HMAC. */ if (aad_len != 0) memcpy(ciphertext, cleartext, aad_len); openssl_cipher(alg, cipher, cipher_key, iv, cleartext + aad_len, ciphertext + aad_len, size, 1); if (size > 0 && memcmp(cleartext + aad_len, ciphertext + aad_len, size) == 0) errx(1, "OpenSSL %s (%zu): cipher text unchanged", alg->name, size); digest_len = sizeof(control_digest); if (HMAC(md, auth_key, auth_key_len, (u_char *)ciphertext, aad_len + size, (u_char *)control_digest, &digest_len) == NULL) errx(1, "OpenSSL %s (%zu) HMAC failed: %s", alg->name, size, ERR_error_string(ERR_get_error(), NULL)); /* OCF encrypt + HMAC. */ if (!ocf_authenc(alg, cipher_key, cipher_key_len, iv, iv_len, auth_key, auth_key_len, aad_len != 0 ? cleartext : NULL, aad_len, cleartext + aad_len, buffer + aad_len, size, test_digest, 1, &crid)) goto out; if (memcmp(ciphertext + aad_len, buffer + aad_len, size) != 0) { printf("%s (%zu) encryption mismatch:\n", alg->name, size); printf("control:\n"); hexdump(ciphertext + aad_len, size, NULL, 0); printf("test (cryptodev device %s):\n", crfind(crid)); hexdump(buffer + aad_len, size, NULL, 0); goto out; } if (memcmp(control_digest, test_digest, sizeof(control_digest)) != 0) { if (memcmp(control_digest, test_digest, EVP_MD_size(md)) == 0) printf("%s (%zu) enc hash mismatch in trailer:\n", alg->name, size); else printf("%s (%zu) enc hash mismatch:\n", alg->name, size); printf("control:\n"); hexdump(control_digest, sizeof(control_digest), NULL, 0); printf("test (cryptodev device %s):\n", crfind(crid)); hexdump(test_digest, sizeof(test_digest), NULL, 0); goto out; } /* OCF HMAC + decrypt. */ memset(test_digest, 0x3c, sizeof(test_digest)); if (!ocf_authenc(alg, cipher_key, cipher_key_len, iv, iv_len, auth_key, auth_key_len, aad_len != 0 ? 
ciphertext : NULL, aad_len, ciphertext + aad_len, buffer + aad_len, size, test_digest, 0, &crid)) goto out; if (memcmp(control_digest, test_digest, sizeof(control_digest)) != 0) { if (memcmp(control_digest, test_digest, EVP_MD_size(md)) == 0) printf("%s (%zu) dec hash mismatch in trailer:\n", alg->name, size); else printf("%s (%zu) dec hash mismatch:\n", alg->name, size); printf("control:\n"); hexdump(control_digest, sizeof(control_digest), NULL, 0); printf("test (cryptodev device %s):\n", crfind(crid)); hexdump(test_digest, sizeof(test_digest), NULL, 0); goto out; } if (memcmp(cleartext + aad_len, buffer + aad_len, size) != 0) { printf("%s (%zu) decryption mismatch:\n", alg->name, size); printf("control:\n"); hexdump(cleartext, size, NULL, 0); printf("test (cryptodev device %s):\n", crfind(crid)); hexdump(buffer, size, NULL, 0); goto out; } if (verbose) printf("%s (%zu) matched (cryptodev device %s)\n", alg->name, size, crfind(crid)); out: free(ciphertext); free(buffer); free(cleartext); free(auth_key); free(iv); free(cipher_key); } static void openssl_gcm_encrypt(struct alg *alg, const EVP_CIPHER *cipher, const char *key, const char *iv, const char *aad, size_t aad_len, const char *input, char *output, size_t size, char *tag) { EVP_CIPHER_CTX *ctx; int outl, total; ctx = EVP_CIPHER_CTX_new(); if (ctx == NULL) errx(1, "OpenSSL %s (%zu) ctx new failed: %s", alg->name, size, ERR_error_string(ERR_get_error(), NULL)); if (EVP_EncryptInit_ex(ctx, cipher, NULL, (const u_char *)key, (const u_char *)iv) != 1) errx(1, "OpenSSL %s (%zu) ctx init failed: %s", alg->name, size, ERR_error_string(ERR_get_error(), NULL)); EVP_CIPHER_CTX_set_padding(ctx, 0); if (aad != NULL) { if (EVP_EncryptUpdate(ctx, NULL, &outl, (const u_char *)aad, aad_len) != 1) errx(1, "OpenSSL %s (%zu) aad update failed: %s", alg->name, size, ERR_error_string(ERR_get_error(), NULL)); } if (EVP_EncryptUpdate(ctx, (u_char *)output, &outl, (const u_char *)input, size) != 1) errx(1, "OpenSSL %s (%zu) encrypt update failed: %s", alg->name, size, ERR_error_string(ERR_get_error(), NULL)); total = outl; if (EVP_EncryptFinal_ex(ctx, (u_char *)output + outl, &outl) != 1) errx(1, "OpenSSL %s (%zu) encrypt final failed: %s", alg->name, size, ERR_error_string(ERR_get_error(), NULL)); total += outl; if (total != size) errx(1, "OpenSSL %s (%zu) encrypt size mismatch: %d", alg->name, size, total); if (EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_GCM_GET_TAG, AES_GMAC_HASH_LEN, tag) != 1) errx(1, "OpenSSL %s (%zu) get tag failed: %s", alg->name, size, ERR_error_string(ERR_get_error(), NULL)); EVP_CIPHER_CTX_free(ctx); } static bool ocf_gcm(struct alg *alg, const char *key, size_t key_len, const char *iv, size_t iv_len, const char *aad, size_t aad_len, const char *input, char *output, size_t size, char *tag, int enc, int *cridp) { struct session2_op sop; struct crypt_aead caead; int fd; memset(&sop, 0, sizeof(sop)); memset(&caead, 0, sizeof(caead)); sop.crid = crid; sop.keylen = key_len; sop.key = (char *)key; sop.cipher = alg->cipher; sop.mackeylen = key_len; sop.mackey = (char *)key; sop.mac = alg->mac; fd = crget(); if (ioctl(fd, CIOCGSESSION2, &sop) < 0) { warn("cryptodev %s not supported for device %s", alg->name, crfind(crid)); close(fd); return (false); } caead.ses = sop.ses; caead.op = enc ? 
COP_ENCRYPT : COP_DECRYPT; caead.len = size; caead.aadlen = aad_len; caead.ivlen = iv_len; caead.src = (char *)input; caead.dst = output; caead.aad = (char *)aad; caead.tag = tag; caead.iv = (char *)iv; if (ioctl(fd, CIOCCRYPTAEAD, &caead) < 0) { warn("cryptodev %s (%zu) failed for device %s", alg->name, size, crfind(crid)); close(fd); return (false); } if (ioctl(fd, CIOCFSESSION, &sop.ses) < 0) warn("ioctl(CIOCFSESSION)"); close(fd); *cridp = sop.crid; return (true); } #ifdef notused static bool openssl_gcm_decrypt(struct alg *alg, const EVP_CIPHER *cipher, const char *key, const char *iv, const char *aad, size_t aad_len, const char *input, char *output, size_t size, char *tag) { EVP_CIPHER_CTX *ctx; int outl, total; bool valid; ctx = EVP_CIPHER_CTX_new(); if (ctx == NULL) errx(1, "OpenSSL %s (%zu) ctx new failed: %s", alg->name, size, ERR_error_string(ERR_get_error(), NULL)); if (EVP_DecryptInit_ex(ctx, cipher, NULL, (const u_char *)key, (const u_char *)iv) != 1) errx(1, "OpenSSL %s (%zu) ctx init failed: %s", alg->name, size, ERR_error_string(ERR_get_error(), NULL)); EVP_CIPHER_CTX_set_padding(ctx, 0); if (aad != NULL) { if (EVP_DecryptUpdate(ctx, NULL, &outl, (const u_char *)aad, aad_len) != 1) errx(1, "OpenSSL %s (%zu) aad update failed: %s", alg->name, size, ERR_error_string(ERR_get_error(), NULL)); } if (EVP_DecryptUpdate(ctx, (u_char *)output, &outl, (const u_char *)input, size) != 1) errx(1, "OpenSSL %s (%zu) decrypt update failed: %s", alg->name, size, ERR_error_string(ERR_get_error(), NULL)); total = outl; if (EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_GCM_SET_TAG, AES_GMAC_HASH_LEN, tag) != 1) errx(1, "OpenSSL %s (%zu) get tag failed: %s", alg->name, size, ERR_error_string(ERR_get_error(), NULL)); valid = (EVP_DecryptFinal_ex(ctx, (u_char *)output + outl, &outl) != 1); total += outl; if (total != size) errx(1, "OpenSSL %s (%zu) decrypt size mismatch: %d", alg->name, size, total); EVP_CIPHER_CTX_free(ctx); return (valid); } #endif static void run_gcm_test(struct alg *alg, size_t size) { const EVP_CIPHER *cipher; char *aad, *buffer, *cleartext, *ciphertext; char *iv, *key; u_int iv_len, key_len; int crid; char control_tag[AES_GMAC_HASH_LEN], test_tag[AES_GMAC_HASH_LEN]; cipher = alg->evp_cipher(); if (size % EVP_CIPHER_block_size(cipher) != 0) { if (verbose) printf( "%s (%zu): invalid buffer size (block size %d)\n", alg->name, size, EVP_CIPHER_block_size(cipher)); return; } memset(control_tag, 0x3c, sizeof(control_tag)); memset(test_tag, 0x3c, sizeof(test_tag)); key_len = EVP_CIPHER_key_length(cipher); iv_len = EVP_CIPHER_iv_length(cipher); key = alloc_buffer(key_len); iv = generate_iv(iv_len, alg); cleartext = alloc_buffer(size); buffer = malloc(size); ciphertext = malloc(size); if (aad_len != 0) aad = alloc_buffer(aad_len); else aad = NULL; /* OpenSSL encrypt */ openssl_gcm_encrypt(alg, cipher, key, iv, aad, aad_len, cleartext, ciphertext, size, control_tag); /* OCF encrypt */ if (!ocf_gcm(alg, key, key_len, iv, iv_len, aad, aad_len, cleartext, buffer, size, test_tag, 1, &crid)) goto out; if (memcmp(ciphertext, buffer, size) != 0) { printf("%s (%zu) encryption mismatch:\n", alg->name, size); printf("control:\n"); hexdump(ciphertext, size, NULL, 0); printf("test (cryptodev device %s):\n", crfind(crid)); hexdump(buffer, size, NULL, 0); goto out; } if (memcmp(control_tag, test_tag, sizeof(control_tag)) != 0) { printf("%s (%zu) enc tag mismatch:\n", alg->name, size); printf("control:\n"); hexdump(control_tag, sizeof(control_tag), NULL, 0); printf("test (cryptodev device %s):\n", 
crfind(crid)); hexdump(test_tag, sizeof(test_tag), NULL, 0); goto out; } /* OCF decrypt */ if (!ocf_gcm(alg, key, key_len, iv, iv_len, aad, aad_len, ciphertext, buffer, size, control_tag, 0, &crid)) goto out; if (memcmp(cleartext, buffer, size) != 0) { printf("%s (%zu) decryption mismatch:\n", alg->name, size); printf("control:\n"); hexdump(cleartext, size, NULL, 0); printf("test (cryptodev device %s):\n", crfind(crid)); hexdump(buffer, size, NULL, 0); goto out; } if (verbose) printf("%s (%zu) matched (cryptodev device %s)\n", alg->name, size, crfind(crid)); out: free(aad); free(ciphertext); free(buffer); free(cleartext); free(iv); free(key); } static void +openssl_ccm_encrypt(struct alg *alg, const EVP_CIPHER *cipher, const char *key, + const char *iv, size_t iv_len, const char *aad, size_t aad_len, + const char *input, char *output, size_t size, char *tag) +{ + EVP_CIPHER_CTX *ctx; + int outl, total; + + ctx = EVP_CIPHER_CTX_new(); + if (ctx == NULL) + errx(1, "OpenSSL %s (%zu) ctx new failed: %s", alg->name, + size, ERR_error_string(ERR_get_error(), NULL)); + if (EVP_EncryptInit_ex(ctx, cipher, NULL, NULL, NULL) != 1) + errx(1, "OpenSSL %s (%zu) ctx init failed: %s", alg->name, + size, ERR_error_string(ERR_get_error(), NULL)); + if (EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_CCM_SET_TAG, AES_CBC_MAC_HASH_LEN, NULL) != 1) + errx(1, "OpenSSL %s (%zu) setting tag length failed: %s", alg->name, + size, ERR_error_string(ERR_get_error(), NULL)); + if (EVP_EncryptInit_ex(ctx, NULL, NULL, (const u_char *)key, + (const u_char *)iv) != 1) + errx(1, "OpenSSL %s (%zu) ctx init failed: %s", alg->name, + size, ERR_error_string(ERR_get_error(), NULL)); + if (EVP_EncryptUpdate(ctx, NULL, &outl, NULL, size) != 1) + errx(1, "OpenSSL %s (%zu) unable to set data length: %s", alg->name, + size, ERR_error_string(ERR_get_error(), NULL)); + + if (aad != NULL) { + if (EVP_EncryptUpdate(ctx, NULL, &outl, (const u_char *)aad, + aad_len) != 1) + errx(1, "OpenSSL %s (%zu) aad update failed: %s", + alg->name, size, + ERR_error_string(ERR_get_error(), NULL)); + } + if (EVP_EncryptUpdate(ctx, (u_char *)output, &outl, + (const u_char *)input, size) != 1) + errx(1, "OpenSSL %s (%zu) encrypt update failed: %s", alg->name, + size, ERR_error_string(ERR_get_error(), NULL)); + total = outl; + if (EVP_EncryptFinal_ex(ctx, (u_char *)output + outl, &outl) != 1) + errx(1, "OpenSSL %s (%zu) encrypt final failed: %s", alg->name, + size, ERR_error_string(ERR_get_error(), NULL)); + total += outl; + if (total != size) + errx(1, "OpenSSL %s (%zu) encrypt size mismatch: %d", alg->name, + size, total); + if (EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_CCM_GET_TAG, AES_CBC_MAC_HASH_LEN, + tag) != 1) + errx(1, "OpenSSL %s (%zu) get tag failed: %s", alg->name, + size, ERR_error_string(ERR_get_error(), NULL)); + EVP_CIPHER_CTX_free(ctx); +} + +static bool +ocf_ccm(struct alg *alg, const char *key, size_t key_len, const char *iv, + size_t iv_len, const char *aad, size_t aad_len, const char *input, + char *output, size_t size, char *tag, int enc, int *cridp) +{ + struct session2_op sop; + struct crypt_aead caead; + int fd; + bool rv; + + memset(&sop, 0, sizeof(sop)); + memset(&caead, 0, sizeof(caead)); + sop.crid = crid; + sop.keylen = key_len; + sop.key = (char *)key; + sop.cipher = alg->cipher; + sop.mackeylen = key_len; + sop.mackey = (char *)key; + sop.mac = alg->mac; + fd = crget(); + if (ioctl(fd, CIOCGSESSION2, &sop) < 0) { + warn("cryptodev %s not supported for device %s", + alg->name, crfind(crid)); + close(fd); + return (false); + } + + caead.ses = 
sop.ses; + caead.op = enc ? COP_ENCRYPT : COP_DECRYPT; + caead.len = size; + caead.aadlen = aad_len; + caead.ivlen = iv_len; + caead.src = (char *)input; + caead.dst = output; + caead.aad = (char *)aad; + caead.tag = tag; + caead.iv = (char *)iv; + + if (ioctl(fd, CIOCCRYPTAEAD, &caead) < 0) { + warn("cryptodev %s (%zu) failed for device %s", + alg->name, size, crfind(crid)); + rv = false; + } else + rv = true; + + if (ioctl(fd, CIOCFSESSION, &sop.ses) < 0) + warn("ioctl(CIOCFSESSION)"); + + close(fd); + *cridp = sop.crid; + return (rv); +} + +static void +run_ccm_test(struct alg *alg, size_t size) +{ + const EVP_CIPHER *cipher; + char *aad, *buffer, *cleartext, *ciphertext; + char *iv, *key; + u_int iv_len, key_len; + int crid; + char control_tag[AES_CBC_MAC_HASH_LEN], test_tag[AES_CBC_MAC_HASH_LEN]; + + cipher = alg->evp_cipher(); + if (size % EVP_CIPHER_block_size(cipher) != 0) { + if (verbose) + printf( + "%s (%zu): invalid buffer size (block size %d)\n", + alg->name, size, EVP_CIPHER_block_size(cipher)); + return; + } + + memset(control_tag, 0x3c, sizeof(control_tag)); + memset(test_tag, 0x3c, sizeof(test_tag)); + + /* + * We only have one algorithm constant for CBC-MAC; however, the + * alg structure uses the different openssl types, which gives us + * the key length. We need that for the OCF code. + */ + key_len = EVP_CIPHER_key_length(cipher); + + /* + * AES-CCM can have varying IV lengths; however, for the moment + * we only support AES_CCM_IV_LEN (12). So if the sizes are + * different, we'll fail. + */ + iv_len = EVP_CIPHER_iv_length(cipher); + if (iv_len != AES_CCM_IV_LEN) { + if (verbose) + printf("OpenSSL CCM IV length (%d) != AES_CCM_IV_LEN", + iv_len); + return; + } + + key = alloc_buffer(key_len); + iv = generate_iv(iv_len, alg); + cleartext = alloc_buffer(size); + buffer = malloc(size); + ciphertext = malloc(size); + if (aad_len != 0) + aad = alloc_buffer(aad_len); + else + aad = NULL; + + /* OpenSSL encrypt */ + openssl_ccm_encrypt(alg, cipher, key, iv, iv_len, aad, aad_len, cleartext, + ciphertext, size, control_tag); + + /* OCF encrypt */ + if (!ocf_ccm(alg, key, key_len, iv, iv_len, aad, aad_len, cleartext, + buffer, size, test_tag, 1, &crid)) + goto out; + if (memcmp(ciphertext, buffer, size) != 0) { + printf("%s (%zu) encryption mismatch:\n", alg->name, size); + printf("control:\n"); + hexdump(ciphertext, size, NULL, 0); + printf("test (cryptodev device %s):\n", crfind(crid)); + hexdump(buffer, size, NULL, 0); + goto out; + } + if (memcmp(control_tag, test_tag, sizeof(control_tag)) != 0) { + printf("%s (%zu) enc tag mismatch:\n", alg->name, size); + printf("control:\n"); + hexdump(control_tag, sizeof(control_tag), NULL, 0); + printf("test (cryptodev device %s):\n", crfind(crid)); + hexdump(test_tag, sizeof(test_tag), NULL, 0); + goto out; + } + + /* OCF decrypt */ + if (!ocf_ccm(alg, key, key_len, iv, iv_len, aad, aad_len, ciphertext, + buffer, size, control_tag, 0, &crid)) + goto out; + if (memcmp(cleartext, buffer, size) != 0) { + printf("%s (%zu) decryption mismatch:\n", alg->name, size); + printf("control:\n"); + hexdump(cleartext, size, NULL, 0); + printf("test (cryptodev device %s):\n", crfind(crid)); + hexdump(buffer, size, NULL, 0); + goto out; + } + + if (verbose) + printf("%s (%zu) matched (cryptodev device %s)\n", + alg->name, size, crfind(crid)); + +out: + free(aad); + free(ciphertext); + free(buffer); + free(cleartext); + free(iv); + free(key); +} + +static void run_test(struct alg *alg, size_t size) { switch (alg->type) { case T_HASH: 
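/* Each case below drives one OpenSSL-vs-OCF comparison at the given buffer size; the AEAD modes (GCM, and now CCM) additionally compare authentication tags. */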
run_hash_test(alg, size); break; case T_HMAC: run_hmac_test(alg, size); break; case T_BLKCIPHER: run_blkcipher_test(alg, size); break; case T_AUTHENC: run_authenc_test(alg, size); break; case T_GCM: run_gcm_test(alg, size); break; + case T_CCM: + run_ccm_test(alg, size); + break; } } static void run_test_sizes(struct alg *alg, size_t *sizes, u_int nsizes) { u_int i; for (i = 0; i < nsizes; i++) run_test(alg, sizes[i]); } static void run_hash_tests(size_t *sizes, u_int nsizes) { u_int i; for (i = 0; i < nitems(algs); i++) if (algs[i].type == T_HASH) run_test_sizes(&algs[i], sizes, nsizes); } static void run_hmac_tests(size_t *sizes, u_int nsizes) { u_int i; for (i = 0; i < nitems(algs); i++) if (algs[i].type == T_HMAC) run_test_sizes(&algs[i], sizes, nsizes); } static void run_blkcipher_tests(size_t *sizes, u_int nsizes) { u_int i; for (i = 0; i < nitems(algs); i++) if (algs[i].type == T_BLKCIPHER) run_test_sizes(&algs[i], sizes, nsizes); } static void run_authenc_tests(size_t *sizes, u_int nsizes) { struct alg *authenc, *cipher, *hmac; u_int i, j; for (i = 0; i < nitems(algs); i++) { cipher = &algs[i]; if (cipher->type != T_BLKCIPHER) continue; for (j = 0; j < nitems(algs); j++) { hmac = &algs[j]; if (hmac->type != T_HMAC) continue; authenc = build_authenc(cipher, hmac); run_test_sizes(authenc, sizes, nsizes); free((char *)authenc->name); } } } static void run_aead_tests(size_t *sizes, u_int nsizes) { u_int i; for (i = 0; i < nitems(algs); i++) - if (algs[i].type == T_GCM) + if (algs[i].type == T_GCM || + algs[i].type == T_CCM) run_test_sizes(&algs[i], sizes, nsizes); } int main(int ac, char **av) { const char *algname; struct alg *alg; size_t sizes[128]; u_int i, nsizes; bool testall; int ch; algname = NULL; crid = CRYPTO_FLAG_HARDWARE; testall = false; verbose = false; while ((ch = getopt(ac, av, "A:a:d:vz")) != -1) switch (ch) { case 'A': aad_len = atoi(optarg); break; case 'a': algname = optarg; break; case 'd': crid = crlookup(optarg); break; case 'v': verbose = true; break; case 'z': testall = true; break; default: usage(); } ac -= optind; av += optind; nsizes = 0; while (ac > 0) { char *cp; if (nsizes >= nitems(sizes)) { warnx("Too many sizes, ignoring extras"); break; } sizes[nsizes] = strtol(av[0], &cp, 0); if (*cp != '\0') errx(1, "Bad size %s", av[0]); nsizes++; ac--; av++; } if (algname == NULL) errx(1, "Algorithm required"); if (nsizes == 0) { sizes[0] = 16; nsizes++; if (testall) { while (sizes[nsizes - 1] * 2 < 240 * 1024) { assert(nsizes < nitems(sizes)); sizes[nsizes] = sizes[nsizes - 1] * 2; nsizes++; } if (sizes[nsizes - 1] < 240 * 1024) { assert(nsizes < nitems(sizes)); sizes[nsizes] = 240 * 1024; nsizes++; } } } if (strcasecmp(algname, "hash") == 0) run_hash_tests(sizes, nsizes); else if (strcasecmp(algname, "hmac") == 0) run_hmac_tests(sizes, nsizes); else if (strcasecmp(algname, "blkcipher") == 0) run_blkcipher_tests(sizes, nsizes); else if (strcasecmp(algname, "authenc") == 0) run_authenc_tests(sizes, nsizes); else if (strcasecmp(algname, "aead") == 0) run_aead_tests(sizes, nsizes); else if (strcasecmp(algname, "all") == 0) { run_hash_tests(sizes, nsizes); run_hmac_tests(sizes, nsizes); run_blkcipher_tests(sizes, nsizes); run_authenc_tests(sizes, nsizes); run_aead_tests(sizes, nsizes); } else if (strchr(algname, '+') != NULL) { alg = build_authenc_name(algname); run_test_sizes(alg, sizes, nsizes); } else { alg = find_alg(algname); if (alg == NULL) errx(1, "Invalid algorithm %s", algname); run_test_sizes(alg, sizes, nsizes); } return (0); } Index: 
projects/import-googletest-1.8.1/usr.bin/clang/Makefile.inc =================================================================== --- projects/import-googletest-1.8.1/usr.bin/clang/Makefile.inc (revision 344270) +++ projects/import-googletest-1.8.1/usr.bin/clang/Makefile.inc (revision 344271) @@ -1,13 +1,15 @@ # $FreeBSD$ WARNS?= 0 .include +MK_PIE:= no # Explicit libXXX.a references + .if ${COMPILER_TYPE} == "clang" DEBUG_FILES_CFLAGS= -gline-tables-only .else DEBUG_FILES_CFLAGS= -g1 .endif .include "../Makefile.inc" Index: projects/import-googletest-1.8.1/usr.bin/kdump/kdump.c =================================================================== --- projects/import-googletest-1.8.1/usr.bin/kdump/kdump.c (revision 344270) +++ projects/import-googletest-1.8.1/usr.bin/kdump/kdump.c (revision 344271) @@ -1,2181 +1,2181 @@ /*- * SPDX-License-Identifier: BSD-3-Clause * * Copyright (c) 1988, 1993 * The Regents of the University of California. All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * 3. Neither the name of the University nor the names of its contributors * may be used to endorse or promote products derived from this software * without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ #ifndef lint static const char copyright[] = "@(#) Copyright (c) 1988, 1993\n\ The Regents of the University of California. 
All rights reserved.\n"; #endif /* not lint */ #ifndef lint #if 0 static char sccsid[] = "@(#)kdump.c 8.1 (Berkeley) 6/6/93"; #endif #endif /* not lint */ #include __FBSDID("$FreeBSD$"); #define _WANT_KERNEL_ERRNO #ifdef __LP64__ #define _WANT_KEVENT32 #endif #define _WANT_FREEBSD11_KEVENT #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #ifdef WITH_CASPER #include #endif #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include "ktrace.h" #ifdef WITH_CASPER #include #include #include #endif int fetchprocinfo(struct ktr_header *, u_int *); u_int findabi(struct ktr_header *); int fread_tail(void *, int, int); void dumpheader(struct ktr_header *, u_int); void ktrsyscall(struct ktr_syscall *, u_int); void ktrsysret(struct ktr_sysret *, u_int); void ktrnamei(char *, int); void hexdump(char *, int, int); void visdump(char *, int, int); void ktrgenio(struct ktr_genio *, int); void ktrpsig(struct ktr_psig *); void ktrcsw(struct ktr_csw *); void ktrcsw_old(struct ktr_csw_old *); void ktruser(int, void *); void ktrcaprights(cap_rights_t *); void ktritimerval(struct itimerval *it); void ktrsockaddr(struct sockaddr *); void ktrstat(struct stat *); void ktrstruct(char *, size_t); void ktrcapfail(struct ktr_cap_fail *); void ktrfault(struct ktr_fault *); void ktrfaultend(struct ktr_faultend *); void ktrkevent(struct kevent *); void ktrstructarray(struct ktr_struct_array *, size_t); void usage(void); #define TIMESTAMP_NONE 0x0 #define TIMESTAMP_ABSOLUTE 0x1 #define TIMESTAMP_ELAPSED 0x2 #define TIMESTAMP_RELATIVE 0x4 static bool abiflag, decimal, fancy = true, resolv, suppressdata, syscallno, tail, threads; static int timestamp, maxdata; static const char *tracefile = DEF_TRACEFILE; static struct ktr_header ktr_header; #define TIME_FORMAT "%b %e %T %Y" #define eqs(s1, s2) (strcmp((s1), (s2)) == 0) #define print_number64(first,i,n,c) do { \ uint64_t __v; \ \ if (quad_align && (((ptrdiff_t)((i) - (first))) & 1) == 1) { \ (i)++; \ (n)--; \ } \ if (quad_slots == 2) \ __v = (uint64_t)(uint32_t)(i)[0] | \ ((uint64_t)(uint32_t)(i)[1]) << 32; \ else \ __v = (uint64_t)*(i); \ if (decimal) \ printf("%c%jd", (c), (intmax_t)__v); \ else \ printf("%c%#jx", (c), (uintmax_t)__v); \ (i) += quad_slots; \ (n) -= quad_slots; \ (c) = ','; \ } while (0) #define print_number(i,n,c) do { \ if (decimal) \ printf("%c%jd", c, (intmax_t)*i); \ else \ printf("%c%#jx", c, (uintmax_t)(u_register_t)*i); \ i++; \ n--; \ c = ','; \ } while (0) struct proc_info { TAILQ_ENTRY(proc_info) info; u_int sv_flags; pid_t pid; }; static TAILQ_HEAD(trace_procs, proc_info) trace_procs; #ifdef WITH_CASPER static cap_channel_t *cappwd, *capgrp; static int cappwdgrp_setup(cap_channel_t **cappwdp, cap_channel_t **capgrpp) { cap_channel_t *capcas, *cappwdloc, *capgrploc; const char *cmds[1], *fields[1]; capcas = cap_init(); if (capcas == NULL) { err(1, "unable to create casper process"); exit(1); } cappwdloc = cap_service_open(capcas, "system.pwd"); capgrploc = cap_service_open(capcas, "system.grp"); /* Casper capability no longer needed. */ cap_close(capcas); if (cappwdloc == NULL || capgrploc == NULL) { if (cappwdloc == NULL) warn("unable to open system.pwd service"); if (capgrploc == NULL) warn("unable to open system.grp service"); exit(1); } /* Limit system.pwd to only getpwuid() function and pw_name field. 
*/ cmds[0] = "getpwuid"; if (cap_pwd_limit_cmds(cappwdloc, cmds, 1) < 0) err(1, "unable to limit system.pwd service"); fields[0] = "pw_name"; if (cap_pwd_limit_fields(cappwdloc, fields, 1) < 0) err(1, "unable to limit system.pwd service"); /* Limit system.grp to only getgrgid() function and gr_name field. */ cmds[0] = "getgrgid"; if (cap_grp_limit_cmds(capgrploc, cmds, 1) < 0) err(1, "unable to limit system.grp service"); fields[0] = "gr_name"; if (cap_grp_limit_fields(capgrploc, fields, 1) < 0) err(1, "unable to limit system.grp service"); *cappwdp = cappwdloc; *capgrpp = capgrploc; return (0); } #endif /* WITH_CASPER */ static void print_integer_arg(const char *(*decoder)(int), int value) { const char *str; str = decoder(value); if (str != NULL) printf("%s", str); else { if (decimal) printf("", value); else printf("", value); } } /* Like print_integer_arg but unknown values are treated as valid. */ static void print_integer_arg_valid(const char *(*decoder)(int), int value) { const char *str; str = decoder(value); if (str != NULL) printf("%s", str); else { if (decimal) printf("%d", value); else printf("%#x", value); } } static void print_mask_arg(bool (*decoder)(FILE *, int, int *), int value) { bool invalid; int rem; printf("%#x<", value); invalid = !decoder(stdout, value, &rem); printf(">"); if (invalid) printf("%u", rem); } static void print_mask_arg0(bool (*decoder)(FILE *, int, int *), int value) { bool invalid; int rem; if (value == 0) { printf("0"); return; } printf("%#x<", value); invalid = !decoder(stdout, value, &rem); printf(">"); if (invalid) printf("%u", rem); } static void decode_fileflags(fflags_t value) { bool invalid; fflags_t rem; if (value == 0) { printf("0"); return; } printf("%#x<", value); invalid = !sysdecode_fileflags(stdout, value, &rem); printf(">"); if (invalid) printf("%u", rem); } static void decode_filemode(int value) { bool invalid; int rem; if (value == 0) { printf("0"); return; } printf("%#o<", value); invalid = !sysdecode_filemode(stdout, value, &rem); printf(">"); if (invalid) printf("%u", rem); } static void print_mask_arg32(bool (*decoder)(FILE *, uint32_t, uint32_t *), uint32_t value) { bool invalid; uint32_t rem; printf("%#x<", value); invalid = !decoder(stdout, value, &rem); printf(">"); if (invalid) printf("%u", rem); } static void print_mask_argul(bool (*decoder)(FILE *, u_long, u_long *), u_long value) { bool invalid; u_long rem; if (value == 0) { printf("0"); return; } printf("%#lx<", value); invalid = !decoder(stdout, value, &rem); printf(">"); if (invalid) printf("%lu", rem); } int main(int argc, char *argv[]) { int ch, ktrlen, size; void *m; int trpoints = ALL_POINTS; int drop_logged; pid_t pid = 0; u_int sv_flags; setlocale(LC_CTYPE, ""); timestamp = TIMESTAMP_NONE; while ((ch = getopt(argc,argv,"f:dElm:np:AHRrSsTt:")) != -1) switch (ch) { case 'A': abiflag = true; break; case 'f': tracefile = optarg; break; case 'd': decimal = true; break; case 'l': tail = true; break; case 'm': maxdata = atoi(optarg); break; case 'n': fancy = false; break; case 'p': pid = atoi(optarg); break; case 'r': resolv = true; break; case 'S': syscallno = true; break; case 's': suppressdata = true; break; case 'E': timestamp |= TIMESTAMP_ELAPSED; break; case 'H': threads = true; break; case 'R': timestamp |= TIMESTAMP_RELATIVE; break; case 'T': timestamp |= TIMESTAMP_ABSOLUTE; break; case 't': trpoints = getpoints(optarg); if (trpoints < 0) errx(1, "unknown trace point in %s", optarg); break; default: usage(); } if (argc > optind) usage(); m = malloc(size = 1025); 
if (m == NULL) errx(1, "%s", strerror(ENOMEM)); if (strcmp(tracefile, "-") != 0) if (!freopen(tracefile, "r", stdin)) err(1, "%s", tracefile); caph_cache_catpages(); caph_cache_tzdata(); #ifdef WITH_CASPER if (resolv) { if (cappwdgrp_setup(&cappwd, &capgrp) < 0) { cappwd = NULL; capgrp = NULL; } } if (!resolv || (cappwd != NULL && capgrp != NULL)) { if (caph_enter() < 0) err(1, "unable to enter capability mode"); } #else if (!resolv) { if (caph_enter() < 0) err(1, "unable to enter capability mode"); } #endif if (caph_limit_stdio() == -1) err(1, "unable to limit stdio"); TAILQ_INIT(&trace_procs); drop_logged = 0; while (fread_tail(&ktr_header, sizeof(struct ktr_header), 1)) { if (ktr_header.ktr_type & KTR_DROP) { ktr_header.ktr_type &= ~KTR_DROP; if (!drop_logged && threads) { printf( "%6jd %6jd %-8.*s Events dropped.\n", (intmax_t)ktr_header.ktr_pid, ktr_header.ktr_tid > 0 ? (intmax_t)ktr_header.ktr_tid : 0, MAXCOMLEN, ktr_header.ktr_comm); drop_logged = 1; } else if (!drop_logged) { printf("%6jd %-8.*s Events dropped.\n", (intmax_t)ktr_header.ktr_pid, MAXCOMLEN, ktr_header.ktr_comm); drop_logged = 1; } } if ((ktrlen = ktr_header.ktr_len) < 0) errx(1, "bogus length 0x%x", ktrlen); if (ktrlen > size) { m = realloc(m, ktrlen+1); if (m == NULL) errx(1, "%s", strerror(ENOMEM)); size = ktrlen; } if (ktrlen && fread_tail(m, ktrlen, 1) == 0) errx(1, "data too short"); if (fetchprocinfo(&ktr_header, (u_int *)m) != 0) continue; if (pid && ktr_header.ktr_pid != pid && ktr_header.ktr_tid != pid) continue; if ((trpoints & (1<ktr_type) { case KTR_PROCCTOR: TAILQ_FOREACH(pi, &trace_procs, info) { if (pi->pid == kth->ktr_pid) { TAILQ_REMOVE(&trace_procs, pi, info); break; } } pi = malloc(sizeof(struct proc_info)); if (pi == NULL) errx(1, "%s", strerror(ENOMEM)); pi->sv_flags = *flags; pi->pid = kth->ktr_pid; TAILQ_INSERT_TAIL(&trace_procs, pi, info); return (1); case KTR_PROCDTOR: TAILQ_FOREACH(pi, &trace_procs, info) { if (pi->pid == kth->ktr_pid) { TAILQ_REMOVE(&trace_procs, pi, info); free(pi); break; } } return (1); } return (0); } u_int findabi(struct ktr_header *kth) { struct proc_info *pi; TAILQ_FOREACH(pi, &trace_procs, info) { if (pi->pid == kth->ktr_pid) { return (pi->sv_flags); } } return (0); } void dumpheader(struct ktr_header *kth, u_int sv_flags) { static char unknown[64]; static struct timeval prevtime, prevtime_e; struct timeval temp; const char *abi; const char *arch; const char *type; const char *sign; switch (kth->ktr_type) { case KTR_SYSCALL: type = "CALL"; break; case KTR_SYSRET: type = "RET "; break; case KTR_NAMEI: type = "NAMI"; break; case KTR_GENIO: type = "GIO "; break; case KTR_PSIG: type = "PSIG"; break; case KTR_CSW: type = "CSW "; break; case KTR_USER: type = "USER"; break; case KTR_STRUCT: case KTR_STRUCT_ARRAY: type = "STRU"; break; case KTR_SYSCTL: type = "SCTL"; break; case KTR_CAPFAIL: type = "CAP "; break; case KTR_FAULT: type = "PFLT"; break; case KTR_FAULTEND: type = "PRET"; break; default: sprintf(unknown, "UNKNOWN(%d)", kth->ktr_type); type = unknown; } /* * The ktr_tid field was previously the ktr_buffer field, which held * the kernel pointer value for the buffer associated with data * following the record header. It now holds a threadid, but only * for trace files after the change. Older trace files still contain * kernel pointers. Detect this and suppress the results by printing * negative tid's as 0. */ if (threads) printf("%6jd %6jd %-8.*s ", (intmax_t)kth->ktr_pid, kth->ktr_tid > 0 ? 
(intmax_t)kth->ktr_tid : 0, MAXCOMLEN, kth->ktr_comm); else printf("%6jd %-8.*s ", (intmax_t)kth->ktr_pid, MAXCOMLEN, kth->ktr_comm); if (timestamp) { if (timestamp & TIMESTAMP_ABSOLUTE) { printf("%jd.%06ld ", (intmax_t)kth->ktr_time.tv_sec, kth->ktr_time.tv_usec); } if (timestamp & TIMESTAMP_ELAPSED) { if (prevtime_e.tv_sec == 0) prevtime_e = kth->ktr_time; timersub(&kth->ktr_time, &prevtime_e, &temp); printf("%jd.%06ld ", (intmax_t)temp.tv_sec, temp.tv_usec); } if (timestamp & TIMESTAMP_RELATIVE) { if (prevtime.tv_sec == 0) prevtime = kth->ktr_time; if (timercmp(&kth->ktr_time, &prevtime, <)) { timersub(&prevtime, &kth->ktr_time, &temp); sign = "-"; } else { timersub(&kth->ktr_time, &prevtime, &temp); sign = ""; } prevtime = kth->ktr_time; printf("%s%jd.%06ld ", sign, (intmax_t)temp.tv_sec, temp.tv_usec); } } printf("%s ", type); if (abiflag != 0) { switch (sv_flags & SV_ABI_MASK) { case SV_ABI_LINUX: abi = "L"; break; case SV_ABI_FREEBSD: abi = "F"; break; case SV_ABI_CLOUDABI: abi = "C"; break; default: abi = "U"; break; } if ((sv_flags & SV_LP64) != 0) arch = "64"; else if ((sv_flags & SV_ILP32) != 0) arch = "32"; else arch = "00"; printf("%s%s ", abi, arch); } } #include static void ioctlname(unsigned long val) { const char *str; str = sysdecode_ioctlname(val); if (str != NULL) printf("%s", str); else if (decimal) printf("%lu", val); else printf("%#lx", val); } static enum sysdecode_abi syscallabi(u_int sv_flags) { if (sv_flags == 0) return (SYSDECODE_ABI_FREEBSD); switch (sv_flags & SV_ABI_MASK) { case SV_ABI_FREEBSD: return (SYSDECODE_ABI_FREEBSD); case SV_ABI_LINUX: #ifdef __LP64__ if (sv_flags & SV_ILP32) return (SYSDECODE_ABI_LINUX32); #endif return (SYSDECODE_ABI_LINUX); case SV_ABI_CLOUDABI: return (SYSDECODE_ABI_CLOUDABI64); default: return (SYSDECODE_ABI_UNKNOWN); } } static void syscallname(u_int code, u_int sv_flags) { const char *name; name = sysdecode_syscallname(syscallabi(sv_flags), code); if (name == NULL) printf("[%d]", code); else { printf("%s", name); if (syscallno) printf("[%d]", code); } } static void print_signal(int signo) { const char *signame; signame = sysdecode_signal(signo); if (signame != NULL) printf("%s", signame); else printf("SIG %d", signo); } void ktrsyscall(struct ktr_syscall *ktr, u_int sv_flags) { int narg = ktr->ktr_narg; register_t *ip, *first; intmax_t arg; int quad_align, quad_slots; syscallname(ktr->ktr_code, sv_flags); ip = first = &ktr->ktr_args[0]; if (narg) { char c = '('; if (fancy && (sv_flags == 0 || (sv_flags & SV_ABI_MASK) == SV_ABI_FREEBSD)) { quad_align = 0; if (sv_flags & SV_ILP32) { #ifdef __powerpc__ quad_align = 1; #endif quad_slots = 2; } else quad_slots = 1; switch (ktr->ktr_code) { case SYS_bindat: case SYS_chflagsat: case SYS_connectat: case SYS_faccessat: case SYS_fchmodat: case SYS_fchownat: case SYS_fstatat: case SYS_futimesat: case SYS_linkat: case SYS_mkdirat: case SYS_mkfifoat: case SYS_mknodat: case SYS_openat: case SYS_readlinkat: case SYS_renameat: case SYS_unlinkat: case SYS_utimensat: putchar('('); print_integer_arg_valid(sysdecode_atfd, *ip); c = ','; ip++; narg--; break; } switch (ktr->ktr_code) { case SYS_ioctl: { print_number(ip, narg, c); putchar(c); ioctlname(*ip); c = ','; ip++; narg--; break; } case SYS_ptrace: putchar('('); print_integer_arg(sysdecode_ptrace_request, *ip); c = ','; ip++; narg--; break; case SYS_access: case SYS_eaccess: case SYS_faccessat: print_number(ip, narg, c); putchar(','); print_mask_arg(sysdecode_access_mode, *ip); ip++; narg--; break; case SYS_open: case SYS_openat: 
print_number(ip, narg, c); putchar(','); print_mask_arg(sysdecode_open_flags, ip[0]); if ((ip[0] & O_CREAT) == O_CREAT) { putchar(','); decode_filemode(ip[1]); } ip += 2; narg -= 2; break; case SYS_wait4: print_number(ip, narg, c); print_number(ip, narg, c); putchar(','); print_mask_arg0(sysdecode_wait4_options, *ip); ip++; narg--; break; case SYS_wait6: putchar('('); print_integer_arg(sysdecode_idtype, *ip); c = ','; ip++; narg--; print_number64(first, ip, narg, c); print_number(ip, narg, c); putchar(','); print_mask_arg(sysdecode_wait6_options, *ip); ip++; narg--; break; case SYS_chmod: case SYS_fchmod: case SYS_lchmod: case SYS_fchmodat: print_number(ip, narg, c); putchar(','); decode_filemode(*ip); ip++; narg--; break; case SYS_mknodat: print_number(ip, narg, c); putchar(','); decode_filemode(*ip); ip++; narg--; break; case SYS_getfsstat: print_number(ip, narg, c); print_number(ip, narg, c); putchar(','); print_integer_arg(sysdecode_getfsstat_mode, *ip); ip++; narg--; break; case SYS_mount: print_number(ip, narg, c); print_number(ip, narg, c); putchar(','); print_mask_arg(sysdecode_mount_flags, *ip); ip++; narg--; break; case SYS_unmount: print_number(ip, narg, c); putchar(','); print_mask_arg(sysdecode_mount_flags, *ip); ip++; narg--; break; case SYS_recvmsg: case SYS_sendmsg: print_number(ip, narg, c); print_number(ip, narg, c); putchar(','); print_mask_arg0(sysdecode_msg_flags, *ip); ip++; narg--; break; case SYS_recvfrom: case SYS_sendto: print_number(ip, narg, c); print_number(ip, narg, c); print_number(ip, narg, c); putchar(','); print_mask_arg0(sysdecode_msg_flags, *ip); ip++; narg--; break; case SYS_chflags: case SYS_chflagsat: case SYS_fchflags: case SYS_lchflags: print_number(ip, narg, c); putchar(','); decode_fileflags(*ip); ip++; narg--; break; case SYS_kill: print_number(ip, narg, c); putchar(','); print_signal(*ip); ip++; narg--; break; case SYS_reboot: putchar('('); print_mask_arg(sysdecode_reboot_howto, *ip); ip++; narg--; break; case SYS_umask: putchar('('); decode_filemode(*ip); ip++; narg--; break; case SYS_msync: print_number(ip, narg, c); print_number(ip, narg, c); putchar(','); print_mask_arg(sysdecode_msync_flags, *ip); ip++; narg--; break; #ifdef SYS_freebsd6_mmap case SYS_freebsd6_mmap: print_number(ip, narg, c); print_number(ip, narg, c); putchar(','); print_mask_arg(sysdecode_mmap_prot, *ip); putchar(','); ip++; narg--; print_mask_arg(sysdecode_mmap_flags, *ip); ip++; narg--; break; #endif case SYS_mmap: print_number(ip, narg, c); print_number(ip, narg, c); putchar(','); print_mask_arg(sysdecode_mmap_prot, *ip); putchar(','); ip++; narg--; print_mask_arg(sysdecode_mmap_flags, *ip); ip++; narg--; break; case SYS_mprotect: print_number(ip, narg, c); print_number(ip, narg, c); putchar(','); print_mask_arg(sysdecode_mmap_prot, *ip); ip++; narg--; break; case SYS_madvise: print_number(ip, narg, c); print_number(ip, narg, c); putchar(','); print_integer_arg(sysdecode_madvice, *ip); ip++; narg--; break; case SYS_pathconf: case SYS_lpathconf: case SYS_fpathconf: print_number(ip, narg, c); putchar(','); print_integer_arg(sysdecode_pathconf_name, *ip); ip++; narg--; break; case SYS_getpriority: case SYS_setpriority: putchar('('); print_integer_arg(sysdecode_prio_which, *ip); c = ','; ip++; narg--; break; case SYS_fcntl: print_number(ip, narg, c); putchar(','); print_integer_arg(sysdecode_fcntl_cmd, ip[0]); if (sysdecode_fcntl_arg_p(ip[0])) { putchar(','); if (ip[0] == F_SETFL) print_mask_arg( sysdecode_fcntl_fileflags, ip[1]); else sysdecode_fcntl_arg(stdout, ip[0], 
ip[1], decimal ? 10 : 16); } ip += 2; narg -= 2; break; case SYS_socket: { int sockdomain; putchar('('); sockdomain = *ip; print_integer_arg(sysdecode_socketdomain, sockdomain); ip++; narg--; putchar(','); print_mask_arg(sysdecode_socket_type, *ip); ip++; narg--; if (sockdomain == PF_INET || sockdomain == PF_INET6) { putchar(','); print_integer_arg(sysdecode_ipproto, *ip); ip++; narg--; } c = ','; break; } case SYS_setsockopt: case SYS_getsockopt: { const char *str; print_number(ip, narg, c); putchar(','); print_integer_arg_valid(sysdecode_sockopt_level, *ip); str = sysdecode_sockopt_name(ip[0], ip[1]); if (str != NULL) { printf(",%s", str); ip++; narg--; } ip++; narg--; break; } #ifdef SYS_freebsd6_lseek case SYS_freebsd6_lseek: print_number(ip, narg, c); /* Hidden 'pad' argument, not in lseek(2) */ print_number(ip, narg, c); print_number64(first, ip, narg, c); putchar(','); print_integer_arg(sysdecode_whence, *ip); ip++; narg--; break; #endif case SYS_lseek: print_number(ip, narg, c); print_number64(first, ip, narg, c); putchar(','); print_integer_arg(sysdecode_whence, *ip); ip++; narg--; break; case SYS_flock: print_number(ip, narg, c); putchar(','); print_mask_arg(sysdecode_flock_operation, *ip); ip++; narg--; break; case SYS_mkfifo: case SYS_mkfifoat: case SYS_mkdir: case SYS_mkdirat: print_number(ip, narg, c); putchar(','); decode_filemode(*ip); ip++; narg--; break; case SYS_shutdown: print_number(ip, narg, c); putchar(','); print_integer_arg(sysdecode_shutdown_how, *ip); ip++; narg--; break; case SYS_socketpair: putchar('('); print_integer_arg(sysdecode_socketdomain, *ip); ip++; narg--; putchar(','); print_mask_arg(sysdecode_socket_type, *ip); ip++; narg--; c = ','; break; case SYS_getrlimit: case SYS_setrlimit: putchar('('); print_integer_arg(sysdecode_rlimit, *ip); ip++; narg--; c = ','; break; case SYS_getrusage: putchar('('); print_integer_arg(sysdecode_getrusage_who, *ip); ip++; narg--; c = ','; break; case SYS_quotactl: print_number(ip, narg, c); putchar(','); if (!sysdecode_quotactl_cmd(stdout, *ip)) { if (decimal) printf("", (int)*ip); else printf("", (int)*ip); } ip++; narg--; c = ','; break; case SYS_nfssvc: putchar('('); print_integer_arg(sysdecode_nfssvc_flags, *ip); ip++; narg--; c = ','; break; case SYS_rtprio: case SYS_rtprio_thread: putchar('('); print_integer_arg(sysdecode_rtprio_function, *ip); ip++; narg--; c = ','; break; case SYS___semctl: print_number(ip, narg, c); print_number(ip, narg, c); putchar(','); print_integer_arg(sysdecode_semctl_cmd, *ip); ip++; narg--; break; case SYS_semget: print_number(ip, narg, c); print_number(ip, narg, c); putchar(','); print_mask_arg(sysdecode_semget_flags, *ip); ip++; narg--; break; case SYS_msgctl: print_number(ip, narg, c); putchar(','); print_integer_arg(sysdecode_msgctl_cmd, *ip); ip++; narg--; break; case SYS_shmat: print_number(ip, narg, c); print_number(ip, narg, c); putchar(','); print_mask_arg(sysdecode_shmat_flags, *ip); ip++; narg--; break; case SYS_shmctl: print_number(ip, narg, c); putchar(','); print_integer_arg(sysdecode_shmctl_cmd, *ip); ip++; narg--; break; case SYS_shm_open: print_number(ip, narg, c); putchar(','); print_mask_arg(sysdecode_open_flags, ip[0]); putchar(','); decode_filemode(ip[1]); ip += 2; narg -= 2; break; case SYS_minherit: print_number(ip, narg, c); print_number(ip, narg, c); putchar(','); print_integer_arg(sysdecode_minherit_inherit, *ip); ip++; narg--; break; case SYS_rfork: putchar('('); print_mask_arg(sysdecode_rfork_flags, *ip); ip++; narg--; c = ','; break; case SYS_lio_listio: 
putchar('('); print_integer_arg(sysdecode_lio_listio_mode, *ip); ip++; narg--; c = ','; break; case SYS_mlockall: putchar('('); print_mask_arg(sysdecode_mlockall_flags, *ip); ip++; narg--; break; case SYS_sched_setscheduler: print_number(ip, narg, c); putchar(','); print_integer_arg(sysdecode_scheduler_policy, *ip); ip++; narg--; break; case SYS_sched_get_priority_max: case SYS_sched_get_priority_min: putchar('('); print_integer_arg(sysdecode_scheduler_policy, *ip); ip++; narg--; break; case SYS_sendfile: print_number(ip, narg, c); print_number(ip, narg, c); print_number(ip, narg, c); print_number(ip, narg, c); print_number(ip, narg, c); print_number(ip, narg, c); putchar(','); print_mask_arg(sysdecode_sendfile_flags, *ip); ip++; narg--; break; case SYS_kldsym: print_number(ip, narg, c); putchar(','); print_integer_arg(sysdecode_kldsym_cmd, *ip); ip++; narg--; break; case SYS_sigprocmask: putchar('('); print_integer_arg(sysdecode_sigprocmask_how, *ip); ip++; narg--; c = ','; break; case SYS___acl_get_file: case SYS___acl_set_file: case SYS___acl_get_fd: case SYS___acl_set_fd: case SYS___acl_delete_file: case SYS___acl_delete_fd: case SYS___acl_aclcheck_file: case SYS___acl_aclcheck_fd: case SYS___acl_get_link: case SYS___acl_set_link: case SYS___acl_delete_link: case SYS___acl_aclcheck_link: print_number(ip, narg, c); putchar(','); print_integer_arg(sysdecode_acltype, *ip); ip++; narg--; break; case SYS_sigaction: putchar('('); print_signal(*ip); ip++; narg--; c = ','; break; case SYS_extattrctl: print_number(ip, narg, c); putchar(','); print_integer_arg(sysdecode_extattrnamespace, *ip); ip++; narg--; break; case SYS_nmount: print_number(ip, narg, c); print_number(ip, narg, c); putchar(','); print_mask_arg(sysdecode_mount_flags, *ip); ip++; narg--; break; case SYS_thr_create: print_number(ip, narg, c); print_number(ip, narg, c); putchar(','); print_mask_arg(sysdecode_thr_create_flags, *ip); ip++; narg--; break; case SYS_thr_kill: print_number(ip, narg, c); putchar(','); print_signal(*ip); ip++; narg--; break; case SYS_kldunloadf: print_number(ip, narg, c); putchar(','); print_integer_arg(sysdecode_kldunload_flags, *ip); ip++; narg--; break; case SYS_linkat: case SYS_renameat: case SYS_symlinkat: print_number(ip, narg, c); putchar(','); print_integer_arg_valid(sysdecode_atfd, *ip); ip++; narg--; print_number(ip, narg, c); break; case SYS_cap_fcntls_limit: print_number(ip, narg, c); putchar(','); arg = *ip; ip++; narg--; print_mask_arg32(sysdecode_cap_fcntlrights, arg); break; case SYS_posix_fadvise: print_number(ip, narg, c); print_number(ip, narg, c); print_number(ip, narg, c); (void)putchar(','); print_integer_arg(sysdecode_fadvice, *ip); ip++; narg--; break; case SYS_procctl: putchar('('); print_integer_arg(sysdecode_idtype, *ip); c = ','; ip++; narg--; print_number64(first, ip, narg, c); putchar(','); print_integer_arg(sysdecode_procctl_cmd, *ip); ip++; narg--; break; case SYS__umtx_op: print_number(ip, narg, c); putchar(','); print_integer_arg(sysdecode_umtx_op, *ip); switch (*ip) { case UMTX_OP_CV_WAIT: ip++; narg--; putchar(','); print_mask_argul( sysdecode_umtx_cvwait_flags, *ip); break; case UMTX_OP_RW_RDLOCK: ip++; narg--; putchar(','); print_mask_argul( sysdecode_umtx_rwlock_flags, *ip); break; } ip++; narg--; break; case SYS_ftruncate: case SYS_truncate: print_number(ip, narg, c); print_number64(first, ip, narg, c); break; case SYS_fchownat: print_number(ip, narg, c); print_number(ip, narg, c); print_number(ip, narg, c); break; case SYS_fstatat: case SYS_utimensat: 
print_number(ip, narg, c); print_number(ip, narg, c); break; case SYS_unlinkat: print_number(ip, narg, c); break; case SYS_sysarch: putchar('('); print_integer_arg(sysdecode_sysarch_number, *ip); ip++; narg--; c = ','; break; } switch (ktr->ktr_code) { case SYS_chflagsat: case SYS_fchownat: case SYS_faccessat: case SYS_fchmodat: case SYS_fstatat: case SYS_linkat: case SYS_unlinkat: case SYS_utimensat: putchar(','); print_mask_arg0(sysdecode_atflags, *ip); ip++; narg--; break; } } while (narg > 0) { print_number(ip, narg, c); } putchar(')'); } putchar('\n'); } void ktrsysret(struct ktr_sysret *ktr, u_int sv_flags) { register_t ret = ktr->ktr_retval; int error = ktr->ktr_error; syscallname(ktr->ktr_code, sv_flags); printf(" "); if (error == 0) { if (fancy) { printf("%ld", (long)ret); if (ret < 0 || ret > 9) printf("/%#lx", (unsigned long)ret); } else { if (decimal) printf("%ld", (long)ret); else printf("%#lx", (unsigned long)ret); } } else if (error == ERESTART) printf("RESTART"); else if (error == EJUSTRETURN) printf("JUSTRETURN"); else { printf("-1 errno %d", sysdecode_freebsd_to_abi_errno( syscallabi(sv_flags), error)); if (fancy) printf(" %s", strerror(ktr->ktr_error)); } putchar('\n'); } void ktrnamei(char *cp, int len) { printf("\"%.*s\"\n", len, cp); } void hexdump(char *p, int len, int screenwidth) { int n, i; int width; width = 0; do { width += 2; i = 13; /* base offset */ i += (width / 2) + 1; /* spaces every second byte */ i += (width * 2); /* width of bytes */ i += 3; /* " |" */ i += width; /* each byte */ i += 1; /* "|" */ } while (i < screenwidth); width -= 2; for (n = 0; n < len; n += width) { for (i = n; i < n + width; i++) { if ((i % width) == 0) { /* beginning of line */ printf(" 0x%04x", i); } if ((i % 2) == 0) { printf(" "); } if (i < len) printf("%02x", p[i] & 0xff); else printf(" "); } printf(" |"); for (i = n; i < n + width; i++) { if (i >= len) break; if (p[i] >= ' ' && p[i] <= '~') printf("%c", p[i]); else printf("."); } printf("|\n"); } if ((i % width) != 0) printf("\n"); } void visdump(char *dp, int datalen, int screenwidth) { int col = 0; char *cp; int width; char visbuf[5]; printf(" \""); col = 8; for (;datalen > 0; datalen--, dp++) { vis(visbuf, *dp, VIS_CSTYLE, *(dp+1)); cp = visbuf; /* * Keep track of printables and * space chars (like fold(1)). */ if (col == 0) { putchar('\t'); col = 8; } switch(*cp) { case '\n': col = 0; putchar('\n'); continue; case '\t': width = 8 - (col&07); break; default: width = strlen(cp); } if (col + width > (screenwidth-2)) { printf("\\\n\t"); col = 8; } col += width; do { putchar(*cp++); } while (*cp); } if (col == 0) printf(" "); printf("\"\n"); } void ktrgenio(struct ktr_genio *ktr, int len) { int datalen = len - sizeof (struct ktr_genio); char *dp = (char *)ktr + sizeof (struct ktr_genio); static int screenwidth = 0; int i, binary; printf("fd %d %s %d byte%s\n", ktr->ktr_fd, ktr->ktr_rw == UIO_READ ? "read" : "wrote", datalen, datalen == 1 ? 
"" : "s"); if (suppressdata) return; if (screenwidth == 0) { struct winsize ws; if (fancy && ioctl(fileno(stderr), TIOCGWINSZ, &ws) != -1 && ws.ws_col > 8) screenwidth = ws.ws_col; else screenwidth = 80; } if (maxdata && datalen > maxdata) datalen = maxdata; for (i = 0, binary = 0; i < datalen && binary == 0; i++) { if (dp[i] >= 32 && dp[i] < 127) continue; if (dp[i] == 10 || dp[i] == 13 || dp[i] == 0 || dp[i] == 9) continue; binary = 1; } if (binary) hexdump(dp, datalen, screenwidth); else visdump(dp, datalen, screenwidth); } void ktrpsig(struct ktr_psig *psig) { const char *str; print_signal(psig->signo); if (psig->action == SIG_DFL) { printf(" SIG_DFL"); } else { printf(" caught handler=0x%lx mask=0x%x", (u_long)psig->action, psig->mask.__bits[0]); } printf(" code="); str = sysdecode_sigcode(psig->signo, psig->code); if (str != NULL) printf("%s", str); else printf("", psig->code); putchar('\n'); } void ktrcsw_old(struct ktr_csw_old *cs) { printf("%s %s\n", cs->out ? "stop" : "resume", cs->user ? "user" : "kernel"); } void ktrcsw(struct ktr_csw *cs) { printf("%s %s \"%s\"\n", cs->out ? "stop" : "resume", cs->user ? "user" : "kernel", cs->wmesg); } void ktruser(int len, void *p) { unsigned char *cp; if (sysdecode_utrace(stdout, p, len)) { printf("\n"); return; } printf("%d ", len); cp = p; while (len--) if (decimal) printf(" %d", *cp++); else printf(" %02x", *cp++); printf("\n"); } void ktrcaprights(cap_rights_t *rightsp) { printf("cap_rights_t "); sysdecode_cap_rights(stdout, rightsp); printf("\n"); } static void ktrtimeval(struct timeval *tv) { printf("{%ld, %ld}", (long)tv->tv_sec, tv->tv_usec); } void ktritimerval(struct itimerval *it) { printf("itimerval { .interval = "); ktrtimeval(&it->it_interval); printf(", .value = "); ktrtimeval(&it->it_value); printf(" }\n"); } void ktrsockaddr(struct sockaddr *sa) { /* TODO: Support additional address families #include struct sockaddr_nb *nb; */ const char *str; char addr[64]; /* * note: ktrstruct() has already verified that sa points to a * buffer at least sizeof(struct sockaddr) bytes long and exactly * sa->sa_len bytes long. */ printf("struct sockaddr { "); str = sysdecode_sockaddr_family(sa->sa_family); if (str != NULL) printf("%s", str); else printf("", sa->sa_family); printf(", "); #define check_sockaddr_len(n) \ if (sa_##n.s##n##_len < sizeof(struct sockaddr_##n)) { \ printf("invalid"); \ break; \ } switch(sa->sa_family) { case AF_INET: { struct sockaddr_in sa_in; memset(&sa_in, 0, sizeof(sa_in)); memcpy(&sa_in, sa, sa->sa_len); check_sockaddr_len(in); inet_ntop(AF_INET, &sa_in.sin_addr, addr, sizeof addr); printf("%s:%u", addr, ntohs(sa_in.sin_port)); break; } case AF_INET6: { struct sockaddr_in6 sa_in6; memset(&sa_in6, 0, sizeof(sa_in6)); memcpy(&sa_in6, sa, sa->sa_len); check_sockaddr_len(in6); getnameinfo((struct sockaddr *)&sa_in6, sizeof(sa_in6), addr, sizeof(addr), NULL, 0, NI_NUMERICHOST); printf("[%s]:%u", addr, htons(sa_in6.sin6_port)); break; } case AF_UNIX: { struct sockaddr_un sa_un; memset(&sa_un, 0, sizeof(sa_un)); memcpy(&sa_un, sa, sa->sa_len); printf("%.*s", (int)sizeof(sa_un.sun_path), sa_un.sun_path); break; } default: printf("unknown address family"); } printf(" }\n"); } void ktrstat(struct stat *statp) { char mode[12], timestr[PATH_MAX + 4]; struct passwd *pwd; struct group *grp; struct tm *tm; /* * note: ktrstruct() has already verified that statp points to a * buffer exactly sizeof(struct stat) bytes long. 
*/ printf("struct stat {"); printf("dev=%ju, ino=%ju, ", (uintmax_t)statp->st_dev, (uintmax_t)statp->st_ino); if (!resolv) printf("mode=0%jo, ", (uintmax_t)statp->st_mode); else { strmode(statp->st_mode, mode); printf("mode=%s, ", mode); } printf("nlink=%ju, ", (uintmax_t)statp->st_nlink); if (!resolv) { pwd = NULL; } else { #ifdef WITH_CASPER if (cappwd != NULL) pwd = cap_getpwuid(cappwd, statp->st_uid); else #endif pwd = getpwuid(statp->st_uid); } if (pwd == NULL) printf("uid=%ju, ", (uintmax_t)statp->st_uid); else printf("uid=\"%s\", ", pwd->pw_name); if (!resolv) { grp = NULL; } else { #ifdef WITH_CASPER if (capgrp != NULL) grp = cap_getgrgid(capgrp, statp->st_gid); else #endif grp = getgrgid(statp->st_gid); } if (grp == NULL) printf("gid=%ju, ", (uintmax_t)statp->st_gid); else printf("gid=\"%s\", ", grp->gr_name); printf("rdev=%ju, ", (uintmax_t)statp->st_rdev); printf("atime="); if (!resolv) printf("%jd", (intmax_t)statp->st_atim.tv_sec); else { tm = localtime(&statp->st_atim.tv_sec); strftime(timestr, sizeof(timestr), TIME_FORMAT, tm); printf("\"%s\"", timestr); } if (statp->st_atim.tv_nsec != 0) printf(".%09ld, ", statp->st_atim.tv_nsec); else printf(", "); printf("mtime="); if (!resolv) printf("%jd", (intmax_t)statp->st_mtim.tv_sec); else { tm = localtime(&statp->st_mtim.tv_sec); strftime(timestr, sizeof(timestr), TIME_FORMAT, tm); printf("\"%s\"", timestr); } if (statp->st_mtim.tv_nsec != 0) printf(".%09ld, ", statp->st_mtim.tv_nsec); else printf(", "); printf("ctime="); if (!resolv) printf("%jd", (intmax_t)statp->st_ctim.tv_sec); else { tm = localtime(&statp->st_ctim.tv_sec); strftime(timestr, sizeof(timestr), TIME_FORMAT, tm); printf("\"%s\"", timestr); } if (statp->st_ctim.tv_nsec != 0) printf(".%09ld, ", statp->st_ctim.tv_nsec); else printf(", "); printf("birthtime="); if (!resolv) printf("%jd", (intmax_t)statp->st_birthtim.tv_sec); else { tm = localtime(&statp->st_birthtim.tv_sec); strftime(timestr, sizeof(timestr), TIME_FORMAT, tm); printf("\"%s\"", timestr); } if (statp->st_birthtim.tv_nsec != 0) printf(".%09ld, ", statp->st_birthtim.tv_nsec); else printf(", "); printf("size=%jd, blksize=%ju, blocks=%jd, flags=0x%x", (uintmax_t)statp->st_size, (uintmax_t)statp->st_blksize, (intmax_t)statp->st_blocks, statp->st_flags); printf(" }\n"); } void ktrstruct(char *buf, size_t buflen) { char *name, *data; size_t namelen, datalen; int i; cap_rights_t rights; struct itimerval it; struct stat sb; struct sockaddr_storage ss; for (name = buf, namelen = 0; namelen < buflen && name[namelen] != '\0'; ++namelen) /* nothing */; if (namelen == buflen) goto invalid; if (name[namelen] != '\0') goto invalid; data = buf + namelen + 1; datalen = buflen - namelen - 1; if (datalen == 0) goto invalid; /* sanity check */ for (i = 0; i < (int)namelen; ++i) if (!isalpha(name[i])) goto invalid; if (strcmp(name, "caprights") == 0) { if (datalen != sizeof(cap_rights_t)) goto invalid; memcpy(&rights, data, datalen); ktrcaprights(&rights); } else if (strcmp(name, "itimerval") == 0) { if (datalen != sizeof(struct itimerval)) goto invalid; memcpy(&it, data, datalen); ktritimerval(&it); } else if (strcmp(name, "stat") == 0) { if (datalen != sizeof(struct stat)) goto invalid; memcpy(&sb, data, datalen); ktrstat(&sb); } else if (strcmp(name, "sockaddr") == 0) { if (datalen > sizeof(ss)) goto invalid; memcpy(&ss, data, datalen); if (datalen != ss.ss_len) goto invalid; ktrsockaddr((struct sockaddr *)&ss); } else { printf("unknown structure\n"); } return; invalid: printf("invalid record\n"); } void ktrcapfail(struct 
ktr_cap_fail *ktr) { switch (ktr->cap_type) { case CAPFAIL_NOTCAPABLE: /* operation on fd with insufficient capabilities */ printf("operation requires "); sysdecode_cap_rights(stdout, &ktr->cap_needed); printf(", descriptor holds "); sysdecode_cap_rights(stdout, &ktr->cap_held); break; case CAPFAIL_INCREASE: /* requested more capabilities than fd already has */ printf("attempt to increase capabilities from "); sysdecode_cap_rights(stdout, &ktr->cap_held); printf(" to "); sysdecode_cap_rights(stdout, &ktr->cap_needed); break; case CAPFAIL_SYSCALL: /* called restricted syscall */ printf("disallowed system call"); break; case CAPFAIL_LOOKUP: - /* used ".." in strict-relative mode */ + /* absolute or AT_FDCWD path, ".." path, etc. */ printf("restricted VFS lookup"); break; default: printf("unknown capability failure: "); sysdecode_cap_rights(stdout, &ktr->cap_needed); printf(" "); sysdecode_cap_rights(stdout, &ktr->cap_held); break; } printf("\n"); }
void ktrfault(struct ktr_fault *ktr) { printf("0x%jx ", (uintmax_t)ktr->vaddr); print_mask_arg(sysdecode_vmprot, ktr->type); printf("\n"); }
void ktrfaultend(struct ktr_faultend *ktr) { const char *str; str = sysdecode_vmresult(ktr->result); if (str != NULL) printf("%s", str); else printf("<invalid=%d>", ktr->result); printf("\n"); }
void ktrkevent(struct kevent *kev) { printf("{ ident="); switch (kev->filter) { case EVFILT_READ: case EVFILT_WRITE: case EVFILT_VNODE: case EVFILT_PROC: case EVFILT_TIMER: case EVFILT_PROCDESC: case EVFILT_EMPTY: printf("%ju", (uintmax_t)kev->ident); break; case EVFILT_SIGNAL: print_signal(kev->ident); break; default: printf("%p", (void *)kev->ident); } printf(", filter="); print_integer_arg(sysdecode_kevent_filter, kev->filter); printf(", flags="); print_mask_arg0(sysdecode_kevent_flags, kev->flags); printf(", fflags="); sysdecode_kevent_fflags(stdout, kev->filter, kev->fflags, decimal ?
10 : 16); printf(", data=%#jx, udata=%p }", (uintmax_t)kev->data, kev->udata); } void ktrstructarray(struct ktr_struct_array *ksa, size_t buflen) { struct kevent kev; char *name, *data; size_t namelen, datalen; int i; bool first; buflen -= sizeof(*ksa); for (name = (char *)(ksa + 1), namelen = 0; namelen < buflen && name[namelen] != '\0'; ++namelen) /* nothing */; if (namelen == buflen) goto invalid; if (name[namelen] != '\0') goto invalid; /* sanity check */ for (i = 0; i < (int)namelen; ++i) if (!isalnum(name[i]) && name[i] != '_') goto invalid; data = name + namelen + 1; datalen = buflen - namelen - 1; printf("struct %s[] = { ", name); first = true; for (; datalen >= ksa->struct_size; data += ksa->struct_size, datalen -= ksa->struct_size) { if (!first) printf("\n "); else first = false; if (strcmp(name, "kevent") == 0) { if (ksa->struct_size != sizeof(kev)) goto bad_size; memcpy(&kev, data, sizeof(kev)); ktrkevent(&kev); } else if (strcmp(name, "kevent_freebsd11") == 0) { struct kevent_freebsd11 kev11; if (ksa->struct_size != sizeof(kev11)) goto bad_size; memcpy(&kev11, data, sizeof(kev11)); memset(&kev, 0, sizeof(kev)); kev.ident = kev11.ident; kev.filter = kev11.filter; kev.flags = kev11.flags; kev.fflags = kev11.fflags; kev.data = kev11.data; kev.udata = kev11.udata; ktrkevent(&kev); #ifdef _WANT_KEVENT32 } else if (strcmp(name, "kevent32") == 0) { struct kevent32 kev32; if (ksa->struct_size != sizeof(kev32)) goto bad_size; memcpy(&kev32, data, sizeof(kev32)); memset(&kev, 0, sizeof(kev)); kev.ident = kev32.ident; kev.filter = kev32.filter; kev.flags = kev32.flags; kev.fflags = kev32.fflags; #if BYTE_ORDER == BIG_ENDIAN kev.data = kev32.data2 | ((int64_t)kev32.data1 << 32); #else kev.data = kev32.data1 | ((int64_t)kev32.data2 << 32); #endif kev.udata = (void *)(uintptr_t)kev32.udata; ktrkevent(&kev); } else if (strcmp(name, "kevent32_freebsd11") == 0) { struct kevent32_freebsd11 kev32; if (ksa->struct_size != sizeof(kev32)) goto bad_size; memcpy(&kev32, data, sizeof(kev32)); memset(&kev, 0, sizeof(kev)); kev.ident = kev32.ident; kev.filter = kev32.filter; kev.flags = kev32.flags; kev.fflags = kev32.fflags; kev.data = kev32.data; kev.udata = (void *)(uintptr_t)kev32.udata; ktrkevent(&kev); #endif } else { printf(" }\n"); return; } } printf(" }\n"); return; invalid: printf("invalid record\n"); return; bad_size: printf(" }\n"); return; } void usage(void) { fprintf(stderr, "usage: kdump [-dEnlHRrSsTA] [-f trfile] " "[-m maxdata] [-p pid] [-t trstr]\n"); exit(1); } Index: projects/import-googletest-1.8.1/usr.bin/svn/Makefile.inc =================================================================== --- projects/import-googletest-1.8.1/usr.bin/svn/Makefile.inc (revision 344270) +++ projects/import-googletest-1.8.1/usr.bin/svn/Makefile.inc (revision 344271) @@ -1,60 +1,62 @@ # $FreeBSD$ .include +MK_PIE:= no # Explicit libXXX.a references + .if ${MK_SVN} == "yes" SVNLITE?= .else SVNLITE?= lite .endif PACKAGE= svn .if !defined(SVNDIR) SVNDIR= ${SRCTOP}/contrib/subversion/subversion APRU= ${SRCTOP}/contrib/apr-util APR= ${SRCTOP}/contrib/apr WARNS?= 0 # definitely not warns friendly .sinclude "${.CURDIR:H:H}/Makefile.inc" LIBAPRDIR= ${.OBJDIR:H}/lib/libapr LIBAPR_UTILDIR= ${.OBJDIR:H}/lib/libapr_util LIBSERFDIR= ${.OBJDIR:H}/lib/libserf LIBSVN_CLIENTDIR= ${.OBJDIR:H}/lib/libsvn_client LIBSVN_DELTADIR= ${.OBJDIR:H}/lib/libsvn_delta LIBSVN_DIFFDIR= ${.OBJDIR:H}/lib/libsvn_diff LIBSVN_FSDIR= ${.OBJDIR:H}/lib/libsvn_fs LIBSVN_FS_FSDIR= ${.OBJDIR:H}/lib/libsvn_fs_fs LIBSVN_FS_UTILDIR= 
${.OBJDIR:H}/lib/libsvn_fs_util LIBSVN_FS_XDIR= ${.OBJDIR:H}/lib/libsvn_fs_x LIBSVN_RADIR= ${.OBJDIR:H}/lib/libsvn_ra LIBSVN_RA_LOCALDIR= ${.OBJDIR:H}/lib/libsvn_ra_local LIBSVN_RA_SVNDIR= ${.OBJDIR:H}/lib/libsvn_ra_svn LIBSVN_RA_SERFDIR= ${.OBJDIR:H}/lib/libsvn_ra_serf LIBSVN_REPOSDIR= ${.OBJDIR:H}/lib/libsvn_repos LIBSVN_SUBRDIR= ${.OBJDIR:H}/lib/libsvn_subr LIBSVN_WCDIR= ${.OBJDIR:H}/lib/libsvn_wc LIBAPR= ${LIBAPRDIR}/libapr.a LIBAPR_UTIL= ${LIBAPR_UTILDIR}/libapr-util.a LIBSERF= ${LIBSERFDIR}/libserf.a LIBSVN_CLIENT= ${LIBSVN_CLIENTDIR}/libsvn_client.a LIBSVN_DELTA= ${LIBSVN_DELTADIR}/libsvn_delta.a LIBSVN_DIFF= ${LIBSVN_DIFFDIR}/libsvn_diff.a LIBSVN_FS= ${LIBSVN_FSDIR}/libsvn_fs.a LIBSVN_FS_FS= ${LIBSVN_FS_FSDIR}/libsvn_fs_fs.a LIBSVN_FS_UTIL= ${LIBSVN_FS_UTILDIR}/libsvn_fs_util.a LIBSVN_FS_X= ${LIBSVN_FS_XDIR}/libsvn_fs_x.a LIBSVN_RA= ${LIBSVN_RADIR}/libsvn_ra.a LIBSVN_RA_LOCAL= ${LIBSVN_RA_LOCALDIR}/libsvn_ra_local.a LIBSVN_RA_SVN= ${LIBSVN_RA_SVNDIR}/libsvn_ra_svn.a LIBSVN_RA_SERF= ${LIBSVN_RA_SERFDIR}/libsvn_ra_serf.a LIBSVN_REPOS= ${LIBSVN_REPOSDIR}/libsvn_repos.a LIBSVN_SUBR= ${LIBSVN_SUBRDIR}/libsvn_subr.a LIBSVN_WC= ${LIBSVN_WCDIR}/libsvn_wc.a .endif Index: projects/import-googletest-1.8.1/usr.sbin/bhyve/block_if.c =================================================================== --- projects/import-googletest-1.8.1/usr.sbin/bhyve/block_if.c (revision 344270) +++ projects/import-googletest-1.8.1/usr.sbin/bhyve/block_if.c (revision 344271) @@ -1,852 +1,850 @@ /*- * SPDX-License-Identifier: BSD-2-Clause-FreeBSD * * Copyright (c) 2013 Peter Grehan * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. 
* * $FreeBSD$ */ #include __FBSDID("$FreeBSD$"); #include #ifndef WITHOUT_CAPSICUM #include #endif #include #include #include #include #include #include #ifndef WITHOUT_CAPSICUM #include #endif #include #include #include #include #include #include #include #include #include #include #include #include "bhyverun.h" #include "mevent.h" #include "block_if.h" #define BLOCKIF_SIG 0xb109b109 #define BLOCKIF_NUMTHR 8 #define BLOCKIF_MAXREQ (64 + BLOCKIF_NUMTHR) enum blockop { BOP_READ, BOP_WRITE, BOP_FLUSH, BOP_DELETE }; enum blockstat { BST_FREE, BST_BLOCK, BST_PEND, BST_BUSY, BST_DONE }; struct blockif_elem { TAILQ_ENTRY(blockif_elem) be_link; struct blockif_req *be_req; enum blockop be_op; enum blockstat be_status; pthread_t be_tid; off_t be_block; }; struct blockif_ctxt { int bc_magic; int bc_fd; int bc_ischr; int bc_isgeom; int bc_candelete; int bc_rdonly; off_t bc_size; int bc_sectsz; int bc_psectsz; int bc_psectoff; int bc_closing; pthread_t bc_btid[BLOCKIF_NUMTHR]; pthread_mutex_t bc_mtx; pthread_cond_t bc_cond; /* Request elements and free/pending/busy queues */ TAILQ_HEAD(, blockif_elem) bc_freeq; TAILQ_HEAD(, blockif_elem) bc_pendq; TAILQ_HEAD(, blockif_elem) bc_busyq; struct blockif_elem bc_reqs[BLOCKIF_MAXREQ]; }; static pthread_once_t blockif_once = PTHREAD_ONCE_INIT; struct blockif_sig_elem { pthread_mutex_t bse_mtx; pthread_cond_t bse_cond; int bse_pending; struct blockif_sig_elem *bse_next; }; static struct blockif_sig_elem *blockif_bse_head; static int blockif_enqueue(struct blockif_ctxt *bc, struct blockif_req *breq, enum blockop op) { struct blockif_elem *be, *tbe; off_t off; int i; be = TAILQ_FIRST(&bc->bc_freeq); assert(be != NULL); assert(be->be_status == BST_FREE); TAILQ_REMOVE(&bc->bc_freeq, be, be_link); be->be_req = breq; be->be_op = op; switch (op) { case BOP_READ: case BOP_WRITE: case BOP_DELETE: off = breq->br_offset; for (i = 0; i < breq->br_iovcnt; i++) off += breq->br_iov[i].iov_len; break; default: off = OFF_MAX; } be->be_block = off; TAILQ_FOREACH(tbe, &bc->bc_pendq, be_link) { if (tbe->be_block == breq->br_offset) break; } if (tbe == NULL) { TAILQ_FOREACH(tbe, &bc->bc_busyq, be_link) { if (tbe->be_block == breq->br_offset) break; } } if (tbe == NULL) be->be_status = BST_PEND; else be->be_status = BST_BLOCK; TAILQ_INSERT_TAIL(&bc->bc_pendq, be, be_link); return (be->be_status == BST_PEND); } static int blockif_dequeue(struct blockif_ctxt *bc, pthread_t t, struct blockif_elem **bep) { struct blockif_elem *be; TAILQ_FOREACH(be, &bc->bc_pendq, be_link) { if (be->be_status == BST_PEND) break; assert(be->be_status == BST_BLOCK); } if (be == NULL) return (0); TAILQ_REMOVE(&bc->bc_pendq, be, be_link); be->be_status = BST_BUSY; be->be_tid = t; TAILQ_INSERT_TAIL(&bc->bc_busyq, be, be_link); *bep = be; return (1); } static void blockif_complete(struct blockif_ctxt *bc, struct blockif_elem *be) { struct blockif_elem *tbe; if (be->be_status == BST_DONE || be->be_status == BST_BUSY) TAILQ_REMOVE(&bc->bc_busyq, be, be_link); else TAILQ_REMOVE(&bc->bc_pendq, be, be_link); TAILQ_FOREACH(tbe, &bc->bc_pendq, be_link) { if (tbe->be_req->br_offset == be->be_block) tbe->be_status = BST_PEND; } be->be_tid = 0; be->be_status = BST_FREE; be->be_req = NULL; TAILQ_INSERT_TAIL(&bc->bc_freeq, be, be_link); } static void blockif_proc(struct blockif_ctxt *bc, struct blockif_elem *be, uint8_t *buf) { struct blockif_req *br; off_t arg[2]; ssize_t clen, len, off, boff, voff; int i, err; br = be->be_req; if (br->br_iovcnt <= 1) buf = NULL; err = 0; switch (be->be_op) { case BOP_READ: if (buf == 
NULL) { if ((len = preadv(bc->bc_fd, br->br_iov, br->br_iovcnt, br->br_offset)) < 0) err = errno; else br->br_resid -= len; break; } i = 0; off = voff = 0; while (br->br_resid > 0) { len = MIN(br->br_resid, MAXPHYS); if (pread(bc->bc_fd, buf, len, br->br_offset + off) < 0) { err = errno; break; } boff = 0; do { clen = MIN(len - boff, br->br_iov[i].iov_len - voff); memcpy(br->br_iov[i].iov_base + voff, buf + boff, clen); if (clen < br->br_iov[i].iov_len - voff) voff += clen; else { i++; voff = 0; } boff += clen; } while (boff < len); off += len; br->br_resid -= len; } break; case BOP_WRITE: if (bc->bc_rdonly) { err = EROFS; break; } if (buf == NULL) { if ((len = pwritev(bc->bc_fd, br->br_iov, br->br_iovcnt, br->br_offset)) < 0) err = errno; else br->br_resid -= len; break; } i = 0; off = voff = 0; while (br->br_resid > 0) { len = MIN(br->br_resid, MAXPHYS); boff = 0; do { clen = MIN(len - boff, br->br_iov[i].iov_len - voff); memcpy(buf + boff, br->br_iov[i].iov_base + voff, clen); if (clen < br->br_iov[i].iov_len - voff) voff += clen; else { i++; voff = 0; } boff += clen; } while (boff < len); if (pwrite(bc->bc_fd, buf, len, br->br_offset + off) < 0) { err = errno; break; } off += len; br->br_resid -= len; } break; case BOP_FLUSH: if (bc->bc_ischr) { if (ioctl(bc->bc_fd, DIOCGFLUSH)) err = errno; } else if (fsync(bc->bc_fd)) err = errno; break; case BOP_DELETE: if (!bc->bc_candelete) err = EOPNOTSUPP; else if (bc->bc_rdonly) err = EROFS; else if (bc->bc_ischr) { arg[0] = br->br_offset; arg[1] = br->br_resid; if (ioctl(bc->bc_fd, DIOCGDELETE, arg)) err = errno; else br->br_resid = 0; } else err = EOPNOTSUPP; break; default: err = EINVAL; break; } be->be_status = BST_DONE; (*br->br_callback)(br, err); } static void * blockif_thr(void *arg) { struct blockif_ctxt *bc; struct blockif_elem *be; pthread_t t; uint8_t *buf; bc = arg; if (bc->bc_isgeom) buf = malloc(MAXPHYS); else buf = NULL; t = pthread_self(); pthread_mutex_lock(&bc->bc_mtx); for (;;) { while (blockif_dequeue(bc, t, &be)) { pthread_mutex_unlock(&bc->bc_mtx); blockif_proc(bc, be, buf); pthread_mutex_lock(&bc->bc_mtx); blockif_complete(bc, be); } /* Check ctxt status here to see if exit requested */ if (bc->bc_closing) break; pthread_cond_wait(&bc->bc_cond, &bc->bc_mtx); } pthread_mutex_unlock(&bc->bc_mtx); if (buf) free(buf); pthread_exit(NULL); return (NULL); } static void blockif_sigcont_handler(int signal, enum ev_type type, void *arg) { struct blockif_sig_elem *bse; for (;;) { /* * Process the entire list even if not intended for * this thread. 
*/ do { bse = blockif_bse_head; if (bse == NULL) return; } while (!atomic_cmpset_ptr((uintptr_t *)&blockif_bse_head, (uintptr_t)bse, (uintptr_t)bse->bse_next)); pthread_mutex_lock(&bse->bse_mtx); bse->bse_pending = 0; pthread_cond_signal(&bse->bse_cond); pthread_mutex_unlock(&bse->bse_mtx); } } static void blockif_init(void) { mevent_add(SIGCONT, EVF_SIGNAL, blockif_sigcont_handler, NULL); (void) signal(SIGCONT, SIG_IGN); } struct blockif_ctxt * blockif_open(const char *optstr, const char *ident) { char tname[MAXCOMLEN + 1]; char name[MAXPATHLEN]; char *nopt, *xopts, *cp; struct blockif_ctxt *bc; struct stat sbuf; struct diocgattr_arg arg; off_t size, psectsz, psectoff; int extra, fd, i, sectsz; int nocache, sync, ro, candelete, geom, ssopt, pssopt; #ifndef WITHOUT_CAPSICUM cap_rights_t rights; cap_ioctl_t cmds[] = { DIOCGFLUSH, DIOCGDELETE }; #endif pthread_once(&blockif_once, blockif_init); fd = -1; ssopt = 0; nocache = 0; sync = 0; ro = 0; /* * The first element in the optstring is always a pathname. * Optional elements follow */ nopt = xopts = strdup(optstr); while (xopts != NULL) { cp = strsep(&xopts, ","); if (cp == nopt) /* file or device pathname */ continue; else if (!strcmp(cp, "nocache")) nocache = 1; else if (!strcmp(cp, "sync") || !strcmp(cp, "direct")) sync = 1; else if (!strcmp(cp, "ro")) ro = 1; else if (sscanf(cp, "sectorsize=%d/%d", &ssopt, &pssopt) == 2) ; else if (sscanf(cp, "sectorsize=%d", &ssopt) == 1) pssopt = ssopt; else { fprintf(stderr, "Invalid device option \"%s\"\n", cp); goto err; } } extra = 0; if (nocache) extra |= O_DIRECT; if (sync) extra |= O_SYNC; fd = open(nopt, (ro ? O_RDONLY : O_RDWR) | extra); if (fd < 0 && !ro) { /* Attempt a r/w fail with a r/o open */ fd = open(nopt, O_RDONLY | extra); ro = 1; } if (fd < 0) { warn("Could not open backing file: %s", nopt); goto err; } if (fstat(fd, &sbuf) < 0) { warn("Could not stat backing file %s", nopt); goto err; } #ifndef WITHOUT_CAPSICUM cap_rights_init(&rights, CAP_FSYNC, CAP_IOCTL, CAP_READ, CAP_SEEK, CAP_WRITE); if (ro) cap_rights_clear(&rights, CAP_FSYNC, CAP_WRITE); if (caph_rights_limit(fd, &rights) == -1) errx(EX_OSERR, "Unable to apply rights for sandbox"); #endif /* * Deal with raw devices */ size = sbuf.st_size; sectsz = DEV_BSIZE; psectsz = psectoff = 0; candelete = geom = 0; if (S_ISCHR(sbuf.st_mode)) { if (ioctl(fd, DIOCGMEDIASIZE, &size) < 0 || ioctl(fd, DIOCGSECTORSIZE, §sz)) { perror("Could not fetch dev blk/sector size"); goto err; } assert(size != 0); assert(sectsz != 0); if (ioctl(fd, DIOCGSTRIPESIZE, &psectsz) == 0 && psectsz > 0) ioctl(fd, DIOCGSTRIPEOFFSET, &psectoff); strlcpy(arg.name, "GEOM::candelete", sizeof(arg.name)); arg.len = sizeof(arg.value.i); if (ioctl(fd, DIOCGATTR, &arg) == 0) candelete = arg.value.i; if (ioctl(fd, DIOCGPROVIDERNAME, name) == 0) geom = 1; } else psectsz = sbuf.st_blksize; #ifndef WITHOUT_CAPSICUM if (caph_ioctls_limit(fd, cmds, nitems(cmds)) == -1) errx(EX_OSERR, "Unable to apply rights for sandbox"); #endif if (ssopt != 0) { if (!powerof2(ssopt) || !powerof2(pssopt) || ssopt < 512 || ssopt > pssopt) { fprintf(stderr, "Invalid sector size %d/%d\n", ssopt, pssopt); goto err; } /* * Some backend drivers (e.g. cd0, ada0) require that the I/O * size be a multiple of the device's sector size. * * Validate that the emulated sector size complies with this * requirement. 
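blockif_sigcont_handler() above drains a lock-free, singly-linked list of cancellation waiters: each iteration snapshots the head and publishes head->next with a compare-and-swap, retrying when another thread wins the race. A reduced sketch of the same pop idiom with FreeBSD's atomic_cmpset_ptr(); the struct and function names here are invented stand-ins for struct blockif_sig_elem:

#include <sys/types.h>
#include <machine/atomic.h>
#include <stddef.h>
#include <stdint.h>

/* Invented stand-in for struct blockif_sig_elem. */
struct waiter {
	struct waiter *next;
};

static struct waiter *head;	/* shared LIFO head, pushed by cancellers */

/*
 * Pop one waiter, retrying if another thread moved the head first;
 * returns NULL once the list is empty.  This is the loop inside
 * blockif_sigcont_handler() above, reduced to its core.
 */
static struct waiter *
waiter_pop(void)
{
	struct waiter *w;

	do {
		w = head;
		if (w == NULL)
			return (NULL);
	} while (!atomic_cmpset_ptr((uintptr_t *)&head,
	    (uintptr_t)w, (uintptr_t)w->next));
	return (w);
}

int
main(void)
{
	struct waiter w1;

	/* Single-threaded push, for demonstration only. */
	w1.next = head;
	head = &w1;
	return (waiter_pop() == &w1 ? 0 : 1);
}

The matching push in blockif_cancel() below uses the same CAS against the head, and each element is popped exactly once before its owner is signalled, which is what keeps this simple stack adequate for bhyve's usage.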
*/ if (S_ISCHR(sbuf.st_mode)) { if (ssopt < sectsz || (ssopt % sectsz) != 0) { fprintf(stderr, "Sector size %d incompatible " "with underlying device sector size %d\n", ssopt, sectsz); goto err; } } sectsz = ssopt; psectsz = pssopt; psectoff = 0; } bc = calloc(1, sizeof(struct blockif_ctxt)); if (bc == NULL) { perror("calloc"); goto err; } bc->bc_magic = BLOCKIF_SIG; bc->bc_fd = fd; bc->bc_ischr = S_ISCHR(sbuf.st_mode); bc->bc_isgeom = geom; bc->bc_candelete = candelete; bc->bc_rdonly = ro; bc->bc_size = size; bc->bc_sectsz = sectsz; bc->bc_psectsz = psectsz; bc->bc_psectoff = psectoff; pthread_mutex_init(&bc->bc_mtx, NULL); pthread_cond_init(&bc->bc_cond, NULL); TAILQ_INIT(&bc->bc_freeq); TAILQ_INIT(&bc->bc_pendq); TAILQ_INIT(&bc->bc_busyq); for (i = 0; i < BLOCKIF_MAXREQ; i++) { bc->bc_reqs[i].be_status = BST_FREE; TAILQ_INSERT_HEAD(&bc->bc_freeq, &bc->bc_reqs[i], be_link); } for (i = 0; i < BLOCKIF_NUMTHR; i++) { pthread_create(&bc->bc_btid[i], NULL, blockif_thr, bc); snprintf(tname, sizeof(tname), "blk-%s-%d", ident, i); pthread_set_name_np(bc->bc_btid[i], tname); } return (bc); err: if (fd >= 0) close(fd); - free(cp); - free(xopts); free(nopt); return (NULL); } static int blockif_request(struct blockif_ctxt *bc, struct blockif_req *breq, enum blockop op) { int err; err = 0; pthread_mutex_lock(&bc->bc_mtx); if (!TAILQ_EMPTY(&bc->bc_freeq)) { /* * Enqueue and inform the block i/o thread * that there is work available */ if (blockif_enqueue(bc, breq, op)) pthread_cond_signal(&bc->bc_cond); } else { /* * Callers are not allowed to enqueue more than * the specified blockif queue limit. Return an * error to indicate that the queue length has been * exceeded. */ err = E2BIG; } pthread_mutex_unlock(&bc->bc_mtx); return (err); } int blockif_read(struct blockif_ctxt *bc, struct blockif_req *breq) { assert(bc->bc_magic == BLOCKIF_SIG); return (blockif_request(bc, breq, BOP_READ)); } int blockif_write(struct blockif_ctxt *bc, struct blockif_req *breq) { assert(bc->bc_magic == BLOCKIF_SIG); return (blockif_request(bc, breq, BOP_WRITE)); } int blockif_flush(struct blockif_ctxt *bc, struct blockif_req *breq) { assert(bc->bc_magic == BLOCKIF_SIG); return (blockif_request(bc, breq, BOP_FLUSH)); } int blockif_delete(struct blockif_ctxt *bc, struct blockif_req *breq) { assert(bc->bc_magic == BLOCKIF_SIG); return (blockif_request(bc, breq, BOP_DELETE)); } int blockif_cancel(struct blockif_ctxt *bc, struct blockif_req *breq) { struct blockif_elem *be; assert(bc->bc_magic == BLOCKIF_SIG); pthread_mutex_lock(&bc->bc_mtx); /* * Check pending requests. */ TAILQ_FOREACH(be, &bc->bc_pendq, be_link) { if (be->be_req == breq) break; } if (be != NULL) { /* * Found it. */ blockif_complete(bc, be); pthread_mutex_unlock(&bc->bc_mtx); return (0); } /* * Check in-flight requests. */ TAILQ_FOREACH(be, &bc->bc_busyq, be_link) { if (be->be_req == breq) break; } if (be == NULL) { /* * Didn't find it. */ pthread_mutex_unlock(&bc->bc_mtx); return (EINVAL); } /* * Interrupt the processing thread to force it to return * prematurely via its normal callback path.
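 * The handshake below: publish a blockif_sig_elem on the global list,
 * kick the worker with SIGCONT (fielded by blockif_sigcont_handler
 * above), then sleep on the element's condvar until the handler clears
 * bse_pending; this repeats while the request remains BST_BUSY.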
*/ while (be->be_status == BST_BUSY) { struct blockif_sig_elem bse, *old_head; pthread_mutex_init(&bse.bse_mtx, NULL); pthread_cond_init(&bse.bse_cond, NULL); bse.bse_pending = 1; do { old_head = blockif_bse_head; bse.bse_next = old_head; } while (!atomic_cmpset_ptr((uintptr_t *)&blockif_bse_head, (uintptr_t)old_head, (uintptr_t)&bse)); pthread_kill(be->be_tid, SIGCONT); pthread_mutex_lock(&bse.bse_mtx); while (bse.bse_pending) pthread_cond_wait(&bse.bse_cond, &bse.bse_mtx); pthread_mutex_unlock(&bse.bse_mtx); } pthread_mutex_unlock(&bc->bc_mtx); /* * The processing thread has been interrupted. Since it's not * clear if the callback has been invoked yet, return EBUSY. */ return (EBUSY); } int blockif_close(struct blockif_ctxt *bc) { void *jval; int i; assert(bc->bc_magic == BLOCKIF_SIG); /* * Stop the block i/o thread */ pthread_mutex_lock(&bc->bc_mtx); bc->bc_closing = 1; pthread_mutex_unlock(&bc->bc_mtx); pthread_cond_broadcast(&bc->bc_cond); for (i = 0; i < BLOCKIF_NUMTHR; i++) pthread_join(bc->bc_btid[i], &jval); /* XXX Cancel queued i/o's ??? */ /* * Release resources */ bc->bc_magic = 0; close(bc->bc_fd); free(bc); return (0); } /* * Return virtual C/H/S values for a given block. Use the algorithm * outlined in the VHD specification to calculate values. */ void blockif_chs(struct blockif_ctxt *bc, uint16_t *c, uint8_t *h, uint8_t *s) { off_t sectors; /* total sectors of the block dev */ off_t hcyl; /* cylinders times heads */ uint16_t secpt; /* sectors per track */ uint8_t heads; assert(bc->bc_magic == BLOCKIF_SIG); sectors = bc->bc_size / bc->bc_sectsz; /* Clamp the size to the largest possible with CHS */ if (sectors > 65535UL*16*255) sectors = 65535UL*16*255; if (sectors >= 65536UL*16*63) { secpt = 255; heads = 16; hcyl = sectors / secpt; } else { secpt = 17; hcyl = sectors / secpt; heads = (hcyl + 1023) / 1024; if (heads < 4) heads = 4; if (hcyl >= (heads * 1024) || heads > 16) { secpt = 31; heads = 16; hcyl = sectors / secpt; } if (hcyl >= (heads * 1024)) { secpt = 63; heads = 16; hcyl = sectors / secpt; } } *c = hcyl / heads; *h = heads; *s = secpt; } /* * Accessors */ off_t blockif_size(struct blockif_ctxt *bc) { assert(bc->bc_magic == BLOCKIF_SIG); return (bc->bc_size); } int blockif_sectsz(struct blockif_ctxt *bc) { assert(bc->bc_magic == BLOCKIF_SIG); return (bc->bc_sectsz); } void blockif_psectsz(struct blockif_ctxt *bc, int *size, int *off) { assert(bc->bc_magic == BLOCKIF_SIG); *size = bc->bc_psectsz; *off = bc->bc_psectoff; } int blockif_queuesz(struct blockif_ctxt *bc) { assert(bc->bc_magic == BLOCKIF_SIG); return (BLOCKIF_MAXREQ - 1); } int blockif_is_ro(struct blockif_ctxt *bc) { assert(bc->bc_magic == BLOCKIF_SIG); return (bc->bc_rdonly); } int blockif_candelete(struct blockif_ctxt *bc) { assert(bc->bc_magic == BLOCKIF_SIG); return (bc->bc_candelete); } Index: projects/import-googletest-1.8.1/usr.sbin/bhyve/pci_xhci.c =================================================================== --- projects/import-googletest-1.8.1/usr.sbin/bhyve/pci_xhci.c (revision 344270) +++ projects/import-googletest-1.8.1/usr.sbin/bhyve/pci_xhci.c (revision 344271) @@ -1,2839 +1,2838 @@ /*- * SPDX-License-Identifier: BSD-2-Clause-FreeBSD * * Copyright (c) 2014 Leon Dang * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. 
Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ /* XHCI options: -s <n>,xhci,{devices} devices: tablet USB tablet mouse */ #include <sys/cdefs.h> __FBSDID("$FreeBSD$"); #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include "bhyverun.h" #include "pci_emul.h" #include "pci_xhci.h" #include "usb_emul.h" static int xhci_debug = 0; #define DPRINTF(params) if (xhci_debug) printf params #define WPRINTF(params) printf params #define XHCI_NAME "xhci" #define XHCI_MAX_DEVS 8 /* 4 USB3 + 4 USB2 devs */ #define XHCI_MAX_SLOTS 64 /* min allowed by Windows drivers */ /* * XHCI data structures can be up to 64k, but limit paddr_guest2host mapping * to 4k to avoid going over the guest physical memory barrier. */ #define XHCI_PADDR_SZ 4096 /* paddr_guest2host max size */ #define XHCI_ERST_MAX 0 /* max 2^entries event ring seg tbl */ #define XHCI_CAPLEN (4*8) /* offset of op register space */ #define XHCI_HCCPRAMS2 0x1C /* offset of HCCPARAMS2 register */ #define XHCI_PORTREGS_START 0x400 #define XHCI_DOORBELL_MAX 256 #define XHCI_STREAMS_MAX 1 /* 4-15 in XHCI spec */ /* caplength and hci-version registers */ #define XHCI_SET_CAPLEN(x) ((x) & 0xFF) #define XHCI_SET_HCIVERSION(x) (((x) & 0xFFFF) << 16) #define XHCI_GET_HCIVERSION(x) (((x) >> 16) & 0xFFFF) /* hcsparams1 register */ #define XHCI_SET_HCSP1_MAXSLOTS(x) ((x) & 0xFF) #define XHCI_SET_HCSP1_MAXINTR(x) (((x) & 0x7FF) << 8) #define XHCI_SET_HCSP1_MAXPORTS(x) (((x) & 0xFF) << 24) /* hcsparams2 register */ #define XHCI_SET_HCSP2_IST(x) ((x) & 0x0F) #define XHCI_SET_HCSP2_ERSTMAX(x) (((x) & 0x0F) << 4) #define XHCI_SET_HCSP2_MAXSCRATCH_HI(x) (((x) & 0x1F) << 21) #define XHCI_SET_HCSP2_MAXSCRATCH_LO(x) (((x) & 0x1F) << 27) /* hcsparams3 register */ #define XHCI_SET_HCSP3_U1EXITLATENCY(x) ((x) & 0xFF) #define XHCI_SET_HCSP3_U2EXITLATENCY(x) (((x) & 0xFFFF) << 16) /* hccparams1 register */ #define XHCI_SET_HCCP1_AC64(x) ((x) & 0x01) #define XHCI_SET_HCCP1_BNC(x) (((x) & 0x01) << 1) #define XHCI_SET_HCCP1_CSZ(x) (((x) & 0x01) << 2) #define XHCI_SET_HCCP1_PPC(x) (((x) & 0x01) << 3) #define XHCI_SET_HCCP1_PIND(x) (((x) & 0x01) << 4) #define XHCI_SET_HCCP1_LHRC(x) (((x) & 0x01) << 5) #define XHCI_SET_HCCP1_LTC(x) (((x) & 0x01) << 6) #define XHCI_SET_HCCP1_NSS(x) (((x) & 0x01) << 7) #define XHCI_SET_HCCP1_PAE(x) (((x) & 0x01) << 8) #define XHCI_SET_HCCP1_SPC(x) (((x) & 0x01) << 9) #define XHCI_SET_HCCP1_SEC(x) (((x) & 0x01) << 10) #define XHCI_SET_HCCP1_CFC(x) (((x) & 0x01) << 11) #define XHCI_SET_HCCP1_MAXPSA(x) (((x) & 0x0F) << 12) #define XHCI_SET_HCCP1_XECP(x) (((x) & 0xFFFF) <<
16) /* hccparams2 register */ #define XHCI_SET_HCCP2_U3C(x) ((x) & 0x01) #define XHCI_SET_HCCP2_CMC(x) (((x) & 0x01) << 1) #define XHCI_SET_HCCP2_FSC(x) (((x) & 0x01) << 2) #define XHCI_SET_HCCP2_CTC(x) (((x) & 0x01) << 3) #define XHCI_SET_HCCP2_LEC(x) (((x) & 0x01) << 4) #define XHCI_SET_HCCP2_CIC(x) (((x) & 0x01) << 5) /* other registers */ #define XHCI_SET_DOORBELL(x) ((x) & ~0x03) #define XHCI_SET_RTSOFFSET(x) ((x) & ~0x0F) /* register masks */ #define XHCI_PS_PLS_MASK (0xF << 5) /* port link state */ #define XHCI_PS_SPEED_MASK (0xF << 10) /* port speed */ #define XHCI_PS_PIC_MASK (0x3 << 14) /* port indicator */ /* port register set */ #define XHCI_PORTREGS_BASE 0x400 /* base offset */ #define XHCI_PORTREGS_PORT0 0x3F0 #define XHCI_PORTREGS_SETSZ 0x10 /* size of a set */ #define MASK_64_HI(x) ((x) & ~0xFFFFFFFFULL) #define MASK_64_LO(x) ((x) & 0xFFFFFFFFULL) #define FIELD_REPLACE(a,b,m,s) (((a) & ~((m) << (s))) | \ (((b) & (m)) << (s))) #define FIELD_COPY(a,b,m,s) (((a) & ~((m) << (s))) | \ (((b) & ((m) << (s))))) struct pci_xhci_trb_ring { uint64_t ringaddr; /* current dequeue guest address */ uint32_t ccs; /* consumer cycle state */ }; /* device endpoint transfer/stream rings */ struct pci_xhci_dev_ep { union { struct xhci_trb *_epu_tr; struct xhci_stream_ctx *_epu_sctx; } _ep_trbsctx; #define ep_tr _ep_trbsctx._epu_tr #define ep_sctx _ep_trbsctx._epu_sctx union { struct pci_xhci_trb_ring _epu_trb; struct pci_xhci_trb_ring *_epu_sctx_trbs; } _ep_trb_rings; #define ep_ringaddr _ep_trb_rings._epu_trb.ringaddr #define ep_ccs _ep_trb_rings._epu_trb.ccs #define ep_sctx_trbs _ep_trb_rings._epu_sctx_trbs struct usb_data_xfer *ep_xfer; /* transfer chain */ }; /* device context base address array: maps slot->device context */ struct xhci_dcbaa { uint64_t dcba[USB_MAX_DEVICES+1]; /* xhci_dev_ctx ptrs */ }; /* port status registers */ struct pci_xhci_portregs { uint32_t portsc; /* port status and control */ uint32_t portpmsc; /* port pwr mgmt status & control */ uint32_t portli; /* port link info */ uint32_t porthlpmc; /* port hardware LPM control */ } __packed; #define XHCI_PS_SPEED_SET(x) (((x) & 0xF) << 10) /* xHC operational registers */ struct pci_xhci_opregs { uint32_t usbcmd; /* usb command */ uint32_t usbsts; /* usb status */ uint32_t pgsz; /* page size */ uint32_t dnctrl; /* device notification control */ uint64_t crcr; /* command ring control */ uint64_t dcbaap; /* device ctx base addr array ptr */ uint32_t config; /* configure */ /* guest mapped addresses: */ struct xhci_trb *cr_p; /* crcr dequeue */ struct xhci_dcbaa *dcbaa_p; /* dev ctx array ptr */ }; /* xHC runtime registers */ struct pci_xhci_rtsregs { uint32_t mfindex; /* microframe index */ struct { /* interrupter register set */ uint32_t iman; /* interrupter management */ uint32_t imod; /* interrupter moderation */ uint32_t erstsz; /* event ring segment table size */ uint32_t rsvd; uint64_t erstba; /* event ring seg-tbl base addr */ uint64_t erdp; /* event ring dequeue ptr */ } intrreg __packed; /* guest mapped addresses */ struct xhci_event_ring_seg *erstba_p; struct xhci_trb *erst_p; /* event ring segment tbl */ int er_deq_seg; /* event ring dequeue segment */ int er_enq_idx; /* event ring enqueue index - xHCI */ int er_enq_seg; /* event ring enqueue segment */ uint32_t er_events_cnt; /* number of events in ER */ uint32_t event_pcs; /* producer cycle state flag */ }; struct pci_xhci_softc; /* * USB device emulation container. * This is referenced from usb_hci->hci_sc; 1 pci_xhci_dev_emu for each * emulated device instance. 
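 * (The usb_hci member is embedded in this structure, so hci_sc serves
 * as the back-pointer by which the USB device emulation recovers its
 * container.)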
*/ struct pci_xhci_dev_emu { struct pci_xhci_softc *xsc; /* XHCI contexts */ struct xhci_dev_ctx *dev_ctx; struct pci_xhci_dev_ep eps[XHCI_MAX_ENDPOINTS]; int dev_slotstate; struct usb_devemu *dev_ue; /* USB emulated dev */ void *dev_sc; /* device's softc */ struct usb_hci hci; }; struct pci_xhci_softc { struct pci_devinst *xsc_pi; pthread_mutex_t mtx; uint32_t caplength; /* caplen & hciversion */ uint32_t hcsparams1; /* structural parameters 1 */ uint32_t hcsparams2; /* structural parameters 2 */ uint32_t hcsparams3; /* structural parameters 3 */ uint32_t hccparams1; /* capability parameters 1 */ uint32_t dboff; /* doorbell offset */ uint32_t rtsoff; /* runtime register space offset */ uint32_t hccparams2; /* capability parameters 2 */ uint32_t regsend; /* end of configuration registers */ struct pci_xhci_opregs opregs; struct pci_xhci_rtsregs rtsregs; struct pci_xhci_portregs *portregs; struct pci_xhci_dev_emu **devices; /* XHCI[port] = device */ struct pci_xhci_dev_emu **slots; /* slots assigned from 1 */ int ndevices; int usb2_port_start; int usb3_port_start; }; /* portregs and devices arrays are set up to start from idx=1 */ #define XHCI_PORTREG_PTR(x,n) &(x)->portregs[(n)] #define XHCI_DEVINST_PTR(x,n) (x)->devices[(n)] #define XHCI_SLOTDEV_PTR(x,n) (x)->slots[(n)] #define XHCI_HALTED(sc) ((sc)->opregs.usbsts & XHCI_STS_HCH) #define XHCI_GADDR(sc,a) paddr_guest2host((sc)->xsc_pi->pi_vmctx, \ (a), \ XHCI_PADDR_SZ - ((a) & (XHCI_PADDR_SZ-1))) static int xhci_in_use; /* map USB errors to XHCI */ static const int xhci_usb_errors[USB_ERR_MAX] = { [USB_ERR_NORMAL_COMPLETION] = XHCI_TRB_ERROR_SUCCESS, [USB_ERR_PENDING_REQUESTS] = XHCI_TRB_ERROR_RESOURCE, [USB_ERR_NOT_STARTED] = XHCI_TRB_ERROR_ENDP_NOT_ON, [USB_ERR_INVAL] = XHCI_TRB_ERROR_INVALID, [USB_ERR_NOMEM] = XHCI_TRB_ERROR_RESOURCE, [USB_ERR_CANCELLED] = XHCI_TRB_ERROR_STOPPED, [USB_ERR_BAD_ADDRESS] = XHCI_TRB_ERROR_PARAMETER, [USB_ERR_BAD_BUFSIZE] = XHCI_TRB_ERROR_PARAMETER, [USB_ERR_BAD_FLAG] = XHCI_TRB_ERROR_PARAMETER, [USB_ERR_NO_CALLBACK] = XHCI_TRB_ERROR_STALL, [USB_ERR_IN_USE] = XHCI_TRB_ERROR_RESOURCE, [USB_ERR_NO_ADDR] = XHCI_TRB_ERROR_RESOURCE, [USB_ERR_NO_PIPE] = XHCI_TRB_ERROR_RESOURCE, [USB_ERR_ZERO_NFRAMES] = XHCI_TRB_ERROR_UNDEFINED, [USB_ERR_ZERO_MAXP] = XHCI_TRB_ERROR_UNDEFINED, [USB_ERR_SET_ADDR_FAILED] = XHCI_TRB_ERROR_RESOURCE, [USB_ERR_NO_POWER] = XHCI_TRB_ERROR_ENDP_NOT_ON, [USB_ERR_TOO_DEEP] = XHCI_TRB_ERROR_RESOURCE, [USB_ERR_IOERROR] = XHCI_TRB_ERROR_TRB, [USB_ERR_NOT_CONFIGURED] = XHCI_TRB_ERROR_ENDP_NOT_ON, [USB_ERR_TIMEOUT] = XHCI_TRB_ERROR_CMD_ABORTED, [USB_ERR_SHORT_XFER] = XHCI_TRB_ERROR_SHORT_PKT, [USB_ERR_STALLED] = XHCI_TRB_ERROR_STALL, [USB_ERR_INTERRUPTED] = XHCI_TRB_ERROR_CMD_ABORTED, [USB_ERR_DMA_LOAD_FAILED] = XHCI_TRB_ERROR_DATA_BUF, [USB_ERR_BAD_CONTEXT] = XHCI_TRB_ERROR_TRB, [USB_ERR_NO_ROOT_HUB] = XHCI_TRB_ERROR_UNDEFINED, [USB_ERR_NO_INTR_THREAD] = XHCI_TRB_ERROR_UNDEFINED, [USB_ERR_NOT_LOCKED] = XHCI_TRB_ERROR_UNDEFINED, }; #define USB_TO_XHCI_ERR(e) ((e) < USB_ERR_MAX ? 
xhci_usb_errors[(e)] : \ XHCI_TRB_ERROR_INVALID) static int pci_xhci_insert_event(struct pci_xhci_softc *sc, struct xhci_trb *evtrb, int do_intr); static void pci_xhci_dump_trb(struct xhci_trb *trb); static void pci_xhci_assert_interrupt(struct pci_xhci_softc *sc); static void pci_xhci_reset_slot(struct pci_xhci_softc *sc, int slot); static void pci_xhci_reset_port(struct pci_xhci_softc *sc, int portn, int warm); static void pci_xhci_update_ep_ring(struct pci_xhci_softc *sc, struct pci_xhci_dev_emu *dev, struct pci_xhci_dev_ep *devep, struct xhci_endp_ctx *ep_ctx, uint32_t streamid, uint64_t ringaddr, int ccs); static void pci_xhci_set_evtrb(struct xhci_trb *evtrb, uint64_t port, uint32_t errcode, uint32_t evtype) { evtrb->qwTrb0 = port << 24; evtrb->dwTrb2 = XHCI_TRB_2_ERROR_SET(errcode); evtrb->dwTrb3 = XHCI_TRB_3_TYPE_SET(evtype); } /* controller reset */ static void pci_xhci_reset(struct pci_xhci_softc *sc) { int i; sc->rtsregs.er_enq_idx = 0; sc->rtsregs.er_events_cnt = 0; sc->rtsregs.event_pcs = 1; for (i = 1; i <= XHCI_MAX_SLOTS; i++) { pci_xhci_reset_slot(sc, i); } } static uint32_t pci_xhci_usbcmd_write(struct pci_xhci_softc *sc, uint32_t cmd) { int do_intr = 0; int i; if (cmd & XHCI_CMD_RS) { do_intr = (sc->opregs.usbcmd & XHCI_CMD_RS) == 0; sc->opregs.usbcmd |= XHCI_CMD_RS; sc->opregs.usbsts &= ~XHCI_STS_HCH; sc->opregs.usbsts |= XHCI_STS_PCD; /* Queue port change event on controller run from stop */ if (do_intr) for (i = 1; i <= XHCI_MAX_DEVS; i++) { struct pci_xhci_dev_emu *dev; struct pci_xhci_portregs *port; struct xhci_trb evtrb; if ((dev = XHCI_DEVINST_PTR(sc, i)) == NULL) continue; port = XHCI_PORTREG_PTR(sc, i); port->portsc |= XHCI_PS_CSC | XHCI_PS_CCS; port->portsc &= ~XHCI_PS_PLS_MASK; /* * XHCI 4.19.3 USB2 RxDetect->Polling, * USB3 Polling->U0 */ if (dev->dev_ue->ue_usbver == 2) port->portsc |= XHCI_PS_PLS_SET(UPS_PORT_LS_POLL); else port->portsc |= XHCI_PS_PLS_SET(UPS_PORT_LS_U0); pci_xhci_set_evtrb(&evtrb, i, XHCI_TRB_ERROR_SUCCESS, XHCI_TRB_EVENT_PORT_STS_CHANGE); if (pci_xhci_insert_event(sc, &evtrb, 0) != XHCI_TRB_ERROR_SUCCESS) break; } } else { sc->opregs.usbcmd &= ~XHCI_CMD_RS; sc->opregs.usbsts |= XHCI_STS_HCH; sc->opregs.usbsts &= ~XHCI_STS_PCD; } /* start execution of schedule; stop when set to 0 */ cmd |= sc->opregs.usbcmd & XHCI_CMD_RS; if (cmd & XHCI_CMD_HCRST) { /* reset controller */ pci_xhci_reset(sc); cmd &= ~XHCI_CMD_HCRST; } cmd &= ~(XHCI_CMD_CSS | XHCI_CMD_CRS); if (do_intr) pci_xhci_assert_interrupt(sc); return (cmd); } static void pci_xhci_portregs_write(struct pci_xhci_softc *sc, uint64_t offset, uint64_t value) { struct xhci_trb evtrb; struct pci_xhci_portregs *p; int port; uint32_t oldpls, newpls; if (sc->portregs == NULL) return; port = (offset - XHCI_PORTREGS_PORT0) / XHCI_PORTREGS_SETSZ; offset = (offset - XHCI_PORTREGS_PORT0) % XHCI_PORTREGS_SETSZ; DPRINTF(("pci_xhci: portregs wr offset 0x%lx, port %u: 0x%lx\r\n", offset, port, value)); assert(port >= 0); if (port > XHCI_MAX_DEVS) { DPRINTF(("pci_xhci: portregs_write port %d > ndevices\r\n", port)); return; } if (XHCI_DEVINST_PTR(sc, port) == NULL) { DPRINTF(("pci_xhci: portregs_write to unattached port %d\r\n", port)); } p = XHCI_PORTREG_PTR(sc, port); switch (offset) { case 0: /* port reset or warm reset */ if (value & (XHCI_PS_PR | XHCI_PS_WPR)) { pci_xhci_reset_port(sc, port, value & XHCI_PS_WPR); break; } if ((p->portsc & XHCI_PS_PP) == 0) { WPRINTF(("pci_xhci: portregs_write to unpowered " "port %d\r\n", port)); break; } /* Port status and control register */ oldpls = 
XHCI_PS_PLS_GET(p->portsc); newpls = XHCI_PS_PLS_GET(value); p->portsc &= XHCI_PS_PED | XHCI_PS_PLS_MASK | XHCI_PS_SPEED_MASK | XHCI_PS_PIC_MASK; if (XHCI_DEVINST_PTR(sc, port)) p->portsc |= XHCI_PS_CCS; p->portsc |= (value & ~(XHCI_PS_OCA | XHCI_PS_PR | XHCI_PS_PED | XHCI_PS_PLS_MASK | /* link state */ XHCI_PS_SPEED_MASK | XHCI_PS_PIC_MASK | /* port indicator */ XHCI_PS_LWS | XHCI_PS_DR | XHCI_PS_WPR)); /* clear control bits */ p->portsc &= ~(value & (XHCI_PS_CSC | XHCI_PS_PEC | XHCI_PS_WRC | XHCI_PS_OCC | XHCI_PS_PRC | XHCI_PS_PLC | XHCI_PS_CEC | XHCI_PS_CAS)); /* port disable request; for USB3, don't care */ if (value & XHCI_PS_PED) DPRINTF(("Disable port %d request\r\n", port)); if (!(value & XHCI_PS_LWS)) break; DPRINTF(("Port new PLS: %d\r\n", newpls)); switch (newpls) { case 0: /* U0 */ case 3: /* U3 */ if (oldpls != newpls) { p->portsc &= ~XHCI_PS_PLS_MASK; p->portsc |= XHCI_PS_PLS_SET(newpls) | XHCI_PS_PLC; if (oldpls != 0 && newpls == 0) { pci_xhci_set_evtrb(&evtrb, port, XHCI_TRB_ERROR_SUCCESS, XHCI_TRB_EVENT_PORT_STS_CHANGE); pci_xhci_insert_event(sc, &evtrb, 1); } } break; default: DPRINTF(("Unhandled change port %d PLS %u\r\n", port, newpls)); break; } break; case 4: /* Port power management status and control register */ p->portpmsc = value; break; case 8: /* Port link information register */ DPRINTF(("pci_xhci attempted write to PORTLI, port %d\r\n", port)); break; case 12: /* * Port hardware LPM control register. * For USB3, this register is reserved. */ p->porthlpmc = value; break; } } struct xhci_dev_ctx * pci_xhci_get_dev_ctx(struct pci_xhci_softc *sc, uint32_t slot) { uint64_t devctx_addr; struct xhci_dev_ctx *devctx; assert(slot > 0 && slot <= sc->ndevices); assert(sc->opregs.dcbaa_p != NULL); devctx_addr = sc->opregs.dcbaa_p->dcba[slot]; if (devctx_addr == 0) { DPRINTF(("get_dev_ctx devctx_addr == 0\r\n")); return (NULL); } DPRINTF(("pci_xhci: get dev ctx, slot %u devctx addr %016lx\r\n", slot, devctx_addr)); devctx = XHCI_GADDR(sc, devctx_addr & ~0x3FUL); return (devctx); } struct xhci_trb * pci_xhci_trb_next(struct pci_xhci_softc *sc, struct xhci_trb *curtrb, uint64_t *guestaddr) { struct xhci_trb *next; assert(curtrb != NULL); if (XHCI_TRB_3_TYPE_GET(curtrb->dwTrb3) == XHCI_TRB_TYPE_LINK) { if (guestaddr) *guestaddr = curtrb->qwTrb0 & ~0xFUL; next = XHCI_GADDR(sc, curtrb->qwTrb0 & ~0xFUL); } else { if (guestaddr) *guestaddr += sizeof(struct xhci_trb) & ~0xFUL; next = curtrb + 1; } return (next); } static void pci_xhci_assert_interrupt(struct pci_xhci_softc *sc) { sc->rtsregs.intrreg.erdp |= XHCI_ERDP_LO_BUSY; sc->rtsregs.intrreg.iman |= XHCI_IMAN_INTR_PEND; sc->opregs.usbsts |= XHCI_STS_EINT; /* only trigger interrupt if permitted */ if ((sc->opregs.usbcmd & XHCI_CMD_INTE) && (sc->rtsregs.intrreg.iman & XHCI_IMAN_INTR_ENA)) { if (pci_msi_enabled(sc->xsc_pi)) pci_generate_msi(sc->xsc_pi, 0); else pci_lintr_assert(sc->xsc_pi); } } static void pci_xhci_deassert_interrupt(struct pci_xhci_softc *sc) { if (!pci_msi_enabled(sc->xsc_pi)) pci_lintr_assert(sc->xsc_pi); } static void pci_xhci_init_ep(struct pci_xhci_dev_emu *dev, int epid) { struct xhci_dev_ctx *dev_ctx; struct pci_xhci_dev_ep *devep; struct xhci_endp_ctx *ep_ctx; uint32_t pstreams; int i; dev_ctx = dev->dev_ctx; ep_ctx = &dev_ctx->ctx_ep[epid]; devep = &dev->eps[epid]; pstreams = XHCI_EPCTX_0_MAXP_STREAMS_GET(ep_ctx->dwEpCtx0); if (pstreams > 0) { DPRINTF(("init_ep %d with pstreams %d\r\n", epid, pstreams)); assert(devep->ep_sctx_trbs == NULL); devep->ep_sctx = XHCI_GADDR(dev->xsc, ep_ctx->qwEpCtx2 & 
XHCI_EPCTX_2_TR_DQ_PTR_MASK); devep->ep_sctx_trbs = calloc(pstreams, sizeof(struct pci_xhci_trb_ring)); for (i = 0; i < pstreams; i++) { devep->ep_sctx_trbs[i].ringaddr = devep->ep_sctx[i].qwSctx0 & XHCI_SCTX_0_TR_DQ_PTR_MASK; devep->ep_sctx_trbs[i].ccs = XHCI_SCTX_0_DCS_GET(devep->ep_sctx[i].qwSctx0); } } else { DPRINTF(("init_ep %d with no pstreams\r\n", epid)); devep->ep_ringaddr = ep_ctx->qwEpCtx2 & XHCI_EPCTX_2_TR_DQ_PTR_MASK; devep->ep_ccs = XHCI_EPCTX_2_DCS_GET(ep_ctx->qwEpCtx2); devep->ep_tr = XHCI_GADDR(dev->xsc, devep->ep_ringaddr); DPRINTF(("init_ep tr DCS %x\r\n", devep->ep_ccs)); } if (devep->ep_xfer == NULL) { devep->ep_xfer = malloc(sizeof(struct usb_data_xfer)); USB_DATA_XFER_INIT(devep->ep_xfer); } } static void pci_xhci_disable_ep(struct pci_xhci_dev_emu *dev, int epid) { struct xhci_dev_ctx *dev_ctx; struct pci_xhci_dev_ep *devep; struct xhci_endp_ctx *ep_ctx; DPRINTF(("pci_xhci disable_ep %d\r\n", epid)); dev_ctx = dev->dev_ctx; ep_ctx = &dev_ctx->ctx_ep[epid]; ep_ctx->dwEpCtx0 = (ep_ctx->dwEpCtx0 & ~0x7) | XHCI_ST_EPCTX_DISABLED; devep = &dev->eps[epid]; if (XHCI_EPCTX_0_MAXP_STREAMS_GET(ep_ctx->dwEpCtx0) > 0 && devep->ep_sctx_trbs != NULL) free(devep->ep_sctx_trbs); if (devep->ep_xfer != NULL) { free(devep->ep_xfer); devep->ep_xfer = NULL; } memset(devep, 0, sizeof(struct pci_xhci_dev_ep)); } /* reset device at slot and data structures related to it */ static void pci_xhci_reset_slot(struct pci_xhci_softc *sc, int slot) { struct pci_xhci_dev_emu *dev; dev = XHCI_SLOTDEV_PTR(sc, slot); if (!dev) { DPRINTF(("xhci reset unassigned slot (%d)?\r\n", slot)); } else { dev->dev_slotstate = XHCI_ST_DISABLED; } /* TODO: reset ring buffer pointers */ } static int pci_xhci_insert_event(struct pci_xhci_softc *sc, struct xhci_trb *evtrb, int do_intr) { struct pci_xhci_rtsregs *rts; uint64_t erdp; int erdp_idx; int err; struct xhci_trb *evtrbptr; err = XHCI_TRB_ERROR_SUCCESS; rts = &sc->rtsregs; erdp = rts->intrreg.erdp & ~0xF; erdp_idx = (erdp - rts->erstba_p[rts->er_deq_seg].qwEvrsTablePtr) / sizeof(struct xhci_trb); DPRINTF(("pci_xhci: insert event 0[%lx] 2[%x] 3[%x]\r\n" "\terdp idx %d/seg %d, enq idx %d/seg %d, pcs %u\r\n" "\t(erdp=0x%lx, erst=0x%lx, tblsz=%u, do_intr %d)\r\n", evtrb->qwTrb0, evtrb->dwTrb2, evtrb->dwTrb3, erdp_idx, rts->er_deq_seg, rts->er_enq_idx, rts->er_enq_seg, rts->event_pcs, erdp, rts->erstba_p->qwEvrsTablePtr, rts->erstba_p->dwEvrsTableSize, do_intr)); evtrbptr = &rts->erst_p[rts->er_enq_idx]; /* TODO: multi-segment table */ if (rts->er_events_cnt >= rts->erstba_p->dwEvrsTableSize) { DPRINTF(("pci_xhci[%d] cannot insert event; ring full\r\n", __LINE__)); err = XHCI_TRB_ERROR_EV_RING_FULL; goto done; } if (rts->er_events_cnt == rts->erstba_p->dwEvrsTableSize - 1) { struct xhci_trb errev; if ((evtrbptr->dwTrb3 & 0x1) == (rts->event_pcs & 0x1)) { DPRINTF(("pci_xhci[%d] insert evt err: ring full\r\n", __LINE__)); errev.qwTrb0 = 0; errev.dwTrb2 = XHCI_TRB_2_ERROR_SET( XHCI_TRB_ERROR_EV_RING_FULL); errev.dwTrb3 = XHCI_TRB_3_TYPE_SET( XHCI_TRB_EVENT_HOST_CTRL) | rts->event_pcs; rts->er_events_cnt++; memcpy(&rts->erst_p[rts->er_enq_idx], &errev, sizeof(struct xhci_trb)); rts->er_enq_idx = (rts->er_enq_idx + 1) % rts->erstba_p->dwEvrsTableSize; err = XHCI_TRB_ERROR_EV_RING_FULL; do_intr = 1; goto done; } } else { rts->er_events_cnt++; } evtrb->dwTrb3 &= ~XHCI_TRB_3_CYCLE_BIT; evtrb->dwTrb3 |= rts->event_pcs; memcpy(&rts->erst_p[rts->er_enq_idx], evtrb, sizeof(struct xhci_trb)); rts->er_enq_idx = (rts->er_enq_idx + 1) % rts->erstba_p->dwEvrsTableSize; if 
(rts->er_enq_idx == 0) rts->event_pcs ^= 1; done: if (do_intr) pci_xhci_assert_interrupt(sc); return (err); } static uint32_t pci_xhci_cmd_enable_slot(struct pci_xhci_softc *sc, uint32_t *slot) { struct pci_xhci_dev_emu *dev; uint32_t cmderr; int i; cmderr = XHCI_TRB_ERROR_NO_SLOTS; if (sc->portregs != NULL) for (i = 1; i <= XHCI_MAX_SLOTS; i++) { dev = XHCI_SLOTDEV_PTR(sc, i); if (dev && dev->dev_slotstate == XHCI_ST_DISABLED) { *slot = i; dev->dev_slotstate = XHCI_ST_ENABLED; cmderr = XHCI_TRB_ERROR_SUCCESS; dev->hci.hci_address = i; break; } } DPRINTF(("pci_xhci enable slot (error=%d) slot %u\r\n", cmderr != XHCI_TRB_ERROR_SUCCESS, *slot)); return (cmderr); } static uint32_t pci_xhci_cmd_disable_slot(struct pci_xhci_softc *sc, uint32_t slot) { struct pci_xhci_dev_emu *dev; uint32_t cmderr; DPRINTF(("pci_xhci disable slot %u\r\n", slot)); cmderr = XHCI_TRB_ERROR_NO_SLOTS; if (sc->portregs == NULL) goto done; if (slot > sc->ndevices) { cmderr = XHCI_TRB_ERROR_SLOT_NOT_ON; goto done; } dev = XHCI_SLOTDEV_PTR(sc, slot); if (dev) { if (dev->dev_slotstate == XHCI_ST_DISABLED) { cmderr = XHCI_TRB_ERROR_SLOT_NOT_ON; } else { dev->dev_slotstate = XHCI_ST_DISABLED; cmderr = XHCI_TRB_ERROR_SUCCESS; /* TODO: reset events and endpoints */ } } done: return (cmderr); } static uint32_t pci_xhci_cmd_reset_device(struct pci_xhci_softc *sc, uint32_t slot) { struct pci_xhci_dev_emu *dev; struct xhci_dev_ctx *dev_ctx; struct xhci_endp_ctx *ep_ctx; uint32_t cmderr; int i; cmderr = XHCI_TRB_ERROR_NO_SLOTS; if (sc->portregs == NULL) goto done; DPRINTF(("pci_xhci reset device slot %u\r\n", slot)); dev = XHCI_SLOTDEV_PTR(sc, slot); if (!dev || dev->dev_slotstate == XHCI_ST_DISABLED) cmderr = XHCI_TRB_ERROR_SLOT_NOT_ON; else { dev->dev_slotstate = XHCI_ST_DEFAULT; dev->hci.hci_address = 0; dev_ctx = pci_xhci_get_dev_ctx(sc, slot); /* slot state */ dev_ctx->ctx_slot.dwSctx3 = FIELD_REPLACE( dev_ctx->ctx_slot.dwSctx3, XHCI_ST_SLCTX_DEFAULT, 0x1F, 27); /* number of contexts */ dev_ctx->ctx_slot.dwSctx0 = FIELD_REPLACE( dev_ctx->ctx_slot.dwSctx0, 1, 0x1F, 27); /* reset all eps other than ep-0 */ for (i = 2; i <= 31; i++) { ep_ctx = &dev_ctx->ctx_ep[i]; ep_ctx->dwEpCtx0 = FIELD_REPLACE( ep_ctx->dwEpCtx0, XHCI_ST_EPCTX_DISABLED, 0x7, 0); } cmderr = XHCI_TRB_ERROR_SUCCESS; } pci_xhci_reset_slot(sc, slot); done: return (cmderr); } static uint32_t pci_xhci_cmd_address_device(struct pci_xhci_softc *sc, uint32_t slot, struct xhci_trb *trb) { struct pci_xhci_dev_emu *dev; struct xhci_input_dev_ctx *input_ctx; struct xhci_slot_ctx *islot_ctx; struct xhci_dev_ctx *dev_ctx; struct xhci_endp_ctx *ep0_ctx; uint32_t cmderr; input_ctx = XHCI_GADDR(sc, trb->qwTrb0 & ~0xFUL); islot_ctx = &input_ctx->ctx_slot; ep0_ctx = &input_ctx->ctx_ep[1]; cmderr = XHCI_TRB_ERROR_SUCCESS; DPRINTF(("pci_xhci: address device, input ctl: D 0x%08x A 0x%08x,\r\n" " slot %08x %08x %08x %08x\r\n" " ep0 %08x %08x %016lx %08x\r\n", input_ctx->ctx_input.dwInCtx0, input_ctx->ctx_input.dwInCtx1, islot_ctx->dwSctx0, islot_ctx->dwSctx1, islot_ctx->dwSctx2, islot_ctx->dwSctx3, ep0_ctx->dwEpCtx0, ep0_ctx->dwEpCtx1, ep0_ctx->qwEpCtx2, ep0_ctx->dwEpCtx4)); /* when setting address: drop-ctx=0, add-ctx=slot+ep0 */ if ((input_ctx->ctx_input.dwInCtx0 != 0) || (input_ctx->ctx_input.dwInCtx1 & 0x03) != 0x03) { DPRINTF(("pci_xhci: address device, input ctl invalid\r\n")); cmderr = XHCI_TRB_ERROR_TRB; goto done; } /* assign address to slot */ dev_ctx = pci_xhci_get_dev_ctx(sc, slot); DPRINTF(("pci_xhci: address device, dev ctx\r\n" " slot %08x %08x %08x %08x\r\n", 
dev_ctx->ctx_slot.dwSctx0, dev_ctx->ctx_slot.dwSctx1, dev_ctx->ctx_slot.dwSctx2, dev_ctx->ctx_slot.dwSctx3)); dev = XHCI_SLOTDEV_PTR(sc, slot); assert(dev != NULL); dev->hci.hci_address = slot; dev->dev_ctx = dev_ctx; if (dev->dev_ue->ue_reset == NULL || dev->dev_ue->ue_reset(dev->dev_sc) < 0) { cmderr = XHCI_TRB_ERROR_ENDP_NOT_ON; goto done; } memcpy(&dev_ctx->ctx_slot, islot_ctx, sizeof(struct xhci_slot_ctx)); dev_ctx->ctx_slot.dwSctx3 = XHCI_SCTX_3_SLOT_STATE_SET(XHCI_ST_SLCTX_ADDRESSED) | XHCI_SCTX_3_DEV_ADDR_SET(slot); memcpy(&dev_ctx->ctx_ep[1], ep0_ctx, sizeof(struct xhci_endp_ctx)); ep0_ctx = &dev_ctx->ctx_ep[1]; ep0_ctx->dwEpCtx0 = (ep0_ctx->dwEpCtx0 & ~0x7) | XHCI_EPCTX_0_EPSTATE_SET(XHCI_ST_EPCTX_RUNNING); pci_xhci_init_ep(dev, 1); dev->dev_slotstate = XHCI_ST_ADDRESSED; DPRINTF(("pci_xhci: address device, output ctx\r\n" " slot %08x %08x %08x %08x\r\n" " ep0 %08x %08x %016lx %08x\r\n", dev_ctx->ctx_slot.dwSctx0, dev_ctx->ctx_slot.dwSctx1, dev_ctx->ctx_slot.dwSctx2, dev_ctx->ctx_slot.dwSctx3, ep0_ctx->dwEpCtx0, ep0_ctx->dwEpCtx1, ep0_ctx->qwEpCtx2, ep0_ctx->dwEpCtx4)); done: return (cmderr); } static uint32_t pci_xhci_cmd_config_ep(struct pci_xhci_softc *sc, uint32_t slot, struct xhci_trb *trb) { struct xhci_input_dev_ctx *input_ctx; struct pci_xhci_dev_emu *dev; struct xhci_dev_ctx *dev_ctx; struct xhci_endp_ctx *ep_ctx, *iep_ctx; uint32_t cmderr; int i; cmderr = XHCI_TRB_ERROR_SUCCESS; DPRINTF(("pci_xhci config_ep slot %u\r\n", slot)); dev = XHCI_SLOTDEV_PTR(sc, slot); assert(dev != NULL); if ((trb->dwTrb3 & XHCI_TRB_3_DCEP_BIT) != 0) { DPRINTF(("pci_xhci config_ep - deconfigure ep slot %u\r\n", slot)); if (dev->dev_ue->ue_stop != NULL) dev->dev_ue->ue_stop(dev->dev_sc); dev->dev_slotstate = XHCI_ST_ADDRESSED; dev->hci.hci_address = 0; dev_ctx = pci_xhci_get_dev_ctx(sc, slot); /* number of contexts */ dev_ctx->ctx_slot.dwSctx0 = FIELD_REPLACE( dev_ctx->ctx_slot.dwSctx0, 1, 0x1F, 27); /* slot state */ dev_ctx->ctx_slot.dwSctx3 = FIELD_REPLACE( dev_ctx->ctx_slot.dwSctx3, XHCI_ST_SLCTX_ADDRESSED, 0x1F, 27); /* disable endpoints */ for (i = 2; i < 32; i++) pci_xhci_disable_ep(dev, i); cmderr = XHCI_TRB_ERROR_SUCCESS; goto done; } if (dev->dev_slotstate < XHCI_ST_ADDRESSED) { DPRINTF(("pci_xhci: config_ep slotstate x%x != addressed\r\n", dev->dev_slotstate)); cmderr = XHCI_TRB_ERROR_SLOT_NOT_ON; goto done; } /* In addressed/configured state; * for each drop endpoint ctx flag: * ep->state = DISABLED * for each add endpoint ctx flag: * cp(ep-in, ep-out) * ep->state = RUNNING * for each drop+add endpoint flag: * reset ep resources * cp(ep-in, ep-out) * ep->state = RUNNING * if input->DisabledCtx[2-31] < 30: (at least 1 ep not disabled) * slot->state = configured */ input_ctx = XHCI_GADDR(sc, trb->qwTrb0 & ~0xFUL); dev_ctx = dev->dev_ctx; DPRINTF(("pci_xhci: config_ep inputctx: D:x%08x A:x%08x 7:x%08x\r\n", input_ctx->ctx_input.dwInCtx0, input_ctx->ctx_input.dwInCtx1, input_ctx->ctx_input.dwInCtx7)); for (i = 2; i <= 31; i++) { ep_ctx = &dev_ctx->ctx_ep[i]; if (input_ctx->ctx_input.dwInCtx0 & XHCI_INCTX_0_DROP_MASK(i)) { DPRINTF((" config ep - dropping ep %d\r\n", i)); pci_xhci_disable_ep(dev, i); } if (input_ctx->ctx_input.dwInCtx1 & XHCI_INCTX_1_ADD_MASK(i)) { iep_ctx = &input_ctx->ctx_ep[i]; DPRINTF((" enable ep[%d] %08x %08x %016lx %08x\r\n", i, iep_ctx->dwEpCtx0, iep_ctx->dwEpCtx1, iep_ctx->qwEpCtx2, iep_ctx->dwEpCtx4)); memcpy(ep_ctx, iep_ctx, sizeof(struct xhci_endp_ctx)); pci_xhci_init_ep(dev, i); /* ep state */ ep_ctx->dwEpCtx0 = FIELD_REPLACE( ep_ctx->dwEpCtx0, 
XHCI_ST_EPCTX_RUNNING, 0x7, 0); } } /* slot state to configured */ dev_ctx->ctx_slot.dwSctx3 = FIELD_REPLACE( dev_ctx->ctx_slot.dwSctx3, XHCI_ST_SLCTX_CONFIGURED, 0x1F, 27); dev_ctx->ctx_slot.dwSctx0 = FIELD_COPY( dev_ctx->ctx_slot.dwSctx0, input_ctx->ctx_slot.dwSctx0, 0x1F, 27); dev->dev_slotstate = XHCI_ST_CONFIGURED; DPRINTF(("EP configured; slot %u [0]=0x%08x [1]=0x%08x [2]=0x%08x " "[3]=0x%08x\r\n", slot, dev_ctx->ctx_slot.dwSctx0, dev_ctx->ctx_slot.dwSctx1, dev_ctx->ctx_slot.dwSctx2, dev_ctx->ctx_slot.dwSctx3)); done: return (cmderr); } static uint32_t pci_xhci_cmd_reset_ep(struct pci_xhci_softc *sc, uint32_t slot, struct xhci_trb *trb) { struct pci_xhci_dev_emu *dev; struct pci_xhci_dev_ep *devep; struct xhci_dev_ctx *dev_ctx; struct xhci_endp_ctx *ep_ctx; uint32_t cmderr, epid; uint32_t type; epid = XHCI_TRB_3_EP_GET(trb->dwTrb3); DPRINTF(("pci_xhci: reset ep %u: slot %u\r\n", epid, slot)); cmderr = XHCI_TRB_ERROR_SUCCESS; type = XHCI_TRB_3_TYPE_GET(trb->dwTrb3); dev = XHCI_SLOTDEV_PTR(sc, slot); assert(dev != NULL); if (type == XHCI_TRB_TYPE_STOP_EP && (trb->dwTrb3 & XHCI_TRB_3_SUSP_EP_BIT) != 0) { /* XXX suspend endpoint for 10ms */ } if (epid < 1 || epid > 31) { DPRINTF(("pci_xhci: reset ep: invalid epid %u\r\n", epid)); cmderr = XHCI_TRB_ERROR_TRB; goto done; } devep = &dev->eps[epid]; if (devep->ep_xfer != NULL) USB_DATA_XFER_RESET(devep->ep_xfer); dev_ctx = dev->dev_ctx; assert(dev_ctx != NULL); ep_ctx = &dev_ctx->ctx_ep[epid]; ep_ctx->dwEpCtx0 = (ep_ctx->dwEpCtx0 & ~0x7) | XHCI_ST_EPCTX_STOPPED; if (XHCI_EPCTX_0_MAXP_STREAMS_GET(ep_ctx->dwEpCtx0) == 0) ep_ctx->qwEpCtx2 = devep->ep_ringaddr | devep->ep_ccs; DPRINTF(("pci_xhci: reset ep[%u] %08x %08x %016lx %08x\r\n", epid, ep_ctx->dwEpCtx0, ep_ctx->dwEpCtx1, ep_ctx->qwEpCtx2, ep_ctx->dwEpCtx4)); if (type == XHCI_TRB_TYPE_RESET_EP && (dev->dev_ue->ue_reset == NULL || dev->dev_ue->ue_reset(dev->dev_sc) < 0)) { cmderr = XHCI_TRB_ERROR_ENDP_NOT_ON; goto done; } done: return (cmderr); } static uint32_t pci_xhci_find_stream(struct pci_xhci_softc *sc, struct xhci_endp_ctx *ep, uint32_t streamid, struct xhci_stream_ctx **osctx) { struct xhci_stream_ctx *sctx; uint32_t maxpstreams; maxpstreams = XHCI_EPCTX_0_MAXP_STREAMS_GET(ep->dwEpCtx0); if (maxpstreams == 0) return (XHCI_TRB_ERROR_TRB); if (maxpstreams > XHCI_STREAMS_MAX) return (XHCI_TRB_ERROR_INVALID_SID); if (XHCI_EPCTX_0_LSA_GET(ep->dwEpCtx0) == 0) { DPRINTF(("pci_xhci: find_stream; LSA bit not set\r\n")); return (XHCI_TRB_ERROR_INVALID_SID); } /* only support primary stream */ if (streamid > maxpstreams) return (XHCI_TRB_ERROR_STREAM_TYPE); sctx = XHCI_GADDR(sc, ep->qwEpCtx2 & ~0xFUL) + streamid; if (!XHCI_SCTX_0_SCT_GET(sctx->qwSctx0)) return (XHCI_TRB_ERROR_STREAM_TYPE); *osctx = sctx; return (XHCI_TRB_ERROR_SUCCESS); } static uint32_t pci_xhci_cmd_set_tr(struct pci_xhci_softc *sc, uint32_t slot, struct xhci_trb *trb) { struct pci_xhci_dev_emu *dev; struct pci_xhci_dev_ep *devep; struct xhci_dev_ctx *dev_ctx; struct xhci_endp_ctx *ep_ctx; uint32_t cmderr, epid; uint32_t streamid; cmderr = XHCI_TRB_ERROR_SUCCESS; dev = XHCI_SLOTDEV_PTR(sc, slot); assert(dev != NULL); DPRINTF(("pci_xhci set_tr: new-tr x%016lx, SCT %u DCS %u\r\n" " stream-id %u, slot %u, epid %u, C %u\r\n", (trb->qwTrb0 & ~0xF), (uint32_t)((trb->qwTrb0 >> 1) & 0x7), (uint32_t)(trb->qwTrb0 & 0x1), (trb->dwTrb2 >> 16) & 0xFFFF, XHCI_TRB_3_SLOT_GET(trb->dwTrb3), XHCI_TRB_3_EP_GET(trb->dwTrb3), trb->dwTrb3 & 0x1)); epid = XHCI_TRB_3_EP_GET(trb->dwTrb3); if (epid < 1 || epid > 31) { DPRINTF(("pci_xhci: set_tr_deq: 
invalid epid %u\r\n", epid)); cmderr = XHCI_TRB_ERROR_TRB; goto done; } dev_ctx = dev->dev_ctx; assert(dev_ctx != NULL); ep_ctx = &dev_ctx->ctx_ep[epid]; devep = &dev->eps[epid]; switch (XHCI_EPCTX_0_EPSTATE_GET(ep_ctx->dwEpCtx0)) { case XHCI_ST_EPCTX_STOPPED: case XHCI_ST_EPCTX_ERROR: break; default: DPRINTF(("pci_xhci cmd set_tr invalid state %x\r\n", XHCI_EPCTX_0_EPSTATE_GET(ep_ctx->dwEpCtx0))); cmderr = XHCI_TRB_ERROR_CONTEXT_STATE; goto done; } streamid = XHCI_TRB_2_STREAM_GET(trb->dwTrb2); if (XHCI_EPCTX_0_MAXP_STREAMS_GET(ep_ctx->dwEpCtx0) > 0) { struct xhci_stream_ctx *sctx; sctx = NULL; cmderr = pci_xhci_find_stream(sc, ep_ctx, streamid, &sctx); if (sctx != NULL) { assert(devep->ep_sctx != NULL); devep->ep_sctx[streamid].qwSctx0 = trb->qwTrb0; devep->ep_sctx_trbs[streamid].ringaddr = trb->qwTrb0 & ~0xF; devep->ep_sctx_trbs[streamid].ccs = XHCI_EPCTX_2_DCS_GET(trb->qwTrb0); } } else { if (streamid != 0) { DPRINTF(("pci_xhci cmd set_tr streamid %x != 0\r\n", streamid)); } ep_ctx->qwEpCtx2 = trb->qwTrb0 & ~0xFUL; devep->ep_ringaddr = ep_ctx->qwEpCtx2 & ~0xFUL; devep->ep_ccs = trb->qwTrb0 & 0x1; devep->ep_tr = XHCI_GADDR(sc, devep->ep_ringaddr); DPRINTF(("pci_xhci set_tr first TRB:\r\n")); pci_xhci_dump_trb(devep->ep_tr); } ep_ctx->dwEpCtx0 = (ep_ctx->dwEpCtx0 & ~0x7) | XHCI_ST_EPCTX_STOPPED; done: return (cmderr); } static uint32_t pci_xhci_cmd_eval_ctx(struct pci_xhci_softc *sc, uint32_t slot, struct xhci_trb *trb) { struct xhci_input_dev_ctx *input_ctx; struct xhci_slot_ctx *islot_ctx; struct xhci_dev_ctx *dev_ctx; struct xhci_endp_ctx *ep0_ctx; uint32_t cmderr; input_ctx = XHCI_GADDR(sc, trb->qwTrb0 & ~0xFUL); islot_ctx = &input_ctx->ctx_slot; ep0_ctx = &input_ctx->ctx_ep[1]; cmderr = XHCI_TRB_ERROR_SUCCESS; DPRINTF(("pci_xhci: eval ctx, input ctl: D 0x%08x A 0x%08x,\r\n" " slot %08x %08x %08x %08x\r\n" " ep0 %08x %08x %016lx %08x\r\n", input_ctx->ctx_input.dwInCtx0, input_ctx->ctx_input.dwInCtx1, islot_ctx->dwSctx0, islot_ctx->dwSctx1, islot_ctx->dwSctx2, islot_ctx->dwSctx3, ep0_ctx->dwEpCtx0, ep0_ctx->dwEpCtx1, ep0_ctx->qwEpCtx2, ep0_ctx->dwEpCtx4)); /* this command expects drop-ctx=0 & add-ctx=slot+ep0 */ if ((input_ctx->ctx_input.dwInCtx0 != 0) || (input_ctx->ctx_input.dwInCtx1 & 0x03) == 0) { DPRINTF(("pci_xhci: eval ctx, input ctl invalid\r\n")); cmderr = XHCI_TRB_ERROR_TRB; goto done; } /* assign address to slot; in this emulation, slot_id = address */ dev_ctx = pci_xhci_get_dev_ctx(sc, slot); DPRINTF(("pci_xhci: eval ctx, dev ctx\r\n" " slot %08x %08x %08x %08x\r\n", dev_ctx->ctx_slot.dwSctx0, dev_ctx->ctx_slot.dwSctx1, dev_ctx->ctx_slot.dwSctx2, dev_ctx->ctx_slot.dwSctx3)); if (input_ctx->ctx_input.dwInCtx1 & 0x01) { /* slot ctx */ /* set max exit latency */ dev_ctx->ctx_slot.dwSctx1 = FIELD_COPY( dev_ctx->ctx_slot.dwSctx1, input_ctx->ctx_slot.dwSctx1, 0xFFFF, 0); /* set interrupter target */ dev_ctx->ctx_slot.dwSctx2 = FIELD_COPY( dev_ctx->ctx_slot.dwSctx2, input_ctx->ctx_slot.dwSctx2, 0x3FF, 22); } if (input_ctx->ctx_input.dwInCtx1 & 0x02) { /* control ctx */ /* set max packet size */ dev_ctx->ctx_ep[1].dwEpCtx1 = FIELD_COPY( dev_ctx->ctx_ep[1].dwEpCtx1, ep0_ctx->dwEpCtx1, 0xFFFF, 16); ep0_ctx = &dev_ctx->ctx_ep[1]; } DPRINTF(("pci_xhci: eval ctx, output ctx\r\n" " slot %08x %08x %08x %08x\r\n" " ep0 %08x %08x %016lx %08x\r\n", dev_ctx->ctx_slot.dwSctx0, dev_ctx->ctx_slot.dwSctx1, dev_ctx->ctx_slot.dwSctx2, dev_ctx->ctx_slot.dwSctx3, ep0_ctx->dwEpCtx0, ep0_ctx->dwEpCtx1, ep0_ctx->qwEpCtx2, ep0_ctx->dwEpCtx4)); done: return (cmderr); } static int 
pci_xhci_complete_commands(struct pci_xhci_softc *sc) { struct xhci_trb evtrb; struct xhci_trb *trb; uint64_t crcr; uint32_t ccs; /* cycle state (XHCI 4.9.2) */ uint32_t type; uint32_t slot; uint32_t cmderr; int error; error = 0; sc->opregs.crcr |= XHCI_CRCR_LO_CRR; trb = sc->opregs.cr_p; ccs = sc->opregs.crcr & XHCI_CRCR_LO_RCS; crcr = sc->opregs.crcr & ~0xF; while (1) { sc->opregs.cr_p = trb; type = XHCI_TRB_3_TYPE_GET(trb->dwTrb3); if ((trb->dwTrb3 & XHCI_TRB_3_CYCLE_BIT) != (ccs & XHCI_TRB_3_CYCLE_BIT)) break; DPRINTF(("pci_xhci: cmd type 0x%x, Trb0 x%016lx dwTrb2 x%08x" " dwTrb3 x%08x, TRB_CYCLE %u/ccs %u\r\n", type, trb->qwTrb0, trb->dwTrb2, trb->dwTrb3, trb->dwTrb3 & XHCI_TRB_3_CYCLE_BIT, ccs)); cmderr = XHCI_TRB_ERROR_SUCCESS; evtrb.dwTrb2 = 0; evtrb.dwTrb3 = (ccs & XHCI_TRB_3_CYCLE_BIT) | XHCI_TRB_3_TYPE_SET(XHCI_TRB_EVENT_CMD_COMPLETE); slot = 0; switch (type) { case XHCI_TRB_TYPE_LINK: /* 0x06 */ if (trb->dwTrb3 & XHCI_TRB_3_TC_BIT) ccs ^= XHCI_CRCR_LO_RCS; break; case XHCI_TRB_TYPE_ENABLE_SLOT: /* 0x09 */ cmderr = pci_xhci_cmd_enable_slot(sc, &slot); break; case XHCI_TRB_TYPE_DISABLE_SLOT: /* 0x0A */ slot = XHCI_TRB_3_SLOT_GET(trb->dwTrb3); cmderr = pci_xhci_cmd_disable_slot(sc, slot); break; case XHCI_TRB_TYPE_ADDRESS_DEVICE: /* 0x0B */ slot = XHCI_TRB_3_SLOT_GET(trb->dwTrb3); cmderr = pci_xhci_cmd_address_device(sc, slot, trb); break; case XHCI_TRB_TYPE_CONFIGURE_EP: /* 0x0C */ slot = XHCI_TRB_3_SLOT_GET(trb->dwTrb3); cmderr = pci_xhci_cmd_config_ep(sc, slot, trb); break; case XHCI_TRB_TYPE_EVALUATE_CTX: /* 0x0D */ slot = XHCI_TRB_3_SLOT_GET(trb->dwTrb3); cmderr = pci_xhci_cmd_eval_ctx(sc, slot, trb); break; case XHCI_TRB_TYPE_RESET_EP: /* 0x0E */ DPRINTF(("Reset Endpoint on slot %d\r\n", slot)); slot = XHCI_TRB_3_SLOT_GET(trb->dwTrb3); cmderr = pci_xhci_cmd_reset_ep(sc, slot, trb); break; case XHCI_TRB_TYPE_STOP_EP: /* 0x0F */ DPRINTF(("Stop Endpoint on slot %d\r\n", slot)); slot = XHCI_TRB_3_SLOT_GET(trb->dwTrb3); cmderr = pci_xhci_cmd_reset_ep(sc, slot, trb); break; case XHCI_TRB_TYPE_SET_TR_DEQUEUE: /* 0x10 */ slot = XHCI_TRB_3_SLOT_GET(trb->dwTrb3); cmderr = pci_xhci_cmd_set_tr(sc, slot, trb); break; case XHCI_TRB_TYPE_RESET_DEVICE: /* 0x11 */ slot = XHCI_TRB_3_SLOT_GET(trb->dwTrb3); cmderr = pci_xhci_cmd_reset_device(sc, slot); break; case XHCI_TRB_TYPE_FORCE_EVENT: /* 0x12 */ /* TODO: */ break; case XHCI_TRB_TYPE_NEGOTIATE_BW: /* 0x13 */ break; case XHCI_TRB_TYPE_SET_LATENCY_TOL: /* 0x14 */ break; case XHCI_TRB_TYPE_GET_PORT_BW: /* 0x15 */ break; case XHCI_TRB_TYPE_FORCE_HEADER: /* 0x16 */ break; case XHCI_TRB_TYPE_NOOP_CMD: /* 0x17 */ break; default: DPRINTF(("pci_xhci: unsupported cmd %x\r\n", type)); break; } if (type != XHCI_TRB_TYPE_LINK) { /* * insert command completion event and assert intr */ evtrb.qwTrb0 = crcr; evtrb.dwTrb2 |= XHCI_TRB_2_ERROR_SET(cmderr); evtrb.dwTrb3 |= XHCI_TRB_3_SLOT_SET(slot); DPRINTF(("pci_xhci: command 0x%x result: 0x%x\r\n", type, cmderr)); pci_xhci_insert_event(sc, &evtrb, 1); } trb = pci_xhci_trb_next(sc, trb, &crcr); } sc->opregs.crcr = crcr | (sc->opregs.crcr & XHCI_CRCR_LO_CA) | ccs; sc->opregs.crcr &= ~XHCI_CRCR_LO_CRR; return (error); } static void pci_xhci_dump_trb(struct xhci_trb *trb) { static const char *trbtypes[] = { "RESERVED", "NORMAL", "SETUP_STAGE", "DATA_STAGE", "STATUS_STAGE", "ISOCH", "LINK", "EVENT_DATA", "NOOP", "ENABLE_SLOT", "DISABLE_SLOT", "ADDRESS_DEVICE", "CONFIGURE_EP", "EVALUATE_CTX", "RESET_EP", "STOP_EP", "SET_TR_DEQUEUE", "RESET_DEVICE", "FORCE_EVENT", "NEGOTIATE_BW", "SET_LATENCY_TOL", "GET_PORT_BW", 
"FORCE_HEADER", "NOOP_CMD" }; uint32_t type; type = XHCI_TRB_3_TYPE_GET(trb->dwTrb3); DPRINTF(("pci_xhci: trb[@%p] type x%02x %s 0:x%016lx 2:x%08x 3:x%08x\r\n", trb, type, type <= XHCI_TRB_TYPE_NOOP_CMD ? trbtypes[type] : "INVALID", trb->qwTrb0, trb->dwTrb2, trb->dwTrb3)); } static int pci_xhci_xfer_complete(struct pci_xhci_softc *sc, struct usb_data_xfer *xfer, uint32_t slot, uint32_t epid, int *do_intr) { struct pci_xhci_dev_emu *dev; struct pci_xhci_dev_ep *devep; struct xhci_dev_ctx *dev_ctx; struct xhci_endp_ctx *ep_ctx; struct xhci_trb *trb; struct xhci_trb evtrb; uint32_t trbflags; uint32_t edtla; int i, err; dev = XHCI_SLOTDEV_PTR(sc, slot); devep = &dev->eps[epid]; dev_ctx = pci_xhci_get_dev_ctx(sc, slot); assert(dev_ctx != NULL); ep_ctx = &dev_ctx->ctx_ep[epid]; err = XHCI_TRB_ERROR_SUCCESS; *do_intr = 0; edtla = 0; /* go through list of TRBs and insert event(s) */ for (i = xfer->head; xfer->ndata > 0; ) { evtrb.qwTrb0 = (uint64_t)xfer->data[i].hci_data; trb = XHCI_GADDR(sc, evtrb.qwTrb0); trbflags = trb->dwTrb3; DPRINTF(("pci_xhci: xfer[%d] done?%u:%d trb %x %016lx %x " "(err %d) IOC?%d\r\n", i, xfer->data[i].processed, xfer->data[i].blen, XHCI_TRB_3_TYPE_GET(trbflags), evtrb.qwTrb0, trbflags, err, trb->dwTrb3 & XHCI_TRB_3_IOC_BIT ? 1 : 0)); if (!xfer->data[i].processed) { xfer->head = i; break; } xfer->ndata--; edtla += xfer->data[i].bdone; trb->dwTrb3 = (trb->dwTrb3 & ~0x1) | (xfer->data[i].ccs); pci_xhci_update_ep_ring(sc, dev, devep, ep_ctx, xfer->data[i].streamid, xfer->data[i].trbnext, xfer->data[i].ccs); /* Only interrupt if IOC or short packet */ if (!(trb->dwTrb3 & XHCI_TRB_3_IOC_BIT) && !((err == XHCI_TRB_ERROR_SHORT_PKT) && (trb->dwTrb3 & XHCI_TRB_3_ISP_BIT))) { i = (i + 1) % USB_MAX_XFER_BLOCKS; continue; } evtrb.dwTrb2 = XHCI_TRB_2_ERROR_SET(err) | XHCI_TRB_2_REM_SET(xfer->data[i].blen); evtrb.dwTrb3 = XHCI_TRB_3_TYPE_SET(XHCI_TRB_EVENT_TRANSFER) | XHCI_TRB_3_SLOT_SET(slot) | XHCI_TRB_3_EP_SET(epid); if (XHCI_TRB_3_TYPE_GET(trbflags) == XHCI_TRB_TYPE_EVENT_DATA) { DPRINTF(("pci_xhci EVENT_DATA edtla %u\r\n", edtla)); evtrb.qwTrb0 = trb->qwTrb0; evtrb.dwTrb2 = (edtla & 0xFFFFF) | XHCI_TRB_2_ERROR_SET(err); evtrb.dwTrb3 |= XHCI_TRB_3_ED_BIT; edtla = 0; } *do_intr = 1; err = pci_xhci_insert_event(sc, &evtrb, 0); if (err != XHCI_TRB_ERROR_SUCCESS) { break; } i = (i + 1) % USB_MAX_XFER_BLOCKS; } return (err); } static void pci_xhci_update_ep_ring(struct pci_xhci_softc *sc, struct pci_xhci_dev_emu *dev, struct pci_xhci_dev_ep *devep, struct xhci_endp_ctx *ep_ctx, uint32_t streamid, uint64_t ringaddr, int ccs) { if (XHCI_EPCTX_0_MAXP_STREAMS_GET(ep_ctx->dwEpCtx0) != 0) { devep->ep_sctx[streamid].qwSctx0 = (ringaddr & ~0xFUL) | (ccs & 0x1); devep->ep_sctx_trbs[streamid].ringaddr = ringaddr & ~0xFUL; devep->ep_sctx_trbs[streamid].ccs = ccs & 0x1; ep_ctx->qwEpCtx2 = (ep_ctx->qwEpCtx2 & ~0x1) | (ccs & 0x1); DPRINTF(("xhci update ep-ring stream %d, addr %lx\r\n", streamid, devep->ep_sctx[streamid].qwSctx0)); } else { devep->ep_ringaddr = ringaddr & ~0xFUL; devep->ep_ccs = ccs & 0x1; devep->ep_tr = XHCI_GADDR(sc, ringaddr & ~0xFUL); ep_ctx->qwEpCtx2 = (ringaddr & ~0xFUL) | (ccs & 0x1); DPRINTF(("xhci update ep-ring, addr %lx\r\n", (devep->ep_ringaddr | devep->ep_ccs))); } } /* * Outstanding transfer still in progress (device NAK'd earlier) so retry * the transfer again to see if it succeeds. 
*/ static int pci_xhci_try_usb_xfer(struct pci_xhci_softc *sc, struct pci_xhci_dev_emu *dev, struct pci_xhci_dev_ep *devep, struct xhci_endp_ctx *ep_ctx, uint32_t slot, uint32_t epid) { struct usb_data_xfer *xfer; int err; int do_intr; ep_ctx->dwEpCtx0 = FIELD_REPLACE( ep_ctx->dwEpCtx0, XHCI_ST_EPCTX_RUNNING, 0x7, 0); err = 0; do_intr = 0; xfer = devep->ep_xfer; USB_DATA_XFER_LOCK(xfer); /* outstanding requests queued up */ if (dev->dev_ue->ue_data != NULL) { err = dev->dev_ue->ue_data(dev->dev_sc, xfer, epid & 0x1 ? USB_XFER_IN : USB_XFER_OUT, epid/2); if (err == USB_ERR_CANCELLED) { if (USB_DATA_GET_ERRCODE(&xfer->data[xfer->head]) == USB_NAK) err = XHCI_TRB_ERROR_SUCCESS; } else { err = pci_xhci_xfer_complete(sc, xfer, slot, epid, &do_intr); if (err == XHCI_TRB_ERROR_SUCCESS && do_intr) { pci_xhci_assert_interrupt(sc); } /* XXX should not do it if error? */ USB_DATA_XFER_RESET(xfer); } } USB_DATA_XFER_UNLOCK(xfer); return (err); } static int pci_xhci_handle_transfer(struct pci_xhci_softc *sc, struct pci_xhci_dev_emu *dev, struct pci_xhci_dev_ep *devep, struct xhci_endp_ctx *ep_ctx, struct xhci_trb *trb, uint32_t slot, uint32_t epid, uint64_t addr, uint32_t ccs, uint32_t streamid) { struct xhci_trb *setup_trb; struct usb_data_xfer *xfer; struct usb_data_xfer_block *xfer_block; uint64_t val; uint32_t trbflags; int do_intr, err; int do_retry; ep_ctx->dwEpCtx0 = FIELD_REPLACE(ep_ctx->dwEpCtx0, XHCI_ST_EPCTX_RUNNING, 0x7, 0); xfer = devep->ep_xfer; USB_DATA_XFER_LOCK(xfer); DPRINTF(("pci_xhci handle_transfer slot %u\r\n", slot)); retry: err = 0; do_retry = 0; do_intr = 0; setup_trb = NULL; while (1) { pci_xhci_dump_trb(trb); trbflags = trb->dwTrb3; if (XHCI_TRB_3_TYPE_GET(trbflags) != XHCI_TRB_TYPE_LINK && (trbflags & XHCI_TRB_3_CYCLE_BIT) != (ccs & XHCI_TRB_3_CYCLE_BIT)) { DPRINTF(("Cycle-bit changed trbflags %x, ccs %x\r\n", trbflags & XHCI_TRB_3_CYCLE_BIT, ccs)); break; } xfer_block = NULL; switch (XHCI_TRB_3_TYPE_GET(trbflags)) { case XHCI_TRB_TYPE_LINK: if (trb->dwTrb3 & XHCI_TRB_3_TC_BIT) ccs ^= 0x1; xfer_block = usb_data_xfer_append(xfer, NULL, 0, (void *)addr, ccs); xfer_block->processed = 1; break; case XHCI_TRB_TYPE_SETUP_STAGE: if ((trbflags & XHCI_TRB_3_IDT_BIT) == 0 || XHCI_TRB_2_BYTES_GET(trb->dwTrb2) != 8) { DPRINTF(("pci_xhci: invalid setup trb\r\n")); err = XHCI_TRB_ERROR_TRB; goto errout; } setup_trb = trb; val = trb->qwTrb0; if (!xfer->ureq) xfer->ureq = malloc( sizeof(struct usb_device_request)); memcpy(xfer->ureq, &val, sizeof(struct usb_device_request)); xfer_block = usb_data_xfer_append(xfer, NULL, 0, (void *)addr, ccs); xfer_block->processed = 1; break; case XHCI_TRB_TYPE_NORMAL: case XHCI_TRB_TYPE_ISOCH: if (setup_trb != NULL) { DPRINTF(("pci_xhci: trb not supposed to be in " "ctl scope\r\n")); err = XHCI_TRB_ERROR_TRB; goto errout; } /* fall through */ case XHCI_TRB_TYPE_DATA_STAGE: xfer_block = usb_data_xfer_append(xfer, (void *)(trbflags & XHCI_TRB_3_IDT_BIT ? 
&trb->qwTrb0 : XHCI_GADDR(sc, trb->qwTrb0)), trb->dwTrb2 & 0x1FFFF, (void *)addr, ccs); break; case XHCI_TRB_TYPE_STATUS_STAGE: xfer_block = usb_data_xfer_append(xfer, NULL, 0, (void *)addr, ccs); break; case XHCI_TRB_TYPE_NOOP: xfer_block = usb_data_xfer_append(xfer, NULL, 0, (void *)addr, ccs); xfer_block->processed = 1; break; case XHCI_TRB_TYPE_EVENT_DATA: xfer_block = usb_data_xfer_append(xfer, NULL, 0, (void *)addr, ccs); if ((epid > 1) && (trbflags & XHCI_TRB_3_IOC_BIT)) { xfer_block->processed = 1; } break; default: DPRINTF(("pci_xhci: handle xfer unexpected trb type " "0x%x\r\n", XHCI_TRB_3_TYPE_GET(trbflags))); err = XHCI_TRB_ERROR_TRB; goto errout; } trb = pci_xhci_trb_next(sc, trb, &addr); DPRINTF(("pci_xhci: next trb: 0x%lx\r\n", (uint64_t)trb)); if (xfer_block) { xfer_block->trbnext = addr; xfer_block->streamid = streamid; } if (!setup_trb && !(trbflags & XHCI_TRB_3_CHAIN_BIT) && XHCI_TRB_3_TYPE_GET(trbflags) != XHCI_TRB_TYPE_LINK) { break; } /* handle current batch that requires interrupt on complete */ if (trbflags & XHCI_TRB_3_IOC_BIT) { DPRINTF(("pci_xhci: trb IOC bit set\r\n")); if (epid == 1) do_retry = 1; break; } } DPRINTF(("pci_xhci[%d]: xfer->ndata %u\r\n", __LINE__, xfer->ndata)); if (epid == 1) { err = USB_ERR_NOT_STARTED; if (dev->dev_ue->ue_request != NULL) err = dev->dev_ue->ue_request(dev->dev_sc, xfer); setup_trb = NULL; } else { /* handle data transfer */ pci_xhci_try_usb_xfer(sc, dev, devep, ep_ctx, slot, epid); err = XHCI_TRB_ERROR_SUCCESS; goto errout; } err = USB_TO_XHCI_ERR(err); if ((err == XHCI_TRB_ERROR_SUCCESS) || (err == XHCI_TRB_ERROR_SHORT_PKT)) { err = pci_xhci_xfer_complete(sc, xfer, slot, epid, &do_intr); if (err != XHCI_TRB_ERROR_SUCCESS) do_retry = 0; } errout: if (err == XHCI_TRB_ERROR_EV_RING_FULL) DPRINTF(("pci_xhci[%d]: event ring full\r\n", __LINE__)); if (!do_retry) USB_DATA_XFER_UNLOCK(xfer); if (do_intr) pci_xhci_assert_interrupt(sc); if (do_retry) { USB_DATA_XFER_RESET(xfer); DPRINTF(("pci_xhci[%d]: retry:continuing with next TRBs\r\n", __LINE__)); goto retry; } if (epid == 1) USB_DATA_XFER_RESET(xfer); return (err); } static void pci_xhci_device_doorbell(struct pci_xhci_softc *sc, uint32_t slot, uint32_t epid, uint32_t streamid) { struct pci_xhci_dev_emu *dev; struct pci_xhci_dev_ep *devep; struct xhci_dev_ctx *dev_ctx; struct xhci_endp_ctx *ep_ctx; struct pci_xhci_trb_ring *sctx_tr; struct xhci_trb *trb; uint64_t ringaddr; uint32_t ccs; DPRINTF(("pci_xhci doorbell slot %u epid %u stream %u\r\n", slot, epid, streamid)); if (slot == 0 || slot > sc->ndevices) { DPRINTF(("pci_xhci: invalid doorbell slot %u\r\n", slot)); return; } dev = XHCI_SLOTDEV_PTR(sc, slot); devep = &dev->eps[epid]; dev_ctx = pci_xhci_get_dev_ctx(sc, slot); if (!dev_ctx) { return; } ep_ctx = &dev_ctx->ctx_ep[epid]; sctx_tr = NULL; DPRINTF(("pci_xhci: device doorbell ep[%u] %08x %08x %016lx %08x\r\n", epid, ep_ctx->dwEpCtx0, ep_ctx->dwEpCtx1, ep_ctx->qwEpCtx2, ep_ctx->dwEpCtx4)); if (ep_ctx->qwEpCtx2 == 0) return; /* handle pending transfers */ if (devep->ep_xfer->ndata > 0) { pci_xhci_try_usb_xfer(sc, dev, devep, ep_ctx, slot, epid); return; } /* get next trb work item */ if (XHCI_EPCTX_0_MAXP_STREAMS_GET(ep_ctx->dwEpCtx0) != 0) { sctx_tr = &devep->ep_sctx_trbs[streamid]; ringaddr = sctx_tr->ringaddr; ccs = sctx_tr->ccs; trb = XHCI_GADDR(sc, sctx_tr->ringaddr & ~0xFUL); DPRINTF(("doorbell, stream %u, ccs %lx, trb ccs %x\r\n", streamid, ep_ctx->qwEpCtx2 & XHCI_TRB_3_CYCLE_BIT, trb->dwTrb3 & XHCI_TRB_3_CYCLE_BIT)); } else { ringaddr = devep->ep_ringaddr; ccs = 
devep->ep_ccs; trb = devep->ep_tr; DPRINTF(("doorbell, ccs %lx, trb ccs %x\r\n", ep_ctx->qwEpCtx2 & XHCI_TRB_3_CYCLE_BIT, trb->dwTrb3 & XHCI_TRB_3_CYCLE_BIT)); } if (XHCI_TRB_3_TYPE_GET(trb->dwTrb3) == 0) { DPRINTF(("pci_xhci: ring %lx trb[%lx] EP %u is RESERVED?\r\n", ep_ctx->qwEpCtx2, devep->ep_ringaddr, epid)); return; } pci_xhci_handle_transfer(sc, dev, devep, ep_ctx, trb, slot, epid, ringaddr, ccs, streamid); } static void pci_xhci_dbregs_write(struct pci_xhci_softc *sc, uint64_t offset, uint64_t value) { offset = (offset - sc->dboff) / sizeof(uint32_t); DPRINTF(("pci_xhci: doorbell write offset 0x%lx: 0x%lx\r\n", offset, value)); if (XHCI_HALTED(sc)) { DPRINTF(("pci_xhci: controller halted\r\n")); return; } if (offset == 0) pci_xhci_complete_commands(sc); else if (sc->portregs != NULL) pci_xhci_device_doorbell(sc, offset, XHCI_DB_TARGET_GET(value), XHCI_DB_SID_GET(value)); } static void pci_xhci_rtsregs_write(struct pci_xhci_softc *sc, uint64_t offset, uint64_t value) { struct pci_xhci_rtsregs *rts; offset -= sc->rtsoff; if (offset == 0) { DPRINTF(("pci_xhci attempted write to MFINDEX\r\n")); return; } DPRINTF(("pci_xhci: runtime regs write offset 0x%lx: 0x%lx\r\n", offset, value)); offset -= 0x20; /* start of intrreg */ rts = &sc->rtsregs; switch (offset) { case 0x00: if (value & XHCI_IMAN_INTR_PEND) rts->intrreg.iman &= ~XHCI_IMAN_INTR_PEND; rts->intrreg.iman = (value & XHCI_IMAN_INTR_ENA) | (rts->intrreg.iman & XHCI_IMAN_INTR_PEND); if (!(value & XHCI_IMAN_INTR_ENA)) pci_xhci_deassert_interrupt(sc); break; case 0x04: rts->intrreg.imod = value; break; case 0x08: rts->intrreg.erstsz = value & 0xFFFF; break; case 0x10: /* ERSTBA low bits */ rts->intrreg.erstba = MASK_64_HI(sc->rtsregs.intrreg.erstba) | (value & ~0x3F); break; case 0x14: /* ERSTBA high bits */ rts->intrreg.erstba = (value << 32) | MASK_64_LO(sc->rtsregs.intrreg.erstba); rts->erstba_p = XHCI_GADDR(sc, sc->rtsregs.intrreg.erstba & ~0x3FUL); rts->erst_p = XHCI_GADDR(sc, sc->rtsregs.erstba_p->qwEvrsTablePtr & ~0x3FUL); rts->er_enq_idx = 0; rts->er_events_cnt = 0; DPRINTF(("pci_xhci: wr erstba erst (%p) ptr 0x%lx, sz %u\r\n", rts->erstba_p, rts->erstba_p->qwEvrsTablePtr, rts->erstba_p->dwEvrsTableSize)); break; case 0x18: /* ERDP low bits */ rts->intrreg.erdp = MASK_64_HI(sc->rtsregs.intrreg.erdp) | (rts->intrreg.erdp & XHCI_ERDP_LO_BUSY) | (value & ~0xF); if (value & XHCI_ERDP_LO_BUSY) { rts->intrreg.erdp &= ~XHCI_ERDP_LO_BUSY; rts->intrreg.iman &= ~XHCI_IMAN_INTR_PEND; } rts->er_deq_seg = XHCI_ERDP_LO_SINDEX(value); break; case 0x1C: /* ERDP high bits */ rts->intrreg.erdp = (value << 32) | MASK_64_LO(sc->rtsregs.intrreg.erdp); if (rts->er_events_cnt > 0) { uint64_t erdp; uint32_t erdp_i; erdp = rts->intrreg.erdp & ~0xF; erdp_i = (erdp - rts->erstba_p->qwEvrsTablePtr) / sizeof(struct xhci_trb); if (erdp_i <= rts->er_enq_idx) rts->er_events_cnt = rts->er_enq_idx - erdp_i; else rts->er_events_cnt = rts->erstba_p->dwEvrsTableSize - (erdp_i - rts->er_enq_idx); DPRINTF(("pci_xhci: erdp 0x%lx, events cnt %u\r\n", erdp, rts->er_events_cnt)); } break; default: DPRINTF(("pci_xhci attempted write to RTS offset 0x%lx\r\n", offset)); break; } } static uint64_t pci_xhci_portregs_read(struct pci_xhci_softc *sc, uint64_t offset) { int port; uint32_t *p; if (sc->portregs == NULL) return (0); port = (offset - 0x3F0) / 0x10; if (port > XHCI_MAX_DEVS) { DPRINTF(("pci_xhci: portregs_read port %d >= XHCI_MAX_DEVS\r\n", port)); /* return default value for unused port */ return (XHCI_PS_SPEED_SET(3)); } offset = (offset - 0x3F0) % 0x10; p = 
&sc->portregs[port].portsc; p += offset / sizeof(uint32_t); DPRINTF(("pci_xhci: portregs read offset 0x%lx port %u -> 0x%x\r\n", offset, port, *p)); return (*p); } static void pci_xhci_hostop_write(struct pci_xhci_softc *sc, uint64_t offset, uint64_t value) { offset -= XHCI_CAPLEN; if (offset < 0x400) DPRINTF(("pci_xhci: hostop write offset 0x%lx: 0x%lx\r\n", offset, value)); switch (offset) { case XHCI_USBCMD: sc->opregs.usbcmd = pci_xhci_usbcmd_write(sc, value & 0x3F0F); break; case XHCI_USBSTS: /* clear bits on write */ sc->opregs.usbsts &= ~(value & (XHCI_STS_HSE|XHCI_STS_EINT|XHCI_STS_PCD|XHCI_STS_SSS| XHCI_STS_RSS|XHCI_STS_SRE|XHCI_STS_CNR)); break; case XHCI_PAGESIZE: /* read only */ break; case XHCI_DNCTRL: sc->opregs.dnctrl = value & 0xFFFF; break; case XHCI_CRCR_LO: if (sc->opregs.crcr & XHCI_CRCR_LO_CRR) { sc->opregs.crcr &= ~(XHCI_CRCR_LO_CS|XHCI_CRCR_LO_CA); sc->opregs.crcr |= value & (XHCI_CRCR_LO_CS|XHCI_CRCR_LO_CA); } else { sc->opregs.crcr = MASK_64_HI(sc->opregs.crcr) | (value & (0xFFFFFFC0 | XHCI_CRCR_LO_RCS)); } break; case XHCI_CRCR_HI: if (!(sc->opregs.crcr & XHCI_CRCR_LO_CRR)) { sc->opregs.crcr = MASK_64_LO(sc->opregs.crcr) | (value << 32); sc->opregs.cr_p = XHCI_GADDR(sc, sc->opregs.crcr & ~0xF); } if (sc->opregs.crcr & XHCI_CRCR_LO_CS) { /* Stop operation of Command Ring */ } if (sc->opregs.crcr & XHCI_CRCR_LO_CA) { /* Abort command */ } break; case XHCI_DCBAAP_LO: sc->opregs.dcbaap = MASK_64_HI(sc->opregs.dcbaap) | (value & 0xFFFFFFC0); break; case XHCI_DCBAAP_HI: sc->opregs.dcbaap = MASK_64_LO(sc->opregs.dcbaap) | (value << 32); sc->opregs.dcbaa_p = XHCI_GADDR(sc, sc->opregs.dcbaap & ~0x3FUL); DPRINTF(("pci_xhci: opregs dcbaap = 0x%lx (vaddr 0x%lx)\r\n", sc->opregs.dcbaap, (uint64_t)sc->opregs.dcbaa_p)); break; case XHCI_CONFIG: sc->opregs.config = value & 0x03FF; break; default: if (offset >= 0x400) pci_xhci_portregs_write(sc, offset, value); break; } } static void pci_xhci_write(struct vmctx *ctx, int vcpu, struct pci_devinst *pi, int baridx, uint64_t offset, int size, uint64_t value) { struct pci_xhci_softc *sc; sc = pi->pi_arg; assert(baridx == 0); pthread_mutex_lock(&sc->mtx); if (offset < XHCI_CAPLEN) /* read only registers */ WPRINTF(("pci_xhci: write RO-CAPs offset %ld\r\n", offset)); else if (offset < sc->dboff) pci_xhci_hostop_write(sc, offset, value); else if (offset < sc->rtsoff) pci_xhci_dbregs_write(sc, offset, value); else if (offset < sc->regsend) pci_xhci_rtsregs_write(sc, offset, value); else WPRINTF(("pci_xhci: write invalid offset %ld\r\n", offset)); pthread_mutex_unlock(&sc->mtx); } static uint64_t pci_xhci_hostcap_read(struct pci_xhci_softc *sc, uint64_t offset) { uint64_t value; switch (offset) { case XHCI_CAPLENGTH: /* 0x00 */ value = sc->caplength; break; case XHCI_HCSPARAMS1: /* 0x04 */ value = sc->hcsparams1; break; case XHCI_HCSPARAMS2: /* 0x08 */ value = sc->hcsparams2; break; case XHCI_HCSPARAMS3: /* 0x0C */ value = sc->hcsparams3; break; case XHCI_HCSPARAMS0: /* 0x10 */ value = sc->hccparams1; break; case XHCI_DBOFF: /* 0x14 */ value = sc->dboff; break; case XHCI_RTSOFF: /* 0x18 */ value = sc->rtsoff; break; case XHCI_HCCPRAMS2: /* 0x1C */ value = sc->hccparams2; break; default: value = 0; break; } DPRINTF(("pci_xhci: hostcap read offset 0x%lx -> 0x%lx\r\n", offset, value)); return (value); } static uint64_t pci_xhci_hostop_read(struct pci_xhci_softc *sc, uint64_t offset) { uint64_t value; offset = (offset - XHCI_CAPLEN); switch (offset) { case XHCI_USBCMD: /* 0x00 */ value = sc->opregs.usbcmd; break; case XHCI_USBSTS: /* 0x04 */ 
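/* USBSTS reads return the stored status verbatim; the write handler in pci_xhci_hostop_write() above treats these bits as write-1-to-clear, so the guest acknowledges pending status by writing back the value it read here. */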
value = sc->opregs.usbsts; break; case XHCI_PAGESIZE: /* 0x08 */ value = sc->opregs.pgsz; break; case XHCI_DNCTRL: /* 0x14 */ value = sc->opregs.dnctrl; break; case XHCI_CRCR_LO: /* 0x18 */ value = sc->opregs.crcr & XHCI_CRCR_LO_CRR; break; case XHCI_CRCR_HI: /* 0x1C */ value = 0; break; case XHCI_DCBAAP_LO: /* 0x30 */ value = sc->opregs.dcbaap & 0xFFFFFFFF; break; case XHCI_DCBAAP_HI: /* 0x34 */ value = (sc->opregs.dcbaap >> 32) & 0xFFFFFFFF; break; case XHCI_CONFIG: /* 0x38 */ value = sc->opregs.config; break; default: if (offset >= 0x400) value = pci_xhci_portregs_read(sc, offset); else value = 0; break; } if (offset < 0x400) DPRINTF(("pci_xhci: hostop read offset 0x%lx -> 0x%lx\r\n", offset, value)); return (value); } static uint64_t pci_xhci_dbregs_read(struct pci_xhci_softc *sc, uint64_t offset) { /* read doorbell always returns 0 */ return (0); } static uint64_t pci_xhci_rtsregs_read(struct pci_xhci_softc *sc, uint64_t offset) { uint32_t value; offset -= sc->rtsoff; value = 0; if (offset == XHCI_MFINDEX) { value = sc->rtsregs.mfindex; } else if (offset >= 0x20) { int item; uint32_t *p; offset -= 0x20; item = offset % 32; assert(offset < sizeof(sc->rtsregs.intrreg)); p = &sc->rtsregs.intrreg.iman; p += item / sizeof(uint32_t); value = *p; } DPRINTF(("pci_xhci: rtsregs read offset 0x%lx -> 0x%x\r\n", offset, value)); return (value); } static uint64_t pci_xhci_xecp_read(struct pci_xhci_softc *sc, uint64_t offset) { uint32_t value; offset -= sc->regsend; value = 0; switch (offset) { case 0: /* rev major | rev minor | next-cap | cap-id */ value = (0x02 << 24) | (4 << 8) | XHCI_ID_PROTOCOLS; break; case 4: /* name string = "USB" */ value = 0x20425355; break; case 8: /* psic | proto-defined | compat # | compat offset */ value = ((XHCI_MAX_DEVS/2) << 8) | sc->usb2_port_start; break; case 12: break; case 16: /* rev major | rev minor | next-cap | cap-id */ value = (0x03 << 24) | XHCI_ID_PROTOCOLS; break; case 20: /* name string = "USB" */ value = 0x20425355; break; case 24: /* psic | proto-defined | compat # | compat offset */ value = ((XHCI_MAX_DEVS/2) << 8) | sc->usb3_port_start; break; case 28: break; default: DPRINTF(("pci_xhci: xecp invalid offset 0x%lx\r\n", offset)); break; } DPRINTF(("pci_xhci: xecp read offset 0x%lx -> 0x%x\r\n", offset, value)); return (value); } static uint64_t pci_xhci_read(struct vmctx *ctx, int vcpu, struct pci_devinst *pi, int baridx, uint64_t offset, int size) { struct pci_xhci_softc *sc; uint32_t value; sc = pi->pi_arg; assert(baridx == 0); pthread_mutex_lock(&sc->mtx); if (offset < XHCI_CAPLEN) value = pci_xhci_hostcap_read(sc, offset); else if (offset < sc->dboff) value = pci_xhci_hostop_read(sc, offset); else if (offset < sc->rtsoff) value = pci_xhci_dbregs_read(sc, offset); else if (offset < sc->regsend) value = pci_xhci_rtsregs_read(sc, offset); else if (offset < (sc->regsend + 4*32)) value = pci_xhci_xecp_read(sc, offset); else { value = 0; WPRINTF(("pci_xhci: read invalid offset %ld\r\n", offset)); } pthread_mutex_unlock(&sc->mtx); switch (size) { case 1: value &= 0xFF; break; case 2: value &= 0xFFFF; break; case 4: value &= 0xFFFFFFFF; break; } return (value); } static void pci_xhci_reset_port(struct pci_xhci_softc *sc, int portn, int warm) { struct pci_xhci_portregs *port; struct pci_xhci_dev_emu *dev; struct xhci_trb evtrb; int error; assert(portn <= XHCI_MAX_DEVS); DPRINTF(("xhci reset port %d\r\n", portn)); port = XHCI_PORTREG_PTR(sc, portn); dev = XHCI_DEVINST_PTR(sc, portn); if (dev) { port->portsc &= ~(XHCI_PS_PLS_MASK | XHCI_PS_PR | 
XHCI_PS_PRC); port->portsc |= XHCI_PS_PED | XHCI_PS_SPEED_SET(dev->dev_ue->ue_usbspeed); if (warm && dev->dev_ue->ue_usbver == 3) { port->portsc |= XHCI_PS_WRC; } if ((port->portsc & XHCI_PS_PRC) == 0) { port->portsc |= XHCI_PS_PRC; pci_xhci_set_evtrb(&evtrb, portn, XHCI_TRB_ERROR_SUCCESS, XHCI_TRB_EVENT_PORT_STS_CHANGE); error = pci_xhci_insert_event(sc, &evtrb, 1); if (error != XHCI_TRB_ERROR_SUCCESS) DPRINTF(("xhci reset port insert event " "failed\r\n")); } } } static void pci_xhci_init_port(struct pci_xhci_softc *sc, int portn) { struct pci_xhci_portregs *port; struct pci_xhci_dev_emu *dev; port = XHCI_PORTREG_PTR(sc, portn); dev = XHCI_DEVINST_PTR(sc, portn); if (dev) { port->portsc = XHCI_PS_CCS | /* connected */ XHCI_PS_PP; /* port power */ if (dev->dev_ue->ue_usbver == 2) { port->portsc |= XHCI_PS_PLS_SET(UPS_PORT_LS_POLL) | XHCI_PS_SPEED_SET(dev->dev_ue->ue_usbspeed); } else { port->portsc |= XHCI_PS_PLS_SET(UPS_PORT_LS_U0) | XHCI_PS_PED | /* enabled */ XHCI_PS_SPEED_SET(dev->dev_ue->ue_usbspeed); } DPRINTF(("Init port %d 0x%x\n", portn, port->portsc)); } else { port->portsc = XHCI_PS_PLS_SET(UPS_PORT_LS_RX_DET) | XHCI_PS_PP; DPRINTF(("Init empty port %d 0x%x\n", portn, port->portsc)); } } static int pci_xhci_dev_intr(struct usb_hci *hci, int epctx) { struct pci_xhci_dev_emu *dev; struct xhci_dev_ctx *dev_ctx; struct xhci_trb evtrb; struct pci_xhci_softc *sc; struct pci_xhci_portregs *p; struct xhci_endp_ctx *ep_ctx; int error; int dir_in; int epid; dir_in = epctx & 0x80; epid = epctx & ~0x80; /* HW endpoint contexts are 0-15; convert to epid based on dir */ epid = (epid * 2) + (dir_in ? 1 : 0); assert(epid >= 1 && epid <= 31); dev = hci->hci_sc; sc = dev->xsc; /* check if device is ready; OS has to initialise it */ if (sc->rtsregs.erstba_p == NULL || (sc->opregs.usbcmd & XHCI_CMD_RS) == 0 || dev->dev_ctx == NULL) return (0); p = XHCI_PORTREG_PTR(sc, hci->hci_port); /* raise event if link U3 (suspended) state */ if (XHCI_PS_PLS_GET(p->portsc) == 3) { p->portsc &= ~XHCI_PS_PLS_MASK; p->portsc |= XHCI_PS_PLS_SET(UPS_PORT_LS_RESUME); if ((p->portsc & XHCI_PS_PLC) != 0) return (0); p->portsc |= XHCI_PS_PLC; pci_xhci_set_evtrb(&evtrb, hci->hci_port, XHCI_TRB_ERROR_SUCCESS, XHCI_TRB_EVENT_PORT_STS_CHANGE); error = pci_xhci_insert_event(sc, &evtrb, 0); if (error != XHCI_TRB_ERROR_SUCCESS) goto done; } dev_ctx = dev->dev_ctx; ep_ctx = &dev_ctx->ctx_ep[epid]; if ((ep_ctx->dwEpCtx0 & 0x7) == XHCI_ST_EPCTX_DISABLED) { DPRINTF(("xhci device interrupt on disabled endpoint %d\r\n", epid)); return (0); } DPRINTF(("xhci device interrupt on endpoint %d\r\n", epid)); pci_xhci_device_doorbell(sc, hci->hci_port, epid, 0); done: return (error); } static int pci_xhci_dev_event(struct usb_hci *hci, enum hci_usbev evid, void *param) { DPRINTF(("xhci device event port %d\r\n", hci->hci_port)); return (0); } static void pci_xhci_device_usage(char *opt) { fprintf(stderr, "Invalid USB emulation \"%s\"\r\n", opt); } static int pci_xhci_parse_opts(struct pci_xhci_softc *sc, char *opts) { struct pci_xhci_dev_emu **devices; struct pci_xhci_dev_emu *dev; struct usb_devemu *ue; void *devsc; char *uopt, *xopts, *config; int usb3_port, usb2_port, i; + uopt = NULL; usb3_port = sc->usb3_port_start - 1; usb2_port = sc->usb2_port_start - 1; devices = NULL; if (opts == NULL) goto portsfinal; devices = calloc(XHCI_MAX_DEVS, sizeof(struct pci_xhci_dev_emu *)); sc->slots = calloc(XHCI_MAX_SLOTS, sizeof(struct pci_xhci_dev_emu *)); sc->devices = devices; sc->ndevices = 0; uopt = strdup(opts); for (xopts = strtok(uopt, ","); 
xopts != NULL; xopts = strtok(NULL, ",")) { if (usb2_port == ((sc->usb2_port_start-1) + XHCI_MAX_DEVS/2) || usb3_port == ((sc->usb3_port_start-1) + XHCI_MAX_DEVS/2)) { WPRINTF(("pci_xhci max number of USB 2 or 3 " "devices reached, max %d\r\n", XHCI_MAX_DEVS/2)); usb2_port = usb3_port = -1; goto done; } /* device[=] */ if ((config = strchr(xopts, '=')) == NULL) config = ""; /* no config */ else *config++ = '\0'; ue = usb_emu_finddev(xopts); if (ue == NULL) { pci_xhci_device_usage(xopts); DPRINTF(("pci_xhci device not found %s\r\n", xopts)); usb2_port = usb3_port = -1; goto done; } DPRINTF(("pci_xhci adding device %s, opts \"%s\"\r\n", xopts, config)); dev = calloc(1, sizeof(struct pci_xhci_dev_emu)); dev->xsc = sc; dev->hci.hci_sc = dev; dev->hci.hci_intr = pci_xhci_dev_intr; dev->hci.hci_event = pci_xhci_dev_event; if (ue->ue_usbver == 2) { dev->hci.hci_port = usb2_port + 1; devices[usb2_port] = dev; usb2_port++; } else { dev->hci.hci_port = usb3_port + 1; devices[usb3_port] = dev; usb3_port++; } dev->hci.hci_address = 0; devsc = ue->ue_init(&dev->hci, config); if (devsc == NULL) { pci_xhci_device_usage(xopts); usb2_port = usb3_port = -1; goto done; } dev->dev_ue = ue; dev->dev_sc = devsc; /* assign slot number to device */ sc->slots[sc->ndevices] = dev; sc->ndevices++; } - if (uopt != NULL) - free(uopt); portsfinal: sc->portregs = calloc(XHCI_MAX_DEVS, sizeof(struct pci_xhci_portregs)); if (sc->ndevices > 0) { /* port and slot numbering start from 1 */ sc->devices--; sc->portregs--; sc->slots--; for (i = 1; i <= XHCI_MAX_DEVS; i++) { pci_xhci_init_port(sc, i); } } else { WPRINTF(("pci_xhci no USB devices configured\r\n")); sc->ndevices = 1; } done: if (devices != NULL) { if (usb2_port <= 0 && usb3_port <= 0) { sc->devices = NULL; for (i = 0; devices[i] != NULL; i++) free(devices[i]); sc->ndevices = -1; free(devices); } } free(uopt); return (sc->ndevices); } static int pci_xhci_init(struct vmctx *ctx, struct pci_devinst *pi, char *opts) { struct pci_xhci_softc *sc; int error; if (xhci_in_use) { WPRINTF(("pci_xhci controller already defined\r\n")); return (-1); } xhci_in_use = 1; sc = calloc(1, sizeof(struct pci_xhci_softc)); pi->pi_arg = sc; sc->xsc_pi = pi; sc->usb2_port_start = (XHCI_MAX_DEVS/2) + 1; sc->usb3_port_start = 1; /* discover devices */ error = pci_xhci_parse_opts(sc, opts); if (error < 0) goto done; else error = 0; sc->caplength = XHCI_SET_CAPLEN(XHCI_CAPLEN) | XHCI_SET_HCIVERSION(0x0100); sc->hcsparams1 = XHCI_SET_HCSP1_MAXPORTS(XHCI_MAX_DEVS) | XHCI_SET_HCSP1_MAXINTR(1) | /* interrupters */ XHCI_SET_HCSP1_MAXSLOTS(XHCI_MAX_SLOTS); sc->hcsparams2 = XHCI_SET_HCSP2_ERSTMAX(XHCI_ERST_MAX) | XHCI_SET_HCSP2_IST(0x04); sc->hcsparams3 = 0; /* no latency */ sc->hccparams1 = XHCI_SET_HCCP1_NSS(1) | /* no 2nd-streams */ XHCI_SET_HCCP1_SPC(1) | /* short packet */ XHCI_SET_HCCP1_MAXPSA(XHCI_STREAMS_MAX); sc->hccparams2 = XHCI_SET_HCCP2_LEC(1) | XHCI_SET_HCCP2_U3C(1); sc->dboff = XHCI_SET_DOORBELL(XHCI_CAPLEN + XHCI_PORTREGS_START + XHCI_MAX_DEVS * sizeof(struct pci_xhci_portregs)); /* dboff must be 32-bit aligned */ if (sc->dboff & 0x3) sc->dboff = (sc->dboff + 0x3) & ~0x3; /* rtsoff must be 32-bytes aligned */ sc->rtsoff = XHCI_SET_RTSOFFSET(sc->dboff + (XHCI_MAX_SLOTS+1) * 32); if (sc->rtsoff & 0x1F) sc->rtsoff = (sc->rtsoff + 0x1F) & ~0x1F; DPRINTF(("pci_xhci dboff: 0x%x, rtsoff: 0x%x\r\n", sc->dboff, sc->rtsoff)); sc->opregs.usbsts = XHCI_STS_HCH; sc->opregs.pgsz = XHCI_PAGESIZE_4K; pci_xhci_reset(sc); sc->regsend = sc->rtsoff + 0x20 + 32; /* only 1 intrpter */ /* * Set extended 
capabilities pointer to be after regsend; * value of xecp field is 32-bit offset. */ sc->hccparams1 |= XHCI_SET_HCCP1_XECP(sc->regsend/4); pci_set_cfgdata16(pi, PCIR_DEVICE, 0x1E31); pci_set_cfgdata16(pi, PCIR_VENDOR, 0x8086); pci_set_cfgdata8(pi, PCIR_CLASS, PCIC_SERIALBUS); pci_set_cfgdata8(pi, PCIR_SUBCLASS, PCIS_SERIALBUS_USB); pci_set_cfgdata8(pi, PCIR_PROGIF,PCIP_SERIALBUS_USB_XHCI); pci_set_cfgdata8(pi, PCI_USBREV, PCI_USB_REV_3_0); pci_emul_add_msicap(pi, 1); /* regsend + xecp registers */ pci_emul_alloc_bar(pi, 0, PCIBAR_MEM32, sc->regsend + 4*32); DPRINTF(("pci_xhci pci_emu_alloc: %d\r\n", sc->regsend + 4*32)); pci_lintr_request(pi); pthread_mutex_init(&sc->mtx, NULL); done: if (error) { free(sc); } return (error); } struct pci_devemu pci_de_xhci = { .pe_emu = "xhci", .pe_init = pci_xhci_init, .pe_barwrite = pci_xhci_write, .pe_barread = pci_xhci_read }; PCI_EMUL_SET(pci_de_xhci); Property changes on: projects/import-googletest-1.8.1/usr.sbin/bhyve/pci_xhci.c ___________________________________________________________________ Modified: svn:mergeinfo ## -0,0 +0,1 ## Merged /head/usr.sbin/bhyve/pci_xhci.c:r344081-344270 Index: projects/import-googletest-1.8.1/usr.sbin/bsdinstall/partedit/partedit_powerpc.c =================================================================== --- projects/import-googletest-1.8.1/usr.sbin/bsdinstall/partedit/partedit_powerpc.c (revision 344270) +++ projects/import-googletest-1.8.1/usr.sbin/bsdinstall/partedit/partedit_powerpc.c (revision 344271) @@ -1,143 +1,147 @@ /*- * SPDX-License-Identifier: BSD-2-Clause-FreeBSD * * Copyright (c) 2011 Nathan Whitehorn * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. 
* * $FreeBSD$ */ #include #include #include #include "partedit.h" static char platform[255] = ""; const char * default_scheme(void) { size_t platlen = sizeof(platform); if (strlen(platform) == 0) sysctlbyname("hw.platform", platform, &platlen, NULL, -1); if (strcmp(platform, "powermac") == 0) return ("APM"); if (strcmp(platform, "chrp") == 0 || strcmp(platform, "ps3") == 0 || strcmp(platform, "mpc85xx") == 0) return ("MBR"); /* Pick GPT as a generic default */ return ("GPT"); } int is_scheme_bootable(const char *part_type) { size_t platlen = sizeof(platform); if (strlen(platform) == 0) sysctlbyname("hw.platform", platform, &platlen, NULL, -1); if (strcmp(platform, "powermac") == 0 && strcmp(part_type, "APM") == 0) return (1); if (strcmp(platform, "powernv") == 0 && strcmp(part_type, "GPT") == 0) return (1); if ((strcmp(platform, "chrp") == 0 || strcmp(platform, "ps3") == 0) && (strcmp(part_type, "MBR") == 0 || strcmp(part_type, "BSD") == 0 || strcmp(part_type, "GPT") == 0)) return (1); if (strcmp(platform, "mpc85xx") == 0 && strcmp(part_type, "MBR") == 0) return (1); return (0); } int is_fs_bootable(const char *part_type, const char *fs) { if (strcmp(fs, "freebsd-ufs") == 0) return (1); return (0); } size_t bootpart_size(const char *part_type) { size_t platlen = sizeof(platform); if (strlen(platform) == 0) sysctlbyname("hw.platform", platform, &platlen, NULL, -1); if (strcmp(part_type, "APM") == 0) return (800*1024); if (strcmp(part_type, "BSD") == 0) /* Nothing for nested */ return (0); if (strcmp(platform, "chrp") == 0) return (800*1024); - if (strcmp(platform, "ps3") == 0 || strcmp(platform, "powernv") == 0 || - strcmp(platform, "mpc85xx") == 0) + if (strcmp(platform, "ps3") == 0 || strcmp(platform, "powernv") == 0) return (512*1024*1024); + if (strcmp(platform, "mpc85xx") == 0) + return (16*1024*1024); return (0); } const char * bootpart_type(const char *scheme, const char **mountpoint) { size_t platlen = sizeof(platform); if (strlen(platform) == 0) sysctlbyname("hw.platform", platform, &platlen, NULL, -1); if (strcmp(platform, "chrp") == 0) return ("prep-boot"); if (strcmp(platform, "powermac") == 0) return ("apple-boot"); - if (strcmp(platform, "powernv") == 0 || strcmp(platform, "ps3") == 0 || - strcmp(platform, "mpc85xx") == 0) { + if (strcmp(platform, "powernv") == 0 || strcmp(platform, "ps3") == 0) { *mountpoint = "/boot"; if (strcmp(scheme, "GPT") == 0) return ("ms-basic-data"); else if (strcmp(scheme, "MBR") == 0) return ("fat32"); + } + if (strcmp(platform, "mpc85xx") == 0) { + *mountpoint = "/boot/uboot"; + return ("fat16"); } return ("freebsd-boot"); } const char * bootcode_path(const char *part_type) { return (NULL); } const char * partcode_path(const char *part_type, const char *fs_type) { size_t platlen = sizeof(platform); if (strlen(platform) == 0) sysctlbyname("hw.platform", platform, &platlen, NULL, -1); if (strcmp(part_type, "APM") == 0) return ("/boot/boot1.hfs"); if (strcmp(platform, "chrp") == 0) return ("/boot/boot1.elf"); return (NULL); } Index: projects/import-googletest-1.8.1/usr.sbin/bsnmpd/modules/snmp_hostres/hostres_partition_tbl.c =================================================================== --- projects/import-googletest-1.8.1/usr.sbin/bsnmpd/modules/snmp_hostres/hostres_partition_tbl.c (revision 344270) +++ projects/import-googletest-1.8.1/usr.sbin/bsnmpd/modules/snmp_hostres/hostres_partition_tbl.c (revision 344271) @@ -1,626 +1,626 @@ /*- * Copyright (c) 2005-2006 The FreeBSD Project * All rights reserved. 
* * Author: Victor Cruceru * * Redistribution of this software and documentation and use in source and * binary forms, with or without modification, are permitted provided that * the following conditions are met: * * 1. Redistributions of source code or documentation must retain the above * copyright notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * $FreeBSD$ */ /* * Host Resources MIB: hrPartitionTable implementation for SNMPd. */ #include #include #include #include #include #include #include #include #include #include #include #include "hostres_snmp.h" #include "hostres_oid.h" #include "hostres_tree.h" #define HR_FREEBSD_PART_TYPE 165 /* Maximum length for label and id including \0 */ #define PART_STR_MLEN (128 + 1) /* * One row in the hrPartitionTable */ struct partition_entry { asn_subid_t index[2]; u_char *label; /* max allocated len will be PART_STR_MLEN */ u_char *id; /* max allocated len will be PART_STR_MLEN */ int32_t size; int32_t fs_Index; TAILQ_ENTRY(partition_entry) link; #define HR_PARTITION_FOUND 0x001 uint32_t flags; }; TAILQ_HEAD(partition_tbl, partition_entry); /* * This table is used to get a consistent indexing. It saves the name -> index * mapping while we rebuild the partition table. */ struct partition_map_entry { int32_t index; /* partition_entry::index */ u_char *id; /* max allocated len will be PART_STR_MLEN */ /* * next may be NULL if the respective partition_entry * is (temporally) gone. */ struct partition_entry *entry; STAILQ_ENTRY(partition_map_entry) link; }; STAILQ_HEAD(partition_map, partition_map_entry); /* Mapping table for consistent indexing */ static struct partition_map partition_map = STAILQ_HEAD_INITIALIZER(partition_map); /* THE partition table. */ static struct partition_tbl partition_tbl = TAILQ_HEAD_INITIALIZER(partition_tbl); /* next int available for indexing the hrPartitionTable */ static uint32_t next_partition_index = 1; /* * Partition_entry_cmp is used for INSERT_OBJECT_FUNC_LINK * macro. 
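 * A short illustration (hypothetical index values): entries order * lexicographically on {disk index, partition index}, so * {1,1} < {1,2} < {2,1} < {2,7}, keeping the table sorted for SNMP * GETNEXT walks.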
*/ static int partition_entry_cmp(const struct partition_entry *a, const struct partition_entry *b) { assert(a != NULL); assert(b != NULL); if (a->index[0] < b->index[0]) return (-1); if (a->index[0] > b->index[0]) return (+1); if (a->index[1] < b->index[1]) return (-1); if (a->index[1] > b->index[1]) return (+1); return (0); } /* * Partition_idx_cmp is used for NEXT_OBJECT_FUNC and FIND_OBJECT_FUNC * macros */ static int partition_idx_cmp(const struct asn_oid *oid, u_int sub, const struct partition_entry *entry) { u_int i; for (i = 0; i < 2 && i < oid->len - sub; i++) { if (oid->subs[sub + i] < entry->index[i]) return (-1); if (oid->subs[sub + i] > entry->index[i]) return (+1); } if (oid->len - sub < 2) return (-1); if (oid->len - sub > 2) return (+1); return (0); } /** * Create a new partition table entry */ static struct partition_entry * partition_entry_create(int32_t ds_index, const char *chunk_name) { struct partition_entry *entry; struct partition_map_entry *map; size_t id_len; /* sanity checks */ assert(chunk_name != NULL); if (chunk_name == NULL || chunk_name[0] == '\0') return (NULL); /* check whether we already have seen this partition */ STAILQ_FOREACH(map, &partition_map, link) if (strcmp(map->id, chunk_name) == 0) break; if (map == NULL) { /* new object - get a new index and create a map */ if (next_partition_index > INT_MAX) { /* Unrecoverable error - die clean and quickly */ syslog(LOG_ERR, "%s: hrPartitionTable index wrap", __func__); errx(EX_SOFTWARE, "hrPartitionTable index wrap"); } if ((map = malloc(sizeof(*map))) == NULL) { syslog(LOG_ERR, "hrPartitionTable: %s: %m", __func__); return (NULL); } id_len = strlen(chunk_name) + 1; if (id_len > PART_STR_MLEN) id_len = PART_STR_MLEN; if ((map->id = malloc(id_len)) == NULL) { free(map); return (NULL); } map->index = next_partition_index++; strlcpy(map->id, chunk_name, id_len); map->entry = NULL; STAILQ_INSERT_TAIL(&partition_map, map, link); HRDBG("%s added into hrPartitionMap at index=%d", chunk_name, map->index); } else { HRDBG("%s exists in hrPartitionMap index=%d", chunk_name, map->index); } if ((entry = malloc(sizeof(*entry))) == NULL) { syslog(LOG_WARNING, "hrPartitionTable: %s: %m", __func__); return (NULL); } memset(entry, 0, sizeof(*entry)); /* create the index */ entry->index[0] = ds_index; entry->index[1] = map->index; map->entry = entry; if ((entry->id = strdup(map->id)) == NULL) { free(entry); return (NULL); } /* * reuse id_len from here till the end of this function * for partition_entry::label */ id_len = strlen(_PATH_DEV) + strlen(chunk_name) + 1; if (id_len > PART_STR_MLEN) id_len = PART_STR_MLEN; if ((entry->label = malloc(id_len)) == NULL) { free(entry->id); free(entry); return (NULL); } snprintf(entry->label, id_len, "%s%s", _PATH_DEV, chunk_name); INSERT_OBJECT_FUNC_LINK(entry, &partition_tbl, link, partition_entry_cmp); return (entry); } /** * Delete a partition table entry but keep the map entry intact. */ static void partition_entry_delete(struct partition_entry *entry) { struct partition_map_entry *map; assert(entry != NULL); TAILQ_REMOVE(&partition_tbl, entry, link); STAILQ_FOREACH(map, &partition_map, link) if (map->entry == entry) { map->entry = NULL; break; } free(entry->id); free(entry->label); free(entry); } /** * Find a partition table entry by name. If none is found, return NULL. 
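 * For instance (hypothetical chunk name), partition_entry_create() above * gives chunk "ada0p2" the id "ada0p2" and the label "/dev/ada0p2" * (_PATH_DEV prefix), so this function matches the bare name while * partition_entry_find_by_label() below matches the /dev path.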
*/ static struct partition_entry * partition_entry_find_by_name(const char *name) { struct partition_entry *entry = NULL; TAILQ_FOREACH(entry, &partition_tbl, link) if (strcmp(entry->id, name) == 0) return (entry); return (NULL); } /** * Find a partition table entry by label. If none is found, return NULL. */ static struct partition_entry * partition_entry_find_by_label(const char *name) { struct partition_entry *entry = NULL; TAILQ_FOREACH(entry, &partition_tbl, link) if (strcmp(entry->label, name) == 0) return (entry); return (NULL); } /** * Process a chunk from libgeom(4). A chunk is either a slice or a partition. * If necessary, create a new partition table entry for it. In any case * set the size field of the entry and set the FOUND flag. */ static void handle_chunk(int32_t ds_index, const char *chunk_name, off_t chunk_size) { struct partition_entry *entry; daddr_t k_size; assert(chunk_name != NULL); assert(chunk_name[0] != '\0'); - if (chunk_name == NULL || chunk_name == '\0') + if (chunk_name == NULL || chunk_name[0] == '\0') return; HRDBG("ANALYZE chunk %s", chunk_name); if ((entry = partition_entry_find_by_name(chunk_name)) == NULL) if ((entry = partition_entry_create(ds_index, chunk_name)) == NULL) return; entry->flags |= HR_PARTITION_FOUND; /* actual size may overflow the SNMP type */ k_size = chunk_size / 1024; entry->size = (k_size > (off_t)INT_MAX ? INT_MAX : k_size); } /** * Start refreshing the partition table. A call to this function will * be followed by a call to handleDiskStorage() for every disk, followed * by a single call to the post_refresh function. */ void partition_tbl_pre_refresh(void) { struct partition_entry *entry; /* mark each entry as missing */ TAILQ_FOREACH(entry, &partition_tbl, link) entry->flags &= ~HR_PARTITION_FOUND; } /** * Try to find a geom(4) class by its name. Returns a pointer to that * class if found, NULL otherwise. */ static struct gclass * find_class(struct gmesh *mesh, const char *name) { struct gclass *classp; LIST_FOREACH(classp, &mesh->lg_class, lg_class) if (strcmp(classp->lg_name, name) == 0) return (classp); return (NULL); } /** * Process all MBR-type partitions from the given disk. */ static void get_mbr(struct gclass *classp, int32_t ds_index, const char *disk_dev_name) { struct ggeom *gp; struct gprovider *pp; struct gconfig *conf; long part_type; LIST_FOREACH(gp, &classp->lg_geom, lg_geom) { /* We are only interested in partitions from this disk */ if (strcmp(gp->lg_name, disk_dev_name) != 0) continue; /* * Find all the non-BSD providers (these are handled in get_bsd) */ LIST_FOREACH(pp, &gp->lg_provider, lg_provider) { LIST_FOREACH(conf, &pp->lg_config, lg_config) { if (conf->lg_name == NULL || conf->lg_val == NULL || strcmp(conf->lg_name, "type") != 0) continue; /* * We are not interested in BSD partitions * (i.e., ad0s1 is not interesting at this point). * We'll take care of them in detail (slice * by slice) in get_bsd. */ part_type = strtol(conf->lg_val, NULL, 10); if (part_type == HR_FREEBSD_PART_TYPE) break; HRDBG("-> MBR PROVIDER Name: %s", pp->lg_name); HRDBG("Mediasize: %jd", (intmax_t)pp->lg_mediasize / 1024); HRDBG("Sectorsize: %u", pp->lg_sectorsize); HRDBG("Mode: %s", pp->lg_mode); HRDBG("CONFIG: %s: %s", conf->lg_name, conf->lg_val); handle_chunk(ds_index, pp->lg_name, pp->lg_mediasize); } } } } /** * Process all BSD-type partitions from the given disk. 
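 * Note that the geom name comparison below is a prefix match: for a * hypothetical disk_dev_name "ad0" the BSD/SUN geoms "ad0s1", "ad0s2", * ... all match, and each provider found on them (e.g. "ad0s1a") is * passed to handle_chunk().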
*/ static void get_bsd_sun(struct gclass *classp, int32_t ds_index, const char *disk_dev_name) { struct ggeom *gp; struct gprovider *pp; LIST_FOREACH(gp, &classp->lg_geom, lg_geom) { /* * We are only interested in those geoms starting with * the disk_dev_name passed as parameter to this function. */ if (strncmp(gp->lg_name, disk_dev_name, strlen(disk_dev_name)) != 0) continue; LIST_FOREACH(pp, &gp->lg_provider, lg_provider) { if (pp->lg_name == NULL) continue; handle_chunk(ds_index, pp->lg_name, pp->lg_mediasize); } } } /** * Called from the DiskStorage table for every row. Open the GEOM(4) framework * and process all the partitions in it. * ds_index is the index into the DiskStorage table. * This is done in two steps: for non-BSD partitions the geom class "MBR" is * used, for our BSD slices the "BSD" geom class. */ void partition_tbl_handle_disk(int32_t ds_index, const char *disk_dev_name) { struct gmesh mesh; /* GEOM userland tree */ struct gclass *classp; int error; assert(disk_dev_name != NULL); assert(ds_index > 0); HRDBG("===> getting partitions for %s <===", disk_dev_name); /* try to construct the GEOM tree */ if ((error = geom_gettree(&mesh)) != 0) { syslog(LOG_WARNING, "cannot get GEOM tree: %m"); return; } /* * First try the GEOM "MBR" class. * This is needed for non-BSD slices (aka partitions) * on PC architectures. */ if ((classp = find_class(&mesh, "MBR")) != NULL) { get_mbr(classp, ds_index, disk_dev_name); } else { HRDBG("cannot find \"MBR\" geom class"); } /* * Get the "BSD" GEOM class. * Here we'll find all the info needed about the BSD slices. */ if ((classp = find_class(&mesh, "BSD")) != NULL) { get_bsd_sun(classp, ds_index, disk_dev_name); } else { /* no problem on sparc64 */ HRDBG("cannot find \"BSD\" geom class"); } /* * Get the "SUN" GEOM class. * Here we'll find all the info needed about the BSD slices. */ if ((classp = find_class(&mesh, "SUN")) != NULL) { get_bsd_sun(classp, ds_index, disk_dev_name); } else { /* no problem on i386 */ HRDBG("cannot find \"SUN\" geom class"); } geom_deletetree(&mesh); } /** * Finish refreshing the table. */ void partition_tbl_post_refresh(void) { struct partition_entry *e, *etmp; /* * Purge items that disappeared */ TAILQ_FOREACH_SAFE(e, &partition_tbl, link, etmp) if (!(e->flags & HR_PARTITION_FOUND)) partition_entry_delete(e); } /* * Finalization routine for hrPartitionTable * It destroys the lists and frees any allocated heap memory */ void fini_partition_tbl(void) { struct partition_map_entry *m; while ((m = STAILQ_FIRST(&partition_map)) != NULL) { STAILQ_REMOVE_HEAD(&partition_map, link); if (m->entry != NULL) { TAILQ_REMOVE(&partition_tbl, m->entry, link); free(m->entry->id); free(m->entry->label); free(m->entry); } free(m->id); free(m); } assert(TAILQ_EMPTY(&partition_tbl)); } /** * Called from the file system code to insert the file system table index * into the partition table entry. Note that a partition table entry exists * only for local file systems. */
*/ void handle_partition_fs_index(const char *name, int32_t fs_idx) { struct partition_entry *entry; if ((entry = partition_entry_find_by_label(name)) == NULL) { HRDBG("%s IS MISSING from hrPartitionTable", name); return; } HRDBG("%s [FS index = %d] IS in hrPartitionTable", name, fs_idx); entry->fs_Index = fs_idx; } /* * This is the implementation for a generated (by our SNMP tool) * function prototype, see hostres_tree.h * It handles the SNMP operations for hrPartitionTable */ int op_hrPartitionTable(struct snmp_context *ctx __unused, struct snmp_value *value, u_int sub, u_int iidx __unused, enum snmp_op op) { struct partition_entry *entry; /* * Refresh the disk storage table (which refreshes the partition * table) if necessary. */ refresh_disk_storage_tbl(0); switch (op) { case SNMP_OP_GETNEXT: if ((entry = NEXT_OBJECT_FUNC(&partition_tbl, &value->var, sub, partition_idx_cmp)) == NULL) return (SNMP_ERR_NOSUCHNAME); value->var.len = sub + 2; value->var.subs[sub] = entry->index[0]; value->var.subs[sub + 1] = entry->index[1]; goto get; case SNMP_OP_GET: if ((entry = FIND_OBJECT_FUNC(&partition_tbl, &value->var, sub, partition_idx_cmp)) == NULL) return (SNMP_ERR_NOSUCHNAME); goto get; case SNMP_OP_SET: if ((entry = FIND_OBJECT_FUNC(&partition_tbl, &value->var, sub, partition_idx_cmp)) == NULL) return (SNMP_ERR_NOT_WRITEABLE); return (SNMP_ERR_NO_CREATION); case SNMP_OP_ROLLBACK: case SNMP_OP_COMMIT: abort(); } abort(); get: switch (value->var.subs[sub - 1]) { case LEAF_hrPartitionIndex: value->v.integer = entry->index[1]; return (SNMP_ERR_NOERROR); case LEAF_hrPartitionLabel: return (string_get(value, entry->label, -1)); case LEAF_hrPartitionID: return(string_get(value, entry->id, -1)); case LEAF_hrPartitionSize: value->v.integer = entry->size; return (SNMP_ERR_NOERROR); case LEAF_hrPartitionFSIndex: value->v.integer = entry->fs_Index; return (SNMP_ERR_NOERROR); } abort(); } Index: projects/import-googletest-1.8.1/usr.sbin/nfsd/nfsd.8 =================================================================== --- projects/import-googletest-1.8.1/usr.sbin/nfsd/nfsd.8 (revision 344270) +++ projects/import-googletest-1.8.1/usr.sbin/nfsd/nfsd.8 (revision 344271) @@ -1,352 +1,356 @@ .\" Copyright (c) 1989, 1991, 1993 .\" The Regents of the University of California. All rights reserved. .\" .\" Redistribution and use in source and binary forms, with or without .\" modification, are permitted provided that the following conditions .\" are met: .\" 1. Redistributions of source code must retain the above copyright .\" notice, this list of conditions and the following disclaimer. .\" 2. Redistributions in binary form must reproduce the above copyright .\" notice, this list of conditions and the following disclaimer in the .\" documentation and/or other materials provided with the distribution. .\" 3. Neither the name of the University nor the names of its contributors .\" may be used to endorse or promote products derived from this software .\" without specific prior written permission. .\" .\" THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND .\" ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE .\" IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE .\" ARE DISCLAIMED. 
IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE .\" FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL .\" DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS .\" OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) .\" HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT .\" LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY .\" OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF .\" SUCH DAMAGE. .\" .\" @(#)nfsd.8 8.4 (Berkeley) 3/29/95 .\" $FreeBSD$ .\" -.Dd August 5, 2018 +.Dd February 14, 2019 .Dt NFSD 8 .Os .Sh NAME .Nm nfsd .Nd remote .Tn NFS server .Sh SYNOPSIS .Nm .Op Fl ardute .Op Fl n Ar num_servers .Op Fl h Ar bindip .Op Fl p Ar pnfs_setup .Op Fl m Ar mirror_level +.Op Fl V Ar virtual_hostname .Op Fl Fl maxthreads Ar max_threads .Op Fl Fl minthreads Ar min_threads .Sh DESCRIPTION The .Nm utility runs on a server machine to service .Tn NFS requests from client machines. At least one .Nm must be running for a machine to operate as a server. .Pp Unless otherwise specified, eight servers per CPU for .Tn UDP transport are started. .Pp The following options are available: .Bl -tag -width Ds .It Fl r Register the .Tn NFS service with .Xr rpcbind 8 without creating any servers. This option can be used along with the .Fl u or .Fl t options to re-register NFS if the rpcbind server is restarted. .It Fl d Unregister the .Tn NFS service with .Xr rpcbind 8 without creating any servers. +.It Fl V Ar virtual_hostname +Specifies a hostname to be used as a principal name, instead of +the default hostname. .It Fl n Ar threads Specifies how many servers to create. This option is equivalent to specifying .Fl Fl maxthreads and .Fl Fl minthreads with their respective arguments to .Ar threads . .It Fl Fl maxthreads Ar threads Specifies the maximum servers that will be kept around to service requests. .It Fl Fl minthreads Ar threads Specifies the minimum servers that will be kept around to service requests. .It Fl h Ar bindip Specifies which IP address or hostname to bind to on the local host. This option is recommended when a host has multiple interfaces. Multiple .Fl h options may be specified. .It Fl a Specifies that nfsd should bind to the wildcard IP address. This is the default if no .Fl h options are given. It may also be specified in addition to any .Fl h options given. Note that NFS/UDP does not operate properly when bound to the wildcard IP address whether you use -a or do not use -h. .It Fl p Ar pnfs_setup Enables pNFS support in the server and specifies the information that the daemon needs to start it. This option can only be used on one server and specifies that this server will be the MetaData Server (MDS) for the pNFS service. This can only be done if there is at least one FreeBSD system configured as a Data Server (DS) for it to use. .Pp The .Ar pnfs_setup string is a set of fields separated by ',' characters: .Bl -tag -width Ds Each of these fields specifies one DS. It consists of a server hostname, followed by a ':' and the directory path where the DS's data storage file system is mounted on this MDS server. This can optionally be followed by a '#' and the mds_path, which is the directory path for an exported file system on this MDS. If this is specified, it means that this DS is to be used to store data files for this mds_path file system only. 
If this optional component does not exist, the DS will be used to store data files for all exported MDS file systems. The DS storage file systems must be mounted on this system before the .Nm is started with this option specified. .br For example: .sp nfsv4-data0:/data0,nfsv4-data1:/data1 .sp would specify two DS servers called nfsv4-data0 and nfsv4-data1 that comprise the data storage component of the pNFS service. These two DSs would be used to store data files for all exported file systems on this MDS. The directories .Dq /data0 and .Dq /data1 are where the data storage servers exported storage directories are mounted on this system (which will act as the MDS). .br Whereas, for the example: .sp nfsv4-data0:/data0#/export1,nfsv4-data1:/data1#/export2 .sp would specify two DSs as above, however nfsv4-data0 will be used to store data files for .Dq /export1 and nfsv4-data1 will be used to store data files for .Dq /export2 . .El .sp When using IPv6 addresses for DSs be wary of using link local addresses. The IPv6 address for the DS is sent to the client and there is no scope zone in it. As such, a link local address may not work for a pNFS client to DS TCP connection. When parsed, .Nm will only use a link local address if it is the only address returned by .Xr getaddrinfo 3 for the DS hostname. .It Fl m Ar mirror_level This option is only meaningful when used with the .Fl p option. It specifies the .Dq mirror_level , which defines how many of the DSs will have a copy of a file's data storage file. The default of one implies no mirroring of data storage files on the DSs. The .Dq mirror_level would normally be set to 2 to enable mirroring, but can be as high as NFSDEV_MAXMIRRORS. There must be at least .Dq mirror_level DSs for each exported file system on the MDS, as specified in the .Fl p option. This implies that, for the above example using "#/export1" and "#/export2", mirroring cannot be done. There would need to be two DS entries for each of "#/export1" and "#/export2" in order to support a .Dq mirror_level of two. .Pp If mirroring is enabled, the server must use the Flexible File layout. If mirroring is not enabled, the server will use the File layout by default, but this default can be changed to the Flexible File layout if the .Xr sysctl 1 vfs.nfsd.default_flexfile is set non-zero. .It Fl t Serve .Tn TCP NFS clients. .It Fl u Serve .Tn UDP NFS clients. .It Fl e Ignored; included for backward compatibility. .El .Pp For example, .Dq Li "nfsd -u -t -n 6" serves .Tn UDP and .Tn TCP transports using six daemons. .Pp A server should run enough daemons to handle the maximum level of concurrency from its clients, typically four to six. .Pp The .Nm utility listens for service requests at the port indicated in the .Tn NFS server specification; see .%T "Network File System Protocol Specification" , RFC1094, .%T "NFS: Network File System Version 3 Protocol Specification" , RFC1813, .%T "Network File System (NFS) Version 4 Protocol" , RFC3530 and .%T "Network File System (NFS) Version 4 Minor Version 1 Protocol" , RFC5661. .Pp If .Nm detects that .Tn NFS is not loaded in the running kernel, it will attempt to load a loadable kernel module containing .Tn NFS support using .Xr kldload 2 . If this fails, or no .Tn NFS KLD is available, .Nm will exit with an error. .Pp If .Nm is to be run on a host with multiple interfaces or interface aliases, use of the .Fl h option is recommended. If you do not use the option NFS may not respond to UDP packets from the same IP address they were sent to. 
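For example, .Dq Li "nfsd -u -t -h 192.0.2.1 -h 192.0.2.2" (hypothetical documentation addresses) binds the NFS sockets to just those two addresses. 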
Use of this option is also recommended when securing NFS exports on a firewalling machine such that the NFS sockets can only be accessed by the inside interface. The .Nm ipfw utility would then be used to block nfs-related packets that come in on the outside interface. .Pp If the server has stopped servicing clients and has generated a console message like .Dq Li "nfsd server cache flooded..." , the value for vfs.nfsd.tcphighwater needs to be increased. This should allow the server to again handle requests without a reboot. Also, you may want to consider decreasing the value for vfs.nfsd.tcpcachetimeo to several minutes (in seconds) instead of 12 hours when this occurs. .Pp Unfortunately making vfs.nfsd.tcphighwater too large can result in the mbuf limit being reached, as indicated by a console message like .Dq Li "kern.ipc.nmbufs limit reached" . If you cannot find values of the above .Nm sysctl values that work, you can disable the DRC cache for TCP by setting vfs.nfsd.cachetcp to 0. .Pp The .Nm utility has to be terminated with .Dv SIGUSR1 and cannot be killed with .Dv SIGTERM or .Dv SIGQUIT . The .Nm utility needs to ignore these signals in order to stay alive as long as possible during a shutdown, otherwise loopback mounts will not be able to unmount. If you have to kill .Nm just do a .Dq Li "kill -USR1 " .Sh EXIT STATUS .Ex -std .Sh SEE ALSO .Xr nfsstat 1 , .Xr kldload 2 , .Xr nfssvc 2 , .Xr nfsv4 4 , .Xr pnfs 4 , .Xr pnfsserver 4 , .Xr exports 5 , .Xr stablerestart 5 , .Xr gssd 8 , .Xr ipfw 8 , .Xr mountd 8 , .Xr nfsiod 8 , .Xr nfsrevoke 8 , .Xr nfsuserd 8 , .Xr rpcbind 8 .Sh HISTORY The .Nm utility first appeared in .Bx 4.4 . .Sh BUGS If .Nm is started when .Xr gssd 8 is not running, it will service AUTH_SYS requests only. To fix the problem you must kill .Nm and then restart it, after the .Xr gssd 8 is running. .Pp If mirroring is enabled via the .Fl m option and there are Linux clients doing NFSv4.1 mounts, those clients need to be patched to support the .Dq tightly coupled variant of the Flexible File layout or the .Xr sysctl 1 vfs.nfsd.flexlinuxhack must be set to one on the MDS as a workaround. Index: projects/import-googletest-1.8.1/usr.sbin/nfsd/nfsd.c =================================================================== --- projects/import-googletest-1.8.1/usr.sbin/nfsd/nfsd.c (revision 344270) +++ projects/import-googletest-1.8.1/usr.sbin/nfsd/nfsd.c (revision 344271) @@ -1,1331 +1,1343 @@ /*- * SPDX-License-Identifier: BSD-3-Clause * * Copyright (c) 1989, 1993, 1994 * The Regents of the University of California. All rights reserved. * * This code is derived from software contributed to Berkeley by * Rick Macklem at The University of Guelph. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * 3. Neither the name of the University nor the names of its contributors * may be used to endorse or promote products derived from this software * without specific prior written permission. 
* * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ #ifndef lint static const char copyright[] = "@(#) Copyright (c) 1989, 1993, 1994\n\ The Regents of the University of California. All rights reserved.\n"; #endif /* not lint */ #ifndef lint #if 0 static char sccsid[] = "@(#)nfsd.c 8.9 (Berkeley) 3/29/95"; #endif static const char rcsid[] = "$FreeBSD$"; #endif /* not lint */ #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include static int debug = 0; #define NFSD_STABLERESTART "/var/db/nfs-stablerestart" #define NFSD_STABLEBACKUP "/var/db/nfs-stablerestart.bak" #define MAXNFSDCNT 256 #define DEFNFSDCNT 4 #define NFS_VER2 2 #define NFS_VER3 3 #define NFS_VER4 4 static pid_t children[MAXNFSDCNT]; /* PIDs of children */ static pid_t masterpid; /* PID of master/parent */ static int nfsdcnt; /* number of children */ static int nfsdcnt_set; static int minthreads; static int maxthreads; static int nfssvc_nfsd; /* Set to correct NFSSVC_xxx flag */ static int stablefd = -1; /* Fd for the stable restart file */ static int backupfd; /* Fd for the backup stable restart file */ static const char *getopt_shortopts; static const char *getopt_usage; static int minthreads_set; static int maxthreads_set; static struct option longopts[] = { { "debug", no_argument, &debug, 1 }, { "minthreads", required_argument, &minthreads_set, 1 }, { "maxthreads", required_argument, &maxthreads_set, 1 }, { "pnfs", required_argument, NULL, 'p' }, { "mirror", required_argument, NULL, 'm' }, { NULL, 0, NULL, 0} }; static void cleanup(int); static void child_cleanup(int); static void killchildren(void); static void nfsd_exit(int); static void nonfs(int); static void reapchild(int); static int setbindhost(struct addrinfo **ia, const char *bindhost, struct addrinfo hints); -static void start_server(int, struct nfsd_nfsd_args *); +static void start_server(int, struct nfsd_nfsd_args *, const char *vhost); static void unregistration(void); static void usage(void); static void open_stable(int *, int *); static void copy_stable(int, int); static void backup_stable(int); static void set_nfsdcnt(int); static void parse_dsserver(const char *, struct nfsd_nfsd_args *); /* * Nfs server daemon mostly just a user context for nfssvc() * * 1 - do file descriptor and signal cleanup * 2 - fork the nfsd(s) * 3 - create server socket(s) * 4 - register socket with rpcbind * * For connectionless protocols, just pass the socket into the kernel via. * nfssvc(). * For connection based sockets, loop doing accepts. When you get a new * socket from accept, pass the msgsock into the kernel via. nfssvc(). 
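 * (A hedged sketch of that hand-off, using names from the code below: * addsockargs.sock = sock; * nfssvc(NFSSVC_NFSDADDSOCK, &addsockargs); * gives a bound socket to the kernel NFS server.)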
* The arguments are: * -r - reregister with rpcbind * -d - unregister with rpcbind * -t - support tcp nfs clients * -u - support udp nfs clients * -e - forces it to run a server that supports nfsv4 * -p - enable a pNFS service * -m - set the mirroring level for a pNFS service * followed by "n" which is the number of nfsds' to fork off */ int main(int argc, char **argv) { struct nfsd_addsock_args addsockargs; struct addrinfo *ai_udp, *ai_tcp, *ai_udp6, *ai_tcp6, hints; struct netconfig *nconf_udp, *nconf_tcp, *nconf_udp6, *nconf_tcp6; struct netbuf nb_udp, nb_tcp, nb_udp6, nb_tcp6; struct sockaddr_storage peer; fd_set ready, sockbits; int ch, connect_type_cnt, i, maxsock, msgsock; socklen_t len; int on = 1, unregister, reregister, sock; int tcp6sock, ip6flag, tcpflag, tcpsock; int udpflag, ecode, error, s; int bindhostc, bindanyflag, rpcbreg, rpcbregcnt; int nfssvc_addsock; int longindex = 0; int nfs_minvers = NFS_VER2; size_t nfs_minvers_size; const char *lopt; char **bindhost = NULL; pid_t pid; struct nfsd_nfsd_args nfsdargs; + const char *vhostname = NULL; nfsdargs.mirrorcnt = 1; nfsdargs.addr = NULL; nfsdargs.addrlen = 0; nfsdcnt = DEFNFSDCNT; unregister = reregister = tcpflag = maxsock = 0; bindanyflag = udpflag = connect_type_cnt = bindhostc = 0; - getopt_shortopts = "ah:n:rdtuep:m:"; + getopt_shortopts = "ah:n:rdtuep:m:V:"; getopt_usage = "usage:\n" " nfsd [-ardtue] [-h bindip]\n" " [-n numservers] [--minthreads #] [--maxthreads #]\n" - " [-p/--pnfs dsserver0:/dsserver0-mounted-on-dir,...," - "dsserverN:/dsserverN-mounted-on-dir] [-m mirrorlevel]\n"; + " [-p/--pnfs dsserver0:/dsserver0-mounted-on-dir,...,\n" + " [-V virtual_hostname]\n" + " dsserverN:/dsserverN-mounted-on-dir] [-m mirrorlevel]\n"; while ((ch = getopt_long(argc, argv, getopt_shortopts, longopts, &longindex)) != -1) switch (ch) { + case 'V': + if (strlen(optarg) <= MAXHOSTNAMELEN) + vhostname = optarg; + else + warnx("Virtual host name (%s) is too long", + optarg); + break; case 'a': bindanyflag = 1; break; case 'n': set_nfsdcnt(atoi(optarg)); break; case 'h': bindhostc++; bindhost = realloc(bindhost,sizeof(char *)*bindhostc); if (bindhost == NULL) errx(1, "Out of memory"); bindhost[bindhostc-1] = strdup(optarg); if (bindhost[bindhostc-1] == NULL) errx(1, "Out of memory"); break; case 'r': reregister = 1; break; case 'd': unregister = 1; break; case 't': tcpflag = 1; break; case 'u': udpflag = 1; break; case 'e': /* now a no-op, since this is the default */ break; case 'p': /* Parse out the DS server host names and mount pts. */ parse_dsserver(optarg, &nfsdargs); break; case 'm': /* Set the mirror level for a pNFS service. */ i = atoi(optarg); if (i < 2 || i > NFSDEV_MAXMIRRORS) errx(1, "Mirror level out of range 2<-->%d", NFSDEV_MAXMIRRORS); nfsdargs.mirrorcnt = i; break; case 0: lopt = longopts[longindex].name; if (!strcmp(lopt, "minthreads")) { minthreads = atoi(optarg); } else if (!strcmp(lopt, "maxthreads")) { maxthreads = atoi(optarg); } break; default: case '?': usage(); } if (!tcpflag && !udpflag) udpflag = 1; argv += optind; argc -= optind; if (minthreads_set && maxthreads_set && minthreads > maxthreads) errx(EX_USAGE, "error: minthreads(%d) can't be greater than " "maxthreads(%d)", minthreads, maxthreads); /* * XXX * Backward compatibility, trailing number is the count of daemons. */ if (argc > 1) usage(); if (argc == 1) set_nfsdcnt(atoi(argv[0])); /* * Unless the "-o" option was specified, try and run "nfsd". * If "-o" was specified, try and run "nfsserver". 
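 * The check that follows is the usual FreeBSD module-autoload pattern: * modfind(2) probes for the "nfsd" module and, only if it is absent, * kldload(2) attempts to load it before the daemon gives up.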
*/ if (modfind("nfsd") < 0) { /* Not present in kernel, try loading it */ if (kldload("nfsd") < 0 || modfind("nfsd") < 0) errx(1, "NFS server is not available"); } ip6flag = 1; s = socket(AF_INET6, SOCK_DGRAM, IPPROTO_UDP); if (s == -1) { if (errno != EPROTONOSUPPORT && errno != EAFNOSUPPORT) err(1, "socket"); ip6flag = 0; } else if (getnetconfigent("udp6") == NULL || getnetconfigent("tcp6") == NULL) { ip6flag = 0; } if (s != -1) close(s); if (bindhostc == 0 || bindanyflag) { bindhostc++; bindhost = realloc(bindhost,sizeof(char *)*bindhostc); if (bindhost == NULL) errx(1, "Out of memory"); bindhost[bindhostc-1] = strdup("*"); if (bindhost[bindhostc-1] == NULL) errx(1, "Out of memory"); } nfs_minvers_size = sizeof(nfs_minvers); error = sysctlbyname("vfs.nfsd.server_min_nfsvers", &nfs_minvers, &nfs_minvers_size, NULL, 0); if (error != 0 || nfs_minvers < NFS_VER2 || nfs_minvers > NFS_VER4) { warnx("sysctlbyname(vfs.nfsd.server_min_nfsvers) failed," " defaulting to NFSv2"); nfs_minvers = NFS_VER2; } if (unregister) { unregistration(); exit (0); } if (reregister) { if (udpflag) { memset(&hints, 0, sizeof hints); hints.ai_flags = AI_PASSIVE; hints.ai_family = AF_INET; hints.ai_socktype = SOCK_DGRAM; hints.ai_protocol = IPPROTO_UDP; ecode = getaddrinfo(NULL, "nfs", &hints, &ai_udp); if (ecode != 0) err(1, "getaddrinfo udp: %s", gai_strerror(ecode)); nconf_udp = getnetconfigent("udp"); if (nconf_udp == NULL) err(1, "getnetconfigent udp failed"); nb_udp.buf = ai_udp->ai_addr; nb_udp.len = nb_udp.maxlen = ai_udp->ai_addrlen; if (nfs_minvers == NFS_VER2) if (!rpcb_set(NFS_PROGRAM, 2, nconf_udp, &nb_udp)) err(1, "rpcb_set udp failed"); if (nfs_minvers <= NFS_VER3) if (!rpcb_set(NFS_PROGRAM, 3, nconf_udp, &nb_udp)) err(1, "rpcb_set udp failed"); freeaddrinfo(ai_udp); } if (udpflag && ip6flag) { memset(&hints, 0, sizeof hints); hints.ai_flags = AI_PASSIVE; hints.ai_family = AF_INET6; hints.ai_socktype = SOCK_DGRAM; hints.ai_protocol = IPPROTO_UDP; ecode = getaddrinfo(NULL, "nfs", &hints, &ai_udp6); if (ecode != 0) err(1, "getaddrinfo udp6: %s", gai_strerror(ecode)); nconf_udp6 = getnetconfigent("udp6"); if (nconf_udp6 == NULL) err(1, "getnetconfigent udp6 failed"); nb_udp6.buf = ai_udp6->ai_addr; nb_udp6.len = nb_udp6.maxlen = ai_udp6->ai_addrlen; if (nfs_minvers == NFS_VER2) if (!rpcb_set(NFS_PROGRAM, 2, nconf_udp6, &nb_udp6)) err(1, "rpcb_set udp6 failed"); if (nfs_minvers <= NFS_VER3) if (!rpcb_set(NFS_PROGRAM, 3, nconf_udp6, &nb_udp6)) err(1, "rpcb_set udp6 failed"); freeaddrinfo(ai_udp6); } if (tcpflag) { memset(&hints, 0, sizeof hints); hints.ai_flags = AI_PASSIVE; hints.ai_family = AF_INET; hints.ai_socktype = SOCK_STREAM; hints.ai_protocol = IPPROTO_TCP; ecode = getaddrinfo(NULL, "nfs", &hints, &ai_tcp); if (ecode != 0) err(1, "getaddrinfo tcp: %s", gai_strerror(ecode)); nconf_tcp = getnetconfigent("tcp"); if (nconf_tcp == NULL) err(1, "getnetconfigent tcp failed"); nb_tcp.buf = ai_tcp->ai_addr; nb_tcp.len = nb_tcp.maxlen = ai_tcp->ai_addrlen; if (nfs_minvers == NFS_VER2) if (!rpcb_set(NFS_PROGRAM, 2, nconf_tcp, &nb_tcp)) err(1, "rpcb_set tcp failed"); if (nfs_minvers <= NFS_VER3) if (!rpcb_set(NFS_PROGRAM, 3, nconf_tcp, &nb_tcp)) err(1, "rpcb_set tcp failed"); freeaddrinfo(ai_tcp); } if (tcpflag && ip6flag) { memset(&hints, 0, sizeof hints); hints.ai_flags = AI_PASSIVE; hints.ai_family = AF_INET6; hints.ai_socktype = SOCK_STREAM; hints.ai_protocol = IPPROTO_TCP; ecode = getaddrinfo(NULL, "nfs", &hints, &ai_tcp6); if (ecode != 0) err(1, "getaddrinfo tcp6: %s", gai_strerror(ecode)); nconf_tcp6 = 
getnetconfigent("tcp6"); if (nconf_tcp6 == NULL) err(1, "getnetconfigent tcp6 failed"); nb_tcp6.buf = ai_tcp6->ai_addr; nb_tcp6.len = nb_tcp6.maxlen = ai_tcp6->ai_addrlen; if (nfs_minvers == NFS_VER2) if (!rpcb_set(NFS_PROGRAM, 2, nconf_tcp6, &nb_tcp6)) err(1, "rpcb_set tcp6 failed"); if (nfs_minvers <= NFS_VER3) if (!rpcb_set(NFS_PROGRAM, 3, nconf_tcp6, &nb_tcp6)) err(1, "rpcb_set tcp6 failed"); freeaddrinfo(ai_tcp6); } exit (0); } if (debug == 0) { daemon(0, 0); (void)signal(SIGHUP, SIG_IGN); (void)signal(SIGINT, SIG_IGN); /* * nfsd sits in the kernel most of the time. It needs * to ignore SIGTERM/SIGQUIT in order to stay alive as long * as possible during a shutdown, otherwise loopback * mounts will not be able to unmount. */ (void)signal(SIGTERM, SIG_IGN); (void)signal(SIGQUIT, SIG_IGN); } (void)signal(SIGSYS, nonfs); (void)signal(SIGCHLD, reapchild); (void)signal(SIGUSR2, backup_stable); openlog("nfsd", LOG_PID | (debug ? LOG_PERROR : 0), LOG_DAEMON); /* * For V4, we open the stablerestart file and call nfssvc() * to get it loaded. This is done before the daemons do the * regular nfssvc() call to service NFS requests. * (This way the file remains open until the last nfsd is killed * off.) * It and the backup copy will be created as empty files * the first time this nfsd is started and should never be * deleted/replaced if at all possible. It should live on a * local, non-volatile storage device that does not do hardware * level write-back caching. (See SCSI doc for more information * on how to prevent write-back caching on SCSI disks.) */ open_stable(&stablefd, &backupfd); if (stablefd < 0) { syslog(LOG_ERR, "Can't open %s: %m\n", NFSD_STABLERESTART); exit(1); } /* This system call will fail for old kernels, but that's ok. */ nfssvc(NFSSVC_BACKUPSTABLE, NULL); if (nfssvc(NFSSVC_STABLERESTART, (caddr_t)&stablefd) < 0) { syslog(LOG_ERR, "Can't read stable storage file: %m\n"); exit(1); } nfssvc_addsock = NFSSVC_NFSDADDSOCK; nfssvc_nfsd = NFSSVC_NFSDNFSD | NFSSVC_NEWSTRUCT; if (tcpflag) { /* * For TCP mode, we fork once to start the first * kernel nfsd thread. The kernel will add more * threads as needed. */ masterpid = getpid(); pid = fork(); if (pid == -1) { syslog(LOG_ERR, "fork: %m"); nfsd_exit(1); } if (pid) { children[0] = pid; } else { (void)signal(SIGUSR1, child_cleanup); setproctitle("server"); - start_server(0, &nfsdargs); + start_server(0, &nfsdargs, vhostname); } } (void)signal(SIGUSR1, cleanup); FD_ZERO(&sockbits); rpcbregcnt = 0; /* Set up the socket for udp and rpcb register it. 
	 */
	if (udpflag) {
		rpcbreg = 0;
		for (i = 0; i < bindhostc; i++) {
			memset(&hints, 0, sizeof hints);
			hints.ai_flags = AI_PASSIVE;
			hints.ai_family = AF_INET;
			hints.ai_socktype = SOCK_DGRAM;
			hints.ai_protocol = IPPROTO_UDP;
			if (setbindhost(&ai_udp, bindhost[i], hints) == 0) {
				rpcbreg = 1;
				rpcbregcnt++;
				if ((sock = socket(ai_udp->ai_family,
				    ai_udp->ai_socktype,
				    ai_udp->ai_protocol)) < 0) {
					syslog(LOG_ERR,
					    "can't create udp socket");
					nfsd_exit(1);
				}
				if (bind(sock, ai_udp->ai_addr,
				    ai_udp->ai_addrlen) < 0) {
					syslog(LOG_ERR,
					    "can't bind udp addr %s: %m",
					    bindhost[i]);
					nfsd_exit(1);
				}
				freeaddrinfo(ai_udp);
				addsockargs.sock = sock;
				addsockargs.name = NULL;
				addsockargs.namelen = 0;
				if (nfssvc(nfssvc_addsock, &addsockargs) < 0) {
					syslog(LOG_ERR, "can't Add UDP socket");
					nfsd_exit(1);
				}
				(void)close(sock);
			}
		}
		if (rpcbreg == 1) {
			memset(&hints, 0, sizeof hints);
			hints.ai_flags = AI_PASSIVE;
			hints.ai_family = AF_INET;
			hints.ai_socktype = SOCK_DGRAM;
			hints.ai_protocol = IPPROTO_UDP;
			ecode = getaddrinfo(NULL, "nfs", &hints, &ai_udp);
			if (ecode != 0) {
				syslog(LOG_ERR, "getaddrinfo udp: %s",
				    gai_strerror(ecode));
				nfsd_exit(1);
			}
			nconf_udp = getnetconfigent("udp");
			if (nconf_udp == NULL)
				err(1, "getnetconfigent udp failed");
			nb_udp.buf = ai_udp->ai_addr;
			nb_udp.len = nb_udp.maxlen = ai_udp->ai_addrlen;
			if (nfs_minvers == NFS_VER2)
				if (!rpcb_set(NFS_PROGRAM, 2, nconf_udp,
				    &nb_udp))
					err(1, "rpcb_set udp failed");
			if (nfs_minvers <= NFS_VER3)
				if (!rpcb_set(NFS_PROGRAM, 3, nconf_udp,
				    &nb_udp))
					err(1, "rpcb_set udp failed");
			freeaddrinfo(ai_udp);
		}
	}

	/* Set up the socket for udp6 and rpcb register it. */
	if (udpflag && ip6flag) {
		rpcbreg = 0;
		for (i = 0; i < bindhostc; i++) {
			memset(&hints, 0, sizeof hints);
			hints.ai_flags = AI_PASSIVE;
			hints.ai_family = AF_INET6;
			hints.ai_socktype = SOCK_DGRAM;
			hints.ai_protocol = IPPROTO_UDP;
			if (setbindhost(&ai_udp6, bindhost[i], hints) == 0) {
				rpcbreg = 1;
				rpcbregcnt++;
				if ((sock = socket(ai_udp6->ai_family,
				    ai_udp6->ai_socktype,
				    ai_udp6->ai_protocol)) < 0) {
					syslog(LOG_ERR,
					    "can't create udp6 socket");
					nfsd_exit(1);
				}
				if (setsockopt(sock, IPPROTO_IPV6,
				    IPV6_V6ONLY, &on, sizeof on) < 0) {
					syslog(LOG_ERR,
					    "can't set v6-only binding for "
					    "udp6 socket: %m");
					nfsd_exit(1);
				}
				if (bind(sock, ai_udp6->ai_addr,
				    ai_udp6->ai_addrlen) < 0) {
					syslog(LOG_ERR,
					    "can't bind udp6 addr %s: %m",
					    bindhost[i]);
					nfsd_exit(1);
				}
				freeaddrinfo(ai_udp6);
				addsockargs.sock = sock;
				addsockargs.name = NULL;
				addsockargs.namelen = 0;
				if (nfssvc(nfssvc_addsock, &addsockargs) < 0) {
					syslog(LOG_ERR,
					    "can't add UDP6 socket");
					nfsd_exit(1);
				}
				(void)close(sock);
			}
		}
		if (rpcbreg == 1) {
			memset(&hints, 0, sizeof hints);
			hints.ai_flags = AI_PASSIVE;
			hints.ai_family = AF_INET6;
			hints.ai_socktype = SOCK_DGRAM;
			hints.ai_protocol = IPPROTO_UDP;
			ecode = getaddrinfo(NULL, "nfs", &hints, &ai_udp6);
			if (ecode != 0) {
				syslog(LOG_ERR, "getaddrinfo udp6: %s",
				    gai_strerror(ecode));
				nfsd_exit(1);
			}
			nconf_udp6 = getnetconfigent("udp6");
			if (nconf_udp6 == NULL)
				err(1, "getnetconfigent udp6 failed");
			nb_udp6.buf = ai_udp6->ai_addr;
			nb_udp6.len = nb_udp6.maxlen = ai_udp6->ai_addrlen;
			if (nfs_minvers == NFS_VER2)
				if (!rpcb_set(NFS_PROGRAM, 2, nconf_udp6,
				    &nb_udp6))
					err(1, "rpcb_set udp6 failed");
			if (nfs_minvers <= NFS_VER3)
				if (!rpcb_set(NFS_PROGRAM, 3, nconf_udp6,
				    &nb_udp6))
					err(1, "rpcb_set udp6 failed");
			freeaddrinfo(ai_udp6);
		}
	}

	/* Set up the socket for tcp and rpcb register it.
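	 *
	 * As with the UDP case, separate IPv4 and IPv6 listeners are used;
	 * IPV6_V6ONLY is set on the v6 sockets so the two address families
	 * never share a socket through v4-mapped addresses.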
	 */
	if (tcpflag) {
		rpcbreg = 0;
		for (i = 0; i < bindhostc; i++) {
			memset(&hints, 0, sizeof hints);
			hints.ai_flags = AI_PASSIVE;
			hints.ai_family = AF_INET;
			hints.ai_socktype = SOCK_STREAM;
			hints.ai_protocol = IPPROTO_TCP;
			if (setbindhost(&ai_tcp, bindhost[i], hints) == 0) {
				rpcbreg = 1;
				rpcbregcnt++;
				if ((tcpsock = socket(AF_INET, SOCK_STREAM,
				    0)) < 0) {
					syslog(LOG_ERR,
					    "can't create tcp socket");
					nfsd_exit(1);
				}
				if (setsockopt(tcpsock, SOL_SOCKET,
				    SO_REUSEADDR,
				    (char *)&on, sizeof(on)) < 0)
					syslog(LOG_ERR,
					    "setsockopt SO_REUSEADDR: %m");
				if (bind(tcpsock, ai_tcp->ai_addr,
				    ai_tcp->ai_addrlen) < 0) {
					syslog(LOG_ERR,
					    "can't bind tcp addr %s: %m",
					    bindhost[i]);
					nfsd_exit(1);
				}
				if (listen(tcpsock, -1) < 0) {
					syslog(LOG_ERR, "listen failed");
					nfsd_exit(1);
				}
				freeaddrinfo(ai_tcp);
				FD_SET(tcpsock, &sockbits);
				maxsock = tcpsock;
				connect_type_cnt++;
			}
		}
		if (rpcbreg == 1) {
			memset(&hints, 0, sizeof hints);
			hints.ai_flags = AI_PASSIVE;
			hints.ai_family = AF_INET;
			hints.ai_socktype = SOCK_STREAM;
			hints.ai_protocol = IPPROTO_TCP;
			ecode = getaddrinfo(NULL, "nfs", &hints, &ai_tcp);
			if (ecode != 0) {
				syslog(LOG_ERR, "getaddrinfo tcp: %s",
				    gai_strerror(ecode));
				nfsd_exit(1);
			}
			nconf_tcp = getnetconfigent("tcp");
			if (nconf_tcp == NULL)
				err(1, "getnetconfigent tcp failed");
			nb_tcp.buf = ai_tcp->ai_addr;
			nb_tcp.len = nb_tcp.maxlen = ai_tcp->ai_addrlen;
			if (nfs_minvers == NFS_VER2)
				if (!rpcb_set(NFS_PROGRAM, 2, nconf_tcp,
				    &nb_tcp))
					err(1, "rpcb_set tcp failed");
			if (nfs_minvers <= NFS_VER3)
				if (!rpcb_set(NFS_PROGRAM, 3, nconf_tcp,
				    &nb_tcp))
					err(1, "rpcb_set tcp failed");
			freeaddrinfo(ai_tcp);
		}
	}

	/* Set up the socket for tcp6 and rpcb register it. */
	if (tcpflag && ip6flag) {
		rpcbreg = 0;
		for (i = 0; i < bindhostc; i++) {
			memset(&hints, 0, sizeof hints);
			hints.ai_flags = AI_PASSIVE;
			hints.ai_family = AF_INET6;
			hints.ai_socktype = SOCK_STREAM;
			hints.ai_protocol = IPPROTO_TCP;
			if (setbindhost(&ai_tcp6, bindhost[i], hints) == 0) {
				rpcbreg = 1;
				rpcbregcnt++;
				if ((tcp6sock = socket(ai_tcp6->ai_family,
				    ai_tcp6->ai_socktype,
				    ai_tcp6->ai_protocol)) < 0) {
					syslog(LOG_ERR,
					    "can't create tcp6 socket");
					nfsd_exit(1);
				}
				if (setsockopt(tcp6sock, SOL_SOCKET,
				    SO_REUSEADDR,
				    (char *)&on, sizeof(on)) < 0)
					syslog(LOG_ERR,
					    "setsockopt SO_REUSEADDR: %m");
				if (setsockopt(tcp6sock, IPPROTO_IPV6,
				    IPV6_V6ONLY, &on, sizeof on) < 0) {
					syslog(LOG_ERR,
					    "can't set v6-only binding for "
					    "tcp6 socket: %m");
					nfsd_exit(1);
				}
				if (bind(tcp6sock, ai_tcp6->ai_addr,
				    ai_tcp6->ai_addrlen) < 0) {
					syslog(LOG_ERR,
					    "can't bind tcp6 addr %s: %m",
					    bindhost[i]);
					nfsd_exit(1);
				}
				if (listen(tcp6sock, -1) < 0) {
					syslog(LOG_ERR, "listen failed");
					nfsd_exit(1);
				}
				freeaddrinfo(ai_tcp6);
				FD_SET(tcp6sock, &sockbits);
				if (maxsock < tcp6sock)
					maxsock = tcp6sock;
				connect_type_cnt++;
			}
		}
		if (rpcbreg == 1) {
			memset(&hints, 0, sizeof hints);
			hints.ai_flags = AI_PASSIVE;
			hints.ai_family = AF_INET6;
			hints.ai_socktype = SOCK_STREAM;
			hints.ai_protocol = IPPROTO_TCP;
			ecode = getaddrinfo(NULL, "nfs", &hints, &ai_tcp6);
			if (ecode != 0) {
				syslog(LOG_ERR, "getaddrinfo tcp6: %s",
				    gai_strerror(ecode));
				nfsd_exit(1);
			}
			nconf_tcp6 = getnetconfigent("tcp6");
			if (nconf_tcp6 == NULL)
				err(1, "getnetconfigent tcp6 failed");
			nb_tcp6.buf = ai_tcp6->ai_addr;
			nb_tcp6.len = nb_tcp6.maxlen = ai_tcp6->ai_addrlen;
			if (nfs_minvers == NFS_VER2)
				if (!rpcb_set(NFS_PROGRAM, 2, nconf_tcp6,
				    &nb_tcp6))
					err(1, "rpcb_set tcp6 failed");
			if (nfs_minvers <= NFS_VER3)
				if (!rpcb_set(NFS_PROGRAM, 3, nconf_tcp6,
				    &nb_tcp6))
					err(1, "rpcb_set tcp6 failed");
			freeaddrinfo(ai_tcp6);
		}
	}
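	/*
	 * At this point every transport and address has been handed to the
	 * kernel with nfssvc(NFSSVC_NFSDADDSOCK); all that is left is
	 * sanity checking and, when TCP is enabled, the accept() loop
	 * below that feeds connected sockets into the kernel.
	 */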
syslog(LOG_ERR, "rpcb_set() failed, nothing to do: %m"); nfsd_exit(1); } if (tcpflag && connect_type_cnt == 0) { syslog(LOG_ERR, "tcp connects == 0, nothing to do: %m"); nfsd_exit(1); } setproctitle("master"); /* * We always want a master to have a clean way to shut nfsd down * (with unregistration): if the master is killed, it unregisters and * kills all children. If we run for UDP only (and so do not have to * loop waiting for accept), we instead make the parent * a "server" too. start_server will not return. */ if (!tcpflag) - start_server(1, &nfsdargs); + start_server(1, &nfsdargs, vhostname); /* * Loop forever accepting connections and passing the sockets * into the kernel for the mounts. */ for (;;) { ready = sockbits; if (connect_type_cnt > 1) { if (select(maxsock + 1, &ready, NULL, NULL, NULL) < 1) { error = errno; if (error == EINTR) continue; syslog(LOG_ERR, "select failed: %m"); nfsd_exit(1); } } for (tcpsock = 0; tcpsock <= maxsock; tcpsock++) { if (FD_ISSET(tcpsock, &ready)) { len = sizeof(peer); if ((msgsock = accept(tcpsock, (struct sockaddr *)&peer, &len)) < 0) { error = errno; syslog(LOG_ERR, "accept failed: %m"); if (error == ECONNABORTED || error == EINTR) continue; nfsd_exit(1); } if (setsockopt(msgsock, SOL_SOCKET, SO_KEEPALIVE, (char *)&on, sizeof(on)) < 0) syslog(LOG_ERR, "setsockopt SO_KEEPALIVE: %m"); addsockargs.sock = msgsock; addsockargs.name = (caddr_t)&peer; addsockargs.namelen = len; nfssvc(nfssvc_addsock, &addsockargs); (void)close(msgsock); } } } } static int setbindhost(struct addrinfo **ai, const char *bindhost, struct addrinfo hints) { int ecode; u_int32_t host_addr[4]; /* IPv4 or IPv6 */ const char *hostptr; if (bindhost == NULL || strcmp("*", bindhost) == 0) hostptr = NULL; else hostptr = bindhost; if (hostptr != NULL) { switch (hints.ai_family) { case AF_INET: if (inet_pton(AF_INET, hostptr, host_addr) == 1) { hints.ai_flags = AI_NUMERICHOST; } else { if (inet_pton(AF_INET6, hostptr, host_addr) == 1) return (1); } break; case AF_INET6: if (inet_pton(AF_INET6, hostptr, host_addr) == 1) { hints.ai_flags = AI_NUMERICHOST; } else { if (inet_pton(AF_INET, hostptr, host_addr) == 1) return (1); } break; default: break; } } ecode = getaddrinfo(hostptr, "nfs", &hints, ai); if (ecode != 0) { syslog(LOG_ERR, "getaddrinfo %s: %s", bindhost, gai_strerror(ecode)); return (1); } return (0); } static void set_nfsdcnt(int proposed) { if (proposed < 1) { warnx("nfsd count too low %d; reset to %d", proposed, DEFNFSDCNT); nfsdcnt = DEFNFSDCNT; } else if (proposed > MAXNFSDCNT) { warnx("nfsd count too high %d; truncated to %d", proposed, MAXNFSDCNT); nfsdcnt = MAXNFSDCNT; } else nfsdcnt = proposed; nfsdcnt_set = 1; } static void usage(void) { (void)fprintf(stderr, "%s", getopt_usage); exit(1); } static void nonfs(__unused int signo) { syslog(LOG_ERR, "missing system call: NFS not available"); } static void reapchild(__unused int signo) { pid_t pid; int i; while ((pid = wait3(NULL, WNOHANG, NULL)) > 0) { for (i = 0; i < nfsdcnt; i++) if (pid == children[i]) children[i] = -1; } } static void unregistration(void) { if ((!rpcb_unset(NFS_PROGRAM, 2, NULL)) || (!rpcb_unset(NFS_PROGRAM, 3, NULL))) syslog(LOG_ERR, "rpcb_unset failed"); } static void killchildren(void) { int i; for (i = 0; i < nfsdcnt; i++) { if (children[i] > 0) kill(children[i], SIGKILL); } } /* * Cleanup master after SIGUSR1. */ static void cleanup(__unused int signo) { nfsd_exit(0); } /* * Cleanup child after SIGUSR1. 
 */
static void
child_cleanup(__unused int signo)
{
	exit(0);
}

static void
nfsd_exit(int status)
{
	killchildren();
	unregistration();
	exit(status);
}

static int
get_tuned_nfsdcount(void)
{
	int ncpu, error, tuned_nfsdcnt;
	size_t ncpu_size;

	ncpu_size = sizeof(ncpu);
	error = sysctlbyname("hw.ncpu", &ncpu, &ncpu_size, NULL, 0);
	if (error) {
		warnx("sysctlbyname(hw.ncpu) failed defaulting to %d"
		    " nfs servers", DEFNFSDCNT);
		tuned_nfsdcnt = DEFNFSDCNT;
	} else {
		tuned_nfsdcnt = ncpu * 8;
	}
	return tuned_nfsdcnt;
}

static void
-start_server(int master, struct nfsd_nfsd_args *nfsdargp)
+start_server(int master, struct nfsd_nfsd_args *nfsdargp, const char *vhost)
{
	char principal[MAXHOSTNAMELEN + 5];
	int status, error;
	char hostname[MAXHOSTNAMELEN + 1], *cp;
	struct addrinfo *aip, hints;

	status = 0;
-	gethostname(hostname, sizeof (hostname));
+	if (vhost == NULL)
+		gethostname(hostname, sizeof (hostname));
+	else
+		strlcpy(hostname, vhost, sizeof (hostname));
	snprintf(principal, sizeof (principal), "nfs@%s", hostname);
	if ((cp = strchr(hostname, '.')) == NULL ||
	    *(cp + 1) == '\0') {
		/* If not fully qualified, try getaddrinfo() */
		memset((void *)&hints, 0, sizeof (hints));
		hints.ai_flags = AI_CANONNAME;
		error = getaddrinfo(hostname, NULL, &hints, &aip);
		if (error == 0) {
			if (aip->ai_canonname != NULL &&
			    (cp = strchr(aip->ai_canonname, '.')) != NULL &&
			    *(cp + 1) != '\0')
				snprintf(principal, sizeof (principal),
				    "nfs@%s", aip->ai_canonname);
			freeaddrinfo(aip);
		}
	}
	nfsdargp->principal = principal;

	if (nfsdcnt_set)
		nfsdargp->minthreads = nfsdargp->maxthreads = nfsdcnt;
	else {
		nfsdargp->minthreads = minthreads_set ? minthreads :
		    get_tuned_nfsdcount();
		nfsdargp->maxthreads = maxthreads_set ? maxthreads :
		    nfsdargp->minthreads;
		if (nfsdargp->maxthreads < nfsdargp->minthreads)
			nfsdargp->maxthreads = nfsdargp->minthreads;
	}
	error = nfssvc(nfssvc_nfsd, nfsdargp);
	if (error < 0 && errno == EAUTH) {
		/*
		 * This indicates that it could not register the
		 * rpcsec_gss credentials, usually because the
		 * gssd daemon isn't running.
		 * (only the experimental server with nfsv4)
		 */
		syslog(LOG_ERR, "No gssd, using AUTH_SYS only");
		principal[0] = '\0';
		error = nfssvc(nfssvc_nfsd, nfsdargp);
	}
	if (error < 0) {
		if (errno == ENXIO) {
			syslog(LOG_ERR, "Bad -p option, cannot run");
			if (masterpid != 0 && master == 0)
				kill(masterpid, SIGUSR1);
		} else
			syslog(LOG_ERR, "nfssvc: %m");
		status = 1;
	}
	if (master)
		nfsd_exit(status);
	else
		exit(status);
}

/*
 * Open the stable restart file and return the file descriptor for it.
 */
static void
open_stable(int *stable_fdp, int *backup_fdp)
{
	int stable_fd, backup_fd = -1, ret;
	struct stat st, backup_st;

	/* Open and stat the stable restart file. */
	stable_fd = open(NFSD_STABLERESTART, O_RDWR, 0);
	if (stable_fd < 0)
		stable_fd = open(NFSD_STABLERESTART, O_RDWR | O_CREAT, 0600);
	if (stable_fd >= 0) {
		ret = fstat(stable_fd, &st);
		if (ret < 0) {
			close(stable_fd);
			stable_fd = -1;
		}
	}

	/* Open and stat the backup stable restart file. */
	if (stable_fd >= 0) {
		backup_fd = open(NFSD_STABLEBACKUP, O_RDWR, 0);
		if (backup_fd < 0)
			backup_fd = open(NFSD_STABLEBACKUP, O_RDWR | O_CREAT,
			    0600);
		if (backup_fd >= 0) {
			ret = fstat(backup_fd, &backup_st);
			if (ret < 0) {
				close(backup_fd);
				backup_fd = -1;
			}
		}
		if (backup_fd < 0) {
			close(stable_fd);
			stable_fd = -1;
		}
	}

	*stable_fdp = stable_fd;
	*backup_fdp = backup_fd;
	if (stable_fd < 0)
		return;

	/* Sync up the 2 files, as required.
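	 * Whichever copy is non-empty at startup wins: a non-empty
	 * stable-restart file is copied over the backup, while an empty
	 * one is refreshed from a non-empty backup.  After boot the kernel
	 * requests fresh backups via SIGUSR2 (see backup_stable() below).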
	 */
	if (st.st_size > 0)
		copy_stable(stable_fd, backup_fd);
	else if (backup_st.st_size > 0)
		copy_stable(backup_fd, stable_fd);
}

/*
 * Copy the stable restart file to the backup or vice versa.
 */
static void
copy_stable(int from_fd, int to_fd)
{
	int cnt, ret;
	static char buf[1024];

	ret = lseek(from_fd, (off_t)0, SEEK_SET);
	if (ret >= 0)
		ret = lseek(to_fd, (off_t)0, SEEK_SET);
	if (ret >= 0)
		ret = ftruncate(to_fd, (off_t)0);
	if (ret >= 0)
		do {
			cnt = read(from_fd, buf, 1024);
			if (cnt > 0)
				ret = write(to_fd, buf, cnt);
			else if (cnt < 0)
				ret = cnt;
		} while (cnt > 0 && ret >= 0);
	if (ret >= 0)
		ret = fsync(to_fd);
	if (ret < 0)
		syslog(LOG_ERR, "stable restart copy failure: %m");
}

/*
 * Back up the stable restart file when indicated by the kernel.
 */
static void
backup_stable(__unused int signo)
{

	if (stablefd >= 0)
		copy_stable(stablefd, backupfd);
}

/*
 * Parse the pNFS string and extract the DS servers and ports numbers.
 */
static void
parse_dsserver(const char *optionarg, struct nfsd_nfsd_args *nfsdargp)
{
	char *cp, *cp2, *dsaddr, *dshost, *dspath, *dsvol, nfsprt[9];
	char *mdspath, *mdsp, ip6[INET6_ADDRSTRLEN];
	const char *ad;
	int ecode;
	u_int adsiz, dsaddrcnt, dshostcnt, dspathcnt, hostsiz, pathsiz;
	u_int mdspathcnt;
	size_t dsaddrsiz, dshostsiz, dspathsiz, nfsprtsiz, mdspathsiz;
	struct addrinfo hints, *ai_tcp, *res;
	struct sockaddr_in sin;
	struct sockaddr_in6 sin6;

	cp = strdup(optionarg);
	if (cp == NULL)
		errx(1, "Out of memory");

	/* Now, do the host names. */
	dspathsiz = 1024;
	dspathcnt = 0;
	dspath = malloc(dspathsiz);
	if (dspath == NULL)
		errx(1, "Out of memory");
	dshostsiz = 1024;
	dshostcnt = 0;
	dshost = malloc(dshostsiz);
	if (dshost == NULL)
		errx(1, "Out of memory");
	dsaddrsiz = 1024;
	dsaddrcnt = 0;
	dsaddr = malloc(dsaddrsiz);
	if (dsaddr == NULL)
		errx(1, "Out of memory");
	mdspathsiz = 1024;
	mdspathcnt = 0;
	mdspath = malloc(mdspathsiz);
	if (mdspath == NULL)
		errx(1, "Out of memory");

	/* Put the NFS port# in "." form. */
	snprintf(nfsprt, 9, ".%d.%d", 2049 >> 8, 2049 & 0xff);
	nfsprtsiz = strlen(nfsprt);

	ai_tcp = NULL;
	/* Loop around for each DS server name. */
	do {
		cp2 = strchr(cp, ',');
		if (cp2 != NULL) {
			/* Not the last DS in the list. */
			*cp2++ = '\0';
			if (*cp2 == '\0')
				usage();
		}
		dsvol = strchr(cp, ':');
		if (dsvol == NULL || *(dsvol + 1) == '\0')
			usage();
		*dsvol++ = '\0';
		/* Optional path for MDS file system to be stored on DS. */
		mdsp = strchr(dsvol, '#');
		if (mdsp != NULL) {
			if (*(mdsp + 1) == '\0' || mdsp <= dsvol)
				usage();
			*mdsp++ = '\0';
		}
		/* Append this pathname to dspath. */
		pathsiz = strlen(dsvol);
		if (dspathcnt + pathsiz + 1 > dspathsiz) {
			dspathsiz *= 2;
			dspath = realloc(dspath, dspathsiz);
			if (dspath == NULL)
				errx(1, "Out of memory");
		}
		strcpy(&dspath[dspathcnt], dsvol);
		dspathcnt += pathsiz + 1;
		/* Append this pathname to mdspath. */
		if (mdsp != NULL)
			pathsiz = strlen(mdsp);
		else
			pathsiz = 0;
		if (mdspathcnt + pathsiz + 1 > mdspathsiz) {
			mdspathsiz *= 2;
			mdspath = realloc(mdspath, mdspathsiz);
			if (mdspath == NULL)
				errx(1, "Out of memory");
		}
		if (mdsp != NULL)
			strcpy(&mdspath[mdspathcnt], mdsp);
		else
			mdspath[mdspathcnt] = '\0';
		mdspathcnt += pathsiz + 1;

		if (ai_tcp != NULL)
			freeaddrinfo(ai_tcp);

		/* Get the fully qualified domain name and IP address.
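		 * The address is recorded in rpcbind "universal address"
		 * form, i.e. the printable host address with the port
		 * appended as two dot-separated octets; port 2049 is the
		 * ".8.1" suffix prepared in nfsprt above.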
		 */
		memset(&hints, 0, sizeof(hints));
		hints.ai_flags = AI_CANONNAME | AI_ADDRCONFIG;
		hints.ai_family = PF_UNSPEC;
		hints.ai_socktype = SOCK_STREAM;
		hints.ai_protocol = IPPROTO_TCP;
		ecode = getaddrinfo(cp, NULL, &hints, &ai_tcp);
		if (ecode != 0)
			err(1, "getaddrinfo pnfs: %s %s", cp,
			    gai_strerror(ecode));
		ad = NULL;
		for (res = ai_tcp; res != NULL; res = res->ai_next) {
			if (res->ai_addr->sa_family == AF_INET) {
				if (res->ai_addrlen < sizeof(sin))
					err(1, "getaddrinfo() returned "
					    "undersized IPv4 address");
				/*
				 * Mips cares about sockaddr_in alignment,
				 * so copy the address.
				 */
				memcpy(&sin, res->ai_addr, sizeof(sin));
				ad = inet_ntoa(sin.sin_addr);
				break;
			} else if (res->ai_family == AF_INET6) {
				if (res->ai_addrlen < sizeof(sin6))
					err(1, "getaddrinfo() returned "
					    "undersized IPv6 address");
				/*
				 * Mips cares about sockaddr_in6 alignment,
				 * so copy the address.
				 */
				memcpy(&sin6, res->ai_addr, sizeof(sin6));
				ad = inet_ntop(AF_INET6, &sin6.sin6_addr,
				    ip6, sizeof(ip6));
				/*
				 * XXX
				 * Since a link local address will only
				 * work if the client and DS are in the
				 * same scope zone, only use it if it is
				 * the only address.
				 */
				if (ad != NULL &&
				    !IN6_IS_ADDR_LINKLOCAL(&sin6.sin6_addr))
					break;
			}
		}
		if (ad == NULL)
			err(1, "No IP address for %s", cp);

		/* Append this address to dsaddr. */
		adsiz = strlen(ad);
		if (dsaddrcnt + adsiz + nfsprtsiz + 1 > dsaddrsiz) {
			dsaddrsiz *= 2;
			dsaddr = realloc(dsaddr, dsaddrsiz);
			if (dsaddr == NULL)
				errx(1, "Out of memory");
		}
		strcpy(&dsaddr[dsaddrcnt], ad);
		strcat(&dsaddr[dsaddrcnt], nfsprt);
		dsaddrcnt += adsiz + nfsprtsiz + 1;

		/* Append this hostname to dshost. */
		hostsiz = strlen(ai_tcp->ai_canonname);
		if (dshostcnt + hostsiz + 1 > dshostsiz) {
			dshostsiz *= 2;
			dshost = realloc(dshost, dshostsiz);
			if (dshost == NULL)
				errx(1, "Out of memory");
		}
		strcpy(&dshost[dshostcnt], ai_tcp->ai_canonname);
		dshostcnt += hostsiz + 1;

		cp = cp2;
	} while (cp != NULL);

	nfsdargp->addr = dsaddr;
	nfsdargp->addrlen = dsaddrcnt;
	nfsdargp->dnshost = dshost;
	nfsdargp->dnshostlen = dshostcnt;
	nfsdargp->dspath = dspath;
	nfsdargp->dspathlen = dspathcnt;
	nfsdargp->mdspath = mdspath;
	nfsdargp->mdspathlen = mdspathcnt;
	freeaddrinfo(ai_tcp);
}
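Two notes on the nfsd.c hunks above.  First, start_server() now takes the
virtual hostname as a third argument; when it is set (through the new
virtual-hostname command-line option added by this merge; the option letter
itself is not visible in these hunks), the Kerberos host-based principal is
derived from that name instead of gethostname(), so a failover pair can,
for example, both present "nfs@nfs.example.test" (an illustrative name).
Second, parse_dsserver() handles the pNFS "-p" option; judging from the
parsing code, its argument is a comma-separated list of host:path data
servers, each taking an optional "#mdspath" suffix, e.g.:

	nfsd -p nfsv4-data0:/data0,nfsv4-data1:/data1

with the hostnames here being illustrative only.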
Index: projects/import-googletest-1.8.1/usr.sbin/rpc.ypupdated/update.c
===================================================================
--- projects/import-googletest-1.8.1/usr.sbin/rpc.ypupdated/update.c	(revision 344270)
+++ projects/import-googletest-1.8.1/usr.sbin/rpc.ypupdated/update.c	(revision 344271)
@@ -1,329 +1,336 @@
/*
 * Sun RPC is a product of Sun Microsystems, Inc. and is provided for
 * unrestricted use provided that this legend is included on all tape
 * media and as a part of the software program in whole or part.  Users
 * may copy or modify Sun RPC without charge, but are not authorized
 * to license or distribute it to anyone else except as part of a product or
 * program developed by the user or with the express written consent of
 * Sun Microsystems, Inc.
 *
 * SUN RPC IS PROVIDED AS IS WITH NO WARRANTIES OF ANY KIND INCLUDING THE
 * WARRANTIES OF DESIGN, MERCHANTIBILITY AND FITNESS FOR A PARTICULAR
 * PURPOSE, OR ARISING FROM A COURSE OF DEALING, USAGE OR TRADE PRACTICE.
 *
 * Sun RPC is provided with no support and without any obligation on the
 * part of Sun Microsystems, Inc. to assist in its use, correction,
 * modification or enhancement.
 *
 * SUN MICROSYSTEMS, INC. SHALL HAVE NO LIABILITY WITH RESPECT TO THE
 * INFRINGEMENT OF COPYRIGHTS, TRADE SECRETS OR ANY PATENTS BY SUN RPC
 * OR ANY PART THEREOF.
 *
 * In no event will Sun Microsystems, Inc. be liable for any lost revenue
 * or profits or other special, indirect and consequential damages, even if
 * Sun has been advised of the possibility of such damages.
 *
 * Sun Microsystems, Inc.
 * 2550 Garcia Avenue
 * Mountain View, California  94043
 */

#ifndef lint
#if 0
static char sccsid[] = "@(#)update.c	1.2 91/03/11 Copyr 1986 Sun Micro";
#endif
static const char rcsid[] = "$FreeBSD$";
#endif /* not lint */

/*
 * Copyright (C) 1986, 1989, Sun Microsystems, Inc.
 */

/*
 * Administrative tool to add a new user to the publickey database
 */
#include <sys/types.h>
#include <sys/time.h>
#include <sys/resource.h>
#include <sys/wait.h>
#include <stdio.h>
#ifdef YP
#include <rpc/rpc.h>
#include <rpcsvc/yp_prot.h>
#include <rpcsvc/ypclnt.h>
#endif /* YP */
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include "ypupdated_extern.h"

#ifdef YP
#define	MAXMAPNAMELEN 256
#else
#define	YPOP_CHANGE 1			/* change, do not add */
#define	YPOP_INSERT 2			/* add, do not change */
#define	YPOP_DELETE 3			/* delete this entry */
#define	YPOP_STORE  4			/* add, or change */
#endif

#ifdef YP
static char SHELL[] = "/bin/sh";
static char YPDBPATH[]="/var/yp";	/* This is defined but not used! */
static char PKMAP[] = "publickey.byname";
static char UPDATEFILE[] = "updaters";
static char PKFILE[] = "/etc/publickey";
#endif	/* YP */

#ifdef YP
static int _openchild(char *, FILE **, FILE **);

/*
 * Determine if requester is allowed to update the given map,
 * and update it if so. Returns the yp status, which is zero
 * if there is no access violation.
 */
int
mapupdate(char *requester, char *mapname, u_int op, u_int keylen,
    char *key, u_int datalen, char *data)
{
	char updater[MAXMAPNAMELEN + 40];
	FILE *childargs;
	FILE *childrslt;
#ifdef WEXITSTATUS
	int status;
#else
	union wait status;
#endif
	pid_t pid;
	u_int yperrno;

#ifdef DEBUG
	printf("%s %s\n", key, data);
#endif
	(void)sprintf(updater, "make -s -f %s/%s %s", YPDBPATH, /* !!!
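	     * The updater command expands to roughly
	     * "make -s -f /var/yp/updaters mapname": the request fields
	     * are written to the child's stdin below and a YP status is
	     * read back from its stdout.  Note that the earlier comment
	     * calling YPDBPATH unused is stale, since it is interpolated
	     * right here.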
	     */
	    UPDATEFILE, mapname);
	pid = _openchild(updater, &childargs, &childrslt);
	if (pid < 0) {
		return (YPERR_YPERR);
	}

	/*
	 * Write to child
	 */
	(void)fprintf(childargs, "%s\n", requester);
	(void)fprintf(childargs, "%u\n", op);
	(void)fprintf(childargs, "%u\n", keylen);
	(void)fwrite(key, (int)keylen, 1, childargs);
	(void)fprintf(childargs, "\n");
	(void)fprintf(childargs, "%u\n", datalen);
	(void)fwrite(data, (int)datalen, 1, childargs);
	(void)fprintf(childargs, "\n");
	(void)fclose(childargs);

	/*
	 * Read from child
	 */
	(void)fscanf(childrslt, "%d", &yperrno);
	(void)fclose(childrslt);

	(void)wait(&status);
#ifdef WEXITSTATUS
	if (WEXITSTATUS(status) != 0)
#else
	if (status.w_retcode != 0)
#endif
		return (YPERR_YPERR);
	return (yperrno);
}

/*
 * returns pid, or -1 for failure
 */
static int
_openchild(char *command, FILE **fto, FILE **ffrom)
{
	int i;
	pid_t pid;
	int pdto[2];
	int pdfrom[2];
	char *com;
	struct rlimit rl;

	if (pipe(pdto) < 0) {
		goto error1;
	}
	if (pipe(pdfrom) < 0) {
		goto error2;
	}
	switch (pid = fork()) {
	case -1:
		goto error3;

	case 0:
		/*
		 * child: read from pdto[0], write into pdfrom[1]
		 */
		(void)close(0);
		(void)dup(pdto[0]);
		(void)close(1);
		(void)dup(pdfrom[1]);
		getrlimit(RLIMIT_NOFILE, &rl);
		for (i = rl.rlim_max - 1; i >= 3; i--) {
			(void)close(i);
		}
		com = malloc((unsigned)strlen(command) + 6);
		if (com == NULL) {
			_exit(~0);
		}
		(void)sprintf(com, "exec %s", command);
		execl(SHELL, basename(SHELL), "-c", com, (char *)NULL);
		_exit(~0);

	default:
		/*
		 * parent: write into pdto[1], read from pdfrom[0]
		 */
		*fto = fdopen(pdto[1], "w");
		(void)close(pdto[0]);
		*ffrom = fdopen(pdfrom[0], "r");
		(void)close(pdfrom[1]);
		break;
	}
	return (pid);

	/*
	 * error cleanup and return
	 */
error3:
	(void)close(pdfrom[0]);
	(void)close(pdfrom[1]);
error2:
	(void)close(pdto[0]);
	(void)close(pdto[1]);
error1:
	return (-1);
}

static char *
basename(char *path)
{
	char *p;

	p = strrchr(path, '/');
	if (p == NULL) {
		return (path);
	} else {
		return (p + 1);
	}
}

#else /* YP */

static int match(char *, char *);

/*
 * Determine if requester is allowed to update the given map,
 * and update it if so. Returns the status, which is zero
 * if there is no access violation. This function updates
 * the local file and then shuts up.
 */
int
localupdate(char *name, char *filename, u_int op, u_int keylen __unused,
    char *key, u_int datalen __unused, char *data)
{
	char line[256];
	FILE *rf;
	FILE *wf;
	char *tmpname;
	int err;

	/*
	 * Check permission
	 */
	if (strcmp(name, key) != 0) {
		return (ERR_ACCESS);
	}
	if (strcmp(name, "nobody") == 0) {
		/*
		 * Can't change "nobody"s key.
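		 * ("nobody" is the mapping used for unauthenticated
		 * requests, so presumably allowing its entry to change
		 * would let any client rewrite it.)
		 *
		 * The rewrite loop below implements the YPOP_* semantics:
		 * INSERT fails if the key already exists, STORE and CHANGE
		 * replace a matching line, DELETE drops it; when no line
		 * matches, INSERT and STORE append the new entry while
		 * CHANGE and DELETE report ERR_KEY.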
		 */
		return (ERR_ACCESS);
	}

	/*
	 * Open files
	 */
	tmpname = malloc(strlen(filename) + 4);
	if (tmpname == NULL) {
		return (ERR_MALLOC);
	}
	sprintf(tmpname, "%s.tmp", filename);
	rf = fopen(filename, "r");
	if (rf == NULL) {
-		return (ERR_READ);
+		err = ERR_READ;
+		goto cleanup;
	}
	wf = fopen(tmpname, "w");
	if (wf == NULL) {
-		return (ERR_WRITE);
+		fclose(rf);
+		err = ERR_WRITE;
+		goto cleanup;
	}
	err = -1;
	while (fgets(line, sizeof (line), rf)) {
		if (err < 0 && match(line, name)) {
			switch (op) {
			case YPOP_INSERT:
				err = ERR_KEY;
				break;
			case YPOP_STORE:
			case YPOP_CHANGE:
				fprintf(wf, "%s %s\n", key, data);
				err = 0;
				break;
			case YPOP_DELETE:
				/* do nothing */
				err = 0;
				break;
			}
		} else {
			fputs(line, wf);
		}
	}
	if (err < 0) {
		switch (op) {
		case YPOP_CHANGE:
		case YPOP_DELETE:
			err = ERR_KEY;
			break;
		case YPOP_INSERT:
		case YPOP_STORE:
			err = 0;
			fprintf(wf, "%s %s\n", key, data);
			break;
		}
	}
	fclose(wf);
	fclose(rf);
	if (err == 0) {
		if (rename(tmpname, filename) < 0) {
-			return (ERR_DBASE);
+			err = ERR_DBASE;
+			goto cleanup;
		}
	} else {
		if (unlink(tmpname) < 0) {
-			return (ERR_DBASE);
+			err = ERR_DBASE;
+			goto cleanup;
		}
	}
+cleanup:
+	free(tmpname);
	return (err);
}

static int
match(char *line, char *name)
{
	int len;

	len = strlen(name);
	return (strncmp(line, name, len) == 0 &&
	    (line[len] == ' ' || line[len] == '\t'));
}
#endif /* !YP */

Index: projects/import-googletest-1.8.1/usr.sbin/wlandebug/Makefile
===================================================================
--- projects/import-googletest-1.8.1/usr.sbin/wlandebug/Makefile	(revision 344270)
+++ projects/import-googletest-1.8.1/usr.sbin/wlandebug/Makefile	(revision 344271)
@@ -1,10 +1,11 @@
# $FreeBSD$

PROG=	wlandebug
MAN=	wlandebug.8
+MK_PIE:=	no

LIBADD+=	ifconfig

WARNS?=	2

.include <bsd.prog.mk>

Index: projects/import-googletest-1.8.1
===================================================================
--- projects/import-googletest-1.8.1	(revision 344270)
+++ projects/import-googletest-1.8.1	(revision 344271)

Property changes on: projects/import-googletest-1.8.1
___________________________________________________________________
Modified: svn:mergeinfo
## -0,0 +0,1 ##
   Merged /head:r344081-344270